diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfhyp" "b/data_all_eng_slimpj/shuffled/split2/finalzzfhyp" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfhyp" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\nThe earth rotates on its axis every 24 hours producing the cycle of day and night. Correspondingly, almost all species exhibit changes in their behaviour between day and night \\citep{bell2005circadian}. These daily rhythms are not simply a response to the changes in the physical environment but, instead, arise from a timekeeping system within the organism \\citep{vitaterna2001overview}. This timekeeping system, or biological `clock', allows the organism to anticipate and prepare for the changes in the physical environment that are associated with day and night. Indeed, most organisms do not simply respond to sunrise but, rather, anticipate the dawn and adjust their biology accordingly. For example, when deprived of external time cues, many of these diurnal rhythms persist, indicating they are maintained by a biological circadian clock within the organism \\citep{mcclung2006plant}. The mechanisms underlying the biological timekeeping systems and the potential consequences of their failure are among the issues addressed by researchers in the field of circadian biology.\n\n\\subsection{The History of Clock Research in Plants}\n\\label{sec:history}\n\nCircadian rhythms are the subset of biological rhythms with period, defined as the time to complete one cycle, of approximately 24 hours (see Figure \\ref{fig:s1} for a visual interpretation of this terminology). A second defining attribute of circadian rhythms is that they are `endogenously generated and self-sustaining' \\citep{mcclung2006plant}. In particular, the period remains approximately 24 hours under constant environmental conditions, such as constant light (or dark) and constant temperature (i.e. when deprived of any external time cues).\n\n\\cite{millar1991circadian} identified a number of genes that were under circadian control in a particular plant, the Arabidopsis thaliana. However, these initial Arabidopsis experiments were extremely labour intensive, as plants needed to be individually processed by a researcher at regular intervals over a sustained period of time. One solution was a firefly luciferase system. This method fuses the gene of interest to a luciferase reporter gene. Thus, when the gene is expressed, so is luciferase and the plant produces light. A machine then measures bioluminescence (light emitted from the plant) at regular intervals to obtain a measure of the amount of the gene expressed at a given time \\citep{southern2005circadian}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{S1.pdf}\n\\caption{The defined rhythmic parameters: periodicity, phase, amplitude and clock precision (taken from \\cite{hanano2006multiple}).}\n\\label{fig:s1}\n\\end{figure}\n\n \\subsection{Current Methods}\n\n As discussed in Section \\ref{sec:history}, in circadian biology, a key aspect of data analysis is to estimate the period of the time series. A current approach in the circadian community would be to use the computer programme BRASS to obtain a period estimate for each time series using Fourier analysis. (See \\cite{moore2014online} for a complete description of these methods.) These methods require an underlying assumption that the data is stationary. 
However, nonstationary behaviour is common in biological systems \citep{moore2014online} and our particular dataset displays such behaviour. Therefore, we propose methods that are capable of detecting \textit{changes of period over time}.\n\n\subsection{Aim and Structure of the Paper}\n\nIn this project we wish to understand how a plant's clock is affected when exposed to different concentrations of lithium. Furthermore, we would also like to know which concentrations produce similar effects and then characterise these effects. These questions have important implications for understanding the\nmechanism of the plant's circadian clock and also environmental implications associated with soil pollution. Therefore, we develop methods to group and characterise our circadian dataset which comprises gene expression levels measured at regular time intervals.\n\nIn this paper, we demonstrate that the time series in our circadian dataset are nonstationary. Therefore, we argue that the current methods used by the circadian community to analyse such data (which assume stationarity) are inappropriate. Instead, we develop clustering methods using a wavelet transform. Wavelets are chosen as they are ideally suited to identifying discriminant local time and scale features. Furthermore, we propose treating the observed time series as realisations of locally stationary wavelet (LSW) processes. This allows us to define and rigorously estimate the evolutionary wavelet spectrum. We can then compare, in a quantitative way, the time-frequency patterns of the time series using a functional principal components analysis. Our approach uses a clustering algorithm to group the data according to their time-frequency patterns. Our LSW-based approach combines the use of wavelets with (rigorous stochastic) nonstationary time series modelling and achieves superior results to current methods of clustering nonstationary time series. Furthermore, our method can also be used to produce visualisations that characterise the features associated with a particular cluster.\n\nThis article is organized as follows. In Section \ref{sec:data}, we outline the circadian dataset and the pre-processing that we apply to these data. We also perform a hypothesis test of (second-order) stationarity. In Section \ref{sec:model} we develop a locally stationary wavelet clustering method and describe its implementation. The results of clustering the circadian dataset using the proposed methodology are presented in Section \ref{sec:app}. We also examine the results of our analysis in the context of several relevant biological questions in Section \ref{sec:app} before concluding with a brief discussion in Section \ref{sec:concs}.\n\n\section{Data and Preliminary Processing}\label{sec:data}\n\n\nIn this section we outline the dataset of circadian plant rhythms and the pre-processing that we apply to these data. We also report the results of the classical analysis a circadian biologist would use to analyse such data. Finally, we describe the features of the data which motivate our proposed methods. In particular, we perform a hypothesis test for second-order stationarity.\n\nTo obtain this data set, the lab uses a firefly luciferase system. This method fuses the gene of interest, in this experiment CCR2, to a luciferase reporter gene. Thus, when the gene is expressed, so is luciferase and the plant (in this experiment, Arabidopsis thaliana) emits light. 
The researcher then measures bioluminescence (light emitted from the plant) to obtain a measure of the amount of the gene expressed \\citep{southern2005circadian}. The luminescence rhythms were monitored using a luminescence scintillation counter, TOPCount NXT (Perkin Elmer). This method allows for a quantitative, real-time gene expression measure in living plants. (See, for example, \\citep{perea2015modulation} for a complete description of this type of experiment.) A plot of the average expression at each time point for each of the groups is shown in Figure \\ref{fig:avplot} (on page \\pageref{fig:avplot}). Note that time is measured in hours relative to zeitgeber time, which is the time of the last external temporal cue: the dawn signal of lights-on.\n\nPrior to the recordings in Figure \\ref{fig:avplot}, the plants are grown under 12 hour light\/12 hour dark cycles to simulate a `normal' day. The plants are then transferred to the TOPCount machine. In this experiment, measurements were taken at equal intervals of approximately 1 hour. Measurement began after the transition to 12 hours of darkness on a given day. After this, the plant was exposed to constant light. In Figure \\ref{fig:avplot}, we can see average luminescence during a \"normal\" day of twelve hours of light followed by twelve hours of darkness, before exposure to constant light (for approximately 4 days). This is represented by the shaded bars below the graph (the plants are under constant light throughout the experiment, other than between 12:00 and 24:00 hours when the plants are in darkness. This is indicated by the black bar. The grey bars indicate that, though the plants were in constant light, they would be in darkness during a `normal' 12 hour light\/12 hour dark cycle.)\n\nOur data set consists of 96 time series recorded at 106 time points. The 96 time series are known to be derived from adding 8 different concentrations of lithium to the plants. In particular, a control group is grown in Hoagland media \\citep{hoagland1950water} which contains essential nutrients required for plant growth and is not exposed to any additional levels of lithium. The other seven groups are also grown in the Hoagland media with varying additional concentrations of lithium. In particular, 0.5mM, 1.5mM, 7.5mM, 10mM, 15mM, 22.5mM and 30mM of lithium chloride (LiCl) were respectively added to the Hoagland media to obtain the 7 additional groups. Thus, the data consists of 8 groups of 12 plants with plants from the same group having been treated in the same way.\n\n \\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{avplot2.pdf}\n\\caption{A plot of the average expression at each time point for each of the 8 groups of the circadian dataset. Time is measured in hours relative to zeitgeber time, which is the time of the last external temporal cue: the dawn signal of lights-on.}\n\\label{fig:avplot}\n\\end{figure}\n\nOn examining Figure \\ref{fig:avplot}, we notice a break in the data at time 30:03 for approximately 7 hours. One of the disadvantages of using the system described above to measure gene expression is the propensity of the recording equipment to break down resulting in gaps in the data. For the purposes of this analysis, we will truncate our dataset and examine only the observations after this point. However, we will discuss alternative methods to overcome this problem in Section \\ref{sec:concs} as avenues of further work. 
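\n\nIn practice this truncation, together with the mean-centring used later in the analysis, is straightforward to carry out. The following minimal R sketch illustrates the idea; the matrix \verb|lumin| (one plant per row) and the index \verb|first.obs| of the first post-break observation are illustrative assumptions rather than part of the recorded dataset.\n\n\begin{verbatim}\n# lumin: matrix of luminescence readings, one row per plant (illustrative)\n# first.obs: index of the first observation after the recording break\npost.break <- lumin[, first.obs:ncol(lumin)]\n\n# keep the largest dyadic number of observations\n# (required by the modelling framework used later)\nn.keep <- 2^floor(log2(ncol(post.break)))\nlumin.trunc <- post.break[, 1:n.keep]\n\n# remove the mean of each series so that every plant has a zero-mean record\nlumin.trunc <- sweep(lumin.trunc, 1, rowMeans(lumin.trunc))\n\end{verbatim}\n\n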
Under the LSW modelling framework, we require our data to be of length $N=2^J$ (see Section \ref{sec:model}). Each plant was observed at 106 time points; therefore, we chose to truncate the data and use the 64 observations after the break (from time 37:12 to 103:46 in Figure \ref{fig:avplot}). We use these particular observations as they display changes to the `rhythmic parameters' outlined in Section \ref{sec:data description} which are of interest to the circadian biologist, such as period, phase, amplitude and precision. The truncated dataset of $64$ observations can be seen in Figure \ref{fig:Circgroups}.\n\n\begin{figure}\n\centering\n\includegraphics[width=\linewidth]{circgroupspdf.pdf}\n\caption{Top left: Each realisation from the control group (in grey) along with the group average (in blue). Other panels: Each realisation from each group (in grey) along with the group average (in red) and the control group average (in blue) for our truncated circadian dataset. (Each time series has been normalised to have mean zero.)}\n\label{fig:Circgroups}\n\end{figure}\n\n\subsection{Data Description}\n\label{sec:data description}\nOn examining Figures \ref{fig:avplot} and \ref{fig:Circgroups}, there is visual evidence to suggest that adding lithium produces an effect on the circadian clock of this plant. In particular, the circadian biologist is interested in how certain `rhythmic parameters' of the clock are affected. These defined parameters are periodicity, phase, amplitude and clock precision. Figure \ref{fig:s1} (taken from \cite{hanano2006multiple}) provides a graphical description of these parameters.\n\nIn particular, there seems to be a pronounced effect after adding 7.5mM (or more) of LiCl whereas the first 3 groups (the control and concentrations 0.5mM and 1.5mM represented by the dark blue, pink and yellow lines in Figure \ref{fig:avplot} or the first three panels in Figure \ref{fig:Circgroups}) are relatively indistinguishable. All concentrations up to and including 15mM seem to display cyclic behaviour whereas concentrations 22.5mM and 30mM do not. In conclusion, these plots indicate that adding increasing concentrations of lithium lengthens the period and also produces a dampening effect.\n\n\subsection{BRASS Results}\nIn the circadian community, analysis of this data would typically be performed by the Microsoft Excel macro BRASS (see \cite{moore2014online} for a detailed description of this software package and its advantages and disadvantages). Table \ref{tab:BRASS} provides a summary of the output of the analysis of the circadian dataset in BRASS. 
In particular, it shows the mean period estimate for each of the 8 groups and the number of plants that could be analysed by BRASS.\n\n\begin{table}\n\begin{tabular}{|p{30mm}|c|c|c|c|c|c|c|c|}\n\hline & Hoagland & 0.5mM & 1.5mM & 7.5mM & 10mM & 15mM & 22.5mM & 30mM \\\\\n\hline Number of plants & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 12 \\\\\n\hline Number of \newline \"rhythmic\" plants & 11 & 12 & 12 & 12 & 12 & 11 & 3 & 3 \\\\\n\hline Period estimate \newline (in hours) & 26.85 & 26.85 & 26.87 & 30.96 & 31.55 & 27.03 & 18.00 & 22.65 \\\\\n\hline\n\end{tabular}\n\caption{A summary of the output of the analysis of the circadian dataset in BRASS.}\n\label{tab:BRASS}\n\end{table}\n\n\medskip\n\nThe results of the analysis in BRASS (in Table \ref{tab:BRASS}) show that not all the data is used to produce the period estimate reported by BRASS (the number of `rhythmic' plants is the number of time series for which BRASS was able to return a period estimate). For example, in the 22.5mM and 30mM groups, BRASS was only able to estimate a period for 6 of the 24 plants. This shows that BRASS is not able to analyse all the data produced by this experiment and indicates that this dataset is not suitably modelled using Fourier methods.\n\n\subsection{Test of Stationarity}\n\nAs discussed in the previous section, we have reason to believe that this data is nonstationary. In Figure \ref{fig:Circgroups}, the period and amplitude of individual groups seem to change with time. These features indicate that the data is nonstationary and that Fourier analysis is not appropriate for this kind of data. In this section, we investigate whether our truncated dataset is (second-order) stationary by performing a hypothesis test.\n\nThe test we will use is based on the methods in \cite{priestley1969test}, the Priestley-Subba Rao (PSR) Test. The PSR test is implemented in the \verb|fractal| package in R available from the CRAN package repository and the results can be found in Table \ref{tab:stattest}. This shows, for each group, the number of (truncated) time series for which there was not enough evidence to reject the null hypothesis of stationarity (at the 1\% significance level). This analysis indicates that very few (approximately 23\%) of the time series did not provide enough evidence to reject the null hypothesis of stationarity. This suggests that the data are nonstationary and the current Fourier analysis methods are not suitable.\n\n\begin{table}\n\begin{tabular}{|p{30mm}|c|c|c|c|c|c|c|c|}\n\hline & Hoagland & 0.5mM & 1.5mM & 7.5mM & 10mM & 15mM & 22.5mM & 30mM \\\\\n\hline Number of plants & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 12 \\\\\n\hline Number of \newline \"stationary\" plants & 4 & 6 & 6 & 2 & 1 & 1 & 0 & 2 \\\\\n\hline\n\end{tabular}\n\caption{Results for test of stationarity.}\n\label{tab:stattest}\n\end{table}\n\n\section{Proposed Clustering Method}\label{sec:model}\n\nIn their review of period estimation methods for circadian data, \cite{moore2014online} recommend wavelet-based methods for nonstationary time series to extract changes of period over time. In this work, we combine the use of wavelets with (rigorous stochastic) nonstationary time series modelling. 
Our approach is to cluster the data according to their time-frequency patterns by assuming the locally stationary wavelet model.\n\n\subsection{Modelling Nonstationary Time Series}\n\n Many approaches to modelling nonstationary time series have been developed from the spectral representation of stationary time series. This states that if a time series $\{X_t\}_{t \in \mathbb{Z}}$ is a \emph{stationary} stochastic process, then it admits the following Cram\'er representation \citep{priestley1983spectral}:\n\n \begin{equation}\n \label{SRT}\n X_t = \int_{-\pi}^{\pi} A(\omega)\exp(i\omega t) d\xi(\omega),\n \end{equation}\n\n where $A(\omega)$ is the amplitude of the process and $d\xi(\omega)$ is an orthonormal increments process.\n\n \subsubsection{Locally stationary Fourier model}\n\n In the representation of a stationary process in \eqref{SRT}, we note that the amplitude $A(\omega)$ does not depend on time (i.e. the frequency behaviour is the same across time). For many real time series, such as our circadian plant data, this is not realistic and the model \eqref{SRT} is inadequate. Therefore, we would prefer a model where the frequency behaviour can vary with time. One way of introducing time dependence into a model is to replace the $A(\omega)$ in \eqref{SRT} with a time dependent form. Accordingly, \cite{priestley1965evolutionary} introduced a time-frequency model which is analogous to \eqref{SRT} with the amplitude replaced by $A(\omega, t)$. \cite{dahlhaus1997fitting} extended this idea, introducing the locally stationary modelling philosophy and developing the locally stationary Fourier (LSF) model. In this setting, the time-dependent transfer function is defined on \"rescaled time\" to enable asymptotic considerations.\n\n \subsubsection{Locally stationary wavelet model and assumptions}\n\n An alternative approach to the LSF model is to replace the Fourier functions in \eqref{SRT} by \textit{wavelets}. The advantage of wavelets is that they are localised in both time and frequency and are therefore well-suited to modelling second-order characteristics that evolve over time. (We refer the reader to \cite{daubechies1992ten} and \cite{nason2008wavelet} for an introduction to wavelets and their application in statistics.) \cite{nason2000wavelet} developed an alternative method for modelling nonstationary time series by replacing the Fourier functions in \eqref{SRT} by a set of discrete non-decimated wavelets \citep{nason1995stationary}. They proposed the Locally Stationary Wavelet (LSW) time series model.\n\n We will now define the LSW model for locally stationary time series as in \cite{nason2000wavelet}. We assume that the reader is familiar with the Discrete Wavelet Transform (for further information see \cite{mallat1989multiresolution} and \cite{mallat1989theory} or \cite{nason2008wavelet} for a complete introduction). We also assume that the reader is familiar with the Non-decimated Wavelet Transform (for a detailed description see \cite{nason1995stationary} and \cite{nason2000wavelet}). 
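\n\nFor readers who wish to experiment, both the decimated and the non-decimated transforms are available in the \verb|wavethresh| R package (which underlies the \verb|locits| package used later in this paper). The short sketch below is for illustration only; the series \verb|x| is an arbitrary placeholder of dyadic length.\n\n\begin{verbatim}\nlibrary(wavethresh)\n\nx <- rnorm(64)   # any series of dyadic length\n\n# standard (decimated) discrete wavelet transform, Haar wavelet\ndwt <- wd(x, filter.number = 1, family = \"DaubExPhase\")\n\n# non-decimated (\"stationary\") wavelet transform used by the LSW model\nndwt <- wd(x, filter.number = 1, family = \"DaubExPhase\", type = \"station\")\n\n# non-decimated coefficients at the finest scale\nd.fine <- accessD(ndwt, level = 5)\n\end{verbatim}\n\n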
We now define the LSW model as follows.\n\n The locally stationary wavelet (LSW) process $\{X_{t,T}\}$ is defined to be a sequence of (doubly-indexed) stochastic processes having the following representation in the mean-square sense:\n\n \begin{equation}\n \label{LSW rep}\n X_{t,T} = \sum_{j = 1}^{\infty} \sum_k w_{j, k;T} \psi_{j,k}(t)\xi_{j,k},\n \end{equation}\n where $\{\xi_{j,k}\}$ is a random orthonormal increment sequence, $\{\psi_{j, k}(t) = \psi_{j, t-k} \}_{j,k}$ is a set of discrete non-decimated wavelets and $\{ w_{j, k;T} \}$ is a set of amplitudes.\n\n This LSW process formulation also requires three additional assumptions \citep{nason2000wavelet}:\n\n \begin{enumerate}\n \item $\mathbb{E}(\xi_{j,k}) = 0$, which ensures $X_{t,T}$ is a zero mean process. In practice, if the time series has a non-zero mean, we estimate it and remove it.\n\n \item $\text{cov}(\xi_{j,k}, \xi_{l,m}) = \delta_{j,l}\delta_{k,m}$, where $\delta_{j,l}$ is the Kronecker delta. This ensures the orthonormal increment sequence is uncorrelated.\n\n \item $\sup_k \lvert w_{j,k;T} - W_j(k\/T) \rvert \leq C_j \/ T,$\n\n where $W_j(z), z \in (0,1)$ is a function (with various smoothness constraints) and $\{ C_j \}_j$ is a set of constants with $\sum_{j=1}^{\infty} C_j < \infty$. This condition controls the speed of evolution of $w_{j,k;T}$ and thus permits estimation \citep{nason2008wavelet}.\n \end{enumerate}\n\n For an extensive review of the locally stationary wavelet model and its application in nonstationary time series analysis, see for example \cite{nason2000wavelet} or \cite{nason2008wavelet}.\n\n In the stationary time series setting, recall the \textit{spectrum} of a time series, $f(\omega) = |A(\omega)|^2$, where $A(\omega)$ is the amplitude as in equation \eqref{SRT}. The spectrum quantifies the contribution to the variance of a stationary process over a \textit{frequency}, $\omega$. An analogous quantity can be defined for the LSW model \citep{nason2000wavelet}, the evolutionary wavelet spectrum (EWS). The EWS quantifies how the power in an LSW process is distributed across time and scale.\n\n The evolutionary wavelet spectrum (EWS) of the time series $\{X_{t,T}\}_{t=0}^{T-1}$ is defined as:\n\n \begin{equation}\n \label{EWS eq}\n S_j(z) = |W_j (z)|^2,\n \end{equation}\n for $j \in \mathbb{N}, z \in (0,1).$\n\n The quantity $z$, known as \emph{rescaled time}, $z = k\/T, z \in (0,1)$, was introduced by \cite{dahlhaus1997fitting} and is used to enable asymptotic considerations (see \cite{fryzlewicz2009consistent} and, for a more detailed discussion, \cite{dahlhaus1996kullback}).\n\n The (raw) wavelet periodogram is given by:\n \begin{equation}\n \label{periodogram}\n I_{k,T}^j = \lvert d_{j,k;T} \rvert^2,\n \end{equation}\n where\n \begin{equation}\n \label{NDWTds}\n d_{j,k;T} = \sum_{t=1}^T X_{t, T} \psi_{j,k}(t),\n \end{equation}\n are the \textit{empirical non-decimated wavelet coefficients}. The raw wavelet periodogram is a biased estimator of the EWS. 
However, an unbiased \\textit{corrected} periodogram may be obtained by premultiplying \\citep{nason2000wavelet} the raw wavelet periodogram by the inverse of the autocorrelation wavelet inner product matrix, $A_J$ with\n\n \\[\n A_{j,l} = <\\Psi_j, \\Psi_l > = \\sum_\\tau \\Psi_j(\\tau)\\Psi_l(\\tau),\n \\]\n\n and where $\\Psi_j (\\tau) = \\sum_k \\psi_{j,k}(0) \\psi_{j,k} (\\tau)$ for all $ j \\in \\mathbb{N}$ and $\\tau \\in \\mathbb{Z}$ is the \\emph{autocorrelation wavelet} (see \\cite{nason2008wavelet} for more information).\n\n Thus, the \\textit{corrected} wavelet periodogram is:\n \\begin{equation}\n \\label{Corrected periodogram}\n L_{k, T} = A_J^{-1} I_k,\n \\end{equation}\n for $k = 0, \\dots, T - 1$.\n\n As in the stationary setting, the wavelet periodogram is not a consistent estimator of the EWS \\citep{nason2008wavelet}. One method to overcome this problem is to smooth the wavelet periodogram as a function of (rescaled) time for each scale, $j$. \\cite{nason2000wavelet} recommend smoothing the wavelet periodogram first and then applying the correction as in \\eqref{Corrected periodogram} as this is more analytically tractable. In particular, we could smooth the periodogram by log transform. The idea behind using a log transform is that (for Gaussian data) the distribution of the raw wavelet periodogram is approximately $\\chi^2$ and the use of the log stabilises the variance and draws the distribution towards normality, thereby permitting universal thresholding, which is designed to work in this situation. Alternative approaches using wavelet-Fisz transforms for smoothing using variance stabilisation techniques have been proposed by \\cite{fryzlewicz2006haar}. We will denote the corrected and smoothed periodogram of time series $\\{X_{t,T}\\}_{t=0}^{T-1}$ as $\\{\\hat{S}_{j}(z)\\}_{j}$. For a rigorous discussion of the estimation of the EWS see \\cite{nason2000wavelet}.\n\n\n Throughout this article, we work with \\textit{normal} LSW processes (i.e., the $\\xi_{j,k}$ in \\eqref{LSW rep} are distributed $N(0,1)$). This is for mathematical convenience and extensions to other distributions are possible. In particular, this results in the wavelet periodogram, $I_{k,T}^j$, having a scaled $\\chi_1^2$ distribution \\citep{fryzlewicz2009consistent}. As stated above, this assumption allows us to perform universal thresholding. Moreover, under this assumption, the correction of the wavelet periodogram (see Equation \\eqref{Corrected periodogram}) brings its distribution closer to Gaussianity \\citep{fryzlewicz2009consistent}.\n\n\n\\subsection{Current Clustering\/Classification Techniques Taking into Account Nonstationarity}\n\\label{sec:review}\n\n\\cite{shumway2003time} considers the use of time-varying spectra for classification and clustering nonstationary time series. This method uses locally stationary Fourier models and Kullback-Leibler discrimination measures to classify seismic data.\n\\cite{fryzlewicz2009consistent} develop a procedure for \\textit{classification} of nonstationary time series. In their setup, they have available a training dataset that consists of signals with known group labels. The observed signals are viewed as realisations of locally stationary wavelet processes and the evolutionary wavelet spectrum is estimated. The EWS, which contains the second-moment information on the signals, is used as the classification signature. \\cite{fryzlewicz2009consistent} thus combine the use of wavelets with rigorous stochastic nonstationary time series modelling. 
\\cite{krzemieniewska2014classification} developed this method in the context of an industrial experiment by proposing an alternative divergence index to compare the spectra of two time series.\n\nUsing maximum covariance analysis (MCA) on the wavelet representations of \\textit{two} series with clustering applications has been proposed in previous works. MCA is one method to extract common time-frequency patterns and also reduce the dimension of the data. \\cite{rouyer2008analysing} use MCA to compare, in a quantitative way, the wavelet spectra of \\textit{two} time series. Their approach applies a singular value decomposition on the covariance matrix between each pair of wavelet spectra. The distance between two wavelet spectra is measured by comparing a given number of the leading patterns and singular vectors obtained by the MCA that correspond to a fixed percentage of the total covariance. This is repeated for each pair of time series to build a distance matrix which is used to obtain a cluster tree that groups \\textit{wavelet spectra} according to their time-frequency patterns. \\cite{antoniadis2013clustering} also use a maximum covariance analysis over a localised covariance matrix, however, their methods are based on the \\textit{continuous wavelet transform}. They introduce a way of quantifying the similarity of these patterns by comparing the evolution in time of each pair of leading patterns. This builds a distance matrix which is then used within classical clustering algorithms to differentiate among high dimensional populations.\n\n \\cite{holan2010modeling} proposed achieving dimension reduction by treating a spectrum as an `image' and performing a functional principal components analysis. \\cite{holan2010modeling} classify nonstationary time series using a generalised linear model that incorporates the (dimension-reduced) spectrogram of a short-time Fourier transform into the model as a predictor.\n\n\\subsection{Proposed Functional PCA Approach for EWS Content}\n\\label{sec:PCA}\n\nIn this section we explain how we will use the features of the corrected spectra to cluster the plants based on their estimated time-scale behaviour. Our approach differs from those outlined in Section \\ref{sec:review} as we develop the functional PCA approach for wavelets under the LSW modelling framework. This combines the use of a dimension-reduced wavelet representation with rigorous stochastic nonstationary time series modelling. In particular, we are able to calculate an unbiased, consistent estimator of the EWS and use this as the basis of our clustering methodology.\n\n\nIn our biological problem of interest, the time-frequency representation of the signal is high dimensional. For example, the estimated spectrum of an individual time series from our data set consists of 64 time points by 6 frequencies and thus produces 384 possible time-frequency covariates. Therefore, we need to perform a dimension reduction technique. The method we use treats the spectrum as an \"image\" and the spectral coefficients as time-frequency \"pixels\". The pixels are not independent as they result from covariance present in the original signal. In fact, the spectrum presents coherent patterns that should be accounted for.\n\nTo address the dependence in the spectrum, \\cite{holan2010modeling} treat each spectrum as an image and decompose it as a Karhunen-Lo\\' eve representation \\citep{james2005functional}. 
In theory, the spectrum that results from a (continuous) time series is a continuous two-dimensional object. Therefore, consider a continuous spectrum $\{S(\mathbf{v}) : \mathbf{v} = (j, z), \mathbf{v} \in \mathbb{R} \times (0, 1)\}$. Suppose that $E[S(\mathbf{v})]=0$, and define the covariance function as $E[S(\mathbf{v}) S(\mathbf{v}')] \equiv C_S(\mathbf{v}, \mathbf{v}')$. Then the Karhunen-Lo\'eve expansion allows the covariance function to be decomposed via a classical eigenvalue\/eigenfunction decomposition.\n\nAlthough the continuous Karhunen-Lo\'eve representation is often the most realistic from the point of view of modelling a biological process, it is rarely considered in applications. This is due to the discrete nature of observations resulting from most experiments. In practice, we use the empirical version of the Karhunen-Lo\'eve decomposition, the Karhunen-Lo\'eve Transform (also known as empirical orthogonal function (EOF) analysis in meteorology and geophysics), as is common in spatial statistics \citep{cressie2015statistics}.\n\nTherefore, assume we have observed $N$ (LSW) processes at $T = 2^J$ equally spaced time points. Denote the $i$th time series $\{X_{t,T}^{(i)}\}_{t=0}^{T-1}$ for $ i = 1, \dots, N$. For each time series, $\{X_{t,T}^{(i)}\}_{t=0}^{T-1}$, calculate the corrected and smoothed periodogram, $\{\hat{S}^{(i)}_{j}(t\/T)\}_{j}$, for $i = 1, \dots, N$, where $t = 0, \dots, T - 1$ and $j = 1, \dots, J$. The resulting estimated spectral coefficients can be arranged in $N$ matrices, each of dimension $J \times T$, which we denote $\hat{S}^{(1)}, \dots, \hat{S}^{(N)}$. We can treat each of these matrices as an \"image\". In particular, for $i = 1, \dots, N$, \"vectorise\" the matrix $\hat{S}^{(i)}$. That is, concatenate the rows of the matrix $\hat{S}^{(i)}$ to produce a vector $\mathbf{\hat{s}}^{(i)}$ that has length $J \times T = n$. These $N$ vectors are combined to form an $N \times n$ data matrix, $Q$, where each row of $Q$ represents one matrix, $\hat{S}^{(i)}$. Formally,\n \begin{equation}\n \label{eq:datamat}\n Q = \left[\mathbf{\hat{s}}^{(1)}, \dots, \mathbf{\hat{s}}^{(N)} \right]^T.\n \end{equation}\n\n This results in a data matrix, $Q$, on which we can perform a classical principal components analysis (PCA) as in multivariate statistics. Therefore, we now briefly outline how to perform a PCA.\n\n Firstly, note that the data should be centred. Therefore, subtract the column mean from each column of $Q$ and denote the mean centred data matrix $U$. Now compute the sample covariance matrix, $R$, of (the transpose of) the mean centred data matrix, $U^T$:\n \begin{equation}\n R = UU^T.\n \end{equation}\n\n Apply the singular value decomposition to the covariance matrix, $R$:\n \begin{equation}\n R = \Psi \Lambda \Psi^T,\n \end{equation}\n where the columns of $\Psi$ are the eigenvectors (known as the \emph{singular vectors}) of $R$ and $\Lambda$ is a diagonal matrix whose diagonal elements are the eigenvalues of $R$ (arranged in decreasing order of magnitude), referred to as the \emph{singular values}. The singular values are proportional to the squared covariance accounted for in each direction.\n\n We can project the data matrix, $U$, onto its principal components by multiplying: $UU^T \Psi$. Call these projected values the scores of each process. 
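\n\nAs an illustration of these steps, the scores can be computed in a few lines of R. The sketch below follows the description above and is given purely for illustration; the matrix \verb|Q| of vectorised spectral estimates is assumed to have already been formed.\n\n\begin{verbatim}\n# Q: N x n matrix with one vectorised spectral estimate per row\nU <- sweep(Q, 2, colMeans(Q))    # column-centre the data\n\nR   <- U %*% t(U)                # as in the text, R = U U^T\ndec <- svd(R)                    # R = Psi Lambda Psi^T\nPsi    <- dec$u                  # singular vectors\nlambda <- dec$d                  # singular values (decreasing)\n\nscores <- U %*% t(U) %*% Psi     # projections used as the scores\n\end{verbatim}\n\n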
\subsection{Clustering Method}\n\label{sec:method}\n\nOur clustering method compares the time series by obtaining a dissimilarity matrix based on the scores (obtained as described in Section \ref{sec:PCA}). However, in order to calculate the dissimilarity matrix, we need a method to decide how many principal components to retain and also a suitable measure of distance. Furthermore, once we have obtained the dissimilarity matrix, we will also need to decide which clustering algorithm to use.\n\n \subsubsection{Distance measures}\n \label{sec:dist measures}\n\n The success of any clustering algorithm depends on the adopted dissimilarity measure. In this section, we will describe four possible distance measures and discuss their advantages and disadvantages. The proposed distance measures consist of (adaptations and developments of) those adopted in the work reviewed in Section \ref{sec:review}. In our simulation studies in Section \ref{sec:sims}, we will compare the performance of the different clustering algorithms outlined in Section \ref{sec:review}, along with the different possible distance measures.\n\n The simplest choice for the dissimilarity measure is the squared quadratic distance. This distance measure is adopted by \cite{fryzlewicz2009consistent} in their classification of nonstationary time series. The advantages of this measure are that it offers good practical performance and is straightforward to compute.\n\n The distance between two time series, $\{X_{t,T}^{(i)}\}_{t=0}^{T-1}$ and $\{X_{t,T}^{(j)}\}_{t=0}^{T-1}$, is the sum of the squared differences between the scores relating to the principal components retained:\n \begin{equation}\n \label{SQD}\n SQD(X_{t,T}^{(i)}, X_{t,T}^{(j)}) = \sum_{k=1}^{p} \Big[\text{Score}_{k}^{(i)}-\text{Score}_{k}^{(j)}\Big]^2,\n \end{equation}\n where $\text{Score}_{k}^{(i)}$ denotes the score (relating to principal component $k$) of time series $X_{t,T}^{(i)}$.\n The value $SQD(i, j)$ is the $(i, j)$th entry of the dissimilarity matrix, $D$.\n\nWe propose to develop this simple measure by aggregating the scores in the most significant $p$ directions using a \textit{weighted} combination with weights given by the squared singular values. We refer to this measure as the Weighted Squared Quadratic Distance. Therefore, the distance between two time series, $\{X_{t,T}^{(i)}\}_{t=0}^{T-1}$ and $\{X_{t,T}^{(j)}\}_{t=0}^{T-1}$, is the weighted sum of the squared differences between their scores in $p$ directions. Formally:\n \begin{equation}\n \label{WSQD}\n WSQD(X_{t,T}^{(i)}, X_{t,T}^{(j)}) = \frac{\sum_{k=1}^{p} \lambda_k^2 \Big[\text{Score}_{k}^{(i)}-\text{Score}_{k}^{(j)}\Big]^2}{\sum_{k=1}^p \lambda_k^2},\n \end{equation}\n where $\text{Score}_{k}^{(i)}$ is as in equation \eqref{SQD} and $\lambda_k$ denotes the corresponding $k^{th}$ singular value.\n The value $WSQD(i, j)$ is the $(i, j)$th entry of the dissimilarity matrix, $D$.\n\n Another choice for the dissimilarity measure is outlined in the method by \cite{antoniadis2013clustering}. Their approach applies a singular value decomposition on the covariance matrix between each pair of wavelet transforms to obtain the leading patterns. The distance between two time series is measured by comparing a given number of the leading patterns obtained by the MCA. 
In particular, they compare the evolutions in time of each pair of leading patterns by measuring how dissimilar their shapes are. Therefore, for the $k^{th}$ pair of leading patterns, they take the first derivative of the difference between them. This quantity is bigger (in absolute value) if the two leading patterns evolve very differently through time. Thus, formally, the dissimilarity between two leading patterns, $P_k^{(i)}$ and $P_k^{(j)}$, is measured by:\n\n \begin{equation}\n d_k(i, j) = |\Delta (P_k^{(i)} - P_k^{(j)})|,\n \end{equation}\n where $\Delta$ represents the first derivative. Finally, \cite{antoniadis2013clustering} aggregate the leading patterns in the most significant $p$ directions using a weighted combination with weights given by the squared singular values:\n \begin{equation}\n D(i, j) = \frac{\sum_{k=1}^p \lambda_k^2 d_k^2(P_k^{(i)}, P_k^{(j)})}{\sum_{k=1}^p \lambda_k^2}.\n \end{equation}\n\n \cite{rouyer2008analysing} use the following distance (RD) measure adapted from \cite{keogh1998enhanced}:\n \begin{equation}\n \label{RD}\n RD(P_k^{(i)}, P_k^{(j)}) = \sum_{t=1}^{T-1} \tan^{-1}[|(P_k^{(i)}(t)- P_k^{(j)}(t)) - (P_k^{(i)}(t+1) -P_k^{(j)}(t+1))|],\n \end{equation}\n with $T$ being the length of the vectors and $P_k^{(i)}$ and $P_k^{(j)}$ the $k^{th}$ pair of leading patterns for time series $X_{t,T}^{(i)}$ and $X_{t,T}^{(j)}$ respectively. This metric compares two vectors by measuring the angle between each pair of corresponding segments (a segment is defined as a pair of consecutive points of a vector) and is a method for measuring parallelism between curves. The overall distance is then computed as a weighted mean of the distance for each of the $p$ pairs of leading patterns and singular vectors retained (with the weights being equal to the amount of covariance explained by each axis). For the comparison of the time series $X_{t,T}^{(i)}$ and $X_{t,T}^{(j)}$, we compute the distance $DT(i,j)$ according to the following formula:\n\n \begin{equation}\n DT(i,j) = \frac{\sum_{k=1}^{p}\lambda_k^2(RD(P_k^{(i)}, P_k^{(j)})+RD(\phi_k, \psi_k))}{\sum_{k=1}^{p}\lambda_k^2},\n \end{equation}\n where $\phi_k$ and $\psi_k$ are the $k^{th}$ singular vectors of $X_{t,T}^{(i)}$ and $X_{t,T}^{(j)}$ respectively.\n\n \subsubsection{Determining the number of principal components to retain}\n \label{sec:numb EOFs}\n In each of the distance metrics we have discussed, we must decide how many axes, $p$, to retain. \cite{antoniadis2013clustering} and \cite{rouyer2008analysing} both decide to use the number of axes that correspond to a fixed percentage of the total covariance (as is common in PCA). However, another method is to select the number of components based on a screeplot. This displays the proportion of variance explained by the (ordered) eigenvalues. The value of $p$ is then selected by looking for an elbow in the screeplot.\n\n \cite{cho2013modeling} propose selecting this value based on the dimension of the correlation between two curves, $r$. They show that retaining $r$ principal components gives a good approximation and also provide a method to estimate the correlation dimension using an information criterion. We do not adopt this method in this work: we obtained similar results with the methods outlined above with less computational burden. 
However, this may not be the case in other applications of our proposed clustering methodology.\n\n \subsubsection{Choice of the clustering algorithm}\n\n Once we have obtained the dissimilarity matrix, we need to decide which clustering algorithm to use. We perform a partitioning around medoids (PAM). This technique admits a general dissimilarity matrix as input and is known to be more robust than other alternatives such as k-means \citep{antoniadis2013clustering}.\n\n \subsection{Proposed Clustering Method}\n\n We will now summarise our proposed method of clustering the data.\n\n \begin{enumerate}\n \item As in Equation \ref{eq:datamat}, obtain an $N \times n$ data matrix, $Q$ (where $n = J \times T$), whose $i^{th}$ row represents the estimated spectrum of time series $\{X_{t,T}^{(i)}\}_{t=0}^{T-1}$.\n \item Obtain the scores of each time series as outlined in Section \ref{sec:PCA}.\n \item Retain only $p$ principal components using one of the methods in Section \ref{sec:numb EOFs}.\n \item Form a dissimilarity matrix where the $(i,j)$th entry is the distance (calculated using one of the possible distances defined in Section \ref{sec:dist measures}) between the scores of the $i$-th and $j$-th time series.\n \item This dissimilarity matrix is then used as the input of a clustering algorithm.\n \end{enumerate}\n\n\subsection{Simulation Study}\n\label{sec:sims}\n\nIn Section \ref{sec:method}, we described our procedure to cluster nonstationary time series. To assess the performance of our proposed procedure relative to other methods, we conducted a simulation study. We describe the execution and results of this study below.\n\nFirstly, we assumed that each time series was a realisation from one of two groups. Each group has a different EWS, which we used to simulate our data. A data set of $N=100$ (50 simulations from each of the two groups) was generated. For each of the above methods, we obtained a dissimilarity matrix which was the input of a PAM algorithm which clustered the data into two groups. We then compared the clusters with the known group memberships and counted how many time series were correctly clustered (as a percentage). The above procedure was then repeated 100 times and the results for each method were averaged.\n\nTo illustrate the effectiveness of our approach for our application to the circadian dataset, we conducted a simulation study by creating a large sample of synthetic signals from two groups. We simulate the $i^{th}$ time series of group $g$ from the wavelet spectrum $S_j^{(g)}(z)$, where the spectrum for each of the groups is constructed to display differences which are of interest to the circadian biologist, for example changes of amplitude and period. Figure \ref{fig:specsandts} shows the wavelet spectra and an example of a realisation from each of the two groups. In particular, these spectra are generated by taking the average spectra of two groups of the observed circadian data set (the 0.5mM and 10mM groups). Therefore, the time series are of length $T = 64$ as this is the length of our truncated circadian dataset.\n\n\begin{figure}\n\centering\n\includegraphics[width=0.7\linewidth]{specsandts.pdf}\n\caption{Wavelet spectra and realisations from the two groups in the simulation study (based on the circadian dataset).}\n\label{fig:specsandts}\n\end{figure}\n\nFor this simulation study, we used Daubechies' extremal phase wavelet number 1 both to generate the data and within our clustering methods. 
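\n\nTo make this set-up concrete, the sketch below shows, in R, how one realisation per group can be generated from a given spectrum and passed through the proposed pipeline. It uses the \verb|wavethresh| and \verb|cluster| packages and is a schematic illustration only: the objects \verb|spec1| and \verb|spec2| are placeholders for the two group spectra, the smoothing options of \verb|ewspec| are left at their defaults rather than the exact settings described below, and in practice one row of \verb|Q| would be formed for each of the $N = 100$ simulated series.\n\n\begin{verbatim}\nlibrary(wavethresh)\nlibrary(cluster)\n\n# spec1, spec2: wd objects (type = \"station\") holding the two group spectra\nx1 <- LSWsim(spec1)    # one realisation from group 1\nx2 <- LSWsim(spec2)    # one realisation from group 2\n\n# corrected and smoothed wavelet periodogram of each series (default smoothing)\nS1 <- ewspec(x1, filter.number = 1, family = \"DaubExPhase\")$S\nS2 <- ewspec(x2, filter.number = 1, family = \"DaubExPhase\")$S\n\n# vectorise each estimated spectrum into one row of the data matrix Q\nvec <- function(S) unlist(lapply(0:(S$nlevels - 1),\n                                 function(j) accessD(S, level = j)))\nQ <- rbind(vec(S1), vec(S2))\n\n# scores, weighted squared quadratic distances and PAM, as described above\nU  <- sweep(Q, 2, colMeans(Q))\nsv <- svd(U %*% t(U))\nscores <- U %*% t(U) %*% sv$u\np <- 2; w <- sv$d[1:p]^2\nwsqd <- function(i, j) sum(w * (scores[i, 1:p] - scores[j, 1:p])^2) \/ sum(w)\nD  <- as.dist(outer(seq_len(nrow(Q)), seq_len(nrow(Q)), Vectorize(wsqd)))\ncl <- pam(D, k = 2, diss = TRUE)$clustering\n\end{verbatim}\n\n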
The wavelet analysis was performed in the \verb|locits| package in R and the PAM within the \verb|cluster| package. Each periodogram was smoothed, level by level, by taking a log transform, applying translation-invariant global universal thresholding and then applying the inverse transform. For each scale of the wavelet periodogram, only levels 3 and finer were thresholded.\n\n Firstly, we propose that the similarity of the time series should be based on certain characteristics of the data rather than the raw data itself. Therefore, we propose that clustering time series based on their time-frequency decompositions should give better (and more meaningful) results than using the raw time series. We therefore begin by reporting the results of clustering based on the raw data, the wavelet coefficients and the raw wavelet periodogram in Table \ref{tab:raw}. To obtain these results, we clustered the synthetic signals using PAM with the Euclidean distance and the inputs outlined previously. This simulation study shows that clustering based on the raw data and the raw wavelet transform gave poor results (54\% correctly clustered), which supports the assertion that (for this application) clustering based on the second-moment information is preferable. This supports the motivation behind the methods outlined in Section \ref{sec:review}. However, in this paper, we propose that combining the use of the wavelets with the rigorous stochastic nonstationary time series modelling that is achieved through the assumption of the LSW model leads to methods that are superior to those based only on the use of wavelets. We therefore also report the results of clustering based on the corrected and smoothed wavelet periodogram in Table \ref{tab:raw}. This simulation study provides strong evidence that (when clustering based on the second-moment information is preferable and the LSW model is appropriate) methods based on the corrected and smoothed wavelet periodogram (obtained through the assumption of the LSW modelling framework) give significantly better results than methods based on the raw wavelet periodogram. Furthermore, we can see that using the functional principal components analysis improves the method from $71\%$ correctly clustered to $75\%$ (see Table \ref{tab:PCs}).\n\n \begin{table}\n \begin{tabular}{|c|c|c|c|c|}\n \hline Input & Data & Wavelet Transform & Wavelet Periodogram & Corrected Wavelet Periodogram \\\\\n \hline Correctly Clustered (\%) & 54\% & 54\% & 56\% & 71\% \\\\\n \hline\n \end{tabular}\n \caption{Simulation study clustering results using PAM with the Euclidean distance and the following inputs: the raw data, wavelet transform, wavelet periodogram and the corrected wavelet periodogram.}\n \label{tab:raw}\n \end{table}\n\n \begin{table}\n \begin{tabular}{|c|c|c|}\n \hline Method to choose number of EOFs & 90\% of total covariance & Screeplot \\\\\n \hline Squared Quadratic Distance & 73\% & 75\% \\\\\n \hline Weighted Squared Quadratic Distance & 77\% & 78\% \\\\\n \hline \cite{rouyer2008analysing} Distance & 54\% & 75\% \\\\\n \hline\n \end{tabular}\n \caption{Comparing methods to select the number of principal components for the proposed LSW clustering method in the simulation study. 
Percentages show correct clustering rates.}\n \\label{tab:PCs}\n \\end{table}\n\n To examine the effect of the choice of distance measure on our proposed clustering method, we performed the simulation study as above using all four distance measures outlined in Section \\ref{sec:dist measures}. The results are summarised in Table \\ref{tab:dists}. We can see that, for this simulation study, our method is fairly robust to the choice of distance measure. However, it would seem that the weighted squared quadratic distance gives the best results with $78\\%$ of the time series correctly clustered.\n\n \\begin{table}\n \\begin{center}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline Distance Measure & Squared Quadratic & Weighted Squared Quadratic & \\cite{rouyer2008analysing} & \\cite{antoniadis2013clustering} \\\\\n \\hline Correctly Clustered & 75\\% & 78\\% & 75\\% & 75\\% \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\caption{Comparing distance measures for proposed LSW clustering method in the simulation study.}\n \\label{tab:dists}\n \\end{table}\n\n We also examined the different methods outlined in Section \\ref{sec:numb EOFs} to select the number of principal components to retain for our LSW clustering method. Therefore, we performed our proposed method retaining only two principal components (based on examining the scree plot) and compared this with the situation where we retain the minimal number of components that correspond to $90\\%$ of the total covariance. The results are summarised in Table \\ref{tab:PCs}. Once again we can see that the LSW clustering method is fairly robust to the way in which we choose the number of principal components to retain. For the squared quadratic and weighted squared quadratic distances there appears to be very little difference between the two approaches. However, the \\cite{rouyer2008analysing} distance gives a much lower result when the number of principal components is chosen to explain $90\\%$ of the total covariance. On average this method usually uses thirteen principal components whereas the screeplot typically indicates two should be used. Therefore, Table \\ref{tab:PCs} suggests that, for this simulation study, the \\cite{rouyer2008analysing} distance is much more sensitive to the scores corresponding to relatively smaller proportions of the covariance. Alternatively, \\cite{holan2010modeling} argue that there is no reason that the scores associated with principal components that account for more variance should be the important scores in terms of discriminating between the two types of synthetic signal. Therefore, the scores that account for relatively large amounts of the covariance (but are perhaps not chosen by the screeplot approach) could actually not be effective in discriminating between the groups yet are being weighted highly within the distance measure.\n\n Finally, we compare the LSW method with the methods outlined in Section \\ref{sec:review} proposed by \\cite{rouyer2008analysing} and by \\cite{antoniadis2013clustering}. Both of these benchmark methods do well in practice and represent the state-of-the-art among procedures for clustering nonstationary time series. The results are summarised in Table \\ref{tab:methods}. This simulation study provides empirical evidence that our proposed LSW method works very well and outperforms the state-of-the-art among procedures for clustering nonstationary time series. 
Again we see that (for this particular application) methods based on the second-order information (our LSW method and the \\cite{rouyer2008analysing} method) perform better than the method based on the wavelet transform (the \\cite{antoniadis2013clustering} method). Moreover, our method, which utilises an unbiased, consistent estimator of the EWS, performs considerably better than the method which uses the raw wavelet periodogram. Finally, \\cite{rouyer2008analysing} and \\cite{antoniadis2013clustering} use MCA to compare the wavelet representations of two time series at a time and repeat this process for each pair of time series. This simulation study also shows that the method we developed based on \\cite{holan2010modeling}, which treats the spectrum as an `image' and performs a PCA on the estimated spectral coefficients of the entire dataset, outperforms the pairwise methods of \\cite{rouyer2008analysing} and \\cite{antoniadis2013clustering} and is also far less computationally expensive.\n\n \\begin{table}\n \\begin{tabular}{|c|c|c|c|}\n \\hline Method & \\cite{rouyer2008analysing} & \\cite{antoniadis2013clustering} & LSW Method \\\\\n \\hline Percentage Correctly Clustered & 65\\% & 63\\% & 78\\% \\\\\n \\hline\n \\end{tabular}\n \\caption{Comparing the proposed LSW clustering method with the methods outlined in \\cite{rouyer2008analysing} and \\cite{antoniadis2013clustering} in the simulation study.}\n \\label{tab:methods}\n \\end{table}\n\n\\section{Circadian Data Results}\\label{sec:app}\nIn this section, we apply the clustering method developed in Section \\ref{sec:method} to the circadian data that motivated this work.\n\n\\subsection{Pre-processing the Data}\nAs stated in Section \\ref{sec:model}, the time series should be a zero mean process. Therefore, we estimate the mean \\textit{of each series} and subtract this from each series. Figure \\ref{fig:Circgroups} shows each realisation from each group (in grey) along with the group average (in red) for our truncated zero-mean dataset.\n\n\\subsection{Preliminary Analysis}\n\\label{sec:prelim}\n\nFor each plant we calculated the corrected wavelet periodogram estimate of the EWS. The wavelet analysis is implemented in the \\verb|locits| package in R (available from the CRAN package repository). For this analysis we used Daubechies' extremal phase wavelet number 1. Each periodogram was level smoothed by log transform, followed by translation invariant global universal thresholding and then the inverse transform is applied. For each scale of the wavelet periodogram, only levels 3 and finer are thresholded.\n\n\\subsection{Clustering Results}\n\nTo cluster the data, we decided to retain two principal components based on examining the screeplot in Figure \\ref{fig:Circscree}. We can also plot the data in relation to the scores of the first two principal components. In Figure \\ref{fig:Circscoresall} we have plotted the data projected onto the first two principal components by group. 
On examining Figure \\ref{fig:Circscoresall} we can see that the lower concentrations occupy the top-left area of the plot.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{Circscree.pdf}\n\\caption{The screeplot for the circadian data.}\n\\label{fig:Circscree}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{Circscoresall.pdf}\n\\caption{The circadian data projected onto the first two principal components obtained from the LSW clustering method.}\n\\label{fig:Circscoresall}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{Circscores.pdf}\n\\caption{The circadian data projected onto the first two principal components obtained from the LSW clustering method. The control and concentrations 0.5mM and 1.5mM are represented by black circles, the higher concentrations by red squares. The circled points highlight plants which were misclustered.}\n\\label{fig:Circscores}\n\\end{figure}\n\n\n\n\nWe then obtained a dissimilarity matrix by computing the weighted squared quadratic distance between the first two scores of each time series. We used the \"cluster\" R package to perform a PAM. The results of clustering the data into two groups using our methods are shown in Figure \\ref{fig:Circclust2}. We chose to cluster into two groups since one biological application of this method could be to ascertain at which level of lithium we start to see an effect. Furthermore, we could characterise the effect this exposure is having with the results of this analysis.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\linewidth]{Circclust2.pdf}\n\\caption{The results of clustering the circadian dataset into two groups using the LSW method. The central plot shows the cluster labels for each time series (labelled 1-96). Thus, those with index 1-36 belong to the first 3 lower concentration groups with successive blocks of 12 time series belonging to rising concentrations. For reference, the time series and averages for each of the groups are shown beside the results.}\n\\label{fig:Circclust2}\n\\end{figure}\n\n\n\n\\subsection{Discussion of Findings}\n\nOn examining Figure \\ref{fig:Circclust2}, we can see that this method has effectively sorted the data into two groups:\n\\begin{enumerate}\n\\item the control and concentrations 0.5mM and 1.5mM and\n\\item the higher concentrations.\n\\end{enumerate}\n\n This is to be expected as Figure \\ref{fig:avplot}shows that these groups display a similar average. Furthermore, we also note that the increase in concentration from 1.5mM to 7.5mM is relatively large. Therefore, our method differentiates between the lower and higher concentrations of lithium. This would suggest that there exists a \"threshold\" of the amount of lithium the plant can tolerate. Furthermore, this study implies that this threshold is 7.5mM. However, as stated above, there is a relatively large jump between 7.5mM and the 1.5mM. Therefore, this study would imply that more research should be done with finer gradients between concentrations in the range 1.5mM to 7.5mM to find a more precise limit.\n\nThe proposed method also allows to characterise these groups. The time series of the clustered groups (in black) along with the cluster average (in red) are shown in Figure \\ref{fig:Circclustplots}. We also plot the average spectrum for each cluster in Figure \\ref{fig:avcirc}. These figures suggest that the period of all the plants changes (from 24 hours) after exposure to constant light. 
In particular, the average spectrum of cluster 1 has a peak in resolution level 1 beginning at 0 hours and then two peaks in resolution level 2 at around 20 hours and after 48 hours. This movement through the scales to the finer resolutions as time progresses implies that the period changes with time. This is more evidence to suggest that a single period estimate, the standard practice in the circadian community, does not effectively characterise the data. Furthermore, the large coefficients in resolution level 1 throughout the experiment in the average spectrum of cluster 2 also imply that this group has a longer period than cluster 1. Finally, we propose that exposure to higher concentrations of lithium reduces the amplitude of the signals as time progresses. This is apparent in the average spectrum of cluster 2 (in Figure \ref{fig:avcirc}) as the magnitude of the spectral coefficients decreases as time progresses.\n\n\begin{figure}\n\centering\n\includegraphics[width=0.7\linewidth]{Circclustplots.pdf}\n\caption{The results of clustering the circadian dataset into two groups using the LSW method. The time series of the clustered groups are shown in black along with the cluster average in red.}\n\label{fig:Circclustplots}\n\end{figure}\n\nLet us now inspect the first two principal components in Figure \ref{fig:PCsall2}. The first principal component identifies the differences in resolution level 2 of the evolutionary wavelet spectra of the two groups. The peak just after 32 hours corresponds to a peak in the spectrum of cluster 2 and the troughs correspond to the peaks in the spectrum of cluster 1. We can also see a similar relationship in resolution level 1 with peaks in the principal component corresponding to peaks in the spectrum of cluster 2 and troughs corresponding to the peaks in the spectrum of cluster 1. This gives a negative score for cluster 1, which we can see in Figure \ref{fig:Circscores}. Therefore, the first principal component could represent the longer period associated with exposure to lithium.\n\n\begin{figure}\n\centering\n\includegraphics[width=0.7\linewidth]{avcirc.pdf}\n\caption{The average spectrum for each cluster resulting from clustering the circadian dataset into two groups using the LSW method. Cluster 1 corresponds to the lower concentration of lithium and cluster 2 the higher concentration.}\n\label{fig:avcirc}\n\end{figure}\n\n\begin{figure}\n\centering\n\includegraphics[width=0.7\linewidth]{PCsAll2.pdf}\n\caption{Principal components 1 and 2 obtained by clustering the circadian dataset into two groups using the LSW method.}\n\label{fig:PCsall2}\n\end{figure}\n\n On interrogating Figure \ref{fig:Circclust2}, it seems that 5 of the \"lower concentration\" time series and 4 of the \"higher concentration\" time series were \"incorrectly\" clustered. Therefore, we could say that this method has correctly clustered approximately 89\% of the circadian data. The incorrectly clustered points are circled in Figure \ref{fig:Circscoresallwrong}. Furthermore, Figure \ref{fig:Circclustwrong} shows the observed time series of the misclustered points in relation to the observations from cluster 1. From the top panel of Figure \ref{fig:Circclustwrong} we can see that the lower concentration time series that were assigned to cluster 2 seem to exhibit the dampening effect and period-lengthening which characterise the exposure to higher concentrations. 
This could mean that the effects of adding lithium can be seen in plants which are not exposed to higher concentrations of lithium. The lower panel of Figure \\ref{fig:Circclustwrong} displays the plants exposed to higher concentrations of lithium which were assigned to cluster 1. These time series do not seem to dampen in the same way as the other higher concentration time series and share a peak at around 30 hours, which suggests that this property characterises the control group and lower concentrations. Furthermore, this indicates that some plants may be more resilient to lithium exposure. Therefore, we conclude that though there are certain features which characterise exposure to lithium, not all plants react in the same way.\n\n \\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\linewidth]{Circscoresallwrong.pdf}\n \\caption{The data projected onto the first two principal components obtained by clustering the circadian dataset into two groups using the LSW method. The circled points represent plants which were mis-clustered (one from the control group; one from 0.5mM; three from 1.5mM; one from 15mM; three from 30mM).}\n \\label{fig:Circscoresallwrong}\n \\end{figure}\n\n \\begin{figure}\n \\centering\n \\includegraphics[width=0.7\\linewidth]{Circsclustwrong.pdf}\n \\caption{The time series of cluster 1 (in grey) along with the cluster average (in red) obtained by clustering the circadian dataset into two groups using the LSW method. Top panel: the time series from lower concentrations that were clustered into group 2 (one from the control group; one from 0.5mM; three from 1.5mM). Bottom panel: the time series from higher concentrations that were clustered into group 1 (one from 15mM; three from 30mM). In both figures, the colours show the group concentration.}\n \\label{fig:Circclustwrong}\n \\end{figure}\n\nIn conclusion, our analysis demonstrated that a threshold exists with regard to the amount of lithium a plant can tolerate before its clock is distinctly affected. Furthermore, we found that this boundary is between 1.5mM and 7.5mM of LiCl. Finally, we were also able to characterise these effects using our proposed analysis methods. Therefore, adding concentrations of lithium above 1.5mM has two main effects: (i) period lengthening and (ii) amplitude dampening.\n\nIn the circadian community, such data is currently analysed by simply estimating a (constant) period for each time series (by means of Fourier analysis). Our proposed multiscale analysis, however, has clearly demonstrated the unsuitability of this approach. This is due to the underlying nonstationary character of the data coupled with the lithium concentration effects. Our time-scale method provides further insight into this area of study as the data is characterised using a time-varying period as opposed to a single period estimate.\n\n\\section{Conclusions and Further Work}\\label{sec:concs}\n\nIn this paper we developed a new procedure for clustering circadian plant rhythms by modelling them as nonstationary wavelet processes and exploiting their local time-scale spectral properties. Our method combines the advantages of a wavelet analysis with the benefits of rigorous stochastic nonstationary time series modelling. 
When compared to competitor (non-model based) methods, we found that the locally stationary wavelet model brought clear gains for both simulated and real data.\n\nThe proposed model-based clusterings can be used to produce visualisations helpful in answering questions such as `what other concentrations of lithium produce similar effects in plants?' and `what characterises the different types of reactions present in this dataset?' The answers to these questions have important implications for understanding the mechanism of the plant's circadian clock and also environmental implications associated with soil pollution. We also showed that our method has desirable properties such as low sensitivity to the choice of distance measure and the number of principal components to retain. We believe these results show the method's suitability for organising and understanding multiple nonstationary time series such as the gene expression levels in our circadian dataset.\n\nAt this point we should note that this method is not restricted to the dataset analysed in this paper, but can be applied to other circadian datasets. For example, we could extend this experiment to include the results of exposure to other elements and answer the question `which other elements in the periodic table, and at which concentrations, produce similar kinds of reactions in plants?' We can also extend the dataset to include plants with \\emph{deficiencies} of an element. This would also enable a deeper understanding of the circadian clock mechanism. It is also possible to observe the expression of other genes in these experiments. Therefore, our methods could also be used to cluster nonstationary time series in order to identify genes potentially involved in, and regulated by, the circadian clock. Alternatively, it is possible to simultaneously measure the expression levels of multiple genes. Therefore, an avenue for further research would be to extend our methods to cluster \\emph{multivariate} time series.\n\nIn Section \\ref{sec:data}, we discussed the propensity of the recording equipment to break down, resulting in gaps in the data. Another area of future work is therefore to adapt the spectral estimation to cope with missingness, or `gappy' data, which often arises in experimental settings. The resulting spectral estimate could then be used as a classification signature or within our clustering procedure.\n\nThe wavelet system gives a representation for nonstationary time series under which we estimate the wavelet spectrum and cluster the data based on these spectral estimates. We have found in simulations that our method is fairly robust to the choice of wavelet. However, it may be that certain wavelets are better suited to modelling and discriminating between certain datasets. Furthermore, it may be that different wavelets identify different features of the data. Thus, an interesting area of further work would be to derive a procedure for determining significant data features that discriminate between groups.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nImage classification has a plethora of applications in software for safety-critical domains such as self-driving cars, medical diagnosis, \\hbox{\\emph{etc.}}\\xspace Even day-to-day consumer software includes image classifiers, such as Google Photo search and Facebook image tagging. 
Image classification is a well-studied problem in computer vision, where a model is trained to classify an image into single or multiple predefined categories~\\cite{kamavisdar2013survey}. \nDeep Neural Networks (DNNs) have enabled major breakthroughs in image classification tasks over the past few years, \nsometimes even matching human-level accuracy under some conditions~\\cite{he2016deep}, which has led to their ubiquity in modern software. \n\n\nHowever, in spite of such spectacular success, DNN-based image classification models, like traditional software, are known to have serious bugs. For example, Google faced backlash in 2015 due to a notorious error in its photo-tagging app, which tagged pictures of dark-skinned people as ``gorillas''~\\cite{google_tag}. Analogous to traditional software bugs, the Software Engineering (SE) literature denotes these classification errors as {\\em model bugs}~\\cite{Ma:2018:MAN:3236024.3236082}, which can arise due to either imperfect model structure or inadequate training data. \n \n\nAt a high-level, these bugs can affect either an {\\em individual image}, where a particular image is mis-classified (\\hbox{\\emph{e.g.,}}\\xspace a particular skier is mistaken as a part of a mountain), \nor an {\\em image class}, where a class of images is more likely to be mis-classified (\\hbox{\\emph{e.g.,}}\\xspace dark-skinned people are more likely to be misclassified), as shown in Table~\\ref{tab:example}. The latter bugs are specific to a whole {\\em class} of images rather than individual images, implying systematic bugs rather than the DNN equivalent of off-by-one errors. While much effort from the SE literature on Neural Network testing has focused on identifying individual-level violations\\textemdash using white-box~\\cite{pei2017deepxplore,ma2018deepgauge, kim2019guiding, wang2019adversarial}, grey-box~\\cite{tian2017deeptest,Ma:2018:MAN:3236024.3236082}, or concolic testing~\\cite{sun2018concolic}, detection of class-level violations remains relatively less explored. This paper focuses on automatically detecting such class-level bugs, so they can be fixed. \n\n\\input{tables\/example.tex}\n\nAfter manual investigation of some public reports describing the class-level violations listed in Table~\\ref{tab:example}, we determined two root causes: (i) \\textbf{Confusion}: The model cannot differentiate one class from another. For example, Google Photos confuses skier and mountain~\\cite{google_photo}. (ii) \\textbf{Bias}: The model shows disparate outcomes between two related groups. For example, Zhao \\hbox{\\emph{et al.}}\\xspace in their paper ``Men also like shopping''~\\cite{zhao2017men}, find classification bias in favor of women on activities like shopping, cooking, washing, \\hbox{\\emph{etc.}}\\xspace We further notice that some class-level properties are violated in both kinds of cases. For example, in the case of {\\em confusion errors}, the classification error-rate between the objects of two classes, say, skier and mountain, is significantly higher than the overall classification error rate of the model. Similarly, in the bias scenario reported by Zhao \\hbox{\\emph{et al.}}\\xspace, a DNN model should not have different error rates while classifying the gender of a person in the shopping category. Unlike individual image properties, this is a class property affecting all the shopping images with men or women. 
Any violation of such a property by definition affects the whole class although not necessarily every image in that class, \\hbox{\\emph{e.g.,}}\\xspace a man is more prone to be predicted as a woman when he is shopping, even though some individual images of a man shopping may still be predicted correctly. Thus, we need a class-level approach to testing image classifier software for confusion and bias errors.\n\nThe bugs in a DNN model occur due to sub-optimal interactions between the model structure and the training data~\\cite{Ma:2018:MAN:3236024.3236082}. \nTo capture such interactions, the literature has proposed various metrics primarily based on either neuron activations~\\cite{pei2017deepxplore, ma2018deepgauge, kim2019guiding} or feature vectors~\\cite{Ma:2018:MAN:3236024.3236082, pmlr-v97-odena19a}. \nHowever, these techniques are primarily targeted at the individual image level. To detect class-level violations, we abstract away such model-data interactions at the class level and analyze the inter-class interactions using that new abstraction. To this end, we propose a metric using neuron activations and a baseline metric using weight vectors of the feature embedding to capture the class abstraction.\n\n\nFor a set of test input images, we compute the probability of activation of a neuron per predicted class.\nThus, for each class, we create a vector of neuron activations where each vector element corresponds to a neuron activation probability. If the distance between the two vectors for two different classes is too close, compared to other class-vector pairs, that means the DNN under test may not effectively distinguish between those two classes. \nMotivated by MODE's technique~\\cite{Ma:2018:MAN:3236024.3236082}, we further create a baseline where each class is represented by the corresponding weight vector of the last linear layer of the model under test.\n\nWe evaluate our methodology for both single- and multi-label classification models in eight different settings. Our experiments demonstrate that DeepInspect\\xspace can efficiently detect both Bias and Confusion errors in popular neural image classifiers. \n\n We further check whether DeepInspect\\xspace can detect such classification errors in state-of-the-art models designed to be robust against norm-bounded adversarial attacks~\\cite{NIPS2018_8060}; DeepInspect\\xspace finds hundreds of errors proving the need for orthogonal testing strategies to detect such class-level mispredictions. \nUnlike some other DNN testing techniques~\\cite{tian2017deeptest,pei2017deepxplore,pmlr-v97-odena19a,sun2018concolic}, DeepInspect\\xspace does not need to generate additional transformed (synthetic) images to find these errors. \nThe primary contributions of this paper are: \n\n\\begin{itemize}[leftmargin=*]\n \\item \n We propose a novel neuron-coverage metric to automatically detect class-level violations (confusion and bias errors) in DNN-based visual recognition models for image classification. \n \\item \n We implemented our metric and underlying techniques in DeepInspect\\xspace.\n \\item\n We evaluated DeepInspect\\xspace and found many errors in widely-used DNN models with precision up to 100\\% (avg.~72.6\\%) for confusion errors and up to 84.3\\% (avg.~66.8\\%) for bias errors.\n\\end{itemize}\n\n\nOur code is available at \\url{https:\/\/github.com\/ARiSE-Lab\/DeepInspect}.\nThe errors reported by DeepInspect\\xspace are available at: \\url{https:\/\/www.ariselab.info\/deepinspect}. 
\n \n\n\\section{DNN Background}\n\\label{sec:background}\n\nDeep Neural Networks (DNNs) are a popular type of machine learning model loosely inspired by the neural networks of human brains. A DNN model learns the logic to perform a software task from a large set of \\emph{training examples}. For example, an image recognition model learns to recognize {\\tt cows} through being shown (trained with) many sample images of cows. \n\n\nA typical \"feed-forward\" DNN consists of a set of connected computational units, referred as {\\em neurons}, that are arranged sequentially in a series of {\\em layers}. \nThe neurons in sequential layers are connected to each other through \\textit{edges}.\nEach edge has a corresponding weight. \nEach neuron applies $\\sigma$, a \\textit{nonlinear activation function} (\\hbox{\\emph{e.g.,}}\\xspace ReLU~\\cite{nair2010rectified}, Sigmoid~\\cite{Mitchell:1997:ML:541177}), to the incoming values on its input edges and sends the results on its output edges to the next layer of connected neurons. \nFor image classification, convolutional neural networks (CNNs)~\\cite{lecun1998gradient}, a specific type of DNN, are typically used. CNNs consist of layers with local spatial connectivity and sets of neurons with shared parameters.\n\nWhen implementing a DNN application, developers \ntypically start with a set of annotated experimental data\\xspace and divide it into three sets: \n(i) training: to construct the DNN model in a supervised setting, meaning the training data is labeled (\\hbox{\\emph{e.g.,}}\\xspace using stochastic gradient descent with gradients computed using back-prop\\-agation~\\cite{rumelhart1988learning}); (ii) validation: to tune the model's hyper-parameters, basically configuration parameters that can be modified to better fit the expected application workload; and \n(iii) evaluation: to evaluate the accuracy of the trained model \\hbox{\\emph{w.r.t.}}\\xspace to its predictions on other annotated data, to determine whether or not it predicts correctly.\nTypically, training, validation, and testing data are drawn from the same initial dataset. \n\nFor image classification, a DNN can be trained in either of the following two settings: \n\n\\noindent\n(i)~\\textbf{Single-label Classification.}\nIn a traditional single-label classification problem, each datum is associated with a single label \\textit{l} from a set of disjoint labels $L$ where $|L| > 1$. If $|L| = 2$, the classification problem is called a binary classification problem; if $|L| > 2$, it is a multi-class classification problem~\\cite{tsoumakas2007multi}. Among some popular image classification datasets, MNIST, CIFAR-10\/CIFAR-100~\\cite{cifar} and ImageNet~\\cite{ILSVRC15} are all single-label,\nwhere each image can be categorized into only one class or outside that class.\n\n\\noindent\n(ii)~\\textbf{Multi-label Classification.}\nIn a multi-label classification problem, each datum is associated with a set of labels \\textit{Y} where $Y \\subseteq L$. \nCOCO\\cite{lin2014microsoft} and imSitu\\cite{yatskar2016} are popular datasets for multi-label classification. For example, an image from the COCO~ dataset can be labeled as \\textit{car, person, traffic light and bicycle}. 
A multi-label classification model is supposed to predict all of \\textit{car, person, traffic light and bicycle} from a single image that shows all of these kinds of objects.\n\nGiven any single- or multi-label classification task, DNN classifier software tries to learn the decision boundary between the classes\\textemdash all members of a class, say $C_i$, should be categorized identically irrespective of their individual features, \nand members of another class, say $C_j$, should not be categorized to $C_i$~\\cite{bengio2013representation}. The DNN represents the input image in an embedded space with the feature vector at a certain intermediate layer and uses the layers after as a classifier to classify these representations. The {\\em class separation} between two classes estimates how well the DNN has learned to separate each class from the other.\nIf the embedded distance between two classes is too small compared to other classes, or lower than some pre-defined threshold, we assume that the DNN could not separate them from each other.\n\n\n\\section{Methodology}\n\\label{sec:method}\n\nWe give a detailed technical description of DeepInspect\\xspace. \nWe describe a typical scenario where we envision our tool might be used in the following and design the methodology accordingly.\n\n\n\\smallskip\n\\noindent\n\\textbf{Usage Scenario.} Similar to customer testing of post-release software, DeepInspect\\xspace works in a real-world \nsetting where a customer gets a pre-trained model and tests its performance in a sample production scenario before deployment. The customer has white-box access to the model to profile, although all the data in the production system can be {\\em unlabeled}. \nIn the absence of ground truth labels, the classes are defined by the {\\em predicted labels}. \nThese predicted labels are used as class references as DeepInspect\\xspace tries to detect confusion and bias errors among the classes. \nDeepInspect\\xspace tracks the activated neurons per class and reports a potential class-level violation if the class-level activation-patterns are too similar between two classes. \nSuch reported errors will help customers evaluate how much they can trust the model's results related to the affected classes. \nAs elaborated in Section~\\ref{sec:discussion}, once these errors are reported back to the developers, they can focus their debugging and fixing effort on these classes. \nFigure \\ref{fig:workflow} shows the DeepInspect\\xspace workflow. \n\n\n\\subsection{Definitions}\n \nBefore we describe DeepInspect\\xspace's methodology in detail, we introduce definitions that we use in the rest of the paper.\nThe following table shows our notation. \\\\\n\\begin{center}\n{\\footnotesize\n\\begin{tabular}{ll}\n \\toprule\n All neurons set & $ N = \\{N_1,N_2,...\\}$ \\\\\n \n \n Activation function & $ out(N, c)$ returns output \\\\\n & for neuron $N$ , input $c$. \\\\\n Activation threshold & $Th$ \\\\\n \\bottomrule\n\\end{tabular}\n}\n\\end{center}\n\\ \\\\\n\\noindent\n \\textit{Neural-Path ($NP$).} \nFor an input image $c$, we define {\\em neural-path} as a sequence of neurons that are activated by $c$. \n\n\n\\noindent\n\\textit{Neural-Path per Class ($NP_{C}$).} \nFor a class $C_i$, this metric represents a set consisting of the union of neural-paths activated by all the inputs in $C_i$. \n\n\nFor example, consider a class {\\tt cow} containing two images: a brown cow and a black cow. Let's assume they activate two neural-paths: $[N_1,N_2,N_3]$ and $[N_4,N_5,N_3]$. 
\nThus, the neural-paths for class {\\tt cow} would be \n$NP_{cow} = \\{[N_1,N_2,N_3],[N_4,N_5,N_3]\\}$. \n$NP_{cow}$ is further represented by a vector \n$(N_1^1,N_2^1,N_3^2,N_4^1,N_5^1)$, where the superscripts represent the number of times each neuron is activated by $C_{cow}$. \nThus, each class $C_i$ in a dataset can be expressed with a {\\em neuron activation frequency vector}, which captures how the model interacts with $C_i$. \n\n\\smallskip\n\\noindent\n\\textbf{\\em Neuron Activation Probability}: \nLeveraging how {\\em frequently} a neuron $N_j$ is activated by all the members from a class $C_i$, this metric estimates the probability of a neuron $N_j$ to be activated by $C_i$. Thus,\nwe define: $\\displaystyle P(N_j ~|~ C_i) = \n \\frac{|\\{ c_{ik}~|~\\forall{c_{ik}} \\in C_i,out(N_j,c_{ik}) > Th\\}|}{|C_i|}$\n\n\n\\medskip\n\\noindent\nWe then construct a $n\\times m$ dimensional {\\em neuron activation probability matrix}, $\\rho$, ($n$ is the number of neurons and $m$ is the number of classes) with its ij-th entry being $P(N_j ~|~ C_i)$.\n\n\n{\n\\setlength{\\abovedisplayskip}{0pt}\n\\setlength{\\belowdisplayskip}{0pt}\n\\vspace{-0.3cm}\n\\begin{equation}\n\\scriptstyle\n\\label{eq:napm}\n \\rho=\n\\begin{blockarray}{cccccc}\n & C_1 & ... & C_i & ... & C_m \\\\\n\\begin{block}{c(ccccc)}\n N_1 & p_{11} & & & & p_{1m} \\\\\n ... & ... & & & & \\\\\n N_j & p_{j1} & &...& & p_{jm} \\\\\n ... & ... & & & & \\\\\n N_n & p_{n1} & & & & p_{nm}\\\\\n\\end{block}\n\\end{blockarray}\n\\end{equation}\n\\vspace{-0.3cm}\n}\n\n\nThis matrix captures how a model interacts with a set of input data. \nThe column vectors ($\\rho_{\\alpha m}$) represent \n the interaction of a class $C_m$ with the model. \nNote that, in our setting, $C$s are predicted labels. \n\n\nSince Neuron Activation Probability Matrix\\xspace ($\\rho$) is designed to represent each class, it should be able to distinguish between different $C$s. \nNext, we use this metric to find two different classes of errors\noften found in DNN systems: {\\em confusion} and {\\em bias} (see~\\Cref{tab:example}). \n\n\n\\begin{figure\n\\centering\n\\includegraphics[width=1\\columnwidth]{figures\/workflow-eps-converted-to.pdf}\n\\caption{\\textbf{\\small DeepInspect\\xspace Workflow\n}\n\\label{fig:workflow}\n\\end{figure}\n\n\\subsection{Finding Confusion Errors}\n\\label{sec:conf}\n\nIn an object classification task, when the model cannot distinguish one object class from another, confusion occurs. For example, as shown in~\\Cref{tab:example}, a Google photo app model confuses a skier with the mountain.\nThus, finding confusion errors means checking how well the model can distinguish between objects of different classes. An error happens when the model under test classifies an object with a wrong class, or for multi-label classification task, predicts two classes but only one of them is present in the test image.\n\nWe argue that the model makes these errors because during the training process the model has not learned to distinguish well between the two classes, say $a$ and $b$. Therefore, the neurons activated by these objects are similar and the column vectors corresponding to these classes: $\\rho_{\\alpha a}$ and $\\rho_{\\alpha b}$\nwill be very close to each other. 
Thus, we compute the confusion score between two classes as the Euclidean distance between their two probability vectors: \n\n{\n\\setlength{\\abovedisplayskip}{0pt}\n\\setlength{\\belowdisplayskip}{0pt}\n\\begin{equation}\n\\label{eq:conf}\n\\begin{split}\n\\scriptstyle\n \\textsc{napvd}\\xspace(a,b) = \\Delta(a, b) = ||\\rho_{\\alpha a} - \\rho_{\\alpha b}||_2\n\\displaystyle\n = \\sqrt{\\sum_{i = 1}^{n} \\left( P(N_i|a) - P(N_i|b)\\right) ^{2}}\n\\end{split}\n\\end{equation}\n}\n\nIf the $\\Delta$ value is less than some pre-defined threshold (conf\\_th) for a pair of classes, the model will potentially make mistakes in distinguishing one from the other, which results in confusion errors. \nThis $\\Delta$ is called \\textsc{napvd}\\xspace (\\underline{N}euron \\underline{A}ctivation \\underline{P}robability \\underline{V}ector \\underline{D}istance).\n\n\n\\subsection{Finding Bias Errors}\n\\label{sec:bias}\n\nIn an object classification task, bias occurs if the model under test shows disparate outcomes between two related classes. For example, we find that a ResNet-34 pretrained on the imSitu dataset often mis-classifies a man with a baby as {\\tt woman}. We observe that in the embedded matrix $\\rho$, $\\Delta(baby,woman)$ is much smaller than $\\Delta(baby,man)$. Therefore, during testing, whenever the model finds an image with a baby, it is biased towards associating the baby image with a woman. Based on this observation, we propose an inter-class distance-based metric to calculate the bias learned by the model. We define the bias between two classes $a$ and $b$ over a third class $c$ as follows: \n\\begin{equation}\n \\label{eq:bias}\n \\displaystyle\n bias(a, b, c) := \\frac{|\\Delta(c, a) - \\Delta(c, b)|}{\\Delta(c, a) + \\Delta(c, b)}\n\\end{equation}\n\n\n If a model treats objects of classes $a$ and $b$ similarly under the presence of a third object class $c$, $a$ and $b$ should have similar distances \\hbox{\\emph{w.r.t.}}\\xspace $c$ in the embedded space $\\rho$; thus, the numerator of the above equation will be small. Intuitively, the model's output can be more influenced by the nearer object classes, \\hbox{\\emph{i.e.}}\\xspace if $a$ and $b$ are closer to $c$. Thus, we normalize the disparity between the two distances to increase the influence of closer classes.\n \n This bias score is used to measure how differently the given model treats two classes in the presence of a third object class. \n An \\textbf{average bias} (abbreviated as \\textrm{avg\\_bias}\\xspace) between two objects $a$ and $b$ over the set of all object classes $O$ is defined as:\n\\begin{equation}\n\\label{eq:abgbias}\n\navg\\_bias(a, b) := \\frac{1}{|O|-2} \\sum_{c\\in O, c\\neq a,b} bias (a, b, c)\n\\end{equation}\nThe above score captures the overall bias of the model between two classes. If the bias score is larger than some pre-defined threshold, we report potential {\\em bias errors}.\n\nNote that, even when the two classes $a$ and $b$ are not confused by the model, \\hbox{\\emph{i.e.}}\\xspace $\\Delta(a,b) > conf\\_th$, they can still show bias \\hbox{\\emph{w.r.t.}}\\xspace another class, say $c$, if $\\Delta(a,c)$ is very different from $\\Delta(b,c)$. Thus, bias and confusion are two separate types of class-level errors that we intend to study in this work.\n\n\nUsing the above equations, we develop a novel testing tool, DeepInspect\\xspace, to inspect a DNN implementing image classification tasks and look for potential confusion and bias errors. 
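To make the above definitions concrete, the following minimal sketch (in Python with NumPy) shows one way the neuron activation probability matrix $\\rho$, \\textsc{napvd}\\xspace and the bias scores could be computed from per-image neuron outputs. The array and function names below (\\texttt{outputs}, \\texttt{predicted\\_classes}, \\hbox{\\emph{etc.}}\\xspace) are illustrative assumptions for the single-label case rather than DeepInspect\\xspace's actual implementation.

\\begin{verbatim}
# Illustrative sketch (not DeepInspect's actual code): computing rho,
# NAPVD (Eq. 2) and the bias scores (Eqs. 3-4) from raw neuron outputs.
import numpy as np

def activation_probability_matrix(outputs, predicted_classes, n_classes, th=0.5):
    # outputs: (n_images, n_neurons) raw neuron outputs
    # predicted_classes: (n_images,) predicted label per image (single-label case)
    active = outputs > th                         # binarise with threshold Th
    rho = np.zeros((outputs.shape[1], n_classes))
    for c in range(n_classes):
        members = active[predicted_classes == c]  # images predicted as class c
        if len(members) > 0:
            rho[:, c] = members.mean(axis=0)      # activation frequency per neuron
    return rho

def napvd(rho, a, b):
    # Euclidean distance between the two class columns of rho
    return np.linalg.norm(rho[:, a] - rho[:, b])

def bias(rho, a, b, c):
    # disparity of classes a and b w.r.t. a third class c
    d_a, d_b = napvd(rho, c, a), napvd(rho, c, b)
    return abs(d_a - d_b) / (d_a + d_b)

def avg_bias(rho, a, b):
    # average bias over all remaining classes
    others = [c for c in range(rho.shape[1]) if c not in (a, b)]
    return np.mean([bias(rho, a, b, c) for c in others])
\\end{verbatim}

Class pairs whose \\textsc{napvd}\\xspace falls below conf\\_th would then be flagged as potentially confused, and pairs with a large \\textrm{avg\\_bias}\\xspace as potentially biased.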
We implemented DeepInspect\\xspace in the Pytorch deep learning framework and Python 2.7. All our experiments were run on Ubuntu 18.04.2 with two TITAN Xp GPUs. For all of our experiments, we set the activation threshold $Th$ to be 0.5 for all datasets and models. We discuss why we choose 0.5 as neuron activation threshold and how different thresholds affect our performance in the section \\ref{sec:discussion}.\n\n\n\n\\section{Experimental Design}\n\\label{sec:experiment}\n\n\\subsection{Study Subjects}\n\\label{sec:subj}\n\nWe apply DeepInspect\\xspace for both multi-label and single-label DNN-based classifications. Under different settings, DeepInspect\\xspace automatically inspects 8 DNN models for 6 datasets. \nTable~\\ref{tab:subj} summarizes our study subjects. All the models we used are standard, widely-used models for each dataset.\nWe used pre-trained models as shown in the Table for all settings except for COCO~ with gender. For COCO~ with gender model, we used the gender labels from \\cite{zhao2017men} and trained the model in the same way as \\cite{zhao2017men}. imSitu model is a pre-trained ResNet-34 model \\cite{yatskar2016}. There are in total 11,538 entities and 1,788 roles in the imSitu dataset. When inspecting a model trained using imSitu, we only considered the top 100 frequent entities or roles in the test dataset.\n\n\\input{tables\/subject.tex}\n\nAmong the 8 DNN models, three are pre-trained relatively more robust models that are trained using adversarial images along with regular images. These models are pre-trained by provably robust training approach proposed by~\\cite{NIPS2018_8060}. Three models with different network structures are trained using the CIFAR10 dataset~\\cite{NIPS2018_8060}.\n\n\\subsection{Constructing Ground Truth (GT) Errors}\n\\label{sec:gt}\n\nTo collect the ground truth for evaluating DeepInspect\\xspace, \n we refer to the test images misclassified by a given model. We then aggregate these misclassified image instances by their real and predicted class-labels and estimate pair-wise confusion\/bias. \n\n\\subsubsection{GT of Confusion Errors}\n\\label{sec:gt_conf} \nConfusion occurs when a DNN often makes mistakes in disambiguating \nmembers of two different classes. In particular, if a DNN is confused between two classes, the classification error rate is higher between those two classes than between the rest of the class-pairs. Based on this, we define two types of confusion errors for single-label classification and multi-label classification separately:\n\n\\textit{Type1 confusions\\xspace}: In single-label classification, Type1 confusion occurs when an object of class $x$ (\\hbox{\\emph{e.g.,}}\\xspace {\\tt violin}) is misclassified to another class $y$ (\\hbox{\\emph{e.g.,}}\\xspace {\\tt cello}). \nFor all the objects of class $x$ and $y$, it can be quantified as: $\\typeaconf(x, y) = \\mean(\\prob(x| y), \\prob(y| x))$ \\textemdash DNN's probability to misclassify class $y$ as $x$ and vice-versa, and takes the average value between the two. For example, given two classes {\\tt cello} and {\\tt violin}, $\\typeaconf$ estimates the mean probability of {\\tt violin} misclassified to {\\tt cello} and vice versa. Note that, this is a bi-directional score, \\hbox{\\emph{i.e.}}\\xspace misclassification of $y$ as $x$ is the same as misclassification of $x$ as $y$. 
\n\n\\textit{Type2 confusions\\xspace}: In multi-label classification, Type2 confusion occurs when an input image contains an object of class $x$ (\\hbox{\\emph{e.g.,}}\\xspace {\\tt mouse}) and no object of class $y$ (\\hbox{\\emph{e.g.,}}\\xspace {\\tt keyboard}), but the model predicts both classes (see~\\Cref{fig:coco_confusion_bugs}. For a pair of classes, this can be quantified as: $\\typebconf(x, y) = \\mean(\\prob(( x, y)| x), \\prob(( x, y)| y))$ to compute the probability to detect two objects in the presence of only one. For example, given two classes {\\tt keyboard} and {\\tt mouse}, $\\typebconf$ estimates the mean probability of {\\tt mouse} being predicted while predicting {\\tt keyboard} and vice versa. \nThis is also a bi-directional score. \n \nWe measure $\\typeaconf$ and $\\typebconf$ by using a DNN's {\\em true classification error} measured on a set of test images. \nThey create the DNN's true confusion characteristics between all possible class-pairs. We then draw the distributions of $\\typeaconf$ and $\\typebconf$. For example, ~\\Cref{fig:confusion_dist} shows $\\typebconf$ distribution for COCO~. The class-pairs with confusion scores greater than $1$ standard deviation from the mean-value are marked as pairs truly confused by the model and form our ground truth for confusion errors. For example, in the COCO~ dataset, there are 80 classes and thus 3160 class pairs (80*79\/2); 178 class-pairs are ground-truth confusion errors.\n\n\\begin{figure\n\\centering\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{figures\/type2_distribution_coco_2-eps-converted-to.pdf}\n \n \\caption{Confusions distribution}\n \\label{fig:confusion_dist}\n \\end{subfigure}\n ~\n \n \\begin{subfigure}[b]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{figures\/distance_coco_2-eps-converted-to.pdf}\n \\caption{NAPVD distribution}\n \\label{fig:distance_dist}\n \\end{subfigure}\n \n\\caption{\\small{\\textbf{Identifying Type2 confusions\\xspace for multi-classification applications. LHS shows how we marked the ground truth errors based on Type2 confusion score. RHS shows DeepInspect\\xspace's predicted errors based on NAPVD score.}}\n}\n\\label{fig:confusion_distribution}\n\\end{figure}\n\nNote that, unlike how a bug\/error is defined in traditional software engineering, our suspicious confusion pairs have an inherent probabilistic nature. For example, even if $a$ and $b$ represent a confusion pair, it does not mean that all the images containing $a$ or $b$ will be misclassified by the model. Rather, it means that compared with other pairs, images containing $a$ or $b$ tend to have a higher chance to be misclassified by the model. \n\n\n\\subsubsection{GT of Bias Errors}\n\\label{sec:gt_bias} \nA DNN model is {\\em biased} if it treats two classes differently. \nFor example, consider three classes: {\\tt man}, {\\tt woman}, and {\\tt surfboard}. An unbiased model should not have different error rates while classifying {\\tt man} or {\\tt woman} in the presence of {\\tt surfboard}. \nTo measure such bias formally, we define \\textbf{confusion disparity} ($\\textrm{cd}\\xspace$) to measure differences in error rate between classes $x$ and $z$ and between $y$ and $z$: $\\textrm{cd}\\xspace(x, y, z) = |error(x, z) - error(y, z)|$, where the error measure can be either $\\typeaconf$ or $\\typebconf$ as defined earlier. 
$\\textrm{cd}\\xspace$ essentially estimates the disparity of the model's error between classes $x$, $y$ (\\hbox{\\emph{e.g.,}}\\xspace~{\\tt man}, {\\tt woman}) \\hbox{\\emph{w.r.t.}}\\xspace a third class $z$ (\\hbox{\\emph{e.g.,}}\\xspace~{\\tt surfboard}).\n\nWe also define an aggregated measure \\textbf{average confusion disparity} (\\textbf{avg\\_cd}) between two classes $x$ and $y$ by summing up the confusion disparity between them over all third classes and taking the average:\n\\[\n \\textrm{avg\\_cd}\\xspace(x, y) := \\frac{1}{|O|-2} \\sum_{z \\in O, z \\neq x, y} \\textrm{cd}\\xspace(x, y, z).\n\\]\nDepending on the error type used to estimate \\textrm{avg\\_cd}\\xspace, we refer to it as $Type1$\\_\\textrm{avg\\_cd}\\xspace or $Type2$\\_\\textrm{avg\\_cd}\\xspace. \nWe measure \\textrm{avg\\_cd}\\xspace using the true classification error rate reported for the test images. Similar to confusion errors, we draw the distribution of \\textrm{avg\\_cd}\\xspace for all possible class pairs and then consider the pairs as {\\em truly biased} if their \\textrm{avg\\_cd}\\xspace score is higher than one standard deviation from the mean value. Such truly biased pairs form our ground truth for bias errors. \n\n\\subsection{Evaluating DeepInspect\\xspace}\n\\label{sec:eval_metric}\n\nWe evaluate DeepInspect\\xspace using a set of test images.\n\n\\noindent\n\\textbf{Error Reporting.}\nDeepInspect\\xspace reports confusion errors based on NAPVD (see~\\Cref{eq:conf}) scores\\textemdash lower NAPVD indicates errors.\nWe draw the distributions of NAPVDs for all possible class pairs, as shown in~\\Cref{fig:distance_dist}. \nClass pairs having NAPVD scores lower than $1$ standard deviation from the mean score are marked as potential confusion errors. \n\nAs discussed in~\\Cref{sec:bias}, DeepInspect\\xspace reports bias errors based on the \\textrm{avg\\_bias}\\xspace score (see~\\Cref{eq:abgbias}), where a higher \\textrm{avg\\_bias}\\xspace means class pairs are more prone to bias errors.\nSimilar to above, from the distribution of \\textrm{avg\\_bias}\\xspace scores, DeepInspect\\xspace predicts pairs with \\textrm{avg\\_bias}\\xspace greater than $1$ \\textrm{standard deviation} from the mean score to be erroneous. \nNote that, while calculating the error disparity between classes $a$, $b$ \\hbox{\\emph{w.r.t.}}\\xspace $c$ (see \\Cref{eq:bias}), if both $a$ and $b$ are far from $c$ in the embedded space $\\rho$, the disparity of their distances ($\\Delta$) should not reflect true bias. Thus, while calculating $\\textrm{avg\\_bias}\\xspace(a,b)$ we further filter out the triplets where $\\Delta (c,a)>th \\land \\Delta (c,b)>th$, where $th$ is some pre-defined threshold. In our experiment, we remove all the class-pairs having $\\Delta$ larger than $1$ standard deviation (\\hbox{\\emph{i.e.}}\\xspace $th$) from the mean value of all $\\Delta$s across all the class-pairs. \n\n\\noindent\n\\textbf{Evaluation Metric.}\nWe evaluate DeepInspect\\xspace in two ways:\n\n\\noindent\n\\textbf{Precision \\& Recall.}\nWe use precision and recall to measure DeepInspect\\xspace's accuracy. \nFor each error type $t$, suppose that $E$ is the set of errors detected by DeepInspect\\xspace and $A$ is the set of true errors in the ground truth set. 
Then the precision and recall of DeepInspect\\xspace are $\\frac{|A\\cap E|}{|E|}$ and $\\frac{|A\\cap E|}{|A|}$ respectively.\n\n\\noindent\n\\textbf{Area Under the Cost-Effectiveness Curve (AUCEC).}\nSimilar to how static analysis warnings are ranked based on their priority levels~\\cite{rahman2013and}, we rank the erroneous class-pairs identified by DeepInspect\\xspace in decreasing order of error-proneness, \\hbox{\\emph{i.e.}}\\xspace the most error-prone pairs will be at the top. \nTo evaluate the ranking we use a cost-effectiveness measure~\\cite{Arisholm2010Systematic}, AUCEC (Area Under the Cost-Effectiveness Curve), which has become standard to evaluate rank-based bug-prediction systems~\\cite{rahman2013sample,kamei2010revisiting,rahman2011bugcache, rahman2013and, ray2016naturalness}.\n\nCost-effectiveness evaluates how many true errors are found (\\hbox{\\emph{i.e.}}\\xspace effectiveness) when we inspect\/test the top n\\% of class-pairs in the ranked list (\\hbox{\\emph{i.e.}}\\xspace inspection cost). \nBoth cost and effectiveness are normalized to 100\\%. \n~\\Cref{fig:confusion-ce} shows cost on the x-axis and effectiveness on the y-axis, indicating the portion of the ground truth errors found. AUCEC is the area under this curve. \n\n\\noindent\n\\textbf{Baseline.} \n\\label{sec:baseline}\nWe compare DeepInspect\\xspace \\hbox{\\emph{w.r.t.}}\\xspace two baselines:\n\n\\noindent\n(i) MODE-inspired: A popular way to inspect each image is to inspect a feature vector, which is an output of an intermediate layer~\\cite{Ma:2018:MAN:3236024.3236082,zhang2018deeproad}. \nHowever, abstracting a feature vector per image to the class level is non-trivial.\nInstead, for a given layer, one could inspect the weight vector ($w_l = [w^0_l, w^1_l, ..., w^n_l]$) of a class, say $l$, where the superscripts represent features. Similar weight-vectors are used in MODE~\\cite{Ma:2018:MAN:3236024.3236082} to compare the difference in feature importance between two image groups. \nIn particular, from the last linear layer before the output layer we extract such per-class weight vectors and compute the pairwise distances between the weight vectors. Using these pairwise distances we calculate the confusion and bias metrics as described in Section~\\ref{sec:method}. \n \n\\noindent\n(ii) Random: We also build a random model that picks random class-pairs for inspection~\\cite{witten2005data} as a baseline. \n\nFor the AUCEC evaluation, we further show the performance of an optimal model that ranks the class-pairs perfectly\\textemdash if $n$\\% of all the class-pairs are truly erroneous, the optimal model would rank them at the top such that, with a lower inspection budget, most of the errors will be detected. The optimal curve gives an upper bound for any ranking scheme. \n\n\\noindent\n\\textbf{Research Questions.}\nWith this experimental setting, we investigate the following three research questions to evaluate DeepInspect\\xspace for DNN image classifiers: \n\n\\begin{itemize}[leftmargin=*]\n \\item \n \\textbf{RQ1.}\n Can DeepInspect\\xspace distinguish between different classes?\n \\item\n \\textbf{RQ2.}~Can DeepInspect\\xspace identify the confusion errors?\n \\item \n \\textbf{RQ3.}~Can DeepInspect\\xspace identify the bias errors?\n\\end{itemize}\n\n\\section{Results}\n\\label{sec:result}\n\nWe begin our investigation by checking whether de-facto neuron coverage-based metrics can capture class separation. 
\n \n\\smallskip\n\\RQ{1}{Can \\tool distinguish between different classes?~}\n\\label{sec:rqa}\n\n\\noindent\n\\textbf{Motivation.} The heart of DeepInspect\\xspace's error detection technique lies in the fact that the underlying Neuron Activation Probability metric ($\\rho$) captures each class abstraction reasonably well and thus distinguishes between classes that do not suffer from class-level violations. In this RQ we check whether this is indeed true. We also check whether a new metric $\\rho$ is necessary, \\hbox{\\emph{i.e.}}\\xspace, whether existing neuron-coverage metrics could capture such class separations.\n\n\\noindent\n\\textbf{Approach.} We evaluate this RQ \\hbox{\\emph{w.r.t.}}\\xspace the training data since the DNN behaviors are not tainted with inaccuracies associated with the test images. Thus, all the class-pairs are benign. We evaluate this RQ in three settings: (i) using DeepInspect\\xspace's metrics, (ii) neuron-coverage proposed by Pei~\\hbox{\\emph{et al.}}\\xspace~\\cite{pei2017deepxplore}, and (iii) other neuron-activation related metrics proposed by DeepGauge~\\cite{ma2018deepgauge}. \n\n\\noindent\n\\textbf{Setting-1. DeepInspect\\xspace.}\nOur metric, Neuron Activation Probability Matrix\\xspace ($\\rho$), by construction is designed per class. \nHence it would be unfair to directly measure its capability to distinguish between different classes. Thus, we pose this question in slightly a different way, as described below. \nFor multi-label classification, each image contains multiple class-labels. \nFor example, an image might have labels for both {\\tt mouse} and {\\tt keyboard}. \nSuch coincidence of labels may create confusion\\textemdash if two labels always appear together in the ground truth set, no classifier can distinguish between them. \nTo check how many times two labels coincide, \nwe define a coincidence score between two labels $L_a$ and $L_b$ as: \n$\n\\displaystyle\ncoincidence\\left(L_a, L_b\\right) = \nmean(P\\left(L_a, L_b|L_a\\right),P\\left(L_a, L_b|L_b\\right))\n$.\n\nThe above formula computes the minimum probability of labels $L_a$ and $L_b$ occurring together in an image given that one of them is present. \nNote that this is a bi-directional score, \\hbox{\\emph{i.e.}}\\xspace we treat the two labels similarly. \nThe $mean$ operation ensures we detect the least coincidence in either direction. A low value of coincidence score indicates two class-labels are easy to separate and vice versa. \n\nNow, to check DeepInspect\\xspace's capability to capture class separation, we simply check the correlation between \ncoincidence score and confusion score (\\textsc{napvd}\\xspace)\nfrom Equation~\\ref{eq:conf} for all possible class-label pairs. \nSince only multi-label objects can have label coincidences, we perform this experiment for a pre-trained ResNet-50 model on the COCO~ multi-label classification task. \n\nA Spearman correlation coefficient between the confusion and coincidence scores reaches a value as high as 0.96, showing strong statistical significance. The result indicates that DeepInspect\\xspace can disambiguate most of the classes that have a low confusion scores. \n\n\n\n\nInterestingly, we found some pairs where coincidence score is high, but DeepInspect\\xspace was able to isolate them. For example, ({\\tt cup,chair}), ({\\tt toilet,sink}), \\hbox{\\emph{etc.}}\\xspace. 
Manually investigating such cases reveals that although these pairs often appear together in the input images, there are also enough instances where they appear by themselves. Thus, DeepInspect\\xspace disambiguates between these classes and puts them apart in the embedded space $\\rho$. \nThese results indicate DeepInspect\\xspace can also learn some hidden patterns from the context and, thus, can go beyond inspecting training-data coincidence to evaluate model bias\/confusion, which is the de facto technique among machine learning researchers~\\cite{zhao2017men}. \n\n\\begin{figure}[!phtb]\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{figures\/coco_rq1.pdf}\n\\caption{\\textbf{\\small Distribution of neuron coverage per class label, for 10 randomly picked class labels, from the COCO~ dataset. \n}}\n\\label{fig:coverage}\n\\end{figure}\n\nNext, we investigate whether popular white-box metrics can distinguish between different classes. \n\n\\noindent\n\\textbf{Setting-2. Neuron Coverage ($NC$)}~\\cite{pei2017deepxplore}\ncomputes the ratio of the union of neurons activated by an input set and the total number of neurons in a DNN. \nHere we compute $NC$ per class-label, \\hbox{\\emph{i.e.}}\\xspace for a given class-label, we measure the number of neurons activated by the images tagged with that label \\hbox{\\emph{w.r.t.}}\\xspace the total number of neurons. The activation threshold we use is 0.5. \nWe perform this experiment on COCO~ and CIFAR-100~ to study multi- and single-label classifications. \nFigure~\\ref{fig:coverage} shows the results for COCO~. We observe similar results for CIFAR-100~.\n\nEach boxplot in the figure shows the distribution of neuron coverage per class-label across all the relevant images. \nThese boxplots visually show that {\\em different labels} have very {\\em similar $NC$} distributions. \nWe further compare these distributions using the \\mbox{Kruskal-Wallis test}~\\cite{kruskaltest}, which is a non-parametric way of comparing more than two groups. Note that we choose a non-parametric measure as $NC$s may not follow normal distributions. (The Kruskal-Wallis test is a non-parametric equivalent of the one-way analysis of variance (ANOVA).) The test reports a $p$-value $\\ll 0.05$, \\hbox{\\emph{i.e.}}\\xspace some differences exist across these distributions. \nHowever, a pairwise Cohen's $d$ effect size for each class-label pair, as shown in the following table, shows that more than 56\\% and 78\\% of class-pairs for CIFAR-100~ and COCO~, respectively, have small to negligible effect sizes. \nThis means neuron coverage cannot reliably distinguish a majority of the class-labels.\n\n\\begin{center}\n{\\scriptsize\n \\begin{tabular}{l|rrrr}\n \\toprule\n \\multicolumn{5}{c}{ Effect size of neuron coverage across different classes } \\\\\n \\midrule\n Exp Setting & negligible & {small} & {medium} & {large} \\\\\n \\midrule\n COCO~ & 40.51\\% & 38.19\\% & 16.96\\% & 4.34\\% \\\\\n CIFAR-100~ & 31.94\\% & 25.69\\% & 23.87\\% & 18.48\\% \\\\\n \\bottomrule\n \\end{tabular}%\n}\n\\end{center}\n\n\\noindent\n\\textbf{Setting-3.~DeepGauge~\\cite{ma2018deepgauge}.}\nMa \\hbox{\\emph{et al.}}\\xspace~\\cite{ma2018deepgauge} argue that each neuron has a primary region of operation; they identify this region by using a boundary condition $[{low},{high}]$ on its output during training time; outputs outside this region ($(-\\infty, {low}) \\cup ({high}, +\\infty)$) are marked as corner cases. They therefore introduce multi-granular neuron- and layer-level coverage criteria. 
For neuron coverage they propose: (i) {\\em k-multisection coverage} to evaluate how thoroughly the primary region of a neuron is covered, (ii) {\\em boundary coverage} to compute how many corner cases are covered, and (iii) {\\em strong neuron activation coverage} to measure how many corner case regions are covered in (${high}, +\\infty$) region. For layer-level coverage, they define (iv) {\\em top-k neuron coverage} to identify the most active k-neurons for each layer, and (v) {\\em top-k neuron pattern} for each test-case to find a sequence of neurons from the top-k most active neurons across each layer. \n\n\nWe investigate whether each of these metrics can distinguish between different classes by measuring the above metrics for individual input classes following Ma \\hbox{\\emph{et al.}}\\xspace's methodology. \nWe first profiled every neuron upper- and lower-bound for each class using the training images containing that class-label. \nNext, we computed per-class neuron coverage using test images containing that class; for k-multisection coverage we chose $k=100$ to scale up the analysis. It should be noted that we also tried $k=1000$ (which is used in the original DeepGauge paper) and observed similar results (not shown here).\n\n\\begin{figure}[!phtb]\n\\centering\n\\scriptsize\n \\includegraphics[width=0.8\\columnwidth]{figures\/coco_rq1-3.pdf}\n\n\\caption{\\textbf{\\small{Histogram of DeepGauge~\\cite{ma2018deepgauge} multi-granular coverage per class label for COCO~ dataset}}\n}\n\\label{fig:dg_coco}\n\\label{fig:dg}\n\\end{figure}\n\nFor layer-level coverage, we directly used the input images containing each class, where we select $k=1$.\n\n\\Cref{fig:dg_coco} shows the results as a histogram of the above five coverage criteria for the COCO~ dataset. For all five coverage criteria, there are many class-labels that share similar coverage. For example, in COCO~, there are $52$ labels with k-multisection neuron coverage with values between $0.31$ and $0.32$. \nSimilarly, there are $40$ labels with 0 neuron boundary coverage.\nTherefore, none of the five coverage criteria are an effective way to distinguish between different equivalence classes. The same conclusion was drawn for the CIFAR-100~ dataset.\n\n\n\\RS{1}{DeepInspect\\xspace can disambiguate classes better than previous coverage-based metrics for the image classification task.}\n\n\n\nWe now investigate DeepInspect\\xspace's capability in detecting confusion and bias errors in DNN models. \n\n\n\\input{tables\/precision-recall.tex}\n\n\n\\RQ{2}{Can \\tool identify the confusion errors?~}\n\\label{sec:rq2}\n\n\\noindent\n\\textbf{Motivation.} To evaluate how well DeepInspect\\xspace can detect class-level violations, in this RQ, we report DeepInspect\\xspace's ability to detect the first type of violation, \\hbox{\\emph{i.e.}}\\xspace, Type1\/Type2 confusions \\hbox{\\emph{w.r.t.}}\\xspace to ground truth confusion errors, as described in~\\Cref{sec:gt_conf}. 
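As a rough illustration of the evaluation pipeline (error reporting at the mean-minus-one-standard-deviation \\textsc{napvd}\\xspace cutoff, followed by the precision\/recall computation of Section \\ref{sec:eval_metric}), a minimal sketch is given below. It assumes the \\texttt{rho} matrix from the earlier sketch and a ground-truth set of truly confused class pairs; the helper names are illustrative assumptions and not DeepInspect\\xspace's actual API.

\\begin{verbatim}
# Illustrative sketch of error reporting and precision/recall
# (assumed helper names; not the actual DeepInspect code).
import numpy as np
from itertools import combinations

def flag_confused_pairs(rho):
    # flag class pairs whose NAPVD is more than 1 std below the mean NAPVD
    pairs = list(combinations(range(rho.shape[1]), 2))
    dists = np.array([np.linalg.norm(rho[:, a] - rho[:, b]) for a, b in pairs])
    cutoff = dists.mean() - dists.std()
    return {p for p, d in zip(pairs, dists) if d < cutoff}

def precision_recall(reported, ground_truth):
    # E = reported pairs, A = ground-truth erroneous pairs
    tp = len(reported & ground_truth)
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
\\end{verbatim}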
\n\n\n\\begin{figure}[h]\n\\centering\n \\begin{subfigure}[b]{0.22\\textwidth}\n \\includegraphics[width=\\linewidth]{figures\/coco_correlation-eps-converted-to.pdf}\n \n \\caption{COCO~ dataset + ResNet-50}\n \\label{fig:type2a}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.22\\textwidth}\n \\includegraphics[width=\\linewidth]{figures\/cifar10s_correlation-eps-converted-to.pdf}\n \n \\caption{ Robust CIFAR-10 Small}\n \\label{fig:type1a}\n \\end{subfigure}\n \n\\caption{\\textbf{\\small{Strong negative Spearman correlation (-0.55 and -0.86) between \\textsc{napvd}\\xspace and ground truth confusion scores.\n}}}\n\\label{fig:rq2-corr}\n\\end{figure} \n\n\nWe first explore the correlation between \\textsc{napvd}\\xspace and ground truth Type1\/Type2 confusion score. Strong correlation has been found for all 8 experimental settings. ~\\Cref{fig:rq2-corr} gives examples on COCO and CIFAR-10. These results indicate that \\textsc{napvd}\\xspace can be used to detect confusion errors\\textemdash lower \\textsc{napvd}\\xspace means more confusion.\n\n\n\\input{tables\/rq2}\n\\noindent\n\\textbf{Approach.} \nBy default, DeepInspect\\xspace reports all the class-pairs with \\textsc{napvd}\\xspace scores one standard deviation less than the mean \\textsc{napvd}\\xspace score as error-prone (See~\\Cref{fig:distance_dist}). In this setting, as the result shown on ~\\Cref{tab:confusion_accuracy}, DeepInspect\\xspace reports errors at high recall under most settings. Specifically, on CIFAR-100 and robust CIFAR-10 ResNet, DeepInspect\\xspace can report errors as high as 71.8\\%, and 100\\%, respectively. \nDeepInspect\\xspace has identified thousands of confusion errors. \n\n\n\nIf higher precision is wanted, a user can choose to inspect only a small set of confused pairs based on \\textsc{napvd}\\xspace. As also shown in~\\Cref{tab:confusion_accuracy}, when only the top1\\% confusion errors are reported, a much higher precision is achieved for all the datasets. In particular, DeepInspect\\xspace identifies 31 and 39 confusion errors for the COCO model and the CIFAR-100 model with 100\\% and 79.6\\% precision, respectively. The trade-off between precision and recall can be found on the cost-effective curves shown on ~\\Cref{fig:confusion-ce}, which show overall performance of DeepInspect\\xspace at different inspection cutoffs. Overall, \\hbox{\\emph{w.r.t.}}\\xspace a random baseline mode, DeepInspect\\xspace is gaining AUCEC performance from $61.6\\%$ to $85.7\\%$; \\hbox{\\emph{w.r.t.}}\\xspace a MODE baseline mode, DeepInspect\\xspace is gaining AUCEC performance from $10.2\\%$ to $28.2\\%$. \n\n\n\n\n\\begin{figure}[!phtb]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/confusion_std.pdf}\n\\caption{\\textbf{\\small AUCEC plot of Type1\/Type2 Confusion errors in three different settings. The \\red{red} vertical line marks 1-standard deviation less from mean \\textsc{napvd}\\xspace score. DeepInspect\\xspace marks all class-pairs with \\textsc{napvd}\\xspace scores less than the red mark as potential errors.}}\n\\label{fig:confusion-ce}\n\\end{figure}\n\n\n\nFigure~\\ref{fig:coco_confusion_bugs} and Figure~\\ref{fig:imagenet_confusion_bugs} give some specific confusion errors found by DeepInspect\\xspace in the COCO~ and the ImageNet settings. In particular, as shown in Figure~\\ref{fig:confusionbugs1}, when there is only a keyboard but no mouse in the image, the COCO model reports both. Similarly, Figure~\\ref{fig:confusionbugs4} shows confusion errors on (cello, violin). 
There are several cellos in this image, but the model predicts it to show a violin. \n\n\n\n\n\n\n\\begin{figure\n\\centering\n\n \\begin{subfigure}[b]{0.40\\linewidth}\n \\includegraphics[width=\\linewidth]{{\"figures\/COCO_val2014_000000061034_no_mouse\"}.jpg}\n \\caption{\\small(keyboard,mouse)}\n \\label{fig:confusionbugs1}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.40\\linewidth}\n \\includegraphics[width=\\linewidth]{{\"figures\/COCO_val2014_000000044294_oven-_microwave\"}.jpg}\n \\caption{\\small (oven,microwave)}\n \\label{fig:confusionbugs2}\n \\end{subfigure}\n \n \n \n \n \n \n \n \n\\caption{\\textbf{\\small Confusion errors identified in COCO~ model. In each pair the second object is mistakenly identified by the model.}}\n\\label{fig:coco_confusion_bugs}\n\\vspace{-0.2cm}\n\\end{figure}\n\n\n\n\n\n\\begin{figure}[!htpb]\n\\centering\n \\begin{subfigure}[b]{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{{\"figures\/ILSVRC2012_val_00035600\"}.JPEG}\n \\caption{\\small (cello, violin)}\n \\label{fig:confusionbugs4}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.4\\linewidth}\n \\includegraphics[width=\\linewidth]{{\"figures\/ILSVRC2012_val_00018849\"}.JPEG}\n \\caption{\\small (library, bookshop)}\n \\label{fig:confusionbugs5}\n \\end{subfigure}\n \n\\caption{\\textbf{\\small Confusion errors identified in the ImageNet model. For each pair, the second object is mistakenly identified by the model.}\n}\n\\label{fig:imagenet_confusion_bugs}\n\\end{figure}\n\n\n\n\n\n\n\n \n\n\n\nAcross all three relatively more robust \nCIFAR-10 models DeepInspect\\xspace identifies (cat, dog), (bird, deer) and (automobile, truck) as buggy pairs, where one class is very likely to be mistakenly classified as the other class of the pair. \nThis indicates that these confusion errors are to be tied to the training data, so all the models trained on this dataset including the robust models may have these errors. These results further show that the confusion errors are orthogonal to the norm-based adversarial perturbations and we need a different technique to address them. \n\n\nWe also note that the performance of all methods degrades quite a bit on ImageNet. ImageNet is known to have a complex structure, and all the tasks, including image classification and robust image classification \\cite{Xie_2019_CVPR} usually have inferior performance compared with simpler datasets like CIFAR-10 or CIFAR-100. Due to such inherent complexity, the class representation in the embedded space is less accurate, and thus the relative distance between two classes may not correctly reflect a model's confusion level between two classes.\n\n\\RS{2}{DeepInspect\\xspace can successfully find confusion errors with precision 21\\% to 100\\% at top1\\% for both single- and multi-object classification tasks. 
\nDeepInspect\\xspace also finds confusion errors in robust models.}\n\n\n\n\\RQ{3}{Can DeepInspect\\xspace identify bias errors?~}\n\n\\begin{figure}[!htpb]\n\\centering\n\\begin{subfigure}[b]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/coco,predicted,type2.pdf}\n \\caption{COCO~}\n \\label{fig:bias_coco_predicted_type2}\n \\end{subfigure}\n\\begin{subfigure}[b]{0.23\\textwidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/cifar100,predicted,type1.pdf}\n \\caption{CIFAR-100}\n \\label{fig:bias_coco_predicted_type1}\n \\end{subfigure}\n \n\\caption{\\textbf{\\small Strong positive Spearman's correlation (0.76 and 0.62) exists between \\textrm{avg\\_cd}\\xspace and \\textrm{avg\\_bias}\\xspace when \ndetecting classification bias.}\n}\n\\label{fig:bias_coco_predicted}\n\\end{figure}\n\n\\noindent\n\\textbf{Motivation.} To assess DeepInspect\\xspace's ability to detect class-level violations, in this RQ, we report DeepInspect\\xspace's performance in detecting the second type of violation, \\hbox{\\emph{i.e.}}\\xspace, bias errors, as described in~\\Cref{sec:gt_bias}.\n\n\\noindent\n\\textbf{Approach.} \nWe evaluate this RQ by estimating a model's bias (\\textrm{avg\\_bias}\\xspace) using~\\Cref{eq:abgbias} \\hbox{\\emph{w.r.t.}}\\xspace the ground truth (\\textrm{avg\\_cd}\\xspace), computed as in~\\Cref{sec:gt_bias}. We first explore the correlation between pairwise \\textrm{avg\\_cd}\\xspace and our proposed pairwise \\textrm{avg\\_bias}\\xspace; \n~\\Cref{fig:bias_coco_predicted} shows the results for COCO~ and CIFAR-100.\nSimilar trends \nwere found in the other datasets we studied. \nThe results show that a strong correlation exists between \\textrm{avg\\_cd}\\xspace and \\textrm{avg\\_bias}\\xspace. \nIn other words, our proposed \\textrm{avg\\_bias}\\xspace is a good proxy for \\textrm{avg\\_cd}\\xspace when detecting bias errors.\n\n\\input{tables\/rq3}\n\nAs in RQ2, we also perform a precision-recall analysis \\hbox{\\emph{w.r.t.}}\\xspace finding the bias errors across all the datasets.\nWe analyze the precision and recall of DeepInspect\\xspace when reporting bias errors at the cutoffs Top1$\\%$(\\textrm{avg\\_bias}\\xspace) and mean(\\textrm{avg\\_bias}\\xspace)+standard deviation(\\textrm{avg\\_bias}\\xspace), respectively. The results are shown in Table \\ref{tab:bias_cutoffs}. At cutoff Top1$\\%$(\\textrm{avg\\_bias}\\xspace), DeepInspect\\xspace \ndetects suspicious pairs with precision as high as 75\\% and 84\\% for COCO~ and imSitu, respectively. At cutoff mean(\\textrm{avg\\_bias}\\xspace)+standard deviation(\\textrm{avg\\_bias}\\xspace), DeepInspect\\xspace has high recall but lower precision: DeepInspect\\xspace detects ground truth suspicious pairs with recall of 75.9\\% and 71.8\\% for COCO~ and imSitu, respectively. DeepInspect\\xspace reports 657 (=249+408) true bias errors in total across the two models. DeepInspect\\xspace outperforms the random baseline by a large margin at both cutoffs. As in the case of detecting confusion errors, there is a significant trade-off between precision and recall.\nThis trade-off can be customized based on user needs.\nThe cost-effectiveness analysis in ~\\Cref{fig:bias-ce} shows the entire spectrum. 
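\n\nTo make the reporting cutoffs concrete, the listing below gives a minimal sketch (not DeepInspect\\xspace's actual implementation) of how the two cutoffs used above, \\hbox{\\emph{i.e.}}\\xspace one standard deviation from the mean and top1\\%, can be applied to pairwise scores, together with the Spearman correlation check against the ground truth. All variable names and toy values are illustrative placeholders; recall that a lower \\textsc{napvd}\\xspace indicates more confusion, whereas a higher \\textrm{avg\\_bias}\\xspace indicates more bias.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef report_suspicious(scores, lower_is_worse=True, top_frac=0.01):\n    # Orient severity so that larger always means more suspicious.\n    severity = {k: (-v if lower_is_worse else v) for k, v in scores.items()}\n    vals = np.array(list(severity.values()))\n    # (i) one-standard-deviation rule.\n    one_std = {k for k, s in severity.items() if s > vals.mean() + vals.std()}\n    # (ii) top-1 percent rule (at least one pair is always reported).\n    k_top = max(1, int(round(top_frac * len(vals))))\n    ranked = sorted(severity, key=severity.get, reverse=True)\n    return one_std, set(ranked[:k_top])\n\ndef precision_recall(reported, ground_truth):\n    tp = len(reported & ground_truth)\n    return tp / max(len(reported), 1), tp / max(len(ground_truth), 1)\n\n# Toy NAPVD scores for five class pairs (lower = more confused).\nnapvd = {('keyboard', 'mouse'): 0.10, ('cello', 'violin'): 0.12,\n         ('cat', 'tree'): 0.90, ('dog', 'car'): 0.85, ('bus', 'train'): 0.80}\ngt_confused = {('keyboard', 'mouse'), ('cello', 'violin')}\none_std_pairs, top_pairs = report_suspicious(napvd, lower_is_worse=True)\nprint(precision_recall(one_std_pairs, gt_confused))\n\n# Spearman correlation between a proxy score (e.g. avg_bias) and the\n# ground-truth score (e.g. avg_cd), as in the correlation analysis above.\nrho, pval = spearmanr([0.9, 0.2, 0.4, 0.8], [0.7, 0.1, 0.5, 0.9])\nprint(rho, pval)\n\\end{verbatim}\n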
\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/bias_figures\/bias_top_1std.pdf}\n\\caption{\\textbf{\\small Bias errors detected \\hbox{\\emph{w.r.t.}}\\xspace the ground truth of \\textrm{avg\\_cd}\\xspace beyond one standard deviation from mean.}\n}\n\\label{fig:bias-ce}\n\\end{figure}\n\nAs shown in Figure~\\ref{fig:bias-ce}, DeepInspect\\xspace outperforms the baseline by a large margin. \nThe AUCEC gains of DeepInspect\\xspace are from $37.1\\%$ to $76.1\\%$ w.r.t. the random baseline and from $6.0\\%$ to $41.9\\%$ w.r.t. the MODE baseline across the 8 settings. DeepInspect\\xspace's performance is close to the optimal curve under some settings, specifically the AUCEC gains of the optimal over DeepInspect\\xspace are only 7.11\\% and 7.95\\% under the COCO~ and ImSitu settings, respectively.\n\nInspired by \\cite{zhao2017men}, which shows bias exists between men and women in COCO~ for the gender image captioning task, we analyze the most biased third class $c$ for $a$ and $b$ being men and women. \nAs shown in Figure \\ref{fig:fairness_coco_predicted}, we found that sports like skiing, snowboarding, and surfboarding are more closely associated with men and thus misleads the model to predict the women in the images as men. \nFigure \\ref{fig:fairness_imsitu_predicted} shows results on imSitu, where we found that the model tends to associate the class ``inside'' with women while associating the class ``outside'' with men.\n\n\n\n\\begin{figure}[!htpb]\n\\centering\n \\begin{subfigure}[b]{0.30\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/COCO_val2014_000000389273}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.30\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/COCO_val2014_000000493442}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.30\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/COCO_val2014_000000525899}\n \\end{subfigure}\n \n\\caption{\\textbf{\\small The model classifies the women in these pictures as men in the COCO~ dataset.}}\n\\label{fig:fairness_coco_predicted}\n\\end{figure}\n\n\n\nWe generalize the idea by choosing classes $a$ and $b$ to be any class-pair. We found that similar bias also exists in the single-label classification settings. For example, in ImageNet, one of the highest biases is between Eskimo\\_dog and rapeseed \\hbox{\\emph{w.r.t.}}\\xspace Siberian\\_husky. The model tends to confuse the two dogs but not Eskimo\\_dog and rapeseed. This makes sense since Eskimo\\_dog and Siberian\\_husk are both dogs so more easily misclassified by the model.\n\n\\begin{figure}[!htpb]\n\\centering\n \\begin{subfigure}[b]{0.25\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/slouching_23.jpg}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.25\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/bias_figures\/sweeping_99.jpg}\n \\end{subfigure}\n \n\\caption{\\textbf{\\small The model classifies the man in the first figure to be a woman and the woman in the second figure to be a man.}}\n\\label{fig:fairness_imsitu_predicted}\n\\end{figure}\n\nOne of the fairness violations of a DNN system can be drastic differences in accuracy across groups divided according to some sensitive feature(s). In black-box testing, the tester can get a number indicating the degree of fairness has been violated by feeding into the model a validation set. In contrast, DeepInspect\\xspace provides a new angle to the fairness violations. 
The neuron distance difference between two classes $a$ and $b$ \\hbox{\\emph{w.r.t.}}\\xspace a third class $c$ sheds light on why the model tends to be more likely to confuse between one of them and $c$ than the other. We leave a more comprehensive examination on interpreting bias\/fairness violations for future work.\n\n\\RS{3}{DeepInspect\\xspace can successfully find bias errors for both single- and multi-label classification tasks, and even for the robust models, from 52\\% to 84\\% precision at top1\\%.}\n\n\n\\section{Related Work}\n\\label{sec:rel}\n\n\\textbf{Software Testing \\& Verification of DNNs.}\nPrior research proposed different white-box testing criteria based on neuron coverage~\\cite{pei2017deepxplore,ma2018deepgauge,tian2017deeptest} and neuron-pair coverage~\\cite{sun2018testing}.\nSun \\hbox{\\emph{et al.}}\\xspace~\\cite{sun2018concolic} presented a concolic testing approach for DNNs called DeepConcolic. They showed that their concolic testing approach can effectively increase coverage and find adversarial examples. Odena and Goodfellow proposed TensorFuzz\\cite{pmlr-v97-odena19a}, which is a general tool that combines coverage-guided fuzzing with property-based testing to generate cases that violate a user-specified objective. It has applications like finding numerical errors in trained neural networks, exposing disagreements between neural networks and their quantized versions, surfacing broken loss functions in popular GitHub repositories, and making performance improvements to TensorFlow. \nThere are also efforts to verify DNNs~\\cite{pei2017towards,katz2017reluplex,huang2017safety,wang2018formal} against adversarial attacks. However, most of the verification efforts are limited to small DNNs and pixel-level properties.\nIt is not obvious how to directly apply these techniques to detect class-level violations. \n\n\\noindent\n{\\bf Adversarial Deep Learning.}\nDNNs are known to be vulnerable to well-crafted inputs called adversarial examples, where the discrepancies are imperceptible to a human but can easily make DNNs fail~\\cite{yuan2017adversarial,lu2017no, raghunathan2018certified, papernot2016cleverhans, evtimov2017robust, goodfellow2014explaining, kos2017adversarial, narodytska2016simple, nguyen2015deep, papernot2017practical, papernot2016limitations, szegedy2013intriguing, huang2017adversarial, kurakin2016adversarial}. \nMuch work has been done to defend against adversarial attacks~\\cite{bastani2016measuring, carlini2017towards, feinman2017detecting, grosse2017statistical, gu2014towards, metzen2017detecting, papernot2017extending, papernot2016distillation, shaham2015understanding, xu2017feature, zheng2016improving, he2017adversarial, metric_adv}.\nOur methods have potential to identify adversarial inputs.\nMoreover, adversarial examples are usually out of distribution data and not realistic, while we can find both out-distribution and in-distribution corner cases. \nFurther, we can identify a general weakness or bug rather than focusing on crafted attacks that often require a strong attacker model (\\hbox{\\emph{e.g.,}}\\xspace the attacker adds noise to a stop sign image).\n\n\n\n\n\n\n\n\\noindent\n{\\bf Interpreting DNNs.} \nThere has been much research on model interpretability and visualization~\\cite{lipton2016mythos, zhang2018visual, selvaraju2016grad,montavon2017methods,bau2017network,dong2017towards}. A comprehensive study is presented by Lipton ~\\cite{lipton2016mythos}. 
\nDong \\hbox{\\emph{et al.}}\\xspace ~\\cite{dong2017towards} observed that instead of learning the semantic features of whole objects, neurons tend to react to different parts of the objects in a recurrent manner. Our probabilistic way of looking at neuron activation per class aims to capture holistic behavior of \nan entire class instead of an individual object so diverse features of class members can be captured. \nClosest to ours is by Papernot \\hbox{\\emph{et al.}}\\xspace~\\cite{papernot2018deep}, who used nearest training points to explain adversarial attacks. \nIn comparison, we analyze the DNN's dependencies on the entire training\/testing data and represent it in Neuron Activation Probability Matrix\\xspace. We can explain the DNN's bias and weaknesses by inspecting this matrix.\n\n\n\n\\noindent\n\\textbf{Evaluating Models' Bias\/Fairness.}\nEvaluating the bias and fairness of a system is important both from a theoretical and a practical perspective~\\cite{luong2011k, ZemelICML, zafar2017fairness, Brun:2018:SF:3236024.3264838}. \nRelated studies first define a fairness criteria and then try to optimize the original objective while satisfying the fairness criteria ~\\cite{Dwork:2011, Hardt2016, barocas-hardt-narayanan, DBLP:conf\/fat\/MenonW18, DBLP:conf\/nips\/DoniniOBSP18, DBLP:journals\/corr\/abs-1901-10837}. These properties are defined either at individual~\\cite{Dwork:2011,Kusner,Kim} or group levels~\\cite{Calders,Hardt2016,ZafarWWW}. \nIn this work, we propose a definition of a bias error for image classification closely related to fairness notions at group-level. Class membership can be regarded as the sensitive feature and the equality that we want to achieve is for the confusion levels of two groups w.r.t. any third group. We showed the potential of DeepInspect\\xspace to detect such violations.\n\nGalhotra \\hbox{\\emph{et al.}}\\xspace~\\cite{galhotra2017fairness} first applied the notion of software testing to evaluating software fairness. \nThey mutate the sensitive features of the inputs and check whether the output changes. One major problem with their proposed method, Themis, is that it assumes the model takes into account sensitive attribute(s) during training and inference. This assumption is not realistic since most existing fairness-aware models drop input-sensitive feature(s). Besides, Themis will not work on image classification, where the sensitive attribute (\\hbox{\\emph{e.g.,}}\\xspace, gender, race) is a visual concept that cannot be flipped easily. \nIn our work, we use a white-box approach to measure the bias learned by the model during training. Our testing method does not require the model to take into account any sensitive feature(s). We propose a new fairness notion for the setting of multi-object classification, {\\em average confusion disparity}, and a proxy, {\\em average bias}, to measure for any deep learning model even when only unlabeled testing data is provided. In addition, our method tries to provide an explanation behind the discrimination. A complementary approach by Papernot \\hbox{\\emph{et al.}}\\xspace ~\\cite{papernot2018deep} shows such explainability behind model bias in a single classification setting. 
\n\n\n\n\\section{Discussion \\& Threats to Validity}\n\\label{sec:discussion}\n\n\n\\textbf{Discussion.}\nIn the \nliterature, bug detection, debugging, and repair are usually three distinct tasks, and there is a large body of work investigating each \nseparately.\nIn this work, we focus on bug detection for image classifier software.\nA natural follow-up to our work is debugging and repair that leverage DeepInspect\\xspace's bug detection. We present some preliminary results and thoughts.\n\n\n\nA commonly used approach to improving (\\hbox{\\emph{i.e.}}\\xspace fixing) image classifiers is active learning, which consists of adding more labeled data by smartly choosing what to label next.\nIn our case, we can use \\textsc{napvd}\\xspace to identify the most confusing class pairs,\nand then target those pairs by collecting additional examples that contain individual objects from the confusing pairs.\nWe download 105 sample images from Google Images that contain isolated examples of these categories so that the model learns to disambiguate them.\nWe retrain the model from scratch using the original training data and these additional examples. Using this approach, we have some preliminary results on the COCO~ dataset. After retraining, we find that the $\\typebconf$ of the top confused pairs decreases. For example, $\\typebconf$(baseball bat, baseball glove) is reduced from 0.23 to 0.16, and $\\typebconf$(refrigerator, oven) is reduced from 0.14 to 0.10. Unlike traditional active learning approaches that encourage labeling additional examples near the current decision boundary of the classifier, our approach encourages the labeling of problematic examples based on confusion bugs.\n\nAnother potential direction to explore is to use DeepInspect\\xspace in tandem with debugging \\& repair tools for DNN models like MODE~\\cite{Ma:2018:MAN:3236024.3236082}.\nDeepInspect\\xspace enables the user to focus debugging effort on the vulnerable classes even in the absence of labeled data. \nFor instance, once DeepInspect\\xspace identifies the vulnerable class-pairs, one can use the GAN-based approach proposed in MODE to generate more training data for these class-pairs and apply MODE to identify the most vulnerable features in these pairs to select for retraining.\n\nWe have also explored how the neuron coverage threshold ($th$) used in computing \\textsc{napvd}\\xspace affects our performance in detecting confusion and bias errors. We studied one multi-label classification task (COCO) and one single-label classification task (CIFAR-100). Tables \\ref{tab:confusion_coco_threshold}, \\ref{tab:confusion_cifar100_threshold}, \\ref{tab:bias_coco_threshold}, and \\ref{tab:bias_cifar100_threshold} show how our precision and recall change when using different neuron coverage thresholds ($th$). We observed that, for CIFAR-100 and COCO, DeepInspect\\xspace's accuracies are overall stable at $0.4\\leq th \\leq 0.75$. With a smaller $th$ ($<0.25$), too many neurons are activated, pulling the per-class activation probability vectors closer to each other. In contrast, with a higher $th$ ($>0.75$), important activation information is lost. Thus, we select $th=0.5$ for all the other experiments to avoid either issue.\n\\input{tables\/threshold.tex}\n\n\\medskip\n\\noindent\n\\textbf{Threats to Validity.} We evaluate DeepInspect\\xspace on only 6 datasets under 8 settings. 
We include both \nsingle-class and multi-class as well as regular and robust models to address these threats as much as possible.\n\nAnother limitation is that DeepInspect\\xspace needs to decide thresholds for both confusion errors and bias errors, and a threshold for discarding low-confusion triplets in the estimation of \\textrm{avg\\_bias}\\xspace. Instead of choosing fixed threshold, we mitigate this threat by choosing thresholds that are one standard deviation from the corresponding mean values and, also, reporting performance at top1\\%. \n\n\n\n\nThe task of accurately classifying any image\nis notoriously difficult. \nWe simplify the problem by testing the DNN model only for the classes that it has seen during training. \nFor example, while training, if a DNN does not learn to differentiate between black vs. brown {\\tt cow}s (\\hbox{\\emph{i.e.}}\\xspace, all the cow images only have label cow and they are treated as belonging to the same class by the DNN), DeepInspect\\xspace will not be able to test these sub-groups.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nOur testing tool for DNN image classifiers, DeepInspect\\xspace, automatically detects confusion and bias errors in classification models. We applied DeepInspect\\xspace to six different popular image classification datasets and eight pretrained DNN models, including three so-called relatively more robust models.\nWe show that DeepInspect\\xspace can successfully detect class-level violations for both single- and multi-label classification models with high precision. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{0pt}{8pt plus 4pt minus 2pt}{4pt plus 2pt minus 2pt}\n\n\n\\usepackage{moreverb}\n\n\n\n\n\n\n\\everymath{\\displaystyle}\n\n\n\n\n\n\\newcommand{\\rom}[1]{\\uppercase\\expandafter{\\romannumeral #1\\relax}}\n\\newcommand{DeepTest\\xspace}{DeepTest\\xspace}\n\\newcommand{\\hbox{\\emph{cf.}}\\xspace}{\\hbox{\\emph{cf.}}\\xspace}\n\\newcommand{\\ldots [deletia] \\ldots}{\\ldots [deletia] \\ldots}\n\\newcommand{\\hbox{\\emph{et al.}}\\xspace}{\\hbox{\\emph{et al.}}\\xspace}\n\\newcommand{\\hbox{\\emph{e.g.,}}\\xspace}{\\hbox{\\emph{e.g.,}}\\xspace}\n\\newcommand{\\hbox{\\emph{i.e.}}\\xspace}{\\hbox{\\emph{i.e.}}\\xspace}\n\\newcommand{\\hbox{\\emph{s.t.}}\\xspace}{\\hbox{\\emph{s.t.}}\\xspace}\n\\newcommand{\\hbox{\\emph{w.r.t.}}\\xspace}{\\hbox{\\emph{w.r.t.}}\\xspace}\n\\newcommand{\\hbox{\\emph{viz.}}\\xspace}{\\hbox{\\emph{viz.}}\\xspace}\n\\newcommand{\\hbox{\\emph{v.s.}}\\xspace}{\\hbox{\\emph{v.s.}}\\xspace}\n\\newcommand{\\hbox{\\emph{etc.}}\\xspace}{\\hbox{\\emph{etc.}}\\xspace}\n\n\n\\newcommand{\\defref}[1]{Definition~\\ref{#1}}\n\n\\newcommand{\\varname}[1]{{\\small\\texttt{#1}}\\xspace}\n\\newcommand{\\ch}[1]{{\\color{black} #1}\\xspace}\n\n\n\n\n\n\\def$\\superscript{\\S}${$\\superscript{\\S}$}\n\\def$^{\\dag}${$^{\\dag}$}\n\\def\\superscript{\\ddag}{\\superscript{\\ddag}}\n\\def$^{\\natural}${$^{\\natural}$}\n\n\\definecolor{gray50}{gray}{.5}\n\\definecolor{gray40}{gray}{.6}\n\\definecolor{gray30}{gray}{.7}\n\\definecolor{gray20}{gray}{.8}\n\\definecolor{gray10}{gray}{.9}\n\\definecolor{gray05}{gray}{.95}\n\n\\newlength\\Linewidth\n\\def\\findlength{\\setlength\\Linewidth\\linewidth\n\\addtolength\\Linewidth{-4\\fboxrule}\n\\addtolength\\Linewidth{-3\\fboxsep}\n}\n\\newenvironment{examplebox}{\\par\\begingroup\n \\setlength{\\fboxsep}{5pt}\\findlength\n \\setbox0=\\vbox\\bgroup\\noindent\n \\hsize=0.95\\linewidth\n \\begin{minipage}{0.95\\linewidth}\\normalsize}\n {\\end{minipage}\\egroup\n 
\\textcolor{gray20}{\\fboxsep1.5pt\\fbox\n {\\fboxsep5pt\\colorbox{gray05}{\\normalcolor\\box0}}}\n \\endgroup\\par\\noindent\n \\normalcolor\\ignorespacesafterend}\n\\let\\Examplebox\\examplebox\n\\let\\endExamplebox\\endexamplebox\n\n\\newcounter{RQCounter}\n\\newcounter{RQACounter}\n\n\n\n\\newcommand{\\RQ}[2]{%\n\\refstepcounter{RQCounter} \\label{#1}\n\n\n \\noindent\n \\textbf{RQ\\arabic{RQCounter}.~#2}\n}\n\n\\newcommand{\\RQA}[2]{%\n\\refstepcounter{RQACounter} \\label{#1}\n\\vspace{0.1in} \\noindent\\textbf{RQ\\arabic{RQACounter}.~#2 \\vspace{0.05in}}\n\n}\n\n\\newcommand{\\RQBoxed}[2]{%\n\\refstepcounter{RQCounter} \\label{#1}\n\\begin{framed}%\n\\filbreak\n\\vspace{0.001in}\n\n\t\\noindent\\textbf{RQ\\arabic{RQCounter}: }%\n#2\\end{framed}\n}\n\n\n\\newcommand{\\RS}[2]{%\n\\begin{framed}%\n\\filbreak\n\n\\textbf{Result {\\ref{#1}}:~}{\\emph {#2}}%\n\\end{framed}\n}\n\n\\definecolor{javared}{rgb}{0.6,0,0}\n\\definecolor{javagreen}{rgb}{0.25,0.5,0.35}\n\\definecolor{javapurple}{rgb}{0.5,0,0.35}\n\\definecolor{javadocblue}{rgb}{0.25,0.35,0.75}\n\n\n\\lstdefinestyle{customc}{\n belowcaptionskip=\\baselineskip,\n breaklines=true,\n\n xleftmargin=\\parindent,\n language=java,\n showstringspaces=false,\n basicstyle=\\scriptsize\\ttfamily,\n keywordstyle=\\bfseries\\color{javapurple},\n commentstyle=\\itshape\\blue,\n\n\n\n}\n\n\\lstset{escapechar=@,style=customc}\n\n\\DeclareCaptionLabelFormat{andimage}{#1~#2 \\& \\figurename~\\thefigure}\n\n\n\n\n\n\\section{Research Methods}\n\n\n\n\n\n\n\n\n\\end{document}\n\\endinput\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAdvanced Persistent Threats (APTs) are a formidable threat to the cybersecurity of modern technology ecosystems, routinely defeating sophisticated system security controls with Tactics, Techniques and Procedures (TTPs) often tailored to a target's environment and specifically designed to evade detection \\cite{daly2009advanced,pitropakis2018enhanced}. \nIn recent years, enabled by an acceleration in digital transformation and greater network connectivity, APT campaigns have proliferated to industries historically isolated from the Internet, such as critical national infrastructure and vendor supply chains \\cite{lemay2018survey,ahmad2019strategically}. \nAs a result, a growing democratisation and prevalence in APT-level capability among threat actors is leading to a paradigm shift in how cybersecurity practitioners and researchers approach threat modelling and risk assessments for security control selection of existing and emerging technological ecosystems.\n\nThe advent of 5G Core Networks (5GCN) promises to provide a vibrant and service-rich technology ecosystem converging distributed computing (Cloud, Edge, IoT), Software Defined Networking (SDN) and elastic application services capable of serving a wide range of state-of-the-art industry use cases in autonomous vehicles, industrial IoT and healthcare applications. \nNaturally, therefore, as the aperture of APTs continues to expand, 5GCN has been identified as an appealing target to threat actors such as APTs and to date numerous 5G network security assessments have been conducted by academics, security groups, and industry suppliers \\cite{khan2019survey,ENISAThreatReport,geller20185g} in an attempt to identify suitable controls for protecting 5GCN architectures. 
\nHowever, the introduction of diverse new technologies and an elastic re-configurable architecture at the core of 5GCN's value proposition introduces new cyber challenges. \n\nTraditional approaches to threat modelling that focus on specific software vulnerabilities or a static network configuration and application services are likely to become quickly outdated or entirely redundant when: (i) the flexible 5GCN architecture changes; or (ii) when applied to a particular 5GCN instance where the software platforms and applications services, in use, are different or constantly evolving. \nThis plasticity in the 5GCN introduces a particularly complex challenge for establishing standardised and analytically consistent processes for threat modelling, as the dynamic nature of 5GCN creates a malleable threat modelling landscape which evolves dynamically alongside the continuous reconfiguration of 5GCN itself. \n \nWell known threat modelling frameworks such as STRIDE \\cite{shostack2014threat} and OCTAVE \\cite{alberts2003introduction} typically formulate the modelling process by first defining a high-level abstraction of a target system, its sub-systems and interfaces for profiling potential system attackers, their methods and associated objectives. \nThis definition provides a model of system security contexts that allows for the creation of a catalogue of possible threats to the system and selection of security controls which can be used to address them; based on the severity of the threat and the risk it poses to the system. \nIn this manner, traditional threat modelling frameworks generate ``stationary'' representations of a system with predefined attacker profiles. \nThis approach has practical limitations for effective and resilient threat modelling in 5GCNs.\nThis is because they are dynamic and heterogeneous by design with system security contexts, which adapt continuously based on evolving configuration of the 5GCN instance (e.g. multi-tenancy application services, network slicing, quality of service, elastic compute, and software-defined radio access). \n\nThe MITRE ATT\\&CK framework provides a foundation for flexible and dynamic threat modelling of malleable system environments like the 5GCN which bridge heterogeneous technologies (e.g. SDN, Cloud, and IoT). \nThrough its unique identification and categorisation of adversarial Tactics, Techniques and Procedures (TTPs), it establishes an environment neutral threat modelling methodology that supports dynamic security control selection and reconfiguration based on the changing patterns of threat actor behaviour \\cite{nisioti2021data}. \nAs a result, the MITRE ATT\\&CK threat modelling methodology operates independently of a system abstraction, such as the high-level description of a specific 5GCN architecture and its component interfaces, as the modelling process focuses on the determination of \\emph{adversarial behaviours} of threat actors. \nWhilst this approach provides a solution to addressing the dynamic nature of the 5GCN, it is reliant on a knowledge base of TTPs already observed. \nImplementation of MITRE ATT\\&CK will rely on the identification of 5GCN specific TTPs in a preemptive fashion.\n\nIn this paper, we propose how to extend the existing MITRE ATT\\&CK framework to incorporate a set of 5G specific adversarial techniques and discuss their use in modelling threats. \nOur approach is vendor agnostic and can be applied to flexibly model threats to 5GCN environments independently of changes to the 5GCN configuration state. 
\nIn summary, we contribute to the literature by:\n\\begin{itemize}\n \\item applying MITRE ATT\\&CK to systematically identify 5GCN infrastructure components at risk ``pre'' and ``post'' intrusion to support the threat modelling and risk assessment process of 5GCN environments.\n \\item extending the MITRE ATT\\&CK framework to include adversarial techniques that APTs can leverage to target and compromise 5GCN infrastructure and services; identifying techniques relating to key technologies such as SDN, NFV and Network Slicing which are not currently included in the existing TTP matrices.\n \\item using existing threat intelligence surrounding the motivation and objectives of APTs, when targeting telecommunication networks and draw comparisons between historic attacks and how they may be realised in 5G networks utilising our identified techniques.\n\\end{itemize}\n\nThe remainder of the paper is organised as follows: \nSection \\ref{sec:background} provides a background on the specific threats relating to 5GCN infrastructure and the suitability of MITRE ATT\\&CK in relation to these. \nIn Section \\ref{sec:mitre5g}, we identify, based on current literature, a set of adversarial techniques which could be used to attack the 5GCN by APTs creating a 5G TTP knowledge base and map these to the 5GCN infrastructure components.\nIn Section \\ref{sec:application}, we demonstrate the use of our proposed \\emph{5GCN adversarial techniques} for integration into an expanded MITRE ATT\\&CK framework to model multi-stage attack threats to the 5GCN.\nFinally, Section \\ref{sec:concl} concludes the paper providing a summary of the main contributions and highlighting future work to be undertaken to further develop the integration of 5GCN adversarial tactics and techniques with the MITRE ATT\\&CK framework.\n\n\n\\section{Background}\\label{sec:background}\nThe attack surface of telecommunication networks is set to grow drastically with the introduction of new technologies and services to support new use cases in 5G networks. \nAn important pre-requisite to the \\textit{threat modelling} and \\textit{risk assessment} processes is the identification of threats as shown in Figure \\ref{Security Assessment Process}.\nIn the context of 5GCNs, effective threat modelling must consider an approach which divorces the modelling process from the need to define a static system abstraction for threat identification and instead fosters an approach which is independent of a specific 5GCN systems' configuration and architecture at a specific point in time.\n\nMITRE ATT\\&CK identifies a combination of techniques that an attacker may use.\nThese form specific attack scenarios expressed as attack graphs to demonstrate the multiple stages of an attack, without the dependency on defining a specific system abstraction. \nUnlike the traditional aforementioned high-level threat modelling methods, where selection of appropriate security controls are based on high-level descriptions of threats related to abstracted system security contexts, for each adversarial technique in the ATT\\&CK knowledge database there exists specific \\textit{detection} and \\textit{mitigation} techniques.\nThese can be used to address applicable threats, regardless of how the particular environment, under investigation, is configured. 
\nThis approach is considered a mid-level abstraction approach to modelling \\cite{strom2018mitre} when compared to high-level abstraction frameworks such as the STRIDE and OCTAVE, or low-level exploit and vulnerability threat modelling, such as CVSS \\cite{mell2006common}.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=1\\textwidth]{figures\/SecurityAssessment.png}\n\t\\caption[Security Assessment]{A Typical Process for Security Risk Assessment} \n\t\\label{Security Assessment Process}\n\\end{figure}\n\nAs the MITRE ATT\\&CK framework is formulated by references to specific adversarial behaviour, its formulation and expansion largely follows observations drawn from APT campaigns and attack vectors observed in the wild. \nAs a result, it can be considered a proactive event-driven framework which is updated as new adversarial behaviours are observed, analysed and systematically recorded. \nAfter the inception of a MITRE ATT\\&CK TTP matrix for Enterprise Networks, MITRE extended the framework to include an Industrial Control System (ICS) network and mobile platforms, as Tactics and Techniques have been discovered from observations of attacks against these systems. \nWhilst 5G networks are relatively new and therefore ``5G specific'' attack vectors are yet to become commonplace, technologies and practices utilised in 5GCN infrastructure and services inherit many overlapping properties associated with both traditional enterprise networks and the cyber-physical aspects of ICS networks. \nTherefore a 5GCN knowledge base can benefit from an established baseline of known adversarial TTPs which apply to them for modelling adversarial threats against 5GCN environments. \nHowever, naturally, in MITRE ATT\\&CK there currently exists a knowledge gap of potential future adversarial techniques which may be used to target components of 5GCN such as Software-Defined Networks (SDN), Network Function Virtualisation (NFV) and distributed cloud architecture, which are yet to be addressed by MITRE ATT\\&CK. \n\nIn this paper, we study how to extend MITRE ATT\\&CK TTP knowledge base for 5G networks. \nThe authors of \\cite{rao2020threat} propose the \\textit{Bahdra framework} which is a domain specific threat modelling framework for telecommunication networks. \nThe framework organises the attack life-cycle into 3 stages consisting of 8 tactical groups of 47 techniques aligned to ATT\\&CK. \nThe main key differences between this work and ours are the following. \n\nFirst, the framework omits some tactics included in the ATT\\&CK framework. \nA significant exclusion is that of data exfiltration which we believe is still a prime motivation of APTs when targeting 5G networks. \nIn our work we include all of the tactic groups defined in ATT\\&CK as we believe that the transition towards a traditional cloud computing architecture mean that 5G network inherit the same threat vectors in addition to the 5G specific ones. \nThe 5G specific TTPs we identify are an extension of the existing MITRE knowledge base for cloud computing and network threats. \n\nMoreover, \\textit{Bahdra framework} includes a new tactic Standard Protocol Misuse. 
\nWhilst we also include the misuse of protocols in a TTP knowledge base for 5G networks, we have included them as techniques which can be used to achieve some attack goals and not as a standalone tactic and therefore belong to the existing set of tactics in the ATT\\&CK framework.\nAn example of this is the misuse of signalling protocols to evade network defences by blending into standard traffic. \n\nFurthermore, the proposed framework does not include techniques which span multiple tactic groups. \nIn our work here, we identify some techniques which can be used for multiple tactics such as the use of standard network protocols for evading defences as well as for establishing Command and Control (C2) channels for data exfiltration. \nThis factor can have an impact on assessing security risk to the network, if a specific technique can be used to achieve multiple tactical stages of an attack, it may be given greater weighting when considering security risk, mitigating actions and security controls.\nLast, our work includes techniques which apply to the new 5G techniques such as SDN, NFV and network slicing which neither \\textit{Bahdra} nor ATT\\&CK include in their knowledge bases.\n\n\n\\subsection{5G Threats}\nWe have identified, through literature review, threat scenarios which relate to the new 5GCN architecture and technologies used. \nFig. \\ref{5GCNArchitecture} provides a generic reference architecture of the 5GCN. \nThe components are arranged based on the infrastructure layers and functional role within the 5G network architecture. \n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.98\\textwidth]{figures\/5GCN.png}\n\t\\caption[Scenario]{The 5GCN Architecture} \n\t\\label{5GCNArchitecture}\n\\end{figure}\n\nThe reference diagram provides an overview of the enabling technologies and the interfaces between the building blocks of 5G networks. \nThe 5G network functions are implemented at the virtual layer through Network Function Virtualisation (NFV) and represent the application layer which can be further divided into the control plane and data plane.\nManagement and Orchestration (MANO) services are provided at multiple layers of the network architecture and SDN is utilised to provide a programmable interface for network management.\n\nThe shift towards a cloud based service deployment means that remote services such as the MANO provide a new attack vector which could be exploited \\cite{ahmad2018overview}. \nGiven the dynamic and re-configurable nature of the 5GCN for service provision on demand, configuration changes may be common and detection of malicious or erroneous actions becomes challenging. \nFurther, there will be a heavy reliance on NFV and SDN in 5G networks to support new services which introduces new attack vectors. \nThreats to the SDN technology include Denial of Service (DoS) attacks and Man in the Middle (MiTM) attacks to modify network flow rules \\cite{shu2016security}. \nThe SDN flow tables provide valuable information about traffic routing within the core network and may be the target of discovery techniques as an attacker looks to identify target assets and navigate the network post intrusion. \nPrior to 5G, telecommunication networks contained many physical hardware devices responsible for providing service specific functionality in the core but these will now be virtualised service instances in the core network through NFV. 
\nNew attack vectors associated with NFV include isolation failure of virtualised components and targeting the VNF components with DoS attacks to exhaust underlying shared resources \\cite{lal2017nfv}.\n\nQuick and dynamic VNF deployment also leads to the potential of configuration errors which introduce vulnerabilities within the network.\nBesides the virtualisation aspect of the Network Functions (NFs) within Service Based Architecture (SBA) of the 5GCN, threats which target the Control Plane Signalling (CP) are of concern. \nEach NF can take on the role of the service provider or service consumer, which is managed by the Network Repository Function (NRF). \nSome NFs also interface with external networks such as the Access Management Function (AMF), Network Exposure Function (NEF), Security Edge Protection Proxy (SEPP) and User Plane Function (UPF). \nOther services provided by NFs of the SBA include the Authentication Server Function (AUSF), Session Management Function (SMF), Policy Control Function (PCF) and Network Slice Selection Function (NSSF). The Unified Data Management (UDM), Unified Data Repository (UDR) and Unstructured Data Storage Function (USDF) all serve as data repositories for providing data storage and access to the various NFs.\n\n\nWith core NFs becoming exposed to external networks and offering common interfaces by adopting an IP based protocol stack, there is a risk they could become targeted by attackers well versed in these technologies. \nCompromised NFs within the 5G core could lead to unauthorised access to data repositories, eavesdropping SBA communication, malware distribution, MiTM and DoS attacks \\cite{rudolph2019security}. \n\nService abuse is another motive for attacks on 5G networks. \nIn a roaming scenario, the home network interfaces with a third party serving network that introduces the possibility of service abuse such as service fraud or provide a platform for DoS attacks through a compromised or insecure trusted mobile network operator. \nThe lawful interception function provides emergency services the authorisation to intercept mobile communications for lawful purposes which could be used to access communications and confidential subscriber data if accessed by an adversary. \n\n\n\\section{MITRE ATT\\&CK in 5G}\\label{sec:mitre5g}\nThe MITRE ATT\\&CK framework provides a TTP Matrix for common enterprise and Industrial Control System (ICS) network types as well as a matrix for mobile devices.\nThe matrices can easily be navigated through filtering of techniques based on network properties or technologies including host operating system, cloud based deployments, and network infrastructure. \n\nIn order to apply the ATT\\&CK framework for threat modelling in the 5GCN, an extension of the framework is required to include adversarial techniques relating to the 5GCN threats. \nMany of these techniques relate to the technologies used such as SDN, NFV and network slicing, and the SBA which are not currently contained within the ATT\\&CK knowledge base. \nTable \\ref{tab:techniques} provides an overview of TTPs relating to network technologies and assets based on the existing TTP matrices and those which, we believe, must be included in a 5G specific one.\nWhilst the MITRE TTPs are a knowledge base of known APT behaviours, we aim to take a proactive approach to adversarial technique identification targeting these infrastructure components. 
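\n\nBefore identifying the individual techniques, we illustrate how such an extended knowledge base can be operationalised. The listing below is a minimal, purely illustrative Python sketch (it is not part of the MITRE ATT\\&CK tooling, and the entries shown are only a small subset of the techniques identified in the remainder of this section) of one possible encoding of 5GCN techniques, the tactic groups in which they can be used, and the infrastructure components they place at risk (cf. Table \\ref{tab:mapping}).\n\\begin{verbatim}\n# Illustrative fragment of a 5GCN TTP knowledge base: each technique maps to\n# the tactic groups in which it can be used and the components it puts at risk.\nKNOWLEDGE_BASE = {\n    'Exploit Public Facing NF':  {'tactics': ['Initial Access'],\n                                  'components': ['NF']},\n    'External Remote Services':  {'tactics': ['Initial Access', 'Command and Control'],\n                                  'components': ['MANO']},\n    'CP Signalling':             {'tactics': ['Persistence', 'Defence Evasion',\n                                              'Discovery', 'Lateral Movement',\n                                              'Collection', 'Command and Control'],\n                                  'components': ['NF', 'SDN']},\n    'Data from NF Repositories': {'tactics': ['Collection'],\n                                  'components': ['NF']},\n    'Exfiltration over C2':      {'tactics': ['Exfiltration'],\n                                  'components': ['MANO']},\n    'Resource Overloading':      {'tactics': ['Impact'],\n                                  'components': ['Physical', 'Virtual']},\n}\n\ndef techniques_for(component):\n    # All techniques that place a given 5GCN component at risk.\n    return [t for t, info in KNOWLEDGE_BASE.items()\n            if component in info['components']]\n\ndef techniques_by_tactic(tactic):\n    # All techniques usable within a given tactic group.\n    return [t for t, info in KNOWLEDGE_BASE.items()\n            if tactic in info['tactics']]\n\nprint(techniques_for('NF'))\nprint(techniques_by_tactic('Collection'))\n\\end{verbatim}\n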
\n\n\\begin{table}[!b]\n\\caption{Comparison of Existing Frameworks}\n{\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}lllll@{}}\\toprule\n\\textbf{Technology Aspects} & \\textbf{Enterprise} & \\textbf{Mobile} & \\textbf{ICS} & \\textbf{5G Core} \\\\\n\\midrule\n Network & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\ \n OS & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\ \n Cloud Infrastructure & \\checkmark & & & \\checkmark \\\\ \n Virtualisation\/Containerisation & \\checkmark & & & \\checkmark \\\\ \n Cyber Physical & & & \\checkmark & \\checkmark \\\\ \n Industry Specific Protocols & \\checkmark & \\checkmark & \\checkmark & \\checkmark \\\\ \n SDN & & & & \\checkmark \\\\ \n NFV & & & & \\checkmark \\\\ \n Network Slicing & & & & \\checkmark \\\\ \n MANO & & & & \\checkmark \\\\ \n 5G Procedures & & \\checkmark & & \\checkmark \\\\ \n\\bottomrule\n\\end{tabular*}}{}\n\\label{tab:techniques}\n\\end{table}\n\n\\subsection{5GCN Adversarial TTP Identification}\nThe MITRE ATT\\&CK TTP matrices are based on \\textit{threat intelligence} such as incident reports based on known \\textit{APT campaigns}. \nMITRE identifies eight APT groups attributed with attacks on telecommunication networks (APT19, APT39, APT41, Deep Panda, MuddyWater, OilRig, Soft Cell and Thrip). \n\nGiven that 5G networks are an amalgamation of existing and new technologies, a 5GCN TTP knowledge base should extend the existing TTP matrix for the underlying cloud infrastructure and network but also include new techniques to incorporate new technologies such as SDN, NFV, and network slicing. \nFigure \\ref{fig:CombinedTTPs} shows the proposed 5GCN TTPs which form the extended TTP knowledge base when combined with the existing knowledge base of TTPs used by APTs who have historically targeted telecommunication networks.\n\nThe techniques belonging to the MITRE ATT\\&CK TTP matrices are generally abstract definitions of attack steps.\nIn some cases there exist sub-techniques which provide finer granularity offering more specific information about a particular technique.\nThe newly identified 5GCN techniques are either not included in the existing TTP matrices or are sub-techniques which have specific context in 5G networks.\nFor each tactic of the attack life-cycle we identify and characterise new techniques and explain the rationale behind their inclusion in the 5G TTP knowledge base.\n\n\n\\subsubsection{Pre-Intrusion \\textit{(Initial Access, Execution)}}\nWe refer to the combination of Initial Access and Execution tactics as those of pre-intrusion, i.e. those which an attacker can use to gain access to the target network. \nIn 5G networks we identify several access points which could serve as attack vectors for network intrusion.\nWithin the core network, NFs adopt common protocols and become accessible via RESTful APIs. \nThe APIs exposed to external networks, such as that of the NEF and UPF provide a new attack vector to the 5GCN, such as those identified by the Open Web Application Security Project (OWASP) \\cite{van2017owasp} and security concerns over the use of REST APIs \\cite{yarygina2018overcoming}.\nTrust relationships within 5G networks introduce significant security challenges \\cite{rao2020threat}. \nThis includes trusted relationships between home and visited networks in the roaming scenario, distributed network components across MEC deployments, third party applications and services, and between NFs belonging to different network slices. 
\nAttack scenarios where trusted relationships with roaming partners and between network slices are exploited have already been identified \\cite{3GPP5GSecurityStudy, AdaptiveMobileSecurity}.\nExternal remote services such as those provided by the MANO component provide important functions to network operators but also introduce attack vectors to provide access to the 5G network such as through the use of valid user accounts.\n\n\\subsubsection{Post-Intrusion \\textit{(Persistence, Defence Evasion, Discovery, Lateral Movement, Collection, C2)}}\nPost-intrusion tactics relate to those techniques used by an adversary following the initial intrusion and prior to the final objective. \nThese activities can be thought of as those which help position the attacker to carry out the final stage of an attack whilst remaining undetected. \nIn the context of APTs, this is inclusive of activities to help the attacker establish a foothold, evade defences, remain persistent, and move laterally to the target within the network. \nThe use of virtualisation and container technology for the orchestration of NFs within the 5GCN introduce the possibility of image files being implanted within the system as to introduce a malicious NF \\cite{sultan2019container},\\cite{gao2017containerleaks}. \nThis type of technique could arise directly as a result of a supply chain compromise or be facilitated by a malicious insider. \nThis type of technique serves as a form of Persistence and Defence evasion tactic. \nThe use of virtualised components which reside on the same physical infrastructure also introduces the risk of VM\/container breakout, allowing an attacker to move laterally if there exists vulnerabilities in the hypervisor software or mis-configurations such as poor isolation of virtual machines and containers.\n\nTrusted relationships, as with the Initial Access tactic, relate to the vulnerabilities introduced by trusted relationships. \nIn the context of post intrusion techniques this refers to the trusted relationships within the 5GCN such as that between of NFs within the SBA and network slices. \nOnce a NF has been authenticated and authorised within the SBA there is no guarantee these assets cannot be compromised and turned against the network to abuse the trust relationship through signalling abuse \\cite{AdaptiveMobileSecurity, 3GPP5GSecurityStudy}. \nMany of the security controls within the SBA work at the network\/transport layer but this does not protect against application layer misuse or abuse. \nThese types of techniques can be used to evade defences and remain persistent given the lack of application level security and complexity of maintaining clear trust boundaries in a dynamic environment. \nCP signalling or rather misuse of it, is a sub-technique of the generalised standard protocol misuse technique set. \nThis type of adversarial technique has been observed in attacks on telecommunication networks including those which target legacy protocols such as SS7, Diameter and GTP. \nCP signalling is based on a set of HTTP messages defined in the 3GPP standard and therefore adversaries are well aware of the type of signalling used within 5G networks. \nThe impact of CP signalling abuse is wide ranging from DoS, data leakage, service fraud, and data integrity security risks. 
\nDue to the fact that this technique is misuse of a valid protocol, it can be very difficult to detect malicious behaviour and as such can serve as a technique for achieving Persistence, Defence Evasion, Discovery, Lateral Movement, Data Collection and C2 tactics. \n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=1\\textwidth]{figures\/5GTTP.png}\n\\caption{5G APT TTPs}\\label{fig:CombinedTTPs}\n\\end{figure*}\n\n\nA significant concern to network security should be the fact that CP signalling misuse can be deployed in a wide range of tactical steps.\nWe identify the potential for an adversary to gain initial access through the exploitation of the remote service offered for network MANO. \nConsidering this as the first step in an attack, there arises the possibility of misusing the service to make configuration changes to the 5GCN \\cite{homoliak2019insight}. \nPotential changes to the configuration can lead to changes in SDN flow tables, network configuration to impair defences or result in network boundary bridging. \nA more general purpose of this type of technique may be to introduce network configurations that can be exploited for lateral movement purposes whilst disguising configuration changes as intentional. \n\nIn all types of networks, data collection poses a significant risk. \nWhether it be for the purpose of extracting data from the network or for gaining useful information about the network to further attacks, data collection techniques present a significant challenge to security. \nWe have identified collection techniques which range from accessing data directly from NF repositories, which include crafting HTTP requests or exploitation of the databases themselves, memory scraping which targets the underlying memory in the cloud infrastructure, and passive attacks such as eavesdropping the Service Based Interface (SBI). \n\nCommand and Control (C2) tactics serve the purpose of providing an attacker with a covert channel for their activities. \nThis could be for providing a way of exfiltrating sensitive data outside of the network, maintaining a backdoor connection to the network or generally disguising communication between the target network and the outside world. \nIn 5G networks, examples include use of applications layer protocols to disguise malicious communication between roaming partners, MEC services or third party applications. \nWe have already identified the NF components, which provide an interface to the 5GCN, as a potential initial access point which logically could serve as a form of C2 channel if compromised. \nAs with NF compromise the legitimate external services to MANO components could equally serve as a C2 channel.\n\n\\subsubsection{Objectives \\textit{(Exfiltration, Impact)}}\nObjectives reflect the goal of attacks and the ATT\\&CK framework defines this as either Data Exfiltration or Impact techniques. \nOur contribution to extending the existing TTP knowledge base is the addition of Impact techniques, which are most relevant to 5G networks and have been identified in literature as potential threats. \nImpact techniques may be either combined for fulfilling the adversaries' end goal or adopted in a standalone manner.\nResource overloading and network slice compromise techniques are the result of some prior actions but can also be leveraged to advance attacks such as to induce a DoS or cause data leakage from a network slice. 
\nData Modification can have wide a ranging impact on the network from corrupting data repositories to interfering with and adversely affecting cyber-physical systems.\nService specific impacts include the Abuse of the Lawful Intercept Function and Service Fraud which can be realised through charging and billing fraud.\nLoss of security or control over parts of the network can have a significant impact on incident response and recovery depending on the level of control the adversary has achieved. \n\n\\begin{table*}[!b]\n\\caption{Mapping of Adversarial Techniques to 5G Components}\n{\\begin{tabular*}{\\textwidth}{@{\\extracolsep{\\fill}}lllllll@{}}\\toprule\n \\textbf{5G Technique} & \\textbf{Physical} & \\textbf{Virtual} & \\textbf{NF} & \\textbf{SDN} & \\textbf{MANO} & \\textbf{Network Slice} \\\\\n\\midrule\n Valid Accounts & & & & & \\checkmark & \\\\\n Exploit Public Facing NF & & & \\checkmark & & &\\\\\n External Remote Services & & & & & \\checkmark &\\\\\n Supply Chain Compromise & & \\checkmark & \\checkmark & \\checkmark & \\checkmark &\\\\\n Execution through API & & & \\checkmark & & \\checkmark &\\\\\n Implant Container\/VM Image & & \\checkmark & \\checkmark & & &\\\\\n Network Boundary Bridging& & \\checkmark & \\checkmark & & & \\checkmark \\\\\n CP Signalling & & & \\checkmark & \\checkmark & &\\\\\n Impair Defences & \\checkmark & \\checkmark & \\checkmark & \\checkmark & & \\checkmark\\\\\n NF Service Discovery & & & \\checkmark & & &\\\\\n SDN Flow Table Discovery & & & & \\checkmark & &\\\\\n Configuration Exploit & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark\\\\\n Container\/VM Breakout & & \\checkmark & \\checkmark & & &\\\\\n NF Compromise & & \\checkmark & \\checkmark & & &\\\\\n Data from NF Repositories & & & \\checkmark & & &\\\\\n SBI Eavesdropping & & \\checkmark & \\checkmark & & &\\checkmark \\\\\n Memory Scraping & \\checkmark & & & & &\\\\\n Application Layer Protocol (C2) & & \\checkmark & \\checkmark & & &\\\\\n External Remote Services (C2) & & & & & \\checkmark &\\\\\n Encrypted Channel (C2) & & \\checkmark & \\checkmark & & &\\\\\n Exfiltration over C2 & & & & & \\checkmark &\\\\\n Service Fraud & & & \\checkmark & & &\\\\\n Loss of Control & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark\\\\\n Loss of Security & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark\\\\\n Network Slice Isolation Compromise & & & & & & \\checkmark\\\\\n Resource Overloading & \\checkmark & \\checkmark & & & &\\\\\n Data Modification & & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark\\\\\n Denial of Service & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark\\\\\n \\bottomrule\n\\end{tabular*}}{}\n\\label{tab:mapping}\n\\end{table*}\n\n\\subsection{Mapping Adversarial Techniques to 5GCN Infrastructure}\\label{sec:MappingTTPs}\nMITRE ATT\\&CK positions itself as a mid level abstraction of adversarial threat modelling. \nWhilst it provides finer granularity and more details that high level abstraction frameworks, it lacks the low level information about the use of adversarial techniques. \nAs a result of this, focusing resources to provide suitable security monitoring for detection and applying security controls to mitigate threats requires an understanding of which network assets are vulnerable to different techniques. 
\nFurthermore once adversarial techniques from the 5GCN knowledge base are assigned to the ``at risk'' assets, it can support threat modelling of multi-stage attacks through identification of paths an intruder could take. \nThis can be represented through the use of attack graphs and used to support risk assessments by analysing each possible attack path. \n\nTo achieve this we utilise the mid-level abstraction approach of MITRE and correlate techniques to the technologies of the 5G network architecture. \nFor example; the exploitation of APIs technique is assigned to NF components of 5G networks rather than a specific instance of a NF such as the NEF. \nThe reason for this approach is to firstly help focus security analysis to the assets which come under a technology category.\nSecond, 5G networks are dynamic, have variations in deployment, and given that the specification is still under development, there may still changes to the general network architecture. \nIf a new NF is deployed, it will have the same security requirements as all other NFs in the network.\nFor this reason our approach to threat modelling can be applied to asset types regardless of network topology but still addresses threats to each technology group such as SDN, virtualisation, and asset properties such as trust relationships or protocol usage. \n\nTable \\ref{tab:mapping} presents the mapping between the core infrastructure components and the 5GCN adversarial TTPs. \nThis shows which of the 5GCN infrastructure components are at risk or impacted, in the case of adversary goals, of each adversarial technique we have identified.\nGeneral techniques or those that target windows operating systems or standard enterprise applications are not included as our focus is the mapping between the techniques and specific 5GCN components.\nIn some instances there may be multiple ``at risk'' components. \nFor example SBI eavesdropping can occur through a compromise NF, NFs within the same network slice or through monitoring of the virtual network traffic thus applicable to the virtual, NF and network slice assets.\n\n\\section{Multi-Stage Attack Modelling with 5GCN TTPs}\\label{sec:application}\nSo far we have identified a set of 5GCN adversarial techniques and created a mapping between those and the 5GCN infrastructure components. This equates to a 5GCN knowledge base of adversarial TTPs applicable to the network components and technologies. MITRE ATT\\&CK is a threat modelling framework which provides TTPs as an input to the modelling process. \n\nIn this section we demonstrate how the matrix consisting of 5GCN techniques can be used to model an APT as a multi-stage attack on the network. \nThis approach highlights the merits of breaking down APT multi-stage attacks into the stages relating to the adversarial tactics and identifying the specific techniques used by an attacker for each stage or tactic. 
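\n\nConcretely, a multi-stage attack can then be encoded as an ordered sequence of (tactic, technique) stages drawn from the 5GCN knowledge base. The short sketch below is a purely illustrative encoding (with placeholder names) of the data theft scenario analysed later in this section, and shows how even partial detection coverage over the stages can be used to reason about breaking critical attack paths.\n\\begin{verbatim}\n# Illustrative only: the data theft scenario expressed as an ordered list of\n# (tactic, technique) stages drawn from the 5GCN TTP knowledge base.\nDATA_THEFT_SCENARIO = [\n    ('Initial Access',      'Exploit Public Facing NF'),\n    ('Execution',           'Execution through API'),\n    ('Discovery',           'NF Service Discovery'),\n    ('Defence Evasion',     'CP Signalling'),\n    ('Collection',          'Data from NF Repositories'),\n    ('Command and Control', 'Application Layer Protocol (C2)'),\n    ('Exfiltration',        'Exfiltration over C2'),\n]\n\ndef covered_stages(scenario, detectable_techniques):\n    # Stages of the attack for which a detection or mitigation is in place.\n    return [(tactic, tech) for tactic, tech in scenario\n            if tech in detectable_techniques]\n\n# Even partial coverage can expose the campaign or break a critical path.\ndefences = {'CP Signalling', 'Exfiltration over C2'}\nprint(covered_stages(DATA_THEFT_SCENARIO, defences))\n\\end{verbatim}\n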
\nEven if it is not possible to detect and mitigate all stages of the attack through a security risk assessment, it might be possible to detect enough stages to identify an APT and to provide sufficient defences to prevent the attacker from reaching their objective by eliminating critical paths.\n\nIn this section we demonstrate the use of the 5GCN knowledge base of adversarial techniques to model theoretical multi-stage attacks.\n\n\nIn the following examples we use threat intelligence from historic APT campaigns to support adversarial motivation and reproduce these campaigns using the newly identified 5GCN TTPs.\n\n\subsection{Data Theft Scenario}\nA report published by FireEye on APT41 includes threat intelligence about activities which targeted telecommunication networks \cite{dragon2020double}. \nIt is claimed that the group targeted call record information as part of wider intelligence gathering efforts. \nThe following attack scenario, illustrated in Figure \ref{DataTheftScenario}, considers an APT whose objective is to extract confidential data from the 5GCN.\n\n\begin{figure}\n\centering{\includegraphics[width=0.75\textwidth]{figures\/Datatheft.png}}\n\caption{Scenario 1: A data theft scenario\label{DataTheftScenario}}\n\end{figure}\n\nThe attacker first targets a public-facing NF, gaining initial access through an API exploit and compromising the NF. \nWith a C2 channel established between the attacker and the compromised NF, the attacker is able to discover the available NFs registered within the SBA.\nCP signalling is used to request data from a target NF; the attacker uses CP signalling, along with the trusted relationships between the registered NFs within the SBA, to remain persistent and undetected by network defences. \nOn receipt of the service request, the target NF retrieves the requested data from its data repository and returns it to the compromised NF which requested it. \nFollowing the data collection stage, the data is exfiltrated out of the 5GCN using an application layer protocol to conceal the contents.\nThe attack vectors relating to this scenario are shown in Figure \ref{DataTheftTTP}. \n\n\n\subsection{MANO Service Abuse}\nIn order to provide scalability and flexibility in service provision, the 5GCN will use SDN and NFV, providing a reprogrammable and dynamic network architecture. \nTo support on-demand services and different use case requirements, reconfiguration of the 5GCN components is likely to be a commonly repeated task. \nMisconfiguration of the virtual infrastructure, whether malicious or unintentional, can pose a significant threat to security in the cloud environment \cite{iqbal2016cloud}. \nIn this scenario, abuse of the MANO is considered, as shown in Figure \ref{MANOScenario}.\n\nThe MANO component is targeted for initial access to the 5GCN through the external remote service and the use of valid user credentials. \nUsing the management API, the firewall settings are modified, allowing the attacker to bypass security controls; the impact of this step is a compromise of Network Slice isolation. \nWith the NF now exposed to the outside attacker, a DoS is launched, leading to the exhaustion of the underlying physical resources. 
\nThe end result is a DoS on the User Equipment (UE) being served by the targeted Network Slice.\nIn Figure \ref{MANOTTP} the adversarial techniques for each stage of the attack are mapped to the 5GCN TTP matrix.\n\n\begin{figure}\n\centering{\includegraphics[width=0.75\textwidth]{figures\/MANOScenario.png}}\n\caption{Scenario 2: MANO Attack Scenario\label{MANOScenario}}\n\end{figure}\n\n\begin{figure*}\n\centering{\includegraphics[width=1\textwidth]{figures\/datatheftTTP.png}}\n\caption{Scenario 1 TTPs \label{DataTheftTTP}}\n\end{figure*}\n\n\begin{figure*}\n\centering\n \includegraphics[width=1\textwidth]{figures\/MANOTTP.png}\n\t\caption{Scenario 2 TTPs} \label{MANOTTP}\n\end{figure*}\n\n\section{Conclusion}\n\label{sec:concl}\nIn this paper, we identify MITRE ATT\&CK as a suitable framework for threat modelling in the 5GCN, owing to the dynamic and elastic properties of the network architecture. \nThrough evaluation of 5GCN threat assessments, we identify adversarial techniques applicable to 5G networks to extend the existing ATT\&CK framework knowledge base, provide a mapping to the tactical stages of an APT lifecycle, and demonstrate a practical implementation for modelling multi-stage attack scenarios. \nOur approach towards identifying adversarial techniques and mapping them to the tactical stages of an APT campaign and to the 5GCN infrastructure adopts a preemptive methodology, allowing the well-established MITRE ATT\&CK framework to be applied to threat modelling in the 5GCN and to support future 5GCN cyber risk assessments. \n\n\subsection{Limitations}\nThe roll-out of 5G core networks is anticipated over the coming years, with the specification still under development. \nIn the absence of real-world threat intelligence reports for populating a 5G knowledge base which extends MITRE ATT\&CK, we have adopted a forward-thinking approach to producing one. \nOur work is based on relevant threat assessments produced by academia and industry but, at this stage, lacks the real-world analytics to support the extension to ATT\&CK.\n\nCurrently, our proposed knowledge base addresses the identification of potential security risks in the form of TTPs but does not yet identify suitable detection and mitigation techniques. \nThis is due in part to the lack of available data relating to APT campaigns against 5G networks, but also to the fact that many of the new technologies are yet to be deployed.\n\nThe significance of security risk assessment lies in understanding the security risk to networks, with consideration of the likelihood and impact of given attack scenarios. \nSeveral factors contribute to this process, including detection\/mitigation capabilities, the security controls which are in place, the capability of the adversary, and the network configuration. \nThese challenges are yet to be addressed and may not be until real-world deployments of 5G networks are rolled out and analytics relating to attacks are available. \n\n\subsection{Future Work}\nTo further support and extend the proposed approach, future work could involve:\n\begin{itemize}\n \item The simulation of identified 5GCN attack scenarios, with the generation of data sets providing further analysis of the techniques which have been identified. \n In the absence of real-world data, this can support the inclusion of the identified TTPs.\n \item Identification of suitable detection and mitigation capability requirements for the 5GCN adversarial techniques identified in this work. 
\n \\item A cyber security risk assessment incorporating the newly identified 5GCN TTPs to provide a holistic evaluation of security risk to the 5GCN.\n\\end{itemize}\nFuture work may include the creation of a framework to model 5GCN using requirements similar to our previous work \\cite{mavropoulos2019apparatus,mavropoulos2017conceptual}.\nWe also plan to propose a game-theoretic framework, which will study the interactions between an agent that defends the 5G infrastructure and an attacking agent, e.g.\\cite{rontidis2015game,panaousis2017game}.\n\n\\section*{Acknowledgement}\nThis work was partly funded by a UK Government PhD Studentship Scheme.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Our model}\n\\label{s:model}\n\nDenote an input utterance by $\\ensuremath{\\mathbf{X}} = [\\ensuremath{\\mathbf{x}}_1, \\dots, \\ensuremath{\\mathbf{x}}_T]$ where $\\ensuremath{\\mathbf{x}}_i \\in\n\\ensuremath{\\mathbb{R}}^d$ contains the audio features for the $i$-th frame. For our task\n(detecting a single event at a time), we are given the binary utterance\nlabel $y$ which indicates if an event \noccurs ($y=1$) or not ($y=0$). If $y=1$, we have additionally the onset\nand offset time of the event, or equivalently frame label $\\ensuremath{\\mathbf{y}} = [y_1,\n\\dots, y_T]$, where $y_t=1$ if the event is on at frame $t$ and $y_t=0$\notherwise. \nOur goal is to make accurate predictions at both the utterance level and the frame level. \n\nOur model uses a multi-layer RNN architecture $f$ to extract nonlinear features from\n$\\ensuremath{\\mathbf{X}}$, which yields a new representation \n\\begin{align*}\nf(\\ensuremath{\\mathbf{X}}) = [\\ensuremath{\\mathbf{h}}_1,\\dots,\\ensuremath{\\mathbf{h}}_T] \\in \\ensuremath{\\mathbb{R}}^{h\\times T},\n\\end{align*}\ncontaining temporal information. We also learn a vectorial representation\nof the acoustic event by $\\ensuremath{\\mathbf{w}} \\in \\ensuremath{\\mathbb{R}}^h$, which serves the purpose of a classifier and will be used in predictions at two levels.\n\nWith the standard logistic regression model, we perform per-frame\nclassfication based on the frame-level representation and the classifier\n$\\ensuremath{\\mathbf{w}}$: for $t=1,\\dots,T$,\n\\begin{align*}\np_t := P (y_t=1 | \\ensuremath{\\mathbf{X}}) = \\frac{1}{1 + \\exp \\rbr{- \\ensuremath{\\mathbf{w}}^\\top \\ensuremath{\\mathbf{h}}_t}} \\in [0,1],\n\\end{align*}\nand we measure the frame-level loss if the event occurs:\n\\begin{align*}\n& \\ensuremath{\\mathcal{L}}_{frame} (\\ensuremath{\\mathbf{X}}, \\ensuremath{\\mathbf{y}}) = \\\\ \n& \\left\\{ \\begin{array}{c@{\\hspace{1ex}}@{:}@{\\hspace{1ex}}l}\n\\frac{1}{T} \\sum_{t=1}^T y_t \\log p_t + (1 - y_t) \\log (1 - p_t) & y=1 \\\\\n0 & y=0 \n\\end{array}\n\\right. .\n\\end{align*}\nNote that we do not calculate the frame loss if no event occurs, even\nthough one can consider the frame label to be all $0$'s in this case. This\ndesign choice is consistent with the evaluation metric for rare events,\nsince if we believe no event occurs in an utterance, the onset\/offset or the frame labels are meaningless. 
\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.85\\linewidth, bb=50 20 800 550, clip]{model.pdf}\n\\vspace*{-3ex}\n\\caption{Illustration of our RNN-based attention mechanism for rare sound events detection.}\n\\label{f:model}\n\\end{figure*}\n\n\nOn the other hand, we make the utterance-level prediction by collecting\nevidence at the frame level. Since the above $p_t$'s provide the alignment\nbetween each frame and the target event, we normalize them over the\nentire utterance to give the ``attention''~\\cite{Bahdan_15a,Bahdan_15b}:\n\\begin{align*}\na_t = \\frac{p_t}{\\sum_{t=1}^T p_t}, \\qquad t=1,\\dots,T,\n\\end{align*}\nand use these attention weights to combine the frame representations to form the utterance representation as\n\\begin{align*}\n\\ensuremath{\\mathbf{h}} = \\sum_{t=1}^T a_t \\ensuremath{\\mathbf{h}}_t.\n\\end{align*}\nWe make utterance-level prediction by classifying $\\ensuremath{\\mathbf{h}}$ using $\\ensuremath{\\mathbf{w}}$:\n\\begin{align*}\np := P (y=1| \\ensuremath{\\mathbf{X}}) = \\frac{1}{1 + \\exp \\rbr{- \\ensuremath{\\mathbf{w}}^\\top \\ensuremath{\\mathbf{h}}}} \\in [0,1],\n\\end{align*}\nand define the utterance-level loss based on it:\n\\begin{align*}\n\\ensuremath{\\mathcal{L}}_{utt} (\\ensuremath{\\mathbf{X}}, y) = y \\log p + (1 - y) \\log (1 - p) .\n\\end{align*}\nThis loss naturally encourages the attention to be peaked at the event\nframes (since they are better aligned with $\\ensuremath{\\mathbf{w}}$), and\nlow at the non-event frames. \n\nOur final objective function is a weighted combination of the two above losses:\n\\begin{align*}\n\\ensuremath{\\mathcal{L}} (\\ensuremath{\\mathbf{X}}, y, \\ensuremath{\\mathbf{y}}) = \\ensuremath{\\mathcal{L}}_{utt} (\\ensuremath{\\mathbf{X}}, y) + \\alpha \\cdot \\ensuremath{\\mathcal{L}}_{frame} (\\ensuremath{\\mathbf{X}}, \\ensuremath{\\mathbf{y}}),\n\\end{align*}\nwhere $\\alpha>0$ is a trade-off parameter. \nDuring training, we optimize $\\ensuremath{\\mathcal{L}} (\\ensuremath{\\mathbf{X}}, y, \\ensuremath{\\mathbf{y}})$ jointly over the\nparameters of RNNs $\\ensuremath{\\mathbf{f}}$ and the event representation $\\ensuremath{\\mathbf{w}}$. \nAn illustration of our model is given in Figure~\\ref{f:model}.\n\n\\subsection{Inference} \nFor a test utterance, we first calculate $p$ and\npredict that no event occurs if $p \\le thres_0$, and in the case of $p >\nthres_0$ which indicates that an event occurs, we threshold\n$[p_1,\\dots,p_T]$ by $thres_1$ to predict if the event occurs at each\nframe. For the DCASE challenge task 2, where we need to output the time\nboundary for a predicted event (and there is at most one event in each\nutterance), we simply return the boundary of the longest connected\ncomponent of $1$'s in the thresholded frame prediction. \nWe have simply used $thres_0 = thres_1=0.5$ in our experiments.\n\n\n\\section{Multi-resolution feature extraction}\n\\label{s:multires}\n\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=0.85\\linewidth, bb=0 0 800 580, clip]{multires.pdf}\n\\vspace*{-2ex}\n\\caption{RNN-based multi-resolution modeling.}\n\\label{f:multires}\n\\end{figure*}\n\n\nDifferent instances of the same event type may occur with somewhat different\nspeeds and durations. To be robust to variations in the time axis, we propose a\nmulti-resolution feature extraction architecture based on RNNs, as\ndepicted in Figure~\\ref{f:multires}, which will be used as the $f(\\ensuremath{\\mathbf{X}})$ mapping\nin our model.\n\nThis architecture works as follows. 
After running each recurrent layer, \nwe perform subsampling in the time axis with a rate of 2, i.e.\\@, the outputs of \nthe RNN cell for two neighboring frames are averaged, and the resulting \nsequence, whose length is half of the input length of this layer, \nis then used as input to the next recurrent layer. In such a way, the\nhigher recurrent layers effectively view the original utterance at coarser\nresolutions (larger time scales), and extract information from increasingly larger context of the input. \n\nAfter the last recurrent layer, we would like to obtain a representation\nfor each of the input frames. This is achieved by upsampling (replicating) the\nsubsampled output sequences from each recurrent layer, and summing them for\ncorresponding frames. Therefore, the final frame representation produced\nby this architecture takes into account information at different\nresolutions. \nWe note that the idea of subsampling in deep RNNs architecture is\nmotivated by that used in speech recognition~\\cite{Miao_16a}, and the idea of\nconnecting lower level features to higher layers is similar\nto that of resnet~\\cite{He_16a}. \nWe have implemented our model in the tensorflow framework~\\cite{Abadi_15a}.\n\n\n\n\\section{Conclusion}\n\\label{s:conclusion}\n\nWe have proposed a new recurrent model for rare sound events detection, which achieves competitive performance on Task 2 of the DCASE 2017 challenge. The model is simple in that instead of heuristically aggregating frame-level predictions, it is trained to directly make the utterance-level prediction, with an objective that combines losses at both levels through an attention mechanism.\nTo be robust to the variations in the time axis, we also propose a multi-resolution feature extraction architecture that improves over standard bi-directional RNNs. \nOur model can be trained efficiently in an end-to-end fashion, and thus can scale up to larger datasets and potentially to the simultaneous detection of multiple events.\n\n\\section{Experimental results}\n\\label{s:expt}\n\n\\noindent \\textbf{Data generation}\nWe demonstrate our rare event detection model on the task 2 of DCASE 2017\nchallenge~\\cite{Mesaro_16a} . The task data consist of isolated sound\nevents for three target events (babycry, glassbreak, gunshot), and recordings of 15 different audio scenes (bus,\ncafe, car, etc.) used as background sounds from TUT Acoustic Scenes 2016\ndataset~\\cite{Mesaro_16b}. \nThe synthesizer provided as a part of the DCASE challenge is\nused to generate the training set, and the mixing event-to-background\nratios (EBR) are $-6$, $0$ and $6$ dB. \nThe generated training set has 5000 or 15000 utterances for each target\nclass, and each utterance contains either one target class event or no events. \nWe use the same development and evaluation set (both of about $500$ utterances) provided by the DCASE challenge.\n\n\\noindent \\textbf{Feature extraction}\nThe acoustic features used in this work are log filter bank energies\n(LFBEs). The feature extraction operates on mono audio signals sampled at\n44.1 kHz. For each 30 seconds audio clip, we extract 64 dimensional LFBEs from frames of 46 ms duration\nwith shifts of 23 ms. \n \n\\noindent \\textbf{Evaluation metrics}\nThe evaluation metrics used for audio event detection in DCASE 2017 are \nevent-based error rate (ER) and F1-score. \nThese metrics are calculated using onset-only condition with a collar of\n500 ms, taking into account insertions, deletions, and substitutions of\nevents. 
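\nAs a rough guide to how such onset-based scores are obtained, the following simplified sketch (ours; reported numbers should of course come from the official challenge toolkit) matches predicted onsets to reference onsets with a 500 ms collar and derives ER and F1 for the single-event-per-utterance setting.\n\begin{verbatim}\ndef onset_based_scores(ref_onsets, pred_onsets, collar=0.5):\n    # ref_onsets, pred_onsets: lists of onset times in seconds.\n    # A prediction matches an unmatched reference onset if their\n    # distance is at most `collar` seconds (onset-only condition).\n    matched_ref, matched_pred = set(), set()\n    for i, r in enumerate(ref_onsets):\n        for j, p in enumerate(pred_onsets):\n            if j not in matched_pred and abs(p - r) <= collar:\n                matched_ref.add(i)\n                matched_pred.add(j)\n                break\n    tp = len(matched_ref)\n    deletions = len(ref_onsets) - tp      # missed events\n    insertions = len(pred_onsets) - tp    # spurious events\n    er = (deletions + insertions) \/ max(len(ref_onsets), 1)\n    f1 = 2.0 * tp \/ max(2.0 * tp + deletions + insertions, 1.0)\n    return er, f1\n\end{verbatim}\n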
Details of these metrics can be found in~\\cite{Mesaro_16a}.\n\n\\subsection{Training with 5K samples}\n\nFor each type of event, we first explore different architectures and hyperparameters on \ntraining sets of 5000 utterances, 2500 of which contain the event. \nThis training setup is similar to that of several participants of the\nDCASE challenge. \n\nFor the frame-level loss $\\ensuremath{\\mathcal{L}}_{frame}$, instead of summing the \ncross-entropy over all frames in a positive utterance, we only consider \nframes near the event and in particular, from 50 frames before the onset \nto 50 frames after the offset. In this way, we obtain a balanced set of\nframes (100 negative frames and a similar amount of\npositive frames per positive utterance) for $\\ensuremath{\\mathcal{L}}_{frame}$.\n\nOur models are trained with the ADAM algorithm~\\cite{KingmaBa15a} with a minibatch\nsize of 10 utterances, an initial stepsize of $0.0001$, for 15 epochs. \nWe tune the hyperparameter $\\alpha$ over the grid $\\cbr{0.1, 0.5, 1, 5, 10}$\non the development set. For each $\\alpha$, we monitor the model's performance on\nthe development set, and select the epoch that gives the lowest ER.\n\n\n\\begin{table}[t]\n\\centering\n\\caption{ER results of our model on the development set for different RNN architectures. \n Here the training set size is $5000$, and we fix the number of GRU layers to be $3$.}\n\\label{t:result-5k}\n\\begin{tabular}{@{}|c|c|c|c|@{}}\n\\hline\n& babycry & glassbreak & gunshot \\\\ \\hline \\hline\nuni-directional & 0.24 & 0.06 & 0.31 \\\\ \\hline\nbi-directional & 0.18 & 0.07 & 0.26 \\\\ \\hline\nmulti-resolution & 0.13 & 0.04 & 0.20 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure*}[t]\n\\centering\n\\begin{tabular}{@{}c@{\\hspace*{0.01\\linewidth}}c@{\\hspace*{0\\linewidth}}c@{\\hspace*{0\\linewidth}}c@{}}\n& babycry & glassbreak & gunshot \\\\\n\\rotatebox{90}{\\hspace*{7em}ER} & \n\\includegraphics[width=0.32\\linewidth, bb=0 20 700 650, clip]{alpha_babycry.pdf} & \n\\includegraphics[width=0.32\\linewidth, bb=0 20 700 650, clip]{alpha_glassbreak.pdf} & \n\\includegraphics[width=0.32\\linewidth, bb=0 20 700 650, clip]{alpha_gunshot.pdf} \\\\ [-1.5ex]\n& $\\alpha$ & $\\alpha$ & $\\alpha$\n\\end{tabular}\n\\vspace*{-1ex}\n\\caption{Performance of different RNN architectures for a range of\n $\\alpha$. 
Here the training set size is $5000$.}\n\\label{f:alpha}\n\\end{figure*}\n\n\n\n\\begin{table*}[t]\n\\centering\n\\caption{Performance of our model with 15000 training samples and $4$ GRU layers.}\n\\label{t:result-15k}\n\\begin{tabular}{@{}|c|c|c|c|c|c|c|c|c|c|@{}}\n\\hline\n& \\multirow{ 2}{*}{Methods} & \\multicolumn{2}{c|}{babycry} & \\multicolumn{2}{c|}{glassbreak} &\n \\multicolumn{2}{c|}{gunshot} & \\multicolumn{2}{c|}{average} \\\\ \\cline{3-10} \n& & ER & F1 (\\%) & ER & F1 (\\%) & ER & F1 (\\%) & ER & F1 (\\%) \\\\ \\hline\n \\hline\n\\multirow{ 4}{*}{\\caja{c}{c}{Development \\\\ set}}\n& Ours & 0.11 & 94.3 & 0.04 & 97.8 & 0.18 & 90.6 & 0.11 & 94.2 \\\\ \\cline{2-10} \n& DCASE Baseline \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& 0.53 & 72.7 \\\\ \\cline{2-10} \n& DCASE 1st place~\\cite{Lim_17a} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& 0.07 & 96.3 \\\\ \\cline{2-10} \n& DCASE 2nd place~\\cite{Cakir_17a} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& 0.14 & 92.9 \\\\\n\\hline \\hline\n\\multirow{ 4}{*}{\\caja{c}{c}{Evaluation\\\\ set}}\n& Ours & 0.26 & 86.5 & 0.16 & 92.1 & 0.18 & 91.1 & 0.20 & 89.9 \\\\ \\cline{2-10} \n& DCASE Baseline \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& 0.64 & 64.1 \\\\ \\cline{2-10} \n& DCASE 1st place \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& 0.13 & 93.1 \\\\ \\cline{2-10} \n& DCASE 2nd place \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& \\cellcolor{light-gray}\n& \\cellcolor{light-gray} \n& 0.17 & 91.0 \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\subsubsection{Effect of RNN architectures}\n\\label{s:rnn}\n\nWe explore the effect of RNN architectures for the frame feature\ntransformation. \nWe test 3 layers of uni-directional, bi-directional, and multi-resolution\nRNNs described in Section~\\ref{s:multires} for $f(\\ensuremath{\\mathbf{X}})$. The specific RNN cell we use\nis the standard gated recurrent units~\\cite{Cho_14a}, with 256 units in each direction. \nWe observe that bi-directional RNNs tend to outperform uni-directional\nRNNs, and on top of that, the multi-resolution architecture brings further\nimprovements on all events types.\n\n\n\n\n\\subsubsection{Effect of the $\\alpha$ parameter}\n\\label{s:alpha}\n\nIn Figure~\\ref{f:alpha}, we plot the performance of different RNN\narchitectures at different values of trade-off parameter $\\alpha$. We\nobserve that there exists a wide range of $\\alpha$ for which the model achieves\ngood performance. And for all three events, the optimal $\\alpha$ is close to\n$1$, placing equal weight on the utterance loss and frame loss.\n\n\n\\subsection{Training with 15K samples}\n\\label{s:final-expt}\n\nFor each type of event, we then increase the training set to 15000 utterances,\n7500 of which contain the event. We use $4$ GRU layers in our\nmulti-resolution architecture, and set $\\alpha=1.0$. 
\nTraining stops after $10$ epochs and we perform early stopping on the development set as before. \n\nThe results of our method, in terms of both ER and F1-score, are given in Table~\ref{t:result-15k}. With the larger training set and deeper architecture, our development set ER performance is further improved on babycry and gunshot; the average ER of $0.11$ is second only to the first-place result of $0.07$ among all challenge participants.\n\n\n\n\section{Introduction}\n\label{s:intro}\n\nThe task of detecting rare sound events from audio has drawn much recent attention, due to its wide applicability for acoustic scene understanding and audio security surveillance. The goal of this task is to classify whether a certain type of event occurs in an audio segment and, when it does occur, to also detect the time boundaries (onset and offset) of the event instance. \n\nTask 2 of the DCASE 2017 challenge provides an ideal testbed for detection algorithms~\cite{Mesaro_17a}. The data set consists of isolated sound events for three target classes (baby crying, glass breaking, and gun shot) embedded in various everyday acoustic scenes as background. Each utterance contains at most one instance of the event type, and the data generation process provides the temporal position of the event, which can be used for modeling. \n\nThe most direct solution to this problem is perhaps to model the hypothesis space of segments, and to predict if each segment corresponds to the time span of the event of interest. This approach was adopted by~\cite{Wang_17d} and~\cite{Kao_18a}, whose model architectures drew heavily on the region proposal networks~\cite{Ren_15a} developed in the computer vision community. There are a large number of hyper-parameters in such models, which require much human guidance in tuning. More importantly, this approach is generally slow to train and test, due to the large number of segments to be tested.\n \nAnother straightforward approach to this task is to generate reference labels for each frame indicating if the frame corresponds to the event, and then train a classifier to predict the binary frame label. This was indeed the approach taken by many participants of the challenge (e.g.,~\cite{Lim_17a, Cakir_17a}). The disadvantage of this approach is that it does not directly provide an utterance-level prediction (whether an event occurs at all), and thus requires heuristics to aggregate the frame-level evidence for it. Solving this issue is the motivation of our work.\n\nWe propose a simple model for detecting rare sound events without aggregation heuristics for utterance-level prediction. Our learning objective combines a frame-level loss, similar to the abovementioned approach, with an utterance-level loss that automatically collects the frame-level evidence. The two losses share a single classifier which can be seen as the vectorial representation of the event, and they are connected by an attention mechanism. \nAdditionally, we use multiple layers of recurrent neural networks (RNNs) for feature extraction from the raw features, and we propose an RNN-based multi-resolution architecture that consistently improves over standard multi-layer bi-directional RNN architectures for our task. 
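\n\nAs a rough sketch of the multi-resolution idea (ours, for illustration only; the actual model uses GRU layers implemented in TensorFlow, and the handling of sequence lengths is simplified here), the halving of the time resolution between recurrent layers and the upsample-and-sum combination can be written as follows, assuming the number of frames is divisible by the relevant powers of two.\n\begin{verbatim}\nimport numpy as np\n\ndef subsample_by_two(x):\n    # Average neighbouring frames; halves the sequence length\n    # (the length of x is assumed to be even).\n    return 0.5 * (x[0::2] + x[1::2])\n\ndef upsample_and_sum(layer_outputs):\n    # layer_outputs[k]: output of the k-th recurrent layer, whose\n    # sequence length is the original length divided by 2**k, i.e.\n    # higher layers see the utterance at coarser time resolutions.\n    # Replicate each output back to the original length and sum,\n    # giving one feature vector per input frame, used as f(X).\n    combined = np.zeros_like(layer_outputs[0])\n    for k, out in enumerate(layer_outputs):\n        combined += np.repeat(out, 2 ** k, axis=0)\n    return combined\n\end{verbatim}\n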
\nIn the rest of this paper, we discuss our learning objective in Section~\\ref{s:model}, introduce the multi-resolution architecture in Section~\\ref{s:multires}, demonstrate them on the DCASE challenge in Section~\\ref{s:expt}, and provide concluding remarks in Section~\\ref{s:conclusion}.\n\n\n\n\\section{Acknowledgements}\nThe authors would like to thank Ming Sun and Hao Tang for useful\ndiscussions, and the anonymous reviewers for constructive feedback.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nWe study disordered XXZ spin chains in the Ising phase exhibiting droplet localization.\nThis is a single cluster localization property we previously proved for random XXZ spin chains inside the droplet spectrum \\cite{EKS}.\n\n\n\n\nThe basic phenomenon of Anderson localization in the single particle framework is that disorder can cause localization of electron states and thereby manifest itself in properties such as non-spreading of wave packets under time evolution and absence of dc transport. The mechanism behind this behavior is well understood by now, both physically and mathematically (e.g., \\cite{And,EM,Kirsch, GKber,AW,EK}). Many manifestations of single-particle Anderson localization \nremain valid if one considers a fixed number of interacting \nparticles, e.g., \\cite{CS,AW2,KN}.\t\n\n\nThe situation is radically different in the many-body setting. Little is known about the thermodynamic limit of an interacting electron gas in a random environment, i.e., an infinite volume limit in which the number of electrons grows proportionally to the volume. Even simplest models where the individual particle Hilbert space is finite dimensional (spin systems) pose considerable analytical and numerical challenges, due to the fact that the number of degrees of freedom involved grows exponentially fast with the size of the system. \n\nThe limited evidence from perturbative \\cite{AF,AGKL,GMP,AAB,PPZ,Imb} and numerical \\cite{GM,BR,HO,HP} approaches supports the persistence of a many-body localized (MBL) phase for one-dimen\\-sional spin systems in the presence of weak interactions. The numerics also suggests the existence of transition from a many-body localized (MBL) phase to delocalized phases as the strength of interactions increases, \\cite{HO,HP,BPM,BN,SPA}. \t\n\nMathematically rigorous results on localization in a true many-body system have been until very recently confined to investigations of exactly solvable (quasi-free) models (see \\cite{KP,ARNSS,SW}). More recent progress has been achieved primarily in the study of the XXZ spin chain, a system that is not integrable but yet amenable to rigorous analysis. The first results in this direction established the exponential clustering property for zero temperature correlations of the Andr\\'e-Aubry quasi-periodic model \\cite{Mas1,Mas2}. The authors recently proved localization results for the random XXZ spin chain in the droplet spectrum \\cite{EKS}. Related results are given in \\cite{BW}.\n\nIn {\\cite[Theorem~2.1]{EKS}}, the authors \n obtained a strong localization result for the droplet spectrum eigenstates of the random XXZ spin chain in the Ising phase. 
This result can be interpreted as the statement that a typical eigenstate in this part of the spectrum behaves as an effective quasi-particle, localized, in the appropriate sense, in the presence of a random field.\n \nIn this paper we study disordered XXZ spin chains exhibiting the same localization property we proved in {\\cite[Theorem~2.1]{EKS}}, which we call Property DL (for ``droplet localization\").\n We draw conclusions concerning the dynamics of the spin chain based exclusively on\nProperty DL.\n\nFor completely localized many-body systems, the dynamical manifestation of localization is often expressed in terms of the non-spreading of information under the time evolution. An alternative (and equivalent) description is the zero-velocity Lieb-Robinson bound. (See, e.g, \\cite{FHBSE}.)\n\nThere is, however, \na difficulty in even formulating our results for disordered XXZ spin chains. Property DL only carries information \nabout the structure of the eigenstates near the bottom of the spectrum, and we cannot assume complete localization for all energies. Moreover, Theorem~\\ref{thmrigid} below shows that Property DL can only hold inside the droplet spectrum for random XXZ spin chains, showing the near optimality of the interval in [16, Theorem 2.1].\nIn fact, numerical studies suggest the presence of a mobility edge for sufficiently small disorder, \\cite{HO,HP,BPM,BN}. To resolve this issue, we recast non-spreading of information and the zero-velocity Lieb-Robinson bound as a problem on the subspace of the Hilbert space associated with the given energy window in which Property DL holds. This leads to a number of interesting findings, formulated below in\nTheorem~\\ref{corquasloc} (non-spreading of information), Theorem~\\ref{thm:expclusteringgenJ} \n(zero-velocity Lieb-Robinson bounds), and Theorem~\\ref{thmexpclust} (general dynamical clustering).\n\nAs we mentioned earlier, our methodology in \\cite{EKS} is limited to the states near the bottom of the spectrum and sheds light only on what physicists call zero temperature localization. It is unrealistic to expect that this approach can yield insight about extensive energies of magnitude comparable to the system size which is the essence of MBL. Nonetheless, we believe that the ideas presented here will be useful in understanding the transport properties of interacting systems that have a mobility edge, such as the Quantum Hall Effect \\cite{GP, EGS,GKS}.\n\nSome of the results in this paper were announced in \\cite{EKS2}.\n\n\nThis paper is organized as follows: The model, Property DL, and the main theorems are stated in Section~\\ref{secmodel}. We collect some technical results in Section~\\ref{secprel}, and a lemma about spin chains is presented in Appendix~\\ref{appspinc}.\n Section~\\ref{secopt} is devoted to the proof that Property DL only holds inside the droplet spectrum for random XXZ spin chains (Theorem~\\ref{thmrigid}). Non-spreading of information (Theorem~\\ref{corquasloc}) is proven in Section~\\ref{secnonsp}. Zero-velocity Lieb-Robinson bounds (Theorem~\\ref{thm:expclusteringgenJ}) are proven in Section~\\ref{secLR}. 
Finally, the proof of general dynamical clustering (Theorem~\\ref{thmexpclust}) is given in Section~\\ref{secdyncl}.\n\n\n\\section{Model and results}\\label{secmodel}\n\nThe infinite disordered XXZ spin chain (in the Ising phase) is given by the (formal) Hamiltonian \n\\beq \\label{infXXZ}\nH=H_\\omega=H_0+ \\lambda B_\\omega,\\quad H_0=\\sum_{i\\in\\Z}h_{i,i+1},\\quad B_\\omega=\\sum_{i\\in\\Z} \\omega_i \\mathcal{N}_i,\n\\eeq\nacting on $\\bigotimes_{i\\in \\Z} \\C_i^2$, with $\\C_i^2=\\C^2$ for all $i\\in \\Z$, the quantum spin configurations on the one-dimensional lattice $\\Z$, where \n\\begin{enumerate}\n\\item $h_{i,i+1}$, the local next-neighbor Hamiltonian, is given by \n\\beq \nh_{i,i+1}=\\tfrac{1}{4}\\pa{I-\\sigma_i^z\\sigma_{i+1}^z}-\\tfrac{1}{4\\Delta}\\pa{\\sigma_i^x\\sigma_{i+1}^x+\\sigma_i^y\\sigma_{i+1}^y},\n\\eeq\nwhere $\\sigma^{x},\\sigma^{y},\\sigma^{z}$ are the standard Pauli matrices ($\\sigma_i^{x},\\sigma_i^{y},\\sigma_i^{z}$ act on $\\C_i^2$) and $\\Delta>1$ is a parameter;\n\\item \n$\\mathcal{N}_i = \\tfrac{1}{2} (1-\\sigma_i^z)\n$\nis the local number operator at site $i$\n(the projection onto the down-spin state at site $i$); \n\n\\item $\\omega = \\set{\\omega_i}_{i\\in\\Z}$ are identically\ndistributed random variables whose joint probability distribution is ergodic with respect to shifts in $\\Z$, and the single-site \n probability distribution $\\mu$ satisfies \n \\beq\n \\set{0,1}\\subset \\supp \\mu\\subset[0,1] \\qtx{and}\\mu(\\set{0})=0;\n \\eeq \n \n \\item $\\lambda > 0$ is the disorder parameter.\n \\end{enumerate}\n If in addition $\\set{\\omega_i}_{i\\in\\Z}$ are independent random variables we call $H_\\omega$ a \\emph{random} XXZ spin chain.\n \\medskip\n\n \nThe choice $\\Delta>1$ specifies the Ising phase. The Heisenberg chain corresponds to $\\Delta=1$, and the Ising chain is obtained in the limit $\\Delta\\to\\infty$. \n \n\nWe set $e_+ = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} $ and $e_- = \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} $, spin up and spin down, respectively. \n Recall $\\sigma^{z} e_{\\pm}= \\pm e_{\\pm}$. Thus, if $\\mathcal{N} = \\tfrac{1}{2} (1-\\sigma^z)\n$, we have $\\mathcal{N} e_+=0$ and $\\mathcal{N} e_-=e_-$.\n\n\n\n\nThe operator $H_\\omega$ as in \\eq{infXXZ} with $B_\\omega \\ge 0$ can be defined as an unbounded nonnegative self-adjoint operator as follows: \nLet $\\cH_0$ be the vector subspace of $\\bigotimes_{i\\in \\Z} \\C_i^2$ spanned by tensor products of the form $\\bigotimes_{i\\in \\Z}e_i$, $e_i\\in \\set{e_+,e_-}$, with a finite number of spin downs, equipped with the tensor product inner product, and let $\\cH$ be its Hilbert space completion. $H_\\omega$, defined in $\\cH_0$ by \\eq{infXXZ}, is an essentially self-adjoint operator on $\\cH$. Moreover, the ground state energy of $H_\\omega$ is $0$, with the unique ground state (or \\emph{vacuum}) given by the all-spins up configuration $\\psi_0= \\otimes_{i\\in \\Z} e_{+}$. 
Note that $\\cN_i \\psi_0=0$ for all $i\\in \\Z$ and $\\norm{\\psi_0}=1$.\n\n\nThe spectrum of $H_0$ is known to be of the form \\cite{NSt,FS} (recall $\\Delta >1$):\n\\beq\n\\sigma(H_0)=\\set{0} \\cup \\br{1 -\\tfrac 1 \\Delta, 1 +\\tfrac 1 \\Delta} \\cup \\set{\\left[2\\pa{1 -\\tfrac 1 \\Delta},\\infty\\right )\\cap\\sigma(H_0) }.\n\\eeq\nWe will call $I_1 =[1-\\frac{1}{\\Delta}, 2(1-\\frac{1}{\\Delta}))$ the \\emph{droplet spectrum}.\n(Droplet states in the Ising phase of the XXZ chain were first described in \\cite{NSt} (see also \\cite{NSS,FS}); they have energies in the interval $ \\br{1 -\\tfrac 1 \\Delta, 1 +\\tfrac 1 \\Delta}$.\nThe pure droplet spectrum is actually $I_1\\cap \\sigma(H_0)$; we call $I_1$ the droplet spectrum for convenience.)\n\n Since the disordered XXZ spin chain Hamiltonian $H_\\omega$ is ergodic with respect to translation in $\\Z$, $B_\\omega\\ge 0$, and $B_\\omega \\psi_0=0$, standard considerations imply that $H_\\omega$ has nonrandom spectrum $\\Sigma$, and \n\\beq\n\\sigma(H_\\omega)= \\Sigma= \\set{0} \\cup \\set{\\left[1 -\\tfrac 1 \\Delta,\\infty \\right ) \\cap \\Sigma}\\lqtx{almost surely}.\n\\eeq\n(In the case of a random XXZ spin chain Hamiltonian $H_\\omega$ with a continuous single-site probability distribution standard arguments yield\n$\\Sigma= \\set{0} \\cup \\left[1 -\\tfrac 1 \\Delta,\\infty \\right )$.)\n\nWe consider the restrictions of $H_\\omega$ to finite intervals $[-L,L]$, $L\\in \\N$ (We will write $[-L,L]$ for $[-L,L]\\cap \\Z$, etc., when it is clear from the context.) We let $\\cH\\up{L}=\\cH_{[-L,L]}$,\nwhere $\\cH_S =\\otimes_{i\\in S}\\C_i^2$ for $S\\subset \\Z$ finite, and define the self-adjoint operator \n\\beq \\label{finiteXXZ}\nH^{(L)} =H_\\omega ^{(L)}= \\sum_{i=-L}^{L-1} h_{i,i+1} + \\lambda\\sum_{i=-L}^L \\omega_i \\mathcal{N}_i + \\beta (\\mathcal{N}_{-L} + \\mathcal{N}_L) \\qtx{on}\\cH\\up{L}.\n\\eeq\nWe take (and fix) $\\beta \\ge \\frac{1}{2}(1-\\frac{1}{\\Delta})$ in the boundary term, \n which guarantees that the random spectrum of $H_\\omega ^{(L)}$ preserves the spectral gap of size $1 -\\frac 1 \\Delta$ above the ground state energy:\n \\beq\\label{gapcond}\n \\sigma(H^{(L)}_\\omega)= \\set{0} \\cup \\set{\\left[1 -\\tfrac 1 \\Delta,\\infty \\right ) \\cap \\sigma(H^{(L)}_\\omega)}.\n \\eeq\nThe ground state energy of $H_\\omega ^{(L)}$ is $0$, with the all-spins up configuration state $\\psi_0 ^{(L)}=\\otimes_{i\\in [-L,L]} \\e_+ \\in \\cH\\up{L}$ being a ground state, which is unique almost surely since $\\sum_{i=-L}^L \\omega_i \\mathcal{N}_i \\ne 0$ almost surely (which rules out the all-spins down configuration in $H_\\omega ^{(L)}$ as a ground state).\n\n\n \nGiven an interval $I$, we set $\\sigma_I(H_\\omega^{(L)})=\\sigma(H_\\omega^{(L)})\\cap I$, and let \n\\beq\nG_I= \\set{g:\\R \\to \\C \\mqtx{Borel measurable,} \\abs{g}\\le \\chi_I}.\n\\eeq\n\nIn this article we consider a disordered XXZ spin chain as in \\eq{infXXZ} for which we have localization in an interval $\\br{1 -\\tfrac 1 \\Delta, \\Theta_1}$ in the following form, where $\\norm{ \\ }_1$ is the trace norm. \n\n\\begin{assumption} Let $H=H_\\omega$ be a disordered XXZ spin chain. 
There exist $\\Theta_1 > \\Theta_0= 1 -\\tfrac 1 \\Delta$ and constants \n$C<\\infty$ and $m>0$, such that, setting $I= [\\Theta_0,\\Theta_1]$, we have, uniformly in $L$, \n\\beq \\label{eq:efcorDL}\n \\E\\pa{\\sup_{g \\in G_{ I} }\\norm{\\mathcal{N}_i g(H^{(L)}) \\mathcal{N}_j}_1} \\le C e^{-m|i-j|} \\mqtx{for all} i, j \\in[-L, L].\n\\eeq\n\\end{assumption}\n\nThis property is justified because we have proven its validity in the droplet spectrum \\cite{EKS} for \\emph{random}\nXXZ spin chains. The name Property DL (for Droplet Localization) is further justified by \nTheorem~\\ref{thmrigid} below.\n\n If $H=H_\\omega$ is a random\nXXZ spin chain, then\n $H^{(L)}$ almost surely has simple spectrum. A simple analyticity based argument for this can be found in \\cite[Appendix~A]{ARS}. (The argument is presented there for the XY chain, but it holds for every random spin chain of the form $H_0 + \\sum_{k=-L}^L \\omega_k \\mathcal{N}_k$ in $\\bigotimes_{i\\in [-L,L]}\\C_i^2$.) Thus, almost surely, all its normalized eigenstates can be labeled as $\\psi_E$ where $E$ is the corresponding eigenvalue. In particular,\n \\beq\\label{norm1}\n\\norm{\\mathcal{N}_i P_E^{(L)} \\mathcal{N}_j}_1= \\norm{\\mathcal{N}_i\\psi_E}\\norm{\\mathcal{N}_j\\psi_E},\n\\eeq\nwhere $P_E^{(L)}=\\chi_{\\set{E}}(H^{(L)})$ and $\\norm{ \\ }_1$ is the trace norm.\n\n\n\n Given $0\\le \\delta< 1$, we set\n\\beq \\label{dropspec}\nI_{1,\\delta} = \\left[ 1- \\tfrac{1}{\\Delta}, (2-\\delta)\\big(1-\\tfrac{1}{\\Delta}\\big) \\right];\n\\eeq\n note that $I_{1,\\delta}\\subsetneq I_1$ if $0<\\delta <1$.\nThe following result is proved in \\cite{EKS}.\n\n\n\n\\begin{teks}[{\\cite[Theorem~2.1]{EKS}}] \n Let $H=H_\\omega$ be a random XXZ spin chain whose single-site probability distribution is absolutely continuous with a bounded density.\nThere exists a constant $K>0$ with the following property: If $\\Delta >1$, $\\lambda >0$, and $0<\\delta< 1$ satisfy\n\\beq\\label{lambdaDeltahyp}\n \\lambda \\pa{\\delta(\\Delta -1)}^{{\\frac 1 2} } \\min \\set{1, \\pa{\\delta(\\Delta -1)}}\\ge K ,\n \\eeq \nthen there exist constants \n$C<\\infty$ and $m>0$ such that we have, uniformly in $L$, \n\\beq \\label{eq:efcor5}\n \\E\\pa{ \\sum_{E\\in \\sigma_{ I_{1,\\delta}}(H^{(L)})} \\norm{\\mathcal{N}_i\\psi_E}\\norm{\\mathcal{N}_j\\psi_E}} \\le C e^{-m|i-j|} \\mqtx{for all} i, j \\in[-L, L],\n\\eeq\nand, as a consequence,\n\\beq \\label{eq:efcor59}\n \\E\\pa{\\sup_{g \\in G_{ I_{1,\\delta}}} \\norm{\\mathcal{N}_i g(H^{(L)}) \\mathcal{N}_j}_1} \\le C e^{-m|i-j|} \\mqtx{for all} i, j \\in[-L, L].\n\\eeq\n\\end{teks}\n\nThe interval $I_{1,\\delta}$ in \\cite[Theorem~1.1]{EKS} is close to optimal, as the following theorem shows that for a random XXZ spin chain localization as in \\eq{eq:efcorDL} is only allowed in the droplet spectrum.\n\n\n\n\n\\begin{theorem}[Optimality of the droplet spectrum]\\label{thmrigid} Suppose Property DL is valid for a random XXZ spin chain $H$. Then\n$\\Theta_1\\le 2 \\Theta_0$, that is, if $I$ is the interval in Property DL, then we must have \n$I= I_{1,\\delta}$ for some $0\\le \\delta <1$.\n\\end{theorem}\n \n \n \n\\emph{Let $H_\\omega$ be a disordered XXZ spin chain satisfying Property DL.}\nWe consider the intervals $I=[\\Theta_0,\\Theta_1]$ and $I_0= [0,\\Theta_1]$, where $\\Theta_0,\\Theta_1$ are given in Property DL. We mostly omit $\\omega$ from the notation. We write $P^{(L)}_B=\\chi_B(H^{(L)})$ for a Borel set\n$B\\subset \\R$, and let $P^{(L)}_E=P^{(L)}_{\\set{E}}$ for $E\\in \\R$. 
It follows from \\eq{gapcond} that \n$P^{(L)}_{I_0}= P^{(L)}_0 + P^{(L)}_{I}$. Since $\\cN_i P^{(L)}_0=P^{(L)}_0\\cN_i =0$ for all $i \\in [-L.L]$, $G_I$ may be replaced by $G_{I_0}$ in \\eq{eq:efcorDL}. By $m>0$ we will always denote the constant in \\eq{eq:efcorDL}. $C$ will always denote a constant, independent of the relevant parameters, which may vary from equation to equation, and even inside the same equation.\n\n\n\nGiven an interval $J \\subset [-L,L]$, a local observable $X$ with support $J$ is an operator on $\\otimes_{j\\in J} \\C_j^2$, considered as an operator on $\\cH^{(L)}$ by acting as the identity on spins not in $J$. (We defined supports as intervals for convenience. Note that we do not ask $J$ to be the smallest interval with this property, supports of observables are not uniquely defined.) \n\nGiven a local observable $X$, we will generally specify a support for $X$, denoted by $\\cS_X=[s_X,r_X] $. We always assume $\\emptyset \\ne \\cS_X \\subset [-L,L]$. Given two local observables $X, Y$ we set $\\dist(X,Y)= \\dist (\\cS_X,\\cS_Y)$.\n\n\nGiven $\\ell \\ge 1$ and $B\\subset [-L,L]$, we set $B_\\ell = \\set{j \\in [-L,L]; \\ \\dist\\pa{j,B} }\\le \\ell$.\nIn particular, given a local observable $X$ we let\n\\beq\\label{cSell}\n\\cS_{X,\\ell}= \\pa{\\cS_X}_\\ell =[s_X-\\ell,r_X +\\ell] \\cap [-L,L].\n \\eeq \n \n\n\nIn this paper we derive several manifestations of dynamical localization for $H$ from Property DL. The time evolution of a local observable under $ H^{(L)}$ is given by \n\\beq\n\\tau^{(L)}_t\\pa{X}=\\e^{itH^{(L)}}X\\e^{-itH^{(L)}} \\qtx{for} t\\in \\R.\n\\eeq\n(We also mostly omit $L$ from the notation, and write $\\tau_t$ for $\\tau^{(L)}_t$. ) \n\nFor a completely localized many-body system (i.e., localized at all energies), dynamical localization is often expressed as the \\emph{non-spreading of information under the time evolution}: Given a local observable $X$, for all $\\ell \\ge 1$ and $t\\in \\R$ there is a local observable $X_\\ell(t)$ with support $\\cS_{X,\\ell}$, such that $\\norm{X_\\ell(t)- \\tau_t\\pa{X}} \\le C\\norm{X} \\e^{-c \\ell}$, with the constants $C$ and $c>0$ independent of $X$, $t$, and $L$. Since we only have localization in the energy interval $I$, and hence also in $I_0$, we should only expect non-spreading of information in these energy intervals.\n\n Thus, given an energy interval $J$, we consider the sub-Hilbert space \n$\\cH\\up{L}_ J=\\Ran P^{(L)}_J$, spanned by the the eigenstates of $ H\\up{L}$ with energies in $ J$, and localize an observable $X$ in the energy interval $J$ by considering\n its restriction to $\\cH\\up{L}_ J$, \n$X_J= P_J\\up{L} X P_J\\up{L}$. Clearly $\\tau_t \\pa{X_J\\up{L}}= \\pa{\\tau_t \\pa{X\\up{L}}}_J$.\n\n\n\nProperty DL implies non-spreading of information in the energy interval $I_0$.\n\n\\begin{theorem}[Non-spreading of information]\\label{corquasloc} Let $H=H_\\omega$ be a disordered XXZ spin chain satisfying Property DL.\n There exists $C<\\infty$, independent of $L$, such that for all local observables $X$, \n $t\\in \\R$ and $\\ell >0$ there is a local observable $X_\\ell(t)=\\pa{X_\\ell(t)}_\\omega $ with support $\\cS_{X,\\ell}$ satisfying\n\\begin{align} \\label{qlocI}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{ \\pa{X_\\ell(t) - \\tau_t\\pa{ X }}_{I_0}}_1} \\le C \\|X\\|\\e^{- \\frac{1}{16} m\\ell}.\n\\end{align}\n\\end{theorem}\n\nWe give an explicit expression for $X_\\ell(t)$ in \\eq{eq:X_ell}. 
\nNote that $ X_I= \\pa{ X_{I_0} }_I$, and hence \\eq{qlocI} implies the same statement with $I$ substituted for $I_0$. \n\n\nAnother manifestation of dynamical localization is the existence of zero-velocity Lieb-Robinson (LR) bounds \\emph{in the interval of localization}. The following theorem states a zero-velocity Lieb-Robinson bound in the energy interval $I$.\n If we include the ground state, i.e., if we look for Lieb-Robinson type bounds in the energy interval $I_0$, the situation is more complicated, and the zero-velocity Lieb-Robinson bound holds for the double commutator; the commutator requires counterterms. Note that \n $ [\\tau_t\\pa{ X_I },Y_I] \\ne \\pa{ [\\tau_t\\pa{ X },Y]}_I$. (We mostly omit $\\omega$ and $L$ from the notation.)\n\n\n\\begin{theorem}[Zero velocity LR bounds]\\label{thm:expclusteringgenJ} Let $H=H_\\omega$ be a disordered XXZ spin chain satisfying Property DL. Let $X, Y$ and $Z$ be local observables. The following holds uniformly in $L$:\n\n\n\n\\begin{align}\\label{eq:dynloc}\n& \\E \\pa{\\sup_{t\\in \\R} \\norm{ [\\tau_t\\pa{ X_I },Y_I]}_1} \\le C \\|X\\| \\|Y\\| \\e^{-\\frac 1 8 m\\dist (X,Y)},\\\\\n\\label{eq:LRquasimix}\n&\\E \\pa{\\sup_{t\\in \\R}\\norm{\\left[ \\tau_t\\pa{X_{I_0}},Y_{I_0}\\right]- \\pa{\\tau_t\\pa{ X}P_0Y - YP_0 \\tau_t\\pa{ X }}_I}_1} \\\\ & \\hskip114pt \\le C \\|X\\| \\|Y\\| \\e^{-\\frac 1 8m\\dist (X,Y)}, \\notag \\\\\n \\label{eq:LRquasimix2} \n&\\E\\pa{\\sup_{t,s \\in \\R} {\\norm{\\left[\\left[ \\tau_t\\pa{X_{I_0}}, \\tau_s\\pa{Y_{I_0}}\\right], Z_{I_0}\\right] }}_1} \\\\ \\notag\n& \\hskip80pt \\le C \\|X\\| \\|Y\\|\\|Z\\|\\e^{-\\frac 1 8m \\min\\set{\\dist (X,Y), \\dist (X,Z), \\dist (Y,Z)}}.\n\\end{align} \nMoreover, for the random XXZ spin chain the estimate \\eq{eq:LRquasimix} is not true without the counterterms.\n\\end{theorem}\n\n The counterterms in \\eq{eq:LRquasimix} are generated by the interaction between the ground state and states corresponding to the energy interval $I$ under the dynamics. Here, and also in Theorem~\\ref{thmexpclust} below, they are linear combinations of terms of the form $ \\pa{\\tau_t\\pa{ X}P_0Y}_I$ and $\\pa{YP_0 \\tau_t\\pa{ X }}_I$. \n Note that \n\\begin{align}\\notag\n\\norm{\\pa{\\tau_t\\pa{ X}P_0Y}_I}_1& =\\norm{\\pa{\\tau_t\\pa{ X}P_0Y}_I} =\\norm{P_I Y^* \\psi_0}\\norm{P_I X \\psi_0},\\\\\n\\norm{\\pa{YP_0 \\tau_t\\pa{ X }}_I}_1& =\\norm{\\pa{YP_0 \\tau_t\\pa{ X }}_I} =\\norm{P_I X^* \\psi_0}\\norm{P_I Y \\psi_0},\n\\end{align}\nwhich do not depend on either $t$ or $\\dist (X,Y)$. \n\n\n\nAnother manifestation of localization is the dynamical exponential clustering property. Let $B\\subset \\R$ be a Borel set. 
We define the truncated time evolution of an observable $X$ by ($H=H_\\omega^{(L)}$), \n\\beq\n\\tau^B_t\\pa{X}=\\e^{itH_B}X\\e^{-itH_B}, \\qtx{where} H_B=P_B H.\n\\eeq\nNote that $\\pa{\\tau^B_t\\pa{X}}_B= \\pa{\\tau_t\\pa{X}}_B=\\tau_t\\pa{X_B}$.\n\nThe correlator operator of two observables $X$ and $Y$ in the energy window $B$ is given by ($\\bar P_B=1- P_B$)\n\\beq\nR_{B} (X,Y)= P_B X \\bar P_B Y P_B = \\pa{X \\bar P_B Y }_B.\n\\eeq\nIf $E$ is a simple eigenvalue with normalized eigenvector $\\psi_E$, we have, with $R_{E} (X,Y) =R_{\\set{E}} (X,Y)$,\n\\begin{align}\n{\\tr \\pa{R_{E} (X,Y)} }&={\\scal{\\psi_E,XY\\psi_E}-\\scal{\\psi_E,X\\psi_E}\\scal{\\psi_E,Y\\psi_E}}.\n\\end{align}\n\n The following result is proved in \\cite{EKS}.\n\n\\begin{teks2}[{\\cite[Theorem~1.1]{EKS}}]Let $H=H_\\omega$ be a random XXZ spin chain,\nand assume \\eq{eq:efcor5} holds in an interval $I$. Then, for all local observables $X$ and $Y$ we have, uniformly in $L$, \n\\beq \\label{eq:expclustering}\n\\E \\pa{ \\sup_{t\\in \\R} \\sum_{E\\in \\sigma_I(H^{(L)}) } \\abs{\\tr \\pa{R_{E} (\\tau_t^I\\pa{X},Y)} } }\\le C \\|X\\| \\|Y\\| \\e^{-m \\dist\\pa{ X, Y}},\n\\eeq\n\\beq \\label{eq:expclustering2}\n\\E \\pa{ \\sup_{t\\in \\R} \\sum_{E\\in \\sigma_I(H^{(L)}) } \\abs{\\tr \\pa{R_{E} (\\tau_t\\pa{X_I},Y_I)} } }\\le C \\|X\\| \\|Y\\| \\e^{-m \\dist\\pa{ X, Y}},\n\\eeq\nand\n\\beq \\label{eq:expclustering3}\n\\E \\pa{ \\sup_{t\\in \\R} \\abs{\\tr \\pa{R_{I} (\\tau_t^I\\pa{X},Y)} } }\\le C \\|X\\| \\|Y\\| \\e^{-m \\dist\\pa{ X, Y}}.\n\\eeq\n\\end{teks2}\n\n\n The estimate \\eq{eq:expclustering2} is not the same as \\eq{eq:expclustering}, but it can be proven the same way; the proof of \\cite[Lemma~3.1]{EKS} is actually simpler in this case. \n \nSince\n \\beq\n \\tr \\pa{R_{I} (\\tau_t^I\\pa{X},Y)} = \\sum_{E\\in \\sigma_I(H^{(L)}) } \\scal{\\psi_E,\\tau_t^I\\pa{X} \\bar P_{I} Y \\psi_E},\n \\eeq\n \\eq{eq:expclustering3} is a statement about the diagonal elements of the correlator operator \n $R_{I} (\\tau_t^I\\pa{X},Y) $. We will now state a more general dynamical clustering\nresult that is not restricted to diagonal elements. The result, which holds in an interval of localization satisfying the conclusions of Theorem~\\ref{thmrigid}, requires counterterms.\n\n \n\\begin{theorem}[General dynamical clustering] \\label{thmexpclust} Let $H=H_\\omega$ be a disordered XXZ spin chain satisfying Property DL. Fix an interval\n$K= [\\Theta_0, \\Theta_2]$, where $ \\Theta_0 < \\Theta_2 <\\min\\set{2 \\Theta_0, \\Theta_1 }$, and $\\alpha\\in(0,1)$. 
There exists $\\tilde m>0$, such that\n for all local observables $X$ and $Y$ we have, uniformly in $L$,\n\\begin{multline}\\label{eq:expclusteringgen'}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{R_K\\pa{ \\tau^K_t\\pa{ X },Y} - \\pa{\\tau^K_t(X)P_0 Y +\\tau^K_t\\pa{Y} P_0 X }_K}}\\\\ C\\pa{1 + \\ln\\pa{\\min\\set{\\abs{\\cS_{X}},\\abs{\\cS_{Y}}}}} \\|X\\| \\|Y\\| \\e^{-\\tilde m \\pa{\\dist (X,Y)}^\\alpha},\n\\end{multline}\nand\n\\begin{align}\\label{eq:dynloc3333}\n&\\E \\pa{\\sup_{t\\in \\R} \\norm{ \\pa{ [[\\tau^K_t\\pa{ X },Y]]}_K }}\\\\ \\notag & \\hskip30pt \\le C\\pa{1 + \\ln\\pa{\\min\\set{\\abs{\\cS_{X}},\\abs{\\cS_{Y}}}}} \\|X\\| \\|Y\\| \\e^{-\\tilde m \\pa{\\dist (X,Y)}^\\alpha},\n\\end{align}\nwhere \n\\begin{align}\\label{[[]]}\n&[[\\tau^K_t\\pa{ X },Y]]= \n [\\tau^K_t\\pa{ X },Y] \\\\& \\notag \\hskip30pt - \\pa{\\tau^K_t\\pa{ X }P_0Y +\\tau^K_t\\pa{Y} P_0 X} + \\pa{ Y P_0\\tau^K_t\\pa{X} + X P_0\\tau^K_t\\pa{Y} } .\n\\end{align}\nMoreover, for the random XXZ spin chain the estimates \\eq{eq:expclusteringgen'} and \\eq{eq:dynloc3333} are not true without the counterterms.\n\\end{theorem}\n\nWhile it is obvious where the counterterms in \\eq{eq:LRquasimix} come from, the same is not\ntrue in \\eq{eq:expclusteringgen'}, where the time evolution in the second term seems to sit in the \\emph{wrong} place: it is $\\tau^K_t\\pa{Y}$ and not $\\tau^K_t\\pa{X}$. It turns out this term encodes information about the states above the energy window $K$, and the appearance of $\\tau^K_t\\pa{Y}$ is related to the reduction of this data to $P_0$, as can be seen in the proof.\n\n\\begin{remark}\\label{remwonder}\nOne may wonder why the counterterms in \\eq{eq:expclusteringgen'} do not appear in \\eq{eq:expclustering3}. The reason is that their traces obey decay estimates similar to \\eq{eq:expclustering3} with $\\alpha=1$, see Lemma~\\ref{lemwonder}. \n \\end{remark}\n\n\\section{Preliminaries}\\label{secprel}\n\n\\subsection{Decomposition of local observables}\n\nGiven $S\\subset[-L,L]\\subset \\Z$, $S\\ne \\emptyset$, we define projections $P_{\\pm}{\\up{S}}$ by\n\\beq\nP_+^{\\pa{S}}= \\bigotimes_{j\\in S}\\ \\tfrac{1}{2} (1+\\sigma_j^z)\\qtx{and} P_-^{\\pa{S}}=1-P_+^{\\pa{S}}.\n\\eeq \nNote that\n\\beq\\label{PsuppX}\n P_-^{(S)}\\le \\sum_{i\\in S} \\cN_i .\n \\eeq \nIn particular, \n\\beq\\label{P-SP}\nP_-^{\\pa{S}}P_0 = P_0 P_-^{\\pa{S}}=0 .\n\\eeq\nWe also set $S^c= [-L,L]\\setminus S$, and note that\n\\beq\\label{PPc}\nP_+^{\\pa{S}}P_+^{\\pa{S^c}}=P_+^{\\pa{S^c}}P_+^{\\pa{S}}= P_+^{[-L,L]}= P_0.\n\\eeq\n\n\nGiven an observable $X$, we set $P_\\pm ^{\\pa{X}}= P_{\\pm}^{(\\cS_X)}$, obtaining the decomposition\n\\beq\\label{Xdecomp}\n X =\\sum_{a,b \\in \\set{+,-}}X^{a,b}, \\qtx{where} X^{a,b}= P_{a} ^{\\pa{X}} X P_{b} ^{\\pa{X}}.\n \\eeq\nMoreover, since $P_+^{\\pa{X}}$ is a rank one projection on $\\cH_{\\cS_X}$, we must have \n\\beq\\label{Xzeta}\nX^{+,+}=\\zeta_X P_+^{\\pa{X}}, \\qtx{where} \\zeta_X\\in \\C, \\ \\abs{\\zeta_X} \\le \\|X\\|.\n\\eeq\n In particular, \n \\beq \\label{X++0}\n \\pa{X- \\zeta_X}^{+,+}=0 \\qtx{and} \\norm{X- \\zeta_X}\\le 2 \\norm{X}.\n \\eeq\n\n\n\\subsection{Consequences of Property DL}\n Let $H_\\omega$ be a disordered XXZ spin chain satisfying Property DL.\nWe write $H=H_\\omega^{(L)}$, and generally omit $\\omega$ and $L$ from the notation. The following results hold uniformly on $L$.\n\n\n\n\\begin{lemma}\\label{lem:cordyn} Let $X,Y$ be local observables. 
Then\n\\begin{align}\\label{P-gP-}\n&\\E \\pa{\\sup_{g\\in G_{I_0}} \\norm{P_{-} ^{\\pa{X}} g(H)P_{-} ^{\\pa{Y}}}_1}\\le C \\e^{-m\\dist (X,Y)},\n\\\\ \\label{P-gP-2}\n& \\E\\pa{\\norm{P_{-}^{\\pa{Y}} P_{-} ^{\\pa{X}} P_{I_0} }_1 } \\le C \\e^{-\\frac 1 2 m \\dist (X,Y)}.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} It follows from \\eq{PsuppX} that setting\n$Z= \\pa{ \\sum_{i\\in S} \\cN_i }^{-1}P_{-} ^{\\pa{S}}$, we have $\\norm{Z}\\le 1$ and \n$P_{-} ^{\\pa{S}}=\\pa{ \\sum_{i\\in S} \\cN_i } Z= Z \\pa{ \\sum_{i\\in S} \\cN_i }$, and hence we have\n\\beq\n\\norm{P_{-} ^{\\pa{X}} g(H)P_{-} ^{\\pa{Y}}}_1\\le \\sum_{i\\in \\cS_X,\\, j\\in \\cS_Y} \\norm{\\cN_i g(H)\\cN_j}_1.\n\\eeq\nThe estimate \\eq{P-gP-} then follows immediately from from \\eq{eq:efcorDL} using \\cite[Eq.~(3.25)]{EKS}\n\n\nSimilarly,\n\\begin{align}\n\\norm{P_{-} ^{\\pa{Y}} P_{-} ^{\\pa{X}} P_{I_0}}_1 = \\norm{P_{-} ^{\\pa{Y}} P_{-} ^{\\pa{X}} P_{I}}_1\\le \\sum_{k=-L}^L \\norm{P_{-} ^{\\pa{Y}} P_{-} ^{\\pa{X}} P_I\\cN_k }_1.\n\\end{align}\nSince $[P_{-} ^{\\pa{Y}} ,P_{-} ^{\\pa{X}}]=0$,\n\\begin{align}\n\\norm{P_{-} ^{\\pa{Y}} P_{-} ^{\\pa{X}} P_I\\cN_k }_1\\le \\min \\set{\\norm{ P_{-} ^{\\pa{X}} P_I\\cN_k }_1,\\norm{P_{-} ^{\\pa{Y}} P_I\\cN_k }_1},\n\\end{align}\nso it follows from \\eq{P-gP-} that\n\\beq\n\\E\\pa{\\norm{P_{-} ^{\\pa{Y}} P_{-} ^{\\pa{X}} P_I\\cN_k }_1}\\le C \\e^{-m \\max \\set{ \\dist\\pa{k, \\cS_X}, \\dist\\pa{k, \\cS_Y}}}.\n\\eeq\nSuppose, say, $\\max \\cS_X <\\min \\cS_Y $, and let $K= \\frac 12 \\pa{\\max \\cS_X + \\min \\cS_Y}$. Then, \n\\begin{align}\\notag\n\\E\\pa{\\norm{P_{-} ^{\\pa{Y}} P_{-} ^{\\pa{X}} P_I}_1}&\\le \\sum_{k\\le K} \\e^{-m\\dist\\pa{k, \\cS_Y}} +\n\\sum_{k\\ge K} \\e^{-m\\dist\\pa{k, \\cS_X}}\\\\\n& \\le C \\e^{-\\frac 1 2 m \\dist (X,Y)},\n\\end{align}\nwhere the last calculation is done as in \\cite[Eq. (3.25)]{EKS}, yielding \\eq{P-gP-2}. \n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lem:cordyn444}Let $X$ and $Y$ be local observables and $\\ell \\ge 1$. \n\n\n \\begin{enumerate}\n \n \\item We have \n\\begin{align}\\label{P-gP-3}\n\\E \\pa{\\sup_{I\\in G_I}\\norm{P_{-} ^{\\pa{X}}g(H) P_{+} ^{\\pa{\\cS_{X,\\ell}}}}_1} \\le C \\e^{-m \\ell}.\n\\end{align}\n\n\\item If $ \\ell \\le \\frac 1 2\\dist (X,Y)$,\nwe have\n\\begin{align}\\label{P-gP-4}\n\\E \\pa{\\sup_{g\\in G_I}\\norm{P_{+} ^{\\pa{\\cS_{Y,\\ell}^c}}g(H)P_{+} ^{\\pa{\\cS_{X,\\ell}^c}}}_1}\\le C \\e^{-m \\pa{\\dist \\pa{X,Y}-2\\ell}}.\\end{align}\n\n\\end{enumerate}\n\n\n\\end{lemma}\n\n\n\n\n\\begin{proof}\n Let $\\ell \\ge 1$ and $g \\in G_I$. If $\\cS_{X,\\ell}^c= \\emptyset$, \\eq{P-gP-3} is obvious since $P_{+} ^{\\pa{\\cS_{X,\\ell}}}=P_0$. If $\\cS_{X,\\ell}^c\\not= \\emptyset$, using \\eq{PPc} we get\n \\begin{align}\\label{PPc1}\n&\\norm{P_{-} ^{\\pa{X}}g(H) P_{+} ^{\\cS_{X,\\ell}}}_1 =\\norm{P_{-} ^{\\pa{X}}g(H) P_{-} ^{\\cS_{X,\\ell}^c} P_{+} ^{\\cS_{X,\\ell}}}_1 \\le\n\\norm{P_{-} ^{\\pa{X}}g(H) P_{-} ^{\\cS_{X,\\ell}^c} }_1,\n\\end{align}\nand \\eq{P-gP-3} follows from \\eq{PPc1} and \\eq{P-gP-}.\n\nSimilarly, using \\eq{PPc} twice, we get\n\\begin{align}\\notag\n\\norm{P_{+} ^{\\pa{\\cS_{Y,\\ell}^c}}g(H)P_{+} ^{\\pa{\\cS_{X,\\ell}^c}}}_1 & = \\norm{P_{+} ^{\\pa{\\cS_{Y,\\ell}^c}}P_{-} ^{\\pa{\\cS_{Y,\\ell}}}g(H) \nP_{-} ^{\\pa{\\cS_{X,\\ell}}}P_{+} ^{\\pa{\\cS_{X,\\ell}^c}}}_1\\\\\n& \\le \\norm{P_{-} ^{\\pa{\\cS_{Y,\\ell}}}g(H) \nP_{-} ^{\\pa{\\cS_{X,\\ell}}}}_1. 
\\begin{lemma}\\label{lem:cordyn444}Let $X$ and $Y$ be local observables and $\\ell \\ge 1$. \n\n\n \\begin{enumerate}\n \n \\item We have \n\\begin{align}\\label{P-gP-3}\n\\E \\pa{\\sup_{g\\in G_I}\\norm{P_{-} ^{\\pa{X}}g(H) P_{+} ^{\\pa{\\cS_{X,\\ell}}}}_1} \\le C \\e^{-m \\ell}.\n\\end{align}\n\n\\item If $ \\ell \\le \\frac 1 2\\dist (X,Y)$,\nwe have\n\\begin{align}\\label{P-gP-4}\n\\E \\pa{\\sup_{g\\in G_I}\\norm{P_{+} ^{\\pa{\\cS_{Y,\\ell}^c}}g(H)P_{+} ^{\\pa{\\cS_{X,\\ell}^c}}}_1}\\le C \\e^{-m \\pa{\\dist \\pa{X,Y}-2\\ell}}.\\end{align}\n\n\\end{enumerate}\n\n\n\\end{lemma}\n\n\n\n\n\\begin{proof}\n Let $\\ell \\ge 1$ and $g \\in G_I$. If $\\cS_{X,\\ell}^c= \\emptyset$, \\eq{P-gP-3} is obvious since $P_{+} ^{\\pa{\\cS_{X,\\ell}}}=P_0$. If $\\cS_{X,\\ell}^c\\not= \\emptyset$, using \\eq{PPc} we get\n \\begin{align}\\label{PPc1}\n&\\norm{P_{-} ^{\\pa{X}}g(H) P_{+} ^{\\cS_{X,\\ell}}}_1 =\\norm{P_{-} ^{\\pa{X}}g(H) P_{-} ^{\\cS_{X,\\ell}^c} P_{+} ^{\\cS_{X,\\ell}}}_1 \\le\n\\norm{P_{-} ^{\\pa{X}}g(H) P_{-} ^{\\cS_{X,\\ell}^c} }_1,\n\\end{align}\nand \\eq{P-gP-3} follows from \\eq{PPc1} and \\eq{P-gP-}.\n\nSimilarly, using \\eq{PPc} twice, we get\n\\begin{align}\\notag\n\\norm{P_{+} ^{\\pa{\\cS_{Y,\\ell}^c}}g(H)P_{+} ^{\\pa{\\cS_{X,\\ell}^c}}}_1 & = \\norm{P_{+} ^{\\pa{\\cS_{Y,\\ell}^c}}P_{-} ^{\\pa{\\cS_{Y,\\ell}}}g(H) \nP_{-} ^{\\pa{\\cS_{X,\\ell}}}P_{+} ^{\\pa{\\cS_{X,\\ell}^c}}}_1\\\\\n& \\le \\norm{P_{-} ^{\\pa{\\cS_{Y,\\ell}}}g(H) \nP_{-} ^{\\pa{\\cS_{X,\\ell}}}}_1. \\label{P-gP-4222}\n\\end{align}\nIf $ \\ell \\le\\frac 1 2 \\dist (X,Y)$, then $\\dist (\\cS_{X,\\ell},\\cS_{Y,\\ell})\\ge \\dist (X,Y) - 2\\ell $.\n In this case \\eq{P-gP-4} follows from \\eq{P-gP-4222} and \\eq{P-gP-}.\n\\end{proof}\n\n\n \n\\begin{lemma} \\label{lemPXtgY} Let $X,Y$ be local observables with $X^{+,+}=Y^{+,+}=0$.\nThen\n\\beq\\label{PXtgY}\n\\E\\pa{\\sup_{t\\in \\R}\\sup_{g\\in G_I}\\norm{\\pa{\\tau_t\\pa{ X}g(H)Y}_I}_1}\\le C\\norm{X}\\norm{Y} \\e^{-\\frac 1 8 m\\dist (X,Y)} .\n\\eeq\n\\end{lemma}\n\n\n\\begin{proof} Since\n \\beq\n \\norm{\\pa{\\tau_t\\pa{ X}g(H)Y}_I}_1= \\norm{\\pa{{ X}\\e^{-itH}g(H)Y}_I}_1 ,\n \\eeq\nit suffices to prove\n\\beq\\label{PXtgY99}\n\\E\\pa{\\sup_{g\\in G_I}\\norm{\\pa{ X g(H)Y}_I}_1}\\le C\\norm{X}\\norm{Y} \\e^{-\\frac 1 8 m\\dist (X,Y)} .\n\\eeq\n\n\n Let $X,Y$ be local observables with $X^{+,+}=Y^{+,+}=0$, and let $\\ell>0$ be given by $2 \\ell= \\dist (X,Y) $. Set $\\cS_1=\\cS_{X,\\frac \\ell 2}^c$, $\\cS_2=\\cS_{Y,\\frac \\ell 2}^c$. Given $g \\in G_I$, inserting $1=P_{-} ^{\\pa{\\cS_j}}+P_{+} ^{\\pa{\\cS_j}}$, $j=1,2$, we get\n\\beq\n X g(H)Y=\\sum_{a=\\pm;b=\\pm} { XP_{a} ^{\\pa{\\cS_1}}}g(H)P_{b} ^{\\pa{\\cS_2}}Y.\n\\eeq\nWe estimate the norms of the terms on the right hand side separately. If one of the indices $a,b$ equals $-$, say $a=-$, we get \n\\begin{align}\\notag\n&\\norm{\\pa{XP_{-} ^{\\pa{\\cS_1}}g(H)P_{b} ^{\\pa{\\cS_2}}Y}_I}_1 \\le \\|Y\\|\\norm{P_I XP_{-} ^{\\pa{\\cS_1}}g(H)}_1 \\\\ & \\notag \\quad \\le \\|Y\\|\\norm{P_I XP_{-} ^{\\pa{\\cS_1}}P_I}_1\n= \\|Y\\|\\norm{P_I P_{-} ^{\\pa{\\cS_1}} XP_{-} ^{\\pa{\\cS_1}}P_I}_1\\\\ & \\quad \\le \\norm{X} \\|Y\\| \\pa{\\norm{P_IP_{-} ^{\\pa{\\cS_1}}P_{-} ^{\\pa{X}}}_1 +\\norm{P_{-} ^{\\pa{X}} P_{-} ^{\\pa{\\cS_1}}P_I}_1},\\label{EPX-+-Y49}\n\\end{align}\nwhere we have used the fact that $[P_{-} ^{\\pa{\\cS_1}},X]=0$, $X^{+,+}=0$, and $g\\in G_I$. If $a=b=+$, we bound the corresponding contribution as \n\\beq\n\\norm{ \\pa{ XP_{+} ^{\\pa{\\cS_1}} g(H)P_{+} ^{\\pa{\\cS_2}}Y}_I}_1\\le \\|X\\|\\|Y\\|\\norm{P_{+} ^{\\pa{\\cS_1}}g(H)P_{+}^{\\pa{\\cS_2}}}_1.\n\\eeq\nUsing \\eqref{P-gP-2} and \\eqref{P-gP-4} we get \n\\begin{align}\\notag\n\\E\\pa{\\sup_{g\\in G_I} \\norm{\\pa{{ X}g(H)Y}_I}_1} &\\le C\\norm{X}\\norm{Y}\\pa{2 \\e^{- \\frac m 4 \\ell} + \\e^{- m \\pa{\\dist \\pa{X,Y}-\\ell }}}\\\\ & \\le C\\norm{X}\\norm{Y}\\e^{-\\frac 1 8 m\\dist \\pa{X,Y}} . \\label{EPX-+-Y}\n\\end{align}\n \\end{proof}\n \n \n The following lemma justifies Remark~\\ref{remwonder}.\n \n \\begin{lemma}\\label{lemwonder} Let $X,Y$ be local observables. Then for all intervals $K \\subset I$\n we have\n \\begin{align}\\label{decaycounter}\n \\E \\pa{\\sup_{t\\in \\R}\\abs{\\tr\\pa{\\tau^K_t(X)P_0 Y}_K }}\\le C \\|X\\| \\|Y\\| \\e^{-m\\dist (X,Y)}.\n \\end{align}\n \\end{lemma}\n \n \\begin{proof}\n Given $K\\subset I$, we have\n \\begin{align}\n\\tr\\pa{\\tau^K_t(X)P_0 Y}_K & = \\tr P_K\\tau_t(X)P_0 YP_K = \\tr P_0 YP_K \\tau_t(X)P_0\\\\ \\notag & = \\tr P_0 Y P_{-} ^{\\pa{Y}}P_K \\e^{itH} P_{-} ^{\\pa{X}}XP_0,\n\\end{align}\nwhere we used \\eq{Xzeta}, \\eq{P-SP}, and $P_K P_0=0$.
It follows that\n\\begin{align}\n\\abs{\\tr\\pa{\\tau^K_t(X)P_0 Y}_K}\\le \\|X\\| \\|Y\\| \\norm{ P_{-} ^{\\pa{Y}}P_K \\e^{itH} P_{-} ^{\\pa{X}}}_1.\n\\end{align}\nThe estimate \\eq{decaycounter} now follows from \\eq{P-gP-}.\n \\end{proof}\n\n\n\\subsection{Estimates with Fourier transforms}\n \nLet $H_\\omega$ be a disordered XXZ spin chain.\nGiven a function $f\\in C^\\infty_c(\\R)$, we write its Fourier transform as\n \\beq\\label{FTf}\n\\hat f(t)=\\tfrac 1 {2\\pi} \\int_\\R \\e^{itx} f(x) \\, \\d x,\\qtx{and recall } f(x)= \\int_\\R \\e^{-itx} \\hat f(t) \\, \\d t .\n\\eeq\n\n\nThe following lemma is an adaptation of an argument of Hastings \\cite{Hast,HK}, which combines the Lieb-Robinson bound with estimates on Fourier transforms.\n\n\\begin{lemma} \\label{lemHast} Let $\\alpha \\in (0,1)$, and consider a function $f\\in C^\\infty_c(\\R)$ such that\n\\beq\n\\abs{\\hat f(t)} \\le C_f \\e^{-m_f\\abs{t}^\\alpha} \\qtx{for all} \\abs{t}\\ge 1,\n\\eeq\nwhere $C_f$ and $m_f >0$ are constants.\nThen for all local observables $X$ and $Y$ we have\n\\begin{align}\\label{Htrickest}\n&\\norm{Xf(H)Y - \\int_\\R \\e^{-irH} Y \\tau_r\\pa{ X} \\hat f(r) \\, \\d r }\\\\ \\notag & \\qquad \\qquad \\qquad\n\\le C_1 \\norm{X} \\norm{Y}\\pa{1+ \\norm{\\hat f}_1} \\e^{- m_1\\pa{ \\dist (X,Y)}^\\alpha} ,\n\\end{align}\nwhere $C_1$ and $m_1>0$ are suitable constants (depending on $C_f$, $m_f$, and $\\alpha$),\nuniformly in $L$.\n\\end{lemma}\n\n\\begin{proof}\n\n\nWe have \n\\begin{align}\nXf(H)Y&= X \\pa{\\int_\\R \\e^{-irH} \\hat f(r) \\, \\d r}Y= \\int_\\R \\e^{-irH} \\tau_r\\pa{ X} Y\\hat f(r) \\, \\d r \\\\\n\\notag & =\\int_\\R \\e^{-irH}[ \\tau_r\\pa{ X} ,Y] \\hat f(r) \\, \\d r + \\int_\\R \\e^{-irH} Y \\tau_r\\pa{ X} \\hat f(r) \\, \\d r \n\\end{align}\n\n\n\n\nThe commutator in the first term can be estimated by the Lieb-Robinson bound (e.g. \\cite{NS}):\n\\beq\\label{LRbound}\n\\norm{[\\tau_r\\pa{X}, Y]}\\le C\\norm{X}\\norm{Y} \\min \\set{\\e^{- \\mu_1 \\pa{\\dist (X,Y)-v\\abs{r}}},1},\n\\eeq\nwhere $C$, $\\mu_1>0$, $v>0$ are constants, independent of $L$ and of the random parameter $\\omega$. We get \n\\begin{align}\n&\\norm{\\int_\\R \\e^{-irH}[ \\tau_r\\pa{ X} ,Y] \\hat f(r) \\, \\d r} \\\\ \\notag\n&\\le C \\norm{X} \\norm{Y}\\hskip-2pt\\pa{\\int_{\\abs{r} \\le \\frac {\\dist (X,Y)}{2v}}\\hskip-5pt \\e^{- \\mu_1 \\pa{\\dist (X,Y)-v\\abs{r}}} \\abs{\\hat f(r)} \\, \\d r + \\int_{\\abs{r} \\ge \\frac {\\dist (X,Y)}{2v}} \\abs{\\hat f(r)} \\, \\d r \\hskip-2pt}\\\\ \\notag\n&\\le C \\norm{X} \\norm{Y} \\pa{ \\norm{\\hat f}_1 \\e^{- \\frac{\\mu_1}2 \\dist (X,Y)} + \\int_{\\abs{r} \\ge \\frac {\\dist (X,Y)}{2v}} \\abs{\\hat f(r)} \\, \\d r }\\\\ \\notag\n&\\le C \\norm{X} \\norm{Y} \\pa{ \\norm{\\hat f}_1 \\e^{- \\frac{\\mu_1}2 \\dist (X,Y)} \n+ C_f \\e^{-\\frac {m_f}{2}\\pa{\\frac {\\dist (X,Y)}{2v}}^\\alpha}\n\\int_{\\R} \\e^{-\\frac {m_f}2 \\abs{r}^\\alpha} \\, \\d r },\n\\end{align}\nwhere we assumed \n${\\dist (X,Y)}\\ge{2v}$. The estimate \\eq{Htrickest} follows.\n\\end{proof}\n\n\nLemma~\\ref{lemHast} will be combined with the following lemma. \n\n\\begin{lemma}\\label{leminsertKf} Let $K=[\\Theta_0, \\Theta_2]$ and $f\\in C^\\infty_c(\\R)$ with $\\supp f \\subset [a_f,b_f]$. 
Then for all local observables $X$ and $Y$ we have\n\\begin{align}\\label{KKfK}\n\\int_\\R \\pa{ \\e^{-irH} Y \\tau_r\\pa{ X} }_K \\hat f(r) \\, \\d r = \\int_\\R \\pa{ \\e^{-irH} Y P_{K_f} \\tau_r\\pa{ X}}_K \\hat f(r) \\, \\d r,\n\\end{align}\nwhere \n\\beq\\label{Kf}\nK_f = K + K -\\supp f \\subset [2\\Theta_0 - b_f, 2\\Theta_2- a_f ].\n\\eeq\n\\end{lemma}\n\n\\begin{proof}\nLet $K=[\\Theta_0, \\Theta_2]$, $f\\in C_c(\\R)$ with $\\supp f \\subset [a_f,b_f]$. Then for all $E,E^\\prime \\in K$ we have\n\\begin{align}\\notag\n& P_E \\pa{ \\int_\\R \\e^{-irH} Y \\tau_r\\pa{ X} \\hat f(r) \\, \\d r } P_{E^\\prime}=\\int_\\R P_E\\, \\e^{-irH} Y \\e^{irH}{ X} \\e^{-irH}P_{E^\\prime}\\hat f(r) \\, \\d r\\\\ & \\quad = P_E Y \\pa{ \\int_\\R \\e^{ir(H-E -E^\\prime)}\\hat f(r) \\, \\d r}{ X} P_{E^\\prime}= P_E Y f(E+E^\\prime - H) { X} P_{E^\\prime} \\notag \\\\ & \\quad = \\notag\n P_E Y P_{K_f} f(E+E^\\prime - H) { X} P_{E^\\prime} \\\\ & \\quad = P_E \\pa{ \\int_\\R \\e^{-irH} YP_{K_f} \\tau_r\\pa{ X} \\hat f(r) \\, \\d r } P_{E^\\prime},\\label{P0mag}\n\\end{align}\nwhere $K_f$ is given in \\eq{Kf}. The equality \\eq{KKfK} follows.\n\\end{proof}\n\n\n\\subsection{Counterterms} Given vectors $\\psi_1,\\psi_2\\in \\cH\\up{L}$, we denote by $T(\\psi_1,\\psi_2)$ the rank one operator $T(\\psi_1,\\psi_2)= \\scal{\\psi_2, \\cdot}\\psi_1$. Recall \n\\[ \\norm{T(\\psi_1,\\psi_2)} =\\norm{T(\\psi_1,\\psi_2)}_1=\\norm{\\psi_1}\\norm{\\psi_2}.\\]\nNote that for all observables\n $X$ and $Y$ we have \n\\begin{align}\nXP_0\\up{L} Y& = T\\pa{Y^*\\psi_0\\up{L}, X\\psi_0\\up{L}}. \\label{Tnotation}\n\\end{align}\n\n\n\n\\begin{lemma}\\label{lem:spillterms} Let $H_\\omega$ be a random XXZ spin chain. Consider an interval $K\\subset [1 -\\tfrac 1 \\Delta, 1 +\\tfrac 1 \\Delta]$. Then there exist constants $\\gamma_K >0$ and $R_K$ such that for all $i,j \\in \\Z$ with $\\abs{i-j}\\ge R_K$, we have\n\\begin{align}\\label{eq:1term}\n \\E\\pa{\\liminf_{L\\to \\infty}\\norm{ \\pa{\\sigma^x_i P\\up{L}_0\\sigma^x_j}_K}}\n \\ge \\gamma_K>0,\n\\end{align}\n\\beq\\label{eq:2terms0}\n\\E\\pa{\\liminf_{L\\to \\infty}\\norm{ \\pa{\\ \\sigma^x_i P\\up{L}_0\\sigma^x_j \\pm \\sigma^x_j P\\up{L}_0\\sigma^x_i}_K}_{2}^2}\\ge \\gamma_K,\n\\eeq\nand\n\\begin{align}\\label{eq:4terms1}\n\\E\\pa{\\liminf_{L\\to \\infty} \\lim_{T\\to \\infty} \\tfrac 1 T \\int_0^T \\norm{\\pa{A\\up{L}(t) -\\pa{A\\up{L}(t)}^*}_K}_2^2\\, \\d t} \\ge 2 \\gamma_K,\n\\end{align}\nwhere\n\\begin{align}\\label{eq:4terms12}\n A\\up{L}(t)& = \\tau\\up{L}_t\\!\\pa{ \\sigma^x_i }P_0\\up{L}\\sigma^x_j +\\tau\\up{L}_t\\!\\pa{\\sigma^x_j } P_0\\up{L} \\sigma^x_i . \\end{align}\n\\end{lemma}\n\n\\begin{proof}Let $H$ be a random XXZ spin chain, and let $\\mathcal{N} = \\sum_{i\\in \\Z} \\mathcal{N}_i$ denote the total (down) spin number operator on $\\cH$. The self-adjoint operator $\\mathcal{N}$ has pure point spectrum. Its eigenvalues are $N=0,1, 2, \\ldots$, and the corresponding eigenspaces $\\cH_N$ are spanned by all the spin basis states with $N$ down spins. Since $[H, \\mathcal{N}]=0$, the eigenspaces $\\cH_N$ are left invariant by $H$. 
The restriction $H_N$ of $H$ to \n $\\cH_N$ is unitarily equivalent to an $N$-body discrete Schr\\\"odinger operator restricted to the fermionic subspace (e.g., \\cite{FS,EKS}).\n\nIn particular, $H_1=H_{\\omega,1}$ is unitarily equivalent to a one-dimensional Anderson model:\n\\begin{align}\\label{AndH}\nH_{\\omega,1} \\cong -\\tfrac{1}{2\\Delta}\\mathcal{L}_1+\\pa{1-\\tfrac{1}{\\Delta}} +\\lambda V_\\omega\n\\qtx{on}\\ell^2(\\Z),\n\\end{align}\nwhere $\\cL_1$ is the graph Laplacian on $\\ell^2(\\Z)$ and $V_\\omega$ is the random potential given by $V_\\omega(i)=\\omega_i$ for $i \\in \\Z$. \n\nThe same is true for restrictions to finite intervals $[-L,L]$, where we have the unitary equivalence \n\\begin{align}\\label{AndHL}\nH_{\\omega,1}\\up{L} \\cong -\\tfrac{1}{2\\Delta}\\mathcal{L}_1\\up{L}+\\pa{1-\\tfrac{1}{\\Delta}} +\\lambda V_\\omega + \\left(\\beta-\\tfrac{1}{2}(1-\\tfrac{1}{\\Delta})\\right)\\pa{ \\chi_{\\set{-L}}+\\chi_{\\set{L}}},\n\\end{align}\nacting on $\\ell^2([-L,L])$,\nwhere now $\\cL_1\\up{L}$ is the graph Laplacian on $\\ell^2([-L,L])$\n (e.g., \\cite{EKS}). Note that $H_{\\omega,1}\\up{L}$ is the restriction of $H_{\\omega,1}$ to \n $\\ell^2([-L,L])$, up to a boundary term.\n \nIn what follows we will consider these unitary equivalences as equalities. In this case,\nif $i\\in [-L,L]$ we have $\\sigma^x_i \\psi_0\\up{L} =\\delta_i \\in \\ell^2([-L,L])$. Note that for the infinite volume Anderson model in \\eq{AndH} we have\n\\beq\n\\sigma\\pa{H_1}\\supset \\Sigma_1:= [1 -\\tfrac 1 \\Delta, 1 +\\tfrac 1 \\Delta] \\quad \\text{almost surely}.\n\\eeq \n\n The following holds for all\n$\\omega \\in [0,1]^{\\Z}$: We have $\\lim_{L\\to \\infty} H_1\\up{L}= H_1$ in the strong resolvent sense, and hence $\\lim_{L\\to \\infty} f\\pa{H_1\\up{L}}= f\\pa{H_1}$ strongly for all bounded continuous functions $f$ on $\\R$. (For an interval $J\\subset \\Z$, we consider $\\ell^2(J)$ as a subspace of $ \\ell^2(\\Z)$ in the obvious way: $ \\ell^2(\\Z) =\\ell^2(J)\\oplus \\ell^2(\\Z\\setminus J)$.)\n In particular, for $f$ real valued with $\\norm{f}_\\infty\\le 1$, \n\\beq\\label{everywherelim}\n\\sup_L \\norm{ f(H_1\\up{L}) \\delta_u}\\le 1 \\sqtx{and} \\lim_{L\\to \\infty} f\\pa{H_1\\up{L}}\\delta_u = f\\pa{H_1}\\delta_u \\sqtx{for all} u \\in \\Z.\n\\eeq\n Moreover, \n\\begin{align}\\notag\n\\lim_{L\\to \\infty} \\E\\pa{ \\norm{ f(H_1\\up{L}) \\delta_u}^2}& =\\E\\pa{ \\norm{ f(H_1) \\delta_u}^2}\n=\\E\\pa{ \\scal{\\delta_u, \\pa{f(H_1)}^2 \\delta_u}}\\\\\n& = \\E\\pa{ \\scal{\\delta_0, \\pa{f(H_1)}^2 \\delta_0}}= \\int f^2(t) \\, \\d\\eta(t), \\label{conveta}\n\\end{align}\nwhere $\\eta$ is the density of states measure for the Anderson model $H_1$.\nIt also follows from \\eq{everywherelim} by bounded convergence that\n\\begin{align}\\label{limLinfty}\n\\lim_{L\\to \\infty} \\E\\pa{\\norm{ f(H_1\\up{L}) \\delta_j} \\norm{ f(H_1\\up{L}) \\delta_i}}=\\E\\pa{\\norm{ f(H_1) \\delta_j} \\norm{ f(H_1) \\delta_i}}.\n\\end{align}\n\nWe now fix a function $f\\in C_c(\\R)$ such that $\\supp f \\subset K\\cap \\Sigma_1$ and $\\chi_{K^{\\prime}} \\le f \\le\\chi_{K\\cap \\Sigma_1}$ for some nonempty interval $K^\\prime \\subset K\\cap \\Sigma_1$.
Note that\n\\beq\\label{convetaD}\nD:= \\int f^2(t) \\, \\d\\eta(t) \\ >0,\n\\eeq\n\n Given $i,j\\in \\Z$, if $i,j\\in [-L,L] $, we have \n\\begin{align}\\notag\n&\\norm{ \\pa{\\sigma^x_i P\\up{L}_0\\sigma^x_j}_K}=\\norm{ P_K\\up{L} \\sigma^x_j \\psi_0\\up{L}} \\norm{ P_K\\up{L} \\sigma^x_i \\psi_0\\up{L}}\\\\ &\\qquad \\quad =\\norm{ P_K\\up{L} \\delta_j} \\norm{ P_K\\up{L} \\delta_i}\\ge \n\\norm{ f(H_1\\up{L}) \\delta_j} \\norm{ f(H_1\\up{L}) \\delta_i}, \\label{intf}\n\\end{align}\nand hence it follows from \\eq{everywherelim} that\n\\begin{align}\n&\\liminf_{L\\to\\infty}\\norm{ \\pa{\\sigma^x_i P\\up{L}_0\\sigma^x_j}_K} \\ge \n\\norm{ f(H_1) \\delta_j} \\norm{ f(H_1) \\delta_i}, \\label{intf58}\n\\end{align}\n\n\nGiven $u\\in \\Z$, let $H_1\\up{u,L}$ denote the restriction of $H_1$ to the interval $[u-L,u+L]=u +[-L,L]$, and note that \\eq{everywherelim} and \\eq{conveta} hold with $H_1\\up{u,L}$ substituted for $H_1\\up{L}=H_1\\up{0,L}$. In particular,\n \\begin{align}\\label{epsL0}\n\\lim_{L\\to \\infty} \\eps\\up{u,L}=0, \\qtx{where} \\eps\\up{u,L}= \\E\\pa{\\norm{ \\pa{f(H_1\\up{u,L})-f(H_1)} \\delta_u}},\n \\end{align}\nand note that $\\eps_L= \\eps\\up{u,L}$ is independent of $u\\in \\Z$. Moreover,\n\\beq\\label{1>2}\n\\E\\pa{\\norm{ f(H_1 )\\delta_u} }\\ge \\E\\pa{\\norm{ f(H_1 )\\delta_u}^2 }.\n\\eeq\n\nIt follows that for all $i,j\\in \\Z$ and $ L\\in \\N$, with $ \\abs{i-j} \\ge 3L$ we have ($\\eps_L \\le 1$)\n\\begin{align}\\notag \n&\\E\\pa{\\norm{ f(H_1) \\delta_j} \\norm{ f(H_1) \\delta_i}}\n\\ge \\E\\pa{\\norm{ f(H_1\\up{j,L}) \\delta_j} \\norm{ f(H_1\\up{i,L}) \\delta_i}}- 2 \\eps_L \\\\ \\notag &\\quad \\quad\n= \\E\\pa{\\norm{ f(H_1\\up{j,L}) \\delta_j} } \\E\\pa{ \\norm{ f(H_1\\up{i,L}) \\delta_i}}- 2\\eps_L \n\\\\ \\notag &\\quad \\quad\n\\ge \\E\\pa{\\norm{ f(H_1 )\\delta_j} } \\E\\pa{ \\norm{ f(H_1) \\delta_i}}- 4\\eps_L \n\\\\ \\notag &\\quad \\quad\n\\ge \\E\\pa{\\norm{ f(H_1 )\\delta_j}^2 } \\E\\pa{ \\norm{ f(H_1) \\delta_i}^2}-4\\eps_L \\\\ &\\quad \\quad\n= \\E\\pa{\\norm{ f(H_1 )\\delta_0}^2 }^2- 4\\eps_L \\ge D^2 - 4\\eps_L\\ge \\tfrac 12 D^2 \\label{Effij}\n \\end{align}\n where we used \\eq{epsL0}, the fact that the collections of random variables $\\set{\\omega_k}_{k\\in [j-L,j+L]}$ and $\\set{\\omega_s}_{s\\in [i-L,i+L]}$ are independent, used \\eq{epsL0} again, used \\eq{1>2}, and the last inequality follows from \\eq{conveta}, \\eq{convetaD}, and \\eq{epsL0}, taking $L$ sufficiently large. In particular, there exists $\\tilde R$ such that \\eq{Effij}\nholds if $\\abs{i-j} \\ge \\tilde R$.\n\nIt follows from \\eq{intf58} and \\eq{Effij} that for $\\abs{i-j} \\ge \\tilde R$ we have\n\\begin{align}\n \\E \\pa{ \\liminf_{L\\to \\infty}\\pa{\\norm{ P\\up{L}_K \\sigma^x_j \\psi_0\\up{L}} \\norm{ P_K\\up{L} \\sigma^x_i \\psi_0\\up{L}}}} \\ge \\tfrac 12 D^2,\n\\end{align}\nwhich is \\eq{eq:1term}.\n\nNote that $\\sqrt{f} \\in C_c(\\R)$ and $\\chi_{K^{\\prime}} \\le f\\le \\sqrt{f} \\le\\chi_{K\\cap \\Sigma_1}$. Given an observable $X$ we have\n\\begin{align}\\notag\n&\\norm{X_K}_{2}^2=\\norm{P_K\\up{L}XP_K\\up{L}}_{2}^2=\\tr \\pa{P_K\\up{L}X^*P_K\\up{L}X P_K\\up{L}} \\\\ &\\quad\\notag \\ge \\tr \\pa{P_K\\up{L}X^*f(H\\up{L})X P_K\\up{L}}= \\tr \\pa{\\sqrt{f}(H\\up{L})X P_K\\up{L}X^*\\sqrt{f}(H\\up{L})}\\\\ &\\quad\\ge \\tr \\pa{\\sqrt{f}(H\\up{L})X f(H\\up{L})X^*\\sqrt{f}(H\\up{L})} = \\norm{\\sqrt{f}(H\\up{L})X \\sqrt{f}(H\\up{L})}_{2}^2. 
\\label{eq:Ptof}\n\\end{align}\nThus, we can estimate \n\\begin{align}\\notag\n&\\norm{ \\pa{\\sigma^x_i P_0\\up{L}\\sigma^x_j \\pm \\sigma^x_j P_0\\up{L}\\sigma^x_i}_K}_{2}^2 \\\\ & \\notag \\quad \\ge \\norm{\\sqrt{f}({H_1\\up{L}}) \\pa{\\sigma^x_i P_0\\up{L}\\sigma^x_j \\pm \\sigma^x_j P_0\\up{L}\\sigma^x_i}\\sqrt{f}({H_1\\up{L}})}_{2}^2\\\\ & \\notag \\quad = \\norm{T\\pa{\\sqrt{f}({H_1\\up{L}}) \\delta_i, \\sqrt{f}({H_1\\up{L}})\\delta_j}\\pm T\\pa{ \\sqrt{f}({H_1\\up{L}}) \\delta_j,\\sqrt{f}({H_1\\up{L}})\\delta_i}}_2^2 \\\\ & \\quad = \\notag\n2 \\pa{\\norm{ \\sqrt{f}({H_1\\up{L}}) \\delta_i}^2\\norm{ \\sqrt{f}({H_1\\up{L}}) \\delta_j}^2\\pm \\Rea \\pa{\\scal{\\delta_j, f ({H_1\\up{L}})\\delta_i }}^2 }\\\\ & \\quad \\ge 2 \\pa{\\norm{ {f}({H_1\\up{L}}) \\delta_i}^2\\norm{ {f}({H_1\\up{L}}) \\delta_j}^2- \\abs{\\scal{\\delta_j, f ({H_1\\up{L}})\\delta_i }} }.\n\\label{sps}\n\\end{align}\nIt follows from \\eq{sps} and \\eq{everywherelim} that\n\\begin{align}\\notag\n&\\liminf_{L\\to \\infty} \\norm{ \\pa{\\sigma^x_i P_0\\up{L}\\sigma^x_j \\pm \\sigma^x_j P_0\\up{L}\\sigma^x_i}_K}_{2}^2 \\\\ & \\qquad \\qquad \\ge 2 \\pa{\\norm{ {f}({H_1}) \\delta_i}^2\\norm{ {f}({H_1}) \\delta_j}^2- \\abs{\\scal{\\delta_j, f ({H_1})\\delta_i }} }.\\label{sps2}\n\\end{align}\n\nGiven a scale $\\ell$ and $ \\abs{i-j} \\ge 3\\ell$, we have \n\\begin{align}\\label{sps45}\n\\abs{\\scal{\\delta_j, f (H_1)\\delta_i }}= \\abs{\\scal{\\delta_j, \\pa{ f (H_1)-f(H_1\\up{i,\\ell})}\\delta_i }}\\le\n\\norm{\\pa{ f (H_1)-f(H_1\\up{i,\\ell})}\\delta_i }\n\\end{align}\nSince $\\E\\pa{\\norm{ {f}(H_1) \\delta_i}\\norm{ {f}(H_1) \\delta_j}}\\le \\pa{\\E\\pa{\\norm{ {f}(H_1) \\delta_i}^2\\norm{ {f}(H_1) \\delta_j}^2}}^{\\frac 12}$,\nit follows from \\eq{sps2}, \\eq{sps45}, \\eq{Effij} and \\eq{epsL0}, that there exists $\\ell_1$, such that for $ \\abs{i-j} \\ge 3\\ell_1$ we have\n\\begin{align}\\notag\n&\\E \\pa{\\liminf_{L\\to \\infty} \\norm{ \\pa{\\sigma^x_i P_0\\up{L}\\sigma^x_j \\pm \\sigma^x_j P_0\\up{L}\\sigma^x_i}_K}_{2}^2} \\\\ & \\notag \\quad \\ge 2 \\pa{\\pa{\\E\\pa{\\norm{ {f}(H_1) \\delta_i}\\norm{ {f}(H_1) \\delta_j}}}^2 -\\E\\pa{\\norm{\\pa{ f (H_1)-f(H_1\\up{i,\\ell_1})}\\delta_i }} } \\\\ & \\quad \\ge 2 \\pa{ \\tfrac 1 4 D^4 -\\eps_{\\ell_1}}\\ge \\tfrac 1 4 D^4 .\n\\end{align}\nThe estimate \\eq{eq:2terms0} is proven.\n\nNow let $A\\up{L}(t)$ be as in \\eq{Zt} (we mostly omit $L$ from the notation), and let\n\\begin{align}\\notag\nZ\\up{L}(t) &= \\pa{A\\up{L}(t) -\\pa{A\\up{L}(t)}^*}_K \\\\\n&= \\e^{itH} \\pa{ \\sigma^x_i P_0\\sigma^x_j +\\sigma^x_j P_0 \\sigma^x_i }_K - \\pa{ \\sigma^x_i P_0\\sigma^x_j +\\sigma^x_j P_0 \\sigma^x_i }_K\\e^{-itH} \\notag \\\\\n&= \\e^{itH} A_K - A_K\\e^{-itH} = B_t - B^*_t, \\label{Zt}\n\\end{align}\nwhere\n\\begin{align}\nA= A\\up{L}(0)= \\sigma^x_i P_0\\sigma^x_j +\\sigma^x_j P_0 \\sigma^x_i = A^*\\qtx{and} B_t= \\e^{itH} A_K.\n\\end{align}\nWe have\n\\begin{align}\n\\norm{Z\\up{L}(t)}_2^2& = \\norm{B_t - B^*_t}_2^2\\notag \\\\\n& =\\tr \\pa{B_tB^*_t}+\\tr \\pa{B^*_tB_t} - \\tr \\pa{B_tB_t} - \\tr \\pa{B^*_t B^*_t}\\notag \\\\\n& =2 \\norm{A_K}^2_2 - 2 \\Rea \\tr \\pa{ P_K\\e^{itH} A P_K\\e^{itH} AP_K}.\n\\end{align}\nSince\n\\begin{align}\n\\tr \\pa{ P_K\\e^{itH} A P_K\\e^{itH} AP_K}= \\sum_{E,E^\\prime \\in \\sigma_K} \\e^{it(E+ E^\\prime)} \\tr \\pa{ P_EA P_{E^\\prime} AP_E},\n\\end{align}\n and $0\\notin K$, and $\\lim_{T\\to \\infty} \\tfrac 1 T \\int_0^T \\e^{it s} \\, \\d t =0$ if $s\\ne 0$,we conclude that\n \\begin{align}\n\\lim_{T\\to \\infty} \\tfrac 1 T 
\\int_0^T \\norm{Z\\up{L}(t)}_2^2\\, \\d t = 2 \\norm{A_K}^2_2= 2 \\norm{\\pa{\\sigma^x_i P_0\\sigma^x_j +\\sigma^x_j P_0 \\sigma^x_i}_K}^2_2.\n \\end{align}\nThe estimate \\eq{eq:4terms1} now follows from \\eq{eq:2terms0}.\n\\end{proof}\n\n\n\\section{Optimality of the droplet spectrum}\\label{secopt}\nWe are ready to prove Theorem~\\ref{thmrigid}.\n\n\n\n \\begin{proof}[Proof of Theorem~\\ref{thmrigid}]\n \n Suppose Property DL is valid for a disordered XXZ spin chain $H$\nwith $\\Theta_1 > 2 \\Theta_0$. Let $K= [\\Theta_0, \\Theta_2]$, where $\\Theta_2$ is chosen so that $ \\Theta_0 < \\Theta_2 < \\tfrac 1 2 \\Theta_1 $ (possible since $\\Theta_1 > 2 \\Theta_0$), and set $\\eps= \\min \\set{ \\Theta_1 - 2 \\Theta_2, \\Theta_0} >0$. We pick and fix a Gevrey class function $h$ such that \n \\[0\\le h\\le 1,\\; \\supp h \\subset (-\\eps,\\eps),\\; h(0)=1, \\sqtx{and} \\abs{\\hat h(t)}\\le C \\e^{-c\\abs{t}^{\\frac 12}}\\sqtx{for all} t\\in\\R,\\]\nin particular, $ \\norm{\\hat h}_1<\\infty$. Note that $P_0= h(H)$.\n \n \n \n Let $X,Y$ be local observables with $X^{+,+}=Y^{+,+}=0$. It follows from Lemmas~\\ref{lemHast} and \\ref{leminsertKf} that \n \\begin{align}\n\\norm{ \\pa{XP_0Y}_K} &= \\norm{ \\pa{Xh(H)Y}_K}\\\\ \\notag &\n\\le C \\norm{X} \\norm{Y}\\e^{- m_1\\pa{ \\dist (X,Y)}^{\\frac 12}} +C^\\prime \\sup_{r\\in \\R}\\norm { \\pa{Y P_{K_h} \\tau_r\\pa{ X} }_K} ,\n \\end{align}\n where \n \\beq\n K_h \\subset [2\\Theta_0 - \\eps, 2\\Theta_2+ \\eps ]\\subset [\\Theta_0,\\Theta_1]=I.\n \\eeq\n \nIt follows from Lemma~\\ref{lemPXtgY} that\n \\begin{align}\\notag\n \\E\\pa{\\sup_{r\\in \\R}\\norm { \\pa{Y P_{K_h} \\tau_r\\pa{ X} }_K}}&\\le \\E\\pa{\\sup_{r\\in \\R}\\norm { \\pa{Y P_{K_h} \\tau_r\\pa{ X} }_I}}\\\\ & \\le C\\norm{X}\\norm{Y} \\e^{-\\frac 1 8 m\\dist (X,Y)} ,\n \\end{align}\nso we conclude that \n\\begin{align}\\label{estcont}\n\\E \\pa{\\norm{ \\pa{XP_0Y}_K}}\\le C\\norm{X}\\norm{Y} \\e^{- m_2 \\pa{ \\dist (X,Y)}^{\\frac 12}},\n\\end{align}\nwhere $m_2= \\min\\set{m_1,\\frac 1 8 m}>0$.\n\nFor all $k\\in \\Z$ we have $\\sigma^x_k=\\pa{\\sigma^x_k}^*$, $\\pa{\\sigma^x_k}^{+,+}=0$, and $\\norm{\\sigma^x_k}=1$. Thus it follows from \\eq{estcont} that for all $i, j \\in [-L,L]$ we have (we put $L$ back in the notation)\n\\begin{align}\\label{estcont9}\n\\E \\pa{\\norm{ \\pa{ \\sigma^x_i P\\up{L}_0 \\sigma^x_j }_K}}\\le C\\e^{- m_2 \\pa{ \\abs{i-j}}^{\\frac 12}},\n\\end{align}\nuniformly in $L$.\n \nIf $H$ is a random XXZ spin chain, \\eq{estcont9} contradicts \\eq{eq:1term} in Lemma~\\ref{lem:spillterms}\nif $\\abs{i-j}$ is sufficiently large, since by Fatou's Lemma \\eq{estcont9} also bounds $\\E \\pa{\\liminf_{L\\to \\infty}\\norm{ \\pa{ \\sigma^x_i P\\up{L}_0 \\sigma^x_j }_K}}$. Thus we conclude that we cannot have $\\Theta_1> 2\\Theta_0$, that is, we must have $\\Theta_1\\le2\\Theta_0$.\n\\end{proof}\n\n\n\n\\section{Non-spreading of information}\\label{secnonsp}\n\nIn this section we prove Theorem~\\ref{corquasloc}.\n\n\\begin{proof}[Proof of Theorem \\ref{corquasloc}]\nLet $H_\\omega$ be a disordered XXZ spin chain satisfying Property DL. Let\n$X$ be a local observable with support $\\cS=\\cS_X=[s_X,r_X]$.
In view of \\eq{X++0} we can assume $X^{+,+}=0$.\n\nWe take $\\ell \\ge 1$, and\nset (recall \\eq{cSell})\n\\begin{align}{\\mathcal O}&=[-L,L]\\setminus {\\cS}_{\\frac \\ell 2} = [-L, s_X - \\tfrac \\ell 2) \\cup (r_X +\\tfrac \\ell 2,L] \\\\ \\notag\n \\mathcal T& = {\\cS}_{ \\ell }\\cap {\\mathcal O}= [s_X-\\ell, s_X - \\tfrac \\ell 2)\\cup (r_X +\\tfrac \\ell 2,r_X+\\ell]\n \\end{align}\n \n \n \n\n \n \n We start by proving that\n\\beq\\label{compproof2999}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{ \\pa{{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0} }P_+^{\\pa{\\mathcal O}}} - \\tau_t\\pa{ X }}_{I_0}}_1} \\le C \\|X\\|\\e^{- \\frac{1}{16} m\\ell}.\n \\eeq\n Given an observable $Z$,\nwe write $Z_{I_0}=Z_1+Z_2+Z_3+Z_4$, where \n\\beq\nZ_1=P_0 ZP_0;\\ Z_2=P_I Z P_I= Z_I; \\ Z_3=P_0 ZP_I; \\ Z_4=P_I ZP_0.\n\\eeq\n Since $\\pa{X_i}_{I_0}=\\pa{X_{I_0}}_i=X_i$ and $\\tau_t(X_i)=\\pa{\\tau_t(X)}_i$ for $i=1,2,3,4$,\n $X_1 = X^{+,+}_1= 0$, and $\\pa{X_4}^*=\\pa{X^*}_3$, \nto prove \\eq{compproof2999} it suffices to prove\n \\beq\\label{compproof2}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{ \\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0} }P_+^{\\pa{\\mathcal O}}}_i - \\tau_t\\pa{ X_i }}_1} \\le C \\|X\\|\\e^{- \\frac{1}{16} m\\ell}\n\\eeq\nin the cases $i=2,3$. \n \nIf $i=3$, we have \n\\begin{align} \\notag\n&\\norm{\\tau_t\\pa{ X_3 }-\\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0}}P_+^{\\pa{\\mathcal O}}}_3}_1 =\\norm{\\pa{\\tau_t\\pa{ X_{I_0} }-P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0} }P_+^{\\pa{\\mathcal O}}}_3}_1\\\\ \\notag &\\quad \\qquad =\\norm{ \\pa{\\tau_t\\pa{ X_{I_0}}P_-^{{\\mathcal O}}}_3}_1= \\norm{P_0 X P_-^{(X)}\\e^{-itH} P_I P_-^{{\\mathcal O}} P_I}_1\n\\\\ & \\quad \\qquad \\le \\norm{X}\\norm{P_-^{(X)} \\e^{-itH}P_I P_-^{{\\mathcal O}} }_1, \\label{Pplusinsert} \n\\end{align} \nwhere we used $P_0 X=P_0 X P_-^{(X)}$ since $X^{+,+}=0$. Thus it follows from \\eqref{P-gP-} that\n\\begin{align}\n\\E\\pa{\\sup_{t\\in \\R}\\norm{\\tau_t\\pa{ X_3 }-\\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0}}P_+^{\\pa{\\mathcal O}}}_3}_1}\\le C \\|X\\|\\e^{- \\frac{1}{2} m\\ell}.\n\\end{align}\nIf $i=2$, recall that $Z_2=Z_I$. 
Since $P_I P_+^{\\pa{\\mathcal O}}P_0 = P_I P_0 =0$, we have \n\\beq\n\\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0} }P_+^{\\pa{\\mathcal O}}}_I= \\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_I}P_+^{\\pa{\\mathcal O}}}_I.\n\\eeq\nThus\n\\begin{align}\\notag \n&\\norm{\\tau_t\\pa{ X_I}-\\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0} }P_+^{\\pa{\\mathcal O}}}_I}_1\\\\ & \\qquad \\qquad = \\notag \\norm{\\pa{\\tau_t\\pa{ X_I } P_-^{{\\mathcal O}}}_I+ \\pa{P_-^{{\\mathcal O}}\\tau_t\\pa{ X_I } P_+^{\\pa{\\mathcal O}}}_I}_1\\\\ &\\qquad \\qquad \\le \\norm{\\pa{\\tau_t\\pa{ X_I } P_-^{{\\mathcal O}}}_I}_1+\n\\norm{\\pa{ P_-^{{\\mathcal O}}\\tau_t\\pa{ X_I }}_I}_1 \\notag \\\\ &\\qquad \\qquad =\\norm{\\pa{\\tau_t\\pa{ X_I } P_-^{{\\mathcal O}}}_I}_1+\\norm{\\pa{\\tau_t\\pa{ X^*_I } P_-^{{\\mathcal O}}}_I}_1.\n\\end{align}\nSince \n\\begin{align}\\label{eq:spplfk}\n\\norm{\\pa{\\tau_t\\pa{ X_I }P_-^{{\\mathcal O}}}_I}_1= \\norm{\\pa{\\tau_t\\pa{ X } P_I P_-^{{\\mathcal O}}}_I}_1,\n\\end{align}\nit follows from Lemma \\ref{lemPXtgY} that\n\\begin{align}\n\\E\\pa{\\sup_{t\\in \\R}\\norm{\\tau_t\\pa{ X_I}-\\pa{P_+^{\\pa{\\mathcal O}}\\tau_t\\pa{ X_{I_0} }P_+^{\\pa{\\mathcal O}}}_I}_1}\\le C \\|X\\|\\e^{- \\frac{1}{16} m\\ell}.\n\\end{align}\nThis finishes the proof of \\eqref{compproof2}, and hence of \\eq{compproof2999}.\n\n\n \n We now observe that for all observables $Z$ we have\n \\beq\\label{tildeZ}\nP_+^{({\\mathcal O})}Z P_+^{\\pa{\\mathcal O}}=\\tilde Z P_+^{\\pa{\\mathcal O}}=P_+^{\\pa{\\mathcal O}} \\tilde Z ,\n\\eeq\nwhere $\\tilde Z $ is an observable with $\\cS_{\\tilde Z}={\\cS}_{\\frac \\ell 2}$ and $\\|\\tilde Z \\|\\le \\|Z\\|$. To see this, we write the Hilbert space as $\\mathcal{H}\\up{L} = \\mathcal{H}_{\\mathcal O} \\otimes \\mathcal{H}_{\\cS_{\\frac \\ell 2} }$, and let $\\psi_{\\mathcal O}= \\otimes_{i\\in {\\mathcal O}} \\, e_{+}$ be the all spins up vector in $\\mathcal{H}_{{\\mathcal O}}$. We define $T: \\mathcal{H}_{{\\cS}_{\\frac \\ell 2} }\\to \\mathcal{H}\\up{L}$ by $T \\eta= \\psi_{\\mathcal O} \\otimes \\eta$ and\n$R:\\mathcal{H}\\up{L} \\to \\mathcal{H}_{{\\cS}_{\\frac \\ell 2} }$ by $P_+^{(\\mathcal{O})} \\varphi = \\psi_{\\mathcal O} \\otimes R\\varphi$, i.e., $P_+^{(\\mathcal{O})} = T R$. Note that $\\norm{T},\\norm{R}\\le 1$. Given an observable $Z$, we define $\\hat Z :\\mathcal{H}_{{\\cS}_{\\frac \\ell 2} } \\to\\mathcal{H}_{{\\cS}_{\\frac \\ell 2} }$ by $ \\hat Z= RZT$. Then $\\tilde Z= I_{\\mathcal{H}_{\\mathcal O} } \\otimes \\hat Z$ satisfies \\eq{tildeZ}.\n\n\nIt follows from \\eq{compproof2999} and \\eq{tildeZ} that\n\\beq\\label{compproof2111}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{ \\pa{P_+^{\\pa{\\mathcal O}} \\widetilde{\\tau_t\\pa{ X_{I_0} }} - \\tau_t\\pa{ X }}_{I_0}}_1} \\le C \\|X\\|\\e^{- \\frac{1}{16} m\\ell}. \n \\eeq\n \n \nSince $P_+^{\\pa{\\mathcal O}} \\widetilde{\\tau_t\\pa{ X_{I_0} }}$ does not have support in $\\cS_\\ell$, \nwe now define\n\\beq\\label{eq:X_ell}\nX_\\ell(t)=P_+^{\\pa{\\mathcal T}} \\widetilde{\\tau_t\\pa{ X_{I_0} }} = \\widetilde{\\tau_t\\pa{ X_{I_0} }} P_+^{\\pa{\\mathcal T}} \\qtx{for} t\\in \\R,\n\\eeq\nan observable with support in ${\\cS}_{\\frac \\ell 2} \\cup \\mathcal T= {\\cS}_{\\ell } $,\nand claim that $X_\\ell(t)$ satisfies \\eqref{qlocI}.
\n\nTo show that \\eqref{qlocI} follows from \\eq{compproof2111}, we consider an observable $Y$ with $\\cS_Y=\\mathcal{O}^c= {\\cS}_{\\frac \\ell 2}$, and note that\n\\beq \\label{replaceOT1}\n\\pa{P_+^{\\pa{\\mathcal T}}-P_+^{\\pa{\\mathcal O}}} Y= P_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}}Y. \n\\eeq\nSince $P_0P_-^{{\\mathcal O}\\setminus\\mathcal T}=P_-^{{\\mathcal O}\\setminus\\mathcal T}P_0=0$, we have\n\\begin{align} \\label{replaceOT2}\n\\pa{ P_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}}Y}_{I_0}=\\pa{ P_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}}Y}_{I}. \n\\end{align}\n \nWe now apply \\eq{replaceOT1} and \\eq{replaceOT2} with $Y= \\widetilde{\\tau_t\\pa{ X_{I_0} }} $.\nWe have \n\\begin{align}\\notag\n& P_+^{\\mathcal{O}} \\pa{\\widetilde{\\tau_t\\pa{ X_{I_0} }}}^{+,+} P_+^{\\mathcal{O}} =P_+^{\\mathcal{O}}P_+^{\\mathcal{O}^c} \\widetilde{\\tau_t\\pa{ X_{I_0} }}P_+^{\\mathcal{O}^c}P_+^{\\mathcal{O}}\\\\ \\notag\n& = P_+^{\\mathcal{O}^c}P_+^{\\mathcal{O}} \\widetilde{\\tau_t\\pa{ X_{I_0} }}P_+^{\\mathcal{O}}P_+^{\\mathcal{O}^c} = P_+^{\\mathcal{O}^c}P_+^{\\mathcal{O}} {\\tau_t\\pa{ X_{I_0} }}P_+^{\\mathcal{O}}P_+^{\\mathcal{O}^c}= P_0 {\\tau_t\\pa{ X_{I_0} }} P_0 \\\\ \\\n& =P_0 X P_0 = P_0 X^{+,+} P_0=0,\n\\end{align}\nwhere we used \\eq{tildeZ}, $P_+^{\\mathcal{O}} P_+^{\\mathcal{O}^c}=P_0$ and $X^{+,+} =0$.\nSince $\\widetilde{\\tau_t\\pa{ X_{I_0} }}$ is supported on $\\mathcal{O}^c$, we conclude that\n$\\pa{\\widetilde{\\tau_t\\pa{ X_{I_0} }}}^{+,+}=0$. Thus we only need to estimate\n$\\pa{ P_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}}Y^{a,b}}_{I}$, where\n$Y= \\widetilde{\\tau_t\\pa{ X_{I_0} }} $ and $a,b=\\pm $, but either $a=-$ or $b=-$. \nIf $a=-$, we have\n \\begin{align}\nP_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}}Y^{-,b}= P_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}} P_-^{\\mathcal{O}^c} Y^{-,b}= P_-^{{\\mathcal O}\\setminus\\mathcal T} P_-^{\\mathcal{O}^c} P_+^{\\pa{\\mathcal T}} Y^{-,b},\n \\end{align}\nand hence\n\\begin{align}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{\\pa{P_-^{{\\mathcal O}\\setminus\\mathcal T}P_+^{\\pa{\\mathcal T}}Y^{-,b}}_I}_1} &\\le \\norm{Y} \\E \\pa{\\norm{P_I P_-^{{\\mathcal O}\\setminus\\mathcal T}P_-^{\\mathcal{O}^c}}_1} \\notag \\\\ & \\le C \\norm{X} \\e^{- {\\frac{1}{4}} m\\ell},\n\\end{align}\nusing \\eqref{P-gP-2}. Since the $b=-$ case is similar we conclude from \\eq{replaceOT1},\\eq{replaceOT2}, and \\eq{eq:X_ell} that\n\\beq\\label{compproof3333}\n\\E \\pa{\\sup_{t\\in \\R}\\norm{\\pa{P_+^{\\pa{\\mathcal O}} \\widetilde{\\tau_t\\pa{ X_{I_0} }} - X_\\ell(t)}_{I_0}}_1 }\\le C \\norm{X} \\e^{- {\\frac{1}{4}} m\\ell}\n\\eeq\n\nCombining \\eq{compproof2111} and \\eq{compproof3333} we get \\eq{qlocI}.\n\\end{proof}\n\n\n\\section{Zero-velocity Lieb-Robinson bounds}\\label{secLR}\n\nIn this section we prove Theorem~\\ref{thm:expclusteringgenJ}.\n\n \\begin{proof}[Proof of Theorem~\\ref{thm:expclusteringgenJ}]\nIn view of \\eq{X++0}, we can assume $X^{+,+}=Y^{+,+}=0$, and prove the theorem in this case. This is the only step where we use cancellations from the commutator. 
The estimate \\eq{eq:dynloc} then follows immediately from Lemma~\\ref{lemPXtgY}.\n\nTo prove \\eq{eq:LRquasimix}, recall $P_{I_0}= P_I + P_0$, and note that since $X^{+,+}=Y^{+,+}=0$ we have \n$P_0 X P_0= P_0 Y P_0=0$, so\n\\begin{align}\\notag\n&\\left[ \\tau_t\\pa{X_{I_0}},Y_{I_0}\\right]\n = \\left[ \\tau_t \\pa{X_I},Y_I\\right] + P_I\\pa{ \\tau_t\\pa{X}P_0 Y- Y P_0 \\tau_t\\pa{X} }P_I\n \\\\ \\notag & \\quad +P_0\\pa{ \\tau_t \\pa{X} P_I Y - Y P_I \\tau_t\\pa{X} }P_0\n+\\pa{\\tau_t\\pa{X_I} Y P_0- P_0Y \\tau_t\\pa{X_I} }\\\\ &\\quad \n +\\pa{P_0 X \\e^{-itH}Y_I - Y_I \\e^{itH} XP_0} .\\label{withcom}\n\\end{align}\nNote that $\\left[ \\tau_t\\pa{X_I},Y_I\\right] $ can be estimated by \\eq{eq:dynloc}. We have\n\\begin{align}\\notag\n\\norm{P_0 \\tau_t\\pa{X}P_I YP_0}_1 &= \\norm{P_0 \\tau_t\\pa{X^{+,-}}P_I Y^{-,+}P_0}_1\\\\\n& \\le \\norm{X}\\norm{Y}\\norm{ P_{-} ^{\\pa{X}} \\e^{-itH}P_I P_{-} ^{\\pa{Y}}}_1,\\label{XJYP145}\n\\end{align}\nso it can be estimated by \\eq{P-gP-2}, with a similar estimate for $\\norm{P_0 YP_I \\tau_t\\pa{X}P_0}_1$. Moreover,\n\\begin{align}\\label{XJYP1}\n&\\norm{\\tau_t\\pa{X_I} Y P_0}_1= \\norm{\\tau_t\\pa{X_I} Y^{-,+} P_0}_1\\\\ \\notag & \\quad \\le \n \\norm{X}\\norm{Y} \\norm{P_{-} ^{\\pa{X}} \\e^{-itH}P_IP_{-} ^{\\pa{Y}} }_1 + \\norm{Y}\\norm{P_I X^{-,+} \\e^{-itH}P_IP_{-} ^{\\pa{Y}} }_1.\n\\end{align}\nThe first term can be estimated by \\eq{P-gP-2}. To estimate the second term, let\n$\\ell= \\dist(X,Y)\\ge 1$. Then \n\\begin{align}\\notag\n&\\norm{P_I X^{-,+} \\e^{-itH}P_IP_{-} ^{\\pa{Y}} }_1\\\\ \\notag\n& \\quad \\le \\norm{P_I X^{-,+} P_{+} \\up{\\cS_{Y,\\frac \\ell 2}} \\e^{-itH}P_I P_{-} ^{\\pa{Y}} }_1+ \\norm{P_I X^{-,+} P_{-} \\up{\\cS_{Y,\\frac \\ell 2}}\\e^{-itH}P_I P_{-} ^{\\pa{Y}} }_1\n\\\\ & \\quad \\le \\norm{X}\\pa{\\norm{ P_{+} \\up{\\cS_{Y,\\frac \\ell 2}} \\e^{-itH}P_I P_{-} ^{\\pa{Y}} }_1+\n\\norm{P_I P_{-} \\up{\\cS_{Y,\\frac \\ell 2}} P_{-} ^{\\pa{X}} }_1},\n\\end{align}\nwhere we used $[X^{-,+},P_{-} \\up{\\cS_{Y,\\frac \\ell 2}} ]=0$. Thus the second term in the last line of \\eq{XJYP1} can be estimated by \\eq{P-gP-3} and \\eq{P-gP-2}.\n\n The remaining three terms in \\eq{withcom} can be similarly estimated.\n(Although \\eq{withcom} is stated for the commutator, it could have been stated separately for each term of the commutator. The above argument does not use cancellations from the commutator.) Combining all these estimates we get \\eq{eq:LRquasimix}. \n\nIt remains to prove \\eq{eq:LRquasimix2}. Let $X, Y$ and $Z$ be local observables. In view of\n \\eq{eq:LRquasimix}, we only need to estimate\n \\beq\n\\E\\pa{\\sup_{t,s \\in \\R} \\norm{ [P_I \\pa{\\tau_t\\pa{ X}P_0 \\tau_s(Y) - \\tau_s(Y)P_0 \\tau_t\\pa{ X }}P_I, Z_I ]}_1}.\n \\eeq\n If we expand the commutator, we get to estimate several terms, the first one being\n\\beq\n \\E\\pa{\\sup_{t,s \\in \\R} \\norm{ P_I \\tau_t\\pa{ X}P_0 \\tau_s(Y)P_I Z P_I }_1}\\le \\E\\pa{\\sup_{s \\in \\R} \\norm{P_0 \\tau_s(Y)P_IZ P_I }_1}.\n \\eeq\n This can be estimated as in \\eq{XJYP145} and \\eq{XJYP1}, and the other terms can be similarly estimated, yielding \n\\eq{eq:LRquasimix2}.\n\n\nWe will now show that for the random XXZ spin chain the estimate \\eq{eq:LRquasimix} is not true without the counterterms. In fact, a stronger statement holds.
Let now $H$ be a random XXZ\n spin chain, and assume that for all local observables $X$ and $Y$ we have \n \\begin{align}\n \\E \\pa{\\sup_{t\\in \\R}\\norm{\\left[ \\tau_t\\pa{X_{I_0}},Y_{I_0}\\right]}_1} \\le C \\|X\\| \\|Y\\| \\Ups\\pa{\\dist (X,Y)}, \n \\label{eq:LRquasimix44} \n \\end{align}\nuniformly in $L$, where the function $\\Ups:\\N \\to [0,\\infty)$ satisfies\n$\\lim_{r\\to \\infty} \\Ups\\pa{r}=0$. Assume \\eq{eq:LRquasimix} holds with the same right hand side as \\eq{eq:LRquasimix44}.\n\nIt follows from \\eq{eq:LRquasimix} and \\eq{eq:LRquasimix44}\nthat\n\\begin{align}\\notag \n&\\E \\pa{\\norm{ \\pa{{ X}P_0Y - YP_0 { X }}_I}_1}\\le\n\\E \\pa{\\sup_{t\\in \\R}\\norm{ \\pa{\\tau_t\\pa{ X}P_0Y - YP_0 \\tau_t\\pa{ X }}_I}_1}\\\\ &\\notag \\hskip20pt \\le \n \\E \\pa{\\sup_{t\\in \\R}\\norm{\\left[ \\tau_t\\pa{X_{I_0}},Y_{I_0}\\right]- \\pa{\\tau_t\\pa{ X}P_0Y - YP_0 \\tau_t\\pa{ X }}_I}_1} \\\\ \\notag & \\hskip110pt + \\E \\pa{\\sup_{t\\in \\R}\\norm{\\left[ \\tau_t\\pa{X_{I_0}},Y_{I_0}\\right]}_1}\\\\ &\\hskip20pt \\le \n 2C \\|X\\| \\|Y\\| \\Ups\\pa{\\dist (X,Y)}\\label{counter4}\n\\end{align}\nIn particular, taking $X=\\sigma_i^x$ and $Y=\\sigma_j^x$ we get (putting $L$ back in the notation)\n\\begin{align}\\label{counter44}\n\\E \\pa{\\norm{ \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x - \\sigma_j^x P_0\\up{L} \\sigma_i^x }_I}_1} \\le \n 2C\\, \\Ups\\pa{\\abs{i-j}}.\n\\end{align}\n\nThus, using $\\norm{A}_2^2\\le \\norm{A} \\norm{A}_1 $ and $\\norm{\\sigma_i^x P_0\\up{L} \\sigma_j^x - \\sigma_j^x P_0\\up{L} \\sigma_i^x }\\le 2$, we get\n\\begin{align}\\notag \n\\E \\pa{\\norm{ \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x - \\sigma_j^x P_0\\up{L} \\sigma_i^x }_I}^2_2} & \\le 2 \\E \\pa{\\norm{ \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x - \\sigma_j^x P_0\\up{L} \\sigma_i^x }_I}_1} \\\\ & \\le\n4 C\\, \\Ups\\pa{\\abs{i-j}}.\\label{cont13}\n\\end{align}\n\n\nSince \\eq{cont13} is not compatible with \\eq{eq:2terms0}, we have a contradiction, so \n\\eq{eq:LRquasimix44} cannot hold.\n \\end{proof}\n\n\\section{General dynamical clustering}\\label{secdyncl}\n\n\n\n\nWe now turn to the proof of Theorem~ \\ref{thmexpclust}. \nWe will use the following lemma. \n\n\\begin{lemma}\\label{lem:filter} Let $ \\Theta_2 < \\Theta_1 $.\n Given $\\alpha\\in(0,1)$, there exist constants $m_\\alpha>0$ and $C_\\alpha<\\infty$, such that, given $\\Theta_3 \\ge \\Theta_1$, there exists a function $f \\in C_c^\\infty(\\R)$, such that \n\\begin{enumerate}\n\\item $0\\le f\\le1$;\n\\item $\\supp f\\subset[\\Theta_2,\\Theta_3 + \\Theta_1 - \\Theta_2]$;\n\\item $f(x)=1$ for $x\\in[\\Theta_1,\\Theta_3 ]$;\n\\item $\\abs{\\hat f(t)}\\le C_\\alpha \\e^{-m_\\alpha \\abs{t}^\\alpha}$ for $\\abs{t}\\ge1$;\n\\item $\\norm{\\hat f}_1\\le C_\\alpha \\max\\set{1,\\ln \\pa{\\Theta_3-\\Theta_2} }$.\n\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}Let $\\theta= \\Theta_1 - \\Theta_2$. Pick a Gevrey class function $h\\ge 0$ such that \n \\[ \\supp h \\subset [0,\\theta];\\; \\int_\\R h(x)\\, \\d x=1; \\sqtx{and} \\abs{\\hat h(t)}\\le C_h \\e^{-m_h \\abs{t}^{\\alpha}}\\sqtx{for all} t\\in\\R,\\] \n where $C_h$ and $m_h >0$ are constants.\nLet\n \\[k(x)=\\int_{-\\infty}^x h\\pa{y}\\d y \\qtx{for} x\\in \\R,\\]\nthen $k\\in C^\\infty(\\R)$ is non-decreasing and satisfies\n\\[0\\le k \\le 1,\\quad \\supp k\\subset [0,\\infty),\\qtx{and} k(x)=1 \\mbox{ for } x\\ge \\theta.\\]\n\n\nGiven $\\Theta_3\\ge \\Theta_1$, we claim that the function \n\\beq\nf(x)=k(x-\\Theta_2)-k(x-\\Theta_3 )\n\\eeq\nhas all the required properties. 
Indeed, properties (i)--(iii) are obvious. To finish, we compute, using the convention \\eq{FTf},\n\\beq\n\\hat f(t)=\\tfrac 1 {2\\pi}\\int_\\R \\e^{itx} \\pa{\\int^{x-\\Theta_2}_{x-\\Theta_3 }h(y)\\,\\d y} \\d x.\n\\eeq\nIntegrating by parts and noticing that the boundary terms vanish, we get\n\\begin{align}\\notag\n\\hat f(t)&=\\tfrac{i}{t}\\cdot \\tfrac 1 {2\\pi}\\int_\\R \\e^{itx} \\pa{h\\pa{x-\\Theta_2}-h\\pa{x-\\Theta_3 }} \\d x = \\tfrac{i}{t}\\pa{\\e^{i\\Theta_2t}-\\e^{i\\Theta_3t}}\\hat h(t)\\\\ & \n= \\tfrac{i}{t} \\e^{i\\Theta_2t} \\pa{1-\\e^{i(\\Theta_3-\\Theta_2) t}}\\hat h(t).\n\\end{align}\nThus\n\\beq\n\\abs{\\hat f(t)}\\le 2 C_h\\abs{\\tfrac{\\sin\\pa{\\frac 12\\pa{ {\\Theta_3-\\Theta_2}}t}}{t}}\\e^{-m_h\\abs{t}^\\alpha}\\qtx{for all } t\\in \\R.\n\\eeq\nParts (iv) and (v) follow. \n\\end{proof}\n\n\n\nWe are ready to prove Theorem~\\ref{thmexpclust}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thmexpclust}] \nLet $H_\\omega$ be a disordered XXZ spin chain satisfying Property DL. Let\n$K= [\\Theta_0, \\Theta_2]$, where $ \\Theta_0 < \\Theta_2 <\\min\\set{2 \\Theta_0, \\Theta_1 }$.\nSince \\eq{eq:efcorDL} holds for the interval $[\\Theta_0,\\min\\set{2 \\Theta_0, \\Theta_1 } ]$, we assume $\\Theta_1\\le 2 \\Theta_0$ without loss of generality.\nWe set $K^\\prime= (\\Theta_2, \\infty)$. \n\n Let $X$ and $Y$ be local observables. In view of \\eq{X++0}, we can assume $X^{+,+}=Y^{+,+}=0$, and prove the theorem in this case. For a fixed $L$ (we omit $L$ from the notation), we have\n \\begin{align}\\label{RK1}\nR_{K} (\\tau^K_t\\pa{ X },Y)= \\pa{\\tau^K_t\\pa{ X }\\bar P_K Y}_K= \\pa{\\tau^K_t\\pa{ X }P_{K^\\prime} Y}_K+\\pa{\\tau^K_t\\pa{ X }P_0 Y}_K.\n \\end{align}\n\n\n Fix $\\alpha\\in(0,1)$, let $\\Theta_3 \\ge 2 \\Theta_2 $, to be chosen later, and let $f$ be the function given in Lemma~\\ref{lem:filter}. We have\n\\begin{align}\\label{tXK'Y}\n\\pa{\\tau^K_t\\pa{ X }P_{K^\\prime} Y}_K = \\pa{\\tau^K_t\\pa{ X }\\pa{P_{K^\\prime} -f(H)}Y}_K +\\pa{\\tau^K_t\\pa{ X }f(H)Y}_K.\n\\end{align}\n\nTo estimate the first term, note that $P_{K^\\prime }-f(H)=g(H)$, where $\\abs{g}\\le 1$ and $g(H)= g(H) P_I + g(H) \\bar P\\up{\\Theta_3} $, where $P\\up{\\Theta_3}=P_{(-\\infty, \\Theta_3]}$ and $\\bar P\\up{\\Theta_3} = 1- P\\up{\\Theta_3}$.
The term with $g(H) P_I $ can be estimated by Lemma~\\ref{lemPXtgY},\n\\begin{align}\\notag\n\\E\\pa{\\sup_{t\\in \\R}\\norm {\\pa{\\tau^K_t\\pa{ X }g(H)P_IY}_K}}&\\le \\E\\pa{\\sup_{t\\in \\R}\\norm {\\pa{\\tau_t\\pa{ X }g(H)P_IY}_I}} \\\\ & \\le C\\norm{X}\\norm{Y} \\e^{-\\frac 1 8 m\\dist (X,Y)} .\\label{RK2}\n\\end{align}\nThe contribution of $g(H) \\bar P\\up{\\Theta_3}$ is estimated by Lemma \\ref{lem:Kitaev},\n\\begin{align}\\notag\n&\\norm{\\pa{\\tau^K_t\\pa{ X }g(H) \\bar P\\up{\\Theta_3}Y}_K}\\le \\norm{Y}\\norm{P_K X g(H) \\bar P\\up{\\Theta_3}}\\\\ & \\notag \\qquad \\le \\norm{Y}\\norm{P\\up{\\Theta_2} X g(H) \\bar P\\up{\\Theta_3}}\\le \\norm{Y}\\norm{P\\up{\\Theta_2}X \\bar P\\up{\\Theta_3}}\\\\ & \\qquad \\le\n C_F \\norm{X} \\norm{Y} \\e^{-\\frac {m_F} {\\abs{\\cS_X}}\\pa{\\Theta_3-\\Theta_2}}.\\label{RK3}\n\\end{align} \n\n \n\n\nTo estimate the second term on the right hand side of \\eq{tXK'Y}, we note that $P_K f(H)=0$ (as $\\supp f \\subset [\\Theta_2,\\infty)$ and $f(\\Theta_2)=0$), so that $H_K$ vanishes on the range of $f(H)$ and hence\n\\begin{align}\n\\pa{\\tau^K_t\\pa{ X }f(H)Y}_K=\\e^{itH_K} \\pa{Xf(H)Y}_K.\n\\end{align}\nIt follows from Lemmas~\\ref{lemHast}, \\ref{leminsertKf} and \\ref{lem:filter} that \n \\begin{align}\\label{RK4}\n\\pa{\\tau^K_t\\pa{ X }f(H)Y}_K= A + T(K_f),\n \\end{align}\n where\n \\begin{align}\\label{RK5}\n \\norm{ A}\\le 2 C_1 C_\\alpha \\norm{X} \\norm{Y} \\max\\set{1,\\ln \\pa{\\Theta_3-\\Theta_2} } \\e^{- m_1\\pa{ \\dist (X,Y)}^\\alpha},\n \\end{align}\n \\begin{align}\n T(J) = \\e^{itH_K} \\pa{ \\int_\\R \\e^{-irH} YP_{J} \\tau_r\\pa{ X} \\hat f(r) \\, \\d r }_K \\qtx{for} J\\subset\\R,\n \\end{align}\nand \n \\beq\n[0,2\\Theta_2 - \\Theta_1] \\subset K_f \\subset [2\\Theta_0 -\\Theta_3-\\pa{\\Theta_1-\\Theta_2}, 2\\Theta_2-\\Theta_2 ]\\subset (-\\infty, \\Theta_2 ].\n \\eeq \nIn view of \\eq{gapcond}, $ P_{K_f}= P_{K_f^\\prime} + P_0$, where $K_f^\\prime=K_f \\cap K$,\n so $T(K_f)=T(K_f^\\prime) +T(\\set{0})$. We have\n\\begin{align}\\notag\n\\E\\pa{\\sup_{r\\in \\R}\\norm{T({K_f^\\prime})} }&\\le \\norm{\\hat f}_1 \\E\\pa{\\sup_{r\\in \\R} \\norm{\\pa{YP_{{K_f^\\prime}} \\tau_r\\pa{ X} }_K} } \\\\ & \n\\le C \\max\\set{1,\\ln \\pa{\\Theta_3-\\Theta_2} }\\norm{X}\\norm{Y} \\e^{-\\frac 1 8 m\\dist (X,Y)},\\label{RK6}\n\\end{align} \nwhere we used Lemmas~\\ref{lemPXtgY} and \\ref{lem:filter}. In addition,\n\\begin{align}\nT(\\set{0})= \\e^{itH_K}\\pa{YP_{0} X }_K = \\pa{\\tau_t^K\\pa{Y}P_{0} X }_K.\\label{RK7}\n\\end{align}\nTo see this, let $E,E^\\prime \\in K$.
Proceeding as in \\eq{P0mag},we have\n\\begin{align}\\notag\n& P_E \\pa{ \\int_\\R \\e^{-irH} YP_0 \\tau_r\\pa{ X} \\hat f(r) \\, \\d r } P_{E^\\prime}=\\int_\\R P_E \\e^{-irH} Y P_0 { X} \\e^{-irH}P_{E^\\prime}\\hat f(r) \\, \\d r\\\\ \\notag & \\quad = \\pa{ \\int_\\R \\e^{-ir(E +E^\\prime)}\\hat f(r) \\, \\d r} P_E YP_0 { X} P_{E^\\prime}= f(E+E^\\prime ) P_E Y P_0 { X} P_{E^\\prime} \\\\ & \\quad =\n P_E Y P_0 { X} P_{E^\\prime},\n\\label{P0mag3}\n\\end{align}\nsince $f(E+E^\\prime)=1$ as $E+E^\\prime \\in [2\\Theta_0,2\\Theta_2]\\subset [\\Theta_1,\\Theta_3]$.\n\n\nCombining \\eq{RK1}, \\eq{tXK'Y}, \\eq{RK2}, \\eq{RK3}, \\eq{RK4},\\eq{RK5},\\eq{RK6}, and \\eq{RK7}, we obtain\n\\begin{align}\n&\\norm{R_{K} (\\tau^K_t\\pa{ X },Y) - \\pa{\\tau^K_t\\pa{ X }P_0 Y}_K- \\pa{\\tau_t^K\\pa{Y}P_{0} X }_K}\\notag \\\\ & \\quad\n\\le C\\norm{X}\\norm{Y}\\pa{ \\max\\set{1,\\ln \\pa{\\Theta_3-\\Theta_2} } \\e^{- m_2\\pa{ \\dist (X,Y)}^\\alpha} +\\e^{-\\frac {m_F} {\\abs{\\cS_X}}\\pa{\\Theta_3-\\Theta_2}}},\n\\end{align}\nwhere $m_2= \\min\\set{m_1,\\frac 1 8 m}>0$.\n\nWe now choose $\\Theta_3= \\Theta_2 + {\\abs{\\cS_X}}\\pa{\\dist (X,Y)}^\\alpha$, note that $\\Theta_3 \\ge 2 \\Theta_2$ if $\\dist (X,Y)\\ge \\Theta_2^{\\frac 1\\alpha}$, obtaining\n\\begin{align}\n&\\norm{R_{K} (\\tau^K_t\\pa{ X },Y) - \\pa{\\tau^K_t\\pa{ X }P_0 Y}_K- \\pa{\\tau_t^K\\pa{Y}P_{0} X }_K}\\notag \\\\ & \\qquad\n\\qquad \\qquad\\le C\\norm{X}\\norm{Y}\\pa {1+ \\ln {\\abs{\\cS_X}} } \\e^{- m_3\\pa{ \\dist (X,Y)}^\\alpha},\n\\end{align}\nwith $m_3=\\frac 12 \\min\\set{m_2,m_F}>0$, for $\\dist (X,Y)$ sufficiently large. \nObserving that the argument can be done with $Y$ instead of $X$, we get \\eq{eq:expclusteringgen'}.\n\n\nSince\n\\begin{align}\n\\pa{[\\tau^K_t\\pa{ X },Y]}_K= R_K\\pa{ \\tau^K_t\\pa{ X },Y}- R_K\\pa{Y, \\tau^K_t\\pa{ X }}+ [\\tau_t\\pa{ X_K },Y_K],\n\\end{align}\n \\eq{eq:dynloc3333} follows immediately from \\eq{eq:expclusteringgen'} and \\eq{eq:dynloc}.\n\n\nTo conclude the proof, we need to show that for a random XXZ spin chain $H$ the estimates \\eq{eq:expclusteringgen'} and \\eq{eq:dynloc3333} are not true without the counterterms. \n\nSuppose \\eq{eq:expclusteringgen'} holds without counterterms, even in a weaker form: for all local observables $X$ and $Y$ we have\n\\begin{align}\\notag \n& \\E \\pa{\\sup_{t\\in \\R}\\norm{R_K\\pa{ \\tau^K_t\\pa{ X },Y} }}\\\\\n& \\qquad \\qquad \\le C\\pa{{\\min\\set{\\abs{\\cS_{X}},\\abs{\\cS_{Y}}}}} \\|X\\| \\|Y\\| \\Ups\\pa{\\dist (X,Y)},\n\\label{eq:expclusteringgen3}\n\\end{align}\nuniformly in $L$, where the function $\\Ups:\\N \\to [0,\\infty)$ satisfies\n$\\lim_{r\\to \\infty} \\Ups\\pa{r}=0$. Assume \\eq{eq:expclusteringgen'} holds with the same right hand side as \\eq{eq:expclusteringgen3}. 
Taking $X=\\sigma_i^x$ and $Y=\\sigma_j^x$, \nand proceeding as in \\eq{counter4}-\\eq{counter44}, we get (putting $L$ back in the notation)\n\\begin{align}\n&\\E \\pa{\\norm{ \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x +\\sigma_j^x P_0\\up{L} \\sigma_i^x }_K}} \\le\n4 C\\, \\Ups\\pa{\\abs{i-j}}.\\label{cont139}\n\\end{align}\n\nRecall that (in the notation of the proof of Lemma~\\ref{lem:spillterms}, as in \\eq{Tnotation}),\n\\begin{align}\n& Z: = \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x +\\sigma_j^x P_0\\up{L} \\sigma_i^x }_K = \nT\\pa{P_K\\up{L} \\delta_i, P_K\\up{L} \\delta_j} + T\\pa{ P_K\\up{L} \\delta_j,P_K\\up{L} \\delta_i}.\n\\end{align}\nLet $V$ be the subspace (of dimension at most two) spanned by the vectors $P_K\\up{L} \\delta_i$ and $P_K\\up{L} \\delta_j$, and let $Q_V$ be the orthogonal projection onto $V$. We clearly have\n$Z= Q_V Z Q_V$ and $\\norm{Z}\\le 2$, and hence \n\\begin{align}\n\\norm{Z}_2^2 \\le 2 \\norm{Z}^2 \\le 4 \\norm{Z} ,\n\\end{align}\nso it follows from \\eq{eq:2terms0} that there exist constants $\\gamma_K >0$ and $R_K$ such that\n\\begin{align}\\notag\n\\E \\pa{\\liminf_{L\\to \\infty}\\norm{ \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x +\\sigma_j^x P_0\\up{L} \\sigma_i^x }_K}} &\\ge \\tfrac 1 4\n\\E \\pa{\\liminf_{L\\to \\infty}\\norm{ \\pa{ \\sigma_i^x P_0\\up{L} \\sigma_j^x +\\sigma_j^x P_0\\up{L} \\sigma_i^x }_K}_2^2}\\\\ & \\ge \\tfrac 1 4 \\gamma_K, \\label{cont13977}\n\\end{align}\nfor all $i,j \\in \\Z$ with $\\abs{i-j}\\ge R_K$.\n\n Since, by Fatou's Lemma, \\eq{cont139} implies that the left hand side of \\eq{cont13977} is at most $4 C\\, \\Ups\\pa{\\abs{i-j}}$, the estimates \\eq{cont139} and \\eq{cont13977} establish a contradiction for $\\abs{i-j}$ sufficiently large, and we conclude that \\eq{eq:expclusteringgen3} cannot hold.\n\nWe show the necessity of the counterterms in \\eq{eq:dynloc3333} in a similar way. Note that the counterterm for $X=\\sigma_i^x$ and $Y=\\sigma_j^x$ is given by $ Z\\up{L}(t)$ as in \\eq{Zt}. If we assumed the validity of \\eq{eq:dynloc3333} without the counterterms, we would have\n\\beq\\label{cont13945}\n\\E \\pa{\\sup_{t\\in \\R} \\norm{ Z\\up{L}(t)}} \\le 4 C\\, \\Ups\\pa{\\abs{i-j}},\n\\eeq\nwhere the function $\\Ups$ is as in \\eq{cont139}. Since $Z\\up{L}(t)$ is an operator of rank at most $4$ with $\\norm{Z\\up{L}(t)}\\le 4$,\nwe have\n\\begin{align}\n\\norm{Z\\up{L}(t)}_2^2 \\le 4 \\norm{Z\\up{L}(t)}^2 \\le 16 \\norm{Z\\up{L}(t)} ,\n\\end{align}\nand hence\n\\begin{align}\n\\sup_{t\\in \\R} \\norm{ Z\\up{L}(t)} \\ge \\tfrac 1 {16}\\sup_{t\\in \\R} \\norm{ Z\\up{L}(t)}_2^2\\ge \\tfrac 1 {16} \\lim_{T\\to \\infty} \\tfrac 1 T \\int_0^T \\norm{Z\\up{L}(t)}_2^2\\, \\d t,\n\\end{align}\nso, arguing with Fatou's Lemma as before, \\eq{cont13945} and \\eq{eq:4terms1} give a contradiction, and hence \\eq{cont13945} cannot hold.\n\\end{proof}