diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlygr" "b/data_all_eng_slimpj/shuffled/split2/finalzzlygr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlygr" @@ -0,0 +1,5 @@ +{"text":"\n\\subsection{Related Works}\n\\label{sec:related}\n\\paragraph{Relation to Tensor Networks. }\nTensor networks are widely used in quantum physics~\\cite{orus2014practical}, numerical analysis~\\cite{grasedyck2013literature} and recently machine learning~\\cite{cichocki2016low, cichocki2017tensor}. Different from existing tensor networks, (1) our {TNN\\xspace}s have nonlinearity on the edges between the tensors; and (2) {TNN\\xspace}s are constructed as deep compositions of interleaving {\\em generalized tensor operations} and nonlinear transformations, similar to feedforward neural networks. Furthermore, while tensor networks are still multi-linear and thus can be learned by algorithms such as power iteration~\\cite{wang2017tensor} or alternative least squares~\\cite{comon2009tensor}, {TNN\\xspace}s require decomposition of nonlinear (no longer multi-linear) hierarchical tensor networks.\n\n\\paragraph{Compression of Neural Networks. }\nA recent survey~\\cite{cheng2017survey} reviews state-of-the-art techniques for compressing neural networks. These techniques can be grouped into two categories: (1) compressing an existing model and (2) novel compact designs. The first category includes \\textit{low-rank approximations}~\\cite{jaderberg2014speeding, denton2014exploiting, lebedev2014speeding, kim2015compression}, \\textit{knowledge distillation}~\\cite{romero2014fitnets, ba2014deep, hinton2015distilling, furlanello2018born} and \\textit{quantization}~\\cite{han2015deep, courbariaux2015binaryconnect, zhu2016trained, rastegari2016xnor, hubara2017quantized}; while the second category includes \\textit{compact designs of filters}~\\cite{cheng2015exploration, yang2015deep, sindhwani2015structured} and \\textit{compact designs of architectures}~\\cite{szegedy2015going, he2016deep, huang2017densely, chollet2016xception, szegedy2017inception}. When our {TNN\\xspace}s are used for compression, we project an existing neural network to the class of {TNN\\xspace}s with fewer parameters, and in Section~\\ref{sec:interpretation}, we demonstrate this projection naturally corresponds novel compact architecture. \n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nModern neural networks~\\cite{krizhevsky2012imagenet, simonyan2014very, szegedy2015going, he2016deep, szegedy2017inception, huang2017densely} achieve unprecedented accuracy over many difficult learning problems at the cost of deeper and wider architectures with overwhelming number of model parameters. 
\nThe large number of model parameters incurs a high cost at every prediction, which becomes a practical bottleneck when these neural networks are deployed on constrained devices, such as smartphones and Internet of Things (IoT) devices.\n\nOne fundamental problem in deep learning research is to design neural network models with compact architectures that still maintain expressive power comparable to large models.\nTo achieve this goal, two complementary approaches are adopted in the community: one approach is to compress well-trained neural networks while preserving their predictive performance as much as possible~\\cite{cheng2017survey}.\nAnother approach is to find better neural network architectures, such as grouping the filters into inception modules~\\cite{szegedy2015going, szegedy2017inception} or bottleneck layers~\\cite{lin2013network, he2016identity}. \n\nTo address the aforementioned fundamental problem, we propose \\textit{tensorial neural networks\\xspace} ({TNN\\xspace}s), which allow not only compression of well-trained networks, but also exploration of better network architecture designs.\nA TNN\\xspace is a generalization of existing neural networks\\xspace ({NN\\xspace}s) where matrix-vector multiplication (in fully connected and recurrent layers) and convolution (in convolutional layers) are extended to \\textit{generalized tensor operations}. To achieve this, we introduce new tensor algebra to extend existing operations with low-order operands to those with high-order operands (see Section~\\ref{sec:preliminary} and Appendix~\\ref{app:operations} for details).\n\n\\begin{figure}[!htbp]\n\t\\centering\n\t\\psfrag{c1}[][][0.8]{$\\bm{\\mathcal{G}}^{p}$}\n\t\\psfrag{c2}[][][0.8]{$\\bm{\\mathcal{H}}^{p}$}\n\t\\psfrag{c3}[][][0.8]{$\\bm{\\mathcal{G}}^{q}$}\n\t\\psfrag{c4}[][][0.8]{$\\bm{\\mathcal{H}}^{q}$}\n\t\\psfrag{f}[][][0.8]{$f$}\n\t\\psfrag{g1}[][][0.8]{$g^{q}$}\n\t\\psfrag{g2}[][][0.8]{$g^p$}\n\t\\psfrag{h1}[][][0.8]{$h^{q}$}\n\t\\psfrag{h2}[][][0.8]{$h^p$}\n\t\\psfrag{c1:compressedNN}[][][0.8]{$\\bm{\\mathcal{G}}^{p}$:compressed NN}\n\t\\psfrag{c2:compressedTNN}[][][0.8]{$\\bm{\\mathcal{H}}^{p}$:compressed TNN}\n\t\\psfrag{c3:NN}[][][0.8]{$\\bm{\\mathcal{G}}^{q}$: NN}\n\t\\psfrag{c4:TNN}[][][0.8]{$\\bm{\\mathcal{H}}^{q}$: TNN}\n\t\\includegraphics[width=0.4\\textwidth]{.\/\/circle.eps}\n\\caption{Relationship between {NN\\xspace}s and {TNN\\xspace}s. In the figure, $f$ is the target concept. (1) \\textbf{Learning} of an NN\\xspace with $q$ parameters results in $g^{q}$ that is closest to $f$ in $\\mathcal{G}^{q}$, while learning of a TNN\\xspace with $q$ parameters results in $h^{q}$ that is closest to $f$ in $\\mathcal{H}^{q}$. As illustrated, $h^{q}$ is closer to $f$ than $g^{q}$. (2) \\textbf{Compression} of a pre-trained {NN\\xspace} $g^{q}$ to {NN\\xspace}s with $p$ parameters ($p \\le q$) results in $g^{p}$ that is closest to $g^{q}$ in $\\mathcal{G}^p$, while compression of $g^{q}$ to {TNN\\xspace}s with $p$ parameters results in $h^p$ that is closest to $g^{q}$ in $\\mathcal{H}^p$. As illustrated, the compressed {TNN\\xspace} $h^{p}$ is closer to the pre-trained {NN\\xspace} $g^{q}$ than the compressed NN $g^{p}$. }\n\\label{fig:framework}\n\\end{figure}\n\nFigure~\\ref{fig:framework} illustrates the relationship between existing neural networks\\xspace and our proposed tensorial neural networks\\xspace. 
\nLet $\\mathcal{G}^{q}$ and $\\mathcal{H}^{q}$ denote the sets of functions that can be represented by existing {NN\\xspace}s and our {TNN\\xspace}s, both with at most $q$ parameters.\nSince existing {NN\\xspace}s are special cases of {TNN\\xspace}s, we have the following properties: \n(1) for any $q > 0$, $\\mathcal{G}^{q} \\subseteq \\mathcal{H}^{q}$ and \n(2) there exists $p \\leq q$ such that $ \\mathcal{H}^{p} \\subseteq \\mathcal{G}^{q}$. \nThe first property indicates that {TNN\\xspace}s are a generalization of {NN\\xspace}s, while the second property guarantees that {TNN\\xspace}s can be used for compression of {NN\\xspace}s.\n\nThe input to a {TNN\\xspace} is a tensor of any order $m$, and the TNN\\xspace reduces to a normal {NN\\xspace} if the input is a vector ($m = 1$) or a matrix ($m = 2$).\n\n\\textbf{Prediction} in {TNN\\xspace}s is similar to traditional {NN\\xspace}s: the input is passed through the layers of a TNN\\xspace in a feedforward manner, where each layer is a generalized tensor (multilinear) operation between the {\\em high-order} input and the {\\em high-order} weight kernels followed by a nonlinear activation function such as $\\mathsf{ReLU}$.\n\\textbf{Learning} parameters in a {TNN\\xspace} is equivalent to hierarchical nonlinear tensor decomposition, which is hard for arbitrary TNN\\xspace architectures. We introduce a suite of generalized tensor algebra, which allows easy derivation of backpropagation rules and a class of TNN architectures (detailed in Appendices~\\ref{app:derivatives}, \\ref{app:dense-tensorized} and \\ref{app:convolutional-tensorized}). With these backpropagation rules, {TNN\\xspace}s can be efficiently learned by standard {\\em stochastic gradient descent}. \n\n{TNN\\xspace}s can be used for compression of traditional neural networks\\xspace, since our proposed {TNN\\xspace}s naturally identify \\emph{invariant structures} in neural networks\\xspace (justified in Section~\\ref{sec:invariant}).\nGiven a pre-trained {NN\\xspace} $g^{q} \\in \\mathcal{G}^{q}$, compressing it to a {TNN\\xspace} with $p$ parameters results in $h^{p}$ that is closest to $g^{q}$ in $\\mathcal{H}^p$ as depicted in Figure~\\ref{fig:framework}. It proceeds in two steps: (1) \\textbf{data tensorization}: the input is reshaped into an $m$-order tensor; and (2) \\textbf{knowledge distillation}: mapping from an NN\\xspace to a TNN\\xspace using layer-wise data reconstruction. \n\nWe demonstrate the effectiveness of compression using tensorial neural networks\\xspace by conducting a set of experiments on several benchmark computer vision datasets.\n We compress ResNet\\xspace-32 on the CIFAR10 dataset by 10x with a degradation of only 1.92\\% (achieving an accuracy of 91.28\\%). \nExperiments on LeNet-5 (MNIST), ResNet-32 (CIFAR10) and ResNet-50 (ImageNet) demonstrate that our TNN\\xspace compression outperforms the state-of-the-art low-rank approximation techniques under the same compression rate (a 5\\% test accuracy improvement universally on CIFAR10), besides achieving orders of magnitude faster convergence rates due to the efficiency of {TNN\\xspace}s. \n\n\\paragraph{Contributions of this paper}\n\\begin{compactenum}\n\\item We propose a new framework of {tensorial neural networks\\xspace} that extends traditional {neural networks\\xspace} and naturally preserves multi-dimensional structures of the input data (such as videos).\n\\item We introduce a system of {\\em generalized tensor algebra} for efficient learning and prediction in {TNN\\xspace}s. 
In particular, we are the first to derive and analyze backpropagation rules for generalized tensor operations. \n\\item We apply {tensorial neural networks\\xspace} to effectively compress existing neural networks by exploiting additional invariant structures in both data and parameter spaces, therefore reduce the complexity of the model (the number of parameters).\n\\item We provide interpretations of famous neural network architectures in computer vision using our proposed {TNN\\xspace}s. Understanding of why some existing neural network architectures are successful in practice could lead to more insightful neural network designs.\n\\end{compactenum}\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Generalized Tensor Algebra}\n\\label{sec:preliminary}\n\n\\begin{figure}[!htbp]\n\\begin{minipage}{0.45\\textwidth}\n\n\t\\begin{subfigure}[b]{0.2\\textwidth}\n\t\\centering\n\t\\psfrag{n1}{$a$}\n\t\\includegraphics[width=0.4\\textwidth]{.\/\/diagram-1-scalar.eps}\n\t\\caption{Scalar}\n\t\\label{fig:diagram-scalar}\n\t\\end{subfigure}\n\t\\hfill\n\n\t\\begin{subfigure}[b]{0.24\\textwidth}\n\t\\centering\n\t\\psfrag{n1}{$\\myvector{v}$}\n\t\\psfrag{S1}{$I$}\n\t\\includegraphics[width=0.66\\textwidth]{.\/\/diagram-2-vector.eps}\n\t\\caption{Vector}\n\t\\label{fig:diagram-vector}\n\t\\end{subfigure}\n\t\\hfill\n\n\n\t\\begin{subfigure}[b]{0.24\\textwidth}\n\t\\centering\n\t\\psfrag{n1}{$\\mymatrix{M}$}\n\t\\psfrag{S1}{$I$}\n\t\\psfrag{S2}{$J$}\n\t\\includegraphics[width=\\textwidth]{.\/\/diagram-3-matrix.eps}\n\t\\caption{Matrix}\n\t\\label{fig:diagram-matrix}\n\t\\end{subfigure}\n\t\\hfill\n\n\t\\begin{subfigure}[b]{0.24\\textwidth}\n\t\\centering\n\t\\psfrag{n1}{$\\tensor{T}$}\n\t\\psfrag{S1}{$I$}\n\t\\psfrag{S2}{$J$}\n\t\\psfrag{S3}{$K$}\n\t\\includegraphics[width=\\textwidth]{.\/\/diagram-4-tensor.eps}\n\t\\caption{Tensor}\n\t\\label{fig:diagram-tensor}\n\t\\end{subfigure}\n\\caption{\\small{\\textbf{Tensor Diagrams} of \na scalar $a\\! \\in\\! \\R$,\na vector $\\myvector{v}\\! \\in\\! \\R^{I}$,\na matrix $\\mymatrix{M} \\in \\R^{I \\times J}$, \nand a $3$-order tensor $\\tensor{T} \\in \\R^{I \\times J \\times K}$.}}\n\\label{fig:diagrams}\n\\end{minipage}\n\\end{figure}\n\n\\paragraph{Notations.} \nAn $m$-dimensional array $\\tensor{T}$ is defined as an $m$-{\\em order} tensor\n$\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1}}$.\nIts $(i_0, \\cdots, i_{n-1}, i_{n+1}, \\cdots, i_{m-1})^{{\\mbox{\\tiny th}}}$ {\\em mode}-$n$ {\\em fiber}, a vector along the $n^{{\\mbox{\\tiny th}}}$ axis, is denoted as $\\tensorSub{T}{i_0,\\cdots, i_{n-1}, \\mathbf{:}, i_{n+1},\\cdots, i_{m-1}}$.\n\n\\paragraph{Tensor Diagrams.}\nFollowing the convention in quantum physics~\\cite{orus2014practical, grasedyck2013literature}, Figure~\\ref{fig:diagrams} introduces {\\em tensor diagrams}, graphical representations for multi-dimensional objects. In {tensor diagrams}, an array (scalar\/vector\/matrix\/tensor) is represented as a {\\em node} in the graph, and its {\\em order} is denoted by the number of {\\em edges} extending from the node, where each edge corresponds to one {\\em mode} (whose {\\em dimension} is denoted by the number associated to the edge). 
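To make the diagrammatic notation concrete, the following minimal sketch (ours, using \\texttt{numpy}; the shapes are illustrative and not tied to any experiment in this paper) builds the four objects of Figure~\\ref{fig:diagrams} and checks that the {\\em order} of each object, i.e., the number of edges of its node, is simply the number of array axes, with one {\\em dimension} per mode.
\\begin{verbatim}
import numpy as np

a = np.float64(3.0)            # scalar: a node with no edges (order 0)
v = np.random.randn(4)         # vector in R^I, I = 4 (order 1)
M = np.random.randn(4, 5)      # matrix in R^{I x J} (order 2)
T = np.random.randn(4, 5, 6)   # 3-order tensor in R^{I x J x K}

# order = number of edges = number of axes; dimensions = edge labels
for name, x in [("a", a), ("v", v), ("M", M), ("T", T)]:
    print(name, "order:", np.ndim(x), "dimensions:", np.shape(x))

# a mode-1 fiber of T, written T_{i, :, k} in the notation above
fiber = T[0, :, 2]
assert fiber.shape == (5,)
\\end{verbatim}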
\n\n\\input{.\/\/figure_diagrams_operations}\n\n\\paragraph{Generalized Tensor Operations.}\nWe introduce {\\em generalized tensor operations} on {\\em high-order tensor operands} that consist of the four primitive operations in Figure~\\ref{fig:operations}, extending the usual operations with vector or matrix operands.\nIn tensor diagrams, an operation is represented by linking edges from the input tensors, where the type of operation is denoted by the shape of the line that connects the nodes: a solid line stands for {\\em tensor contraction} or {\\em tensor multiplication}, a dashed line represents {\\em tensor convolution}, and a curved line is for {\\em tensor partial outer product}. \nWe illustrate our generalized tensor operations in Figure~\\ref{fig:operations} via simple examples with third-order operands $\\tensor{X}$ and $\\tensor{Y}$. However, our generalized tensor operations extend to higher-order tensors, as rigorously defined in Appendix~\\ref{app:operations}.\n\\section{Tensorial Neural Networks} \n\\label{sec:tnn}\n\nA traditional convolutional neural network, represented using a tensor diagram, is depicted in Figure~\\ref{fig:nn}. We propose a richer class of functions, tensorial neural networks\\xspace ({TNN\\xspace}s), whose inputs are tensors of any order (denoted $m$) and whose operations are generalized tensor operations.\n{TNN\\xspace}s are generalizations of traditional {neural networks\\xspace}: if the input is a vector and the operation is matrix-vector multiplication, a TNN\\xspace reduces to a {\\em multi-layer perceptron (MLP)}; and if the input is a feature map and the operation is convolution, a TNN\\xspace reduces to a {\\em convolutional neural network\\xspace (CNN\\xspace)}. \nThe order of the input tensor and the type of the generalized tensor operation are hyperparameters that define the {\\em architecture} of a TNN\\xspace. \nIn this paper, we design a number of successful architectures of {tensorial neural networks\\xspace} (see Appendices~\\ref{app:dense-tensorized} and \\ref{app:convolutional-tensorized}),\nand an example of \\textit{Tensor-train\\xspace TNN\\xspace} is illustrated in Figure~\\ref{fig:tnn}.\n\n\\input{.\/\/figure_tnn}\n\n\\subsection{One-layer of TNN\\xspace vs One-layer of CNN\\xspace}\nIn a CNN\\xspace, each linear layer is a special convolution operation between a {\\em low-order} input and a {\\em low-order} weight kernel.\nIn contrast, each layer in a TNN\\xspace is characterized by a generalized tensor operation between a {\\em high-order} input and a {\\em high-order} weight kernel, allowing for higher expressive power. \n\n\\paragraph{One-layer of CNN}\nA convolutional layer in traditional neural networks\\xspace is parameterized by a $4$-order kernel $\\tensor{K} \\in \\R^{H \\times W \\times S \\times T}$, \nwhere $H, W$ are the height\/width of the filters, and $S, T$ are the numbers of input\/output channels. \nThe layer maps a $3$-order input tensor $\\tensor{U} \\in \\R^{X \\times Y \\times S}$ to another $3$-order output tensor $\\tensor{V} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times T}$, where $X, Y$ and $X^{\\prime}, Y^{\\prime}$ are the heights\/widths of the input and output feature maps. 
\n\\begin{equation}\n\\tensor{V} = \\tensor{U} ~ (\\ast^{0}_{0} \\circ \\ast^{1}_{1} \\circ \\times^{2}_{2}) ~ \\tensor{K}\n\\label{def:convolutional-main}\n\\end{equation}\\noindent\nWe define such an operation as a \\textit{compound operation}, where multiple edges are linked simultaneously (see Figure~\\ref{fig:process-original} for the tensor diagram of a convolutional layer).\nGeneral compound operations are discussed systematically in Appendices~\\ref{app:operations} and~\\ref{app:derivatives}, and in general {TNN\\xspace}s allow an arbitrary compound operation to be used at each linear layer. \n\n\\paragraph{One-layer of TNN\\xspace} \nWe illustrate one layer of the {Tensor-train\\xspace TNN\\xspace} in Figure~\\ref{fig:process-rtt} (with input tensor of order $m = 5$), where the parameters are characterized by $(m-1)$ kernels $\\{ \\tensorSub{K}{i} \\}_{i = 0}^{m - 2}$.\nThe multi-dimensional structure lying in the higher-order input tensor $\\tensor{U}^{\\prime}$ is effectively preserved:\neach mode $i$ of the input tensor $\\tensor{U}^{\\prime}$ contracts with the corresponding kernel $\\tensorSub{K}{i}$. The {Tensor-train\\xspace TNN\\xspace} allows interactions between the modes through the contraction edges between adjacent kernels $\\tensorSub{K}{i}$ and $\\tensorSub{K}{i+1}$. These edges are crucial for modeling general multi-dimensional transformations while preserving the structures of the input data. \nEffectively, each layer of the {Tensor-train\\xspace TNN\\xspace} implements a multi-dimensional propagation of the high-order input.\n\n\\paragraph{Relationship between {TNN\\xspace}s and {CNN\\xspace}s}\n(1) \\textit{{TNN\\xspace} generalizes {NN\\xspace}}. Formally, for any $q > 0$, $\\mathcal{G}^{q} \\subseteq \\mathcal{H}^{q}$ holds. \nConsider the special case of Figure~\\ref{fig:process-rtt} where $\\tensorSub{K}{0}$ and $\\tensorSub{K}{1}$ are identity mappings and the contraction of $\\tensorSub{K}{2}$ and $\\tensorSub{K}{m-2}$ equals a kernel $\\tensor{K}$ as in Figure~\\ref{fig:process-original}: the TNN\\xspace then reduces to a CNN\\xspace. \n(2) \\textit{NN\\xspace can be mapped to TNN\\xspace with fewer parameters}. Formally, there exists $p \\leq q$ such that $\\mathcal{H}^{p} \\subseteq \\mathcal{G}^{q}$.\nGiven a $g^{q} \\in \\mathcal{G}^{q}$ with kernel $\\tensor{K}$, we could factorize $\\tensor{K}$ into $\\{\\tensorSub{K}{i} \\}_{i = 0}^{m - 2}$ using generalized tensor decomposition (as detailed in Section~\\ref{sec:invariant} and Appendix~\\ref{app:decompositions}), and therefore find $p \\leq q$ and $h^{p} \\in \\mathcal{H}^p$ ($\\{\\tensorSub{K}{i} \\}_{i = 0}^{m - 2}$ altogether have fewer parameters than $\\tensor{K}$).\n\n\\paragraph{Why are {TNN\\xspace}s more flexible than {NN\\xspace}s?}\n(1) {TNN\\xspace}s are more flexible than {NN\\xspace}s as the inputs can be tensors of arbitrary order, which naturally handles data represented by multi-dimensional arrays without breaking their internal structure. 
\n(2) Even in the scenario that the input data is a vector or a matrix, one can reshape the input into a higher-order tensor in order to exploit the invariant structures within the data, as discussed in Section~\\ref{sec:invariant}.\n\n\\subsection{Prediction in TNN\\xspace}\nPrediction with a TNN\\xspace proceeds similarly to a normal neural network: the input is passed through the layers of the TNN\\xspace in a feedforward manner, where each layer is a generalized tensor operation (multilinear operation) between the input and the model parameters followed by a nonlinear activation function such as $\\mathsf{ReLU}$. \nWhen the number of operands in a generalized tensor operation is greater than two (i.e. the layer is characterized by more than one kernel), it is generally NP-hard to find the best order to evaluate the operation. We identify efficient strategies for all TNN\\xspace architectures in this paper. For example, each layer in the {Tensor-train\\xspace TNN\\xspace} can be efficiently evaluated as:\n\\begin{subequations}\n\\begin{gather}\n\\tensorSub{U}{i+1} = \\tensorSub{U}{i} ~ \\left( \\times^{2}_{1} \\circ \\times^{-1}_{0} \\right) ~ \\tensorSub{K}{i} \\\\\n\\tensor{V}^{\\prime} = \\tensorSub{U}{m-2} ~ \\left( \\ast^{0}_{1} \\circ \\ast^{1}_{2} \\circ \\times^{-1}_{0} \\right) ~ \\tensorSub{K}{m-2}\n\\end{gather}\n\\end{subequations}\nwhere $\\tensorSub{U}{0} = \\tensor{U}^{\\prime}$ and $\\tensor{V}^{\\prime}$ are the input\/output of the layer, and $\\tensorSub{U}{i+1}$ is the intermediate result after interacting with $\\tensorSub{K}{i}$. The forward passes for other architectures are derived in Appendices~\\ref{app:dense-tensorized} and~\\ref{app:convolutional-tensorized}, with their complexities summarized in Table~\\ref{table:convolutional-main}. \n\n\\subsection{Learning in TNN\\xspace}\nLearning in a TNN\\xspace corresponds to {\\em hierarchical nonlinear general tensor decomposition}, which recovers all tensors in the TNN\\xspace. \nSolving the decomposition problem in closed form is difficult; therefore, we derive the backpropagation rules for general tensor operations (in Appendix~\\ref{app:derivatives}) such that the tensors can be recovered approximately by {\\em stochastic gradient descent}.\nFor each layer in a TNN\\xspace, backpropagation requires computing the derivatives of some loss function $\\mathcal{L}$ w.r.t. the input ${\\partial \\mathcal{L}}\/{\\partial \\tensor{U}^{\\prime}}$ and the kernel factors $\\{ {\\partial \\mathcal{L}}\/{\\partial \\tensor{K}_{i}}\\}_{i = 0}^{m-2}$ given ${\\partial \\mathcal{L}}\/ \\partial \\tensor{V}^{\\prime}$.\nAs in the forward pass, identifying the best strategy to compute these backpropagation equations simultaneously is NP-hard, and we need to develop an efficient algorithm for each architecture. 
For the case of {Tensor-train\\xspace TNN\\xspace}, the algorithm proceeds as follows:\n{\\small\n\\begin{subequations}\n\\begin{gather}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{U}{m-2}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}^{\\prime}} \\left( (\\ast^{0}_{1})^{\\top} \\circ (\\ast^{2}_{1})^{\\top} \\right) \\tensorSub{K}{m-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{K}{m-2}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}^\\prime} \\left( (\\ast^{0}_{0})^{\\top} \\circ (\\ast^{1}_{1})^{\\top} \\circ \\times^{2}_{2} \\cdots \\times^{m-1}_{m-1} \\right) \\tensorSub{U}{m-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{U}{i}} = \\mathsf{swapaxes} \\big( \\tensorSub{K}{i} \\left( \\times^{2}_{-2} \\circ \\times^{3}_{-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{U}{i+1}} \\big) \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{K}{i}} = \\mathsf{swapaxes} \\big( \\tensorSub{U}{i} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\circ \\times^{3}_{2} \\cdots \\times^{m-1}_{m-2} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{U}{i+1}} \\big) \n\\end{gather}\n\\end{subequations}\n}\nwhere $\\mathsf{swapaxes}(\\cdot)$ permutes the order as needed (explained in Appendix~\\ref{app:notations}). The algorithms for other architectures are derived in appendices~\\ref{app:dense-tensorized} and~\\ref{app:convolutional-tensorized}, whose time complexities are summarized in Table~\\ref{table:convolutional-main}. \n\n\\input{.\/\/table_convolutional_main}\n \n\n\n\\section{Compression of Neural Networks via TNN\\xspace}\n\\label{sec:invariant}\nSuppose we are given a pre-trained neural network $g^{q} \\in \\mathcal{G}^{q}$ with $q$ parameters (as described in Figure~\\ref{fig:framework}) and we want to compress it to $p$ parameters such that $p \\ll q$. \nIf we are looking for a compressed model in the class of traditional neural networks\\xspace $\\mathcal{G}^{p}$, any compression algorithm can at best find $g^p$, while if we consider a boarder class of compressed models in our proposed {TNN\\xspace}s $\\mathcal{H}^{p}$, a good enough compression algorithm finds $h^p \\in \\mathcal{H}^{p} \\setminus \\mathcal{G}^{p}$.\nAs is demonstrated in Figure~\\ref{fig:framework}, $h^p$ is closer to the uncompressed NN\\xspace $g^{q}$ than $g^{p}$, therefore outperforms $g^{p}$ in its predictive accuracy.\n\nWe introduce a compression algorithm that projects a pre-trained NN\\xspace $g \\in \\mathcal{G}^{q}$ to a TNN\\xspace $h^{\\star} \\in \\mathcal{H}^{p}$. Superscripts on $g$ and $h$ are omitted for notational simplicity. Suppose the input to $g$ is $\\tensor{U}$, our goal is to find a $h^{\\star}$ such that\n\\begin{equation}\nh^{\\star} = \\arg \\min_{h \\in \\mathcal{H}^p} \\mathsf{dist}(h(\\tensor{U}^{\\prime}), g(\\tensor{U}))\n\\label{opt:}\n\\end{equation}\\noindent\nwhere $m$-order $\\tensor{U}^\\prime$ is the reshaped version of $\\tensor{U}$ (to fit the input requirement of $h$), and $ \\mathsf{dist}(\\cdot, \\cdot)$ denotes any distance, such as $\\ell_2$ distance, between the outputs of $h$ and $g$.\n\n\\subsection{Tensorization: Exploiting Invariant Structures}\nWe first reshape the input data $\\tensor{U}$ to $m$-order tensor $\\tensor{U}^{\\prime}$. \nConsider this toy example of a vector with periodic structure $[1,2,3,1,2,3,1,2,3]$ or modulated structure $[1,1,1,2,2,2,3,3,3]$ in Figure~\\ref{fig:demo-invariant}. The number of parameters needed to represent this vector, naively, is 9. 
However if we map or \\emph{reshape} the vector into a higher order object, for instance, a matrix ($m = 2$)$[1,1,1;2,2,2;3,3,3]$ where the columns of the matrix are repeated, then apparently this reshaped matrix can be decomposed into rank one without losing information. Therefore only 6 parameters are needed to represent the original length-9 vector.\n\\begin{figure}[!htbp]\n\\centering\n\t\\includegraphics[width=0.46\\textwidth]{.\/\/demo-invariant.eps}\n\t\\caption{{A toy example of invariant structures. The periodic and modulated structures are picked out by exploiting the low rank structure in the reshaped matrix.}}\n\\label{fig:demo-invariant}\n\\end{figure}\n\n\\noindent Now we are ready to learn the mapping from NN\\xspace to TNN\\xspace.\n\n\\subsection{Mapping NN to TNN}\n\n\\paragraph{End-to-End (E2E\\xspace) Learning}\nLet the $l$-th layer of TNN\\xspace $h$ be parameterized by $(m-1)$ kernels $\\{\\tensorInd{K}{i}{l}\\}_{i=0}^{m-2}$. if $\\ell_2$ distance is used, the compression reduces to an optimization problem : \n\\begin{equation}\n\\{ {\\tensorInd{K}{i}{l}}^{\\star} \\}_{i, l} = \\arg \\min_{\\{ \\tensorInd{K}{i}{l} \\}_{i, l}} \\lVert h(\\tensor{U}^\\prime) - g(\\tensor{U})\\rVert_2^2\n\\label{eq:end-to-end}\n\\end{equation}\\noindent\nSolving the problem by backpropagation is very expensive as it requires end-to-end gradients flow in TNN\\xspace, therefore we relax it into a sequence of problems as follows.\n\n \\vspace{-0.5em}\n \n\\paragraph{Sequential\\xspace (Seq\\xspace) Learning} Let $\\tensorSup{V}{l}$ and $\\tensorSup{V^\\prime}{l}$ be the outputs of the $l$-th layers of $g$ and $h$. We propose to project each layer of $g$ to a layer of $h$ bottom-up sequentially, i.e. for $l = 0, \\cdots, L$:\n\\begin{equation}\n\\{ {\\tensorInd{K}{i}{l}}^{\\star} \\}_{i} =\\arg \\min_{\\{\\tensorInd{K}{i}{l}\\}_{i}} \\lVert \\tensorSup{V}{l} - \\tensorSup{V^\\prime}{l}\\rVert_2^2\n\\label{eq:sequential}\n\\end{equation}\\noindent\nwhere the input to these layers are fixed to $\\tensorSup{V}{l-1}$ and $\\tensorSup{V^{\\prime}}{l-1}$ respectively. \n \n \\vspace{-0.5em}\n \n\\paragraph{Generalized Tensor Decomposition}\nSince each layer of TNN\\xspace should approximate the pre-trained NN\\xspace, our goal is to find ${\\tensorInd{K}{i}{l}}^{\\star}$ such that their composition is closed to the uncompressed kernel $\\tensorSup{K}{l}$, and in {Tensor-train\\xspace TNN\\xspace}:\n\\begin{equation}\n\\tensorSup{K}{l} \\approx {\\tensorInd{K}{0}{l}}^{\\star} \\times^{-1}_{0} {\\tensorInd{K}{1}{l}}^{\\star} \\times^{-1}_{0} \\cdots \\times^{-1}_{0} {\\tensorInd{K}{m-2}{l}}^{\\star}\n\\label{eq:convolutional-rtt-main}\n\\end{equation}\\noindent\nThe equation above is known as {\\em generalized tensor decomposition}, because it reverses the mapping of the general tensor operations proposed earlier: given a set of operations and a tensor, generalized tensor decomposition aims to recover the factors\/components such that the operations on these factors result in a tensor approximately equal to the original one. (See Appendix~\\ref{general_decomposition} for details). \n\n\\section{Interpretation of Neural Networks Designs}\n\\label{sec:interpretation}\nRecent advances in novel architecture design of neural networks include {\\em Inception}~\\cite{szegedy2017inception}, {\\em Xception}~\\cite{chollet2016xception} and {\\em Bottleneck structures}~\\cite{lin2013network, he2016identity}. 
We will show, in this section, that all aforementioned architectures can be naturally derived as special cases of our TNN\\xspace framework (with small modifications). \n\n\\paragraph{Inception Network.} \nInception network~\\cite{szegedy2017inception} is a special case of {Tensor-train\\xspace TNN\\xspace} as shown in Figure~\\ref{fig:inception_all}. For instance, an Inception network in Figure~\\ref{fig:inception} can be represented using tensor diagram as in Figure~\\ref{fig:inception-tensordiagram}, a simplified {Tensor-train\\xspace TNN\\xspace} with input tensor of size $3 \\times 3$ and the dimension of the connecting edge is $1$. \n\n\\input{.\/\/figure_inception}\n\n\\paragraph{Xception Network.} Xception~\\cite{chollet2016xception} is an architecture that further simplifies Inception, where local connection at the \\textit{depthwise layers} are further limited to one neuron (or one feature map).\nSimilar to Inception, the Xception network in Figure~\\ref{fig:xception} can be represented using tensor diagram as in Figure~\\ref{fig:xception-tensordiagram}, a simple Tensor-train\\xspace TNN\\xspace with input tensor of size $2 \\times 2$ and the dimension of the connecting edge is $2$. \n\n\\input{.\/\/figure_xception}\n\n\\paragraph{Bottleneck Network.} Finally, bottleneck architecture in~\\cite{lin2013network, he2016identity} can be interpreted as decomposition of a wider architecture into a tensor network with $3$ kernels.\n\n\\input{.\/\/figure_bottleneck}\n\n\\paragraph{Insights for NN designs.} Our TNN\\xspace provides some insights on how to design a compact NN architecture in practice: \n(1) First, we can start our design with a traditional neural network (with wide architecture); (2) Second, we factorize the kernel into a tensor network by general tensor decomposition, which converts the original neural network into a TNN\\xspace that naturally bear nice compact structures as we see in Figures~\\ref{fig:inception} and~\\ref{fig:xception}. (3) Since each tensor network corresponds to a compact design, therefore exploration of different designs can be done by designing different tensor networks. \n\\section{Experiments}\n\\label{sec:experiments} \n\n\nIn this section, we evaluate the effectiveness of our proposed compression algorithm (TNN based compression\\xspace) proposed in section~\\ref{sec:invariant} on several benchmark deep neural networks and datasets. We evaluate fully connected layer compression on MNIST (discussed in Appendix~\\ref{app:dense-tensorized}); we evaluate convolutional layer compression on ResNet\\xspace-32~\\cite{he2016identity} for CIFAR-10; and we evaluate the scalability of our compression algorithm on ResNet\\xspace-50~\\cite{he2016identity} for ImageNet (2012) dataset. \n\n\nWe conduct experiments on various compression rates and compare the performance of our TNN based compression\\xspace (TNN-C\\xspace) methods with that of the state-of-the-art low-rank approximation based compression\\xspace methods (NN-C\\xspace~\\cite{jaderberg2014speeding, lebedev2014speeding, kim2015compression}) under the same compression rate. More details of the state-of-the-art low-rank approximation based compression\\xspace are provided in appendix~\\ref{app:convolutional}. For all experiments, we use the Adam optimizer~\\cite{kingma2014adam} with 1e-3 learning rate and decay the learning rate by 10x every 50 epochs. Overall, we show that our TNN based compression\\xspace maintain high accuracy even when the networks are highly compressed. 
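For concreteness, the optimization schedule stated above (Adam with a $10^{-3}$ learning rate, decayed by 10x every 50 epochs) corresponds to the sketch below; this is our own illustrative PyTorch snippet with a placeholder model and dummy data, not the authors' training script.
\\begin{verbatim}
import torch

net = torch.nn.Linear(32, 10)          # placeholder for the compressed (T)NN
loss_fn = torch.nn.CrossEntropyLoss()

# Adam with 1e-3 learning rate, decayed by 10x every 50 epochs
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(150):
    x = torch.randn(8, 32)             # dummy batch
    y = torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    loss_fn(net(x), y).backward()
    optimizer.step()
    scheduler.step()   # lr: 1e-3 -> 1e-4 at epoch 50 -> 1e-5 at epoch 100
\\end{verbatim}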
\n\nWe denote \\nnPrefix\\CP, \\nnPrefix\\TK, and \\nnPrefix\\TT as the compressed neural network obtained by baseline low-rank approximation based compression\\xspace, which uses classical types of tensor decompositions (CP\\xspace, TK\\xspace, and TT\\xspace) as the layer-wise projecting methods (mentioned in section~\\ref{sec:invariant}). While \\tnnPrefix\\rCP, \\tnnPrefix\\rTK and \\tnnPrefix\\rTT are compressed tensorial neural networks\\xspace obtained by TNN based compression\\xspace, which uses our proposed modified tensor decomposition (\\reshapePrefix\\CP, \\reshapePrefix\\TK, and \\reshapePrefix\\TT). \n\nAs mentioned in section~\\ref{sec:invariant}, we refer to traditional back propogation-based compression of the network as end-to-end\\xspace (E2E\\xspace) compression, and refer to our strategy of data reconstruction-based sequential compression as Sequential\\xspace (Seq\\xspace) compression.\n\\smallskip \n\n\\noindent \\textbf{Our algorithm achieves 5\\% higher accuracy than baseline on CIFAR10 using ResNet-32.}\nThe results from table~\\ref{baseline} demonstrate that our TNN-C\\xspace maintains high accuracy even after the networks are highly compressed on CIFAR-10. Given a well-trained 32-layer ResNet\\xspace network and a goal of reducing the number of parameters to 10\\% of the original size, the \\nnPrefix\\CP obtained by NN-C\\xspace using end-to-end\\xspace compression reduces the original accuracy from 93.2\\% to 86.93\\%; while the \\tnnPrefix\\rCP obtained by TNN-C\\xspace paired with Seq\\xspace compression increases the accuracy to 91.28\\% with the same compression rate --- \\textbf{a performance loss of 2\\% with only 10\\% of the number of parameters}. Furthermore, TNN-C\\xspace achieves further aggressive compression --- \\textbf{a performance loss of 6\\% with only 2\\% of the number of parameters}.\nWe observe similar trends (higher compression and higher accuracy) are observed for \\tnnPrefix\\rTT as well. \nThe structure of the Tucker\\xspace decomposition (see section~\\ref{app:convolutional-tensorized}) makes \\tnnPrefix\\rTK less effective with very high compression, since the {``internal structure''} of the network reduces to very low rank, which may lose necessary information. 
Increasing the network size to 20\\% of the original provides reasonable performance on CIFAR-10 for \\tnnPrefix\\rTK as well.\n\n\n\\begin{table*}[!htbp]\n\\centering\n\\begin{tabular}{l}\n\\begin{tabular}{c |c c c c || c | c c c c}\n\\multirow{2}{3.6em}{Architect.} & \\multicolumn{4}{c||}{Compression rate} & \\multirow{2}{3.6em}{Architect.} & \\multicolumn{4}{c}{Compression rate} \\\\ \n& \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{20\\%} & \\multicolumn{1}{c||}{40\\%} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} & \\multicolumn{1}{c}{20\\%} \\\\\n\\hline\n\\nSVD~\\cite{jaderberg2014speeding} & 83.09 & 87.27 & 89.58 & 90.85 & TNN-TR{$^\\dag$}~\\cite{wang2018wide} & - & 80.80{$^\\dag$} & - & 90.60 \\\\\n\\nnPrefix\\CP~\\cite{lebedev2014speeding} & 84.02 & 86.93 & 88.75 & 88.75 & \\tnnPrefix\\rCP & 85.7 & 89.86 & \\textbf{91.28} & - \\\\\n\\nnPrefix\\TK~\\cite{kim2015compression} & 83.57 & 86.00 & 88.03 & 89.35 & \\tnnPrefix\\rTK & 61.06 & 71.34 & 81.59 & 87.11 \\\\\n\\nnPrefix\\TT & 77.44 & 82.92 & 84.13 & 86.64 & \\tnnPrefix\\rTT & 78.95 & 84.26 & 87.89 & - \\\\\n\\end{tabular} \\\\\n\\rule{0in}{1.2em}$^\\dag$\\scriptsize The tensor ring (TR) results are cited from~\\cite{wang2018wide}, and the accuracy of 80.8\\% is achieved by 6.67\\% compression rate.\n\\end{tabular}\n\\vspace{-0.8em}\n\\caption{Percentage test accuracy of baseline \\textbf{NN-C\\xspace} with\nE2E\\xspace compression vs. our \\textbf{TNN-C\\xspace} with Seq\\xspace compression on CIFAR10. \nThe uncompressed ResNet\\xspace-32 achieves {93.2\\%} accuracy with 0.46M parameters. \n\\label{baseline}}\n\\end{table*}\n\n\n\\begin{table*}[!htbp]\n\\centering\n\\begin{tabular}{c | c c | c c | c c | c c}\n\\multirow{3}{3.6em}{Architect.} & \\multicolumn{8}{c}{Compression rate} \\\\ \n& \\multicolumn{2}{c|}{5\\%} & \\multicolumn{2}{c|}{10\\%} & \\multicolumn{2}{c|}{20\\%} & \\multicolumn{2}{c}{40\\%} \\\\\n& Seq & E2E & Seq & E2E & Seq & E2E & Seq & E2E \\\\\n\\hline\n\\nSVD & 74.04 & \\textbf{83.09} & 85.28 & \\textbf{87.27} & \\textbf{89.74} & 89.58 & \\textbf{91.83} & 90.85 \\\\\n\\nnPrefix\\CP & 83.19 & \\textbf{84.02} & \\textbf{88.50} & 86.93 & \\textbf{90.72} & 88.75 & \\textbf{89.75} & 88.75 \\\\\n\\nnPrefix\\TK & 80.11 & \\textbf{83.57} & \\textbf{86.75} & 86.00 & \\textbf{89.55} & 88.03 & \\textbf{91.3} & 89.35 \\\\\n\\nnPrefix\\TT & \\textbf{80.77} & 77.44 & \\textbf{87.08} & 82.92 & \\textbf{89.14} & 84.13 & \\textbf{91.21} & 86.64\\\\ \n\\end{tabular}\n\\caption{Percentage accuracy of \\textbf{our Seq} vs. \\textbf{baseline E2E} tuning using NN-C\\xspace on CIFAR10. \n\\label{seq-e2e-noreshape}}\n\\end{table*}\n\n\t\n\\begin{table*}[!htbp]\n\\centering\n\\begin{tabular}{c | c c || c | c c}\n\\multirow{2}{3.6em}{Architect.} & \\multicolumn{2}{c||}{Compression rate} & \\multirow{2}{3.6em}{Architect.} & \\multicolumn{2}{c}{Compression rate}\\\\ \n& \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c||}{10\\%} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c}{5\\%} & \\multicolumn{1}{c}{10\\%} \\\\ \\hline\n\\nnPrefix\\CP & 83.19 & 88.50 & \\tnnPrefix\\rCP & \\textbf{89.86} & \\textbf{91.28} \\\\\n\\nnPrefix\\TK & 80.11 & 86.73 & \\tnnPrefix\\rTK & 71.34 & 81.59 \\\\\n\\nnPrefix\\TT & 80.77 & 87.08 & \\tnnPrefix\\rTT & \\textbf{84.26} & \\textbf{87.89} \\\\\n\\end{tabular}\n\\caption{Percentage accuracy of \\textbf{our TNN-C\\xspace} vs. 
\\textbf{baseline NN-C\\xspace} using Seq tuning on CIFAR10.}\n\\label{reshape-vs-non}\n\\end{table*}\n\t\n\n\\begin{table*}[!htbp]\n\\begin{center}\n\\begin{tabular}{c|c||c||c||c}\n& & {Uncompressed} &{\\nnPrefix\\TT} (E2E\\xspace) &{\\tnnPrefix\\rTT} (Seq\\xspace) \\\\\n\\# epochs & \\# samples & \\# params. = 25M & \\# params. = 2.5M & \\# params. = 2.5M \\\\\n\\hline\n0.2 & 0.24M & 4.22& 2.78 & 44.35 \\\\\n0.3 & 0.36M & 6.23& 3.99 & 46.98 \\\\\n0.5 & 0.60M & 9.01& 7.48 & 49.92 \\\\\n1.0 & 1.20M & 17.3& 12.80 & 52.59 \\\\\n2.0 & 2.40M & 30.8& 18.17 & \\textbf{54.00}\\\\\n\\end{tabular}\t\t\n\\caption{Convergence of percentage accuracies of \\textbf{uncompressed} vs. \\textbf{NN-C\\xspace} (TT\\xspace decomposition) vs. \\textbf{TNN-C\\xspace} (\\tnnPrefix\\rTT decomposition) achieving 10\\% compression rate for ResNet\\xspace-50 ImageNet.\n\\label{imagenet}}\n\\end{center}\n\\end{table*}\n\n\n\n\n\\begin{figure}[!htbp]\n\\begin{minipage}[]{\\linewidth}\n\t\t\\centering\n\t\t\\psfrag{y-axis}{\\scriptsize{Test Error}}\n\t\t\\psfrag{x-axis}[Bl]{\\scriptsize{\\# Gradient Updates ($\\times 10^{11}$)}}\n\t\t\\psfrag{cp-seq}[Bl]{\\scriptsize{Seq-CP\\xspace}}\n\t\t\\psfrag{cp-e2e}[Bl]{\\scriptsize{E2E-CP\\xspace}}\n\t\t\\psfrag{tt-seq}[Bl]{\\scriptsize{Seq-TT\\xspace}}\n\t\t\\psfrag{tt-e2e}[Bl]{\\scriptsize{E2E-TT\\xspace}}\n\t\t\\psfrag{tk-e2e}[Bl]{\\scriptsize{E2E-TK\\xspace}}\n\t\t\\psfrag{tk-seq}[Bl]{\\scriptsize{Seq-TK\\xspace}}\n\t\t\\includegraphics[width=0.8\\textwidth]{.\/\/faster_convergence_new.eps}\n\t\t\\captionof{figure}{\\label{convergence}Convergence rate for Seq vs. E2E\\xspace compression on CIFAR10.}\n\t\\end{minipage}\n\n\\end{figure}\n\n\nTable~\\ref{baseline} shows that \\emph{TNN-C\\xspace with Seq\\xspace compression} outperforms \\emph{NN-C\\xspace with end-to-end\\xspace compression}. Now we address the following question: is one factor (Seq\\xspace compression or TNN-C\\xspace) primarily responsible for increased performance, or is the benefit due to synergy between the two?\n\n\\smallskip \n\n\\noindent \\textbf{Seq\\xspace compression, TNN based compression\\xspace, or both?} (1) \nWe present the effect of different compression methods on accuracy in Table~\\ref{seq-e2e-noreshape}. \nOther than at very high compression rate (5\\% column in\nTable~\\ref{seq-e2e-noreshape}), Seq\\xspace compression (Seq) consistently\noutperforms end-to-end\\xspace (E2E) compression. In addition, Seq\\xspace compression is also\n{\\em much\\\/} faster and leads to more stable convergence compared to end-to-end\\xspace compression.\nFigure~\\ref{convergence} plots the compression error over the number of gradient\nupdates for various compression methods.\n(2) We present the effect of different compression methods on accuracy in Table~\\ref{reshape-vs-non}.\nInterestingly, as what is demonstrated in Table~\\ref{reshape-vs-non}, if TNN based compression\\xspace method is used, the test accuracy is restored for even very high compression ratios~\\footnote{Note that \\tnnPrefix\\rTK remains an exception for aggressive compression due to the low rank internal structure that we previously discussed.}. \nThese results confirm that TNN\\xspace is more flexible than NN as TNN\\xspace allows exploitation of additional invariant structures in the parameter space of deep neural networks, and such invariant structures are picked up by our proposed TNN based compression\\xspace (our TNN-C\\xspace), but not by low-rank approximation based compression\\xspace (NN-C\\xspace). 
\nTherefore, our results show that TNN-C\\xspace and Seq\\xspace compression are\nsymbiotic, and {\\em both\\\/} are necessary to simultaneously obtain a\nhigh accuracy and a high compression rate.\n\n\n\\paragraph{Scalability}\nFinally, we show that our methods scale to state-of-the-art large\nnetworks, by evaluating performance on the ImageNet 2012 dataset on a\n50-layer ResNet\\xspace (uncompressed with 76.05\\% accuracy).\nTable~\\ref{imagenet} shows the accuracy of \\tnnPrefix\\rTT obtained by TNN-C\\xspace\nwith Seq\\xspace tuning compared to that of \\nnPrefix\\TT obtained by low-rank approximation based compression\\xspace with E2E\\xspace tuning, and the accuracy of the uncompressed network (ResNet\\xspace-50) with 10\\% compression rate. \nTable~\\ref{imagenet} shows that TNN-C\\xspace paired with Seq tuning is much faster than the alternative. This is an important\nresult because it empirically validates our hypotheses that (1)\nour TNN based compression\\xspace captures the invariant structure of the ResNet\\xspace (with few redundancies) better and faster than the baseline NN-C\\xspace compression, \n(2) data reconstruction Seq\\xspace tuning is effective even on the largest networks and datasets, and \n(3) our proposed efficient TNN-C\\xspace can scale to the state-of-the-art neural networks. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion and Perspectives}\n\\label{sec:conclusion}\nWe define a new generalized tensor algebra extending existing tensor operations. We extend vector\/matrix operations to their\nhigher order tensor counterparts, providing systematic notations and libraries for\ntensorization of neural networks and higher order tensor\ndecompositions. \nUsing these generalized tensor operations, we introduce tensorial neural networks\\xspace (TNN\\xspace) which extends existing neural networks. \nOur TNN is more flexible than NN and more compact than neural networks allowing same amount expressive power with fewer number of parameters. \nTherefore mapping NN to its closest TNN is a compression of NN as the resulting TNN will carry fewer number of parameters. \nOther compression techniques as mentioned in the related work can naturally be used on the compressed TNN to further compress the TNN. \n As a future step, we will explore optimizing the order of (parallel) implementations of the tensor algebra.\n\n\n\\section{Notations}\n\\label{app:notations}\n\n\\paragraph{Symbols:}\nLower case letters (e.g. $\\myvector{v}$) are used to denote column vectors, while upper case letters (e.g. $\\mymatrix{M}$) are used for matrices, and curled letters (e.g. $\\tensor{T}$) for multi-dimensional arrays (tensors). For a tensor $\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1}}$, we will refer to the number of indices as \\textit{order}, each individual index as \\textit{mode} and the length at one mode as \\textit{dimension}. Therefore, we will say that $\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1}}$ is an $m$-order tensor which has dimension $I_k$ at mode-$k$. {\\em Tensor operations} are extensively used in this paper: The {\\em tensor (partial) outer product} is denoted as $\\otimes$, {\\em tensor convolution} as $\\ast$, and finally $\\times$ denotes either {\\em tensor contraction} or {\\em tensor multiplication}. 
Each of these operators will be equipped with subscript and superscript when used in practice, for example $\\times^{m}_{n}$ denotes mode-$(m, n)$ tensor contraction (defined in Appendix~\\ref{app:operations}). Furthermore, the symbol $\\circ$ is used to construct {\\em compound operations}. For example, $(\\ast \\circ \\otimes)$ is a compound operator simultaneously performing tensor convolution and tensor partial outer product between two tensors. \n\n\\paragraph{Indexing:} In this paragraph, we explain the usages of subscripts\/superscripts for both multi-dimensional arrays and operators, and further introduce several functions that are used to alter the layout of multi-dimensional arrays. \n\n\\begin{itemize}[leftmargin=*]\n\n\\item \n(1) Nature indices start from 0, but reversed indices are used occasionally, which start from $-1$. Therefore the first entry of a vector $\\myvector{v}$ is ${v}_{0}$, while the last one is ${v}_{-1}$. (2) For multi-dimensional arrays, the {\\em subscript} is used to denote an entry or a subarray within an object, while {\\em superscript} is to index among a sequence of arrays. For example, $\\mymatrix{M}_{i, j}$ denotes the entry at $i^{{\\mbox{\\tiny th}}}$ row and $j^{{\\mbox{\\tiny th}}}$ column of a matrix $\\mymatrix{M}$, and $\\matrixSup{M}{k}$ is the $k^{{\\mbox{\\tiny th}}}$ matrix in a set of $N$ matrices $\\{ \\matrixSup{M}{0}, \\matrixSup{M}{1}, \\cdots \\matrixSup{M}{N-1} \\}$. For operators, as we have seen, both subscript and superscript are used to denote the modes involved in the operation. (3) {\\em The symbol colon '$:$'} is used to slice a multi-dimensional array. For example, $\\tensorSub{M}{:, k}$ denotes the $k^{{\\mbox{\\tiny th}}}$ column of $\\mymatrix{M}$, and $\\tensorSub{T}{:, :, k}$ denotes the $k^{{\\mbox{\\tiny th}}}$ {\\em frontal slice} of a 3-order tensor $\\tensor{T}$. (4) {\\em Big-endian notation} is adopted in conversion between multi-dimensional array and vectors. Specifically, the function $\\vectorize(\\cdot)$ flattens (a.k.a. {\\em vectorize}) a tensor $\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1}}$ into a vector $\\myvector{v} \\in \\R^{\\prod_{l=0}^{m-1} I_l}$ such that $\\tensorSub{T}{i_0, \\cdots, i_{m-1}} = v_{i_{m-1} + i_{m-2} I_{m-1} + \\cdots + i_0 I_1 \\cdots I_{m-1}}$.\n \n\n\\item \n(1) The function $\\mathsf{swapaxes}(\\cdot)$ is used to permute ordering of the modes of a tensor as needed. For example, given two tensors $\\tensor{U} \\in \\R^{I \\times J \\times K}$ and $\\tensor{V} \\in \\R^{K \\times J \\times I}$, the operation $\\tensor{V} = \\mathsf{swapaxes}(\\tensor{U})$ convert the tensor $\\tensor{U}$ into $\\tensor{V}$ such that $\\tensorSub{V}{k, j, i} = \\tensorSub{U}{i, j, k}$.\n(2) The function $\\mathsf{flipaxis}(\\cdot, \\cdot)$ flips a tensor along a given mode. For example, given a tensor $\\tensor{U} \\in \\R^{I \\times J \\times K}$ and $\\tensor{V} = \\mathsf{flipaxis}(\\tensor{U}, 0)$, the entries in $\\tensor{V}$ is defined as $\\tensorSub{V}{i, j, k} = \\tensorSub{U}{I - 1 - i (mod ~ I), j, k}$.\n\n\\end{itemize}\n\\section{Tensor operations}\n\\label{app:operations}\n\n\\input{.\/\/table_operations}\n\nTo begin with, we describe several {\\em basic tensor operations} that are natural generalization to their vector\/matrix counterparts. These basic operations can be further combined to construct {\\em compound operations} that serve as building blocks of {\\em tensorial neural networks}. 
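Before turning to the individual operations, the following minimal \\texttt{numpy} sketch (ours; purely illustrative) spells out the layout functions of Appendix~\\ref{app:notations}: the big-endian $\\vectorize(\\cdot)$ is row-major flattening, $\\mathsf{swapaxes}(\\cdot)$ is an axis permutation, and $\\mathsf{flipaxis}(\\cdot, \\cdot)$ reverses a tensor along one mode.
\\begin{verbatim}
import numpy as np

U = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # 3-order tensor, I, J, K = 2, 3, 4

# vectorize: big-endian (row-major) flattening, numpy's default C order,
# so that U[i, j, k] == v[k + j*K + i*J*K]
v = U.reshape(-1)
assert v[3 + 2 * 4 + 1 * 3 * 4] == U[1, 2, 3]

# swapaxes: permute modes as needed, here V[k, j, i] = U[i, j, k]
V = np.transpose(U, (2, 1, 0))
assert V[3, 2, 1] == U[1, 2, 3]

# flipaxis(U, 0): reverse mode 0, W[i, j, k] = U[I - 1 - i, j, k]
W = np.flip(U, axis=0)
assert W[0, 2, 3] == U[1, 2, 3]
\\end{verbatim}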
\n\n\\paragraph{Tensor contraction} \nGiven an $m$-order tensor $\\tensorSup{T}{0} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1}}$ and another $n$-order tensor $\\tensorSup{T}{1} \\in \\R^{J_0 \\times \\cdots \\times J_{n-1}}$, which share the same dimension at mode-$k$ of $\\tensorSup{T}{0}$ and mode-$l$ of $\\tensorSup{T}{1}$( i.e. $I_k = J_l$), the mode-$(k, l)$ contraction of $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$, denoted as $\\tensor{T} \\triangleq \\tensorSup{T}{0} \\times^{k}_{l} \\tensorSup{T}{1}$, returns a $(m + n - 2)$-order tensor $\\tensor{T}\\ \\in R^{I_0 \\times \\cdots \\times I_{k-1} \\times I_{k+1} \\times \\cdots \\times I_{m-1} \\times J_0 \\times \\cdots \\times J_{l-1} \\times J_{l+1} \\times \\cdots \\times J_{n-1}}$, whose entries are computed as\n\\begin{subequations}\n\\begin{align}\n& \\tensorSub{T}{i_0, \\cdots, i_{k-1}, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}} \\nonumber \\\\\n= ~ & \\sum_{r=0}^{I_k - 1} \\tensorInd{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}}{0} ~ \\tensorInd{T}{j_0, \\cdots, j_{l-1}, r, j_{l+1}, \\cdots, j_{n-1}}{1} \\label{def:tensor-contraction-1} \\\\\n= ~ & \\langle \\tensorInd{T}{i_0, \\cdots, i_{k-1}, :, i_{k+1}, \\cdots, i_{m-1}}{0}, \\tensorInd{T}{j_0, \\cdots, j_{l-1}, :, j_{l+1}, \\cdots, j_{n-1}}{1} \\rangle \\label{def:tensor-contraction-2}\n\\end{align}\n\\end{subequations}\nNotice that tensor contraction is a direct generalization of matrix multiplication to higher-order tensor, and it reduces to matrix multiplication if both tensors are $2$-order (and therefore matrices). As each entry in $\\tensor{T}$ can be computed as inner product of two vectors, which requires $I_k = J_l$ multiplications, the total number of operations to evaluate a tensor contraction is therefore $O( ( \\prod_{u = 0}^{m - 1} I_u ) ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) )$, taking additions into account. \n\n\\paragraph{Tensor multiplication (Tensor product)} \nTensor multiplication (a.k.a. tensor product) is a special case of tensor contraction where the second operant is a matrix. Given a $m$-order tensor $\\tensor{U} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1}}$ and a matrix $\\mymatrix{M} \\in \\R^{I_k \\times J}$, where the dimension of $\\tensor{U}$ at mode-$k$ agrees with the number of the rows in $\\mymatrix{M}$, the mode-$k$ tensor multiplication of $\\tensor{U}$ and $\\myvector{M}$, denoted as $\\tensor{V} \\triangleq \\tensor{U} \\times_k \\mymatrix{M}$, yields another $m$-order tensor $\\tensor{V} \\in \\R^{I_0 \\times \\cdots \\times I_{k-1} \\times J \\times I_{k+1} \\times \\cdots I_{m-1}}$, whose entries are computed as\n\\begin{subequations}\n\\begin{align}\n& \\tensorSub{V}{i_0, \\cdots, i_{k-1}, j, i_{k+1}, \\cdots, i_{m-1}} \\nonumber \\\\\n= ~ & \\sum_{r=0}^{I_k - 1} \\tensorSub{U}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}} ~ \\mymatrix{M}_{r, j} \\label{def:tensor-multiplication-1} \\\\\n= ~ & \\langle \\tensorSub{U}{i_0, \\cdots, i_{k-1}, :, i_{k+1}, \\cdots, i_{m-1}}, ~ \\mymatrix{M}_{:, j} \\rangle \\label{def:tensor-multiplication-2}\n\\end{align}\n\\end{subequations}\nFollowing the convention of multi-linear algebra, the mode for $J$ now substitutes the location originally for $I_k$ (which is different from the definition of tensor contraction). 
Regardlessly, the number of operations for tensor multiplication follows tensor contraction exactly, that is $O ( ( \\prod_{u = 0}^{m - 1} I_u ) J )$.\n\n\\paragraph{Tensor convolution}\nGiven a $m$-order tensor $\\tensorSup{T}{0} \\in \\R^{I_0 \\times I_1 \\times \\cdots \\times I_{m-1}}$ and another $n$-order tensor $\\tensorSup{T}{1} \\in \\R^{J_0 \\times J_1 \\times \\cdots \\times J_{n-1}}$. The mode-$(k, l)$ convolution of $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$, denoted as $\\tensor{T} \\triangleq \\tensorSup{T}{0} \\ast^{k}_{l} \\tensorSup{T}{1}$, returns a $(m + n - 1)$-order tensor $\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I^{\\prime}_k \\times \\cdots \\times I_{m-1} \\times J_0 \\times \\cdots \\times J_{l-1} \\times J_{l+1} \\times \\cdots \\times J_{n-1}}$. The entries of $\\tensor{T}$ can be computed using any convolution operation $\\ast$ that is defined for two vectors.\n\\begin{subequations}\n\\begin{align}\n& \\tensorSub{T}{i_0, \\cdots, i_{k-1}, :, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}} \\nonumber \\\\\n= ~ & \\tensorInd{T}{i_0, \\cdots, i_{k-1}, :, i_{k+1}, \\cdots, i_{m-1}}{0} \\ast \\tensorInd{T}{j_0, \\cdots, j_{l-1}, :, j_{l+1}, \\cdots, j_{n-1}}{1} \\label{def:tensor-convolution} \\\\\n= ~ & \\tensorInd{T}{j_0, \\cdots, j_{l-1}, :, j_{l+1}, \\cdots, j_{n-1}}{1} ~\\overline{\\ast}~ \\tensorInd{T}{i_0, \\cdots, i_{k-1}, :, i_{k+1}, \\cdots, i_{m-1}}{0} \\label{def:tensor-convolution-reversed}\n\\end{align}\n\\end{subequations}\nHere we deliberately omit the exact definition of vector convolution $\\ast$ (and its conjugate $\\overline{\\ast}$), because it can be defined differently depending on the user case (Interestingly, the \"convolution\" in convolutional layer indeed computes {\\em correlation} instead of convolution).\nCorrespondingly, the resulted dimension $I^{\\prime}_k$ at mode-$k$ is determined by the chosen type of convolution. \n For example, the \"convolution\" in convolutional layer typically yields $I^{\\prime}_k = I_k$ (with same padding) or $I^{\\prime}_k = I_k - J_l + 1$ (with valid padding). \nWithout {\\em Fast Fourier Transform (FFT)}, the number of operations is $O ( ( \\prod_{u = 0}^{m - 1} I_u ) ( \\prod_{v = 0}^{n - 1} J_v ) )$.\n\n\\paragraph{Tensor outer product}\nGiven a $m$-order tensor $\\tensorSup{T}{0} \\in \\R^{I_0 \\times I_1 \\times \\cdots \\times I_{m-1}}$ and another $n$-order tensor $\\tensorSup{T}{1} \\in \\R^{J_0 \\times J_1 \\times \\cdots \\times J_{n-1}}$, the outer product of $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$, denoted $\\tensor{T} \\triangleq \\tensorSup{T}{0} \\otimes \\tensorSup{T}{1}$, concatenates all the indices of $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$, and returns a $(m + n)$-order tensor $\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1} \\times J_0 \\times \\cdots \\times J_{n-1}}$ whose entries are computed as \n\\begin{equation}\n\\label{def:tensor-outer-product}\n\\tensorSub{T}{i_0, \\cdots, i_{m-1}, j_0, \\cdots, j_{n-1}} = \\tensorInd{T}{i_0, \\cdots, i_{m-1}}{0} ~ \\tensorInd{T}{j_0, \\cdots, j_{n-1}}{1} \n\\end{equation}\\noindent\nIt is not difficult to see that tensor outer product is a direct generalization for outer product for two vectors $\\mymatrix{M} = \\myvector{u} \\otimes \\myvector{v} = \\myvector{u} ~ \\myvector{v}^\\top$. Obviously, the number of operations to compute a tensor outer product explicitly is $O ( ( \\prod_{u = 0}^{m - 1} I_u ) ( \\prod_{v = 0}^{n - 1} J_v ) )$. 
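Before moving on to the partial outer product, the following minimal \\texttt{numpy} sketch (ours; small random shapes, purely illustrative) spells out the four operations defined so far. The vector convolution inside the mode-$(k, l)$ tensor convolution is deliberately left open above; the sketch picks numpy's ``full'' convolution for concreteness.
\\begin{verbatim}
import numpy as np

T0 = np.random.randn(2, 3, 4)    # I0, I1, I2 = 2, 3, 4
T1 = np.random.randn(5, 6, 4)    # J0, J1, J2 = 5, 6, 4  (I2 = J2)

# mode-(2, 2) tensor contraction: sum over the shared dimension,
# giving a tensor of order 3 + 3 - 2 with shape (I0, I1, J0, J1)
C = np.tensordot(T0, T1, axes=([2], [2]))
assert C.shape == (2, 3, 5, 6)
assert np.allclose(C[0, 1, 2, 3], np.dot(T0[0, 1, :], T1[2, 3, :]))

# mode-1 tensor multiplication with M in R^{I1 x J}: the new mode J
# replaces mode 1, giving shape (I0, J, I2)
M = np.random.randn(3, 7)
P = np.moveaxis(np.tensordot(T0, M, axes=([1], [0])), -1, 1)
assert P.shape == (2, 7, 4)

# mode-(1, 1) tensor convolution: convolve mode-1 fibers for every
# combination of the remaining indices ("full" mode: I1' = I1 + J1 - 1)
conv = np.empty((2, 3 + 6 - 1, 4, 5, 4))
for i0 in range(2):
    for i2 in range(4):
        for j0 in range(5):
            for j2 in range(4):
                conv[i0, :, i2, j0, j2] = np.convolve(T0[i0, :, i2],
                                                      T1[j0, :, j2])

# tensor outer product: concatenate all modes, shape (2, 3, 4, 5, 6, 4)
O = np.multiply.outer(T0, T1)
assert O.shape == (2, 3, 4, 5, 6, 4)
\\end{verbatim}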
\n\n\\paragraph{Tensor partial outer product}\nTensor partial outer product is a variant of tensor outer product defined above, which is widely used in conjunction with other operations. Given a $m$-order tensor $\\tensorSup{T}{0} \\in \\R^{I_0 \\times I_1 \\times \\cdots \\times I_{m-1}}$ and another $n$-order tensor $\\tensorSup{T}{1} \\in \\R^{J_0 \\times J_1 \\times \\cdots \\times J_{n-1}}$, which share the same dimension at mode-$k$ of $\\tensorSup{T}{0}$ and mode-$l$ of $\\tensorSup{T}{1}$ (i.e. $I_k = J_l$), the mode-$(k, l)$ partial outer product of $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$, denoted as $\\tensor{T} \\triangleq \\tensorSup{T}{0} \\otimes^{k}_{l} \\tensorSup{T}{1}$, returns a $(m + n - 1)$-order tensor $\\tensor{T} \\in \\R^{I_0 \\times \\cdots \\times I_{m-1} \\times J_0 \\times \\cdots \\times J_{l-1} \\times J_{l+1} \\times \\cdots \\times J_{n-1}}$, whose entries are computed as\n\\begin{subequations}\n\\begin{align}\n& \\tensorSub{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}} \\nonumber \\\\\n= ~ & \\tensorInd{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}}{0} ~ \\tensorInd{T}{j_0, \\cdots, j_{l-1}, r, j_{l+1}, \\cdots, j_{n-1}}{1} \n\\label{def:tensor-partial-outer-product} \\\\\n= ~ & \\tensorInd{T}{\\cdots, r, \\cdots}{0} \\otimes \\tensorInd{T}{\\cdots, r, \\cdots}{1}\n\\end{align}\n\\end{subequations}\nThe operation bears the name \"partial outer product\" because it reduces to outer product once we fix the indices at mode-$k$ of $\\tensorSup{T}{0}$ and mode-$l$ of $\\tensorSup{T}{1}$.\nReferring to the computational complexity of tensor outer product, the number of operations for each fixed index is $O ( ( \\prod_{u = 0, u \\neq k}^{m - 1} I_u ) ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) )$, therefore the total time complexity for the tensor partial outer product is $O ( ( \\prod_{u = 0}^{m - 1} I_u ) ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) )$.\n\n\n\\paragraph{Compound operations:}\n As building blocks, the basic tensor operations defined above can further combined to construct compound operations that perform multiple operations on multiple tensors simultaneously. We illustrate their usage using two representative examples in this section, and we will see more examples when we derive backpropagation rules for tensor operations in Appendix~\\ref{app:derivatives}.\n \n\\begin{itemize}[leftmargin=*]\n\n\\item \\textbf{Simultaneous multi-operations between two tensors. } For example, given two $3$-order tensors $\\tensorSup{T}{0} \\in \\R^{R \\times X \\times S}$ and $\\tensorSup{T}{1} \\in \\R^{R \\times H \\times S}$, we can define a compound operation $\\left( \\otimes^0_0 \\circ \\ast^1_1 \\circ \\times^2_2 \\right)$ between $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$, where mode-$(0,0)$ partial outer product, mode-$(1,1)$ convolution and mode-$(2,2)$ contraction are performed simultaneously, which results in a $2$-order tensor $\\tensor{T}$ of $\\R^{R \\times X^{\\prime}}$ (it is indeed a matrix, though denoted as a tensor). 
The entries of $\\tensor{T} \\triangleq \\tensorSup{T}{0} \\left( \\otimes^0_0 \\circ \\ast^1_1 \\circ \\times^2_2 \\right) \\tensorSup{T}{1}$ are computed as\n\\begin{equation}\n\\tensorSub{T}{r, :} = \\sum_{s = 0}^{S - 1} \\tensorInd{T}{r, :, s}{0} \\ast \\tensorInd{T}{r, :, s}{1} \n\\label{def:compound-1}\n\\end{equation}\\noindent\nFor commonly used vector convolution, it is not difficult to show that number of operations required to compute the result $\\tensor{T}$ is $O\\left( R \\max(X, H) \\log(\\max(X, H)) S \\right)$ with FFT and $O(RXHS)$ without FFT, as each of the $R$ vectors in $\\tensor{T}$ is computed with a sum of $S$ vector convolutions. \n\n\\item \\textbf{Simultaneous operations between a tensor and a set of multiple tensors}. For example, given a $3$-order tensor $\\tensor{U} \\in \\R^{R \\times X \\times S}$ and a set of three tensors $\\tensorSup{T}{0} \\in \\R^{R \\times P}$, $\\tensorSup{T}{1} \\in \\R^{K \\times Q}$ and $\\tensorSup{T}{2} \\in \\R^{S \\times T}$, we can define a compound operation on $\\tensor{U}$ as $\\tensor{V} \\triangleq \\tensor{U} ( \\otimes^{0}_{0} \\tensorSup{T}{0} \\ast^{1}_{0} \\tensorSup{T}{1} \\times^{2}_{0} \\tensorSup{T}{2} )$, which performs mode-$(0,0)$ partial outer product with $\\tensorSup{T}{0}$, mode-$(1, 0)$ convolution with $\\tensorSup{T}{1}$ and mode-$(2, 0)$ contraction with $\\tensorSup{T}{2}$ simultaneously. In this case, a $5$-order tensor $\\tensor{V} \\in \\R^{R \\times X^{\\prime} \\times P \\times Q \\times T}$ is returned, with entries calculated as \n\\begin{equation}\n\\tensorSub{V}{r, :, p, q, t} = \\sum_{s=0}^{S-1} \\tensorInd{T}{r, p}{0} \\left( \\tensorSub{U}{r, :, s} \\ast \\tensorInd{T}{:, q}{1} \\right) \\tensorInd{T}{s, t}{2}\n\\label{def:compound-2}\n\\end{equation}\\noindent\nIdentifying the best order to evaluate a compound operation with multiple tensors is in general an NP-hard problem, but for this example we can find it using exhaustive search: \n\\begin{equation}\n\\tensorSub{V}{r, :, p, q, t} = \\left( \\left( \\sum_{s=0}^{S-1} \\tensorSub{U}{r, :, s} \\tensorInd{T}{s, t}{2} \\right) \\ast \\tensorInd{T}{:, q}{1} \\right) \\tensorInd{T}{r, p}{0}\n\\end{equation}\\noindent\nIf we follows the supplied brackets and break the evaluation into three steps, these steps take $O(RXST)$, $O(RXHPT)$ and $O(RX^{\\prime}HPT)$ operations respectively, therefore result in a total time complexity of $O( RXST + RXHPT + RX^{\\prime} PQT)$ for the compound operation.\n\n\\end{itemize}\n\nGenerally, compound operations over multiple tensors are difficult to flatten into mathematical equations, and usually described by {\\em tensor diagrams} as in Section~\\ref{sec:preliminary}, which are usually called {\\em tensor networks}~\\cite{cichocki2016low}in the physics literature. \n\n\n\\section{Backpropagation of tensor operations}\n\\label{app:derivatives}\n\nIn order to use the operations in Appendix~\\ref{app:operations} in tensorial neural networks, we derive in this section their backpropagation rules. For brevity, we use tensor contraction as a tutorial example, and omit the derivations for other operations. \n\n\n\n\n\n\\paragraph{Tensor contraction}\nRecall the definition of tensor contraction in Equations~\\eqref{def:tensor-contraction-1} and~\\eqref{def:tensor-contraction-2} in tensor algebra:\n\\begin{equation}\n\\tensor{T} = \\tensorSup{T}{1} \\times \\tensorSup{T}{2}\n\\label{def:tensor-contraction}\n\\end{equation}\nThe partial derivatives of the result $\\tensor{T}$ w.r.t. 
its operants $\\tensorSup{T}{0}$, $\\tensorSup{T}{1}$ can be computed at the entries level:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\tensorSub{T}{i_0, \\cdots, i_{k-1}, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}}}{\\partial \\tensorInd{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}}{0}} & = \\tensorInd{T}{j_0, \\cdots, j_{l-1}, r, j_{l+1}, \\cdots, j_{n-1}}{1}\n \\label{eq:derivative-tensor-contraction-1} \\\\\n\\frac{\\partial \\tensorSub{T}{i_0, \\cdots, i_{k-1}, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}}}{\\partial \\tensorInd{T}{j_0, \\cdots, j_{l-1}, r, j_{l+1}, \\cdots, j_{n-1}}{1}} & = \\tensorInd{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}}{0} \n\\label{eq:derivative-tensor-contraction-2}\n\\end{align}\n\\end{subequations}\nWith chain rule, the derivatives of $\\mathcal{L}$ w.r.t. $\\tensorSup{T}{0}$ and $\\tensorSup{T}{1}$ can be obtained through ${\\partial \\mathcal{L}}\/{\\partial \\tensor{T}}$.\n\\begin{subequations}\n\\begin{gather}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorInd{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}}{0}} = \\sum_{j_0 = 0}^{J_0 - 1} \\cdots \\sum_{j_{l-1} = 0}^{J_{l-1} - 1} \\sum_{j_{l+1} = 0}^{J_{l+1} - 1} \\cdots \\sum_{j_{n-1} = 0}^{J_{n-1} - 1} \\nonumber \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{T}{i_0, \\cdots, i_{k-1}, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}}} ~ \\tensorInd{T}{j_0, \\cdots, j_{l-1}, r, j_{l+1}, \\cdots, j_{n-1}}{1} \\label{eq:backprop-tensor-contraction-1-1} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorInd{T}{j_0, \\cdots, j_{l-1}, r, j_{l+1}, \\cdots, j_{n-1}}{1}} = \\sum_{i_0 = 0}^{I_0 - 1} \\cdots \\sum_{i_{k-1} = 0}^{I_{k-1} - 1} \\sum_{i_{k+1} = 0}^{I_{k+1} - 1} \\cdots \\sum_{i_{m-1} = 0}^{I_{m-1} - 1} \\nonumber \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSub{T}{i_0, \\cdots, i_{k-1}, i_{k+1}, \\cdots, i_{m-1}, j_0, \\cdots, j_{l-1}, j_{l+1}, \\cdots, j_{n-1}}} ~ \\tensorInd{T}{i_0, \\cdots, i_{k-1}, r, i_{k+1}, \\cdots, i_{m-1}}{0} \\label{eq:backprop-tensor-contraction-2-1} \n\\end{gather}\n\\end{subequations}\nThese tedious equations can be greatly simplified with tensor notations introduced before.\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{0}} & = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{m-1}_{0} \\circ \\cdots \\circ \\times^{m+l-2}_{l-1} \\circ \\times^{m+l-1}_{l+1} \\circ \\cdots \\circ \\times^{m+n-3}_{n-1} \\right) \\tensorSup{T}{1} \\right) \n\\label{eq:backprop-tensor-contraction-1-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{1}} & = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{k-1}_{k-1} \\circ \\times^{k}_{k+1} \\circ \\cdots \\circ \\times^{m-2}_{m-1} \\right) \\tensorSup{T}{0} \\right) \n\\label{eq:backprop-tensor-contraction-2-2}\n\\end{align}\n\\end{subequations}\nwhere $\\mathsf{swapaxes}(\\cdot)$ aligns the modes of outputs. Notice that the backpropagation equations are compound operations, even if the original operation is a basic one. 
The number of operations required for both backpropagation equations are $O ( ( \\prod_{u = 0}^{m - 1} I_u ) \\allowbreak ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) )$, which are exactly the same as in the forward computation in Equation~\\ref{def:tensor-contraction}. \n\n\n\\paragraph{Tensor multiplication} \nRecall the definition of tensor multiplication in Equations~\\eqref{def:tensor-multiplication-1} and~\\eqref{def:tensor-multiplication-2} in tensor algebra:\n\\begin{equation}\n\\tensor{V} = \\tensor{U} \\times_{k} \\mymatrix{M}\n\\label{def:tensor-multiplication}\n\\end{equation}\nBackpropagation equations:\n\\begin{subequations} \n\\begin{gather}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} \\times_{k} \\mymatrix{M}^{\\top} \\label{eq:backprop-tensor-multiplication-1-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\mymatrix{M}} = \\tensor{U} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{k-1}_{k-1} \\circ \\times^{k+1}_{k+1} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\label{eq:backprop-tensor-multiplication-2-2}\n\\end{gather}\n\\end{subequations}\nThe time complexities for both equations are $O ( ( \\prod_{u = 0}^{m - 1} I_u ) J )$, which is identical to the forward computation in Equation~\\eqref{def:tensor-multiplication}.\n\n\\paragraph{Tensor convolution} \nRecall the definition of tensor convolution in Equation~\\eqref{def:tensor-convolution} in tensor algebra:\n\\begin{equation}\n\\tensor{T} = \\tensorSup{T}{1} \\ast^{k}_{l} \\tensorSup{T}{2} \n\\label{def:tensor-convolution-2}\n\\end{equation}\nBackpropagation equations:\n\\begin{subequations} \n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{0}} \n& = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\big( \\times^{m}_{0} \\cdots \\times^{m+l-1}_{l-1} \\circ \\ast^{k}_{l} \\circ \\times^{m+l-2}_{l+1} \\circ \\cdots \\circ \\times^{m+n-2}_{n-1} \\big) \\mathsf{flipaxis}(\\tensorSup{T}{1}, l) \\right) \n\\nonumber \\\\\n& = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{m}_{0} \\cdots \\times^{m+l-1}_{l-1} \\circ (\\ast^{k}_{l})^{\\top} \\circ \\times^{m+l-2}_{l+1} \\cdots \\times^{m+n-2}_{n-1} \\right) \\tensorSup{T}{1} \\right) \n\\label{eq:backprop-tensor-convolution-1-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{1}} \n& = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\big( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{k-1}_{k-1} \\circ \\ast^{k}_{k} \\circ \\times^{k+1}_{k+1} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\big) \\mathsf{flipaxis}(\\tensorSup{T}{0}, k) \\right) \n\\nonumber \\\\\n & = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{k-1}_{k-1} \\circ (\\ast^{k}_{k})^{\\top} \\circ \\times^{k+1}_{k+1} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\tensorSup{T}{0} \\right) \n\\label{eq:backprop-tensor-convolution-2-2}\n\\end{align}\n\\end{subequations}\nwhere $(\\ast^{k}_{l})^{\\top}$ is the {\\em adjoint operator} of $\\ast^{k}_{l}$. 
Without FFT, \nthe time complexities for the equations are $O ( ( \\prod_{u = 0, u \\neq k}^{m - 1} I_u ) \\allowbreak ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) I_k^{\\prime} J_l )$ and $O ( ( \\prod_{u = 0, u \\neq k}^{m - 1} I_u ) \\allowbreak ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) I_k^{\\prime} I_k )$ respectively. \n\n\n\n\\paragraph{Tensor outer product} \nRecall the definition of tensor outer product in Equation~\\eqref{def:tensor-outer-product} in tensor alegebra:\n\\begin{equation}\n\\tensor{T} = \\tensorSup{T}{0} \\otimes \\tensorSup{T}{1}\n\\label{def:tensor-outer-product-2}\n\\end{equation}\nBackpropagation equations:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{0}} & = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{m}_{0} \\circ \\cdots \\circ \\times^{m+n-1}_{n-1} \\right) \\tensorSup{T}{1} \n\\label{eq:backprop-tensor-outer-product-1-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{1}} & = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\tensorSup{T}{0} \n\\label{eq:backprop-tensor-outer-product-2-2}\n\\end{align}\n\\end{subequations}\nThe number of operations required for both equations are $O ( ( \\prod_{u = 0, u \\neq k}^{m - 1} I_u ) \\allowbreak ( \\prod_{v = 0}^{n - 1} J_v ) )$.\n\n\\paragraph{Tensor partial outer product}\nRecall the definition of tensor partial outer product in Equation~\\eqref{def:tensor-partial-outer-product}:\n\\begin{equation}\n\\tensor{T} = \\tensorSup{T}{0} \\otimes^{k}_{l} \\tensorSup{T}{1}\n\\label{def:tensor-partial-outer-product}\n\\end{equation}\nBackpropagation equations:\n\\begin{subequations} \n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{0}} & = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{m}_{0} \\circ \\cdots \\circ \\times^{m+l-1}_{l-1} \\circ \\otimes^{k}_{l} \\circ \\times^{m+l-2}_{l+1} \\circ \\cdots \\circ \\times^{m+n-2}_{n-1} \\right) \\tensorSup{T}{1} \\right) \\label{eq:backprop-tensor-partial-outer-product-1-2} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{T}{1}} & = \\mathsf{swapaxes} \\left( \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{T}} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{k-1}_{k-1} \\circ \\otimes^{k}_{k} \\circ \\times^{k+1}_{k+1} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\tensorSup{T}{0} \\right) \\label{eq:backprop-tensor-partial-outer-product-2-2}\n\\end{align}\n\\end{subequations}\nIt is not difficult to show the time complexity for both equations above are $O( ( \\prod_{u = 0}^{m - 1} I_u ) \\allowbreak ( \\prod_{v = 0, v \\neq l}^{n - 1} J_v ) )$.\n\n\n\\section{Tensor decompositions}\n\\label{app:decompositions}\n\nTensor decompositions are natural extensions of matrix factorizations for multi-dimensional arrays. In this section, we will review three commonly used tensor decompositions, namely {\\em CANDECOMP\/PARAFAC\\xspace (CP\\xspace) decomposition}~\\cite{kolda2009tensor}, {\\em Tucker\\xspace (TK\\xspace) decomposition}~\\cite{kolda2009tensor} and {\\em Tensor-train\\xspace (TT\\xspace) decomposition}~\\cite{oseledets2011tensor}. \n\n\n\\paragraph{CP\\xspace decomposition} \nCP\\xspace decomposition is a direct generalization of singular value decomposition (SVD\\xspace) which decomposes a tensor into additions of rank-1 tensors (outer product of multiple vectors). 
Specifically, given an $m$-order tensor $\\tensor{T} \\in \\R^{I_0 \\times I_1 \\times \\cdots \\times I_{m-1}}$, CP\\xspace decomposition factorizes it into $m$ factor matrices $\\{ \\mymatrix{M}^{(0)} \\}_{l = 0}^{m - 1}$, where $\\mymatrix{M}^{(l)} \\in \\R^{R \\times I_l}, \\forall l \\in [m]$, where $R$ is called the \\textit{canonical rank} of the CP\\xspace decomposition, which is allowed to be larger than the $I_l$'s.\n\\begin{subequations}\n\\begin{gather}\n\\tensorSub{T}{i_0, \\cdots, i_{m-1}} \\triangleq \\sum_{r = 0}^{R - 1} \\mymatrix{M}^{(0)}_{r, i_0} \\cdots \\mymatrix{M}^{(m-1)}_{r, i_{m-1}} \n\\label{def:cp-decomposition-1} \\\\\n\\tensor{T} \\triangleq \\sum_{r = 0}^{R - 1} \\mymatrix{M}^ {(0)}_{r, :} \\otimes \\cdots \\otimes \\mymatrix{M}^{(m-1)}_{r, :} = \\myvector{1} \\times^{0}_{0} \\left( \\mymatrix{M}^{(0)} \\otimes^{0}_{0} \\cdots \\otimes^{0}_{0} \\mymatrix{M}^{(m-1)} \\right) \\label{def:cp-decomposition-2}\n\\end{gather}\n\\end{subequations}\nwhere $\\myvector{1} \\in \\R^{R}$ is an all-ones vector of length $R$. With CP\\xspace decomposition, $\\tensor{T}$ can be represented with only $( \\sum_{l = 0}^{m - 1} I_l ) R$ entries instead of $( \\prod_{l = 0}^{m - 1} I_l )$ as in the original tensor. \n\n\n\\paragraph{Tucker\\xspace decomposition} Tucker\\xspace decomposition provides more general factorization than CP\\xspace decomposition. Given an $m$-order tensor $\\tensor{T} \\in \\R^{I_0 \\times I_1 \\times \\cdots \\times I_{m-1}}$, Tucker\\xspace decomposition factors it into $m$ factor matrices $\\{ \\matrixSup{M}{l} \\}_{l = 0}^{m - 1}$, where $\\matrixSup{M}{l} \\in \\R^{R_l \\times I_l}, \\forall l \\in [m]$ and an additional $m$-order core tensor $\\tensor{C} \\in \\R^{R_0 \\times R_1 \\times \\dots \\times R_{m-1}}$, where the {\\em Tucker\\xspace ranks} $R_l$'s are required to be smaller or equal than the dimensions at their corresponding modes, i.e. $R_{l} \\leq I_l, \\forall l \\in [m]$. \n\\begin{subequations}\n\\begin{gather}\n\\tensorSub{T}{i_0, \\cdots, i_{m-1}} \\triangleq \\sum_{r_0 = 0}^{R_0 - 1} \\dots \\sum_{r_{m-1} = 0}^{R_{m-1} - 1} \\tensorSub{C}{r_0, \\dots, r_{m-1}} ~ \\mymatrix{M}^{(0)}_{r_0, i_0} \\cdots \\mymatrix{M}^{(m-1)}_{r_{m-1}, i_{m-1}} \n\\label{def:tucker-decomposition-1} \\\\\n\\tensor{T} \\triangleq \\tensor{C} \\left( \\times_{0} \\mymatrix{M}^{(0)} \\times_{1} \\mymatrix{M}^{(1)} \\cdots \\times_{m-1} \\mymatrix{M}^{(m-1)} \\right)\n\\label{def:tucker-decomposition-2}\n\\end{gather}\n\\end{subequations}\nNotice that when $R_0 = \\cdots = R_{m-1} = R$ and $\\tensor{C}$ is a super-diagonal tensor with all super-diagonal entries to be ones (a.k.a. identity tensor), Tucker\\xspace decomposition reduces to CP\\xspace decomposition, and therefore CP\\xspace decomposition is a special case of Tucker\\xspace decomposition. With Tucker\\xspace decomposition, a tensor is approximately by $(\\prod_{l=0}^{m-1} R_l + \\sum_{l=0}^{m-1} I_l R_l)$ entries. 
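As a concrete illustration of the two formats just described, the following NumPy sketch reconstructs a $3$-order tensor from CP\\xspace factors and from a Tucker\\xspace core with factor matrices, and compares the number of stored entries with the dense representation (the dimensions, ranks and variable names are our own illustrative choices).
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
I0, I1, I2, R = 10, 12, 14, 5            # dimensions and canonical (CP) rank
R0, R1, R2 = 4, 5, 6                     # Tucker ranks (each <= its dimension)

# CP factors M^(l) of shape (R, I_l); reconstruction as in the CP definition.
M0, M1, M2 = (rng.standard_normal((R, d)) for d in (I0, I1, I2))
T_cp = np.einsum('ri,rj,rk->ijk', M0, M1, M2)

# Tucker core C and factors of shape (R_l, I_l); Tucker reconstruction.
C = rng.standard_normal((R0, R1, R2))
A0, A1, A2 = (rng.standard_normal(s) for s in ((R0, I0), (R1, I1), (R2, I2)))
T_tk = np.einsum('abc,ai,bj,ck->ijk', C, A0, A1, A2)

dense_entries  = I0 * I1 * I2                          # 1680
cp_entries     = R * (I0 + I1 + I2)                    # 180
tucker_entries = R0*R1*R2 + R0*I0 + R1*I1 + R2*I2      # 304
\\end{verbatim}
Recovering the factors from a given tensor (rather than reconstructing the tensor from given factors) is the harder inverse problem, typically addressed with alternating least squares or higher-order SVD-type algorithms.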
\n\n\n\\paragraph{Tensor-train\\xspace decomposition} \nTensor-train\\xspace decomposition factorizes a $m$-order tensor into $m$ interconnected low-order tensors $\\{ \\tensorSup{T}{l} \\}_{l = 0}^{m - 1}$, where $\\tensorSup{T}{l} \\in \\R^{R_l \\times I_l \\times R_{l+1}}, l = 1, \\cdots, m-2$ with $\\tensorSup{T}{0} \\in \\R^{I_0 \\times R_0}$, and $\\tensorSup{T}{m-1} \\in \\R^{R_{m-1}\\times I_{m-1}}$ such that\n\\begin{subequations}\n\\begin{gather}\n\\tensorSub{T}{i_0, \\dots, i_{m-1}} \\triangleq \\sum_{r_0 = 1}^{R_0 - 1} \\dots \\sum_{r_{m-2} = 1}^{R_{m-2} - 1} \\tensorInd{T}{i_0, r_0}{0} ~ \\tensorInd{T}{r_0, i_1, r_1}{1} \\cdots \\tensorInd{T}{r_{m-2}, i_{m-1}}{m-1} \n\\label{def:Tensor-train-decomposition-1} \\\\\n\\tensor{T} \\triangleq \\tensorSup{T}{0} \\times^{-1}_{0} \\tensorSup{T}{1} \\times^{-1}_{0} \\cdots \\times^{-1}_{0} \\tensorSup{T}{m-1} \n\\label{def:Tensor-train-decomposition-2}\n\\end{gather}\n\\end{subequations}\nwhere the $R_l$'s are known as \\textit{Tensor-train\\xspace ranks}, which controls the tradeoff between complexity and accuracy of the representation. With Tensor-train\\xspace decomposition, a tensor is represented by $(R_0 I_0 + \\sum_{l=1}^{m-2} R_l I_l R_{l+1} + R_{m-1} I_{m-1})$ entries. \n\n\n\n\\paragraph{General tensor decompositions}\n\\label{general_decomposition}\nIn this paper, we use the term tensor decomposition in a more general way, i.e. we do not stick to the standard formats defined in the previous paragraphs. Indeed, we consider tensor decomposition as a reverse mapping of tensor operations: consider a general tensor operation $f$ on $m$ input tensors $\\{ \\tensorSup{T}{l} \\}_{l = 0}^{m - 1}$ such that $\\hat{\\tensor{T}} = f(\\tensorSup{T}{0}, \\cdots, \\tensorSup{T}{m-1})$ (i.e. $\\hat{\\tensor{T}}$ is linear in each operant $\\tensorSup{T}{l}$), the corresponding general tensor decomposition aims to recover the input factors $\\{ \\tensorSup{T}{l} \\}_{l = 0}^{m - 1}$ from a given tensor $\\tensor{T}$ such that $\\tensor{T} \\approx \\hat{\\tensor{T}} = f(\\tensorSup{T}{0}, \\cdots, \\tensorSup{T}{m-1})$. In Figure~\\ref{fig:decompositions}, we demonstrate a few examples of general tensor decomposition using tensor diagrams. \n\n\n\\input{.\/\/figure_decompositions}\n\n\\input{.\/\/table_decompositions}\n\\section{Low-rank approximations of convolutional layer}\n\\label{app:convolutional}\n\nIn our paper, compact architecture of neural networks can be derived in two steps: (1) proposing an arbitrary architecture; and (2) decomposing the weights kernel at each layer into lower-order factors. When the proposed architecture is a standard convolutional neural network and the kernel is decomposed by traditional tensor decomposition, the derived architectures are known as low-rank approximations, in which each layer in the derived architecture is still a traditional operation (i.e. not general tensor operation). \n\n\\paragraph{Standard convolutional layer} \nWe review the standard convolutional neural network (CNN) for reference. In CNN, a convolutional layer is parameterized by a 4-order kernel $\\tensor{K} \\in \\R^{H \\times W \\times S \\times T}$, where $H$ and $W$ are height and width of the filters (which are typically equal), $S$ and $T$ are the number of input and output channels respectively. 
A convolutional layer maps a 3-order tensor $\\tensor{U} \\in \\R^{X \\times Y \\times S}$ to another 3-order tensor $\\tensor{V} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times T}$, where $X$ and $Y$ are the height and width for the input feature map, while $X^{\\prime}$ and $Y^\\prime$ are the ones for the output feature map, with the following equation:\n\\begin{equation}\n\\tensor{V} = \\tensor{U} \\left( \\ast_{0}^{0} \\circ \\ast_{1}^{1} \\circ \\times_{2}^{2} \\right) \\tensor{K} \n\\label{def:convolutional-2}\n\\end{equation}\\noindent\nNotice that the number of parameters in a standard convolutional layer is $HWST$ and the number of operations needed to evaluate the output $\\tensor{V}$ is $O(HWSTXY)$.\n\n\n\n\n \n\\paragraph{SVD\\xspace-convolutional layer} \nIn SVD\\xspace-convolutional layer~\\cite{jaderberg2014speeding}, the weights kernel $\\tensor{K}$ is factorized by SVD.\n\\begin{equation}\n\\tensor{K} = \\mathsf{swapaxes} \\left( \\tensorSup{K}{0} \\times^{2}_{1} \\tensorSup{K}{1} \\right) \\label{def:convolutional-svd-2}\n\\end{equation}\nwhere $\\tensorSup{K}{0} \\in \\R^{H \\times S \\times R}$ and $\\tensorSup{K}{1} \\in \\R^{W \\times R \\times T}$ are the two factor tensors. With two factors, the forward pass of SVD\\xspace-convolutional layer is computed in two steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\tensor{U} \\left( \\ast_{0}^{0} \\circ \\times^{2}_{1} \\right) \\tensorSup{K}{0} \\\\\n\\tensor{V} & = \\tensorSup{U}{0} \\left( \\ast_{1}^{1} \\circ \\times^{2}_{1} \\right) \\tensorSup{K}{1}\n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{0}$ is an intermediate tensor after the first step.\nThe backpropagation equations for these two steps are:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\left( (\\ast^{1}_{0})^{\\top} \\circ \\times^{2}_{2} \\right) ~ \\tensorSup{K}{1}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{1}} = \\tensorSup{U}{0} \\left( \\times^{0}_{0} \\circ (\\overline{\\overline{\\ast}^{1}_{1})^{\\top}} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}}\\left( (\\ast^{0}_{0})^{\\top} \\circ \\times^{2}_{2} \\right) ~ \\tensorSup{K}{0}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{0}} = \\tensor{U} \\left( \\overline{(\\overline{\\ast}^{0}_{0})^{\\top}} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}}\n\\end{align}\n\\end{subequations}\n\n\\paragraph{CP\\xspace-convolutional layer} \nIn CP\\xspace-convolutional layer~\\cite{lebedev2014speeding, denton2014exploiting}, the weights kernel $\\tensor{K}$ is factorized by CP decomposition.\n\\begin{equation}\n\\tensor{K} = \\myvector{1} \\times^{0}_{2} \\left( \\tensorSup{K}{1} \\otimes^{2}_{1} \\tensorSup{K}{0} \\otimes^{2}_{0} \\tensorSup{K}{2} \\right) \\label{def:convolutional-cp-2}\n\\end{equation}\nwhere $\\tensorSup{K}{0} \\in \\R^{S \\times R}$, $\\tensorSup{K}{1} \\in \\R^{H \\times W \\times R}$ and $\\tensorSup{K}{2} \\in \\R^{R \\times T}$ are three factor tensors.\nWith three factors, the forward pass of CP\\xspace-convolutional consists of three steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\tensor{U} \\times_2 \\tensorSup{K}{0} \\\\\n\\tensorSup{U}{1} & = \\tensorSup{U}{0} \\left( \\ast_{0}^{0} \\circ \\ast_{1}^{1} 
\\circ \\otimes_{2}^{2} \\right) \\tensorSup{K}{1} \\\\\n\\tensor{V} & = \\tensorSup{U}{1} \\times_2 \\tensorSup{K}{2}\n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{0} \\in \\R^{X \\times Y \\times R}$ and $\\tensorSup{U}{1} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times R}$ are two intermediate tensors, and the corresponding backpropagation equations can then be computed as:\n\\begin{subequations}\n\\begin{gather}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\times_2 (\\tensorSup{K}{2})^{\\top}, ~ \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{2}} = \\tensorSup{U}{1} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\left( (\\ast^{0}_{0})^{\\top} \\circ ({\\ast}^{1}_{1})^{\\top} \\circ \\otimes^{2}_{2} \\right) \\tensorSup{K}{1}, \n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{1}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\left( \\overline{(\\overline{\\ast}^{0}_{0})^{\\top}} \\circ \\overline{(\\overline{\\ast}^{1}_{1})^{\\top}} \\circ \\otimes^{2}_{2} \\right) \\tensorSup{U}{1} \\nonumber \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} \\times_2 (\\tensorSup{K}{0})^{\\top}, ~ \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{0}} = \\tensor{U} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}}\n\\end{gather}\n\\end{subequations}\n\n\\paragraph{TK\\xspace-convolutional layer} \nIn TK\\xspace-convolutional layer~\\cite{kim2015compression}, the kernel $\\tensor{K}$ is factorized by a partial Tucker decomposition,\n\\begin{equation}\n\\tensor{K} = \\tensorSup{K}{1} \\left( \\times_{2} (\\tensorSup{K}{0})^\\top \\times_{3} \\tensorSup{K}{2} \\right) \n\\label{def:convolutional-tk-2}\n\\end{equation}\nwhere $\\tensorSup{K}{0} \\in \\R^{S \\times R_s}$, $\\tensorSup{K}{1} \\in \\R^{H \\times W \\times R_s \\times R_t}$ and $\\tensorSup{K}{2} \\in \\R^{R_t \\times T}$ are three factor tensors.\nAgain, the forward pass of Tucker\\xspace-convolutional layer has three steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\tensor{U} \\times_{2} \\tensorSup{K}{0} \\\\\n\\tensorSup{U}{1} & = \\tensorSup{U}{0} \\left( \\ast_{0}^{0} \\circ \\ast_{1}^{1} \\circ \\times_{2}^{2} \\right) \\tensorSup{K}{1} \\\\\n\\tensor{V} & = \\tensorSup{U}{1} \\times_{2} \\tensorSup{K}{2}\n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{0} \\in \\R^{X \\times Y \\times R_s}$ and $\\tensorSup{U}{1} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times R_t}$ are two intermediate tensors. 
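In other words, the three steps above amount to a pointwise projection of the $S$ input channels down to $R_s$ channels, a small spatial convolution in the reduced channel space, and a pointwise projection up to the $T$ output channels. Below is a minimal NumPy sketch of this forward pass, assuming a \"valid\" correlation as in standard convolutional layers; the shapes and helper names are our own illustrative choices and do not come from the original implementations.
\\begin{verbatim}
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(2)
X, Y, S, T, H, W, Rs, Rt = 16, 16, 8, 12, 3, 3, 4, 6

U  = rng.standard_normal((X, Y, S))
K0 = rng.standard_normal((S, Rs))          # input channel projection
K1 = rng.standard_normal((H, W, Rs, Rt))   # small spatial kernel
K2 = rng.standard_normal((Rt, T))          # output channel projection

# Step 1: mode-2 multiplication, S -> Rs channels.
U0 = np.einsum('xys,sr->xyr', U, K0)

# Step 2: "valid" spatial correlation in the reduced channel space.
# sliding_window_view returns patches of shape (X-H+1, Y-W+1, Rs, H, W).
patches = sliding_window_view(U0, (H, W), axis=(0, 1))
U1 = np.einsum('xyrhw,hwrt->xyt', patches, K1)

# Step 3: mode-2 multiplication, Rt -> T channels.
V = np.einsum('xyt,tz->xyz', U1, K2)       # shape (X-H+1, Y-W+1, T)

# Parameters: S*Rs + H*W*Rs*Rt + Rt*T = 32 + 216 + 72 = 320,
# versus H*W*S*T = 864 for the uncompressed kernel.
\\end{verbatim}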
Their backpropagation equations are summarized as follows:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\times_2 (\\tensorSup{K}{2})^{\\top}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{2}} = \\tensorSup{U}{1} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\left( (\\ast^{0}_{0})^{\\top} \\circ (\\ast^{1}_{1})^{\\top} \\circ \\times^{2}_{3} \\right) \\tensorSup{K}{1}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{1}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\left( \\overline{(\\overline{\\ast}^{0}_{0})^{\\top}} \\circ \\overline{(\\overline{\\ast}^{1}_{1})^{\\top}} \\right) \\tensorSup{U}{1} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} \\times_2 (\\tensorSup{K}{0})^{\\top}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{0}} = \\tensor{U} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}}\n\\end{align}\n\\end{subequations}\n\n\\paragraph{TT\\xspace-convolutional layer}\nTT\\xspace-convolutional layer is derived by factorizing $\\tensor{K}$ using Tensor-train\\xspace decomposition.\n\\begin{equation}\n\\tensor{K} = \\mathsf{swapaxes} \\left( \\tensorSup{K}{0} \\times^{-1}_{0} \\tensorSup{K}{1} \\times^{-1}_{0} \\tensorSup{K}{2} \\times^{-1}_0 \\tensorSup{K}{3} \\right)\n\\label{def:convolutional-tt-2}\n\\end{equation}\nwhere $\\tensorSup{K}{0} \\in \\R^{S \\times R_s}$, $\\tensorSup{K}{1} \\in \\R^{R_s \\times H \\times R}$, $\\tensorSup{K}{2} \\in \\R^{R \\times W \\times R_t}$ and $\\tensorSup{K}{3} \\in \\R^{R_t \\times T}$ are four factor tensors.\nThe forward pass now contains four steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\tensor{U} \\times_2 \\tensorSup{K}{0} \\\\\n\\tensorSup{U}{1} & = \\tensorSup{U}{0} ~ (\\ast^{0}_{1} \\circ \\times^{2}_{0}) ~ \\tensorSup{K}{1} \\\\\n\\tensorSup{U}{2} & = \\tensorSup{U}{1} ~ (\\ast^{1}_{1} \\circ \\times^{2}_{0}) ~ \\tensorSup{K}{2} \\\\\n\\tensor{V} & = \\tensorSup{U}{2} \\times_2 \\tensorSup{K}{3}\n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{0} \\in \\R^{X \\times Y \\times R_s}$, $\\tensorSup{U}{1} \\in \\R^{X^{\\prime} \\times Y \\times R}$ and $\\tensorSup{U}{2} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times R_t}$ are three intermediate results.\nCorresponding backpropagation equations for these computations are:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{2}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\times_2 (\\tensorSup{K}{3})^{\\top}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{3}} = \\tensorSup{U}{2} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{2}} \\left( (\\ast^{1}_{0})^{\\top} \\circ \\times^{2}_{2} \\right) \\tensorSup{K}{2}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{2}} = \\tensorSup{U}{1} \\left( \\times^{0}_{0} \\circ \\overline{(\\overline{\\ast}^{1}_{1})^{\\top}} \\right)\\frac{\\partial 
\\mathcal{L}}{\\partial \\tensorSup{U}{2}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}}\\left( (\\ast^{0}_{0})^{\\top} \\circ \\times^{2}_{2} \\right) \\tensorSup{K}{1}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{1}} = \\tensorSup{U}{0} \\left(\\overline{(\\overline{\\ast}^{0}_{0})^{\\top}} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} \\times_2 (\\tensorSup{K}{0})^{\\top}, ~ & \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{0}} = \\tensor{U} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}}\n\\end{align}\n\\end{subequations}\n\n\\input{.\/\/table_convolutional}\n\\section{TNN based compression on dense layer}\n\\label{app:dense-tensorized}\n\n\n\nWhen the same technique in Appendix~\\ref{app:convolutional} is applied to compress the dense layer (a.k.a. fully-connected layer), we encounter one difficulty: the kernel in a dense layer is a matrix and cannot be further factorized by any tensor decomposition. To address this difficulty, we define an equivalent {\\em tensorized dense layer} that maps an $m$-order input tensor $\\tensor{U} \\in \\R^{S_0 \\times \\cdots \\times S_{m-1}}$ to another $m$-order tensor $\\tensor{V} \\in \\R^{T_0 \\times \\cdots \\times T_{m-1}}$ with a $2m$-order kernel $\\tensor{K} \\in \\R^{S_0 \\times \\cdots \\times S_{m-1} \\times T_0 \\times \\cdots \\times T_{m-1}}$. \n\\begin{equation}\n\\tensor{V} = \\tensor{U} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\tensor{K} \n\\label{def:dense-tensorized-2}\n\\end{equation}\nThe layer is equivalent to a standard dense layer if the input $\\tensor{U}$, kernel $\\tensor{K}$ and output $\\tensor{V}$ are reshaped versions of their counterparts. Now we are ready to derive compact architectures for compression using general tensor decompositions. \n\n\\paragraph{\\reshapePrefix\\CP-dense layer}\nThe layer is derived if the kernel $\\tensor{K}$ is factorized by a modified CP\\xspace decomposition.\n\\begin{equation}\n\\tensor{K} = \\mathsf{swapaxes} \\left( \\myvector{1} \\times^{0}_{0} ( \\tensorSup{K}{0} \\otimes^{0}_{0} \\cdots \\otimes^{0}_{0} \\tensorSup{K}{m-1} ) \\right)\n\\label{def:dense-cp-2}\n\\end{equation}\nwhere $\\tensorSup{K}{l} \\in \\R^{R \\times S_l \\times T_l}, \\forall l \\in [m]$ are $m$ factors. This results in a multi-step procedure for the forward pass: \n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\myvector{1} \\otimes \\tensor{U} \\\\\n\\tensorSup{U}{l+1} & = \\tensorSup{U}{l} \\left( \\otimes_{0}^{0} \\circ \\times^{1}_{1} \\right) \\tensorSup{K}{l} \\\\\n\\tensor{V} & = \\myvector{1} \\times^{0}_{0} \\tensorSup{U}{m}\n\\end{align}\n\\end{subequations}\nwhere the $m$ intermediate results $\\tensorSup{U}{l} \\in \\R^{R \\times S_{l} \\times \\cdots \\times S_{m-1} \\times T_0 \\times \\cdots \\times T_{l-1}}, \\forall l \\in [m]$ are all $(m+1)$-order tensors.
Correspondingly, their backpropagation equations are computed as\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l}} & = \\tensorSup{K}{l} \\left( \\otimes^{0}_{0} \\circ \\times^{2}_{-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{l}} & = \\tensorSup{U}{l} \\left( \\otimes^{0}_{0} \\circ \\times^{2}_{1} \\circ \\cdots \\circ \\times^{m}_{m-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\\\ \n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{m}} & = \\myvector{1} \\otimes \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}}, ~ \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} = \\myvector{1} \\times^{0}_{0} \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} \\nonumber\n\\end{align}\n\\end{subequations}\n\n\\paragraph{\\reshapePrefix\\TK-dense layer} The layer is obtained when a modified Tucker\\xspace decomposition is used.\n\\begin{equation}\n\\tensor{K} = \\tensor{C} \\left( \\times_0 (\\matrixSup{P}{0})^{\\top} \\cdots \\times_{m-1} (\\matrixSup{P}{m-1})^{\\top} \\times_{m} \\matrixSup{Q}{0} \\cdots \\times_{2m - 1} \\matrixSup{Q}{m-1} \\right) \n\\label{def:dense-tensorized-tk-2}\n\\end{equation}\nwhere $\\matrixSup{P}{l} \\in \\R^{S_l \\times R^s_l}, \\forall l \\in [m]$ are named as input factors, $\\tensor{C} \\in \\R^{R^s_0 \\times \\cdots \\times R^s_{m-1} \\times R^t_0 \\times \\cdots \\times R^t_{m-1}}$ as core factor, and lastly $\\matrixSup{Q}{l} \\in \\R^{R^t_l \\times T_l}, \\forall l \\in [m]$ as output factors.\nThe forward pass of \\reshapePrefix\\TK-dense layer can then be evaluated in three steps\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\tensor{U} \\left( \\times_0 \\matrixSup{P}{0} \\cdots \\times_{m-1} \\matrixSup{P}{m-1} \\right) \\\\\n\\tensorSup{U}{1} & = \\tensorSup{U}{0} \\left( \\times_{0}^{0} \\circ \\cdots \\circ \\times_{m-1}^{m-1} \\right) \\tensor{C} \\\\\n\\tensor{V} & = \\tensorSup{U}{1} \\left( \\times_0 \\matrixSup{Q}{0} \\cdots \\times_{m-1} \\matrixSup{Q}{m-1} \\right) \n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{0}$ and $\\tensorSup{U}{1}$ are two intermediate results. 
\nThe backpropagation equations can then be derived accordingly:\n\\begin{subequations}\n\\begin{gather}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\left( \\times_{0} (\\matrixSup{Q}{0})^{\\top} \\cdots \\times_{m-1} (\\matrixSup{Q}{m-1})^{\\top} \\right) \\\\\n\\left( \\frac{\\partial \\mathcal{L}}{\\partial \\matrixSup{Q}{l}} \\right)^{\\top} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{l-1}_{l-1} \\circ \\times^{l+1}_{l+1} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\nonumber \\\\\n\\left( \\tensorSup{U}{1} \\left( \\times_{0} \\matrixSup{Q}{0} \\cdots \\times_{l-1} \\matrixSup{Q}{l-1} \\times_{l+1} \\matrixSup{Q}{l+1} \\cdots \\times_{m-1} \\matrixSup{Q}{m-1} \\right) \\right) \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} \\left( \\times_{0} (\\matrixSup{P}{0})^{\\top} \\cdots \\times_{m-1} (\\matrixSup{P}{m-1})^{\\top} \\right) \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} = \\tensor{C} \\left( \\times^{m}_{0} \\circ \\cdots \\circ \\times^{2m - 1}_{m - 1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{C}} = \\tensorSup{U}{0} \\otimes \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\\\\n\\left( \\frac{\\partial \\mathcal{L}}{\\partial \\matrixSup{P}{l}} \\right)^{\\top} = \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} \\left( \\times^{0}_{0} \\circ \\cdots \\circ \\times^{l-1}_{l-1} \\circ \\times^{l+1}_{l+1} \\circ \\cdots \\circ \\times^{m-1}_{m-1} \\right) \\nonumber \\\\\n\\left( \\tensor{U} \\left( \\times_{0} \\matrixSup{P}{0} \\cdots \\times_{l-1} \\matrixSup{P}{l-1} \\times_{l+1} \\matrixSup{P}{l+1} \\cdots \\times_{m-1} \\matrixSup{P}{m-1} \\right) \\right)\n\\end{gather}\n\\end{subequations}\n\n\\paragraph{\\reshapePrefix\\TT-dense layer}\nThe layer~\\cite{novikov2015tensorizing} is derived by factorizing $\\tensor{K}$ using a modified Tensor-train\\xspace decomposition. \n\\begin{equation}\n\\tensor{K} = \\mathsf{swapaxes} \\left( \\tensorSup{K}{0} \\times^{-1}_{0} \\tensorSup{K}{1} \\times^{-1}_{0} \\cdots \\times^{-1}_{0} \\tensorSup{K}{m-1} \\right) \\label{def:dense-tensorized-tt-2}\n\\end{equation}\nwhere the factor tensors are $\\tensorSup{K}{l} \\in \\R^{R_{l-1} \\times S_l \\times T_l \\times R_l},\\forall l = 1, \\cdots, m-2$, with two corner cases $\\tensorSup{K}{0} \\in \\R^{S_0 \\times T_0 \\times R_0}$ and $\\tensorSup{K}{m-1} \\in \\R^{R_{m-2} \\times S_{m-1} \\times T_{m-1}}$.\n\\begin{equation}\n\\tensorSup{U}{l+1} = \\tensorSup{U}{l} \\left( \\times^{0}_{1} \\circ \\times^{-1}_{0} \\right) \\tensorSup{K}{l}\n\\end{equation}\\noindent\nwhere $\\tensorSup{U}{0} = \\tensor{U}$, $\\tensorSup{U}{m} = \\tensor{V}$, and the other $m$ tensors $\\tensorSup{U}{l}$'s are intermediate results. 
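To make this recursion concrete, the following NumPy sketch evaluates the \\reshapePrefix\\TT-dense forward pass for $m = 3$ (the dimensions, ranks and the use of \\texttt{numpy.einsum} are our own illustrative choices): each step contracts the current input mode, and after the first step also the running rank mode, with the corresponding factor.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
S = (4, 4, 4)            # input dimensions  S_0, S_1, S_2 (a 64-d input, reshaped)
T = (3, 3, 3)            # output dimensions T_0, T_1, T_2 (a 27-d output)
R0, R1 = 5, 5            # tensor-train ranks

U  = rng.standard_normal(S)
K0 = rng.standard_normal((S[0], T[0], R0))
K1 = rng.standard_normal((R0, S[1], T[1], R1))
K2 = rng.standard_normal((R1, S[2], T[2]))

# Sequential contraction of the train, one input mode at a time.
U1 = np.einsum('abc,axr->bcxr', U, K0)     # contract S_0         -> (S1, S2, T0, R0)
U2 = np.einsum('bcxr,rbys->cxys', U1, K1)  # contract S_1 and R_0 -> (S2, T0, T1, R1)
V  = np.einsum('cxys,scz->xyz', U2, K2)    # contract S_2 and R_1 -> (T0, T1, T2)

# Stored parameters: 4*3*5 + 5*4*3*5 + 5*4*3 = 60 + 300 + 60 = 420,
# versus prod(S) * prod(T) = 64 * 27 = 1728 for the dense kernel.
\\end{verbatim}
The comparison in the last comment is where the compression of the \\reshapePrefix\\TT-dense layer comes from.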
Following Appendix~\\ref{app:derivatives}, its backpropagation equations can be derived as:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l}} & = \\tensorSup{K}{l} \\left( \\times^{1}_{-2} \\circ \\times^{2}_{-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{l}} & = \\mathsf{swapaxes} \\left( \\tensorSup{U}{l} \\left( \\times^{1}_{0} \\circ \\cdots \\circ \\times^{m-1}_{m-2} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\right)\n\\end{align}\n\\end{subequations}\n\n\n\n\\input{.\/\/table_tensorized_dense}\n\n\\section{TNN compression on convolutional layer}\n\\label{app:convolutional-tensorized}\n\n\n\n\nInspired by the reshaping trick in Appendix~\\ref{app:dense-tensorized}, we develop three novel architectures that compress convolution layer by first proposing its higher-order counterpart: a {\\em tensorized convolutional layer} maps $(m+2)$-order tensor $\\tensor{U} \\in \\R^{X \\times Y \\times S_0 \\times \\cdots \\times S_{m-1}}$ to another $(m+2)$-order tensor $\\tensor{V} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times T_0 \\times \\cdots \\times T_{m-1}}$ with a $(2m + 2)$-order kernel $\\tensor{K} \\in \\R^{H \\times W \\times S_0 \\times \\cdots \\times S_{m-1} \\times T_0 \\times \\cdots \\times T_{m-1}}$.\n\\begin{equation}\n\\tensor{V} = \\tensor{U} \\left( \\ast_{0}^{0} \\circ \\ast_{1}^{1} \\circ \\times_{2}^{2} \\circ \\cdots \\circ \\times^{m+1}_{m+1} \\right) \\tensor{K} \n\\label{def:convolutional-tensorized-2}\n\\end{equation}\nThe tensorized convolutional layer is equivalent to standard convolutional layer if input $\\tensor{U}$, kernel $\\tensor{K}$ and output $\\tensor{V}$ are reshaped from their counterparts.. \n\n\n\\paragraph{\\reshapePrefix\\CP-convolutional layer}\nThe layer is derived if the kernel $\\tensor{K}$ is factorized by a modified CP decomposition as in Figure~\\ref{fig:decomposition-rcp}. \n\\begin{equation}\n\\tensor{K} = \\mathsf{swapaxes} \\left( \\myvector{1} \\times^{0}_{0} \\left( \\tensorSup{K}{0} \\otimes^{0}_{0} \\cdots \\otimes^{0}_{0} \\tensorSup{K}{m} \\right) \\right) \\label{def:convolutional-tensorized-cp-2}\n\\end{equation}\nwhere $\\tensorSup{K}{l} \\in \\R^{R \\times S_l \\times T_l}, \\forall l \\in [m]$ and $\\tensorSup{K}{m} \\in \\R^{R \\times H \\times W}$ are $(m + 1)$ factors.\nAccordingly, the multi-steps procedure to evaluate the output $\\tensor{V}$ now has $(m + 2)$-steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\myvector{1} \\otimes \\tensor{U} \\\\\n\\tensorSup{U}{l+1} & = \\tensorSup{U}{l} \\left( \\otimes_{0}^{0} \\circ \\times^{3}_{1} \\right) \\tensorSup{K}{l} \\\\\n\\tensor{V} & = \\tensorSup{U}{m} \\left( \\times_{0}^{0} \\circ \\ast^{1}_{1} \\circ \\ast^{2}_{2} \\right) \\tensorSup{K}{m}\n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{l} \\in \\R^{R \\times S_{l} \\times \\cdots \\times S_{m+1} \\times T_{0} \\times \\cdots \\times T_{l-1}}, \\forall l \\in [m]$ are $m$ intermediate tensors. 
\nThe backpropagation equations of these steps above are\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{m}} & = \\tensorSup{K}{m} \\left( (\\ast^{1}_{0})^{\\top} \\circ (\\ast^{2}_{1})^{\\top} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\\\ \n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{m}} & = \\tensorSup{U}{m} \\left( (\\ast^{1}_{0})^{\\top} \\circ (\\ast^{2}_{1})^{\\top} \\times^{3}_{2} \\circ \\cdots \\circ \\times^{m+2}_{m+1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l}} & = \\mathsf{swapaxes} \\left( \\tensorSup{K}{l} \\left( \\otimes^{0}_{0} \\circ \\times^{2}_{-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\right) \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{l}} & = \\tensorSup{U}{l} \\left( \\otimes^{0}_{0} \\circ \\otimes^{1}_{1} \\circ \\otimes^{2}_{2} \\circ \\times^{4}_{3} \\circ \\cdots \\circ \\times^{m+3}_{m+2} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{U}} & = \\myvector{1} \\times^{0}_{0} \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}}\n\\end{align}\n\\end{subequations}\n\n\\paragraph{\\reshapePrefix\\TK-convolutional layer} \nThe layer is derived if a modified TK decomposition as in Figure~\\ref{fig:decomposition-rtk} on kernel $\\tensor{K}$. \n\\begin{equation}\n\\tensor{K} = \\tensor{C} \\left( \\times_2 (\\matrixSup{P}{0})^\\top \\cdots \\times_{m+1} (\\matrixSup{P}{m-1})^\\top \\times_{m+2} \\matrixSup{Q}{0} \\cdots \\times_{2m + 1} \\matrixSup{Q}{m-1} \\right) \n\\label{def:convolutional-tensorized-tk-2}\n\\end{equation}\nwhere $\\matrixSup{P}{l} \\in \\R^{S_l \\times R^s_l}, \\forall l \\in [m]$, $\\tensor{C} \\in \\R^{H \\times W \\times R^s_0 \\times \\cdots \\times R^s_{m-1} \\times R^t_{0} \\times \\cdots \\times R^t_{m-1}}$ and $\\matrixSup{Q}{l} \\in \\R^{R^t_l \\times T_l}, \\forall l \\in [m]$ are named as input factors, core factor and output factors respectively. The forward pass of \\reshapePrefix\\TK-convolutional layer takes three steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{0} & = \\tensor{U} \\left( \\times_2 \\matrixSup{P}{0} \\cdots \\times_{m+1} \\matrixSup{P}{m-1} \\right) \\\\\n\\tensorSup{U}{1} & = \\tensorSup{U}{0} \\left(\\ast^{0}_{0} \\circ \\ast^{1}_{1} \\times_{2}^{2} \\circ \\cdots \\circ \\times_{m-1}^{m-1} \\right) \\tensor{C} \\\\\n\\tensor{V} & = \\tensorSup{U}{1} \\left(\\times_2 \\matrixSup{Q}{0} \\cdots \\times_{m+1} \\matrixSup{Q}{m-1} \\right) \n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{0} \\in \\R^{X \\times Y \\times R^s_0 \\times \\cdots \\times R^s_{m-1}}$ and $\\tensorSup{U}{1} \\in \\R^{X^{\\prime} \\times Y^{\\prime} \\times R^t_0 \\times \\cdots \\times R^t_{m-1}}$ are two intermediate tensors. 
The backpropagation equations can be derived similarly:\n\\begin{subequations}\n\\begin{gather}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{0}} = \\tensor{C} \\left( (\\ast^{0}_{0})^{\\top} \\circ (\\ast^{1}_{1})^{\\top} \\circ \\times^{m+2}_{2} \\circ \\cdots \\circ \\times^{2m + 1}_{m + 1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensor{C}} = \\tensorSup{U}{0} \\left( (\\ast^{0}_{0})^{\\top} \\circ (\\ast^{1}_{1})^{\\top} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{1}}\n\\end{gather}\n\\end{subequations}\n\n\\paragraph{\\reshapePrefix\\TT-convolutional layer} \nThe layer is derived by factoring $\\tensor{K}$ according to a modified TT\\xspace decomposition as in Figure~\\ref{fig:decomposition-rtt}.\n\\begin{equation}\n\\tensor{K} = \\mathsf{swapaxes} \\left( \\tensorSup{K}{0} \\times^{-1}_{0} \\tensorSup{K}{1} \\times^{-1}_{0} \\cdots \\times^{-1}_{0} \\tensorSup{K}{m} \\right) \n\\label{def:convolutional-tensorized-tt-2}\n\\end{equation}\nwhere $\\tensorSup{K}{0} \\in \\R^{S_0 \\times T_0 \\times R_0}$, $\\tensorSup{K}{l} \\in \\R^{R_{l-1} \\times S_l \\times T_l \\times R_l}$ and $\\tensorSup{K}{m} \\in \\R^{R_{m-1} \\times H \\times W}$ are $(m + 1)$ factor tensors. The multi-stages forward pass to evaluate $\\tensor{V}$ now has $(m+1)$ steps:\n\\begin{subequations}\n\\begin{align}\n\\tensorSup{U}{l} & = \\tensorSup{U}{l} \\left( \\times^{2}_{1} \\circ \\times^{-1}_{0} \\right) \\tensorSup{K}{l} \\\\\n\\tensor{V} & = \\tensorSup{U}{m} \\left( \\ast^{0}_{1} \\circ \\ast^{1}_{2} \\circ \\times^{-1}_{0} \\right) \\tensorSup{K}{m}\n\\end{align}\n\\end{subequations}\nwhere $\\tensorSup{U}{l} \\in \\R^{X \\times Y \\times S_{l} \\times \\cdots \\times S_{m-1} \\times T_{0} \\times \\cdots \\times T_{l-1} \\times R_{l-1}}, \\forall l \\in [m]$ are the intermediate results. We summarize the corresponding backpropagation equations as follows:\n\\begin{subequations}\n\\begin{align}\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{m}} & = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\left( (\\ast^{0}_{1})^{\\top} \\circ (\\ast^{2}_{1})^{\\top} \\right) \\tensorSup{K}{m} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{m}} & = \\frac{\\partial \\mathcal{L}}{\\partial \\tensor{V}} \\left( (\\ast^{0}_{0})^{\\top} \\circ (\\ast^{1}_{1})^{\\top} \\circ \\times^{2}_{2} \\cdots \\times^{m+1}_{m+1} \\right) \\tensorSup{U}{m} \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l}} & = \\mathsf{swapaxes} \\left( \\tensorSup{K}{l} \\left( \\times^{2}_{-2} \\circ \\times^{3}_{-1} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\right) \\\\\n\\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{K}{l}} & = \\mathsf{swapaxes} \\left( \\tensorSup{U}{l} \\left( \\times^{0}_{0} \\circ \\times^{1}_{1} \\circ \\times^{3}_{2} \\cdots \\times^{m+1}_{m} \\right) \\frac{\\partial \\mathcal{L}}{\\partial \\tensorSup{U}{l+1}} \\right) \n\\end{align}\n\\end{subequations}\n\n\\input{.\/\/table_tensorized_convolutional}\n\\section{Supplementary experiments}\n\\label{app:supp_experiments}\n\n\\paragraph{Convergence Rate}\nCompared to end-to-end\\xspace, an ancillary benefit of sequential\\xspace tuning is\n{\\em much} faster and leads to more stable convergence.\nFigure~\\ref{convergence} plots compression error over number of gradient\nupdates for various methods. 
(This experiment is for NN-C\\xspace with 10\\% compression rate.)\nThere are three salient points: first, sequential\\xspace tuning has very\nhigh error in the beginning while the ''early'' blocks of the\nnetwork are being tuned (and the rest of the network is left\nunchanged to {tensor decomposition values}).\nHowever, as the final block is tuned\n(around $2\\times 10^{11}$ gradient updates) in the figure, the errors drop\nto nearly minimum immediately. In comparison, end-to-end\\xspace tuning\nrequires 50--100\\% more gradient updates to achieve stable\nperformance. \nFinally, the result also shows that for each block,\nsequential\\xspace tuning achieves convergence very quickly (and nearly\nmonotonically), which results in the stair-step pattern since extra\ntuning of a block does not improve (or appreciably reduce)\nperformance. \n\n\\paragraph{Performance on Fully-Connected Layers} \nAn extra advantage of TNN based compression\\xspace is that it can apply flexibly to\nfully-connected as well as convolutional layers of a neural network.\nTable~\\ref{table:exp-dense} shows the results of applying TNN based compression\\xspace to\nvarious tensor decompositions on a variant of LeNet-5\nnetwork~\\cite{lecun1998gradient}. The convolutional layers of the\nLeNet-5 network were {\\em not} compressed, trained or updated in\nthese experiments. The uncompressed network achieves 99.31\\% accuracy.\nTable~\\ref{table:exp-dense} shows \\textbf{the fully-connected layers can be compressed\nto 0.2\\% losing only about 2\\% accuracy}. In fact, compressing the\ndense layers to 1\\% of their original size reduce accuracy by less\nthan 1\\%, demonstrating the extreme efficacy of TNN based compression\\xspace when\napplied to fully-connected neural network layers.\n\n\\begin{table}[!htbp]\n\\centering\n\\begin{tabular}{ c | c | c | c }\n& \\multicolumn{3}{c}{Compression rate} \\\\ \nArchitect. & 0.2\\% & 0.5\\% & 1\\% \\\\ \n\\hline\n\\tnnPrefix\\rCP & 97.21 & 97.92 & 98.65 \\\\\n\\tnnPrefix\\rTK & 97.71 & 98.56 & 98.52 \\\\\n\\tnnPrefix\\rTT & 97.69 & 98.43 & 98.63 \\\\\n\\end{tabular}\n\\caption{TNN compression of fully-connected layer in LeNet-5. \nThe uncompressed network achieves 99.31\\% accuracy.}\n\\label{table:exp-dense}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nUnderstanding how the brain of mammals, including humans, represents, processes and stores information is one of the main challenges of contemporary science. In addition to the obvious direct interest in such an ambitious goal, any progress made towards elucidating the brain working principles would also help developing a new generation of artificial intelligence devises. Reversely, advances in computer science help shedding light on the analogies and differences between our present operational knowledge on \"artificial intelligence\u00b7 and \"natural intelligence\". This two-sided dialogue is hoped to guide exciting breakthroughs in the next coming years in both fields. \n\nA popular idea, coming from the world of artificial neural networks \\citep{Langton,Mitchell} and then exported to the realm of biological systems (see \\citep{RMP} and refs. therein), is that information-processing complex systems, composed of many individual interacting units, are best suited to encode, respond, process, and store information if they operate in a dynamical regime nearby the critical point of a phase transition, i.e. 
at the edge between \"order\" and \"disorder\"\\citep{Mora-Bialek,Plenz-Functional,RMP,Chialvo2010,KC,Shriki,Breakspear-review,Shew2015b,Serena-LG,Martinello}. In a nutshell, one can say that \"ordered phases\" encode information in a robust or stable way, but they are not flexible enough as to accommodate for or respond to input changes; on the other hand, \"disordered phases\" are dominated by noise, thus hindering information storage and making retrieval exceedingly difficult. Therefore, there needs to be some kind of trade-off between order and disorder that can be formulated in a number of different ways, e.g., between \"stability and responsiveness\" or between \"robustness and flexibility\". \nThe criticality hypothesis poses that such a trade-off is best resolved near criticality or \"at the edge of chaos\", where combined advantages from the two alternative phases can be obtained \\citep{RMP}. Furthermore, at critical points there is a concomitant scale invariance ---with its characteristic power-law distributions and scaling--- entailing the existence of broadly different time and length scales, which seem much convenient for the representation of multiscale complex inputs. Let us remark, that the terms \"criticality\" and \"edge of chaos\" are sometimes used indistinctly, though the last one applies to deterministic systems, in which a transition occurs between ordered and chaotic states, but as recently emphasized, they can describe two sides of the same coin (we refer to \\citep{Moritz} for an illuminating recent discussion). \n\nEmpirical evidence that actual neural networks might operate close to criticality has kept accumulating in recent years \\citep{BP,Petermann,Taglia,Plenz-synchro1,Shew2015b}. Most of this evidence (though not all) relies on the concept of neuronal avalanches \\citep{BP} which are empirically observed to be scale invariant across species, brain regions, resolution levels and experimental techniques \\citep{Schuster,Breakspear-review,RMP}. However, as of today, smoking-gun evidence is still needed to validate or dismiss this fascinating conjecture, and it remains controversial \\citep{Touboul}; thus, novel theoretical and data-oriented analyses are much needed \\citep{RMP}. \n\n\n\n\nIn a seemimgly-unrelated remarkable work, Stringer {\\emph et al.} have recently made a step forward in understanding how neuronal networks actually represent complex inputs. In particular, these authors proved mathematically that the statistics of spiking neurons representing external sensory inputs (such as natural images represented in the mouse visual cortex) need to obey certain constraints for the input representation (or \"neural code\") to be \"continuous and differentiable\" \\citep{Stringer}. These abstract mathematical properties are the formal counterpart of a much-desired property of neural networks: i.e., robustness of the representation against small perturbations of the inputs. Such \na robustness is well-known to be often violated in artificial neural networks (ANNs); in particular, so-called, \\emph{adversarial attacks}, consisting in tiny variations in the input or their statistics, can fool the network, leading to wrong predictions and mis-classifications \\citep{adversarial}.\nWe refer to Stringer \\emph{et al.} \\citep{Stringer} for an in-depth explanation and justification of these important ideas, as well as to \\citep{nassar_1n_2020} for an application of them onto multi-layer ANNs. 
In any case, the conclusion of Stringer \\emph{et al.} is that, in order to achieve robust input representations, the covariance matrix of neuronal activities measured across time when the network is exposed to a sequential series of inputs, must obey the following spectral property: its rank-ordered eigenvalues should decay as a power law of their rank, with an exponent $\\alpha$ strictly larger than $1+2\/d$, where $d$ is the embedding dimension of the input. Thus, $\\alpha=1$ sets a lower bound for the possible values of the eigenspectrum decay-exponent for complex, high-dimensional inputs.\n\n\n\nRather remarkably, these theoretical predictions are verified to be fulfilled in experimental recordings of more than $10000$ individual neurons in the mouse visual cortex exposed to a very large sequence of natural images. This confirms that information encoding occurs in as mathematically predicted, i.e. in a continuous and differentiable manifold.\n \nThe main question we pose here is: are the internal representations of ANNs trained to classify images similar to those of the mouse visual cortex? More specifically, is the spectrum of eigenvalues of the associated covariance matrix a power law of the rank? Is the exponent in all cases larger than (and close to) $1$? If so, do the exponent values change with the images dimensionality in the way predicted by Stringer et al.? \n\nHere, in order to tackle all these questions within the simplest possible scenario, we analyze the neural encoding of inputs with different dimensions in a paradigmatic example of ANN: the \\emph{Echo state network} (ESN) \\citep{jaeger__2001}. This type of networks, together with \\emph{liquid state machines} \\citep{Maass,maass_real-time_2002}, constitute the prototype of \\emph{reservoir computing} (RC) approaches \\citep{lukosevicius_reservoir_2009}, a paradigm of computation that seems particularly well suited for exploiting the putative advantages of operating at the \"edge of chaos\" \\citep{RMP}.\n\n \n\\section{Materials and Methods}\n The Echo State Network, in its original formulation, was devised by Jaeger as a flexible and easy-trainable recurrent neural network (RNN) for time-series prediction tasks \\citep{jaeger__2001, jaeger_harnessing_2004}. More specifically, the architecture of ESNs is described as consisting of: \n\n\\begin{itemize}\n\n\\item An input layer, which scales a number $L_{1}$ of inputs at each time step before they arrive in the reservoir, according to some random weights $W^{in}\\in\\mathbb{R}^{N\\times L_{1}}$.\n\n\\item A reservoir consisting of $N$ internal units connected with random weights $W^{res}\\in\\mathbb{R}^{N\\times N}$, whose states usually evolve according to a non-linear, time-discrete dynamical equation under the influence of a time-dependent input. In this way, the reservoir maps the external input into a high-dimensional space. \n\n\\item An output layer, that converts the information contained in the high-dimensional states of the neurons (which serve as an internal representation of the inputs) to generate the final output.\n\\end{itemize}\nThus, unlike in other ANNs, the internal weights or \"synaptic connections\" in ESNs do not need to be updated during the learning process, and training is achieved by just modifying the layer of output weights that readout the network internal states. 
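A minimal NumPy sketch of these three components is given below; the hyperparameter values, the uniform weight initialization and the dense reservoir matrix are illustrative assumptions on our part, and the \\texttt{tanh} update anticipates the state-update rule introduced in the next paragraph.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, L1 = 500, 28                  # reservoir size and number of input features
rho, eps = 0.9, 1.0              # target spectral radius and input scaling

# Random input and reservoir weights; W_res is rescaled so that its largest
# eigenvalue (in modulus) equals rho, the knob controlling the dynamical regime.
W_in  = rng.uniform(-1.0, 1.0, size=(N, L1))
W_res = rng.uniform(-1.0, 1.0, size=(N, N))
W_res *= rho / np.max(np.abs(np.linalg.eigvals(W_res)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence u(t) of shape (L1, T_steps)
    and return the collected internal states x(t) of shape (N, T_steps)."""
    x = np.zeros(N)
    states = []
    for t in range(inputs.shape[1]):
        x = np.tanh(eps * W_in @ inputs[:, t] + W_res @ x)
        states.append(x)
    return np.stack(states, axis=1)

# Example: an arbitrary 100-step multivariate input sequence.
X_states = run_reservoir(rng.random((L1, 100)))
\\end{verbatim}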
\n\nIn order to adapt this architecture ---usually employed in time-series analyses--- for image classification tasks, we used black and white images with $L_{1}\\times L_{2}$ pixels (each of them characterized by a value in the $[0,1]$ interval, representing a normalized gray-scale) and converted them into multivariate time series by considering their vertical dimension as a vector of $L_{1}$ elements or features that \"evolve\" along $T=L_{2}$ discrete \"time\" steps. One can then define a standard training protocol in which, as illustrated in Fig.\\ref{Figure_0}, at each time $t \\in[0,T]$, vectors $\\mathbf{u}(t) \\in[0,1]^{L_{1}}$ corresponding to columns of the image are fed as inputs to the ESN. In this way, the reservoir states evolve according to the following non-linear update rule: \n\\begin{equation}\n\\mathbf{x}(t)=\\tanh(\\varepsilon W^{in}\\mathbf{u}(t)+W^{res}\\mathbf{x}(t-1))\\label{eq:States_Update}\n\\end{equation}\nwhere $\\varepsilon$ is an input scaling factor. \n\\begin{figure*}\n\\begin{centering}\n\\includegraphics[scale=0.5]{Images\/Figure_0}\n\\par\\end{centering}\n\\caption{{\\bf Sketch of the Echo State Network and the image classification task.} Left: Images are converted to multivariate time series and then fed into the reservoir. Right: for each processed image a set of parameters $\\theta_{x}$ is generated, which characterizes the high-dimensional state of the reservoir, i.e. the ``\\emph{reservoir model space}''. These are then fed into the readout module, which linearly transforms the information in the reservoir model space into an output label. Finally, output weights $\\tilde{W}_{out}$ are generated by minimizing the error between the predicted and target labels. Red arrows indicate steps in which a Ridge regression is performed.}\\label{Figure_0}\n\\end{figure*}\nUsing a supervised learning scheme, the goal of the ESN is to generate an output label $\\mathbf{y}\\in\\mathbb{N}^{F}$ that correctly classifies each image in the test set as belonging to one of the $F$ existing categories or classes. This label consists of a vector in which every element is zero, except for a value of one at the position corresponding to the assigned class (i.e., \"one-hot-encoded\" in the machine learning jargon). Several readout methods have been proposed in the literature to transform the information contained in the reservoir dynamics into the expected target output $\\mathbf{y}^{target}\\in\\mathbb{N}^{F}$, ranging from linear regression methods over the reservoir states \\citep{reinhart_constrained_2010,reinhart_reservoir_2011}, to the use of \"support vector machines\" or \"multilayer perceptrons\" as decoders \\citep{babinec_merging_2006}. Here, we use a simple Ridge regression (see Appendix A for a detailed explanation of the algorithm) over the \\emph{\"reservoir model space\"}, a method that has been recently proposed for the classification of multivariate time series \\citep{bianchi_reservoir_2021}. \n\nThe \"reservoir model space\" is a set of parameters $\\theta_{x}$ that encodes the reservoir \\emph{dynamical} state for a given input (image). 
Such parameters are obtained from a Ridge regression to predict the next reservoir state from the past one at discrete time steps,\n\\begin{equation}\n\\mathbf{x}(t+1)=W_{x}\\mathbf{x}(t)+\\mathbf{w}_{x},\n\\end{equation}\nin such a way that $\\theta_{x}=\\left[\\mathrm{vec}(W_{x});\\mathbf{w}_{x}\\right]\\in\\mathbb{R}^{N\\left(N+1\\right)}$ provides a characterization of the internal reservoir dynamical state during the presentation of a given input, where $\\mathrm{vec}(\\cdot)$ denotes reshaping to a one-column vector and $\";\"$ vertical concatenation. Then, for each image a readout module or decoder can transform this internal representation into an output label:\n\\begin{equation}\n\\mathbf{y}=W_{out}\\theta_{x}+\\mathbf{w}_{out}.\\label{Eq_Readout}\n\\end{equation}\nThe parameters $\\theta_{out}=\\left[\\mathrm{vec}\\left(W_{out}\\right);\\mathbf{w}_{out}\\right]$ ---where $W_{out}\\in\\mathbb{R}^{F\\times N(N+1)}$ and $\\mathbf{w}_{out}\\in\\mathbb{R}^{F}$ are defined as output weights and biases, respectively--- are determined again through Ridge regression, minimizing the error between the produced and target labels for all the presented images in the training set. \n\nLet us remark that the presented framework can be naturally extended to include, for instance, leakage and noise terms in Eq.\\ref{eq:States_Update}, feedback connections from the output to the reservoir, or plastic rules that modify the reservoir weights according to the inputs \\citep{lukosevicius_practical_2012,morales_unveiling_2021}, among other possible extensions. However, since our aim here is not to reach state-of-the-art classification accuracy ---but rather to highlight the link between optimal input representation and the internal dynamical state--- for the sake of parsimony we refrain from adding further features to our model for the time being.\n\n\\section{Results}\n\n\\subsection{Non-trivial scaling at the edge of chaos.}\nAlthough relatively simple, our proposed ESN model has several hyperparameters that can be tuned, affecting its performance. More specifically, the \\emph{spectral radius} $\\rho$ of the reservoir internal weight matrix and the \\emph{scaling factor} $\\varepsilon$ of the input weights are two variables that usually determine the dynamical regime within the reservoir \\citep{lukosevicius_reservoir_2009}. The spectral radius ---i.e., the largest eigenvalue (in modulus) of the reservoir weight matrix--- controls the dynamical stability inside the reservoir when no input is fed into the network. Thus, a spectral radius exceeding unity has often been regarded as a source of instability in ESNs due to the loss of the so-called \\textquotedblleft \\emph{echo state property}\\textquotedblright, a mathematical condition ensuring that the effect of initial conditions on the reservoir states fades away asymptotically in time \\citep{jaeger__2001,jaeger_short_2001,yildiz_re-visiting_2012}. Nevertheless, later studies have shown that the echo state property can actually be maintained with a spectral radius above unity, and different sufficient conditions have been proposed \\citep{buehner_tighter_2006,yildiz_re-visiting_2012,gallicchio_chasing_2018} (see in particular \\citep{manjunath_echo_2013}, where the authors analyze the problem through the lens of non-autonomous dynamical systems, deriving a sufficient condition for the echo state property with regard to a given input). 
On the other hand, increasing the value of $\\varepsilon$ can convert an initially expanding mapping into a contracting dynamics, as stronger inputs tend to push the activities of the reservoir units towards the tails of the non-linearity.\n\nIn what follows, we analyze the input representation that the reservoir encodes, in terms of the trade-off between $\\rho$ and $\\varepsilon$, which \\emph{together} determine the dynamical operating regime of the ESN and the presence or absence of the echo state property. For the remaining parameters, the number of units in the reservoir is kept fixed at $N=2000$ and the density of the reservoir-weight-matrix elements (i.e., the percentage of non-zero connections) at $10\\%$, while both reservoir and input weights are extracted at random from a uniform distribution in the interval $[-1,1]$. \n\\begin{figure}\n\\centering{}\\includegraphics[scale=0.5]{Images\/Figure_5_Map_Full}\\caption{Exponent for the power-law decay of the spectrum of the activity covariance matrix as a function of the spectral radius ($\\rho$) and input scaling factor ($\\varepsilon$) of the reservoir, plotted together with the maximum Lyapunov exponent (MLE) color-coded within the surface. The insets correspond to the activity covariance matrix eigenspectrum measured at three different points of the parameter space, where the variance in the n-th dimension (n-th eigenvalue) scales as a power-law $n^{-\\alpha}$ of the rank. For ease of visualization, the plane separating the region $\\alpha < 1$, in which the representation is neither continuous nor differentiable, was plotted in purple.}\\label{Figure_1}\n\\end{figure}\n\nFollowing the same methodology as in Stringer \\emph{et al.}~\\citep{Stringer}, the ESN was first presented with a large set of high-dimensional, natural images, and the activity of the internal units in the reservoir was stored for each step of the training. Then, principal component analysis (PCA) was performed directly over the full set of neuron activities $X\\in\\mathbb{R}^{N\\times( T\\times M)}$, where $T=90$ is the number of pixels in the horizontal dimension of the images and\n$M=2800$ is the total number of images. In this way, we obtained the variance along each principal component or eigenvector of the covariance matrix, which serves as a basis for the activity inside the reservoir (see \\citep{shlens_tutorial_2014} for a very gentle but rigorous introduction to PCA).\n\nNotably, we found that the spectrum of eigenvalues as a function of their rank (i.e., the variance associated with the n-th principal component, when ordered from the largest to the smallest) can be well fitted to a power-law $n^{-\\alpha} $ (see insets in Fig.\\ref{Figure_1}), whose associated exponent $\\alpha$ decreases (flatter spectrum) with the spectral radius $\\rho$, \nwhile it increases (faster decay) with the input scaling factor $\\varepsilon$ over most of the parameter space (see Fig.\\ref{Figure_1}). In \\citep{Stringer} it was found that the exponent of this power-law relation is close to $1$ when natural, high-dimensional images were shown to the mouse as an input. As discussed above, the authors proved mathematically that $\\alpha>1+2\/d$ is a necessary condition for the neural manifold that emerges from the representation of a $d$-dimensional input to be continuous and differentiable. For natural images, $d$ is very large and one can approximate the critical exponent by $\\alpha_{c}\\approx1$ (this condition is marked by the purple plane in Fig.\\ref{Figure_1}). 
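For concreteness, the analysis just described can be summarized by the following minimal sketch (again Python with NumPy), where reservoir\\_states is the routine sketched in the Materials and Methods section, img\\_as\\_series stands for a hypothetical helper that converts an image into its column-wise time series, and the range of ranks used for the power-law fit is an illustrative choice.
\\begin{verbatim}
import numpy as np

# Reservoir activity for all M images, collected into a matrix of shape (N, T*M)
X = np.hstack([reservoir_states(img_as_series(img)).T for img in images])
X -= X.mean(axis=1, keepdims=True)            # centre the activity of each unit

cov = X @ X.T / X.shape[1]                    # N x N activity covariance matrix
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # rank-ordered variances

# Fit lambda_n ~ n^(-alpha) by linear regression in log-log coordinates
ranks = np.arange(1, len(eigvals) + 1)
fit_range = slice(10, 500)                    # intermediate ranks (illustrative)
slope, _ = np.polyfit(np.log(ranks[fit_range]), np.log(eigvals[fit_range]), 1)
alpha = -slope
print('estimated eigenspectrum decay exponent:', round(alpha, 2))
\\end{verbatim}
Repeating this procedure on a grid of $(\\rho,\\varepsilon)$ values yields the exponent surface shown in Fig.\\ref{Figure_1}.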
\n\nAs a next step, one might naturally wonder whether there is some\n aspect of our model that characterizes such a regime of robust representations in the parameter space $(\\rho,\\varepsilon)$, for which an exponent $\\alpha$ close to unity is found. In other words: is the dynamics of the system inherently different in the regions for which the input representation manifold is found to be non-analytical (i.e. below the purple plane in Fig.\\ref{Figure_1})?\n\nAs it turns out, rather remarkably, the exponent $\\alpha$ characterizing the decay of the eigenspectrum approaches unity for choices of $\\rho$ and $\\varepsilon$ that drive the network dynamics towards the so-called \"edge of instability\" or \"edge of chaos\", that is, near a transition point between an ordered and a chaotic regime. Traditionally, chaotic regimes are characterized by their average sensitivity to perturbations in the initial conditions; to quantify this effect, one usually measures the rate of divergence of two trajectories with a very small difference in their initial conditions:\n\\begin{equation}\n\\lambda=\\lim_{k\\to\\infty}\\dfrac{1}{k}\\log\\left(\\dfrac{\\gamma_{k}}{\\gamma_{0}}\\right)\n\\end{equation}where $\\lambda$ is termed the maximum Lyapunov exponent (MLE), $\\gamma_{0}$ is the initial distance between the perturbed and unperturbed trajectories, and $\\gamma_{k}$ is the distance between the trajectories at step $k$ (we refer the reader to \\citep{boedecker_information_2011} and \\citep{sprott_chaos_2003} for a detailed explanation of the algorithm used to compute the MLE). Thus, chaotic dynamics is typically associated with a positive MLE, while the system is said to be stable to local perturbations provided $\\lambda<0$. It can be clearly seen from Fig.\\ref{Figure_1} that the region in which one finds non-analytical representations of the input (below the purple plane) matches almost perfectly with the region (colored in green) in which a positive MLE is found. \n\nThe order-to-chaos transition can also be visualized by looking directly at the activities inside the reservoir (see Fig.\\ref{Figure_5}). Observe that, when the network is in an \"ordered\" state, with $\\lambda<0$, the responses of the neurons are quite heterogeneous when compared among them, but they are highly localized within each neuron, i.e., individual neurons experience a limited response to stimuli. On the other hand, dynamical states characterized by $\\lambda>0$ have neurons whose response extends across the full range of the non-linearity (with higher probability along the tails, reflecting a saturated behavior), but it is this same \"phase space expansion\" that makes units almost indistinguishable from each other. It is only around the critical point or edge of chaos that we find a trade-off between dynamical richness in individual units and variability across units. \n\nComing back to the results in Stringer \\emph{et al.}, one may also wonder whether the continuity and differentiability condition $\\alpha>1+2\/d$ holds also for low-dimensional inputs, for which the expected bound $\\alpha_{c}=1+2\/d$ deviates considerably from unity. To this end, Fig.\\ref{Figure_2} shows the measured eigenspectrum of the reservoir activity covariance matrix (i.e. 
eigenvalues as a function of their rank) when images of different dimensionality (the same ones used by Stringer \\emph{et al.} in their experiments) are presented as inputs, and the reservoir is tuned to operate at the onset of a chaotic regime, i.e., for values in the parameter space $\\left(\\rho,\\varepsilon\\right)$ for which $\\lambda$ was near zero but still negative. Remarkably, we find in all cases that \\emph{the exponents observed in the mouse visual-cortex activity are best reproduced when the reservoir dynamics is tuned close to the \"edge of chaos\"}. \n\nThis finding suggests that one can set the network parameters in such a way that the neural activity manifold in which the input is represented is almost as high-dimensional as possible without losing its \"smoothness\", and that such an optimal solution is found at the edge of chaos.\n\\begin{figure}\n\\centering{}\\includegraphics[scale=0.8]{Images\/Figure_2_Full}\\caption{From left to right: (A) sample from the $M=2800$ images in the training set; (B) eigenspectrum of the images' pixel intensities; (C) eigenspectrum for the activities of an ESN (blue line) and real, V1 mouse neurons (yellow line, plotted after \\citep{Stringer} applying cvPCA) when subject to images of dimensionality $d$; (D) same analysis, but now zero-centered white noise of amplitude $\\epsilon = 0.4$ is added to the neuron dynamics (blue line), and no cvPCA is performed over the experimental values (yellow line); (E) same analysis as in (D), but now noise has been subtracted using cvPCA. From top to bottom: results for natural, high-dimensional images; the same images projected onto 8 dimensions; the same images projected onto 4 dimensions. To obtain the ESN eigenspectra, parameters were chosen so that the networks operated near the edge of chaos, with $\\lambda \\sim -5 \\cdot 10^{-3}$. }\\label{Figure_2}\n\\end{figure}\n\nAt this point, it is pertinent and timely to dig a bit deeper into the similarities and differences between the results presented in \\citep{Stringer} for real, V1-cortex neurons in the mouse, and the power-law exponents obtained through our reservoir computing model. \n\n(i) First of all, as in the case of real neurons, the observed correlations between the internal units are not just a byproduct emerging from scale-free features of natural images (see second column in Fig.\\ref{Figure_2}). In particular, one can see that the power-law decay of the eigenspectrum persists even in response to low-dimensional inputs whose embedding vector space can be spanned with just a few principal components (i.e. without a power-law decaying intrinsic spectrum).\n\n(ii) In our model, images are processed sequentially in time along their horizontal dimension, so that for each image one can measure the activity of the $N$ internal units over $T=L_{2}$ time steps. In contrast, the activity of V1 neurons in \\citep{Stringer} is scanned at a relatively low rate, so that for each image the neural representation is characterized by just one amplitude value in each neuron. \n\n(iii) To avoid confusion, let us remark that the variance observed by Stringer \\emph{et al.} is not directly measured over the raw activity of the neurons. Instead, the authors first project out the network spontaneous activity from the data, and then perform a cross-validated PCA (cvPCA) that allows them to filter out the trial-to-trial variability or \"noise\". 
The cvPCA method is thus able to estimate the stimulus-related variance confined to an n-dimensional manifold by first computing the eigenvectors spanning this manifold from a first repeat of the full training set, and then measuring the amount of a second repeat's variance that is confined to this plane (we refer to \\citep{Stringer} for a detailed explanation and derivation of the cvPCA method). However, since our model is completely deterministic for a given initialization of an ESN, the stimulus-related variance computed through cvPCA trivially matches that of a standard PCA. \n\n\nA natural question then arises from this last point: what happens when a noise term is included in Eq.\\ref{eq:States_Update}, so that the dynamics becomes stochastic? Are the power-law exponents robust to the introduction of noise? To answer these questions, we considered stochastic versions of the ESNs ---including an independent small additive noise term in their inputs--- and presented them with two repeats of the same input training set. The internal states of the noisy reservoirs were collected at each time step. We then performed the same type of cvPCA analyses proposed in \\citep{Stringer} to estimate the signal variance in our reservoirs (see fourth and fifth columns in Fig.\\ref{Figure_2}). Just as in the case of real V1 neurons, the exponents measured over the raw, noisy activity are lower and fall below the critical threshold for continuity and differentiability of the neural manifold. Nevertheless, a cvPCA over the internal states retrieves the expected exponents after noise has been filtered out.\nWe will further comment on the possible implications of this finding in the Discussion section, but for now, let us wrap up our findings by tackling what we believe is a fundamental question from the perspective of machine learning: does working at the edge of chaos (or, equivalently, having maximal, continuous and differentiable neural manifolds) provide any functional advantage?\n\n\\subsection{Solving a benchmark classification task.}\n\nThe advantages of working at the so-called edge of chaos were first pointed out for general dynamical systems and cellular automata \\citep{crutchfield_computation_1988,Langton}, and only later analyzed in reservoir computing models with binary \\citep{bertschinger_real-time_2004, legenstein_edge_2007, busing_connectivity_2010} and analog \\citep{busing_connectivity_2010, boedecker_information_2011} internal units. In particular, in \\citep{boedecker_information_2011} the authors showed that ESNs exhibit maximal information storage and transfer, as well as enhanced memory capacity, right at the edge of chaos. However, while ESNs and other RC approaches have been previously applied to classification tasks with very good results \\citep{schaetti_echo_2016, skowronski_automatic_2007, aswolinskiy_time_2016, ma_functional_2016, yusoff_modeling_2016,bianchi_reservoir_2021}, to the best of our knowledge an analysis of the influence of the dynamical regime on the performance of RC architectures for classification tasks is still missing. To this end, we measure the performance of ESNs in a classification task over the canonical MNIST dataset, which includes $60,000$ handwritten instances of the first $10$ digits for training (although only a third of them was used) and $10,000$ for testing, following the training and testing procedure described in Materials and methods. 
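Schematically, and under the same assumptions as in the previous sketches (Python with NumPy, the reservoir\\_states routine of the Materials and Methods sketch, and hypothetical containers training\\_series and Y\\_target holding the image time series and their one-hot labels), the training pipeline based on the reservoir model space and the closed-form Ridge solution of Appendix A can be summarized as follows.
\\begin{verbatim}
import numpy as np

def ridge(Y, X, beta=1.0):
    '''Closed-form Ridge solution W = Y X^T (X X^T + beta I)^(-1), cf. Appendix A.'''
    return Y @ X.T @ np.linalg.inv(X @ X.T + beta * np.eye(X.shape[0]))

def model_space(states):
    '''Reservoir model space: one-step linear predictor [W_x, w_x] of one image.'''
    X_past = np.vstack([states[:-1].T, np.ones(len(states) - 1)])  # x(t) and bias
    W = ridge(states[1:].T, X_past)                                # N x (N+1)
    return W.ravel()                                               # theta_x

# One representation theta_x per training image, plus a constant bias entry
Theta = np.column_stack([np.append(model_space(reservoir_states(U)), 1.0)
                         for U in training_series])
W_out = ridge(Y_target, Theta)            # readout weights, shape F x (N(N+1)+1)
predicted = (W_out @ Theta).argmax(axis=0)   # predicted class of each image
\\end{verbatim}
The sketch follows the closed form of Eq.\\ref{eq:MSE_Error} literally; in practice one would typically use a numerically stabler solver for the matrix inversion in the readout step.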
\n\nThe results, shown in Fig.\\ref{Figure_4}, highlight the fact that optimal performance ($\\sim 2.2 \\%$ error rate) is found just below the onset of chaos, when $\\lambda \\lesssim 0$. Most notably, the plot also evinces that the decay in performance is not only preceded by a positive MLE, but coincides too with exponents $\\alpha$ for the fit of the covariance-matrix eigenspectrum that are below the limiting value $\\alpha_{c}\\approx1$, indicating the loss of continuity and differentiability of the neural representation manifold for high-dimensional images. Let us remark that the slower the decay (i.e., the smaller the exponent), the more weight is given to fine details of the input, but if the decay is too slow (smaller than the lower bound above), an excessive importance is given to such fine details at the cost of hampering the existence of a \"smooth manifold\" representation. Thus, operating near the edge of instability could provide the network with an optimal trade-off between representing as many details as possible and constructing operative, smooth representations.\n\n\nWe finally remark that the results shown here were obtained with a reservoir consisting only of $500$ internal units and using only one-third of the training set, with no pre-processing of the images. In contrast, the current best performance in MNIST digit recognition ($0.81\\%$ error rate) using reservoir computing networks has been achieved with a two-layer architecture, each layer consisting of $16,000$ units, which amounts to a total of $880,000$ trainable parameters \\citep{jalalvand_design_2015}. In this sense, a simple ESN with readouts over the reservoir model space, when tuned near the edge of chaos, is able to outperform ESNs with a greater number of units and much more complicated dynamics (including feedback connections and leakage terms), trained over the full MNIST dataset \\citep{schaetti_echo_2016}. \n\n\\begin{figure}\n\\centering{}\\includegraphics[scale=1]{Images\/Figure_3_New}\\caption{Curves for the accuracy on the MNIST test set (blue dots), maximum Lyapunov\nexponent (orange line) and best-fit exponent for the power-law spectrum\nof the activity covariance matrix (purple line). Training was performed\nover 20,000 randomly chosen images of the MNIST training dataset, while classification error was assessed\nover the full test set (10,000 images). Errors in each case were estimated as the standard deviation from the mean over ten different initializations of the ESN.}\\label{Figure_4}\n\\end{figure}\n\n\\section{Discussion}\n Stringer \\emph{et al.} observed in \\citep{Stringer} that neural coding of different inputs in the mouse V1-cortex is close to optimal for each type of input, constrained by requirements of continuity and differentiability of the neural response manifold. In this paper, we open the door to the possibility that optimal, continuous and differentiable response manifolds emerge for neuron dynamics lying close to an edge-of-chaos type of critical point. Indeed, we have shown that a simple non-linear model of randomly-connected neurons, when subject to an external input, is able to reproduce power-law exponents similar to those found in mouse V1-cortex for the decay of the covariance matrix eigenspectrum.\n\nWe nevertheless find it important to clarify that the term edge of chaos ---and the concept of chaos itself--- should be taken with caution as it is not devoid of criticism in this context. 
As pointed out in \\citep{manjunath_echo_2013}, ESNs are an example of nonautonomous dynamical systems, for which typical concepts based on the theory of autonomous systems (e.g., \"sensitivity to initial conditions\", \"attractor\" and \"deterministic chaos\") do not directly apply \\citep{clemson_discerning_2014,gandhi_theory_2012}. In fact, the authors of \\citep{manjunath_echo_2013} claim that local perturbation experiments cannot provide ultimate evidence of chaotic dynamics in non-autonomous systems, since it might well be the case that the input drives the system towards an expanding dynamics for a certain time span, while the system shows on average a contracting, non-chaotic dynamics. Despite these caveats, in the light of the presented results it appears that there is indeed an actual dynamical phase transition occurring as the maximum Lyapunov exponent crosses zero. Thus, in any case, it seems a sensible choice to use such a quantity as a control parameter when analyzing the underlying neural representation of external inputs. \n\nOn the other hand, adding stochasticity in the form of small-amplitude white noise naturally leads to flatter eigenspectra, much like those found when PCA is performed over raw experimental data. Nevertheless, one can use the same cvPCA technique introduced also in \\citep{Stringer} to extract the input-related variance of the activity, thus obtaining similar exponents to the fully deterministic case. This remarkable result therefore suggests that the role of spontaneous activity and trial-to-trial variability on the representation of external inputs can be easily accounted for in our simple echo-state-network model.\n\n\nFinally, results obtained on a benchmark classification task suggest that input-representation manifolds that are critically high-dimensional (from the point of view of their analytic properties) may serve a bigger purpose than just being a mathematical curiosity, as ESNs show a better performance when poised near such a critical point, while the accuracy falls rapidly as soon as the representation manifold becomes fractal.\n\nTherefore, the presented results open the path to very exciting research avenues at the boundary of biology and machine learning, calling for theoretical formulations that can shed light on the fascinating properties of these input-representation neural manifolds and their relation with the criticality hypothesis.\n\n\n\\section*{Appendix A: Ridge regression}\n\nReadouts in ESNs are typically linear, as can be seen from Eq.\\ref{Eq_Readout}. To simplify the forthcoming derivations, let us rewrite Eq.\\ref{Eq_Readout} as:\n\\begin{equation}\n\\mathbf{y}=\\tilde{W}_{out}\\left[\\theta_{x};1\\right]\n\\end{equation}\nwhere $\\tilde{W}_{out}\\in\\mathbb{R}^{F\\times\\left(N(N+1)+1\\right)}$ and \" ; \" indicates vertical vector concatenation. Thus, for a given input image a reservoir representation of the states $\\theta_{x}$ is constructed, and one can use the above equation to generate the corresponding output (which in our case will be a one-hot-encoded label $\\mathbf{y}\\in\\mathbb{N}^{F}$ classifying the image). 
We can trivially generalize the above equation to apply to the full set of $M$ images in the training set $Y\\in\\mathbb{N}^{F\\times M}$:\n\\begin{equation}\nY=\\tilde{W}_{out}\\Theta_{x}\n\\end{equation}\nwhere $\\Theta_{x}\\in\\mathbb{R}^{\\left(N\\left(N+1\\right)+1\\right)\\times M}$ contains as columns the vectors $\\left[\\theta_{x};1\\right]$ generated for each of the $M$ input images. Finding the optimal weights\n$\\tilde{W}_{out}$ that minimize the squared error between the produced and target labels, $\\mathbf{y}$ and $\\mathbf{y}^{target}$, is then reduced to a standard linear regression problem, which is the greatest strength of the reservoir-computing approach:\n\\begin{equation}\nY^{target}=\\tilde{W}_{out}\\Theta_{x}.\n\\end{equation}\nOwing to the fact that large output weights are commonly associated with over-fitting of the training data \\citep{lukosevicius_practical_2012}, it is a common practice to add a regularization term to the error in the target reconstruction, usually defined in terms of the root-mean-squared error (RMSE). Although several methods have been proposed to achieve this regularization \\citep{lukosevicius_practical_2012, reinhart_constrained_2010,reinhart_reservoir_2011}, one of the most efficient and stable algorithms is Ridge regression, which aims to solve:\n\\begin{equation}\n\\tilde{W}_{out}=\\underset{\\left\\{ \\tilde{W}_{out}\\right\\} }{\\arg\\min}\\dfrac{1}{M}\\sum_{n=1}^{M}\\sum_{i=1}^{F}\\left(y_{i}[n]-y_{i}^{target}[n]\\right)^{2}+\\beta\\left\\Vert \\tilde{w}_{i}^{out}\\right\\Vert ^{2}=Y^{target}\\Theta_{x}^{T}\\left(\\Theta_{x}\\Theta_{x}^{T}+\\beta I\\right)^{-1},\\label{eq:MSE_Error}\n\\end{equation}\nwhere $\\left\\Vert \\text{\\ensuremath{\\cdot}}\\right\\Vert $ stands for the Euclidean norm, $I$ is the identity matrix and $\\beta$ is the regularization coefficient. Notice that choosing $\\beta=0$ removes the regularization, turning the Ridge regression into a standard generalized linear regression problem (we used $\\beta = 1$ across all simulations in the paper).\n\nWe finally remark that, in order to obtain the reservoir model space parameters $\\Theta_{x}$, the same Ridge regression is also used to solve Eq. 2 for each of the input images.\n\n\\section*{Appendix B: Phase space of reservoir units}\nIn Fig.\\ref{Figure_5}, we show the phase space of $4$ different internal units of a reservoir operating (from top to bottom rows) in the sub-critical, critical and super-critical regimes, respectively. Each plot represents the activity of the corresponding neuron at each time step against the total input (sum of external input plus reverberating activity in Eq.\\ref{eq:States_Update}) arriving at the neuron at the previous time step. Since each image is first transformed into a multivariate time series of $T=L_{2}=90$ time steps ---and we plot the activity along the first three images only--- each panel in Fig.\\ref{Figure_5} contains 270 points.\n\\begin{figure}\n\\centering{}\\includegraphics[scale=1]{Images\/Figure_6_Neurons_Activity.png}\\caption{Activity of four different neurons operating in the sub-critical, critical and super-critical regimes (from top to bottom) when presented with three different high-dimensional, natural images. 
Each point in the panels represents the activity $x_{i}(t)$ of neuron $i$ at time step $t$ as a function of the total input $f_{i}(t)=\\varepsilon \\sum_{l=1}^{L_{1}} w_{il}^{in} u_{l}(t) + \\sum_{j=1}^{N} w_{ij}^{res} x_{j}(t-1)$ arriving at it.} \\label{Figure_5}\n\\end{figure}\n\n\\section{Introduction}\n\n\nA system of two mutually coupled semiconductor lasers is a simple example of interacting nonlinear oscillators. This system has been experimentally realised using, for example, edge emitting \\cite{HEI01}, quantum dot \\cite{HEG07}, VCSEL \\cite{FUJ03} and DBR \\cite{VAU09} lasers, and the observed dynamical phenomena include leader-laggard dynamics \\cite{HEI01}, frequency locking and intensity pulsations \\cite{WUE05} and bubbling \\cite{FLU09,TIA12}. For recent reviews on the rich dynamical features of coupled semiconductor lasers we refer to \\cite{LUE12,SOR13}. \n\nIn the current paper we study this problem theoretically on the basis of a well established rate equation model. In general, there are two time scales which govern the character of the dynamics in coupled semiconductor lasers, namely the period of the relaxation oscillation $T_R=1\/\\nu_R$ and the delay time $\\tau$ due to the separation between the two lasers. In the long delay case $\\tau\\gg T_R$, it is well known that the synchronous state of two coupled lasers is in general not stable \\cite{WHI02}, which gives rise to leader-laggard dynamics and noise induced switching between the asymmetric states \\cite{MUL04}. In view of applications in small-scale and high-speed devices, we are however interested in the opposite limit, where the delay time is comparable to or even much smaller than the relaxation period. In this limit the rate equation approach is justified if the distance between the lasers is significantly larger than the optical wavelength itself. At even shorter distances, composite cavity models have recently been used to describe effects due to evanescent lateral coupling \\cite{ERZ08,BLA12}. Using typical values for the relaxation oscillation in the GHz regime and an optical wavelength of around 1~$\\mu$m, we therefore now focus on the case of two coupled lasers at distances between 10~$\\mu$m and 500~$\\mu$m. \n\nWe observe that for small spatial separation and strong optical coupling, the two lasers can mutually lock into one of two stable one-colour states. These states spontaneously break the original symmetry of exchanging the two lasers, and both lasers emit light at precisely the same frequency but at different intensities. In this bi-stable regime the current state of the system can be conveniently observed optically from the amplitude of the light output of the lasers. There also exists a region of hysteresis, where the symmetric one-colour state is jointly stable with the two symmetry-broken one-colour states, thus giving rise to a parameter region of tri-stability. For more moderate coupling strength, the symmetry-broken one-colour states become unstable, and instead symmetry-broken two-colour states appear, which are similar to the ones observed numerically in \\cite{ROG03}. In this case, both lasers emit light at the same two optical frequencies; however, the intensities of the respective colours are \nnot identical in both lasers.
\n\nIn order to study these and similar transitions in the dynamics systematically, we introduce a reduced five dimensional model, which allows for a bifurcation analysis using the continuation software AUTO \\cite{DOE06}. The transition from a one-colour to a two-colour symmetry broken state then corresponds to a Hopf bifurcation from a symmetry-broken fixed-point state to a symmetry-broken limit cycle state in the language of bifurcation theory. We are in particular interested in the fundamental bifurcations, which bound the domain of symmetry broken two-colour states. We identify the relevant codimension two points, which organise the bifurcation scenario in this region, and explain the mechanism which gives rise to the large region of bi-stability between symmetry-broken two-colour states. \n\nFrom the technological side there exists currently a strong interest in the development of small-scale devices which are capable of all-optical signal processing. One particular challenge, which has attracted a significant amount of recent research activities, is the design of all-optical memory elements \\cite{HIL04,OSB09,LIU10,CHE11,HEI11,NOZ12,PER12}. The goal is to design fast and efficient memory elements which can be switched between at least two different stable states via an external optical signal. At the same time it is also desired that the state of the memory element is accessible optically, and the optical output from one memory element should be able to trigger the switching in further elements, with little or no intermediate processing. Based on these criteria, the question arises, if all-optical memory elements can be realised using two mutually interacting identical single-mode lasers. After having identified a number of interesting regions of multi-stabilities involving one-colour \nand two-colour states, we numerically demonstrate that these states can indeed be exploited for the design of all-optical memory units. By injecting suitable pulses of light into the coupled lasers we are able to switch between different multi-stable states. In order to assess the technologically relevant speed of the switching events, we define the write time as the minimal injected pulse duration required to trigger the switching event, and the read time as the minimum time after which the state of the memory can be obtained optically. We find read and write times of less than 100ps, which suggests the possibility of fast and simple all-optical memory elements on the basis of identical closely coupled single-mode lasers. \n\n\n\\section{Rate Equation Model}\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth]{fig01}\n\\caption{\\label{fg:scheme} Schematic diagram of two coupled lasers of wavelength $\\lambda_0$ separated by a distance $D$, where $C_p \\in [0,2\\pi)$ and $j \\in \\field{N}$.}\n\\end{figure}\n\nTwo identical single-mode semiconductor lasers with free-running wavelength $\\lambda_0$ are placed in a face-to-face alignment with a separation $D$ as sketched in Fig.~\\ref{fg:scheme}. They are mutually coupled via a certain amount of the light of each entering the other cavity after a delay $\\tau = D\/c$, where $c$ denotes the speed of light. 
This scenario has been successfully studied in the literature on the basis of rate equation models \\cite{HOH97,MUL02,ROG03,YAN04,ERZ06}, and we will use the following system of delay differential equations:\n\n\\begin{subequations}\\label{eq:mLK}\n\\begin{eqnarray}\n\\dot{E_1}(t) =& (1 + i \\, \\alpha) \\, N_1(t) \\, E_1(t) + \\kappa e^{-i \\, {C_p}} E_2 (t -\\tau) \\label{eq:E1dot}\\\\\n\\dot{E_2}(t) =& (1 + i \\, \\alpha) \\, N_2(t) \\, E_2(t) + \\kappa e^{-i \\, {C_p}} E_1 (t -\\tau) \\label{eq:E2dot} \\\\\n\\dot{N_1}(t) =& \\frac{1}{T}\\left[P - N_1(t) - \\left( 1 + 2N_1(t) \\right) \\, |E_1(t)|^2\\right] \\label{eq:N1dot}\\\\\n\\dot{N_2}(t) =& \\frac{1}{T}\\left[P - N_2(t) - \\left( 1 + 2N_2(t) \\right) \\, |E_2(t)|^2\\right] \\label{eq:N2dot}\n\\end{eqnarray}\n\\end{subequations}\n\nThe parameters include $P=0.23$, the pumping of electron-hole pairs into each laser; $\\alpha=2.6$, the line-width enhancement factor; $T=392$, the ratio of the photonic and carrier lifetimes; and $\\tau$, the dimensionless delay time between the lasers. The coupling strength $\\kappa$ and the coupling phase $C_p\\,=\\,2\\pi \\left( D \\bmod \\lambda_0 \\right)\/\\lambda_0$ are the two main bifurcation parameters in subsequent sections. The model~\\eqref{eq:mLK} is dimensionless with time measured in units of the photon lifetime $\\tau_p = 1.0204$ ps. The dynamical variables $N_1$ and $N_2$ denote the population inversions, and $E_1$ and $E_2$ are the slowly varying complex optical fields in laser 1 and laser 2. The rapidly oscillating physical fields can be recovered via $\\tilde{E}_{1,2} = E_{1,2}(t) e^{i \\omega_0 t}$, where the optical angular frequency is given by $ \\omega_0= 2\\pi c \/ \\lambda_0$. The phase factor $e^{-i C_p}$ in system~\\eqref{eq:mLK} is therefore a consequence of expressing the time delayed physical fields using slowly \nvarying fields via $\\tilde{E}_{1,2}(t-\\tau) e^{-i \\omega_0 t} = e^{-i C_p} E_{1,2}(t-\\tau)$. \n\n\nAny solution of equations~\\eqref{eq:mLK} can be multiplied by a common phase factor in both optical fields, leading to an $S^1$ symmetry. In addition, as the two lasers are identical, a $\\field{Z}_2$ symmetry exists due to the ability to swap the lasers. Mathematically these two phase space symmetries can be formulated as \\cite{ERZ06},\n\\begin{equation}\n\\label{eq:symm}\n\\begin{aligned}\n & \\left(E_1,E_2\\right) \\rightarrow \\left(e^{i\\,b}E_1,e^{i\\,b}E_2\\right), \\: b \\in [0,2\\pi) \\: & S^1 \\text{ symmetry} \\\\\n & \\left(E_1,E_2,N_1,N_2\\right) \\rightarrow \\left(E_2,E_1,N_2,N_1\\right) \\: & \\field{Z}_2 \\text{ symmetry}\n\\end{aligned}\n\\end{equation}\nBoth the $S^1$ and $\\field{Z}_2$ symmetries are frequently used and referred to in subsequent sections. \n\n\\section{One-Colour States}\\label{sec:CLM}\n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig02}\n\\caption{\\label{fg:CLMs} Time traces (top panels) and frequency spectra (bottom panels) showing symmetric (left column) and symmetry-broken (middle and right columns) CLMs for $\\tau=0.1$, $\\kappa=0.3$ and ${C_p}=0.33\\,\\pi$. These parameters are consistent with a point in region 7 of Fig.~\\ref{fg:map}.}\n\\end{figure}\n\nOne-colour states, which are also known as compound laser modes (CLMs)~\\cite{ERZ06}, are constant-amplitude and single-frequency solutions to~\\eqref{eq:mLK} whereby the two lasers are frequency locked. 
They are characterised by the following ansatz, \n\\begin{equation}\n\\label{eq:1C}\n\\begin{aligned}\n& E_1(t) = A_1 e^{i\\,\\omega_A\\,t}& \\qquad &N_1(t)= N_1^c\\\\\n& E_2(t) = A_2 e^{i\\,\\omega_A\\,t} e^{i\\,\\delta_A}& &N_2(t)= N_2^c\n\\end{aligned}\n\\end{equation}\nwith real constants $A_1$, $A_2$, $N_1^c$, $N_2^c$, $\\delta_A$ and $\\omega_A$. The frequency $\\omega_A$ is the common locked frequency of the slowly varying optical fields of the two lasers and $\\delta_A$ allows for a constant phase difference between them. This ansatz can be split into two complementary classes; (i) symmetric CLMs ($A_1=A_2$) which are invariant under the $\\mathbb{Z}_2$ symmetry and (ii) symmetry-broken CLMs where both lasers lase at different intensities ($A_1\\neq A_2$). Further details on this classification are given in Appendix~\\ref{ap:ssbCLM}. \n\nSymmetric CLMs can be further sub-divided into ``in-phase'' ($\\delta_A=0$) and ``anti-phase'' ($\\delta_A=\\pi$) solutions and their stability is extensively studied in \\cite{ERZ06,YAN04}. In particular, it was found that symmetric CLMs can lose their stability via Hopf, saddle node or pitchfork bifurcations, and the stability boundaries were obtained via numerical continuation techniques \\cite{ERZ06} or in the instantaneous limit $\\tau=0$ analytically by using the characteristic equation of the system \\cite{YAN04}. In the literature, symmetry-broken CLMs play a role in the stability analysis of symmetric CLMs, but are themselves not stable. \n\nA principal result of this paper is that \\emph{symmetry-broken} CLMs are shown to be stable for small delay and relatively high coupling between the lasers. This is demonstrated numerically in the middle and right columns of Fig.~\\ref{fg:CLMs} where optical field intensities and frequency spectra of two stable symmetry-broken states are plotted. Due to the $\\mathbb{Z}_2$ symmetry of exchanging the two lasers, symmetry-broken CLMs always exist in pairs. For purposes of display, a parameter set was chosen where a symmetric CLM is also stable as shown in the left column of Fig.~\\ref{fg:CLMs}, giving rise to a tri-stability in CLMs. \n\nThe frequencies of the CLMs in the bottom panels of Fig.~\\ref{fg:CLMs}, can be analytically determined by plugging the ansatz~\\eqref{eq:1C} into the system of ODEs~\\eqref{eq:mLK} \\cite{ERZ06,YAN04}. The equations for $\\dot{E}_1$ and $\\dot{E}_2$ yield the following relation between the frequency $\\omega_A$ of the CLM and the phase difference $\\delta_A$ between the lasers \n\\begin{equation}\n\\label{eq:freqs}\n{\\frac{\\omega^{2}_A}{ \\kappa^{ 2} \\left( 1+ \\alpha^2 \\right)} } = \\sin^{2}\\left( {C_p} + \\omega_A\\tau + \\arctan\\alpha\\right) - \\sin^{ 2}\\delta_A\n\\end{equation}\nFor in-phase $\\omega_{in}=\\omega_A(\\delta_A=0)$ and anti-phase $\\omega_{an}=\\omega_A(\\delta_A=\\pi)$ CLMs, this immediately gives an implicit function for the frequency. For the symmetry-broken one-colour states one additionally needs to consider the fixed point solutions $\\dot{N}_{1,2}=0$ for the inversions. An implicit solution is calculated in \nAppendix~\\ref{ap:omegaA} for the system with delay time $\\tau$. 
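Since Eq.~\\eqref{eq:freqs} only defines $\\omega_A$ implicitly, the locked frequencies have to be found numerically. A minimal root-finding sketch for the symmetric branches, where $\\sin\\delta_A=0$, is given below (Python with SciPy; scipy.optimize.newton performs a secant iteration when no derivative is supplied, and the starting guesses are illustrative); the symmetry-broken frequencies are obtained in the same way from the implicit relation of Appendix~\\ref{ap:omegaA}.
\\begin{verbatim}
import numpy as np
from scipy.optimize import newton   # secant method if no derivative is passed

alpha, kappa, tau, Cp = 2.6, 0.3, 0.1, 0.33 * np.pi   # values used in Fig. fg:CLMs

def clm_condition(omega, sin2_delta=0.0):
    '''Implicit CLM condition for the locked frequency omega_A (cf. eq:freqs).'''
    lhs = omega**2 / (kappa**2 * (1.0 + alpha**2))
    rhs = np.sin(Cp + omega * tau + np.arctan(alpha))**2 - sin2_delta
    return lhs - rhs

# Different starting guesses pick up the different roots of the implicit equation
roots = {round(newton(clm_condition, w0), 6) for w0 in (-1.0, -0.5, 0.5, 1.0)}
print(sorted(roots))   # locked frequencies in units of the inverse photon lifetime
\\end{verbatim}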
A secant method is then used to obtain the numerical values appearing as dot-dashed vertical lines in Fig.~\\ref{fg:CLMs}.\n\n\\section{Two-Colour States}\\label{sec:2C}\n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig03}\n\\caption{\\label{fg:2CSB} Numerical solution of system~\\eqref{eq:mLK} for $\\tau=0.3, \\kappa=0.15, C_p=0.33 \\pi$, which is consistent with region 4 of Fig.~\\ref{fg:map}. The top left panel contains the magnitude of the electric fields, and the top right their inversions. The bottom panel shows the optical spectrum relative to the frequency of the free-running lasers. The dot-dashed vertical lines are the frequencies calculated from ansatz~\\eqref{eq:2C}.}\n\\end{figure}\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=1.0\\linewidth]{fig04}\n\\caption{\\label{fg:BS} Transition between in-phase (left) and anti-phase (right) CLMs for $\\tau=0.2$, $\\kappa=0.4$.\nThe in-phase ($\\omega_{in} = -74.19~GHz$) and anti-phase ($\\omega_{an} = +49.46~GHz$) one-colour frequencies are obtained from Eq.~\\eqref{eq:freqs}.}\n\\end{figure*}\n\nIn this section we introduce two-colour states, which are stable for small delay and moderate to low coupling strength between the lasers. We stress that the two-colour states are induced by the coupling between the lasers alone; the uncoupled lasers are single-mode only. Like the one-colour states, the two-colour states can either be symmetric or symmetry-broken with respect to the $\\field{Z}_2$ symmetry of exchanging the two lasers. \n\nAn example of a stable symmetry-broken two-colour state is shown in Fig.~\\ref{fg:2CSB}. Due to the $\\field{Z}_2$ symmetry of being able to exchange the two lasers, a twin two-colour state is also stable. In the optical spectrum in the bottom panel of Fig.~\\ref{fg:2CSB} we see that there are indeed only two dominating frequencies $\\omega_A$ and $\\omega_B$ and both lasers lase at these two frequencies, but with unequal intensities. In the time traces of the field amplitudes this gives rise to beating oscillations as shown in the upper left panel of Fig.~\\ref{fg:2CSB}. The corresponding inversions shown in the upper right panel of Fig.~\\ref{fg:2CSB} also reflect the symmetry-broken nature of this state and in addition show small oscillations at the beating frequency.\n\nAn example of symmetric two-colour states and their connection with symmetric CLMs is shown in Fig.~\\ref{fg:BS}. Starting from the in-phase CLM in the left hand panel of Fig.~\\ref{fg:BS} and increasing the coupling phase $C_p$, we observe a torus bifurcation where the frequency of the anti-phase CLM is turned on. Increasing $C_p$ further smoothly transfers power from the in-phase mode to the anti-phase mode until a second torus bifurcation kills the in-phase CLM's frequency and only the anti-phase CLM remains. Frequency spectrum snapshots as the parameter $C_p$ is changed are shown from left to right in Fig.~\\ref{fg:BS}. We note that symmetric two-colour states exist only over very small ranges of the coupling phase $C_p$; however, we will show in Sect.~\\ref{sec:map} that they are crucial for the overall understanding of the bifurcation structure in closely coupled single-mode lasers. The symmetric two-colour states can be interpreted as beating between symmetric CLMs. 
This is similar to the beating \nbetween delay-created external cavity modes for a single laser with a mirror \\cite{ERN00}.\n\nA useful ansatz~\\cite{ROG03} which approximates the dynamics of symmetric and symmetry-broken two-colour states is given by\n\\begin{equation}\n\\label{eq:2C}\n\\begin{aligned}\nE_1(t)=A_1 e^{i\\,\\omega_A\\,t}& + B_1 e^{i\\,\\omega_B\\,t}, &N_1(t)= N_1^c,\\\\\nE_2(t)=A_2 e^{i\\,\\omega_A\\,t}& e^{i\\,\\delta_A} + B_2 e^{i\\,\\omega_B\\,t} e^{i\\,\\delta_B}, &N_2(t)= N_2^c,\n\\end{aligned}\n\\end{equation}\nwith real constants $A_{1,2}$, $B_{1,2}$, $N_{1,2}^{c}$, $\\omega_{A,B}$ and $\\delta_{A,B}$. While this ansatz is a straightforward generalisation of the CLM ansatz \\eqref{eq:1C}, with a second frequency $\\omega_B$, we stress that in contrast to the CLM ansatz, equations~\\eqref{eq:2C} fulfil the original system~\\eqref{eq:mLK} only approximately. The presence of two frequencies gives rise to oscillations in the intensities of the electric fields of the form \n\\begin{equation}\\label{eq:l2norm}\n\\left\\vert E_{1}\\left(t\\right) \\right\\vert^2 = A_{1}^2 + B_{1}^2 + 2 A_{1} B_{1} \\cos\\left(\\left(\\omega_A -\\omega_B\\right) t\\right)\n\\end{equation}\nand similarly for $\\left|E_2\\right|^2$. According to \\eqref{eq:mLK} this then also leads to oscillations in the population inversions, as we have seen in the upper right panel of Fig.~\\ref{fg:2CSB}, and thereby contradicts the assumption of constant $N_{1,2}(t)= N_{1,2}^c$. However, as the parameter $T$ is large, ansatz \\eqref{eq:2C} is often well justified in practice, in particular if the beating frequency $\\omega_A - \\omega_B$ is also large. In the same way as for CLMs, the ansatz~\\eqref{eq:2C} can be split into symmetric ($A_1=A_2$, $B_1=B_2$) and symmetry-broken solutions. \n\nIn the case of symmetric two-colour states, the frequencies $\\omega_A$ and $\\omega_B$ correspond to the frequencies of in-phase and anti-phase CLMs and are obtained from \\eqref{eq:freqs}. For symmetry-broken states the analytical calculation of $\\omega_A$ and $\\omega_B$ is shown in Appendix~\\ref{ap:omegaA_omegaB}.\n\n\\section{Reduced Coordinate System}\\label{sec:coord}\n\nOur aim is to understand the bifurcation structure associated with the various one-colour states and two-colour states presented in the previous sections. As we are mostly interested in the closely coupled limit, we now formally set $\\tau=0$. System \\eqref{eq:mLK} then becomes a six dimensional system of ordinary differential equations. \n\nThe system still possesses the $S^1$ symmetry~\\eqref{eq:symm}, which can be exploited in order to reduce the number of dimensions of the system. One popular way of achieving this is by rewriting the system \\eqref{eq:mLK} with the dynamical variables $\\left(\\left| E_{1} \\right|,\\left| E_{2} \\right|,\\phi_D, \\phi_A, N_1, N_2 \\right)$ using $E_{1,2}=\\left| E_{1,2} \\right| e^{i\\left(\\phi_A\\mp \\phi_D\/2\\right)}$. Then the dynamical variable for the absolute phase $\\phi_A$ decouples from the rest of the system and we are left with a five dimensional system. \n\n\\begin{figure}\n\\includegraphics[width=0.6\\linewidth]{fig05}\n\\caption{\\label{fg:singularity} Demonstration of the discontinuity of $\\phi_D$ at the origin. 
A small change in the electric field $E_1$ of the first laser can lead to a large change in the polar coordinate $\\phi_D$ to $\\phi_D+\\pi$.}\n\\end{figure}\n\nAlthough widely used, this approach is problematic, because the dynamical variable for the phase difference between the electric fields $\\phi_D$ is not well defined if either $E_1$ or $E_2$ vanishes. As a consequence, $\\phi_D$ can jump discontinuously as one of the electric fields goes through the origin. This is schematically demonstrated in Fig.~\\ref{fg:singularity}. To see this mathematically, consider the differential equation for $\\phi_{D}$, which is given by\n\\begin{equation}\n\\begin{aligned}\n\\dot{\\phi}_{D} = &\\alpha \\left(N_2 - N_1\\right) \\\\\n& + \\kappa \\left[ \\frac{\\left|E_2\\right|}{ \\left|E_1\\right| } \\sin(C_p - \\phi_{D} ) - \\frac{\\left|E_1\\right| }{ \\left|E_2\\right| } \\sin(C_p + \\phi_{D} ) \\right] \\label{eq:phidot}\n \\end{aligned}\n\\end{equation}\nThe discontinuity in $\\phi_D$ manifests itself in the form of singularities at $\\left|E_{1\/2}\\right|=0$, which make it difficult to use numerical continuation software to explore the dynamical features of the system.\n\nIn order to avoid these singularities, we introduce a five dimensional coordinate system $(q_x,q_y,q_z,N_1,N_2)$, where the variables are defined via \n\\begin{subequations}\\label{eq:q}\n\\begin{eqnarray}\nq_x + i q_y &=& 2 E_1^* E_2 \\label{eq:qxy}\\\\\nq_z &=& \\vert E_1 \\vert^2 - \\vert E_2 \\vert^2 \n\\end{eqnarray}\n\\end{subequations}\nThe multiplication by 2 in \\eqref{eq:qxy} ensures that the Euclidean length of the q-vector equals the total intensity output of both lasers, ${R}=\\left( q_x^2 + q_y^2 + q_z^2 \\right)^\\frac{1}{2} = \\vert E_1 \\vert^2 + \\vert E_2 \\vert^2$. The coordinates $q_x, q_y, q_z$ are analogous to the usual Poincar{\\'e} sphere representation of polarised light, and by definition do not depend on the absolute phase $\\phi_A$. Therefore the new variables are invariant under the $S^1$ symmetry of the original system \\eqref{eq:symm}. The $\\field{Z}_2$ symmetry now operates as follows:\n\\begin{equation}\n\\label{eq:symm_new}\n \\left(q_x,q_y,q_z,N_1,N_2\\right) \\rightarrow \\left(q_x,-q_y,-q_z,N_2,N_1\\right) \n\\end{equation}\n\nRewriting the system \\eqref{eq:mLK} for $\\tau=0$ in terms of the new coordinates~\\eqref{eq:q} yields the five dimensional system\n\\begin{equation}\\label{eq:qdot}\n\\begin{aligned}\n\\dot{q_x} &= q_x \\left( N_1 + N_2 \\right) + \\alpha q_y \\left( N_1 - N_2 \\right) + 2 {\\kappa} {R} \\cos({C_p}) \\\\\n\\dot{q_y} &= q_y \\left( N_1 + N_2 \\right) - \\alpha q_x \\left( N_1 - N_2 \\right) - 2 {\\kappa} q_z \\sin({C_p}) \\\\\n\\dot{q_z} &= q_z \\left( N_1 + N_2 \\right) + {R} \\left( N_1 - N_2 \\right) + 2 {\\kappa} q_y \\sin(C_p) \\\\\nT \\dot{N_1} &= P - N_1 - \\left( 1 + 2N_1 \\right) \\, \\left({R + q_z}\\right)\/2 \\\\\nT \\dot{N_2} &= P - N_2 - \\left( 1 + 2N_2 \\right) \\, \\left({R - q_z}\\right)\/2.\n\\end{aligned}\n\\end{equation}\n\nAs intended, there are now no singularities in the dynamical variables, and the dynamical equations are invariant under the $\\field{Z}_2$ symmetry operation \\eqref{eq:symm_new}. It is this reduced system that will be used for bifurcation analysis in the next section.\n\n\\section{Bifurcation Diagram}\\label{sec:map}\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{fig06}\n\n\\caption{\\label{fg:map} Bifurcation diagram of system \\eqref{eq:qdot} in the $(C_p,\\kappa)$ parameter plane. 
The bifurcation lines separate parameter space into eight distinct dynamical regions. The codimension one bifurcation lines are organised by a set of codimension two points. The labelling used for the dynamical regions and for the codimension two points are explained in the left and right legends below the main graph.}\n\\end{figure}\n\n\nIn this section the bifurcation structure at $\\tau =0$ is explored using the 5 dimensional model \\eqref{eq:qdot}. In the reduced coordinate system \\eqref{eq:q}, one-colour states become fixed point solutions and the two-colour states become limit cycles. This reduction in complexity allows us use continuation software AUTO \\cite{DOE06} to obtain a comprehensive overview of the involved bifurcations. \n\nIn Fig.~\\ref{fg:map}, the bifurcation diagram with the two bifurcation parameters of coupling phase $C_p$ and coupling strength $\\kappa$ is presented. Due to the parameter symmetry\n\\begin{equation}\\label{eq:psymm}\n\\left( q_x, q_y, {C_p} \\right) \\rightarrow \\left( -q_x, -q_y, {C_p}+\\pi \\right) \n\\end{equation}\nit is sufficient to consider the parameter range of $C_p$ in the interval $[0,1\\pi)$ only. The (codimension one) bifurcation lines in the diagram separate the parameter space into eight distinct regions. Only bifurcations which affect stable dynamical states are plotted. In region 1 and 2 in-phase one-colour states and anti-phase one-colour states are stable respectively. For the new coordinates \\eqref{eq:q}, symmetric one colour states are confined to the $q_x$ axis ($q_y=0; q_z=0$), with $q_x>0$ in the in-phase case. \n\nAt very high coupling strengths ($\\kappa>PH1$), the in-phase region 1 and the anti-phase region 2 are separated by two supercritical Hopf bifurcations in close proximity as shown in the lower right panel of Fig.~\\ref{fg:cuts}. Between these two Hopf bifurcations a stable limit cycle of low amplitude and high frequency exists which corresponds to the symmetric two-colour dynamics seen in Fig.~\\ref{fg:BS} of Sect.~\\ref{sec:2C}, where the torus bifurcations of the original system \\eqref{eq:mLK} have become the Hopf bifurcations in the reduced system. In Fig.~\\ref{fg:map} these two vertical Hopf bifurcation lines appear as a single line at the scale of the diagram with the tiny region 3 of symmetric two-colour states nestled between them. In addition symmetry-broken one-colour states ($q_y\\neq 0; q_z \\neq 0$) are stable in region 6 which is accessible from the symmetric one-colour states via a supercritical bifurcation at the top of the blue pitchfork line between the SNP and PH1 points. \n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth]{fig07}\n\\caption{\\label{fg:cuts} Several cuts of constant $\\kappa$ as indicated across Fig.~\\ref{fg:map} are shown. The top panels and lower-left panel show bifurcation diagrams of maximum $N_{1,2}$ versus $C_p$. Solid lines indicate stable states and dashed lines unstable states. Symmetric in-phase and anti-phase one-colour states are red and blue respectively. Symmetry-broken one-colour states appear as purple lines whilst symmetry-broken limit cycles are orange. Green lines indicate symmetric limit cycles. Hopf, pitchfork, pitchfork of limit cycles, saddle-node, saddle-node of limit cycles are denoted by H, P, PLC, SN, SNLC. Point B contains several bifurcations, see main text. The lower-left panel shows a 3-dimensional graph in q-vector space~\\eqref{eq:q} showing symmetric limit cycles (green) between in-phase (red) and anti-phase (blue) one-colour states. 
 The orange ring marks the pitchfork of limit cycles. The $C_p$ range is the same as in the right-most inset of Fig.~\\ref{fg:cut}. Symmetry-broken limit cycles are not drawn.}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\includegraphics[width=0.9\\linewidth]{fig08}\n\\caption{\\label{fg:blowup} Sketch of the bifurcation structure in the vicinity of the pitchfork-Hopf codimension two point, labelled PH1 in Fig.~\\ref{fg:map}. Subscripts ${in}$, ${an}$ and ${sb}$ refer to bifurcations acting on in-phase, anti-phase and symmetry-broken states, respectively. The solid bifurcation lines affect stable states, while the dashed lines only affect unstable states. }\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth]{fig09}\n\\caption{\\label{fg:cut} An inversion cut at $\\kappa=0.1$ across the entire $1\\pi$ range of $C_p$. Colours and labels are as in Fig.~\\ref{fg:cuts}. T represents a torus bifurcation. Insets are blow-ups of the regions indicated.}\n\\end{figure}\n\n\nA sketch of the situation in the vicinity of the PH1 point is provided in the left part of Fig.~\\ref{fg:blowup}, where one of these supercritical Hopf bifurcations and the supercritical pitchfork bifurcation intersect. The second supercritical Hopf bifurcation, although nearby, does not play a role. The PH1 point is a pitchfork Hopf codimension two bifurcation, and analytical expressions for its location are obtained in Appendix~\\ref{ap:location-ph1-point}. In order to classify this bifurcation, we note that the pitchfork Hopf and the Hopf-Hopf bifurcation share the same reduced normal form. PH1 corresponds to a Hopf-Hopf ``simple'' case III of~\\cite{KUZ04} or equivalently case II\/III of~\\cite{GUC83}. Figure~\\ref{fg:blowup} illustrates that this pitchfork Hopf point is responsible for the symmetry-broken two-colour dynamics, which originates between a pitchfork of limit cycles (PLC) and a supercritical Hopf bifurcation ($2H_{sb}$). Symmetry-broken two-colour states are stable in region 4 of Fig.~\\ref{fg:map}. \n\nThe point labelled SNP is a saddle-node pitchfork codimension two bifurcation. It shares the same reduced normal form as a generalised (Bautin) Hopf bifurcation without rotation. At this point the supercritical pitchfork bifurcation becomes subcritical, thus spawning the tri-stable region 7, where both symmetry-broken and symmetric one-colour states are stable. An integration with noise of the original system~\\eqref{eq:mLK} showing the three stable one-colour states was presented in Fig.~\\ref{fg:CLMs} of section~\\ref{sec:CLM}. \n\nThe third corner of symmetry-broken one-colour states is closed by a saddle-node Hopf (also known as fold Hopf) codimension two bifurcation labelled SNH in Fig.~\\ref{fg:map}. At this point a saddle-node and a Hopf bifurcation meet tangentially, and a torus bifurcation line also emerges from this point. The unfolding of saddle-node Hopf bifurcations has been studied extensively in the literature, and we identify the SNH point with the third case in~\\cite{KUZ04}\\footnote{\\cite{KUZ04} does not enumerate the cases. Here we refer to the case of Figure 8.16 in the 3rd edition of the book.}. These three codimension two bifurcations organise the multi-stabilities.\n\nNext we provide a series of cuts of constant coupling strength $\\kappa$ across the diagram (Fig.~\\ref{fg:map}) to further explain the dynamics in each of the regions. In Fig.~\\ref{fg:cut}, a plot at $\\kappa=0.1$ of the inversions $N_{1,2}$ across the entire $1\\pi$ span of the coupling phase $C_p$ is presented. 
 This plot illustrates that symmetry-broken two-colour states are stable over a large range of $C_p$. The bottom inset highlights region 5, which is relatively small at $\\kappa=0.1$. In this region a symmetric one-colour state is stable in addition to the symmetry-broken two-colour states. In the right-most inset a blow-up of region 3 is provided. The limit cycle born after a supercritical Hopf bifurcation is still invariant under the $\\field{Z}_2$ symmetry \\eqref{eq:symm_new} and therefore obeys \n\\begin{equation}\n\\label{eq:symm_lc}\n \\left(q_x,q_y,q_z,N_1,N_2\\right)\\left(t\\right) = \\left(q_x,-q_y,-q_z,N_2,N_1\\right)\\left(t+\\frac{\\tau_l}{2}\\right),\n\\end{equation}\nwhere $\\tau_l$ is the period of the limit cycle. In a projection to the $(q_x,q_y,q_z)$ coordinates, the limit cycles therefore form rings around the $q_x$ axis, as shown in the lower left panel of Fig.~\\ref{fg:cuts}. These limit cycles correspond to the symmetric two-colour states. They lose stability at a pitchfork of limit cycles bifurcation and create the large regions 4 and 5 of symmetry-broken two-colour states. In the bifurcation diagram (Fig.~\\ref{fg:map}) the GH point is a codimension two generalised Hopf (Bautin) bifurcation. As shown in the top right panel of Fig.~\\ref{fg:cuts}, the scenario remains very similar for $\\kappa$ below this point. However, here the supercritical Hopf bifurcation becomes subcritical and a saddle-node of limit cycles bifurcation occurs. Finally, in the top left panel of Fig.~\\ref{fg:cuts}, we look at an inversion cut at much higher $\\kappa$, just beneath the SNP \npoint. Here the supercritical Hopf and supercritical pitchfork of limit cycles bifurcations occur in close proximity at the point B. In contrast to Fig.~\\ref{fg:cut}, symmetry-broken one-colour states are stable, limited on the left by a saddle-node bifurcation and on the right by a supercritical Hopf bifurcation. Region 7 of Fig.~\\ref{fg:map} corresponds to the small area of hysteresis between these symmetry-broken one-colour and symmetric one-colour states. \n\nThe remaining two codimension two points plotted in Fig.~\\ref{fg:map} are a period-doubling torus (PDT) and a second pitchfork Hopf (PH2) bifurcation. PH2 gives rise to a tiny triangular region of bi-stability between symmetric one-colour states, which occurs below the green horizontal Hopf line, as shown in~\\cite{YAN04}. We identify this point as being equivalent to the Hopf-Hopf ``difficult'' case VI of~\\cite{KUZ04}, which is the same as case VIa of~\\cite{GUC83}. This completes the bifurcation picture for moderate to high coupling strength. We note that different dynamics occurs for lower coupling strengths, in particular in region 8 of Fig.~\\ref{fg:map}, which is outside the scope of the current paper.\n\n\\section{All-Optical Switching}\\label{sec:opticalswitch}\n\nThe realisation of stable, fast and scalable all-optical memory elements would significantly increase the scope of photonic devices and has therefore attracted considerable interest in the laser community. Examples of optical memory designs include hetero-structure photonic crystal lasers~\\cite{CHE11}, coupled micro-ring lasers~\\cite{HIL04} and dual mode semiconductor lasers with delayed feedback~\\cite{BRA12}. 
 In this section, we propose a conceptually {\\em simple} all-optical memory element based on two closely coupled single-mode lasers.\n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig10}\n\\caption{\\label{fg:scheme2} Sketch showing the two laser coupling schemes considered as optical switches.}\n\\end{figure}\n\nMemory units require at least two stable states and a mechanism to switch between them. In the previous sections, a number of different parameter regions displaying multi-stabilities were discovered for closely coupled lasers. In particular, in section~\\ref{sec:2C}, corresponding to region 4 of Fig.~\\ref{fg:map}, we established the presence of symmetry-broken two-colour states. Switching between them is achieved via optical injection from two master lasers, as shown schematically in Fig.~\\ref{fg:scheme2}(a). Initially the lasers are in a symmetry-broken state with an optical spectrum displayed in the top left panel of Fig.~\\ref{fg:switch_2c}. The oscillations in the magnitude of the electric fields, shown in the graph beneath, are due to the beatings between the two optical frequencies (Sec.~\\ref{sec:2C}). Between 5~ns and 6~ns an optical pulse from a master laser is injected into laser 1, which causes laser 1 to lock to the external frequency and its carrier density to drop significantly. This is shown in \nthe top panel of Fig.~\\ref{fg:os1speed_N}, where we denote the point ``A'' at approximately $50$~ps, when the carrier density of laser 1 is pushed beneath that of laser 2. After the pulse is switched off at 6~ns, a transient of about 3~ns is visible in the bottom panel of Fig.~\\ref{fg:switch_2c} for the coupled lasers to completely settle to the twin symmetry-broken two-colour state, which corresponds to the two lasers having been exchanged (top centre panel of Fig.~\\ref{fg:switch_2c}). The contrast ratio between the two stable states is relatively large. For example, the intensity of the colour with higher frequency changes by a factor of more than four. \n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig11}\n\\caption{\\label{fg:switch_2c} Optical switching between symmetry-broken two-colour states for a two-laser configuration as outlined in Fig.~\\ref{fg:scheme2}(a) for parameters $\\tau=0.2$, $\\kappa=0.1$, $C_p=0.35\\pi$. The large bottom panel shows the magnitude of the electric field in both lasers. Laser 1 is shown in red and laser 2 is mirrored underneath in blue. The central black line dividing the two shows the injection strength and duration of the master lasers. At 5~ns a pulse of 1~ns is injected from a master laser into laser 1. At 13~ns a pulse of 50~ps from a master laser is injected into laser 2. The top three panels show the corresponding frequency states. The black arrow indicates the frequency of the injected light, which is 19.5~GHz larger than the lasers' free-running frequency.}\n\\end{figure}\n\nOne important aspect from an application point of view is the ability to switch between two states with a very short external optical pulse, and we therefore define the \\emph{write time} as the minimum pulse duration to ensure switching. In order to demonstrate that a shorter pulse is sufficient to trigger the switch, we inject at 13~ns a pulse of 50~ps duration into laser 2. In the lower panel of Fig.~\\ref{fg:os1speed_N}, we see that $N_2$ is indeed pushed below $N_1$ during the pulse. 
After the pulse is switched off, both $N_1$ and $N_2$ oscillate strongly but $N_2$ remains consistently below $N_1$. Therefore the write time is determined by the minimum time needed to reduce the carrier density of one laser below that of the other laser which for our setup is approximately 50~ps. We have thus demonstrated a basic mechanism for an all-optical memory element with a large contrast ratio, a short write time and a low coupling strength between the lasers. \n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig12}\n\\caption{\\label{fg:os1speed_N} Diagrams showing the dynamics of the carrier densities for both lasers during the first switching event (top panel) and second switching event (bottom panel) of Fig.~\\ref{fg:switch_2c}. Black vertical lines mark the time when the external injection is turned on and then off. Point labelled ``A'' indicates where the carrier density of laser~1 becomes less than laser~2.}\n\\end{figure}\n\nFor certain applications it may be desirable to inject into only one of the two lasers as in Fig.~\\ref{fg:scheme2}(b). To achieve this, we choose parameters consistent with region 5 of Fig.~\\ref{fg:map}. In this region, in addition to the two-colour symmetry-broken states of the previous paragraph, a one-colour symmetric state is also stable. In Fig.~\\ref{fg:switch_1c2c}, these states form the basis for the optical memory unit. Initially the two lasers start in a degenerate state with the same amplitude and the same single frequency in both lasers. Using a pulse with a positive detuning relative to the central frequency ($+83$~GHz in the case of Fig.~\\ref{fg:switch_1c2c}) the symmetric state of the two lasers symmetry breaks to a two-colour state. As the number, position and intensities of frequencies change, the two states can be easily distinguished which is desirable from an application point of view. \n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig13}\n\\caption{\\label{fg:switch_1c2c} Optical switching between a symmetric one-colour state and a symmetry-broken two-colour state as sketched in Fig.~\\ref{fg:scheme2}(b) for parameters $\\tau=0.2$, $\\kappa=0.2$, $C_p=0.25\\pi$. Panel layout as in Fig.~\\ref{fg:switch_2c}. At 5~ns a pulse of 1~ns with frequency offset of +83~GHz is injected from the master laser into laser 1. At 20~ns a pulse of 1~ns with frequency offset of -32~GHz is injected from the master laser into laser 1.}\n\\end{figure}\n\nTo discuss how quickly the two states can be ascertained, we introduce the \\emph{read time} which is the minimum duration needed to differentiate optically between the two states after the injection has turned off. In Fig.~\\ref{fg:os1speed_N} for the first optical switch after the external injection was removed at 6~ns and 13.05~ns, large amplitude oscillations in the carrier densities at a frequency consistent with the relaxation oscillations for the coupled system (5.7~GHz) ensue. These oscillations are directly related to the length of transient observed in the optical fields for the system to completely settle to the twin symmetry-broken two-colour state. We stress that one does not need to wait for all relaxation oscillations in the system to die out before reading. 
Small-amplitude, high-frequency oscillations (40~GHz), which are due to the beatings between the two optical colours (see Sec.~\\ref{sec:2C}), are observable before each switching event and are discernible in Fig.~\\ref{fg:os1speed_N} as small \ndeformities in the larger-amplitude oscillations within $1$~ns of the external injection being turned off. In Fig.~\\ref{fg:os2_freq}, two Fourier modes consistent with the peak frequencies of $-32$~GHz and $-83$~GHz are traced out for each switching event of the second optical switch. All other frequencies are filtered out. During the first injection episode from $5$~ns to $6$~ns, the frequency centred at $-83$~GHz is turned off within $400$~ps. We also observe that after injection, the frequency at $-32$~GHz has reached a stable intensity within 100~ps. To switch back to the symmetric one-colour state, a pulse with negative detuning ($-32$~GHz in the case of Fig.~\\ref{fg:switch_1c2c}) is injected into laser 1. In the lower panel of Fig.~\\ref{fg:os2_freq} the frequency component at $-83$~GHz relaxes to its equilibrium value within 1.5~ns. The frequency at $-32$~GHz is completely off within $100$~ps. This constitutes a robust memory element where we inject into one laser only. An external injection pulse with \npositive detuning causes the coupled lasers to enter a symmetry-broken state, whilst negative detuning causes the coupled lasers to enter a symmetric state. Compared to the switching scenario in the previous paragraph (Fig.~\\ref{fg:switch_2c}), the two lasers are more strongly coupled, but the contrast ratio between the two states is greater.\n\n\\begin{figure}\n\\includegraphics[width=1.0\\linewidth,type=pdf,ext=.pdf,read=.pdf]{fig14}\n\\caption{\\label{fg:os2_freq} Intensity plot tracing frequencies $-32$~GHz and $-83$~GHz for the uninjected laser 2 during the first switching event (top panel) and second switching event (bottom panel) of Fig.~\\ref{fg:switch_1c2c}. A Fourier transform using a Hann window is executed every $100$~ps.}\n\\end{figure}\n\nIn this section we have shown that two closely coupled identical lasers can operate as an optical memory element and have provided two specific examples. The high degree of multi-stability discovered in the previous sections enables many other designs. As the multi-stabilities persist in the limit of $\\tau\\rightarrow0$, the distance between the lasers can be reduced as far as is technologically possible. Memory units of this kind are therefore open to miniaturisation and allow integration. For the optical switch of Fig.~\\ref{fg:switch_2c}, a fast write time of $50$~ps was demonstrated in the second switching event and was discussed with reference to the carrier density in Fig.~\\ref{fg:os1speed_N}. A significant speed increase on this number may be possible by increasing the injection strength of the external pulse or by fine tuning the coupled lasers' parameters. For the optical switch of Fig.~\\ref{fg:switch_1c2c}, we showed via Fig.~\\ref{fg:os2_freq} that it is possible to distinguish optically between \nthe states within a read time of $100$~ps after the external injection was turned off. Again, significant improvements on this number may be possible. Indeed, the larger the frequency separation between the states, the shorter the time needed to differentiate between them. Therefore, choosing parameters consistent with larger frequency separations, which normally occur at higher coupling strengths, may substantially decrease the read time. 
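\n\nTo make the read-out analysis concrete, the following minimal sketch (Python with NumPy) illustrates the type of windowed Fourier trace used for Fig.~\\ref{fg:os2_freq}: a Hann window is evaluated every $100$~ps and the intensity of a single spectral component is extracted. The synthetic field, the sampling step and the window length below are placeholder assumptions for illustration only and are not obtained from the coupled-laser model of this paper.\n\\begin{verbatim}\nimport numpy as np\n\n# Hedged sketch: windowed Fourier trace of two spectral components, in the\n# spirit of the Hann-window analysis described for the intensity traces\n# above.  The field E2(t) is a synthetic placeholder (two beating tones),\n# NOT a solution of the coupled-laser model of this paper.\ndt = 1e-12                      # sampling step: 1 ps (assumed)\nt = np.arange(0.0, 30e-9, dt)   # 30 ns of data\nE2 = (1.0 * np.exp(-2j * np.pi * (-32e9) * t)\n      + 0.5 * np.exp(-2j * np.pi * (-83e9) * t))\n\nwindow_len = 1e-9               # 1 ns Hann window (assumed)\nstep = 100e-12                  # evaluate every 100 ps\nn_win = int(window_len / dt)\nhann = np.hanning(n_win)\n\ndef windowed_intensity(field, freq):\n    # Intensity of one Fourier component versus time.\n    kernel = hann * np.exp(2j * np.pi * freq * t[:n_win])\n    starts = np.arange(0, len(field) - n_win, int(step / dt))\n    return np.array([abs(np.sum(field[s:s + n_win] * kernel)) ** 2\n                     for s in starts])\n\ntrace_32 = windowed_intensity(E2, -32e9)\ntrace_83 = windowed_intensity(E2, -83e9)\n\\end{verbatim}\nIn the figure such traces are shown for the uninjected laser; here they merely demonstrate the windowing step itself.\n\n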
We conclude that closely coupled lasers offer a promising approach for the realisation of scalable and fast all-optical memory elements.\n\n\n\n\n\n\\section{Conclusion}\n\nIn this paper, a comprehensive overview of the bifurcation scenarios in a system of two closely coupled single-mode lasers is provided. For moderate to high coupling strength, the four characteristic stable states are symmetric one-colour, symmetry-broken one-colour, symmetric two-colour and symmetry-broken two-colour states. We introduce a new coordinate representation which accounts for the $S^1$ symmetry in the system without creating unnecessary singularities. This allows us to study the bifurcation structure of this system using conventional numerical continuation techniques. \n\nOur results show that the bifurcations between the various stable states are organised by a number of codimension two bifurcation points, which are identified with reference to the literature. In particular, it is found that the interplay between pitchfork Hopf, saddle-node Hopf and saddle-node pitchfork codimension two points gives rise to regions of multi-stabilities. Detailed knowledge of the bifurcations of closely coupled lasers and of their boundaries may open up several technological applications. We propose two all-optical switch candidates. \n\n\\begin{acknowledgments}\nThis work was supported by Science Foundation Ireland under grant number 09\/SIRG\/I1615. We thank P. Harnedy and E. P. O'Reilly for supporting discussions. \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn a previous study \\cite{dashti2018implied}, we introduced the ratio of realized variance to implied variance, represented by squared volatility indices VIX or VXO, as a measure of their correlations. We pointed out that the realized variance is calculated for trading dates while the implied variance covers every day, so one of them needs to be rescaled for a proper comparison. We argued that studying the distribution of the ratios produces a deeper insight into these correlations than a simple regression analysis \\cite{christensen1998relation}, which arrives at the obvious conclusion that VIX\/VXO are a slightly better predictor of the future realized volatility (RV) than the past RV, since they build on the latter with the benefit of additional information. \n\nIn \\cite{dashti2018implied} we concluded that the ratio of the actual realized variance $RV^2$ to $VIX^2$ and $VXO^2$, that is, to its predicted values, was best described by the fat-tailed inverse Gamma (IGa) distribution and its inverse by the Gamma (Ga) distribution. We speculated that this is due to unanticipated spikes of realized volatility. In this paper we show that a Beta Prime (BP) distribution provides a better fit both for the ratio and its inverse. For the former, the exponent of the power dependence for small values of the ratio is very large, which mimics the exponential behavior of IGa. For the latter, the exponent of the power-law tails is very large, which mimics the exponential decay of Ga.\n\nWe also concluded in \\cite{dashti2018implied} that the ratio of $RV^2$ of the preceding month to $VIX^2$ and $VXO^2$ was best described by the lognormal (LN) distribution. Its inverse was also best described by the LN with similar parameters. We argued that while the spikes in the past realized volatility lead to spikes of implied volatility, there was enough uncertainty for the ratio to have heavy tails. 
 In this paper we show that BP, with similar parameters for the ratio and its inverse, provides a better fit for both. The exponents of the power law for both small values and the fat tails are large, so that BP mimics the LN behavior. Additionally, for both BP and LN the distribution of the inverse variable is BP and LN respectively as well. \n\nAs a reminder to the reader, the PDF of the BP distribution is given by\n\\begin{equation}\nBP(p, q, \\beta; x) = \\frac{(1+\\frac{x}{\\beta})^{-p-q}(\\frac{x}{\\beta})^{p-1}}{\\beta \\space B[p,q]}\n\\label{BetaPrimePDF}\n\\end{equation}\nwhere $\\beta$ is a scale parameter, $p$ and $q$ are shape parameters and $B[p,q]$ is the Beta function; $BP \\propto x^{p-1}$ for $x \\ll \\beta$ and $BP \\propto x^{-q-1}$ for $x \\gg \\beta$.\n\nThis paper is organized as follows. In Section \\ref{EO} we summarize empirical observations regarding the distributions of $RV^2$ and of the volatility indices $VIX^2$ and $VXO^2$, and their ratios. In Section \\ref{RV} we show the results of statistical fits of the ratios. In Section \\ref{Correlations} we summarize correlations between the quantities and their ratios.\n\n\\section{Empirical Observations \\label{EO}}\n\nIn \\cite{dashti2018implied} we presented empirical distributions (PDF) of $RV^2$ vis-a-vis $VIX^2$ and $VXO^2$, as well as the ratios, and in \\cite{dashti2018realized} we describe their fits. We observe the following features, in agreement with \\cite{russon2017nonlinear} (see also \\cite{vodenska2013understanding,kownatzki2016howgood}):\n\n\\begin{itemize}\n \\item $VIX^2$ and $VXO^2$ have lower high-volatility probabilities relative to $RV^2$, including shorter fat tails, indicating that volatility indices do not accurately predict large values of RV, including the largest volatility spikes. In other words, volatility indices underestimate future large RV. \n \\item $VIX^2$ and $VXO^2$ have higher mid-volatility probabilities relative to $RV^2$, indicating that volatility indices overestimate future mid-level RV.\n \\item $VIX^2$ and $VXO^2$ have lower low-volatility probabilities relative to $RV^2$, indicating that volatility indices underestimate future low RV.\n\\end{itemize}\n\nFor the distributions of the ratios $RV^2\/VIX^2$, $RV^2\/VXO^2$, it is important to notice that, since realized and implied volatilities are correlated, we cannot construct them simply as the quotient distributions of two independent variables. We observe the following:\n\n\\subsection{Predicted Month}\n\nFor the month predicted by the volatility indices:\n\n\\begin{itemize}\n \\item The distributions have fat tails, indicating again that VIX and VXO underestimate future values of RV, in particular volatility spikes.\n \\item Very small ratios are suppressed, as manifested by a very large power exponent, indicating that it is rare that RV is considerably smaller than the one predicted by the volatility indices. \n \\item The tail exponents of the ratio distributions are larger than those of either $RV^2$ or $VIX^2$ and $VXO^2$, indicating that for the $RV^2$ values taken from the tails, the values of $VIX^2$ and $VXO^2$ are also more likely to come from the tails. 
\n\\end{itemize}\n\n\\subsection{Preceding Month}\n\nFor RV of the preceding month:\n\n\\begin{itemize}\n \\item The tails of the distributions are much shorter than those for the predicted month, reflecting the fact that volatility indices account for past RV.\n \\item The tail exponents of the distributions are almost identical to those of their inverse, $VIX^2\/RV^2$ and $VXO^2\/RV^2$ distributions, indicating, as above, strong correlations. \n\\end{itemize}\n\nFor the ratio distribution of $RV^2$ of the predicted (next) month to $RV^2$ of the preceding month (see below), we observe the following:\n\n\\begin{itemize}\n \\item The exponent of the fat tail is smaller than those of the $RV^2\/VIX^2$ and $RV^2\/VXO^2$ distributions, that is, the tails are longer. \n \\item The power-law exponent at very small ratios is much smaller for this distribution than for $RV^2\/VIX^2$ and $RV^2\/VXO^2$, that is, those ratios are far less suppressed. \n\\end{itemize}\n\nBy both measures, VIX and VXO are better predictors of the future RV than the past RV. \n\n\\section{Statistical Fits of Ratio Distributions \\label{RV}} \n\nBelow, figures show the plots of the ratios and their distribution fits, and tables contain the parameters of the distributions and the KS statistics. The two novel elements here, relative to \\cite{dashti2018implied}, are the inclusion of BP in the fits of $RV^2\/VIX^2$ and $RV^2\/VXO^2$ distributions and the fits of the ratio distribution of $RV^2$ of the predicted (next) month to $RV^2$ of the preceding month (and its inverse).\n\n\\clearpage\n\\subsection{Predicted Month}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/RVOverVIXList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramRVOverVIX19902016}\n\\end{tabular}\n\\caption{$\\mathrm{RV}^2 \/ \\mathrm{VIX}^2$, from Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{RVOverVIXListSRV2OverVIX21990}\n\\end{figure}\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/VIXOverRVList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramVIXOverRV19902016}\n\\end{tabular}\n\\caption{$ \\mathrm{VIX}^2 \/ \\mathrm{RV}^2$, from Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{VIXOverRVListSVIX2OverRV21990nn}\n\\end{figure}\n\n\\begin{table}[!htb]\n\\caption{MLE results for ``$\\mathrm{RV}^2 \/ \\mathrm{VIX}^2$\" and ``$\\mathrm{VIX}^2 \/ \\mathrm{RV}^2$\"}\n\\label{MLESRV2OverVIX21990nn}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.9067) & 0.1940 \\\\\n\\hline\nLogNormal & LN( -0.2027, 0.5867) & 0.0446 \\\\\n\\hline\nIGa & IGa( 3.3595, 2.3466) & 0.0246 \\\\\n\\hline\nGamma & Gamma( 2.6219, 0.3814) & 0.0978 \\\\\n\\hline\nWeibull & Weibul( 1.1124, 1.4009) & 0.1224 \\\\\n\\hline\nIG & IG( 1.0000, 2.3168) & 0.0607 \\\\\n\\hline\nBP & BP( 27.2279, 3.8055, 0.1014) & 0.0198 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.5626) & 0.0972 \\\\\n\\hline\nLogNormal & LN( -0.1562, 0.5867) & 0.0446 \\\\\n\\hline\nIGa & IGa( 2.6219, 1.8314) & 0.0978 \\\\\n\\hline\nGamma & Gamma( 3.3595, 0.2977) & 0.0246 \\\\\n\\hline\nWeibull & Weibul( 1.1306, 1.8882) & 0.0500 \\\\\n\\hline\nIG & IG( 1.0000, 2.3168) & 0.0734 
\\\\\n\\hline\nBP & BP( 3.8055, 27.2279, 6.8913) & 0.0198 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n \\end{minipage}\n\n\\end{table}\n\n\n\n\\clearpage\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/RVOverVXOList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramRVOverVXO19902016}\n\\end{tabular}\n\\caption{$\\mathrm{RV}^2 \/ \\mathrm{VXO}^2$, Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{RVOverVXOListSRV2OverVXO22016}\n\\end{figure}\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/VXOOverRVList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramVXOOverRV19902016}\n\\end{tabular}\n\\caption{$ \\mathrm{VXO}^2 \/ \\mathrm{RV}^2$, from Jan 2nd, 1990 to Dec 30th, 2016}\n\\label{VXOOverRVListSVXO2OverRV22016n}\n\\end{figure}\n\n\\begin{table}[!htb]\n\\caption{MLE results for ``$\\mathrm{RV}^2 \/ \\mathrm{VXO}^2$\" and ``$\\mathrm{VXO}^2 \/ \\mathrm{RV}^2$\"}\n\\label{MLESRV2OverVXO22016n}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.8747) & 0.1910 \\\\\n\\hline\nLogNormal & LN( -0.1973, 0.5795) & 0.0449 \\\\\n\\hline\nIGa & IGa( 3.4629, 2.4438) & 0.0224 \\\\\n\\hline\nGamma & Gamma( 2.6897, 0.3718) & 0.0971 \\\\\n\\hline\nWeibull & Weibul( 1.1150, 1.4256) & 0.1230 \\\\\n\\hline\nIG & IG( 1.0000, 2.3981) & 0.0611 \\\\\n\\hline\nBP & BP( 47.6001, 3.7157, 0.0563) & 0.0177 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.5467) & 0.0925 \\\\\n\\hline\nLogNormal & LN( -0.1513, 0.5795) & 0.0449 \\\\\n\\hline\nIGa & IGa( 2.6897, 1.8982) & 0.0971 \\\\\n\\hline\nGamma & Gamma( 3.4629, 0.2888) & 0.0224 \\\\\n\\hline\nWeibull & Weibul( 1.1308, 1.9374) & 0.0499 \\\\\n\\hline\nIG & IG( 1.0000, 2.3981) & 0.0729 \\\\\n\\hline\nBP & BP( 3.7157, 47.6002, 12.5409) & 0.0177 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n \\end{minipage}\n\n\\end{table}\n\n\\clearpage\n\\subsection{Preceding Month}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/RVOverVIXList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramRVOverVIX19902016}\n\\end{tabular}\n\\caption{$\\mathrm{RV}^2 \/ \\mathrm{VIX}^2$, from Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{RVOverVIXListSRV2OverVIX21990}\n\\end{figure}\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/VIXOverRVList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramVIXOverRV19902016}\n\\end{tabular}\n\\caption{$ \\mathrm{VIX}^2 \/ \\mathrm{RV}^2$, from Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{VIXOverRVListSVIX2OverRV21990nn}\n\\end{figure}\n\n\\begin{table}[!htb]\n\\caption{MLE results for ``$\\mathrm{RV}^2 \/ \\mathrm{VIX}^2$\" and ``$\\mathrm{VIX}^2 \/ \\mathrm{RV}^2$\"}\n\\label{MLESRV2OverVIX21990nn}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.4974) & 0.0992 \\\\\n\\hline\nLogNormal & LN( -0.1099, 0.4689) & 0.0147 \\\\\n\\hline\nIGa & IGa( 4.6889, 3.7619) & 0.0431 \\\\\n\\hline\nGamma & Gamma( 4.7110, 
0.2123) & 0.0381 \\\\\n\\hline\nWeibull & Weibul( 1.1325, 2.1250) & 0.0672 \\\\\n\\hline\nIG & IG( 1.0000, 4.0580) & 0.0215 \\\\\n\\hline\nBP & BP( 9.2230, 9.9855, 0.9742) & 0.0117 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.4999) & 0.1059 \\\\\n\\hline\nLogNormal & LN( -0.1104, 0.4689) & 0.0147 \\\\\n\\hline\nIGa & IGa( 4.7110, 3.7796) & 0.0381 \\\\\n\\hline\nGamma & Gamma( 4.6889, 0.2133) & 0.0431 \\\\\n\\hline\nWeibull & Weibul( 1.1329, 2.1186) & 0.0751 \\\\\n\\hline\nIG & IG( 1.0000, 4.0580) & 0.0163 \\\\\n\\hline\nBP & BP( 9.9855, 9.2230, 0.8236) & 0.0117 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n \\end{minipage}\n\n\\end{table}\n\n\n\n\\clearpage\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/RVOverVXOList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramRVOverVXO19902016}\n\\end{tabular}\n\\caption{$\\mathrm{RV}^2 \/ \\mathrm{VXO}^2$, Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{RVOverVXOListSRV2OverVXO22016}\n\\end{figure}\n\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/VXOOverRVList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramVXOOverRV19902016}\n\\end{tabular}\n\\caption{$ \\mathrm{VXO}^2 \/ \\mathrm{RV}^2$, from Jan 2nd, 1990 to Dec 30th, 2016}\n\\label{VXOOverRVListSVXO2OverRV22016n}\n\\end{figure}\n\n\\begin{table}[!htb]\n\\caption{MLE results for ``$\\mathrm{RV}^2 \/ \\mathrm{VXO}^2$\" and ``$\\mathrm{VXO}^2 \/ \\mathrm{RV}^2$\"}\n\\label{MLESRV2OverVXO22016n}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.4915) & 0.1064 \\\\\n\\hline\nLogNormal & LN( -0.1041, 0.4539) & 0.0150 \\\\\n\\hline\nIGa & IGa( 5.0351, 4.0948) & 0.0331 \\\\\n\\hline\nGamma & Gamma( 4.9618, 0.2015) & 0.0454 \\\\\n\\hline\nWeibull & Weibul( 1.1316, 2.1383) & 0.0730 \\\\\n\\hline\nIG & IG( 1.0000, 4.3548) & 0.0203 \\\\\n\\hline\nBP & BP( 11.1694, 9.4027, 0.7520) & 0.0133 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.0000, 0.4768) & 0.0933 \\\\\n\\hline\nLogNormal & LN( -0.1026, 0.4539) & 0.0150 \\\\\n\\hline\nIGa & IGa( 4.9618, 4.0352) & 0.0454 \\\\\n\\hline\nGamma & Gamma( 5.0351, 0.1986) & 0.0331 \\\\\n\\hline\nWeibull & Weibul( 1.1319, 2.2099) & 0.0689 \\\\\n\\hline\nIG & IG( 1.0000, 4.3548) & 0.0212 \\\\\n\\hline\nBP & BP( 9.4027, 11.1694, 1.0814) & 0.0133 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n \\end{minipage}\n\n\\end{table}\n\n\\clearpage\n\\subsection{Ratio of Realized Variances of Two Adjacent Months}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 \\textwidth]{.\/RVOverRVList19902016next}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramRVOverRV19902016next}\n\\end{tabular}\n\\caption{Ratio of next-month realized variance to that of the preceding month, from Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{RVOverVIXListSRV2OverRV21990}\n\\end{figure}\n\n\\begin{figure}[!htbp]\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics[width = 0.49 
\\textwidth]{.\/RVOverRVList19902016}\n\\includegraphics[width = 0.49 \\textwidth]{.\/histogramRVOverRV19902016}\n\\end{tabular}\n\\caption{Ratio of preceding-month realized variance to that of the following month, from Jan 2nd, 1990 to Dec 30th, 2016.}\n\\label{RVOverVIXListSRV2OverRV21990}\n\\end{figure}\n\n\\begin{table}[!htb]\n\\caption{MLE results for ``$\\mathrm{nRV}^2 \/ \\mathrm{RV}^2$\" and ``$\\mathrm{RV}^2 \/ \\mathrm{nRV}^2$\"}\n\\label{MLESRV2OverRV21990nn}\n\\begin{minipage}{0.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.3175, 1.2580) & 0.1809 \\\\\n\\hline\nLogNormal & LN( -0.0037, 0.7211) & 0.0244 \\\\\n\\hline\nIGa & IGa( 2.1291, 1.6472) & 0.0472 \\\\\n\\hline\nGamma & Gamma( 1.9390, 0.6795) & 0.0801 \\\\\n\\hline\nWeibull & Weibul( 1.4403, 1.2869) & 0.0922 \\\\\n\\hline\nIG & IG( 1.3175, 1.8743) & 0.0340 \\\\\n\\hline\nBP & BP( 5.8771, 3.4893, 0.5556) & 0.0123 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{minipage}\n\\begin{minipage}{.5\\textwidth}\n\\begin{center}\n\\begin{tabular}{ c c c} \n\\multicolumn{2}{c}{} \\\\\n\\hline\n & parameters & KS test \\\\\n\\hline\nNormal & N( 1.2925, 1.0777) & 0.1422 \\\\\n\\hline\nLogNormal & LN( 0.0037, 0.7211) & 0.0244 \\\\\n\\hline\nIGa & IGa( 1.9390, 1.4717) & 0.0801 \\\\\n\\hline\nGamma & Gamma( 2.1291, 0.6071) & 0.0472 \\\\\n\\hline\nWeibull & Weibul( 1.4300, 1.3951) & 0.0608 \\\\\n\\hline\nIG & IG( 1.2925, 1.8387) & 0.0513 \\\\\n\\hline\nBP & BP( 3.4893, 5.8771, 1.7999) & 0.0123 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n \\end{minipage}\n\\end{table}\n\n\\section{Correlation Analysis \\label{Correlations}}\nTables \\ref{PCCVIX} and \\ref{PCCVXO} list Pearson correlation coefficients (PCC). Here \"n\" labels the \"next\" month, that is the month for which VIX and VXO were predicting the implied RV; \"r\" a \"random\" month and unlabeled RV is the one of the preceding month. All $RV^2$ are scaled, as explained earlier and in \\cite{dashti2018implied}. 
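\n\nThe entries of Tables~\\ref{PCCVIX}--\\ref{KSVXO} can be obtained along the following lines. This is a schematic sketch (Python with NumPy and SciPy): the input arrays stand for the scaled monthly series introduced above and are assumed to be given, and their construction from the underlying data is not reproduced here.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\n# Hedged sketch: Pearson correlation coefficients and two-sample KS\n# statistics for the (scaled) series.  The placeholder arrays below only\n# make the snippet runnable; they are not the actual data.\n\ndef pcc_table(series):\n    # Pairwise Pearson correlation coefficients.\n    names = list(series)\n    return {(a, b): stats.pearsonr(series[a], series[b])[0]\n            for a in names for b in names}\n\ndef ks_table(ratios):\n    # Pairwise two-sample Kolmogorov-Smirnov statistics.\n    names = list(ratios)\n    return {(a, b): stats.ks_2samp(ratios[a], ratios[b])[0]\n            for a in names for b in names}\n\nrng = np.random.default_rng(0)\nrv2, nrv2, vix2, rrv2 = (rng.lognormal(size=6800) for _ in range(4))\npcc = pcc_table({'RV2': rv2, 'nRV2': nrv2, 'VIX2': vix2, 'rRV2': rrv2})\nks = ks_table({'RV2/VIX2': rv2 / vix2, 'nRV2/VIX2': nrv2 / vix2})\n\\end{verbatim}\nWith the actual series in place of the placeholders, the dictionaries pcc and ks would contain entries of the kind reported in the tables.\n\n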
Tables \\ref{KSVIX} and \\ref{KSVXO} list the Kolmogorov-Smirnov (KS) statistics for pairwise comparisons of the ratio distributions.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{PCC VIX}\n\\label{PCCVIX}\n\\begin{tabular}{ccccc} \n& \\multicolumn{4}{c}{}\\\\\n\\hline\n & $RV^2$ & $nRV^2$ & $VIX^2$ & $rRV^2$ \\\\\n\\hline\n$RV^2$ &1 & 0.70 & 0.88 & 0.0055 \\\\\n\\hline\n$nRV^2$ &0.70 & 1 & 0.71 & 0.0025 \\\\\n\\hline\n$VIX^2$ &0.88 & 0.71 & 1 & 0.003 \\\\\n\\hline\n$rRV^2$ &0.0055 & 0.0025 & 0.003 & 1 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{PCC VXO}\n\\label{PCCVXO}\n\\begin{tabular}{ccccc} \n& \\multicolumn{4}{c}{}\\\\\n\\hline\n & $RV^2$ & $nRV^2$ & $VXO^2$ & $rRV^2$ \\\\\n\\hline\n$RV^2$ &1 & 0.70 & 0.87 & 0.0015 \\\\\n\\hline\n$nRV^2$ &0.70 & 1 & 0.72 & 0.004 \\\\\n\\hline\n$VXO^2$ &0.87 & 0.72 & 1 & 0.002 \\\\\n\\hline\n$rRV^2$ &0.0015 & 0.004 & 0.002 & 1 \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{KS VIX}\n\\label{KSVIX}\n\\begin{tabular}{ccccccc} \n& \\multicolumn{3}{c}{}\\\\\n\\hline\n & $\\frac{RV^2}{VIX^2}$ & $\\frac{nRV^2}{VIX^2}$ & $\\frac{RV^2}{nRV^2}$ & $\\frac{rRV^2}{rVIX^2}$ & $\\frac{rRV^2}{rRV^2}$& $\\frac{nRV^2}{RV^2}$ \\\\ \n\\hline\n$\\frac{RV^2}{VIX^2}$&0 & 0.056& 0.13& 0.20&0.26& -\\\\\n\\hline\n$\\frac{nRV^2}{VIX^2}$&0.056 & 0&- &0.18 & 0.23&0.13\\\\\n\\hline\n$\\frac{RV^2}{nRV^2}$ &0.13 &- & 0& 0.17&0.15&-\\\\ \n\\hline\n$\\frac{rRV^2}{rVIX^2}$ &0.20 & 0.18 & 0.17& 0 &0.063& 0.17\\\\\n\\hline\n$\\frac{rRV^2}{rRV^2}$ & 0.26 & 0.23 & 0.15 & 0.063& 0 &0.16 \\\\\n\\hline\n$\\frac{nRV^2}{RV^2}$ &- &0.13 & -& 0.17 &0.16&0 \\\\ \n\\hline\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{KS VXO}\n\\label{KSVXO}\n\\begin{tabular}{ccccccc} \n& \\multicolumn{3}{c}{}\\\\\n\\hline\n & $\\frac{RV^2}{VXO^2}$ & $\\frac{nRV^2}{VXO^2}$ & $\\frac{RV^2}{nRV^2}$ & $\\frac{rRV^2}{rVXO^2}$ & $\\frac{rRV^2}{rRV^2}$ & $\\frac{nRV^2}{RV^2}$ \\\\ \n\\hline\n$\\frac{RV^2}{VXO^2}$&0 & 0.063& 0.13& 0.22&0.26&- \\\\\n\\hline\n$\\frac{nRV^2}{VXO^2}$&0.063 & 0&- &0.19 & 0.23&0.16\\\\\n\\hline\n$\\frac{RV^2}{nRV^2}$ &0.13 &- & 0& 0.17&0.16&-\\\\ \n\\hline\n$\\frac{rRV^2}{rVXO^2}$ &0.22 & 0.19 & 0.17& 0 &0.057& 0.18\\\\\n\\hline\n$\\frac{rRV^2}{rRV^2}$ & 0.26 & 0.23 & 0.16 & 0.057& 0& 0.17 \\\\\n\\hline\n$\\frac{nRV^2}{RV^2}$ &- &0.16 & -& 0.18&0.17&0\\\\ \n\\hline\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\clearpage\n\n\\section{Conclusions \\label{Conclusions}}\nThe Beta Prime distribution provides the best fit to the distributions of the ratios of realized variance (squared realized volatility) to squared implied volatility indices VIX and VXO, as well as of the ratio of realized variances of two consecutive months. \n\nFor realized variance of the month for which volatility indices calculate implied realized variance, the distributions have very slowly decaying fat tails. This indicates that volatility indices tend to underestimate future volatility, especially its large spikes. Conversely, the probability of having very small ratios is suppressed due to a large power-law exponent. Comparing this to the ratio of realized variance of the preceding month to that of the following month, the latter has even longer tails, while the small ratios are significantly more populated. By both measures, VIX and VXO are better predictors of future realized volatility. 
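\n\nFor reference, the maximum-likelihood fits and KS statistics of the kind reported in Section~\\ref{RV} can be reproduced for the Beta Prime distribution along the lines of the following sketch (Python with SciPy); the data array is a synthetic placeholder, and scipy's betaprime distribution with the location fixed at zero corresponds to the density of Eq.~(\\ref{BetaPrimePDF}) with $p=a$, $q=b$ and $\\beta$ equal to the scale parameter.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy import stats\n\n# Hedged sketch: MLE fit of a Beta Prime distribution and the one-sample\n# KS statistic, analogous to the BP rows of the MLE tables in the text.\n# 'ratios' is a synthetic placeholder for a series such as RV^2/VIX^2.\nrng = np.random.default_rng(1)\nratios = stats.betaprime.rvs(27.2, 3.8, scale=0.10, size=6800,\n                             random_state=rng)\n\n# betaprime(a, b, loc, scale) matches BP(p=a, q=b, beta=scale) with loc=0.\np, q, loc, beta = stats.betaprime.fit(ratios, floc=0)\nks_stat, ks_pval = stats.kstest(ratios, 'betaprime', args=(p, q, loc, beta))\nprint('BP(', p, ',', q, ',', beta, ')  KS =', ks_stat)\n\n# The reciprocal of a Beta Prime variable is again Beta Prime with p and q\n# interchanged (up to rescaling), consistent with the identical KS values\n# in the left and right halves of the tables.\np_inv, q_inv, _, beta_inv = stats.betaprime.fit(1.0 / ratios, floc=0)\n\\end{verbatim}\n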
\n\nFor realized variance of the preceding month, the power law exponents for small ratios and for those in the tails are nearly identical, which reflects the fact that distributions of the ratio and its inverse are nearly identical and that the distribution of the inverse variable of Beta Prime is also Beta Prime.\n\nCorrelation and Kolmogorov-Smirnov statistics are in excellent agreement with empirical analysis of Section \\ref{EO} and fitting in Section \\ref{RV}. In a future work we will more closely identify the months whose ratios are responsible for the tails and the low-ratio regions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper we establish the \\emph{universality} of certain probability distributions on Hilbert spaces known as \\emph{GAP measures} \\cite{Gold1,Rei08}. This makes precise some statements and mathematical considerations outlined in our earlier paper \\cite{Gold1} on the thermal equilibrium distribution of the wave function of an open quantum system. \n\nBy saying that GAP measures are universal we mean that the distributions $\\mu_1$ \\x{(described below)} are typically close to a GAP measure,\n\\begin{equation}\n\\mu_1 \\approx GAP\n\\end{equation}\nwhen the system's environment is sufficiently large. To illustrate the terminology of universality, one can say that the central limit theorem conveys a sense in which the Gaussian probability distribution on the real line is universal: many physically relevant probability distributions are approximately Gaussian. \\x{Instead} of universality, one also often speaks of typicality; we use these two terms more or less interchangeably.\n\nThe family of GAP measures is a family of probability measures on\nHilbert spaces. There is one GAP measure for every density matrix $\\rho$ in a Hilbert space $\\mathscr{H}$, denoted $GAP(\\rho)$; it is concentrated on the unit sphere in $\\mathscr{H}$,\n\\begin{equation}\n \\mathbb{S}(\\mathscr{H}) = \\{\\psi \\in \\mathscr{H}: \\|\\psi\\|=1\\}\\,.\n\\end{equation}\nThe density matrix of $GAP(\\rho)$ is $\\rho$, in the usual sense that, for any probability measure $\\mu$ on $\\mathbb{S}(\\mathscr{H})$, its density matrix is\n\\begin{equation}\\label{covmatrix}\n\\rho_\\mu=\\int_{\\mathbb{S}(\\mathscr{H})} \\mu(d\\psi) \\ket{\\psi}\\bra{\\psi}\\,,\n\\end{equation}\nwhich is also the covariance matrix of $\\mu$ provided $\\mu$ has mean zero. The GAP measures relevant to thermal equilibrium are those associated with canonical density matrices\n\\begin{equation}\\label{rhobeta}\n \\rho_\\beta = \\frac{1}{Z} e^{-\\beta H}\\,,\n\\end{equation}\nwhere $Z= \\tr e^{-\\beta H}$ is the normalization constant,\n$\\beta$ the inverse temperature and $H$ the Hamiltonian. Detailed\ndiscussions of GAP measures and their physical applications can be \nfound in \\cite{Gold1,Rei08}. See\n\\cite{TZ05} for a study about the support of GAP measures, that is,\nabout what $GAP(\\rho_\\beta)$-distributed wave functions typically look like.\n\nThe main application of GAP measures\nis the characterization of the wave functions of systems we encounter in\nnature. In most cases we do not know a system's wave function,\nfor example because it is a photon coming from the sun (or another star,\nor the cosmic microwave background, or a lamp), or because it is an electron\nthat has escaped from a piece of metal. 
But in many cases the system is more or less in thermal equilibrium, and then, according to the considerations presented in \\cite{Gold1} and here,\nits wave function should be GAP distributed.\n\n\n\n\n\\subsection{Conditional Wave Function}\n\\label{sec:condwf}\n\nConsider a composite quantum system consisting of two subsystems, system 1 and system 2, with associated Hilbert spaces $\\mathscr{H}_1$ and $\\mathscr{H}_2$. Suppose that the system is in a pure state $\\psi \\in \\mathscr{H}_{\\mathrm{total}} = \\mathscr{H}_1 \\otimes \\mathscr{H}_2$. We ask what might be meant by the wave\nfunction of system 1. An answer is provided by the notion of \\emph{conditional wave function}, defined as follows \\cite{Gold1}:\\footnote{This definition is inspired by Bohmian mechanics, a formulation of quantum mechanics with particle trajectories, where the (non-normalized) conditional wave function $\\psi_1$ of system 1 is defined as \\cite{DGZ}\n\\begin{equation}\n \\psi_1(x) = \\psi(x,Y)\n\\end{equation}\nfor $x$ in the configuration space of system 1, with $Y$ the actual configuration of system 2.} \nLet $b=\\{b_j\\}$ be an orthonormal basis of $\\mathscr{H}_2$. For\neach choice of $j$, the partial inner product $\\scp{b_j}\n{\\psi}$, taken in $\\mathscr{H}_2$, is a vector belonging to\n$\\mathscr{H}_1$. Regarding $j$ as random (and therefore writing $J$), we are led to\nconsider the random vector $\\psi_1\\in \\mathscr{H}_1$ given by\n\\begin{equation}\\label{Psi1def}\n \\psi_1 = \\frac{\\scp{b_J} {\\psi}}{\\bigl\\|\\scp{b_J} {\\psi}\\bigr\\|}\n\\end{equation}\nwhere \n$b_J$ is a random element of the basis $\\{b_j\\}$, chosen with the quantum distribution\n\\begin{equation}\\label{marg}\n \\mathbb{P}^{\\psi,b}(J = j) = \\bigl\\| \\scp{b_j} {\\psi} \\bigr\\|^2.\n\\end{equation}\nWe refer to $\\psi_1$ as the conditional wave function of system 1.\\footnote{The conditional wave function can be regarded as a precise version of the ``collapsed'' wave function in the standard quantum formalism: Suppose that system 1 has interacted with system 2, and their joint wave function, as produced by the appropriate Schr\\\"odinger evolution, is now \n\\begin{equation}\\label{superposition}\n\\sum_j c_j \\psi_j^{(1)} \\otimes \\psi_j^{(2)}\\,,\n\\end{equation}\nwhere the $c_j$ are complex coefficients and all $\\psi$s are normalized. If system 2 is a macroscopic system and the $\\psi_j^{(2)}$s are macroscopically different states then in the standard formalism one regards $j$ as random with distribution $|c_j|^2$, and says accordingly that system 1 can be attributed the ``collapsed'' wave function $\\psi_j^{(1)}$ with probability $|c_j|^2$. 
The conditional wave function of system 1, according to the above definition in the case that the $\\psi_j^{(2)}$s are among the $\\{b_j\\}$, is indeed $\\psi_j^{(1)}$ with probability $|c_j|^2$.}\n\nThe distribution of $\\psi_1$ corresponding to \\eqref{Psi1def} and\n\\eqref{marg} is given by \\x{the following} probability measure on $\\mathbb{S}\n(\\mathscr{H}_1)$: The probability that $\\psi_1\\in A\\subseteq \\mathbb{S}(\\mathscr{H}_1)$ is\n\\begin{align}\\label{mu1}\n \\mu_1(A)=\\mu_1^{\\psi,b}(A)= \\mathbb{P}(\\psi_1 \\in A) \n &= \\sum_{j} \\bigl\\|\n \\scp{b_j} {\\psi} \\bigr\\|^2 \\, \\delta_{\\scp{b_j}{\\psi}\/\\|\\scp{b_j}{\\psi}\\|}(A) \\\\\n &= \\sum_{j} \\bigl\\|\n \\scp{b_j} {\\psi} \\bigr\\|^2 \\, 1_A\\biggl(\\frac{\\scp{b_j}{\\psi}}{\\|\\scp{b_j}{\\psi}\\|}\\biggr) \\,,\n\\end{align}\nwhere $\\delta_\\phi$ denotes the Dirac ``delta'' measure (a point\nmass) concentrated at $\\phi$ \\x{and $1_A$ denotes the characteristic function of the set $A$}. While the density matrix $\\rho_{\\mu_1}$ associated with ${\\mu_1}$ always equals the reduced\ndensity matrix $\\rho_1^\\psi$ of system 1, given by\n\\begin{equation}\\label{rhoreddef}\n \\rho_1^\\psi = \\tr_2 |\\psi \\rangle \\langle \\psi | = \\sum_{j}\n \\scp{b_j}{\\psi} \\scp{\\psi}{b_j} \\,,\n\\end{equation}\nthe measure $\\mu_1$ itself usually depends on the choice of the basis $b$, so $\\mu_1=\\mu_1^{\\psi,b}$. \n\n\n\\subsection{Summary of Results}\n\nIn this paper, we prove several \\emph{universality theorems} about GAP measures, Theorems~\\ref{thm1}--\\ref{thm4}, formulated in Section~\\ref{sec:results}. These are statements to the effect that for \\emph{most} wave functions $\\psi$ from relevant subsets of $\\mathscr{H}_1\\otimes\\mathscr{H}_2$ and\/or \\emph{most} orthonormal bases $b$ of $\\mathscr{H}_2$, $\\psi_1$ is approximately GAP-distributed. Here, ``most'' means that the set of exceptions is small with respect to the appropriate natural uniform measure. \n\nThe basic universality property is expressed in Theorem~\\ref{thm1}, which asserts that \\y{for sufficiently large $\\dim \\mathscr{H}_2$, for any orthonormal basis $b$ of $\\mathscr{H}_2$, and for any density matrix $\\rho_1$ on $\\mathscr{H}_1$, most $\\psi$ in $\\mathbb{S}(\\mathscr{H}_1\\otimes\\mathscr{H}_2)$ with the reduced density matrix $\\tr_2\\ket{\\psi}\\bra{\\psi} = \\rho_1$ are such that} the distribution $\\mu_1^{\\psi,b}$ of $\\psi_1$ is arbitrarily close to $GAP(\\rho_1)$,\n\\begin{equation}\\label{mu1GAPrho1}\n\\mu_1^{\\psi,b} \\approx GAP(\\rho_1)\\,.\n\\end{equation}\nThis fact was derived \\x{(but not rigorously proven) in Section 5.1.3 of \\cite{Gold1}.}\nThe rigorous proof of Theorem~\\ref{thm1} is based on the fact,\nfound independently by several authors \\cite{Y72,W78,DEL92,Colthesis,PR04} \n(see also \\cite{Ol90,ZS00,Jia06,MT07}),\nthat for a random $n\\times n$ unitary matrix with\ndistribution given by the Haar measure on the unitary group $U(n)$,\nthe upper left (or any other) $k\\times k$ submatrix, multiplied by a normalization factor $\\sqrt{n}$, converges as $n\\to\\infty$ to a\nmatrix of independent complex Gaussian random variables with mean 0\nand variance 1. (To understand the factor $\\sqrt{n}$, note that a column of a unitary $n \\times n$ matrix is a unit vector, and thus a single entry should be of order $1\/\\sqrt{n}$.) 
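\n\nThe following minimal numerical sketch (Python with NumPy) illustrates this convergence; the Haar-random unitary is generated by a QR decomposition of a complex Ginibre matrix with the standard phase correction, and the sizes chosen are arbitrary illustrative values.\n\\begin{verbatim}\nimport numpy as np\n\n# Hedged sketch: the upper-left k x k block of a Haar-random n x n unitary,\n# multiplied by sqrt(n), is approximately a matrix of independent complex\n# Gaussians with mean 0 and E|z|^2 = 1 when n is large.\nrng = np.random.default_rng(0)\n\ndef haar_unitary(n):\n    # Haar-distributed unitary via QR of a complex Ginibre matrix.\n    z = (rng.standard_normal((n, n))\n         + 1j * rng.standard_normal((n, n))) / np.sqrt(2)\n    q, r = np.linalg.qr(z)\n    d = np.diagonal(r)\n    return q * (d / np.abs(d))        # fix the column phases\n\nk, n, samples = 2, 200, 300\nentries = np.concatenate([\n    (np.sqrt(n) * haar_unitary(n)[:k, :k]).ravel() for _ in range(samples)\n])\nprint(np.mean(entries), np.mean(np.abs(entries) ** 2))   # approx. 0 and 1\n\\end{verbatim}\nA histogram of the real and imaginary parts of these rescaled entries is, to good accuracy, Gaussian with variance $1/2$ each, which is the convergence underlying the proof of Theorem~\\ref{thm1}.\n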
\n\nTheorem~\\ref{corbasis} asserts that the conclusion of Theorem~\\ref{thm1}---that \\eqref{mu1GAPrho1} holds with arbitrary accuracy for sufficiently large $\\dim\\mathscr{H}_2$---is also true for \\emph{every} $\\psi$ with $\\tr_2 \\ket{\\psi}\\bra{\\psi}=\\rho_1$ for \\emph{most} $b$ (instead of for \\emph{every} $b$ for \\emph{most} $\\psi$).\n\nTheorems~\\ref{thm3} and \\ref{thm4} justify the physical conclusion that, if a system (system 1) is weakly coupled to a very large (but finite) second system (the ``heat bath,'' system 2) then, for most wave functions of the composite system with energy in a given narrow energy range $[E,E+\\delta E]$, the conditional wave function of the system is approximately GAP-distributed for most orthonormal bases of the heat bath. In more detail, let the interaction between the two systems be negligible so that the Hamiltonian can be taken to be\n\\begin{equation}\nH=H_1\\otimes I_2 + I_1\\otimes H_2\n\\end{equation}\n(with $I_{1\/2}$ the identity operator on $\\mathscr{H}_{1\/2}$), and let $\\mathscr{H}_R\\subset \\mathscr{H}_1\\otimes \\mathscr{H}_2$ be a micro-canonical energy shell of the composite system, i.e., the subspace spanned by the eigenstates of the total energy with eigenvalues in $[E,E+\\delta E]$. Assume that the eigenvalues of $H_2$ are sufficiently dense and that the dimensions of $\\mathscr{H}_2$ and $\\mathscr{H}_R$ are sufficiently large. Then, for most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$,\n\\begin{equation}\n\\mu_1^{\\psi,b} \\approx GAP(\\rho_\\beta)\n\\end{equation}\nfor most bases $b$ of $\\mathscr{H}_2$; here, $\\rho_\\beta$ is the canonical density matrix \\eqref{rhobeta} and $\\beta=\\beta(E)$ . \n\nIn Theorems~\\ref{thm3} and \\ref{thm4} we relax the condition that $\\psi$ have a prescribed reduced density matrix, and exploit instead \\emph{canonical typicality}. This is the fact, found independently by several groups \\cite{GMM04,Gold2,PSW05,PSW06}\nand anticipated long before by Schr\\\"odinger \\cite{schrbook},\nthat for most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$, the reduced density matrix $\\tr_2\\ket{\\psi}\\bra{\\psi}$ is approximately of the canonical form \\eqref{rhobeta}.\nMore generally, in Theorems~\\ref{thm3} and \\ref{thm4} we may regard $\\mathscr{H}_R$ as \\emph{any} subspace of $\\mathscr{H}_1\\otimes\\mathscr{H}_2$ of sufficiently high dimension. Canonical typicality then refers to the fact that for most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$,\n$\\tr_2\\ket{\\psi}\\bra{\\psi}$ is close to $\\tr_2\\rho_R$, where $\\rho_R$ denotes $1\/\\dim\\mathscr{H}_R$ times the projection to $\\mathscr{H}_R$; the precise version of canonical typicality that we use in the\nproof of our Theorems~\\ref{thm3} and \\ref{thm4} is due to Popescu, Short,\nand Winter \\cite{PSW05,PSW06}. 
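\n\nA minimal numerical illustration of canonical typicality in this general form is sketched below (Python with NumPy): a subspace $\\mathscr{H}_R$ of $\\mathscr{H}_1\\otimes\\mathscr{H}_2$ is drawn at random, a random unit vector $\\psi$ is drawn from its unit sphere, and $\\tr_2\\ket{\\psi}\\bra{\\psi}$ is compared with $\\tr_2\\rho_R$ in the trace norm; all dimensions are small illustrative choices.\n\\begin{verbatim}\nimport numpy as np\n\n# Hedged sketch: canonical typicality.  For a Haar-random subspace H_R of\n# H_1 (x) H_2 and a random unit vector psi in H_R, the reduced density\n# matrix tr_2 |psi><psi| is close to tr_2 rho_R once dim(H_R) is large.\nrng = np.random.default_rng(0)\n\ndef reduced_1(vec, d1, d2):\n    # tr_2 |vec><vec| for a vector on H_1 (x) H_2, viewed as a d1 x d2 matrix.\n    m = vec.reshape(d1, d2)\n    return m @ m.conj().T\n\ndef typicality_gap(d1, d2, dR):\n    z = (rng.standard_normal((d1 * d2, dR))\n         + 1j * rng.standard_normal((d1 * d2, dR)))\n    V, _ = np.linalg.qr(z)                    # orthonormal basis of H_R\n    rho_R_1 = sum(reduced_1(V[:, k], d1, d2) for k in range(dR)) / dR\n    c = rng.standard_normal(dR) + 1j * rng.standard_normal(dR)\n    psi = V @ (c / np.linalg.norm(c))         # uniform unit vector in H_R\n    diff = reduced_1(psi, d1, d2) - rho_R_1\n    return np.abs(np.linalg.eigvalsh(diff)).sum()   # trace-norm distance\n\nfor dR in (10, 100, 1000):\n    print(dR, typicality_gap(3, 500, dR))     # typically shrinks with dim(H_R)\n\\end{verbatim}\nThis only probes the reduced density matrix; Theorems~\\ref{thm3} and \\ref{thm4} concern the stronger statement that the whole distribution $\\mu_1^{\\psi,b}$ is close to a GAP measure.\n\n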
Theorem~\\ref{thm3} asserts that for most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$,\n\\begin{equation}\\label{mu1GAPtr2rhoR}\n\\mu_1^{\\psi,b} \\approx GAP(\\rho^{(1)}_R)\\,,\n\\end{equation}\nwith $\\rho^{(1)}_R=\\tr_2 \\rho_R$, for most bases $b$ of $\\mathscr{H}_2$.\n\nTheorem~\\ref{thm4} is a very similar statement but differs in the detailed meaning of ``$\\approx$'' and refers to a fixed density matrix, such as $\\rho_\\beta$, in place of $\\rho^{(1)}_R$ in \\eqref{mu1GAPtr2rhoR}.\n\n\\subsection{Remarks}\n\n\\begin{itemize}\n\\item \\textit{Time evolution.} It may be interesting to consider how $\\mu_1^{\\psi,b}$ evolves with time if the wave function $\\psi=\\psi_t$ of systems 1 and 2 together evolves according to the Schr\\\"odinger equation\n\\begin{equation}\ni\\hbar\\frac{\\partial \\psi_t}{\\partial t} = H \\psi_t\\,.\n\\end{equation}\nIn a situation in which most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$ have $\\mu_1^{\\psi,b}\\approx GAP(\\rho^{(1)}_R)$, we may expect that even for $\\psi_0\\in\\mathbb{S}(\\mathscr{H}_R)$ with $\\mu_1^{\\psi_0,b}$ far from any GAP measure, $\\mu_1(t)=\\mu_1^{\\psi_t,b}$ will approach $GAP(\\rho^{(1)}_R)$ and stay near $GAP(\\rho^{(1)}_R)$ most of the time (though not forever, as follows from the recurrence property \\y{(almost-periodicity)} of the Schr\\\"odinger evolution in a finite-dimensional Hilbert space). We leave this problem open but briefly remark that one can already conclude by interchanging the time average and the average over $\\psi_0$ that whenever it is true for most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$ that $\\mu_1^{\\psi,b}\\approx GAP(\\rho^{(1)}_R)$, then for most $\\psi_0\\in\\mathbb{S}(\\mathscr{H}_R)$, $\\mu_1^{\\psi_t,b} \\approx GAP(\\rho^{(1)}_R)$ for most times $t$; the open problem is to prove a statement that concerns \\emph{all}, rather than \\emph{most}, $\\psi_0$.\n\n\n\\item \\textit{The role of interaction.}\nAnother remark concerns the role of interaction (between the system and the heat bath) for obtaining the distribution $GAP(\\rho_\\beta)$. The nature of the interaction is relevant to our discussion in two places---although our theorems do not depend on it, as they do not mention the Hamiltonian at all. First, interaction is relevant for creating typical wave functions, as it helps evolve atypical wave functions into typical ones. This is closely related to the fact that a system coupled to a heat bath (i.e., a big second system) will typically go from non-equilibrium to thermal equilibrium only in the presence of interaction; see Section~4 of \\cite{GLMTZ09b} for further discussion and examples. Second, it depends on the interaction which subspace of $\\mathscr{H}_1\\otimes\\mathscr{H}_2$ is the micro-canonical energy shell that we want $\\mathscr{H}_R$ to be, and thus also which density matrix $\\tr_2 \\rho_R$ is. In the limit of negligible interaction, $\\tr_2 \\rho_R$ has the canonical form $\\rho_\\beta = (1\/Z) e^{-\\beta H}$, while interaction makes it deviate from \\x{this form}. As a consequence of these two roles, when we want to obtain from non-equilibrium a wave function $\\psi\\in\\mathscr{H}_1\\otimes\\mathscr{H}_2$ such that the distribution of the conditional wave function $\\psi_1$ is close to $GAP(\\rho_\\beta)$, we may want that the interaction be not too large (or else there will be deviations from $\\rho_\\beta$) and that the interaction be not too small (or else it may take too long, say longer than the present age of the universe, to reach thermal equilibrium). 
\n\\end{itemize}\n\n\n\n\\subsection{Definition of the GAP Measure}\n\\label{sec:GAPdef}\n\nFor any density matrix $\\rho$ on $\\mathscr{H}$, the measure $GAP(\\rho)$ on (the Borel $\\sigma$-algebra of) $\\mathbb{S}(\\mathscr{H})$ is built \\x{by} starting from the measure $G(\\rho)$,\nwhich is the Gaussian measure on $\\mathscr{H}$ with mean 0 and covariance matrix\n$\\rho$. In this paper, we are interested only in the case $\\dim\\mathscr{H}< \\infty$.\nThen $G(\\rho)$ can be explicitly defined as follows: Let $S$ be the subspace of \n$\\mathscr{H}$ on which $\\rho$ is supported, i.e., its positive spectral subspace, or\nequivalently the orthogonal complement of its kernel, or\nequivalently its range; let $d'=\\dim S$ and $\\rho_+$ the restriction\nof $\\rho$ to $S$; then $G(\\rho)$ is the measure on $\\mathscr{H}$\nsupported on $S$ with the following density relative to the Lebesgue\nmeasure $\\lambda$ on $S$:\n\\begin{equation}\n \\frac{dG(\\rho)}{d\\lambda}(\\psi) = \\frac{1}{\\pi^{d'} \\,\n \\det \\rho_+} \\exp(-\\langle \\psi |\\rho^{-1}_+| \\psi \\rangle)\\,.\n\\end{equation}\n\\y{Equivalently, a $G(\\rho)$-distributed random vector $\\psi$ is one whose coefficients $\\scp{\\chi_i}{\\psi}$ relative to an eigenbasis $\\{\\chi_i\\}$ of $\\rho$ (i.e., $\\rho\\chi_i = p_i \\chi_i$ with $0\\leq p_i\\leq 1$) are independent complex Gaussian random variables with mean 0 and variances $\\mathbb{E}|\\scp{\\chi_i}{\\psi}|^2=p_i$; by a \\emph{complex Gaussian} random variable we mean one whose real and imaginary parts are independent real Gaussian random variables with equal variances.}\n\nNoting that \n\\begin{equation}\\label{Gpsi1}\n\\int_\\mathscr{H} G(\\rho)(d\\psi) \\, \\|\\psi\\|^2 = \\tr \\rho =1\\,,\n\\end{equation}\nwe now define the adjusted Gaussian measure $GA(\\rho)$ on $\\mathscr{H}$ as:\n\\begin{equation}\n GA(\\rho)(d\\psi) = \\|\\psi\\|^2 G(\\rho)(d\\psi)\\,.\n\\end{equation}\nIf $\\psi^{GA}$ is a $GA(\\rho)$-distributed vector, then $GAP(\\rho)$\nis the distribution of this vector projected on the unit sphere; that is, $GAP(\\rho)$ is the distribution of\n\\begin{equation}\n \\psi^{GAP} = \\frac{\\psi^{GA}}{\\|\\psi^{GA}\\|}\\,.\n\\end{equation}\nLike $\\G{\\rho}$ and unlike $\\GA{\\rho}$, $\\GAP{\\rho}$ has covariance matrix $\\rho$.\n\nMore generally, one can define for any measure $\\mu$ on $\\mathscr{H}$\nthe ``adjust-and-project'' procedure. We denote by $A\\mu$ the\nadjusted measure\n\\begin{equation}\\label{Adef}\nA\\mu(d\\psi) = \\|\\psi\\|^2 \\, \\mu(d\\psi)\\,.\n\\end{equation}\nThe projection on the unit sphere is defined as:\n\\begin{equation}\\label{Pdef}\nP:\\mathscr{H} \\setminus \\{0\\} \\to \\mathbb{S}(\\mathscr{H})\\, , \\quad\nP(\\psi)= \\frac{\\psi}{\\|\\psi\\|}\\,.\n\\end{equation}\nThen the adjusted-and-projected measure is $P_* ( A\\mu) = A\\mu \\circ\nP^{-1}$, where $P_*$ denotes the action of $P$ on measures, thus\ndefining a mapping $P_* \\circ A$ from the\nmeasures on $\\mathscr{H}$ with $\\int \\mu(d\\psi) \\, \\|\\psi\\|^2 = 1$\nto the probability measures on $\\mathbb{S}(\\mathscr{H})$.\n\n\n\n\n\n\n\n\n\\section{Results}\n\\label{sec:results}\n\n\n\\subsection{GAP Measure From a Typical Wave Function of a Large System, Given the Reduced Density Matrix}\n\\label{sec:typical}\n\n\nLet $\\mathscr{H}_\\mathrm{total}=\\mathscr{H}_1\\otimes\\mathscr{H}_2$, where\n$\\mathscr{H}_1$ and $\\mathscr{H}_2$ have respective dimension $d_1$\nand $d_2$, with $d_1< d_2<\\infty$. 
For any given density matrix $\\rho_1$ on\n$\\mathscr{H}_1$, let\n\\begin{equation}\n\\mathscr{R}(\\rho_1)=\\bigl\\{\\psi\\in\\mathbb{S}(\\mathscr{H}_\\mathrm{total}):\n\\rho_1^\\psi=\\rho_1\\bigr\\}\n\\end{equation}\nbe the set of all normalized wave functions in $\\mathscr{H}_{\\mathrm{total}}$ with reduced density matrix $\\rho_1^\\psi=\\rho_1$. We will see that $\\mathscr{R}(\\rho_1)$ is always non-empty.\n\nTheorem~\\ref{thm1} below concerns typical wave functions in\n$\\mathscr{R}(\\rho_1)$, i.e., typical wave functions with fixed\nreduced density matrix. The concept of ``typical'' refers to the \nuniform distribution $u_{\\rho_1}$ on $\\mathscr{R}(\\rho_1)$;\nan explicit definition of this distribution will be given in Section~\\ref{sec:urho1def}.\n\n\\x{Before we formulate Theorem~\\ref{thm1}, we introduce some notation. First, for} any Hilbert space $\\mathscr{H}$, let $\\mathscr{D}(\\mathscr{H})$ denote the set of all density operators on $\\mathscr{H}$,\ni.e., of all positive operators on $\\mathscr{H}$ with trace 1. \\x{Second, when $\\mu$ is a measure on $\\mathscr{H}$ or $\\mathbb{S}(\\mathscr{H})$ and $f(\\psi)$ is} a measurable function on $\\mathscr{H}$ or $\\mathbb{S}(\\mathscr{H})$ then we use the notation\n\\begin{equation}\\label{muf}\n\\mu(f):=\\int \\mu(d\\psi) f(\\psi)\\,.\n\\end{equation}\n\\x{Third, let} $\\|f\\|_\\infty=\\sup_{x} |f(x)|$.\n\n\\begin{thm}\\label{thm1}\nFor every $0<\\varepsilon<1$, $0<\\delta<1$, and $d_1\\in\\mathbb{N}$, there is $D_2=D_2(\\varepsilon,\\delta,d_1)>0$ such that for all $d_2\\in\\mathbb{N}$ with $d_2>D_2$, for every $\\mathscr{H}_1$ and $\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{1\/2}=d_{1\/2}$, for every orthonormal basis $b=\\{b_1,\\ldots,b_{d_2}\\}$ of $\\mathscr{H}_2$, for every $\\rho_1\\in \\mathscr{D}(\\mathscr{H}_1)$, and for every \\y{bounded measurable} function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$,\n\\begin{equation}\\label{ineqthm1}\nu_{\\rho_1} \\Bigl\\{ \\psi\\in \\mathscr{R}(\\rho_1): \\bigl|\\mu_1^{\\psi,b}(f) - \\GAP{\\rho_1}(f)\\bigr|< \\varepsilon \\, \\|f\\|_\\infty \\Bigr\\} \\geq 1-\\delta\\,.\n\\end{equation}\n\\end{thm}\n\nWe give the proof, as well as those of Theorems~\\ref{corbasis}--\\ref{thm4}, in Section~\\ref{sec:proofs}. \n\nIt follows from Theorem~\\ref{thm1} that, for every sequence $(\\mathscr{H}_{2,n})_{n\\in\\mathbb{N}}$ of Hilbert spaces with $d_{2,n}=\\dim \\mathscr{H}_{2,n} \\to \\infty$ as $n\\to\\infty$ and every sequence $(b_n)_{n\\in\\mathbb{N}}$ of orthonormal bases $b_n=\\{b_{1,n},\\ldots,b_{d_{2,n},n}\\}$ of $\\mathscr{H}_{2,n}$, for every $\\rho_1\\in\\mathscr{D}(\\mathscr{H}_1)$, and for every \\y{bounded measurable} function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$, the sequence of random variables $\\mu_1^{\\Psi_n,b_n}(f)$, where $\\Psi_n$ has distribution $u_{\\rho_1}$ on $\\mathbb{S}(\\mathscr{H}_1\\otimes\\mathscr{H}_{2,n})$, converges in distribution, as $n\\to\\infty$, to the constant $\\GAP{\\rho_1}(f)$, in fact uniformly in $\\rho_1$, $b_n$ and those $f$ with $\\|f\\|_\\infty\\leq 1$. Because of the convergence for every $f$, we can say that the sequence of random measures $\\mu_1^{\\Psi_n,b_n}$ converges ``weakly in distribution'' to the fixed measure $GAP(\\rho_1)$. \n\nA few comments about notation. In \\cite{Gold1}, $d_1$ was called $k$,\n$d_2$ was called $m$, and the notation for the\nbasis $\\{b_1, \\ldots, b_{d_2}\\}$ was $\\{\\ket{1}, \\ldots, \\ket{m}\\}$. 
For\nenumerating the basis, we will use the letter $j$, and thus write\n$b_j$; in \\cite{Gold1}, the notation was $q_2$ for $j$ (subscript 2\nbecause it refers to $\\mathscr{H}_2$). For a random choice of $j$, we\nwrite $J$; the corresponding notation in \\cite{Gold1} was $Q_2$.\n\n\n\n\\subsection{GAP Measure From a Typical Basis of a Large System}\n\nAs already explained in \\cite{Gold1}, instead of considering a\ntypical wave function and a fixed basis one can consider a fixed\nwave function and a typical basis. Let $ONB(\\mathscr{H}_2)$ be the set\nof all orthonormal bases of $\\mathscr{H}_2$, and recall the notation $\\rho_1^\\psi=\\tr_2 \\ket{\\psi}\\bra{\\psi}$.\n\n\\begin{thm}\\label{corbasis}\nFor every $0<\\varepsilon<1$, $0<\\delta<1$, $d_1\\in\\mathbb{N}$, \n$d_2\\in\\mathbb{N}$ with $d_2>D_2(\\varepsilon,\\delta,d_1)$ as in Theorem~\\ref{thm1}, every $\\mathscr{H}_1$ and $\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{1\/2}=d_{1\/2}$, every $\\psi\\in \\mathbb{S}(\\mathscr{H}_1\\otimes\\mathscr{H}_2)$, and every \\y{bounded measurable} function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$,\n\\begin{equation}\\label{ineqcorbasis}\nu_{ONB} \\Bigl\\{ (b_1,\\ldots,b_{d_2}) \\in ONB(\\mathscr{H}_2): \\bigl|\\mu_1^{\\psi,b}(f) - \\GAP{\\rho_1^\\psi}(f)\\bigr|< \\varepsilon \\, \\|f\\|_\\infty \\Bigr\\} \\geq 1-\\delta\\,,\n\\end{equation}\nwhere $u_{ONB}$ is the uniform probability measure on $ONB(\\mathscr{H}_2)$,\ncorresponding to the Haar measure on the unitary group $U(\\mathscr{H}_2)$.\n\\end{thm}\n\n\n\n\n\\subsection{GAP Measure From a Typical Basis and a Typical Wave Function in a Large Subspace}\n\nIn our main physical application, the reduced density matrix $\\rho_1^\\psi$ is not fixed, although---by a \\x{fact} known as canonical typicality---most of the relevant $\\psi$s have a reduced density matrix $\\rho_1^\\psi$ that is close to a certain fixed density matrix, for example to the canonical density matrix $\\rho_\\beta= (1\/Z)e^{-\\beta H}$. In this section, we present two further universality theorems that are appropriate for such situations, in which the relevant set of $\\psi$s is a subspace of $\\mathscr{H}_1\\otimes\\mathscr{H}_2$ that will be denoted $\\mathscr{H}_R$.\n \nThe physical setting to have in mind is this. A system with Hilbert space $\\mathscr{H}_1$ is entangled with a large system whose Hilbert space is $\\mathscr{H}_2$. The Hamiltonian $H$ is thus defined on $\\mathscr{H}_{\\mathrm{total}}=\\mathscr{H}_1\\otimes\\mathscr{H}_2$; suppose the total system is confined to a finite volume, so that $H$ has pure point spectrum. Let $[E,E+\\delta E]$ be a narrow energy window, located at a suitable energy $E$ such as one corresponding to a more or less fixed energy per particle or per volume. Then the \\emph{micro-canonical energy shell} is the spectral subspace of $H$ associated with this interval, i.e., the subspace spanned by the eigenvectors with eigenvalues between $E$ and $E+\\delta E$, and this is our subspace $\\mathscr{H}_R$. The \\emph{micro-canonical density matrix} $\\rho_R$ is the density matrix associated with $\\mathscr{H}_R$, i.e., $1\/\\dim\\mathscr{H}_R$ times the projection to $\\mathscr{H}_R$. 
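Explicitly, if $\\{\\Phi_\\alpha\\}$ denotes an orthonormal basis of $\\mathscr{H}_R$ consisting of eigenvectors of $H$ with eigenvalues in $[E,E+\\delta E]$, then\n\\begin{equation}\n\\rho_R = \\frac{1}{\\dim\\mathscr{H}_R}\\sum_\\alpha \\ket{\\Phi_\\alpha}\\bra{\\Phi_\\alpha}\\,.\n\\end{equation}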
Canonical typicality then asserts that for most wave functions in $\\mathbb{S}(\\mathscr{H}_R)$, the reduced density matrix is approximately $\\rho_\\beta$ for \\x{an} appropriate value of $\\beta$.\n\nFor more general $\\mathscr{H}_R$, canonical typicality means that for most $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$, the reduced density matrix $\\rho_1^\\psi$ is close to $\\tr_2\\rho_R$. The precise statement that we make use of is Theorem~1 of \\cite{PSW05} or the ``main theorem'' of \\cite{PSW06}, which asserts, in a somewhat specialized and simplified form that suffices for our purposes:\n\n\\begin{lemma}\\label{lem:can}\nConsider a Hilbert space $\\mathscr{H}_1$ of dimension $d_1\\in\\mathbb{N}$, another Hilbert space $\\mathscr{H}_2$ of dimension $d_2\\in\\mathbb{N}$ and a subspace $\\mathscr{H}_R\\subseteq \\mathscr{H}_1 \\otimes \\mathscr{H}_2$ of dimension $d_R$. Let $\\rho_R$ be $1\/d_R$ times the projection to $\\mathscr{H}_R$, and $u_R$ the uniform distribution on $\\mathbb{S}(\\mathscr{H}_R)$. Then for every $\\eta>0$,\n\\begin{equation}\n u_R \\left\\{ \\psi \\in \\mathbb{S}(\\mathscr{H}_R): \\Bigl\\|\\rho_1^\\psi -\n \\tr_2 \\rho_R \\Bigr\\|_{\\tr} \\geq \\eta + \\frac{d_1}{\\sqrt{d_R}}\n \\right\\} \\leq 4 \\exp\\Bigl(-\\frac{d_R\\eta^2}{18\\pi^3}\\Bigr)\\,.\n\\end{equation}\n\\end{lemma}\n\nHere, the \\emph{trace norm} is defined by\n\\begin{equation}\n \\|M\\|_{\\tr} = \\tr |M| = \\tr \\sqrt{M^* M}\\,.\n\\end{equation}\nBy the \\emph{uniform distribution} $u_R$ we mean the $(2d_R-1)$-dimensional surface area measure on $\\mathbb{S}(\\mathscr{H}_R)$, normalized so that $u_R(\\mathbb{S}(\\mathscr{H}_R))=1$.\n\n\n\n\\begin{thm}\\label{thm3}\nFor every $0<\\varepsilon<1$, $0<\\delta<1$, $d_1\\in\\mathbb{N}$, every Hilbert space $\\mathscr{H}_1$ with $\\dim \\mathscr{H}_1=d_1$, and every continuous function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$, there is a number $D_R=D_R(\\varepsilon,\\delta,d_1,f)>0$ such that for all $d_R,d_2\\in\\mathbb{N}$ with $d_R>D_R$ and $d_2>D_2(\\frac{\\varepsilon}{2\\|f\\|_\\infty},\\delta\/2,d_1)$ as in Theorem~\\ref{thm1}, and for every $\\mathscr{H}_2$ and $\\mathscr{H}_R\\subseteq \\mathscr{H}_1\\otimes\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{2\/R} = d_{2\/R}$, \n\\begin{multline}\\label{eq:thm3}\n u_R \\times u_{ONB} \\Bigl\\{ \\bigl( \\psi, b \\bigr) \\in\n \\mathbb{S}(\\mathscr{H}_R) \\times ONB (\\mathscr{H}_2) : \\\\\n \\bigl|\\mu_1^{\\psi,b}(f) - \\GAP{\\tr_2 \\rho_R}(f) \\bigr| < \\varepsilon \\Bigr\\}\n \\geq 1-\\delta\\,.\n\\end{multline}\n\\end{thm}\n\nIt follows that, for every sequence $(\\mathscr{H}_{2,n})_{n\\in\\mathbb{N}}$ of Hilbert spaces with $d_{2,n}=\\dim \n\\mathscr{H}_{2,n} \\to \\infty$ as $n\\to\\infty$, every sequence $(\\mathscr{H}_{R,n})_{n\\in\\mathbb{N}}$ of subspaces of $\\mathscr{H}_1\\otimes\\mathscr{H}_{2,n}$ with $d_{R,n}=\\dim\\mathscr{H}_{R,n} \\to\\infty$ as $n\\to\\infty$, and every continuous function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$, the sequence of random variables\n\\begin{equation}\n\\mu_1^{\\Psi_n,B_n}(f)-\\GAP{\\tr_2 \\rho_{R,n}}(f)\\,,\n\\end{equation}\nwhere $(\\Psi_n,B_n)$ has distribution $u_{R,n}\\times u_{ONB,n}$ on $\\mathbb{S}(\\mathscr{H}_{R,n})\\times ONB(\\mathscr{H}_{2,n})$, converges to zero in distribution as $n\\to\\infty$. 
We say that the sequence of random signed measures $\\mu_1^{\\Psi_n,B_n}-\\GAP{\\tr_2 \\rho_{R,n}}$ converges ``weakly in distribution'' to zero.\n\n\\bigskip\n\nFor $0<\\gamma<1\/\\dim\\mathscr{H}$ let $\\mathscr{D}_{\\geq \\gamma}(\\mathscr{H})$ denote the set of density matrices $\\rho\\in\\mathscr{D}(\\mathscr{H})$ whose eigenvalues are all greater than or equal to $\\gamma$ (so that, in particular, zero is not an eigenvalue of $\\rho$).\n\n\\begin{thm}\\label{thm4}\nFor every $0<\\varepsilon<1$, $0<\\delta<1$, $d_1\\in\\mathbb{N}$, \\x{and $0<\\gamma<1\/d_1$, there} are numbers $D_R'=D_R'(\\varepsilon,\\delta,d_1,\\gamma)>0$ and $r'=r'(\\varepsilon,d_1,\\gamma)>0$ such that for all $d_R,d_2\\in\\mathbb{N}$ with $d_R>D_R'$ and $d_2>D_2(\\varepsilon\/2,\\delta\/2,d_1)$ as in Theorem~\\ref{thm1}, \\x{for every Hilbert space $\\mathscr{H}_1$ with $\\dim \\mathscr{H}_1=d_1$, for every} $\\Omega \\in \\mathscr{D}_{\\geq\\gamma}(\\mathscr{H}_1)$, for every $\\mathscr{H}_2$ and $\\mathscr{H}_R\\subseteq \\mathscr{H}_1\\otimes\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{2\/R} = d_{2\/R}$ satisfying\n\\begin{equation}\\label{ROmega}\n\\bigl\\|\\tr_2 (\\rho_R)-\\Omega\\bigr\\|_{\\tr} < r'\\,,\n\\end{equation}\nand for every bounded measurable function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$,\n\\begin{multline}\\label{eq:thm4}\nu_R \\times u_{ONB} \\Bigl\\{ \\bigl( \\psi, b \\bigr) \\in\n \\mathbb{S}(\\mathscr{H}_R) \\times ONB (\\mathscr{H}_2) : \\\\\n \\bigl|\\mu_1^{\\psi,b}(f) - GAP(\\Omega)(f) \\bigr| < \\varepsilon \\, \\|f\\|_\\infty \\Bigr\\}\n \\geq 1-\\delta\\,.\n\\end{multline}\n\\end{thm}\n\nThe estimate \\eqref{eq:thm3} of Theorem~\\ref{thm3} is not uniform in $f$, since the bound $D_R$ depends on $f$,\\footnote{It can nevertheless be applied to finitely many functions $f_1,\\ldots,f_\\ell$ simultaneously: choose $d_R>\\max(D_R(f_1),\\ldots,D_R(f_\\ell))$, and then apply Theorem~\\ref{thm3} to obtain that $\\mu_1^{\\psi,b}$ and $GAP(\\tr_2\\rho_R)$ agree approximately on all linear combinations of $f_1,\\ldots,f_\\ell$.} and it applies to all subspaces $\\mathscr{H}_R$ of sufficient dimension. For the physical application, though, we often want to compare $\\mu_1^\\psi$ to $GAP(\\Omega)$ rather than $GAP(\\tr_2\\rho_R)$, for example because $\\Omega$ is the thermal density matrix $\\rho_\\beta=(1\/Z)e^{-\\beta H}$ while $\\tr_2\\rho_R$ is something complicated; we usually do not need that the estimate applies uniformly to all spaces $\\mathscr{H}_R$ of sufficient dimension, but instead consider only one fixed $\\mathscr{H}_R$; and in that situation we can, in fact, obtain an estimate, the one provided by Theorem~\\ref{thm4}, that is uniform in $f$.\n\n\n\n\n\\subsection{GAP Measure as the Thermal Equilibrium Distribution}\n\\label{sec:thermo}\n\nTheorem~\\ref{thm4} justifies regarding $GAP(\\rho_\\beta)$ as the thermal\nequilibrium distribution of the wave function of system 1 in the following way.\nLet $\\mathscr{H}_R$ be the microcanonical subspace, i.e., the spectral\nsubspace of $H$ associated with the interval $[E,E +\\delta E]$. It\nis a standard fact (e.g., \\cite{G95,ML79}) that when the interaction energy between system 1 and system 2 is sufficiently small, i.e., when we may set\n\\begin{equation}\nH = H_1 \\otimes I_2+ I_1\\otimes H_2\n\\end{equation}\non $\\mathscr{H}_\\mathrm{total}=\\mathscr{H}_1\\otimes\\mathscr{H}_2$, and when system 2 is a large heat bath, i.e., when the eigenvalues of $H_2$ are sufficiently dense, then $\\tr_2 \\rho_R$ is approximately of the exponential form $Z^{-1}\\exp(-\\beta\nH_1)$ with $Z = \\tr \\exp(-\\beta H_1)$ for suitable $\\beta>0$, i.e., is approximately the\ncanonical density matrix $\\rho_\\beta$. 
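We briefly recall the standard heuristic behind this fact (a rough sketch only, not needed in what follows): since $H=H_1\\otimes I_2+I_1\\otimes H_2$, the subspace $\\mathscr{H}_R$ is spanned by products of eigenvectors of $H_1$ and of $H_2$ whose energies add up to a value in $[E,E+\\delta E]$. Choosing an orthonormal eigenbasis $\\{\\ket{E^{(1)}_i}\\}$ of $H_1$ and writing $N_2(E')$ for the number of eigenvalues of $H_2$, counted with multiplicity, in $[E',E'+\\delta E]$, one obtains\n\\begin{equation}\n\\tr_2 \\rho_R = \\frac{1}{\\dim\\mathscr{H}_R}\\sum_i N_2\\bigl(E-E^{(1)}_i\\bigr)\\,\\ket{E^{(1)}_i}\\bra{E^{(1)}_i}\\,.\n\\end{equation}\nIf the eigenvalue density of the heat bath grows exponentially, $N_2(E')\\approx C\\,e^{\\beta E'}$ for $E'$ near $E$, then $N_2(E-E^{(1)}_i)\\approx C\\,e^{\\beta E}\\,e^{-\\beta E^{(1)}_i}$, and the right-hand side is approximately $Z^{-1}e^{-\\beta H_1}=\\rho_\\beta$.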
Then by Theorem~\\ref{thm4} in\nthis special case of negligible interaction we have that for most wave functions $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$,\n\\begin{equation}\n\\mu_1^{\\psi,b}\\approx GAP(\\rho_\\beta)\n\\end{equation}\nfor most orthonormal bases $b$ of $\\mathscr{H}_2$.\n\n\n\n\\section{Proofs}\n\\label{sec:proofs}\n\n\\subsection{Definition of $u_{\\rho_1}$}\n\\label{sec:urho1def}\n\nAccording to the Schmidt decomposition \\cite{Schmidt}, every $\\psi\\in\\mathscr{H}_\\mathrm{total}$ can\nbe written in the form\n\\begin{equation}\\label{Schmidt}\n\\psi = \\sum_{i=1}^{d_1} c_i \\, \\tilde{\\chi}_i\\otimes\\tilde{\\phi}_i\n\\end{equation}\nwhere $\\{\\tilde{\\chi}_i\\}$ is an orthonormal basis in $\\mathscr{H}_1$,\n$\\{\\tilde{\\phi}_i\\}$ is an orthonormal system in $\\mathscr{H}_2$ (i.e., a set of\northonormal vectors that is not necessarily complete), and the $c_i$ are\ncoefficients which can be chosen to be real and non-negative. If $\\|\\psi\\|=1$, the reduced\ndensity matrix of the system 1 is then\n\\begin{equation}\n\\rho_1^\\psi=\\sum_{i=1}^{d_1} c_i^2 \\ket{\\tilde{\\chi}_i}\\bra{\\tilde{\\chi}_i}\\,.\n\\end{equation}\nThus, $\\{\\tilde{\\chi}_i\\}$ is an eigenbasis of $\\rho_1^\\psi$, and $c_i^2$ are the corresponding eigenvalues. \n\nNow let a density matrix $\\rho_1$ be given, let $\\{\\chi_i\\}$ be an eigenbasis for $\\rho_1$, and let $0\\leq p_i\\leq 1$ be the corresponding eigenvalues. Then every $\\psi\\in\\mathscr{R}(\\rho_1)$ possesses a Schmidt decomposition of the form\n\\begin{equation}\\label{Schmidt2}\n\\psi=\\sum_{i=1}^{d_1} \\sqrt{p_i} \\, \\chi_i \\otimes \\phi_i\n\\end{equation}\nwith some orthonormal system $\\{\\phi_i\\}$ in $\\mathscr{H}_2$. Indeed, we know it has a Schmidt decomposition \\eqref{Schmidt} in which $\\{\\tilde{\\chi}_i\\}$ is an eigenbasis of $\\rho_1$, and $c_i^2$ are the eigenvalues. Reordering the terms in \\eqref{Schmidt}, we can make sure that $c_i=\\sqrt{p_i}$. Any two eigenbases $\\{\\chi_i\\}$ and $\\{\\tilde{\\chi}_i\\}$ of $\\rho_1$ are related by a block unitary; more precisely, for every eigenvalue $p$ of $\\rho_1$, $\\{\\chi_i:i\\in\\mathscr{I}(p)\\}$ and $\\{\\tilde{\\chi}_i:i\\in\\mathscr{I}(p)\\}$ (using the index set $\\mathscr{I}(p)=\\{i:c_i^2=p\\}=\\{i:p_i=p\\}$) are two orthonormal bases of the eigenspace of $p$, and thus related by a unitary matrix $(U_{ij}^{(p)})_{i,j\\in\\mathscr{I}(p)}$:\n\\begin{equation}\\label{chitildechi}\n\\tilde{\\chi}_i = \\sum_{j\\in\\mathscr{I}(p)} U^{(p)}_{ij} \\, \\chi_j\\,.\n\\end{equation}\nSetting\n\\begin{equation}\\label{phitildephi}\n\\phi_i = \\sum_{j\\in\\mathscr{I}(p)} U^{(p)}_{ji} \\tilde{\\phi}_j\\,,\n\\end{equation}\nwe obtain \\eqref{Schmidt2}, and that $\\{\\phi_i\\}$ is an orthonormal system.\n\nConversely, every orthonormal system $\\{\\phi_i\\}$ in $\\mathscr{H}_2$ defines, by \\eqref{Schmidt2}, a $\\psi\\in\\mathscr{R}(\\rho_1)$. Thus, \\eqref{Schmidt2} defines a bijection $F_{\\rho_1,\\{\\chi_i\\}}:ONS(\\mathscr{H}_2,d_1)\\to \\mathscr{R}(\\rho_1)$. The Haar measure on the unitary group of $\\mathscr{H}_2$ defines the uniform distribution on the set of orthonormal bases of $\\mathscr{H}_2$, of which the uniform distribution on $ONS(\\mathscr{H}_2,d_1)$ is a marginal; let $u_{\\rho_1,\\{\\chi_i\\}}$ be its image under $F_{\\rho_1,\\{\\chi_i\\}}$. \n\nWe note that $u_{\\rho_1,\\{\\chi_i\\}}$ actually does not depend on the choice of the eigenbasis $\\{\\chi_i\\}$. 
Indeed, if $\\{\\tilde{\\chi}_i\\}$ is any other eigenbasis of $\\rho_1$ (without loss of generality numbered in such a way that the eigenvalue of $\\tilde{\\chi}_i$ is $p_i$) then, as explained above, it is related to $\\{\\chi_i\\}$ by a block unitary $d_1\\times d_1$ matrix $U$ consisting of the blocks $(U_{ij}^{(p)})$. Let $\\overline{U}$ be the matrix whose entries are the complex conjugates of the entries of $U$, and let $\\hat{\\overline{U}}$ denote the action of $\\overline{U}$ on $ONS(\\mathscr{H}_2,d_1)$ given by\n\\begin{equation}\n\\hat{\\overline{U}}\\Bigl(\\bigl\\{\\phi_i:i=1,\\ldots,d_1\\bigr\\}\\Bigr) = \n\\biggl\\{\\sum_{j=1}^{d_1}\\overline{U}_{ij} \\phi_j:i=1,\\ldots,d_1\\biggr\\}\\,.\n\\end{equation}\nThen\n\\begin{equation}\nF_{\\rho_1,\\{\\chi_i\\}}=F_{\\rho_1,\\{\\tilde{\\chi}_i\\}}\\circ \\hat{\\overline{U}}\\,.\n\\end{equation}\nSince the Haar measure is invariant under left multiplication, its marginal on $ONS(\\mathscr{H}_2,d_1)$ is invariant under $\\hat{\\overline{U}}$. We thus define $u_{\\rho_1}$ to be $u_{\\rho_1,\\{\\chi_i\\}}$ for any eigenbasis $\\{\\chi_i\\}$.\n\n\n\\subsection{Proof of Theorem~\\ref{thm1}}\n\nWe will obtain Theorem~\\ref{thm1} from the corresponding\nstatement about the Gaussian measures $G(\\rho_1)$:\n\n\\begin{lemma}\\label{lemma1}\nFor every $0<\\tilde{\\varepsilon}<1$, $0<\\tilde{\\delta}<1$, and $d_1\\in\\mathbb{N}$, there is $\\tilde{D}_2=\\tilde{D}_2(\\tilde{\\varepsilon},\\tilde{\\delta},d_1)>0$ such that for all $d_2\\in\\mathbb{N}$ with $d_2>\\tilde{D}_2$, for every $\\mathscr{H}_1$ and $\\mathscr{H}_2$ with $\\dim \\mathscr{H}_{1\/2}=d_{1\/2}$, for every orthonormal basis $b=\\{b_1,\\ldots,b_{d_2}\\}$ of $\\mathscr{H}_2$, for every $\\rho_1\\in \\mathscr{D}(\\mathscr{H}_1)$, and for every bounded \\y{measurable} function $\\tilde{f}:\\mathscr{H}_1 \\to \\mathbb{R}$,\n\\begin{equation}\\label{ineqlemma1}\nu_{\\rho_1} \\Bigl\\{ \\psi\\in\\mathscr{R}(\\rho_1): \\bigl| \\tilde{\\mu}^{\\psi,b}_1(\\tilde{f}) -\\G{\\rho_1}(\\tilde{f}) \\bigr| < \\tilde{\\varepsilon} \\,\\|\\tilde{f}\\|_\\infty \\Bigr\\} \\geq 1-\\tilde{\\delta}\\,,\n\\end{equation}\nwhere $\\tilde{\\mu}^\\psi_1$ is the distribution of $\\sqrt{d_2}\\langle\nb_{J}|\\psi\\rangle \\in \\mathscr{H}_1$ (not normalized) with respect to\nthe uniform distribution of $J\\in\\{1,\\ldots,d_2\\}$.\n\\end{lemma}\n\n\nWe now collect the tools needed for the proof of Lemma~\\ref{lemma1}. \nWe note that $\\tilde{\\mu}^\\psi_1$ is the sum of $d_2$ delta measures\nwith equal weights,\n\\begin{equation}\\label{mubar}\n\\tilde{\\mu}^\\psi_1 = \\frac{1}{d_2} \\sum_{j=1}^{d_2} \\delta_{\\psi_1(j)}\\, ,\n\\end{equation}\nlocated at the points\n\\begin{equation}\\label{psi1}\n\\psi_1(j) = \\sqrt{d_2}\\scp{b_j}{\\psi}\\,.\n\\end{equation}\nLet $\\rho_1\\in\\mathscr{D}(\\mathscr{H}_1)$, let $\\{\\chi_i\\}$ be an eigenbasis of $\\rho_1$ with eigenvalues $p_i$, and let $\\psi\\in\\mathscr{R}(\\rho_1)$. According to \\eqref{Schmidt2},\n\\begin{equation}\n\\psi_1(j)=\\sum_{i=1}^{d_1} c_i \\sqrt{d_2}\n\\langle b_j | \\phi_i \\rangle \\chi_i\n\\end{equation}\nwith $c_i=\\sqrt{p_i}$. 
Now we regard $\\psi$ as random with\ndistribution $u_{\\rho_1}$; then the $\\psi_1(j)$ are $d_2$ random\nvectors, and $\\tilde{\\mu}^\\psi_1$ is their empirical distribution.\nAs noted above, $\\{\\phi_i\\}$ is a random orthonormal system\ndistributed according to a marginal of the Haar measure; in other words, the\nexpansion coefficients $\\langle b_j | \\phi_i \\rangle$ of\n\\begin{equation}\n\\phi_i = \\sum_{j=1}^{d_2} \\langle b_j | \\phi_i \\rangle b_j\n\\end{equation}\nform a $d_1\\times d_2$ matrix that arises as the first $d_1$ rows of a\nHaar-distributed unitary $d_2\\times d_2$ matrix.\n\nWe therefore need to know how a submatrix of a Haar-distributed unitary matrix is distributed. We cite the relevant result from Olshanskij \\cite[Lemma 5.3]{Ol90} and Collins \\cite[Chap. 4]{Colthesis} (the corresponding fact for orthogonal matrices was established independently by Diaconis, Eaton, and Lauritzen \\cite{DEL92}). To formulate this result, we use the following terminology. Recall that a \\emph{complex Gaussian} random variable is one whose real and imaginary parts are independent real Gaussian random variables with equal variances. The \\emph{variation distance} of two measures $\\mu,\\nu$ on the same $\\sigma$-algebra $\\mathscr{A}$ is defined to be\n\\begin{equation}\\label{defvariationdistance}\n\\|\\mu-\\nu\\| \n= \\sup_{A\\in\\mathscr{A}} (\\mu(A)-\\nu(A))\n+\\sup_{A\\in\\mathscr{A}} (\\nu(A)-\\mu(A))\\,.\n\\end{equation}\nIn case $\\mu$ and $\\nu$ possess densities $f$ and $g$ relative to some measure $\\lambda$ on $\\mathscr{A}$, this coincides with the $L^1$ norm of $f-g$,\n\\begin{equation}\n\\|\\mu-\\nu\\| = \\int \\lambda(dx) \\, |f(x)-g(x)|\\,.\n\\end{equation}\n\n\\begin{lemma}\\label{lem:asymp}\nFor $k,n\\in\\mathbb{N}$ with $k \\leq n$, let the random matrix $(U_{ij})$ be $\\mathrm{Haar}(U(n))$ distributed, and let $X$ be the upper left $k\\times k$ submatrix multiplied by the normalization factor $\\sqrt{n}$, $X_{ij}=\\sqrt{n}U_{ij}$ for $1\\leq i,j \\leq k$. Let $G$ be a random $k\\times k$ matrix whose entries $G_{ij}$ are independent complex Gaussian random variables with mean 0 and variance $\\mathbb{E} |G_{ij}|^2 =1$. Let $\\mu_{k,n}$ denote the distribution of $X$ and $\\mu_k$ that of $G$. Then $\\mu_{k,n}$ converges, as $n\\to\\infty$, to $\\mu_k$ in the variation distance. 
In fact, as soon as $n\\geq 2k$, $\\mu_{k,n}$ and $\\mu_k$ possess densities $f_{k,n}$ and $f_k$ relative to the Lebesgue measure in $\\mathbb{C}^{k\\times k}$ given by\n\\begin{equation}\\label{det}\nf_{k,n}(X) = \\mathscr{N}_{k,n}\\,1_{\\|X\\|_\\infty<\\sqrt{n}} \\biggl(\\det\\Bigl(I-\\frac{XX^*}{n}\\Bigr)\\biggr)^{n-2k} \n\\end{equation}\n(where $\\mathscr{N}_{k,n}$ is the appropriate normalization factor, $\\displaystyle\\|X\\|_\\infty=\\sup_{v\\in\\mathbb{C}^k\\setminus\\{0\\}}|Xv|\/|v|$, and $I$ denotes the $k\\times k$ unit matrix) and\n\\begin{equation}\\label{Gaussian}\nf_k(X) = \\frac{1}{\\pi^k} e^{-\\tr XX^*}\\,,\n\\end{equation}\nand\n\\begin{equation}\n\\|f_{k,n}-f_k\\|_{L^1(\\mathbb{C}^{k\\times k})}\\to 0\\text{ as }n\\to \\infty.\n\\end{equation}\n\\end{lemma}\n\nA random matrix such as $G$ is $\\sqrt{k}$ times what is sometimes called a ``standard non-selfadjoint Gaussian matrix.'' \nWe will use only the following consequence of Lemma~\\ref{lem:asymp}, concerning the convergence of the upper left $k\\times 2$ entries of $(U_{ij})$ to a matrix of independent complex Gaussian random variables:\n\n\\begin{cor}\\label{cor:AB}\nFor every $0<\\varepsilon<1$ and $k\\in\\mathbb{N}$ there is $n_0=n_0(\\varepsilon,k)>0$ such that for every $n\\in\\mathbb{N}$ with $n>n_0$ and every bounded measurable function $g:\\mathbb{C}^{k}\\to\\mathbb{R}$,\n\\begin{equation}\\label{LemmaA}\n \\Bigl|\\mathbb{E} g(\\sqrt{n}U_{11},\\ldots, \\sqrt{n} U_{k1}) -\n \\mathbb{E} g(G_{11},\\ldots,G_{k1}) \\Bigr|< \\varepsilon \\, \\|g\\|_\\infty\n\\end{equation}\nand\n\\begin{multline}\\label{LemmaB}\n \\Bigl|\\mathbb{E} [g(\\sqrt{n}U_{11},\\ldots, \\sqrt{n} U_{k1}) \n g(\\sqrt{n}U_{12},\\ldots, \\sqrt{n} U_{k2})] -\\\\\n \\mathbb{E} [g(G_{11},\\ldots,G_{k1})g(G_{12},\\ldots,G_{k2})] \\Bigr|< \\varepsilon \\, \\|g\\|_\\infty^2\\,,\n\\end{multline}\nwhere $(U_{ij})$ is Haar distributed in $U(n)$, and $G_{ij}$ ($i=1,\\ldots,k$; $j=1,2$) are independent complex Gaussian random variables with mean 0 and variance $\\mathbb{E} |G_{ij}|^2=1$.\n\\end{cor}\n\n\\proof\nChoose $n_0=n_0(\\varepsilon,k)\\geq 2k$ so that, for all $n>n_0$, $\\|f_{k,n}-f_k\\|_{L^1(\\mathbb{C}^{k\\times k})}<\\varepsilon$. This is possible by Lemma~\\ref{lem:asymp}. Then \\eqref{LemmaA} and \\eqref{LemmaB} follow.\n\\endproof\n\n\n\\proof[Proof of Lemma~\\ref{lemma1}]\nSet\n\\begin{equation}\\label{tildeDdef}\n\\tilde{D}_2(\\tilde{\\varepsilon},\\tilde{\\delta},d_1) \n= \\max \\biggl(n_0\\Bigl(\\frac{\\tilde{\\varepsilon}}{2},d_1\\Bigr),\nn_0\\Bigl(\\frac{\\tilde{\\delta}\\tilde{\\varepsilon}^2}{16},d_1\\Bigr),\n\\frac{32}{\\tilde{\\delta}\\tilde{\\varepsilon}^2}\\biggr)\\,.\n\\end{equation}\n\nLet us first introduce some abbreviations and notation. We write $n$ for $d_2$ and $k$ for $d_1$. Let $G_{11},\\ldots,G_{k1},G_{12},\\ldots,G_{k2}$ again be independent complex Gaussian random variables with mean 0 and variance $\\mathbb{E}|G_{ij}|^2=1$. We use the basis $\\{\\chi_i\\}$ to identify $\\mathscr{H}_1$ with\n$\\mathbb{C}^{k}$. Let $\\tilde{f}:\\mathbb{C}^{k} \\to \\mathbb{R}$ be a bounded \\y{measurable} function,\nand let $c_i\\geq 0$ be as before (the square roots of the\neigenvalues of $\\rho_1$). Set\n\\begin{equation}\ng(z_1,\\ldots,z_{k}) = \\tilde{f}(c_1z_1, \\ldots, c_{k}z_{k})\\,.\n\\end{equation}\nThen $g$, too, is \\y{measurable} and bounded with bound $\\|g\\|_\\infty = \\|\\tilde{f}\\|_\\infty$. 
As we said above, the\ndistribution of the $\\scp{b_j}{\\phi_i}$ is the same as that of\n$U_{ij}$ with $j \\in \\{1,\\ldots,n\\}$, that is, the first $k$\nrows of a Haar distributed unitary $n\\times n$ matrix. We thus\nwrite $U_{ij}$ instead of $\\scp{b_j}{\\phi_i}$, and obtain\n\\begin{equation}\\label{mumeans}\n\\tilde{\\mu}^\\psi_1 (\\tilde{f}) =: \\tilde{\\mu}(\\tilde{f}) = \\frac{1}{n} \\sum_{j=1}^{n}\ng(\\sqrt{n} U_{1j},\\ldots, \\sqrt{n} U_{kj}) \\,.\n\\end{equation}\nWe write $X$ for $\\sqrt{n}$ times the upper left $k\\times n$ submatrix of $(U_{ij})$, $\\vec{X}_j$ for the $j$-th column of $X$, i.e.,\n\\begin{equation}\n\\vec{X}_j = (\\sqrt{n}U_{1j},\\ldots,\\sqrt{n}U_{kj})\\,,\n\\end{equation}\nand $\\vec{G}_i$ for $(G_{1i},\\ldots,G_{ki})$ ($i=1,2$). In this notation,\n\\begin{equation}\\label{mumeans2}\n\\tilde{\\mu}(\\tilde{f}) = \\frac{1}{n} \\sum_{j=1}^n g(\\vec{X}_j)\n\\end{equation}\nand\n\\begin{equation}\\label{Gmeans}\nG(\\rho_1)(\\tilde{f}) = \\mathbb{E} \\tilde{f}(c_1 G_{11},\\ldots, c_{k} G_{k1}) = \\mathbb{E}\ng(\\vec{G}_1)\\,.\n\\end{equation}\n\nThe expression to be estimated is\n\\begin{equation}\n\\Bigl|\\tilde{\\mu} (\\tilde{f})-G(\\rho_1)(\\tilde{f})\\Bigr|\\,,\n\\end{equation}\nfor which we have by the triangle inequality that\n\\begin{equation}\\label{triangle}\n\\Bigl|\\tilde{\\mu} (\\tilde{f})-G(\\rho_1)(\\tilde{f})\\Bigr|\\leq \\Bigl|\\tilde{\\mu} (\\tilde{f})\n-\\mathbb{E}\\tilde{\\mu}(\\tilde{f}) \\Bigr| + \\Bigl|\\mathbb{E}\\tilde{\\mu}(\\tilde{f}) -\nG(\\rho_1)(\\tilde{f})\\Bigr|\\,.\n\\end{equation}\nNote that the second contribution in \\eqref{triangle} is nonrandom.\n\nWe first show that\n\\begin{equation}\\label{nonrandomsmall}\n\\Bigl|\\mathbb{E}\\tilde{\\mu}(\\tilde{f}) - G(\\rho_1)(\\tilde{f})\\Bigr| \n<\\frac{\\tilde{\\varepsilon}}{2} \\|\\tilde{f}\\|_\\infty\n\\end{equation}\nif $n$ is sufficiently large: By \\eqref{mumeans2},\n\\begin{equation}\\label{Emu}\n\\mathbb{E} \\tilde{\\mu}(\\tilde{f}) = \\frac{1}{n} \\sum_{j=1}^{n} \\mathbb{E} g(\\vec{X}_j) = \\mathbb{E} g(\\vec{X}_1)\n\\end{equation}\nbecause each $\\vec{X}_j$ has the same distribution---because the columns of $(U_{ij})$ are exchangeable due to the invariance of the Haar\nmeasure. By \\eqref{LemmaA} in Corollary~\\ref{cor:AB}, if $n>n_0(\\tilde{\\varepsilon}\/2,k)$ then the absolute difference between \\eqref{Gmeans} and \\eqref{Emu} is less than $\\tfrac{\\tilde{\\varepsilon}}{2}\\|g\\|_\\infty$, i.e., \\eqref{nonrandomsmall} holds.\n\nConcerning the first contribution in \\eqref{triangle}, Chebyshev's\ninequality (see, e.g., \\cite[p.~65]{Bill}) asserts that\n\\begin{equation}\\label{Chebyshev}\n\\mathbb{P}\\Bigl( \\bigl| \\tilde{\\mu}(\\tilde{f}) - \\mathbb{E}\\tilde{\\mu}(\\tilde{f}) \\bigr| >\n\\frac{\\tilde{\\varepsilon}}{2}\\|\\tilde{f}\\|_\\infty \\Bigr) \n< \\frac{4}{\\tilde{\\varepsilon}^2\\|\\tilde{f}\\|_\\infty^2} \\mathrm{var}(\\tilde{\\mu}(f)) \\,,\n\\end{equation}\nwhere $\\mathbb{P}$ is the Haar measure for $U_{ij}$ and var($Y$) is the\nvariance of the random variable $Y$. Now Lemma~\\ref{lemma1} follows if we can show that\n\\begin{equation}\\label{varto0}\n \\mathrm{var}(\\tilde{\\mu}(\\tilde{f})) \n \\leq \\frac{\\tilde{\\delta}\\tilde{\\varepsilon}^2}{4}\\|\\tilde{f}\\|_\\infty^2 \n\\end{equation}\nfor sufficiently large $n$. 
We find that\n\\begin{equation}\n\\mathrm{var}(\\tilde{\\mu}(\\tilde{f})) \n= \\mathbb{E} \\bigl[\\tilde{\\mu}(\\tilde{f})^2\\bigr] -\n(\\mathbb{E}\\tilde{\\mu}(\\tilde{f}))^2 = \\frac{1}{n^2} \\sum_{j,j'=1}^{n} \\mathbb{E}\n[g(\\vec{X}_j) g(\\vec{X}_{j'})] - [\\mathbb{E} g(\\vec{X}_1)]^2\\,.\n\\end{equation}\nSince the $\\vec{X}_j$ are exchangeable, the joint\ndistribution of $\\vec{X}_j$ and $\\vec{X}_{j'}$ for $j\\neq j'$ is the same as the joint\ndistribution of $\\vec{X}_1$ and $\\vec{X}_2$, so that all summands with\n$j\\neq j'$ are equal (and all summands with $j=j'$ are equal), and\nwe can write\n\\begin{align}\n\\mathrm{var}(\\tilde{\\mu}(\\tilde{f})) \n&= \\frac{n^2 - n}{n^2} \\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr] \n+ \\frac{n}{n^2} \\underbrace{\\mathbb{E} \\bigl[g(\\vec{X}_1)^2\\bigr]}_{\\leq \\|g\\|_\\infty^2} \n- [\\mathbb{E} g(\\vec{X}_1)]^2\\\\\n&\\leq \\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr] \n-\\frac{1}{n} \\underbrace{\\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr]}_{\\geq -\\|g\\|_\\infty^2} \n+ \\frac{1}{n} \\|g\\|_\\infty^2 - [\\mathbb{E} g(\\vec{X}_1)]^2\\\\\n&\\leq \\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr] \n+\\frac{2}{n} \\|g\\|_\\infty^2 - [\\mathbb{E} g(\\vec{X}_1)]^2\\\\\n&= \\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr] \n\\underbrace{- \\mathbb{E} \\bigl[ g(\\vec{G}_1) \\, g(\\vec{G}_2) \\bigr]\n + \\bigl[\\mathbb{E} g(\\vec{G}_1) \\bigr]^2}_{=0} \\: +\\nonumber\\\\\n&\\quad +\\: \\frac{2}{n} \\|g\\|_\\infty^2 \n- [\\mathbb{E} g(\\vec{X}_1)]^2\\\\\n&\\leq \n\\Bigl|\\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr] - \\mathbb{E} \\bigl[g(\\vec{G}_1) \\, g(\\vec{G}_2)\\bigr] \\Bigr| \n+\\frac{2}{n} \\|g\\|_\\infty^2 \\: + \\nonumber\\\\ \n&\\quad +\\: \\bigl[\\mathbb{E} g(\\vec{G}_1) \\bigr]^2\n- [\\mathbb{E} g(\\vec{X}_1)]^2\\\\\n&\\leq \n\\Bigl|\\mathbb{E} \\bigl[g(\\vec{X}_1) \\, g(\\vec{X}_2)\\bigr] - \\mathbb{E} \\bigl[g(\\vec{G}_1) \\, g(\\vec{G}_2)\\bigr] \\Bigr| \n+\\frac{2}{n} \\|g\\|_\\infty^2 \\: + \\nonumber\\\\\n&\\quad +\\: \\bigl|\\mathbb{E} g(\\vec{G}_1) - \\mathbb{E} g(\\vec{X}_1)\\bigr| \\, \n\\underbrace{\\bigl|\\mathbb{E} g(\\vec{G}_1) + \\mathbb{E} g(\\vec{X}_1)\\bigr|}_{\\leq 2 \\|g\\|_\\infty} \\\\\n&\\leq \\frac{\\tilde{\\delta}\\tilde{\\varepsilon}^2}{16} \\|g\\|_\\infty^2 \n+\\frac{2}{n} \\|g\\|_\\infty^2 \n+ \\frac{\\tilde{\\delta}\\tilde{\\varepsilon}^2}{16} \\|g\\|_\\infty \\, 2\\|g\\|_\\infty \n\\end{align}\nfor $n>n_0(\\tilde{\\delta}\\tilde{\\varepsilon}^2\/16,k)$ by Corollary~\\ref{cor:AB}. If in addition $n>32\/\\tilde{\\delta}\\tilde{\\varepsilon}^2$, we thus have that\n\\begin{equation}\n\\mathrm{var}(\\tilde{\\mu}(\\tilde{f})) \n\\leq \\tilde{\\delta}\\tilde{\\varepsilon}^2 \\|g\\|_\\infty^2 \\Bigl(\\frac{1}{16} + \\frac{1}{16} + \\frac{1}{8}\\Bigr)\n= \\frac{\\tilde{\\delta}\\tilde{\\varepsilon}^2}{4} \\|g\\|_\\infty^2\n= \\frac{\\tilde{\\delta}\\tilde{\\varepsilon}^2}{4} \\|\\tilde{f}\\|_\\infty^2\\,.\n\\end{equation}\nThus, for $d_2=n>\\tilde{D}_2$ in \\eqref{tildeDdef}, both \\eqref{nonrandomsmall} and \\eqref{varto0} hold; Lemma~\\ref{lemma1} follows using \\eqref{triangle} and \\eqref{Chebyshev}.\n\\endproof\n\n\nWe now collect the tools needed for the proof of Theorem~\\ref{thm1}.\n\n\\begin{lemma}\\label{lem:n} \n$\\tilde{\\mu}_1^\\psi\\bigl(\\|\\cdot\\|^2\\bigr)=1.$\n\\end{lemma}\n\n\\proof\nRecall that $\\tilde{\\mu}_1^\\psi$ is the distribution of $\\sqrt{d_2}\\scp{b_J}{\\psi}$ with $\\mathbb{P}(J=j)=1\/d_2$. 
Thus,\n\\begin{equation}\\label{mu1psisquare}\n\\int_{\\mathscr{H}_1} \\tilde{\\mu}_1^\\psi (d\\psi_1) \\|\\psi_1\\|^2 =\n\\mathbb{E}\\left(d_2\\|\\scp{b_J}{\\psi}\\|^2\\right) =\n\\sum_{j=1}^{d_2}\\|\\scp{b_j}{\\psi}\\|^2 = \\|\\psi\\|^2 = 1\\,.\n\\end{equation}\n\\endproof\n\n\\begin{lemma}\\label{lem:tildemu1}\n$\\mu_1^\\psi = P_* A \\tilde{\\mu}_1^\\psi$.\n\\end{lemma}\n\n\\proof\nBy definition, $A\\tilde{\\mu}_1^\\psi$ is the distribution of $\\sqrt{d_2}\\scp{b_J}{\\psi}$ with $\\mathbb{P}(J=j)=\\|\\scp{b_j}{\\psi}\\|^2$, and $P_* A \\tilde{\\mu}_1^\\psi$ is the distribution of $\\scp{b_J}{\\psi}\/\\|\\scp{b_J}{\\psi}\\|$ with $\\mathbb{P}(J=j)=\\|\\scp{b_j}{\\psi}\\|^2$. The latter is the definition of $\\mu_1^\\psi$. \n\\endproof\n\nRecall from \\eqref{Gpsi1} that, for every $\\rho\\in\\mathscr{D}(\\mathscr{H})$,\n\\begin{equation}\n\\int_\\mathscr{H} G(\\rho)(d\\psi) \\, \\|\\psi\\|^2 =1.\n\\end{equation}\n\n\\begin{lemma}\\label{lem:R}\nFor every $0<\\varepsilon<1$ and $d\\in\\mathbb{N}$, there is $R=R(\\varepsilon,d)>0$ such that, for every $\\mathscr{H}$ with $\\dim\\mathscr{H}=d$ and every $\\rho\\in\\mathscr{D}(\\mathscr{H})$,\n\\begin{equation}\n\\int_{\\{\\psi\\in\\mathscr{H}:\\|\\psi\\|<R\\}} \\G{\\rho}(d\\psi) \\, \\|\\psi\\|^2 > 1-\\varepsilon\\,.\n\\end{equation}\n\\end{lemma}\n\n\\proof\nLet $X_1,\\ldots, X_d$ be independent complex Gaussian random variables with mean 0 and variance 1. Then $X=(X_1,\\ldots,X_d)$ has Gaussian distribution $\\G{I}$ with covariance matrix $I$, the identity matrix; note that\n\\begin{equation}\n\\int_\\mathscr{H} \\G{I}(d\\psi) \\, \\|\\psi\\|^2 = \\mathbb{E} \\sum_{i=1}^d |X_i|^2=d.\n\\end{equation}\nThus, there is $R>0$ with\n\\begin{equation}\n\\int_{\\{\\psi\\in\\mathscr{H}:\\|\\psi\\|<R\\}} \\G{I}(d\\psi) \\, \\|\\psi\\|^2 > d-\\varepsilon\\,.\n\\end{equation}\n\nFor any $\\mathscr{H}$ with $\\dim\\mathscr{H}=d$ and $\\rho\\in\\mathscr{D}(\\mathscr{H})$, choose an eigenbasis of $\\rho$ to identify $\\mathscr{H}$ with $\\mathbb{C}^d$, so $\\rho=\\mathrm{diag}(p_1,\\ldots,p_d)$. Set $Z_i=\\sqrt{p_i} X_i$, so $Z=(Z_1,\\ldots,Z_d)$ has distribution $\\G{\\rho}$. Then\n\\begin{align}\n\\int_{\\{\\psi\\in\\mathscr{H}:\\|\\psi\\|\\geq R\\}} \\G{\\rho}(d\\psi) \\, \\|\\psi\\|^2\n&=\\mathbb{E}\\Bigl( 1_{\\sum |Z_j|^2\\geq R^2} \\sum_{i} |Z_i|^2\\Bigr)\\\\\n&=\\mathbb{E}\\Bigl( 1_{\\sum p_j |X_j|^2\\geq R^2} \\sum_i p_i |X_i|^2\\Bigr)\\\\\n&\\leq \\mathbb{E}\\Bigl( 1_{\\sum |X_j|^2\\geq R^2} \\sum_i |X_i|^2\\Bigr)\\\\\n&= \\int_{\\{\\psi\\in\\mathscr{H}:\\|\\psi\\|\\geq R\\}} \\G{I}(d\\psi) \\, \\|\\psi\\|^2\n <\\varepsilon\\,.\n\\end{align}\n\\endproof\n\n\nAs an abbreviation, we write $M(\\mathscr{H}_1,\\mathscr{H}_2,\\rho_1,b,f,\\varepsilon)$ or shorter $M(f,\\varepsilon)$ for the set considered in \\eqref{ineqthm1}, and $\\tilde{M}(\\mathscr{H}_1,\\mathscr{H}_2,\\rho_1,b,\\tilde{f},\\tilde{\\varepsilon})$ or shorter $\\tilde{M}(\\tilde{f},\\tilde{\\varepsilon})$ for the set considered in \\eqref{ineqlemma1}.\n\n\\begin{lemma}\\label{lem:M}\nFix $0<\\varepsilon<1$, $\\mathscr{H}_1$, $\\mathscr{H}_2$, $b$, and $\\rho_1$. Let $R=R(\\varepsilon\/4,d_1)$ with $R(\\cdot,\\cdot)$ provided by Lemma~\\ref{lem:R}. 
Then, for every \\y{bounded measurable} function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$,\n\\begin{equation}\\label{M}\nM(f,\\varepsilon) \\supseteq \n\\tilde{M}\\Bigl((f\\circ P) 1_{N 1-\\varepsilon\/4\\,,\n\\end{equation}\nso the term \\eqref{term2} is less than or equal to\n\\begin{equation}\\label{bound9}\n\\|f\\|_\\infty \\, \\Bigl( \\varepsilon\/4 + \\bigl| \\tilde{\\mu}(1_{ND_2$, any $\\mathscr{H}_1$ and $\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{1\/2}=d_{1\/2}$, any orthonormal basis $b$ of $\\mathscr{H}_2$, any $\\rho_1\\in\\mathscr{D}(\\mathscr{H}_1)$, and any \\y{bounded measurable} function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$. Then, by two applications of Lemma~\\ref{lemma1},\n\\begin{align}\nu_{\\rho_1}\\Bigl( \\tilde{M}\\Bigl(g 1_{N0$; continuity is not uniform without this restriction. However, Lemma~\\ref{lem:cont} asserts that for any \\emph{fixed} and \\emph{continuous} test function $f$, continuity is uniform in $\\rho$ without restrictions.}\n\n\n\\begin{lemma}\\label{lem:cont}\nFor every $0<\\varepsilon<1$, every $d\\in\\mathbb{N}$, every Hilbert space $\\mathscr{H}$ with $\\dim \\mathscr{H}=d$, and every continuous function $f:\\mathbb{S}(\\mathscr{H})\\to\\mathbb{R}$ there is $r=r(\\varepsilon,d,f)>0$ such that for all $\\rho,\\Omega\\in\\mathscr{D}(\\mathscr{H})$,\n\\begin{equation}\n\\text{if }\\|\\rho-\\Omega\\|_{\\tr} < r\n\\text{ then }\n\\bigl| GAP(\\rho)(f)-GAP(\\Omega)(f) \\bigr| < \\varepsilon\\,.\n\\end{equation}\n\\end{lemma}\n\nWhile all norms on $\\mathscr{D}(\\mathscr{H})$ are equivalent for $\\dim \\mathscr{H}< \\infty$, we use the trace norm $\\|\\cdot\\|_{\\tr}$ here because in this norm the continuity extends to $\\dim \\mathscr{H}=\\infty$ and because it is used in Lemma~\\ref{lem:can}.\n\nTo formulate the other continuity statement, let $u_{\\mathbb{S}(\\mathscr{H})}$ denote the normalized uniform measure on the unit sphere in $\\mathscr{H}$. For any density matrix $\\rho\\in\\mathscr{D}(\\mathscr{H})$ of which zero is not an eigenvalue, $GAP(\\rho)$ possesses a density relative to $u_{\\mathbb{S}(\\mathscr{H})}$ \\cite{Gold1}.\n\n\\begin{lemma}\\label{lem:contLinfty}\nFor every $0<\\varepsilon<1$, every $d\\in\\mathbb{N}$, every Hilbert space $\\mathscr{H}$ with $\\dim \\mathscr{H}=d$, and every $0<\\gamma<1\/d$, there is $r=r(\\varepsilon,d,\\gamma)>0$ such that for all $\\rho,\\Omega\\in\\mathscr{D}_{\\geq\\gamma}(\\mathscr{H})$, \n\\begin{equation}\\label{eq:contLinfty}\n\\text{if }\\|\\rho-\\Omega\\|_{\\tr}0$ to be less than the smallest eigenvalue of $\\Omega$ and note that only finitely many $\\rho_n$ can lie outside $\\mathscr{D}_{\\geq \\gamma}(\\mathscr{H})$.\n\nTo see that in Lemma~\\ref{lem:contLinfty} $\\mathscr{D}_{\\geq\\gamma}(\\mathscr{H})$ cannot be replaced by $\\mathscr{D}(\\mathscr{H})$ (i.e., that continuity is not uniform without restrictions), note that, when 0 is an eigenvalue of $\\Omega$, $\\GAP{\\Omega}$ does not have a density with respect to $u_{\\mathbb{S}(\\mathscr{H})}$, so that at such an $\\Omega$, $\\rho\\mapsto\\GAP{\\rho}$ is certainly not continuous in $L^\\infty\\bigl(\\mathbb{S}(\\mathscr{H}),u_{\\mathbb{S}(\\mathscr{H})}\\bigr)$ or in the variation distance \\eqref{defvariationdistance}.\n\nTo see that in Lemma~\\ref{lem:cont} one cannot drop the assumption that $f$ is continuous, consider an $\\Omega$ that has zero as an eigenvalue and a $\\rho$ that does not. 
Then $GAP(\\Omega)$ is concentrated on a subspace of dimension less than $d$ while $GAP(\\rho)$ has a density on the sphere and lies \\emph{near} (rather than \\emph{in}) that subspace. Thus, for a test function $f$ that is bounded measurable but not continuous, $GAP(\\rho)(f)$ does not have to be close to $GAP(\\Omega)(f)$.\n\n\n\\label{sec:contproof}\n\nAs part of the proof of Lemma~\\ref{lem:cont}, we will need the continuity property of Gaussian measures expressed in the next lemma. When $\\mu_n,\\mu$ are measures on a topological space $X$, we write $\\mu_n \\Rightarrow \\mu$ to denote that the sequence of measures $\\mu_n$ converges weakly to $\\mu$. This means that $\\mu_n(f) \\to \\mu(f)$ for every bounded continuous function $f:X\\to \\mathbb{R}$ and implies that the same thing is true for every bounded measurable\nfunction $f:X\\to\\mathbb{R}$ such that $\\mu(D(f))=0$, where $D(f)$ is the set of\ndiscontinuities of $f$.\n\n\\begin{lemma}\\label{contGk}\nThe mapping $\\rho\\mapsto\\G{\\rho}$ is continuous in the weak topology on measures: If $\\rho_n \\in \\mathscr{D}(\\mathbb{C}^d)$ for every $n\\in\\mathbb{N}$ and $\\rho_n \\to \\rho$ then $G(\\rho_n) \\Rightarrow G(\\rho)$.\n\\end{lemma}\n\n\\proof \nWe use characteristic functions; as usual, the characteristic\nfunction $\\hat{\\mu}:\\mathbb{R}^{2d} \\to \\mathbb{C}$ of a probability measure\n$\\mu$ on $\\mathbb{R}^{2d}$ is defined by\n\\begin{equation}\n \\hat{\\mu}(k_1,\\ldots, k_{2d}) = \\int \\mu(dx_1 \\cdots dx_{2d}) \\,\n \\exp\\Bigl(i\\sum_{j=1}^{2d} k_j x_j \\Bigr)\\,,\n\\end{equation}\nor, in our notation on $\\mathscr{H} = \\mathbb{C}^{d}$,\n\\begin{equation}\n \\hat{\\mu}(\\phi) = \\int \\mu(d\\psi) \\,\n \\exp\\Bigl(i\\Re \\scp{\\phi}{\\psi} \\Bigr)\\,,\n\\end{equation}\nwhere $\\Re$ denotes the real part. We write $\\mu_n = G(\\rho_n)$ and\n$\\mu=G(\\rho)$; their characteristic functions are:\n\\begin{equation}\\label{hatGauss}\n \\hat{\\mu}_n(\\psi) = \\exp\\bigl(-\\bra{\\psi}\\rho_n\\ket{\\psi}\\bigr)\\,, \\quad\n \\hat{\\mu}(\\psi) = \\exp\\bigl(-\\bra{\\psi}\\rho\\ket{\\psi}\\bigr)\\,.\n\\end{equation}\nIf $\\rho_n \\to \\rho$ then $\\bra{\\psi}\\rho_n\\ket{\\psi} \\to\n\\bra{\\psi}\\rho\\ket{\\psi}$ for every $\\psi$ and thus $\\hat{\\mu}_n \\to \\hat{\\mu}$ pointwise. Since (e.g., \\cite{Bill}) pointwise convergence of\nthe characteristic functions is equivalent (in finite dimension) to\nweak convergence of the associated measures, it follows that\n$G(\\rho_n) \\Rightarrow G(\\rho)$, which is what we wanted to show.\n\\endproof\n\n\n\\proof[Proof of Lemma~\\ref{lem:cont}]\nSince $\\mathscr{D}(\\mathscr{H})$ is compact, uniform continuity follows from continuity. That is, it suffices to show that, assuming $\\rho_n \\in \\mathscr{D}(\\mathscr{H})$ for every $n\\in\\mathbb{N}$,\n\\begin{equation}\\label{GAPcont}\n\\text{if }\\rho_n \\to \\rho\n\\text{ then }GAP(\\rho_n) \\Rightarrow GAP(\\rho)\\,.\n\\end{equation}\nThis follows from Lemma~\\ref{contGk}, the continuity of the adjustment mapping $A$ defined in \\eqref{Adef} in Section~\\ref{sec:GAPdef}, and the continuity of the projection $P:\\mathscr{H}\\setminus\\{0\\}\\to\\mathbb{S}(\\mathscr{H})$. 
Our first step is to establish the continuity of $A$ on the set of probability measures $\\mu$ on $\\mathscr{H}$ such that $\\int \\mu(d\\psi) \\, \\|\\psi\\|^2 = 1$: If, for every $n\\in\\mathbb{N}$, $\\mu_n$ is a probability measure on the Borel $\\sigma$-algebra of $\\mathscr{H}$ such that $\\int \\mu_n(d\\psi) \\, \\|\\psi\\|^2 = 1$, then\n\\begin{equation}\\label{Acont}\n\\text{if }\\mu_n \\Rightarrow \\mu\n\\text{ and }\\int \\mu(d\\psi) \\,\\|\\psi\\|^2 = 1\n\\text{ then }A\\mu_n\\Rightarrow A\\mu\\,.\n\\end{equation}\n\nFix $\\varepsilon>0$ and an arbitrary non-zero, bounded, continuous\nfunction $f: \\mathscr{H} \\to \\mathbb{R}$. As before, we use the notation $N(\\psi) = \\|\\psi\\|$. Since,\nby hypothesis, $\\mu(N^2) = 1$, there\nexists $R>0$ so large that\n\\begin{equation}\n \\int_{\\{\\psi\\in \\mathscr{H}: \\|\\psi\\|<R\\}} \\mu(d\\psi) \\, \\|\\psi\\|^2 > 1- \\frac{\\varepsilon}{6\\|f\\|_\\infty}\\,.\n\\end{equation}\nLet the ``cut-off function''\n$\\chi_0: [0,\\infty) \\to [0,1]$ be any continuous function such that\n$\\chi_0(x)=1$ for $x\\leq R$ and $\\chi_0(x)=0$ for $x\\geq 2R$; set $\\chi(\\psi)=\\chi_0(\\|\\psi\\|)$. Because $\\chi N^2$ and $f\\chi N^2$ are bounded continuous functions, and because\n$\\mu_n \\Rightarrow \\mu$, we have that $\\mu_n(\\chi N^2) \\to \\mu(\\chi \nN^2)$ and $\\mu_n(f\\chi N^2) \\to \\mu(f\\chi N^2)$; that is, there is an $n_1\\in \\mathbb{N}$ such that, for all $n>n_1$,\n\\begin{equation}\n \\left| \\mu_n(\\chi N^2) - \\mu(\\chi N^2) \\right| < \\frac{\\varepsilon}{3\\|f\\|_\\infty} \n\\end{equation}\nand\n\\begin{equation}\n \\left| \\mu_n(f\\chi N^2) - \\mu(f\\chi N^2) \\right| < \\frac{\\varepsilon}{3}\\,.\n\\end{equation}\nThus, for all $n>n_1$, we have that\n\\begin{align}\n &\\left| A\\mu_n(f) - A\\mu(f) \\right| \n = \\left| \\mu_n(f N^2) - \\mu(f N^2) \\right| \\\\\n &\\leq \\left| \\mu_n(f\\chi N^2) - \\mu(f\\chi N^2) \\right| +\n \\left| \\mu_n\\bigl( f(1-\\chi) N^2 \\bigr) \\right| +\n \\left| \\mu\\bigl( f(1-\\chi)N^2 \\bigr) \\right| \\\\\n &< \\frac{\\varepsilon}{3} + \\|f\\|_\\infty \\mu_n\\bigl( (1-\\chi)N^2 \\bigr) \n + \\|f\\|_\\infty \\mu\\bigl( (1-\\chi)N^2\\bigr)\\\\\n &= \\frac{\\varepsilon}{3} + \\|f\\|_\\infty \\bigl(1-\\mu_n(\\chi N^2)\\bigr)\n + \\|f\\|_\\infty \\bigl(1-\\mu(\\chi N^2)\\bigr)\\\\\n &\\leq \\frac{\\varepsilon}{3} + 2\\|f\\|_\\infty \\bigl(1-\\mu(\\chi N^2)\\bigr) \n + \\|f\\|_\\infty \\bigl|\\mu_n(\\chi N^2)-\\mu(\\chi N^2)\\bigr|\\\\\n &\\leq \\frac{\\varepsilon}{3}+\\frac{\\varepsilon}{3}+\\frac{\\varepsilon}{3}\n = \\varepsilon\\,.\n\\end{align}\nThis proves \\eqref{Acont}.\\footnote{We remark that the \nhypothesis $\\int \\mu(d\\psi) \\|\\psi\\|^2=1$ cannot\nbe dropped, that is, does not follow from $\\int \\mu_n(d\\psi)\n\\|\\psi\\|^2=1$. An example is $\\mu_n = (1-1\/n) \\delta_0 + (1\/n)\n\\delta_{\\psi_n}$, where $\\delta_\\phi$ means the Dirac delta measure\nat $\\phi$ and $\\psi_n$ is any vector with $\\|\\psi_n\\|^2 = n$; then\n$\\mu_n$ is a probability measure with $\\int \\mu_n(d\\psi)\n\\|\\psi\\|^2=1$ but $\\mu_n \\Rightarrow \\delta_0$, which has $\\int\n\\delta_0(d\\psi) \\|\\psi\\|^2=0$.}\n\n\nWe are now ready to establish \\eqref{GAPcont}. Suppose $\\rho_n\\to\\rho$. We\nhave that $\\GAP{\\rho_n}=P_*A(\\G{\\rho_n})$ and that $\\left(AG(\\rho)\\right)(\\{0\\})=0$. Since\n$\\psi\\mapsto P\\psi$ is continuous for $\\psi\\neq 0$, \\eqref{GAPcont}\nfollows from \\eqref{Acont} and Lemma~\\ref{contGk}. 
This completes the proof of Lemma~\\ref{lem:cont}.\n\\endproof\n\n\\bigskip\n\n\\proof[Proof of Lemma~\\ref{lem:contLinfty}]\nWe first note that, for any self-adjoint $d\\times d$ matrix $A$ and $\\psi\\in\\mathbb{S}(\\mathbb{C}^d)$,\n\\begin{equation}\\label{trnormineq}\n\\Bigl| \\scp{\\psi}{A|\\psi} \\Bigr| \\leq \\|A\\| \\leq \\|A\\|_{\\tr}\\,.\n\\end{equation}\n\nFor any density matrix $\\rho\\in\\mathscr{D}(\\mathscr{H})$ of which zero is not an eigenvalue, the density of $GAP(\\rho)$ relative to $u_{\\mathbb{S}(\\mathscr{H})}$ is given by \\cite{Gold1}\n\\begin{align}\n \\frac{dGAP(\\rho)}{du_{\\mathbb{S}(\\mathscr{H})}}(\\psi) &= \\frac{1}{\\pi^d \\, \\det \\rho}\n \\int\\limits_0^\\infty dr \\, r^{2d-1} \\, r^2 \\exp(-r^2 \\langle \\psi |\n \\rho^{-1} | \\psi \\rangle) = \\\\ &= \\frac{d!}{2\\pi^{d} \\, \\det\n \\rho} \\, \\langle \\psi | \\rho^{-1} | \\psi \\rangle^{-d-1}\n \\,.\\label{mupowerlaw}\n\\end{align}\n\nUsing the last expression, we will now show that \\eqref{eq:contLinfty} holds when $\\rho$ is sufficiently close to $\\Omega$. \nThis follows from the facts (i)~that, on $\\mathscr{D}_{\\geq \\gamma}(\\mathscr{H})$, the functions $\\rho\\mapsto1\/\\det\\rho$ and $\\rho\\mapsto\\rho^{-1}$ are uniformly continuous, (ii)~that\n\\begin{equation}\n\\Bigl| \\scp{\\psi}{\\rho^{-1}|\\psi}-\\scp{\\psi}{\\Omega^{-1}|\\psi}\\Bigr| \\leq \\|\\rho^{-1}-\\Omega^{-1}\\|_{\\tr}\n\\end{equation}\nfor all $\\psi\\in\\mathbb{S}(\\mathscr{H})$, (iii)~that the function $x\\mapsto x^{-d-1}$ is uniformly continuous on the interval $[1,\\infty)$, and (iv)~that $\\scp{\\psi}{\\rho^{-1}|\\psi}\\geq 1$, $\\scp{\\psi}{\\Omega^{-1}|\\psi}\\geq 1$. \nThis establishes the existence of $r(\\varepsilon,d,\\gamma)>0$ as described in Lemma~\\ref{lem:contLinfty}. \n\nNow \\eqref{eq:contL1} follows from \\eqref{eq:contLinfty} according to\n\\begin{align}\n&\\bigl| GAP(\\rho)(f)-GAP(\\Omega)(f) \\bigr|\\nonumber\\\\\n&=\\Biggl|\\,\\,\\int\\limits_{\\mathbb{S}(\\mathscr{H})}du_{\\mathbb{S}(\\mathscr{H})} \\biggl( \\frac{dGAP(\\rho)}{du_{\\mathbb{S}(\\mathscr{H})}}(\\psi)\n-\\frac{dGAP(\\Omega)}{du_{\\mathbb{S}(\\mathscr{H})}}(\\psi)\\biggr) f(\\psi) \\Biggr|\\\\\n&\\leq\\int\\limits_{\\mathbb{S}(\\mathscr{H})}du_{\\mathbb{S}(\\mathscr{H})} \\biggl| \\frac{dGAP(\\rho)}{du_{\\mathbb{S}(\\mathscr{H})}}(\\psi)\n-\\frac{dGAP(\\Omega)}{du_{\\mathbb{S}(\\mathscr{H})}}(\\psi)\\biggr| \\, |f(\\psi)|\n<\\varepsilon \\, \\|f\\|_1\\,.\n\\end{align}\n\\endproof\n\n\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm3}}\n\n\\proof[Proof of Theorem~\\ref{thm3}.] \nSuppose we are given $0<\\varepsilon<1$, $0<\\delta<1$, $d_1\\in\\mathbb{N}$, a Hilbert space $\\mathscr{H}_1$ of dimension $d_1$, and a continuous function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$. Set\n\\begin{equation}\nD_R=D_R(\\varepsilon,\\delta,d_1,f) = \\frac{4}{r(\\varepsilon\/2,d_1,f)^2}\\max\\Bigl(d_1^2, 18\\pi^3 \\log (8\/\\delta) \\Bigr)\\,,\n\\end{equation}\nwith $r(\\varepsilon,d,f)$ as provided by Lemma~\\ref{lem:cont}. Now consider any $d_R,d_2\\in\\mathbb{N}$ with $d_R>D_R$ and $d_2>D_2(\\varepsilon\/2\\|f\\|_\\infty,\\delta\/2,d_1)$, any $\\mathscr{H}_2$ and $\\mathscr{H}_R\\subseteq \\mathscr{H}_1\\otimes\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{2\/R}=d_{2\/R}$. 
Let $M(f,\\varepsilon)$ be the set mentioned in \\eqref{eq:thm3}, \n\\begin{equation}\nM(f,\\varepsilon)=\n\\left\\{ \\bigl( \\psi, b \\bigr) \\in\n \\mathbb{S}(\\mathscr{H}_R) \\times ONB (\\mathscr{H}_2) : \\bigl|\n \\mu_1^{\\psi,b}(f) - \\GAP{\\tr_2 \\rho_R}(f) \\bigr| < \\varepsilon \\right\\},\n\\end{equation}\nlet\n\\begin{equation}\nM'(f,\\varepsilon) =\n\\left\\{\\bigl( \\psi, b \\bigr) \\in\n \\mathbb{S}(\\mathscr{H}_R) \\times ONB (\\mathscr{H}_2) : \\bigl|\n \\mu_1^{\\psi,b}(f) - \\GAP{\\rho_1^\\psi}(f) \\bigr| < \\varepsilon \\right\\}\n\\end{equation}\nand\n\\begin{equation}\nM''(\\varepsilon) =\n\\Bigl\\{\\psi\\in\\mathbb{S}(\\mathscr{H}_R): \\|\\rho_1^\\psi -\n\\tr_2 \\rho_R\\|_{\\tr}<\\varepsilon \\Bigr\\}.\n\\end{equation}\nThen, by Lemma~\\ref{lem:cont},\n\\begin{equation}\\label{MM'M''}\nM(f,\\varepsilon) \\supseteq \nM'\\Bigl(f,\\frac{\\varepsilon}{2}\\Bigr) \\cap \n\\Bigl[M''\\Bigl(r\\bigl(\\frac{\\varepsilon}{2},d_1,f\\bigr)\\Bigr)\\times ONB(\\mathscr{H}_2)\\Bigr]\\,.\n\\end{equation}\n\nTheorem~\\ref{corbasis} yields, using our assumption $d_2>D_2(\\varepsilon\/2\\|f\\|_\\infty,\\delta\/2,d_1)$, that for every $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$,\n\\begin{equation}\nu_{ONB} \\Bigl\\{ b \\in ONB(\\mathscr{H}_2): \\bigl|\\mu_1^{\\psi,b}(f) - \\GAP{\\rho_1^\\psi}(f)\\bigr|< \\frac{\\varepsilon}{2} \\Bigr\\} \\geq 1-\\delta\/2\\,.\n\\end{equation}\nThus, averaging over $\\psi\\in\\mathbb{S}(\\mathscr{H}_R)$ according to $u_R$,\n\\begin{equation}\\label{M'}\nu_R\\times u_{ONB} \\Bigl( M'(f,\\varepsilon\/2) \\Bigr) \\geq 1 -\\delta\/2\\,.\n\\end{equation}\n\nLemma~\\ref{lem:can} with $\\eta = r\/2$ for $r=r(\\varepsilon\/2,d_1,f)$ yields, using our assumption $d_R > 4d_1^2\/r^2$, which implies that $d_1\/\\sqrt{d_R}\\leq r\/2$, that\n\\begin{equation}\nu_R(M''(r)) \\geq 1 - 4\\exp\\Bigl(-\\frac{d_R r^2}{4\\cdot 18\\pi^3}\\Bigr)\\,.\n\\end{equation}\nUsing our assumption $d_R > 4\\cdot 18\\pi^3\\log(8\/\\delta)\/r^2$, the right hand side is greater than or equal to $1-\\delta\/2$, and thus\n\\begin{equation}\\label{M''}\nu_R\\times u_{ONB} \\Bigl[M''(r)\\times ONB(\\mathscr{H}_2)\\Bigr] \\geq 1- \\delta\/2\\,.\n\\end{equation}\n\nFrom \\eqref{M'}, \\eqref{M''}, and \\eqref{MM'M''} together we have that\n\\begin{equation}\nu_R\\times u_{ONB} \\Bigl[ M(f,\\varepsilon) \\Bigr] \\geq 1 -\\delta\\,,\n\\end{equation}\nwhich is what we wanted to show.\n\\endproof\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm4}}\n\n\\proof[Proof of Theorem~\\ref{thm4}]\nSuppose we are given $0<\\varepsilon<1$, $0<\\delta<1$, $d_1\\in\\mathbb{N}$, $0<\\gamma<1\/d_1$, and a Hilbert space $\\mathscr{H}_1$ of dimension $d_1$. Set\n\\begin{align}\nD'_R&=D'_R(\\varepsilon,\\delta,d_1,\\gamma) = \\frac{4}{(r')^2}\\max\\Bigl(d_1^2, 18\\pi^3 \\log (8\/\\delta) \\Bigr)\\,,\\\\\nr'&=r'(\\varepsilon,d_1,\\gamma)=\\frac{1}{2}r(\\varepsilon\/2,d_1,\\gamma)\\,,\n\\end{align}\nwith $r(\\varepsilon,d,\\gamma)$ as provided by Lemma~\\ref{lem:contLinfty}. Now consider any $d_R,d_2\\in\\mathbb{N}$ with $d_R>D'_R$ and $d_2>D_2(\\varepsilon\/2,\\delta\/2,d_1)$, any $\\Omega\\in\\mathscr{D}_{\\geq\\gamma}(\\mathscr{H}_1)$, any $\\mathscr{H}_2$ and $\\mathscr{H}_R\\subseteq \\mathscr{H}_1\\otimes\\mathscr{H}_2$ with $\\dim\\mathscr{H}_{2\/R}=d_{2\/R}$, and any bounded measurable function $f:\\mathbb{S}(\\mathscr{H}_1)\\to\\mathbb{R}$. 
Let $M_0(f,\\varepsilon)$ be the set mentioned in \\eqref{eq:thm4}, \n\\begin{equation}\nM_0(f,\\varepsilon)=\n\\left\\{ \\bigl( \\psi, b \\bigr) \\in\n \\mathbb{S}(\\mathscr{H}_R) \\times ONB (\\mathscr{H}_2) : \\bigl|\n \\mu_1^{\\psi,b}(f) - GAP(\\Omega)(f) \\bigr| < \\varepsilon \\, \\|f\\|_\\infty \\right\\},\n\\end{equation}\nlet, as in the proof of Theorem~\\ref{thm3},\n\\begin{equation}\nM'(f,\\varepsilon) =\n\\left\\{\\bigl( \\psi, b \\bigr) \\in\n \\mathbb{S}(\\mathscr{H}_R) \\times ONB (\\mathscr{H}_2) : \\bigl|\n \\mu_1^{\\psi,b}(f) - GAP(\\rho_1^\\psi)(f) \\bigr| < \\varepsilon \\right\\},\n\\end{equation}\nlet\n\\begin{equation}\nM_0''(\\varepsilon) =\n\\Bigl\\{\\psi\\in\\mathbb{S}(\\mathscr{H}_R): \\|\\rho_1^\\psi -\n\\Omega\\|_{\\tr}<\\varepsilon \\Bigr\\},\n\\end{equation}\nand let, as in the proof of Theorem~\\ref{thm3},\n\\begin{equation}\nM''(\\varepsilon) =\n\\Bigl\\{\\psi\\in\\mathbb{S}(\\mathscr{H}_R): \\|\\rho_1^\\psi -\n\\tr_2 \\rho_R\\|_{\\tr}<\\varepsilon \\Bigr\\}.\n\\end{equation}\nNow assume $\\bigl\\|\\tr_2 \\rho_R-\\Omega\\bigr\\|_{\\tr} < r'$, i.e., \\eqref{ROmega}. Then, by the triangle inequality,\n\\begin{equation}\\label{M0''M''}\nM''(r') \\subseteq M_0''(2r') = M_0''\\bigl(r(\\varepsilon\/2,d_1,\\gamma)\\bigr)\\,.\n\\end{equation}\nMoreover, by Lemma~\\ref{lem:contLinfty} and $\\|f\\|_1\\leq\\|f\\|_\\infty$,\n\\begin{equation}\\label{M0M'M0''}\nM_0(f,\\varepsilon) \\supseteq \nM'\\Bigl(f,\\frac{\\varepsilon}{2}\\|f\\|_\\infty\\Bigr) \\cap \n\\Bigl[M_0''\\bigl(r(\\varepsilon\/2,d_1,\\gamma)\\bigr)\\times ONB(\\mathscr{H}_2)\\Bigr]\\,.\n\\end{equation}\nTheorem~\\ref{corbasis} yields \\eqref{M'} with $\\varepsilon\/2$ replaced by $\\varepsilon\\|f\\|_\\infty\/2$, using our assumption $d_2>D_2(\\varepsilon\/2,\\delta\/2,d_1)$, and Lemma~\\ref{lem:can} yields \\eqref{M''} \\y{with $r$ replaced by $r'$}, using our assumption $d_R>D'_R$. From \\eqref{M'}, \\eqref{M''}, \\eqref{M0''M''}, and \\eqref{M0M'M0''} together we have that\n\\begin{equation}\nu_R\\times u_{ONB} \\Bigl[ M_0(f,\\varepsilon) \\Bigr] \\geq 1 -\\delta\\,,\n\\end{equation}\nwhich is what we wanted to show.\n\\endproof\n\n\n\n\n\n\n\\bigskip\n\n\\noindent\\textit{Acknowledgments.} \nWe are grateful to Benoit Collins for helpful discussions.\nS.~Goldstein was supported in part by the National Science Foundation [grant DMS-0504504].\nJ.~L.~Lebowitz and C.~Mastrodonato were supported in part by the National Science Foundation [grant DMR 08-02120] and the Air Force Office of Scientific Research [grant AF-FA 49620-01-0154].\nN.~Zangh\\`\\i\\ was supported in part by Istituto Nazionale di Fisica Nucleare. \n