diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmdvy" "b/data_all_eng_slimpj/shuffled/split2/finalzzmdvy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmdvy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:Introduction}\nRapid advances in sensing and data acquisition technologies are increasingly resulting in individual data samples or signals structured by multiple \\textit{modes}. Examples include hyperspectral video (four modes; two spatial, one temporal, and one spectral), colored depth video (five modes; two spatial, one temporal, one spectral, and one depth), and four-dimensional tomography (four modes; three spatial and one temporal). Such data form multiway arrays and are called \\textit{tensor data}~\\cite{smilde2005multi,kolda2009tensor}.\n\nTypical feature extraction approaches that handle tensor data tend to collapse or vectorize the tensor into a long one-dimensional vector and apply existing processing methods for one-dimensional data. Such approaches ignore the structure and inter-mode correlations in tensor data. More recently, several works instead assume a structure on the tensor of interest through tensor decompositions such as the CANDECOMP\/PARAFAC (CP) decomposition~\\cite{harshman1970foundations}, Tucker decomposition~\\cite{tucker1963implications}, and PARATUCK decomposition~\\cite{kolda2009tensor} to obtain meaningful representations of tensor data. Because these decompositions involve fewer parameters, or degrees of freedom, in the model, inference algorithms that exploit such decompositions often perform better than those that assume the tensors to be unstructured. Moreover, algorithms utilizing tensor decompositions tend to be more efficient in terms of storage and computational costs: the cost of storing the decomposition can be substantially lower, and numerical methods can exploit the structure by solving simpler subproblems.\n\nIn this work, we focus on the problem of finding sparse representations of tensors that admit a Tucker decomposition. More specifically, we analyze the \\textit{dictionary learning} (DL) problem for tensor data. The traditional DL problem for vector-valued data involves constructing an overcomplete basis (dictionary) such that each data sample can be represented by only a few columns (atoms) of that basis~\\cite{aharon2006img}. To account for the Tucker structure of tensor data, we require that the dictionary underlying the vectorized versions of tensor data samples be \\textit{Kronecker structured} (KS). That is, it is comprised of \\textit{coordinate dictionaries} that independently transform various modes of the tensor data. Such dictionaries have successfully been used for tensor data representation in applications such as hyperspectral imaging, video acquisition, distributed sensing, magnetic resonance imaging, and the tensor completion problem (multidimensional inpainting)~\\cite{duarte2012kronecker,caiafa2013multidimensional}. To provide some insights into the usefulness of KS dictionaries for tensor data, consider the hypothetical problem of finding sparse representations of $1024 \\times 1024 \\times 32 $ hyperspectral images. Traditional DL methods require each image to be rearranged into a one-dimensional vector of length $2^{25}$ and then learn an unstructured dictionary that has a total of $(2^{25} p)$ unknown parameters, where $p \\geq 2^{25}$. 
In contrast, KS DL only requires learning three coordinate dictionaries of dimensions $1024 \\times p_1$, $1024 \\times p_2$, and $32 \\times p_3$, where $p_1,p_2\\geq 1024$, and $p_3 \\geq 32$. This gives rise to a total of $[1024 (p_1 + p_2) + 32p_3]$ unknown parameters in KS DL, which is significantly smaller than $2^{25} p$. While such ``parameter counting'' points to the usefulness of KS DL for tensor data, a fundamental question remains open in the literature: what are the theoretical limits on the learning of KS dictionaries underlying $K$th-order tensor data? To answer this question, we examine the KS-DL objective function and find sufficient conditions on the number of samples (or sample complexity) for successful local identification of \\textit{coordinate dictionaries} underlying the KS dictionary. To the best of our knowledge, this is the first work presenting such identification results for the KS-DL problem.\n\n\\subsection{Our Contributions}\\label{subsec:contr}\nWe derive sufficient conditions on the true coordinate dictionaries, coefficient and noise distributions, regularization parameter, and the number of data samples such that the KS-DL objective function has a local minimum within a small neighborhood of the true coordinate dictionaries with high probability. Specifically, suppose the observations are generated from a true dictionary $\\mb{D}^0 \\in \\mbb{R}^{m \\times p}$ consisting of the Kronecker product of $K$ coordinate dictionaries, $\\mb{D}_k^0 \\in \\mbb{R}^{m_k \\times p_k}, k \\in \\lr{1,\\dots,K}$, where $m = \\prod_{k=1}^Km_k$ and $p = \\prod_{k=1}^Kp_k$. Our results imply that $N = \\max_{k\\in[K]} \\Omega(m_kp_k^3\\varepsilon_k^{-2})$ samples are sufficient (with high probability) to recover the underlying coordinate dictionaries $\\mb{D}_k^0$ up to the given estimation errors $\\varepsilon_k, k \\in \\lr{1,\\dots,K}$.\n\n\\subsection{Relationship to Prior Work}\\label{subsec:prior}\n\nAmong existing works on structured DL that have focused exclusively on the Tucker model for tensor data, several have only empirically established the superiority of KS DL in various settings for 2nd and 3rd-order tensor data~\\cite{hawe2013separable,zubair2013tensor,caiafa2013multidimensional,roemer2014tensor,dantas2017learning,ghassemi2017stark}.\n\nIn the case of unstructured dictionaries, several works do provide analytical results for the dictionary identifiability problem~\\cite{aharon2006uniqueness,agarwal2013learning,agarwal2013exact,arora2013new, schnass2014identifiability,schnass2014local,gribonval2014sparse,jung2015minimax}. These results, which differ from each other in terms of the distance metric used, cannot be trivially extended for the KS-DL problem. In this work, we focus on the Frobenius norm as the distance metric. Gribonval et al.~\\cite{gribonval2014sparse} and Jung et al.~\\cite{jung2015minimax} also consider this metric, with the latter work providing minimax lower bounds for dictionary reconstruction error. In particular, Jung et al.~\\cite{jung2015minimax} show that the number of samples needed for reliable reconstruction (up to a prescribed mean squared error $\\varepsilon$) of an $m\\times p$ dictionary within its local neighborhood must be \\emph{at least} on the order of $N = \\Omega(mp^2\\varepsilon^{-2})$. 
Gribonval et al.~\\cite{gribonval2014sparse} derive a complementary upper bound for the sample complexity of the DL problem and show that $N = \\Omega(mp^3\\varepsilon^{-2})$ samples are \\emph{sufficient} to guarantee (with high probability) the existence of a local minimum of the DL cost function within the $\\varepsilon$ neighborhood of the true dictionary. In our previous works, we have obtained lower bounds on the minimax risk of KS DL for 2nd-order~\\cite{shakeri2016minimax} and $K$th-order tensors~\\cite{shakeri2017sample,shakeri2016arxiv}, and have shown that the number of samples necessary for reconstruction of the true KS dictionary within its local neighborhood up to a given estimation error scales with the sum of the product of the dimensions of the coordinate dictionaries, i.e., $N = \\Omega(p\\sum_{k=1}^Km_kp_k\\varepsilon^{-2})$. Compared to this sample complexity lower bound, our upper bound is larger by a factor $\\max_{k} p_k^2$.\n\nIn terms of the analytical approach, although we follow the same general proof strategy as the vectorized case of Gribonval et al.~\\cite{gribonval2014sparse}, our extension poses several technical challenges. These include: ($i$) expanding the asymptotic objective function into a summation in which individual terms depend on coordinate dictionary recovery errors, ($ii$) translating identification conditions on the KS dictionary to conditions on its coordinate dictionaries, and ($iii$) connecting the asymptotic objective function to the empirical objective function using concentration of measure arguments; this uses the \\textit{coordinate-wise Lipschitz continuity} property of the KS-DL objective function with respect to the coordinate dictionaries. To address these challenges, we require additional assumptions on the generative model. These include: ($i$) the true dictionary and the recovered dictionary belong to the class of KS dictionaries, and ($ii$) dictionary coefficient tensors follow the \\textit{separable sparsity} model that requires nonzero coefficients to be grouped in blocks~\\cite{caiafa2013computing,shakeri2016arxiv}.\n\n\\subsection{Notational Convention and Preliminaries} \\label{subsec:notation}\nUnderlined bold upper-case, bold upper-case and lower-case letters are used to denote tensors, matrices and vectors, respectively, while non-bold lower-case letters denote scalars. For a tensor $\\ul{\\X}$, its $(i_1,\\dots,i_K)$-th element is denoted as $\\underline{x}_{i_1\\dots i_K}$. The $i$-th element of vector $\\mathbf{v}$ is denoted by $v_i$ and the $ij$-th element of matrix $\\mb{X}$ is denoted as $x_{ij}$. The $k$-th column of $\\mb{X}$ is denoted by $\\mb{x}_k$ and $\\mb{X}_{\\mathcal{I}}$ denotes the matrix consisting of the columns of $\\mb{X}$ with indices $\\mathcal{I}$. We use $|\\mathcal{I}|$ for the cardinality of the set $\\mathcal{I}$. Sometimes we use matrices indexed by numbers, such as $\\mb{X}_1$, in which case a second index (e.g., $\\mathbf{x}_{1,k}$) is used to denote its columns. We use $\\mathop{\\mathrm{vec}}\\nolimits(\\mb{X})$ to denote the vectorized version of matrix $\\mb{X}$, which is a column vector obtained by stacking the columns of $\\mb{X}$ on top of one another. We use $\\diag{\\mb{X}}$ to denote the vector comprised of the diagonal elements of $\\mb{X}$ and $\\Diag{\\mb{v}}$ to denote the diagonal matrix whose diagonal elements are the elements of $\\mb{v}$. 
The elements of the sign vector of $\\mathbf{v}$, denoted as $\\mathop{\\mathrm{sign}}\\nolimits(\\mathbf{v})$, are equal to $\\mathop{\\mathrm{sign}}\\nolimits(v_i)= v_i\/|v_i|$, for $v_i \\neq 0$, and $\\mathop{\\mathrm{sign}}\\nolimits(v_i)=0$ for $v_i = 0$, where $i$ denotes the index of any element of $\\mathbf{v}$. We also use $\\sin(\\mb{v})$ to denote the vector with elements $\\sin(v_i)$ (used similarly for other trigonometric functions). Norms are given by subscripts, so $\\|\\mb{v}\\|_0$, $\\|\\mb{v}\\|_1$, and $\\|\\mb{v}\\|_2$ are the $\\ell_0$, $\\ell_1$, and $\\ell_2$ norms of $\\mathbf{v}$, while $\\|\\mb{X}\\|_2$ and $\\|\\mb{X}\\|_F$ are the spectral and Frobenius norms of $\\mb{X}$, respectively.\nWe use $[K]$ to denote $\\{1,2,\\dots,K\\}$ and $\\mb{X}_{1:K}$ to denote $\\{\\mb{X}_k\\}_{k=1}^K$.\n\nWe write $\\mb{X} \\otimes \\mb{Y}$ for the \\textit{Kronecker product} of two matrices $\\mb{X}\\in \\mbb{R}^{m\\times n}$ and $\\mb{Y}\\in \\mbb{R}^{p\\times q}$,\nwhere the result is an $mp \\times nq$ matrix and we have $\\|\\mb{X} \\otimes\\mb{Y} \\|_F = \\|\\mb{X}\\|_F\\| \\mb{Y}\\|_F$~\\cite{horn2012matrix}. We also use $\\bigotimes_{k \\in [K]} \\mb{X}_k \\triangleq \\mb{X}_1 \\otimes \\dots \\otimes \\mb{X}_K$. We define $\\mb{H}_{\\mb{X}}\\triangleq (\\mb{X}^\\top \\mb{X})^{-1}$, $\\mb{X}^+ \\triangleq \\mb{H}_{\\mb{X}}\\mb{X}^\\top$, and $\\mb{P}_{\\mb{X}} \\triangleq \\mb{X} \\mb{X}^+$ for a full column rank matrix $\\mb{X}$. In the body, we sometimes also use $\\Delta f(\\mb{X};\\mb{Y}) \\triangleq f(\\mb{X}) - f(\\mb{Y})$.\n\nFor matrices $\\mb{X}$ and $\\mb{Y}$ of appropriate dimensions, we define their distance to be $d(\\mb{X},\\mb{Y}) = \\|\\mb{X}-\\mb{Y}\\|_F$. For $\\mb{X}^0$ belonging to some set $\\mathcal{X}$, we define\n\t\\begin{align}\n\t\\mc{S}_{\\varepsilon}(\\mb{X}^0) \\triangleq \\lr{\\mb{X} \\in \\mathcal{X}: \\|\\mb{X} - \\mb{X}^0\\|_F = \\varepsilon}, \\nonumber \\\\\n\t\\mathcal{B}_{\\varepsilon}(\\mb{X}^0) \\triangleq \\lr{\\mb{X} \\in \\mathcal{X}: \\|\\mb{X} - \\mb{X}^0\\|_F < \\varepsilon}, \\nonumber \\\\\t\n\t\\bar{\\mathcal{B}}_{\\varepsilon}(\\mb{X}^0) \\triangleq \\lr{\\mb{X} \\in \\mathcal{X}: \\|\\mb{X} - \\mb{X}^0\\|_F \\leq \\varepsilon}.\n\t\\end{align}\nNote that while $\\mc{S}_{\\varepsilon}(\\mb{X}^0)$ represents the surface of a sphere, we use the term ``sphere\" for simplicity. We use the standard ``big-$\\mathcal{O}$'' (Knuth) notation for asymptotic scaling.\n\n\\subsubsection{Tensor Operations and Tucker Decomposition for Tensors}\nA tensor is a multidimensional array where the order of the tensor is defined as the number of dimensions in the array.\n\n\\textit{Tensor Unfolding: }A tensor $\\ul{\\X} \\in \\mbb{R}^{p_1 \\times p_2\\times \\dots \\times p_K}$ of order $K$ can be expressed in matrix form by reordering its elements. This reordering is called unfolding: the mode-$k$ unfolding matrix of a tensor is a $p_k \\times \\prod_{i \\ne k} p_i$ matrix, which we denote by $\\mb{X}_{(k)}$. 
Each column of $\\mb{X}_{(k)}$ is the vector formed by fixing all indices of $\\ul{\\X}$ except the one corresponding to the $k$th mode.\nThe $k$-rank of a tensor $\\ul{\\X}$ is defined as $\\mathop{\\mathrm{rank}}\\nolimits(\\mb{X}_{(k)})$; trivially, $\\mathop{\\mathrm{rank}}\\nolimits(\\mb{X}_{(k)}) \\leq p_k$.\n\n\\textit{Tensor Multiplication: } The mode-$k$ matrix product of the tensor $\\ul{\\X}$ and a matrix $\\mb{A} \\in \\mbb{R}^{m_k \\times p_k}$, denoted by $\\ul{\\X} \\times_k \\mb{A}$, is a tensor of size $p_1 \\times \\dots \\times p_{k-1} \\times m_k \\times p_{k+1} \\times \\dots \\times p_K$ whose elements are\n$\n\t(\\ul{\\X} \\times_k \\mb{A})_{i_1\\dots i_{k-1} j i_{k+1} \\dots i_K} = \\sum_{i_k=1}^{p_k} \\underline{x}_{i_1\\dots i_{k-1} i_k i_{k+1} \\dots i_K} a_{ji_k}.\n$\nThe mode-$k$ matrix product of $\\ul{\\X}$ and $\\mb{A}$ and the matrix multiplication of $\\mb{X}_{(k)}$ and $\\mb{A}$ are related~\\cite{kolda2009tensor}:\n\t\\begin{align}\n\t\\ul{\\Y} = \\ul{\\X} \\times_k \\mb{A} \\Leftrightarrow \\mb{Y}_{(k)} = \\mb{A} \\mb{X}_{(k)}.\n\t\\end{align}\n\t\n\\textit{Tucker Decomposition: } The Tucker decomposition decomposes a tensor into a \\textit{core tensor} multiplied by a matrix along each mode~\\cite{tucker1963implications,kolda2009tensor}. We take advantage of the Tucker model since we can relate the Tucker decomposition to the Kronecker representation of tensors~\\cite{caiafa2013computing}.\nFor a tensor $\\ul{\\Y} \\in \\mbb{R}^{m_1 \\times m_2 \\times \\dots \\times m_K}$ of order $K$, if $\\mathop{\\mathrm{rank}}\\nolimits(\\mb{Y}_{(k)})\\leq p_k$ holds for all $k \\in [K]$ then, according to the Tucker model, $\\ul{\\Y}$ can be decomposed as\n\t\\begin{align} \\label{eq:UY_UX}\n\t\\ul{\\Y} = \\ul{\\X} \\times_1 \\mb{D}_1 \\times_2 \\mb{D}_2 \\times_3 \\dots \\times_K \\mb{D}_K,\n\t\\end{align}\nwhere $\\ul{\\X} \\in \\mbb{R}^{p_1 \\times p_2\\times \\dots \\times p_K}$ denotes the core tensor and $\\mb{D}_k \\in \\mbb{R}^{m_k \\times p_k}$ are factor matrices.\nThe following is implied by \\eqref{eq:UY_UX}~\\cite{kolda2009tensor}:\n\t\\begin{align*}\n\t\\mb{Y}_{(k)} = \\mb{D}_{k}\\mb{X}_{(k)}(\\mb{D}_{K} \\otimes \\dots \\otimes \\mb{D}_{k+1} \\otimes \\mb{D}_{k-1} \\otimes \\dots \\otimes \\mb{D}_1)^\\top.\n\t\\end{align*}\nSince the Kronecker product satisfies $\\mathop{\\mathrm{vec}}\\nolimits(\\mb{B}\\mb{X}\\mb{A}^\\top)=(\\mb{A} \\otimes \\mb{B})\\mathop{\\mathrm{vec}}\\nolimits(\\mb{X})$, \\eqref{eq:UY_UX} is equivalent to\n\t\\begin{align} \\label{eq:vecty_vectx}\n\t\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\Y}) = \\big( \\mb{D}_K \\otimes \\mb{D}_{K-1} \\otimes \\dots \\otimes \\mb{D}_1 \\big) \\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X}),\n\t\\end{align}\nwhere $\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\Y}) \\triangleq \\mathop{\\mathrm{vec}}\\nolimits(\\mb{Y}_{(1)})$ and $\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X}) \\triangleq \\mathop{\\mathrm{vec}}\\nolimits(\\mb{X}_{(1)})$.\n\n\\subsubsection{Definitions for Matrices}\nWe use the following definitions for a matrix $\\mb{D}$ with unit-norm columns:\n$\\delta_s(\\mb{D})$ denotes the \\textit{restricted isometry property} ($\\mathsf{RIP}$) constant of order $s$ for $\\mb{D}$~\\cite{candes2008restricted}.\nWe define the \\textit{worst-case coherence} of $\\mb{D}$ as $\\mu_1(\\mb{D}) = \\max_{\\substack{i,j\\\\i \\neq j}} \\lra{\\mb{d}_i^\\top \\mb{d}_j}$.\nWe also define the \\textit{order-$s$ cumulative coherence} of $\\mb{D}$ as\n\t\\begin{align} \\label{eq:mu_1}\n\t\\mu_{s}(\\mb{D}) \\triangleq 
\\max_{|\\mc{J} |\\leq s} \\max_{j \\not\\in \\mc{J}}\n\t\t\\|\\mb{D}_{\\mc{J}}^\\top \\mb{d}_{j}\\|_1.\n\t\\end{align}\nNote that for $s=1$, the cumulative coherence is equivalent to the worst-case coherence and $\\mu_{s}(\\mb{D}) \\leq s \\mu_1(\\mb{D})$~\\cite{gribonval2014sparse}.\nFor $\\mb{D} = \\bigotimes_{k \\in [K]} \\mb{D}_k$, where $\\mb{D}_k$'s have unit-norm columns, $\\mu_1(\\mb{D}) = \\max_{k \\in [K]} \\mu_1(\\mb{D}_k)$~\\cite[Corollary 3.6]{jokar2009sparse} and it can be shown that\\footnote{The proof of \\eqref{eq:mu_s} is provided in Appendix C.}:\n\t\\begin{align}\\label{eq:mu_s}\n\t\\mu_s(\\mb{D}) &\\leq \\max_{k \\in [K]} \\mu_{s_k}(\\mb{D}_k)\n\t\t\\bigg( \\prod_{\\substack{i \\in [K], \\\\ i \\neq k}} \\lrp{ 1+\\mu_{s_i-1}(\\mb{D}_i)} \\bigg).\n\t\\end{align}\t\n\nThe rest of the paper is organized as follows. We formulate the KS-DL problem in Section~\\ref{sec:model}. In Section~\\ref{sec:asymp}, we provide analysis for asymptotic recovery of coordinate dictionaries composing the KS dictionary and in Section~\\ref{sec:finite}, we present sample complexity results for identification of coordinate dictionaries that are based on the results of Section~\\ref{sec:asymp}. Finally, we conclude the paper in Section~\\ref{sec:discuss}. In order to keep the main exposition simple, proofs of the lemmas and propositions are relegated to appendices.\n\n\\section{System Model} \\label{sec:model}\nWe assume the observations are $K$th-order tensors $\\ul{\\Y} \\in \\mbb{R}^{m_1\\times m_2 \\times \\dots \\times m_K}$. Given generating \\textit{coordinate dictionaries} $\\mb{D}^0_k \\in \\mbb{R}^{m_k \\times p_k}$, \\textit{coefficient tensor} $\\ul{\\X} \\in \\mbb{R}^{p_1\\times p_2 \\times \\dots \\times p_K}$, and \\textit{noise tensor} $\\ul{\\N}$, we can write $\\mb{y} \\triangleq \\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\Y})$ using \\eqref{eq:vecty_vectx} as\\footnote{We have reindexed $\\mb{D}_k$'s in \\eqref{eq:vecty_vectx} for ease of notation.}\n\t\\begin{align} \\label{eq:obs_model}\n\t\\mb{y} = \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\bigg) \\mb{x} + \\mb{w},\n\t\t\\quad \\|\\mb{x}\\|_0 \\leq s,\n\t\\end{align}\nwhere $\\mb{x}=\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X}) \\in \\mbb{R}^{p}$ denotes the sparse generating coefficient vector, $\\mb{D}^0 = \\bigotimes \\mb{D}_k^0 \\in \\mbb{R}^{m\\times p}$ denotes the underlying KS dictionary, and $\\mb{w}=\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\N}) \\in \\mbb{R}^m$ denotes the underlying noise vector. Here, $\\mb{D}_k^0 \\in \\mathcal{D}_k = \\lr{ \\mb{D}_k \\in \\mbb{R}^{m_k\\times p_k}, \\|\\mb{d}_{k,j}\\|_2 = 1, \\forall j \\in [p_k]}$ for $k \\in [K]$, $p = \\prod_{k \\in [K]}p_k$ and $m = \\prod_{k \\in [K]}m_k$.\\footnote{Note that the $\\mathcal{D}_k$'s are compact sets on their respective oblique manifolds of matrices with unit-norm columns~\\cite{gribonval2014sparse}.} We use $\\bigotimes$ for $\\bigotimes_{k\\in[K]}$ in the following for simplicity of notation. We assume we are given $N$ noisy tensor observations, which are then stacked in a matrix $\\mb{Y} = [\\mb{y}_1,\\dots,\\mb{y}_N]$. To state the problem formally, we first make the following assumptions on distributions of $\\mb{x}$ and $\\mb{w}$ for each tensor observation.\n\n\\textit{Coefficient distribution:} We assume the coefficient tensor $\\ul{\\X}$ follows the random \\textit{``separable sparsity\"} model. 
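Before describing this model precisely, we provide a small synthetic illustration of the generative model in \\eqref{eq:obs_model}; the dimensions, Gaussian coordinate dictionaries, and coefficient and noise draws in the following Python sketch are arbitrary choices made for illustration only.\n\\begin{verbatim}\nimport numpy as np\nfrom functools import reduce\nfrom itertools import product\n\nrng = np.random.default_rng(0)\nm_k, p_k, s_k = [4, 5, 6], [8, 10, 12], [2, 2, 3]  # K = 3, illustrative\n\n# Coordinate dictionaries with unit-norm columns (members of the D_k's).\nD = [rng.standard_normal((m, p)) for m, p in zip(m_k, p_k)]\nD = [A \/ np.linalg.norm(A, axis=0) for A in D]\nD_ks = reduce(np.kron, D)            # D_1 (kron) D_2 (kron) D_3\n\n# Separable-sparsity support: s_k columns per mode, lexicographic index.\nJ_k = [rng.choice(p, size=s, replace=False) for p, s in zip(p_k, s_k)]\nsupport = [np.ravel_multi_index(j, p_k) for j in product(*J_k)]\n\nx = np.zeros(int(np.prod(p_k)))      # vectorized coefficient tensor\nx[support] = rng.standard_normal(len(support))\nw = 0.01 * rng.standard_normal(int(np.prod(m_k)))  # illustrative noise\ny = D_ks @ x + w                     # observation, as in the system model\n\\end{verbatim}\n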
Under the separable sparsity model, $\\mb{x}=\\mathop{\\mathrm{vec}}\\nolimits(\\ul{\\X})$ is sparse and the support of nonzero entries of $\\mb{x}$ is structured and random. Specifically, we sample $s_k$ elements uniformly at random from $[p_k]$, $k \\in [K]$. Then, the random support of $\\mb{x}$ is a set $\\mc{J} \\subseteq [p]$ with $|\\mc{J}|=s$ that is associated with an element of\n\\begin{align*}\n\t\\lr{\\mc{J}_1\\times \\mc{J}_2 \\times \\dots \\times \\mc{J}_K:\n\t\t\\mc{J}_k\\subseteq [p_k], |\\mc{J}_k|=s_k, k \\in [K]}\n\\end{align*}\nvia lexicographic indexing, where $s=\\prod_{k \\in [K]} s_k$, and the supports of the $\\mb{x}_{1:N}$'s are assumed to be independent and identically distributed (i.i.d.). This model requires nonzero entries of the coefficient tensors to be grouped in blocks and the sparsity level associated with each coordinate dictionary to be small~\\cite{caiafa2013computing}.\\footnote{In contrast, for coefficients following the random non-separable sparsity model, the support of the nonzero entries of the coefficient vector is assumed to be uniformly distributed over $\\lr{\\mc{J} \\subseteq [p]: |\\mc{J}|=s}$.}\n\nWe now make the same assumptions for the distribution of $\\mb{x}$ as assumptions A and B in Gribonval et al.~\\cite{gribonval2014sparse}.\nThese include:\n($i$) $\\mbb{E}\\lr{\\mb{x}_\\mc{J} \\mb{x}_\\mc{J}^\\top |\\mc{J}} = \\mbb{E}\\lr{x^2}\\mb{I}_{s}$,\n($ii$) $\\mbb{E}\\lr{\\mb{x}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top |\\mc{J}} = \\mbb{E} \\lr{|x|}\\mb{I}_{s}$, where $\\boldsymbol{\\sigma} = \\mathop{\\mathrm{sign}}\\nolimits(\\mb{x})$,\n($iii$) $\\mbb{E}\\lr{\\boldsymbol{\\sigma}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top |\\mc{J}} = \\mb{I}_{s}$,\n($iv$) the magnitude of $\\mb{x}$ is bounded, i.e., $\\|\\mb{x}\\|_2 \\leq M_x $ almost surely, and\n($v$) the nonzero entries of $\\mb{x}$ have a minimum magnitude, i.e., $\\min_{j \\in \\mc{J}} |x_j| \\geq x_{\\mathrm{min}}$ almost surely.\nFinally, we define $\\kappa_x \\triangleq \\mbb{E}\\lr{|x|}\/\\sqrt{\\mbb{E} \\lr{x^2}}$ as a measure of the flatness of $\\mb{x}$ ($\\kappa_x \\leq 1$, with $\\kappa_x=1$ when all nonzero coefficients are equal~\\cite{gribonval2014sparse}).\n\n\\textit{Noise distribution:} We make the following assumptions on the distribution of the noise, which is assumed i.i.d. across data samples:\n($i$) $\\mbb{E}\\lr{\\mb{w} \\mb{w}^\\top} = \\mbb{E}\\lr{w^2} \\mb{I}_m $,\n($ii$) $\\mbb{E}\\lr{\\mb{w} \\mb{x}^\\top |\\mc{J}}=\\mbb{E}\\lr{\\mb{w} \\boldsymbol{\\sigma}^\\top |\\mc{J}}=\\mathbf{0}$, and\n($iii$) the magnitude of $\\mb{w}$ is bounded, i.e., $\\|\\mb{w}\\|_2 \\leq M_w$ almost surely.\n\nOur goal in this paper is to recover the underlying coordinate dictionaries, $\\mb{D}^0_k$, from $N$ noisy realizations of tensor data.\nTo solve this problem, we take the empirical risk minimization approach and define\n\t\\begin{align}\n\t&f_\\mb{y} \\lrp{\\D_{1:K}} \\triangleq\n\t\t \\inf_{\\mb{x}' \\in \\mbb{R}^p } \\bigg\\{\n\t\t \\frac{1}{2} \\norm{\\mb{y} - \\lrp{\\bigotimes \\mb{D}_k } \\mb{x}'\n\t\t }_2^2\n\t\t +\\lambda\\|\\mb{x}'\\|_1 \\bigg\\},\n \t\t\\text{and} \\nonumber \\\\\n\t&F_\\mb{Y} \\lrp{\\D_{1:K}} \\triangleq\n\t\t \\frac{1}{N} \\sum_{n =1}^N f_{\\mb{y}_n}\\lrp{\\D_{1:K}} ,\n\t\\end{align}\nwhere $\\lambda$ is a regularization parameter. In theory, we can recover the coordinate dictionaries by solving the following regularized optimization program:\n\t\\begin{align}\n \\min_{\\substack{\\mb{D}_k \\in \\mathcal{D}_k \\\\ k \\in [K]}}\n\t\tF_\\mb{Y} \\lrp{\\D_{1:K}}. 
\\label{eq:f_x}\n\t\\end{align}\nMore specifically, given desired errors $\\lr{\\varepsilon_k}_{k=1}^K$, we want a local minimum of~\\eqref{eq:f_x} to be attained by coordinate dictionaries $\\wh{\\D}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]$. That is, there exist $\\wh{\\D}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, $k \\in [K]$, such that $F_\\mb{Y}(\\wh{\\D}_{1:K}) \\leq F_\\mb{Y}(\\D_{1:K})$ for all $\\D_{1:K}$ in a neighborhood of $\\wh{\\D}_{1:K}$.\\footnote{We focus on the local recovery of coordinate dictionaries (i.e., $\\wh{\\D}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$) due to ambiguities in the general DL problem. This ambiguity is a result of the fact that dictionaries are invariant to permutation and sign flips of dictionary columns, resulting in equivalence classes of dictionaries. Some works in the literature on conventional DL overcome this issue by defining distance metrics that capture the distance between these equivalence classes~\\cite{agarwal2013exact,agarwal2013learning,arora2013new}.}\nTo address this problem, we first minimize the statistical risk:\n\t\\begin{align} \\label{eq:f_x_asym}\n\t&\\min_{\\substack{\\mb{D}_k \\in \\mathcal{D}_k \\\\ k \\in [K]}} f_\\mbb{P} \\lrp{\\D_{1:K}} \\triangleq\n\t\\min_{\\substack{\\mb{D}_k \\in \\mathcal{D}_k \\\\ k \\in [K]}}\n\t\t\\mbb{E}_\\mb{y} \\lr{ f_\\mb{y}\\lrp{\\D_{1:K}}}.\n\t\\end{align}\nThen, we connect $F_\\mb{Y} \\lrp{\\D_{1:K}}$ to $f_\\mbb{P} \\lrp{\\D_{1:K}}$ using concentration of measure arguments and obtain the number of samples sufficient for local recovery of the coordinate dictionaries. Such a result ensures that any KS-DL algorithm that is guaranteed to converge to a local minimum, and which is initialized close enough to the true KS dictionary, will converge to a solution close to the generating coordinate dictionaries (as opposed to the generating KS dictionary, which is guaranteed by analysis of the vector-valued setup~\\cite{gribonval2014sparse}).\n\n\\section{Asymptotic Identifiability Results}\\label{sec:asymp}\nIn this section, we provide an identifiability result for the KS-DL objective function in \\eqref{eq:f_x_asym}. The implications of this theorem are discussed in Section~\\ref{sec:discuss}.\n\\begin{theorem}\\label{thm:asymp}\nSuppose the observations are generated according to \\eqref{eq:obs_model} and the dictionary coefficients follow the separable sparsity model of Section~\\ref{sec:model}. 
Further, assume the following conditions are satisfied:\n\t\\begin{align} \\label{eq:cond_k_i_p_i}\n\t&s_k \\leq \\frac{p_k}{8\\lrp{\\norm{\\mb{D}^0_k}_2+1}^2}, \\\\\\nonumber\n\t&\\max_{k \\in [K]} \\lr{\\mu_{s_k}(\\mb{D}^0_k)} \\leq \\frac{1}{4} , \\quad\n\t\\mu_s(\\mb{D}^0) <\\frac{1}{2},\n\t\\end{align}\nand\n\t\\begin{align} \\label{eq:cond_m_p}\n\t&\\frac{\\mbb{E}\\lr{x^2}}{M_x \\mbb{E}\\lr{|x|}} > \\frac{24\\sqrt{3}(4.5^{K\/2})K}{(1-2\\mu_s(\\mb{D}^0))} \\nonumber\\\\\n\t&\\qquad \\quad \\max_{k \\in [K]} \\lr{\n\t\t \\frac{s_k}{p_k}\n\t\t \\norm{ {\\mb{D}^0_k}^\\top\\mb{D}^0_k - \\mb{I}}_F\n\t\t \\lrp{ \\norm{\\mb{D}^0_k}_2+1}}.\n\t\\end{align}\nDefine\n\t\\begin{align} \\label{eq:cond_C_min_C_max}\n\t&C_{k,\\min} \\triangleq 8 (3^{\\frac{K+1}{2}})\\kappa_x^2\n\t\t\\lrp{\\frac{s_k}{p_k}}\n\t\t\\norm{{\\mb{D}_k^0}^\\top\\mb{D}_k^0 - \\mb{I}}_F \\lrp{ \\norm{\\mb{D}^0_k}_2+1} ,\n\t\t \\nonumber \\\\\n\t&C_{\\max} \\triangleq \\frac{1}{3K(1.5)^{K\/2}} \\frac{\\mbb{E}\\lr{|x|}}{M_x} (1-2\\mu_s(\\mb{D}^0)).\n\t\\end{align}\nThen, the map $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}=\\bigotimes_{k \\in [K]} \\widehat{\\mb{D}}_k$ such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, $k \\in [K]$, for any $\\varepsilon_k>0$ as long as\n\t\\begin{align} \\label{eq:cond_lambda}\n\t\\lambda \\leq \\frac{x_\\mathrm{min}}{8\\times 3^{(K-1)\/2}},\n\t\\end{align}\n\t\\begin{align} \\label{eq:cond_r_i}\n\t&\\frac{\\lambda C_{k,\\min}}{\\mbb{E}\\lr{|x|}} < \\varepsilon_k < \\frac{\\lambda C_{\\max}}{\\mbb{E}\\lr{|x|}}, \\ k \\in [K],\n\t\\end{align}\nand\n\t\\begin{align} \\label{eq:cond_noise}\n\t\\frac{M_w}{M_x}\n\t\t < 3(1.5)^{K\/2} \\bigg(\\frac{\\lambda K C_{\\max} }{\\mbb{E}\\lr{|x|}}\n\t\t - \\sum_{k \\in [K]} \\varepsilon_k\\bigg).\n\t\\end{align}\n\\end{theorem}\n\n\\subsection{Discussion}\n\nTheorem~\\ref{thm:asymp} captures how the existence of a local minimum for the statistical risk minimization problem depends on various properties of the coordinate dictionaries and demonstrates that there exists a local minimum of $f_{\\mbb{P}} \\lrp{\\D_{1:K}}$ that is in local neighborhoods of the coordinate dictionaries. This ensures asymptotic recovery of coordinate dictionaries within some local neighborhood of the true coordinate dictionaries, as opposed to KS dictionary recovery for vectorized observations~\\cite[Theorem 1]{gribonval2014sparse}.\n\nWe now explicitly compare conditions in Theorem~\\ref{thm:asymp} with the corresponding ones for vectorized observations~\\cite[Theorem 1]{gribonval2014sparse}.\nGiven that the coefficients are drawn from the separable sparsity model, the sparsity constraints for the coordinate dictionaries in \\eqref{eq:cond_k_i_p_i} translate into\n\t\\begin{align}\n\t\\frac{s}{p} = \\prod_{k \\in [K]} \\frac{ s_k}{ p_k}\n\t\t\\leq \\frac{1}{8^K \\prod_k \\lrp{\\norm{\\mb{D}^0_k}_2+1}^2} .\n\t\\end{align}\nTherefore, we have $\\dfrac{s}{p}= \\mathcal{O}\\lrp{ \\frac{1}{ \\prod_k \\norm{\\mb{D}^0_k}_2^2}}=\\mathcal{O}\\lrp{\\frac{1}{\\|\\mb{D}^0\\|_2^2}}$. Using the fact that $\\norm{\\mb{D}^0}_2 \\geq \\|\\mb{D}^0\\|_F\/ \\sqrt{m} = \\sqrt{p}\/\\sqrt{m}$, this translates into sparsity order\n$\ts = \\mathcal{O}\\lrp{ m}$. Next, the left hand side of the condition in \\eqref{eq:cond_m_p} is less than 1. 
Moreover, from properties of the Frobenius norm, it is easy to show that\n$\n\t\\norm{{\\mb{D}^0_k}^\\top\\mb{D}_k^0 - \\mb{I} }_F \\geq \\sqrt{p_k(p_k-m_k)\/m_k}.\n$\nThe fact that $\\norm{\\mb{D}_k^0}_2 \\geq \\sqrt{p_k}\/\\sqrt{m_k}$ and the assumption $\\mu_{s_k}(\\mb{D}_k^0)\\leq 1\/4$ imply that the right hand side of \\eqref{eq:cond_m_p} is lower bounded by $\\Omega\\lrp{ \\max_k s_k\\sqrt{(p_k-m_k)\/m_k^2}}$.\nTherefore, Theorem~\\ref{thm:asymp} applies to coordinate dictionaries with dimensions $p_k \\leq m_k^2$ and, consequently, to KS dictionaries with $p \\leq m^2$. Both the sparsity order and dictionary dimensions are in line with the scaling results for vectorized data~\\cite{gribonval2014sparse}.\n\n\\subsection{Proof Outline}\n\nFor given radii $0<\\varepsilon_k\\leq 2\\sqrt{p_k}, k \\in [K]$, the spheres $\\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)$ are non-empty. This follows from the construction of the dictionary classes $\\mc{D}_k$.\nMoreover, the mapping $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ is continuous with respect to the Frobenius norm $\\|\\D_k-\\D'_k\\|_F$ on all $\\mb{D}_k,\\mb{D}'_k \\in \\mbb{R}^{m_k\\times p_k}, k \\in [K]$~\\cite{gribonval2015sample}. Hence, it is also continuous on the compact constraint sets $\\mathcal{D}_k$'s.\nWe derive conditions on the coefficients, underlying coordinate dictionaries, $M_w$, regularization parameter, and $\\varepsilon_k$'s such that\n\t\\begin{align} \\label{eq:f_p_r_def}\n\t\\Delta f_{\\mbb{P}}\\lrp{\\eps_{1:K}}\n\t\t\\triangleq \\inf_{\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)}\n\t\t\\Delta f_{\\mbb{P}} \\lrp{ \\D_{1:K}; \\D^0_{1:K} } >0.\n\t\\end{align}\nThis along with the compactness of closed balls $\\bar{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k)$ and the continuity of the mapping $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ imply the existence of a local minimum of $f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ achieved by $\\wh{\\D}_{1:K}$ in open balls, $\\mathcal{B}_{\\varepsilon_k}(\\mb{D}_k^0)$'s, $k \\in [K]$.\n\n\nTo find conditions that ensure $\\Delta f_{\\mbb{P}}\\lrp{\\eps_{1:K}} > 0$, we take the following steps:\ngiven coefficients that follow the separable sparsity model, we can decompose any $\\mb{D}_\\mc{J}, |\\mc{J}|=s$, as\n\t\\begin{align} \\label{eq:Dj_D12}\n\t\\mb{D}_\\mc{J} = \\bigotimes \\mb{D}_{k,\\mc{J}_k},\n\t\\end{align}\nwhere $\\ |\\mc{J}_k|=s_k$ for $k \\in[K]$.\\footnote{The separable sparsity distribution model implies sampling without replacement from columns of $\\mb{D}_k$.}\nGiven a generating $\\boldsymbol{\\sigma} = \\mathop{\\mathrm{sign}}\\nolimits(\\mb{x})$,\nwe obtain $\\widehat{\\mb{x}}$ by solving $f_\\mb{y}\\lrp{\\D_{1:K}}$ with respect to $\\mb{x}'$, conditioned on the fact that $\\mathop{\\mathrm{sign}}\\nolimits(\\widehat{\\mb{x}})=\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$. This eliminates the dependency of $f_\\mb{y} \\lrp{\\D_{1:K}}$ on $\\inf_{\\mb{x}'}$ by finding a closed-form expression for $f_\\mb{y}\\lrp{\\D_{1:K}}$ given $\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$, which we denote as $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$. 
Defining\n\\begin{align}\n\\phi_{\\mbb{P}}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} \\triangleq \\mbb{E}\\lr{\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}},\n\\end{align}\nwe expand $ \\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}$ using \\eqref{eq:Dj_D12} and separate the terms that depend on each radius $\\varepsilon_k = \\|\\D_k-\\D^0_k\\|_F$ to obtain conditions for sparsity levels $s_k, k\\in [K]$, and coordinate dictionaries such that $\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} >0$.\nFinally, we derive conditions on $M_w$, coordinate dictionary coherences and $\\varepsilon_k$'s that ensure $\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$ and $\\Delta f_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} } = \\Delta \\phi_{\\mbb{P}} \\lrp{\\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}$.\n\n\n\\begin{remark}\nThe key assumption in the proof of Theorem~\\ref{thm:asymp} is that $\\mb{D}_{\\mc{J}}$ can be expanded according to~\\eqref{eq:Dj_D12}. This is a consequence of the separable sparsity model for dictionary coefficients. For a detailed discussion on the differences between the separable sparsity model and the random sparsity model for tensors, we refer the readers to our earlier work~\\cite{shakeri2016minimax}.\n\\end{remark}\n\n\\begin{remark}\nAlthough some of the forthcoming lemmas needed for the proof of Theorem~\\ref{thm:asymp} impose conditions on the $\\mb{D}_k$'s as well as the true coordinate dictionaries $\\mb{D}^0_k$'s, we later translate these conditions into conditions exclusively in terms of the $\\mb{D}_k^0$'s and $\\varepsilon_k$'s.\n\\end{remark}\n\n\nThe proof of Theorem~\\ref{thm:asymp} relies on the following propositions and lemmas. The proofs of these are provided in Appendix A.\n\n\\begin{proposition} \\label{prop:1}\nSuppose the following inequalities hold for $k \\in [K]$:\n\t\\begin{align} \\label{eq:cond_delt_s_prop}\n\ts_k \\leq \\frac{p_k}{8(\\|\\mb{D}_k^0\\|_2+1)^2}\\quad \\text{and} \\quad\n\t\\max_{k\\in [K]}&\\lr{\\delta_{s_k}(\\mb{D}_k^0)}\\leq \\frac{1}{4} .\n\t\\end{align}\nThen, for\n\t\\begin{align} \\label{eq:cond_lamb_1}\n\t\\bar{\\lambda} \\triangleq \\dfrac{\\lambda}{\\mbb{E}\\lr{|x|}}\\leq \\dfrac{1}{8\\times 3^{(K-1)\/2}},\n\t\\end{align}\nany collection of $\\lr{ \\varepsilon_k: \\varepsilon_k \\leq 0.15, k \\in [K]}$, and for all $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)$, we have:\n\t\\begin{align} \\label{eq:delt_p_LB}\n\t&\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}}\n\t\t\\geq \\frac{s\\mbb{E}\\{x^2\\}}{8} \\sum_{k \\in [K]}\n\t\t\\frac{\\varepsilon_k}{p_k} \\lrp{\\varepsilon_k- \\varepsilon_{k,\\min}(\\bar{\\lambda})},\n\t\\end{align}\nwhere\n\t\\begin{align*}\n\t&\\varepsilon_{k,\\min} (\\bar{\\lambda})\n\t\\triangleq \\frac{3^{(K-1)\/2}}{2} \\lrp{1.5^{\\frac{K-1}{2}}\n\t\t+2^{(K+1)}\\bar{\\lambda} } \\bar{\\lambda} C_{k,\\min}.\n\t\\end{align*}\nIn addition, if\n\t\\begin{align} \\label{eq:cond_lamb_2}\n\t\\bar{\\lambda} \\leq \\frac{0.15}{\\max_{k \\in [K]} C_{k,\\min}},\n\t\\end{align}\nthen $\\varepsilon_{k,\\min}(\\bar{\\lambda})<0.15$.\nThus, $\\Delta \\phi_\\mbb{P}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} > 0 $ for all $\\varepsilon_k \\in (\\varepsilon_{k,\\min}(\\bar{\\lambda}),0.15], k \\in [K]$.\n\\end{proposition}\n\nThe proof of Proposition \\ref{prop:1} relies on the following lemmas as well as supporting lemmas from the analysis of vectorized data~\\cite[Lemmas~4,6,7,15,16]{gribonval2014sparse}.\n\n\n\\begin{lemma} \\label{def:P_H_PS}\nLet 
$\\mb{D} = \\bigotimes \\D_k $ where $\\delta_s(\\mb{D}_k)<1$ for $k \\in [K]$, and $\\mc{J}$ be a support set generated by the separable sparsity model. Then any $\\mb{D}_\\mc{J}, |\\mc{J}|=s$, can be decomposed as\n$\\mb{D}_\\mc{J} = \\bigotimes \\mb{D}_{k,\\mc{J}_k}$,\nwhere $\\ |\\mc{J}_k|=s_k$ and $\\mathop{\\mathrm{rank}}\\nolimits (\\mb{D}_{k,\\mc{J}_k})=s_k$, for $k \\in[K]$. Also, the following relations hold for this model:\\footnote{The equations follow from basic properties of the Kronecker product~\\cite{horn2012matrix}.}\n\t\\begin{align} \\label{eq:P_H_Ps}\n\t\\bP_{\\D_{\\cJ}} = \\bigotimes \\bP_{\\D_{k,\\cJ_k}}, \\D_{\\cJ}^+ = \\bigotimes \\D_{k,\\cJ_k}^+, \\bH_{\\D_{\\cJ}} = \\bigotimes \\bH_{\\D_{k,\\cJ_k}},\n\t\\end{align}\nwhere $\\mb{P}$ and $\\mb{H}$ are defined in Section~\\ref{subsec:notation}.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lem:Dtld}\nGiven $\\D_{1:K}$ and $\\D^0_{1:K}$, the difference\n\t\\begin{align} \\label{eq:otD_otDp}\n\t&\\bigotimes \\D_k - \\bigotimes \\D^0_k \\nonumber\\\\\n\t&\\qquad = \\sum _{k \\in [K]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes\n\t\t\\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\widetilde{\\mb{D}}_{k,K},\n\t\\end{align}\t\nwhere without loss of generality, each $\\widetilde{\\mb{D}}_{k,i}$ is equal to either $\\mb{D}^0_i$ or $\\mb{D}_i$, for $k \\in [K]$.\n\\end{lemma}\n\nWe drop the $k$ index from $\\wt{\\D}_{k,i}$ for ease of notation throughout the rest of the paper.\n\n\\begin{lemma} \\label{lemma:f_closedform}\nLet $\\boldsymbol{\\sigma} \\in \\{-1,0,1\\}^p$ be an arbitrary sign vector and $\\mc{J} = \\mc{J}(\\boldsymbol{\\sigma})$ be its support. Define\\footnote{The quantity $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$ is not equal to $\\phi_\\mb{y}\\lrp{\\D_{1:K}}$ conditioned on $\\boldsymbol{\\sigma}$ and the expression is only used for notation.}\n\t\\begin{align} \\label{eq:delt_inf}\n\t\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} \\triangleq \\inf_{\\substack{\\mb{x} \\in \\mbb{R}^p \\\\ \\mathop{\\mathrm{supp}}\\nolimits(\\mb{x}) \\subset \\mc{J}}}\n\t\t\\frac{1}{2} \\norm{\\mb{y} - \\lrp{\\bigotimes \\D_k } \\mb{x} }_2^2+ \\lambda {\\boldsymbol{\\sigma}}^\\top\\mb{x}.\n\t\\end{align}\nIf $\\mb{D}_{k,\\mc{J}_k}^\\top \\mb{D}_{k,\\mc{J}_k}$ is invertible for $k \\in [K]$, then $\\widehat{\\mb{x}}$ minimizes $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} $, where\n\t\\begin{align} \\label{eq:xhat_closedform}\n\t\\widehat{\\mb{x}}_\\mc{J} = \\lrp{\\bigotimes \\D_{k,\\cJ_k}^+ } \\mb{y} - \\lambda \\lrp{ \\bigotimes \\big( \\mb{D}_{k,\\mc{J}_k}^\\top\\mb{D}_{k,\\mc{J}_k} \\big)^{-1} }\\boldsymbol{\\sigma}_\\mc{J},\n\t\\end{align}\nand $\\widehat{\\mb{x}}_{\\mc{J}^c} = \\mathbf{0}$. 
Thus, $\\phi_\\mb{y}\\lrp{\\D_{1:K} |\\boldsymbol{\\sigma}}$ can be expressed in closed form as:\n\t\\begin{align} \\label{eq:delt_closedform}\n\t&\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} = \\frac{1}{2}\\|\\mb{y}\\|_2^2\n\t\t- \\frac{1}{2} \\mb{y}^\\top\n\t\t\\lrp{ \\bigotimes \\bP_{\\D_{k,\\cJ_k}}}\\mb{y} \\nonumber\\\\\n\t&\\ + \\lambda {\\boldsymbol{\\sigma}}_\\mc{J}^\\top\n\t\t\\lrp{ \\bigotimes \\D_{k,\\cJ_k}^+ } \\mb{y}\n\t\t-\\frac{\\lambda^2}{2}{\\boldsymbol{\\sigma}}_\\mc{J}^\\top \\lrp{ \\bigotimes \\bH_{\\D_{k,\\cJ_k}}} \\boldsymbol{\\sigma}_\\mc{J}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:exp_phi}\nAssume $\\max\\lr{ \\delta_{s_k}(\\mb{D}_k^0),\\delta_{s_k}(\\mb{D}_k)}<1$ for $k \\in [K]$ and let $\\wt{\\D}_k$ be equal to either $\\mb{D}^0_k$ or $\\mb{D}_k$.\nFor\n\t\\begin{align}\n\t\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} \\big| \\boldsymbol{\\sigma}}\n\t\t\\triangleq \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K}| \\boldsymbol{\\sigma}}\n\t\t- \\phi_{\\mbb{P}} \\lrp{ \\D^0_{1:K}| \\boldsymbol{\\sigma}},\n\t\\end{align}\nwe have\n\t\\begin{align} \\label{eq:delt_ph_2}\n\t&\\Delta \\phi_{\\mbb{P}} \\lrp{ \\D_{1:K};\\D^0_{1:K} \\big| \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t& = \\frac{\\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} }\\dots \\nonumber\\\\\n\t&\\qquad \\qquad \\qquad \\qquad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}} \\nonumber \\\\\n\t&\\qquad \\qquad \\qquad \\qquad \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}}\n\t\t\\nonumber \\\\\n\t&- \\lambda \\mbb{E}\\{|x|\\}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1}} \\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k}} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K}} \\nonumber \\\\\n\t&+ \\frac{\\lambda^2}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_1,\\mc{J}_1}} }\\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }\t} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}}}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:UB_mu_delt_RIP}\nFor any $\\mb{D}_k \\in \\mc{D}_k$ satisfying $\\mathsf{RIP}$ of order $s_k$, given $\\mc{J}_k \\subset [p_k]$ and $ |\\mc{J}_k|=s_k$, the following relations hold:\n\t\\begin{align}\n\t\\norm{\\mb{D}_{k,\\mc{J}_k}}_2 &= \\norm{{\\mb{D}_{k,\\mc{J}_k}}^\\top}_2\n\t\t\\leq \\sqrt{1+\\delta_{s_k}(\\mb{D}_k)}, \\label{eq:l2_delt} \\\\\n\t\\delta_{s_k}(\\mb{D}_k) &\\leq \\mu_{s_k-1}(\\mb{D}_k). \\label{eq:delt_mu}\n\t\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}[Lemma 4~\\cite{gribonval2014sparse}]\n\\label{lem:H_Dps}\nLet $\\mb{D}_k$'s be coordinate dictionaries such that $\\delta_{s_k}(\\mb{D}_k)<1$. 
Then for any $\\mc{J}_k \\subset p_k, |\\mc{J}_k|=s_k$, $\\bH_{\\D_{k,\\cJ_k}}$ exists and\n\t\\begin{align} \\label{eq:pso_cond}\n\t&\\norm{\\bH_{\\D_{k,\\cJ_k}}}_2 \\leq \\frac{1}{1-\\delta_{s_k}(\\mb{D}_k)}, \\quad\n\t\\norm{\\D_{k,\\cJ_k}^+}_2 \\leq \\frac{1}{\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}},\n\t\\end{align}\nand for any $\\mb{D}_k'$ such that $\\|\\D_k-\\D'_k\\|_F \\leq \\varepsilon_k < \\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}$:\n\t\t\\begin{align} \\label{eq:cond_delt_r_i}\n\t\t&1-\\delta_{s_k}(\\mb{D}'_k) \\geq (\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)} - \\varepsilon_k)^2 \\triangleq 1-\\delta_k.\n\t\t\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}[Lemma 6~\\cite{gribonval2014sparse}]\n\\label{lem:D_12_tet}\nGiven any $\\mb{D}_k^1,\\mb{D}_k^2 \\in \\mc{D}_k$, there exist $\\mathbf{V}_k \\in \\mbb{R}^{m_k \\times p_k}$ with $\\diag{{\\mb{D}^1_k}^\\top \\mathbf{V}_k}=\\mathbf{0}$ and $\\diag{\\mathbf{V}_k^\\top \\mathbf{V}_k}=\\mb{I}_{p_k}$ and a vector $\\boldsymbol{\\theta}_k \\triangleq \\boldsymbol{\\theta}_k(\\mb{D}_k^1,\\mb{D}_k^2) \\in [0,\\pi]^{p_k}$, such that\n\t\\begin{align}\n\t\\mb{D}_k^2 = \\mb{D}_k^1 \\mathbf{C}_k (\\boldsymbol{\\theta}_k) + \\mathbf{V}_k \\mathbf{S}_k(\\boldsymbol{\\theta}_k),\n\t\\end{align}\nwhere $\\mathbf{C}_k (\\boldsymbol{\\theta}_k) \\triangleq \\Diag{\\cos(\\boldsymbol{\\theta}_k) }$ and $\\mathbf{S}_k (\\boldsymbol{\\theta}_k) \\triangleq \\Diag{\\sin(\\boldsymbol{\\theta}_k) }$. Moreover,\n\t\\begin{align} \\label{eq:tet_rk}\n\t&\\frac{2}{\\pi}\\theta_{k,j} \\leq \\|\\mb{d}^2_{k,j} - \\mb{d}^1_{k,j} \\|_2\n\t\t= 2\\sin \\lrp{\\frac{\\theta_{k,j}}{2}} \\leq\\theta_{k,j}, \\text{and} \\nonumber \\\\\n\t&\\frac{2}{\\pi} \\|\\boldsymbol{\\theta}_k\\|_2 \\leq \\|\\mb{D}_k^2 - \\mb{D}_k^1 \\|_F \\leq \\|\\boldsymbol{\\theta}_k\\|_2 ,\n\t\\end{align}\nwhere $j \\in [p_k]$.\nSimilarly, there exists $\\mathbf{V}_k'$ such that $\\mb{D}_k^1 = \\mb{D}_k^2 \\mathbf{C}_k (\\boldsymbol{\\theta}_k) + \\mathbf{V}'_k \\mathbf{S}_k(\\boldsymbol{\\theta}_k)$, where $\\diag{{\\mb{D}^2_k}^\\top \\mathbf{V}'_k}=\\mathbf{0}$.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lem:A_B_delt}\nFix $\\D_{1:K}$ and $\\D^0_{1:K}$, and suppose $\\lr{A_k},\\lr{B_k},\\lr{\\delta_k}$ satisfy the following:\n\t\\begin{align} \\label{eq:A_B_Delk}\n\t&A_k \\geq \\max\\lr{ \\|\\mb{D}_k^\\top \\mb{D}_k - \\mb{I}_{p_k}\\|_F,\\|{\\mb{D}_k^0}^\\top \\mb{D}_k^0 - \\mb{I}_{p_k}\\|_F } , \\nonumber \\\\\n\t&B_k \\geq \\max\\lr{ \\|\\mb{D}_k\\|_2, \\|\\mb{D}_k^0\\|_2 }, \\text{and}\\nonumber \\\\\n\t&\\delta_k \\geq \\max\\lr{ \\delta_{s_k}(\\mb{D}_k),\\delta_{s_k}(\\mb{D}_k^0) }.\n\t\\end{align}\nThen for all $ \\boldsymbol{\\theta}_k \\triangleq \\boldsymbol{\\theta}_k(\\mb{D}_k,\\mb{D}_k^0), k \\in [K]$, we have\n\t\\begin{align} \\label{eq: delt_ph_3}\n\t&\\Delta\\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&\\geq \\frac{s\\mbb{E}\\{x^2\\}}{2}\n\t\\sum_{k \\in [K]} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2 }{p_k}\n\t\t\\bigg[\\|\\boldsymbol{\\theta}_k\\|_2\n\t\t\\bigg( 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k}\n\t\t-\\bar{\\lambda} \\kappa_x^2\n\t\t\\delta_{-k}\\bigg)\n\t\t\\nonumber \\\\\n\t&\\qquad \\quad -\\bigg(\\delta_{-k}\n\t\t+2\\bar{\\lambda}\\prod_{i \\in [K]} \\frac{1}{1-\\delta_i}\\bigg)\n\t\t\\bar{\\lambda} \\kappa_x^2\n\t\t\\frac{s_k}{p_k} \\frac{2A_kB_k}{1-\\delta_k}\\bigg],\n\t\\end{align}\nwhere $\\bar{\\lambda} \\triangleq \\dfrac{\\lambda}{\\mbb{E}\\lr{|x|}} $ and $\\delta_{-k} \\triangleq \\prod_{\\substack{i \\in [K]\\\\ i \\neq 
k}}\n\t\t\\sqrt{\\dfrac{1+\\delta_i}{1-\\delta_i}} $.\n\\end{lemma}\n\n\nProposition~\\ref{prop:1} shows $\\Delta\\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} > 0 $. However, given $\\widehat{\\mb{x}}$, the solution of $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$, $\\widehat{\\boldsymbol{\\sigma}} = \\mathop{\\mathrm{sign}}\\nolimits\\lrp{\\widehat{\\mb{x}}}$ is not necessarily equal to the sign of the generating $\\boldsymbol{\\sigma}$. We derive conditions that ensure $\\widehat{\\mb{x}}$ is almost surely the unique minimizer of $f_\\mb{y}\\lrp{\\D_{1:K}}$ and $\\widehat{\\boldsymbol{\\sigma}}=\\boldsymbol{\\sigma}$.\nWe introduce the following proposition for this purpose.\n\n\\begin{proposition} \\label{prop:3}\nLet the generating coordinate dictionaries $\\{ \\mb{D}_k^0 \\in\\mc{D}_k\\}$ satisfy:\n\t\\begin{align} \\label{eq:delt_cond}\n\t\\mu_{s}(\\mb{D}^0) < \\frac{1}{2} , \\quad \\max_k\\{ \\delta_{s_k}(\\mb{D}_k^0)\\} < \\frac{1}{4} .\n\t\\end{align}\nSuppose $\\bar{\\lambda} = \\dfrac{\\lambda}{\\mbb{E}\\lr{|x|}}\\leq \\dfrac{x_{\\min}}{2\\mbb{E}\\lr{|x|}}$ and\n\t\\begin{align} \\label{eq:prop_r_max}\n\t\\max_{k\\in [K]}\\{\\varepsilon_k\\} \\leq \\min\\lr{ \\bar{\\lambda}C_{\\max}, 0.15}.\n\t\\end{align}\nIf the following is satisfied:\n\t\\begin{align} \\label{eq:M_eps_M_al}\n\t\\frac{M_w}{M_x}\n\t\t < 3(1.5)^{K\/2} \\bigg( \\bar{\\lambda} K C_{\\max} - \\sum_{k \\in [K]} \\varepsilon_k\\bigg),\n\t\\end{align}\nthen for any $\\D_{1:K}$ such that $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$, $\\widehat{\\mb{x}}$ that is defined in~\\eqref{eq:xhat_closedform} is almost surely the minimizer of the map $\\mb{x}' \\mapsto \\frac{1}{2}\\norm{\\mb{y}- \\lrp{\\bigotimes \\D_k } \\mb{x}'}_2^2 +\\lambda \\|\\mb{x}'\\|_1 $ and \t\n$\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} = \\Delta f_{\\mbb{P}} \\lrp{\\D_{1:K};\\D^0_{1:K}}$.\n\\end{proposition}\n\n\\begin{remark}\nNote that $\\mu_s(\\mb{D}^0) < \\frac{1}{2}$ in \\eqref{eq:delt_cond} can be satisfied by ensuring that the right hand side of~\\eqref{eq:mu_s} is less than $\\frac{1}{2}$. One way this can be ensured is by enforcing strict conditions on coordinate dictionaries; for instance, $\\mu_{s_k}(\\mb{D}^0_k)\\leq \\frac{1}{2^K}$.\n\\end{remark}\n\nThe proof of Proposition~\\ref{prop:3} relies on the following lemmas and~\\cite[Lemmas 10--13]{gribonval2014sparse}.\n\n\\begin{lemma}[Lemma 13~\\cite{gribonval2014sparse}]\n\\label{lem:a_hat_min_cond}\nAssume $\\mu_s(\\mb{D}) <\\dfrac{1}{2}$. 
If\n\t\\begin{align} \\label{eq:lem8_cod}\n\t\\min_{j \\in \\mc{J}} \\left| x_j \\right| \\geq 2\\lambda, \\\n\t\t\\text{and} \\\n\t\t\\norm{\\mb{y} - \\mb{D} \\mb{x}}_2 < \\lambda (1-2\\mu_s(\\mb{D}))\n\t\\end{align}\nhold for the generating $\\mb{x}$, then $\\widehat{\\mb{x}}$ defined in \\eqref{eq:xhat_closedform} is the unique solution of $\\min_{\\mb{x}'} \\frac{1}{2}\\norm{\\mb{y} - \\lrp{\\bigotimes \\D_k } \\mb{x}' }_2^2 +\\lambda\\|\\mb{x}'\\|_1$.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lem:mu_mu0_rel}\nFor any $\\mb{D}^0=\\bigotimes \\D^0_k $ and $\\mb{D} = \\bigotimes \\D_k $ such that\n$\\mb{D}_k \\in \\bar{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$, suppose the following inequalities are satisfied:\n\t\\begin{align} \\label{eq:UB_mu_delt}\n\t\\max_{k \\in [K]} \\{\\delta_{s_k}(\\mb{D}_k^0)\\} \\leq \\frac{1}{4},\n\t\t\\quad \\text{and}\n\t\t\\quad\t\\max_{k \\in [K]} \\varepsilon_k \\leq 0.15.\n\t\\end{align}\nThen, we have\n\t\\begin{align} \\label{eq:mu_mu0}\n\t\\mu_s(\\mb{D}) \\leq \\mu_s(\\mb{D}^0) + 2(1.5)^{K\/2}\\sqrt{s} \\bigg(\\sum_{k \\in [K]} \\varepsilon_k\\bigg).\n\t\\end{align}\t\n\\end{lemma}\n\n\n\\begin{IEEEproof}[Proof of Theorem~\\ref{thm:asymp}]\nTo prove this theorem, we use Proposition~\\ref{prop:1} to show that $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} > 0$, and then use Proposition~\\ref{prop:3} to show that $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} =\\Delta f_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}}$.\nThe assumptions in \\eqref{eq:cond_k_i_p_i} ensure that the conditions in \\eqref{eq:cond_delt_s_prop} and \\eqref{eq:delt_cond} are satisfied for Proposition~\\ref{prop:1} and Proposition~\\ref{prop:3}, respectively. Assumptions \\eqref{eq:cond_m_p} and \\eqref{eq:cond_lambda} ensure that the conditions in \\eqref{eq:cond_lamb_1} and \\eqref{eq:cond_lamb_2} are satisfied for Proposition~\\ref{prop:1}, that $\\bar{\\lambda}\\leq \\dfrac{x_{\\min}}{2\\mbb{E}\\lr{|x|}}$ holds for Proposition~\\ref{prop:3}, and that $\\max_{k \\in [K]}\\{C_{k,\\mathrm{min}}\\} < C_{\\max}$. Proposition~\\ref{prop:1} then implies that $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} > 0 $ for all $\\varepsilon_k \\in (\\bar{\\lambda}C_{k,\\min},0.15], k \\in [K]$. Finally, the assumption in \\eqref{eq:cond_noise} implies $\\Delta \\phi_{\\mbb{P}}\\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} =\\Delta f_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}}$ for all $\\varepsilon_k \\leq \\bar{\\lambda}C_{\\max}, k \\in [K]$. Furthermore, the assumption in \\eqref{eq:cond_lambda} implies $C_{\\max}\\bar{\\lambda} \\leq 0.15$. Consequently, for any $\\lr{ \\varepsilon_k>0,k \\in [K]}$ satisfying the conditions in \\eqref{eq:cond_r_i}, $\\D_{1:K} \\mapsto f_{\\mbb{P}}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}= \\bigotimes \\widehat{\\mb{D}}_k$ such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]$.\n\\end{IEEEproof}\n\n\n\n\\section{Finite Sample Identifiability Results} \\label{sec:finite}\n\nWe now focus on leveraging Theorem~\\ref{thm:asymp} and solving~\\eqref{eq:f_x} to derive finite-sample bounds for KS dictionary identifiability. 
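Throughout this section, $F_\\mb{Y}$ is the empirical objective defined in Section~\\ref{sec:model}; for intuition, the following Python sketch shows one simple way this objective could be evaluated at a given set of coordinate dictionaries, using an ISTA (proximal gradient) solver for the inner lasso problem. This is only an illustration of the objective under study, not the estimation procedure analyzed in this paper, and the step size and iteration count are arbitrary choices.\n\\begin{verbatim}\nimport numpy as np\nfrom functools import reduce\n\ndef soft_threshold(z, t):\n    # Proximal operator of t * ||.||_1.\n    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)\n\ndef f_y(D_list, y, lam, n_iter=500):\n    # Inner lasso value f_y(D_1, ..., D_K) via ISTA on\n    # 0.5 * ||y - D x||_2^2 + lam * ||x||_1, D = kron(D_1, ..., D_K).\n    D = reduce(np.kron, D_list)\n    step = 1.0 \/ np.linalg.norm(D, 2) ** 2  # inverse Lipschitz constant\n    x = np.zeros(D.shape[1])\n    for _ in range(n_iter):\n        x = soft_threshold(x - step * (D.T @ (D @ x - y)), step * lam)\n    return 0.5 * np.sum((y - D @ x) ** 2) + lam * np.sum(np.abs(x))\n\ndef F_Y(D_list, Y, lam):\n    # Empirical objective: average lasso value over the columns of Y.\n    return np.mean([f_y(D_list, y, lam) for y in Y.T])\n\\end{verbatim}\n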
Compared to Gribonval et al.~\\cite{gribonval2014sparse}, who use Lipschitz continuity of the objective function with respect to the larger KS dictionary, our analysis is based on ``coordinate-wise Lipschitz continuity\" with respect to the coordinate dictionaries.\n\n\\begin{theorem}\\label{thm:finite_n}\nSuppose the observations are generated according to \\eqref{eq:obs_model} and the dictionary coefficients follow the separable sparsity model of Section~\\ref{sec:model} such that \\eqref{eq:cond_k_i_p_i} to \\eqref{eq:cond_noise} are satisfied. Next, fix any $\\xi \\in (0,\\infty)$. Then, for any number of observations satisfying\n\t\\begin{align} \\label{eq:smplCmp}\n\tN = \\max_{k \\in [K]}\n\t\t&\\Omega \\bigg(\n\t\t\\frac{p_k^2 (\\xi+m_kp_k) } {(\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))^2}\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\frac{2^{K}(1 + \\bar{\\lambda}^2) M_x^2}\n\t\t{s^2\\mbb{E}\\{x^2\\}^2}\n\t\t+ \\bigg(\\frac{M_w}{s\\mbb{E}\\{x^2\\}} \\bigg)^2 \\bigg) \\bigg),\n\t\\end{align}\nwith probability at least $1-e^{-\\xi}$, $\\D_{1:K} \\mapsto F_\\mb{Y}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}=\\bigotimes \\widehat{\\mb{D}}_k$ such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$.\n\\end{theorem}\n\n\\subsection{Discussion}\nLet us make some remarks about the implications of Theorem~\\ref{thm:finite_n}. First, the sample complexity has an inverse relationship with the signal-to-noise ratio ($\\mathop{\\mathrm{SNR}}\\nolimits$),\\footnote{Since the condition on $N$ is a sufficient condition, we state sample complexity scalings using $\\mathcal{O}$ notation.} which we define as\n\t\\begin{align}\n\t\\mathop{\\mathrm{SNR}}\\nolimits \\triangleq \\frac{\\mbb{E}\\{\\|\\mb{x}\\|_2^2\\}}{\\mbb{E}\\{\\|\\mb{w}\\|^2_2\\}} = \\frac{s\\mbb{E}\\{x^2\\}}{m\\mbb{E}\\{w^2\\}}.\n\t\\end{align}\nLooking at the terms on the right hand side of~\\eqref{eq:smplCmp} in Theorem~\\ref{thm:finite_n}, $M_x\/(s\\mbb{E}\\lr{x^2})$ is related to the deviation of $\\|\\mb{x}\\|_2$ from its mean, $\\mbb{E}\\lr{\\|\\mb{x}\\|_2}$, and depends on the coefficient distribution, while $M_w\/(s\\mbb{E}\\lr{x^2})$ is related to $1\/\\mathop{\\mathrm{SNR}}\\nolimits$ and depends on the noise and coefficient distributions.\n\nSecond, we note the dependence of the sample complexity on the recovery errors of the coordinate dictionaries. We can interpret $\\varepsilon_k$ as the recovery error for $\\mb{D}^0_k$. Then, the sample complexity scaling in \\eqref{eq:smplCmp} is proportional to $\\max_k \\varepsilon_k^{-2}$.\nWe note that the sample complexity results obtained in~\\cite{gribonval2014sparse} that are independent of $\\varepsilon \\triangleq \\norm{\\mb{D}-\\mb{D}^0}_F$ only hold for the noiseless setting, and the dependency on $\\varepsilon^{-2}$ is inevitable for noisy observations~\\cite{gribonval2014sparse}.\nFurthermore, given the condition on the range of the $\\varepsilon_k$'s in \\eqref{eq:cond_r_i}, the $\\varepsilon_k$'s cannot be arbitrarily small and thus do not cause $N$ to grow arbitrarily large.\n\nThird, the sample complexity scaling in \\eqref{eq:smplCmp} depends on the coordinate dictionary dimensions as $\\max_k \\mathcal{O}(m_k p_k^3)$. 
Comparing this to the $\\mathcal{O}(mp^3)=\\mathcal{O}\\lrp{\\prod_k m_kp_k^3}$ scaling in the unstructured DL problem~\\cite{gribonval2014sparse}, the sample complexity in the KS-DL problem scales with the dimensions of the largest coordinate dictionary, as opposed to the dimensions of the larger KS dictionary.\n\n\\begin{table}\n\\caption{\\small Comparison of upper and lower bounds on the sample complexity of dictionary learning for vectorized DL and KS DL.}\n\\label{table:1}\n\\centering\n\\begin{tabular}{l|C{1.7cm}|c| N} \\cline{2-3}\n & Vectorized DL & KS DL & \\\\ [20pt] \\hline\n\t\\multicolumn{1}{|c|} {Minimax Lower Bound}\n\t&$\\dfrac{mp^2}{\\varepsilon^2}$~\\cite{jung2015minimax}\n\t&$\\dfrac{p\\sum_{k } m_k p_k}{\\varepsilon^2 }$~\\cite{shakeri2016arxiv}\n\t& \\\\ [20pt]\\hline\n\t\\multicolumn{1}{|c|} {Achievability Bound} & $\\dfrac{mp^3}{\\varepsilon^2}$~\\cite{gribonval2014sparse}\n\t& $\\max\\limits_k \\dfrac{m_kp_k^3}{\\varepsilon_k^2} $ & \\\\ [20pt]\\hline\n\\end{tabular}\n\\end{table}\n\nWe also compare this sample complexity upper bound scaling to the sample complexity lower bound scaling in our previous work~\\cite[Corollary 1]{shakeri2016minimax}, where we obtained $N = \\Omega\\lrp{p\\sum_k m_kp_k\\varepsilon^{-2}\/K}$ as a \\emph{necessary condition for recovery of KS dictionaries}.\\footnote{We have the following relation between $\\varepsilon$ and $\\varepsilon_k$'s:\n\t\\begin{align*}\n\t\\varepsilon \t\\leq\t\\sum_{k \\in [K]} \\bigg(\n\t\t\\prod_{\\substack{i \\in [K]\\\\i \\neq k}} \\norm{\\wt{\\D}_k}_F \\bigg)\n\t\t\\norm{\\mb{D}_k-\\mb{D}^0_k}_F\n\t\t\\leq \\sqrt{p} \\sum_{k \\in [K]} \\varepsilon_k.\n\t\\end{align*}\nAssuming all $\\varepsilon_k$'s are equal, this then implies $\\varepsilon_k^2 \\geq \\varepsilon^2\/(K^2 p)$.}\nIn terms of overall error $\\varepsilon$, our result translates into $N = \\max_k \\Omega\\lr{ 2^K K^2p(m_kp_k^3)\\varepsilon^{-2}}$ as a \\emph{sufficient} condition for recovery of coordinate dictionaries. The lower bound depended on the average dimension of the coordinate dictionaries, $\\sum_k m_kp_k\/K$, whereas we observe here a dependence on the dimensions of the coordinate dictionaries in terms of the maximum dimension, $\\max_k m_kp_k$. We also observe an increase of order $\\max_k p_k^2$ in the sample complexity upper bound scaling. This gap suggests that tighter bounds can be obtained for lower and\/or upper bounds. 
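To give a rough sense of the magnitudes involved, the following Python sketch evaluates the dominant terms of these scalings for the hyperspectral dimensions of Section~\\ref{sec:Introduction}; the choices $p_k = 2m_k$ and $\\varepsilon = 0.1$ are arbitrary, constants and logarithmic factors are otherwise ignored, and the KS-DL upper bound is restated in terms of the overall error $\\varepsilon$ via the relation in the preceding footnote, so the numbers are purely illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nm_k = np.array([1024.0, 1024.0, 32.0])  # coordinate dictionary rows\np_k = 2.0 * m_k                         # illustrative columns, p_k = 2 m_k\nK, eps = len(m_k), 0.1                  # overall error (arbitrary)\nm, p = np.prod(m_k), np.prod(p_k)\n\nvec_dl_upper = m * p**3 \/ eps**2             # vectorized DL achievability\nks_dl_lower = p * np.sum(m_k * p_k) \/ eps**2  # KS DL minimax lower bound\n# KS DL achievability (this work), stated in terms of eps via the\n# relation eps_k^2 >= eps^2 \/ (K^2 p) from the footnote above.\nks_dl_upper = (2.0**K) * K**2 * p * np.max(m_k * p_k**3) \/ eps**2\n\nprint(f'vectorized DL upper bound: {vec_dl_upper:.1e}')\nprint(f'KS DL lower bound:         {ks_dl_lower:.1e}')\nprint(f'KS DL upper bound:         {ks_dl_upper:.1e}')\n\\end{verbatim}\n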
A summary of these results is provided in Table~\\ref{table:1} for a fixed $K$.\n\n\\subsection{Proof Outline}\n\nWe follow an approach similar to the one used in~\\cite[Theorem 2]{gribonval2014sparse} for vectorized data.\nWe show that, with high probability,\n\t\\begin{align}\n\t\\Delta F_\\mb{Y} (\\eps_{1:K}) \\triangleq \\inf_{\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)} \\Delta F_\\mb{Y} \\lrp{ \\D_{1:K};\\D^0_{1:K}}\n\t\\end{align}\nconverges uniformly to its expectation,\n\t\\begin{align}\n\t\\Delta f_{\\mbb{P}}(\\eps_{1:K})\\triangleq \\inf_{\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)}\n\t\t\\Delta f_{\\mbb{P}} \\lrp{ \\D_{1:K}; \\D^0_{1:K} }.\n\t\\end{align}\nIn other words, with high probability,\n\t\\begin{align}\n\t\\lra{\\Delta F_\\mb{Y} (\\eps_{1:K}) - \\Delta f_{\\mbb{P}}(\\eps_{1:K}) } \\leq \\eta_N,\n\t\\end{align}\nwhere $\\eta_N$ is a parameter that depends on the target probability and on the other problem parameters.\nThis implies $\\Delta F_\\mb{Y} (\\eps_{1:K}) \\geq \\Delta f_{\\mbb{P}}(\\eps_{1:K}) - 2\\eta_N $.\nIn Theorem~\\ref{thm:asymp}, we obtained conditions that ensure $\\Delta f_{\\mbb{P}}(\\eps_{1:K})> 0$. Thus, if $2\\eta_N < \\Delta f_{\\mbb{P}}(\\eps_{1:K})$ is satisfied, this implies $\\Delta F_\\mb{Y} (\\eps_{1:K})> 0$, and we can use arguments similar to the proof of Theorem~\\ref{thm:asymp} to show that\n$\\D_{1:K} \\mapsto F_\\mb{Y}\\lrp{\\D_{1:K}}$ admits a local minimum $\\widehat{\\mb{D}}=\\bigotimes \\widehat{\\mb{D}}_k$, such that $\\widehat{\\mb{D}}_k \\in \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)$, for $k \\in [K]$.\n\nIn Theorem~\\ref{thm:asymp}, we showed that under certain conditions, $\\Delta f_{\\mbb{P}}(\\D_{1:K};\\D^0_{1:K}) = \\Delta \\phi_\\mbb{P}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$.\nTo find $\\eta_N$, we uniformly bound deviations of $\\D_{1:K} \\mapsto \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ from its expectation on $\\lr{ \\mc{S}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k=1}^K$.\nOur analysis is based on the \\textit{coordinate-wise Lipschitz continuity} property of $\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ with respect to the coordinate dictionaries. Then, to ensure $ 2\\eta_N < \\Delta \\phi_\\mbb{P}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$, we show that $2\\eta_N$ is less than the right-hand side of~\\eqref{eq:delt_p_LB} and obtain conditions on the sufficient number of samples based on each coordinate dictionary dimension and recovery error.\n\nThe proof of Theorem~\\ref{thm:finite_n} relies on the following definition and lemmas, whose proofs are provided in Appendix B.\n\n\\begin{definition}[Coordinate-wise Lipschitz continuity]\nA function $f: \\mc{D}_1 \\times \\dots \\times \\mc{D}_K \\rightarrow \\mbb{R}$ is coordinate-wise Lipschitz continuous with constants $(L_1,\\dots,L_K)$, where $L_k \\geq 0$ for $k \\in [K]$, if for all $\\lr{\\mb{D}_k,\\mb{D}'_k \\in \\mc{D}_k}_{k=1}^K$:\n\t\\begin{align}\n\t\\lra{ f\\lrp{\\D_{1:K}} - f\\lrp{\\D'_{1:K}} }\n\t\t\\leq \\sum_{k \\in [K]} L_k \\norm{\\mb{D}_k - \\mb{D}'_k}_F.\n\t\\end{align}\n\\end{definition}\n\n\\begin{lemma}[Rademacher averages~\\cite{gribonval2014sparse}]\\label{lem:rad}\nLet $\\mathcal{F}$ be a set of measurable functions on a measurable set $\\mc{X}$, and let $X_1,\\dots,X_N \\in \\mc{X}$ be $N$ i.i.d. random variables. Fix any $\\xi \\in (0,\\infty)$. 
Assuming all functions are bounded by $B$, i.e., $|f(X)|\\leq B$, almost surely, with probability at least $1-e^{-\\xi}$:\n\t\\begin{align} \\label{eq:rad_gau}\n\t& \\sup_{f \\in \\mathcal{F}} \\bigg( \\frac{1}{N}\n\t\t\\sum_{n \\in [N] } f \\lrp{X_n}\n\t\t- \\mbb{E}_{X} \\lr{ f \\lrp{X}} \\bigg) \\nonumber \\\\\n\t&\\quad\n\t\\leq 2\\sqrt{\\frac{\\pi}{2}} \\mbb{E}_{X,\\beta_{1:N}}\n\t\t\\bigg\\{ \\sup_{f \\in \\mathcal{F}}\n\t\t\\bigg( \\frac{1}{N}\n\t\t\\sum_{n \\in [N] } \\beta_n f \\lrp{ X_n} \\bigg) \\bigg\\}\n\t\t+ B\\sqrt{\\frac{2\\xi}{N}},\n\t\\end{align}\nwhere $\\beta_{1:N}$'s are independent standard Gaussian random variables.\n\\end{lemma}\n\n\n\\begin{lemma} \\label{lemma:delt_m_T_dev}\nLet $\\mathcal{H}$ be a set of real-valued functions on $\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]$, that are bounded by $B$ almost everywhere and are coordinate-wise Lipschitz continuous with constants $(L_1,\\dots,L_K)$ .\nLet $h_1,h_2,\\dots,h_N$ be independent realizations from $\\mathcal{H}$ with uniform Haar measure on $\\mathcal{H}$. Then, fixing $\\xi \\in (0,\\infty)$, we have with probability greater than $1-e^{-\\xi}$ that:\n\t\\begin{align} \\label{eq:dev}\n\t&\\sup_{\\substack{ \\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\t \\bigg| \\frac{1}{N} \\sum_{n \\in [N]}\n\t\t h_n(\\D_{1:K}) - \\mbb{E} \\lr{ h(\\D_{1:K})} \\bigg| \\nonumber\\\\\n\t&\\qquad \\quad \\leq 4\\sqrt{\\frac{\\pi}{2N}} \\bigg(\\sum_{k \\in [K]} L_k\\varepsilon_k \\sqrt{Km_kp_k} \\bigg)\n\t\t+ B \\sqrt{\\frac{2\\xi}{N}}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma}[Lemma 5~\\cite{gribonval2014sparse}]\n\\label{lem:H_Ps}\nFor any $\\delta_k<1$, $\\mb{D}_k,\\mb{D}_k'$ such that $\\max(\\delta_{s_k}(\\mb{D}_k),\\delta_{s_k}(\\mb{D}'_k))\\leq \\delta_k$, and $\\mc{J}_k \\subset p_k, |\\mc{J}_k|=s_k$, we have\n\t\\begin{align} \\label{eq:PH_PHp}\n\t&\\|\\mb{I} - \\D_{k,\\cJ_k}^+ \\mb{D}'_{k,\\mc{J}_k}\\|_2 \\leq (1-\\delta_k)^{-1\/2} \\|\\D_k-\\D'_k\\|_F, \\nonumber \\\\\n\t&\\|\\bH_{\\D_{k,\\cJ_k}}-\\bH_{\\D'_{k,\\cJ_k}} \\|_2 \\leq 2(1-\\delta_k)^{-3\/2} \\|\\D_k-\\D'_k\\|_F, \\nonumber \\\\\n\t&\\|\\D_{k,\\cJ_k}^+ - {\\D'}_{k,\\cJ_k}^+ \\|_2\\leq 2(1-\\delta_k)^{-1} \\|\\D_k-\\D'_k\\|_F,\\text{and}\\nonumber \\\\\n\t&\\|\\bP_{\\D_{k,\\cJ_k}} -\\bP_{\\D'_{k,\\cJ_k}} \\|_2 \\leq 2(1-\\delta_k)^{-1\/2} \\|\\D_k-\\D'_k\\|_F.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{lemma} \\label{lem:phi_m_T1_lip}\nConsider $\\mb{D}^0_k \\in \\mc{D}_k$ and $\\varepsilon_k$'s such that $\\varepsilon_k < \\sqrt{1-\\delta_{s_k}(\\mb{D}_k^0)}$, for $k \\in [K]$ and define\n $\\sqrt{1-\\delta_k} \\triangleq \\sqrt{1-\\delta_{s_k}(\\mb{D}_k^0)} - \\varepsilon_k>0$. 
The function $\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ is almost surely coordinate-wise Lipschitz continuous on $\\lr{ \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k=1}^K$ with Lipschitz constants\n \t\\begin{align} \\label{eq:lipsch_const_h}\n \tL_k \\triangleq (1-\\delta_k)^{-1\/2}\n\t\t \\bigg(& M_x \\bigg( \\prod_{k \\in [K]} \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)}\\bigg)\n\t\t+M_w \\nonumber\\\\\n\t&+ \\lambda\\sqrt{s}\n\t\t \\prod_{k \\in [K] } (1-\\delta_k)^{-1\/2} \\bigg)^2 ,\n \t\\end{align}\nand $\\lra{\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}}$ is almost surely bounded on $\\lr{ \\mathcal{B}_{\\varepsilon_k}(\\mb{D}^0_k)}_{k=1}^K$ by $\\sum_{k \\in [K]} L_k\\varepsilon_k$.\n\\end{lemma}\n\n\n\n\\begin{IEEEproof}[Proof of Theorem 2]\nFrom Lemmas~\\ref{lemma:delt_m_T_dev} and \\ref{lem:phi_m_T1_lip}, we have that with probability at least $1-e^{-\\xi}$:\n\t\\begin{align} \\label{eq:delt_finite_UB}\n\t& \\sup_{\\substack{\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k)\\\\ k \\in [K]}}\n\t\t\\big| \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} - \\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\big| \\nonumber\\\\\n\t&\\qquad \\quad \\quad \\leq \\sqrt{\\frac{2}{N}}\\sum_{k \\in [K]} L_k\\varepsilon_k \\lrp{ 2\\sqrt{\\pi m_kp_k}\n\t\t+ \\sqrt{\\xi}},\n\t\\end{align}\nwhere $L_k$ is defined in \\eqref{eq:lipsch_const_h}.\nFrom \\eqref{eq:delt_finite_UB}, we obtain $ \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} >\\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} - 2\\eta_N$ where $\\eta_N = \\sqrt{\\frac{2}{N}}\\sum_{k \\in [K]} L_k\\varepsilon_k \\lrp{ 2\\sqrt{\\pi m_kp_k}\n\t\t+ \\sqrt{\\xi}}$. In Theorem~\\ref{thm:asymp}, we derived conditions that ensure $\\Delta f_\\mb{y} (\\D_{1:K};\\D^0_{1:K}) = \\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} $ and $\\Delta f_{\\mbb{P}}(\\D_{1:K};\\D^0_{1:K})=\\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$. Therefore, given that the conditions in Theorem~\\ref{thm:asymp} are satisfied, $\\Delta F_\\mb{Y} (\\eps_{1:K}) > \\Delta f_{\\mbb{P}}(\\eps_{1:K}) - 2\\eta_N $, and the existence of a local minimum of $F_\\mb{Y}(\\D_{1:K})$ within radii $\\varepsilon_k$ around $\\mb{D}_k^0$, $k \\in [K]$, is guaranteed with probability at least $1-e^{-\\xi}$ as soon as $2\\eta_N < \\Delta f_\\mbb{P} (\\eps_{1:K}) $. 
According to \\eqref{eq:delt_p_LB}, $\t\\Delta \\phi_\\mbb{P} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}\n\\geq \\dfrac{s\\mbb{E}\\{x^2\\}}{8} \\sum_{k \\in [K]} \\dfrac{\\varepsilon_k}{p_k}\n\\lrp{ \\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda})}$; therefore, it is sufficient to have for all $k \\in [K]$:\n\t\\begin{align*}\n\t\\sqrt{\\frac{8}{N}} L_k\\varepsilon_k \\lrp{ 2\\sqrt{\\pi m_kp_k}\n\t\t+ \\sqrt{\\xi}}\n\t < \\frac{s\\mbb{E}\\{x^2\\}\\varepsilon_k\\lrp{\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda})} }{8p_k},\n\t\\end{align*}\nwhich translates into $N \\geq \\max_{k \\in [K]} N_k$, where\n\t\\begin{align} \\label{eq:N_k}\n\t&N_k= \\lrp{ 2\\sqrt{\\pi m_kp_k} + \\sqrt{\\xi}}^2\n\t\\lrp{ \\frac{2^{4.5}L_k p_k }{s\\mbb{E} \\{x^2\\} (\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))}}^2 .\n\t\\end{align}\t\t\nFurthermore, we can upper bound $L_k$ by\n\t\\begin{align} \\label{eq:Rk_UB}\n\tL_k &\\numrel{\\leq}{r_Rk} \\sqrt{2}\\bigg(1.25^{K\/2} M_x + M_w + 2^{K\/2} \\lambda \\sqrt{s} \\bigg)^2 \\nonumber \\\\\n\t&\\numrel{\\leq}{r_lmb_Mx} \\sqrt{2}c_1 \\bigg(\\big(1.25^{K} + 2^{K} \\bar{\\lambda}^2 \\big) M_x^2 + M_w^2 \\bigg),\n\t\\end{align}\nwhere $c_1$ is some positive constant, \\eqref{r_Rk} follows from the fact that given the assumption in~\\eqref{eq:cond_delt_s_prop}, assumptions in Lemma~\\ref{lem:phi_m_T1_lip} are satisfied with $\\sqrt{1-\\delta_k}\\geq \\sqrt{1\/2}$ for any $\\varepsilon_k\\leq 0.15$, and \\eqref{r_lmb_Mx} follows from the following inequality:\n\t\\begin{align*}\n\t\\lambda\n\t\t= \\bar{\\lambda} \\mbb{E}\\lr{ |x| }\n\t\t= \\dfrac{1}{s}\\bar{\\lambda} \\mbb{E}\\lr{ \\norm{\\mb{x}}_1 }\n\t\t\\leq \\dfrac{1}{\\sqrt{s}} \\bar{\\lambda} \\mbb{E}\\lr{ \\norm{\\mb{x}}_2 }\n\t\t\\leq\\dfrac{1}{\\sqrt{s}} \\bar{\\lambda} M_x.\n\t\\end{align*}\nSubstituting \\eqref{eq:Rk_UB} in \\eqref{eq:N_k} and using $\\lrp{ \\sqrt{\\xi} + 2\\sqrt{\\pi m_kp_k} }^2 \\leq c_2 (\\xi + m_kp_k)$ for some positive constant $c_2$, we get\n\t\\begin{align*} %\n\t&N_k =\n\t\t\\Omega \\bigg(\n\t\tp_k^2 (m_kp_k+\\xi)\n\t\t\\bigg( \\frac{2^{K}(1 + \\bar{\\lambda}^2) M_x^2 + M_w^2}\n\t\t{s^2\\mbb{E}\\{x^2\\}^2(\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))^2} \\bigg) \\bigg)\n\t\t\\nonumber \\\\\n\t&=\n\t\t\\Omega \\bigg(\n\t\t\\frac{p_k^2 (m_kp_k+\\xi) } {(\\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda}))^2}\n\t\t\\bigg( \\frac{2^{K}(1 + \\bar{\\lambda}^2) M_x^2}\n\t\t{s^2\\mbb{E}\\{x^2\\}^2}\n\t\t+ \\frac{M_w^2}{s^2\\mbb{E}\\{x^2\\}^2} \\bigg) \\bigg).\n\t\\end{align*}\nand $N \\geq \\max_{k \\in [K]} N_k $.\n\\end{IEEEproof}\n\n\\begin{remark}\nTo bound deviations of $\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ from its mean,\nwe can also use the bound provided in~\\cite[Theorem 1]{gribonval2015sample} that prove uniform convergence results using covering number arguments for various classes of dictionaries. In this case, we get $\\eta_N \\leq c\\sqrt{\\dfrac{\\lrp{\\sum_k m_kp_k + \\xi}\\log N}{N}}$ for some constant $c$, where an extra $\\sqrt{\\log N}$ term appears compared to \\eqref{eq:dev}. Therefore, Lemma~\\ref{lemma:delt_m_T_dev} provides a tighter upper bound.\n\\end{remark}\n\n\n\n\n\\section{Conclusion} \\label{sec:discuss}\nIn this paper, we focused on local recovery of coordinate dictionaries comprising a Kronecker-structured dictionary used to represent $K$th-order tensor data. 
We derived a sample complexity upper bound for coordinate dictionary identification up to specified errors by expanding the objective function with respect to individual coordinate dictionaries and using the coordinate-wise Lipschitz continuity property of the objective function. This analysis is local in the sense that it only guarantees existence of a local minimum of the KS-DL objective function within some neighborhood of true coordinate dictionaries. Global analysis of the KS-DL problem is left for future work.\nOur results hold for dictionary coefficients generated according to the separable sparsity model. This model has some limitations compared to the random sparsity model and we leave the analysis for the random sparsity model for future work also. Another future direction of possible interest includes providing practical KS-DL algorithms that achieve the sample complexity scaling of Theorem~\\ref{thm:finite_n}.\n\n\n\n\\section*{Appendix A}\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:Dtld}]\nTo prove the existence of such a formation for any $K\\geq 2$, we use induction.\nFor $K=2$, we have\n\t\\begin{align} \\label{eq:k2}\n\t\\lrp{\\mb{D}_1\\otimes \\mb{D}_2} &- \\lrp{\\mb{D}_1^0\\otimes \\mb{D}_2^0} \\nonumber\\\\\n\t&= \\lrp{\\mb{D}_1 - \\mb{D}_1^0} \\otimes \\mb{D}^0_2 + \\mb{D}_1 \\otimes \\lrp{\\mb{D}_2 - \\mb{D}_2^0} \\nonumber \\\\\n\t&= \\lrp{\\mb{D}_1 - \\mb{D}_1^0} \\otimes \\mb{D}_2 + \\mb{D}^0_1 \\otimes \\lrp{\\mb{D}_2 - \\mb{D}_2^0} .\n\t\\end{align}\nFor $K$ such that $K>2$, we assume the following holds:\n\t\\begin{align} \\label{eq:kK}\n\t&\\bigotimes_{k \\in [K]} \\mb{D}_k - \\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\nonumber\\\\\n\t&\\qquad = \\sum _{k \\in [K]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes\n\t\t\\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\widetilde{\\mb{D}}_{k,K}.\n\t\\end{align}\nThen, for $K+1$, we have:\n\t\\begin{align} \\label{eq:kKp1}\n\t&\\bigotimes_{k \\in [K+1]} \\mb{D}_k - \\bigotimes_{k \\in [K+1]} \\mb{D}_k^0 \\nonumber\\\\\n\t&= \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k \\bigg) \\otimes \\mb{D}_{K+1}\n\t\t- \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\bigg) \\otimes \\mb{D}_{K+1}^0 \\nonumber\\\\\n\t&\\numrel{=}{r_k2} \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k -\\bigotimes_{k \\in [K]} \\mb{D}_k^0 \\bigg)\n\t\t\\otimes \\mb{D}_{K+1}^0 \\nonumber\\\\\n\t&+ \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k \\bigg) \\lrp{\\mb{D}_{K+1} - \\mb{D}_{K+1}^0} \\nonumber\\\\\n\t&\\numrel{=}{r_kK} \\bigg( \\sum _{k \\in [K]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes \\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\widetilde{\\mb{D}}_{k,K} \\bigg)\n\t\t \\nonumber \\\\\n\t&\\qquad \\qquad \\otimes \\mb{D}_{K+1}^0 + \\bigg( \\bigotimes_{k \\in [K]} \\mb{D}_k \\bigg) \\lrp{\\mb{D}_{K+1} - \\mb{D}_{K+1}^0} \\nonumber\\\\\n\t&\\numrel{=}{r_allcases}\n\t \t\\sum _{k \\in [K+1]} \\widetilde{\\mb{D}}_{k,1} \\otimes \\dots \\otimes \\lrp{\\mb{D}_k - \\mb{D}^0_k}\t\\otimes \\dots \\otimes \\wt{\\D}_{k,K+1} ,\n\t\\end{align}\nwhere \\eqref{r_k2} follows from \\eqref{eq:k2}, \\eqref{r_kK} follows from \\eqref{eq:kK} and \\eqref{r_allcases} follows from replacing $\\mb{D}^0_{K+1}$ with $\\wt{\\D}_{k,K+1}$ in the first $K$ terms of the summation and $\\mb{D}_k$'s with $\\wt{\\D}_{K+1,k}$, for $k \\in[K]$, in the $(K+1)$th term of the summation.\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lemma:f_closedform}]\nUsing the same definition as Gribonval et al.~\\cite[Definition 1]{gribonval2014sparse}, taking 
the derivative of $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$ with respect to $\\mb{x}$ and setting it to zero, we get the expression in~\\eqref{eq:xhat_closedform} for $\\widehat{\\mb{x}}$. Substituting $\\widehat{\\mb{x}}$ in~\\eqref{eq:delt_inf}, we get\n \t\\begin{align*}\n\t&\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} = \\frac{1}{2}\n\t\t\\bigg[\n\t\t\\|\\mb{y}\\|_2^2 - \\lrp{ \\lrp{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top} \\mb{y} -\\lambda \\boldsymbol{\\sigma}_\\mc{J} }^\\top \\nonumber\\\\\n\t&\\qquad \\lrp{ \\bigotimes (\\mb{D}_{k,\\mc{J}_k}^\\top \\mb{D}_{k,\\mc{J}_k})^{-1} }\n\t\t\\lrp{ \\lrp{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top} \\mb{y} -\\lambda \\boldsymbol{\\sigma}_\\mc{J} }\n\t\t\\bigg]\\nonumber\\\\\n\t&\\qquad \\qquad \\quad \\numrel{=}{r_dP} \\frac{1}{2}\\|\\mb{y}\\|_2^2\n\t\t- \\frac{1}{2} \\mb{y}^\\top \\lrp{\\bigotimes \\bP_{\\D_{k,\\cJ_k}}} \\mb{y} \\nonumber\\\\\n\t&\\qquad + \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{\\bigotimes \\D_{k,\\cJ_k}^+}\\mb{y}\n\t\t-\\frac{\\lambda^2}{2}\\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{ \\bigotimes \\bH_{\\D_{k,\\cJ_k}}} \\boldsymbol{\\sigma}_\\mc{J},\n\t\\end{align*}\nwhere \\eqref{r_dP} follows from \\eqref{eq:P_H_Ps}.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:exp_phi}]\nWe use the expression for $\\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}}$ from~\\eqref{eq:delt_closedform}. For any $\\mb{D}=\\bigotimes \\mb{D}_k,\\mb{D}'=\\bigotimes \\mb{D}'_k$, $\\mb{D}_k,\\mb{D}_k' \\in \\mc{D}_k$, we have\n\t\\begin{align} \\label{eq:delt_ph_1}\n\t&\\Delta \\phi_\\mb{y}\\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t= \\phi_\\mb{y}\\lrp{\\D_{1:K}|\\boldsymbol{\\sigma}} - \\phi_\\mb{y}\\lrp{\\D'_{1:K}|\\boldsymbol{\\sigma}} \\nonumber \\\\\n\t&\\quad= \t\\frac{1}{2} \\mb{y}^\\top \\lrp{ \\bigotimes \\bP_{\\D'_{k,\\cJ_k}} - \\bigotimes \\bP_{\\D_{k,\\cJ_k}} } \\mb{y} \\nonumber\\\\\n\t&\\qquad- \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{ \\bigotimes {\\D'}_{k,\\cJ_k}^+ - \\bigotimes \\D_{k,\\cJ_k}^+ } \\mb{y} \\nonumber \\\\\n\t&\\qquad + \\frac{\\lambda^2}{2}\\boldsymbol{\\sigma}_\\mc{J}^\\top \\lrp{ \\bigotimes \\bH_{\\D'_{k,\\cJ_k}} - \\bigotimes \\bH_{\\D_{k,\\cJ_k}} } \\boldsymbol{\\sigma}_\\mc{J}.\n\t\\end{align}\nWe substitute $\\mb{y} = \\lrp{\\bigotimes \\mb{D}^0_k} \\mb{x} +\\mb{w} = \\lrp{\\bigotimes \\mb{D}^0_{k,\\mc{J}_k} }\\mb{x}_{\\mc{J}} +\\mb{w}$\nand break up the sum in \\eqref{eq:delt_ph_1} into 6 terms:\n\t\\begin{align} \\label{eq:delt_t}\n\t\\Delta \\phi_\\mb{y}\\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}} = \\sum_{i\\in[6]} \\Delta \\phi_i \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}} ,\n\t\\end{align}\t\nwhere\n\t\\begin{align} \\label{eq:delt_phi}\n\t& \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t\t= \\frac{1}{2} {\\mb{x}}^\\top \\lrp{\\bigotimes \\mb{D}^0_k }^\\top \\nonumber\\\\\n\t&\\qquad\t\\lrp{ \\bigotimes \\bP_{\\D'_{k,\\cJ_k}} - \\bigotimes \\bP_{\\D_{k,\\cJ_k}} }\n\t\t\\lrp{\\bigotimes \\mb{D}^0_k } \\mb{x} \\nonumber\\\\\n\t&\\numrel{=}{r_Dtild} \\frac{1}{2} {\\mb{x}}^\\top \\lrp{\\bigotimes \\mb{D}^0_k }^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes\n\t\t\\nonumber\\\\\t\t\n\t&\\qquad \\lrp{\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}\t\\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\bigg)\n\t\t\\lrp{\\bigotimes \\mb{D}^0_k } \\mb{x} \\nonumber \\\\\n\t&= \\frac{1}{2} {\\mb{x}}^\\top\n\t\t\\bigg( \\sum_{k 
\\in [K]} \\lrp{{\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{{\\mb{D}^0_k}^\\top(\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}\t\\otimes \\dots \\otimes \\nonumber \\\\\n\t&\\qquad\n\t\t\\lrp{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K} \\bigg)\n\t\t\\mb{x} , \\nonumber \\\\\n\t&\\Delta \\phi_2 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t%\n\t= \\mb{w}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{\\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1}\n\t\t\\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad\n\t\t\\lrp{(\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}) \\mb{D}^0_k}\t\t\t\n\t\t\\otimes \\dots \\otimes\n\t\t\\lrp{ \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\mb{D}^0_K}\\bigg)\n\t\t\\mb{x}, \\nonumber \\\\\n\t&\\Delta \\phi_3 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t\t= \\frac{1}{2} \\mb{w}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}\n\t\t\\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg)\n\t\t\\mb{w}, \\nonumber \\\\\n\t&\\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t\t= - \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1} \\otimes \\dots \\otimes \t\\nonumber\\\\\n\t&\\qquad\n\t\t\\lrp{({\\D'}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+ ) \\mb{D}^0_k} \\otimes \\dots \\otimes\n\t\t\\lrp{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K} \\bigg)\n\t\t\\mb{x}, \\nonumber\\\\\n\t&\\Delta \\phi_5 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t\t= - \\lambda \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+ \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{{\\D'}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+ }\t\\otimes \\dots \\otimes\n\t\t {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\bigg) \\mb{w}, \\text{and}\\nonumber \\\\\n\t&\\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}}\n\t%\n\t%\n\t%\n\t\t= \\frac{\\lambda^2}{2}\\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t \t\\bigg( \\sum_{k \\in [K]} \\mb{H}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t &\\qquad \\lrp{\\bH_{\\D'_{k,\\cJ_k}} - \\bH_{\\D_{k,\\cJ_k}} }\n\t \t\\otimes \\dots \\otimes\n\t\t\\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg) \\boldsymbol{\\sigma}_\\mc{J},\n\t\\end{align}\nwhere \\eqref{r_Dtild} follows from Lemma~\\ref{lem:Dtld} and analysis for derivation of $\\lr{\\Delta \\phi_i\\lrp{ \\D_{1:K};\\D'_{1:K} | \\boldsymbol{\\sigma}} }_{i=2}^6 $ are omitted due to space constraints.\nNow, we set $\\mb{D}' = \\mb{D}^0$ and take the expectation of $\\Delta \\phi_\\mb{y}\\lrp{ \\D_{1:K};\\{\\mb{D}^0_k\\} | \\boldsymbol{\\sigma}}$ with respect to $\\mb{x}$ and $\\mb{w}$. 
Since the coefficient and noise vectors are uncorrelated,\n\t\\begin{align*}\n\t\\mbb{E} \\lr{ \\Delta \\phi_2 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}}=\\mbb{E} \\lr{\\Delta \\phi_5 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } = 0.\n\t\\end{align*}\nWe can restate the other terms as:\n\t\\begin{align} \\label{eq:delt_tr}\n\t &\\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t &\\numrel{=}{r_Imk} \\frac{1}{2}\\mathop{\\mathrm{Tr}}\\nolimits \\bigg[\\mb{x}_\\mc{J} {\\mb{x}}^\\top_\\mc{J}\n\t\t \\sum_{k \\in [K]} \\lrp{{\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\lrp{{\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}\t\n\t\t\\otimes \\dots \\otimes\n\t\t\\lrp{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}\n\t\t\\bigg] ,\\nonumber \\\\\n\t& \\Delta \\phi_3 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&= \\frac{1}{2} \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[ \\mb{w} \\mb{w}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes\n\t\t\\lrp{\\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}} \\nonumber\\\\\n\t&\\qquad \\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\bigg) \\bigg],\n\t\t\\nonumber\\\\\n\t &\\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t &\\numrel{=}{r_Isk} - \\lambda \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[ \\mb{x}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\lrp{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k}\t\\otimes \\dots \\otimes\n\t\t\\lrp{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K}\\bigg) \\bigg] ,\\text{and}\\nonumber \\\\\n\t& \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&= \\frac{\\lambda^2}{2} \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[ \\boldsymbol{\\sigma}_\\mc{J} \\boldsymbol{\\sigma}_\\mc{J}^\\top\n\t \t\\bigg( \\sum_{k \\in [K]} \\mb{H}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }\t\\otimes \\dots \\otimes\n\t\t\\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg) \\bigg],\n\t\\end{align}\nwhere \\eqref{r_Imk} and \\eqref{r_Isk} follow from the facts that $\\mb{P}_{\\mb{D}^0_{k,\\mc{J}_k}}\\mb{D}_k^0 =\\mb{D}_k^0$ and ${\\mb{D}^0}^+_{k,\\mc{J}_k} \\mb{D}^0_k = \\mb{I}_{s_k}$, respectively. 
Taking the expectation of the terms in~\\eqref{eq:delt_tr}, we get\n\t\\begin{align} \\label{eq:jam_T}\n\t& \\mbb{E} \\lr{ \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber \\\\\n\t&\\numrel{=}{r_tr_eq} \\frac{\\mbb{E}\\{x^2\\}}{2}\\mbb{E}_\\mc{J}\n\t\t\\bigg\\{ \\sum_{k \\in [K]} \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} \\dots \\nonumber\\\\\n\t&\\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k} \\dots\n\t\t\\mathop{\\mathrm{Tr}}\\nolimits \\lrb{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}\\bigg\\}\n\t\t\\nonumber \\\\\n\t&= \\frac{\\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_1}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\\mb{D}^0_1} } \\dots\\nonumber\\\\\n\t&\\qquad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k}} \\dots \\nonumber \\\\\n\t& \\qquad\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{{\\mb{D}^0_K}^\\top \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\mb{D}^0_K}}, \\nonumber \\\\\n\t& \\mbb{E} \\{ \\Delta \\phi_3 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\} \\nonumber \\\\\n\t&= \\frac{\\mbb{E}\\{w^2\\}}{2}\n\t\t\\mbb{E}_\\mc{J}\n\t\t\\bigg\\{ \\mathop{\\mathrm{Tr}}\\nolimits \\bigg[\\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}} \\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\lrp{\\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}} \\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} \\bigg] \t\n\t\t\\bigg\\}\n\t\t\\nonumber \\\\\n\t&= \\frac{\\mbb{E}\\{w^2\\}}{2}\n\t\t\\mbb{E}_\\mc{J}\n\t\t\\bigg\\{ \\sum_{k \\in [K]} \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{P}_{\\widetilde{\\mb{D}}_{1,J_1}} } \\dots\n\t\t \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}} \\nonumber\\\\\n\t&\\qquad \\dots \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}} } \\bigg\\} \\nonumber\\\\\n\t&\\numrel{=}{r_proj_tr} 0, \\nonumber \\\\\n\t& \\mbb{E} \\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber \\\\\n\t& =- \\lambda \\mbb{E}\\{|x|\\}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}^+\\mb{D}^0_1}} \\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k}} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}^+ \\mb{D}^0_K}},\n\t \\nonumber \\\\\n\t& \\mbb{E} \\lr{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber \\\\\n\t&= \\frac{\\lambda^2}{2}\n\t\t\\sum_{k \\in [K]} \\mbb{E}_{\\mc{J}_1} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}} }\\dots \\nonumber\\\\\n\t&\\quad \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }\t} \\dots\n\t\t\\mbb{E}_{\\mc{J}_K} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}}} .\n\t\\end{align}\t\nwhere \\eqref{r_tr_eq} follows from the relation $\\mathop{\\mathrm{Tr}}\\nolimits(\\mb{A}\\otimes 
\\mb{B})=\\mathop{\\mathrm{Tr}}\\nolimits[\\mb{A}]\\mathop{\\mathrm{Tr}}\\nolimits[\\mb{B}]$~\\cite{horn2012matrix} and \\eqref{r_proj_tr} follows from the fact that $\\bP_{\\D_{k,\\cJ_k}}$'s are orthogonal projections onto subspaces of dimension $s_k$ and $ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\bP_{\\D^0_{k,\\cJ_k}} -\\bP_{\\D_{k,\\cJ_k}}}=s_k - s_k=0$.\nAdding the terms in \\eqref{eq:jam_T}, we obtain the expression in \\eqref{eq:delt_ph_2}.\n\\end{IEEEproof}\n\n\\begin{IEEEproof} [Proof of Lemma~\\ref{lem:UB_mu_delt_RIP}]\nEquation \\eqref{eq:l2_delt} follows from the definition of $\\mathsf{RIP}$ and \\eqref{eq:delt_mu} follows from Gerschgorin's disk theorem~\\cite{HornJohnson,horn2012matrix,golub2012matrix}.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof} [Proof of Lemma~\\ref{lem:A_B_delt}]\nTo lower bound $\\Delta \\phi_\\mbb{P}\\lrp{ \\D_{1:K}; \\D^0_{1:K} | \\boldsymbol{\\sigma}}$, we bound each term in~\\eqref{eq:delt_ph_2} separately. For the first term $\\mbb{E}\\lr{ \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }$, we have\n\t\\begin{align} \\label{eq:P_Dt_LB}\n\t&\\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top \\mb{P}_{\\wt{\\D}_{k,\\mc{J}_k}}\\mb{D}^0_k} }\t\n\t%\n\t%\n\t\t= \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{\\mb{P}_{\\wt{\\D}_{k,\\mc{J}_k}}\\mb{D}^0_{k,\\mc{J}_k}}_F^2} .\n\t\\end{align}\nIf $\\wt{\\D}_k = \\mb{D}_k^0$, then\n\t\\begin{align}\n\t\\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{ \\bP_{\\D^0_{k,\\cJ_k}} \\mb{D}^0_{k,\\mc{J}_k}}_F^2} &\n\t\t%\n\t\t\\numrel{=}{r_Esp} \\frac{s_k}{p_k} \\norm{\\mb{D}^0_k}_F^2 = s_k,\n\t\\end{align}\nwhere~\\eqref{r_Esp} follows from~\\cite[Lemma 15]{gribonval2014sparse}. If $\\wt{\\D}_k = \\mb{D}_k$, then \t\n\t\\begin{align*}\n\t& \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{\\mb{P}_{\\mb{D}_{k,\\mc{J}_k}}\\mb{D}^0_{k,\\mc{J}_k}}_F^2}\n\t\\numrel{=}{r_DC} \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{ [\\mb{D}_k \\mb{C}_k^{-1}]_{\\mc{J}_k}}_F^2 } \\nonumber \\\\\n\t& \\numrel{=}{r_exp} \\frac{s_k}{p_k} \\norm{\\mb{D}_k\\mb{C}_k^{-1}}_F^2\n\t\t\\numrel{=}{r_d_j_1} \\frac{s_k}{p_k}\n\t\t\\sum_{j=1}^{p_k} \\frac{1}{\\cos^2(\\theta_{(k,j)})}\n\t\t\\numrel{\\geq}{r_cos_ineq} \\frac{s_k}{p_k} p_k = s_k,\n\t\\end{align*}\nwhere \\eqref{r_DC} is a direct consequence of Lemma~\\ref{lem:D_12_tet}; we can write $\\mb{D}^0_k = \\mb{D}_k\\mb{C}_k^{-1} - \\mathbf{V}_k\\mb{T}_k$ where $\\mb{C}_k = \\Diag{\\cos(\\boldsymbol{\\theta}_{k})}$, $\\mb{T}_k = \\Diag{\\tan(\\boldsymbol{\\theta}_k)}$ and $\\theta_{k,j}$ denotes the angle between $\\mb{d}_{k,j}$ and $\\mb{d}^0_{k,j}$. Hence $\\bP_{\\D_{k,\\cJ_k}} \\mb{D}^0_{k,\\mc{J}_k} = [\\mb{D}_k \\mb{C}_k^{-1}]_{\\mc{J}_k}$. Moreover,~\\eqref{r_exp} follows from~\\cite[Lemma 15]{gribonval2014sparse}, \\eqref{r_d_j_1} follows from the fact that $\\|\\mb{d}_{k,j}\\|_2=1$, and \\eqref{r_cos_ineq} follows from the fact that $\\cos (\\theta_{k,j})<1$. 
Similarly, we have\n\t\\begin{align}\n\t& \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0_k}^\\top(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_k} } \\nonumber\\\\\n\t&\\qquad = \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{(\\mb{I}_{m_k} - \\bP_{\\D_{k,\\cJ_k}})\\mb{D}^0_{k,\\mc{J}_k}}_F^2 } \\nonumber \\\\\n\t&\\qquad \\numrel{\\geq}{r_diff_p_LB} \\frac{s_k}{p_k} \\|\\boldsymbol{\\theta}_k\\|_2^2 \\lrp{ 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k} },\n\t\\end{align}\nwhere~\\eqref{r_diff_p_LB} follows from similar arguments as in Gribonval et al.~\\cite[Equation (72)]{gribonval2014sparse}.\nPutting it all together, we have\n\t\\begin{align} \\label{eq:delt1_LB}\n\t\\mbb{E} &\\lr{ \\Delta \\phi_1 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber\\\\\n\t&\\geq \\frac{\\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\bigg(\\prod_{\\substack{i \\in [K]\\\\ i \\neq k}} s_i\\bigg) \\frac{s_k}{p_k} \\|\\boldsymbol{\\theta}_k\\|_2^2\n\t\t\\lrp{ 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k} } \\nonumber \\\\\n\t&= \\frac{s \\mbb{E}\\{x^2\\}}{2}\n\t\t\\sum_{k \\in [K]} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2 }{p_k}\n\t\t\\lrp{ 1 - \\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k} }.\n\t\\end{align}\n\nNext, to lower bound $\\mbb{E}\\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }$, we upper bound $\\lra{\\mbb{E}\\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }}$. If $\\wt{\\D}_k = \\mb{D}_k^0$, we have\n\t\\begin{align}\n\t\\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}^0}_{k,\\mc{J}_k}^+\\mb{D}^0_{k,\\mc{J}_k}}}\n\t\t= \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{I}_{s_k}}} = s_k,\n\t\\end{align}\notherwise, if $\\wt{\\D}_k = \\mb{D}_k$, we get\n\t\\begin{align}\n\t&\\left| \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ {\\mb{D}_{k,\\mc{J}_k}}^+\\mb{D}^0_k}} \\right| \\nonumber\\\\\n\t&\\qquad \\numrel{\\leq}{r_tr_spn} s_k \\mbb{E}_{\\mc{J}_k} \\lr{ \\norm{ {\\mb{D}}_{k,\\mc{J}_k}^+\\mb{D}^0_{k,\\mc{J}_k}}_2}\n\t\t\\nonumber \\\\\n\t&\\qquad \\leq s_k \\mbb{E}_{\\mc{J}_k} \\lr{ \\| {\\mb{D}}_{k,\\mc{J}_k}^+\\|_2\\|\\mb{D}^0_{k,\\mc{J}_k} \\|_2}\n\t\t\\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_5} s_k \\lrp{ \\frac{1}{\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}} }\n\t\t\\lrp{ \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)}}\n\t\t\\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_delts} s_k \\sqrt{\\frac{1+\\delta_k}{1-\\delta_k}},\n\t\\end{align}\nwhere \\eqref{r_tr_spn} follows from the fact that for a square matrix $\\mb{A} \\in \\mbb{R}^{q\\times q}$, $\\mathop{\\mathrm{Tr}}\\nolimits \\lrb{\\mb{A}} \\leq q\\|\\mb{A}\\|_2$, \\eqref{r_5} follows from \\eqref{eq:l2_delt} and \\eqref{eq:pso_cond} and \\eqref{r_delts} follows from \\eqref{eq:A_B_Delk}. 
Similar to~\\cite[Equation (73)]{gribonval2014sparse}, we also have\n\t\\begin{align}\n\t&\\left| \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\big[\\mb{I}_{s_k} - \\D_{k,\\cJ_k}^+ \\mb{D}^0_k\\big]} \\right|\n\t\\nonumber\\\\\n\t&\\qquad \\leq \\frac{s_k}{p_k} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2}{2}\n\t\t+ \\frac{s_k^2}{p_k^2} \\frac{A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2.\n\t\\end{align}\nThus, defining $\\delta_{-k} \\triangleq \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}} \\sqrt{\\dfrac{1+\\delta_i}{1-\\delta_i}}$, we get\n\t\\begin{align} \\label{eq:delt4_LB}\n\t &\\mbb{E} \\lr{ \\Delta \\phi_4 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} } \\nonumber\\\\\n\t&\\quad \\geq - \\lambda \\mbb{E}\\{|x|\\}\n\t\t\\sum_{k \\in [K]}\\delta_{-k}\\bigg( \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}}s_i\n\t\t \\bigg) \\nonumber\\\\\n\t&\\quad \\qquad \\lrp{\\frac{s_k}{p_k} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2}{2}\n\t\t+ \\frac{s_k^2}{p_k^2} \\frac{A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2}\n\t\t \\nonumber \\\\\n\t&\\quad = - \\lambda s \\mbb{E}\\{|x|\\} \\sum_{k \\in [K]} \\frac{\\delta_{-k}}{p_k}\n\t\t\\lrp{ \\frac{\\|\\boldsymbol{\\theta}_k\\|_2^2}{2}\n\t\t+ \\frac{s_k}{p_k} \\frac{A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2}.\n\t\\end{align}\n\t\nTo lower bound $\\mbb{E}\\lr{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }$, we upper bound $\\lra{\\mbb{E}\\lr{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} }}$. For any $\\wt{\\D}_k$, we have\n\t\\begin{align}\n\t\\lra{ \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\wt{\\D}_{k,\\mc{J}_k}} }} }\n\t\t&\\leq \\mbb{E}_{\\mc{J}_k} \\lr{ s_k \\norm{ \\mb{H}_{\\wt{\\D}_{k,\\mc{J}_k}} }_2} \t\n\t\t\\numrel{\\leq}{r_H_UB} \\frac{s_k}{1-\\delta_k},\n\t\\end{align}\nwhere \\eqref{r_H_UB} follows from \\eqref{eq:pso_cond} and \\eqref{eq:A_B_Delk}. Similar to Gribonval et al.~\\cite[Equation (74)]{gribonval2014sparse}, we also have\n\t\\begin{align*}\n\t\\left| \\mbb{E}_{\\mc{J}_k} \\lr{ \\mathop{\\mathrm{Tr}}\\nolimits \\lrb{ \\mb{H}_{\\mb{D}^0_{k,\\mc{J}_k}} - \\bH_{\\D_{k,\\cJ_k}} }} \\right|\n\t\\leq \\frac{s_k^2}{p_k^2} \\frac{4A_kB_k}{(1-\\delta_k)^2}\\|\\boldsymbol{\\theta}_k\\|_2.\n\t\\end{align*}\nThus, we get\n\t\\begin{align} \\label{eq:delt6_LB}\n\t& \\mbb{E} \\{ \\Delta \\phi_6 \\lrp{ \\D_{1:K};\\D^0_{1:K} | \\boldsymbol{\\sigma}} \\} \\nonumber\\\\\n\t&\\quad \\geq - \\frac{\\lambda^2}{2}\n\t\t\\sum_{k \\in [K]} \\bigg( \\prod_{\\substack{i \\in [K]\\\\ i \\neq k}}\n\t\t\\frac{s_i}{1-\\delta_i}\\bigg)\n\t\t\\lrp{ \\frac{s_k^2}{p_k^2} \\frac{4A_kB_k}{(1-\\delta_k)^2}\\|\\boldsymbol{\\theta}_k\\|_2}\n\t\t\\nonumber \\\\\n\t&\\quad = - \\frac{\\lambda^2s}{2}\n\t\t\\sum_{k \\in [K]} \\frac{1}{p_k}\n\t\t\\bigg( \\prod_{i \\in [K]} \\frac{1}{1-\\delta_i}\\bigg)\n\t\t\\lrp{ \\frac{s_k}{p_k} \\frac{4A_kB_k}{1-\\delta_k}\\|\\boldsymbol{\\theta}_k\\|_2}.\n\t\\end{align}\nAdding~\\eqref{eq:delt1_LB}, \\eqref{eq:delt4_LB}, and \\eqref{eq:delt6_LB}, we get \\eqref{eq: delt_ph_3}.\n\\end{IEEEproof}\n\n\\begin{IEEEproof} [Proof of Proposition \\ref{prop:1}]\nTo show that $\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} > 0$, we use Lemma~\\ref{lem:A_B_delt} and prove that the right hand side of \\eqref{eq: delt_ph_3} is positive under certain conditions. 
First, we ensure the conditions in \\eqref{eq:cond_delt_r_i} and \\eqref{eq:A_B_Delk} hold for Lemma~\\ref{lem:H_Dps} and Lemma~\\ref{lem:A_B_delt}, respectively. We set $\\delta_k = \\dfrac{1}{2}$, $\\delta_{s_k}(\\mb{D}_k) = \\dfrac{1}{2}$ and $\\delta_{s_k}(\\mb{D}^0_k) = \\dfrac{1}{4}$, for $k \\in [K]$. For $\\varepsilon_k\\leq 0.15 $, this ensures:\n\t\t\\begin{align}\n\t\t&\\sqrt{1-\\delta_{s_k}(\\mb{D}_k)}\n\t\t\t\\geq \\sqrt{1-\\delta_{s_k}(\\mb{D}^0_k)} - \\varepsilon_k, \\text{ and}\n\t\t\t%\n\t\t\t\\nonumber \\\\\n\t\t&\\max\\lr{ \\delta_{s_k}(\\mb{D}^0_k), \\delta_{s_k}(\\mb{D}_k)} \\leq \\delta_k,\n\t\t\\end{align}\nand implies $\\delta_k<1$ (condition for Lemmas~\\ref{lem:exp_phi} and \\ref{lem:H_Ps}).\nNext, we find conditions that guarantee:\n\t\\begin{align} \\label{eq:tet_2_p1}\n\t\\frac{s_k}{p_k} \\frac{B^2_k}{1-\\delta_k}\n\t\t+\\bar{\\lambda} \\kappa_x^2\n\t\t\\delta_{-k}\n\t\t\\numrel{=}{r_deltmk} \\frac{2B^2_k s_k}{p_k}\n\t\t+\\bar{\\lambda} \\kappa_x^2\n\t\t\\lrp{3}^{(K-1)\/2}\n\t\t\\leq \\frac{1}{2},\n\t\\end{align}\nwhere \\eqref{r_deltmk} follows from replacing $\\delta_k$ with $\\dfrac{1}{2}$.\nIf we take $\\dfrac{s_k}{p_k} \\leq \\dfrac{1}{8B_k^2}$ and $\\bar{\\lambda}\\leq \\dfrac{1}{8\\times 3^{(K-1)\/2}}$, given the fact that $\\kappa_x^2\\leq 1$, \\eqref{eq:tet_2_p1} is satisfied.\\footnote{These numbers are chosen for a simplified proof and can be modified.}\nConsequently, we can restate~\\eqref{eq: delt_ph_3} as\n\t\\begin{align} \\label{eq:eb_1}\n\t&\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}\n\t\t\\geq \\frac{s \\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\|\\boldsymbol{\\theta}_k\\|_2 }{p_k}\n\t\t\\bigg[\\|\\boldsymbol{\\theta}_k\\|_2 \\nonumber\\\\\n\t&\\qquad -8 \\lrp{3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} }\n\t\t\\bar{\\lambda} \\kappa_x^2\n\t\t\\frac{s_k}{p_k} A_kB_k\\bigg].\n\t\\end{align}\nFrom~\\cite[Proof of Proposition 2]{gribonval2014sparse}, we use the following relations:\n\t\\begin{align} \\label{eq:A0_A}\n\tB_k\\leq B^0_k +\\varepsilon_k \\leq B^0_k +1, \\quad\n\tA_k \\leq A^0_k + 2B_k\\varepsilon_k, \\quad k\\in [K],\n\t\\end{align}\nwhere $A^0_k \\triangleq \\norm{{\\mb{D}_k^0}^\\top \\mb{D}_k^0 - \\mb{I}_{p_k}}_F$ and $B^0_k \\triangleq \\norm{\\mb{D}_k^0}_2$ and \\eqref{eq:A0_A} follows from matrix norm inequalities~\\cite{gribonval2014sparse}.\nDefining $\\gamma_k \\triangleq 16 \\bigg(3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} \\bigg) \\bar{\\lambda}\\kappa_x^2\n\t\t\\dfrac{B_k^2 s_k}{p_k}$ for $k \\in [K]$ and using $\\kappa_x^2 \\leq 1$, we have\n\t\\begin{align} \\label{eq:gam}\n\t\\gamma_k &\\leq\n\t\t2\\bigg(3^{(K-1)\/2}\n\t\t+\\frac{2^{(K+1)}}{8\\times 3^{(K-1)\/2}} \\bigg)\n\t\t\\bigg(\\frac{1}{8\\times 3^{(K-1)\/2}} \\bigg) \\nonumber\\\\\n\t\t&\\leq 2\\lrp{\\frac{1}{8}+\\frac{4}{64}}\n\t\t\\leq\\frac{1}{2}.\n\t\\end{align}\nThen, for $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)$, $k \\in [K]$, we get\n\t\\begin{align}\n\t&\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\nonumber\\\\\n\t&\\qquad \\numrel{\\geq}{r_tet_rk} \\frac{s\\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\frac{\\gamma_k}{2} \\frac{A_k}{B_k}\t}\n\t\t\\nonumber \\\\\n\t&\\qquad\t\\numrel{\\geq}{r_A_B} \\frac{s\\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\frac{\\gamma_k}{2} \\frac{A_k^0+2B_k\\varepsilon_k}{B_k}\t}\n\t\t\\nonumber \\\\\n\t&\\qquad\t\\geq 
\\frac{s\\mbb{E}\\{x^2\\}}{4}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k(1-\\gamma_k) - \\frac{\\gamma_k}{2} \\frac{A_k^0}{B_k}\t}\n\t\t\\nonumber \\\\\n\t&\\qquad\t\\numrel{\\geq}{r_gam} \\frac{s\\mbb{E}\\{x^2\\}}{8}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\gamma_k \\frac{A_k^0}{B_k}\t},\n\t\\end{align}\nwhere \\eqref{r_tet_rk} follows from \\eqref{eq:eb_1}, \\eqref{r_A_B} follows from \\eqref{eq:A0_A}, and \\eqref{r_gam} follows from \\eqref{eq:gam}.\nHence, we can write\n\t\\begin{align} \\label{eq:Delt_LB_rmin}\n\t\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} \\geq& \\frac{s\\mbb{E}\\{x^2\\}}{8}\n\t\t\\sum_{k \\in [K]} \\frac{\\varepsilon_k}{p_k}\n\t\t\\lrp{ \\varepsilon_k - \\varepsilon_{k,\\min}(\\bar{\\lambda})},\n\t\\end{align}\nwhere we define\n\t\\begin{align}\n\t& \\varepsilon_{k,\\min} (\\bar{\\lambda})\n\t\\triangleq \\gamma_k \\frac{A_k^0}{B_k} \\nonumber\\\\\n\t&= 16 \\lrp{3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} }\n\t\t\\bar{\\lambda}\\kappa_x^2\n\t\t\\frac{s_k}{p_k} A_k^0 B_k \\nonumber \\\\\n\t&=\\frac{2}{3^{(K+1)\/2}} \\bigg(3^{(K-1)\/2}\n\t\t+2^{(K+1)}\\bar{\\lambda} \\bigg) \\bar{\\lambda} C_{k,\\min},\n\t\\end{align}\nand $C_{k,\\min}$ is defined in \\eqref{eq:cond_C_min_C_max}.\nThe lower bound in \\eqref{eq:Delt_LB_rmin} holds for any $\\varepsilon_k \\leq 0.15$ and $\\mb{D}_k \\in \\mc{S}_{\\varepsilon_k}(\\mb{D}_k^0)$, $k \\in [K]$.\nFinally, since $3^{(K-1)\/2} +2^{(K+1)}\\bar{\\lambda} \\leq 0.5\\times 3^{(K+1)\/2}$,\nthe assumption\n$\\bar{\\lambda} \\leq 0.15\/(\\max_{k \\in [K]} C_{k,\\min})$\nimplies that $\\varepsilon_{k,\\min}(\\bar{\\lambda}) \\leq 0.15$ for $k\\in[K]$.\nTherefore, $\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} > 0 $ for all $\\varepsilon_k \\in (\\varepsilon_{k,\\min}(\\bar{\\lambda}),0.15]$, $k \\in [K]$.\n\\end{IEEEproof}\n\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:mu_mu0_rel}]\nConsidering $j \\not \\in \\mc{J}$, associated with $\\lrp{j_1,\\dots,j_k} \\not \\in \\lrp{\\mc{J}_1\\times \\dots \\times \\mc{J}_K}$, we have\n\t\\begin{align}\n\t&\\|\\mb{D}_\\mc{J}^\\top \\mb{d}_j\\|_1 \\nonumber\\\\\n\t&\\numrel{\\leq}{r_tr_ineq} \\|{\\mb{D}_\\mc{J}^0}^\\top \\mb{d}^0_j\\|_1 + \\|{\\mb{D}_\\mc{J}^0}^\\top (\\mb{d}_j-\\mb{d}^0_j)\\|_1\n\t\t+ \\|(\\mb{D}_\\mc{J}-\\mb{D}_\\mc{J}^0)^\\top \\mb{d}_j\\|_1\\nonumber \\\\\n\t&\\leq \\mu_s(\\mb{D}^0)\n\t\t+ \\sqrt{s} \\lrb{ \\|{\\mb{D}_\\mc{J}^0}^\\top( \\mb{d}_{j}-\\mb{d}_{j}^0) \\|_2\n\t\t+ \\|(\\mb{D}_\\mc{J}-\\mb{D}^0_\\mc{J})^\\top\\mb{d}_j\\|_2} \\nonumber \\\\\n\t&\\leq \\mu_s(\\mb{D}^0) + \\sqrt{s}\n\t\t\\bigg[ \\norm{ \\bigotimes {\\mb{D}^0_{k,\\mc{J}_k}}^\\top}_2\n\t\t\\norm{ \\bigotimes \\lrp{ \\mb{d}_{k,j_k} - \\mb{d}^0_{k,j_k} }}_2 \\nonumber\\\\\n\t&\\qquad + \\norm{\\bigotimes \\mb{D}_{k,\\mc{J}_k} -\\bigotimes \\mb{D}^0_{k,\\mc{J}_k}}_2\n\t\t \\norm{\\mb{d}_j}_2 \\bigg] \\nonumber \\\\\n\t&\\numrel{\\leq}{r_d0_mu} \\mu_s(\\mb{D}^0) + \\sqrt{s}\n\t\t\\bigg[ \\bigg( \\prod_{k \\in [K]} \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)} \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} \\norm{ \\widetilde{\\mb{d}}_{1,j_1}}_2\\dots\n\t\t\\norm{\\mb{d}_{k,j_k} - \\mb{d}^0_{k,j_k} }_2 \\dots \\\n\t\t\\norm{ \\widetilde{\\mb{d}}_{k,j_K}}_2 \\bigg) \\nonumber \\\\\n\t &\\quad\n\t \t+ \\sum_{k \\in [K]} \\norm{ \\wt{\\D}_{1,\\mc{J}_1}}_2\\dots\n\t\t\\norm{\\mb{D}_{k,\\mc{J}_k} - \\mb{D}^0_{k,\\mc{J}_k} }_2 \\dots \\\n\t\t\\norm{ 
\\wt{\\D}_{k,\\mc{J}_k}}_2 \\bigg] \\nonumber \\\\\n\t&\\numrel{\\leq}{r_RIP_d} \\mu_s(\\mb{D}^0) + \\sqrt{s}\n\t\t\\bigg[ \\bigg( \\prod_{k \\in [K]} \\sqrt{1+\\delta_{s_k}(\\mb{D}^0_k)} \\bigg)\n\t\t\\bigg( \\sum_{k \\in [K]} \\varepsilon_k \\bigg) \\nonumber\\\\\n\t&\\qquad + \\sum_{k \\in [K]} \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t\\norm{\\wt{\\D}_{i,\\mc{J}_i}}_2 \\bigg) \\varepsilon_k\n\t\t \\bigg] \\nonumber \\\\\n\t%\n\t%\n\t%\n\t%\n\t%\n\t&\\numrel{\\leq}{r_asump} \\mu_s(\\mb{D}^0) + 2(1.5)^{K\/2}\\sqrt{s} \\bigg( \\sum_{k \\in [K]} \\varepsilon_k \\bigg),\n\t\\end{align}\t\nwhere \\eqref{r_tr_ineq} follows from the triangle inequality, \\eqref{r_d0_mu} follows from \\eqref{eq:otD_otDp}, \\eqref{r_RIP_d} follows from \\eqref{eq:delt_mu}, and, \\eqref{r_asump} follows from substituting the upper bound value from \\eqref{eq:UB_mu_delt} for $\\delta_{s_k}(\\mb{D}_k^0) $. For $\\wt{\\D}_i = \\mb{D}_i^0$, $\\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 \\leq \\sqrt{1 + \\delta_{s_i}(\\mb{D}^0_i)} \\leq \\sqrt{\\frac{5}{4}} < 1.5$ and for $\\wt{\\D}_i = \\mb{D}_i$, according to \\eqref{eq:A0_A}, we have $\\norm{\\mb{D}_{i,\\mc{J}_i}}_2 \\leq \\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 + \\varepsilon_i \\leq \\sqrt{\\frac{5}{4}}+0.15 < 1.5$.\n\\end{IEEEproof}\n\n\\begin{IEEEproof}[Proof of Proposition~\\ref{prop:3}]\nWe follow a similar approach to Gribonval et al.~\\cite{gribonval2014sparse}. We show that the conditions in~\\eqref{eq:lem8_cod} hold for Lemma~\\ref{lem:a_hat_min_cond}. We have\n\t\\begin{align}\n\t& \\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x} }_2 \\nonumber\\\\\n\t&\\leq \\norm{ \\lrp{ \\bigotimes \\mb{D}^0_{k,\\mc{J}_k} - \\bigotimes \\mb{D}_{k,\\mc{J}_k} }\n\t\t\\mb{x}_\\mc{J}} _2+\\|\\mb{w}\\|_2 \\nonumber \\\\\n\t&\\leq M_x\n\t\t\\sum _{k \\in [K]} \\big\\| \\wt{\\D}_{1,\\mc{J}_1} \\otimes \\dots \\otimes\n\t\t\\lrp{\\mb{D}^0_{k,\\mc{J}_k} - \\mb{D}_{k,\\mc{J}_k} }\t\\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad \\wt{\\D}_{K,\\mc{J}_K}\\big\\|_2 +M_w \\nonumber \\\\\n\t&\\leq M_x\n\t\t\\sum _{k \\in [K]} \\norm{ \\wt{\\D}_{1,\\mc{J}_1}}_2 \\dots\n\t\t\\norm{\\mb{D}^0_{k,\\mc{J}_k} - \\mb{D}_{k,\\mc{J}_k} }_2 \\dots \\norm{ \\wt{\\D}_{K,\\mc{J}_K}}_2 \\nonumber\\\\\n\t&\\qquad +M_w\n\t\t\\nonumber \\\\\n\t&\\leq M_x\n\t\t\\sum _{k \\in [K]}\n\t\t\\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t\\norm{ \\wt{\\D}_{i,\\mc{J}_i }}_2 \\bigg) \\varepsilon_k\n\t\t +M_w \\nonumber \\\\\n\t&\\numrel{\\leq}{r_delt_leq} (1.5)^{(K-1)\/2} M_x \\sum_{k \\in [K]} \\varepsilon_k + M_w,\n\t\\end{align}\nwhere \\eqref{r_delt_leq} follows from \\eqref{eq:delt_cond} and the fact that for $\\wt{\\D}_i = \\mb{D}_i^0$, $\\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 \\leq \\sqrt{1 + \\delta_{s_i}(\\mb{D}^0_i)} \\leq \\sqrt{\\frac{5}{4}} < 1.5$ and for $\\wt{\\D}_i = \\mb{D}_i$, according to \\eqref{eq:A0_A}, we have $\\norm{\\mb{D}_{i,\\mc{J}_i}}_2 \\leq \\norm{\\mb{D}_{i,\\mc{J}_i}^0}_2 + \\varepsilon_i \\leq \\sqrt{\\frac{5}{4}}+0.15 < 1.5$.\nHence, we get\n\t\\begin{align} \\label{eq:lmb_m_err}\n\t&\\lambda (1-2\\mu_s(\\mb{D}))- \\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x}}_2 \\nonumber \\\\\n\t&\\geq \\lambda (1-2\\mu_s(\\mb{D})) - (1.5)^{(K-1)\/2} M_x \\sum_{k \\in [K]} \\varepsilon_k -M_w\n\t\t\\nonumber \\\\\n\t&\\numrel{\\geq}{r_mu_mu0} \\lambda (1-2\\mu_s(\\mb{D}^0))\n\t\t- (1.5)^{K\/2} \\lrp{ 4\\lambda \\sqrt{s} + (1.5)^{-1\/2}M_x} \\nonumber\\\\\n\t&\\qquad \\sum_{k \\in [K]} \\varepsilon_k\n\t\t-M_w \\nonumber \\\\\n\t&\\numrel{\\geq}{r_lam_m_a} \\lambda 
(1-2\\mu_s(\\mb{D}^0)) - 3(1.5)^{K\/2} M_x \\sum_{k \\in [K]} \\varepsilon_k -M_w\n\t\\nonumber \\\\\n\t&= 3(1.5)^{K\/2} M_x \\bigg( K \\bar{\\lambda} C_{\\max} - \\sum_{k \\in [K]} \\varepsilon_k \\bigg) -M_w,\n\t\\end{align}\nwhere \\eqref{r_mu_mu0} follows from \\eqref{eq:mu_mu0} and \\eqref{r_lam_m_a} follows from \\eqref{eq:lem8_cod} ($2\\lambda \\sqrt{s} \\leq x_{\\min}\\sqrt{s}\\leq M_x$) and~\\eqref{eq:mu_mu0}.\nIf $\\varepsilon_k < C_{\\max} \\bar{\\lambda} $, $k \\in [K]$, the assumption on the noise level in \\eqref{eq:M_eps_M_al} implies that the right-hand side of \\eqref{eq:lmb_m_err} is greater than zero and $\\lambda (1-2\\mu_s(\\mb{D}))> \\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x}}_2 $. Thus, according to Lemma~\\ref{lem:a_hat_min_cond}, $\\widehat{\\mb{x}}$ is almost surely the unique solution of $\\min_{\\mb{x} } \\frac{1}{2}\\norm{\\mb{y} - \\lrp{ \\bigotimes \\mb{D}_k} \\mb{x}' }_2 +\\lambda\\|\\mb{x}'\\|_1$ and $\\Delta\\phi_{\\mbb{P}}\\lrp{\\D_{1:K},\\D^0_{1:K}|\\boldsymbol{\\sigma}} = \\Delta f_{\\mbb{P}} \\lrp{\\D_{1:K},\\D^0_{1:K}}$.\n\\end{IEEEproof}\n\n\n\n\\section*{appendix B}\n\n\n\\begin{IEEEproof} [Proof of Lemma~\\ref{lemma:delt_m_T_dev}]\nAccording to Lemma~\\ref{lem:rad}, we have to upper bound\n$\n\t\\mbb{E} \\lr{ \\sup_{\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k), k \\in [K]}\\lra{ \\frac{1}{N}\n\\sum_{n \\in [N] } \\beta_n h_n(\\D_{1:K}) }}\n$. Conditioned on the draw of functions $h_1,\\dots,h_N$, consider the Gaussian processes\n$\n\tA_{\\D_{1:K}} = \\frac{1}{N} \\sum_{n \\in [N]} \\beta_n h_n(\\D_{1:K})\n$ and\n$\n\tC_{\\D_{1:K}} = \\sqrt{\\frac{K}{N}}\n\t\t\\sum_{k \\in [K]} \\bigg( L_k \\sum_{i \\in [m_k]} \\sum_{j \\in [p_k]}\n\t\t\\zeta_{ij}^k (\\mb{D}_k-\\mb{D}_k^0)_{ij} \\bigg)\n$,\nwhere $\\lr{\\beta_n}_{n=1}^N$'s and $\\lr{\\zeta_{ij}^k},k\\in [K],i\\in [m_k],j\\in [p_k]$'s are independent standard Gaussian vectors.\nWe have\n\t\\begin{align}\n\t&\\mbb{E} \\lr{\\lra{A_{\\D_{1:K}} - A_{\\D'_{1:K}}}^2} \\nonumber\\\\\n\t&\\qquad= \\frac{1}{N^2} \\bigg|\n\t\t\\sum_{n \\in [N]} h_n(\\D_{1:K})- h_n(\\D'_{1:K}) \\bigg|^2 \\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_lpsh_h} \\frac{1}{N}\n\t\t\\bigg(\\sum_{k \\in [K]} L_k\\|\\D_k-\\D'_k\\|_F \\bigg)^2\\nonumber \\\\\n\t&\\qquad \\numrel{\\leq}{r_cs} \\frac{K}{N}\\sum_{k \\in [K]} L_k^2\\|\\D_k-\\D'_k\\|_F^2\\nonumber \\\\\n\t&\\qquad = \\mbb{E}\\lr{\\lra{C_{\\D_{1:K}} - C_{\\D'_{1:K}}}^2},\n\t\\end{align}\nwhere \\eqref{r_lpsh_h} follows from coordinate-wise Lipschitz continuity of $h$ and \\eqref{r_cs} follows from Cauchy-Schwartz inequality. 
Hence, using Slepian's Lemma~\\cite{massart2007concentration}, we get\n\t\\begin{align}\n\t\\mbb{E} \\bigg\\{ \\sup_{\\substack{\\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\tA_{\\D_{1:K}} \\bigg\\}\n\t&\\leq \\mbb{E} \\bigg\\{ \\sup_{\\substack{ \\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\tC_{\\D_{1:K}} \\bigg\\} \\nonumber \\\\\n\t& = \\sqrt{\\frac{K}{N}} \\bigg(\\sum_{k \\in [K]} L_k\\varepsilon_k \\mbb{E} \\big\\{\\|\\boldsymbol{\\zeta}^k\\|_F \\big\\}\\bigg)\\nonumber \\\\\n\t& =\\sqrt{\\frac{K}{N}} \\bigg(\\sum_{k \\in [K]} L_k\\varepsilon_k \\sqrt{m_kp_k}\\bigg).\n\t\\end{align}\nThus, we obtain\n$\n\t\\mbb{E} \\lr{\\sup_{\\substack{ \\mb{D}_k \\in \\overline{\\mathcal{B}}_{\\varepsilon_k}(\\mb{D}^0_k) \\\\ k \\in [K]}}\n\t\t\\lra{\\frac{1}{N} \\sum_{n \\in [N]} \\beta_nh_n(\\D_{1:K}) } }\n\t\\\\ \\leq 2\\sqrt{\\frac{K}{N}}\\lrp{\\sum_{k \\in [K]} L_k\\varepsilon_k \\sqrt{m_kp_k}}.\n$\n\\end{IEEEproof}\t\n\n\n\n\\begin{IEEEproof}[Proof of Lemma~\\ref{lem:phi_m_T1_lip}]\nWe expand $\\Delta \\phi_\\mb{y} \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}$ according to \\eqref{eq:delt_t} and bound each term of the sum separately. Looking at the first term, we get\n\t\\begin{align}\n\t& \\lra{ \\Delta \\phi_1 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t\\numrel{=}{r_exp_phi1} \\bigg| \\frac{1}{2}\\mb{x}^\\top\n\t\t{\\mb{D}^0 }^\\top\n\t\t\\bigg( \\sum_{k \\in [K]} \\mb{P}_{\\widetilde{\\mb{D}}_{1,\\mc{J}_1}}\n\t\t\\otimes \\dots \\otimes \\nonumber\\\\\n\t&\\qquad\n\t\t\\lrp{\\bP_{\\D'_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}\t\\otimes \\dots \\otimes \\mb{P}_{\\widetilde{\\mb{D}}_{K,\\mc{J}_K}}\\bigg)\n\t\t\\mb{D}^0\n\t\t\\mb{x} \\bigg| \\nonumber \\\\\n\t&\\numrel{\\leq}{r_D0_D0k} \\frac{1}{2} \\norm{\\mb{x}}_2^2\n\t\t\\bigg(\\prod_{k \\in [K]} \\norm{ \\mb{D}^0_{k,\\mc{J}_k} }_2^2 \\bigg)\n\t\t\\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{\\bP_{\\D^0_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}_2 \\nonumber\\\\\n\t&\\qquad \\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}}\n\t\t\\norm{\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}}_2 \\bigg)\\bigg)\n\t\t\\nonumber \\\\\n\t&\\numrel{\\leq}{r_p_rip1} M_x^2\n\t\t\\bigg( \\prod_{k \\in [K]}\\big(1+\\delta_{s_k}(\\mb{D}^0_k)\\big) \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]}(1 - \\delta_k)^{-1\/2}\\|\\D_k-\\D^0_k\\|_F \\bigg) ,\n\t\\end{align}\nwhere \\eqref{r_exp_phi1} follows from \\eqref{eq:delt_phi}, \\eqref{r_D0_D0k} follows from the fact that $\\norm{\\mb{D}^0_\\mc{J}}_2 = \\prod_{k \\in [K]} \\norm{ \\mb{D}^0_{k,\\mc{J}_k} }_2$, and \\eqref{r_p_rip1} follows from the definition of $\\mathsf{RIP}$, equation \\eqref{eq:PH_PHp}, and $\\big\\|\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}\\big\\|_2=1$. 
Following a similar approach and expanding the rest of the terms, we get\n\t\\begin{align*}\n\t&\\lra{ \\Delta \\phi_2 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} } \\nonumber\\\\\n\t&\\leq \\norm{\\mb{w}}_2 \\norm{\\mb{x}}_2\n\t\t\\bigg( \\prod_{k \\in [K]} \\norm{ \\mb{D}^0_{k,\\mc{J}_k} }_2^2 \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} \\norm{\\bP_{\\D^0_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}} }_2\n\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}}_2 \\bigg)\\bigg)\n\t\\nonumber \\\\\n\t&\\numrel{\\leq}{r_p_rip} 2M_w M_x\n\t\t\\bigg( \\prod_{k \\in [K]}\\big(1+\\delta_{s_k}(\\mb{D}^0_k)\\big)^{1\/2} \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} (1 - \\delta_k)^{-1\/2}\\|\\D_k-\\D^0_k\\|_F \\bigg), \\nonumber \\\\\n\t&\\lra{ \\Delta \\phi_3 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t\\leq \\frac{1}{2} \\norm{\\mb{w}}_2^2 \\nonumber\\\\\n\t& \\qquad \\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{\\bP_{\\D^0_{k,\\cJ_k}} - \\bP_{\\D_{k,\\cJ_k}}}_2\t\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{\\mb{P}_{\\wt{\\D}_{i,\\mc{J}_i}}}_2 \\bigg)\n\t\t\\bigg) \\nonumber \\\\\n\t&\\leq M_w^2 \\bigg( \\sum_{k \\in [K]} (1 - \\delta_k)^{-1\/2}\\|\\D_k-\\D^0_k\\|_F \\bigg),\\nonumber \\\\\n\t&\\lra{ \\Delta \\phi_4 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t= \\lambda \\norm{\\boldsymbol{\\sigma}_\\mc{J}}_2 \\norm{\\mb{x}}_2\n\t\t\\bigg( \\prod_{k \\in [K]} \\norm{\\mb{D}^0_{\\mc{J}_k}}_2 \\bigg)\n\t\t\\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{{\\D^0}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+}_2\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{ {\\wt{\\D}_{i,\\mc{J}_i}}^+}_2 \\bigg)\n\t\t \\bigg) \\nonumber \\\\\n\t&\\numrel{\\leq}{r_ps} 2\\lambda \\sqrt{s} M_x\n\t\t \\bigg( \\prod_{k \\in [K]}\\big(1+\\delta_{s_k}(\\mb{D}^0_k)\\big)^{1\/2} \\bigg) \\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} (1-\\delta_k)^{-1}\n\t\t \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t (1-\\delta_i)^{-1\/2} \\bigg) \\|\\D_k-\\D^0_k\\|_F \\bigg) ,\\nonumber \\\\\n\t& \\lra{ \\Delta \\phi_5 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t= \\lambda \\norm{\\boldsymbol{\\sigma}_\\mc{J}}_2 \\norm{\\mb{w}}_2 \\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{{\\D^0}_{k,\\cJ_k}^+ - \\D_{k,\\cJ_k}^+ }_2\t\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{ {\\wt{\\D}_{i,\\mc{J}_i}}^+}_2 \\bigg)\n\t\t\\bigg)\t\\nonumber \\\\\n\t&\\leq 2\\lambda \\sqrt{s}M_w \\nonumber\\\\\n\t&\\qquad \\bigg( \\sum_{k \\in [K]} (1-\\delta_k)^{-1}\n\t\t \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t (1-\\delta_i)^{-1\/2} \\bigg) \\|\\D_k-\\D^0_k\\|_F \\bigg) ,\\nonumber \\\\\n\t&\\lra{ \\Delta \\phi_6 \\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}} }\n\t\t= \\frac{\\lambda^2}{2} \\norm{\\boldsymbol{\\sigma}_\\mc{J}}_2^2 \\nonumber \\\\\n\t&\\qquad\n\t \t\\bigg( \\sum_{k \\in [K]}\n\t\t\\norm{\\bH_{\\D^0_{k,\\cJ_k}} - \\bH_{\\D_{k,\\cJ_k}} }_2\t\n\t\t\\bigg( \\prod_{\\substack{i\\in [K] \\\\ i \\neq k}} \\norm{\\mb{H}_{\\wt{\\D}_{i,\\mc{J}_i}} }_2 \\bigg)\\bigg)\t\n\t\t\\nonumber \\\\\n\t&\\numrel{\\leq}{r_h} \\lambda^2 s\n\t\t\\bigg( \\sum_{k \\in [K]} (1-\\delta_k)^{-\\frac{3}{2}}\n\t\t \\bigg( \\prod_{\\substack{i \\in [K] \\\\ i \\neq k}}\n\t\t (1-\\delta_i)^{-1} \\bigg) \\|\\D_k-\\D^0_k\\|_F\\bigg),\n\t\\end{align*}\nwhere \\eqref{r_ps} and \\eqref{r_h} follow from \\eqref{eq:pso_cond} and 
\\eqref{eq:PH_PHp}. Adding all the terms together, we get\n\t\\begin{align}\n\t&\\lra{\\Delta \\phi_\\mb{y}\\lrp{\\D_{1:K};\\D^0_{1:K}|\\boldsymbol{\\sigma}}} \\leq\n\t\t\\sum_{k \\in [K]} L_k \\|\\D_k-\\D^0_k\\|_F,\n\t\\end{align}\nwhere $L_k$ is defined in \\eqref{eq:lipsch_const_h}.\n\\end{IEEEproof}\t\n\\section*{Appendix C}\n\\begin{IEEEproof}[Proof of the coherence relation for KS dictionaries]\nTo prove \\eqref{eq:mu_s}, we define the index set $\\mathcal{A} = \\lr{(j_1,\\dots,j_K): (j_1,\\dots,j_K) \\not\\in \\mc{J}_1 \\times \\dots \\times \\mc{J}_K}$. We have\n\t\\begin{align}\n\t\\mu_s(\\mb{D})&= \\max_{|\\mc{J}|\\leq s} \\max_{j \\not\\in \\mc{J}}\\|\\mb{D}_\\mc{J}^\\top \\mb{d}_j\\|_1 \\nonumber \\\\\n\t&= \\max_{\\substack{|\\mc{J}_k|\\leq s_k\\\\ k \\in [K]}}\n\t\t\\max_{\\mathcal{A}}\n\t\t\\norm{\\lrp{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top}\\lrp{\\bigotimes \\mb{d}_{k,j_k}}}_1\\nonumber \\\\\n\t&= \\max_{\\substack{|\\mc{J}_k|\\leq s_k\\\\ k \\in [K]}}\n\t\t\\max_{\\mathcal{A}}\n\t\t\\norm{\\bigotimes \\mb{D}_{k,\\mc{J}_k}^\\top\\mb{d}_{k,j_k}}_1 \\nonumber\\\\\n\t&= \\max_{\\substack{|\\mc{J}_k|\\leq s_k\\\\ k \\in [K]}}\n\t\t\\max_{\\mathcal{A}}\n\t\t\\prod_{k \\in [K]} \\norm{\\mb{D}_{k,\\mc{J}_k}^\\top\\mb{d}_{k,j_k}}_1 \\nonumber\\\\\n\t&\\leq \\max_{k \\in [K]} \\mu_{s_k}(\\mb{D}_k)\n\t\t\\bigg( \\prod_{\\substack{i \\in [K], \\\\ i \\neq k}} \\lrp{ 1+\\mu_{s_i-1}(\\mb{D}_i)} \\bigg).\n\t\\end{align}\t\nThe last inequality holds because any tuple in $\\mathcal{A}$ has at least one index $j_k \\not\\in \\mc{J}_k$, for which $\\norm{\\mb{D}_{k,\\mc{J}_k}^\\top\\mb{d}_{k,j_k}}_1 \\leq \\mu_{s_k}(\\mb{D}_k)$, while for the remaining modes $j_i$ may lie in $\\mc{J}_i$ and thus $\\norm{\\mb{D}_{i,\\mc{J}_i}^\\top\\mb{d}_{i,j_i}}_1 \\leq 1+\\mu_{s_i-1}(\\mb{D}_i)$.\n\n\\end{IEEEproof}\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSolar system progenitors, from the deeply embedded sources to\nthe weak line T Tauri Stars (WTTSs), are sources of high\nenergy radiation (X-ray to UV); the total energy\nradiated in this range goes from $\\sim 0.02$~L$_{\\odot}$,\nmeasured in the very young\nsources through their X-ray radiation, to the 0.2 L$_{\\odot}$ radiated in the\nUV during the T Tauri phase (or Phase~TT) (Preibish 2004, G\\'omez de Castro 2008).\nLater on, in the Weak line T Tauri Phase, the energy released drops to\n$\\sim 10^{-3}$~L$_{\\odot}$ radiated in X-rays. Since TTSs are\nintrinsically cool stars (with $T_{\\rm eff}\\sim$ 6500-3600~K),\nsurrounded by cool accretion disks radiating at infrared\nwavelengths, the source\nof this energy must be sought in the release of magnetic energy.\n\nIt is well known that the mediation of magnetic fields in the\naccretion process is able to heat up the plasmas since a fraction of the\ngravitational energy lost during accretion is invested in field\namplification and dynamo action; thus, radiative losses are pushed\ntowards the high energy range. Unfortunately, in the early phases\n(ages $< 0.1$Myr) most of this radiation is reabsorbed by the\ndense circumstellar environment (Av$>3$) and only the hard X-ray\nradiation is able to escape from the system, providing direct\ninformation on the evolution of the accretion process. After 1\nMyr, extinction drops enough to make the engine accessible to UV\nwavelengths; current technologies allow carrying out high\nresolution spectroscopy in the UV range, which is also extremely\nrich in spectral tracers, so a single echelle spectrum can provide\ninformation on molecular, atomic and ionized gas (from singly\nionized gas to very high ionization species such as Fe~XII,\nFe~XVIII or Fe~XXI). Thus, UV spectroscopy is an extremely\nefficient tool to study solar system progenitors from 1 Myr on with the\ncurrent technology and these applications will be discussed in detail\nbelow. 
However, one can foresee a future when\nmicroarcsecond UV imaging will be available and\nstudies similar to those being run on the Sun will be feasible\nfrom 1 Myr old Suns all the way down into the main sequence while\nthe young planetary disk settles down and life begins to grow.\nThis review deals with precisely this: a description of our\ncurrent understanding of the evolution from the T Tauri phase to\nthe modern Sun, and a non-technologically biased, ambitious\nview of what we could learn from challenging new UV observatories.\nThis review has been written after the end of the first conference\nof the network for UV astronomy held in El Escorial in May 2007,\nwhere some challenging projects for new space observatories were\npresented; you should find references to some of them in this text.\n\n\\section{Physics to be understood I: the gravito-magnetic engine}\n\nDuring the phase TT, stars count on an energy source\nwhich is not available during the main sequence evolution: gravitational\nenergy from the infalling material. This extra energy is released either\nthrough shocks on the stellar surface or through the gravito-magnetic\ninteraction between the star and the disk.\n\nShocks release the kinetic energy of the infalling material into\nheating at the impact point. If matter infall occurs along the\nfield lines, all the gravitational energy is damped into heating\nand the gas may reach temperatures as high as $\\sim 10^6$K. The\ndominant output radiation is produced by the photoionized preshock\ninfalling gas radiating mainly in the UV range (G\\'omez de Castro\n\\& Lamzin 1999; Gullbring et al. 2000). As the density of the\ninfalling gas column is high ($n_e \\simeq 10^9-10^{12}$~cm$^{-3}$),\nthe thickness of the radiating column is expected to be negligible\ncompared with the stellar radius; thus, accretion shocks are\nobserved as {\\it hot spots} on the stellar surface. As such, they\nare expected to produce a rotationally modulated signal that has\nbeen detected in monitoring campaigns of some stars both in the optical\n(see e.g. Petrov et al. 2001;\nBouvier et al. 2003) and in the UV (G\\'omez de Castro \\& Fern\\'andez\n1996; G\\'omez de Castro \\& Franqueira 1997a). An important result\nof these campaigns is that only $\\sim\n50$\\% of the UV continuum excess is rotationally modulated. Thus,\na significant fraction of the UV excess is not produced by the\naccretion shocks even in sources where rotational modulation\nhas been detected. However, this excess decreases as the star\napproaches the main sequence (see Fig.~1).\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig1.ps}}\n\\caption{The (UV-V, V) colour -- magnitude diagram for the\nT Tauri stars observed with the IUE satellite in the Taurus region.\nThe crosses represent cool TTSs (spectral types later than\n$\\sim $ K3) and the open circles warm TTSs (spectral types\nearlier than $\\sim$ K3). The location of the main sequence is\nmarked by the spectral types. The stars closer to the main sequence\nare the WTTSs (from G\\'omez de Castro 1997).}\n\\end{figure}\n\n\nIn fact, the major source of high energy radiation is the dissipation\nof the magnetic and mechanical energy produced by the gravito-magnetic\nengine. 
A simple analogy can be made with a self-regulated hydraulic\nturbine: the potential energy of the gas falling from the disk into the\nstellar magnetic field drives the generation of electric currents\ndue to the Lorentz force that, in turn, create new field components\nthrough dynamo action. There are, however, a great number of uncertainties\nin the way the system self-regulates and also in the dependence of the\nengine details on initial conditions such as the effective gravity of\nthe star, the role of stellar radiation and magnetic field on the engine\nperformance and the role of the ionizing radiation produced by the engine on\nthe evolution of the mass reservoir, the disk.\n\nThe Sun itself provides important clues to understand the\nphysics of the gravito-magnetic engine and its evolution. At the\nbase of the Sun's convective layer, the tachocline marks the\nlocation of the shear layer between the rigid body rotation of the\nradiative core and the differentially rotating convective\nenvelope. The tachocline is significantly prolate; it is centered\nat 0.69R$_{\\odot}$ at the equator and 0.72R$_{\\odot}$ at latitude\n60$^o$ (Basu \\& Antia, 2003). The tachocline thickness is $\\sim\n0.04$R$_{\\odot}$. The angular velocity profile based on\nhelioseismic inversions shows that at a latitude of about 35$^o$,\nthe radial gradient changes sign, becoming negative for latitudes\n$> 35^o$. This latitude marks the limit of the two latitude belts\nwhere the overwhelming majority of sunspots occur. There are also\nsome indications of the meridional flow moving equatorwards below\nthis latitude and polewards above it (see Miesch 2005). The Solar\nwind is (magnetic)\nlatitude dependent during solar minimum;\nabove $\\sim 35^o$ it is fast (1000~km\/s) and\nthin, below it is slower (300~km\/s) and denser (see Fig.~2 from\nUlysses data). The current paradigm\nfor how the solar dynamo operates includes: (1) field amplification in a\nturbulent downflow ($\\alpha$ effect) that is pumped downward by\nconvection and accumulates in the overshoot region and the\ntachocline; (2) field amplification and organization into toroidal\nflux tubes and sheets by differential rotation in the tachocline;\n(3) magnetic instabilities (buoyancy) drive the field to the surface; and\n(4) the Coriolis force acting on the rising structures depends on the\nlatitude, producing a latitude-dependent emergence of bipolar magnetic\nstructures.\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig2.eps}}\n\\caption{{\\it ``Solar wind observations collected by the Ulysses spacecraft\nduring two separate polar orbits of the Sun, six years apart,\nat nearly opposite times in the solar cycle. Near solar minimum\n(left) activity is focused at low altitudes, high-speed solar wind\nprevails, and magnetic fields are dipolar. Near solar maximum (right),\nthe solar winds are slower and more chaotic, with fluctuating magnetic\nfields.''} (From NASA Solar Probe Web (solarprobe.gsfc.nasa.gov),\ncourtesy of Southwest Research Institute and the Ulysses\/SWOOPS team)\n}\n\\end{figure}\n\n\nGoing backwards in time, during the phase TT,\nthe problem becomes complicated by the\npresence of an additional ``tachocline'' or differentially rotating\nregion attached to the convective layer. 
This {\\it external\ntachocline} connects the star with the accretion disk, which\nrotates significantly faster than the stellar surface; rotation\nperiods during the TT phase are about 7-8 days ($\\Omega _* =\n0.8-0.9$ day$^{-1}$) while the Keplerian frequency is:\n$$\n\\Omega _k = 11.1 {\\rm day}^{-1} \\lgroup \\frac {M}{M_{\\odot}} \\rgroup ^{1\/2}\n\\lgroup \\frac {r}{3 R_{\\odot}} \\rgroup ^{-3}\n$$\nThe Keplerian disk corotation radius is at\n$$\nr_{\\rm co} = (7.2 - 6.9)R_{\\odot} \\lgroup \\frac {M}{M_{\\odot}} \\rgroup ^{1\/2}\n$$\nTo avoid this large shear, the magnetosphere will grow to balance the\ntoroidal component of the flux with the angular momentum of the infalling\nmatter (Ghosh \\& Lamb 1979); thus,\n$$\n\\frac{B_p B_t}{4 \\pi} 4\\pi r^2 \\Delta r \\simeq \\dot M r V_k\n$$\nwhere $B_p$ and $B_t$ are the poloidal and toroidal components of the\nfield respectively, $r$ is the magnetosphere radius, $\\Delta r$ is\nthe thickness of the shear layer, $\\dot M$ is the accretion rate and\n$V_k$ is the Keplerian velocity at the magnetosphere radius. For typical\nT~Tauri star parameters:\n$$\nr_{\\rm mag}= 4.4 R_{\\odot} \\gamma ^{2\/7} \\lgroup \\frac {B_*}{1kG}\n\\rgroup^{4\/7} \\lgroup \\frac {\\dot M}{10^{-8} M_{\\odot} {\\rm\nyr}^{-1}} \\rgroup^{-2\/7} \\lgroup \\frac {M_*}{M_{\\odot}}\n\\rgroup^{-1\/7}\n$$\nwhere $\\gamma ^{2\/7}$ is a factor of about unity ($\\gamma =\n(B_t\/B_p)(\\Delta r \/r)$, see Lamb, 1989). Notice that the main\nuncertainties in the physics, namely the ratio between the\ntoroidal and the poloidal components and the relative thickness of\nthe {\\it ``external tachocline''}, are enclosed in this factor.\nAs in the Solar\ninterior, the shear region is fed by turbulent, magnetized\nmaterial, though this comes from the accretion disk instead of\nthe convective layer. The turbulent disk dynamo is fed by the\nmagneto-rotational instability in the accretion disk. Shear\namplifies the field, producing a strong toroidal component; an\nexternal dynamo sets in. This toroidal field and\nthe associated magnetic pressure push the field lines outwards\nfrom the disk rotation axis, inflating and opening them in a {\\it\nbutterfly-like pattern} reminiscent of the helmet streamers in\nthe solar corona, so producing a current layer between the\nstellar and the disk dominated regions as displayed in Fig.~3.\nMagnetic field dissipation in the current layer produces high\nenergy radiation and particles. The magnetic link between the star\nand the disk is broken and reestablished continuously by magnetic\nreconnection. The opening angle of the current layer, as well as\nits extent, depends on the stellar and disk fields, the accretion\nrate and the ratio between the inner disk radius and the stellar\nrotation frequencies. 
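As a purely illustrative evaluation of these scalings (taking $\\gamma \\simeq 1$,\n$B_* = 1$~kG and $M_* = M_{\\odot}$), lowering the accretion rate from $10^{-8}$ to\n$10^{-9}$~M$_{\\odot}$yr$^{-1}$ expands the magnetosphere by a factor of $10^{2\/7} \\simeq 1.9$,\n$$\nr_{\\rm mag} \\simeq 4.4 R_{\\odot} \\times 10^{2\/7} \\simeq 8.5 R_{\\odot} ,\n$$\nwhich becomes comparable to the corotation radii quoted above; these numbers are\nonly indicative since $\\gamma$ and the field geometry are poorly constrained.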
Hot, pressure driven outflows are produced\nfrom the region closer to the rotation axis, while cool\ncentrifugally driven flows are produced by the disk; plasmoids are\nejected from the current layer, generating a third outflowing\ncomponent.\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig3.eps}}\n\\caption{The interaction between the stellar magnetic field and the\ndisk twists the stellar field lines due to the differential rotation.\nThe toroidal magnetic field generated out of the poloidal flux and\nthe associated pressure tends to push the field lines outwards,\ninflating them, and eventually breaking the magnetic link between\nthe star and the disk (boundary between regions I and II).\nThree basic regions can be defined:\nRegion I dominated by the stellar wind, Region II dominated by\nthe disk wind and Region III dominated by stellar magnetospheric\nphenomena. The dashed line traces the boundaries between these\nthree regions. The continuous lines indicate the topology\nof the field and the shadowed areas represent regions where\nmagnetic reconnection events are likely to occur, producing\nhigh energy radiation and particles (from G\\'omez de Castro 2004).}\n\\end{figure}\n\nDisk-star interaction has been investigated by means of numerical\nsimulations since the early works by Goodson et al. (1997) until the\nlatest results ({\\it e.g.} von Rekowski \\& Brandenburg 2006). They\nshow that the fundamental mechanism for disk wind formation is\nrobust; numerical simulations with different parameters (disk\/star\nfields) and initial conditions produce disk winds. Stellar winds\nare much more sensitive to the physical conditions and especially\nto the stellar field; compare the results of simulations assuming\nthat the stellar field is a magnetic dipole (von Rekowski \\&\nBrandenburg 2004) with those of simulations where the stellar\nfield is prescribed through the action of the stellar dynamo (von\nRekowski \\& Brandenburg, 2006). In fact, the characteristics of\nthe accretion flow and the winds (dominant driver, temperature,\nterminal velocity, density, variability) depend on the physical\nproperties of the system such as the degree of magnetization of\nthe disk, the characteristics of the disk dynamo and the stellar\nfield.\n\nThe bulk of the energy produced in this engine is released at UV\nand X-ray wavelengths, as in the Sun's atmosphere. In the very early\nepochs, when extinction is high ($A_V \\geq 3$), only the X-ray\nradiation from the engine is detected. Later on, about 1 Myr,\nextinction drops and the engine can be studied in the UV. Only in\nthe UV can the various components of the engine, as well as their\nevolution, be identified and studied, from the phase~TT\nall the way down into the main sequence.\n\n\\section{Physics to be understood II: the\nimpact of the engine on disk evolution}\n\nThough often neglected, the impact of the engine on the inner disk\nevolution is enormous. On the one hand, the engine adds a significant\npoloidal component in the inner disk, thus favouring gas motions\nperpendicular to the disk as shown by the numerical simulations;\non the other hand, the engine is a source of highly energetic\nradiation where part of the dissipation is produced at heights of\nsome few stellar radii above the disk in the inflating current\nlayer; this high latitude illumination favours energy absorption\nby the disk. 
Both together act to increase the disk scale height and\nthe absorption of the radiation, thereby producing the\nrarefaction of the disk\natmosphere and favouring disk evaporation close to the star.\nTo achieve the evaporation of a standard optically thin accretion disk,\nthe sound speed should be comparable to the Keplerian velocity; thus,\n$$\nT = \\frac{\\mu m_H}{\\gamma \\kappa} \\frac{G M_*}{r_{\\rm mag}}\n= 3.13 \\times 10^7 K \\lgroup \\frac {M_*}{M_{\\odot}}\n\\rgroup \\lgroup \\frac {r_{\\rm mag}}{4.4 R_{\\odot}}\n\\rgroup ^{-1}\n$$\nwhere $\\mu$ is the mean molecular weight, $\\gamma $ is the\npolytropic index and $\\kappa $ is the Boltzmann constant.\nHowever, this value relaxes in the presence of a poloidal\nfield, as expected in the disk-magnetosphere interface, so\n$$\nT = 3.13 \\times 10^7 K \\frac {\\beta}{1+\\beta}\n\\lgroup \\frac {M_*}{M_{\\odot}}\\rgroup\n\\lgroup \\frac {R_{\\rm mag}}{4.4 R_{\\odot}}\n\\rgroup ^{-1}\n$$\nwhere $\\beta$ is the ratio of thermal to magnetic pressure.\nThus, for highly magnetized environments, $T$ may drop\narbitrarily. For thin accretion disks\\footnote{See Frank et al 2002\nfor the standard prescription of the thin disk density and temperature\nas a function of the accretion rate, radius and stellar mass.} penetrated\nby the stellar dipolar field, $B_*$,\n\n\\begin{eqnarray}\n\\beta &=& \\frac {\\gamma \\kappa T\/ \\mu m_H}{B^2 \/4 \\pi \\rho} \\\\\n&=&\n4.77 \\lgroup \\frac {M_*}{M_{\\odot}}\\rgroup ^{7\/8}\n\\lgroup \\frac {\\dot M}{10^{-8}M_{\\odot}{\\rm yr}^{-1}}\\rgroup ^{17\/20} \\\\\n&\\times & \\lgroup \\frac {r}{4.4 R_{\\odot}}\\rgroup ^{-21\/8}\n\\lgroup \\frac {B_*}{1kG}\\rgroup ^{-2}\n\\end{eqnarray}\n\\noindent\nwhere $r$ is the disk radius, $B_*$ the stellar magnetic field and\n$\\dot M$ the accretion rate. Note that\n$\\beta $ drops to 0.02 for accretion rates of $10^{-9}$M$_{\\odot}$yr$^{-1}$.\n\n Another important phenomenon to be considered is that the disk\nis not unlocked from the engine, so disk material should be\nsubject to the propagation of Alfv\\'en waves, shear waves\nand global Alfv\\'en oscillations driven from the interface.\nIn summary, we might expect the inner rim of the disk to be hot,\nwith temperatures of about $10^4$K, well above the\ndust sublimation temperature.\n\nThe role of far-UV radiation fields and high energy particles in\nthe disk chemical equilibrium is now beginning to be understood.\nBergin et al. (2003) showed how strong Ly$\\alpha$ emission may\ncontribute to the observed enhancement of CN\/HCN in the disk. The\npenetration of UV photons coming from the engine in a dusty disk\ncould produce an important change in the chemical composition of\nthe gas, allowing the growth of large organic molecules. In this\ncontext, UV photons photodissociating organic molecules at\n$\\lambda > 1500$~\\AA\\ could play a key role in the chemistry of\nthe inner regions of the disk, while those photodissociating H$_2$\nand CO will control the chemistry of the external layers of the\ndisk directly exposed to the radiation from the central engine.\n\n\n\n\n\\section{Lessons learned from UV (spectroscopic) observations}\n\nThe first observations of pre-main sequence (PMS) stars in the UV\nwere carried out with the International Ultraviolet Explorer (IUE)\n(1979-1997). The observations showed that pre-main sequence stars\nhave UV fluxes exceeding those of main sequence stars by a factor\nof about 50. 
In fact, the UV excess decreases as the stars\napproach the main sequence, as shown in Fig.~1.\n\nUV radiation provides direct information on the interaction\nbetween the star and the disk. This includes all the various\ncomponents mentioned above: the shear layer, i.e. the {\\it external\ntachocline}, the\nwind, the enhanced magnetospheres, mass ejections from\nreconnecting loops, shocks between the various wind components\n(among themselves and also with the disk material) as well as\nthe inner regions of the disk. There is a recent review on UV\nobservations of pre-main sequence stars and young planetary disks\n(G\\'omez de Castro et al 2006) where a detailed accounting of the\nwork carried out since the IUE times is summarized. Thus, I should\nconcentrate on the main lessons learned from IUE and HST\nobservations\\footnote{Some observations have also been obtained\nwith FUSE but its small effective area has allowed observing\nonly the brightest of the TTSs and some Vega-like disks.}, which are the following:\n\n\\subsection{About the accretion flow}\n\nThe actual measurements of infalling gas in the UV are scarce. There\nare hints of accretion in the large extent of the red wings of the\nmain UV resonance lines (Herczeg et al 2005) or through the\ndetection of redshifted absorption components in the profiles\nof the most prominent emission lines.\nHowever, the only target for which there is clear spectroscopic\nevidence of accretion shocks (in the UV) is RY~Tau (see Fig.~4).\nTwo observations of the same star obtained in 1993 and 2001 show that\nthere is a variable redshifted component. From this single\nobservation three important properties are learnt:\n\\begin{enumerate}\n\\item As described above, UV radiation from accretion shocks\nis predicted to be produced in the preshock region on scale\nheights significantly smaller than the stellar radius. Thus, it is\nexpected that only matter falling onto the visible hemisphere can\nbe detected at UV wavelengths. The fact that the variable flux\ncomponent is redwards shifted supports these theoretical\nexpectations. Moreover, the broadening shows that infalling matter\nshould cover a significant fraction of the hemisphere to account\nfor the broad distribution of projected velocities in the\ninfalling gas.\n\n\\item The red wing extends to\nvelocities of 250~km\/s, which corresponds to the free-fall velocity\\footnote{\nRY Tau mass is 1.63~M$_{\\odot}$ and radius 2.4~R$_{\\odot}$ according to\nHartigan et al 1995} from 1.7 R$_*$, which is much smaller than\nthe fiducial values derived for the inner disk radius.\n\n\\item The UV excess is not only produced by accretion;\nthe wind also contributes to it. Thus accretion rates derived from\nthe UV excess, assuming that it is caused just by magnetospheric\ninfall, are overestimated.\n\n\\end{enumerate}\n\n\nIt also adds to our understanding of {\\it magnetospheric\naccretion}. Magnetospheric accretion was originally proposed to\nexplain the large broadening of the TTSs H$\\alpha $ lines\n(Muzerolle et al 1998), later detected also in the UV\nresonance lines of CIV, CIII (Ardila et al 2002, Herczeg et al 2005).\nTypical line widths are about\n200-300 km\/s, which exceed by far what is expected from the rotation\nvelocities of the TTSs ($10-20$ km\/s) even if the corotating\nmagnetosphere is postulated to extend to some 4-5 stellar radii.\nInfall adds a radial velocity component to the rotation of the\nradiating gas. 
As the free-fall velocity is\n$$\nv_{ff} \\simeq 315 {\\rm km~s} ^{-1} \\lgroup \\frac {M_*}{M_{\\odot}}\n\\rgroup ^{1\/2} \\lgroup \\frac {R_*}{R_{\\odot}} \\rgroup ^{-1\/2}\n$$\nthe observed\nprofile broadenings can be reproduced without difficulty. The\nobserved broadenings of the H$\\alpha$ or Mg~II lines do not vary\nsignificantly, requiring that, at least, the spatial average of the\naccretion flow is rather stable. Strong variations in the\naccretion flow, like those reported from the RY~Tau UV observations, should\nalso show up in the large scale magnetosphere tracers (i.e.\nH$\\alpha$ or Mg~II profiles).\n\n\\begin{figure}[]\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig4.eps}}\n\\caption{ SiIII] and CIII] UV lines observed in RY~Tau (from\nG\\'omez de Castro \\& Verdugo 2007); RSDP processed data are\nplotted with a thin line and the 3-pixel average profile with a\nthick line. The rest wavelength of the lines and the velocity of\nthe unresolved jet at $\\simeq -80$ km\/s (from G\\'omez de Castro \\&\nVerdugo 2001 and Hamann 1994) are marked with dashed lines. {\\it\nLeft panel}: Observations obtained on Dec. 31st, 1993 with the\nGHRS. {\\it Right panel}: Observations obtained on March 27th,\n2001. Both lines show an excess of flux in the red wing compared\nwith the 1993 observations; this excess is shaded in the figure.}\n\\end{figure}\n\n\n\n\\subsection{About the wind}\n\nThere is another possibility to broaden the line profiles: adding\na radial velocity component associated with the outflow. The\npresence of magnetic fields and the relevance of centrifugal\nlaunching lead to describing the velocity field in the\nwind by means of three components: an axial component (along the\nrotation axis), a radial expansion from the axis and the azimuthal\ntoroidal component (rotation around the axis); in Fig.~5 there\nis a representation of the three components for a warm centrifugal\nwind model (from G\\'omez de Castro \\& Ferro-Font\\'an 2005). A\nrapid radial expansion close to the star (to guarantee that the\nwind density and temperature are about the observed\nn$_e=10^{10}$cm$^{-3}$ and T$_e = 20-30 \\times 10^{3}$K values)\ncould produce similar effects on the profiles to those predicted\nby magnetospheric infall. Several attempts have been made to\nreproduce the wind profiles with cold winds from accretion disks\n(Ferro-Font\\'an \\& G\\'omez de Castro 2003), warm disk winds\n(G\\'omez de Castro \\& Ferro-Font\\'an, 2005), coronal winds from\naccretion disks (Ferreira \\& Casse, 2004) and winds driven\nby the star-disk interaction (G\\'omez de Castro \\& von Rekowski, 2008).\n\n\\begin{figure}[]\n \\includegraphics[width=18pc]{gomezdecastro_fig5.eps}\n\\caption{The basic kinematics of MHD centrifugal winds is outlined\nfrom the simple semiempirical model of centrifugally driven MHD\nwinds with thermal pressure of G\\'omez de Castro \\& Ferro-Font\\'an\n(2005). {\\it Left:} Velocity field in the flow.\n$V_r$ (solid) and $V_z$ (dashed) are the components of the\nvelocity along the $r$\nand $z$ axes, respectively. The toroidal component of the\nvelocity, $V_{t}$, is scaled with respect to the Keplerian\nvelocity of the disk at the radius from which the wind is ejected.\n {\\it Right:} Line profiles generated by the wind in a ring\nof gas at different heights along the\nz-axis. 
}\n\\end{figure}\n\nCold disk winds fail to reproduce the high temperatures observed.\nThe original wind temperature is as low as the disk one and\nheating has to come from photoionization by the central\nsource. However, this radiation is able to heat the gas only to\nmild temperatures of about 10$^4$K. Warm disk winds produce\nprofiles that are too narrow to reproduce all the observations;\nthis is because the vertical thermal pressure push forces the\ngrowth of the radial velocity component at heights where the plasma\nis already too cool. Finally, winds driven from the star-disk\ninteraction also produce narrower profiles than those observed in\nsome sources, as shown in Fig.~6. According to their UV forbidden line\nprofiles, TTSs can be classified into two groups: stars with\nbroadenings with a full width at half maximum of about 150~km~s$^{-1}$,\nwhich can be reproduced with the current models, and stars with\nextremely broad profiles ($> 250$km~s$^{-1}$). The source of the\nvery large broadenings has to be sought in other structures such as\nion belts or plasma rings like the one resolved in RW~Aur (see Sect.\n4.3).\n\n\\begin{figure}[]\n \\includegraphics[width=18pc]{gomezdecastro_fig6.eps}\n\\caption{{\\it Left:} Si~III] profiles of the TTSs observed\nwith HST; notice the very different line broadenings.\n{\\it Right:} Predicted Si~III] profiles for winds generated\nby means of the interaction between a stellar magnetosphere\nwith a stellar field of 1kG and an accretion disk undergoing an\n$\\alpha ^2 \\Omega$-dynamo effect (e.g. Krause \\& Raedler, 1980),\nfrom G\\'omez de Castro \\& von Rekowski (2008).\n}\n\\end{figure}\n\nFinally, UV observations have clearly proved that:\n\\begin{enumerate}\n\\item {\\it Warm winds are latitude dependent on scales\ncomparable to the stellar radius}. As an example, the Mg~II\nresonance doublet has been observed in a broad sample of 17 TTSs\n(adding IUE and HST samples); this is the largest sample of TTSs\nobserved in a single UV spectral line. These lines can be\ngenerically described as broad, asymmetric emission lines with\ntypical full widths at 10 \\% intensity of a few hundred km\/s. The\nbroad blueward shifted absorption component characteristic of\nmass-loss was detected in a few sources (Penston \\& Lago, 1983,\nImhoff \\& Appenzeller, 1989) but not in all of them; the degree of\nabsorption (the asymmetry of the line) varies from no absorption\nto full absorption of the bluewards shifted emission (see G\\'omez\nde Castro, 1997).\n\\item {\\it The collimated flow, the jet, radiates in the UV as\nwell as the bow-shock and the Herbig-Haro objects}. Basically\nall data have been obtained with the IUE satellite (see\nG\\'omez de Castro \\& Robles 2001 for a compilation).\nRecent observations obtained with the Hopkins Ultraviolet Telescope\n(HUT) have shown that it is still unclear how line\nradiation is excited, at least, in HH2; in particular\nO~VI is not detected as was expected in high\nexcitation Herbig-Haro objects, where line radiation is\npredicted to be produced in strong radiative shocks\nwhere the shock kinetic energy is damped into heating\n(Raymond et al 1997).\n\n\\end{enumerate}\n\n\\subsection{About the inner disk and ion belts}\n\nStrong continuum FUV emission (1300--1700 \\AA ) has\nbeen detected recently from some stars with bright molecular disks\nincluding GM~Aur, DM~Tau, and LkCa~15, together with inner disk\ngaps of a few AU (Bergin et al. 2004). 
This emission is likely due\nto energetic photoelectrons mixed into the molecular layer and\nindicates the existence of a very hot component in the\ninner disk.\n\nHigh-resolution HST\/STIS spectra have revealed, for the first\ntime, the rich UV molecular emission in CTTSs. H$_2$ fluorescence\nemission has now been studied in detail in the nearest CTTS,\nTW~Hya, and the richness of the spectrum is overwhelming: Herczeg\net al. (2002) detected 146 Lyman-band H$_2$ lines. The observed\nemission is likely produced in the inner accretion disk, as are\nthe infrared CO and H$_2$O lines. From these UV data, Herczeg et\nal. (2004) estimated that the warm disk surface has a column\ndensity of $N_{H_2} = 3.2 \\times 10^{18}$~cm$^{-2}$, temperature\nof $T=2500$~K, and filling factor of H$_2$ as seen from the source\nof the Ly$\\alpha$ emission of 0.25$\\pm$0.08. The observed spectrum\nshows that some ground electronic state H$_2$ levels with\nexcitation energies as large as 3.8 eV are pumped by Ly$\\alpha$.\nThese highly excited levels may be formed by dissociative\nrecombination of H$^+_3$, which in turn may be formed by reactions\ninvolving X-rays and UV photons from the star. The H$_2$ emission of DF~Tau and\nV836~Tau also seems to arise from the disk (Herczeg et\nal 2006).\n\nIn addition to this molecular component, there is increasing\nevidence of the existence of ion belts\/rings around some TTSs. An\nion belt has been detected around the TTS RW~Aur (G\\'omez de\nCastro \\& Verdugo, 2003). A corotation radius of 4.4~$R_*$ is\nderived and a $\\log T_e (K) \\simeq 4.7$ and $\\log n_e (cm^{-3}) =\n11.6$ are estimated. This was the first detection of such a\nstructure around a classical TTS. In addition, there are\nindications of a similar structure around AB~Dor, a weak line TTS\n(see Fig.~7). The structure is resolved, as in RW~Aur, because\nthere is an inner hole that allows separating the stellar\/wind\ncontribution from the belt. However, in a 5.7 hour time lapse the\ndouble peaked profile is lost, and the inner part of the profile\nis filled in again with emission (G\\'omez de Castro 2002).\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=17pc]{gomezdecastro_fig7.eps}}\n\\caption[]{SiIII] profiles of AB~Dor obtained with the HST\/GHRS\n(see G\\'omez de Castro 2002 for more details). In the bottom\npanel, the profile at phase ($\\phi$) 0.329 is overplotted\n(dashed line) on the profiles at $\\phi = 0.794$ (continuous line),\nfor comparison.}\n\\end{figure}\n\n\n\n\\subsection{About the interaction between the disk and the wind}\n\nAB~Dor, a very bright nearby 30~Myr old star, is the only young\nstar that has been well monitored in the UV for flares. Nine events were\ndetected during 10.63~hours of monitoring with HST\/GHRS!\nThe C~IV and Si~IV UV line profiles produced by most of the events\nare narrow and redshifted, indicating hot gas falling onto the\nstar during the flare. However, the strongest event produced a\nvery broad profile with a narrow, slightly blueshifted absorption.\nThis profile lasted a few kiloseconds and thus the broad wings are\nmost likely tracing the front shock of a CIR (G\\'omez de Castro\n2002). 
In the solar system, there are three very different types\nof ``flares'', which are sudden increases of the high energy\nradiation and particle flux: magnetic flares (magnetic\nreconnection events), corotating interaction regions or CIRs\n(shock fronts formed by the interaction between the slow and the\nfast component of the solar wind), and coronal mass ejections.\nThis classification also applies to TTSs and their circumstellar\nenvironments. High-resolution UV spectroscopic monitoring is\nrequired to disentangle the possible mechanisms for flares in\nproto-stellar systems and to study their impact on the evolution of\nyoung planetary disks as well as on embryonic planetary atmospheres.\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=18pc]{gomezdecastro_fig8.eps}}\n\\caption[]{The C~IV 1548~\\AA\\ profile of AB~Dor during a normal\nstellar flare (left) and a transient feature probably associated\nwith a CIR (right). Both events lasted several kiloseconds. The\nleft profile is typical of three events that occurred during the\nshort monitoring time, while the profile on the right was observed\nonly once. Note the presence of a narrow absorption and the very\nbroad line wings in the right panel profile (see G\\'omez de Castro\n2002 for more details). }\n\\end{figure}\n\n\n\n\n\\section{Summary: the key observables}\n\nIn brief, the radiation produced by the accretion engine\n(including magnetospheres, outflows, accretion, the inner disk\nand shocks between winds and young planetary disks) is released\nmainly in the UV. Separating the various contributions requires\neither very high spatial resolution or moderate time resolution.\n\nCurrent surveys show that although there are a few nearby TTSs\nand WTTSs sparsely distributed around the Sun (AB~Dor at 14.9~pc\nor TW~Hya at 56~pc), the nearest star forming complexes are\nconcentrated in a {\\it Star Formation Belt} (SFB) at 140~pc\naround the Sun which includes the Taurus, Auriga-Perseus, Ophiuchus,\nLupus and Chamaeleon molecular clouds and several thousands of\npre-main sequence stars forming in various environments (clustered\nas in Ophiuchus, sparse as in Taurus). Resolving spatial scales of\na tenth of the solar radius in the SFB would allow studying the\nconnection between the star and the outflow in full detail. For\nthis purpose, spatial resolutions of 3.3 micro arcseconds are\nrequired; thus, for a fiducial wavelength of 1500\\AA\\ the aperture\nmust be about 10~km. Such long baseline interferometry should be\ncarried out in space, and the Stellar Imager project (Carpenter\net al 2008) represents a first attempt at such an ambitious\nproject.\n\nHowever, the requirements for resolving the inner\ndisk structure, the disk wind and the plasmoid ejections from the\ncurrent layer between the stellar and the disk wind are less\nstringent. In such a\ncase, spatial resolutions of 1-0.5 milliarcseconds (mas) would be\nenough to map the SFB sources, thus requiring apertures of 20~m and\neffective areas about 10$^4$ times those of HST\/STIS; research in\nnew coatings and detectors in the UV field (Kappelmann \\&\nBarnstedt, 2007) as well as a clever optical design may account\nfor a factor of ten but, still, a larger, $\\sim 30$m aperture will\nbe required to get SNR$\\simeq 10$ in the C~IV line in reasonable\nexposure times (few hours). 
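These apertures follow directly from the diffraction limit; as a purely\nillustrative check (assuming a source in the SFB at 140~pc and the fiducial\nwavelength of 1500\\AA),\n$$\n\\theta \\simeq \\frac{0.1 R_{\\odot}}{140~{\\rm pc}} \\simeq 1.6 \\times 10^{-11}~{\\rm rad}\n\\simeq 3.3~\\mu{\\rm as}, \\qquad\nD \\simeq \\frac{\\lambda}{\\theta} \\simeq \\frac{1.5\\times 10^{-7}~{\\rm m}}{1.6 \\times 10^{-11}}\n\\simeq 10~{\\rm km},\n$$\nwhile the milliarcsecond requirements quoted above translate, in the same way,\ninto apertures of tens of metres.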
There is an ongoing project that\nsatisfies these requirements, the Fresnel Interferometer (see\nKoechlin et al 2008), and some projections on the expected\nperformance of the interferometer on the mapping of the engine are\nplotted in Fig.~9.\n\nBoth space interferometer projects are under study by their\nnational space agencies and, in case they succeed, they will\nbe available around 2030. Is there anything else to be done\n{\\it in the meantime}? The answer is definitely positive: time\nmapping will allow us to resolve the structures since the\nvariability time scales are not the same for all the\nphenomena and they do not produce the same imprint in the\nspectra (neither in temperature, density nor velocity).\nSome examples of the power of this technique have\nalready been shown in this contribution.\n\nHigh resolution spectroscopy (R$\\sim 50,000$) is enough to\ndiscriminate among the various components; thus, scaling with\nthe fluxes of the weakest H$_2$ lines (from Herczeg et al\n2002) detected with the HST\/STIS, a factor of 10 increase\nof the effective area with respect to HST\/STIS is required\nto reach most of the sources\nin the SFB. The COS instrument in HST will provide\na factor of 10 sensitivity increase with respect to STIS\nin the 1150-1700\\AA\\ range because of its optimized optical\ndesign (see Froening et al 2008); unfortunately,\nthe orbital constraints of HST do not favour monitoring\nprograms. High orbit missions like the WSO-UV (see Shustov et al\n2008) are better suited for this purpose.\n\nAn additional factor of 10\nwould be required to obtain SNR$\\sim 10$ in exposure times\nof a few minutes; such short time scales are necessary to\nmap variations on flare time scales as shown in\nFig.~8 for the pre-main sequence stars in the SFB.\nUnfortunately, spectroscopic monitoring of the flaring activity in the\nSFB will have to wait for future missions, with collecting\napertures of about 8-10~m, preferably located at L2.\n\n\n\n\\begin{figure*}\n\\centerline{\\includegraphics[width=26pc]{gomezdecastro_fig9.eps}}\n\\caption[]{Theoretical prediction of the Si~III] emissivity from\nnumerical simulations of star-disk interaction (G\\'omez de Castro\n\\& Rekowski, 2008). The stellar magnetosphere is assumed to be\ndipolar with a field strength at the surface of 1kG. The\nmagnetosphere interacts with the disk, which is under a moderate\n$\\alpha$-dynamo effect\\footnote{The magnetic field in the disk is\nassumed to be generated by a standard $\\alpha ^2 \\Omega $ dynamo\n(e.g. Krause \\& Raedler 1980) where $\\alpha$ is the mean-field\n$\\alpha$ effect and $\\Omega$ the angular velocity of the plasma.\nThe $\\alpha$ scaling is: $\\alpha = -0.1 \\frac{z}{z_0} \\frac {\\chi\n_{\\rm disk} (r, z)}{1+ V_A^2\/C_s^2}$ where $\\chi _{\\rm disk}$ is\nthe disk profile (see von Rekowski et al 2003), $z_0$ is the disk\nhalf-thickness, $V_A$ is the local Alfv\\'en velocity and $C_s$ is\nthe local sound speed.}. The inner disk wind is\nmagnetocentrifugally accelerated ({\\it top panel}).\nThe convolution of the theoretical prediction with the point\nspread function of the 30m Fresnel Interferometer\nFII has been carried out by Laurent Koechlin\nand Truswin Raksasataya.\nThe convolution is shown for three inclinations\n(0$^o$, 45$^o$ and 90$^o$) and three distances to the Earth\n(15~pc, 40~pc and 140~pc). 
Notice that the inner ring is resolved\neven for 140~pc.\n }\n\\end{figure*}\n\n\n\n\n\n\n\\acknowledgments\n\nThis work has been supported by the Ministry of Education of Spain\nthrough grant AYA2007-67726 and the Comunidad Aut\\'onoma de Madrid\nthrough grant CAM-S-0505\/ESP\/0237\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusion}\n\\label{sec:conclusion}\n\\vspace{-0.4em}\nWe hypothesised that explicitly modeling the internal structure of complex labels for morphological tagging improves the overall tagging accuracy over the baseline with monolithic tags.\nTo test this hypothesis, we experimented with three approaches to model composite morphological tags in a neural sequence tagging framework.\nExperimental results on 49 languages demonstrated the advantage of modeling morphological labels as sequences of category values, whereas the superiority of this model is especially pronounced on smaller datasets.\nFurthermore, we showed that, in contrast to baselines, our models are capable of predicting labels that were not seen during training.\n\n\\section{Analysis and Discussion}\n\\label{sec:discussion}\n\n\n\\paragraph{OOV label accuracy}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.95\\columnwidth]{fig\/oov_labels_seq_crop}\n\\caption{OOV label accuracies of the \\textsc{Seq} model.}\n\\label{fig:oov_labels_seq}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\columnwidth]{fig\/category_errors_crop}\n\\caption{Average error rates of distinct morphological categories for \\textsc{Seq} and \\textsc{Mc} models.}\n\\label{fig:cat_errors}\n\\end{figure}\n\n\nOur models are able to predict labels that were not seen in the training data. Figure~\\ref{fig:oov_labels_seq} presents the accuracy of test tokens with OOV labels obtained with our best performing \\textsc{Seq} model plotted against the number of OOV label types. The datasets with zero accuracy are omitted. The main observation is that although the OOV label accuracy is zero for some languages, it is above zero on ca. half of the datasets---a result that would be impossible with \\textsc{MarMoT} or \\textsc{Mc} baselines.\n\n\n\n\n\n\n\\paragraph{Error Analysis}\nFigure~\\ref{fig:cat_errors} shows the largest error rates for distinct morphological categories for both \\textsc{Seq} and \\textsc{Mc} models averaged over all languages. \nWe observe that the error patterns are similar for both models but the error rates of the \\textsc{Seq} model are consistently lower as expected. \n\n\\iffalse\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\columnwidth]{fig\/feature_confusion_crop}\n\\caption{Most common category-value confusion patterns.}\n\\label{fig:feature_confusion}\n\\end{figure}\n\nTo better understand the nature of these errors, we looked at the confusion patterns of category-value pairs. Ten most common confusion patterns over all languages are shown on Figure~\\ref{fig:feature_confusion}. For each confusion pattern, we plot both its average occurrence count together with the average proportion with respect to the first category-value pair in the pattern. We only show confusion patterns for which the average proportion is at least $0.7$. \nThe most common source of errors is predicting a value for a category that has not been annotated in the test set, e.g. predicting a \\emph{case} value for a noun which does not have an annotated \\emph{case} attribute. 
Since the error patterns are similar for both models, we conclude that both \\textsc{Seq} and \\textsc{Mc} models learn roughly the same classification function but due to the flexibility in composing the label, the \\textsc{Seq} model makes fewer errors. \n\\fi\n\n\\paragraph{Stability Analysis}\nTo assess the stability of our predictions, we picked five languages from different families and with different corpus sizes, and performed five independent train\/test runs for each language.\nTable~\\ref{tbl:stability} summarises the results of these experiments and demonstrates a reasonably small variance for all languages. \nFor all languages, except for Finnish, the worst accuracy of the \\textsc{Seq} model was better than the best accuracy of the \\textsc{Mc} model, confirming our results that in those languages, the \\textsc{Seq} model is consistently better than the \\textsc{Mc} baseline.\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lcc}\n\\toprule\nDataset & \\textsc{Seq} & \\textsc{Mc} \\\\\n\\midrule\nFinnish & 93.24 $\\pm$ 0.12 & 93.20 $\\pm$ 0.07 \\\\\nGerman & 88.45 $\\pm$ 0.21 & 87.74 $\\pm$ 0.17 \\\\\nHungarian & 84.51 $\\pm$ 0.54 & 80.68 $\\pm$ 0.48 \\\\\nRussian & 91.08 $\\pm$ 0.18 & 90.13 $\\pm$ 0.15 \\\\\nTurkish & 90.29 $\\pm$ 0.24 & 89.16 $\\pm$ 0.27 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean accuracy with standard deviation over five independent runs for \\textsc{Seq} and \\textsc{Mc} models.}\n\\label{tbl:stability}\n\\vspace{-1em}\n\\end{table}\n\n\\paragraph{Hyperparameter Tuning}\nIt is possible that the hyperparameters tuned on Finnish are not optimal for other languages and thus, tuning hyperparameters for each language individually would lead to different conclusions than currently drawn.\nTo shed some light on this issue, we tuned hyperparameters for the \\textsc{Seq} and \\textsc{Mc} models on the same subset of five languages.\nWe first independently optimised the dropout rates on word embeddings, encoder's LSTM inputs and outputs, as well as the number of LSTM layers.\nWe then performed a grid search to find the optimal initial learning rate, the learning rate decay factor and the decay step.\nValue ranges for the tuned parameters are given in Table~\\ref{tbl:tuning_grid}.\n\n\\begin{table}[h]\n\\centering\n\\small\n\\begin{tabular}{lc}\n\\toprule\nParameter & Values \\\\\n\\midrule\nWord embedding dropout & $\\{0, 0.1, \\dots, 0.5\\}$ \\\\\nLSTM input dropout & $\\{0, 0.1, \\dots, 0.5\\}$ \\\\\nLSTM output dropout & $\\{0, 0.1, \\dots, 0.5\\}$ \\\\\nNumber of LSTM layers & $\\{1, 2\\}$ \\\\\n\\midrule\nInitial learning rate & $\\{0.01, 0.1, 1, 2\\}$ \\\\\nLearning rate decay factor & $\\{0.97, 0.98, 0.99, 1\\}$ \\\\\nDecay step & $\\{1250, 2500, 5000\\}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{The grid values for hyperparameter tuning.}\n\\label{tbl:tuning_grid}\n\\vspace{-1em}\n\\end{table}\n\n\nTable~\\ref{tbl:tuning} reports accuracies for the tuned models compared to the mean accuracies reported in Table~\\ref{tbl:stability}.\nAs expected, both tuned models demonstrate superior performance on all languages, except for German with the \\textsc{Seq} model.\nHyperparameter tuning has a greater overall effect on the \\textsc{Mc} model, which suggests that it is more sensitive to the choice of parameters than the \\textsc{Seq} model.\nStill, the tuned \\textsc{Seq} model performs better than or at least as well as the \\textsc{Mc} model on all languages.\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lcc|cc}\n\\toprule\nDataset 
& \\textsc{Seq} & Gain & \\textsc{Mc} & Gain \\\\ \n\\midrule\nFinnish & 93.44 & $+0.20$ & 93.43 & $+0.23$ \\\\\nGerman & 88.35 & $-0.10$ & 88.14 & $+0.40$ \\\\\nHungarian & 85.56 & $+1.05$ & 82.29 & $+1.61$ \\\\\nRussian & 91.44 & $+0.36$ & 90.74 & $+0.61$ \\\\\nTurkish & 90.56 & $+0.27$ & 89.32 & $+0.16$ \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Accuracies of the tuned \\textsc{Seq} and \\textsc{Mc} models compared to the mean accuracies in Table~\\ref{tbl:stability}.}\n\\label{tbl:tuning}\n\\end{table}\n\n\\paragraph{Comparison with Previous Work}\nSince UD datasets have been in rapid development and different UD versions do not match, direct comparison of our results to previously published results is difficult.\nStill, we show the results taken from \\citet{heigold2017}, which were obtained on UDv1.3, to provide a very rough comparison.\nIn addition, we compare our \\textsc{Seq} model with a neural tagger presented by \\citet{Dozat2017}, which is similar to our \\textsc{Mc} model, but employs a more sophisticated encoder.\nWe train this model on UDv2.1 on the same set of languages used by \\citet{heigold2017}.\n\nTable~\\ref{tbl:result_comparison} reports evaluation results for the three models.\nThe \\textsc{Seq} model and Dozat's tagger demonstrate comparable performance.\nThis suggests that the \\textsc{Seq} model can be further improved by adopting a more advanced encoder from \\citet{Dozat2017}.\n\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{lcc|c}\n\\toprule\nDataset & \\textsc{Seq} & Dozat & Heigold \\\\\n\\midrule\nArabic & \\textbf{93.84} & 92.85 & 93.78 \\\\\nBulgarian & 97.04 & \\textbf{97.25} & 95.14 \\\\\nCzech & \\bf 95.39 & 95.22 & 96.32 \\\\\nEnglish & 94.80 & \\textbf{94.81} & 93.32 \\\\\nEstonian & 93.30 & \\bf 93.90 & 94.25 \\\\\nFinnish & 93.41 & \\textbf{93.73} & 93.52 \\\\\nFrench & \\textbf{96.39} & 95.90 & 94.91 \\\\\nHindi & 91.75 & \\textbf{92.36} & 90.84 \\\\\nHungarian & \\textbf{84.12} & 82.84 & 77.59 \\\\\nRomanian & 97.16 & \\textbf{97.20} & 94.12 \\\\\nRussian-SynTagRus & \\textbf{96.67} & 96.20 & 96.45 \\\\\nTurkish & \\textbf{90.70} & 90.22 & 89.12 \\\\\n\\midrule\nAverage & \\bf 93.71 & 93.54 & 92.45 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Accuracies for the \\textsc{SEQ} model, \\citet{Dozat2017} and \\citet{heigold2017}.}\n\\label{tbl:result_comparison}\n\\vspace{-1em}\n\\end{table}\n\n\n\\iffalse\n\\paragraph{Runtime}\nWe perform runtime analysis on Nvidia Tesla P100 graphics processor.\nIn the training phase, the \\textsc{Seq} model processes 43 sentences per second on average.\nAlthough it is ca 4.5 times slower than the \\textsc{Mc} model\\footnote{Note that the batch size is 5 for \\textsc{Seq} and 20 for \\textsc{Mc} model.}, the overall training time is still reasonable, ranging from few hours for smaller corpora, such as Turkish, to several days for the largest corpora, such as Czech.\nThe inference speed is similar for both models, with the \\textsc{Seq} model running at a rate of 460 sentences per second on average with a batch size of 50.\n\n\n \n\n\n\\fi\n\\section{Experimental Setup}\n\\label{sec:experiments}\n\n\\begin{table*}\n\\centering\n\\ssmall\n\\renewcommand{\\arraystretch}{0.98}\n\\tabcolsep=0.11cm\n\\input{tbl_corpus_stats}\n\\caption{\\small Descriptive statistics for all UDv2.1 datasets. 
For training sets we report the number of word tokens and types, the average (Avg) and maximum (Max) tags per word type, the proportion of word types for which pre-trained embeddings were available (\\% Emb) and the size of the morphological tagset (\\# Tags). For the test sets, we also give the total number of tokens and types, the proportion of OOV words (\\% OOV) and the number of OOV tag tokens and types.}\n\\label{tbl:corpus_stats}\n\\end{table*}\n\nThis section details the experimental setup. We describe the data, then we introduce the baseline models and finally we report the hyperparameters of the models.\n\n\\subsection{Data}\nWe run experiments on the Universal Dependencies version 2.1~\\citep{nivre2017}.\nWe excluded corpora that did not include train\/dev\/test split, word form information\\footnote{French-FTB and Arabic-NYUAD}, or morphological features\\footnote{Japanese}. Additionally, we excluded corpora for which pre-trained word embeddings were not available.\\footnote{Ancient Greek and Coptic}\nThe resulting dataset contains 69 corpora covering 49 different languages.\nTagsets were constructed by concatenating the POS and morphological annotations of the treebanks.\nTable~\\ref{tbl:corpus_stats} gives corpus statistics. We present type and token counts for both training and test sets. For training set, we also show the average and maximum number of tags per word type and the size of the morphological tagset. \nFor the test set, we report the proportion of out-of-vocabulary (OOV) words as well as the number of OOV tag tokens and types.\n\nIn the encoder, we use fastText word embeddings \\citep{bojanowski2017} pre-trained on Wikipedia.\\footnote{\\ssmall \\url{https:\/\/github.com\/facebookresearch\/fastText}}\nAlthough these embeddings are uncased, our model still captures case information by means of character-level embeddings.\nIn Table~\\ref{tbl:corpus_stats}, we also report for each language the proportion of word types for which the pre-trained embeddings are available.\n\n\n\\subsection{Baseline Models}\n\nWe use two models as baseline: the CRF-based \\textsc{MarMoT} \\citep{mueller2013} and the regular neural multiclass classifier.\n\n\\paragraph{MarMoT (\\textsc{MMT})}\n\\textsc{MarMoT}\\footnote{\\url{http:\/\/cistern.cis.lmu.de\/marmot\/}} is a CRF-based morphological tagger which has been shown to achieve competitive performance across several languages \\citep{mueller2013}.\n\\textsc{MarMoT} approximates the CRF objective using a pruning strategy which enables training higher-order models and handling large tagsets.\nIn particular, the tagger first predicts the POS part of the label and based on that, constrains the set of possible morphological labels.\nFollowing the results of \\citet{mueller2013}, we train second-order models. 
We tuned the regularization type and weight on German development set and based on that, we use L2 regularization with weight 0.01 in all our experiments.\n\n\\paragraph{Neural Multiclass classifier (\\textsc{Mc})}\nAs the second baseline, we employ the standard multiclass classifier used by both \\citet{heigold2017} and \\citet{yu2017}.\nThe proposed model consists of an LSTM-based encoder, identical to the one described above in section~\\ref{sec:encoder}, and a softmax classifier over the full tagset.\nThe tagset sizes for each corpora are shown in Table~\\ref{tbl:corpus_stats}.\nDuring preliminary experiments, we also added CRF layer on top of softmax, but as this made the decoding process considerably slower without any visible improvement in accuracy, we did not adopt CRF decoding here.\nThe multiclass model is shown in Figure~\\ref{fig:neural_models} (d).\n\nThe inherent limitation of both baseline models is their inability to predict tags that are not present in the training corpus. Although the number of such tags in our data set is not large, it is nevertheless non-zero for most languages.\n\n\n\\subsection{Training and Parametrisation}\\label{sbsec:training}\nSince tuning model hyperparameters for each of the 69 datasets individually is computationally demanding, \nwe optimise parameters on Finnish---a morphologically complex language with a reasonable dataset size---and apply the resulting values to other languages.\nWe first tuned the character embedding size and character-LSTM hidden layer size of the encoder on the \\textsc{Seq} model and reused the obtained values with all other models.\nWe tuned the batch size, the learning rate and the decay factor for the \\textsc{Seq} and \\textsc{Mc} models separately since these models are architecturally quite different.\nFor the \\textsc{McMl} and \\textsc{HMcMl} models we reuse the values obtained for the \\textsc{Mc} model.\nThe remaining hyperparameter values are fixed.\nTable~\\ref{tbl:parameters} lists the hyperparameters for all models.\n\nWe train all neural models using stochastic gradient descent for up to 400 epochs and stop early if there has been no improvement on development set within 50 epochs.\nFor all models except \\textsc{Seq}, we decay the learning rate by a factor of 0.98 after every 2500 batch updates.\nWe initialise biases with zeros and parameter matrices using Xavier uniform initialiser~\\citep{Glorot2010}.\n\nWords in training sets with no pre-trained embeddings are initialised with random embeddings.\nAt test time, words with no pre-trained embedding are assigned a special UNK-embedding.\nWe train the UNK-embedding by randomly substituting the singletons in a batch with the UNK-embedding with a probability of 0.5.\n\n\\begin{table}[t]\n\\centering\n\\footnotesize\n\\tabcolsep=0.11cm\n\\begin{tabular}{lrr}\n\\toprule\n & \\textsc{Seq} & \\textsc{Other NN} \\\\\n\\midrule\n\\textbf{Encoder} & & \\\\\nWord embedding size & 300 & 300 \\\\\nCharacter embedding size & 100 & 100 \\\\ \nCharacter LSTM hidden layer size & 150 & 150 \\\\\nWord embedding dropout & 0.5 & 0.5 \\\\\nLSTM layers & 1 & 1 \\\\\nLSTM hidden state size & 400 & 400 \\\\\nLSTM input dropout & 0.5 & 0.5 \\\\\nLSTM state dropout & 0.3 & 0.3 \\\\\nLSTM output dropout & 0.5 & 0.5 \\\\\n\\midrule\n\\textbf{Decoder} & & \\\\\nLSTM hidden state size & 800 & 800 \\\\\nTag embedding size & 150 & -- \\\\\n\\midrule\n\\textbf{Training} & & \\\\\nInitial learning rate & 1.0 & 1.0 \\\\\nBatch size & 5 & 20 \\\\\nMaximum epochs & 400 & 400 \\\\\nLearning 
rate decay factor & -- & 0.98 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Hyperparameters for neural models.}\n\\label{tbl:parameters}\n\\end{table}\n\n\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\n\n\nThe common approach to morphological tagging combines the set of word's morphological features into a single monolithic tag and then, similar to POS tagging, employs multiclass sequence classification models such as CRFs \\citep{mueller2013} or recurrent neural networks \\citep{labeau2015,heigold2017}.\nThis approach, however, has a number of limitations.\nFirstly, it ignores the intrinsic compositional structure of the labels and treats two labels that differ only in the value of a single morphological category as completely independent; compare for instance labels \\textsc{[POS=noun,Case=Nom,Num=Sg]} and \\textsc{[POS=noun,Case=Nom,Num=Pl]} that only differ in the value of the \\textsc{Num} category.\nSecondly, it introduces a data sparsity issue as the less frequent labels can have only few occurrences in the training data.\nThirdly, it excludes the ability to predict labels not present in the training set which can be an issue for languages such as Turkish where the number of morphological tags is theoretically unlimited \\citep{yuret2006}.\n\nTo address these problems we propose to treat morphological tags as composite labels and explicitly model their internal structure. \nWe hypothesise that by doing that, we are able to alleviate the sparsity problems, especially for languages with very large tagsets such as Turkish, Czech or Finnish, and at the same time also improve the accuracy over a baseline using monolithic labels.\nWe explore three different neural architectures to model the compositionality of morphological labels.\nIn the first architecture, we model all morphological categories (including POS tag) as independent multiclass classifiers conditioned on the same contextual word representation.\nThe second architecture organises these multiclass classifiers into a hierarchy---the POS tag is predicted first and the values of morphological categories are predicted conditioned on the value of the predicted POS.\nThe third architecture models the label as a sequence of morphological category-value pairs.\nAll our models share the same neural encoder architecture based on bidirectional LSTMs to construct contextual representations for words \\citep{lample2016}.\n\nWe evaluate all our models on 49 UD version 2.1 languages.\nExperimental results show that our sequential model outperforms other neural counterparts establishing state-of-the-art results in morphological tagging for most languages.\nWe also confirm that all neural models perform significantly better than a competitive CRF baseline.\nIn short, our contributions can be summarised as follows:\n\\begin{enumerate}[label=\\arabic*),topsep=0em,noitemsep]\n\\item We propose to model the compositional internal structure of complex morphological labels for morphological tagging in a neural sequence tagging framework;\n\\item We explore several neural architectures for modeling the composite morphological labels;\n\\item We find that tag representation based on the sequence learning model achieves state-of-the art performance on many languages.\n\\item We present state-of-the-art morphological tagging results on 49 languages on the UDv2.1 corpora.\n\\end{enumerate}\n\n\n\\section*{Acknowledgments}\nThis work was supported by the Estonian Research Council (grants no. 
2056, 1226 and IUT34-4).\n\n\n\n\\section{Neural Models}\n\\label{sec:models}\n\n\\begin{figure*}[]\n\\centering\n\\includegraphics[width=0.85\\textwidth]{fig\/neural_models_crop.pdf}\n\\caption{Neural architectures for modeling complex morphological labels: a) Multiclass Multilabel model (\\textsc{McMl}), b) Hierarchical Multiclass Multilabel model (\\textsc{HMcMl}), c) Sequence model (\\textsc{Seq}) and d) Multiclass baseline model (\\textsc{Mc}). \nCorrect labels are shown with a green border, incorrect labels have a red dotted border.}\n\\label{fig:neural_models}\n\\end{figure*}\n\nWe explore three different neural architectures for modeling morphological labels: multiclass multi\\-label model that predicts each category value separately, hierarchical multiclass multilabel model where the values of morphological features depend on the value of the POS, and a sequence model that generates morphological labels as sequences of feature-value pairs.\n\n\n\\subsection{Notation}\nGiven a sentence $w_1, \\dots, w_n$ consisting of $n$ words, we want to predict the sequence $t_1, \\dots, t_n$ of morphological labels for that sentence.\nEach label $t_i = \\{f_{i0}, f_{i1}, \\dots, f_{im}\\}$ consists of a POS tag ($f_{i0} \\equiv \\textsc{POS}$) and a sequence of $m$ category values. \nFor each word $w_i$, the encoder computes a contextual vector $h_i$, which captures information about the word and its left and right context.\n\n\\subsection{Decoder Models}\n\n\\paragraph{Multiclass Multilabel model (\\textsc{McMl})}\nThis model formulates the morphological tagging as a multiclass multilabel classification problem. \nFor each morphological category, a separate multiclass classifier is trained to predict the value of that category (Figure~\\ref{fig:neural_models} (a)).\nBecause not all categories are always present \nfor each POS (e.g., a noun does not have a \\textit{tense} category), we extend the morphological label of each word by adding all features that are missing from the annotated label and assign them a special value that marks the category as ``off''.\nFormally, the model can be described as:\n\\begin{equation}\np(t|h)_{\\textsc{McMl}} = \\prod_{j=0} ^ M p(f_j|h),\n\\end{equation}\nwhere $M$ is the total number of morphological categories (such as case, number, tense, etc.)\nobserved in the training corpus.\nThe probability of each feature value is computed with a softmax function:\n\\begin{equation*}\np(f_j|h)_{\\textsc{McMl}} = \\text{softmax}(W_j h + b_j),\n\\end{equation*}\nwhere $W_j$ and $b_j$ are the parameter matrix and bias vector for the $j$th morphological feature ($ j=0, \\dots, M$).\nThe final morphological label for a word is obtained by concatenating predictions for individual categories while filtering out off-valued categories.\n\n\\paragraph{Hierarchical Multiclass Multilabel model (\\textsc{HMcMl})}\nThis is a hierarchical version of the \\textsc{McMl} architecture that models the values of morphological categories as directly dependent on the POS tag (Figure~\\ref{fig:neural_models} (b)):\n\\begin{equation}\np(t|h)_{\\textsc{HMcMl}} = p(\\textsc{pos}|h)\\prod_{j=1} ^ M p(f_j|\\textsc{pos},h)\n\\end{equation}\nThe probability of the POS is computed from the context vector $h$ using the respective parameters:\n\\begin{equation*}\np(\\textsc{pos}|h) = \\text{softmax}(W_{\\textsc{pos}} h + b_{\\textsc{pos}})\n\\end{equation*}\nThe POS-dependent context vector $l$ is obtained by concatenating the context vector $h$ with the unnormalised log probabilities of the 
POS:\n\\begin{equation*}\nl = [h; W_{\\textsc{pos}} h + b_{\\textsc{pos}}]\n\\end{equation*}\nThe probabilities of the morphological features are computed using the POS-dependent context vector:\n\\begin{equation*}\np(f_j|\\textsc{pos}, h) = \\text{softmax}(W_j l + b_j) \\hspace{2mm} j=1, \\dots, M\n\\end{equation*}\n\n\\paragraph{Sequence model (\\textsc{Seq})}\nThe \\textsc{Seq} model predicts complex morphological labels as sequences of category values. This approach is inspired from neural sequence-to-sequence models commonly used for machine translation \\citep{Cho2014a,Sutskever2014}.\nFor each word in a sentence, the \ndecoder uses a unidirectional LSTM network (Figure~\\ref{fig:neural_models} (c)) to generate a sequence of morphological category-value pairs based on the context vector $h$ \nand the previous predictions.\nThe probability of a morphological label $t$ is under this model:\n\\begin{equation}\np(t|h)_{\\textsc{Seq}} = \\prod_{j=0} ^ m p(f_j|f_0, \\dots, f_{j-1}, h)\n\\end{equation}\n\nDecoding starts by passing the start-of-sequence symbol as input.\nAt each time step, the decoder computes the label context vector $g_j$ based on the previously predicted category value, previous label context vector and the word's context vector.\n\\begin{equation*}\ng_j = \\text{LSTM}([f_{j-1}; h], g_{j-1})\n\\end{equation*}\nThe probability of each morphological feature-value pair is then computed with a softmax.\n\\begin{equation*}\np(f_j|g_j)_{\\textsc{Seq}} = \\text{softmax}(W_{\\textsc{seq}} g_j + b_{\\textsc{seq}})\n\\end{equation*}\nAt training time, we feed correct labels as inputs while at inference time, we greedily emit the best prediction from the set of all possible feature-value pairs.\nThe decoding terminates once the end-of-sequence symbol is produced.\n\n\n\\subsection{Encoder}\n\\label{sec:encoder}\nWe adopt a standard sequence tagging encoder architecture for all our models.\nIt consists of a bidirectional LSTM network that maps words in a sentence into context vectors using character and word-level embeddings.\nCharacter-level word embeddings are constructed with a bidirectional LSTM network and they capture useful information about words' morphology and shape.\nWord level embeddings are initialised with pre-trained embeddings and fine-tuned during training.\nThe character and word-level embeddings are concatenated and passed as inputs to the bidirectional LSTM encoder.\nThe resulting hidden states $h_i$ capture contextual information for each word in a sentence.\nSimilar encoder architectures have been applied recently with notable success to morphological tagging \\citep{heigold2017,yu2017} as well as several other sequence tagging tasks \\citep{lample2016,Chiu2016,ling2015}.\n\n\\section{Related Work}\n\\label{sec:related}\n\n\nMost previous work on modeling the internal structure of complex morphological labels has occurred in the context of morphological disambiguation---a task where the goal is to select the correct analysis from a limited set of candidates provided by a morphological analyser. The most common strategy to cope with a large number of complex labels has been to predict all morphological features of a word using several independent classifiers whose predictions are later combined using some scoring mechanism \\citep{hajic1998,hajic2000,smith2005,yuret2006,zalmout2017,kirov2017}. 
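\n\nFor concreteness, the shared-encoder variant of this strategy, which corresponds to our \textsc{McMl} decoder from Section~\ref{sec:models}, can be sketched as follows (a hypothetical PyTorch-style illustration; the class and variable names are ours and this is not the exact implementation used in our experiments):\n\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass McMlDecoder(nn.Module):\n    # One softmax classifier per morphological category, all sharing\n    # the contextual word representation h from the encoder.\n    def __init__(self, hidden_size, category_sizes):\n        # category_sizes: number of values per category (index 0 = POS);\n        # each category also has an extra 'off' value.\n        super().__init__()\n        self.heads = nn.ModuleList(\n            [nn.Linear(hidden_size, n) for n in category_sizes])\n\n    def forward(self, h):\n        # h: (batch, hidden_size); returns one log-distribution per\n        # category; the final label concatenates the argmaxes,\n        # with 'off'-valued categories filtered out.\n        return [torch.log_softmax(head(h), dim=-1)\n                for head in self.heads]\n\end{verbatim}\n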
\n\\citet{inoue2017} combined these classifiers into a multitask neural model sharing the same encoder, and predicted both POS tag and morphological category values given the same contextual representation computed by a bidirectional LSTM. \nThey showed that the multitask learning setting outperforms the combination of several independent classifiers on tagging Arabic.\nIn this paper, we experiment with the same architecture, termed as multiclass multilabel model, on many languages.\nAdditionally, we extend this approach and explore a hierarchical architecture where morphological features directly depend on the POS tag. \n\n\n\nAnother previously adopted approach involves modeling complex morphological labels as sequences of morphological feature values \\citep{hakkani2000,schmid2008}. In neural networks, this idea can be implemented with recurrent sequence modeling. Indeed, one of our proposed models generates morphological tags with an LSTM network. Similar idea has been applied for the morphological reinflection task \\citep{kann2016,faruqui2016} where the sequential model is used to generate the spellings of inflected forms given the lemma and the morphological label of the desired form. In morphological tagging, however, we generate the morphological labels themselves.\n\nAnother direction of research on modeling the structure of complex morphological labels involves structured prediction models~\\citep{mueller2013,mueller2015,malaviya2018,lee2011}.\n\\citet{lee2011} introduced a factor graph model that jointly infers morphological features and syntactic structures.\n\\citet{mueller2013} proposed a higher-order CRF model which handles large morphological tagsets by decomposing the full label into POS tag and morphology part.\n\\citet{malaviya2018} proposed a factorial CRF to model pairwise dependencies between individual features within morphological labels and also between labels over time steps for cross-lingual transfer.\nRecently, neural morphological taggers have been compared to the CRF-based approach \\citep{heigold2017,yu2017}.\nWhile \\citet{heigold2017} found that their neural model with bidirectional LSTM encoder surpasses the CRF baseline, the results of \\citet{yu2017} are mixed with the convolutional encoder being slightly better or on par with the CRF but the LSTM encoder being worse than the CRF baseline. \n\nMost previous work on neural POS and morphological tagging has shared the general idea of using bidirectional LSTM for computing contextual features for words \\citep{ling2015,huang2015,labeau2015,ma2016,heigold2017}.\nThe focus of the previous work has been mostly on modeling the inputs by exploring different character-level representations for words \\citep{heigold2016,santos2014,ma2016,inoue2017,ling2015,rei2016}.\nWe adopt the general encoder architecture from these works, constructing word representations from characters and using another bidirectional LSTM to encode the context vectors. \nIn contrast to these previous works, our focus is on modeling the compositional structure of the complex morphological labels. \n\nThe morphologically annotated Universal Dependencies (UD) corpora~\\citep{nivre2017} offer a great opportunity for experimenting on many languages. \nSome previous work have reported results on several UD languages \\citep{yu2017,heigold2017}. 
\nMorphological tagging results on many UD languages have also been reported for parsing systems that predict POS and morphological tags as preprocessing \citep{andor2016,straka2016,straka2017}.\nSince UD treebanks have been in constant development, these results have been obtained on different UD versions and thus are not necessarily directly comparable. We conduct experiments on all UDv2.1 languages and we aim to provide a baseline for future work in neural morphological tagging.\n\n\section{Results}\n\label{sec:results}\n\n\begin{table*}\n\centering\n\ssmall\n\renewcommand{\arraystretch}{0.91}\n\tabcolsep=0.095cm\n\input{tbl_results}\n\caption{\small Morphological tagging accuracies on UDv2.1 test sets for MarMoT (\textsc{MMT}) and \textsc{Mc} baselines as well as for \textsc{McMl}, \textsc{HMcMl} and \textsc{Seq} compositional models. The left section shows the full \textsc{Pos+Morph} tag results, the middle section gives accuracies for OOV words only, and the right-most section shows the POS tagging accuracy. The best result in each section for each language is in bold. The languages are color-coded according to the training set size, lighter color denotes larger training set: cyan (<20K), violet (20K-50K), magenta (50K-100K), pink (>100K).}\n\label{tbl:results}\n\end{table*}\n\nTable~\ref{tbl:results} presents the experimental results. We report tagging accuracy for all word tokens and also for OOV tokens only. \nA full morphological tag is considered correct if both its POS and all morphological features are correctly predicted. \n\nFirst of all, we can confirm the results of \citet{heigold2017} that the performance of neural morphological tagging indeed exceeds the results of a CRF-based model. In fact, all our neural models perform significantly better than \textsc{MarMoT} ($p<0.001$).\footnote{As indicated by a Wilcoxon signed-rank test.}\n\n\nThe best neural model on average is the \textsc{Seq} model, which is significantly better than both the \textsc{Mc} baseline and the other two compositional models; the improvement is especially pronounced on smaller datasets. We do not observe any significant differences between the \textsc{McMl} and \textsc{HMcMl} models in either the all-words or the OOV evaluation setting.\n\nWe also present POS tagging results in the right-most section of Table~\ref{tbl:results}. Here again, all neural models are better than the CRF, which is in line with the results presented by \citet{plank2016}. For POS tags, the \textsc{HMcMl} model is the best on average. 
It is also significantly better than the neural \\textsc{Mc} baseline, however, the differences with the \\textsc{McMl} and \\textsc{Seq} models are insignificant.\n\nIn addition to full-tag accuracies, we assess the performance on individual features.\nTable~\\ref{tbl:features} reports macro-averaged F1-cores for the \\textsc{Seq} and the \\textsc{Mc} models on universal features.\nResults indicate that the \\textsc{Seq} model systematically outperforms the \\textsc{Mc} model on most features.\n\n\n\\begin{table}[t]\n\\centering\n\\small\n\\renewcommand{\\arraystretch}{0.91}\n\\tabcolsep=0.095cm\n\\begin{tabular}{lrrr|lrrr}\n\\toprule\nFeature & \\textsc{Seq} & \\textsc{Mc} & \\# & Feature & \\textsc{Seq} & \\textsc{Mc} & \\# \\\\\n\\midrule\nPOS & \\textbf{91.03} & 90.20 & 69 & NumType & \\textbf{89.68} & 87.82 & 54 \\\\\nNumber & \\textbf{94.02} & 93.05 & 63 & Polarity & \\textbf{93.83} & 92.86 & 54 \\\\\nVerbForm & \\textbf{91.29} & 89.86 & 61 & Degree & \\textbf{87.44} & 84.12 & 48 \\\\\nPerson & \\textbf{89.02} & 87.52 & 60 & Poss & \\textbf{94.52} & 93.60 & 44 \\\\\nTense & \\textbf{92.96} & 91.31 & 59 & Voice & \\textbf{88.40} & 82.85 & 42 \\\\\nPronType & \\textbf{89.83} & 88.81 & 58 & Definite & \\textbf{95.26} & 94.10 & 37 \\\\\nMood & \\textbf{87.34} & 85.40 & 58 & Aspect & \\textbf{89.76} & 87.71 & 29 \\\\\nGender & \\textbf{89.31} & 87.78 & 55 & Animacy & \\textbf{86.22} & 83.73 & 19 \\\\\nCase & \\textbf{88.90} & 87.04 & 55 & Polite & 75.76 & \\textbf{80.48} & 10 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Performance of \\textsc{Seq} and \\textsc{Mc} models on individual features reported as macro-averaged F1-scores. \n}\n\\label{tbl:features}\n\\end{table}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThere is a long and rich history of modeling the response of visual cortex neurons to stimuli extending back to the work of Hubel and Wiesel on simple and complex cells \\cite{hubel1962receptive}. \nIn recent years, artificial neural networks (ANNs) have achieved state-of-the-art performance predicting neural responses to natural stimuli \\cite{Antolik2016,Yamins2016,Klindt2017,Cadena2019,Batty2016,Sinz2018,Yamins2014,Vintch2015,Ecker2018,Walker2019}. \nThese models are accurate enough that the stimuli that maximally excite a neuron can be computed \\textit{in silico}, and when tested \\textit{in vivo} indeed drive neurons effectively \\cite{Walker2019,Bashivan2019}.\nHowever, these approaches place the computational burden of optimizing network parameters \\textit{after} extensive data from a neuron has been collected, which prohibits their use in real-time closed-loop experiments.\nTo avoid this optimization step, we wanted a model that can predict the response of a novel neuron to any stimulus, conditioned on a set of $K$ observed stimulus-response pairs -- essentially performing $K$-Shot prediction on neural responses.\n\n\\citet{Garnelo2018a} aptly describe how Neural Processes (NPs) can help solve this problem:\n\\emph{``Meta-learning models share the fundamental motivations of NPs as they shift workload from training time to test time. NPs can therefore be described as meta-learning algorithms for few-shot function regression''}. \nNPs achieve this by embedding input and output measurements into a latent space that maps to a space of functions, essentially learning the distribution over functions and a method to infer the posterior over functions given limited samples \\cite{Garnelo2018a,Garnelo2018,Kim2019}. 
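\n\nAs a rough illustration of this encode--aggregate--decode recipe, a minimal conditional-NP-style regressor for scalar responses could look as follows (a hypothetical PyTorch-style sketch with invented names; it is not the architecture proposed here):\n\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass TinyConditionalNP(nn.Module):\n    # Encode each (x, y) pair, average the encodings into a single\n    # representation of the observed set, then decode new inputs.\n    def __init__(self, x_dim, r_dim=64):\n        super().__init__()\n        self.encoder = nn.Sequential(\n            nn.Linear(x_dim + 1, r_dim), nn.ReLU(),\n            nn.Linear(r_dim, r_dim))\n        self.decoder = nn.Sequential(\n            nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),\n            nn.Linear(r_dim, 1))\n\n    def forward(self, x_obs, y_obs, x_target):\n        # x_obs: (K, x_dim), y_obs: (K, 1), x_target: (T, x_dim)\n        r = self.encoder(torch.cat([x_obs, y_obs], dim=-1)).mean(dim=0)\n        r = r.unsqueeze(0).expand(x_target.shape[0], -1)\n        return self.decoder(torch.cat([x_target, r], dim=-1))\n\end{verbatim}\nThe mean over the encoded observations is what makes the prediction a function of the whole observed set rather than of any single stimulus-response pair.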
\n\nA significant advance in modeling visual responses with ANNs was using convolutional neural networks with a factorized readout between the tuning function's location and properties \\cite{Klindt2017, Sinz2018}.\nWe found that NPs struggle to learn the space of tuning functions from stimulus-response samples without such a factorized representation. \nThus, we developed a Factorized Neural Process (FNP), which is composed of stacking multiple NPs.\nA key insight for this was that by passing the latent variable computed by early layers to deeper layers, in addition to the observations, we could obtain a factorized latent space while retaining the representational power and efficiency of NPs. \nWe used a two-layer FNP applied to visual responses, where the first NP produces a latent variable for the tuning function's location that the second NP uses to infer the tuning function's properties.\nWe found that a FNP trained on simulated data generalizes to new neurons, successfully inferring the tuning function's location and properties and predicting the responses to unseen stimuli. An FNP trained on neural responses from the mouse primary visual cortex made predictions with comparable accuracy to state-of-the-art approaches, and made these predictions almost \\emph{100 times faster}.\n\nIn short, our contributions in this work include: \\circled{1} We reformulate the problem of predicting the response of neurons to visual stimuli as a K-shot regression problem, removing the time consuming step of optimizing network parameters for each newly acquired neuron. \\circled{2} We develop a Factorized Neural Process that embeds the observed stimuli-response pairs into a latent space representing the tuning function that is partitioned into location and tuning function properties. \\circled{3} We train this Factorized Neural Process for Neural Processes end-to-end on simulated data and show it approaches the ground truth predictions as the size of the observation set increases. \\circled{4} We found that this approach performs comparably to state-of-the-art predictive models on responses from mouse visual cortex neurons while improving estimation speed by multiple orders of magnitude. The code is available at \\url{https:\/\/github.com\/peabody124\/fnp_neurips2020}.\n\n\\section{Neural Processes for Neural Processes}\n\nThe core steps that allow a NP to efficiently meta-learn a $K$-shot regression model are (1) encoding each element from a set of observed input-output observations into a representation space, (2) aggregating the elements in that representation space (typically by taking the mean) to produce a sufficient statistic of the observed set, (3) a conditional decoder that maps the aggregated representation to a function used for prediction of new observations, and (4) training this over many different sets of observations from different sample functions, i.e. meta-learning the distribution over tuning functions \\cite{Garnelo2018}. Our approach is largely based on \\citet{Garnelo2018a}, which expanded on \\citet{Garnelo2018} by introducing a stochastic variable used by the conditional decoder. NPs were further extended to include attention in \\citet{Kim2019}, which we do not use in this work. \n\nFirst, we describe the data generation process we seek to model: Let $\\mathcal F : \\mathcal X \\rightarrow \\mathcal Y$ be the space of all tuning functions that map from images to neural responses. 
An individual neuron corresponds to a sample function, $f \in \mathcal F$, from which we get $K$ observations $O_K=\{(\boldsymbol x_i, y_i)\}_{i=0}^{i<K}$ of stimulus-response pairs.\n\nWe found that as the size of the observation set increased, the predictive accuracy improved up to several hundred observations and then began to saturate. We also found that increasing the maximum set size used during training had a slight benefit in the asymptotic performance when increasing from 512 to 1024 trials, but with little benefit beyond this (Fig. \ref{fig:accuracy}). These results were averaged over three different seeds, with each fit producing similar performance. \n\nDespite trying a number of architecture variations, we could not get the asymptotic performance to quite reach the ground truth. However, it performed well, with a $\Delta LL$ of $0.4$ corresponding to a correlation coefficient between the ground truth mean response, $\lambda_\phi\left(\boldsymbol x\right)$, and the model prediction mean, $\lambda_{K=1024}(\boldsymbol x)$, of $0.8$. \n\n\subsection{Latent variables accurately capture the tuning function}\n\label{section:reconstruction}\n\n\begin{figure}[b]\n \centering\n \includegraphics[width=0.8\linewidth]{rf_accuracy.pdf}\n \caption{a) The correlation between the location latent variable, $z_k^p$, and the ground truth for increasing observations. b) Reconstruction of receptive fields (RF). Each row corresponds to a different cell, with the bottom half being complex cells. The first column shows the ground truth kernels, the second column is the RF reconstructed by the gradient method, and the remaining block shows the maximally exciting images computed using increasing numbers of observations. Ground truth kernels of complex cells use pseudocolor to reflect the two phases in the energy model, and any reconstruction of this energy model with the same orientation and location is equally valid, regardless of the phase.}\n \label{fig:rf_accuracy}\n\end{figure}\n\nWe confirmed that the information about the tuning function was correctly factorized by computing the correlation coefficient between the latent variable for location, $\boldsymbol z_k^p$, and the ground truth location of the kernel, $k_\phi$. We found that with only 64 observations there was a correlation of $0.8$, and it reached nearly $1.0$ with 256 observations (Fig.~\ref{fig:rf_accuracy}a). \n\nWe then asked if the latent variables $(\boldsymbol z_K^p, \boldsymbol z_K^w)$ from 1024 observations were sufficient to reconstruct the receptive field. First, we computed receptive fields as the gradient of the tuning function conditioned on the latent variables: \n\begin{equation*}\n RF_\nabla^K = \nabla_{\boldsymbol x} \left([\mathcal T \left( g_\theta(\boldsymbol x), \boldsymbol z_K^p \right), 1] \cdot u_\theta(\boldsymbol z_K^w) \right)\n\end{equation*}\nFor simple cells the gradient showed a good correspondence to the kernel used to generate the responses, $k_\phi$ (Fig. \ref{fig:rf_accuracy}b). For complex cells it was in the correct location, but did not show the same structure as the kernel. This is expected, as complex cells are not well described by a single kernel. 
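\n\nBoth this gradient-based receptive field and the maximally exciting images discussed next can be obtained with automatic differentiation once the latent variables are fixed. A minimal sketch, assuming a PyTorch-style function \texttt{decode(x, z\_p, z\_w)} that returns the scalar predicted mean rate (the function names and optimizer settings here are ours, not those of our implementation), is:\n\begin{verbatim}\nimport torch\n\ndef gradient_rf(decode, z_p, z_w, image_shape):\n    # Receptive field as the gradient of the predicted rate,\n    # evaluated at a blank image.\n    x = torch.zeros(image_shape, requires_grad=True)\n    decode(x, z_p, z_w).backward()\n    return x.grad.detach()\n\ndef compute_mei(decode, z_p, z_w, image_shape,\n                steps=500, lr=0.05, kappa=0.01):\n    # Gradient ascent on the predicted rate with an L2 penalty.\n    x = torch.zeros(image_shape, requires_grad=True)\n    opt = torch.optim.Adam([x], lr=lr)\n    for _ in range(steps):\n        opt.zero_grad()\n        loss = -decode(x, z_p, z_w) + kappa * x.norm()\n        loss.backward()\n        opt.step()\n    return x.detach()\n\end{verbatim}\n\n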
We then computed the maximally exciting images (MEIs) for a neuron similarly to \\citet{Walker2019} by maximizing the predicted response, conditioned on the latent variables sampled after an increasing number of observations:\n\\begin{equation*}\n MEI_K = \\argmax_{\\boldsymbol x} \\left([\\mathcal T \\left( g_\\theta(\\boldsymbol x), \\boldsymbol z_K^p \\right), 1] \\cdot u_\\theta(\\boldsymbol z_K^w) \\right) - \\kappa \\Vert \\boldsymbol x \\Vert\n\\end{equation*}\nWith $\\kappa=0.01$ to regularize the images. As desired, MEIs computed with more observations converged towards the ground truth kernels, with complex cells having an anticipated random phase offset.\n\n\\subsection{Latent dimensionality and stimulus complexity}\n\\label{section:tuning_complexity}\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{latent.pdf}\n \\caption{\\textbf{Left}: The difference between the predictive log likelihood and ground truth as the dimension of the tuning function properties latent variable increases. Each line reflects an increasingly complex RF space. \\textbf{Right}: The correlation between the ground truth mean firing rate and the predicted mean firing rate for the same data.}\n \\label{fig:latent_dimension}\n\\end{figure}\n\nWe also studied how important the tuning function property's latent dimension $D$, with $\\boldsymbol z_K^w \\in \\mathbb R^D$, was to the predictive performance by increasing it from 2 to 64 (all experiments above used 64). We did this with different complexities of the simulated receptive fields by reducing the number of parameters in $\\phi$ that were randomized. In all experiments the orientation and location of the tuning function was randomized ($\\phi \\in \\mathbb R^3$). We increased the tuning function dimensions by then additionally randomizing (in order): frequency, width, phase offset, simple only versus simple and complex cells, and scale. Because this analysis involved refitting many models, we performed it with $16\\times 16$ stimuli. We found the performance improved with greater model capacity (tuning function properties latent dimension) and this impact was much more pronounced for more complex (higher dimensional) tuning functions (Fig.~\\ref{fig:latent_dimension}). Randomizing the phase offset produced the greatest reduction in predictive accuracy, although performance still remained quite good with high correlations between the model predictions and ground truth. Encouragingly, including complex cells did not produce a significant change in performance.\n\n\\section{Experiments with real neural responses}\n\nWe next tested our approach on real visual responses recorded with the same experimental paradigm as in \\citet{Walker2019}, and found it had a comparable predictive performance to optimization-based approaches. The data consists of pairs of neural population responses and grayscale visual stimuli sampled and cropped from ImageNet, isotropically downsampled to $64\\times 36$\\,px, with a resolution of $0.53$\\,ppd (pixels per degree of visual angle). The neural responses were recorded from layer L2\/3 of the primary visual cortex (area V1) of the mouse, using a wide field two photon microscope. A single scan contained the responses of approximately 5000--9000 neurons to up to 6000 images. \n\nWe trained an FNP on 57,533 mouse V1 neurons collected across 19 different scans and tested it on 1000 neurons from a hold-out scan (i.e. never seen during training). 
\nDuring testing, we assigned the latent variables to their mean values: $\boldsymbol z_K^p:=\boldsymbol \mu(s_K^p)$ and $\boldsymbol z_K^w:=\boldsymbol \mu_K^w$, and used these in Eq.~\ref{eq:conditional_decoder}. \nWe measured the $K$-shot predictive accuracy for each neuron as the correlation between the predicted mean from the conditional decoder, $\lambda(\boldsymbol x_t, \boldsymbol z_K^p, \boldsymbol z_K^w)$, and the real responses, $y_t$, for the remaining trials. \nIn agreement with the synthetic data, the predictive accuracy improves rapidly with the first several hundred trials and continues to improve with additional observations (Fig.~\ref{fig:real_predictions}). \nWe compared the performance of our FNP to an optimization-based approach similar to \citet{Klindt2017}, adapted for mouse V1, which we refer to as Per Neuron Optimization (PNO). We measured the predictive performance of PNO similarly to FNP, on the same 1000 neurons, with the readout optimized on $K$ trials and used to predict the responses to the remaining stimuli. \nExcitingly, the FNP performs well and with 1k images is almost as accurate as PNO (which is optimized for those individual cells), and even \emph{outperforms} it for smaller numbers of observations (Fig.~\ref{fig:real_predictions}). This likely arises because the FNP learns the prior distribution over tuning functions, which has a greater influence with less data. Please see the Appendix for details of both FNP and PNO fitting and testing.\n\nThese experiments also demonstrated the speed improvements for inferring the tuning functions of newly recorded neurons that the FNP was designed for. While fitting the FNP to the training data took 6 days using two V100 GPUs, computing the latent variables for one thousand neurons with $K=1000$ took only 250~ms on a 1080Ti. This is in comparison to PNO, which takes at least 20~s to compute the readout using a pretrained CNN (Supplementary Table~\ref{table:times}). Thus an FNP is two orders of magnitude faster, enabling real-time inference of tuning functions within the time of a single stimulus presentation.\n\n\begin{figure}\n \begin{center}\n\n \includegraphics[width=0.5\linewidth]{figures\/optimization_v_prediction.pdf}\n\n \end{center}\n \caption{Performance of an FNP for $K$-shot prediction on new neurons, compared to a traditional approach with per-neuron optimization (PNO), for $K$ up to 1000 trials.}\n \label{fig:real_predictions}\n\end{figure}\n\n\section{Discussion}\n\nUsing a Factorized Neural Process, we are able to learn a distribution over tuning functions that can be conditioned on a set of observed stimulus-response pairs and predict the response to novel stimuli. We first focused on simulated data from simple and complex cells where we could compare the inferred tuning functions to the ground truth. Importantly, the model performed equally well when including complex cells, which is not possible for classical techniques like the spike-triggered average that similarly accumulate sufficient statistics. The fact that the asymptotic log likelihood for predictions did not reach the ground truth also indicates there is room to increase the model capacity, although the correlation between the ground truth and model predictions exceeded $0.8$. \nFollowing prior work \cite{Klindt2017, Ecker2019, Walker2019,Sinz2018}, we restricted ourselves to a decoder that was a factorized linear readout on the output of $g_\theta$, but learning a more powerful decoder could also improve the capacity. 
\nWe then tested our approach on data from the mouse primary visual cortex in response to natural images. We found that the trained FNP predicted the responses to test data with accuracy comparable to a model specifically optimized for those neurons, and even exceeded its performance when conditioned on fewer than 500 trials. Additionally, the FNP made these predictions orders of magnitude more quickly than an optimization-based approach, thus opening the door to real-time, closed-loop inference of tuning functions updated after every stimulus presentation.\n \nThis work was motivated by real-time experimentation, but during an experiment the best way to know how a neuron responds to a stimulus is to measure it. The real need is to use the observations to rapidly generate stimuli that test a hypothesis. We envision combining an FNP for rapid inference with a generator network that takes the latent representations as input and is trained \textit{in silico} prior to experiments to generate stimuli that elicit maximal responses or reduce the uncertainty in the latent representations. We believe that this general approach, training a more powerful model prior to experiments so that rapid, real-time inference is possible during them, will be a powerful tool for the neuroscience community, and that FNPs will facilitate it.\n\n\section*{Broader Impact}\n\nWe hope this approach will be useful to the neuroscience community and that Factorized Neural Processes may have even broader applications for modeling functions. The ability to perform real-time, closed-loop experiments and to perform inferences with less data may reduce the amount of time spent recording from animals or the number of experimental sessions. \nWe do not believe this methodology or the demonstrated application will disadvantage anyone.\n\n\begin{ack}\nRJC thanks the Research Accelerator Program of the Shirley Ryan AbilityLab for support during residency.\nFHS is supported by the Carl-Zeiss-Stiftung and acknowledges the support of the DFG Cluster of Excellence \"Machine Learning -- New Perspectives for Science\", EXC 2064\/1, project number 390727645.\nSupported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior\/Interior Business Center (DoI\/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI\/IBC, or the U.S. Government. \end{ack}\n\n\small\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Large parton densities in the nuclear wave function}\n\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=5cm,angle=-90]{dhj-evol.eps}\n\hspace{0.5cm}\n\includegraphics[width=5cm,angle=-90]{dhj-pred.eps}\n\caption{Forward particle production in d+Au collisions at RHIC. The left plot shows the importance of including both the large-$x$ DGLAP evolution of the dilute deuteron and the\nsmall-$x$ CGC evolution of the dense nucleus. The right plot shows the excellent description of the spectra shapes, and the K factors needed to obtain the normalization.}\n\end{center}\n\end{figure}\n\nWhen probing small distances inside a hadron or nucleus with a hard process, one resolves its partonic constituents. 
Increasing the energy of the scattering process at a fixed momentum transfer allows to probe lower-energy partons, with smaller energy fraction $x.$ As the parton densities in the hadronic\/nuclear wave function grow with decreasing $x,$ they eventually become so large that a non-linear (yet weakly-coupled) regime is reached, called saturation, where partons do not interact with the probe independently anymore, but rather behave coherently. \n\nThe Color Glass Condensate (CGC) is an effective theory of QCD \\cite{cgcrev} which aims at describing this part of the wave function. Rather than using a standard Fock-state decomposition, it is more efficient to describe it with collective degrees of freedom, more adapted to account for the collective behavior of the small-$x$ gluons. The CGC approach uses classical color fields: \n\\begin{equation}\n|h\\rangle=|qqq\\rangle+|qqqg\\rangle+\\dots+|qqqg\\dots ggg\\rangle+\\dots\\quad\n\\Rightarrow\\quad|h\\rangle=\\int D\\rho\\ \\Phi_{x_A}[\\rho]\\ |\\rho\\rangle\n\\label{cgc}\\ .\\end{equation}\nThe long-lived, large-$x$ partons are represented by a strong color source\n$\\rho\\!\\sim\\!1\/g_S$ which is static during the lifetime of the short-lived small-$x$ gluons, whose dynamics is described by the color field $A^\\mu\\!\\sim\\!1\/g_S.$ The arbitrary separation between the field and the source is denoted $x_A.$\n\nWhen probing the CGC with a dilute object carrying a weak color charge, the color field $A^\\mu$ is directly obtained from $\\rho$ via classical Yang-Mills equations:\n\\begin{equation}\n[D_\\mu,F^{\\mu\\nu}]=\\delta^{-\\nu}\\rho\\ ,\n\\end{equation}\nand it can be used to characterize the CGC wave function $\\Phi_{x_A}[A^-]$\n(in the $A^+\\!=\\!0$ gauge). This wave function is a fundamental object of this picture, it is mainly a non-perturbative quantity, but the $x_A$ evolution can be computed perturbatively. Requiring that observables are independent of the choice of $x_A,$ a functional renormalization group equation can be derived. In the leading-logarithmic approximation which resums powers of $\\alpha_S\\ln(1\/x_A),$ the JIMWLK equation describes the evolution of $|\\Phi_{x_A}|^2$ with $x_A.$\n\nThe information contained in the wave function, on gluon number and gluon correlations, can be expressed in terms of n-point correlators, probed in scattering processes. These correlators consist of Wilson lines averaged with the CGC wave function, and resum powers of $g_S A^-$ (therefore both multiple scatterings and non-linear QCD evolution are taken into account). For instance in the case of single inclusive gluon production in pA collisions, the CGC is described by its (all-twist) unintegrated gluon distribution, obtained from the 2-point function \\cite{mygprod}. More exclusive observables involve more complicated correlators. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=6.8cm]{dau.eps}\n\\hspace{0.5cm}\n\\includegraphics[width=6.8cm]{ppb.eps}\n\\caption{Two-particle production at forward rapidities in pA collisions. The $\\Delta\\phi$ spectrum is displayed at RHIC (left) and LHC (right) energies. When decreasing $p_{T_2}$ at fixed $y_2,$ the correlation in azimuthal angle is suppressed. 
At the LHC, smaller values of $x_A$ are probed, and the azimuthal angle correlation is more suppressed, as indicated by the vertical axis; the peak is also less pronounced.}\n\end{center}\n\end{figure}\n\nForward particle production in pA collisions makes it possible to investigate the non-linear QCD dynamics of high-energy nuclei with a probe well understood in QCD. Indeed, while such processes probe small-momentum partons in the nuclear wave function, only high-momentum partons of the proton contribute to the scattering ($\sqrt{s} x_p\!=\!k e^y$ and $\sqrt{s} x_A\!=\!k e^{-y}$ with $k$ and $y$ denoting transverse momentum and rapidity), and that involves standard parton distribution functions. In two-particle production, contrary to single-particle production, the CGC cannot be described only by its unintegrated gluon distribution: the so-called $k_T$-factorization framework is not applicable.\n\nIt was not obvious that the CGC picture (\ref{cgc}), which requires small values of $x_A,$ would be relevant at present energies. One of the most acclaimed successes came in the context of d+Au collisions at RHIC: the prediction that the yield of high-$p_T$ particles at forward rapidities in d+Au collisions is suppressed compared to $A$ pp collisions, and should decrease when increasing the rapidity, was confirmed\n\cite{jyrev}. In Fig.1 the $dAu\!\to\!hX$ $p_T$ spectra computed in the CGC approach \cite{dhj} are compared to RHIC data, and the description of the slope is impressive. The need for K factors to describe the normalization could be expected since this is a leading-order calculation. Improving the calculation with the next-to-leading evolution has yet to be done.\n\nThe focus should now shift towards more exclusive observables like two-particle production $pA\!\to\!h_1h_2X.$ In particular, the correlations in azimuthal angle between the produced hadrons should be suppressed compared to pp collisions. Predictions for the process $pA\to h_1h_2X$ are shown in Fig.2, for RHIC and the LHC \cite{mytpc}. $k_1,$ $k_2$ and $y_1,$ $y_2$ are the transverse momenta and rapidities of the final state hadrons, and the azimuthal angle spectra are displayed. We find that the perturbative back-to-back peak of the azimuthal angle distribution is reduced by initial-state saturation effects. As the momenta decrease, the angular distribution broadens.\n\n\section{Particle production in the Glasma}\n\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=6.5cm]{mpi.eps}\n\hspace{0.5cm}\n\includegraphics[width=7.5cm]{horizon.eps}\n\caption{Left: typical leading-order diagram for particle production in the Glasma; multiple partonic interactions are crucial when low values of $x$ are being probed in the nuclear wave functions. Right: the space-time location of events that may correlate two particles is the intersection of their past light-cones. Correlations between particles widely separated in rapidity are due to early-time dynamics.}\n\end{center}\n\end{figure}\n\nThe Glasma is the result of the collision of two CGCs, and it provides a weak-coupling description of the early stages after a high-energy heavy-ion collision. 
Each nuclear wave function is characterized by a strong color charge, and the field describing the dynamics of the small-$x$ gluons is the solution of\n\begin{equation}\n[D_\mu,F^{\mu\nu}]=\delta^{+\nu}\rho_1+\delta^{-\nu}\rho_2\ .\n\end{equation}\nThe field after the collision is non-trivial \cite{glasma}: it has a strong component ($A^\mu\sim1\/g_s$), a component which is particle-like ($A^\mu\sim1$), and components of any strength in between. To understand how this pre-equilibrium system thermalizes, one needs to understand how the Glasma field decays into particles. Right after the collision, the strong field component contains all modes.\nThen, as the field decays, modes with $p_T>1\/\tau$ are not part of the strong component anymore, and for those a particle description becomes more appropriate. After a time of order $1\/Q_s,$ this picture breaks down, and it has been a formidable challenge to determine whether a fast thermalization can be achieved within this framework.\n\nA problem which can be more easily addressed is particle production. The difficult task is to express the cross-section in terms of the Glasma field, taking into account multiple partonic interactions, as pictured in Fig.3 (left). Because of the flux-tube structure of its color field $A^\mu,$ the Glasma is a natural candidate to explain the ridge-shaped two-particle correlations observed at RHIC, as well as three-particle correlations \cite{ridge}. The ridge is collimated in azimuthal angle because of the radial flow which happens at a later stage, but since the ridge is several units long in rapidity, it is due to early-time dynamics: this is explained in Fig.3 (right), which shows the space-time picture of the collision. In the forward light-cone, lines of constant proper time $\tau=\sqrt{x^+x^-}$ are hyperbolae and lines of constant rapidity $\eta=\frac12\log(x^+\/x^-)$ are straight lines from the origin. For two final-state particles separated by the rapidity $\Delta\eta,$ causality imposes that they can be correlated only by events which happened at\n\begin{equation}\n\tau<\tau_{f.o.}\ e^{-\Delta\eta\/2}\ ,\n\end{equation}\nwhere the freeze-out time $\tau_{f.o.}$ denotes the time of last interaction. While the features of the ridge are qualitatively explained by the Glasma, a quantitative description is needed.\n\n\section{Energy loss of high-$p_T$ partons in the QCD plasma}\n\nHard probes are believed to be understood well enough to provide clean measurements of the properties of the QGP formed in heavy-ion collisions. A large amount of work has been devoted to understanding what happens to a quark (of high energy $E,$ mass $M$ and Lorentz factor $\gamma=E\/M$) as it propagates through a thermalized plasma\n\cite{jqrev}. Multiple scatterings are a main ingredient of the perturbative QCD (pQCD) description of how a quark loses energy, until it thermalizes or exits the medium (see Fig.4).\n\nAt lowest order with respect to $\alpha_s,$ quantum fluctuations in a quark wave function consist of a single\ngluon, whose energy we denote $\omega$ and transverse momentum $k_\perp.$ The virtuality of that\nfluctuation is measured by the coherence time, or lifetime, of the gluon $t_c=\omega\/k_\perp^2.$\nShort-lived fluctuations are highly virtual while longer-lived fluctuations are more easily put on shell when\nthey interact. 
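\nAs a rough numerical illustration (the numbers are ours, chosen only to set the scale), a gluon fluctuation with $\omega=10$ GeV and $k_\perp=1$ GeV has a lifetime\n\begin{equation}\nt_c=\frac{\omega}{k_\perp^2}=10\ \mbox{GeV}^{-1}\simeq 2\ \mbox{fm}\ ,\n\end{equation}\ncomparable to the size of the medium it traverses, so such long-lived fluctuations are the ones most easily put on shell by interactions with the medium.\n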
The probability of the fluctuation is $\alpha_sN_c,$ up to a kinematic factor which for heavy\nquarks suppresses fluctuations with $\omega>\gamma k_\perp.$ This means that when gluons are put\non shell, they are not radiated in a forward cone around a heavy quark. This suppression of the available\nphase space for radiation, the {\it dead-cone} effect, implies less energy loss for heavier quarks.\n\n\begin{figure}[t]\n\begin{center}\n\includegraphics[width=4.5cm]{cartoon.eps}\n\hspace{1cm}\n\includegraphics[width=6.5cm]{raa.eps}\n\caption{Left: production of high-energy partons in a hard process, which then lose energy propagating through the plasma. Some quantum fluctuations in their wave function are put on shell while interacting with the medium and become emitted radiation.\nRight: the resulting particle production in AA collisions is suppressed ($R_{AA}<1$) compared to independent nucleon-nucleon collisions. The suppression is large for light hadrons, and similar for heavy mesons (those data are displayed in the figure), which is difficult to accommodate in a weakly-coupled QCD description.}\n\end{center}\n\end{figure}\n\nIn pQCD, medium-induced gluon radiation is due to multiple scatterings of the virtual gluons.\nIf, while undergoing multiple scattering, the virtual gluons pick up enough transverse momentum to be put on shell,\nthey become emitted radiation. The accumulated transverse momentum squared picked up by a gluon of coherence\ntime $t_c$ is\n\begin{equation}\np_\perp^2=\mu^2 \frac{t_c}{l}\equiv\hat{q}\ t_c\n\end{equation}\nwhere $\mu^2$ is the average transverse momentum squared picked up in each scattering, and\n$l$ is the mean free path. These medium properties are involved through the ratio\n$\hat{q}=\mu^2\/l.$\n\nSince only the fluctuations which pick up enough transverse momentum are freed ($k_\perp<p_\perp$), those with $k_\perp>Q_s$ do not have time to pick up enough $p_\perp$ to be freed, while the longer-lived ones with $k_\perp