diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcxon" "b/data_all_eng_slimpj/shuffled/split2/finalzzcxon" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcxon" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nStructured inference, where the goal is to infer a structured state output from a structured observation input, is a crucial component for a wide range of applications such as object recognition~\\cite{Quattoni}, image classification~\\cite{Pan}, natural language processing~\\cite{Yu1}, gesture recognition~\\cite{Wang}, handwriting recognition~\\cite{Feng}, and bioinformatics. A powerful and commonly-used approach to structured inference is the use of Markov random field (MRF) and conditional random field (CRF)~\\cite{Lafferty} models. A limitation of such graphical models is that they utilize unary and pairwise potentials on local neighborhoods only, and as such can result in smoothed state boundaries as well as prohibit long-range state boundaries given the limitations of constraint locality. This becomes particularly problematic in the presence of high observational uncertainties such as measurement noise and outliers.\n\nRecently there has been significant interest in the application of two types of models for the purpose of structured inference that help address the issues associated with locally-connected graphical models: i) fully-connected graphical models, and ii) deep-structured graphical models. Fully-connected graphical models address issues of locally-connected models by assuming full connectivity amongst all nodes in the graph, thus taking full advantage of long range relationships to improve inference accuracy. One of the main hurdles in utilizing fully-connected graphical models is the complexity of inference, which becomes computationally intractable as the size of the problem scales.\n\nMuch of recent research in fully-connected graphical models have revolved around addressing the computational complexity of inference step. Kr\\\"{a}henb\\\"{u}hl and Koltun~\\cite{Koltun,Koltun2} introduced an efficient inference procedure for fully-connected CRF based on specific potential functions, where the edge potentials are obtained by use of Gaussian kernels, thus allowing them to formulate the inference problem as a filtering problem. By computing the energy function via convolution, computational complexity is reduced to linear complexity by use of a permutohedral lattice~\\cite{permu}. Zhang and Chen~\\cite{Zhang} utilized a stationary constraint where the spatial potentials over two nodes are assumed to depend only on their relative positions for each of their states, thus allowing for statistical encoding by different distributions and thus relaxing the Gaussian assumption made by Kr\\\"{a}henb\\\"{u}hl and Koltun. Campbell et. al. \\cite{Campbell} further generalized the pairwise potentials to non-linear dissimilarity measures by representing the pairwise terms as density estimates of the conditional probability. Ristovski et al. \\cite{Ristovski} introduced a continuous fully-connected CRF that is similar to that proposed by Campbell et al., but targets the regression problems with continuous outputs. 
Nevertheless, while the aforementioned methods significantly reduce the computational complexity of inference on fully-connected graphical models, they all address the problem similarly by defining specific potential functions to manage the inference as a filtering approach, thus limiting the effectiveness of such models as the key merit of such models is to allow for arbitrary feature function selection.\n\nDeep-structured graphical models take a different approach to improving inference performance by introducing intermediate state layers, where there is a dependency of each layer on its previous layer, and inference is carried out in a layer-by-layer manner from bottom to top. Prabhavalkar and Fosler-Lussier~\\cite{Prabhavalkar} and Peng et al.~\\cite{Peng} both introduced multi-layer conditional random field models where the local factors in linear-chain conditional random fields are replaced by multi-layer neural networks and trained via back-propagation. Ratajczak et al.~\\cite{Ratajczak} introduce a context-specific deep conditional random field model by replacing the local factors in linear-chain conditional random fields with sum-product networks. Yu et al.~\\cite{Yu1,Yu2} introduced a deep-structured conditional random field model which consists of multiple layers of simple CRFs where each layer's input consists of the previous layer's input and the resulting marginal probabilities. While such deep-structured graphical models are good at handling high observational uncertainties such as measurement noise and outliers by characterizing different information at the different layers, they only implicitly take advantage of long range relationships and are more limited in this aspect when compared to fully-connected graphical models.\n\nWhile fully-connected and deep-structured graphical models both have their own benefits and limitations, these two types of graphical models have been largely explored independently, leaving the unification of these two concepts ripe for exploration. Such a unified graphical model could yield significant benefits in improving state boundary preservation, better enable long-range state boundaries, and better handle high observational uncertainties such as measurement noise and outliers. A fundamental challenge with unifying these two types of graphical models is in dealing with computational complexity, as not only are all nodes fully-connected within a layer, there are also multiple layers to process due to the deep structure of the graphical model. In this study, we investigate the feasibility of unifying fully-connected graphical models and deep-structured models in a computationally tractable manner for the purpose of statistical inference. To accomplish this, we introduce a deep-structured fully-connected random field (DFRF) model which integrates a series of intermediate sparse auto-encoding layers placed between state layers to significantly reduce computational complexity while still maintaining the benefits of fully-connected and deep-structured graphical models.\n\nThis paper is organized as follows. First, the methodology behind the proposed DFRF model and structured inference using DFRF for image segmentation is described in Section~\\ref{methods}. The experimental setup for evaluating the performance of the proposed DFRF model for solving the image segmentation problem is described in Section~\\ref{setup}. 
The experimental results and discussion are presented in Section~\\ref{results}, and conclusions are drawn and future work discussed in Section~\\ref{conclusions}.\n\n\\section{Deep-structured fully-connected random fields}\n\\label{methods}\n\nFrom Bayes' rule~\\cite{Bishop}, the joint distribution of the observation $X$ and labels $Y$ is modeled as the product of the conditional probability of the labels given the observation, $P(Y|X)$, and the probability of the observation, $P(X)$:\n\\begin{align}\nP(Y,X) = P(Y|X) P(X).\n\\label{eq:joint-dist}\n\\end{align}\n\nThe goal of the proposed work is to incorporate fully connected interactions into the model so that it can preserve more information by taking advantage of long-range interactions. However, incorporating fully connected interactions imposes a high computational complexity on the model, which makes inference intractable. To address the issue of computational tractability, we introduce a sparse auto-encoding layer that describes the fully connected interactions among random variables more concisely with a smaller number of variables. The auto-encoding layer is made possible as a result of the sparsity inherent in the structure of many types of data, which are measured in a higher dimension than that needed to represent them. In essence, the auto-encoding layer represents the data as a smaller set of variables that describe the data in a more concise manner.\n\nThe interactions among variables are determined by extracting the interaction parameters from the auto-encoding layer variables instead of the variable $Y$. As a result, Eq.~\\eqref{eq:joint-dist} can be reformulated as\n\\begin{align}\nP(Y,X) = P(X) P(Y|X,A) P(X,A)\n\\label{eq:chain-rule}\n\\end{align}\n\\noindent where $A$ represents the auto-encoding layer, whose number of variables is much smaller than the number of output states. $P(X,A)$ characterizes the auto-encoding layer based on the observation and, based on the chain rule principle, is added to Eq.~\\eqref{eq:chain-rule}. The role of the auto-encoding layer is to incorporate the fully connected relationships among nodes into the model implicitly. The auto-encoding layer is constructed based on a specific number of variables, where the number of variables determines the fineness of structure in the data that can be characterized by the model (e.g., auto-encoding layers with fewer variables assume greater sparsity in the structure of the data and thus characterize structure in the data more coarsely than when a higher number of variables is used).\n\nTo fully utilize the different concise structure characterization properties of the auto-encoding layer, a deep structure is used during the modeling. This results in the deep-structured fully-connected random field (DFRF) as shown in Fig.~\\ref{fig:deepModel}. It is important to note that the configuration of the auto-encoding layers can go from fine to coarse structure characterization or from coarse to fine, depending on the specific application. The coarse-to-fine configuration is utilized in this study for the problem of image segmentation described in the next section.\n\n\n\\begin{figure}[tp]\n\t\\centering\n \\includegraphics[width=1\\linewidth]{fig1}\n\t\\caption{Deep-structured fully-connected random field model. The proposed framework is the combination of two different types of layers, auto-encoding layers ($A_i$) and label layers ($Y_i$). 
The layer $Y_0$ is provided by a finite mixture model (FMM) and serves as an initialization for layer $Y_1$. Each node of layer $A_i$ is connected to all nodes in label layer $Y_i$. More information is provided to the model by increasing the number of nodes in the auto-encoding layers from the bottom to the top of the model. }\n\t\\label{fig:deepModel}\n\\end{figure}\n\n\nTo represent the proposed model mathematically, the joint probability distribution of labels $Y$ and observation $X$ is formulated as a chain product of the conditional probability of labels given the observation, the auto-encoding variables, and the previous layer of labels, multiplied by the joint probability of observation and auto-encoding variables (see Eq.~(4), where $Y_i$ represents the $i$-th label layer, $A_i$ is the auto-encoding layer corresponding to layer $i$, and the number of layers is $n+1$).\n\n\\begin{figure*}[!t]\n\\normalsize\n\\setcounter{equation}{3}\n\\begin{align}\nP(Y,X) = P(Y_n,X) = &P(X) \\prod_{i=0}^n P(Y_i|X,A_i,Y_{i-1}) P(X,A_i)\\\\\n = &P(X) P(Y_0|X,A_0)P(X,A_0) \\prod_{i=1}^n P(Y_i|X,A_i,Y_{i-1}) P(X,A_i) \\nonumber\\\\\n = &P(X) P(Y_0|X) \\prod_{i=1}^n P(Y_i|X,A_i,Y_{i-1}) P(X,A_i) \\nonumber \\\\\n = & P(X) \\frac{P(Y_0,X)}{P(X)} \\prod_{i=1}^n P(Y_i|X,A_i,Y_{i-1}) P(X,A_i) \\nonumber\\\\\n = & P(Y_0,X) \\prod_{i=1}^n P(Y_i|X,A_i,Y_{i-1}) P(A_i,X) \\nonumber\n\\end{align}\n\\setcounter{equation}{4}\n\\hrulefill\n\\vspace*{4pt}\n\\end{figure*}\n\n\\noindent Although there are no intra-layer connections among variables, the inter-layer interaction is fully connected and, as such, the interaction parameters among random variables in the label layer are computed by use of the auto-encoding layer. Therefore, the two aforementioned probabilities together express a fully connected graphical model.\n\n\\subsection{Structured Inference using DFRF for Interactive Image Segmentation}\n\nWe use the DFRF for interactive image segmentation to illustrate its feasibility for structured inference in a computationally tractable manner. Interactive image segmentation is a type of binary classification in which each pixel in an image must be classified as foreground (object) or background based on a small set of user-annotated pixels, as illustrated in Fig.~\\ref{fig:interactiveSeg}.\n\n\\begin{figure*}[tp]\n\\begin{center}\n \\includegraphics[width=1\\linewidth]{fig2}\n\\caption{Example of interactive image segmentation. \\textbf{(a)} image with annotated seed regions (red: foreground and blue: background); \\textbf{(b)} ground truth; \\textbf{(c)} unary terms (GMM); \\textbf{(d)} FCRF~\\cite{Koltun}; and \\textbf{(e)} DFRF. }\n\\label{fig:interactiveSeg}\n\\end{center}\n\\end{figure*}\n\nA simple approach to tackle this problem is to learn a model based on the available training data, such as a Gaussian mixture model (GMM) or a non-parametric histogram model, and apply the trained model to the image. However, this simple approach does not take into account the structure of the data. As a result, a common approach to tackle this problem is to use the trained model as the unary potential in a pairwise Markov random field (MRF), where the MRF enforces structural consistency.\n\nHere, we utilize two finite mixture models (FMM) to model the background and foreground distributions and use them to define the first layer $Y_0$ in the deep-structured model:\n\\begin{align}\nP(Y_0,X) = P(Y_0|\\Lambda) \\text{\\; \\; s.t. 
\\; \\;} \\Lambda = \\{\\mu_i , \\sigma_i\\}\n\\label{eq:FMM}\n\\end{align}\n\\noindent where $\\Lambda$ is the set of trained mixture model parameters based on the user-annotated pixels. The results of layer $Y_0$ are propagated to the next layer (i.e., $Y_1$) by means of the auto-encoding layer $A_1$. Each auto-encoding layer $A_i$ is constructed by maximizing the joint probability $P(A_i,X)$. The role of the auto-encoding layer is to represent the structure of the image data in a concise manner using a smaller set of variables. Each auto-encoding layer characterizes the structural properties of the image data concisely at a certain fineness level, as specified by the number of nodes in that layer. The joint probability $P(A_i,X)$ is modeled by an FMM as well:\n\\begin{align}\nP(A_i,X) = P(A_i|\\Gamma) \\text{\\; \\; s.t. \\; \\;} \\Gamma = \\{\\mu_i , \\sigma_i\\}\n\\label{eq:auto_encoding}\n\\end{align}\n\\noindent where $\\Gamma$ represents the mixture model parameters. The number of parameters $|\\Gamma|$ is different for each auto-encoding layer based on the level of sparseness of the layer.\n\nEach node in the auto-encoding layer conveys the interactions of a random variable in the label layer with other random variables based on a specific image data structure. On the other hand, the interactions among random variables in a label layer $Y_i$ are expressed by the nodes in the lower adjacent auto-encoding layer $A_i$, which determine the weights; therefore, the random variables in label layer $Y_i$ are fully connected implicitly.\n\nThe state of each random variable in the label layer $Y_i$ is obtained by a conditional probability given the auto-encoding layer $A_i$, the observation $X$, and the previous label layer $Y_{i-1}$:\n\\begin{align}\nP(Y_i|X,A_i,Y_{i-1}) = \\frac{1}{Z} \\exp \\Big(- E(Y_i|A_i,Y_{i-1}; X)\\Big)\n\\label{eq:DFRF}\n\\end{align}\n\\noindent where the conditional probability is formulated as a Gibbs distribution~\\cite{Gibbs}, i.e., the exponential of the negative energy of the layer. $Z$ is the normalization constant and $E(\\cdot)$ is the energy of the layer. The interaction weight between two random variables is computed based on their connections with respect to the auto-encoding layer.\n\nEach random variable in the label layer $Y_i$ has two possible states, 0 and 1, corresponding to the background and foreground states. The energy in the layer $Y_i$ is minimized based on a Maximum A Posteriori (MAP) approach. The MAP framework tries to minimize the energy $E(\\cdot)$ of layer $Y_i$ based on the observation and the image data structural properties as characterized by the auto-encoding layer $A_i$. The computed state configuration of layer $Y_i$ is passed to the layer $Y_{i+1}$ after each optimization. 
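In other words, since the normalization constant $Z$ in \\eqref{eq:DFRF} does not depend on the state configuration, the per-layer MAP update can be written compactly as\n\\begin{align}\nY_i \\gets \\arg\\max_{Y_i} P(Y_i|X,A_i,Y_{i-1}) = \\arg\\min_{Y_i} E(Y_i|A_i,Y_{i-1};X) , \\nonumber\n\\end{align}\nwith $Y_{i-1}$ being the state configuration computed at the previous layer. 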
A step-by-step summary of image segmentation by DFRF is presented in Algorithm~\\ref{Alg1}.\n\n\\begin{algorithm}\n\\caption{Structured Inference using DRFR for Image Segmentation}\\label{euclid}\n\\begin{algorithmic}[1]\n\\Procedure{DFRF}{}\n\\State Set $n_{EV}$ \\text{(The number of encoder variables)} and $n$ \\text{(The number of layers excluding zeroth layer)}\n\\State $Y_0 \\gets \\arg\\max P(Y_0|\\Lambda)$\n\\State $i \\gets 1$ ~~~~ (The layer number ($i$))\n\\BState \\emph{loop}:\n\\State Find the auto-encoding layer $A_i($$n_{EV}$$),~Eq.~\\eqref{eq:auto_encoding} $\n\\State $Y_i \\gets \\arg\\max P(Y_i|X,A_i,Y_{i-1})~Eq.~\\eqref{eq:DFRF} $\n\\State $i \\gets i+1$\n\\State Increase $n_{EV}$\n\\State if i<=n then \\textbf{goto} \\emph{loop}\n\\BState \\emph{endloop}\n\\EndProcedure\n\\end{algorithmic}\n\\label{Alg1}\n\\end{algorithm}\n\n\\section{Experimental setup}\n\\label{setup}\n\nIn this study, we use natural images to study the performance of the DFRF model for interactive image segmentation. Natural images from the Weizmann Segmentation Evaluation Database~\\cite{Weizmann} and the CSSD Complex Scene Database~\\cite{CSSD} were used in this study. The Weizmann database consists of two different datasets both with manual segmentation ground truth: i) a single-object dataset consisting of 100 images, and ii) a two-objects dataset consisting of 100 images. The CSSD database consists of 200 images with manual ground truth. Furthermore, to study binary classification performance at different noise levels, each of the images in the two datasets from the Weizmann database as well as the dataset from the CSSD database were also contaminated by white Gaussian noise with standard deviations at 25\\% and 50\\% of the dynamic range of the image, resulting in a total of 1200 different image permutations used in the analysis. A small set of seed pixels in the foreground and background are provided by the authors (Fig.~\\ref{fig:interactiveSeg}(a)). All methods were compared based on the same annotated seed pixels.\n\nTo quantitatively evaluate segmentation performance, we compute the F$_1$-score as follows~\\cite{Weizmann}:\n\\begin{equation}\nf = \\frac{2 \\cdot TP}{2 \\cdot TP + FN + FP}\n\\end{equation}\n\\noindent where $TP$ denotes true positive pixels, $FN$ denotes false negative pixels, and $FP$ denotes false positive pixels.\n\nOur DFRF method is compared to the inference method for fully-connected CRFs proposed by ~\\cite{Koltun} (which we will refer to as FCRF) using the implementation provided by the authors~\\cite{Koltun}. FCRF is the state-of-the-art method for structured inference using fully-connected graphical models. This method was also chosen for comparison because it had been shown~\\cite{Koltun} that FCRF performs better than state-of-the-art approaches such as grid CRFs~\\cite{Shotton} and $P^n$~CRF~\\cite{pn-crf}. For a fair comparison, the same 5-component GMM model used for the DFRF is used as the unary potential for the FCRF approach. \n\n\\subsection{Implementation details}\n\nThe DFRF has the following three parameters: i) the number of layers, ii) the number of encoding nodes at each sparse encoding layer (i.e., $n_{EV}$ (see Algorithm~\\ref{Alg1})), and iii) the set of trained mixture models for layer $Y_0$ (i.e., $\\Lambda$ (see Eq.~\\eqref{eq:FMM})). 
For our interactive image segmentation problem, we use 15 layers, the number of encoding nodes at each sparse encoding layer is set to 450-660 nodes (increasing by $\\sim$15 nodes between each layer), and a 5-component Gaussian mixture model (GMM) is trained with the annotated samples and used for layer $Y_0$ in the DFRF. These parameters were found to provide strong classification performance based on comprehensive testing.\n\nThe DFRF is implemented in MATLAB (The MathWorks, Inc.) and can classify a 300 $\\times$ 200 colour image (a total of 900,000 state nodes for this configuration) in $\\sim$60s on an Intel(R) Core(TM) i5-3317U CPU at 1.70GHz with 4GB RAM. The FCRF was implemented in C++ by the authors of~\\cite{Koltun}, and can classify a 300 $\\times$ 200 colour image in $\\sim$0.48s.\n\n\\begin{figure*}[tp]\n\\begin{center}\n \\includegraphics[width=1\\textwidth]{fig3}\n\\caption{Example segmentation results for single-object dataset. \\textbf{(a,f,k,p)} image; \\textbf{(b,g,l,q)} ground truth; \\textbf{(c,h,m,r)} unary terms (GMM); \\textbf{(d,i,n,s)} FCRF~\\cite{Koltun}; and \\textbf{(e,j,o,t)} DFRF.}\n\\label{fig3}\n\\end{center}\n\\end{figure*}\n\n\\section{Experimental Results}\n\\label{results}\n\nThe F$_1$-scores achieved by the tested methods at the various noise levels for the Weizmann single-object dataset, the Weizmann two-objects dataset, and the CSSD dataset are shown in Table~\\ref{tab:F1-scoreOneObj}, Table~\\ref{tab:F1-scoreTwoObj}, and Table~\\ref{tab:F1-scoreCSSD}, respectively. It can be observed that the binary image segmentation results produced using the DFRF model for the noise-free scenarios are comparable to those of the state-of-the-art FCRF method for the Weizmann single-object case. For the Weizmann two-object case, we see that DFRF performs slightly better than FCRF by $\\sim$2\\% for the noise-free scenario. For the CSSD case, we see that FCRF performs slightly better than DFRF by $\\sim$1\\% for the noise-free scenario.\n\n\\begin{table}[tph]\n\t\\centering\n \\caption{F$_1$-Score for Weizmann single-object dataset.}\n \\begin{tabular}{l||cccc}\n \\hline\n ~ & \t\t\t\t FCRF~\\cite{Koltun} &\tDFRF \\\\ \\hline\n Noise-free & 0.8655& 0.8606 \\\\\n Noisy (25\\%) & 0.6586 & 0.6842\\\\\n Noisy (50\\%) & 0.4959 & 0.5554 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:F1-scoreOneObj}\n \\vspace{-0.3 cm}\n\\end{table}\n\n\n\\begin{table}[tph]\n\t\\centering\n \\caption{F$_1$-Score for Weizmann two-object dataset.}\n \\begin{tabular}{l||cccc}\n \\hline\n ~ & \t\t\t\t FCRF~\\cite{Koltun} &\tDFRF \\\\ \\hline\n Noise-free & 0.8397 & 0.8594 \\\\\n Noisy (25\\%) & 0.6718 & 0.7528 \\\\\n Noisy (50\\%) & 0.5131 & 0.6030 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:F1-scoreTwoObj}\n \\vspace{-0.3 cm}\n\\end{table}\n\n\\begin{table}[tph]\n\t\\centering\n \\caption{F$_1$-Score for CSSD dataset.}\n \\begin{tabular}{l||cccc}\n \\hline\n ~ & \t\t\t\tFCRF~\\cite{Koltun} &\tDFRF \\\\ \\hline\n Noise-free & 0.9558 & 0.9456 \\\\\n Noisy (25\\%) & 0.8161 & 0.8462 \\\\\n Noisy (50\\%) & 0.7122 & 0.7357 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:F1-scoreCSSD}\n \\vspace{-0.3 cm}\n\\end{table}\n\nExample segmentation results for the Weizmann single-object and two-object datasets are shown in Fig.~\\ref{fig3} and Fig.~\\ref{fig4}, respectively. DFRF and FCRF preserve image structure much better than the baseline GMM method (used as the unary term), which has no structural cues. The lack of structural cues results in a noise-like appearance in the segmentation, as seen in Fig.~\\ref{fig4}(c) and (r). 
Furthermore, the fully connected random field model allows both DFRF and FCRF to capture elongated and thin object boundaries (see the metal posts in Fig.~\\ref{fig3}j, wooden fence post in Fig.~\\ref{fig3}t, and the light post in Fig.~\\ref{fig4}e).\n\n\\begin{figure*}[tp]\n\\begin{center}\n \\includegraphics[width=1\\textwidth]{fig4}\n\\caption{Example segmentation results for two-objects dataset. \\textbf{(a,f,k,p)} image; \\textbf{(b,g,l,q)} ground truth; \\textbf{(c,h,m,r)} unary terms (GMM); \\textbf{(d,i,n,s)} FCRF~\\cite{Koltun}; and \\textbf{(e,j,o,t)} DFRF. }\n\\label{fig4}\n\\end{center}\n\\end{figure*}\n\nUnlike FCRF, the deep-structure of our DFRF method allows us to handle slight variations in the observation that is not fully modeled by the small set of seeds provided by the user. For example, the sky and water in Fig.~\\ref{fig4} have slight variation in illumination and texture which are not captured by the user annotation. As a result, the FCRF method starts misclassifying regions of the sky and water as foreground. However, our DFRF method is better able to handle these variations and correctly classify the entire sky and water as background.\n\nDFRF's ability to handle variations in observation, due to its deep structure, can clearly be seen when we add noise to the image. Quantitatively, from Table~\\ref{tab:F1-scoreOneObj}, Table~\\ref{tab:F1-scoreTwoObj}, and Table~\\ref{tab:F1-scoreCSSD}, we can see that DFRF clearly outperforms FCRF under the presence of noise for all datasets. Visually, from Fig.~\\ref{fig5}, we can see that under the presence of noise FCRF starts to degrade and fails to maintain structural cues. On the other hand our DFRF method is able to handle the uncertainty in the observation and can better segment the image, even under presence of strong noise.\n\n\\begin{figure*}\n\\begin{center}\n \\includegraphics[width=1\\textwidth]{fig5}\n\\caption{Example segmentation results for noise-contaminated scenarios (noise level is indicated on the left side). \\textbf{(a,g,m,s,y)} image; \\textbf{(b,h,n,t,z)} noisy image; \\textbf{(c,i,o,u,I)} ground truth; \\textbf{(d,j,p,v,II)} unary terms (GMM); \\textbf{(e,k,q,w,III)} FCRF~\\cite{Koltun}; and \\textbf{(f,l,r,x,IV)} DFRF. }\n\\label{fig5}\n\\end{center}\n\\end{figure*}\n\n\\section{Conclusion}\n\\label{conclusions}\n\nIn this study, the feasibility of unifying fully-connected and deep-structured models in a computationally tractable manner for the purpose of structured inference was investigated through the introduction of a deep-structured fully-connected random field (DFRF) model with sparse auto-encoding layers. By incorporating intermediate sparse auto-encoding layers between state layers to condense node-to-node interactions, we were able to significantly reduce the computational complexity of the inference process. A quantitative performance analysis of the DFRF model for the problem of interactive image segmentation was performed to illustrate the feasibility of using the DFRF for structured inference in a computationally tractable manner. 
Results in this study show that it is feasible to unify fully-connected and deep-structured models in a computationally tractable manner for solving structured inference problems such as image segmentation.\n\nGiven the promising results, we aim in the future to investigate alternative auto-encoding approaches to better condense node-to-node interactions, as well as strategies for automatically determining the number of auto-encoding nodes to use for each auto-encoding layer. Furthermore, we aim in the future to explore the efficacy of the DFRF for solving other types of large-scale, vision-domain structured inference problems such as image reconstruction~\\cite{recon1,recon2,recon3,recon4}, image decomposition and representation~\\cite{imagerep1,imagerep2,imagerep3,imagerep4}, image restoration~\\cite{restore1,restore2,restore3,restore4}, and saliency detection~\\cite{saliency1,saliency2,saliency3}.\n\n\\section{Acknowledgment}\n\nThis work was supported by the Natural Sciences and Engineering Research Council of Canada, Canada Research Chairs Program, and the Ontario Ministry of Research and Innovation.\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe role and properties of the energy density are fundamental in quantum field theory. On curved backgrounds the energy density links to the geometry of the spacetime via the Einstein field equations. In Minkowski space, it represents a local observable of particular physical significance, while in 2d conformal field theory the algebra of the stress-energy tensor is completely fixed up to a scalar (central charge), and the existence of this large symmetry constrains significantly the structure of the theory.\n\nIn typical models of \\emph{classical} field theory the energy density is positive, e.g., in electrodynamics. This has interesting consequences in general relativity, where it implies certain restrictions on the geometry of spacetime, excluding ``exotic'' configurations such as wormholes, warp drives and time machines. (See, e.g., the Penrose and Hawking singularity theorems\\cite{HawPen1970}, positive mass theorems and Hawking's chronology protection results\\cite{Hawking:1992}.)\n\nIn quantum field theory, even in flat spacetime (not only on curved background) the energy density can be negative, while the Hamiltonian $H$ is still nonnegative. However, one expects that the smeared energy density, $T^{00}(g^2) = \\int g^2(t) T^{00}(t,0)$, with some fixed smooth real-valued test function $g$, cannot be indefinitely negative:\\cite{Ford:1991} Certain lower bounds hold, the so called \\emph{quantum energy inequalities} (QEIs). A \\emph{state-independent} QEI takes the form\n\\begin{equation}\\label{stinqei}\n\\langle \\varphi, T^{00}(g^2) \\varphi \\rangle \\geq -c_g \\|\\varphi \\|^2\n\\end{equation}\nfor all (suitably regular) state vectors $\\varphi$, where $c_g > 0$ is a constant depending on the test function $g$. 
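Equivalently (for the self-adjoint operator associated with this quadratic form), \\eqref{stinqei} is a lower bound on the spectrum of the smeared energy density,\n\\begin{equation*}\n\\inf \\mathrm{spec}\\, T^{00}(g^2) \\geq -c_g ,\n\\end{equation*}\nand it is this lowest spectral value that we will later estimate numerically at the one-particle level.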
\n\nWe note that in some theories, e.g., the non-minimally coupled scalar field in a curved spacetime, only a weaker form of this inequality can hold, a so called \\emph{state-dependent} QEI, where the right hand side of Eq.~\\eqref{stinqei} depends on the total energy of the state $\\varphi$.\n\nState-independent QEIs have been proved for the linear scalar field, linear Dirac field, linear vector field (both on flat and curved spacetime), the Rarita-Schwinger field, and for 1+1d conformal fields (see Ref.~\\citenum{Fewster:lecturenotes} for a review). State-dependent QEIs were established in a model-independent setting\\cite{BostelmannFewster:2009} for certain ``classically positive'' expressions, but their relation to the energy density is unclear. \n\nOnly recently QEIs have been obtained in the case of \\emph{self-interacting} quantum field theories, the first example being the massive Ising model\\cite{BostelmannCadamuroFewster:ising}. This model is an example of a specific class of self-interacting theories on 1+1 dimensional Minkowski space, so called \\emph{quantum integrable models}\\cite{BabujianFoersterKarowski:2006}. Other examples include the sinh-Gordon, the sine-Gordon and the nonlinear $O(N)$-invariant $\\sigma$-models.\n\nThere has been recent interest into these models from the side of rigorous quantum field theory. In particular, a large class of these theories were constructed from a prescribed factorizing S-matrix using operator-algebraic techniques\\cite{Lechner:2008}. They describe interacting relativistic particles and are characterized by infinitely many conserved currents, implying that the particle number is preserved during the scattering process and that the full scattering matrix is completely determined by the two-particle scattering function, hence called \\emph{factorizing}. \n\nHere we consider such models with one species of massive scalar bosons and without bound states. In this large class, we are interested in the stress-energy tensor $T^{\\alpha \\beta}$ evaluated in \\emph{one-particle states}. At that level, we investigate the existence of (state-independent) QEIs, but also the uniqueness of $T^{\\alpha \\beta}$ itself, the existence of states with negative energy density, and the lowest eigenvalue in the spectrum of $T^{00}(g^2)$.\n\n\n\n\\section{Stress-energy tensor in one-particle states}\n\nIn models derived from a classical Lagrangian, such as the sinh-Gordon model (see Ref.~\\citenum{FringMussardoSimonetti:1993}), a candidate for the energy density can be computed directly from the Lagrangian. However, there are examples of integrable models which are not associated with a Lagrangian (e.g. the generalized sinh-Gordon model in Table 1 of Ref.~\\citenum{BostelmannCadamuro:oneparticle}). \n\nIt is therefore useful to obtain an intrinsic characterization of the stress energy tensor $T^{\\alpha \\beta}$ at the \\emph{one-particle level} starting from ``first principles'', namely from the generic properties of this operator. We write $T^{\\alpha \\beta}$ at one-particle level as an integral kernel operator:\n\\begin{equation}\n\\langle \\varphi, T^{\\alpha \\beta}(g^2) \\psi \\rangle = \\int \\overline{\\varphi(\\theta)}F^{\\alpha \\beta}(\\theta,\\eta)\\psi(\\eta),\n\\end{equation}\nwhere $\\varphi, \\psi$ are vectors in the single-particle space $L^2(\\mathbb{R})$. \nThere are various restrictions on the form of the kernel $F^{\\alpha \\beta}$. 
The fact that the energy density is a \\emph{local} observable implies that $F^{\\alpha \\beta}$ fulfills certain analyticity, symmetry and boundedness properties\\cite{BostelmannCadamuro:characterization}. Additional conditions come from the specific properties of the stress-energy tensor, namely tensor symmetry, covariance under Poincar\\'e transformations and spacetime reflections, the continuity equation ($\\partial_{\\alpha}T^{\\alpha \\beta}=0$), and the fact that the $(0,0)$-component of the tensor integrates to the Hamiltonian ($\\int dx\\; T^{00}(t,x) = H$).\n\nStarting from these general properties, we can determine $T^{\\alpha \\beta}$ in one-particle matrix elements up to a certain polynomial factor. More specifically, we find (see Proposition~3.1 of Ref.~\\citenum{BostelmannCadamuro:oneparticle}) that functions $F^{\\alpha \\beta}$ are compatible with the requirements above if, and only if, there exists a real polynomial $P$ with $P(1)=1$ such that\n\\begin{equation}\nF^{\\alpha \\beta}(\\theta,\\eta) = F^{\\alpha \\beta}_{\\text{free}}(\\theta, \\eta)\n\\underbrace{P(\\cosh(\\theta -\\eta))F_{\\text{min}}(\\theta -\\eta +i\\pi)}_{=:F_P(\\theta-\\eta)}\n\\widetilde{g^2} (\\mu\\cosh\\theta-\\mu\\cosh\\eta),\n\\end{equation}\nwhere $\\mu>0$ is the mass of the particle and \n\\begin{equation}\nF^{\\alpha \\beta}_{\\text{free}}(\\theta,\\eta) = \\frac{\\mu^2}{2\\pi}\n\\left( \\begin{array}{ccc}\n\\cosh^2\\big( \\frac{\\theta +\\eta}{2}\\big) & \\frac{1}{2}\\sinh (\\theta +\\eta ) \\\\\n\\frac{1}{2}\\sinh(\\theta +\\eta) & \\sinh^2 \\big( \\frac{\\theta +\\eta}{2} \\big) \\end{array} \\right).\n\\end{equation}\nHere $F^{\\alpha \\beta}_{\\text{free}}$ is the well-known expression of the ``canonical'' stress-energy tensor of the free Bose field, and $F_{\\text{min}}$ is the so called \\emph{minimal solution} of the model\\cite{KarowskiWeisz:1978}, which is unique for a given scattering function. For example, $F_{\\text{min}}(\\zeta)=1$ for free fields and $F_{\\text{min}}(\\zeta)=-i\\sinh \\frac{\\zeta}{2}$ in the Ising model; for the sinh-Gordon model, $F_{\\text{min}}$ is given as an integral expression\\cite{FringMussardoSimonetti:1993}.\n\n\\section{Negative energy density and QEIs in one-particle states }\n\n\nFirst let us investigate whether there are single-particle states with negative energy density.\n\nIn the case of free fields (with $P=1$), it is known that there are no such states: One has to allow for superpositions, for example, of a zero- and a two-particle vectors to obtain negative energy densities. \n\nHowever, the introduction of interaction changes this situation drastically. Specifically, if there is a $\\theta_P \\in \\mathbb{R}$ such that $|F_P(\\theta_P)|>1$, then there exists a one-particle state $\\varphi \\in L^2(\\mathbb{R})$ and a real-valued Schwartz function $g$ such that $\\langle \\varphi, T^{00}(g^2) \\varphi \\rangle < 0$ (see Prop. 4.1 in Ref.~\\citenum{BostelmannCadamuro:oneparticle}.)\nThis applies, in particular, to the Ising and sinh-Gordon models.\n\nUnder stronger assumptions on the function $F_P$ we can actually prove a \\emph{no-go} theorem on existence of QEIs at one-particle level (see Ref.~\\citenum{BostelmannCadamuro:oneparticle}, Proposition~4.2):\n\\begin{theorem}\\label{nogo}\nSuppose there exist $\\theta_0 \\geq 0$ and $c > \\frac{1}{2}$ such that\n\\begin{equation}\n\\forall \\theta \\geq \\theta_0: \\quad F_P(\\theta) \\geq c \\cosh\\theta.\n\\end{equation}\nLet $g$ be real-valued and of Schwartz class, $g \\not\\equiv 0$. 
Then there exists a sequence $(\\varphi_j)_{j \\in \\mathbb{N}}$ in $\\mathcal{D}(\\mathbb{R})$, $\\| \\varphi_j \\| =1$, such that\n\\begin{equation}\n\\langle \\varphi_j, T^{00}(g^2) \\varphi_j \\rangle \\rightarrow -\\infty \\quad \\text{as } j \\rightarrow \\infty.\n\\end{equation}\n\\end{theorem}\nHere $\\mathcal{D}(\\mathbb{R})$ denotes the space of $\\mathcal{C}^\\infty$-functions with compact support. This theorem says that if the function $F_P$ grows ``too fast'', then the operator $T^{00}(g^2)$ (at one-particle level) cannot be bounded below. Only under certain upper bounds on $F_P$ can we establish one-particle state-independent QEIs (Thm.~5.1 of Ref.~\\citenum{BostelmannCadamuro:oneparticle}):\n\\begin{theorem}\\label{theoqei}\nSuppose there exist $\\theta_0 \\geq 0$, $\\lambda_0 >0$, and $0 \\leq c < \\frac{1}{2}$ such that\n\\begin{equation}\\label{boundfp}\n\\forall \\theta \\geq \\theta_0: \\quad |F_P(\\theta)| \\leq \\lambda_0 + c \\cosh\\theta .\n\\end{equation}\nThen, for every real-valued Schwartz function $g$, there exists a constant $c_g>0$ such that\n\\begin{equation}\\label{oneqei}\n\\forall \\varphi \\in \\mathcal{D}(\\mathbb{R}): \\quad \\langle \\varphi, T^{00}(g^2) \\varphi \\rangle \\geq - c_g \\| \\varphi \\|^2.\n\\end{equation}\nThe constant $c_g$ depends on $g$ (and on $F_P$, hence on $P$ and $S$) but not on $\\varphi$.\n\\end{theorem}\nTo motivate the above condition on the growth property of $F_P$, let us consider the expectation value of $T^{00}(g^2)$ in a one-particle state $\\varphi$,\n\\begin{equation}\\label{expectation}\n\\langle \\varphi, T^{00}(g^2) \\varphi \\rangle = \\frac{\\mu^2}{2\\pi} \\int d\\theta d\\eta\\; \\cosh^2\\frac{\\theta +\\eta}{2} F_P(\\theta -\\eta)\\widetilde{g^2}(\\omega(\\theta) -\\omega(\\eta)) \\overline{\\varphi(\\theta)}\\varphi(\\eta),\n\\end{equation}\nwhere $\\omega(\\theta) := \\mu \\cosh \\theta$. \n\nTo establish an inequality of the form Eq.~\\eqref{oneqei}, the rough idea goes as follows. The relevant contributions to the integral are in the regions $\\theta \\approx \\eta$ and $\\theta \\approx -\\eta$; outside these regions, the factor $\\widetilde{g^2}(\\omega(\\theta) -\\omega(\\eta))$ is strongly damping. At $\\theta \\approx \\eta$, the factor $F_P$ is nearly constant, and hence the integral is close to the free-field expression, which is known to be positive when the smearing function is of the form $g^2$. At $\\theta \\approx -\\eta$, the factor $F_P$ may grow but the factor $\\cosh \\frac{\\theta +\\eta}{2}$ is nearly constant. If now $F_{\\text{min}}$ grows less than $\\frac{1}{2}\\cosh\\theta$, then the second mentioned part of the integral is negligible against the first mentioned one and the whole expression is positive, up to some bounded part. \n\nIn other words, at one-particle level the existence or non-existence of QEIs depends on the asymptotic behaviour of the function $F_P(\\theta) = P(\\cosh\\theta)F_{\\text{min}}(\\theta +i\\pi)$. If $F_P(\\theta) \\lesssim \\frac{1}{2}\\cosh\\theta$, then a state-independent one-particle QEI holds. 
On the other hand, if $F_P(\\theta) \\gtrsim \\frac{1}{2}\\cosh\\theta$, then no such QEI holds (see Theorem~\\ref{nogo}).\n\nIn specific models, the bound \\eqref{boundfp} is fulfilled for at least some choices of $P$: In the free and sinh-Gordon models, the function $F_{\\text{min}}$ converges to a constant for large $\\theta$, thus a QEI can hold only if $\\text{deg }P=0,1$.\n\nIn the Ising model, where the function $F_{\\text{min}}(\\theta +i\\pi)$ grows like $\\cosh \\frac{\\theta}{2}$ at large values of $\\theta$, a QEI holds if and only if $P \\equiv 1$.\n\nIn other words, for some $S$ the existence of QEIs fixes the energy density uniquely at one-particle level; whereas, in the sinh-Gordon model, and even in the free Bose field, we are left with the choice\n\\begin{equation}\nP(x)=(1-\\alpha) +\\alpha x \\quad \\text{with } \\alpha \\in \\mathbb{R},\\; |\\alpha|< \\frac{1}{2F_{\\text{min}}(\\infty +i\\pi)},\n\\end{equation}\nwhere $F_{\\text{min}}(\\infty +i\\pi) := \\lim_{\\theta \\rightarrow \\infty}F_{\\text{min}}(\\theta +i\\pi)$. Therefore, in this second class of models the existence of a QEI strongly restricts the form of the energy density without, however, fixing it uniquely.\n\n\\section{Numerical results}\n\nWhile we now have upper and lower estimates for the lowest eigenvalue in the spectrum of $T^{00}(g^2)$ on the one-particle level, it seems unrealistic to find the actual lowest eigenvalue in this way to any reasonable precision. An explicit result can be obtained only numerically. Here we sketch some results of Ref.~\\citenum{BostelmannCadamuro:oneparticle}.\n\nFor the sake of concreteness, we fix the smearing function $g$ to be a Gaussian,\n\\begin{equation}\ng(t) = \\pi^{-1\/4}\\sqrt{\\frac{\\mu}{2\\sigma}} \\exp\\Big( -\\frac{(\\mu t)^2}{8\\sigma^2}\\Big),\n\\end{equation}\nwhere $\\sigma>0$ is a dimensionless parameter. \n\nFor the numerical treatment, we restrict the one-particle wavefunctions of the matrix elements of $T^{00}(g^2)$ to the Hilbert space $L^2([-R,R],d\\theta)$ rather than $L^2(\\mathbb{R},d\\theta)$, that is, we introduce a ``rapidity cutoff''. This serves to make the kernel yield a bounded operator. We then discretize the operator by dividing the interval $[-R,R]$ into $N$ subintervals and using an orthonormal system of step functions $\\phi_j$ supported on these intervals. We are then left with a matrix\n\\(\nM_{jk}= \\langle \\phi_j, T^{00}(g^2) \\phi_k \\rangle\n\\)\nfor which eigenvalues and eigenvectors can be found by standard numerical methods, such as the implicit QL algorithm (a minimal illustrative sketch of this procedure is given below).\n\nThe eigenvector corresponding to the lowest eigenvalue for the sinh-Gordon model is shown in \\fref{fig:sinh}(a). We can then analyze how the lowest eigenvalue (i.e., the best constant $c_g$ in the QEI) depends on the interaction. \\fref{fig:sinh}(b) shows the dependency on the coupling constant $B$ in the sinh-Gordon model. As expected, as the coupling is taken to 0, we reach the limit 0 for the lowest eigenvalue (as in free field theory). 
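To make the discretization just described concrete, the following minimal sketch (in Python, purely for illustration and not the implementation used for the results shown here) assembles the matrix $M_{jk}$ from the kernel of Eq.~\\eqref{expectation} and computes its lowest eigenvalue. It assumes the Ising-model case $F_P(\\theta)=\\cosh(\\theta\/2)$ (i.e., $P \\equiv 1$), takes the parameter values from the caption of \\fref{fig:sinh}, and uses a placeholder Gaussian profile for $\\widetilde{g^2}$, whose exact normalization depends on conventions not spelled out here.\n\\begin{verbatim}\nimport numpy as np\n\nmu, R, N = 1.0, 7.0, 500     # mass, rapidity cutoff, number of subintervals\nsigma = 0.1                  # width parameter of the Gaussian smearing function\ndtheta = 2.0 * R \/ N\ntheta = -R + (np.arange(N) + 0.5) * dtheta   # midpoints of the subintervals\n\ndef F_P(t):\n    # Ising model with P = 1: F_min(theta + i*pi) = cosh(theta\/2)\n    return np.cosh(t \/ 2.0)\n\ndef ft_g2(nu):\n    # placeholder for the Fourier transform of g^2 at nu = omega(theta)-omega(eta);\n    # only the qualitative Gaussian decay matters for this illustration\n    return np.exp(-(sigma * nu \/ mu) ** 2)\n\nT, E = np.meshgrid(theta, theta, indexing='ij')\nkernel = (mu ** 2 \/ (2 * np.pi)) * np.cosh((T + E) \/ 2) ** 2\nkernel = kernel * F_P(T - E) * ft_g2(mu * np.cosh(T) - mu * np.cosh(E))\nM = 0.5 * (kernel + kernel.T) * dtheta   # matrix in the step-function basis\nw, v = np.linalg.eigh(M)                 # dense symmetric eigensolver\nprint('lowest eigenvalue:', w[0])\n\\end{verbatim}\nFor the sinh-Gordon model one would instead insert $F_P(\\theta)=P(\\cosh\\theta)F_{\\text{min}}(\\theta+i\\pi)$ with the integral representation of $F_{\\text{min}}$ and the correctly normalized $\\widetilde{g^2}$; the eigenvector belonging to the lowest eigenvalue is then the discrete analogue of the wavefunction shown in \\fref{fig:sinh}(a).\n\n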
Moreover, we note that the lowest possible negative energy density is reached when the coupling is maximal $(B=1)$, which fits with the picture that negative energy density in one-particle states is an effect of self-interaction in the quantum field theory.\n\n\n\n\\def\\figsubcap#1{\\par\\noindent\\centering\\footnotesize(#1)}\n\\begin{figure}[th]%\n\\begin{center}\n \\parbox{2.1in}{\\includegraphics[width=2in]{Eigenvector-Sinhgordon}\\figsubcap{a}}\n \\hspace*{4pt}\n \\parbox{2.1in}{\\includegraphics[width=2in]{SinhCoupling-SinhGordon}\\figsubcap{b}}\n \\caption{Energy density $T^{00}(g^2)$ in the sinh-Gordon model at one-particle level. (a) Lowest eigenvector for $B=1$. (b) Lowest eigenvalue for coupling $0 < B< 2$. Parameters: $N=500$, $R=10$ (a) and $R=7$ (b), $\\sigma=0.1$, $P=1$.}%\n \\label{fig:sinh}\n\\end{center}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nWe have investigated the properties of the energy density at one-particle level in a large class of quantum integrable models, which include the sinh-Gordon and Ising models. In particular, we have determined the stress-energy tensor at one-particle level up to a certain polynomial factor, and studied the existence of state-independent QEIs. Moreover, we have seen that demanding the existence of QEIs can, in some cases, fix this choice uniquely.\n\nAn investigation of analogous results at higher particle numbers is a challenging open question (except for the Ising model, where results are available in Ref.~\\citenum{BostelmannCadamuroFewster:ising}) since higher form factors $F_{n}$ have poles on the integration path, and the integral operator $T^{00}(g^2)$ is hard to estimate (singular integral kernels). Numerical evidence, however, suggests that a term like \\eqref{expectation} is the dominating contribution also at the two-particle level.\n\nApart from scalar models, the existence of QEIs could also be investigated in models with more than one particle species (e.g., the nonlinear $O(N)$-invariant $\\sigma$-models), or in integrable models with bound states (the $Z(N)$-Ising and sine-Gordon models), and it would be highly desirable to extend this study to a curved background. This would be a possible path towards a model-independent understanding of the energy density and QEIs.\n\n\n\n\\bibliographystyle{ws-procs975x65}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Proofs}\n\\label{sec:proofs}\n\nThis section details the proofs of our results.\n\n\\subsection{Preparatory lemma}\n\n\nBy standard arguments of convex analysis, the following lemma gives the first-order sufficient and necessary optimality condition of a minimizer of \\eqref{eq-group-lasso}.\n\n\\begin{lem}\\label{lem:first-order}\n A vector $\\xxs(y) \\in \\RR^p$ is a minimizer of \\eqref{eq-group-lasso} if, and only if, \n \\eq{\n \t- \\Fx(\\xxs(y),y) \\in \\partial \\J(\\xxs(y)).\n }\n If $\\J$ is partly smooth at $\\xxs(y)$ relative to $\\Mm$, then\n \\begin{equation*}\n -\\FxM(\\xxs(y),y) = \\JxM(\\xxs(y)) = \\e{\\xxs(y)} .\n \\end{equation*}\n\\end{lem}\n\n\\begin{proof}\nThe first monotone inclusion is just the first-order necessary and sufficient minimality condition for our convex program.\nThe second claim follows from \\eqref{eq:covgrad} and Fact~\\ref{fact:gradhess}. \\qed\n\\end{proof}\n\n\n\n\\input{sections\/proofs-local}\n\\input{sections\/proofs-dof}\n\n\n\\subsection{Proof of Lemma~\\ref{lem:injectivity-cond}}\nThe equivalence is a consequence of simple arguments from linear algebra. 
Indeed, when both $\\Fxx(\\xx,y)$ and $\\Q(\\xx)$ are positive semidefinite on $\\T$, we have $\\dotp{\\Fxx(\\xx,y)\\xi}{\\xi} \\geq 0$ and $\\dotp{\\Q(\\xx)\\xi}{\\xi} \\geq 0$, $\\forall ~ \\xi \\in \\T$. Thus, for \\eqref{eq-injectivity-cond} to hold, it is necessary and sufficient that $\\nexists ~ 0 \\neq \\xi \\in \\T$ such that $\\xi \\in \\Ker( \\Fxx(\\xx,y) )$ and $\\xi \\in \\Ker( \\Q(\\xx) )$, which is exactly what we state. \n\nWhen $\\Mm=\\xx+\\T$, the Riemannian Hessians $\\Fxx(\\xx,y)$ and $\\Q(\\xx)$ are given by \\eqref{eq:covhessFlin} and \\eqref{eq:covhesslin}. Convexity and smoothness of $\\F(\\cdot,y)$ combined with \\eqref{eq:covhessFlin} imply that $\\Fxx(\\xx,y)$ is positive semidefinite. Moreover, convexity and partial smoothness of $\\J$ also yield that $\\Q(\\xx)$ is positive semidefinite, see~\\citep[Lemma~4.6]{LiangDR15}. \\qed\n\n\n\\subsection{Proof of Theorem~\\ref{thm-local}}\n\\label{sub:local}\n\nLet $y \\not\\in \\Hh$. To lighten the notation, we will drop the dependence of $\\xxs$ on $y$, where $\\xxs$ is a solution of \\lasso such that $(\\CondInj{\\xxs}{y})$ holds. \n\nConsider the constrained problem on $\\Mm$\n\\eql{\\label{eq:restricted}\\tag{$\\regulP{y}_\\Mm$}\n \\umin{\\xx \\in \\Mm}\n \\F( \\xx, y) + \\J(\\xx) .\n}\nWe define the notion of strong local minimizers, which will play a pivotal role in our proof.\n\\begin{defn}\n\\label{def:strongcrit}\nA point $\\xxs$ is a strong local minimizer of a function $f: \\Mm \\to \\RR \\cup \\ens{+\\infty}$ if $f$ grows at least quadratically locally around $\\xxs$ on $\\Mm$, i.e. $\\exists \\delta > 0$ such that $f(\\xx) \\geq f(\\xxs) + \\delta\\norm{\\xx-\\xxs}^2$, $\\forall \\xx \\in \\Mm$ near $\\xxs$.\n\\end{defn}\n\nThe following lemma gives an equivalent characterization of strong local minimizers that will be more convenient in our context.\n\\begin{lem}\n\\label{lem:strongcrit}\nLet $f \\in \\Cdeux(\\Mm)$. A point $\\xxs$ is a strong local minimizer of $f$ if, and only if, it is a critical point of $f$, i.e. $\\grad_\\Mm f(\\xxs)=0$, and satisfies the restricted positive definiteness condition\n\\[\n\\dotp{\\hess_\\Mm f(\\xxs)\\xi}{\\xi} > 0 \\quad \\forall ~ 0 \\neq \\xi \\in \\Tgt_{\\xxs}(\\Mm) .\n\\]\n\\end{lem}\n\n\\begin{proof}[of Lemma~\\ref{lem:strongcrit}]\nThe proof follows by combining the discussion after \\citep[Definition~5.4]{Lewis-PartlySmooth} and \\cite[Theorem~3.4]{miller2005newton}. \\qed\n\\end{proof}\n\nWe now define the following mapping\n\\begin{equation*}\n \\Gamma : (\\xx,y) \\in \\Mm \\times \\RR^n \\mapsto \\FxM( \\xx,y ) + \\JxM(\\xx).\n\\end{equation*}\n\n\nWe split the proof of the theorem into three steps.\nWe first show that there exists a continuously differentiable mapping $\\bar y \\mapsto \\solmB \\in \\Mm$ and an open neighborhood $\\neighb_y$ of $y$ such that every element $\\bar y$ of $\\neighb_y$ satisfies $\\Gamma(\\solm(\\bar y), \\bar y) = 0$. Then, we prove that $\\solmB$ is a solution of ($\\regulP{\\bar y}$) for any $\\bar y \\in \\neighb_y$. Finally, we obtain~\\eqref{eq-differential} from the implicit function theorem.\n\n\n\\paragraph{Step~1: construction of $\\solmB$.}\n\nUsing assumption \\eqref{hyp-f-reg}, the sum and smooth perturbation calculus rules of partial smoothness \\citep[Corollary~4.6 and Corollary~4.7]{Lewis-PartlySmooth} entail that the function $(\\xx,y) \\mapsto \\F(\\xx,y) + \\J(\\xx)$ is partly smooth at $(\\xxs,y)$ relative to $\\Mm \\times \\RR^m$, which is a $\\Cdeux$-manifold of $\\RR^p \\times \\RR^m$. 
Moreover, it is easy to see that $\\Mm \\times \\RR^m$ satisfies the transversality condition of \\citep[Assumption~5.1]{Lewis-PartlySmooth}. By assumption $(\\CondInj{\\xxs}{y})$, $\\xxs$ is also a strong global minimizer of \\eqref{eq:restricted}, which implies in particular that $\\Gamma(\\xxs,y) = 0$; see Lemma~\\ref{lem:strongcrit}. It then follows from \\citep[Theorem~5.5]{Lewis-PartlySmooth} that there exist open neighborhoods $\\widetilde \\neighb_y$ of $y$ and $\\widetilde \\neighb_\\xxs$ of $\\xxs$ and a continuously differentiable mapping $\\solm : \\widetilde \\neighb_y \\to \\Mm \\cap \\widetilde \\neighb_\\xxs$ such that $\\solm(y) = \\xxs$, and $\\forall \\bar y \\in \\widetilde \\neighb_y$, $(\\regulP{\\bar y}_\\Mm)$ has a {\\emph{unique}} strong local minimizer, i.e.\n\\begin{align*}\n \\Gamma(\\solm(\\bar y),\\bar y) = 0\n \\qandq \\text{$(\\CondInj{\\solm(\\bar y)}{\\bar y})$ holds} ,\n\\end{align*}\nwhere we also used local normal sharpness property from partial smoothness of $\\J$; see Fact~\\ref{fact:sharp}.\n\n\n\\paragraph{Step~2: $\\solmB$ is a solution of $(\\regulP{\\bar y})$.}\nWe now have to check the first-order optimality condition of ($\\regulP{\\bar y}$), i.e. that $- \\Fx(\\solm(\\bar y),\\bar y) \\in \\partial J(\\solm(\\bar y))$; see Lemma~\\ref{lem:first-order}. We distinguish two cases.\n\n\\begin{enumerate}[label=$\\bullet$]\n\\item Assume that $-\\Fx(\\xxs,y) \\in \\ri \\partial J(\\xxs)$. The result then follows from \\citep[Theorem~5.7(ii)]{Lewis-PartlySmooth} which, moreover, allows to assert in this case that $-\\Fx(\\solm(\\bar y),\\bar y) \\in \\ri \\partial \\J(\\solm(\\bar y))$.\n\n\\item We now turn to the case where $-\\Fx(\\xxs,y) \\in \\rbd \\partial J(\\xxs)$.\nObserve that $(y, \\xxs) \\in \\Aa_{\\Mm}$. In particular $y \\in \\Pi_{n+p,n}(\\Aa_{\\Mm})$.\nSince by assumption $y \\not\\in \\Hh$, one has $y \\not\\in \\bd(\\Pi_{n+p,n}(\\Aa_{\\Mm}))$.\nHence, there exists an open ball $\\mathbb{B}(y, \\epsilon)$ for some $\\epsilon > 0$ such that $\\mathbb{B}(y, \\epsilon) \\subset \\Pi_{n+p,n}(\\Aa_{\\Mm})$.\nThus for every $\\bar y \\in \\mathbb{B}(y, \\epsilon)$, there exists $\\bar \\xx \\in \\Mc$ such that\n\\begin{equation*}\n - \\Fx(\\bar \\xx, \\bar y) \\in \\rbd \\partial \\J(\\bar \\xx) .\n\\end{equation*}\nSince $\\bar \\xx \\in \\Mm$, $\\bar \\xx$ is also a critical point of $(\\regulP{\\bar y}_\\Mm)$. But from Step~1, $\\solm(\\bar y)$ is unique, whence we deduce that $\\solm(\\bar y)=\\bar \\xx$. In turn, we conclude that\n\\begin{equation*}\n \\forall \\bar y \\in \\mathbb{B}(y, \\epsilon), \\quad\n - \\Fx(\\solm(\\bar y),\\bar y) \\in \\rbd \\partial \\J(\\solm(\\bar y)) \\subset \\partial \\J(\\solm(\\bar y)) .\n\\end{equation*}\n\\end{enumerate}\n\n\\paragraph{Step~3: Computing the differential.}\nIn summary, we have built a mapping $\\solm \\in \\Calt{1}(\\neighb)$, with $\\neighb = \\widetilde \\neighb_y \\cap \\mathbb{B}(y, \\epsilon))$, such that $\\solm(\\bar y)$ is a solution of $(\\regulP{\\bar y})$ and fulfills $(\\CondInj{\\solm(\\bar y)}{\\bar y})$. We are then in position to apply the implicit function theorem to $\\Gamma$, and we get the Jacobian of the mapping $\\solm$ as\n\\begin{equation*}\n \\jac \\solm(\\bar y) = -\n \\pa{\\Fxx(\\solm(\\bar y), \\bar y) + \\Q(\\solm(\\bar y))}^{+} \\jac (\\FxM)(\\solm(\\bar y),\\bar y)\n\\end{equation*}\nwhere \n\\eq{\n\t\\jac (\\FxM)(\\xx,y) = \\proj_{\\T_{\\xx}} \\Fxy(\\xx,y),\n}\nwhere the equality is a consequence of \\eqref{eq:covgrad} and linearity. 
\\qed\n\n\n\n\\subsection{Proof of Lemma~\\ref{lem:unique}}\n\\begin{enumerate}[label=(\\roman*)]\n\\item See \\cite[Lemma~8]{vaiter2013model}.\n\\item This is a specialization of Lemma~\\ref{lem:injectivity-cond} using \\eqref{eq-stric-cvx} and \\eqref{eq:covhesslin}. \\qed\n\\end{enumerate}\n\n\\subsection{Proof of Theorem~\\ref{thm-div}}\n\\label{sub:dof}\nWe can now prove Theorem~\\ref{thm-div}. At any $y \\notin \\Hh \\cup \\Gg$, we consider $\\xxs(y)$ a solution of \\eqref{eq-group-lasso}. By assumption, $(\\CondInj{\\xxs}{y})$ holds. According to Theorem~\\ref{thm-local}, one can construct a mapping $y \\mapsto \\solmB$ which is a solution to $(\\lassoB)$, coincides with $\\xxs(y)$ at $y$, and is $\\Calt{1}$ for $\\bar{y}$ in a neighborhood of $y$. Thus, by Lemma~\\ref{lem:unique}, $\\msol(\\bar y)=\\XX \\solmB$ is a single-valued mapping, which is also $\\Calt{1}$ in a neighborhood of $y$. Moreover, its differential is equal to $\\Delta(y)$ as given, where we applied the chain rule in \\eqref{eq:covhessF}. \\qed\n\n\\subsection{Proof of Proposition~\\ref{prop-exist}}\n\\label{sub:exist}\nThe proofs of both statements are constructive.\n\n\\begin{enumerate}[label=(\\roman*)]\n\\item Polyhedral penalty:\nAny polyhedral convex $\\J$ can be written as~\\citep{Rockafellar96}\n\\begin{align*}\n\\J(\\xx) &= \\umax{i \\in \\ens{1,\\dots,q}} \\ens{\\dotp{d_i}{\\xx} - b_i} + \\iota_{\\Cc}(\\xx), \\\\\n\\Cc \t&= \\enscond{\\xx \\in \\RR^p}{\\dotp{a_k}{\\xx} \\leq c_k}, k \\in \\ens{1,\\dots,r} .\n\\end{align*}\nIt is straightforward to show that\n\\begin{gather*}\n\\partial \\J(\\xx) = \\co \\ens{d_i}_{i \\in I_{\\xx}} + \\cone \\ens{a_k}_{k \\in K_{\\xx}}, \\qwhereq\\\\\nI_{\\xx} = \\enscond{i}{\\dotp{d_i}{\\xx} - b_i = \\J(\\xx)} \\qandq K_{\\xx} = \\enscond{k}{\\dotp{a_k}{\\xx} = c_k} ,\n\\end{gather*}\nand\n\\[\n\\T_{\\xx} = \\enscond{h}{\\dotp{h}{d_i} = \\dotp{h}{d_j} = \\tau_{\\xx}, ~~ \\forall i,j \\in I_{\\xx}} \\cap \\enscond{h}{\\dotp{h}{a_k} = 0, ~~ \\forall k \\in K_{\\xx}} .\n\\]\nLet $\\xxs$ be a solution of \\eqref{eq-group-lasso} for $\\J$ as above. Recall from Example~\\ref{ex:injpolyh} that $(\\CondInj{\\xxs}{y})$ is equivalent to $\\Ker(\\XX) \\cap \\T_{\\xxs} = \\ens{0}$. Suppose that this condition does not hold. Thus, there exists a nonzero vector $h \\in \\T_{\\xxs}$ such that the vector $v_t = \\xxs + th$, $t \\in \\RR$, satisfies $\\XX v_t = \\XX \\xxs$. Moreover,\n\\begin{gather*}\n\\dotp{v_t}{d_i} - b_i = \n\\begin{cases}\n\\J(\\xxs) + t\\tau_{\\xxs}, \t\t\t\t\t\t& \\text{if } i \\in I_{\\xxs} \\\\\n\\dotp{\\xxs}{d_i} - b_i + t\\dotp{h}{d_i} < \\J(\\xxs) + t\\dotp{h}{d_i}\t& \\text{otherwise} .\n\\end{cases}\n\\\\\\qandq \\\\\n\\dotp{v_t}{a_k} = \n\\begin{cases}\nc_k, \t\t\t\t\t\t\t\t\t& \\text{if } k \\in K_{\\xxs} \\\\\n\\dotp{\\xxs}{a_k} + t\\dotp{h}{a_k} < c_k + t\\dotp{h}{a_k}\t\t& \\text{otherwise} .\n\\end{cases}\n\\end{gather*}\nThus, for $t \\in ]-t_0,t_0[$, where\n\\[\nt_0 = \\min\\pa{\t\\min_{i \\notin I_{\\xxs}}\\bens{\\frac{\\J(\\xxs) - \\dotp{\\xxs}{d_i} + b_i}{\\abs{\\dotp{h}{d_i} - \\tau_{\\xxs}}}},\n\t\t\\min_{k \\notin K_{\\xxs}}\\bens{\\frac{c_k - \\dotp{\\xxs}{a_k}}{\\abs{\\dotp{h}{a_k}}}}} ,\n\\]\nwe have $I_{v_t} = I_{\\xxs}$ and $K_{v_t} = K_{\\xxs}$. Moreover, $v_t \\in \\Cc$. Therefore, for all such $t$, we indeed have $\\partial \\J(v_t) = \\partial \\J(\\xxs)$ and $\\T_{v_t}=\\T_{\\xxs}$. 
Altogether, we get that\n\\[\n-\\transp{\\XX}\\Fxo(\\XX v_t,y)=-\\transp{\\XX}\\Fxo(\\XX \\xxs,y) \\in \\partial \\J(\\xxs) = \\partial \\J(v_t) ,\n\\]\ni.e.~$v_t$ is a solution to \\eqref{eq-group-lasso}. Thus, by Lemma~\\ref{lem:unique}, we deduce that $\\F_0(\\XX v_t,y)=\\F_0(\\XX\\xxs,y)$ and $\\J(v_t)=\\J(\\xxs)$. The continuity assumption~\\eqref{hyp-f-reg} yields \n\\[\n\\F_0(\\XX v_{t_0},y) = \\F_0(\\XX \\xxs,y) .\n\\] \nFurthermore, since $\\J$ is lsc and $v_t$ is a minimizer of \\eqref{eq-group-lasso}, we have\n\\[\n\\liminf_{t \\to t_0} \\J(v_t) \\geq \\J(v_{t_0}) \\geq \\limsup_{t \\to t_0} \\J(v_t) \\iff \\J(v_{t_0}) = \\lim_{t \\to t_0} \\J(v_t) = \\J(\\xxs) .\n\\]\nConsequently, $v_{t_0}$ is a solution of \\eqref{eq-group-lasso} such that $I_{\\xxs} \\subsetneq I_{v_{t_0}}$ or\/and $K_{\\xxs} \\subsetneq K_{v_{t_0}}$, which in turn implies $\\T_{v_{t_0}} \\subsetneq \\T_{\\xxs}$. Iterating this argument, we conclude.\n\n\n\\item General group Lasso: \nLet $\\xxs$ be a solution of \\eqref{eq-group-lasso} for $\\J=\\norm{D^* \\cdot}_{1,2}$, and $I_{\\xxs}=\\enscond{i}{b_i \\in \\Bb \\tandt D^*_{b_i}\\xxs \\neq 0}$, i.e.~the set indexing the active blocks of $D^*\\xxs$. We recall from Example~\\ref{ex:gglassoM} that the partial smoothness subspace $\\Mm=\\T_{\\xxs} = \\Ker(D_{\\Lambda^c}^*)$, where $\\Lambda=\\bs(D^*\\xxs)$.\n\nFrom Lemma~\\ref{lem:first-order} and the subdifferential of the group Lasso, $\\xxs$ is indeed a minimizer if and only if there exists $\\eta \\in \\RR^p$ such that\n\\eql{\n\\label{eq:mincondglasso}\n-\\transp{\\XX}\\Fxo(\\XX \\xxs,y) + \\sum_{i \\in I} D_{b_i} \\eta_{b_i} = 0 \\qandq\n\\begin{cases}\n\\eta_{b_i} = \\frac{D^*_{b_i}\\xxs}{\\norm{D^*_{b_i}\\xxs}} & \\text{if}~ i \\in I_{\\xxs} \\\\\n\\norm{\\eta_{b_i}} \\leq 1 & \\text{otherwise} .\n\\end{cases}\n}\nSuppose that $(\\CondInj{\\xxs}{y})$ (or equivalently Lemma~\\ref{lem:unique}(ii)) does not hold at $\\xxs$. This is equivalent to the existence of a nonzero vector $h \\in \\RR^p$ in the set at the end of Example~\\ref{ex:injgglasso}. Let $v_t = \\xxs + t h$, for $t \\in \\RR$. By construction, $v_t$ obeys\n\\begin{gather*}\nv_t \\in \\T_{\\xxs} \\iff \\forall i \\notin I_{\\xxs}, ~ D_{b_i}^* v_t = 0 \\\\\n\\qandq \\XX v_t = \\XX \\xxs \\\\\n\\qandq \\forall i \\in I_{\\xxs}, \\exists \\mu_i \\in \\RR, ~ D^*_{b_i} v_t = (1+t\\mu_i)D^*_{b_i} \\xxs .\n\\end{gather*}\nLet \n\\[\nt_0 = \\min\\enscond{|t|}{1+t\\mu_i=0, i \\in I} = \\min_{i \\in I_{\\xxs}, \\mu_i \\neq 0} \\abs{\\mu_i}^{-1} .\n\\] \nFor all $t \\in ]-t_0,t_0[$, we have $1+t\\mu_i > 0$ for $i \\in I_{\\xxs}$ and $I_{v_t}=I_{\\xxs}$ (in fact $\\T_{v_t}=\\T_{\\xxs}$ by~Fact~\\ref{fact:sharp}), and thus\n\\[\n\\frac{D^*_{b_i} v_t}{\\norm{D^*_{b_i} v_t}} = \\frac{D^*_{b_i} \\xxs}{\\norm{D^*_{b_i}\\xxs}}, \\quad \\forall i \\in I_{v_t} .\n\\] \nMoreover, $-\\transp{\\XX}\\Fxo(\\XX v_t,y)=-\\transp{\\XX}\\Fxo(\\XX \\xxs,y)$. Inserting the last statements in \\eqref{eq:mincondglasso}, we deduce that $v_t$ is a solution of \\eqref{eq-group-lasso}. \n\nFrom Lemma~\\ref{lem:unique}(i), we get that $\\F_0(\\XX v_t,y)=\\F_0(\\XX\\xxs,y)$ and $\\norm{D^*v_t}_{1,2}=\\norm{D^*\\xxs}_{1,2}$. 
By continuity of $\\F_0(\\cdot,y)$ (assumption~\\eqref{hyp-f-reg}), and of $\\norm{\\cdot}_{1,2}$ one has\n\\[\n\\F_0(\\XX v_{t_0}) = \\F_0(\\XX \\xxs) \\qandq \\norm{D^* v_{t_0}}_{1,2} = \\norm{D^* \\xxs}_{1,2} .\n\\]\nClearly, we have constructed a solution $v_{t_0}$ of \\eqref{eq-group-lasso} such that $I_{v_{t_0}} \\subsetneq I_{\\xxs}$, hence $\\Ker(\\Q(v_{t_0})) \\cap \\T_{v_{t_0}} \\subsetneq \\Ker(\\Q(\\xxs)) \\cap \\T_{\\xxs}$. Iterating this argument shows the result. \\qed\n\\end{enumerate}\n\n\\begin{remark}\nFor the general group Lasso, the iterative construction is guaranteed to terminate at a non-trivial point. Indeed, if it were not the case, then eventually one would construct a solution such that $0 \\neq h \\in \\Ker(\\XX) \\cap \\Ker(D^*)$ leading to a contradiction with a classical condition in regularization theory. Moreover, $\\Ker(\\XX) \\cap \\Ker(D^*) = \\ens{0}$ is a sufficient (and necessary in our case) condition to ensure boundedness of the set of solutions to \\eqref{eq-group-lasso}.\n\\end{remark}\n\n\n\\subsection{Proof of Theorem~\\ref{thm-dof}}\n\\label{sub:sure}\n\n\\begin{enumerate}[label=(\\roman*)]\n\\item We obtain this assertion by proving that all $\\Hh_{\\Mm}$ are of zero measure for all $\\Mm$, and that the union is over a finite set, because of~\\eqref{hyp-tt}.\n\\begin{enumerate}[label=$\\bullet$]\n\\item Since $\\J$ is definable by~\\eqref{eq-condition-omin}, $\\Fx(\\xx,y)$ is also definable by virtue of Proposition~\\ref{prop-ominimal-diffjac}. \n\n\\item Given $\\Mm \\in \\Mscr$ which is definable, $\\Mc$ is also definable. Indeed, $\\Mc$ can be equivalently written\n\\begin{align*}\n\\Mc \t&= \\Mm \\cap \\enscond{\\xx}{\\exists \\epsilon > 0, \\forall \\xx' \\in \\Mm \\cap \\mathbb{B}(\\xx,\\epsilon), \\J \\in \\Cdeux(\\xx')} \\\\\n\t&\\cap \\enscond{\\xx}{\\forall (u,v) \\in (\\partial J(\\xx))^2, \\dotp{u-v}{\\xx'}=0, \\forall \\xx' \\in \\Tgt_{\\xx}(\\Mm)} \\\\\n\t&\\cap \\enscond{\\xx}{\\forall \\xx_r \\in \\Mm \\to \\xx \\tandt u \\in \\partial \\J(\\xx), \\exists u_r \\to u \\text{ s.t. } u_r \\in \\partial\\J(\\xx_r)} ~.\n\n\\end{align*}\nEach of the four sets above capture a property of partial smoothness as introduced in Definition~\\ref{defn:psg}. $\\Mc$ involves $\\Mm$ which is definable, its tangent space (which can be shown to be definable as a mapping of $\\xx$ using Proposition~\\ref{prop-ominimal-diffjac}), $\\partial \\J$ whose graph is definable thanks to Proposition~\\ref{cor-ominimal-subdiff}, continuity relations and algebraic equations, whence definability follows after interpreting the logical notations (conjunction, existence and universal quantifiers) in the first-order formula in terms of set operations, and using axioms 1-4 of definability in an o-minimal structure. \n \n\\item Let $\\boldsymbol{D}: \\RR^p \\rightrightarrows \\RR^p$ the set-valued mapping whose graph is \n\\[\n\\graph(\\boldsymbol{D}) = \\enscond{(\\xx,\\eta)}{\\eta \\in \\ri \\partial \\J(\\xx)} ~.\n\\]\nFrom Lemma~\\ref{lem-ominimal-ris}, $\\graph(\\boldsymbol{D})$ is definable. Since the graph $\\partial \\J$ is closed \\citep{hiriart1996convex}, and definable (Proposition~\\ref{cor-ominimal-subdiff}), the set\n\\[\n\\enscond{(\\xx,\\eta)}{\\eta \\in \\rbd \\partial \\J(\\xx)} = \\graph(\\partial \\J) \\setminus \\graph(\\boldsymbol{D}) ~,\n\\]\nis also definable by axiom 1. 
This entails that $\\Aa_{\\Mm}$ is also a definable subset of $\\RR^n \\times \\Mc$ since\n\\begin{align*}\n\\Aa_{\\Mm} = (\\RR^n \\times \\Mc \\times \\RR^n) &\\cap \\enscond{(y,\\xx,\\eta)}{\\eta=-\\Fx(\\xx_T,y)} \\\\\n\t\t\t\t\t &\\cap (\\RR^n \\times \\enscond{(\\xx,\\eta)}{\\eta \\in \\rbd \\partial \\J(\\xx)}) ~.\n\\end{align*}\n\n\\item By axiom 4, the canonical projection $\\Pi_{n+p,n}(\\Aa_{\\Mm})$ is definable, and its boundary $\\Hh_{\\Mm}=\\bd(\\Pi_{n+p,n}(\\Aa_{\\Mm}))$ is also definable by \\cite[Proposition~1.12]{coste1999omin} with a strictly smaller dimension than $\\Pi_{n+p,n}(\\Aa_{\\Mm})$ \\cite[Theorem~3.22]{coste1999omin}. \n\n\\item We recall now from \\citep[Theorem~2.10]{coste1999omin} that any definable subset $A \\subset \\RR^n$ in $\\omin$ can be decomposed (stratified) into a disjoint finite union of $q$ subsets $C_i$, definable in $\\omin$, called cells.\nThe dimension of $A$ is~\\cite[Proposition 3.17(4)]{coste1999omin}\n\\begin{equation*}\n d = \\umax{i \\in \\{1,\\dots,q\\}} d_i \\leq n ~,\n\\end{equation*}\nwhere $d_i=\\dim(C_i)$. Altogether we get that\n\\begin{equation*}\n \\dim \\Hh_{\\Mm}\n =\n \\dim \\bd(\\Pi_{n+p,n}(\\Aa_{\\Mm}))\n <\n \\dim \\Pi_{n+p,n}(\\Aa_{\\Mm})\n =\n d\n \\leq\n n\n\\end{equation*}\nwhence we deduce that $\\Hh$ is of zero measure with respect to the Lebesgue measure on $\\RR^n$ since the union is taken over the finite set $\\Mscr$ by~\\eqref{hyp-tt}.\n\\end{enumerate} \n\n\\item $\\F_0(\\cdot,y)$ is strongly convex with modulus $\\tau$ if, and only if,\n\\[\n\\F_0(\\mu,y) = G(\\mu,y) + \\frac{\\tau}{2} \\norm{\\mu}^2\n\\]\nwhere $G(\\cdot,y)$ is convex and satisfies~\\eqref{hyp-f-reg}, and in particular its domain in $\\mu$ is full-dimensional. Thus, \\eqref{eq-group-lasso} amounts to solving\n\\begin{equation*}\n \\umin{ \\xx \\in \\RR^p } \\frac{\\tau}{2} \\norm{\\XX\\xx}^2 + G(\\XX\\xx,y) + \\J(\\xx) .\n\\end{equation*}\nIt can be recast as a constrained optimization problem\n\\begin{equation*}\n\\umin{ \\mu \\in \\RR^n, \\xx \\in \\RR^p} \\frac{\\tau}{2} \\norm{\\mu}^2 + G(\\mu,y) + \\J(\\xx) ~\\mathrm{s.t.}~ \\mu = \\XX\\xx .\n\\end{equation*}\nIntroducing the image $(\\XX\\J)$ of $\\J$ under the linear mapping $\\XX$, it is equivalent to\n\\begin{equation}\n\\umin{ \\mu \\in \\RR^n} \\frac{\\tau}{2} \\norm{\\mu}^2 + G(\\mu,y) + (\\XX\\J)(\\mu) ~,\n\\end{equation}\nwhere $(\\XX\\J)(\\mu) = \\umin{\\enscond{\\xx \\in \\RR^p}{\\mu = \\XX\\xx}} \\J(\\xx)$ is the so-called image of $\\J$ under $\\XX$.\nThis is a proper closed convex function, which is finite on $\\Span(\\XX)$.\nThe minimization problem amounts to computing the proximal point at $0$ of $G(\\cdot,y) + (\\XX\\J)$, which is a proper closed and convex function. Thus this point exists and is unique.\n\nFurthermore, by assumption \\eqref{cnd-hess-bord}, the difference function\n\\[\n\\F_0(\\cdot,y_1)-\\F_0(\\cdot,y_2)=G(\\cdot,y_1)-G(\\cdot,y_2)\n\\]\nis Lipschitz continuous on $\\RR^n$ with Lipschitz constant $L\\norm{y_1-y_2}$. It then follows from \\cite[Proposition~4.32]{BonnansShapiro2000} that $\\msol(\\cdot)$ is Lipschitz continuous with constant $2L\/\\tau$. Moreover, $h$ is Lipschitz continuous, and thus so is the composed mapping $h \\circ \\msol(\\cdot)$. From~\\citep[Theorem~5, Section~4.2.3]{EvansGariepy92}, weak differentiability follows.\n\nRademacher's theorem asserts that a Lipschitz continuous function is differentiable Lebesgue a.e. and its derivative and weak derivative coincide Lebesgue a.e.~\\citep[Theorem~2, Section~6.2]{EvansGariepy92}. 
Its weak derivative, whenever it exists, is upper-bounded by the Lipschitz constant. Thus\n\\[\n\\EE\\pa{\\Big|\\pd{ (h \\circ \\msol)_i}{y_i}(Y)\\Big|} < +\\infty ~.\n\\]\n\n\\item Now, by the chain rule~\\citep[Remark, Section~4.2.2]{EvansGariepy92}, the weak derivative of $h \\circ \\msol(\\cdot)$ at $y$ is precisely \n\\[\n\\jac( h\\circ\\msol)(y)=\\jac h\\pa{\\msol(y)}\\De(y) ~.\n\\]\nThis formula is valid everywhere except on the set $\\Hh \\cup \\Gg$ which is of Lebesgue measure zero as shown in (i). We conclude by invoking (ii) and Stein's lemma \\citep{stein1981estimation} to establish unbiasedness of the estimator $\\widehat \\DOF$ of the DOF.\n\n\\item Plugging the DOF expression (iii) into that of the $\\SURE$~\\citep[Theorem~1]{stein1981estimation}, the statement follows.\n\n\\end{enumerate}\n\\qed\n\n\\subsection{Proof of Theorem~\\ref{thm-dof-exp}}\n\\label{sub:sureexp}\n\nFor (i)-(iii), the proof is exactly the same as in Theorem~\\ref{thm-dof}.\nFor (iv): combining the DOF expression (iii) and \\citep[Theorem~1]{eldar-gsure}, and rearranging the expression yields the stated result. \\qed\n\n\n\\section{Partly Smooth Functions}\n\\label{sec:blocks}\n\n\\subsection{Partial Smoothness}\n\nToward the goal of studying the sensitivity behaviour of $\\widehat{\\xx}(y)$ and $\\msol(y)$ with regularizers $\\J \\in \\lsc(\\RR^p)$, we restrict our attention to a subclass of these functions that fulfill some regularity assumptions according to the following definition.\n\\begin{defn}\\label{defn:psg}\n Let $J \\in \\lsc(\\RR^{p})$ and let $\\xx$ be a point such that $\\partial J(\\xx) \\neq \\emptyset$. $J$ is said to be \\emph{partly smooth} at $\\xx$ relative to a set $\\Mm \\subseteq \\RR^p$ if\n \\begin{enumerate}\n \\item Smoothness: $\\Mm$ is a $\\Cdeux$-manifold and $J$ restricted to $\\Mm$ is $\\Cdeux$ around $\\xx$.\n \\item Sharpness:\n $\\Tgt_{\\xx}(\\Mm) = T_{\\xx} \\eqdef \\Lin(\\partial J(\\xx))^\\perp$.\n \\item Continuity: The set-valued mapping $\\partial J$ is continuous at $\\xx$ relative to $\\Mm$.\n \\end{enumerate}\n $\\J$ is said to be \\emph{partly smooth relative to the manifold $\\Mm$} if $J$ is partly smooth at each point $\\xx \\in \\Mm$ relative to $\\Mm$.\n\\end{defn}\nObserve that $\\Mm$ being affine or linear is equivalent to $\\Mm=\\xx+T_{\\xx}$. A closed convex set $\\Cc$ is partly smooth at a point $\\xx \\in \\Cc$ relative to a $\\Cdeux$-manifold $\\Mm$ locally contained in $\\Cc$ if its indicator function $\\iota_\\Cc$ maintains this property.\\\\\n\n\\citet[Proposition~2.10]{Lewis-PartlySmooth} allows one to prove the following fact (known as local normal sharpness).\n\n\\begin{fact}\n\\label{fact:sharp}\nIf $\\J$ is partly smooth at $\\xx$ relative to $\\Mm$, then all $\\xx' \\in \\Mm$ near $\\xx$ satisfy\n\\[\n\\Tgt_{\\xx'}(\\Mm) = \\T_{\\xx'} ~.\n\\]\nIn particular, when $\\Mm$ is affine or linear, then\n\\begin{align*}\n\\text{$\\forall \\xx' \\in \\Mm$ near $\\xx$,} \\quad \\T_{\\xx'} = \\T_{\\xx} ~.\n\\end{align*}\n\\end{fact}\n\nIt can also be shown that the class of partly smooth functions enjoys a powerful calculus. For instance, under mild conditions, it is closed under positive combination, pre-composition by a linear operator and spectral lifting, with closed-form expressions of the resulting partial smoothness manifolds and their tangent spaces, see~\\citep{Lewis-PartlySmooth,vaiter2014model}. 
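\n\nAs a simple illustration of this calculus (not used in the sequel), consider the elastic net penalty $\\J(\\xx) = \\normu{\\xx} + \\frac{\\gamma}{2}\\norm{\\xx}^2$ with $\\gamma > 0$. Since the quadratic term is $\\Cdeux$ on the whole of $\\RR^p$, hence trivially partly smooth relative to $\\RR^p$, the sum rule entails that $\\J$ is partly smooth at any $\\xx$ relative to the same subspace $\\T_\\xx$ as the $\\lun$ norm (see Example~\\ref{ex:lassoM} below). This can also be checked directly from Definition~\\ref{defn:psg}: $\\partial \\J(\\xx) = \\gamma \\xx + \\partial \\normu{\\xx}$ is a translate of $\\partial \\normu{\\xx}$, so that $\\Lin(\\partial \\J(\\xx))^\\perp = \\Lin(\\partial \\normu{\\xx})^\\perp$, and smoothness and continuity are inherited from those of the $\\lun$ norm.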
\n\nIt turns out that except the nuclear norm, the regularizing penalties that we exemplified in Section~\\ref{sec:introduction} are partly smooth relative to a linear subspace. The nuclear norm is partly smooth relative to the fixed-rank manifold.\n\n\\begin{ex}[Lasso]\n\\label{ex:lassoM}\nWe denote $(a_i)_{1 \\leq i \\leq p}$ the canonical basis of $\\RR^p$. Then, $J=\\norm{\\cdot}_1$ is partly smooth at $\\xx$ relative to\n\\begin{equation*}\n \\Mm = \\T_\\xx = \\Span \\ens{(a_i)_{i \\in \\supp(\\xx)}}\n \\qwhereq\n \\supp(\\xx) \\eqdef \\enscond{i \\in \\ens{1,\\dots,p}}{\\xx_i \\neq 0} .\n\\end{equation*}\n\\end{ex}\n\n\\begin{ex}[General Lasso]\n\\label{ex:analassoM}\n\\citet[Proposition~9]{vaiter2013model} relates the partial smoothness subspace associated to a convex partly smooth regularizer $\\J \\circ D^*$, where $D$ is a linear operator, to that of $\\J$. In particular, for $\\J=\\norm{\\cdot}_1$, $\\J \\circ D^*$ is partly smooth at $\\xx$ relative to\n\\begin{equation*}\n \\Mm = \\T_\\xx = \\Ker(D_{\\Lambda^c}^*) \\qwhereq \\Lambda = \\supp(D^* \\xx) .\n\\end{equation*}\n\\end{ex}\n\n\\begin{ex}[$\\linf$ Anti-sparsity]\n\\label{ex:linfM}\nIt can be readily checked that $\\J=\\normi{\\cdot}$ is partly smooth at $\\xx$ relative to\n\\begin{equation*}\n \t\\Mm = \\T_\\xx = \\enscond{\\xx'}{\\xx'_I \\in \\RR \\sign(\\xx_I)} \n\t\\qwhereq\n\tI = \\enscond{i}{\\xx_i = \\normi{\\xx}} ~.\n\\end{equation*}\n\\end{ex}\n\n\\begin{ex}[Group Lasso]\n\\label{ex:glassoM}\nThe partial smoothness subspace associated to $\\xx$ when the blocks are of size greater than 1 can be defined similarly, but using the notion of block support.\nUsing the block structure $\\Bb$, one has that the group Lasso regularizer is partly smooth at $\\xx$ relative to\n\\begin{equation*}\n \\Mm = \\T_\\xx = \\Span \\ens{(a_i)_{i \\in \\bs(\\xx)}},\n\\end{equation*}\nwhere\n\\begin{equation*}\n \\bs(\\xx) = \\enscond{i \\in \\ens{1,\\dots,p}}{\\exists b \\in \\Bb,\\, \\xx_b \\neq 0 \\qandq i \\in b} .\n\\end{equation*}\n\\end{ex}\n\n\\begin{ex}[General Group Lasso]\n\\label{ex:gglassoM}\nUsing again \\citep[Proposition~9]{vaiter2013model}, we can describe the partial smoothness subspace for $\\J=\\norm{D^* \\cdot}_\\Bb$, which reads\n\\begin{equation*}\n \\Mm = \\T_\\xx = \\Ker(D_{\\Lambda^c}^*) \\qwhereq \\Lambda = \\bs(D^* \\xx) .\n\\end{equation*}\n\\end{ex}\n\n\\begin{ex}[Nuclear norm]\n\\label{ex:nucM}\nPiecing together \\citep[Theorem~3.19]{daniilidis2013orthogonal} and Example~\\ref{ex:lassoM}, the nuclear norm can be shown to be partly smooth at $\\xx \\in \\RR^{p_1 \\times p_2}$ relative to the set\n\\begin{equation*}\n\t\\Mm = \\enscond{\\xx'}{ \\rank(\\xx')=r }, \\quad r=\\rank(\\xx) ,\n\\end{equation*}\nwhich is a $\\Cdeux$-manifold around $\\xx$ of dimension $(p_1+p_2-r)r$; see \\citep[Example~8.14]{lee2003smooth}.\n\\end{ex}\n\n\\begin{ex}[Indicator function of a partly smooth set $\\Cc$]\n\\label{ex:setM}\nLet $\\Cc$ be a closed convex and partly smooth set at $\\xx \\in \\Cc$ relative to $\\Mm$. Observe that when $\\xx \\in \\ri \\Cc$, $\\Mm = \\RR^p$. For $\\xx \\in \\rbd \\Cc$, $\\Mm$ is locally contained in $\\rbd \\Cc$.\n\\end{ex}\n\nWe now consider an instructive example of a partly smooth function relative to a non-flat active submanifold that will serve as a useful illustration in the rest of the paper.\n\\begin{ex}[$\\J=\\max(\\norm{\\cdot}-1,0)$]\n\\label{ex:maxnormM}\nWe have $\\J \\in \\lsc(\\RR^p)$ and continuous. 
It is then differentiable Lebesgue-a.e., except on the unit sphere $\\sph^{p-1}$. For $\\xx$ outside $\\sph^{p-1}$, $\\J$ is partly smooth at $\\xx$ relative to $\\RR^p$. For $\\xx \\in \\sph^{p-1}$, $\\J$ is partly smooth at $\\xx$ relative to $\\sph^{p-1}$. Obviously, $\\sph^{p-1}$ is a $\\Cdeux$-smooth manifold.\n\\end{ex}\n\n\n\\subsection{Riemannian Gradient and Hessian}\n\\label{subsec-restriction}\nWe now give expressions of the Riemannian gradient and Hessian for the case of partly smooth functions relative to a $\\Cdeux$-manifold. This is summarized in the following fact which follows by combining \\eqref{eq:covgrad}, \\eqref{eq:covhess}, Definition~\\ref{defn:psg} and \\citet[Proposition~17]{Daniilidis06}.\n\n\\begin{fact}\n\\label{fact:gradhess}\nIf $\\J$ is partly smooth at $\\xx$ relative to $\\Mm$, then for any $\\xx' \\in \\Mm$ near $\\xx$\n\\[\n\\JxM(\\xx') = \\proj_{\\T_{\\xx'}}\\pa{\\partial J(\\xx')} = \\e{\\xx'} ~,\n\\]\nand this does not depend on the smooth representation $\\tilde{J}$ of $\\J$ on $\\Mm$. In turn,\n\\[\n\\Q(\\xx) = \\proj_{\\T_{\\xx}} \\hess \\tilde{J}(\\xx) \\proj_{\\T_\\xx} + \\Afk_{\\xx}(\\cdot,\\proj_{\\S_\\xx}\\grad \\tilde{J}(\\xx)) ~.\n\\]\n\\end{fact}\n\n\nLet's now exemplify this fact by providing the expressions of the Riemannian Hessian for the examples discussed above.\n\n\n\\begin{ex}[Polyhedral penalty]\n\\label{ex:lassohess}\nPolyhedrality of $J$ implies that it is affine near $\\xx$ along the partial smoothness subspace $\\Mm=\\xx+\\T_\\xx$, and its subdifferential is locally constant near $\\xx$ along $\\Mm$. In turn, the Riemannian Hessian of $J$ vanishes locally, i.e. $\\Q(\\xx') = 0$ for all $\\xx' \\in \\Mm$ near $\\xx$. Of course, this holds for the Lasso, general Lasso and $\\linf$ anti-sparsity penalties since they are all polyhedral. \n\\end{ex}\n\n\n\\begin{ex}[Group Lasso]\n\\label{ex:glassohess}\nUsing the expression of $\\Mm=\\T_\\xx$ in Example~\\ref{ex:glassoM}, it is straightforward to show that \n\\begin{equation*}\n \\Q(\\xx) = \\delta_\\xx \\circ Q_{\\xx^\\perp},\n\\end{equation*}\nwhere, for $\\Lambda = \\bs(\\xx)$, \n\\begin{gather*}\n \\delta_\\xx : \\T_{\\xx} \\to \\T_{\\xx}, v \\mapsto \n \\begin{cases}\n v_b \/ \\norm{\\xx_b} & \\text{if}~ \\xx_b \\neq 0 \\\\\n 0 & \\text{otherwise}\n \\end{cases}\n \\\\ \\qandq \\\\ \n Q_{\\xx^\\perp} : \\T_{\\xx} \\to \\T_{\\xx}, v \\mapsto \n \\begin{cases}\n v_b - \\frac{\\dotp{\\xx_b}{v_b}}{\\norm{\\xx_b}^2} \\xx_b & \\text{if}~ \\xx_b \\neq 0 \\\\\n 0 & \\text{otherwise}\n \\end{cases} ~.\n\\end{gather*}\n\\end{ex}\n\n\\begin{ex}[General Group Lasso]\n\\label{ex:gglassohess}\n\nApplying the chain rule to Example~\\ref{ex:glassohess}, we get \n\\begin{equation*}\n \\Q(\\xx) = \\proj_{\\Ker(D_{\\Lambda^c}^*)} D \\pa{\\delta_{D^* \\xx} \\circ Q_{(D^* \\xx)^\\perp}}D^*\\proj_{\\Ker(D_{\\Lambda^c}^*)},\n\\end{equation*}\nwhere $\\Lambda = \\bs(D^* \\xx)$ and the operator $\\delta_{D^* \\xx} \\circ Q_{(D^* \\xx)^\\perp}$ is defined similarly to Example~\\ref{ex:glassohess}.\n\\end{ex}\n\n\\begin{ex}[Nuclear norm]\n\\label{ex:nuchess}\nFor $\\xx \\in \\RR^{p_1 \\times p_2}$ with $\\rank(\\xx)=r$, let $\\xx = U \\diag(\\uplambda(\\xx)) \\adj{V}$ be a reduced rank-$r$ SVD, where $U \\in \\RR^{p_1 \\times r}$ and $V \\in \\RR^{p_2 \\times r}$ have orthonormal columns, and $\\uplambda(\\xx) \\in (\\RR_+ \\setminus \\ens{0})^{r}$ is the vector of singular values $(\\uplambda_1(\\xx),\\cdots,\\uplambda_r(\\xx))$ in non-increasing order. 
From the partial smoothness of the nuclear norm at $\\xx$ (Example~\\ref{ex:nucM}) and its subdifferential, one can deduce that\n\\begin{align}\n\\label{eq:Tnuc}\n&\\Tgt_{\\xx}(\\Mm) = \\T_{\\xx} = \\enscond{U A^* + B V^*}{ A \\in \\RR^{p_2 \\times r}, B \\in \\RR^{p_1 \\times r} } \\tandt \\\\\n&\\grad_{\\Mm}\\norm{\\cdot}_*(\\xx) = \\e{\\xx} = UV^* . \\nonumber\n\\end{align}\nIt can be checked that the orthogonal projector on $\\T_{\\xx}$ is given by\n\\begin{align*}\n\\proj_{\\T_{\\xx}}W = U\\adj{U} W + W V\\adj{V} - U\\adj{U} W V\\adj{V} \n\\end{align*}\nLet $\\xi \\in \\T_{\\xx}$ and $W \\in \\S_{\\xx}$. Then, from \\citep[Section~4.5]{Absil13}, the Weingarten map reads\n\\begin{align}\n\\label{eq:weingnuc}\n\\Afk_{\\xx}\\pa{\\xi,W} = W \\adj{\\xi} \\adj{\\xx^{+}} + \\adj{\\xx^{+}} \\adj{\\xi} W\n\\qwhereq\n\\adj{\\xx^{+}} \\eqdef U \\diag(\\uplambda(\\xx))^{-1} \\adj{V} .\n\\end{align}\nIn turn, from Fact~\\ref{fact:gradhess}, the Riemannian Hessian of the nuclear norm reads\n\\begin{align*}\n\\hess_{\\Mm}\\norm{\\cdot}_*(\\xx)(\\xi) &= \\proj_{\\T_{\\xx}} \\hess \\widetilde{\\norm{\\cdot}_*}(\\xx)(\\proj_{\\T_{\\xx}} \\xi) \\\\\n&\\quad + \\proj_{\\S_{\\xx}} \\grad\\widetilde{\\norm{\\cdot}_*}(\\xx) \\adj{\\xi} \\adj{\\xx^{+}} + \\adj{\\xx^{+}} \\adj{\\xi} \\proj_{\\S_{\\xx}} \\grad\\widetilde{\\norm{\\cdot}_*}(\\xx) ,\n\\end{align*}\nwhere $\\widetilde{\\norm{\\cdot}_*}$ is any smooth representative of the nuclear norm at $\\xx$ on $\\Mm$. Owing to the smooth transfer principle \\citep[Corollary~2.3]{daniilidis2013orthogonal}, the nuclear norm has a $\\Cdeux$-smooth (and even convex) representation on $\\Mm$ near $\\xx$ which is\n\\[\n\\widetilde{\\norm{\\xx'}_*} = \\widetilde{\\normu{\\uplambda(\\xx')}} = \\sum_{i=1}^r \\uplambda_i(\\xx') .\n\\]\nCombining this with \\citep[Corollary~2.5]{Lewis95}, we then have $\\grad\\widetilde{\\norm{\\cdot}_*}(\\xx) = UV^*$, and thus $\\Afk_{\\xx}\\pa{\\xi,\\proj_{\\S_{\\xx}}\\grad\\widetilde{\\norm{\\cdot}_*}(\\xx)} = 0$, or equivalently,\n\\begin{align}\n\\label{eq:nuchess}\n\\hess_{\\Mm}\\norm{\\cdot}_*(\\xx)(\\xi) &= \\proj_{\\T_{\\xx}} \\hess \\widetilde{\\norm{\\cdot}_*}(\\xx)(\\proj_{\\T_{\\xx}} \\xi) .\n\\end{align}\nThe expression of the Hessian $\\hess \\widetilde{\\norm{\\cdot}_*}(\\xx)$ can be obtained from the derivative of $UV^*$ using either \\citep[Theorem~4.3]{CandesSVTSURE12} or \\citep[Theorem~1]{DeledalleSVTSURE12} when $\\xx$ is full-rank with distinct singular values, or from \\citep[Theorem~3.3]{Lewis01} in the case where $\\xx$ is symmetric with possibly repeated eigenvalues. \n\\end{ex}\n\n\\begin{ex}[Indicator function of a partly smooth set $\\Cc$]\n\\label{ex:sethess}\nLet $\\Cc$ be a closed convex and partly smooth set at $\\xx \\in \\Cc$ relative to $\\Mm$. From Example~\\ref{ex:setM}, it is then clear that the zero-function is a smooth representative of $\\iota_\\Cc$ on $\\Mm$ around $\\xx$. In turn, the Riemannian gradient and Hessian of $\\iota_\\Cc$ vanish around $\\xx$ on $\\Mm$.\n\\end{ex}\n\n\\begin{ex}[$\\J=\\max(\\norm{\\cdot}-1,0)$]\n\\label{ex:maxnormhess}\nLet $\\xx \\in \\sph^{p-1}$. We have $\\T_{\\xx}=\\pa{\\RR\\xx}^\\perp$, and the orthogonal projector onto $\\T_\\xx$ is\n\\[\n\\proj_{\\T_\\xx} = \\Id - \\xx\\transp{\\xx} .\n\\]\nThe Weingarten map then reduces to\n\\[\n\\Afk_{\\xx}\\pa{\\xi,v} = -\\xi\\dotp{\\xx}{v} , \\quad \\xi \\in \\T_\\xx \\tandt v \\in \\S_\\xx .\n\\]\nMoreover, the zero-function is a smooth representative of $\\J$ on $\\sph^{p-1}$. 
It then follows that $\\Q(\\xx)=0$.\n\\end{ex}\n\n\n\\section{Basic Properties of o-minimal Structures}\n\\label{sec-omin}\n\nIn the following results, we collect some important stability properties of o-minimal structures. To be self-contained, we also provide proofs. To the best of our knowledge, these proofs, although simple, are not reported in the literature, or some of them are left as exercises in the authoritative references~\\cite{van-den-Dries-omin-book,coste1999omin}. Moreover, in most proofs, to show that a subset is definable, we could just write the appropriate first-order formula (see \\cite[Page~12]{coste1999omin}\\cite[Section Ch1.1.2]{van-den-Dries-omin-book}), and conclude using \\cite[Theorem~1.13]{coste1999omin}. Here, for the sake of clarity and to avoid cryptic statements for the non-specialist, we will translate the first-order formula into operations on the involved subsets, in particular projections, and invoke the above stability axioms of o-minimal structures. In the following, $n$ denotes an arbitrary (finite) dimension which is not necessarily the number of observations used previously in the paper. \n\n\\begin{lem}[Addition and multiplication]\\label{lem-ominimal-summult}\n\tLet $f : \\Om \\subset \\RR^n \\rightarrow \\RR^p$ and $g : \\Om \\subset \\RR^n \\rightarrow \\RR^p$ be definable functions. Then their pointwise addition and multiplication are also definable.\n\\end{lem}\n\\begin{proof} \n\tLet $h=f+g$, and \n\t\\[\n\tB=(\\Om \\times \\RR^p \\times \\Om \\times \\RR^p \\times \\Om \\times \\RR^p) \\cap (\\Om \\times \\RR^p \\times \\graph(f) \\times \\graph(g)) \\cap S\n\t\\]\n\twhere $S=\\enscond{(x,u,y,v,z,w)}{x=y=z, u=v+w}$ is obviously an algebraic (in fact linear) subset, hence definable by axiom 2. Axioms 1 and 2 then imply that $B$ is also definable. Let $\\Pi_{3n+3p,n+p}: \\RR^{3n+3p} \\to \\RR^{n+p}$ be the projection on the first $n+p$ coordinates. We then have\n\t\\[\n\t\\graph(h) = \\Pi_{3n+3p,n+p}(B)\n\t\\]\n\twhence we deduce that $h$ is definable by repeated application of axiom 4. Definability of the pointwise multiplication follows from the same proof, taking $u=v \\cdot w$ in $S$. \\qed\n\\end{proof}\n\n\n\\begin{lem}[Inequalities in definable sets]\\label{lem-ominimal-ineq}\n\tLet $f : \\Om \\subset \\RR^n \\rightarrow \\RR$ be a definable function. Then $\\enscond{x \\in \\Om}{f(x) > 0}$ is definable. The same holds when replacing $>$ with $<$.\n\\end{lem}\nClearly, inequalities involving definable functions are allowed when defining definable sets.\n \nThere are many possible proofs of this statement.\n\\begin{proof}[1]\nLet $B=\\enscond{(x,y) \\in \\Om \\times \\RR}{f(x)=y} \\cap (\\Om \\times (0,+\\infty))$, which is definable thanks to axioms 1 and 3, and to the fact that the level sets of a definable function are also definable. Thus\n\\[\n\\enscond{x \\in \\Om}{f(x) > 0} = \\enscond{x \\in \\Om}{\\exists y, f(x) = y, y > 0} = \\Pi_{n+1,n}(B) ~,\n\\]\nand we conclude using again axiom 4. \\qed\n\\end{proof}\nYet another (simpler) proof.\n\\begin{proof}[2]\nIt is sufficient to remark that $\\enscond{x \\in \\Om}{f(x) > 0}$ is the projection of the set $\\enscond{(x,t) \\in \\Om \\times \\RR}{t^2f(x)-1 = 0}$, where the latter is definable owing to Lemma~\\ref{lem-ominimal-summult}. \\qed\n\\end{proof}\n\n\\begin{lem}[Derivative]\\label{prop-ominimal-derivative}\nLet $f: I \\to \\RR$ be a definable differentiable function on an open interval $I$ of $\\RR$. 
Then its derivative $f': I \\to \\RR$ is also definable.\n\\end{lem}\n\\begin{proof} \nLet $g: (x,t) \\in I \\times \\RR \\mapsto g(x,t) = f(x+t)-f(x)$. Note that $g$ is a definable function on $I \\times \\RR$ by Lemma~\\ref{lem-ominimal-summult}. We now write the graph of $f'$ as\n\\[\n\\graph(f') = \\enscond{(x,y) \\in I \\times \\RR}{\\forall \\varepsilon > 0, \\exists \\delta > 0, \\forall t \\in \\RR, \\abs{t} < \\delta, \\abs{g(x,t) - yt} < \\varepsilon|t|} ~.\n\\]\nLet $C=\\enscond{(x,y,v,t,\\varepsilon,\\delta) \\in I \\times \\RR^5}{((x,t),v) \\in \\graph(g)}$, which is definable since $g$ is definable, using axiom 3. Let\n\\[\nB = \\enscond{(x,y,v,t,\\varepsilon,\\delta)}{t^2 < \\delta^2, (v-ty)^2 < \\varepsilon^2t^2} \\cap C ~.\n\\]\nThe first set in the definition of $B$ is semi-algebraic, hence definable thanks to axiom 2. Thus $B$ is also definable using axiom 1. We can now write\n\\[\n\\graph(f') = \\RR^3 \\setminus \\pa{\\Pi_{5,3}\\pa{\\RR^5 \\setminus \\Pi_{6,5}(B)}} \\cap (I \\times \\RR) ~,\n\\]\nwhere the projectors and completions translate the actions of the existential and universal quantifiers. Using again axioms 4 and 1, we conclude. \\qed\n\\end{proof}\n\nWith such a result at hand, the following proposition is immediate.\n\\begin{prop}[Differential and Jacobian]\\label{prop-ominimal-diffjac}\nLet $f=(f_1,\\cdots,f_p): \\Om \\to \\RR^p$ be a differentiable function on an open subset $\\Om$ of $\\RR^n$. If $f$ is definable, then so are its differential mapping and its Jacobian. In particular, for each $i=1,\\cdots,n$ and $j=1,\\cdots,p$, the partial derivative $\\partial f_i\/\\partial x_j: \\Om \\to \\RR$ is definable.\n\\end{prop}\n\n\nWe provide below some results concerning the subdifferential.\n\n\\begin{prop}[Subdifferential]\\label{cor-ominimal-subdiff}\nSuppose that $f$ is a finite-valued convex definable function. Then for any $x \\in \\RR^n$, the subdifferential $\\partial f(x)$ is definable.\n\\end{prop}\n\\begin{proof}\nFor every $x \\in \\RR^n$, the subdifferential $\\partial f(x)$ reads\n\\begin{equation*}\n \\partial f(x) = \\enscond{\\eta \\in \\RR^n}{f(x') \\geq f(x) + \\dotp{\\eta}{x'-x} \\quad \\forall x' \\in \\RR^n} .\n\\end{equation*}\nLet $K = \\enscond{(\\eta,x') \\in \\RR^n \\times \\RR^n}{f(x') < f(x) + \\dotp{\\eta}{x'-x}}$.\nHence, $\\partial f(x) = \\RR^n \\setminus \\Pi_{2n,n}(K)$.\nSince $f$ is definable, the set $K$ is also definable using Lemmas~\\ref{lem-ominimal-summult} and~\\ref{lem-ominimal-ineq}, whence definability of $\\partial f(x)$ follows using axiom 4. \\qed\n\\end{proof}\n\n\\begin{lem}[Graph of the relative interior]\\label{lem-ominimal-ris}\n Suppose that $f$ is a finite-valued convex definable function.\n Then, the set\n \\begin{equation*}\n \\enscond{(x,\\eta)}{\\eta \\in \\ri \\partial f(x)}\n \\end{equation*}\n is definable.\n\\end{lem}\n\\begin{proof}\n Denote $C = \\enscond{(x,\\eta)}{\\eta \\in \\ri \\partial f(x)}$.\n Using the characterization of the relative interior of a convex set \\citep[Theorem~6.4]{Rockafellar96}, we rewrite $C$ in the more convenient form \n \\begin{align*}\n C = \\{(x,\\eta) \\, : \\, &\n \\forall u \\in \\RR^n,\n \\forall z \\in \\RR^n, f(z) - f(x) \\geq \\dotp{u}{z-x} ,\\\\\n & \\exists t > 1,\n \\forall x' \\in \\RR^n,\n f(x') - f(x) \\geq \\dotp{(1-t) u + t \\eta}{x'-x} \\} .\n \\end{align*}\n Let $D = \\RR^n \\times \\RR^n \\times \\RR^n \\times \\RR^n \\times (1,+\\infty) \\times \\RR^n$ and $K$ defined as\n \\begin{equation*}\n K = \\enscond{(x,\\eta,u,z,t,x') \\!\\in\\! 
D}{f(z) - f(x) \\geq \\dotp{u}{z-x}, f(x') - f(x) \\geq \\dotp{(1-t) u + t \\eta}{x'-x}} .\n \\end{equation*}\n Thus,\n \\begin{equation*}\n C = \\RR^{2n} \\setminus \\Pi_{3n,2n} \\left(\n \\RR^{3n} \\setminus \\Pi_{4n,3n} \\left(\n \\Pi_{4n+1,4n} \\left(\n \\RR^{4n} \\times (1,+\\infty) \\setminus \\Pi_{5n+1,4n+1} (K)\n \\right)\n \\right)\n \\right) ,\n \\end{equation*}\n where the projectors and completions translate the actions of the existential and universal quantifiers. Using again axioms 4 and 1, we conclude. \\qed\n\\end{proof}\n\n\n\\section{Notations and preliminaries}\n\\label{sec:notations}\n\n\\myparagraph{Vectors and matrices}\nGiven a non-empty closed set $\\Cc \\subset \\RR^p$, we denote $\\proj_{\\Cc}$ the orthogonal projection on $\\Cc$. For a subspace $\\T \\subset \\RR^p$, we denote\n\\eq{\n\t\\xx_{\\T} = \\proj_{\\T}\\xx \\qandq\n\t\\XXT = \\XX \\proj_{\\T}.\n}\nFor a set of indices $I \\subset \\NN^*$, we will denote $\\xx_I$ (resp. $\\XX_I$) the subvector (resp. submatrix) whose entries (resp. columns) are those of $\\xx$ (resp. of $\\XX$) indexed by $I$. For a linear operator $A$, $\\adj{A}$ is its adjoint. For a matrix $M$, $\\transp{M}$ is its transpose and $M^+$ its Moore-Penrose pseudo-inverse.\n\n\\myparagraph{Sets} \nIn the following, for a non-empty set $\\Cc \\subset \\RR^p$, we denote $\\co \\Cc$ and $\\cone \\Cc$ respectively its convex and conical hulls. $\\iota_{\\Cc}$ is the indicator function of $\\Cc$ (takes $0$ in $\\Cc$ and $+\\infty$ otherwise), and $N_\\Cc(\\xx)$ is the cone normal to $\\Cc$ at $\\xx$. For a non-empty convex set $\\Cc$, its affine hull $\\Aff \\Cc$ is the smallest affine manifold containing it. It is a translate of $\\Lin \\Cc$, the subspace parallel to $\\Cc$, i.e. $\\Lin \\Cc = \\Aff \\Cc - \\xx = \\RR(\\Cc - \\Cc)$ for any $\\xx \\in \\Cc$. The relative interior $\\ri \\Cc$ (resp. relative boundary $\\rbd \\Cc$) of $\\Cc$ is its interior (resp. boundary) for the topology relative to its affine hull.\n\n\\myparagraph{Functions} \nFor a $\\Calt{1}$ vector field $v: y \\in \\RR^n \\mapsto v(y)$, $\\jac v(y)$ denotes its Jacobian at $y$. \nFor a $\\Cdeux$ smooth function $\\tilde{f}$, $\\dder \\tilde{f}(\\xx)[\\xi]=\\dotp{\\grad \\tilde{f}(\\xx)}{\\xi}$ is its directional derivative, $\\grad \\tilde{f}(\\xx)$ is its (Euclidean) gradient and $\\hess \\tilde{f}(\\xx)$ is its (Euclidean) Hessian at $\\xx$. For a bivariate function $g: (\\xx,y) \\in \\RR^p \\times \\RR^n \\to \\RR$ that is $\\Cdeux$ with respect to the first variable $\\xx$, for any $y$, we will denote $\\grad g(\\xx,y)$ and $\\hess g(\\xx,y)$ the gradient and Hessian of $g$ at $\\xx$ with respect to the first variable.\n\nA function $f: \\RR^p \\to \\RR \\cup \\ens{+\\infty}$ is lower semicontinuous (lsc) if its epigraph is closed. $\\lsc(\\RR^p)$ is the class of convex and lsc functions which are proper (i.e.~not everywhere $+\\infty$). $\\partial f$ is the (set-valued) subdifferential operator of $f \\in \\lsc(\\RR^p)$. If $f$ is differentiable at $\\xx$ then $\\grad f(\\xx)$ is its unique subgradient, i.e. $\\partial f(\\xx) = \\ens{\\grad f(\\xx)}$. \n\nConsider a function $\\J \\in \\lsc(\\RR^p)$ such that $\\partial \\J(\\xx) \\neq \\emptyset$. 
We denote $\\S_\\xx$ the subspace parallel to $\\partial J(\\xx)$ and its orthogonal complement $\\T_\\xx$, i.e.\n\\begin{equation}\\label{eq-dfn-T-gauges}\n \\S_\\xx = \\Lin(\\partial J(\\xx)) \\qandq \\T_\\xx = \\S_\\xx^\\perp.\n\\end{equation}\nWe also use the notation \n\\begin{equation*}\n \\e{\\xx} = \\proj_{\\Aff(\\partial J(\\xx))}(0) ,\n\\end{equation*}\ni.e. the projection of 0 onto the affine hull of $\\partial \\J(\\xx)$.\n\n\\myparagraph{Differential and Riemannian geometry}\nLet $\\Mm$ be a $\\Cdeux$-smooth embedded submanifold of $\\RR^p$ around $\\xx^\\star \\in \\Mm$. To lighten notation, henceforth we shall state $\\Cdeux$-manifold instead of $\\Cdeux$-smooth embedded submanifold of $\\RR^p$. $\\Tgt_{\\xx}(\\Mm)$ denotes the tangent space to $\\Mm$ at any point $\\xx \\in \\Mm$ near $\\xx^\\star$. The natural embedding of a submanifold $\\Mm$ into $\\RR^p$ permits to define a Riemannian structure on $\\Mm$, and we simply say $\\Mm$ is a Riemannian manifold. For a vector $v \\in \\Tgt_{\\xx}(\\Mm)^\\perp$, the Weingarten map of $\\Mm$ at $\\xx$ is the operator $\\Afk_{\\xx}(\\cdot,v): \\Tgt_{\\xx}(\\Mm) \\to \\Tgt_{\\xx}(\\Mm)$ defined as\n\\[\n\\Afk_{\\xx}(\\xi,v) = -\\proj_{\\Tgt_{\\xx}(\\Mm) }\\dder V[\\xi]\n\\]\nwhere $V$ is any local extension of $v$ to a normal vector field on $\\Mm$. The definition is independent of the choice of the extension $V$, and $\\Afk_{\\xx}(\\cdot,v)$ is a symmetric linear operator which is closely tied to the second fundamental form of $\\Mm$; see~\\citep[Proposition~II.2.1]{chavel2006smooth}.\n\nLet $f$ be a real-valued function which is $\\Cdeux$ on $\\Mm$ around $\\xx^\\star$. The covariant gradient of $f$ at $\\xx$ is the vector $\\grad_{\\Mm} f(\\xx) \\in \\Tgt_\\xx(\\Mm)$ such that\n\\[\n\\dotp{\\grad_{\\Mm} f(\\xx)}{\\xi} = \\frac{d}{dt}f\\pa{\\proj_{\\Mm}(\\xx+t\\xi)}\\big|_{t=0} , \\forall \\xi \\in \\Tgt_\\xx(\\Mm) ~.\n\\]\nThe covariant Hessian of $f$ at $\\xx$ is the symmetric linear mapping $\\hess_\\Mm f(\\xx)$ from $\\Tgt_\\xx(\\Mm)$ into itself defined as\n\\[\n\\dotp{\\hess_{\\Mm} f(\\xx)\\xi}{\\xi} = \\frac{d^2}{dt^2}f\\pa{\\proj_{\\Mm}(\\xx+t\\xi)}\\big|_{t=0} , \\forall \\xi \\in \\Tgt_\\xx(\\Mm) ~.\n\\]\nThis definition agrees with the usual definition using geodesics or connections \\citep{miller2005newton}. Assume now that $\\Mm$ is a Riemannian embedded submanifold of $\\RR^p$, and that a function $f$ has a smooth restriction on $\\Mm$. This can be characterized by the existence of a smooth extension (representative) of $f$, i.e.~a smooth function $\\tilde{f}$ on $\\RR^p$ such that $\\tilde{f}$ and $f$ agree on $\\Mm$. Thus, the Riemannian gradient $\\grad_{\\Mm} f(\\xx)$ is also given by\n\\eql{\n\\label{eq:covgrad}\n\\grad_{\\Mm} f(\\xx) = \\proj_{\\Tgt_\\xx(\\Mm)} \\grad \\tilde{f}(\\xx)\n}\nand, $\\forall \\xi \\in \\Tgt_\\xx(\\Mm)$, the Riemannian Hessian reads\n\\begin{align}\n\\label{eq:covhess}\n\\hess_\\Mm f(\\xx)\\xi\t&= \\proj_{\\Tgt_\\xx(\\Mm)} \\dder\\pa{\\grad_{\\Mm} f}(\\xx)[\\xi] = \\proj_{\\Tgt_\\xx(\\Mm)}\\dder\\pa{\\xx \\mapsto \\proj_{\\Tgt_\\xx(\\Mm)} \\grad \\tilde{f}(\\xx)}[\\xi] \\nonumber\\\\\n\t\t\t&= \\proj_{\\Tgt_\\xx(\\Mm)} \\hess \\tilde{f}(\\xx) \\proj_{\\Tgt_\\xx(\\Mm)} \\xi + \\Afk_{\\xx}\\pa{\\xi,\\proj_{\\Tgt_\\xx(\\Mm)^\\perp} \\grad \\tilde{f}(\\xx)} ~,\n\\end{align}\nwhere the last equality comes from \\citep[Theorem~1]{Absil13}. 
When $\\Mm$ is an affine or linear subspace of $\\RR^p$, then obviously $\\Mm=\\xx+\\Tgt_\\xx(\\Mm)$, and $\\Afk_{\\xx}\\pa{\\xi,\\proj_{\\Tgt_\\xx(\\Mm)^\\perp} \\grad \\tilde{f}(\\xx)}=0$, hence \\eqref{eq:covhess} becomes\n\\eql{\n\\label{eq:covhesslin}\n\\hess_{\\Mm} f(\\xx) = \\proj_{\\Tgt_\\xx(\\Mm)}\\hess \\tilde{f}(\\xx)\\proj_{\\Tgt_\\xx(\\Mm)} ~.\n}\nSimilarly to the Euclidean case, for a real-valued bivariate function $g$ that is $\\Cdeux$ on $\\Mm$ with respect to the first variable around $\\xx$, for any $y$, we will denote $\\grad_\\Mm g(\\xx,y)$ and $\\hess_\\Mm g(\\xx,y)$ the Riemannian gradient and Hessian of $g$ at $\\xx$ with respect to the first variable. See e.g. \\citep{lee2003smooth,chavel2006smooth} for more material on differential and Riemannian manifolds.\n\n\n\n\n\n\n\\section{Sensitivity Analysis of $\\xxs(y)$}\n\\label{sec:local}\n\n\nIn all the following, we consider the variational regularized problem~\\eqref{eq-group-lasso}. We recall that $\\J \\in \\lsc(\\RR^p)$ and is partly smooth. We also suppose that the fidelity term fulfills the following conditions:\n\\eql{\\label{hyp-f-reg}\\tag{$C_{F}$}\n\t\\foralls y \\in \\RR^n, \\quad \n\t\\F(\\cdot,y) \\in \\Calt{2}(\\RR^p)\n\t\\qandq\n\t\\foralls \\xx \\in \\RR^p, \\quad \\F(\\xx,\\cdot) \\in \\Calt{2}(\\RR^n).\n}\nCombining \\eqref{eq:covhess} and the first part of assumption~\\eqref{hyp-f-reg}, we have for all $y \\in \\RR^n$\n\\eql{\\label{eq:covhessF}\n\\Fxx(\\xx,y)\\xi = \\proj_{\\T_\\xx}\\hess \\F(\\xx,y)\\proj_{\\T_\\xx}\\xi + \\Afk_{\\xx}\\pa{\\xi,\\proj_{\\S_\\xx} \\grad \\F(\\xx,y)} .\n}\nWhen $\\Mm$ is affine or linear, equation \\eqref{eq:covhessF} becomes\n\\eql{\\label{eq:covhessFlin}\n\\Fxx(\\xx,y)\\xi = \\proj_{\\T_\\xx}\\hess \\F(\\xx,y)\\proj_{\\T_\\xx}\\xi .\n}\n\n\\subsection{Restricted positive definiteness}\nIn this section, we aim at computing the derivative of the (set-valued) map $y \\mapsto \\xsoly(y)$ whenever this is possible. The following condition plays a pivotal role in this analysis.\n\\begin{defn}[Restricted Positive Definiteness] A vector $\\xx \\in \\RR^p$ is said to satisfy the \\emph{restricted positive definiteness condition} if, and only if,\n\\eql{\\tag{$\\CondInj{\\xx}{y}$}\\label{eq-injectivity-cond}\n\t\\dotp{(\\Fxx(\\xx,y) + \\Q(\\xx))\\xi}{\\xi} > 0 \\quad \\forall ~ 0 \\neq \\xi \\in \\T_{\\xx} .\n}\n\\end{defn}\n\nCondition~\\eqref{eq-injectivity-cond} has a convenient rewriting in the following case.\n\\begin{lem}\\label{lem:injectivity-cond}\nLet $\\J \\in \\lsc(\\RR^p)$ be partly smooth at $\\xx \\in \\RR^p$ relative to $\\Mm$, and set $\\T = \\T_{\\xx}$. Assume that $\\Fxx(\\xx,y)$ and $\\Q(\\xx)$ are positive semidefinite on $\\T$. Then\n\\[\n\\text{\\eqref{eq-injectivity-cond} holds if and only if }~~ \\Ker( \\Fxx(\\xx,y) ) \\cap \\Ker(\\Q(\\xx)) \\cap \\T = \\{ 0 \\} .\n\\]\nFor instance, the positive semidefiniteness assumption is satisfied when $\\Mm$ is an affine or linear subspace.\n\\end{lem}\n\nWhen $\\F$ takes the form \\eqref{eq-fidelity-decompos} with $\\F_0$ the squared loss, condition~\\eqref{eq-injectivity-cond} can be interpreted as follows in the examples we discussed so far. \n\n\\begin{ex}[Polyhedral penalty]\n\\label{ex:injpolyh}\nRecall that a polyhedral penalty is partly smooth at $\\xx$ relative to $\\Mm=\\xx+\\T_{\\xx}$. 
Combining this with Example~\\ref{ex:lassohess}, condition \\eqref{eq-injectivity-cond} specializes to\n\\[\n\\Ker(\\XX_{\\T_{\\xx}}) = \\ens{0} .\n\\]\n\n\\paragraph{Lasso}\nApplying this to the Lasso (see~Example~\\ref{ex:lassoM}), \\eqref{eq-injectivity-cond} reads $\\Ker(\\XX_\\Lambda)=\\ens{0}$, with $\\Lambda=\\supp(\\xx)$. This condition is already known in the literature, see for instance~\\citep{2012-kachour-statsinica}.\n\\paragraph{General Lasso}\nIn this case, Example~\\ref{ex:analassoM} entails that \\eqref{eq-injectivity-cond} becomes\n\\[\n\\Ker(\\XX) \\cap \\Ker(D_{\\Lambda^c}^*)=\\ens{0}, \\qwhereq \\Lambda = \\supp(D^* \\xx) .\n\\]\nThis condition was proposed in~\\citep{vaiter-local-behavior}. \n\\end{ex}\n\n\\begin{ex}[Group Lasso]\n\\label{ex:injglasso}\nFor the case of the group Lasso, by virtue of Lemma~\\ref{lem:unique}(ii) and Example~\\ref{ex:glassohess}, one can see that condition~\\eqref{eq-injectivity-cond} amounts to assuming that the system $\\enscond{\\XX_b \\beta_b}{b \\in \\Bb, \\xx_{b} \\neq 0}$ is linearly independent. This condition appears in \\citep{LiuZhang09} to establish $\\ldeux$-consistency of the group Lasso. It goes without saying that condition~\\eqref{eq-injectivity-cond} is much weaker than imposing that $\\XX_\\Lambda$ is full column rank, which is standard when analyzing the Lasso.\n\\end{ex}\n\n\\begin{ex}[General group Lasso]\n\\label{ex:injgglasso}\nFor the general group Lasso, let $I_{\\xx}=\\enscond{i}{b_i \\in \\Bb \\tandt D^*_{b_i}\\xx \\neq 0}$, i.e.~the set indexing the active blocks of $D^*\\xx$. Combining Example~\\ref{ex:gglassoM} and Example~\\ref{ex:gglassohess}, one has\n\\begin{align*}\n\\Ker(\\Q(\\xx)) &\\cap \\Ker(D_{\\Lambda^c}^*) = \\\\ &\\enscond{h \\in \\RR^p}{D_{b_i}^*h=0 ~ \\forall i \\notin I_{\\xx} \\tandt D^*_{b_i} h \\in \\RR ~ D^*_{b_i} \\xx ~ \\forall i \\in I_{\\xx}} ,\n\\end{align*}\nwhere $\\Lambda = \\bs(D^* \\xx)$. Indeed, $\\delta_{D^*\\xx}$ is a diagonal strictly positive linear operator, and $Q_{(D^*\\xx)^\\perp}$ is a block-wise linear orthogonal projector, and we get for $h \\in \\Ker(D_{\\Lambda^c}^*)$,\n\\begin{align*}\nh \\in \\Ker(\\Q(\\xx))\t& \\iff \\dotp{h}{\\Q(\\xx)h} = 0 \\\\\n\t\t\t& \\iff \\dotp{D^*h}{\\pa{\\delta_{D^*\\xx} \\circ Q_{(D^*\\xx)^\\perp}} D^*h} = 0 \\\\\n\t\t\t& \\iff \\sum_{i \\in I_{\\xx}} \\frac{\\norm{\\proj_{(D^*_{b_i}\\xx)^\\perp}(D^*_{b_i} h)}^2}{\\norm{D^*_{b_i}\\xx}} = 0 \\\\\n\t\t\t& \\iff D^*_{b_i} h \\in \\RR ~ D^*_{b_i} \\xx \\quad \\forall i \\in I_{\\xx} .\n\\end{align*}\nIn turn, by Lemma~\\ref{lem:unique}(ii), condition~\\eqref{eq-injectivity-cond} is equivalent to saying that $0$ is the only vector in the set\n\\[\n\\enscond{h \\in \\RR^p}{\\XX h = 0 \\tandt D^*_{b_i} h = 0 ~ \\forall i \\notin I_{\\xx} \\tandt D^*_{b_i} h \\in \\RR ~ D^*_{b_i} \\xx ~ \\forall i \\in I_{\\xx} } .\n\\]\nObserve that when $D$ is a Parseval tight frame, i.e.~$DD^*=\\Id$, the above condition is also equivalent to saying that the system $\\enscond{\\pa{\\XX D}_{b_i} D^*_{b_i}\\beta}{i \\in I_{\\xx}}$ is linearly independent.\n\\end{ex}\n\n\\begin{ex}[Nuclear norm]\n\\label{ex:injnuc}\nWe have seen in Example~\\ref{ex:nuchess} that the nuclear norm has a $\\Cdeux$-smooth representative which is also convex. 
It then follows from \\eqref{eq:nuchess} that the Riemannian Hessian of the nuclear norm at $\\xx$ is positive semidefinite on $\\T_\\xx$, where $\\T_\\xx$ is given in \\eqref{eq:Tnuc}.\n\nAs far as $\\F$ is concerned, one cannot conclude in general on positive semidefiniteness of its Riemannian Hessian. Let's consider the case where $\\xx \\in \\Sbf^{p_1}$, the vector space of real $p_1 \\times p_1$ symmetric matrices endowed with the trace (Frobenius) inner product $\\dotptr{\\xx}{\\xx'}=\\tr(\\xx\\xx')$. From \\eqref{eq:weingnuc} and \\eqref{eq:covhessF}, we have for any $\\xi \\in \\T_{\\xx} \\cap \\Sbf^{p_1}$\n\\begin{align*}\n\\dotptr{\\xi}{\\Fxx(\\xx,y)(\\xi)} = &\\dotptr{\\xi}{\\proj_{\\T_{\\xx}} \\hess\\F(\\xx,y)(\\proj_{\\T_{\\xx}} \\xi)} \\\\\n& + 2\\dotptr{\\xi U \\diag(\\uplambda(\\xx))^{-1} \\transp{U} \\xi}{\\proj_{\\S_\\xx}\\Fx(\\xx,y)} .\n\\end{align*}\nAssume that $\\xx$ is a global minimizer of \\lasso, which by Lemma~\\ref{lem:first-order}, implies that\n\\[\n\\proj_{\\S_\\xx}\\Fx(\\xx,y) = U_{\\perp} \\diag(\\upgamma) \\transp{U_{\\perp}}\n\\]\nwhere $U_{\\perp} \\in \\RR^{p_1 \\times (p_1-r)}$ is a matrix whose columns are orthonormal and orthogonal to those of $U$, and $\\upgamma \\in [-1,1]^{p_1-r}$. We then get\n\\begin{align*}\n\\dotptr{\\xi}{\\Fxx(\\xx,y)(\\xi)} = &\\dotptr{\\xi}{\\proj_{\\T_{\\xx}} \\hess \\F(\\xx,y)(\\proj_{\\T_{\\xx}} \\xi)} \\\\\n& + 2\\dotptr{\\transp{U_\\perp}\\xi U \\diag(\\uplambda(\\xx))^{-1} \\transp{U} \\xi U_\\perp}{\\diag(\\upgamma)} .\n\\end{align*}\nIt is then sufficient that $\\xx$ is such that the entries of $\\upgamma$ are positive for $\\Fxx(\\xx,y)$ to be indeed positive semidefinite on $\\T_\\xx$. In this case, Lemma~\\ref{lem:injectivity-cond} applies.\n\\end{ex}\n\nIn a nutshell, Lemma~\\ref{lem:injectivity-cond} does not always apply to the nuclear norm as $\\Fxx(\\xx,y)$ is not always guaranteed to be positive semidefinite in this case. One may then wonder whether there exist partly smooth functions $\\J$, with a non-flat active submanifold, for which Lemma~\\ref{lem:injectivity-cond} applies, at least at some minimizer of \\lasso. The answer is affirmative for instance for the regularizer of Example~\\ref{ex:maxnormM}.\n\n\\begin{ex}[$\\J=\\max(\\norm{\\cdot}-1,0)$]\n\\label{ex:injmaxnorm}\nLet $\\xx \\in \\sph^{p-1}$. From Example~\\ref{ex:maxnormhess}, we have for $\\xi \\in \\T_\\xx$\n\\[\n\\dotp{\\xi}{\\Fxx(\\xx,y)\\xi} = \\dotp{\\xi}{\\hess \\F(\\xx,y)\\xi} - \\norm{\\xi}^2\\dotp{\\xx}{\\Fx(\\xx,y)} .\n\\]\nAssume that $\\xx$ is a global minimizer of \\lasso, which by Lemma~\\ref{lem:first-order}, implies that\n\\[\n-\\Fx(\\xx,y) \\in \\xx [0,1] \\Rightarrow -\\dotp{\\xx}{\\Fx(\\xx,y)} \\in [0,1] .\n\\]\nThus, $\\dotp{\\xi}{\\Fxx(\\xx,y)\\xi} \\geq 0$, for all $\\xi \\in \\T_\\xx$. Since from Example~\\ref{ex:maxnormhess}, $\\Q(\\xx)=0$, Lemma~\\ref{lem:injectivity-cond} applies at $\\xx$. Condition~\\eqref{eq-injectivity-cond} then holds if, and only if, $\\Fxx(\\xx,y)$ is positive definite on $\\T_\\xx$. For the case of a quadratic loss, this is equivalent to \n\\[\n\\Ker(\\XX) \\cap \\T_\\xx = \\ens{0} \\quad \\text{or} \\quad \\text{$\\xx$ is not a minimizer of $\\F(\\cdot,y)$}.\n\\]\n\\end{ex}\n\n\n\\subsection{Sensitivity analysis: Main result}\nLet us now turn to the sensitivity of any minimizer $\\xsoly(y)$ of \\lasso to perturbations of $y$. Because of non-smoothness of the regularizer $\\J$, it is a well-known fact in sensitivity analysis that one cannot hope for a global claim, i.e. 
an everywhere smooth mapping\\footnote{To be understood here as a set-valued mapping.} $y \\mapsto \\xsoly(y)$. Rather, the sensitivity behaviour is local. This is why we need to introduce the following transition space $\\Hh$, which basically captures points of non-smoothness of $\\xsoly(y)$.\n\nLet's denote the set of all possible partial smoothness active manifolds $\\Mm_{\\xx}$ associated to $\\J$ as\n\\begin{equation}\n\\label{eq:Tt}\n \\Mscr = \\left\\{ \\Mm_\\xx \\right\\}_{\\xx \\in \\RR^p}.\n\\end{equation}\nFor any $\\Mm \\in \\Mscr$, we denote $\\Mc$ the set of vectors sharing the same partial smoothness manifold $\\Mm$,\n\\begin{equation*}\n \\Mc = \\enscond{\\xx' \\in \\RR^p}{\\Mm_{\\xx'}=\\Mm}.\n\\end{equation*}\nFor instance, when $J=\\norm{\\cdot}_1$, $\\Mc_\\xx$ is the cone of all vectors sharing the same support as $\\xx$.\n\n\\begin{defn}\\label{defn:h}\n The \\emph{transition space} $\\Hh$ is defined as\n \\begin{align*}\n \\Hh = \\bigcup_{\\Mm \\in \\Mscr} \\; \\Hh_{\\Mm},\n \\qwhereq \\Hh_{\\Mm} &= \\bd(\\Pi_{n+p,n}(\\Aa_{\\Mm})) ,\n \\end{align*}\n where $\\Mscr$ is given by \\eqref{eq:Tt}, and we denote \n \\eq{\n \\Pi_{n+p,n} : \n \\left\\{\n \\begin{array}{ccc}\n \\RR^n \\times \\Mc & \\longrightarrow & \\RR^n \\\\\n (y, \\xx) & \\longmapsto & y \n \\end{array}\n \\right.\n }\n the canonical projection on the first $n$ coordinates, $\\bd \\Cc$ is the boundary of the set $\\Cc$, and \n \\begin{equation*}\n \\Aa_{\\Mm} =\n \\enscond{\n (y, \\xx) \\in \\RR^n \\times \\Mc\n }{\n - \\Fx( \\xx, y) \\in \\rbd \\partial J(\\xx)\n }.\n \\end{equation*}\n\\end{defn}\n\n\\begin{rem}\n\\label{rem:transpace}\nBefore stating our result, some comments about this definition are in order. When $\\bd$ is removed in the definition of $\\Hh_{\\Mm}$, we recover the classical setting of sensitivity analysis under partial smoothness, where $\\Hh_{\\Mm}$ contains the set of degenerate minimizers (those such that $0$ is in the relative boundary of the subdifferential of $\\F(\\cdot,y)+\\J$). This is considered for instance in \\citep{Bolte2011,LewisGenericConvex11} who studied sensitivity of the minimizers of $\\xx \\mapsto f_{\\upnu}(\\xx) \\eqdef f(\\xx)-\\dotp{\\upnu}{\\xx}$ to perturbations of $\\upnu$ when $f \\in \\lsc(\\RR^{p})$ is partly smooth; see also \\citep{LewisGenericSemialgebraic15} for the semialgebraic, not necessarily convex, case. These authors showed that for $\\upnu$ outside a set of Lebesgue measure zero, $f_\\upnu$ has a non-degenerate minimizer with quadratic growth of $f_{\\upnu}$, and for each $\\bar{\\upnu}$ near $\\upnu$, the perturbed function $f_{\\bar{\\upnu}}$ has a unique minimizer that lies on the active manifold of $f_{\\upnu}$ with quadratic growth of $f_{\\bar{\\upnu}}$. These results however do not apply to our setting in general. To see this, consider the case of \\lasso where $\\F$ takes the form \\eqref{eq-fidelity-decompos} with $\\F_0$ the quadratic loss (the same applies to other losses in the exponential family just as well). Then, \\lasso is equivalent to minimizing $f_{\\upnu}$, with $f=\\J+\\norm{\\XX\\cdot}^2$ and $\\upnu=2\\transp{\\XX}y$. It goes without saying that, in general (i.e. for any $\\XX$), a property valid for $\\upnu$ outside a zero Lebesgue measure set does \\textbf{not} imply it holds for $y$ outside a zero Lebesgue measure set. To circumvent such a difficulty, our key contribution is to consider the boundary of $\\Hh_{\\Mm}$. 
This turns out to be crucial to get a set of dimension potentially strictly less than $n$, hence negligible, as we will show under a mild o-minimality assumption (see Section~\\ref{sec:dof}). However, doing so, uniqueness of the minimizer is no longer guaranteed.\n\\end{rem}\n\n\nIn the particular case of the Lasso (resp. general Lasso), i.e. $\\F_0$ is the squared loss, $\\J=\\norm{\\cdot}_1$ (resp. $\\J=\\norm{D^* \\cdot}_1$), the transition space $\\Hh$ specializes to the one introduced in~\\citep{2012-kachour-statsinica} (resp. \\citep{vaiter-local-behavior}). In these specific cases, since $\\J$ is a polyhedral gauge, $\\Hh$ is in fact a union of affine hyperplanes. The geometry of this set can be significantly more complex for other regularizers. For instance, for $\\J = \\norm{\\cdot}_{1,2}$, it can be shown to be a semi-algebraic set (union of algebraic hyper-surfaces). Section~\\ref{sec:dof} is devoted to a detailed analysis of this set $\\Hh$.\n\nWe are now equipped to state our main sensitivity analysis result, whose proof is deferred to Section~\\ref{sub:local}.\n\n\\begin{thm}\\label{thm-local}\n Assume that~\\eqref{hyp-f-reg} holds. Let $y \\not\\in \\Hh$, and let $\\xxs(y)$ be a solution of \\lasso where $\\J \\in \\lsc(\\RR^p)$ is partly smooth at $\\xxs(y)$ relative to $\\Mm \\eqdef \\Mm_{\\xxs(y)}$ and such that $(\\CondInj{\\xxs(y)}{y})$ holds.\n Then, there exists an open neighborhood $\\neighb \\subset \\RR^n$ of $y$, and a mapping $\\solm : \\neighb \\to \\Mm$ such that\n \\begin{enumerate}\n \\item For all $\\bar y \\in \\neighb$, $\\solmB$ is a solution of $(\\lassoB)$, and $\\solm(y) = \\xxs(y)$.\n \\item The mapping $\\solm$ is $\\Calt{1}(\\neighb)$ and\n \\eql{\\label{eq-differential}\n \\foralls \\bar y \\in \\neighb, \\quad\n \\jac \\solm(\\bar y) = -( \\Fxx(\\solm(\\bar y),\\bar y) + \\Q(\\solm(\\bar y)) )^{+} \n \\proj_{\\T_{\\solm(\\bar y)}} \n \\Fxy(\\solm(\\bar y),\\bar y),\n }\n where $\\Fxy(\\xx,y)$ is the Jacobian of $\\Fx(\\xx,\\cdot)$ with respect to the second variable evaluated at $y$.\n \\end{enumerate}\n\\end{thm}\nTheorem~\\ref{thm-local} can be extended to the case where the data fidelity is of the form $\\F(\\xx,\\theta)$ for some parameter $\\theta$, with no particular role of $y$ here.\n\n\\section{Introduction}\n\\label{sec:introduction}\n\n\\subsection{Regression and Regularization}\n\nWe consider a model\n\\begin{equation}\\label{eq:linear-problem}\n \\EE(Y|X) = h(\\XX\\xx_0),\n\\end{equation}\nwhere $Y=(Y_1,\\ldots,Y_n)$ is the response vector, $\\xx_0 \\in \\RR^p$ is the unknown vector of linear regression coefficients, $\\XX \\in \\RR^{n \\times p}$ is the fixed design matrix whose columns are the $p$ covariate vectors, and the expectation is taken with respect to some $\\sigma$-finite measure. $h : \\RR^n \\to \\RR^n$ is a known smooth function. The goal is to design an estimator of $\\xx_0$ and to study its properties. In the sequel, we do not make any specific assumption on the number of observations $n$ with respect to the number of predictors $p$. Recall that when $n < p$,~\\eqref{eq:linear-problem} is underdetermined, whereas when $n \\geq p$ and all the columns of $\\XX$ are linearly independent, it is overdetermined.\n\nMany examples fall within the scope of model \\eqref{eq:linear-problem}. 
We here review two of them.\n\n\\begin{ex}[GLM]\nOne naturally thinks of generalized linear models (GLMs) \\citep{McCullaghNelder89} which assume that conditionally on $\\XX$, $Y_i$ are independent with distribution that belongs to a given (one-parameter) standard exponential family. Recall that the random variable $Z \\in \\RR$ has a distribution in this family if its distribution admits a density with respect to some reference $\\sigma$-finite measure on $\\RR$ of the form\n\\[\np(z;\\theta) = B(z)\\exp(z\\theta - \\varphi(\\theta)), \\quad \\theta \\in \\Theta \\subseteq \\RR ~,\n\\]\nwhere $\\Theta$ is the natural parameter space and $\\theta$ is the canonical parameter. For model \\eqref{eq:linear-problem}, the distribution of $Y$\nbelongs to the $n$-parameter exponential family and its density reads\n\\begin{equation}\n\\label{eq:pdfexp}\nf(y|\\XX;\\xx_0) = \\pa{\\prod_{i=1}^n B_i(y_i)}\\exp\\pa{\\dotp{y}{\\XX\\xx_0} - \\sum_{i=1}^n\\varphi_i\\pa{\\pa{\\XX\\xx_0}_i}}, \\quad \\XX\\xx_0 \\in \\Theta^n ~,\n\\end{equation}\nwhere $\\dotp{\\cdot}{\\cdot}$ is the inner product, and the canonical parameter vector is the linear predictor $\\XX\\xx_0$. In this case, $h(\\mu)=(h_i(\\mu_i))_{1 \\leq i \\leq n}$, where $h_i$ is the {\\textit{inverse}} of the link function in the language of GLM. Each $h_i$ is a monotonic differentiable function, and a typical choice is the canonical link $h_i=\\varphi_i'$, where $\\varphi_i'$ is one-to-one if the family is regular \\citep{Brown86}. \n\\end{ex}\n\n\\begin{ex}[Transformations]\nThe second example is where $h$ plays the role of a transformation such as variance-stabilizing transformations (VSTs), symmetrizing transformations, or bias-corrected transformations. There is an enormous body of literature on transformations, going back to the early 1940s. A typical example is when $Y_i$ are independent Poisson random variables $\\sim \\Pp\\pa{(\\XX\\xx_0)_i}$, in which case $h_i$ takes the form of the Anscombe bias-corrected VST. See \\citep[Chapter~4]{DasGupta08} for a comprehensive treatment and more examples.\n\\end{ex}\n\n\\subsection{Variational Estimators}\n\\label{sec:block_regularizations}\n\nRegularization is now a central theme in many fields including statistics, machine learning and inverse problems. It allows one to impose on the set of candidate solutions some prior structure on the object to be estimated. This regularization ranges from squared Euclidean or Hilbertian norms~\\citep{Tikhonov97}, to non-Hilbertian norms that have sparked considerable interest in the recent years.\n\nGiven observations $(y_1,\\ldots,y_n)$, we consider the class of estimators obtained by solving the convex optimization problem\n\\begin{equation}\\label{eq-group-lasso}\\tag{\\regulP{y}}\n \\xsoly(y) \\in\n \\uArgmin{ \\xx \\in \\RR^p }\n \\F(\\xx,y) + \\J(\\xx) ~.\n\\end{equation}\nThe fidelity term $\\F$ is of the following form\n\\begin{equation}\\label{eq-fidelity-decompos}\n \\F(\\xx,y) = \\F_0(\\XX \\xx, y)\n\\end{equation}\nwhere $\\F_0(\\cdot,y)$ is a general loss function assumed to be a proper, convex and sufficiently smooth function of its first argument; see Section~\\ref{sec:blocks} for a detailed exposition of the smoothness assumptions. The regularizing penalty $\\J$ is proper lower semicontinuous and convex, and promotes some specific notion of simplicity\/low-complexity on $\\xsoly(y)$; see Section~\\ref{sec:blocks} for a precise description of the class of regularizing penalties $\\J$ that we consider in this paper. 
The type of convex optimization problem in \\eqref{eq-group-lasso} is referred to as a regularized $M$-estimator in \\cite{WainwrightDecomposable12}, where $\\J$ is moreover assumed to have a special decomposability property. \n\nWe now provide some illustrative examples of loss functions $\\F$ and regularizing penalties $\\J$ routinely used in signal processing, imaging sciences and statistical machine learning.\n\n\\begin{ex}[Generalized linear models]\\label{exp-glm}\nGeneralized linear models in the exponential family fall into the class of losses we consider. Indeed, taking the negative log-likelihood corresponding to \\eqref{eq:pdfexp} gives\\footnote{Strictly speaking, the minimization may have to be over a convex subset of $\\RR^p$.}\n\\begin{equation}\n\\label{eq:fidexp}\n\\F_0(\\mu,y) = \\sum_{i=1}^n\\varphi_i\\pa{\\mu_i} - \\dotp{y}{\\mu} ~.\n\\end{equation}\nIt is well-known that if the exponential family is regular, then $\\varphi_i$ is proper, infinitely differentiable, its Hessian is positive definite, and thus it is strictly convex~\\citep{Brown86}. Therefore, $\\F_0(\\cdot,y)$ shares exactly the same properties. We recover the squared loss $\\F_0(\\mu,y)=\\frac{1}{2}\\norm{y-\\mu}^2$ for the standard linear models (Gaussian case), and the logistic loss $\\F_0(\\mu,y) = \\sum_{i=1}^n\\log\\pa{1+\\exp(\\mu_i)} - \\dotp{y}{\\mu}$ for logistic regression (Bernoulli case).\n\nGLM estimators with losses \\eqref{eq:fidexp} and $\\lun$ or $\\lun-\\ldeux$ (group) penalties have been previously considered and some of their properties studied, e.g., in \\citep{Bunea08,VandeGeer08,VandeGeer08b,Meier08,Bach10,Kakade10}; see also \\cite[Chapter 3, 4 and 6]{BuhlmannVandeGeerBook11}.\n\\end{ex}\n\n\\begin{ex}[Lasso]\nThe Lasso regularization is used to promote the sparsity of the minimizers, see~\\citep{chen1999atomi,tibshirani1996regre,osborne2000new,donoho2006most,Candes09,BickelLassoDantzig07}, and~\\citep{BuhlmannVandeGeerBook11} for a comprehensive review. It corresponds to choosing $\\J$ as the $\\lun$-norm\n\\begin{equation}\n \\label{lun-synthesis}\n J(\\xx) = \\normu{\\xx} = \\sum_{i=1}^p \\abs{\\xx_i} .\n\\end{equation}\nIt is also referred to as $\\lun$-synthesis in the signal processing community, in contrast to the more general $\\lun$-analysis sparsity penalty detailed below.\n\\end{ex}\n\n\n\\begin{ex}[General Lasso]\nTo allow for more general sparsity penalties, it may be desirable to promote sparsity through a linear operator $D = (d_1,\\ldots,\\allowbreak d_q) \\in \\RR^{p \\times q}$. This leads to the so-called analysis-type sparsity penalty (a.k.a. general Lasso after~\\cite{tibshirani2012dof}) where the $\\lun$-norm is pre-composed by $D^*$, hence giving\n\\begin{equation}\n \\label{lun-analysis}\n J(\\xx) = \\normu{D^* \\xx} = \\sum_{j=1}^q \\abs{\\dotp{d_j}{\\xx}} .\n\\end{equation}\nThis of course reduces to the usual Lasso penalty \\eqref{lun-synthesis} when $D = \\Id_p$. The penalty \\eqref{lun-analysis} encapsulates several important penalties including that of the 1-D total variation~\\citep{rudin1992nonlinear}, and the fused Lasso \\citep{tibshirani2005sparsity}. In the former, $D^*$ is a finite difference approximation of the derivative, and in the latter, $D^*$ is the concatenation of the identity matrix $\\Id_p$ and the finite difference matrix to promote both the sparsity of the vector and that of its variations.\n\\end{ex}\n\n\\begin{ex}[$\\linf$ Anti-sparsity]\n\\label{ex:linf}\nIn some cases, the vector to be reconstructed is expected to be flat. 
Such a prior can be captured using the $\\linf$ norm (a.k.a. Tchebycheff norm)\n\\begin{equation}\n \\label{linf}\n\tJ(\\xx) = \\normi{\\xx} = \\umax{i \\in \\ens{1,\\dots,p}} \\abs{\\xx_i}.\n\\end{equation}\nMore generally, it is worth mentioning that a finite-valued function $\\J$ is polyhedral convex (including the Lasso, general Lasso and $\\linf$ penalties) if and only if it can be expressed as $\\umax{i \\in \\ens{1,\\dots,q}} \\pa{\\dotp{d_i}{\\xx} - b_i}$, where the vectors $d_i$ define the facets of the sublevel set at $1$ of the penalty~\\citep{Rockafellar96}. The $\\linf$ regularization has found applications in computer vision~\\citep{jegou2012anti}, vector quantization~\\citep{lyubarskii2010uncertainty}, and wireless network optimization~\\citep{studer12signal}.\n\\end{ex}\n\n\\begin{ex}[Group Lasso]\nWhen the covariates are assumed to be clustered in a few active groups\/blocks, the group Lasso has been advocated since it promotes sparsity of the groups, i.e. it drives all the coefficients in one group to zero together, hence leading to group selection; see~\\citep{bakin1999adaptive,yuan2006model,bach2008consistency,Wei10} to cite a few. The group Lasso penalty reads\n\\begin{equation}\n \\label{lun-deux-synthesis}\n J(\\xx) = \\norm{\\xx}_{1,2} = \\sum_{b \\in \\Bb} \\norm{\\xx_b}_2 ,\n\\end{equation}\nwhere $\\xx_b=(\\xx_i)_{i \\in b}$ is the sub-vector of $\\xx$ whose entries are indexed by the block $b \\in \\Bb$, and $\\Bb$ is a partition of the set of indices, i.e.~$\\bigcup_{b \\in \\Bb} b = \\ens{1,\\ldots,p}$ and $b \\cap b' = \\emptyset$ for any two distinct blocks $b, b' \\in \\Bb$. The mixed $\\lun-\\ldeux$ norm defined in~\\eqref{lun-deux-synthesis} has the attractive property of being invariant under (groupwise) orthogonal transformations.\n\\end{ex}\n\n\n\\begin{ex}[General Group Lasso]\nOne can push the structured sparsity idea one step further by promoting group\/block sparsity through a linear operator, i.e. analysis-type group sparsity. Given a collection of linear operators $\\{D_b\\}_{b \\in \\Bb}$ that are not all orthogonal, the analysis group sparsity penalty is \n\\begin{equation}\n \\label{lun-deux-analysis}\n J(\\xx) = \\norm{D^* \\xx}_{1,2} = \\sum_{b \\in \\Bb} \\norm{D_b^* \\xx}_2. \n\\end{equation}\nThis encompasses the 2-D isotropic total variation~\\citep{rudin1992nonlinear}, where $\\xx$ is a 2-D discretized image, and each $D_b^* \\xx \\in \\RR^2$ is a finite difference approximation of the gradient of $\\xx$ at a pixel indexed by $b$. The overlapping group Lasso \\citep{jacob-overlap-synthesis} is also a special case of \\eqref{lun-deux-analysis} obtained by taking $D_b^* : \\xx \\mapsto \\xx_b$ to be a block extractor operator~\\citep{peyre2011adaptive,chen-proximal-overlap}.\n\\end{ex}\n\n\\begin{ex}[Nuclear norm]\nThe natural extension of low-complexity priors to matrix-valued objects $\\xx \\in \\RR^{p_1 \\times p_2}$ (where $p=p_1p_2$) is to penalize the singular values of the matrix. Let $U_{\\xx} \\in \\RR^{p_1 \\times p_1}$ and $V_{\\xx} \\in \\RR^{p_2 \\times p_2}$ be the orthonormal matrices of left and right singular vectors of $\\xx$, and let $\\uplambda: \\RR^{p_1 \\times p_2} \\to \\RR^{p_2}$ be the mapping that returns the singular values of $\\xx$ in non-increasing order. If $j \\in \\lsc(\\RR^{p_2})$, i.e. convex, lower semi-continuous and proper, is an absolutely permutation-invariant function, then one can consider the penalty $J(\\xx) = j(\\uplambda(\\xx))$. 
This is a so-called spectral function, and moreover, it can be also shown that $J \\in \\lsc(\\RR^{p_1 \\times p_2})$~\\citep{LewisMathEig}. The most popular spectral penalty is the nuclear norm obtained for $j=\\normu{\\cdot}$, \n\\begin{equation}\n \\label{eq-nuclear-norm} \n J(\\xx) = \\norm{\\xx}_* = \\normu{\\uplambda(\\xx)} ~.\n\\end{equation}\nThis penalty is the best convex candidate to enforce a low-rank prior. It has been widely used for various applications, including low rank matrix completion~\\citep{recht2010guaranteed,candes2009exact}, robust PCA~\\citep{CandesRPCA11}, model reduction~\\citep{fazel2001rank}, and phase retrieval~\\citep{CandesPhaseLift}. \n\\end{ex}\n\n\\subsection{Sensitivity Analysis }\n\\label{sub:intro-sensitivity}\n\nA chief goal of this paper is to investigate the sensitivity of any solution $\\xsoly(y)$ to the parameterized problem~\\eqref{eq-group-lasso} to (small) perturbations of $y$. Sensitivity analysis\\footnote{The meaning of sensitivity is different here from what is usually intended in statistical sensitivity and uncertainty analysis.} is a major branch of optimization and optimal control theory. Comprehensive monographs on the subject are \\citep{BonnansShapiro2000,mordukhovich1992sensitivity}. The focus of sensitivity analysis is the dependence and the regularity properties of the optimal solution set and the optimal values when the auxiliary parameters (e.g. $y$ here) undergo a perturbation. In its simplest form, sensitivity analysis of first-order optimality conditions, in the parametric form of the Fermat rule, relies on the celebrated implicit function theorem.\n\nThe set of regularizers $J$ we consider is that of partly smooth functions relative to a Riemannian submanifold as detailed in Section~\\ref{sec:blocks}. The notion of partial smoothness was introduced in~\\citep{Lewis-PartlySmooth}. This concept, as well as that of identifiable surfaces~\\citep{Wright-IdentSurf}, captures essential features of the geometry of non-smoothness which are along the so-called ``active\/identifiable manifold''. For convex functions, a closely related idea was developed in \\citep{Lemarechal-ULagrangian}. Loosely speaking, a partly smooth function behaves smoothly as we move on the identifiable manifold, and sharply if we move normal to the manifold. In fact, the behaviour of the function and of its minimizers (or critical points) depend essentially on its restriction to this manifold, hence offering a powerful framework for sensitivity analysis theory. In particular, critical points of partly smooth functions move stably on the manifold as the function undergoes small perturbations~\\citep{Lewis-PartlySmooth,Lewis-PartlyTiltHessian}. \n\nGetting back to our class of regularizers, the core of our proof strategy relies on the identification of the active manifold associated to a particular minimizer $\\xsoly(y)$ of~\\eqref{eq-group-lasso}. We exhibit explicitly a certain set of observations, denoted $\\Hh$ (see Definition~\\ref{defn:h}), outside which the initial non-smooth optimization~\\eqref{eq-group-lasso} boils down locally to a smooth optimization along the active manifold. This part of the proof strategy is in close agreement with the one developed in~\\citep{Lewis-PartlySmooth} for the sensitivity analysis of partly smooth functions. 
See also \\citep[Theorem~13]{Bolte2011} for the case of linear optimization over a convex semialgebraic partly smooth feasible set, where the authors prove a sensitivity result with a zero-measure transition space. However, it is important to stress that neither the results of \\citep{Lewis-PartlySmooth} nor those of \\citep{Bolte2011,LewisGenericConvex11} can be applied straightforwardly in our context, for two main reasons (see also Remark~\\ref{rem:transpace} for a detailed discussion). In all these works, a non-degeneracy assumption is crucial, while it does not necessarily hold in our case, and this is precisely the reason we consider the boundary of the sets $\\Hh_{\\Mm}$ in the definition of the transition set $\\Hh$. Moreover, in the latter papers, the authors are concerned with a particular type of perturbations (see Remark~\\ref{rem:transpace}) which does not allow one to cover our class of regularized problems, except in restrictive cases such as $\\XX$ injective. For our class of problems \\lasso, we were able to go beyond these works by solving additional key challenges that are important in a statistical context, namely: (i) we provide an analytical description of the set $\\Hh$ involving the boundary of $\\Hh_{\\Mm}$, which entails that $\\Hh$ is potentially of dimension strictly less than $n$, hence of zero Lebesgue measure, as we will show under a mild o-minimality assumption; (ii) we prove a general sensitivity analysis result valid for any proper lower semicontinuous convex partly smooth regularizer $\\J$; (iii) we compute the first-order expansion of $\\xsoly(y)$ and provide an analytical form of the weak derivative of $y \\mapsto \\XX\\xsoly(y)$ valid outside a set involving $\\Hh$. If this set is of zero Lebesgue measure, this allows us to get an unbiased estimator of the risk on the prediction $\\XX\\xsoly(Y)$.\n\n\\subsection{Degrees of Freedom and Unbiased Risk Estimation}\n\\label{sub:intro-risk}\n\nThe degrees of freedom (DOF) of an estimator quantifies the complexity of a statistical modeling procedure~\\citep{efron1986biased}. It is at the heart of several risk estimation procedures and thus allows one to perform parameter selection through risk minimization. \n\nIn this section, we will assume that $\\F_0$ in \\eqref{eq-fidelity-decompos} is strictly convex, so that the response (or the prediction) $\\msol(y)=\\XX \\xsoly(y)$ is uniquely defined as a single-valued mapping of $y$ (see Lemma~\\ref{lem:unique}). That is, it does not depend on a particular choice of solution $\\xsoly(y)$ of~\\eqref{eq-group-lasso}. \n\nLet $\\mu_0 = \\XX \\xx_0$. Suppose that $h$ in \\eqref{eq:linear-problem} is the identity and that the observations $Y \\sim \\Nn(\\mu_0,\\sigma^2 \\Id_n)$. Following~\\citep{efron1986biased}, the DOF is defined as\n\\eq{\n\t\\DOF = \\sum_{i=1}^n \\frac{\\mathrm{cov}(Y_i, \\msol_i(Y))}{\\sigma^2} ~.\n}\nStein's well-known lemma~\\citep{stein1981estimation} asserts that, if $y \\mapsto \\msol(y)$ is a weakly differentiable function (i.e. 
typically in a Sobolev space over an open subset of $\\RR^n$), such that each coordinate $y \\mapsto \\msol_i(y) \\in \\RR$ has an essentially bounded weak derivative\\footnote{We write the same symbol as for the derivative, and rigorously speaking, this has to be understood to hold Lebesgue-a.e.} \n\\eq{\n\\EE\\pa{\\Big| \\frac{\\partial \\msol_i}{\\partial y_i}(Y)\\Big|} < \\infty, \\quad \\forall i ~,\n}\nthen its divergence is an unbiased estimator of its DOF, i.e.\n\\eq{\n \\widehat{\\DOF} = \\diverg(\\msol)(Y) \\eqdef \\tr(\\jac \\msol(Y)) \\qandq \\EE(\\widehat{\\DOF}) = \\DOF ~,\n}\nwhere $\\jac \\msol$ is the Jacobian of $y \\mapsto \\msol(y)$. In turn, this allows to get an unbiased estimator of the prediction risk $\\EE( \\norm{\\msol(Y) - \\mu_0}^2)$ through the SURE~\\citep{stein1981estimation}.\n\nExtensions of the SURE to independent variables from an exponential family are considered in \\citep{hudson1978nie} for the continuous case, and \\citep{Hwang82} in the discrete case. \\cite{eldar-gsure} generalizes the SURE principle to continuous multivariate exponential families. \n\n\n\\subsection{Contributions}\n\\label{sub:intro-contrib}\n\nWe consider a large class of losses $\\F_0$, and of regularizing penalties $\\J$ which are proper, lower semicontinuous, convex and partly smooth functions relative to a Riemannian submanifold, see Section~\\ref{sec:blocks}.\nFor this class of regularizers and losses, we first establish in Theorem~\\ref{thm-local} a general sensitivity analysis result, which provides the local parametrization of any solution to~\\eqref{eq-group-lasso} as a function of the observation vector $y$. This is achieved without placing any specific assumption on $\\XX$, should it be full column rank or not. We then derive an expression of the divergence of the prediction with respect to the observations (Theorem~\\ref{thm-div}) which is valid outside a set of the form $\\Gg \\cap \\Hh$, where $\\Gg$ is defined in Section~\\ref{sec:sensmu}. Using tools from o-minimal geometry, we prove that the transition set $\\Hh$ is of Lebesgue measure zero. If $\\Gg$ is also negligible, then the divergence formula is valid Lebesgue-a.e.. In turn, this allows us to get an unbiased estimate of the DOF and of the prediction risk (Theorem~\\ref{thm-dof} and Theorem~\\ref{thm-dof-exp}) for model \\eqref{eq:linear-problem} under two scenarios: (i) Lipschitz continuous non-linearity $h$ and an additive i.i.d. Gaussian noise; (ii) GLMs with a continuous exponential family. Our results encompass many previous ones in the literature as special cases (see discussion in the next section). It is important however to mention that though our sensitivity analysis covers the case of the nuclear norm (also known as the trace norm), unbiasedness of the DOF and risk estimates is not guaranteed in general for this regularizer as the restricted positive definiteness assumption (see Section~\\ref{sec:local}) may not hold at any minimizer (see Example~\\ref{ex:injnuc}), and thus $\\Gg$ may not be always negligible.\n\n\n\\subsection{Relation to prior works}\n\nIn the case of standard Lasso (i.e. $\\lun$ penalty \\eqref{lun-synthesis}) with $Y \\sim \\Nn(\\XX\\xx_0,\\sigma^2\\Id_n)$ and $\\XX$ of full column rank, ~\\citep{zou2007degrees} showed that the number of nonzero coefficients is an unbiased estimate for the DOF. Their work was generalized in~\\citep{2012-kachour-statsinica} to any arbitrary design matrix. 
Under the same Gaussian linear regression model, unbiased estimators of the DOF for the general Lasso penalty~\\eqref{lun-analysis} were given independently in~\\citep{tibshirani2012dof,vaiter-local-behavior}.\n\nA formula for an estimate of the DOF for the group Lasso when the design is orthogonal within each group was conjectured in~\\citep{yuan2006model}. \\cite{kato2009degrees} studied the DOF of a general shrinkage estimator where the regression coefficients are constrained to a closed convex set $\\Cc$. His work extends that of~\\citep{MeyerWoodroofe} which treats the case where $\\Cc$ is a convex polyhedral cone. When $\\XX$ is full column rank,~\\citep{kato2009degrees} derived a divergence formula under a smoothness condition on the boundary of $\\Cc$, from which an unbiased estimator of the degrees of freedom was obtained.\nWhen specializing to the constrained version of the group Lasso, the author provided an unbiased estimate of the corresponding DOF under the same group-wise orthogonality assumption on $\\XX$ as~\\citep{yuan2006model}.\n\\cite{hansen2014dof} studied the DOF of the metric projection onto a closed set (not necessarily convex), and gave a precise representation of the bias when the projector is not sufficiently differentiable.\nAn estimate of the DOF for the group Lasso was also given by~\\citep{solo2010threshold} using heuristic derivations that are valid only when $\\XX$ is full column rank, though its unbiasedness is not proved. \n\n\\cite{vaiter-icml-workshops} also derived an estimator of the DOF of the group Lasso and proved its unbiasedness when $\\XX$ is full column rank, but without the orthogonality assumption required in~\\citep{yuan2006model,kato2009degrees}. When specialized to the group Lasso penalty, our results establish that the DOF estimator formula in~\\citep{vaiter-icml-workshops} is still valid while removing the full column rank assumption. This of course allows one to tackle the more challenging rank-deficient or underdetermined case $p>n$.\n\n\n\n\\section{Degrees of Freedom and Unbiased Risk Estimation}\n\\label{sec:dof}\n\nFrom now on, we will assume that\n\\begin{align}\n\\label{hyp-tt} \\tag{$C_\\Mscr$}\n\t\\text{the set $\\Mscr$ is finite.}\n\\end{align}\n\nAssumption~\\eqref{hyp-tt} holds in many important cases, including the examples discussed in the paper: polyhedral penalties (e.g. the Lasso, general Lasso or $\\linf$-norm), as well as for the group Lasso and its general form. \n\nThroughout this section, we use the same symbols to denote weak derivatives (whenever they exist) as for derivatives. Rigorously speaking, the identities have to be understood to hold Lebesgue-a.e.~\\citep{EvansGariepy92}.\n\nSo far, we have shown that outside $\\Hh \\cup \\Gg$, the mapping $y \\mapsto \\msol(y)$ enjoys (locally) nice smoothness properties, which in turn gives a closed-form formula for its divergence. To establish that such a formula holds Lebesgue-a.e., a key argument that we need to show is that $\\Hh$ is of negligible Lebesgue measure. This is where o-minimal geometry enters the picture. In turn, for $Y$ drawn from some appropriate probability measures with density with respect to the Lebesgue measure, this will allow us to establish unbiasedness of quadratic risk estimators. \n\n\\subsection{O-minimal Geometry}\n\nRoughly speaking, to be able to control the size of $\\Hh$, the function $\\J$ must not oscillate too wildly, in order to prevent pathological behaviours. We now briefly recall the definition of an o-minimal structure. 
Some important properties of o-minimal structures that are relevant to our context, together with their proofs, are collected in Section~\\ref{sec-omin}. The interested reader may refer to~\\citep{van-den-Dries-omin-book,coste1999omin} for a comprehensive account and further details on o-minimal structures.\n\n\\begin{defn}[Structure]\\label{defn-omin}\nA \\emph{structure} $\\omin$ expanding $\\RR$ is a sequence $(\\omin_k)_{k \\in \\NN}$ which satisfies the\nfollowing axioms:\n\t\\begin{enumerate}\n\t\t\\item Each $\\omin_k$ is a Boolean algebra of subsets of $\\RR^k$, with $\\RR^k \\in \\omin_{k}$.\n\t\t\\item Every semi-algebraic subset of $\\RR^k$ is in $\\omin_k$.\n\t\t\\item If $A \\in \\omin_k$ and $B \\in \\omin_{k'}$, then $A \\times B \\in \\omin_{k+k'}$.\n\t\t\\item If $A \\in \\omin_{k+1}$, then $\\Pi_{k+1,k}(A) \\in \\omin_k$, where $\\Pi_{k+1,k}: \\RR^{k+1} \\to \\RR^k$ is the projection on the first $k$ components.\n\t\\end{enumerate}\t\n\tThe structure $\\omin$ is said to be \\emph{o-minimal} if, moreover, it satisfies\n\t\\begin{enumerate}\n\t\t\\item[5.] (o-minimality) Sets in $\\omin_1$ are precisely the finite unions of intervals and points of~$\\RR$.\n\t\\end{enumerate}\n\\end{defn}\nIn the following, a set $A \\in \\omin_k$ is said to be definable. \n\n\n\\begin{defn}[Definable set and function]\nLet $\\omin$ be an o-minimal structure. The elements of $\\omin_k$ are called the \\emph{definable subsets} of $\\RR^k$, i.e. $\\Om \\subset \\RR^k$ is definable if $\\Om \\in \\omin_k$. A map $f : \\Om \\rightarrow \\RR^m$ is said to be definable if its graph $\\Gg(f) = \\enscond{(x,u) \\in \\Om \\times \\RR^m}{u=f(x)} \\subseteq \\RR^{k} \\times \\RR^{m}$ is a definable subset of $\\RR^{k} \\times \\RR^m$ (in which case $m$ applications of axiom~4 imply that $\\Om$ is definable).\n\\end{defn}\n\nA fundamental class of o-minimal structures is the collection of semi-algebraic sets, in which case axiom~4 is actually a property known as the Tarski-Seidenberg theorem~\\citep{coste2002intro}. For example, in the special case where $q$ is a rational number, $J=\\norm{\\cdot}_q$ is semi-algebraic. When $q \\in \\RR$ is not rational, $\\norm{\\cdot}_q$ is not semi-algebraic; however, it can be shown to be definable in an o-minimal structure. To see this, we recall from \\citep[Example~5 and Property~5.2]{vandenDriesMiller96} that there exists a (polynomially bounded) o-minimal structure that contains the family of functions $\\enscond{t > 0}{t^\\gamma, \\gamma \\in \\RR}$ and restricted analytic functions. Functions $\\F_0$ that correspond to the exponential family losses introduced in Example~\\ref{exp-glm} are also definable. 
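\nTo make the semi-algebraicity claim for rational exponents more concrete, the following is a minimal sketch of the argument; it is our illustration of the axioms above rather than a reproduction of the cited references. Write $q=a\/b$ with $a,b$ positive integers. Then\n\\[\n\\Gg\\pa{\\norm{\\cdot}_q} = \\Pi\\enscond{(\\xx,u,v,w) \\in \\RR^p \\times \\RR \\times \\RR_+^p \\times \\RR_+}{u \\geq 0, ~ v_i^{2b}=\\xx_i^{2a} ~ \\forall i, ~ w^{2b}=u^{2a}, ~ w=\\textstyle\\sum_{i=1}^p v_i},\n\\]\nwhere $\\Pi$ is the projection onto the $(\\xx,u)$ components. Indeed, the constraints force $v_i=\\abs{\\xx_i}^{q}$ and $w=u^{q}$, so that the last equality is exactly $u=\\norm{\\xx}_q$. The set inside the projection is described by finitely many polynomial equalities and inequalities, hence it is semi-algebraic, and axiom~4 (equivalently, the Tarski-Seidenberg theorem) guarantees that its projection, i.e. the graph of $\\norm{\\cdot}_q$, is semi-algebraic as well.\n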
\\\\\n\nOur o-minimality assumption requires the existence of an o-minimal structure $\\omin$ such that\n\\begin{equation}\n\\label{eq-condition-omin}\\tag{$C_{\\omin}$}\n\\begin{split}\n\t\\text{$F$, $\\J$ and $\\Mm$, $\\forall \\Mm \\in \\Mscr$, are definable in } \\omin.\n\\end{split}\n\\end{equation}\n\n\n\\subsection{Degrees of Freedom and Unbiased Risk Estimation}\nWe assume in this section that $\\F$ takes the form \\eqref{eq-fidelity-decompos} and that \n\\eql{\\label{eq-strong-convex}\\tag{$C_{\\mathrm{sconv}}$}\n\t\\foralls y \\in \\RR^n, \\quad\n\t\\F_0(\\cdot,y) \\text{ is strongly convex with modulus } \\tau\n}\nand\n\\begin{equation}\\label{cnd-hess-bord}\\tag{$C_{L}$}\n \\exists L>0, \\quad\n \\sup_{(\\mu,y) \\in \\RR^n\\times\\RR^n}\\norm{\\Fxyo(\\mu,y)} \\leq L .\n\\end{equation}\n\nObviously, assumption \\eqref{eq-strong-convex} implies \\eqref{eq-stric-cvx}, and thus the claims of the previous section remain true. Moreover, this assumption holds for the squared loss, but also for some losses of the exponential family \\eqref{eq:fidexp}, possibly after adding a small quadratic term. As far as assumption \\eqref{cnd-hess-bord} is concerned, it is easy to check that it is fulfilled with $L=1$ for any loss of the exponential family~\\eqref{eq:fidexp}, since $\\Fxyo(\\mu,y) = -\\Id$.\n\n\\myparagraph{Non-linear Gaussian regression.}\nAssume that the observation model \\eqref{eq:linear-problem} specializes to $Y \\sim \\Nn(h(\\XX\\xx_0),\\si^2\\Id_n)$, where $h$ is Lipschitz continuous.\n\n\\begin{thm}\\label{thm-dof}\n\tThe following holds.\n\t\\begin{enumerate}[label=(\\roman*)] \n\t\\item Under condition \\eqref{eq-condition-omin}, $\\Hh$ is of Lebesgue measure zero;\n\t\\item Under conditions \\eqref{eq-strong-convex} and \\eqref{cnd-hess-bord}, $h \\circ \\msol$ is Lipschitz continuous, hence weakly differentiable, with an essentially bounded gradient;\n\t\\item If conditions~\\eqref{eq-condition-omin}, \\eqref{eq-strong-convex}, \\eqref{hyp-f-reg} and \\eqref{cnd-hess-bord} hold, and~$\\Gg$ is of zero-Lebesgue measure, then\n\t\\begin{enumerate}[label=(\\alph*)]\n\t\\item $\\widehat \\DOF = \\tr(\\jac h(\\msol(Y))\\De(Y))$ is an unbiased estimate of $\\DOF=\\EE(\\diverg(h \\circ \\msol(Y)))$, where $\\De(Y)$ is as given in Theorem~\\ref{thm-div}.\n\t\\item The $\\SURE$\n\t\\begin{align}\n\t\\label{eq:sure}\n \\SURE(h \\circ \\msol)(Y) = &\n \\norm{Y - h(\\msol(Y))}^2\n + 2 \\sigma^2 \\widehat \\DOF\n - n \\sigma^2\n\t\\end{align}\n\tis an unbiased estimator of the risk $\\EE\\pa{\\norm{h(\\msol(Y)) - h(\\mu_0)}^2}$.\n\t\\end{enumerate}\n\t\\end{enumerate}\n\\end{thm}\nThis theorem is proved in Section~\\ref{sub:sure}.\n\n\n\\myparagraph{GLM with the continuous exponential family.}\nAssume that the observation model \\eqref{eq:linear-problem} corresponds to the GLM with a distribution which belongs to a continuous standard exponential family as parameterized in \\eqref{eq:pdfexp}. 
From the latter, we have \n\\eq{\n\t\\grad \\log B(y)=\\pa{\\frac{\\partial \\log B_i(y_i)}{\\partial y_i}}_i.\n}\n\n\\begin{thm}\\label{thm-dof-exp}\n\tSuppose that conditions~\\eqref{eq-condition-omin}, \\eqref{eq-strong-convex}, \\eqref{hyp-f-reg} and \\eqref{cnd-hess-bord} hold, and~$\\Gg$ is of zero-Lebesgue measure.\n\tThen,\n\t\\begin{enumerate}[label=(\\roman*)] \n\t\\item $\\widehat \\DOF = \\tr(\\De(Y))$ is an unbiased estimate of $\\DOF=\\EE(\\diverg(\\msol(Y)))$.\n\t\\item The $\\SURE$\n\t\\begin{align}\n\t\\label{eq:gsureexp}\n \\SURE(\\msol)(Y) = &\n \\norm{\\grad \\log B(Y) - \\msol(Y)}^2\n + 2 \\widehat \\DOF\n - (\\norm{\\grad \\log B(Y)}^2 - \\norm{\\mu_0}^2)\n\t\\end{align}\n\tis an unbiased estimator of the risk $\\EE\\pa{\\norm{\\msol(Y) - \\mu_0}^2}$.\n\t\\end{enumerate}\n\\end{thm}\nThis theorem is proved in Section~\\ref{sub:sure}. Recall from Section~\\ref{sec:sensmu} that there are many regularizers where $\\Gg$ is indeed empty, and for which Theorem~\\ref{thm-dof} and \\ref{thm-dof-exp} then apply.\\\\\n\nThough $\\SURE(\\msol)(Y)$ depends on $\\mu_0$, which is obviously unknown, it is only through an additive constant, which makes it suitable for parameter selection by risk minimization. Moreover, even if it is not stated here explicitly, Theorem~\\ref{thm-dof-exp} can be extended to unbiasedly estimate other measures of the risk, including the {\\it projection} risk, or the {\\it estimation} risk (in the full rank case) through the Generalized Stein Unbiased Risk Estimator as proposed in \\citep[Section~IV]{eldar-gsure}, see also \\citep{vaiter-local-behavior} in the Gaussian case.\n\n\n\n\\section{Simulation results}\n\\label{sec:sim}\n\n\\paragraph{Experimental setting.}\nIn this section, we illustrate the efficiency of the proposed DOF estimator on\na parameter selection problem in the context of some imaging inverse problems.\nMore precisely, we consider the linear Gaussian regression model\n$\n Y \\sim \\Nn(\\XX\\xx_0, \\sigma^2 \\Id_n)\n$\nwhere $\\xx_0 \\in \\RR^{p = p_1 \\times p_2}$ is a column-vectorized\nversion of an image defined on a 2-D discrete grid of size $p_1 \\times p_2$.\nThe estimation of $\\xx_0$ is achieved by solving~\\eqref{eq-group-lasso} with\n\\eq{\n F(\\xx, y) = F_0(\\XX\\xx, y) = \\norm{\\XX \\xx - y}^2\n \\qandq\n J(\\xx) = \\lambda \\norm{D^* \\xx}_{1,2}\n}\nwhere $D^* \\xx \\in \\RR^{p \\times 2}$ is the 2-D discrete gradient vector field of the image $\\xx$, and $\\lambda > 0$ is the regularization parameter. Clearly, $J$ is the isotropic total variation regularization \\citep{rudin1992nonlinear}, which is a special case of the general group Lasso penalty~\\eqref{lun-deux-analysis} for blocks of size $2$. \n \nWe aim at proposing an automatic and objective way to choose $\\lambda$. This can be achieved typically by minimizing the SURE given in~\\eqref{eq:sure} with $h$ being the identity, i.e\n\\eq{\n\\SURE(\\msol)(Y) =\n \\norm{Y - \\msol(Y)}^2 + 2 \\sigma^2 \\widehat \\DOF - n \\sigma^2\n}\nwhere $\\widehat \\DOF = \\tr(\\De(Y))$ according to Theorem~\\ref{thm-dof}(iii)-(a), and the expression of $\\De(Y)$ is obtained from that of the general group Lasso in Example~\\ref{ex:divglasso} with $D^*$ the discrete 2-D gradient operator, and $-D$ is the discrete 2-D divergence operator. 
Owing to Proposition~\\ref{prop-exist}(ii) and Theorem~\\ref{thm-dof}(iii), the given SURE is indeed an unbiased estimator of the prediction risk.\n\nAs the image size $p$ can be large, the exact computation of $\\tr(\\De(y))$\ncan become computationally intractable. Instead, we devise an approach based on\nMonte-Carlo (MC) simulations \\citep[see,][for more details]{vonesch2008sure}, that is\n\\eq{\n \\widehat\\DOF^{\\mathrm{MC}}(z) = \\dotp{z}{\\Delta(Y) z}\n}\nwith $z$ a realization of $Z \\sim \\Nn(0, \\Id_n)$. It is clear that $\\EE_Z\\pa{\\widehat\\DOF^{\\mathrm{MC}}(Z)} = \\widehat\\DOF$.\n\nIt remains to compute the vector $\\Delta(y) z$. This is achieved by taking $\\Delta(y) z = \\XX \\nu$, where $\\nu$ is a solution of\n\\begin{align*}\n \\pa{\\transp{\\XX} \\XX + \\lambda D \\bpa{\\delta_{D^* \\xxs(y)} \\circ Q_{(D^* \\xxs(y))^\\perp}} D^*} \\nu = \\transp{\\XX} z\n \\qsubjq \\nu \\in T ,\n\\end{align*}\nwhere we recall that $\\T=\\Ker(D^*_{\\Lambda^c})$, $\\Lambda = \\bs(D^* \\xxs(y))$.\nTaking into account the constraint on $T$ through its Lagrange multiplier $\\zeta$, solving for $\\nu$ boils down to solving\nthe following linear system with a symmetric and positive-definite matrix\n\\begin{align}\\label{eq-sdp-dof}\n \\begin{pmatrix}\n \\transp{\\XX} \\XX + \\lambda D \\bpa{\\delta_{D^* \\xxs(y)} \\circ Q_{(D^* \\xxs(y))^\\perp}} D^*\n & ~~ &\n D_{\\Lambda^c}\\\\\n D^*_{\\Lambda^c} & ~~ & 0\\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n \\nu \\\\\n \\zeta\\\\\n \\end{pmatrix}\n =\n \\begin{pmatrix}\n \\transp{\\XX} z\\\\\n 0\\\\\n \\end{pmatrix}.\n\\end{align}\n\n\\begin{figure}[!t]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth,viewport=1 1 41 33,clip]{cameraman_zoom}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth,viewport=1 1 41 33,clip]{cameraman_zoom_nse}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth,viewport=1 1 41 33,clip]{cameraman_zoom_fil}}\\\\\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth]{cameraman_zoom_blur_sure_fd}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth]{cameraman_zoom_blur_sure_it}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth]{cameraman_zoom_blur_sure_cf}}\n\\caption{\n (a) Original image $\\xx_0$.\n (b) Blurry observation $y$.\n (c) $\\xxs(y)$ obtained for the value of $\\lambda$ minimizing the SURE estimate.\n (d-f) Prediction risk, average SURE and its confidence interval ($\\pm$ standard deviation)\n as a function of $\\lambda$ respectively for\n the finite difference approach \\citep{ramani2008montecarlosure},\n the iterative approach \\citep{vonesch2008sure},\n and our proposed approach.\n}\n\\label{fig:sure-deconvolution}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth,viewport=1 1 41 33,clip]{barbara_zoom}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth,viewport=1 1 41 33,clip]{barbara_zoom_nse}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth,viewport=1 1 41 33,clip]{barbara_zoom_fil}}\\\\\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth]{barbara_zoom_cs_sure_fd}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth]{barbara_zoom_cs_sure_it}}\\hfill%\n\\subfigure[]{\\includegraphics[width=0.32\\linewidth]{barbara_zoom_cs_sure_cf}}\n\\caption{\n (a) Original image $\\xx_0$.\n (b) Least squares estimate $\\XX^+ y$.\n (c) $\\xxs(y)$ obtained for the value of $\\lambda$ minimizing the SURE estimate.\n (d-f) Prediction risk, average SURE and its 
confidence interval ($\\pm$ standard deviation)\n as a function of $\\lambda$ respectively for\n the finite difference approach \\citep{ramani2008montecarlosure},\n the iterative approach \\citep{deledalle2014stein},\n and our proposed approach.\n}\n\\label{fig:sure-cs}\n\\end{figure}\n\n\\paragraph{Numerical solvers.}\nIn all experiments, optimization problem \\eqref{eq-group-lasso} was solved\nusing Douglas-Rachford proximal splitting algorithm \\citep{combettes2007douglas} with $2\\cdot10^4$ iterations.\nOnce the support $\\Lambda$ is identified with sufficiently high accuracy, the linear problem \\eqref{eq-sdp-dof} is solved\nusing the generalized minimal residual method \\citep[GMRES,][]{saad1986gmres}\nwith a relative accuracy of $10^{-7}$.\n\nOur proposed SURE estimator is compared\nfor different values of $\\lambda$\nwith the approach of \\citep{ramani2008montecarlosure} based on finite difference\napproximations, as well as\nthe approaches of \\citep{vonesch2008sure,deledalle2014stein} based\non iterative chain rule differentiations.\nAll curves are averaged on $40$ independent realizations of $Y$ and $Z$ and their\ncorresponding confidence intervals at $\\pm$ their standard deviation are displayed.\n\n\\paragraph{Deconvolution.}\nWe first consider an image of size $p = 34 \\times 42$\nwith grayscale values ranging in $[0, 255]$\nobtained from a close up of the standard {\\it cameraman} image.\n$\\XX$ is a circulant matrix \nrepresenting a periodic discrete convolution with a Gaussian kernel of width\n$1.5$ pixel. The observation $y$ is finally obtained by adding a zero-mean white Gaussian noise with $\\sigma = 5$.\nFigure~\\ref{fig:sure-deconvolution} depicts the evolution of the prediction risk and its SURE estimates as a function of $\\lambda$.\n\n\n\\paragraph{Compressive sensing.}\nWe next consider an image of size $p = 34 \\times 42$\nwith grayscale values ranging in $[0, 255]$\nobtained from a close up of the standard {\\it barbara} image. \nNow, $\\XX$ is a matrix corresponding to the composition of a periodic discrete convolution with a square kernel, and a random sub-sampling matrix with $n\/p = 0.5$. The noise standard deviation is again $\\sigma = 5$.\nFigure~\\ref{fig:sure-cs} shows the evolution of the prediction risk and its SURE estimates as a function of $\\lambda$.\n\n\\paragraph{Discussion.}\nThe three approaches seem to provide the same results with\naverage SURE curves that align very tightly with those of the prediction risk, with\nrelatively small standard deviation compared to the range of variation of the prediction risk.\n\nIt is worth observing that the SURE obtained with finite differences \\citep{ramani2008montecarlosure} \nor with iterative differentiations \\citep{vonesch2008sure,deledalle2014stein} estimate the risk at the last iterate\nprovided by the optimization algorithm to solve~\\eqref{eq-group-lasso}, which is not exactly $\\xxs(y)$ in general.\nIn fact, what is important is not $\\xxs(y)$ by itself but rather its group support $\\Lambda$. 
\nThus, provided $\\Lambda$ has been perfectly identified, the three approaches\nprovide, as observed, the same estimate of the risk up to machine precision.\nIt may then be important to run the solver with a large number of iterations\nin order to provide an accurate estimation of the risk.\nEven more importantly, solutions of \\eqref{eq-sdp-dof} should be\naccurate enough to avoid bias in the estimation.\nThe choice of $2 \\cdot 10^4$ iterations for Douglas-Rachford and\nrelative accuracy of $10^{-7}$ for GMRES appears in our simulations as a\ngood trade-off between negligible bias and reasonable computational time.\n\n\\section{Conclusion}\n\nIn this paper, we proposed a detailed sensitivity analysis of a class of estimators obtained by minimizing a general convex optimization problem with a regularizing penalty encoding a low complexity prior. This was achieved through the concept of partial smoothness. This allowed us to derive an analytical expression of the local variations of these estimators with respect to perturbations of the observations, and also to prove that the set where the estimator behaves non-smoothly as a function of the observations is of zero Lebesgue measure. Both results paved the way to derive unbiased estimators of the prediction risk in two random scenarios, one of which covers the continuous exponential family. This analysis covers a large set of convex variational estimators routinely used in statistics, machine learning and imaging (most notably group sparsity and the multidimensional total variation penalty). The simulation results confirm our theoretical findings and show that our risk estimator provides a viable way for automatic choice of the problem hyperparameters.\n\nDespite its generality, there are still problems which do not fall within our settings. One can think for instance of the case of discrete (even exponential) distributions, of risk estimation for a non-canonical parameter of non-Gaussian distributions, of non-convex regularizers, or of the graphical Lasso. \n\nExtension to the discrete case is far from obvious, even in the independent case. One can think for instance of using identities derived by \\citep{hudson1978nie,Hwang82}, but so far, provably unbiased estimates of SURE (not the generalized one) are only available for linear estimators. \n\nSuppose now that the distribution under consideration is from a continuous exponential family, so that our results apply, but that one is interested in estimating the risk at a function of the canonical parameter. Then this function has to be Lipschitz continuous, and one first has to prove a formula for the corresponding SURE. So far, we are only aware of such results in the Gaussian case (hence our Theorem~\\ref{thm-dof} which addresses this question precisely).\n\nStrictly speaking, the $\\lun$-penalized likelihood formulation of the graphical Lasso in \\citep{YuanLin07} ((3) or (6) in that reference) does not fall within our framework. This is due to the fidelity\/likelihood term which does not obey our assumptions. Note that the limitation due to the fidelity\/likelihood can be circumvented at the price of a quadratic approximation \\citep[Section~4]{YuanLin07} also used in \\citep{MeinshausenBuhlmann06}. \n\nExtending our results to the non-convex case would be very interesting in order to handle penalties such as SCAD or MCP. This would however require more sophisticated material from variational analysis, 
not to mention the other difficulties inherent to non-convexity, including the handling of critical points (which in general are not necessarily even local minimizers), and the fact that the mapping $y \\mapsto \\msol(y)$ is no longer single-valued. All the above settings will be left to future work.\n\n\n\\begin{acknowledgements}\nThis work has been supported by the European Research Council (ERC project SIGMA-Vision) and Institut Universitaire de France.\n\\end{acknowledgements}\n\n\n\\section{Sensitivity Analysis of $\\msol(y)$}\n\\label{sec:sensmu}\n\nWe assume in this section that $\\F$ takes the form \\eqref{eq-fidelity-decompos} with \n\\eql{\\label{eq-stric-cvx}\\tag{$C_{\\text{dp}}$}\n\t\\foralls (\\mu,y) \\in \\RR^n \\times \\RR^n, \\quad\n\t\\Fxxo(\\mu,y) \\text{ is positive definite.}\n}\nThis in turn implies that $\\F_0(\\cdot,y)$ is strictly convex for any $y$ (the converse is obviously not true). Recall that this condition is mild and holds in many situations, in particular for some losses \\eqref{eq:fidexp} in the exponential family; see Section~\\ref{sec:block_regularizations} for details.\n\nWe have the following simple lemma.\n\\begin{lem}\\label{lem:unique}\nSuppose that condition~\\eqref{eq-stric-cvx} is satisfied. Then the following holds.\n\\begin{enumerate}[label=(\\roman*)]\n\\item All minimizers of~\\eqref{eq-group-lasso} share the same image under $\\XX$ and $\\J$.\n\\item If the partial smoothness submanifold $\\Mm$ at $\\xx$ is affine or linear, then \\eqref{eq-injectivity-cond} holds if, and only if, $\\Ker( \\XX ) \\cap \\Ker(\\Q(\\xx)) \\cap \\T = \\{ 0 \\}$, where $\\T=\\T_\\xx$ and $\\Q(\\xx)$ is given in Fact~\\ref{fact:gradhess}.\n\\end{enumerate}\n\\end{lem}\n\n\nOwing to this lemma, we can now define the prediction \n\\eql{\\label{eq-predictor}\n\t\\msol(y) = \\XX \\widehat{\\xx}(y)\n} \nwithout ambiguity given any solution $\\widehat{\\xx}(y)$, which in turn defines a single-valued mapping $\\msol$. The following theorem provides a closed-form expression of the local variations of $\\msol$ as a function of perturbations of $y$. For this, we define the following set that rules out the points $y$ where $(\\CondInj{\\xxs(y)}{y})$ does not hold for any minimizer.\n\n\\begin{defn}[Non-injectivity set]\nThe \\emph{Non-injectivity set} $\\Gg$ is\n\\begin{align*}\n \\Gg = \\enscond{y \\notin \\Hh}{\\text{$(\\CondInj{\\xxs(y)}{y})$ does not hold for any minimizer $\\xxs(y)$ of~\\lasso}} ~.\n\\end{align*}\n\\end{defn}\n\n\\begin{thm}\\label{thm-div}\n Under assumptions~\\eqref{hyp-f-reg} and \\eqref{eq-stric-cvx},\n the mapping $y \\mapsto \\msol(y)$ is $\\Calt{1}(\\RR^n \\setminus (\\Hh \\cup \\Gg))$.\n Moreover, for all $y \\not\\in \\Hh \\cup \\Gg$, \n \\eql{\\label{eq:diverg}\n \\diverg(\\msol)(y) \\eqdef \\tr(\\jac \\msol(y)) = \\tr(\\De(y))\n }\n where\n \\begin{align*}\n \\De(y) = -\\XX_\\T ~ ( \\Fxx(\\msol(y),y) + \\Q(\\xxs(y)) )^{+} \n ~ \\transp{\\XX_\\T} ~ \\Fxyo(\\msol(y),y) , \\\\\n \\Fxx(\\msol(y),y) = \\transp{\\XX_\\T} \\Fxxo(\\msol(y),y) \\XX_\\T + \\Afk_{\\xx}\\pa{\\cdot,\\transp{\\XX_\\S}\\Fxo(\\msol(y),y)}\n \\end{align*}\n and $\\xxs(y)$ is any solution of \\lasso such that $(\\CondInj{\\xxs(y)}{y})$ holds\n and $\\J \\in \\lsc(\\RR^p)$ is partly smooth at $\\xxs(y)$ relative to $\\Mm$, with $\\T=\\S^\\perp=\\T_{\\xxs(y)}$. 
\n\\end{thm}\nThis result is proved in Section~\\ref{sub:dof}.\n\nA natural question that arises is whether the set $\\Gg$ is of full (Hausdorff) dimension or not, and in particular, whether there always exists a solution $\\xxs(y)$ such that $(\\CondInj{\\xxs(y)}{y})$ holds, i.e. $\\Gg$ is empty. Though we cannot provide an affirmative answer to this for any partly smooth regularizer, and this has to be checked on a case-by-case basis, it turns out that $\\Gg$ is indeed empty for many regularizers of interest as established in the next result.\n\\begin{prop}\\label{prop-exist}\nThe set $\\Gg$ is empty when: \n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\J \\in \\lsc(\\RR^p)$ is polyhedral, and in particular, when $\\J$ is the Lasso, the general Lasso or the $\\linf$ penalties.\n\\item $\\J$ is the general group Lasso penalty, and a fortiori the group Lasso.\n\\end{enumerate}\n\\end{prop}\nThe proof of these results is constructive.\\\\\n\nWe now exemplify the divergence formula \\eqref{eq:diverg} when $\\F_0$ is the squared loss.\n\\begin{ex}[Polyhedral penalty]\n\\label{ex:polyhdf}\nThanks to Example~\\ref{ex:lassohess}, it is immediate to see that \\eqref{eq:diverg} boils down to\n\\begin{equation*}\n \\diverg(\\msol)(y) = \\rank \\XX_{\\T_{\\xxs(y)}} = \\dim \\T_{\\xxs(y)}\n\\end{equation*}\nwhere we used the rank-nullity theorem and that Lemma~\\ref{lem:unique}(ii) holds at $\\xxs(y)$, which always exists by Proposition~\\ref{prop-exist}.\n\\end{ex}\n\n\n\\begin{ex}[Lasso and General Lasso]\nCombining together Example~\\ref{ex:analassoM} and Example~\\ref{ex:polyhdf} yields\n\\begin{equation*}\n \\diverg(\\msol)(y) = \\dim \\Ker(D^*_{\\Lambda^c}), \\quad \\Lambda = \\supp(D^* \\xxs(y)) ~,\n\\end{equation*}\nwhere $\\xxs(y)$ is such that Lemma~\\ref{lem:unique}(ii) holds. For the Lasso, Example~\\ref{ex:lassoM} allows to specialize the formula to\n\\begin{equation*}\n \\diverg(\\msol)(y) = \\abs{\\supp(\\xxs(y))}.\n\\end{equation*}\nThe general Lasso case was investigated in~\\citep{vaiter-local-behavior} and \\citep{tibshirani2012dof}, and the Lasso in \\citep{2012-kachour-statsinica} and \\citep{tibshirani2012dof}.\n\\end{ex}\n\n\\begin{ex}[$\\linf$ Anti-sparsity]\nBy virtue of Example~\\ref{ex:polyhdf} and Example~\\ref{ex:linfM}, we obtain in this case\n\\begin{equation*}\n \\diverg(\\msol)(y) = p - \\abs{I} + 1, \\qwhereq\n\tI = \\enscond{i}{\\xxs_i(y) = \\normi{\\xxs(y)}}\n\\end{equation*}\nand $\\xxs(y)$ is such that Lemma~\\ref{lem:unique}(ii) holds, and such a vector always exists by Proposition~\\ref{prop-exist}.\n\\end{ex}\n\n\\begin{ex}[Group Lasso and General Group Lasso]\n\\label{ex:divglasso}\nFor the general group Lasso, piecing together Example~\\ref{ex:gglassoM} and Example~\\ref{ex:gglassohess}, it follows that\n\\begin{equation*}\n \\diverg(\\msol)(y) = \\tr\\pa{\\XX_{\\T}\\pa{\\transp{\\XX_{\\T}}\\XX_{\\T} + \\proj_{\\T} D\\pa{\\delta_{D^* \\xxs(y)} \\circ Q_{(D^* \\xxs(y))^\\perp}}D^*\\proj_{\\T}}^{+}\\transp{\\XX_{\\T}}}\n\\end{equation*}\nwhere $\\T=\\Ker(D^*_{\\Lambda^c})$, $\\Lambda = \\bs(D^* \\xxs(y))$, and $\\xxs(y)$ is such that Lemma~\\ref{lem:unique}(ii) holds; such a vector always exists by Proposition~\\ref{prop-exist}. 
For the group Lasso, we get using Example~\\ref{ex:glassoM} that\n\\begin{equation*}\n \\diverg(\\msol)(y) = \\tr\\pa{\\XX_\\Lambda\\pa{\\transp{\\XX_\\Lambda}\\XX_\\Lambda + \\bpa{\\delta_{D^* \\xxs(y)} \\circ Q_{(D^* \\xxs(y))^\\perp}}_{\\Lambda,\\Lambda}}^{-1}\\transp{\\XX_\\Lambda}}\n\\end{equation*}\nwhere $\\bpa{\\delta_{D^* \\xxs(y)} \\circ Q_{(D^* \\xxs(y))^\\perp}}_{\\Lambda,\\Lambda}$ is the submatrix whose rows and columns are those of $\\delta_{D^* \\xxs(y)} \\circ Q_{(D^* \\xxs(y))^\\perp}$ indexed by $\\Lambda=\\bs(\\xxs(y))$. This result was proved in~\\citep{vaiter-icml-workshops} in the overdetermined case. An immediate consequence of this formula is obtained when $\\XX$ is orthonormal\\footnote{Obviously, Lemma~\\ref{lem:unique}(ii) holds in such a case at the unique minimizer $\\xxs(y)$.}, in which case one recovers the expression of \\citet{yuan2006model},\n\\begin{equation*}\n \\diverg(\\msol)(y) = \\abs{\\Lambda} - \\sum_{b \\in \\Bb, D^*_b \\xxs(y) \\neq 0} \\frac{\\abs{b}-1}{\\norm{y_b}} ~.\n\\end{equation*}\nThe general group Lasso formula is new to the best of our knowledge, and will be illustrated in the numerical experiments on the isotropic 2-D total variation regularization widely used in image processing.\n\\end{ex}\n\nWe could also provide a divergence formula for the nuclear norm, but as we discussed in Example~\\ref{ex:injnuc}, we cannot always guarantee the existence of a solution that satisfies $(\\CondInj{\\xxs(y)}{y})$. However, one can still find other partly smooth functions $\\J$ with a non-flat submanifold for which this existence can be certified. The function of Example~\\ref{ex:maxnormM} is again a prototypical example.\n\\begin{ex}[$\\J=\\max(\\norm{\\cdot}-1,0)$]\n\\label{ex:divmaxnorm} \nFor $\\xx \\in \\sph^{p-1}$. If $\\xx$ is a minimizer of \\lasso is not a minimizer of $F(\\cdot,y)$, from Example~\\ref{ex:injmaxnorm}, we have that $\\Fxx(\\xx,y)$ is positive definite on $\\T=\\T_\\xx$. Thus, we get for the case of the squared loss, that\n\\[\n\\diverg(\\msol)(y) = \\tr\\pa{\\XX_{\\T}\\pa{\\transp{\\XX_{\\T}}\\XX_{\\T} + \\proj_{\\T} \\dotp{\\XX\\xx}{y-\\XX\\xx}}^{+}\\transp{\\XX_{\\T}}} .\n\\]\n\\end{ex} \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the factorization approach to non-leptonic meson decays\n\\cite{FEYNMAN,STECHF} one can\ndistinguish three classes of decays for which the amplitudes have the\nfollowing general structure \\cite{BAUER,NEUBERT}:\n\\begin{equation}\\label{1}\nA_{\\rm I}=\\frac{G_F}{\\sqrt{2}} V_{CKM}a_1(\\mu)\\langle O_1\\rangle_F\n\\qquad {\\rm (Class~I)}\n\\end{equation}\n\\begin{equation}\\label{2}\nA_{\\rm II}=\\frac{G_F}{\\sqrt{2}} V_{CKM}a_2(\\mu)\\langle O_2\\rangle_F\n\\qquad {\\rm (Class~II)}\n\\end{equation}\n\\begin{equation}\\label{3}\nA_{\\rm III}=\n\\frac{G_F}{\\sqrt{2}} V_{CKM}[a_1(\\mu)+x a_2(\\mu)]\\langle O_1\\rangle_F\n \\qquad {\\rm (Class~III)}\n\\end{equation}\nHere $V_{CKM}$ denotes symbolically the CKM factor characteristic for a\ngiven decay. $O_1$ and $O_2$ are local four quark operators present in\nthe relevant effective hamiltonian, $\\langle O_i\\rangle_F$ are\nthe hadronic matrix\nelements of these operators given as products of matrix elements of\nquark currents and $x$ is a non-perturbative factor equal to unity in\nthe flavour symmetry limit. Finally $a_i(\\mu)$ are $QCD$ factors which\nare the main subject of this paper.\n\nAs an example consider the decay $\\bar B^0\\to D^+\\pi^-$. 
Then the\nrelevant effective hamiltonian is given by\n\\begin{equation}\\label{4}\nH_{eff}=\\frac{G_F}{\\sqrt{2}}V_{cb}V_{ud}^{*}\n\\lbrack C_1(\\mu) O_1+C_2(\\mu)O_2 \\rbrack\n\\end{equation}\nwhere\n\\begin{equation}\\label{5}\nO_1=(\\bar d_i u_i)_{V-A} (\\bar c_j b_j)_{V-A}\n\\qquad\nO_2=(\\bar d_i u_j)_{V-A} (\\bar c_j b_i)_{V-A}\n\\end{equation}\nwith $(i,j=1,2,3)$ denoting colour indices and $V-A$ referring to\n$\\gamma_\\mu (1-\\gamma_5)$. $C_1(\\mu)$ and $C_2(\\mu)$ are short distance\nWilson coefficients computed at the renormalization scale $\\mu=O(m_b)$.\nWe will neglect the contributions of penguin operators since their\nWilson coefficients are numerically very small as compared to $C_{1,2}$\n\\cite{BJL,ROME}. Exceptions are CP-violating decays and rare decays which\nare beyond the scope of this paper.\nNote that we use here the labeling of the operators as given in\n\\cite{BAUER,NEUBERT} which differs from \\cite{BJL,ROME} by the interchange\n$1\\leftrightarrow 2$.\n$C_i$ and $a_i$ are related as follows:\n\\begin{equation}\\label{6}\na_1(\\mu)=C_1(\\mu)+\\frac{1}{N} C_2(\\mu) \\qquad\na_2(\\mu)=C_2(\\mu)+\\frac{1}{N} C_1(\\mu)\n\\end{equation}\nwhere $N$ is the number of colours. We will set $N=3$ in what follows.\n\nApplication of the factorization method gives\n\\begin{equation}\\label{7}\nA(\\bar B^0\\to D^+\\pi^-)=\\frac{G_F}{\\sqrt{2}}V_{cb}V_{ud}^{*}\na_1(\\mu)\\langle\\pi^-\\mid(\\bar d_i u_i)_{V-A}\\mid 0\\rangle\n\\langle D^+\\mid (\\bar c_j b_j)_{V-A}\\mid \\bar B^0\\rangle\n\\end{equation}\nwhere $\\langle D^+\\pi^-\\mid O_1 \\mid \\bar B^0\\rangle$ has been factored\n into two\nquark current matrix elements and the second term in $a_1(\\mu)$ represents\nthe contribution of the operator $O_2$ in the factorization approach.\n\nOther decays can be handled in a similar manner \\cite{NEUBERT}.\n Although the flavour\nstructure of the corresponding local operators changes,\nthe colour structure\nand the coefficients $C_i(\\mu)$ remain unchanged. For instance\n$\\bar B^0\\to \\bar K^0\\psi$ and $B^-\\to D^0 K^-$ belong to class II and\nIII respectively. Finally a similar procedure can be applied to\nD-decays with the coefficients $C_i$ evaluated at $\\mu=O(m_c)$.\nOnce the matrix elements have been expressed in terms of various meson\ndecay constants and generally model dependent formfactors, predictions\nfor non-leptonic heavy meson decays can be made.\nMoreover relations between non-leptonic and semi-leptonic decays can\nbe found which allow to test factorization in a model independent\nmanner.\n\nAlthough the simplicity of this framework is rather appealing,\nit is well known that\nnon-factorizable contributions must be present in the hadronic matrix\nelements of the current--current operators $O_1$ and $O_2$ in order\nto cancel the $\\mu$ dependence of $C_i(\\mu)$ or $a_i(\\mu)$ so that\nthe physical amplitudes do not depend on the arbitrary renormalization\nscale $\\mu$.\n$\\langle O_i\\rangle_F$ being products of matrix elements of\nconserved currents\nare $\\mu$ independent and the cancellation of the $\\mu$ dependence\nin (\\ref{1})-(\\ref{3}) does not take place.\nConsequently from the point of view of QCD\nthe factorization approach can be at best correct at a single value\nof $\\mu$, the so-called factorization scale $\\mu_F$. 
Although the\napproach itself does not provide the value of $\\mu_F$, the proponents\nof factorization expect $\\mu_F=O(m_b)$ and $\\mu_F=O(m_c)$ for\nB-decays and D-decays respectively.\n\nHere we would like to point out that beyond the leading logarithmic\napproximation for $C_i(\\mu)$ a new complication arises. As stressed\nin \\cite{BJLW}, at next to leading level in the renormalization\ngroup improved perturbation theory the coefficients $C_i(\\mu)$\ndepend on the renormalization scheme for operators. Again only\nthe presence of non-factorizable contributions\n in $\\langle O_i\\rangle$ can\nremove this scheme dependence in the physical amplitudes.\nHowever $\\langle O_i\\rangle_F$ are renormalization scheme\nindependent and the factorization approach is of course unable\nto tell us whether it works better with an anti-commuting $\\gamma_5$\nin $D\\not=4$ dimensions ( NDR scheme) or with another definition\nof $\\gamma_5$ such as used in HV (non-anticommuting $\\gamma_5$ in\n$D\\not=4$) or DRED ($\\gamma_5$ in $D=4$) schemes.\nThe renormalization scheme dependence of $a_i$ emphasized here\nis rather\nannoying from the factorization point of view as it precludes\na unique phenomenological determination of $\\mu_F$ as we will\nshow explicitly below.\n\nOn the other hand, arguments have been given\n\\cite{BJORKEN,DUGAN,NEUBERT} that once $H_{eff}$ in (\\ref{4})\nhas been constructed, factorization could be\napproximately true in the case of two-body decays with high\nenergy release \\cite{BJORKEN}, or in certain kinematic regions\n\\cite{DUGAN}. We will not repeat here these arguments, which\ncan be found in the original papers as well as in a critical\nanalysis of various aspects of factorization presented in\n\\cite{ISGUR}.\nNeedless to say the issue of factorization does not only\ninvolve the short distance gluon corrections discussed here\nbut also final state interactions which are discussed in these\npapers.\n\nIt is difficult to imagine that factorization\ncan hold even approximately in all circumstances.\nIn spite of this, it\nbecame fashionable these days to\ntest this idea,\nto some extent, by using a certain set of formfactors to calculate\n$ \\langle O_i\\rangle_F $ and by making global fits of\nthe formulae (\\ref{1})-(\\ref{3})\nto the data treating\n$ a_1 $ and $ a_2 $ as free independent parameters. 
The most recent\nanalyses of this type give for non-leptonic two-body B-decays\n\\cite{DEANDREA}-\\cite{KAMAL}\n\\begin{equation}\\label{8}\na_1\\approx 1.05\\pm0.10\n\\qquad\na_2\\approx 0.25\\pm0.05\n\\end{equation}\nwhich is compatible with earlier analyses \\cite{NEUBERT,STONE90}.\nThe new CLEO II data \\cite{CLEO} favour\na {\\it positive} value of $a_2$ in contrast to earlier expectations\n\\cite{BAUER,RUCKL} based on extrapolation from charm decays.\nAt the level of accuracy of the existing experimental data and\nbecause of strong model\ndependence in the relevant formfactors it is not yet possible\nto conclude on the basis of these analyses whether the\nfactorization approach is a useful approximation in general or not.\nIt is\ncertainly conceivable that factorization may apply better to some\nnon-leptonic decays than to others\n\\cite{NEUBERT,BJORKEN,DUGAN,ISGUR,BIGI,RUCKL94}\nand using all decays in a global fit may misrepresent the true\nsituation.\n\nIrrespective of all these reservations let us ask\nwhether the numerical values in (\\ref{8}) agree with\nthe QCD expectations for\n$\\mu=O(m_b)$?\n\nA straightforward calculation of $a_i(\\mu)$ with $C_i(\\mu)$ in the\nleading logarithmic approximation \\cite{LEE} gives for $ \\mu= 5.0~GeV $\nand the QCD scale $\\Lambda_{LO} = 225\\pm85~MeV$\n\\begin{equation}\\label{9}\na_1^{LO}=1.03\\pm0.01\\quad \\qquad a_2^{LO}=0.10\\pm0.02\n\\end{equation}\nWhereas the result for $ a_1 $ is compatible with the experimental findings,\nthe theoretical value for $ a_2 $ disagrees roughly by a factor of two.\nThe solution to this problem by dropping the $1\/N$ terms in (\\ref{6})\nsuggested in \\cite{BAUER} and argued for in \\cite{RUCKL,BSQCD,BLOK} gives\n$a_1^{LO}=1.12\\pm 0.02$ and $a_2^{LO}=-0.27\\pm0.03$. Whereas the absolute\nmagnitudes for $a_i$ are consistent with (\\ref{8}), the sign of $a_2$ is\nwrong. It\nhas been remarked in \\cite{NEUBERT} that the value of $a_2$ could be\nincreased by\nusing (\\ref{6}) with $ \\mu \\gg m_b $.\nIndeed, as shown in table 1 for $\\mu=15-20~GeV$\nthe calculated values for $a_1$ and $a_2$ are compatible with (\\ref{8}).\nThe large value of $\\mu=(3-4)~m_b$ is, however, not really what\nthe proponents of factorization would expect.\n\n\\begin{table}[thb]\n\\caption{Leading order coefficients\n$a_1^{LO}$ and $a_2^{LO}$ for B-decays.}\n\\begin{center}\n\\begin{tabular}{|c|c|c||c|c||c|c|}\n\\hline\n& \\multicolumn{2}{c||}{$\\Lambda_{LO}^{(5)}=140~MeV$} &\n \\multicolumn{2}{c||}{$\\Lambda_{LO}^{(5)}=225~MeV$} &\n \\multicolumn{2}{c| }{$\\Lambda_{LO}^{(5)}=310~MeV$} \\\\\n\\hline\n$\\mu [GeV]$ & $a_1$ & $a_2$ & $a_1$ &\n$a_2$ & $a_1$ & $a_2$ \\\\\n\\hline\n\\hline\n5.0 & 1.024 & 0.124 & 1.030 & 0.099 & 1.035 & 0.078\n\\\\\n\\hline\n10.0 & 1.011 & 0.191 & 1.014 & 0.176 & 1.016 & 0.164\n\\\\\n\\hline\n15.0 & 1.007 & 0.224 & 1.008 & 0.214 & 1.009 & 0.205\n\\\\\n\\hline\n20.0 & 1.004 & 0.246 & 1.005 & 0.238 & 1.006 & 0.231\n\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nYet it should be recalled that in order to address the issue of the value\n of $ \\mu $\ncorresponding to the findings in (\\ref{8}) it is mandatory to go beyond the\nleading logarithmic approximation and to include at least\nthe next-to-leading (NLO)\nterms. In particular, only then is one able to use meaningfully the value\nfor $\\Lambda_{\\overline{MS}}$ extracted from high energy processes. 
As an illustration we\nhave used in (\\ref{9}) $\\Lambda_{LO}=\\Lambda_{\\overline{MS}}$ which is of course rather\n arbitrary.\nTo our surprise no NLO analysis of $ a_1 $ and $ a_2 $ has been\npresented in the literature in spite of the fact that the NLO\ncorrections to $ C_1 $ and $ C_2 $ have been known for\nmany years \\cite{ALT,WEISZ}.\n\nAt this point an important warning should be made. The coefficients\n$ C_1 $ and $C_2 $ as given in \\cite{ALT,WEISZ} and also in \\cite{BJLW}\ncannot simply be inserted\ninto (\\ref{6}) as is often done in the literature.\nAs stressed in \\cite{BJL} the coefficients\n given in \\cite{ALT,WEISZ,BJLW}\n differ from the true coefficients of the operators $O_i$\n by $ O(\\alpha_s) $ corrections which\nhave been included in these papers in order to remove the renormalization\nscheme dependence. The only paper which gives the true $ C_1 $ and\n$ C_2 $ for B-decays is ref. \\cite{BJL},\nwhere these coefficients have been\ngiven for\nthe NDR and HV renormalization\nschemes.\n\nNow the main topic of ref. \\cite{BJL} was the ratio\n$ \\varepsilon'\/\\varepsilon $. Consequently\nthe full set of ten operators including QCD-penguin and\nelectroweak penguin\noperators had to be considered, which made the whole analysis rather\ntechnical. The penguin operators have, however, no impact on the\ncoefficients $ C_1 $ and $ C_2 $ and also the $ O(\\alpha_{QED}) $\nrenormalization considered in \\cite{BJL} can be neglected here. On the other\nhand we are interested in the $ \\mu$ dependence of $ a_1 $ and $ a_2 $\naround $ \\mu = O(m_b) $ and consequently we have to generalize the\nnumerical analysis of \\cite{BJL}.\n\nAt this point it should be remarked that in the context of the\nleading logarithmic approximation, the sensitivity of $a_2$ to\nthe precise values of $C_i$ has been emphasised in ref. \\cite{K84}\na long time ago. The expectation of K\\\"uhn and R\\\"uckl that higher\norder QCD corrections should have an important impact on the\nnumerical values of $a_2$ turns out to be correct as we will\ndemonstrate explicitly below.\n\nThe main objectives of the present paper are:\n\\begin{itemize}\n\\item\nThe values of $ a_1(\\mu) $ and $ a_2(\\mu) $ beyond the leading\nlogarithmic approximation,\n\\item\nThe analysis of their $ \\mu $ and $\\Lambda_{\\overline{MS}}$ dependences,\n\\item\nThe analysis of their renormalization scheme dependence in\ngeneral terms, which we will illustrate here\nby calculating $ a_i(\\mu) $ in three renormalization schemes:\nNDR, HV and DRED.\n\\end{itemize}\n\nSince the $\\mu$, $\\Lambda_{\\overline{MS}}$ and the renormalization scheme dependences of\n$a_i(\\mu)$ are caused by the non-factorizable hard gluon contributions,\nthis analysis should give us some estimate of the expected departures\nfrom factorization. It will also tell us whether, within the\ntheoretical uncertainties, the problem of the small value of $a_2$,\nstressed by many authors in the past, can be avoided.\n\nOur paper is organized as follows. In section 2 we give\na set of compact expressions\n for $ C_1(\\mu) $\nand $ C_2(\\mu) $ which clearly exhibit the $ \\mu $ and\nrenormalization scheme dependences. 
Subsequently, in sections 3 and 4,\nwe will critically analyse $a_i$ for B-decays and D-decays, respectively.\nOur main findings and conclusions are given in section 5.\n\\section{Master Formulae}\nThe coefficients $C_i(\\mu)$ can be written as follows:\n\\begin{equation}\\label{10}\nC_1(\\mu)=\\frac{z_+(\\mu)+z_-(\\mu)}{2}\n\\qquad\\qquad\nC_2(\\mu)=\\frac{z_+(\\mu)-z_-(\\mu)}{2}\n\\end{equation}\nwhere\n\\begin{equation}\\label{11}\nz_\\pm(\\mu)=\\left[1+\\frac{\\alpha_s(\\mu)}{4\\pi}J_\\pm\\right]\n \\left[\\frac{\\alpha_s(M_W)}{\\alpha_s(\\mu)}\\right]^{d_\\pm}\n\\left[1+\\frac{\\alpha_s(M_W)}{4\\pi}(B_\\pm-J_\\pm)\\right]\n\\end{equation}\nwith\n\\begin{equation}\\label{12}\nJ_\\pm=\\frac{d_\\pm}{\\beta_0}\\beta_1-\\frac{\\gamma^{(1)}_\\pm}{2\\beta_0}\n\\qquad\\qquad\nd_\\pm=\\frac{\\gamma^{(0)}_\\pm}{2\\beta_0}\n\\end{equation}\n\\begin{equation}\\label{13}\n\\gamma^{(0)}_\\pm=\\pm 2 (3\\mp 1)\n\\qquad\\quad\n\\beta_0=11-\\frac{2}{3}f\n\\qquad\\quad\n\\beta_1=102-\\frac{38}{3}f\n\\end{equation}\n\\begin{equation}\\label{14}\n\\gamma^{(1)}_{\\pm}=\\frac{3 \\mp 1}{6}\n\\left[-21\\pm\\frac{4}{3}f-2\\beta_0\\kappa_\\pm\\right]\n\\end{equation}\n\\begin{equation}\\label{15}\nB_\\pm=\\frac{3 \\mp 1}{6}\\left[\\pm 11+\\kappa_\\pm\\right].\n\\end{equation}\nHere we have introduced the parameter $\\kappa_\\pm$, which\ndistinguishes between the various renormalization\nschemes:\n\\begin{equation}\\label{16}\n\\kappa_\\pm = \\left\\{ \\begin{array}{rc}\n0 & (\\rm{NDR}) \\\\\n\\mp 4 & (\\rm{HV}) \\\\\n\\mp 6-3 & (\\rm{DRED})\n\\end{array}\\right.\n\\end{equation}\n\nThus $J_\\pm$ in (\\ref{12}) can also be written as\n\\begin{equation}\\label{17}\nJ_\\pm=(J_\\pm)_{NDR}+\\frac{3\\mp 1}{6}\\kappa_\\pm\n=(J_\\pm)_{NDR}\\pm\\frac{\\gamma^{(0)}_\\pm}{12}\\kappa_\\pm.\n\\end{equation}\nSetting $\\gamma_\\pm^{(1)}$, $B_\\pm$ and $\\beta_1$ to zero\ngives the leading logarithmic approximation \\cite{LEE}. The\nNLO corrections in the dimensional reduction scheme (DRED)\nwere first considered in \\cite{ALT}. The corresponding\ncalculations in the NDR scheme\nand in the HV scheme have been presented in \\cite{WEISZ},\nwhere the DRED results of \\cite{ALT} have been confirmed.\nIn writing (\\ref{14}) we have incorporated\nthe $-2 \\gamma^{(1)}_J$ correction\nin the HV scheme resulting from the non-vanishing two--loop anomalous\ndimension of the weak current. Similarly, we have incorporated in\n$\\gamma^{(1)}_\\pm$ a finite renormalization of $\\alpha_s$ in the\ncase of the DRED scheme in order to work in all schemes with the usual\n$\\overline{MS}$ coupling \\cite{BBDM}. For the latter we take\n\\begin{equation}\\label{18}\n\\alpha_s(\\mu)=\\frac{4\\pi}{\\beta_0 \\ln(\\mu^2\/\\Lambda^2_{\\overline{MS}})}\n\\left[1-\\frac{\\beta_1}{\\beta^2_0}\n\\frac{\\ln\\ln(\\mu^2\/\\Lambda^2_{\\overline{MS}})}\n{\\ln(\\mu^2\/\\Lambda^2_{\\overline{MS}})}\\right].\n\\end{equation}\nThe formulae given above\ndepend on $f$, the number of active flavours. In the case of\nB--decays $f=5$. According to the most recent world average\n\\cite{WEBER} we have:\n\\begin{equation}\\label{19}\n\\alpha_s(M_Z)=0.117\\pm0.007\n\\qquad\\quad\n\\Lambda_{\\overline{MS}}^{(5)}=(225\\pm85)~MeV\n\\end{equation}\nwhere the superscript stands for $f=5$.\n\nIn the case of D-decays the relevant scale is $\\mu=O(m_c)$. In order\nto calculate $C_i(\\mu)$ for this case one has to evolve these\ncoefficients from $\\mu=O(m_b)$ down to $\\mu=O(m_c)$ in an effective\ntheory with $f=4$.
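Before discussing this evolution further, we note that the $f=5$ master formulae above are straightforward to evaluate numerically. The following sketch is illustrative only: it takes $M_W\\simeq 80.2~GeV$, selects the renormalization scheme through $\\kappa_\\pm$ of (\\ref{16}), and in the last line assumes the combinations $a_1=C_1+C_2\/3$ and $a_2=C_2+C_1\/3$.
\\begin{verbatim}
import math

F = 5                                    # active flavours for B-decays
BETA0 = 11.0 - 2.0 / 3.0 * F
BETA1 = 102.0 - 38.0 / 3.0 * F
KAPPA = {"NDR": (0.0, 0.0),              # (kappa_+, kappa_-) as in eq. (16)
         "HV": (-4.0, 4.0),
         "DRED": (-9.0, 3.0)}

def alpha_s(mu, lam):
    # Two-loop MS-bar coupling, eq. (18).
    L = math.log(mu ** 2 / lam ** 2)
    return 4.0 * math.pi / (BETA0 * L) * (1.0 - BETA1 / BETA0 ** 2 * math.log(L) / L)

def z_plus_minus(mu, lam, scheme="NDR", mw=80.2):
    out = []
    for s, kappa in zip((+1, -1), KAPPA[scheme]):
        gamma0 = s * 2.0 * (3 - s)                                   # eq. (13)
        gamma1 = (3 - s) / 6.0 * (-21.0 + s * 4.0 / 3.0 * F
                                  - 2.0 * BETA0 * kappa)             # eq. (14)
        B = (3 - s) / 6.0 * (s * 11.0 + kappa)                       # eq. (15)
        d = gamma0 / (2.0 * BETA0)
        J = d / BETA0 * BETA1 - gamma1 / (2.0 * BETA0)               # eq. (12)
        a_mu, a_w = alpha_s(mu, lam), alpha_s(mw, lam)
        out.append((1.0 + a_mu / (4.0 * math.pi) * J)
                   * (a_w / a_mu) ** d
                   * (1.0 + a_w / (4.0 * math.pi) * (B - J)))        # eq. (11)
    return out

zp, zm = z_plus_minus(4.8, 0.225, "NDR")
C1, C2 = 0.5 * (zp + zm), 0.5 * (zp - zm)                            # eq. (10)
print(C1, C2, C1 + C2 / 3.0, C2 + C1 / 3.0)                          # C_1, C_2, a_1, a_2
\\end{verbatim}
The product of the last two factors in $z_\\pm$ is the scheme independent quantity $F_\\pm$ introduced below; for $m_b=4.8~GeV$ and $\\Lambda_{\\overline{MS}}^{(5)}=225~MeV$ this sketch gives $F_+\\approx 0.88$ and $F_-\\approx 1.28$, in agreement with the values quoted in the text.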
Matching $\\alpha_s^{(5)}(m_b)=\\alpha_s^{(4)}(m_b)$,\nwe find to a very good approximation $\\Lambda_{\\overline{MS}}^{(4)}=(325\\pm110)~MeV$.\nUnfortunately, the necessity to evolve $C_i(\\mu)$ from $\\mu=M_W$\ndown to $\\mu=m_c$ in two different theories ($f=5$ and $f=4$), and\neventually with $f=3$ for $\\mu< m_c$, makes the formulae\nfor $C_i(\\mu)$ in D--decays rather complicated.\nThey can be found in \\cite{BJL}.\nFortunately, all these complications can be avoided by a simple trick,\nwhich reproduces the results of \\cite{BJL} to better than $0.5\\%$.\nIn order to find $C_i(\\mu)$ for $1~GeV\\leq\\mu\\leq 2~GeV$ one can\nsimply use the master formulae given above with $\\Lambda_{\\overline{MS}}^{(5)}$ replaced\nby $\\Lambda_{\\overline{MS}}^{(4)}$ and $f=4.15$. The latter \"effective\" value for $f$\nallows one to obtain very good agreement with \\cite{BJL}. The nice\nfeature of this method is that the $\\mu$ and renormalization scheme\ndependences of $C_i(\\mu)$ can be studied in simple terms.\n\nReturning to (\\ref{11}), we note that $(B_\\pm-J_\\pm)$ is scheme independent.\nThe scheme dependence of $z_\\pm(\\mu)$ then originates entirely from\nthe scheme dependence of $J_\\pm$, which has been explicitly shown\nin (\\ref{17}). We should stress that by the scheme dependence we always mean\nthe one related to the operator renormalization. The scheme for $\\alpha_s$\nis always $\\overline{MS}$.\nThe scheme dependence present in the first factor in (\\ref{11}) has\nbeen removed in \\cite{WEISZ} by multiplying $z_\\pm(\\mu)$ by\n$(1-B_\\pm \\alpha_s(\\mu)\/4\\pi)$ and the corresponding hadronic\nmatrix elements by $(1+B_\\pm \\alpha_s(\\mu)\/4\\pi)$. Although this\nprocedure is valid in general, it is not useful in the case of\nthe factorization approach, which precisely omits the non-factorizable,\nscheme dependent corrections such as $B_\\pm$ or $J_\\pm$ in the\nhadronic matrix elements. Consequently, in what follows we will work\nwith the true coefficients $C_i(\\mu)$ of the operators $O_i$ as given\nin (\\ref{10}) and (\\ref{11}).\n\nIn order to exhibit the $\\mu$ dependence on the same footing as the\nscheme dependence, it is useful to rewrite (\\ref{11}) as follows:\n\\begin{equation}\\label{20}\nz_\\pm(\\mu)=\\left[1+\\frac{\\alpha_s(m_b)}{4\\pi} \\tilde J_\\pm(\\mu)\\right]\n \\left[\\frac{\\alpha_s(M_W)}{\\alpha_s(m_b)}\\right]^{d_\\pm}\n\\left[1+\\frac{\\alpha_s(M_W)}{4\\pi}(B_\\pm-J_\\pm)\\right]\n\\end{equation}\nwith\n\\begin{equation}\\label{21}\n\\tilde J_\\pm(\\mu)=(J_\\pm)_{NDR}\\pm\n\\frac{\\gamma^{(0)}_\\pm}{12}\\kappa_\\pm\n+\\frac{\\gamma^{(0)}_\\pm}{2}\\ln\\left(\\frac{\\mu^2}{m^2_b}\\right)\n\\end{equation}\nsummarizing both the renormalization scheme dependence and the\n$\\mu$--dependence. Note that in the first parenthesis in (\\ref{20})\nwe have\nset $\\alpha_s(\\mu)=\\alpha_s(m_b)$, as the difference between the\nscales in this correction is of higher order.\nWe also note that the scheme and the $\\mu$--dependent terms\nare both proportional to $\\gamma^{(0)}_\\pm$. This implies that a\nchange of the renormalization scheme can be compensated by a change\nin $\\mu$. From (\\ref{21}) we find generally\n\\begin{equation}\\label{21a}\n\\mu_i^\\pm=\\mu_{NDR}\\exp\\left(\\mp\\frac{\\kappa_\\pm^{(i)}}{12}\\right)\n\\end{equation}\nwhere $i$ denotes a given scheme.
From (\\ref{16}) we then have\n\\begin{equation}\\label{22}\n\\mu_{HV}=\\mu_{NDR}\\exp\\left(\\frac{1}{3}\\right)\n\\qquad\n\\mu_{DRED}^{\\pm}=\n\\mu_{NDR}\\exp\\left(\\frac{2\\pm 1}{4}\\right)\n\\end{equation}\nEvidently, whereas the change in $\\mu$ relating HV and NDR is the\nsame for $z_+$ and $z_-$, and consequently for $a_i(\\mu)$ and\n$C_i(\\mu)$, the relation between NDR and DRED is more involved. In any\ncase, $\\mu_{HV}$ and $\\mu_{DRED}^\\pm$ are larger than $\\mu_{NDR}$.\nThis discussion shows that a meaningful analysis of the $\\mu$\ndependence of $C_i(\\mu)$ can only be made simultaneously with the\nanalysis of the scheme dependence.\n\nUsing (\\ref{20}) and (\\ref{21}), we can find the explicit dependence\nof $a_i$ on $\\mu$ and the renormalization scheme:\n\\begin{equation}\\label{21c}\n\\Delta a_{1,2}(\\mu)\n=\\frac{\\alpha_s(m_b)}{3\\pi}\\left[F_+\\mp F_-\\right]\n\\ln\\left(\\frac{\\mu^2}{m_b^2}\\right)+\n\\frac{\\alpha_s(m_b)}{18\\pi}\\left[F_+\\kappa_+\\pm F_-\\kappa_-\\right]\n\\end{equation}\nwhere $F_{\\pm}$ denotes the product of the last two factors in\n(\\ref{20}), which are scheme independent.\nFor $m_b=4.8~GeV$ and $\\Lambda_{\\overline{MS}}^{(5)}=225\\pm85~MeV$\nwe have $F_+=0.88\\pm 0.01$ and $F_-=1.28\\pm 0.03$.\nIt is evident from (\\ref{21c}) that\nthe $\\mu$ and renormalization scheme dependences are much smaller\nfor $a_1$ than for $a_2$. We will verify this numerically below.\n\nWe have written all the formulae without invoking heavy quark\neffective theory (HQET).\n\nFor $V_0 \\gtrsim 0.5 \\mu$ the hydrodynamic approximation $V_0 \\ll \\mu$ becomes less accurate, yet Eq.~(6) of the main text still provides a good fit to the GPE data. As physics beyond the hydrodynamic approximation becomes more important, we find that $R_{\\mathrm{eff}}$ decreases and $\\delta_{\\rm eff}$ increases somewhat.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=7cm]{suppFig_energyDiff.pdf}\n \\caption{Difference in incompressible energy of a free vortex infinitely far away from a pinning potential and a vortex at the fixed point within the pinning potential. Blue circles: $V_0 = 0.7\\mu$; orange squares: $V_0 = \\mu$.\n }\n \\label{fig:energy}\n\\end{figure}\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{FallOn_vs_PairCreation.pdf}\n \\caption{Snapshots of the superfluid density (a,b) and velocity (c,d) for GPE simulations of vortex scattering in the fall-on ($v_x = 0.004, y_0 = 3$) regime (a,c) and pair creation ($v_x = 0.008, y_0 = 0$) regime (b,d). The plots are at a point in time where the velocity at the centre of the pin exceeds a threshold value of $u_{\\rm th} = 3c$. The orange circle indicates the location of the incoming positive vortex. Obstacle parameters are $R\/\\xi=4$, $V_0\/\\mu = 0.9$ and $w\/\\xi = 1$.\n }\n \\label{fig:fallon_vs_pc}\n\\end{figure*}\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{StationarySolutions.pdf}\n \\caption{Superfluid density (a,b) and velocity (c,d) of stationary solutions to the GPE in a frame moving with $v_x\/c = 0.32$ (a,c) and $v_x\/c = 0.325$ (b,d) for an obstacle with $R\/\\xi=4$, $V_0\/\\mu = 0.9$ and $w\/\\xi = 1$. 
The orange circle (blue triangle) indicates the location of a positive (negative) vortex.}\n \\label{fig:stationaryStates}\n\\end{figure*}\n\n\\section*{Energy of Pinned Vortex}\nTo expand upon the vortex pinning results presented in Fig.~4 of the main text, in \\fig{fig:energy} we plot the difference in incompressible energy between a free vortex infinitely far away from a pinning potential and a pinned vortex at the energy minimum, as a function of the potential radius $R$.\nThis difference is the amount of incompressible energy that needs to be irreversibly lost, via radiation or pair annihilation, in order for an initially free vortex to become pinned, as per the results in Fig.~3 of the main text. There are two curves, for potential heights of $V_0 = 0.7\\mu$ and $\\mu$, both within a superflow of velocity $v_x = 0.1c$.\n\nThe results show that as the radius of the pinning potential increases, the amount of energy that must be lost to sound increases. In this pure superfluid system, there are only a finite number of mechanisms\nfor this energy loss to occur, and there is an upper limit to the energy that can be lost in a single scattering event.\nTherefore, as the energy of the bound state decreases with increasing radius, it becomes increasingly difficult for a vortex to become pinned, since more energy must be lost by the vortex.\n\n\\section*{Vortex pair creation mechanism}\n\nThe mechanism for vortex pair creation in the pinning potential can be understood by considering the vector sum of the linear velocity field due to the externally imposed flow $v_x$ and the radial velocity field of the approaching vortex, $|v_r| \\propto 1\/r$.\nEmpirically, we find that pair creation only occurs for vortex trajectories for which $y > 0$, and it always occurs at the centre of the pin. When $y<0$, the velocity field of the approaching positive vortex decreases the total speed of the flow at the pin centre. However, for $y > 0$, the approaching vortex causes the total speed of the flow at the centre of the pin to increase. When the vortex is close enough to the pin, and the potential strength $V_0$ is sufficiently large, this can cause the threshold velocity for vortex nucleation to be exceeded.\n\nExamples demonstrating how this arises are shown in Figs.~\\ref{fig:fallon_vs_pc} and~\\ref{fig:stationaryStates}. In Fig.~\\ref{fig:fallon_vs_pc} we show an instant in time at which the velocity at the centre of the pin exceeds a specified threshold (we chose $u_{\\rm{th}} = 3c$). For the fall-on regime [Fig.~\\ref{fig:fallon_vs_pc}(a)], the vortex is already well within the pin, and is releasing a burst of sound. In the pair creation case, the threshold is exceeded when the vortex is still well outside the pin. Complementary to this figure, Fig.~\\ref{fig:stationaryStates} shows two stationary solutions for the same obstacle in a frame moving at constant velocity $v_x$ (here no vortex is present outside the pinning potential). At $v_x\/c = 0.32$ [Fig.~\\ref{fig:stationaryStates}(a)], the velocity field inside the pin resembles that shown in Fig.~\\ref{fig:fallon_vs_pc}(d) (both resemble the velocity field of a Jones--Roberts soliton~\\cite{jones1982}). At slightly higher velocity, $v_x\/c = 0.325$, the soliton turns into a vortex dipole. In Fig.~\\ref{fig:fallon_vs_pc}(b) the velocity structure appears at an angle, consistent with the idea of the vector sum of the velocity fields. 
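This vector-sum picture can be illustrated with a simple point-vortex estimate. The sketch below is illustrative only: it ignores the additional speed-up of the flow inside the depleted obstacle, works in units of $\\xi$ and $c$ with the single-vortex speed taken as $1\/r$ (the convention-dependent prefactor does not affect the comparison), and the offsets $x_0=-3$, $y_0=\\pm 3$ are hypothetical values chosen for illustration.
\\begin{verbatim}
import math

def speed_at_pin_centre(vx, x0, y0, circulation=+1):
    # Uniform background flow vx along x plus the field of a point vortex
    # located at (x0, y0), evaluated at the pin centre (the origin).
    # Lengths in xi, speeds in c; single-vortex speed taken as 1/r.
    r2 = x0 ** 2 + y0 ** 2
    ux = vx + circulation * y0 / r2   # x-component of zhat x (0 - r_vortex) / r^2
    uy = -circulation * x0 / r2       # y-component of the same cross product
    return math.hypot(ux, uy)

vx = 0.1
for y0 in (+3.0, -3.0):
    print(y0, speed_at_pin_centre(vx, x0=-3.0, y0=y0))
# A positive vortex approaching with y0 > 0 adds to the imposed flow at the
# pin centre, giving a larger total speed there than the mirror trajectory
# with y0 < 0, consistent with the pair-creation argument above.
\\end{verbatim}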
In the vortex scattering scenario, the approaching vortex causes an increase in the velocity of the superfluid in the pin, and induces a transition to the vortex dipole state. The approaching vortex then interacts with this dipole, leading to the subsequent annihilation of the incoming vortex.\n\n\\section*{Supplemental Movies}\n\nSupplemental Movies S1--S4 show examples of the vortex pinning and scattering dynamics presented in Fig.~3(a--d) of the main text. They correspond to the ``conservative'', ``fall-on'', ``pair creation'', and ``too fast'' regimes, respectively. The simulation parameters are provided in the caption of Fig.~3 of the main text, as well as on the title slides.\n\nSupplemental Movie S5 shows examples of the stationary solutions of the GPE with a vortex trapped on the pinning potential as the superfluid flow velocity $v_x$ is increased. The pinning potential has a strength $V_0 = 0.5 \\mu$, radius $R= 10 \\xi$, and width $w=2\\xi$. The largest velocity corresponds to the last stable solution that is found for this pinning potential, and determines the value of the unpinning velocity $u_c$ plotted in Fig.~2(b) of the main text.\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}