diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeqqt" "b/data_all_eng_slimpj/shuffled/split2/finalzzeqqt" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeqqt" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nThe field of interpretability aims at providing users and practitioners with techniques meant to explain either globally a trained machine learning model or locally a particular prediction made by a model. This can be achieved either by training directly an interpretable model, or in a post hoc approach, using model-agnostic or model-specific interpretability techniques.\n\nThis paper focuses on post hoc surrogate models that globally approximate a machine learning classifier while providing explanations at the local level of each prediction.\nWe are interested in model-agnostic interpretability approaches meant to be applied on standard feature spaces composed of tabular data. Our goal is to explain any type of trained model: the classifier is a black-box left to the discretion of the practitioners.\nWe refer the reader to recent published surveys for a global picture of the interpretability field as for instance~\\citep{Guidotti2018a}.\n\n\\begin{figure}\n \\centering\n \\resizebox*{\\columnwidth}{3.5cm}{\n \\input{illustration_tree_concept.tex}\n }\n \\caption{Concept Tree trained on FRED-MD macroeconomic dataset. Variables are grouped by Concepts to constraint the training of an interpretable surrogate decision tree}\n \\label{fig:my_label}\n\\end{figure}\n\nSurrogate models aiming at providing post hoc interpretability may induce confusion by conveying a false sense of simplicity, especially when subgroups of dependent variables are involved. We refer to dependent variables as variables sharing similar information and possibly generated by a common phenomenon. It may include the various lags of a given time series, various features of a variables, or various measures of a given fact. Surrogate models may arbitrarily select one given variable among a group of dependent variables, thus obscuring the global picture.\nSubsequently, practitioners may better understand a surrogate model that retains the whole set of dependent variables and depicts a bigger picture than a simpler model. \n\nThis paper introduces the idea of \\textit{concept}. A \\textit{concept} is a representation gathering a group of dependent variables. It can be defined using either domain knowledge or statistical properties of dependent variables (such as the Pearson correlation). The use of \\textit{concepts} allows to provide high-level representations that practitioners may find easier to interpret. We contend that \\textit{concept}-based methods may be better suited to human understanding and provide more practitioner-friendly representations of a black-box classifier. \n\nWe substantiate that claim with an application to decision tree surrogates. Decision trees are universally considered interpretable by domain experts~\\cite{freitas2014comprehensible}. We compare standard surrogate tree models to trees whose training is constrained by the grouping of subgroups of variables in \\textit{concepts}. More specifically, we embed the idea of \\textit{concept} in the TREPAN algorithm~\\citep{Craven1996b}, an interpretable decision tree originally instantiating a variant of \\emph{id2-of-3}~\\citep{murphy1991id2} with a mechanism of oracle querying aiming at populating areas of the training set where the fidelity of the surrogate can be improved. 
In our approach, the \\textit{concepts} are used at each node of the decision tree to constrain the training of the split rule based on \\emph{id2-of-3}. We compare the resulting \\textit{Concept Trees} to the surrogates provided by the original TREPAN algorithm. \n\nThe next section expands on the motivation and formally introduces the idea of \\textit{high-level concepts}. Section 3 introduces \\textit{Concept Trees}, a version of the TREPAN algorithm that builds on \\textit{concepts}, and shows that \\textit{Concept Trees} meet the prerequisites of a global-to-local, post-hoc and model-agnostic surrogate. Section 4 assesses both the qualitative and quantitative relevance of our proposition through experiments led on FRED-MD, a monthly macroeconomic database designed for empirical analysis of the US economy \\citep{mccracken2016fred}.\n\n\n\\section{Concept: Grouping Dependent Variables into High-Level Representation of Variables}\n\nIt is often the case that groupings of variables in a given dataset may naturally appear. Such grouping can derive from similar meaning or a similar origin (\\emph{e.g.} unemployment among men, unemployment among women, unemployment among young people...). A grouping can also be the result of multiple transformations applied to a given source of data (such as multiple lags of a time series, or features engineered from the same variable).\n\nIn this work, we consider two types of \\textit{concepts}: expert-defined grouping of features and automatically-defined grouping based on a statistical criterion such as feature correlation.\nExpert-based \\textit{concepts} may be used when domain knowledge is available. Automatically-defined concepts do not require prior domain knowledge.\n\nExploiting the group structure of variables has already been used in the literature to train more accurate sparse models, for instance with \\emph{group-lasso}~\\citep{yuan2006model} or \\emph{sparse-group-lasso}~\\citep{simon2013sparse}. In the latter, improved accuracy is observed with variable groupings such as gene pathways or factor level indicators in categorical data.\nOther machine learning fields also cover the idea of grouping dimensions, such as subspace clustering \\citep{vidal2011subspace}.\n\nIn the field of interpretability, the idea of exploiting a meaningful grouping of features to generate better explanations has emerged, for instance with topic-modeling-based feature compression~\\citep{kim2015scalable} or on image classification with deep learning models~\\citep{Kim2017,Ghorbani2019}.\n\nCorrelated features is a known challenge when building machine learning models and interpreting feature importances~\\cite{Buhlmann2013,Gregorutti2017,Strobl2008,Tolosi2011}. For instance, \\emph{lasso}-based methods for feature selection tend to select only one representative from a group of correlated features and to discard the others~\\cite{Buhlmann2013}. It has been pointed out that correlated features severely impact variable importance measures of random forests~\\cite{Strobl2008, Gregorutti2017}. Also, many feature selection methods suffer from a \\emph{correlation bias}: features belonging to a group of correlated features receive weights inversely proportional to the size of the group~\\cite{Tolosi2011}. This issue creates instability in the feature selection process. Small changes in the training data can result in significant changes in the selected set of features. 
This instability prevents a robust interpretation of variable importance.\n\nWe propose to use the idea of \\textit{concept} to address both expert-defined grouping of features and automatically (correlated)-defined grouping. \\textit{Concepts} are embedded into surrogate models in order to constrain their training, which provides two levels of granularity for the explanations: at high-level (concept) and at finer level (raw variables).\nThe next paragraph offers a formal presentation of the idea of \\textit{concepts}. \n\nWe consider a set of training examples $\\mathbb{X}$ where each example is denoted $x^{(i)}$ with $i \\in [1...|\\mathbb{X}|]$ and associated with a label $y^{(i)}$. The set of training examples $\\mathbb{X}$ is composed of a set of features $j \\in \\mathbb{J}$ and each feature vector is noted $x_j$ with $j \\in \\mathbb{J}=[1...N]$.\n\nA \\emph{concept} is a subset of features denoted $c_k \\subset \\mathbb{J}$. $K$ concepts $c_k$ co-exist to form the set of concepts $c_k \\in \\mathbb{C}, k \\in [1...K]$. The instantiation of a concept $c_k$ is the process of populating it with dependent features. Every feature $j \\in \\mathbb{J}$ belongs to a single concept $c_k$ and one concept only:\n$$c_k \\cap c_l = \\emptyset \\mid \\forall l \\in \\mathbb{J} \\text{ and } l \\neq k$$\n\n\\subsection{Expert knowledge concepts}\nThe instantiation of a concept $c_k$ can be either driven by domain knowledge or performed automatically. The former requires that all variables belong to user-defined groups that be meaningful to domain experts. The variable classifications are sometimes to be found in the documentation of a dataset. That is the case of the FRED-MD data, which is used in the experimentation section of this work. The paper accompanying the dataset \\citep{mccracken2016fred} includes in appendix a table that classifies the 134 monthly macroeconomic indicators into 8 categories: output and income, labour market, housing, consumption orders and inventories, money and credit, interest and exchange rates, prices, and stock markets. Table~\\ref{tab:fred_succint_desc} provides a sample of these categories. \n\n\\begin{table*}[]\n\\centering\n\\caption{Overview of the grouping of variables by concept in FRED-MD database~\\citep{mccracken2016fred}}\n\\label{tab:fred_succint_desc}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}ccc@{}}\n\\toprule\n\\thead{Concept 1: Output and Income} & \\thead{Concept 2: Labor Market} & \\thead{Concept 5: Money and Credit} \\\\ \\midrule\nReal Personal Income & Civilian Labor Force & Total Reserves of Depository Institutions \\\\\nReal personal income ex transfer receipts & Civilian Employment & Commercial and Industrial Loans \\\\\nIP: Consumer Goods & Civilian Unemployment Rate & Total Consumer Loans and Leases Outstanding \\\\\n... & ... & ... \\\\ \\bottomrule\n\\end{tabular}\n}\n\\end{table*}\n\n\\subsection{Automatic concepts: a simple approach}\nFailing that user may rely on domain knowledge, the set of \\textit{concepts} $\\mathbb{C}$ can be built automatically using a clustering algorithm based on feature correlations. Features (indexed by $j$) can be grouped in a concept $c_k$ using any dependence measure $\\rho$. The most straightforward is the Pearson correlation, that measures linear correlation between variables. 
Assuming the measure has values between $[-1;1]$ (an absolute value of $1$ meaning two variables perfectly dependent), a user-defined threshold $\\epsilon$ is set on the absolute value of the measure of dependence between two features $x_j$ and $x_{j'}$ in order to decide whether these features belong to the same concept $c_k$:\n\n$$\\left|\\rho(x_j,x_{j'})\\right| \\geq \\epsilon \\mid \\forall (j, j') \\in c_k$$\n\nThe clustering algorithm is greedy: for each iteration a feature is tested against all features and existing groups. A feature $j'$ is affected to a \\textit{concept} $c_k$ if its dependence to each feature in $c_k$ is higher than $\\epsilon$:\n$$\\left|\\rho(x_j,x_{j'})\\right| \\geq \\epsilon \\mid \\forall j \\in c_k \\rightarrow c_k = c_k \\cup j'$$\n\nIf a given feature is independent from all the others, it belongs to a singleton.\nThis formalization is also adequate for the expert's knowledge grouping. In that case, $\\rho$ and $\\epsilon$ would be the criteria of group assignment by the expert.\n\n\nThe next section explains how the notion of \\textit{concepts} may be used to constrain the training of a decision tree in order to produce more interpretable surrogates. \n\n\n\\section{Concept Tree: Embedding Concepts For More Interpretable Surrogate Decision Tree}\n\nDecision trees are a well-known interpretable machine learning model. A decision tree has a graphical structure, its decisions rely on a sparse subset of features, and features are used in a hierarchical way, thus conveying an intuitive sense of feature importance and providing several levels of explanation granularity~\\citep{freitas2014comprehensible}. Training a decision tree on the training set $\\mathbb{X}$ yields an interpretable classification algorithm, provided that the number of nodes is kept under a certain threshold. The limit on the tree complexity may come at the expense of predictive performance. Decision trees appear as good candidate surrogates to black-box classifiers. \n\nA decision tree surrogate is produced as follows. A black-box $b$ is trained on $\\mathbb{X}$ with the true class labels $y^{(i)}$ ; the surrogate $f$ is then trained on the black-box predictions $\\hat{y}^{(i)} = b(x^{(i)})$. In production, the classification is performed by the black-box while the explanations are provided by the surrogate decision tree. The fidelity of the surrogate is assessed as the proportion of instances where the surrogate makes the same prediction than the black-box classifier. \n\n\nThe TREPAN algorithm is an instance of interpretable surrogate tree model~\\citep{Craven1996}. It is model-agnostic and aims at mimicking the classification behaviour of a black-box $b$. It queries the black-box with instances to get predictions $\\hat{y}^{(i)} = b(x^{(i)})$ and then fits an interpretable decision tree. The outline of TREPAN is shown in Algorithm~\\ref{alg:trepan}. The querying of extra instances allows to populate the critical areas of the feature space and thus significantly curb the tendency of decision trees to overfit. \n\n\nTREPAN uses $m-of-n$ decision rules, that are inspired from $id2-of3$ decision trees~\\citep{murphy1991id2}. To fit an $m-of-n$ decision rule, the set of the $n$ most discriminative tests on the features for the node is discovered using the information gain. Then, in order to validate a node, an instance must validate at least $m$ tests among the $n$. 
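The semantics of such a rule can be sketched in a few lines of Python (an illustrative snippet, not the TREPAN implementation; the function and variable names are ours). The prose example that follows instantiates it for a $2-of-3$ rule.

\\begin{verbatim}
def m_of_n_satisfied(tests, m):
    # tests: boolean outcomes of the n literals of the rule
    #        (a negated literal such as 'not x2' is evaluated
    #        before being passed in)
    # m    : minimum number of literals that must hold
    return sum(bool(t) for t in tests) >= m

# 2-of-{x1, not x2, x3} evaluated on one instance
x1, x2, x3 = True, True, False
passes = m_of_n_satisfied([x1, not x2, x3], m=2)   # False: only x1 holds
\\end{verbatim}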
For instance, given a decision rule with 3 tests $x_1$, $x_2$ and $x_3$, such as $2$-of-$\\{x_1, \\neg x_2, x_3\\}$ is equivalent to the logical expression $(x_1 \\vee \\neg x_2) \\wedge (x_1 \\vee x_3) \\wedge (\\neg x_2 \\vee x_3)$.\nThe parameters $m$ and $n$ are user-defined upper-bounds: their final values are learnt by the node. The $m-of-n$ decision rules are learnt in a greedy way for computational efficiency. For the outline of the fitting algorithm of an $m-of-n$ decision rule, we refer the reader to the original paper~\\citep{Craven1996b} for the sake of conciseness and precision.\n\nWhile the original TREPAN paper is two decades old already, researchers have kept reassessing its relevance up until recently~\\citep{Sarkar2016}.\nExperimentations show that TREPAN has a good fidelity to the black-box and a better accuracy on the test set than a decision tree directly trained on the training set $\\mathbb{X}$~\\citep{Craven1996}. This good performance is attributed to the additional-instance-drawing mechanism, which yields a denser support to the fit of a decision rule and thus a better prediction accuracy. \n\nThe $m-of-n$ decision rule structure improves the accuracy and the fidelity of the decision tree as it allows to learn more complex decision boundaries. However, it comes at the price of interpretability of both the node's decision rule and the decision tree overall. A practitioner may find it hard to understand all the possible ${n \\choose m}$ combinations of variables at the same time. Moreover, the contrary of a $m-of-n$ literal may be challenging to conceive as soon as $m>1$ and $1$\n\\STATE $n\\_nodes = 1$\n\\WHILE{$Queue \\neq \\emptyset$ and $n\\_nodes < max\\_nodes$}\n\\STATE Remove $$ from head of $Queue$ \n\\STATE Fit decision rule of node $N$\n\\FOR{each outcome $t$ of the test}\n \\STATE Initialize a child node $C$\n \\STATE $S_c \\gets$ instances of $S_N$ with outcome $t$ for the test\n \\STATE $S_C \\gets S_c \\cup DrawSample(min\\_sample-|S_c|)$\n \\STATE Get labels from black-box $b$ for $S_C$\n \\IF{$C$ is not pure enough}\n \\STATE Add $$ to $Queue$\n \\ENDIF\n \\STATE $n\\_nodes = n\\_nodes + 1$\n\\ENDFOR\n\\ENDWHILE\n\n\\STATE {\\bfseries Return} $R$\n\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{algorithm}[tb]\n \\caption{Construction of a Concept Tree decision rule}\n \\label{alg:concepttreenode}\n\\begin{algorithmic}\n \\STATE {\\bfseries ConstructConceptDecisionRule}($X$, $y$, $concepts$)\n \n \\STATE $best\\_candidate \\leftarrow \\emptyset$\n \\STATE $best\\_ig \\leftarrow = 0$\n \\FOR{$c \\in concepts$}\n \\STATE $X_c \\leftarrow$ Select features from $X$ belonging to $c$\n \\STATE $candidate \\leftarrow MofNDecisionRule(X_c, y, m, n)$\n \\STATE $ig \\leftarrow$ Compute information gain for $candidate$\n \\IF{$ig > best\\_ig$}\n \\STATE $best\\_ig \\leftarrow ig$\n \\STATE $best\\_candidate \\leftarrow candidate$\n \\ENDIF\n \\ENDFOR\n \n \\STATE {\\bfseries Return} $best\\_candidate$\n \n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experimentation: FRED-MD Macroeconomic Database}\n\nThis paper has introduced the ideas of Concept and Concept Tree, whose main objectives are to provide an accurate surrogate $f$ mimicking a black-box classifier $b$ while being as interpretable as possible. The next paragraphs describe experimentations made with the FRED-MD dataset ~\\citep{mccracken2016fred}, a publicly released macroeconomic database of 134 monthly U.S. indicators with more than 700 instances. 
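For reference, the correlation-based concept construction of Section 2, which is the procedure used below to build the Concept Tree-Correlation variant, can be sketched as follows. This is an illustrative Python transcription of the greedy grouping step, not the code used in the experiments; the function name and array conventions are ours.

\\begin{verbatim}
import numpy as np

def build_concepts(X, eps):
    # X   : (n_samples, n_features) data matrix
    # eps : threshold on the absolute Pearson correlation
    rho = np.corrcoef(X, rowvar=False)
    concepts = []                    # each concept is a list of feature indices
    for j in range(X.shape[1]):
        for c in concepts:
            # join an existing concept only if |rho| >= eps with
            # every feature already assigned to it
            if all(abs(rho[j, i]) >= eps for i in c):
                c.append(j)
                break
        else:
            concepts.append([j])     # otherwise start a new (singleton) concept
    return concepts
\\end{verbatim}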
Interpretability is critical in economics and our experimentations show how Concept Trees may match the requirements of the field. \n\nThe experimentations are conduced as follows. The classification target is computed from the \\emph{civilian unemployment rate}: if the value for an instance is lower than in the previous period, the target value is set to label 0 and the label is set to 1 otherwise. Domain-knowledge-based experts are extracted from the FRED-MD official documentation, which classifies variables into 8 subgroups (see Table~\\ref{tab:fred_succint_desc}). \n\nThe competitors are both flavors of Concept Tree (Concept Tree-Expert and Concept Tree-Correlated for automatically-defined concepts) and the original TREPAN.\nSince the Concept Tree and TREPAN have a similar structure, they share the same parameters for the experimentation. The maximal number of nodes $max\\_nodes$ is set to 10. For the split rules, the values of $m-of-n$ are set to $1-of-1$, $3-of-3$ and $5-of-5$. The minimal value of samples $min\\_samples$ to fit a split rule at a node is 100, thus additional samples are drawn from the fitted distribution if the $\\mathbb{X}$ is not large enough. For Concept Tree-Correlation, the threshold $\\epsilon$ on the correlation $\\rho$ is set to $0.9$ such as $\\left|\\rho(x_j,x_{j'})\\right| \\geq 0.9$.\n\nThe black-box $b$ used is a Random Forest with 200 estimators, with the scikit-learn default values for the other parameters.\nOut-sample-fidelity is computed by 5-fold cross-validation. At each split the black-box is fitted on the train set and makes predictions for the train set and the test set. The Concept Tree and TREPAN instances are then fitted on the train set with black-box predictions as targets, and their fidelities are measured against the black-box predictions made on the test set. Out-of-sample accuracy is assessed using the same procedure. Fidelity measures the proportion of predictions made by the surrogate that match the predictions made by the black-box, while accuracy measures the proportion of predictions made by the surrogate that match the actual value of the target. Interpretability is assessed by economic expert judgement.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Experimental results: surrogate accuracy and fidelity as a function of the algorithm, the concept type and the split rule}\n\\label{tab:results}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{@{}ccc||ccc@{}}\n\\toprule\n\\thead{Algorithm} & \\thead{Concept Type} & \\thead{Split Rule} & \\thead{Surr. Accuracy} & \\thead{Surr. 
Fidelity} \\\\ \\midrule \\midrule\nConcept Tree & Expert & \\multirow{ 3}{*}{$1-of-1$} & $63\\% \\pm 4\\%$ & $65\\% \\pm 9\\%$ \\\\\nConcept Tree & Correlation & & $68\\% \\pm 6\\%$ & $71\\% \\pm 6\\%$ \\\\\nTREPAN & \/ & & $\\bm{75\\% \\pm 9\\%}$ & $\\bm{74\\% \\pm 7\\%}$ \\\\ \\hline\nConcept Tree & Expert & \\multirow{ 3}{*}{$3-of-3$} & $69\\% \\pm 9\\%$ & $\\bm{76\\% \\pm 4\\%}$ \\\\\nConcept Tree & Correlation & & $\\bm{72\\% \\pm 11\\%}$ & $75\\% \\pm 5\\%$ \\\\\nTREPAN & \/ & & $68\\% \\pm 8\\%$ & $72\\% \\pm 6\\%$ \\\\ \\hline\nConcept Tree & Expert & \\multirow{ 3}{*}{$5-of-5$} & $\\bm{71\\% \\pm 4\\%}$ & $\\bm{73\\% \\pm 2\\%}$ \\\\\nConcept Tree & Correlation & & $70\\% \\pm 8\\%$ & $71\\% \\pm 8\\%$ \\\\\nTREPAN & \/ & & $67\\% \\pm 5\\%$ & $71\\% \\pm 4\\%$ \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\end{table}\n\n\\subsection{Results}\nTable~\\ref{tab:results} presents the cross-validated accuracies and fidelities for TREPAN, the Concept Tree with expert-defined concepts and the Concept Tree with automatically defined clusters. The black-box mean accuracy over the folds is $82\\% \\pm 4\\%$.\n\nThe experimentations show that Concept Tree provides surrogates whose fidelity and accuracy matches the performance of TREPAN trees and whose interpretability may be significantly enhanced. Although, TREPAN leads in terms of accuracy and fidelity for $1-of-1$ nodes; Concept Tree-Expert for $3-of-3$; and Concept Tree-Correlated for $5-of-5$ nodes, the non-negligible standard-deviations and the setup of this preliminary experiment (number of folds and datasets) don't allow for a final conclusion. However, the experiment highlights the relevance of Concept Tree in terms of accuracy and fidelity and as things stand, Concept Tree is at least as relevant as TREPAN.\n\nFigure \\ref{fig:trees} in Appendix displays the trees generated by TREPAN (Figure \\ref{fig:tree_trepan}), the Concept Tree with expert-defined concepts (Figure \\ref{fig:concept_expert}) and the Concept Tree with correlation-based defined concepts (Figure \\ref{fig:concept_correlated}). \n\nFrom a macroeconomic point of view, the Concept Tree yields meaningful high level explanations of the workings of the black-box classifiers. The Concept Tree-Expert highlights that Labor Market related variables are the most important in the prediction of the target, followed by Output and Income related variables and Consumption related variables. The Concept Tree-Correlated also sheds light on the importance of nodes referring to Labour market data. Overall, Concept Tree enhances the interpretability of surrogate trees by \\textbf{structuring} the explanations.\n\nIn Concept Tree-Expert (based on domain-knowledge), explanations are structured by sharing a common \"language\" with users or experts. It provides the big picture with one general \\textit{concept} by node. The detailed analysis of a node is eased because only related, homogeneous, variables are assembled. There is an intuitive relations between high-level explanations (concepts) and low-level explanations (raw variables).\n\nIn Concept Tree-Correlation, computed automatically based on variable correlations, part of the domain-knowledge can be recovered. Concept Tree-Correlation presents also the advantage of gathering dependent variables for each node, avoiding arbitrary choices between correlated variables to build a test.\n\nIn contrast, TREPAN trees use an idiosyncratic language not shared by the practitioner. 
Associations of tests in a TREPAN node generate confusion by gathering variables that are hardly related from a domain-knowledge point of view. Such nodes obstruct the understanding by preventing the user from getting the big picture.\n \nTo illustrate these arguments, we focus on the top three nodes of the trees in Figures \\ref{fig:tree_trepan}, \\ref{fig:concept_expert} and \\ref{fig:concept_correlated}. In the TREPAN tree (Figure \\ref{fig:tree_trepan}), the colored variables names highlight that 8 out of the 9 chosen variables are part of the Labor Market concept. This structure is explicitly displayed by the Concept Tree-Expert (Figure \\ref{fig:concept_expert}) as the concept chosen in the root node, facilitating the interpretation of the tree by referring to high level concept. We can also notice in the TREPAN tree that the left child of the root node chose the \\textit{Civilian Employment, gr} feature for its first rule, whereas the right child node chose the \\textit{All Employees: Total nonfarm, gr} feature instead. However, the cluster 3 in the Concept Tree-Correlated (Figure \\ref{fig:concept_correlated}) explicitly shows that these features are highly correlated, suggesting that they are interchangeable.\n\n\n\n\n\\section{Conclusion}\n\nThe present paper introduces \\textit{concepts}, a meaningful manner to group dependent variables, and Concept Trees, an alternative tree-based surrogate model that provides both high-level and detailed explanation to black-box classifiers. The grouping of variables in \\textit{concepts} allows to overcome the false sense of simplicity conveyed by simpler decision tree surrogate that may give an artificially high importance to a given variable picked among a set of correlated variables, thus obscuring the bigger picture. The use of \\textit{concepts} also helps practitioners make sense of otherwise cryptic $m-of-n$ literals, by relying on a higher-level representation of the data. Compared to TREPAN, Concept Trees produce surrogates that have a comparable size and are as accurate, but more easily understandable to a human thanks to a better organization of the information along higher-level representations that significantly enhance the interpretability of the surrogate.\nExperiments were conduced using FRED-MD, a macroeconomic database whose documentation includes a grouping of variables. The Concept Tree was applied to this data using both expert-defined concepts derived from the data documentations and concepts built using a simple correlation-based clustering algorithm. First results show a notable improvement in human-readability while accuracy and fidelity of the surrogate are preserved.\nFurther research could involve a deeper assessment of our propositions, both quantitatively and qualitatively. It could also be relevant to explore alternative clustering algorithms designed to produce more relevant \\textit{concepts}.\nFurther modification to the Concept Tree algorithm may improve performance: currently, following TREPAN's principle, one concept can only be used once in a decision path. Considering a concept encompass several variables, the accuracy and fidelity of the surrogate may suffer from this probably too severe constraint.\n\n\n\\bibliographystyle{icml2019}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter[Pupil Plane Phase]{Pupil Plane Phase Apodization}\\label{sec:ppp}\n\n\\author[M. Kenworthy, J. Codona and F. Snik]{Matthew A. Kenworthy, Johanan\nL. Codona\\footnote{Steward Observatory, 933 N. 
Cherry Avenue, Tucson, AZ\n85721, USA} and Frans Snik}\n\n\\address{Leiden Observatory, Leiden University, \\\\\nP.O. Box 9513, 2300 RA Leiden, The Netherlands, \\\\\nkenworthy@strw.leidenuniv.nl}\n\n\\begin{abstract}\n\nPhase apodization coronagraphs are implemented in a pupil plane to\ncreate a dark hole in the science camera focal plane.\nThey are successfully created as ``Apodizing Phase Plates'' (APPs) using\nclassical optical manufacturing, and as ``vector-APPs'' using\nliquid-crystal patterning with essentially achromatic performance.\nThis type of coronagraph currently delivers excellent broadband contrast\n($\\sim$10$^{-5}$) at small angular separations (few $\\lambda\/D$) at\nground-based telescopes, owing to their insensitivity to tip\/tilt\nerrors.\n\n\\end{abstract}\n\n\\body\n\n\\section{Introduction}\n\nPupil-plane apodization techniques (amplitude, phase, or complex) differ\nfrom focal plane mask coronagraphs in that they affect all objects in\nthe field in an identical fashion.\nThe main goal of such pupil-plane coronagraphs is to enforce dark holes\nin the ensuing point spread function (PSF) in which faint companions can\nbe directly detected and characterized.\nSince the star and companion have the same PSF, the halo should be\nsuppressed while preserving the starlight in the core as much as\npossible, {\\it i.e.}~a high Strehl ratio PSF.\nIn this situation, the ``noise'' is governed by the PSF diffraction halo\nplus any diffuse background, while the ``signal'' is contained in the\nPSF core.\n\nThe phase-only ``Apodizing Phase Plate''\\cite{Kenworthy07,\nKenworthy13,Kenworthy10a,Kenworthy10b} (APP) coronagraphs have now been\nsuccessfully applied on-sky at ground-based telescopes.\nThe main benefits of APPs include a high contrast inside the dark hole\n($\\sim$10$^{-4}$--10$^{-6}$), at a small inner working angle $\\sim$$1.5\n\\lambda\/D$, with complete insensitivity to tip\/tilt errors (and\npartially resolved stellar disks) that usually limit focal-plane\ncoronagraphs.\nThis invariance of the PSF additionally enables beam-switching for\nthermal background removal, and observations of multiple star systems.\nWith the introduction of advanced liquid-crystal technology for the\nvector-APP coronagraph\\cite{vAPP,vAPP-prototype,vAPP-MagAO}, it has also\nbecome efficient over spectral bandwidths of more than an octave, at\nwavelengths from 300 to 30,000 nm\\cite{Packham2010}.\nThe extreme phase patterns enabled by liquid-crystal writing techniques\ncan now also produce dark holes with various shapes, including\ncomplementary 180$^\\circ$ D-shaped dark holes and 360$^\\circ$\ndonut-shaped dark holes.\nAs a single pupil-plane optic, the (vector-)APP is easily implemented in\na filter wheel in existing instruments, and is fully compatible with\ncryovacuum (and likely also space-based) operation.\n\n\\section{Theory}\n\nThe 1-D apodization problem has been studied for a long time, including\nslit apodization in spectroscopy and pulse shaping to reduce channel\nbandwidth in telegraphy, by apodizing in amplitude\\cite{Jacquinot64}.\nThe family of functions to describe this are the Slepian functions and\nthe Prolate Spheroidal wavefunction\\cite{slepian1965ast}.\nSince transmission apodization is linear, it can achieve a high degree\nof suppression between the PSF and the halo beyond a selected inner\nworking angle (IWA), and in general the apodizations are complex with\nboth transmission and phase.\nThe accurate manufacture of complex amplitude masks is non-trivial and\ncan result 
in low transmission efficiencies.\n\nPhase-only apodization theory was initially developed for removing\nspeckles generated by residual optical aberrations in high contrast\nimaging experiments\\cite{Malbet95}, where wavefront sensing in the final\nfocal plane of a coronagraph forms a closed loop with a deformable\nmirror (DM) in the optical system.\nA sinusoidal ripple on the DM forms a diffraction grating in the phase\nof the wavefront, generating a pair of speckles that are copies of the\nAiry core of the central PSF.\nThe appropriate choice of spatial phase and amplitude of the ripple\napplied to the DM destructively interferes with speckles generated by\naberrations in the optical system.\nThe same principle can be generalized to cancel out the diffraction\nrings of the PSF itself, as demonstrated on-sky by the addition of coma\ninto an adaptive optics system to cancel out part of the first Airy\nring\\cite{Serabyn07}.\nApodization in phase over a two-dimensional region does not yet have an\nanalytic solution.\nSuperposing many different phase ripples in the pupil plane to suppress\nthe diffraction pattern over a region of interest (ROI - typically\ndefined as a D-shaped region next to the Airy core of the PSF) is\nchallenging, since the speckles add vectorially and interfere with each\nother, making it a nonlinear problem.\nRef.~\\refcite{Codona2007} searched for phase-only apodization solutions\nthrough a modal basis approach. An ROI is defined in a complex\namplitude focal plane, where the diffraction halo is to be minimized.\nA complex amplitude field is defined in the pupil plane, and a Fourier\nimaging operator is defined that maps from the pupil plane into the ROI.\nSingular Value Decomposition of this operator produces a modal basis set\nof complex pupil amplitudes, ordered canonically from the most power\ncontained within the ROI to the least.\nThese modes typically have complex amplitudes in the pupil plane, so\ntheir complex amplitude is normalized to unity to make them phase-only\napodization.\nThese ``antihalo'' modes are subtracted off the complex amplitude of the\npupil plane, and the process is repeated.\nThe antihalo modes extend a short distance beyond the ROI, and if the\nIWA is within the first Airy ring, flux from the core of the PSF is\ndetrimentally removed as well.\nCare is needed to suppress these modes by imposing additional\nconstraints to maximize the PSF core encircled energy.\nIf not properly accounted for, phase wrapping can also occur when the\npeak-to-valley phase apodization is greater than $2\\pi$.\n\nNew algorithms have been developed at Leiden Observatory by Doelman,\nKeller and Por.\nDoelman generates focal plane dark zones using a combination of\nphase-only pupil modes.\\cite{Doelman16}\nA simulated annealing approach is used, where the mode amplitudes are\nrandomly adjusted.\nSolutions that improve the dark region are kept, but worse solutions are\noccasionally accepted as well to escape local minima.\nKeller uses a Gerchberg-Saxton\\cite{Gerchberg72} method, switching\nbetween the pupil plane and focal plane.\nConvergence to a given contrast level is increased by an order of\nmagnitude using Douglas-Rachford operator splitting\\cite{Douglas56}.\nPor\\cite{por2017optimal} generalizes an algorithm by\nCarlotti\\cite{Carlotti2013} for general complex amplitudes in the pupil\nplane.\nStrehl ratio maximisation for this mask is a linear operation solved by\nlarge scale optimizer, and phase-only solutions are naturally found\nthrough this 
approach.\n\n\\section{First generation APPs using classical phase}\n\nThe manufacture of APP solutions requires the variation of phase across\nthe pupil plane of the camera, and the development of free-form optic\nmanufacture with notable departures from sphericity using\ncomputer-controlled diamond turning\\cite{Davis07} encoded the phase\npatterns as variations in the thickness of a high refractive index\ntransmissive substrate.\nFirst light observations of an APP with diamond turned\noptics\\cite{Kenworthy07} demonstrated the viability of the manufacturing\ntechnique and of the theory.\nThe success of the prototype led to APP coronagraphs on the 6.5m MMTO\ntelescope in Arizona\\cite{Kenworthy13} and on the Very Large Telescope\nin Chile\\cite{Kenworthy10a,Kenworthy10b}.\nThe VLT APP led to the first coronagraphic image of the extrasolar\nplanet $\\beta$ Pictoris b\\cite{Quanz10} and the discovery of the\nextrasolar planet HD~100546b\\cite{Quanz12}.\n\nDiamond turning only allows for low spatial frequencies in the azimuthal\ndirection of the cutting tip, and the classical phase plate\nmanufacturing was inherently chromatic.\nAttempts to achromatize the APP using doublets proved highly\nchallenging\\cite{Kenworthy10c}.\n\n\\section{The Vector-APP}\n\nThe main limitations of the APP coronagraph (chromaticity, limited\ncoverage around the star, limited phase pattern accuracy) were solved by\nthe introduction of the vector-APP (vAPP)\\cite{vAPP}.\nIn a similar way as for the Vector Vortex Coronagraph\\cite{VVC}, the\nvAPP replaces the classical phase pattern ($\\phi_{\\textrm{c}}[u,v] =\nn(\\lambda) \\Delta d[u,v]$) with the so-called\nPancharatnam\\cite{Pancharatnam}-Berry\\cite{Berry} phase or ``geometric\nphase''\\cite{Escuti-geometricphase}.\nThe vAPP phase pattern is imposed by a half-wave retarder with a\npatterned fast axis orientation $\\theta[u,v]$.\nThe geometric phase is imprinted on incident beams decomposed according\nto circular polarization state: $\\phi_{\\textrm{g}}[u,v] =\n\\pm2\\cdot\\theta[u,v]$, with the sign depending on the circular\npolarization handedness.\nAs this fast axis orientation pattern does not vary as a function of\nwavelength (with the possible exception of an inconsequential\noffset\/piston term), the geometric phase is strictly achromatic.\nA simple Fraunhofer propagation from the pupil $[u,v]$ to the focal\nplane $[x,y]$ shows that after splitting circular polarization states\nthe two ensuing coronagraphic PSFs are point-symmetric\n($PSF_{\\textrm{L}}[x,y] = PSF_{\\textrm{R}}[-x,-y]$), and therefore, in\nthe case of D-shaped dark holes, delivers complementary PSFs that\nfurnish instantaneous 3$60^\\circ$ search space around each star.\n\nVector-APP devices are produced by applying two breakthrough\nliquid-crystal techniques: any desired phase pattern is applied onto a\nsubstrate glass through a \\textit{direct-write\nprocedure}\\cite{directwrite} that applies the orientation pattern\n$\\theta[u,v]$ by locally polymerizing the alignment layer material in\nthe direction set by the controllable polarization of a scanning UV\nlaser.\nConsecutively, birefringent liquid-crystal layers are deposited on top\nof this alignment layer.\nSeveral self-aligning layers (``\\textit{Multi-Twist Retarders}'';\nMTR\\cite{MTR}) with predetermined parameters (birefringence dispersion,\nthickness, nematic twist) yield a linear retardance that is close to\nhalf-wave over the specified wavelength range.\nThe vAPP can become efficient over a large wavelength range (up to 
more\nthan an octave), while any phase pattern can be written with high\naccuracy.\n\n\\subsection{Prototyping and first on-sky results}\n\nThe first broad-band vAPP device was fully characterized in the lab at\nvisible wavelengths (500--900 nm)\\cite{vAPP-prototype}.\nThe main limitation of the contrast performance inside the dark hole was\nthe occurrence of leakage terms that produced a faint copy of the\nregular PSF on top of the coronagraphic PSFs.\nThese leakage terms are caused by small offsets to the half-wave\nretardance of the vAPP device, and offsets from quarter-wave retardance\nof the quarter-wave plate that, together with a Wollaston prism,\naccomplishes the (broad-band) circular polarization splitting.\nThis issue was resolved with the introduction of the\n``grating-vAPP''\\cite{grating-vAPP}, which implements the circular\npolarization splitting by superimposing a tilt (i.e.~a ``polarization\ngrating''\\cite{Packham2010}) pattern on top of the coronagraphic pupil\nphase pattern, which, by virtue of the properties of the geometric\nphase, very efficiently sends the coronagraphic PSFs into grating orders\n$\\pm$1, and leaves all the leakage terms in the zeroth order.\nThe grating-vAPP also greatly simplifies the optical configuration, as\nall the manipulation takes place within one single (flat) optic.\nThe coronagraphic PSFs are now subject to a lateral grating dispersion\nterm and so the grating-vAPP can only be used in combination with\nnarrow-band filters, although the wavelength range throughout which\nthese filters can be applied can still be very large.\n\nThe first grating-vAPP successfully demonstrated on-sky was installed at\nthe MagAO\/Clio instrument attached to the 6.5-m Magallan-Clay telescope\nin Chile\\cite{vAPP-MagAO} (\\fref{MagAO-vAPPs}a--c).\nThe device was designed and built to operate from 2--5 $\\mu$m, covering\nthe infrared atmospheric K, L and M-bands.\nThe first-light observations demonstrated excellent suppression of the\nstellar diffraction halo in the complementary dark holes (see\n\\fref{MagAO-vAPPs}c).\nDetailed analysis of the data demonstrated a 5-$\\sigma$ contrast for\npoint source detection of $\\sim$$10^{-5}$ at 2.5--7\n$\\lambda\/D$\\cite{vAPP-MagAO}.\nThe contrast performance is greatly enhanced by combining the two\ncomplementary dark holes through a simple rotation-subtraction procedure\nto further suppress the wind-driven starlight halo in the dark holes,\nwhich is caused by finite AO loop speed.\n\\Fref{MagAO-vAPPs}c shows the presence of the leakage term PSF in\nbetween the coronagraphic PSFs, which can be used as an astrometric and\nphotometric reference, in the (frequent) case that the coronagraphic PSF\ncores are saturated.\n\n\\begin{figure}[ht] \\includegraphics[width=\\textwidth]{MagAO-vAPPs}\n\n \\caption{Phase patterns, theoretical and on-sky PSFs (logarithmic\nscale) for the two vAPP devices installed at MagAO.\na) Theoretical phase pattern for a 180$^\\circ$ dark hole covering 2--7\n$\\lambda\/D$, b) the ensuing theoretical PSF, c) the on-sky PSFs at MagAO\nfor the star $\\eta$ Crucis at 3.9 $\\mu$m.\nd) Theoretical phase pattern for a 360$^\\circ$ dark hole covering 3--7\n$\\lambda\/D$, e) the ensuing theoretical PSF, f) the on-sky PSFs at MagAO\nfor the binary star $\\beta$ Centauri at 3.9 $\\mu$m.\nPhase pattern designs by Christoph Keller. 
Data processing by Gilles\nOtten\\cite{vAPP-MagAO}.}\n\n\\label{MagAO-vAPPs}\n\\end{figure}\n\n\\subsection{360 degree APP solutions}\n\nAs part of the algorithm exploration of the APP surface, a family of\nfunctions was found that showed 360 degrees of suppression around the\ncentral star.\nThese solutions have lower Strehl ratios for the star (typically\n20--40\\%) with larger IWA compared to the 180$^\\circ$ dark holes, and\nthese phase pattern solutions are pathological in nature, with rapid\nphase changes over small scales.\nThe advent of liquid-crystal patterning encouraged us to revisit these\n360$^\\circ$ solutions, and test them in the lab and on-sky.\n\\Fref{MagAO-vAPPs}d-f shows the phase pattern and ensuing PSFs for the\nexperimental vAPP device at MagAO.\nThe lower row of figures shows that the liquid-crystal manufacture\nsuccessfully reproduces the complex phase pattern, and this on-sky image\n(\\fref{MagAO-vAPPs}f) shows a fainter binary stellar companion to the\nright of the primary star's PSF.\n\n\\section{Future Directions}\n\nOur team is currently installing different vAPP coronagraphs at several\ninstruments at large telescopes around the world, and working on novel\ndesigns for the future extremely large telescopes.\nForeseeable future developments of the vector-APP as a separate optical\ncomponent, and as integral part of a high-contrast imaging system\ninclude:\n\n\\begin{itemize}\n\n\\item The combination of several grating layers in a\n``\\textit{double-grating-vAPP}'' to recombine the two coronagraphic PSFs\nwith 360$^\\circ$ dark holes to feed an integral-field unit while\nrejecting the leakage terms. \n\n\\item By prescribing a specific retardance profile as a function of\nwavelength, it is possible to build a \\textit{wavelength-selective vAPP}\ndevice, that operates as a regular vAPP coronagraph at the science\nwavelengths, and acts like a regular glass plate at the spectral range\nof a wavefront sensor behind it. \n\n\\item The pupil phase manipulation of the vAPP can be extended by\namplitude manipulation in the pupil to create complex\napodizers\\cite{Carlotti2013}, and by phase\/amplitude masks in the focal\nplane to yield \\textit{hybrid coronagraphy}\\cite{Ruane2015}.\n\n\\item As this technology is likely compatible with operation in space,\nit is opportune to characterize the performance of vAPP-like\ncoronagraphs at the \\textit{extreme contrast levels} ($\\sim$10$^{-9}$)\nof space-based high-contrast imaging.\n\n\\item To adapt the vAPP phase pattern to the observational needs, the\nobserving conditions, and segmented pupils with variable configurations,\nactive liquid-crystal devices will be developed to establish\n``\\textit{adaptive coronagraphy}''. Such a system can then deliver dark\nholes of various geometry and depth, depending on whether the observer\nis interested in detecting exoplanets or characterizing known targets.\n\n\\item As the vAPP relies on polarization splitting, it is possible to\ndesign an optimal system for \\textit{coronagraphic\npolarimetry}\\cite{vAPP-polarimetry}, particularly with the\n360$^\\circ$-designs.\n\n\\item The fact that the vAPP produces several PSFs for the same star at\nthe focal plane makes it an attractive option for implementing\n\\textit{focal-plane wavefront sensing}, for instance through\nphase-diversity techniques. 
Another promising approach involves the\nincorporation of an additional pupil phase pattern which generates pairs\nof PSF copies around the main PSFs, with each pair encoding a wavefront\nerror mode through an intensity difference\\cite{Wilby2016}.\n\n\\end{itemize}\n\n\\section{Acknowledgments}\n\nThe research of FS leading to these results has received funding from\nthe European Research Council under ERC Starting Grant agreement 678194\n(FALCONER).\n\n\\bibliographystyle{ws-rv-van}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nHeavy-flavour hadrons are suitable to probe the conditions of the high-energy-density Quark-Gluon Plasma (QGP) medium formed in ultra-relativistic heavy-ion collisions. Heavy quarks are mainly produced in hard scattering processes in the early stage of Pb--Pb collisions. The time-scale of their production ($\\Delta\\tau< 1\/2m_{c,b}\\sim 0.07~{\\rm fm}\/c~{\\rm for~charm~and}\\sim 0.02~{\\rm fm}\/c~{\\rm for~beauty}$) is, in general, shorter than the formation time of the QGP, $\\tau_{0}\\sim0.1-1$~fm\/$c$. During their propagation through the medium, heavy quarks interact with its constituents and lose energy. QCD energy loss is expected to occur via both inelastic (radiative energy loss via medium-induced gluon radiation)~\\cite{Baier1997265} and elastic (collisional energy loss) processes~\\cite{PhysRevD.44.1298}. The energy loss for quarks is expected to be smaller than for gluons, due to the smaller colour coupling factor of quarks with respect to gluons. In addition, the ``dead-cone effect'' should reduce small-angle gluon radiation for heavy quarks with moderate energy compared to their mass, thus further attenuating the effect of the medium~\\cite{PhysRevD.69.114003}. \n\nThe nuclear modification factor $R_{\\rm AA}(p_{\\rm T})=({\\rm d}N_{\\rm AA}\/{\\rm d}p_{\\rm T})\/(\\langle T_{\\rm AA}\\rangle\\cdot{\\rm d}\\sigma_{\\rm pp}\/{\\rm d}p_{\\rm T})$, where $\\langle T_{\\rm AA}\\rangle$ is the average nuclear overlap function for a given centrality class, is sensitive to the interaction of hard partons with the medium. At large $p_{\\rm T}$, $R_{\\rm AA}$ is expected to be mostly sensitive to the average energy loss of heavy-quarks in the hot and dense medium. The questions whether low-momentum heavy quarks can reach thermal equilibrium with the medium constituents and participate in the collective expansion of the system~\\cite{Batsouli200326,Greco2004202}, and whether heavy quarks can hadronise also via recombination with other quarks from the medium~\\cite{Greco2004202,Andronic200336} are still open. \nThese questions are addressed by studying $R_{\\rm AA}$ at low and intermediate $p_{\\rm T}$ and measuring the azimuthal anisotropy of heavy-flavour hadron production with respect to the reaction plane, defined by the beam axis and the impact parameter of the collision. The hadronisation mechanisms of the c quark are also investigated through the measurement of ${\\rm D}_{\\rm s}^+$ production in nucleus--nucleus collisions compared to that in pp collisions~\\cite{Anastasia}.\n\n\\section{D-meson reconstruction}\n\nThe decays ${\\rm D^0}\\rightarrow {\\rm K^-}\\pi^{+}$, ${\\rm D^+}\\rightarrow {\\rm K^-}\\pi^{+}\\pi^{+}$ and ${\\rm D^{*+}}\\rightarrow {\\rm D^0}\\pi^{+}$, and their charge conjugates, were reconstructed at mid-rapidity ($|y|<0.8$) in minimum-bias Pb--Pb collisions using the ALICE central barrel detectors. 
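Schematically, a ${\\rm D^0}$ candidate is formed by pairing an identified kaon track with a pion track and computing their invariant mass; candidates close to the nominal ${\\rm D^0}$ mass of about $1.865~{\\rm GeV\/}c^{2}$ are retained. The snippet below illustrates only the kinematic part of this step; it is not the ALICE analysis code, the function name is ours, and the mass values are the standard PDG ones.

\\begin{verbatim}
import numpy as np

M_K, M_PI = 0.493677, 0.139570     # charged kaon and pion masses in GeV

def kpi_invariant_mass(p_k, p_pi):
    # p_k, p_pi: 3-momenta of the kaon and pion tracks in GeV
    p_k, p_pi = np.asarray(p_k, float), np.asarray(p_pi, float)
    e_k  = np.sqrt(M_K**2  + p_k @ p_k)
    e_pi = np.sqrt(M_PI**2 + p_pi @ p_pi)
    p    = p_k + p_pi
    return np.sqrt((e_k + e_pi)**2 - p @ p)
\\end{verbatim}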
The D-meson selection was based on the precise reconstruction of the primary and secondary (decay) vertices, which is provided by the Inner Tracking System (ITS).\nCharged pions and kaons were identified using the information provided by the Time Projection Chamber (TPC) and the Time Of Flight (TOF) detectors~\\cite{Abelev:2014ffa}.\nThe reference proton--proton cross section at $\\sqrt{s_{\\rm NN}}=2.76$~TeV, needed to compute $R_{\\rm AA}$, was obtained by a pQCD-based energy scaling of the $p_{\\rm T}$-differential cross section measured at $\\sqrt{s}=7$~TeV~\\cite{ALICE:2011aa}. \n\n\\section{D-meson nuclear modification factor}\n\n\\begin{figure}[!t]\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.92\\textwidth]{2014-May-15-pPbAndPbPb.eps}\n\\caption{\\label{fig:DmesonRAARpA}Average $R_{\\rm pPb}$ of prompt ${\\rm D^0}$, ${\\rm D^+}$ and ${\\rm D^{*+}}$ mesons~\\cite{Abelev:2014hha} compared to the prompt D-meson $R_{\\rm AA}$ in Pb--Pb collisions in the 0--20\\% and 40--80\\% centrality classes~\\cite{ALICE:2012ab}.}\n\\end{minipage}\\hspace{1pc\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.9\\textwidth]{2014-May-16-AverageDMesonsRaa_075_ComparisonWithModels_150514.eps}\n\\caption{\\label{fig:DmesonRAACentral}Average prompt D-meson $R_{\\rm AA}$ in Pb--Pb collisions in the 0--7.5\\% centrality class compared to theoretical models including parton energy loss.}\n\\end{minipage} \n\\end{figure}\n\nA large suppression of the D-meson $R_{\\rm AA}$ (factor 3-5) was observed for $p_{\\rm T}>5$~GeV\/$c$ in central Pb--Pb collisions at $\\sqrt{s_{\\rm NN}}=2.76~{\\rm TeV}$ (Figure~\\ref{fig:DmesonRAARpA})~\\cite{ALICE:2012ab}. The comparison of $R_{\\rm AA}$ with the D-meson nuclear modification factor measured in p--Pb collisions at $\\sqrt{s_{\\rm NN}}=5.02~{\\rm TeV}$~\\cite{Abelev:2014hha} shows that the expected cold nuclear matter effects are smaller than the uncertainties on $R_{\\rm AA}$ for $p_{\\rm T}>3~{\\rm GeV\/}c$. Therefore, the suppression observed in central Pb--Pb collisions is predominantly induced by final-state effects due to the presence of a hot and dense partonic medium. Figure~\\ref{fig:DmesonRAACentral} shows the D-meson $R_{\\rm AA}$ measured in Pb--Pb collisions in the centrality class 0--7.5\\%, compared to theoretical models including charm interactions with the medium constituents.\nThe large suppression observed, e.g. of a factor 6 at $p_{\\rm T}=10~{\\rm GeV\/}c$, is described by the models that include radiative and collisional heavy-quark energy loss.\n\n\\begin{figure}[!t]\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.93\\textwidth]{2015-Jun-25-DmesPions_8to16_CompDjordjevic_200515.eps}\n\\caption{\\label{fig:DmesonPionRAA}$R_{\\rm AA}$ of D mesons~\\cite{Adam:2015nna} and charged pions~\\cite{Abelev2014196} as a function of centrality compared to a pQCD model including mass dependent radiative and collisional energy loss~\\cite{Djordjevic2014298}.}\n\\end{minipage}\\hspace{1pc\n\\begin{minipage}[t]{0.48\\textwidth}\n\\includegraphics[height=0.93\\textwidth]{2015-Jun-26-DmesNonPromptJpsi_8to16_CompDjordjevic_110615.eps}\n\\caption{\\label{fig:DmesonJPsiRAA}D~\\cite{Adam:2015nna} and non-prompt J\/$\\psi$ meson~\\cite{CMSnonprompt} $R_{\\rm AA}$ vs. 
centrality compared to a pQCD model including mass dependent radiative and collisional energy loss~\\cite{Djordjevic2014298}.}\n\\end{minipage} \n\\end{figure}\n\nFigures~\\ref{fig:DmesonPionRAA} and~\\ref{fig:DmesonJPsiRAA} show the D-meson $R_{\\rm AA}$ as a function of centrality (quantified in terms of the average number of participant nucleons in the Pb--Pb collision)~\\cite{Adam:2015nna} along with the $R_{\\rm AA}$ of charged pions~\\cite{Abelev2014196} and non-prompt J\/$\\psi$ mesons measured by the CMS Collaboration~\\cite{CMSnonprompt}, respectively. The focus here is on the study of the parton energy loss, thus, the results are presented for the high-$p_{\\rm T}$ interval $8-16~{\\rm GeV\/}c$ for the pions and D mesons and for $6.50$ and, thus, consistent with the expectations from collective flow.\n\n\\section{Conclusions}\nThe results obtained by ALICE using the data from the LHC Run-1 (2010--2013) indicate a strong suppression of the D-meson production in central Pb--Pb collisions for $p_{\\rm T}>3~{\\rm GeV\/}c$, which is mainly due the interactions of heavy quarks with the hot and dense medium. \nThe smaller $R_{\\rm AA}$ observed for D mesons with respect to non-prompt J\/$\\psi$ confirms the mass-dependent nature of the energy-loss mechanisms.\nThe non-zero $v_2$ of D mesons and the azimuthal dependence of the ${\\rm D^0}$ $R_{\\rm AA}$ indicate that, during the collective expansion of the medium, the interactions between its constituents and the charm quarks transfer to the latter information on the azimuthal anisotropy of the system. During the LHC Run-2 we expect to collect a data sample larger by a factor 5-10 with respect to Run-1, depending on collision centrality. It will be, thus, possible to measure the D-meson $R_{\\rm AA}$ and $v_2$ with a better precision and in an extended $p_{\\rm T}$ range.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{section:intro}\n\nMassive MIMO is one of the most relevant technologies in wireless communications \\cite{marzetta,rusek}. Among the key features of this technology are high spectral efficiency and improved link reliability, making it a key enabler for 5G. Massive MIMO exploits spatial diversity far beyond traditional MIMO systems by employing a large scale antenna array in the base-station (BS) with hundreds or possibly even thousands of elements. This large number of elements allows for unprecedented spatial resolution and high spectral efficiency, while providing simultaneous service to several users within the same time-frequency resource.\n\nDespite all the advantages of Massive MIMO, there are still challenges from an implementation point of view. One of the most critical ones is sending data from the BS antennas to the central processing unit (CPU) and vice-versa, and the high interconnection throughput it requires. In current set-ups, uplink detection algorithms based on zero-forcing (ZF) equalizer typically rely on a centralized architecture, shown in Fig. \\ref{fig:BS_centralized}, where baseband samples are collected in the CPU for obtaining channel state information (CSI) and further matrix inversion, which allows data estimation and further detection. The same argument is valid for downlink precoding. In order to avoid dedicated links between antenna modules and CPU, a shared bus is typically used to exchange this data. 
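A rough estimate already conveys the magnitude of the problem; the numbers below are illustrative assumptions, not taken from any particular testbed, and they count only raw I\/Q samples (real systems additionally move CSI, control and higher-layer data).

\\begin{verbatim}
M       = 100         # number of BS antennas (assumed)
f_s     = 30.72e6     # complex baseband samples per second per antenna (assumed)
bits_iq = 2 * 12      # bits per complex sample, 12-bit I and Q (assumed)

per_antenna = f_s * bits_iq        # ~0.74 Gbps
aggregate   = M * per_antenna      # ~74 Gbps for raw samples alone
\\end{verbatim}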
In case of LuMaMi testbed \\cite{lumami,lumami2}, the shared bus was reported to support an aggregated data-rate of 384Gps, which exceed base-station internal interface standards such as eCPRI \\cite{ecpri}. Additionally, the pin-count of integrated circuits (IC) limits the number of links the IC can handle simultaneously and thus the throughput. Due to this high data-rate, the power appears as another potential limitation. This combination of factors are considered as the main bottleneck in the system and a clear limitation for array scalability. In this paper we will address the inter-connection throughput limitation by decreasing its value per link and consequently reducing the impact of the other two (pin-count and power).\n\nThe inter-connection bottleneck has been noted in several previous studies on different architectures for Massive MIMO BSs \\cite{argos,Bertilsson,puglielli,lumami,cavallaro,li_jeon,jeon_li}. As a solution, most of these studies recommend moving to a decentralized approach where uplink estimation and downlink precoding can be performed locally in processing nodes close to the antennas (final detection can still be done in a CPU). However, to achieve that, CSI still needs to be collected in the CPU, where matrix inversion is performed \\cite{argos,Bertilsson,lumami}, imposing an overhead in data shuffling.\n\nThe CSI problem is addressed in \\cite{cavallaro}, where CSI is obtained and used only locally (not shared) for precoding and estimation, with performance close to MMSE. However, this architecture relies on the CPU for exchanging a certain amount of consensus information between the nodes, and this exchange negatively impacts the processing latency and throughput \\cite{li_jeon}, and therefore limits the scalability of this solution. In order to solve these problems, feedforward architectures for detection \\cite{jeon_li} and precoding \\cite{li_jeon} have been proposed recently, where the authors present a partially decentralized (PD) architecture for detection and precoding, which achieves the same results as linear methods (MRC, ZF, L-MMSE), and therefore becomes optimal when $M$ is large enough. Partial Gramian matrices from antennas are added up before arriving to a processing unit where the Gramian is inverted.\n\nIn \\cite{argos}, a flat-tree structure with daisy-chained nodes was presented. The authors propose conjugate beamforming as a fully decentralized method with the corresponding penalty in system capacity. In the same work it is also pointed out that by following this topology the latency was being severely compromised. The more detailed analysis on latency is thus needed to evaluate the algorithm.\n\nIn this article we propose a fully decentralized architecture and a recursive algorithm for Massive MIMO detection and precoding, which is able to achieve very low inter-connection data-rate without compromising latency.\nThe proposed algorithm is pipelined so that it runs in a distributed way at the antenna processing units, providing local vectors for estimation\/detection that approximate to the zero-forcing solution.\nWe make use of the Coordinate Descent (CD) algorithm, which is detailed in Section \\ref{section:CD}, to compute these vectors.\n\nThere is previous work based on CD, such as \\cite{li_CD}. 
The main difference is that the coordinate update in \\cite{li_CD} is done per user basis, i.e., a different user index is updated every iteration, while in our proposed method the coordinate update is done per antenna basis, updating all users at once.\n\nWe extend the work presented in \\cite{jesus} and \\cite{muris}, which are also based on decentralized daisy-chain architecture. The novelties of the present work compared to these two is as follows:\n\\begin{itemize}\n\\item A common strategy for downlink precoding and uplink equalization is presented, in contrast to \\cite{jesus} and \\cite{muris}, which only covers uplink and downlink separately.\n\\item The algorithm has been modified that serial processing is only needed when new CSIs are estimated. The corresponding filtering phase can be conducted in parallel to reduce latency, in contrast to \\cite{jesus}, where serial processing is always needed, which increases the latency.\n\\item A recommended step-size is provided, in contrast to \\cite{jesus}.\n\\item An analytical expression for resulting SINR and a complete performance analysis is presented in this paper.\n\\item Complexity analysis from a general point of view (not attached to any specific implementation) is provided, which includes: inter-connection data-rate, memory size and latency. In \\cite{jesus}, only inter-connection data-rates are analyzed.\n\\end{itemize}\n\nDecentralized architectures, as shown in Fig. \\ref{fig:BS_decentralized}, have several advantages compared to the centralized counterpart, as shown in Fig. \\ref{fig:BS_centralized}. For example, they overcome bottlenecks by finding a more equal distribution of the system requirements among the processing nodes of the system. Apart from this, data localization is a key characteristic of decentralized architectures. In uplink, the architecture allows data to be consumed as close as possible to where it is generated, minimizing the amount to transfer, and therefore saving throughput and energy. To achieve data localization, processing nodes need to be located near the antenna, where they perform processing tasks locally such as channel and data estimation. Local CSI is estimated and stored locally in each, without any need to share it with any other nodes in the system. This approach has been suggested previously in \\cite{argos,Bertilsson,jeon_li,cavallaro,li_jeon,puglielli}, and we take advantage of it in the proposed solution.\n\nThe remainder of the paper is organized as follows. In Section \\ref{section:background} the preliminaries are presented, comprising the system model for uplink and downlink, together with an introduction to linear processing and the ZF method. Section \\ref{section:central_vs_decentral} is dedicated to a comparison between the centralized and decentralized architectures and reasoning why the latter one is needed, together with an overview of the daisy-chain topology. The proposed algorithm, based on CD, is presented in Section \\ref{section:CD}. In \\ref{section:analysis} closed-form expressions of the SIR and SINR are provided for this algorithm, together with interconnection data-rates, latency and memory requirements of the proposed solution. Finally, Section \\ref{section:conclusions} summarizes the conclusions of this publication. \n\nNotation: In this paper, lowercase, bold lowercase and upper bold face\nletters stand for scalar, column vector and matrix, respectively. 
The\noperations $(.)^T$, $(.)^*$ and $(.)^H$ denote transpose, conjugate and conjugate transpose respectively.\nThe $i$-th element of vector $\\h$ is denoted as $h_{i}$. A vector $\\w$ and a matrix $\\A$ related to the $m$th antenna is denoted by $\\w_m$ and $\\A_{m}$, respectively. $A_{i,j}$ denotes element $(i,j)$ of $\\A$. $\\mathbf{A}_{m}(i,j)$ denotes element $(i,j)$ of the $m$-th matrix in the sequence $\\{\\A_{m}\\}$. The $k$th coordinate vector in $\\mathbb{R}^{K}$ is defined as $\\e_{k}$. Kronecker delta is represented as $\\delta_{ij}$. Probability density function and cumulative density function are denoted respectively as $f_{\\mathbf{X}}(x)$ and $F_{\\mathbf{X}}(x)$. Computational complexity is measured in terms of the number of complex-valued multiplications.\\\\\n\n\\section{Background}\n\\label{section:background}\n\\subsection{System model}\nFor uplink, we consider a scenario with $K$ single-antenna users transmitting to a BS with an antenna array with $M$ elements. Assuming time-frequency-based channel access, a Resource\nElement (RE) represents a unit in the time-frequency grid (also\nnamed subcarrier in OFDM) where the channel is expected to be approximately flat. Under this scenario, the input-output relation is\n\\begin{equation}\n\\yu = \\Hbf\\xu + \\nup,\n\\label{eq:ul_model}\n\\end{equation}\nwhere $\\yu$ is the $M \\times 1$ received vector, $\\xu$ is the transmitted user data vector ($K \\times 1$), $\\Hbf=[\\h_1 \\; \\h_2 \\, \\cdots \\, \\h_M]^{{T}}$ is the channel matrix ($M \\times K$) and $\\nup$ an $M \\times 1$ vector of white, zero-mean complex Gaussian noise. The entries of $\\Hbf$ are i.i.d. zero-mean circularly-symmetric complex-gaussian entries, with rows $\\h_{i} \\sim \\mathcal{CN}(0, \\I)$ for all $i$. The noise covariance at the receiver is $N_{0}\\I$. The average transmitted power is assumed to be equal across all users and we assume, without any loss of generality, a unit transmit power. 
SNR is defined as $\\frac{1}{N_{0}}$ and represents the average \"transmit\" signal-to-noise ratio.\n\nFor downlink, if Time Division Duplex (TDD) is assumed, then according to channel reciprocity principle and by employing reciprocity calibration techniques \\cite{joao}, it is assumed that within the same coherence time, the channel matrix is the same as in the uplink case, and the system model follows\n\\begin{equation}\n\\xdd = \\Hbf^{T}\\yd + \\nd,\n\\label{eq:dl_model}\n\\end{equation}\nfor a RE, where $\\yd$ is the $M \\times 1$ transmitted vector, $\\xdd$ is the received data vector by users ($K \\times 1$), and $\\nd$ samples of noise ($K \\times 1$).\n\nOnce the system model is established, we introduce the linear processing fundamentals used for downlink precoding and uplink estimation.\n\n\\begin{figure*}\\centering\n\t\\footnotesize\n\t\\subfloat[Centralized architecture]{\n\t\t\\psfrag{1}{$1$}\n\t\t\\psfrag{M}{$M$}\n\t\t\\psfrag{RPU}[][][0.7]{$\\mathrm{RPU}$}\n\t\t\\psfrag{RF}[][][0.7]{$\\mathrm{RF}$}\n\t\t\\psfrag{OFDM}[][][0.55]{$\\mathrm{OFDM}$}\n\t\t\\psfrag{CPU}{$\\mathrm{CPU}$}\n\t\t\\psfrag{CHEST}[][][0.6]{$\\mathrm{CHEST}$}\n\t\t\\psfrag{EST}[][][0.6]{$\\mathrm{EST}$}\n\t\t\\psfrag{DET}[][][0.6]{$\\mathrm{DET}$}\n\t\t\\psfrag{DEC}[][][0.6]{$\\mathrm{DEC}$}\n\t\t\\psfrag{Bs}[][][1.0]{$\\text{Base Station}$}\n\t\t\\psfrag{Rc}{$R_\\mathrm{c}$}\t\n\t\t\\includegraphics[width=0.35\\textwidth]{BS_centralized.eps}\n\t\t\\label{fig:BS_centralized}\n\t}\n\t\\subfloat[Decentralized architecture]{\n\t\t\\psfrag{1}{$1$}\n\t\t\\psfrag{M}{$M$}\n\t\t\\psfrag{RPU}[][][0.7]{$\\mathrm{RPU}$}\n\t\t\\psfrag{RF}[][][0.7]{$\\mathrm{RF}$}\n\t\t\\psfrag{OFDM}[][][0.55]{$\\mathrm{OFDM}$}\n\t\t\\psfrag{CPU}{$\\mathrm{CPU}$}\n\t\t\\psfrag{CHEST}[][][0.5]{$\\mathrm{CHEST}$}\n\t\t\\psfrag{EST}[][][0.6]{$\\mathrm{EST}$}\n\t\t\\psfrag{DET}[][][0.6]{$\\mathrm{DET}$}\n\t\t\\psfrag{DEC}[][][0.6]{$\\mathrm{DEC}$}\n\t\t\\psfrag{Bs}{$\\text{Base Station}$}\n\t\t\\psfrag{Rd}{$R_\\mathrm{d}$}\n\t\t\\includegraphics[width=0.35\\textwidth]{BS_decentralized.eps}\n\t\t\\label{fig:BS_decentralized}\n\t}\n\t\n\t\\caption{Comparison between base station receiver chain in centralized and fully decentralized architectures for Massive MIMO uplink. Antenna array with $M$ elements is divided into RPUs, each containing a set of antennas. (a): Centralized architecture. Each RPU has one link to transfer baseband samples to the CPU, where the rest of processing tasks are done. (b): Fully decentralized architecture for detection. Each RPU performs RF, ADC, OFDM, channel estimation (CHEST) and data estimation (EST) locally. Detection (DET) and decoding (DEC) is centralized. RPUs are connected to each other by uni-directional links. Only one RPU has a direct connection with the CPU. Proposed algorithms are executed in EST blocks in parallel mode. The points where the interconnection data-rate is estimated are marked by circles and the value is denoted by $\\mathrm{R}_{c}$ and $\\mathrm{R}_{d}$ for centralized and decentralized respectively. 
The goal is to have $\\mathrm{R}_{d} \\ll \\mathrm{R}_{c}$ without compromising performance and latency.}\n\t\\label{fig:comparison}\n\\end{figure*}\n\n\\subsection{Linear Processing}\nIn this article we focus on linear estimators and precoders, because they show close to optimal performance in Massive MIMO regime while requiring low complexity.\n\nA linear estimator provides $\\hatx^u$, which is an estimate of $\\xu$, by applying an equalizer filter matrix $\\Wbf$ to the vector of observations, $\\yu$:\n\\begin{equation}\n\\begin{split}\n\\hatxu &= \\Wbf^{H} \\yu\\\\\n&= \\sum_{m=1}^{M} \\w_{m}^{*} \\ymu,\\\\\n\\end{split}\n\\label{eq:linear_det}\n\\end{equation}\nwhere $\\Wbf = [\\w_{1} \\; \\w_{2} \\, \\cdots \\, \\w_{M}]^{T}$ is an $M \\times K$ matrix, $\\w_{m}$ is a $K \\times 1$ filter vector related to antenna $m$ and $\\ymu$ the observation at antenna $m$. As it can be seen the estimate $\\hatxu$ is computed by the sum of $M$ partial products. If $\\w_{m}$ is obtained and stored locally in the m$th$ antenna module, then the partial products can be computed with local data only, reducing the amount of data to exchange between nodes. From implementation point of view, the linear estimator relies on the accumulation of all partial results according to \\eqref{eq:linear_det}, which can be done centrally (fusion node) or distributed.\n\nFor downlink, the data vector intended to the users, $\\xd$, is precoded with matrix $\\Pbf$ as\n\\begin{equation}\n\\yd = \\Pbf\\xd,\\\\\n\\label{eq:linear_prec}\n\\end{equation}\nwhere $\\Pbf = [\\p_{1} \\; \\p_{2} \\, \\cdots \\, \\p_M]^{T}$ is an $M \\times K$ matrix, which fulfills a power constraint $\\|\\Pbf\\|_{F}^{2}\\leq P$, such that $P$ is the maximum transmitted power. Particularly for antenna $m$ we have\n\\begin{equation}\n\\ymd = \\p_{m}^T \\xd.\\\\\n\\label{eq:linear_prec_i}\n\\end{equation}\nSimilarly to uplink, if $\\p_{m}$ is obtained and stored locally at the m$th$ antenna module, then $\\ymd$ can be computed only with local data after $\\xd$ is broadcasted to all antennas.\n\nThe zero-forcing (ZF) equalizer, which is one type of linear estimator, constitutes a reference in our analysis. It is defined for uplink estimation as\n\\begin{equation}\n\\Wbf_\\text{ZF}^{H} = (\\Hbf^H \\Hbf)^{-1}\\Hbf^H,\n\\label{eq:W_ZF}\n\\end{equation}\nand $\\Pbf_\\text{ZF}=\\Wbf_\\text{ZF}^{*}$ for the downlink precoding.\n\nZF is able to completely cancel inter-user interference (IUI) and reach the promised spectral efficiency of Massive MIMO. However, as ZF is performed in a central processor, the Gramian matrix $\\Hbf^{H}\\Hbf$ needs to be collected and inverted, which increases the average inter-connection data-rate. The computational load is also increased due to the matrix inversion and posterior matrix multiplication during estimation phase. 
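\n\nAs a concrete illustration, the following minimal sketch (ours, for illustration only and assuming the i.i.d. channel model above) simulates the uplink model \\eqref{eq:ul_model}, builds the ZF equalizer \\eqref{eq:W_ZF} centrally, and accumulates the per-antenna partial products of \\eqref{eq:linear_det}:\n\\begin{verbatim}\nimport numpy as np\n\nM, K, N0 = 128, 16, 0.1      # antennas, users, noise power\nrng = np.random.default_rng(0)\n\n# i.i.d. CN(0,1) channel with rows h_m^T, QPSK user data\nH = (rng.standard_normal((M, K))\n     + 1j*rng.standard_normal((M, K))) * np.sqrt(0.5)\nx = ((rng.integers(0, 2, K)*2 - 1)\n     + 1j*(rng.integers(0, 2, K)*2 - 1)) * np.sqrt(0.5)\nn = (rng.standard_normal(M)\n     + 1j*rng.standard_normal(M)) * np.sqrt(N0*0.5)\ny = H @ x + n\n\n# Centralized ZF: Gramian inversion, W^H = (H^H H)^{-1} H^H\nW_zf = H @ np.linalg.inv(H.conj().T @ H)\n\n# Estimate as a sum of per-antenna partial products w_m^* y_m\nx_hat = np.zeros(K, dtype=complex)\nfor m in range(M):\n    x_hat += W_zf[m].conj() * y[m]   # computable locally at antenna m\n\nprint(np.allclose(x_hat, W_zf.conj().T @ y))   # True\n\\end{verbatim}\nThe Gramian inversion and the need to gather all CSI and observations in one place are precisely the centralized steps that motivate the decentralized alternative discussed in the following sections.\n\n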
Taking this into consideration, we look for methods with IUI-cancellation capabilities but with lower requirements for the system.\n\n\\subsection{Uplink \\& Downlink reciprocity}\nSubstituting \\eqref{eq:ul_model} into \\eqref{eq:linear_det} leads to\n\\begin{equation}\n\\begin{split}\n\\hatxu\n&= \\Eu \\xu + \\zu\\\\\n\\end{split}\n\\label{eq:Eu}\n\\end{equation}\nfor uplink, where $\\Eu = \\Wbf^{H} \\Hbf$ is a $K \\times K$ matrix containing the equivalent uplink channel with IUI information and $\\mathbf{z}^u$ is the $K \\times 1$ post-equalization noise term.\n\nOn the other hand, in the downlink, substituting \\eqref{eq:linear_prec} into \\eqref{eq:dl_model} leads to\n\\begin{equation}\n\\begin{split}\n\\xdd\n&= \\Ed \\xd + \\nd,\\\\\n\\end{split}\n\\label{eq:Ed}\n\\end{equation}\nwhere $\\Ed = \\Hbf^{T} \\Pbf$ is a $K \\times K$ matrix containing the equivalent downlink channel with IUI information. For the particular case that $\\Pbf^{T} = \\Wbf^{H}$, we have $\\Ed = \\Eu^{T}$, meaning that the two equivalent channels are the transpose of each other, and therefore experience the same IUI cancellation properties.\nFrom this result it is clear that once an equalization matrix $\\Wbf$ is obtained for uplink detection, it can also be applied for downlink precoding with no extra effort. It is interesting to note that, since $\\Pbf^{T} = \\Wbf^{H}$, it follows that $\\p_i = \\w_i^{*}$, so each antenna node can re-use the same vector for detection and precoding, ideally reducing complexity and storage needs by half. That said, in this article we focus mainly on uplink estimation, without limiting the results to the downlink. In the downlink there is, in addition, a constraint on the total transmitted power, which is addressed in Section \\ref{section:analysis}.\n\n\\section{Centralized vs Decentralized}\n\\label{section:central_vs_decentral}\nIn this section we describe the differences between centralized and decentralized Massive MIMO processing and the justification to study the latter one.\n\nUplink estimation based on ZF equalization has two components that should be multiplied: $\\Wbf_\\text{ZF}$ and $\\yu$. The former includes a $K \\times K$ matrix inversion, which typically is done in one place, and for that, CSI from all antennas needs to be collected. Apart from that, the observation data vector, $\\yu$, is also needed for estimation. This vector is $M \\times 1$, increasing considerably the amount of data to transfer and limiting the scalability of the array. Based on those considerations, we can think of two possible architectures for the Massive MIMO base-station: centralized and decentralized.\n\nFig. \\ref{fig:BS_centralized} presents an architecture based on a central baseband processing node, where baseband samples are exchanged between Remote Processing Units (RPUs) and the CPU. Each antenna is connected to receiver and transmitter circuitry, which involves: RF front-end, ADC\/DAC and OFDM processing. For simplicity, only uplink is represented in this figure. We can identify some common tasks among these processing elements across different antennas, such as: time synchronization, automatic gain control, local oscillator generation, carrier frequency and sampling rate offset estimation, phase noise compensation, among others. Therefore, a few antennas (together with corresponding receivers\/transmitters) can be grouped into one RPU for efficient implementation of such common tasks. 
However, for simplicity, in this work we only analyze the case where each RPU manages one antenna.\n\nDedicated physical links would easily exceed the number of I\/O connections in current standards, in addition to increasing the cost of adding a new RPU when needed. To overcome this, we consider that RPUs are connected to the CPU node by a shared bus as shown in Fig. \\ref{fig:BS_centralized}. \n\nEven though this approach can support ZF detection (and precoding) from a functionality point of view, from an implementation point of view it requires a very high inter-connection data-rate in the bus and at the input of the CPU ($R_\\mathrm{c}$ in the figure). As an example, consider a 5G NR-based system with 128 antennas and OFDM as an access technology; then the average data-rate can be calculated as\n\\begin{equation}\nR_{\\mathrm{c}} = \\frac{2w M N_{\\mathrm{u}}}{T_{\\mathrm{OFDM}}},\n\\label{eq:R_central}\n\\end{equation}\nwhere $N_{\\mathrm{u}}$ is the number of active subcarriers, $w$ is the bit-width for the baseband samples (real\/imaginary parts) after FFT, and $T_{\\mathrm{OFDM}}$ is the OFDM symbol duration. For $N_{\\mathrm{u}}=3300$, $w=12$ and $T_{\\mathrm{OFDM}}=1\/120\\mathrm{kHz}$ then $R_{\\mathrm{c}}=1.2 \\mathrm{Tbps}$. This result clearly exceeds the data-rate limit of common interfaces, such as eCPRI \\cite{ecpri} and PCIe, and furthermore, it is proportional to $M$, which clearly limits the scalability of the system.\n\nAs a solution to this limitation, we propose the fully-decentralized architecture for baseband detection and precoding shown in Figure \\ref{fig:BS_decentralized}. We can observe that channel estimation and estimation\/precoding have been moved from the CPU to the RPUs, with detection and decoding as the remaining tasks in the CPU from a physical layer point of view. The benefits of this move are manifold. Firstly, the inter-connection data-rate scales with $K$ instead of $M$. Secondly, the high complexity requirement in the CPU for channel estimation and data estimation\/precoding is now equally distributed among RPUs, which highly simplifies the implementation and overcomes the computational bottleneck. Additionally, CSI is obtained and consumed locally in each RPU without the need for exchange, with the consequent reduction in the required inter-connection data-rate. In addition to the advantages already mentioned, which are common to other decentralized schemes, the architecture presented in this work achieves an unprecedentedly low inter-connection data-rate through the direct connection of RPUs forming a daisy-chain, where the CPU is at one of the ends.\n\nIn the daisy-chain, depicted in Fig. \\ref{fig:BS_decentralized}, nodes are connected serially to each other by a dedicated connection. All elements in the chain work simultaneously in pipeline mode, processing and transmitting\/receiving to\/from the respective next\/previous neighbor in the chain. The data is passed through the nodes sequentially, being updated at every RPU. There is a unique connection to the root node, where the last estimate is transmitted and then detected by the CPU. An important remark is that the average inter-connection data-rate between nodes is the same regardless of the number of elements in the chain. This topology was proposed in \\cite{argos} and further studied in \\cite{jesus} and \\cite{muris} with specific algorithms designed for this topology.\n\nWhen the decentralized architecture in Fig. 
\\ref{fig:BS_decentralized} needs to be deployed, antennas can be collocated in the same physical place or distributed over a large area. These antennas and therefore their corresponding RPUs can behave as nodes in the chain, whilst the CPU remains as the root node. There may be multiple chains in a network. The selection of the RPUs to form a chain may depend on the users they are serving. RPUs which serve the same set of users should be in the same chain, so they can work jointly to cancel IUI. This concept fits very well with the distributed wireless communication system \\cite{DWCS}, the recent cell-free Massive MIMO concept \\cite{cell-free} and the promising large intelligent surface \\cite{lis}.\n\nDecentralized architectures, such as the one shown in Fig. \\ref{fig:BS_decentralized}, require other type of algorithms compared to Fig. \\ref{fig:BS_centralized}. In the next section we introduce our proposed algorithm, which is a method for obtaining $\\w_{m}$ and $\\p_{m}$ as the equalization and precoding vectors, respectively.\n\n\\section{Coordinate Descent}\n\\label{section:CD}\n\nOur proposed algorithm is an iterative algorithm based on the gradient descent (GD) optimization, in which the gradient information is approximated with a set of observations in every step. From this, each antenna can obtain its own equalization\/precoding vector sequentially in a coordinate descent approach. The main advantage of this method is that it does not require access to all observations at each iteration, becoming an ideal choice for large scale distributed systems.\n\n\\subsection{Preliminaries}\nFrom \\eqref{eq:Eu} we know that in the non-IUI case, $\\Eu$ is a diagonal matrix, which is the case when zero-forcing (ZF) is applied. In the general case, IUI is not zero and as consequence $\\Eu$ contains non-zero entries outside the main diagonal.\n\nThe objective is to find a matrix $\\Wbf$, which cancels IUI to a high extent ($\\Eu \\approx \\I$), while fulfilling the following conditions:\n\\begin{itemize}\n\t\\item Uses daisy-chain as a base topology, so we exploit the advantages seen in Section \\ref{section:central_vs_decentral}.\n\t\\item No exchange of CSI between nodes. Only local CSI. \n\t\\item Limited amount of data to pass between antenna nodes. It should depend on $K$ instead of $M$, to enable scalability.\n\t\\item Limit the dependency on the central processing unit in order to reduce data transfer, processing and memory requirements of that unit. One consequence of this is to avoid matrix inversion in the central unit.\n\\end{itemize}\n\n\\subsection{Algorithm formulation}\nThe algorithm setup is that one intends to solve the unconstrained Least Squares (LS) problem in the uplink\n\\begin{equation} \\label{eq:CD_R} \\hatx = \\argmin_{\\mathbf{x}} \\|\\mathbf{y}-\\mathbf{H}\\mathbf{x}\\|^2\n\\end{equation}\nvia a GD approach. The gradient of \\eqref{eq:CD_R} equals $\\nabla_{\\mathbf{x}}=\\Hbf^{H}\\Hbf\\mathbf{x}-\\Hbf^{H}\\mathbf{y}$.\nEven though $\\Hbf^{H}\\Hbf$ and $\\Hbf^{H}\\mathbf{y}$ can be formulated in a decentralized way, the selection of $\\mathbf{x}$ and the product with $\\Hbf^{H}\\Hbf$ is preferably done in a central processing unit to limit latency and inter-connection data-rates. 
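\n\nFor reference, such a centralized GD iteration could be sketched as follows (an illustrative sketch of ours; the step size \\texttt{tau} is a hypothetical choice and is not prescribed in this work):\n\\begin{verbatim}\nimport numpy as np\n\ndef gd_ls(H, y, n_steps=50):\n    # Plain gradient descent on ||y - H x||^2 with gradient H^H H x - H^H y.\n    # Forming the Gramian and iterating on it requires a central processing\n    # unit with access to the CSI and observations of all antennas.\n    G = H.conj().T @ H\n    b = H.conj().T @ y\n    tau = 1.0 \/ np.linalg.norm(G, 2)    # step size: inverse of largest eigenvalue\n    x = np.zeros(H.shape[1], dtype=complex)\n    for _ in range(n_steps):\n        x = x - tau * (G @ x - b)       # x <- x - tau * gradient\n    return x\n\\end{verbatim}\n\n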
Following the fully-decentralized approach and the intention to off-load the equalization\/precoding computation from the CPU to the RPUs, we propose a different approach.\n\nThe proposed method can be derived as an approximate version of GD that can be operated in a decentralized architecture with minimum CPU intervention. It does so by computing, at each antenna, as much as possible of $\\nabla_{\\mathbf{x}}$ with the information available at the antenna. Then the estimate $\\hatx$ is updated by using a scaled version of the \"local\" gradient, and the antenna passes the updated estimate on to the next antenna.\n\nThe procedure described above can formally be stated as\n\\begin{equation}\n\\begin{split}\n\\varepsilon_m &= y_{m} - \\h_{m}^{T} \\hatx_{m-1} \\\\\n\\hatx_{m} &= \\hatx_{m-1} + \\mu_m \\h_{m}^{*} \\varepsilon_m,\n\\end{split}\n\\label{eq:CD_sm}\n\\end{equation}\nfor antenna $m$, where $\\mu_m$ is a scalar step-size. The update rule in \\eqref{eq:CD_sm} corresponds to the Kaczmarz method \\cite{kaczmarz}, whose step-size is, according to \\cite{censor},\n\\begin{equation}\n\\mu_{m} = \\frac{\\mu}{\\|\\h_{m}\\|^2},\n\\label{eq:mu_m}\n\\end{equation}\nwhere $\\mu \\in \\mathbb{R}$ is a relaxation parameter. In the case of consistent systems, that is $\\mathbf{y}=\\Hbf \\mathbf{x}$ (if the SNR is high enough or there is no noise), $\\mu=1$ is optimum and the method converges to the unique solution. Otherwise, when the system is inconsistent, $\\mu$ gives us an extra degree of freedom, which allows us to outperform the $\\mu=1$ case, as we will see in Section \\ref{section:analysis}.\n\nAfter $M$ iterations of \\eqref{eq:CD_sm} we have\n\\begin{equation}\n\\begin{split}\n\\hatx_{M} &= \\prod_{m=1}^{M} \\left( \\I_K - \\mu_{m} \\h_{m}^{*} \\h_{m}^{T} \\right) \\hatx_0 \\\\\n&+ \\sum_{m=1}^{M} \\prod_{i=m+1}^{M} \\left(\\I_K - \\mu_{i} \\h_{i}^{*} \\h_{i}^{T} \\right) \\mu_{m} \\h_{m}^{*} y_m.\n\\nonumber\n\\end{split}\n\\end{equation}\nIf we assume $\\hatx_0 = \\mathbf{0}_{K\\times1}$ \\footnote[1]{If prior information of $\\mathbf{x}$ is available, it can be used here.}, then it is possible to express $\\hatx_M$ as a linear combination of $\\mathbf{y}$, in the same way as \\eqref{eq:linear_det}, and identify\n$\\w_m$ (the equalization vector associated with antenna $m$) as\n\\begin{equation}\n\\w_m = \\left[ \\prod_{i=m+1}^{M} \\left(\\I_K - \\mu_{i} \\h_{i} \\h_{i}^{H} \\right) \\right] \\mu_{m} \\h_{m}.\n\\label{eq:CD_W}\n\\end{equation}\nIf \\eqref{eq:CD_sm} is applied in reverse antenna order ($m=M \\cdots 1$), then we obtain a different estimation. The expression for $\\w_{m}$ when using the alternative approach is\n\\begin{equation}\n\\w_m = \\mu_{m} \\A_{m-1} \\h_{m},\n\\label{eq:CD_W2}\n\\end{equation}\nwhere matrix $\\A_m$ is defined as\n\\begin{equation}\n\\A_m = \\prod_{i=1}^{m} \\left(\\I_K - \\mu_{i} \\h_{i} \\h_{i}^{H} \\right).\n\\label{eq:CD_A_impl}\n\\end{equation}\n\nIt is important to remark that the two approaches lead to different $\\w_{m}$ sequences; however, the overall performance should be the same if the CSI at all antennas shows the same statistical properties (stationarity across antennas).\n\n\n\\subsection{Algorithm design and pseudocode}\n\\label{section:alg}\nIn this subsection we derive an equivalent and more attractive form for calculating the weights in \\eqref{eq:CD_W2} in a simple, low-complexity way, suitable for hardware implementation.\n\nThe algorithm description is shown in Algorithm \\ref{algo:CD}. 
The vector $\\w_{m}$ is computed in each antenna, while the matrix $\\A_{m-1}$ gets updated according to the recursive rule: $\\A_{m} = \\A_{m-1} - \\w_{m} \\h_{m}^{H}$. Then, $\\w_{m}$ is stored for the detection and precoding phase, and $\\A_{m}$ is passed to the next antenna node for further processing.\n\\IncMargin{1em}\n\\begin{algorithm}[ht]\n\t\\SetKwInOut{Input}{Input}\n\t\\SetKwInOut{Output}{Output}\n\t\\SetKwInOut{Preprocessing}{Preprocessing}\n\t\\SetKwInOut{Init}{Init}\n\t\\Input{ $\\Hbf = \\left[ \\h_{1}, \\h_{2} \\cdots \\h_{M} \\right]^{T}$}\n\t\\Preprocessing{}\n\t$\\A_0 = \\I_K$\\\\\n\t\\For{$m = 1,2,...,M$}{\n\t\t$\\w_m = \\mu_{m} \\A_{m-1} \\h_m$\\\\\n\t\t$\\A_m = \\A_{m-1} - \\w_{m} \\h_{m}^{H}$\n\t}\n\t\\caption{Proposed algorithm}\n\t\\label{algo:CD}\n\t\\Output{$\\Wbf = \\left[ \\w_{1}, \\w_{2} \\cdots \\w_{M} \\right]^{T}$}\n\t\n\\end{algorithm}\\DecMargin{1em}\n\nFrom Algorithm \\ref{algo:CD} we can observe that after $M$ steps we achieve the following expression: $\\A_M = \\I_{K} - \\Eu^{*}$. Then, if perfect IUI cancellation is achieved, $\\Eu=\\I_{K}$ and therefore $\\A_{M} = \\mathbf{0}$. As a consequence we can take $\\|\\A_{m}\\|^{2}$ as a metric for residual IUI. The interpretation of Algorithm \\ref{algo:CD} is as follows. $\\|\\A_{m}\\|$ is reduced by subtracting from $\\A_{m}$ a rank-1 approximation to itself. In order to achieve that, $\\A_{m}$ is projected onto $\\h_{m}$ to obtain $\\w_{m}$, therefore $\\w_{m} \\h^{H}_m$ is the best rank-1 approximation to $\\A_{m}$, having $\\h_{m}$ as vector base. Ideally, if the channel is rich enough, vectors $\\h_{m}$ are weakly correlated and assuming $M$ is large (Massive MIMO scenario) then IUI can be canceled out to a high extent \\footnote[2]{The selection of Coordinate Descent as our method's name is because we consider the vectors $\\{\\w_i\\}$ as the outcome of the method, and these can be seen as coordinates of a cost function to minimize. Such optimization problem can be written as: $\\w_{m} = \\argmin_{z} f(\\w_{1},\\cdots,\\w_{m-1},\\mathbf{z},\\w_{m+1},\\cdots,\\w_{M})$, where $f = \\|\\A_{m-1} - \\mathbf{z} \\h_{m}^{H}\\|_{F}^{2}$, and $\\A_{m-1} = \\I_{K}-\\sum_{i \\neq m} \\w_{i} \\h_{i}^{H}$. Each antenna solves this optimization problem in a sequential fashion, obtaining one coordinate as a result, while keeping the rest fixed. This is valid for single and multiple iterations to the array, which is presented in the next subsection.}.\n\nThe role of step-size $\\mu$ is to control how much IUI is removed at every iteration. High values will tend to reduce IUI faster at the beginning when the amount to remove is high, but will lead to oscillating or unstable residual IUI after some iterations because the steps are too big, so the introduced error dominates. Low values for $\\mu$ will ensure convergence of the algorithm and a relatively good IUI cancellation at the expense of a slower convergence.\n\n\\subsection{Multiple-iterations along the array}\n\\label{sub:multiple-iter}\nRecalling from Section \\ref{section:alg}, Algorithm \\ref{algo:CD} reduces the norm of $\\A$ at each step, providing as a result $\\A_{M}$, which contains the residual IUI after the algorithm is run along the array. It is possible to expand the algorithm and apply $\\A_{M}$ as initial value, $\\A_{0}$ for a new iteration through the array, with the intention of decreasing even more the norm of $\\A$. 
The pseudocode of the expanded version is shown in Algorithm \\ref{algo:CD_multiple}, with $n_{iter}$ iterations, and as it can be seen, an increment of $\\w_{m}$ is computed at each iteration. From topology point of view, it requires an extra connection between last and first RPUs, closing the daisy-chain and becoming a ring. It is expected to improve the performance at the expense of increasing the latency.\n\\IncMargin{1em}\n\\begin{algorithm}[ht]\n\t\\SetKwInOut{Input}{Input}\n\t\\SetKwInOut{Output}{Output}\n\t\\SetKwInOut{Preprocessing}{Preprocessing}\n\t\\SetKwInOut{Init}{Init}\n\t\\Input{ $\\Hbf = \\left[ \\h_{1}, \\h_{2} \\cdots \\h_{M} \\right]^{T}$}\n\t\\Preprocessing{}\n\t$\\A_{0,1} = \\I_K$\\\\\n\t$\\w_{m,1} = \\mathbf{0},m=1,...,M$\\\\\n\t\\For{$n = 1,2,...,n_{iter}$}{\n\t\t\\For{$m = 1,2,...,M$}{\n\t\t\t$\\w_{m,n} = \\w_{m,n-1} + \\mu_{m} \\A_{m-1,n} \\h_m$\\\\\n\t\t\t$\\A_{m,n} = \\A_{m-1,n} - \\w_{m,n} \\h_{m}^{H}$\\\\\n\t\t}\n\t\t$\\A_{0,n+1} = \\A_{M,n}$\n\t}\n\t\\caption{Proposed algorithm multiple iterations}\n\t\\label{algo:CD_multiple}\n\t\\Output{$\\Wbf = \\left[ \\w_{1,n_{iter}}, \\w_{2,n_{iter}} \\cdots \\w_{M,n_{iter}} \\right]^{T}$}\n\t\n\t\\end{algorithm}\\DecMargin{1em}\n\n\n\\section{Analysis}\n\\label{section:analysis}\nIn this section we present an analysis of the proposed solution. The main points are:\n\\begin{itemize}\n\t\\item Performance analysis of the presented solution based on SIR, SINR and BER evaluation, and comparison with other methods. \n\t\\item Complexity and timing analysis, including computational complexity, inter-connection throughput, memory requirement and latency.\n\\end{itemize}\n\nAs was commented in the Introduction, the analysis presented in this section is quite general and not dependent on any specific hardware implementation. The idea is to provide high level guidelines on algorithm-hardware trade-offs, system parameter selections, and hardware architectures. A more specific analysis can be performed when one has decided the dedicated implementation strategy.\n\n\\subsection{Performance}\n\\label{section:performance}\nIn this subsection we obtain and present different metrics to evaluate and compare the performance of the proposed algorithm. The analysis we present is divided as follows: Derivation of SIR and SINR closed form expressions, bit-error-rate (BER) analysis of the proposed algorithm based on ideal and measured channels and comparison with other methods, such as MF and ZF. The performance analysis that follows is focused on uplink, but it can be extended to downlink.\n\n\\subsubsection{SIR \\& SINR}\nSpecifically for user $k$, \\eqref{eq:Eu} is reduced to\n\\begin{equation}\n\\hat{x}^{u}_{k} = E_{k,k} x^{u}_{k} + \\sum_{i=1,i \\neq k}^{K} E_{k,i} x^{u}_{i} + z_{k},\n\\nonumber\n\\end{equation}\nwhere the first term represents the desired value to estimate (scaled version), the second one is the interference from other users and the third one is due to noise. 
The signal-to-interference ratio (SIR) for user $k$ is defined as\n\\begin{equation}\n\\text{SIR}_{k} = \\frac{\\E|E_{k,k}|^{2}}{ \\E \\left\\lbrace \\sum_{i=1,i \\neq k}^{K} |E_{k,i}|^2 \\right\\rbrace}.\n\\label{eq:SIR}\n\\end{equation}\n\nAnd for the signal-to-interference-and-noise ratio (SINR) we have\n\\begin{equation}\n\\text{SINR}_{k} = \\frac{\\E|E_{k,k}|^{2}}{\\E \\left\\lbrace \\sum_{i=1,i \\neq k}^{K} |E_{k,i}|^2 \\right\\rbrace + \\E|z_{k}|^2 }.\n\\label{eq:SINR}\n\\end{equation}\n\nA list of parameters and their corresponding values are presented in Table \\ref{table:parameters}, which are used in the following propositions.\n\n\\begin{table}[h!]\n\t\\begin{center}\n\t\t\\caption{Parameters}\n\t\t\\label{table:parameters}\n\t\t\\begin{tabular}{llr}\n\t\t\t\\cline{1-2}\n\t\t\tParameter & Description \\\\\n\t\t\t\\hline\n\t\t\t$\\alpha$ & $1-\\frac{2 \\mu}{K} +\\frac{\\mu^2}{K(K+1)}$\\\\\n\t\t\t\\hline\n\t\t\t$\\beta$ & $\\frac{\\mu^2}{K(K+1)}$ \\\\\n\t\t\t\\hline\n\t\t\t$\\nu$ & $1 - \\frac{\\mu}{K}$ \\\\\n\t\t\t\\hline\n\t\t\t$\\epsilon$ & $1 - \\frac{2\\mu}{K} + \\frac{\\mu^2}{K}$\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\nFrom \\eqref{eq:SIR} it is possible to obtain a closed-form expression of the SIR as follows:\n\\begin{theorem}\n\t\\label{prop:SIR}\n\tWith perfect $\\mathrm{CSI}$ and channel model as defined in Section \\ref{section:background}, $\\mathrm{SIR}$ per user in uplink with $\\mathrm{CD}$ algorithm for estimation is\n\n\t\\begin{equation}\n\t\\mathrm{SIR} = \\frac{1 - 2\\nu^{M} + \\alpha^{M} \\left(1-\\frac{1}{K}\\right) + \\epsilon^M \\frac{1}{K} }{\\left(1-\\frac{1}{K}\\right) \\cdot \\left( \\epsilon^{M} - \\alpha^{M} \\right)},\n\t\\end{equation}\n\n\twhich can be simplified in case of relatively large $M$, $K$, and $\\frac{M}{K}$, which is the case of Massive MIMO, as\n\n\t\\begin{equation}\n\t\\mathrm{SIR} \\approx e^{\\mu(2-\\mu)\\frac{M}{K}}.\n\t\\label{eq:SIR_approx}\n\t\\end{equation}\n\n\\end{theorem}\n\\begin{proof}\n\tSee Appendix-\\ref{proof:SIR}.\n\\end{proof}\n\n\\begin{figure*}\\centering\n\t\\subfloat[SINR vs $\\mu$ under different SNR. M=128 and K=16.]{\n\t\t\\includegraphics[width=0.48\\textwidth]{SINR_vs_mu_M128_K16}\n\t\t\\label{fig:SINR_vs_mu_M128_K16}\n\t}\n\t\\subfloat[SINR vs $\\mu$ under different channels. M=128 and K=5. SNR=0dB.]{\n\t\t\\includegraphics[width=0.48\\textwidth]{SINR_vs_mu_sim}\n\t\t\\label{fig:SINR_vs_mu_sim}\n\t}\\\\[-2ex]\n\t\\caption{ }\n\t\\label{fig:SINR_vs_mu}\n\t\\vspace*{-4mm}\n\\end{figure*}\n\nThe maximum value of \\eqref{eq:SIR_approx} is achieved for $\\mu=1$ and the SIR value only depends on the ratio $\\frac{M}{K}$ in an exponential fashion, showing how fast the IUI is canceled as $M$ grows, and therefore ZF is approached. 
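\n\nBoth the exact expression and the approximation \\eqref{eq:SIR_approx} are straightforward to evaluate numerically; a small sketch of ours (using the parameter definitions of Table \\ref{table:parameters}):\n\\begin{verbatim}\nimport numpy as np\n\ndef sir_cd_db(M, K, mu=1.0):\n    # Closed-form SIR of Theorem 1 and its large-system approximation, in dB\n    alpha = 1 - 2*mu\/K + mu**2\/(K*(K + 1))\n    nu = 1 - mu\/K\n    eps = 1 - 2*mu\/K + mu**2\/K\n    num = 1 - 2*nu**M + alpha**M*(1 - 1\/K) + eps**M\/K\n    den = (1 - 1\/K)*(eps**M - alpha**M)\n    approx = np.exp(mu*(2 - mu)*M\/K)\n    return 10*np.log10(num\/den), 10*np.log10(approx)\n\nprint(sir_cd_db(M=120, K=12))   # both values lie well above 40 dB for M\/K = 10\n\\end{verbatim}\n\n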
As an example, for a target value of SIR = 40dB, $\\frac{M}{K}=10$ meets the requirement, which is a typical ratio in Massive MIMO regime.\n\nRegarding SINR, it can be derived based on previous results as\n\\begin{theorem}\n\t\\label{prop:SINR}\n\tWith perfect $\\mathrm{CSI}$ and channel model as defined in Section \\ref{section:background}, $\\mathrm{SINR}$ per user in uplink with $\\mathrm{CD}$ algorithm for estimation is given by\n\n\t\\begin{equation}\n\t\\begin{split}\n\t\\mathrm{SINR} = \\frac{1 - 2\\nu^{M} + \\alpha^{M} \\left(1-\\frac{1}{K}\\right) + \\epsilon^M \\frac{1}{K} }{\\left(1-\\frac{1}{K}\\right) \\left(\\epsilon^{M} - \\alpha^{M} \\right) + \\frac{N_{0}}{K-1} \\left( \\frac{\\mu}{2-\\mu}\\right) (1-\\epsilon^{M})},\n\t\\end{split}\n\t\\label{eq:SINR_CD}\n\t\\end{equation}\n\n\twhich can be simplified in case of relatively large $M$, $K$, and $\\frac{M}{K}$, which is the case of Massive MIMO, as\n\n\t\\begin{equation}\n\t\\mathrm{SINR} \\approx \\left[ e^{-\\mu(2-\\mu)\\frac{M}{K}} + \\frac{1}{K \\cdot \\mathrm{SNR}} \\left( \\frac{\\mu}{2-\\mu}\\right) \\right]^{-1}.\n\t\\label{eq:SINR_CD_limit}\n\t\\end{equation}\n\n\\end{theorem}\n\n\\begin{proof}\n\tSee Appendix-\\ref{proof:SINR}.\n\\end{proof}\n\nThe first term in \\eqref{eq:SINR_CD_limit} represents SIR containing IUI information, while the second one takes into account the post-equalized noise power. For high SNR, the first term is dominant and $\\mathrm{SINR} \\to e^{\\mu(2-\\mu)\\frac{M}{K}}$, which depends on $\\frac{M}{K}$ and $\\mu$, but not on $\\mathrm{SNR}$. On the other hand, when SNR is low, the second term is dominant and $\\mathrm{SINR} \\to \\mathrm{SNR} \\cdot K \\left(\\frac{2 - \\mu}{\\mu}\\right)$ as $M$ grows, which grows linearly with $\\mathrm{SNR}$ and $K$ (up to certain value). This linear dependency on $K$ is due to the post-equalization noise is equally distributed among the users. While the noise power per antenna remains constant, the portion assigned to each user decays as $K$ grows, so the SINR per user grows linearly. However, as $K$ increases the IUI does so (first term in \\eqref{eq:SINR_CD_limit} grows), and both effects cancel out at some point, being IUI dominant afterwards, with the corresponding decay of SINR.\n\nThe optimal value of $\\mu$, denoted as $\\mu^{*}$, depends on $M$, $K$, and the specific channel. For the i.i.d. case, defined in Section \\ref{section:background}, it is possible to obtain $\\mu^{*}$ by numerical optimization over \\eqref{eq:SINR_CD}. An approximate value, denoted as $\\mu_{0}$, is presented as follows.\n\\begin{theorem}\n\t\\label{prop:mu_init}\n\tA recommended value for $\\mu_{0}$, in the vicinity of $\\mu^{*}$, under CD and i.i.d. 
channel as defined in Section \\ref{section:background}, is given by\n\n\t\\begin{equation}\n\t\\mu_{0} = \\frac{1}{2} \\frac{K}{M} \\log (4 M \\cdot \\mathrm{SNR} ).\n\t\\label{eq:mu_init}\n\t\\end{equation}\n\n\\end{theorem}\n\n\\begin{proof}\n\tSee Appendix-\\ref{proof:mu_init}.\n\\end{proof}\n\nAs a side result, from the analysis performed in this section, we can extract interesting properties of the matrix $\\Wbf$, such as the following one:\n\\begin{theorem}\n\t\\label{prop:W_power}\n\tThe equalization matrix $\\Wbf$ resulting from the $\\mathrm{CD}$ algorithm satisfies the following property for $\\mu \\in [0,2)$\n\n\t\\begin{equation}\n\t\\E \\| \\Wbf \\|^{2}_{F} = \\frac{K}{K-1} \\cdot \\frac{\\mu}{2-\\mu} \\cdot \\left( 1-\\epsilon^{M} \\right).\n\t\\label{eq:W_power}\n\t\\end{equation}\n\n\\end{theorem}\n\\begin{proof}\n\tSee Appendix-\\ref{proof:W_power}.\n\\end{proof}\n\nThis result is relevant in the downlink, where a transmission power budget is needed. The expression in \\eqref{eq:W_power} is a monotonically increasing function of $\\mu$. It can be shown that the total transmitted mean power is bounded by $4\\frac{M}{K}$, reaching this value at $\\mu=2$. However, as we will see in the next section, the optimal $\\mu$ for the i.i.d. Gaussian channel is within the range $(0,1]$, therefore for a large enough $K$ we have $\\E \\| \\Wbf \\|^{2}_{F} \\leq 1$, which does not depend on $M$, therefore ensuring the scalability of the proposed solution.\\\\\n\nExpression \\eqref{eq:SINR_CD} is plotted in Figure \\ref{fig:SINR_vs_mu_M128_K16}, showing SINR vs $\\mu$ for CD under different SNR values and the step-size according to \\eqref{eq:mu_m}. As expected, the optimal $\\mu$ approaches 1 as the SNR grows. Simulation results show a good match with \\eqref{eq:SINR_CD}. The curve with $\\mu_{0}$ values obtained from \\eqref{eq:mu_init} is also plotted for a wide range of SNR. It is observed that the $\\mu_{0}$ value is reasonably close to the optimum for the SNR range depicted. Furthermore, the result is much closer to ZF than to MRC; their values are $\\{40.5, 30.5, 20.5, 10.5\\}$dB and $\\{9.0, 9.0, 8.8, 6.8\\}$dB, respectively, for the different SNR values used in the figure.\n\nFigure \\ref{fig:SINR_vs_mu_sim} shows simulation results for the CD algorithm performance under different channels. For some of them we use a model (i.i.d. and WINNER II) and others are based on real measurements (Rich A and LOS A). For this comparison we use a different $\\frac{M}{K}$ ratio and the step-size according to \\eqref{eq:mu_m}. Rich A is a non-line-of-sight (NLOS) channel, rich in scatterers, while LOS A is a predominantly line-of-sight (LOS) channel. WINNER II is obtained from an NLOS scenario with a uniform linear array at the BS, with $M$ elements separated by $\\lambda$\/2. Users are randomly located in a 300m$\\times$300m area, with the BS at the center. It is noticed that rich channels (i.i.d. and WINNER II) provide better performance. The SINR levels reached by ZF are \\{20.9, 20.9, 19.8, 17.6\\}dB and for MRC they are \\{14.3, 15.2, 7.8, 4.8\\}dB, in both cases for the i.i.d., WINNER II, Rich A and LOS A channels, respectively. It is also noticed that CD performance lies in between ZF and MRC for these scenarios.\n\nFigure $\\ref{fig:SINR_vs_M_over_K_SNR0}$ shows SINR versus $\\frac{M}{K}$ for $M=128$ and SNR = 0dB. SINR for CD is shown comparing the effect of using $\\mu^{*}$ and $\\mu_{0}$ according to \\eqref{eq:mu_init}. 
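\n\nThe recommended step-size \\eqref{eq:mu_init} and the SINR approximation \\eqref{eq:SINR_CD_limit} can also be evaluated directly; for instance (an illustrative sketch of ours, independent of the simulations shown in the figures):\n\\begin{verbatim}\nimport numpy as np\n\ndef mu_recommended(M, K, snr):\n    # Recommended relaxation parameter; snr is linear (not in dB)\n    return 0.5*(K\/M)*np.log(4*M*snr)\n\ndef sinr_approx(M, K, mu, snr):\n    # Large-system SINR approximation\n    return 1 \/ (np.exp(-mu*(2 - mu)*M\/K) + (mu\/(2 - mu))\/(K*snr))\n\nM, K = 128, 16\nfor snr_db in (-10, 0, 10, 20):\n    snr = 10**(snr_db\/10)\n    mu0 = mu_recommended(M, K, snr)\n    print(snr_db, round(mu0, 2),\n          round(10*np.log10(sinr_approx(M, K, mu0, snr)), 1))\n\\end{verbatim}\nThe resulting $\\mu_{0}$ values stay within $(0,1]$ for this SNR range, consistent with the discussion above.\n\n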
We observe that $\\frac{M}{K} \\approx 10$ (equivalent to $K\\approx12$) is the preferred working point, where SINR reaches the maximum value and $\\mu_{0}$ gives the same result as $\\mu^{*}$. We also compare the performance with ZF and MRC algorithms.\n\nAs presented in Subsection \\ref{sub:multiple-iter}, the algorithm can be extended to perform multiple iterations through the array, in order to increase the performance. Figure $\\ref{fig:SINR_vs_mu_sim_num_iter}$ shows SINR versus $\\mu$ for a different number of iterations through the array together with ZF for comparison. From the figure we can notice that the maximum SINR increases after each iteration, approaching to ZF. It is also relevant to note that $\\mu^{*}$ changes with the number of iterations. \n\n\\begin{figure}[t]\\centering\n\t\\includegraphics[width=1\\linewidth]{SINR_vs_M_over_K_SNR0}\n\t\\vspace*{-4mm}\n\t\\caption{SINR (dB) versus $\\frac{M}{K}$ for SNR=0dB and M=128. CD SINR is plotted in the case of $\\mu^{*}$ (dashed) and $\\mu_{0}$ (solid) are used. i.i.d. channel. SNR = 0dB.}\n\t\\label{fig:SINR_vs_M_over_K_SNR0}\n\\end{figure}\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=1\\linewidth]{SINR_vs_mu_sim_num_iter}\n\t\\vspace*{-4mm}\n\t\\caption{SINR vs. SNR for $M$=128, $K=16$. 16QAM. i.i.d. channel. SNR=0dB. SINR after a certain number of iterations through the array. ZF added for comparison.}\n\t\\label{fig:SINR_vs_mu_sim_num_iter}\n\\end{figure}\n\n\n\\subsubsection{BER}\nBER versus SNR is shown in Figure \\ref{fig:BER_vs_SNR} under i.i.d. channel for three different methods: CD, ZF and MRC. CD is shown using two different values for $\\mu$: 1 and $\\mu^*$. It is noticeable the great impact of the selected $\\mu$ and therefore the importance of selecting an appropriate value.\n\nThe effect of non-ideal CSI in the BER is shown in Figure \\ref{fig:BER_vs_SNR_non-ideal-CSI} for ZF and CD (for $\\mu^{*}$). The non-ideal CSI is modeled as an ideal-CSI with a noise contribution (complex normal distributed) with a variance equal to $N_{0}$, therefore it depends inversely on SNR. No boosting in pilots is used. As it can be observed, for SNR$<$0dB the SNR gap is very small and increases as long as SNR increases too, in a similar fashion as the ideal CSI case. For SNR$>$0 the SNR gap in both cases is similar.\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=1\\linewidth]{BER_vs_SNR_M128_K16_16QAM_IID}\n\t\\vspace*{-4mm}\n\t\\caption{BER vs. SNR for $M$=128, $K=16$. 16QAM. i.i.d. channel.}\n\t\\label{fig:BER_vs_SNR}\n\\end{figure}\n\n\\begin{figure}\\centering\n\t\\includegraphics[width=1\\linewidth]{BER_vs_SNR_M128_K16_16QAM_IID_non-ideal-CSI}\n\t\\vspace*{-4mm}\n\t\\caption{BER vs. SNR for $M$=128, $K=16$. 16QAM. i.i.d. channel. Comparison between ideal and non-ideal CSI.}\n\t\\label{fig:BER_vs_SNR_non-ideal-CSI}\n\\end{figure}\n\n\\subsection{Complexity \\& Timing}\nIn this subsection we analyze the complexity of the proposed solution from three different domains: computational complexity (data processing), inter-connection throughput (data movement) and memory (data storage). Timing in the form of total system latency is also analyzed.\n\nFor this analysis we assume a frame structure based on OFDM, which contains one dedicated OFDM symbol per frame for channel estimation based on orthogonal pilots, so each one is dedicated to one of the users in a consecutive way. The other symbols convey users' data. Under the TDD assumption, some of them are used for DL and others for UL. 
We also assume that all RPUs perform IFFT\/FFT in parallel with an output data-rate of $\\frac{N_{\\mathrm{u}}}{T_{\\mathrm{OFDM}}}$.\n\n\\begin{figure*}[ht]\n\t\\footnotesize\n\t\\centering\n\t\\psfrag{P1}{$P_1$}\n\t\\psfrag{P2}{$P_2$}\n\t\\psfrag{P3}{$P_3$}\n\t\\psfrag{PN}{$P_N$}\n\t\\psfrag{D1}{$D_1$}\n\t\\psfrag{D2}{$D_2$}\n\t\\psfrag{D3}{$D_3$}\n\t\\psfrag{DN}{$D_N$}\n\t\\psfrag{M1}{$M_1$}\n\t\\psfrag{M2}{$M_2$}\n\t\\psfrag{M3}{$M_3$}\n\t\\psfrag{MN}{$M_N$}\n\t\\psfrag{C1}{$C_1$}\n\t\\psfrag{C2}{$C_2$}\n\t\\psfrag{C3}{$C_3$}\n\t\\psfrag{CN}{$C_N$}\n\t\\psfrag{W11}{$w_{1}^{(1)}$}\n\t\\psfrag{W12}{$w_{1}^{(2)}$}\n\t\\psfrag{W13}{$w_{1}^{(3)}$}\n\t\\psfrag{W1N}{$w_{1}^{(N)}$}\n\t\\psfrag{W21}{$w_{2}^{(1)}$}\n\t\\psfrag{W22}{$w_{2}^{(2)}$}\n\t\\psfrag{W23}{$w_{2}^{(3)}$}\n\t\\psfrag{W2N}{$w_{2}^{(N)}$}\n\t\\psfrag{WM1}{$w_{M}^{(1)}$}\n\t\\psfrag{WM2}{$w_{M}^{(2)}$}\n\t\\psfrag{WM3}{$w_{M}^{(3)}$}\n\t\\psfrag{WMN}{$w_{M}^{(N)}$}\n\t\\psfrag{M1}{$M_1$}\n\t\\psfrag{M2}{$M_2$}\n\t\\psfrag{M3}{$M_3$}\n\t\\psfrag{MN}{$M_N$}\n\t\\psfrag{ant1}{$1$}\n\t\\psfrag{ant2}{$2$}\n\t\\psfrag{ant3}{$3$}\n\t\\psfrag{antM}{$M$}\n\t\\psfrag{OFDM1}{$\\mathrm{OFDM} 1$}\n\t\\psfrag{OFDM2}{$\\mathrm{OFDM} 2$}\n\t\\psfrag{OFDM3}{$\\mathrm{OFDM} 3$}\n\t\\psfrag{OFDML}{$\\mathrm{OFDM} L$}\n\t\\psfrag{A1}{$\\mathbf{A}_{1}^{(n)}$}\n\t\\psfrag{A2}{$\\mathbf{A}_{2}^{(n)}$}\n\t\\psfrag{A3}{$\\mathbf{A}_{3}^{(n)}$}\n\t\\psfrag{AM1}{$\\mathbf{A}_{M-1}^{(n)}$}\n\t\\psfrag{A11}{$\\mathbf{A}_{1}^{(1)}$}\n\t\\psfrag{A1N}{$\\mathbf{A}_{1}^{(N)}$}\n\t\\psfrag{A21}{$\\mathbf{A}_{2}^{(1)}$}\t\n\t\\psfrag{A2N}{$\\mathbf{A}_{2}^{(N)}$}\n\t\\psfrag{TS}{$\\cdots$}\n\t\\psfrag{TPRB}{$T_{\\mathrm{PRB}}$}\n\t\\includegraphics[width=1.0\\linewidth]{time_diagram.eps}\n\t\\caption{Time diagram representing formulation and filtering\/precoding activities performed in the antenna modules. Each OFDM symbol is split into $N_\\mathrm{PRB}$ blocks ($N$ in the figure) in the same order as data come out of any of the receiver FFT. Those blocks which contains pilots are shown as $P_{i}$, while those carrying data are denoted as $D_{i}$. Channel estimation is performed during $C_{i}$ blocks, while formulation is done in $\\w_{i}$ blocks. Filtering\/precoding data is carried out during the MIMO processing blocks, named $\\mathrm{M}_{i}$. As it can be observed, all antennas perform their tasks simultaneously, while formulation is done sequentially as a matrix $\\A^{(n)}$ passes through the array. In total, $N$ matrices are passed sequentially through antenna $m$, corresponding to $\\A_{m}^{(n)}, n=1 \\cdots N$. $\\w_{i}$ vectors need to be available in the antenna modules before the corresponding data comes out of the receiver FFT so it can be properly processed. Daisy-chain topology exploits the parallelism of the operations by allowing the pipeline of the operations and the fully usage of all dedicated links simultaneously.}\n\t\\label{fig:time_diagram}\n\\end{figure*}\n\nWe can exploit channel correlation based on the Physical Resource Block (PRB) concept in 3GPP. A PRB is a region in frequency-time domain where the channel response is assumed to be approximately constant across all subcarriers within that PRB. Within an OFDM symbol, the number of subcarriers in each PRB and the number of PRB per symbol, defined as $N_{\\mathrm{sc,PRB}}$ and $N_{\\mathrm{PRB}}$ respectively, are related as follows: $N_{\\mathrm{u}} = N_{\\mathrm{PRB}} N_{\\mathrm{sc,PRB}}$. 
We define $T_{\\mathrm{PRB}}$ as the time needed by $N_{\\mathrm{sc,PRB}}$ consecutive subcarriers to come out the FFT.\n\nFor each PRB we have a different channel matrix and also MIMO model as in \\eqref{eq:ul_model} and \\eqref{eq:dl_model}. Then, it is required to have a unique set of vectors $\\w_m$ and $\\p_m (m=1...M)$ per antenna, as in \\eqref{eq:linear_det} and \\eqref{eq:linear_prec_i}, for uplink detection and downlink precoding respectively. The phase where these vectors are computed is named $\\textit{formulation}$, while the phase where user's data is processed is named $\\textit{filtering}$ and $\\textit{precoding}$ for UL and DL respectively. To minimize data buffering, formulation needs to be completed before filtering\/precoding starts. This imposes the constraint that the formulation phase needs to be finished within one OFDM symbol, or in other words, all antennas need to obtain these vectors and the matrix $\\A$ needs also to pass through the array within one OFDM symbol. A diagram of the main activities involved and their timing relationship is shown in Figure \\ref{fig:time_diagram}. The analysis assumes that the processing and data transmission are pipelined in each RPU so they concurrently operate.\n\n\\subsubsection{Computational complexity}\n\n\\begin{itemize} \n\\item Formulation phase:\nThe number of complex multiplications needed to formulate one precoding\/filtering vector per antenna are $C_{\\mathrm{form}} \\approx 2K^{2}$, which represents the matrix-vector product to obtain $\\w_{m}$ and the outer product to update $\\A_{m}$ according to algorithm \\ref{algo:CD}. Other possible required operations such as norm, square root or division are assumed to be negligible.\n\n\\item Filtering phase:\nDuring the filtering phase, each RPU performs the required operations for UL detection. Vectors $\\w_{m}$ are applied to all observations (data subcarriers), $y^{u}_{m}$, under the same PRB. The complexity measured in number of complex multiplications per antenna and per $N_{\\mathrm{sc,PRB}}$ subcarriers is $C_{\\mathrm{filt}} = KN_{\\mathrm{sc,PRB}}$.\n\n\\item Precoding phase:\nDuring the precoding phase, each RPU performs the operations required by \\eqref{eq:linear_prec_i}. Similarly to the filtering case, the same vector $\\p_{m}$ is applied to all data vectors $x^{d}_{m}$ under same PRB. The complexity measured in number of complex multiplications per antenna and PRB is $C_{\\mathrm{prec}} = KN_{\\mathrm{sc,PRB}}$.\n\\end{itemize}\n\n\\subsubsection{Inter-connection data-rate}\n\\label{section:data-rate}\n\\begin{itemize} \n\t\\item Formulation phase:\n\tThe average inter-connection data-rate during formulation can be calculated assuming that the average time to complete a transfer of a matrix $\\A$ is $T_{\\mathrm{PRB}}$, which leads to an average rate of\n\n\t\\begin{equation}\n\tR_{\\mathrm{d,form}} = \\frac{2w_{\\A} K^{2} N_{\\mathrm{PRB}}}{T_{\\mathrm{OFDM}}},\n\t\\nonumber\n\t\\end{equation}\n\n\twhere the numerator represents the amount of bits to transfer (all matrices $\\A$ in a symbol) and $w_{\\A}$ is the bit-width of $\\A$ entries (real\/imaginary parts).\n\t\n\t\\item Filtering phase:\n\tPartial filtering results from each RPU are added up through the chain. 
The average inter-connection data-rate per dedicated link can be calculated as\n\n\t\\begin{equation}\n\tR_{\\mathrm{d,filt}} = \\frac{2 w_{\\mathrm{d}} KN_{\\mathrm{u}}}{T_{\\mathrm{OFDM}}},\n\t\\nonumber\n\t\\end{equation}\n\n\twhere $w_{\\mathrm{d}}$ is the bit-width of baseband samples exchanged among RPUs.\n\t\n\t\\item Precoding phase:\n\tIn the precoding phase, the data vectors $\\xd$ are passed through the array for processing. Each node receives a vector which is passed to next node without any required pause (broadcasting). This leads to the same data-rate as in the filtering case.\n\t\n\\end{itemize}\n\n\\subsubsection{Latency}\nThe processing latency in the formulation phase for one antenna is given from next expression\n\\begin{equation}\n\\begin{split}\nT_{\\mathrm{proc,form}} &= \\frac{C_{\\mathrm{form}} T_{\\mathrm{CLK}}}{N_{\\mathrm{mult}}} \\\\\n&\\approx \\frac{2K^{2} T_{\\mathrm{CLK}}}{N_{\\mathrm{mult}}},\n\\end{split}\n\\nonumber\n\\end{equation}\nwhere $N_{\\mathrm{mult}}$ is the number of multipliers available in each RPU that can be used in parallel, $T_{\\mathrm{CLK}}$ is the clock period and we assume that one complex multiplication can be done within one $T_{\\mathrm{CLK}}$. Total latency is expressed as\n\\begin{equation}\n\\begin{split}\nLat_{form} &= M \\cdot T_{\\mathrm{proc, form}} + (N_{\\mathrm{RPU}}-1) \\cdot T_{\\mathrm{trans}},\n\\end{split}\n\\nonumber\n\\end{equation}\nwhere $N_{\\mathrm{RPU}}$ is the number of RPUs in the system, and $T_{\\mathrm{trans}}$ is the transmission latency between two consecutive RPUs. As said before, formulation needs to be finished within one $T_\\mathrm{OFDM}$, therefore the formulation latency is constrained as $Lat_{form} < T_{\\mathrm{OFDM}}$. This leads to an upper limit for M as\n\\begin{equation}\nM < \\frac{T_\\mathrm{OFDM}+T_{\\mathrm{trans}}}{T_{\\mathrm{proc, form}} + \\frac{T_{\\mathrm{trans}}}{M_{\\mathrm{RPU}}}},\n\\nonumber\n\\end{equation}\nwhere $M_{\\mathrm{RPU}}=\\frac{M}{N_{\\mathrm{RPU}}}$ is the number of antennas per RPU, which is considered as a design parameter. We can consider another limit, slightly lower than previous one but easier to extract conclusions as follows\n\\begin{equation}\nM < \\frac{T_\\mathrm{OFDM}}{T_{\\mathrm{proc, form}} + \\frac{T_{\\mathrm{trans}}}{M_{\\mathrm{RPU}}}}.\n\\nonumber\n\\end{equation}\n\nWe analyze three scenarios:\n\\begin{itemize} \n\t\\item $T_{\\mathrm{proc, form}} \\rightarrow 0$: When processing time is reduced, by increasing $N_{\\mathrm{mult}}$ or decreasing $T_{\\mathrm{CLK}}$, then transaction time becomes dominant and a reduction in the number of links allow for higher values of $M$. Formally, the upper value for $M$ scales proportionally to $M_{\\mathrm{RPU}}$ as follows\n\n\t\\begin{equation}\n\tM < M_{\\mathrm{RPU}} \\cdot \\frac{T_\\mathrm{OFDM}}{T_{\\mathrm{trans}}}.\n\t\\nonumber\n\t\\end{equation}\n\n\t\\item $T_{\\mathrm{trans}} \\rightarrow 0$: By decreasing the transaction time the upper limit of $M$ converges to a certain value, which is inversely proportional to the processing time as follows\n\n\t\\begin{equation}\n\tM < \\frac{T_\\mathrm{OFDM}}{T_{\\mathrm{proc, form}}}.\n\t\\nonumber\n\t\\end{equation}\n\n\t\\item $M_{\\mathrm{RPU}} \\gg \\frac{T_{\\mathrm{trans}}}{T_{\\mathrm{proc, form}}}$. 
When $M_{\\mathrm{RPU}}$ increases beyond a certain value, processing time becomes dominant and we obtain the same limit as previous point.\n\\end{itemize}\n\nIn case of filtering, its related processing is done in parallel as soon as data comes out of the FFT. However, partial results needs to be accumulated through the array from RPU 1 to $N_{\\mathrm{RPU}}$. This latency is uniquely due to data transfer through the dedicated links, then\n\\begin{equation}\n\\begin{split}\nLat_{\\mathrm{filt}} &= (N_{\\mathrm{RPU}}-1) \\cdot T_{\\mathrm{trans}}\\\\\n& < Lat_{\\mathrm{form}}\\\\ & < T_{\\mathrm{OFDM}}.\n\\end{split}\n\\label{eq:lat_filt}\n\\end{equation}\n\n\\subsubsection{Memory}\nIn terms of memory requirement, a centralized architecture requires to store the channel matrix $\\mathbf{H}$ fully at the CPU, previous to the inversion. There is a channel matrix per PRB, so CSI storage requires $M_{\\mathrm{H}} = 2 w_{\\mathrm{h}} M K N_{\\mathrm{PRB}}$ bits, where $w_{\\h}$ represents the bit-width of $\\Hbf$ entries (real\/imaginary parts), and in order to store the resulting square matrix, $(\\Hbf^{H}\\Hbf)^{-1}$ requires $M_{\\mathrm{inv}} = 2 w_{\\mathrm{h}} K^{2} N_{\\mathrm{PRB}}$ and therefore the total requirement is: $M_{\\mathrm{central}} = M_{\\mathrm{H}} + M_{\\mathrm{inv}} \\approx M_{\\mathrm{H}}$.\n\nIn the decentralized architecture, each antenna module needs to store the corresponding $\\h$, which gets replaced by $\\mathbf{w}$ after formulation. Both of them requires the same amount of memory if same bit-width is assumed, which is $M_{\\mathrm{w}} = 2 w_{\\mathrm{h}} K N_{\\mathrm{PRB}}$, and the total amount of memory in the system is: $M_{\\mathrm{daisy}} = M \\cdot M_{\\w} \\approx M_{\\mathrm{central}}$. Therefore, the total amount of memory required for $\\Hbf$ and $\\Wbf$ is the same in both systems, however the daisy-chain allows a uniform distribution of the memory requirements across all antenna modules, reducing design complexity, time and cost. As a drawback, we point out the need for data buffering during the filtering phase due to latency in the transfer of partial results, as discussed in the previous subsection (Latency). The buffer size for the RPU closest to the CPU (worst case) can, based on \\eqref{eq:lat_filt}, be obtained as\n\\begin{equation}\nM_{\\mathrm{buffer}} = \\frac{ 2 w_{\\mathrm{d}} K N_{\\mathrm{u}} Lat_{\\mathrm{filt}}}{T_{\\mathrm{OFDM}}},\n\\nonumber\n\\end{equation}\nwhich is shared by all antennas belonging to that RPU.\n\n\\subsection{Comparison}\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Inter-connection data-rate comparison for different system parameters [$G\\lowercase{b\/s}$]}\n\t\\label{tab:data-rate}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$R_{\\mathrm{d,form}} $ & 12.67 & 50.69 & 114.05 & 114.05\\\\\n\t\t$R_{\\mathrm{d,filt\/prec}} $ & 38.02 & 76.03 & 114.05 & 114.05\\\\\n\t\t\\hline\n\t\t$R_{\\mathrm{c}} $ & 304.13 & 608.26 & 1216.51 & 2433.02\\\\\n\t\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:data-rate} shows a comparison of interconnection data-rate between daisy-chain and centralized architecture for different scenarios of $M$ and $K$. It is important to remark that $R_{\\mathrm{c}}$ corresponds to the aggregated data\/rate at the shared bus, while $R_{\\mathrm{d}}$ is the average data\/rate in each of the RPU-RPU dedicated links. 
For the centralized case, \\eqref{eq:R_central} is used, while for the daisy-chain case, data-rates are detailed according to the different tasks (formulation, filtering and precoding) as described in Section \\ref{section:data-rate}. For the numerical results we employ $T_{\\mathrm{CLK}}=1\\mathrm{ns}$ and $w=12$. The rest of the system parameters are set according to the worst case in 5G NR: $N_{\\mathrm{u}}=3300$, $N_{\\mathrm{PRB}}=275$, $N_{\\mathrm{sc,PRB}}=12$ and $T_{\\mathrm{OFDM}}=\\frac{1}{120\\mathrm{KHz}}$. We observe that for the $M=128$ case, the daisy-chain requires $\\sim 10\\%$ of the inter-connection data-rate needed by the centralized case. This fraction decreases further as $\\frac{M}{K}$ grows. Overall, the daisy-chain requires much lower inter-connection data-rates than the centralized counterpart. We remark that the total inter-connection data-rate in the decentralized case, which is $N_{\\mathrm{RPU}} R_{\\mathrm{d,form}}$, may easily exceed the centralized counterpart $R_{\\mathrm{c}}$; however, the decentralized architecture is able to distribute this data-rate equally across all links, considerably reducing the requirement on each of them.\n\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Computational complexity comparison for different system parameters [$GOPS$]}\n\t\\label{tab:complexity}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$C_{\\mathrm{d,ant}} $ & 1.58 & 3.17 & 4.75 & 4.75\\\\\n\t\t\\hline\n\t\t$C_{\\mathrm{c}} $ & 50.69 & 202.75 & 608.26 & 1216.51\\\\\n\t\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:complexity} shows a computational complexity comparison between the centralized and decentralized architectures. $C_{\\mathrm{d,ant}}$ represents complex multiplications per second and per antenna in the decentralized case, while $C_{\\mathrm{c}}$ is the computational complexity required by the CPU in the centralized system. In both cases, only filtering\/precoding is taken into account because formulation depends on how often channel estimation is available. The result of the comparison is meaningful: even though the total complexity in the decentralized system is approximately equal to the centralized counterpart, that is, $M \\cdot C_{\\mathrm{d,ant}} \\approx C_{\\mathrm{c}}$, our decentralized solution is able to divide the total computational complexity equally among all existing RPUs, considerably relaxing the requirements compared to the CPU in the centralized case. The relatively low number obtained for the daisy-chain allows the employment of cheap and general-purpose processing units in each RPU, in contrast to the centralized architecture where the total complexity requirement falls on the CPU.\n\nNumerical results for latency are shown in Table~\\ref{tab:latency} for $N_{\\mathrm{mult}}=8$, $T_{\\mathrm{trans}}=100\\,\\mathrm{ns}$ and $N_{\\mathrm{RPU}}=\\frac{M}{4}$. These design parameters meet the constraint $Lat < T_\\mathrm{OFDM}$ up to $M=128$. For larger arrays there are several solutions: allow the latency to increase and buffer the needed input data (which requires larger memory), group more antennas in each RPU (which reduces the number of links but increases the complexity of the CPU controlling each RPU), and\/or employ low-latency link connections (reducing $T_{\\mathrm{trans}}$ at the expense of higher cost). 
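\nAs a quick sanity check, the daisy-chain entries of Table~\\ref{tab:data-rate} above and of Table~\\ref{tab:latency} below can be reproduced directly from the expressions for $R_{\\mathrm{d,filt}}$ and $Lat_{form}$. The following short Python sketch is purely illustrative; it assumes the parameter values quoted in this section ($w_{\\mathrm{d}}=12$, $N_{\\mathrm{u}}=3300$, $T_{\\mathrm{OFDM}}=1\/(120\\,\\mathrm{kHz})$, $T_{\\mathrm{CLK}}=1\\,\\mathrm{ns}$, $N_{\\mathrm{mult}}=8$, $T_{\\mathrm{trans}}=100\\,\\mathrm{ns}$ and $N_{\\mathrm{RPU}}=M\/4$).\n\\begin{verbatim}\n# Reproduce the per-link filtering\/precoding data-rate and the\n# formulation latency for the four scenarios of the tables.\nT_OFDM = 1.0 \/ 120e3                     # OFDM symbol duration [s]\nw_d, N_u = 12, 3300                      # sample bit-width, used subcarriers\nT_CLK, N_mult, T_trans = 1e-9, 8, 100e-9\n\nfor M, K in [(32, 4), (64, 8), (128, 12), (256, 12)]:\n    R_filt = 2 * w_d * K * N_u \/ T_OFDM          # bits\/s on each link\n    T_proc_form = 2 * K**2 * T_CLK \/ N_mult      # per-antenna formulation time\n    Lat_form = M * T_proc_form + (M \/\/ 4 - 1) * T_trans\n    print(M, K, round(R_filt \/ 1e9, 2), 'Gb\/s',\n          round(Lat_form * 1e6, 2), 'us', round(Lat_form \/ T_OFDM, 2))\n\\end{verbatim}\nRunning the sketch returns $38.02$, $76.03$, $114.05$ and $114.05$ Gb\/s for $R_{\\mathrm{d,filt\/prec}}$, together with the latency column ($0.83$, $2.52$, $7.71$, $15.52\\,\\mu s$) and the $Lat\/T_{\\mathrm{OFDM}}$ ratios of Table~\\ref{tab:latency}.\n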
It is relevant to note that $T_\\mathrm{OFDM}$ value in the table is the worst case $1\/120KHz$.\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Latency comparison for different system parameters}\n\t\\label{tab:latency}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$Lat(\\mu s)$ & 0.83 & 2.52 & 7.71 & 15.52\\\\\n\t\t$Lat\/T_{\\mathrm{OFDM}}$ & 0.10 & 0.30 & 0.92 & 1.86\\\\\n\t\\end{tabular}\n\\end{table}\n\nIn table $\\ref{tab:memory}$ a comparison between both systems from memory perspective is shown. If $w_{\\h}=12$ and $N_{\\mathrm{PRB}}=275$ are assumed, then for the $M=128$ case, each antenna module in the daisy-chain only needs $\\sim 80 \\mathrm{kbits}$ of memory and each RPU needs at maximum $354 \\mathrm{kbits}$ for buffering, while in the centralized architecture, the central processor requires $\\sim 11 \\mathrm{Mbits}$, which is a challenging number for a cache memory. The memory requirement grows proportionally to M in the centralized system, while that does not happen in $M_{\\w}$. In order to reduce the buffer size we can group more antennas in each RPU, so all of them share the same buffer memory.\n\\begin{table}\n\t\\renewcommand{\\arraystretch}{1.3} \n\t\\caption{Memory requirement comparison for different system parameters [$kbits$]}\n\t\\label{tab:memory}\n\t\\centering\n\t\\begin{tabular}{l*{5}{c}}\n\t\t\\hline\n\t\tScenario & & & & \\\\\n\t\t$M$\t & 32 & 64 & 128 & 256 \\\\\n\t\t$K$\t & 4 & 8 & 12 & 12 \\\\\n\t\t\\hline\n\t\t$M_{\\w} (ant) $ & 26.4 & 52.8 & 79.2 & 79.2\\\\\n\t\t$M_{\\mathrm{buffer}} (RPU) $ & 26.6 & 114.1 & 353.6 & 718.5\\\\\t\n\t\t\\hline\n\t\t$M_{\\mathrm{H}} $ & 844.8 & 3379.2 & 10137.6 & 20275.2\\\\\n\t\t$M_{\\mathrm{inv}} $ & 105.6 & 422.4 & 950.4 & 950.4\\\\\n\t\\end{tabular}\n\\end{table}\n\n\\section{Conclusions}\n\\label{section:conclusions}\n\nIn this article we proposed an architecture for Massive MIMO base-station for uplink detection and downlink precoding, which is based on the fully distribution of the required baseband processing across all antenna modules in the system. The main goal is to reduce the inter-connection data-rate needed to carry out the processing tasks and enable the scalability needed in Massive MIMO. We continued our previous work in this topic \\cite{jesus} \\cite{muris} by a detailed introduction to the CD algorithm and its application to the Massive MIMO case. We also presented an extensive analysis of the expected performance of the system, the inter-connection data-rate, complexity, latency and memory requirements. The results show that there is a performance loss compared to ZF, but unlike MF, our proposed method does not have an error floor, from which we can not recover, while the inter-connection data-rate is distributed avoiding the aggregation of the centralized approach. 
At the same time, complexity and memory requirements per antenna module are easy to meet with commercial off-the-self hardware, which proves the scalability of this solution.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nIn high dimensional linear regression\n\\begin{align*}\ny=X\\beta^\\ast+w,\\quad\\mbox{with noise }w \\sim \\mathcal{N}(0,\\sigma^2 I),\n\\end{align*}\nthe goal is to parsimoniously predict the response $y\\in \\mathbb R^n$ as a linear combination of a large number of covariates $X=(X_1,X_2,\\ldots,X_p)\\in\\mathbb R^{n\\times p}$, and conduct statistical inference on the linear combination coefficients $\\beta^\\ast=(\\beta_1^\\ast,\\ldots,\\beta_p^\\ast)^T\\in\\mathbb R^p$ \\citep{tibshirani1996regression,donoho2006compressed}. By leveraging on certain lower dimensional structure in the regression coefficient vector $\\beta^\\ast\\in\\mathbb R^p$ such as a sparsity constraint $s=\\|\\beta^\\ast\\|_0\\ll n$, where $\\|\\beta^\\ast\\|_0$ counts the number of nonzeros in $\\beta^\\ast$, the number $p$ of covariates is allowed to be substantially larger than the sample size $n$. Due to the intrinsic computational hardness in dealing with the $\\ell_0$ metric reflecting sparsity, people instead use different metrics as surrogates, and cast the estimation problem into various convex or nonconvex optimization problems. Many approaches have been proposed for high dimensional regression by solving certain penalized optimization problem, including basis pursuit \\citep{chen2001atomic}, the Lasso \\citep{tibshirani1996regression}, the Dantzig selector \\citep{candes2007dantzig}, SCAD \\citep{fan2001variable}, MCP \\citep{zhang2010nearly} and so on. In this work, we focus on the recovery of $\\beta^\\ast\\in\\mathbb R^p$ without explicitly specifying a penalty.\n\nRecent work~\\citep{hoff2017lasso} shows that through a change-of-variable (over-parametrization) via Hadamard product parametrization, the Lagrangian (dual) form of the non-smooth convex optimization problem for the Lasso~\\eqref{Eqn:CS}:\n\\begin{equation}\\label{Eqn:CS}\n\\min_{\\beta} \\frac{1}{2n}\\|X\\beta-y\\|^2+\\lambda \\|\\beta\\|_{1}, \\quad\\mbox{with }\\|\\beta\\|_1:\\,=\\sum_{j=1}^p|\\beta_j|,\n\\end{equation}\ncan be reformulated as a smoothed optimization problem at a cost of introducing non-convexity. Due to the smoothness feature, simple and low-cost first-order optimization methods such as gradient descent and coordinate descent can be applied to recover $\\beta^\\ast$. Despite the non-convexity and exponentially many stationary points induced by the change-of-variable, these first-order algorithms exhibit encouraging empirical performance~\\citep{hoff2017lasso}.\n\nIn this work, we consider the same Hadamard product over-parametrization $\\beta = g\\circ l$ as in \\cite{hoff2017lasso}, where $g, \\,l\\in\\mathbb R^p$ and $\\circ$ denotes the Hadamard product (element-wise product). Instead of solving the penalized optimization problem~\\eqref{Eqn:CS}, we consider directly applying the gradient descent to the quadratic loss function\n\\begin{equation}\\label{eq_opt}\nf(g,l)=\\frac{1}{2n}\\,\\|X(g\\circ l)-y\\|^2.\n\\end{equation}\nIn the noiseless case where $\\sigma=0$, minimizing $f(g,\\,l)$ jointly over $(g,\\,l)$ is a highly non-convex optimization problem with exponentially many saddle points. 
To see this, notice that each non-zero pattern of $\\beta$ corresponds to at least one saddle point by choosing $g_j=l_j=0$ for each $j$ such that $\\beta_j=0$. In addition, due to the column rank deficiency of the design matrix $X$ (for example, when $p>n$), there are infinitely many global minimizers of \\eqref{eq_opt} as potential convergent points of the gradient descent. \nInterestingly, we show that despite these seemingly hopeless difficulties, in the noiseless case if we initialize the gradient descent arbitrarily close to $g=l=0$, then under\nthe prominent Restricted Isometry Property (RIP) condition~\\citep{candes2008restricted} on the design matrix $X$, a properly tuned gradient descent converges to least $\\ell_1$-norm solution within error $\\varepsilon$ in $\\mathcal O(\\log\\frac{C}{\\varepsilon})$ iterations, where constant $C$ depends on the RIP constant, step size of the gradient descent, and some other characteristics of the problem. Our proofs borrow ideas from \\cite{li2018algorithmic}, where they prove the algorithmic convergence of matrix factorized gradient descent in the context of noiseless matrix sensing under the RIP.\n\n\nIn high dimensional regression, the usual regularized least square is known to suffer from the so-called saturation phenomenon \\citep{vito2005learning,yao2007early}, where the overall estimation error is dominated by a bias term due to the penalty. In particular, since regularization is artificially introduced for restricting the ``effective size'' of the parameter space, the resulting estimator may be deteriorated and the estimation error cannot fall below the penalty level to adapt to a possibly faster convergence rate. For example, the estimator by solving the Lasso achieves the minimax rate of $\\sqrt{s}\\lambda\\asymp\\sqrt{s \\log p\/n}$. However, this worse-case rate only happens when there exist weak signals, meaning that some nonzero $\\beta^\\ast_j$'s have a borderline magnitude of order $\\sqrt{s\\log p\/n}$. In fact, if all signals are sufficiently strong, or significantly larger this borderline magnitude,\nthen the faster dimension-independent parametric rate $\\sqrt{s\/n}$ is attainable. For regularized approaches such the Lasso, one possible way to remove the penalty-induced bias term (whose order is $\\lambda$) is to refit the model with the selected variables. However, this two stage procedure requires stringent assumptions on the minimal signal strength to guarantee variable selection consistency for the first stage, and will suffer from weak signals. Interestingly, we show that by combining the Hadamard product over-parametrization with early stopping, a widely used regularization technique in boosting \\citep{zhang2005boosting} and nonparametric regression \\citep{raskutti2014early}, our method can overcome the saturation issue and lead to more accurate estimation. 
\nMore precisely, in the presence of random noise $w$ in the linear model, the solution path \nof minimizing the quadratic loss function~\\eqref{eq_opt} as we increase the gradient descent iteration still tends to converge to the least $\\ell_1$-norm solution, which will overfit the data.\nFortunately, by terminating the gradient descent updating procedure earlier, within a proper number of iterations, we may find a solution that optimally balances between model complexity (reflected by the increasing $\\ell_1$-norm of the iterate) and goodness of fit of the model, akin to the bias-variance trade-off.\nIn particular, we show that the estimator can adapt to an optimal convergence rate of $\\sqrt{s\/n}$ when all signals are relatively strong. Generally, when both strong signals and weak signals exist, our estimator attains the rate $\\sqrt{s_1\/n}+\\sqrt{s_2 \\log p\/n}$ (with $s_1, s_2$ denoting the number of strong signals and weak signals, respectively).\n\n\nOur result also complements the recent surge of literature on over-parametrization for implicit regularization of first-order iterative methods for non-convex optimization in machine learning. \\cite{gunasekar2017implicit} introduce the phenomenon of implicit regularization in matrix factorization, where they empirically observe the convergence of gradient methods in the matrix factorization problem to the minimal nuclear norm solution as the initial value tends to zero. However, they only provide some heuristic illustration under some hard-to-check assumptions such as the continuity of the solution relative to the change in the initialization. Later, \\cite{li2018algorithmic} rigorously prove the implicit regularization in the matrix sensing problem under a matrix RIP condition. Some other very recent works such as \\cite{pmlr-v80-gunasekar18a} and \\cite{soudry2018implicit} study implicit regularization in mirror descent and in classification problems. Note that all the above works on implicit regularization only consider data that are either noiseless (regression) or perfectly separable (classification). To the best of our knowledge, we are the first to rigorously study and utilize implicit regularization in high dimensional linear regression where responses are noisy. \n\n\nIn a nutshell, we show that through a simple change-of-variable, the non-smooth $\\ell_1$-penalized optimization problem~\\eqref{Eqn:CS} can be transformed into an unconstrained smooth quadratic loss minimization problem; moreover, a simple gradient descent initialized near zero efficiently solves this non-convex optimization problem with provable guarantees. Furthermore, our method enjoys several advantages over existing procedures for high dimensional linear regression under sparsity constraints. First, our method is computationally efficient --- its time complexity is $O(np)$ per iteration, which is linear in both $n$ and $p$. Second, despite the non-convex nature of the problem, our method has a natural initialization that provably leads to the optimal solution.\nIn comparison, penalized $M$-estimators based on non-convex penalties such as SCAD and MCP require stringent conditions on their initializations: to obtain good estimators, they require initial values that are sufficiently close to the truth (theoretically) or satisfy some restricted strong convexity conditions \\citep{zhao2018pathwise}; otherwise their optimization algorithms will suffer from bad local minima with poor generalization. In contrast, our algorithm only requires the initialization to be close to zero. 
Moreover, unlike penalized approaches such as SCAD and MCP, where both parameters for the noise level and the concavity of the penalty need to be tuned, our method only need to tune the number of iterations. \n\n\n\n\nTo conclude, our main contributions with respect to the relative literatures are as follows:\n\\begin{enumerate}\n\t\\item We propose an estimator by combining early stopping with implicit regularization to overcome the saturation issues in high dimensional regression with explicit regularizations;\n\t\\item Unlike recent implicit regularization literatures that exclusively focus on noiseless data, we are the first to rigorously study the effect of implicit regularization for noisy data;\n\t\\item From computational perspective, we transform the non-smooth optimization problem to an unconstrained smooth quadratic loss minimization problem for which standard optimization tools can be applied.\n\\end{enumerate}\n\n\n\\section{Background and Our Method}\nTo begin with, we formally introduce the setup and notations used throughout the paper. After that, we introduce the intuition for our new implicit regularized algorithm for high dimensional linear regression via Hadamard product parameterization.\n\n\\subsection{Setup and notations}\nRecall that $\\beta^\\ast$ is the unknown $s$-sparse signal in $\\mathbb{R}^{p}$ to be recovered. Let $S\\subset\\{1,\\ldots,p\\}$ denote the index set that corresponds to the nonzero components of $\\beta^\\ast$, and the size $|S|$ of $S$ is then $s$.\nFor two vectors $g, l\\in \\mathbb{R}^{p}$, we call $\\beta = g\\circ l \\in\\mathbb R^p$ as their Hadamard product, whose components are $\\beta_j = g_jl_j$ for $j=1,\\ldots p$. For two vectors $a,b\\in\\mathbb R^p$, we use the notation $a\\geq b$ to indicate element-wise ``great than or equal to''.\nWhen there is no ambiguity, we use $\\beta^2=\\beta\\circ \\beta$ to denote the self-Hadamard product of $\\beta$. For a function $f:\\mathbb R^p\\times \\mathbb R^p \\to \\mathbb R$, $(g,\\,l)\\mapsto f(g,\\,l)$, we use $\\nabla_g f$ and $\\nabla_l f$ to denote its partial derivative relative to $g$ and $l$, respectively. \nFor any index set $J\\subset \\{1,\\ldots,p\\}$ and vector $a\\in \\mathbb R^p$, we use $a_J=(a_j:\\, j\\in J)$ to denote the sub-vector of $a$ formed by concatenating the components indexed by $J$. Let $\\mathbf 1\\in\\mathbb R^p$ denote the vector with all entries as $1$, and $I$ as the identity matrix in $\\mathbb R^p$. Let $I_J$ be the diagonal matrix with one on the $j$th diagonal for $j\\in J$ and $0$ elsewhere. For a vector $a\\in\\mathbb R^p$, we use $\\|a\\|$ to denote its vector-$\\ell_2$-norm, and $\\|a\\|_\\infty=\\max_{j}|a_j|$ its $\\ell_\\infty$-norm. Let $\\mbox{Unif}(a,b)$ to denote the uniform distribution over interval $(a,b)$. For a symmetric matrix $A$, let \n$\\lambda_{\\min}(A)$ denote its smallest eigenvalue. For two sequences $\\{a_n\\}$ and $\\{b_n\\}$, we use the notation $a_n\\lesssim b_n$ or $a_n\\gtrsim b_n$ to mean there exist some constant $c$ and $C$ independent of $n$ such that $a_n \\leq Cb_n$ or $a_n \\geq cb_n$ for all $n<0$, respectively, and $a_n\\asymp b_n$ to mean $a_n\\lesssim b_n$ and $b_n \\lesssim a_n$.\n\n\\subsection{Gradient descent with Hadamard product parametrization}\nAs we mentioned in the introduction, we consider augmenting the $p$-dimensional vector $\\beta$ into two $p$-dimensional vectors $g,\\,l$ through $\\beta=g\\circ l$. 
Instead of solving the Lasso problem~\\eqref{Eqn:CS} with $\\beta$ replaced with $g\\circ l$, we consider directly applying gradient descent to the quadratic loss function $f(g,l)=(2n)^{-1}\\|X(g\\circ l)-y\\|^2$. \nIn particular, we apply the updating formula $g_{t+1}=g_t-\\eta \\nabla f_g(g_t,\\,l_t)$, $l_{t+1}=l_t-\\eta \\nabla_l f(g_t,l_t)$, with random initial values $g_0$ and $l_0$ chosen close enough to $0$ (notice that $(0,0)$ is a saddle point of the objective function, so we need to apply a small perturbation $\\alpha$ on the initial values).\nThis leads to the following algorithm:\n\\smallskip\n\n\\begin{algorithm}[H]\n\\KwData{Design matrix $X\\in\\mathbb R^{n\\times p}$,\\, measurement vector $y\\in\\mathbb R^n$, initialization magnitude $\\alpha$, step size $\\eta$, and stopping threshold $\\epsilon$;}\n Initialize variables $[g_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$, $[l_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$ for $j=1,\\ldots,p$, and iteration number $t=0$;\\\\\n \\While{$\\ \\|X(g_t\\circ l_t)-y\\|\/\\sqrt{n}>\\epsilon\\ $}{\n ${g}_{t+1}=g_t-\\eta \\ l_t \\circ \\big[n^{-1}\\,X^{T}\\big(X(g_t\\circ l_t)-y\\big)\\big]$;\\\\ \n $\\, {l}_{t+1}=l_t-\\eta \\ g_t \\circ \\big[n^{-1}\\,X^{T}\\big(X(g_t\\circ l_t)-y\\big)\\big]$; \\\\[0.2em]\n $\\, t=t+1$;\\\\\n}\n \\KwResult{Output the final estimate $\\widehat \\beta=g_t\\circ l_t$;\n }\\label{alg1}\n \\caption{Gradient Descent for linear regression}\n\\end{algorithm}\n\\smallskip\n\nAlgorithm~\\ref{alg1} is the standard gradient descent algorithm, and the iterates $(g_{t+1},l_{t+1})$ tend to converge to a stationary point $(g_\\infty, l_{\\infty})$ of $f(g,l)$ that satisfies the first order optimality condition $\\nabla f_g(g_\\infty,\\,l_\\infty) = 0$ and $\\nabla f_l(g_\\infty,\\,l_\\infty) = 0$. However, stationary points of $f(g,l)$ can be local minimum, local maximum, or saddle points (when the Hessian matrix $\\nabla^2_{g,l} f(g,l)$ contains both positive and negative eigenvalues).\nThe following result provides the optimization landscape of $f(g,l)$, showing that $f(g,l)$ does not have local maximum, all its local minimums are global minimum, and all saddle points are strict. The strict saddle points are saddle points with negative smallest eigenvalues for Hessian matrix.\n\n\\begin{lem}\\label{Lem:global_min}\n$f(g,l)=(2n)^{-1}\\|X(g\\circ l)-y\\|^2$ does not have local maximum, and all its local minimums are global minimum. In particular, $(\\bar g, \\bar l\\,)$ is a global minimum of $f(g,l)$ if and only if\n\\begin{align*}\nX^T\\big(X(\\bar g\\circ \\bar l)-y\\big) = 0.\n\\end{align*}\nIn addition, any saddle point $(g^\\dagger,l^\\dagger)$ of $f(g,l)$ is a strict saddle, that is,\n$\\lambda_{\\min}\\big(\\nabla^2_{g,l} f(g^\\dagger,l^\\dagger)\\big)<0$.\n\\end{lem}\n\n\\noindent According to the first order condition associated with $f(g,l)$\n\\begin{align*}\ng\\circ \\big[X^T\\big(X(g\\circ l)-y\\big)\\big]=l\\circ \\big[X^T\\big(X(g\\circ l)-y\\big)\\big] = 0,\n\\end{align*}\nthere could be exponentially many (at least $2^p-1$) saddle points as a solution to this equation, for example, for those $(g,l)$ satisfying\n\\begin{align*}\ng_A=l_A =0\\in\\mathbb R^{|A|}, \\qquad\\mbox{and}\\qquad \\big[X^T\\big(X(g\\circ l)-y\\big)\\big]_{A^c} = 0\\in \\mathbb R^{p-|A|},\n\\end{align*}\nfor any non-empty subset $A$ of $\\{1,\\ldots,p\\}$. Consequently, the gradient descent algorithm may converge to any of these bad saddle points. 
To see this, if we initialize $(g,l)$ in a way such that the components in the index set $A$ are zero, or $[g_0]_A=[l_0]_A=0$, then these components will remain zero forever in the gradient iterations, or $[g_t]_A=[l_t]_A=0$ for all $t>0$. Fortunately, the following result implies that as long as we use a random initialization for $(g,l)$ with continuous pdf over $\\mathbb R^{2p}$ as in Algorithm~\\ref{alg1}, then the gradient descent almost surely converges to a global minimum.\n\n\\begin{lem}\\label{Lem:GD_global_min}\nSuppose the step size $\\eta$ is sufficiently small. Then with probability one, Algorithm~\\ref{alg1} converges to a global minimum of $f(g,l)$.\n\\end{lem}\n\nIn the low-dimensional regime where the design matrix $X$ has full column rank, the solution $\\bar \\beta$ to the normal equation $X^T(X\\beta-y) = 0$ is unique, which is also the least squares estimator. Under this scenario, Lemma~\\ref{Lem:global_min} and Lemma~\\ref{Lem:GD_global_min} together certify that Algorithm~\\ref{alg1} will converge to this optimal least squares estimator. However, in the high-dimensional regime which is the main focus in the paper, the normal equation $X^T(X\\beta-y) = 0$ have infinitely many solutions, and it is not clear which solution the algorithm tends to converge to. For example, if we consider instead applying the gradient descent to the original parameter $\\beta$ in the objective function $(2n)^{-1}\\|X\\beta-y\\|^2$ with initialization $\\beta_0=0$, then the iterates will converge to the minimal $\\ell_2$-norm solution of the normal equation (see below for details). Interestingly, as we will illustrate in the following, under the Hadamard parametrization the gradient descent Algorithm~\\ref{alg1} now tends to converge to the minimal $\\ell_1$-norm solution under certain conditions for initialization, thereby inducing sparsity and naturally facilitating variable selection.\n\n\n\n\\subsection{Gradient descent converges to sparse solution}\\label{Sec:Heuristic}\nIn this subsection, we provide two different perspectives for understanding the following informal statement on the behavior of simple gradient descent for the loss function $f(g,\\,l)$ defined in~\\eqref{eq_opt} under the Hadamard product parameterization $\\beta =g\\circ l$. For simplicity, we assume that the response $y$ in the linear model is perfect, that is, the noise variance $\\sigma^2=0$, throughout this subsection.\n Then in the next subsection, we turn to general noisy observations, and propose methods that lead to optimal estimation when the true regression coefficient $\\beta^\\ast$ is sparse. \n\n\\paragraph{Informal Statement:} \\emph{If we initialize the algorithm to be arbitrarily close to $g=l=0$, then under suitable conditions on design $X$, a simple gradient descent converges to a solution of basis pursuit problem:\n\t\\begin{equation}\\label{bs}\n\t\\min_{\\beta\\in\\mathbb R^p}{\\|\\beta\\|_1} \\quad \\mbox{subject to} \\quad X\\beta =y.\n\t\\end{equation}}\nOur first perspective is based on the $\\ell_2$-norm implicit regularization in linear system, and the second is based on analyzing the gradient dynamical system as the limit of the gradient descent algorithm as the step size $\\eta\\to 0_{+}$. 
However, both perspectives in this section are heuristic, and formal statements and proofs (based on a different strategy) will be provided in Section~\\ref{Sec:Theory}.\n\n\\paragraph{$\\ell_2$-norm implicit regularization perspective:} Consider the under-determined system $X\\beta=y$, where $X\\in\\mathbb R^{n\\times p}$ has full row rank.\nOur first intuition comes from the fact that a zero-initialized gradient descent algorithm over $\\beta\\in\\mathbb R^p$ for solving \n\\begin{align*}\n\\min_{\\beta\\in\\mathbb R^p} \\frac{1}{2n}\\,\\|X\\beta - y\\|^2:\\,= h(\\beta)\n\\end{align*}\nfinds a minimal $\\ell_2$-norm solution to $X\\beta=y$. \n\nIn fact, we know that any solution to the linear system $X\\beta=y$ takes the form of\n\\begin{align*}\n\\beta = X^{+} y + [I - X^+X]w,\\quad \\mbox{over all }w\\in\\mathbb R^p,\n\\end{align*}\nwhere $X^+=\\lim_{\\lambda\\to 0_+}(X^TX+\\lambda\\, I)^{-1}X^T$ is the Moore-Penrose inverse of $X$. Since $X(I-X^+X)=0$, we have \n\\begin{align*}\n\\|\\beta\\|^2 = \\|X^{+} y\\|^2 + \\| [I - X^+X]w\\|^2 \\geq \\|X^{+} y\\|^2,\n\\end{align*}\nimplying that $X^{+}y$ is the unique solution of $X\\beta=y$ in the column space of $X^T$, which is also the minimal $\\ell_2$-norm solution. \nNow, the gradient updating formula for $\\beta$, $\\beta_{t+1}=\\beta_t-\\eta X^T(X\\beta_t-y)\/n$, implies that the difference $\\beta_t-\\beta_0$ always lies in the column span of $X^T$. Let $\\beta_\\infty:\\,=\\lim_{t\\to\\infty}\\beta_t$ be the limiting point of the gradient algorithm. Then $\\beta_\\infty$ must be a solution to $X\\beta=y$. On the other hand, when $\\beta_0$ is initialized at zero, $\\beta_\\infty$ should also belong to the column span of $X^T$. These two properties combined imply that $\\beta_\\infty$ must be the minimal $\\ell_2$-norm solution $X^{+}y$.\n\nIn high dimensional linear regression with perfect observations, a popular class of penalization methods attempts to find the minimal $\\ell_1$-norm solution to $X\\beta=y$. \nUnder the Hadamard product parameterization $\\beta =g\\circ l$, the fact that gradient descent tends to find the minimal $\\ell_2$-norm solution suggests (this is not rigorous) that the gradient descent algorithm for jointly minimizing $f(g,\\,l)$ over $(g,\\,l)$ tends to converge to a solution to $X(g\\circ l) =y$ with a minimal $\\ell_2$-norm $\\sqrt{\\|g\\|^2+\\|l\\|^2}$. However, a minimal $\\ell_2$-norm solution to $X(g\\circ l) =y$ must satisfy $|g_j|=|l_j|$ for each $j=1,\\ldots,p$ (otherwise we can always construct another solution with strictly smaller $\\ell_2$-norm), which implies $\\|g\\|^2+\\|l\\|^2 = 2\\, \\|g\\circ l\\|_1=2\\,\\|\\beta\\|_1$, so that minimizing $\\sqrt{\\|g\\|^2+\\|l\\|^2}$ over such solutions is equivalent to minimizing $\\|\\beta\\|_1$. As a consequence, $\\beta_{\\infty}=g_{\\infty}\\circ l_{\\infty}$ should be the minimal $\\ell_1$-norm solution to $X\\beta = y$.\n\nAnother way to understand the difference in the evolutions of gradient descents for $f(g,l)$ and $h(\\beta)$ is by noticing that the gradient $\\nabla_{g_j} f(g,l) = l_j\\cdot \\nabla_{\\beta_j} h(\\beta)\\big|_{\\beta =g\\circ l}$ in the new parametrization, for each $j=1,\\ldots,p$, has an extra multiplicative factor of $l_j$ compared to the gradient $\\nabla_{\\beta_j} h(\\beta)$ in the usual least squares problem of minimizing $h(\\beta)$. It is precisely this extra multiplicative factor $l_j$ that helps select important signals (nonzero regression coefficients) and prevents unimportant signals (zero regression coefficients) from growing too fast at the early stage of the evolution when both $g$ and $l$ are close to zero. 
Precisely, as we will show in our theory part (Section~\\ref{Sec:Theory}), under suitable conditions on the model, all unimportant signals remain to stay in a $\\mathcal{O}(p^{-1})$ neighbourhood of zero, while important ones tend to grow exponentially fast. \n\n\n\\paragraph{Gradient dynamical system perspective:} Our second perspective comes from considering the limiting gradient dynamical system of the problem (i.e.~gradient descent with an infinitesimally small step size), which is motivated by the interpretation for matrix factorization problems in~\\cite{gunasekar2017implicit} and \\cite{pmlr-v80-gunasekar18a}. In particular, the behavior of this limiting dynamical system is captured by the ordinary differential equations\n\\begin{equation}\\label{de}\n\\begin{cases} \n\\ \\dot{g}(t)=-\\big[ X^{T}r(t)\\big] \\circ l(t),\\\\ \n\\ \\, \\dot{l}(t)=-\\big[X^{T}r(t)\\big] \\circ g(t),\n\\end{cases}\\quad\\mbox{with initialization}\\quad \n\\begin{cases} \n\\ g(0)=\\alpha\\mathbf 1,\\\\ \n\\ \\, l(0)= 0,\n\\end{cases}\n\\end{equation}\nwhere $r(t)=n^{-1}\\big[X(g(t)\\circ l(t))-y\\big]\\in\\mathbb R^p$, and for simplicity we fixed the initialization. To emphasize the dependence of the solution on $\\alpha$, we instead write $g(t),\\,l(t),\\,r(t)$ as $g(t,\\alpha), \\,l(t,\\alpha),\\,r(t,\\alpha)$.\nFor illustration purposes, we assume that the limiting point of this system is continuous and bounded as the initialization value $\\alpha\\to 0_+$, that is, both limits $g_\\infty=\\lim_{t\\to\\infty, \\alpha\\to 0_+}g(t,\\alpha)$ and $l_\\infty=\\lim_{t\\to\\infty, \\alpha\\to 0_+}l(t,\\alpha)$ exist in $\\mathbb R^p$ and are finite. \n\nLet $s(t,\\alpha)=\\int_0^t r(\\tau,\\alpha) d\\tau\\in\\mathbb R^p$, then simple calculation leads to the relation\n\\begin{equation*}\n\\left[ \\begin{array}{c} g_j(t,\\alpha)+l_j(t,\\alpha) \\\\[0.3em] g_j(t,\\alpha)-l_j(t,\\alpha) \\end{array} \\right] \n= \\alpha\\,\\left[ \\begin{array}{c}\\exp(-X_j^{T}s(t,\\alpha)) \\\\[0.3em] \\exp(X_j^{T}s(t,\\alpha)) \\end{array} \\right],\\quad\\mbox{for each }j=1,\\ldots,p.\n\\end{equation*}\nUnder the aforementioned assumption on the existence of limits as $t\\to\\infty$ and $\\alpha\\to 0_+$, the preceding display implies one of the following three situations for each $j$:\n\\begin{enumerate}\n\t\\item[Case 1:] $g_{j,\\infty}=l_{j,\\infty}\\neq 0$, and \n\t$\\displaystyle \\lim_{t\\to\\infty,\\alpha\\to 0_+}X_j^{T}s(t,\\alpha)\/\\log(1\/\\alpha)= 1$.\n\t\\item[Case 2:] $g_{j,\\infty}=-l_{j,\\infty}\\neq 0$, and \n\t$\\displaystyle \\lim_{t\\to\\infty,\\alpha\\to 0_+}X_j^{T} s(t,\\alpha)\/\\log(1\/\\alpha)= -1$.\n\t\\item[Case 3:] $g_{j,\\infty}=l_{j,\\infty}=0$, and \n\t$\\displaystyle \\lim_{t\\to\\infty,\\alpha\\to 0_+}X_j^{T} s(t,\\alpha)\/\\log(1\/\\alpha) =\\gamma_j\\in[-1,1]$.\n\\end{enumerate}\nDenote $s_\\infty$ as the limit $\\lim_{t\\to\\infty,\\alpha\\to 0_+} s(t,\\alpha)\/\\log(1\/\\alpha)$. Recall $\\beta_\\infty = g_\\infty\\circ l_\\infty$, and the previous three cases can be unified into\n\\begin{equation*}\nX_j^{T}s_\\infty=\n\\begin{cases} \n\\mbox{sign}(\\beta_{j,\\infty}), & \\mbox{if}\\ \\beta_{j,\\infty}\\neq 0, \\\\ \n\\gamma_j\\in [-1,1], & \\mbox{if}\\ \\beta_{j,\\infty}= 0,\n\\end{cases}\\quad\\mbox{for each }j=1,\\ldots,p.\n\\end{equation*}\nThis identity together with the limiting point condition $X\\beta_{\\infty}=y$\ncoincides with the KKT condition for the basis pursuit problem~\\eqref{bs}.\n\n\n\nAgain, this argument is based on the hard-to-check solution continuity assumption. 
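\nBoth heuristic perspectives above can be checked numerically on a small noiseless example. The following numpy sketch is illustrative only (the dimensions, seed and step sizes are arbitrary choices and are not the settings of Section~\\ref{sec:simu}): it runs plain gradient descent on $\\beta$ from a zero initialization, and gradient descent on the Hadamard parametrization $(g,l)$ from the near-zero initialization $g(0)=\\alpha\\mathbf 1$, $l(0)=0$ used in the display above, on the same noiseless data.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, p = 40, 100\nX = rng.standard_normal((n, p))\nbeta_star = np.zeros(p)\nbeta_star[:3] = [1.0, -2.0, 1.5]           # sparse truth\ny = X @ beta_star                          # noiseless responses\n\ndef grad(b):                               # gradient of (2n)^{-1}||Xb - y||^2\n    return X.T @ (X @ b - y) \/ n\n\nbeta = np.zeros(p)                         # (i) plain GD on beta, zero init\nfor _ in range(50000):\n    beta -= 0.1 * grad(beta)\n\nalpha, eta = 1e-6, 0.01                    # (ii) GD on beta = g * l, tiny init\ng, l = alpha * np.ones(p), np.zeros(p)\nfor _ in range(50000):\n    r = grad(g * l)\n    g, l = g - eta * l * r, l - eta * g * r\n\nfor name, b in [('pinv (min l2-norm)', np.linalg.pinv(X) @ y),\n                ('GD on beta', beta),\n                ('GD on (g, l)', g * l),\n                ('truth', beta_star)]:\n    print(name, round(np.abs(b).sum(), 3), round(np.linalg.norm(X @ b - y), 6))\n\\end{verbatim}\nOn typical draws we would expect the first two rows to agree (the zero-initialized run recovers $X^{+}y$), while the $(g,l)$ run returns a vector with a much smaller $\\ell_1$ norm, close to $\\|\\beta^\\ast\\|_1$, consistent with the informal statement above.\n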
In the next section, we provide a formal proof of the result without making this assumption, albeit under a somewhat strong RIP condition on $X$. We conjecture this gradient descent implicit regularization property to be generally true for a much wider class of design matrices (see our simulation section for numerical evidence).\n\n\n\n\\subsection{Gradient descent with early stopping}\nIn this subsection, we consider the general case where the response $y$ contains noise, or $\\sigma^2\\neq 0$. In particular, we propose applying early stopping, a widely used implicit regularization technique \\citep{zhang2005boosting,raskutti2014early}, to the gradient descent Algorithm~\\ref{alg1}. As the name suggests, we use a certain criterion to decide when to terminate the gradient descent updates in order to prevent overfitting of the data. Algorithm~\\ref{alg2} below summarizes a particular stopping criterion, widely used in the machine learning community, based on validation. In principle, we can also treat the number of iterations as a hyperparameter and repeat this procedure multiple times in the same spirit as data splitting and cross validation.\n\n\\smallskip\n\n\\begin{algorithm}[H]\n\t\\KwData{Training design matrix $X\\in\\mathbb R^{n\\times p}$,\\, measurement vector $y\\in\\mathbb R^n$, validation data $X'$, $y'$, initialization magnitude $\\alpha$, step size $\\eta$, and maximal number of iterations $T_{max}$;}\n\tInitialize variables $[g_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$, $[l_0]_j\\overset{iid}{\\sim}\\mbox{Unif}(-\\alpha,\\alpha)$ for $j=1,\\ldots,p$, and iteration number $t=0$;\\\\\n\t\\While{$t<T_{max}$}{\n\t${g}_{t+1}=g_t-\\eta \\ l_t \\circ \\big[n^{-1}\\,X^{T}\\big(X(g_t\\circ l_t)-y\\big)\\big]$;\\\\\n\t${l}_{t+1}=l_t-\\eta \\ g_t \\circ \\big[n^{-1}\\,X^{T}\\big(X(g_t\\circ l_t)-y\\big)\\big]$;\\\\\n\t$t=t+1$;\\\\\n\t}\n\t\\KwResult{Choose $\\tilde t$ as the first iteration such that $\\|X'(g_{{\\tilde t}}\\circ l_{{\\tilde t}})-y'\\| < \\|X'(g_{{\\tilde t}+1}\\circ l_{{\\tilde t}+1})-y'\\|$, or such that $\\|X'(g_{{\\tilde t}}\\circ l_{{\\tilde t}})-y'\\|$ is minimized over all iterations, then output the final estimate $\\widehat \\beta=g_{{\\tilde t}}\\circ l_{{\\tilde t}}$.\n\t}\\label{alg2}\n\t\\caption{Gradient Descent for Linear Regression with Validation Data} \n\\end{algorithm} \n\n\\smallskip\n\nRecall that in the introduction, we discussed the saturation issue suffered by explicit penalized methods such as the Lasso. Now we turn to our method and illustrate that it is unaffected by, or at least suffers less from, the saturation issue. In the next section, we will provide rigorous theory showing that our method can achieve a faster $\\sqrt{s\/n}$ rate of convergence than the typical $\\sqrt{s\\log p\/n}$ rate when all nonzero signals are relatively large.\n\nDue to the connection of our method with the basis pursuit problem~\\eqref{bs}, one may naturally think that our method in the noisy case should be equivalent to a basis pursuit denoising problem:\n\\begin{equation}\\label{bsdn}\n\\min \\|\\beta\\|_1 \\quad \\mbox{subject to} \\quad \\|X \\beta -y\\|_2 \\leq \\epsilon,\n\\end{equation}\nwith some error tolerance level $\\epsilon$ depending on the stopping criterion, and would therefore be equivalent to the Lasso. Surprisingly, a simulation example below shows that the iterate path of the gradient descent Algorithm~\\ref{alg1} contains estimates with much smaller error than the Lasso. Precisely, we adopt the simulation setting S2 in Section~\\ref{sec:simu}. As comparisons, we also report the Lasso solution path (as a function of the regularization parameter $\\lambda$) solved by ISTA and FISTA \\citep{beck2009fast}. For our gradient descent algorithm, we set $\\alpha = 10^{-5}$ in the random initialization. 
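\nFor concreteness, a minimal numpy sketch of the validation-based early stopping in Algorithm~\\ref{alg2} is given below. It is only meant to illustrate the procedure (the function name and default values are ours, and it keeps the iterate with the smallest validation error rather than stopping at the first increase; both variants are discussed below); it is not the exact implementation used for the experiments reported here.\n\\begin{verbatim}\nimport numpy as np\n\ndef gd_hadamard_val(X, y, X_val, y_val, alpha=1e-5, eta=0.1,\n                    T_max=5000, seed=0):\n    # Gradient descent on f(g,l) = ||X(g*l) - y||^2 \/ (2n), returning the\n    # iterate with the smallest error on the validation data (X_val, y_val).\n    n, p = X.shape\n    rng = np.random.default_rng(seed)\n    g = rng.uniform(-alpha, alpha, p)\n    l = rng.uniform(-alpha, alpha, p)\n    best_err, best_beta = np.inf, g * l\n    for t in range(T_max):\n        r = X.T @ (X @ (g * l) - y) \/ n          # shared gradient factor\n        g, l = g - eta * l * r, l - eta * g * r  # simultaneous updates\n        beta_t = g * l\n        err = np.linalg.norm(X_val @ beta_t - y_val)\n        if err < best_err:\n            best_err, best_beta = err, beta_t\n    return best_beta\n\\end{verbatim}\n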
As shown in figure~\\ref{fig:31}, when the iteration number is around $1000$, even though the prediction errors in panel~(c) of our algorithm and of the Lasso (with an optimally tuned $\\lambda$; see panel~(b) for the entire Lasso solution path) are comparable, the estimation error in panel~(a) of our method is significantly lower than that of the Lasso, illustrating the occurrence of the saturation phenomenon of the Lasso. Moreover, the stabilized region (iterations $200$--$1000$) of our method (GD) in panel~(a) is relatively wide, and therefore the performance tends to be robust to the stopping criterion. \n\n\n\n\\begin{figure}[H]\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{estimation2.jpg}\n\t\t\\caption{Estimation error vs Iteration}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{lassopath.jpg}\n\t\t\\caption{Estimation error vs Regularization for Lasso}\n\t\\end{subfigure} \n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{prediction2.jpg}\n\t\t\\caption{Prediction error vs Iteration}\n\t\t\\label{fig:32}\n\t\\end{subfigure} \n\t\\caption{Panel (a) is a log-log plot of standardized estimation error $\\|\\widehat \\beta-\\beta^\\ast\\|^2_2\/\\|\\beta^\\ast\\|^2_2$ versus iteration number $t$. Panel (b) is a log-log plot of standardized estimation error versus regularization parameter $\\lambda$ for the Lasso. Panel (c) is a log-log plot of mean prediction error $\\sqrt{\\|\\widehat y-y\\|^2_2\/n}$ versus iteration number $t$.}\\label{fig:31}\n\t\\vspace{-0.7em}\n\\end{figure}\n\n\nNext, let us briefly illustrate why implicit regularization with early stopping works, while explicitly regularized methods may fail.\nWe know that early stopping, serving as algorithmic regularization, is based on the intuition that as the iteration number grows, the bias keeps decreasing while the variance keeps increasing. Consequently, the iteration number $T$, acting as an implicit regularization parameter, aims to optimally balance between the bias and the variance, akin to the bias-variance trade-off. In our parametrization, the iteration number $T$ controls the $\\ell_1$ norm of the solution, reflecting the model space complexity. To see this, we plot the $\\ell_1$ norm versus the iteration number, and also the estimation errors versus the $\\ell_1$ norm, all in logarithmic scales, in figure~\\ref{fig:33}. As expected, as the number of iterations increases, the $\\ell_1$ norm of the iterate also increases. When the logarithm of the iteration number is within $(2.3,3)$, the $\\ell_1$ norm of the estimated coefficients tends to stabilize at the $\\ell_1$ norm of the true $\\beta^\\ast$, which is $0.9$, corresponding to the most accurate estimation region in panel~(a) of figure~\\ref{fig:31}. In contrast, as we can see from panel~(b) of figure~\\ref{fig:33}, the estimation error is very sensitive in the regularization parameter domain --- the region corresponding to the highest estimation accuracy is very narrow, and a small change in the $\\ell_1$ norm of the solution leads to a drastic deterioration in the estimation accuracy. 
This numerical example provides evidences of why explicit regularized approaches may suffer from large bias and low accuracy.\n\n\n\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.48\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{L1IN.jpg}\n\t\t\\caption{$\\ell_1$ norm vs Iteration}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.48\\textwidth}\n\t\t\\includegraphics[width=\\linewidth]{EEL1.jpg}\n\t\t\\caption{Prediction error vs Iteration Comparison}\n\t\\end{subfigure} \n\t\\caption{Panel (a) is a log-log plot of $\\ell_1$ norm of the estimated coefficients versus iteration number $t$. Panel (b) is a log-log plot of standardized estimation error versus $\\ell_1$ norm of the estimated coefficients.} \\label{fig:33}\n\t\\vspace{-0.7em}\n\\end{figure}\n\nFinally, we discuss several commonly used early stopping rules by treating the iteration number as a tuning parameter. \n\n\n\\paragraph{Hold-out or cross validation:} The simplest method is to use hold-out data as validation: for example, we can split the training data into half-half, and then run gradient descent on the first half $D_1$ of the data while calculate the prediction error $R(t)=\\sum_{i\\in D_2}({X^{(i)}}^T (g_t\\circ l_t)-y_i)^2$ for all $t \\leq T_{max} $ on the second half $D_2$, then the final iteration number is decided by (cf.~Algorithm~\\ref{alg2}):\n\t\\begin{equation}\\label{cv}\n\t\\tilde t : = \\arg \\min \\{ t \\leq T_{max} \\,|\\, R ( t+1 ) > R (t) \\} \\quad \\mbox{or}\n\t\\end{equation} \n\t\\begin{equation}\\label{cv2}\n\t\\tilde t : = \\arg \\min \\{ t \\leq T_{max} \\,|\\, R ( t ) = \\min_{\\tau \\leq T_{max}} R (\\tau) \\}. \\quad \n\t\\end{equation} \n\tTo make use of the whole dataset, we can perform cross validation: first split data into $K$ folds, then apply gradient descent on all possible combinations of $K-1$ folds without replacement and evaluate at the corresponding rest $1$ folds. The final risk $R(t)$ can be the sum of all the evaluations on each fold, and the criterion~\\eqref{cv} or~\\eqref{cv2} can be used to obtain the iteration number. Finally we can apply the same iteration number obtained from cross validation to approximate the optimal one for the entire training data.\n\\paragraph{Stein's unbiased risk estimate (SURE):} \\cite{stein1981estimation} suggested the use of degrees of freedom as surrogate to the true prediction error given the standard derivation $\\sigma$. Under our settings, ignoring the second order term of step size $\\eta$, the updating of the prediction error (up to rescaling) $r_t=n^{-1}\\big[X(g_t\\circ l_t) - y\\big]\\in\\mathbb R^n$ through gradient descent can be approximated by (by ignoring second order terms of order $\\eta^2$):\n\t\\begin{equation}\n\tr_{t+1}\\approx [I - 2 \\eta n^{-1} X \\mbox{diag}(|g_t\\circ l_t|) X^T] \\,r_t, \n\t\\end{equation} \n\twhere for a vector $u$, diag$(u)$ denotes the diagonal matrix with components of $u$ in the its diagonals. Define $S_t= \\Pi_{i=1}^{t-1} (I - 2 \\eta n^{-1} X \\mbox{diag}(|g_t\\circ l_t|) X^T)$, then the estimated degrees of freedom at time $t$ can be approximated by $n-\\mbox{trace}(S_t)$. Consequently, the risk based on the $C_p$-type statistic \\citep{efron2004estimation} is\n\t\\begin{equation}\n\tR(t) = \\frac { \\| r_t \\| ^ { 2 } } { n } + \\Big(2-\\frac { 2 \\mbox{trace}(S_t) } { n } \\Big)\\sigma ^ { 2 }.\n\t\\end{equation} \n\tThe total iteration number as a tunign parameter can then be selected by minimizing $R(t)$ in equation~\\eqref{cv} or~\\eqref{cv2} . 
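\nTo make the SURE-type rule concrete, the following sketch tracks the approximate degrees of freedom along the gradient descent path and evaluates the $C_p$-type risk. It is a rough illustration only: the function name is ours, the residual is taken as $X(g_t\\circ l_t)-y$ (i.e., without the additional $n^{-1}$ rescaling used for $r_t$ above), and the cost of forming $S_t$, of order $n^2p$ per iteration, makes this practical only for moderate $n$.\n\\begin{verbatim}\nimport numpy as np\n\ndef cp_risk_path(X, y, sigma, alpha=1e-5, eta=0.1, T_max=2000, seed=0):\n    # Track R(t) = ||X(g_t*l_t) - y||^2 \/ n + (2 - 2 tr(S_t) \/ n) * sigma^2,\n    # where S_t is the product of (I - 2 eta n^{-1} X diag(|g_i*l_i|) X^T)\n    # over past iterates, following the first-order approximation above.\n    n, p = X.shape\n    rng = np.random.default_rng(seed)\n    g = rng.uniform(-alpha, alpha, p)\n    l = rng.uniform(-alpha, alpha, p)\n    S, risks = np.eye(n), []\n    for t in range(T_max):\n        beta_t = g * l\n        resid = X @ beta_t - y\n        risks.append(resid @ resid \/ n + (2 - 2 * np.trace(S) \/ n) * sigma**2)\n        S = (np.eye(n) - 2 * eta \/ n * (X * np.abs(beta_t)) @ X.T) @ S\n        grad = X.T @ resid \/ n\n        g, l = g - eta * l * grad, l - eta * g * grad\n    return np.array(risks)\n\\end{verbatim}\nThe selected iteration number is then a minimizer of the returned risk path, in the same way as in~\\eqref{cv} or~\\eqref{cv2}.\n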
In practice, we can use the plug-in estimator $\\hat{\\sigma}$ to replace the unknown $\\sigma$ in $R(t)$. According to our simulation studies (for example, see figure~\\ref{fig:es}), early stopping based on SURE generally does not work as well as the hold-out or cross validation method.\n\t\n\\paragraph{Measure of model complexity:} \\cite{raskutti2014early} proposed an early stopping rule based on the local empirical Rademacher complexity of the reproducing kernel Hilbert space. However, their method cannot be directly applied in our case: their stopping rule is based on the eigenvalues of the empirical kernel matrix, which in our setting is $ n^{-1} X \\mbox{diag}(|g_t\\circ l_t|) X^T$ and keeps being updated throughout the iterations, so their stopping rule is not directly applicable.\n\t\n\t\\smallskip \n\tAt the end of this subsection, we adopt the simulation framework S1-S4 (only changing the standard deviation to $\\sigma=0.1$) in Section~\\ref{sec:simu} to compare different early stopping criteria. We record the mean estimation errors averaged over $50$ trials and report the errors in figure~\\ref{fig:es}. From the figure, we can see that cross-validation tends to be more robust than SURE. \nTherefore, we recommend using hold-out or cross validation to select the iteration number, and will stick to this method in the rest of the paper.\n\n\t\\begin{figure}[H]\n\t\t\\centering\n\t\t\t\\includegraphics[scale=0.7]{ESrule.jpeg}\n\t\t\\caption{Comparison of the estimation errors for different early stopping rules. `Oracle' stands for the optimal early stopping rule with knowledge of the truth; `CV' stands for early stopping through $5$-fold cross validation.} \\label{fig:es}\n\t\t\\vspace{-0.7em}\n\t\\end{figure}\n\t\n\n\\subsection{Adaptive step size and variable selection} \\label{sec:2.5}\n\nA natural extension of the gradient descent Algorithm~\\ref{alg1} is to assign different weights (step sizes) to different coordinates of $\\beta$, which is related to the adaptive Lasso \\citep{zou2006adaptive}. This can be seen from the differential equation interpretation: by inserting a constant weighting matrix $D(\\Omega)=\\mbox{diag}(\\omega_1,...,\\omega_p)$ into the equation~\\eqref{de}, we obtain the limiting dynamical system\n\\begin{equation*}\n\\begin{cases} \n\\ \\dot{g}(t)=-\\big[D(\\Omega) X^{T}r(t)\\big] \\circ l(t),\\\\ \n\\ \\, \\dot{l}(t)=-\\big[D(\\Omega)X^{T}r(t)\\big] \\circ g(t).\n\\end{cases}\n\\end{equation*}\nBased on a similar heuristic analysis as in Section~\\ref{Sec:Heuristic} for the noiseless case, the limiting point of the dynamical system satisfies\n\\begin{equation*}\nX_j^{T}s_\\infty=\n\\begin{cases} \n\\mbox{sign}(\\beta_{j,\\infty}) \/\\omega_j, & \\mbox{if}\\ \\beta_{j,\\infty}\\neq 0, \\\\ \n\\gamma_j\\in [-\\frac{1}{\\omega_j},\\frac{1}{\\omega_j}], & \\mbox{if}\\ \\beta_{j,\\infty}= 0,\n\\end{cases}\\quad\\mbox{for each }j=1,\\ldots,p,\n\\end{equation*}\nwhich is the KKT condition for the dual form of the adaptive Lasso \n\\begin{align*}\n\\min_{\\beta\\in\\mathbb R^p}\\sum_{j=1}^p\\frac{|\\beta_j|}{\\omega_j} \\qquad\\mbox{subject to } X\\beta =y.\n\\end{align*}\nIn the limiting case when the step size $\\omega_j$ of a particular component $\\beta_j$ tends to $0$, we are equivalently adding an infinite penalty whenever $\\beta_j\\neq 0$. 
In contrast, if we apply a larger step size $\\omega_j$ to $\\beta_j$, then $\\beta_j=g_j\\circ l_j$ tend to move faster and more freely in the parameter space, which is equivalent to a smaller penalty on $\\beta_j$.\nThe original paper in \\cite{zou2006adaptive} constructed the weights based on the ordinary least square solution, which requires $n\\geq p$. In practice when $p>n$, we can construct weights through a preprocessing step. For example, variable screening can be applied to introduce sparse weights.\n\n\nTo enable variable selection in our method, we can perform a component-wise hard thresholding operation to the final estimator $\\hat{\\beta}=g_{{\\tilde t}}\\circ l_{{\\tilde t}}$. Based on our theoretical analysis, since our method tries to shrink both weak signals and errors into very small order $p^{-2}$, it is more robust to false detections than other explicit regularizations when the same tuning parameter for noise level is applied. Let us consider a simple example to illustrate the basic idea: we set $n=10$, $p=20$, $X_{ij} \\overset{iid}{\\sim} \\mathbb{ N }(0,1)$ for $i=1,2,...,n$ and $j=1,2,...,p$, $\\beta^*_1=0.5 \\sigma\\sqrt{\\log p\/n}$, $\\beta^*_2=5 \\sigma \\sqrt{\\log p\/n}$, and all other components are zeros in the data generating model $y=X\\beta^* +w$ with $w \\sim \\mathbb{ N }(0,I)$. Since the strength of the first components of truth is weak, it is hard to be detected by all methods we have tried. However, the effect of the weak signals on $y$ still pertains. In particular, when applying the cross validation, traditional penalized methods tends to over-select the predictors, leading a lot of false discoveries. In comparison, due to the implicit regularization our method tend to be more robust to the weak signals---our estimate is typically non-sparse, the effect of the non-detected weak signals can be distributed to all components of the estimated vector, and no component is particularly spiked. As a consequence, our method tends to be more robust to false discoveries after applying the hard thresholding. The variable selection results are shown in figure~\\ref{fi:vsl}. As we can see, the Lasso can not detect the weak signal, and two components, indexed by $6,19$, appear to be false detected through cross validation (note that in Lasso, a soft thresholding has already been applied). In contrast, in our method most unimportant signals remains small. Performing a hard thresholding with the same regularization parameter selected by the Lasso can erase all false detections, leading to the selection of strong signal only. \n\n\\begin{figure}[t]\n\t\\includegraphics[width=\\linewidth]{selection.jpg}\n\t\\caption{The values versus index for truth $\\beta^*$, $\\beta$ estimator through lasso by minimizing the cross validation error in prediction, $\\beta$ estimator through gradient descent by minimizing the cross validation error in prediction and $\\beta$ estimator through `post estimation' selection for gradient descent.} \\label{fi:vsl}\n\t\\vspace{-0.7em}\n\\end{figure}\n\n\n\n\\subsection{Related literatures}\n\\cite{li2018algorithmic} studies the theory for implicit regularization in matrix sensing, which requires the data to be perfect measured and has different geometric structures as linear regression. \\cite{hoff2017lasso} considers the Hadamard product parametrization to optimize the parameters in high-dimensional linear regression. 
However, his method is computationally motivated: the Hadamard parametrization is used there to reformulate the non-smooth Lasso optimization problem into a smooth one.\nIn particular, the objective function there involves an $\\ell_2$ penalty on $(g,l)$ (which is equivalent to the $\\ell_1$ penalty on $\\beta$), and the solution is precisely the Lasso solution. \n\n\n\n\\section{Theoretical Analysis}\\label{Sec:Theory}\nIn this section, we provide formal statements characterizing the behavior of the gradient descent algorithm for minimizing the Hadamard product parametrized quadratic loss $f(g,l)$ as defined in \\eqref{eq_opt}. We start with a description of our assumptions. Then we turn to the case of non-negative parameters, where a simpler parametrization $\\beta=u\\circ u$ can be applied, as a warm-up to convey the main proof ideas. Finally, we consider the general signal case and illustrate when the fast parametric root-$n$ rate independent of the dimension can be achieved. All proofs are deferred to the appendix in the supplementary material of the paper.\n\n\\subsection{Assumptions}\nRecall that the underlying true data generating model is $y=X\\beta^*+w$ with $w \\sim \\mathcal{N}(0,\\sigma^2 I)$, where the true parameter $\\beta^\\ast$ is $s$-sparse.\nWithin the $s$ nonzero signal components of $\\beta^\\ast$, we define the index set of strong signals as $S_1=\\{i\\in S: |\\beta^*_i| \\geq 2\\sigma \\log p \\sqrt{\\frac{\\log p}{n}}\\}$ and that of weak signals as $S_2=\\{i\\in S: |\\beta^*_i| \\leq 2\\sigma \\sqrt{\\frac{\\log p}{n}}\\}$, where $|S_1|=s_1$, $|S_2|=s_2$. According to the information-theoretic limits from \\cite{wainwright2009information}, weak signals of order $\\sigma \\sqrt{\\log p\/n}$ in sparse linear regression are generally impossible to jointly recover or select (but they can be detected in terms of the type I\/II error control in hypothesis testing, e.g.~see \\cite{jin2016rare}). Therefore, our primary focus is the estimation and selection of strong signals.\nWe use the notation $\\theta_{s_1}(\\beta)$ to denote the $s_1$-th largest absolute component value of $\\beta$, and let $m=\\theta_{s_1}(\\beta^\\ast)$, which reflects the minimal strength of the strong signals. We also use $\\kappa$ to denote the strong-signal condition number, defined as the ratio of the largest absolute signal value to the smallest strong signal value. \nWe will also make use of the notion of the Restricted Isometry Property (RIP, \\cite{candes2008restricted}), which is a commonly used assumption (e.g. \\cite{candes2007dantzig}) in the high dimensional linear regression literature.\n\n\\begin{defin}[Restricted Isometry Property]\n\tA matrix $X\\in \\mathbb{R}^{n\\times p}$ is said to satisfy the $(s,\\delta)$-Restricted Isometry Property (RIP) if for any $s$-sparse vector $u$ in $\\mathbb{R}^{p} $, we have\n\t\\begin{equation*}\n\t(1-\\delta)\\|u\\|^2 \\leq \\frac{1}{n}\\,\\|Xu\\|^2 \\leq (1+\\delta)\\|u\\|^2.\n\t\\end{equation*}\n\\end{defin}\n\\noindent As an easy consequence, if matrix $X$ satisfies the $(2s,\\delta)$-RIP, then the Euclidean inner-product is also approximately preserved, that is, $\\big|n^{-1}\\,\\langle Xu, Xv\\rangle - \\langle u,\\,v\\rangle \\big| \\leq \\delta\\,\\|u\\|\\cdot\\|v\\|$ holds for any two $s$-sparse vectors $u,\\,v\\in\\mathbb R^p$. 
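\n\\noindent For completeness, we sketch the standard argument behind this last consequence. For unit-norm $s$-sparse vectors $u$ and $v$, both $u+v$ and $u-v$ are $2s$-sparse, so the polarization identity combined with the $(2s,\\delta)$-RIP gives\n\\begin{align*}\n\\Big|\\frac{1}{n}\\langle Xu, Xv\\rangle - \\langle u,\\,v\\rangle\\Big|\n&= \\frac{1}{4}\\Big|\\Big(\\frac{1}{n}\\|X(u+v)\\|^2 - \\|u+v\\|^2\\Big)-\\Big(\\frac{1}{n}\\|X(u-v)\\|^2 - \\|u-v\\|^2\\Big)\\Big|\\\\\\\\\n&\\leq \\frac{\\delta}{4}\\big(\\|u+v\\|^2+\\|u-v\\|^2\\big)=\\delta,\n\\end{align*}\nsince $\\|u+v\\|^2+\\|u-v\\|^2=2\\|u\\|^2+2\\|v\\|^2=4$. The general bound $\\delta\\,\\|u\\|\\cdot\\|v\\|$ then follows by applying this inequality to $u\/\\|u\\|$ and $v\/\\|v\\|$ and rescaling.\n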
\n\n\n\\noindent With these preparations, we make the following assumptions on the true parameter $\\beta^\\ast$, design matrix $X$, initialization parameter $\\alpha$ and step size $\\eta$ in Algorithm~\\ref{alg1}.\n\n\\paragraph{Assumption (A): } The true parameter $\\beta^\\ast$ is $s$-sparse, and $s=s_1+s_2$, that is, each nonzero signal in $\\beta^\\ast$ is either weak or strong. In addition, $\\kappa m \\lesssim1$. \n\n\\paragraph{Assumption (B):} The design matrix $X$ satisfies $(s+1,\\delta)$-RIP condition with $\\delta\\lesssim 1 \/(\\kappa \\sqrt{s}\\log\\frac{p}{\\alpha})$.\n\n\\paragraph{Assumption (C):} The initialization parameter $\\alpha$ satisfies $0<\\alpha\\lesssim p^{-1}$, and the step size $\\eta$ satisfies $0<\\eta\\lesssim (\\kappa\\log\\frac{p}{\\alpha})^{-1}$.\n\nSome remarks are in order.\nFirst, our current proof heavily relies on the RIP condition as in Assumption (B), which is satisfied if the predictors are iid and $s\\log p\\ll n$. However, the extensive simulation studies in the next section provide a strong numerical evidence suggesting that our conclusions remain valid even when the RIP condition is violated. We leave the formal theoretical investigation as an open problem for our future studies. Second, Assumption (A) is made mainly for illustrating the possibility of achieving the fast parametric root $n$ rate of convergence when $s_1=0$. In fact, our current proof can still lead to the typical high-dimensional rate of $\\sqrt{s\\log p\/n}$ without introducing the notion of strong and weak signals. And due to space constraint we omit the details.\n\n\n\\subsection{Non-negative Signals}\nTo start with, we demonstrate the key ideas of our proofs and give an analysis of the non-negative case as a warm-up. More specifically, we consider the case when all components of true signal $\\beta^\\ast$ are non-negative. To exploit this non-negativeness, we may instead use the self-Hadamard product parametrization $\\beta = u^2=u\\circ u$ for $u\\in\\mathbb R^p$, and write $\\beta^\\ast=(u^*)^2=u^* \\circ u^*$. Now, we have the optimization problem: \n\\begin{equation*}\n\\min_{u\\in \\mathbb{R}^{p}} f(u)=\\frac{1}{2n}\\,\\|Xu^2-y\\|^2,\n\\end{equation*}\nwith gradient descent updating formula $u_{t+1}=u_t-2\\eta\\, u_t \\circ \\big[n^{-1}X^T(Xu_t^2-y)\\big]$. For technical simplicity, we instead focus on the deterministic initialization $u_0=\\alpha \\mathbf 1\\in\\mathbb R^p$. \nThis case is simpler for the analysis than the general case since components of $u_t$ will not change sign during the iterations, and will always stay away from saddle points.\nWe summarize our result in the following main theorem. Since the non-negative signal constraint resembles the positive semi-definiteness constraint in matrix sensing, our proof utilizes the proof strategy in \\cite{li2018algorithmic} for analyzing matrix factorized gradient descent for noiseless matrix sensing by dividing the convergence into two stages (more details are provided after the theorem).\n\\begin{thm}\\label{thm1}\n\tUnder the above assumptions (A), (B) and (C). Let $\\epsilon=\\max\\{ \\alpha^2, \\sigma^2 \\frac{ Ms_1}{n}, \\sigma^2 \\frac{s_2\\log p}{n}\\}$, $\\tau=\\max \\{\\delta \\alpha, \\sigma \\sqrt{\\frac{\\log p}{n}}\\}$ and any $M\\geq 1$. 
Then there exist positive constants $(c_1,\\,c_2,\\,c_3,\\,c_4,\\,c_5)$ such that for every time $t$ satisfying $c_1\\, \\log(\\frac{p}{\\alpha})\/(\\eta m) \\leq t\\leq c_2 \/(\\eta \\tau)$, with probability at least $1-p^{-c_4}-e^{-c_5\\,Ms}$, the time-$t$ iterate $u_t$ satisfies\n\t\\begin{align*}\n\t\\|u_t^2-\\beta^\\ast\\|^2\\leq c_3 \\,\\epsilon .\n\t\\end{align*}\n\\end{thm}\n This theorem tells us that in high-dimensional linear regression, combining early stopping with implicit regularization can significantly improve the estimation accuracy. In particular, when all signals are strong ($s_1=s$ and $s_2=0$), the estimate $\\hat \\beta =u_t^2$ attains a parametric rate $\\sigma\\sqrt{s_1\/n}$ of convergence that is independent of the dimension $p$. In general, when weak signals exist, the overall rate $\\sigma \\sqrt{\\frac{s_1}{n}}+\\sigma \\sqrt{\\frac{s_2\\log p}{n}}$ depends on the numbers of strong and weak signals, and is still minimax-optimal \\citep{zhao2018pathwise}. The same remark also applies to our theory in the general case.\n\n\n\n\n\nOur proof strategy is to divide the convergence into two stages. Recall that $S=\\{j:\\,\\beta_j^\\ast\\neq 0\\}$ is the support set of the true signal $\\beta^\\ast$, and $S_1 \\subset S$ corresponds to the subset of all strong signals. In the first ``burn-in'' stage, we show that each component of the strong signal part $u_{t,S_1}$ increases at an exponential rate in $t$ until hitting $\\sqrt{m}\/2$, while the weak signal and error part $u_{t,{S_1}^c}$ remains bounded\nby $\\mathcal O(p^{-1})$. In the second stage, the iterate $u_t$ enters a geometric convergence region where it converges towards $u^\\ast$ at a linear rate up to some higher-order error term, and then stays in an $\\mathcal O(\\epsilon)$ neighborhood of $u^\\ast$ up to time $\\Theta(1\/(\\eta\\tau))$. Therefore, the time interval $c_1\\, \\log(\\frac{p}{\\alpha})\/(\\eta m) \\leq t\\leq c_2 \/(\\eta \\tau)$ would be the theoretical ``best solution region'' corresponding to the stabilized region in figure~\\ref{fig:31}.\n\n\nMore specifically, in the proof we consider the decomposition of $u_t$ into three parts: the strong signal part $z_t$, the weak signal part $d_t$ and the error part $e_t$:\n\\begin{align*}\nu_t=z_t+d_t+e_t, \\quad\\mbox{with}\\quad\nz_t:=I_{S_1} u_t\\in\\mathbb R^p, \\quad\nd_t:=I_{S_2} u_t \\in \\mathbb R^p \\quad\\mbox{and}\\quad\ne_t:=I_{S^c}u_t\\in\\mathbb R^p,\n\\end{align*} \nwhere recall that $I_E$ is the diagonal matrix with ones on the positions indexed by the subset $E\\subset\\{1,\\ldots,p\\}$ and zeros elsewhere. We use induction to prove the following result characterizing the gradient dynamics in the first stage. Recall that $\\theta_{s_1}(b)$ denotes the $s_1$-th largest absolute component value of a vector $b\\in\\mathbb R^p$ and $m$ is the minimal strength of the strong signals.\n\n\\begin{pro}[Stage one dynamics]\\label{pro2.1}\n\tUnder the assumptions of theorem~\\ref{thm1}, there are constants $(c_7,c_8)$ and $(c_7',c_8')$ such that, for each $t\\leq T_1$ with $T_1=\\Theta(\\frac{\\log(p\/\\alpha)}{\\eta m})$, each component of the strong signal part $u_{t,S_1}$ increases at an exponential rate in $t$ until hitting $\\sqrt{m}\/2$, while the weak signal and error part satisfies $\\|u_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p$.\n\\end{pro}\n\n\\section{Numerical Experiments}\n\\subsection{Simulations for Noiseless Case}\n\n\\paragraph{Convergence as $\\alpha\\to 0$:} We consider two design scenarios: an independent Gaussian design and a correlated design for which the RIP condition may fail. In both scenarios, we choose the true signal $\\beta^\\ast=(-1,2,2,3,0,\\ldots,0)^T\\in\\mathbb R^p$ and set $y=X\\beta^\\ast$. When implementing gradient descent, we choose step size $\\eta=0.2\\ (0.1)$ for the independent (correlated) design, $\\alpha\\in\\{10^{-10},10^{-9},\\ldots,10^{1}\\}$, and stopping threshold $\\epsilon=0.01\\alpha$. Figure~\\ref{fig:200} shows the estimation error $\\|\\widehat \\beta-\\beta^\\ast\\|_2$ versus $\\alpha$ in log-log plots. 
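\n\nFor concreteness, the following Python sketch mimics this noiseless experiment in the independent design scenario. It is a minimal illustration rather than the exact implementation behind the figure: it uses the signed parametrization $\\beta=a\\circ a-b\\circ b$ with the balanced initialization $a_0=b_0=\\alpha\\mathbf 1$, absorbs constant factors into the step size, and simply runs a fixed, generous number of iterations, since there is no noise to overfit in this setting.\n\\begin{verbatim}\nimport numpy as np\n\ndef hadamard_gd(X, y, alpha=1e-6, eta=0.1, n_iter=10000):\n    # Gradient descent on f(a, b) = ||X(a*a - b*b) - y||^2 / (2n),\n    # with the small, balanced initialization a_0 = b_0 = alpha * 1.\n    n, p = X.shape\n    a = alpha * np.ones(p)\n    b = alpha * np.ones(p)\n    for _ in range(n_iter):\n        r = X.T @ (X @ (a * a - b * b) - y) / n   # residual gradient in beta\n        a, b = a - eta * a * r, b + eta * b * r   # multiplicative factor updates\n    return a * a - b * b\n\nif __name__ == '__main__':\n    rng = np.random.default_rng(0)\n    n, p = 100, 200\n    X = rng.standard_normal((n, p))               # independent Gaussian design\n    beta_star = np.zeros(p)\n    beta_star[:4] = [-1, 2, 2, 3]\n    y = X @ beta_star                             # noiseless observations\n    for alpha in [1e-2, 1e-4, 1e-6]:\n        err = np.linalg.norm(hadamard_gd(X, y, alpha=alpha) - beta_star)\n        print(alpha, err)                         # error shrinks as alpha decreases\n\\end{verbatim}\n\n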
As we can see, the error exhibits a roughly linear trend on the log-log scale, which is consistent with our theoretical error bound estimate in Section 4.\nIn addition, in the correlated design scenario where the RIP does not hold, the algorithm is still able to recover $\\beta^\\ast$ as $\\alpha\\to0$, albeit with slower convergence (due to a smaller allowable step size and a larger condition number of $X$). This observation provides evidence for the correctness of our informal statement made at the beginning of Section 3 even without the RIP condition. We leave the proof of this conjecture open.\n\n\n\\paragraph{$\\ell_0$-norm minimizer differs from $\\ell_1$-norm minimizer:}\nIn this example, we study the empirical performance of the algorithm when the least $\\ell_1$-norm solution of the basis pursuit problem~\\eqref{Eqn:CS} is not the sparsest solution of $X\\beta=y$ (i.e., the null space property is violated). In particular, we choose\n\\begin{align*}\nX=\\begin{bmatrix} 0.2 & 1 & 0 \\\\ 0.2 & 0 &-1 \\end{bmatrix},\\quad \\beta^\\ast= \\begin{bmatrix} 5\\\\ 0 \\\\0 \\end{bmatrix}, \\quad\\mbox{and}\\quad y=\\begin{bmatrix} 1\\\\ 1 \\end{bmatrix},\n\\end{align*}\nso that $X\\beta^\\ast =y$. It is easy to verify that for this example, the sparsest solution of $X\\beta=y$ is $\\beta^\\ast$, while the least $\\ell_1$-norm solution is $\\beta^\\dagger = [0,1,-1]^T$. We use the same setting as before for implementing the algorithm, with $\\alpha\\in\\{10^{-10},10^{-5},10^{-3},10^{-1},10^0,10^1\\}$. Table~\\ref{tb1} reports the final outputs $\\beta=(\\beta_1,\\beta_2,\\beta_3)^T$ of the algorithm. Consistent with our intuition in Section 3, the algorithm converges to the least $\\ell_1$-norm solution $\\beta^\\dagger$ instead of the sparsest solution $\\beta^\\ast$. Again, the deviation from $\\beta^\\dagger$ decreases as the initialization level $\\alpha$ decreases. We conjecture this phenomenon of convergence to the least $\\ell_1$-norm solution to be generally true, and leave a formal proof as a future direction.\n\n\n\\begin{table}[h]\\caption{Convergence point of the algorithm when the null space property fails} \\label{tb1}\n\t\\begin{center}\n\t\t$\\begin{array}{|ccccccc|} \n\t\t\\hline\n\t\t\\alpha & 10^{-10} & 10^{-5} & 10^{-3} & 0.1 & 1 & 10 \\\\ \n\t\t\\hline\n\t\t\\beta_1 & 7.433e-13 & 5.703e-7 & 1.289e-4 & 2.884e-2 & 2.987e-1 & 8.823e-1 \\\\ \n\t\t\\hline\n\t\t1-\\beta_2 & 1.492e-13 & 1.141e-7 & 2.577e-5 & 5.769e-3 & 5.974e-2 & 1.765e-1\\\\\n\t\t\\hline\n\t\t1+\\beta_3 & 1.492e-13 & 1.141e-7 & 2.577e-5 & 5.769e-3 & 5.974e-2 & 1.765e-1 \\\\\n\t\t\\hline \n\t\t\\end{array}$\n\t\\end{center}\n\t\\vspace{-0.7em}\n\\end{table}\n\\subsection{Simulations for Noisy Case}\\label{sec:simu}\n\n\n\n\\paragraph{Comparison with other high dimensional estimators:} We further demonstrate the advantages of our algorithm by considering the $8$ simulation settings listed below, each with sparsity level $s=4$, nonzero signals $(-1,2,2,3)$ and noise level $\\sigma =0.15\\,\\|\\beta^\\ast\\|$. We generate $3 n$ observations independently and split them into $3$ equal parts, using the first part for training, the second for validation and the third for testing. The evaluation metrics are the standardized estimation error $\\|\\widehat \\beta-\\beta^\\ast\\|^2_2\/\\|\\beta^\\ast\\|^2_2$ and the prediction error $\\sqrt{\\|y-\\hat{y}\\|^2\/n}$ on the test data set. We repeat the experiment $50$ times and compare the medians of the standardized estimation errors and prediction errors with those of the Lasso, SCAD and MCP; a sketch of the train\/validation\/test protocol for our estimator is given below. 
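\n\nThe following Python sketch outlines one possible implementation of this train\/validation\/test protocol for our gradient descent estimator under a correlated design of the type listed below; the helper names, step size and iteration budget are illustrative choices rather than the exact ones used in our experiments, and the competing methods are fit with their R packages as described next.\n\\begin{verbatim}\nimport numpy as np\n\ndef gd_path(X, y, alpha=1e-5, eta=0.05, n_iter=4000):\n    # Hadamard-parametrized gradient descent; returns the whole iterate path\n    # so that an early stopping time can be selected on validation data.\n    n, p = X.shape\n    a, b = alpha * np.ones(p), alpha * np.ones(p)\n    path = []\n    for _ in range(n_iter):\n        r = X.T @ (X @ (a * a - b * b) - y) / n\n        a, b = a - eta * a * r, b + eta * b * r\n        path.append(a * a - b * b)\n    return np.array(path)\n\ndef one_replication(n=200, p=500, rho=0.5, seed=0):\n    rng = np.random.default_rng(seed)\n    beta_star = np.zeros(p)\n    beta_star[:4] = [-1, 2, 2, 3]\n    sigma = 0.15 * np.linalg.norm(beta_star)\n    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))\n    X = rng.standard_normal((3 * n, p)) @ np.linalg.cholesky(cov).T  # rows ~ N(0, Sigma)\n    y = X @ beta_star + sigma * rng.standard_normal(3 * n)\n    (Xtr, ytr), (Xva, yva), (Xte, yte) = [(X[i*n:(i+1)*n], y[i*n:(i+1)*n]) for i in range(3)]\n    path = gd_path(Xtr, ytr)\n    val_err = [np.mean((yva - Xva @ beta) ** 2) for beta in path]\n    beta_hat = path[int(np.argmin(val_err))]          # early stopping on validation error\n    est_err = np.sum((beta_hat - beta_star) ** 2) / np.sum(beta_star ** 2)\n    pred_err = np.sqrt(np.mean((yte - Xte @ beta_hat) ** 2))\n    return est_err, pred_err\n\nif __name__ == '__main__':\n    print(one_replication())\n\\end{verbatim}\n\n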
We implement the Lasso using the \\textit{glmnet} R package \\citep{friedman2010regularization}, while for SCAD and MCP we use the R package \\textit{ncvreg} \\citep{breheny2011coordinate}. The standard errors of the medians are calculated by bootstrapping the computed errors $1000$ times. For our algorithm, we use the initialization $\\alpha\\,\\mathbf 1$ with $\\alpha=10^{-5}$. The simulation results in Tables~\\ref{table1} and~\\ref{table2} indicate that our method consistently attains the best estimation accuracy across all settings, together with prediction accuracy that is the best or on par with the explicit penalization-based competitors.\n\n\\begin{enumerate}\n\t\\item \\textbf{S1:} $n=200$, $p=500$, $\\Sigma_{jk}=1$ for $j=k$ while $\\Sigma_{jk}=0$ for $j\\neq k$;\n\t\\item \\textbf{S2:} $n=200$, $p=500$, $\\Sigma_{jk}=0.1^{|j-k|}$;\n\t\\item \\textbf{S3:} $n=200$, $p=500$, $\\Sigma_{jk}=0.2^{|j-k|}$;\n\t\\item \\textbf{S4:} $n=200$, $p=500$, $\\Sigma_{jk}=0.5^{|j-k|}$;\n\t\\item \\textbf{S5:} $n=200$, $p=2000$, $\\Sigma_{jk}=1$ for $j=k$ while $\\Sigma_{jk}=0$ for $j\\neq k$;\n\t\\item \\textbf{S6:} $n=200$, $p=2000$, $\\Sigma_{jk}=0.1^{|j-k|}$;\n\t\\item \\textbf{S7:} $n=200$, $p=2000$, $\\Sigma_{jk}=0.2^{|j-k|}$;\n\t\\item \\textbf{S8:} $n=200$, $p=2000$, $\\Sigma_{jk}=0.5^{|j-k|}$.\n\\end{enumerate}\n\n\n\n\\begin{table}[H]\n\t\\begin{tabular*}{\\textwidth}{|c@{\\extracolsep{\\fill}}cccccccc|}\n\t\t\n\t\t\\hline\n\t\t&S1 &S2 &S3 &S4 &S5 &S6 &S7 & S8 \\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{GD} \n\t\t&\\textbf{0.520}\t&\\textbf{0.448}\t&\\textbf{0.510}\t&\\textbf{0.568}\t&\\textbf{0.385}\t&\\textbf{0.290} &\\textbf{0.465}\t\t&\\textbf{0.460}\n\t\t\n\t\t\\\\\n\t\t&(0.0428)\t&(0.0530)\t&(0.0607)\t&(0.0850)\t&(0.0533)\t&(0.0465)\t&(0.0858)\t&(0.0863) \n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{Lasso} \n\t\t&3.11\t&3.10\t&3.42\t&4.51\t&4.62\t&4.04\t&4.40\t&6.98\n\t\t\\\\\n\t\t&(0.219)\t&(0.173)\t&(0.242)\t&(0.274)\t&(0.279)\t&(0.205)\t&(0.306)\t&(0.452)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{SCAD}\n\t\t&0.613\t&0.533\t&0.650\t&0.691\t&0.519\t&0.401\t&0.595\t&0.646\n\t\t\\\\\n\t\t&(0.0464)\t&(0.0679)\t&(0.0702)\t&(0.103)\t&(0.0527)\t&(0.0574)\t&(0.0837)\t&(0.0776)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{MCP}\n\t\t&0.628\t&0.552\t&0.594\t&0.733\t&0.484\t&0.405\t&0.595\t&0.708\n\t\t\\\\\n\t\t&(0.0392)\t&(0.0779)\t&(0.0809)\t&(0.0902)\t&(0.0706)\t&(0.0597)\t&(0.0741)\t&(0.0680)\n\t\t\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\t\\caption{Medians of the standardized estimation errors for each method, with bootstrap standard errors in parentheses under the medians. 
All medians and standard errors are reported in units of $10^{-3}$.\n\t\t\\label{table1}}\n\\end{table}\n\n\n\\begin{table}[H]\n\t\\begin{tabular*}{\\textwidth}{|c@{\\extracolsep{\\fill}}cccccccc|}\n\t\t\n\t\t\\hline\n\t\t&S1 &S2 &S3 &S4 &S5 &S6 &S7 & S8 \\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{GD} \n\t\t&0.638\t&\\textbf{0.636}\t&0.640\t&\\textbf{0.634}\t&\\textbf{0.646}\t&\\textbf{0.651} &\\textbf{0.641}\t\t&\\textbf{0.642}\n\t\t\n\t\t\\\\\n\t\t&(0.0597)\t&(0.0753)\t&(0.0718)\t&(0.0709)\t&(0.0491)\t&(0.0510)\t&(0.0406)\t&(0.0498) \n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{Lasso} \n\t\t&0.676\t&0.672\t&0.671\t&0.685\t&0.693\t&0.699\t&0.696\t&0.708\n\t\t\\\\\n\t\t&(0.0899)\t&(0.0981)\t&(0.0932)\t&(0.0510)\t&(0.0568)\t&(0.0918)\t&(0.0615)\t&(0.0488)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{SCAD}\n\t\t&0.638\t&0.638\t&\\textbf{0.637}\t&0.637\t&0.650\t&0.654\t&0.643\t&0.647\n\t\t\\\\\n\t\t&(0.0530)\t&(0.0724)\t&(0.0716)\t&(0.0713)\t&(0.0470)\t&(0.0451)\t&(0.0402)\t&(0.0434)\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{2}{3em}{MCP}\n\t\t&\\textbf{0.637}\t&0.639\t&0.638\t&0.637\t&0.650\t&0.652\t&0.644\t&0.647\n\t\t\\\\\n\t\t&(0.0516)\t&(0.0756)\t&(0.0684)\t&(0.0753)\t&(0.0475)\t&(0.0441)\t&(0.0435)\t&(0.0408)\n\t\t\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\t\\caption{Medians of the test prediction errors for each method, with bootstrap standard errors in parentheses under the medians. All standard errors are reported in units of $10^{-1}$.\n\t\t\\label{table2}}\n\t\n\t\n\\end{table}\n\n\\paragraph{Comparison in Variable Selection:} Now we consider variable selection in the presence of weak signals. The simulation setting is the same as S3 above, with only the true signals changed. Let $s=20$, with the strength of the first $4$ signals equal to $0.5 \\sigma \\sqrt{\\log p\/n}$ and that of the other $16$ equal to $ 5 \\sigma \\sqrt{ \\log p \/ n}$, where $\\sigma =1$. Clearly the first $4$ signals are too weak to be selected by any method. However, since all methods are tuned by minimizing the prediction error, the effect of these weak signals persists and may inflate the false discovery rate. Under this setting, we perform model selection by minimizing the $5$-fold cross-validated prediction error. For our method, we use the same regularization parameter as the Lasso as a hard-thresholding level applied after estimation. We repeat the process $50$ times and compare the variable selection errors. From figure~\\ref{fi:vse}, we can see that our method is more robust to the inflation of false discoveries caused by the failure to detect weak signals: although the number of missed signals for our method is $4$, meaning that none of the weak signals is detected, its number of false detections is close to zero. For the other methods, although weak signals are occasionally detected, the risk of false detection is high. Overall, our method performs consistent variable selection for the strong signals and achieves better estimation than the competitors.\n\\begin{figure}[h!]\n\t\\includegraphics[scale=0.7]{0vse.jpeg}\n\t\\caption{Variable selection errors for the model selected by minimizing the cross-validated prediction error. `fp' counts false positives (the truth is zero but the variable is selected); `tn' counts missed signals (the truth is nonzero but the variable is not selected).}\\label{fi:vse}\n\\end{figure}\n\n\n\\subsection{Real Data Analysis}\nWe compare our method with the others on the Riboflavin data set \\citep{buhlmann2014high}, which is available in the \\textit{hdi} R package. 
The data set contains $71$ observations of the log-transformed riboflavin production rate together with the logarithms of the expression levels of $4088$ genes. Before estimation, we first perform sure independence screening \\citep{fan2008sure}, ranking the predictors by the strength of their correlation with the response, to reduce the dimension of the feature space to $500$. Then we normalize the predictors and add an intercept column to the design matrix. For evaluation, we split the observations into $50$ training samples and $21$ testing samples, and perform $10$-fold cross validation on the training data to select the number of iterations and the regularization parameters. As before, for our algorithm we use the initialization $\\alpha\\,\\mathbf 1$ with $\\alpha=10^{-5}$ in all training runs. We record the prediction errors on the testing data and repeat the random split $50$ times. As shown in figure~\\ref{fi:box} below, our method obtains the smallest prediction errors, which suggests that it also estimates this high dimensional linear regression model most accurately.\n\n\n\\begin{figure}[h!]\n\t\\includegraphics[scale=0.7]{realdata.jpeg}\n\t\\caption{Prediction errors on the test data of the Riboflavin data set for each method. The $x$-axis indicates the method used for estimation, and the $y$-axis the testing prediction error $\\|y-\\hat{y}\\|$.}\\label{fi:box}\n\\end{figure}\n\n\nWe also perform variable selection on the whole Riboflavin data set, with the tuning parameters again chosen by cross validation. For our algorithm, once the number of iterations is obtained from cross validation, we run gradient descent on the whole data set with the same initialization and step size for that number of iterations. We then use the regularization parameter obtained by the Lasso as a hard-thresholding value on the absolute values of the resulting estimates to perform `post-estimation' selection. The comparison between the different variable selection methods is given in Table~\\ref{tb:3}. From the table, we can see that, except for one variable, all variables detected by our method are also selected by the other methods, illustrating that our method tends to have a lower false positive rate for variable selection without sacrificing estimation or prediction accuracy. \n\n\n\n\\begin{table}[H]\n\t\\begin{tabular*}{\\textwidth}{|c@{\\extracolsep{\\fill}}cccc|}\n\t\t\n\t\t\\hline\n\t\t&Lasso &SCAD &MCP &GD \\\\\n\t\t\\hline\n\t\t\\multirow{1}{3em}{Lasso} \n\t\t& 33\t& \t&\t&\n\t\t\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{1}{3em}{SCAD} \n\t\t&11\t&14\t&\t&\n\t\t\\\\\n\t\t\\hline\n\t\t\\multirow{1}{3em}{MCP}\n\t\t&2\t&3\t&5\t&\n\t\t\\\\\n\t\t\n\t\t\\hline\n\t\t\\multirow{1}{3em}{GD}\n\t\t&8\t&9\t&3\t&11\n\t\t\\\\\n\t\t\n\t\t\\hline\n\t\t\\multirow{1}{3em}{Independent}\n\t\t&20\t&2\t&2\t&1\n\t\t\\\\\n\t\t\\hline\n\t\\end{tabular*}\n\t\\caption{Variable selection results for the Riboflavin data set. Each cell reports the number of variables detected by both the row method and the column method (so the diagonal cells give the total number of variables selected by each method). The `Independent' row reports the number of variables selected by the column method that are not detected by any other method. \\label{tb:3}\n\t}\n\\end{table}\n\n\\section{Discussion}\nIn this paper, we illustrated the phenomenon of implicit regularization induced by the Hadamard product change of variables in high dimensional linear regression, and demonstrated that a combination of implicit regularization with early stopping yields better estimation than state-of-the-art penalized approaches with explicit regularization. 
However, several important open problems on implicit regularization remain for future work. First, our theory relies heavily on the RIP condition, which is relatively strong compared to the restricted eigenvalue condition, the weakest assumption on the design commonly used in the literature.\nIt would be interesting to investigate whether our results remain valid without the RIP condition.\nSecond, it would be interesting to study whether a computationally efficient early stopping rule (rather than cross validation), based on some data-driven model complexity measure, can be applied and shown to provably work.\n\n\n\\section{Proof of the results in the paper}\n\\subsection{Overview}\n\nIn this supplementary material, we provide proofs of the main theorems presented in the paper. \n\n\\subsection{Notation}\nRecall that $\\|v\\|=\\sqrt{\\sum_{j=1}^p v_j^2}$ and $\\|v\\|_\\infty=\\max_{j}|v_j|$ denote the vector $\\ell_2$-norm and $\\ell_\\infty$-norm, respectively. Moreover, $I$ is the identity matrix in $\\mathbb R^p$, and for any subset $S$ of $\\{1,\\ldots,p\\}$, $I_S$ is the diagonal matrix with $1$ on the $j$th diagonal entry for $j\\in S$ and $0$ elsewhere. We use the bold letter $\\mathbf 1\\in\\mathbb R^p$ to denote the all-ones vector, and $\\theta_s(\\beta)$ denotes the $s$-th largest component of a vector $\\beta\\in\\mathbb R^p$ in absolute value. We use the notation $\\lesssim$ and $\\gtrsim$ to denote $\\leq$ and $\\geq$ up to some positive multiplicative constant, respectively. For two vectors $u$ and $v$ of the same dimension, we use $u\\geq v$ and $u\\leq v$ to denote element-wise $\\geq$ and $\\leq$. Let $\\lambda_{\\max}(A)$ and $\\lambda_{\\min}(A)$ denote the maximal and minimal eigenvalues of a matrix $A$. Throughout this document, the letters $c$, $c'$ and $c''$ denote constants whose values may change from line to line.\n\n\\subsection{Some Useful Results}\nIn our proofs, we repeatedly deal with the Hadamard product $u\\circ v$ and the operation $(n^{-1}\\,X^TXu)\\circ v$ for two vectors $u,v\\in\\mathbb R^p$. \nTherefore, we collect some useful properties in this section, several of which are consequences of the RIP condition. \n\nThe first property regarding the Hadamard product is a direct consequence of the H\\\"{o}lder inequality.\n\\begin{lem}\\label{lem2}\n\tFor any two vectors $u$ and $v$ in $\\mathbb{R}^{p}$, we have:\n\t\\begin{equation}\n\t\\|u\\circ v\\| \\leq\\|u\\|\\|v\\|_\\infty.\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tThis follows since $\\|u\\circ v\\|^2 = \\sum_{j}u_j^2v_j^2 \\leq \\|v\\|_\\infty^2 \\sum_{j}u_j^2=\\|u\\|^2\\|v\\|_\\infty^2$.\n\\end{proof}\nThe second lemma shows that under the RIP, the product $(n^{-1}\\,X^TXu)\\circ v$ can be well approximated by $u\\circ v$ for any sparse vector $u\\in\\mathbb R^p$ and any vector $v\\in\\mathbb R^p$. \n\\begin{lem}\\label{lem3}\n\tLet $X$ be a matrix in $\\mathbb{R}^{n\\times p}$ that satisfies the $(s+1,\\delta)$-restricted isometry property (see Definition 2.1 in the paper). Then for any $s$-sparse vector $u$ and any $v$ in $\\mathbb{R}^{p}$, we have:\n\t\\begin{equation}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty} \\leq \\delta\\|u\\|_2\\|v\\|_{\\infty}.\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tLet $D(v)$ be the diagonal matrix in $\\mathbb{R}^{p\\times p}$ whose diagonal entries are the corresponding components of $v$. 
Then $\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty}$ can be represented as\n\t\\begin{align*}\n\t\\max_{i=1,2,...,p} |e_i^T D(v) n^{-1}X^TXu-e_i^T D(v) u|,\n\t\\end{align*}\n\twhere $e_i$ is the $p$-dimensional vector whose $i$-th component is $1$ and $0$ elsewhere. Using the fact that $X$ satisfies the $(s+1,\\delta)$-RIP and $e_i^T D(v)$ is $1$-sparse, we have (see the remark right after Definition 2.1 in the paper):\n\t\\begin{align*}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty}\\leq \\max_{i=1,2,...,p} \\delta\\|e_i^T D(v)\\|\\|u\\|= \\delta\\|u\\|_2\\|v\\|_{\\infty}.\n\t\\end{align*}\n\\end{proof}\nOur third lemma considers the case when $u$ and $v$ are both arbitrary. \n\\begin{lem}\\label{lem4}\n\tLet $X$ be a matrix in $\\mathbb{R}^{n\\times p}$ that satisfies the $(2,\\delta)$-restricted isometry property. Then for any vectors $u,\\, v\\in\\mathbb R^p$, we have \n\t\\begin{equation}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty} \\leq \\delta\\|u\\|_1\\|v\\|_{\\infty}.\n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tSince we can decompose $u=\\sum_j I_j u$, we have\n\t\\begin{align*}\n\t\\|(n^{-1}\\,X^TXu)\\circ v-u\\circ v\\|_{\\infty}&=\\max_{i=1,2,...,p} |e_i^T D(v)\\, n^{-1}X^TXu-e_i^T D(v) u|\\\\\n\t&\\leq \\sum_{j=1}^{p} \\max_{i=1,2,...,p} |e_i^T D(v)\\, n^{-1}X^TX \\, I_j u-e_i^T D(v)\\, I_j u| \\\\\n\t&\\leq \\sum_{j=1}^{p} \\max_{i=1,2,...,p} \\delta \\|e_i^T D(v)\\| |u_j| \\\\\n\t&\\leq \\delta\\|u\\|_1\\|v\\|_{\\infty}.\n\t\\end{align*}\n\\end{proof}\nOur fourth and fifth lemmas concern the concentration behavior of the noise terms.\n\\begin{lem}\\label{lem6}\n\t$w \\sim \\mathcal{N}(0, \\sigma^2 I_{n \\times n})$, all $\\ell_2$ norm of column vectors of $X_{n \\times s}$ are normalized to $\\sqrt{n}$, $s0$, we can ensure $\\|e_0\\|_{\\infty}\\lesssim 1\/p$, and $\\|z_0\\|_{\\infty}\\lesssim 1$. Now, for any time $t>T_1$ within the range of theorem~\\ref{thm1}, since $\\|e_t\\|, \\|d_t\\| \\lesssim 1\/p$ are still controlled, combined with bound~\\eqref{decomp}, we have:\n\t\\begin{align*}\n\t\\|u_t^2-\\beta^\\ast\\|^2 \\leq c (\\alpha^2+\\sigma^2 \\frac{Ms_1}{n} + \\sigma^2 \\frac{s_2 \\log p}{n}).\n\t\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Proof of Proposition \\ref{pro3.1}}\n\nSimilar to the proof for the nonnegative case, we use induction to show that for each $t\\leq T_1$,\n\\begin{align}\n\\|a_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p,\\ &&\\|b_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p \\label{eq3.1.1},\\\\\n\\|a_{t,S_1}\\|_{\\infty}\\lesssim 1, &&\\|b_{t,S_1}\\|_{\\infty}\\lesssim 1 \\label{eq3.1.2},\\\\\n\\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\|\\lesssim \\sqrt{s} \\label{eq3.1.3},\n\\end{align}\nwhere the set $S_1^c$ indexes the union of the weak signals and the errors. When $t=0$, we have $g_0=\\alpha \\mathbf{1}$, $l_0=0$. Therefore, under the assumption $\\alpha\\lesssim 1\/p$, we have $\\|a_{0,S_1^c}\\|_{\\infty}\\lesssim 1\/p $, $\\|a_{0,S_1}\\|_{\\infty}\\lesssim 1$, and similar bounds for $b$. Now suppose that the bounds \\eqref{eq3.1.1}--\\eqref{eq3.1.3} hold at time $t<T_1$; we show below that they continue to hold at time $t+1$. The intuition for the dynamics of each strong signal coordinate $i\\in S_1$ is as follows.\n\\begin{itemize}\n\\item If $\\beta_i^*>0$ and $\\beta_{t,i}<\\beta_i^*\/2$, then $a_{t,i}^2$ increases and $b_{t,i}^2$ decreases, both at an exponential rate. Overall, the $i$th component $\\beta_{t,i}=a_{t,i}^2-b_{t,i}^2$ of $\\beta_t$ tends to grow towards positive at an exponential rate.\n\\item If $\\beta_i^*<0$ and $\\beta_{t,i}>\\beta_i^*\/2$, then $a_{t,i}^2$ decreases and $b_{t,i}^2$ increases, both at an exponential rate. Overall, the $i$th component $\\beta_{t,i}=a_{t,i}^2-b_{t,i}^2$ of $\\beta_t$ tends to fall to negative at an exponential rate.\n\\end{itemize}\nOur analysis is based on the resemblance between the update rules of $(a_t,b_t)$ and that of $u_t$ in the nonnegative case. 
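\n\nTo make this resemblance concrete, the following toy Python sketch (purely illustrative, and not part of the formal argument) iterates the decoupled multiplicative updates with the residual $n^{-1}X^T(X\\beta_t-y)$ replaced by its idealized value $\\beta_t-\\beta^\\ast$ plus a small constant mimicking the noise level; it prints how a strong positive coordinate passes through the burn-in and convergence stages while a null coordinate remains negligible over the relevant time horizon.\n\\begin{verbatim}\nimport numpy as np\n\n# Idealized dynamics: the residual gradient is beta_t - beta* plus a constant w\n# that mimics the sigma * sqrt(log p / n) noise level on a null coordinate;\n# the RIP-based lemmas make this approximation rigorous up to O(delta) terms.\neta, alpha, T = 0.1, 1e-6, 400\nbeta_star = np.array([3.0, 0.0])   # one strong positive signal, one null coordinate\nw = np.array([0.0, 0.05])\na = alpha * np.ones(2)\nb = alpha * np.ones(2)\nfor t in range(T + 1):\n    beta = a * a - b * b\n    if t % 100 == 0:\n        print(t, beta[0], b[0] ** 2, beta[1])    # signal, redundant factor, null coord\n    r = beta - beta_star + w                     # idealized residual gradient\n    a, b = a * (1 - eta * r), b * (1 + eta * r)  # decoupled multiplicative updates\n\\end{verbatim}\nAs $t$ grows, the first coordinate of $\\beta_t$ rises to $\\beta^\\ast_1$ at an exponential rate while its redundant factor $b_{t,1}^2$ decays towards zero, and the null coordinate remains negligible, mirroring the two-stage behavior described above.\n\n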
Recall that we assumed that the RIP constant $\\delta\\lesssim 1\/(\\kappa \\sqrt{s}\\log\\frac{p}{\\alpha})$, the step size $\\eta\\lesssim 1\/(\\kappa \\log\\frac{p}{\\alpha})$, and $T_1=\\Theta(\\frac{ \\log(p\/\\alpha)}{\\eta m})$.\n\\begin{pro}\\label{apro3.2}\n\tUnder the assumptions of theorem~\\ref{thm2} and the induction hypothesis~\\eqref{eq3.1.1}-\\eqref{eq3.1.3} at $t<T_1$, there exist constants $c$ and $c'$ such that, for each $i\\in S_1$,\n\t\\begin{align*}\n\t\\beta_{t,i}\\geq (1+c\\,\\eta\\,\\beta^\\ast_{i})^t \\alpha^2-c'\\alpha^2,\\ \\ \\text{if}\\ \\beta^\\ast_{i}>0;\\\\\n\t\\beta_{t,i}\\leq (1-c\\,\\eta\\,\\beta^\\ast_{i})^t (-\\alpha^2)+c'\\alpha^2,\\ \\ \\text{if}\\ \\beta^\\ast_{i}<0.\n\t\\end{align*}\n\\end{pro}\n\\begin{proof}\n\tSimilar to the proof of proposition \\ref{pro2.2}, we first approximate $(n^{-1}X^TXu)\\circ v$ by $u\\circ v$ based on the RIP condition via lemmas~\\ref{lem3} and \\ref{lem4},\n\t\\begin{align}\n\t&\\|n^{-1}X^T(X\\beta_t-y)-(I_{S_1}\\beta_{t}-I_{S_1}\\beta^\\ast)\\|_{\\infty} \\nonumber\\\\\n\t\\leq&\\, \\|n^{-1}X^TX (I_{S_1^c}r_{t})\\|_{\\infty}+\\delta\\|r_{t,S_1}\\| +\\|n^{-1}X^Tw\\|_\\infty \n\t\\lesssim \\frac{m}{\\log (p\/\\alpha)}, \\label{eq4.2.1}\n\t\\end{align}\n\timplying that under the condition $ m\\lesssim 1$, we have\n\t\\begin{align*}\n\t&\\|n^{-1}X^T(X\\beta_t-y)\\|_{\\infty}\\\\\n\t\\leq&\\, \\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\|_{\\infty} +\n\t\\|n^{-1}X^T(X (I_{S_1^c}r_{t})-w)\\|_{\\infty}+\\delta\\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\| \n\t\\lesssim 1, \n\t\\end{align*}\n\twhere the last inequality uses $\\|\\beta_{t,S_1^c}\\|_{\\infty}\\lesssim 1\/p$ and $\\|\\beta_{t,S_1}-\\beta^\\ast_{S_1}\\|\\lesssim \\sqrt{s}$.\n\t\n\tIn order to analyze $\\beta_{t,S_1}=a_{t,S_1}^2-b_{t,S_1}^2$, let us focus on $a_{t,S_1}^2$ and $b_{t,S_1}^2$ separately. According to the updating rule of $a_{t,S_1}$, we have\n\t\\begin{align*}\n\t&a_{t+1,S_1}^2=a_{t,S_1}^2-2\\eta a_{t,S_1}^2 \\circ [n^{-1}X^T(X\\beta_{t}-y)]_{S_1}+\\eta ^2 a_{t,S_1}^2 \\circ [n^{-1}X^T(X\\beta_{t}-y)]_{S_1}^2,\n\t\\end{align*}\n\twhere recall that for a vector $a\\in\\mathbb R^p$, $a_{S_1}$ denotes the sub-vector of $a$ with indices in $S_1$. Applying lemma~\\ref{lem3} with $v=\\mathbf 1$, we obtain\n\t\\begin{align*}\n\t&\\|a_{t+1,S_1}^2-a_{t,S_1}^2+2\\eta a_{t,S_1}^2\\circ(\\beta_{t,S_1}-\\beta^\\ast_{S_1})\\|_{\\infty}\\lesssim \\eta\\frac{m}{\\log (p\/\\alpha)}+\\eta^2 \\kappa^2 m^2\\stackrel{(i)}{\\lesssim} \\eta\\frac{m}{\\log (p\/\\alpha)},\n\t\\end{align*}\n\twhere in step $(i)$ we used $\\eta \\kappa m \\lesssim \\frac{m}{\\log (p\/\\alpha)}$ and $\\kappa m \\lesssim 1$. Similar to the nonnegative case, since $\\eta m \\leq 1\/2$ and $\\frac{m}{\\log (p\/\\alpha)} \\leq 1\/2$, we have $a_{t,i}^2\/a_{t+1,i}^2 \\leq 4$ for $i\\in S_1$. Therefore, defining $\\xi_t=(\\xi_{t,i})_{i\\in S_1}$ by\n\t\\begin{equation*}\n\t\\xi_{t,i}:\\,=1-\\frac{a_{t,i}^2\\,\\big(1-2\\eta ({\\beta}_{t,i}-\\beta^\\ast_i)\\big)}{a_{t+1,i}^2}, \n\t\\end{equation*} \n\twe obtain the element-wise bound $\\|\\xi_t\\|_{\\infty}\\lesssim \\eta\\frac{m}{\\log (p\/\\alpha)}$.\n\tEquivalently, we can write\n\t\\begin{align}\n\ta_{t+1,i}^2= a_{t,i}^2 (1-2\\eta ({\\beta}_{t,i}-\\beta^\\ast_i))+\\xi_{t,i} a_{t+1,i}^2. 
\\label{eq4.2.2}\n\t\\end{align} \n\tNow let us consider two cases depending on the sign of $\\beta^\\ast_i$, $i\\in S_1$:\n\t\n\t\\emph{Case $\\beta_i^* >0$:} When ${\\beta}_{t,i}-\\beta^\\ast_i\\leq -\\beta^\\ast_i\/2$, since $\\beta^\\ast_i \\geq m$, we have by equation~\\eqref{eq4.2.2},\n\t\\begin{align*}\n\ta_{t+1,i}^2 &\\geq \\frac{a_{t,i}^2(1+\\eta \\beta^\\ast_i)}{1+c\\eta\\frac{m}{\\log (p\/\\alpha)}} \\geq a_{t,i}^2(1+\\eta \\beta^\\ast_i)(1-c\\eta\\frac{\\beta_i^*}{\\log (p\/\\alpha)} )\\geq a_{t,i}^2(1+\\eta \\beta^\\ast_i\/4),\n\t\\end{align*}\n\twhere the last inequality follows since $1\/\\log(p\/\\alpha)\\leq 1\/2$ and $\\eta \\beta^\\ast_i \\leq 1\/2$. Similarly, we can analyze $b_{t,S_1}^2$ to get\n\t\\begin{align*}\n\tb_{t+1,i}^2 &\\leq \\frac{b_{t,i}^2(1-\\eta \\beta^\\ast_i)}{1-c\\eta\\delta\\sqrt{s}}\\leq b_{t,i}^2(1-\\eta \\beta^\\ast_i)(1+c\\eta\\beta_i^*\/\\log(p\/\\alpha))\\leq b_{t,i}^2(1-\\eta \\beta^\\ast_i\/4).\n\t\\end{align*}\n\tTherefore, $a_{t,i}^2$ increases at an exponential rate, faster than the noise components $a_{t,S_1^c}$, while $b_{t,i}^2$ decreases to zero at an exponential rate; by the time $a_{t,i}^2$ has increased to $\\beta_{i}^\\ast\/2$, $b_{t,i}^2$ has correspondingly decreased to $O( \\alpha^4)$. A combination of these two leads to the first claimed bound for $\\beta_{i}^\\ast>0$.\n\t\n\t\\emph{Case $\\beta_i^* <0$:} The analysis for this case is similar: when ${\\beta}_{t,i}-\\beta^\\ast_i\\geq -\\beta^\\ast_i\/2$, we have\n\t\\begin{align*}\n\ta_{t+1,i}^2 \\leq a_{t,i}^2(1+\\eta \\beta^\\ast_i\/4),\\quad\\mbox{and}\\quad b_{t+1,i}^2 \\geq b_{t,i}^2(1-\\eta \\beta^\\ast_i\/4),\n\t\\end{align*}\n\twhich leads to the second claimed bound for $\\beta_{i}^\\ast<0$.\n\\end{proof}\n\nAs a consequence of the proof in this step, after at most $T= \\Theta \\big( \\frac{\\log(m\/\\alpha^2)}{\\eta m}\\big)$ iterations, we are guaranteed to have $|{\\beta}_{T,i}|\\geq |\\beta^\\ast_i|\/2$ with $\\mathrm{sign}({\\beta}_{T,i})=\\mathrm{sign}(\\beta^\\ast_i)$ and $\\min\\{a_{T,i}^2, b_{T,i}^2\\} \\leq c \\alpha^4$.\n\n\\subsubsection*{Step 3: Prove Induction Hypothesis}\n\\begin{pro}\\label{apro3.3}\n\tUnder the assumptions of theorem~\\ref{thm2}, if the induction hypothesis~\\eqref{eq3.1.1}-\\eqref{eq3.1.3} holds at time $t<T_1$, then it also holds at time $t+1$.\n\\end{pro}\n\\noindent Finally, for each strong signal $j \\in S_1$ we have $|\\beta_{t,j}| \\geq \\beta^*_j - \\sigma \\sqrt{s \/n} \\geq \\lambda$ based on the definition of the strong signals, while for errors and weak signals $j \\in S_1^c$, we have $\\|\\beta_{t,S_1^c}\\|_{\\infty} \\leq c\\alpha^2 < \\lambda$. Consequently, after the component-wise hard thresholding operation at level $\\lambda$, all strong signals remain nonzero while all weak signals and errors become zero.\n\\bibliographystyle{apalike}\n\n