diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzarey" "b/data_all_eng_slimpj/shuffled/split2/finalzzarey" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzarey" @@ -0,0 +1,5 @@ +{"text":"\n\n\n\\section{Method}\nOur method consists of a multi-scale light field encoded using a progressive neural network to reduce aliasing and enable adaptive streaming and rendering.\n\n\n\n\\subsection{Multi-scale Light Field}\n\nWe propose encoding multiple representations of the light field which produce anti-aliased representations of a light field, similar to mipmaps for conventional images. For this, we generate an image pyramid for each image view with area downsampling and encode each resolution as a separate light field representation. In contrast to bilinear or nearest pixel downsampling, area downsampling creates an anti-aliased view as shown in \\autoref{fig:image_downsampling_methods} where each pixel in the downsampled image is an average over a corresponding area in the full-resolution image.\n\n\\begin{figure}[!htb]\n \\centering\n \\captionsetup[subfigure]{labelformat=empty}\n \\begin{subfigure}{0.52\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/aliasing\/aliasing_fullres.pdf}\n \\vspace{-5.5mm}\n \\caption{Single-scale Light Field Network}\n \\end{subfigure}\\\\\n \\vspace{1mm}\n \\begin{subfigure}{0.52\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/aliasing\/aliasing_mipnet.pdf}\n \\vspace{-5.5mm}\n \\caption{Multi-scale Light Field Network}\n \\end{subfigure}\n \\caption{Rendering a single full-scale LFN at a lower (1\/8) resolution results in aliasing, unlike the multi-scale LFN at the same (1\/8) resolution. When changing viewpoints (3 shown above), the aliasing results in flickering.}\n \\label{fig:aliasing_figure}\n\\end{figure}\n\nAs with mipmaps, lower-resolution representations trade-off sharpness for less aliasing. In this case, where a light field is rendered into an image, sharpness may be preferable over some aliasing. However, in the case of videos where the light field is viewed from different poses or the light field is dynamic, sampling from a high-resolution light field leads to flickering. With a multi-scale representation, rendering light fields at smaller resolutions can be done at a lower scale to reduce flickering.\n\n\\subsection{Progressive Model}\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/model_illustration_subnet_lods.pdf}\n \\caption{Using subsets of neurons to encode different levels of detail within a single model.}\n \\label{fig:model_illustration_subnet}\n\\end{figure}\n\nFor light field streaming, we wish to have a compact representation that can simultaneously encode multiple levels of detail into a single progressive model. 
Specifically, we seek a model with the following properties:\nGiven specific subsets of network weights, we should be able to render a complete image at a reduced level of detail.\nProgressive levels of detail should share weights with lower levels to eliminate encoding redundancy.\n\nWith the above properties, our model offers several benefits over a standard full-scale network or encoding multi-scale light fields using multiple networks.\nFirst, our model composes all scales within a single network and hence requires less storage space than storing four separate models.\nSecond, our model is progressive, so in a streaming scenario only the parameters necessary for the desired level of detail need to be transmitted.\nThird, our model allows rendering different levels of detail across pixels for dithered transitions and foveation.\nFourth, our model allows for faster inference at lower levels of detail by utilizing fewer parameters.\nModel sizes and training times appear in \\autoref{tab:nonprogressive_vs_progressive}.\n\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figures\/subnet_layer_illustration.pdf}\n \\caption{Illustration of a single layer where a subset of the weight matrix and bias vector is used, evaluating half of the neurons and reducing the operations to approximately a quarter.}\n \\label{fig:subnet_layer_illustration}\n\\end{figure}\n\n\\subsubsection{Subset of neurons}\nTo encode multiple levels of detail within a unified representation, we propose using subsets of neurons as shown in \\autoref{fig:model_illustration_subnet}. For each level of detail, we assign a fixed number of neurons to evaluate, with increasing levels of detail using an increasing proportion of the neurons in the neural network. An illustration of the matrix subset operation applied to each layer is shown in \\autoref{fig:subnet_layer_illustration}. For example, a single linear layer with $512$ neurons will have a $512 \\times 512$ weight matrix and a bias vector of size $512$. For a low level of detail, we can assign a neuron subset size of $25\\%$, or $128$ neurons. Each linear layer would then use the top-left $128 \\times 128$ submatrix for the weights and the top $128$ subvector for the bias. To keep the input and output dimensions the same, the first hidden layer uses every column of the weight matrix and the final output layer uses every row of the weight matrix.\nUnlike using a subset of layers, as is done with early exiting~\\cite{bolukbasi2017adaptive,yeo2018neuraladaptive}, using a subset of neurons preserves the number of sequential non-linear activations across all levels of detail. Since layer widths are typically larger than model depths, using subsets of neurons also allows more granular control over the quantity and quality of levels. \n\n\\subsubsection{Training} \\label{sec:mipnet_training}\nWith multiple levels of detail at varying resolutions, a modified training scheme is required to efficiently encode all levels of detail together. \nOn one hand, generating all levels of detail at each iteration heavily slows down training, as each level requires an independent forward pass through the network. \nOn the other hand, selecting a single level of detail at each iteration or applying progressive training compromises the highest level of detail, whose full set of weights would only be updated a fraction of the time.\n\nWe adopt an intermediate approach that focuses on training the highest level of detail. 
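\nBefore detailing the objective, we sketch how the network is evaluated with a neuron subset (an illustrative PyTorch-style fragment under assumed choices such as ReLU activations and raw 6-dimensional ray inputs, not our exact implementation):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nWIDTHS = [128, 256, 384, 512]  # hidden neurons used by LODs 1-4\n\nclass ProgressiveLFN(nn.Module):\n    def __init__(self, in_dim=6, out_dim=3, width=512, depth=10):\n        super().__init__()\n        dims = [in_dim] + [width] * (depth - 1) + [out_dim]\n        self.layers = nn.ModuleList(\n            nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))\n\n    def forward(self, rays, lod=4):\n        n = WIDTHS[lod - 1]      # neuron subset size for this LOD\n        x = rays                 # (batch, 6) ray coordinates\n        last = len(self.layers) - 1\n        for i, layer in enumerate(self.layers):\n            W, b = layer.weight, layer.bias\n            if i == 0:           # all input columns, top n rows\n                x = torch.relu(x @ W[:n].t() + b[:n])\n            elif i == last:      # first n columns, all output rows\n                x = x @ W[:, :n].t() + b\n            else:                # top-left n x n block of the weights\n                x = torch.relu(x @ W[:n, :n].t() + b[:n])\n        return x                 # (batch, 3) RGB\n\\end{verbatim}\nLower levels of detail reuse the leading rows and columns of the full-detail weight matrices, so a higher level of detail only adds parameters on top of the lower ones.\n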
\nDuring training, each batch performs a full forward pass through the entire network, producing pixels at the highest level of detail, along with a forward pass at a randomly selected lower level of detail. The squared L2 losses for the full detail and the reduced detail are added together for a combined backward pass. By always training at the highest level of detail, we ensure that every weight gets updated in each batch and that the highest level of detail is trained for the same number of iterations as a standard model. Specifically, given a batch of rays $\\{\\mathbf{r}_i\\}_{i=1}^{b}$, corresponding ground truth colors $\\{\\{\\mathbf{y}_i^c\\}_{i=1}^{b}\\}_{c=1}^{4}$ for the four levels of detail, and a random lower level of detail $k \\in \\{1,2,3\\}$, our loss function is:\n$$L = \\frac{1}{b}\\sum_{i=1}^b \\left[ \\left\\Vert f_4(\\mathbf{r}_i) - \\mathbf{y}_i^4\\right\\Vert^2_2 + \\left\\Vert f_k(\\mathbf{r}_i) - \\mathbf{y}_i^k\\right\\Vert^2_2 \\right]$$\nAs rays are sampled from the highest-resolution representation, each ray will not correspond to a unique pixel in the lower-resolution images of the image pyramid. Thus, for lower levels of detail, corresponding colors are sampled using bilinear interpolation from the area-downsampled training images.\n\n\\subsection{Adaptive Rendering}\nOur proposed model allows dynamic levels of detail on a per-ray\/per-pixel level or on a per-object level. As aliasing appears primarily at smaller scales, selecting the level of detail based on the distance to the viewer reduces aliasing and flickering for distant objects.\n\n\\subsubsection{Distance-based Level of Detail}\nWhen rendering light fields from different distances, the spacing between rays in object space is proportional to the distance between the object and the camera. Due to this, aliasing and flickering artifacts can occur at large distances when the object appears small. By selecting the level of detail based on the distance of the object from the camera or viewer, aliasing and flickering can be reduced. Since the number of operations is proportional to the number of pixels, rendering a smaller, more distant object already involves fewer forward passes through the light field network. Rendering such objects at lower LODs further improves performance, as pixels at lower LODs perform a forward pass using only a subset of the weights.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/lod_transition_zoom.pdf}\n \\caption{With per-pixel level of detail, dithering can be applied to transition between levels of detail.}\n \\label{fig:transition_example}\n\\end{figure}\n\n\\subsubsection{Dithered Transitions}\nWith per-pixel LOD, transitioning between levels can be achieved with dithered rendering. By increasing the fraction of pixels rendered at the new level of detail in each frame, dithering creates the appearance of a fading transition between levels as shown in \\autoref{fig:transition_example}.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/foveation_example.pdf}\n \\caption{An example of foveated rendering from our progressive multi-scale LFN. Foveated rendering generates peripheral pixels at lower levels of detail to further improve performance in eye-tracked environments. In this example, foveation is centered over the left eye of the subject. 
}\n \\label{fig:foveation_example}\n\\end{figure}\n\n\\subsubsection{Foveated Rendering}\nIn virtual and augmented reality applications where real-time gaze information is available through an eye-tracker, foveated rendering~\\cite{Meng20203D,meng2018Kernel,li2021logrectilinear} can be applied to better focus rendering compute. As each pixel can be rendered at a separate level of detail, peripheral pixels can be rendered at a lower level of detail to improve the performance. \\autoref{fig:foveation_example} shows an example of foveation from our progressive multi-scale LFN.\n\n\\newcommand{0.24\\linewidth}{0.24\\linewidth}\n\\newcommand{\\addDatasetResults}[2]{\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r8.pdf}\n \\caption{Dataset #1 at 1\/8 scale}\n \\label{fig:qualitative_comparison_#1_r8}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r4.pdf}\n \\caption{Dataset #1 at 1\/4 scale}\n \\label{fig:qualitative_comparison_#1_r4}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r2.pdf}\n \\caption{Dataset #1 at 1\/2 scale}\n \\label{fig:qualitative_comparison_#1_r2}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r1.pdf}\n \\caption{Dataset #1 at full scale}\n \\label{fig:qualitative_comparison_#1_r1}\n \\end{subfigure}\n}\n\\begin{figure*}[!htbp]\n \\centering\n \\addDatasetResults{C}{david5}\n \\caption{Qualitative results of one of our datasets at four scales. Single-scale LFNs (left) trained at the full-resolution exhibit aliasing and flickering at lower scales. Our progressive multi-scale LFNs (right) encode all four scales into a single model and have reduced aliasing and flickering at lower scales.}\n \\label{fig:qualitative_comparison}\n\\end{figure*}\n\n\\newcommand{\\datasetScaledResults}[2]{\n \\begin{subfigure}[b]{0.17\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/#2\/mipnet_lod_scaled.pdf}\n \\caption{Dataset #1}\n \\label{fig:qualitative_scaled_#2}\n \\end{subfigure}\n}\n\\begin{figure*}[!htbp]\n \\centering\n \\datasetScaledResults{A}{jon}\n \\hfill\n \\datasetScaledResults{B}{david4}\n \\hfill\n \\datasetScaledResults{C}{david5}\n \\hfill\n \\datasetScaledResults{D}{sida1}\n \\hfill\n \\datasetScaledResults{E}{maria}\n \\caption{Scaled qualitative results from our progressive multi-scale LFNs at four levels of detail.}\n \\label{fig:qualtiative_scaled}\n\\end{figure*}\n\n\n\\section{Conclusion}\nIn this paper, we propose a method to encode multi-scale light field networks targeted for streaming.\nMulti-scale LFNs encode multiple levels of details at different scales to reduce aliasing and flickering at lower resolutions.\nOur multi-scale LFN features a progressive representation that encodes all levels of details into a single network using subsets of network weights. 
\nWith our method, light field networks can be progressively downloaded and rendered based on the desired level of detail for real-time streaming of 6DOF content.\nWe hope progressive multi-scale LFNs will help improve the real-time streaming of photorealistic characters and objects to real-time 3D desktop, AR, and VR applications~\\cite{li2020meteovis,du2019geollery}.\n\\section{Experiments}\nTo evaluate our progressive multi-scale LFN, we conduct experiments with four levels of detail. We first compare the quality when rendering at different scales from a standard single-scale LFN and our multi-scale LFN. We then perform an ablation comparing multiple LFNs trained at separate scales to our progressive network. Finally, we benchmark our multi-scale LFN to evaluate the speedup gained by utilizing fewer neurons when rendering at lower levels of detail. Additional details are in the supplementary material.\n\n\\subsection{Experimental Setup}\nFor our evaluation, we use datasets of real people consisting of $240$ synchronized images from our capture studio. Images are captured at $4032 \\times 3040$ resolution. Our datasets are processed with COLMAP~\\cite{schoenberger2016sfm,schoenberger2016mvs} to extract camera parameters and background matting~\\cite{lin2020matting} to focus on the foreground subject. In each dataset, images are split into groups of 216 training poses, 12 validation poses, and 12 test poses.\n\nWe quantitatively evaluate the encoding quality by computing the PSNR and SSIM metrics. Both metrics are computed on cropped images that tightly bound the subject to reduce the contribution of empty background pixels. We also measure the render time to determine the relative speedup at different levels of detail. Metrics are averaged across all images in the test split of each dataset. \n\nWhen training LFNs, we use a batch size of $8192$ rays with $67\\%$ of rays sampled from the foreground and $33\\%$ sampled from the background. For each epoch, we iterate through training images in batches of two images. In each image batch, we train on $50\\%$ of all foreground rays sampling rays across the two viewpoints. For our multi-scale progressive model and single-scale model, we train using $100$ epochs. For our ablation LFNs at lower resolutions, we train until the validation PSNR matches that of our progressive multi-scale LFN with a limit of $1000$ epochs. Each LFN is trained using a single NVIDIA RTX 2080 TI GPU.\n\n\n\\begin{table}[!htbp]\n \\caption{Model Parameters at Different Levels of Detail.}\n \\label{tab:model_parameters}\n \\centering\n \\scalebox{0.87}{\n \\begin{tabular}{lrrrr}\n \\toprule\n Level of Detail & 1 & 2 & 3 & 4\\\\\n \\midrule\n Model Layers & 10 & 10 & 10 & 10\\\\\n Layer Width & 128 & 256 & 384 & 512\\\\\n Parameters & 136,812 & 533,764 & 1,193,860 & 2,116,100\\\\\n Full Size (MB) & 0.518 & 2.036 & 4.554 & 8.072\\\\\n Target Scale & $1\/8$ & $1\/4$ & $1\/2$ & $1$\\\\\n Train Width & 504 & 1008 & 2016 & 4032\\\\\n Train Height & 380 & 760 & 1520 & 3040\\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\\subsection{Rendering Quality}\nWe first evaluate the rendering quality by comparing a standard single-scale LFN trained at the highest resolution with our multi-scale progressive LFN. \nFor the single-scale LFN, we train a model with $10$ layers and $512$ neurons per hidden layer on each of our datasets and render them at multiple scales: $1\/8$, $1\/4$, $1\/2$, and full scale. 
\nFor our multi-scale progressive LFN, we train an equivalently sized model with four LODs corresponding to each of the target scales. The lowest LOD targets the $1\/8$ scale and utilizes $128$ neurons from each hidden layer. Each increasing LOD uses an additional $128$ neurons. Model parameters for each LOD are laid out in \\autoref{tab:model_parameters}.\nNote that our qualitative results are cropped to the subject and hence have a smaller resolution.\nQualitative results are shown in \\autoref{fig:qualitative_comparison} with renders from both models placed side-by-side. \nQuantitative results are shown in \\autoref{tab:quantiative_quality}.\n\nAt the full scale, both single-scale and multi-scale models are trained to produce full-resolution images. As a result, the quality of the renders from both models is visually similar as shown in \\autoref{fig:qualitative_comparison}.\nAt $1\/2$ scale, the resolution is still sufficiently high so little to no aliasing can be seen in renders from either model. At $1\/4$ scale, we begin to see significant aliasing around figure outlines and shape borders in the single-scale model.\nThis is especially evident in \\autoref{fig:qualitative_comparison_C_r4} where the outlines in the cartoon graphic have an inconsistent thickness. \nOur multi-scale model has much less aliasing with the trade-off of having more blurriness. At the lowest level of detail, we see severe aliasing along the fine textures as well as along the borders of the object in the single-scale model. With a moving camera, the aliasing seen at the two lowest scales appears as flickering artifacts.\n\n\\begin{table}[!htbp]\n \\caption{Average Rendering Quality Over All Datasets.}\n \\newcommand{0.9}{0.9}\n \\label{tab:quantiative_quality}\n \\centering\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nSingle-scale LFN & 26.95 & 28.05 & 28.21 & 27.75 \\\\\nMultiple LFNs & 29.13 & 29.88 & 29.27 & 27.75 \\\\\nMulti-scale LFN & 29.37 & 29.88 & 29.01 & 28.12 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{PSNR (dB) at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:quantiative_quality_psnr}\n \\end{subfigure}\\\\\n \\vspace{2mm}\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nSingle-scale LFN & 0.8584 & 0.8662 & 0.8527 & 0.8480 \\\\\nMultiple LFNs & 0.8133 & 0.8572 & 0.8532 & 0.8480 \\\\\nMulti-scale LFN & 0.8834 & 0.8819 & 0.8626 & 0.8570 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{SSIM at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:quantiative_quality_ssim}\n \\end{subfigure}\n\\end{table}\n\n\\subsection{Progressive Model Ablation}\nWe next perform an ablation experiment to determine the benefits and drawbacks of our progressive representation.\nFor a non-progressive comparison, we represent each scale of a multi-scale light field using a separate LFN. 
\nEach LFN is trained using images downscaled to the appropriate resolution.\nThis ablation helps determine what training overhead our progressive LFN incurs by compressing four resolutions into a single model, and how its rendering quality compares to having separate models.\n\n\\begin{table}[!htbp]\n \\caption{Multiple LFNs \\textit{vs} Progressive Multi-scale LFN.}\n \\newcommand{0.83}{0.83}\n \\label{tab:nonprogressive_vs_progressive}\n \\centering\n \\begin{subfigure}[t]{\\linewidth}\n \\centering\n \\scalebox{0.83}{\n \\begin{tabular}{lccccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 & Total\\\\\n \\midrule\n Multiple LFNs & 0.518 & 2.036 & 4.554 & 8.072 & 15.180\\\\\n Multi-scale LFN & \\multicolumn{4}{c}{\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt\\;8.072\\;\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt} & 8.072\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Model Size (MB).}\n \\label{tab:nonprogressive_vs_progressive_model_size}\n \\end{subfigure}\\\\\n \\vspace{2mm}\n \\begin{subfigure}[t]{\\linewidth}\n \\centering\n \\scalebox{0.83}{\n \\begin{tabular}{lccccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 & Total\\\\\n \\midrule\nMultiple LFNs & 3.51 & 6.36 & 4.09 & 11.35 & 25.31\\\\\nMulti-scale LFN & \\multicolumn{4}{c}{\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt\\;17.78\\;\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt} & 17.78\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Average Training Time Over All Datasets (hours).}\n \\label{tab:nonprogressive_vs_progressive_training_time}\n \\end{subfigure}\n\\end{table}\n\nProgressive and non-progressive model sizes and training times are shown in \\autoref{tab:nonprogressive_vs_progressive}. In terms of model size, using multiple LFNs adds up to $15$ MB compared to $8$ MB for our progressive model. This $47\\%$ savings could allow more light fields to be stored or streamed where model size is a concern. \nFor the training time, we observe that training a multi-scale network takes on average $17.78$ hours. Additional offline training time for our multi-scale network compared to training a single standard LFN is expected since each training iteration involves two forward passes. Encoding a multi-scale light field using multiple independent LFNs instead requires training each network separately, totaling $25.31$ hours. Thus, despite compressing all four levels of detail into a single model, our progressive multi-scale LFN incurs no overhead relative to this baseline: on average, it saves $30\\%$ of training time compared to naively training multiple light field networks at different resolutions.\n\n\nQuantitative PSNR and SSIM quality metrics are shown in \\autoref{tab:quantiative_quality}. We observe that utilizing multiple LFNs to encode each scale of each light field yields better PSNR results compared to rendering from a single-scale LFN. However, utilizing multiple LFNs increases the total model size.\nOur multi-scale LFN matches or exceeds the PSNR of the separate LFNs at most levels of detail, achieves the best SSIM at every level, and does so without increasing the total model size.\nIn an on-demand streaming scenario, using only a single-scale model would incur visual artifacts and unnecessary computation at lower scales. Streaming separate models resolves these issues but requires more bandwidth. Neither option is ideal for battery life in mobile devices. 
Our multi-scale model alleviates the aliasing and flickering artifacts without incurring the drawbacks of storing and transmitting multiple models.\n\n\n\n\\begin{table}[!htbp]\n \\caption{Training Ablation Average Rendering Quality Results.}\n \\newcommand{0.9}{0.9}\n \\label{tab:ablation_quality}\n \\centering\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Ablation & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nProgressive Training & 26.30 & 30.33 & 29.08 & 28.10 \\\\\n108 Training Views & 29.04 & 29.63 & 28.93 & 27.98 \\\\\nOurs & 29.37 & 29.88 & 29.01 & 28.12 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{PSNR (dB) at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:ablation_quality_psnr}\n \\end{subfigure}\\\\\n \\vspace{2mm}\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nProgressive Training & 0.7947 & 0.8719 & 0.8447 & 0.8390 \\\\\n108 Training Views & 0.8765 & 0.8771 & 0.8604 & 0.8566 \\\\\nOurs & 0.8834 & 0.8819 & 0.8626 & 0.8570 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{SSIM at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:ablation_quality_ssim}\n \\end{subfigure}\n\\end{table}\n\n\\subsection{Training Ablation}\nTo evaluate our training strategy, we perform two ablation experiments.\nFirst, we compare our strategy to a coarse-to-fine progressive training strategy where lower levels of detail are trained and then frozen as higher levels are trained.\nProgressive training takes on average $24.20$ hours to train.\nSecond, we use half the number of training views to evaluate how the quality degrades with fewer training views.\nBoth experiments use our progressive multi-scale model architecture.\nOur results are shown in~\\autoref{tab:ablation_quality} with additional details in the supplementary material.\n\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/rendering_pipeline.pdf}\n \\caption{Overview of our rendering pipeline which utilizes an auxiliary network to skip evaluation of empty rays.}\n \\label{fig:rendering_pipeline}\n\\end{figure}\n\n\\begin{table}[!htbp]\n \\caption{Average Model Rendering Performance for our Multi-scale LFN (milliseconds per frame).}\n \\label{tab:performance_table_v2}\n \\begin{subfigure}[t]{\\linewidth}\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n LOD 1 & LOD 2 & LOD 3 & LOD 4\\\\\n \\hline \n3.7 & 3.9 & 4.7 & 5.9\\\\\n \\hline\n \\end{tabular}\n \\caption{Rendering each LOD at 1\/8 scale.}\n \\label{tab:performance_table_v2_8thscale}\n \\end{subfigure}\\\\%\n \\begin{subfigure}[!ht]{\\linewidth}\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n LOD 1 & LOD 2 & LOD 3 & LOD 4\\\\\n \\hline\n4 & 11 & 58 & 305\\\\\n \\hline\n \\end{tabular}\n \\caption{Rendering at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{tab:performance_table_v2_multiscale}\n \\end{subfigure}\n\\end{table}\n\n\\subsection{Level of Detail Rendering Speedup}\nWe evaluate the rendering speed of our progressive model at different levels of detail to determine the observable speedup from rendering with the appropriate level of detail. For optimal rendering performance, we render using half-precision floating-point values and skip empty rays using an auxiliary neural network encoding only the ray occupancy as shown in \\autoref{fig:rendering_pipeline}. 
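\nA possible implementation of this occupancy-masked rendering is sketched below (an illustrative PyTorch-style fragment with an assumed logit-producing auxiliary network, not our exact renderer):\n\\begin{verbatim}\nimport torch\n\n@torch.no_grad()\ndef render(lfn, aux_net, rays, lod, background=(1.0, 1.0, 1.0)):\n    # rays: (N, 6) ray coordinates covering every pixel of the image\n    rays = rays.half()                     # half-precision inference\n    hit = aux_net(rays).squeeze(-1) > 0.0  # occupancy prediction\n    colors = rays.new_tensor(background).expand(rays.shape[0], 3).clone()\n    if hit.any():\n        # full LFN forward pass only on occupied rays\n        colors[hit] = lfn(rays[hit], lod=lod).to(colors.dtype)\n    return colors                          # (N, 3), reshaped into the image\n\\end{verbatim}\n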
Our auxiliary network is made up of three layers with a width of 16 neurons per hidden layer. Evaluation is performed by rendering rays from all training poses with intrinsics scaled to the corresponding resolution as shown in \\autoref{tab:model_parameters}. \nAverage rendering time results are shown in \\autoref{tab:performance_table_v2}. \nRendering times for lower LODs (1\/8 scale) from our multi-scale network are reduced from $\\sim5.9$ to $\\sim3.7$ msec.\n\\section{Background and Related Works}\nOur work builds upon existing work in neural 3D representations, adaptive neural network inference, and levels of detail. In this section, we provide some background on neural representations and focus on prior work most related to ours. For a more comprehensive overview of neural rendering, we refer interested readers to a recent survey~\\cite{tewari2021advances}.\n\n\n\\begin{table}[!htbp]\n \\centering\n \\caption{Neural 3D Representations Comparison}\n \\label{tab:comparison_table}\n \\scalebox{0.80}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & Real-time & Model Size & Multi-scale & Progressive\\\\\n \\midrule\n NeRF & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n mip-NeRF & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n Plenoctree & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & $\\geq 30$ MB& \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} &\\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n LFN & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n Ours & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark}\\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\\subsection{Neural 3D Representations}\nTraditionally, 3D content has been encoded in a wide variety of formats such as meshes, point clouds, voxels, multi-plane images, and even side-by-side video.\nTo represent the full range of visual effects from arbitrary viewpoints, researchers have considered using radiance fields and light fields which can be encoded using neural networks. 
With neural networks, 3D data such as signed distance fields, radiance fields, and light fields can be efficiently encoded as coordinate functions without explicit limitations on the resolution.\n\n\\subsubsection{Neural Radiance Field (NeRF)}\nEarly neural representations focused on signed distance functions and radiance fields. Neural Radiance Fields (NeRF)~\\cite{mildenhall2020nerf} use differentiable volume rendering to encode multiple images of a scene into a radiance field encoded as a neural network. With a sufficiently dense set of images, NeRF is able to synthesize new views by using the neural network to interpolate radiance values across different positions and viewing directions. Since the introduction of NeRF, followup works have added support for deformable scenes \\cite{pumarola2020d,park2021nerfies,park2021hypernerf}, real-time inference \\cite{garbin2021fastnerf,reiser2021kilonerf,neff2021donerf,yu2021plenoxels},\nimproved quality \\cite{shen2021snerf}, generalization across scenes \\cite{yu2020pixelnerf,rebain2021lolnerf},\ngenerative modelling \\cite{schwarz2020graf,niemeyer2021campari,xie2021fignerf}, videos \\cite{li2021neural,xian2021space,wang2022fourier}, sparse views \\cite{Chibane_2021_CVPR,niemeyer2021regnerf,Jain_2021_ICCV}, fast training \\cite{mueller2022instant,yu2021plenoxels}, and even large-scale scenes \\cite{rematas2021urban,xiangli2021citynerf,turki2021meganerf}. \n\nAmong NeRF research, Mip-NeRF~\\cite{barron2021mipnerf,barron2022mipnerf360} and BACON~\\cite{lindell2021bacon} share a similar goal to our work in offering a multi-scale representation to reduce aliasing. \nMip-NeRF approximates the radiance across conical regions along the ray, cone-tracing, using integrated positional encoding (IPE). With cone-tracing, Mip-NeRF reduces aliasing while also slightly reducing training time.\nBACON~\\cite{lindell2021bacon} provide a multi-scale scene representation through band-limited networks using Multiplicative Filter Networks~\\cite{fathony2021multiplicative} and multiple exit points. Each scale in BACON has band-limited outputs in Fourier space.\n\n\\subsubsection{Light Field Network (LFN)}\nOur work builds upon Light Field Networks (LFNs)~\\cite{sitzmann2021lfns,feng2021signet,li2022neulf,chandramouli2021light}. Light field networks encode light fields~\\cite{levoy1996lightfieldrendering} using a coordinate-wise representation, where for each ray $\\mathbf{r}$, a neural network $f$ is used to directly encode the color $c=f(\\mathbf{r})$. To represent rays, Feng and Varshney \\cite{feng2021signet} pair the traditional two-plane parameterization with Gegenbauer polynomials while Sitzmann \\etal~\\cite{sitzmann2021lfns} use Pl\\\"{u}cker coordinates which combine the ray direction $\\mathbf{r}_d$ and ray origin $\\mathbf{r}_o$ into a 6-dimensional vector $(\\mathbf{r}_d, \\mathbf{r}_o \\times \\mathbf{r}_d)$. Light fields can also be combined with explicit representations~\\cite{ost2021point}. In our work, we adopt the Pl\\\"{u}cker coordinate parameterization which can represent rays in all directions.\n\nCompared to NeRF, LFNs are faster to render as they only require a single forward pass per ray, enabling real-time rendering. However, LFNs are worse at view synthesis as they lack the self-supervision that volume rendering provides with explicit 3D coordinates. As a result, LFNs are best trained from very dense camera arrays or from a NeRF teacher model such as in~\\cite{attal2022learning}. 
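\nFor reference, the Pl\\\"{u}cker input used by our networks can be computed directly from a ray origin and direction; a minimal sketch (illustrative, with direction normalization as an assumed convention):\n\\begin{verbatim}\nimport torch\n\ndef plucker(ray_o, ray_d):\n    # ray_o, ray_d: (N, 3) ray origins and directions in world space\n    d = torch.nn.functional.normalize(ray_d, dim=-1)  # unit direction\n    m = torch.cross(ray_o, d, dim=-1)  # moment; invariant to sliding\n                                       # the origin along the ray\n    return torch.cat([d, m], dim=-1)   # (N, 6) network input\n\\end{verbatim}\n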
Similar to NeRFs, LFNs are encoded as multi-layer perceptrons, hence they remain compact compared to voxel and octree NeRF representations~\\cite{yu2021plenoctrees,hedman2021snerg,garbin2021fastnerf,yu2021plenoxels} which use upwards of $50$ MB. Therefore LFNs strike a balance between fast rendering times and storage compactness which could make them suitable for streaming 3D scenes over the internet.\n\n\n\\subsection{Levels of Detail}\nLevel of detail methods are commonly used in computer graphics to improve performance and quality. For 2D textures, one of the most common techniques is mipmapping~\\cite{williams1983mipmap} which both reduces aliasing and improves performance with cache coherency by encoding textures at multiple resolutions. For neural representations, techniques for multiple levels of detail have also been proposed for signed distance functions as well as neural radiance fields.\n\nFor signed distance fields, Takikawa \\etal \\cite{takikawa2021neural} propose storing learned features within nodes of an octree to represent a signed distance function with multiple levels of detail. \nBuilding upon octree features, Chen \\etal \\cite{chen2021multiresolution} develop Multiresolution Deep Implicit Functions (MDIF) to represent 3D shapes which combines a global SDF representation with local residuals. \nFor radiance fields, Yang \\etal~\\cite{yang2021recursivenerf} develop Recursive-NeRF which grows a neural representation with multi-stage training which is able to reduce NeRF rendering time. \nBACON~\\cite{lindell2021bacon} offers levels of detail for various scene representations including 2D images and radiance fields with different scales encoding different bandwidths in Fourier space.\nPINs~\\cite{Landgraf2022PINs} uses a progressive fourier encoding to improve reconstruction for scenes with a wide frequency spectrum.\nFor more explicit representations, Variable Bitrate Neural Fields~\\cite{takikawa2022variable} use a vector-quantized auto-decoder to compress feature grids into lookup indices and a learned codebook. Levels of detail are obtained by learning compressed feature grids at multiple resolutions in a sparse octree.\nIn parallel, Streamable Neural Fields~\\cite{cho2022streamable} propose using progressive neural networks~\\cite{rusu2016progressive} to represent spectrally, spatially, or temporally growing neural representations.\n\n\\subsection{Adaptive Inference}\nCurrent methods use adaptive neural network inference based on available computational resources or the difficulty of each input example. Bolukbasi \\etal~\\cite{bolukbasi2017adaptive} propose two schemes for adaptive inference. Early exiting allows intermediate layers to route outputs directly to an exit head while network selection dynamically routes features to different networks. Both methods allow easy examples to be classified by a subset of neural network layers. Yeo \\etal \\cite{yeo2018neuraladaptive} apply early exiting to upscale streamed video. By routing intermediate CNN features to exit layers, a single super-resolution network can be partially downloaded and applied on a client device based on available compute. Similar ideas have also been used to adapt NLP to input complexity~\\cite{zhou2020bert,weijie2020fastbert}.\nYang \\etal~\\cite{yang2020resolutionadaptive} develop Resolution Adaptive Network (RANet) which extracts features and performs classification of an image progressively from low-resolution to high-resolution. 
Doing so sequentially allows efficient early exiting, as many images can be classified with only coarse features. Slimmable neural networks~\\cite{yu2018slimmable} train a model at different widths with switchable batch normalization and observe better performance than individual models on detection and segmentation tasks.\n\n\\section{Introduction}\n\nVolumetric images and video allow users to view 3D scenes from novel viewpoints with a full 6 degrees of freedom (6DOF). Compared to conventional 2D and 360 images and videos, volumetric content enables additional immersion with motion parallax and binocular stereo while also allowing users to freely explore the full scene.\n\n\nTraditionally, scenes generated with 3D modeling have been represented as meshes and materials. These classical representations are suitable for real-time rendering as well as streaming for 3D games and AR effects. However, real captured scenes are challenging to represent using the mesh and material representation, requiring complex reconstruction and texture fusion techniques \\cite{duo2016fusion4d,du2018montage4d}. To produce better photo-realistic renders, existing view-synthesis techniques have attempted to adapt these representations with multi-plane images (MPI)~\\cite{zhou2018stereo}, layered-depth images (LDI)~\\cite{shade1998ldi}, and multi-sphere images (MSI)~\\cite{attal2020matryodshka}. These representations allow captured scenes to be accurately represented with 6DOF but only within a limited area.\n\n\n\\begin{figure}[!tbp]\n \\includegraphics[width=\\linewidth]{figures\/maria\/teaser_v3_vertical.pdf}\n \\caption{Our progressive multi-scale light field network is suited for streaming, reduces aliasing and flickering, and renders more efficiently at lower resolutions.}\n \\label{fig:teaser}\n \\vspace{-5mm}\n\\end{figure}\n\nRadiance fields and light fields realistically represent captured scenes from an arbitrarily large set of viewpoints.\nRecently, neural representations such as neural radiance fields (NeRFs)~\\cite{mildenhall2020nerf} and light field networks (LFNs)~\\cite{sitzmann2021lfns} have been found to compactly represent 3D scenes and perform view synthesis, allowing re-rendering from arbitrary viewpoints. Given a dozen or more photographs of a scene captured from different viewpoints with a conventional camera, a NeRF or LFN can be optimized to accurately reproduce the original images and stored in less than 10 MB. Once trained, NeRFs can also produce images from new viewpoints with photo-realistic quality.\n\nWhile NeRFs are able to encode and reproduce real-life 3D scenes from arbitrary viewpoints with photorealistic accuracy, they suffer from several drawbacks which limit their use in a full on-demand video streaming pipeline. Notable drawbacks include slow rendering, aliasing at smaller scales, and a lack of progressive decoding. While some of these drawbacks have been independently addressed in follow-up work~\\cite{barron2021mipnerf,yu2021plenoctrees}, approaches to resolving them cannot always be easily combined and may also introduce additional limitations.\n\nIn this paper, we extend light field networks with multiple levels of detail and anti-aliasing at lower scales, making them more suitable for on-demand streaming. This is achieved by encoding multiple reduced-resolution representations into a single network. With multiple levels of detail, light field networks can be progressively streamed and rendered based on the desired scale or resource limitations. 
For free-viewpoint rendering and dynamic content, anti-aliasing is essential to reducing flickering when rendering at smaller scales. In summary, the contributions of our progressive multi-scale light field network are:\n\\begin{enumerate}\n\\item Composing multiple levels of detail within a single network takes up less space than storing all the levels independently.\n\\item The progressive nature of our model requires us to only download the parameters necessary up to the desired level of detail. \n\\item Our model allows rendering among multiple levels of detail simultaneously. This makes for a smooth transition between multiple levels of detail as well as spatially adaptive rendering (including foveated rendering) without requiring independent models at multiple levels of detail on the GPU.\n\\item Our model allows for faster inference of lower levels of detail by using fewer parameters.\n\\item We are able to encode light fields at multiple scales to reduce aliasing when rendering at lower resolutions.\n\\end{enumerate}\n\n\\section{Discussion}\nThe primary contribution of our paper is the identification of a representation more suitable for on-demand streaming. \nThis is achieved by enhancing light field networks, which support real-time rendering, to progressively support multiple levels of detail at different resolutions.\nOur representation inherits some of the limitations of LFNs such as long training times and worse view-synthesis quality compared to NeRF-based models.\nSince our method does not support continuous LODs, we require the user to decide the LODs ahead of time. For intermediate resolutions, rendering to the nearest LOD is sufficient to avoid aliasing and flickering.\nWhile our light field datasets could also be rendered with classical techniques~\\cite{levoy1996lightfieldrendering,xiao2014cvpr,peter2001wavelet}, doing so over the internet would not be as bandwidth-efficient.\nFuture work may combine our multi-scale light field networks with PRIF~\\cite{Feng2022PRIF} to encode color and geometry simultaneously or with VIINTER~\\cite{Feng2022Viinter} for dynamic light fields.\n
In contrast to bilinear or nearest pixel downsampling, area downsampling creates an anti-aliased view as shown in \\autoref{fig:image_downsampling_methods} where each pixel in the downsampled image is an average over a corresponding area in the full-resolution image.\n\n\\begin{figure}[!htb]\n \\centering\n \\captionsetup[subfigure]{labelformat=empty}\n \\begin{subfigure}{0.52\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/aliasing\/aliasing_fullres.pdf}\n \\vspace{-5.5mm}\n \\caption{Single-scale Light Field Network}\n \\end{subfigure}\\\\\n \\vspace{1mm}\n \\begin{subfigure}{0.52\\linewidth}\n \\includegraphics[width=\\linewidth]{figures\/aliasing\/aliasing_mipnet.pdf}\n \\vspace{-5.5mm}\n \\caption{Multi-scale Light Field Network}\n \\end{subfigure}\n \\caption{Rendering a single full-scale LFN at a lower (1\/8) resolution results in aliasing, unlike the multi-scale LFN at the same (1\/8) resolution. When changing viewpoints (3 shown above), the aliasing results in flickering.}\n \\label{fig:aliasing_figure}\n\\end{figure}\n\nAs with mipmaps, lower-resolution representations trade-off sharpness for less aliasing. In this case, where a light field is rendered into an image, sharpness may be preferable over some aliasing. However, in the case of videos where the light field is viewed from different poses or the light field is dynamic, sampling from a high-resolution light field leads to flickering. With a multi-scale representation, rendering light fields at smaller resolutions can be done at a lower scale to reduce flickering.\n\n\\subsection{Progressive Model}\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/model_illustration_subnet_lods.pdf}\n \\caption{Using subsets of neurons to encode different levels of detail within a single model.}\n \\label{fig:model_illustration_subnet}\n\\end{figure}\n\nFor light field streaming, we wish to have a compact representation that can simultaneously encode multiple levels of detail into a single progressive model. 
Specifically, we seek a model with the following properties:\nGiven specific subsets of network weights, we should be able to render a complete image with reduced details.\nProgressive levels of details should share weights with lower levels to eliminate encoding redundancy.\n\nWith the above properties, our model offers several benefits over a standard full-scale network or encoding multi-scale light fields using multiple networks.\nFirst, our model composes all scales within a single network hence requiring less storage space than storing four separate models.\nSecond, our model is progressive, thus only parameters necessary for the desired level of detail are needed in a streaming scenario.\nThird, our model allows rendering different levels of detail across pixels for dithered transitions and foveation.\nFourth, our model allows for faster inference of lower levels of detail by utilizing fewer parameters.\nModel sizes and training times appear in \\autoref{tab:nonprogressive_vs_progressive}.\n\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figures\/subnet_layer_illustration.pdf}\n \\caption{Illustration of a single layer where a subset of the weight matrix and bias vector are used, evaluating half of the neurons and reducing the operations down to approximately a quarter.}\n \\label{fig:subnet_layer_illustration}\n\\end{figure}\n\n\\subsubsection{Subset of neurons}\nTo encode multiple levels of detail within a unified representation, we propose using subsets of neurons as shown in \\autoref{fig:model_illustration_subnet}. For each level of detail, we assign a fixed number of neurons to evaluate, with increasing levels of detail using an increasing proportion of the neurons in the neural network. An illustration of the matrix subset operation applied to each layer is shown in \\autoref{fig:subnet_layer_illustration}. For example, a single linear layer with $512$ neurons will have a $512 \\times 512$ weight matrix and a bias vector of size $512$. For a low level of detail, we can assign a neuron subset size of $25\\%$ or $128$ neurons. Then each linear layer would use the top-left $128 \\times 128$ submatrix for the weights and the top $128$ subvector for the bias. To keep the input and output dimensions the same, the first hidden layer uses every column of the weight matrix and the final output layer uses every row of the weight matrix.\nUnlike using a subset of layers, as is done with early exiting~\\cite{bolukbasi2017adaptive,yeo2018neuraladaptive}, using a subset of neurons preserves the number of sequential non-linear activations between all levels of details. Since layer widths are typically larger than model depths, using subsets of neurons also allows more granular control over the quantity and quality of levels. \n\n\\subsubsection{Training} \\label{sec:mipnet_training}\nWith multiple levels of detail at varying resolutions, a modified training scheme is required to efficiently encode all levels of detail together. \nOn one hand, generating all levels of detail at each iteration heavily slows down training as each iteration requires an independent forward pass through the network. \nOn the other hand, selecting a single level of detail at each iteration or applying progressive training compromises the highest level of detail whose full weights would only get updated a fraction of the time.\n\nWe adopt an intermediate approach that focuses on training the highest level of detail. 
\nDuring training, each batch does a full forward pass through the entire network, producing pixels at the highest level of detail along with a forward pass at a randomly selected lower level of detail. The squared L2 losses for both the full detail and reduced detail are added together for a combined backward pass. By training at the highest level of detail, we ensure that every weight gets updated at each batch and that the highest level of detail is trained to the same number of iterations as a standard model. Specifically, given a batch of rays $\\{\\mathbf{r}_i\\}_{i=1}^{b}$, corresponding ground truth colors $\\{\\mathbf\\{\\mathbf{y}_i^c\\}_{i=1}^{b}\\}_{c=1}^{4}$ for four levels of details, and a random lower level of detail $k \\in \\{1,2,3\\}$, our loss function is:\n$$L = \\frac{1}{b}\\sum_{i=1}^b \\left[ \\left\\Vert f_4(\\mathbf{r}_i) - \\mathbf{y}_i^4\\right\\Vert^2_2 + \\left\\Vert f_k(\\mathbf{r}_i) - \\mathbf{y}_i^k\\right\\Vert^2_2 \\right]$$\nAs rays are sampled from the highest-resolution representation, each ray will not correspond to a unique pixel in the lower-resolution images in the image pyramid. Thus for lower levels of detail, corresponding colors are sampled using bilinear interpolation from the area downsampled training images.\n\n\\subsection{Adaptive Rendering}\nOur proposed model allows dynamic levels of detail on a per-ray\/per-pixel level or on a per-object level. As aliasing appears primarily at smaller scales, selecting the level of detail based on the distance to the viewer reduces aliasing and flickering for distant objects.\n\n\\subsubsection{Distance-based Level of Detail}\nWhen rendering light fields from different distances, the space between rays in object space is proportional to the distance between the object and the camera. Due to this, aliasing and flickering artifacts can occur at large distances when the object appears small. By selecting the level of detail based on the distance of the object from the camera or viewer, aliasing and flickering can be reduced. With fewer pixels to render, rendering smaller objects involves fewer forward passes through the light field network. Hence, the amount of operations is typically proportional to the number of pixels. By rendering smaller representations at lower LODs, we further improve the performance as pixels from lower LODs only perform a forward pass using a subset of weights.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/lod_transition_zoom.pdf}\n \\caption{With per-pixel level of detail, dithering can be applied to transition between levels of detail.}\n \\label{fig:transition_example}\n\\end{figure}\n\n\\subsubsection{Dithered Transitions}\nWith per-pixel LOD, transitioning between levels can be achieved with dithered rendering. By increasing the fraction of pixels rendered at the new level of detail in each frame, dithering creates the appearance of a fading transition between levels as shown in \\autoref{fig:transition_example}.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/foveation_example.pdf}\n \\caption{An example of foveated rendering from our progressive multi-scale LFN. Foveated rendering generates peripheral pixels at lower levels of detail to further improve performance in eye-tracked environments. In this example, foveation is centered over the left eye of the subject. 
}\n \\label{fig:foveation_example}\n\\end{figure}\n\n\\subsubsection{Foveated Rendering}\nIn virtual and augmented reality applications where real-time gaze information is available through an eye-tracker, foveated rendering~\\cite{Meng20203D,meng2018Kernel,li2021logrectilinear} can be applied to better focus rendering compute. As each pixel can be rendered at a separate level of detail, peripheral pixels can be rendered at a lower level of detail to improve the performance. \\autoref{fig:foveation_example} shows an example of foveation from our progressive multi-scale LFN.\n\n\\newcommand{0.24\\linewidth}{0.24\\linewidth}\n\\newcommand{\\addDatasetResults}[2]{\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r8.pdf}\n \\caption{Dataset #1 at 1\/8 scale}\n \\label{fig:qualitative_comparison_#1_r8}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r4.pdf}\n \\caption{Dataset #1 at 1\/4 scale}\n \\label{fig:qualitative_comparison_#1_r4}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r2.pdf}\n \\caption{Dataset #1 at 1\/2 scale}\n \\label{fig:qualitative_comparison_#1_r2}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/#2\/comparison2_r1.pdf}\n \\caption{Dataset #1 at full scale}\n \\label{fig:qualitative_comparison_#1_r1}\n \\end{subfigure}\n}\n\\begin{figure*}[!htbp]\n \\centering\n \\addDatasetResults{C}{david5}\n \\caption{Qualitative results of one of our datasets at four scales. Single-scale LFNs (left) trained at the full-resolution exhibit aliasing and flickering at lower scales. Our progressive multi-scale LFNs (right) encode all four scales into a single model and have reduced aliasing and flickering at lower scales.}\n \\label{fig:qualitative_comparison}\n\\end{figure*}\n\n\\newcommand{\\datasetScaledResults}[2]{\n \\begin{subfigure}[b]{0.17\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/#2\/mipnet_lod_scaled.pdf}\n \\caption{Dataset #1}\n \\label{fig:qualitative_scaled_#2}\n \\end{subfigure}\n}\n\\begin{figure*}[!htbp]\n \\centering\n \\datasetScaledResults{A}{jon}\n \\hfill\n \\datasetScaledResults{B}{david4}\n \\hfill\n \\datasetScaledResults{C}{david5}\n \\hfill\n \\datasetScaledResults{D}{sida1}\n \\hfill\n \\datasetScaledResults{E}{maria}\n \\caption{Scaled qualitative results from our progressive multi-scale LFNs at four levels of detail.}\n \\label{fig:qualtiative_scaled}\n\\end{figure*}\n\n\n\\section{Conclusion}\nIn this paper, we propose a method to encode multi-scale light field networks targeted for streaming.\nMulti-scale LFNs encode multiple levels of details at different scales to reduce aliasing and flickering at lower resolutions.\nOur multi-scale LFN features a progressive representation that encodes all levels of details into a single network using subsets of network weights. 
\nWith our method, light field networks can be progressively downloaded and rendered based on the desired level of detail for real-time streaming of 6DOF content.\nWe hope progressive multi-scale LFNs will help improve the real-time streaming of photorealistic characters and objects to real-time 3D desktop, AR, and VR applications~\\cite{li2020meteovis,du2019geollery}.\n\\section{Experiments}\nTo evaluate our progressive multi-scale LFN, we conduct experiments with four levels of detail. We first compare the quality when rendering at different scales from a standard single-scale LFN and our multi-scale LFN. We then perform an ablation comparing multiple LFNs trained at separate scales to our progressive network. Finally, we benchmark our multi-scale LFN to evaluate the speedup gained by utilizing fewer neurons when rendering at lower levels of detail. Additional details are in the supplementary material.\n\n\\subsection{Experimental Setup}\nFor our evaluation, we use datasets of real people consisting of $240$ synchronized images from our capture studio. Images are captured at $4032 \\times 3040$ resolution. Our datasets are processed with COLMAP~\\cite{schoenberger2016sfm,schoenberger2016mvs} to extract camera parameters and background matting~\\cite{lin2020matting} to focus on the foreground subject. In each dataset, images are split into groups of 216 training poses, 12 validation poses, and 12 test poses.\n\nWe quantitatively evaluate the encoding quality by computing the PSNR and SSIM metrics. Both metrics are computed on cropped images that tightly bound the subject to reduce the contribution of empty background pixels. We also measure the render time to determine the relative speedup at different levels of detail. Metrics are averaged across all images in the test split of each dataset. \n\nWhen training LFNs, we use a batch size of $8192$ rays with $67\\%$ of rays sampled from the foreground and $33\\%$ sampled from the background. For each epoch, we iterate through training images in batches of two images. In each image batch, we train on $50\\%$ of all foreground rays sampling rays across the two viewpoints. For our multi-scale progressive model and single-scale model, we train using $100$ epochs. For our ablation LFNs at lower resolutions, we train until the validation PSNR matches that of our progressive multi-scale LFN with a limit of $1000$ epochs. Each LFN is trained using a single NVIDIA RTX 2080 TI GPU.\n\n\n\\begin{table}[!htbp]\n \\caption{Model Parameters at Different Levels of Detail.}\n \\label{tab:model_parameters}\n \\centering\n \\scalebox{0.87}{\n \\begin{tabular}{lrrrr}\n \\toprule\n Level of Detail & 1 & 2 & 3 & 4\\\\\n \\midrule\n Model Layers & 10 & 10 & 10 & 10\\\\\n Layer Width & 128 & 256 & 384 & 512\\\\\n Parameters & 136,812 & 533,764 & 1,193,860 & 2,116,100\\\\\n Full Size (MB) & 0.518 & 2.036 & 4.554 & 8.072\\\\\n Target Scale & $1\/8$ & $1\/4$ & $1\/2$ & $1$\\\\\n Train Width & 504 & 1008 & 2016 & 4032\\\\\n Train Height & 380 & 760 & 1520 & 3040\\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\\subsection{Rendering Quality}\nWe first evaluate the rendering quality by comparing a standard single-scale LFN trained at the highest resolution with our multi-scale progressive LFN. \nFor the single-scale LFN, we train a model with $10$ layers and $512$ neurons per hidden layer on each of our datasets and render them at multiple scales: $1\/8$, $1\/4$, $1\/2$, and full scale. 
\nFor our multi-scale progressive LFN, we train an equivalently sized model with four LODs corresponding to each of the target scales. The lowest LOD targets the $1\/8$ scale and utilizes $128$ neurons from each hidden layer. Each increasing LOD uses an additional $128$ neurons. Model parameters for each LOD are laid out in \\autoref{tab:model_parameters}.\nNote that our qualitative results are cropped to the subject and hence have a smaller resolution.\nQualitative results are shown in \\autoref{fig:qualitative_comparison} with renders from both models placed side-by-side. \nQuantitative results are shown in \\autoref{tab:quantiative_quality}.\n\nAt the full scale, both single-scale and multi-scale models are trained to produce full-resolution images. As a result, the quality of the renders from both models is visually similar as shown in \\autoref{fig:qualitative_comparison}.\nAt $1\/2$ scale, the resolution is still sufficiently high so little to no aliasing can be seen in renders from either model. At $1\/4$ scale, we begin to see significant aliasing around figure outlines and shape borders in the single-scale model.\nThis is especially evident in \\autoref{fig:qualitative_comparison_C_r4} where the outlines in the cartoon graphic have an inconsistent thickness. \nOur multi-scale model has much less aliasing with the trade-off of having more blurriness. At the lowest level of detail, we see severe aliasing along the fine textures as well as along the borders of the object in the single-scale model. With a moving camera, the aliasing seen at the two lowest scales appears as flickering artifacts.\n\n\\begin{table}[!htbp]\n \\caption{Average Rendering Quality Over All Datasets.}\n \\newcommand{0.9}{0.9}\n \\label{tab:quantiative_quality}\n \\centering\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nSingle-scale LFN & 26.95 & 28.05 & 28.21 & 27.75 \\\\\nMultiple LFNs & 29.13 & 29.88 & 29.27 & 27.75 \\\\\nMulti-scale LFN & 29.37 & 29.88 & 29.01 & 28.12 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{PSNR (dB) at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:quantiative_quality_psnr}\n \\end{subfigure}\\\\\n \\vspace{2mm}\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nSingle-scale LFN & 0.8584 & 0.8662 & 0.8527 & 0.8480 \\\\\nMultiple LFNs & 0.8133 & 0.8572 & 0.8532 & 0.8480 \\\\\nMulti-scale LFN & 0.8834 & 0.8819 & 0.8626 & 0.8570 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{SSIM at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:quantiative_quality_ssim}\n \\end{subfigure}\n\\end{table}\n\n\\subsection{Progressive Model Ablation}\nWe next perform an ablation experiment to determine the benefits and drawbacks of our progressive representation.\nFor a non-progressive comparison, we represent each scale of a multi-scale light field using a separate LFN. 
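\n\nBefore turning to this comparison, we note what rendering at a given level of detail means at the level of a single hidden layer: only the first $128$, $256$, $384$, or $512$ neurons take part. A sketch (our illustration with hypothetical tensor names, not the actual implementation):\n\\begin{verbatim}\nimport torch\n\ndef hidden_layer_subset(x, W, b, width):\n    # x: (N, width) activations; W: (512, 512), b: (512,) of the full model.\n    # Only the top-left `width` rows\/columns are used at this level of detail.\n    return torch.relu(x @ W[:width, :width].T + b[:width])\n\nwidths = [128, 256, 384, 512]  # layer widths of the four levels of detail\n\\end{verbatim}\n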
\nEach LFN is trained using images downscaled to the appropriate resolution.\nThis ablation helps determine the training overhead our progressive LFNs encounter by compressing four resolutions into a single model and how the rendering quality compares to having separate models.\n\n\\begin{table}[!htbp]\n \\caption{Multiple LFNs \\textit{vs} Progressive Multi-scale LFN.}\n \\newcommand{0.83}{0.83}\n \\label{tab:nonprogressive_vs_progressive}\n \\centering\n \\begin{subfigure}[t]{\\linewidth}\n \\centering\n \\scalebox{0.83}{\n \\begin{tabular}{lccccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 & Total\\\\\n \\midrule\n Multiple LFNs & 0.518 & 2.036 & 4.554 & 8.072 & 15.180\\\\\n Multi-scale LFN & \\multicolumn{4}{c}{\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt\\;8.072\\;\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt} & 8.072\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Model Size (MB).}\n \\label{tab:nonprogressive_vs_progressive_model_size}\n \\end{subfigure}\\\\\n \\vspace{2mm}\n \\begin{subfigure}[t]{\\linewidth}\n \\centering\n \\scalebox{0.83}{\n \\begin{tabular}{lccccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 & Total\\\\\n \\midrule\nMultiple LFNs & 3.51 & 6.36 & 4.09 & 11.35 & 25.31\\\\\nMulti-scale LFN & \\multicolumn{4}{c}{\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt\\;17.78\\;\\leavevmode\\leaders\\hrule height 0.7ex depth \\dimexpr0.4pt-0.7ex\\hfill\\kern0pt} & 17.78\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Average Training Time Over All Datasets (hours).}\n \\label{tab:nonprogressive_vs_progressive_training_time}\n \\end{subfigure}\n\\end{table}\n\nProgressive and non-progressive model sizes and training times are shown in \\autoref{tab:nonprogressive_vs_progressive}. In terms of model size, using multiple LFNs adds up to $15$ MB compared to $8$ MB for our progressive model. This $47\\%$ savings could allow storing or streaming of more light fields where model size is a concern. \nFor the training time, we observe that training a multi-scale network takes on average $17.78$ hours. Additional offline training time for our multi-scale network compared to training a single standard LFN is expected since each training iteration involves two forward passes. Encoding a multi-scale light field using multiple independent LFNs requires training each network separately, totaling $25.31$ hours. Despite the higher compression achieved by our progressive multi-scale LFN, we observe little to no training overhead. On average, training our multi-scale progressive LFN saves $30\\%$ of training time compared to naively training multiple light field networks at different resolutions.\n\n\nQuantitative PSNR and SSIM quality metrics are shown in \\autoref{tab:quantiative_quality}. We observe that utilizing multiple LFNs to encode each scale of each light field yields better PSNR results compared to rendering from a single-scale LFN. However, utilizing multiple LFNs increases the total model size.\nOur multi-scale LFN achieves the superior PSNR and SSIM results between these two setups without increasing the total model size.\nIn an on-demand streaming scenario, using only a single-scale model would incur visual artifacts and unnecessary computation at lower scales. Streaming separate models resolves these issues but requires more bandwidth. Neither option is ideal for battery life in mobile devices. 
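\n\nTo make the bandwidth argument concrete, the per-level parameter counts of \\autoref{tab:model_parameters} translate into the following approximate download sizes (our own back-of-the-envelope arithmetic, assuming uncompressed 32-bit floats):\n\\begin{verbatim}\nparams = {1: 136_812, 2: 533_764, 3: 1_193_860, 4: 2_116_100}\nprev = 0\nfor lod, n in params.items():\n    total_mib = n * 4 \/ 2**20            # cumulative size at this LOD\n    delta_mib = (n - prev) * 4 \/ 2**20   # extra download to upgrade\n    print(lod, round(total_mib, 2), round(delta_mib, 2))\n    prev = n\n\\end{verbatim}\nA client that is satisfied with the lowest level of detail therefore needs only about half a megabyte, and each upgrade costs only the difference to the next level rather than a full model.\n\n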
Our multi-scale model alleviates the aliasing and flickering artifacts without incurring the drawbacks of storing and transmitting multiple models.\n\n\n\n\\begin{table}[!htbp]\n \\caption{Training Ablation Average Rendering Quality Results.}\n \\newcommand{0.9}{0.9}\n \\label{tab:ablation_quality}\n \\centering\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Ablation & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nProgressive Training & 26.30 & 30.33 & 29.08 & 28.10 \\\\\n108 Training Views & 29.04 & 29.63 & 28.93 & 27.98 \\\\\nOurs & 29.37 & 29.88 & 29.01 & 28.12 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{PSNR (dB) at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:ablation_quality_psnr}\n \\end{subfigure}\\\\\n \\vspace{2mm}\n \\begin{subfigure}[h]{\\linewidth}\n \\centering\n \\scalebox{0.9}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & LOD 1 & LOD 2 & LOD 3 & LOD 4 \\\\ \n \\midrule\nProgressive Training & 0.7947 & 0.8719 & 0.8447 & 0.8390 \\\\\n108 Training Views & 0.8765 & 0.8771 & 0.8604 & 0.8566 \\\\\nOurs & 0.8834 & 0.8819 & 0.8626 & 0.8570 \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{SSIM at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{fig:ablation_quality_ssim}\n \\end{subfigure}\n\\end{table}\n\n\\subsection{Training Ablation}\nTo evaluate our training strategy, we perform two ablation experiments.\nFirst, we compare our strategy to a coarse-to-fine progressive training strategy where lower levels of detail are trained and then frozen as higher levels are trained.\nProgressive training takes on average $24.20$ hours to train.\nSecond, we use half the number of training views to evaluate how the quality degrades with fewer training views.\nBoth experiments use our progressive multi-scale model architecture.\nOur results are shown in~\\autoref{tab:ablation_quality} with additional details in the supplementary material.\n\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/rendering_pipeline.pdf}\n \\caption{Overview of our rendering pipeline which utilizes an auxiliary network to skip evaluation of empty rays.}\n \\label{fig:rendering_pipeline}\n\\end{figure}\n\n\\begin{table}[!htbp]\n \\caption{Average Model Rendering Performance for our Multi-scale LFN (milliseconds per frame).}\n \\label{tab:performance_table_v2}\n \\begin{subfigure}[t]{\\linewidth}\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n LOD 1 & LOD 2 & LOD 3 & LOD 4\\\\\n \\hline \n3.7 & 3.9 & 4.7 & 5.9\\\\\n \\hline\n \\end{tabular}\n \\caption{Rendering each LOD at 1\/8 scale.}\n \\label{tab:performance_table_v2_8thscale}\n \\end{subfigure}\\\\%\n \\begin{subfigure}[!ht]{\\linewidth}\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n LOD 1 & LOD 2 & LOD 3 & LOD 4\\\\\n \\hline\n4 & 11 & 58 & 305\\\\\n \\hline\n \\end{tabular}\n \\caption{Rendering at 1\/8, 1\/4, 1\/2, and 1\/1 scale.}\n \\label{tab:performance_table_v2_multiscale}\n \\end{subfigure}\n\\end{table}\n\n\\subsection{Level of Detail Rendering Speedup}\nWe evaluate the rendering speed of our progressive model at different levels of detail to determine the observable speedup from rendering with the appropriate level of detail. For optimal rendering performance, we render using half-precision floating-point values and skip empty rays using an auxiliary neural network encoding only the ray occupancy as shown in \\autoref{fig:rendering_pipeline}. 
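\n\nA minimal sketch of this empty-ray culling (our illustration; the occupancy threshold and tensor names are assumptions rather than details taken from our implementation):\n\\begin{verbatim}\nimport torch\n\n@torch.no_grad()\ndef render_colors(rays, lfn, occupancy_net, threshold=0.5):\n    # rays: (N, 6) ray coordinates, cast to half precision for speed\n    rays = rays.half()\n    occ = torch.sigmoid(occupancy_net(rays)).squeeze(-1)\n    hit = occ > threshold\n    colors = torch.zeros(rays.shape[0], 3,\n                         dtype=rays.dtype, device=rays.device)\n    if hit.any():\n        # the full LFN is evaluated only on non-empty rays\n        colors[hit] = lfn(rays[hit])\n    return colors  # empty rays keep the background color\n\\end{verbatim}\n\n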
Our auxiliary network is made up of three layers with a width of 16 neurons per hidden layer. Evaluation is performed by rendering rays from all training poses with intrinsics scaled to the corresponding resolution as shown in \\autoref{tab:model_parameters}. \nAverage rendering time results are shown in \\autoref{tab:performance_table_v2}. \nRendering times for lower LODs (1\/8 scale) from our multi-scale network are reduced from $\\sim5.9$ to $\\sim3.7$ msec.\n\\section{Background and Related Works}\nOur work builds upon existing work in neural 3D representations, adaptive neural network inference, and levels of detail. In this section, we provide some background on neural representations and focus on prior work most related to ours. For a more comprehensive overview of neural rendering, we refer interested readers to a recent survey~\\cite{tewari2021advances}.\n\n\n\\begin{table}[!htbp]\n \\centering\n \\caption{Neural 3D Representations Comparison}\n \\label{tab:comparison_table}\n \\scalebox{0.80}{\n \\begin{tabular}{lcccc}\n \\toprule\n Model & Real-time & Model Size & Multi-scale & Progressive\\\\\n \\midrule\n NeRF & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n mip-NeRF & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n Plenoctree & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & $\\geq 30$ MB& \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} &\\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n LFN & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} & \\resizebox{\\widthof{\\scrossmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scrossmark} \\\\\n Ours & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & $\\leq 10$ MB & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark} & \\resizebox{\\widthof{\\scheckmark}*\\ratio{\\widthof{x}}{\\widthof{\\normalsize x}}}{!}{\\scheckmark}\\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\\subsection{Neural 3D Representations}\nTraditionally, 3D content has been encoded in a wide variety of formats such as meshes, point clouds, voxels, multi-plane images, and even side-by-side video.\nTo represent the full range of visual effects from arbitrary viewpoints, researchers have considered using radiance fields and light fields which can be encoded using neural networks. 
With neural networks, 3D data such as signed distance fields, radiance fields, and light fields can be efficiently encoded as coordinate functions without explicit limitations on the resolution.\n\n\\subsubsection{Neural Radiance Field (NeRF)}\nEarly neural representations focused on signed distance functions and radiance fields. Neural Radiance Fields (NeRF)~\\cite{mildenhall2020nerf} use differentiable volume rendering to encode multiple images of a scene into a radiance field encoded as a neural network. With a sufficiently dense set of images, NeRF is able to synthesize new views by using the neural network to interpolate radiance values across different positions and viewing directions. Since the introduction of NeRF, followup works have added support for deformable scenes \\cite{pumarola2020d,park2021nerfies,park2021hypernerf}, real-time inference \\cite{garbin2021fastnerf,reiser2021kilonerf,neff2021donerf,yu2021plenoxels},\nimproved quality \\cite{shen2021snerf}, generalization across scenes \\cite{yu2020pixelnerf,rebain2021lolnerf},\ngenerative modelling \\cite{schwarz2020graf,niemeyer2021campari,xie2021fignerf}, videos \\cite{li2021neural,xian2021space,wang2022fourier}, sparse views \\cite{Chibane_2021_CVPR,niemeyer2021regnerf,Jain_2021_ICCV}, fast training \\cite{mueller2022instant,yu2021plenoxels}, and even large-scale scenes \\cite{rematas2021urban,xiangli2021citynerf,turki2021meganerf}. \n\nAmong NeRF research, Mip-NeRF~\\cite{barron2021mipnerf,barron2022mipnerf360} and BACON~\\cite{lindell2021bacon} share a similar goal to our work in offering a multi-scale representation to reduce aliasing. \nMip-NeRF approximates the radiance across conical regions along the ray, cone-tracing, using integrated positional encoding (IPE). With cone-tracing, Mip-NeRF reduces aliasing while also slightly reducing training time.\nBACON~\\cite{lindell2021bacon} provide a multi-scale scene representation through band-limited networks using Multiplicative Filter Networks~\\cite{fathony2021multiplicative} and multiple exit points. Each scale in BACON has band-limited outputs in Fourier space.\n\n\\subsubsection{Light Field Network (LFN)}\nOur work builds upon Light Field Networks (LFNs)~\\cite{sitzmann2021lfns,feng2021signet,li2022neulf,chandramouli2021light}. Light field networks encode light fields~\\cite{levoy1996lightfieldrendering} using a coordinate-wise representation, where for each ray $\\mathbf{r}$, a neural network $f$ is used to directly encode the color $c=f(\\mathbf{r})$. To represent rays, Feng and Varshney \\cite{feng2021signet} pair the traditional two-plane parameterization with Gegenbauer polynomials while Sitzmann \\etal~\\cite{sitzmann2021lfns} use Pl\\\"{u}cker coordinates which combine the ray direction $\\mathbf{r}_d$ and ray origin $\\mathbf{r}_o$ into a 6-dimensional vector $(\\mathbf{r}_d, \\mathbf{r}_o \\times \\mathbf{r}_d)$. Light fields can also be combined with explicit representations~\\cite{ost2021point}. In our work, we adopt the Pl\\\"{u}cker coordinate parameterization which can represent rays in all directions.\n\nCompared to NeRF, LFNs are faster to render as they only require a single forward pass per ray, enabling real-time rendering. However, LFNs are worse at view synthesis as they lack the self-supervision that volume rendering provides with explicit 3D coordinates. As a result, LFNs are best trained from very dense camera arrays or from a NeRF teacher model such as in~\\cite{attal2022learning}. 
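\n\nConcretely, querying an LFN in this parameterization only requires assembling the six-dimensional Pl\\\"{u}cker vector and performing a single forward pass per ray; a schematic example (our sketch with a placeholder network, not a model from the literature):\n\\begin{verbatim}\nimport torch\n\ndef plucker(ray_origin, ray_dir):\n    # 6D Pluecker coordinates (d, o x d) for a batch of rays\n    d = torch.nn.functional.normalize(ray_dir, dim=-1)\n    m = torch.cross(ray_origin, d, dim=-1)\n    return torch.cat([d, m], dim=-1)        # shape (N, 6)\n\nlfn = torch.nn.Sequential(                  # placeholder MLP\n    torch.nn.Linear(6, 256), torch.nn.ReLU(),\n    torch.nn.Linear(256, 3))\norigins, dirs = torch.randn(1024, 3), torch.randn(1024, 3)\ncolors = lfn(plucker(origins, dirs))        # one pass per ray\n\\end{verbatim}\n\n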
Similar to NeRFs, LFNs are encoded as multi-layer perceptrons, hence they remain compact compared to voxel and octree NeRF representations~\\cite{yu2021plenoctrees,hedman2021snerg,garbin2021fastnerf,yu2021plenoxels} which use upwards of $50$ MB. Therefore LFNs strike a balance between fast rendering times and storage compactness which could make them suitable for streaming 3D scenes over the internet.\n\n\n\\subsection{Levels of Detail}\nLevel of detail methods are commonly used in computer graphics to improve performance and quality. For 2D textures, one of the most common techniques is mipmapping~\\cite{williams1983mipmap} which both reduces aliasing and improves performance with cache coherency by encoding textures at multiple resolutions. For neural representations, techniques for multiple levels of detail have also been proposed for signed distance functions as well as neural radiance fields.\n\nFor signed distance fields, Takikawa \\etal \\cite{takikawa2021neural} propose storing learned features within nodes of an octree to represent a signed distance function with multiple levels of detail. \nBuilding upon octree features, Chen \\etal \\cite{chen2021multiresolution} develop Multiresolution Deep Implicit Functions (MDIF) to represent 3D shapes which combines a global SDF representation with local residuals. \nFor radiance fields, Yang \\etal~\\cite{yang2021recursivenerf} develop Recursive-NeRF which grows a neural representation with multi-stage training which is able to reduce NeRF rendering time. \nBACON~\\cite{lindell2021bacon} offers levels of detail for various scene representations including 2D images and radiance fields with different scales encoding different bandwidths in Fourier space.\nPINs~\\cite{Landgraf2022PINs} uses a progressive fourier encoding to improve reconstruction for scenes with a wide frequency spectrum.\nFor more explicit representations, Variable Bitrate Neural Fields~\\cite{takikawa2022variable} use a vector-quantized auto-decoder to compress feature grids into lookup indices and a learned codebook. Levels of detail are obtained by learning compressed feature grids at multiple resolutions in a sparse octree.\nIn parallel, Streamable Neural Fields~\\cite{cho2022streamable} propose using progressive neural networks~\\cite{rusu2016progressive} to represent spectrally, spatially, or temporally growing neural representations.\n\n\\subsection{Adaptive Inference}\nCurrent methods use adaptive neural network inference based on available computational resources or the difficulty of each input example. Bolukbasi \\etal~\\cite{bolukbasi2017adaptive} propose two schemes for adaptive inference. Early exiting allows intermediate layers to route outputs directly to an exit head while network selection dynamically routes features to different networks. Both methods allow easy examples to be classified by a subset of neural network layers. Yeo \\etal \\cite{yeo2018neuraladaptive} apply early exiting to upscale streamed video. By routing intermediate CNN features to exit layers, a single super-resolution network can be partially downloaded and applied on a client device based on available compute. Similar ideas have also been used to adapt NLP to input complexity~\\cite{zhou2020bert,weijie2020fastbert}.\nYang \\etal~\\cite{yang2020resolutionadaptive} develop Resolution Adaptive Network (RANet) which extracts features and performs classification of an image progressively from low-resolution to high-resolution. 
Doing so sequentially allows efficient early exiting as many images can be classified with only coarse features. Slimmable neural networks~\\cite{yu2018slimmable} train a model at different widths with a switchable batch normalization and observe better performance than individual models at detection and segmentation tasks.\n\n\\section{Introduction}\n\nVolumetric images and video allow users to view 3D scenes from novel viewpoints with a full 6 degrees of freedom (6DOF). Compared to conventional 2D and 360 images and videos, volumetric content enables additional immersion with motion parallax and binocular stereo while also allowing users to freely explore the full scene.\n\n\nTraditionally, scenes generated with 3D modeling have been commonly represented as meshes and materials. These classical representations are suitable for real-time rendering as well as streaming for 3D games and AR effects. However, real captured scenes are challenging to represent using the mesh and material representation, requiring complex reconstruction and texture fusion techniques \\cite{duo2016fusion4d,du2018montage4d}. To produce better photo-realistic renders, existing view-synthesis techniques have attempted to adapt these representations with multi-plane images (MPI)~\\cite{zhou2018stereo}, layered-depth images (LDI)~\\cite{shade1998ldi}, and multi-sphere images (MSI)~\\cite{attal2020matryodshka}. These representations allow captured scenes to be accurately represented with 6DOF but only within a limited area.\n\n\n\\begin{figure}[!tbp]\n \\includegraphics[width=\\linewidth]{figures\/maria\/teaser_v3_vertical.pdf}\n \\caption{Our progressive multi-scale light field network is suited for streaming, reduces aliasing and flicking, and renders more efficiently at lower resolutions.}\n \\label{fig:teaser}\n \\vspace{-5mm}\n\\end{figure}\n\nRadiance fields and light fields realistically represent captured scenes from an arbitrarily large set of viewpoints.\nRecently, neural representations such as neural radiance fields (NeRFs)~\\cite{mildenhall2020nerf} and light field networks (LFNs)~\\cite{sitzmann2021lfns} have been found to compactly represent 3D scenes and perform view-synthesis allowing re-rendering from arbitrary viewpoints. Given a dozen or more photographs capturing a scene from different viewpoints from a conventional camera, a NeRF or LFN can be optimized to accurately reproduce the original images and stored in less than 10 MB. Once trained, NeRFs will also produce images from different viewpoints with photo-realistic quality.\n\nWhile NeRFs are able to encode and reproduce real-life 3D scenes from arbitrary viewpoints with photorealistic accuracy, they suffer from several drawbacks which limit their use in a full on-demand video streaming pipeline. Some notable drawbacks include slow rendering, aliasing at smaller scales, and a lack of progressive decoding. While some of these drawbacks have been independently addressed in follow-up work~\\cite{barron2021mipnerf,yu2021plenoctrees}, approaches to resolving them cannot always be easily combined and may also introduce additional limitations.\n\nIn this paper, we extend light field networks with multiple levels of detail and anti-aliasing at lower scales making them more suitable for on-demand streaming. This is achieved by encoding multiple reduced-resolution representations into a single network. With multiple levels of details, light field networks can be progressively streamed and rendered based on the desired scale or resource limitations. 
For free-viewpoint rendering and dynamic content, anti-aliasing is essential to reducing flickering when rendering at smaller scales. In summary, the contributions of our progressive multi-scale light field network are:\n\\begin{enumerate}\n\\item Composing multiple levels of detail within a single network takes up less space than storing all the levels independently.\n\\item The progressive nature of our model requires us to only download the parameters necessary up to the desired level of detail. \n\\item Our model allows rendering among multiple levels of detail simultaneously. This makes for a smooth transition between multiple levels of detail as well as spatially adaptive rendering (including foveated rendering) without requiring independent models at multiple levels of detail on the GPU.\n\\item Our model allows for faster inference of lower levels of detail by using fewer parameters.\n\\item We are able to encode light fields at multiple scales to reduce aliasing when rendering at lower resolutions.\n\\end{enumerate}\n\n\\section{Discussion}\nThe primary contribution of our paper is the identification of a representation more suitable for on-demand streaming. \nThis is achieved by enhancing light field networks, which support real-time rendering, to progressively support multiple levels of detail at different resolutions.\nOur representation inherits some of the limitations of LFNs such as long training times and worse view-synthesis quality compared to NeRF-based models.\nSince our method does not support continuous LODs, we require the user to decide the LODs ahead of time. For intermediate resolutions, rendering to the nearest LOD is sufficient to avoid aliasing and flickering.\nWhile our light field datasets could also be rendered with classical techniques~\\cite{levoy1996lightfieldrendering,xiao2014cvpr,peter2001wavelet}, doing so over the internet would not be as bandwidth-efficient.\nFuture work may combine our multi-scale light field networks with PRIF~\\cite{Feng2022PRIF} to encode color and geometry simultaneously or with VIINTER~\\cite{Feng2022Viinter} for dynamic light fields.","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Supplemental Material\\\\for ``Noncommuting Momenta of Topological Solitons\"}\n\n\nHere we extend our analysis for homogeneous spaces $G\/H$ to general K\\\"ahler manifold.\n\nSuppose ${\\cal M}$ is a K\\\"ahler manifold, whose metric $g_{a\\bar{b}} = \\partial_a \\bar{\\partial}_{\\bar{b}} K$ is given by the K\\\"ahler potential $K(z^a, \\bar{z}^{\\bar{b}})$. Then the K\\\"ahler form $\\omega = i g_{a\\bar{b}}\\mathrm{d}z^a \\wedge\\mathrm{d}\\bar{z}^{\\bar{b}}$ is a nontrivial element of $H^2({\\cal M})$. The general form of the Lagrangian in $2+1$ dimension is \n\\begin{equation}\n\\mathcal{L} = \\frac{i}{2}\\big(\\bar{\\partial}_{\\bar{b}} K \\dot{\\bar{z}}^{\\bar{b}}-\\partial_a K \\dot{z}^a\\big)\n+ g^T_{a\\bar{b}} \\dot{z}^a \\dot{\\bar{z}}^{\\bar{b}}\n- g^S_{a\\bar{b}} \\nabla_i{z}^a \\nabla_i{\\bar{z}}^{\\bar{b}}\\ .\n\\end{equation}\nHere we made it clear that the K\\\"ahler metrics $g^T_{a\\bar{b}}=\\partial_a \\bar{\\partial}_{\\bar{b}} K^T$, $g^S_{a\\bar{b}}=\\partial_a \\bar{\\partial}_{\\bar{b}} K^S$ may in general be different from $g_{a\\bar{b}}$ obtained from $K$. It is convenient to introduce a complex notation for the spatial coordinates $w = x+iy$, $\\nabla = \\frac{1}{2} (\\nabla_x - i \\nabla_y)$. 
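\nIn this notation one has, for instance (a small consistency check spelled out for convenience),\n\\[\n\\nabla_i z^a \\nabla_i \\bar{z}^{\\bar{b}} = 2\\left(\\nabla z^a \\bar{\\nabla}\\bar{z}^{\\bar{b}} + \\bar{\\nabla} z^a \\nabla\\bar{z}^{\\bar{b}}\\right),\\qquad \\bar{\\nabla}=\\frac{1}{2}\\left(\\nabla_x + i \\nabla_y\\right),\n\\]\nwhich is the decomposition used in the estimate of the energy below.\n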
The energy $E$ of a static field configuration is given by\n\\begin{eqnarray}\nE&=& \\int d^2 x\\,g^S_{a\\bar{b}} \\nabla_i{z}^a \\nabla_i{\\bar{z}}^{\\bar{b}}\\nonumber\\\\\n&=& \\int d^2 x\\,2g^S_{a\\bar{b}}\\left[\\left(\\nabla z^a \\bar{\\nabla}{\\bar{z}}^{\\bar{b}} - \\bar{\\nabla}{z}^a \\nabla\\bar{z}^{\\bar{b}}\\right)+2 \\bar{\\nabla}{z}^a \\nabla\\bar{z}^{\\bar{b}} \\right] \\nonumber \\\\\n&\\geq& \\int d^2 x\\,ig^S_{a\\bar{b}}\\epsilon^{ij}\\nabla_iz^a \\nabla_j\\bar{z}^{\\bar{b}} \n= 2\\pi\\hbar n_0 N^S\\ .\n\\end{eqnarray}\nHere we used the fact that $g^S_{a\\bar{b}}$ is positive definite to derive the inequality. The last expression is nothing but the pull-back of the K\\\"ahler form $\\omega^S = i g_{a\\bar{b}}^S\\mathrm{d}z^a \\wedge\\mathrm{d}\\bar{z}^{\\bar{b}}$. To minimize the energy for a fixed $N^S>0$, we need $\\bar{\\nabla}{z}^a = 0$, namely $z^a(w)$ is a holomorphic map ${\\mathbb C}\\rightarrow {\\cal M}$ (anti-holomorphic for $N^S<0$). Expanding the solution for its translational moduli, $z^a = z^a(w-w_0(t))$ with $w_0=x_0+iy_0$, the first term in the Lagrangian becomes\n\\begin{eqnarray}\n\\lefteqn{\n\\int d^2 x\\,\\frac{i}{2}\\big(\\bar{\\partial}_{\\bar{b}} K \\dot{\\bar{z}}^{\\bar{b}}-\\partial_a K \\dot{z}^a\\big)}\\nonumber \\\\\n&=&\\left[\\int d^2 x\\,2g_{a\\bar{b}}(\\nabla z^a\\bar{\\nabla}\\bar{z}^{\\bar{b}}-\\bar{\\nabla}z^a\\nabla\\bar{z}^{\\bar{b}})\\right]\\frac{1}{4i}(\\bar{w}_0\\dot{w}_0 - \\dot{\\bar{w}}_0w_0) \\nonumber \\\\\n&=&\\left[\\int d^2 x\\,ig_{a\\bar{b}}\\epsilon^{ij}\\nabla_iz^a\\nabla_j\\bar{z}^{\\bar{b}}\\right]\\frac{1}{2}\\epsilon_{ij} x_0^i \\dot{x}_0^j \\nonumber \\\\\n&=&(2\\pi \\hbar n_0 N) \\frac{1}{2} \\epsilon_{ij} x_0^i \\dot{x}_0^j,\n\\end{eqnarray}\nfor a nontrivial K\\\"ahler class. The skyrmion for a ferromagnet is a special case of this general consideration.\n\nWe believe the analysis would extend to even wider class of target manifolds as long as they permit presymplectic structure and have a nontrivial $H^2$.\n\n\n\\end{document}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt was shown by Schwinger\\cite{Schwinger} that electrons get an\nanomalous magnetic moment $\\mu^\\prime=\\alpha\/2\\pi\\mu_B$ (being\n$\\mu_B$ the Bohr magneton) due to QED radiative corrections. By\nconsidering the propagation of electromagnetic radiation in vacuum\nin presence of an external magnetic field, Shabad\n\\cite{shabad1,shabad2} showed the drastic departure of the photon\ndispersion equation from the light cone curve near the energy\nthresholds for free pair creation. Three different modes of photon\npropagation were found according to the eigenvalues of the\npolarization operator in presence of a magnetic field. Some years\nlater, the same property was obtained \\cite{usov1,usov2} in the\nvicinity of the threshold for positronium creation. As a result, the\nproblem of the propagation of light in empty space, in presence of\nan external magnetic field is similar to the problem of the\ndispersion of light in an anisotropic medium\\cite{proceeding}, where\nthe role of the medium is played by the polarized vacuum in the\nexternal magnetic field. An anisotropy is created by the preferred\ndirection in space along $\\textbf{B}$. 
In this context we can mention other characteristics of the magnetized quantum vacuum, such as birefringence\cite{adler,uso3} (which plays an important role in the photon splitting and capture effects) and the vacuum magnetization \cite{Elizabeth}.\n\nFrom the previous paragraph we may conjecture that, similar to the case of the electron\cite{Schwinger}, a photon anomalous magnetic moment might also exist as a consequence of radiative corrections in the presence of the magnetic field. We want to show in what follows that this conjecture is true: a photon having a nonvanishing momentum component orthogonal to a constant magnetic field exhibits such an anomalous magnetic moment due to its interaction with the virtual electron-positron pairs in the magnetized vacuum where it propagates.\n\nIn order to calculate this intrinsic photon property we recall that Schwinger's result \cite{Schwinger} was obtained from a weak-field approximation of the electron Green's function in the self-energy operator. Instead of using such an approximation in the calculation of the photon anomalous magnetic moment, we prefer to use the exact form of the polarization operator eigenvalues calculated by Batalin and Shabad \cite{batalin, shabad1}, which provides information about the three photon propagation modes in the external classical magnetic field. From it we get the photon equation of motion, in which we may take the weak as well as the strong field limits. In either case, we find that the photon acquires a magnetic moment along the external field which is proportional to the electron anomalous magnetic moment.\n\nIn the zero-field case the photon magnetic moment strictly vanishes, but for small fields $\vert\textbf{B}\vert \ll B_c$ (where $B_c=m^2\/e\simeq 4.4 \times 10^{13}$ G is the Schwinger critical field) it is significant for a wide range of frequencies. This might be interesting in an astrophysical context, for instance in the propagation and deflection of light near a dense magnetized object.\n\nMoreover, the photon anomalous magnetic moment in the case of strong magnetic fields may play an important role in the propagation of electromagnetic radiation through the magnetospheres of neutron stars \cite{shabad3} and in other stellar objects where large magnetic fields arise, $\textbf{B}\gtrsim \textbf{B}_c$. The free and bound pair creation of electrons and positrons at the thresholds\cite{Hugo1,AO,Leinson} is related to the production and propagation of $\gamma$ radiation\cite{denisov,harding}.\n\nThe paper has the following structure: in Sec. II, starting from the general solution of the dispersion equation of the polarization operator, we analyze the interaction term and define a magnetic moment of the photon. In Sec. III we obtain from \cite{shabad1} the three eigenvalues of the polarization operator calculated in the one-loop approximation in the weak-field limit, and the expression of the magnetic moment in this field regime.\n\nIn Secs. IV and V the magnetic moment is obtained in the strong magnetic field limit for the photon and the photon-positronium mixed state when both particles are created in the Landau ground state $n^\prime=n=0$. We will refer also to the case when one of the particles appears in the first excited state $n=1$, and the other in the Landau ground state. 
We discuss mainly the so-called second mode\nof propagation near the first pair creation threshold\n$n^\\prime=n=0$, since as it was studied in \\cite{shabad3}, in the\nrealistic conditions of production and propagation of $\\gamma-$\nquanta the third mode decays into the second mode via the photon\nsplitting $\\gamma\\rightarrow\\gamma\\gamma$ process \\cite{adler,uso3}.\nMoreover, higher thresholds are damped by the quasi-stationarity of\nthe electron and positron states on excited Landau levels $n\\geq1$\nor $n^\\prime\\geq1$, which may fall down to the ground state with a\nphoton emitted.\n\nFinally in the Sec. VI the results are analyzed and some remarks and\nconclusions are given in Sec. VI on the basis of comparing the\nphoton behavior with that of a neutral massive particle with\nnon-vanishing magnetic moment which interacts with the external\nfield $\\vert\\bf{B}\\vert \\simeq B_c$.\n\n\n\n\\section{The Interaction Energy and The Magnetic Moment of the Radiation}\n\nIn paper\\cite{batalin} it was shown that the presence of the\nconstant magnetic field, creates, in addition to the photon momentum\nfour-vector $k_\\mu$, three other four-vectors, $F_{\\mu \\rho}k^\\rho$,\n$F^2_{\\mu \\rho}k^{\\rho}$, $F^{*}_{\\mu \\rho}k^{\\rho}$, where $F_{\\mu\n\\nu}=\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu$ is the electromagnetic\nfield tensor and $F^*_{\\mu \\nu}=\\frac{i}{2}\\epsilon_{\\mu \\nu \\rho\n\\kappa}F^{\\rho \\kappa}$ its dual pseudotensor. One get from these\nfour-vectors three basic independent scalars $k^2$, $kF^2k$,\n$kF^{*2}k$, which in addition to the field invariant ${\\cal\nF}=\\frac{1}{4}F_{\\mu \\rho}F^{\\rho \\mu}=\\frac{1}{2}B^2$, are a set of\nfour basic scalars of our problem.\n\nIn correspondence to each eigenvalue $\\pi^{(i)}_{n,n^\\prime}$,\n$i=1,2,3$, the polarization tensor has an eigenvector\n$a^{(i)}_\\mu$(x). The basic vectors \\cite{shabad1} are obtained by\nnormalizing the set of four vectors $C^{1}_\\mu= k^2 F^2_{\\mu\n\\lambda}k^\\lambda-k_\\mu (kF^2 k)$, $C^{2}_\\mu=F^{*}_{\\mu\n\\lambda}k^\\lambda$, $C^{3}_\\mu=F_{\\mu \\lambda}k^\\lambda$,\n$C^{4}_\\mu=k_\\mu$. The first three satisfy in general the\nfour-dimensional transversality condition $C^{1,2,3}_\\mu k^{\\mu}=0$,\nwhereas it is $C^{4}_\\mu C^{{4}\\mu}=0$ only in the light cone. By\nconsidering $a^{(i)}_\\mu (x)$ as the electromagnetic four vector\ndescribing the eigenmodes, it is easy to obtain the corresponding\nelectric and magnetic fields of each mode ${\\bf e}^{(i)}=\n\\frac{\\partial }{\\partial x_0}\\vec{a}^{(i)}-\\frac{\\partial\n}{\\partial {\\bf x}}a^{(i)}_0$, ${\\bf\nh}^{(i)}=\\nabla\\times\\vec{a}^{(i)}$. Up to a factor of\nproportionality, we rewrite them from \\cite{shabad1}(see also\n\\cite{Hugo1}),\n\\begin{eqnarray}\n{\\bf e}^{(1)}=-\\textbf{k}_{\\perp} k^2 \\omega, \\ \\ {\\bf h}^{(1)}=\n[{\\bf k}\\times {\\bf k}_{\\parallel} ]k^2\\label{wavectors1}, \\\\\n\\textbf{e}^{(2)}_{\\perp}= {\\bf k}_{\\perp} k_{\\parallel}^2, \\ \\\n\\textbf{e}^{(2)}_{\\parallel}=\\textbf{k}_{\\parallel}(k_{\\parallel}^2-\\omega^2),\\\n\\ \\textbf{h}^{(2)}=[{\\bf k}_{\\perp}\\times {\\bf k}_{\\parallel}]\n\\omega\\label{wavectors2},\\\\ \\textbf{e}^{(3)}=[{\\bf k}_{\\perp}\\times {\\bf\nk}_{\\parallel} ] \\omega, \\ \\ \\textbf{h}^{(3)}_{\\perp}=-{\\bf\nk}_{\\perp}k_{\\parallel}^2,\\ \\ \\textbf{h}^{(3)}_{\\parallel}=-{\\bf\nk}_{\\parallel} k_{\\perp}^2. \\label{wavectors3}\n\\end{eqnarray}\nThe previous formulae refer to the reference frame which is at\nrest or moving parallel to $B$. 
The vectors ${\\bf k}_{\\perp}$ and\n${\\bf k}_{\\parallel}$ are the components of $\\textbf{k}$ across\nand along $\\bf B_z$ (Here the photon four-momentum squared,\n$k^2=k_\\perp^2+k_\\parallel^2-\\omega^2$). It is easy to see that\nthe mode $i=3$ is a transverse plane polarized wave with its\nelectric field orthogonal to the plane determined by the vectors\n$(\\textbf{B}, \\textbf{k})$. For propagation orthogonal to $B$, the\nmode $a^{(1)}_\\mu$ is a pure longitudinal and non physical\nelectric wave, whereas $a^{(2)}_\\mu$ is transverse. For\npropagation parallel to $B$, the mode $a^{(2)}_\\mu$ becomes purely\nelectric longitudinal (and non physical), whereas $a^{(1)}_\\mu$ is\ntransverse \\cite{shabad1,shabad2}.\nIn paper \\cite{shabad1} it was shown that the dispersion law of\na photon propagating in vacuum in a strong magnetic field is\ngiven for each mode by the solution of the equations\n\\begin{equation}\nk^2=\\pi_{n,n^{\\prime}}^{(i)}\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}},\\frac{kF^2k}{2\\mathcal{F}},B\\right),\\\n\\\\ \\ i=1,2,3 \\label{egg}\n\\end{equation}\nThe $\\pi^\\prime$s are the eigenvalues of the polarization\noperator $\\Pi_{\\mu\\nu}(k)$ with the electron and the positron in\nthe Landau levels $n$ and $n^{\\prime}$ or viceversa.\n\nBy solving (\\ref{egg}) for $z_1=k^2+\\frac{kF^2k}{2\\mathcal{F}}$ in\nterms of $kF^2k\/2\\mathcal{F}$ it results\n\\begin{equation}\n\\omega^2=\\vert\\textbf{k}\\vert^2+f_i\\left(\\frac{kF^2k}{2\\mathcal{F}},B\\right)\n\\label{eg2}\n\\end{equation}\nThe term $ f_i\\left(kF^2k\/2\\mathcal{F},B\\right)$ is due to the\ninteraction of the photon with the virtual $e^{\\pm}$ pairs in the\nexternal field, leading to the magnetization of\nvacuum\\cite{Elizabeth}. Moreover it causes a drastic departure of\nthe photon dispersion equation from the light cone curve near the\nenergy thresholds for free pair creation.\n\nThis characteristic stems from the arising of bound states in the\nexternal field, leading to a singular behavior of the polarization\noperator $\\Pi_{\\mu\\nu}(k)$ near the pair creation thresholds for\nelectrons and positrons. These particles, coming from the photon\ndecay in the external field, appear in Landau levels $n$ and\n$n^{\\prime}$ (cyclotron resonance), or either still stronger\nsingular behavior of $\\Pi_{\\mu\\nu}$ near the thresholds of an\n$e^+e^-$ bound state (due to positronium formation).\n\nTo understand the different behavior of $\\Pi_{\\mu\\nu}(k)$ in\npresence of an external magnetic field, as compared to the zero\nfield case, we recall that in the latter problem the polarization\noperator is rotationally invariant with regard to the only\nsignificant four-vector, $k_\\mu$, whereas in the magnetic field case\nthis symmetry is reduced to axial. 
Thus, it is invariant under\nrotations in the plane perpendicular to the external field\n$\\textbf{B}$.\n\nThe presence of the interaction energy of the photon with the\nelectron-positron field opens the possibility of defining a magnetic\nmoment for the photon, for this we expand the dispersion equation in\nlinear terms of $\\Delta B=B-B_r$ around some field value $B_r$,\n\\begin{equation}\n\\label{exp}\n\\omega(B)=\\omega(B_r)+\\left.\\left(\\frac{\\partial\\omega}{\\partial\nkF^2k}\\frac{\\partial kF^2k}{\\partial\nB}+\\frac{\\partial\\omega}{\\partial B}\\right)\\right\\vert_{B=B_r}\\Delta\nB\n\\end{equation}\n\nThis means that, in the rest frame, where no electric field exists\nthe photon exhibits a longitudinal magnetic moment given by\n\\begin{equation}\n\\mu_\\gamma=-\\left.\\left(-2k_\\perp^2\\textbf{B}_z\\frac{\\partial\\omega}{\\partial\nkF^2k}+\\frac{\\partial\\omega}{\\partial\n\\textbf{B}_z}\\right)\\right\\vert_{B=B_r}\\cdot\\vec{\\kappa}_\\parallel\\label{treee}\n\\end{equation}\nwhere $\\vec{\\kappa}_\\parallel$ is an unit vector in the direction\nalong the magnetic field $B_r=B_z$.\n\nThe modulus of the magnetic moment along $\\textbf{B}$ can be\nexpressed as $\\mu_\\gamma=\\mu^\\prime g_\\gamma$ where the factor\n$g_\\gamma$ is a sort of gyromagnetic ratio. As in the case of the\nelectron\\cite{lipman} $\\mu_{\\gamma}$ is not a constant of motion,\nbut is a quantum average.\n\nAs different from the classical theory of propagation of\nelectromagnetic radiation in presence of a constant external\nmagnetic field, it is expected that $\\mu_\\gamma$ be different from\nzero due to radiative corrections, which are dependent on\n$\\vert\\textbf{B}\\vert=B_z=B$. The magnetic moment is induced by the\nexternal field on the photon through its interaction with the\npolarized electron-positron virtual quanta of vacuum and is oriented\nalong $z$.\n\n\nThe gauge invariance property $\\pi^{(i)}(0,0)=0$ implies that the\nfunction $f_i(kF^2k\/2\\mathcal{F},B)$ vanishes when\n$kF^2k\/2\\mathcal{F}=0$, this means that the anomalous magnetic\nmoment of the photon is a magnitude subject to the gauge invariance\nproperty of the theory and therefore when the propagation is\nparallel to $\\bf{B}$, $k_\\perp=0$, is cancelled. In every mode,\nincluding positronium formation\n\\begin{equation}\n\\mu_\\gamma=0\\ \\ \\textrm{if} \\ \\ k_\\perp=0\\label{gauge}\n\\end{equation}\ntherefore the magnetic moment of the radiation is determined\nessentially by the perpendicular photon momentum component and\nthis determines the optical properties of the quantum vacuum.\n\nParticularly interesting is the case when $B_r=B_z^r\\to0$. 
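\n\nBefore specializing to this limit, it may help to spell out the derivative structure behind (\ref{treee}) (our own intermediate step): in the rest frame $kF^2k=-k_\perp^2 B_z^2$, so that\n\[\n\mu_\gamma=-\frac{d\omega}{dB_z}\bigg\vert_{B=B_r}=-\left(\frac{\partial\omega}{\partial (kF^2k)}\frac{\partial (kF^2k)}{\partial B_z}+\frac{\partial\omega}{\partial B_z}\right)_{B=B_r},\qquad \frac{\partial (kF^2k)}{\partial B_z}=-2k_\perp^2 B_z,\n\]\nwhich reproduces the two terms of (\ref{treee}).\n\n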
If\n$\\bf{B}$ is assumed small $(\\vert\\textbf{B}\\vert\\ll B_c)$, the\ndispersion law can be written as\n\\begin{equation}\n\\omega=\\vert\\bf{k}\\vert-\\mu_\\gamma\\cdot\\bf{B} \\label{de0}\n\\end{equation}\nThe first term of (\\ref{de0}) corresponds to the light cone\nequation, whereas the second contains the dipole moment\ncontribution of the virtual pairs $e^\\pm$.\n\nBy substitution of (\\ref{de0}) in (\\ref{wavectors2}) and\n(\\ref{wavectors3}) we obtain that the electric and magnetic fields\nof the radiation corresponding to the second and third modes are\nincreased by the factors\n\\begin{equation}\n\\Delta\\textbf{e}^{(2)}_{\\parallel}=2\\mu_\\gamma^{(2)}B_z\\vert\\textbf{k}\\vert\\textbf{k}_{\\parallel},\n\\end{equation}\n\\begin{equation}\n\\Delta\\textbf{h}^{(2)}=\\Delta\\textbf{e}^{(3)}=-\\mu_\\gamma^{(2,3)}B_z[{\\bf\nk}_{\\perp}\\times {\\bf k}_{\\parallel}].\n\\end{equation}\nTherefore, the magnetic moment of the photon leads to linear\neffects in quantum electrodynamics and in consequence the refraction\nindex $n^{(i)}=\\vert\\textbf{k}\\vert\/\\omega_i$ in mode $i$ is given\nby\n\\begin{equation}\nn^{(i)}=1+\\frac{\\mu_\\gamma^{(i)}}{\\vert\\textbf{k}\\vert}\nB_z\\label{in}\n\\end{equation}\nthe gauge invariant property (\\ref{gauge}) implies that the\nrefraction index for parallel propagation, $k_\\perp=0$, be exactly\nunity: for any mode $n_i=1$. This can be reinterpreted by saying\nthat for parallel propagation to $B_z$ the refraction index is\nequal to unity due to the vanishing of the photon magnetic\nmoment.\n\nThe components of the group velocity,\n($\\textbf{v}^{(i)}=\\nabla_\\textbf{k} \\omega_i$),\n$\\textrm{v}_{\\perp\\parallel}$ can be written as\n \\begin{equation}\n\\textrm{v}_\\perp^{(i)}=\\frac{\\partial \\omega}{\\partial\nk_\\perp}=\\frac{k_\\perp}{\\vert\\textbf{k}\\vert}\\left(1-\\frac{\\vert\\textbf{k}\\vert}{k_\\perp}\\frac{\\partial\\mu_\\gamma^{(i)}}{\\partial\nk_\\perp}B_z\\right)\\label{vper}\n\\end{equation}\nand\n\\begin{equation}\n\\textrm{v}_\\parallel^{(i)}=\\frac{\\partial \\omega_i}{\\partial\nk_\\parallel}=\\frac{\nk_\\parallel}{\\omega_i}\\simeq\\frac{k_\\parallel}{\\vert\\textbf{k}\\vert}\\left(1+\\frac{\\mu_\\gamma^{(i)}}{\\vert\\textbf{k}\\vert}B_z\\right).\n\\label{vpara}\n\\end{equation}\n\nIt follows from (\\ref{vper}) and (\\ref{vpara}) that the angle\n$\\theta^{(i)}$ between the direction of the group velocity and the\nexternal magnetic field satisfies the relation\n\\[\n\\tan\\theta^{(i)}=\\frac{\\textrm{v}_\\perp^{(i)}}{\\textrm{v}_\\parallel^{(i)}}=\\left(1-\\frac{\\vert\\textbf{k}\\vert}{k_\\perp}\\frac{\\partial\\mu_\\gamma^{(i)}}{\\partial\nk_\\perp}B_z\\right)\\left(1+\\frac{\\mu_\\gamma^{(i)}}{\\vert\\textbf{k}\\vert}B_z\\right)^{-1}\\tan\\vartheta,\n\\] being $\\vartheta$ the angle between the photon momentum and\n$\\textbf{B}$, with $\\tan \\vartheta=k_\\perp\/k_\\parallel$.\n\n\\section{The Polarization Eigenvalues in Weak Field Limit}\nIn this paper we shall only deal with the transparency region (we do\nnot consider the absorption of the photon to create observable\n$e^{\\pm}$ pairs), $\\omega^2-k_\\parallel^2\\leq k_{\\perp}^{\\prime 2}$\n)\\textit{i.e.}, we will keep our discussion within the kinematic\ndomain, where $\\pi_{1,2,3}$ are real, where\n\\begin{equation}\nk_{\\perp}^{\\prime\n2}=m_0^2\\left[(1+2\\frac{B}{B_c}n)^{1\/2}+(1+2\\frac{B}{B_c}n^{\\prime})^{1\/2}\\right]^2\n\\end{equation}\nis the pair creation squared threshold energy, with the electron and\npositron in Landau levels $n,n^{\\prime}\\neq 0$). 
We will be interested in a photon whose energy is near the pair creation threshold energy.\n\nIn the limit $B\ll B_c$ and in the one-loop approximation, the first and third modes with energies below the first cyclotron resonance, whose energy is given by $2m_0$, do not show any singular behavior; in this sense they behave as in the field-free case, and the corresponding eigenvalues do not contribute. It follows that the dispersion law for the first and third modes is given by $\omega_1=\omega_3=\vert\bf{k}\vert$.\n\nNevertheless, from the calculations made in Appendix A it is seen that, in the low frequency limit $4m_0^2\gg k^2+\frac{kF^2k}{\mathcal{F}}$, the second eigenvalue can be expressed as\n\begin{equation}\n\pi_2=-\frac{2\mu^{\prime}B}{m_0}\frac{kF^2k}{2\mathcal{F}}\exp\left(\frac{kF^2k}{4m_0^2\mathcal{F}}\frac{B_c}{B}\right),\label{pi22}\n\end{equation}\nwhere $\mu^\prime=(\alpha\/2\pi)\mu_B$ is the anomalous magnetic moment of the electron.\n\nThis means that the dispersion equation for the second mode has the solution\n\begin{equation}\n\omega^2\simeq\nk_\parallel^2-\frac{kF^2k}{2\mathcal{F}}\left(1-\frac{2\mu^{\prime}B}{m_0}\exp\left(\frac{kF^2k}{4m_0^2\mathcal{F}}\frac{B_c}{B}\right)\right)\n\label{de1}\n\end{equation}\n\n\begin{figure}[!htbp]\n\includegraphics[width=3.5in]{edc.eps}\n\caption{\label{fig:edc} Dispersion curves for the second mode near the first cyclotron resonance for different values of the weak field. The dotted line represents the light cone, and the dashed line corresponds to the threshold energy for pair formation, $k_\perp^2 = 4m_0^2$.}\n\end{figure}\n\nFor energies and magnetic fields for which the exponential factor in (\ref{pi22}) is of order unity we get\n\[\n\omega^2=k_\parallel^2-\frac{kF^2k}{2\mathcal{F}}\left(1-\frac{2\mu^{\prime}B}{m_0}\right)\n\]\nwhich can be approximated as\n\[\n\omega=\vert\textbf{k}\vert+\frac{\mu^{\prime}B}{m_0\vert\textbf{k}\vert}\frac{kF^2k}{2\mathcal{F}}\n\]\nIn such a case the magnetic moment is determined by\n\begin{equation}\n\mu_\gamma=-\frac{\mu^{\prime}}{m_0\vert\textbf{k}\vert}\frac{kF^2k}{2\mathcal{F}}\label{lemm}\n\end{equation}\nand in the case of perpendicular propagation\n\begin{equation}\n\mu_\gamma^{max}=\frac{\mu^{\prime}k_\perp}{m_0}\n\end{equation}\n\nFor propagation perpendicular to $B_z$ ($k_\parallel=0$), and although the present approximation is not strictly valid for $k_\perp\to 2m_0$, the photon anomalous magnetic moment is of order\n\begin{equation}\n\mu_\gamma \sim 2\mu^{\prime}\n\end{equation}\n\nBelow we will get a larger value near the first pair creation threshold. 
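\n\nThe orders of magnitude implied by these weak-field formulas are easy to tabulate; the short script below (our own arithmetic, taking $k_\perp=\omega$ for perpendicular propagation and the exponent of Eq. (\ref{pi22})) reproduces the entries quoted in the table that follows:\n\begin{verbatim}\nimport math\n\nB_c = 4.4e13                  # Schwinger critical field, in Gauss\ncases = [(1e-8, 1.0), (1e-6, 1e4), (1e-4, 1e8), (1e-2, 1e12)]\nfor w, B in cases:            # w = omega in units of m_0, B in Gauss\n    mu_ratio = w              # mu_gamma^max \/ mu' = k_perp \/ m_0\n    expo = math.exp(-w**2 * B_c \/ (2.0 * B))\n    print(w, B, mu_ratio, round(expo, 3))\n\end{verbatim}\nAll four cases give an exponential factor of about $0.998$, i.e. of order unity, while $\mu_\gamma^{max}\/\mu^{\prime}$ takes the values $10^{-8}$, $10^{-6}$, $10^{-4}$ and $10^{-2}$.\n\n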
In the table below we show some $\\mu_\\gamma^{max}$ values\nsuch that $\n\\exp\\left[-\\frac{k_\\perp^2}{2m_0}\\frac{B_c}{B}\\right]\\sim 1 $\ncorresponding to ranges of $X$- rays energies and magnetic field\nvalues.\n\n\\begin{widetext}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n\\cline{1-9} \\multicolumn{1}{|c|}{}&\n\\multicolumn{1}{|c|}{$\\omega=m_010^{-8}$} &\n\\multicolumn{1}{|c|}{$B=1$ G} &\n\\multicolumn{1}{|c|}{$\\omega=m_010^{-6}$} &\n\\multicolumn{1}{|c|}{$B=10^4$ G} &\n\\multicolumn{1}{|c|}{$\\omega=m_010^{-4}$} &\n\\multicolumn{1}{|c|}{$B=10^8$ G}&\n\\multicolumn{1}{|c|}{$\\omega=m_010^{-2}$}&\n\\multicolumn{1}{|c|}{$B=10^{12}$ G}\\\\\n\\hline \\multicolumn{1}{|c|}{$\\mu_{\\gamma}^{max}$} &\n\\multicolumn{2}{|c|}{$10^{-8}\\mu^{\\prime}$} &\n\\multicolumn{2}{|c|}{$10^{-6}\\mu^{\\prime}$} &\n\\multicolumn{2}{|c|}{$10^{-4}\\mu^{\\prime}$} &\n\\multicolumn{2}{|c|}{$10^{-2}\\mu^{\\prime}$} \\\\\n\\hline\n\\end{tabular}\n\\\\\n\\\\\n\\end{widetext}\n\nAccording to (\\ref{in}) the refraction index in the weak field\napproximation and low frequency limit is given by\n\\[\nn^{(2)}=1+\\frac{\\mu^\\prime k_\\perp^2}{m_0\\vert\\textbf{k}\\vert^2}B.\n\\]\n\nIt must be noticed that when the propagation is perpendicular to\n$B_z$ the refraction index is maximum\n\\[\nn_\\perp^{(2)}=1+\\frac{\\mu^\\prime}{m_0}B\n\\]\n\nFrom (\\ref{vper}) and (\\ref{vpara}) we obtain that the absolute\nvalue of the group velocity is given by\n\\begin{equation}\n\\textrm{v}^{(2)}\\simeq\n1-\\frac{\\mu_\\gamma^{(2)}}{\\vert\\textbf{k}\\vert}B.\\label{vel}\n\\end{equation}\nin the last expression we neglected the term $B_z$ squared. In the\nparticular case $k_\\parallel=0$\n\\begin{equation}\n\\textrm{v}_\\perp^{(2)}=1-\\frac{\\mu^\\prime}{m_0}B.\n\\end{equation}\n\nIn the asymptotic region of supercritical magnetic fields $B\\gg B_c$\nand restricted energy of longitudinal motion\\cite{proceeding}\n$\\omega^2-k_\\parallel^2\\ll(B\/B_c)m_0^2$, in the low frequency limit\nthe behavior of the photon propagating in the second mode is equal\nto the case of weak magnetic field (\\ref{pi22}). When the magnetic\nfield $B\\sim B_c$, the photon magnetic moment can be approximated\nby (\\ref{lemm}). For a typical $X$ ray photon of wavelength $\\sim$\n\\AA{}, and propagation perpendicular to $B_z$, its magnetic moment\nhas values of order $\\mu_\\gamma\\sim 10^{-2}\\mu^\\prime$ in this case.\nThis behavior is also present in the photon-positronium mixed state.\n\n\\section{The Cyclotron Resonance}\n\nSimilarly to the case of strong magnetic field $(B\\gtrsim B_c)$, the\nsingularity corresponding to cyclotron resonance appears in the\nsecond mode when $B\\ll B_c$. The corresponding eigenvalue near of\nthe first pair creation threshold is given by\n\\begin{equation}\n\\pi_2=\\frac{2\\alpha m_0^3\nB}{B_c}\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left[4m_0^2+\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\right]^{1\/2},\\label{pi2}\\\\\n\\end{equation}\n\nThe term $\\exp(kF^2k B_c\/4m_0^2\\mathcal{F}B)$ plays an important\nrole in the cyclotron resonance, both in the weak and the large\nfield regime. The solution of the equation (\\ref{egg}) for the\nsecond mode (first obtained by Shabad \\cite{shabad2}) is shown\nschematically in Fig \\ref{fig:edc}. In this picture, it is noted the\ndeparture of the dispersion law from the light cone curve. For the\nfirst threshold the deflection increases with increasing external\nmagnetic field. 
In the vicinity of the first threshold the solutions\nof (\\ref{egg}) and (\\ref{pi2}) are similar to the case of strong\nmagnetic field regime, therefore, in order to show the results in a\nmore compact form let us use the form used in \\cite{Hugo2}. In it\nthe eigenvalues of $\\Pi_{\\mu\\nu}(k\\vert A_\\mu^{ext})$ near the\nthresholds can be written approximately as\n\\begin{equation}\n\\pi_{n,n^{\\prime}}^{(i)}\\approx-\\frac{2\\pi\\phi_{n,n^{\\prime}}^{(i)}}{\\vert\\Lambda\\vert}\n\\label{eg5}\n\\end{equation}\nwith $\\vert\\Lambda\\vert=((k_\\perp^{\\prime 2 }-k_\\perp^{\\prime \\prime\n2})(k_\\perp^{\\prime 2}+k^2+\\frac{kF^2k}{\\mathcal{F}}))^{1\/2}$\n and\n\\begin{eqnarray}\nk_\\perp^{\\prime\\prime\n2}=m_0^2\\left[(1+2\\frac{B}{B_c}n)^{1\/2}-(1+2\\frac{B}{B_c}n^{\\prime})^{1\/2}\\right]^2,\n\\end{eqnarray}\n\\noindent where $k_{\\perp}^{\\prime 2}$ is the squared threshold\nenergy for $e^{\\pm}$ pair production, and $k_{\\perp}^{\\prime\\prime\n2}$ is the squared threshold energy for excitation between Landau\nlevels $n,n^{\\prime}$ of an electron or positron. The functions\n$\\phi_{n,n^{\\prime}}^{(i)}$ are rewritten from \\cite{Hugo1} in the\nAppendix B.\n\nIn the vicinity of the first resonance $n=n^{\\prime}=0$ and\nconsidering $k_\\perp\\neq0$ and $k_\\parallel\\neq0$, according to\n\\cite{shabad1,shabad2} the physical eigenwaves are described by the\nsecond and third modes, but only the second mode has a singular\nbehavior near the threshold and the function $\\phi^{(2)}_{n n'}$ has\nthe structure\n\\begin{equation}\n\\phi_{0,0}^{(2)}\\simeq-\\frac{2\\alpha B m_0^4}{\\pi\nB_c}\\textrm{exp}\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\n\\end{equation}\nIn this case\n $k_{\\perp}^{\\prime\\prime 2}=0$ and $k_{\\perp}^{\\prime\n2}=4m_0^2$ is the threshold energy.\n\nWhen $\\textbf{B}=\\textbf{B}_z$, the scalar\n$kF^2k\/2\\mathcal{F}=-k_\\perp^2$ and the approximation of the modes\n(\\ref{eg5}) turns the dispersion equation (\\ref{eg1}) into a cubic\nequation in the variable $z_1=\\omega^2-k_\\parallel^2$ that can be\nsolved by applying the Cardano formula. We will refer in the\nfollowing to (\\ref{eg2}) as the real solution of this equation.\n\nWe would define the function\n\\begin{equation}\n\\Lambda^{*}=(k_\\perp^{\\prime\\prime 2}- k_\\perp^{\\prime 2})\n(k_\\perp^2-k_\\perp^{\\prime 2})\n\\end{equation}\nto simplify the form of the solutions (\\ref{eg2}) of the equation\n({\\ref{eg1}}). 
The functions $f_{i}$ are dependent on\n$k_{\\perp}^{2},k_{\\perp}^{\\prime 2},k_{\\perp}^{\\prime\\prime 2}, B$,\nand are\n\\begin{equation}\nf_i^{(1)}=\\frac{1}{3}\\left[2k_\\perp^{2}+k_\\perp^{\\prime\n2}+\\frac{\\Lambda^{* 2}}{(k_\\perp^{\\prime\\prime 2}-k_\\perp^{\\prime\n2})\\mathcal{G}^{1\/3}}+\n\\frac{\\mathcal{G}^{1\/3}}{k_\\perp^{\\prime\\prime 2}-k_{\\perp}^{\\prime\n2}}\\right] \\label{fi}\n\\end{equation}\nwhere\n\\[\n\\mathcal{G} =6 \\pi\\sqrt{3}D-\\Lambda^{*3}+54 \\pi^2\n\\phi_{n,n^{\\prime}}^{(i) 2}(k_\\perp^{\\prime\\prime 2}-k_\\perp^{\\prime\n2})^2\n\\]\nwith\n\\[\nD=\\sqrt{-(k_\\perp^{\\prime 2}-k_\\perp^{\\prime\\prime\n2})^2\\Lambda^{*3}\\phi_{n,n^{\\prime}}^{(i) 2}\\left[1-\\frac{27 \\pi^2\n\\phi_{n,n^{\\prime}}^{(i)2}(k_\\perp^{\\prime\\prime2}-k_\\perp^{\\prime\n2})^2}{\\Lambda^{*3}}\\right]}\n\\]\nThe solution $z_1=f_i^{(1)}$, where $z_1=\\omega^2-k^2_{\\parallel}$,\nconcern the values of $k_\\perp^2$ exceeding the root $k_\\perp^{\\ast\n2}$ of the equation $D=0$, approximately equal to\n\\[\nk_\\perp^{\\star 2}\\simeq k_\\perp^{\\prime\n2}-3\\left(\\frac{\\pi^2\\phi_{n,n^{\\prime}}^{ (i)2}(k_\\perp^{\\prime\n2})}{k_\\perp^{\\prime\\prime 2}-k_\\perp^{\\prime 2}}\\right)^{1\/3}\n\\]\n(for $k_\\perp^{2}k_\\perp^{\\ast 2}$ these\ntwo complex solutions are given by\n\\[\nf_i^{(2)}=\\frac{1}{6}\\left[2(2k_\\perp^{2}+k_\\perp^{\\prime\n2})-\\frac{(1+i\\sqrt {3})\\Lambda^{* 2}}{(k_\\perp^{\\prime\\prime\n2}-k_\\perp^{\\prime 2})\\mathcal{G}^{1\/3}}-\n\\frac{(1-i\\sqrt{3})\\mathcal{G}^{1\/3}}{k_\\perp^{\\prime\\prime\n2}-k_\\perp^{\\prime 2}}\\right]\n\\]\nand\n\\[\nf_i^{(3)}=\\frac{1}{6}\\left[2(2k_\\perp^{2}+k_\\perp^{\\prime\n2})-\\frac{(1-i\\sqrt {3})\\Lambda^{* 2}}{(k_\\perp^{\\prime\\prime\n2}-k_{\\perp}^{\\prime 2})\\mathcal{G}^{1\/3}}-\n\\frac{(1+i\\sqrt{3})\\mathcal{G}^{1\/3}}{k_\\perp^{\\prime\\prime\n2}-k_{\\perp}^{\\prime 2}}\\right]\n\\]\nbut they are not interesting to us in the present context.\n\nWe should define the functions\n\\[\nm_n=\\frac{k_\\perp^{\\prime}+k_\\perp^{\\prime\\prime }}{2}\\ \\\n\\textrm{and}\\ \\ m_{n^{\\prime}}=\\frac{k_\\perp^{\\prime\n}-k_\\perp^{\\prime\\prime }}{2},\n\\]\nwhich are positive for all possible values of $n$ and $n^\\prime$.\n\nNow the magnetic moment of the photon can be derived by taking the\nimplicit derivative $\\partial\\omega\/\\partial B_z$ and\n$\\partial\\omega\/\\partial kF^2k$ in the dispersion equation, from\n(\\ref{egg}) and (\\ref{eg5}) it is obtained that\n\\begin{widetext}\n\\begin{equation}\n\\mu_\\gamma^{(i)}=\\frac{\\pi}{2\\omega(\\vert\n\\Lambda\\vert^3-4\\pi\\phi_{n,n^\\prime}^{(i)}m_n\nm_{n^\\prime})}\\left[\\phi_{n,n^{\\prime}}^{(i)}\\left(A\\frac{\\partial\nm_n}{\\partial B}+Q\\frac{\\partial m_{n^\\prime}}{\\partial\nB}\\right)-2\\Lambda^2\\frac{\\partial\n\\phi_{n,n^{\\prime}}^{(i)}}{\\partial B}\\right] \\label{mm2}\n\\end{equation}\nbeing\n\\[\nA=-4m_{n^\\prime}[z_1-(m_n+m_{n^\\prime})(3m_n+m_{n^\\prime})]>0\n\\]\nand\n\\[\nQ=-4m_{n}[z_1-(m_n+m_{n^\\prime})(m_n+3m_{n^\\prime})]>0.\n\\]\n\\end{widetext}\n\nThe expression (\\ref{mm2}) contains terms with paramagnetic and\ndiamagnetic behavior, in which, as is typical in the relativistic\ncase, the dependence of $\\mu$ on $B$ is non-linear. 
Whether each term is paramagnetic or diamagnetic depends on the sign of $\\phi_{n, n^{\\prime}}^{(i)}$ and of its derivative with regard to $B$: if $\\phi_{n, n^{\\prime}}^{(i)}>0$ the first term in the bracket contributes paramagnetically ($\\frac{\\partial m_n}{\\partial B}\\geq 0$ and $\\frac{\\partial m_{n^\\prime}}{\\partial B}\\geq 0$); if $\\phi_{n, n^{\\prime}}^{(i)}<0$ the sign of this term is opposite and its contribution is diamagnetic. The second term of (\\ref{mm2}) will be paramagnetic or diamagnetic depending on the sign of the derivative of $\\phi_{n, n^{\\prime}}^{(i)}$ with regard to $B$.\n\nIn particular, if $n=n^\\prime$,\n\\begin{equation}\n\\mu_\\gamma^{(i)}=\\frac{\\pi}{\\omega(\\vert\\Lambda\\vert^3-4\\pi\\phi_{n}^{(i)}m_n^2)}\\left[\\phi_{n}^{(i)}A\\frac{\\partial m_n}{\\partial B}-\\Lambda^2\\frac{\\partial \\phi_{n}^{(i)}}{\\partial B}\\right] \\label{mm3}\n\\end{equation}\nwith $A=-4m_{n}[z_1-8m_n^2]>0$.\n\nIn the vicinity of the first threshold $k_\\perp^{\\prime\\prime 2}=0$, $k_\\perp^{\\prime 2}=4m_0^2$ and $\\partial m_n\/\\partial B=0$ when $n=0$; therefore for the second mode the absolute value of the magnetic moment is given by\n\\begin{equation}\n\\mu_{\\gamma}^{(2)}=-\\frac{\\pi\\vert\\Lambda\\vert^2}{\\omega\\left(\\vert\\Lambda\\vert^3-4m_0^2\\pi\\phi_{00}^{(2)}\\right)}\\frac{\\partial\\phi_{00}^{(2)}}{\\partial B}\\label{FRR}\n\\end{equation}\nOne can write (\\ref{FRR}) in explicit form as\n\\begin{widetext}\n\\begin{equation}\n\\mu_\\gamma^{(2)}=\\frac{\\alpha m_0^3\\left(4m_0^2+k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)}{\\omega B_c\\left[(4m_0^2+k^2+\\frac{kF^2k}{2\\mathcal{F}})^{3\/2}+\\alpha m_0^3\\frac{B}{B_c} \\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\right]}\\left(1-\\frac{kF^2k}{4 m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right).\\label{FRR1}\n\\end{equation}\n\\end{widetext}\nIf we consider for simplicity propagation perpendicular to the field $B$, and $\\omega$ near the threshold $2m_0$, the function $\\mu_{\\gamma}^{(2)}=f(X)$, where $X=\\sqrt{4m_0^2-\\omega^2}$, has a maximum for $X=(\\pi\\phi_{00}^{(2)}\/m_0)^{1\/3}$, which is very near the threshold.\n\nThus, for perpendicular propagation the expression (\\ref{FRR1}) has a maximum value when $k_\\perp^2 \\simeq k_\\perp^{\\prime 2}$. Therefore, in a vicinity of the first pair creation threshold the magnetic moment of the photon has a paramagnetic behavior and a resonance peak whose value is given by\n\\begin{equation}\n\\mu_{\\gamma}^{(2)}=\\frac{m_0^2(B+2B_c)}{3m_\\gamma B^2}\\left[2\\alpha\\frac{B}{B_c}\\exp\\left(-\\frac{2B_c}{B}\\right)\\right]^{2\/3}\n\\label{mumax}\n\\end{equation}\nWe define the \"dynamical mass\" $m_{\\gamma}$ of the photon in the presence of a strong magnetic field by the equation\n\\begin{equation}\nm_{\\gamma}^{(2)}=\\omega (k_\\perp^{\\prime 2})=\\sqrt{4m_0^2-m_0^2\\left[2\\alpha\\frac{B}{B_c}\\exp\\left(-\\frac{2B_c}{B}\\right)\\right]^{2\/3}}\n\\label{dm}\n\\end{equation}\n\nFrom (\\ref{mumax}) it is seen that near the pair creation threshold, the magnetic moment induced by the external field in the photon, as a result of its interaction with the polarized electron-positron quanta of the vacuum, has a peak (Fig. 3). 
This is a resonance peak, and we understand it as due to the interaction of the photon with the polarized $e^{\\pm}$ pairs.\n\nThe expression (\\ref{mumax}) reaches a maximum when $B\\simeq B_c$; in that case the magnetic moment is given by\n\\begin{equation}\n\\mu_\\gamma^{(2)}\\approx 3\\mu^\\prime\\left(\\frac{1}{2\\alpha}\\right)^{1\/3}\\approx 12.85\\mu^\\prime\n\\end{equation}\n\nThe appearance of a photon dynamical mass is a consequence of the radiative corrections, which become significant for photon energies near the pair creation threshold and for magnetic fields large enough to make the exponential term $\\exp(kF^2k B_c\/4m_0^2\\mathcal{F}B) =e^{-k_\\perp^2\/eB}$ significant. This means that the massless photon coexists with the massive pair, leading to a behavior very similar to that of a neutral vector particle \\cite{Osipov} bearing a magnetic moment. One should notice from (\\ref{dm}) that the dynamical mass decreases with increasing field intensity, which suggests that the interaction energy of the photon with its environment increases for magnetic fields far greater than $B_c$ near the first threshold, when the photon coexists with the virtual pair.\n\nThe problem of neutral vector particles is studied elsewhere \\cite{Herman}. The energy eigenvalues are\n\\begin{equation}\nE(p, B,\\eta) =\\sqrt{p_3^2+p_{\\perp}^2+M^2+\\eta(\\sqrt{p_{\\perp}^2+M^2}) qB} \\label{spec},\n\\end{equation}\n\\noindent where the second square root expresses the dependence of the eigenvalues on the \"transverse energy\", proportional to the scalar $pF^2p\/{\\cal F}$. Here $\\eta =0,\\pm 1$, and $q$ is a quantity having the dimensions of a magnetic moment. In what follows we exclude the value $\\eta=0$, since it corresponds to the case of no interaction with the external field $B$. For $\\eta=-1$ the magnetic moment is $\\mu= \\pm \\frac{q}{\\sqrt{M^2 - MqB}}$, which is divergent at the threshold $M =qB$. The behavior of the photon near the critical field $B_c$ closely resembles this one, since its magnetic moment reaches a maximum value at the threshold.\n\n\nFor $B_z\\gg B_c$, although the vacuum is strongly polarized, the photon shows a weaker polarization, \\emph{i.e.}, the contribution from the singular behavior near the thresholds decreases. In fact, the propagation is damped due to an increase in the imaginary part of the photon energy $\\Gamma$ (the total frequency is $\\omega=\\omega_r + i\\Gamma$, where $\\omega_r$ is its real part). As the modes are bent to propagate parallel to $B$, they propagate in an increasingly absorbent medium. But for $B\\gg B_c$ we are in a region beyond QED, and new phenomena related to the standard model may appear, for instance the creation of $\\mu^{\\pm}$ pairs and their subsequent decay according to the allowed channels.\n\n\nIn contrast to the second mode, the polarization eigenvalue of the third mode in a supercritical magnetic field does not manifest a singular behavior at the first resonance \\cite{proceeding}. 
In this case the eigenvalues are given by\n\\begin{widetext}\n\\begin{equation}\n\\pi_3=\\frac{\\alpha k^2}{3\\pi}\\left(\\ln\\frac{B}{B_c}-C\\right)+\\frac{\\alpha}{3\\pi}\\left(0.21\\frac{kF^2k}{2\\mathcal{F}}-1.21\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\right)\n\\end{equation}\nThe dispersion equation then has the solution\n\\begin{equation}\n\\omega^2=\\vert\\textbf{k}\\vert^2+\\frac{kF^2k}{2\\mathcal{F}}\\left[1-\\frac{\\alpha}{3\\pi}\\left(\\ln\\frac{B}{B_c}-C-1.21\\right)\\right]^{-1}\n\\label{3m}\n\\end{equation}\n\\end{widetext}\nwhere $C$ is the Euler constant. For fields $B\\sim B_c$, for which the logarithmic terms are small, the corresponding magnetic moment is given by\n\\begin{equation}\n\\mu_\\gamma^{(3)}=\\frac{2\\mu^{\\prime}}{3m_0}\\frac{k_\\perp^2}{\\omega}\n\\label{3mm}\n\\end{equation}\nfor propagation perpendicular to $B_z$. For photons with energies near $m_0$, the magnetic moment of the photon propagating in the third mode has a value $\\mu_\\gamma^{(3)}\\sim\\mu^{\\prime}$.\n\n\\section{Magnetic Moment For The Photon-Positronium Mixed States}\n\nIn the case of positronium formation, following \\cite{shabad3,Leinson}, neglecting the retardation effect and in the lowest adiabatic approximation, the Bethe-Salpeter equation is reduced to a Schr\\\"odinger equation in the variable $(z^e-z^p)$ which governs the relative motion along $\\textbf{B}$ of the electron and positron.\n\nUnder such approximations, the conservation law induced by the translational invariance takes the form $p_x=p_x^p+p_x^e=k_\\perp$. Therefore the binding energy depends on the distance between the $y$-coordinates of the centers of the electron and positron orbits, $\\vert y_0^p-y_0^e\\vert=p_x\/\\sqrt{eB}$.\n\nThe Schr\\\"odinger equation mentioned includes the attractive Coulomb force, whose potential in our case has the form\n\\[\nV_{nn^\\prime}(z^e-z^p)=-\\frac{e^2}{\\sqrt{(z^e-z^p)^2+L^4p_x^2}}\n\\]\nwhere $L=(eB)^{-1\/2}$ is the radius of the electron orbit. 
The eigenvalue of this equation, $\\Delta\\varepsilon_{n,n^{\\prime}}(n_c,k_\\perp^2)$, is the binding energy of the particles; it is labeled by a discrete number $n_c$ that identifies the Coulomb bound state when $\\Delta\\varepsilon_{n,n^{\\prime}}(n_c,k_\\perp^2)>0$, and by a continuous one in the opposite case.\n\nThe energy of the pair which does not move along the external magnetic field is given by\n\\begin{equation}\n\\varepsilon_{n,n^\\prime}(n_c,k_\\perp^2)=k_\\perp^{\\prime}+\\Delta \\varepsilon_{n,n^{\\prime}}(n_c,k_\\perp^2)\n\\end{equation}\n\nIn this paper we consider the case in which the Coulomb state is $n_c=0$; the binding energy is then given by\n\\begin{equation}\n\\Delta \\varepsilon_{n,n^{\\prime}}(0,k_\\perp^2)=-\\frac{\\alpha^2 M_r}{2}\\left(2\\ln\\left[\\frac{a_{nn^\\prime}^B}{2\\sqrt{L^2+L^4P_x^2}}\\right]\\right)^{-2},\n\\end{equation}\nwhere $a_{nn^\\prime}^B=1\/(e^2M_r)$ is the Bohr radius and $M_r=m_nm_{n^\\prime}\/(m_n+m_{n^\\prime})$ is the reduced mass of the bound pair.\n\nThe dispersion equation of the positronium is given by\n\\begin{equation}\nk_\\perp^2+k_\\parallel^2-\\omega^2=-\\frac{2\\pi \\Phi_{nn^\\prime n_c}^{(i)}}{\\varepsilon_{nn^\\prime}^2-\\omega^2+k_\\parallel^2}\n\\label{p1}\n\\end{equation}\n\nFor each set of discrete quantum numbers $n$, $n^\\prime$, $n_c$, equation (\\ref{p1}) is quadratic in the variable $z_1=\\omega^2-k_\\parallel^2$ and its solutions are\n\\begin{eqnarray}\nf_i=\\frac{1}{2}\\left(\\varepsilon_{nn^\\prime n_c}^2(k_\\perp^2)+k_\\perp^2\\pm\\right.\n\\\\\\nonumber\\left.[(\\varepsilon_{nn^\\prime n_c}^2(k_\\perp^2)-k_\\perp^2)^2-8\\pi\\Phi_{nn^\\prime n_c}^{(i)}(k_\\perp^2)]^{1\/2}\\right)\n\\end{eqnarray}\n\nAt the first cyclotron resonance $n=n^{\\prime}=0$, the function $\\Phi_{000}$ that defines the second mode has the structure\n\\begin{equation}\n\\Phi_{000}^{(2)}=\\phi_{00}^{(2)}\\varepsilon_{000}(k_\\perp^2)\\vert \\psi_{000}(0)\\vert^2\n\\end{equation}\nwith\n\\[\n\\Delta\\varepsilon_{00}(0,k_\\perp^2)=-\\alpha^2 m_0\\left(\\ln\\left[\\frac{1}{\\alpha}\\sqrt{\\frac{ B_c(1+\\frac{k_\\perp^2B_c}{m_0^2B})}{B}}\\right]\\right)^2\n\\]\nwhere\n\\[\n\\vert\\psi_{000}(0)\\vert^2=\\alpha\\left\\vert\\ln\\left[\\frac{1}{\\alpha}\\sqrt{\\frac{B}{B_c\\left(1+\\frac{k_\\perp^2B_c}{m_0^2B}\\right)}}\\right]\\right\\vert\n\\]\nis the squared wave function of the longitudinal motion \\cite{Loudon} at $z^e=z^p$.\n\nIn the case of photon-positronium mixed states we obtain from (\\ref{treee}) and (\\ref{p1}) that\n\\begin{equation}\n\\mu_{\\gamma}^{P}=\\frac{\\pi\\Upsilon}{\\omega(k_\\parallel^2-\\omega^2-\\varepsilon^2)\\left(1+\\frac{2\\pi\\Phi_{nn^\\prime n_c}^{(i)}}{k_\\parallel^2-\\omega^2-\\varepsilon^2} \\right)}\n\\label{mps}\n\\end{equation}\nwhere\n\\[\n\\Upsilon=\\varepsilon_{nn^\\prime}(n_c) \\Phi_{n n^{\\prime} n_c}^{(i)}\\frac{\\partial\\varepsilon}{\\partial B}-\\frac{\\partial\\Phi_{n n^{\\prime} n_c}^{(i)}}{\\partial B}\n\\]\n\nFollowing the reasoning of (\\ref{mumax}), for magnetic fields $B\\gg B_c$ one can define the dynamical mass of the photon-positronium mixed state at the first threshold for the positronium energy, $\\varepsilon^2 \\sim 3.996m_0^2$, and perpendicular propagation as\n\\begin{equation}\nm_{\\gamma}^{P}=\\sqrt{\\varepsilon_{00}^2-2m_0^2\\alpha\\left[\\frac{B}{B_c}\\ln\\left(\\frac{1}{2\\alpha}\\frac{B}{B_c}\\right)\\right]^{1\/2}}\n\\end{equation}\nIn this
regime, when $k_\\parallel=0$,\n\\begin{equation}\n\\mu_{\\gamma}^{P}=\\frac{m_0^2\\alpha\\left(1+\\ln\\left[\\frac{B}{2\\alpha B_c}\\right]\\right)}{2B_cm_\\gamma^P\\sqrt{\\frac{B}{B_c}\\ln\\left[\\frac{B}{2\\alpha B_c}\\right]}}\n\\end{equation}\n\nAs in the case of free pair creation, the magnetic moment of the mixed state for $n,n'\\neq 0$ can be paramagnetic for some values of the Landau numbers and intervals in momentum space, and diamagnetic in others.\n\n\n\\section{Results and discussion}\n\nFor perpendicular propagation the dependence of the magnetic moment with regard to $k_\\perp^2$ at the first resonance $k_\\perp^{\\prime 2}=4m_0^2$ is displayed in FIG. \\ref{fig:Phvk} for free pair creation and for the photon-positronium mixed state. Both curves show the same qualitative behavior. As expected, the peak characteristic of the resonance appears near the threshold energy. The result shows that near the pair creation threshold the magnetic moment of a photon may have values greater than the anomalous magnetic moment of the electron (numerical calculations range from 0 to more than $12\\mu^\\prime$). This result is related to the probability of pair creation, which is maximum at the first resonance when $k_\\perp=2m_0$ and increases with $B$, because the medium becomes absorbent.\n\\begin{figure}[!htbp]\n\\includegraphics[width=3in]{GMk.eps}\n\\caption{\\label{fig:Phvk} Magnetic moment curves of the photon (dark) and photon-positronium mixed states (dashed) with regard to perpendicular momentum squared, for the second mode $k_\\perp^{\\prime 2}=4m_0^2$ with $n=n^{\\prime}=0$.}\n\\end{figure}\n\nWe observe that near the thresholds the behavior of the curves is the same for all pairs $\\omega$, $k_\\parallel$ satisfying the condition $\\omega^{2}-k_\\parallel^2=k_\\perp^{\\prime 2}$.\n\nIn all curves the magnetic moment decreases for momentum values $k_\\perp^2>4m_0^2$. The vacuum polarization decreases and thus the magnetic moment tends to vanish, as shown in FIG. \\ref{fig:Phvk}. We interpret these results in the sense that for photons with a squared transversal momentum component greater than $4m_0^2$ the probability of free pair creation in the Landau ground state (and of positronium creation) decreases very quickly, since that region of momenta lies inside the transparency region corresponding to the next thresholds, i.e., $n=0$, $n'=1$ or vice-versa.\n\n\\begin{figure}[!htbp]\n\\includegraphics[width=3in]{GMB.eps}\n\\caption{\\label{fig:mb1} Magnetic moment curves for the photon and photon-positronium mixed states with regard to the external magnetic field strength. The propagation is perpendicular to $\\textbf B$ and the values of the perpendicular momentum are equal to the absolute values of the free and bound threshold energies. The first (continuous) curve refers to the photon whereas the second (dashed) corresponds to the photon-positronium mixed state.}\n\\end{figure}\n\nThe behavior of the magnetic moment with regard to the field is shown in FIG. \\ref{fig:mb1} for the photon and for the photon-positronium mixed state. The picture was obtained in the field interval $0 \\leq B \\leq 10B_c$ by considering perpendicular propagation and taking the values of the momentum squared as equal to the absolute values of the threshold energies of free and bound pair creation. 
Here, in contrast to the low frequency case, the magnetic moment of the photon tends to vanish when $B\\rightarrow0$. We note that, again, the magnetic moment of the photon is greater than $\\mu^\\prime$. In correspondence with FIG. \\ref{fig:Phvk}, the magnetic moment of the photon without Coulomb interaction is greater than that of the photon-positronium mixed state. This result is to be expected due to the existence of the binding energy, which in the latter case entails a decrease of the threshold energy. Each curve has a maximum value; for free pair creation this maximum occurs at approximately $B\\approx 1.5 B_c$, whereas for the photon-positronium mixed state it occurs at $B\\approx B_c$.\n\n\\begin{figure}[!htbp]\n\\includegraphics[width=3in]{masa1.eps}\n\\caption{\\label{fig:mass} Photon dynamical mass dependence on the external magnetic field for the photon-positronium (dashed) and photon-free pair (dark) mixed states. Both curves were obtained near the corresponding first thresholds for each process.}\n\\end{figure}\n\n\nIn FIG. \\ref{fig:mass} we display the photon dynamical mass dependence on the magnetic field, considering perpendicular propagation near the thresholds, for free and bound pair creation and for the second mode. For that mode, the dynamical mass decreases with increasing magnetic field.\n\nFor magnetic field values $B>1.5B_c$ the dynamical mass of the photon-positronium mixed state is greater than that corresponding to free pair creation, which suggests that the latter is more probable than the bound state case.\n\n\\begin{figure}[!htbp]\n\\includegraphics[width=3in]{EFk.eps}\n\\caption{\\label{su} Curves for the modulus of the photon magnetic moment with regard to the perpendicular momentum squared, near the pair creation threshold for the second mode $k_\\perp^{\\prime 2}=7.46 m_0^2$ with $n=0$, $n^{\\prime}=1$ and for perpendicular propagation. The value of the external magnetic field used for the calculation is $B=B_c$. The dashed line corresponds to the photon-positronium mixed state, whereas the dark line corresponds to free pair creation.}\n\\end{figure}\n\nWe have found that when the particles are created in excited states the magnetic moment reaches higher values than in the case analyzed previously, for $k_\\perp^2=k_\\perp^{\\prime 2}$ and $B\\backsim B_c$. Calculations indicate that these values may be of order $10^{2}\\mu^\\prime$. The new values obtained are fundamentally due to the threshold energies, which depend on the magnetic field $\\textbf{B}$ (see Fig. 2). The new behavior of the photon and photon-positronium mixed state is shown in Fig. \\ref{su} and Fig. \\ref{suB}.\n\n\n\\begin{figure}[!htbp]\n\\includegraphics[width=3in]{EFB.eps}\n\\caption{\\label{suB} Curves for the modulus of the photon magnetic moment (dark) and photon-positronium mixed states (dashed) plotted with regard to the external magnetic field strength when the propagation is perpendicular to $\\textbf B$ and the perpendicular momentum squared is equal to the values of the free and bound threshold energies for $n=0$ and $n^\\prime=1$.}\n\\end{figure}\n\nIn this case the dynamical mass (Fig. \\ref{m1}), in contrast to the previous case, increases with increasing magnetic field. This means that the magnetic field confines the virtual particles near the threshold when they tend to be created in excited states. 
The dynamical mass of the positronium in such conditions is always smaller than that corresponding to free pair creation, which suggests that bound state pair creation is more probable in this configuration.\n\n\\begin{figure}[!htbp]\n\\includegraphics[width=3in]{masa.eps}\n\\caption{\\label{m1} Photon dynamical mass dependence on the external magnetic field for the photon-positronium (dashed) and photon-free pair (dark) mixed states with $n=0$ and $n^\\prime=1$. Both curves were obtained at the corresponding energy thresholds.}\n\\end{figure}\n\n\\section{Conclusion}\n\nWe conclude, first, that a photon propagating in vacuum in the presence of an external magnetic field exhibits a nonzero magnetic moment and a sort of dynamical mass due to the magnetic field. This phenomenon occurs whenever the photon has a nonzero momentum component perpendicular to the external magnetic field. The values of this anomalous magnetic moment depend on the propagation mode and the magnetic field regime. The maximum value taken by the photon magnetic moment is greater than the anomalous magnetic moment of the electron in a strong magnetic field.\n\nSecond, in the small field and low frequency approximations the magnetic moment of the photon also exists and depends only weakly on the magnetic field intensity in some range of frequencies, whereas in the high frequency limit it depends on $\\vert\\bf{B}\\vert$. In both cases it vanishes when $B\\to0$.\n\nUnder these conditions, the behavior of the photon is similar to that of a massive neutral vector particle.\n\n\\section{Acknowledgement}\nBoth authors are greatly indebted to Professor A. E. Shabad, from the P. N. Lebedev Physical Institute in Moscow, for several comments and illuminating discussions.\n\n\\section{Appendix A}\nThe three eigenvalues $\\pi_i$, $i=1,2,3$, of the polarization operator in the one-loop approximation, calculated using the exact propagator of the electron in an external magnetic field, can be expressed as linear combinations of three functions $\\Sigma_i$. 
In what follows we will use the notation $x=B\/B_c$. The eigenvalues are\n\\begin{eqnarray}\n&\\pi_1&=-\\frac{1}{2}k^2\\Sigma_1,\\\\\n&\\pi_2&=-\\frac{1}{2}\\left(\\left(\\frac{kF^2k}{2\\mathcal{F}}+k^2\\right)\\Sigma_2-\\frac{kF^2k}{2\\mathcal{F}}\\Sigma_1\\right),\\\\\n&\\pi_3&=-\\frac{1}{2}\\left(\\left(\\frac{kF^2k}{2\\mathcal{F}}+k^2\\right)\\Sigma_1-\\frac{kF^2k}{2\\mathcal{F}}\\Sigma_3\\right).\\label{eg1}\n\\end{eqnarray}\nwhere $\\mathcal{F}=\\frac{B^2}{2}$.\n\nWe express\n\\begin{equation}\n\\Sigma_i=\\Sigma_i^{(1)}+\\Sigma_i^{(2)},\\label{aS}\n\\end{equation}\nwith\n\\begin{equation}\n\\Sigma_i^{(1)}(x)=\\frac{2\\alpha}{\\pi}\\int_0^\\infty dte^{-t\/x}\\int_{-1}^1d\\eta \\left[\\frac{\\sigma_i(t,\\eta)}{\\sinh t}-\\lim_{t\\rightarrow0}\\frac{\\sigma_i(t,\\eta)}{\\sinh t}\\right]\n\\end{equation}\nand\n\\begin{widetext}\n\\begin{eqnarray}\n\\Sigma_i^{(2)}(x,kF^2k, k^2, \\mathcal{F})=\\frac{2\\alpha}{\\pi}\\int_0^\\infty dte^{-t\/x}\\int_{-1}^1d\\eta\\frac{\\sigma_i(t,\\eta)}{\\sinh t}\\left[\\exp\\left(\\frac{kF^2k}{2\\mathcal{F}}\\frac{M(t,\\eta)}{m_0^2x}-\\left(\\frac{kF^2k}{2\\mathcal{F}}+k^2\\right)\\frac{1-\\eta^2}{4m_0^2x}t\\right)-1\\right],\n\\label{S2}\n\\end{eqnarray}\n\\end{widetext}\nwhere\n\\begin{equation}\nM(t,\\eta)=\\frac{\\cosh t -\\cosh\\eta t}{2\\sinh t},\n\\end{equation}\n\\begin{equation}\n\\sigma_1(t,\\eta)=\\frac{1-\\eta }{2}\\frac{\\sinh(1+\\eta)t}{\\sinh t},\n\\label{ss1}\n\\end{equation}\n\\begin{equation}\n\\sigma_2(t,\\eta)=\\frac{1-\\eta^2}{2}\\cosh t \\label{ss2},\n\\end{equation}\n\\begin{equation}\n\\sigma_3(t,\\eta)=\\frac{\\cosh t -\\cosh\\eta t}{2\\sinh^2 t}.\n\\label{ss3}\n\\end{equation}\n\nWe express $\\Sigma_i^{(1)}$ as\n\\begin{equation}\n\\Sigma_i^{(1)}(x)=\\frac{2\\alpha}{\\pi}\\int_0^\\infty dte^{-t\/x}\\left[\\frac{g_i(t)}{\\sinh t}-\\frac{1}{3t}\\right].\n\\end{equation}\nHere\n\\[\ng_i(t)=\\int_{-1}^1d\\eta\\,\\sigma_i(t,\\eta)\n\\]\nand in explicit form\n\\begin{eqnarray}\n&g_1(t)&=\\frac{1}{4t\\sinh t}\\left(\\frac{\\sinh2t}{t}-2\\right),\\label{g1}\\\\\n&g_2(t)&=\\frac{\\cosh t}{3},\\label{g2}\\\\\n&g_3(t)&=\\frac{1}{\\sinh^2 t}\\left(\\cosh t-\\frac{\\sinh t}{t}\\right).\\label{g3}\n\\end{eqnarray}\n\nLet\n\\begin{equation}\nu_i(t)=\\frac{g_i(t)}{\\sinh t}-\\frac{1}{3t}.\\label{e69}\n\\end{equation}\nThe asymptotic expansion of (\\ref{e69}) in powers of $\\exp(-t)$ has the form\n\\begin{equation}\nu_1(t)=0, \\ \\ u_2(t)=1\/3,\\ \\ u_3(t)=0\n\\end{equation}\n\nOur next purpose is to analyze the behavior of $\\pi_{i}$ when $x\\rightarrow0$. In this case the behavior of $\\Sigma_i^{(1)}(x)$ is determined by the factor $\\exp(-t\/x)$ in the integrand, which tends to zero when $x\\rightarrow0$. Taking into account the expansion\n\\[\n\\exp(-t\/x)\\simeq \\exp(-t\/\\epsilon)+\\frac{\\exp(-t\/\\epsilon)t}{\\epsilon^2}(x-\\epsilon),\n\\]\nintegrating over $t$ and taking the limit $\\epsilon\\rightarrow0$, we obtain\n\\begin{eqnarray}\n&\\Sigma_1^{(1)}&(x)=0,\\ \\ \\Sigma_2^{(1)}(x)\\simeq\\frac{2\\alpha B}{3\\pi B_c},\\ \\ \\Sigma_3^{(1)}(x)=0.\\label{ddd}\n\\end{eqnarray}\n\n\nThe functions $\\Sigma_i^{(2)}$ depend on three arguments, as indicated in (\\ref{S2}). 
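\nThe one-dimensional integrals that define $\\Sigma_i^{(1)}(x)$ are straightforward to evaluate numerically. The sketch below is our own illustration and is not part of the original derivation: it assumes $\\alpha\\simeq1\/137$, uses the explicit $g_i(t)$ of (\\ref{g1})-(\\ref{g3}), and evaluates $\\Sigma_i^{(1)}(x)$ by direct quadrature; the small-$t$ cutoff and the finite upper limit are harmless because the integrands vanish linearly at $t\\to0$ and are suppressed by $e^{-t\/x}$ at large $t$.\n\\begin{verbatim}\n# Minimal numerical sketch (ours): Sigma_i^(1)(x) = (2 alpha\/pi) *\n#   int_0^inf dt exp(-t\/x) * [ g_i(t)\/sinh(t) - 1\/(3t) ],\n# with the g_i(t) given in Eqs. (g1)-(g3).  alpha = 1\/137 is assumed.\nimport numpy as np\nfrom scipy.integrate import quad\n\nALPHA = 1.0 \/ 137.036\n\ndef g(i, t):\n    if i == 1:\n        return (np.sinh(2.0 * t) \/ t - 2.0) \/ (4.0 * t * np.sinh(t))\n    if i == 2:\n        return np.cosh(t) \/ 3.0\n    return (np.cosh(t) - np.sinh(t) \/ t) \/ np.sinh(t) ** 2   # i == 3\n\ndef u(i, t):\n    return g(i, t) \/ np.sinh(t) - 1.0 \/ (3.0 * t)\n\ndef sigma1(i, x):\n    # integrands vanish ~ t at t -> 0 and carry exp(-t\/x) at large t,\n    # so the cutoffs [1e-3, 300] introduce only a negligible error here\n    val, _ = quad(lambda t: np.exp(-t \/ x) * u(i, t), 1e-3, 300.0, limit=200)\n    return 2.0 * ALPHA \/ np.pi * val\n\nfor x in (0.5, 1.0, 5.0):\n    print(x, [sigma1(i, x) for i in (1, 2, 3)])\n\\end{verbatim}\n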
The asymptotic expansion of (\\ref{ss1}), (\\ref{ss2}), (\\ref{ss3}) in powers of $\\exp(-t)$ and $\\exp(t\\eta)$ produces an expansion of (\\ref{S2}) into a sum of contributions coming from the thresholds, the singular behavior at the threshold points originating from the divergences of the $t$-integration in (\\ref{S2}) near $t=\\infty$, as was done in \\cite{shabad1}. The leading terms in the expansion of (\\ref{ss1}), (\\ref{ss2}), (\\ref{ss3}) at $t\\rightarrow\\infty$ are\n\\begin{eqnarray}\n&\\left.\\left(\\frac{\\sigma_1(t,\\eta)}{\\sinh t}\\right)\\right\\vert_{t\\rightarrow \\infty}&=\\frac{1-\\eta}{2}\\exp\\left(-t(1-\\eta)\\right),\\\\\n&\\left.\\left(\\frac{\\sigma_2(t,\\eta)}{\\sinh t}\\right)\\right\\vert_{t\\rightarrow \\infty}&=\\frac{1-\\eta^2}{4},\\\\\n&\\left.\\left(\\frac{\\sigma_3(t,\\eta)}{\\sinh t}\\right)\\right\\vert_{t\\rightarrow \\infty}&=2\\exp\\left(-2t\\right).\n\\end{eqnarray}\nSince $M(\\infty,\\eta)=1\/2$, one obtains the behavior near the lowest singular threshold ($n=0$, $n^\\prime=1$ or vice versa for $i=1$, $n=n^\\prime=1$ for $i=3$, and $n=n^\\prime=0$ for $i=2$). Taking into account this expansion we can write the expression (\\ref{S2}) for the three modes as\n\\begin{widetext}\n\\begin{equation}\n\\Sigma_1^{(2)}=\\frac{4\\alpha e B}{\\pi}\\int_{-1}^1d\\eta(1-\\eta)\\left[\\frac{\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)}{4m_0^2+4(1-\\eta)eB+\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\left(1-\\eta^2\\right)}-\\frac{1}{4 m_0^2+4(1-\\eta)eB}\\right], \\label{SS21}\n\\end{equation}\n\\begin{equation}\n\\Sigma_2^{(2)}=\\frac{2\\alpha e B}{\\pi}\\int_{-1}^1d\\eta(1-\\eta^2)\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left(\\frac{1}{4m_0^2+\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\left(1-\\eta^2\\right)}\\right)-\\frac{2\\alpha eB}{3\\pi m_0^2}, \\label{SS22}\n\\end{equation}\n\\begin{equation}\n\\Sigma_3^{(2)}=\\frac{16\\alpha e B}{\\pi}\\int_{-1}^1d\\eta\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left[\\frac{1}{4m_0^2+8eB+\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\left(1-\\eta^2\\right)}-\\frac{1}{4m_0^2+8eB}\\right].\n\\label{SS23}\n\\end{equation}\n\nIn the limit $x\\to 0$ we get only one expression with a singularity at the first threshold,\n\\begin{equation}\n\\Sigma_2^{(2)}=\\frac{2\\alpha B}{3 \\pi B_c}\\left(3m_0^2\\int_{0}^1d\\eta(1-\\eta^2)\\frac{\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)}{4m_0^2+\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\left(1-\\eta^2\\right)}-1\\right).\\label{As}\n\\end{equation}\n\nBy carrying out the integration over $\\eta$ we obtain\n\\begin{equation}\n\\Sigma_2^{(2)}=\\frac{2\\alpha B}{3 \\pi B_c}\\left(3m_0^2\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left[\\frac{2}{k^2+\\frac{kF^2k}{2\\mathcal{F}}}-\\frac{8m_0^2\\arctan\\left(\\frac{k^2+\\frac{kF^2k}{2\\mathcal{F}}}{4m_0^2+k^2+\\frac{kF^2k}{2\\mathcal{F}}}\\right)^{1\/2}}{\\left(4m_0^2+k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)^{1\/2}\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)^{3\/2}}\\right]-1\\right).\\label{As0}\n\\end{equation}\n\nIts behavior in the low frequency limit is given by\n\\begin{equation}\n\\Sigma_2^{(2)}=\\frac{2\\alpha B}{3 \\pi B_c}\\left(3m_0^2\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left[\\frac{2}{k^2+\\frac{kF^2k}{2\\mathcal{F}}}-\\frac{8m_0^2}{\\left(4m_0^2+k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)}\\right]-1\\right),\\label{As1}\n\\end{equation}\nwhich we can express as\n\\begin{equation}\n\\Sigma_2^{(2)}=\\frac{2\\alpha B}{3 \\pi B_c}\\left(\\frac{3m_0^2}{k^2+\\frac{kF^2k}{2\\mathcal{F}}}\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left[2-\\frac{8m_0^2}{\\left(4m_0^2+k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)}\\right]-1\\right)\\label{As2}\n\\end{equation}\n\\end{widetext}\ntherefore\n\\begin{equation}\n\\Sigma_2^{(2)}=\\frac{2\\alpha B}{3 \\pi B_c}\\left(\\frac{3}{2}\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)-1\\right).\\label{As3}\n\\end{equation}\n\nUnder such conditions, by substituting the last expression and the second expression of (\\ref{ddd}) into (\\ref{eg1}), we find that the second eigenvalue of the polarization operator can be written as\n\\begin{equation}\n\\pi_2=-\\frac{2\\mu^{\\prime}B}{m_0}\\left(\\frac{kF^2k}{2\\mathcal{F}}+k^2\\right)\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right).\n\\end{equation}\nIn the same approximation $\\frac{kF^2k}{2\\mathcal{F}}+k^2\\approx\\frac{kF^2k}{2\\mathcal{F}}$ and\n\\begin{equation}\n\\pi_2=-\\frac{2\\mu^{\\prime}B}{m_0}\\frac{kF^2k}{2\\mathcal{F}}\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right).\n\\end{equation}\n\nNow, the behavior of $\\pi_2$ near the first threshold can be determined from (\\ref{As0}) by taking into account the point $\\omega^2-k_\\parallel^2=4m_0^2-\\epsilon$ with $\\epsilon>0$ and $\\epsilon\\rightarrow0$:\n\\begin{eqnarray}\n\\pi_2=\\frac{2\\alpha m_0^3 B}{B_c}\\exp\\left(\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}\\right)\\left[4m_0^2+\\left(k^2+\\frac{kF^2k}{2\\mathcal{F}}\\right)\\right]^{1\/2}.\n\\end{eqnarray}\n\n\n\n\\section{Appendix B}\n\\begin{widetext}\nThe function $\\phi_{n,n^\\prime}^{(i)}$ is given by\n\\begin{equation}\n\\phi_{n,n^\\prime}^{(1)}=-\\frac{e^2}{4 \\pi^2}\\frac{e B k^2}{z_1}\\left[(2eB(n+n^\\prime)+z_1)F_{n,n^\\prime}^{(2)}-4k_\\perp^2N_{n,n^\\prime}^{(1)}\\right],\n\\end{equation}\n\\begin{equation}\n\\phi_{n,n^\\prime}^{(2)}=-\\frac{e^2}{4 \\pi^2}e B\\left[\\left(\\frac{2e^2B^2(n-n^\\prime)^2}{z_1}+2m^2+eB(n+n^\\prime)\\right)F_{n,n^\\prime}^{(1)}+2eB(n n^\\prime)^{1\/2} G_{n,n^\\prime}^{(1)}\\right],\n\\end{equation}\n\\begin{equation}\n\\phi_{n,n^\\prime}^{(3)}=-\\frac{e^2}{4 \\pi^2}\\frac{e B k^2}{z_1}\\left[(2eB(n+n^\\prime)+z_1)F_{n,n^\\prime}^{(2)}+4k_\\perp^2N_{n,n^\\prime}^{(1)}\\right],\n\\end{equation}\nwhere, calling $y=\\frac{kF^2k}{4m_0^2\\mathcal{F}}\\frac{B_c}{B}$ 
and\n$z_1=k^2+\\frac{kF^2k}{2\\mathcal{F}}$\n\\[\nF_{n,n^\\prime}^{(1)}=\\left\\{[L_{n^\\prime-1}^{n-n^\\prime}(y)]^2+\\frac{n^\\prime}{n}[L_{n^\\prime}^{n-n^\\prime}(y)]^2\\right\\}\\frac{(n^\\prime-1)!}{(n-1)!}y^{n-n^\\prime}\\exp[-y],\n\\]\n\\[\nF_{n,n^\\prime}^{(2,3)}=\\left\\{\\frac{y}{n}[L_{n^\\prime-1}^{n-n^\\prime+1}(y)]^2\\pm\\frac{n^\\prime}{x}[L_{n^\\prime}^{n-n^\\prime-1}(y)]^2\\right\\}\\frac{(n^\\prime-1)!}{(n-1)!}y^{n-n^\\prime}\\exp[-y],\n\\]\n\\[\nG_{n,n^\\prime}^{(1)}=2\\left(\\frac{n^\\prime}{n}\\right)^{1\/2}\\frac{(n^\\prime-1)!}{(n-1)!}y^{n-n^\\prime}L_{n^\\prime-1}^{n-n^\\prime}(y)L_{n^\\prime}^{n-n^\\prime}(y)\\exp[-y],\n\\]\n\\[\nN_{n,n^\\prime}^{(1)}=\\frac{n^\\prime!}{(n-1)!}y^{n-n^\\prime-1}L_{n^\\prime-1}^{n-n^\\prime+1}(y)L_{n}^{n-n^\\prime-1}(y)\\exp[-y].\n\\]\n\nHere $L_n^m(y)$ are generalized Laguerre polynomials. Laguerre\npolynomials with $-1$ for lower index must be taken as zero.\n\\end{widetext}\n\n\\bibliographystyle{apsrev}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nConsider the problem of recovering a vector $\\mathbf{x}^0 \\in {\\mathbb{R}}^N$\nfrom noisy linear measurements of the form\n\\begin{equation} \\label{eq:yAx}\n \\mathbf{y} = \\mathbf{A}\\mathbf{x}^0 + \\mathbf{w} \\in {\\mathbb{R}}^M ,\n \n\\end{equation}\nwhere $\\mathbf{A}$ is a known matrix and $\\mathbf{w}$ is an unknown, unstructured noise vector.\nIn the statistics literature, this problem is known as \\emph{standard linear regression}, and in the signal processing literature this is known as \\emph{solving a linear inverse problem}, or as \\emph{compressive sensing} when $M\\ll N$ and $\\mathbf{x}^0$ is sparse.\n\n\n\\subsection{Problem Formulations} \\label{sec:formulations}\n\nOne approach to recovering $\\mathbf{x}^0$ is \\emph{regularized quadratic loss minimization},\nwhere an estimate $\\hat{\\mathbf{x}}$ of $\\mathbf{x}^0$ is computed by solving an optimization problem of the form\n\\begin{equation}\n\\hat{\\mathbf{x}}\n= \\mathop{\\mathrm{arg\\,min}}_{\\mathbf{x}\\in{\\mathbb{R}}^N} \\frac{1}{2}\\|\\mathbf{y}-\\mathbf{A}\\mathbf{x}\\|_2^2 + f(\\mathbf{x}) .\n\\label{eq:RQLM}\n\\end{equation}\nHere, the penalty function or ``regularization'' $f(\\mathbf{x})$ is chosen to promote a desired structure in $\\hat{\\mathbf{x}}$.\nFor example, the choice $f(\\mathbf{x})=\\lambda\\|\\mathbf{x}\\|_1$ with $\\lambda>0$ promotes sparsity in $\\hat{\\mathbf{x}}$.\n\nAnother approach is through the Bayesian methodology.\nHere, one presumes a prior density $p(\\mathbf{x})$ and likelihood function $p(\\mathbf{y}|\\mathbf{x})$ and then aims to compute the posterior density\n\\begin{equation} \\label{eq:Bayes}\np(\\mathbf{x}|\\mathbf{y}) = \\frac{p(\\mathbf{y}|\\mathbf{x})p(\\mathbf{x})}{\\int p(\\mathbf{y}|\\mathbf{x})p(\\mathbf{x})\\dif\\mathbf{x}}\n\\end{equation}\nor, in practice, a summary of it \\cite{pereyra2016stoch}.\nExample summaries include the maximum \\emph{a posteriori} (MAP) estimate\n\\begin{equation} \\label{eq:MAP}\n\\hat{\\mathbf{x}}_{\\text{\\sf MAP}} = \\mathop{\\mathrm{arg\\,max}}_\\mathbf{x} p(\\mathbf{x}|\\mathbf{y}) ,\n\\end{equation}\nthe minimum mean-squared error (MMSE) estimate\n\\begin{equation} \\label{eq:MMSE}\n\\hat{\\mathbf{x}}_{\\text{\\sf MMSE}} = \\mathop{\\mathrm{arg\\,min}}_{\\tilde{\\mathbf{x}}} \\int \\|\\mathbf{x}-\\tilde{\\mathbf{x}}\\|^2 p(\\mathbf{x}|\\mathbf{y}) \\dif\\mathbf{x}\n= \\mathbb{E}[\\mathbf{x}|\\mathbf{y}] ,\n\\end{equation}\nor the posterior marginal densities 
$\\{p(x_n|\\mathbf{y})\\}_{n=1}^N$.\n\nNote that, if the noise $\\mathbf{w}$ is modeled as\n$\\mathbf{w} \\sim {\\mathcal N}(\\mathbf{0},\\gamma_w^{-1}\\mathbf{I})$,\ni.e., additive white Gaussian noise (AWGN) with some precision $\\gamma_w>0$,\nthen\nthe regularized quadratic loss minimization problem \\eqref{eq:RQLM}\nis equivalent to\nMAP estimation under the prior\n$p(\\mathbf{x}) \\propto \\exp[ -\\gamma_w f(\\mathbf{x}) ]$,\nwhere $\\propto$ denotes equality up to a scaling that is independent of $\\mathbf{x}$.\nThus we focus on MAP, MMSE, and marginal posterior inference in the sequel.\n\n\n\\subsection{Approximate Message Passing} \\label{sec:amp}\n\nRecently, the so-called \\emph{approximate message passing} (AMP) algorithm \\cite{DonohoMM:09,DonohoMM:10-ITW1} was proposed as an iterative method to recover $\\mathbf{x}^0$ from measurements of the form \\eqref{eq:yAx}.\nThe AMP iterations are specified in Algorithm~\\ref{algo:amp}.\nThere,\\footnote{The subscript ``1'' in $\\mathbf{g}_1$ is used promote notational consistency with Vector AMP algorithm presented in the sequel.}\n$\\mathbf{g}_1(\\cdot,\\gamma_k):{\\mathbb{R}}^N\\rightarrow{\\mathbb{R}}^N$ is a \\emph{denoising} function parameterized by $\\gamma_k$,\nand $\\bkt{\\mathbf{g}_1'(\\mathbf{r}_k,\\gamma_k)}$ is its \\emph{divergence} at $\\mathbf{r}_k$.\nIn particular, $\\mathbf{g}_1'(\\mathbf{r}_k,\\gamma_k)\\in{\\mathbb{R}}^N$ is the diagonal of the Jacobian,\n\\begin{align}\n\\mathbf{g}_1'(\\mathbf{r}_k,\\gamma_k)\n= \\mathop{\\mathrm{diag}}\\left[\\frac{\\partial \\mathbf{g}_1(\\mathbf{r}_k,\\gamma_k)}{\\partial \\mathbf{r}_k}\\right]\n\\label{eq:jacobian} ,\n\\end{align}\nand $\\bkt{\\cdot}$ is the empirical averaging operation\n\\begin{align}\n\\bkt{\\mathbf{u}} := \\frac{1}{N} \\sum_{n=1}^N u_n\n\\label{eq:bkt} .\n\\end{align}\n\n\\begin{algorithm}[t]\n\\caption{AMP}\n\\begin{algorithmic}[1] \\label{algo:amp}\n\\REQUIRE{Matrix $\\mathbf{A}\\!\\in\\!{\\mathbb{R}}^{M\\times N}$, measurement vector $\\mathbf{y}$,\ndenoiser $\\mathbf{g}_1(\\cdot,\\gamma_k)$, and number of iterations $K_{\\rm it}$.}\n\\STATE{Set $\\mathbf{v}_{-1}=\\mathbf{0}$ and select initial $\\mathbf{r}_0,\\gamma_0$.}\n\\FOR{$k=0,1,\\dots,K_{\\rm it}$}\n \\STATE{$\\hat{\\mathbf{x}}_{k} = \\mathbf{g}_1(\\mathbf{r}_k,\\gamma_k)$}\n \\label{line:x}\n \\STATE{$\\alpha_k = \\bkt{\\mathbf{g}_1'(\\mathbf{r}_k,\\gamma_k)}$}\n \\label{line:a}\n \\STATE{$\\mathbf{v}_k = \\mathbf{y} - \\mathbf{A}\\hat{\\mathbf{x}}_k\n + \\frac{N}{M}\\alpha_{k-1}\\mathbf{v}_{k-1}$}\n \\label{line:v}\n \\STATE{$\\mathbf{r}_{k\\! + \\! 1} = \\hat{\\mathbf{x}}_k + \\mathbf{A}^{\\text{\\sf T}}\\mathbf{v}_k$}\n \\label{line:r}\n \\STATE{Select $\\gamma_{k\\! + \\! 
1}$}\n \\label{line:gamma}\n\\ENDFOR\n\\STATE{Return $\\hat{\\mathbf{x}}_{K_{\\rm it}}$.}\n\\end{algorithmic}\n\\end{algorithm}\n\nWhen $\\mathbf{A}$ is a large i.i.d.\\ sub-Gaussian matrix,\n$\\mathbf{w}\\sim{\\mathcal N}(\\mathbf{0},\\gamma_{w0}^{-1}\\mathbf{I})$,\nand $\\mathbf{g}_1(\\cdot,\\gamma_k)$ is \\emph{separable}, i.e.,\n\\begin{align}\n[\\mathbf{g}_1(\\mathbf{r}_k,\\gamma_k)]_n\n= g_1(r_{kn},\\gamma_k)\n~ \\forall n\n\\label{eq:g1sep} ,\n\\end{align}\nwith identical Lipschitz components $g_1(\\cdot,\\gamma_k):{\\mathbb{R}}\\rightarrow {\\mathbb{R}}$,\nAMP displays a remarkable behavior, which is that $\\mathbf{r}_k$ behaves like a white-Gaussian-noise corrupted version of the true signal $\\mathbf{x}^0$ \\cite{DonohoMM:09}.\nThat is,\n\\begin{align}\n\\mathbf{r}_k\n&= \\mathbf{x}^0 + {\\mathcal N}(\\mathbf{0},\\tau_k\\mathbf{I}) ,\n\\label{eq:unbiased}\n\\end{align}\nfor some variance $\\tau_k>0$.\nMoreover, the variance $\\tau_k$ can be predicted through the following \\emph{state evolution} (SE):\n\\begin{subequations} \\label{eq:ampSE}\n\\begin{align}\n{\\mathcal E}(\\gamma_k,\\tau_k)\n&= \\frac{1}{N}\\mathbb{E}\\left[\\big\\|\\mathbf{g}_1\\big(\\mathbf{x}^0+{\\mathcal N}(\\mathbf{0},\\tau_k\\mathbf{I}),\\gamma_k\\big)-\\mathbf{x}^0\\big\\|^2\\right] \\\\\n\\tau_{k\\! + \\! 1}\n&= \\gamma_{w0}^{-1} + \\frac{N}{M}{\\mathcal E}(\\gamma_k,\\tau_k),\n\\end{align}\n\\end{subequations}\nwhere ${\\mathcal E}(\\gamma_k,\\tau_k)$ is the \\textb{MSE of the} AMP estimate $\\hat{\\mathbf{x}}_k$.\n\nThe AMP SE \\eqref{eq:ampSE} was rigorously established\nfor i.i.d.\\ Gaussian $\\mathbf{A}$ in \\cite{BayatiM:11} and for i.i.d.\\ sub-Gaussian $\\mathbf{A}$ in \\cite{BayLelMon:15}\nin the \\emph{large-system limit} (i.e., $N,M\\rightarrow\\infty$ and $N\/M\\rightarrow\\delta\\in (0,1)$) under some mild regularity conditions.\nBecause the SE \\eqref{eq:ampSE} holds for generic $g_1(\\cdot,\\gamma_k)$ and generic $\\gamma_k$-update rules, it can be used to characterize the application of AMP to many problems, as further discussed in Section~\\ref{sec:bayes}.\n\n\n\\subsection{Limitations, Modifications, and Alternatives to AMP} \\label{sec:alternatives}\n\nAn important limitation of AMP's SE is that it holds only under large i.i.d.\\ sub-Gaussian $\\mathbf{A}$.\nAlthough recent analysis \\cite{rush2016finite} has rigorously analyzed AMP's performance under finite-sized i.i.d.\\ Gaussian $\\mathbf{A}$, there remains the important question of how AMP behaves with general $\\mathbf{A}$.\n\nUnfortunately, it turns out that the AMP Algorithm~\\ref{algo:amp} is somewhat fragile with regard to the construction of $\\mathbf{A}$.\nFor example, AMP diverges with even mildly ill-conditioned or non-zero-mean $\\mathbf{A}$ \\cite{RanSchFle:14-ISIT,Caltagirone:14-ISIT,Vila:ICASSP:15}.\nAlthough damping \\cite{RanSchFle:14-ISIT,Vila:ICASSP:15}, mean-removal \\cite{Vila:ICASSP:15}, sequential updating \\cite{manoel2015swamp}, and direct free-energy minimization \\cite{rangan2015admm} all help to prevent AMP from diverging, such strategies are limited in effectiveness.\n\nMany other algorithms for standard linear regression \\eqref{eq:yAx} have been designed using approximations of belief propagation (BP) and\/or free-energy minimization.\nAmong these are the Adaptive Thouless-Anderson-Palmer (ADATAP) \\cite{opper2001adaptive}, Expectation Propagation (EP) \\cite{Minka:01,seeger2005expectation}, Expectation Consistent Approximation (EC) \\cite{OppWin:05,kabashima2014signal,fletcher2016expectation}, 
(S-transform AMP) S-AMP \\cite{cakmak2014samp,cakmak2015samp},\nand (Orthogonal AMP) OAMP \\cite{ma2016orthogonal} approaches.\nAlthough numerical experiments suggest that some of these algorithms are more robust than AMP Algorithm~\\ref{algo:amp} to the choice of $\\mathbf{A}$, their convergence has not been rigorously analyzed.\nIn particular, there remains the question of whether there exists an AMP-like algorithm with a rigorous SE analysis that holds for a larger class of matrices than i.i.d.\\ sub-Gaussian.\nIn the sequel, we describe one such algorithm.\n\n\n\\subsection{Contributions}\n\nIn this paper, we propose a computationally efficient iterative algorithm for the estimation of the vector $\\mathbf{x}^0$ from noisy linear measurements $\\mathbf{y}$ of the form in \\eqref{eq:yAx}.\n(See Algorithm~\\ref{algo:vampSVD}.)\nWe call the algorithm ``\\emph{vector AMP}'' (VAMP) because\ni) its behavior can be rigorously characterized by a scalar SE under large random $\\mathbf{A}$,\nand\nii) it can be derived using an approximation of BP on a factor graph with vector-valued variable nodes.\nWe outline VAMP's derivation in Section~\\ref{sec:vamp} with the aid of some background material that is reviewed in Section~\\ref{sec:back}.\n\nIn Section~\\ref{sec:SE}, we establish the VAMP SE in the case of\nlarge \\emph{right-orthogonally invariant} random $\\mathbf{A}$\nand separable Lipschitz denoisers $\\mathbf{g}_1(\\cdot,\\gamma_k)$,\nusing techniques similar to those used by Bayati and Montanari in \\cite{BayatiM:11}.\nImportantly, these right-orthogonally invariant $\\mathbf{A}$ allow arbitrary singular values and arbitrary left singular vectors, making VAMP much more robust than AMP in regards to the construction of $\\mathbf{A}$.\nIn Section~\\ref{sec:replica}, we establish that the asymptotic MSE predicted by VAMP's SE agrees with the MMSE predicted by the replica method \\cite{tulino2013support} when VAMP's priors are matched to the true data.\nFinally, in Section~\\ref{sec:num}, we present numerical experiments demonstrating that VAMP's empirical behavior matches its SE at moderate dimensions, even when $\\mathbf{A}$ is highly ill-conditioned or non-zero-mean.\n\n\n\\subsection{Relation to Existing Work}\n\nThe idea to construct algorithms from graphical models with vector-valued nodes is not new, and in fact underlies the EC- and EP-based algorithms described in~\\cite{Minka:01,seeger2005expectation,OppWin:05,kabashima2014signal,fletcher2016expectation}.\nThe use of vector-valued nodes is also central to the derivation of S-AMP~\\cite{cakmak2014samp,cakmak2015samp}.\nIn the sequel, we present a simple derivation of VAMP \\textb{that uses the EP methodology from \\cite{Minka:01,seeger2005expectation}, which passes approximate messages between the nodes of a factor graph.\nBut we note that VAMP can also be derived using the EC methodology, which formulates a variational optimization problem using a constrained version of the Kullback-Leibler distance and then relaxes the density constraints to moment constraints.\nFor more details on the latter approach, we refer the interested reader to the discussion of ``diagonal restricted EC'' in \\cite[App.~D]{OppWin:05} and ``uniform diagonalized EC'' in \\cite{fletcher2016expectation}.\n}\n\nIt was recently shown \\cite{kabashima2014signal} that, for large right-orthogonally invariant $\\mathbf{A}$, the fixed points of diagonal-restricted EC are ``good'' in the sense that they are consistent with a certain replica prediction of the MMSE that 
is derived in \\cite{kabashima2014signal}.\nSince the fixed points of ADATAP and S-AMP are known \\cite{cakmak2014samp} to coincide with those of diagonal-restricted EC (and thus VAMP), all of these algorithms can be understood to have good fixed points.\nThe trouble is that these algorithms do not necessarily converge to their fixed points.\nFor example, S-AMP diverges with even mildly ill-conditioned or non-zero-mean $\\mathbf{A}$, as demonstrated in Section~\\ref{sec:num}.\n\\emph{Our main contribution is establishing that VAMP's behavior can be exactly predicted by an SE analysis analogous to that for AMP.\nThis SE analysis then provides precise convergence guarantees for large right-orthogonally invariant $\\mathbf{A}$.}\nThe numerical results presented in Section~\\ref{sec:num} confirm that, in practice, VAMP's convergence is remarkably robust, even with very ill-conditioned or mean-perturbed matrices $\\mathbf{A}$ of finite dimension.\n\nThe main insight that leads to both the VAMP algorithm and its SE analysis\ncomes from a consideration of the singular value decomposition (SVD) of $\\mathbf{A}$.\nSpecifically, take the ``economy\" SVD,\n\\begin{align}\n\\mathbf{A}\n&=\\overline{\\mathbf{U}}\\mathrm{Diag}(\\overline{\\mathbf{s}})\\overline{\\mathbf{V}}^{\\text{\\sf T}}\n\\label{eq:econSVD} ,\n\\end{align}\nwhere $\\overline{\\mathbf{s}}\\in{\\mathbb{R}}^R$ for $R:=\\mathrm{rank}(\\mathbf{A})\\leq\\min(M,N)$.\nThe VAMP iterations can be performed by matrix-vector multiplications with $\\overline{\\mathbf{V}}\\in{\\mathbb{R}}^{N\\times R}$ and $\\overline{\\mathbf{V}}^{\\text{\\sf T}}$, yielding a structure\nvery similar to that of AMP.\nComputationally, the SVD form of VAMP\n(i.e., Algorithm~\\ref{algo:vampSVD})\nhas the benefit that, once the SVD has been computed,\nVAMP's per-iteration cost will be dominated by $O(RN)$ floating-point operations (flops), as opposed to $O(N^3)$ for the EC methods from\n\\cite[App.~D]{OppWin:05} or \\cite{fletcher2016expectation}.\nFurthermore, if these matrix-vector multiplications have fast implementations (e.g., $O(N)$ when $\\overline{\\mathbf{V}}$ is a discrete wavelet transform), then the complexity of VAMP reduces accordingly.\nWe emphasize that VAMP uses a single SVD, not a per-iteration SVD.\nIn many applications, this SVD can be computed off-line.\nIn the case that SVD complexity may be an issue, we note that it costs $O(MNR)$ flops\nby classical methods or $O(MN\\log R)$ by modern approaches \\cite{halko2011matrix}.\n\nThe SVD offers more than just a fast algorithmic implementation.\nMore importantly, it connects VAMP to AMP in such a way that the Bayati and Montanari's SE analysis of AMP \\cite{BayatiM:11} can be extended to obtain a rigorous SE for VAMP.\nIn this way, the SVD can be viewed as a proof technique.\nSince it will be useful for derivation\/interpretation in the sequel, we note that the VAMP iterations can also be written without an explicit SVD\n(see Algorithm~\\ref{algo:vamp}),\nin which case they coincide with the uniform-diagonalization variant of the generalized EC method from \\cite{fletcher2016expectation}.\nIn this latter implementation, the linear MMSE (LMMSE) estimate \\eqref{eq:g2slr} must be computed at each iteration, as well as the trace of its covariance matrix \\eqref{eq:a2slr}, which both involve the inverse of an $N\\times N$ matrix.\n\nThe OAMP-LMMSE algorithm from \\cite{ma2016orthogonal}\nis similar to VAMP and diagonal-restricted EC,\nbut different in that it approximates certain variance terms.\nThis 
difference can be seen by comparing\nequations (30)-(31) in \\cite{ma2016orthogonal} to\nlines~\\ref{line:gamtilsvd} and \\ref{line:gamsvd} in Algorithm~\\ref{algo:vampSVD}\n(or lines~\\ref{line:gam1} and \\ref{line:gam2} in Algorithm~\\ref{algo:vamp}).\nFurthermore, OAMP-LMMSE differs from VAMP in its reliance on matrix inversion (see, e.g., the comments in the Conclusion of \\cite{ma2016orthogonal}).\n\n\n\\textb{Shortly after the initial publication of this work, \\cite{takeuchi2017rigorous} proved a very similar result for the complex case using a fully probabilistic analysis.}\n\n\\subsection{Notation}\n\nWe use\ncapital boldface letters like $\\mathbf{A}$ for matrices,\nsmall boldface letters like $\\mathbf{a}$ for vectors,\n$(\\cdot)^{\\text{\\sf T}}$ for transposition,\nand $a_n=[\\mathbf{a}]_n$ to denote the $n$th element of $\\mathbf{a}$.\nAlso, we use\n$\\|\\mathbf{a}\\|_p=(\\sum_n |a_n|^p)^{1\/p}$ for the $\\ell_p$ norm of $\\mathbf{a}$,\n$\\|\\mathbf{A}\\|_2$ for the spectral norm of $\\mathbf{A}$,\n$\\mathrm{Diag}(\\mathbf{a})$ for the diagonal matrix created from vector $\\mathbf{a}$, and\n$\\mathop{\\mathrm{diag}}(\\mathbf{A})$ for the vector extracted from the diagonal of matrix $\\mathbf{A}$.\nLikewise, we use\n$\\mathbf{I}_N$ for the $N\\times N$ identity matrix,\n$\\mathbf{0}$ for the matrix of all zeros, and\n$\\mathbf{1}$ for the matrix of all ones.\nFor a random vector $\\mathbf{x}$, we denote\nits probability density function (pdf) by $p(\\mathbf{x})$,\nits expectation by $\\mathbb{E}[\\mathbf{x}]$,\nand\nits covariance matrix by $\\mathrm{Cov}[\\mathbf{x}]$.\nSimilarly, we use\n$p(\\mathbf{x}|\\mathbf{y})$, $\\mathbb{E}[\\mathbf{x}|\\mathbf{y}]$, and $\\mathrm{Cov}[\\mathbf{x}|\\mathbf{y}]$ for the\n\\emph{conditional} pdf, expectation, and covariance, respectively.\nAlso, we use\n$\\mathbb{E}[\\mathbf{x}|b]$ and $\\mathrm{Cov}[\\mathbf{x}|b]$ to denote the\nexpectation and covariance of $\\mathbf{x}\\sim b(\\mathbf{x})$,\ni.e., $\\mathbf{x}$ distributed according to the pdf $b(\\mathbf{x})$.\nWe refer to\nthe Dirac delta pdf using $\\delta(\\mathbf{x})$\nand to\nthe pdf of a Gaussian random vector $\\mathbf{x}\\in{\\mathbb{R}}^N$ with mean $\\mathbf{a}$ and covariance $\\mathbf{C}$ using ${\\mathcal N}(\\mathbf{x};\\mathbf{a},\\mathbf{C})=\\exp( -(\\mathbf{x}-\\mathbf{a})^{\\text{\\sf T}}\\mathbf{C}^{-1}(\\mathbf{x}-\\mathbf{a})\/2 )\/\\sqrt{(2\\pi)^N|\\mathbf{C}|}$.\nFinally,\n$p(\\mathbf{x})\\propto f(\\mathbf{x})$ says that functions $p(\\cdot)$ and $f(\\cdot)$ are equal up to a scaling that is invariant to $\\mathbf{x}$.\n\n\n\\section{Background on the AMP Algorithm} \\label{sec:back}\n\nIn this section, we provide background on the AMP algorithm that will be useful in the sequel.\n\n\n\\subsection{Applications to Bayesian Inference} \\label{sec:bayes}\n\nWe first detail the application of the AMP Algorithm~\\ref{algo:amp} to the Bayesian inference problems from Section~\\ref{sec:formulations}.\nSuppose that the prior on $\\mathbf{x}$ is i.i.d., so that it takes the form\n\\begin{equation}\np(\\mathbf{x})=\\prod_{n=1}^N p(x_n)\n\\label{eq:pxiid} .\n\\end{equation}\nThen AMP can be applied to MAP problem \\eqref{eq:MAP} by choosing the scalar denoiser as\n\\begin{equation}\ng_1(r_{kn},\\gamma_k)\n= \\mathop{\\mathrm{arg\\,min}}_{x_n\\in{\\mathbb{R}}} \\left[ \\frac{\\gamma_k}{2}|x_n-r_{kn}|^2 - \\ln p(x_n) \\right]\n\\label{eq:gmapsca} .\n\\end{equation}\nLikewise, AMP can be applied to the MMSE problem \\eqref{eq:MMSE} by 
choosing\n\\begin{equation}\ng_1(r_{kn},\\gamma_k)\n= \\mathbb{E}[x_n|r_{kn},\\gamma_k]\n\\label{eq:g1mmsesca} ,\n\\end{equation}\nwhere the expectation in \\eqref{eq:g1mmsesca} is with respect to the conditional density\n\\begin{align}\np(x_n|r_{kn},\\gamma_k)\n\\propto \\exp\\left[ -\\frac{\\gamma_k}{2}|r_{kn}-x_n|^2 +\\ln p(x_n) \\right]\n\\label{eq:pxr1sca} .\n\\end{align}\nIn addition, $p(x_n|r_{kn},\\gamma_k)$ in \\eqref{eq:pxr1sca} acts as AMP's iteration-$k$ approximation of the marginal posterior $p(x_n|\\mathbf{y})$.\nFor later use, we note that the derivative of the MMSE scalar denoiser \\eqref{eq:g1mmsesca} w.r.t.\\ its first argument can be expressed as\n\\begin{equation}\n g_1'(r_{kn},\\gamma_k) = \\gamma_k\\mathrm{var}\\left[ x_n | r_{kn},\\gamma_k \\right]\n\\label{eq:g1dervar} ,\n\\end{equation}\nwhere the variance is computed with respect to the density~\\eqref{eq:pxr1sca}\n(see, e.g., \\cite{Rangan:11-ISIT}).\n\nIn \\eqref{eq:gmapsca}-\\eqref{eq:pxr1sca}, $\\gamma_k$ can be interpreted as an estimate of $\\tau_k^{-1}$, the iteration-$k$ precision of $\\mathbf{r}_k$ from \\eqref{eq:unbiased}.\nIn the case that $\\tau_k$ is known, the ``matched'' assignment\n\\begin{align}\n\\gamma_k=\\tau_k^{-1}\n\\label{eq:gamma_matched}\n\\end{align}\nleads to the interpretation of \\eqref{eq:gmapsca} and \\eqref{eq:g1mmsesca} as the scalar MAP and MMSE denoisers of $r_{kn}$, respectively.\nSince, in practice, $\\tau_k$ is usually not known, it has been suggested to use\n\\begin{align}\n\\gamma_{k\\! + \\! 1} &= \\frac{M}{\\|\\mathbf{v}_k\\|^2}\n\\label{eq:gamma_amp_practical} ,\n\\end{align}\nalthough other choices are possible \\cite{Montanari:12-bookChap}.\n\n\n\\subsection{Relation of AMP to IST} \\label{sec:ista}\n\nThe AMP Algorithm~\\ref{algo:amp} is closely related to the well-known \\emph{iterative soft thresholding} (IST) algorithm \\cite{ChamDLL:98,DaubechiesDM:04} that can be used\\footnote{The IST algorithm is guaranteed to converge \\cite{DaubechiesDM:04} when $\\|\\mathbf{A}\\|_2<1$.} to solve \\eqref{eq:RQLM} with convex $f(\\cdot)$.\nIn particular, if the term\n\\begin{align}\n\\frac{N}{M}\\alpha_{k-1}\\mathbf{v}_{k-1}\n\\label{eq:onsager}\n\\end{align}\nis removed from line~\\ref{line:v} of Algorithm~\\ref{algo:amp}, then what remains is the IST algorithm.\n\nThe term \\eqref{eq:onsager} is known as the \\emph{Onsager} term in the statistical physics literature \\cite{ThoulessAP:77}.\nUnder large i.i.d.\\ sub-Gaussian $\\mathbf{A}$, the Onsager correction ensures the behavior in \\eqref{eq:unbiased}.\nWhen \\eqref{eq:unbiased} holds, the denoiser $g_{1}(\\cdot,\\gamma_k)$ can be optimized accordingly, in which case each iteration of AMP becomes very productive.\nAs a result, AMP converges much faster than ISTA for i.i.d.\\ Gaussian $\\mathbf{A}$ (see, e.g., \\cite{Montanari:12-bookChap} for a comparison).\n\n\n\\subsection{Derivations of AMP} \\label{sec:amp_deriv}\n\nThe AMP algorithm can be derived in several ways.\nOne way is through approximations of loopy belief propagation (BP) \\cite{Pearl:88,YedidiaFW:03} on a bipartite factor graph constructed from the factorization\n\\begin{align}\np(\\mathbf{y},\\mathbf{x})\n&= \\left[ \\prod_{m=1}^M {\\mathcal N}(y_m;\\mathbf{a}_m^{\\text{\\sf T}}\\mathbf{x},\\gamma_w^{-1}) \\right] \\left[ \\prod_{n=1}^N p(x_n) \\right]\n\\label{eq:amp_factors} ,\n\\end{align}\nwhere $\\mathbf{a}_m^{\\text{\\sf T}}$ denotes the $m$th row of $\\mathbf{A}$.\nWe refer the reader to \\cite{DonohoMM:10-ITW1,Rangan:11-ISIT} for details on the 
message-passing derivation of AMP, noting connections to the general framework of \\emph{expectation propagation} (EP) \\cite{Minka:01,seeger2005expectation}.\nAMP can also be derived through a ``free-energy'' approach, where one\ni) proposes a cost function, ii) derives conditions on its stationary points, and iii) constructs an algorithm whose fixed points coincide with those stationary points.\nWe refer the reader to \\cite{RanSRFC:13-ISIT,Krzakala:14-ISITbethe,cakmak2014samp} for details, and note connections to the general framework of \\emph{expectation consistent approximation} (EC) \\cite{OppWin:05,fletcher2016expectation}.\n\n\n\n\n\\section{The Vector AMP Algorithm} \\label{sec:vamp}\n\nThe \\emph{Vector AMP} (VAMP) algorithm is stated in Algorithm~\\ref{algo:vampSVD}.\nIn line~\\ref{line:dsvd}, ``$\\overline{\\mathbf{s}}^2$'' refers to the componentwise square of vector $\\overline{\\mathbf{s}}$.\nAlso, $\\mathrm{Diag}(\\mathbf{a})$ denotes the diagonal matrix whose diagonal components are given by the vector $\\mathbf{a}$.\n\n\\begin{algorithm}[t]\n\\caption{Vector AMP (SVD Form)}\n\\begin{algorithmic}[1] \\label{algo:vampSVD}\n\\REQUIRE{\nMatrix $\\mathbf{A} \\in {\\mathbb{R}}^{M \\times N}$; measurements $\\mathbf{y} \\in {\\mathbb{R}}^M$;\ndenoiser $\\mathbf{g}_1(\\cdot,\\gamma_k)$;\nassumed noise precision $\\gamma_w \\geq 0$; and\nnumber of iterations $K_{\\rm it}$. }\n\\STATE{Compute economy SVD $\\overline{\\mathbf{U}}\\mathrm{Diag}(\\overline{\\mathbf{s}})\\overline{\\mathbf{V}}^{\\text{\\sf T}}=\\mathbf{A}$\nwith $\\overline{\\mathbf{U}}^{\\text{\\sf T}}\\overline{\\mathbf{U}}=\\mathbf{I}_R$,\n$\\overline{\\mathbf{V}}^{\\text{\\sf T}}\\overline{\\mathbf{V}}=\\mathbf{I}_R$,\n$\\overline{\\mathbf{s}}\\in{\\mathbb{R}}_+^{R}$},\n$R=\\mathrm{rank}(\\mathbf{A})$.\n\\STATE{Compute preconditioned $\\tilde{\\mathbf{y}}:=\\mathrm{Diag}(\\overline{\\mathbf{s}})^{-1}\\overline{\\mathbf{U}}^{\\text{\\sf T}}\\mathbf{y}$}\n\\STATE{Select initial $\\mathbf{r}_{0}$ and $\\gamma_{0}\\geq 0$.}\n\\FOR{$k=0,1,\\dots,K_{\\rm it}$}\n \\STATE{$\\hat{\\mathbf{x}}_{k} = \\mathbf{g}_1(\\mathbf{r}_{k},\\gamma_{k})$}\n \\label{line:xsvd}\n \\STATE{$\\alpha_{k} = \\bkt{ \\mathbf{g}_1'(\\mathbf{r}_{k},\\gamma_{k}) }$}\n \\label{line:asvd}\n \\STATE{$\\tilde{\\mathbf{r}}_k = (\\hat{\\mathbf{x}}_{k} - \\alpha_{k}\\mathbf{r}_{k})\/(1-\\alpha_{k})$}\n \\label{line:xtilsvd}\n \\STATE{$\\tilde{\\gamma}_{k} = \\gamma_{k}(1-\\alpha_{k})\/\\alpha_{k}$}\n \\label{line:gamtilsvd}\n \\STATE{$\\mathbf{d}_k=\\gamma_w\\mathrm{Diag}\\big(\\gamma_w\\overline{\\mathbf{s}}^2+\\tilde{\\gamma}_{k}\\mathbf{1}\\big)^{-1}\\overline{\\mathbf{s}}^2$}\n \\label{line:dsvd}\n \\STATE{$\\gamma_{k\\! + \\! 1} = \\tilde{\\gamma}_{k}\\bkt{\\mathbf{d}_k}\/(\\frac{N}{R}-\\bkt{\\mathbf{d}_k})$}\n \\label{line:gamsvd}\n \\STATE{$\\mathbf{r}_{k\\! + \\! 
1} = \\tilde{\\mathbf{r}}_k + \\frac{N}{R}\n \\overline{\\mathbf{V}}\\mathrm{Diag}\\big(\\mathbf{d}_k\/\\bkt{\\mathbf{d}_k}\\big)\\big(\\tilde{\\mathbf{y}}-\\overline{\\mathbf{V}}^{\\text{\\sf T}}\\tilde{\\mathbf{r}}_k\\big)$}\n \\label{line:rsvd}\n\\ENDFOR\n\\STATE{Return $\\hat{\\mathbf{x}}_{K_{\\rm it}}$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Relation of VAMP to AMP} \\label{sec:vamp2amp}\n\nA visual examination of VAMP Algorithm~\\ref{algo:vampSVD} shows many similarities with AMP Algorithm~\\ref{algo:amp}.\nIn particular, the denoising and divergence steps in lines~\\ref{line:xsvd}-\\ref{line:asvd} of Algorithm~\\ref{algo:vampSVD} are identical to those in lines~\\ref{line:x}-\\ref{line:a} of Algorithm~\\ref{algo:amp}.\nLikewise, an Onsager term $\\alpha_k\\mathbf{r}_k$ is visible in line~\\ref{line:xtilsvd} of Algorithm~\\ref{algo:vampSVD}, analogous to the one in line~\\ref{line:v} of Algorithm~\\ref{algo:amp}.\nFinally, the per-iteration computational complexity of each algorithm is dominated by two matrix-vector multiplications: those involving $\\mathbf{A}$ and $\\mathbf{A}^{\\text{\\sf T}}$ in Algorithm~\\ref{algo:amp} and those involving $\\overline{\\mathbf{V}}$ and $\\overline{\\mathbf{V}}^{\\text{\\sf T}}$ in Algorithm~\\ref{algo:vampSVD}.\n\nThe most important similarity between the AMP and VAMP algorithms is not obvious from visual inspection and will be established rigorously in the sequel.\nIt is the following: for certain large random $\\mathbf{A}$,\nthe VAMP quantity $\\mathbf{r}_k$ behaves\nlike a white-Gaussian-noise corrupted version of the true signal $\\mathbf{x}^0$, i.e.,\n\\begin{align}\n\\mathbf{r}_k\n&= \\mathbf{x}^0 + {\\mathcal N}(\\mathbf{0},\\tau_k\\mathbf{I}) ,\n\\label{eq:unbiased2}\n\\end{align}\nfor some variance $\\tau_k>0$.\nMoreover, the noise variance $\\tau_k$ can be tracked through a scalar SE formalism whose details will be provided in the sequel.\nFurthermore, the VAMP quantity $\\gamma_k$ can be interpreted as an estimate of $\\tau_k^{-1}$ in \\eqref{eq:unbiased2}, analogous to the AMP quantity $\\gamma_k$ discussed around \\eqref{eq:gamma_matched}.\n\nIt should be emphasized that the class of matrices $\\mathbf{A}$ under which the VAMP SE holds is much bigger than the class under which the AMP SE holds.\nIn particular, VAMP's SE holds for large random matrices $\\mathbf{A}$ whose right singular%\n \\footnote{We use several forms of SVD in this paper.\n Algorithm~\\ref{algo:vampSVD} uses the ``economy'' SVD\n $\\mathbf{A}=\\overline{\\mathbf{U}}\\mathrm{Diag}(\\overline{\\mathbf{s}})\\overline{\\mathbf{V}}^{\\text{\\sf T}}\\in{\\mathbb{R}}^{M\\times N}$,\n where $\\overline{\\mathbf{s}}\\in{\\mathbb{R}}_+^R$ with $R=\\mathrm{rank}(\\mathbf{A})$, so that\n $\\overline{\\mathbf{U}}$ and\/or $\\overline{\\mathbf{V}}$ may be tall.\n The discussion in Section~\\ref{sec:vamp2amp} uses the ``standard'' SVD\n $\\mathbf{A}=\\mathbf{U}\\mathbf{S}\\mathbf{V}^{\\text{\\sf T}}$, where $\\mathbf{S}\\in{\\mathbb{R}}^{M\\times N}$ and both\n $\\mathbf{U}$ and $\\mathbf{V}$ are orthogonal.\n Finally, the state-evolution proof in Section~\\ref{sec:SE} uses the\n standard SVD on square $\\mathbf{A}\\in{\\mathbb{R}}^{N\\times N}$.}\nvector matrix $\\mathbf{V}\\in{\\mathbb{R}}^{N\\times N}$ is uniformly distributed on the group of orthogonal matrices.\nNotably, VAMP's SE holds for \\emph{arbitrary} (i.e., deterministic) left singular vector matrices $\\mathbf{U}$ and singular values, apart from some mild regularity conditions that will be detailed 
in the sequel.\nIn contrast, AMP's SE is known to hold \\cite{BayatiM:11,BayLelMon:15} only for large i.i.d.\\ sub-Gaussian matrices $\\mathbf{A}$, which implies i) random orthogonal $\\mathbf{U}$ and $\\mathbf{V}$ and ii) a particular distribution on the singular values of $\\mathbf{A}$.\n\n\\subsection{EP Derivation of VAMP}\n\n\\textb{As with AMP (i.e., Algorithm~\\ref{algo:amp}), VAMP (i.e., Algorithm~\\ref{algo:vampSVD}) can be derived in many ways.}\nHere we present a very simple derivation based on an EP-like approximation of the sum-product (SP) belief-propagation algorithm.\nUnlike the AMP algorithm, whose message-passing derivation uses a loopy factor graph with \\emph{scalar}-valued nodes, the VAMP algorithm uses a non-loopy graph with \\emph{vector}-valued nodes, hence the name ``vector AMP.''\nWe note that VAMP can also be derived using the ``diagonal restricted'' or ``uniform diagonalization'' EC approach \\cite{OppWin:05,fletcher2016expectation},\nbut that derivation is much more complicated.\n\nTo derive VAMP, we start with the factorization\n\\begin{align}\np(\\mathbf{y},\\mathbf{x})\n&= p(\\mathbf{x}) {\\mathcal N}(\\mathbf{y};\\mathbf{A}\\mathbf{x},\\gamma_w^{-1}\\mathbf{I}) ,\n\\end{align}\nand split $\\mathbf{x}$ into two identical variables $\\mathbf{x}_1=\\mathbf{x}_2$, giving an equivalent factorization\n\\begin{align}\np(\\mathbf{y},\\mathbf{x}_1,\\mathbf{x}_2)\n&= p(\\mathbf{x}_1) \\delta(\\mathbf{x}_1-\\mathbf{x}_2) {\\mathcal N}(\\mathbf{y};\\mathbf{A}\\mathbf{x}_2,\\gamma_w^{-1}\\mathbf{I})\n\\label{eq:vamp_factors} ,\n\\end{align}\nwhere $\\delta(\\cdot)$ is the Dirac delta distribution.\nThe factor graph corresponding to \\eqref{eq:vamp_factors} is shown in Figure~\\ref{fig:fg_split}.\n\\begin{figure}[t]\n \\centering\n \\newcommand{0.8}{0.8}\n \\psfrag{px}[b][Bl][0.8]{$p(\\mathbf{x}_1)$}\n \\psfrag{x1}[t][Bl][0.8]{$\\mathbf{x}_1$}\n \\psfrag{del}[b][Bl][0.8]{$\\delta(\\mathbf{x}_1-\\mathbf{x}_2)$}\n \\psfrag{x2}[t][Bl][0.8]{$\\mathbf{x}_2$}\n \\psfrag{py|x}[b][Bl][0.8]{${\\mathcal N}(\\mathbf{y};\\mathbf{A}\\mathbf{x}_2,\\gamma_w^{-1}\\mathbf{I})$}\n \\includegraphics[width=2.0in]{figures\/fg_split.eps}\n \\caption{The factor graph used for the derivation of VAMP.\n The circles represent variable nodes and\n the squares represent factor nodes from \\eqref{eq:vamp_factors}.}\n \\label{fig:fg_split}\n\\end{figure}\nWe then pass messages on this factor graph according to the following rules.\n\\begin{enumerate}\n\\item \\label{rule:b}\n\\emph{\\underline{Approximate beliefs}:}\nThe approximate belief $b_{\\textsf{app}}(\\mathbf{x})$ on variable node $\\mathbf{x}$\nis ${\\mathcal N}(\\mathbf{x};\\hat{\\mathbf{x}},\\eta^{-1}\\mathbf{I})$, where\n$\\hat{\\mathbf{x}} = \\mathbb{E}[\\mathbf{x}|b_{\\textsf{sp}}]$ and\n$\\eta^{-1} = \\bkt{\\mathop{\\mathrm{diag}}(\\mathrm{Cov}[\\mathbf{x}|b_{\\textsf{sp}}])}$\nare the mean and average variance of the corresponding SP belief\n$b_{\\textsf{sp}}(\\mathbf{x}) \\propto \\prod_i \\msg{f_i}{\\mathbf{x}}(\\mathbf{x})$,\ni.e., the normalized product of all messages impinging on the node.\nSee Figure~\\ref{fig:ep_rules}(a) for an illustration.\n\n\\item \\label{rule:v2f}\n\\emph{\\underline{Variable-to-factor messages}:}\nThe message from\na variable node $\\mathbf{x}$ to a connected factor node $f_i$ is\n$\\msg{\\mathbf{x}}{f_i}(\\mathbf{x}) \\propto b_{\\textsf{app}}(\\mathbf{x})\/\\msg{f_i}{\\mathbf{x}}(\\mathbf{x})$,\ni.e., the ratio of the most recent approximate belief $b_{\\textsf{app}}(\\mathbf{x})$ to\nthe most recent 
message from $f_i$ to $\\mathbf{x}$.\nSee Figure~\\ref{fig:ep_rules}(b) for an illustration.\n\n\\item \\label{rule:f2v}\n\\emph{\\underline{Factor-to-variable messages}:}\nThe message from a factor node $f$ to a connected variable node\n$\\mathbf{x}_i$ is\n$\\msg{f}{\\mathbf{x}_i}(\\mathbf{x}_i)\\propto\n \\int f(\\mathbf{x}_i,\\{\\mathbf{x}_j\\}_{j\\neq i}\\}) \\prod_{j\\neq i} \\msg{\\mathbf{x}_j}{f}(\\mathbf{x}_j) \\dif\\mathbf{x}_j$.\nSee Figure~\\ref{fig:ep_rules}(c) for an illustration.\n\\end{enumerate}\n\\begin{figure}[t]\n \\centering\n \\newcommand{0.8}{0.8}\n \\newcommand{0.6}{0.6}\n \\psfrag{x}[b][Bl][0.8]{$\\mathbf{x}$}\n \\psfrag{x1}[b][Bl][0.8]{$\\mathbf{x}_2$}\n \\psfrag{x2}[b][Bl][0.8]{$\\mathbf{x}_3$}\n \\psfrag{y}[b][Bl][0.8]{$\\mathbf{x}_1$}\n \\psfrag{f}[bl][Bl][0.7]{$f(\\mathbf{x}_1,\\mathbf{x}_2,\\mathbf{x}_3)$}\n \\psfrag{f1}[b][Bl][0.8]{$f_1(\\mathbf{x})$}\n \\psfrag{f2}[b][Bl][0.8]{$f_2(\\mathbf{x})$}\n \\psfrag{f3}[b][Bl][0.8]{$f_3(\\mathbf{x})$}\n \\psfrag{m1}[t][Bl][0.6]{$\\msg{f_1}{\\mathbf{x}}(\\mathbf{x})$}\n \\psfrag{m2}[b][Bl][0.6]{$\\msg{f_2}{\\mathbf{x}}(\\mathbf{x})$}\n \\psfrag{m3}[t][Bl][0.6]{$\\msg{f_3}{\\mathbf{x}}(\\mathbf{x})$}\n \\psfrag{m6}[t][Bl][0.6]{$\\msg{\\mathbf{x}}{f_1}(\\mathbf{x})$}\n \\psfrag{m7}[b][Bl][0.6]{$\\msg{\\mathbf{x}_2}{f}(\\mathbf{x}_2)$}\n \\psfrag{m8}[t][Bl][0.6]{$\\msg{\\mathbf{x}_3}{f}(\\mathbf{x}_3)$}\n \\psfrag{m9}[t][Bl][0.6]{$\\msg{f}{\\mathbf{x}_1}(\\mathbf{x}_1)$}\n \\psfrag{a}[t][Bl][0.8]{(a)}\n \\psfrag{b}[t][Bl][0.8]{(b)}\n \\psfrag{c}[t][Bl][0.8]{(c)}\n \\includegraphics[width=3.2in]{figures\/ep_rules}\n \\caption{Factor graphs to illustrate\n (a) messaging through a factor node and\n (b) messaging through a variable node.}\n \\label{fig:ep_rules}\n\\end{figure}\n\nBy applying the above message-passing rules to the factor graph in Figure~\\ref{fig:fg_split}, one obtains Algorithm~\\ref{algo:vamp}.\n(See Appendix~\\ref{sec:EP} for a detailed derivation.)\nLines~\\ref{line:x2}--\\ref{line:a2} of Algorithm~\\ref{algo:vamp} use\n\\begin{align}\n\\mathbf{g}_2(\\mathbf{r}_{2k},\\gamma_{2k})\n&:= \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_{2k}\\mathbf{I}\\right)^{-1}\n \\left( \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{y} + \\gamma_{2k}\\mathbf{r}_{2k} \\right)\n\\label{eq:g2slr} ,\n\\end{align}\nwhich can be recognized as the\nMMSE estimate of a random vector $\\mathbf{x}_2$ under\nlikelihood ${\\mathcal N}(\\mathbf{y};\\mathbf{A}\\mathbf{x}_2,\\gamma_w^{-1}\\mathbf{I})$\nand prior $\\mathbf{x}_2\\sim {\\mathcal N}(\\mathbf{r}_{2k},\\gamma_{2k}^{-1}\\mathbf{I})$.\nSince this estimate is linear in $\\mathbf{r}_{2k}$,\nwe will refer to it as the ``LMMSE'' estimator.\nFrom \\eqref{eq:jacobian}-\\eqref{eq:bkt} and \\eqref{eq:g2slr}, it follows that\nline~\\ref{line:a2} of Algorithm~\\ref{algo:vamp} uses\n\\begin{align}\n\\bkt{\\mathbf{g}'_2(\\mathbf{r}_{2k},\\gamma_{2k})}\n&= \\frac{\\gamma_{2k}}{N} \\mathrm{Tr}\\left[\n \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_{2k}\\mathbf{I}\\right)^{-1} \\right]\n\\label{eq:a2slr} .\n\\end{align}\n\n\\textb{%\nAlgorithm~\\ref{algo:vamp} is merely a restatement of VAMP Algorithm~\\ref{algo:vampSVD}.\nTheir equivalence can then be seen by substituting the ``economy'' SVD $\\mathbf{A}=\\overline{\\mathbf{U}}\\mathrm{Diag}(\\overline{\\mathbf{s}})\\overline{\\mathbf{V}}^{\\text{\\sf T}}$ into Algorithm~\\ref{algo:vamp}, simplifying, and equating\n$\\hat{\\mathbf{x}}_k \\equiv \\hat{\\mathbf{x}}_{1k}$,\n$\\mathbf{r}_k \\equiv 
\\mathbf{r}_{1k}$,\n$\\gamma_k \\equiv \\gamma_{1k}$,\n$\\tilde{\\gamma}_k \\equiv \\gamma_{2k}$,\nand\n$\\alpha_k \\equiv \\alpha_{1k}$.\n}\n\nAs presented in Algorithm~\\ref{algo:vamp}, the steps of VAMP exhibit an elegant symmetry.\nThe first half of the steps perform denoising on $\\mathbf{r}_{1k}$ and then Onsager correction in $\\mathbf{r}_{2k}$, while the second half of the steps perform LMMSE estimation $\\mathbf{r}_{2k}$ and Onsager correction in $\\mathbf{r}_{1,k\\! + \\! 1}$.\n\n\n\\begin{algorithm}[t]\n\\caption{Vector AMP (LMMSE form)}\n\\begin{algorithmic}[1] \\label{algo:vamp}\n\\REQUIRE{\nLMMSE estimator\n$\\mathbf{g}_2(\\mathbf{r}_{2k},\\gamma_{2k})$ from \\eqref{eq:g2slr},\ndenoiser $\\mathbf{g}_1(\\cdot,\\gamma_{1k})$,\nand\nnumber of iterations $K_{\\rm it}$. }\n\\STATE{ Select initial $\\mathbf{r}_{10}$ and $\\gamma_{10}\\geq 0$.}\n\\FOR{$k=0,1,\\dots,K_{\\rm it}$}\n \\STATE{\/\/ Denoising }\n \\STATE{$\\hat{\\mathbf{x}}_{1k} = \\mathbf{g}_1(\\mathbf{r}_{1k},\\gamma_{1k})$}\n \\label{line:x1}\n \\STATE{$\\alpha_{1k} = \\bkt{ \\mathbf{g}_1'(\\mathbf{r}_{1k},\\gamma_{1k}) }$}\n \\label{line:a1}\n \\STATE{$\\eta_{1k} = \\gamma_{1k}\/\\alpha_{1k}$}\n \\label{line:eta1}\n \\STATE{$\\gamma_{2k} = \\eta_{1k} - \\gamma_{1k}$}\n \\label{line:gam2}\n \\STATE{$\\mathbf{r}_{2k} = (\\eta_{1k}\\hat{\\mathbf{x}}_{1k} - \\gamma_{1k}\\mathbf{r}_{1k})\/\\gamma_{2k}$}\n \\label{line:r2}\n \\STATE{ }\n \\STATE{\/\/ LMMSE estimation }\n \\STATE{$\\hat{\\mathbf{x}}_{2k} = \\mathbf{g}_2(\\mathbf{r}_{2k},\\gamma_{2k})$}\n \\label{line:x2}\n \\STATE{$\\alpha_{2k} = \\bkt{ \\mathbf{g}_2'(\\mathbf{r}_{2k},\\gamma_{2k}) } $}\n \\label{line:a2}\n \\STATE{$\\eta_{2k} = \\gamma_{2k}\/\\alpha_{2k}$}\n \\label{line:eta2}\n \\STATE{$\\gamma_{1,k\\! + \\! 1} = \\eta_{2k} - \\gamma_{2k}$}\n \\label{line:gam1}\n \\STATE{$\\mathbf{r}_{1,k\\! + \\! 1} = (\\eta_{2k}\\hat{\\mathbf{x}}_{2k} - \\gamma_{2k}\\mathbf{r}_{2k})\/\\gamma_{1,k\\! + \\! 1}$}\n \\label{line:r1}\n\\ENDFOR\n\\STATE{Return $\\hat{\\mathbf{x}}_{1K_{\\rm it}}$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Implementation Details} \\label{sec:implementation}\n\nFor practical implementation with finite-dimensional $\\mathbf{A}$, we find that it helps to make some small enhancements to VAMP.\nIn the discussion below we will refer to Algorithm~\\ref{algo:vampSVD}, but the same approaches apply to Algorithm~\\ref{algo:vamp}.\n\nFirst, we suggest to clip the precisions $\\gamma_k$ and $\\tilde{\\gamma}_k$ to a positive interval $[\\gamma_{\\min},\\gamma_{\\max}]$.\nIt is possible, though uncommon, for line~\\ref{line:asvd} of Algorithm~\\ref{algo:vampSVD} to return a negative $\\alpha_k$, which will lead to negative $\\gamma_k$ and $\\tilde{\\gamma}_k$ if not accounted for.\nFor the numerical results in Section~\\ref{sec:num}, we used $\\gamma_{\\min}=1\\times 10^{-11}$ and $\\gamma_{\\max}=1\\times 10^{11}$.\n\nSecond, we find that a small amount of damping can be helpful when $\\mathbf{A}$ is highly ill-conditioned.\nIn particular, we suggest to replace lines~\\ref{line:xsvd} and \\ref{line:gamsvd} of Algorithm~\\ref{algo:vampSVD} with the damped versions\n\\begin{align}\n\\hat{\\mathbf{x}}_k\n&= \\rho \\mathbf{g}_1(\\mathbf{r}_k,\\gamma_k) + (1-\\rho) \\hat{\\mathbf{x}}_{k\\! - \\! 1}\n\\label{eq:xdamp}\\\\\n\\gamma_{k\\! + \\! 
1}\n&= \\rho \\tilde{\\gamma}_{k}\\bkt{\\mathbf{d}_k}R\/(N-\\bkt{\\mathbf{d}_k}R) + (1-\\rho) \\gamma_k\n\\label{eq:gamdamp}\n\\end{align}\nfor all iterations $k>1$, where $\\rho\\in(0,1]$ is a suitably chosen damping parameter.\nNote that, when $\\rho=1$, the damping has no effect.\nFor the numerical results in Section~\\ref{sec:num}, we used $\\rho=0.97$.\n\nThird, rather than requiring VAMP to complete $K_{\\rm it}$ iterations, we suggest that the iterations be stopped when the normalized difference $\\|\\mathbf{r}_{1k}-\\mathbf{r}_{1,k\\! - \\! 1}\\|\/\\|\\mathbf{r}_{1k}\\|$ falls below a tolerance $\\tau$.\nFor the numerical results in Section~\\ref{sec:num}, we used $\\tau=1\\times 10^{-4}$.\n\n\\textb{We note that the three minor modifications described above are standard features of many AMP implementations, such as the one in the GAMPmatlab toolbox \\cite{GAMP-code}. However, as discussed in Section~\\ref{sec:alternatives}, they are not enough to stabilize AMP in the case of ill-conditioned or non-zero-mean $\\mathbf{A}$.}\n\nFinally, we note that the VAMP algorithm requires the user to choose\nthe measurement-noise precision $\\gamma_w$ and\nthe denoiser $\\mathbf{g}_1(\\cdot,\\gamma_k)$.\nIdeally, the true noise precision $\\gamma_{w0}$ is known and the signal $\\mathbf{x}^0$ is i.i.d.\\ with known prior $p(x_j)$, in which case the MMSE denoiser can be straightforwardly designed.\nIn practice, however, $\\gamma_{w0}$ and $p(x_j)$ are usually unknown.\nFortunately, there is a simple expectation-maximization (EM)-based method to estimate both quantities on-line, whose details are given in \\cite{fletcher2016emvamp}.\nThe numerical results in \\cite{fletcher2016emvamp} show that the convergence and asymptotic performance of EM-VAMP is nearly identical to that of VAMP with known $\\gamma_{w0}$ and $p(x_j)$.\nFor the numerical results in Section~\\ref{sec:num}, however, we assume that $\\gamma_{w0}$ and $p(x_j)$ are known.\n\nMatlab implementations of VAMP and EM-VAMP can be found in the public-domain GAMPmatlab toolbox \\cite{GAMP-code}.\n\n\n\n\n\\section{State Evolution} \\label{sec:SE}\n\n\\subsection{Large-System Analysis} \\label{sec:large}\nOur primary goal is to understand the behavior of the VAMP algorithm for a certain class of matrices in the high-dimensional regime.\nWe begin with an overview of our analysis framework and follow with more details in later sections.\n\n\\subsubsection{Linear measurement model}\nOur analysis considers a sequence of problems indexed by the signal\ndimension $N$. 
For each $N$, we assume that there is a ``true\" vector $\\mathbf{x}^0\\in{\\mathbb{R}}^N$\nwhich is observed through measurements of the form,\n\\begin{equation} \\label{eq:yAxslr}\n \\mathbf{y} = \\mathbf{A}\\mathbf{x}^0 + \\mathbf{w} \\in {\\mathbb{R}}^N, \\quad \\mathbf{w} \\sim {\\mathcal N}(\\mathbf{0}, \\gamma_{w0}^{-1}\\mathbf{I}_N),\n\\end{equation}\nwhere $\\mathbf{A}\\in{\\mathbb{R}}^{N\\times N}$ is a known transform and $\\mathbf{w}$ is Gaussian noise with precision $\\gamma_{w0}$.\nNote that we use $\\gamma_{w0}$ to denote the ``true\" noise precision\nto distinguish it from $\\gamma_w$, which is the noise precision postulated by the estimator.\n\nFor the transform $\\mathbf{A}$,\nour key assumption is that it can be modeled as a large, \\emph{right-orthogonally invariant} random matrix.\nSpecifically, we assume that it has an SVD of the form\n\\begin{equation} \\label{eq:ASVD}\n \\mathbf{A}=\\mathbf{U}\\mathbf{S}\\mathbf{V}^{\\text{\\sf T}}, \\quad \\mathbf{S} = \\mathrm{Diag}(\\mathbf{s}),\n\\end{equation}\nwhere $\\mathbf{U}$ and $\\mathbf{V}$ are $N\\times N$ orthogonal matrices\nsuch that $\\mathbf{U}$ is deterministic and\n$\\mathbf{V}$ is Haar distributed (i.e.\\ uniformly distributed on the set of orthogonal matrices).\nWe refer to $\\mathbf{A}$ as ``right-orthogonally invariant'' because the distribution of $\\mathbf{A}$ is identical to that of $\\mathbf{A}\\mathbf{V}_0$ for any fixed orthogonal matrix $\\mathbf{V}_0$.\nWe will discuss the distribution of the singular values $\\mathbf{s}\\in{\\mathbb{R}}^N$ below.\n\nAlthough we have assumed that $\\mathbf{A}$ is square to streamline the analysis,\nwe make this assumption without loss of generality.\nFor example, by setting\n$$\n\\mathbf{U}=\\begin{bmatrix}\\mathbf{U}_0&\\mathbf{0}\\\\\\mathbf{0}&\\mathbf{I}\\end{bmatrix},\n\\quad\n\\mathbf{s}=\\begin{bmatrix}\\mathbf{s}_0\\\\\\mathbf{0}\\end{bmatrix},\n$$\nour formulation can model a wide rectangular matrix whose SVD is $\\mathbf{U}_0\\mathbf{S}_0\\mathbf{V}^{\\text{\\sf T}}$ with $\\mathop{\\mathrm{diag}}(\\mathbf{S}_0)=\\mathbf{s}_0$.\nA similar manipulation allows us to model a tall rectangular matrix.\n\n\\subsubsection{Denoiser}\nOur analysis applies to a fairly general class of denoising\nfunctions $\\mathbf{g}_1(\\cdot,\\gamma_{1k})$ indexed by the parameter $\\gamma_{1k}\\geq 0$. Our main assumption is that the denoiser is separable,\nmeaning that it is of the form \\eqref{eq:g1sep} for some scalar denoiser $g_1(\\cdot,\\gamma_{1k})$.\nAs discussed above, this separability assumption will occur for the MAP and MMSE denoisers\nunder the assumption of an i.i.d.\\ prior. However, we do not require the denoiser to be\nMAP or MMSE for any particular prior.\nWe will impose certain Lipschitz continuity conditions on $g_1(\\cdot,\\textb{\\gamma_{1k}})$ in the sequel.\n\n\\subsubsection{Asymptotic distributions}\nIt remains to describe the distributions of the\ntrue vector $\\mathbf{x}^0$ and the singular-value vector $\\mathbf{s}$.\nA simple model would be to assume that they\nare random i.i.d.\\ sequences that grow with $N$. 
However, following the\nBayati-Montanari analysis \\cite{BayatiM:11},\nwe will consider a more general framework where each of these vectors is modeled as deterministic sequence\nfor which the empirical distribution of the components converges in distribution.\nWhen the vectors $\\mathbf{x}^0$ and $\\mathbf{s}$ are i.i.d.\\ random sequences, they will satisfy this condition almost surely.\nDetails of this analysis framework are reviewed in Appendix~\\ref{sec:empConv}.\n\nUsing the definitions in Appendix~\\ref{sec:empConv},\nwe assume that the components of the singular-value vector $\\mathbf{s}\\in{\\mathbb{R}}^N$ in \\eqref{eq:ASVD}\nconverge empirically with second-order moments as\n\\begin{equation} \\label{eq:Slim}\n \\lim_{N \\rightarrow \\infty} \\{ s_n \\}_{n=1}^N \\stackrel{PL(2)}{=} S,\n\\end{equation}\nfor some positive random variable $S$. We assume that $\\mathbb{E}[S] > 0$ and $S \\in [0,S_{max}]$\nfor some finite maximum value $S_{max}$.\nAdditionally, we assume that\nthe components of the true vector, $\\mathbf{x}^0$, and the initial input to the denoiser, $\\mathbf{r}_{10}$,\nconverge empirically as\n\\begin{equation} \\label{eq:RX0lim}\n \\lim_{N \\rightarrow \\infty} \\{ (r_{10,n}, x^0_n) \\}_{n=1}^N \\stackrel{PL(2)}{=} (R_{10},X^0),\n\\end{equation}\nfor some random variables $(R_{10},X^0)$.\nNote that the convergence with second-order moments\nrequires that $\\mathbb{E} [(X^0)^2] < \\infty$ and $\\mathbb{E} [R^2_{10}] < \\infty$, so they have bounded\nsecond moments.\nWe also assume that the initial second-order term, \\textb{if dependent on $N$,} converges as\n\\begin{equation} \\label{eq:gam10lim}\n \\lim_{N \\rightarrow \\infty}\\gamma_{10}\\textb{(N)} = \\overline{\\gamma}_{10},\n\\end{equation}\nfor some $\\overline{\\gamma}_{10} > 0$.\n\nAs stated above, most of our analysis will apply to general separable denoisers\n$g_1(\\cdot,\\gamma_{1k})$. However, some results will apply specifically to MMSE denoisers.\nUnder the assumption that the components of the true vector $\\mathbf{x}^0$ are asymptotically\ndistributed like the random variable $X^0$\\textb{, as in \\eqref{eq:RX0lim}}, the MMSE denoiser~\\eqref{eq:g1mmsesca}\nand its derivative \\eqref{eq:g1dervar} reduce to\n\\begin{align} \\label{eq:g1mmsex0}\n\\begin{split}\n g_1(r_1,\\gamma_1) &= \\mathbb{E}\\left[ X^0 | R_1=r_1 \\right], \\\\\n g_1'(r_1,\\gamma_1) &= \\gamma_1\\mathrm{var}\\left[ X^0 | R_1=r_1 \\right],\n\\end{split}\n\\end{align}\nwhere $R_1$ is the random variable representing $X^0$ corrupted by AWGN noise,\ni.e.,\n\\[\n R_1 = X^0 + P, \\quad P \\sim {\\mathcal N}(0,\\gamma_1^{-1}),\n\\]\nwith $P$ being independent of $X^0$. 
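As a concrete illustration (not needed for the analysis), the following Python\/NumPy sketch evaluates \\eqref{eq:g1mmsex0} for an i.i.d.\\ Bernoulli-Gaussian prior with sparsity rate $\\rho$ and active-coefficient variance $\\sigma_x^2$, i.e., the prior used in Section~\\ref{sec:num}; the function name and default parameter values are ours, and the matched case $\\tau_1=1\/\\gamma_1$ is assumed.\n\\begin{verbatim}\nimport numpy as np\n\ndef bg_mmse_denoiser(r, gamma, rho=0.1, sigx2=1.0):\n    # E[X0|R1=r] and gamma*var[X0|R1=r] for a Bernoulli-Gaussian\n    # prior, with scalar-channel noise variance tau = 1\/gamma.\n    tau = 1.0 \/ gamma\n    s2 = sigx2 + tau\n    # posterior probability that the coefficient is nonzero\n    log_on = np.log(rho) - 0.5*np.log(2*np.pi*s2) - 0.5*r**2\/s2\n    log_off = np.log(1 - rho) - 0.5*np.log(2*np.pi*tau) - 0.5*r**2\/tau\n    pi_on = 1.0 \/ (1.0 + np.exp(log_off - log_on))\n    m = r * sigx2 \/ s2      # posterior mean given nonzero\n    v = sigx2 * tau \/ s2    # posterior variance given nonzero\n    xhat = pi_on * m                     # g_1(r, gamma)\n    var = pi_on * (v + m**2) - xhat**2   # var[X0 | R1 = r]\n    return xhat, gamma * var             # (g_1, g_1')\n\\end{verbatim}\n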
Thus, the MMSE denoiser and its derivative\ncan be computed from the\nposterior mean and variance of $X^0$ under an AWGN measurement.\n\n\\subsection{Error Functions}\n\nBefore describing the state evolution (SE) equations and the\nanalysis in the LSL, we need to introduce two\nkey functions: \\emph{error functions} and \\emph{sensitivity functions}.\nWe begin by describing the error functions.\n\nThe error functions, in essence,\ndescribe the mean squared error (MSE)\nof the denoiser and LMMSE estimators under AWGN measurements.\n\\textb{Recall from Section~\\ref{sec:large}, that we have assumed that\nthe denoiser $\\mathbf{g}_1(\\cdot,\\gamma_1)$ is separable with some componentwise function\n$g_1(\\cdot,\\gamma_1)$.}\nFor this function $g_1(\\cdot,\\gamma_1)$, define the error function as\n\\begin{align}\n \\MoveEqLeft {\\mathcal E}_1(\\gamma_1,\\tau_1)\n := \\mathbb{E}\\left[ (g_1(R_1,\\gamma_1)-X^0)^2 \\right], \\nonumber \\\\\n & R_1 = X^0 + P, \\quad P \\sim {\\mathcal N}(0,\\tau_1). \\label{eq:eps1}\n\\end{align}\nThe function ${\\mathcal E}_1(\\gamma_1,\\tau_1)$ thus represents the MSE of the\nestimate $\\hat{X} = g_1(R_1,\\gamma_1)$ from a measurement $R_1$\ncorrupted by Gaussian noise of variance $\\tau_1$.\nFor the LMMSE estimator, we define the error function as\n\\begin{align}\n \\MoveEqLeft {\\mathcal E}_2(\\gamma_2,\\tau_2)\n := \\lim_{N \\rightarrow \\infty}\n \\frac{1}{N} \\mathbb{E} \\left[ \\| \\mathbf{g}_2(\\mathbf{r}_2,\\gamma_2) -\\mathbf{x}^0 \\|^2 \\right], \\nonumber \\\\\n & \\mathbf{r}_2 = \\mathbf{x}^0 + \\mathbf{q}, \\quad \\mathbf{q} \\sim {\\mathcal N}(0,\\tau_2 \\mathbf{I}), \\nonumber \\\\\n & \\mathbf{y} = \\mathbf{A}\\mathbf{x}^0 + \\mathbf{w}, \\quad \\mathbf{w} \\sim {\\mathcal N}(0,\\gamma_{w0}^{-1} \\mathbf{I}),\n \\label{eq:eps2}\n\\end{align}\nwhich is the average per component error of the vector estimate under Gaussian noise.\nNote that ${\\mathcal E}_2(\\gamma_2,\\tau_2)$ is implicitly a function of the noise precision levels $\\gamma_{w0}$ and $\\gamma_w$ \\textb{(through $\\mathbf{g}_2$ from \\eqref{eq:g2slr})}, but this dependence is omitted to simplify the notation.\n\nWe will say that both estimators are ``matched\" when\n\\[\n \\tau_1 = \\gamma_1^{-1}, \\quad \\tau_2 = \\gamma_2^{-1}, \\quad \\gamma_w = \\gamma_{w0},\n\\]\nso that the noise levels used by the estimators both match the true noise levels.\nUnder the matched condition, we will use the simplified notation\n\\[\n {\\mathcal E}_1(\\gamma_1) := {\\mathcal E}_1(\\gamma_1,\\gamma_1^{-1}), \\quad\n {\\mathcal E}_2(\\gamma_2) := {\\mathcal E}_2(\\gamma_2,\\gamma_2^{-1}).\n\\]\nThe following lemma establishes some basic properties of the error functions.\n\n\\begin{lemma} \\label{lem:errfn} Recall the error functions ${\\mathcal E}_1,{\\mathcal E}_2$ defined above.\n\\begin{enumerate}[(a)]\n\\item For the MMSE denoiser \\eqref{eq:g1mmsex0} under the matched condition\n$\\tau_1=\\gamma_1^{-1}$, the error function is the conditional variance\n\\begin{equation} \\label{eq:E1match}\n {\\mathcal E}_1(\\gamma_1) = \\mathrm{var}\\left[ X^0 | R_1 = X^0 \\textb{+ P} \\right],~\\textb{P\\sim{\\mathcal N}(0,\\gamma_1^{-1})} .\n\\end{equation}\n\n\\item The LMMSE error function is given by\n\\begin{equation} \\label{eq:eps2Q}\n {\\mathcal E}_2(\\gamma_2,\\tau_2) = \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\mathrm{Tr}\\left[ \\mathbf{Q}^{-2}\n \\tilde{\\mathbf{Q}} \\right] ,\n\\end{equation}\nwhere $\\mathbf{Q}$ and $\\tilde{\\mathbf{Q}}$ are the matrices\n\\begin{equation} \\label{eq:QQtdef}\n 
\\mathbf{Q} : = \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_2\\mathbf{I}, \\quad\n \\tilde{\\mathbf{Q}} :=\n \\frac{\\gamma_w^2}{\\gamma_{w0}}\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\tau_2\\gamma_2^2\\mathbf{I}.\n\\end{equation}\nUnder the matched condition $\\tau_2 = \\gamma_2^{-1}$ and $\\gamma_w = \\gamma_{w0}$,\n\\begin{equation} \\label{eq:eps2Qmatch}\n {\\mathcal E}_2(\\gamma_2) = \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\mathrm{Tr}\\left[ \\mathbf{Q}^{-1} \\right].\n\\end{equation}\n\n\\item The LMMSE error function is also given by\n\\begin{equation} \\label{eq:eps2S}\n {\\mathcal E}_2(\\gamma_2,\\tau_2) = \\mathbb{E}\\left[ \\frac{\n \\gamma_w^2 S^2\/\\gamma_{w0} + \\tau_2\\gamma_2^2}{(\\gamma_w S^2 + \\gamma_2)^2} \\right],\n\\end{equation}\nwhere $S$ is the random variable \\eqref{eq:Slim} representing the distribution of the\nsingular values of $\\mathbf{A}$. For the matched condition\n$\\tau_2 = \\gamma_2^{-1}$ and $\\gamma_w = \\gamma_{w0}$,\n\\begin{equation} \\label{eq:eps2Smatch}\n {\\mathcal E}_2(\\gamma_2) = \\mathbb{E}\\left[\n \\frac{1}{\\gamma_wS^2+ \\gamma_2} \\right].\n\\end{equation}\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof} See Appendix~\\ref{sec:errsenspf}.\n\\end{proof}\n\n\\subsection{Sensitivity Functions}\nThe sensitivity functions describe the expected divergence of the estimator.\nFor the denoiser, the sensitivity function is defined as\n\\begin{align}\n \\MoveEqLeft A_1(\\gamma_1,\\tau_1)\n := \\mathbb{E}\\left[ g_1'(R_1,\\gamma_1) \\right], \\nonumber \\\\\n & R_1 = X^0 + P, \\quad P \\sim {\\mathcal N}(0,\\tau_1), \\label{eq:sens1}\n\\end{align}\nwhich is the average derivative under a Gaussian noise input. For the\nLMMSE estimator, the sensitivity is defined as\n\\begin{align}\n \\MoveEqLeft A_2(\\gamma_2)\n := \\lim_{N \\rightarrow \\infty}\n \\frac{1}{N} \\mathrm{Tr}\\left[ \\frac{\\partial \\mathbf{g}_2(\\mathbf{r}_2,\\gamma_2)}{\\partial \\mathbf{r}_2}\n \\right].\n\\end{align}\n\n\\begin{lemma} \\label{lem:sens}\nFor the sensitivity functions above:\n\\begin{enumerate}[(a)]\n\\item For the MMSE denoiser \\eqref{eq:g1mmsex0} under the matched condition\n$\\tau_1=\\gamma_1^{-1}$, the sensitivity function is given by\n\\begin{equation} \\label{eq:A1match}\n A_1(\\gamma_1,\\gamma_1^{-1}) = \\gamma_1\\mathrm{var}\\left[ X^0 | R_1 = X^0 + {\\mathcal N}(0,\\gamma_1^{-1}) \\right],\n\\end{equation}\nwhich is the ratio of the conditional variance to the measurement variance $\\gamma_1^{-1}$.\n\n\\item The LMMSE estimator's sensitivity function is given by\n\\[\n A_2(\\gamma_2) = \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\n \\gamma_2\\mathrm{Tr}\\left[ (\\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_2\\mathbf{I})^{-1} \\right].\n\\]\n\n\\item The LMMSE estimator's sensitivity function can also be written as\n\\[\n A_2(\\gamma_2) = \\mathbb{E}\\left[ \\frac{\\gamma_2}{\\gamma_wS^2+ \\gamma_2} \\right].\n\\]\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof} See Appendix~\\ref{sec:errsenspf}.\n\\end{proof}\n\n\n\\subsection{State Evolution Equations}\n\nWe can now describe our main result, which is the SE equations\nfor VAMP.\nFor a given iteration $k \\geq 1$, consider the set of components,\n\\[\n \\{ (\\hat{x}_{1k,n},r_{1k,n},x^0_n), ~ n=1,\\ldots,N \\}.\n\\]\nThis set represents the components of the true vector $\\mathbf{x}^0$,\nits corresponding estimate $\\hat{\\mathbf{x}}_{1k}$ and the denoiser input\n$\\mathbf{r}_{1k}$. 
\\textb{Theorem~\\ref{thm:se}} below will \nshow that, under certain assumptions,\nthese components converge empirically as\n\\begin{equation} \\label{eq:limrx1}\n \\lim_{N \\rightarrow \\infty} \\{ (\\hat{x}_{1k,n},r_{1k,n},x^0_n) \\}\n \\stackrel{PL(2)}{=} (\\hat{X}_{1k},R_{1k},X^0),\n\\end{equation}\nwhere the random variables $(\\hat{X}_{1k},R_{1k},X^0)$ are given by\n\\begin{subequations} \\label{eq:RX0var}\n\\begin{align}\n R_{1k} &= X^0 + P_k, \\quad P_k \\sim {\\mathcal N}(0,\\tau_{1k}), \\\\\n \\hat{X}_{1k} &= g_1(R_{1k},\\overline{\\gamma}_{1k}),\n\\end{align}\n\\end{subequations}\nfor constants $\\overline{\\gamma}_{1k}$ and $\\tau_{1k}$ that will be defined below.\nThus, each component $r_{1k,n}$ appears as the true component $x^0_n$ plus\nGaussian noise. The corresponding estimate $\\hat{x}_{1k,n}$ then appears as the\ndenoiser output with $r_{1k,n}$ as the input. Hence, the asymptotic behavior\nof any component $x^0_n$ and its corresponding $\\hat{x}_{1k,n}$ is identical to\na simple scalar system. We will refer to \\eqref{eq:limrx1}-\\eqref{eq:RX0var} as the denoiser's \\emph{scalar equivalent model}.\n\nFor the LMMSE estimation function,\nwe define the transformed error and transformed noise,\n\\begin{equation} \\label{eq:qerrdef}\n \\mathbf{q}_k := \\mathbf{V}^{\\text{\\sf T}}(\\mathbf{r}_{2k}-\\mathbf{x}^0), \\quad {\\boldsymbol \\xi} := \\mathbf{U}^{\\text{\\sf T}}\\mathbf{w},\n\\end{equation}\nwhere $\\mathbf{U}$ and $\\mathbf{V}$ are the matrices in the SVD decomposition \\eqref{eq:ASVD}.\n\\textb{Theorem~\\ref{thm:se} will also show that} \nthese transformed errors and singular values $s_n$ converge as,\n\\begin{equation} \\label{eq:limqxi}\n \\lim_{N \\rightarrow \\infty} \\{ (q_{k,n},\\xi_n,s_n) \\}\n \\stackrel{PL(2)}{=} (Q_k,\\Xi,S),\n\\end{equation}\nto a set of random variables $(Q_k,\\Xi,S)$.\nThese random variables are independent, with\n$S$ defined in the limit \\eqref{eq:Slim} and\n\\begin{equation} \\label{eq:QXivar}\n Q_k \\sim {\\mathcal N}(0,\\tau_{2k}), \\quad \\Xi \\sim {\\mathcal N}(0,\\gamma_{w0}^{-1}),\n\\end{equation}\nwhere $\\tau_{2k}$ is a variance that will be defined below and $\\gamma_{w0}$\nis the noise precision in the measurement model \\eqref{eq:yAxslr}.\nThus \\eqref{eq:limqxi}-\\eqref{eq:QXivar} is a scalar equivalent model for the LMMSE estimator.\n\nThe variance terms are defined recursively through what are called \\emph{state evolution}\nequations,\n\\begin{subequations} \\label{eq:se}\n\\begin{align}\n \\overline{\\alpha}_{1k} &= A_1(\\overline{\\gamma}_{1k},\\tau_{1k}) \\label{eq:a1se} \\\\\n \\overline{\\eta}_{1k} &= \\frac{\\overline{\\gamma}_{1k}}{\\overline{\\alpha}_{1k}}, \\quad\n \\overline{\\gamma}_{2k} = \\overline{\\eta}_{1k} - \\overline{\\gamma}_{1k} \\label{eq:eta1se} \\\\\\\n \\tau_{2k} &= \\frac{1}{(1-\\overline{\\alpha}_{1k})^2}\\left[\n {\\mathcal E}_1(\\overline{\\gamma}_{1k},\\tau_{1k}) - \\overline{\\alpha}_{1k}^2\\tau_{1k} \\right],\n \\label{eq:tau2se} \\\\\n \\overline{\\alpha}_{2k} &= A_2(\\overline{\\gamma}_{2k},\\tau_{2k}) \\label{eq:a2se} \\\\\n \\overline{\\eta}_{2k} &= \\frac{\\overline{\\gamma}_{2k}}{\\overline{\\alpha}_{2k}}, \\quad\n \\overline{\\gamma}_{1,k\\! + \\! 1} = \\overline{\\eta}_{2k} - \\overline{\\gamma}_{2k} \\label{eq:eta2se} \\\\\n \\tau_{1,k\\! + \\! 
1} &= \\frac{1}{(1-\\overline{\\alpha}_{2k})^2}\\left[\n {\\mathcal E}_2(\\overline{\\gamma}_{2k},\\tau_{2k}) - \\overline{\\alpha}_{2k}^2\\tau_{2k} \\right],\n \\label{eq:tau1se}\n\\end{align}\n\\end{subequations}\nwhich are initialized with\n\\begin{equation}\n \\tau_{10} = \\mathbb{E}[(R_{10}-X^0)^2],\n\\end{equation}\nand $\\overline{\\gamma}_{10}$ defined from the limit \\eqref{eq:gam10lim}.\n\n\n\\begin{theorem} \\label{thm:se}\nUnder the above assumptions and definitions, assume additionally that for all iterations $k$:\n\\begin{enumerate}[(i)]\n\\item The solution $\\overline{\\alpha}_{1k}$ from the SE equations \\eqref{eq:se} satisfies\n\\begin{equation} \\label{eq:asecon}\n \\overline{\\alpha}_{1k} \\in (0,1).\n\\end{equation}\n\\item The functions $A_i(\\gamma_i,\\tau_i)$ and ${\\mathcal E}_i(\\gamma_i,\\tau_i)$\nare continuous at $(\\gamma_i,\\tau_i)=(\\overline{\\gamma}_{ik},\\tau_{ik})$.\n\\item The denoiser function $g_1(r_1,\\gamma_1)$ and its derivative\n $g_1'(r_1,\\gamma_1)$\nare uniformly Lipschitz in $r_1$ at $\\gamma_1=\\overline{\\gamma}_{1k}$.\n(See Appendix~\\ref{sec:empConv} for a precise definition of uniform Lipschitz continuity.)\n\\end{enumerate}\nThen, for any fixed iteration $k \\geq 0$,\n\\begin{equation} \\label{eq:aglim}\n \\lim_{N \\rightarrow \\infty} (\\alpha_{ik},\\eta_{ik},\\gamma_{ik}) =\n (\\overline{\\alpha}_{ik},\\overline{\\eta}_{ik}, \\overline{\\gamma}_{ik})\n\\end{equation}\nalmost surely.\nIn addition, the empirical limit \\eqref{eq:limrx1} holds almost surely for all $k > 0$,\nand \\eqref{eq:limqxi} holds almost surely for all $k \\geq 0$.\n\\end{theorem}\n\n\\subsection{Mean Squared Error}\nOne important\nuse of the scalar equivalent model is to predict the asymptotic performance\nof the VAMP algorithm in the LSL. For example, define the asymptotic\nmean squared error (MSE) of the iteration-$k$ estimate $\\hat{\\mathbf{x}}_{ik}$ as\n\\begin{equation} \\label{eq:mseLim}\n \\mbox{\\small \\sffamily MSE}_{ik} := \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\\|\\hat{\\mathbf{x}}_{ik}-\\mathbf{x}^0\\|^2.\n\\end{equation}\nFor this MSE, we claim that\n\\begin{equation} \\label{eq:mseEcal}\n \\mbox{\\small \\sffamily MSE}_{ik} = {\\mathcal E}_i(\\overline{\\gamma}_{ik},\\tau_{ik}).\n\\end{equation}\nTo prove \\eqref{eq:mseEcal} for $i=1$, we write\n\\begin{align*}\n \\mbox{\\small \\sffamily MSE}_{1k}\n &=\\lim_{N \\rightarrow \\infty}\n \\frac{1}{N} \\sum_{n=1}^N (\\hat{x}_{1k,n}-x^0_n)^2 \\\\\n &\\stackrel{(a)}{=} \\mathbb{E}[(\\hat{X}_{1k}-X^0)^2] \\\\\n &\\stackrel{(b)}{=} \\mathbb{E}[(g_1(R_1,\\overline{\\gamma}_{1k})-X^0)^2]\n \\stackrel{(c)}{=} {\\mathcal E}_1(\\overline{\\gamma}_{1k},\\tau_{1k})\n\\end{align*}\nwhere (a) and (b) follow from the convergence in \\eqref{eq:limrx1} and the scalar equivalent model \\eqref{eq:limrx1},\nand where (c) follows from \\eqref{eq:eps1}. 
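As a practical aside, the SE recursion \\eqref{eq:se} and the MSE prediction \\eqref{eq:mseEcal} are straightforward to evaluate numerically once the error and sensitivity functions are available (e.g., by Monte Carlo or quadrature for ${\\mathcal E}_1,A_1$ and by averaging over the singular-value distribution for ${\\mathcal E}_2,A_2$); a minimal Python sketch, with illustrative names and user-supplied callables, is given below.\n\\begin{verbatim}\ndef se_recursion(A1, E1, A2, E2, gam1, tau1, num_iter):\n    # A1,E1 (denoiser) and A2,E2 (LMMSE) each take (gamma, tau) and\n    # return the sensitivity A_i and error E_i defined above.\n    # Returns the predicted MSE of xhat_1k for k = 0,...,num_iter-1.\n    mse = []\n    for k in range(num_iter):\n        mse.append(E1(gam1, tau1))\n        a1 = A1(gam1, tau1)\n        gam2 = gam1 \/ a1 - gam1                  # eta1 - gam1\n        tau2 = (E1(gam1, tau1) - a1**2 * tau1) \/ (1.0 - a1)**2\n        a2 = A2(gam2, tau2)\n        gam1 = gam2 \/ a2 - gam2                  # eta2 - gam2\n        tau1 = (E2(gam2, tau2) - a2**2 * tau2) \/ (1.0 - a2)**2\n    return mse\n\\end{verbatim}\nNote how the recursion mirrors the structure of Algorithm~\\ref{algo:vamp}, with the random quantities $(\\alpha_{ik},\\eta_{ik},\\gamma_{ik})$ replaced by their deterministic SE counterparts.\n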
The case $i=2$ follows in the same way: using the scalar equivalent model~\\eqref{eq:limqxi},\nthe definition of ${\\mathcal E}_2(\\cdot)$ in \\eqref{eq:eps2}, and calculations similar to the proof\nof Lemma~\\ref{lem:errfn}, one can show that \\eqref{eq:mseEcal} also holds for $i=2$.\n\nInterestingly, this type of calculation can be used to compute any other componentwise distortion metric.\nSpecifically, given any distortion function $d(x,\\hat{x})$ that is pseudo-Lipschitz of order two,\nits average value is given by\n\\[\n \\lim_{N \\rightarrow \\infty}\n \\frac{1}{N} \\sum_{n=1}^N d(x^0_n,\\hat{x}_{1k,n}) = \\mathbb{E}\\left[ d(X^0,\\hat{X}_{1k}) \\right],\n\\]\nwhere the expectation is from the scalar equivalent model \\eqref{eq:limrx1}.\n\n\\subsection{Contractiveness of the Denoiser}\nAn essential requirement of Theorem~\\ref{thm:se} is the condition~\\eqref{eq:asecon}\nthat $\\overline{\\alpha}_{1k} \\in (0,1)$. This assumption requires that, in a certain average sense,\nthe denoiser function $g_1(\\cdot,\\gamma_1)$ is increasing (i.e., $g_1'(r_{1n},\\gamma_1) > 0$)\nand is a contraction (i.e., $g_1'(r_{1n},\\gamma_1) < 1$).\nIf these conditions are not met,\nthen $\\overline{\\alpha}_{1k} \\leq 0$ or $\\overline{\\alpha}_{1k} \\geq 1$,\nand either the estimated precision $\\overline{\\eta}_{1k}$ or $\\overline{\\gamma}_{2k}$\nin \\eqref{eq:eta1se} may be negative, causing subsequent updates to be invalid.\nThus, $\\overline{\\alpha}_{1k}$ must be in the range $(0,1)$.\nThere are two important conditions under which\nthis increasing contraction property is provably guaranteed:\n\n\\noindent\n\\paragraph*{Strongly convex penalties} Suppose that $g_1(r_{1n},\\gamma_1)$ is\neither the MAP denoiser \\eqref{eq:gmapsca} or the MMSE denoiser\n\\eqref{eq:g1mmsesca} for a density $p(x_n)$ that is strongly log-concave.\nThat is, there exist constants $c_1,c_2 > 0$ such that\n\\[\n c_1 \\leq -\\frac{\\partial^2}{\\partial x_n^2} \\ln p(x_n) \\leq c_2.\n\\]\nThen, using results on log-concave functions \\cite{brascamp2002extensions},\nit is shown in \\cite{rangan2015admm} that\n\\[\n g_1'(r_{1n},\\gamma_1) \\in\n \\left[ \\frac{\\gamma_1}{c_2 + \\gamma_1}, \\frac{\\gamma_1}{c_1 + \\gamma_1}\\right]\n \\subset (0,1),\n\\]\nfor all $r_{1n}$ and $\\gamma_1 > 0$. 
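For example, for a zero-mean Gaussian prior $p(x_n)={\\mathcal N}(x_n;0,\\tau_x)$ (used here only as an illustration), one has $-\\frac{\\partial^2}{\\partial x_n^2}\\ln p(x_n)=1\/\\tau_x$, so that $c_1=c_2=1\/\\tau_x$; the MMSE denoiser is then the linear shrinkage $g_1(r_{1n},\\gamma_1)=\\frac{\\gamma_1\\tau_x}{1+\\gamma_1\\tau_x}r_{1n}$, whose constant derivative $g_1'(r_{1n},\\gamma_1)=\\frac{\\gamma_1}{1\/\\tau_x+\\gamma_1}\\in(0,1)$ attains both endpoints of the above interval.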
Hence, from the definition of the\nsensitivity function \\eqref{eq:sens1},\nthe sensitivity $\\overline{\\alpha}_{1k}$ in \\eqref{eq:a1se} will be in the range $(0,1)$.\n\n\\noindent\n\\paragraph*{Matched MMSE denoising} Suppose that $g_1(r_{1n},\\gamma_1)$ is the\nMMSE denoiser in the matched condition where $\\overline{\\gamma}_{1k}=\\tau_{1k}^{-1}$\nfor some iteration $k$.\nFrom \\eqref{eq:A1match},\n\\[\n A_1(\\gamma_1,\\gamma_1^{-1}) = \\gamma_1\\mathrm{var}\\left[ X^0 | R_1 = X^0 + {\\mathcal N}(0,\\gamma_1^{-1})\n \\right].\n\\]\nSince the conditional variance is positive, $A_1(\\gamma_1,\\gamma_1^{-1}) > 0$.\nAlso, since the variance is bounded above by the MSE of a linear estimator,\n\\begin{align*}\n \\MoveEqLeft \\gamma_1\\mathrm{var}\\left[ X^0 | R_1 = X^0 + {\\mathcal N}(0,\\gamma_1^{-1})\\right]\\\\\n &\\leq \\gamma_1\\frac{\\gamma_1^{-1}\\tau_{x_0}}{\\tau_{x_0}+\\gamma_1^{-1}}\n = \\frac{\\gamma_1\\tau_{x_0}}{1+ \\gamma_1\\tau_{x_0}} < 1,\n\\end{align*}\nwhere $\\tau_{x0} = \\mathrm{var}(X^0)$.\nThus, we have $A_1(\\gamma_1,\\gamma_1^{-1}) \\in (0,1)$ and\n$\\overline{\\alpha}_{1k} \\in (0,1)$.\n\n\\medskip\nIn the case when the prior is not log-concave and the estimator uses an denoiser\nthat is not perfectly matched, $\\overline{\\alpha}_{1k}$ may not be in the valid range $(0,1)$.\nIn these cases, VAMP may obtain invalid (i.e.\\ negative) variance estimates.\n\n\n\\section{MMSE Denoising, Optimality, and Connections to the Replica Method} \\label{sec:replica}\n\nAn important special case of the VAMP algorithm\nis when we apply the MMSE optimal denoiser under matched $\\gamma_w$. In this case,\nthe SE equations simplify considerably.\n\n\\begin{theorem} \\label{thm:seMmse} Consider the SE equations \\eqref{eq:se} with\nthe MMSE optimal denoiser\n\\eqref{eq:g1mmsex0}, matched $\\gamma_w=\\gamma_{w0}$, and matched initial condition $\\overline{\\gamma}_{10} = \\tau_{10}^{-1}$.\nThen, for all iterations $k \\geq 0$,\n\\begin{subequations} \\label{eq:sematch}\n\\begin{align}\n \\overline{\\eta}_{1k} &= \\frac{1}{{\\mathcal E}_1(\\overline{\\gamma}_{1k})}, \\quad\n \\overline{\\gamma}_{2k} = \\tau_{2k}^{-1} = \\overline{\\eta}_{1k} - \\overline{\\gamma}_{1k},\n \\label{eq:eta1sematch} \\\\\n \\overline{\\eta}_{2k} &= \\frac{1}{{\\mathcal E}_2(\\overline{\\gamma}_{2k})}, \\quad\n \\overline{\\gamma}_{1,k\\! + \\! 1} = \\tau_{1,k\\! + \\! 1}^{-1} = \\overline{\\eta}_{2k} - \\overline{\\gamma}_{2k}.\n \\label{eq:eta2sematch}\n\\end{align}\nIn addition, for estimators $i=1,2$, $\\overline{\\eta}_{ik}$ is the inverse MSE:\n\\begin{equation} \\label{eq:etammse}\n \\overline{\\eta}_{ik}^{-1} = \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\hat{\\mathbf{x}}_{ik}-\\mathbf{x}^0\\|^2.\n\\end{equation}\n\\end{subequations}\n\\end{theorem}\n\\begin{proof} See Appendix~\\ref{sec:seMmsePf}.\n\\end{proof}\n\n\nIt is useful to compare this result with\nthe work \\cite{tulino2013support}, which uses the \\emph{replica method} from statistical physics\nto predict the asymptotic MMSE error in the LSL. To state the result,\ngiven a positive semidefinite matrix $\\mathbf{C}$, we define its Stieltjes transform as\n\\begin{equation} \\label{eq:stieltjes}\n S_{\\mathbf{C}}(\\omega) = \\frac{1}{N} \\mathrm{Tr}\\left[ (\\mathbf{C} - \\omega \\mathbf{I}_N)^{-1} \\right]\n = \\frac{1}{N} \\sum_{n=1}^N \\frac{1}{\\lambda_n - \\omega},\n\\end{equation}\nwhere $\\lambda_n$ are the eigenvalues of $\\mathbf{C}$. 
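As a simple illustration, if $\\mathbf{C}$ has $R$ eigenvalues equal to some $\\lambda>0$ and $N-R$ eigenvalues equal to zero (as happens, e.g., for $\\mathbf{C}=\\gamma_{w0}\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A}$ with a row-orthogonal $\\mathbf{A}$ of rank $R$), then $S_{\\mathbf{C}}(\\omega)=\\frac{R\/N}{\\lambda-\\omega}-\\frac{1-R\/N}{\\omega}$.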
Also, let $R_{\\mathbf{C}}(\\omega)$\ndenote the so-called $R$-transform of $\\mathbf{C}$, given by\n\\begin{equation} \\label{eq:rtrans}\n R_{\\mathbf{C}}(\\omega) = S_{\\mathbf{C}}^{-1}(-\\omega) - \\frac{1}{\\omega},\n\\end{equation}\nwhere the inverse $S_{\\mathbf{C}}^{-1}(\\cdot)$ is in terms of composition of functions.\nThe Stieltjes and $R$-transforms are discussed in detail in \\cite{TulinoV:04}.\nThe Stieltjes and $R$-transforms can be\nextended to random matrix sequences by taking limits as $N \\rightarrow\\infty$\n(for matrix sequences where these limits converge almost surely).\n\nNow suppose that $\\hat{\\mathbf{x}} = \\mathbb{E}[\\mathbf{x}^0|\\mathbf{y}]$ is the MMSE estimate of $\\mathbf{x}^0$ given $\\mathbf{y}$.\nLet $\\overline{\\eta}^{-1}$ be the asymptotic inverse MSE\n\\[\n \\overline{\\eta}^{-1} := \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\hat{\\mathbf{x}}-\\mathbf{x}^0\\|^2.\n\\]\nUsing a so-called replica symmetric analysis, it is argued in\n\\cite{tulino2013support} that this MSE should satisfy the fixed point\nequations\n\\begin{equation} \\label{eq:fixrep}\n \\overline{\\gamma}_1 = R_{\\mathbf{C}}(-\\overline{\\eta}^{-1}), \\quad\n \\overline{\\eta}^{-1} = {\\mathcal E}_1( \\overline{\\gamma}_1 ),\n\\end{equation}\nwhere $\\mathbf{C}=\\gamma_{w0}\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A}$.\nA similar result is given in \\cite{kabashima2014signal}.\n\n\\begin{theorem} \\label{thm:replica} Let $\\overline{\\gamma}_i,\\overline{\\eta}_i$ be any fixed point\nsolutions to the SE equations \\eqref{eq:sematch} of VAMP under MMSE denoising and matched $\\gamma_w=\\gamma_{w0}$.\nThen $\\overline{\\eta}_1=\\overline{\\eta}_2$. If we define $\\overline{\\eta}:=\\overline{\\eta}_i$ as the common value,\nthen $\\overline{\\gamma}_1$ and $\\overline{\\eta}$ satisfy the replica fixed point equation \\eqref{eq:fixrep}.\n\\end{theorem}\n\\begin{proof} Note that we have dropped the iteration index $k$ since we are discussing\na fixed point. First, \\eqref{eq:sematch} shows that, at any fixed point,\n\\[\n \\overline{\\gamma}_1+\\overline{\\gamma}_2 = \\overline{\\eta}_1 = \\overline{\\eta}_2,\n\\]\nso that $\\overline{\\eta}_1=\\overline{\\eta}_2$. Also, in the matched case, \\eqref{eq:eps2Smatch} shows that\n\\[\n {\\mathcal E}_2(\\overline{\\gamma}_2) = S_{\\mathbf{C}}(-\\overline{\\gamma}_2) .\n\\]\nSince $\\overline{\\eta}^{-1} = {\\mathcal E}_2(\\overline{\\gamma}_2)$, we have that\n\\[\n \\overline{\\gamma}_1 = \\overline{\\eta}-\\overline{\\gamma}_2 = \\overline{\\eta}+S_\\mathbf{C}^{-1}(\\overline{\\eta}^{-1}) = R_\\mathbf{C}(-\\overline{\\eta}^{-1}).\n\\]\nAlso, $\\overline{\\eta}^{-1}=\\overline{\\eta}_1^{-1} = {\\mathcal E}(\\overline{\\gamma}_1)$.\n\\end{proof}\n\nThe consequence of Theorem~\\ref{thm:replica} is that, if the replica equations \\eqref{eq:fixrep}\nhave a unique fixed point, then the MSE achieved by the VAMP algorithm\nexactly matches the Bayes optimal MSE as predicted by the replica method. 
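In practice, one simple way to evaluate this replica\/VAMP prediction is to iterate the matched SE \\eqref{eq:sematch} until the precisions stop changing (assuming that the iteration converges); a minimal Python sketch, with illustrative names and user-supplied matched error functions ${\\mathcal E}_1(\\gamma_1)$ and ${\\mathcal E}_2(\\gamma_2)$, is given below.\n\\begin{verbatim}\ndef replica_mse_prediction(E1, E2, gam1, num_iter=1000, tol=1e-10):\n    # Iterate the matched SE to a fixed point; by the theorem above,\n    # any such fixed point also satisfies the replica equation, and\n    # 1\/eta is the predicted MMSE.  E1, E2 are the matched errors.\n    for _ in range(num_iter):\n        eta = 1.0 \/ E1(gam1)       # eta_1\n        gam2 = eta - gam1\n        eta = 1.0 \/ E2(gam2)       # eta_2\n        gam1_new = eta - gam2\n        if abs(gam1_new - gam1) <= tol * abs(gam1):\n            break\n        gam1 = gam1_new\n    return 1.0 \/ eta\n\\end{verbatim}\n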
Hence, if this\nreplica prediction is correct, then the VAMP method provides a computationally efficient\nmethod for finding MSE optimal estimates under very general priors---including priors\nfor which the associated penalty functions are not convex.\n\nThe replica method, however, is generally heuristic.\nBut in the case of i.i.d.\\ Gaussian matrices,\nit has recently been proven that the replica prediction is correct\n\\cite{reeves2016replica,barbier2016mutual}.\n\n\n\n\n\\section{Numerical Experiments} \\label{sec:num}\n\nIn this section, we present numerical experiments that compare\nthe VAMP\\footnote{A Matlab implementation of VAMP can be found in the public-domain GAMPmatlab toolbox \\cite{GAMP-code}.}\nAlgorithm~\\ref{algo:vampSVD} to\nthe VAMP state evolution from Section~\\ref{sec:SE},\nthe replica prediction from \\cite{tulino2013support},\nthe AMP Algorithm~\\ref{algo:amp} from \\cite{DonohoMM:10-ITW1},\nthe S-AMP algorithm from \\cite[Sec.~IV]{cakmak2014samp},\nthe adaptively damped (AD) GAMP algorithm from \\cite{Vila:ICASSP:15},\nand the support-oracle MMSE estimator, whose MSE lower bounds that achievable by any practical method.\nIn all cases, we consider the recovery of vectors $\\mathbf{x}^0\\in{\\mathbb{R}}^N$\nfrom AWGN-corrupted measurements $\\mathbf{y}\\in{\\mathbb{R}}^M$ constructed from \\eqref{eq:yAx}, where\n$\\mathbf{x}^0$ was drawn i.i.d.\\ zero-mean Bernoulli-Gaussian with $\\Pr\\{x^0_j\\neq 0\\}=0.1$,\nwhere $\\mathbf{w}\\sim{\\mathcal N}(\\mathbf{0},\\mathbf{I}\/\\gamma_{w0})$,\nand where $M=512$ and $N=1024$.\nAll methods under test were matched to the true signal and noise statistics.\nWhen computing the support-oracle MMSE estimate, the support of $\\mathbf{x}^0$ is assumed to be known, in which case the problem reduces to estimating the non-zero coefficients of $\\mathbf{x}^0$.\nSince these non-zero coefficients are Gaussian, their MMSE estimate can be computed in closed form.\nFor VAMP we used the implementation enhancements described in Section~\\ref{sec:implementation}.\nFor line~\\ref{line:gamma} of AMP Algorithm~\\ref{algo:amp}, we used $1\/\\gamma_{k\\! + \\! 
1}=1\/\\gamma_{w0}+\\frac{N}{M}\\alpha_k\/\\gamma_k$, as specified in \\cite[Eq.\\ (25)]{DonohoMM:10-ITW1}.\nFor the AMP, S-AMP, and AD-GAMP algorithms, we allowed a maximum of $1000$ iterations, and for the VAMP algorithm we allowed a maximum of $100$ iterations.\n\n\\subsection{Ill-conditioned $\\mathbf{A}$} \\label{sec:ill}\n\nFirst we investigate algorithm robustness to the condition number of $\\mathbf{A}$.\nFor this study, realizations of $\\mathbf{A}$ were constructed from the SVD $\\mathbf{A}=\\overline{\\mathbf{U}}\\mathrm{Diag}(\\overline{\\mathbf{s}})\\overline{\\mathbf{V}}^{\\text{\\sf T}}\\in{\\mathbb{R}}^{M\\times N}$ with geometric singular values $\\overline{\\mathbf{s}}\\in{\\mathbb{R}}^M$.\nThat is, $\\bar{s}_i\/\\bar{s}_{i-1}=\\rho~\\forall i$, with $\\rho$ chosen to achieve a desired condition number $\\kappa(\\mathbf{A}):=\\bar{s}_1\/\\bar{s}_M$ and with $\\bar{s}_1$ chosen so that $\\|\\mathbf{A}\\|_F^2=N$.\nThe singular vector matrices $\\overline{\\mathbf{U}},\\overline{\\mathbf{V}}$ were drawn uniformly at random from the group of orthogonal matrices, i.e., from the Haar distribution.\nFinally, the signal and noise variances were set to achieve a signal-to-noise ratio (SNR) $\\mathbb{E}[\\|\\mathbf{A}\\mathbf{x}\\|^2]\/\\mathbb{E}[\\|\\mathbf{w}\\|^2]$ of $40$~dB.\n\nFigure~\\ref{fig:nmse_vs_cond} plots the median normalized MSE (NMSE) achieved by each algorithm over $500$ independent realizations of $\\{\\mathbf{A},\\mathbf{x},\\mathbf{w}\\}$, where $\\text{NMSE}(\\hat{\\mathbf{x}}):=\\|\\hat{\\mathbf{x}}-\\mathbf{x}^0\\|^2\/\\|\\mathbf{x}^0\\|^2$.\nTo enhance visual clarity, NMSEs were clipped to a maximum value of $1$.\nAlso, error bars are shown that (separately) quantify the positive and negative standard deviations of VAMP's NMSE from the median value.\nThe NMSE was evaluated for condition numbers $\\kappa(\\mathbf{A})$ ranging from $1$ (i.e., row-orthogonal $\\mathbf{A}$) to $1\\times10^6$ (i.e., highly ill-conditioned $\\mathbf{A}$).\n\n\\begin{figure}[t]\n\\centering\n\\psfrag{SNR=40dB, N=1024, M=512, rho=0.2, U=Haar, V=Haar, isCmplx=0, median of 500}{}\n\\psfrag{condition number}[t][t][0.7]{\\sf condition number $\\kappa(\\mathbf{A})$}\n\\psfrag{median NMSE [dB]}[b][b][0.7]{\\sf median NMSE [dB]}\n\\psfrag{damped GAMP}[lB][lB][0.42]{\\sf AD-GAMP}\n\\includegraphics[width=3.15in]{figures\/nmse_vs_cond.eps}\n\\caption{NMSE versus condition number $\\kappa(\\mathbf{A})$ at final algorithm iteration. 
The reported NMSE is the median over $500$ realizations, with error bars shown on the VAMP trace.\n\\label{fig:nmse_vs_cond}}\n\\end{figure}\n\nIn Figure~\\ref{fig:nmse_vs_cond}, we see that AMP and S-AMP diverged for even mildly ill-conditioned $\\mathbf{A}$.\nWe also see that, while adaptive damping helped to extend the operating range of AMP, it had a limited effect.\nIn contrast, Figure~\\ref{fig:nmse_vs_cond} shows that VAMP's NMSE stayed relatively close to the replica prediction for all condition numbers $\\kappa(\\mathbf{A})$.\nThe small gap between VAMP and the replica prediction is due to finite-dimensional effects; the SE analysis from Section~\\ref{sec:SE} establishes that this gap closes in the large-system limit.\nFinally, Figure~\\ref{fig:nmse_vs_cond} shows that the oracle bound is close to the replica prediction at small $\\kappa(\\mathbf{A})$ but not at large $\\kappa(\\mathbf{A})$.\n\nFigure~\\ref{fig:nmse_vs_iter_cond}(a) plots NMSE versus algorithm iteration for condition number $\\kappa(\\mathbf{A})=1$ and Figure~\\ref{fig:nmse_vs_iter_cond}(b) plots the same for $\\kappa(\\mathbf{A})=1000$, again with error bars on the VAMP traces.\nBoth figures show that the VAMP trajectory stayed very close to the VAMP-SE trajectory at every iteration.\nThe figures also show that VAMP converges a bit quicker than AMP, S-AMP, and AD-GAMP when $\\kappa(\\mathbf{A})=1$, and that VAMP's convergence rate is relatively insensitive to the condition number $\\kappa(\\mathbf{A})$.\n\n\\begin{figure}[t]\n\\centering\n\\psfrag{condition number=1}[b][b][0.7]{(a)}\n\\psfrag{condition number=1000}[b][b][0.7]{(b)}\n\\psfrag{iterations}[t][t][0.7]{\\sf iteration}\n\\psfrag{median NMSE [dB]}[b][b][0.7]{\\sf median NMSE [dB]}\n\\psfrag{damped GAMP}[lB][lB][0.42]{\\sf AD-GAMP}\n\\includegraphics[width=3.15in]{figures\/nmse_vs_iter_cond.eps}\n\\caption{NMSE versus algorithm iteration for condition number $\\kappa(\\mathbf{A})=1$ in (a) and $\\kappa(\\mathbf{A})=1000$ in (b). 
The reported NMSE is the median over $500$ realizations, with error bars shown on the VAMP traces.\n\\label{fig:nmse_vs_iter_cond}}\n\\end{figure}\n\n\n\\subsection{Non-zero-mean $\\mathbf{A}$} \\label{sec:nzmean}\n\nIn this section, we investigate algorithm robustness to the componentwise mean of $\\mathbf{A}$.\nFor this study, realizations of $\\mathbf{A}$ were constructed by first drawing an i.i.d.\\ ${\\mathcal N}(\\mu,1\/M)$ matrix and then scaling it so that $\\|\\mathbf{A}\\|_F^2=N$ (noting that essentially no scaling is needed when $\\mu\\approx 0$).\nAs before, the signal and noise variances were set to achieve an SNR of $40$~dB.\nFor AD-GAMP, we used the mean-removal trick proposed in \\cite{Vila:ICASSP:15}.\n\nFigure~\\ref{fig:nmse_vs_mean} plots the NMSE achieved by each algorithm over $200$ independent realizations of $\\{\\mathbf{A},\\mathbf{x},\\mathbf{w}\\}$.\nThe NMSE was evaluated for mean parameters $\\mu$ between $0.001$ and $10$.\nNote that, when $\\mu>0.044$, the mean is larger than the standard deviation.\nThus, the values of $\\mu$ that we consider are quite extreme relative to past studies like \\cite{Caltagirone:14-ISIT}.\n\n\\begin{figure}[t]\n\\centering\n\\psfrag{SNR=40dB, N=1024, M=512, rho=0.2, isCmplx=0, median of 200}{}\n\\psfrag{nzmean}[t][t][0.7]{\\sf mean $\\mu$ of $\\mathbf{A}$}\n\\psfrag{median NMSE [dB]}[b][b][0.7]{\\sf median NMSE [dB]}\n\\psfrag{damped GAMP}[lB][lB][0.42]{\\sf MAD-GAMP}\n\\includegraphics[width=3.15in]{figures\/nmse_vs_mean.eps}\n\\caption{NMSE versus mean $\\mu$ at final algorithm iteration. The reported NMSE is the median over $200$ realizations, with error bars shown on the VAMP trace.\n\\label{fig:nmse_vs_mean}}\n\\end{figure}\n\nFigure~\\ref{fig:nmse_vs_mean} shows that AMP and S-AMP diverged for even mildly mean-perturbed $\\mathbf{A}$.\nIn contrast, the figure shows that VAMP and mean-removed AD-GAMP (MAD-GAMP) closely matched the replica prediction for all mean parameters $\\mu$.\nIt also shows a relatively small gap between the replica prediction and the oracle bound, especially for small $\\mu$.\n\n\\begin{figure}[t]\n\\centering\n\\psfrag{nzmean=0.001}[b][b][0.7]{(a)}\n\\psfrag{nzmean=1}[b][b][0.7]{(b)}\n\\psfrag{iterations}[t][t][0.7]{\\sf iteration}\n\\psfrag{median NMSE [dB]}[b][b][0.7]{\\sf median NMSE [dB]}\n\\psfrag{damped GAMP}[lB][lB][0.42]{\\sf MAD-GAMP}\n\\includegraphics[width=3.15in]{figures\/nmse_vs_iter_mean.eps}\n\\caption{NMSE versus algorithm iteration when $\\mathbf{A}$ has mean $\\mu=0.001$ in (a) and $\\mu=1$ in (b). 
The reported NMSE is the median over $200$ realizations, with error bars shown on the VAMP traces.\n\\label{fig:nmse_vs_iter_mean}}\n\\end{figure}\n\nFigure~\\ref{fig:nmse_vs_iter_mean}(a) plots NMSE versus algorithm iteration for matrix mean $\\mu=0.001$ and Figure~\\ref{fig:nmse_vs_iter_mean}(b) plots the same for $\\mu=1$.\nWhen $\\mu=0.001$, VAMP closely matched its SE at all iterations and converged noticeably quicker than AMP, S-AMP, and MAD-GAMP.\nWhen $\\mu=1$, there was a small but noticeable gap between VAMP and its SE for the first few iterations, although the gap closed after about $10$ iterations.\nThis gap may be due to the fact that the random matrix $\\mathbf{A}$ used for this experiment was not right-\\textb{orthogonally} invariant, since the dominant singular vectors are close to (scaled versions of) the all-ones vector $\\mathbf{1}$ for sufficiently large $\\mu$.\n\n\n\\subsection{Row-orthogonal $\\mathbf{A}$} \\label{sec:SNR}\n\nIn this section, we investigate algorithm NMSE versus SNR for row-orthogonal $\\mathbf{A}$, i.e., $\\mathbf{A}$ constructed as in Section~\\ref{sec:ill} but with $\\kappa(\\mathbf{A})=1$.\nPrevious studies \\cite{cakmak2015samp,kabashima2014signal} have demonstrated that, when $\\mathbf{A}$ is \\textb{orthogonally} invariant but not i.i.d.\\ Gaussian (e.g., row-orthogonal),\nthe fixed points of S-AMP and diagonal-restricted EC are better than those of AMP because the former approaches exploit the singular-value spectrum of $\\mathbf{A}$, whereas AMP does not.\n\nTable~\\ref{tab:nmse_vs_snr} reports the NMSE achieved by VAMP, S-AMP, and AMP at three levels of SNR: $10$~dB, $20$~dB, and $30$~dB.\nThe NMSEs reported in the table were computed from an average of $1000$ independent realizations of $\\{\\mathbf{A},\\mathbf{x},\\mathbf{w}\\}$.\nSince the NMSE differences between the algorithms are quite small, the table also reports the standard error on each NMSE estimate to confirm its accuracy.\n\nTable~\\ref{tab:nmse_vs_snr} shows that VAMP and S-AMP gave nearly identical NMSE at all tested SNRs, which is expected because these two algorithms share the same fixed points.\nThe table also shows that VAMP's NMSE was strictly better than AMP's NMSE at low SNR (as expected), but that the NMSE difference narrows as the SNR increases.\nFinally, the table reports the replica prediction of the NMSE, which is about $3\\%$ lower (i.e., $-0.15$~dB) than VAMP's empirical NMSE at each SNR.\nWe attribute this difference to finite-dimensional effects.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{@{}c@{~} | @{~}c @{~~} r@{}l @{~~} r@{}l @{~~} r@{}l@{}}\nSNR &replica &VAMP&(stderr) &S-AMP&(stderr) &AMP&(stderr) \\\\\\hline\n10 dB&5.09e-02&5.27e-02&(4.3e-04)&5.27e-02&(4.3e-04)&5.42e-02&(4.2e-04)\\\\\n20 dB&3.50e-03&3.57e-03&(2.7e-05)&3.58e-03&(2.7e-05)&3.62e-03&(2.6e-05)\\\\\n30 dB&2.75e-04&2.84e-04&(2.2e-06)&2.85e-04&(2.2e-06)&2.85e-04&(2.1e-06)\\\\\n\\end{tabular}\n\\medskip\n\n\\caption{Average NMSE versus SNR for row-orthogonal $\\mathbf{A}$, where the average was computed from $1000$ realizations. 
Standard error deviations are also reported.\n\\label{tab:nmse_vs_snr}}\n\\end{table}\n\n\\subsection{Discussion}\n\nOur numerical results confirm what is already known about the \\emph{fixed points} of diagonally restricted EC (via VAMP) and S-AMP.\nThat is,\nwhen $\\mathbf{A}$ is large and right-\\textb{orthogonally} invariant,\nthey agree with each other and with the replica prediction;\nand when $\\mathbf{A}$ is large i.i.d.\\ Gaussian (which is a special case of right-\\textb{orthogonally} invariant \\cite{TulinoV:04}),\nthey furthermore agree with the fixed points of AMP\n\\cite{cakmak2015samp,kabashima2014signal}.\n\nBut our numerical results also clarify that it is not enough for an algorithm to have good fixed points, because it may not converge to its fixed points.\nFor example, although the fixed points of S-AMP are good (i.e., replica matching) for \\emph{any} large right-\\textb{orthogonally} invariant $\\mathbf{A}$, our numerical results indicate that S-AMP converges only for a small subset of large right-\\textb{orthogonally} invariant $\\mathbf{A}$: those with singular-value spectra similar (or flatter than) i.i.d.\\ Gaussian $\\mathbf{A}$.\n\nThe SE analysis from Section~\\ref{sec:SE} establishes that, in the large-system limit and under matched priors, VAMP is guaranteed to converge to a fixed point that is also a fixed point of the replica equation \\eqref{eq:fixrep}.\nOur numerical results suggest that, even with large but finite-dimensional right \\textb{orthogonally} invariant $\\mathbf{A}$ (i.e., $512\\times 1024$ in our simulations),\nVAMP attains NMSEs that are very close to the replica prediction.\n\n\n\n\\section{Conclusions} \\label{sec:conc}\n\nIn this paper, we considered the standard linear regression (SLR) problem \\eqref{eq:yAx}, where the goal is to recover the vector $\\mathbf{x}^0$ from noisy linear measurements $\\mathbf{y}=\\mathbf{A}\\mathbf{x}^0+\\mathbf{w}$.\nOur work is inspired by Donoho, Maleki, and Montanari's AMP algorithm \\cite{DonohoMM:09}, which offers a computationally efficient approach to SLR.\nAMP has the desirable property that its behavior is rigorously characterized under large i.i.d.\\ sub-Gaussian $\\mathbf{A}$ by a scalar state evolution whose fixed points, when unique, are Bayes optimal \\cite{BayatiM:11}.\nA major shortcoming of AMP, however, is its fragility with respect to the i.i.d.\\ sub-Gaussian model on $\\mathbf{A}$: even small perturbations from this model can cause AMP to diverge.\n\nIn response, we proposed a vector AMP (VAMP) algorithm that (after performing an initial SVD) has similar complexity to AMP but is much more robust with respect to the matrix $\\mathbf{A}$.\nOur main contribution is establishing that VAMP's behavior can be rigorously characterized by a scalar state-evolution that holds for large, right-\\textb{orthogonally} invariant $\\mathbf{A}$.\nThe fixed points of VAMP's state evolution are, in fact,\nconsistent with the replica prediction of the minimum mean-squared\nerror recently derived in \\cite{tulino2013support}.\nWe also showed how VAMP can be derived as an approximation of belief propagation on a factor graph with vector-valued nodes, hence the name ``vector AMP.''\nFinally, we presented numerical experiments to demonstrate VAMP's robust convergence for ill-conditioned and mean-perturbed matrices $\\mathbf{A}$ that cause earlier AMP algorithms to diverge.\n\nAs future work, it would be interesting to extend VAMP to the generalized linear model, where the outputs $\\mathbf{A}\\mathbf{x}^0$ are 
non-linearly mapped to $\\mathbf{y}$.\nAlso, it would be interesting to design and analyze extensions of VAMP that are robust to \\textb{more general models for $\\mathbf{A}$, such as the case where $\\mathbf{A}$ is statistically coupled to $\\mathbf{x}^0$.}\n\n\\appendices\n\n\\section{Message-Passing Derivation of VAMP} \\label{sec:EP}\n\nIn this appendix, we detail the message-passing derivation of Algorithm~\\ref{algo:vamp}.\nBelow, we will use $k$ to denote the VAMP iteration and $n$ to index the elements of $N$-dimensional vectors like $\\mathbf{x}_{1},\\mathbf{r}_{1k}$ and $\\hat{\\mathbf{x}}_{1k}$.\nWe start by initializing the message-passing with\n$\\msg{\\delta}{\\mathbf{x}_1}(\\mathbf{x}_1)={\\mathcal N}(\\mathbf{x}_1;\\mathbf{r}_{10},\\gamma_{10}^{-1}\\mathbf{I})$.\nThe following steps are then repeated for $k=0,1,2,\\dots$.\n\nFrom Rule~\\ref{rule:b}, we first set\nthe approximate belief on $\\mathbf{x}_1$ as\n${\\mathcal N}(\\mathbf{x}_1;\\hat{\\mathbf{x}}_{1k},\\eta_{1k}^{-1}\\mathbf{I})$,\nwhere\n$\\hat{\\mathbf{x}}_{1k} = \\mathbb{E}[\\mathbf{x}_1|b_{\\textsf{sp}}(\\mathbf{x}_1)]$ and\n$\\eta_{1k}^{-1} = \\bkt{\\mathop{\\mathrm{diag}}(\\mathrm{Cov}[\\mathbf{x}_1|b_{\\textsf{sp}}(\\mathbf{x}_1)])}$\nfor the SP belief\n$b_{\\textsf{sp}}(\\mathbf{x}_1)\\propto p(\\mathbf{x}_1){\\mathcal N}(\\mathbf{x}_1;\\mathbf{r}_{1k},\\gamma_{1k}^{-1}\\mathbf{I})$.\nWith an i.i.d.\\ prior $p(\\mathbf{x}_1)$ as in \\eqref{eq:pxiid},\nwe have that $[\\hat{\\mathbf{x}}_{1k}]_n= g_1(r_{1k,n},\\gamma_{1k})$\nfor the conditional-mean estimator $g_1(\\cdot,\\gamma_{1k})$\ngiven in \\eqref{eq:g1mmsesca},\nyielding line~\\ref{line:x1} of Algorithm~\\ref{algo:vamp}.\nFurthermore, from \\eqref{eq:g1dervar} we see that\nthe corresponding conditional covariance is\n$\\gamma_{1k}^{-1}g_1'(r_{1k,n},\\gamma_{1k})$,\nyielding lines~\\ref{line:a1}-\\ref{line:eta1} of Algorithm~\\ref{algo:vamp}.\n\nNext, Rule~\\ref{rule:v2f} says to set the message $\\msg{\\mathbf{x}_1}{\\delta}(\\mathbf{x}_1)$\nproportional to\n${\\mathcal N}(\\mathbf{x}_1;\\hat{\\mathbf{x}}_{1k},\\eta_{1k}^{-1}\\mathbf{I})\n \/{\\mathcal N}(\\mathbf{x}_1;\\mathbf{r}_{1k},\\gamma_{1k}^{-1}\\mathbf{I})$.\nSince\n\\begin{align}\n\\lefteqn{\n{\\mathcal N}(\\mathbf{x};\\hat{\\mathbf{x}},\\eta^{-1}\\mathbf{I})\/{\\mathcal N}(\\mathbf{x};\\mathbf{r},\\gamma^{-1}\\mathbf{I})\n}\\nonumber\\\\\n&\\propto {\\mathcal N}\\big(\\mathbf{x};(\\hat{\\mathbf{x}}\\eta-\\mathbf{r}\\gamma)\/(\\eta-\\gamma),(\\eta-\\gamma)^{-1}\\mathbf{I}\\big)\n\\label{eq:gauss_div},\n\\end{align}\nwe have\n$\\msg{\\mathbf{x}_1}{\\delta}(\\mathbf{x}_1)={\\mathcal N}(\\mathbf{x}_1;\\mathbf{r}_{2k},\\gamma_{2k}^{-1}\\mathbf{I})$\nfor\n$\\mathbf{r}_{2k}=(\\hat{\\mathbf{x}}_{1k}\\eta_{1k}-\\mathbf{r}_{1k}\\gamma_{1k})\/(\\eta_{1k}-\\gamma_{1k})$\nand\n$\\gamma_{2k}=\\eta_{1k}-\\gamma_{1k}$,\nyielding lines~\\ref{line:gam2}-\\ref{line:r2} of Algorithm~\\ref{algo:vamp}.\nRule~\\ref{rule:f2v} then implies that the message $\\msg{\\mathbf{x}_1}{\\delta}(\\mathbf{x}_1)$\nwill flow rightward through the $\\delta$ node unchanged, manifesting as\n$\\msg{\\delta}{\\mathbf{x}_2}(\\mathbf{x}_2)={\\mathcal N}(\\mathbf{x}_2;\\mathbf{r}_{2k},\\gamma_{2k}^{-1}\\mathbf{I})$\non the other side.\n\nRule~\\ref{rule:b} then says to set the approximate belief on $\\mathbf{x}_2$ at\n${\\mathcal N}(\\mathbf{x}_2;\\hat{\\mathbf{x}}_{2k},\\eta_{2k}^{-1}\\mathbf{I})$,\nwhere\n$\\hat{\\mathbf{x}}_{2k} = \\mathbb{E}[\\mathbf{x}_2|b_{\\textsf{sp}}(\\mathbf{x}_2)]$ and\n$\\eta_{2k}^{-1} = 
\\bkt{\\mathop{\\mathrm{diag}}(\\mathrm{Cov}[\\mathbf{x}_2|b_{\\textsf{sp}}(\\mathbf{x}_2)])}$\nfor the SP belief\n$b_{\\textsf{sp}}(\\mathbf{x}_2)\\propto {\\mathcal N}(\\mathbf{x}_2;\\mathbf{r}_{2k},\\gamma_{2k}^{-1}\\mathbf{I})\n {\\mathcal N}(\\mathbf{y};\\mathbf{A}\\mathbf{x}_2,\\gamma_w^{-1}\\mathbf{I})$.\nUsing standard manipulations, it can be shown that this belief is Gaussian with mean\n\\begin{align}\n\\hat{\\mathbf{x}}_{2k}\n&= \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_{2k}\\mathbf{I}\\right)^{-1}\n \\left( \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{y} + \\gamma_{2k}\\mathbf{r}_{2k} \\right)\n\\label{eq:x2}\n\\end{align}\nand covariance $(\\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A}+\\gamma_{2k}\\mathbf{I})^{-1}$.\nThe equivalence between \\eqref{eq:x2} and \\eqref{eq:g2slr}\nexplains line~\\ref{line:x2} of Algorithm~\\ref{algo:vamp}.\nFurthermore, it can be seen by inspection that the average of the diagonal of this covariance matrix coincides with\n$\\gamma_{2k}^{-1} \\bkt{\\mathbf{g}_2'(\\mathbf{r}_{2k},\\gamma_{2k})}$\nfor $\\bkt{\\mathbf{g}_2'(\\mathbf{r}_{2k},\\gamma_{2k})}$ from \\eqref{eq:a2slr},\nthus explaining lines~\\ref{line:a2}-\\ref{line:eta2} of Algorithm~\\ref{algo:vamp}.\n\nRule~\\ref{rule:v2f} then says to set the message $\\msg{\\mathbf{x}_2}{\\delta}(\\mathbf{x}_2)$ at\n${\\mathcal N}(\\mathbf{x}_2;\\hat{\\mathbf{x}}_{2k},\\eta_{2k}^{-1}\\mathbf{I})\n \/{\\mathcal N}(\\mathbf{x}_2;\\mathbf{r}_{2k},\\gamma_{2k}^{-1}\\mathbf{I})$,\nwhich \\eqref{eq:gauss_div} simplifies to\n${\\mathcal N}(\\mathbf{x}_2;\\mathbf{r}_{1,k\\! + \\! 1},\\gamma_{1,k\\! + \\! 1}^{-1}\\mathbf{I})$\nfor\n$\\mathbf{r}_{1,k\\! + \\! 1}=(\\hat{\\mathbf{x}}_{2k}\\eta_{2k}-\\mathbf{r}_{2k}\\gamma_{2k})\/(\\eta_{2k}-\\gamma_{2k})$\nand\n$\\gamma_{1,k\\! + \\! 1}=\\eta_{2k}-\\gamma_{2k}$,\nyielding lines~\\ref{line:gam1}-\\ref{line:r1} of Algorithm~\\ref{algo:vamp}.\nFinally, Rule~\\ref{rule:f2v} implies that the message $\\msg{\\mathbf{x}_2}{\\delta}(\\mathbf{x}_2)$\nflows left through the $\\delta$ node unchanged, manifesting as\n$\\msg{\\delta}{\\mathbf{x}_1}(\\mathbf{x}_1)={\\mathcal N}(\\mathbf{x}_1;\\mathbf{r}_{1k\\! + \\! 1},\\gamma_{1,k\\! + \\! 1}^{-1}\\mathbf{I})$\non the other side.\nThe above messaging sequence is then repeated with $k\\leftarrow k+1$.\n\n\n\\section{Convergence of Vector Sequences} \\label{sec:empConv}\nWe review some definitions from the Bayati-Montanari paper \\cite{BayatiM:11}, since we will use the same\nanalysis framework in this paper.\nFix a dimension $r > 0$, and suppose that, for each $N$,\n$\\mathbf{x}(N)$ is a vector of the form\n\\[\n \\mathbf{x}(N) = (\\mathbf{x}_1(N),\\ldots,\\mathbf{x}_N(N)),\n\\]\n\\textb{with vector sub-components} $\\mathbf{x}_n(N) \\in {\\mathbb{R}}^r$. Thus, the total dimension\nof $\\mathbf{x}(N)$ is $rN$. 
In this case, we will say that\n$\\mathbf{x}(N)$ is a \\emph{block vector sequence that scales with $N$\nunder blocks $\\mathbf{x}_n(N) \\in {\\mathbb{R}}^r$.}\nWhen $r=1$, so that the blocks are scalar, we will simply say that\n$\\mathbf{x}(N)$ is a \\emph{vector sequence that scales with $N$}.\nSuch vector sequences can be deterministic or random.\nIn most cases, we will omit the notational dependence on $N$ and simply write $\\mathbf{x}$.\n\nNow, given $p \\geq 1$,\na function $\\mathbf{f}:{\\mathbb{R}}^s \\rightarrow {\\mathbb{R}}^r$ is called \\emph{pseudo-Lipschitz of order $p$},\nif there exists a constant $C > 0$ such that for all $\\mathbf{x}_1,\\mathbf{x}_2 \\in{\\mathbb{R}}^s$,\n\\[\n \\|\\mathbf{f}(\\mathbf{x}_1)-\\mathbf{f}(\\mathbf{x}_2)\\| \\leq C\\|\\mathbf{x}_1-\\mathbf{x}_2\\|\\left[ 1 + \\|\\mathbf{x}_1\\|^{p-1}\n + \\|\\mathbf{x}_2\\|^{p-1} \\right].\n\\]\nObserve that in the case $p=1$, pseudo-Lipschitz continuity reduces to\nthe standard Lipschitz continuity.\n\n\\textb{Now suppose that $\\mathbf{x}=\\mathbf{x}(N)$ is a block vector sequence,\nwhich may be deterministic or random.}\nGiven $p \\geq 1$, we will say that $\\mathbf{x}=\\mathbf{x}(N)$ converges\n\\emph{empirically with $p$-th order moments} if there exists a random variable\n$X \\in {\\mathbb{R}}^r$ such that\n\\begin{enumerate}[(i)]\n\\item $\\mathbb{E}|X|^p < \\infty$; and\n\\item for any scalar-valued pseudo-Lipschitz continuous function $f(\\cdot)$ of order $p$,\n\\begin{equation} \\label{eq:plplim}\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\sum_{n=1}^N f(x_n(N)) = \\mathbb{E}\\left[ f(X) \\right] \\mbox{ a.s.}.\n\\end{equation}\n\\end{enumerate}\nThus, the empirical mean of the components $f(x_n(N))$ converges to\nthe expectation $\\mathbb{E}[ f(X) ]$.\n\\textb{When $\\mathbf{x}$ converges empirically with $p$-th order moments},\nwe will write, with some abuse of notation,\n\\begin{equation} \\label{eq:plLim}\n \\lim_{N \\rightarrow \\infty} \\left\\{ x_n \\right\\}_{n=1}^N \\stackrel{PL(p)}{=} X,\n\\end{equation}\nwhere, as usual, we have omitted the dependence $x_n=x_n(N)$.\n\\textb{Note that the almost sure convergence in condition (ii) applies to the case\nwhere $\\mathbf{x}(N)$ is a random vector sequence. Importantly,\nthis condition holds pointwise over each function $f(\\cdot)$.\nIt is shown in \\cite[Lemma 4]{BayatiM:11} that, if condition (i) is true and\ncondition (ii) is true for any bounded continuous functions $f(x)$\nas well as $f(x)=x^p$, then condition (ii) holds for all pseudo-Lipschitz functions\nof order $p$. 
}\n\nWe conclude with one final definition.\nLet ${\\bm{\\phi}}(\\mathbf{r},\\gamma)$ be a function on $\\mathbf{r} \\in {\\mathbb{R}}^s$ and $\\gamma \\in {\\mathbb{R}}$.\nWe say that ${\\bm{\\phi}}(\\mathbf{r},\\gamma)$ is \\emph{uniformly Lipschitz continuous} in $\\mathbf{r}$\nat $\\gamma=\\overline{\\gamma}$ if there exist constants\n$L_1$ and $L_2 \\geq 0$ and an open neighborhood $U$ of $\\overline{\\gamma}$, such that\n\\begin{equation} \\label{eq:unifLip1}\n \\|{\\bm{\\phi}}(\\mathbf{r}_1,\\gamma)-{\\bm{\\phi}}(\\mathbf{r}_2,\\gamma)\\| \\leq L_1\\|\\mathbf{r}_1-\\mathbf{r}_2\\|,\n\\end{equation}\nfor all $\\mathbf{r}_1,\\mathbf{r}_2 \\in {\\mathbb{R}}^s$ and $\\gamma \\in U$; and\n\\begin{equation} \\label{eq:unifLip2}\n \\|{\\bm{\\phi}}(\\mathbf{r},\\gamma_1)-{\\bm{\\phi}}(\\mathbf{r},\\gamma_2)\\| \\leq L_2\\left(1+\\|\\mathbf{r}\\|\\right)|\\gamma_1-\\gamma_2|,\n\\end{equation}\nfor all $\\mathbf{r} \\in {\\mathbb{R}}^s$ and $\\gamma_1,\\gamma_2 \\in U$.\n\n\\section{Proof of Lemmas~\\ref{lem:errfn} and \\ref{lem:sens}} \\label{sec:errsenspf}\n\nFor Lemma~\\ref{lem:errfn}, part (a) follows immediately from\n\\eqref{eq:g1mmsex0} and \\eqref{eq:eps1}.\nTo prove part (b), suppose\n\\[\n \\mathbf{y} = \\mathbf{A}\\mathbf{x}^0 + \\mathbf{w}, \\quad \\mathbf{r}_2 = \\mathbf{x}^0+\\mathbf{q}.\n\\]\nThen, the error is given by\n\\begin{align}\n \\MoveEqLeft \\mathbf{g}_2(\\mathbf{r}_{2},\\gamma_2) - \\mathbf{x}^0\n \\stackrel{(a)}{=} \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_2\\mathbf{I}\\right)^{-1}\n \\nonumber \\\\\n & \\quad \\times \\left( \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A}\\mathbf{x}^0 + \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{w}\n + \\gamma_2\\mathbf{r}_{2} \\right) -\\mathbf{x}^0 \\nonumber \\\\\n &\\stackrel{(b)}{=}\n \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_2\\mathbf{I}\\right)^{-1}\n \\left( \\gamma_2\\mathbf{q}+ \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{w}\\right), \\nonumber \\\\\n &\\stackrel{(c)}{=} \\mathbf{Q}^{-1}\n \\left( \\gamma_2\\mathbf{q}+ \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{w}\\right), \\nonumber\n\\end{align}\nwhere (a) follows by substituting $\\mathbf{y} = \\mathbf{A}\\mathbf{x}^0 + \\mathbf{w}$ into \\eqref{eq:g2slr};\npart (b) follows from the substitution $\\textb{\\mathbf{r}_2} = \\mathbf{x}^0+\\mathbf{q}$ and collecting the terms\nwith $\\mathbf{x}^0$; and (c) follows from the definition of $\\mathbf{Q}$ in \\eqref{eq:QQtdef}.\nHence, the error covariance matrix is given by\n\\begin{align*}\n \\MoveEqLeft \\mathbb{E}\\left[ (\\mathbf{g}_2(\\mathbf{r}_{2},\\gamma_2) - \\mathbf{x}^0)(\\mathbf{g}_2(\\mathbf{r}_{2},\\gamma_2) - \\mathbf{x}^0)^{\\text{\\sf T}}\n \\right] \\\\\n &= \\mathbf{Q}^{-1}\\left[ \\gamma_2^2\\mathbb{E}[\\mathbf{q}\\qbf^{\\text{\\sf T}}] + \\gamma_w^2\\mathbf{A}^{\\text{\\sf T}}\\mathbb{E}[\\mathbf{w}\\wbf^{\\text{\\sf T}}]\\mathbf{A}\n \\right]\\mathbf{Q}^{-1} \\\\\n &= \\mathbf{Q}^{-1}\\tilde{\\mathbf{Q}}\\mathbf{Q}^{-1},\n\\end{align*}\nwhere we have used the fact that $\\mathbf{q}$ and $\\mathbf{w}$ are independent Gaussians\nwith variances $\\tau_2$ and $\\gamma_{w0}^{-1}$. 
This proves \\eqref{eq:eps2Q}.\n\\textb{Then,} under the matched condition, \\textb{we have that} $\\mathbf{Q}=\\tilde{\\mathbf{Q}}$, which proves \\eqref{eq:eps2Qmatch}.\nPart (c) of Lemma~\\ref{lem:errfn} follows from part (b) by using the SVD \\eqref{eq:ASVD}.\n\nFor Lemma~\\ref{lem:sens}, part (a) follows from averaging \\eqref{eq:g1mmsex0} over $r_1$.\nPart (b) follows by taking the derivative in \\eqref{eq:g2slr} and part (c)\nfollows from using the SVD~\\eqref{eq:ASVD}.\n\n\n\\section{Orthogonal Matrices Under Linear Constraints}\n\n\nIn preparation for proving Theorem~\\ref{thm:se},\nwe derive various results on orthogonal matrices subject to linear constraints.\nTo this end, suppose $\\mathbf{V} \\in {\\mathbb{R}}^{N \\times N}$ is an orthogonal matrix\nsatisfying linear constraints\n\\begin{equation} \\label{eq:AVB}\n \\mathbf{A} = \\mathbf{V}\\mathbf{B},\n\\end{equation}\nfor some matrices $\\mathbf{A}, \\mathbf{B} \\in {\\mathbb{R}}^{N \\times s}$ for some $s$. Assume $\\mathbf{A}$ and $\\mathbf{B}$\nare full column rank (hence $s \\leq N$). Let\n\\begin{equation} \\label{eq:UABdef}\n \\mathbf{U}_\\mathbf{A} = \\mathbf{A}(\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A})^{-1\/2}, \\quad\n \\mathbf{U}_\\mathbf{B} = \\mathbf{B}(\\mathbf{B}^{\\text{\\sf T}}\\mathbf{B})^{-1\/2}.\n\\end{equation}\nAlso, let $\\mathbf{U}_{\\mathbf{A}^\\perp}$ and $\\mathbf{U}_{\\mathbf{B}^\\perp}$ be any $N \\times (N-s)$\nmatrices whose columns are\northonormal bases for $\\mathrm{Range}(\\mathbf{A})^\\perp$ and\n$\\mathrm{Range}(\\mathbf{B})^\\perp$, respectively.\nDefine\n\\begin{equation} \\label{eq:Vtdef}\n \\tilde{\\mathbf{V}} := \\mathbf{U}_{\\mathbf{A}^\\perp}^{\\text{\\sf T}}\\mathbf{V}\\mathbf{U}_{\\mathbf{B}^\\perp},\n\\end{equation}\nwhich has dimension $(N-s) \\times (N-s)$.\n\n\\begin{lemma} \\label{lem:orthogRep}\nUnder the above definitions $\\tilde{\\mathbf{V}}$ satisfies\n\\begin{equation} \\label{eq:VVtrep}\n \\mathbf{V} = \\mathbf{A}(\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A})^{-1}\\mathbf{B}^{\\text{\\sf T}} + \\mathbf{U}_{\\mathbf{A}^\\perp}\\tilde{\\mathbf{V}}\\mathbf{U}_{\\mathbf{B}^\\perp}^{\\text{\\sf T}}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n\\textb{Let $\\mathbf{P}_\\mathbf{A}:=\\mathbf{U}_\\mathbf{A}\\mathbf{U}_\\mathbf{A}^{\\text{\\sf T}}$ and $\\mathbf{P}_\\mathbf{A}^\\perp:=\\mathbf{U}_{\\mathbf{A}^\\perp}\\mathbf{U}_{\\mathbf{A}^\\perp}^{\\text{\\sf T}}$ be\nthe orthogonal projections onto $\\mathrm{Range}(\\mathbf{A})$ and $\\mathrm{Range}(\\mathbf{A})^\\perp$ respectively.\nDefine $\\mathbf{P}_\\mathbf{B}$ and $\\mathbf{P}_\\mathbf{B}^\\perp$ similarly. Since $\\mathbf{A}=\\mathbf{V}\\mathbf{B}$, we have $\\mathbf{V}^{\\text{\\sf T}}\\mathbf{A} = \\mathbf{B}$ and therefore,\n\\begin{equation} \\label{eq:PAVPB}\n \\mathbf{P}_\\mathbf{A}^\\perp\\mathbf{V}\\mathbf{P}_{\\mathbf{B}} = \\mathbf{0}, \\quad\n \\mathbf{P}_\\mathbf{A}\\mathbf{V}\\mathbf{P}_{\\mathbf{B}}^\\perp = \\mathbf{0}.\n\\end{equation}\nTherefore,\n\\begin{align}\n \\mathbf{V} &= (\\mathbf{P}_\\mathbf{A}+\\mathbf{P}_\\mathbf{A}^\\perp)\\mathbf{V}(\\mathbf{P}_\\mathbf{B}+\\mathbf{P}_\\mathbf{B}^\\perp) \\nonumber \\\\\n &= (\\mathbf{P}_\\mathbf{A}\\mathbf{V}\\mathbf{P}_\\mathbf{B} + \\mathbf{P}_\\mathbf{A}^\\perp\\mathbf{V}\\mathbf{P}_\\mathbf{B}^\\perp). 
\\label{eq:PVP1}\n\\end{align}\nNow,\n\\begin{align}\n \\MoveEqLeft \\mathbf{P}_\\mathbf{A}\\mathbf{V}\\mathbf{P}_\\mathbf{B} = \\mathbf{P}_\\mathbf{A}\\mathbf{V}\\mathbf{B}(\\mathbf{B}^{\\text{\\sf T}}\\mathbf{B})^{-1}\\mathbf{B}^{\\text{\\sf T}} \\nonumber \\\\\n &= \\mathbf{P}_{\\mathbf{A}}\\mathbf{A}(\\mathbf{B}^{\\text{\\sf T}}\\mathbf{B})^{-1}\\mathbf{B}^{\\text{\\sf T}} \\nonumber \\\\\n &= \\mathbf{A}(\\mathbf{B}^{\\text{\\sf T}}\\mathbf{B})^{-1}\\mathbf{B}^{\\text{\\sf T}} = \\mathbf{A}(\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A})^{-1}\\mathbf{B}^{\\text{\\sf T}}, \\label{eq:PVP2}\n\\end{align}\nwhere, in the last step we used the fact that\n\\[\n \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A}=\\mathbf{B}^{\\text{\\sf T}}\\mathbf{V}^{\\text{\\sf T}}\\mathbf{V}\\mathbf{B} = \\mathbf{B}^{\\text{\\sf T}}\\mathbf{B}.\n\\]\nAlso, using the definition of $\\tilde{\\mathbf{V}}$ in \\eqref{eq:Vtdef},\n\\begin{equation} \\label{eq:PVP3}\n \\mathbf{P}_\\mathbf{A}^\\perp\\mathbf{V}\\mathbf{P}_\\mathbf{B}^\\perp =\\mathbf{U}_{\\mathbf{A}^\\perp}\\tilde{\\mathbf{V}}\\mathbf{U}_{\\mathbf{B}^\\perp}^{\\text{\\sf T}}.\n\\end{equation}\nSubstituting \\eqref{eq:PVP2} and \\eqref{eq:PVP3} into \\eqref{eq:PVP1} yields \\eqref{eq:VVtrep}.\nTo prove that $\\tilde{\\mathbf{V}}$ is orthogonal,\n\\begin{align*}\n \\MoveEqLeft \\tilde{\\mathbf{V}}^{\\text{\\sf T}}\\tilde{\\mathbf{V}} \\stackrel{(a)}{=}\n \\mathbf{U}_{\\mathbf{B}^\\perp}^{\\text{\\sf T}}\\mathbf{V}^{\\text{\\sf T}}\\mathbf{P}_\\mathbf{A}^\\perp\\mathbf{V}\\mathbf{U}_{\\mathbf{B}^\\perp}\n \\\\\n &\\stackrel{(b)}{=}\n \\mathbf{U}_{\\mathbf{B}^\\perp}^{\\text{\\sf T}}\\mathbf{V}^{\\text{\\sf T}}\\mathbf{V}\\mathbf{U}_{\\mathbf{B}^\\perp} \\stackrel{(c)}{=} \\mathbf{I},\n\\end{align*}\nwhere (a) uses \\eqref{eq:Vtdef}; (b) follows from \\eqref{eq:PAVPB} and\n(c) follows from the fact that $\\mathbf{V}$ and $\\mathbf{U}_{\\mathbf{B}^\\perp}$ have orthonormal\ncolumns.\n}\n\\end{proof}\n\n\\begin{lemma} \\label{lem:orthogLin} Let $\\mathbf{V} \\in {\\mathbb{R}}^{N \\times N}$ be a random matrix\nthat is Haar distributed. Suppose that $\\mathbf{A}$ and $\\mathbf{B}$ are deterministic and $G$ is\nthe event that $\\mathbf{V}$ satisfies linear constraints \\eqref{eq:AVB}.\nThen, conditional on $G$,\n$\\tilde{\\mathbf{V}}$ is a Haar distributed matrix independent of $G$. Thus,\n\\[\n \\left. \\mathbf{V} \\right|_{G} \\stackrel{d}{=}\n \\mathbf{A}(\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A})^{-1}\\mathbf{B}^{\\text{\\sf T}} + \\mathbf{U}_{\\mathbf{A}^\\perp}\\tilde{\\mathbf{V}}\\mathbf{U}_{\\mathbf{B}^\\perp}^{\\text{\\sf T}},\n\\]\nwhere $\\tilde{\\mathbf{V}}$ is Haar distributed and independent of $G$.\n\\end{lemma}\n\\begin{proof} Let $O_N$ be the set\nof $N \\times N$ orthogonal matrices and let ${\\mathcal L}$ be the set of matrices $\\mathbf{V} \\in O_N$\nthat satisfy the linear constraints \\eqref{eq:AVB}. If $p_{\\mathbf{V}}(\\mathbf{V})$ is the uniform density\non $O_N$ (i.e.\\ the Haar measure), the conditional density on $\\mathbf{V}$ given the event $G$ is\n\\[\n p_{\\mathbf{V}|G}(\\mathbf{V}|G) = \\frac{1}{Z}p_{\\mathbf{V}}(\\mathbf{V})\\indic{\\mathbf{V} \\in {\\mathcal L}},\n\\]\nwhere $Z$ is the normalization constant. Now let $\\phi:\\tilde{\\mathbf{V}} \\mapsto \\mathbf{V}$\nbe the mapping described by \\eqref{eq:VVtrep} which maps $O_{N-s}$ to ${\\mathcal L}$.\nThis mapping is invertible. 
Since $\\phi$ is affine, the conditional density on\n$\\tilde{\\mathbf{V}}$ is given by\n\\begin{align}\n \\MoveEqLeft p_{\\tilde{\\mathbf{V}}|G}(\\tilde{\\mathbf{V}}|G) \\propto\n p_{\\mathbf{V}|G}(\\phi(\\tilde{\\mathbf{V}})|G) \\nonumber \\\\\n &\\propto p_{\\mathbf{V}}(\\phi(\\tilde{\\mathbf{V}}))\\indic{\\phi(\\tilde{\\mathbf{V}} \\in {\\mathcal L})}\n = p_{\\mathbf{V}}(\\phi(\\tilde{\\mathbf{V}})), \\label{eq:ptvg}\n\\end{align}\nwhere in the last step we used the fact that, for any matrix $\\tilde{\\mathbf{V}}$,\n$\\phi(\\tilde{\\mathbf{V}}) \\in {\\mathcal L}$ (i.e.\\ satisfies the linear constraints \\eqref{eq:AVB}).\nNow to show that $\\tilde{\\mathbf{V}}$ is conditionally Haar distributed, we need to show that for any\northogonal matrix $\\mathbf{W}_0 \\in O_{N-s}$,\n\\begin{equation} \\label{eq:pvtorthog}\n p_{\\tilde{\\mathbf{V}}|G}(\\mathbf{W}_0\\tilde{\\mathbf{V}}|G)=p_{\\tilde{\\mathbf{V}}|G}(\\tilde{\\mathbf{V}}|G).\n\\end{equation}\nTo prove this, given $\\mathbf{W}_0 \\in O_{N-s}$, define the matrix,\n\\[\n \\mathbf{W} = \\mathbf{U}_{\\mathbf{A}}\\mathbf{U}_{\\mathbf{A}}^{\\text{\\sf T}} + \\mathbf{U}_{\\mathbf{A}^\\perp}\\mathbf{W}_0\\mathbf{U}_{\\mathbf{A}^\\perp}^{\\text{\\sf T}}.\n\\]\nOne can verify that $\\mathbf{W} \\in O_N$ (i.e.\\ it is orthogonal) and\n\\begin{equation} \\label{eq:pwvt}\n \\phi(\\mathbf{W}_0\\tilde{\\mathbf{V}}) =\\mathbf{W}\\phi(\\tilde{\\mathbf{V}}).\n\\end{equation}\nHence,\n\\begin{align}\n \\MoveEqLeft p_{\\tilde{\\mathbf{V}}|G}(\\mathbf{W}_0\\tilde{\\mathbf{V}}|G)\n \\stackrel{(a)}{\\propto} p_{\\mathbf{V}}(\\phi(\\mathbf{W}_0\\tilde{\\mathbf{V}})) \\nonumber \\\\\n &\\stackrel{(b)}{\\propto} p_{\\mathbf{V}}(\\mathbf{W}\\phi(\\tilde{\\mathbf{V}}))\n \\stackrel{(c)}{\\propto} p_{\\mathbf{V}}(\\phi(\\tilde{\\mathbf{V}})), \\nonumber\n\\end{align}\nwhere (a) follows from \\eqref{eq:ptvg}; (b) follows from \\eqref{eq:pwvt}; and\n(c) follows from the orthogonal invariance of $\\mathbf{V}$. 
Hence, the conditional density\nof $\\tilde{\\mathbf{V}}$ is invariant under orthogonal transforms and is thus Haar distributed.\n\\end{proof}\n\nWe will use Lemma~\\ref{lem:orthogLin} in conjunction with the following\nsimple result.\n\n\\begin{lemma} \\label{lem:orthogGaussLim} Fix a dimension $s \\geq 0$,\nand suppose that $\\mathbf{x}(N)$ and $\\mathbf{U}(N)$ are sequences such that\nfor each $N$,\n\\begin{enumerate}[(i)]\n\\item $\\mathbf{U}=\\mathbf{U}(N) \\in {\\mathbb{R}}^{N\\times (N-s)}$ is a deterministic\nmatrix with $\\mathbf{U}^{\\text{\\sf T}}\\mathbf{U} = \\mathbf{I}$;\n\\item $\\mathbf{x}=\\mathbf{x}(N) \\in {\\mathbb{R}}^{N-s}$ is a random vector that\nis isotropically distributed in that $\\mathbf{V}\\mathbf{x}\\stackrel{d}{=}\\mathbf{x}$ for any orthogonal $(N-s)\\times(N-s)$ matrix\n$\\mathbf{V}$.\n\\item The \\textb{normalized squared Euclidean norm} converges almost surely as\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\mathbf{x}\\|^2 = \\tau,\n\\]\nfor some $\\tau > 0$.\n\\end{enumerate}\nThen, if we define $\\mathbf{y} = \\mathbf{U}\\mathbf{x}$, we have that the components of $\\mathbf{y}$\nconverge empirically to a Gaussian random variable\n\\begin{equation} \\label{eq:ygausslim}\n \\lim_{N \\rightarrow \\infty} \\{ y_n \\} \\stackrel{PL(2)}{=} Y \\sim {\\mathcal N}(0,\\tau).\n\\end{equation}\n\\end{lemma}\n\\begin{proof} Since $\\mathbf{x}$ is isotropically distributed, it can be generated as a normalized\nGaussian, i.e.\\\n\\[\n \\mathbf{x} \\stackrel{d}{=} \\frac{\\|\\mathbf{x}\\|}{\\|\\mathbf{w}_0\\|}\\mathbf{w}_0, \\quad \\mathbf{w}_0 \\sim {\\mathcal N}(\\mathbf{0},\\mathbf{I}_{N-s}).\n\\]\nFor each $N$, let $\\mathbf{U}_\\perp$ be an $N \\times s$ matrix such that $\\mathbf{S} := [\\mathbf{U} ~ \\mathbf{U}_{\\perp}]$ is\northogonal. That is, the $s$ columns of $\\mathbf{U}_\\perp$ are an orthonormal basis\nof the orthogonal complement of $\\mathrm{Range}(\\mathbf{U})$. Let $\\mathbf{w}_1 \\sim {\\mathcal N}(0,\\mathbf{I}_s)$\nbe independent of $\\mathbf{w}_0$ and define\n\\[\n \\mathbf{w} = \\left[ \\begin{array}{c} \\mathbf{w}_0 \\\\ \\mathbf{w}_1 \\end{array} \\right],\n\\]\nso that $\\mathbf{w} \\sim {\\mathcal N}(0,\\mathbf{I}_N)$.\nWith this definition, we can write $\\mathbf{y}$ as\n\\begin{equation} \\label{eq:yux}\n \\mathbf{y} = \\mathbf{U}\\mathbf{x} \\stackrel{d}{=} \\frac{\\|\\mathbf{x}\\|}{\\|\\mathbf{w}_0\\|}\\left[ \\mathbf{S}\\mathbf{w} - \\mathbf{U}_{\\perp}\\mathbf{w}_1 \\right].\n\\end{equation}\nNow,\n\\[\n \\lim_{N \\rightarrow\\infty} \\frac{\\|\\mathbf{x}\\|}{\\|\\mathbf{w}_0\\|} = \\sqrt{\\tau},\n\\]\nalmost surely. Also, since $\\mathbf{w} \\sim {\\mathcal N}(0,\\mathbf{I})$ and $\\mathbf{S}$ is orthogonal,\n$\\mathbf{S}\\mathbf{w} \\sim {\\mathcal N}(0,\\mathbf{I})$. Finally, since $\\mathbf{w}_1$ is $s$-dimensional,\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\mathbf{U}_{\\perp}\\mathbf{w}_1\\|^2 =\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\mathbf{w}_1\\|^2 = 0,\n\\]\nalmost surely. Substituting these properties into \\eqref{eq:yux},\nwe obtain \\eqref{eq:ygausslim}.\n\\end{proof}\n\n\\section{A General Convergence Result}\n\n\nTo analyze the VAMP method, we consider the\nfollowing more general recursion. For each dimension $N$, we are given\nan orthogonal matrix $\\mathbf{V} \\in {\\mathbb{R}}^{N \\times N}$,\nand an initial vector $\\mathbf{u}_0 \\in {\\mathbb{R}}^N$. 
\n\\textb{Also, we are given disturbance vectors \n\\[\n \\mathbf{w}^p=(w_1^p,\\ldots,w_N^p), \\quad\n \\mathbf{w}^q=(w_1^q,\\ldots,w_N^q),\n\\]\nwhere the components $w_n^p \\in {\\mathbb{R}}^{n_p}$ and $w_n^q \\in {\\mathbb{R}}^{n_q}$ \nfor some finite dimensions $n_p$ and $n_q$ that do not grow with $N$.\n}\nThen, we generate a sequence of iterates by the following recursion:%\n\\begin{subequations}\n\\label{eq:algoGen}\n\\begin{align}\n \\mathbf{p}_k &= \\mathbf{V}\\mathbf{u}_k \\label{eq:pupgen} \\\\\n \\alpha_{1k} &= \\bkt{ \\mathbf{f}_p'(\\mathbf{p}_k,\\mathbf{w}^p,\\gamma_{1k})},\n \\quad \\gamma_{2k} = \\Gamma_1(\\gamma_{1k},\\alpha_{1k})\n \\label{eq:alpha1gen} \\\\\n \\mathbf{v}_k &= C_1(\\alpha_{1k})\\left[\n \\mathbf{f}_p(\\mathbf{p}_k,\\mathbf{w}^p,\\gamma_{1k})- \\alpha_{1k} \\mathbf{p}_{k} \\right] \\label{eq:vupgen} \\\\\n \\mathbf{q}_k &= \\mathbf{V}^{\\text{\\sf T}}\\mathbf{v}_k \\label{eq:qupgen} \\\\\n \\alpha_{2k} &= \\bkt{ \\mathbf{f}_q'(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k})},\n \\quad \\gamma_{1,k\\! + \\! 1} =\\Gamma_2(\\gamma_{2k},\\alpha_{2k})\n \\label{eq:alpha2gen} \\\\\n \\mathbf{u}_{k\\! + \\! 1} &= C_2(\\alpha_{2k})\\left[\n \\mathbf{f}_q(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k}) - \\alpha_{2k}\\mathbf{q}_{k} \\right], \\label{eq:uupgen}\n\\end{align}\n\\end{subequations}\nwhich is initialized with some vector $\\mathbf{u}_0$ and scalar $\\gamma_{10}$.\nHere, $\\mathbf{f}_p(\\cdot)$ and $\\mathbf{f}_q(\\cdot)$ are separable functions, meaning\n\\begin{align}\\label{eq:fpqcomp}\n\\begin{split}\n \\left[ \\mathbf{f}_p(\\mathbf{p},\\mathbf{w}^p,\\gamma_1)\\right]_n = f_p(p_n,w^p_n,\\gamma_1)~\\forall n, \\\\\n \\left[ \\mathbf{f}_q(\\mathbf{q},\\mathbf{w}^q,\\gamma_2)\\right]_n = f_q(q_n,w^q_n,\\gamma_2)~\\forall n,\n\\end{split}\n\\end{align}\nfor scalar-valued functions $f_p(\\cdot)$ and $f_q(\\cdot)$.\nThe functions $\\Gamma_i(\\cdot)$ and $C_i(\\cdot)$ are also scalar-valued.\n\\textb{In the recursion \\eqref{eq:algoGen}, the variables $\\gamma_{1k}$\nand $\\gamma_{2k}$ \nrepresent parameters of the update functions $\\mathbf{f}_p(\\cdot)$ and $\\mathbf{f}_q(\\cdot)$,\nand the functions $\\Gamma_i(\\cdot)$ represent how these parameters are updated.}\n\nSimilar to our analysis of the VAMP, we consider the following\nlarge-system limit (LSL) analysis.\nWe consider a sequence of runs of the recursions indexed by $N$.\nWe model the initial condition $\\mathbf{u}_0$ and disturbance vectors $\\mathbf{w}^p$ and $\\mathbf{w}^q$\nas deterministic sequences that scale with $N$ and assume that their components\nconverge empirically as\n\\begin{equation} \\label{eq:U0lim}\n \\lim_{N \\rightarrow \\infty} \\{ u_{0n} \\} \\stackrel{PL(2)}{=} U_0,\n\\end{equation}\nand\n\\begin{equation} \\label{eq:WpqLim}\n \\lim_{N \\rightarrow \\infty} \\{ w^p_n \\} \\stackrel{PL(2)}{=} W^p, \\quad\n \\lim_{N \\rightarrow \\infty} \\{ w^q_n \\} \\stackrel{PL(2)}{=} W^q,\n\\end{equation}\nto random variables $U_0$, $W^p$ and $W^q$. 
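\n\nTo make the structure of the recursion \\eqref{eq:algoGen} concrete, the following self-contained NumPy sketch runs a few iterations of \\eqref{eq:algoGen}. The separable functions $f_p$, $f_q$ below are placeholder assumptions, chosen only so that the iterates remain bounded; they are not the VAMP denoisers. The scalings $C_i(\\alpha)=1\/(1-\\alpha)$ and $\\Gamma_i(\\gamma,\\alpha)=\\gamma(1\/\\alpha-1)$ do, however, match the choices used when this recursion is later specialized to VAMP.\n\\begin{verbatim}\n# Illustrative sketch of the general recursion; f_p, f_q are placeholders.\nimport numpy as np\nrng = np.random.default_rng(0)\nN, K = 500, 10\nQ, R = np.linalg.qr(rng.standard_normal((N, N)))\nV = Q * np.sign(np.diag(R))            # Haar-distributed orthogonal matrix\nu = rng.standard_normal(N)             # initial vector u_0\nwp = rng.standard_normal(N)            # disturbance vectors (n_p = n_q = 1)\nwq = rng.standard_normal(N)\ngam1 = 1.0                             # initial parameter gamma_{10}\nf_p  = lambda p, w, g: np.tanh(p + w \/ np.sqrt(g))        # placeholder f_p\nfd_p = lambda p, w, g: 1.0 - np.tanh(p + w \/ np.sqrt(g))**2\nf_q  = lambda q, w, g: (g * q + w) \/ (g + 1.0)            # placeholder f_q\nfd_q = lambda q, w, g: np.full_like(q, g \/ (g + 1.0))\nGam  = lambda gam, a: gam * (1.0 \/ a - 1.0)               # Gamma_1 = Gamma_2\nC    = lambda a: 1.0 \/ (1.0 - a)                          # C_1 = C_2\nfor k in range(K):\n    p    = V @ u                                 # p_k = V u_k\n    a1   = fd_p(p, wp, gam1).mean()              # alpha_{1k} = <f_p'>\n    gam2 = Gam(gam1, a1)\n    v    = C(a1) * (f_p(p, wp, gam1) - a1 * p)   # v_k\n    q    = V.T @ v                               # q_k = V^T v_k\n    a2   = fd_q(q, wq, gam2).mean()              # alpha_{2k} = <f_q'>\n    gam1 = Gam(gam2, a2)\n    u    = C(a2) * (f_q(q, wq, gam2) - a2 * q)   # u_{k+1}\n\\end{verbatim}\n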
\\textb{The\nvectors $W^p$ and $W^q$ are random vectors in ${\\mathbb{R}}^{n_p}$ and ${\\mathbb{R}}^{n_q}$,\nrespectively.} We assume that the\ninitial constant converges as\n\\begin{equation} \\label{eq:gam10limgen}\n \\lim_{N \\rightarrow \\infty} \\gamma_{10} = \\overline{\\gamma}_{10},\n\\end{equation}\nfor some $\\overline{\\gamma}_{10}$.\n The matrix $\\mathbf{V} \\in {\\mathbb{R}}^{N \\times N}$\nis assumed to be uniformly distributed on the set of orthogonal matrices\nindependent of $\\mathbf{u}_0$, $\\mathbf{w}^p$ and $\\mathbf{w}^q$.\nSince $\\mathbf{u}_0$, $\\mathbf{w}^p$ and\n$\\mathbf{w}^q$ are deterministic, the only randomness is in the matrix $\\mathbf{V}$.\n\nUnder the above assumptions, define the SE equations\n\\begin{subequations} \\label{eq:segen}\n\\begin{align}\n \\overline{\\alpha}_{1k} &= \\mathbb{E}\\left[ f_p'(P_k,W^p,\\overline{\\gamma}_{1k})\\right],\n \\label{eq:a1segen} \\\\\n \\tau_{2k} &= C_1^2(\\overline{\\alpha}_{1k}) \\left\\{\n \\mathbb{E}\\left[ f_p^2(P_k,W^p,\\overline{\\gamma}_{1k})\\right] - \\overline{\\alpha}_{1k}^2\\tau_{1k} \\right\\}\n \\label{eq:tau2segen} \\\\\n \\overline{\\gamma}_{2k} &= \\Gamma_1(\\overline{\\gamma}_{1k},\\overline{\\alpha}_{1k}) \\label{eq:gam2segen} \\\\\n \\overline{\\alpha}_{2k} &= \\mathbb{E}\\left[ f_q'(Q_k,W^q,\\overline{\\gamma}_{2k}) \\right],\n \\label{eq:a2segen} \\\\\n \\tau_{1,k\\! + \\! 1} &= C_2^2(\\overline{\\alpha}_{2k})\\left\\{\n \\mathbb{E}\\left[ f_q^2(Q_k,W^q,\\overline{\\gamma}_{2k})\\right] - \\overline{\\alpha}_{2k}^2\\tau_{2k}\\right\\}\n \\label{eq:tau1segen} \\\\\n \\overline{\\gamma}_{1,k\\! + \\! 1} &= \\Gamma_2(\\overline{\\gamma}_{2k},\\overline{\\alpha}_{2k}), \\label{eq:gam1segen}\n\\end{align}\n\\end{subequations}\nwhich are initialized with $\\overline{\\gamma}_{10}$ in \\eqref{eq:gam10limgen} and\n\\begin{equation} \\label{eq:tau10gen}\n \\tau_{10} = \\mathbb{E}[U_0^2],\n\\end{equation}\nwhere $U_0$ is the random variable in \\eqref{eq:U0lim}.\nIn the SE equations~\\eqref{eq:segen},\nthe expectations are taken with respect to random variables\n\\[\n P_k \\sim {\\mathcal N}(0,\\tau_{1k}), \\quad Q_k \\sim {\\mathcal N}(0,\\tau_{2k}),\n\\]\nwhere $P_k$ is independent of $W^p$ and $Q_k$ is independent of $W^q$.\n\n\\begin{theorem} \\label{thm:genConv} Consider the recursions \\eqref{eq:algoGen}\nand SE equations \\eqref{eq:segen} under the above assumptions. 
Assume additionally that,\nfor all $k$:\n\\begin{enumerate}[(i)]\n\\item For $i=1,2$, the functions\n\\[\n C_i(\\alpha_i), \\quad \\Gamma_i(\\gamma_i,\\alpha_i),\n\\]\nare continuous at the points $(\\gamma_i,\\alpha_i)=(\\overline{\\gamma}_{ik},\\overline{\\alpha}_{ik})$\nfrom the SE equations; and\n\\item The function $f_p(p,w^p,\\gamma_1)$ and its derivative $f_p'(p,w^p,\\gamma_1)$\nare uniformly Lipschitz continuous in $(p,w^p)$ at $\\gamma_1=\\overline{\\gamma}_{1k}$.\n\\item The function $f_q(q,w^q,\\gamma_2)$ and its derivative $f_q'(q,w^q,\\gamma_2)$\nare uniformly Lipschitz continuous in $(q,w^q)$ at $\\gamma_2=\\overline{\\gamma}_{2k}$.\n\\end{enumerate}\nThen,\n\\begin{enumerate}[(a)]\n\\item For any fixed $k$, almost surely the components of $(\\mathbf{w}^p,\\mathbf{p}_0,\\ldots,\\mathbf{p}_k)$\nempirically converge as\n\\begin{equation} \\label{eq:Pconk}\n \\lim_{N \\rightarrow \\infty} \\left\\{ (w^p_n,p_{0n},\\ldots,p_{kn}) \\right\\}\n \\stackrel{PL(2)}{=} (W^p,P_0,\\ldots,P_k),\n\\end{equation}\nwhere $W^p$ is the random variable in the limit \\eqref{eq:WpqLim} and\n$(P_0,\\ldots,P_k)$ is a zero mean Gaussian random vector independent of $W^p$,\nwith $\\mathbb{E}[P_k^2] = \\tau_{1k}$. In addition, we have that\n\\begin{equation} \\label{eq:ag1limgen}\n \\lim_{N \\rightarrow \\infty} (\\alpha_{1k},\\gamma_{1k}) = (\\overline{\\alpha}_{1k},\\overline{\\gamma}_{1k}),\n\\end{equation}\nalmost surely.\n\n\\item For any fixed $k$, almost surely the components of $(\\mathbf{w}^q,\\mathbf{q}_0,\\ldots,\\mathbf{q}_k)$\nempirically converge as\n\\begin{equation} \\label{eq:Qconk}\n \\lim_{N \\rightarrow \\infty} \\left\\{ (w^q_n,q_{0n},\\ldots,q_{kn}) \\right\\}\n \\stackrel{PL(2)}{=} (W^q,Q_0,\\ldots,Q_k),\n\\end{equation}\nwhere $W^q$ is the random variable in the limit \\eqref{eq:WpqLim} and\n$(Q_0,\\ldots,Q_k)$ is a zero mean Gaussian random vector independent of $W^q$,\nwith $\\mathbb{E}[Q_k^2] = \\tau_{2k}$. In addition, we have that\n\\begin{equation} \\label{eq:ag2limgen}\n \\lim_{N \\rightarrow \\infty} (\\alpha_{2k},\\gamma_{2k}) = (\\overline{\\alpha}_{2k},\\overline{\\gamma}_{2k}),\n\\end{equation}\nalmost surely.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof} We will prove this in the next Appendix,\nAppendix~\\ref{sec:genConvPf}.\n\\end{proof}\n\n\\section{Proof of Theorem~\\ref{thm:genConv}} \\label{sec:genConvPf}\n\n\\subsection{Induction Argument}\nWe use an induction argument. Given iterations $k, \\ell \\geq 0$,\ndefine the hypothesis $H_{k,\\ell}$ as the statement:\n\\begin{itemize}\n\\item Part (a) of Theorem~\\ref{thm:genConv} is true up to $k$; and\n\\item Part (b) of Theorem~\\ref{thm:genConv} is true up to $\\ell$.\n\\end{itemize}\nThe induction argument will then follow by showing the following three facts:\n\\begin{itemize}\n\\item $H_{0,-1}$ is true;\n\\item If $H_{k,k\\! - \\! 1}$ is true, then so is $H_{k,k}$;\n\\item If $H_{k,k}$ is true, then so is $H_{k\\! + \\! 1,k}$.\n\\end{itemize}\n\n\\subsection{Induction Initialization}\nWe first show that the hypothesis $H_{0,-1}$ is true. That is,\nwe must show \\eqref{eq:Pconk} and \\eqref{eq:ag1limgen} for $k=0$.\nThis is a special case of Lemma~\\ref{lem:orthogGaussLim}.\nSpecifically, for each $N$, let $\\mathbf{U}=\\mathbf{I}_N$, the $N \\times N$ identity\nmatrix, which trivially satisfies property (i) of Lemma~\\ref{lem:orthogGaussLim}\nwith $s=0$. Let $\\mathbf{x} = \\mathbf{p}_0$. 
Since $\\mathbf{p}_0 = \\mathbf{V}\\mathbf{u}_0$ and $\\mathbf{V}$ is\nHaar distributed independent of $\\mathbf{u}_0$, we have that $\\mathbf{p}_0$ is orthogonally\ninvariant and satisfies property (ii) of Lemma~\\ref{lem:orthogGaussLim}.\nAlso,\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\\|\\mathbf{p}_0\\|^2 \\stackrel{(a)}{=}\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\\|\\mathbf{u}_0\\|^2 \\stackrel{(b)}{=} \\mathbb{E} [U_0^2]\n \\stackrel{(c)}{=} \\tau_{10},\n\\]\nwhere (a) follows from the fact that $\\mathbf{p}_0=\\mathbf{V}\\mathbf{u}_0$ and $\\mathbf{V}$ is orthogonal;\n(b) follows from the assumption~\\eqref{eq:U0lim} and (c) follows from the definition\n\\eqref{eq:tau10gen}. This proves property (iii) of Lemma~\\ref{lem:orthogGaussLim}.\nHence, since $\\mathbf{p}_0 = \\mathbf{U}\\mathbf{p}_0$, Lemma~\\ref{lem:orthogGaussLim} shows that the components of $\\mathbf{p}_0$ converge\nempirically as\n\\[\n \\lim_{N \\rightarrow \\infty} \\{ p_{0n} \\} \\stackrel{PL(2)}{=} P_0 \\sim {\\mathcal N}(0,\\tau_{10}),\n\\]\nfor a Gaussian random variable $P_0$. Moreover, since $\\mathbf{V}$ is independent of $\\mathbf{w}^p$,\nand the components of $\\mathbf{w}^p$ converge empirically as \\eqref{eq:WpqLim},\nwe have that the components of $(\\mathbf{w}^p,\\mathbf{p}_0)$ almost surely converge empirically as\n\\[\n \\lim_{N \\rightarrow \\infty} \\{ (w^p_n, p_{0n}) \\} \\stackrel{PL(2)}{=} (W^p,P_0),\n\\]\nwhere $W^p$ is independent of $P_0$. This proves \\eqref{eq:Pconk} for $k=0$.\n\nNow, we have assumed in \\eqref{eq:gam10limgen} that $\\gamma_{10} \\rightarrow \\overline{\\gamma}_{10}$\nas $N \\rightarrow \\infty$. Also, since\n $f_p'(p,w^p,\\gamma_1)$ is uniformly Lipschitz continuous in $(p,w^p)$\nat $\\gamma_1 = \\overline{\\gamma}_{10}$, we have that\n$\\alpha_{10} = \\bkt{ \\mathbf{f}_p'(\\mathbf{p}_0,\\mathbf{w}^p,\\gamma_{10}) }$\nconverges to $\\overline{\\alpha}_{10}$ in \\eqref{eq:a1segen} almost surely.\nThis proves \\eqref{eq:ag1limgen}.\n\n\n\n\\subsection{The Induction Recursion}\nWe next show the implication $H_{k,k\\! - \\! 1} \\Rightarrow H_{k,k}$. The implication\n$H_{k,k} \\Rightarrow H_{k\\! + \\! 1,k}$ is proven similarly. Hence, fix $k$ and assume\nthat $H_{k,k\\! - \\! 
1}$ holds.\nSince $\\Gamma_1(\\gamma_i,\\alpha_i)$ is continuous at $(\\overline{\\gamma}_{1k},\\overline{\\alpha}_{1k})$,\nthe limits \\eqref{eq:ag1limgen} combined with \\eqref{eq:gam2segen} show that\n\\[\n \\lim_{N \\rightarrow\\infty} \\gamma_{2k} =\\lim_{N \\rightarrow\\infty} \\Gamma_1(\\gamma_{1k},\\alpha_{1k}) =\n \\overline{\\gamma}_{2k}.\n\\]\nIn addition, the induction hypothesis shows that for $\\ell=0,\\ldots,k$,\nthe components of $(\\mathbf{w}^p,\\mathbf{p}_\\ell)$ almost surely converge empirically as\n\\[\n \\lim_{N \\rightarrow\\infty} \\{ (w^p_n,p_{\\ell n})\\} \\stackrel{PL(2)}{=} (W^p,P_\\ell),\n\\]\nwhere $P_\\ell \\sim {\\mathcal N}(0,\\tau_{1\\ell})$ for $\\tau_{1\\ell}$ given by the SE equations.\nSince $f_p(\\cdot)$ is Lipschitz continuous \\textb{and} $C_1(\\alpha_{1\\ell})$ is continuous\nat $\\alpha_{1\\ell}=\\overline{\\alpha}_{1\\ell}$, \\textb{one may observe that} the definition of $\\mathbf{v}_\\ell$ in \\eqref{eq:vupgen}\nand the limits \\eqref{eq:ag1limgen} show that\n\\[\n \\lim_{N \\rightarrow\\infty} \\{ (w^p_n,p_{\\ell n},v_{\\ell n})\\} \\stackrel{PL(2)}{=} (W^p,P_\\ell ,V_\\ell ),\n\\]\nwhere $V_\\ell $ is the random variable\n\\begin{equation} \\label{eq:Velldef}\n V_\\ell = g_p(P_\\ell ,W^p,\\overline{\\gamma}_{1\\ell },\\overline{\\alpha}_{1\\ell}),\n\\end{equation}\nand $g_p(\\cdot)$ is the function\n\\begin{equation} \\label{eq:gpdef}\n g_p(p,w^p,\\gamma_1,\\alpha_1) := C_1(\\alpha_1)\\left[ f_p(p,w^p,\\gamma_1)-\\alpha_1 p \\right].\n\\end{equation}\n\\textb{\nSimilarly, we have the limit\n\\[\n \\lim_{N \\rightarrow\\infty} \\{ (w^q_n,q_{\\ell n},u_{\\ell n})\\} \n \\stackrel{PL(2)}{=} (W^q,Q_\\ell ,U_\\ell ),\n\\]\nwhere $U_\\ell $ is the random variable,\n\\begin{equation} \\label{eq:Uelldef}\n U_\\ell = g_q(Q_\\ell ,W^q,\\overline{\\gamma}_{2\\ell },\\overline{\\alpha}_{2\\ell})\n\\end{equation}\nand $g_q(\\cdot)$ is the function\n\\begin{equation} \\label{eq:gqdef}\n g_q(q,w^q,\\gamma_2,\\alpha_2) := C_2(\\alpha_2)\\left[ f_q(q,w^q,\\gamma_2)-\\alpha_2 q \\right].\n\\end{equation}\n}\n\n\nWe next introduce the notation\n\\[\n \\mathbf{U}_k := \\left[ \\mathbf{u}_0 \\cdots \\mathbf{u}_k \\right] \\in {\\mathbb{R}}^{N \\times (k\\! + \\! 1)},\n\\]\nto represent the first $k\\! + \\! 1$ values of the vectors $\\mathbf{u}_\\ell$.\nWe define the matrices $\\mathbf{V}_k$, $\\mathbf{Q}_k$ and $\\mathbf{P}_k$ similarly.\nUsing this notation, let $G_k$ be the \\textb{tuple of random matrices},\n\\begin{equation} \\label{eq:Gdef}\n G_k := \\left\\{ \\mathbf{U}_k, \\mathbf{P}_k, \\mathbf{V}_k, \\mathbf{Q}_{k\\! - \\! 1} \\right\\}.\n\\end{equation}\nWith some abuse of notation, we will also use $G_k$\nto denote the sigma-algebra generated by these variables.\nThe set \\eqref{eq:Gdef} contains all the outputs of the algorithm\n\\eqref{eq:algoGen} immediately \\emph{before} \\eqref{eq:qupgen} in iteration $k$.\n\nNow, the actions of the matrix $\\mathbf{V}$ in the recursions \\eqref{eq:algoGen}\nare through the matrix-vector multiplications \\eqref{eq:pupgen} and \\eqref{eq:qupgen}.\nHence, if we define the matrices\n\\begin{equation} \\label{eq:ABdef}\n \\mathbf{A}_k := \\left[ \\mathbf{P}_k ~ \\mathbf{V}_{k\\! - \\! 1} \\right], \\quad\n \\mathbf{B}_k := \\left[ \\mathbf{U}_k ~ \\mathbf{Q}_{k\\! - \\! 
1} \\right],\n\\end{equation}\nthe output of the recursions in the set $G_k$ will be unchanged for all\nmatrices $\\mathbf{V}$ satisfying the linear constraints\n\\begin{equation} \\label{eq:ABVconk}\n \\mathbf{A}_k = \\mathbf{V}\\mathbf{B}_k.\n\\end{equation}\nHence, the conditional distribution of $\\mathbf{V}$ given $G_k$ is precisely\nthe uniform distribution on the set of orthogonal matrices satisfying\n\\eqref{eq:ABVconk}. The matrices $\\mathbf{A}_k$ and $\\mathbf{B}_k$ are of dimensions\n$N \\times s$ where $s=2k+1$.\nFrom Lemma~\\ref{lem:orthogLin}, this conditional distribution is given by\n\\begin{equation} \\label{eq:Vconk}\n \\left. \\mathbf{V} \\right|_{G_k} \\stackrel{d}{=}\n \\mathbf{A}_k(\\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k)^{-1}\\mathbf{B}_k^{\\text{\\sf T}} + \\mathbf{U}_{\\mathbf{A}_k^\\perp}\\tilde{\\mathbf{V}}\\mathbf{U}_{\\mathbf{B}_k^\\perp}^{\\text{\\sf T}},\n\\end{equation}\nwhere $\\mathbf{U}_{\\mathbf{A}_k^\\perp}$ and $\\mathbf{U}_{\\mathbf{B}_k^\\perp}$ are $N \\times (N-s)$ matrices\nwhose columns are an orthonormal basis for $\\mathrm{Range}(\\mathbf{A}_k)^\\perp$ and $\\mathrm{Range}(\\mathbf{B}_k)^\\perp$.\nThe matrix $\\tilde{\\mathbf{V}}$ is Haar distributed on the set of $(N-s)\\times(N-s)$\northogonal matrices and independent of $G_k$.\n\nUsing \\eqref{eq:Vconk} we can write $\\mathbf{q}_k$ in \\eqref{eq:qupgen} as a sum of two terms\n\\begin{equation} \\label{eq:qpart}\n \\mathbf{q}_k = \\mathbf{V}^{\\text{\\sf T}}\\mathbf{v}_k = \\mathbf{q}_k^{\\rm det} + \\mathbf{q}_k^{\\rm ran},\n\\end{equation}\nwhere $\\mathbf{q}_k^{\\rm det}$ is what we will call the \\emph{deterministic} part:\n\\begin{equation} \\label{eq:qkdet}\n \\mathbf{q}_k^{\\rm det} = \\mathbf{B}_k(\\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k)^{-1}\\mathbf{A}_k^{\\text{\\sf T}}\\mathbf{v}_k,\n\\end{equation}\nand $\\mathbf{q}_k^{\\rm ran}$ is what we will call the \\emph{random} part:\n\\begin{equation} \\label{eq:qkran}\n \\mathbf{q}_k^{\\rm ran} = \\mathbf{U}_{\\mathbf{B}_k^\\perp}\\tilde{\\mathbf{V}}^{\\text{\\sf T}} \\mathbf{U}_{\\mathbf{A}_k^\\perp}^{\\text{\\sf T}} \\mathbf{v}_k.\n\\end{equation}\nThe next few lemmas will evaluate the asymptotic distributions of the two\nterms in \\eqref{eq:qpart}.\n\n\\begin{lemma} \\label{lem:qconvdet}\nUnder the induction hypothesis $H_{k,k\\! - \\! 1}$, there \\textb{exist} constants\n$\\beta_{k,0},\\ldots,\\beta_{k,k\\! - \\! 1}$ such that the components of $\\mathbf{q}_k^{\\rm det}$\nalong with $(\\mathbf{q}_0,\\ldots,\\mathbf{q}_{k\\! - \\! 1})$ converge empirically as\n\\begin{align}\n \\MoveEqLeft \\lim_{N \\rightarrow \\infty} \\left\\{ w^q_n,q_{0n},\\ldots,q_{k\\! - \\! 1,n},q_{kn}^{\\rm det}) \\right\\}\n \\nonumber \\\\\n &\\stackrel{PL(2)}{=} (W^q,Q_0,\\ldots,Q_{k\\! - \\! 1},Q_k^{\\rm det}),\n \\label{eq:qconvdet}\n\\end{align}\nwhere $Q_\\ell$, $\\ell=0,\\ldots,k\\! - \\! 1$ are the Gaussian random variables in induction hypothesis\n\\eqref{eq:Qconk} and $Q_k^{\\rm det}$ is a linear combination,\n\\begin{equation} \\label{eq:Qkdetlim}\n Q_k^{\\rm det} = \\beta_{k0}Q_0 + \\cdots + \\beta_{k,k\\! - \\! 1}Q_{k\\! - \\! 1}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe evaluate the asymptotic values of various terms in \\eqref{eq:qkdet}.\nUsing the definition of $\\mathbf{A}_k$ in \\eqref{eq:ABdef},\n\\[\n \\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k = \\left[ \\begin{array}{cc}\n \\mathbf{P}_k^{\\text{\\sf T}}\\mathbf{P}_k & \\mathbf{P}_k^{\\text{\\sf T}}\\mathbf{V}_{k\\! - \\! 1} \\\\\n \\mathbf{V}_{k\\! - \\! 
1}^{\\text{\\sf T}}\\mathbf{P}_k & \\mathbf{V}_{k\\! - \\! 1}^{\\text{\\sf T}}\\mathbf{V}_{k\\! - \\! 1}\n \\end{array} \\right]\n\\]\nWe can then easily evaluate the asymptotic value of these\nterms as follows. For example, the asymptotic value of the\n$(i,j)$ component of the matrix $\\mathbf{P}_k^{\\text{\\sf T}}\\mathbf{P}_k$ is given by\n\\begin{align*}\n \\MoveEqLeft \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\left[ \\mathbf{P}_k^{\\text{\\sf T}}\\mathbf{P}_k \\right]_{ij}\n \\stackrel{(a)}{=} \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\mathbf{p}_i^{\\text{\\sf T}}\\mathbf{p}_j \\\\\n &= \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\sum_{n=1}^N p_{in}p_{jn}\n \\stackrel{(b)}{=} E(P_iP_j) \\stackrel{(c)}{=} \\left[ \\mathbf{Q}^p_k\\right]_{ij},\n\\end{align*}\nwhere (a) follows since the $i$-th column of $\\mathbf{P}_k$ is precisely the vector\n$\\mathbf{p}_i$; (b) follows due to the convergence assumption in \\eqref{eq:Pconk};\nand in (c), we use $\\mathbf{Q}^p_k$ to denote the covariance matrix of $(P_0,\\ldots,P_k)$.\nSimilarly\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\mathbf{V}_{k\\! - \\! 1}^{\\text{\\sf T}}\\mathbf{V}_{k\\! - \\! 1} = \\mathbf{Q}^v_{k\\! - \\! 1},\n\\]\nwhere $\\mathbf{Q}^v_{k\\! - \\! 1}$ has the components,\n\\[\n \\left[ \\mathbf{Q}^v_{k\\! - \\! 1} \\right]_{ij} = \\mathbb{E} \\left[ V_iV_j \\right],\n\\]\nwhere $V_i$ is the random variable in \\eqref{eq:Velldef}.\nFinally, the cross-terms are given by\n\\begin{align*}\n \\mathbb{E}[V_iP_j]\n &\\stackrel{(a)}{=}\n \\mathbb{E}[g_p(P_i,W^p,\\overline{\\gamma}_{1i},\\overline{\\alpha}_{1i})P_j] \\nonumber \\\\\n &\\stackrel{(b)}{=} \\mathbb{E}\\left[ g_p'(P_i,W^p,\\overline{\\gamma}_{1i},\\overline{\\alpha}_{1i})\\right]\n \\mathbb{E}[P_iP_j] \\nonumber \\\\\n &\\stackrel{(c)}{=}\n \\mathbb{E}[P_iP_j]C_1(\\overline{\\alpha}_{1i})\\left( \\mathbb{E}\\left[ f_p'(P_i,W^p,\\overline{\\gamma}_{1i}) \\right]-\\overline{\\alpha}_{1i}\n \\right) \\\\\n &\\stackrel{(d)}{=} 0,\n\\end{align*}\nwhere (a) follows from \\eqref{eq:Velldef};\n(b) follows from Stein's Lemma; (c) follows from the definition of $g_p(\\cdot)$\nin \\eqref{eq:gpdef}; and (d) follows from \\eqref{eq:a1segen}.\nThe above calculations show that\n\\begin{equation} \\label{eq:AAlim}\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k \n \\stackrel{a.s.}{=} \\left[ \\begin{array}{cc}\n \\mathbf{Q}_k^p & \\mathbf{0} \\\\\n \\mathbf{0} & \\mathbf{Q}^v_{k\\! - \\! 1}\n \\end{array} \\right].\n\\end{equation}\nA similar calculation shows that\n\\begin{equation} \\label{eq:Aslim}\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{v}_k = \\left[\n \\begin{array}{c} \\mathbf{0} \\\\ \\mathbf{b}^v_k \\end{array} \\right],\n\\end{equation}\nwhere \\textb{$\\mathbf{b}^v_k$} is the vector of correlations\n\\begin{equation}\n \\mathbf{b}^v_k = \\mat{ \\mathbb{E}[V_0V_k] & \\mathbb{E}[V_1V_k] & \\cdots & \\mathbb{E}[V_{k\\! - \\! 
1}V_k] }^{\\text{\\sf T}}.\n\\end{equation}\nCombining \\eqref{eq:AAlim} and \\eqref{eq:Aslim} shows that\n\\begin{equation} \\label{eq:Vsmult1}\n \\lim_{N \\rightarrow \\infty} (\\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k)^{-1}\\mathbf{A}_k^{\\text{\\sf T}} \\mathbf{v}_k \n \\stackrel{a.s.}{=}\n \\left[ \\begin{array}{c} \\mathbf{0} \\\\ \\mathbf{\\beta}_k \\end{array} \\right],\n\\end{equation}\nwhere \n\\[\n \\mathbf{\\beta}_k := \\left[ \\mathbf{Q}^v_{k-1}\\right]^{-1}\\mathbf{b}^v_k.\n\\]\nTherefore,\n\\begin{align}\n \\MoveEqLeft \\mathbf{q}_k^{\\rm det} = \\mathbf{B}_k(\\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k)^{-1}\\mathbf{A}_k^{\\text{\\sf T}}\\mathbf{v}_k \\nonumber \\\\\n &= \\left[ \\mathbf{U}_k ~ \\mathbf{Q}_{k\\! - \\! 1} \\right]\n \\left[ \\begin{array}{c} \\mathbf{0} \\\\ \\mathbf{\\beta}_k \\end{array} \\right]\n +{\\boldsymbol \\xi} \\nonumber \\\\\n &= \\sum_{\\ell=0}^{k\\! - \\! 1} \\beta_{k\\ell} \\mathbf{q}_\\ell + {\\boldsymbol \\xi}, \\label{eq:qbetasum}\n\\end{align}\nwhere \\textb{${\\boldsymbol \\xi} \\in {\\mathbb{R}}^N$ is the error,\n\\begin{equation} \\label{eq:xisdef}\n {\\boldsymbol \\xi} = \\mathbf{B}_k \\mathbf{s}, \\quad\n \\mathbf{s} := (\\mathbf{A}^{\\text{\\sf T}}_k\\mathbf{A}_k)^{-1}\\mathbf{A}_k^{\\text{\\sf T}} \\mathbf{v}_k -\n \\left[ \\begin{array}{c} \\mathbf{0} \\\\ \\mathbf{\\beta}_k \\end{array} \\right].\n\\end{equation}\nWe next need to bound the norm of the error term ${\\boldsymbol \\xi}$.\nSince ${\\boldsymbol \\xi} = \\mathbf{B}_k\\mathbf{s}$, the definition of $\\mathbf{B}_k$ in \n\\eqref{eq:ABdef} shows that\n\\begin{equation} \\label{eq:xisum}\n {\\boldsymbol \\xi} = \\sum_{i=0}^k s_i \\mathbf{u}_i + \\sum_{j=0}^{k\\! - \\! 1} s_{k+j+1}\\mathbf{q}_j,\n\\end{equation}\nwhere we have indexed the components of $\\mathbf{s}$ in \\eqref{eq:xisdef}\nas $\\mathbf{s}=(s_0,\\ldots,s_{2k})$.\nFrom \\eqref{eq:Vsmult1}, the components $s_j \\rightarrow 0$ almost surely,\nand therefore\n\\[\n \\lim_{N \\rightarrow \\infty} \\max_{j=0,\\ldots,2k} |s_j| \\stackrel{a.s.}{=} 0.\n\\]\nAlso, by the induction hypothesis,\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\mathbf{u}_i\\|^2 \\stackrel{a.s.}{=} E(U_i^2),\n \\quad\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\mathbf{q}_j\\|^2 \\stackrel{a.s.}{=} E(Q_j^2).\n\\]\nTherefore, from \\eqref{eq:xisum},\n\\begin{align}\n \\MoveEqLeft \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\\|{\\boldsymbol \\xi}\\|^2\n \\leq\n \\lim_{N \\rightarrow \\infty} \n \\left[ \\max_{j=0,\\ldots,2k} |s_j|^2 \\right] \\nonumber \\\\\n & \\times \n \\frac{1}{N} \\left[ \\sum_i \\|\\mathbf{u}_i\\|^2 + \\sum_j \\|\\mathbf{q}_j\\|^2 \\right] \\stackrel{a.s.}{=} 0.\n \\label{eq:xilimbnd}\n\\end{align}\nTherefore, if $f(q_1,\\cdots,q_k)$ is\npseudo-Lipschitz continuous of order 2,\n\\begin{align*}\n \\MoveEqLeft \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\sum_{n=1}^N\n f(q_{0n},\\cdots,q_{k\\! - \\! 1,n},q^{\\rm det}_k) \\\\\n &\\stackrel{(a)}{=} \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\sum_{n=1}^N\n f\\left( q_{0n},\\cdots,q_{k\\! - \\! 1,n},\n \\sum_{\\ell=0}^{k\\! - \\! 1} \\beta_{k\\ell} q_{\\ell n} \\right)\\\\\n &\\stackrel{(b)}{=} \\mathbb{E}\\left[\n f\\left(Q_0,\\cdots,Q_{k\\! - \\! 1}, \\sum_{\\ell=0}^{k\\! - \\! 
1} \\beta_{k\\ell} Q_{\\ell}\n \\right) \\right],\n\\end{align*}\nwhere (a) follows from \\eqref{eq:qbetasum}, the bound\n\\eqref{eq:xilimbnd}, and the pseudo-Lipschitz continuity of $f(\\cdot)$;\nand (b) follows from the fact that $f(\\cdot)$ is pseudo-Lipschitz continuous\nand the induction hypothesis that\n\\[\n \\lim_{N \\rightarrow \\infty} \\{q_{0n},\\cdots,q_{k\\! - \\! 1,n}\\} \\stackrel{PL(2)}{=}\n (Q_0,\\ldots,Q_{k\\! - \\! 1}).\n\\]\nThis proves \\eqref{eq:qconvdet}.\n}\n\n\\end{proof}\n\n\n\\begin{lemma} \\label{lem:rhoconv}\nUnder the induction hypothesis $H_{k,k\\! - \\! 1}$, the following limit\nholds almost surely\n\\begin{equation}\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\| \\mathbf{U}_{\\mathbf{A}_k^\\perp}^{\\text{\\sf T}}\\mathbf{v}_k\\|^2 =\n \\rho_k,\n\\end{equation}\nfor some constant $\\rho_k \\geq 0$.\n\\end{lemma}\n\\begin{proof} From \\eqref{eq:ABdef}, the matrix $\\mathbf{A}_k$ has $s=2k+1$ columns.\nFrom Lemma~\\ref{lem:orthogLin},\n$\\mathbf{U}_{\\mathbf{A}_k^\\perp}$ is an $N \\times (N-s)$ matrix whose columns form an orthonormal basis of $\\mathrm{Range}(\\mathbf{A}_k)^\\perp$.\nHence, the energy $\\| \\mathbf{U}_{\\mathbf{A}_k^\\perp}^{\\text{\\sf T}}\\mathbf{v}_k\\|^2$ is precisely\n\\[\n \\| \\mathbf{U}_{\\mathbf{A}_k^\\perp}^{\\text{\\sf T}}\\mathbf{v}_k\\|^2 = \\mathbf{v}_k^{\\text{\\sf T}}\\mathbf{v}_k - \\mathbf{v}_k^{\\text{\\sf T}}\\mathbf{A}_k\n (\\mathbf{A}_k^{\\text{\\sf T}}\\mathbf{A}_k)^{-1}\\mathbf{A}_k^{\\text{\\sf T}}\\mathbf{v}_k.\n\\]\nUsing calculations similar to those in the previous lemma, we have\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\| \\mathbf{U}_{\\mathbf{A}_k^\\perp}^{\\text{\\sf T}}\\mathbf{v}_k\\|^2\n = \\mathbb{E}[V_k^2] - (\\mathbf{b}^v_k)^{\\text{\\sf T}}\\left[ \\mathbf{Q}^v_{k\\! - \\! 1} \\right]^{-1}\\mathbf{b}^v_k.\n\\]\nHence, the lemma is proven if we define $\\rho_k$ as the right hand side of this\nequation.\n\\end{proof}\n\n\\begin{lemma} \\label{lem:qconvran}\nUnder the induction hypothesis $H_{k,k\\! - \\! 1}$, the components of\nthe ``random\" part $\\mathbf{q}_k^{\\rm ran}$ along with the components\nof $(\\mathbf{w}^q,\\mathbf{q}_0,\\ldots,\\mathbf{q}_{k\\! - \\! 1})$\nalmost surely converge empirically as\n\\begin{align}\n \\MoveEqLeft \\lim_{N \\rightarrow \\infty}\n \\left\\{ (w^q_n,q_{0n},\\ldots,q_{k\\! - \\! 1,n},q_{kn}^{\\rm ran}) \\right\\}\n \\nonumber \\\\\n &\\stackrel{PL(2)}{=} (W^q,Q_0,\\ldots,Q_{k\\! - \\! 1},U_k), \\label{eq:qranlim}\n\\end{align}\nwhere $U_k \\sim {\\mathcal N}(0,\\rho_k)$ is a Gaussian random variable\nindependent of $(W^q,Q_0,\\ldots,Q_{k\\! - \\! 1})$ and $\\rho_k$ is the constant\nin Lemma~\\ref{lem:rhoconv}.\n\\end{lemma}\n\\begin{proof}\nThis is a direct application of Lemma~\\ref{lem:orthogGaussLim}.\nLet $\\mathbf{x}_k = \\tilde{\\mathbf{V}}^{\\text{\\sf T}}\\mathbf{U}_{\\mathbf{A}_k^\\perp}^{\\text{\\sf T}}\\mathbf{v}_k$ so that\n\\[\n \\mathbf{q}_k^{\\rm ran} = \\mathbf{U}_{\\mathbf{B}_k^\\perp}\\mathbf{x}_k.\n\\]\nFor each $N$, $\\mathbf{U}_{\\mathbf{B}_k^\\perp} \\in {\\mathbb{R}}^{N \\times (N-s)}$ is a matrix\nwith orthonormal columns spanning $\\mathrm{Range}(\\mathbf{B}_k)^\\perp$.\nAlso, since $\\tilde{\\mathbf{V}}$ is uniformly distributed on the set of\n$(N-s)\\times (N-s)$ orthogonal matrices, and independent of $G_k$,\nthe conditional distribution of $\\mathbf{x}_k$ given $G_k$ is orthogonally invariant in that\n\\[\n \\left. \\mathbf{U} \\mathbf{x}_k \\right|_{G_k} \\stackrel{d}{=} \\left. \\mathbf{x}_k \\right|_{G_k},\n\\]\nfor any orthogonal matrix $\\mathbf{U}$. 
Lemma~\\ref{lem:rhoconv} also shows that\n\\[\n \\lim_{N \\rightarrow \\infty} \\frac{1}{N} \\|\\mathbf{x}_k\\|^2 = \\rho_k,\n\\]\nalmost surely.\nThe limit \\eqref{eq:qranlim} now follows from\nLemma~\\ref{lem:orthogGaussLim}.\n\\end{proof}\n\nUsing the partition \\eqref{eq:qpart} and Lemmas~\\ref{lem:qconvdet} and \\ref{lem:qconvran},\nthe components of $(\\mathbf{w}^q,\\mathbf{q}_0,\\ldots,\\mathbf{q}_k)$\nalmost surely converge empirically as\n\\begin{align*}\n \\lefteqn{ \\lim_{N \\rightarrow \\infty} \\{ (w^q_n,q_{0n},\\ldots,q_{kn}) \\} }\\\\\n &\\stackrel{PL(2)}{=} \\lim_{N \\rightarrow \\infty} \\{ (w^q_n,q_{0n},\\ldots,q^{\\rm det}_{kn} + q^{\\rm ran}_{kn}) \\}\n \\\\\n &\\stackrel{PL(2)}{=} (W^q,Q_0,\\ldots,Q_k),\n\\end{align*}\nwhere $Q_k$ is the random variable\n\\[\n Q_k = \\beta_{k0}Q_0 + \\cdots + \\beta_{k,k\\! - \\! 1}Q_{k\\! - \\! 1} + U_k.\n\\]\nSince $(Q_0,\\ldots,Q_{k\\! - \\! 1})$ is jointly Gaussian and $U_k$ is Gaussian independent of\n$(Q_0,\\ldots,Q_{k\\! - \\! 1})$, we have that $(Q_0,\\ldots,Q_k)$ is Gaussian. This proves\n\\eqref{eq:Qconk}.\n\nNow the function $\\Gamma_1(\\gamma_1,\\alpha_1)$ is assumed to be\ncontinuous at $(\\overline{\\gamma}_{1k},\\overline{\\alpha}_{1k})$. Also, the induction hypothesis assumes that\n$\\alpha_{1k} \\rightarrow \\overline{\\alpha}_{1k}$ and $\\gamma_{1k} \\rightarrow \\overline{\\gamma}_{1k}$ almost surely.\nHence,\n\\begin{equation} \\label{eq:gam2limpf}\n \\lim_{N \\rightarrow \\infty} \\gamma_{2k} = \\lim_{N \\rightarrow \\infty} \\Gamma_1(\\gamma_{1k},\\alpha_{1k})\n = \\overline{\\gamma}_{2k}.\n\\end{equation}\nIn addition, since we have assumed that $\\mathbf{f}_q'(\\mathbf{q},\\mathbf{w}^q,\\gamma_2)$ is Lipschitz\ncontinuous in $(\\mathbf{q},\\mathbf{w}^q)$ and continuous in $\\gamma_2$,\n\\begin{align}\n \\lim_{N \\rightarrow \\infty} \\alpha_{2k}\n &= \\lim_{N \\rightarrow \\infty} \\bkt{\\mathbf{f}_q'(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k})} \\nonumber \\\\\n &= \\mathbb{E}\\left[ f_q'(Q_k,W^q,\\overline{\\gamma}_{2k}) \\right] = \\overline{\\alpha}_{2k}. 
\\label{eq:a2limpf}\n\\end{align}\nThe limits \\eqref{eq:gam2limpf} and \\eqref{eq:a2limpf} prove \\eqref{eq:ag2limgen}.\n\nFinally, we need to show that $\\mathbb{E}[Q_k^2] = \\tau_{2k}$, the variance given by the SE\nequations.\n\\begin{align}\n \\mathbb{E}[Q_k^2] &\\stackrel{(a)}{=} \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\n \\|\\mathbf{q}_k\\|^2 \\nonumber \\\\\n & \\stackrel{(b)}{=} \\lim_{N \\rightarrow \\infty} \\frac{1}{N}\n \\|\\mathbf{v}_k\\|^2 \\nonumber \\\\\n & \\stackrel{(c)}{=} \\mathbb{E}\\left[ g_p^2(P_k,W^p,\\overline{\\gamma}_{1k},\\overline{\\alpha}_{1k}) \\right] \\nonumber \\\\\n & \\stackrel{(d)}{=} C_1^2(\\overline{\\alpha}_{1k})\n \\mathbb{E}\\left[ \\left(f_p(P_k,W^p,\\overline{\\gamma}_{1k}) - \\overline{\\alpha}_{1k}P_k\\right)^2 \\right] \\nonumber \\\\\n & = C_1^2(\\overline{\\alpha}_{1k})\\Bigl\\{\n \\mathbb{E}\\left[ f_p^2(P_k,W^p,\\overline{\\gamma}_{1k})\\right] \\nonumber \\\\\n & \\quad - 2\\overline{\\alpha}_{1k}\n \\mathbb{E}\\left[ P_k f_p(P_k,W^p,\\overline{\\gamma}_{1k})\\right] + \\overline{\\alpha}^2_{1k}\\mathbb{E}\\left[ P_k^2 \\right]\n \\Bigr\\} \\nonumber \\\\\n & \\stackrel{(e)}{=} C_1^2(\\overline{\\alpha}_{1k})\\Bigl\\{\n \\mathbb{E}\\left[ f_p^2(P_k,W^p,\\overline{\\gamma}_{1k})\\right] \\nonumber \\\\\n & \\quad - 2\\overline{\\alpha}_{1k}\\tau_{1k}\n \\mathbb{E}\\left[ f_p'(P_k,W^p,\\overline{\\gamma}_{1k})\\right] + \\overline{\\alpha}^2_{1k}\\tau_{1k} \\Bigr\\}\n \\nonumber \\\\\n & \\stackrel{(f)}{=} C_1^2(\\overline{\\alpha}_{1k})\\left\\{\n \\mathbb{E}\\left[ f_p^2(P_k,W^p,\\overline{\\gamma}_{1k})\\right]- \\overline{\\alpha}_{1k}^2\\tau_{1k} \\right\\} \\nonumber \\\\\n & \\stackrel{(g)}{=} \\tau_{2k},\n\\end{align}\nwhere (a) follows from the fact that the components of $\\mathbf{q}_k$ converge empirically\nto $Q_k$;\n(b) follows from \\eqref{eq:qupgen} and the fact that $\\mathbf{V}$ is orthogonal;\n(c) follows from \\eqref{eq:Velldef};\n(d) follows from \\eqref{eq:gpdef};\n(e) follows from Stein's Lemma and the fact that $\\mathbb{E} [P_k^2] = \\tau_{1k}$;\n(f) follows from the definition of $\\overline{\\alpha}_{1k}$ in \\eqref{eq:a1segen};\nand (g) follows from \\eqref{eq:tau2segen}. Thus, $\\mathbb{E} [Q_k^2] = \\tau_{2k}$,\nand we have proven the implication $H_{k,k\\! - \\! 
1} \\Rightarrow H_{k,k}$.\n\n\\section{Proof of Theorem~\\ref{thm:se} }\n\nTheorem~\\ref{thm:se} is essentially a special case of Theorem~\\ref{thm:genConv}.\nWe need to simply rewrite the recursions in Algorithm~\\ref{algo:vamp} in the form\n\\eqref{eq:algoGen}.\nTo this end, define the error terms\n\\begin{equation} \\label{eq:pvslr}\n \\mathbf{p}_k := \\mathbf{r}_{1k}-\\mathbf{x}^0, \\quad\n \\mathbf{v}_k := \\mathbf{r}_{2k}-\\mathbf{x}^0,\n\\end{equation}\nand their transforms,\n\\begin{equation} \\label{eq:uqslr}\n \\mathbf{u}_k := \\mathbf{V}^{\\text{\\sf T}}\\mathbf{p}_k, \\quad\n \\mathbf{q}_k := \\mathbf{V}^{\\text{\\sf T}}\\mathbf{v}_k.\n\\end{equation}\nAlso, define the disturbance terms\n\\begin{equation} \\label{eq:wpqslr}\n \\mathbf{w}^q := ({\\boldsymbol \\xi},\\mathbf{s}), \\quad\n \\mathbf{w}^p := \\mathbf{x}^0, \\quad {\\boldsymbol \\xi} := \\mathbf{U}^{\\text{\\sf T}}\\mathbf{w},\n\\end{equation}\nand the componentwise update functions\n\\begin{subequations} \\label{eq:fqpslr}\n\\begin{align}\n f_q(q,(\\xi,s),\\gamma_2) &:= \\frac{\\gamma_w s\\xi + \\gamma_2 q}{\n \\gamma_w s^2 + \\gamma_2}, \\label{eq:fqslr} \\\\\n f_p(p,x^0,\\gamma_1) &= g_1(p+x^0,\\gamma_1) - x^0.\n \\label{eq:fpslr}\n\\end{align}\n\\end{subequations}\nWith these definitions, we claim that the outputs satisfy the recursions:\n\\begin{subequations} \\label{eq:gecslr}\n\\begin{align}\n \\mathbf{p}_k &= \\mathbf{V}\\mathbf{u}_k \\label{eq:pupslr} \\\\\n \\alpha_{1k} &= \\bkt{ \\mathbf{f}_p'(\\mathbf{p}_k,\\mathbf{x}^0,\\gamma_{1k})},\n \\quad \\gamma_{2k} = \\frac{(1-\\alpha_{1k})\\gamma_{1k}}{\\alpha_{1k}}\n \\label{eq:alpha1slr} \\\\\n \\mathbf{v}_k &= \\frac{1}{1-\\alpha_{1k}}\\left[\n \\mathbf{f}_p(\\mathbf{p}_k,\\mathbf{x}^0,\\gamma_{1k})- \\alpha_{1k} \\mathbf{p}_{k} \\right] \\label{eq:vupslr} \\\\\n \\mathbf{q}_k &= \\mathbf{V}^{\\text{\\sf T}}\\mathbf{v}_k \\label{eq:qupslr} \\\\\n \\alpha_{2k} &= \\bkt{ \\mathbf{f}_q'(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k})},\n \\quad \\gamma_{1,k\\! + \\! 1} = \\frac{(1-\\alpha_{2k})\\gamma_{2k}}{\\alpha_{2k}}\n \\label{eq:alpha2slr} \\\\\n \\mathbf{u}_{k\\! + \\! 1} &= \\frac{1}{1-\\alpha_{2k}}\\left[\n \\mathbf{f}_q(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k}) - \\alpha_{2k}\\mathbf{q}_{k} \\right] \\label{eq:uupslr}\n\\end{align}\n\\end{subequations}\nBefore we prove \\eqref{eq:gecslr}, we can see that \\eqref{eq:gecslr} is a special\ncase of the general recursions in \\eqref{eq:algoGen} if we define\n\\[\n C_i(\\alpha_i) = \\frac{1}{1-\\alpha_i}, \\quad \\Gamma_i(\\gamma_i,\\alpha_i) =\n \\gamma_i\\left[\\frac{1}{\\alpha_i}-1 \\right].\n\\]\nIt is also straightforward to verify the continuity assumptions in Theorem~\\ref{thm:genConv}.\nThe assumption of Theorem~\\ref{thm:se} states that $\\overline{\\alpha}_{ik} \\in (0,1)$. Since\n$\\overline{\\gamma}_{10} > 0$, $\\overline{\\gamma}_{ik} > 0$ for all $k$ and $i$. Therefore,\n$C_i(\\alpha_i)$ and $\\Gamma_i(\\gamma_i,\\alpha_i)$ are continuous at all points\n$(\\gamma_i,\\alpha_i) = (\\overline{\\gamma}_{ik},\\overline{\\alpha}_{ik})$.\nAlso, since $s \\in [0,S_{max}]$ and $\\gamma_{2k} > 0$ for all $k$, the function\n$f_q(q,(\\xi,s),\\gamma_2)$ in \\eqref{eq:fqpslr} is uniformly Lipschitz continuous\nin $(q,\\xi,s)$ at all $\\gamma_2 = \\overline{\\gamma}_{2k}$.\nSimilarly, since the denoiser function $g_1(r_1,\\gamma_1)$ is assumed be to uniformly\nLipschitz continuous in $r_1$ at all $\\gamma_1 = \\overline{\\gamma}_{1k}$, so is the function\n$f_p(r_1,x^0,\\gamma_1)$ in \\eqref{eq:fpslr}. 
Hence all the conditions of Theorem~\\ref{thm:genConv}\nare satisfied. The SE equations \\eqref{eq:se} follow immediately\nfrom the general SE equations \\eqref{eq:segen}. In addition, the limits\n\\eqref{eq:limrx1} and \\eqref{eq:limqxi} are special cases of the limits\n\\eqref{eq:Pconk} and \\eqref{eq:Qconk}. This proves Theorem~\\ref{thm:se}.\n\nSo, it remains only to show that the updates in\n\\eqref{eq:gecslr} indeed hold.\nEquations \\eqref{eq:pupslr} and \\eqref{eq:qupslr} follow immediately\nfrom the definitions \\eqref{eq:pvslr} and \\eqref{eq:uqslr}.\nNext, observe that we can rewrite the\nLMMSE estimation function \\eqref{eq:g2slr} as\n\\begin{align}\n \\lefteqn{ \\mathbf{g}_2(\\mathbf{r}_{2k},\\gamma_{2k}) }\\nonumber\\\\\n &\\stackrel{(a)}{=} \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_{2k}\\mathbf{I}\\right)^{-1}\n \\left( \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{A}\\mathbf{x}^0 + \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{w}\n + \\gamma_{2k}\\mathbf{r}_{2k} \\right) \\nonumber \\\\\n &\\stackrel{(b)}{=} \\mathbf{x}^0 +\n \\left( \\gamma_w \\mathbf{A}^{\\text{\\sf T}}\\mathbf{A} + \\gamma_{2k}\\mathbf{I}\\right)^{-1}\n \\left( \\gamma_{2k}(\\mathbf{r}_{2k}-\\mathbf{x}^0)+ \\gamma_w\\mathbf{A}^{\\text{\\sf T}}\\mathbf{w}\\right) \\nonumber \\\\\n &\\stackrel{(c)}{=} \\mathbf{x}^0 +\n \\mathbf{V}\\left( \\gamma_w \\mathbf{S}^2 + \\gamma_{2k}\\mathbf{I}\\right)^{-1}\n \\left( \\gamma_{2k}\\mathbf{q}_k + \\gamma_w \\mathbf{S}{\\boldsymbol \\xi} \\right), \\nonumber \\\\\n &\\stackrel{(d)}{=} \\mathbf{x}^0 + \\mathbf{V} \\mathbf{f}_q(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k}), \\label{eq:g2slrV}\n\\end{align}\nwhere (a) follows by substituting \\eqref{eq:yAxslr} into \\eqref{eq:g2slr};\n(b) is a simple algebraic manipulation;\n(c) follows from the SVD definition \\eqref{eq:ASVD} and the definitions of\n${\\boldsymbol \\xi}$ in \\eqref{eq:wpqslr} and $\\mathbf{q}_k$ in \\eqref{eq:uqslr}; and\n(d) follows from the definition of the componentwise function\n$f_q(\\cdot)$ in \\eqref{eq:fqslr}. Therefore, the divergence $\\alpha_{2k}$ satisfies\n\\begin{align}\n \\alpha_{2k}\n &\\stackrel{(a)}{=}\n \\frac{1}{N}\n \\mathrm{Tr}\\left[ \\frac{\\partial \\mathbf{g}_2(\\mathbf{r}_{2k},\\gamma_{2k})}{\\partial \\mathbf{r}_{2k}} \\right]\n \\nonumber \\\\\n & \\stackrel{(b)}{=} \\frac{1}{N}\n \\mathrm{Tr}\\left[ \\mathbf{V} \\mathrm{Diag}(\\mathbf{f}_q'(\\mathbf{q}_{k},\\mathbf{w}^q,\\gamma_{2k}))\n \\frac{\\partial \\mathbf{q}_k}{\\partial \\mathbf{r}_{2k}} \\right]\n \\nonumber \\\\\n & \\stackrel{(c)}{=} \\frac{1}{N}\n \\mathrm{Tr}\\left[ \\mathbf{V} \\mathrm{Diag}(\\mathbf{f}_q'(\\mathbf{q}_{k},\\mathbf{w}^q,\\gamma_{2k})) \\mathbf{V}^{\\text{\\sf T}} \\right] \\nonumber \\\\\n & \\stackrel{(d)}{=} \\bkt{ \\mathbf{f}_q'(\\mathbf{q}_{k},\\mathbf{w}^q,\\gamma_{2k}) },\n \\label{eq:a2pf}\n\\end{align}\nwhere\n(a) follows from line~\\ref{line:a2} of Algorithm~\\ref{algo:vamp} and \\eqref{eq:jacobian}--\\eqref{eq:bkt};\n(b) follows from \\eqref{eq:g2slrV}; (c) follows from \\eqref{eq:uqslr}; and\n(d) follows from $\\mathbf{V}^{\\text{\\sf T}}\\mathbf{V}=\\mathbf{I}$ and \\eqref{eq:jacobian}--\\eqref{eq:bkt}.\nAlso, from lines~\\ref{line:eta2}-\\ref{line:gam1} of Algorithm~\\ref{algo:vamp},\n\\begin{equation} \\label{eq:g2upa}\n \\gamma_{1,k\\! + \\!
1} = \\eta_{2k}-\\gamma_{2k} = \\gamma_{2k}\\left[ \\frac{1}{\\alpha_{2k}}-1 \\right].\n\\end{equation}\nEquations \\eqref{eq:a2pf} and \\eqref{eq:g2upa} prove \\eqref{eq:alpha2slr}.\nIn addition,\n\\begin{align}\n \\lefteqn{ \\mathbf{p}_{k\\! + \\! 1} \\stackrel{(a)}{=} \\mathbf{r}_{1,k\\! + \\! 1} - \\mathbf{x}^0 }\\nonumber\\\\\n &\\stackrel{(b)}{=} \\frac{1}{1-\\alpha_{2k}}\\left[\n \\mathbf{g}_2(\\mathbf{r}_{2k},\\gamma_{2k}) - \\alpha_{2k}\\mathbf{r}_{2k} \\right]\n -\\mathbf{x}^0 \\nonumber \\\\\n &\\stackrel{(c)}{=} \\frac{1}{1-\\alpha_{2k}}\\left[\n \\mathbf{x}^0 + \\mathbf{V} \\mathbf{f}_q(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k})\n - \\alpha_{2k}(\\mathbf{x}^0 + \\mathbf{v}_{k}) \\right]\n -\\mathbf{x}^0 \\nonumber \\\\\n &\\stackrel{(d)}{=} \\frac{1}{1-\\alpha_{2k}}\\left[\n \\mathbf{V} \\mathbf{f}_q(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k})\n - \\alpha_{2k}\\mathbf{v}_{k} \\right] \\nonumber \\\\\n &\\stackrel{(e)}{=} \\mathbf{V} \\left[ \\frac{1}{1-\\alpha_{2k}}\\left[\n \\mathbf{f}_q(\\mathbf{q}_k,\\mathbf{w}^q,\\gamma_{2k}) - \\alpha_{2k}\\mathbf{q}_{k} \\right] \\right]\n \\label{eq:pupslr2} ,\n\\end{align}\nwhere (a) follows from \\eqref{eq:pvslr};\n(b) follows from\nlines~\\ref{line:x2}-\\ref{line:r1} of Algorithm~\\ref{algo:vamp};\n(c) follows from \\eqref{eq:g2slrV} and the definition of $\\mathbf{v}_k$ in \\eqref{eq:pvslr};\n(d) follows from collecting the terms with $\\mathbf{x}^0$;\nand (e) follows from the definition\n$\\mathbf{q}_k=\\mathbf{V}^{\\text{\\sf T}}\\mathbf{v}_k$ in \\eqref{eq:uqslr}.\nCombining \\eqref{eq:pupslr2} with\n$\\mathbf{u}_{k\\! + \\! 1} = \\mathbf{V}^{\\text{\\sf T}}\\mathbf{p}_{k\\! + \\! 1}$ proves \\eqref{eq:uupslr}.\n\nThe derivation of the updates for $\\mathbf{v}_k$ is similar. First,\n\\begin{align}\n \\MoveEqLeft \\alpha_{1k} \\stackrel{(a)}{=}\n \\bkt{ \\mathbf{g}_1'(\\mathbf{r}_{1k},\\gamma_{1k}) }\n \\stackrel{(b)}{=} \\bkt{ \\mathbf{f}_p'(\\mathbf{p}_{k},\\mathbf{x}^0,\\gamma_{1k}) },\n \\label{eq:a1pf}\n\\end{align}\nwhere (a) follows from line~\\ref{line:a1} of Algorithm~\\ref{algo:vamp}\nand (b) follows from the vectorization\nof $\\mathbf{f}_p(\\cdot)$ in \\eqref{eq:fpslr} and the fact that $\\mathbf{r}_{1k}=\\mathbf{p}_k+\\mathbf{x}^0$.\nAlso, from lines~\\ref{line:eta1}-\\ref{line:gam2} of Algorithm~\\ref{algo:vamp},\n\\begin{equation} \\label{eq:g1upa}\n \\gamma_{2k} = \\eta_{1k}-\\gamma_{1k} = \\gamma_{1k}\\left[ \\frac{1}{\\alpha_{1k}}-1 \\right].\n\\end{equation}\nEquations \\eqref{eq:a1pf} and \\eqref{eq:g1upa} prove \\eqref{eq:alpha1slr}.\nAlso,\n\\begin{align}\n \\lefteqn{ \\mathbf{v}_{k} \\stackrel{(a)}{=} \\mathbf{r}_{2k} - \\mathbf{x}^0 } \\nonumber \\\\\n &\\stackrel{(b)}{=} \\frac{1}{1-\\alpha_{1k}}\\left[\n \\mathbf{g}_1(\\mathbf{r}_{1k},\\gamma_{1k}) - \\alpha_{1k}\\mathbf{r}_{1k} \\right] -\\mathbf{x}^0 \\nonumber \\\\\n &\\stackrel{(c)}{=} \\frac{1}{1-\\alpha_{1k}}\\left[\n \\mathbf{f}_p(\\mathbf{p}_{k},\\mathbf{x}^0,\\gamma_{1k}) +\\mathbf{x}^0 - \\alpha_{1k}(\\mathbf{p}_{k}+\\mathbf{x}^0) \\right]\n -\\mathbf{x}^0 \\nonumber \\\\\n &\\stackrel{(d)}{=} \\frac{1}{1-\\alpha_{1k}}\\left[\n \\mathbf{f}_p(\\mathbf{p}_{k},\\mathbf{x}^0,\\gamma_{1k}) - \\alpha_{1k}\\mathbf{p}_{k}\\right],\n\\end{align}\nwhere (a) is the definition of $\\mathbf{v}_k$ in \\eqref{eq:pvslr};\n(b) follows from lines~\\ref{line:x1}-\\ref{line:r2} of Algorithm~\\ref{algo:vamp};\n(c) follows from the vectorization of $f_p(\\cdot)$ in \\eqref{eq:fpslr} and the definition of $\\mathbf{p}_k$ in \\eqref{eq:pvslr};\nand (d) follows from collecting the
terms with $\\mathbf{x}^0$.\nThis proves \\eqref{eq:vupslr}. All together, we have proven \\eqref{eq:gecslr} and\nthe proof is complete.\n\n\\section{Proof of Theorem~\\ref{thm:seMmse}} \\label{sec:seMmsePf}\nWe use induction. Suppose that, for some $k$,\n$\\overline{\\gamma}_{1k} = \\tau_{1k}^{-1}$. From \\eqref{eq:a1se}, \\eqref{eq:A1match}\nand \\eqref{eq:E1match},\n\\begin{equation} \\label{eq:a1matchpf}\n \\overline{\\alpha}_{1k} = \\overline{\\gamma}_{1k}{\\mathcal E}_1(\\overline{\\gamma}_{1k}).\n\\end{equation}\nHence, from \\eqref{eq:eta1se}, $\\overline{\\eta}_{1k}^{-1} = {\\mathcal E}_1(\\overline{\\gamma}_{1k})$ and\n$\\overline{\\gamma}_{2k} = \\overline{\\eta}_{1k} - \\overline{\\gamma}_{1k}$. Also,\n\\begin{align*}\n \\tau_{2k}\n &\\stackrel{(a)}{=} \\frac{1}{(1-\\overline{\\alpha}_{1k})^2}\\left[\n {\\mathcal E}_1(\\overline{\\gamma}_{1k},\\tau_{1k}) - \\overline{\\alpha}_{1k}^2\\tau_{1k} \\right] \\\\\n &\\stackrel{(b)}{=} \\frac{1}{(1-\\overline{\\gamma}_{1k}{\\mathcal E}_1(\\overline{\\gamma}_{1k}))^2}\\left[\n {\\mathcal E}_1(\\overline{\\gamma}_{1k},\\tau_{1k}) - \\overline{\\gamma}_{1k}{\\mathcal E}_1^2(\\overline{\\gamma}_{1k}) \\right] \\\\\n &\\stackrel{(c)}{=} \\frac{{\\mathcal E}_1(\\overline{\\gamma}_{1k},\\tau_{1k})}\n {1-\\overline{\\gamma}_{1k}{\\mathcal E}_1(\\overline{\\gamma}_{1k})} \\\\\n &\\stackrel{(d)}{=} \\frac{1}{\\overline{\\eta}_{1k} - \\overline{\\gamma}_{1k}},\n\\end{align*}\nwhere (a) follows from \\eqref{eq:tau2se};\n(b) follows from \\eqref{eq:a1matchpf} and the matched condition $\\overline{\\gamma}_{1k} = \\tau_{1k}^{-1}$;\n(c) follows from canceling terms in the fraction and (d) follows from the fact that\n$\\overline{\\eta}_{1k}^{-1} = {\\mathcal E}_1(\\overline{\\gamma}_{1k})$ and $\\overline{\\gamma}_{1k} = \\overline{\\eta}_{1k}\/\\overline{\\alpha}_{1k}$.\nThis proves \\eqref{eq:eta1sematch}. A similar argument shows that \\eqref{eq:eta2sematch} holds if\n$\\overline{\\gamma}_{2k} = \\tau_{2k}^{-1}$. Finally, \\eqref{eq:etammse} follows from\n\\eqref{eq:sematch} and \\eqref{eq:mseEcal}.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA broad class of both Bayesian \n\\citep{neal, williams1997, hazan2015steps, lee2018deep, matthews2018, matthews2018b_arxiv, Borovykh2018, garriga2018deep, novak2018bayesian, yang2017mean, yang2018a, pretorius2019expected, yang2019scaling, yang2019wide, neuraltangents2020, hron2020, Hu2020InfinitelyWG} and gradient descent trained \\citep{Jacot2018ntk, li2018learning, allen2018convergence, du2018gradient, du2018gradienta, zou2018stochastic, lee2019wide, chizat2019lazy, arora2019on, sohl2020infinite, Huang2020OnTN, du2019graph, yang2019scaling, yang2019wide, neuraltangents2020, hron2020} neural networks converge to Gaussian Processes (GPs) or closely-related kernel methods as their intermediate layers are made infinitely wide. \nThe predictions of these infinite width networks are described by the Neural Network Gaussian Process (NNGP) \\citep{lee2018deep,matthews2018} kernel for Bayesian networks, and by the Neural Tangent Kernel (NTK) \\citep{Jacot2018ntk} and weight space linearization \\citep{lee2019wide,chizat2019lazy} for gradient descent trained networks. 
\n\nThis correspondence has been key to recent breakthroughs in our understanding of neural networks \\citep{xiao18a, valle-perez2018deep, wei2019regularization, xiao2019disentangling, NIPS2019_9449, ben2019role, yang2019fine, ober2020global, Hu2020Provable, lewkowycz2020large, lewkowycz2020training}. \nIt has also enabled practical advances in kernel methods \\citep{garriga2018deep, novak2018bayesian, arora2019on, li2019enhanced, Arora2020Harnessing, Shankar2020NeuralKW, neuraltangents2020, hron2020}, Bayesian deep learning \\citep{wang2018function, Cheng_2019_CVPR, Carvalho2020ScalableUF}, active learning \\citep{ijcai2019-499}, and semi-supervised learning \\citep{Hu2020InfinitelyWG}.\nThe NNGP, NTK, and related large width limits \\citep{cho2009kernel, daniely2016toward, poole2016exponential, chen2018rnn, li2018on, daniely2017sgd, pretorius2018critical, hayou2018selection, karakida2018universal, blumenfeld2019mean, hayou2019meanfield, schoenholz2016deep, pennington2017resurrecting, xiao18a, yang2017mean, geiger2019disentangling, geiger2020scaling, antognini2019finite, Dyer2020Asymptotics, huang2019dynamics, yaida2019non} are unique in giving an exact theoretical description of large scale neural networks.\nBecause of this, we believe they will continue to play a transformative role in deep learning theory.\n\nInfinite networks are a newly active field, and \nfoundational\nempirical questions remain unanswered. \nIn this work, we perform an extensive and in-depth empirical study of finite and infinite width neural networks. \nIn so doing, we provide quantitative answers to questions about the factors of variation that drive performance in finite networks and kernel methods, uncover surprising new behaviors, and develop best practices that improve the performance of both finite and infinite width networks.\nWe believe our results will both ground and motivate future work in wide networks.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiment design}\\label{sec:experimental_design}\n\n\n\\definecolor{ntk_param}{RGB}{255, 140, 0}\n\\definecolor{standard_param}{RGB}{65, 105, 225}\n\n\n\\vspace{-0.15cm}\\begin{figure}[t]\n\\hfill\\hfill Finite gradient descent (GD) \\hfill \\quad\\quad Infinite GD \\quad Infinite Bayesian\n\\vspace{-0.25cm}\n\\centering\n\\includegraphics[width=1.02\\columnwidth]{figures\/main_plot_nonlinear_split.pdf}\n\\caption{\n\n\\textbf{CIFAR-10 test accuracy for finite and infinite networks and their variations}. Starting from the \nfinite width %\n\\texttt{base} network of given architecture class described in \\sref{sec:experimental_design}, performance changes from \\textbf{centering} (\\texttt{+C}), \\textbf{large learning rate} (\\texttt{+LR}), allowing \\textbf{underfitting} by early stopping (\\texttt{+U}), input preprocessing with \\textbf{ZCA regularization} (\\texttt{+ZCA}), multiple initialization \\textbf{ensembling} (\\texttt{+Ens}), and some combinations are shown, for {\\color{standard_param}\\textbf{Standard}} and {\\color{ntk_param}\\textbf{NTK}} parameterizations. \nThe performance of the \\textbf{linearized} (\\texttt{lin}) base network is also shown. 
\nSee \\Tabref{tab:main-table} for precise values for each of these experiments, as well as for additional experimental conditions not shown here.}\n\\label{fig:tricks_vs_accuracy}\n\\end{figure}\n\n\nTo systematically develop a phenomenology of infinite and finite neural networks, we first establish base cases for each architecture where infinite-width kernel methods, linearized weight-space networks, and nonlinear gradient descent based training can be directly compared. In the finite-width settings, the base case uses \nmini-batch gradient descent at a constant small learning rate~\\cite{lee2019wide} with MSE loss (implementation details in~\\sref{app:batch-size}). In the kernel-learning setting we compute the NNGP and NTK for the entire dataset and do exact inference as described in~\\cite[page 16]{rasmussen2006gaussian}. Once this one-to-one comparison has been established, we augment the base setting with a wide range of interventions. We discuss each of these interventions in detail below. Some interventions will approximately preserve the correspondence (for example, data augmentation), while others explicitly break the correspondence in a way that has been hypothesized in the literature to affect performance (for example, large learning rates~\\cite{lewkowycz2020large}). \nWe additionally explore linearizing the base model around its initialization, in which case its training dynamics become exactly described by a constant kernel. This differs from the kernel setting described above due to finite width effects.\n\nWe use MSE loss to allow for easier comparison to kernel methods, whose predictions can be evaluated in closed form for MSE. \nSee \\Tabref{tab:xent-vs-mse} and \\Figref{fig:xent-vs-mse} for a comparison of MSE to softmax-cross-entropy loss. \nSoftmax-cross-entropy provides a consistent small benefit over MSE, and will be interesting to consider in future work.\n\nArchitectures we work with are built from either Fully-Connected (\\texttt{FCN})\nor Convolutional (\\texttt{CNN}) layers. In all cases we use ReLU nonlinearities with critical initialization with small bias variance ($\\sigma_w^2=2.0, \\sigma_b^2=0.01$). Except if otherwise stated, we consider \\texttt{FCN}s with 3-layers of width 2048 and \\texttt{CNN}s with 8-layers of 512 channels per layer. For convolutional networks we must collapse the spatial dimensions of image-shaped data before the final readout layer. To do this we either: flatten the image into a one-dimensional vector (\\texttt{VEC}) or apply global average pooling to the spatial dimensions (\\texttt{GAP}). \nFinally, we compare two ways of parameterizing the weights and biases of the network: the standard parameterization (STD), which is used in work on finite-width networks, and the NTK parameterization (NTK) which has been \nused in \nmost infinite-width studies to date (see~\\cite{sohl2020infinite} for the standard parameterization at infinite width).\n\nExcept where noted, for all kernel experiments we optimize over diagonal kernel regularization\nindependently for each experiment.\nFor finite width networks, except where noted we use a small learning rate corresponding to the base case. See \\sref{app hyperparameters} for details.\n\nThe experiments described in this paper are often very compute intensive. For example, to compute the NTK or NNGP for the entirety of CIFAR-10 for \\texttt{CNN-GAP} architectures one must explicitly evaluate the entries in a $6\\times10^7$-by-$6\\times10^7$ \nkernel matrix. 
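As a concrete illustration (a minimal sketch, not the exact experimental pipeline), the infinite-width kernels of the 8-layer \texttt{CNN-GAP} model can be constructed with the Neural Tangents \texttt{stax} API and accumulated over small batches of input pairs; \texttt{x\_batch} below is a random stand-in for CIFAR-10 images, and minor argument names may differ between library versions.
\begin{verbatim}
import numpy as np
import neural_tangents as nt
from neural_tangents import stax

# 8-layer ReLU CNN with a global average pooling readout,
# sigma_w^2 = 2.0 and sigma_b^2 = 0.01 (W_std = sqrt(2), b_std = 0.1).
layers = []
for _ in range(8):
    layers += [stax.Conv(512, (3, 3), padding='SAME',
                         W_std=2.0 ** 0.5, b_std=0.1),
               stax.Relu()]
init_fn, apply_fn, kernel_fn = stax.serial(
    *layers, stax.GlobalAvgPool(), stax.Dense(10, W_std=2.0 ** 0.5, b_std=0.1))

# Tile the kernel computation over batches of inputs, since the full
# CNN-GAP kernel cannot be evaluated in memory in a single call.
kernel_fn = nt.batch(kernel_fn, batch_size=10)
x_batch = np.random.RandomState(0).randn(100, 32, 32, 3)  # stand-in images
k = kernel_fn(x_batch, None, ('nngp', 'ntk'))
\end{verbatim}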
Typically this takes around 1200 GPU hours with double precision, and so we implement our experiments via massively distributed compute infrastructure based on beam~\\cite{beam}. All experiments use the Neural Tangents library~\\cite{neuraltangents2020}, built on top of JAX~\\cite{jax2018github}. \n\nTo be as systematic as possible while also tractable given this large computational requirement, we evaluated every intervention for every architecture and focused on a single dataset, CIFAR-10~\\cite{krizhevsky2009learning}. However, to ensure robustness of our results across dataset, we evaluate several key claims on CIFAR-100 and Fashion-MNIST~\\cite{xiao2017\/online}. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Observed empirical phenomena}\n\n\n\n\\subsection{NNGP\/NTK can outperform finite networks}\n\\label{sec:infinite-vs-finite}\nA common \nassumption\nin the study of infinite networks is that they underperform the corresponding finite network in the large data regime.\nWe carefully examine this assumption, by comparing kernel methods against the base case of a finite width architecture trained with small learning rate and no regularization (\\sref{sec:experimental_design}), and then individually examining the effects of common training practices which break (large LR, L2 regularization) or improve (ensembling) the infinite width correspondence to kernel methods.\nThe results of these experiments are \nsummarized in \\Figref{fig:tricks_vs_accuracy} and \\Tabref{tab:main-table}.\n\nFirst focusing on base finite networks, we observe that infinite \\texttt{FCN} and \\texttt{CNN-VEC} outperform their respective finite networks. On the other hand, infinite \\texttt{CNN-GAP} networks perform worse than their finite-width counterparts in the base case, consistent with observations in~\\citet{arora2019on}. We emphasize that architecture plays a key role in relative performance, in line with an observation made in \\citet{geiger2019disentangling} in the study of lazy training. For example, infinite-\\texttt{FCN}s outperform finite-width networks even when combined with various tricks such as high learning rate, L2, and underfitting. Here the performance becomes similar only after ensembling (\\sref{sec:ensemble_of_networks}). \n\nOne interesting observation is that ZCA regularization preprocessing (\\sref{sec:zca}) can provide significant improvements to the \\texttt{CNN-GAP} kernel, closing the gap to within 1-2\\%. \n\n\\subsection{NNGP typically outperforms NTK}\n\\label{sec:nngp-vs-ntk}\nRecent %\nevaluations of infinite width networks have put significant emphasis on the NTK, without explicit comparison against the respective NNGP models \\citep{arora2019on, li2019enhanced, du2019graph, Arora2020Harnessing}. Combined with the view of NNGPs as ``weakly-trained'' \\citep{lee2019wide, arora2019on} (i.e. having only the last layer learned), one might expect NTK to be a more effective model class than NNGP.\nOn the contrary, we usually\nobserve that NNGP inference achieves better performance. This can be seen in \\Tabref{tab:main-table} where SOTA performance among fixed kernels is attained with the NNGP across all architectures. \nIn \\Figref{fig:nngp-vs-ntk} we show that this trend persists across CIFAR-10, CIFAR-100, and Fashion-MNIST (see~\\Figref{fig:nngp-vs-ntk-uci} for similar trends on UCI regression tasks). 
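For reference, the exact inference used for both kernels is ordinary kernel (ridge) regression on the mean-subtracted one-hot targets; a minimal NumPy sketch, where \texttt{k\_tt} and \texttt{k\_xt} are placeholders for the train--train and test--train blocks of either the NNGP or the NTK, and the diagonal regularizer is parameterized as in \sref{sec:diag-reg}:
\begin{verbatim}
import numpy as np

def kernel_predict(k_tt, k_xt, y_train, eps=1e-3):
    # Posterior-mean / kernel-regression prediction with the trace-scaled
    # diagonal regularizer K_reg = K + eps * tr(K)/m * I used in this work.
    m = k_tt.shape[0]
    k_reg = k_tt + eps * np.trace(k_tt) / m * np.eye(m)
    return k_xt @ np.linalg.solve(k_reg, y_train)
\end{verbatim}
The identical routine is applied to the NNGP and to the NTK matrix, with $\varepsilon$ tuned independently for each; only the kernel changes between the two sides of the comparison.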
\nIn addition to producing stronger models, NNGP kernels require about half the memory and compute as the corresponding NTK, \nand some of the most performant kernels do not have an associated NTK at all~\\cite{Shankar2020NeuralKW}. Together these results suggest that when approaching a new problem where the goal is to maximize performance, \npractitioners should start with the NNGP.\n\nWe emphasize that both tuning of the diagonal regularizer (\\Figref{fig reg compare}) and sufficient numerical precision (\\sref{sec:diag-reg}, \\Figref{app kernel spectra}) were crucial to achieving an accurate comparison of these kernels.\n\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/nngp_vs_ntk.pdf}\n\\caption{\n\n\\textbf{NNGP often outperforms NTK in image classification tasks when diagonal regularization is carefully tuned.} \nThe performance of the NNGP and NT kernels are plotted against each other \nfor a variety of data pre-processing configurations (\\sref{sec:zca}),\nwhile regularization (\\Figref{fig reg compare}) is independently tuned for each.\n}\n\\label{fig:nngp-vs-ntk}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/ensemble_valid.pdf}\n\\caption{\n\n\\textbf{Centering can accelerate training and improve performance}. Validation accuracy throughout training for several finite width architectures. See \\Figref{fig:training_curves} for training accuracy. \n}\n\\label{fig:validation_curves}\n\\end{figure}\n\n\n\\subsection{Centering and ensembling finite networks both lead to kernel-like performance}\n\\label{sec:ensemble_of_networks}\nFor overparameterized neural networks, some randomness from the initial parameters persists throughout training and the resulting learned functions are themselves random. This excess variance in the network's predictions generically increases the total test error through the variance term of the bias-variance decomposition. For infinite-width kernel systems this variance is eliminated by using the mean predictor. For finite-width models, the variance can be large, and test performance can be significantly improved by \\emph{ensembling} a collection of models~\\cite{geiger2019disentangling, geiger2020scaling}. %\nIn \\Figref{fig:ensemble}, we examine the effect of ensembling. For \\texttt{FCN}, ensembling closes the gap with kernel methods, suggesting that finite width \\texttt{FCN}s underperform \\texttt{FCN} kernels primarily due to variance.\nFor \\texttt{CNN} models, ensembling also improves test performance, and ensembled \\texttt{CNN-GAP} models significantly outperform the best kernel methods. \nThe observation that ensembles of finite width \\texttt{CNN}s can outperform infinite width networks while ensembles of finite \\texttt{FCN}s cannot (see \\Figref{fig:ensemble}) is consistent with earlier findings in~\\cite{geiger2020scaling}.\n\nPrediction variance can also be reduced by \\emph{centering} the model, i.e. subtracting the model's initial predictions: $f_\\text{centered}(t) = f(\\theta(t)) - f(\\theta(0))$. A similar variance reduction technique has been studied in~\\cite{chizat2019lazy, zhang2019type, hu2020Simple, bai2020Beyond}. 
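Both interventions are simple to state in code; a schematic sketch, in which \texttt{apply\_fn}, \texttt{params}, \texttt{params\_init}, and \texttt{params\_list} are placeholders for the network forward pass, its trained parameters, its parameters at initialization, and a collection of independently trained parameter sets:
\begin{verbatim}
import numpy as np

def centered_apply(apply_fn, params, params_init, x):
    # f_centered(t) = f(theta(t)) - f(theta(0)):
    # subtract the model's predictions at initialization.
    return apply_fn(params, x) - apply_fn(params_init, x)

def ensemble_predict(apply_fn, params_list, x):
    # Ensemble by averaging logits over multiple initializations.
    return np.mean([apply_fn(p, x) for p in params_list], axis=0)
\end{verbatim}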
\nIn \\Figref{fig:validation_curves}, we observe that centering significantly speeds up training and improves generalization for \\texttt{FCN} and \\texttt{CNN-VEC} models, but has little-to-no effect on \\texttt{CNN-GAP} architectures.\nWe observe that the scale of the posterior variance of \\texttt{CNN-GAP}, in the infinite-width kernel, is small relative to the prior variance given more data, consistent with centering and ensembles having small effect.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/ensemble_network_performance.pdf}\n\\caption{\n\n\\textbf{Ensembling base networks enables them to match the performance of kernel methods, and exceed kernel performance for nonlinear \\texttt{CNN}s.} See \\Figref{fig app ensemble} for test MSE.\n}\n\\label{fig:ensemble}\n\\end{figure}\n\n\n\\subsection{Large LRs and L2 regularization drive differences between finite networks and kernels}\\label{sec:l2_lr}\nIn practice, L2 regularization (a.k.a. weight decay) or larger learning rates can break the correspondence between kernel methods and finite width neural network training even at large widths. \n\n\\citet{lee2019wide} \nderives \na critical learning rate $\\eta_{\\text{critical}}$ such that wide network training dynamics are equivalent to linearized training for $\\eta< \\eta_{\\text{critical}}$.\n\\citet{lewkowycz2020large} argues that even at large width a learning rate $\\eta \\in (\\eta_{\\text{critical}}, c\\cdot\\eta_{\\text{critical}})$ for a constant $c>1$ forces the network to move away from its initial high curvature minimum and converge to a lower curvature minimum, while \\citet{li2019towards} argues that large initial learning rates enable networks to learn `hard-to-generalize' patterns.\n\nIn \\Figref{fig:tricks_vs_accuracy} (and \\Tabref{tab:main-table}), we observe that the effectiveness of a large learning rate (LR) is highly sensitive to both architecture and parameterization: LR improves performance of \\texttt{FCN} and \\texttt{CNN-GAP} by about $ 1\\%$ for STD parameterization and about $2\\%$ for NTK parameterization. In stark contrast, it has little effect on \\texttt{CNN-VEC} with NTK parameterization and, surprisingly, provides a huge performance boost on \\texttt{CNN-VEC} with STD parameterization ($+5\\%$). \n\nL2 regularization (\\eqref{eq:l2-reg}) regularizes the squared distance between the parameters and the origin and encourages the network to converge to minima with smaller Euclidean norms. Such minima are different from those obtained by NT kernel-ridge regression (i.e. adding a diagonal regularization term to the NT kernel) \\citep{wei2019regularization},\nwhich essentially penalizes the deviation of the network's parameters from initialization \\cite{hu2019understanding}. See~\\Figref{fig:reg-compare-sm} for a comparison.\n\nL2 regularization consistently improves (+$1$-$2\\%$) performance for all architectures and parameterizations. \nEven with a well-tuned L2 regularization, finite width \\texttt{CNN-VEC} and \\texttt{FCN} still underperform NNGP\/NTK. \nCombining L2 with early stopping produces a dramatic additional $10\\% - 15\\%$ performance boost for finite width \\texttt{CNN-VEC}, outperforming NNGP\/NTK.\nFinally, we note that L2+LR together provide a superlinear performance gain for all cases except \\texttt{FCN} and \\texttt{CNN-GAP} with NTK-parameterization. \nUnderstanding the nonlinear interactions between L2, LR, and early stopping on finite width networks is an important research question (e.g.
see~\\cite{lewkowycz2020large,lewkowycz2020training} for LR\/L2 effect on the training dynamics). \n\n\\subsection{Improving L2 regularization for networks using the standard parameterization}\n\\label{sec improved standard}\n\nWe find that L2 regularization provides dramatically more benefit (by up to $6\\%$) to finite width networks with the NTK parameterization than to those that use the standard parameterization (see \\Tabref{tab:main-table}). There is a bijective mapping between weights in networks with the two parameterizations, which preserves the function computed by both networks: $W^l_\\text{STD} = \\nicefrac{W^l_\\text{NTK}\\,}{\\sqrt{n^l}}$, where $W^l$ is the $l$th layer weight matrix, and $n^l$ is the width of the preceding activation vector. \nMotivated by the improved performance of the L2 regularizer in the NTK parameterization, we use this mapping to construct a regularizer for standard parameterization networks that produces the same penalty as vanilla L2 regularization would produce on the equivalent NTK-parameterized network. This modified regularizer is\n ${R}^{\\text{STD}}_{\\text{Layerwise}} = \\frac{\\lambda}{2} \\sum_l n^l \\norm{W^l_\\text{STD}}^2$.\nThis can be thought of as a layer-wise regularization constant $\\lambda^l = \\lambda n^l$. \nThe improved performance of this regularizer is illustrated in \\Figref{fig reg compare}.\n\n\\begin{figure}\n\\centering\n\\begin{overpic}[width=\\columnwidth]{figures\/network_l2_ntk.pdf}\n\\end{overpic}\n\\ \\ \n\\vspace{-0.45cm}\n\\caption{\n\n\\textbf{Layerwise scaling motivated by NTK makes L2 regularization more helpful in standard parameterization networks.}\nSee \\sref{sec improved standard} for introduction of the improved regularizer, \\Figref{fig:l2-init} for further analysis on L2 regularization to initial weights, and \\Figref{fig:reg-compare-sm} for effects on varying widths.\n\\label{fig reg compare}\n}\n\\end{figure}\n\n\n\\subsection{Performance can be non-monotonic in width beyond double descent}\n\\label{sec:perf_vs_width}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/network_vs_width_ntk.pdf}\n\\caption{\n\n\\textbf{Finite width networks generally perform better with increasing width, but \\texttt{CNN-VEC} shows surprising non-monotonic behavior.}\n {\\bf L2}: non-zero weight decay allowed during training {\\bf LR}: large learning rate allowed. Dashed lines are allowing underfitting (\\textbf{U}). See \\Figref{fig:width-combined} for plots for the standard parameterization, and \\sref{sec:equivariance} for discussion of \\texttt{CNN-VEC} results.\n }\n\\label{fig:width}\n\\end{figure}\n\nDeep learning practitioners have repeatedly found that increasing the number of parameters in their models leads to improved performance~\\citep{lawrence1998size, bartlett1998sample, Neyshabur2014InSO, canziani2016analysis, novak2018sensitivity, parkoptimal, novak2018bayesian}. \nWhile this behavior is consistent with a Bayesian perspective on generalization \\citep{mackay1995probable,smith2017bayesian, wilson2020bayesian},\nit seems at odds with classic generalization theory which primarily considers worst-case overfitting \\citep{haussler1992decision, NIPS1988_154, Vapnik1998StatisticalLT, bartlett2002rademacher, bousquet2002stability, mukherjee2004statistical, poggio2004general}. 
This has led to a great deal of work on the interplay of overparameterization and generalization \\citep{zhang2016understanding, advani2017high, neyshabur2018towards, neyshabur2018the, NIPS2018_8038, allen2019learning, ghorbani2019limitations, ghorbani2019linearized, arora2019fine, brutzkus19b}.\nOf particular interest has been the phenomenon of double descent, in which performance increases overall with parameter count, but drops dramatically when the neural network is roughly critically parameterized~\\citep{opper1990ability, belkin2019reconciling, nakkiran2019deep}.\n\nEmpirically, we find that in most cases (\\texttt{FCN} and \\texttt{CNN-GAP} in both parameterizations, \\texttt{CNN-VEC} with standard parameterization) increasing width leads to monotonic improvements in performance. \nHowever, we also find a more complex dependence on width in specific relatively simple settings. \nFor example, in \\Figref{fig:width} for \\texttt{CNN-VEC}\nwith NTK parameterization the performance depends non-monotonically on the width, and the optimal width has an intermediate value.\\footnote{Similar behavior was observed in~\\cite{andreassen2020} for \\texttt{CNN-VEC} and in~\\cite{aitchison2019bigger} for finite width Bayesian networks.} This nonmonotonicity is distinct from double-descent-like behavior, as all widths correspond to overparameterized models.\n\n\n\\subsection{Diagonal regularization of kernels behaves like early stopping}\\label{sec:diag-reg}\n\\vspace{-0.1cm}\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.245\\columnwidth]{figures\/fc_diag_reg.pdf}\n\\includegraphics[width=0.245\\columnwidth]{figures\/cv_diag_reg.pdf}\n\\includegraphics[width=0.495\\columnwidth]{figures\/cg_diag_reg.pdf}\n\\caption{\n\n\\textbf{Diagonal kernel regularization acts similarly to early stopping.}\nSolid lines correspond to NTK inference with varying diagonal regularization $\\varepsilon$. Dashed lines correspond to predictions after gradient descent evolution to time $\\tau = \\eta t$ (with $\\eta=\\nicefrac{m}{\\textrm{tr}({\\cal K})}$). \nLine color indicates varying training set size $m$. \nPerforming early stopping at time $t$ corresponds closely to regularizing with coefficient $\\varepsilon = \\nicefrac{K m}{\\eta t}$, where $K=10$ denotes the number of output classes.\n}\n\\label{fig:diag-reg}\n\\end{figure}\n\nWhen performing kernel inference, it is common to add a diagonal regularizer to the training kernel matrix, ${\\cal K}_{\\textrm{reg}} = {\\cal K} + \\varepsilon \\tfrac{\\textrm{tr}({\\cal K})}{m} I$. For linear regression, \\citet{ali2019continuous} proved that the inverse of a kernel regularizer is related to early stopping time under gradient flow. With kernels, gradient flow dynamics correspond directly to training of a wide neural network \\citep{Jacot2018ntk, lee2019wide}. \n\nWe experimentally explore the relationship between early stopping, kernel regularization, and generalization in \\Figref{fig:diag-reg}. \nWe observe a close relationship between regularization and early stopping, and find that in most cases the best validation performance occurs with early stopping and non-zero $\\varepsilon$.
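To make this comparison concrete, the two prediction maps can be written side by side. The following NumPy and SciPy sketch is illustrative only: \texttt{k\_tt} and \texttt{k\_xt} are placeholders for the train--train and test--train kernel blocks, and the gradient-flow mean is written for MSE loss with zero initial output, so the constants in the time parameterization (cf. the caption of \Figref{fig:diag-reg}) are not meant to be exact.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def ridge_predict(k_tt, k_xt, y, eps):
    # Kernel regression with the diagonal regularizer eps * tr(K)/m * I.
    m = k_tt.shape[0]
    k_reg = k_tt + eps * np.trace(k_tt) / m * np.eye(m)
    return k_xt @ np.linalg.solve(k_reg, y)

def gradient_flow_predict(k_tt, k_xt, y, t, lr):
    # Mean prediction of kernel gradient flow on MSE stopped at time t:
    # K_* K^{-1} (I - exp(-lr * t * K / m)) y, assuming zero initial output.
    m = k_tt.shape[0]
    evolution = np.eye(m) - expm(-lr * t * k_tt / m)
    return k_xt @ np.linalg.solve(k_tt, evolution @ y)
\end{verbatim}
Sweeping $\varepsilon$ in the first map and $t$ in the second corresponds to the solid and dashed curves of \Figref{fig:diag-reg}, respectively.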
\nWhile \\citet{ali2019continuous} do not consider a $\\tfrac{\\textrm{tr}({\\cal K})}{m}$ scaling on the kernel regularizer, we found it useful since experiments become invariant under scale of ${\\cal K}$.\n\n\n\\subsection{\nFloating point precision determines critical dataset size for failure of kernel methods\n}\n\\label{sec:kernel eigs}\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/kernel_spectrum_powerlaw.png}\n\\caption{\n\\textbf{Tail eigenvalues of infinite network kernels show power-law decay.} \nThe red dashed line shows the predicted scale of noise in the eigenvalues due to floating point precision, for kernel matrices of increasing width. \nEigenvalues for CNN-GAP architectures decay fast, and may be overwhelmed by \\texttt{float32} quantization noise \nfor dataset sizes of $O(10^4)$. For \\texttt{float64}, quantization noise is not predicted to become significant until a dataset size of $O(10^{10})$ (\\Figref{app kernel spectra}).\n}\n\\label{fig:kernel_spectra}\n\\end{figure}\n\n\nWe observe empirically that kernels become sensitive to \\texttt{float32} vs. \\texttt{float64} numerical precision at a critical dataset size. For instance, GAP models suffer \\texttt{float32} numerical precision errors at a dataset size of $\\sim{10}^4$.\n This phenomena can be understood with a simple random noise model (see \\sref{app:noise-model} for details). The key insight is that kernels with fast eigenvalue decay suffer from floating point noise. Empirically, the tail eigenvalue of the NNGP\/NTK follows a power law (see \\Figref{fig:kernel_spectra})\n and measuring their decay trend provides good indication of critical dataset size\n\\begin{equation}\n m^* \\gtrsim\n \\left(\\nicefrac{C}{\\pp{\\sqrt{2} \\sigma_n}}\\right)^{\\tfrac{2}{2\\alpha - 1}} \\quad \\textrm{if } \\alpha > \\tfrac{1}{2}\\ \\qquad \\left(\\infty \\quad \\textrm{otherwise}\\right)\\,,\n\\label{eq:critical-m}\n\\end{equation}\nwhere $\\sigma_n$ is the typical noise scale, e.g. \\texttt{float32} epsilon, and the kernel eigenvalue decay is modeled as $\\lambda_i \\sim C \\, i^{-\\alpha}$ as $i$ increases. \nBeyond this critical dataset size, the smallest eigenvalues in the kernel become dominated by floating point noise.\n\n\\subsection{Linearized \\texttt{CNN-GAP} models perform poorly due to poor conditioning}\n\\label{sec:cnn-gap-conditioning}\n\nWe observe that the linearized \\texttt{CNN-GAP} \nconverges {\\em extremely} slowly on the training set (\\Figref{fig:training_curves}), \nleading to poor validation performance (\\Figref{fig:validation_curves}). \nEven after training for more than 10M steps with varying L2 regularization strengths and LRs, the best training accuracy was below 90\\%, and test accuracy $\\sim$70\\% -- worse\nthan both the corresponding infinite and nonlinear finite width networks.\n\nThis is caused by \npoor conditioning of pooling networks. \\citet{xiao2019disentangling} (Table 1) show that the conditioning at initialization of a \\texttt{CNN-GAP} network is worse than that of \\texttt{FCN} or \\texttt{CNN-VEC} networks by a factor of the number of pixels (1024 for CIFAR-10). This poor conditioning of the kernel eigenspectrum can be seen in \\Figref{fig:kernel_spectra}. 
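Given a measured spectrum such as those in \Figref{fig:kernel_spectra}, the estimate \eqref{eq:critical-m} can be evaluated directly. A minimal sketch follows; the power-law fit window and the choice of $\sigma_n$ as the \texttt{float32} machine epsilon are illustrative assumptions rather than the exact procedure used for the figure.
\begin{verbatim}
import numpy as np

def critical_dataset_size(eigvals, sigma_n=np.finfo(np.float32).eps):
    # Fit lambda_i ~ C * i^(-alpha) to the tail of the descending-sorted
    # spectrum, then evaluate the critical dataset size m*.
    i = np.arange(1, len(eigvals) + 1)
    tail = slice(len(eigvals) // 2, None)          # fit window (illustrative)
    slope, intercept = np.polyfit(np.log(i[tail]), np.log(eigvals[tail]), 1)
    alpha, C = -slope, np.exp(intercept)
    if alpha <= 0.5:
        return np.inf
    return (C / (np.sqrt(2) * sigma_n)) ** (2.0 / (2.0 * alpha - 1.0))
\end{verbatim}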
For linearized networks, in addition to slowing training by a factor of 1024, this leads to numerical instability when using \\texttt{float32}.\n \n\n\n\\subsection{\nRegularized ZCA whitening improves accuracy\n}\\label{sec:zca}\n\n\nZCA whitening~\\cite{bell1997independent} (see \\Figref{fig cifar zca} for an illustration) is a data preprocessing technique that was once common~\\cite{goodfellow2013maxout,zagoruyko2016wide}, but has fallen out of favor. However it was recently shown to dramatically improve accuracy in some kernel methods by~\\citet{Shankar2020NeuralKW}, in combination with a small regularization parameter in the denominator (see \\sref{app ZCA}). \nWe investigate the utility of ZCA whitening as a preprocessing step for both finite and infinite width neural networks. \nWe observe that while pure ZCA whitening is detrimental for both kernels and finite networks (consistent with predictions in \\citep{wadia2020whitening}), with tuning of the regularization parameter it provides performance benefits for both kernel methods and finite network training (\\Figref{fig:zca}). \n \n\n\\begin{figure}\n\\centering\n\\begin{overpic}[width=\\linewidth]{figures\/kernel_zca.pdf}\n \\put (0,0) {\\textbf{\\small(a)}}\n\\end{overpic} \n\n\\vspace{0.2cm}\n\\begin{overpic}[width=\\linewidth]{figures\/network_zca.pdf}\n \\put (0,0) {\\textbf{\\small(b)}}\n\\end{overpic} \n\n\\caption{\n\n\\textbf{Regularized ZCA whitening improves image classification performance for both finite and infinite width networks.} \nAll plots show performance as a function of ZCA regularizaiton strength.\n(\\textbf{a}) ZCA whitening of inputs to kernel methods on CIFAR-10, Fashion-MNIST, and CIFAR-100. (\\textbf{b}) ZCA whitening of inputs to finite width networks (training curves in \\Figref{fig:app-zca-training}).\n}\n\\label{fig:zca}\n\\end{figure}\n\n\\subsection{Equivariance \nis only beneficial for narrow networks far from the kernel regime\n}\\label{sec:equivariance}\n \n Due to weight sharing between spatial locations, outputs of a convolutional layer are translation-{\\em equivariant} (up to edge effects), i.e. if an input image is translated, the activations are translated in the same spatial direction. However, the vast majority of contemporary \\texttt{CNN}s utilize weight sharing in conjunction with pooling layers, making the network outputs approximately translation-\\textit{invariant} (\\texttt{CNN-GAP}). \n The impact of equivariance alone (\\texttt{CNN-VEC})\n on generalization is not well understood -- \n it is a property of internal representations only, and does not translate into meaningful statements about the classifier outputs. \n Moreover, in the infinite-width limit it is guaranteed to have no impact on the outputs \\citep{novak2018bayesian, yang2019scaling}. In the finite regime it has been reported both to provide substantial benefits by \\citet{lecun1989generalization, novak2018bayesian} and no significant benefits by \\citet{bartunov2018assessing}.\n \n We conjecture that equivariance can only be leveraged far from the kernel regime. Indeed, as observed in \\Figref{fig:tricks_vs_accuracy} and discussed in \\sref{sec:l2_lr}, multiple kernel correspondence-breaking tricks are required for a meaningful boost in performance over NNGP or NTK (which are mathematically guaranteed to not benefit from equivariance), and the boost is largest at a moderate\n width (\\Figref{fig:width}). 
\n Otherwise, even large ensembles of equivariant models (see \\texttt{CNN-VEC LIN} in \\Figref{fig:ensemble}) perform comparably to their infinite width, equivariance-agnostic counterparts. Accordingly, prior work that managed to extract benefits from equivariant models \\citep{lecun1989generalization, novak2018bayesian} tuned networks far outside the kernel regime (extremely small size and \\texttt{+LR+L2+U} respectively). We further confirm this phenomenon in a controlled setting in \\Figref{fig:crop_translate}.\n \n\n \\definecolor{darkred_f}{RGB}{247, 129, 191}\n \\definecolor{darkblue_f}{RGB}{55, 126, 184}\n \\definecolor{darkorange_f}{RGB}{255, 127, 0}\n \\definecolor{darkgreen_f}{RGB}{77, 175, 74}\n\n \\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/crop_4.pdf}\n \\includegraphics[width=0.59\\textwidth]{figures\/translate_4.pdf}\n \\caption{\n \n \\textbf{Equivariance is only leveraged in a \\texttt{CNN} model outside of the kernel regime.} \n If a \\texttt{CNN} model is able to utilize equivariance effectively, we expect it to be more robust to crops and translations than an {\\color{darkred_f}\\texttt{FCN}}. Surprisingly, performance of a {\\color{darkgreen_f}wide \\texttt{CNN-VEC}} degrades with the magnitude of the input perturbation as fast as that of an {\\color{darkred_f}\\texttt{FCN}}, indicating that equivariance is not exploited.\n In contrast, performance of a {\\color{darkorange_f}narrow model with weight decay (\\texttt{CNN-VEC+L2+narrow})} falls off much slower.\n {\\color{darkblue_f}Translation-invariant \\texttt{CNN-GAP}} remains, as expected, the most robust. Details in \\sref{sec:equivariance}, \\sref{app hyperparameters}. \n }\n \\label{fig:crop_translate}\n \\end{figure}\n\n\\subsection{Ensembling kernel predictors enables practical data augmentation with NNGP\/NTK}\\label{sec:data-augmentation}\nFinite width neural network often are trained with data augmentation (DA) to improve performance. We observe that the \\texttt{FCN} and \\texttt{CNN-VEC} architectures (both finite and infinite networks) benefit from DA, and that DA can cause \\texttt{CNN-VEC} to become competitive with \\texttt{CNN-GAP} (\\Tabref{tab:main-table}). While \\texttt{CNN-VEC} possess translation equivariance but not invariance (\\sref{sec:equivariance}), we believe it can effectively leverage equivariance to learn invariance from data.\n\nFor kernels, expanding a dataset with augmentation is computationally challenging, since kernel computation is quadratic in dataset size, and inference is cubic. \n\\citet{li2019enhanced, Shankar2020NeuralKW} incorporated flip augmentation by doubling the training set size. \nExtending this strategy to more augmentations such as crop or mixup~\\cite{zhang2018mixup}, or to broader augmentations strategies like AutoAugment~\\cite{cubuk2019autoaugment} and RandAugment~\\cite{cubuk2019randaugment}, becomes rapidly infeasible.\n\nHere we introduce a straightforward method for ensembling kernel predictors to enable more extensive data augmentation. \nMore sophisticated approximation approaches such as the Nystr\u00f6m method~\\citep{williams2001using} might yield even better performance. \nThe strategy involves constructing a set of augmented batches, performing kernel inference for each of them, and then performing ensembling \nof the resulting predictions. 
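Concretely, a minimal sketch of the procedure is given below; \texttt{kernel\_fn}, \texttt{augment}, and the data arrays are placeholders, and, following the experimental details in the appendix, the first draw is the unaugmented training set and the diagonal regularizer defaults to zero.
\begin{verbatim}
import numpy as np

def da_ensemble_predict(kernel_fn, augment, x_train, y_train, x_test,
                        n_draws, eps=0.0):
    # One exact kernel predictor per augmented draw of the training set;
    # the test predictions of the individual predictors are averaged.
    preds = []
    for i in range(n_draws):
        x_aug = x_train if i == 0 else augment(x_train)
        k_tt = kernel_fn(x_aug, x_aug)
        k_xt = kernel_fn(x_test, x_aug)
        m = k_tt.shape[0]
        k_reg = k_tt + eps * np.trace(k_tt) / m * np.eye(m)
        preds.append(k_xt @ np.linalg.solve(k_reg, y_train))
    return np.mean(preds, axis=0)
\end{verbatim}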
This is equivalent to replacing the kernel with a block diagonal approximation, where each block corresponds to one of the batches, and the union of all augmented batches is the full augmented dataset. See \\sref{app kernel ensembling} for more details.\nThis method achieves SOTA for a kernel method corresponding to the infinite width limit of each architecture class we studied (\\Figref{fig:kerne-da-ens} and \\Tabref{tab:sota-kernel-table}).\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/da_ensemble.pdf}\n\\caption{\n\n\\textbf{Ensembling kernel predictors makes predictions from large augmented datasets computationally tractable.} \nWe used standard crop by 4 and flip data augmentation (DA) common for training neural networks for CIFAR-10. We observed that DA ensembling improves accuracy and is much more effective for NNGP compared to NTK. In the last panel, we applied data augmentation by ensemble to the Myrtle architecture studied in \\citet{Shankar2020NeuralKW}. We observe improvements over our base setting, but do not reach the reported best performance. We believe techniques such as leave-one-out tilt and ZCA augmentation also used in~\\cite{Shankar2020NeuralKW} contribute to this difference.}\n\\label{fig:kerne-da-ens}\n\\end{figure}\n\n\n\n\n\n\\begin{table}\n\\centering\n\\caption{\n\n\\textbf{CIFAR-10 test accuracy for kernels of the corresponding architecture type}}\n\\label{tab:sota-kernel-table}\n\\vspace{0.1cm}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{@{}llll@{}}\n\\toprule\nArchitecture & Method & \n\\begin{tabular}[c]{@{}l@{}}NTK\n\\end{tabular} & \n\\begin{tabular}[c]{@{}l@{}}NNGP\n\\end{tabular} \\\\ \n\\midrule \\midrule\n{}{}{\\textbf{FC}} \n & \\citet{novak2018bayesian} & - & 59.9 \\\\\n & ZCA Reg (this work) & 59.7 & 59.7 \\\\\n & DA Ensemble (this work) & \\textbf{61.5} & \\textbf{62.4} \\\\\n\\midrule\n{\\textbf{CNN-VEC}} & \\citet{novak2018bayesian} & \\textbf{-} & 67.1 \\\\\n & \\citet{li2019enhanced} & 66.6 & 66.8 \\\\\n & ZCA Reg (this work) & 69.8 & 69.4 \\\\\n & Flip Augmentation, \\citet{li2019enhanced} & 69.9 & 70.5 \\\\\n & DA Ensemble (this work) & \\textbf{70.5} & \\textbf{73.2} \\\\\n\\midrule\n{\\textbf{CNN-GAP}} & \\citet{arora2019on, li2019enhanced} & 77.6 & 78.5 \\\\\n & ZCA Reg (this work) & 83.2 & 83.5 \\\\\n & Flip Augmentation, \\citet{li2019enhanced} & 79.7 & 80.0 \\\\\n & DA Ensemble (this work) & \\textbf{83.7 (32 ens)} & \\textbf{84.8 (32 ens)} \\\\\n \\midrule\n{\\textbf{Myrtle}\n\\tablefootnote{The normalized Gaussian Myrtle kernel used in~\\citet{Shankar2020NeuralKW} does not have a corresponding finite-width neural network, and was additionally tuned on the test set for the case of CIFAR-10.} \n} \n & Myrtle ZCA and Flip Augmentation, \\citet{Shankar2020NeuralKW} & - & \\textbf{89.8} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\\section{Discussion}\n\n\nWe performed an in-depth investigation of the phenomenology of finite and infinite width neural networks \nthrough a series of controlled interventions. 
\nWe \nquantified\nphenomena having to do with \ngeneralization, architecture dependendence, deviations between infinite and finite networks, numerical stability, data augmentation, data preprocessing, ensembling, network topology, and failure modes of linearization.\nWe further developed best practices that improve performance for both finite and infinite networks.\nWe believe our experiments provide firm empirical ground for future studies.\n\nThe careful study of other architectural components such as self-attention, normalization, and residual connections would be an interesting extension to this work, especially in light of results such as \\citet{Goldblum2020Truth} which empirically observes that the large width behavior of Residual Networks does not conform to the infinite-width limit.\nAnother interesting future direction would be incorporating systematic finite-width corrections, such as those in~\\citet{yaida2019non, Dyer2020Asymptotics, antognini2019finite, huang2019dynamics}.\n\n\n\\section*{Broader Impact}\n\nDeveloping theoretical understanding of neural networks is crucial both for understanding their biases, and predicting when and how they will fail. \nUnderstanding biases in models is of critical importance if we hope to prevent them from perpetuating and exaggerating existing racial, gender, and other social biases \\citep{hardt2016equality, barocas2016big, doshi2017towards, barocas-hardt-narayanan}. \nUnderstanding model failure has a direct impact on human safety, as neural networks increasingly do things like drive cars and control the electrical grid~\\citep{bojarski2016end, rudin2011machine, ozay2015machine}. \n\nWe believe that wide neural networks are currently the most promising direction for the development of neural network theory. \nWe further believe that the experiments we present in this paper will provide empirical underpinnings that allow better theory to be developed. \nWe thus believe that this paper will in a small way aid the engineering of safer and more just machine learning models.\n\n\n\\begin{ack}\nWe thank Yasaman Bahri and Ethan Dyer for discussions and feedback on the project.\nWe are also grateful to Atish Agarwala and Gamaleldin Elsayed for providing valuable feedbacks on a\ndraft. \n\nWe acknowledge the Python community~\\cite{van1995python} for developing the core set of tools that enabled this work, including NumPy~\\cite{numpy}, SciPy~\\cite{scipy}, Matplotlib~\\cite{matplotlib}, Pandas~\\cite{pandas}, Jupyter~\\cite{jupyter}, JAX~\\cite{jaxrepro}, Neural Tangents~\\cite{neuraltangents2020}, Apache Beam~\\cite{beam}, Tensorflow datasets~\\cite{TFDS} and Google Colaboratory~\\cite{colab}.\n\\end{ack}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\small\n\n\n\\section{Glossary}\nWe use the following abbreviations in this work:\n\n\\begin{itemize}\n \\item{\\bf L2}: L2 reguarization a.k.a. 
weight decay;\n \\item {\\bf LR}: using large learning rate;\n \\item {\\bf U}: allowing underfitting; \n \\item {\\bf DA}: using data augmentation;\n \\item {\\bf C}: centering the network so that the logits are always zero at initialization;\n \\item {\\bf Ens}: neural network ensembling logits over multiple initialization;\n \\item {\\bf ZCA}: zero-phase component analysis regularization preprocessing;\n \\item {\\bf FCN}: fully-connected neural network.;\n \\item {\\bf CNN-VEC}: convolutional neural network with a vectorized readout layer;\n \\item {\\bf CNN-GAP}: convolutional neural network with a global average pooling readout layer;\n \\item {\\bf NNGP}: neural network Gaussian process;\n \\item {\\bf NTK}: neural tangent kernel.\n\\end{itemize}\n\n\\section{Main table}\n\n\n \n\\begin{table}[h]\n\\centering\n\\caption{\\textbf{CIFAR-10 classification accuracy for nonlinear and linearized finite neural networks, as well as for NTK and NNGP kernel methods}.\nStarting from \\texttt{Base} network of given architecture class described in \\sref{sec:experimental_design}, performance change of \\textbf{centering} (\\texttt{+C}), \\textbf{large learning rate} (\\texttt{+LR}), allowing \\textbf{underfitting} by early stopping (\\texttt{+U}), input preprocessing with \\textbf{ZCA regularization} (\\texttt{+ZCA}), multiple initialization \\textbf{ensembling} (\\texttt{+Ens}), and some combinations are shown, for {\\color{standard_param}\\textbf{Standard}} and {\\color{ntk_param}\\textbf{NTK}} parameterization. See also~\\Figref{fig:tricks_vs_accuracy}.\n}\n\\vspace{0.1cm}\n\\label{tab:main-table}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{@{}lc|ccccccccc|cc|cc@{}}\n\\toprule\n{} &\n Param &\n Base &\n +C &\n +LR &\n +L2 &\n \\begin{tabular}[c]{@{}l@{}}+L2 \\\\ +U\\end{tabular}\n &\n \\begin{tabular}[c]{@{}c@{}}+L2 \\\\ +LR\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}+L2 \\\\+LR\\\\ +U \\end{tabular} &\n +ZCA &\n \\begin{tabular}[c]{@{}c@{}} Best \\\\ w\/o DA\\end{tabular} &\n +Ens &\n \\begin{tabular}[c]{@{}l@{}}+Ens \\\\+C \\end{tabular}\n&\n \\begin{tabular}[c]{@{}l@{}} +DA\\\\ +U \\end{tabular}&\n \\begin{tabular}[c]{@{}l@{}}+DA \\\\+L2\\\\ +LR\\\\ +U\\end{tabular} \\\\\n\\midrule\\midrule\nFCN &\n \\begin{tabular}[c]{@{}c@{}}STD\\\\ NTK\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}47.82\\\\ 46.16\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}53.22\\\\ 51.74\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}49.07\\\\ 48.14\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}49.82\\\\ 54.27\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}49.82\\\\ 54.27\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}55.32\\\\ 55.11\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}55.32\\\\ 55.44\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}44.29\\\\ 44.86\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}55.90\\\\ 55.44\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}58.11\\\\ 58.14\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}58.25\\\\ 58.31\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}} 65.29\\\\ 61.87\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}} 67.43\\\\ 69.35\\end{tabular} \\\\\n \\midrule\nCNN-VEC &\n \\begin{tabular}[c]{@{}c@{}}STD\\\\ NTK\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}56.68\\\\ 60.73\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}60.82\\\\ 58.09\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}62.16\\\\ 60.73\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}57.15\\\\ 61.30\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}67.07\\\\ 75.85\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}62.16\\\\ 
76.93\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}68.99\\\\ 77.47\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}57.39\\\\ 61.35\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}68.99\\\\ 77.47\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}67.30\\\\ 71.32\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}65.65\\\\ 67.23\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}76.73\\\\ 83.92\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}83.01\\\\ 85.63\\end{tabular} \\\\\n \\midrule\nCNN-GAP &\n \\begin{tabular}[c]{@{}c@{}}STD\\\\ NTK\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}80.26\\\\ 80.61\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}81.25\\\\ 81.73\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}80.93\\\\ 82.44\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}81.67\\\\ 81.17\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}81.10\\\\ 81.17\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}83.69\\\\ 82.44\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}83.01\\\\ 82.43\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}84.90\\\\ 83.75\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}84.22\\\\ 83.92\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}84.15\\\\ 85.22\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}84.62\\\\ 85.75\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}84.36\\\\ 84.07\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}86.45\\\\ 86.68\\end{tabular}\n\\end{tabular}%\n}\n\\\\\n\\vspace{0.2cm}\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{@{}lc|cccccc||ccc|ccc@{}}\n\\toprule\n &\n Param &\n Lin Base &\n +C &\n +L2 &\n \\begin{tabular}[c]{@{}l@{}}+L2 \\\\+U \\end{tabular} &\n +Ens &\n \\begin{tabular}[c]{@{}l@{}}+Ens \\\\+C \\end{tabular}&\n NTK & +ZCA & \\begin{tabular}[c]{@{}c@{}}+DA \\\\ +ZCA\\end{tabular}&\n NNGP &+ZCA &\n\\begin{tabular}[c]{@{}c@{}}+DA \\\\ +ZCA\\end{tabular}\\\\\n \\midrule\\midrule\nFCN &\n \\begin{tabular}[c]{@{}c@{}}STD\\\\ NTK\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}43.09\\\\ 48.61\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}51.48\\\\ 52.12\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}44.16\\\\ 51.77\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}50.77\\\\ 51.77\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}57.85\\\\ 58.04\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}57.99\\\\ 58.16\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}58.05\\\\ 58.28\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}59.65\\\\ 59.68\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}-\\\\ 61.54\\end{tabular} &\n 58.61 &\n 59.70 &\n 62.40 \\\\\n \\midrule\nCNN-VEC &\n \\begin{tabular}[c]{@{}c@{}}STD\\\\ NTK\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}52.43\\\\ 55.88\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}60.61\\\\ 58.94\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}58.41\\\\ 58.52\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}58.41\\\\ 58.50\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}64.58\\\\ 65.45\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}64.67\\\\ 65.54\\end{tabular} &\n {\\begin{tabular}[c]{@{}c@{}}66.64\\\\ 66.78\\end{tabular}} &\n \\begin{tabular}[c]{@{}c@{}}69.65\\\\ 69.79\\end{tabular}&\n \\begin{tabular}[c]{@{}c@{}}-\\\\ 70.52\\end{tabular} &\n 66.69 &\n 69.44 &\n 73.23 \\\\\n \\midrule\nCNN-GAP &\n \\begin{tabular}[c]{@{}c@{}}STD\\\\ NTK\\end{tabular} &\n \\multicolumn{6}{c||}{\\begin{tabular}[c]{@{}c@{}} \\textgreater 70.00* (Train accuracy 86.22 after 14M steps)\\\\ \\textgreater 68.59* (Train accuracy 79.90 after 14M steps)\\end{tabular}} &\n \\begin{tabular}[c]{@{}c@{}}76.97\\\\ 77.00\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}83.24\\\\ 83.24\\end{tabular} &\n 
\\begin{tabular}[c]{@{}c@{}}-\\\\ 83.74\\end{tabular}\n &\n 78.0 &\n 83.45 & 84.82\n\\end{tabular}%\n}\n\\end{table}\n\n\n\n\n\n\n\n\\section{Experimental details}\n\nFor all experiments, we use Neural Tangents (NT) library~\\cite{neuraltangents2020} built on top of JAX~\\cite{jaxrepro}. First we describe experimental settings that is mostly common and then describe specific details and hyperparameters for each experiments.\n\n\\textbf{Finite width neural networks}\nWe train finite width networks with Mean Squared Error (MSE) loss \n$${\\cal L } = \\frac{1}{2 |{\\cal D}| K} \\sum_{(x_i, y_i)\\in {\\cal D}} \\|f(x_i) - y_i\\|^2\\,,$$\nwhere $K$ is the number of classes and $\\|\\cdot \\|$ is the $L^2$ norm in $\\mathbb R^{K}$. For the experiments with \\texttt{+L2}, we add L2 regularization to the loss \n\\begin{equation}\\label{eq:l2-reg}\n {R}_{\\text{L2}} = \\frac{\\lambda}{2} \\sum_l \\norm{W^l}^2\\,,\n\\end{equation}\nand tune $\\lambda$ using grid-search optimizing for the validation accuracy.\n\nWe optimize the loss using mini-batch SGD with constant learning rate. We use batch-size of $100$ for \\texttt{FCN} and $40$ for both \\texttt{CNN-VEC} and \\texttt{CNN-GAP} (see \\sref{app:batch-size} for further details on this choice). \nLearning rate is parameterized with learning rate factor $c$ with respect to the critical learning rate\n\\begin{equation}\n \\eta = c\\, \\eta_\\text{critical}\\,.\n\\end{equation}\nIn practice, we compute empirical NTK $\\hat \\Theta (x, x') = \\sum_j \\partial_j f(x) \\partial_j f(x')$ on 16 random points in the training set to estimate $\\eta_\\text{critical}$~\\cite{lee2019wide} by maximum eigenvalue of $\\hat \\Theta (x, x)$. This is readily available in NT library~\\cite{neuraltangents2020} using \\texttt{nt.monte\\_carlo\\_kernel\\_fn} and \\texttt{nt.predict.max\\_learning\\_rate}. \nBase case considered without large learning rate indicates $c \\leq 1$, and large learning rate (\\texttt{+LR}) runs are allowing $c > 1$. Note that for linearized networks $\\eta_\\text{critical}$ is strict upper-bound for the learning rates and no $c >1$ is allowed~\\cite{lee2019wide, yang2019fine, lewkowycz2020large}.\n\n\nTraining steps are chosen to be large enough, such that learning rate factor $c \\leq 1$ can reach above $99\\%$ accuracy on $5k$ random subset of training data for 5 logarithmic spaced measurements. For different learning rates, physical time $t=\\eta \\times \\text{(\\# of steps)}$ roughly determines learning dynamics and small learning rate trials need larger number of steps. \nAchieving termination criteria was possible for all of the trials except for linearized \\texttt{CNN-GAP} and data augmented training of \\texttt{FCN}, \\texttt{CNN-VEC}. In these cases, we report best achieved performance without fitting the training set. \n\n\n\\textbf{NNGP \/ NTK} For inference, except for data augmentation ensembles for which default zero regularization was chosen, we grid search over diagonal regularization in the range \\texttt{numpy.logspace(-7, 2, 14)} and $0$. Diagonal regularization is parameterized as\n$${\\cal K}_{\\textrm{reg}} = {\\cal K} + \\varepsilon \\tfrac{\\textrm{tr}({\\cal K})}{m} I$$\nwhere ${\\cal K}$ is either NNGP or NTK for the training set. We work with this parameterization since $\\varepsilon$ is invariant to scale of ${\\cal K}$.\n\n\n\\textbf{Dataset}\nFor all our experiments (unless specified) we use train\/valid\/test split of 45k\/5k\/10k for CIFAR-10\/100 and 50k\/10k\/10k for Fashion-MNIST. 
\n\n\\textbf{Dataset}\nFor all our experiments (unless specified otherwise) we use a train\/valid\/test split of 45k\/5k\/10k for CIFAR-10\/100 and 50k\/10k\/10k for Fashion-MNIST. For all our experiments, inputs are standardized with per-channel mean and standard deviation. ZCA regularized whitening is applied as described in \\sref{app ZCA}. \nOutputs are encoded as mean-subtracted one-hot vectors for the MSE loss, e.g. a label in class $c$ is encoded as $-0.1 \\cdot \\bf{1} + e_c$. For the softmax-cross-entropy loss in~\\sref{app:xent-vs-mse}, we use the standard one-hot-encoded output.\n\nFor data augmentation, we use the widely-used augmentation pipeline for CIFAR-10: horizontal flips with 50\\% probability and random crops by 4 pixels with zero-padding. \n\n\n\\textbf{Details of architecture choice:}\nWe only consider the ReLU activation (with the exception of the Myrtle kernel, which uses a scaled Gaussian activation~\\cite{Shankar2020NeuralKW}) and choose the critical initialization weight variance $\\sigma_w^2=2$ with a small bias variance $\\sigma_b^2=0.01$. \nFor convolution layers, we exclusively consider $3 \\times 3$ filters with stride $1$ and \\texttt{SAME} (zero) padding so that the image size does not change under the convolution operation. \n\n\n\\subsection{Hyperparameter configurations for all experiments}\n\\label{app hyperparameters}\n\n\nWe use grid search to tune hyperparameters, and use accuracy on the validation set to select the hyperparameter configuration and the measurement step (for underfitting \/ early stopping). Unless specified otherwise, all reported numbers are test set performance.\n\n\\textbf{\\Figref{fig:tricks_vs_accuracy}, Table~\\ref{tab:main-table}}: We grid search over L2 regularization strength $\\lambda \\in \\{0\\} \\cup \\{10^{-k} | k \\text{ from -9 to -3}\\}$ and learning rate factor $c \\in \\{2^k | k\\text{ from -2 to 5}\\}$. For linearized networks the same search space is used, except that the $c>1$ configuration is infeasible and training diverges. For non-linear, centered runs, $c \\in \\{2^k | k\\text{ from 0 to 4}\\}$ is used. Network ensembles use the base configuration with $\\lambda=0$ and $c=1$ with 64 different initialization seeds. Kernel ensembles are over 50 predictors for \\texttt{FCN} and \\texttt{CNN-VEC} and 32 predictors for \\texttt{CNN-GAP}. Finite networks trained with data augmentation use a different learning rate factor range of $c \\in \\{1, 4, 8\\}$. \n\n\n\\textbf{\\Figref{fig:nngp-vs-ntk}}: Each datapoint corresponds to either standard preprocessing or ZCA regularization preprocessing (as described in~\\sref{sec:zca}), with regularization strength varied in $\\{10^{-k}| k \\in [-6, -5, ..., 4, 5]\\}$ for \\texttt{FCN} and \\texttt{CNN-VEC}, and $\\{10^{-k}| k \\in [-3, -2, ..., 2, 3]\\}$ for \\texttt{CNN-GAP}. \n\n\n\\textbf{\\Figref{fig:validation_curves}, \\Figref{fig:ensemble}, \\Figref{fig:training_curves}, \\Figref{fig app ensemble}}: Learning rate factors are $c=1$ for non-linear networks and $c=0.5$ for linearized networks. While we show NTK parameterized runs, we also observe similar trends for STD parameterized networks. Shaded regions show the range of minimum and maximum performance across 64 different seeds. Solid lines indicate the mean performance. \n\n\\textbf{\\Figref{fig reg compare}}:\nWhile \\texttt{FCN} uses the base configuration, \\texttt{CNN-VEC} is a narrow network with 64 channels per layer, since moderate width benefits more from L2 under the NTK parameterization (\\Figref{fig:width-combined}). For \\texttt{CNN-GAP}, a 128-channel network is used. 
All networks with the different L2 strategies are trained with \\texttt{+LR} ($c>1$).\n\n\\textbf{\\Figref{fig:width}, \\Figref{fig:reg-compare-sm}, \\Figref{fig:width-combined}}: \n$\\lambda \\in \\{0, 10^{-9}, 10^{-7}, 10^{-5}, 10^{-3}\\}$ and $c \\in \\{2^k | k \\text{ from } -2 \\text{ to } 5\\}$. \n\n\n\\textbf{\\Figref{fig:diag-reg}}: We use a 640-point subset of the validation set for evaluation. \\texttt{CNN-GAP} is a variation of the base model with 3 convolution layers and $\\sigma_b^2 = 0.1$, while \\texttt{FCN} and \\texttt{CNN-VEC} are the base models.\nTraining evolution is computed using the analytic time-evolution described in~\\citet{lee2019wide} and implemented in the NT library via \\texttt{nt.predict.gradient\\_descent\\_mse} with 0 diagonal regularization. \n\n\n\\textbf{\\Figref{fig:zca}}: Kernel experiment details are the same as in \\Figref{fig:nngp-vs-ntk}. Finite networks use the base configuration with $c=1$ and $\\lambda=0$. \n\n\\textbf{\\Figref{fig:crop_translate}}: Evaluated networks use the NTK parameterization with $c=1$. {\\color{darkorange_f}\\texttt{CNN-VEC+L2+narrow}} uses 128 channels instead of the 512 of the base {\\color{darkgreen_f}\\texttt{CNN-VEC}} and {\\color{darkblue_f}\\texttt{CNN-GAP}} networks, and is trained with L2 regularization strength $\\lambda=10^{-7}$. The \\emph{Crop} transformation uses zero-padding, while the \\emph{Translate} transformation uses circular boundary conditions after shifting images. Each transformation is applied to the test set inputs with a randomly chosen shift direction. Each point corresponds to average accuracy over 20 random seeds. {\\color{darkred_f}\\texttt{FCN}} uses 2048 hidden units.\n\n\n\\textbf{\\Figref{fig:kerne-da-ens}, Table~\\ref{tab:sota-kernel-table}}: For all data augmentation ensembles, the first instance is taken to be the non-augmented training set. Further details on kernel ensembling are described in~\\sref{app kernel ensembling}. For all kernels, inputs are preprocessed with the optimal ZCA regularization observed in~\\Figref{fig:zca} (10 for \\texttt{FCN}; 1 for \\texttt{CNN-VEC}, \\texttt{CNN-GAP} and \\texttt{Myrtle}). We ensemble over 50 different augmented draws for \\texttt{FCN} and \\texttt{CNN-VEC}, whereas for \\texttt{CNN-GAP} we ensemble over 32 draws of the augmented training set.\n\n\n\n\\textbf{\\Figref{fig:xent-vs-mse}, Table~\\ref{tab:xent-vs-mse}}:\nDetails for the MSE trials are the same as for \\Figref{fig:tricks_vs_accuracy} and Table~\\ref{tab:main-table}. Trials with the softmax-cross-entropy loss were tuned over the same hyperparameter range as MSE, except that the learning rate factor range was $c\\in \\{1, 4, 8\\}$.\n\n\n\\textbf{\\Figref{fig:bs}}: We present results for NTK parameterized networks with $\\lambda=0$. The \\texttt{FCN} network has width 1024 with $\\eta=10.0$ for the MSE loss and $\\eta=2.0$ for the softmax-cross-entropy loss. \\texttt{CNN-GAP} uses 256 channels with $\\eta=5.0$ for the MSE loss and $\\eta=0.2$ for the softmax-cross-entropy loss. The random seed was fixed to be the same across all runs for comparison.\n\n\\textbf{\\Figref{fig:l2-init}}: The NTK parameterization with $c=4$ was used for L2 regularization toward both zero and the initial weights. The random seed was fixed to be the same across all runs for comparison.\n\n\\section{Noise model}\n\\label{app:noise-model}\nIn this section, we provide details on the noise model discussed in~\\sref{sec:kernel eigs}. 
Consider a random $m \\times m$ Hermitian matrix $N$ with entries of order $\\sigma_n$, treated as a noise perturbation to the kernel matrix\n\\begin{equation}\n    \\tilde K = K + N\\,.\n\\end{equation}\nThe eigenvalues of the random matrix $N$ follow Wigner's semi-circle law, and the smallest eigenvalue is given by $\\lambda_{\\min}(N) \\approx - \\sqrt{2m} \\sigma_n$. When the smallest eigenvalue of $K$ is smaller (in order) than $|\\lambda_{\\min}(N)|$, one needs to add a diagonal regularizer larger than the order of $|\\lambda_{\\min}(N)|$ to ensure positive definiteness. For estimates, we use machine precision\\footnote{\\texttt{np.finfo(np.float32).eps}, \\texttt{np.finfo(np.float64).eps}} $\\epsilon_{32} \\approx 10^{-7}$ and $\\epsilon_{64} \\approx 2 \\times 10^{-16}$ as proxy values for $\\sigma_n$. \nNote that the noise scale is relative to the elements of $K$, which are assumed to be $O(1)$; naively scaling $K$ by a multiplicative constant will also scale $\\sigma_n$.\n\nEmpirically, one can model the tail of the eigenvalue spectrum of an infinite width kernel matrix of size $m \\times m$ as\n\\begin{equation}\n    \\lambda_i \\approx C \\frac{m}{i^{\\alpha}} \\,.\n\\end{equation}\nNote that we are considering $O(1)$ entries for $K$, so typical eigenvalues scale linearly with the dataset size $m$. For a given dataset size, $\\alpha$ is the observed power law exponent and $C$ is a dataset-size independent constant. Thus the smallest eigenvalue (at $i \\sim m$) is of order $\\lambda_\\text{min}(K) \\sim C m^{1- \\alpha}$. \n\nIn the noise model, we can apply Weyl's inequality, which gives\n\\begin{equation}\n    \\lambda_\\text{min}(K) - \\sqrt{2 m} \\sigma_n \\leq  \\lambda_\\text{min} (\\tilde K ) \\leq \\lambda_\\text{min}(K) + \\sqrt{2 m} \\sigma_n \\,.\n\\end{equation}\n\nConsider the worst case, where the most negative noise eigenvalue affects the kernel's smallest eigenvalue. In that case the perturbed matrix's minimum eigenvalue could become negative, breaking positive semi-definiteness (PSD) of the kernel. \n\nThis model allows us to predict the critical dataset size ($m^*$) above which PSD can be broken for a specified noise scale and kernel eigenvalue decay. From the condition that the perturbed smallest eigenvalue becomes negative, \n\\begin{equation}\n    C m^{1-\\alpha} \\lesssim \\sqrt{2 m}\\sigma_n\\,,\n\\end{equation}\nwe obtain\n\\begin{equation}\\label{eq:critical-m}\n    m^* \\gtrsim\n    \\begin{cases}\n     \\left(\\frac{C}{\\sqrt{2} \\sigma_n}\\right)^{\\tfrac{2}{2\\alpha - 1}} & \\textrm{if } \\alpha > \\tfrac{1}{2}\\\\\n     \\infty & \\textrm{else}\n    \\end{cases}\n\\end{equation}\n\nWhen PSD is broken, one way to restore it is to add a diagonal regularizer (\\sref{sec:diag-reg}). \nFor CIFAR-10 with $m=50k$, considering $\\sqrt{2 m} \\sigma_n$, the typical negative eigenvalue is around $4 \\times 10^{-5}$ at the \\texttt{float32} noise scale and $7 \\times 10^{-14}$ at the \\texttt{float64} noise scale. Note that \\citet{arora2019on} regularized the kernel with regularization strength $5 \\times 10^{-5}$, which is on par with the typical negative eigenvalue introduced by \\texttt{float32} noise. Of course, this only applies if the kernel eigenvalue decay is sufficiently fast that the full dataset size is above $m^*$. \n\n\nWe observe that \\texttt{FCN} and \\texttt{CNN-VEC} kernels, which have small $\\alpha$, would not suffer from increasing dataset size under \\texttt{float32} precision. On the other hand, the worse conditioning of \\texttt{CNN-GAP} affects not only the training time (\\sref{sec:cnn-gap-conditioning}) but also the required precision. One could add a sufficiently large diagonal regularizer to mitigate the effect of the noise, at the expense of losing the information and generalization strength contained in eigen-directions with small eigenvalues. 
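\nFor reference, the noise scales and critical dataset size above can be reproduced with a minimal \\texttt{numpy} sketch (illustrative only; in practice $C$ and $\\alpha$ would be fit to a measured kernel spectrum):\n\\begin{verbatim}\nimport numpy as np\n\nm = 50_000  # CIFAR-10 training set size\nfor eps in [np.finfo(np.float32).eps, np.finfo(np.float64).eps]:\n    # Typical magnitude of the most negative noise eigenvalue, sqrt(2m) * sigma_n.\n    print(np.sqrt(2 * m) * eps)  # ~3.8e-05 (float32), ~7.0e-14 (float64)\n\ndef critical_dataset_size(C, alpha, sigma_n):\n    # Smallest m for which C * m**(1 - alpha) <~ sqrt(2 m) * sigma_n,\n    # i.e. PSD may be lost to noise; only finite for alpha > 1/2.\n    if alpha <= 0.5:\n        return np.inf\n    return (C / (np.sqrt(2) * sigma_n)) ** (2 / (2 * alpha - 1))\n\\end{verbatim}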
\n\n\n\\begin{figure}[t!]\n\\centering\n\n\\begin{overpic}[width=0.86\\linewidth]{figures\/kernel_spectrum.png}\n \\put (0,0) {\\textbf{\\small(a)}}\n \\end{overpic}\\\\\n\\vspace{0.3cm}\n \\begin{overpic}[width=0.43\\linewidth]{figures\/critical_dataset_size_by_exponent}\n \\put (0,0) {\\textbf{\\small(b)}}\n \\end{overpic}\n  \\begin{overpic}[width=0.43\\linewidth]{figures\/critical_dataset_size_by_noise}\n \\put (0,0) {\\textbf{\\small(c)}}\n \\end{overpic}\n \n\\caption{\\textbf{The CNN-GAP architecture has poor kernel conditioning.} \n\\textbf{(a)} Eigenvalue spectrum of infinite network kernels on 10k datapoints. Dashed lines show the noise eigenvalue scale from \\texttt{float32} precision. The eigenvalues of CNN-GAP's NNGP kernel decay fast, and negative eigenvalues may occur when the dataset size is $O(10^4)$ in \\texttt{float32}, but the spectrum is well-behaved at higher precision. \n\\textbf{(b-c)} Critical dataset size as a function of the eigenvalue decay exponent $\\alpha$ or the noise strength $\\sigma_n$, given by \\eqref{eq:critical-m}. \n}\n\\label{app kernel spectra}\n\\end{figure}\n\n\n\\section{Data augmentation via kernel ensembling}\n\\label{app kernel ensembling}\n\n\nWe start by considering general ensemble averaging of predictors. \nConsider a sequence of training sets $\\{{\\cal D}_i\\}$, each consisting of $m$ input-output pairs $\\{(x_1, y_1), \\dots, (x_m, y_m)\\}$ from a data-generating distribution. A learning algorithm (NNGP\/NTK inference in this study) gives a prediction $\\mu (x^*, {\\cal D}_i)$ for an unseen test point $x^*$. It is possible to obtain a better predictor by averaging the outputs of the different predictors\n\\begin{equation}\n    \\hat \\mu(x^*) = \\frac{1}{E}\\sum_i^E \\mu (x^*, {\\cal D}_i)\\, , \n\\end{equation}\nwhere $E$ denotes the cardinality of $\\{{\\cal D}_i\\}$. \nThis ensemble averaging is a simple type of committee machine, which has a long history~\\cite{clemen1989combining,dietterich2000ensemble}. While more sophisticated ensembling methods exist (e.g.~\\cite{freund1995desicion, breiman1996bagging,breiman2001random, opitz1996generating,opitz1999popular, rokach2010ensemble}), \nwe strive for simplicity and consider naive averaging. One alternative we considered is the generalized average\n\\begin{equation}\n    \\hat \\mu_w(x^*) = \\sum_i^E w_i \\,\\mu (x^*, {\\cal D}_i)\\,,\n\\end{equation}\nwhere $w_i$ is a set of weights satisfying $\\sum_i w_i = 1$. We can utilize the posterior variance $\\sigma_i^2$ from the NNGP or NTK with MSE loss via inverse-variance weighting (IVW), where the weights are given by\n\\begin{equation}\n    w_i = \\frac{\\sigma_i^{-2}}{\\sum_j \\sigma_j^{-2}} \\,.\n\\end{equation}\nIn a simple bagging setting~\\cite{breiman1996bagging}, we observe only small improvements with IVW over naive averaging. This indicates that the posterior variance for different draws of $\\{{\\cal D}_i\\}$ was quite similar. \n\nThe application to data augmentation (DA) is straightforward: we consider the process of generating $\\{{\\cal D}_i\\}$ from a (stochastic) data augmentation transformation ${\\cal T}$. The action ${\\cal T}(x, y) = T(x, y)$ is stochastic: with probability $p$, $T$ is an augmentation transformation (which itself could be stochastic, e.g. a random crop operator), and with probability $(1 - p)$, $T = \\text{Id}$. Considering ${\\cal D}_0$ as the clean, un-augmented training set, we can view the dataset-generating process as ${\\cal D}_i \\sim {\\cal T} ({\\cal D}_0)$, where we overload the definition of ${\\cal T}$ acting on a training set to denote the data-generating distribution. \n\nFor the experiments in~\\sref{sec:data-augmentation}, we take $T$ to be the standard augmentation strategy of horizontal flips and random crops by 4 pixels, with augmentation fraction $p = 0.5$ (see~\\Figref{fig:kerne-da-ens-frac} for the effect of the augmentation fraction on the kernel ensemble). In this framework, it is trivial to generalize the DA transformation to be quite general (e.g. the learned augmentation strategies studied by~\\citet{cubuk2019autoaugment, cubuk2019randaugment}).
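\n\nA minimal sketch of this kernel ensembling (illustrative only; \\texttt{predict\\_fn} stands in for NNGP\/NTK inference returning the posterior mean and variance on a given training set, and \\texttt{augment} for ${\\cal T}$ acting on the clean training set):\n\\begin{verbatim}\nimport numpy as np\n\ndef ensemble_predict(train_sets, x_test, predict_fn, use_ivw=False):\n    # predict_fn(train_set, x_test) -> (mean, variance) per test point.\n    means, variances = zip(*[predict_fn(d, x_test) for d in train_sets])\n    means, variances = np.stack(means), np.stack(variances)\n    if use_ivw:\n        # Inverse-variance weighting: w_i = sigma_i^-2 / sum_j sigma_j^-2.\n        w = (1.0 / variances) / (1.0 / variances).sum(axis=0)\n        return (w * means).sum(axis=0)\n    return means.mean(axis=0)  # naive averaging over the E draws\n\n# Data augmentation ensemble: the first instance is the clean training set,\n# and the remaining draws are D_i ~ T(D_0), e.g.\n# train_sets = [d0] + [augment(d0) for _ in range(E - 1)]\n\\end{verbatim}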
\n\n\n\n\\section{ZCA whitening}\n\\label{app ZCA}\nConsider $m$ (flattened) $d$-dimensional training set inputs $X$ (a $d\\times m$ matrix) with data covariance\n\\begin{equation}\n    \\Sigma_X = \\frac{1}{d} X X^T \\,.\n\\end{equation}\nThe goal of whitening is to find a whitening transformation $W$, a $d\\times d$ matrix, such that the features of the transformed input \n\\begin{equation}\n    Y = W X\n\\end{equation}\nare uncorrelated, i.e. $\\Sigma_Y \\equiv \\frac 1 d YY^T= I$. Note that $\\Sigma_X$ is constructed only from the training set, while $W$ is applied to both training and test set inputs.\nThe whitening transformation can be efficiently computed from the eigen-decomposition\\footnote{For PSD matrices, it is numerically more reliable to obtain this via SVD.}\n\\begin{equation}\n    \\Sigma_X = U D U^T\\,\n\\end{equation}\nwhere $D$ is the diagonal matrix of eigenvalues, and $U$ contains the eigenvectors of $\\Sigma_X$ as its columns. \n\nWith this, the ZCA whitening transformation is given by the whitening matrix\n\\begin{align}\n    W_\\text{ZCA} &= U \\sqrt{\\left(D + \\epsilon \\tfrac{tr(D)}{d} I_d\\right)^{-1} } \\, U^T \\,.\n\\end{align}\n\nHere, we introduce a trivial reparameterization of the conventional regularizer such that the regularization strength $\\epsilon$ is invariant to the input scale. It is easy to check that $\\epsilon \\rightarrow 0$ corresponds to whitening with $\\Sigma_Y = I$. In \\sref{sec:zca}, we study the benefit of a non-zero regularization strength for both kernels and finite networks. We refer to the transformation with a non-zero regularizer as ZCA regularization preprocessing. The ZCA transformation preserves the spatial and chromatic structure of the original image, as illustrated in~\\Figref{fig cifar zca}. Therefore, whitened image inputs are reshaped back to the shape of the original image. \n\nIn practice, we standardize both the training and test sets using the per-channel (RGB) feature statistics of the training set, before and after the ZCA whitening. This ensures the transformed inputs have mean zero and variance of order 1. 
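\nA minimal \\texttt{numpy} sketch of this regularized ZCA whitening (illustrative only; it follows the formulas above with \\texttt{X} of shape $d \\times m$):\n\\begin{verbatim}\nimport numpy as np\n\ndef zca_whitening_matrix(X, eps):\n    # X: (d, m) flattened training inputs; returns the (d, d) matrix W_ZCA.\n    d = X.shape[0]\n    sigma = X @ X.T / d                   # data covariance, as defined above\n    D, U = np.linalg.eigh(sigma)          # Sigma_X = U D U^T\n    D_reg = D + eps * D.sum() / d         # scale-invariant regularization\n    return U @ np.diag(1.0 / np.sqrt(D_reg)) @ U.T\n\n# W_ZCA is fit on (standardized) training inputs only, then applied to both\n# training and test inputs, which are reshaped back to image shape afterwards.\n\\end{verbatim}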
\n\n\n\n\\begin{figure}\n\\centering\n \\begin{overpic}[width=\\linewidth]{figures\/zca_fig.pdf}\n  \\put (0,1) {\\textbf{\\small(a)}}\n    \\put (48.5,1) {\\textbf{\\small(b)}}\n\\end{overpic} \n    \\caption{\n    \\textbf{Illustration of ZCA whitening.} Whitening is a linear transformation of a dataset\n    that removes correlations between feature dimensions, setting all non-zero eigenvalues of the covariance matrix to 1. \n    ZCA whitening is a specific choice of the linear transformation that rescales the data in the directions given by the eigenvectors of the covariance matrix, but without additional rotations or flips. \n    {\\em (a)} A toy 2d dataset before and after ZCA whitening. Red arrows indicate the eigenvectors of the covariance matrix of the unwhitened data.\n    {\\em (b)} ZCA whitening of CIFAR-10 images preserves spatial and chromatic structure, while equalizing the variance across all feature directions. \n    Figure reproduced with permission from \\citet{wadia2020whitening}. See also \\sref{sec:zca}.\n    }\n\\label{fig cifar zca}\n\\end{figure}\n\n\n\n\n\n\n\\section{MSE vs Softmax-cross-entropy loss training of neural networks}\n\\label{app:xent-vs-mse}\nOur focus was mainly on finite networks trained with the MSE loss, for simple comparison with kernel methods that give a closed-form solution. Here we present a comparison of MSE and softmax-cross-entropy trained networks. See Table~\\ref{tab:xent-vs-mse} and \\Figref{fig:xent-vs-mse}. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=.4\\columnwidth]{figures\/xent_vs_mse.pdf}\n\\caption{\\textbf{MSE trained networks are competitive, while there is a clear benefit to using the cross-entropy loss.}}\n\\label{fig:xent-vs-mse}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\caption{Effects of MSE vs softmax-cross-entropy loss on base networks with various interventions}\\label{tab:xent-vs-mse}\n\\begin{tabular}{llllllll}\n\\toprule\nArchitecture & Type & Param & Base & +LR+U & +L2+U & +L2+LR+U & Best \\\\\n\\midrule \\midrule\nFCN & MSE & STD & 47.82 & 49.07 & 49.82 & 55.32 & 55.90 \\\\\n & & NTK & 46.16 & 49.17 & 54.27 & 55.44 & 55.44 \\\\\n & XENT & STD & 55.01 & 57.28 & 53.98 & 57.64 & 57.64 \\\\\n & & NTK & 53.39 & 56.59 & 56.31 & 58.99 & 58.99 \\\\\n & MSE+DA & STD & 65.29 & 66.11 & 65.28 & 67.43 & 67.43 \\\\\n & & NTK & 61.87 & 62.12 & 67.58 & 69.35 & 69.35 \\\\\n & XENT+DA & STD & 64.15 & 64.15 & 67.93 & 67.93 & 67.93 \\\\\n & & NTK & 62.88 & 62.88 & 67.90 & 67.90 & 67.90 \\\\\n \\midrule\nCNN-VEC & MSE & STD & 56.68 & 63.51 & 67.07 & 68.99 & 68.99 \\\\\n & & NTK & 60.73 & 61.58 & 75.85 & 77.47 & 77.47 \\\\\n & XENT & STD & 64.31 & 65.30 & 64.57 & 66.95 & 66.95 \\\\\n & & NTK & 67.13 & 73.23 & 72.93 & 74.05 & 74.05 \\\\\n & MSE+DA & STD & 76.73 & 81.84 & 76.66 & 83.01 & 83.01 \\\\\n & & NTK & 83.92 & 84.76 & 84.87 & 85.63 & 85.63 \\\\\n & XENT+DA & STD & 81.84 & 83.86 & 81.78 & 84.37 & 84.37 \\\\\n & & NTK & 86.83 & 88.59 & 87.49 & 88.83 & 88.83 \\\\\n \\midrule\nCNN-GAP & MSE & STD & 80.26 & 80.93 & 81.10 & 83.01 & 84.22 \\\\\n & & NTK & 80.61 & 82.44 & 81.17 & 82.43 & 83.92 \\\\\n & XENT & STD & 83.66 & 83.80 & 84.59 & 83.87 & 83.87 \\\\\n & & NTK & 83.87 & 84.40 & 84.51 & 84.51 & 84.51 \\\\\n & MSE+DA & STD & 84.36 & 83.88 & 84.89 & 86.45 & 86.45 \\\\\n & & NTK & 84.07 & 85.54 & 85.39 & 86.68 & 86.68 \\\\\n & XENT+DA & STD & 86.04 & 86.01 & 86.42 & 87.26 & 87.26 \\\\\n & & NTK & 86.87 & 87.31 & 86.39 & 88.26 & 88.26 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\section{Comment on batch size}\n\\label{app:batch-size}\nThe correspondence between the NTK and gradient descent training is direct in the full batch gradient descent (GD) setup (see~\\cite{Dyer2020Asymptotics} for extensions to the mini-batch SGD setting). Therefore the base comparison between finite networks and kernels is in the full batch setting. While it is possible to train our base models with GD, a large empirical study on full CIFAR-10 becomes impractical. In practice, we use mini-batch SGD with batch size $100$ for FCN and $40$ for CNNs. \n\nWe studied the effect of batch size on training dynamics in \\Figref{fig:bs} and found that these batch-size choices do not alter training dynamics compared to much larger batch sizes. 
\n\\citet{shallue2018measuring, mccandlish2018empirical} observed that, across a wide variety of deep learning models, there is a critical batch size beyond which increasing the batch size yields little additional training speed benefit in terms of the number of steps. We observe that the maximal useful batch size in the workloads we study is quite small.\n\n\\begin{figure}\n\\centering\n\\begin{overpic}[width=0.8\\linewidth]{figures\/bs_fc_mse.pdf}\n \\put (0,0) {\\textbf{\\small(a)}}\n\\end{overpic}\\\\\n\\vspace{0.2cm}\n\\begin{overpic}[width=0.8\\linewidth]{figures\/bs_fc_xent.pdf}\n \\put (0,0) {\\textbf{\\small(b)}}\n\\end{overpic}\n\\\\\n\\vspace{0.2cm}\n\\begin{overpic}[width=0.8\\linewidth]{figures\/bs_cg_mse.pdf}\n \\put (0,0) {\\textbf{\\small(c)}}\n\\end{overpic}\\\\ \n\\vspace{0.2cm}\n\\begin{overpic}[width=0.8\\linewidth]{figures\/bs_cg_xent.pdf}\n \\put (0,0) {\\textbf{\\small(d)}}\n\\end{overpic}\n\n \\caption{\\textbf{Batch size does not affect training dynamics for moderately large batch sizes.}}\n \\label{fig:bs}\n\\end{figure}\n\n\n\\section{Additional tables and plots}\n\n\\begin{table}[h]\n\\centering\n\\caption{\\textbf{CIFAR-10 classification mean squared error (MSE) for nonlinear and linearized finite neural networks, as well as for NTK and NNGP kernel methods}.\nStarting from the \\texttt{Base} network of each architecture class described in \\sref{sec:experimental_design}, we show the change in performance from \\textbf{centering} (\\texttt{+C}), \\textbf{large learning rate} (\\texttt{+LR}), allowing \\textbf{underfitting} by early stopping (\\texttt{+U}), input preprocessing with \\textbf{ZCA regularization} (\\texttt{+ZCA}), multiple initialization \\textbf{ensembling} (\\texttt{+Ens}), and some combinations, for {\\color{standard_param}\\textbf{Standard}} and {\\color{ntk_param}\\textbf{NTK}} parameterization. 
See also Table~\\ref{tab:main-table} and \\Figref{fig:tricks_vs_accuracy} for accuracy comparison.}\n\\vspace{0.3cm}\n\\label{tab:main-table-mse}\n\\resizebox{\\columnwidth}{!}{%\n\n\\begin{tabular}{@{}lc|ccccccccc|cc|cc@{}}\n\\toprule\n{} &\n Param &\n Base &\n +C &\n +LR &\n +L2 &\n \\begin{tabular}[c]{@{}l@{}}+L2 \\\\ +U\\end{tabular}\n &\n \\begin{tabular}[c]{@{}c@{}}+L2 \\\\ +LR\\end{tabular} &\n \\begin{tabular}[c]{@{}l@{}}+L2 \\\\+LR\\\\ +U \\end{tabular} &\n +ZCA &\n \\begin{tabular}[c]{@{}c@{}} Best \\\\ w\/o DA\\end{tabular} &\n +Ens &\n \\begin{tabular}[c]{@{}l@{}}+Ens \\\\+C \\end{tabular}\n&\n \\begin{tabular}[c]{@{}l@{}} +DA\\\\ +U \\end{tabular}&\n \\begin{tabular}[c]{@{}l@{}}+DA \\\\+L2\\\\ +LR\\\\ +U\\end{tabular} \\\\\n \n \n\\midrule \\midrule\n FCN & STD & 0.0443 & 0.0363 & 0.0406 & 0.0411 & 0.0355 & 0.0337 & 0.0329 & 0.0483 & 0.0319 & 0.0301 & 0.0304 &0.0267 & 0.0242 \\\\\n & NTK & 0.0465 & 0.0371 & 0.0423 & 0.0338 & 0.0336 & 0.0308 & 0.0308 & 0.0484 & 0.0308 & 0.0300 & 0.0302 & 0.0281 & 0.0225 \\\\\n \\midrule\n CNN-VEC & STD & 0.0381 & 0.0330 & 0.0340 & 0.0377 & 0.0279 & 0.0340 & 0.0265 & 0.0383 & 0.0265 & 0.0278 & 0.0287 & 0.0228 & 0.0183 \\\\\n & NTK & 0.0355 & 0.0353 & 0.0355 & 0.0355 & 0.0231 & 0.0246 & 0.0227 & 0.0361 & 0.0227 & 0.0254 & 0.0278 & 0.0164 & 0.0143 \\\\\n \\midrule\n CNN-GAP & STD & 0.0209 & 0.0201 & 0.0207 & 0.0201 & 0.0201 & 0.0179 & 0.0177 & 0.0190 & 0.0159 & 0.0172 & 0.0165 & 0.0185 & 0.0149 \\\\\n & NTK & 0.0209 & 0.0201 & 0.0195 & 0.0205 & 0.0181 & 0.0175 & 0.0170 & 0.0194 & 0.0161 & 0.0163 & 0.0157 & 0.0186 & 0.0145 \\\\\n\\bottomrule\n\\end{tabular}%\n}\n\\\\\n\\vspace{0.2cm}\n\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{@{}lc|cccccc||ccc|ccc@{}}\n\\toprule\n &\n Param &\n Lin Base &\n +C &\n +L2 &\n \\begin{tabular}[c]{@{}l@{}}+L2 \\\\+U \\end{tabular} &\n +Ens &\n \\begin{tabular}[c]{@{}l@{}}+Ens \\\\+C \\end{tabular}&\n NTK & +ZCA & \\begin{tabular}[c]{@{}c@{}}+DA \\\\ +ZCA\\end{tabular}&\n NNGP &+ZCA &\n\\begin{tabular}[c]{@{}c@{}}+DA \\\\ +ZCA\\end{tabular}\\\\\n \\midrule\\midrule\n FCN & STD & 0.0524 & 0.0371 & 0.0508 & 0.0350 & 0.0309 & 0.0305 & 0.0306 & 0.0302 & - & \\multirow{2}*{0.0309} & \\multirow{2}*{0.0308} & \\multirow{2}*{0.0297} \\\\\n & NTK & 0.0399 & 0.0366 & 0.0370 & 0.0368 & 0.0305 & 0.0304 &0.0305 & 0.0302 & 0.0298 \\\\\n \\midrule\n CNN-VEC & STD & 0.0436 & 0.0322 & 0.0351 & 0.0351 & 0.0293 & 0.0291 & 0.0287 & 0.0277 & - & \\multirow{2}*{0.0286} & \\multirow{2}*{0.0281} & \\multirow{2}*{0.0256}\\\\\n & NTK & 0.0362 & 0.0337 & 0.0342 & 0.0339 & 0.0286 &0.0286 & 0.0283 & 0.0274 & 0.0273 \\\\\n \\midrule\n CNN-GAP & STD & \\multicolumn{6}{c||}{< 0.0272* (Train accuracy 86.22 after 14M steps)} & 0.0233 & 0.0200& - & \\multirow{2}*{0.0231} & \\multirow{2}*{0.0204} & \\multirow{2}*{0.0191} \\\\\n & NTK & \\multicolumn{6}{c||}{< 0.0276* (Train accuracy 79.90 after 14M steps)} & 0.0232 & 0.0200& 0.0195\\\\\n \\bottomrule\n\\end{tabular}%\n}\n\\end{table}\n\n \n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/nngp_vs_ntk_uci.pdf}\n\\caption{\\textbf{On UCI dataset NNGP often outperforms NTK on RMSE.}\nWe evaluate predictive performance of FC NNGP and NTK on UCI regression dataset in the standard 20-fold splits first utilized in~\\cite{hernandez2015probabilistic, gal2016dropout}. We plot average RMSE across the splits. Different scatter points are varying hyperparameter settings of (depth, weight variance, bias variance). 
In the tabular data setting, the dominance of NNGP across datasets is not as prominent as in the image classification domain. \n}\n\\label{fig:nngp-vs-ntk-uci}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/ensemble_training_curve.pdf}\n\\caption{\\textbf{Centering can accelerate training}. Validation (top) and training (bottom) accuracy throughout training for several finite width architectures. See also \\sref{sec:ensemble_of_networks} and \\Figref{fig:validation_curves}.\n}\n\\label{fig:training_curves}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/ensemble_network_performance_combined.pdf}\n\\caption{\\textbf{Ensembling base networks causes them to match kernel performance, or exceed it for nonlinear CNNs.} See also \\sref{sec:ensemble_of_networks} and \\Figref{fig:ensemble}.\n}\n\\label{fig app ensemble}\n\\end{figure}\n\n\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{overpic}[width=\\linewidth]{figures\/network_l2.pdf}\n\\end{overpic} \n\\\\\n\\caption{\n\\textbf {Performance of nonlinear and linearized networks as a function of L2 regularization for a variety of widths.} Dashed lines are NTK parameterized networks, while solid lines are networks with standard parameterization. We omit linearized \\texttt{CNN-GAP} plots as they did not converge even with an extensive compute budget.\nL2 regularization is more helpful in networks with an NTK parameterization than with a standard parameterization.\n\\label{fig:reg-compare-sm}\n}\n\\end{figure}\n\n\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{overpic}[width=.6\\columnwidth]{figures\/l2_to_init_training_curve.pdf}\n \\put (0,0) {\\textbf{\\small(a)}}\n\\end{overpic} \n\\begin{overpic}[width=.32\\columnwidth]{figures\/l2_to_init.pdf}\n \\put (0,0) {\\textbf{\\small(b)}}\n\\end{overpic} \n\\caption{\\textbf{L2 regularization toward the initial weights does not provide a performance benefit.} {\\bf (a)} Training curves with L2 regularization toward either 0 or the initial weights. {\\bf (b)} Peak performance after L2 regularization toward either 0 or the initial weights. Increasing L2 regularization toward the initial weights does not provide performance benefits; instead, performance remains flat until the model's capacity deteriorates. \n}\n\\label{fig:l2-init}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/network_vs_width.pdf}\n\\caption{\n\\textbf{Finite width networks generally perform better with increasing width, but \\texttt{CNN-VEC} shows surprising non-monotonic behavior.} See also~\\sref{sec:perf_vs_width} and \\Figref{fig:width}.\n {\\bf L2}: non-zero weight decay allowed during training. {\\bf LR}: large learning rate allowed. Dashed lines allow underfitting (\\textbf{U}).}\n\\label{fig:width-combined}\n\\end{figure}\n\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/zca_train_curves_std.pdf}\n\\includegraphics[width=\\columnwidth]{figures\/zca_train_curves_ntk.pdf}\n\\caption{\\textbf{ZCA regularization helps finite network training.}\n(\\textbf{upper}) Standard parameterization, (\\textbf{lower}) NTK parameterization. 
See also \\sref{sec:zca} and \\Figref{fig:zca}.\n}\n\\label{fig:app-zca-training}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{figures\/da_ensemble_fraction.pdf}\n\\caption{\\textbf{Data augmentation ensemble for infinite network kernels with varying augmentation fraction.} See also \\sref{sec:data-augmentation}.}\n\\label{fig:kerne-da-ens-frac}\n\\end{figure}\n\n\n