diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzigz" "b/data_all_eng_slimpj/shuffled/split2/finalzigz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzigz" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\\acresetall\n\nCryptocurrencies have become an increasingly popular medium for decentralized transactions in recent years. The computation of a cryptographic \\ac{PoW} is the foundational concept of decentralized consensus. It is computed by so-called miners, who are rewarded with a fraction of cryptocurrency for their effort. Currently, \\ac{PoW} computations are performed by iterating a cryptographic hash function until the output has a prescribed form. This energy-intensive process presents a prime candidate for hardware acceleration.\n\nSeveral cryptocurrencies adopt so-called ASIC-resistant \\ac{PoW} algorithms. These types of \\ac{PoW} aim to deter dedicated hardware miners in favor of CPU- and GPU-based miners, which are more generally available to the public, thereby sustaining the distributed ledger idea. The Haven Protocol \\cite{Haven} is one such project. For its \\ac{PoW}, Haven leverages a custom ASIC-resistant hash function named \\textit{CryptoNight-Haven}. \n\n\n\n\n\n\nIn this project, we challenge the ASIC-resistance claims of CryptoNight-Haven by implementing the \\ac{PoW} as an RTL kernel on an FPGA. We target the Xilinx Varium C1100 Blockchain Accelerator Card \\cite{VariumC1100}. The card employs Xilinx's recent UltraScale+ architecture, \\ac{XRT} integration over a high-speed PCIe Gen 4 bus connection, and 8~GB of \\ac{HBM}. Xilinx demonstrated that the Varium C1100 accelerates transaction validation in Hyperledger Fabric \\cite{androulaki2018hyperledger} over an Intel Xeon Silver 4114 CPU by a factor of 14$\\times$.\n\nOur CryptoNight-Haven accelerator aims at a fully hardware-based computation of the hash rather than a software-assisted one. 
It employs a pipelined datapath with multi-hash computation in a single kernel, and targets multiple kernel instantiations on a single FPGA with nonce-based \\ac{HBM} partitioning. We verified the computation modules under simulation; however, its memory interface demands improvements for random access rather than the simulator's straightforward memory models. \n\nOur CryptoNight-Haven miner RTL, testbench, and host-code (a software patch to XMRig \\cite{xmrig} with \\ac{XRT} integration) are publicly available at: \\url{https:\/\/github.com\/KULeuven-COSIC\/CryptoNightHaven-FPGA-miner}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Algorithm}\n\nThe Cryptonight-Haven algorithm is a variant of Cryptonight \\cite{Cryptonote}. It chains together multiple well-known cryptographic primitives: AES, Keccak, Blake, etc. Collectively, these primitives are used to initialize a large scratchpad data buffer, on which semi-random operations are performed. \n\n\n\\begin{figure}[t]\n\\centering\n\\input{figures\/overview2.tex}\n\\caption{The dataflow Cryptonight-Haven. \n}\n\\label{fig:overview}\n\\end{figure}\n\n\\Cref{fig:overview} shows an overview of the computation flow. First, the input passes through a Keccak module, extracting the state. The explode module takes the first 32 bytes of the state and expands them to 10 AES round keys. These keys are used to perform AES rounds on the remaining bytes (divided into 8 blocks of 128-bits) of the Keccak state. After 10 rounds, the AES output is written to the memory buffer. Next, the blocks are XOR'ed with each other, and they again undergo 10 rounds of AES with the same keys as before. This process is repeated until 4 MB of data has been generated.\n\nThe Shuffle step performs a semi-random operation; either AES, a division, or a multiplication followed by addition. Corresponding input and output data yield intensive memory accesses to irregular addresses.\n\nThe Implode operation is similar to Explode. 
Bytes 32-63 of the state are used to generate AES round keys. Bytes are read from the start of the memory buffer and XOR'ed with state bytes 64-191. Subsequently, these bytes are put through 10 AES rounds --one round per key-- and the next bytes in the memory buffer are read. After reading the entire 4 MB scratchpad buffer twice, the 10 AES rounds are repeated 16 additional times without reading from the memory buffer.\n\nFinally, the new state passes through a Keccak permutation to generate the final state. Depending on the 2 LSBs of this state, it is subsequently hashed using either of the Blake, Skein, JH or Groestl schemes, resulting in the final 256-bit output.\n\n\n\\section{Implementation}\n\nOur single accelerator kernel employs a datapath with a module hierarchy similar to \\Cref{fig:overview}. The hash computations in the first and last steps of the computation stages contribute a negligible overhead. The Explode step consists of a simple implementation that generates 4 MB of data with simple binary operations and stores it into \\ac{HBM} memory. It heavily benefits from AXI burst transfers with multiple outstanding transactions. The Implode step has double the latency of Explode for accessing the memory twice. For the underlying computations, both the Explode and Implode modules employ 10 AES cores.\n\nIn contrast, Shuffle's computation is demanding due to the multitude of iterations and underlying memory accesses. Additionally, data dependencies between consecutive accesses prevent optimizations, e.g. burst transfers. The memory size prevents using on-chip memory --BRAM or URAM-- that allows single-cycle data access. Irregular addresses are another obstacle that prevents caching parts of the memory on-chip. Hence, minimizing data transfer overheads is the most advantageous strategy.\n\nThe memory accesses of Shuffle form one of the foundations for CryptoNight-Haven's ASIC-resistance claims. 
Implementing these computations on an ASIC requires significant chip resources to be reserved for implementing memory, leaving a limited amount of silicon for the computation. In contrast, the Varium C1100 natively has 8~GB of \\ac{HBM} available, and our accelerator heavily benefits from it. Moreover, the partitioning of \\ac{HBM} into a number of \\acp{PC} allows us to instantiate each Shuffle unit with a dedicated memory port, resulting in a scalable design.\n\n\nWe pipelined Shuffle for simultaneous computation of up to 128 hashes to boost the computation performance. That requires an identical increase in memory consumption --easily accommodated by the 8~GB HBM-- where individual nonces for each hash computation partition memory regions. In line with this pipelining, we split the computation into various stages that communicate over AXI-Stream interfaces connected with FIFOs. Our pipelining approach allows the time-critical Shuffle module to be clocked at 500~MHz while the other modules remain at 200~MHz.\n\n\n\\section{Future Work}\n\nIn its current state, our design computes hashes correctly within the Vivado simulation environment. That employs a simplified view of memory, which restricts the memory model to AXI accessed BRAM. However, when instantiated as an \\ac{XRT} kernel on the Varium C1100, the hash computations are inconsistent with simulation. We have enhanced the design with a set of AXI-lite accessible status registers that collect additional performance and debug information on the hardware execution. 
The further roadmap we envision is as follows:\n\n\\begin{enumerate}\n \\item Enhancing our Vivado simulation with random AXI access latencies and AXI protocol checkers.\n \\item Progressing with \\ac{XRT} kernel construction, by replacing the BRAM with HBM under Vitis \\texttt{hw\\_emu} based \\ac{XRT} executions.\n \\item Extending Shuffle's memory accesses with Xilinx' \\ac{RAMA} IP.\n\\end{enumerate}\n\nAfter these steps enable the correct computation of the CryptoNight-Haven \\ac{PoW}, we have already taken the first steps to integrate the accelerator into XMRig \\cite{xmrig} using \\ac{XRT} APIs. The accelerator should be compared thoroughly to existing CPU and GPU-based miners for Cryptonight-Haven, hopefully showing increased throughput and\/or energy efficiency. Finally, we also aim to compare to related work: FPGA-based miners were proposed for the ASIC-resistant \\ac{PoW} Lyra2REv2~\\cite{Lyra2-FPGA, Lyra2-standalone}, Scrypt \\cite{MRSA21}, and X16R \\cite{9786081}.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro.sec}\n\nAs an Italian-Chinese collaboration project, the experiment ARGO-YBJ \n(Astrophysical Radiation with Ground-based Observatory at YangBaJing) \nis under way over the next few years at Yangbajing High Altitude Cosmic Ray \nLaboratory (4300 m a.s.l., 606 $g\/cm^2$), 90 km North to Lhasa \n(Tibet, P.R. China). \nThe aim of the experiment is the study of cosmic rays, mainly \ncosmic $\\gamma$-radiation, at an energy threshold of $\\sim 100$ $GeV$, by \nmeans of the detection of small size air showers at high altitude. \n\nThe apparatus consists of a full coverage detector of dimension \n$\\sim 71\\times 74\\>m^2$ realized with a single layer of Resistive Plate \nCounters (RPCs). The area surrounding the central detector core, up to $\\sim\n100\\times 100\\>m^2$, consists in a guard ring partially ($\\sim 50\\> \\%$) \ninstrumented with RPCs. 
\nThis outer detector improves the apparatus performance, enlarging the \nfiducial area for the detection of showers with the core outside the \nfull coverage carpet. \nA lead converter $0.5$ $cm$ thick will uniformly cover the RPC plane in order\nto increase the number of charged particles by conversion of shower photons \nand to reduce the time spread of the shower front.\nThe site coordinates (longitude $90^{\\circ}$ 31' 50'' E, latitude \n$30^{\\circ}$ 06' 38'' N) permit the monitoring of the Northern hemisphere in \nthe declination band $-10^{\\circ}<\\delta <70^{\\circ}$. \n\nARGO-YBJ will image with high sensitivity atmospheric showers induced by\nphotons with $E_{\\gamma}\\geq 100$ $GeV$, allowing us to bridge the\nGeV and TeV energy regions and to address a wide range of fundamental issues in\nCosmic Ray and Astroparticle Physics (Abbrescia et al. (1996)):\n\\begin{verse}\n1) {\\it \\underline {Gamma--Ray Astronomy}}, at a $\\sim 100$ $GeV$ threshold energy. \nSeveral galactic and extragalactic candidate point sources can be \ncontinuously monitored, with a sensitivity to unidentified sources better than \n$10\\%$ of the Crab flux.\\\\\n2) {\\it \\underline {Diffuse Gamma--Rays}} from the Galactic plane, \nmolecular clouds and Supernova Remnants at energies $\\geq 100\\>GeV$.\\\\\n3) {\\it \\underline {Gamma Ray Burst physics}}, by allowing the extension of the \nsatellite measurements over the full GeV\/TeV energy range.\\\\\n4) {\\it \\underline {$\\overline{p}\/p$ ratio}} at energies $300\\>GeV\\div TeV$ not \naccessible to satellites, with a sensitivity adequate to distinguish \nbetween models of galactic or extragalactic $\\overline{p}$ origin.\\\\\n5) {\\it \\underline {Sun and Heliosphere physics}}, including cosmic ray \nmodulations at $10\\>GeV$ threshold energy, the continuous \nmonitoring of the large scale structure of the interplanetary \nmagnetic field and high energy gamma and neutron flares from the Sun.\n\\end{verse}\n\nAdditional items come 
from using ARGO-YBJ as a traditional EAS array \ncovering the full energy range from $10^{11}$ to $10^{16}\\>eV$. \nSince the detector provides a high granularity space-time picture of the \nshower front, detailed study of shower properties as, for instance, \nmulticore events, time and lateral distributions of EAS particles, \nmultifractal structure of particle densities near the core, can be \nperformed with unprecedented resolution. \nDetector assembling will start late in 2000 and data taking with the first \n$\\sim$ 750 $m^2$ of RPCs in 2001. \n\nIn order to investigate both the RPCs performance at 4300 $m$ a.s.l. and \nthe capability of the detector to sample the shower front of atmospheric \ncascades, during 1998 for the first time a full coverage carpet of \n$\\sim 50\\>m^2$ has been put in operation in the Yangbajing Laboratory. \nThe present paper is a report of this test-experiment. \n\n\\section{The test experiment at Yangbajing}\n\\label{testexp.sec}\n\nThe basic elements of ARGO-YBJ detector are RPCs of dimensions $280\\times \n125\\>cm^2$. The detector is organized in modules of 12 chambers whose \ndimensions are $5.7\\times 7.9\\>m^2$. \nThis group of RPCs represent a logical subdivision (Cluster) of the \napparatus: the detector consists of 117 Clusters in the central part and \n28 Cluster in the guard ring for a total of 1740 RPCs. The proposed \nlay-out allows to achieve an active area of $\\sim 92\\%$ the total. \nThe trigger and the DAQ systems are built following a two level\narchitecture. The signals from the Cluster are managed by Local Stations.\nThe information from each Local Station is collected and elaborated in the\nCentral Station. According to this logic a module of 12 RPCs (i.e. the Cluster)\nrepresents the basic detection unit.\n\nA Cluster prototype, very similar to that proposed for ARGO-YBJ \nexperiment, has been installed in the Yangbajing Laboratory. 
It \nconsists of 15 chambers distributed in 5 columns with an active area of \n$\\sim 90\\%$. The total area is about $6.1\\times 8.7$ $m^2$. \n\nThe detector consists of single-gap RPCs made of bakelite (with volume \nresistivity \n$\\rho > 5\\cdot 10^{11}\\>\\Omega\\cdot cm$) with a $280\\times 112\\>cm^2$ area. \nThe RPCs read-out is performed by means of Al strips 3.3 $cm$ wide and 56 \n$cm$ long at the edge of which the front-end electronics is connected.\nThe FAST-OR of 16 strips defines a logical unit called pad: ten pads \n($56\\times 56\\>cm^2$) cover each chamber. The FAST-OR signal from each pad \nis sent, via coaxial cable, to dedicated modules that generate the trigger \nand the STOP signals to the TDCs. Each channel of the TDCs \nmeasures, with $1$ $ns$ clock, the arrival times (up to \n16 hits per channel) of the particles hitting a pad, for a total number \nof 150 timing channels. \nThe RPCs have been operated in streamer mode with a gas mixture of \nargon ($15\\%$), isobutane ($10\\%$) and tetrafluoroethane $C_2H_2F_4$ \n($75\\%$), at a voltage of 7400 $V$, about 500 $V$ above the plateau knee. \nThe efficiency of the detector, as measured by a small telescope selecting \na cosmic ray beam, is $>95\\%$, and the intrinsic time resolution \n$\\sigma_t\\sim 1$ $ns$. The description of the results concerning the RPCs\nperformance at YangBaJing are given in Bacci et al. (1999). \n\n\\section{Data Analysis}\n\\label{datana.sec}\n\nDifferent triggers based on pad multiplicity have been used to collect \n$\\sim 10^6$ shower events with $0.5$ $cm$ of lead on the whole carpet in \nApril-May 1998.\nThe integral rate as a function of the pad multiplicity is shown in Fig. 1 \nfor showers before and after the lead was installed. \nA comparison at fixed rate indicates an increasing of pad multiplicity due to \nthe effect of the lead of $\\sim 15\\div 20\\%$, as expected according to our \nsimulations. 
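The integral rate curve of Fig. 1 is simply the rate of events whose pad multiplicity exceeds a given threshold. As a minimal sketch of how such a curve is tabulated (with made-up multiplicities and live time, not the experiment's actual DAQ code):

```python
# Sketch: compute the integral trigger rate R(>=N), i.e. the rate of events
# whose pad multiplicity is at least N, as a function of the threshold N.
# The event list and live time below are hypothetical, for illustration only.

def integral_rate(multiplicities, live_time_s, thresholds):
    """Return {N: rate in Hz of events with pad multiplicity >= N}."""
    counts = {n: 0 for n in thresholds}
    for m in multiplicities:
        for n in thresholds:
            if m >= n:
                counts[n] += 1
    return {n: counts[n] / live_time_s for n in thresholds}

events = [12, 40, 7, 90, 33, 150, 25, 60, 10, 45]   # hypothetical pad multiplicities
rates = integral_rate(events, live_time_s=2.0, thresholds=[10, 25, 50, 100])
# the integral rate is by construction non-increasing in the threshold
assert rates[10] >= rates[25] >= rates[50] >= rates[100]
```

Comparing two such curves, with and without the lead converter, at a fixed rate gives the multiplicity shift quoted below.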
\n\n\\begin{figure}[htb]\n\\vfill \\begin{minipage}{.47\\linewidth}\n\\begin{center}\n\\mbox{\\epsfig{file=frequenze.eps,height=8.cm,width=8.cm}}\n\\end{center}\n\\caption{\\em The integral rate as a function of the pad multiplicity. }\n\\end{minipage}\\hfill\n\\hspace{-0.5cm}\n\\begin{minipage}{.47\\linewidth}\n\\begin{center}\n\\mbox{\\epsfig{file=linee.eps,height=8.cm,width=8.cm}}\n\\end{center}\n\\caption{\\em Time profile observed in a typical event. Straight lines are \nfit to experimental hits. }\n\\end{minipage}\\hfill\n\\end{figure}\n\n\n\\subsection{Detector Calibration}\n\\label{detcali.sec}\n\nThe relative time offset among different pads are measured as follows: \n\\begin{verse}\n1) We construct the time distributions of each TDC channel adding all \nthe delays in the individuals events and compute the relative time \nmean values $\\overline{t_i}$. \\\\\n2) We compute the mean value of the $\\overline{t_i}$ distribution: \n$={ { \\sum_{i=1}^{N} \\overline{t_i} } \\over N}$.\\\\\n3) We define the time offset as $\\Delta t_i=\\overline{t_i}-$. These values \nare used to correct the times provided by each TDC channel. \n\\end{verse}\nTo check the consistency of the procedure, we fit on an event-by-event \nbasis the corrected time values to a plane, construct for each TDC channel \nthe distribution of time residuals $\\delta t_i=t_{plane}-t_i$ and calculate \nthe mean values $<\\delta t_i>$. These are distributed with a spread of \n$\\sim$ $0.6$ $ns$. \n\n\\subsection{ Event Reconstruction}\n\nIn this test-experiment we are not able to determine the core position, \ntherefore we use a plane as the fitting function. Moreover, the estimated \narrival direction is relatively free from the curvature effect because we \nsample only a small portion of the shower front. \nIn this approximation the expected particle arrival time is a linear \nfunction of the position.\nThe time profile observed in a typical event is shown in Fig. 2. 
Here $x,y$ \nare orthogonal coordinates which identify the pad position. Straight \nlines are one-dimensional fits to experimental hits along two different $x$ \nvalues. \n\nWe perform an optimized reconstruction procedure of the shower direction \nas follows:\n\\begin{verse}\n1) Unweighted plane fit to hits for each event with pad multiplicity \n$\\geq$ 25. \\\\ \n2) Rejection of out-lying points by means of a 2.5 $\\sigma$ cut and iteration \nof the fit until no further points are rejected.\\\\\n\\end{verse}\nAfter these iterations a fraction $\\leq 10\\%$ of the time signals that\ndeviate most from the fitted plane are excluded from further analysis. \nThe distribution of time residuals $\\delta t$ = $t_{plane}-t_i$\n(Fig. 3) exhibits a long tail due to time fluctuations and to the\ncurved profile of the shower front, more pronounced for low multiplicity. \nThe width of these distributions is related to the time thickness of the \nshower front. Since the position of the shower core is not reconstructed, \nthe experimental result concerns a time thickness averaged over different radial \ndistances. Increasing the pad multiplicity selects showers with the core near the \ndetector, as confirmed by MC simulations. Taking into account the total \ndetector jitter of $1.3$ $ns$ (RPC intrinsic jitter, strip length, \nelectronics time resolution), the time jitter of the earliest particles in \nhigh multiplicity events ($116\\div 120$ hits) is estimated to be $\\sim 1.1$ $ns$. 
\n\n\n\\begin{figure}[htb]\n\\vfill \\begin{minipage}{.47\\linewidth}\n\\begin{center}\n\\mbox{\\epsfig{file=residui.eps,height=8.cm,width=8.cm}}\n\\end{center}\n\\caption{\\em Distribution of time residuals for events with different pad \nmultiplicity (all channel added).}\n\\end{minipage}\\hfill\n\\hspace{-0.5cm}\n\\begin{minipage}{.47\\linewidth}\n\\begin{center}\n\\mbox{\\epsfig{file=teta.eps,height=8.cm,width=8.cm}}\n\\end{center}\n\\caption{\\em Even-odd angle difference distribution for events with \ndifferent pad multiplicity.}\n\\end{minipage}\\hfill\n\\end{figure}\n\n\n\\subsection{Angular Resolution}\n\nThe angular resolution of the carpet has been estimated by dividing \nthe detector into two independent sub-arrays and comparing the two\nreconstructed shower directions. \nEvents with N total hits have been selected according to the \nconstraint $N_{odd}\\simeq N_{even}\\simeq N\/2$. \nThe even-odd angle difference $\\Delta \\theta_{eo}$ is shown in Fig. 4 for \nevents in different multiplicity ranges. \nWe note that these distributions narrow, as expected, with the increase of \nthe shower size. \nTo see the dependence of the angular resolution on the lead sheet, we show \nin Fig. 5 the median $M_{\\Delta \\theta_{eo}}$ of the distribution of \n$\\Delta \\theta_{eo}$ as a function of pad multiplicity for showers \nreconstructed before and after the lead was added. \nThe improvement of the angular resolution is a factor $\\sim$ 1.4 for $N=50$\nand decreases with increasing multiplicity. \n\nAssuming that the angular resolution for the entire array is Gaussian \n(Alexandreas et al. 
(1992)), a standard deviation \n$\\sigma_{\\theta}\\sim 2.1^{\\circ}$ is found for events with a pad \nmultiplicity $\\geq 100$.\n\n\\section{Conclusions}\n\\label{conclu.sec}\n\n\\begin{figwindow}[1,r,%\n{\\mbox{\\epsfig{file=risol.eps,height=7.cm,width=7.cm}}},%\n{\\em Median of $\\Delta \\theta_{eo}$ distribution as a function of\npad multiplicity.}]\nA Resistive Plate Counters carpet of $\\sim$ 50 $m^2$ has been put in \noperation at the Yangbajing Laboratory to study the high altitude \nperformance of RPCs and the detector capability of imaging with high \ngranularity a small portion of the EAS disc, in view of an enlarged use \nin Tibet (ARGO-YBJ experiment). \n\nIn this paper we have presented the results of this test experiment \nconcerning the carpet capability of reconstructing the shower features. \nIn particular, we have focused on the angular resolution \nin determining the arrival direction of air showers, the most important \nparameter for $\\gamma$-ray astronomy studies. \n\nThe effect of a $0.5$ $cm$ lead sheet on the whole carpet has been \ninvestigated. An increase $15\\div 20\\%$ of the hit multiplicity is found. \nThe improvement of the angular resolution depends on the shower density. \n\nThe test confirms that RPCs can be operated efficiently to sample air showers \nat high altitude with excellent space and time resolutions. \nThe results are consistent with data assumed in the computation of the \nperformance of the ARGO-YBJ detector. \n\\end{figwindow}\n\n\\vspace{1ex}\n\\begin{center}\n{\\Large\\bf References}\n\\end{center}\nAbbrescia M. et al., {\\it Astroparticle Physics with ARGO}, Proposal (1996). \nThis document can be downloaded at the URL: \nhttp:\/\/www1.na.infn.it\/wsubnucl\/cosm\/argo\/argo.html\\\\\nAlexandreas D.E. et al., Nucl. Instr. Meth. A311 (1992) 350.\\\\\nBacci C. et al. (ARGO-YBJ coll.), (1999) submitted to Nucl. Instr. 
Meth.\\\\\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWe will study the so-called source-type self-similar solution to the Navier-Stokes\nequations. Our motivation for this is as follows.\nFirst of all, it will give us a\nparticular self-similar solution to the Navier-Stokes equations, which\ncharacterises the decaying process in the late stage of evolution. Second,\nit is likely to give useful information as to how we may handle more general\nsolutions.\n\nIt has been shown under mild conditions that no nontrivial smooth backward self-similar\nsolution exists to the Navier-Stokes equations. On the other hand it is known that\nnontrivial forward self-similar solutions {\\it do} exist, but their explicit\nfunctional forms are not known, except for some asymptotic results. \nIt is of interest to see how they actually behave, because\nsuch solutions contain important information regarding more general\nsolutions.\nThis is particularly the case when the governing equations are exactly\nlinearisable, e.g. the Burgers equations. 
While it is not expected\nthat the Navier-Stokes equations are exactly soluble in general, we might still obtain\ninsights into the nature of their solutions.\n\nIn \\cite{CP1996} the existence of forward self-similar solutions\nfor small data was proved by using a fixed-point theorem in a Besov space (see below).\nThere the initial data for the (3D) self-similar solution\nare assumed to be homogeneous of degree $-1$ in velocity\n$$\\bm{u}_0(\\lambda \\bm{x})=\\lambda^{-1} \\bm{u}_0(\\bm{x})$$\nand the existence of a self-similar solution of the form\n$$\\bm{u}(\\bm{x},t)=\\frac{1}{\\sqrt{t}}\\bm{U}\\left(\\frac{\\bm{x}}{\\sqrt{t}} \\right)$$\nhas been established under the assumption that the initial data are small in some Besov space.\nMoreover, it has been proved that the self-similar profile $U$ satisfies (in their notation)\n$$U=S(1) u_0 +W,$$\nwhere $S(1)$ denotes the heat operator at time 1 and $\\|W\\|_{L^3}$ is small.\nIn Jia-Sverak (2014), using a locally H{\\\"o}lder continuous class in $\\mathbb{R}^3\\backslash\\{0\\}$,\nthe smallness assumption has been removed and\nit is furthermore shown that\n$$|U(x)-e^{\\triangle}u_0(x)| \\leq \\frac{C(M)}{(1+|x|)^{1+\\alpha}},$$\nwhere $0 < \\alpha <1$ and $C(M)$ denotes a constant depending on some norm $M$ of $u_0$. \nThose studies indicate that the self-similar solution is close to the heat flow in the late\nstage. However, studies on the determination of a specific functional form of\nself-similar solutions are few and far between, except for an attempt in \\cite{Brandolese2009}.\nWe also note that the existence of generalised self-similar solutions (in the sense that scaling holds\nonly at a set of discrete values of $\\lambda$) was studied subsequently, e.g. \\cite{CW2017, BT2019}.\n\nBy definition the source-type solution for nonlinear parabolic PDEs is a solution in a scaled space,\nwhich starts from a Dirac mass in \\textit{some} dependent variable and ends up like a near-identity of\nthe Gaussian function in the long-time limit. 
It is a counterpart to the fundamental solution to nonlinear PDEs.\nIt is important to choose the right unknown to alleviate the difficulty of analysis.\n\nWe will construct an approximate solution valid in the long-time limit, using\nthe vorticity curl $\\nabla \\times \\bm{\\omega}$.\nWe will explain why this is the most convenient variable for our purpose.\n\nThe unknown whose $L^1$-norm is marginally divergent is suitable for describing the late-stage\nevolution. This is because it satisfies the same scaling as the Dirac mass and both of them\nbelong to a Besov space near $L^1$. \nIn one dimension, in the limit of $t \\to 0$, we have roughly\n$$u \\sim \\dfrac{1}{x},\\; \\mbox{marginally}\\,\\notin L^1(\\mathbb{R}^1),$$\nwhich shows that the velocity is convenient in this case.\n\nIn two dimensions it is the vorticity which is the most convenient, \nas can be seen from\n$$\\omega \\sim \\dfrac{1}{r^2},\\;\\mbox{marginally}\\,\\notin L^1(\\mathbb{R}^2),$$\nwhere $|\\bm{x}|=r$.\nRecall that those scaling properties of velocity in 1D or vorticity in 2D,\nare the same as that of the Dirac mass:\n$\\lambda^d \\delta(\\lambda \\bm{x})=\\delta(\\bm{x})$\nin $d$-dimensions.\nNow consider Besov spaces whose norms are given by\n$$\\|\\bm{u}\\|_{B^s_{pq}} \\equiv \\left\\{\\sum_{j=1}^\\infty\n\\left( 2^{sj} \\|\\Delta_j (\\bm{u}) \\|_{L^p}\\right)^q \\right\\}^{1\/q},$$\nwhere $1 \\leq p,q \\leq \\infty, s \\in \\mathbb{R}$ and $\\Delta_j (\\bm{u})$ represents band-filtered velocity\nat frequency $2^j$.\nIt is known that in $\\mathbb{R}^d$ the Dirac delta mass is embedded as\n\\begin{equation}\\label{delta}\n\\delta(\\bm{x}) \\in B_{p,\\infty}^{-d+d\/p},\\;\\;\\mbox{for}\\;\\; p \\geq 1,\n\\end{equation}\nsee e.g. \\cite{HL2017}. 
In particular we have\n$\\delta(\\bm{x}) \\in B_{1,\\infty}^{0}$ for any $d.$\nWhile the velocity $u \\sim \\dfrac{1}{r} \\notin L^3(\\mathbb{R}^3),$ we have \n$$u \\sim \\frac{1}{r} \\in B_{3,\\infty}^{0}(\\mathbb{R}^3),$$ and correspondingly for the vorticity curl $\\chi$ \n$$\\chi \\sim \\frac{1}{r^3} \\in B_{1,\\infty}^{0}(\\mathbb{R}^3).$$ \nHence in three dimensions this $\\chi$ and the Dirac mass belong to the same\nfunction class $B_{1,\\infty}^{0}(\\mathbb{R}^3)$, with $p=1$ in (\\ref{delta}).\n\\section{Illustrating the ideas with the Burgers equations}\n\\subsection{1D Burgers equation}\nWe consider the Burgers equation \\cite{Burgers1948} \n\\begin{equation}\\label{1DBurgers}\n\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x}\n=\\nu \\frac{\\partial^2 u}{\\partial x^2},\n\\end{equation}\nwhich satisfies static scale-invariance under\n$$x \\to \\lambda x, t \\to \\lambda^2 t, u \\to \\lambda^{-1} u.$$\nThis means that if $u(x,t)$ is a solution, so is \n$\\lambda u(\\lambda x, \\lambda^2 t) :=u_\\lambda(x,t),$\nfor any $\\lambda (>0)$.\nIt is readily checked that \n$$\\|u_\\lambda\\|_{L^p}=\\lambda^{\\frac{p-1}{p}}\\|u\\|_{L^p},$$\nwhich shows that the $L^1$-norm is scale-invariant.\n\nLet us clarify the two kinds of critical scale-invariance:\none is deterministic, where the additional terms arising in the governing equations\nunder dynamic\nscaling are minimised in number, and the other is statistical in nature, where the\nadditional terms under dynamic scaling are maximised in number so that\na conservative form (that is, a divergence form) is completed in the advection term.\nIn the former the dependent variable has the same physical dimension\nas kinematic viscosity, whereas in the latter the argument of the Hopf\ncharacteristic functional (the independent variable) has the same physical \ndimension as the reciprocal of kinematic viscosity \\cite{Ohkitani2020}.\nThis approach provides a viewpoint from which the problem\nappears in the 
simplest possible form.\n\nCritical scale-invariance is achieved with the velocity potential $\\phi,$ which is defined by\n$u=\\partial_x \\phi$. If $\\phi(x, t)$ is a solution, so is $\\phi(\\lambda x, \\lambda^2 t).$\nUnder dynamic scaling for the velocity potential $\\phi(x,t)=\\Phi(\\xi,\\tau)$ we have\n\\begin{equation}\\label{ScaledBurgers0}\n\\frac{\\partial \\Phi}{\\partial \\tau}\n + \\frac{1}{2}\\left(\\frac{\\partial \\Phi}{\\partial \\xi}\\right)^2\n =a \\xi\\frac{\\partial \\Phi}{\\partial \\xi}+\\nu \\frac{\\partial^2 \\Phi}{\\partial \\xi^2},\n \\end{equation}\nwhose linearisation involves the Ornstein-Uhlenbeck operator.\nThis will be called type 1 (deterministic) scale-invariance, where the number of additional terms is\nminimised, that is, only the drift term appears.\nUnder dynamic scaling for the velocity\n$u(x,t)=\\frac{1}{\\sqrt{2a(t+t_*)}}U(\\xi,\\tau),\\;\n\\xi=\\frac{x}{\\sqrt{2 a(t+t_*)}},\\;\n\\tau=\\frac{1}{2 a}\\log(1+2at)$ with $2at_*=1$,\nwe find\n\\begin{equation}\\label{ScaledBurgers}\n\\frac{\\partial U}{\\partial \\tau}\n +U \\frac{\\partial U}{\\partial \\xi}\n=a \\frac{\\partial}{\\partial \\xi}\\left( \\xi U \\right)\n+\\nu \\frac{\\partial^2 U}{\\partial \\xi^2},\n\\end{equation}\nwhose linearisation is the Fokker-Planck equation. 
This will be called type 2 (statistical)\nscale-invariance where the number of additional terms is maximised\nin the sense that a divergence form is completed with the addition of $aU$ term.\nAs it is of second-order it has two independent\nsolutions, of which we will focus on the Gaussian one.\nSee \\textbf{Appendix A(a)} for the other non-Gaussian possibility.\n\nEquation (\\ref{ScaledBurgers}) is exactly soluble and its steady solution is called the source-type solution\n\\cite{BKW1999, BKW2001}:\n\\begin{equation}\\label{Burgers_source1}\nU(\\xi)=\\frac{U(0) \\displaystyle{\\exp \\left( -\\frac{a \\xi^2}{2 \\nu} \\right)}}\n{1 -\\displaystyle{\\frac{U(0)}{2\\nu}\\int_0^{\\xi}} \n \\exp \\left( -\\frac{a \\eta^2}{2 \\nu}\\right) d\\eta}.\n\\end{equation}\nThe name has come from the time zero asymptotics (with $t_*=0$)\n$$\\lim_{t \\to 0}\\frac{1}{\\sqrt{2at}} U(\\xi)=M\\delta(x),$$\nwhere $M\\equiv\\int_{\\mathbb{R}^1} u_0(x) dx$ and\n$U(0)=\\sqrt{\\frac{8a\\nu}{\\pi}}\\tanh\\frac{M}{4\\nu}\n\\approx \\sqrt{\\frac{a}{2\\pi \\nu}}M\\;\\; (\\mbox{for}\\;M\/\\nu \\ll 1).$\\footnote{Here we have taken the virtual time\n origin $t_*=0$. There are two ways of handling it; one is to take the limit\n $\\tau \\to \\infty$ first and then let $a\/\\nu \\to \\infty$ keeping $2at_*=1$. The other one is\nto consider $\\lambda(t)=\\sqrt{2at}$ from scratch and consider the steady equations.}\nSee \\cite{EZ1991, BKW1999, BKW2001}.\n\nIt is also known that for\n$u_0 \\in L^1$ we have \n$$t^{\\frac{1}{2}\\left(1-\\frac{1}{p}\\right)}\n\\left\\| u(x,t)- \\frac{1}{\\sqrt{2at}} U(\\xi)\\right\\|_{L^p}\n\\to 0\\;\\;\\mbox{as}\\;\\; t \\to \\infty,$$\nwhere $1 \\leq p \\leq \\infty$ and $\\xi=\\frac{x}{\\sqrt{2at}}.$\n\nThe simplest method for solving (\\ref{ScaledBurgers}), without linearisation,\nis as follows. 
First, rewrite the steady equation as\n\\begin{eqnarray}\n \\frac{U^2}{2}&=&a\\xi U +\\nu \\frac{dU}{d\\xi} \\nonumber\\\\\n &=&\\nu \\exp \\left(-\\frac{a \\xi^2}{2\\nu}\\right)\n \\frac{d}{d\\xi} \\left(U \\exp \\left( \\frac{a \\xi^2}{2\\nu} \\right)\\right).\n \\nonumber\n\\end{eqnarray}\nBy changing variables to\n$\\tilde{U}=U \\exp \\left( \\frac{a \\xi^2}{2\\nu}\\right),\\eta = \\frac{1}{2\\nu}\n\\int_0^\\xi \\exp \\left(-\\frac{a \\zeta^2}{2\\nu} \\right) d\\zeta,$ we find\n$$\\frac{d \\tilde{U}}{d \\eta}=\\tilde{U}^2,$$\nwhich is readily integrable. Alternatively we may solve the equation (2.1) by regarding it as a\nBernoulli equation.\n\nIt is in order to comment on the significance of the source-type solution.\nWe may recast (\\ref{Burgers_source1}) as\n\\begin{equation}\\label{Burgers_source2}\nU(\\xi)=-2\\nu \\frac{\\partial}{\\partial \\xi}\\log\\left(\n{1 -\\displaystyle{\\frac{U(0)}{2\\nu}\\int_0^{\\xi}} \\exp \\left( -\\frac{a \\eta^2}{2 \\nu}\\right) d\\eta}\n\\right),\n\\end{equation}\nwhich is reminiscent of the celebrated Cole-Hopf transform. In other words, the source-type solution\nencodes the vital information of the nonlinear term in the case of the Burgers equation. Note that the error-function\n$\\int_0^{\\xi} \\exp \\left( -\\frac{a \\eta^2}{2 \\nu}\\right) d\\eta$ itself is a self-similar solution to\nthe heat equation. This suggests that studying source-type solutions of the Navier-Stokes equations may give a hint about the\nnature of their long-time evolution.\n\n\\subsection{Successive approximations}\nThe operator $L = \\triangle^* \\equiv \\triangle +\\frac{a}{\\nu}\\partial_\\xi(\\xi \\cdot )\n$ is not self-adjoint. It is possible to find a function $G$ such that\n$L^{\\dagger} G(\\xi) =-\\delta(\\xi)$ holds, where\n$L^{\\dagger}\\equiv \\partial_\\xi^2 -\\frac{a}{\\nu}\\xi \\partial_\\xi$ is the adjoint of $L$.\nIn fact $G(\\xi) \\propto D\\left( \\sqrt{\\frac{a}{2\\nu}} \\xi\\right),$ where $D(\\cdot)$ denotes\nDawson's integral, see e.g. 
{\\bf Appendix A(a)}. \nHowever, because $G$ decays slowly at large distances\n$G(\\xi) \\propto \\frac{1}{\\xi}$ as $|\\xi| \\to \\infty$, it cannot be used as a Green's function,\nat least, in the usual manner.\n\nThe inversion formula for $\\triangle^*$ can be obtained in an alternative method.\nRecall that on the basis of a formal analogy with $\\frac{1}{a}=\\int_0^\\infty e^{-at} dt \\; (a>0),$\nthe fundamental solution to the Poisson equation in 1D is given by\n$$ (\\nu \\triangle)^{-1} \\equiv -\\int_{0}^{\\infty} ds e^{\\nu s \\triangle} =\\frac{|\\xi|}{2\\nu}*,$$\nwhere * denotes convolution.\nLikewise for the fundamental solution to the Fokker-Planck equation in 1D we write\n$$(\\nu \\triangle^*)^{-1} \\equiv -\\int_{0}^{\\infty} ds e^{\\nu s \\triangle^*} =\\int_{-\\infty}^{\\infty}\nd\\eta g(\\xi,\\eta),$$\n where\n $$g(\\xi,\\eta) \\equiv \\frac{-1}{\\sqrt{2\\pi\\nu}} {\\rm f.p.}\\int_{\\sqrt{a}}^{\\infty}\n \\frac{d\\sigma}{\\sigma^2-a}e^{-\\frac{1}{2\\nu}\\left(\\sigma \\xi -\\eta\\sqrt{\\sigma^2-a}\\right)^2}$$\n and f.p. denotes the finite part of Hadamard, e.g. \\cite{Bureau1955, Yosida1956}.\n It can be verified by changing the variable $\\sigma=\\sqrt{\\frac{a}{1-e^{-2a\\tau}}},$ in the solution to\n the Fokker-Planck equation\n$$e^{\\nu \\tau \\triangle^*}f=\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a \\tau})}\\right)^{1\/2}\n \\int_{\\mathbb{R}^1}e^{a\\tau} f(e^{a\\tau} \\eta)\\exp \\left( -\\frac{a}{2\\nu} \\frac{(\\xi-\\eta)^2}{1-e^{-2 a \\tau}}\\right) d\\eta.$$\nAs we will consider the 3D Navier-Stokes equations, for which methods of exact solutions are not known,\nwe treat (\\ref{ScaledBurgers}) by approximate methods as a preparation. Because the inversion of\n$\\triangle^*$ is cumbersome, we will seek a workaround by which we can dispense with it.\n\nFirst we convert it to an integral equation by\nthe Duhamel principle for the Fokker-Planck operator\n$\\triangle^*$. 
\n\\begin{align*}\n U(\\tau) = & e^{\\nu \\tau \\triangle^*} U(0)\n -\\int_{0}^{\\tau} e^{\\nu(\\tau-s)\\triangle^*}\\partial\\, \\frac{U(s)^2}{2}ds &\\\\\n = & e^{\\nu \\tau \\triangle^*} U(0)\n -\\int_{0}^{\\tau} e^{\\nu s \\triangle^*}\\partial\\,\\frac{U(\\tau-s)^2}{2}ds. &\n\\end{align*}\nThe long-time limit $U_1=\\lim_{\\tau \\to \\infty} e^{\\nu \\tau \\triangle^*} U(0)$ is given by\n$$U_1=\\left( \\frac{a}{2\\pi \\nu}\\right)^{1\/2} M e^{-\\frac{a}{2\\nu}\\xi^2}\\;\\;\\mbox{with}\n\\;\\;M=\\int_{-\\infty}^{\\infty} U(0) d\\xi.$$\nWe may consider a number of different iteration schemes. For example, the following option (1), also known as the Picard iteration, requires\nthe inversion $(\\triangle^*)^{-1}$:\n$$\\mbox{Successive approximation (1):}\\;\\;U_{n+1} =U_1-\\int_{0}^{\\infty} e^{\\nu s \\triangle^*}\\partial\\,\\frac{U_n^2}{2}ds,$$\n $$\\mbox{in particular, for}\\;\\;n=1:\\;\\; U_2=U_1-\\int_{0}^{\\infty} e^{\\nu s \\triangle^*}\\partial\\,\\frac{U_1^2}{2}ds.$$\nIt is noted that $U_n$ is a steady function at each step.\n\nAlternatively we first consider the steady equation \n$$\\triangle^* U\\equiv \\triangle U +\\frac{a}{\\nu}(\\xi U)_\\xi=\\frac{1}{\\nu}\\left( \\frac{U^2}{2}\\right)_\\xi$$\nand then introduce iteration schemes:\n$$\\mbox{Iteration scheme (2a):}\\;\\;\n\\triangle U_{n+1} +\\frac{a}{\\nu}(\\xi U_{n+1})_\\xi =\\frac{1}{\\nu}\\left( \\frac{U_{n}^2}{2}\\right)_\\xi,\\;\\;(n \\geq 0)$$\n$$\\mbox{For}\\;\\;n=1:\\;\\;\\triangle U_{2} +\\frac{a}{\\nu}(\\xi U_{2})_\\xi =\\frac{1}{\\nu}\\left( \\frac{U_{1}^2}{2}\\right)_\\xi,$$\nor\n$$\\mbox{Iteration scheme (2b):}\\;\\;\\triangle U_{n+1} =-\\frac{a}{\\nu}(\\xi U_{n})_\\xi +\\frac{1}{\\nu}\\left( \\frac{U_{n}^2}{2}\\right)_\\xi,\n\\;\\;(n \\geq 1)$$\n$$\\mbox{For}\\;\\;n=1:\\;\\;\\triangle U_{2} =\\underbrace{-\\frac{a}{\\nu}(\\xi U_{1})_\\xi}_{= \\triangle U_1}\n+\\frac{1}{\\nu}\\left(\\frac{U_{1}^2}{2}\\right)_\\xi.$$\nNote that iteration schemes (1) and (2a) coincide with each other at 
$n=1$.\n\n\\subsection{Estimation of the size of nonlinear terms}\nFor the Burgers equation we can work out the two results to the second-order approximations\nanalytically. After some algebra they are\n \\begin{align*}\n \\mbox{(1)}\\;\\; U & \\approx Ce^{-\\frac{a}{2\\nu}\\xi^2}\n \\left( 1+\\frac{C}{2\\nu} \\int_0^\\xi e^{-\\frac{a}{2\\nu}\\eta^2} d\\eta \\right),\\\\\n \\mbox{(2b)}\\;\\; U & \\approx Ce^{-\\frac{a}{2\\nu}\\xi^2}\n +\\frac{C^2}{2\\nu} \\int_0^\\xi e^{-\\frac{a}{\\nu}\\eta^2} d\\eta,\n \\end{align*}\n where $C \\approx \\sqrt{\\frac{a}{2\\pi\\nu}} M.$\n On this basis we estimate the size of the nonlinear term $N(\\xi).$\n After non-dimensionalisation the second-order term is proportional to the Reynolds number\n $Re=\\int U d\\xi\/ \\nu.$ Separating out the $Re$-dependence, we define $N(\\xi)$ by\n $$U \\propto 1+ Re N(\\xi).$$\n From the above expressions for the 1D Burgers equation we then find\n $$\\mbox{(1)}\\;\\; N= \\frac{1}{4}=0.25,$$\n $$\\mbox{(2b)}\\;N=\\frac{1}{4\\sqrt{2}}\\approx 0.2,$$\n where $N=\\max_\\xi N(\\xi).$\n We conclude that the typical size of nonlinearity $N=O(10^{-1}),$ irrespective of the choice of schemes.\n \n\\subsection{Burgers equations in several dimensions} \n\nThe source-type solution is basically a near-identity function of the Gaussian\nform.\nIt has been seen how the source-type solutions show up in the long-time limit\nin one and two spatial dimensions in \\cite{Ohkitani2020}. Here we will take a look at\ncases in three and higher dimensions. 
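In passing, both values of $N$ quoted in the preceding subsection follow from elementary Gaussian integrals and are independent of the parameters; a short check (the particular values of $a$, $\\nu$, $M$ below are arbitrary):

```python
import numpy as np

a, nu, M = 1.3, 0.4, 0.2                        # arbitrary; N must not depend on these
C  = np.sqrt(a / (2 * np.pi * nu)) * M          # peak of U_1
Re = M / nu

# Scheme (1): relative correction (C/2nu) * int_0^oo exp(-a eta^2/(2 nu)) d eta
N1  = (C / (2 * nu)) * 0.5 * np.sqrt(2 * np.pi * nu / a) / Re
# Scheme (2b): additive correction (C^2/2nu) * int_0^oo exp(-a eta^2/nu) d eta,
# measured against the peak C of U_1 and one power of Re
N2b = (C**2 / (2 * nu)) * 0.5 * np.sqrt(np.pi * nu / a) / (Re * C)
print(N1, N2b)
```

The arithmetic collapses to $N=1\/4$ for scheme (1) and $N=1\/(4\\sqrt{2})$ for scheme (2b), exactly as quoted.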
We have by the Cole-Hopf transform\n$$U_i(\\bm{\\xi},\\tau)\n=-2\\nu\\frac{\\partial_{\\xi_i} \\int_{\\mathbb{R}^3} \\psi_0(\\lambda \\bm{\\eta})\n\\exp \\left(-\\dfrac{a}{2\\nu}\\dfrac{|\\bm{\\xi}-\\bm{\\eta}|^2}{1-e^{-2a\\tau}}\\right)\nd\\bm{\\eta}}{\\int_{\\mathbb{R}^3} \\psi_0(\\lambda \\bm{\\eta})\n\\exp\\left(-\\dfrac{a}{2\\nu}\\dfrac{|\\bm{\\xi}-\\bm{\\eta}|^2}{1-e^{-2a\\tau}}\\right) \nd\\bm{\\eta}}.$$\nIn view of the type 2 scale-invariance and differentiating it twice, we find\n$$\\partial_{\\xi_j}\\partial_{\\xi_k} U_i(\\bm{\\xi},\\tau)\n=-2\\nu\\frac{\\partial_{\\xi_i} \\partial_{\\xi_j} \\partial_{\\xi_k}\n \\int_{\\mathbb{R}^3} \\psi_0(\\lambda \\bm{\\eta})\n\\exp \\left(-\\dfrac{a}{2\\nu}\\dfrac{|\\bm{\\xi}-\\bm{\\eta}|^2}{1-e^{-2a\\tau}}\\right)\nd\\bm{\\eta}}{\\int_{\\mathbb{R}^3} \\psi_0(\\lambda \\bm{\\eta})\n\\exp\\left(-\\dfrac{a}{2\\nu}\\dfrac{|\\bm{\\xi}-\\bm{\\eta}|^2}{1-e^{-2a\\tau}}\\right) \nd\\bm{\\eta}}+\\ldots\n$$\n$$\n=-2\\nu\\frac{\\lambda^3\\int_{\\mathbb{R}^3} \\partial_i \\partial_j \\partial_k\\psi_0(\\lambda \\bm{\\eta})\n\\exp \\left(-\\dfrac{a}{2\\nu}\\dfrac{|\\bm{\\xi}-\\bm{\\eta}|^2}{1-e^{-2a\\tau}}\\right)\nd\\bm{\\eta}}{\\int_{\\mathbb{R}^3} \\psi_0(\\lambda \\bm{\\eta}) \n\\exp\\left(-\\dfrac{a}{2\\nu}\\dfrac{|\\bm{\\xi}-\\bm{\\eta}|^2}{1-e^{-2a\\tau}}\\right) \nd\\bm{\\eta}}+\\ldots\n$$\nThe numerator then tends to $K_{ijk}\\exp \\left(-\\frac{a}{2\\nu}|\\bm{\\xi}|^2 \\right)$ as $\\tau \\to \\infty,$\nwhere $K_{ijk}=\\int_{\\mathbb{R}^3} \\partial_i \\partial_j \\partial_k \\psi_0( \\bm{\\eta}) d\\bm{\\eta},\\;\n(i,j,k=1,2,3).$\nHence\n$$\\partial_{\\xi_j}\\partial_{\\xi_k} U_i(\\bm{\\xi},\\infty)\n=-2\\nu \\left( \\frac{K_{ijk}\\exp \\left(-\\frac{a}{2\\nu}|\\bm{\\xi}|^2 \\right)}{F_{ijk}(\\bm{\\xi})}\n+\\ldots\\right),$$\nwhere the function $F_{ijk}$ is to be determined such that\n$\\partial_i \\partial_j \\partial_k F_{ijk} \\propto\n \\exp \\left(-\\frac{a}{2\\nu}|\\bm{\\xi}|^2 \\right).$\n We can thus 
take\n$$\nF_{ijk}(\\bm{\\xi})=-\\frac{K_{ijk}}{2\\nu}\n\\int_0^{\\xi_1} \\exp \\left(-\\frac{a \\xi^2}{2\\nu} \\right) d\\xi\n\\int_0^{\\xi_2} \\exp \\left(-\\frac{a \\eta^2}{2\\nu} \\right) d\\eta\n\\int_0^{\\xi_3} \\exp \\left(-\\frac{a \\zeta^2}{2\\nu} \\right) d\\zeta\n+1.$$\nTherefore we find in three dimensions, say, with $i=1,j=2,k=3,$\n\\begin{equation}\\label{source_Burgers3D}\n\\frac{\\partial^2 U_1}{\\partial \\xi_2 \\partial \\xi_3}\n=K_{123}\n\\exp \\left(-\\frac{a}{2\\nu}(\\xi_1^2+\\xi_2^2+\\xi_3^2)\\right)\n\\frac{1+R(\\xi_1, \\xi_2,\\xi_3)}{(1-R(\\xi_1,\\xi_2,\\xi_3))^3},\n\\end{equation}\nwhere\n$$R(\\xi_1, \\xi_2,\\xi_3)=\\frac{K_{123}}{2\\nu} \n\\int_0^{\\xi_1} \\exp \\left(-\\frac{a \\xi^2}{2\\nu} \\right) d\\xi\n\\int_0^{\\xi_2} \\exp \\left(-\\frac{a \\eta^2}{2\\nu} \\right) d\\eta\n\\int_0^{\\xi_3} \\exp \\left(-\\frac{a \\zeta^2}{2\\nu} \\right) d\\zeta$$\ndenotes the Reynolds number. Because $R$ is small the expression (\\ref{source_Burgers3D}) is near-Gaussian.\nNote that $K_{123}=\\sqrt{\\dfrac{32a^3}{\\pi^3 \\nu}} \\tanh \\dfrac{M_{123}}{16\\nu},$ where\n$M_{123}=\\int \\frac{\\partial^2 U_1}{\\partial \\xi_2 \\partial \\xi_3} d\\bm{\\xi}.$\nWe can also write\n$$\\frac{\\partial^2 U_1}{\\partial \\xi_2 \\partial \\xi_3}\n=-2 \\nu \\frac{\\partial^3}{\\partial \\xi_1 \\partial \\xi_2 \\partial \\xi_3} \n\\log(1-R(\\xi_1, \\xi_2,\\xi_3)),$$\nwhich more directly reflects the Cole-Hopf transform. 
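The identity (\\ref{source_Burgers3D}) in its logarithmic form can be verified symbolically; the sketch below takes $a\/2\\nu=1$ and uses a symbol `K` in the role of $K_{123}$ (an illustrative sympy computation, evaluated numerically at sample points):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
nu, K = sp.symbols('nu K', positive=True)

# With a/(2 nu) = 1: F(t) = int_0^t exp(-u^2) du, and R = (K/2nu) F(x)F(y)F(z)
F = lambda t: sp.sqrt(sp.pi) / 2 * sp.erf(t)
R = K / (2 * nu) * F(x) * F(y) * F(z)

# Left side: third mixed derivative of the logarithmic (Cole-Hopf) potential
lhs = sp.diff(-2 * nu * sp.log(1 - R), x, y, z)
# Right side: the closed form K exp(-(x^2+y^2+z^2)) (1+R)/(1-R)^3
rhs = K * sp.exp(-(x**2 + y**2 + z**2)) * (1 + R) / (1 - R)**3

residual = sp.lambdify((x, y, z, nu, K), lhs - rhs, modules=["scipy", "numpy"])
print(residual(0.3, -0.7, 1.1, 0.5, 0.2))
```

The residual vanishes to machine precision wherever $R<1$, i.e. on the branch containing the Gaussian regime.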
\n$\\bm{U}=\\nabla_{\\bm{\\xi}} \\phi,\\;\\phi=-2\\nu \\log(1-R(\\bm{\\xi})).$\nSee {\\bf Appendix B} for the general form in $n$-dimensions.\n\n\\section{2D Navier-Stokes equation}\nWe briefly recall the case of the 2D Navier-Stokes equation.\nThe so-called Burgers vortex was introduced originally to represent the reaction of a vortex under\nthe influence of the collective effect of surrounding vortices in the ambient medium.\nWhen we write the steady solution in velocity and vorticity\nusing cylindrical coordinates\n$$\\bm{u}=(u_r, u_\\theta, u_z)=(-a r, v(r), 2 a z),\\;\n\\bm{\\omega}=(0, 0, \\omega(r)),$$\nthe solution takes the following forms:\n$$\\omega(r)=\\frac{a\\Gamma}{2\\pi\\nu}\\exp\\left(-\\frac{a r^2}{2\\nu} \\right),$$\n$$v(r)=\\frac{\\Gamma}{2\\pi r}\\left(1-\\exp \\left(-\\frac{a r^2}{2\\nu}\\right)\\right),$$\nwhere\n$\\Gamma \\equiv \\int_{\\mathbb{R}^2}\\omega_0(\\bm{x})d \\bm{x}$\ndenotes the velocity circulation.\n\nThe scaled form of the vorticity equation in two dimensions reads\n$$\n\\dfrac{\\partial \\Omega}{\\partial \\tau}+ \\bm{U}\\cdot \\nabla \\Omega\n=\\nu \\triangle \\Omega + a \\nabla \\cdot (\\bm{\\xi}\\Omega),$$\nwhere $\\Omega$ satisfies the type 2 scale-invariance.\nIt is known that the self-similar solution under scaling has\na form mathematically identical to the Burgers vortex tube above.\nIndeed in the scaled variables the above expression can be written\n$$\\Omega(\\xi)=\\frac{a \\Gamma}{2\\pi\\nu}\\exp\\left(-\\frac{a|\\bm{\\xi}|^2}{2\\nu} \\right),\n\\,\\bm{\\xi}=\\frac{\\bm{x}}{\\sqrt{2at}}.$$\n(See \\textbf{Appendix A(b)} for the other non-Gaussian solution.)\n\nWe recall that \n$\\frac{1}{2at}\\Omega(\\xi)\n=\\frac{\\Gamma}{4\\pi\\nu t}\\exp\\left(-\\frac{|\\bm{x}|^2}{4\\nu t} \\right)$\nis an exact self-similar solution with the following property\n$$\\lim_{t \\to 0} \\frac{1}{2at}\\Omega(\\cdot)=\\Gamma \n\\delta(\\bm{x}).$$\nIt also satisfies the following asymptotic property, for $\\omega(\\cdot,0) \\in L^1,$ 
\n$$t^{1-\\frac{1}{p}}\\left\\|\\omega(\\bm{x},t)-\\frac{1}{2at}\\Omega(\\bm{\\xi})\n\\right\\|_{L^p} \\to 0\\;\\;\\mbox{as}\\;\\; t \\to \\infty,$$\nwhere $1 \\leq p \\leq \\infty,$ see e.g. \\cite{GW2005}.\n\n\\section{3D Navier-Stokes equations}\n\nWe will describe two approaches for handling a perturbative treatment for the 3D\nNavier-Stokes equations. First we describe a general framework based on the Green's function.\nSecond we describe the other iterative approach which is specifically suited for calculations\nassociated with the 3D Navier-Stokes problem.\n\n\\subsection{Governing equations}\nWe consider the 3D Navier-Stokes equations written in four different dependent\nvariables. Starting from the vector potential, taking the curl successively\n$\\bm{u}=\\nabla \\times \\bm{\\psi},\\; \\bm{\\omega}=\\nabla \\times \\bm{u},\\;\n\\bm{\\chi}=\\nabla \\times \\bm{\\omega},$ we have\n\\begin{equation}\\label{NS3D}\n\\left\\{\n\\begin{array}{l}\n\\dfrac{\\partial \\bm{\\psi}}{\\partial t}\n=\\dfrac{3}{4\\pi}{\\rm p.v.}\\displaystyle{\\int_{\\mathbb{R}^3}}\n\\dfrac{\\bm{r} \\times( \\nabla \\times \\bm{\\psi} (\\bm{y}))\\,\n\\bm{r} \\cdot (\\nabla \\times \\bm{\\psi} (\\bm{y}))}\n {|\\bm{r}|^5}\\;{\\rm d}\\bm{y}+\\nu\\triangle \\bm{\\psi},\\\\\n\\noalign{\\vskip 0.2cm} \n\\dfrac{\\partial \\bm{u}}{\\partial t}\n+ \\bm{u}\\cdot \\nabla \\bm{u} =-\\nabla p + \\nu\\triangle\\bm{u},\\\\\n\\noalign{\\vskip 0.2cm} \n\\dfrac{\\partial \\bm{\\omega}}{\\partial t}+ \\bm{u}\\cdot \\nabla \\bm{\\omega}\n=\\bm{\\omega} \\cdot \\nabla \\bm{u} + \\nu \\triangle \\bm{\\omega},\\\\\n\\noalign{\\vskip 0.2cm} \n\\dfrac{\\partial \\bm{\\chi}}{\\partial t}\n=\\nabla \\times \\left( \\bm{u}\\times \\bm{\\chi}+2 (\\bm{\\omega}\\cdot\\nabla)\\bm{u}\n\\right)+\\nu \\triangle \\bm{\\chi}, \n\\end{array} \\right.\n\\end{equation}\nwhere $\\bm{r}\\equiv\\bm{x}-\\bm{y}$ and p.v. denotes a principal-value integral. 
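Since every field in the chain $\\bm{u}=\\nabla \\times \\bm{\\psi}$, $\\bm{\\omega}=\\nabla \\times \\bm{u}$, $\\bm{\\chi}=\\nabla \\times \\bm{\\omega}$ is divergence-free, the curl-curl identity gives $\\bm{\\omega}=-\\triangle \\bm{\\psi}$ and $\\bm{\\chi}=-\\triangle \\bm{u}$; this is easy to confirm symbolically on a sample solenoidal field (the particular $\\bm{\\psi}$ below is an arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def lap(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

# An arbitrary divergence-free vector potential
psi = (sp.sin(y) * sp.sin(z), sp.sin(z) * sp.sin(x), sp.sin(x) * sp.sin(y))
u = curl(psi)
w = curl(u)      # vorticity
chi = curl(w)    # vorticity curl

ok_div   = sp.simplify(div(psi)) == 0 and sp.simplify(div(u)) == 0
ok_omega = all(sp.simplify(w[i] + lap(psi[i])) == 0 for i in range(3))
ok_chi   = all(sp.simplify(chi[i] + lap(u[i])) == 0 for i in range(3))
print(ok_div, ok_omega, ok_chi)
```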
\nWe also have\n$ \\bm{\\omega}=- \\triangle \\bm{\\psi}, \\bm{\\chi}=-\\triangle \\bm{u},$\nbecause of the incompressibility condition. Equation (\\ref{NS3D})$_1$ can be found in \\cite{Ohkitani2015}.\nThe fourth equation (\\ref{NS3D})$_4$ is obtained by applying a curl on the vorticity equations\n(details to be found in {\\bf Appendix C}). This is suitable for handling inviscid fluids.\nFor the equations for $\\bm{\\chi}$, we may alternatively take the Laplacian of the velocity equations\n(\\ref{NS3D})$_2$ to obtain the following form\n\\begin{equation}\\label{Chi.eq}\n\\frac{\\partial \\bm{\\chi}}{\\partial t}\n=\\triangle( \\bm{u} \\cdot \\nabla \\bm{u}+ \\nabla p) +\\nu \\triangle \\bm{\\chi}\n\\end{equation}\ninstead of the final line (\\ref{NS3D})$_4$.\n\nLikewise the dynamically-scaled 3D Navier-Stokes equations in four different unknowns read\n\\begin{equation}\\label{NS3Dscaled}\n\\left\\{\n\\begin{array}{l}\n\\dfrac{\\partial \\bm{\\Psi}}{\\partial \\tau}\n=\\dfrac{3}{4\\pi} {\\rm p.v.}\\displaystyle{\\int_{\\mathbb{R}^3}}\n\\dfrac{\\bm{\\rho} \\times( \\nabla \\times \\bm{\\Psi} (\\bm{\\eta}))\\,\n\\bm{\\rho} \\cdot (\\nabla \\times \\bm{\\Psi} (\\bm{\\eta}))}\n{|\\bm{\\rho}|^5}\\;{\\rm d}\\bm{\\eta}+\\nu\\triangle \\bm{\\Psi} \n+ a(\\bm{\\xi}\\cdot\\nabla)\\bm{\\Psi},\\\\\n\\noalign{\\vskip 0.2cm} \n\\dfrac{\\partial \\bm{U}}{\\partial \\tau}\n+ \\bm{U}\\cdot \\nabla \\bm{U} =-\\nabla P + \\nu\\triangle\\bm{U}\n+ a(\\bm{\\xi}\\cdot\\nabla)\\bm{U}\n+a \\bm{U},\\\\\n\\noalign{\\vskip 0.2cm} \n\\dfrac{\\partial \\bm{\\Omega}}{\\partial \\tau}+ \\bm{U}\\cdot \\nabla \\bm{\\Omega}\n=\\bm{\\Omega} \\cdot \\nabla \\bm{U} + \\nu \\triangle \\bm{\\Omega}\n+a(\\bm{\\xi}\\cdot\\nabla)\\bm{\\Omega} +2a \\bm{\\Omega},\\\\\n\\noalign{\\vskip 0.2cm}\n\\dfrac{\\partial \\bm{X}}{\\partial \\tau}\n=\\nabla \\times \\left( \\bm{U}\\times \\bm{X}+2 (\\bm{\\Omega}\\cdot\\nabla)\\bm{U}\n\\right)+\\nu \\triangle \\bm{X}\n+ a\\nabla\\cdot(\\bm{\\xi}\\otimes 
\\bm{X}),\n\\end{array}\\right.\n\\end{equation}\nwhere $\\bm{\\rho}\\equiv\\bm{\\xi}-\\bm{\\eta}.$\nAlternatively for the final equation\nwe may take a Laplacian of the velocity equations to obtain the following form:\n\n\\begin{equation}\\label{scaled3DNS}\n\\dfrac{\\partial \\bm{X}}{\\partial \\tau}\n=\\triangle \\left(\\bm{U}\\cdot\\nabla\\bm{U}+\\nabla P \\right)\n+\\nu \\triangle \\bm{X}+a\\nabla\\cdot(\\bm{\\xi}\\otimes \\bm{X}).\n\\end{equation}\nIt is to be noted that the coefficient of the linear term increases in number with the order of derivatives\nand with the variable $\\bm{\\chi}$ a divergence is completed in the convective term. \nObserve that type 1 scale-invariance is achieved with $\\bm{\\Psi}$ and type 2 scale-invariance with $\\bm{X}$.\n\\subsection{Successive approximations}\nUsing the Duhamel principle, we convert the scaled Navier-Stokes equations\ninto integral equations\n$$\n\\bm{X}(\\bm{\\xi}, \\tau)=e^{\\nu \\tau \\triangle^*} \\bm{X}_0(\\bm{\\xi})\n+\\int_0^{\\tau} e^{\\nu s \\triangle^*}\\triangle \\left(\\bm{U}\\cdot\\nabla\\bm{U}+\\nabla P \\right)(\\bm{\\xi},\\tau-s)ds.\n$$\nHere $\\triangle^* \\equiv \\triangle +\\frac{a}{\\nu}\\nabla\\cdot(\\bm{\\xi}\\otimes \\cdot),$\nthe action of whose exponential operator is given by\n\\begin{equation}\\label{FP3D}\n\\exp(\\nu \\tau \\triangle^*)f(\\cdot)\n=\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a \\tau})}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} e^{3a\\tau}f(e^{a\\tau}\\bm{y})\n\\exp \\left( -\\frac{a}{2\\nu} \\frac{|\\bm{\\xi}-\\bm{y}|^2}{1-e^{-2 a \\tau}}\\right) d\\bm{y}\n\\end{equation}\nfor any function $f$; see (e) below for the derivation.\n\nThe inverse operator associated with\nthe fundamental solution to the Fokker-Planck equation in 3D is defined by\n$$(\\nu \\triangle^*)^{-1} \\equiv -\\int_{0}^{\\infty} ds e^{\\nu s \\triangle^*} =\\int d\\eta g(\\bm{\\xi},\\bm{\\eta}),$$\n where\n $$g(\\bm{\\xi},\\bm{\\eta}) \\equiv \\frac{-1}{(2\\pi\\nu)^{3\/2}} {\\rm 
f.p.}\\int_{\\sqrt{a}}^{\\infty}\n \\frac{\\sigma^2 d\\sigma}{\\sigma^2-a}e^{-\\frac{1}{2\\nu}|\\sigma \\bm{\\xi} -\\bm{\\eta}\\sqrt{\\sigma^2-a}|^2}.$$\n It can be verified by changing the variable $\\sigma=\\sqrt{\\frac{a}{1-e^{-2a\\tau}}}$\n in the solution to the Fokker-Planck equation (\\ref{FP3D}).\n\nWe consider the steady solution $\\bm{X}(\\bm{\\xi})$\nin the long-time limit of $\\tau \\to \\infty$\n$$\n\\bm{X}(\\bm{\\xi})=\\bm{X}^1(\\bm{\\xi}) + \\lim_{\\tau \\to \\infty}\n\\int_0^{\\tau} e^{\\nu s \\triangle^*}\\triangle \\left(\\bm{U}\\cdot\\nabla\\bm{U}+\\nabla P \\right)(\\bm{\\xi},\\tau-s)ds,\n$$\nwhere $\\bm{X}^1=\\mathbb{P}\\bm{M} G$ denotes the leading-order approximation.\n\nOn the other hand, steady equations are obtained by assuming $\\partial\/\\partial\\tau=0$ in (\\ref{scaled3DNS})\n$$\n\\triangle^* \\bm{X}\\equiv \\triangle \\bm{X}\n+\\frac{a}{\\nu} \\nabla\\cdot(\\bm{\\xi}\\otimes \\bm{X})=-\\frac{1}{\\nu}\n\\triangle \\left(\\bm{U}\\cdot\\nabla\\bm{U}+\\nabla P \\right),\n$$\nor,\n$$\n\\bm{X}=-\\frac{1}{\\nu}\\left(\\bm{U}\\cdot\\nabla\\bm{U}+\\nabla P \\right)\n-\\frac{a}{\\nu}\\triangle^{-1} \\nabla\\cdot(\\bm{\\xi}\\otimes \\bm{X}).\n$$\nThis is yet another form of the steady Navier-Stokes equations after dynamic scaling.\nIt is noted that one of the potential problems associated with the nonlinear\nterm has been eliminated without having a recourse to the Green's function.\nIt is this virtually trivial fact that allows us to set up a simple successive\napproximation.\n\nTo summarise, the steady Navier-Stokes equations after dynamic scaling can be written as\n\\begin{equation}\n\\bm{X}=-\\frac{1}{\\nu}\\mathbb{P} \\left(\\bm{U}\\cdot\\nabla\\bm{U}\\right)\n-\\frac{a}{\\nu}\\triangle^{-1} \\nabla\\cdot(\\bm{\\xi}\\otimes \\bm{X}),\n\\end{equation}\nor, by $\\bm{X}=-\\triangle \\bm{U},$ we can express it solely in terms of $\\bm{X}$ as\n\\begin{equation}\\label{scaled.Chi.eq}\n\\bm{X}=-\\frac{1}{\\nu}\\mathbb{P} 
\\left(\\triangle^{-1}\\bm{X}\\cdot\\nabla \\triangle^{-1}\\bm{X}\\right)\n-\\frac{a}{\\nu}\\triangle^{-1} \\nabla\\cdot(\\bm{\\xi}\\otimes \\bm{X}).\n\\end{equation}\nThis is the equation that we need to solve.\n\nIn passing we note the following facts before proceeding to the specific results. By the definition of\nscaled variables it is easily seen that for $p \\ge 1$\n $$t^{\\frac{3}{2}\\left(1-\\frac{1}{p}\\right)}\n\\left\\| \\bm{\\chi}(\\bm{x},t)-\\frac{\\bm{X}(\\bm{\\xi})}{(2at)^{3\/2}}\\right\\|_{L^p}\n=\\left\\| \\bm{X}(\\bm{\\xi},\\tau)-\\bm{X}(\\bm{\\xi}) \\right\\|_{L^p}.$$\nThis means that if \n$\\left\\| \\bm{X}(\\bm{\\xi},\\tau)-\\bm{X}(\\bm{\\xi}) \\right\\|_{L^p} \\to 0$ as $\\tau \\to \\infty,$\nwe have\n $$t^{\\frac{3}{2}\\left(1-\\frac{1}{p}\\right)}\n\\left\\| \\bm{\\chi}(\\bm{x},t)-\\frac{\\bm{X}(\\bm{\\xi})}{(2at)^{3\/2}}\n\\right\\|_{L^p} \\to 0\\;\\;\\mbox{as}\\;\\; t \\to \\infty.$$\nThis concerns the long-time asymptotics. As for the time-zero asymptotics, on the other hand, we have\n$$\\frac{\\bm{X}(\\bm{\\xi})}{(2at)^{3\/2}} \\to \\bm{X}_0*\\delta=\\bm{X}_0(\\cdot)\n\\;\\;\\mbox{as}\\;\\; t \\to 0,$$\nwhere $\\bm{X}_0(\\bm{\\xi})$ is singular, like the Dirac mass.\n\n\\subsection{Leading-order approximation}\nThe first-order (or, leading-order) approximation can be given explicitly because the Gaussian\nfunction is a radial function.\n(See \\textbf{Appendix A(c)} for the other non-Gaussian solution.)\n\nBefore discussing the second-order approximation, we give expressions of the first-order\nsolutions in different variables. In terms of the vorticity curl, the first-order approximation is given by\n$$X_i=M_j\\left( \\frac{a}{2\\pi\\nu}\\right)^{3\/2}\n\\left(\\delta_{ij}-\\partial_i \\partial_j \\triangle^{-1} \\right)\n\\exp\\left(-\\frac{a}{2\\nu}r^2\\right),\\;i=1,2,3,$$\nwhere $r=|\\bm{\\xi}|$ and $M_j=\\int X_j d\\bm{\\xi},\\; j=1,2,3.$\nThroughout this subsection we take $\\frac{a}{2\\nu}=1$ for simplicity. 
The corresponding\nexpressions for the four different unknowns are ($i=1,2,3$)\n\\begin{eqnarray}\nB_i&=&\\frac{1}{\\pi^{3\/2}}\n\\left(M_i \\triangle^{-2} e^{-r^2} \n- M_j \\partial_i \\partial_j \\triangle^{-3} e^{-r^2} \\right),\\\\\nA_i&=&-\\frac{\\epsilon_{ijk}M_j x_k}{8\\pi^{3\/2}r^3}\n\\left[re^{-r^2}+\\sqrt{\\pi}{\\rm erf}(r)\\left(r^2-\\frac{1}{2}\\right) \\right],\\\\\nU_i&=&-\\frac{1}{\\pi^{3\/2}} \\left(M_i \\triangle^{-1} e^{-r^2} \n- M_j \\partial_i \\partial_j \\triangle^{-2} e^{-r^2} \\right),\\\\\n\\Omega_i&=&-\\frac{\\epsilon_{ijk}M_j x_k}{4\\pi^{3\/2}r^3}\n\\left(2re^{-r^2}-\\sqrt{\\pi}{\\rm erf}(r) \\right),\\\\\nX_i&=&\\frac{1}{\\pi^{3\/2}}\n\\left(M_ie^{-r^2} - M_j \\partial_i \\partial_j \\triangle^{-1} e^{-r^2} \\right),\n\\end{eqnarray}\nwhere\n$\\bm{A}=\\nabla \\times \\bm{B}, \\bm{U}=\\nabla \\times \\bm{A}, \n\\bm{\\Omega}=\\nabla \\times \\bm{U},\\bm{X}=\\nabla \\times \\bm{\\Omega},$\nand ${\\rm erf}(r)\\equiv\\frac{2}{\\sqrt{\\pi}}\\int_0^r e^{-t^2}dt$ denotes the error function. \nBecause all the fields are incompressible we also have\n$\\bm{U}=-\\triangle \\bm{B}, \\bm{\\Omega}=- \\triangle \\bm{A}, \\bm{X}=-\\triangle \\bm{U}.$\n\nNote that $\\triangle^{-n} e^{-r^2}$ for $n=1,2,3$ can be evaluated by quadratures\nand their explicit form are as follows, which can be obtained most conveniently with\nthe assistance of computer algebra. 
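For instance (an illustrative sympy sketch), the first of these inversions can be computed from the radial quadrature and confirmed by applying the radial Laplacian $\\triangle f=(rf)''\/r$:

```python
import sympy as sp

r, s = sp.symbols('r s', positive=True)

# Candidate for the inverse Laplacian of exp(-r^2): radial quadrature
f1 = -sp.integrate(sp.exp(-s**2), (s, 0, r)) / (2 * r)   # = -sqrt(pi)*erf(r)/(4r)

# Radial Laplacian in 3D acting on a radial function: (1/r) d^2/dr^2 (r f)
check = sp.simplify(sp.diff(r * f1, r, 2) / r - sp.exp(-r**2))
print(f1, check)
```

The vanishing residual confirms that $-\\frac{\\sqrt{\\pi}}{4r}\\,{\\rm erf}(r)$ is indeed $\\triangle^{-1}e^{-r^2}$; the higher inversions can be checked in the same way.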
The results are\n\\begin{eqnarray}\n\\triangle^{-1} e^{-r^2}&=&-\\frac{1}{2r}\\int_0^r e^{-s^2}ds\n=-\\frac{\\sqrt{\\pi}}{4r}{\\rm erf}(r),\\\\\n\\triangle^{-2} e^{-r^2}&=&-\\frac{1}{2r}\n\\int_0^r ds \\int_0^s ds' \\int_0^{s'} e^{-s''^2}ds'' \\nonumber\\\\\n&=&-\\frac{e^{-r^2}}{8}\n-\\frac{\\sqrt{\\pi}}{8}\\left(r+\\frac{1}{2r}\\right){\\rm erf}(r),\\\\\n\\triangle^{-3} e^{-r^2}&=&-\\frac{1}{2r}\n\\int_0^r ds \\int_0^s ds' \\int_0^{s'} ds'' \\int_0^{s''} ds''' \n\\int_0^{s'''} ds'''' e^{-s''''^2},\\nonumber\\\\\n&=&-\\frac{1}{384r}\n\\left[(4r^3+10r)e^{-r^2}+4\\sqrt{\\pi}\\left(r^4+3r^2+\\frac{3}{4}\\right)\n{\\rm erf}(r) \\right].\n\\end{eqnarray}\nUsing the above formulas it is instructive to compare a component of\nthe Gaussian function $\\dfrac{1}{\\pi^{3\/2}}e^{-x^2}$ with that of\nvorticity curl\n$$\\chi_1(x,0,0)=\\frac{\\sqrt{\\pi}{\\rm erf}(x)-2xe^{-x^2}}{2\\pi^{3\/2}x^3}.$$\nFigure \\ref{GvsPG} shows how $\\chi_1(\\xi_1,0,0)$ is affected by the incompressible condition (solenoidality),\nin particular the peak value at $x=0$ is reduced by a factor of $2\/3$. We have also confirmed that the numerical\nresult obtained with a Poisson solver agrees with the analytical expression (figures omitted).\n\n\\begin{figure}[ht]\n\\includegraphics[scale=0.3,angle=0]{GvsPG.eps}\n\\caption{Comparison of $\\exp(-\\xi^2)$ (dashed) with $\\pi^{3\/2}\\chi_1(\\xi)$ (solid). }\n\\label{GvsPG}\n\\end{figure}\n\n\\subsection{Numerical results for the second-order approximation}\nFor simplicity we make use of the iteration scheme (2b) illustrated in the previous section. 
The second-order solution\nin this case is given by\n$$\\bm{X}_2=\\bm{X}_1-\\frac{1}{\\nu}\\mathbb{P} \\left(\\triangle^{-1}\\bm{X}_1\\cdot\\nabla \\triangle^{-1}\\bm{X}_1\\right).$$\n\\begin{figure}[ht]\n\\begin{minipage}{0.5\\linewidth}\n\\includegraphics[scale=0.4,angle=0]{NS3D_nu0.5_PG1y.AL40.DH0.05.rn.ps}\n\\caption{$\\bm{X}_1|_2=\\mathbb{P}\\bm{M} G$}\n\\label{NS1st-order}\n\\end{minipage}\n\\begin{minipage}{0.5\\linewidth}\n\\includegraphics[scale=0.4,angle=0]{NS3D_nu0.5_FIXNLy.AL40.DH0.05.1st-it1.rn.ps}\n\\caption{$\\frac{1}{\\nu}\\mathbb{P}\\left(\\triangle^{-1}\\bm{X}_1\\cdot\\nabla \\triangle^{-1}\\bm{X}_1\\right)$}\n\\label{NS2nd-order}\n\\end{minipage}\n\\end{figure}\nIt turns out that the first component of the second-order correction is identically equal to zero.\nFigure \\ref{NS1st-order} shows the second component of the first-order approximation $\\bm{X}_1$ as a function\nof $\\xi_1$. It has a peak at the origin whose height is approximately 0.12.\nHence we show in Figure \\ref{NS2nd-order} the second component of the second-order correction due to nonlinearity\n$\\frac{1}{\\nu}\\mathbb{P} \\left(\\triangle^{-1}\\bm{X}_1\\cdot\\nabla \\triangle^{-1}\\bm{X}_1\\right)$\nas a function of $\\xi_1$.\nIt has double peaks near the origin, but their value is small and is about 0.0015.\nNoting $Re=\\frac{M}{\\nu}=\\frac{1}{1\/2}=2,$ after non-dimensionalisation\nwe can estimate the size of nonlinearity as\n$$ N \\approx \\frac{0.0015}{2 \\times 0.12} \\approx 6 \\times 10^{-3}.$$\nActually the maximum value of the nonlinear term in $\\mathbb{R}^3$ is 0.0022, not much different from the above value.\nThus $N \\approx 9\\times 10^{-3}$ at most, and we conclude\n$$N=O(10^{-2})\\;\\;\\mbox{for the 3D Navier-Stokes equations.}$$\nIt should be noted that it is much smaller than the value of $N$ for the Burgers equations, whose solutions are known to\nremain regular for all time. 
Because the difference between the Navier-Stokes and Burgers equations is the presence or absence \nof the incompressibility condition, it is incompressibility that makes the value of $N$ for the Navier-Stokes equations\nsmall.\nOn a practical side, this also means that even if we add the second-order correction\nto the first-order term at low Reynolds number, say $Re=1,$ the superposed solution is virtually indistinguishable from\nthe first-order approximation.\n\n\\subsection{Derivations of some formulas}\nWe will derive the basic formulas used above.\nSolving the heat equation\n$$\\frac{\\partial \\bm{\\psi}_1}{\\partial t}=\\nu \\triangle \\bm{\\psi}_1$$\nthe first-order approximation is given by\n$$\\bm{\\psi}_1(\\bm{x},t)\n=\\frac{1}{(4\\pi \\nu t)^{3\/2}}\\int_{\\mathbb{R}^3} \\bm{\\psi}_0(\\bm{y})\n\\exp \\left( -\\frac{|\\bm{x}-\\bm{y}|^2}{4\\nu t}\\right) d \\bm{y}.$$\nAfter applying dynamic scaling\n$$\\bm{\\psi}(\\bm{x},t)=\\bm{\\Psi}(\\bm{\\xi},\\tau),$$\n$$\\bm{\\xi}=\\frac{\\bm{x}}{\\sqrt{2 a(t+t_*)}},\\; \n\\tau=\\frac{1}{2 a}\\log(1+2at),$$\nthe equations for the vector potential read\n$$\\frac{\\partial \\bm{\\Psi}_1}{\\partial \\tau}=a\\bm{\\xi}\\cdot \\nabla \\bm{\\Psi}_1\n+\\nu \\triangle \\bm{\\Psi}_1.$$\nIts solution is given by\n$$\\bm{\\Psi}_1(\\bm{\\xi},\\tau)=e^{-3 a \\tau}\n\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a \\tau})}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} \\bm{\\Psi}_0(\\bm{\\eta})\n\\exp \\left( -\\frac{a}{2\\nu} \\frac{|\\bm{\\xi}-\\bm{\\eta}e^{-a \\tau}|^2}\n{1-e^{-2 a \\tau}}\\right) d\\bm{\\eta}$$\n$$=\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a \\tau})}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} \\bm{\\Psi}_0(e^{a\\tau}\\bm{y})\n\\exp \\left( -\\frac{a}{2\\nu} \\frac{|\\bm{\\xi}-\\bm{y}|^2}\n{1-e^{-2 a \\tau}}\\right) d\\bm{y}.$$\nTaking a curl with respect to $\\bm{\\xi}$ three times,\nwe find the expressions for the vorticity curl\n$$\\bm{X}_1(\\bm{\\xi},\\tau)\n=\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a 
\\tau})}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} e^{3a\\tau}\\bm{X}_0(e^{a\\tau}\\bm{y})\n\\exp \\left( -\\frac{a}{2\\nu} \\frac{|\\bm{\\xi}-\\bm{y}|^2}\n{1-e^{-2 a \\tau}}\\right) d\\bm{y}.$$\nWe distinguish two cases to proceed.\n\n(i) If the initial data satisfies the similarity condition $\\lambda^3 \\bm{X}_0(\\lambda\\bm{y})=\\bm{X}_0(\\bm{y}),$\nwe have\n$$\\bm{X}_1(\\bm{\\xi},\\tau)\\to\n\\left( \\frac{a}{2\\pi \\nu}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} \\bm{X}_0(\\bm{y})\n\\exp \\left( -\\frac{a}{2\\nu} |\\bm{\\xi}-\\bm{y}|^2 \\right) d\\bm{y}\n= \\bm{X}_0*G,\n$$\nwhere $G(\\bm{\\xi})\\equiv\\left( \\frac{a}{2\\pi \\nu}\\right)^{3\/2}\n\\exp \\left(-\\frac{a}{2\\nu}|\\bm{\\xi}|^2 \\right).$\n\n(ii) For more general initial data without the similarity assumption, we make use of the formula\n$\\lambda^3 \\bm{X}_0(\\lambda\\bm{y}) \\to \\bm{M}\\delta(\\bm{y})$ where $\\bm{M}=\\int \\bm{X}_0 d\\bm{y}.$\nWe have, noting $\\bm{P}\\bm{X}_0=\\bm{X}_0,$\n\\begin{eqnarray}\n\\bm{X}_1(\\bm{\\xi},\\tau)\n&=&\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a \\tau})}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} e^{3a\\tau}(\\bm{P}\\bm{X}_0(e^{a\\tau}\\bm{y}))\n\\exp \\left( -\\frac{a}{2\\nu} \\frac{|\\bm{\\xi}-\\bm{y}|^2}{1-e^{-2 a \\tau}}\\right) d\\bm{y} \\nonumber\\\\\n&=&\\left( \\frac{a}{2\\pi \\nu(1-e^{-2 a \\tau})}\\right)^{3\/2}\n\\int_{\\mathbb{R}^3} e^{3a\\tau}\\bm{X}_0(e^{a\\tau}\\bm{y})\n\\bm{P}\\exp \\left( -\\frac{a}{2\\nu} \\frac{|\\bm{\\xi}-\\bm{y}|^2}{1-e^{-2 a \\tau}}\\right) d\\bm{y} \\nonumber\\\\\n&\\to& \\bm{P}\\bm{M}G\\:\\:\\mbox{as}\\;\\; \\tau \\to \\infty. 
\\nonumber\n\\end{eqnarray}\nThis is the leading-order approximation for the scaled 3D Navier-Stokes equations.\n(Alternatively we may apply the solenoidal projection to restore the incompressibility\nas the incompressibility is not maintained in the limiting procedure.)\n\n\n\\section{Summary and outlook}\nWe have studied self-similar solutions to the fluid dynamical equations,\nwith particular focus on the so-called source-type solutions.\nAs an illustration of successive approximation schemes we have discussed the 1D Burgers equation\nwhich is exactly soluble. In this case the velocity is the most convenient choice for its analysis.\nWe have introduced a method of quantitatively assessing the size of nonlinearity $N$ using the source-type solutions.\nSimilar analyses have been carried out for higher-dimensional Burgers equations.\nFor Burgers equations in any dimension, we find $N=O(0.1)$.\n\nWe then move on to consider the 2D and 3D Navier-Stokes equations.\nIn two dimensions we review the known results obtained with the vorticity.\nIn three dimensions the most convenient choice of the unknown is the vorticity curl.\nWe have formulated the dynamically-scaled equations using that variable and set up the successive\napproximation schemes. We have found that the second-order correction, stemming from the nonlinear term,\nis $N \\approx 0.01$, an order of magnitude smaller than that for the Burgers equations.\n\nIt may be in order to give our outlook on the topic.\nThe current approach relies on perturbative treatments.\nIt may be challenging, but worthwhile to study the functional form of the solution by non-perturbative\nmethods for further theoretical developments.\nIt is also of interest to seek a fully non-linear solution by numerical methods. 
\nIt is noted that the value of $N$ found here is at least one order of magnitude smaller than that for the Burgers equations,\nwhose solutions are known to remain regular for all time.\nAs an application of the source-type solution, it is useful to characterise the late stage of\nstatistical solutions of the Navier-Stokes equations \\cite{Ohkitani2020}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn the recent past, collaborative and crowdsourcing platforms \\citep{estelles2012towards} have been investigated for their ability to obtain large amounts of user interactions for the annotation of image databases. Particularly, the capacity to outsource simple human intelligence tasks to a crowd population while simultaneously drawing on client computing resources for interfacing is being increasingly appreciated in the imaging community \\citep{mckenna2012strategies,maier2014can,mavandadi2012biogames}. First studies employing collaborative \\citep{haehn2014design,rajchl2016learning} or crowdsourcing platforms \\citep{maier2014can,albarqouni2016aggnet} via web interfaces have been proposed for biomedical image segmentation. Since such interfaces often have limited capacity to interact with image data, weak forms of annotations (\\emph{e.g.} bounding boxes, user scribbles, image-level tags, \\emph{etc.}) have been investigated to reduce the required annotation effort. 
Recent studies have shown that placing bounding box annotations is approximately 15 times faster than creating pixel-wise manual segmentations \\citep{lin2014microsoft,papandreou2015weakly}.\n\nHowever, in contrast to annotating natural images \\citep{lin2014microsoft,russell2008labelme} or recognising instruments in a surgical video sequence \\citep{maier2014can}, the correct interpretation of medical images requires specialised training and experience \\citep{nodine2000nature,gurari2015collect}, and therefore might pose a challenge for non-expert annotators, leading to incorrectly annotated data \\citep{cheplygina2016early}. Nonetheless, considering the limited resources of available clinical experts and the rapid increase in information of medical imaging data (\\emph{e.g.} through high-resolution, whole-body imaging, \\emph{etc.}) alternative approaches are sought. Particular challenges arise, when trying to develop machine learning based approaches that can be scaled to very large datasets (\\emph{i.e.} population studies). Many of the currently available approaches require large amounts of labelled training data to deal with the variability of anatomy and potential pathologies.\n\n\\subsection{Related Work}\nTo reduce the annotation effort, many well-known studies propose segmentation methods employing simple forms of user annotations to obtain voxel-wise segmentations \\citep{boykov2000interactive,rother2004grabcut,rajchl2017deepcut,koch2017multi}. While adjusting hyperparameters can be considered an interaction, in this study we concentrate on simplified forms of pictorial input \\citep{olabarriaga2001interaction}, called weak annotations (WA). Such annotations have been extensively used in the literature, particularly in the context of medical object segmentation problems. \\cite{boykov2000interactive} used user-provided scribbles (SC) or brush strokes as input and hard constraints to an interactive graphical segmentation problem. 
Similarly, \\cite{baxter2015optimization}, \\cite{baxter2017directed} and \\cite{rajchl2012fast} expressed this problem in a spatially continuous setting by using prior region ordering constraints and exploiting parallelism via GPU computing. The GrabCut algorithm \\citep{rother2004grabcut} employs rectangular regions (RR) as bounding boxes to both compute a colour appearance model and spatially constrain the search for an object. These spatial constraints further allow to reduce the computational effort \\citep{pitiot2004expert}. The segmentation platform ITK-SNAP\\footnote{\\emph{http:\/\/www.itksnap.org\/}} \\citep{yushkevich2006user} combines RR with SC and employs a pre-segmentation (PS) to initialise an active contour model. \n\nWhile the above object segmentation methods concentrate on how to accurately compute segmentations based on WA, recent studies have examined how to efficiently acquire the required annotations. Collaborative annotation platforms such as LabelMe\\footnote{\\emph{http:\/\/labelme.csail.mit.edu\/}} \\citep{russell2008labelme} or \\citep{lin2014microsoft} were proposed to distribute the effort of placing image annotations to a crowd of users. Such crowdsourcing approaches have been successfully used in proof-reading connectomic maps \\citep{haehn2014design}, identification of surgical tools in laparoscopic videos \\citep{maier2014can}, polyps from computed tomography (CT) colonography images \\citep{mckenna2012strategies} or the identification of the fetal brain \\citep{rajchl2016learning}.\nHowever, most studies concentrate on tasks that require little expertise of the crowd, as the objects to identify are either known from everyday life \\citep{russell2008labelme,lin2014microsoft} or foreign to background context \\citep{maier2014can}. \\cite{russell2008labelme} and \\cite{lin2014microsoft} concentrated on object recognition tasks in natural images, the latter constrained to objects \"easily recognizable by a 4 year old\". 
\\cite{maier2014can} asked users to identify a foreign surgical object in a video scene and \\cite{haehn2014design} provided an automated pre-segmentation to be corrected by users. \\cite{mckenna2012strategies} compensated for the lack of expertise in reading the colonography images by improving image rendering, implementing a training module and using a large number of redundant users (\\emph{i.e.} 20 knowledge workers per task). \nExpertise in the interpretation of medical images is largely acquired through massive amounts of case-reading experience \\citep{nodine2000nature} and it has been shown that novices perform with lower accuracy than an average expert in tasks such as screening mammograms for breast cancer \\citep{nodine1999experience,nodine2000nature}. In contrast to the \\emph{diagnostic interpretation} of medical images, automated segmentation pipelines merely require the \\emph{identification} of anatomical structures (\\emph{i.e.} it requires less expertise to identify the liver in a CT image than a lesion in the liver). \n\n\\subsection{Contributions}\nIn this study, we examine types of commonly employed WA and investigate the impact of reducing the annotation frequency (\\emph{i.e.} only annotating every $k$-th slice in a volume) and of the annotators' expertise on the segmentation accuracy. For this purpose, we employ a well-known graphical segmentation method and provide weak image annotations as initialisation and constraints to the segmentation problem at hand.\nWe address the problem of liver segmentation from a database of abdominal CT images and corresponding \\emph{non-redundant annotations}. Liver segmentation is of great importance for both planning of laparoscopic surgery and computer-assisted diagnosis \\citep{wolz2012multi}, and requires extensive manual annotation effort because of the high spatial resolution and large field of view of abdominal CT. 
\nFurther, we propose and evaluate how a \\emph{weakly labelled atlas} can be used for the detection and removal of incorrect annotations, and we show that weak annotations can achieve accuracy similar to that of a state-of-the-art fully-supervised automated segmentation method \\citep{wolz2012multi}. \n\n\\section{Methods} \nTo study their impact on accuracy, we simulate user annotations from expert manual segmentations $M$, subject to different expertise levels and annotation frequencies:\n\n\\subsubsection*{Expertise} \nWe assume that the task of placing an annotation itself is defined well enough to be handled by a pool of general users with any experience level. However, the correct identification of anatomical structures might pose a challenge to non-expert users. We define expertise as the rate of correctly identifying an object of interest in a 2D image slice extracted from a 3D volume. If an error occurs, the user annotates the wrong object or states that the object is not visible in this slice. We define the error rate (ERR $\\in [0,1]$), i.e. the frequency of misidentification, as a measure of expertise.\nAn annotation error is simulated by computing an annotation from another organ (\\emph{e.g.} the kidney, instead of the liver) or by setting the slice to background (\\emph{i.e.} the organ is not visible in this slice). \n\n\\subsubsection*{Annotation Rate} \nWe collect an equal number of annotations from each of the three slice directions $d \\in D$ at an annotation rate (AR $\\in [0,1]$). When computing the AR, we annotate every $k$-th slice, where $k = AR^{-1}$. Note that each slice was annotated at most \\emph{once}, \\emph{i.e.} annotations are \\emph{non-redundant}. 
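As a concrete illustration, the sampling of annotated slices at a given AR and the injection of identification errors at a given ERR could be simulated along the following lines (a minimal sketch; the function name and interface are our own, not the authors' code):

```python
import random

def annotation_schedule(n_slices, ar, err, seed=0):
    """Simulate which slices of one direction receive an annotation
    (every k-th slice, k = AR^-1, each slice at most once) and which of
    those annotations are misidentifications (occurring at rate ERR).

    Returns a list of (slice_index, is_error) pairs.
    """
    rng = random.Random(seed)
    k = max(1, round(1.0 / ar))        # annotate every k-th slice, non-redundantly
    schedule = []
    for j in range(0, n_slices, k):
        # with probability ERR the user annotates the wrong organ or
        # marks the slice as "object not visible"
        is_error = rng.random() < err
        schedule.append((j, is_error))
    return schedule

sched = annotation_schedule(n_slices=100, ar=0.25, err=0.1)
print(len(sched))  # 25 of 100 slices annotated at AR = 25%
```

An erroneous slice would then be replaced by an annotation of another organ, or by an empty background slice, as described above.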
\n\n\\begin{figure*}[!h]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{ann_types}\n\\caption{Weak annotation types (top left to bottom right): image, expert manual segmentation (blue), scribbles (SC, green), binary decision making (BD, magenta), rectangular bounding box regions (RR, cyan) and merging of pre-segmentations (PS, yellow). }\n\\label{fig:ann_types}\n\\end{figure*}\n\n\n\\subsection{Annotation strategies}\nFor all experiments, we simulate following forms of weak annotations from expert manual segmentations $M$:\n\n\\subsubsection*{Brush strokes or scribbles (SC)}\nSimilarly to many interactive segmentation methods \\citep{boykov2000interactive,rajchl2012fast,baxter2015optimization}, the user is asked to place a scribble or brush stroke into the object. We simulate placing scribble annotations by the iterative morphological erosion of the manual segmentation $M$ until a maximum of desired scribble size is reached. An example of such a generated SC label is depicted in Fig. \\ref{fig:ann_types} (green).\n\n\\subsubsection*{Binary decision making (BD)}\nThe image is split into $N_{\\textrm{BD}} = ds^2$ equally sized rectangular sub-regions, with the same number of splits ($ds$) per image dimension. For this type of weak annotation, a user is tasked to make a series of binary decisions on which sub-regions contain the object, such that all of the object is contained. We compute these weak annotations (BD) such that $\\forall \\mbox{BD} \\, \\cap \\, M \\neq \\emptyset$. Fig. \\ref{fig:ann_types} shows a BD (magenta) generated from $M$ (blue) for $ds = 4$. \n\n\\subsubsection*{Rectangular (bounding box) regions (RR)}\nSimilarly to \\citep{rother2004grabcut}, the user is asked to draw a tight rectangular region around the object. We compute a bounding box based on the maximum extent of $M$ within the respective image slice. An example RR (cyan) computed from $M$ (blue) is shown in Fig. 
\\ref{fig:ann_types}.\n\n\\subsubsection*{Merging pre-segmentations (PS)}\nInspired by recent work in \\citep{haehn2014design}, a user merges regions computed from an automated pre-segmentation method. We use a multi-region max flow intensity segmentation with the Potts energy, according to \\citep{yuan2010continuous}: \n\\begin{equation}\nE(u) = \\sum\\limits_{\\forall L} \\int\\limits_{\\Omega}(D_L(x)u_L(x)+ \\alpha_{\\textrm{Potts}} |\\nabla u_L(x)|)dx \\, , \n\\label{eq:potts_energy}\n\\end{equation} \n\\begin{equation}\ns.t. \\, u_L(x) \\geq 0 \\mbox{ and } \\sum\\limits_{\\forall L}u_L(x) = 1\n\\end{equation}\nto obtain piecewise constant regions. The data fidelity term for each label $L = 1,\\ldots,N_L$ is defined as the intensity L1-distance \n\\begin{equation}\nD_L(x)=|I(x)-l_L| \\, , \n\\label{eq:potts_dt}\n\\end{equation}\nwhere $l_L$ denotes the $L$-th most frequent intensity according to the histogram of the image volume. For all experiments, we fix $N_L$ = 16. The GPU-accelerated solver was provided with the ASETS library \\citep{rajchl2016hierarchical}. After convergence, a discrete label map is calculated voxel-wise as $\\argmax_l u_L(x)$.\n\n\\begin{figure*}[h!]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{potts}\n\\caption{Example pre-segmentation (PS) results on an abdominal CT volume: Top row (from left to right): CT slice images in transverse, coronal and sagittal direction. Bottom row: Corresponding PS segmentation labels after $\\argmax_l u_L(x)$, when minimising \\eqref{eq:potts_energy}, using \\eqref{eq:potts_dt}, $N_L = 16$ and $\\alpha_{Potts} = 0.05$.}\n\\label{fig:potts}\n\\end{figure*}\n\n\nTo obtain connected individual segments, the obtained segmentation is subsequently partitioned via 4-connected component analysis. Given such pre-segmentation (PS), the user is tasked to merge subregions, such that they contain the object of interest (\\emph{i.e.} the liver). 
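For illustration, the three slice-wise simulations described above (SC by iterative erosion of $M$, RR as the tight bounding box of $M$, BD by keeping grid cells that intersect $M$) might be sketched as follows; the function names, the stopping fraction and the simplified 4-connected erosion are our own assumptions, not the authors' implementation:

```python
import numpy as np

def erode(m):
    """One step of 4-connected binary erosion (neighbours outside the
    image are ignored; sufficient for masks away from the border)."""
    e = m.copy()
    e[1:, :] &= m[:-1, :]
    e[:-1, :] &= m[1:, :]
    e[:, 1:] &= m[:, :-1]
    e[:, :-1] &= m[:, 1:]
    return e

def simulate_sc(m, max_frac=0.1):
    """Scribble (SC): erode the expert mask M until at most a fraction
    max_frac of its voxels remains."""
    sc = m.copy()
    while sc.sum() > max_frac * m.sum():
        nxt = erode(sc)
        if not nxt.any():
            break
        sc = nxt
    return sc

def simulate_rr(m):
    """Rectangular region (RR): tight bounding box around M in a slice."""
    rr = np.zeros_like(m)
    ys, xs = np.nonzero(m)
    if ys.size:
        rr[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return rr

def simulate_bd(m, ds=4):
    """Binary decisions (BD): keep each of the ds x ds grid cells
    that intersects M."""
    bd = np.zeros_like(m)
    h, w = m.shape
    for i in range(ds):
        for j in range(ds):
            cell = np.s_[i * h // ds:(i + 1) * h // ds,
                         j * w // ds:(j + 1) * w // ds]
            if m[cell].any():
                bd[cell] = True
    return bd
```

Each helper takes a boolean 2D slice of the expert segmentation $M$ and returns the corresponding weak annotation mask.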
We simulate the merging similar to BD, such that $\\forall \\mbox{PS} \\, \\cap \\, M \\neq \\emptyset$. A simulated PS annotation is shown in Fig. \\ref{fig:ann_types} in yellow and the corresponding $M$ in blue. \n\n\\subsection{Annotations as Segmentation Priors}\n\\label{sec:fuse_ann}\nTo be employed as priors in a volume segmentation problem, the annotations from individual slice directions $d \\in D$ need to be consolidated to account for voxels $x$ located on intersecting slices: \\\\\nThe binary SC annotations from all slice directions $d \\in D$ are combined to a volume annotation $SC_{Vol}$\n\\begin{equation}\nSC_{Vol}(x) \\, = \\, \\cup_{d=1}^D \\, SC_d(x) \\, \\, ,\n\\end{equation}\nand employed as foreground samples $S_{FG} = SC_{Vol}$. \\\\\n\nAll binary annotations $A_d \\in \\{BD, RR, PS\\}$ and all unannotated slices $U_d$ are similarly combined to volumes to establish $S_{BG}$. Note, that $A_d(x) = 1$ denotes the user rated the location $x$ as \"foreground\" and $U_d(x) = 1$ denotes, that the user has not seen this location.\n\\begin{equation}\nA_{Vol}(x) = \\, \\cup_{d=1}^D \\, A_d(x) \\,; \\, \\, \\, \\, U_{Vol}(x) = \\, \\cap_{d=1}^D \\, U_d(x) \\,;\n\\end{equation}\nThe background samples $S_{BG}$ are computed as all voxels $x$ that are outside $A_{Vol}$ and \\emph{are} annotated:\n\\begin{equation}\nS_{BG}(x) \\, = \\, \\neg \\, A_{Vol}(x) \\, \\, \\cap \\,\\, \\, \\neg \\, U_{Vol}(x) . \n\\end{equation}\n\nThe resulting samples $S_{FG}$ and $S_{BG}$ can then be used to compute intensity models or to enforce spatial constraints. For all experiments, we employ SC annotations as priors for foreground voxels and \\{BD, RR, PS\\} annotations as priors for background voxels. 
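The consolidation rules above translate directly into voxel-wise Boolean operations; a hedged numpy sketch (the array names and list-per-direction interface are our own):

```python
import numpy as np

def consolidate(sc_by_dir, a_by_dir, u_by_dir):
    """Fuse per-direction annotation volumes into FG/BG samples.

    sc_by_dir: boolean SC volumes, one per slice direction d
    a_by_dir:  boolean volumes of BD/RR/PS "foreground" ratings
    u_by_dir:  boolean volumes marking unannotated voxels
    """
    sc_vol = np.logical_or.reduce(sc_by_dir)   # SC_Vol: union over directions
    a_vol = np.logical_or.reduce(a_by_dir)     # A_Vol: union over directions
    u_vol = np.logical_and.reduce(u_by_dir)    # U_Vol: seen in no direction
    s_fg = sc_vol                              # foreground samples
    s_bg = ~a_vol & ~u_vol                     # annotated and outside A_Vol
    return s_fg, s_bg
```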
For each of these three combinations (\\emph{e.g.} SC and BD, \\emph{etc.}), we calculate $S_{FG}$ and $S_{BG}$ for each volume image to be segmented.\n\n\n\n\n\\subsection{Segmentation Problem}\nThe method employed to obtain a segmentation can be considered as a black box to be replaced by any specialised pipeline that suits a specific problem. For our experiments, we employ a well-known interactive flow maximisation \\citep{boykov2000interactive,rajchl2012fast} approach to compute image segmentations from the input annotations $A$, subject to a certain $AR$ and $ERR$. For this purpose, we use the continuous max-flow solver \\citep{yuan2010study,rajchl2016hierarchical} supporting GPU acceleration and allowing us to tackle the computational load for our experiments. We find a solution by minimising an energy $E(u)$ defined for the labelling or indicator function $u$ at each voxel location $x$ in the image $I$, $ \\mbox{ s.t. } u(x) \\in [ 0, 1 ]$ as, \n\n\\begin{equation}\nE(u) = \\int\\limits_{\\Omega}(D_s(x)u(x) + D_t(x)(1-u(x))+ \\alpha|\\nabla u(x)|)dx \\, , \\\\\n\\label{eq:binary_e}\n\\end{equation}\nHere, the data fidelity terms $D_{s,t}(x)$ are defined as the negative log-likelihood of the probabilities $\\omega_{1,2}$, computed from normalised intensity histograms of the foreground (FG) and background (BG) region, respectively,\n\\begin{equation}\nD_s(x) \\, = \\, -log(\\omega_1 (I(x))) \\, \\mbox{ and } \\, \nD_t(x) \\, = \\, -log(\\omega_2 (I(x))) \\, , \\\\\n\\label{eq:ll_data_term}\n\\end{equation}\nas described in \\citep{boykov2001interactive}. 
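The histogram-based data terms in \eqref{eq:ll_data_term} might be computed as in the following sketch (the bin count and smoothing constant are our own choices, not values from the paper):

```python
import numpy as np

def data_terms(image, s_fg, s_bg, n_bins=64, eps=1e-8):
    """Negative log-likelihood data terms D_s, D_t from normalised
    intensity histograms of the FG/BG samples."""
    bins = np.linspace(image.min(), image.max(), n_bins + 1)
    idx = np.clip(np.digitize(image, bins) - 1, 0, n_bins - 1)
    w1, _ = np.histogram(image[s_fg], bins=bins)
    w2, _ = np.histogram(image[s_bg], bins=bins)
    w1 = w1 / max(w1.sum(), 1)                 # normalised FG histogram
    w2 = w2 / max(w2.sum(), 1)                 # normalised BG histogram
    d_s = -np.log(w1[idx] + eps)               # cost under the FG model
    d_t = -np.log(w2[idx] + eps)               # cost under the BG model
    return d_s, d_t
```

Zeroing these costs inside the annotated FG and BG regions then yields the soft spatial constraints described next.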
Additionally, we employ a soft spatial constraint by setting the cost for regions annotated as FG and BG, to a minimum:\n\n\\begin{align}\nD_s(x) = 0, \\, \\, \\forall x \\in \\mbox{FG}; \\, \\, D_t(x) = 0, \\, \\, \\forall x \\in \\mbox{BG};\n\\label{eq:soft_constraints}\n\\end{align}\n\nConsolidated volume annotations (see Section \\ref{sec:fuse_ann}) are used to compute samples $S_{FG}$ and $S_{BG}$ of FG and BG, respectively. $S_{FG}$ and $S_{BG}$ are subsequently employed to compute $\\omega_{1,2}$ in \\eqref{eq:ll_data_term} and as spatial constraints in \\eqref{eq:soft_constraints}.\nAfter optimisation of the energy in \\eqref{eq:binary_e}, the resulting continuous labelling function $u$ is thresholded at 0.5 to obtain a discrete segmentation result for the FG, as described in \\citep{yuan2010study}.\n\n\\subsection{Outlier Detection \\& Removal}\n\\label{sec:outlier_detection}\nWe propose a method for quality assessment to mitigate the impact of annotation errors on the accuracy of the segmentation results (\\emph{e.g.} when using databases labelled by crowds with low expertise). Note, that contrary to other studies \\citep{mckenna2012strategies,lin2014microsoft}, we do \\emph{not} require redundant annotations for outlier detection. Instead, we propose to make use of redundant information in the \\emph{flawed} and \\emph{weakly labelled} atlas database and retrieve similar image slices and their annotations in the fashion of multi-atlas segmentation pipelines \\citep{wolz2012multi,aljabar2009multi}.\n\nIf spatial variability is accounted for (\\emph{e.g.} through registration), we can retrieve slices from other atlas volumes and use their annotations to compute an agreement measure to rate a given annotation. 
For this purpose, we borrow from the concept of the SIMPLE method \\citep{langerak2010label}, where an iteratively refined agreement is used to assess the quality of individual atlases in a multi-atlas segmentation approach.\n\n\\subsubsection*{Weakly Labelled Atlas as Quality Reference}\nWe assume that $S$ subjects $s_i=\\{s_1,\\ldots,s_S\\}$ have been weakly (and potentially erroneously) annotated and aim to automatically detect the slices of each subject $s_i$ that have an annotation of insufficient quality (\\emph{e.g.} the wrong organ has been annotated or the organ was present, but not detected). \nIn the following, the $j$-th slice of subject $s_i$ in direction $d$ is denoted by $v_{i}^{j,d}$. For each slice in the database $v_{i}^{j,d}$, we first find a subset of the most similar spatially corresponding images $v_{q}^{j,d}$ of the subjects $s_q$ in the \\emph{weakly labelled atlas} using a global similarity measure, such as the sum of squared differences. \nWe then calculate a consensus segmentation $\\bar{O_1}$ from the annotations of these anatomically similar image slices using mean label fusion. For each of these selected atlas annotations, the overlap between the annotation and the estimated consensus segmentation is calculated with an accuracy metric.\n\nFor this purpose, we use the Dice similarity coefficient (DSC) between the regions $A$ and $B$ as a measure of overlap:\n\\begin{equation}\nDSC = \\frac{2 |A \\cap B|}{|A| + |B|}\n\\label{eq:dsc}\n\\end{equation} \n\nUsing the mean regional overlap $\\mu_\\textrm{DSC}^1$ between the atlas annotations and the consensus segmentation $\\bar{O_1}$, we can discard potentially inaccurately annotated atlas slices if their DSC with $\\bar{O_1}$ is less than this average $\\mu_\\textrm{DSC}^1$. 
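For reference, the overlap measure in \eqref{eq:dsc} reduces to a one-liner on boolean masks (a sketch; the function name is ours):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient DSC = 2|A and B| / (|A| + |B|);
    eps guards against division by zero for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```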
Following \\citep{langerak2010label}, we calculate another fusion estimate $\\bar{O_2}$ using the reduced subset of both anatomically similar and reasonably accurate annotations and calculate another mean DSC, $\\mu_\\textrm{DSC}^2$, and reject the annotations corresponding to $v_{i}^{j,d}$ if its DSC with $\\bar{O_2}$ is less than $\\mu_\\textrm{DSC}^2$.\n\nThis procedure is repeated for each annotation in the database. Note that the \\emph{weakly labelled atlas} can be built from the database itself so that no external\/additional input is required. An illustration of the approach is provided in Fig. \\ref{fig:outlier_detection}.\n\n\\begin{algorithm}[h]\n \\KwData{weak annotation for $v_{i}^{j,d}$: $wa_{i}^{j,d}$\\;\n corresponding WAs in the weakly labelled atlas $Q$: $wa_{q}^{j,d}$, $q \\in Q$}\n \\KwResult{$v_{i}^{j,d}$ is outlier: yes\/no}\n $Q^1 \\leftarrow \\{ p \\in Q: |Q^1|=N_\\textrm{similar}$ and $\\sum_{q\\in Q^{1}}||v_{i}^{j,d}-v_{q}^{j,d}|| \\rightarrow min$ \\}\\;\n i = 1\\;\n \\While{ i $\\le$ N$_\\textrm{iterations}$}{\n $\\bar{O_i} \\leftarrow$ MajorityVote$(wa_{q}^{j,d} ~ \\forall q\\in Q^{i}$)\\;\n $\\mu_i \\leftarrow $ Average( Dice( $\\bar{O_i}, wa_{q}^{j,d}$) ~ $\\forall q\\in Q^{i}$)\\;\n $Q^{i+1} \\leftarrow \\{ p \\in Q^{i}:$ Dice($\\bar{O_i}, wa_{q}^{j,d}) \\ge \\mu_i\\}$\\;\n$i \\leftarrow i+1$\n}\n\\eIf{Dice($wa_{i}^{j,d}, Q^{N_\\textrm{iterations}}$) $\\ge \\mu_{N_\\textrm{iterations}}$}\n {\n return yes\\;\n }{\n return no\\;\n }\n\\caption{Outlier detection using a weakly labelled, flawed atlas database.}\n\\end{algorithm}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{outlier_detection}\n\\caption{Schematic illustration of the outlier detection and removal approach.}\n\\label{fig:outlier_detection}\n\\end{figure}\n\n\n\\section{Experiments}\n\\subsubsection*{Image Database}\nThe image database used in the experimental setup consists of 150 (114 \\mars, 36 \\female) abdominal volume CT images with 
corresponding manual segmentations from expert raters. Available labelled anatomical regions include the liver, spleen, pancreas and the kidneys. All scans were acquired at the Nagoya University Hospital with a TOSHIBA Aquilion 64 scanner and obtained under typical clinical protocols. The volume images were acquired at an in-plane resolution of 512 x 512 voxels (spacing 0.55 to 0.82 mm) and contain between 238 and 1061 slices (spacing 0.4 to 0.8 mm).\n\n\n\\subsubsection*{Pre-processing \\& Generation of Weak Annotations}\nPrior to the experiments, all volume image data were affinely registered using the NiftiReg library \\citep{modat2010fast} (default parameters) to a random subject to spatially normalize the images and to account for variability in size. Weak annotations are generated for each slice in each direction $d$ in all volume images of the database, subject to the annotation rate $AR = \\{1, 0.5, 0.33, 0.25, 0.1, 0.05, 0.01\\}$ and the error rate $ERR = \\{0, 0.05, 0.1, 0.25, 0.5\\}$.\n\n\\subsubsection*{Liver Segmentation with Weak Annotations}\nThe weak annotations are fused to compute $S_{FG}$ and $S_{BG}$ (see \\ref{sec:fuse_ann}) to subsequently compute the data terms $D_{s,t}(x)$ in \\eqref{eq:ll_data_term} and the soft constraints in \\eqref{eq:soft_constraints}. A continuous max-flow segmentation \\citep{yuan2010study}, minimizing \\eqref{eq:binary_e} is computed to obtain a segmentation result.\nThe regularisation parameter $\\alpha = 4$ in \\eqref{eq:binary_e} and the parameters $\\alpha_{Potts} = 0.05$ and $N_L = 16$ in \\eqref{eq:potts_energy} were determined heuristically on a single independent dataset. For the outlier detection, $N_{iterations}$ was set to 2. 
All experiments were performed on an Ubuntu 14.04 desktop machine with a Tesla 40c (NVIDIA Corp., Santa Clara, CA) with 12 GB of memory.\n\n\\subsubsection*{Experimental Setup}\n20 consecutively acquired subject images are used as a subset to examine the impact of AR, ERR and type of weak annotation on the mean segmentation accuracy. An average DSC is reported as a measure of accuracy between the obtained segmentations and the expert segmentations $M$ for all the parameter combinations of ERR, AR and all examined annotation types (SC in combination with \\{BD, RR, PS\\}), resulting in 2100 single volume segmentations results.\n\nFurther, the proposed outlier detection (see Section \\ref{sec:outlier_detection}) is employed using annotations from all 150 subjects. The annotations after outlier removal are used for repeated segmentations, resulting in additional 2100 segmentations. A study on atlas selection \\citep{aljabar2009multi} suggests an optimal subset size for brain segmentation to be 20. We increased $N_{similar}$ of the globally similar atlases $Q^1$ to 30, to account for the variation in abdominal soft tissue organs, such as the liver.\n\nA series of paired T-tests is computed to determine significant changes in accuracy at a $p = 0.05$ level between resulting DSC before and after outlier detection.\n\n\\section{Results}\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{vis_res_slices_lbls}\n\\caption{Example segmentation results (from top left to bottom right): expert manual segmentation (magenta), segmentation with $< 0.8$ (blue), $\\approx 0.85$ (cyan), $\\approx 0.9$ (orange) and $\\approx 0.95$ accuracy according to DSC. The colour coding is chosen to reflect those of accuracy matrices in Fig. ~\\ref{fig:results_mean}.}\n\\label{fig:results_visual}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width=0.95\\linewidth]{vis_res_90}\n\\caption{Surface rendering of example segmentation results. 
The colour coding are chosen to reflect those of accuracy matrices in Fig. ~\\ref{fig:results_mean}.}\n\\label{fig:results_visual_surf}\n\\end{figure}\n\n\\subsubsection*{Segmentation Accuracy}\nFig. \\ref{fig:results_visual} depicts example segmentations of transverse slices on a single subject as comparative visual results with obtained DSC ranges. Visual inspection suggests that a DSC \\textgreater 0.9 can be considered an acceptable segmentation result. A DSC of lower than 0.8 can be considered a segmentation failure. Mean accuracy results of all examined methods are shown in Fig. \\ref{fig:results_mean}. \nThe main contribution to a decrease in accuracy was observed to be high error rates. Without outlier correction, acceptable segmentation results could be obtained with all annotation types, down to an AR of 25\\%. Using rectangular regions, this accuracy can still be obtained when annotating 1\\% of available slices, when the ERR is simultaneously less than 5\\%. In a densely annotated database (\\emph{i.e.} AR = 100\\%) more than 10\\% of erroneous annotations lead to segmentation failure. This is particularly interesting for medical image analysis studies considering a crowdsourcing approach with \\emph{non-redundant annotations} of non-experts. This effect still persists to a degree, after outlier correction at the highest tested ERR of 50\\%.\n\n\\subsubsection*{Performance after Outlier Correction}\nThe mean accuracy improves after the proposed outlier correction, however slight decreases in accuracy are observed at lower error rates. This is mainly due to the decreased number of available annotations after correction. The differences in mean DSC after outlier removal range from $-0.05$ to $+0.94$ (BD), $-0.0006$ to $+0.94$ (RR) and $-0.02$ to $+0.92$ (PS). Statistically significant changes are visualised in Figure \\ref{fig:results_mean} (bottom row) and numerical ranges reported in Tab. \\ref{tab:acc_diff}. 
\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{results_mean}\n\\caption{Top Row: Accuracy results for each annotation type, subject to AR and ERR, without outlier detection. Middle Row: Accuracy results with proposed outlier detection. Bottom Row: Results of paired T-tests between top and middle row (p \\textless 0.05). Increased and decreased mean accuracy are depicted in red and blue, respectively. Black elements show no significant difference.}\n\\label{fig:results_mean}\n\\end{figure*}\n\n\n\\begin{table}[!h]\n\\label{tab:acc_diff}\n\\centering\n\\caption{Minimal and maximal changes in DSC accuracy after outlier removal, for all tested ERR and AR and each annotation type.}\n\\begin{tabular}{|l|c|c|c|}\n\\hline\nANN Type & \\textbf{BD} & \\textbf{RR} & \\textbf{PS} \\\\\n\\hline\nincr. $N$ (DSC) & 8 (+0.94) & 18 (+0.94) & 10 (+0.92) \\\\\ndecr. $N$ (DSC) & 14 (-0.05) & 3 (-0.0006) & 5 (-0.04) \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table*}[!h]\n\\label{tab:acc_diff}\n\\centering\n\\caption{Mean segmentation accuracy with decreasing annotation rate (AR) and ERR = 0. All results as DSC [\\%].}\n\\renewcommand{\\arraystretch}{1.2}\n\\begin{tabular}{|l|c|c|c|c|c|c|c|}\n\\hline\n& \\multicolumn{7}{c|}{Annotation Rate (AR) [\\%]} \\\\ \\hline\nType & 100 & 50 & 33 & 25 & 10 & 5 & 1 \\\\\n\\hline\nBD & \\, 94.3 (0.8) & \\, 94.3 (1.2) & \\, 94.0 (1.4) & \\, 93.3 (2.2) & \\, 88.7 (3.7) & \\, 87.0 (4.0) & \\, 86.9 (3.7) \\\\\nRR & \\, 94.6 (0.8) & \\, 94.5 (0.8) & \\, 94.4 (0.8) & \\, 94.3 (0.8) & \\, 94.1 (0.9) & \\, 93.2 (1.2) & \\, 92.1 (1.9) \\\\\nPS & \\, 92.1 (1.4) & \\, 92.7 (1.4) & \\, 92.9 (1.4) & \\, 92.8 (1.8) & \\, 89.6 (4.0) & \\, 88.2 (4.8) & \\, 88.8 (4.1) \\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\section{Discussion}\nIn this study, we tested types of weak annotations to be used for liver segmentation from abdominal CTs. 
We examined the effects of different expertise levels and annotation rates of crowd populations on the segmentation accuracy and proposed a method to remove potentially incorrect annotations. In the conducted experiments, each of the slices was annotated at most \\emph{once}, reducing the effort associated with the acquisition of redundant annotations.\n\n\\subsubsection*{Segmentation Accuracy}\nWhile the max-flow segmentation was employed without any problem-specific adaptations, it yields accuracy comparable to that of the state-of-the-art hierarchical multi-atlas approach described in \\citep{wolz2012multi}, where a mean DSC of 94.4\\% was reported for the segmentation of the liver. Comparable accuracy was obtained when employing RR annotations at AR = 10\\% with no errors present, or for BD at AR = 50\\% and ERR = 10\\%. After the proposed outlier correction, using RR at an AR of 25\\%, errors of up to 25\\% yielded similarly high accuracy - a scenario realistic enough to be obtained by a non-expert crowd. Both BD and PS annotation types performed similarly robustly to RR at higher annotation rates and yielded acceptable accuracy at AR down to 25\\% and ERR of up to 25\\% without outlier correction. \n\n\\subsubsection*{Impact of Expertise and Annotation Rate}\nAn expected decrease in accuracy with both higher ERR and lower AR is observed for all annotation types. We observe more robust behaviour for the RR annotations across a wide range of ERR and at an AR of down to 5\\%. For BD and PS annotations, AR of less than 25\\% yielded insufficient accuracy, even at the highest expertise levels (\\emph{i.e.} ERR = 0). For all annotation types, the presence of errors has a larger impact at high AR, suggesting that the total amount of incorrectly annotated image slices is related to segmentation failure, rather than its rate. 
Without correction, high ERR were tolerated at annotation rates of 1-10\\%, but led to segmentation failure (\\emph{i.e.} DSC \\textless 0.8) at higher AR. This suggests that an increased number of annotations is not beneficial if they are performed at a high error rate. \n\n\\subsubsection*{Outlier Correction}\nThe proposed \\emph{weakly labelled atlas}-based outlier detection approach performed well, yielding maximal improvements of \\textgreater 0.9 DSC in accuracy, particularly in the presence of high error rates (see Fig. \\ref{fig:results_mean}). Its application allows obtaining high-quality (DSC \\textgreater 0.9) segmentations at the maximum tested ERR of 50\\%. At lower ERR, small decreases in accuracy are found. These are associated with a decrease in AR due to outlier removal. This effect can be seen when no annotation errors were present in the atlas prior to outlier correction (\\emph{i.e.} ERR = 0\\%). Fig. \\ref{fig:results_mean} clearly illustrates the existence of an upper accuracy bound (\\emph{i.e.} where ERR = 0\\%) and the comparable performance at higher ERR after correction. \n\n\n\n\\subsection*{Conclusions}\nWe tested forms of weak annotations to be used in medical image segmentation problems and examined the impact of annotator expertise and annotation frequency on the accuracy outcome. The resulting segmentation accuracy was comparable to the state-of-the-art performance of a fully-supervised segmentation method, and the proposed outlier correction using a \\emph{weakly labelled atlas} was able to largely improve the accuracy outcome for all examined types of weak annotations. 
The robust performance of this approach suggests that weak annotations from non-expert crowd populations could be used to obtain accurate liver segmentations, and the general approach can be readily adapted to other organ segmentation problems.\n\n\n\\section*{Acknowledgements}\nWe gratefully acknowledge the support of NVIDIA Corporation with the donation of a Tesla K40 GPU used for this research. This work was supported by Wellcome Trust and EPSRC IEH award [102431] for the iFIND project and the Developing Human Connectome Project, which is funded through a Synergy Grant by the European Research Council (ERC) under the European Union's Seventh Framework Programme (FP\/2007-2013) \/ ERC Grant Agreement number 319456. \n\n\n\\bibliographystyle{model2-names}\n\n\\section{Useful facts}\n\\label{sec:useful-facts}\n\n\nIn this section,\nwe jot down basic definitions and facts\nrelated to the Stieltjes transform that we will be using\nthroughout the paper.\n\nLet $Q$ be a bounded nonnegative measure on $\\mathbb{R}$.\nThe Stieltjes transform of $Q$ is defined at $z \\in \\mathbb{C}^{+}$\nby\n\\[\n m_Q(z) = \\int_{\\mathbb{R}} \\frac{1}{x - z} \\, \\mathrm{d}Q(x).\n\\]\n\n\n\n\\begin{fact}\n \\label{fact:lim_stiletjes_im}\n Let $m$ be the Stieltjes transform of a bounded measure $Q$ on $\\mathbb{R}_{\\ge 0}$.\n Let $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$.\n Then, $\\Im(m(z)) \\to 0$ as $\\Im(z) \\to 0$.\n\\end{fact}\n\\begin{proof}\n Let $z = x + i y$ with $x < 0$ and $y > 0$.\n Since $m$ is a Stieltjes transform of $Q$,\n we have\n \\[\n \\Im(m(z)) \n = \\Im \\left( \\int \\frac{1}{r - z} \\, \\mathrm{d}Q(r) \\right)\n = \\Im \\left( \\int \\frac{1}{r - (x + i y)} \\, \\mathrm{d}Q(r) \\right)\n = \\int \\frac{-y}{(r - x)^2 + y^2} \\, \\mathrm{d}Q(r).\n \\]\n Thus,\n we can bound\n \\[\n | \\Im(m(z)) |\n \\le \\frac{y}{x^2} \\int \\mathrm{d}Q(r).\n \\]\n Since $Q$ is a bounded measure, by letting $y \\to 0$, one has 
$\\Im(m(z)) \\to 0$ \n as $\\Im(z) \\to 0$.\n\\end{proof}\n\n\nWe will be interested in the Stieltjes transforms of spectral measures.\nThe spectral distribution of a Hermitian matrix ${\\mathbf{A}} \\in \\mathbb{C}^{p \\times p}$\nwith eigenvalues $\\lambda_1({\\mathbf{A}}), \\dots, \\lambda_p({\\mathbf{A}})$\nis the probability distribution that places a point mass of $\\tfrac{1}{p}$ at each\neigenvalue:\n\\[\n F_\\mA(\\lambda) = \\tfrac{1}{p} \\sum_{i=1}^{p} \\mathbbm{1}\\{ \\lambda_i({\\mathbf{A}}) \\le \\lambda \\}.\n\\]\nThe matrices of interest for us will be the population covariance matrix \n${\\bm{\\Sigma}} \\in \\mathbb{C}^{p \\times p}$\nand the sample covariance matrix \n$\\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}}$, where ${\\mathbf{X}} \\in \\mathbb{C}^{n \\times p}$ is the random design matrix.\n\nIf the Stieltjes transform \nof the spectrum of the sample covariance matrix $\\tfrac{1}{n} \\mX^\\ctransp \\mX$\nis \n\\begin{equation}\n \\label{eq:stieltjes-transform}\n m(z) = \\tfrac{1}{p} \\tr[(\\tfrac{1}{n} \\mX^\\ctransp \\mX - z \\mI_p)^{-1}],\n\\end{equation}\nthen the so-called \\emph{companion} Stieltjes transform \n\\begin{equation}\n \\label{eq:companion-stieltjes-transform}\n v(z) = \\tfrac{1}{n} \\tr[(\\tfrac{1}{n} \\mX \\mX^\\ctransp - z \\mI_n)^{-1}] \n\\end{equation}\nis the Stieltjes transform of the spectrum of $\\tfrac{1}{n} \\mX \\mX^\\ctransp$\n(hence the prefix \\emph{companion}).\nIt is useful because\nit is often easier to work with the companion Stieltjes transform\nthan with the Stieltjes transform itself.\nThe following fact relates the companion Stieltjes transform\nto the Stieltjes transform.\n\n\\begin{fact}\n\\label{fact:stieltjes-companion-stieltjes-relation}\nThe companion Stieltjes transform $v(z)$ \ncan be expressed in terms of the Stieltjes transform $m(z)$ at $z \\in \\mathbb{C}^{+}$ as\n\\begin{equation}\n \\label{eq:stieltjes-companion-stieltjes-relation}\n v(z) = \\frac{p}{n} m(z) + \\frac{1}{z} \\left(\\frac{p}{n} - 
1\\right).\n\\end{equation}\n\\end{fact}\n\\begin{proof}\n Let $(\\lambda_i)_{i = 1}^r$ be the nonzero eigenvalues of \n $\\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}}$\n (which are also the nonzero eigenvalues of $\\tfrac{1}{n} {\\mathbf{X}} {\\mathbf{X}}^\\ctransp$).\n Define $\\Lambda(z) = \\sum_{i=1}^r \\tfrac{1}{\\lambda_i - z}$.\n From \\cref{eq:stieltjes-transform,eq:companion-stieltjes-transform},\n note that we can write\n \\begin{align}\n m(z) = \\frac{\\Lambda(z)}{p} - \\frac{(p - r)}{pz}, \\quad\n v(z) = \\frac{\\Lambda(z)}{n} - \\frac{(n - r)}{nz}.\n \\end{align}\nCombining these equations proves the claim.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proofs in \\cref{thm:sketched-pseudoinverse}}\n\\label{sec:proofs-main-results}\nWe first present the remaining details of the proof that extends the argument to positive semidefinite $\\mA$, and then we present the alternative proof sketch based on Jacobi's formula.\n\n\\subsection{Proof of \\cref{thm:sketched-pseudoinverse} for positive semidefinite $\\mA$}\n\\label{sec:proof:thm:sketched-pseudoinverse}\n\n\\begin{proof}\nWe begin by proving the equivalence \\cref{eq:thm:sketched-pseudoinverse} and then show that the limit as $\\lambda \\to 0$ is well-behaved when we multiply by $\\mA^{1\/2}$ to obtain \\cref{eq:thm:sketched-pseudoinverse-A-half}.\n\nLet $\\mA_\\delta \\defeq \\mA + \\delta \\mI_p$, $\\mU \\defeq \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp$, and $\\mV \\defeq \\inv{\\mA + \\mu \\mI_p}$. 
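As a quick finite-dimensional sanity check, the Woodbury-type perturbation identity used next, $\\mS \\biginv{\\mS^\\ctransp \\mA_\\delta \\mS + \\lambda \\mI_q} \\mS^\\ctransp = \\mU - \\delta \\mU \\inv{\\mI_p + \\delta \\mU} \\mU$, is exact at any dimension and easy to verify numerically. A minimal sketch (hedged: assuming NumPy, with arbitrary small sizes and generic random $\\mA$ and $\\mS$; this is an illustration, not the asymptotic setting of the theorem):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 8, 5
lam, delta = 0.7, 0.3

# Generic PSD matrix A and sketching matrix S (sizes are arbitrary)
G = rng.standard_normal((p, p))
A = G @ G.T / p
S = rng.standard_normal((p, q))

I_p, I_q = np.eye(p), np.eye(q)
U = S @ np.linalg.inv(S.T @ A @ S + lam * I_q) @ S.T  # sketched resolvent of A

# Left-hand side: sketched resolvent of the perturbed matrix A_delta = A + delta*I
A_delta = A + delta * I_p
lhs = S @ np.linalg.inv(S.T @ A_delta @ S + lam * I_q) @ S.T

# Right-hand side: the Woodbury-type expansion in terms of U alone
rhs = U - delta * U @ np.linalg.inv(I_p + delta * U) @ U

print(np.allclose(lhs, rhs))  # True
```

The analogous identity for $\\mV$ follows from the same computation with $\\mS = \\mI_p$.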
By the Woodbury matrix identity, we have the following two identities:\n\\begin{gather}\n \\mS \\biginv{\\mS^\\ctransp \\mA_\\delta \\mS + \\lambda \\mI_q} \\mS^\\ctransp\n = \\mU - \\delta \\mU \\inv{\\mI_p + \\delta \\mU} \\mU, \\\\\n \\inv{\\mA_\\delta + \\mu \\mI_p}\n = \\mV - \\delta \\mV \\inv{\\mI_p + \\delta \\mV} \\mV.\n\\end{gather}\nIf either $\\lambda \\neq 0$ or $\\limsup \\tfrac{q}{p} < \\liminf r(\\mA)$, then we can conclude (see, e.g., \\cite{bai_silverstein_1998}) that $\\bignorm[\\rm op]{\\biginv{\\mS^\\ctransp \\mA_\\delta \\mS + \\lambda \\mI_q}}$ is almost surely uniformly bounded and that $\\mu$ is bounded away from zero (see \\cref{rem:joint-signs-lambda-mu}). Thus, since $\\norm[\\rm op]{\\mS}$ is also almost surely bounded asymptotically, $\\norm[\\rm op]{\\mU}$ and $\\norm[\\rm op]{\\mV}$ are asymptotically bounded by constants $C_\\mU$ and $C_\\mV$, respectively. Therefore, for $\\delta < \\tfrac{1}{2}\\min\\set{C_\\mU^{-1}, C_\\mV^{-1}}$, we have the following bound on the trace functional difference:\n\\begin{multline}\n \\limsup \\big| \\tr \\bigbracket{\\mTheta \\bigparen{\\mS \\biginv{\\mS^\\ctransp \\mA_\\delta \\mS + \\lambda \\mI_q} \\mS^\\ctransp - \\inv{\\mA_\\delta + \\mu \\mI_p}}}\n - \\tr \\bigbracket{\\mTheta \\bigparen{\\mU - \\mV}}\n \\big| \\\\\n \\leq 2 \\delta \\norm[\\tr]{\\mTheta} \\bigparen{C_\\mU^2 + C_\\mV^2}.\n\\end{multline}\nThus, as $\\delta \\searrow 0$, the trace functionals converge uniformly over $p$. We can therefore apply the Moore--Osgood Theorem to interchange limits, such that almost surely\n\\begin{align}\n \\lim_{p \\to \\infty} \\big| \\tr \\bigbracket{\\mTheta \\bigparen{\\mU - \\mV}}\n \\big| &= \n \\lim_{\\delta \\searrow 0} \\lim_{p \\to \\infty} \n \\big|\n \\tr \\bigbracket{\\mTheta \\bigparen{\\mS \\biginv{\\mS^\\ctransp \\mA_\\delta \\mS + \\lambda \\mI_q} \\mS^\\ctransp - \\inv{\\mA_\\delta + \\mu \\mI_p}}}\n \\big| \\\\\n &= 0.\n\\end{align}\n\nTo prove the equivalence in \\cref{eq:thm:sketched-pseudoinverse-A-half}, we can apply the equivalence in \\cref{eq:thm:sketched-pseudoinverse} proved above unless $\\lambda = 0$ and $\\limsup \\tfrac{q}{p} \\geq \\liminf r(\\mA)$. We need only consider $\\limsup \\lambda_0 < 0$, so it suffices to consider $\\liminf \\tfrac{q}{p} > \\limsup r(\\mA)$ (see \\cref{rem:mu0-lambda0-vs-alpha}).\nThe condition $\\limsup \\lambda_0 < 0$ implies that there exists $c_\\lambda > 0$ such that $\\lambda_{\\min}^{+}(\\mS^\\ctransp \\mA \\mS) > c_\\lambda$. Therefore, $\\bignorm[\\rm op]{\\mA^{1\/2} \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q}}$ is almost surely uniformly bounded in $p$ for all $\\lambda \\in D_\\lambda$, where $D_\\lambda = \\bigset{z \\in \\complexset \\colon |z| < \\tfrac{c_\\lambda}{2}}$. We now need to bound $\\bignorm[\\rm op]{\\mA^{1\/2} \\inv{\\mA + \\mu \\mI_p}}$. 
\nFrom the definition of $\\mu_0$ in \\cref{eq:mu0-lambda0-fps},\nwe observe that\n\\begin{align}\n \\frac{p}{q}\n \\frac{r(\\mA) \\lambda_{\\max}(\\mA)^2}{ (\\lambda_{\\max}(\\mA) + \\mu_0)^2} \n \\leq 1 \\leq \n \\frac{p}{q}\n \\frac{r(\\mA) \\lambda_{\\min}^{+}(\\mA)^2}{ (\\lambda_{\\min}^{+}(\\mA) + \\mu_0)^2},\n\\end{align}\nfrom which, in the case that $\\tfrac{q}{p} > r(\\mA)$ and $\\lambda_{\\min}^{+}(\\mA) > 0$,\nwe can conclude the bounds\n\\begin{equation}\n \\label{eq:mu-0-upper-lower-bounds}\n \\paren{\\tfrac{p r(\\mA)}{q} - 1} \\lambda_{\\max}(\\mA) < \n \\paren{\\sqrt{\\tfrac{p r(\\mA)}{q}} - 1} \\lambda_{\\max}(\\mA)\n \\leq \\mu_0 \\leq \n \\paren{\\sqrt{\\tfrac{p r(\\mA)}{q}} - 1} \\lambda_{\\min}^{+}(\\mA) < 0.\n\\end{equation}\nSince $\\liminf \\tfrac{q}{p} > \\limsup r(\\mA)$ and $\\liminf \\lambda_{\\min}^{+}(\\mA) > 0$, we therefore must have $\\limsup \\mu_0 < 0$.\nDefine the set $D_{\\mu} = \\bigset{z \\in \\complexset \\colon |z| < \\tfrac{-\\limsup \\mu_0}{2}}$. Since $-\\liminf \\lambda_{\\min}^{+}(\\mA) \\leq \\mu_0$, for all $\\mu \\in D_{\\mu}$, we must have the bound\n\\begin{equation}\n \\bignorm[\\rm op]{\\mA^{1\/2} \\inv{\\mA + \\mu \\mI_p}} \\leq \\frac{2\\bignorm[\\rm op]{\\mA^{1\/2}}}{-\\limsup \\mu_0}.\n\\end{equation}\nWe also know from \\cref{eq:sketched-modified-lambda} that\n\\begin{equation}\n |\\lambda| = |\\mu| \\big| 1 - \\tfrac{1}{q} \\tr \\bigbracket{\\mA \\inv{\\mA + \\mu \\mI_p}} \\big|.\n\\end{equation}\nOne can confirm that the second factor on the right-hand side is uniformly lower bounded away from 0 for $\\mu \\in D_\\mu$\nusing the first bound in \\cref{eq:mu-0-upper-lower-bounds}. Let $D_p = \\set{\\lambda : \\mu(\\lambda) \\in D_\\mu}$ be the inverse image of $D_\\mu$ under the map $\\lambda \\mapsto \\mu$ for each $p$. 
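The two-sided bound on $\\mu_0$ in \\cref{eq:mu-0-upper-lower-bounds} can be checked numerically for a concrete spectrum by solving the defining equation $\\tfrac{q}{p} = \\tfrac{1}{p} \\tr \\bracket{\\mA^2 (\\mA + \\mu_0 \\mI)^{-2}}$ by bisection. A minimal sketch (hedged: assuming NumPy and SciPy, and a hypothetical diagonal spectrum with $r(\\mA) = 0.5$ and $\\tfrac{q}{p} = 0.8$):

```python
import numpy as np
from scipy.optimize import brentq

p, q = 100, 80
lam_min, lam_max = 1.0, 2.0
# Hypothetical spectrum: half the eigenvalues spread over [1, 2], half zero,
# so r(A) = 0.5 while q/p = 0.8 > r(A).
pos = np.linspace(lam_min, lam_max, p // 2)
r = 0.5

# mu_0 solves q/p = (1/p) tr[A^2 (A + mu_0 I)^{-2}] on (-lam_min, 0);
# the zero eigenvalues contribute nothing to the trace.
g = lambda mu: np.sum((pos / (pos + mu)) ** 2) / p - q / p
mu0 = brentq(g, -lam_min + 1e-6, 0.0)

factor = np.sqrt(p * r / q) - 1.0  # negative, since q/p > r(A)
print(factor * lam_max <= mu0 <= factor * lam_min)  # True
```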
By the above arguments, the set $D = D_\\lambda \\cap \\limsup D_p$ is an open set over which the functions \n\\begin{equation}\n f_p(\\lambda) = \\big|\\tr \\bigbracket{\\mTheta \\bigparen{\\mA^{1\/2} \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp - \\mA^{1\/2} \\inv{\\mA + \\mu \\mI_p}}} \\big|\n\\end{equation}\nconverge uniformly as $\\lambda \\to 0$ over $p$. By Montel's theorem, these functions form a normal family. Since $f_p(\\lambda) \\to 0$ pointwise for $\\lambda \\neq 0$, this implies that $f_p(0) \\to 0$.\n\n\\end{proof}\n\n\\subsection{Alternative proof sketch for \\cref{thm:sketched-pseudoinverse} via Jacobi's formula}\n\\label{sec:proofs-main-results-jacobi}\n\nBelow we provide an alternate strategy for proving \\Cref{thm:sketched-pseudoinverse}\nvia Jacobi's formula.\nWe only sketch the outline of the proof \nand do not attempt to make it completely rigorous.\nNonetheless, we think the idea behind it is interesting and decided to include it, in hopes that it may be useful, perhaps for proving the asymptotically free sketching result we propose in \\cref{conj:general-free-sketching}.\nIt is in fact how we first discovered the statement of \\Cref{thm:sketched-pseudoinverse} ourselves,\nand only in retrospect proved it using the shorter argument\nbased on the Woodbury matrix identity\npresented in \\Cref{sec:proof:thm:sketched-pseudoinverse}.\n\n\\begin{proof}[Proof sketch]\n We note that without loss of generality, we can consider positive semidefinite $\\mTheta$, since we can always replace $\\mTheta$ with $\\frac{1}{2} (\\mTheta + \\mTheta^\\ctransp)$ and evaluate separately $\\mTheta_+$ and $\\mTheta_-$, where $\\mTheta_+$ and $\\mTheta_-$ refer to the restrictions of the symmetrized $\\mTheta$ to the eigenspaces corresponding to positive and negative eigenvalues, respectively.\n \n Define $\\mB_t \\defeq \\mA + t \\mTheta$ and $\\mB_t^\\mS \\defeq \\mS^\\ctransp \\mB_t \\mS$. 
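Jacobi's formula, the main tool in the sketch below, states that $\\tfrac{\\partial}{\\partial t} \\log \\abs{\\mB_t + \\lambda \\mI} = \\tr \\bracket{\\mTheta \\inv{\\mB_t + \\lambda \\mI}}$ for $\\mB_t = \\mA + t \\mTheta$, which can be checked against a central finite difference. A minimal numerical sketch (hedged: assuming NumPy; the dimensions, $t$, and the regularization $\\lambda$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 6
G = rng.standard_normal((p, p))
A = G @ G.T / p                       # positive semidefinite A
Theta = rng.standard_normal((p, p))
Theta = (Theta + Theta.T) / 2         # symmetric perturbation direction
lam, t, h = 0.5, 0.1, 1e-6

B = lambda s: A + s * Theta
logabsdet = lambda s: np.linalg.slogdet(B(s) + lam * np.eye(p))[1]

# Jacobi's formula: d/dt log|B_t + lam*I| = tr[Theta (B_t + lam*I)^{-1}]
numeric = (logabsdet(t + h) - logabsdet(t - h)) / (2 * h)
exact = np.trace(Theta @ np.linalg.inv(B(t) + lam * np.eye(p)))
print(np.isclose(numeric, exact))  # True
```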
Observe that $\\log \\abs{\\mB_t + \\lambda \\mI}$ is a monotonic function of $\\lambda$ that ranges from $-\\infty$ to $\\infty$ for $\\lambda \\in (-\\lambda_{\\mathrm{min}}(\\mB_t), \\infty)$, where $\\lambda_{\\mathrm{min}}(\\mB_t)$ is the smallest eigenvalue of $\\mB_t$; therefore there exist\\footnote{The choice of $\\mu$ is not unique, since this relation can hold for any $D$.} $D \\in \\reals$ and $\\mu$ such that for $\\lambda > -\\lambda_{\\mathrm{min}}(\\mB_t^\\mS)$, \n \\begin{align}\n \\tfrac{1}{p} \\log \\abs{\\mB_t^\\mS + \\lambda \\mI_q} = \\tfrac{1}{p} \\log \\abs{\\mB_t + \\mu(\\lambda) \\mI_p} + D + o(1).\n \\end{align}\n Here, $f(t) \\in o(g(t))$ means that $\\lim_{t \\to 0} \\big|\\tfrac{f(t)}{g(t)}\\big| = 0$.\n Now, using Jacobi's formula, the fundamental theorem of calculus, and Leibniz' integral rule,\n \\begin{align}\n &\\tfrac{1}{p} \\tr \\bracket{\\mTheta \\paren{\\mS \\inv{\\mB_0^\\mS + \\lambda \\mI_q}\\mS^\\ctransp - \\inv{\\mB_0 + \\mu(\\lambda) \\mI_p}}} \\\\\n &= \\tfrac{1}{p} \\tfrac{\\partial}{\\partial t} \\paren{ \\log \\abs{\\mB_t^\\mS + \\lambda \\mI_q} - \\log \\abs{\\mB_t + \\mu(\\lambda) \\mI_p}} \\Big|_{t=0} \\\\\n &= \\tfrac{\\partial}{\\partial t}\n \\left. 
\\paren{\\int_{\\lambda_0(t)}^\\lambda \\paren{\\tfrac{1}{p} \\tr\\bracket{\\inv{\\mB_t^\\mS + u \\mI_q}} - \\tfrac{1}{p} \\tr\\bracket{\\inv{\\mB_t + \\mu(u) \\mI_p}} \\tfrac{\\partial \\mu(u)}{\\partial u}} du + C} \\right|_{t=0} \\\\\n &= \\int_{\\lambda_0(0)}^\\lambda \\tfrac{\\partial}{\\partial t}\n \\Bigg(\n \\underbrace{\\tfrac{1}{p} \\tr\\bracket{\\inv{\\mB_t^\\mS + u \\mI_q}}}_{\\alpha m_t^\\mS(-u)} - \\underbrace{\\tfrac{1}{p} \\tr\\bracket{\\inv{\\mB_t + \\mu(u) \\mI_p}}}_{m_t(-\\mu(u))} \\tfrac{\\partial \\mu(u)}{\\partial u} \n \\Bigg)\n \\Bigg|_{t=0} du.\n \\label{eq:jacobi-trace-functionals}\n \\end{align}\n Here $\\lambda_0(t)$ and $C$ are chosen\\footnote{We assume that such a $\\lambda_0(t)$ exists, but a guarantee of its existence needs to be formally proved and is currently the non-rigorous part in this proof sketch.} such that for a given $t$, the integrand is equal to zero when $u = \\lambda_0(0)$, which simplifies the application of Leibniz' rule.\n Thus, by the dominated convergence theorem, we need only show that this derivative converges to 0 for any fixed $u$.\n By the differentiation rule of asymptotic equivalence, we know that we need only consider $m_t^\\mS(-u)$ asymptotically.\n Applying the result of \\cite{rubio_mestre_2011}, as $p \\to \\infty$ with $\\alpha = \\tfrac{q}{p}$, $m_t^\\mS(-u)$ almost surely converges to the solution to\n \\begin{align}\n \\label{eq:lem:sketch:jacobi:rubio}\n \\frac{1}{m_t^\\mS(-u)} - u = \\tfrac{1}{q} \\tr \\bracket{\\mB_t \\inv{\\mI_p + m_t^\\mS(-u) \\mB_t}},\n \\end{align}\n which we can manipulate to obtain\n \\begin{align}\n \\frac{1}{m_t^\\mS(-u)} - u = \\frac{1}{\\alpha m_t^\\mS(-u)} \\paren{1 - \\frac{1}{ m_t^\\mS(-u)} m_t \\paren{- \\tfrac{1}{m_t^\\mS(-u)}}}.\n \\end{align}\n Let $\\rho_t(u) = \\tfrac{1}{m_t^\\mS(-u)}$. 
The above becomes\n \\begin{align}\n \\alpha(\\rho_t(u) - u) = \\rho_t(u) \\paren{1 - \\rho_t(u) m_t(- \\rho_t(u))}.\n \\end{align}\n Let us compute the partial derivatives with respect to $t$ and $u$ of $\\rho_t(u)$. Let $z = -\\rho_t(u)$ be the argument of $m_t(z)$ for the purposes of computing partial derivatives.\n Taking partial derivatives with respect to $t$,\n we obtain\n \\begin{equation}\n \\begin{split}\n \\alpha \\frac{\\partial \\rho_t(u)}{\\partial t} \n &= \\frac{\\partial \\rho_t(u)}{\\partial t} \\paren{1 - \\rho_t(u) m_t(- \\rho_t(u))} \\\\\n & \\quad + \\rho_t(u) \\paren{- \\frac{\\partial \\rho_t(u)}{\\partial t} m_t(- \\rho_t(u))\n - \\rho_t(u) \\frac{\\partial m_t(- \\rho_t(u))}{\\partial t} + \\rho_t(u) \\frac{\\partial m_t(- \\rho_t(u))}{\\partial z} \\frac{\\partial \\rho_t(u)}{\\partial t}}.\n \\end{split}\n \\end{equation}\n This yields\n \\begin{equation}\n \\frac{\\partial \\rho_t(u)}{\\partial t} \n = \\ddfrac{\\rho_t(u)^2 \\frac{\\partial m_t(- \\rho_t(u))}{\\partial t}}{1 - \\alpha - 2 \\rho_t(u) m_t(- \\rho_t(u)) + \\rho_t(u)^2 \\frac{\\partial m_t(- \\rho_t(u))}{\\partial z}}.\n \\end{equation}\n Taking partial derivatives with respect to $u$,\n we obtain\n \\begin{equation}\n \\begin{split}\n \\alpha \\frac{\\partial \\rho_t(u)}{\\partial u} - \\alpha \n &= \\frac{\\partial \\rho_t(u)}{\\partial u} \\paren{1 - \\rho_t(u) m_t(- \\rho_t(u))} \\\\\n & \\quad + \\rho_t(u) \\paren{- \\frac{\\partial \\rho_t(u)}{\\partial u} m_t(- \\rho_t(u))\n + \\rho_t(u) \\frac{\\partial m_t(- \\rho_t(u))}{\\partial z} \\frac{\\partial \\rho_t(u)}{\\partial u}}.\n \\end{split}\n \\end{equation}\n This yields\n \\begin{gather}\n \\frac{\\partial \\rho_t(u)}{\\partial u} \n = \\ddfrac{-\\alpha}{1 - \\alpha - 2 \\rho_t(u) m_t(- \\rho_t(u)) + \\rho_t(u)^2 \\frac{\\partial m_t(- \\rho_t(u))}{\\partial z}}.\n \\end{gather}\n Now note that by the chain rule, \n \\begin{align}\n \\frac{\\partial \\rho_t(u)}{\\partial t} \n = - \\frac{\\partial 
m_t^\\mS(-u)}{\\partial t} \\rho_t(u)^2,\n \\end{align}\n and therefore \n \\begin{align}\n \\alpha \\frac{\\partial m_t^\\mS(-u)}{\\partial t} \n = \\frac{\\partial m_t(- \\rho_t(u))}{\\partial t} \\frac{\\partial \\rho_t(u)}{\\partial u}.\n \\end{align}\n Thus, for $\\mu(u) = \\rho_0(u)$, the difference of trace functionals in \\cref{eq:jacobi-trace-functionals} converges to 0. This coincides precisely with \\cref{thm:sketched-pseudoinverse}, as $\\mu(u) = \\rho_0(u) = \\tfrac{1}{\\widetilde{v}(u)}$.\n\\end{proof}\n\\section{Proof of \\Cref{cor:basic-ridge-asympequi-in-r}}\n\n\\label{sec:proof:cor:basic-ridge-asympequi-in-r}\n\nAs a preliminary that we will need later, through a standard argument,\nwe will first show that\n$\\Im(c(z)) \\to 0$ as $\\Im(z) \\to 0$\nin \\cref{eq:basic-ridge-asympequi-in-r}\nfor $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$.\nTo proceed, \ndenote $\\tfrac{1}{p} \\tr\\bracket{{\\bm{\\Sigma}} (c(z) {\\bm{\\Sigma}} - z {\\mathbf{I}}_p)^{-1}}$ by $d(z)$.\nFrom the last part of \\cref{lem:basic-ridge-asympequi},\n$d(z)$ is a Stieltjes transform of a certain positive measure on $\\mathbb{R}_{\\ge 0}$\nwith total mass $\\tfrac{1}{p} \\tr[{\\bm{\\Sigma}}]$.\nSince the operator norm of ${\\bm{\\Sigma}}$ is uniformly bounded in $p$,\nwe have that $\\tfrac{1}{p} \\tr[{\\bm{\\Sigma}}]$ is bounded above by some constant independent of $p$.\nCombining this with \\cref{fact:lim_stiletjes_im},\nwe have that $\\Im(d(z)) \\to 0$ as $\\Im(z) \\to 0$\nfor $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$.\nNow manipulating \\cref{eq:basic-ridge-fp-in-c},\nwe can write\n\\begin{equation}\n \\label{eq:c-in-d}\n c(z)\n = \\frac{1}{1 + \\tfrac{p}{n} d(z)}.\n\\end{equation}\nThus,\nwe can conclude that $\\Im(c(z)^{-1}) \\to 0$ as $\\Im(z) \\to 0$\nfor $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$.\nThis in turn implies that $\\Im(c(z)) \\to 0$\nfor $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$.\n\nWe now begin the proof.\n\n\\begin{proof}\nWe start by considering $z 
\\in \\complexset^+.\nTo obtain \\cref{eq:basic-ridge-asympequi-in-r}, we multiply both sides of \\cref{eq:basic-ridge-asympequi-in-c} by $z$:\n\\begin{align}\n z \\big( \\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}} - z {\\mathbf{I}}_p \\big)^{-1}\n &\\simeq z \\inv{c(z) {\\bm{\\Sigma}} -z {\\mathbf{I}}_p} \\\\\n &= \\tfrac{z}{c(z)} \\biginv{\\mSigma - \\tfrac{z}{c(z)} \\mI_p} \\label{eq:resolvent-symmetric-pre}.\n\\end{align}\nWe will let $\\zeta = \\tfrac{z}{c(z)}$ shortly. \nFirst let \n$m(z) = \\tfrac{1}{p} \\tr \\bigbracket{\\inv{c(z) \\mSigma - z \\mI_p}}$.\nBy an additional application of \\cref{lem:basic-ridge-asympequi}, \n$m(z)$ is asymptotically equal to $\\tfrac{1}{p} \\tr \\bigbracket{\\inv{\\tfrac{1}{n} \\mX^\\ctransp \\mX - z {\\mathbf{I}}}}$, the Stieltjes transform of the spectrum of $\\tfrac{1}{n} \\mX^\\ctransp \\mX$. \nNow note that, since $\\tfrac{1}{p} \\tr \\bigbracket{c(z) \\mSigma \\inv{c(z) \\mSigma - z \\mI_p}} = 1 + z m(z)$, we have $d(z) = \\tfrac{1 + z m(z)}{c(z)}$, and so we can write \\cref{eq:c-in-d} in terms of $m(z)$ as\n\\[\n \\tfrac{1}{c(z)} - 1 = \\tfrac{p}{n} \\cdot \\frac{1 + z m(z)}{c(z)}.\n\\]\nMultiplying through by $c(z)$ and then dividing by $-z$, we can manipulate the equation in the display above into the following form:\n\\begin{align}\n - \\frac{c(z)}{z} = \\tfrac{p}{n} m(z) + \\frac{1}{z} \\left(\\tfrac{p}{n} - 1\\right).\n\\end{align}\nFrom the relationship between the Stieltjes and the companion Stieltjes transforms in \n\\cref{fact:stieltjes-companion-stieltjes-relation},\nthis means that $-\\tfrac{c(z)}{z}$ is asymptotically equal to $v(z) = \\tfrac{1}{n} \\tr \\bracket{\\inv{\\tfrac{1}{n} \\mX \\mX^\\ctransp - z {\\mathbf{I}}}}$, the companion Stieltjes transform\nof the spectrum of $\\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}}$.\nThus, letting $\\zeta = \\tfrac{z}{c(z)}$ in \\cref{eq:resolvent-symmetric-pre}, \nwe have that\n\\[\n z \\big( \\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}} - z {\\mathbf{I}}_p \\big)^{-1} \n \\simeq \\zeta ({\\bm{\\Sigma}} - \\zeta {\\mathbf{I}}_p)^{-1},\n\\]\nand that asymptotically,\n$\\zeta = -\\tfrac{1}{v(z)}$ is the unique solution in $\\complexset^+$ to 
\\cref{eq:basic-ridge-fp-in-r} for $z \\in \\complexset^+$.\nMoreover, through analytic continuation,\none can extend this relationship to the real line\noutside the support of the spectrum of $\\tfrac{1}{n} {\\mathbf{X}} {\\mathbf{X}}^\\ctransp$,\nwhere, by a similar argument to the one for $c(z)$ above,\nboth $v(z)$ and $\\zeta$ are real.\n\nIt remains to determine the interval for which the analytic continuation coincides with a unique solution to \\cref{eq:basic-ridge-fp-in-r} for a given $z$. \nLet $z_0 \\in \\reals$ denote the most negative zero of $v$. Then for all $z < z_0$, $\\zeta \\in \\reals$ is well-defined, asymptotically being a solution to\n\\begin{align}\n \\label{eq:proof:zeta-fp-in-r}\n z - \\zeta = -\\zeta \\tfrac{1}{n} \\tr \\bracket{\\mSigma \\inv{\\mSigma - \\zeta \\mI_p}},\n\\end{align}\nwhich is an algebraic manipulation of \\cref{eq:basic-ridge-fp-in-c}. However, the solution to this equation is not in general unique, so we will now show that the most negative solution for $\\zeta$ is the correct analytic continuation of the corresponding solution in $\\complexset^+$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/zeta_z_proof_illustration.pdf}\n \\caption{\\textbf{Left:} Numerical illustration of the solutions to \\cref{eq:proof:zeta-fp-in-r} for $\\mSigma = \\mI$ and $\\tfrac{p}{n} = \\tfrac{1}{2}$. \n The right-hand side of \\cref{eq:proof:zeta-fp-in-r} is a fixed function of $\\zeta$ (blue, solid), \n but the left-hand side is a line with slope $-1$ shifted by $z$ (orange to green, dashed). \n Solutions are the most negative intersections of the curves (circles), and not the most positive intersections (x's). The greatest possible value of $z$ yielding an intersection, $z = z_0$ (triangle), gives $\\zeta = \\zeta_0$ (dotted). 
For this example, we know that $z_0 = (1 - \\sqrt{\\tfrac{p}{n}})^2 \\approx 0.0858$ since the spectrum of $\\tfrac{1}{n} \\mX \\mX^\\ctransp$ follows the Marchenko--Pastur distribution.\n \\textbf{Right:} Illustration of the convergence of $z_0$ to $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$. For $\\mSigma = \\mI$, $p = 500$, $n = 1000$, we draw a random $\\tfrac{1}{n} \\mX \\mX^\\ctransp$ and compute its eigenvalues. To simulate increasing the dimensionality of the matrix while keeping $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$ fixed, we then take a subsample of size $n_s$ of the eigenvalues, comprising $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$ and $n_s - 1$ other eigenvalues chosen uniformly at random. We then plot $v(z)$ (solid) using this subsample. For any finite $n_s$, $z_0$ (dashed) will always lie between $0$ and $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$, but $z_0$ approaches $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$ as $n_s$ tends to infinity.\n }\n \\label{fig:zeta_z_proof_illustration}\n\\end{figure}\n\nConsider the two sides of \\cref{eq:proof:zeta-fp-in-r}. The left-hand side is linear in $\\zeta$, and the right-hand side is concave for $\\zeta < \\lambda_{\\min}^{+}(\\mSigma)$. 
To see this, observe that\n\\begin{align}\n \\tfrac{\\partial^2}{\\partial \\zeta^2} \\paren{-\\zeta \\tfrac{1}{n} \\tr \\bracket{\\mSigma \\inv{\\mSigma - \\zeta \\mI_p}}}\n &= \\tfrac{\\partial}{\\partial \\zeta} \\paren{- \\tfrac{1}{n} \\tr \\bracket{\\mSigma \\inv{\\mSigma - \\zeta \\mI_p}} - \\zeta \\tfrac{1}{n} \\tr \\bracket{\\mSigma \\paren{\\mSigma - \\zeta \\mI_p}^{-2}}} \\\\\n &= \\tfrac{\\partial}{\\partial \\zeta} \\paren{- \\tfrac{1}{n} \\tr \\bracket{\\mSigma^2 \\paren{\\mSigma - \\zeta \\mI_p}^{-2}}} \\\\\n &= -\\tfrac{2}{n} \\tr \\bracket{\\mSigma^2 \\paren{\\mSigma - \\zeta \\mI_p}^{-3}} < 0.\n\\end{align}\nA linear function and a concave function can intersect at zero, one, or two points. If they intersect at exactly one point, it must be the unique point $(z_1, \\zeta_1)$ with $\\zeta_1 < \\lambda_{\\min}^{+}(\\mSigma)$ at which the derivatives of the two sides of \\cref{eq:proof:zeta-fp-in-r} coincide, satisfying\n\\begin{align}\n 1 = \\tfrac{1}{n} \\tr \\bracket{ \\mSigma^2 \\paren{\\mSigma - \\zeta_1 \\mI_p}^{-2} }.\n\\end{align}\nThe right-hand side of this equation sweeps the range $(0, \\infty)$ for $\\zeta_1 \\in (-\\infty, \\lambda_{\\min}^{+}(\\mSigma))$, so such a $(z_1, \\zeta_1)$ always exists.\nFurthermore, since the solutions $\\zeta$ are continuous as a function of $z$, the analytic continuation to the reals of the complex solution of the map $z \\mapsto \\zeta$ with domain $(-\\infty, z_1)$ must have image either $(-\\infty, \\zeta_1)$ or $(\\zeta_1, \\lambda_{\\min}^{+}(\\mSigma))$.\nThe correct image must be $(-\\infty, \\zeta_1)$, which we illustrate in \\cref{fig:zeta_z_proof_illustration} (left). 
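For $\\mSigma = \\mI$, the tangency point $(z_1, \\zeta_1)$ can be computed in closed form and agrees with the Marchenko--Pastur edge $(1 - \\sqrt{\\tfrac{p}{n}})^2$ quoted in the caption of \\cref{fig:zeta_z_proof_illustration}. A minimal numerical sketch (hedged: assuming NumPy; the specific sizes are arbitrary choices with $\\tfrac{p}{n} = \\tfrac{1}{2}$):

```python
import numpy as np

p, n = 200, 400
gamma = p / n  # = 1/2, as in the left panel of the figure

# For Sigma = I, (1/n) tr[Sigma^2 (Sigma - zeta I)^{-2}] = gamma / (1 - zeta)^2,
# so the tangency condition 1 = gamma / (1 - zeta1)^2 gives zeta1 = 1 - sqrt(gamma).
zeta1 = 1.0 - np.sqrt(gamma)

# Verify the tangency condition with an explicit matrix computation.
Sigma = np.eye(p)
R = np.linalg.inv(Sigma - zeta1 * np.eye(p))
print(np.isclose(np.trace(Sigma @ Sigma @ R @ R) / n, 1.0))  # True

# Plug zeta1 into z = zeta (1 - gamma / (1 - zeta)) to get z1.
z1 = zeta1 * (1.0 - gamma / (1.0 - zeta1))
print(np.isclose(z1, (1.0 - np.sqrt(gamma)) ** 2))  # True: the Marchenko-Pastur edge
print(round(z1, 4))  # 0.0858
```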
\n\nTo see why this must be the correct image, consider $z = x + i \\varepsilon$ for a fixed $\\varepsilon > 0$ with $x$ very negative.\nRewriting \\cref{eq:proof:zeta-fp-in-r}, we have the form of \\cref{eq:basic-ridge-fp-in-r}:\n\\begin{align}\n \\label{eq:proof:zeta-fp-all-on-right-hand-side}\n z = \\zeta \\paren{1 - \\tfrac{1}{n} \\tr \\bracket{\\mSigma \\inv{\\mSigma - \\zeta \\mI_p}}}.\n\\end{align}\nWe begin by considering the behavior of the trace term.\nLet $\\zeta = \\chi + i \\xi$, and suppose that $\\chi < \\tfrac{x}{2}$, which means that $\\chi$ is also very negative. The trace is a sum of terms of the form\n\\begin{align}\n \\frac{\\sigma}{\\sigma - \\zeta} = \\frac{\\sigma (\\sigma - \\chi + i \\xi)}{(\\sigma - \\chi)^2 + \\xi^2}.\n\\end{align}\nLet $g(\\zeta)$ and $h(\\zeta)$ denote the real and imaginary parts of $\\tfrac{1}{n} \\tr \\bracket{\\mSigma \\inv{\\mSigma - \\zeta \\mI_p}}$.\nFor $x$ (and therefore $\\chi$) sufficiently negative, this gives us the simple bounds\n\\begin{align}\n \\left| g(\\zeta) \\right| \n &\\leq \\frac{\\tfrac{p}{n} \\sigma_{\\max}(\\mSigma)}{- \\chi} \n \\leq \\frac{2 \\tfrac{p}{n} \\sigma_{\\max}(\\mSigma)}{- x}, \\\\\n \\left| h(\\zeta) \\right| \n &\\leq \\frac{\\tfrac{p}{n} \\sigma_{\\max}(\\mSigma) |\\xi|}{\\chi^2}\n \\leq \\frac{4 \\tfrac{p}{n} \\sigma_{\\max}(\\mSigma) |\\xi|}{x^2}.\n\\end{align}\nSince $z = \\zeta \\paren{1 - g(\\zeta) - i h(\\zeta)}$, solving \\cref{eq:proof:zeta-fp-all-on-right-hand-side} for $\\zeta$ gives\n\\begin{align}\n \\chi = \\frac{x (1 - g(\\zeta)) - \\varepsilon h(\\zeta)}{(1 - g(\\zeta))^2 + h(\\zeta)^2}, \\quad\n \\xi = \\frac{\\varepsilon (1 - g(\\zeta)) + x h(\\zeta)}{(1 - g(\\zeta))^2 + h(\\zeta)^2}.\n\\end{align}\nBy our bounds on $g$ and $h$, we can conclude that for sufficiently negative $x$, there exist $a > 0$ and $0 < b < 1$ such that $ |\\xi| \\leq a \\varepsilon + b |\\xi| $, implying that $|\\xi| \\leq \\tfrac{a \\varepsilon}{1 - b}$, and therefore $|\\xi|$ is bounded. 
Since $|\\xi|$ is bounded, $|h(\\zeta)|$ has an upper bound of the form $\\tfrac{1}{x^2}$, so for any $c \\in (\\tfrac{1}{2}, 1)$ and sufficiently negative $x$, we have the bound $\\chi \\leq c x$. Therefore, we can confirm that our supposition that $\\chi < \\tfrac{x}{2}$ leads to the unique solution with $\\xi > 0$, since for any $c' \\in (0, 1)$ we similarly have $\\xi > c' \\varepsilon > 0$ for sufficiently negative $x$. One can similarly argue that for solutions with $\\chi \\to \\lambda_{\\min}^{+}(\\mSigma)$, it must be that $\\xi < 0$, which is the solution in the wrong half-plane. By continuity of $z \\mapsto \\zeta$, identifying these extreme cases is sufficient to identify the correct image.\nTherefore, for real-valued $z < z_1$, the correct $\\zeta$ is the most negative solution, which is the unique $\\zeta < \\zeta_1$, and $\\zeta$ is undefined for $z > z_1$.\n\nLastly, we argue that asymptotically, $z_0 = z_1$. In the case $n < p$, this is straightforward, as the most negative zero of $v$ must lie between the two most negative distinct eigenvalues of $\\frac{1}{n} \\mX \\mX^\\ctransp$. This is because there is a pole at each distinct eigenvalue, so the entire range $(-\\infty, \\infty)$ (including crossing $0$) is mapped to by $v$ between each successive pair of distinct eigenvalues. When $n < p$, there is not a point mass at $0$, so these two most negative eigenvalues must converge to the same value as the discrete eigenvalue distribution converges to a continuous distribution, and this value marks the beginning of the continuous support of the spectrum of $\\frac{1}{n} \\mX \\mX^\\ctransp$, so $z_0 \\to \\lambda_{\\min}(\\frac{1}{n} \\mX \\mX^\\ctransp)$. 
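The interlacing property invoked above -- that the most negative zero of $v$ lies between the two most negative distinct eigenvalues of $\\frac{1}{n} \\mX \\mX^\\ctransp$ -- can be checked directly on a finite sample. A minimal sketch (hedged: assuming NumPy and SciPy, with hypothetical sizes satisfying $n < p$):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
n, p = 40, 60  # n < p: (1/n) X X^T (an n x n matrix) has no zero eigenvalues
X = rng.standard_normal((n, p))
evals = np.sort(np.linalg.eigvalsh(X @ X.T / n))

# Empirical companion Stieltjes transform evaluated on the real line
v = lambda z: np.mean(1.0 / (evals - z))

# v sweeps (-inf, inf) between each successive pair of distinct eigenvalues,
# so its most negative zero z0 lies between the two smallest eigenvalues.
lo, hi = evals[0], evals[1]
z0 = brentq(v, lo + 1e-9, hi - 1e-9)
print(lo < z0 < hi)  # True
```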
Moreover, $\\zeta$, being asymptotically equal to $-\\tfrac{1}{v}$, is undefined only on the support of the limiting spectrum and continuous elsewhere; therefore by the argument in the previous paragraph, the solution to \\cref{eq:proof:zeta-fp-in-r} does not exist for $z > z_1$, and it must be that $\\lambda_{\\min}(\\frac{1}{n} \\mX \\mX^\\ctransp) \\to z_1$.\n\nFor $n > p$, we apply similar reasoning; however, we must take care to consider the point mass of the spectrum at 0. This means that $z_0 \\in (0, \\lambda_{\\min}^{+}(\\frac{1}{n} \\mX \\mX^\\ctransp))$, because like before, the first zero must lie between the two most negative distinct eigenvalues, as we illustrate in \\cref{fig:zeta_z_proof_illustration} (right).\nHowever, asymptotically, it must be that $z_0 \\to z_1 = \\lambda_{\\min}^{+}(\\frac{1}{n} \\mX \\mX^\\ctransp)$. \nThis is most easily seen by a contradiction argument. Suppose\nwe have $z_0 < z_1 - \\varepsilon$ for some $\\varepsilon > 0$.\nBecause $\\tfrac{1}{v}$ has a pole at $z_0$, $-\\zeta = \\tfrac{1}{v} \\to \\infty$ as $z \\searrow z_0$. In particular, this means that $\\tfrac{1}{v}$ is discontinuous at $z_0$, tending to $\\infty$ from the right. \nMeanwhile, as argued above, $\\lambda_{\\min}^{+}(\\frac{1}{n} \\mX \\mX^\\ctransp) \\to z_1$, and we know that for $z < z_1$, $\\zeta < \\zeta_1 \\in (-\\infty, \\lambda_{\\min}^{+}(\\mSigma))$.\nThis is a contradiction: on the one hand, $\\zeta = -\\tfrac{1}{v}$ diverges as $z \\searrow z_0$, yet on the other hand, by \\cref{eq:proof:zeta-fp-all-on-right-hand-side}, a diverging $\\zeta$ is possible only as $z \\to -\\infty$, not at the finite point $z_0$.\nTherefore, we must have, asymptotically, that $z_0 = z_1 = \\lambda_{\\min}^{+}(\\frac{1}{n} \\mX \\mX^\\ctransp)$. 
For this reason, in the theorem statement, we denote $\\zeta_0 = \\zeta_1$.\n\\end{proof}\n\\section{Proofs in \\Cref{sec:properties}}\n\\label{sec:proofs-properties}\n\nWe collect the proofs of the various properties of the equivalences obtained in our paper.\n\n\n\\subsection{Proof of \\cref{rem:mu0-lambda0-vs-alpha}}\n\\label{sec:mu0-lambda0-vs-alpha}\n\n\\begin{proof}\nRecall from \\cref{eq:lambda0-mu0-in-alpha} that\nfor $\\alpha \\in (0, \\infty)$,\n\\begin{equation}\n \\label{eq:lambda0-with-mu0}\n \\lambda_0(\\alpha)\n = \\mu_0\n \\paren{ 1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \n \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu_0(\\alpha) {\\mathbf{I}})^{-1} } }.\n\\end{equation}\nFrom the statement of \\cref{rem:mu0-lambda0-vs-alpha},\n$\\lim_{\\alpha \\to \\infty} \\mu_0(\\alpha) = - \\lambda_{\\min}^{+}({\\mathbf{A}})$.\nWe will argue below that\n\\begin{equation}\n \\label{eq:mu0-by-alpha-lim}\n \\lim_{\\alpha \\to \\infty}\n \\frac\n {\\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu_0(\\alpha) {\\mathbf{I}})^{-1} }}\n {\\alpha}\n = 0,\n\\end{equation}\nwhich combined with \\cref{eq:lambda0-with-mu0} provides the desired result.\n\nObserve that the limit on the left-hand side of \\cref{eq:mu0-by-alpha-lim} \nis in the indeterminate $\\infty\/\\infty$ form because\n$\\lim_{\\alpha \\to \\infty} \\mu(\\alpha) = - \\lambda_{\\min}^{+}({\\mathbf{A}})$\nand thus\n$\\lim_{\\alpha \\to \\infty} \n\\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu(\\alpha) {\\mathbf{I}})^{-1} } = \\infty$.\nTo evaluate the limit, we will appeal to L'H{\\^o}pital's rule.\nThe derivative of the denominator with respect to $\\alpha$ is 1,\nwhile the derivative of the numerator with respect to $\\alpha$ is\n\\begin{equation}\n \\label{eq:deriv-numer-wrt-alpha}\n \\tfrac{1}{p}\n \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu_0(\\alpha) {\\mathbf{I}})^{-2} }\n \\frac{\\partial \\mu_0(\\alpha)}{\\partial \\alpha}.\n\\end{equation}\nImplicitly 
differentiating \\cref{eq:mu0-fp-in-alpha}\nwith respect to $\\alpha$, we have\n\\begin{equation}\n    \\label{eq:deriv-mu0-wrt-alpha-relation}\n    1 \n    = \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0(\\alpha) {\\mathbf{I}})^{-3} }\n    \\frac{\\partial \\mu_0(\\alpha)}{\\partial \\alpha}.\n\\end{equation}\nSubstituting for $\\tfrac{\\partial \\mu_0(\\alpha)}{\\partial \\alpha}$\nfrom \\cref{eq:deriv-mu0-wrt-alpha-relation}\ninto \\cref{eq:deriv-numer-wrt-alpha},\nwe can write the derivative of the numerator as\n\\begin{align}\n    \\frac\n    {\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu_0(\\alpha) {\\mathbf{I}})^{-2} }\n    }\n    {\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0(\\alpha) {\\mathbf{I}})^{-3}}\n    }.\n\\end{align}\nAs $\\alpha \\to \\infty$ and $\\mu_0(\\alpha) \\to - \\lambda_{\\min}^{+}({\\mathbf{A}})$,\nthe limit of the quantity in the display above becomes\n\\begin{align}\n    \\lim_{\\alpha \\to \\infty}\n    \\frac\n    {\\lambda_{\\min}^{+}({\\mathbf{A}})}\n    {(\\lambda_{\\min}^{+}({\\mathbf{A}}) + \\mu_0(\\alpha))^2}\n    \\cdot\n    \\frac\n    {(\\lambda_{\\min}^{+}({\\mathbf{A}}) + \\mu_0(\\alpha))^3}{(\\lambda_{\\min}^{+}({\\mathbf{A}}))^2}\n    = \n    \\lim_{\\alpha \\to \\infty}\n    1 + \\frac{\\mu_0(\\alpha)}{\\lambda_{\\min}^{+}({\\mathbf{A}})}\n    = 1 - 1\n    = 0.\n\\end{align}\nThus, we can conclude that \\cref{eq:mu0-by-alpha-lim} holds,\nand the statement then follows.\nThe remaining claims follow by similar calculations.\n\\end{proof}\n\n\\subsection{Proof of \\cref{rem:mu0-lambda0-signs}}\n\\label{sec:mu0-lambda0-signs}\n\n\n\\begin{proof}\nWe start by noting that \n\\begin{align}\n    \\lim_{x \\searrow 0}\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-2} }\n    &=\n    \\lim_{x \\searrow 0}\n    \\tfrac{1}{p}\n    \\sum_{i = 1}^{p}\n    \\frac{\\lambda_i^2({\\mathbf{A}})}{(\\lambda_i({\\mathbf{A}}) + x)^2}\n    \\\\\n    &=\n    \\lim_{x \\searrow 0}\n    \\tfrac{1}{p}\n    \\sum_{i = 1}^{p}\n    
\\frac{\\lambda_i({\\mathbf{A}})}{\\lambda_i({\\mathbf{A}}) + x}\n    =\n    \\lim_{x \\searrow 0}\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} } \\\\\n    &=\n    \\tfrac{1}{p} \\sum_{i=1}^{p}\n    \\mathbbm{1}\\{ \\lambda_i({\\mathbf{A}}) > 0 \\} \n    = r({\\mathbf{A}}).\n\\end{align}\nNow, write the first equation in \\cref{eq:mu0-lambda0-fps} \nin terms of $\\alpha$ as\n\\[\n    \\alpha = \\tfrac{1}{p} \\tr \\bracket{\\mA^2 \\paren{\\mA + \\mu_0 \\mI}^{-2}}.\n\\]\nThus, when $\\alpha = r({\\mathbf{A}})$, we have $\\mu_0 = 0$\nas the solution to the first equation of \\cref{eq:mu0-lambda0-fps}.\nBecause $\\mu \\mapsto \\tfrac{1}{p} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2}]$\nis monotonically decreasing in $\\mu$,\nif $\\alpha < r({\\mathbf{A}})$, we have $\\mu_0 > 0$,\nwhile if $\\alpha > r({\\mathbf{A}})$, we have $\\mu_0 < 0$.\n\nNext, we argue about the sign pattern of $\\lambda_0$.\nWhen $\\alpha > r({\\mathbf{A}})$,\nwe have\n\\begin{align}\n    \\alpha\n    = \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-2} }\n    = \\tfrac{1}{p}\n    \\sum_{i = 1}^{p}\n    \\frac{\\lambda_i^2({\\mathbf{A}})}{(\\lambda_i({\\mathbf{A}}) + \\mu_0)^2}\n    &\\overset{(a)}{>}\n    \\tfrac{1}{p}\n    \\sum_{i = 1}^{p}\n    \\frac{\\lambda_i({\\mathbf{A}})}{\\lambda_i({\\mathbf{A}}) + \\mu_0} \\\\\n    &=\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-1} },\n\\end{align}\nwhere the inequality $(a)$ follows because $\\mu_0 < 0$.\nFrom \\cref{eq:mu0-lambda0-fps},\nit thus follows that $\\lambda_0 < 0$.\nSimilarly, when $\\alpha < r({\\mathbf{A}})$,\nnote that\n\\begin{align}\n    \\alpha\n    = \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-2} }\n    = \\tfrac{1}{p}\n    \\sum_{i = 1}^{p}\n    \\frac{\\lambda_i^2({\\mathbf{A}})}{(\\lambda_i({\\mathbf{A}}) + \\mu_0)^2}\n    &\\overset{(b)}{<}\n    \\tfrac{1}{p}\n    \\sum_{i = 1}^{p}\n    
\\frac{\\lambda_i({\\mathbf{A}})}{\\lambda_i({\\mathbf{A}}) + \\mu_0} \\\\\n    &=\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-1} },\n\\end{align}\nwhere inequality $(b)$ follows from the fact that\n\\[\n    0\n    <\n    \\left(\\frac{\\lambda_i({\\mathbf{A}})}{\\lambda_i({\\mathbf{A}}) + \\mu_0}\\right)^2\n    <\n    \\frac{\\lambda_i({\\mathbf{A}})}{\\lambda_i({\\mathbf{A}}) + \\mu_0}\n    < 1\n\\]\nfor every $i$ with $\\lambda_i({\\mathbf{A}}) > 0$,\nsince $\\mu_0 > 0$ in this case\nand thus $\\lambda_i({\\mathbf{A}}) + \\mu_0 > \\lambda_i({\\mathbf{A}}) > 0$.\nFrom \\cref{eq:mu0-lambda0-fps},\nit thus again follows that $\\lambda_0 < 0$.\nThis completes the proof.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of \\cref{prop:monotonicities-lambda-alpha}}\n\\label{sec:monotonicies-lambda-alpha}\n\n\\begin{proof}\nThe claims follow from simple derivative calculations.\nWe split into two cases, one with respect to $\\lambda$,\nand the other with respect to $\\alpha$.\n\n\\subsubsection{Monotonicity with respect to $\\lambda$}\n\nFor a fixed $\\alpha$,\nimplicitly differentiating the fixed-point equation \\cref{eq:sketched-modified-lambda}\nwith respect to $\\lambda$, we obtain\n\\begin{equation}\n    \\label{eq:fp-differentiation-lambda}\n    1 =\n    \\frac{\\partial \\mu}{\\partial \\lambda}\n    - \\left( \\tfrac{1}{q} \\tr \\bracket{{\\mathbf{A}} \\inv{{\\mathbf{A}} + \\mu {\\mathbf{I}}} } \n    - \\mu \\tfrac{1}{q}\\tr \\bracket{ {\\mathbf{A}} \\paren{{\\mathbf{A}} + \\mu {\\mathbf{I}}}^{-2} }\n    \\right) \n    \\frac{\\partial \\mu}{\\partial \\lambda}.\n\\end{equation}\nNote the following algebraic simplification:\n\\begin{align}\n    {\\mathbf{A}} \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-1}\n    - \\mu {\\mathbf{A}} \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-2}\n    &= {\\mathbf{A}} \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-1}\n    \\paren{ {\\mathbf{I}} - \\mu \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-1} } \\nonumber \\\\\n    &= {\\mathbf{A}} \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} 
}^{-1} {\\mathbf{A}} \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-1}\n    = {\\mathbf{A}}^2 \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-2} \\label{eq:matrix-diff-manip}.\n\\end{align}\nSubstituting \\cref{eq:matrix-diff-manip} into \\cref{eq:fp-differentiation-lambda},\nwe have\n\\begin{equation}\n    \\label{eq:mu-deriv-lambda}\n    \\frac{\\partial \\mu}{\\partial \\lambda}\n    = \\frac\n    {1}\n    {\n    1 - \\tfrac{1}{q} \\tr \\bracket{ {\\mathbf{A}}^2 \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-2} }}.\n\\end{equation}\nObserve that $\\mu \\mapsto \\tfrac{1}{q} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2}]$\nis a monotonically decreasing function of $\\mu$\nover $(\\mu_0, \\infty)$.\nBecause $1 = \\tfrac{1}{q} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-2}]$\nfrom the first equation in \\cref{eq:mu0-lambda0-fps},\nthe denominator of \\cref{eq:mu-deriv-lambda} is positive over $(\\mu_0, \\infty)$.\nConsequently, $\\tfrac{\\partial \\mu}{\\partial \\lambda}$ is positive,\nand $\\mu$ is a monotonically increasing function of $\\lambda$.\nFinally, note that as $\\lambda \\to \\lambda_0^{+}$, $\\mu(\\lambda) \\to \\mu_0$,\nand as $\\lambda \\to \\infty$, $\\mu(\\lambda) \\to \\infty$.\nThis completes the proof of the first part.\n\n\\subsubsection{Monotonicity with respect to $\\alpha$}\n\nWe begin by writing \\cref{eq:sketched-modified-lambda} \nin terms of $\\alpha$ as\n\\begin{equation}\n    \\label{eq:sketched-modified-lambda-in-alpha}\n    \\lambda\n    = \\mu\n    \\paren{ 1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} } }.\n\\end{equation}\nFor a fixed $\\lambda$,\nimplicitly differentiating \\cref{eq:sketched-modified-lambda-in-alpha}\nwith respect to $\\alpha$, we have\n\\begin{equation}\n    \\label{eq:fp-differentiation-alpha}\n    0\n    =\n    \\frac{\\partial \\mu}{\\partial \\alpha}\n    +\n    \\frac{\\mu}{\\alpha^2}\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} 
}\n    -\n    \\frac{1}{\\alpha}\n    \\left(\n    \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} }\n    - \\mu \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2} }\n    \\right)\n    \\frac{\\partial \\mu}{\\partial \\alpha}.\n\\end{equation}\nSolving for $\\tfrac{\\partial \\mu}{\\partial \\alpha}$,\nwe obtain\n\\begin{equation}\n    \\label{eq:mu-deriv-alpha-1}\n    \\frac{\\partial \\mu}{\\partial \\alpha}\n    = \n    \\frac\n    {- \\tfrac{1}{\\alpha^2} \\mu \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}} }\n    {1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} \n    - \\mu {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2} }}.\n\\end{equation}\nSimilar to the part above,\nsubstituting the relation \\cref{eq:matrix-diff-manip} into \\cref{eq:fp-differentiation-alpha}\nand simplifying yields\n\\begin{equation}\n    \\label{eq:mu-deriv-alpha-2}\n    \\frac{\\partial \\mu}{\\partial \\alpha}\n    = \n    \\frac\n    {- \\tfrac{1}{\\alpha} \\mu \\tfrac{1}{q}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}} }\n    {1 - \\tfrac{1}{q} \\tr \\bracket{ {\\mathbf{A}}^2 \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-2} }}.\n\\end{equation}\n\n\nBecause the denominator of \\cref{eq:mu-deriv-alpha-2} is positive\nfrom \\cref{eq:mu0-lambda0-fps}\nas argued above\nand $\\tr[{\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}]$ is positive\nfor $\\mu \\in (\\mu_0, \\infty)$,\nthe sign of $\\frac{\\partial \\mu}{\\partial \\alpha}$\nis opposite the sign of $\\mu$.\nWhen $\\lambda \\ge 0$, we have $\\mu \\ge 0$\n(from the first part of \\cref{rem:joint-signs-lambda-mu}),\nso in this case $\\frac{\\partial \\mu}{\\partial \\alpha}$\nis negative, and $\\mu$ is monotonically decreasing in $\\alpha$.\nWhen $\\lambda < 0$,\nfor $\\alpha \\le r({\\mathbf{A}})$,\nwe have $\\mu(\\lambda) \\ge 0$ \n(from the second 
part of \\cref{rem:joint-signs-lambda-mu}).\nThus, over $(0, r({\\mathbf{A}}))$, $\\mu$ is monotonically decreasing in $\\alpha$.\nOn the other hand, for $\\alpha > r({\\mathbf{A}})$,\n$\\mu(\\lambda) < 0$ \n(since $\\mathrm{sign}(\\mu(\\lambda)) \n= \\mathrm{sign}(\\lambda)$ and $\\lambda < 0$),\nand consequently, $\\mu$ is monotonically increasing in $\\alpha$\nover $(r({\\mathbf{A}}), \\infty)$.\n\nFinally, to obtain the limit of $\\mu(\\alpha)$\nas $\\alpha \\to 0$,\nwe write \\cref{eq:sketched-modified-lambda-in-alpha} as\n\\[\n    \\lambda \\alpha\n    = \\mu \\alpha\n    - \\mu \\tfrac{1}{p}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} }.\n\\]\nNow, for any $\\lambda \\in (\\lambda_0, \\infty)$,\n$\\lim_{\\alpha \\to 0^{+}} \\lambda \\alpha = 0$.\nThus, we have\n\\[\n    \\lim_{\\alpha \\to 0^{+}}\n    \\mu(\\alpha)\n    = \\lim_{\\alpha \\to 0^{+}}\n    f^{-1}(\\alpha),\n\\]\nwhere $f(x) = \\tfrac{1}{p} \\tr[{\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1}]$.\nObserve that the function $f$ is strictly decreasing\nover $(\\mu_0, \\infty)$,\nand $\\lim_{x \\to \\infty} f(x) = 0$.\nHence, the function $f^{-1}$ is strictly decreasing\nand $\\lim_{\\alpha \\to 0^{+}} f^{-1}(\\alpha) = \\infty$.\nThis provides us with the first limit.\nTo obtain the limit of $\\mu(\\alpha)$ as $\\alpha \\to \\infty$,\nwrite from \\cref{eq:sketched-modified-lambda}\n\\[\n    \\mu\n    = \\lambda \n    + \\tfrac{1}{\\alpha} \\tfrac{1}{p}\n    \\tr \\bracket{ \\mu {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} }.\n\\]\nObserve that $\\tfrac{1}{p} \\tr[\\mu {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}]$\nis bounded for $\\mu \\in (\\mu_0, \\infty)$.\nThus, taking the limit $\\alpha \\to \\infty$,\nwe conclude that $\\lim_{\\alpha \\to \\infty} \\mu(\\alpha) = \\lambda$.\nThis finishes the second part, and completes the proof.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of \\cref{rem:joint-signs-lambda-mu}}\n\\label{sec:joint-signs-lambda-mu}\n\n\\begin{proof}\nWe start 
by writing \\cref{eq:sketched-modified-lambda}\nin terms of $\\alpha$ as\n\\[\n    \\lambda = \\mu\n    \\paren{1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} } }.\n\\]\nFor the subsequent argument, it will help to rearrange the terms in the equation displayed above to arrive at\nthe following equivalent equation:\n\\begin{equation}\n    \\label{eq:sketched-modified-lambda-rewriting}\n    1 - \\frac{\\lambda}{\\mu}\n    = \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1} }.\n\\end{equation}\nWe consider two separate cases depending on whether $\\lambda \\ge 0$ or $\\lambda < 0$.\n\n\\textbf{Case $\\lambda \\ge 0$}:\nFix $\\alpha > 0$.\nObserve that the left side of \\cref{eq:sketched-modified-lambda-rewriting}\nis an increasing function of $\\mu$,\nand the right side of \\cref{eq:sketched-modified-lambda-rewriting}\nis a decreasing function of $\\mu$.\nAs $\\mu$ varies from $0^{+}$ to $\\infty$, \nthe right-hand side decreases from $\\tfrac{r({\\mathbf{A}})}{\\alpha}$ to $0$,\nwhile the left-hand side increases from $-\\infty$ to $1$.\nSince the limiting value $1$ of the left-hand side is positive,\nthe two sides must cross exactly once,\nand there is a unique intersection with $\\mu \\ge 0$.\n\n\\textbf{Case $\\lambda < 0$}:\nFix $\\alpha \\le r({\\mathbf{A}})$.\nFor this subcase,\nfrom \\Cref{rem:mu0-lambda0-signs},\n$\\mu_0 \\ge 0$.\nThus, there is a unique intersection for $\\mu \\ge 0$.\nFix now $\\alpha > r({\\mathbf{A}})$.\nFor this subcase, the term in parentheses in \\cref{eq:sketched-modified-lambda}\nis positive. 
Thus, $\\mathrm{sign}(\\mu) = \\mathrm{sign}(\\lambda)$.\n\nThis completes all three cases and finishes the proof.\n\\end{proof}\n\n\n\\subsection{Proof of \\cref{rem:concavity-mu-in-lambda}}\n\\label{sec:concavity-mu-in-lambda}\n\n\\begin{proof}\nRecall that $\\mu_0 > - \\lambda_{\\min}^{+}({\\mathbf{A}})$.\nFor $x \\in (\\mu_0, \\infty)$,\nobserve that\n\\[\n    \\frac{\\partial}{\\partial x}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} }\n    = - \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-2} }\n    < 0,\n\\]\n\\[\n    \\frac{\\partial^2}{\\partial x^2}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} }\n    = 2 \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-3} } \n    > 0.\n\\]\nThus, the function\n\\[\n    x \\mapsto\n    \\tfrac{1}{q}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} }\n\\]\nis strictly decreasing and convex over $(\\mu_0, \\infty)$,\nand consequently\nthe function\n\\[\n    x \\mapsto\n    \\tfrac{1}{q}\n    \\tr \\bracket{ x {\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} }\n    = \n    \\tfrac{1}{q}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{I}} - {\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1}) }\n    =\n    \\tfrac{1}{q}\n    \\tr[ {\\mathbf{A}} ]\n    - \\tfrac{1}{q}\n    \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} }\n\\]\nis strictly increasing and concave over $(\\mu_0, \\infty)$.\nHence, the function $f$ \n(appearing on the right-hand side of \\cref{eq:sketched-modified-lambda}, viewed as a function of $\\mu$)\ndefined by\n\\begin{equation}\n    \\label{eq:sketched-modified-lambda-rhs}\n    f(x)\n    =\n    x \n    - x \\tfrac{1}{q}\n    \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} }\n    =\n    x\n    \\paren{ 1 - \\tfrac{1}{q} \\tr \\bracket{ {\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1} } }\n\\end{equation}\nis strictly increasing and convex over $(\\mu_0, \\infty)$;\nits derivative $1 - \\tfrac{1}{q} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-2}]$\nis positive over $(\\mu_0, \\infty)$,\nas argued in the proof of \\Cref{prop:monotonicities-lambda-alpha}.\n\nNow, observe from \\cref{eq:sketched-modified-lambda} that\nfor a given 
$\\lambda$,\n$\\mu(\\lambda) = f^{-1}(\\lambda)$,\nwhere $f$ is as defined in \\cref{eq:sketched-modified-lambda-rhs}.\nBecause the inverse of a strictly increasing, continuous, and convex function\nis strictly increasing, continuous, and concave \n(see, e.g., Proposition 3 of \\cite{hiriart_urruty-martinez_legaz_2003}),\nwe conclude that the map $\\lambda \\mapsto \\mu(\\lambda)$,\nwhere $\\mu(\\lambda)$ solves \\cref{eq:sketched-modified-lambda},\nis concave in $\\lambda$ over $(\\lambda_0, \\infty)$.\nWe remark that, more directly,\nwe can also compute the second derivative of $\\mu(\\lambda)$ \nwith respect to $\\lambda$.\nFrom \\cref{eq:mu-deriv-lambda}, we have\n\\begin{equation}\n    \\label{eq:mu-deriv-lambda-in-alpha}\n    \\frac{\\partial \\mu}{\\partial \\lambda}\n    = \\frac\n    {1}\n    {\n    1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}}^2 \\paren{ {\\mathbf{A}} + \\mu {\\mathbf{I}} }^{-2} }}.\n\\end{equation}\nTaking the partial derivative of \\cref{eq:mu-deriv-lambda-in-alpha}\nwith respect to $\\lambda$, we get\n\\[\n    \\frac{\\partial^2 \\mu}{\\partial \\lambda^2}\n    = \n    \\frac\n    {\n    -2 \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-3} }\n    }\n    {\n    \\left( \n    1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2} } \n    \\right)^2\n    }\n    \\frac{\\partial \\mu}{\\partial \\lambda}\n    =\n    \\frac\n    {\n    -2 \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-3} }\n    }\n    {\n    \\left( \n    1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2} } \n    \\right)^3\n    }\n    < 0,\n\\]\nfrom which the concavity claim follows.\n\nUsing the concavity of $\\mu$ in $\\lambda$,\nwe can write for \n$\\lambda, \\widetilde{\\lambda} \\in (\\lambda_0, \\infty)$,\n\\begin{equation}\n    \\label{eq:concavity-bound-mu-lambda-1}\n    \\mu(\\lambda)\n    \\le \\mu(\\widetilde{\\lambda})\n    
+ \\frac{\\partial \\mu}{\\partial \\lambda} \n    \\mathrel{\\Big |}_{\\lambda = \\widetilde{\\lambda}} \n    (\\lambda - \\widetilde{\\lambda}).\n\\end{equation}\nNow, from \\cref{eq:sketched-modified-lambda},\nfor any $\\widetilde{\\lambda} \\in (\\lambda_0, \\infty)$, \nwe have \n\\begin{equation}\n    \\label{eq:concavity-bound-mu-lambda-2}\n    \\mu(\\widetilde{\\lambda}) - \\widetilde{\\lambda}\n    = \\tfrac{1}{q}\n    \\tr \\bracket{ \\mu(\\widetilde{\\lambda}) {\\mathbf{A}} ({\\mathbf{A}} + \\mu(\\widetilde{\\lambda}) {\\mathbf{I}})^{-1} }\n    = \\tfrac{1}{\\alpha} \\tfrac{1}{p}\n    \\tr \\bracket{ \\mu(\\widetilde{\\lambda}) {\\mathbf{A}} ({\\mathbf{A}} + \\mu(\\widetilde{\\lambda}) {\\mathbf{I}})^{-1} }.\n\\end{equation}\nSubstituting \\cref{eq:concavity-bound-mu-lambda-2}\ninto \\cref{eq:concavity-bound-mu-lambda-1},\nand dropping the resulting term\n$\\paren{1 - \\tfrac{\\partial \\mu}{\\partial \\lambda} \\mathrel{\\big |}_{\\lambda = \\widetilde{\\lambda}}} \\widetilde{\\lambda}$,\nwhich is nonpositive\nbecause $\\tfrac{\\partial \\mu}{\\partial \\lambda} \\ge 1$ by \\cref{eq:mu-deriv-lambda}\nand we may take $\\widetilde{\\lambda} > 0$,\nyields\n\\begin{equation}\n    \\label{eq:concavity-bound-mu-lambda-3}\n    \\mu(\\lambda)\n    \\le\n    \\frac{\\partial \\mu}{\\partial \\lambda} \n    \\mathrel{\\Big |}_{\\lambda = \\widetilde{\\lambda}} \n    \\lambda \n    +\n    \\tfrac{1}{\\alpha}\n    \\tfrac{1}{p}\n    \\tr\n    \\bracket{ \\mu(\\widetilde{\\lambda}) {\\mathbf{A}} ({\\mathbf{A}} + \\mu(\\widetilde{\\lambda}) {\\mathbf{I}})^{-1} }.\n\\end{equation}\nFrom \\Cref{prop:monotonicities-lambda-alpha},\n$\\lambda \\mapsto \\mu(\\lambda)$ is monotonically increasing in $\\lambda$\nand $\\lim_{\\lambda \\to \\infty} \\mu(\\lambda) = \\infty$.\nIn addition, $\\mu \\mapsto \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2}]$\nis monotonically decreasing in $\\mu$\nand $\\lim_{\\mu \\to \\infty} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-2}] = 0$,\nwhile \n$\\mu \\mapsto \\tr[\\mu {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}]$\nis monotonically increasing in $\\mu$,\nand \n$\n\\lim_{\\mu \\to \\infty} \\tr[\\mu {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}] = \\tr[ {\\mathbf{A}} ].\n$\nThus, \nfrom \\cref{eq:mu-deriv-lambda-in-alpha},\nchoosing $\\widetilde{\\lambda}$ large 
enough\nso that $\\mu(\\widetilde{\\lambda})$ is sufficiently large,\nfor any $\\epsilon > 0$, we can write\n\\begin{equation}\n    \\label{eq:deriv-asymp-mu-lambda}\n    \\frac{\\partial \\mu}{\\partial \\lambda} \n    \\mathrel{\\Big |}_{\\lambda = \\widetilde{\\lambda}}\n    = \n    \\frac\n    {1}\n    {\n    1 - \\tfrac{1}{\\alpha} \n    \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu(\\widetilde{\\lambda}) {\\mathbf{I}})^{-2} } }\n    \\le\n    1 \n    +\n    \\epsilon,\n\\end{equation}\n\\begin{equation}\n    \\label{eq:concavity-intercept-asymp}\n    \\tfrac{1}{\\alpha}\n    \\tfrac{1}{p}\n    \\tr \\bracket{ \\mu(\\widetilde{\\lambda}) {\\mathbf{A}} ({\\mathbf{A}} + \\mu(\\widetilde{\\lambda}) {\\mathbf{I}})^{-1} }\n    \\le\n    \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{ {\\mathbf{A}} } + \\epsilon.\n\\end{equation}\nCombining \n\\cref{eq:concavity-bound-mu-lambda-3,eq:deriv-asymp-mu-lambda,eq:concavity-intercept-asymp},\none then has\n\\[\n    \\mu(\\lambda)\n    \\le\n    (1 + \\epsilon)\n    \\lambda \n    + \n    \\tfrac{1}{\\alpha}\n    \\tfrac{1}{p}\n    \\tr[{\\mathbf{A}}]\n    + \\epsilon.\n\\]\nSince the inequality holds for arbitrary $\\epsilon > 0$, \nthe desired upper bound on $\\mu(\\lambda)$ follows.\nFor the lower bound,\nobserve from \\cref{eq:concavity-bound-mu-lambda-2} that\nfor any $\\lambda \\in (\\lambda_0, \\infty)$\n\\[\n    \\mu(\\lambda)\n    = \\lambda + \\tfrac{1}{q} \\tr \\bracket{ \\mu(\\lambda) {\\mathbf{A}} \n    ({\\mathbf{A}} + \\mu(\\lambda) {\\mathbf{I}})^{-1} }.\n\\]\nFrom \\Cref{rem:joint-signs-lambda-mu},\n$\\mu(\\lambda) \\ge 0$ either when $\\lambda \\ge 0$,\nor when $\\alpha \\le r({\\mathbf{A}})$.\nIn either of the cases,\nthe term $\\tfrac{1}{q} \\tr[\\mu(\\lambda) {\\mathbf{A}} ({\\mathbf{A}} + \\mu(\\lambda) {\\mathbf{I}})^{-1}]$\nis positive,\nand thus $\\mu(\\lambda) \\ge \\lambda$.\nFinally, the limit as $\\lambda \\to \\infty$\nfollows simply by noting that\n$\\mu(\\lambda) \\to \\infty$ and \n$\\tr[\\mu {\\mathbf{A}} ({\\mathbf{A}} + \\mu {\\mathbf{I}})^{-1}] \\to 
\\tr[{\\mathbf{A}}]$\nas $\\lambda \\to \\infty$.\nThis finishes the proof.\n\\end{proof}\n\n\n\n\\subsection{Proof of \\cref{rem:alt-mu-prime}}\n\n\\begin{proof}\nWe begin by rewriting \\cref{eq:mu-prime} using \\cref{eq:thm:sketched-pseudoinverse}:\n\\begin{align}\n \\mu' = \\frac{\\frac{\\mu^3}{q} \\tr \\bracket{\\mPsi \\paren{\\mA + \\mu \\mI}^{-2} }}{\\mu \\paren{ 1 - \\tfrac{1}{q} \\tr \\bracket{\\mA \\inv{\\mA + \\mu \\mI_p}}} + \\frac{\\mu^2}{q} \\tr \\bracket{\\mA \\paren{\\mA + \\mu \\mI}^{-2}} }.\n\\end{align}\nAfter dividing both the numerator and denominator by $\\mu$, we note that the denominator has a form which has already been simplified in \\Cref{sec:monotonicies-lambda-alpha}, and immediately obtain the factorization in terms of $\\tfrac{\\partial \\mu}{\\partial \\lambda}$.\n\\end{proof}\n\\section*{Acknowledgments}\nWe are grateful to Arun Kumar Kuchibhotla, Alessandro Rinaldo, Yuting Wei, Jin-Hong Du, \nand other members of the Operational Overparameterized Statistics (OOPS) Working Group \nat Carnegie Mellon University for helpful conversations.\nWe are also grateful to Edgar Dobriban,\nas well as participants of the ONR MURI on Foundations of Deep Learning\nfor useful discussions and feedback on this work.\n\nThis work was sponsored by Office of Naval Research MURI grant N00014-20-1-2787. 
\nDL, HJ, and RGB were also supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571 and N00014-20-1-2534; AFOSR grant FA9550-22-1-0060; and a Vannevar Bush Faculty Fellowship, ONR grant N00014-18-1-2047.\n\n\n\n\\clearpage\n\\input{citations.bbl}\n\n\\clearpage\n\\setcounter{section}{0}\n\\setcounter{equation}{0}\n\\setcounter{figure}{0}\n\\renewcommand{\\thesection}{SM\\arabic{section}}\n\\renewcommand{\\theequation}{SM\\arabic{section}.\\arabic{equation}}\n\\renewcommand{\\thefigure}{SM\\arabic{figure}}\n\n\n\\thispagestyle{empty}\n\\begin{center}\n\\noindent\\textcolor{header1}{\\textbf{SUPPLEMENTARY MATERIALS: \\textsf{Asymptotics of the Sketched Pseudoinverse}}}\n\\color{gray}\\rule{\\textwidth}{4pt}\n\\end{center}\n\\medskip\n\nThis document serves as a supplement to the paper\n``Asymptotics of the Sketched Pseudoinverse.'' \nThe contents of this supplement are organized as follows.\nIn \\Cref{sec:useful-facts},\nwe collect some useful facts regarding Stieltjes transforms\nthat are used in some of the proofs in later sections.\nIn \\Cref{sec:proof:cor:basic-ridge-asympequi-in-r},\nwe provide a detailed proof for \\Cref{cor:basic-ridge-asympequi-in-r}.\nIn \\Cref{sec:proofs-main-results},\nwe provide a proof and an alternative proof sketch of \\cref{thm:sketched-pseudoinverse}.\nFinally, in \\Cref{sec:proofs-properties},\nwe provide proofs of various properties regarding our main equivalences\nmentioned in \\Cref{sec:properties} of the main paper.\n\n\n\\input{appendix\/appendix_facts}\n\n\\input{appendix\/proofs_prelim}\n\n\\input{appendix\/proofs_main}\n\n\\input{appendix\/proofs_properties}\n\n\n\n\n\n\n\\end{document}\n\n\n\n\\section{Discussion and extensions}\n\\label{sec:discussion}\n\nIn this paper, we have provided a detailed look at the asymptotic effects of i.i.d.\\ sketching on matrix inverses.\nWe have extended existing asymptotic equivalence results to real-valued regularization (including 
negative) and used this result to obtain both first- and second-order asymptotic equivalences for the sketched regularized pseudoinverse. \n\n\n\nOur work is far from a complete characterization of sketching. We now list some natural extensions to our results.\n\n\\paragraph{Relaxing assumptions, strengthening conclusions}\nAs mentioned in \\Cref{sec:main_results},\nwe make minimal assumptions on the base matrix ${\\mathbf{A}}$.\nIn particular, we do not assume that the empirical spectral\ndistribution of ${\\mathbf{A}}$ converges to any fixed limit.\nThe assumption that the maximum and minimum eigenvalues of ${\\mathbf{A}}$\nare bounded away from $0$ and $\\infty$ can be weakened.\nIn particular, one can let some eigenvalues escape\nto $\\infty$,\nand some eigenvalues decay to $0$,\nprovided certain functionals of the eigenvalues\nremain bounded.\nOur assumptions on the sketching matrix ${\\mathbf{S}}$ are also weak.\nWe do not assume any distributional structure on its entries\nand only require bounded moments of order $8 + \\delta$\nfor some $\\delta > 0$.\nUsing a truncation strategy,\none can push this to only requiring moments of order $4 + \\delta$\nfor some $\\delta > 0$ for the almost sure equivalences up to order $2$\nthat we show in this paper.\nFinally, while our asymptotic results give practically relevant insights for finite systems, we lack a precise characterization for non-asymptotic settings. In particular, the rate of convergence depends on a number of factors, including the choice of $\\lambda$ and the higher-order moments of the elements of $\\mS$.\n\n\n\\paragraph{Generalized sketching}\nOur assumption that the elements of the matrix $\\mS$ are i.i.d.\\ draws from some distribution limits the applicability of our results in practical settings on two key fronts:\nthe effect of a rotationally invariant sketch is isotropic regularization, and there is unnecessary distortion of the spectrum of $\\mA$ for $q \\to p$. 
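To make the induced regularization of the i.i.d.\\ sketch concrete, the following sketch (our own illustration, not code from the paper; the spectrum, $\\alpha$, and $\\lambda$ are arbitrary choices) solves the fixed point $\\lambda = \\mu \\paren{1 - \\tfrac{1}{q} \\tr \\bracket{\\mA \\inv{\\mA + \\mu \\mI}}}$ by bisection for a diagonal $\\mA$, showing that the equivalent implicit regularization $\\mu$ can substantially exceed the nominal $\\lambda$ even for $q\/p$ close to $1$.

```python
# Illustrative bisection solver (not code from the paper) for the fixed point
#   lambda = mu * (1 - (1/q) * tr[A (A + mu I)^{-1}])
# with a diagonal A, demonstrating the implicit regularization mu > lambda.
def solve_mu(eigs, alpha, lam, lo=1e-12, hi=1e6, iters=200):
    q = alpha * len(eigs)
    def resid(mu):
        return mu * (1 - sum(a / (a + mu) for a in eigs) / q) - lam
    for _ in range(iters):        # invariant: resid(lo) < 0 <= resid(hi)
        mid = 0.5 * (lo + hi)
        if resid(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eigs = [1.0] * 500                # A = I_p with p = 500 (arbitrary choice)
lam, alpha = 0.1, 0.9             # modest explicit regularization, q/p = 0.9
mu = solve_mu(eigs, alpha, lam)
print(mu)                         # noticeably larger than lam
```

For these choices the solver returns $\\mu \\approx 0.44$, more than four times the nominal $\\lambda = 0.1$, illustrating the extra regularization induced by the i.i.d.\\ sketch.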
We now discuss how to extend our framework to more general classes of sketches that align more closely with those used in practice.\n\nIn practice, we may desire to use generalized non-isotropic ridge regularization, to perform Bayes-optimal regression \n(see, e.g., Chapter 3 of \\cite{van-wieringen_2015})\nor to avoid multiple descent \\cite{mel2021regression,yilmaz2022descent}, or we may find ourselves using non-isotropic sketching matrices, such as in adaptive sketching~\\cite{lacotte2019adaptive}, where the sketching matrix depends on the data. We can cover these cases with the following extension of \\cref{thm:sketched-pseudoinverse}. \n\\begin{corollary}\n    [Non-isotropic sketching equivalence]\n    \\label{cor:sketched-pseudoinverse-noniso}\n    Assume the setting of \\cref{thm:sketched-pseudoinverse}.\n    Let $\\mW$ be a $p \\times p$ positive definite matrix,\n    either deterministic or random but independent of $\\mS$ with $\\limsup \\norm[\\mathrm{op}]{\\mW} < \\infty$. 
\n    Let $\\widetilde{\\mS} = \\mW^{1\/2} \\mS$.\n    Then for each $\\lambda > - \\liminf \\lambda_{\\min}^{+}(\\widetilde{\\mS}^\\top \\mA \\widetilde{\\mS})$\n    as $p, q \\to \\infty$ such that\n    $0 < \\liminf \\tfrac{q}{p} \\le \\limsup \\tfrac{q}{p} < \\infty$,\n    \\begin{align}\n        \\widetilde{\\mS}\n        \\biginv{\\widetilde{\\mS}^\\top \\mA \\widetilde{\\mS} + \\lambda \\mI_q }\n        \\widetilde{\\mS}^\\top\n        \\simeq\n        \\biginv{\\mA + \\mu \\mW^{-1} },\n    \\end{align}\n    where $\\mu$ is the most positive solution to\n    \\begin{align}\n        \\lambda = \\mu \\paren{ 1 - \\tfrac{1}{q} \\tr \\bracket{\\mA \\inv{\\mA + \\mu \\mW^{-1}}}}.\n    \\end{align}\n\\end{corollary}\n\n\\begin{proof}\nThe proof uses simple algebraic manipulations.\nObserve that, since the operator norm is sub-multiplicative,\nand $\\| {\\mathbf{W}} \\|_{\\mathrm{op}}$, $\\| {\\mathbf{A}} \\|_{\\mathrm{op}}$\nare uniformly bounded in $p$,\n$\\| {\\mathbf{W}}^{1\/2} {\\mathbf{A}} {\\mathbf{W}}^{1\/2} \\|_{\\mathrm{op}}$ is also uniformly bounded in $p$.\nUsing \\cref{thm:sketched-pseudoinverse},\nwe then have that\n\\[\n    {\\mathbf{S}}\n    \\biginv{\n        {\\mathbf{S}}^\\top {\\mathbf{W}}^{1\/2} {\\mathbf{A}} {\\mathbf{W}}^{1\/2} {\\mathbf{S}}\n        + \\lambda {\\mathbf{I}}_q\n    }\n    {\\mathbf{S}}^\\top\n    \\simeq\n    \\biginv{\n        {\\mathbf{W}}^{1\/2} {\\mathbf{A}} {\\mathbf{W}}^{1\/2} + \\mu {\\mathbf{I}}_p\n    }.\n\\]\nRight and left multiplying both sides by ${\\mathbf{W}}^{1\/2}$,\nand writing $\\widetilde{{\\mathbf{S}}} = {\\mathbf{W}}^{1\/2} {\\mathbf{S}}$,\nwe get\n\\[\n    \\widetilde{{\\mathbf{S}}}\n    \\biginv{\n        \\widetilde{{\\mathbf{S}}}^\\top\n        {\\mathbf{A}}\n        \\widetilde{{\\mathbf{S}}}\n        + \\lambda {\\mathbf{I}}_q\n    }\n    \\widetilde{{\\mathbf{S}}}^\\top\n    \\simeq\n    {\\mathbf{W}}^{1\/2}\n    \\biginv{\n        {\\mathbf{W}}^{1\/2} {\\mathbf{A}} {\\mathbf{W}}^{1\/2}\n        + \\mu {\\mathbf{I}}_p\n    }\n    {\\mathbf{W}}^{1\/2}\n    =\n    \\biginv{{\\mathbf{A}} + \\mu {\\mathbf{W}}^{-1} }\n\\]\nas desired, completing the proof.\n\\end{proof}\n\nBecause 
non-isotropic sketching can be used to induce generalized ridge regularization, this can be exploited adaptively to induce a wide range of structure-promoting regularization via iteratively reweighted least squares, in a manner similar to adaptive dropout methods (see \\cite{lejeune2021flipside} and references therein). Additionally, this result shows that methods applying ridge regularization to adaptive sketching methods, using for example $\\mW = \\mA$ as in \\cite{lacotte2019adaptive}, are not equivalent to ridge regression but instead to generalized ridge regression. \n\n\\paragraph{Free sketching}\nEven among isotropic sketches, there can be a wide range of behavior beyond i.i.d.\\ sketches. We suspect that a more general result holds for \\emph{free} sketching matrices (a notion from free probability that generalizes independence of random variables; see \\cite{mingo2017free} for an introductory text). We make the following conjecture without proof. However, we believe that our alternative proof sketch for \\cref{thm:sketched-pseudoinverse} via Jacobi's formula in the supplementary material may provide a strategy for proving this result.\n\\begin{conjecture}[General free sketching]\n    \\label{conj:general-free-sketching}\n    Let $\\mS \\in \\complexset^{p \\times q}$ be a norm-preserving sketch such that $\\mS \\mS^\\ctransp$ and $\\mA$ converge almost surely to operators that are free with respect to the average trace $\\tfrac{1}{p} \\tr [\\cdot]$. Then there exists a monotonic mapping $\\lambda \\mapsto \\gamma$ such~that\n    \\begin{align}\n        \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\simeq \\biginv{\\mA + \\gamma \\mI_p}.\n    \\end{align}\n\\end{conjecture}\n\nA particularly important sketching matrix that fits this broader definition is the orthogonal sketch. Unlike the i.i.d.\\ sketch, an orthogonal sketch does not distort the spectrum near $q = p$ and so has less induced regularization. 
Assuming \\cref{conj:general-free-sketching} holds, we obtain the following.\n\n\\begin{conjecture}[Orthogonal sketching]\n\\label{conj:orthonormal-sketch}\nFor $q \\leq p$ with $\\lim \\tfrac{q}{p} = \\alpha$, let $\\sqrt{\\tfrac{q}{p}} \\mQ \\in \\complexset^{p \\times q}$ be a Haar-distributed matrix with orthonormal columns, and let $\\mA \\in \\complexset^{p \\times p}$ be positive semidefinite with eigenvalues converging to a bounded limiting spectral measure. Then\n\\begin{align}\n    \\mQ \\big( \\mQ^\\ctransp \\mA \\mQ + \\lambda \\mI_q \\big)^{-1} \\mQ^\\ctransp \\simeq \\big( \\mA + \\gamma \\mI_p \\big)^{-1},\n\\end{align}\nwhere $\\gamma$ is the most positive solution to\n\\begin{align}\n    \\label{eq:ortho-gamma}\n    \\tfrac{1}{p} \\tr \\bracket{\\inv{\\mA + \\gamma \\mI_p}} (\\gamma - \\alpha \\lambda) = 1 - \\alpha.\n\\end{align}\nFurthermore, for $\\mu > 0$ from \\cref{thm:sketched-pseudoinverse} applied to the same $(\\mA, \\alpha, \\lambda)$, we have $\\gamma < \\mu$.\n\\end{conjecture}\n\\begin{proof}\nFirstly, $\\mQ \\mQ^\\ctransp$ and $\\mA$ are almost surely asymptotically free~\\cite[Theorem 4.9]{mingo2017free}. Assuming the equivalence in \\cref{conj:general-free-sketching} and inverting the steps of the proof of \\cref{thm:sketched-pseudoinverse}, we know that we can determine the relation between $\\gamma$ and $(\\mA, \\alpha, \\lambda)$ if we can find $c(z)$ analogously to \\cref{lem:basic-ridge-asympequi} such that \n\\begin{align}\n\\label{eq:proof:ortho-sketch-c}\n(\\mA^{1\/2} \\mQ \\mQ^\\ctransp \\mA^{1\/2} - z \\mI_p)^{-1} \\simeq (c(z) \\mA - z \\mI_p)^{-1}.\n\\end{align}\nLet $a$ and $b$ denote the limiting distributions of $\\mA$ and $\\mQ \\mQ^\\ctransp$, respectively. Because $\\mA \\mQ \\mQ^\\ctransp$ has the same eigenvalues as $\\mA^{1\/2} \\mQ \\mQ^\\ctransp \\mA^{1\/2}$, we need only determine how the free product $ab$ depends on $a$. 
The general strategy is to use the chain of invertible transformations\n\\begin{align}\n G_a(z) = \\tfrac{1}{p} \\tr\\bracket{(z - a)^{-1}} \n \\;\n \\longleftrightarrow\n \\;\n M_a(z) = \\frac{1}{z} G_a\\paren{\\frac{1}{z}} - 1\n \\;\n \\longleftrightarrow\n \\;\n S_a(z) = \\frac{1 + z}{z} M_a^{-1}(z),\n\\end{align}\nwhich are the Cauchy transform (negative of the Stieltjes transform), moment generating series, and $S$-transform of $a$, respectively. Then we exploit the property of free products that $S_{ab}(z) = S_a(z) S_b(z)$, or equivalently $M_{ab}^{-1}(z) = \\tfrac{1 + z}{z} M_a^{-1}(z) M_b^{-1}(z)$.\n\nFrom the simple structure of $b$, we first determine that $M_b(z) = \\tfrac{\\alpha z}{\\alpha - z}$, which has inverse $M_b^{-1}(z) = \\tfrac{\\alpha z}{\\alpha + z}$. Thus, $M_{ab}^{-1}(z) = \\tfrac{\\alpha(1 + z)}{\\alpha + z} M_a^{-1}(z)$. At the same time, taking the trace of \\cref{eq:proof:ortho-sketch-c}, we observe that we can also express this relation in terms of $c$ as $M_{ab}(z) = M_a(z c(\\tfrac{1}{z}))$. If we define the function $C \\colon z \\mapsto z c(\\tfrac{1}{z})$, we can write $M_{ab} = M_a \\circ C$, where $\\circ$ denotes function composition, which implies that $C^{-1} = M_{ab}^{-1} \\circ M_a$. Now we have \n\\begin{align}\n C^{-1}(z) = \\frac{\\alpha z (1 + M_a(z))}{\\alpha + M_a(z)}.\n\\end{align}\nReferring to notation from \\cref{cor:basic-ridge-asympequi-in-r}, we can define $\\zeta \\colon z \\mapsto \\tfrac{z}{c(z)}$, such that $\\zeta = \\tfrac{1}{\\cdot} \\circ C \\circ \\tfrac{1}{\\cdot}$. 
Writing the above equation in terms of $\\zeta$ and $G_a$, we obtain\n\\begin{align}\n \\zeta^{-1}(z) = \\frac{z G_a(z) + \\alpha - 1}{\\alpha G_a(z)} \\implies G_a(\\zeta(z)) (\\alpha z - \\zeta(z)) = \\alpha - 1.\n\\end{align}\nWe now let $\\lambda = -z$, $\\gamma = -\\zeta(z)$, and recall that $-G_a(-\\gamma) = \\tfrac{1}{p} \\tr \\bracket{(a + \\gamma)^{-1}}$, and we obtain the stated equation in \\cref{eq:ortho-gamma}.\nTo see that $\\gamma < \\mu$, observe that we can write \\cref{eq:sketched-modified-lambda} and \\cref{eq:ortho-gamma} as \n\\begin{align}\n \\tfrac{\\mu}{p} \\tr \\bracket{\\big(\\mA + \\mu \\mI_p \\big)^{-1}} &= 1 - \\alpha + \\frac{\\alpha \\lambda}{\\mu} \\\\\n \\tfrac{\\gamma}{p} \\tr \\bracket{\\big(\\mA + \\gamma \\mI_p \\big)^{-1}} &= 1 - \\alpha + \\alpha \\lambda \\tfrac{1}{p} \\tr \\bracket{\\big(\\mA + \\gamma \\mI_p \\big)^{-1}}.\n\\end{align}\nThe left-hand sides of these two equations are the same increasing function of $\\mu$ and $\\gamma$, respectively, while the right-hand sides are decreasing functions, with the function of $\\mu$ being strictly greater than the function of $\\gamma$, since $\\tfrac{1}{p} \\tr \\bracket{\\big(\\mA + \\mu \\mI_p \\big)^{-1}} < \\tfrac{1}{\\mu}$ for $\\mu > 0$. This means that the intersection with the decreasing function for $\\gamma$ must occur for a smaller value than the intersection for $\\mu$, proving the claim.\n\\end{proof}\n\n\nIn the statement, $\\gamma < \\mu$ means that the orthogonal sketch has less effective regularization than the i.i.d.\\ sketch. 
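The ordering $\\gamma < \\mu$ is easy to check numerically. The following Python sketch (hypothetical parameters: the spectrum $\\{0, 1, 2\\}$ with equal mass, $\\alpha = 0.8$, and $\\lambda = 1$, matching our figures) solves the i.i.d.\\ fixed point \\cref{eq:sketched-modified-lambda} for $\\mu$ and the orthogonal equation \\cref{eq:ortho-gamma} for $\\gamma$ by bisection:

```python
# Numerical check of the conjectured ordering gamma < mu between the effective
# regularization induced by orthogonal vs. i.i.d. sketching.
# Spectrum of A: eigenvalues {0, 1, 2} with equal mass (as in the figures).

def bisect(f, lo, hi, iters=200):
    """Bisection root-finder; assumes f(lo) < 0 < f(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eigs = [0.0, 1.0, 2.0]
alpha, lam = 0.8, 1.0

# i.i.d. sketching (Theorem): lam = mu * (1 - (1/alpha) * mean(a / (a + mu)))
def f_iid(mu):
    return mu * (1.0 - sum(a / (a + mu) for a in eigs) / (len(eigs) * alpha)) - lam

# Orthogonal sketching (Conjecture): mean(1/(a + g)) * (g - alpha*lam) = 1 - alpha
def f_orth(g):
    return sum(1.0 / (a + g) for a in eigs) / len(eigs) * (g - alpha * lam) - (1.0 - alpha)

mu = bisect(f_iid, 1e-9, 10.0)
gamma = bisect(f_orth, alpha * lam + 1e-9, 10.0)
print(f"mu = {mu:.3f}, gamma = {gamma:.3f}")  # mu ~ 1.63, gamma ~ 1.17
assert gamma < mu
```

These are the values $\\mu \\approx 1.63$ and $\\gamma \\approx 1.17$ reported in \\cref{fig:practical-concentration}.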
For settings in which we desire to solve a linear system with as little distortion as possible, we therefore would much prefer an orthogonal sketch to an i.i.d.\\ sketch, especially for $q \\approx p$.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=6in]{figures\/practical_concentration.pdf}\n \\caption{\n Empirical density histograms over 20 trials demonstrating the concentration of diagonal elements of $\\mS \\inv{\\mS^\\transp \\mA \\mS + \\lambda \\mI} \\mS^\\transp$ for $\\mA$ as in \\cref{fig:empirical-concentration} with $q \\approx 0.8 p$, $\\lambda = 1$, and several normalized sketches $\\mS$ commonly used in practice. We also plot the diagonals of the i.i.d.\\ sketching equivalence $\\inv{\\mA + \\mu \\mI}$ (black, dotted) and the conjectured orthogonal sketching equivalence $\\inv{\\mA + \\gamma \\mI}$ from \\cref{conj:orthonormal-sketch} (red, dashed), where $\\mu \\approx 1.63$ and $\\gamma \\approx 1.17$.}\n \\label{fig:practical-concentration}\n\\end{figure}\n\nIn \\cref{fig:practical-concentration}, we repeat the experiment from \\cref{fig:empirical-concentration} for a variety of normalized non-i.i.d.\\ sketches used frequently in practice. Both CountSketch \\cite{charikar2002frequent} and the Fast Johnson--Lindenstrauss Transform (FJLT) \\cite{ailon2009fast} behave similarly to i.i.d.\\ sketching, with the FJLT slightly over-regularizing. As predicted by \\cref{cor:sketched-pseudoinverse-noniso}, adaptive sketching with $\\mW = \\mA$ \\cite{lacotte2019adaptive} behaves very differently from the other sketches, showing only two point masses instead of three since $\\mA^{-1}$ is not well-defined for its eigenvalues of 0. Lastly, the Subsampled Randomized Hadamard Transform (SRHT) \\cite{tropp2011hadamard} is an orthogonal version of the FJLT, and our experiment elucidates the effect of zero padding on the Hadamard transform of the SRHT. 
The Hadamard transform is defined only for powers of 2, so for other dimensions, the common approach is to simply zero-pad the data to the nearest power of 2. However, from this experiment we can see that this zero-padding can have a significant impact on the effective regularization; for $p$ slightly smaller than a power of 2, the SRHT performs almost identically to an orthogonal sketch. However, for $p$ slightly larger than a power of 2, there is significant effective regularization induced, even though the sketch is still norm-preserving.\n\nOur proposed framework of first- and second-order equivalence promises to provide a principled means of comparison of different sketching techniques. Once $\\gamma$ from \\cref{conj:general-free-sketching} can be determined for a given sketch (which should depend on its spectral properties), an analogous result to \\cref{lem:second-order-sketch} should directly follow to yield inflation with a factor of $\\gamma'$. Armed with both $\\gamma$ and $\\gamma'$ for a collection of sketches, we can compare them using these bias and variance-style comparisons and make principled choices analogously to classical estimation techniques.\n\n\\paragraph{Future work}\n\nAs alluded to in the introduction, \nthe first- and second-order equivalences developed in this work can be used directly to analyze the asymptotics of the predicted values and quadratic errors of sketched ridge regression. \nWe leave a detailed analysis of sketched ridge regression for a companion paper,\nin which we use the results in this work to study both primal (observation-side) and dual (feature-side) sketching of the data matrix, \nas well as joint primal and dual sketching. 
\nWe believe that our results can also be combined with the techniques in \\cite{liao2021hessian} who obtain deterministic equivalents for the Hessian of generalized linear models,\nenabling precise asymptotics for the implicit regularization due to sketching in nonlinear prediction models such as classification with logistic regression.\n\n\n\n\\section{Introduction}\n\n\nIn large-scale data processing systems, \\emph{sketching} or \\emph{random projections} play an essential role in making computation efficient and tractable. The basic idea is to replace high-dimensional data by relatively low-dimensional random linear projections of the data such that distances are preserved.\nIt is well-known that sketching can significantly reduce the size of the data without harming statistical performance, while providing a dramatic computational advantage \\cite{pmlr-v80-aghazadeh18a,gower2015randomized,lacotte2019adaptive,wang_lee_mahdavi_kolar_srebro_2017}. \nFor a summary of \nresults on the applications of \nsketching in optimization and \nnumerical linear algebra, we refer the reader to \\cite{mahoney2011randomized,woodruff2014sketching}. \n\nIn this work, we present a different kind of result than the usual sketching guarantee. Typically, sketching is guaranteed to preserve the output or statistical performance of computational methods with an error term that vanishes for sufficiently large sketch sizes \\cite{avron2017faster, bakshi2020robust,clarkson2014sketching, ivkin2019communication, pilanci2016iterative, woodruff2021very}. In contrast, we characterize the precise way in which the solution to a computational problem changes when operating on a sketched version of data instead of the original data, showing that sketching induces a specific type of regularization.\n\nOur primary contribution is a statement about the effect of sketching on the (regularized) pseudoinverse of a matrix. An informal statement of our result is as follows. 
Here the notation $\\mA \\simeq \\mB$ for two matrices $\\mA$ and $\\mB$ indicates an asymptotic first-order equivalence, which we define in \\Cref{sec:preliminaries}, and $\\lambda_{\\min}^{+}(\\mA)$ is the smallest nonzero eigenvalue of a matrix $\\mA$. We refer to $\\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp$ as the sketched (regularized) pseudoinverse of $\\mA$, because when $\\mS$ has orthonormal columns, the pseudoinverse of $\\mS \\mS^\\ctransp \\mA \\mS \\mS^\\ctransp$ is equal to $\\mS (\\mS^\\ctransp \\mA \\mS)^{-1} \\mS^\\ctransp$. This expression is related to the Nystr\\\"om approximation of the inverse of $\\mA$.\n\n\\begin{inftheorem}[\\Cref{thm:sketched-pseudoinverse}, informal]\n \\label{thm:sketched-pseudoinverse-informal}\n Given a positive semidefinite matrix $\\mA \\in \\complexset^{p \\times p}$ and a random sketching matrix $\\mS \\in \\complexset^{p \\times q}$, for any $\\lambda > -\\lambda_{\\min}^{+}(\\mS^\\ctransp \\mA \\mS)$, there exists $\\mu \\in \\reals$ such that\n \\begin{align}\n \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\simeq \\inv{\\mA + \\mu \\mI_p}.\n \\end{align}\n\\end{inftheorem}\n\nThe general implication of this result is that when we do computation using the sketched version of a matrix, there is a sense in which it is as if we were using ridge regularization. More precisely, when we solve (regularized) linear systems on a sketched version of the data and apply this solution to the sketched data, it is equivalent in a first-order sense to solving a regularized linear system in the original space. To see this, consider for example a least squares problem $\\min_\\vbeta \\norm[2]{\\vy - \\mX \\vbeta}^2$. 
The first-order optimality condition is $\\mX^\\ctransp \\mX \\vbeta = \\mX^\\ctransp \\vy$, and if we replace $\\mX$ by a sketch $\\mX \\mS$, we have the solution in the sketched domain $\\widehat{\\vbeta}_\\mS = (\\mS^\\ctransp \\mX^\\ctransp \\mX \\mS)^{-1} \\mS^\\ctransp \\mX^\\ctransp \\vy$. If we then apply this solution to a new sketched data point $\\mS^\\ctransp \\vx$, we obtain the prediction $\\widehat{y} = \\vx^\\ctransp \\mS (\\mS^\\ctransp \\mX^\\ctransp \\mX \\mS)^{-1} \\mS^\\ctransp \\mX^\\ctransp \\vy$. By our result, this is asymptotically equivalent to making the prediction $\\widehat{y} = \\vx^\\ctransp (\\mX^\\ctransp \\mX + \\mu \\mI)^{-1} \\mX^\\ctransp \\vy$---that is, as if we had solved the original least squares problem using some regularization $\\mu$.\n\n\\subsection*{Summary of Contributions}\n\nBelow we summarize the main contributions of the paper.\n\n\\begin{enumerate}\n \\item \\textbf{Real-valued equivalence.} We extend previous results from random matrix theory \\cite{rubio_mestre_2011} to real-valued regularization, explicitly characterizing\n the behaviour of the associated fixed-point\n equation extended from the complex half-plane to the reals, allowing\n for consideration of negative regularization. 
This result includes what is to the best of our knowledge the first characterization of the limiting smallest nonzero eigenvalue of arbitrary sample covariance matrices, which may be of independent interest.\n \\item \\textbf{First-order equivalence.} Applying the real-valued equivalence, we obtain a first-order equivalence for the ridge-regularized sketched pseudoinverse.\n \\item \\textbf{Second-order equivalence.} Using the calculus of asymptotic equivalents,\n we also obtain a second-order equivalence\n for the ridge-regularized sketched pseudoinverse.\n \\item \\textbf{Equivalence properties.} We provide a thorough investigation of the theoretical properties of the equivalence relationship, such as how the induced regularization depends on the original applied regularization, sketch size, and matrix rank.\n \\item \\textbf{Free sketching conjecture.} Finally, we extend the scope of our results\n for first-order equivalence of the sketched pseudoinverse to general asymptotically free sketching in the form of a conjecture and specialize to orthogonal sketching matrices.\n\\end{enumerate}\n\n\\subsection*{Related work}\n\nThe existence of an implicit regularization effect of sketching or random projections has been known for some time \\cite{derezinski2021determinantal,leamer1976bayesian, rudi2015less, thanei_heinze_meinshausen_2017}. %\nWhile prior works have demonstrated clear theoretical and empirical statistical advantages of sketching, our understanding of the precise nature of this implicit regularization has been largely limited to quantities such as error bounds. 
We provide, in contrast, a precise asymptotic characterization of the solution obtained by a sketching-based solver, not only enabling the understanding of the statistical performance of sketching-based methods, but also opening the door for exploiting the specific regularization induced by sketching in future algorithms.\n\nOur results in this work provide a general extension \nof a few results appearing in recent works that have revealed explicit characterizations of the implicit regularization effects induced by random subsampling. To the best of our knowledge, the first such result was presented by \\cite{lejeune_javadi_baraniuk_2020}, who showed that ensembles of (unregularized) ordinary least squares predictors on randomly subsampled observations and features converge in an $\\ell_2$ metric to an (optimal) ridge regression solution in the proportional asymptotics regime. \nThis result was limited in several aspects: \na) it required a strong isotropic Gaussian data assumption;\nb) it required the subsampled data to have more observations than features;\nc) it considered only unregularized base learners in the ensemble;\nd) it required an ensemble of infinite size to show \nthe ridge regression equivalence;\ne) it provided only a marginal guarantee of convergence \nover the data distribution \nrather than a single-instance convergence guarantee;\nand \nf) it did not provide the relationship between the subsampling ratio and the amount of induced ridge regularization. In addition, the proof relied on rote computation of expectations of matrix quantities, providing limited insight into the underlying mathematical principles at work. 
The result we present in this work in \\cref{thm:sketched-pseudoinverse} addresses all of these issues.\n\nAround the same time, \\cite{pmlr-v108-mutny20a} showed the remarkably simple result that the expected value of the pseudoinverse of any positive definite matrix sampled by a determinantal point process (DPP) is equal to a resolvent of the matrix. Similarly to the result by \\cite{lejeune_javadi_baraniuk_2020}, this result demonstrated that when random subsampling is applied in techniques without any regularization, the resulting solution is as if a regularized technique was used on the original data. This result provided a simple form of the argument of the induced resolvent as a solution to a matrix trace equation, which is analogous to the results we present in this work for sketching. However, the proof technique relied on convenient algebraic properties of the DPP, again providing limited insight into what happens for more commonplace and computationally simple low-dimensional projection techniques such as random sketching. Despite this theoretical limitation, the same authors later empirically demonstrated that the same effects occur when using i.i.d.\\ Gaussian and Rademacher sketches \\cite{derezinski_surrogate_2020}, suggesting the broader result that we present in this work. In addition to our result in \\cref{thm:sketched-pseudoinverse} applying to i.i.d.\\ sketching matrices rather than DPP subsampling, our result also differs from this work in that we provide a single-instance equivalent ridge regularization in the asymptotic regime, rather than an expectation over the random subsamplings.\n\nOur results also echo the finite-sample results of \\cite{pmlr-v134-derezinski21a}, who showed that the unregularized inverse of a particular sketched matrix form has a merely multiplicative bias for sketch size minimally larger than the rank of the original matrix. 
This is captured by \\cref{cor:basic-ridge-asympequi-in-r} in our work when $z \\to 0$, combined with \\cref{rem:mu-prime-to-0} in which we observe that there is asymptotically no spectral distortion in the range of the original matrix for sketches larger than the rank.\n\n\n\n\n\\subsection*{Organization}\n\nThe rest of the paper is structured as follows.\nIn \\Cref{sec:preliminaries},\nwe start with some preliminaries\non the language of asymptotic equivalence\nof random matrices that we will use to state our results.\nIn \\Cref{sec:real-valued-equivalence},\nwe extend a previous result on asymptotic equivalence\nfor a ridge regularized resolvent\nto include real-valued negative regularization\nand provide a precise limiting lower bound\non the permitted negative regularization.\nIn \\Cref{sec:main_results},\nwe provide our main results\nabout the first- and second-order\nequivalence of the sketched pseudoinverse.\nThen, in \\Cref{sec:properties}, we explore properties of the equivalence and present illustrative examples.\nFinally, in \\Cref{sec:discussion},\nwe conclude by giving\nvarious extensions and providing\na general conjecture on the asymptotic\nbehaviour of the sketched pseudoinverse\nfor a broad family of sketching matrices\nusing the insights obtained from the proof\nof our main result and experimentally compare sketches commonly used in practice to our theory.\nOur code for generating all figures can be found at \\url{https:\/\/github.com\/dlej\/sketched-pseudoinverse}.\n\n\n\n\\subsection*{Notation}\nWe denote the real line by $\\mathbb{R}$\nand the complex plane by $\\mathbb{C}$.\nFor a complex number $z = x + iy$,\n$\\Re(z)$ denotes its real part $x$,\n$\\Im(z)$ denotes its imaginary part $y$,\nand $\\overline{z} = x - iy$ denotes its conjugate.\nWe use $\\mathbb{R}_{\\ge 0}$ and $\\mathbb{R}_{> 0}$ to denote \nthe set of non-negative and positive real numbers, respectively;\nsimilarly, $\\mathbb{R}_{\\le 0}$ and $\\mathbb{R}_{< 0}$ respectively 
denote\nthe set of non-positive and negative real numbers.\nWe use $\\mathbb{C}^{+} = \\{ z \\in \\mathbb{C} : \\Im(z) > 0 \\}$ to denote \nthe upper half of the complex plane\nand $\\mathbb{C}^{-} = \\{ z \\in \\mathbb{C} : \\Im(z) < 0 \\}$ to denote \nthe lower half of the complex plane. \n\nWe denote vectors in lowercase bold letters (e.g., $\\vy$)\nand matrices in uppercase bold letters (e.g., $\\mX$).\nFor a vector $\\vy$, \n$\\| \\vy \\|_2$ denotes its $\\ell_2$ norm.\nFor a rectangular matrix ${\\mathbf{S}} \\in \\mathbb{C}^{p \\times q}$, \n${\\mathbf{S}}^\\ctransp \\in \\mathbb{C}^{q \\times p}$\ndenotes its conjugate or Hermitian transpose\n(such that $[\\mS^\\ctransp]_{ij} = \\overline{[\\mS]_{ji}}$),\n$\\norm[\\tr]{\\mS}$ denotes its trace norm (or nuclear norm),\nthat is $\\norm[\\tr]{\\mS} = \\tr\\bracket{(\\mS^\\ctransp \\mS)^{1\/2}}$,\nand $\\| {\\mathbf{S}} \\|_{\\rm op}$ denotes the operator norm\nwith respect to the $\\ell_2$ vector norm\n(which is also its spectral norm).\nFor a square matrix ${\\mathbf{A}} \\in \\mathbb{C}^{p \\times p}$,\n$\\tr[{\\mathbf{A}}]$ denotes its trace,\n$\\mathrm{rank}(\\mA)$ denotes its rank,\n$r(\\mA) = \\frac{1}{p} \\mathrm{rank}(\\mA)$\ndenotes its relative rank,\nand ${\\mathbf{A}}^{-1} \\in \\mathbb{C}^{p \\times p}$ denotes its inverse,\nif it is invertible.\nFor a positive semidefinite matrix ${\\mathbf{A}} \\in \\mathbb{C}^{p \\times p}$,\n${\\mathbf{A}}^{1\/2} \\in \\mathbb{C}^{p \\times p}$ denotes its positive semidefinite \nprincipal square root,\n$\\lambda_{\\min}({\\mathbf{A}})$ denotes its smallest eigenvalue, \nand $\\lambda_{\\min}^{+}({\\mathbf{A}})$ denotes its smallest positive eigenvalue.\n\nA sequence $x_n$ converging to $x_{\\infty}$ from the left \nand right is denoted by $x_n \\nearrow x_{\\infty}$ and $x_n \\searrow x_{\\infty}$, respectively.\nWe denote almost sure convergence by $\\xrightarrow{\\text{a.s.}}$.\n\n\n\n\n\n\n\n\n\n\\section{Main results}\n\\label{sec:main_results}\n\nOne way to 
think about \\cref{cor:basic-ridge-asympequi-in-r}\nis that the data matrix ${\\mathbf{X}} = {\\mathbf{Z}} \\mSigma^{1\/2}$ is a sketched version of the (square root) covariance matrix $\\mSigma^{1\/2}$,\nwhere ${\\mathbf{Z}}$ acts as a sketching matrix. The sketching is done by ``nature'' in the form of the $n$ observations,\nrather than by the statistician, but is otherwise mathematically identical to sketching. Using this insight, along with the Woodbury identity,\nwe can adapt the random matrix resolvent equivalence in \\cref{cor:basic-ridge-asympequi-in-r} to a sketched (regularized) pseudoinverse equivalence. To emphasize the shift in perspective, we denote the dimensionality of the sketched data as $q$ (replacing $n$), replace $\\mSigma$ with $\\mA$, and absorb the normalization by $\\tfrac{1}{q}$ (replacing $\\tfrac{1}{n}$) into the sketching matrix $\\mS$ (replacing $\\mZ$), so that the sketching transformation is norm-preserving (see \\Cref{rem:norm-preserving-sketch} for more details).\n\n\n\n\\subsection{First-order equivalence}\n\nOur first result provides a first-order equivalence for the sketched regularized pseudoinverse.\nBy first-order equivalence,\nwe refer to equivalence for matrices that involve \nthe \\emph{first} power of the ridge resolvent.\nWe also present a second-order equivalence\nfor matrices that involve the \\emph{second} power\nof the ridge resolvent in \\Cref{sec:second-order-sketch-equi}.\n\nIn preparation for the statements to follow,\nrecall that $r({\\mathbf{A}}) = \\frac{1}{p} \\sum_{i=1}^{p} \\mathbbm{1}\\{ \\lambda_i({\\mathbf{A}}) > 0 \\}$,\nor in other words, the normalized number of non-zero eigenvalues of ${\\mathbf{A}}$.\nNote that $0 \\le r({\\mathbf{A}}) \\le 1$.\n\n\\begin{theorem}\n [Isotropic sketching equivalence]\n \\label{thm:sketched-pseudoinverse}\n Let $\\mA \\in \\complexset^{p \\times p}$ be a positive semidefinite\n matrix such that $\\| \\mA \\|_{\\rm op}$ is uniformly bounded in $p$\n and $\\liminf 
\\lambda_{\\min}^{+}({\\mathbf{A}}) > 0$.\n Let $\\sqrt{q}\\mS \\in \\complexset^{p \\times q}$ be a random matrix\n consisting of i.i.d.\\ random variables that have mean 0, variance 1, and finite $8 + \\delta$ moment for some $\\delta > 0$. \n Let $\\lambda_0, \\mu_0 \\in \\reals$ be the unique solutions, satisfying $\\mu_0 > - \\lambda_{\\min}^{+}(\\mA)$, to the system of equations\n \\begin{align}\n \\label{eq:mu0-lambda0-fps}\n 1 = \\tfrac{1}{q} \\tr \\bracket{\\mA^2 \\paren{\\mA + \\mu_0 \\mI_p}^{-2}}, \\quad\n \\lambda_0 = \\mu_0 \\paren{1 - \\tfrac{1}{q} \\tr \\bracket{\\mA \\inv{\\mA + \\mu_0 {\\mathbf{I}}_p}}}.\n \\end{align}\n Then, \n as $q, p \\to \\infty$ \n such that\n $0 < \\liminf \\tfrac{q}{p} \\le \\limsup \\tfrac{q}{p} < \\infty$, \n the following asymptotic equivalences hold:\n \\begin{enumerate}[topsep=1em,parsep=0pt,label=(\\roman*)]\n \\item for any $\\lambda > \\limsup \\lambda_0$, \n we have\n \\end{enumerate}\n \\vspace{-\\abovedisplayskip}\n \\begin{align}\n \\label{eq:thm:sketched-pseudoinverse-A-half}\n \\mA^{1\/2} \\mS \\big( \\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q\n \\big)^{-1} \\mS^\\ctransp\n \\simeq \\mA^{1\/2} \\inv{\\mA + \\mu \\mI_p};\n \\end{align}\n \\begin{enumerate}[resume*]\n \\item if furthermore either $\\lambda \\neq 0$ or $\\limsup \\tfrac{q}{p} < \\liminf r(\\mA)$,\n we have\n \\end{enumerate}\n \\vspace{-\\abovedisplayskip}\n \\begin{align}\n \\label{eq:thm:sketched-pseudoinverse}\n \\mS \\big( \\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q\n \\big)^{-1} \\mS^\\ctransp\n \\simeq \\inv{\\mA + \\mu \\mI_p},\n \\end{align}\n where $\\mu$\n is the unique solution in $(\\mu_0, \\infty)$ to the fixed point equation\n \\begin{align}\n \\label{eq:sketched-modified-lambda}\n \\lambda = \\mu \\paren{ 1 - \\tfrac{1}{q} \\tr \\bracket{\\mA \\inv{\\mA + \\mu \\mI_p}}}.\n \\end{align}\n Furthermore, as $p, q \\to \\infty$, $|\\mu - \\tfrac{1}{ \\widetilde{v}(\\lambda)}| \\xrightarrow{\\text{a.s.}} 0$, where\n \\begin{align}\n 
\\widetilde{v}(\\lambda) = \\tfrac{1}{q} \\tr \\bracket{\\big( \\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q\n \\big)^{-1} },\n \\end{align}\n and $|\\lambda_0 + \\lambda_{\\min}^{+}(\\mS^\\ctransp \\mA \\mS)| \\xrightarrow{\\text{a.s.}} 0$. \n\\end{theorem}\n\n\\begin{proof}[Proof sketch]\nWe begin by considering the case that $\\mA$ satisfies $\\limsup \\bignorm[\\rm op]{\\mA^{-1}} < \\infty$. Then we can rewrite the left-hand side of \\cref{eq:thm:sketched-pseudoinverse-A-half} or \\cref{eq:thm:sketched-pseudoinverse}\nsuch that we can apply \\cref{cor:basic-ridge-asympequi-in-r} with $\\mX = \\sqrt{q} \\mS^\\ctransp \\mA^{1\/2}$, $\\lambda = -z$, and $\\mu = -\\zeta$. For any $\\lambda > -\\liminf z_0$,\n\\begin{subequations}\n\\begin{align}\n \\mA^{1\/2} \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q } \\mS^\\ctransp \\mA^{1\/2} &= \n \\mA^{1\/2} \\mS \\mS^\\ctransp \\mA^{1\/2} \\biginv{\\mA^{1\/2} \\mS \\mS^\\ctransp \\mA^{1\/2} + \\lambda \\mI_p } \\\\\n & = \\mI_p - \\lambda \\biginv{\\mA^{1\/2} \\mS \\mS^\\ctransp \\mA^{1\/2} + \\lambda \\mI_p } \\\\\n & \\simeq \\mI_p - \\mu \\biginv{\\mA + \\mu \\mI_p} \\\\\n & = \\mA^{1\/2} \\biginv{\\mA + \\mu \\mI_p} \\mA^{1\/2}.\n\\end{align}\n\\end{subequations}\nWe can then multiply on the right, or both left and right, by $\\mA^{-1\/2}$ to obtain the results in \\cref{eq:thm:sketched-pseudoinverse-A-half} and \\cref{eq:thm:sketched-pseudoinverse}, respectively, by the product rule of asymptotic equivalences. If $\\mA$ does not have a norm-bounded inverse, we can apply the above result for $\\mA_\\delta \\defeq \\mA + \\delta \\mI_p$ for $\\delta > 0$ and make a uniform convergence argument for interchanging limits of $p$ and $\\delta$ to prove the equivalence in \\cref{eq:thm:sketched-pseudoinverse}. We then multiply by $\\mA^{1\/2}$ and make another uniform convergence argument to extend this equivalence to the case $\\lambda = 0$ to obtain the equivalence in \\cref{eq:thm:sketched-pseudoinverse-A-half}. 
The details can be found in \\Cref{sec:proof:thm:sketched-pseudoinverse} of the supplementary material.\n\nWe also provide an alternative but incomplete proof strategy that is based on Jacobi's formula rather than on \\cref{cor:basic-ridge-asympequi-in-r} in \\Cref{sec:proofs-main-results-jacobi} of the supplementary material. The idea is to relate the derivatives of $\\log |\\mS^\\ctransp (\\mA + t \\mTheta) \\mS + \\lambda \\mI_q|$ and $\\log |\\mA + t \\mTheta + \\mu \\mI_p|$ with respect to $t$ (evaluated at $t = 0$) and with respect to $\\lambda$---the former yields the $\\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI} \\mS^\\ctransp$ term we are interested in, and the latter yields the Stieltjes transform of the spectrum of $\\mS^\\ctransp \\mA \\mS$. In this way, we can build an asymptotic equivalence (for trace functionals with general $\\mTheta$) using only a spectral relationship (determined by $\\mTheta = \\mI$). We do not need this proof strategy for \\cref{thm:sketched-pseudoinverse}, but we think it may be valuable for proving \\cref{conj:general-free-sketching} in \\Cref{sec:discussion}.\n\\end{proof}\n\nIn words, the sketched pseudoinverse of $\\mA$ with regularization $\\lambda$ is asymptotically equivalent to the regularized inverse of $\\mA$ with regularization $\\mu$, and the relationship between $\\lambda$ and $\\mu$ asymptotically depends only on $\\mA$, $p$, and $q$. As mentioned in \\Cref{sec:preliminaries}, this implies for example that the elements of the sketched pseudoinverse converge to the elements of the ridge-regularized inverse. 
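This elementwise convergence can be reproduced with a minimal simulation (a Python sketch, not the code behind our figures; the diagonal $\\mA$ and the parameters $\\alpha = 0.8$, $\\lambda = 1$ are chosen to match the setting of \\cref{fig:empirical-concentration}):

```python
# Entrywise comparison of the sketched pseudoinverse S (S^T A S + lam I)^{-1} S^T
# with its equivalent ridge-regularized inverse (A + mu I)^{-1},
# for a real Gaussian sketch S and diagonal A with eigenvalues {0, 1, 2}.
import numpy as np

rng = np.random.default_rng(0)
p, alpha, lam = 300, 0.8, 1.0
q = int(alpha * p)
a = np.tile([0.0, 1.0, 2.0], p // 3)          # eigenvalues of diagonal A
A = np.diag(a)

# Solve lam = mu * (1 - (1/q) tr[A (A + mu I)^{-1}]) for mu by fixed-point iteration.
mu = 1.0
for _ in range(1000):
    mu = lam / (1.0 - np.sum(a / (a + mu)) / q)

# Norm-preserving i.i.d. sketch: entries of sqrt(q) * S are standard normal.
S = rng.standard_normal((p, q)) / np.sqrt(q)
sketched = S @ np.linalg.solve(S.T @ A @ S + lam * np.eye(q), S.T)
diag_err = np.max(np.abs(np.diag(sketched) - 1.0 / (a + mu)))
print(f"mu = {mu:.3f}, max diagonal deviation = {diag_err:.3f}")
```

The diagonal entries cluster around $1 \/ (a + \\mu)$ for $a \\in \\{0, 1, 2\\}$, and the deviations shrink as $p$ grows.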
We illustrate this in \\cref{fig:empirical-concentration}, where for a diagonal $\\mA$, the off-diagonals of the sketched pseudoinverse quickly converge to zero as $p$ increases, while the diagonals converge to the diagonals of the regularized inverse of $\\mA$.\n\nIn the case where $\\lambda = 0$ and $q < p$, we see that \\cref{eq:sketched-modified-lambda} reduces to $q = \\tr [\\mA \\inv{\\mA + \\mu \\mI_p}]$, which is exactly the same as the result obtained by \\cite{pmlr-v108-mutny20a} for the expected pseudoinverse under the determinantal point process (DPP), with $q$ playing the role of the expected subsample size of the DPP and $\\mu$ being the DPP scale factor. To our knowledge, the regularized pseudoinverse using the DPP has not been further investigated, so we cannot compare our equivalence beyond the $\\lambda = 0$ case. \nIn addition, it also coincides precisely with the optimal choice of $\\alpha = \\tfrac{q}{p}$ for subsampled ordinary least squares ensembles in \\cite{lejeune_javadi_baraniuk_2020} when $\\mu$ is the optimal ridge regularization strength. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=5in]{figures\/empirical_concentration.pdf}\n \\caption{Empirical density histograms over 20 trials demonstrating the concentration of the elements of $\\mS \\inv{\\mS^\\transp \\mA \\mS + \\lambda \\mI} \\mS^\\transp$ for real Gaussian $\\mS$ and diagonal $\\mA$ taking values $\\set{0, 1, 2}$ with equal frequency along the diagonal. We choose $\\lambda = 1$ and $q = \\floor{\\alpha p}$ for $\\alpha = 0.8$ over $p \\in \\set{60, 300, 1500}$. As expected by \\cref{thm:sketched-pseudoinverse}, the individual elements of the sketched pseudoinverse converge to those of $\\inv{\\mA + \\mu \\mI}$, where for this problem $\\mu \\approx 1.63$. 
Therefore, the diagonals concentrate with equal mass around $\\set{1 \/ (a + \\mu) : a \\in \\set{0, 1, 2}}$ (black, dotted), and the off-diagonals concentrate around 0.}\n \\label{fig:empirical-concentration}\n\\end{figure}\n\n\nBelow we provide several remarks on the assumptions and implications\nof \\cref{thm:sketched-pseudoinverse}. It will be useful to interpret the equations in terms of the sketching aspect ratio $\\alpha \\defeq \\tfrac{q}{p}$.\n\n\n\n\\begin{remark}\n [Normalization choice for the sketching matrix]\n \\label{rem:norm-preserving-sketch}\n We remark that the normalization factor $\\sqrt{q}$ in $\\sqrt{q} {\\mathbf{S}}$\n of the sketching matrix is such that\n the squared norm of the rows of ${\\mathbf{S}}$ is $1$ in expectation.\n This is done so that \n $\\mathbb{E}[ \\| {\\mathbf{S}}^\\ctransp {\\mathbf{x}} \\|_2^2] = \\| {\\mathbf{x}} \\|_2^2$\n as $\\mathbb{E}[{\\mathbf{S}} {\\mathbf{S}}^\\ctransp] = {\\mathbf{I}}_p$.\n One can alternatively consider sketching matrices with normalization \n $\\sqrt{p} {\\mathbf{S}}$ such that the columns have squared norm $1$ in expectation.\n It is easy to write an equivalent version of \\Cref{thm:sketched-pseudoinverse}\n with such a normalization. 
\n We choose to focus on the former scaling\n because it is more common in practice.\n\\end{remark}\n\n\\begin{remark}\n [On assumptions]\n The assumptions imposed in \\cref{thm:sketched-pseudoinverse}\n are quite mild.\n In particular, the sequences of matrices ${\\mathbf{A}}$ \n being sketched can be random, so long as they are independent of ${\\mathbf{S}}$.\n Furthermore, the spectrum of the sequences of matrices ${\\mathbf{A}}$\n need not converge to a fixed spectrum.\n The aspect ratio $\\alpha$ of the sketching matrices ${\\mathbf{S}}$\n also need not converge to a fixed number.\n The reason this is possible is that\n we are not expressing the sketched resolvent\n in terms of the limiting spectrum of ${\\mathbf{S}}$ and ${\\mathbf{A}}$,\n but rather relating it through ${\\mathbf{A}}$ and a parameter $\\mu$\n that depends on $\\alpha$ and ${\\mathbf{A}}$ \n (and the original regularization level $\\lambda$),\n which allows us to keep our assumptions weak. \n\\end{remark}\n\n\n\\begin{remark}\n [Case of $\\lambda = 0$]\n While the form in \\cref{eq:thm:sketched-pseudoinverse} is the most general, it does not hold for $\\lambda = 0$ if the sketch size is larger than the rank of $\\mA$, since the inverse is unbounded. However, in machine learning settings such as ridgeless regression, we only need to evaluate the regularized pseudoinverse $\\mS (\\mS^\\ctransp \\tfrac{1}{n} \\mX^\\ctransp \\mX \\mS + \\lambda \\mI_q)^{-1} \\mS^\\ctransp \\tfrac{1}{\\sqrt{n}} \\mX$. 
Thus, we can apply the form in \\cref{eq:thm:sketched-pseudoinverse-A-half} with $\\mA^{1\/2} = (\\tfrac{1}{n} \\mX^\\ctransp \\mX)^{1\/2}$, which is sufficient for any downstream analysis.\n\\end{remark}\n\n\\begin{remark}\n [Alternate form of equivalence representation]\n Expressed in terms of $\\widetilde{v}(\\lambda)$,\n the equivalence \\cref{eq:thm:sketched-pseudoinverse} becomes \n \\begin{equation}\n {\\mathbf{S}} \\big( {\\mathbf{S}}^\\ctransp {\\mathbf{A}} {\\mathbf{S}} + \\lambda {\\mathbf{I}}_q \\big)^{-1} {\\mathbf{S}}^\\ctransp\n \\simeq\n \\widetilde{v}(\\lambda) \\inv{\\widetilde{v}(\\lambda) {\\mathbf{A}} + {\\mathbf{I}}_p},\n \\end{equation}\n and the fixed-point equation \\cref{eq:sketched-modified-lambda} becomes\n \\begin{equation}\n \\lambda \n = \\frac{1}{\\widetilde{v}(\\lambda)}\n - \\tfrac{1}{q} \\tr \\bracket{{\\mathbf{A}} \\inv{\\widetilde{v}(\\lambda) {\\mathbf{A}} + {\\mathbf{I}}_p} }.\n \\end{equation}\n\\end{remark}\n\n\n\\subsection{Second-order equivalence}\n\\label{sec:second-order-sketch-equi}\n\nAlthough the equivalence in \\cref{thm:sketched-pseudoinverse} holds for first-order trace functionals, this equivalence does not hold for higher-order functionals. To understand intuitively why, it is helpful to view the asymptotic equivalence as analogous to an equality of expectations for classical random variables. That is, we may have two random variables $X, Y$ with $\\expect{X} = \\expect{Y}$, but this does not allow us to draw any conclusions about the relationship between $\\expect{X^k}$ and $\\expect{Y^k}$ for $k > 1$. In the same way, our first-order asymptotic equivalence does not directly yield higher-order equivalences.\n\nFortunately, however, because of the resolvent structure of the regularized pseudoinverse, we can apply the derivative rule of the calculus of asymptotic equivalences to obtain a second-order equivalence from the first-order equivalence. 
Such a derivative trick has been employed in several prior works; see, e.g., \\cite{dobriban_wager_2018, karoui_kolsters_2011, ledoit_peche_2011, liu_dobriban_2019}. This approach could in principle be repeated for higher-order functionals.\n\n\n\\begin{theorem}\n [Second-order isotropic sketching equivalence]\n \\label{lem:second-order-sketch}\n Consider the setting of \\cref{thm:sketched-pseudoinverse}. If $\\mPsi \\in \\complexset^{p \\times p}$ is a deterministic or random positive semidefinite matrix independent of $\\mS$ with $\\norm[\\rm op]{\\mPsi}$ uniformly bounded in $p$, and either $\\lambda \\neq 0$ or $\\limsup \\tfrac{q}{p} < \\liminf r(\\mA)$, then\n \\begin{align}\n \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\mPsi \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp\n \\simeq\n \\inv{\\mA + \\mu \\mI_p} (\\mPsi + \\mu' \\mI_p) \\inv{\\mA + \\mu \\mI_p},\n \\end{align}\n where $\\mu$ is as in \\cref{thm:sketched-pseudoinverse}, and\n \\begin{align}\n \\label{eq:mu-prime}\n \\mu' = \\frac{\\frac{1}{q} \\tr \\bracket{\\mu^3 \\inv{\\mA + \\mu \\mI_p} \\mPsi \\inv{\\mA + \\mu \\mI_p} }}{\\lambda + \\frac{1}{q} \\tr \\bracket{\\mu^2 \\mA \\paren{\\mA + \\mu \\mI_p}^{-2}} } \\geq 0.\n \\end{align}\n\\end{theorem}\n\n\\begin{proof}\nBy assumption, there exists $M < \\infty$ such that $M > \\limsup \\bignorm[\\rm op]{\\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q}}$ and $M > \\limsup \\bignorm[\\rm op]{\\inv{\\mA + \\mu \\mI_p}}$ almost surely (see proof details for \\cref{thm:sketched-pseudoinverse} in the supplementary material). Define $\\mB_z \\defeq \\mA + z \\mPsi$. 
Then for all $z \\in D$, where\n\\begin{align}\n D = \\bigset{z \\in \\complexset \\colon \\limsup \\bigparen{|z| M \\norm[\\rm op]{\\mPsi} \\max \\bigset{\\norm[\\rm op]{\\mS}^2, 1}} < \\tfrac{1}{2}},\n\\end{align} we have that $\\max \\bigset{\\limsup \\bignorm[\\rm op]{\\biginv{\\mS^\\ctransp \\mB_z \\mS + \\lambda \\mI_q}}, \\limsup \\bignorm[\\rm op]{\\inv{\\mB_z + \\mu \\mI_p}}}\n\\leq 2 M$. Therefore, we can apply the differentiation rule of asymptotic equivalences for all $z \\in D$:\n\\begin{subequations}\n\\begin{align}\n -\\mS \\biginv{\\mS^\\ctransp \\mB_z \\mS &+ \\lambda \\mI_q} \\mS^\\ctransp \\mPsi \\mS \\biginv{\\mS^\\ctransp \\mB_z \\mS + \\lambda \\mI_q} \\mS^\\ctransp\n = \\tfrac{\\partial}{\\partial z} \\mS \\biginv{\\mS^\\ctransp \\mB_z \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\\\\n &\\simeq \\tfrac{\\partial}{\\partial z} \\biginv{\\mB_z + \\mu(z) \\mI_p} \\\\\n &= - \\biginv{\\mB_z + \\mu(z) \\mI_p} \\paren{\\mPsi + \\tfrac{\\partial}{\\partial z} \\mu(z) \\mI_p} \\biginv{\\mB_z + \\mu(z) \\mI_p}.\n\\end{align}\n\\end{subequations}\nWe let $\\mu'(z) = \\tfrac{\\partial}{\\partial z} \\mu(z)$, and then we can divide \\cref{eq:sketched-modified-lambda} by $\\mu(z)$ and differentiate to obtain\n\\begin{align}\n \\frac{\\lambda \\mu'(z)}{\\mu(z)^2} = \\tfrac{1}{q} \\tr \\bracket{\\mPsi \\inv{\\mB_z + \\mu(z) \\mI_p} - \\mB_z \\inv{\\mB_z + \\mu(z) \\mI_p} \\paren{\\mPsi + \\mu'(z) \\mI_p } \\inv{\\mB_z + \\mu(z) \\mI_p} }.\n\\end{align}\nSolving for $\\mu'(0)$ gives the expression in $\\cref{eq:mu-prime}$. 
For the non-negativity of $\\mu'$, see \\cref{rem:alt-mu-prime} and its proof.\n\\end{proof}\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=5in]{figures\/empirical_concentration_second.pdf}\n \\caption{\n Empirical density histograms over 20 trials demonstrating the concentration of diagonal elements of $\\mS \\inv{\\mS^\\transp \\mA \\mS + \\lambda \\mI} \\mS^\\transp \\mPsi \\mS \\inv{\\mS^\\transp \\mA \\mS + \\lambda \\mI} \\mS^\\transp$ for $(\\mS, \\mA, \\lambda)$ as in \\cref{fig:empirical-concentration} and $\\mPsi \\in \\set{\\mI_p, \\mA}$. As expected by \\cref{lem:second-order-sketch}, the individual elements of the sketched pseudoinverse converge to those of $\\inv{\\mA + \\mu \\mI}(\\mPsi + \\mu' \\mI) \\inv{\\mA + \\mu \\mI}$ (black, dotted), where $\\mu' \\approx 0.813$ and $0.403$ for $\\mPsi = \\mI_p$ and $\\mA$, respectively.}\n \\label{fig:empirical-concentration-second}\n\\end{figure}\n\n\nThat is, the second-order equivalence is the same as plugging in the first-order equivalence and then adding a non-negative inflation $\\mu' \\inv[2]{\\mA + \\mu \\mI}$. The inflation factor $\\mu'$ depends linearly on the matrix $\\mPsi$, but the inflation is always isotropic, rather than in the direction of $\\mPsi$. It is non-negative in the same way that the variance of an estimator is also non-negative. Examples of quadratic forms where this second-order equivalence can be used include estimation error ($\\mPsi = \\mI$) and prediction error ($\\mPsi = \\mSigma$, the population covariance) in ridge regression problems. We give a demonstration of the concentration in \\cref{fig:empirical-concentration-second}.\nWhile typically $\\mu' > 0$, it can go to 0 in the special case of $\\mu = 0$ and $\\mPsi$ sharing a subspace with $\\mA$, as we discuss in \\cref{rem:mu-prime-to-0}. 
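The expression \\cref{eq:mu-prime} for $\\mu'$ can be checked numerically. The short script below is an illustrative sketch only (the diagonal rank-deficient $\\mA$, the choices $p = 1000$, $q = 300$, $\\lambda = 0.5$, $\\mPsi = \\mI_p$, and the bisection solver are our own assumptions for the example): it solves the fixed-point equation for $\\mu$ in the form $\\lambda = \\mu (1 - \\tfrac{1}{q} \\tr[\\mA \\inv{\\mA + \\mu \\mI_p}])$, evaluates $\\mu'$ via \\cref{eq:mu-prime}, and cross-checks it against the derivative form $\\mu' = \\tfrac{1}{q} \\tr[\\mu^2 \\inv{\\mA + \\mu \\mI_p} \\mPsi \\inv{\\mA + \\mu \\mI_p}] \\tfrac{\\partial \\mu}{\\partial \\lambda}$ of \\cref{rem:alt-mu-prime}, with $\\tfrac{\\partial \\mu}{\\partial \\lambda}$ estimated by finite differences.

```python
import numpy as np

def mu_from_lambda(lam, evals, q, lo=1e-9, hi=1e6):
    """Solve lam = mu * (1 - (1/q) tr[A (A + mu I)^{-1}]) for mu by bisection.

    `evals` holds the eigenvalues of A; for lam >= 0 the map below changes
    sign exactly once on (lo, hi), so bisection finds the unique root.
    """
    f = lambda mu: mu * (1.0 - np.sum(evals / (evals + mu)) / q) - lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def mu_prime(lam, mu, evals, psi_evals, q):
    """Inflation mu' from the theorem, for A and Psi diagonal in the same basis."""
    num = np.sum(mu**3 * psi_evals / (evals + mu) ** 2) / q
    den = lam + np.sum(mu**2 * evals / (evals + mu) ** 2) / q
    return num / den

p, q = 1000, 300                                   # alpha = q/p = 0.3 < r(A) = 0.5
evals = np.r_[np.ones(p // 2), np.zeros(p // 2)]   # rank-deficient isotropic A
psi_evals = np.ones(p)                             # Psi = I_p (estimation error)

lam = 0.5
mu = mu_from_lambda(lam, evals, q)
mp = mu_prime(lam, mu, evals, psi_evals, q)
assert mp >= 0                                     # mu' is non-negative

# Cross-check: mu' = (1/q) tr[mu^2 (A+mu I)^{-1} Psi (A+mu I)^{-1}] * dmu/dlam
h = 1e-6
dmu = (mu_from_lambda(lam + h, evals, q) - mu_from_lambda(lam - h, evals, q)) / (2 * h)
alt = np.sum(mu**2 * psi_evals / (evals + mu) ** 2) / q * dmu
assert abs(mp - alt) < 1e-4
```

With these choices the fixed point is $\\mu = 1.5$ and both routes give $\\mu' = 3.4\/1.1 \\approx 3.09$; the same check can be repeated for $\\mPsi = \\mA$ or any other admissible $\\mPsi$.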
\n\n\\begin{remark}\n [The case of $\\lambda = 0$]\n Similar to the variant form in \\cref{eq:thm:sketched-pseudoinverse-A-half} of \\cref{thm:sketched-pseudoinverse}, if we consider the slightly different form\n \\begin{align}\n \\mA^{1\/2}\n \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\mPsi &\\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp\n \\mA^{1\/2} \\\\\n &\\simeq\n \\mA^{1\/2}\n \\inv{\\mA + \\mu \\mI_p} (\\mPsi + \\mu' \\mI_p) \\inv{\\mA + \\mu \\mI_p}\n \\mA^{1\/2}\n \\end{align}\n for the second-order resolvent,\n we do not need the $\\lambda \\neq 0$ or $\\limsup \\tfrac{q}{p} < \\liminf r(\\mA)$ restriction as stated in the theorem. Because the proof of this case is entirely analogous to the results in \\cref{thm:sketched-pseudoinverse,lem:second-order-sketch}, we omit the proof.\n\\end{remark}\n\n\n\\section{Preliminaries}\n\\label{sec:preliminaries}\n\n\n\n\nWe will use the language of asymptotic equivalence of sequences of random matrices\nto state our main results.\nIn this section, we define the notion of asymptotic equivalence,\nreview some of the basic properties that such equivalence satisfies, \nand present an asymptotic equivalence for the ridge resolvent.\nWe then extend that result to handle real-valued resolvents, which\nwill form the building block for our subsequent results.\n\n\nTo begin, consider two sequences ${\\mathbf{A}}_n$ and ${\\mathbf{B}}_n$ of $p(n) \\times q(n)$ matrices, where $p$ and $q$ are increasing in $n$.\nWe will say that ${\\mathbf{A}}_n$ and ${\\mathbf{B}}_n$ are asymptotically equivalent\nif for any sequence of deterministic \nmatrices ${\\bm{\\Theta}}_n$ with trace norm uniformly bounded in $n$,\nwe have $\\tr[{\\bm{\\Theta}}_n ({\\mathbf{A}}_n - {\\mathbf{B}}_n)] \\xrightarrow{\\text{a.s.}} 0$\nas $n \\to \\infty$.\nWe write ${\\mathbf{A}}_n \\simeq {\\mathbf{B}}_n$ to denote this asymptotic equivalence.\nThe notion of \\emph{deterministic} equivalence,\nwhere 
the right-hand sequence is a sequence of deterministic matrices, \nhas been typically used in random matrix theory to obtain limiting behaviour\nof functionals of random matrices;\nfor example,\nsee \n\\cite{couillet_debbah_silverstein_2011,hachem_loubaton_najim_2006,serdobolskii_2000},\namong others.\nMore recently, the notion of deterministic equivalence \nhas been popularized and developed further \nin \\cite{dobriban_wonder_2020,dobriban_sheng_2021}\\footnote{Note that \n\\cite{dobriban_wonder_2020,dobriban_sheng_2021}\nuse the notation ${\\mathbf{A}}_n \\asymp {\\mathbf{B}}_n$\nto denote deterministic equivalence of sequence ${\\mathbf{A}}_n$ to ${\\mathbf{B}}_n$.\nWe instead use the notation ${\\mathbf{A}}_n \\simeq {\\mathbf{B}}_n$\nto emphasize that this equivalence is asymptotically exact, rather than up to constants.}.\nWe will use a slightly more general notion of asymptotic equivalence\nin this paper, where both sequences of matrices may be random.\n\nThe notion of asymptotic equivalence enjoys some properties\nthat we list next.\nThese are stated in the context of deterministic equivalence\nin \\cite{dobriban_sheng_2021,dobriban_wonder_2020}, but hold more generally for\nasymptotic equivalence.\nFor the statements to follow,\nlet ${\\mathbf{A}}_n$, ${\\mathbf{B}}_n$, ${\\mathbf{C}}_n$, and ${\\mathbf{D}}_n$ be sequences of random or deterministic matrices\n(of appropriate dimensions).\nThen the following properties hold:\n\\begin{enumerate}\n \\item \\textbf{Equivalence.}\n The relation $\\simeq$ is an equivalence relation.\n \\item \\textbf{Sum.}\n If ${\\mathbf{A}}_n \\simeq {\\mathbf{B}}_n$ and ${\\mathbf{C}}_n \\simeq {\\mathbf{D}}_n$,\n then ${\\mathbf{A}}_n + {\\mathbf{C}}_n \\simeq {\\mathbf{B}}_n + {\\mathbf{D}}_n$.\n \\item \\textbf{Product.}\n If ${\\mathbf{A}}_n$ and $\\mB_n$ have operator norm uniformly bounded in $n$ and $\\mA_n \\simeq \\mB_n$, and $\\mC_n$ is independent of $\\mA_n$ and $\\mB_n$ with operator norm bounded in $n$ almost 
surely, \n then ${\\mathbf{A}}_n {\\mathbf{C}}_n \\simeq {\\mathbf{B}}_n {\\mathbf{C}}_n$.\n \\item \\textbf{Trace.}\n If ${\\mathbf{A}}_n \\simeq {\\mathbf{B}}_n$ for square matrices ${\\mathbf{A}}_n$ and ${\\mathbf{B}}_n$ of dimension $p(n) \\times p(n)$, \n then $\\tfrac{1}{p(n)} \\tr[{\\mathbf{A}}_n] - \\tfrac{1}{p(n)} \\tr[{\\mathbf{B}}_n] \\xrightarrow{\\text{a.s.}} 0$.\n \\item \\textbf{Elements.} If $\\mA_n \\simeq \\mB_n$ for $\\mA_n, \\mB_n$ of dimension $p(n) \\times q(n)$ and $i(n) \\in \\set{1, \\ldots, p(n)}$ and $j(n) \\in \\set{1, \\ldots, q(n)}$, then $[\\mA_n]_{i(n), j(n)} - [\\mB_n]_{i(n), j(n)} \\xrightarrow{\\text{a.s.}} 0$.\n \\item \\textbf{Differentiation.}\n Suppose $f(z, {\\mathbf{A}}_n) \\simeq g(z, {\\mathbf{B}}_n)$\n where the entries of $f$ and $g$ are analytic\n functions in $z \\in D$ and $D$ is an open connected subset of $\\mathbb{C}$.\n Furthermore, suppose for any sequence ${\\bm{\\Theta}}_n$ of deterministic \n matrices with trace norm uniformly bounded in $n$,\n we have that $|\\tr[{\\bm{\\Theta}}_n (f(z, {\\mathbf{A}}_n) - g(z, {\\mathbf{B}}_n))]| \\le M$\n for every $n$ and $z \\in D$ for some constant $M < \\infty$.\n Then we have that $f'(z, {\\mathbf{A}}_n) \\simeq g'(z, {\\mathbf{B}}_n)$\n for every $z \\in D$,\n where the derivatives are taken entry-wise with respect to $z$.\n\\end{enumerate}\n\n\n\n\nThe almost sure convergence in the statements above\nis with respect to the entire randomness in the random variables involved.\nOne can also consider\nthe notion of conditional asymptotic equivalence\nwherein we condition on a sequence of random matrices.\nMore precisely,\nsuppose ${\\mathbf{A}}_n$, ${\\mathbf{B}}_n$ are sequences of random matrices\nthat may depend on another sequence of random matrices ${\\mathbf{Z}}_n$.\nWe say that ${\\mathbf{A}}_n$ and ${\\mathbf{B}}_n$ are asymptotically equivalent\nconditioned on ${\\mathbf{Z}}_n$,\nif for any sequence of deterministic matrices\n${\\bm{\\Theta}}_n$\nwith trace norm 
uniformly bounded in $n$,\nwe have $\\lim_{n \\to \\infty} \\tr[{\\bm{\\Theta}}_n ({\\mathbf{A}}_n - {\\mathbf{B}}_n)] = 0$ almost surely conditioned on ${\\mathbf{Z}}_n$.\nSimilar properties to those listed above for unconditional asymptotic equivalence\nalso hold for conditional equivalence by considering all the statements\nconditioned on the sequence ${\\mathbf{Z}}_n$.\nIn particular,\nfor the product rule,\nwe require that the sequence ${\\mathbf{C}}_n$ be \\emph{conditionally}\nindependent of ${\\mathbf{A}}_n$ and ${\\mathbf{B}}_n$ given ${\\mathbf{Z}}_n$.\nFinally,\nfor our asymptotic statements,\nwe will work with sequences of matrices,\nindexed by either $n$ or $p$.\nHowever, for notational brevity,\nwe will drop the index from now on whenever it is clear from the context.\n\n\n\n\nEquipped with the notion of asymptotic equivalence,\nbelow we state a result on the asymptotic deterministic\nequivalence for ridge resolvents,\nadapted from Theorem 1 of \\cite{rubio_mestre_2011} and Theorem 3.1 of \\cite{dobriban_sheng_2021},\nthat will form a base for our results.\n\n\\begin{lemma}\n [Basic deterministic equivalent for ridge resolvent, complex-valued regularization]\n \\label{lem:basic-ridge-asympequi}\n Let $\\mZ \\in \\complexset^{n \\times p}$ be a random matrix consisting of i.i.d.\\ random variables that have mean 0, variance 1, and finite \n absolute moment of order\n $8 + \\delta$ for some $\\delta > 0$. Let $\\mSigma \\in \\complexset^{p \\times p}$ be a positive semidefinite matrix with operator norm uniformly bounded in $p$, and let $\\mX = \\mZ \\mSigma^{1\/2}$. 
\n Then, for $z \\in \\complexset^+$,\n as $n, p \\to \\infty$ such that\n $0 < \\liminf \\tfrac{p}{n} \\le \\limsup \\tfrac{p}{n} < \\infty$,\n we have\n \\begin{equation}\n \\label{eq:basic-ridge-asympequi-in-c}\n \\big( \\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}} - z {\\mathbf{I}}_p \\big)^{-1}\n \\simeq\n \\inv{c(z) {\\bm{\\Sigma}} -z {\\mathbf{I}}_p},\n \\end{equation}\n where $c(z)$ is the unique solution in $\\complexset^-$ to the fixed point equation\n \\begin{equation}\n \\label{eq:basic-ridge-fp-in-c}\n \\frac{1}{c(z)} - 1\n = \\tfrac{1}{n} \\tr \\bracket{{\\bm{\\Sigma}} \\inv{c(z) {\\bm{\\Sigma}} - z {\\mathbf{I}}_p}}.\n \\end{equation}\n Furthermore, \n $\\tfrac{1}{p} \\tr\\bracket{{\\bm{\\Sigma}} (c(z) {\\bm{\\Sigma}} - z {\\mathbf{I}}_p)^{-1}}$\n is a Stieltjes transform of a certain positive measure on $\\mathbb{R}_{\\ge 0}$\n with total mass $\\tfrac{1}{p} \\tr[{\\bm{\\Sigma}}]$.\n\\end{lemma} \nStrictly speaking,\nthe results in \\cite{rubio_mestre_2011} and \\cite{dobriban_sheng_2021}\nrequire that the sequence ${\\bm{\\Sigma}}$ be deterministic.\nHowever, one can take ${\\bm{\\Sigma}}$ to be a random sequence of matrices\nthat are independent of ${\\mathbf{Z}}$;\nsee, for example, \\cite{ledoit_peche_2011}.\nIn this case,\nthe asymptotic equivalence is treated conditionally on ${\\bm{\\Sigma}}$.\n\n\n\\section{Real-valued equivalence}\n\\label{sec:real-valued-equivalence}\n\nFor real-valued negative $z$, corresponding to positive ridge regularization, \nwe remark that one can use \\cref{lem:basic-ridge-asympequi}\nto derive limits of linear and \ncertain non-linear functionals \n(through the calculus rules of asymptotic equivalence)\nof the ridge resolvent \n$(\\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}} - z {\\mathbf{I}}_p)^{-1}$\nby considering $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$\nand letting $\\Im(z) \\to 0$.\nThis follows because\na short calculation (see proof of \\cref{cor:basic-ridge-asympequi-in-r}) \nshows that 
$\\Im(c(z)) \\to 0$\nas $\\Im(z) \\to 0$ for $z \\in \\mathbb{C}^{+}$ with $\\Re(z) < 0$.\nThus one can recover a real limit\nfrom the right-hand side of \\cref{eq:basic-ridge-asympequi-in-c}\nthrough a limiting argument.\nMoreover,\nit is easy to see that\nthe fixed-point equation \\cref{eq:basic-ridge-fp-in-c}\nhas a unique (real) solution $c(z) > 0$\nfor $z \\in \\mathbb{R}_{< 0}$. \n\nHowever, it has recently been pointed out that\nunder certain special data geometries,\nnegative regularization is often beneficial, both in real-data experiments \\cite{kobak_lomond_sanchez_2020} and in theoretical formulations where it can achieve optimal squared prediction risk \\cite{wu_xu_2020}.\nOne can still recover such a case\nby considering $z \\in \\mathbb{C}^{+}$\nwith $\\Re(z) > 0$ over a valid range, and taking the limit as $\\Im(z) \\to 0$.\nHowever, solving the fixed-point equation \\cref{eq:basic-ridge-fp-in-c}\nover the reals directly in this case, which is the most efficient way to compute the solution numerically, poses certain subtleties,\nas we can no longer guarantee a unique real solution for $c(z)$. \n\nOur next theorem shows how to handle this case.\nWe will make use of this for our results on sketching\nin \\Cref{sec:main_results}, but we believe the result to be of independent interest and worth stating on its own. In addition to enabling the computation of the asymptotic equivalence for non-negative real-valued $z$, it also provides the asymptotic value of $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX^\\ctransp \\mX)$ (given by $z_0$ in the theorem statement) for arbitrary $\\mSigma$, which to our knowledge is the first general characterization of the smallest nonzero eigenvalue of such random matrices outside of special cases such as $\\mSigma = \\mI_p$ and some lower bounds (see, e.g., \\cite{bai_silverstein_1998}). 
We demonstrate the improvement of $z_0$ over these na\\\"ive lower bounds in \\cref{fig:lambda-minnz-bound}.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=5.5in]{figures\/lambda_minnz_bound.pdf}\n \\caption{Plots showing how $z_0$ (solid) from \\cref{eq:basic-ridge-asympequi-in-r:bounds} matches the empirical minimum nonzero eigenvalue (markers) of $\\tfrac{1}{n} \\mX^\\transp \\mX$ when $\\mSigma = \\tfrac{1}{m} \\mY^\\transp \\mY$ for $\\mY \\in \\reals^{m \\times p}$ with i.i.d.\\ $\\normal(0, 1)$ elements, such that the limiting spectrum of $\\mSigma$ follows the $\\mathrm{Marchenko}\\text{--}\\mathrm{Pastur}(\\tfrac{p}{m})$ distribution for $\\tfrac{p}{m} \\in \\set{0.2, 0.9, 5}$. In contrast, the commonly used na\\\"ive bound (dashed)\n $\\liminf \\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX^\\transp \\mX) \\geq \n \\bigparen{1 - \\sqrt{\\tfrac{p}{m}}}^2 \n \\bigparen{1 - \\sqrt{\\tfrac{p}{n}}}^2 \n \\ind\\set{p < \\max\\set{m, n}}$,\n obtained by multiplying the minimum nonzero eigenvalues of $\\tfrac{1}{n} \\mZ^\\transp \\mZ$ and $\\mSigma$ when at most one of them is singular,\n is quite loose outside of the $m \\gg p$ and $n \\gg p$ cases and fails to capture the correct behaviour at all when both are singular ($p > \\max\\set{m, n}$). Empirical values are computed for $p = 500$ from a single trial. }\n \\label{fig:lambda-minnz-bound}\n\\end{figure}\n\n\\begin{theorem}\n[Basic deterministic equivalent for ridge resolvent, real-valued regularization]\n\\label{cor:basic-ridge-asympequi-in-r}\nAssume the setting of \\cref{lem:basic-ridge-asympequi}. 
\nLet $\\zeta_0, z_0 \\in \\reals$ be the unique solutions, satisfying $\\zeta_0 < \\lambda_{\\min}^{+}({\\bm{\\Sigma}})$,\nto the system of equations\n\\begin{align}\n \\label{eq:basic-ridge-asympequi-in-r:bounds}\n 1 = \\tfrac{1}{n} \\tr \\bracket{{\\bm{\\Sigma}}^2 \\paren{{\\bm{\\Sigma}} - \\zeta_0 {\\mathbf{I}}_p}^{-2}}, \\quad \n z_0 = \\zeta_0 \\paren{1 - \\tfrac{1}{n} \\tr \\bracket{{\\bm{\\Sigma}} \\inv{{\\bm{\\Sigma}} - \\zeta_0 {\\mathbf{I}}_p}}}.\n\\end{align}\nThen, for each $z \\in \\reals$ satisfying $z < \\liminf z_0$, as $n, p \\to \\infty$ such that\n$0 < \\liminf \\tfrac{p}{n} \\le \\limsup \\tfrac{p}{n} < \\infty$,\nwe have\n\\begin{align}\n \\label{eq:basic-ridge-asympequi-in-r}\n z \\big( \\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}} - z {\\mathbf{I}}_p \\big)^{-1}\n \\simeq\n \\zeta \\inv{{\\bm{\\Sigma}} - \\zeta {\\mathbf{I}}_p},\n\\end{align}\nwhere $\\zeta \\in \\reals$ is the unique solution in $(-\\infty, \\zeta_0)$ to\nthe fixed-point equation\n\\begin{equation+}\n \\label{eq:basic-ridge-fp-in-r}\n z = \\zeta \\paren{1 - \\tfrac{1}{n} \\tr \\bracket{{\\bm{\\Sigma}} \\inv{{\\bm{\\Sigma}} - \\zeta {\\mathbf{I}}_p}}}.\n\\end{equation+}\nFurthermore, as $n, p \\to \\infty$, $|\\zeta + \\tfrac{1}{v(z)}| \\xrightarrow{\\text{a.s.}} 0$, where \n$v(z)$ is the companion Stieltjes transform of the spectrum of $\\frac{1}{n} \\mX^\\ctransp \\mX$\ngiven by\n\\[\n v(z) = \\tfrac{1}{n} \\tr \\Big[\\big(\\tfrac{1}{n} \\mX \\mX^\\ctransp - z \\mI_n \\big)^{-1}\\Big],\n\\]\nand $|z_0 - \\lambda_{\\min}^{+}(\\frac{1}{n} \\mX^\\ctransp \\mX)| \\xrightarrow{\\text{a.s.}} 0$.\n\\end{theorem}\n\\begin{proof}[Proof sketch]\nTo prove this corollary, we define $\\zeta \\defeq \\tfrac{z}{c(z)}$ to obtain \\cref{eq:basic-ridge-asympequi-in-r} from \\cref{eq:basic-ridge-asympequi-in-c} for $z \\in \\complexset^+$, and also observe that $-\\tfrac{1}{\\zeta}$ is the limiting companion Stieltjes transform $v(z)$ of $\\tfrac{1}{n} \\mX \\mX^\\ctransp$ at $z$. 
This implies that $\\zeta \\in \\complexset^+$ and that the mapping $z \\mapsto \\zeta$ is a holomorphic function on its domain, which includes all real $z < \\liminf \\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$. We then identify the analytic continuation of the mapping $z \\mapsto \\zeta$ to the reals, which consists of careful bookkeeping to determine $z_0$, the least positive value of $z$ for which $\\zeta$ does not exist, which must be asymptotically equal to $\\lambda_{\\min}^{+}(\\tfrac{1}{n} \\mX \\mX^\\ctransp)$. \nThe proof details can be found in \\Cref{sec:proof:cor:basic-ridge-asympequi-in-r} \nof the supplementary material.\n\\end{proof}\n\n\\begin{remark}\n [The case of $z = 0$]\n The form of the equivalence \\cref{eq:basic-ridge-asympequi-in-c}\n is slightly different as compared with \\cref{eq:basic-ridge-asympequi-in-r}\n in that the resolvent $(\\tfrac{1}{n} {\\mathbf{X}}^\\ctransp {\\mathbf{X}} - z {\\mathbf{I}}_p)^{-1}$ has a normalizing multiplier of $z$ in the latter case.\n This enables continuity of the left-hand side at $z = 0$, in contrast to\n specializing the equivalence \\cref{eq:basic-ridge-asympequi-in-c}\n to real $z$, where both the left- and right-hand sides may diverge as $z \\nearrow 0$.\n\\end{remark}\n\n\nOur main result in the next section for sketching follows directly from this theorem and shares a very similar form. 
For this reason, we defer discussion about the interpretation of the solutions to the above equations for our reformulation under the sketching setting; however, analogous interpretations will apply to the above theorem.\n\\section{Properties and examples}\n\\label{sec:properties}\n\nBelow we provide various analytical properties\nof the quantities that appear in \\Cref{thm:sketched-pseudoinverse,lem:second-order-sketch}.\nSee \\Cref{sec:proofs-properties} in the supplementary material for their proofs.\n\n\\subsection{Lower limits}\n\nThe quantities $\\lambda_0$ and $\\mu_0$\nprovide the lower limits of regularization\nin \\Cref{thm:sketched-pseudoinverse}.\nThe following two remarks describe their behaviour\nin terms of $\\alpha$.\n\n\n\\begin{remark}\n [Dependence of $\\mu_0$ and $\\lambda_0$ on $\\alpha$]\n \\label{rem:mu0-lambda0-vs-alpha}\n Writing the first equation in \\cref{eq:mu0-lambda0-fps} as\n \\begin{equation+}\n \\label{eq:mu0-fp-in-alpha}\n \\alpha = \\tfrac{1}{p} \\tr \\bracket{\\mA^2 \\paren{\\mA + \\mu_0 \\mI_p}^{-2}},\n \\end{equation+}\n note that for fixed $\\mA$, $\\mu_0$ only depends on $\\alpha$.\n Furthermore, the equation indeed admits a unique solution for $\\mu_0$ for a given $\\alpha$.\n This can be seen by noting that the function\n $f: \\mu_0 \\mapsto \\tfrac{1}{p} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}}_p)^{-2}]$\n is monotonically decreasing in $\\mu_0$,\n and \n \\[\n \\tfrac{1}{p} \\lim_{\\mu_0 \\searrow - \\lambda_{\\min}^{+}(\\mA)} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}}_p)^{-2}] = \\infty,\n \\quad\n \\text{and}\n \\quad\n \\tfrac{1}{p}\\lim_{\\mu_0 \\to \\infty} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}}_p)^{-2}] = 0.\n \\]\n In addition,\n because $\\mu_0(\\alpha) = f^{-1}(\\alpha)$,\n $\\mu_0$\n is monotonically decreasing in $\\alpha$,\n and $\\lim_{\\alpha \\searrow 0} \\mu_0(\\alpha) = \\infty$\n and $\\lim_{\\alpha \\to \\infty} \\mu_0(\\alpha) = - 
\\lambda_{\\min}^{+}({\\mathbf{A}})$.\n \n Given $\\mu_0$,\n the second equation in \\cref{eq:mu0-lambda0-fps} then\n provides $\\lambda_0$ as\n \\begin{equation+}\n \\label{eq:lambda0-mu0-in-alpha}\n \\lambda_0 \n = \\mu_0 \\paren{1 - \\tfrac{1}{\\alpha} \\tfrac{1}{p} \\tr \\bracket{\\mA \\inv{\\mA + \\mu_0 {\\mathbf{I}}}}}.\n \\end{equation+}\n For $\\alpha \\in (0, r({\\mathbf{A}}))$,\n $\\lambda_0 : \\alpha \\mapsto \\lambda_0(\\alpha)$\n is monotonically increasing,\n and $\\lim_{\\alpha \\searrow 0} \\lambda_0(\\alpha) = - \\infty$\n and $\\lim_{\\alpha \\to r({\\mathbf{A}})} \\lambda_0(\\alpha) = 0$.\n When $\\alpha = r({\\mathbf{A}})$, $\\mu_0 = 0$,\n and consequently $\\lambda_0 = 0$.\n Finally, for $\\alpha \\in (r({\\mathbf{A}}), \\infty)$,\n $\\lambda_0 : \\alpha \\mapsto \\lambda_0(\\alpha)$\n is monotonically decreasing in $\\alpha$,\n and $\\lim_{\\alpha \\to \\infty} \\lambda_0(\\alpha) = -\\lambda_{\\min}^{+}({\\mathbf{A}})$.\n This follows from a short limiting calculation.\n\\end{remark}\n\n\\begin{remark}\n [Joint sign patterns of $\\mu_0$ and $\\lambda_0$]\n \\label{rem:mu0-lambda0-signs}\n Observe from \\cref{eq:mu0-lambda0-fps}\n the sign pattern summarized in \\cref{tab:mu0-lambda0-signs}.\n \\begin{table}[h!]\n \\centering\n \\caption{Sign patterns of $\\lambda_0$ and $\\mu_0$.}\n \\label{tab:mu0-lambda0-signs}\n \\begin{tabular}{c | c | c | c}\n $\\alpha$ vs. $r({\\mathbf{A}})$ \n & $\\mu_0$ \n & $\\alpha$ vs. 
$\\tfrac{1}{p} \\tr[{\\mathbf{A}} ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-1}]$\n & $\\lambda_0$ \\\\\n \\hline\n $\\alpha > r({\\mathbf{A}})$ \n & $< 0$\n & $\\alpha = \\tfrac{1}{p} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-2}]\n > \\tfrac{1}{p} \\tr[{\\mathbf{A}} ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-1}]$ \n & $< 0$ \\\\\n $\\alpha = r({\\mathbf{A}})$ \n & 0\n & $\\alpha \n = \\lim_{x \\searrow 0}\n \\tfrac{1}{p} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + x {\\mathbf{I}})^{-2}]\n = \\lim_{x \\searrow 0}\n \\tfrac{1}{p} \\tr[{\\mathbf{A}} ({\\mathbf{A}} + x {\\mathbf{I}})^{-1}]$\n & 0 \\\\\n $\\alpha < r({\\mathbf{A}})$ \n & $> 0$ \n & $\\alpha = \\tfrac{1}{p} \\tr[{\\mathbf{A}}^2 ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-2}]\n < \\tfrac{1}{p} \\tr[{\\mathbf{A}} ({\\mathbf{A}} + \\mu_0 {\\mathbf{I}})^{-1}]$ \n & $< 0$\n \\end{tabular}\n \\end{table}\n\\end{remark}\n\n\n\\subsection{First-order equivalence}\n\nIn general,\nthe exact $\\mu$ depends on $\\lambda$, $\\alpha$, and ${\\mathbf{A}}$\nvia the fixed-point equation \\cref{eq:sketched-modified-lambda}.\nHowever, we can infer several properties of the behaviour\nof $\\mu$ as a function of $\\lambda$ and $\\alpha$\nas summarized below.\n\n\n\\begin{proposition}\n [Monotonicities of $\\mu$ in $\\lambda$ and $\\alpha$]\n \\label{prop:monotonicities-lambda-alpha}\n For a fixed $\\alpha \\ge 0$, \n the map $\\lambda \\mapsto \\mu(\\lambda)$,\n where $\\mu(\\lambda)$ is as defined in \\cref{eq:sketched-modified-lambda}\n is monotonically increasing in $\\lambda$\n over $(\\lambda_0, \\infty)$,\n and $\\lim_{\\lambda \\searrow \\lambda_0} \\mu(\\lambda) = \\mu_0$, \n while $\\lim_{\\lambda \\to \\infty} \\mu(\\lambda) = \\infty$.\n For a fixed $\\lambda \\ge 0$,\n the map $\\alpha \\mapsto \\mu(\\alpha)$\n where $\\mu(\\alpha)$ is as defined in \\cref{eq:sketched-modified-lambda}\n is monotonically decreasing in $\\alpha$ over $(0, \\infty)$;\n when $\\lambda < 0$,\n the map $\\alpha \\to 
\\mu(\\alpha)$ is monotonically decreasing\n over $(0, r({\\mathbf{A}}))$ \n and monotonically increasing over $(r({\\mathbf{A}}), \\infty)$.\n Furthermore,\n for any $\\lambda \\in (\\lambda_0, \\infty)$,\n $\\lim_{\\alpha \\searrow 0} \\mu(\\alpha) = \\infty$,\n and $\\lim_{\\alpha \\to \\infty} \\mu(\\alpha) = \\lambda$.\n\\end{proposition}\n\n\\begin{remark}\n [Joint signs of $\\lambda$ and $\\mu$]\n \\label{rem:joint-signs-lambda-mu}\n When $\\lambda \\ge 0$, \n for any $\\alpha > 0$, \n we have $\\mu \\ge 0$,\n where $\\mu$ is the unique solution to \n \\cref{eq:sketched-modified-lambda} in $(\\mu_0, \\infty)$.\n When $\\lambda < 0$,\n for $\\alpha \\le r({\\mathbf{A}})$,\n we have $\\mu \\ge 0$,\n while for $\\alpha > r({\\mathbf{A}})$,\n we have \n $\\mathrm{sign}(\\mu) \n = \\mathrm{sign}(\\lambda)$.\n\\end{remark}\n\n\n\\begin{proposition}\n [Concavity, bounds, and asymptotic behaviour of $\\mu$ in $\\lambda$]\n \\label{rem:concavity-mu-in-lambda}\n The function $\\lambda \\mapsto \\mu(\\lambda)$,\n where $\\mu(\\lambda)$ is the solution to \\cref{eq:sketched-modified-lambda},\n is concave over $(\\lambda_0, \\infty)$.\n Furthermore, for any $\\alpha \\in (0, \\infty)$,\n $\\mu(\\lambda) \\le \\lambda + \\tfrac{1}{q} \\tr[\\mA]$ for all $\\lambda \\in (\\lambda_0, \\infty)$; and when $\\alpha \\le r({\\mathbf{A}})$,\n $\\mu(\\lambda) \\geq \\lambda$ for all $\\lambda \\in (\\lambda_0, \\infty)$,\n otherwise $\\mu(\\lambda) \\geq \\lambda$\n for $\\lambda \\geq 0$.\n Additionally,\n $\\lim_{\\lambda \\to \\infty} |\\mu(\\lambda) - (\\lambda + \\tfrac{1}{q} \\tr[\\mA])| = 0$. 
\n\\end{proposition}\n\n\\subsection{Second-order equivalence}\n\nWe also have a few remarks about the second-order equivalence.\n\n\\begin{remark}\n \\label{rem:alt-mu-prime}\n We have the following alternative form for $\\mu'$:\n \\begin{align}\n \\mu' = \\tfrac{1}{q} \\tr \\bracket{\\mu^2 \\inv{\\mA + \\mu \\mI_p} \\mPsi \\inv{\\mA + \\mu \\mI_p}} \\frac{\\partial \\mu}{\\partial \\lambda}.\n \\end{align}\n Note that the term $\\frac{\\partial{\\mu}}{\\partial \\lambda}$ does not depend in any way on $\\mPsi$, and that the remaining term is well-controlled for any $\\mu > \\mu_0$. Therefore, $\\mu'$ will only diverge when $\\frac{\\partial{\\mu}}{\\partial \\lambda}$ diverges, which occurs as $\\lambda \\to \\lambda_0$. This is clearly visible in \\cref{fig:mu-equiv} (top) as $\\lambda$ approaches $\\lambda_0$, where the slope of the curve tends to infinity. Additionally, because $\\mu$ is increasing in $\\lambda$, this decomposition shows that $\\mu' \\geq 0$.\n\\end{remark}\n\n\\begin{remark}[Vanishing $\\mu'$]\n\\label{rem:mu-prime-to-0}\nIf $\\mathrm{Ker}(\\mA) \\subseteq \\mathrm{Ker}(\\mPsi)$, then as $\\mu \\to 0$, $\\mu' \\to 0$. The best intuition for this is in the case $\\mPsi = \\mA$. Because we can only have $\\mu = 0$ for $\\alpha > r(\\mA)$ and $\\lambda = 0$, we have $\\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\mA \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\big|_{\\lambda = 0} = \\mS \\biginv{\\mS^\\ctransp \\mA \\mS + \\lambda \\mI_q} \\mS^\\ctransp \\big|_{\\lambda = 0}$, and the second-order equivalence reduces to the first-order equivalence with no inflation factor. This remarkable property means that i.i.d.\\ sketching leads to extremely accurate estimates with no spectral distortion, but only in low-rank settings with little regularization. 
\n\\end{remark}\n\n\n\n\\subsection{Illustrative examples}\n\nIn order to better understand \\cref{thm:sketched-pseudoinverse,lem:second-order-sketch}, \nwe consider a few examples with special choices of the matrix ${\\mathbf{A}}$. When the spectrum of $\\mA$ converges to a particular distribution of eigenvalues, $\\mu$ will converge to a value that is deterministic given $\\mA$. %\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/mu_equiv.pdf}\n \\caption{Plots of $\\mu$ as a function of $\\lambda$ and $\\alpha$ for rank-deficient isotropic (left) and Marchenko--Pastur (middle) spectra, normalized so that $\\frac{1}{p} \\tr[\\mA] = r = 1\/2$. \n The values of $\\lambda$ and $\\alpha$ in each location of the plot are indicated by the colormap (right), shared between the two views of each plot. \n As we sweep $\\alpha$, we also plot $(\\alpha, \\lambda_0, \\mu_0)$ (black, dotted). \n We also plot the lines $\\mu = 0$, $\\lambda = 0$, and $\\alpha = r$ (gray, dashed).\n The scaling of the $\\mu$ and $\\lambda$ axes is linear, and the scaling of the $\\alpha$ axis is proportional to $1\/\\alpha$. \n In this way we can clearly capture the general $\\mu \\approx \\lambda + \\tfrac{1}{p} \\tr[\\mA] \/ \\alpha$ relationship for $\\lambda > 0$, as well as the limiting behavior $\\mu \\to \\lambda$ for large $\\alpha$.\n The most significant difference between the two distributions is that for the isotropic distribution, $\\lambda_{\\min}^{+}(\\mA) = 1$, while for the Marchenko--Pastur case, $\\lambda_{\\min}^{+}(\\mA) = (\\sqrt{2} - 1)^2\/2 \\approx 0.0859$, limiting the achievable negative values of $\\mu$ when $\\lambda < 0$ and $\\alpha > r$.\n }\n \\label{fig:mu-equiv}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=5in]{figures\/mu_prime.pdf}\n \\caption{Plot of $\\mu'$ as a function of $\\mu$ and $\\alpha$ for the rank-deficient isotropic spectrum with $r = 1\/2$, for $\\mPsi \\in \\set{\\mI_p, \\mA}$. 
In both cases, as $\\mu \\to \\mu_0$ (dashed), $\\mu' \\to \\infty$. Otherwise, $\\mu'$ is not too large. For $\\mPsi = \\mI_p$, $\\mu'$ decays slowly in $\\alpha$ and $\\mu$. However, for $\\mPsi = \\mA$, there is a regime for $\\alpha > r$ around $\\mu = 0$ for which $\\mu'$ tends to zero. Thus, the unregularized pseudo-inverse preserves $\\mA$ remarkably well on its range when the sketch size is greater than the rank of the matrix, but outside of the range of $\\mA$, it has non-negligible error.}\n \\label{fig:mu-prime}\n\\end{figure}\n\n\n\\subsubsection{Isotropic rank-deficient matrix}\n\nFor the first example, let $0 < r \\le 1$ be a real number.\nWe then consider $\\mA = \\begin{bsmallmatrix}\\mI_{\\floor{r p}} & \\vzero \\\\ \\vzero & \\vzero \\end{bsmallmatrix}$ such that $r(\\mA) \\to r$ as $p \\to \\infty$. We have chosen the standard basis representation of this matrix, but the following results also hold for any $\\mA$ that is isotropic on a subspace, regardless of basis. Such an $\\mA$ arises in settings such as $\\mA = \\mX^\\transp \\mX$, where $\\mX \\in \\reals^{n \\times p}$ is a design matrix with orthonormal rows. In this case,\n\\begin{align}\n \\mu = \\frac{\\lambda + \\tfrac{r}{\\alpha} - 1 + \\sqrt{(\\lambda + \\frac{r}{\\alpha} - 1)^2 + 4 \\lambda}}{2}.\n\\end{align}\nFurthermore, we have simple forms for $\\mu_0$ and $\\lambda_0$:\n\\begin{align}\n \\mu_0 = \\sqrt{\\tfrac{r}{\\alpha}} - 1, \\quad\n \\lambda_0 = - \\paren{1 - \\sqrt{\\tfrac{r}{\\alpha}}}^2.\n\\end{align}\nThe expression for $\\lambda_0$ can also be obtained directly from the minimum nonzero eigenvalue of the Marchenko--Pastur distribution with aspect ratio $\\tfrac{\\alpha}{r}$ and variance scaling $\\tfrac{r}{\\alpha}$, which describes $\\mS^\\ctransp \\mA \\mS$. 
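These closed forms can be sanity-checked numerically. The sketch below is ours, and it rests on an assumption: that for the isotropic rank-deficient spectrum the fixed-point equation reduces to the scalar relation $\lambda = \mu \, (1 - (r/\alpha)/(1 + \mu))$, which is consistent with the quadratic formula and the expressions for $\mu_0$ and $\lambda_0$ above. All function names are illustrative.

```python
import math

def mu_closed_form(lam, r, alpha):
    # Quadratic-formula solution for mu in the isotropic rank-deficient case.
    b = lam + r / alpha - 1.0
    return (b + math.sqrt(b * b + 4.0 * lam)) / 2.0

def lam_of_mu(mu, r, alpha):
    # Assumed scalar fixed-point relation lambda(mu) for this spectrum.
    return mu * (1.0 - (r / alpha) / (1.0 + mu))

r, alpha = 0.5, 0.3
mu0 = math.sqrt(r / alpha) - 1.0            # stationary point of lambda(mu)
lam0 = -(1.0 - math.sqrt(r / alpha)) ** 2   # minimum of lambda(mu) over mu > mu0

# lambda(mu0) recovers lam0 to machine precision.
assert abs(lam_of_mu(mu0, r, alpha) - lam0) < 1e-12

# mu_closed_form inverts lam_of_mu on the branch mu > mu0.
for lam in [lam0 + 0.01, 0.0, 0.1, 1.0, 10.0]:
    mu = mu_closed_form(lam, r, alpha)
    assert mu > mu0
    assert abs(lam_of_mu(mu, r, alpha) - lam) < 1e-9

# At lambda = 0 with alpha < r, mu reduces to r/alpha - 1.
assert abs(mu_closed_form(0.0, r, alpha) - (r / alpha - 1.0)) < 1e-12
```

Under this assumed relation, the quadratic formula is an exact algebraic inverse of $\lambda(\mu)$ on $(\mu_0, \infty)$, so the round trips hold up to floating-point error.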
In the case $\\lambda = 0$, we have a very simple expression for $\\mu$:\n\\begin{align}\n \\mu = \\begin{cases}\n \\frac{r}{\\alpha} - 1 & \\text{if } \\alpha < r, \\\\\n 0 & \\text{otherwise}.\n \\end{cases}\n\\end{align}\nWe can also obtain the limiting behavior of $\\mu$ for large $\\lambda$ or small $\\alpha$:\n\\begin{align}\n \\label{eq:limiting-mu}\n \\lim_{\\lambda + \\tfrac{r}{\\alpha} \\to \\infty} \\frac{\\mu}{\\lambda + \\tfrac{r}{\\alpha}} = 1.\n\\end{align}\nIn \\Cref{fig:mu-equiv} (left), we plot $\\mu$ as a function of both $\\lambda$ and $\\alpha$.\nWe see that even for modest values of $\\lambda > 0$ or $\\alpha < r$, the relationship $\\mu \\sim \\lambda + \\tfrac{r}{\\alpha}$ holds quite accurately. We see a clear transition point at $\\alpha = r$, where $\\lambda_0 = 0$; on either side of this point, $\\lambda_0$ decreases. Other properties from the previous sections, such as monotonicity, concavity in $\\lambda$, and sign patterns, are clearly visible in this plot as well.\nWe also plot $\\mu'$ as a function of $\\mu$ and $\\alpha$ in \\Cref{fig:mu-prime}, where we see that the inflation vanishes for $\\mPsi = \\mA$ only if $\\alpha > r$ and $\\mu = 0$. It is non-negligible otherwise, and tends to infinity as $\\mu$ tends to $\\mu_0$ for each $\\alpha$.\n\n\\subsubsection{Marchenko--Pastur spectrum}\n\nWe also consider the case when ${\\mathbf{A}}$ is a random matrix\nof the form ${\\mathbf{A}} = \\tfrac{1}{n} {\\mathbf{Z}}^\\top {\\mathbf{Z}}$,\nwhere ${\\mathbf{Z}} \\in \\mathbb{R}^{n \\times p}$ contains i.i.d.\\ entries\nof mean $0$, variance $1$, and bounded moments of order $4 + \\delta$\nfor some $\\delta > 0$.\nThis case is of interest for real data settings\nwhere ${\\mathbf{A}}$ is a sample covariance matrix.\nIn this case, the spectrum of ${\\mathbf{A}}$ can be computed explicitly and is given by the Marchenko--Pastur law.\nComputing $\\mu$ explicitly in this case is possible,\nbut cumbersome. 
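A numerical route is straightforward, however. The sketch below is ours and assumes that $\mu$ solves a fixed-point equation of the form $\lambda = \mu \, (1 - \tfrac{1}{q} \tr[\mA (\mA + \mu \mI_p)^{-1}])$ with $\alpha = q/p$, an assumed form consistent with the bound $\mu \le \lambda + \tfrac{1}{q} \tr[\mA]$ and with the isotropic closed form above; it solves for $\mu$ by bisection on an empirical Marchenko--Pastur spectrum.

```python
import numpy as np

def solve_mu(lam, eigs, alpha, iters=200):
    """Bisection solver for the assumed fixed point
    lam = mu * (1 - (1/q) * tr[A (A + mu I)^{-1}]),
    given the eigenvalues of A and alpha = q/p. Valid branch: lam >= 0."""
    q = alpha * len(eigs)
    def f(mu):
        # tr[A (A + mu I)^{-1}] = sum_i a_i / (a_i + mu)
        return mu * (1.0 - np.sum(eigs / (eigs + mu)) / q) - lam
    lo, hi = 1e-12, lam + eigs.sum() / q + 1.0  # upper bracket: mu <= lam + tr(A)/q
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Empirical Marchenko--Pastur spectrum: A = Z^T Z / n with i.i.d. entries.
rng = np.random.default_rng(0)
n, p = 1000, 2000                      # r(A) -> n/p = 1/2
Z = rng.standard_normal((n, p))
eigs = np.clip(np.linalg.eigvalsh(Z.T @ Z / n), 0.0, None)

lam, alpha = 0.1, 0.25                 # alpha < r(A), so we expect mu >= lam
mu = solve_mu(lam, eigs, alpha)
q = alpha * p
assert lam <= mu <= lam + eigs.sum() / q
```

For $\lambda < 0$ the relevant root lies in $(\mu_0, \infty)$ and the bracket must be chosen accordingly; the simple positive bracket above only covers $\lambda \ge 0$.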
We instead provide numerical illustrations\nof the behaviour of $\\mu$ as a function of $\\alpha$ and $\\lambda$.\n\nFrom \\cref{fig:mu-equiv} (middle), we can see that the behavior of $\\mu$ for the Marchenko--Pastur spectrum is not substantially different from the rank-deficient isotropic spectrum. The only regime that differs significantly is when $\\alpha > r(\\mA)$ and $\\lambda < 0$, where $\\lambda_0$ is much closer to $0$ than in the isotropic case, and so there is no equivalence for more negative values of $\\lambda$.\n\nIt is also worth noting that when $\\alpha < r({\\mathbf{A}}) < 1$,\nthe na\\\"ive bound on the smallest\npermissible regularization $\\lambda$ is $0$\n(as explained in the caption of \\Cref{fig:lambda-minnz-bound}).\nHowever, from \\Cref{fig:mu-equiv}\nwe observe that the equivalence in \\cref{thm:sketched-pseudoinverse} holds even for quite negative $\\lambda$ (blue region), contrary to this na\\\"ive bound.\nIn fact, the true bound is almost the same as in the rank-deficient isotropic case, $\\lambda_0 = - \\paren{1 - \\sqrt{\\tfrac{r}{\\alpha}}}^2$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}