diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcduh" "b/data_all_eng_slimpj/shuffled/split2/finalzzcduh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcduh" @@ -0,0 +1,5 @@ +{"text":"\\section{The new cosmological context}\n\n\\subsection{Model expansion versus universe expansion}\n\nIt is currently assumed that matter itself does not expand during an eventual expansion of the universe. {\\it If this were true} some standard model could be used as the basis for a non-expanding theoretical reference framework. In it, ${dr(ik)}\/{r(ik)}=Hdt $. Then, from (1.1), (1.2), and NL mass-energy conservation,\n\\begin{equation}\n\\label{2.1}\nd\\phi (i)=-dw(i)=-\\,d\\sum_{k=1}^\\infty \\frac{Gm(k)f(ik)}{r(ik)}%\n=\\left[ \\sum_{k=1}^\\infty \\frac{Gm(k)f(ik)}{r(ik)}\\right] Hdt,\n\\end{equation}\n\\begin{equation}\n\\label{2.2}\nd\\phi \\left( i\\right) =w(U)Hdt=Hdt=dr(ik)\/r(ik)=d\\lambda (i)\/\\lambda (i).\n\\end{equation}\n\nThis means that {\\it every wave and particle would expand in the same proportion as the intergalactic distances}. An eventual universe expansion would not change the relative values of any of them: the distances, the velocities, the temperatures, the WRS, the HRS, and therefore the local physical laws\\footnote {This generalizes the relativity postulates for eventual universe expansions.}. {\\it Then the universe would not have a well-defined age and it might last indefinitely}. Of course, this would invalidate the deductions currently made about the age of the universe, which in any case seem to be inconsistent with the latest measurements made with the Hubble telescope.\n\n\\subsection{The new kind of black hole (BH)}\n\nThe new exponential G relations do not have a singularity at $r=2GM$. Hence, the new kind of BH is different from that of GR \\cite{V81b}. Its nucleus would be just a neutron star (NS) with a strong external {\\it gradient of the NL refraction index} that would act as a mirror for most of the internal radiation. Its outgoing {\\it critical reflection angle}, given by $\\sin^{-1}[\\left( 2eGM\/r\\right) e^{-2GM\/r}]$, would be rather small. Thus the escape probabilities would not be strictly null. The BH would therefore absorb and store, for a long time, most of the radiation traveling within the impact parameter $2eGM$.\n\n\\subsection{Relativistic particle generation}\n\nIn a way similar to the Earth's {\\it auroras}, most of the positively charged nuclei would be driven by the magnetic fields towards the BH polar regions. Since the neutron binding energy in a BH is of a higher order of magnitude than that in atoms, one of the most probable reactions between them is {\\it nuclear stripping} \\cite{V77}, \\cite{V81b}. In it, some of the atomic neutrons would be captured by the BH while the remaining nucleus (a proton or a proton-rich nucleus) would be rejected by the NS. The latter would carry away the NL mass-energy difference between the original and final states of the captured neutrons. These rejected nuclei could only escape from the magnetic fields, in axial directions, within the small escape angle given above. They would form {\\it narrow jets of relativistic particles}, richer in protons and with higher energies for higher $p\/n$ ratios. They are consistent with the composition and energies of {\\it cosmic ray particles} \\cite{V81b}, \\cite{R81}. 
They are also consistent with the {\\it radio sources and jets} moving away from the central regions of galaxies, most of them in just the expected orientations.\n\nThis process would be most important because it would convert G work into mechanical and nuclear latent energies. This would regenerate new gas of high nuclear and kinetic energies at the cost of rather burnt-out materials like He or heavier elements. Such low-entropy materials can in principle extend the luminous lifetimes of galaxies beyond the limits estimated from the current models.\n\n\\subsection{The entropy switch}\n\n{}From the BH surface, contrary to the outside regions, the external universe would look like a source of {\\it blue-shifted radiation} that would increase both the local temperature and the probabilities for filling up the local SW levels up to the highest NL frequencies. This is equivalent to a decrease of the local entropy. In this way the average NL mass and kinetic energy of the nucleons would increase with time, fed by the radiation energy coming from the rest of the universe, up to some unstable state in which any decrease of the NL refraction index gradient generated by external bodies would produce {\\it frustrated reflections} that can trigger the mass outflow. Thus the BH can {\\it explode}, producing low-density gas flowing away through older stellar remnants orbiting around it. This would transform a fraction of the kinetic energy into rotational energy associated with randomly oriented angular momenta. This is also consistent with the fronts of H-rich matter diverging from very small regions in the universe.\n\n\\section{The new astrophysical context}\n\n{}From the above, the universe would last indefinitely in a kind of conservative and {\\it isentropic steady state}. In it, {\\it matter and radiation would evolve, indefinitely, in rather closed cycles, between the states of gas and BHs, and vice versa}.\n\n\\subsection{Matter cycles}\n\nSingle BH explosions and chains of them would produce {\\it rather spherical stellar clusters and elliptical galaxies, rather free of metals}. They would regenerate randomly oriented angular momenta that, in the long run, would be canceled out at faster rates than those parallel to the galactic axis. Thus an {\\it elliptical galaxy} would progressively acquire {\\it disc and spiral shapes} of smaller volumes. Finally it would become reduced to a small central luminous volume (AGN and {\\it quasar}) with massive stars and high-density black bodies (black holes, neutron stars) surrounded by a halo of dead stars and planetesimals [a {\\it black galaxy}]. Explosive events such as supernovas would produce large changes of luminosity, within relatively short periods, that are consistent with those of quasars \\cite {N90}.\n\nDue to the low $\\phi (r)$ in the black galaxy center, its atoms would emit strongly red-shifted light, largely scattered and reflected by the external bodies. This accounts for the fact that quasar correlations improve under the assumption that most of the observed red shift is intrinsic \\cite {B73}. The detection of metal lines would also prove the existence of highly evolved (old) matter.\n\nThe {\\it black galaxy} (BG) resulting from a luminous one would be cooled down by its BHs. 
It would also capture and store radiation coming from the external universe, in a way similar to a huge BH. After a long period, the explosion of some central BH can trigger a chain of BH explosions that would regenerate a luminous galaxy.\n\nOn a larger time scale, the galaxy regeneration would look like a BG explosion that can trigger the eventual explosions of the next BGs, and so on. They would produce {\\it clusters}. Superclusters would also be due to similar mechanisms. Thus the {\\it fronts} of galaxies in luminous stages would also account for the large-scale structure of the universe.\n\n\\subsection{High energy step down in stellar objects}\n\nMechanisms of nuclear stripping similar to those occurring in BHs could also occur during the fall of matter onto neutron stars (NSs), either steadily or in pulsed ways. They may also occur rather hidden inside some stars or gas clouds. They would transform heavy (burnt-out) elements into protons of higher kinetic and nuclear latent energies that would promote convection currents. This would prevent overheating and stellar collapse after neutrino cooling.\n\nThis kind of stellar model, \\cite{V93}, is consistent with all of the following: the low neutrino luminosities, the higher densities and temperatures, the better defined mass-luminosity relations, and the magnetic structures of {\\it main sequence stars}.\n\n\\subsection{Density and isotropy of the universe}\n\nDue to the higher rates of energy emitted by the luminous galaxies compared with those absorbed by the BGs, it is inferred (after a mass-energy balance) that {\\it most of the universe should be in the state of low-temperature BGs}, cooled down by their own BHs. This is consistent with the high average density of the universe derived from (1.1), assuming $H=75\\ km\/\\sec $ per Mpc\\footnote {When the common mass and energy unit is the joule, $G = G_{newt} c^4$.}. This density is $ \\simeq [4\\pi GR^2]^{-1}$, i.e., about $ 10^{-29} gm\/cm^3 $, which is of a higher order of magnitude than that of the luminous fraction of the universe. This is also consistent with the current mass excesses detected by dynamical methods in galaxies and clusters.\n\nAfter integration of (2), the space properties are fixed, mostly, by matter existing between $R$ and $3R$. The contribution of relatively local matter is extremely small compared with that of the rather uniform universe. This is consistent with {\\it the weakness of ordinary G interactions, and with the high isotropy of both the space properties and of the cosmological radiation background}.\n\nThe low-temperature black-body radiation coming from BGs, red-shifted during its long average trip to the observer ($2R$), would fix {\\it a rather uniform low-temperature cosmic radiation background}. Thus {\\it the universe would always look like a perfect radiation absorber}\\footnote {Only steady state cosmologies can account for the {\\it arrows} in nature \\cite {N90}.}.\n\n\\section{Conclusions}\n\nThe theoretical properties of the SW particle model fix a new kind of {\\it conservative and isentropic steady state} in which matter and radiation evolve, indefinitely, in rather closed cycles. These cycles are fairly consistent with the luminous bodies ranging between elliptical galaxies and quasars, and also with larger-scale structures of the universe.\n\nThis theory opens the way for new stellar models and non-conventional interpretations of many celestial phenomena. 
The new universe would not have the narrow time limits fixed by the more conventional theories. In this way, also, astrophysics could do without the relatively large number of non-testable hypotheses that can be advanced on the origin of the universe.\n\nThere is simultaneous consistency of the theoretical properties of the SW model with fundamental physics, and of the new cosmological context with a wide range of astronomical observations. This seems to be a fair reliability test for all of them: the SW particle model, the relationships derived from it, and the new cosmological and astrophysical contexts. This unified approach may contribute to understanding nature in terms of the most elementary properties of radiation (or vice versa), thus depending on the minimum number of parameters, postulates, and arbitrary assumptions normally made about the relations between matter and its G field.\n\nDue to the large amount of subjects and materials accumulated from 1976 up to today, the author intends to compile all of this work into a single book that may be useful to those who would like to follow this path towards understanding nature from a self-consistent and unified viewpoint \\cite{V95c}.\n\nAcknowledgement. I very much appreciate the help of TW Andrews, who sent me some helpful literature, including his own ideas. I would also appreciate some encouragement and collaboration in finding further astronomical tests for this theory.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\tMany measurement operations in signal and image processing as well as in communication follow a bilinear model. Namely, in addition to depending linearly on the unknown signal, the measurements also depend linearly on certain parameters of the measurement procedure. Hence one cannot employ a linear model (for example, in connection with compressed sensing techniques \\cite{candes2006robust}) unless one has an accurate estimate of these parameters.\n\n\tWhen such estimates are not available or too expensive to obtain, there are certain asymmetric scenarios in which one of the inputs can be recovered even though the other one is out of reach (e.g., \\cite{xu1995least,lee2017spectral}; this scenario is sometimes referred to as passive imaging). In most cases, however, the natural aim will be to recover both the signal and the parameters, that is, to solve the associated bilinear inverse problem. Even when some estimates of the parameters are available, such a unified approach will be preferred in many situations, especially when information is limited. Consequently, the study of bilinear inverse problems, including but not limited to the important problem of blind deconvolution, has been an active area of research for many years \\cite{Haykin1994}.\n\t\n\tObserving that bilinear maps admit a representation as a linear map in the rank-one outer product of the unknown signal and the parameter vector, one can approach such problems using tools from the theory of low-rank recovery (see, e.g., \\cite{ahmedrechtromberg,lingstrohmer,jung2017blind}). 
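To make this lifted representation concrete, consider the following toy numerical check (our own illustrative sketch in Python\/NumPy, not taken from the cited references): a bilinear map, encoded by a generic third-order tensor, is evaluated once directly and once as a linear map applied to the outer product of its two inputs.\n\t\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nm, n1, n2 = 5, 4, 3\n# A generic bilinear map: B(u, v)_l = sum_ij T[l, i, j] u_i v_j.\nT = rng.standard_normal((m, n1, n2))\nu = rng.standard_normal(n1)\nv = rng.standard_normal(n2)\n\nb_bilinear = np.einsum('lij,i,j->l', T, u, v)  # direct evaluation\nM = T.reshape(m, n1 * n2)                      # lifted linear operator\nb_lifted = M @ np.outer(u, v).ravel()          # linear in the outer product\nassert np.allclose(b_bilinear, b_lifted)\n\\end{verbatim}\n\tThe rank-one structure of the outer product is exactly what the low-rank recovery techniques mentioned above exploit. 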
Under sparsity assumptions, that is, when the signals and\/or parameter vectors admit an approximate representation using just a small (but unknown) subset of an appropriate basis (for more details regarding when such assumptions appear in bilinear inverse problems, see \\cite{strohmer}), however, the direct applicability of these approaches is limited, as two competing objectives arise: one aims to simultaneously minimize rank and sparsity. As a consequence, the problem becomes considerably more difficult; Oymak et al., for example, have demonstrated that minimizing linear combinations of the nuclear norm (a standard convex proxy for the rank) and the $\\ell_1$ norm (the corresponding quantity for sparsity) exhibits suboptimal scaling \\cite{oymak2015simultaneously}.\tIn fact, it is not even clear whether, without additional assumptions, efficient recovery is at all possible for a near-linear number of measurements (as would be predicted by identifiability considerations \\cite{doi:10.1137\/16M1067469}).\n\t\n\tRecently, a number of nonconvex algorithms for bilinear inverse problems have been proposed. For example, for such problems without sparsity constraints, several algorithms have been analyzed for blind deconvolution and related problems \\cite{li2016rapid,ling2017regularized} with near-optimal recovery guarantees. In contrast, our understanding of bilinear inverse problems with sparsity constraints is still in its beginnings. Recently, several algorithms have been analyzed for sparse phase retrieval \\cite{bahmani2017anchored,soltanolkotabi2017structured} or blind deconvolution with sparsity constraints \\cite{qu2017convolutional}. The recovery guarantees for these algorithms, however, are either suboptimal in the number of necessary measurements or only local convergence guarantees are available, i.e., one relies on the existence of a good initialization. (Noteworthy exceptions are the two related papers \\cite{bahmani2016near,iwen2017robust}, where a two-stage approach for (sparsity) constrained bilinear inverse problems is proposed, which achieves recovery at a near-optimal rate. However, the approach relies on a special nested structure of the measurements, which is not feasible for many practical applications.)\n\t\n\tIn \\cite{paper2} Lee, Wu, and Bresler introduced the {\\em sparse power factorization} (SPF) method, an alternating minimization scheme, together with a tractable initialization procedure. They also provide a first performance analysis of their method for random bilinear measurements, in the sense that the lifted representation is a matrix with independent Gaussian entries.\n\tThat is, they work with linear operators $\\mathcal{A}\\colon \\mathbb{C}^{n_1 \\times n_2} \\longrightarrow \\mathbb{C}^{m} $ that admit a representation as\n\t\\begin{equation*}\n\t\\left( \\mathcal{A} \\left( X \\right) \\right) \\left( \\ell \\right)= \\text{trace} \\left( A_{\\ell}^\\ast X \\right)\n\t\\end{equation*}\n\tfor i.\\,i.\\,d.\\ Gaussian matrices $ A_{\\ell} \\in \\mathbb{C}^{n_1 \\times n_2} $.\n\t\n\tFor such measurements they show that, with high probability, SPF converges locally to the right solution, i.e., one has convergence for initializations not too far from the signal to be recovered.\n\t\n\tFor signals that have a very large entry, they also devise a tractable initialization procedure -- they call it thresholding initialization -- such that one has global convergence to the right solution. 
Local convergence has also been shown for the multi-penalty approach {\\em A-T-LAS$_{1,2}$} \\cite{fornasier2018}, but to our knowledge, comparable global recovery guarantees are not available to date. This is why we focus on SPF in this paper, using the results of \\cite{paper2} as our starting point.\n\t\n\tThe precise condition for their guarantee to hold is that both (normalized) input signals need to be larger than some $c>\\tfrac{1}{2}$ in supremum norm -- more than one quarter of their mass needs to be located in just one entry; that is, the signals must have a very high peak-to-average power ratio.\n\t\n\t In this paper, we considerably weaken this rather strong restriction in two ways. Firstly, we show that similar results hold for smaller lower bounds $c$ at the expense of a moderately increased number of measurements. Secondly, we show that similar results can be obtained when the mass of one of the signals is concentrated in more than one, but still a small number of entries.\n\t\n\t\n\tThe SPF algorithm, the thresholding initialization, and the resulting recovery guarantees are reviewed in \\Cref{spfsection} before we discuss and prove our results in \\Cref{resultsection} and \\Cref{sectionproof}.\n\t\n\t\\subsection*{Notation}\n\tThroughout the paper we will use the following notation. By $ \\left[n\\right] $ we will denote the set $ \\left\\{1; \\ldots; n \\right\\} $. For any set $J$ we will denote its cardinality by $ \\vert J \\vert $. For a vector $v\\in \\mathbb{C}^m$ we will denote by $ \\Vert v \\Vert $ its $ \\ell_2$-norm and by $ \\Vert v \\Vert_{\\infty}$ the modulus of its largest entry. If $J \\subset \\left[n\\right] $, we will denote by $v_J$ the restriction of $v$ to the elements indexed by $J$. For matrices $ A \\in \\mathbb{C}^{n_1 \\times n_2} $ we will denote by $ \\Vert A \\Vert_F$ the Frobenius norm and by $\\Vert A \\Vert $ the spectral norm, i.e., the largest singular value of $A$. \t\n\t\n\t\\section{Sparse Power Factorization: Algorithm and Initialization}\\label{spfsection}\n\t\n\t\\subsection{Problem formulation}\n\t\n\tLet $ b \\in \\mathbb{C}^m $ be given by\n\t\\begin{equation*}\n\tb:= B(u,v)+z,\n\t\\end{equation*}\n\twhere $ B \\colon \\mathbb{C}^{n_1} \\times \\mathbb{C}^{n_2} \\rightarrow \\mathbb{C}^m $ is a bilinear map and $z\\in \\mathbb{C}^m$ is noise. Recall that one can represent the bilinear map $B \\colon \\mathbb{C}^{n_1}\\times \\mathbb{C}^{n_2} \\rightarrow \\mathbb{C}^{m}$ by a linear map\n\t$\\mathcal{A}\\colon\\mathbb{C}^{n_1\\times n_2}\\longrightarrow\\mathbb{C}^m$ which satisfies\n\t\\[\n\tB(u,v)= \\mathcal{A}(uv^\\ast)\n\t\\]\n\tfor all vectors $u \\in \\mathbb{C}^{n_1}$ and all $ v \\in \\mathbb{C}^{n_2}$. 
Note that such a linear map $ \\mathcal{A}$ is characterized by a (unique) set of matrices $ \\left\\{ A_{\\ell} \\right\\}^m_{\\ell =1} \\subset \\mathbb{C}^{n_1 \\times n_2} $ such that the $\\ell$th entry of $ \\mathcal{A}\\left( X \\right)$ is given by\n\t\\begin{equation}\\label{operatorrepresentation}\n\t\\left( \\mathcal{A} \\left( X \\right) \\right) \\left( \\ell \\right)= \\text{trace} \\left( A_{\\ell}^\\ast X \\right).\n\t\\end{equation}\n\tIn this notation, our goal will be to reconstruct $u$ and $v$ from the noisy linear measurements given by\n\t\\begin{equation*}\n\tb_{\\ell} = \\text{trace} \\left( A^*_{\\ell} uv^* \\right) + z_{\\ell}.\n\t\\end{equation*}\n\tAt the core of the Sparse Power Factorization Algorithm, as introduced in \\cite{paper2}, are the linear operators $F \\colon \\mathbb{C}^{n_2} \\longrightarrow \\mathbb{C}^{m \\times n_1} $ and $ G \\colon \\mathbb{C}^{n_1} \\longrightarrow \\mathbb{C}^{m \\times n_2} $ defined by\n\t\n\t\t$$F(y) := \n\t\t\\begin{pmatrix}\n\t\ty^\\ast A_1^\\ast\\\\\n\t\t\\vdots\\\\\n\t\ty^\\ast A_m^\\ast\n\t\t\\end{pmatrix},\n\t\t\\quad\n\t\tG(x) := \n\t\t\\begin{pmatrix}\n\t\tx^\\ast A_1\\\\\n\t\t\\vdots\\\\\n\t\tx^\\ast A_m\n\t\t\\end{pmatrix}.$$\n\t\tA direct consequence of this definition is that $$\\mathcal{A}(xy^\\ast) = [F(y)]x = \\overline{[G(x)]y}$$ for all $ x \\in \\mathbb{C}^{n_1} $ and all $ y \\in \\mathbb{C}^{n_2} $.\n\n\n\t\\subsection{Sparse Power Factorization}\n\t\nThe idea of Sparse Power Factorization is to iteratively update estimates $u_t$ and $v_t$ for $u$ and $v$ in an alternating fashion.\nThat is, in each iteration one keeps one of $v_t$ and $u_t$ fixed and updates the other one by solving an (underdetermined) linear system. Solving each of these linear systems then amounts to solving a linear inverse problem with sparsity constraints. Hence, many pursuit algorithms proposed in the context of compressed sensing can be applied, such as CoSaMP \\cite{needell2009cosamp}, Hard Thresholding Pursuit \\cite{foucart2011hard} or Basis Pursuit. In \\cite{paper2} the authors used Hard Thresholding Pursuit (HTP) for their analysis, and in this paper we will also restrict ourselves to HTP. 
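As an informal preview of the formal pseudocode below, the following is a minimal NumPy sketch of HTP (our own illustration; we use a fixed iteration budget in place of the stopping condition):\n\t\\begin{verbatim}\nimport numpy as np\n\ndef htp(A, b, s, iters=50):\n    # Hard Thresholding Pursuit: a gradient step, followed by\n    # restriction to the s largest entries in modulus and a\n    # least-squares fit on that support.\n    m, n = A.shape\n    x = np.zeros(n, dtype=complex)\n    for _ in range(iters):\n        w = x + A.conj().T @ (b - A @ x)\n        J = np.argsort(np.abs(w))[-s:]\n        x = np.zeros(n, dtype=complex)\n        x[J] = np.linalg.lstsq(A[:, J], b, rcond=None)[0]\n    return x\n\\end{verbatim}\n\tIn practice one would stop as soon as the support $J$ stabilizes from one iteration to the next. 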
With this, the Sparse Power Factorization Algorithm reads as follows.\n\t\n\t\n\\begin{alg}[Algorithm 1 in \\cite{paper2}]\\label{SPF.alg}~\\\\\\vspace*{-4mm}\n\t\\begin{algorithmic}[1]\n\t\t\\sffamily\\small\n\t\t\\Require Operator $\\mathcal{A}$, measurement $b$, sparsity constraints $s_1, s_2$, initialization $v_0$.\n\t\t\\Ensure Estimate $\\widehat{X}$.\n\t\t\\State $t \\gets 0$\n\t\t\\While{stop condition not satisfied}\n\t\t\\State $t \\gets t + 1$\n\t\t\\State $v_{t - 1} \\gets \\frac{v_{t - 1}}{\\big\\|v_{t - 1}\\big\\|}$\\label{vt1norm}\n\t\t\\If{$s_1 < n_1$}\n\t\t\\State $u_t \\gets \\mathrm{HTP}(\\mathrm{F}(v_{t - 1}), b, s_1)$\n\t\t\\Else\n\t\t\\State $u_t \\gets \\operatorname*{{arg\\,min}}\\limits_x \\big\\|b - [\\mathrm{F}(v_{t - 1})]x\\big\\|^2$\n\t\t\\EndIf\n\t\t\\State $u_t \\gets \\frac{u_{t}}{\\big\\|u_{t}\\big\\|}$\n\t\t\\If{$s_2 < n_2$}\n\t\t\\State $v_t \\gets \\mathrm{HTP}(\\mathrm{G}(u_{t}), \\bar{b}, s_2)$\n\t\t\\Else\n\t\t\\State $v_t \\gets \\operatorname*{{arg\\,min}}\\limits_y \\big\\|\\bar{b}- [\\mathrm{G}(u_{t})]y\\big\\|^2$\\label{sparse-else}\n\t\t\\EndIf\n\t\t\\EndWhile \\\\\n\t\t\\Return $\\widehat{X} \\gets u_tv_t^{\\ast}$\n\t\\end{algorithmic}\n\\end{alg}\t\n\\noindent The Hard Thresholding Pursuit Algorithm is defined as follows:\n\\begin{alg}HTP(A, b, s)\\label{HTP.alg}~\\\\\\vspace*{-4mm}\n\t\\begin{algorithmic}[1]\n\t\t\\sffamily\\small\n\t\t\\Require Measurement matrix $A \\in \\mathbb{C}^{m \\times n}$, measurement $b \\in \\mathbb{C}^m$, sparsity constraint $s \\in \\mathbb{N}$.\n\t\t\\Ensure $ \\hat{x} \\in \\mathbb{C}^n $.\n\t\t\\State $t \\gets 0$, $x_0 \\gets 0$\n\t\t\\While{stop condition not satisfied}\n\t\t\\State $t \\gets t+1 $\n\t\t\\State $ w \\gets x_{t-1} + A^* \\left( b-Ax_{t-1} \\right) $\n\t\t\\State $J \\gets \\underset{J \\subset \\left[n\\right], \\ \\vert J \\vert =s}{\\arg \\max} \\ \\Vert w_J \\Vert $\n\t\t\\State $x_t \\gets \\underset{x: \\text{supp} \\left(x\\right) \\subset J }{\\arg \\min} \\Vert Ax-b \\Vert$\n\t\t\\EndWhile \\\\\n\t\t\\Return $ \\hat{x} \\gets x_t $\n\t\\end{algorithmic}\n\\end{alg}\t\t\n\t\n\t\n\t\n\t\\subsection{Initialization}\\label{Initialization}\n\tAs for many other non-convex algorithms (e.g., \\cite{Jain,candes2015phase}), the convergence properties of Sparse Power Factorization depend crucially on the choice of the starting point. In \\cite{Jain,candes2015phase} the starting point is chosen via a spectral initialization. That is, one chooses the leading left- and right-singular vectors of $ \\mathcal{A}^* \\left(b\\right) $ as the starting point. However, in order to work, this approach requires the number of measurements to be of the order of $ \\max \\left\\{n_1, n_2\\right\\} $, which will in general not be optimal, as it does not take into account the sparsity of the vectors $u$ and $v$. One way to incorporate the sparsity assumption would be to solve the Sparse Principal Component Analysis (SparsePCA) problem\n\t\\begin{equation}\\label{equ:sparsePCA}\n\t\\begin{split}\n\t\\max \\quad & \\text{Re} \\left( \\tilde{u}^* \\mathcal{A}^* \\left(b \\right) \\tilde{v} \\right) \\\\\n\t\\text{subject to} \\quad &\\Vert \\tilde{u} \\Vert_0 \\le s_1, \\Vert \\tilde{u} \\Vert =1\\\\\n\t&\\Vert \\tilde{v} \\Vert_0 \\le s_2, \\Vert \\tilde{v} \\Vert =1,\n\t\\end{split}\n\t\\end{equation}\n\twhere $\\Vert \\cdot \\Vert_0 $ denotes the number of non-zero entries. 
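For very small dimensions, \\eqref{equ:sparsePCA} can be solved by exhaustive search: for each pair of candidate supports, the optimal value is the largest singular value of the corresponding submatrix of $\\mathcal{A}^* \\left( b \\right)$. The following sketch (our own illustration, exponential in the sparsity levels and hence only meant to make the combinatorial nature of the problem explicit) implements this:\n\t\\begin{verbatim}\nimport numpy as np\nfrom itertools import combinations\n\ndef sparse_pca_bruteforce(M, s1, s2):\n    # M plays the role of A*(b); for fixed supports (J1, J2) the\n    # optimum of the SparsePCA objective is the largest singular\n    # value of the submatrix M[J1, J2].\n    n1, n2 = M.shape\n    best, best_supports = -np.inf, None\n    for J1 in combinations(range(n1), s1):\n        for J2 in combinations(range(n2), s2):\n            sigma = np.linalg.svd(M[np.ix_(J1, J2)],\n                                  compute_uv=False)[0]\n            if sigma > best:\n                best, best_supports = sigma, (J1, J2)\n    return best, best_supports\n\\end{verbatim}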
As was shown in \\cite[Proposition III.4]{paper2}, Algorithm \\ref{SPF.alg}, if initialized by a solution of \\eqref{equ:sparsePCA}, is able to recover the solutions $u$ and $v$ from a number of measurements of the order of $ \\left( s_1 + s_2 \\right) \\log \\left( \\max \\left\\{ \\frac{n_1}{s_1}, \\frac{n_2}{s_2} \\right\\} \\right) $. However, the SparsePCA problem has been shown to be NP-hard \\cite{tillmann2014computational}. Nevertheless, in the last fifteen years there has been a lot of research on the SparsePCA problem and, in particular, on tractable (i.e., polynomial-time) algorithms which yield good approximations to the true solution. Several computationally tractable algorithms have been proposed for solving (\\ref{equ:sparsePCA}), e.g., thresholding algorithms \\cite{ma2013sparse}, a general version of the power method \\cite{journee2010generalized} and semidefinite programs \\cite{d2008optimal}. From the statistical perspective, for computationally efficient or at least tractable algorithms, particular emphasis has been put on the analysis of the single spike model \\cite{amini2009high,krauthgamer2015semidefinite,deshpande2014sparse}. These approaches, however, require that the number of samples scales with the square of the number of non-zero entries of the signal to be estimated (up to $\\log$-factors). This raised the question of whether there are fundamental barriers preventing the SparsePCA problem from being solved in polynomial time at a sampling rate close to the information-theoretic limit. Indeed, it has been shown that an algorithm that achieves this would also allow for an algorithm which solves the $k$-clique problem in polynomial time \\cite{berthet2013optimal,wang2016statistical}. However, a widely believed conjecture in theoretical computer science states that this is not the case, which indicates that this approach will not be suited for initializing bilinear recovery problems either.\\\\\n\n\t\\noindent In this manuscript we will analyse the following initialization algorithm, which is the one proposed in \\cite{paper2}. 
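As an informal preview (a NumPy sketch under our own conventions; the precise version is stated as Algorithm \\ref{SPF_alginit} below), the procedure reads:\n\t\\begin{verbatim}\nimport numpy as np\n\ndef spf_init(M, s1, s2):\n    # M plays the role of A*(b).\n    # Row scores: norms of the best s2-sparse row approximations.\n    xi = np.sqrt(np.sort(np.abs(M)**2, axis=1)[:, -s2:].sum(axis=1))\n    J1 = np.argsort(xi)[-s1:]                 # row support estimate\n    col_norms = np.linalg.norm(M[J1, :], axis=0)\n    J2 = np.argsort(col_norms)[-s2:]          # column support estimate\n    _, _, Vh = np.linalg.svd(M[np.ix_(J1, J2)])\n    v0 = np.zeros(M.shape[1], dtype=complex)\n    v0[J2] = Vh[0].conj()                     # leading right sing. vector\n    return v0\n\\end{verbatim}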
For a set $ J_1 \\subset \\left[n_1\\right] $, respectively $ J_2 \\subset \\left[n_2\\right] $, in the following we will denote by $ \\Pi_{J_1} $, respectively $ \\Pi_{J_2} $, the matrix that projects a vector onto the components belonging to $ J_1$, respectively $J_2$.\n\t\n\\begin{alg}[Algorithm 3 in \\cite{paper2}]\\label{SPF_alginit}~\\\\\\vspace*{-4mm}\n\t\\begin{algorithmic}[1]\n\t\t\\sffamily\\small\n\t\t\\Require Operator $\\mathcal{A}$, measurement $b$, sparsity constraints $s_1, s_2$.\n\t\t\\Ensure Initial guess $v_0$ for $ v \\in \\mathbb{C}^{n_2} $.\n\t\t\\State For all $ i \\in \\left[ n_1 \\right] $ let $ \\xi_i $ be the $ \\ell_2$-norm of the best $s_2$-sparse approximation of the $i$th row of the matrix $ \\mathcal{A}^* \\left(b\\right) \\in \\mathbb{C}^{n_1 \\times n_2} $.\n\t\t\\State Let $ \\widehat{J_1} \\subset \\left[n_1\\right] $ be a set of indices of the $ s_1 $ largest elements in $ \\left\\{\\xi_1; \\xi_2; \\ldots ; \\xi_{n_1} \\right\\} $.\n\t\t\\State Choose $\\widehat J_2$ to contain the indices of the $s_2$ columns of $\\Pi_{\\widehat J_1} \\mathcal{A}^* \\left( b \\right)$ largest in $\\ell_2$ norm, i.e.,\n\t\t\\begin{equation}\\label{equ:defwidetilde}\n\t\t\\widehat J_2 := \\underset{ J \\subset \\left[ n_2 \\right], \\ \\vert J \\vert = s_2 }{\\operatorname*{arg\\,max}} \\big\\|\\Pi_{\\widehat J_1}[\\mathcal{A}^\\ast(b)]\\Pi_{ J }\\big\\|_\\mathrm{F}.\n\t\t\\end{equation}\\\\\n\t\t\\Return $v_0$, the leading right singular vector of $\\Pi_{\\widehat{J_1}}[\\mathcal{A}^{\\ast}(b)]\\Pi_{\\widehat{J_2}}$.\n\t\\end{algorithmic}\n\\end{alg}\t\n\t\n\t\\section{Previous results}\n\t\n\tIn the following we will work with the model (\\ref{operatorrepresentation}), i.e., we observe\n\t\\begin{equation*}\n\tb_{\\ell} = \\text{trace} \\left( A_{\\ell}^* uv^* \\right) + z_{\\ell},\n\t\\end{equation*}\n\twhere $u \\in \\mathbb{C}^{n_1}$ is $s_1$-sparse, $ v \\in \\mathbb{C}^{n_2}$ is $s_2$-sparse, and $z \\in \\mathbb{C}^m $ is noise. As in \\cite{paper2}, $ \\nu \\left(z\\right) $ will quantify the noise-to-signal ratio via \n\t\\begin{equation}\\label{def:noiselevel}\n\t\\nu \\left(z\\right) := \\frac{\\Vert z \\Vert}{\\Vert \\mathcal{A} \\left(uv^*\\right) \\Vert }.\n\t\\end{equation}\n\t\\noindent For our analysis, $ \\mathcal{A} $ will be a Gaussian linear operator, that is, all the entries of the matrices $ A_1, \\ldots, A_{m}$ are independent with distribution $ \\mathcal{CN} \\left(0, \\frac{1}{m} \\right) $. (Here a complex-valued random variable $X$ has distribution $ \\mathcal{CN} \\left(0,\\sigma\\right) $ if its real and imaginary parts are independent Gaussians with expectation $0$ and variance $\\frac{\\sigma}{2} $.)\n\t\n\t\\noindent In \\cite{paper2}, the authors showed that Algorithm \\ref{SPF.alg}, if initialized by Algorithm \\ref{SPF_alginit}, is able to recover both $u$ and $v$ (up to a scale ambiguity) if both $u$ and $v$ belong to a certain restricted class of signals. More precisely, they proved the following result.\n\n\t\\begin{thm}[{\\cite[Theorems III.7 and III.10]{paper2}}]\\label{th1}\n\t\tAssume that $\\mathcal{A} \\colon \\mathbb{C}^{n_1 \\times n_2 } \\longrightarrow \\mathbb{C}^{m} $ is a Gaussian linear operator as described above. Let $ b= \\mathcal{A} \\left( uv^* \\right) + z$, where $u$ is $s_1$-sparse and $v$ is $s_2$-sparse. 
Suppose that $ \\Vert u \\Vert_{\\infty} \\ge 0.78 \\Vert u \\Vert $, $ \\Vert v \\Vert_{\\infty} \\ge 0.78 \\Vert v \\Vert $, and that the noise level satisfies $ \\nu \\left(z\\right) \\le 0.04 $. Then, with probability exceeding $ 1- \\exp \\left(-c_1 m\\right) $, the sequence of iterates generated by Algorithm \\ref{SPF.alg}, initialized by Algorithm \\ref{SPF_alginit}, converges linearly to $uv^*$, provided that\n\t\\begin{equation*}\n\t\tm \\ge c_2 \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\t\t\\end{equation*}\t\n\twhere $c_1,c_2>0$ are absolute constants.\n\t\\end{thm}\t\n\t\\noindent Note that in order to apply Theorem \\ref{th1} to signals $u$ and $v$ one needs to require that more than half of the mass of each of $u$ and $v$ is located in one single entry -- a severe restriction that can be prohibitive for many applications. Our goal in the following will be to considerably relax this assumption at the price of a slightly increased number of measurements. We will relax the assumption in two different ways: On the one hand, we will show that one can replace $ 0.78$ by an arbitrarily small constant, which will then show up in the number of measurements. On the other hand, we generalize the result to the case that a significant portion of the mass of $u$ is concentrated on a small number $k$ of entries, rather than just one of them.\n\t\\section{Main Result}\\label{resultsection}\n\tIn this section we will state the main result of this article, Theorem \\ref{thm:mainresultreadable}.\n\tFor that, we need to define the norm\n\t\\begin{equation*}\n\t\\Vert x \\Vert_{\\left[k\\right]} := \\underset{I \\subset \\left[n_1\\right], \\ \\vert I \\vert = k}{\\max} \\left( \\sum_{i \\in I} \\vert x_i \\vert^2 \\right)^{1\/2} = \\left( \\sum_{i=1}^{k} \\left( x^*_i \\right)^2 \\right)^{1\/2},\n\t\\end{equation*}\n\tfor any $x\\in \\mathbb{C}^{n_1} $, where $ \\left( x^*_i \\right)^{n_1}_{i=1} $ denotes the non-increasing rearrangement of $ \\left( \\vert x_i \\vert \\right)^{n_1}_{i=1} $. Our main requirement on the vector $u$ will be that a significant amount of its mass is located in its largest $k$ entries, i.e., that $ \\frac{\\Vert u \\Vert_{\\left[k\\right]}}{\\Vert u \\Vert} $ is large enough.\n\t\\begin{thm}\\label{thm:mainresultreadable}\n\t\n\t\tLet $ k \\in \\left[ n_1 \\right] $ and let $ 0<\\xi<1$ and $0<\\mu<1$. 
Then, there are absolute constants $C_1, C_2, C_3 >0$ such that if\n\t\t\\begin{equation}\n\t\tm \\ge C_1 \\max \\left\\{ \\frac{1}{\\xi^4 \\mu^4}, \\frac{k}{\\xi^2} \\right\\} \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\t\t\\end{equation}\n\t\tthen with probability at least $ 1-\\exp \\left( - C_2 m \\right) $ the following holds.\\\\\n\t\t\n\t\tFor all $s_1$-sparse $u\\in \\mathbb{C}^{n_1}$ with $ \\Vert u \\Vert_{\\left[k\\right]} \\ge \\xi \\Vert u \\Vert $, all $s_2$-sparse $v\\in \\mathbb{C}^{n_2}$ with $ \\Vert v \\Vert_{\\infty} \\ge \\mu \\Vert v \\Vert $, and all $ z \\in \\mathbb{C}^m $ with $ \\nu\\left(z\\right) \\le C_3 \\min \\left\\{ \\xi^2 \\mu^2 ; \\frac{\\xi}{ \\sqrt{k}} \\right\\} $, the iterates $\\{X_t\\}_{t\\in\\mathbb{N}}$ generated by applying Algorithm \\ref{SPF.alg}, initialized by Algorithm \\ref{SPF_alginit}, satisfy\n\t\t$$\\limsup_{t\\to\\infty} \\frac{\\|X_t - uv^* \\|_\\mathrm{F}}{\\| uv^* \\|_\\mathrm{F}} \\leq 8.3 \\nu.$$\n\t\tFurthermore, the convergence is linear, i.e., for all $ t \\gtrsim \\log \\left( \\frac{1}{\\varepsilon}\\right) $ we have that\n\t\t\\begin{equation}\\label{equ:linearconvergence}\n\t\t\\frac{\\| X_{t} - uv^* \\|_\\mathrm{F}}{\\| uv^* \\|_\\mathrm{F}} \\leq 8.3 \\nu + \\varepsilon.\n\t\t\\end{equation}\n\t\\end{thm}\n\t\nIn the following we will discuss some important special cases of Theorem \\ref{thm:mainresultreadable}.\n\\begin{itemize}\n\\item \\textbf{Peaky signals: } In \\cite{paper2} the authors discuss recovery guarantees for signals $u$ and $v$ with $\\tfrac{\\Vert u \\Vert_{\\infty} }{\\Vert u \\Vert} $ and $\\tfrac{\\Vert v \\Vert_{\\infty} }{\\Vert v \\Vert} $ both bounded below by an absolute constant $\\mu \\approx 0.78$. The case $k=1$ of our theorem yields a direct improvement of this result in the sense that $\\mu$ can be chosen arbitrarily small, with the number of required measurements only increasing by a factor of order $ \\mu^{-8} $. Hence, even when this constant decays logarithmically in the dimension, the required number of measurements will only increase by logarithmic factors.\n\\item \\textbf{Signals with multiple large entries: } When one of the input signals has multiple large entries, using the $\\Vert \\cdot \\Vert_{[k]} $ norm improves upon the resulting guarantee as compared to the scenario just discussed. As an example, assume that $s_1=s_2=s $, that $u$ and $v$ are normalized with $\\|v\\|_\\infty \\geq c_1 s^{-1\/8}$, and that $k=c_2 s^{1\/2}$ of the entries of $u$ are of absolute value at least $ c_3s ^{-1\/4}$. Then $ \\Vert u \\Vert_{\\left[k\\right]} \\ge \\sqrt{ c_2 } c_3 $. Using Theorem \\ref{thm:mainresultreadable} we obtain that the vectors $u$ and $v$ can be recovered if the number of measurements is on the order of $ s^{3\/2}$, thus below the order of $s^2$ that has been established for arbitrary sparse signals in \\cite{strohmer} (cf. the next item). In contrast, applying Theorem \\ref{thm:mainresultreadable} with $k=1$ would yield that the number of measurements would have to be on the order of $ s^{5\/2} $, which is worse than the state of the art.\n\\item \\textbf{Arbitrary sparse signals:}\tApplying Theorem \\ref{thm:mainresultreadable} to non-peaky signals yields suboptimal results. Indeed, let $u \\in \\mathbb{C}^{n_1}$ be a generic $s_1$-sparse vector and $v \\in \\mathbb{C}^{n_2} $ a generic $s_2$-sparse vector. 
Observe that $ \\Vert v \\Vert_{\\infty} \\asymp \\frac{1}{\\sqrt{s_2}} \\Vert v \\Vert $.\nConsequently, Theorem \\ref{thm:mainresultreadable} applied with $ \\xi =1 $, $ k= s_1 $, and $ \\mu = \\frac{1}{\\sqrt{s_2}} $ yields that with high probability a generic $s_1$-sparse $u$ and a generic $s_2$-sparse $v$ can be recovered from $ b = \\mathcal{A} \\left( uv^* \\right) +z $ if the number of measurements satisfies\n\\begin{equation*}\nm \\ge C \\max \\left\\{ s_1; s_2^2 \\right\\} \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\\end{equation*}\t\nand if the noise level $\\nu$ is of order $ \\mathcal{O} \\left( \\min \\left\\{ \\frac{1}{s_2}; \\frac{1}{\\sqrt{s_1}} \\right\\} \\right) $. Previous results (see, e.g., \\cite{strohmer}), in contrast, require $ m \\ge C \\max \\left\\{ s^2_1; s^2_2 \\right\\} \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right) $ samples.\n\\end{itemize}\t\n\n\n\\begin{remark}\nThe peakiness assumptions in Theorem \\ref{thm:mainresultreadable} may seem arbitrary at first sight, but in certain applications they are reasonable. Namely, when $u$ is the signal transmitted via a wireless channel and $v$ is the unknown vector of channel parameters, it is natural to assume that $v$ has a large entry, as the direct path will always carry most of the energy. The signal $u$ can be modified by the sender, so some large entries can be artificially introduced. In this regard, being able to consider multiple entries of comparable size is advantageous, as adding a single very large entry would result in a dramatic increase of the peak-to-average power ratio.\n\\end{remark}\n\t\t\n\\section{Proofs}\\label{sectionproof}\n\t\\subsection{Technical tools}\n\tThe goal of this section is to prove Theorem \\ref{thm:mainresultreadable}.\n\tWe will start by recalling the following variant of the well-known restricted isometry property.\n\t\\begin{defi}[see \\cite{paper2}]\n\tA linear operator $ \\mathcal{A}$ has the $(s_1, s_2, r )$-restricted isometry property with constant $\\delta$ if \n\t\\begin{equation}\n\t\\left(1-\\delta\\right) \\Vert X \\Vert_F^2 \\le \\Vert \\mathcal{A} \\left( X \\right) \\Vert^2 \\le \\left( 1+\\delta \\right) \\Vert X \\Vert^2_F\n\t\\end{equation}\n\tfor all matrices $ X \\in \\mathbb{C}^{n_1 \\times n_2}$ of rank at most $r$ with at most $s_1$ non-zero rows and at most $s_2$ non-zero columns.\n\t\\end{defi}\n\t\\noindent The following lemma tells us that this property holds with high probability for a number of measurements close to the information-theoretic limit.\n\t\\begin{lemma}[See, e.g., Theorem III.7 in \\cite{paper2}]\\label{thm37}\n\tThere are absolute constants $c_1, c_2>0 $ such that if\n\t\\begin{equation}\\label{necessarymeasurementsrip}\n\tm \\ge \\frac{c_1}{\\delta^2} r \\left(s_1 + s_2 \\right) \\log \\left( \\max \\left\\{ \\frac{n_1}{s_1}, \\frac{n_2}{s_2} \\right\\} \\right),\n\t\\end{equation}\n\tfor some $\\delta >0$, then with probability at least $1-\\exp \\left( - c_2 m \\right) $ the operator $ \\mathcal{A}$ has the $(s_1, s_2, r)$-restricted isometry property with restricted isometry constant $\\delta$.\n\\end{lemma}\n\\noindent As in \\cite[Lemma VIII.7]{paper2} we will need the following quantity, which depends on $ \\delta$ and $ \\nu $:\n\t\\begin{align}\\label{def:omegasup}\n\t\t\t&\\omega_\\mathrm{sup} 
:=\\sup\\left\\{\\omega\\in[0,\\tfrac{\\pi}{2}):\\omega\\geq\\arcsin\\left(C_\\delta[\\delta\\tan(\\omega)+(1+\\delta)\\nu\\sec(\\omega)]\\right)\\right\\}.\n\t\\end{align}\n\tHere, the constant $C_\\delta $ is given by the expression\n\t\\begin{equation*}\n\tC_{\\delta} = 1.1 \\frac{ \\sqrt{ \\frac{2}{1-\\delta^2} } + \\frac{1}{1-\\delta}}{1- \\sqrt{ \\frac{2}{1-\\delta^2} } \\delta },\n\t\\end{equation*}\n\tas can be seen by an inspection of the proof of Lemma VIII.1 in \\cite{paper2}. The precise value of $ C_{\\delta}$ will not be important in the following; we will only use that $ 2 \\le C_{\\delta} \\le 5 $ for $\\delta \\le 0.04 $. \\\\\n\t\n\\noindent A simple estimate for $ \\omega_{\\sup}$ is given by the following lemma.\n\\begin{lemma}\\label{sin05}\n\tAssume that $ \\delta \\le 0.04 $ and $\\nu \\le 0.04 $. Then it holds that \n\t\\begin{equation*}\n\t\t\\tfrac{1}{2} \\leq \\sin(\\omega_\\mathrm{sup}) \\leq 1.\n\t\\end{equation*}\n\\end{lemma}\n\t\n\\begin{proof}\n\n\tWe observe that in order to show the claim it is enough to verify that $ \\omega= \\arcsin \\frac{1}{2} $ fulfills the inequality in (\\ref{def:omegasup}). Indeed, using $ \\cos \\omega = \\sqrt{\\frac{3}{4}} $, $ \\delta \\le 0.04 $, $ \\nu \\le 0.04 $, and $ C_{\\delta} \\le 5 $, we obtain that\n\t\\begin{align*}\n\t\tC_{\\delta} \\left[ \\delta \\tan \\left( \\arcsin \\frac{1}{2} \\right) + \\left(1+ \\delta\\right) \\nu \\sec \\left( \\arcsin \\frac{1}{2} \\right) \\right] &\\le C_{\\delta} \\left[ 0.04 \\frac{1\/2}{\\sqrt{3\/4}} + \\frac{ 1.04 \\cdot 0.04 }{\\sqrt{3\/4}} \\right] \\\\\t\t\n\t\t&\\le \\frac{1}{2}.\n\t\\end{align*}\n\n\\end{proof}\n\t\n\t\\noindent The quantity $ \\omega_{\\sup} $ controls the maximal angle between the initialization $ v_0$ and the ground truth $v$ such that Sparse Power Factorization is guaranteed to converge, as captured by the following theorem.\n\t\t\\begin{thm}[Theorem III.9 in \\cite{paper2}]\\label{thm39}\n\t\tAssume that\n\t\t\\begin{enumerate}[1)]\n\t\t\n\t\t\t\\item $\\mathcal{A}$ has the $(3s_1,3s_2,2)$-RIP with isometry constant $\\delta\\leq0.08$,\n\t\t\t\\item $\\nu \\leq 0.08$,\n\t\t\t\\item the initialization $v_0$ satisfies $\\sin(\\angle(v_0,v)) < \\sin \\left( \\omega_{\\sup} \\right)$.\n\t\t\\end{enumerate}\n\t\tThen the iterates $\\{X_t\\}_{t\\in\\mathbb{N}} $ generated by Algorithm \\ref{SPF.alg}, initialized with such a $v_0$, satisfy\n\t\t$$\\limsup_{t\\to\\infty} \\frac{\\|X_t - uv^*\\|_\\mathrm{F}}{\\|uv^*\\|_\\mathrm{F}} \\leq 8.3 \\nu.$$\n\t\tFurthermore, the convergence is linear in the sense of (\\ref{equ:linearconvergence}).\n\t\\end{thm}\n\t\\noindent Thus, it remains to verify that the initialization satisfies $\\sin(\\angle(v_0,v)) < \\sin \\left( \\omega_{\\sup} \\right)$. The following lemma gives an upper bound on $\\sin(\\angle(v_0,v))$.\t\n\t\t\\begin{lemma}[Lemma VIII.10 in \\cite{paper2}]\\label{lemma8.10}\n\t\tAssume that the $(3s_1, 3s_2, 2 )$-restricted isometry property holds for some constant $ \\delta >0 $. Furthermore, assume that $ \\Vert u \\Vert = \\Vert v \\Vert =1 $. Let $\\widehat{J_1} \\subseteq \\left[n_1\\right] $ and $\\widehat{J_2} \\subseteq \\left[n_2\\right]$ denote the index sets constructed by Algorithm \\ref{SPF_alginit}.\n\t\t\n\t\t Denote by $v_0$ the leading right singular vector of $\\Pi_{\\widehat{J_1}}[\\mathcal{A}^\\ast(b)]\\Pi_{\\widehat{J_2}}$. 
Then it holds that\n\t\t\\begin{equation}\\label{ineq:sufficientcondition2}\n\t\t\\sin(\\angle(v_0,v)) \\leq \\frac{\\big\\|\\Pi_{\\widehat J_1}u\\big\\|\\big\\|\\Pi_{\\widehat J_2}^\\perp v\\big\\| + (\\delta + \\nu+\\delta\\nu)}{\\big\\|\\Pi_{\\widehat J_1}u\\big\\|-(\\delta + \\nu+\\delta\\nu)}.\n\t\t\\end{equation}\n\t\\end{lemma}\n\n\t\\noindent Furthermore, we will need the following two lemmas for our proof.\n\\begin{lemma}[Lemma VIII.12 in \\cite{paper2}]\\label{lemma:lastlemma}\nLet $u$ and $v$ be as in Lemma \\ref{lemma:supportlowerbound} and assume that the measurement operator $ \\mathcal{A}$ satisfies the $ \\left( 3s_1, 3s_2, 2 \\right) $-restricted isometry property with constant $ \\delta $. Recall that $\\widehat J_1 \\subset \\left[n_1\\right] $ is the support estimate for $u$ computed by the initialization algorithm, Algorithm \\ref{SPF_alginit}. Define\n\\begin{equation}\\label{equ:definitionJ1}\n\t\\widetilde{J_1} := \\left\\{ j \\in \\left[ n_1 \\right]: \\ \\vert u_j \\vert \\ge 2 \\left( \\delta + \\nu + \\delta \\nu \\right) \\right\\}.\n\\end{equation}\t\nThen we have that $ \\widetilde{J_1} \\subset \\widehat J_1 $. \n\\end{lemma}\t\n\t\n\\begin{lemma}\\label{lemma:supportlowerbound}\n\tAssume that $ \\mathcal{A}$ has the $(3s_1, 3s_2, 2)$-restricted isometry property with isometry constant $ \\delta >0 $ and assume that $u$, respectively $v$, is $s_1$-sparse, respectively $s_2$-sparse, and that $ \\Vert u \\Vert = \\Vert v \\Vert =1 $. Let $ \\widetilde{J_1} $ be defined as in (\\ref{equ:definitionJ1}).\n\tThen, it holds that\n\t\\begin{equation*}\n\t\\big\\|\\Pi_{ \\widehat J_1 } u \\big\\| \\big\\| \\Pi_{ \\widehat J_2 } v \\big\\| \\ge \\big\\|\\Pi_{ \\widetilde{J_1} } u \\big\\| \\Vert v \\Vert_{\\infty} - 2 \\left( \\delta + \\nu +\\delta \\nu \\right).\n\t\\end{equation*}\n\\end{lemma}\n\n\n\\noindent Lemma \\ref{lemma:supportlowerbound} is actually a slight generalization of what has been shown in \\cite[p. 1685]{paper2}. For completeness we have included a proof in Section \\ref{section:supportlowerbound}, which closely follows the proof in \\cite{paper2}.\\\\\n\n\n\t\\subsection{Proof of our main result}\n\t\tWe will now piece together these ingredients to obtain a sufficient condition; in the remainder of the section we will then show that this condition holds in our measurement setup. First note that in order to apply Theorem \\ref{thm39} we need to check that $\\sin(\\angle(v_0,v)) < \\sin \\left( \\omega_{\\sup} \\right)$ is satisfied. By Lemma \\ref{lemma8.10} it is sufficient to show that the right-hand side of inequality (\\ref{ineq:sufficientcondition2}) is strictly smaller than $ \\sin \\left( \\omega_{\\sup} \\right) $. 
Combining this with the equality $ \\big\\|\\Pi_{\\widehat J_2}^{\\perp}v\\big\\| = \\sqrt{ 1 - \\big\\|\\Pi_{\\widehat J_2}v\\big\\|^2 } $, we obtain the sufficient condition\n\t\t\\begin{equation*}\n\t\t\\big\\|\\Pi_{\\widehat J_1}u\\big\\| \\sqrt{ 1 - \\big\\|\\Pi_{\\widehat J_2}v\\big\\|^2 } < \\sin \\left( \\omega_{\\sup} \\right) \\left( \\big\\|\\Pi_{\\widehat J_1}u\\big\\| - \\left( \\delta + \\nu+\\delta\\nu\\right) \\right) - \\left( \\delta + \\nu+\\delta\\nu \\right). \n\t\t\\end{equation*}\n\t\tFurther manipulations yield that this is equivalent to\n\t\t\\begin{equation}\\label{ineq:sufficientcondition}\n\t\t\\begin{split}\n\t\t\\big\\|\\Pi_{\\widehat J_1}u\\big\\|^2 < &\\left( \\sin \\left( \\omega_{\\sup} \\right) \\big\\|\\Pi_{\\widehat J_1}u\\big\\| - \\left( 1+ \\sin \\left( \\omega_{\\sup} \\right) \\right) \\left( \\delta + \\nu+ \\delta \\nu \\right) \\right)^2\\\\\n\t\t+ & \\big\\|\\Pi_{\\widehat J_1}u\\big\\|^2 \\big\\|\\Pi_{\\widehat J_2} v\\big\\|^2.\n\t\t\\end{split} \n\t\t\\end{equation}\t\t\n \t\tHence, in the following our goal will be to verify (\\ref{ineq:sufficientcondition}).\n\t\t\\noindent We already noticed that the angle $ \\omega_{\\sup} $ measures how well the vector $v_0$ given by the initialization has to be aligned with the ground truth $v$ in order for Sparse Power Factorization to converge. Consequently, it is natural to expect that the smaller the constant $ \\delta$ and the noise-to-signal ratio $\\nu$, the less the initialization vector has to be aligned with the ground truth, i.e., the larger $ \\omega_{\\sup} $ can be. This fact is captured by the following lemma.\n\t\\begin{lemma}\\label{sin2}\n\t\tLet $\\delta \\leq 0.04$ and $\\nu \\leq 0.04$. Then it holds that\n\t\t$$\\sin(\\omega_\\mathrm{sup}) \\geq 1 -C_{\\delta}^2\\left(\\delta + 2\\delta\\nu+2\\nu\\right)^2.$$\n\t\\end{lemma}\n\t\\begin{proof}\n\t\tIt follows directly from \\eqref{def:omegasup} that\n\t\t\\begin{align*}\n\t\t \\omega_{\\sup} &= \\arcsin \\left( C_{\\delta} \\left[ \\delta \\tan \\left( \\omega_{\\sup} \\right) + \\left( 1+ \\delta \\right) \\nu \\sec \\left( \\omega_{\\sup} \\right) \\right] \\right).\n\t\t\\end{align*}\n\t\tUsing trigonometric identities we obtain that\n\t\t\\begin{align*}\n\t\t \\sin \\left( \\omega_{\\sup} \\right) &= C_{\\delta} \\left[ \\delta \\frac{\\sin \\left( \\omega_{\\sup} \\right)}{\\sqrt{1-\\sin \\left( \\omega_{\\sup} \\right)^2 }} + \\left(1+\\delta \\right) \\nu \\frac{1}{\\sqrt{ 1- \\sin \\left( \\omega_{\\sup} \\right)^2 }} \\right].\n\t\t\\end{align*}\n\t\tAs Lemma \\ref{sin05} yields $ 1 \\le 2 \\sin \\left( \\omega_{\\sup} \\right) $, this implies that\n\t\t\\begin{equation*}\n\t\t\\sin \\left( \\omega_{\\sup} \\right) \\le \\frac{ \\sin \\left( \\omega_{\\sup} \\right) }{\\sqrt{1-\\sin \\left( \\omega_{\\sup} \\right)^2}} C_{\\delta} \\left( \\delta + 2 \\left( 1+ \\delta \\right) \\nu \\right).\n\t\t\\end{equation*}\n\tRearranging terms yields that\n\t\t\\begin{equation*}\n\t\t\\sin \\left( \\omega_{\\sup} \\right) \\ge \\sqrt{1 - C^2_{\\delta} \\left( \\delta +2\\delta \\nu +2\\nu \\right)^2 }.\n\t\t\\end{equation*}\n\t\tThe claim then follows using the fact that $ \\sqrt{x} \\ge x $ for all $ x \\in \\left[ 0,1\\right] $.\n\t\t\\end{proof}\t\n\t\\noindent With these preliminary lemmas at hand, we can now prove the following proposition, which is a slightly more general form of Theorem \\ref{thm:mainresultreadable}.\n\t\t\\begin{prop}\\label{prop:mainproposition}\n\tThere are absolute constants $c_1, c_2, c_3 >0$ such that if\n\t\\begin{equation}\\label{equ:numbermeasurements}\n\tm \\ge c_1 
\\delta^{-2} \\left(s_1 + s_2 \\right) \\log \\left(\\max\\left\\{\\frac{n_1}{s_1},\\frac{n_2}{s_2}\\right\\}\\right),\n\t\\end{equation}\t\t\n\tfor some $ 0 < \\delta <0.01$, then with probability at least $ 1-\\exp \\left( - c_2 m \\right) $ the following statement holds uniformly for all $s_1$-sparse $u \\in \\mathbb{C}^{n_1} $, all $s_2$-sparse $v \\in \\mathbb{C}^{n_2} $, and all $ z \\in \\mathbb{C}^m $ such that $\\Vert u \\Vert = \\Vert v \\Vert =1 $ and $ \\nu \\left(z\\right) \\le 0.01 $:\\\\\t\n\t\\noindent Let the measurements be given by $ b= \\mathcal{A} \\left( uv^* \\right) +z $ for $ \\mathcal{A} $ Gaussian as above, and let $ \\widetilde J_1 $ be defined by\n\t\\begin{equation}\\label{equ:definitionJ2}\n\t\\widetilde J_1 := \\left\\{ j \\in \\left[ n_1 \\right]: \\ \\vert u_j \\vert \\ge M_{\\delta,\\nu} \\right\\},\n\t\\end{equation}\n\twhere\n\t\\begin{equation*}\n\tM_{\\delta, \\nu} := 2 \\left( \\delta + \\nu + \\delta \\nu \\right).\n\t\\end{equation*}\n\tThen, whenever\n\t\\begin{equation}\\label{ineq:peakinessassumption}\n\t\\big\\|\\Pi_{ \\widetilde J_1 } u \\big\\| \\Vert v \\Vert_{\\infty} > c_3 \\sqrt{ M_{\\delta,\\nu} } ,\n\t\\end{equation}\n\tthe iterates $\\{X_t\\}_{t\\in\\mathbb{N}}$ generated by Algorithm \\ref{SPF.alg}, initialized via Algorithm \\ref{SPF_alginit}, satisfy\n\t$$\\limsup_{t\\to\\infty} \\|X_t - uv^*\\|_\\mathrm{F} \\leq 8.3 \\nu.$$ \n\tFurthermore, the convergence is linear in the sense of (\\ref{equ:linearconvergence}).\n\\end{prop}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:mainproposition}]\t\n\n\n\tAssumption (\\ref{equ:numbermeasurements}) and \\Cref{thm37} yield that with probability at least $1 - \\exp \\left(-c m \\right) $ the ($3s_1$,$3s_2$,$2$)-restricted isometry property holds with constant $\\delta $.\n\tFor the remainder of the proof, we will consider the event that the restricted isometry property holds for such $\\delta$. \t\n\n\tBy choosing the constant $c_3$ in assumption (\\ref{ineq:peakinessassumption}) large enough and using $ 2\\le C_{\\delta} \\le 5 $, we obtain\n\t\\begin{equation*}\n\t\\big\\|\\Pi_{ \\widetilde J_1 } u \\big\\| \\Vert v \\Vert_{\\infty} \\ge \\left( \\sqrt{ C^2_{\\delta} +1 } + 1 \\right) \\sqrt{M_{\\delta,\\nu}}.\n\t\\end{equation*}\n\tCombining this with Lemma \\ref{lemma:supportlowerbound} we obtain that\n\t\\begin{equation}\\label{ineq:chain4}\n\t\\begin{split}\n\t\\big\\|\\Pi_{ \\widehat J_1 } u \\big\\| \\big\\| \\Pi_{ \\widehat J_2 } v \\big\\| &\\ge \\big\\|\\Pi_{ \\widetilde{ J_1} } u \\big\\| \\Vert v \\Vert_{\\infty} - M_{\\delta, \\nu}\\\\\n\t&> \\sqrt{ \\left( C^2_{\\delta} + 1 \\right) M_{\\delta, \\nu} },\n\t\\end{split}\n\t\\end{equation}\n\twhere we used that $\\sqrt{x} \\ge x $ for all $ x \\in \\left[0,1\\right] $. This yields a lower bound for the second summand of the right-hand side of (\\ref{ineq:sufficientcondition}). 
To bound the first summand we estimate\n\t\\begin{equation}\\label{ineq:chain6}\n\t\\begin{split}\n\t&\\sin(\\omega_\\mathrm{sup}) \\|\\Pi_{\\widehat J_1}u\\| - \\left( \\sin(\\omega_\\mathrm{sup}) +1 \\right) \\left( \\delta+\\nu + \\delta \\nu \\right)\\\\\n\t\\ge& \\left( 1- C^2_{\\delta} \\left( \\delta + 2\\nu + 2\\delta \\nu \\right)^2 \\right) \\|\\Pi_{\\widehat J_1}u\\| -2 \\left( \\delta + \\nu +\\delta \\nu \\right) \\\\\n\t\\ge& \\|\\Pi_{\\widehat J_1}u\\| - C^2_{\\delta} \\left( \\delta +2 \\nu + 2\\delta \\nu \\right)^2 - M_{\\delta,\\nu} \\\\\n\t\\ge & \\|\\Pi_{\\widehat J_1}u\\| - \\tfrac{1}{2} \\left( C^2_{\\delta}+1 \\right) M_{\\delta,\\nu} \\\\\n\t\\ge & 0.\n\t\\end{split}\n\t\\end{equation}\n\tIn the first line we used \\Cref{sin2} and the fact that $ \\sin(\\omega_\\mathrm{sup}) \\le 1 $. The second line is due to $ \\|\\Pi_{\\widehat J_1}u\\| \\le 1 $, and the third inequality follows from $ \\left( \\delta +2 \\nu + 2\\delta \\nu \\right)^2 \\le M_{\\delta,\\nu}^2 $ together with $ C^2_{\\delta} M_{\\delta,\\nu} \\le \\tfrac{1}{2} \\left( C^2_{\\delta}-1 \\right) $, which holds since $ 0 \\le \\delta, \\nu \\le 0.01 $ and $ C_{\\delta} \\ge 2 $. In order to verify the last inequality it is enough to observe that, due to Lemma \\ref{lemma:lastlemma} and due to assumption (\\ref{ineq:peakinessassumption}) with $c_3$ large enough,\n\t\\begin{align*}\n\t\\|\\Pi_{\\widehat J_1}u\\| &\\ge \\Vert \\Pi_{\\widetilde{J_1}} u \\Vert \\ge \\Vert \\Pi_{\\widetilde{J_1}} u \\Vert \\, \\Vert v \\Vert_{\\infty} \\ge \\left( C^2_{\\delta} + 1 \\right) M_{\\delta,\\nu},\n\t\\end{align*}\n\twhere the last inequality uses that $ C_{\\delta} \\le 5 $ and $ 0 \\le \\delta, \\nu \\le 0.01 $. Hence, by squaring (\\ref{ineq:chain6}) we obtain that\n\t\\begin{equation}\\label{ineq:chain5}\n\t\\begin{split}\n\t&\\left( \\sin(\\omega_\\mathrm{sup}) \\|\\Pi_{\\widehat J_1}u\\| - \\left( \\sin(\\omega_\\mathrm{sup}) +1 \\right) \\left( \\delta + \\nu + \\delta \\nu \\right) \\right)^2\\\\\n\t\\ge & \\left( \\|\\Pi_{\\widehat J_1}u\\| - \\frac{1}{2}\\left( C^2_{\\delta}+1 \\right) M_{\\delta, \\nu} \\right)^2\\\\\n\t\\ge& \\|\\Pi_{\\widehat J_1}u\\|^2 - \\left( C^2_{\\delta}+1 \\right) M_{\\delta, \\nu} \\|\\Pi_{\\widehat J_1}u\\| \\\\\n\t\\ge & \\|\\Pi_{\\widehat J_1}u\\|^2 - \\left( C^2_{\\delta}+1 \\right) M_{\\delta, \\nu},\n\t\\end{split}\n\t\\end{equation}\n\twhere in the last line we again used that $ \\|\\Pi_{\\widehat J_1}u\\| \\le 1 $. Together with (\\ref{ineq:chain4}) this yields (\\ref{ineq:sufficientcondition}), as desired.\n\t\n\\end{proof}\n\\noindent Finally, we will deduce Theorem \\ref{thm:mainresultreadable} from Proposition \\ref{prop:mainproposition}.\n\\begin{proof}[Proof of Theorem \\ref{thm:mainresultreadable}]\n\tWe will prove this result by applying Proposition \\ref{prop:mainproposition} with \n\t\\begin{equation}\n\t\\delta = \\min \\left\\{ \\frac{\\xi}{6 \\sqrt{2k}} ; \\frac{\\xi^2 \\mu^2}{8c_3^2} \\right\\}.\n\t\\end{equation}\n Let $ u \\in \\mathbb{C}^{n_1}$ be $s_1$-sparse, let $ v \\in \\mathbb{C}^{n_2} $ be $s_2$-sparse, and let $z \\in \\mathbb{C}^m $ be such that the assumptions of Theorem \\ref{thm:mainresultreadable} are satisfied. 
Without loss of generality we may assume in the following that $\\Vert u \\Vert = \\Vert v \\Vert =1 $.\n\tFirst, we note that using $ \\delta, \\nu < 0.01 $ and potentially decreasing the size of $C_3$, we have that\n\t\\begin{align*}\n\t2 \\left( \\delta + \\nu \\left(z\\right) + \\delta \\nu \\left(z\\right) \\right) < 2 \\left( \\delta + 2 \\nu \\left(z\\right) \\right) \\le \\frac{\\xi}{\\sqrt{2k}}.\n\t\\end{align*}\n\tHence, we obtain that\n\t\\begin{equation}\\label{equ:Jinclusion}\n\t\\breve{J}_1:= \\left\\{ j \\in \\left[ n_1 \\right]: \\ \\vert u_j \\vert \\ge \\frac{\\xi}{\\sqrt{2k}} \\right\\} \\subset \\widetilde{J}_1,\n\t\\end{equation}\n\twhere $ \\widetilde{J}_1 $ is the set defined in (\\ref{equ:definitionJ2}). \n\n\tNote that\n\t\\begin{equation*}\n\t\\sum_{i \\in \\left[k\\right] \\backslash \\breve{J}_1 } \\left( u^*_i \\right)^2 < \\sum_{i \\in \\left[k\\right] \\backslash \\breve{J}_1 } \\frac{\\xi^2}{2k} \\le \\frac{\\xi^2}{2},\n\t\\end{equation*}\n\twhere in the first inequality we have used that $ u^*_i < \\frac{\\xi}{\\sqrt{2k}} $ for all $ i \\in \\left[k\\right] \\backslash \\breve{J}_1 $. By the assumption $ \\Vert u \\Vert_{\\left[k\\right]} \\ge \\xi $ this yields that $ \\sum_{i \\in \\left[k\\right] \\cap \\breve{J}_1 } \\left( u^*_i \\right)^2 \\ge \\frac{\\xi^2}{2} $, which in turn implies that $ \\Vert \\Pi_{\\breve{J}_1} u \\Vert \\ge \\frac{\\xi}{\\sqrt{2}} $. By the inclusion (\\ref{equ:Jinclusion}) we obtain that $ \\Vert \\Pi_{ \\widetilde J_1 } u \\Vert \\ge \\frac{\\xi}{\\sqrt{2}} $. Hence, using the assumption $ \\Vert v \\Vert_{\\infty} \\ge \\mu $, our choice of $\\delta$, the assumption on the noise level $ \\nu \\left( z \\right) $, and potentially again decreasing the value of the constant $C_3$, we obtain that\n\t\\begin{equation*}\n\t\\Vert \\Pi_{\\widetilde{J}_1} u \\Vert \\Vert v \\Vert_{\\infty} \\ge \\frac{\\xi \\mu }{ \\sqrt{2} } \\ge c_3 \\sqrt{ M_{\\delta, \\nu} }.\n\t\\end{equation*}\n\tThis shows that (\\ref{ineq:peakinessassumption}) is satisfied. Hence, we can apply Proposition \\ref{prop:mainproposition}, and by inserting our choice of $\\delta$ into (\\ref{equ:numbermeasurements}) and choosing the constant $C_1$ large enough, we obtain the main result.\n\t\n\\end{proof}\n\n\n\n\t\\section{Outlook}\n\tWe see many interesting directions for follow-up work. Most importantly, it remains to explore whether additional constraints on the signals to be recovered are truly necessary (cf. our discussion of SparsePCA in Section \\ref{Initialization}). Even if this is the case, there is substantial room for improvement with respect to the noise-dependence of the recovery results. A way to proceed could be to consider stochastic noise models instead of deterministic noise. Also, in this work we exclusively considered operators $\\mathcal{A}$ constructed using Gaussian matrices. However, in many applications of interest, the measurement matrices possess a significantly reduced amount of randomness. For example, in blind deconvolution one typically encounters rank-one measurements. As a consequence, the restricted isometry property as used in this paper does not hold, and additional insight is needed to study whether there exists a computationally tractable initialization procedure at a near-optimal sampling rate. 
First steps in this direction were taken in \\cite{lee2015rip,lee2017blind}, but many questions remain open.\n\t\n\t\n\t\n\t\\section*{Acknowledgements}\n\tJakob Geppert is supported by the German Science Foundation (DFG) in the Collaborative Research Centre ``SFB 755: Nanoscale Photonic Imaging'' and partially in the framework of the Research Training Group ``GRK 2088: Discovering Structure in Complex Data: Statistics meets Optimization and Inverse Problems''. Felix Krahmer and Dominik St\\\"oger have been supported by the German Science Foundation (DFG) in the context of the joint project ``SPP 1798: Bilinear Compressed Sensing'' (KR 4512\\/2-1). Furthermore, the authors want to thank Yoram Bresler and Kiryung Lee for helpful discussions.\n\t\n\t\n\n\t\n\t\n\t\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction} \\label{sec:intro}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=1]{f1.eps}\n\\caption{A cartoon picture of our MHD model of BH jets, \nwhere magnetic field lines (solid lines) cross the ergosphere (dashed line) and the event horizon. \nThere is an extended plasma loading zone (shaded region) near the central BH, where particles are injected. \nThe inflow and outflow pattern naturally forms under the mutual influence of the central BH and the EM fields. \\label{fig:cartoon}}\n\\end{figure}\n\n\nRelativistic jets launched by accreting black holes (BHs) play an essential role in several energetic astrophysical phenomena,\nincluding stellar-mass BH X-ray binaries, active galactic nuclei and possibly gamma-ray bursts.\nAfter decades of debates among astrophysical communities, many open questions concerning the nature of the BH jets still remain to be answered \\citep[see e.g.,][for reviews]{Meier01, Blandford18}. To name a few of the most fundamental ones: what the central engines of the jets are; how the fluid within the jets is accelerated to relativistic speeds; and how the jets are collimated.\n\nThe Blandford-Znajek (BZ) mechanism \\citep{BZ77,Znajek77}, which describes an electromagnetic (EM) process of extracting the rotation energy of the central BH in the form of Poynting flux, is believed to be the most promising candidate for the central engines\nof the BH jets. For understanding the jets powered by the BZ mechanism, one needs to study magnetohydrodynamic (MHD) processes in the Kerr spacetime, where the EM fields and the fluid motion are coupled in a complicated way. Therefore, in many previous studies on this subject, some components of the MHD process are treated as dynamical variables while the other components are prescribed.\nFor studying the EM fields of the jet, force-free electrodynamics (FFE) is a convenient assumption, where\nthe fluid energy is ignored and the EM fields are self-contained \\citep[e.g.,][]{Tanabe08, Kom01, Kom02, Kom04a, \nKom04b, Kom05, Kom07,Beskin13, Contop13,Nathan14, Gralla14,Gralla15,Gralla16,Pan14,Pan15,Pan15b,Pan16,Pan17,Pan18, \nYang14, Yang15, East18, Mahlmann18}.\nFor studying the fluid motion within the jet, people usually treat the fluid as a test fluid in prescribed EM fields \\citep[e.g.,][]{Takahashi90, Beskin06, Globus13,Globus14,Pu12, Pu15}. 
There have also been some full MHD attempts, where only the\noutflow pattern and the jet structure in the weak-gravity regime are addressed \\citep[see e.g.,][for self-similar outflow solutions in pseudo-potential]{Polko13, Polko14,Cecco18} \nand \\citep[see e.g.,][for outflow solutions in Minkowski spacetime]{Beskin98, Beskin00,Lyub09, Tchek09, Beskin10,Beskin17}.\nFor understanding the physics of BH accretion systems, general relativistic MHD (GRMHD) simulation has been another powerful tool \nin the past two decades, in which the full MHD equations in curved spacetime are solved. \nNevertheless, GRMHD codes tend to become unstable in near-vacuum regions, and therefore a matter density floor is usually introduced \n\\citep[e.g.,][]{Gammie03, Shibata05, Porth17}, which may obscure our understanding of plasma loading and the flow within the jet. \n\nBesides all the theoretical explorations summarized above, substantial progress\nin spatial resolution has been made on the observation side. In particular, the Event Horizon Telescope (EHT) \n\\citep[e.g.,][]{Doel08,Doel12,Ricarte15,EHT19I,EHT19V} is expected \nto resolve the structure of nearby supermassive BHs (Sgr A$^*$ and M87) down to horizon scales.\nIt will be possible to unveil the physical nature of the jets in these systems if the coming EHT observations can \nbe correctly deciphered. This motivates us to construct a full GRMHD jet model, considering that \nall the previous studies are subject to different limitations.\n\nIn this paper, we aim to construct a GRMHD framework for investigating the structure of steady and axisymmetric jets \nof spinning BHs, in which the EM fields and the fluid motion are self-consistently determined. \nA cartoon picture in Fig.~\\ref{fig:cartoon} is shown to illustrate the major elements of our jet model:\na central BH, EM fields, a plasma loading zone, inflow and outflow. \nThe magnetic field lines penetrate the event horizon of a spinning BH and \nextract the rotation energy from the BH in the form of Poynting flux.\nQuantifying plasma loading within the BH jet is also a complicated problem, considering \nthe rich variety of plasma sources, including the accretion flow centered on the equatorial plane, \npair production inside the jet \\citep{Lev11, Brod2015, Hiro16, Chen18}\nand neutrino pair annihilation from an extremely hot accretion flow \\citep[see e.g.][]{Pop99, Narayan01}. \nIn our jet model, we do not deal with these detailed processes. For convenience, we \nintroduce a plasma loading zone where plasma is injected and prescribe the loading function, i.e., the particle number flux per magnetic flux $\\eta(r,\\theta)$. Under the mutual influence of the central BH and the EM fields, the inflow and outflow pattern naturally forms.\nIn summary: we aim to construct a framework for investigating the MHD jet structure of spinning BHs, in which the EM fields and the fluid motion are self-consistently obtained given proper boundary conditions and a proper plasma loading function $\\eta(r,\\theta)$.\n\n\nThis paper is organized as follows. In Section~\\ref{sec:setup}, we summarize some basic equations and assumptions to be used in this paper. We derive the two governing equations, the Bernoulli equation and the MHD Grad-Shafranov (GS) equation, in Section~\\ref{sec:Bern} and Section~\\ref{sec:GS}, respectively. We detail the numerical techniques\nfor solving the governing equations in Section~\\ref{sec:eg}. \nThe numerical solutions of the MHD jet structure with a split monopole magnetic field configuration are presented in Section~\\ref{sec:results}. 
\nSummary and discussion are given in Section~\\ref{sec:summary}. \nFor reference, we place some details for deriving the governing equations in Appendix \\ref{sec:D_der} and \\ref{sec:GS_der}.\nThroughout this paper, we use the geometrical units $c=G=M=1$, where $M$ is the mass of the central BH. \n\n\n\n\n\\section{Basic Setting Up} \\label{sec:setup}\n\nThe background Kerr metric is written in the Boyer-Lindquist coordinates as follows,\n\\begin{eqnarray}\n\t{\\rm d}s^2&=& g_{tt} {\\rm d}t^2 + 2g_{t\\phi}{\\rm d}t {\\rm d}\\phi + g_{\\phi\\phi} {\\rm d}\\phi^2 + g_{rr} {\\rm d}r^2 + g_{\\theta\\theta} {\\rm d}\\theta^2 \\nonumber\\\\\n\t&\\ & \\nonumber\\\\\n\t&=& \\left( \\frac{2Mr}{\\Sigma}-1 \\right) {\\rm d}t^2 - 2\\ \\frac{2Mar\\sin^2\\theta}{\\Sigma} {\\rm d}t {\\rm d}\\phi \\nonumber\\\\\n\t&\\ & + \\frac{\\beta\\sin^2\\theta}{\\Sigma} {\\rm d}\\phi^2 + \\frac{\\Sigma}{\\Delta} {\\rm d}r^2 + \\Sigma {\\rm d}\\theta^2\\ ,\n\\end{eqnarray}\nwhere $a$ and $M$ are the BH spin and mass, respectively, $\\Sigma=r^2+a^2\\cos^2\\theta$, $\\Delta=r^2-2Mr+a^2$, $\\beta =(r^2+a^2)^2-a^2\\Delta\\sin^2\\theta$ and the square root of the determinant $\\sqrt{-g}=\\Sigma\\sin\\theta$.\n\n\nWe investigate the structure of a steady and axisymmetric BH jet and we assume the plasma within the jet is perfectly conducting, i.e., $\\partial_t = \\partial_\\phi = 0$ and $\\mathbf{E} \\cdot \\mathbf{B}=0$, where $\\mathbf{E}$ and \n$\\mathbf{B}$ are the electric and the magnetic fields, respectively. Then all the non-vanishing components of Maxwell tensor are expressed as follows \\citep[see e.g.,][]{Pan14}\n\\begin{equation}\n\\label{eq:Maxwell}\n\\begin{aligned}\n F_{r\\phi} &= -F_{\\phi r} =\\Psi_{,r} \\ , &F_{\\theta\\phi} &= -F_{\\phi\\theta} = \\Psi_{,\\theta} \\ , \\\\\n F_{tr} &= -F_{rt} =\\Omega\\Psi_{,r} \\ , &F_{t\\theta} &= -F_{\\theta t} = \\Omega \\Psi_{,\\theta} \\ , \\\\\n F_{r\\theta} &= -F_{\\theta r} = - \\frac{\\Sigma }{\\Delta\\sin\\theta} I \\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\Psi = \\Psi(r,\\theta)$ is the magnetic flux and $\\Omega = \\Omega(\\Psi)$ is the angular velocity of magnetic field lines. \nFor convenience, we have defined poloidal electric current $I(r,\\theta) \\equiv \\sqrt{-g} F^{\\theta r}$. 
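\n\nFor reference, the metric functions above can be collected into a short numerical routine. The following Python sketch is our own illustration (the function names are ours and are not part of any public code); it evaluates the metric components and the outer horizon radius $r_{\\rm H}=1+\\sqrt{1-a^2}$ in the geometrical units used here:\n\\begin{verbatim}\nimport numpy as np\n\n# Kerr metric functions in Boyer-Lindquist\n# coordinates, with G = c = M = 1.\ndef kerr_metric(r, theta, a):\n    sigma = r**2 + a**2*np.cos(theta)**2\n    delta = r**2 - 2.0*r + a**2\n    beta = (r**2 + a**2)**2 - a**2*delta*np.sin(theta)**2\n    return {'tt': 2.0*r\/sigma - 1.0,\n            'tphi': -2.0*a*r*np.sin(theta)**2\/sigma,\n            'phiphi': beta*np.sin(theta)**2\/sigma,\n            'rr': sigma\/delta,\n            'thth': sigma,\n            'sqrt_mg': sigma*np.sin(theta)}  # sqrt(-g)\n\ndef r_horizon(a):\n    # outer event horizon radius\n    return 1.0 + np.sqrt(1.0 - a**2)\n\\end{verbatim}\n\n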
Therefore,\nthe EM fields are completely determined by three quantities: $\\{\\Psi(r,\\theta), \\Omega(\\Psi), I(r,\\theta)\\}$.\n\nBefore proceeding, it is useful to define a few conserved quantities.\nFrom the perfectly conducting condition $F_{\\mu\\nu} u^\\nu = 0$, we find that the different components\nof the fluid velocity are related by\n\\begin{equation}\n \\frac{ u^r}{\\Psi_{,\\theta}} = - \\frac{u^\\theta}{\\Psi_{,r}}\n\t= \\frac{(u^\\phi-\\Omega u^t)}{F_{r\\theta}}\\ ,\n\\end{equation}\nfrom which we can define the particle number flux per magnetic flux\n\\begin{equation}\\label{eq:eta}\n\\begin{aligned}\n \\eta&\\equiv\\frac{\\sqrt{-g} n u^r}{\\Psi_{,\\theta}} = - \\frac{\\sqrt{-g} n u^\\theta}{\\Psi_{,r}} \\\\\n\t&= \\frac{\\sqrt{-g} n(u^\\phi-\\Omega u^t)}{F_{r\\theta}} \\ .\n\\end{aligned}\n\\end{equation}\nFrom the energy-momentum tensor $T^{\\mu\\nu} = T^{\\mu\\nu}_{\\rm EM} + T^{\\mu\\nu}_{\\rm MT}$, \nwhere the EM part and the matter (MT) part are \n\\begin{equation}\n\\label{eq:energy_tensor}\n\\begin{aligned}\n\tT^{\\mu\\nu}_{\\rm EM}&= \\frac{1}{4\\pi} \\left( F^{\\mu\\rho}F_{\\ \\ \\rho}^{\\nu} - \\frac{1}{4}g^{\\mu\\nu}F_{\\alpha\\beta}F^{\\alpha\\beta} \\right)\\ , \\\\\n\tT^{\\mu\\nu}_{\\rm MT}&= \\rho u^\\mu u^\\nu = nm u^\\mu u^\\nu \\ ,\n\\end{aligned}\n\\end{equation}\nwe define the total energy per particle $E$ and the total angular momentum per particle $L$ as follows,\n\\begin{equation}\\label{eq:EandL}\n\\begin{aligned}\n\tE&\\equiv E_{\\rm MT}+E_{\\rm EM}= -mu_t +\\frac{\\Omega I}{4\\pi\\eta}\\ , \\\\\n\tL&\\equiv L_{\\rm MT}+L_{\\rm EM} = mu_\\phi +\\frac{I}{4\\pi\\eta}\\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$, $n$ and $m$ are the proper energy density, the proper number density and the particle rest mass, respectively;\nand we have assumed a cold plasma.\n\nNow let us examine the conservation property of these quantities along magnetic field lines.\nFor this purpose, we define the derivative along field lines\n\\begin{equation}\n D^\\parallel_\\Psi \\equiv \\frac{1}{\\sqrt{-g}}(\\Psi_{,\\theta}\\partial_r - \\Psi_{,r}\\partial_\\theta)\\ ,\n\\end{equation}\nand it is straightforward to obtain (see Appendix \\ref{sec:D_der})\n\\begin{equation} \\label{eq:D_eta}\n\tD^\\parallel_\\Psi\\eta = (nu^\\mu)_{;\\mu} \\ ,\n\\end{equation}\ni.e., $D^\\parallel_\\Psi\\eta$ quantifies the plasma loading rate. In general, we can write the\nenergy-momentum conservation as \n\\begin{equation}\\label{eq:smu}\n T^{\\mu\\nu}_{\\phantom{xy};\\nu} = S^\\mu,\n\\end{equation}\nwhere the source term $S^\\mu$ comes from plasma loading. 
As a simple example, we assume \n $S^\\mu = (D^\\parallel_\\Psi\\eta) mu^\\mu$ in this paper,\ni.e., the source term is contributed by the kinetic energy of the newly loaded plasma.\nWith a few steps of calculation as detailed in Appendix \\ref{sec:D_der}, we obtain\n\\begin{equation}\\label{eq:D_etaEL}\n\\begin{aligned}\n\tD^\\parallel_\\Psi(\\eta E)&= (D^\\parallel_\\Psi\\eta)(-mu_t)\\ ,\\\\\n\tD^\\parallel_\\Psi(\\eta L)&= (D^\\parallel_\\Psi\\eta)(mu_\\phi)\\ ,\n\\end{aligned}\n\\end{equation}\nwhere $\\eta E$ and $\\eta L$ are the energy flux per magnetic flux and the angular momentum flux per magnetic flux, respectively.\n\\emph{Outside} the plasma loading zone, where there is no particle injection,\nparticle number conservation reads\n\\begin{eqnarray}\n\t\\left(n u^\\mu\\right)_{;\\mu}&=&0\\ ,\n\\end{eqnarray}\nand therefore $\\eta, E, L$ are conserved along field lines,\ni.e., $\\eta=\\eta(\\Psi), E = E(\\Psi), L=L(\\Psi)$.\n\nIn summary: with the assumptions of a steady and axisymmetric jet structure and perfectly conducting plasma within the jet, we have obtained one conserved quantity $\\Omega(\\Psi)$\nand three ``quasi-conserved\" quantities $\\{\\eta(\\Psi), E(\\Psi), L(\\Psi)\\}$,\nwhich are only conserved \\emph{outside} the plasma loading zone.\n\n\\section{Bernoulli equation} \\label{sec:Bern}\n\nFrom the normalization condition $u^\\mu u_\\mu=-1$ and Eqs.~(\\ref{eq:eta},\\ref{eq:EandL}), we obtain the relativistic Bernoulli equation\n\\begin{eqnarray}\\label{eq:Bern}\n\t\\mathcal{F}(u) = u_p^2+1-\\left(\\frac{E}{m}\\right)^2 U_g(r,\\theta) = 0\\ ,\\end{eqnarray}\nwhere $u_p^2 \\equiv u^Au_A$ with the dummy index $A=\\{r,\\theta\\}$.\nIn the Kerr spacetime, the characteristic function $U_g$ is written as \\citep{Camen86a,Camen86b,Camen87,Takahashi90,Fendt01,Fendt04,Levinson06,Pu15}\n\\begin{eqnarray}\n\\label{Ug}\n\tU_g(r,\\theta)&=& \\frac{k_0k_2-2k_2\\mathcal{M}^2-k_4\\mathcal{M}^4}{(\\mathcal{M}^2-k_0)^2}\\ ,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\label{Ugk}\n\tk_0&=& -[g_{tt}+2g_{t\\phi}\\Omega+g_{\\phi\\phi}\\Omega^2]\\ ,\\nonumber\\\\\n\tk_2&=& \\left[ 1-\\Omega(E\\/L)^{-1} \\right]^2\\ ,\\nonumber\\\\\n\tk_4&=& \\frac{\\left[ g_{tt}(E\\/L)^{-2} + 2g_{t\\phi}(E\\/L)^{-1} +g_{\\phi\\phi} \\right]}{g_{tt}g_{\\phi\\phi}-g_{t\\phi}^2}\\ ,\n\\end{eqnarray}\nand the \\Alfven Mach number $\\mathcal{M}$ is given by\n\\begin{equation}\\label{eq:mach}\n \\mathcal{M}^2=\\frac{4\\pi m\\eta^2}{n} = 4\\pi mn\\frac{u_p^2}{B_p^2} = 4\\pi m\\eta\\frac{u_p}{B_p},\n\\end{equation}\nwith the poloidal magnetic field $B_p$ defined by\n\\begin{equation}\n(\\sqrt{-g} B_p)^2 = g_{rr} (\\Psi_{,\\theta})^2 + g_{\\theta\\theta} (\\Psi_{,r})^2 \\ .\n\\end{equation}\n\nSeveral characteristic surfaces can be defined according to the critical points of the flow velocity\n\\citep[see e.g., ][for details]{Michel69,Michel82,Camen86a,Camen86b, Takahashi90, Beskin09}.\nThe light surface (LS) is defined as the surface where the corotation velocity of the field lines reaches the speed of light, beyond which\nparticles are forbidden to corotate with the field lines,\n\\begin{equation}\nk_0 \\big|_{r=r_{\\rm LS}} = 0 \\ .\n\\end{equation}\nThe \\Alfven surface is defined as the surface where the denominator of the characteristic function $U_g(r,\\theta)$ vanishes, i.e.,\n\\begin{equation}\n - k_0 + \\mathcal{M}^2 \\big|_{r=r_A} = 0 \\ .\n\\end{equation}\nOn the \\Alfven surface, we find \n\\begin{equation} \n\\frac{E}{L}= - \\frac{g_{tt} + g_{t\\phi}\\Omega }{g_{t\\phi} + g_{\\phi\\phi}\\Omega}\\Bigg|_{r=r_A}\\ , \n\\end{equation} \nwhere we have used 
Eqs.(\\ref{eq:Bern}-\\ref{Ugk}).\nThe stagnation surface, where $u_p = 0$, is determined by\n\\begin{equation}\\label{eq:stag}\nD^\\parallel_\\Psi k_0 \\big|_{r=r_*}=0 \\ .\n\\end{equation}\nThe fast magnetosonic (FM) surface and the slow magnetosonic (SM) surface are defined as the surfaces where the denominator of $D_\\Psi^\\parallel u_p$ vanishes. In the cold plasma limit, the SM surface coincides with the stagnation surface.\nOn the stagnation surface, where both $u_p$ and $\\mathcal{M}$ vanish, we find\n\\begin{equation} \\label{eq:Estag}\n\\left(\\frac{E}{m} \\right)^2 = \\frac{k_0}{k_2}\\Bigg|_{r=r_*}\\ ,\n\\end{equation}\nwhere we have used Eqs.(\\ref{eq:Bern}, \\ref{Ug}). \n\nPlugging Eq.(\\ref{eq:eta}) into Eq.(\\ref{eq:Bern}), we find that the Bernoulli equation is a polynomial equation of\nfourth order in $u_p$ with the to-be-determined eigenvalue $E\\/L$ (or equivalently, the location of the \\Alfven surface $r_A$), given a prescribed angular velocity $\\Omega$ and particle number flux per magnetic flux $\\eta(r,\\theta)$ \\citep[see e.g.,][]{Camen86a,Camen86b, Takahashi90, Fendt01, Pu12, Pu15}.\n\n\\subsection{Single Loading Surface}\n\nAs a first step towards a full MHD jet solution, we mathematically idealize the plasma loading zone as a single surface, and we choose the stagnation surface (Eq.~(\\ref{eq:stag})) as the plasma loading surface \\citep[see e.g.,][ for a detailed gap model]{Brod2015} in this paper. \nTo define the plasma loading for both inflow and outflow, \nwe introduce a pair of dimensionless magnetization parameters on the loading surface,\n\\begin{equation}\\label{sigM}\n\t\\sigma_*^{\\rm in;out} =\\frac{B_{p,*}}{4\\pi m |\\eta|_{\\rm in;out}}\\ ,\n\\end{equation}\nwhere $B_{p,*}$ is the poloidal field on the loading surface.\nIn this way, the particle number flux per magnetic flux $\\eta$ is completely determined by $\\sigma_*^{\\rm in;out} $, recalling that\n$\\eta$ is a conserved quantity along field lines outside the loading zone. Note that $\\eta_{\\rm in} < 0$ and\n$\\eta_{\\rm out} > 0$; therefore there is a jump in $\\eta$ at the loading surface, i.e., \n$D_\\Psi^\\parallel \\eta \\propto \\delta(r-r_*)$.\n\nUsing Eq.(\\ref{eq:Estag}), the Bernoulli equation (\\ref{eq:Bern}) can be rewritten as a fourth-order polynomial equation,\n\\begin{eqnarray} \\label{Bern2}\n\t\\sum_{i=0}^4A_i u_p^i&=&0\\ ,\n\\end{eqnarray}\nwhere the coefficients $A_i$ are given by\n\\begin{equation}\n\\begin{aligned}\n\tA_4&= \\frac{1}{\\sigma_*^2}\\frac{B_{p,*}^2}{B_p^2}\\ ,\\\\\n\tA_3&= -\\frac{2k_0}{\\sigma_*}\\frac{B_{p,*}}{B_p}\\ ,\\\\\n\tA_2&= k_0^2 + \\left(1 + \\frac{k_{0,*}}{k_{2,*}} k_4\\right) \\frac{1}{\\sigma_*^2}\\frac{B_{p,*}^2}{B_p^2}\\ ,\\\\\n\tA_1&= \\left(-k_0 + \\frac{k_{0,*}}{k_{2,*}} k_2\\right) \\frac{2}{\\sigma_*}\\frac{B_{p,*}}{B_p}\\ ,\\\\\n\tA_0&= k_0^2 - \\frac{k_{0,*}}{k_{2,*}} k_0k_2\\ .\\\\\n\\end{aligned}\n\\end{equation}\n\nAs explored in several previous studies \\citep[e.g.,][]{Takahashi90, Pu15},\nsolving the Bernoulli equation above is in fact an eigenvalue problem, where $(E\\/L)_{\\rm in}$ is the to-be-determined eigenvalue ensuring that the inflow smoothly crosses the FM surface, while $(E\\/L)_{\\rm out}$ is given by the match condition on the loading surface. 
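\n\nIn practice, once the characteristic functions and the field geometry are specified, the quartic (\\ref{Bern2}) can be handed to a standard polynomial root finder, and the physical branch is then selected among the four roots. A minimal Python sketch (our own illustration, not the authors' code; all names are ours) reads:\n\\begin{verbatim}\nimport numpy as np\n\n# Roots of the quartic Bernoulli equation (Bern2).\n# k0, k2, k4: characteristic functions at the point\n# of interest; k0s, k2s: values on the stagnation\n# surface; sig = sigma_*; brat = B_{p,*}\/B_p.\ndef bernoulli_roots(k0, k2, k4, k0s, k2s, sig, brat):\n    A4 = (brat\/sig)**2\n    A3 = -2.0*k0*brat\/sig\n    A2 = k0**2 + (1.0 + (k0s\/k2s)*k4)*(brat\/sig)**2\n    A1 = (-k0 + (k0s\/k2s)*k2)*2.0*brat\/sig\n    A0 = k0**2 - (k0s\/k2s)*k0*k2\n    # numpy expects the highest power first\n    return np.roots([A4, A3, A2, A1, A0])\n\\end{verbatim}\nThe eigenvalue $(E\\/L)_{\\rm in}$ enters through $k_2$ and $k_4$ and is tuned until the selected root crosses the FM surface smoothly.\n\n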
Eq.(\\ref{eq:D_etaEL}) provides conditions connecting the inflow and the outflow for the single-surface loading,\n\\begin{eqnarray}\\label{eq:deltaEL}\n\t\\delta(\\eta E)&=& m(\\delta\\eta)(-u_t)_*\\ ,\\nonumber\\\\\n\t\\delta(\\eta L)&=& m(\\delta\\eta)(u_\\phi)_*\\ ,\n\\end{eqnarray}\nwhich give the match condition\n\\begin{eqnarray}\\label{eq:EOLout}\n\t(E\\/L)_{\\rm out}&=& \\frac{(\\eta E)_{\\rm out}}{(\\eta L)_{\\rm out}} =\\frac{(\\eta E)_{\\rm in} + m(\\delta\\eta)(-u_t)_*}{(\\eta L)_{\\rm in} + m(\\delta\\eta)(u_\\phi)_*}\\ ,\n\\end{eqnarray}\nwhere $\\delta\\eta\\equiv\\eta_{\\rm out}-\\eta_{\\rm in}$, and we have used the fact that $D_\\Psi^\\parallel\\eta$ is \na $\\delta$-function centered on the loading surface in deriving Eq.~(\\ref{eq:deltaEL}).\nIt is straightforward to see that Eq.~(\\ref{eq:deltaEL}) guarantees the same jump in the total energy\nflux as in its matter component; therefore, the Poynting flux (and all the EM field components) is continuous across the loading surface. \n\nOnce the Bernoulli equation is solved, i.e., both the eigenvalues $(E\\/L)_{\\rm in, out}$ and the poloidal velocity field $u_p$ are obtained,\n$u^r$ and $u^\\theta$ are obtained via Eq.(\\ref{eq:eta}) and $u_p^2=u^Au_A$,\nwhile $u_t$ and $u_\\phi$ are obtained via the relation $m(u_t+\\Omega u_\\phi) = -(E-\\Omega L)$ and the normalization condition $u\\cdot u=-1$.\n\nBefore delving into the details of numerically solving the Bernoulli equation, we can give an estimate of the eigenvalues. Combining the definitions of $E$ and $L$, Eqs.(\\ref{eq:EandL}), with Eq.(\\ref{eq:Bern}), we find\n\\begin{equation}\n(u_t+\\Omega u_\\phi)_*=-\\sqrt{k_{0,*}},\n\\end{equation}\nplugging which back into Eqs.(\\ref{eq:EandL}), we obtain\n\\begin{eqnarray}\n\\label{EOL}\n\t(E\\/L)_{\\rm in}&=&\\Omega + \\frac{m\\eta_{\\rm in}\\sqrt{k_{0,*}}}{(\\eta L)_{\\rm in}}\\ <\\Omega\\ ,\\nonumber\\\\\n\t(E\\/L)_{\\rm out}&=&\\Omega + \\frac{m\\eta_{\\rm out}\\sqrt{k_{0,*}}}{(\\eta L)_{\\rm out}}\\ >\\Omega\\ ,\n\\end{eqnarray}\nwhich imply $E\\/L = \\Omega [1 + O(\\sigma_*^{-1})]$, where we have used the fact that $\\eta_{\\rm in}<0$ and \n$\\eta_{\\rm out}>0$. \n\n\\section{MHD Grad-Shafranov equation} \\label{sec:GS}\nWith the aid of Maxwell's equation\n\\begin{equation}\n\t {F^{\\mu\\nu}}_{;\\nu} = 4\\pi j^\\mu \\ ,\n\\end{equation}\nthe trans-field component of the energy-momentum conservation equation (\\ref{eq:smu}) is written as \\footnote{Eq.~(\\ref{eqn_MHD}) \nonly holds for the specific choice of source function $S^\\mu = (nu^\\nu)_{;\\nu} mu^\\mu$.} \n\\begin{equation}\\label{eqn_MHD}\n\t\\frac{F^A_{\\ \\phi}}{F_{C\\phi}F^C_{\\ \\phi}} (mn u^\\nu u_{A;\\nu}-F_{A\\nu}j^\\nu)=0\\ ,\n\\end{equation}\nwhere we have used Eq.~(\\ref{eq:D_eta}) and the source function $S^\\mu$. 
\nThe repeated Latin letters $A$ and $C$ run over the poloidal coordinates $r$ and $\\theta$ only \n\\citep{Nitta91,Beskin93,Beskin97}.\nThis is known as the MHD GS equation, with the $1^{\\rm st}$ and $2^{\\rm nd}$ terms\nin the bracket being the fluid acceleration and the electromagnetic force, respectively.\n\nAfter some tedious derivation (see Appendix \\ref{sec:GS_der}), we write the full MHD GS equation in a compact form\n\\begin{eqnarray}\n\\label{GS_MHD}\n\t&\\ &\\mathcal{L}\\Psi=\\mathcal{S}_{\\rm EM}+\\mathcal{S}_{\\rm MT}\\ .\n\\end{eqnarray}\nHere $\\mathcal{L}$ is a differential operator defined by\n\\begin{eqnarray}\n\\label{GS_MHD_L}\n\t\\mathcal{L}\\Psi&& = \\left[\\Psi_{,rr} + \\frac{\\sin^2\\theta}{\\Delta} \\Psi_{,\\mu\\mu} \\right]\\ \\mathcal{A}(r,\\theta;\\Omega) \\nonumber\\\\\n\t&\\ & + \\left[ \\Psi_{,r} \\partial^\\Omega_r + \\frac{\\sin^2\\theta}{\\Delta} \\Psi_{,\\mu} \\partial^\\Omega_\\mu \\right] \\ \\mathcal{A}(r,\\theta;\\Omega) \\nonumber\\\\\n\t&\\ & + \\frac{1}{2} \\left[ (\\Psi_{,r})^2 + \\frac{\\sin^2\\theta}{\\Delta} (\\Psi_{,\\mu})^2 \\right] D_\\Psi^\\perp\\Omega\\ \\partial_\\Omega \\mathcal{A}(r,\\theta;\\Omega) \\nonumber\\\\\n\t&\\ & - \\left[ (\\Psi_{,r})^2 + \\frac{\\sin^2\\theta}{\\Delta} (\\Psi_{,\\mu})^2 \\right] \\frac{D^\\perp_\\Psi\\eta}{\\eta}\\ \\mathcal{M}^2(r,\\theta)\\ ,\\nonumber\\\\\n\\end{eqnarray}\nwhere $\\mu=\\cos\\theta$, $\\mathcal{A}(r,\\theta;\\Omega)=-k_0(r,\\theta;\\Omega)+\\mathcal{M}^2(r,\\theta)$,\nand we have defined $\\partial^\\Omega_A(A=r,\\mu)$ as the partial derivative with respect to coordinate $A$ with $\\Omega$ fixed, $\\partial_\\Omega$ as the derivative with respect to $\\Omega$, $D_\\Psi^\\perp$ as the derivative perpendicular to field lines\n\\begin{eqnarray}\n\tD_\\Psi^\\perp&\\equiv& \\frac{F^A_{\\ \\phi}\\partial_A}{F_{C\\phi}F^C_{\\ \\phi}}\\ ,\n\\end{eqnarray}\nwhich is equivalent to the ordinary derivative $d\/d\\Psi$ when acting on functions of $\\Psi$.\nThe two source terms are\n\\begin{equation}\n\\label{GS_MHD_S}\n\\begin{aligned}\n\t \\mathcal{S}_{\\rm EM} &=\\frac{\\Sigma}{\\Delta} I D^\\perp_\\Psi I\\ , \\\\\n \\mathcal{S}_{\\rm MT} &=-4\\pi\\Sigma\\sin^2\\theta mn(u^tD_\\Psi^\\perp u_t + u^\\phi D_\\Psi^\\perp u_\\phi)\\ ,\n\\end{aligned}\n\\end{equation}\nwhere $I=4\\pi(\\eta L-\\eta m u_\\phi)$ [see Eq.(\\ref{eq:EandL})]. 
\n\nIn the FFE limit, $\\mathcal{M}^2=0$, $\\mathcal{S}_{\\rm MT}=0$, and the GS equation reduces to \\citep{Pan17}\n\\begin{eqnarray}\n\\label{GS_FFE}\n\t&\\ &\\mathcal{L}\\Psi = \\mathcal{S}_{\\rm EM}\\ \\ .\n\\end{eqnarray}\nThe FFE solutions $\\{\\Psi|_{\\rm FFE}, \\Omega|_{\\rm FFE}, (\\eta L)|_{\\rm FFE} \\}$ have been well explored\nboth analytically and numerically in many previous studies \\citep[see e.g.,][]{BZ77,Tanabe08, Contop13, Pan15, Pan15b}.\nSimilar to the FFE case, solving the MHD GS equation (\\ref{GS_MHD}) is also eigenvalue problem, where $\\Omega$ and $4\\pi\\eta L$ are the to-be-determined eigenvalues ensuring field lines smoothly cross the two Alfven surfaces \\citep{Contop13, Nathan14, Pan17,Mahlmann18}.\n\n\n\n\n\n\\section{A split monopole example} \\label{sec:eg}\n\nAs previewed in the Introduction, we aim to construct a framework for investigating MHD jet structure of spinning BHs, \nin which the EM fields $(F_{\\mu\\nu})$ and the fluid motion $(n, u^\\mu)$ are self-consistently obtained \ngiven a proper plasma loading function $\\eta(r,\\theta)$ and proper boundary conditions.\n\nIn this section, we detail the procedure of consistently solving the two governing equations for an example of\nthe split monopole magnetic field configuration around a rapidly spinning central BH with a dimensionless spin $a=0.95$. For simplicity, we explore two different scenarios\nwith magnetization parameters $\\sigma_*^{\\rm out}=2\\sigma_*^{\\rm in}$ and $\\sigma_*^{\\rm out}=\\sigma_*^{\\rm in}$,\nrespectively. Remember that the loading function $\\eta(r,\\theta)$ is completely determined by the \nmagnetization parameters via the definition (\\ref{sigM}).\n\nBoundary conditions used here are similar to those of force-free solutions. \nExplicitly, we choose $\\Psi|_{\\mu=0}=\\Psi_{\\rm max}$ on the equatorial plane, \n$\\Psi|_{\\mu=1}=0$ in the polar direction, $\\Psi_{,r}|_{r=r_{\\rm H}}=0$ and $\\Psi_{,r}|_{r=\\infty}=0$\nfor the inner and outer boundaries, respectively. Here $r_{\\rm H}$ is the radius of the event horizon.\n\n\\subsection{Numerical Techniques} \\label{sec:tech}\nWe define a new radial coordinate $R = r\/(r+1)$, confine our \ncomputation domain $R\\times\\mu$ in the region $[R(r_{\\rm H}), R_{\\rm max}]\\times[0,1]$,\nand implement a uniform $256\\times64$ grid. \nIn practice, we choose $R_{\\rm max}=0.995$, i.e., $r_{\\rm max}\\approx 200 M$.\n\nThe Bernoulli equation (\\ref{eq:Bern}) and the MHD GS equation (\\ref{GS_MHD}), governing the flow along the field lines\nand field line configuration, respectively, are coupled. So we solve them one by one in an iterative way:\n\\begin{eqnarray} \\label{Eq_set}\n\\left\\{\\begin{array}{c}\n \\mathcal{L}\\Psi^{(l)}=(\\mathcal{S}_{\\rm EM}+\\mathcal{S}_{\\rm MT})\\{(\\eta L)^{(l)}, n^{(l-1)}, u^{(l-1)}\\}\\ , \\\\\n \\mathcal{F}\\{u^{(l)}; (E\/L)^{(l)}, \\Omega^{(l)}, \\Psi^{(l)}\\} =0 \\ ,\n\\end{array}\n\\right.\n\\end{eqnarray}\nwith $l=1,2,3,\\cdots$ . In a given loop $l$, we solve the GS equation updating $\\Psi$ and $\\{\\Omega, (\\eta L)\\}$ (with $\\{n, u^\\mu\\}$ inherited from the previous loop $l-1$), ensuring field lines smoothly cross the two \\Alfven surfaces; \nin a similar way, we solve the Bernoulli equation updating $u^\\mu$ and $(E\/L)$ (with freshly updated $\\Omega$ and $\\Psi$ from solving the GS equation), ensuring a super-sonic inflow solution and an outflow solution satisfying the match condition \n(\\ref{eq:EOLout}). 
Combining the solutions of both equations with the definitions of $\\{\\eta, E, L\\}$, we finally obtain all the desired quantities $\\{F_{\\mu\\nu}, n, u^\\mu\\}$ as functions of the coordinates $r$ and $\\theta$. \n\nWe initiate the iteration with the initial guess\n\\begin{eqnarray} \\label{init}\n\\left\\{\\begin{array}{cccc}\n \\Psi^{(0)}(r,\\theta)&=&\\Psi_{\\rm max}(1-\\cos\\theta)\\ , \\\\\n \\Omega^{(0)}(\\Psi)&=&0.5\\Omega_{\\rm H}\\ , \\\\\n (\\eta L)^{(0)}(\\Psi)&=&\\Omega_{\\rm H} \\Psi[2-(\\Psi\\/\\Psi_{\\rm max})]\\/(8\\pi)\\ ,\\\\\n n^{(0)}(r,\\theta) &=& u^{(0)}(r,\\theta) = 0\\ ,\n\\end{array}\n\\right.\n\\end{eqnarray}\nwhere $\\Omega_{\\rm H}\\equiv a\\/(r_{\\rm H}^2+a^2)$ is the BH angular velocity.\n\nThe numerical techniques for tackling the two eigenvalue problems are detailed as follows:\n\n\\begin{itemize}\n\\item[\\it Step 1] \nThe MHD GS equation is a second-order differential\nequation which degenerates to first order on the\n\\Alfven surfaces, where $\\mathcal{A}(r,\\theta)=0$. \nNumerical techniques for dealing with this problem have been well developed\nin previous force-free studies \\citep{Contop13, Nathan14, Huang16,Huang18, Pan17,Mahlmann18}, and we briefly recap them here.\n\n\nIn each loop $l$, we solve the GS equation (\\ref{GS_MHD}) with\nthe approximate solution obtained from the previous loop $\\left\\{\\Omega^{(l-1)},(\\eta L)^{(l-1)},\\Psi^{(l-1)}\\right\\}$\nas the initial guess. \nWe evolve the flux function $\\Psi^{(l)}$ using the overrelaxation technique with Chebyshev acceleration \\citep{Press86}, and $\\Psi^{(l)}(r,\\theta)$ is updated on grid points except those in the vicinity of the two \\Alfven surfaces. The flux function $\\Psi^{(l)}(r,\\theta)$ on the \\Alfven surfaces is obtained via interpolation from neighboring grid points and the directional derivatives on the \\Alfven surfaces \\citep{Pan17}. \nUsually we obtain two different values of the flux function, $\\Psi(r_{\\rm A}^-)$ versus $\\Psi(r_{\\rm A}^+)$, \non the \\Alfven surface via interpolation from grid points inside and outside, respectively. 
\nTo decrease this discontinuity, we adjust $\\Omega^{(l)}(\\Psi)$ at the outer Alfv\\'en (OA) surface:\n\\begin{eqnarray}\n\t\\Omega^{(l)}_{\\rm new}(\\Psi_{\\rm new})&=& \\Omega^{(l)}_{\\rm old}(\\Psi_{\\rm old}) \\nonumber\\\\\n\t&\\ & + 0.05 [\\Psi(r_{\\rm OA}^+)-\\Psi(r_{\\rm OA}^-)],\n\\end{eqnarray}\nwith $\\Psi_{\\rm new}=0.5[\\Psi(r_{\\rm OA}^+)+\\Psi(r_{\\rm OA}^-)]$, \nwhere the subscripts old\\/new represent quantities before\\/after the above adjustment;\nand we adjust both $\\Omega^{(l)}(\\Psi)$ and $(\\eta L)^{(l)}(\\Psi)$ at the inner Alfv\\'en (IA) surface:\n\\begin{eqnarray}\n\t\\Omega^{(l)}_{\\rm new}(\\Psi_{\\rm new})&=& \\Omega_{\\rm old}(\\Psi_{\\rm old}) \\nonumber\\\\\n\t&\\ & + 0.05[\\Psi(r_{\\rm IA}^+)-\\Psi(r_{\\rm IA}^-)],\\nonumber\\\\\n\t(\\eta L)^{(l)}_{\\rm new}(\\Psi_{\\rm new})&=& (\\eta L)^{(l)}_{\\rm old}(\\Psi_{\\rm old}) \\nonumber\\\\\n\t&\\ & - 0.05[\\Psi(r_{\\rm IA}^+)-\\Psi(r_{\\rm IA}^-)],\n\\end{eqnarray}\nwith $\\Psi_{\\rm new}= 0.5[\\Psi(r_{\\rm IA}^+)+\\Psi(r_{\\rm IA}^-)]$.\n\nAfter sufficient evolution, we obtain a converged solution $\\{\\Omega^{(l)}, (\\eta L)^{(l)}, \\Psi^{(l)}\\}$ \n which ensures that field lines smoothly cross the two \\Alfven surfaces.\n\n\\item[\\it Step 2] The Bernoulli equation in the form of Eq.(\\ref{Bern2}) is a fourth-order polynomial equation in $u_p$\n\\citep{Camen86a,Camen86b,Camen87,Takahashi90,Fendt01,Fendt04,Levinson06,Pu15},\nwhere the FM point is a standard `X'-type singularity, while the Alfv\\'en point turns out to be a higher-order\nsingularity \\citep{Weber67}.\nMathematically, an FM point is the location of a multiple root of the Bernoulli equation. \nThe existence of the FM point is very sensitive to the value of $(E\\/L)_{\\rm in}^{(l)}$. \nFor a slightly too small value, there exist only subsonic solutions in the region $r>r_{\\rm FM}$, while for a slightly too large value the subsonic and supersonic branches fail to connect; we therefore tune $(E\\/L)_{\\rm in}^{(l)}$ until the two branches merge at the FM point, i.e., until the inflow solution smoothly crosses the FM surface.\n\\end{itemize}\n\nSolving the Bernoulli equation along each field line, we find that the fluid does not corotate with the magnetic field lines, with $\\Omega_{\\rm MT}>\\Omega$ for inflow and $\\Omega_{\\rm MT}<\\Omega$ for outflow. Specifically, the fluid angular velocity on the\nevent horizon $\\Omega_{\\rm MT}(r=r_{\\rm H})$ slightly exceeds the BH angular velocity $\\Omega_{\\rm H}$,\nwhich guarantees the fluid energy to be positive on the horizon (see Fig.~\\ref{fig:ut}). \n\nIn Fig.~\\ref{fig:ut}, we show the specific particle energy $-u_t$ for the two cases.\nBoth of them are positive everywhere, while the outflow of \\emph{Case 1} gains more efficient\nacceleration.\n\n\\subsection{Energy Extraction Rates}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=1]{f6.eps}\n\\caption{Results of the energy extraction rates in relation to $\\sigma_*^{\\rm in}$, with $\\sigma_*^{\\rm out}=2\\sigma_*^{\\rm in}$ assumed.\nThe energy rates measured at the event horizon,\n$\\{ \\dot{E}_{\\rm tot}^{\\rm H}, \\dot{E}_{\\rm Poynting}^{\\rm H}, \\dot{E}_{\\rm MT}^{\\rm H}\\}$, are presented as filled squares, small filled squares, and filled circles, respectively. The solid black line in the top panel, the dotted black line in the top panel, and the solid black line in the bottom panel are the corresponding fitting curves.\nSimilarly, the energy rates measured at infinity,\n$\\{ \\dot{E}_{\\rm tot}^{\\infty}, \\dot{E}_{\\rm Poynting}^{\\infty}, \\dot{E}_{\\rm MT}^{\\infty}\\}$, are presented as open symbols, and the solid grey lines are the corresponding fitting curves. 
\\label{fig:Edot}}\n\\end{figure}\n\nIn this subsection, we investigate the energy extraction rate from the central BH via the MHD jet,\nwhich is defined as \n\\begin{equation} \n\\begin{aligned}\n \\dot{E}_{\\rm tot}(r)\n &= -2\\pi \\int_0^{\\pi} \\sqrt{-g} T^r_{\\ t}(r) {\\rm d} \\theta \\\\ \n &= 4\\pi\\int_0^{\\Psi_{\\rm max}} (\\eta E)(r) {\\rm d}\\Psi \\\\ \n &= 4\\pi\\int_0^{\\Psi_{\\rm max}} [(E\\/L)\\times(\\eta L)](r) {\\rm d}\\Psi \\ ,\n\\end{aligned}\n\\end{equation} \nwhere we have used Eqs.(\\ref{eq:eta}-\\ref{eq:EandL}) in the second line. In the third line, $E\\/L$ and $\\eta L$ are the eigenvalues of the Bernoulli equation (\\ref{eq:Bern}) and of the GS equation (\\ref{GS_MHD}), respectively. In a similar way, we can define its matter\\/electromagnetic components as\n\\begin{equation} \\label{eq:Edot}\n\\begin{aligned}\n\t\\dot{E}_{\\rm MT}(r)&= 4\\pi\\int_0^{\\Psi_{\\rm max}}(- \\eta m u_t)(r) {\\rm d}\\Psi\\ , \\\\\n\t\\dot{E}_{\\rm Poynting}(r)\n\t&= 4\\pi \\int_0^{\\Psi_{\\rm max}}(\\Omega I\\/4\\pi)(r) {\\rm d}\\Psi \\\\\n\t&= \\dot{E}_{\\rm tot}(r) - \\dot{E}_{\\rm MT}(r)\\ . \n\\end{aligned}\n\\end{equation}\n\nWe measure these energy extraction rates at $r=r_{\\rm H}$ and at $r=\\infty$, and quantify their dependence\non the magnetization parameter $\\sigma_*$. In practice, we find that these energy extraction rates are not sensitive to the value of $\\sigma_*^{\\rm out}$,\nexcept for the matter component of the energy rate at infinity, $\\dot{E}_{\\rm MT}^\\infty$. \nWithout loss of generality, we only show the rates in relation to $\\sigma_*^{\\rm in}$\nfor the $\\sigma_*^{\\rm out}=2\\sigma_*^{\\rm in}$ scenario in Fig.~\\ref{fig:Edot}, where\nall the rates are displayed in units of the energy extraction rate in the force-free limit, \n$\\dot E_{\\rm FFE}\\approx 0.4 (\\Psi_{\\rm max}^2\\/4\\pi)$.\n\nAs we see in Fig.~\\ref{fig:Om}, the rotation of magnetic field lines is dragged down by\nthe loaded plasma, i.e., $\\Omega|_{\\rm MHD} < \\Omega|_{\\rm FFE}$, while the fluid, which does not corotate with\nthe field lines (its angular velocity satisfies $\\Omega_{\\rm MT}|_{\\rm inflow} > \\Omega > \\Omega_{\\rm MT}|_{\\rm outflow}$), tends to bend the field lines and induces a stronger $\\phi$-component of the magnetic field, i.e., $I|_{\\rm MHD} > I|_{\\rm FFE}$. The net result is that the Poynting energy extraction rate on the event horizon $\\dot E_{\\rm Poynting}^{\\rm H}$ has little dependence on the magnetization. Going outward along the field lines, part of the Poynting flux is converted into the fluid kinetic energy. For the case with magnetization parameter $\\sigma_*^{\\rm in} = 30$, the matter component makes up about $13\\%$ of the total energy flux at infinity.\n\n\\subsection{Penrose Process}\\label{subsec:Penrose}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=0.68]{f7.eps}\n\\caption{The positron\\/electron components of the energy extraction rates in relation to $\\sigma_*^{\\rm in}$. \nThe energy rates $\\{\\dot{E}^{\\rm H}_{e^+}, \\dot{E}^{\\rm H}_{e^-}\\}$ \nare presented as open and filled circles, respectively. \nThe solid and dashed lines are the corresponding fitting curves.\nThe shaded region denotes where the Penrose process is \nworking (for the electron component). 
\\label{fig:EdMT}}\n\\end{figure}\n\nAn implicit assumption in our MHD jet model is the two-fluid description, since \nthe electric current density $j^\\mu$ is \\emph{not} proportional to the fluid velocity $u^\\mu$.\nTherefore we can decompose the charged fluid into two oppositely charged components, \npositrons ($e^+$) and electrons ($e^-$). \\footnote{Though there is a degree of freedom in doing \nthis decomposition, e.g., we could also decompose the fluid into electrons and ions, this freedom does not change\nour conclusion qualitatively.} We denote the number densities and the velocity fields as $n_\\pm$ \nand $u^\\mu_\\pm$, respectively, which are related to $j^\\mu$ and $nu^\\mu$ via the relations\n\\begin{eqnarray}\n j^\\mu&=& e(n_+u^\\mu_+ - n_-u^\\mu_-)\\ ,\\nonumber\\\\\n mnu^\\mu&=& m(n_+u^\\mu_+ + n_-u^\\mu_-)\\ .\n\\end{eqnarray}\nConsequently, we obtain \n\\begin{equation} \n m(n u^\\mu)_\\pm = \\frac{1}{2}[\\pm j^\\mu(m\\/e) + nmu^\\mu] \\ ,\n\\end{equation} \nand we can decompose the matter energy flux into two components $\\dot{E}_{e^\\pm}$.\nHere we are only interested in the energy extraction rates on the event horizon,\n\\begin{eqnarray}\n \\dot{E}^{\\rm H}_{e^+}&=& 4\\pi \\int_0^{\\Psi_{\\rm max}} \\frac{(-\\eta m n_+u_{t+})(r_{\\rm H})}{n(r_{\\rm H})} {\\rm d}\\Psi\\ ,\\nonumber\\\\\n \\dot{E}^{\\rm H}_{e^-}&=& 4\\pi \\int_0^{\\Psi_{\\rm max}} \\frac{(-\\eta m n_-u_{t-})(r_{\\rm H})}{n(r_{\\rm H})} {\\rm d}\\Psi \\ .\n\\end{eqnarray}\nAs an example, we choose the horizon enclosed magnetic flux $\\Psi_{\\rm max}=1000(m\\/e)$, and show $\\dot{E}^{\\rm H}_{e^+}\\/\\dot{E}_{\\rm FFE}$ \nand $\\dot{E}^{\\rm H}_{e^-}\\/\\dot{E}_{\\rm FFE}$ in relation to $\\sigma_*^{\\rm in}$ in Fig.~\\ref{fig:EdMT}. \nThe energy extraction rate from positrons is always negative, while \nthe energy extraction rate from electrons becomes positive \nwhen the plasma loading is low enough. In this regime, denoted by the shaded region in Fig.~\\ref{fig:EdMT}, the magnetic Penrose process is working, though only for one of the two charged components. \\footnote{One should not make any quantitative interpretation of the results of this subsection, because the two-fluid decomposition done here is not accurate, e.g., there is no guarantee for the velocity of each component $u^\\mu_\\pm$ to be timelike and normalized.\nWe will leave a more accurate two-fluid description of the MHD jet structure \\citep{Koide09, Liu18} to future work.} This finding is in good agreement with recent particle-in-cell simulations \\citep{Parfrey18}.\n\n\n\\section{Summary and Discussion}\\label{sec:summary}\n\\subsection{Summary}\nTo describe the MHD structure of BH jets, we need a minimum set of quantities as functions of spacetime: \nthe Maxwell tensor $F_{\\mu\\nu}$, the fluid rest mass density $\\rho$ (or equivalently, the particle number density $n$), and \nthe fluid four-velocity $u^\\mu$. For determining all these quantities self-consistently, we constructed a full MHD framework, in which the EM fields and the fluid motion are governed by the MHD GS equation (\\ref{GS_MHD}) and the Bernoulli equation (\\ref{eq:Bern}), respectively. From these two governing equations, we can completely determine $\\{F_{\\mu\\nu}, \\rho ,u^\\mu\\}$ given proper boundary conditions and a proper plasma loading function $\\eta(r,\\theta)$ (see Eq.(\\ref{eq:eta})). 
As an example, we consider a split monopole field configuration and an idealized plasma loading on the stagnation surface.\n\nAssuming a steady and axisymmetric jet structure and perfectly conducting plasma within the jet, the EM fields are \ncompletely determined by three functions: the magnetic flux $\\Psi(r, \\theta)$, the angular velocity of magnetic field \nlines $\\Omega(\\Psi)$ and the poloidal electric current $I(r,\\theta)$ (see Eq.(\\ref{eq:Maxwell})). \nGiven the fluid energy density $\\rho$ and velocity $u^\\mu$, the MHD GS equation (\\ref{GS_MHD}) turns out to be a second-order differential equation with respect to $\\Psi(r,\\theta)$ which degenerates to first order on the two \\Alfven surfaces. Solving the GS equation is an eigenvalue problem, with the eigenvalues $\\Omega(\\Psi)$\nand $I(r,\\theta)$ (or more precisely, the conserved quantity $4\\pi\\eta L(\\Psi)$ defined in Eq.(\\ref{eq:EandL})) to be determined by requiring that field lines smoothly cross the \\Alfven surfaces.\n\nGiven the EM fields $F_{\\mu\\nu}$, the Bernoulli equation turns out to be a fourth-order polynomial equation in the poloidal fluid velocity $u_p$. Solving the Bernoulli equation is also an eigenvalue problem, with the eigenvalue $(E\\/L)_{\\rm in}$ to be determined by requiring that the inflow smoothly crosses the FM surface, and $(E\\/L)_{\\rm out}$ to be determined by the match condition (\\ref{eq:EOLout}) on the loading surface. With both $E\\/L$ and $u_p$ obtained, it is straightforward to obtain $n$ and $u^\\mu$ via Eqs.(\\ref{eq:eta},\\ref{eq:EandL}) and the normalization \ncondition $u\\cdot u=-1$. \n\nThe two governing equations are coupled, and therefore we numerically solved them in an iterative way (see Sec.~\\ref{sec:tech}). \nAs a result, we find that the rotation of magnetic field lines is dragged down by the loaded plasma, i.e., $\\Omega|_{\\rm MHD} < \\Omega|_{\\rm FFE}$; for the fluid angular velocity, we find $\\Omega_{\\rm MT}|_{\\rm outflow}<\\Omega<\\Omega_{\\rm MT}|_{\\rm inflow}$; \nthe non-corotating fluid tends to bend the field lines and induce a stronger $\\phi$-component of the magnetic field, and therefore \na stronger poloidal electric current,\ni.e., $I|_{\\rm MHD} > I|_{\\rm FFE}$. The net result is that the Poynting energy extraction on the horizon is insensitive to the\nmagnetization, i.e., $\\dot E_{\\rm Poynting}^{\\rm H} |_{\\rm MHD} \\approx \\dot E_{\\rm Poynting}^{\\rm H}|_{\\rm FFE}$ (see Fig.~\\ref{fig:Edot}).\nGoing outward along the field lines, part of the Poynting flux is converted to the fluid kinetic energy. For the case we explored\nwith $\\sigma_*^{\\rm in} = 30$, the matter component makes up $\\sim 13\\%$ of the total energy flux at infinity.\n\nFinally, we examined the MHD Penrose process for the cases we numerically solved. We found that the specific fluid energy $-m u_t$ is always positive on the event horizon, i.e., the MHD Penrose process is not at work and therefore the BZ mechanism fully defines the jet energetics. However, if we decompose the charged fluid into two oppositely charged components ($e^\\pm$), we find that the magnetic Penrose process does work for one of the two components when the plasma loading is low enough (see Fig.~\\ref{fig:EdMT}).\n\n\\subsection{Discussion}\n\nAs a first step towards a full MHD jet model, we have investigated the MHD jet structure\nof a split monopole geometry, assuming an idealized plasma loading on the stagnation surface. 
\nThis simplified plasma loading gives rise to a few unphysical problems in the vicinity of the loading surface, \nincluding the divergence of the particle number density $n(r_*)$, which shows up in the source terms of the MHD\nGS equation (\\ref{GS_MHD_S}). To avoid the singularity arising from the unphysical divergence, we smoothed the\nfunction $n(r,\\theta)$ in the vicinity of the loading surface. Another consequence of the simplified \nplasma loading is that we must impose the matching condition (\\ref{eq:EOLout}) to ensure that the EM fields are continuous across the loading surface. As a result, $(E\\/L)_{\\rm out}$ is specified by $(E\\/L)_{\\rm in}$, i.e., $r_{A,\\rm out}$ is specified by $r_{A, \\rm in}$. Therefore, we lose the freedom to adjust $(E\\/L)_{\\rm out}$ until a supersonic outflow solution is found, \nas we did for the inflow solution. Consequently, all the outflow solutions obtained in this paper are subsonic (see Fig.~\\ref{fig:up}). \n\nIn future work, we aim to investigate a full MHD jet model with a more realistic extended loading zone \nwhere the plasma injection is described by a continuous function $\\eta(r,\\theta)$. Then all the unphysical\ndiscontinuities and divergences described above would be avoided. For an extended plasma loading, smooth EM fields would be naturally preserved, and the continuity requirement would not be a constraint. As a result,\nwe could adjust $(E\\/L)_{\\rm out}$ to find a supersonic outflow solution, which would be more consistent with recent observations \\citep{Hada16,Mertens16}.\nIn addition to the plasma loading, the BH surroundings also play an important role in shaping the jet structure \\citep[e.g.][]{Tchek10,Beskin17}. The role of a more realistic BH environment, including accretion flows and hot plasma with non-zero pressure, will also be considered in future work. \n\n\n\n\n\\section*{Acknowledgements}\nWe thank the referee for his\\/her careful reading of this manuscript and for giving insightful suggestions.\nL.H. acknowledges support from the National\nNatural Science Foundation of China (grants 11590784 and 11773054)\nand the Key Research Program of Frontier Sciences, CAS (grant No. QYZDJ-SSW-SLH057).\nZ.P. thanks Hung-Yi Pu for his invaluable help throughout this research. \nZ.P. is supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute\nis supported by the Government of Canada through the Department of Innovation, \nScience and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science.\nC.Y. has been supported by the National Natural Science Foundation of China (grants 11521303, 11733010 and 11873103).\nThis work made extensive use of the NASA Astrophysics Data System and\nof the {\\tt astro-ph} preprint archive at {\\tt arXiv.org}.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nGravitational wave (GW) sources~\\cite{DI:2016,secondBBH:2016,thirddetection,fourth:2017,GW170608,o1o2catalog} are now routinely detected by the advanced LIGO~\\cite{DII:2016,LSC:2015} and Virgo~\\cite{Virgo:2015} detectors. The last two observing runs of these GW detectors indicate that, on average, one GW source has been detected for every fifteen days of analyzed data. It is expected that this rate will be surpassed in the upcoming third observing run, since the advanced LIGO and Virgo detectors have been undergoing commissioning since August 2017. 
\\new{In their enhanced sensitivity configuration, they will be able to probe a larger volume of space, thereby boosting the expected detection rate} for binary black hole (BBH) and binary neutron star (BNS) mergers, and may yield the first observations of neutron star-black hole (NSBH) mergers~\\cite{o1o2catalog}.\n\nGiven the expected scale of GW discovery in upcoming observing runs, it is timely to explore the use of efficient signal-processing algorithms for low-latency GW detection and parameter estimation. This work is motivated by the need to probe the deeper parameter space that is available to GW detectors, in real time and using minimal computational resources, in order to maximize the number of studies that can be conducted with GW data. This combination of constraints is a common theme for large-scale astronomical facilities, which will be producing large datasets at low latency within the next decade, e.g., the Large Synoptic Survey Telescope~\\cite{lsstbook}. Scenarios in which LSST, among other electromagnetic observatories, and advanced LIGO and Virgo work in unison, analyzing disparate datasets in real time to realize the science goals of Multi-Messenger Astrophysics, make this work timely and relevant~\\cite{whitepaper:SCIMMA,eliuMMA:2019}. \n\nAmong a number of recent developments in signal processing, deep learning exhibits great promise to increase the speed and depth of real-time GW searches. The first deep learning algorithms to do classification and regression of GWs emitted by non-spinning BBHs on quasi-circular orbits were presented in~\\cite{geodf:2017a} in the context of simulated LIGO noise. The extension of that study to realistic detection scenarios using real advanced LIGO noise was introduced in~\\cite{geodf:2017b}. Even though these algorithms were trained to do real-time classification and regression of GWs in realistic detection scenarios for a 2-D signal manifold (non-spinning BBHs on quasi-circular orbits), the studies presented in~\\cite{geodf:2017a,geodf:2017b,geodf:2017c,Rebei:2018R} have demonstrated that deep learning algorithms generalize to new types of sources, enabling the identification of moderately eccentric BBH mergers, spin-precessing BBH mergers, and moderately eccentric BBH signals that include higher-order modes, respectively. These studies also indicate that while the detection of these new types of GW sources is possible, it is necessary to use higher-dimensional signal manifolds to train these algorithms to improve parameter estimation results, and to go beyond point-parameter estimation analysis. This work has sparked the interest of the GW community, leading to a variety of studies including the classification of simulated BBH waveforms in Gaussian noise, GW source modeling and GW denoising of BBH mergers~\\cite{geodf:2017c,hshen:2017,positionML:2018,wei:2019W,Rebei:2018R,AlvinC:2018,2018GN,Fan:2018,Gonza:2018,Fuji:2018,LiYu:2017,Nakano:2018}.\n\nWhile detection and parameter estimation are the key goals for the development of deep learning for GW astrophysics, in this article we focus on the application of deep learning for parameter estimation. At present, GW parameter estimation is done using Bayesian inference~\\cite{bambi:2012MNRAS,bambiann:2015PhRvD,Singer_Price_2016}, which is a well-tested and extensively used method, though computationally intensive. 
On the other hand, given the scalability of deep learning models in training mode (i.e., the ability to combine distributed training and large datasets to enhance the performance of deep learning algorithms in realistic data analysis scenarios), and their computational efficiency in inference mode, it is natural to explore their applicability for GW parameter estimation, the theme of this article. \n\n\\noindent \\textbf{Previous Work} The first exploration of deep learning for the detection and point-parameter estimation of a 2-D signal manifold was presented in~\\cite{geodf:2017a,geodf:2017b}. For waveform signals with matched-filtering signal-to-noise ratio (SNR) \\(\\textrm{SNR}\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}} 10\\), these neural network models measure the masses of quasi-circular BBH mergers with a mean percentage absolute error \\(\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 15\\%\\), and with errors \\(\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}} 35\\%\\) for moderately eccentric BBH mergers. These results provided a glimpse of the robustness and scalability of deep neural network models, and provided the motivation to develop these prototypical applications into a production-ready toolkit for GW parameter estimation. \n\n\\noindent \\textbf{Highlights of This Work} \n\n\\begin{cititemize2}\n\\item We have designed new architectures and training schemes to demonstrate that deep learning provides the means to reconstruct the parameters of BBH mergers in more realistic astrophysical settings, i.e., BHs whose spins are aligned or anti-aligned, and which evolve on quasi-circular orbits. \\new{This 4-D signal manifold marks the first time deep learning models \\textit{at scale} are used for GW data analysis, i.e., models trained using datasets with tens of millions of waveforms, and 1,024 nodes (64 processors per node) to significantly shorten the training stage.} Once fully trained, these deep learning models can reconstruct in real time the parameters of the BBH catalog presented by the LIGO Scientific and Virgo Collaborations in~\\cite{o1o2catalog}. \n\\item The neural network models we introduce in this article have two different architectures. The first one is tailored for the measurement of the masses of the binary components, whereas the second is used to quantify the final spin and the quasi-normal modes (QNMs) of the BH remnant. Once both neural networks are fully trained, we use them in parallel for inference studies, finding that we can reconstruct the parameters of BBH mergers within 2 milliseconds using a single Tesla V100 GPU. \n\\item \\new{We introduce a novel scheme to train Bayesian Neural Network (BNN) models at scale using 1,024 nodes on a High Performance Computing platform while keeping optimal performance for inference. We then adapt this framework to introduce for the first time the use of BNNs for GW parameter estimation. With this approach we can estimate the astrophysical parameters of the existing catalog of detected BBH mergers~\\cite{o1o2catalog}, and their posterior distributions, reporting inference times of the order of milliseconds.}\n\n\\item \\new{We use variational inference to approximate the posterior distribution of model parameters in the probabilistic layers of our neural networks. In the inference stage, we sample the network parameters to evaluate the posterior distribution of the physical parameters. 
Details of the model and training are in Sections~\\ref{sec:prob_model} and~\\ref{bnn_scale}.}\n\\end{cititemize2}\n\n\nThis article is structured as follows. Section~\\ref{method} introduces the model architectures used in these analyses and describes the construction and curation of the datasets used to train, validate and test our neural network models. It also includes a revised curriculum learning scheme for neural network training. We quantify the accuracy of these neural network models in realistic detection scenarios using real advanced LIGO noise in Section~\\ref{experiments}. We put our deep learning algorithms to work in Section~\\ref{discussion} to estimate the astrophysical parameters of the BBH mergers reported in~\\cite{o1o2catalog}. We summarize our findings and future directions of work in Section~\\ref{conclusion}.\n\n\\begin{figure*}[t!]\n\t\\centerline{\n\t\t\\raisebox{1cm}{\n\t\t\t\\includegraphics[width=90mm]{spin_omegas_diagram.png}}\n\t\t\\hspace{10mm}\n\t\t\\includegraphics[width=65mm]{masses_diagram.png}\n\t}\n\t\\caption{The \\new{left}\n\tarchitecture is used to estimate the final spin and quasi-normal modes of the black hole remnant. The \\new{right}\n\tarchitecture is used to estimate the masses of the binary black hole components. }\n\t\\label{model_diagram_spin_omegas}\n\\end{figure*}\n\n\\section{Methods}\n\\label{method}\n\nIn this section, we introduce the neural network models used for parameter estimation, and describe a novel curriculum learning scheme to accurately measure the masses of the binary components, and the final spin and QNMs of the BH remnant. We have used \\texttt{TensorFlow}~\\cite{abadi2016tensorflow,abadi2015tensorflow} to design, train, validate and test the neural network models presented in this section. \n\nThe rationale to use two neural network models stems from the fact that the masses, spins and QNMs span rather different scales. Therefore, to improve the accuracy with which deep learning can measure these parameters, we have designed one neural network that is tailored to measure the masses of the binary components, and one to measure the final spin and QNMs of the remnant. The astute reader may have noticed that the final spin of the BH remnant and its QNMs have a similar range of values when the QNMs are cast in dimensionless units, and this is the approach we have followed. In practice, we train the second neural network model using the fact that the QNMs are determined by the final spin \\(a_f\\) via the relation~\\cite{Berti:2006b}\n\n\\begin{equation}\n\\omega_{220}\\left(a_f\\right)= \\omega_R + i\\, \\omega_{I}\\,,\n\\label{qnms}\n\\end{equation}\n\n\\noindent where \\((\\omega_R,\\,\\omega_{I})\\) correspond to the frequency and damping time of the ringdown oscillations for the fundamental \\(\\ell=m=2\\) bar mode with overtone number \\(n=0\\). We have computed the QNMs following~\\cite{Berti:2006b}. One can readily translate \\(\\omega_R\\) into the ringdown frequency (in units of Hertz) and \\(\\omega_I\\) into the corresponding (inverse) damping time (in units of seconds) by computing \\(M_f\\,\\omega_{220}\\). \\(M_f\\) represents the final mass of the remnant, and can be determined using Eq. (1) in~\\cite{HealyLous:2017PRDH}.\n\nAs we describe below, we have found that to accurately reconstruct the masses of the binary components, it is necessary to use a more complex and deeper neural network architecture. 
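\n\nAs a concrete illustration of this unit conversion, the dimensionless products \\(M_f\\,\\omega_R\\) and \\(M_f\\,\\omega_I\\) can be turned into a ringdown frequency in Hz and a damping time in seconds using \\(GM_\\odot\/c^3 \\simeq 4.925\\times10^{-6}\\,\\)s. The Python sketch below is our own illustration, not the production pipeline; the \\((\\ell,m,n)=(2,2,0)\\) fit coefficients are the values quoted from~\\cite{Berti:2006b}:\n\\begin{verbatim}\nimport numpy as np\n\nT_SUN = 4.925e-6  # G*M_sun\/c^3 in seconds\n\ndef qnm_220(a_f, m_f_solar):\n    # fits of Berti et al. (2006), (l,m,n)=(2,2,0)\n    w_r = 1.5251 - 1.1568*(1.0 - a_f)**0.1292\n    Q = 0.7000 + 1.4187*(1.0 - a_f)**(-0.4990)\n    w_i = -w_r\/(2.0*Q)  # M_f*omega_I (damping)\n    f_hz = w_r\/(2.0*np.pi*m_f_solar*T_SUN)\n    tau_s = -m_f_solar*T_SUN\/w_i\n    return f_hz, tau_s\n\n# A GW150914-like remnant (a_f ~ 0.69,\n# M_f ~ 62 M_sun) gives f ~ 280 Hz, tau ~ 4 ms.\n\\end{verbatim}\n\n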
It is worth mentioning that once these models are fully trained, a single GPU is sufficient to perform regression analyses in milliseconds using both neural network models. \n\n\n\\subsection{Neural network model to measure the properties of the black hole remnant \\label{subsec:properties_model}}\n\nThe neural network model consists of two main parts: a shared root component for all physical parameters, and three leaf components for individual parameters ($a_f$, $\\omega_R$, and $\\omega_I$), as illustrated in the left panel of Figure~\\ref{model_diagram_spin_omegas}, and Table~\\ref{spin_omega_model_config}. The model architecture looks like a rooted tree. The root is composed of seven convolutional layers, and its output is shared by the leaves. Each leaf component has the same network architecture with three fully connected layers. This approach is inspired by the hierarchical self decomposing of convolutional neural networks described in~\\cite{DBLP:journals\/corr\/abs-1811-04406,hu2018squeeze}. The key idea behind this approach is that the neural network structures are composed of a general feature extractor for the first seven layers, which is then followed up by sub-networks that take values from the output of the general feature extractor. \n\nThe rationale to have splits after the universal structure is to use sub-structures that focus on different sub-groups of the data. As a simile: even though the human body has multiple limb locations (``leaves\"), human motion is controlled by the overall motion of the body (``the root\"). In practice this means that the tree structure of our models leverages the hierarchical structure of the data. It first extracts the universal features through the root, and then passes the information to the different sub-networks (``leaves\") to learn specialized features for different physical parameters. Notice that the root will also prevent overfitting in the ``leaves\", since each leaf is optimized through the root. \n\nAnother change to the conventional architecture is that we remove the nonlinear activation in the second to last layer in the leaf component, i.e., it is a linear layer with identity activation function (see Table~\\ref{spin_omega_model_config}). This allows more neurons to be activated and passed to the final layer. As discussed in~\\cite{DBLP:journals\/corr\/abs-1709-07634}, removing the nonlinear activation in some intermediate layers smooths the gradients and maintains the correlation of the gradients in the neural network weights, which, in turn, allows more information to be passed through the network as the depth increases.\n\n\\begin{table}[t]\n\t\\setlength{\\tabcolsep}{1.5pt}\n\t\\caption{Architecture of the neural network model used to measure the final spin and QNMs of the black hole remnant. For the root convolutional layers, the setup indicates: (kernel size, \\# of output channels, stride, dilation rate, max pooling kernel, max pooling stride). All convolutional layers have ReLU activation function and the padding is set to ``VALID'' mode. There is no max pooling layer if the last two entries in the configuration are 0's. The leaf fully connected layers setup: (\\# of output neurons, dropout rate). For the last layer, we use \\(\\tanh\\) activation function. 
However, the activation function in the second-to-last layer is removed.}\n\t\\label{spin_omega_model_config}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{tabular}{c|c|c}\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Layer \\\\ Component} & \\specialcell{Layer \\\\ Configurations} & \\specialcell{Activation \\\\ Functions}\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Root Layer: \\\\ Convolutional} & $\\begin{array}{c}\n\t\t\t\t(16, 64, 1, 1, 4, 4) \\\\\n\t\t\t\t(16, 128, 1, 2, 4, 4) \\\\\n\t\t\t\t(16, 256, 1, 2, 4, 4) \\\\\n\t\t\t\t(32, 256, 1, 2, 4, 4) \\\\\n\t\t\t\t(4, 128, 1, 2, 0, 0) \\\\\n\t\t\t\t(4, 128, 1, 2, 0, 0) \\\\\n\t\t\t\t(2, 64, 1, 1, 0, 0) \\\\\n\t\t\t\t\\end{array}$ & ReLU \\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Leaf Layer: \\\\ Fully Connected} & $\\begin{array}{c}\n\t\t\t\t(128, 0.0) \\\\\n\t\t\t\t(128, 0.0) \\\\\n\t\t\t\t(1, 0.0)\n\t\t\t\t\\end{array}$ & $\\begin{tabular}{c}\n\t\t\t\tReLU \\\\\n\t\t\t\tIdentity \\\\\n\t\t\t\tTanh \\\\\n\t\t\t\t\\end{tabular}$ \\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table}\n\n\\subsection{Neural network model to measure the masses of the binary components} \n\\label{subsec:mass_model}\n\nThe tree-like network model used for this study is described in the right panel of Figure~\\ref{model_diagram_spin_omegas} and Table~\\ref{mass_model_config}. With respect to the architecture described in the previous section, we reduce the number of convolutional layers in the root from seven to three. We have done this because we are now using more layers in the leaves, which in turn makes the gradient back-propagation harder. Reducing the number of root layers improves gradient updates to the front layers.\n\nEach leaf component uses a squeeze-and-excitation (SE) structure~\\cite{hu2018squeeze}. The SE block is a sub-structure inserted between two layers: it applies a global pooling (squeeze step), and then assigns weights to each of the channels in the convolutional layers (excitation step). Compared to conventional convolutional structures with universal weights, the SE components adjust the importance of each channel with an adaptively learned weight, which, as described in~\\cite{hu2018squeeze}, results in an approximately 25\\% relative improvement in image classification. For images, the channels usually correspond to the RGB components. Since we are using 1-D time-series signals, we treat the original input signals as having a single channel. The SE block adaptively recalibrates channel-wise feature responses. Furthermore, the weights are optimally learned through a constraint introduced by the global pooling. This ensures that the weights encode both spatial and channel-wise information. Furthermore, the weights help the channels represent group-specific features at deeper layers, which is consistent with our objective of using ``leaves'' for different parameters. \n\nFollowing the SE components, the neural networks have two highway blocks~\\cite{srivas:2015S}. These structures are a variant of the residual structure proposed in~\\cite{he2016deep}. In a residual block, instead of directly learning the target mapping, the network learns the residual components through an identity shortcut connection, which mitigates vanishing gradients as the model grows deeper. The highway block differs in that it introduces learned gating weights on the components of the residual block, which is similar to the application of importance weights on channels in the SE components. 
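\n\nTo illustrate the squeeze-and-excitation recalibration for 1-D features, the following is a minimal Keras-style sketch; the function name and the reduction ratio are illustrative assumptions, not the exact configuration of Table~\\ref{mass_model_config}.\n\n\\begin{verbatim}\nimport tensorflow as tf\nfrom tensorflow.keras import layers\n\ndef se_block_1d(x, reduction=16):\n    # x has shape (batch, length, channels)\n    channels = x.shape[-1]\n    s = layers.GlobalAveragePooling1D()(x)               # squeeze\n    e = layers.Dense(channels // reduction,\n                     activation='relu')(s)\n    e = layers.Dense(channels, activation='sigmoid')(e)  # excitation\n    e = layers.Reshape((1, channels))(e)\n    return x * e   # recalibrate channel-wise feature responses\n\\end{verbatim}\n\n\\noindent The sigmoid gate plays the role of the adaptively learned channel weights described above; the highway blocks apply an analogous learned gate to the residual branch.\n\n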
Finally, we apply three fully connected layers with dropouts after the highway blocks to prevent overfitting~\\cite{JMLR:v15:srivastava14a}. The same nonlinearity reduction is also applied in the second-to-last layer. \n\n\n\\begin{table}[t]\n\t\\setlength{\\tabcolsep}{1.5pt}\n\t\\caption{Architecture of the neural network model used to measure the masses of the binary components. For the root convolutional layers, the setup indicates: (kernel size, \\# of output channels, stride, dilation rate, max pooling kernel, max pooling stride). All convolutional layers have a ReLU activation function and the padding is set to ``VALID'' mode. For the Leaf SE layer, the setup is: (\\# of output channels, \\# of residual blocks). The general structure for the SE layer follows the configuration described in~\\cite{hu2018squeeze}. Leaf highway layer setup: (kernel size, \\# of channels, stride, \\# of highway blocks). The configuration for the highway is described in~\\cite{srivas:2015S}. The setup for the leaf fully connected layers is: (\\# of output neurons, dropout rate). For the last layer we use ReLU activation. However, the activation function in the second-to-last layer is removed.}\n\t\\label{mass_model_config}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{tabular}{c|c|c}\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Layer \\\\ Component} & \\specialcell{Layer \\\\ Configurations} & \\specialcell{Activation \\\\ Functions}\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Root Layer: \\\\ Convolutional} & $\\begin{array}{c}\n\t\t\t\t(16, 64, 1, 2, 4, 4) \\\\\n\t\t\t\t(16, 128, 1, 2, 4, 4) \\\\\n\t\t\t\t(16, 128, 1, 2, 4, 4)\n\t\t\t\t\\end{array}$ & ReLU\\\\\n\t\t\t\t\\hline\n\t\t\t\tLeaf Layer: SE & $\\begin{array}{c}\n\t\t\t\t(128, 3) \\\\\n\t\t\t\t(128, 3) \n\t\t\t\t\\end{array}$ & ReLU \\\\\n\t\t\t\t\\hline\n\t\t\t\tLeaf Layer: Highway & $\\begin{array}{c}\n\t\t\t\t(4, 128, 2, 30)\n\t\t\t\t\\end{array}$ & ReLU\\\\\n\t\t\t\t\\hline\n\t\t\t\t\\specialcell{Leaf Layer: \\\\ Fully Connected} & $\\begin{array}{c}\n\t\t\t\t(512, 0.1) \\\\\n\t\t\t\t(256, 0.1) \\\\\n\t\t\t\t(1, 0.0)\n\t\t\t\t\\end{array}$ & $\\begin{tabular}{c}\n\t\t\t\tReLU \\\\\n\t\t\t\tIdentity \\\\\n\t\t\t\tReLU \\\\\n\t\t\t\t\\end{tabular}$\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table}\n\n\\subsection{Probabilistic Model}\n\\label{sec:prob_model}\n\\new{In this section we present the probabilistic framework based on Bayesian inference, which we have applied to the neural networks outlined in Sections \\ref{subsec:mass_model} and \\ref{subsec:properties_model}. We use Bayesian neural networks (BNNs)~\\cite{neal2012bayesian,mackay1992practical}, which are neural networks with uncertainty over their weights, to provide estimates of the posterior distributions of the BBH masses and of the properties of the BH remnant. This is in contrast to standard neural networks, which provide point estimates of parameters. We use prior and posterior distribution functions on the last two layers of each leaf. With this approach, each of the leaves becomes an independent probabilistic model that regresses the physical parameters. The root layers, on the other hand, can be viewed as feature extractors for each probabilistic leaf.}\n\n\\new{A BNN can be viewed as a probabilistic model for the posterior distribution, $p(\\boldsymbol{w}|{\\mathcal{D}})$, where $\\boldsymbol{w}$ are the model weights and $\\mathcal{D} = \\{\\boldsymbol{x}_j,\\boldsymbol{y}_j\\}_{j= 1}^n$ is the training dataset. 
Here, $\\boldsymbol{x}_j$ are the input noisy waveforms and $\\boldsymbol{y}_j$ are the continuous parameters of interest, i.e., the BBH masses and the properties of the BH remnant.} \n\n\\new{According to Bayes' theorem, $p({\\boldsymbol{w}} | \\mathcal{D}) \\propto p(\\mathcal{D}|{\\boldsymbol{w}})p({\\boldsymbol{w}}),$ where $p({\\boldsymbol{w}})$ is the prior distribution for the weights and \n$p(\\mathcal{D}|{\\boldsymbol{w}})$ is the likelihood. We assume that the likelihood function for each pair of training data is}\n\n\\new{\n\\begin{equation}\n \\label{eq:likelihood}\n p\\left(\\boldsymbol{y} \\vert \\boldsymbol{x}, \\boldsymbol{w} \\right) = \\frac{1}{\\sqrt{2\\pi} \\epsilon }\\exp{ \\left(- \\frac{\\| \\boldsymbol{y} - f_{\\boldsymbol{w}}(\\boldsymbol{x}) \\|^2}{2 \\epsilon^2} \\right)},\n\\end{equation}\n}\n\n\\noindent \\new{where $f_{\\boldsymbol{w}}$ represents the neural network function with weights $\\boldsymbol{w}$ and $\\epsilon$ is the standard deviation. The aleatoric uncertainty is captured by the likelihood distribution. A BNN allows a stochastic sampling of the weight parameters during a forward pass through the network while also encoding prior knowledge through the use of prior distributions. \nWe use a variational inference (VI) algorithm to approximate the weight posterior distribution $p(\\boldsymbol{w} | \\mathcal{D})$ using a Gaussian distribution for the weights assuming a mean field approximation, denoted by $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$. \nIt is parameterized by $\\boldsymbol{\\theta} = (\\boldsymbol{\\mu}, \\boldsymbol{\\sigma})$, representing the mean vector and the standard deviation vector of the distribution, respectively.}\n\n\n\\new{The corresponding cost function can be written as}\n\n\\new{\\begin{equation}\n \\label{eq:loss}\n \\mathcal{L} = \\mathrm{KL}\\left ( q_{\\boldsymbol{\\theta}}(\\boldsymbol{w} ) \\| p(\\boldsymbol{w}) \\right) - \\mathbb{E}_{q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})} \\log p(\\mathcal{D} \\vert \\boldsymbol{w} ),\n\\end{equation}}\n\n\\noindent \\new{which is known as the variational free energy. The prior distribution is chosen to be a standard normal distribution. Since the probabilistic layers are parameterized by the mean and variance of the weight distributions, the number of parameters which need to be optimized is doubled compared to a standard neural network. The cost function can be approximated by drawing $N$ samples $\\boldsymbol{w}^{(i)}$ from $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$,}\n\n\\new{\\begin{align}\n \\mathcal{L} & \\approx \\frac{1}{N} \\sum_{i = 1}^N \\left[ \\log q_{\\boldsymbol{\\theta}} \\left(\\boldsymbol{w}^{(i)} \\right) - \\log p\\left(\\boldsymbol{w}^{(i)} \\right) \\right. \\nonumber \\\\\n & \\quad \\quad \\left . - \\log p\\left(\\mathcal{D} \\vert \\boldsymbol{w}^{(i)} \\right) \\right] .\n \\label{eq:loss2} \n\\end{align}}\n\n\\noindent \\new{During training, for every forward model pass, the variational posterior distribution for the model parameters is estimated. Specifically, we use stochastic gradient descent to estimate the parameters $\\boldsymbol{\\theta}$ of $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$ by minimizing Eq.~\\eqref{eq:loss2}. 
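}\n\n\\new{As an illustration of how such probabilistic leaves can be built and trained, the following is a minimal sketch using TensorFlow Probability; the layer choice (\\texttt{DenseFlipout}), the constants, and the function names are illustrative assumptions rather than the modified sampling scheme used in this work.}\n\n\\begin{verbatim}\nimport tensorflow as tf\nimport tensorflow_probability as tfp\n\nN_TRAIN = 210126   # number of training waveforms (70% of 300,180)\nEPSILON = 0.1      # assumed likelihood standard deviation\n\ndef probabilistic_leaf(features):\n    # Variational layers replace the last two layers of a leaf;\n    # each DenseFlipout draws its weights from q_theta(w).\n    x = tfp.layers.DenseFlipout(128, activation=tf.nn.relu)(features)\n    return tfp.layers.DenseFlipout(1)(x)\n\ndef free_energy(model, y_true, y_pred):\n    # Gaussian negative log-likelihood (up to an additive constant)...\n    nll = tf.reduce_mean(0.5 * tf.square(y_true - y_pred) / EPSILON**2)\n    # ...plus the KL terms DenseFlipout adds to model.losses,\n    # rescaled by the dataset size, as in the variational free energy.\n    return nll + tf.add_n(model.losses) / N_TRAIN\n\\end{verbatim}\n\n\\noindent \\new{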
In testing or inference mode, for input waveform $\\boldsymbol{x}^*$, our approximate predictive distribution is given by,}\n\\new{\n\\begin{align}\n\\label{uncertainty_approx_fun}\n q( \\boldsymbol{y}^* | \\boldsymbol{x}^* ) & = \\int p(\\boldsymbol{y}^*| \\boldsymbol{x}^*, \\boldsymbol{w}) q_{\\boldsymbol{\\theta}}(\\boldsymbol{w}) \\, d{\\boldsymbol{w}}\\,.\n\\end{align}\n}\n\n\\noindent \\new{ We use sampling to compute the statistics of the corresponding estimated physical parameters, e.g., the median and 90\\% confidence interval. In addition to the aleatoric uncertainty, the uncertainty in the predictions arises from uncertainty in the weights, or so-called `epistemic uncertainty.'}\n\n\n\\new{In this probabilistic model, we apply the following simplifications: (1) the likelihood function is assumed to be Gaussian, and (2) neural network weight distributions are assumed to be independent Gaussians. Under these assumptions, the loss in Eq.~\\eqref{eq:loss2} is simplified and tractable. The statistical models and VI method are implemented using the computing framework TensorFlow Probability (TFP) \\cite{2017arXiv171110604D,2018arXiv181203973T} using a modified sampling scheme, and distributed across nodes in a data-parallel fashion using Horovod~\\cite{2018arXiv180205799S}. Details of the model training at scale are discussed in Section~\\ref{bnn_scale}.}\n\n\\subsection{Dataset Preparation}\n\nTo demonstrate the use of deep learning for parameter estimation, we consider the catalog of BBH mergers presented in~\\cite{o1o2catalog}. Based on the Bayesian analyses presented in that study, we consider the following parameter space to produce our training dataset: \\(m_1\\in[9{\\rm M}_{\\odot},\\,65{\\rm M}_{\\odot}]\\), \\(m_2\\in[5.2{\\rm M}_{\\odot},\\, 42{\\rm M}_{\\odot}]\\). The spins of the binary components span the range \\(a_{\\{1,\\,2\\}}\\in[-0.8,\\,0.8]\\). By uniformly sampling this parameter space we produce a dataset with 300,180 waveforms. These waveforms are produced with the surrogate waveform family~\\cite{blackman:2015}, considering the last second of the evolution, which includes the late inspiral, merger, and ringdown. The waveforms are produced using a sample rate of 8192~Hz.\n\n\nFor training purposes, we label the waveforms using the masses and spins of the binary components, and then use this information to also enable the neural networks to estimate the final spin of the BH remnant using the formulae provided in~\\cite{Hofmann:2016yih}, and the QNMs of the ringdown following~\\cite{Berti:2006b}. In essence, we are training our neural network models to identify the key features that determine the properties of the BBHs before and after merger using a unified framework.\n\nIn order to encapsulate the true properties of advanced LIGO noise, we whiten all the training templates using real LIGO noise from the Hanford and Livingston detectors gathered during the first and second observing runs~\\cite{losc}.\n\nWe use 70\\% of these waveform samples for training, 15\\% for validation, and 15\\% for testing. The training samples are randomly and uniformly chosen. Throughout the training, we use the ADAM optimizer to minimize the mean squared error of the predicted parameters with default hyper-parameter setups~\\cite{journals\/corr\/KingmaB14}. We set the batch size to 64, the learning rate to 0.0008, and the maximum number of iterations to 120,000. We use a dropout rate of 0.1 during training; no dropout is applied for testing and validation. 
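\n\nFor concreteness, the optimization setup described above can be sketched as follows, assuming a generic Keras model; the function and variable names are ours.\n\n\\begin{verbatim}\nimport tensorflow as tf\n\nBATCH_SIZE = 64\noptimizer = tf.keras.optimizers.Adam(learning_rate=8e-4)  # default betas\nmse = tf.keras.losses.MeanSquaredError()\n\n@tf.function\ndef train_step(model, x, y):\n    with tf.GradientTape() as tape:\n        # training=True enables the 0.1-rate dropout layers\n        loss = mse(y, model(x, training=True))\n    grads = tape.gradient(loss, model.trainable_variables)\n    optimizer.apply_gradients(zip(grads, model.trainable_variables))\n    return loss\n\\end{verbatim}\n\n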
To simulate the environment where the true GWs are embedded, we use real advanced LIGO noise to compute the power spectral density, which is then used to whiten the templates. In addition, we apply random left or right shifts of 0\\% to 6\\%. This endows the neural networks with time-invariance, and improves their ability to estimate the parameters of a signal irrespective of its position in the data stream. This technique also helps prevent overfitting. Since the locations are randomly shifted with independent noise injected, the training data are different at each epoch. \n\n\\begin{figure*}\n\t\\centerline{\n\t\t\\includegraphics[width=0.48\\linewidth]{masses_plot.png}\n\t\t\\hspace{5mm}\n\t\t\\includegraphics[width=0.48\\linewidth]{spin_omegas_plot.png}\n\t}\n\t\\caption{Relative error with which our deep learning algorithm can measure the masses of the binary black hole components, and the final spin, \\(a_f\\), and quasi-normal modes (QNMs), \\((\\omega_R,\\,\\omega_I)\\), of the remnant, as a function of the optimal matched-filtering signal-to-noise ratio (SNR). \\textit{Left panel:} For waveforms with \\(\\textrm{SNR}\\geq15\\), the primary and secondary masses can be constrained with relative errors less than \\((7\\%,\\,12\\%)\\), respectively. \\textit{Right panel:} For signals with \\(\\textrm{SNR}\\geq15\\), \\((a_f\\,,\\omega_R\\,,\\omega_I)\\) can be recovered with relative errors less than \\((13\\%,\\, 5\\%,\\,3\\%)\\), respectively.}\n\t\\label{relative_errors_mass}\n\\end{figure*}\n\n\n\\subsection{Curriculum learning with decreasing signal-to-noise ratio}\n\\label{dec_snr}\n\n\n\n\\begin{table}[t]\n\t\\setlength{\\tabcolsep}{7pt}\n\t\\caption{Decreasing peak SNR (pSNR) setup. The pSNR is uniformly chosen within the indicated range. Notice that the early stopping criterion is also applied if the number of iterations is greater than 60,000 and the relative error threshold is met. The relation between matched-filtering SNR and pSNR is: 1.0 pSNR $\\approx$ 13.0 SNR.}\n\t\\label{CL_BNN}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{sc}\n\t\t\t\t\\begin{tabular}{c|c}\n\t\t\t\t\t\\hline\n\t\t\t\t\tIterations & pSNRs \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\t1-12000 & 2.0-3.0\\\\\n\t\t\t\t\t12001-24000 & 1.5-3.0\\\\ \n\t\t\t\t\t24001-36000 & 1.0-3.0 \\\\\n\t\t\t\t\t36001-60000 & 0.5-3.0\\\\\n\t\t\t\t\t60001-90000 & 0.3-3.0\\\\\n\t\t\t\t\t90001-120000 & 0.2-3.0\\\\\n\t\t\t\t\t120001- & 0.1-3.0 \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{sc}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table}\n\n\nIn realistic detection scenarios, GWs have moderate SNRs, and are contaminated by non-Gaussian and non-stationary noise. In order to ensure that neural networks identify GWs over a broad range of astrophysically motivated SNRs, we start training them with large SNRs, and gradually reduce the SNRs to a lower level. This is an idea taken from the curriculum learning literature~\\cite{Bengio:2009:CL:1553374.1553380}, which allows the network to distill the more accurate information learned from signals with larger SNRs and transfer it to signals with lower SNRs. This approach has been demonstrated for classification, regression, and denoising of GW signals~\\cite{geodf:2017a,geodf:2017b,geodf:2017c,Rebei:2018R,hshen:2017,dgNIPS,wei:2019W,George:2017qtr}. 
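\n\nA minimal sketch of the decreasing-pSNR sampling defined in Table~\\ref{CL_BNN} follows; the schedule is taken from the table, while the function names and the use of a circular shift for the random time shifts are illustrative assumptions. As detailed in the next paragraph, templates are pre-normalized to unit peak amplitude and the noisy data to unit variance.\n\n\\begin{verbatim}\nimport numpy as np\n\n# (last iteration, pSNR low, pSNR high) from Table CL_BNN\nPSNR_SCHEDULE = [(12000, 2.0, 3.0), (24000, 1.5, 3.0),\n                 (36000, 1.0, 3.0), (60000, 0.5, 3.0),\n                 (90000, 0.3, 3.0), (120000, 0.2, 3.0)]\n\ndef sample_psnr(iteration, rng):\n    # Peak SNR drawn uniformly from the range active at this iteration.\n    for last_iter, low, high in PSNR_SCHEDULE:\n        if iteration <= last_iter:\n            return rng.uniform(low, high)\n    return rng.uniform(0.1, 3.0)   # iterations beyond 120,000\n\ndef noisy_example(template, noise, iteration, rng, max_shift=0.06):\n    # Scale a unit-peak template to a sampled pSNR, shift it, add noise.\n    n = template.shape[-1]\n    shift = rng.integers(-int(max_shift * n), int(max_shift * n) + 1)\n    x = np.roll(template, shift) * sample_psnr(iteration, rng) + noise\n    return x / x.std()   # normalize the noisy data to unit variance\n\\end{verbatim}\n\n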
Specifically, each waveform is normalized to have maximum amplitude 1, and then we use curriculum learning with the decreasing SNR scheme detailed in Table~\\ref{CL_BNN} (the strategy for the BNN models is the same). The noisy data is then normalized to have variance one. We normalize the data to ensure that the trained model can characterize true BBH signals in realistic detection scenarios, covering a broad range of SNRs.\n\nThe different steps followed in our curriculum learning scheme are presented in Table~\\ref{CL_BNN}. In addition, we use an early stopping criterion with relative error thresholds of 0.026 for \\((m_1\\,,m_2)\\) and 0.0016 for \\((a_f,\\,\\omega_R\\,,\\omega_I)\\). One additional change for the mass model is that we rescale the masses by a factor of 1\/20 to make the optimization converge faster. During evaluation, we simply scale the predictions back to their original scale.\n\n\\subsection{Training of the Bayesian Neural Network Model}\n\\label{bnn_scale}\n\n\\new{For the probabilistic layers, as the effective number of parameters to be optimized is double that of standard layers, we examine the impact of scaling the BNN code across nodes on the pre-exascale Cray XC40 system, Theta, at Argonne National Laboratory. Using an optimized build of both TensorFlow and Horovod for the Intel Xeon Phi [code-named Knights Landing (KNL)] architecture, we distribute the code using one MPI rank per node and 128 hardware threads per node, and scale up to 1024 nodes. Results for the number of samples processed per second during training are shown in Figure~\\ref{bnn_nn_scaling}. We achieve $\\sim 75$\\% efficiency up to 1024 nodes on Theta. As the number of nodes is increased, there is increased communication of the gradients at each iteration, which causes an expected decrease in performance away from the ideal scaling. As the BNN layers have in effect twice the parameters of the standard layers, the communication cost is slightly higher, which can be seen as a decrease in the number of samples processed per second.}\n\n\\new{In addition to evaluating the efficiency on Theta, we fully trained the two BNN models on the Hardware-Accelerated Learning (HAL) cluster at the National Center for Supercomputing Applications. Each model was trained on 4 NVIDIA V100 GPUs with a batch size of 64. The parameter $\\epsilon$ in the likelihood function Eq.~\\eqref{eq:likelihood} is chosen to be 0.1 for the mass model and $10^{-3}$ for the final spin and QNMs model. We draw $N = 100$ samples and $M = 1600$ samples from $q_{\\boldsymbol{\\theta}}(\\boldsymbol{w})$ at training and testing, respectively. The learning rate for the two BNN models is $8 \\times 10^{-6}$. The total number of iterations is 200,000 to guarantee convergence.}\n\n\\begin{figure}[t!]\n\\includegraphics[width=90mm]{bnn_nn_scaling.pdf}\n\t\\caption{Samples processed per second with increasing number of nodes during training of the neural network. The results for the BNN are shown in cyan and for the standard neural network in blue. Ideal scaling is shown as a dashed black bar at each node count. Error bars are the variance from all iterations during training.}\n\t\\label{bnn_nn_scaling}\n\\end{figure}\n\n\n\\section{Experimental Results}\n\\label{experiments}\nUsing the signal manifold described in the previous section, we present results quantifying the accuracy with which our neural network models can measure the masses of the binary components, and the properties of the corresponding remnant. 
\n\n\\indent Figure~\\ref{relative_errors_mass} presents the accuracy with which the binary components \\((m_1,\\,m_2)\\) can be recovered over a broad range of SNRs. We notice that for signals with \\(\\textrm{SNR}\\geq15\\), the primary and secondary masses can be constrained with relative errors~\\cite{relerror:1965} less than \\((7\\%,\\,12\\%)\\), respectively. These results represent a major improvement over the analysis we reported in the context of a 2-D signal manifold in~\\cite{geodf:2017a,geodf:2017b}. Furthermore, we can also see from the same figure that for signals with \\(\\textrm{SNR}\\geq15\\) our neural network models can measure the triplet \\((a_f\\,,\\omega_R\\,,\\omega_I)\\) with relative errors less than \\((13\\%,\\, 5\\%,\\,3\\%)\\), respectively. To the best of our knowledge, this is the first time deep learning has been used to infer the properties of BH remnants directly from GW signals.\n\n\n\\section{Deep learning parameter estimation of detected binary black hole mergers}\n\\label{discussion}\n\n\n\\begin{table*}[!htp]\n\t\\setlength{\\tabcolsep}{4.5pt}\n\t\\renewcommand{\\arraystretch}{2.0}\n\t\\caption{\\new{Parameter estimation results for the catalog of binary black hole mergers reported in~\\cite{o1o2catalog} using our deterministic deep learning models. We report median values with the 90\\% confidence interval, which was computed by whitening gravitational wave strain data that contain real gravitational wave signals with up to 240 different power spectral densities.}}\n\t\\label{tab_real_event_results}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{sc}\n\t\t\t\t\\begin{tabular}{c|ccccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\tEvent Name & $m_1\\, [{\\rm M}_{\\odot}]$ & $m_2\\, [{\\rm M}_{\\odot}]$ & $a_f$ & $\\omega_{R}$ & $\\omega_{I}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tGW150914 & $35.64_{-5.55}^{+5.19}$ & $29.74_{-3.90}^{+2.12}$ & $0.658_{-0.006}^{+0.039}$ & $0.5253_{-0.0026}^{+0.0186}$ & $0.0820_{-0.0009}^{+0.0002}$ \\\\\n\t\t\t\t\tGW151012 & $25.01_{-9.09}^{+12.00}$ & $16.45_{-6.01}^{+4.50}$ & $0.637_{-0.015}^{+0.011}$ & $0.5155_{-0.0086}^{+0.0028}$ & $0.0824_{-0.0002}^{+0.0002}$ \\\\ \n\t\t\t\t\tGW151226 & $12.39_{-0.25}^{+3.57}$ & $7.70_{-0.48}^{+5.77}$ & $0.725_{-0.140}^{+0.051}$ & $0.5558_{-0.0611}^{+0.0241}$ & $0.0776_{-0.0002}^{+0.0055}$ \\\\ \n\t\t\t\t\tGW170104 & $32.28_{-6.33}^{+4.31}$ & $22.31_{-3.06}^{+7.01}$ & $0.684_{-0.035}^{+0.014}$ & $0.5157_{-0.0068}^{+0.0071}$ & $0.0854_{-0.0015}^{+0.0004}$ \\\\\n\t\t\t\t\tGW170608 & $12.90_{-0.31}^{+3.27}$ & $9.93_{-0.09}^{+2.08}$ & $0.716_{-0.077}^{+0.017}$ & $0.5385_{-0.0154}^{+0.0057}$ & $0.0827_{-0.0004}^{+0.0006}$ \\\\\n\t\t\t\t\tGW170729 & $45.32_{-0.98}^{+2.23}$ & $24.41_{-2.32}^{+3.16}$ & $0.737_{-0.058}^{+0.036}$ & $0.5682_{-0.0303}^{+0.0038}$ & $0.0739_{-0.0016}^{+0.0054}$ \\\\\n\t\t\t\t\tGW170809 & $35.71_{-8.46}^{+7.53}$ & $24.09_{-2.44}^{+5.80}$ & $0.632_{-0.010}^{+0.008}$ & $0.5123_{-0.0041}^{+0.0034}$ & $0.0826_{-0.0002}^{+0.0001}$ \\\\\n\t\t\t\t\tGW170814 & $30.54_{-8.78}^{+2.01}$ & $22.33_{-7.96}^{+0.07}$ & $0.679_{-0.003}^{+0.002}$ & $0.5364_{-0.0030}^{+0.0009}$ & $0.0812_{-0.0001}^{+0.0003}$ \\\\\n\t\t\t\t\tGW170818 & $31.52_{-1.95}^{+2.15}$ & $25.97_{-0.87}^{+1.21}$ & $0.716_{-0.021}^{+0.015}$ & $0.5474_{-0.0104}^{+0.0062}$ & $0.0786_{-0.0013}^{+0.0013}$ \\\\\n\t\t\t\t\tGW170823 & $46.98_{-3.89}^{+0.58}$ & $33.01_{-5.92}^{+2.03}$ & $0.626_{-0.023}^{+0.014}$ & $0.5067_{-0.0057}^{+0.0070}$ & $0.0827_{-0.0003}^{+0.0006}$ 
\\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{sc}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table*}\n\n\\begin{table*}[!htp]\n\t\\setlength{\\tabcolsep}{4.5pt}\n\t\\renewcommand{\\arraystretch}{2.0}\n\t\\caption{\\new{As Table~\\ref{tab_real_event_results}, but now using our probabilistic deep learning models. The uncertainty for these models is captured by randomness in the network weights, not by various noisy realizations of the signals.}}\n\t\\label{tab_real_event_results_BNN}\n\t\\vskip -0.25in\n\t\\begin{center}\n\t\t\\begin{small}\n\t\t\t\\begin{sc}\n\t\t\t\t\\begin{tabular}{c|ccccc}\n\t\t\t\t\t\\hline\n\t\t\t\t\tEvent Name & $m_1 [{\\rm M}_{\\odot}]$ & $m_2 [{\\rm M}_{\\odot}]$ & $a_f$ & $\\omega_{R}$ & $\\omega_{I}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\tGW150914 & $36.08_{-4.45}^{+4.77}$ & $27.42_{-3.92}^{+3.49}$ & $0.689_{-0.032}^{+0.017}$ & $0.5390_{-0.0269}^{+0.0124}$ & $0.0797_{-0.0022}^{+0.0011}$ \\\\\n\t\t\t\t\tGW151012 & $21.56_{-2.12}^{+3.07}$ & $15.46_{-2.32}^{+2.44}$ & $0.681_{-0.032}^{+0.016}$ & $0.5365_{-0.0266}^{+0.0130}$ & $0.0804_{-0.0018}^{+0.0008}$ \\\\ \n\t\t\t\t\tGW151226 & $18.04_{-2.49}^{+1.98}$ & $11.96_{-2.89}^{+1.67}$ & $0.715_{-0.035}^{+0.017}$ & $0.5533_{-0.0280}^{+0.0142}$ & $0.0763_{-0.0036}^{+0.0017}$ \\\\ \n\t\t\t\t\tGW170104 & $31.23_{-3.26}^{+4.09}$ & $23.27_{-3.25}^{+3.62}$ & $0.692_{-0.033}^{+0.016}$ & $0.5358_{-0.0302}^{+0.0052}$ & $0.0796_{-0.0052}^{+0.0026}$ \\\\\n\t\t\t\t\tGW170608 & $16.73_{-2.19}^{+2.38}$ & $12.44_{-2.21}^{+2.03}$ & $0.673_{-0.036}^{+0.019}$ & $0.5235_{-0.0297}^{+0.0149}$ & $0.0818_{-0.0012}^{+0.0005}$ \\\\\n\t\t\t\t\tGW170729 & $45.28_{-6.42}^{+6.63}$ & $32.34_{-5.48}^{+3.91}$ & $0.751_{-0.038}^{+0.019}$ & $0.5776_{-0.0309}^{+0.0151}$ & $0.0756_{-0.0048}^{+0.0023}$ \\\\\n\t\t\t\t\tGW170809 & $32.88_{-3.45}^{+4.49}$ & $26.56_{-3.91}^{+3.54}$ & $0.714_{-0.034}^{+0.016}$ & $0.5492_{-0.0271}^{+0.0060}$ & $0.0760_{-0.0060}^{+0.0030}$ \\\\\n\t\t\t\t\tGW170814 & $32.40_{-3.60}^{+4.77}$ & $25.22_{-4.34}^{+4.38}$ & $0.675_{-0.033}^{+0.016}$ & $0.5329_{-0.0272}^{+0.0140}$ & $0.0794_{-0.0024}^{+0.0011}$ \\\\\n\t\t\t\t\tGW170818 & $33.49_{-3.19}^{+4.51}$ & $29.71_{-4.63}^{+4.59}$ & $0.631_{-0.032}^{+0.015}$ & $0.5159_{-0.0277}^{+0.0135}$ & $0.0829_{-0.0016}^{+0.0008}$ \\\\\n\t\t\t\t\tGW170823 & $38.24_{-5.38}^{+4.78}$ & $28.16_{-3.67}^{+4.63}$ & $0.664_{-0.036}^{+0.018}$ & $0.5321_{-0.0302}^{+0.0156}$ & $0.0757_{-0.0088}^{+0.0043}$ \\\\\n\t\t\t\t\t\\hline\n\t\t\t\t\\end{tabular}\n\t\t\t\\end{sc}\n\t\t\\end{small}\n\t\\end{center}\n\t\\vskip -0.1in\n\\end{table*}\n\n\nIn this section we use our neural network models to measure \\((m_1,\\,m_2,\\,a_f,\\,\\omega_R,\\,\\omega_I)\\) from all the BBH mergers detected to date by the advanced LIGO and Virgo observatories~\\cite{o1o2catalog}. We present results for two types of neural network models, namely, deterministic and probabilistic. \n\n\\subsection{Parameter estimation with deterministic neural networks}\n\n\\new{To gain insight into the performance of our deterministic neural network models in inferring the astrophysical parameters of BBH mergers, we begin by evaluating them for a given BBH system whose ground truth parameters are \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})= (31.10M_{\\odot},\\,20.46M_{\\odot},\\,0.718,\\,0.5412,\\, 0.0800)\\). Using 1,600 different noise realizations, we have constructed the model predictions for two different SNR cases, as shown in Figure~\\ref{fig_multiple_noise_check}. 
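}\n\n\\new{A minimal sketch of how such summary statistics can be extracted from repeated model evaluations is given below; the function name and the placeholder samples are illustrative assumptions.}\n\n\\begin{verbatim}\nimport numpy as np\n\ndef median_and_90ci(samples):\n    # Median and 90% confidence interval from repeated predictions.\n    lo, med, hi = np.percentile(samples, [5.0, 50.0, 95.0])\n    return med, med - lo, hi - med   # value, minus error, plus error\n\n# e.g., m1 predictions from 1,600 noise realizations of one signal\nm1_samples = np.random.normal(31.10, 2.0, size=1600)  # placeholder only\nprint(median_and_90ci(m1_samples))\n\\end{verbatim}\n\n\\noindent \\new{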
We notice that these distributions capture the ground-truth values of the BBH system under consideration, and that the reconstruction of the actual parameters of the system improves for larger SNR values, which is in agreement with results obtained using traditional Bayesian methods for GW parameter estimation~\\cite{bambiann:2015PhRvD}. Having conducted similar experiments for other BBH systems, we then went on to use these deep learning models for the parameter reconstruction of real BBH mergers.}\n\n\nIn Table~\\ref{tab_real_event_results} we present the median and 90\\% confidence intervals for the astrophysical parameters \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})\\) of all the BBH mergers presented in~\\cite{o1o2catalog}. These values are computed by whitening the data containing a putative signal with 240 different Power Spectral Densities (PSDs), half of which are constructed using LIGO Hanford noise and the rest using LIGO Livingston noise. Through this approach we are effectively measuring the impact of PSD variations on the measurements of the astrophysical parameters of BBH mergers. We find that these estimates are in very good agreement with the results obtained with the Bayesian analyses presented in Table III of~\\cite{o1o2catalog}. \n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_det_multi_noise_real_1_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_det_multi_noise_real_1_14.png}}\n \\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_test_GW170104_sim_3_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_test_GW170104_sim_3_14.png}}\n\t\n\t\\caption{\\new{Model predictions produced by our deterministic models by evaluating them with 1,600 different noise realizations for a binary black hole system with ground truth parameters \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})= (31.10M_{\\odot},\\,20.46M_{\\odot},\\,0.718,\\,0.5412,\\, 0.0800)\\). The panels show results for the distribution of the estimates for (\\(m_1,\\,m_2,\\,a_f,\\,\\omega_R,\\,\\omega_I\\)) assuming \\(\\textrm{SNR}=\\{13,\\,19.5\\}\\).}}\n\t\\label{fig_multiple_noise_check}\n\\end{figure*}\n\n\\subsection{Bayesian neural network parameter estimation}\n\n\\new{In addition to parameter estimation results obtained with our deterministic models, based on varying the noise realization with different PSDs, we also evaluated our BNN models on two types of signals. First, we evaluated them on simulated signals to quantify the performance of our probabilistic models. Results of this exercise are presented in Figure~\\ref{bnn_dist}. We carried out an exhaustive study to confirm that our BNN models provide consistent results for different random initializations, and that the results exhibit strong convergence for the optimal choice of hyperparameters.} \n\n\\new{Upon confirming that our probabilistic models perform well, we used them to estimate the astrophysical parameters of the entire catalog of BBH signals reported in~\\cite{o1o2catalog}. 
These results, which provide the median and the 90\\% confidence intervals, are summarized in Table~\\ref{tab_real_event_results_BNN}.}\n\n\n\\begin{figure*}\n\t\\centering\n\t\\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_BNN_newloss_newdata_multi_noise_real_1_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{mass_seaborn_BNN_newloss_newdata_multi_noise_real_1_14.png}}\n \\subfigure[SNR = 13.0]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_BNN_test_GW170104_sim_3_9.png}}\n\t\t\\subfigure[SNR = 19.5]{\n\t\t\\includegraphics[width=0.49\\linewidth]{spin_ome_BNN_test_GW170104_sim_2_14.png}}\n\t\n\t\\caption{\\new{Variational inference distributions produced by our Bayesian Neural Network models for a binary black hole system with ground truth parameters \\((m_1,\\,m_2,\\,a_f,\\,\\omega_{R},\\,\\omega_{I})= (31.10M_{\\odot},\\,20.46M_{\\odot},\\,0.718,\\, 0.5412,\\, 0.0800)\\). The panels show results for the distribution of the estimates for (\\(m_1,\\,m_2,\\,a_f,\\,\\omega_R,\\,\\omega_I\\)). As in Figure~\\ref{fig_multiple_noise_check}, we consider \\(\\textrm{SNR}=\\{13,\\,19.5\\}\\).}}\n\t\\label{bnn_dist}\n\\end{figure*}\n\n\n The deep learning parameter estimation results presented in Table~\\ref{tab_real_event_results_BNN} are consistent with those obtained with established, Bayesian parameter estimation pipelines~\\cite{o1o2catalog}. \\new{The reliable astrophysical information inferred in low latency (less than 2 milliseconds per BBH signal) by these deep learning algorithms warrants the extension of this framework to characterize other GW sources, including eccentric compact binary mergers, and sources such as BBH systems with significant spin and asymmetric mass ratios that require the inclusion of higher-order modes for accurate GW source modeling. This work is under earnest development and will be presented shortly.} \n\n\n\nHaving demonstrated the application of deep learning at scale for the characterization of BBH mergers, it is now in order to design deep neural networks for real-time detection and characterization of GW sources that are expected to have electromagnetic and astro-particle counterparts, i.e., BNS and NSBH systems. For that study, we expect no additional computational challenges beyond the ones we have already addressed in this analysis. The central development for such an effort, however, will consist of designing a clever algorithm to readily identify BNS or NSBH systems in a hierarchical manner, i.e., in principle it is not necessary to train neural networks using minute-long waveforms. Rather, we need to figure out how much information is needed to accurately reconstruct the astrophysical parameters of one of these events in real time. These studies should be pursued in the future. \n\n\\section{Conclusion}\n\\label{conclusion}\n\nWe have presented the first application of deep learning at scale to characterize the astrophysical properties of BHs whose spins are aligned or anti-aligned, and which evolve on quasi-circular orbits. Using over \\(10^7\\) waveforms to densely sample this parameter space, and encoding time- and scale-invariance, we have demonstrated that deep learning enables real-time GW parameter estimation. These studies mark the first time BNNs are trained using 1,024 nodes on a supercomputer platform tuned for deep learning research, and when applied for the analysis of real advanced LIGO data, they maintain similar accuracy to models trained on 4 V100 GPUs. 
Our results are consistent with established, compute-intensive, Bayesian methods that are routinely used for GW parameter estimation. \n\nThe approach we have presented herein provides the means to constrain the parameters of BBHs before and after the merger event. We have shown that deep learning can directly infer the final spin and QNMs of BH remnants, thereby paving the way to directly use QNMs to assess whether BH remnants are accurately described by general relativity. In future work, we will study how accurately these neural network models can tell apart ringdown waveforms described by astrophysically motivated alternative theories of gravity in realistic detection scenarios. The extension of this work to enable real-time detection and parameter estimation of GW sources that are central for Multi-Messenger Astrophysics discovery campaigns, and other astrophysically motivated sources, such as eccentric BBH mergers, should also be investigated. \n\n\n\n\\section{Acknowledgements}\nDue to the size of the data, the datasets utilized in this study are available from the corresponding author on reasonable request. Codes will be available before the official publication. \n\n\nThis work utilized the Hardware-Accelerated Learning (HAL) cluster, supported by the NSF Major Research Instrumentation program, grant \\#1725729, as well as the University of Illinois at Urbana-Champaign. \n\nThis research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. \n\nThis research is part of the Blue Waters sustained-petascale computing project, \nwhich is supported by the NSF (awards OCI-0725070 and ACI-1238993) \nand the State of Illinois. Blue Waters is a joint effort of the University of Illinois at \nUrbana-Champaign and its National Center for Supercomputing Applications (NCSA). We acknowledge support from the NCSA, and thank the \\href{http:\/\/gravity.ncsa.illinois.edu}{NCSA Gravity Group} for useful feedback. Tesla P100 and V100 GPUs used for this project were donated by NVIDIA to the \\href{http:\/\/gravity.ncsa.illinois.edu}{NCSA Gravity Group}, and are hosted by Vlad Kindratenko at the Innovative Systems Lab at NCSA. This work also made use of the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Specifically, it used the Bridges system, which is supported by NSF award ACI-1445606, at the Pittsburgh Supercomputing Center (PSC); the TG-PHY160053 grant is gratefully acknowledged. \n\nThis paper was reviewed and approved by the LIGO P\\&P committee.\n\n\\vspace{-0.0cm}\n\\section{Contribution}\n\nEAH envisioned this study, and directed the construction of the data sets used to train\/validate\/test the neural network models. H~Shen developed the neural network structure and carried out the training and evaluation. ZZ supervised the evaluation of the neural network performance. EJ created the BNN, implemented it to run at scale, and advised on its use for parameter predictions. H~Shen created the BNN code with a new sampling approach and training objective, and trained and evaluated the BNN model on the HAL machine for parameter predictions on real GW events. H~Sharma developed the BNN code and carried out extensive scaling tests on Theta. All co-authors contributed to drafting and editing the manuscript.
\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\nThe possible existence of a liquid-liquid critical point in deeply supercooled water has been a subject of debate in part due to the challenges associated with providing definitive experimental evidence.\nPioneering work by Mishima and Stanley [Nature 392, 164 (1998) and Phys.~Rev.~Lett. 85, 334 (2000)] sought to shed light on this problem by studying the melting curves of different ice polymorphs and their metastable continuation in the vicinity of the expected location of the liquid-liquid transition and its associated critical point.\nBased on the continuous or discontinuous changes in slope of the melting curves, Mishima suggested that the liquid-liquid critical point lies between the melting curves of ice III and ice V.\nHere, we explore this conjecture using molecular dynamics simulations with a purely-predictive machine learning model based on \\textit{ab initio} quantum-mechanical calculations.\nWe study the melting curves of ices III, IV, V, VI, and XIII using this model and find that the melting lines of all the studied ice polymorphs are supercritical and do not intersect the liquid-liquid transition locus.\nWe also find a pronounced, yet continuous, change in slope of the melting lines upon crossing of the locus of maximum compressibility of the liquid.\nFinally, we analyze critically the literature in light of our findings, and conclude that the scenario in which melting curves are supercritical is favored by the most recent computational and experimental evidence.\nThus, although the preponderance of experimental and computational evidence is consistent with the existence of a second critical point in water, the behavior of the melting lines of ice polymorphs does not provide strong evidence in support of this viewpoint, according to our calculations.\n\n \n\n\n\\newpage\n\\parindent 1em\n\n\\section*{Introduction}\n\\label{sec:introduction}\nWater continues to be the focus of intense scientific inquiry, not only because of its importance in the biological and physical sciences, but also on account of its distinctive thermophysical properties and phase behavior. Water exhibits at least 17 different crystalline phases (with new ones continuing to be uncovered) \\cite{Salzmann19,Hansen21}, multiple glassy states \\cite{Loerting11}, and possibly also a liquid-liquid phase transition (LLT) between high-density and low-density liquids (HDL and LDL, respectively) under supercooled conditions \\cite{Poole92,Gallo16}. As such, water provides a rich proving ground to stretch our understanding of diverse thermophysical phenomena including complex phase equilibria, metastable phase transitions, and glass physics \\cite{DebenedettiBook}, as well as the possible relationships between them \\cite{Handle17,Debenedetti03}. \n\nThe possibility of an LLT in water has been the focus of numerous studies \\cite{Gallo16}, and a preponderance of both experimental and computational evidence points to the existence of water's LLT at positive pressures ($P$) and supercooled temperatures ($T$) (i.e., below the melting $T$ of the stable ice I phase) \\cite{Kim20,NILSSON22,Palmer14,Debenedetti20,Palmer18,Gartner22,weis2022liquid}. However, there remain many unresolved questions around the LLT and its relationship to water's properties and various solid phases. 
A set of observations instrumental to the development of the argument in favor of the LLT came about when Mishima and Stanley characterized the melting of various ice polymorphs to liquid water upon decompression at different $T$ \\cite{Mishima98,Mishima00}. They observed that the melting curve of ice III exhibited a notable but continuous change in slope in the $T-P$ plane, while ice V and ice IV exhibited sharp and seemingly discontinuous changes in slope. Recall that, by the Clausius-Clapeyron equation \\cite{callen1998thermodynamics},\n\\begin{equation}\n\\frac{dP}{dT} = \\frac{\\Delta H}{T_m \\Delta V},\n\\label{eq:Clausius-Clapeyron}\n\\end{equation}\nthe slope $dP \/dT$ of a line of phase coexistence $T_m(P)$ is related to the change in enthalpy $\\Delta H$ and volume $\\Delta V$ across the transition. This idea suggests that if a melting curve exhibits a discontinuous change in slope, it correspondingly reflects a discontinuous change in the properties of ice and\/or liquid water at that point. Given that the enthalpy and volume of crystalline solids is only weakly dependent on $T$ and $P$, Mishima and Stanley concluded that the properties of the liquid phase were changing discontinuously (i.e., evidence of an LLT). This argument, if correct, would place the liquid-liquid critical point (LLCP) somewhere in between the ice V and ice III melting lines, with the LLT coexistence line intersecting the ice V and ice IV melting curves at the point of discontinuous change in slope. Mishima also probed the melting lines of ices VI and XIII, but was unable to extend those curves far enough to intersect with the possible LLT line.\n\nThis rationalization for the observed trends, while plausible, remains difficult to definitively explore experimentally due to rapid crystallization of the stable ice I phase upon melting of the other polymorphs. Similar practical challenges also hamper direct experimental demonstration of the LLT. Thus, open questions remain about the true relationship between a possible LLT and the metastable melting of the ice phases. Molecular modeling represents an attractive route to probe these ideas, as one can design simulation methodologies free from unwanted crystallization, which allow us to directly study the relationship between the LLT and the various ices. In parallel, advances in machine learning (ML)-based interaction potentials \\cite{Noe20,Wen22} allow us to develop predictive intermolecular potential models that describe water's interactions at the level of an \\textit{ab initio} reference calculation (e.g., density functional theory), thus enabling purely-predictive simulations of complex collective properties and phase behavior at tractable computational cost \\cite{Gartner20,reinhardt2021quantum,Zhang21,Piaggi21,Schran21,piaggi2022homog}. In this study, we coupled one such ML-potential method (Deep Potential Molecular Dynamics, DPMD) \\cite{Zhang18,Wang18} with several advanced simulation techniques to shed further light on the possible relationship between the LLT and water's liquid-solid phase behavior. \n\n\\section*{Potential scenarios}\n\\label{sec:potential_scenarios}\n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{Figure1.pdf}\n\\caption{\\label{fig:Fig1} Hypothetical scenarios describing the possible relationship between ice polymorph melting curves and the LLT. 
The upper plots show the melting curve of a hypothetical ice polymorph (red solid line), the LLT line (gray solid line), the LLCP (gray circle), and the Widom line (gray dashed line). The lower plots show hypothetical free energy surfaces for the liquid density along the melting curves at the three points marked by \\textbf{+} signs. Scenario 1 (left) shows a case where the melting curve is significantly supercritical, Scenario 2 (center) shows a case where the melting curve is slightly supercritical, and Scenario 3 (right) shows a case where the melting curve is subcritical.}\n\\end{figure*}\n\nBefore describing the details of our approach and results, we illustrate schematically the possible classes of behavior in FIG.~\\ref{fig:Fig1}.\nIn this discussion, we assume the existence of an LLT.\nThe elements that we consider in our analysis are the melting curve of an ice polymorph, the liquid-liquid critical point, the liquid-liquid coexistence line (or binodal), and the Widom line. The Widom line can be regarded as an extension of the liquid-liquid coexistence line to supercritical conditions and is defined by the locus of maxima of the correlation length.\nResponse functions, such as the heat capacity at constant pressure $C_P$ and the isothermal compressibility $\\kappa_T$, also have pronounced maxima at supercritical conditions even far from the critical point, and the values of the response functions diverge as the critical point is approached \\cite{xu2005relation}.\nFurthermore, the lines of maxima of the response functions in the $T-P$ plane asymptotically converge to the Widom line as the critical point is approached from supercritical conditions \\cite{xu2005relation}.\n$C_P=(\\partial H\/ \\partial T)_P$ and $\\kappa_T=-(1\/V)(\\partial V\/\\partial P)_T$ are derivatives of the enthalpy $H$ and volume $V$, and thus we expect the fastest change in these liquid-state properties in the immediate vicinity the Widom line.\nIn turn, a pronounced change in the enthalpy and volume of the liquid at the Widom line will lead to correspondingly pronounced changes in slope of the ice melting line as predicted by Eq.~\\eqref{eq:Clausius-Clapeyron}.\n\nWe now analyze three possible scenarios.\nIf the melting curve of a particular polymorph were to be significantly supercritical (Scenario 1, FIG.~\\ref{fig:Fig1} left), the impact of the critical point would be minimal.\nTherefore, we would expect to observe a modest change in slope of the melting curve and the free energy surface of the liquid state would have a single basin that smoothly moves from high to low density as temperature decreases along the melting curve. If the melting curve passed near to the critical point but still at supercritical conditions (Scenario 2, FIG.~\\ref{fig:Fig1} center), a more significant but still continuous change in slope might be observed as the liquid properties change swiftly but continuously upon crossing the Widom line. 
In this case, the free energy surfaces would still only show one single minimum at a given state point yet they can show significant asymmetry \\cite{Gartner22}, and broadening at the intersection of the melting curve with the Widom line.\nThe broadening of the free energy surface of the liquid as a function of density at the Widom line follows from the fact that density fluctuations $\\sigma_{\\rho}$ are related to $\\kappa_T$ via $\\sigma_{\\rho}^2=\\rho^2 k_B T \\kappa_T\/ V$ where $\\rho$ is the density and $k_B$ the Boltzmann constant \\cite{pathria2016statistical}.\nFinally, if the melting curve was subcritical (Scenario 3, FIG.~\\ref{fig:Fig1} right), a discontinuous change in liquid properties across the LLT would result in a discontinuous change in the slope of the melting curve, and a free energy surface with two basins of equal depth would develop at the point of liquid-liquid phase coexistence (i.e., where the ice melting line meets the LLT line). Moving forward, we will situate our simulation results in the context of these three potential scenarios.\n\n\\section*{Calculation of melting curves}\n\\label{sec:methods1}\n\nOur molecular dynamics simulations were driven by a deep potential model\\cite{Zhang18} of water developed by Zhang et al. \\cite{Zhang21}\nThe model has been carefully trained to reproduce with high fidelity the potential energy surface of water based on density functional theory (DFT) calculations with the Strongly Constrained and Appropriately Normed (SCAN) exchange and correlation functional \\cite{Sun15}.\nSCAN is one of the best semilocal functionals available and describes with good accuracy many properties of water and ice, and their anomalies \\cite{Sun16,Chen17,Piaggi21}.\nEven though the model is short-ranged with a cutoff of 6 \\AA, it can capture subtle physical effects, such as polarization \\cite{piaggi2022homog} and many-body correlations \\cite{Zhang18}.\nFurthermore, this model describes qualitatively the behavior of water and ice polymorphs in a region of the phase diagram spanning temperatures 0-500 K and pressures 0-50 GPa \\cite{Zhang21}.\nIt is thus suitable to represent ice III, IV, V, VI, and XIII at the conditions of interest for this work.\nAnother aspect of critical importance is whether the model has a liquid-liquid transition at deeply supercooled conditions.\nWe recently proved rigorously using free energy calculations that this model has a liquid-liquid transition with a critical point at $T_c = 242 \\pm 5$ K and $P_c = 0.295 \\pm 0.015$ GPa \\cite{Gartner22}.\nIt is important to note that SCAN also has limitations.\nLargely due to the self-interaction error in semilocal functionals \\cite{sharkas2020self}, the strength of the hydrogen bond is overestimated, resulting in an upward displacement of melting temperatures of about 40 K with respect to experiments \\cite{Piaggi21}. Additionally, the solid polymorphs ice III and ice XV are incorrectly predicted by SCAN to be metastable at all ($T$, $P$) \\cite{Zhang21}. However, given the complexity of water's phase diagram, SCAN predicts the relative location of the various phase boundaries in good agreement with experiment \\cite{Zhang21}.\n\n\\begin{figure*}\n\\includegraphics[width=0.95\\textwidth]{Figure2.pdf}\n\\caption{\\label{fig:Fig2} Overview of the methodology to calculate melting curves of ice polymorphs. The procedure is illustrated using the case of ice III. 
(A) Number of ice III-like molecules as a function of time in the biased coexistence simulations at various $T$ and $P$. The colors of the curves correspond to the $T$, as labeled to the right of the figure. Empty plots denote that no simulations were run at that ($T$, $P$). The range (324,378) that is reversibly sampled corresponds to one layer of ice III. (B) Free energy surfaces as a function of number of ice III-like molecules, where the dashed line is a linear fit to the free energy surface and the shaded region denotes the uncertainty. Colors match the same $T$ reported in panel (A) above. (C) Chemical potential difference between ice III and liquid at various $T$ and $P$. The gray dashed line is a linear fit to the data, and the shaded region represents one standard deviation of uncertainty in the fit parameters. (D) Melting curve obtained by this procedure, where the blue points represent the $T$ and $P$ of zero chemical potential difference between ice and liquid obtained in panel (C). Error bars represent one standard deviation errors in the fit parameters as shown in (C). The dashed line is the melting curve obtained from the integration of the Clausius-Clapeyron equation. (E,F) Simulation snapshots illustrating ice III and the molecular environments used to generate the order parameter \\cite{Piaggi19,Bore22} to drive the biased coexistence (E), and the ice III-liquid coexistence geometry (F).}\n\\end{figure*}\n\nHerein, we computed the melting lines of the ice polymorphs in two stages.\nIn the first stage, we calculated a few points along the liquid-solid coexistence lines using a biased coexistence approach \\cite{Bore22} in which we simulate a particular ice polymorph and liquid water in direct coexistence (FIG.~\\ref{fig:Fig2}F), and use a bias potential to reversibly crystallize and melt a layer of solid (FIG.~\\ref{fig:Fig2}A).\nThis approach was used in a recent work to calculate the phase diagram of the state-of-the-art empirical model of water TIP4P\/Ice \\cite{Abascal05}, and can be regarded as a generalization of the interface pinning approach \\cite{Pedersen13}.\nFrom biased coexistence simulations carried out at different temperatures and pressures, we extract the difference in chemical potential between the liquid and ice from the slope of the free energy surfaces\\cite{Pedersen13,Bore22} (FIG.~\\ref{fig:Fig2}B), and locate the liquid-ice coexistence temperature at a given pressure as the temperature at which this difference is zero (FIG.~\\ref{fig:Fig2}C-D).\nWe applied this procedure to ice III, IV, V, and XIII to obtain a few coexistence points for each polymorph.\nSee FIG.~\\ref{fig:Fig2} for an overview of this procedure for the case of ice III.\nWe show the results for ice IV, V, and XIII in the Supplementary Material \\cite{SI}.\nWe also validated the coexistence points obtained via the biased coexistence method for ice IV and V using standard direct-coexistence simulations (see the Supplementary Material \\cite{SI}).\nWe subsequently obtained continuous and smooth coexistence lines by integrating the Clausius-Clapeyron equation as first proposed by Kofke \\cite{Kofke93}.\nThis technique is based on the numerical integration of Eq.~\\eqref{eq:Clausius-Clapeyron} using the enthalpy and volume obtained from constant temperature and pressure simulations of each phase (see Methods section and the Supplementary Material\\cite{SI} for further details).\n\n\\section*{Results}\n\nUsing the techniques described above, we calculated the coexistence points and lines 
shown in FIG.~\\ref{fig:Fig3}A for ice III, IV, V, and XIII.\nThe circles and error bars correspond to biased coexistence simulations, and the lines were computed by integrating the Clausius-Clapeyron equation.\nWe also show in FIG.~\\ref{fig:Fig3}A the data for the liquid-liquid critical point, liquid-liquid coexistence line, and Widom line reported recently by us \\cite{Gartner22}.\nAccording to these calculations, the melting curves of all ice polymorphs as predicted by the SCAN functional are supercritical, \\textit{i.e.}, they pass above the liquid-liquid critical point.\nThe melting line of ice VI is also supercritical and is shown in the Supplementary Material \\cite{SI}.\nThus, all of them intersect the Widom line rather than the LLT line.\nOur simulations result in melting curves that show a pronounced, yet continuous, change of slope upon crossing the Widom line.\nThis behavior is compatible with the expected change of properties of liquid water from HDL-like to LDL-like as the Widom line is traversed from high to low pressures.\nMoreover, the change in slope is smoother for ice III than for the other polymorphs, consistent with an increasingly abrupt change in the properties of the liquid closer to the critical point.\nThe smoother change in slope of the melting curve of ice III resembles the behavior hypothesized in Scenario 1 described in FIG.~\\ref{fig:Fig1} while the more abrupt change shown by ice V, IV, and XIII is reminiscent of Scenario 2. \n\n\\begin{figure*}\n\\includegraphics[width=\\textwidth]{Figure3.pdf}\n\\caption{\\label{fig:Fig3} Melting curves of ice polymorphs III, IV, V, and XIII, and their location relative to the liquid-liquid critical point. A) Results obtained using a machine learning model based on the SCAN DFT functional. Circles represent melting points calculated using biased coexistence simulations \\cite{Bore22}, crosses were obtained by integrating the Clausius-Clapeyron equation, and lines are spline interpolations of the latter results. We also show the location of the critical point, the liquid-liquid coexistence line, and the Widom line (line of maxima of $\\kappa_T$) as calculated in our previous work \\cite{Gartner22}. B) Melting curves reported by Mishima \\cite{Mishima00} for heavy water based on decompression-induced melting experiments. The approximate location of the discontinuous change in slope in the melting curves of ice IV and V is marked with an X. The shaded region is the location of the critical point estimated by Bachler et al.~\\cite{Bachler21}. We also show the location of the critical point obtained by Shi and Tanaka using experimental measurements \\cite{Shi20}, by Debenedetti et al.\\ using molecular simulations with the empirical water models TIP4P\/2005 and TIP4P\/Ice \\cite{Debenedetti20}, and by Mishima and Sumita\\cite{mishima2023equation} using an extrapolation based on polynomial fits to equation of state data. 
\begin{figure*}
\includegraphics[width=\textwidth]{Figure3.pdf}
\caption{\label{fig:Fig3} Melting curves of ice polymorphs III, IV, V, and XIII, and their location relative to the liquid-liquid critical point. A) Results obtained using a machine learning model based on the SCAN DFT functional. Circles represent melting points calculated using biased coexistence simulations \cite{Bore22}, crosses were obtained by integrating the Clausius-Clapeyron equation, and lines are spline interpolations of the latter results. We also show the location of the critical point, the liquid-liquid coexistence line, and the Widom line (line of maxima of $\kappa_T$) as calculated in our previous work \cite{Gartner22}. B) Melting curves reported by Mishima \cite{Mishima00} for heavy water based on decompression-induced melting experiments. The approximate location of the discontinuous change in slope in the melting curves of ice IV and V is marked with an X. The shaded region is the location of the critical point estimated by Bachler et al.~\cite{Bachler21}. We also show the location of the critical point obtained by Shi and Tanaka using experimental measurements \cite{Shi20}, by Debenedetti et al.\ using molecular simulations with the empirical water models TIP4P/2005 and TIP4P/Ice \cite{Debenedetti20}, and by Mishima and Sumita \cite{mishima2023equation} using an extrapolation based on polynomial fits to equation of state data. On the left, we show atomic configurations representative of ices III, IV, V, and XIII.}
\end{figure*}

Our results also show good agreement between the biased coexistence simulations and the integration of the Clausius-Clapeyron equation in the HDL-like region.
On the other hand, it was not possible to perform biased coexistence simulations in the LDL-like region due to the long relaxation times of the LDL-like liquid at those thermodynamic conditions.
Indeed, even the comparatively inexpensive bulk liquid simulations used in the Clausius-Clapeyron integration required long runs (100 ns) in the LDL-like region to reach robust statistical certainty.

The analysis of the melting curves shown in FIG.~\ref{fig:Fig3}A does not constitute proof of a continuous change in slope, since the curves are obtained from a set of points interpolated with a spline, which is by construction smooth and differentiable.
In order to provide evidence for the continuous change in slope, we now analyze in detail the properties of liquid water along the melting curves of the ice polymorphs.
In FIG.~\ref{fig:Fig4} we show the enthalpy and density of liquid water as a function of pressure.
Both properties exhibit a swift change upon crossing the Widom line, and the change becomes more abrupt as the melting curves approach the critical point, in the sequence ice III $\rightarrow$ V $\rightarrow$ IV $\rightarrow$ XIII.
We ruled out that this behavior is a result of ice crystallization by analyzing configurations at regular intervals of 5 ps.
We calculated the structural fingerprints CHILL+ \cite{nguyen2015identification} and Identify Diamond Structure \cite{Larsen16}, as implemented in Ovito \cite{Stukowski09}, and we did not find atomic environments compatible with ice I in any of our simulations.
We also show in FIG.~\ref{fig:Fig4} the free energy surfaces (FES) as a function of the liquid water density for selected points along the coexistence lines.
The FES of the liquid along the melting curves of all studied ice polymorphs show a behavior reminiscent of Scenario 2 of FIG.~\ref{fig:Fig1}.
For all ices, the FES at the state point closest to the Widom line shows clear broadening.
Furthermore, the FES in the vicinity of the Widom line exhibits deviations from a quadratic form, with significant asymmetry and a shoulder suggestive of the metastable free energy minimum that would appear below the critical point.
Taken together, this behavior provides strong evidence of a continuous crossover from HDL-like to LDL-like liquids as the melting curves of ice III, V, IV, and XIII are traversed towards lower pressures.
We remark that for none of the melting lines analyzed here are the properties of the liquid compatible with the subcritical Scenario 3 of FIG.~\ref{fig:Fig1}, which would lead to a discontinuous change in the slope of the melting line.
Based on the analysis of the liquid properties described above, we conclude that the changes in slope of the melting curves shown in FIG.~\ref{fig:Fig3} are indeed continuous.
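The crystallization check described above can be reproduced with a few lines of the OVITO Python interface; the sketch below (the trajectory file name is hypothetical, and we print all global attributes produced by the two modifiers rather than assuming specific attribute names) counts ice-like environments frame by frame:
\begin{verbatim}
# Sketch: scan a trajectory for ice I-like environments with OVITO.
# (CHILL+ is typically applied to the oxygen sublattice of water.)
from ovito.io import import_file
from ovito.modifiers import ChillPlusModifier, IdentifyDiamondModifier

pipeline = import_file("liquid_traj.xyz")   # hypothetical file name
pipeline.modifiers.append(ChillPlusModifier())
pipeline.modifiers.append(IdentifyDiamondModifier())

for frame in range(pipeline.source.num_frames):
    data = pipeline.compute(frame)
    counts = {k: v for k, v in data.attributes.items()
              if k.startswith(("ChillPlus", "IdentifyDiamond"))}
    print(frame, counts)
\end{verbatim}
Similarly, free energy surfaces like those of FIG.~\ref{fig:Fig4} follow from the histogram of densities sampled in a constant-$T$,$P$ simulation via $F(\rho_L) = -k_B T \ln P(\rho_L)$; a minimal sketch, with an assumed input format, is:
\begin{verbatim}
# Sketch: F(rho) = -kB*T*ln P(rho) from sampled liquid densities.
import numpy as np

KB = 0.008314  # kJ/(mol K)

def fes_from_density(rho_trace, T, bins=60):
    p, edges = np.histogram(rho_trace, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    valid = p > 0
    F = -KB * T * np.log(p[valid])
    return centers[valid], F - F.min()   # shift the minimum to zero

rho = np.loadtxt("density_230K.dat")     # hypothetical density trace
rho_grid, F = fes_from_density(rho, T=230.0)
\end{verbatim}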
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figure4.pdf}
\caption{\label{fig:Fig4} Properties of liquid water along the melting curves of several ice polymorphs. Panels A, B, C, and D correspond to ice III, V, IV, and XIII, respectively. For each ice polymorph, we show the enthalpy of liquid water $H_L$, the density of liquid water $\rho_L$, and the melting temperature $T$ as a function of pressure $P$. The locus of maxima of the isothermal compressibility \cite{Gartner22} is shown in the $T-P$ panel with a dashed line. We also show the free energy surfaces $F$ as a function of the density of liquid water $\rho_L$. The free energy surfaces are color-coded to match the points along the $T$ vs $P$ coexistence line, indicating the thermodynamic conditions at which they were calculated.}
\end{figure*}

We have so far focused on the properties of the liquid phase.
However, according to Eq.~\eqref{eq:Clausius-Clapeyron}, the properties of ice can also affect the slope of the melting curves.
In the Supplementary Material \cite{SI}, we show the change in enthalpy and density of the ice polymorphs along the melting lines.
The data show that the changes experienced by the bulk ice polymorphs are much more subtle than the corresponding changes in the properties of the liquid phase.
In the pressure range shown in FIG.~\ref{fig:Fig4}, the densities of the ice polymorphs change by less than 1\%, while the density of liquid water changes by 10\%.
Furthermore, the enthalpy of the ices varies by around 8\%, while the enthalpy of liquid water has a significantly larger variation of around 17\%.
This analysis indicates that the changes in the properties of the liquid phase are the main factor driving the sharp changes in slope observed in FIG.~\ref{fig:Fig3}.
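The Widom line shown in FIGS.~\ref{fig:Fig3} and \ref{fig:Fig4} is the locus of maxima of the isothermal compressibility $\kappa_T$, which is readily accessible from volume fluctuations in constant-$T$,$P$ simulations via $\kappa_T = (\langle V^2 \rangle - \langle V \rangle^2)/(k_B T \langle V \rangle)$. A minimal sketch, with units and input as illustrative assumptions:
\begin{verbatim}
# Sketch: kappa_T from NPT volume fluctuations; tracing its maxima
# along isobars locates the Widom line.
import numpy as np

KB = 1.380649e-23  # J/K

def kappa_T(volumes_m3, T):
    V = np.asarray(volumes_m3)
    return V.var() / (KB * T * V.mean())   # in 1/Pa
\end{verbatim}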
The results described above correspond to a purely predictive model derived from first-principles calculations.
An alternative approach is to evaluate the melting lines of ice polymorphs using semi-empirical water models that are fit to experimental information.
For this reason, we calculated the melting line of ice V in the TIP4P/Ice model \cite{Abascal05}, which is a state-of-the-art semi-empirical model for the study of ice polymorphs.
The location of the liquid-liquid critical point for this model has been determined accurately by Debenedetti et al.~\cite{Debenedetti20}.
We find that the melting curve of ice V within the TIP4P/Ice model (shown in the Supplementary Material \cite{SI}) is also supercritical, in agreement with the SCAN calculations reported above.

\section*{Discussion}

The picture that emerges from our present results is in contrast with Mishima and Stanley's interpretation \cite{Mishima98,Mishima00}.
As described above, Mishima's interpretation of the experiments considers that the melting curve of ice III is supercritical, and the melting lines of ice IV, V, and XIII are subcritical \cite{Mishima00}.
On the other hand, our calculations based on an \textit{ab initio} model predict supercritical behavior for all the studied ice polymorphs.
To evaluate this discrepancy, we analyze the consistency of each of these two interpretations in the light of the most recent evidence for the location of the critical point.
The decompression-induced melting curves measured by Mishima \cite{Mishima00} are shown in FIG.~\ref{fig:Fig3}B together with recent estimates of the location of the liquid-liquid critical point.
The estimates include an extrapolation by Bachler et al.\ based on experimental data for the high- and low-density spinodals obtained from compression/decompression experiments on glassy water \cite{Bachler21}, an analysis by Shi and Tanaka using experimental measurements \cite{Shi20}, calculations based on molecular simulations with the two realistic empirical water models TIP4P/Ice and TIP4P/2005 \cite{Debenedetti20}, and a very recent extrapolation by Mishima and Sumita based on polynomial fits to equation of state data \cite{mishima2023equation}.
It follows from FIG.~\ref{fig:Fig3}B that, if such estimates are correct, all melting curves would be supercritical in experiments.
Furthermore, the relative positions of the ice polymorph melting curves and the critical point provided by SCAN in FIG.~\ref{fig:Fig3}A seem to be in excellent qualitative agreement with the experimental results shown in FIG.~\ref{fig:Fig3}B, i.e., the relative stability of all phases is captured qualitatively.
However, the quantitative positions of the melting curves and critical point in the $T-P$ plane differ significantly from experiments, which we attribute to the known limitations of SCAN \cite{Gartner20,Piaggi21}.
We note that it is possible that SCAN somehow shifts the location of the critical point relative to the ice melting curves; however, given the qualitative correspondence between FIG.~\ref{fig:Fig3}A and FIG.~\ref{fig:Fig3}B, we do not expect this to be the case.
Moreover, the calculations described above based on a semi-empirical model also show that the melting line of ice V is supercritical, in disagreement with the original interpretation of the experiments and supporting the picture provided by the SCAN functional.

In FIG.~\ref{fig:Fig3}B we have combined experimental melting curves for heavy water \cite{Mishima00} with estimates of the critical point based on experiments carried out using light water \cite{Bachler21,Shi20} and simulations that ignore nuclear quantum effects \cite{Debenedetti20}.
A figure equivalent to FIG.~\ref{fig:Fig3}B, replacing the melting curves of heavy water ice polymorphs with melting curves of light water ices \cite{mishima2021liquid}, is shown in the Supplementary Material \cite{SI}.
The isotopic effect on the melting lines is rather small, with melting temperatures of heavy water around 5 K higher than in light water \cite{mishima2021liquid}.
On the other hand, the isotopic effect on the location of the critical point has recently been estimated by Eltareb et al.~\cite{eltareb2022evidence} using path integral molecular dynamics and a semi-empirical model of water.
They found a critical point location for heavy water 18 K and 9 MPa higher than in light water.
The combined isotopic effect on the melting curves and the location of the critical point may thus lead to a relative shift of around 12 K in light water compared to heavy water, since the upward shift of the critical point is only partially offset by the upward shift of the melting curves.
Therefore, isotopic effects are unlikely to affect the picture shown in FIG.~\ref{fig:Fig3}.
We also stress that our simulation results shown in FIG.~\ref{fig:Fig3}A ignore nuclear quantum effects.
They are thus more representative of heavy water than of light water.
The discrepancy between our simulation results and Mishima's experiments leads to the question of why a sharp discontinuity in slope was observed in the experimental melting curves of ice V and ice IV.
Such behavior could perhaps be explained by immediate crystallization of ice I rather than melting to a metastable (relaxed) liquid state, which is of course not an issue in the simulations due to the separation of time scales between ice nucleation and liquid equilibration/relaxation.
In this context, it should be noted that Mishima's hypothesized liquid-liquid phase transition is located very close to the homogeneous nucleation locus.
Furthermore, the behavior reported by Mishima for the melting curves past the hypothesized LLT \cite{Mishima00} is remarkably noisy on the low-pressure side.
Experimental studies explicitly targeting this issue are needed to definitively evaluate this hypothesis.

\section*{Conclusions}

Our results suggest that the experiments reported by Mishima and Stanley that pointed to the existence of a liquid-liquid critical point at $\sim$0.1 GPa and $\sim$220 K \cite{Mishima98}, and to subcritical melting curves for ice IV, V, and XIII \cite{Mishima00}, might call for a different interpretation.
While our first-principles calculations do support the existence of a liquid-liquid critical point \cite{Gartner22}, they suggest that it is located at lower temperatures than had hitherto been assumed, such that the melting curves of ice III, IV, V, VI, and XIII are in fact supercritical.
The relative stability of the phases reported here is in excellent agreement with experiments, yet from a quantitative point of view our simulations are limited by the accuracy of our chosen semilocal DFT functional.
Future work could test our findings using more sophisticated DFT functionals or higher levels of electronic-structure theory.
Considering the plethora of known ice polymorphs, and the ones that continue to be discovered and characterized \cite{gasser2021structural}, the search for ices with subcritical melting curves may be a fruitful endeavor.
We also hope that our work will stimulate further experimental efforts to elucidate the behavior of melting curves in the vicinity of the liquid-liquid critical point and to definitively explain the discrepancies between the experimental and computational results.