diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdubv" "b/data_all_eng_slimpj/shuffled/split2/finalzzdubv"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdubv"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction and preliminaries}\n\nRecently, many papers dealing with the non-Newtonian metric and the multiplicative metric have been published [2-10]. Although the multiplicative metric and the non-Newtonian metric were announced as a new theory for topology and fixed point theory, many of the corresponding results can be obtained by a simple observation. In the present work we show that some topological results for the non-Newtonian metric can be obtained in an easier way. Therefore, many fixed point and common fixed point results from the metric setting can be carried over to the non-Newtonian metric (particularly the multiplicative metric) setting.\n\nAn arithmetic is any system that satisfies the whole of the ordered field axioms and whose domain is a subset of $\\mathbb{R}$. There are infinitely many types of arithmetic, all of which are isomorphic, that is, structurally equivalent. In non-Newtonian calculus, a \\textit{generator} $\\alpha $ is a one-to-one function whose domain is all real numbers and whose range is a subset of the real numbers. Each generator generates exactly one arithmetic, and conversely each arithmetic is generated by exactly one generator. By \\textit{$\\alpha $-arithmetic}, we mean the arithmetic whose operations and whose order are defined as\n\\begin{equation*}\n\\begin{array}{lrcll}\n\\alpha \\text{-\\textit{addition}} & x\\ \\dot{+}\\ y & = & \\alpha \\{\\alpha ^{-1}(x)+\\alpha ^{-1}(y)\\} & \\\\ \n\\alpha \\text{-\\textit{subtraction}} & x\\ \\dot{-}\\ y & = & \\alpha \\{\\alpha ^{-1}(x)-\\alpha ^{-1}(y)\\} & \\\\ \n\\alpha \\text{-\\textit{multiplication}} & x\\ \\dot{\\times}\\ y & = & \\alpha \\{\\alpha ^{-1}(x)\\times \\alpha ^{-1}(y)\\} & \\\\ \n\\alpha \\text{-\\textit{division}} & x\\ \\dot{\/}\\ y & = & \\alpha \\{\\alpha ^{-1}(x)\\div \\alpha ^{-1}(y)\\} & (\\alpha ^{-1}(y)\\neq 0) \\\\ \n\\alpha \\text{-\\textit{order}} & x\\ \\dot{<}\\ y & \\Leftrightarrow & \\alpha ^{-1}(x)<\\alpha ^{-1}(y) & \n\\end{array}\n\\end{equation*}\nfor all $x$ and $y$ in the range $\\mathbb{R}_{\\alpha }$ of $\\alpha $. In the special cases the identity function $I$ and the exponential function $\\exp $ generate the classical and geometric arithmetics, respectively.\\smallskip\n\n\\begin{equation*}\n\\begin{tabular}{ccccccc}\n\\hline\n$\\alpha $ & & $\\alpha \\text{-\\textit{addition}}$ & $\\alpha \\text{-\\textit{subtraction}}$ & $\\alpha \\text{-\\textit{multiplication}}$ & $\\alpha \\text{-\\textit{division}}$ & $\\alpha \\text{-\\textit{order}}$ \\\\ \\hline\n$I$ & & $x+y$ & $x-y$ & $xy$ & $x\/y$ & $x<y$ \\\\ \n$\\exp $ & & $xy$ & $x\/y$ & $x^{\\ln y}$ & $x^{1\/\\ln y}$ & $x<y$ \\\\ \\hline\n\\end{tabular}\n\\end{equation*}\\smallskip\n\nA \\textit{non-Newtonian metric} $d^{\\alpha }$ on a non-empty set $X$ is obtained from the usual metric axioms by replacing the operations and the order with their $\\alpha $-arithmetic counterparts, where $\\dot{0}=\\alpha (0)$ denotes the $\\alpha $-zero. For any $\\varepsilon \\ \\dot{>}\\ \\dot{0}$ and any $x\\in X$ the set \n\\begin{equation*}\nB_{\\alpha }(x,\\varepsilon )=\\{y\\in X:d^{\\alpha }(x,y)\\ \\dot{<}\\ \\varepsilon \\}\n\\end{equation*}\nis called an $\\alpha $\\textit{-open ball of center }$x$\\textit{\\ and radius }$\\varepsilon $. 
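For instance, for the exponential generator one has $\\dot{0}=\\exp (0)=1$ and the $\\exp $-order coincides with the usual order on $(0,\\infty )$, so that for any radius $\\varepsilon >1$\n\\begin{equation*}\nB_{\\exp }(x,\\varepsilon )=\\{y\\in X:d^{\\exp }(x,y)<\\varepsilon \\},\n\\end{equation*}\nwhich is exactly the open ball used for the multiplicative metric. 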
A topology on $X$ is obtained easily by defining open sets as in the classical metric spaces.\n\nNow, let us emphasize that the former topology can be obtained from a real-valued metric and vice versa.\n\n\\begin{theorem}\n\\label{thm}For any generator $\\alpha $ on $\\mathbb{R}$ and for any non-empty set $X$ \\newline\n(1) If $d^{\\alpha }$ is a non-Newtonian metric on $X$, then $d=\\alpha ^{-1}\\circ d^{\\alpha }$ is a metric on $X$,\\newline\n(2) If $d$ is a metric on $X$, then $d^{\\alpha }=\\alpha \\circ d$ is a non-Newtonian metric on $X$.\n\\end{theorem}\n\n\\begin{proof}\nIt is obvious that $\\alpha $ and $\\alpha ^{-1}$ are one-to-one and order preserving.\n\\end{proof}\n\n\\begin{corollary}\n\\label{cor1}For any generator $\\alpha $ on $\\mathbb{R}$, let $d^{\\alpha }$ and $d$ be a non-Newtonian metric and a metric on a non-empty set $X$, respectively, as in Theorem \\ref{thm}. If $\\tau _{\\alpha }$ and $\\tau $ are the metric topologies induced by $d^{\\alpha }$ and $d$, respectively, then $\\tau _{\\alpha }=\\tau $.\n\\end{corollary}\n\n\\begin{proof}\nSince $\\delta _{\\varepsilon }=\\alpha ^{-1}(\\varepsilon )>0$ and $\\varepsilon _{\\delta }=\\alpha (\\delta )\\ \\dot{>}\\ \\dot{0}$ for all $\\varepsilon \\ \\dot{>}\\ \\dot{0},\\delta >0$, we have\n\\begin{eqnarray*}\nB_{\\alpha }(x,\\varepsilon _{\\delta }) &=&\\{y\\in X:d^{\\alpha }(x,y)\\ \\dot{<}\\ \\varepsilon _{\\delta }\\}=\\{y\\in X:\\alpha \\left( d(x,y)\\right) \\ \\dot{<}\\ \\alpha (\\delta )\\} \\\\\n&=&\\{y\\in X:d(x,y)<\\delta _{\\varepsilon }\\}=B(x,\\delta _{\\varepsilon })\n\\end{eqnarray*}\nfor all $x\\in X,\\varepsilon \\ \\dot{>}\\ \\dot{0},\\delta >0$. Therefore, $\\tau _{\\alpha }=\\tau $.\n\\end{proof}\n\n\\begin{corollary}\n\\label{cor2}Under the hypothesis of Corollary \\ref{cor1}, the topological properties of $(X,d)$ and $(X,d^{\\alpha })$ are equivalent. In particular, for a sequence $(x_{n})$ in $X$ and for an element $x\\in X$\\newline\n(1) $x_{n}\\overset{d^{\\alpha }}{\\rightarrow }x$ if and only if $x_{n}\\overset{d}{\\rightarrow }x$,\\newline\n(2) $(x_{n})$ is $d^{\\alpha }$-Cauchy if and only if $(x_{n})$ is $d$-Cauchy, and\\newline\n(3) $(X,d^{\\alpha })$ is complete if and only if $(X,d)$ is complete.\n\\end{corollary}\n\n\\section{Conclusion}\n\nThe topological results obtained by non-Newtonian metrics (particularly multiplicative metrics) as in [2-10] are equivalent to the ones obtained by metrics. In [3-9] some results for the multiplicative metric and in [10] some results for the non-Newtonian metric have been obtained for fixed point theory. Those results are direct consequences of Theorem \\ref{thm} and Corollary \\ref{cor2} since any type of contraction mapping for the non-Newtonian metric space is also a contraction mapping for the metric space and vice versa. For example, the non-Newtonian contraction $T:X\\rightarrow X$ as in [10] is the classical Banach contraction since\n\\begin{equation}\nd^{\\alpha }(T(x),T(y))\\ \\dot{\\leq}\\ k\\dot{\\times}d^{\\alpha }(x,y)\\ \\Leftrightarrow \\ d(T(x),T(y))\\leq \\lambda .d(x,y) \\label{1}\n\\end{equation}\nfor all $x,y\\in X$ where $k\\in \\lbrack \\alpha (0),\\alpha (1))$ is constant, $d=\\alpha ^{-1}\\circ d^{\\alpha }$ and $\\lambda =\\alpha ^{-1}(k)$. 
In particular, by Remark \\ref{ref} and by (\\ref{1}), the multiplicative contraction $T:X\\rightarrow X$ as in [2] is the classical Banach contraction since\n\\begin{equation*}\n\\begin{array}{rcl}\nd^{\\exp }(T(x),T(y))\\leq d^{\\exp }(x,y)^{\\lambda } & \\!\\!\\Leftrightarrow \\!\\! & d^{\\exp }(T(x),T(y))\\ \\dot{\\leq}\\ d^{\\exp }(x,y)^{\\lambda }=k\\dot{\\times}d^{\\exp }(x,y) \\\\ \n& \\!\\!\\Leftrightarrow \\!\\! & d(T(x),T(y))\\leq \\lambda .d(x,y)\n\\end{array}\n\\end{equation*}\nfor all $x,y\\in X$ where $\\lambda \\in \\lbrack 0,1)$ is constant, $d=\\ln \\circ d^{\\exp }$ and $\\lambda =\\ln k$. In this way we can obtain most of the non-Newtonian metric results and most of the multiplicative metric results by applying the corresponding properties from the metric setting.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{intro1}\nRecently, the observations of gravitational waves and of the black hole image have provided new motivation and vision in black hole physics\\cite{Abbott:2016blz,Akiyama:2019cqa}. Providing a deep understanding of quantum gravity, black hole physics will always be a long-lived subject of study, both theoretically and experimentally. One of the most important topics in black hole physics is the no-hair theorem, which claims that black holes are determined only by their mass, charge and angular momentum\\cite{Ruffini:1971bza}. It was first concluded by Bekenstein that a static and spherical neutral black hole in asymptotically flat spacetime cannot be endowed with a real scalar field, a Proca field or a spin-2 field\\cite{Bekenstein:1972ny}. Furthermore, the no-hair theorem was extended by Mayo and Bekenstein, who showed that a static and spherical black hole cannot be endowed with a coupled charged scalar field together with a non-negative self-interaction potential \\cite{Mayo:1996mv}. However, the assumptions of the above no-hair theorems are as strong as their conclusions. Therefore, it is not difficult to find hairy black hole solutions if one relaxes the assumptions of these no-hair theorems. In Refs.~\\cite{Feng:2013tza,Liu:2013gja,Fan:2015tua,Khodadi:2020jij}, taking into account a potential that is not positive definite or an asymptotically AdS spacetime, $D$-dimensional scalarized black holes have been constructed in Einstein theory minimally coupled to a neutral scalar field. Recently, in extended Maxwell theories non-minimally coupled to a scalar field, such as Einstein-Maxwell-Dilaton (EMD) theory\\cite{Fan:2015oca}, Einstein-Maxwell-Scalar theory\\cite{Konoplya:2019goy}, Einstein-Born-Infeld (EBI) theory\\cite{Wang:2020ohb,Stefanov:2007eq}, and Quasi-topological Electromagnetism theory\\cite{Myung:2020ctt}, a class of charged black holes with scalar hair has been found. For a recent review relevant to the no-hair theorem, see Ref.~\\cite{Herdeiro:2015waa}.\n\nThe above examples give the simplest cases of black holes with additional scalar hair. As we have mentioned, Bekenstein's no-scalar-hair theorem was generalized in Ref.~\\cite{Mayo:1996mv}, with a very strong conclusion: a non-extremal static and spherical charged black hole cannot carry charged scalar hair, whether minimally or nonminimally coupled to gravity, with a regular positive semidefinite self-interaction potential. 
However, a numerical charged black hole solution with Q-hair was found in Einstein-Maxwell gravity minimally coupled with a non-linear complex scalar, the self interacting potential taking the following polynomial function \\cite{Hong:2020miv}\n\\begin{equation}\\label{defVs1}\nV(|\\psi|^2)=\\frac{m^2}{2}|\\psi|^2-\\frac{\\lambda}{4}|\\psi|^4 + \\frac{\\beta}{6}|\\psi|^6\\,.\n\\end{equation}\nIt was also pointed out that the detailed form of the potential $V(|\\psi|^2)$ is not crucial and the Q-hair can exist for large class of nonlinear potential\\cite{Hong:2019mcj,Herdeiro:2020xmb}. The reason why Ref.~\\cite{Mayo:1996mv} obtained a wrong statement is that it omitted a scalar mass term at an asymptotic infinity. Though the black holes with scalar hair has been found in many physical models, the discovery of Refs.~\\cite{Hong:2020miv} has a few of special interesting aspects. Particularly, in this model, the gravity, Maxwell field and complex scalar field are all minimally coupled with each other and the potential of scalar field is positive semidefinite. This gives a possibility to realize the scalar hairy black hole in the Einstein gravity and asymptotically flat spacetime. Furthermore, as the potential of scalar field is nonnegative, such model will have stable true vacuum state $\\psi=0$ in Minkowski spacetime.\n\nIt needs to note that the Reissner-Nordstr\\\"{o}m black hole (RN black hole) is still a solution of field equations even if the Q-hair appears in the models discussed by Refs.~\\cite{Hong:2020miv,Hong:2019mcj}. Given a scalarized charged black hole solution, it is worth to investigate whether the scalarized charged black hole is more stable than RN black hole in thermodynamics, i.e. whether the RN black hole can be spontaneously scalarized by phase transition. Recently, it has been well studied the thermodynamic self-stability associated with heat capacity in various ensembles in \\cite{Caldarelli:1999xj,Mo:2013sxa,Zhang:2018rlv,Quevedo:2006xk,Quevedo:2013pba}. However, we focus on investigating the thermodynamic stability of scalarized charged black holes, compared with the RN black hole in various ensembles. Concretely, in microcanonical ensemble, given the same ADM mass $M$ and total charge $Q$, the stability requires the maximum of the black hole entropy $S(M,Q)$ and a phase is more stable than the other if it has larger entropy. In canonical ensemble, with fixing Hawking temperature $T$ and $Q$, the black hole owning less Helmholtz free energy $F(T,Q)$ will be the more stable, while in grand canonical ensemble, with the identical $T$ and chemical potentials $\\mu$, the black hole which has smaller Gibbs free energy $G(T,\\mu)$ indicates it will be more stable. Furthermore, the black hole entropy $S(M,Q)$ is given by the area of horizon according to Bekenstein's entropy formula while the thermodynamic potentials $F(T,Q)$ and $G(T, \\mu)$ can be read off from partition function via Euclidean path-integral approach developed by Hawking and York, et al. \\cite{Gibbons:1976ue,Brown:1989fa,Braden:1990hw}.\n\nRemarkably, in astrophysics, a real black hole practically is more closer to grand canonical ensemble allowing charge and energy exchange with other matter in universe. 
However, as theoretical research interest, we still take the case of microcanonical ensemble and canonical ensemble under our consideration to study whether the RN black hole will be scalarzied under the process of discontinuous phase transition spontaneously.\nFurthermore, by means of analyzing the corresponding entropy or thermodynamic potentials in various ensembles, we find that there is a possibility that the scalarized charged black hole is more stable than the RN black hole in thermodynamics. This implies that, for a large class of nonlinear complex model, it is possible that the RN black hole will be scalarized spontaneously via a first order phase transition. Our numerical results also imply that the scalarized black hole is always more stable than the RN black hole in microcanonical ensemble and when $M\\rightarrow|Q|$. This implies the isolated near extremal RN black hole is not stable and will spontaneously scalarize.\n\nIn addition to the thermodynamic stability, another natural question is whether the scalarized charge black hole is kinetically stable, i.e. stable under against a small perturbation at least at linear level. It has been developed by Vishveshwara and Teukolsky et al.\\cite{Teukolsky:1973ha,Press:1973zz,Vishveshwara:1970cc} that in static axial or spherical symmetric background, the equation of motion associated with the perturbation field can reduce to radial equation in frequency domain. Furthermore, the radial equation can be interpreted as an eigenvalue problem. Specifically, imposing a physical boundary condition both in the spacial infinity and at the black hole horizon, a class of complex frequency called black hole quasi-normal modes(QNMs), which implies dissipation at both event horizon and spatial infinity, can be picked out. The instability of black hole might be triggered due to the negative imaginary part of the complex frequency. In general, the QNMs can be solved by shooting method in numerics. Approximatively, there have been some analytical approaches to achieve to calculate QNMs as well: the WKB approximation method, continue fraction method and Monodromy method. However, recently it has been argued in Ref.~\\cite{Konoplya:2019hlu} that the WKB approximation method cannot catch the unstable mode within the spectrum of the QNMs. Furthermore, we also find that the shooting method cannot efficiently give the stable modes through numerical error analysis. We thus adopt hybrid method to calculate the QNMs for studying the kinetic stability of scalarized charged black hole. If a given solution is not kinetically stable, then such solution cannot exist in a real physical system. For the situations that the scalarized black hole is more thermodynamically stable than RN black hole, but it does not automatically guarantee that the scalarized charged black hole is still stable kinetically. However, our results show the neutral perturbative scalar field will not trigger the instability of the scalarized charged black hole at linear level. In addition, for a recent review on perturbation theory on black hole, stability analysis and quasi-normal modes of black hole, more detail has been given in Refs.~\\cite{Pani:2013pma, Konoplya:2011qq, Berti:2009kk}.\n\nThis paper will be organized as follow: In Sec.~\\ref{modelset}, we briefly introduce the model of Einstein-Maxwell theory minimally coupled with a non-linear complex field. 
For general non-linear semi-definite potential, we present a proof that the scalarized charged black hole cannot be a result of continuously scalarization. In Sec.~\\ref{thermo}, Giving a logarithmic potential, we obtain a class of numerical scalarized charged black hole solution and in various ensembles, investigate their thermodynamic stability compared with the RN black hole. We find that in both microcanonical ensemble and canonical ensemble, it is possible that the RN black hole can be spontaneously scalarized via first order phase transition. In Sec.~\\ref{KietSta}, considering a probing neutral scalar field, we investigate the stability on the scalarized charged black hole by means of computing the quasi-normal modes using both the shooting method and the WKB approximation method. This suggests that the probing neutral scalar field cannot trigger the instability of the sclarized charged black hole. In Sec.~\\ref{Conclu}, we present our conclusion and further discussions.\n\n\\section{Model Setup}\\label{modelset}\nIn this paper, we consider the following Einstein-Maxwell theory minimally coupled with a non-linear complex scalar field (we set $G_N=c_s=\\hbar=k=1$, $c_s$ denotes the speed of light),\n\\begin{eqnarray}\\label{action1}\n&&S=\\frac1{16\\pi}\\int\\mathrm{d}^4x\\sqrt{-g} {\\cal L} \\, , \\cr\n~\\cr\n&&{\\cal L}=\\left(R-F_{\\mu\\nu}F^{\\mu\\nu}-(D_\\mu\\psi)^\\dagger(D^\\mu\\psi)-W(|\\psi|^2)\\right) \\, ,\n\\end{eqnarray}\nwhere the covariant derivative operator $D_{\\mu}=\\nabla_\\mu-iqA_\\mu$, $q$ denotes charge of the complex scalar field. In addition, $F_{\\mu\\nu}=\\partial_\\mu A_{\\nu}-\\partial_\\nu A_{\\mu}$ is the strength tensor of the U(1) electromagnetic field $A_{\\mu}$. The non-linear positively semi-definite potential $W(x)$ is a smooth function and satisfies following requirements:\n\\begin{equation}\\label{reqw1}\nW(0)=0,~~W'(0)=m^2>0,~~W(x)>0~\\text{if}~x>0\\,.\n\\end{equation}\nThese conditions insure that scalar field has stable true vacuum $\\psi=0$ in Minkowski spacetime when $F_{\\mu\\nu}=0$. Performing variation on the action \\eqref{action1} with respect to the metric $g_{\\mu\\nu}$, the gauge field $A_\\mu$ and the complex scalar field $\\psi$ respectively, we obtain the equation of motion\n\\begin{eqnarray}\\label{eom}\n&&R_{\\mu\\nu}-\\frac{R}2g_{\\mu\\nu}=8\\pi [T^{(M)}_{\\mu\\nu}+T^{(\\psi)}_{\\mu\\nu}]\\,, \\cr\n&& ~ \\cr\n&&\\nabla^\\mu F_{\\mu\\nu}=iq\\left(\\psi^\\dagger D_\\mu\\psi-\\psi (D_\\mu\\psi)^\\dagger\\right) \\, , \\cr\n&& ~ \\cr\n&&D^2\\psi-w(|\\psi|^2)\\psi=0\\, ,\n\\end{eqnarray}\nwhere we define $w(x)=W'(x)$ and the energy momentum tensor associated with both the electric field and the complex scalar field reads\n\\begin{eqnarray}\\label{Energy}\n&&T^{(M)}_{\\mu\\nu}=\\frac1{4\\pi}\\left(F_{\\mu\\sigma}{F^{\\sigma}}_\\nu-\\frac14g_{\\mu\\nu}F_{\\rho\\sigma}F^{\\rho\\sigma}\\right)\\, , \\cr\n~\\cr\n&&T^{(\\psi)}_{\\mu\\nu}=\\frac1{16\\pi}\n\\left( D_\\mu\\psi(D_\\nu\\psi)^\\dagger+D_\\nu\\psi(D_\\mu\\psi)^\\dagger \\right. 
\\cr\n&&\\left.-g_{\\mu\\nu}\\left((D_\\mu\\psi)^\\dagger(D^\\mu\\psi)+W(|\\psi|^2)\\right)\\right )\\, .\n \\end{eqnarray}\n %\nFor the non-linear potential, the requirement~\\eqref{reqw1} implies that\n\\begin{equation}\\label{smallxw}\nw(x)=m^2+w_1x+w_2x^2+\\cdots\n\\end{equation}\nwith some constants $\\{w_1,w_2,\\cdots\\}$ when $|x|\\ll1$.\n\nTo obtain the scalary charged black hole and investigate some related properties, we adopt the following spherical symmetric line element,\n\\begin{equation}\\label{metric0}\n \\mathrm{d} s^2=-f(r)\\mathrm{e}^{-\\chi(r)}\\mathrm{d} t^2+\\frac{\\mathrm{d} r^2}{f(r)}+r^2(\\mathrm{d} \\theta^2 + \\sin^2 \\theta \\mathrm{d} \\varphi^2).\n\\end{equation}\nDue to the spherical symmetry, the electromagnetic field and the complex scalar field take the following form,\n\\begin{equation}\\label{matters1}\n A_\\mu=\\phi(r)(\\mathrm{d} t)_\\mu,~~\\psi=\\psi(r)\\,.\n\\end{equation}\nFrom the line element Eq.~\\eqref{metric0}, the Hawking temperature $T$ reads\n\\begin{equation}\\label{hawkT1}\n T=\\frac{f'(r_h)\\mathrm{e}^{-\\chi(r_h)\/2}}{4\\pi}\\, ,\n\\end{equation}\nwhere $r_h$ denotes the outer event horizon.\nWith the given spherical anstaz, the equations of motion reduce to\n\\begin{equation}\\label{eqscalar1}\n\\begin{split}\n &\\psi''+\\left(\\frac{f'}{f}-\\frac{\\chi'}2+\\frac2r\\right)\\psi'+\\left(\\frac{\\mathrm{e}^{\\chi}\\phi^2q^2}{f^2}-\\frac{w(\\psi^2)}{f}\\right)\\psi=0\\\\\n &\\phi''+(\\chi'\/2+2\/r)\\phi'-\\frac{\\psi^2q^2}{2f}\\phi=0\\,,\\\\\n &\\chi'+\\frac{r\\mathrm{e}^{\\chi}\\psi^2\\phi^2q^2}{f^2}+r\\psi'^2=0\\,,\\\\\n &f'+\\left(\\frac1r+\\frac{r\\psi'^2}2\\right)f+r\\mathrm{e}^{\\chi}\\phi'^2 \\\\\n &-\\frac1r+\\frac{\\mathrm{e}^{\\chi}r\\psi^2\\phi^2q^2}{2f}+\\frac12rW(|\\psi|^2)=0\\, ,\n \\end{split}\n\\end{equation}\nwhere the prime denotes the derivative with respect to $r$. In this paper, we consider the asymptotically flat space time. The scalar field, gauge field and metric components should satisfy the following regular boundary conditions when $r\\rightarrow\\infty$\n\\begin{eqnarray}\\label{bdcond1}\n &f=1-\\frac{2M}r+\\cdots,~~\\chi=\\frac{\\chi_2}{r^2}+\\cdots \\, , \\cr\n ~\\cr\n &\\phi=\\mu-\\frac{Q}{r}+\\cdots,~~|\\psi|\\leq\\mathcal{O}(1\/r^2)\\, ,\n\\end{eqnarray}\nwhere $\\mu$ is the chemical potential, $M$ is the ADM mass and $Q$ is the total charge. With this boundary conditions and noting the fact that $w(|\\psi|^2)\\rightarrow m^2$ near the boundary, we find that the first equation of Eq.~\\eqref{eqscalar1} reduces into following simple form near the infinity\n\\begin{equation}\\label{eqscalar2}\n \\psi''+\\frac{2\\psi'}r+(q^2\\mu^2-m^2)\\psi=0\\, .\n\\end{equation}\nThe solution reads\n\\begin{equation}\\label{asysol1}\n \\psi(r)=\\frac{\\psi_+}{r}\\mathrm{e}^{r\\sqrt{m^2-q^2\\mu^2}}+\\frac{\\psi_-}r\\mathrm{e}^{-r\\sqrt{m^2-q^2\\mu^2}}\\,.\n\\end{equation}\nThe boundary conditions Eq.~\\eqref{bdcond1} implies following constraints\n\\begin{equation}\\label{constraint1}\n m^2-q^2\\mu^2>0,~~\\psi_+=0\\,.\n\\end{equation}\nThese are two necessary conditions on the spontaneous scalarization for asymptotically flat black holes.\n\nTaking into account a polynomial potential, as mentioned above, a numerical charged black hole solution with Q-hair has been found in \\cite{Mayo:1996mv, Hong:2020miv}. A natural question arises whether this class of scalarized black hole solution will arise from continuously spontaneous scalarization for specific non-linear potential. 
This question is important because if the answer is yes, the spacetime geometry can transit into scalarized black hole smoothly, otherwise, the latent heat will be relaxed or absorbed during phase transition between the RN black hole and scalarized black hole. Moreover, such latent heat will leave some observable effects if such a phase transition happened in our universe.\n\nIn the following, we shall present a proof that for arbitrary non-linear potential $W(|\\psi|^2)$ satisfying the requirement~\\eqref{reqw1}, the spontaneous scalarization of RN black hole cannot happen via continuous phase transition. As shown in Fig.~\\ref{conscal}, if such continuous phase transition can happen, when we tune the parameters (total charge, chemical potential, temperature, etc.) of the black hole, there is a critical point where the strength of scalar field begins to increase into nonzero continuously. In other words, there must be a small region near the critical value associated with the black hole parameters (See also the top panel of Fig.~\\ref{conscal}) where the complex scalar field is infinitesimal.\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.35\\textwidth]{bh-p2.pdf}\n \\includegraphics[width=0.35\\textwidth]{bh-p1.pdf}\n \\caption{\\textbf{Top}: Scalarization appear via continuous phase transition. \\textbf{Bottom}: Scalarization appear via discontinuous phase transition.}\\label{conscal}\n\\end{figure}\nWithout losing generality, we can assume this infinitesimal scalar field has following form,\n\\begin{equation}\n\\psi(r, \\theta, \\varphi)=\\varepsilon \\sum_{l,\\nu} a_{l \\nu} (r) Y_{l \\nu} (\\theta, \\varphi)\\, \\quad \\varepsilon\\rightarrow0\\, .\n\\end{equation}\nwhere $Y_{l \\nu}$ denotes the spherical harmonic function, $\\nu=0, \\pm 1, \\pm 2, \\cdots$ is the magnetic quantum number and $l > \\nu$ is the azimuthal quantum number.\nTaking it into Eq.~\\eqref{eom} and neglecting all the non-linear terms of $\\varepsilon$, we find that the metric and gauge field decouple from the scalar field. Then spacetime geometry and gauge field are given by a RN solution\n\\begin{eqnarray}\\label{rnsolu1}\n &\\chi=0,~~f(r)=\\frac{(r-r_h)(r-\\mu^2r_h)}{r^2} \\, , \\cr\n ~\\cr\n &\\phi=\\mu(1-r_h\/r),~~\\mu\\in[-1,1]\\,.\n\\end{eqnarray}\nHere we require $\\mu\\in[-1,1]$ since the $r_h$ is defined to be the most outer horizon. The E.O.M of scalar field reads\n\\begin{equation}\\label{eqscalar2}\n a_{l\\nu}''+\\left(\\frac{f'}{f}+\\frac2r\\right)a'+\\left(\\frac{\\phi^2q^2}{f^2}-\\frac{m^2}{f}-\\frac{l(l+1)}{f r^2}\\right)a_{l\\nu}=0 \\, .\n\\end{equation}\nThis can be rewritten into following form\n\\begin{equation}\\label{eqscalar3}\n \\frac{\\mathrm{d}}{\\mathrm{d} r}\\left(r^2f a_{l \\nu}\\frac{\\mathrm{d} a_{l \\nu}}{\\mathrm{d} r}\\right)=\\left[\\left(\\frac{\\tilde{m}(r)^2}{f}-\\frac{\\phi^2q^2}{f^2}\\right)a_{l \\nu}^2+a_{l \\nu}'^2\\right]r^2f \\, .\n\\end{equation}\nwhere we have redefine the effective mass as $\\tilde{m}(r)^2 = m(r)^2 + l(l+1)\/r^2$ and it is obviously that $\\tilde{m}^2 \\geq m^2$.\nIntegrate it from horizon to infinity and we find\n\\begin{equation}\\label{eqscalar4}\n \\left.r^2f a_{l \\nu}\\frac{\\mathrm{d} a_{l \\nu}}{\\mathrm{d} r}\\right|_{r_h}^\\infty=\\int_{r_h}^{\\infty}\\left[\\left(\\frac{\\tilde{m}(r)^2}{f}-\\frac{\\phi^2q^2}{f^2}\\right)a_{l \\nu}^2+a_{l \\nu}'^2\\right]r^2f\\mathrm{d} r \\, .\n\\end{equation}\nAs $a(r)$ is a regular at horizon and decays to zero at infinity, the left side of Eq.~\\eqref{eqscalar4} is zero. 
Thus, we have\n\\begin{equation}\n\\int_{r_h}^{\\infty}\\left[\\left(\\frac{\\tilde{m}(r)^2}{f}-\\frac{\\phi^2q^2}{f^2}\\right)a_{l \\nu}^2+a_{l \\nu}'^2\\right]r^2f\\mathrm{d} r=0\\,.\n\\end{equation}\nThis means that $\\exists r_0\\in(r_h,\\infty)$ such that\n\\begin{equation}\\label{condh1}\n \\frac{\\tilde{m}(r)^2}{f(r_0)}-\\frac{\\phi(r_0)^2 q^2}{f(r_0)^2}<0\\,.\n\\end{equation}\nWe then obtain\n\\begin{equation}\\label{condh2}\nm^2\\leq \\tilde{m}(r)^2<\\frac{\\phi(r_0)^2 q^2}{f(r_0)}=\\mu^2q^2\\frac{r_0-r_h}{r_0-\\mu^2r_h}<\\mu^2q^2\\,.\n\\end{equation}\nHere we have used the solution~\\eqref{rnsolu1}. Above result and requirement~\\eqref{constraint1} are contradictory. This shows that, if the complex scalar field appears in a static black hole, its strength cannot be infinitesimal no matter how we choose the black hole parameters. Thus, the continuous phase transition from RN black hole to scalarized charged black cannot occur.\n\nThe demonstration above implies that the scalarized charged black hole solution cannot produce from the continuous spontaneous phase transition. However, the spontaneous scalarization on charged black hole may arise through non-continuous phase transition, namely the first-order phase transition. In this case, the complex scalar field jumps into nonzero from zero when we tune the parameters of black hole, see the bottom panel of Fig.~\\ref{conscal}. Recall the equivalence between the black hole system and the thermal mechanic system, it is thus intriguing to study the thermodynamic stability of the scalarized charged black hole, compared with the corresponding charged black hole with no hair. Motivated by this, interpreted charged black hole with scalar hair as a class of ensemble, in the following, we focus on investigating the thermodynamical stability on scalarized charged black hole in various ensembles: microcanonical ensemble, canonical ensemble and grand canonical ensemble, and try to check if there is a first order phase transition.\n\n\\section{Thermodynamic Instability}\\label{thermo}\nIn this section, we analyze the thermodynamic stability of scalarized charged black hole by proposing a specific example. As mentioned, the relation of Beskein entropy implies that black holes can be investigated as a thermodynamical system in term of three different ensemble: microcanonical ensemble, canonical ensemble and grand canonical ensemble. The microcanonical ensemble implies that black holes are interpreted as an isolated system where there does not exist any charge and energy exchange of black holes. The canonical ensemble indicates that black holes only exchange energy with environment of which the temperature is fixed. The grand canonical ensemble is similar to canonical ensemble but also admits the black hole to exchange the particles with environment. In practice, black holes in astrophysics are more likely to be grand canonical ensemble with exchange of particles and energy. However, as theoretical investigation on scalaized charged black hole, in following we still take microcanonical ensemble, canonical ensemble and grand canonical ensemble into account, studying the thermodynamical stability associated with the scalarized charged black hole and RN black hole respectively. Furthermore, we find that in grand canonical ensemble, the RN black hole is more stable than the scalarized charged black hole in thermodynamics, corresponding to general expectation. 
Nevertheless, we also find that the discontinuous scarization on RN black hole may happen in both microcanonical ensemble and canonical ensemble respectively. In the following, we will give more detail discussion.\n\nBefore go on discussing our result, let us first make short comment on different ensembles in black holes. The Euclidean path-integral approach to black hole thermodynamics originally proposed by Hawking in microcanonical ensemble~\\cite{Hawking:1976de}. Later on, the canonical ensemble was investigated by York et al.~\\cite{York:1986it,Whiting:1988qr,Brown:1994su}. It was found that suitable boundary conditions must be added in canonical ensemble. Then the York's approach was generalized into other ensembles such as the charged black hole in the grand canonical ensemble~\\cite{PhysRevD.42.3376,Brown_1990}. It has been found that the results obtained by using the path-integral approach depend on the boundary conditions~\\cite{Brown:1994gs,Hawking:1982dh}. The stability of black holes then also depends on the choice of boundary conditions and, consequently, on the choice of ensembles~\\cite{Comer_1992}. In fact, the stability properties of a black hole are drastically influenced by the boundary conditions that determine ensemble.\n\nIn following we proceed to our discussion. We specify the non-linear potential as a logarithmic function with respect to the scalar field\n\\begin{equation} \\label{potential}\nW(x):=m^2 c^2 \\log(1+\\frac{x^2}{c^2}),\n\\end{equation}\nwhere $c$ is a constant and $m$ is the effective mass of the scalar field. Considering the flat directions in gauge-mediated supersymmetric model~\\cite{deGouvea:1997afu, Kusenko:1997si}, this potential is proposed in Ref.~\\cite{Hong:2019mcj}, in which the supersymmetric breaking has been absorbed into the rescaling of $\\psi$. In addition to satisfying the stable vacuum requirement Eq.(\\ref{reqw1}) , the shape of potential is asymptotic flat when taking large field limit $\\psi \\gg c$. From the equation of motion of $\\psi$ in Eq.~\\eqref{eqscalar1}, one will find that the scalar field becomes massless large field limit $\\psi \\gg c$. Moreover, the logarithmic potential can bring better numerical stability as well.\n\nSince this paper tries to find the black hole solutions with scalar hair, there should be a horizon locating at position $r=r_h$. In static spherically symmetric case, this implies $f(r_h)=0$. To set up the numerical method to solve the equation of motion Eq.~(\\ref{eqscalar1}), we in principle still needs five additional independent boundary conditions at horizon. 
Practically, the regularity of physical fields at $r=r_h$ implies that the solution can approximatively be written as the following Taylor's series with respect to $r-r_h$,\n\\begin{eqnarray} \\label{horbou1}\n&&f(r)=f_1 (r-r_h)+\\cdots, \\quad \\chi(r)=\\chi_0+\\chi_1(r-r_h)+\\cdots \\, ,\\cr\n~\\cr\n&&\\phi(r)=\\phi_0+\\phi_1(r-r_h)+\\phi_2 (r-r_h)^2+\\cdots\\, , \\cr\n~\\cr\n&&\\psi(r)=\\psi_0+\\psi_1(r-r_h)+\\cdots \\, .\n\\end{eqnarray}\nTake them into Eq.(\\ref{eqscalar1}) and we will find\n\\begin{eqnarray} \\label{horbou2}\n&&\\psi_1=\\frac{2\\psi_0 c^2 m^2}{(\\psi_0^2+c^2)f_1}\\, ,\\quad \\phi_0=0\\,.\n\\end{eqnarray}\nThis leaves three independent variables $\\{\\psi_0, \\chi_0, \\phi_1\\}$ at horizon and so we have 7 independent parameters in solving Eq.~\\eqref{eqscalar1}\n\\begin{equation}\n\\{r_h, \\psi_0, \\chi_0, \\phi_1, c, m, q \\} \\, ,\n\\end{equation}\nin which $\\{r_h, \\psi_0, \\chi_0, \\phi_1 \\}$ come from the value of various fields at the horizon, $q$ is the charge of the scalar field and $\\{c, m\\}$ are the parameters of the non-linear potential. It is remarkable that there is a scaling symmetry of the equation of motion\n\\begin{equation}\nr \\to \\lambda r \\, , \\quad t \\to \\lambda t \\, , \\quad q \\to \\frac{q}{\\lambda} \\, , \\quad m \\to \\frac{m}{\\lambda},\n\\end{equation}\nwhich equivalently rescale the metric $g_{\\mu\\nu} \\to \\lambda ^2 g_{\\mu\\nu}$ and the electronic field $A_\\mu \\to \\lambda A_\\mu$. Due to such a symmetry, we fix $m=0.01$ for convenience. In addition, we must impose\n\\begin{equation}\n\\chi \\to 0, \\quad \\text{when} \\quad r \\to \\infty,\n\\end{equation}\nwhich is a requirement about the normalisation of $t$ associated with the gravitational redshift\\cite{Hartnoll:2008kx}. Practically, given any arbitrary value of $\\chi_0$, the equation of motion is invariant when performing the following scaling transformation\n\\begin{equation} \\label{tresc}\nt \\to \\mathrm{e}^{-\\frac{\\chi_{\\infty}}{2}}t \\, , \\quad \\chi \\to \\chi - \\chi_{\\infty} \\, , \\quad \\phi(r) \\to \\mathrm{e}^{\\frac{\\chi_{\\infty}}{2}}\\phi(r),\n\\end{equation}\nwhere $\\chi_{\\infty}$ denotes the value of the solution $\\chi(r)$ in the asymptotic infinity. In other words, we perform a time rescaling $t \\to e^{-\\frac{\\chi_{\\infty}}{2}}t$ to set $\\chi=0$ at the boundary. In order to numerically solve the equations simply, we in general set $\\chi_0=0$. However, such choice will lead to $\\chi(\\infty)=\\chi_\\infty\\neq0$, we can thus finally transform the solution to satisfy $\\chi(\\infty)=0$ by the transformation~\\eqref{tresc}.\n\nBase on these two symmetries, two physical parameters, the charge of the complex scalar field $q$ and the parameter related to the scalar potential $c$, and three parameters as the initial value at the horizon $r_h$, $\\phi_1$ and $\\psi_0$ are left. Therefore, The integration of the equation of motion Eq.~\\eqref{eqscalar1} from the event horizon to the infinity will give us a map:\n\\begin{equation} \\label{map1}\n\\{r_h, \\phi_1, \\psi_0 , c, q \\} \\mapsto \\{\\mu, T, Q, M, \\psi_+ \\}\\, .\n\\end{equation}\nIf one chooses five parameters $\\{r_h, \\phi_1, \\psi_0 , c, q \\}$ arbitrarily, the $\\psi_+$ may be or may not be vanish. 
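As a rough illustration of how this horizon-to-infinity integration and the selection of admissible boundary data can be organized numerically, a minimal sketch is given below (it is only schematic and is not the code used for the results of this paper; the SciPy-style integrator, the normalization chosen for $w=W'$, the cutoff and the simple bisection bracket are all assumptions of the sketch):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nm, q, c = 0.01, 0.011, 0.1          # parameters as in the text (q = 1.1 m)\nrh, phi1, chi0 = 1.0, 0.8, 0.0      # horizon data {r_h, phi_1, chi_0}\n# one possible reading of the logarithmic potential; adjust to your conventions\nw = lambda x: 2.0*m**2*c**2/(c**2 + x)               # w = W'\nW = lambda x: 2.0*m**2*c**2*np.log(1.0 + x/c**2)\n\ndef rhs(r, y):\n    # y = (psi, dpsi, phi, dphi, chi, f); Eq. (eqscalar1) as a first-order system\n    psi, dpsi, phi, dphi, chi, f = y\n    chip = -r*np.exp(chi)*psi**2*phi**2*q**2/f**2 - r*dpsi**2\n    fp = (-(1.0/r + r*dpsi**2/2.0)*f - r*np.exp(chi)*dphi**2 + 1.0/r\n          - np.exp(chi)*r*psi**2*phi**2*q**2/(2.0*f) - r*W(psi**2)/2.0)\n    ddpsi = (-(fp/f - chip/2.0 + 2.0/r)*dpsi\n             - (np.exp(chi)*phi**2*q**2/f**2 - w(psi**2)/f)*psi)\n    ddphi = -(chip/2.0 + 2.0/r)*dphi + psi**2*q**2*phi/(2.0*f)\n    return [dpsi, ddpsi, dphi, ddphi, chip, fp]\n\ndef psi_at_cutoff(psi0, rmax=1000.0, eps=1e-6):\n    # series data of Eqs. (horbou1)-(horbou2) imposed just outside the horizon\n    f1 = 1.0/rh - rh*np.exp(chi0)*phi1**2 - rh*W(psi0**2)/2.0\n    psi1 = 2.0*psi0*c**2*m**2/((psi0**2 + c**2)*f1)\n    y0 = [psi0 + psi1*eps, psi1, phi1*eps, phi1, chi0, f1*eps]\n    sol = solve_ivp(rhs, (rh + eps, rmax), y0, rtol=1e-10, atol=1e-12)\n    return sol.y[0, -1]\n\n# shooting: bisect in psi0, assuming the bracket straddles the admissible value\n# for which the growing mode psi_+ is absent\nlo, hi = 0.01, 0.5\nfor _ in range(60):\n    mid = 0.5*(lo + hi)\n    if np.sign(psi_at_cutoff(mid)) == np.sign(psi_at_cutoff(lo)):\n        lo = mid\n    else:\n        hi = mid\nprint('psi_0 is approximately', 0.5*(lo + hi))\n\\end{verbatim}\n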
Indeed, from Eq.~\\eqref{constraint1} only the solutions satisfying $\\psi_+=0$ are admissible, so only four of the parameters $\\{r_h, \\phi_1, \\psi_0 , c, q \\}$ are independent.\n\n\nChoosing the parameters as $r_h=1, c=0.1, \\phi_1=0.8, m=0.01, q=\\frac{11}{10}m$ and setting $r=r_{\\infty}=1000$ as the infinity cutoff, we first numerically solve the equations of motion Eq.~\\eqref{eqscalar1} by a Runge-Kutta method with the initial conditions Eq.~\\eqref{horbou1} and Eq.~\\eqref{horbou2}; we find that the complex scalar $\\psi$ has efficiently decayed to $0$ at the cutoff $r = 1000$. As we have explained, $\\psi_0$ is not free because we need to satisfy $\\psi_+=0$. We therefore use the standard shooting method to find the smallest value of $|\\psi_0|$ that satisfies the constraint Eq.~\\eqref{constraint1}. We find a numerical charged black hole solution with scalar hair when $\\psi_0\\approx0.1988$. After performing the scaling transformation Eq.~\\eqref{tresc}, the numerical solution is shown in Fig.~\\ref{fig1}, from which we find that $g_{tt}=f(r)\\mathrm{e}^{-\\chi(r)}$ is still a monotonically increasing function of $r$, indicating that in the scalarized charged black hole gravity is still attractive, as in the case of the RN black hole.\n\nIn addition, taking $r =1000$ as the infinity cutoff, from Fig.~\\ref{fig1} one can see that the scalar field $\\psi(r)$ has decayed smoothly to $0$ by $r=1000$, implying that the spacetime has efficiently reduced to the RN black hole there. If one increases the infinity cutoff, then to maintain the numerical accuracy one must increase the working precision, which increases the computational time. In practice, the cutoff is chosen in the following way: in double float accuracy, we take $\\max r=2^6,2^7,2^8,2^9$ and $2^{10}\\approx1000$ and observe that the differences of $\\psi_0$ become smaller and smaller. However, if we increase the cutoff to $2^{11}$ and beyond, the differences of $\\psi_0$ become larger and larger. This implies that $1000$ is the best cutoff. If we use quadruple float accuracy, the best cutoff is around $2000$; however, the computational time increases by more than a factor of $10$. Therefore, for investigating the stability of the scalarized charged black hole effectively and efficiently, it is reasonable to set $r=1000$ as the infinity cutoff.\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.22\\textwidth]{gtt.pdf}\n \\includegraphics[width=0.22\\textwidth]{scalar-f1.pdf}\n \\includegraphics[width=0.22\\textwidth]{scalar-a1.pdf}\n \\includegraphics[width=0.22\\textwidth]{scalar1.pdf}\n \\caption{Numerical solutions of the metric components $g_{tt}=f(r)\\mathrm{e}^{-\\chi(r)}$, $1\/g_{rr}=f(r)$, the electric potential $\\phi(r)$ and the scalar field $\\psi (r)$, respectively. Here we take parameters \\{$r_h=1, c=0.1, \\phi_1=0.8, m=0.01, q=\\frac{11}{10}m$\\}. The value $\\psi_0\\approx0.1988$ is given by the shooting method. }\\label{fig1}\n\\end{figure}\n\n\n\\subsection{Microcanonical Ensemble} \\label{micensem}\n\\subsubsection{Thermodynamic stability of the scalarized charged black hole}\nTo compare which is more stable, the RN black hole or the scalarized black hole, we need to specify which ensemble we consider. We first consider the microcanonical ensemble, which describes an isolated system. 
In the microcanonical ensemble, the characteristic thermodynamic variable is the entropy, and a physically realistic process always proceeds in the direction of increasing entropy; a phase transition can thus happen only if the entropy increases. Therefore, in the microcanonical ensemble we consider an isolated black hole where the total mass $M$ and total charge $Q$ are fixed parameters and the black hole entropy can be written as $S=S(M,Q)$. Since the black hole entropy is proportional to the horizon area, $S \\sim 4\\pi r_h^2$, a larger horizon radius indicates a more stable configuration in the microcanonical ensemble, so we need to compare the horizon radii of the scalarized black hole and of the RN black hole at the same mass $M$ and total charge $Q$.\n\n\nFollowing the illustration above, we numerically analyze the behavior of the event horizon radius in the $(Q\/M, \\Delta r_h\/M)$ plane at fixed total mass $M$, where $\\Delta r_h=r_{h}-r_{h_{\\text{RN}}}$ denotes the difference between the scalarized charged black hole and the corresponding RN black hole with the same $M$ and $Q$. Specifically, given a numerical scalarized charged black hole solution, the ADM mass and the total charge read\n\\begin{equation}\\label{MQ}\nM=-\\frac{r}{2}(f(r)-1)|_{r \\to r_{\\infty}} \\, , \\quad Q= r^2 \\phi' (r) |_{r \\to r_{\\infty}}.\n\\end{equation}\nThen, the event horizon radius of the corresponding RN black hole is\n\\begin{equation}\nr_{h_{\\text{RN}}}=M+\\sqrt{M^2-Q^2}.\n\\end{equation}\nAs a first example, we consider the solution given in Fig.~\\ref{fig1}. The mass and the total charge read $M \\approx 11.53$ and $Q \\approx 11.81$, implying that there does not exist a corresponding RN black hole with the same mass and charge. Furthermore, it also indicates that, due to the non-linear complex scalar field, the mass of the scalarized charged black hole can be smaller than its total charge.\n\nIn the following we search for the parameter region in which the mass of the scalarized charged black hole is larger than its charge, using the shooting method; in this region both the scalarized black hole (if it exists) and the RN black hole are solutions of Eq.~\\eqref{eqscalar1}. In this paper, we mainly adopt the shooting method for the numerical investigation of the thermodynamic stability of the scalarized charged black hole.\nWe give a plot in the $(c,\\psi_{0})$ plane (see the first plot of Fig.~\\ref{fig2}), where every point on the curve denotes a numerical solution of a charged black hole with scalar hair. Remarkably, from this plot one can easily see that $\\psi_0$ vanishes when $c \\ll 1$, implying that the scalarized charged black hole reduces to the RN black hole. In other words, our result is consistent with the probe limit, in which the action Eq.~\\eqref{action1} reduces to Einstein-Maxwell theory after rescaling $\\psi \\to c \\psi$ and then taking $c \\to 0$.\n\nRecalling Eq.~\\eqref{MQ}, one can read off the ADM mass $M$ and total charge $Q$ of every scalarized charged black hole labeled by $\\{c_{i}, \\psi_{0_{i}}\\}$. Therefore, we plot the points in the $(c, M-Q)$ plane and find that $M-Q$ has a zero around $c \\in [0.00101, 0.02]$ (see the second plot of Fig.~\\ref{fig2}). 
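In practice, the quantities entering this comparison are read off directly from the numerical profiles at the cutoff; a minimal sketch of this extraction (the array names are illustrative, and reading Eq.~\\eqref{MQ} off at a finite cutoff is only approximate) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef mass_charge(r, f, phi):\n    # far-field reading of Eq. (MQ): f -> 1 - 2M/r and phi' -> Q/r^2\n    M = -0.5*r[-1]*(f[-1] - 1.0)\n    Q = r[-1]**2*np.gradient(phi, r)[-1]\n    return M, Q\n\ndef delta_rh(r, f, phi, rh):\n    # horizon-radius difference with the RN black hole of the same M and Q\n    M, Q = mass_charge(r, f, phi)\n    if M < abs(Q):\n        return None          # no RN counterpart with these M and Q\n    return rh - (M + np.sqrt(M**2 - Q**2))\n\\end{verbatim}\n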
Using both the interpolation method and the shooting method, we obtain a series of numerical scalarized charged black hole solutions with $M=Q \\approx 0.9676$, denoted by $\\{c \\approx 0.01220, \\psi_0 \\approx 0.02473 \\}$, which is a good seed solution for investigating the relation between $\\Delta r_h$ and $Q$ in practice.\nFurthermore, recalling that we are working in the microcanonical ensemble, the radius of the corresponding RN black hole reads $r_{h_{RN i}}=M+\\sqrt{M^2-Q_i^2} $. In this way we establish a relation between $\\Delta r_{h_i} = r_{h_i}-r_{h_{RN i}}$ and $Q_{i}$ (see the last plot in Fig.~\\ref{fig2} and Appendix~\\ref{micapp} for a detailed discussion).\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{c0psi.pdf}\n \\includegraphics[width=0.3\\textwidth]{MQ.pdf}\n \\includegraphics[width=0.3\\textwidth]{rhQ.pdf}\n \\caption{\\textbf{Top}: Relationship between $c$ and $\\psi_0$ based on our numerical solutions. \\textbf{Middle}: A typical example of the relation between $M-Q$ and $c$. \\textbf{Bottom}: The relation between $\\Delta r_h\/M$ and $Q\/M$ at fixed $M=0.9676$. }\\label{fig2}\n\\end{figure}\n\nFrom $S \\sim r_h^2$ and the plot presented in the $(Q\/M, \\Delta r_h\/M)$ plane, one finds that there exists a small interval $Q\/M \\in [0.9993,1]$ in which the entropy of the scalarized charged black hole is larger than that of the RN black hole in the microcanonical ensemble. To conclude, in the microcanonical ensemble the scalarized charged black hole with mass-charge ratio lying in the interval $0.9993<Q\/M\\leq 1$ is thermodynamically favored over the corresponding RN black hole. Furthermore, when $|Q|>M$ there still exist scalarized charged black hole solutions, while the corresponding RN black hole does not exist.\n\n\n\\subsubsection{New versions for Penrose-Gibbons conjecture}\nIn fact, our numerical results also give a definite negative answer to a long-standing conjecture, the Penrose-Gibbons conjecture. It was conjectured that, for all asymptotically flat black holes, the horizon area $A_h$, the total mass $M$ and the total charge $Q$ satisfy the following inequality~\\cite{Mars:2009cj}\n\\begin{equation}\\label{penrgibbs}\n M\\geq\\sqrt{\\frac{A_h}{16\\pi}}+Q^2\\sqrt{\\frac{\\pi}{A_h}}\\,\n\\end{equation}\nor a weaker version\n\\begin{equation}\\label{penrgibbsb}\n \\sqrt{\\frac{A_h}{16\\pi}}\\leq\\frac12\\left(M+\\sqrt{M^2-Q^2}\\right)\\,\n\\end{equation}\nif the weak energy condition is satisfied. The saturation appears only for RN black holes. These two inequalities are charged generalizations of the following Penrose inequality\n\\begin{equation}\\label{penrose1}\n M\\geq\\sqrt{\\frac{A_h}{16\\pi}}\\,.\n\\end{equation}\nEq.~\\eqref{penrose1} has been proven in the general static case by a few different methods~\\cite{Mars:2009cj}. However, the proofs of the charged generalizations~\\eqref{penrgibbs} and \\eqref{penrgibbsb} are still open. In the spherical case, they reduce to\n\\begin{equation}\\label{penrgibbs2}\n M\\geq\\frac{r_h}2+\\frac{Q^2}{2r_h}\\,\n\\end{equation}\nand\n\\begin{equation}\\label{penrgibbs3}\n r_h\\leq\\left(M+\\sqrt{M^2-Q^2}\\right)\\,,\n\\end{equation}\nand the weak energy condition reduces to the requirement $T_{00}\\geq0$ outside the event horizon. 
In our model, recalling the energy-momentum tensor given in Eq.~\\eqref{Energy}, the $tt$ component of the energy-momentum tensor in the spherical ansatz Eq.~\\eqref{metric0} reads\n\\begin{equation}\nT_{tt}=\\frac{1}{2}q^2\\phi^2\\psi^2+\\frac{1}{2}\\mathrm{e}^{-\\chi}f W(|\\psi|^2)+\\frac{1}{4}f(r)\\phi'^2+\\frac{1}{2}f^2\\psi'^2 \\, .\n\\end{equation}\nFor a semi-definite non-linear potential $W(|\\psi|^2)$, every term is non-negative outside the horizon, so the weak energy condition $T_{00} \\geq 0$ is always satisfied. The inequalities~\\eqref{penrgibbs} and \\eqref{penrgibbsb} are two generalizations of the Penrose inequality to the charged case. Though the Penrose inequality in the static case (the Riemannian Penrose inequality) has several proofs, the stronger version~\\eqref{penrgibbs} has not been proved even in the static spherical case. Several proofs have been obtained by assuming that outside the black hole there are no charged current sources, i.e. in electrovacuum, and that the horizon is connected; see Refs.~\\cite{Malec:1994sy,Hayward:1998jj,Gibbons:1998zr,Khuri2013}. For inequality~\\eqref{penrgibbs}, a counterexample was found when the horizon is not connected~\\cite{Weinstein2005}. Though a proof of inequality \\eqref{penrgibbsb} has not yet been obtained in the general case, as far as we know no counterexample to this weaker version has been reported. Based on the inequality of arithmetic and geometric means, $\\frac{a+b}{2}\\geq \\sqrt{ab}$, a direct consequence of Eq.~\\eqref{penrgibbs2} is $M \\geq |Q|$, which is satisfied by the RN black hole. For inequality~\\eqref{penrgibbs3} to make sense it is necessary that the spacetime satisfies $M\\geq |Q|$.\n\n\nHowever, our numerical results offer a counterexample to both inequalities~\\eqref{penrgibbs2} and \\eqref{penrgibbs3}. Therefore, even in the spherically symmetric case with a single connected horizon, the inequalities~\\eqref{penrgibbs} and \\eqref{penrgibbsb} can be violated.\n\nWe note that inequality~\\eqref{penrgibbs} is not the only natural generalization of the original Penrose inequality. Here we offer two new generalizations. One natural generalization for scalarized charged black holes is to interpret $Q$ as the charge $Q_h$ enclosed by the event horizon, which coincides with $Q$ for the RN black hole due to charge conservation. The other is to use the chemical potential $\\mu$ to replace the role of the charge $Q$. Therefore, we have two new versions of the generalized Penrose inequality,\n\\begin{equation}\\label{penrgibbs4}\n M\\geq\\sqrt{\\frac{A_h}{16\\pi}}+Q_h^2\\sqrt{\\frac{\\pi}{A_h}}\\, , \\quad M\\geq\\sqrt{\\frac{A_h}{16\\pi}}+\\mu^2 r_h^2 \\sqrt{\\frac{\\pi}{A_h}}.\n\\end{equation}\nIn the spherical case, Eq.~\\eqref{penrgibbs4} reduces to\n\\begin{equation}\\label{penrgibbs5}\n M\\geq\\frac{r_h}2+\\frac{Q_h^2}{2r_h}\\,, \\quad M\\geq\\frac{r_h}2+\\frac{\\mu^2 r_h}{2}\\, .\n\\end{equation}\nHere we only verify the two inequalities in Eq.~\\eqref{penrgibbs5} numerically. Recalling the class of scalarized charged black hole solutions labeled by $\\{c, \\psi_0 \\}$, we work in the $\\left(c, M-\\left(\\frac{r_h}{2}+\\frac{Q_h^2}{2r_h}\\right)\\right)$ and $\\left(c, M-\\left(\\frac{r_h}{2}+\\frac{\\mu^2 r_h}{2 }\\right)\\right)$ planes, respectively, where $Q_h= r_h^2 \\mathrm{e}^{\\chi(r_h)\/2}\\phi'(r_h)$ is the charge enclosed by the event horizon. 
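Schematically, each point of these planes is obtained from a numerical solution as follows (a sketch only; the input names are illustrative):\n\\begin{verbatim}\nimport numpy as np\n\ndef penrose_margins(M, mu, rh, chi_h, dphi_h):\n    # left-hand side minus right-hand side of the two inequalities in Eq. (penrgibbs5)\n    Qh = rh**2*np.exp(chi_h/2.0)*dphi_h    # horizon charge\n    margin_Qh = M - (rh/2.0 + Qh**2/(2.0*rh))\n    margin_mu = M - (rh/2.0 + mu**2*rh/2.0)\n    return margin_Qh, margin_mu\n\\end{verbatim}\n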
From the plots given in Fig.~\\ref{fig6}, we find that $M>\\frac{r_h}{2}+\\frac{Q_h^2}{2 r_h}$ and $M>\\frac{r_h}{2}+\\frac{\\mu^2 r_h}{2}$ always hold as $c$ grows, implying that the two generalizations of the Penrose inequality proposed here hold in the spherical case. Furthermore, it is an intriguing problem to prove Eq.~\\eqref{penrgibbs4} for general scalarized charged black holes, and we leave it as future work.\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{MQh.pdf}\n \\includegraphics[width=0.3\\textwidth]{Mmu.pdf}\n \\caption{A numerical check of the inequalities shown in Eq.~\\eqref{penrgibbs5}. Here we take $\\{m=0.01, q=1.1m, r_h=1, \\phi_1=0.8\\}$.}\\label{fig6}\n\\end{figure}\n\\subsection{Canonical Ensemble and Grand Canonical Ensemble}\nIn this section, we turn to investigate the thermodynamic stability of the scalarized charged black hole in both the canonical and the grand canonical ensemble, by means of a series of specific numerical solutions as examples. In the canonical ensemble, the associated thermodynamic potential is the Helmholtz free energy $F(T,Q)$, a function of the temperature $T$ and the total charge $Q$. The thermodynamics of the grand canonical ensemble is described by the Gibbs free energy $G(T,\\mu)$, whose thermodynamic variables are the temperature $T$ and the chemical potential $\\mu$. In both ensembles, a real physical process proceeds in the direction of decreasing $F(T,Q)$ or $G(T,\\mu)$. Thus, the RN black hole will spontaneously scalarize if the hairy black hole has smaller free energy.\n\nFollowing the procedure developed in \\cite{Gibbons:1976ue,Caldarelli:1999xj}, one can read off the free energy from the on-shell Euclidean action, namely $Z=\\mathrm{e}^{-S_{\\text{E}_{\\text{on-shell}}}}$ and\n\\begin{equation}\nF:=-\\frac{1}{\\beta}\\ln Z|_{\\text{canonical}}\\,, \\quad G:=-\\frac{1}{\\beta}\\ln Z|_{\\text{grand canonical}}\\, ,\n\\end{equation}\nwhere $Z$ is the thermodynamic partition function and $\\beta=\\frac{1}{T}$ is the inverse temperature. We start with the Euclidean action associated with \\eqref{action1}\n\\begin{equation}\\label{Eaction1}\nS_E=S_{\\text{bulk}}+S_{\\text{surf}}+S_{\\text{ct}} \\, ,\n\\end{equation}\nwhere\n\\begin{eqnarray}\n&&S_{\\text{bulk}}:=-\\frac1{16\\pi}\\int \\mathrm{d}^4 x \\sqrt{g_E} \\cr\n&& \\left(R-F_{\\mu\\nu}F^{\\mu\\nu}-(D_\\mu\\psi)^\\dagger(D^\\mu\\psi)-W(|\\psi|^2)\\right)\\, , \\\\\n~ \\cr\n&&S_{\\text{surf}} := -\\frac{1}{8\\pi}\\int \\mathrm{d}^3 x \\sqrt{h_E}K \\, , \\\\\n~ \\cr\n&&S_{\\text{ct}} := \\frac{1}{8\\pi}\\int\\mathrm{d}^3 x \\sqrt{h_{\\rm E}}K_0 -\\frac{\\alpha}{4\\pi}\\int \\mathrm{d}^3 x \\sqrt{h_{\\rm E}} n_\\mu F^{\\mu\\nu}A_\\nu \\, .\n\\end{eqnarray}\n$S_{\\text{surf}}$ and $S_{\\text{ct}}$ denote the Gibbons-Hawking term and the counterterms, respectively. Explicitly, $h_E$ and $K$ denote the induced metric and the extrinsic curvature on an arbitrary boundary $\\partial M$. In addition, the first term of $S_{\\text{ct}}$, with $K_0 = \\frac{2}{r}$, removes the divergence of the surface term at large $r$, while the second term, with coefficient $\\alpha$, is introduced to remove the boundary term relevant to the electromagnetic field in the case of the canonical ensemble. In other words, $\\alpha=1$ and $\\alpha=0$ correspond to the canonical and the grand canonical ensemble, respectively. 
In following we will present more illustration about the $\\alpha$ term.\n\nIn the grand canonical ensemble, performing variation on the Euclidean action Eq.~\\eqref{Eaction1} with respect to the fields gives\n\\begin{eqnarray}\n\\delta S = &&-\\frac{1}{16\\pi} \\int_{M} \\mathrm{d}^4 x \\sqrt{g_E} \\left (E_g^{\\mu\\nu}\\delta g_{\\mu\\nu} + E_{\\psi}\\delta \\psi \\right) \\cr\n~\\cr\n&& -\\frac{1}{4\\pi}\\int_{M} \\mathrm{d}^4 x \\sqrt{g_E} E_A^{\\mu}\\delta A_\\mu \\cr\n~\\cr\n&& + \\frac{1}{4\\pi}\\int_{\\partial M} \\mathrm{d}^3 x \\sqrt{h_E}n_\\mu F^{\\mu\\nu}\\delta A_\\nu \\, ,\n\\end{eqnarray}\nwhere $n^\\mu$ denotes the unit normal vector orthogonal to the boundary $\\partial M$, $E_g^{\\mu\\nu}, E_\\psi, E_A^\\mu$ are the E.O.M given in Eq.~\\eqref{eom}. For well-posed variation principle, one must impose an appropriate boundary condition. In general, $\\delta A_{\\mu}=0$ on $\\partial M$ is imposed, which is appropriate for grand canonical ensemble with fixing chemical potential $\\mu$. However, in the case of canonical ensemble with fixing the charge $Q$ on $\\partial M$ as the boundary condition, the second term of $S_{\\text{ct}}$ must be under consideration, namely $\\alpha=1$. Moreover, the well-posed variation principle requires the boundary condition $\\delta (n_\\mu F^{\\mu\\nu})=0$ \\cite{Hawking:1995ap,Caldarelli:1999xj}.\n\nIn the case that the scalar field is zero, the background is a RN black hole, of which the metric reads\n\\begin{eqnarray} \\label{RNback}\n&&\\mathrm{d} s_{\\text{RN}}^2= -f_{\\text{RN}}(r)\\mathrm{d} t^2+\\frac{\\mathrm{d} r^2}{f_{\\text{RN}}(r)}+r^2 \\mathrm{d} \\Omega^2 \\, , \\cr\n~\\cr\n&&f_{\\text{RN}}(r)=1-\\frac{2M}{r}+\\frac{Q^2}{r^2}\\, .\n\\end{eqnarray}\nPlugging the RN black hole background Eq.~\\eqref{RNback} into $S_{\\text{E}}$ and setting $r \\to \\infty$ as the boundary, we find\n\\begin{equation}\\label{EacRN}\nS_{\\text{E}_{\\text{RN}}}=\\frac{\\beta}{2}(M \\pm Q\\Phi_{r_+})\\, ,\n\\end{equation}\nin which $\\pm$ denotes the canonical ensemble case and the grand canonical ensemble case respectively. Then the Helmholtz free energy $F(T,Q)$ and the Gibbs free energy $G(\\mu,T)$ of RN black hole can directly read\n\\begin{eqnarray}\\label{Frn}\n&&F_{\\text{RN}}(T,Q)=\\frac{1}{2}\\left(M(T,Q) - \\mu (T,Q) Q\\right) \\, , \\\\\n~\\cr\n&&G_{\\text{RN}}=\\frac{1}{2}\\left(M(\\mu,T) + \\mu Q(\\mu,T) \\right)\\, ,\n\\end{eqnarray}\nin which the chemical potential $\\mu= \\frac{Q}{r_+} $ is interpreted as the electric potential at the infinity. In the canonical ensemble, the ADM mass of the RN black hole $M_{\\rm RN}$ can be solved by the following relation\n\\begin{eqnarray} \\label{rnTQ}\n&& M_{\\rm RN}=\\frac{1}{2}\\left(r_{h_{\\rm RN}} + r_-\\right) \\, , \\quad Q_{\\rm RN}^2 =r_{h_{\\rm RN}} r_- \\, , \\cr\n ~\\cr\n&& T_{\\rm RN}=\\frac{r_{h_{\\rm RN}} - r_-}{4\\pi r_{h_{\\rm RN}}^2}\\, ,\\quad \\mu=\\frac{Q}{r_{h_{\\rm RN}}}\\, .\n\\end{eqnarray}\nwhere $r_-$ is the inner horizon of the RN black hole. However, the analytical expression of $M_{\\rm RN}(T_{\\rm RN},Q_{\\rm RN})$ is too complicated to present since there are three real or complex roots. In practice, numerically selecting the real root of $M_{\\rm RN}(T_{\\rm RN}, Q_{\\rm RN})$ with $M_{\\rm RN}>\\frac{r_{h_{\\rm RN}}}{2}$, we then read off $F_{\\rm RN}(T_{\\rm RN},Q_{\\rm RN})$. 
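In practice this root selection can be organized as in the following sketch (the cubic below follows from Eq.~\\eqref{rnTQ} by eliminating $r_-$; keeping only non-extremal roots with $r_{h_{\\rm RN}}>|Q|$ is an assumption of this illustration):\n\\begin{verbatim}\nimport numpy as np\n\ndef rn_outer_horizon(T, Q):\n    # Eq. (rnTQ) implies 4*pi*T*rp**3 - rp**2 + Q**2 = 0 for the outer horizon rp\n    roots = np.roots([4.0*np.pi*T, -1.0, 0.0, Q**2])\n    real = roots[np.isreal(roots)].real\n    # several branches may coexist in the canonical ensemble\n    return sorted(rp for rp in real if rp > abs(Q))\n\ndef rn_mass(rp, Q):\n    return 0.5*(rp + Q**2/rp)\n\\end{verbatim}\n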
As to in grand canonical ensemble, the Eq.~\\eqref{rnTQ} gives\n\\begin{equation}\\label{rnmuT}\nM_{\\rm RN}=\\frac{1- \\mu_{\\rm RN}^4}{8 \\pi T_{\\rm RN}} \\, , \\quad Q_{\\rm RN}=\\frac{\\mu_{\\rm RN}(\\mu_{\\rm RN}^2-1)}{4\\pi T_{\\rm RN}}\\, .\n\\end{equation}\nWe thus read off the value of $G_{\\rm RN}(\\mu,T_{\\rm RN})$ from Eq.~\\eqref{Frn} as well.\n\n\\subsubsection{The case of probe limit $c \\ll 1$}\nNow we turn to consider the case of the scalarized charged black hole. We firstly present a proof that, in the probe limit $c \\to 0$, the RN black hole will be more stable than the scalarized charged black hole in both canonical ensemble and the grand canonical ensemble. Firstly, rescaling $\\psi$, namely $\\tilde{\\psi} \\to c \\psi$, gives\n\\begin{equation}\\label{Eaction2}\n S_\\text{E}=S_{\\text{E}_{\\text{EM}}}+\\frac{\\beta c^2}{16\\pi}\\int \\mathrm{d}^4 x \\sqrt{g_{\\text{E}}} (D^ \\mu \\tilde{\\psi})^{\\dag}(D_\\mu \\tilde{\\psi})+m^2 \\log(1+|\\tilde{\\psi}|^2) \\, ,\n\\end{equation}\nwhere $S_{\\text{E}_{\\text{EM}}}$ denotes the Euclidean action contributed by Einstein-Maxwell theory. At the limit $c \\to 0$, it is clear that the solution will be a RN black hole with the metric $g_{\\mu\\nu}^{\\text{RN}}$ and gauge potential $A_{\\mu}^{\\text{RN}}$. When $c\\neq 0$ but $c\\ll q$, one can treat the contribution of scalar field as a perturbation of order $\\mathcal{O}(c^2)$. Let us assume that the metric and gauge potential become\n\\begin{equation}\\label{metricas1}\n g_{\\mu\\nu}=g_{\\mu\\nu}^{\\text{RN}}+c^2g_{\\mu\\nu}^{(1)},~~A_\\mu=A_{\\mu}^{\\text{RN}}+c^2A_\\mu^{(1)},~~~c^2\\ll1\\,.\n\\end{equation}\nUpon the $\\mathcal{O}(c^2)$ order, we find the Euclidean action contributed by Einstein-Maxwell theory reads\n\\begin{eqnarray}\\label{actems1}\n&& S_{\\text{E}_{\\text{EM}}}=S_{\\text{E}_{\\rm RN}} + c^2\\int_{M}\\sqrt{g_E}\\mathrm{d}^4x\\left[(G_{\\mu\\nu}|_{g_{\\mu\\nu}=g_{\\mu\\nu}^{\\text{RN}}})g_{\\mu\\nu}^{(1)} \\right. 
\\cr\n~\\cr\n&&+ \\left.(\\nabla_{\\mu}F^{\\mu\\nu}|_{A_\\mu=A_{\\mu}^{\\text{RN}}})A_\\nu^{(1)}\\right]+\\mathcal{O}(c^4)\\,,\n\\end{eqnarray}\nwhere $S_{\\text{E}_{\\rm RN}}$ is the on-shell action in RN background given by\n\\begin{equation}\nG_{\\mu\\nu}|_{g_{\\mu\\nu}=g_{\\mu\\nu}^{\\text{RN}}}=\\nabla_{\\mu}F^{\\mu\\nu}|_{A_\\mu=A_{\\mu}^{\\text{RN}}}=0 \\, .\n\\end{equation}\nTherefore, we find the $S_{\\text{E}_{\\text{EM}}}=S_{\\text{E}_{\\rm RN}}+\\mathcal{O}(c^4)$, which implies upon the leading order of $c$, the on-shell Euclidean action of Eq.~\\eqref{Eaction2} can be regard as\n\\begin{equation} \\label{Eaction3}\nS_{\\text{E}_{\\text{on-shell}}}=S_{\\text{E}_{ \\rm RN}}+S_{\\text{E}_\\text{scalar}} \\,\n\\end{equation}\nwith\n\\begin{equation} \\label{Eaction3b}\nS_{\\text{E}_\\text{scalar}}=\\frac{\\beta c^2}{16\\pi}\\int \\mathrm{d}^4 x \\sqrt{g_{\\text{E}}}[ ( \\bar{D}^ \\mu \\tilde{\\psi})^{\\dag}(\\bar{D}^{\\mu} \\tilde{\\psi})+m^2 \\log(1+|\\tilde{\\psi}|^2)] \\, ,\n\\end{equation}\nwhere $\\bar{D}_\\mu $ denotes the covariant derivative under the RN black hole background~\\eqref{RNback}.\n\n\nPlugging the RN black hole background Eq.~\\eqref{RNback} into Eq.~\\eqref{Eaction3b}, we have\n\\begin{eqnarray} \\label{Scaos}\n&&S_{\\text{E}_\\text{scalar}}= \\frac{\\beta c^2}{4}\\int^{\\infty}_{r_{h_{\\rm RN}}} \\mathrm{d} r ~r^2 \\cr\n~\\cr\n&&\\left(f_{\\text{RN}} \\tilde{\\psi'}^2 + \\frac{q^2 \\phi_{\\rm RN}^{2} \\tilde{\\psi}^2}{f_{\\text{RN}}} +m^2 \\log(1+|\\tilde{\\psi}|^2)\\right), \\,\n\\end{eqnarray}\nwhere the prime means the derivative with respect to $r$. It needs to note $\\phi_{\\rm RN}^{2}=-Q^2\/r^2\\leq0$, since $\\phi_{\\rm RN}$ denotes the electric potential of the RN black hole in Euclidean spacetime, namely $A^{\\rm RN}_{\\mu}= \\phi_{\\rm RN}(\\mathrm{d}\\tau)_{\\mu}=\\frac{i Q}{r}$ (here $\\tau$ is Euclidean time). This leads that the sign in the integration is undefined. Note that the equation of motion associated with the scalar field in RN background\n\\begin{equation}\n\\bar{D}^\\mu \\bar{D}_{\\mu}\\tilde{\\psi} - \\frac{m^2}{1+|\\tilde{\\psi|}^2}\\psi=0,\n\\end{equation}\nwhere explicitly given\n\\begin{equation}\nf_{\\rm RN}\\tilde{\\psi}''+f_{\\rm RN}'\\tilde{\\psi}'+\\frac{2f_{\\rm RN}}{r}\\tilde{\\psi}'-\\frac{m^2}{1+|\\tilde{\\psi}|^2}\\tilde{\\psi}- \\frac{q^2 \\phi_{\\rm RN}^2}{f}\\tilde{\\psi}=0\n\\end{equation}\nThen the Eq.~\\eqref{Scaos} gives\n\\begin{eqnarray}\\label{Scaos2}\n&&S_{\\text{E}_\\text{scalar}}=\\frac{\\beta c^2}{4}\\left( \\left(r^2 \\tilde{\\psi} \\tilde{\\psi}' f_{\\rm RN}\\right)|^{\\infty}_{r_{h_{\\rm RN}}} \\right. \\cr\n~\\cr\n&&+ \\left.\\int^{\\infty}_{r_{h_{\\rm RN}}} r^2\nm^2 \\left(\\log(1+|\\tilde{\\psi}|^2)-\\frac{\\tilde{\\psi}^2}{1+|\\tilde{\\psi}|^2}\\right)\\mathrm{d} r \\right)\\, .\n\\end{eqnarray}\nGiven appropriate boundary condition $\\psi (\\infty)=0$ and $f_{\\rm RN}|_{r=r_{h_{\\rm RN}}}=0$, the first term of Eq.~\\eqref{Scaos2} will vanish. To proceed, we consider the second term of Eq.~\\eqref{Scaos2}. One can construct a auxiliary function $t(x)$ as\n\\begin{equation}\nt(x)=\\log(1+x)-\\frac{x}{1+x}\\, , \\quad \\frac{\\mathrm{d} t(x)}{\\mathrm{d} x}=\\frac{x}{(1+x)^2} .\n\\end{equation}\nIt is easy to verify that for $x>0$, $ \\frac{\\mathrm{d} t(x)}{\\mathrm{d} x}>0$ always hold, indicating $t(x)$ is a monotone increasing function where the minimum value is lying on $x=0, t(0)=0$. 
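This monotonicity statement is simple to confirm symbolically; a minimal sketch (Python with sympy assumed, purely illustrative) is:\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x', positive=True)\nt = sp.log(1 + x) - x / (1 + x)\n\n# the derivative reproduces dt/dx = x/(1+x)^2 > 0 for x > 0,\n# and the minimum value t(0) = 0 is approached as x -> 0\nprint(sp.simplify(sp.diff(t, x) - x / (1 + x)**2))   # 0\nprint(sp.limit(t, x, 0))                             # 0\n\\end{verbatim}\n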
Since $|\\psi|^2$ is positive for any $r$, the integrand of the second term of Eq.~\\eqref{Scaos2} is always positive. Therefore, we have proved\n\\begin{equation}\\label{frees1}\nS_{\\rm{E}_\\text{scalar}}>0.\n\\end{equation}\nFurthermore, since $F=\\frac{1}{\\beta} S_{\\text{E}}$ in the canonical ensemble and $G=\\frac{1}{\\beta} S_{\\text{E}}$ in the grand canonical ensemble, we have\n\\begin{eqnarray}\n&\\Delta F = F-F_{RN}=F_{\\text{scalar}}>0 \\, , \\\\\n&\\Delta G = G-G_{RN}=G_{\\text{scalar}}>0\\, .\n\\end{eqnarray}\nTo conclude, based on the demonstration above, the contribution of the complex scalar field to $F(T, Q)$ and $G(T, \\mu)$ is always positive. Since thermodynamic stability favors the configuration with the smaller free energy, we thus prove that, in the probe limit, the RN black hole is more stable than the scalarized charged black hole in both the canonical ensemble and the grand canonical ensemble.\n\n\\subsubsection{The case of general $c$: Canonical Ensemble}\nWhen $c$ is not infinitesimal, the higher-order terms in $c$ play a role and the above proof breaks down. In the following, we consider the Euclidean action Eq.~\\eqref{Eaction1} in general. As there is no analytical solution for the scalarized black hole, we can only compute the free energy numerically. To this end, let us first derive some useful formulas that simplify the numerical computation of the free energy. Performing the Wick rotation, the Euclidean line element reads\n\\begin{equation}\\label{metric0E}\n \\mathrm{d} s^2=f(r)\\mathrm{e}^{-\\chi(r)}\\mathrm{d} \\tau^2+\\frac{\\mathrm{d} r^2}{f(r)}+r^2\\mathrm{d}\\Omega^2\n\\end{equation}\nand\n\\begin{equation}\\label{matters1E}\n A_\\mu=i\\phi(r)(\\mathrm{d} \\tau)_\\mu,~~\\psi=\\psi(r)\\,.\n\\end{equation}\nUsing the trick given in \\cite{Hartnoll:2008kx}, we also find the following relation between the on-shell Lagrangian and the $\\theta\\theta$ component of the energy-momentum tensor,\n\\begin{equation}\n2T^{\\theta}{}_{\\theta}={\\cal L}_{\\text{on-shell}}-R.\n\\end{equation}\nTaking into account the equations of motion given in Eq.~\\eqref{eom}, we arrive at\n\\begin{equation}\n{\\cal L}_{\\rm on-shell}=-{G^t}_t-{G^r}_r=\\frac2{r^2}[(rf)'+r\\chi'f-1] \\, ,\n\\end{equation}\nwhere $G_{\\mu\\nu}$ is the Einstein tensor.
Recall the Euclidean on-shell action of RN black hole Eq.~\\eqref{EacRN}, we obtain the Euclidean on-shell action of Eq.~\\eqref{Eaction1},\n\\begin{equation}\nS_{\\text{E}_{\\text{on-shell}}}=\\frac{\\beta M}{2}+\\frac{\\beta}{2} \\int^\\infty_{r_h} \\mathrm{d} r[(rf\\mathrm{e}^{-\\chi\/2})'-\\mathrm{e}^{-\\chi\/2}]+\\alpha \\mu Q \\, .\n\\end{equation}\nAfter performing integration by parts, we respectively obtain the Gibbs free energy in grand canonical ensemble\n\\begin{equation}\\label{freeF2}\nG(\\mu,T)=\\frac{r_h}{2}-\\frac{M}2+\\frac12\\int_{r_h}^\\infty(1-\\mathrm{e}^{-\\chi\/2})\\mathrm{d} r \\, ,\n\\end{equation}\nand the Helmholtz free energy in canonical ensemble\n\\begin{equation}\\label{freeF1}\nF(Q,T)=\\frac{r_h}2-\\frac{M}2+\\frac12\\int_{r_h}^\\infty(1-\\mathrm{e}^{-\\chi\/2})\\mathrm{d} r + \\mu Q \\, .\n\\end{equation}\nGiven a scalarized charged black hole $\\{c_i,\\psi_{0_i} \\}$, the Hawking temperature and the chemical potential read\n\\begin{equation} \\label{Tmu}\nT=\\frac{f'(r_h)\\mathrm{e}^{-\\chi(r_h)\/2}}{4\\pi}\\, , \\quad \\mu = [\\phi(r) + r \\phi'(r)]|_{r \\to \\infty}.\n\\end{equation}\n\nIn the case of canonical ensemble, we pick out the temperature $T$, total charge $Q$ and evaluate the helmholtz free energy $F(T,Q)$ of every scalarized charged black hole solution $\\{c_i, \\psi_{0_i}\\}$ base on Eq.~\\eqref{MQ}, Eq.~\\eqref{Tmu} and Eq.~\\eqref{freeF1}. Using shooting method,\nwe work with dimensionless $(Q\/T, \\Delta F\/T)$ with fixing temperature $T=T_0=0.02858$ and $(T\/Q, \\Delta F\/Q)$ plane with fixing $Q=Q_0=1.318$ without losing generality. (See also the middle and the right plot in Fig.~\\ref{fig3} and Appendix.~\\ref{canapp} for detail discussion).\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.3\\textwidth]{cdf.pdf}\n \\includegraphics[width=0.3\\textwidth]{FreeQ.pdf}\n \\includegraphics[width=0.3\\textwidth]{FreeT.pdf}\n \\caption{\\textbf{Top}: A typical example about relationship between $\\Delta F$ and $c$. Here we take $\\{r_h=1, m=0.01, q=11m, \\phi_1=0.8\\}$. \\textbf{Middle}: The relation between $Q\/T$ and $\\Delta F\/T$ by fixing $T=0.02858$. \\textbf{Bottom}: The relation between $T\/Q$ and $\\Delta F\/ Q$ with fixing $Q=1.318$. }\\label{fig3}\n\\end{figure}\n\n\n\nFrom the plot in $(Q\/T, \\Delta F\/T)$ and $(T\/Q, \\Delta F\/Q)$ plane we present in Fig.~\\ref{fig3}, one can find that there also exist small intervals $Q\/T \\in [46.15,49.30] $ and $T\/Q \\in [0.02167,0.02318]$ in which the $F(T,Q)$ of sclarized charged black hole is smaller than the corresponding RN black hole in canonical ensemble. To summarize, this result indicates that in the case of canonical ensemble, the scalarized charged black hole is more stable than the corresponding RN black hole in the region $Q\/T \\in [46.15,49.30] $ and $T\/Q \\in [0.02167,0.02318]$. Remarkably, when $Q\/T>49.30$ and $T\/Q>0.02318$ region there does not exist the corresponding RN black hole.\n\n\\subsubsection{The case of general $c$: Grand Canonical Ensemble}\nFinally, we turn to consider the grand canonical ensemble case where the thermodynamic variables is temperature $T$ and chemical potential $\\mu$. Different from in the case of microcanonical ensemble and canonical ensemble, from Eq.~\\eqref{rnmuT} one will find that given a numerical scalarized charged black hole solution with temperature $T_i$ and chemical chemistry $\\mu_i$, there is always a corresponding RN black hole sharing the same $T_i$ and $\\mu_i$. 
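To make this evaluation concrete, the following minimal sketch (Python with numpy and scipy assumed) computes $T$ and $\\mu$ from Eq.~\\eqref{Tmu}, the free energies from Eqs.~\\eqref{freeF2} and \\eqref{freeF1}, and the RN reference value at the same $(T,\\mu)$ from Eqs.~\\eqref{Frn} and \\eqref{rnmuT}; the callables \\texttt{f}, \\texttt{chi}, \\texttt{phi} stand for interpolated background profiles from the numerical solver, and the cutoff and step sizes are arbitrary choices of ours:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef _d(func, x, h=1.0e-6):\n    # one-sided finite difference, safe at the horizon\n    return (func(x + h) - func(x)) / h\n\ndef free_energies(f, chi, phi, r_h, M, Q, r_max=1.0e4):\n    # Hawking temperature and chemical potential, Eq. (Tmu)\n    T  = _d(f, r_h) * np.exp(-chi(r_h) / 2.0) / (4.0 * np.pi)\n    mu = phi(r_max) + r_max * _d(phi, r_max)\n    # common integral of Eqs. (freeF2)/(freeF1), truncated at r_max\n    I, _ = quad(lambda r: 1.0 - np.exp(-chi(r) / 2.0), r_h, r_max)\n    G = 0.5 * r_h - 0.5 * M + 0.5 * I      # Gibbs free energy, Eq. (freeF2)\n    F = G + mu * Q                         # Helmholtz free energy, Eq. (freeF1)\n    return T, mu, F, G\n\ndef gibbs_rn(T, mu):\n    # Eqs. (Frn) and (rnmuT) combine to G_RN = (1 - mu^2)^2 / (16 pi T)\n    return (1.0 - mu**2)**2 / (16.0 * np.pi * T)\n\\end{verbatim}\nThe difference $G-G_{\\rm RN}$ obtained this way is the quantity $\\Delta G$ considered next.\n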
With this in mind, we pick out $\\Delta G(T, \\mu)= G(T, \\mu)-G_{\\rm RN} (T,\\mu)$ and present a plot in $(c, \\Delta G)$ plane (See also the plots in Fig.~\\ref{cG}) from which one can observe that $\\Delta G$ is always positive for any $c_{i}$.\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{Gibbs24.pdf}\n \\includegraphics[width=0.3\\textwidth]{Gibbs25.pdf}\n \\caption{\\textbf{Top}: A typical example about the relationship between $\\Delta G$ and $c$. Here we take $c \\in [0.001, 0.4]$.\n \\textbf{Bottom}: Zooming in the left plot in the small $c$ region $c \\in [0.001, 0.004]$. Here we take $\\{r_h=1, m=0.01, q=11m, \\phi_1=0.8\\}$.}\\label{cG}\n\\end{figure}\nThe absence of appropriate seed solution indicates that the Gibbs free energy $G(T, \\mu)$ of an arbitrary scalarized charged black hole solution might be always larger than the corresponding RN black hole in grand canonical ensemble. In following we take a small $c=0.00101$ and larger $c=0.1$ as example to study the behavior of $\\Delta G$ related to the fixing temperature $T$ and chemical potential $\\mu$. In this case, one can read off the $G_{\\rm RN}(T, \\mu)$ of the corresponding RN black hole with temperature $T_i$ and chemical potential $\\mu_i$ from Eq.~\\eqref{Frn} and Eq.~\\eqref{rnTQ}. We then work with a dimensionless $(\\mu\/T, \\Delta G\/T)$ plane by fixing $T=0.02864$ and dimensionless $(T\/\\mu, \\Delta G\/ \\mu)$ plane with fixing $\\mu=0.800$ respectively, without losing generality as we mentioned above. We also offer two plots as well (See also the plots presented in Fig.~\\ref{GTmu} and Appendix.~\\ref{graapp} for detail).\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{Gibbs-mu.pdf}\n \\includegraphics[width=0.3\\textwidth]{Gibbs-T.pdf}\n \\caption{In the case of $c=0.00101$, \\textbf{Top}: A typical example about the relation between $\\mu\/T$ and $\\Delta G\/T$ with fixing $T=0.02864$. \\textbf{Bottom}: The other typical example about the relation between $T\/\\mu $ and $\\Delta G\/\\mu)$ with fixing $\\mu=0.800$. }\\label{GTmu}\n\\end{figure}\nThese results show that the $G(T,\\mu)$ of the scalarized charged black hole is always larger than the corresponding RN black hole in grand canonical ensemble when $c$ is small. Therefore it can be concluded that the RN black hole is more stable than scalarized charged black hole in grand canonical ensemble, which is consistent with our proof in probe limit.\n\nAs to Working with dimensionless $(\\mu\/T, \\Delta G\/T)$ plane with fixing $T=0.02721$ and dimensionless $(T\/\\mu, \\Delta G\/\\mu)$ plane with fixing $\\mu=0.8114$ respectively, we present a plot (See the plots of Fig.~\\ref{GTmu2})\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{Gibbs-mu2.pdf}\n \\includegraphics[width=0.3\\textwidth]{Gibbs-T2.pdf}\n \\caption{In the case of $c=0.1$, \\textbf{Top}: A typical figure about the relation between of $\\mu$ and $\\Delta G$ in dimensionless $(\\mu\/T, \\Delta G\/T)$ plane with fixing $T=0.02721$. \\textbf{Bottom}: Another typical example about the relation between of $T$ and $\\Delta G$ in dimensionless $(T\/\\mu, \\Delta G\/\\mu)$ plane with fixing $\\mu=0.8114$.}\\label{GTmu2}\n\\end{figure}\nand observe that $\\Delta G$ is also positive for any $T$ and $\\mu$ from the plots, it can be concluded that for a larger $c$, the RN black hole is also more stable than the scalarized charged black hole in grand canonical ensemble. 
We have checked other parameter choices carefully and find the same conclusion. Hence, we summarize that, thermodynamically, the RN black hole is more stable than the scalarized charged black hole in the grand canonical ensemble.\n\n\n\\section{Kinetic Stability}\\label{KietSta}\nIn this section, we turn to study the kinetic stability of the scalarized charged black hole solution, i.e. the stability against a small perturbation. First, based on black hole perturbation theory, we set up a general framework to obtain a one-dimensional, Schr\\\"{o}dinger-like radial equation in the frequency domain. We then investigate the validity of the shooting method by analyzing its numerical error, pointing out that it can catch the unstable modes efficiently, but not the stable modes. Based on the argument in \\cite{Konoplya:2019hlu}, we adopt the WKB approximation method to calculate the stable (damped) modes of the perturbative field.\n\nIn practice, we consider a massless real perturbative scalar field and obtain the linearized equation of motion associated with the scalar perturbation. Since the black hole background is static and spherically symmetric, this equation can be reduced to one dimension in the frequency domain \\cite{Chandrasekhar:1975zza}. Imposing the appropriate physical boundary conditions, an ingoing wave near the horizon and an outgoing wave at spatial infinity, resonance states of the scalar perturbation arise, picking out a class of complex frequencies $\\omega = \\omega_{\\rm R}+ i \\omega_{\\rm I}$, the so-called black hole quasi-normal modes (QNMs). Moreover, the imaginary part of the frequency $\\omega_{\\rm I}$ reflects energy dissipation at both the horizon and spatial infinity. If $\\omega_{\\rm I}>0$, the scalar perturbation grows exponentially, leading to an instability of the black hole background at least at the linear level. If $\\omega_{\\rm I}<0$, the scalar field is exponentially damped, does not trigger an instability of the spacetime, and finally dissipates away rapidly.\n\nWe begin by reducing the master equation of the scalar perturbation to a radial equation in the frequency domain. Given the spherically symmetric line element Eq.~\\eqref{metric0}, the equation of motion of the perturbative scalar field reads\n\\begin{equation}\\label{proeom}\n\\square \\psi_2 =0 \\, ,\n\\end{equation}\nwhere $\\square=\\nabla^\\mu \\nabla_\\mu$ is the d'Alembert operator in the spherically symmetric background of Eq.~\\eqref{metric0}. Adopting the method of separation of variables, we take the following ansatz for $\\psi_2$,\n\\begin{equation}\n\\psi_2=\\mathrm{e}^{-i \\omega t}R(r)Y_{lm}(\\theta,\\varphi),\n\\end{equation}\nwhere $l$ and $m$ are the azimuthal quantum number and the magnetic quantum number, respectively, satisfying $l=0,1,2,\\ldots$ and $|m|\\leq l$.
The radial equation reads\n\\begin{equation}\\label{radical}\n\\Delta \\frac{d}{dr}\\Delta \\frac{dR}{dr}+ U R =0,\n\\end{equation}\nwhere\n\\begin{equation}\n\\Delta=r^2 f(r)\\mathrm{e}^{-\\frac{\\chi}{2}}\\, , \\quad U=\\frac{\\omega^2 r^2}{f \\mathrm{e}^{-\\frac{\\chi}{2}}}-l(l+1).\n\\end{equation}\nBy introducing the tortoise coordinate and a new radial equation $\\tilde{R}$ as\n\\begin{equation}\\label{tortoise}\n\\frac{dy}{dr}=\\frac{r^2}{\\Delta}\\, , \\quad R=r \\tilde{R}\\, ,\n\\end{equation}\nthe radial equation Eq.(\\ref{radical}) can be written in the following standard wave function form\n\n\\begin{equation}\\label{derad}\n\\frac{d^2 \\tilde{R}}{dy^2}+\\tilde{U}\\tilde{R}=0 \\, ,\n\\end{equation}\nwhere\n\\begin{eqnarray}\\label{UU}\n\\tilde{U}&=&\\omega^2-V \\cr\n~\\cr\n&=&\\omega^2-\\frac{l(l+1)}{r(y)^2}f(y) \\mathrm{e}^{-\\chi(y)} \\cr\n~ \\cr\n&&-\\frac{1}{2r(y)}\\frac{d}{dr}\\left(f(y)^2 \\mathrm{e}^{-\\chi(y)}\\right)\\, .\n\\end{eqnarray}\nIn Eq.~\\eqref{UU}, we denotes $f(y), \\chi(y)$ and $\\chi(y)$ as a function with respect to $y$. From the definition of the tortoise coordinate, one can read $y \\to -\\infty$ corresponding to $r \\to r_h$, while $y \\to \\infty$ corresponding to $r \\to \\infty$. To single out a series of QNMs, we impose the following boundary condition,\n\\begin{equation}\\label{wavebound}\n\\tilde{R} \\varpropto \\left\\{\n\\begin{aligned}\n&\\mathrm{e}^{-i \\omega y } \\quad \\quad &&y \\to -\\infty \\\\\n&\\mathrm{e}^{i \\omega y} \\quad \\quad &&y \\to \\infty \\, ,\n\\end{aligned}\n\\right.\n\\end{equation}\nindicating that the probing scalar is pure going wave at the horizon and pure outgoing wave at the spatial infinity. In general, in order to investigate both the stable and unstable mode of perturbative field, one can solve QNMs using numerical method, popularly the shooting method. There are also serval effective approximative methods to calculate QNMs, for example, the WKB approximation method. It has been argued that the WKB approximation can only catch the stable modes with $\\omega_{\\rm I}<0$ even if the spectrum contains unstable modes~\\cite{Konoplya:2019hlu}. We will show that, on the contrary, the shooting method can only catch the unstable modes of $\\omega_{\\rm I}>0$ but cannot catch the stable modes. Thus, the combination of shooting method and the WKB approximation method can offer us a complete analyses on QNMs.\n\n\n\\subsection{Shooting Method and Analysis on Numerical Error}\\label{numerr}\nIn this section, we analyze numerical error of shooting method and illustrate its validity on the calculation of the unstable modes. Let us first briefly explain how to use shooting method to find QNMs.\n\nTo setup the numerical procedure, we firstly interpret Eq.~\\eqref{tortoise} and Eq.~\\eqref{derad} as a set of ODE (one first-order differential equation as well as one second order differential equation), one thus need three boundary condition. In addition to Eq.\\eqref{wavebound}, we need the last boundary condition associated to the radial coordinate $r$ pand tortoise coordinate $y$. we thus asymptotically expand near horizon. Specifically, in near horizon region, Eq.~\\eqref{tortoise} gives\n\\begin{equation}\\label{tortoise2}\n\\frac{\\mathrm{d} y}{\\mathrm{d} r}= \\frac{1}{f_1 (r-r_h)}\\, ,\n\\end{equation}\nwhere $f_1$ is given by Eq.~\\eqref{horbou1} and the equation of motion Eq.~\\eqref{eqscalar1}. Interpreted the radial coordinate as a analytical function with respect to the tortoise coordinate $y$. 
The Eq.~\\eqref{tortoise2} then can be solved that\n\\begin{equation}\\label{wavebound0}\nr=r_h+\\mathrm{e}^{f_1 y}\\, , \\quad y \\to -\\infty \\, .\n\\end{equation}\nTherefore, we obtain a numerical solvable boundary-valued question with two differential equations\n\\begin{equation}\\label{waveode}\n\\left\\{\n\\begin{aligned}\n&\\frac{d^2 \\tilde{R}}{dy^2}+\\tilde{U}\\tilde{R}=0 \\, , \\cr\n&\\frac{dr}{dy}=f(r(y))\\, ,\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $\\tilde{U}$ has been given in Eq.~\\eqref{UU}. To obtain a numerical eigenfunction of QNMs satisfying the boundary condition Eq.~\\eqref{wavebound}, working with complex numerics, we primarily numerically perform integration on Eq.\\eqref{waveode} with Eq.~\\eqref{wavebound} and\n\\begin{equation}\n\\tilde{R} \\varpropto \\mathrm{e}^{- i \\omega y} \\, , \\quad y \\to \\infty.\n\\end{equation}\nIn general, the numerical integration will give two branch solution in $y \\to \\infty$ region,\n\\begin{equation}\n\\tilde{R} \\varpropto \\mathrm{e}^{i \\omega y} + {B} \\mathrm{e}^{- i \\omega y}\\, , \\quad y \\to \\infty.\n\\end{equation}\nFurthermore, we have\n\\begin{equation}\n{B} \\varpropto \\frac{\\mathrm{e}^{i \\omega y}}{2 \\omega}\\left(R \\omega + i R'(y)\\right) \\, , \\quad y \\to \\infty.\n\\end{equation}\nThe QNMs will then give\n\\begin{equation}\nB \\left(\\omega_{\\text{QNMs}}\\right)=0\\, .\n\\end{equation}\nIn practice, one can locate the QNMs by drawing a density figure related to $\\frac{1}{B}$ in $\\left(\\omega_{\\text{R}}, \\omega_{\\text{I}} \\right)$ plane. Since the QNMs will give a vanish $R_{-}$, the location of QNMs in density figure can thus be observed as a bright spot in a density figure. Through observe the approximative region of the bright spot related to $\\omega_{\\text{R}}$ and $\\omega_{\\text{I}}$, denoted as $\\omega_{\\text{Initial}}=\\omega_{\\text{R}_{\\text{Initial}}} + i \\omega_{\\text{I}_{\\text{Initial}}}$. One can solve the QNMs by standard shooting method.\n\nNumerically, the error includes three parts in general. One comes from the finite difference when we solve the differential equation numerically, which can be suppressed by using higher order methods or smaller step-size. The second one comes from the fact that we have to set two finite cut-off at the horizon and infinite boundary. The third part comes from the float-point error of computer. We will show in following that, if imaginary part of QNMs is positive, errors of the second and third parts lead to the computational complexity will increase exponentially if we improve the desired accuracy of QNMs.\n\nInstead of considering the boundary condition~\\eqref{wavebound}, we firstly consider a general boundary-valued eigenvalue problem with following boundary condition for the radial equation Eq.~\\eqref{derad}\n\\begin{equation}\\label{wavesol}\n\\tilde{R} \\varpropto \\left\\{\n\\begin{aligned}\n&\\mathrm{e}^{-i \\omega y } + A \\mathrm{e}^{i \\omega y} \\quad \\quad &&y \\to -\\infty \\\\\n&\\mathrm{e}^{i \\omega y} + B \\mathrm{e}^{-i \\omega y} \\quad \\quad &&y \\to \\infty \\, ,\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $A, B$ are two complex constants. For a pair of given specific $A$ and $B$, the radial equation Eq.~\\eqref{derad} can numerically give a complex frequency $\\omega=\\omega_{\\text{R}}+i \\omega_{\\text{I}}$. Therefore, we can interpret a series of complex frequency as an analytical function with respect to $A$ and $B$, denoted as $\\omega(A, B)$. 
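The integrate-and-match procedure outlined above can be sketched in a few lines (Python with numpy assumed). Here \\texttt{f} and \\texttt{chi} stand for interpolated background profiles, \\texttt{f1} for the near-horizon coefficient of Eq.~\\eqref{horbou1}, and the cutoffs, step number and finite-difference step are illustrative choices of ours; the tortoise coordinate is advanced with $\\mathrm{d}r/\\mathrm{d}y=f\\mathrm{e}^{-\\chi/2}$ from Eq.~\\eqref{tortoise}, which reduces to Eq.~\\eqref{waveode} for $\\chi\\to 0$:\n\\begin{verbatim}\nimport numpy as np\n\ndef transmission_B(omega, f, chi, r_h, f1, ell=0,\n                   y_min=-40.0, y_max=120.0, n_steps=40000):\n    def V(r):\n        # effective potential, Eqs. (UU)/(effVy); forward difference for d/dr\n        g = lambda s: f(s)**2 * np.exp(-chi(s))\n        h = 1.0e-5\n        return (ell * (ell + 1) * f(r) * np.exp(-chi(r)) / r**2\n                + (g(r + h) - g(r)) / (h * 2.0 * r))\n\n    def rhs(y, u):\n        R, S, r = u[0], u[1], u[2].real\n        drdy = f(r) * np.exp(-chi(r) / 2.0)\n        return np.array([S, -(omega**2 - V(r)) * R, drdy], dtype=complex)\n\n    # purely ingoing start near the horizon, Eqs. (wavebound) and (wavebound0)\n    u = np.array([np.exp(-1j * omega * y_min),\n                  -1j * omega * np.exp(-1j * omega * y_min),\n                  r_h + np.exp(f1 * y_min)], dtype=complex)\n    h = (y_max - y_min) / n_steps\n    y = y_min\n    for _ in range(n_steps):          # plain fixed-step RK4\n        k1 = rhs(y, u)\n        k2 = rhs(y + 0.5 * h, u + 0.5 * h * k1)\n        k3 = rhs(y + 0.5 * h, u + 0.5 * h * k2)\n        k4 = rhs(y + h, u + h * k3)\n        u = u + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0\n        y = y + h\n    R, S = u[0], u[1]\n    # coefficient of the unwanted ingoing wave at large y\n    return np.exp(1j * omega * y) * (omega * R + 1j * S) / (2.0 * omega)\n\n# scanning omega over a grid in the complex plane and plotting 1/abs(B)\n# then reveals candidate QNMs as bright spots, as described above\n\\end{verbatim}\n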
Theoretically, it is obvious that the QNMs arise from vanishing both $A$ and $B$, $\\omega_{\\text{QNM}}=\\omega(0,0)$, matching the boundary condition Eq.~\\eqref{wavebound}. However, in the shooting method in numerics, $A, B$ cannot exactly vanish due to floating-point error. Practically, in order to solve QNMs in numerics, the real boundary condition in the numerical computations is\n\\begin{equation}\\label{wavesol2}\n\\tilde{R} \\varpropto \\left\\{\n\\begin{aligned}\n&\\mathrm{e}^{-i \\omega y } + \\epsilon \\quad \\quad &&y \\to -\\infty \\\\\n&\\mathrm{e}^{i \\omega y} + \\epsilon \\quad \\quad &&y \\to \\infty \\, ,\n\\end{aligned}\n\\right.\n\\end{equation}\nwhere $\\epsilon$ denote the floating-point error in numerics. For a given complex frequency, comparing with Eq.~\\eqref{wavesol} and Eq.~\\eqref{wavesol2}, one can find that it is equivalent to set a pair of nonzero $A$ and $B$ and\n\\begin{equation} \\label{Aep}\nA \\sim \\epsilon \\mathrm{e}^{\\omega_{\\text{I}}y_{-}} \\, ,\\quad B \\sim \\epsilon \\mathrm{e}^{-\\omega_{\\text{I}} y_{+}} \\, ,\n\\end{equation}\nwhere $y_{\\pm }$ are numerical cutoff at both negative infinity and positive infinity respectively. Thus, for a computer which has fixed machine precision, the floating-point error restricts our ability to find the QNMs in arbitrary precision. To improve the accuracy of finding QNMs, we have to increase the machine precision of computer. In general, $\\omega(A, B)$ can be expanded as following approximation upon the 1st order of $A$ and $B$\n\\begin{equation}\n\\omega(A,B)= \\omega_{\\text{QNM}} +\\frac{\\partial\\omega}{\\partial A} A + \\frac{\\partial \\omega}{\\partial B} B + \\cdots\\,.\n\\end{equation}\nWe thus can obtain following estimation on the error of finding QNMs\n\\begin{equation}\n\\omega_{\\text{err}}:=|\\omega(A,B)-\\omega_{\\text{QNM}}|\\sim\\max\\left\\{\\left|\\frac{\\partial \\omega}{\\partial A} \\right|\\left|A\\right|,\\left|\\frac{\\partial \\omega}{\\partial B}\\right| \\left| B\\right| \\right\\}.\n\\end{equation}\ndenotes the numerical error associated with the QNMs. As it is reasonable to assume that $\\omega(A,B)$ is the analytical function of $A$ and $B$, so $\\partial\\omega\/\\partial A$ and $\\partial\\omega\/\\partial B$ are both finite. Thus,it can be concluded that the error is controlled by $A$ and $B$, i.e. $\\epsilon \\mathrm{e}^{\\omega_{\\text{I}}y_{-}}$ and $\\epsilon \\mathrm{e}^{-\\omega_{\\text{I}} y_{+}}$.\n\nFor a series of unstable mode, $\\omega_{\\text{I}} > 0$, triggering on exponential growth as time evolution, from Eq.\\eqref{Aep} one can find that an effective working precision with given $\\epsilon$ both $A, B$ will exponentially decay as large cutoff, ensuing numerical accuracy that $\\omega_{\\text{err}}$ is convergence in finite computation time. Therefore, the error caused by float-point is suppressed exponentially and we only need to care about the errors caused by finite difference and cut-off. In this case, the shooting method can catch the unstable mode efficiently.\n\nHowever, it does not work for stable modes $\\omega_{\\text{I}} < 0$, in which $\\psi_2$ will exponential damp as time evolution. 
In this case we face a contradiction: if we require $\\omega_{\\text{err}}$ to converge so as to guarantee accuracy, the required working precision, and hence the computing time, grows exponentially with the cutoffs $y_{\\pm}$, while in order to obtain precision-guaranteed QNMs we need to set $y_{\\pm}$ as large as possible to reduce the cutoff error. Put differently, if one enlarges the cutoff tenfold, $y_{\\pm} \\to 10 y_{\\pm}$, the required working precision has to be improved up to $\\mathrm{e}^{10 \\omega_{\\text{I}}}$ to ensure the same computational precision. Consequently, the numerical approach cannot catch the stable modes efficiently.\n\n\nTo illustrate our analysis more clearly, we take the negative P\\\"{o}schl-Teller potential as an example in Appendix~\\ref{PT}, which shows how the shooting method can catch unstable modes efficiently. Based on the illustration above, we adopt the numerical approach to search for the unstable modes $\\omega_{\\text{I}} > 0$ and the WKB approximation method to calculate the stable modes $\\omega_{\\text{I}} < 0$.\n\nTo end this section, we take the numerical solution denoted by $\\{c=0.1, \\psi_0 \\approx 0.1988\\}$ with $\\{r_h=1, \\phi_1=0.8, m=\\frac{1}{100}, q=\\frac{11}{10}m\\}$ and investigate whether there exist unstable modes among the QNMs. Following the above procedure, we show a 3D plot of $1/B$ over the $(\\omega_{\\rm R}, \\omega_{\\rm I})$ plane in Fig.~\\ref{numpic1}. One can easily observe that there is no sharp peak, which would indicate the existence of an unstable mode among the QNMs. Furthermore, we examine several other numerical solutions $\\{c_{i}, \\psi_{0_{i}}\\}$ and again do not observe any unstable mode among the QNMs. Before concluding that the scalarized charged black hole is stable against a neutral scalar field at the linear level, in the following section we also adopt the WKB approximation method to calculate the stable modes of the QNMs.\n\\begin{figure}[hbpt]\n \\centering\n \n \\includegraphics[width=0.5\\textwidth]{numpic1.pdf}\n \\caption{3D plot of $\\frac{1}{B}$ over the $(\\omega_{\\rm R}, \\omega_{\\rm I})$ plane, taking the numerical solution $\\{c=0.1, \\psi_0 \\approx 0.1988\\}$ with $\\{r_h=1, \\phi_1=0.8, m=\\frac{1}{100}, q=\\frac{11}{10}m\\}$ as an example.}\\label{numpic1}\n\\end{figure}\n\n\\subsection{The WKB Approximation Method}\nFirst, we give a brief introduction to calculating the QNMs by the WKB approximation method. The WKB approximation method was first used to calculate the QNMs of the Schwarzschild black hole by Schutz and Will \\cite{Shusz}. It was then developed to 3rd order by Iyer and Will \\cite{Iyer:1986np,Iyer:1986nq} and further to 6th order by R.A. Konoplya \\cite{Konoplya:2003ii}. Recently it has been extended to the 12th order by Matyjasek and Opala \\cite{Matyjasek:2017psv}. In this paper, we adopt the 3rd order WKB approximation method to analyze the QNMs of the scalarized charged black hole.
We point out again the we have used numerical method in the last subsection verified that there is no unstable QNMs.\n\nRecall the radial function Eq.~\\eqref{derad}, the effective potential $V(y)$ gives\n\\begin{equation}\\label{effVy}\nV(y)=\\frac{l(l+1)}{r(y)^2}f(y) \\mathrm{e}^{-\\chi(y)}+\\frac{1}{2r(y)}\\frac{d}{dr}\\left(f(y)^2 \\mathrm{e}^{-\\chi(y)}\\right)\n\\end{equation}\nWe firstly consider the numerical scalarized charged black hole solution $\\{c=0.1 ,\\psi_0=0.1987\\}$ as an example, found in Sec.~\\ref{thermo}. Substituting the numerical solution in Eq.~\\eqref{effVy}, we present a plot of $V(y)$ taking $l=0,1,2$ as example in Fig.~\\ref{effV1}. Furthermore, one can numerically read off the local maximal value $V_0(y_0)$ from Eq.~\\eqref{effVy} (See also Table.~\\ref{V0y0})\n\\begin{table}[hbtp]\n\\centering\n\\begin{tabular}{c|c|c|c}\n \\hline\n \\hline\n & $l=0$ & $l=1$ & $l=2$ \\\\\n \\hline\n $V_0$ & $ 0.04641$ & 0.2240 & $0.5794$ \\\\\n \\hline\n $y_0$& $ -4.485$ & $ -4.355$ & $-4.327$ \\\\\n \\hline\n \\hline\n\\end{tabular}.\n\\caption{The local maximal $V_0$ of the effective potential $V(y)$ under the numerical scalarized charged black hole solution $\\{c=0.1, \\psi_0=0.1978 \\}$. } \\label{V0y0}\n\\end{table}\n\\begin{figure}[h]\n \\centering\n \n \\includegraphics[width=0.3\\textwidth]{effVy.pdf}\n \\caption{ The effective potential under the tortoise coordinate $V(y)$. }\\label{effV1}\n\\end{figure}\nWe then proceed to compute the QNMs using the WKB approximation method up to the 3rd order. To maintain accuracy, we respectively take $l=0$ up to $n=2$ as well as $l=1,2$ up to $n=5$ as example. Recall the formula of the 3rd order WKB Approximation method, the QNMs have been given explicitly in Ref.~\\cite{Iyer:1986np,Iyer:1986nq} as follow,\n\\begin{equation}\n\\omega^2=(V_0+\\sqrt{-2V''_{0}} \\Lambda_2)-i(n+\\frac{1}{2})\\frac{1}{\\sqrt{-2 V_0''}}(1+\\Lambda_3),\n\\end{equation}\nwhere\n\\begin{eqnarray}\\label{WKB}\n\\Lambda_2 &=& \\frac{1}{\\sqrt{-2 V_0''}}\\left(\\frac{1}{8}\\left(\\frac{V^{(4)}_0}{V_0''}\\right)\\left(\\alpha^2+\\frac{1}{4}\\right)\\right. \\cr\n&&\\left.-\\frac{1}{288}\\left(\\frac{V^{(3)}_0}{V_0''}^2(60\\alpha^2+7) \\right)\\right), \\nonumber \\\\\n~ \\cr\n\\Lambda_3 &=& \\frac{1}{-2V_0''}\\left(\\frac{5}{6912}\\left(\\frac{V^{(3)}_0}{V_0''}\\right)^4(188\\alpha^2+77)\\right. \\cr\n&&-\\frac{1}{384}\\left(\\frac{V^{(3)2}_0V^{(4)}_0}{V''^3_0}\\right) (100\\alpha^2+51)\\cr\n&&+\\frac{1}{2304}\\left(\\frac{V^{(4)_0}}{V''_0}\\right)^2(68\\alpha^2+67) \\cr\n&&+\\frac{1}{288}\\left(\\frac{V'''_0 V^{(5)}_0}{V''^2_0} \\right) (28\\alpha^2+19) \\cr\n&&- \\left.\\frac{1}{288}\\left(\\frac{V^{(6)}_0}{V''_0}\\right)(4\\alpha^2+5) \\right)\\, ,\n\\end{eqnarray}\nwhere $V_0^{(n)}$ denotes the $n$-th derivative of $V(y)$ with respect to $y$ located on $y_0$ and $\\alpha=(n+\\frac{1}{2})$. In Eq.~\\eqref{WKB}, $\\Lambda_2$ and $\\Lambda_3$ denote the second order and the third order approximation associated with the WKB method, respectively Plugging the specific numerics into Eq.~\\eqref{WKB}, we then show the results of $n \\leq l+1$ in which $l=0, 1,2 $ respectively for maintaining precision (See also the Table.~\\ref{QNMs}). Since the WKB method is sufficient in high-lying mode but may not be sufficient enough in low-lying mode, we do not consider the high overtone case. 
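As a concrete illustration of how the entries of Table~\\ref{V0y0} and the input of Eq.~\\eqref{WKB} can be extracted from a numerically tabulated potential, the short sketch below (Python with numpy assumed; all names are ours) locates the peak of $V(y)$ on a grid, estimates $V_0''$ by a finite difference and returns the leading-order WKB estimate $\\omega^2\\approx V_0-i\\left(n+\\frac{1}{2}\\right)\\sqrt{-2V_0''}$; the corrections $\\Lambda_2$ and $\\Lambda_3$ of Eq.~\\eqref{WKB} are then built from the higher derivatives obtained in the same way:\n\\begin{verbatim}\nimport numpy as np\n\ndef wkb_leading_order(y, V, n=0):\n    # y, V: 1-D arrays sampling the effective potential of Eq. (effVy)\n    # on a uniform grid in the tortoise coordinate\n    i0 = int(np.argmax(V))                 # peak location y_0, cf. Table (V0y0)\n    dy = y[1] - y[0]\n    V0 = V[i0]\n    V0pp = (V[i0 + 1] - 2.0 * V[i0] + V[i0 - 1]) / dy**2   # V''(y_0) < 0 at a maximum\n    omega2 = V0 - 1j * (n + 0.5) * np.sqrt(-2.0 * V0pp + 0j)\n    return np.sqrt(omega2)                 # leading-order QNM estimate\n\n# higher derivatives of V at y_0, needed for Lambda_2 and Lambda_3, can be\n# obtained from the same grid, e.g. with successive np.gradient calls around i0\n\\end{verbatim}\n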
In Table.~\\ref{QNMs}, we show the numerical results of QNMs up to 3rd and the associated relative errors $\\Delta$, defined as $\\Delta = |\\omega-\\omega_2|\/|\\omega|$ where $\\omega_2$ denotes the numerical QNMs given by the second order WKB method. One can observe that the imaginary part of the complex frequency $\\omega_{\\rm I}$ is always negative, indicating that the numerical solution $\\{c_0, \\psi_{0_0}\\}$ is stable against the scalar perturbation.\n\\begin{table}[hbtp]\n\\centering\n\\begin{tabular}{c|c|c|c}\n \\hline\n \\hline\n $n$ & $l=0$ & $l=1$ & $l=2$ \\\\\n \\hline\n $0$ & \\tabincell{c}{$0.146-0.125 i$ \\\\ $~24.8\\%$ } & \\tabincell{c}{$0.450-0.112 i$ \\\\$ 2.03\\%$ } & \\tabincell{c}{$0.748-0.112 i$ \\\\ $0.488 \\% $ } \\\\\n \\hline\n $1$& \\tabincell{c}{ $0.104-0.409i $ \\\\ $~22\\% $ } & \\tabincell{c}{$ 0.419 - 0.350 i $ \\\\$ ~6.24\\% $ } & \\tabincell{c}{ $0.728- 0.340 i$ \\\\ $1.90 \\%$} \\\\\n \\hline\n $2$& & \\tabincell{c}{$0.371 - 0.601 i $ \\\\ $~10\\% $ } & \\tabincell{c}{ $0.692-0.575 i$ \\\\ $ 4.23 \\%$ } \\\\\n \\hline\n $3$& & & \\tabincell{c}{$0.646-0.816 i$ \\\\ 7.07 \\% } \\\\\n \\hline\n \\hline\n\\end{tabular}\n\\caption{The QNMs of the scalarized charged black hole with $\\{ c=0.1, \\psi_0=0.1988\\}$. In each cell we show the numerical results of QNMs and associated relative error $\\Delta = |\\omega-\\omega_2|\/|\\omega|$. }\\label{QNMs}\n\\end{table}\n\n\nRecall a series numerical scalarized charged black hole solution, denoted by $\\{c_{i}, \\psi_{0_i}\\}$ in Sec.~\\ref{micensem}, for without lost generality, we calculate $\\omega_{\\rm I}$ associated with $c_i$ by the WKB approximation method as well. We find that $\\omega_{\\rm I}$ are always negative, and do not change obviously as the growth of $c_i$, implying that the QNMs of scalarized charged black hole is less affected by the amplitude of the non-linear potential. Furthermore, we also have checked other different parameters and found the similar results. This suggests that the scalarized charged black hole should be kinetically stable under a neutral perturbation.\\footnote{Strictly speaking, the QNMs do not form a complete bases in mathematics, so above analysis does not cover all possible perturbations in mathematics. }\n\n\\section{Conclusion}\\label{Conclu}\nIn this paper, we consider the Einstein-Maxwell theory minimally coupled with a non-linear complex field. Considering an appropriate boundary condition for scalarized black hole in asymptotic flat spacetime. We at first briefly list our main result:\n\\begin{itemize}\n\\item{For general non-linear semi-definite potential, we prove that the scalarization cannot result from a continuous phase transition. 
}\n\n\n\\item{Treating the scalarized black hole as a thermodynamical system, we observe that the discontinuous scalarization on RN black hole will not happen in grand canonical ensemble but will happen in both microcanonical ensemble and canonical ensemble.}\n\n\\item{We also find that neutral scalar perturbation will not trigger kinetic instability associated with the scalarized charged black hole by means of analysing the QNMs using numerical method and the WKB approximation method.}\n\\item{As a by-product, we use numerical results to give negative answer to Penrose-Gibbons conjecture and suggest two new versions of Penrose inequality in charged case.}\n\\end{itemize}\n\nIn detail, motivated by the numerical solution as a counterexample for the no-hair theorem given in \\cite{Hong:2020miv}, we investigate the thermodynamic stability of the scalarized charged black hole, compared with the RN black hole in various ensembles. It needs to note that it is more suitable to choose grand canonical ensemble for a black hole in astrophysics. However, in this paper, we still take microcanonical ensemble and canonical ensemble into account as theoretical research interests.\n\nIn microcanonical ensemble, ADM mass $M$ and total charge $Q$ are fixed and the phase transition will happen towards the direction which increases the entropy, namely the radius of the event horizon. Giving specific $M$ and $Q$ and working with $(Q\/M,\\Delta r_h\/M)$ plane, we find that it is possible the scalarized charged black hole is more stable than the RN black hole in thermodynamics. Particularly, our numerical results imply that the scalarized black hole is more stable than RN black hole when temperature is low enough and the near extremal RN black hole will always transit into scalarized black hole via a first order phase transition.\n\nAs to in the canonical ensemble and grand canonical ensemble, the stability in thermodynamics requires the minimal of the Helmholtz free energy $F(T, Q)$ with fixing Hawking temperature $T$ and total charge $Q$, and the Gibbs free energy $G(T,\\mu)$ with fixing $T$ and chemical potential $\\mu$ respectively. Taking probe limit, we firstly present a proof that the RN black hole is more thermodynamically stable in both canonical ensemble and grand canonical ensemble. Following the similar procedure in microcanonical ensemble, giving specific $T$ and $Q$ and working with $(T\/Q, \\Delta F\/Q)$ and $(Q\/T, \\Delta F \/ T)$ plane, we also find it is possible that the scalarized charged black hole is more stable in thermodynamics in canonical ensemble and so the RN black hole may spontaneously scalarize via a first order phase transition in canonical ensemble. However, in grand canonical ensemble, we find that RN black hole always have smaller Gibbs free energy and so is more stable than scalarized black hole, which implies that the RN black hole will not spontaneously scalarize in grand canonical ensemble.\n\nFinally, we study the kinetic stability of the scalarized charged black hole against scalar perturbation. Due to the static and spherical spacetime background, we firstly reduce the master equation of the perturbative scalar to $1$ dimension in frequency domain. Given pure ingoing wave condition at the horizon as well as pure outgoing wave condition at the spatial infinity, a series of complex frequency $\\omega=\\omega_{\\rm R}+ i \\omega_{\\rm I}$ is picked out, namely the quasi-normal modes (QNMs) of black hole. 
Through numerical error analysis, we claim that the shooting method in numerics can efficiently catch the unstable mode $\\omega_{\\rm I}>0$, rather than the stable mode $\\omega_{\\rm I}<0$. Therefore, we calculate the unstable mode within QNMs by the shooting method in numerics and conclude that there does not exist any unstable mode within the QNMs associated with the scalarized charged black hole. In addition, since the WKB approximation method can effectively catch the stable modes, rather than the unstable mode, within the QNMs, we adopt the 3rd order WKB approximation method to compute the QNMs of the scalarized charged black hole and find that the imaginary part of the QNMs $\\omega_{\\rm I}$ is always negative. Therefore, we concluded that the scalarized charged black hole should be kinetically stable under a neutral perturbation. As further discussion on future work, it is worth to investigate whether other perturbation, for instance, vector perturbation or tensor perturbation will trigger on kinetic instability. We thus will keep focusing on this topic in our future work.\n\nAs a by-product of our numerical construction of scalarized black hole, we also definitely give negative answer to a long-standing conjecture named Penrose-Gibbons conjecture. Particularly, we find that the the total charge can be larger than the ADM mass but the temperature is still positive in scalarized black hole. Based on our numerical results, we propose two new generalizations of Penrose inequality in charged case and numerically verify their correctness in our model. Moreover, we will give more discussion in our future works.\n\n\n\n\\acknowledgments\nWe are grateful to Hong L\\\"{u}, De-Cheng Zou and A. Zhidenko for useful discussions and Wen-Di Tan, Shi-Fa Guo and Ze Li for proofreading.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMatter-wave interferometers with electrons \\cite{Mollenstedt1956a,Kiesel2002,Hasselbach1988a,Hasselbach2010}, atoms \\cite{Carnal1991a,Keith1991a}, neutrons \\cite{Rauch1974,Colella1975}, molecules \\cite{Brezger2002a,Grisenti2000a} or ions \\cite{Hasselbach1998a,Maier1997,Hasselbach2010} are all extremely sensitive to dephasing mechanisms. Thereby the phase of each single-particle wave is shifted relative to the detector by a temporal varying process. Integrating the individual interference patterns with such alternating phase shifts leads to a loss of contrast, even if full coherence is still maintained in the system. However, if the time-dependent phase shift were known, the dephasing could be corrected and full contrast could be recovered. Decoherence, on the other hand, causes a loss of contrast on the single-particle level, which cannot be corrected for.\n\nThe origin of dephasing mechanisms can be quite different, such as mechanical vibrations \\cite{Stibor2005}, temperature drifts, or, especially important in the case of charged-particle interferometers, electromagnetic oscillations. While high frequency perturbations can be efficiently suppressed via vibrational isolation systems, electric filters, and mu-metal shieldings, low-frequency components become dominant. They can only be partially addressed by e.g using complex shielding schemes, low noise beam guiding electronics, and filtering of the \\unit[50]{Hz} oscillation of the electric network. 
Moreover, the beam emission center might drift in position, as has been observed for conventional field ionization sources such as \"supertips\" \\cite{Kalbitzer}, which was a major obstacle in the first realization of ion interferometry \\cite{Hasselbach1998a,Maier1997,Hasselbach2010}. The suppression of low-frequency oscillations is therefore of major importance for the realization of stable particle interferometers. It is, for instance, a substantial challenge for the realization of sophisticated ion interference experiments \\cite{Hasselbach1998a,Maier1997,Hasselbach2010}, e.g., in the context of Aharonov-Bohm physics \\cite{Batelaan2009,Silverman1993}, where long signal integration times are necessary.\n\n\\begin{figure*}\n\\includegraphics[width=5.2in]{figure1} \\caption{(Color)\nIn-vacuum setup of the electron biprism interferometer, which is a modified version of the one described in \\cite{Hasselbach1998a,Maier1997,Hasselbach2010} (not to scale). An electron beam is field emitted from a single-atom tip \\cite{Kuo2006a,Kuo2008} and adjusted by deflector electrodes towards the optical axis. The electron matter-waves are coherently separated and recombined by an electrostatic biprism \\cite{Mollenstedt1956a}. The resulting interference pattern is magnified by quadrupole lenses and detected in a delay line detector \\cite{Jagutzki2002}. It can be artificially disturbed by a time-varying field originating from two magnetic coils, which are placed outside the vacuum chamber. A mu-metal shield (not shown) is placed between the in-vacuum setup and the magnetic coils.} \\label{fig1}\n\\end{figure*}\n\nIn this article, we describe a method to significantly decrease the influence of low-frequency oscillations by including temporal and spatial particle correlations in the data analysis. The method is demonstrated experimentally using an electron interferometer, where a modern delay line detector \\cite{Jagutzki2002} provides not only spatial information about the particle impact but also high temporal resolution. This makes them superior to commonly used multi-channel plates (MCPs) in conjunction with a fluorescence screen, which does not allow for high-precision time and position measurements. We show that, even after strong dephasing oscillations, the interference pattern can be recovered via correlation analysis. Therefore, we provide a full theory, which takes into account spatial and temporal correlations of all particle pairs. In principle, this method can be used for all periodic dephasing oscillations in the low-frequency regime below the particle count rate. Our method is thus of special importance in matter-wave experiments where temperature drifts or mechanical oscillations from the environment, such as the building, the cooling system, or the vacuum pumps, tend to wash out the interference pattern. For periodic perturbations our procedure can determine the frequency and amplitude of the dephasing signal and completely restore the spatial fringe pattern. The capability to identify the origin of dephasing is helpful for the design of further shielding or filtering in a matter-wave interferometer.\n\n\n\\section{Experiment}\n\nWe demonstrate the correlation analysis using a biprism electron interferometer. It was originally constructed by Hasselbach et al. \\cite{Hasselbach1998a,Maier1997,Hasselbach2010} and modified with a new beam source and a new detector. Figure \\ref{fig1} shows a sketch of the in-vacuum setup. 
It consists of an iridium covered tungsten (111) single-atom tip \\cite{Kuo2006a,Kuo2008} that acts as a field emitter for a highly coherent electron beam. The field emission voltage was set to \\unit[-1.53]{kV}. The vacuum pressure in the setup was \\unit[$5\\times10^{-9}$]{mbar}. The beam adjustment towards the optical axis of the setup is performed by using three deflector electrodes. Each one consists of four metal plates pairwise on opposite potentials to deflect the beam in the horizontal ($x$) and vertical ($y$) direction. The tip illuminates a fine gold coated biprism glass fiber that is oriented along $x$ and has a diameter of $\\sim$ \\unit[400]{nm} \\cite{Warken2008}. It is positioned between two grounded plates to coherently split and recombine the electron matter-wave \\cite{Mollenstedt1956a}. Setting the biprism on a positive potential of a few volts, the partial waves overlap and create an interference pattern parallel to the direction of the biprism fiber. For imaging purposes this interference pattern is expanded along $y$ using a quadrupole lens. A magnetic coil directly after the biprism is used as an image rotator to align the fringes to the quadrupole lens. The electron signal is amplified by a double-stage multichannel plate and detected by a delay line anode. The signal is recorded and analyzed by a computer. The whole in-vacuum setup is surrounded by a mu-metal shield, primarily damping high frequency electromagnetic perturbations.\n\nEssential for our method of dephasing removal is the delay line detector. In biprism interferometry, interference patterns are typically detected using a MCP in conjunction with a fluorescent phosphor screen, which is then digitalized with a CCD camera. This allowed a moderate spatial resolution, restricted by the channel diameters of the MCP's, but only a rather limited temporal detection that is dependent on the long fluorescence decay time of the phosphor screen. With the delay line anode, single electrons can be detected with a spatial resolution below \\unit[100]{\\textmu m} and a time accuracy below \\unit[1]{ns}. The dead time between two individual pulses is \\unit[310]{ns} limiting the detectable count rate to the MHz regime. An incoming amplified electron pulse hits the delay line anode, consisting of two meandering wires oriented perpendicular to each other ($x$ and $y$ direction), and it induces a charge pulse in both directions of each wire. By measuring the time delay between those pulses at the wire endings, the spatial position and time of impact can be determined \\cite{Jagutzki2002}.\n\n\n\\begin{figure*}\n\\includegraphics[width=1.0\\textwidth]{figure2} \\caption{(Color) a) Electron biprism interference pattern and corresponding integrated line profile recorded with the setup of fig. \\ref{fig1}b. The same interference pattern after the introduction of a periodic \\unit[50]{Hz} magnetic field oscillation with a dc amplitude of 2$\\pi$. The fringes are completely washed out. c) Histogram and integrated line profile of spatial distances $\\Delta x$ and $\\Delta y$ between temporal adjacent events. They clearly show correlations in the position of two consecutive events, revealing the existence of an interference pattern even after perturbation. 
The integrated line profile has been corrected by a factor $\\left(1-|\\Delta y|\/Y\\right)^{-1}$ to correct for the finite pattern width of \\unit[$Y=20$]{mm}.}\n\\label{fig2}\n\\end{figure*}\n\nFigure \\ref{fig2}a shows an interference pattern, as detected with the delay line detector, after a total number of about $5\\times 10^5$ electrons recorded in about \\unit[100]{s}. This corresponds to a particle count rate of $\\sim$ \\unit[5]{kHz}. The distance between two fringes in the interferogram is $\\sim$ \\unit[2]{mm} and the contrast amounts to about \\unit[35]{\\%}. To demonstrate our method we artificially add a periodic dephasing in the form of an oscillating magnetic field. It is created by two external magnetic coils positioned outside the vacuum chamber and the mu-metal shield (see Figure \\ref{fig1}). The magnetic field lines are oriented in the $x$-direction applying a force on the electrons in the $y$-direction normal to the interference fringes and parallel to the detection plane. This causes a periodic shift of the interference pattern, reducing the overall fringe contrast of the time integrated pattern. Using a function generator the frequency and amplitude of this disturbance can be tuned.\n\nWe disturbed the interference pattern of Figure \\ref{fig2}a with a \\unit[50]{Hz} oscillation. The amplitude was set such that it moved the pattern by \\unit[$\\pm$ 2]{mm} when a static current was applied to the coils. For oscillating currents, however, this amplitude is reduced due to the mu-metal shield around the in-vacuum setup of the interferometer. We thus expect a peak phase shift of the interferogram below $2\\pi$. The resulting image with again $\\sim 5\\times 10^5$ events is illustrated in Figure \\ref{fig2}b and clearly shows a washed out pattern where the contrast decreased to almost zero.\n\nAs our detector provides a list of coordinates and impact times of all consecutive incidents, we correlate each electron with its subsequent temporal neighbor by determining their spatial distance in the $x$- and $y$-direction ($\\Delta x$, $\\Delta y$). The relative commonness of these distances are plotted in Figure \\ref{fig2}c. As it can be seen, the periodic pattern is revealed, a distinct evidence for matter-wave interferometry even in the presence of strong dephasing. For better visualization, Figure \\ref{fig2}c includes the integrated line profile (corrected by the finite pattern width), which clearly shows a periodic modulation on the length scale of the original fringe pattern.\n\n\n\\section{Theory}\n\nTo gain more information about the disturbed interference pattern we apply a full correlation analysis on the data by including correlations not only between temporally adjacent particles, but also between all possible particle pairs. For the theoretical description of particle correlations in a periodically disturbed interference pattern, the probability distribution $f(y,t)$ of particle impacts at the detector is of major importance:\n\n\\begin{eqnarray}\nf(y,t) & = & f_0\\left(1+K\\cos \\left(ky + \\varphi\\left(t\\right)\\right)\\right) \\label{eq1} \\\\\n\\quad \\mbox{with}\\quad \\varphi(t) & = & \\varphi_0\\cos(\\omega t) \\nonumber\n\\end{eqnarray}\n\n\\noindent At each time $t$, the distribution function is normalized via $f_0$ and characterized by its spatial periodicity $\\lambda=2\\pi\/k$, contrast $K$, and phase $\\varphi$. 
The time dependence of $\\varphi$ causes a periodic phase shift of the probability distribution, with $\\varphi_0$ being the peak phase deviation and $\\omega=2\\pi\\nu$ the frequency of the phase disturbance. The corresponding interference pattern observed at the detector is given by the time averaged probability distribution\n\\begin{equation}\n\\lim_{T\\rightarrow\\infty} \\frac{1}{T}\\int_0^T f(y,t) dt \\,=\\, f_0\\left(1+K\\,J_0(\\varphi_0)\\,\\cos(ky)\\right),\\label{eq2}\n\\end{equation}\nwith $J_0$ being the zero-order Bessel function of the first kind. Depending on the strength of the phase deviation, the visible contrast is thus reduced by a factor of $J_0(\\varphi_0)$, leading to vanishing interference fringes for large $\\varphi_0$.\n\n\\begin{figure*}\n\\includegraphics[width=1.0\\textwidth]{figure3} \\caption{(Color online) Two-dimensional correlation functions $g^{(2)}(u,\\tau)$ of disturbed interference patterns as a function of the spatial ($u$) and temporal ($\\tau$) distance between two detection events. The axes are normalized to the spatial periodicity ($\\lambda$) and the perturbation period ($\\nu^{-1}$). The plots visualize the results of Eq. \\ref{eq5} for different peak phase deviations $\\varphi_0$ of (a) $0\\pi$, (b) $1\/3\\pi$, (c) $2\/3\\pi$, and (d) $1\\pi$. The absolute value of $A(\\tau)$ normalized to the contrast $K^2$, is shown below each plot.}\n\\label{fig3}\n\\end{figure*}\n\nIf the detector allows for spatial and temporal information on the particle arrival, a correlation analysis can be used to retain information of the undisturbed interference pattern and the phase disturbance. Starting from eq. \\ref{eq1}, the second order correlation function reads\n\\begin{equation}\ng^{(2)}(u,\\tau) = \\frac{\\ll f(y+u,t+\\tau) f(y,t)\\gg_{y,t}}{\\ll f(y+u,t+\\tau) \\gg_{y,t} \\ll f(y,t) \\gg_{y,t}},\n\\label{eq3}\n\\end{equation}\nwhere $\\ll \\cdot \\gg_{y,t}$ denotes the average over position and time\n\\begin{equation}\n\\ll f(y,t) \\gg_{y,t} = \\lim_{Y,T\\rightarrow\\infty} \\frac{1}{TY}\\int_0^T \\int_{-Y\/2}^{Y\/2} f(y,t) \\,dy\\, dt\\label{eq4}\n\\end{equation}\nIn the limit of large acquisition times $T\\gg 2\\pi\/\\omega$ and lengths $Y\\gg \\lambda$ the integrals can be solved and the correlation function becomes\n\\begin{equation}\ng^{(2)}(u,\\tau) = 1 \\,+\\, A(\\tau)\\cos(ku) \\label{eq5}\n\\end{equation}\nwith\n\\begin{eqnarray}\nA(\\tau) & = & \\frac{1}{2}K^2\\sum_{n=-\\infty}^{\\infty} J_n(\\varphi_0)^2 \\, \\mbox{e}^{-in\\omega\\tau}\\label{eq6}\\\\\n& = & \\frac{1}{2}K^2 J_0(\\varphi_0)^2 \\, + \\, K^2\\sum_{n=1}^{\\infty}J_n(\\varphi_0)^2\\cos(n\\omega\\tau)\\label{eq7}\n\\end{eqnarray}\n\nCentered around 1, the second order correlation function of the disturbed interference pattern thus shows a periodic modulation in the spatial distance $u$ between two detection events with the same periodicity as the undisturbed interference pattern. The amplitude of this modulation, however, depends on the correlation time $\\tau$. In the frequency domain $A(\\tau)$ can be decomposed to a superposition of sidebands at discrete frequencies $n\\omega$ ($n\\in \\mathbb{Z}$) with strengths given by the peak phase deviation and the Bessel functions $J_n(\\varphi_0)$.\n\nFigure \\ref{fig3} shows the two-dimensional correlation function and the amplitude $|A(\\tau)|$ for different peak phase deviations. As illustrated in Figure \\ref{fig3}a, without modulation (undisturbed interference pattern) only $J_0$ remains non-zero and $A=0.5\\,K^2$. 
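The Bessel series in eq. \\ref{eq7} is easy to evaluate numerically; a minimal sketch (Python with numpy and scipy assumed, with the series truncated at a finite order chosen by us) reproducing the behaviour of $|A(\\tau)|$ shown in the lower panels of Figure \\ref{fig3} is:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import jv\n\ndef A_of_tau(tau, K, phi0, nu, n_max=40):\n    # Eq. (eq7) of the text, truncated at n_max terms\n    omega = 2.0 * np.pi * nu\n    n = np.arange(1, n_max + 1)\n    series = np.sum(jv(n, phi0)**2 * np.cos(np.outer(tau, n) * omega), axis=1)\n    return 0.5 * K**2 * jv(0, phi0)**2 + K**2 * series\n\ntau = np.linspace(0.0, 2.0, 400)      # in units of 1/nu for nu = 1\nfor phi0 in (0.0, np.pi / 3, 2 * np.pi / 3, np.pi):\n    amp = np.abs(A_of_tau(tau, K=1.0, phi0=phi0, nu=1.0))\n    print(round(phi0, 3), round(amp.max(), 3))   # full contrast K^2/2 at tau = m/nu\n\\end{verbatim}\n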
The correlation function thus becomes independent of $\\tau$ and resembles the original interference pattern (Figure \\ref{fig2}a). For small but non-zero $\\varphi_0$, the first order Bessel function $J_1$ comes into play causing a sinusoidal modulation of $A$ at frequency $\\omega$ (Figure \\ref{fig3}b). As $\\varphi_0$ increases further, more and more higher-order Bessel functions have to be taken into account, adding higher harmonic modulations to $A$. However, maximal spatial contrast $0.5 K^2$ is only recovered at multiples of $\\tau=1\/\\nu$, where all higher harmonics constructively interfere (Figure \\ref{fig3}c and d). Independent of the peak phase deviation, the correlation analysis thus reveals the frequency of the phase disturbance and the spatial frequency of the underlying interference pattern.\n\nBefore the correlation theory can be applied on our measurements, the second order correlation function needs to be extracted from the detector signal. This signal is given by the position $y_i$ and time $t_i$ of all particle events $i=1\\ldots N$. Following eq. \\ref{eq3} the correlation function is basically determined by the number $N_{u,\\tau}$ of particle pairs $(i,j)$ with $\\left(t_i-t_j\\right) \\in \\left[\\tau,\\tau+\\Delta \\tau\\right]$ and $\\left(y_i-y_j\\right) \\in \\left[u,u+\\Delta u\\right]$\n\\begin{equation}\ng^{(2)}(u,\\tau) = \\frac{TY}{N^2 \\Delta \\tau \\Delta u}\\,\\frac{N_{u,\\tau}}{\\left(1-\\frac{\\tau}{T}\\right)\\left(1-\\frac{\\left|u\\right|}{Y}\\right)}\n\\label{eq8}\n\\end{equation}\nwith normalization factor $TY\/N^2$ and discretisation step size $\\Delta \\tau$ and $\\Delta u$. The additional factor $\\left[(1-\\tau T^{-1})(1-\\left|u\\right|Y^{-1})\\right]^{-1}$ corrects $N_{u,\\tau}$ for the finite acquisition time $T$ and length $Y$ because large time and position differences will be less likely to be observed.\n\n\\begin{figure*}\n\\includegraphics[width=1.0\\textwidth]{figure4} \\caption{(Color online) a) Experimentally determined two dimensional correlation function $g^{(2)}(u,\\tau)$, extracted according to Eq. \\ref{eq8} from the detector signal of the disturbed interference pattern in Figure \\ref{fig2} (b). (b) Fit of the pattern in (a) with the theoretical expression in Eqs. \\ref{eq5}-\\ref{eq7}. (c) The residual of theoretical and experimental data. The remaining periodic structure might be related to diffraction at the biprism. (d) Reconstruction of the original undisturbed interference pattern from the strongly disturbed data of \\ref{fig2}b using the extracted fitting parameters.}\n\\label{fig4}\n\\end{figure*}\n\n\n\\section{Results}\n\nTo apply the theory to the outcome of our electron biprism experiment we extracted the $g^{(2)}(u,\\tau)$ function according to Eq. \\ref{eq8} from the raw data corresponding to Figure \\ref{fig2}b. The result is shown in Figure \\ref{fig4}a. As described in Sec. III, the periodicity of this pattern in $u$ and $\\tau$ is an apparent sign of matter-wave interference that can be observed even in experimental conditions with significant periodic dephasing perturbations.\n\nUsing the theoretical expression of eq. \\ref{eq5}-\\ref{eq7}, we fitted the data in Figure \\ref{fig4}a. The fit and the remaining residuum are shown in Figure \\ref{fig4}b and \\ref{fig4}c, respectively. They reveal all parameters describing the interferogram and the perturbation. 
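A possible implementation of the pair-counting estimator of Eq. \\ref{eq8} for such an event list is sketched below (Python with numpy assumed; \\texttt{y} and \\texttt{t} are the recorded, time-ordered detector coordinates and time stamps, and the binning parameters are arbitrary choices of ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef g2_estimate(y, t, u_max, tau_max, du, dtau):\n    # pair-counting estimator of Eq. (8) with its finite-size correction\n    T, Y, N = t[-1] - t[0], y.max() - y.min(), len(y)\n    u_bins = np.arange(-u_max, u_max + du, du)\n    tau_bins = np.arange(0.0, tau_max + dtau, dtau)\n    H = np.zeros((len(tau_bins) - 1, len(u_bins) - 1))\n    for i in range(N - 1):\n        j = np.searchsorted(t, t[i] + tau_max)   # t is assumed time-ordered\n        if j <= i + 1:\n            continue\n        h, _, _ = np.histogram2d(t[i + 1:j] - t[i], y[i + 1:j] - y[i],\n                                 bins=(tau_bins, u_bins))\n        H += h\n    tau_c = 0.5 * (tau_bins[1:] + tau_bins[:-1])\n    u_c = 0.5 * (u_bins[1:] + u_bins[:-1])\n    corr = np.outer(1.0 - tau_c / T, 1.0 - np.abs(u_c) / Y)\n    return tau_c, u_c, H * T * Y / (N**2 * du * dtau) / corr\n\n# a least-squares fit of Eqs. (5)-(7) to the returned array then yields\n# the contrast K, the fringe period 2*pi/k, phi_0 and the perturbation frequency nu\n\\end{verbatim}\n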
The fitted parameters are: \\mbox{$\\nu=\\;$\\unit[49.996 ($\\pm ~0.018$)]{Hz}} for the dephasing frequency, \\mbox{\\unit[$K=34.5$ ($\\pm ~0.2$)]{\\%}} for the interference contrast, \\mbox{\\unit[$\\lambda=2.089$ ($\\pm ~0.001$)]{mm}} for the spatial period of the interference pattern, and \\mbox{\\unit[$\\varphi_0=0.802$ ($\\pm ~0.004$)]{$\\pi$}} for the peak phase deviation. These values are in excellent agreement with the properties of the unperturbed interference pattern (\\unit[$K\\approx 35$]{\\%}, \\unit[$\\lambda\\approx 2$]{mm}) and the applied disturbance frequency (\\unit[$\\nu=50$]{Hz}). Only the peak phase deviation shows a discrepancy to its dc value of $\\varphi_0=2\\pi$, which is due to the mu-metal shield between the interferometer and the dephasing coils. As expected, this shield damps the amplitude of external field oscillations.\n\nThe lack of any sub-structure in the residual plot (Figure \\ref{fig4}c) shows that our correlation model is well suited to describe the experimental data. The residuum shows only a weak remaining structure on the length scale of $\\sim$ \\unit[7]{mm}, which is probably due to diffraction on both edges of the biprism.\n\nWe demonstrated that it is possible to extract the unknown frequency and amplitude of periodic, single frequency dephasing oscillations from the perturbed interference pattern even if no interference fringes are visible in the spatially integrated image (Figure \\ref{fig2}b). With the obtained parameters and the spatial and temporal coordinates of the events, we are able to reverse the perturbation. This can be done by shifting each event back to its original, undisturbed coordinate according to the determined information,\n\\begin{equation}\ny_{new} = y \\, - \\, \\frac{\\lambda}{2\\pi}\\varphi_0 \\cos(\\omega t + \\phi) ~.\n\\label{ynew}\n\\end{equation}\nThe only parameter we do not obtain from the fit is the starting phase of the perturbation $\\phi$. We extract it by varying the starting phase between 0 and $2\\pi$ until the maximal contrast of the resulting interference pattern is achieved. Figure \\ref{fig4}d shows the reconstructed interference pattern. It agrees well with the experimentally undisturbed pattern of Figure \\ref{fig2}a. Even small structures like the local phase shifts by charged dust particles on the biprism can be reconstructed.\n\n\\newpage\n\n\\section{Conclusion}\n\nSensitive and accurate matter-wave interference experiments are susceptible to dephasing perturbations that wash out the interference pattern and decrease the contrast \\cite{Stibor2005}. The dephasing can be due to electromagnetic oscillations, electrical network oscillations, temperature drifts or mechanical vibrations. Usually major efforts to shield or damp these setups are required to suppress these perturbations.\n\nWe have presented a method to effectively decrease dephasing effects by including temporal and spatial correlations between the detected particles in the analysis of an interference signal. The full correlation analysis reveals the fringe pattern even in the presence of oscillating perturbations, while conventional methods that rely only on spatial signal accumulation are not able to verify matter-wave interference. The analysis can be applied whenever the frequency of the perturbing signal is significantly lower than the average incident rate on the detector. This condition is well met for most interference experiments since signal rates of several kHz and perturbations below a few hundred Hz are common. 
Besides information on the perturbation, our method can be used to retain the undisturbed interference pattern.\n\nOur method has potential applications in any kind of charged and neutral particle interferometer where a detector with a high spatial as well as temporal resolution is used. Nowadays, such detectors are available for electrons \\cite{Jagutzki2002}, ions \\cite{Jagutzki2002}, neutrons \\cite{Siegmund2007} and neutral atoms \\cite{Schellekens2005}. The technique is of general importance for the analysis of dephasing perturbations in matter-wave interferometry, and it allows to optimize shielding and damping installations. It decreases the requirements for vibrational stabilization, temperature stabilization and filtering of low-frequency perturbations from electronic instruments.\n\n\n\\section{Acknowledgements}\n\nThis work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Emmy Noether program STI 615\/1-1.A.R. acknowledges support from the Evangelisches Studienwerk e.V. Villigst, W.T.C. and I.S.H. from the Academia Sinica project AS-102-TP-A01, A.Ste. acknowledges the support from the Swiss National Science Foundation and A.G. from the DFG SFB TRR21. The authors thank P. Federsel, H. Prochel, and F. Hasselbach for helpful discussions.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe simplest concept taught to students about protein structure is\nthat hydrophobic monomers are mostly inside water-soluble globules,\nwhile hydrophilic monomers are mostly on the surface. This beautiful\nidea was around for over half a century \\cite{Bresler1, Bresler2},\neveryone agrees that it represents a cornerstone for our\nunderstanding of proteins \\cite{Ptitsyn_Finkelstein} - and yet it is\nsomehow neglected in the most sophisticated theories of\nheteropolymers with quenched sequences. Here, we have in mind the\ntrain of works which started from pioneering contributions by\nBryngelson and Wolynes \\cite{BW} and by Shakhnovich and Gutin\n\\cite{SG_IndependentInt}. The insight of the former authors\n\\cite{BW} was to recognize the deep connection between protein field\nand that of spin glasses and to apply the Random Energy Model\ndeveloped by Derrida \\cite{REM_Derrida}; the contribution of the\nlatter \\cite{SG_IndependentInt} was to actually derive the REM\napproximation for a consistent microscopic model of a heteropolymer\nwith independent random interactions. By now, it is understood that\nREM is a well controlled mean field approximation for the large\ncompact heteropolymer \\cite{Sfatos}. The important part of\nheteropolymer theory was also the idea of sequence design, which was\nused both to better model proteins and to test heteropolymer\nproperties in general \\cite{Shakhnovich_review_2006}.\n\nWhat is important to emphasize is that heteropolymer freezing and\nsequence design theories operate within the so-called volume\napproximation, neglecting surface terms in energy. Our goal in the\npresent work is to investigate, for the simplest tractable model,\nthe interplay heteropolymer freezing and sequence design with\npreferential solvation of some monomer species on the surface of\nthe globule. In fact, even for random sequences preferential\nsolvation was not included in REM-based heteropolymer theory until\nvery recently \\cite{Solvation_random}. 
Our work is ideologically a sequel to the paper \\cite{Solvation_random}, and we will use the ideas of that work.\n\nIn the recent series of works \\cite{Khokhlov_PRL, Khokhlov_COSSMS}, sequence design was discussed (under a different name of ``coloring'') in a slightly different perspective, with an eye on chemically preparing protein-like copolymers. The solvation effect was given a very prominent role in these works. One of the goals of our work is to make a closer link between various implementations of the sequence design paradigm.\n\nThe paper is organized as follows. First we will introduce the solvation model (section \\ref{sec:solvation} and Fig. \\ref{fig:model_cartoon}). Then we will talk about the sequence design technique and how it is affected by the solvation (section \\ref{sec:design}). The surface monomer composition distribution is obtained in section \\ref{sec:design_by_solvation}. Our major results are summarized in the phase diagram of the system, sketched in Fig. \\ref{fig:phase_diagram_2}. Finally, we discuss in section \\ref{sec:sequence_entropy} the availability of sequences as a function of their quality, characterized by the energy gap between their ground state and the majority of other states.\n\n\\section{The model}\n\n\\subsection{Energy: bulk and surface terms}\\label{sec:solvation}\n\nIn our model, each monomer is assigned a quenched random variable $\\sigma$, which represents its monomer type. For the random sequence, we assume that $\\sigma$ for each monomer is drawn from some probability distribution $p\\left(\\sigma\\right)$. For simplicity, we restrict $p\\left(\\sigma\\right)$ to have zero average $\\int\\sigma p\\left(\\sigma\\right)d\\sigma=0$ and unit variance $\\int \\sigma^2 p\\left(\\sigma\\right)d\\sigma=1$. There are 20 possible values of $\\sigma$ for natural proteins since the number of amino acids is 20. Theoretically it is convenient to consider a continuous distribution of $\\sigma$ or a discrete distribution of just 2 monomer types.\n\nIn our model, the energy of the system consists of contributions from direct contacts between monomers and from contacts between monomers and the solvent. The former, contact energy, has a ``homopolymeric'' strong average attraction part $\\overline{B}$ independent of monomer type, and a ``heteropolymeric'' contribution $- \\delta \\! B \\sigma_i \\sigma_j$ with amplitude $\\delta \\! B$. We assume that $\\overline{B}$ is sufficiently large, such that the globule is quite dense, and the contacts with the solvent take place only on the surface of the globule. We mostly look at the case $\\delta \\! B>0$, such that similar monomers attract each other. We assume that each contact with the solvent provides energy $-\\Gamma \\sigma_i$. Thus, since $\\Gamma > 0$, monomers with $\\sigma>0$ are hydrophilic, while those with $\\sigma<0$ are hydrophobic. The Hamiltonian of our model depends on the sequence, represented by ${\\rm seq} = \\{ \\sigma_i \\}$, and on the conformation, specified by the positions of all monomers ${\\rm conf} = \\{ \\mathbf{r}_i \\}$. The Hamiltonian reads\n\\begin{equation} H \\left({\\rm seq},{\\rm conf}\\right) = \\sum_{i<j}^{N} \\left( \\overline{B} - \\delta \\! B \\sigma_i \\sigma_j \\right) \\Delta \\left( \\mathbf{r}_i - \\mathbf{r}_j \\right) \\ - \\ \\Gamma \\sum_{j \\in G}^{K} \\sigma_j \\ , \\label{eq:Hamiltonian} \\end{equation}\nwhere $\\Delta \\left( \\mathbf{r}_i - \\mathbf{r}_j \\right)$ is equal to unity when monomers $i$ and $j$ are in contact and to zero otherwise, and $G$ is the set of monomers exposed to the solvent on the surface of the globule. The number $K$ of these surface monomers scales as the surface area of the dense globule and is small compared to the total number of monomers $N$, so throughout this work we keep only the leading surface corrections of order ${\\cal O}(K\/N)$. Since $\\Gamma>0$, the surface term favors the exposure of hydrophilic monomers, and we expect the surface of the globule to be enriched in monomers with $\\sigma>0$. The way to characterize this quantitatively is to address the statistics of $\\sigma$ values for surface exposed monomers. Namely, we will be interested in the distribution $f\\left( \\sigma \\right)$ of surface monomers. 
We expect this distribution to be different from the bare distribution of all monomers $p \\left( \\sigma \\right)$. Qualitatively, this is illustrated in the inset of Fig. \\ref{fig:model_cartoon}.\n\nOf course, the effect of surface exposure to the solvent depends on the sequence. In general, the hydrophilic effect adds frustration to the system. Indeed, placing a certain monomer on the surface necessitates placing its sequence neighbors close to the surface, while their identity, or their $\\sigma$-values, might imply an energetic preference for the interior region of the globule. To address this delicate sequence dependence, we will look at designed sequences.\n\n\\subsection{Sequence design}\\label{sec:design}\n\nA random sequence can be generated by a suitable Poisson process, i.e., by the probability distribution\n\\begin{equation} P_{{\\rm seq}}^{ \\left(0 \\right)} = \\prod_{i=1}^N p\\left(\\sigma_i\\right) \\ . \\label{eq:propensities} \\end{equation}\nBy sequence design, we want to bias the sequence probabilities in a controlled fashion. This can be done in the following way.\n\nThe sequence design procedure starts from the choice of the target conformation, which we will denote ${\\star}$. In our consideration here, ${\\star}$ might be any compact conformation; in other words, we ignore the difference of designability between possible target conformations \\cite{Wingreen} - simply because designability and surface exposure are two independent effects and we do not want to further complicate our work by accounting for designability. There is no doubt that designability along with surface effects must be incorporated into the complete theory. Given a target conformation ${\\star}$, we will consider a statistical Gibbs ensemble in which the conformation $\\{ \\mathbf{r}_i \\} = {\\star}$ is quenched, while the sequence $\\{ \\sigma_i \\}$ is annealed and comes to thermodynamic equilibrium at the design temperature $T_{d}$, which is not necessarily equal to the real temperature $T$. More specifically, we use the canonical sequence design scheme \\cite{BJ}, in the sense that it is generated by the canonical ensemble of annealed sequences. This results in the following probability distribution of sequences:\n\\begin{equation}\nP_{{\\rm seq}}^{\\star}=P_{{\\rm seq}}^{ \\left(0 \\right)} \\frac{ \\exp \\left[-H^d \\left({ \\rm seq},{\\star} \\right)\/T_d \\right]}{ \\sum_{{\\rm seq}^{\\prime}}P_{{ \\rm seq}^{\\prime}}^{ \\left(0 \\right)} \\exp \\left[-H^d \\left({\\rm seq}^{\\prime},{\\star} \\right) \/T_d \\right]} \\ , \\label{eq:probability_sequences}\n\\end{equation}\nwhere the denominator ensures normalization. Of course, this is just the scheme; the real features of the ensemble of designed sequences are controlled by the design Hamiltonian, $H^d \\left({\\rm seq},{\\star} \\right)$. It might be the same Hamiltonian as (\\ref{eq:Hamiltonian}), but this is not at all necessary. Moreover, it is useful to explore the more general situation in which $H^{d} \\neq H$. We will use a design Hamiltonian of the same functional form as (\\ref{eq:Hamiltonian}), but with the different parameters $\\delta \\! B^d$ and $\\Gamma^d$ ($\\overline{B^d}$, although formally included for symmetry, does not play any role in sequence design, because this term in energy is sequence-independent):\n\\begin{equation} H^d \\left({\\rm seq},{\\star}\\right) = \\sum_{i<j}^{N} \\left( \\overline{B^d} - \\delta \\! B^d \\sigma_i \\sigma_j \\right) \\Delta \\left( \\mathbf{r}_i^{\\star} - \\mathbf{r}_j^{\\star} \\right) \\ - \\ \\Gamma^d \\sum_{j \\in G^{\\star}}^{K} \\sigma_j \\ , \\label{eq:design_Hamiltonian} \\end{equation}\nwhere $\\mathbf{r}_i^{\\star}$ are the monomer positions in the target conformation and $G^{\\star}$ is the set of the $K$ monomers exposed to the solvent on the surface of the target conformation ${\\star}$.\n\n\\subsection{Design by solvation}\\label{sec:design_by_solvation}\n\nIt is instructive to start with the simplified design scheme in which $\\delta \\! B^d = 0$, so that only the solvation term of the design Hamiltonian (\\ref{eq:design_Hamiltonian}) biases the choice of sequences. In this case the probability distribution of designed sequences (\\ref{eq:probability_sequences}) factorizes over monomers: every monomer in the bulk of the target conformation retains the bare distribution $p \\left( \\sigma \\right)$, while each of the $K$ monomers of the surface set $G^{\\star}$ is distributed according to\n\\begin{equation} p^{\\star} \\left( \\sigma \\right) = \\frac{ p \\left( \\sigma \\right) \\exp \\left( \\frac{\\Gamma^d }{T_d} \\sigma \\right)}{\\int p \\left( \\sigma^{\\prime} \\right) \\exp \\left( \\frac{\\Gamma^d }{T_d} \\sigma^{\\prime} \\right) d \\sigma^{\\prime}} \\ . \\label{eq:pstar} \\end{equation}\nAccordingly, the overall monomer composition of the designed chain reads\n\\begin{equation} p_{\\rm tot} \\left( \\sigma \\right) = p \\left( \\sigma \\right) + \\frac{K}{N} \\left[ p^{\\star} \\left( \\sigma \\right) - p \\left( \\sigma \\right) \\right] \\ . \\label{eq:p_total1} \\end{equation}\nWhen $\\Gamma^d>0$, for hydrophilic monomers with $\\sigma>0$ we have $p^{\\star} \\left( \\sigma \\right) > p \\left( \\sigma \\right)$. 
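\nFor a discrete monomer alphabet this design-by-solvation scheme is straightforward to simulate. A minimal Python sketch (an illustration only; it assumes numpy, the bimodal alphabet $\\sigma=\\pm 1$ considered below, and arbitrarily chosen values of $N$, $K$ and $\\Gamma^d\/T_d$) draws sequences from the distributions (\\ref{eq:propensities}) and (\\ref{eq:pstar}):\n\\begin{verbatim}\n# Minimal sketch: design by solvation for a two-letter alphabet sigma = +\/-1.\n# Monomers in the surface set G* of the target conformation are biased by\n# exp(Gamma_d*sigma\/T_d); all other monomers keep the bare distribution.\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN, K = 1000, 100           # chain length, number of surface monomers\nx = 1.0                    # x = Gamma_d \/ T_d (illustrative)\n\ndef designed_sequence(surface_indices):\n    sigma = rng.choice([-1.0, 1.0], size=N)         # bare p(sigma)\n    p_plus = np.exp(x) \/ (np.exp(x) + np.exp(-x))   # biased p*(+1)\n    surf = rng.random(len(surface_indices)) < p_plus\n    sigma[surface_indices] = np.where(surf, 1.0, -1.0)\n    return sigma\n\nseq = designed_sequence(np.arange(K))  # here G* is taken as the first K monomers\nprint(seq[:K].mean())    # close to tanh(Gamma_d\/T_d)\nprint(seq[K:].mean())    # close to 0: the bulk composition is unchanged\n\\end{verbatim}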
Compared with the bare monomer distribution\n$p\\left(\\sigma\\right)$, hydrophilic monomers have larger\nprobability to appear on the surface of target conformation.\n\nThe simplified $\\delta \\! B^d=0$ design scheme is reminiscent of\nthe method used in \\cite{Khokhlov_PRL, Khokhlov_COSSMS}, in which all the surface\nmonomers of target conformation ($G^{\\star}$ in our notation) are\nmade hydrophilic while all the monomers inside the globule are\nhydrophobic. In our more general consideration, it is just more\nprobable but not necessary for the surface exposed monomers to\nbecome hydrophilic during the design. The model of the works\n\\cite{Solvation_random} corresponds to the $T_d \\to 0$ limit of our theory.\n\n\\subsection{Surface monomer distribution in the ground state}\n\\label{sec:distribution_derivation}\n\nLet us continue examination of the simplified design scheme, with\n$\\delta \\! B^d =0$, when only surface energy biases the choice of\nsequences (design by solvation). Our goal now is to find the\nsurface energy correction terms of the ground state energy and, as\nthe major step in this direction, we need to consider the surface\nmonomer distribution $f(\\sigma)$ in the ground state. One should\nrealize that in the ground state, the set of surface exposed\nmonomers may or may not be similar to the set of monomers exposed\nto the surface during the design; in other words, $f(\\sigma)$\nmight be similar to $p^{\\star} (\\sigma)$, or might be quite\ndifferent from it. Therefore, there are two contributions to the\nsurface energy, one due to the monomers exposed to the surface,\nand the other due to the fact that selection of surface monomers\naffects the monomer composition left inside the globule. To\nexpress this quantitatively, we write for the arbitrary state the\nequation similar to (\\ref{eq:p_total1})\n\\begin{equation} p_{\\rm tot} (\\sigma) = p_{\\rm in}(\\sigma) + \\frac{K}{N} \\left[\nf (\\sigma) - p_{\\rm in} (\\sigma) \\right] \\ , \\label{eq:p_total2}\n\\end{equation}\nwhere $p_{\\rm in} (\\sigma)$ is the distribution of monomers left\ninside. Comparing equation (\\ref{eq:p_total1}) and\n(\\ref{eq:p_total2}), we find\n\\begin{equation} p_{\\rm in} (\\sigma) \\simeq p(\\sigma) + \\frac{K}{N} \\left[\np^{\\star} (\\sigma) - f(\\sigma) \\right] \\ . \\end{equation}\nAs everywhere, we neglect here the terms ${\\cal O} \\left((K\/N)^2\n\\right)$. Thus, we directly see already here how the deviation of\nthe state from the target state comes into play.\n\nTo compute $f(\\sigma)$ for the ground state, we adapt for designed\nsequences the procedure which was developed in the work\n\\cite{Solvation_random} for random sequence solvation. To make our work\nself-contained, we briefly outline major steps.\n\nWe begin by constructing a separate REM, called sub-REM, for each\npossible choice of surface monomers, $G$. Indeed, for each $G$\nthere are still many conformations available. The number of such\nconformations is naturally written in the form $M_G = e^{Ns - K\n\\omega_G}$, where $s$ is conformational entropy per monomer in\nvolume approximation, and $\\omega_G$ is entropy loss due to\nconfinement of some monomers on the surface. Although $M_G$ is\nmuch smaller than the total number of conformations, $M=e^{sN}$,\nbut the entropy loss caused by fixation of $G$ monomers on the\nsurface is only a surface effect ${\\cal O}(K)$. 
Following\n\\cite{Solvation_random}, we adopt a bold approximation that $\\omega_G =\n\\overline{\\omega}$ is independent on $G$; in this approximation,\ncounting all states shows that\n${\\overline \\omega} = \\ln \\left( N e \/ K \\right)$.\n\nFor each sub-REM, energies of all $M_G$ states are random in the\nsense that they depend on random sequence realization, and it is\nreasonable to assume \\cite{Solvation_random} that these energies are\nindependent Gaussian variables, because each energy, according to\nformula (\\ref{eq:Hamiltonian}), has of order $N$ mutually\nstatistically independent bulk contributions and of order $K$\nindependent surface contributions. To write down the resulting\nGaussian distribution, we should determine corresponding mean and\nvariance. The mean is found by averaging the bulk terms $\\left(\n\\overline{B} - \\delta \\! B \\sigma_i \\sigma_j \\right)$ over the\ndistribution $p_{\\rm in} (\\sigma)$ plus averaging the surface\nterms $- \\Gamma \\sigma_i$ over the distribution $f(\\sigma)$, and\nthe variance is similarly found by averaging the second moment.\nThis results finally in the following Gaussian distribution of\nrandom energy:\n\\begin{equation}\nw_G \\left( E \\right) \\propto \\exp \\left[ -\\frac{ \\left[ E - \\left(\nN \\overline B - K \\Gamma \\gamma_G \\right) \\right] ^2}{2 N \\delta\n\\! B^2 + 2 K \\delta \\! B^2 \\beta_G } \\right] \\ ,\n\\label{eq:Gaussian_distribution_of_levels}\n\\end{equation}\nwhere\n\\begin{eqnarray} \\gamma_G & = & \\int \\sigma f \\left(\\sigma \\right) d \\sigma \\ ,\n\\nonumber \\\\ \\beta_G & =& 2 \\int \\sigma^2 \\left[ p^{\\star} \\left(\n\\sigma \\right) - f \\left( \\sigma \\right) \\right] d \\sigma \\ .\n\\label{eq:beta_definition} \\end{eqnarray}\nNotice that dependence on the surface monomer group $G$ is only through\nthe surface monomer distribution, and that the dependence on design is\ndue to the $p^{\\star}(\\sigma)$.\n\nEvery sub-REM has a certain ground state energy $E_g (G) = E_g\\{f\n(\\sigma) \\}$, which is just the lowest of $M_G$ random energies\ndrawn independently from the distribution\n(\\ref{eq:Gaussian_distribution_of_levels}). Energy $E_g\\{f\n(\\sigma) \\}$ is still a random variable, its probability\ndistribution can be found from the so-called extreme value\nstatistics \\cite{Bouchaud_Mezard} (see also Appendix\n\\ref{sec:REM_distrib}):\n\\begin{equation}\n{\\cal W}_G \\left( E \\right) = \\frac{1}{T_{\\rm fr}} \\exp \\left[\n\\frac{\\delta \\! E_g}{T_{\\rm fr}} - \\exp \\left[ \\frac{\\delta \\! E_g\n}{ T_{\\rm fr} } \\right] \\right] \\ ,\n\\label{eq:ground_state_distribution_in_the_group}\n\\end{equation}\nwhere $T_{\\rm fr} = \\delta \\! B \/ \\sqrt{2 s}$ and $\\delta \\! E_g =\nE - E_g^{\\rm typ}\\{f (\\sigma) \\}$ is the deviation of ground state\nenergy from its most probable (typical) value, which includes both\nvolume and surface contributions:\n\\begin{eqnarray} E_g^{\\rm typ}\\{f (\\sigma) \\} & = & N \\left( \\overline B - 2 s\nT_{\\rm fr} \\right) \\nonumber \\\\\n& + & K \\left( \\overline \\omega T_{\\rm fr} - \\Gamma \\gamma_G + s\nT_{\\rm fr} \\beta_G \\right) \\ .\n\\label{eq:typical_lowest_energy_in_the_group} \\end{eqnarray}\nNotice that the only dependence of probability distribution ${\\cal W}_G$\non the surface monomers $G$ is hidden in $\\gamma_G$ and $\\beta_G$\ninside the most probable energy $E_g^{\\rm typ}\\{f (\\sigma) \\}$,\nand the dependence on design is also there inside $\\beta_G$ (see\nformula (\\ref{eq:beta_definition}). 
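\nThe double exponential form of Eq. (\\ref{eq:ground_state_distribution_in_the_group}) is just the standard extreme value statistics of many independent Gaussian energies, and it is easy to observe numerically. The following minimal Python sketch (an illustration only; it assumes numpy and uses a modest number of levels in place of $e^{Ns - K{\\overline \\omega}}$) repeatedly draws the lowest of $M$ independent Gaussian energies and compares the outcome with the typical value and with the fluctuation scale which plays the role of $T_{\\rm fr}$:\n\\begin{verbatim}\n# Minimal sketch: extreme value statistics of M independent Gaussian energies.\nimport numpy as np\n\nrng = np.random.default_rng(1)\nM = 1_000_000       # stands in for the number of states in one sub-REM\nsigma_E = 1.0       # standard deviation of a single energy level\n\nminima = np.array([rng.normal(0.0, sigma_E, M).min() for _ in range(100)])\ntypical = -sigma_E * np.sqrt(2.0 * np.log(M))\nt_eff = sigma_E \/ np.sqrt(2.0 * np.log(M))\n\nprint(minima.mean(), typical)   # near the typical value, up to slowly varying corrections\nprint(minima.std(), t_eff)      # fluctuations on the scale t_eff, the analog of T_fr\n\\end{verbatim}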
We also mention that $T_{\\rm\nfr} = \\delta \\! B \/ \\sqrt{2 s}$ appearing here as a parameter of\nthe ground state distribution happens to have its physical meaning\n- it is volume approximated freezing temperature of the random\nsequence polymer \\cite{Solvation_random}.\n\nThe probability to get ground state energy anywhere below its\ntypical most probable value\n(\\ref{eq:typical_lowest_energy_in_the_group}) is exponentially\nsmall. However, we try exponentially many times - namely, we have\nto choose the lowest among $e^{K {\\overline \\omega}} = (Ne \/ K)^K$\nsub-REM ground states. Therefore, we have a good chance to find\nsome particular sub-group $G$ with energy noticeably below typical\nvalue (\\ref{eq:typical_lowest_energy_in_the_group}). Essentially,\nwhat we have to do now is to resort second time to the extreme\nvalue statistics and find the expectation value of the lowest\namong the sub-REM ground states. It is convenient to perform this\noperation in a slightly different, but equivalent form. Namely, we\nnote that the low energy tail of the ground state probability\ndistribution, (\\ref{eq:ground_state_distribution_in_the_group}),\nis exponential, and, therefore, it looks effectively like\nBoltzmann distribution, with $T_{\\rm fr}$ playing the role of\ntemperature.\n\nIt is useful to note here that treating the tail of the distribution\n(\\ref{eq:ground_state_distribution_in_the_group}) as effective\nBoltzmann distribution with temperature $T_{\\rm fr}$ is\nreminiscent and essentially equivalent to the consideration given\nin the book \\cite{Ptitsyn_Finkelstein} and explaining the origin\nof phenomenologically discovered quasi-Boltzmann distribution over\nthe ensemble of evolutionary selected proteins.\n\nReturning to our argument, finding lowest among the $E_g^{\\rm typ}$\nof the sub-REMs is equivalent to minimizing the effective ``free\nenergy'', in which the effective entropy is given by the number of\nways to choose $K$ monomers with distribution $f (\\sigma )$ from\n$N$ monomers with distribution $p_{\\rm tot} ( \\sigma )$:\n\\begin{equation}\ns\\{f (\\sigma) \\}=-\\int p_{\\rm tot} ( \\sigma ) \\left[ \\phi \\ln\n\\phi + \\left( 1 - \\phi \\right) \\ln \\left( 1 - \\phi \\right) \\right]\nd \\sigma \\ ,\n\\end{equation}\nwhere $\\phi( \\sigma ) = K f( \\sigma )\/ N p_{\\rm tot}( \\sigma )$\nhas the meaning of the fraction of monomers with type $\\sigma$\nthat are exposed to the surface. Including this effective\nentropy, we have now the effective ``free energy''\n\\begin{equation}\nE_g^{\\rm typ} \\{ f(\\sigma ) \\} - T_{\\rm fr} N s \\{ f(\\sigma ) \\}\n\\ .\n\\end{equation}\nWe minimize this with respect to $f(\\sigma )$, subject to\nnormalization condition $\\int f(\\sigma ) d \\sigma = 1$ and obtain\n\\begin{equation} f(\\sigma) = \\frac{N}{K} \\frac{p_{\\rm tot} (\\sigma)}{1 + \\Lambda\ne^{\\eta_{\\rm fr}(\\sigma)}} \\ , \\label{eq:general_expression_for_f_simpler}\n\\end{equation}\nwhere $\\eta_{\\rm fr}(\\sigma)=2s\\sigma^2 - \\left(\\Gamma \/ T_{\\rm fr} \\right) \\sigma$,\nand $\\Lambda$ is the Lagrange multiplier which has to be\ndetermined from the normalization condition $\\int f(\\sigma) d\n\\sigma = 1$.\nComparing this with the paper \\cite{Solvation_random}, we see that the only role of design\nin this case is the modification of monomer distribution: instead of bare distribution\n$p(\\sigma)$, we have now the modified one $p_{\\rm tot}(\\sigma)$. 
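\nThe normalization condition which fixes $\\Lambda$ in Eq. (\\ref{eq:general_expression_for_f_simpler}) is easily resolved numerically. A minimal Python sketch (an illustration only; it assumes numpy, a Gaussian bare distribution and arbitrarily chosen values of $s$, $\\delta \\! B$, $\\Gamma$ and $K\/N$, and for brevity it neglects the small ${\\cal O}(K\/N)$ design correction to $p_{\\rm tot}$) reads:\n\\begin{verbatim}\n# Minimal sketch: surface distribution f(sigma), with Lambda fixed by the\n# normalization condition, for a Gaussian bare distribution p(sigma).\nimport numpy as np\n\ns, dB, Gamma, K_over_N = 0.1, 1.0, 0.3, 0.1   # illustrative parameters\nT_fr = dB \/ np.sqrt(2.0 * s)\n\nsig = np.linspace(-6.0, 6.0, 2001)\ndsig = sig[1] - sig[0]\np_tot = np.exp(-0.5 * sig**2) \/ np.sqrt(2.0 * np.pi)\neta = 2.0 * s * sig**2 - (Gamma \/ T_fr) * sig\n\ndef f_of_sigma(lam):\n    return p_tot \/ (K_over_N * (1.0 + lam * np.exp(eta)))\n\n# bisection in log(Lambda): the norm of f decreases monotonically with Lambda\nlo, hi = 1e-6, 1e6\nfor _ in range(200):\n    mid = np.sqrt(lo * hi)\n    if np.sum(f_of_sigma(mid)) * dsig > 1.0:\n        lo = mid\n    else:\n        hi = mid\nf = f_of_sigma(lo)\nprint(np.sum(f) * dsig)        # close to 1 (normalization)\nprint(np.sum(sig * f) * dsig)  # positive: the surface is enriched in sigma > 0\n\\end{verbatim}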
Let us see what are\nthe consequence of this replacement.\n\nLet us concentrate on the regime without the depletion effect, when $\\phi(\\sigma)\\ll 1$\nat all values of $\\sigma$. This means that for any monomer type, only a small fraction\nof it is solvated to the surface region. Under such assumption, $f(\\sigma)$ can be\napproximated as\n\\begin{equation}\nf(\\sigma)\\propto\ne^{-\\eta_{\\rm fr}(\\sigma)}\\left[1+\\frac{K}{N}\\left(c\\exp\\left(\\frac{\\Gamma^d}{T_d}\n\\sigma\\right)-1\\right)\\right]p(\\sigma) \\ ,\n\\label{eq:surface_distribution}\n\\end{equation}\nwhere we dropped for simplicity the $\\sigma$-independent normalization factor.\nTo gain some insight, let us look at $f(\\sigma)$ for a couple of simple examples of bare monomer probability\ndistributions. Since real distribution involves a large number ($20$) of monomer species, we examine\ntwo limits of two monomer species and of continuous Gaussian distribution.\n\n\\subsubsection{Example: bimodal distribution}\nIn the simplest black-and-white model \\cite{Dill} two types of monomers, one hydrophilic and one hydrophobic,\nappear with same probability:\n\\begin{equation}\np\\left(\\sigma\\right)=\\frac{1}{2}\\left[ \\delta \\!\n\\left(\\sigma+1\\right)+\\delta\\left(\\sigma-1\\right)\\right]\\ .\n\\end{equation}\nSimple calculation shows that\n\\begin{eqnarray}\nf\\left(\\sigma\\right) &\\propto& \\delta\\left(\\sigma+1\\right)\ne^{-\\Gamma \/ T_{\\rm fr}}\\left[1-\\frac{K}{N}\n\\tanh\\left(\\frac{\\Gamma^d}{T_d}\\right)\\right]+\\nonumber\\\\\n&\\hphantom{\\propto}&\\delta\\left(\\sigma-1\\right)e^{ \\Gamma \/ T_{\\rm fr}}\n\\left[1+\\frac{K}{N}\\tanh\\left(\\frac{\\Gamma^d}{T_d}\\right)\\right]\\ .\n\\label{eq:surface_bimodal}\n\\end{eqnarray}\nWe see that there are two effects bringing hydrophilic monomers to the surface, that is,\nincreasing $f(+1)$ on the expense of decreasing $f(-1)$. First effect is due to $\\Gamma$ and\nis measured by the ratio $\\Gamma \/ T_{\\rm fr}$. This effect is present in random heteropolymer,\nhas nothing to do with design, and is simply energetic: since it is more favorable for the $\\sigma > 0$\nmonomers to be on the surface, so the surface gets enriched with such monomers. The second effect\nis entirely due to design and it is governed by the design parameters $\\Gamma^{d}\/T_{d}$. This effect\nis washed way at large design temperature and it saturates at small $T_d$. Notice that this design effect\nis only a surface effect, its maximal possible role is proportional to $K\/N$. This is because the best one\ncan do with this type of design is to shift the monomeric composition by the amount about $K\/N$.\n\n\\subsubsection{Example: Gaussian distribution}\nThe opposite limit is presented by\n\\begin{equation}\np\\left(\\sigma\\right)=\\frac{1}{\\sqrt{2\\pi}}\\exp\\left(-\\frac{\\sigma^2}{2}\\right)\\ .\n\\end{equation}\nThe corresponding surface monomer distribution is the following:\n\\begin{eqnarray}\nf\\left(\\sigma\\right)&\\propto&\\left[1+\\frac{K}{N}\\left[\\exp\\left(\\frac{\\Gamma^d\\sigma}{T_d}-\n\\frac{1}{2}\\left(\\frac{\\Gamma^d}{T_d}\\right)^2\\right)-1\\right]\\right]\\nonumber\\\\\n&\\hphantom{=}&\\times\\exp\\left(-\\frac{1+4s}{2}\\sigma^2+\\frac{\\Gamma}{\nT_{\\rm fr}}\\sigma\\right)\\ .\n\\label{eq:surface_Gaussian}\n\\end{eqnarray}\nIn Fig. \\ref{fig:gaussian_distribution}, we made a plot of an\nexample of $f(\\sigma)$ as a function of $\\sigma$ for the Gaussian\ncase. For comparison, random sequence solvation ($T_d\\to\\infty$) is\nalso included. 
It can be seen that design enriches surface with\nhydrophilic monomers such that the distribution is shifted toward\nhydrophilic region compared to no design case, which by itself is\nalready a shift relative to the bare distribution $p(\\sigma)$.\n\n\\begin{figure}\n\\centerline{\\scalebox{0.7}{\\includegraphics{gaussian1018.eps}}}\n\\caption{(Color online). A comparison of original distribution and\nsurface monomer distribution with design, together with the case of\nwithout design. Design favors the monomers with hydrophilic type.}\n\\label{fig:gaussian_distribution}\n\\end{figure}\n\n\\subsection{Sequence design by both solvation and monomer\ncontacts: mean field approximation}\\label{sec:Design_Full}\n\nIn the preceding section, we considered sequences designed by the\neffect of preferential solvation of certain monomer types under\nthe chain preparation conditions. This corresponds to having only\nthe second term in the design Hamiltonian\n(\\ref{eq:design_Hamiltonian}) , or having $\\delta \\! B^{d} =0$. By\ncontrast, sequence design by monomer-monomer contacts, i.e. the\nlimit of $\\Gamma^{d}=0$ and $\\delta \\! B^{d} \\neq 0$ was\nconsidered in the literature before (see, e.g., review article\n\\cite{RMP} and references therein).\n\nIn this section, we consider the general case, when both volume\nand surface terms of the design Hamiltonian\n(\\ref{eq:design_Hamiltonian}) are present. To make the argument,\nwe resort to the mean-field approximation for the design system.\nThat means, we consider design by an effective field which couples\nto the variables $\\sigma$ and acts differently on surface and bulk\nmonomers. Since design Hamiltonian (\\ref{eq:design_Hamiltonian})\nis quadratic in $\\sigma$, the said ``design field'' is\nproportional to $\\overline{\\sigma}$ - an average value of $\\sigma$\ndefined self-consistently. It is then easy to realize that this\nfield vanishes in the bulk, because in our model design does not\naffect overall composition of the chain, and, therefore,\n$\\overline{\\sigma}_{\\rm bulk}=0$. Therefore, the mean field\napproximated design Hamiltonian reads\n\\begin{equation}\nH^d_{\\rm mean \\ field} \\simeq -\\left(\\delta \\! B^d z \\overline{\n\\sigma}_{\\rm surf} +\\Gamma^d \\right)\\sum_{j \\in G^{\\star}}^K\n\\sigma_j \\ . \\label{eq:Design_hamiltonian_full}\n\\end{equation}\nHere, $\\overline{ \\sigma}_{\\rm surf}$ is the average $\\sigma$ of\nsurface monomers (that is, of monomers which happen to be on the\nsurface during the design process) and $z$ is the coordination\nnumber (the number of neighbors) for surface monomers.\n\nWithin the mean field approximation, the probability distribution\nof designed sequences (\\ref{eq:probability_sequences}) gets\nfactorized into independent distributions of all monomers, just\nlike in the $\\delta \\! B^d=0$ case. Accordingly, we obtain the\ndistribution of surface monomers, similar to Eq. (\\ref{eq:pstar}):\n\\begin{equation}\np^{\\star}(\\sigma) \\propto p(\\sigma) \\exp\\left[ \\frac{ \\delta \\!\nB^d z \\overline{ \\sigma}_{\\rm surf} + \\Gamma^d}{T_d}\\sigma \\right]\n\\ , \\label{eq:Surface_target_full}\n\\end{equation}\nwhere we dropped for brevity the $\\sigma$-independent\nnormalization factor. Now, the value of $\\overline{ \\sigma}_{\\rm\nsurf}$ must be determined from the self-consistency condition\n\\begin{eqnarray} \\overline{\n\\sigma}_{\\rm surf} & = & \\int \\sigma p^{\\star} (\\sigma) d \\sigma \n\\nonumber \\\\ & = & \\frac{\\int \\sigma p (\\sigma) \\exp\\left[ \\frac{\n\\delta \\! 
B^d z \\overline{ \\sigma}_{\\rm surf}+\n\\Gamma^d}{T_d}\\sigma \\right] d \\sigma}{\\int p (\\sigma) \\exp\\left[\n\\frac{ \\delta \\! B^d z \\overline{ \\sigma}_{\\rm surf} + \\Gamma^d\n}{T_d}\\sigma \\right] d \\sigma} \\ . \\label{eq:selfconsistency}\n\\end{eqnarray}\nTo gain an insight into the properties of the latter equation, it is\nuseful to consider the examples of bimodal and Gaussian\ndistributions for $p(\\sigma)$. We do that a few lines below, in\nsection \\ref{sec:self_consistency_by_examples}, but here we notice\nthat once $\\overline{\\sigma}_{\\rm surf}$ is determined, the rest of the\nanalysis follows automatically along the lines of our previous\nconsideration in the section \\ref{sec:distribution_derivation}.\nIndeed, all we needed to know to implement the result\n(\\ref{eq:general_expression_for_f_simpler}) is the overall monomer\ndistribution $p_{\\rm tot} (\\sigma)$, which is known as soon as\n$p^{\\star}(\\sigma)$ is determined (see Eq. (\\ref{eq:p_total1})).\nTherefore, we can directly use our results Eqs\n(\\ref{eq:surface_distribution}), (\\ref{eq:surface_bimodal}),\n(\\ref{eq:surface_Gaussian}) with the replacement $\\Gamma^d \\to\n\\Gamma^d + \\delta \\! B^d z \\overline{\\sigma}_{\\rm surf} \\equiv\n\\Gamma^{\\prime d}$.\n\nWith that in mind, let us return briefly to the determination of\n$\\overline{\\sigma}_{\\rm surf}$.\n\n\\subsection{Implementing the self-consistency condition}\n\\label{sec:self_consistency_by_examples}\n\n\\subsubsection{Example: bimodal distribution}\n\nFor bimodal $p(\\sigma)$, Eq. (\\ref{eq:selfconsistency}) becomes\n\\begin{equation} \\overline{\\sigma}_{\\rm surf} = \\tanh \\left[ \\frac{ \\delta \\!\nB^d z \\overline{ \\sigma}_{\\rm surf}+ \\Gamma^d}{T_d} \\right] \\ .\n\\end{equation}\nAt $\\Gamma^d = 0$, this equation has non-trivial non-vanishing\nsolutions only if $\\delta \\! B^d z \/T_d >1$, in which case there\nare automatically two solutions of the opposite sign. That means,\nthe non-zero $\\overline{\\sigma}_{\\rm surf}$ in this case results\nonly from the spontaneous symmetry breaking, because without\n$\\Gamma^d$ the system has no preference for hydrophobic or\nhydrophilic monomers dominating the surface. The non-zero\n$\\Gamma^d >0$ breaks this symmetry and yields always one and only\none positive solution for $\\overline{\\sigma}_{\\rm surf}$ (and\npossibly two negative ones which we ignore because they have\nhigher free energy).\n\nIn this bimodal case, we have $\\overline{\\sigma}_{\\rm surf} < 1$,\nwhich means that in the replacement $\\Gamma^d \\to \\Gamma^{\\prime d}$, the solvation term\n$\\Gamma^d$ dominates if $\\Gamma^d > \\delta \\! B^d z $.\n\n\n\\subsubsection{Example: Gaussian distribution}\n\nFor Gaussian $p(\\sigma)$, Eq. (\\ref{eq:selfconsistency}) is easily\nexplicitly resolved:\n\\begin{equation} \\overline{\\sigma}_{\\rm surf} = \\frac{\\Gamma^d\/T_d}{1 - z\n\\delta \\! B^d \/T_d} \\ . 
\\end{equation}\nThis is usually very close to $\\Gamma^d\/T_d$, because (see next\nsection \\ref{sec:phasediagram}) in most interesting regime close\nto the triple point of the phase diagram, the denominator is\ndominated by the unity.\n\n\n\n\\section{Free energy and phase diagram of designed polymers}\n\\label{sec:phasediagram}\n\n\\subsection{Preliminary remarks}\n\nIn this section, we will consider the possible phases of the\nheteropolymer whose sequence is designed as discussed above.\n Similar problem in volume approximation is well known in the\nliterature (see, e.g., review \\cite{RMP} and references therein).\nSpecifically, we will consider three phases and the transitions\nbetween them. We will summarize the relations between phases in\nterms of the phase diagram, Fig. \\ref{fig:phase_diagram_2}, in\nvariables $T_d$ and $T$, which describe, respectively, the\nensemble of sequences and the ensemble of conformations for any\ngiven sequence. The relevant phases in the diagram are named,\nrespectively, liquid-like globule, glassy globule, and folded\nglobule. We remind to the reader that liquid-like globule is the\nstate where great many conformations contribute to the partition\nfunction; glassy globule is dominated by one or a few\nconformations, but those unrelated to the target conformation\n$\\star$; and folded globule is dominated by the target\nconformation $\\star$. It is fairly obvious and illustrated in\nFig. \\ref{fig:phase_diagram_2}, that surface solvation effects\ndo not change the topology of the phase diagram, but does affect\nthe specific positions and shape of the corresponding phase\ntransition lines; these surface-driven changes are the subject of\nour interest in this section.\n\nThe temperatures of the transitions from liquid-like to glass-like\nand to folded globules are called glass temperature $T_{g}$ and\nfolding temperature $T_{f}$, respectively. Our goal is to analyze\nthe role of surface solvation effects and design on both $T_{g}$\nand $T_{f}$. In other words, we want to calculate how surface\ncorrections to $T_f$ and $T_g$ depend on the design temperature\n$T_d$.\n\nAs regards the third phase transition line, that between folded\nand glassy globule phases, this line must be vertical in phase\ndiagram. Indeed, both folded and glassy globules are zero entropy\nstates, the transition between them cannot be driven by\ntemperature change. On the phase diagram, like in Fig. \\ref{fig:phase_diagram_2}, \nthe corresponding phase boundary must\nbe represented by the line parallel to the temperature axis. This\nargument does not rely on the volume approximation, and,\ntherefore, remains valid independently of the surface solvation\neffects. Therefore, this line of phase transition is entirely\ndescribed by the value of design temperature at which there is the\ntriple point, we call it $T_d^{(3)}$. We want to calculate also\nthe surface contribution to this quantity.\n\n\n\\begin{figure}\n\\centering{\\scalebox{0.5}{\\includegraphics{phase_diagram_2.eps}}}\n\\caption{(Color online) Phase diagram of the heteropolymer system.\nThere are three phases: random liquid-like globule, frozen glassy\nglobule and folded globule. Surface term shifts the phase diagram\nfor volume approximation. 
For bimodal distribution, glass\ntemperature is design independent, while in Gaussian distribution,\nglass temperature increases for better design condition.}\n\\label{fig:phase_diagram_2}\n\\end{figure}\n\n\nThe way we approach the phase diagram is based on the idea that\nfor any frozen globule phase, whether glassy or folded, the free\nenergy coincides with energy, because entropy vanishes given the\nnumber of contributing states of order unity. On the other hand,\nthe free energy of the liquid-like globule we can find due to the\nproperty of REM that quenched averaged free energy is equal to the\nannealed average above the glass temperature. Therefore, what\nwe shall do is to compute the annealed average free energy and to\nfind temperature of entropy ``catastrophe'' - at which entropy\nvanishes; that is the glass temperature. Of course, our goal\nis to address surface terms and design terms in this procedure.\nFollowing this program, we write the annealed average as a\nfunctional of the surface monomer distribution $f(\\sigma)$ as\n\\begin{eqnarray}\nF \\{f\\} &=& - T \\ln \\sum_{{\\rm conf}} e^{-E\/T} \\nonumber\\\\\n&\\simeq& - T \\ln \\int e^{ sN - K\\omega_G + N s \\{ f(\\sigma) \\} }\nw_G(E) e^{-E\/T} dE \\nonumber\\\\\n& \\simeq & \\overline F + F_{\\rm surf} \\{ f \\} \\ ,\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\frac{\\overline F} {N} = \\overline B - sT - \\frac{\\delta \\! B^2} {2T}\n\\label{eq:bulk_free_energy}\n\\end{equation}\nis the bulk contribution to the free energy, and\n\\begin{eqnarray}\n\\frac{F_{\\rm surf} \\{ f \\}} {K}\n&=& \\overline\\omega T - \\frac{\\delta \\! B^2} {T} \\int \\sigma^2 p^{\\star} (\\sigma) d\\sigma\n+ \\frac{N} {K} T \\int d\\sigma p_{\\rm tot}(\\sigma) \\nonumber\\\\\n& & \\times \\left[ \\phi\\left( \\eta(\\sigma) - \\ln \\frac{1-\\phi} {\\phi} \\right)\n+ \\ln (1-\\phi) \\right]\n\\label{eq:surface_free_energy}\n\\end{eqnarray}\nis the surface contribution to free energy. In Eq. (\\ref{eq:surface_free_energy}),\n\\begin{eqnarray}\n\\eta(\\sigma) &=& \\frac{\\delta \\! B^2} {T^2} \\sigma^2 - \\frac{\\Gamma} {T} \\sigma \\ , \\nonumber\\\\\np^{\\star} \\left( \\sigma \\right) &=& \\frac{ p \\left( \\sigma \\right) \\exp \\left( \\frac{ \\Gamma^{\\prime d} }\n{T_d} \\sigma \\right) }\n{\\int p \\left( \\sigma \\right) \\exp \\left( \\frac{ \\Gamma^{\\prime d} } {T_d} \\sigma \\right) d\\sigma} \\ .\n\\end{eqnarray}\nIn deriving the free energy, we have used the saddle point approximation since above glass temperature,\nfree energy is dominated by the saddle point of the partition function.\n\nOptimizing $F_{\\rm surf} \\{f\\}$ yields\n\\begin{equation}\n\\phi^{\\star} = \\frac{1} {1 + \\Lambda e^{\\eta(\\sigma)}} = \\frac{K\nf^{\\star}(\\sigma)} {N p_{\\rm tot}(\\sigma)} \\ ,\n\\end{equation}\nand then optimal value of $F_{\\rm surf} \\{f\\}$ evaluates to\n\\begin{eqnarray}\n& & \\frac{ F_{\\rm surf} \\{ f^{\\star} \\} } {K T} = 1 + \\frac{\\int\np_{\\rm tot} (\\sigma) \\ln \\frac{\\Lambda e^{\\eta(\\sigma)}} {1 +\n\\Lambda e^{\\eta(\\sigma)}} d\\sigma } {\\int\n\\frac{p_{\\rm tot}(\\sigma)} {1 + \\Lambda e^{\\eta(\\sigma)}} d\\sigma} \\nonumber\\\\\n& & - \\frac{\\delta \\! 
B^2} {T^2} \\int \\sigma^2 p^{\\star} (\\sigma)\nd\\sigma - \\ln \\int \\frac{\\Lambda p_{\\rm tot} (\\sigma)} {1 + \\Lambda\ne^{\\eta(\\sigma)}} d\\sigma \\ .\n\\label{eq:surface_free_energy_optimized}\n\\end{eqnarray}\nThese results are similar to those obtained in the work\n\\cite{Solvation_random}, except we have now non-random sequences,\nas manifested by the dependence on $T_d$. In that work\n\\cite{Solvation_random}, solvation effect in random sequences was\ntreated as the response of the globule to the surface solvation\n``field''. The linear response regime is characterized by the\nstatistical independence of disordered parts of surface and volume\nenergies. Saturation regime is characterized by the depletion\neffect, when preferential solvation of a certain monomer species\nexhausts these monomers from the globule. For completeness, there\nis also a narrow range of the so-called weak response regime.\nNeither weak response nor saturation regimes are present for the\nblack-and-white polymer model, with bimodal distribution of\nmonomer types.\n\n\n\\subsection{No depletion: Glass temperature}\n\nLet us first consider the case when solvation doesn't cause the\ndepletion of any monomer type, that is, $\\phi (\\sigma) \\ll 1$ at\nevery $\\sigma$, then\n\\begin{equation}\n\\frac{F_{\\rm surf} \\{f^{\\star}\\}} {K T} \\simeq - \\frac{\\delta \\!\nB^2} {T^2} \\int \\sigma^2 p^{\\star}(\\sigma) d\\sigma - \\ln \\int\np_{\\rm tot}(\\sigma) e^{-\\eta(\\sigma)} d\\sigma \\ .\n\\end{equation}\nFirst we will discuss how the glass temperature $T_g$ is affected\nby the surface energy, relative to the volume approximated value\n$T_{\\rm fr} = \\delta \\! B \/ \\sqrt{2 s}$. Glass temperature must be\ndetermined from by the condition $- \\left. \\frac{\\partial F\\{f\\}}\n{\\partial T} \\right |_{T_g}=0$. Since $- \\left. \\frac{\\partial\n\\overline F} {\\partial T} \\right |_{T_{\\rm fr}}=0$, we can write\n(denoting temperature derivatives by prime sign) $\\Delta T_g \\simeq\nT_g - T_{\\rm fr} \\simeq - \\left. \\frac{F_{\\rm surf}^{\\prime}} {\n\\overline F^{\\prime\\prime} } \\right|_{T_{\\rm fr}}$. Then\n\\begin{eqnarray}\n\\frac{\\Delta T_g}{T_{\\rm fr}} &\\simeq& \\frac{K}{N} \\left[\n\\frac{\\Gamma} {2s T_{\\rm fr}} \\frac{\\int \\sigma p(\\sigma)\ne^{-\\eta_{\\rm fr}(\\sigma)} d\\sigma} {\\int\np(\\sigma) e^{-\\eta_{\\rm fr}(\\sigma)} d\\sigma} \\right. \\nonumber \\\\\n& & \\left. + \\int \\sigma^2 p^{\\star}(\\sigma) d\\sigma - \\frac{2\\int\n\\sigma^2 p(\\sigma) e^{-\\eta_{\\rm fr}(\\sigma)} d\\sigma } \n{\\int p(\\sigma) e^{-\\eta_{\\rm fr}(\\sigma)} d\\sigma} \\right. \\nonumber \\\\\n& & \\left. - \\frac{1} {2s} \\ln \\int p(\\sigma) \ne^{-\\eta_{\\rm fr}(\\sigma)} d\\sigma \\right] \\ ,\n\\label{eq:Delta_T_g}\n\\end{eqnarray}\nwhere we have replaced $p_{\\rm tot}(\\sigma)$ with $p(\\sigma)$\nbecause $\\Delta T_g$ itself is already of order ${\\cal O}\\left( K\n\/ N \\right)$, and we neglect any higher order corrections. 
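\nA direct numerical evaluation of Eq. (\\ref{eq:Delta_T_g}) provides a useful check of the algebra. The following minimal Python sketch (an illustration only; it assumes numpy, a Gaussian bare distribution, the simplified $\\delta \\! B^d=0$ design scheme, and arbitrarily chosen parameter values) computes the relative shift $\\Delta T_g \/ T_{\\rm fr}$:\n\\begin{verbatim}\n# Minimal sketch: surface correction to the glass temperature for a Gaussian\n# p(sigma); design enters only through the second moment of p*(sigma).\nimport numpy as np\n\ns, dB, Gamma, Gd_over_Td, K_over_N = 0.1, 1.0, 0.3, 0.5, 0.1   # illustrative\nT_fr = dB \/ np.sqrt(2.0 * s)\n\nsig = np.linspace(-8.0, 8.0, 4001)\ndsig = sig[1] - sig[0]\np = np.exp(-0.5 * sig**2) \/ np.sqrt(2.0 * np.pi)\nw = p * np.exp(-(2.0 * s * sig**2 - (Gamma \/ T_fr) * sig))   # p(sigma) exp(-eta)\nZ = np.sum(w) * dsig\n\ndef avg(x):   # average weighted by p(sigma) exp(-eta)\n    return np.sum(x * w) * dsig \/ Z\n\nsecond_moment_pstar = 1.0 + Gd_over_Td**2   # Gaussian p*(sigma), delta B^d = 0\nbracket = ((Gamma \/ (2.0 * s * T_fr)) * avg(sig) + second_moment_pstar\n           - 2.0 * avg(sig**2) - np.log(Z) \/ (2.0 * s))\nprint(K_over_N * bracket)   # Delta T_g \/ T_fr: positive for these parameters\n\\end{verbatim}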
Design\neffect is present here through $p^{\\star}(\\sigma)$.\n\nRelative to no solvation case, the order ${\\cal O} (K\/N)$ correction\nto glass temperature is positive; in other words, glass\ntemperature is increased due to the solvation effect, so the surface\neffect makes ground state more stable.\n\nIn the following part, we will further simplify and discuss $\\Delta\nT_g$ using the examples of $p(\\sigma)$.\n\n\\subsubsection{Example: bimodal distribution}\n\nFor bimodal $p(\\sigma)$, $\\int \\sigma^2 p^{\\star}(\\sigma)d\\sigma =\n1$, and design effect doesn't show up in glass temperature,\n\\begin{eqnarray}\n\\Delta T_g &=& \\frac{K} {2sN} T_{\\rm fr} \\left[\\frac{\\Gamma} {T_{\\rm fr}} \\tanh\\left(\\frac{\\Gamma} {T_{\\rm fr}} \\right)\n- \\ln\\cosh\\left(\\frac{\\Gamma} {T_{\\rm fr}}\\right)\\right] \\nonumber\\\\\n& \\simeq & \\frac{K}{N}\n\\begin{cases}\n\\frac{ \\Gamma^2} {2 \\sqrt{2 s} \\delta \\! B} & \\text{when $\\frac{\\Gamma}{T_{\\rm fr}}\\ll 1 $}\\\\\n\\frac{ \\ln 2 } {(2 s)^{3\/2} } \\delta \\! B & \\text{when\n$\\frac{\\Gamma}{T_{\\rm fr}}\\gg 1 $}\n\\end{cases} \\ .\n\\label{eq:freezing_temperature_bimodal}\n\\end{eqnarray}\nWhen the solvation strength $\\Gamma$ is small, $\\Gamma \/ T_{\\rm fr}\n\\ll 1$, this corresponds to the statistical independent region, in\nwhich surface term and volume term in a heteropolymer globule are\nroughly statistically independent. In this region, $\\Delta T_g\n\\propto \\Gamma^2$.\n\nThe region of $\\Gamma \/ T_{\\rm fr} \\gg 1$ is saturation region. When\nsolvation strength $\\Gamma$ is so large, essentially all the surface\nmonomers are of hydrophilic type, as can be clearly seen from\nsurface monomer distribution Eq. (\\ref{eq:surface_bimodal}): when\n$\\Gamma \/ T_{\\rm fr}\\gg 1$, surface monomer distribution function is\ndominated by hydrophilic term $f(+1)$. In this regime, $\\Delta T_g$\nbecomes independent of $\\Gamma$, because $\\Gamma$ is already so large\nthat all surface places are occupied by hydrophilic monomers and\nfurther increase of $\\Gamma$ cannot change anything.\n\n\\subsubsection{Example: Gaussian distribution}\n\nFor Gaussian $p(\\sigma)$, we can write $\\int \\sigma^2\np^{\\star}(\\sigma) d\\sigma = \\left(\\Gamma^{\\prime d} \/ T_d\\right)^2 +\n1 \\simeq \\left(\\Gamma^d \/ T_d\\right)^2 \\left(1 + 2z \\delta \\! B^d \/\nT_d \\right) + 1 \\simeq \\left(\\Gamma^d \/ T_d\\right)^2 + 1$, where\nthe asymptotic form comes from the fact that $T_d \/(z \\delta \\! B^d)\n\\gg 1$ since we work in the regime of $T_d > T_d^{(3,0)} = \\delta\n\\! B^d \/ \\sqrt{2s}$, where $T_d^{(3,0)}$ is the triple point in\nvolume approximation. Then we have\n\\begin{equation}\n\\Delta T_g \\simeq \\frac{K}{N} T_{\\rm fr} \\left[\n\\left(\\frac{\\Gamma^d} {T_d}\\right)^2 + 6 s + \\frac{ \\Gamma^2} {2\n\\delta \\! B^2}\\right] \\ , \\label{eq:freezing_T_Gauss}\n\\end{equation}\nwhere we also used the fact that $s \\ll 1$. As in the work\n\\cite{Solvation_random}, system with Gaussian distributed $\\sigma$,\nunlike bimodal one, has the weak response regime at very small\n$\\Gamma$, when $\\Gamma \/ T_{\\rm fr} \\ll s$; in this regime, surface\nsolvation is insignificant. 
The region of $ \\Gamma \/T_{\\rm fr} \\gg\n\\sqrt{24} s$ is the regime where volume and surface disorder are\nstatistically independent, and the result in this region, in terms\nof dependence on $\\Gamma$, is similar to that of the bimodal\ndistribution.\n\nOf course, the major difference from the bimodal example is that in\nGaussian case, there is an additional term due to sequence design.\nThat term increases glass temperature, which means design, as\nusually, makes for a more stable ground state.\n\n\n\\subsection{Triple point}\n\nNow let us consider the folded region, and begin with the\nsurface-corrected triple point $T_d^{(3)}$; we want to see its\nchange $\\Delta T_d^{(3)} = T_d^{(3)} - T_d^{(3,0)}$ relative to\n$T_d^{(3,0)} = \\delta \\! B^d \/ \\sqrt{2 s}$ determined in volume\napproximation. In general, triple point is determined from the\ncondition that glass temperature equals folding temperature, $T_f =\nT_g $. Glass temperature, along with its surface corrections, is\nalready known to us, see formula (\\ref{eq:Delta_T_g}) or its\nsimplified versions (\\ref{eq:freezing_temperature_bimodal}) and\n(\\ref{eq:freezing_T_Gauss}). The folding temperature $T_f$ should\nbe calculated from $\\langle E_{\\star} ({\\rm seq}) \\rangle = F \\{\nf^{\\star} \\}$. We take averaged ground state energy from formula\n(\\ref{eq:averaged_ground_state_energy}) and we take $F \\{ f^{\\star}\n\\}$ from Eqs. (\\ref{eq:bulk_free_energy}) and\n(\\ref{eq:surface_free_energy_optimized}); in the latter (which is\nthe surface part) we must neglect all order ${\\cal O}(K\/N)$\ncorrections. The result reads\n\\begin{eqnarray}\n\\Delta T_d^{(3)} &\\simeq& \\frac{K} {N} T_d^{(3,0)}\n\\left[\\frac{\\Gamma \\Gamma^d} {\\delta \\! B \\delta \\! B^d}\n- \\int \\sigma^2 p^{\\star}(\\sigma) d\\sigma \\right. \\nonumber\\\\\n& & \\left. - \\frac{1} {2s} \\ln \\int p(\\sigma) e^{-\\eta_{\\rm fr} (\\sigma)}\nd\\sigma \\right] \\ .\n\\label{eq:triple_point_general}\n\\end{eqnarray}\nFrom this formula, it is not even clear whether $\\Delta T_d^{(3)}$\nis positive or negative. As with other cumbersome results, let us\nlook at the specific examples of $p( \\sigma )$.\n\n\\subsubsection{Example: bimodal distribution}\n\nFor bimodal distribution $p(\\sigma)$, we have\n\\begin{eqnarray}\n& & \\Delta T_d^{(3)} \\simeq \\frac{K} {N} T_d^{(3,0)} \\left[\n\\frac{\\Gamma \\Gamma^d} {\\delta \\! B \\delta \\! B^d}\n- \\frac{1} {2s} \\ln \\cosh \\left( \\frac{\\Gamma} {T_{\\rm fr}} \\right) \\right]\\nonumber\\\\\n&\\simeq&\n\\begin{cases} \\frac{K} {N} T_d^{(3,0)} \\frac{\\Gamma} {\\delta \\! B} \\left( \\frac{\\Gamma^d} {\\delta \\! B^d}\n- \\frac{\\Gamma} {2 \\delta \\! B} \\right) & \\text{when $\\frac{\\Gamma} {T_{\\rm fr}} \\ll 1$} \\\\\n \\frac{K} {N} T_d^{(3,0)} \\frac{\\Gamma} {\\delta \\! B} \\left( \\frac{\\Gamma^d} {\\delta \\! B^d}\n - \\frac{1} {\\sqrt{2s}} \\right) & \\text{when $\\frac{\\Gamma} {T_{\\rm fr}} \\gg 1$}\n\\end{cases} \\ . \\label{eq:triple_point_bimodal}\n\\end{eqnarray}\nWe see that the design effect increases $\\Delta T_d^{(3)}$, pushes\ntriple point to the right on the phase diagram Fig. \\ref{fig:phase_diagram_2}, \nwhile the solvation effect acts in the\nopposite direction. Interestingly, in the statistical independence\nregime, when $\\Gamma \/ T_{\\rm fr} \\ll 1$, the sign of $\\Delta\nT_d^{(3)}$ is determined by the competition of the design term\n$\\Gamma^d \/ \\delta \\! B^d$ and folding term $\\Gamma \/ \\delta \\! 
B$.\nSpecifically, large design solvation strength $\\Gamma^d \/ \\delta \\!\nB^d \\gg \\Gamma \/ \\delta \\! B$ would make $\\Delta T_d^{(3)}\n> 0$ and in this sense the design makes folded state more stable. \nFrom Eq. (\\ref{eq:freezing_temperature_bimodal}), we already know that\nglass temperature is independent of $\\Gamma$ in saturation region\n$\\Gamma \/ T_{\\rm fr} \\gg 1$. Not surprisingly, in this region, the\nsign of $\\Delta T_d^{(3)}$ is also independent of $\\Gamma$.\n\n\\subsubsection{Example: Gaussian distribution}\n\nWhen $p(\\sigma)$ is Gaussian, we get\n\\begin{eqnarray}\n\\Delta T_d^{(3)} &\\simeq& \\frac{K} {N} T_d^{(3,0)} \\left[\n\\frac{\\Gamma {\\Gamma}^d} {\\delta \\! B \\delta \\! B^d}\n- \\frac{\\Gamma^2} {2 \\delta \\! B^2} - 2s \\right. \\nonumber\\\\\n& & \\left. - 2s \\left( \\frac{\\Gamma^d} {\\delta \\! B^d} \\right)^2 (1\n+ 2z\\sqrt{2s}) \\right] \\ . \\label{eq:triple_point_Gaussian}\n\\end{eqnarray}\nThe interesting and rather unexpected result is that the design\neffect, when it is \\emph{very} strong, might lead to reduction of\n$T_d^{(3)}$; in other words, it might have an adverse effect on the\nstability of the folded phase. Inspection of the origin of the\nnegative term $ \\propto - (\\Gamma_d \/ \\delta \\! B^d)^2$ shows that\nits origin is due to the fact that very strong solvation effect in\ndesign brings in a significant fraction of very strongly solvophilic\nmonomers; even though only small fraction of them subsequently turns\nout inside the globule in the folded state, they nevertheless make\nthe destabilizing effect on the globule. We emphasize that such\ndanger exists only when solvation effect in design is so strong that\nnot only $\\Gamma^d \/ \\delta \\! B^d > \\Gamma \/ \\delta \\! B$, but $s\n\\Gamma^d \/ \\delta \\! B^d > \\Gamma \/ \\delta \\! B $. It is unclear if\nsuch situation is realistic.\n\n\\subsection{Folding temperature}\n\nNext let us consider the folding temperature $T_f$ away but not far\nfrom the triple point. When $T_d < T_d^{(3)}$, we have\n\\begin{equation}\n\\frac{\\delta \\! B \\delta \\! B^d + \\frac{K} {N} \\Gamma \\Gamma^d }\n{T_d} = \\left. \\left(s T + \\frac{\\delta \\! B^2} {2 T} + F_{\\rm surf}\n\\right) \\right|_{T = T_f} \\ .\n\\end{equation}\nIn the vicinity of the triple point, when $T_d = T_d^{(3)} - \\Delta\nT_d$, where $\\Delta T_d \\ll \\frac{K}{N} T_d^{(3)}$, we have $F_{\\rm\nsurf}|_{T=T_f} \\simeq F_{\\rm surf}|_{T=T_g, T_d=T_d^{(3,0)}}$, or\n\\begin{equation}\n\\frac{\\delta \\! B \\delta \\! B^d } {T_d^{(3)}} \\frac{\\Delta T_d} {T_d^{(3)}}\n\\simeq s T_f + \\frac{\\delta \\! B^2} {2 T_f} - s T_g - \\frac{\\delta \\! B^2} {2 T_g} \\ ,\n\\end{equation}\nwhich yields upon some algebra\n\\begin{equation}\nT_f \\simeq T_g \\left[1 + \\sqrt{\\frac{ \\delta \\! B \\delta \\! B^d} {s\nT_d^{(3)} T_g} \\left(1 - \\frac{T_d} {T_d^{(3)}} \\right)} \\right] \\ .\n\\end{equation}\n\nIn Fig. \\ref{fig:phase_diagram_2}, the phase diagram of the system\nis sketched. In the regime of temperature $T$ below $T_f$, the\ndesigned sequences will thermodynamically stable when folded to the\ntarget state. In the regime of $T_d > T_d^{(3)}$, sequences obtained\nwill be either in frozen glassy state or in random liquid-like\nglobule.\n\n\\subsection{Depletion effect}\n\nIn preceding sections, we assumed no depletion effect. Here we will\nsee what happens if there is depletion. 
Depletion of monomers may\nhappen for Gaussian $p(\\sigma)$, while in bimodal case, there is no\ndepletion since the number of monomers for each monomer type is\nabundant. When depletion occurs, $\\phi = 1$ for $\\sigma \\geq\n\\sigma_m$, and $\\phi = 0$ for $\\sigma < \\sigma_m$, and this leads to\n\\begin{equation}\n\\int_{\\sigma_m}^{\\infty} p_{\\rm tot} (\\sigma) d\\sigma= K\/N \\ .\n\\end{equation}\nThe integration result is\n\\begin{equation}\n\\underbrace{ {\\rm erfc} \\left( \\frac{\\sigma_m - \\frac{\n\\Gamma^{\\prime d}}{T_d}} {\\sqrt{2}} \\right) - {\\rm erfc}\n\\left(\\frac{\\sigma_m} {\\sqrt{2}} \\right) }_{ >0 } - 2 = - \\frac{N}{K}\n{\\rm erfc} \\left( \\frac{\\sigma_m} {\\sqrt{2}} \\right) \\ .\n\\end{equation}\n\nFor the case of no design, $\\Gamma^{\\prime d} = 0 $,\n\\begin{equation}\n{\\rm erfc} \\left(\\frac{\\sigma_m^0} {\\sqrt{2}} \\right) = \\frac{2K}\n{N} \\ .\n\\end{equation}\nTherefore, we have ${\\rm erfc} \\left(\\sigma_m \/ \\sqrt{2} \\right) <\n{\\rm erfc} \\left( \\sigma_m^0 \/\\sqrt{2} \\right)$, and this gives\n$\\sigma_m > \\sigma_m^0$, so with design, the surface monomers are\nmore hydrophilic than without design. This makes physical sense and\nthis is also consistent with no depletion case $\\phi \\ll 1$, in\nwhich design favors the hydrophilic monomers on the surface.\n\n\\section{Sequence space entropy}\\label{sec:sequence_entropy}\n\nSequence design, when it is realized computationally, or if it could\nbe realized experimentally, helps finding sequences with\nparticularly low ground state energy. But of course there is a\nlimit - there is always a sequence whose ground state energy is the\nlowest among all sequences, and, therefore, no design can possibly\nproduce any sequence with lower energy. More generally and more\npractically, the lower ground state energy we want to obtain, the\nfewer sequences exist which can meet our demand. One may want to\nknow how many sequences are there to choose from with any given\nground state energy. Design paradigm provides the general method to\nsolve such problem. Indeed, we can compute the sequence space\nentropy (which is just the logarithm of the number of relevant\nsequences) as a function of $T_d$ and as we also know the average\nground state energy as a function of $T_d$, we can determine the\nnumber of sequences as depends on their ground state energy. This\nprocedure in volume approximation is described in the work\n\\cite{RMP}. Here we want to consider how it is affected by the\nsurface solvation effect.\n\nIn principle, sequence space entropy depends quite strongly on the\ntarget state fold here denoted as $\\star$, this dependence is called\ndesignability of the fold (see, for example, recent work on this\nsubject \\cite{Designability}). Here, we will neglect this fact.\nThis is not because designability is unimportant - it is very\nimportant indeed, but our goal is to look at the role of sequence\nsolvation effect, so to make this task manageable, we have to\nsacrifice the designability issue as a zeroth approximation.\n\nTo find sequence space entropy, we consider sequence space free\nenergy $ -T_d\\ln Z$, where $Z = \\sum_{{\\rm seq}} \\exp\\left[ -\nH^d({\\rm seq}, {\\star}) \/ T_d \\right]$. Note that design is to a\ncertain conformation $\\star$, so in the partition function here, the\nsummation runs over sequences. 
Sequence space entropy per monomer\n$s_{\\rm seq} $ can then be found using high $T_d$ expansion just in\nthe same way as the calculation of $\\langle E_{\\star}(seq) \\rangle$,\nformula (\\ref{eq:averaged_ground_state_energy}). The result is given\nby\n\\begin{equation}\ns_{\\rm seq} = -\\frac{\\partial \\left( -T_d \\ln Z \\right)} {\\partial\nT_d} \\simeq \\ln q - \\frac{ \\delta \\! {B^d}^2 + \\frac{K} {N}\n{\\Gamma^d}^2} {2T_d^2} \\ , \\label{eq:sequence_entropy}\n\\end{equation}\nwhere $q$ is the effective number of `letters in the alphabet'\ndetermined from the total number of possible sequences of length\n$N$: ${\\cal N}_{\\rm seq} = q^N$. It is not difficult to show that\n$q = - \\sum_{\\sigma} p(\\sigma) \\ln p(\\sigma)$ (see Eq.\n(\\ref{eq:propensities}); $q \\approx 18$ for real proteins).\n\nThe number of sequences is maximal if we impose no constraints on\nthe quality of design, which means sequence entropy has to be\nmaximal when $T_d$ is at the triple point. Therefore, we can\ncompute $s_{{\\rm seq}}^{\\rm max}$ using formula\n(\\ref{eq:sequence_entropy}) at $T_d = T_d^{(3)}$. The result reads:\n\\begin{equation}\ns_{{\\rm seq}}^{\\rm max} \\simeq \\ln q - s \\left[ 1 - \\frac{2 \\Delta\nT_d^{(3)}} {T_d^{(3,0)}} + \\frac{K {\\Gamma^d}^2} {N \\delta {\\!\nB^d}^2} \\right] \\ , \\label{eq:seq_max}\n\\end{equation}\nwhere the ratio $\\Delta T_d^{(3)} \/ T_d^{(3,0)}$ should be taken\nfrom Eq. (\\ref{eq:triple_point_general}) or from the simplified\nversions of it (\\ref{eq:triple_point_bimodal}) or\n(\\ref{eq:triple_point_Gaussian}).\n\nFirst, in the volume approximation, when there is no surface term,\nwe have $s_{{\\rm seq}}^{\\rm max} = \\ln q - s$. This result is a very\nnatural consequence of our neglect of the difference in\ndesignabilities between different folds. Indeed, volume\napproximation of $s_{{\\rm seq}}^{\\rm max}$ indicates that the number\nof sequences that can be designed for a given conformation $\\star$\nis $e^{Ns_{{\\rm seq}}^{\\rm max}} = {\\cal N}_{{\\rm seq}} \/ {\\cal\nN}_{{\\rm conf}}$, which means that all ${\\cal N}_{{\\rm seq}} =\nq^{N}$ sequences are equally distributed between ${\\cal N}_{\\rm\nconf} = e^{s N}$ conformations. This is because the fraction of\nsequences with ground state energy above $\\langle E_{\\star}(seq)\n\\rangle$ (\\ref{eq:averaged_ground_state_energy}) is extremely small,\nsee appendix \\ref{sec:ground_state}, so practically all sequences\nhave their ground state energy around $\\langle E_{\\star}(seq)\n\\rangle$.\n\nSecond, we look at the role of surface effect. For simplicity, we\nrestrict consideration to the most typical regime of statistical\nindependence between surface and bulk contributions. For both\nbimodal distribution and Gaussian distribution, plugging $\\Delta\nT_d^{(3)}$ into Eq. (\\ref{eq:seq_max}), we get the following simple\nresult:\n\\begin{equation}\ns_{\\rm seq}^{\\rm max} = \\ln q - s - \\frac{s K} {N} \\left[\n\\frac{\\Gamma} {\\delta \\! B} - \\frac{\\Gamma^d} {\\delta \\! B^d}\n\\right]^2 \\ .\n\\end{equation}\nThis tells us that the surface solvation effect reduces the number\nof sequences, $s_{\\rm seq}^{\\rm max} < \\ln q -s$. This happens\nbecause some of the sequences, with inadequate supply of hydrophilic\nmonomers, have their ground state energies above $\\langle\nE_{\\star}(seq) \\rangle$ when we look at them carefully enough to\nnotice their surface energy. 
Accordingly, the fraction of sequences\nwith ground state at $\\star$ is below its `fair' share of $e^{(\\ln\nq-s)N}$ and even maximal sequence space entropy falls short of its\nvolume approximated value $\\ln q - s$. Only very careful design, at\nwhich $\\Gamma^d\/\\delta \\! B^d = \\Gamma\/\\delta \\! B$, would be able\nto provide the ensemble of sequences adequate to their solvation\nconditions, in which case the solvation effect does not increase\nenergy and does not rule any sequences out of the competition.\nNotice that the condition $\\Gamma^d\/\\delta \\! B^d = \\Gamma\/\\delta \\!\nB$ does not involve design temperature, it specifies only the\nbalance of solvation and bulk heteropolymeric effects.\n\nLet us now look at the situation differently, namely, let us write\ndown the folding temperature $T_f$ in terms of $s_{\\rm seq}$ instead\nof $T_d$. Indeed, $T_d$ is a purely technical concept which may or\nmay not directly correspond to the experimental reality; for\ninstance, design can be controlled by some analog of solvent quality\ninstead of temperature. At the same time, sequence entropy is a\nvery clear quantity, it is the number of sequences whose ground\nstate stability corresponds to the temperature $T_f$. Simple\nalgebra shows that\n\\begin{equation}\nT_f = T_g \\left[ 1 + \\sqrt{\\frac{\\delta \\! B \\delta \\! B^d} {s T_g\nT_d^{(3)}} \\left(1 - \\sqrt{\\frac{ \\ln q - s_{\\rm seq}^{\\rm max} } {\n\\ln q - s_{\\rm seq} }} \\right) } \\right] \\ .\n\\end{equation}\nThis allows to re-interpret phase diagram, Fig.\n\\ref{fig:phase_diagram_2}, with sequence entropy on the horizontal\naxis.\n\nFinally, it is known \\cite{Shakhnovich_review_2006} that the quality\nof design is best characterized by the energy gap between the energy\nof the sequence in its purported target state and the average ground\nstate energy\n\\begin{equation} \\Delta \\epsilon = \\frac{ \\left. F\\{ f^\\star \\} \\right |_{T_g} - \\langle E_{\\star}\n({\\rm seq}) \\rangle }{N} \\ . \\end{equation}\nTherefore, we should look at the relation between sequence entropy\nand $\\Delta \\epsilon$. From the above results, we have found\n\\begin{equation}\ns_{\\rm seq} = \\ln q - s \\left( 1 + \\frac{\\Delta \\epsilon}{\\sqrt{2\ns} \\delta \\! B} + \\frac{K}{N} \\zeta \\right)^2 \\left( 1 + \\frac{K}{N}\n\\xi \\right) \\ ,\n\\end{equation}\nwhere the solvation related coefficients are given by\n\\begin{equation} \\xi = \\frac{ {\\Gamma^d}^2} { {\\delta \\! B^d}^2} - 2\\frac{ \\Gamma\n\\Gamma^d}{ \\delta \\! B \\delta \\! B^d} \\end{equation} and\n\\begin{equation} \n\\zeta = \\int \\sigma^2 p^\\star (\\sigma) d\\sigma \n+ \\frac{1}{2 s} \\ln \\int p(\\sigma) e^{-\\eta_{\\rm fr} (\\sigma)} d\\sigma \\ .\n\\end{equation}\nThere are less sequences available for larger energy gap design.\n\n\\subsubsection{Example: bimodal distribution}\n\nWhen $p(\\sigma)$ is bimodal,\n\\begin{eqnarray}\n\\zeta = \\frac{1}{2 s} \\ln \\cosh \\frac{\\Gamma} {T_{\\rm fr}}\n\\ .\n\\end{eqnarray}\n\n\\subsubsection{Example: Gaussian distribution}\n\nWhen $p(\\sigma)$ is Gaussian,\n\\begin{eqnarray}\n\\zeta = \\frac{\\Gamma^2} {2\\delta \\! B^2} + 2s \n+ 2s \\left(\\frac{\\Gamma^d}{\\delta \\! 
B^d} \\right)^2 \\ .\n\\end{eqnarray}\n\nIn either case, we see that the number of available sequences drops\ndramatically as we increase their desired quality by choosing a\nlarger $\\Delta \\epsilon$.\n\n\\section{Conclusion}\n\nIn this paper, we examined the interplay of surface solvation\neffects and sequence design for a protein-like heteropolymer globule.\nIdeologically, our treatment of disordered sequences followed the\ntheoretical studies of heteropolymer folding in the works\n\\cite{BW,REM_Derrida,BJ,SG_IndependentInt}, and in our treatment of\npreferential solvation we used the approach of the work\n\\cite{Solvation_random}. What we did was to apply the REM in a new,\nchallenging context.\n\nAs in the volume approximation, designed sequences in the target\nconformation have lower energy than random sequences. This is not\nsurprising: this is after all the sole purpose of design. Less\nobviously, we found that the role of preferential solvation for the\ndesign itself might be ambiguous. The problem is that when the\ndesign conditions favor the hydrophilicity of the surface monomers\ntoo strongly, these monomers can have an adverse effect on the\noverall composition of the sequence and then disrupt the favorable\narrangement of contacts inside the globule.\n\nSpeaking about the phase diagram of the heteropolymer globule, we found\nthat the surface solvation effect operates differently for the two most\ntypical examples of monomer composition. If there are only two\ntypes of monomers, then the glass transition temperature remains\nindependent of the design conditions, as was found in the volume\napproximation. But this is no longer the case when there is a wide\nGaussian distribution of monomer types; in this case, design brings\nin a noticeable fraction of very hydrophilic monomers from the tail\nof the hydrophilicity distribution, and they do affect the glass\ntransition.\n\nTo conclude, our study shows that it is possible to incorporate preferential\nsolvation effects into the REM-based heteropolymer theory, and\nsome of the obtained results are quite delicate and unexpected. In\nreality, the role of the surface in molecules of realistic size is\nquite significant, so the effects which were examined here on a\nperturbative level, treating the surface contributions ${\\cal O}(K\/N)$\nas very small, might be quite substantial and very important.\n\n\\acknowledgements The work of both authors was supported in part by\nthe MRSEC Program of the National Science Foundation under Award\nNumber DMR-0212302.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\\section{Introduction}\n\\input{introduction.tex}\n\n\\section{Related Work}\n\\input{related.tex}\n\n\\section{Problem Definition}\n\\input{problem.tex}\n\n\\section{Proposed Solution: Secret Gradient Descent}\n\\input{solution.tex}\n\n\\section{Privacy Guarantee}\n\\label{sec:guarantee}\n\\input{privacy_gurantee.tex}\n\n\\section{Discussion}\n\\input{conclusion.tex}\n\n\n\n\\subsection{Equivalence to \\(d\\)\\hyp SSS}\n\\label{sec:equivalence}\nAs we will explain in the last sentence of this subsection, we may assume w.l.o.g.\\ that the server is only given the messages \\(M^{t_0}=(\\{\\mskip-4mu\\{ a_1^{t_0},\\dots,a_N^{t_0}\\}\\mskip-4mu\\}, \\{\\mskip-4mu\\{ b_1^{t_0},\\dots,b_{NK}^{t_0}\\}\\mskip-4mu\\})\\)\nfrom a single training iteration \\(t_0\\), where all vector entries are encoded with \\(m\\) bits. 
We assume further that at least two clients are honest, w.l.o.g.\\ clients 1 and 2, and that in the case where one of the gradients that the server is searching for was sent in iteration \\(t_0\\), it was sent by either client 1 or client 2. For the moment we will ignore the messages of all other clients. We hence work with the messages \\((\\{\\mskip-4mu\\{ a_1^{t_0},a_2^{t_0}\\}\\mskip-4mu\\}, \\{\\mskip-4mu\\{ b_1^{t_0},\\dots,b_{2K}^{t_0}\\}\\mskip-4mu\\})\\), and the task of the adversarial server is to determine whether there exists a \\(K\\)\\hyp element submultiset \\(\\tilde{V}\\) of \\(\\{\\mskip-4mu\\{ b_1^{t_0},\\dots,b_{2K}^{t_0}\\}\\mskip-4mu\\}\\) such that either \\(a_1^{t_0} + \\sum_{\\tilde{v}\\in \\tilde{V}} \\tilde{v} = h^{t_0}\\) or \\(a_2^{t_0} + \\sum_{\\tilde{v}\\in \\tilde{V}} \\tilde{v} = h^{t_0}\\). We can equivalently formulate this as proving or disproving the existence of a submultiset \\(\\tilde{V}\\) such that either \\(\\sum_{\\tilde{v}\\in \\tilde{V}} \\tilde{v} = h^{t_0} - a_1^{t_0}\\) or \\(\\sum_{\\tilde{v}\\in \\tilde{V}} \\tilde{v} = h^{t_0} - a_2^{t_0}\\). Since the noisy gradients \\(a_1^{t_0}\\) and \\(a_2^{t_0}\\) are uniformly random, the right sides of these two equations are uniformly random too.\nThus, the server's task is equivalent to solving a \\(d\\)\\hyp SSS problem with a multiset of uniformly random vectors, a uniformly random target sum \\(w\\), and the additional constraint that the submultiset \\(\\tilde{V}\\) needs to be of cardinality \\(K\\).\n\nThe messages of clients other than 1 or 2 are independent of those of clients 1 and 2 and can therefore be ignored as pure noise. Including them would only increase the chance of false positives, i.e., solutions to the \\(d\\)\\hyp SSS search problem that do not correspond to a set of vectors sent by a single client. Furthermore, the \\(d\\)\\hyp SSS instances resulting from different training iterations are clearly independent, so the assumption from the beginning of this subsection that the server only has access to the data from one training iteration can be made w.l.o.g.\n\n\\subsection{Hardness Guarantee}\n\\label{sec:hardness}\nWe now show that the set of \\(d\\)\\hyp SSS instances from \\Secref{sec:equivalence}, which an adversary would have to solve, is computationally hard for the parameter choice \\(K=dm\/2\\). In the one\\hyp dimensional case and without the additional constraint, this has been done already in 1996 by Impagliazzo and Naor \\cite{impagliazzo1996efficient}. The proof can be extended to our setting.\nWe first formalize the problem using a similar notation as Impagliazzo and Naor, but invert it: Whereas in their case the number of vectors \\(K\\) is fixed, we fix the encoding length \\(m\\) and write \\(K\\) as a function of \\(m\\):\n\\begin{definition}\n\\label{def:hard_problem}\nLet \\(B=\\{\\mskip-4mu\\{ b_1,\\dots,b_{2K(m)}\\}\\mskip-4mu\\}\\) be a multiset of vectors drawn uniformly and independently from \\(\\mathbb{Z}_{2^m}^d\\). 
\\emph{SecGD SSS} is the problem of inverting the function\n\\(f_B(S)=(B,\\sum_{b\\in S} b)\\),\nwhere \\(S\\) is a uniformly randomly drawn submultiset of \\(B\\) with cardinality \\(K(m)\\).\n\\end{definition}\nNote that \\(S\\) can be represented as a vector \\(t\\in \\{0,1\\}^{2K(m)}\\) with \\(L_1\\)\\hyp norm equal to \\(K(m)\\) (\\(b_i \\in S\\) iff \\(t_i = 1\\)), which we will do in the following.\nWe are interested in hard instances of this problem, depending on the number of vectors \\(2K(m)\\), i.e., instances for which the function from \\Defref{def:hard_problem} is hard to invert. For this we extend the usual definition of one\\hyp way functions \\cite{goldreich2001foundations} to sequences of functions \\(\\{f_n\\}\\), where \\(f_n\\) is used for inputs of length \\(n\\) and may be random. In our case, \\(f_{n\/2K(m)dm}=f_B\\) for a multiset \\(B\\) of \\(2K(m)\\) \\(d\\)\\hyp dimensional random vectors and an encoding length of \\(m\\).\n\\begin{definition}[{\\cite{impagliazzo1996efficient}}]\nLet \\(\\{f_n\\}\\) be a sequence of (potentially random) functions defined on \\(D_n\\subset \\{0,1\\}^n\\) and let \\(f^*:\\bigcup_n D_n \\rightarrow\\{0,1\\}^*\\) be defined by its restrictions to the \\(D_n\\):\n\\(\\restr{f^*}{D_n} = f_n\\).\n\\(\\{f_n\\}\\) is \\emph{one\\hyp way} if the following two conditions hold:\n\\begin{itemize}\n\n\\item \\(f^*(t)\\) is computable in polynomial time for every \\(t\\in\\bigcup_n D_n\\).\n\\item Let \\(\\{t_n\\}\\) be a sequence of uniformly random inputs,\n\\(t_n{\\sim} \\mathcal{U}(D_n)\\) i.i.d.\nFor every probabilistic polynomial\\hyp time algorithm \\(A\\) (that attempts to invert \\(f^*\\)) and for all \\(c>0\\),\n\\(\\Pr(f^*(A(f^*(t_n))) = f^*(t_n)) < n^{-c}\\)\nfor all sufficiently large \\(n\\).\n\\end{itemize}\n\\end{definition}\nSince solving the search version of SecGD SSS is exactly the problem of inverting a function \\(f_n\\), we call sets of instances for which the corresponding sequence \\(\\{f_n\\}\\) is one\\hyp way \\emph{hard} \\cite{impagliazzo1996efficient}. In our case, sets of instances are defined by the number of messages as a function of the encoding length \\(2K(m)\\).\nUsing this definition, the hardest instances are those for which \\(2K(m)=dm\\):\n\\begin{theorem}[{cf.}\\xspace\\ {\\cite[Prop.~1.2]{impagliazzo1996efficient}}]\n\\label{thm:hardness}\n\\leavevmode\n\\begin{enumerate}\n\n\\item Let \\(2K'(m) \\leq 2K(m) \\leq dm\\). If SecGD SSS is hard for \\(K'(m)\\), then it is also hard for \\(K(m)\\).\n\\item Let \\(dm \\leq 2K(m) \\leq 2K'(m)\\). If SecGD SSS is hard for \\(K'(m)\\), then it is also hard for \\(K(m)\\).\n\\end{enumerate}\n\\end{theorem}\nFor the proof of \\Thmref{thm:hardness} we need to characterize the SecGD SSS instances in the two different cases. In the first case the function \\(f_B\\) is almost injective, while in the second case it is almost surjective with all values in its range occuring almost the same number of times.\n\\begin{definition}[\\cite{impagliazzo1996efficient,impagliazzo1989recycle}]\nLet \\(D\\) be a probability distribution on \\(\\{0,1\\}^n\\). We say \\(D\\) is \\emph{quasi\\hyp random} within \\(\\varepsilon\\), if for all \\(U\\subset \\{0,1\\}^n\\) we have that \\(\\abs{\\Pr_D(U) - \\abs{U}\/2^n} < \\varepsilon\\).\n\\end{definition}\n\\begin{lemma}[{cf.}\\xspace\\ {\\cite[Prop.1.1]{impagliazzo1996efficient}}]\n\\label{thm:characterization}\n\\leavevmode\n\\begin{enumerate}\n\n\\item Let \\(2K(m)\\leq cdm\\) for \\(c<1\\). 
Let \\(B\\) and \\(S\\) both be chosen uniformly at random. Except with probability exponentially small, there is no \\(S'\\neq S\\) such that \\(f_B(S)=f_B(S')\\).\n\\item Let \\(2K(m) \\geq cdm\\) for \\(c>1\\). Let \\(B\\) be chosen uniformly at random. Except with probability exponentially small (w.r.t. the choice of \\(B\\)), the distribution given by \\(f_B(S)\\) for a randomly chosen \\(S\\) is quasi-random (w.r.t. \\(S\\)) within an exponentially small amount.\n\\end{enumerate}\n\\end{lemma}\n\\Thmref{thm:hardness} can now easily be deduced from \\Lemmaref{thm:characterization} (see \\cite{impagliazzo1996efficient}). Assume we have an algorithm that efficiently solves SecGD SSS for instances with \\(2K(m)\\) vectors where the function \\(K\\) is chosen such that \\(f_B\\) is almost injective. Instances with \\(2K'(m) < 2K(m)\\) vectors can be transformed into instances with \\(2K(m)\\) vectors by removing enough of the least significant bits. This adds only few false positives, i.e., solutions to the inversion of \\(f_B\\), due to instances with \\(2K(m)\\) vectors being almost injective. Similarly, if we have an algorithm for instances with \\(2K(m)\\) vectors where \\(f_B\\) is almost uniform, we transform instances with \\(2K'(m) > 2K(m)\\) vectors by adding random bits at the end. Every solution to the modified\nproblem is a solution to the original\nproblem, and we do not lose many solutions because the sums \\(f_B(S)\\) are almost uniformly distributed for \\(2K(m)\\) vectors.\n\nThe proof of \\Lemmaref{thm:characterization} can be found in the \\Appref{appsec:proof} and closely follows that of Prop.~1.1 from Impagliazzo and Naor \\cite{impagliazzo1996efficient}. For the first part we use a simple union bound while for the second part we rely on the Leftover Hash Lemma from Santha and Vazirani \\cite{santha1984generating}.\n\n\n\\subsection{Hardness in Practice}\n\\label{sec:hardness_practice}\nOne\\hyp dimensional SSS is an NP hard problem \\cite{Karp1972}, its multi\\hyp dimensional generalization therefore is as well.\nHowever, multi\\hyp dimensional SSS has been mostly neglected by the research community so far, apart from a negative result about its approximability \\cite{emiris2017approximating}.\nOne\\hyp dimensional subset sum, on the other hand, has a long history of study. Similar to this paper, instances are typically characterized in terms of the ratio of the number of messages \\(n\\) and their encoding length \\(l(n)\\), where the optimal choice for security is \\(l(n) = n\\) \\cite{impagliazzo1996efficient}. For \\(l(n) > 1.06n\\), SSS can be transformed into a lattice shortest vector problem \\cite{joux1991improving,coster1991improved} that can be solved efficiently for certain instances but is, like SSS, NP hard in the general case. For \\(l(n)=\\mathcal{O}(\\log(n))\\), there exists a very efficient dynamic programming solution \\cite{galil1991almost}. Instances with \\(l(n)=n\\) are hard instances in the same sense as in our paper: The number of possible summands equals the number of bits per summand. For them the fastest algorithms still require exponential time. The fastest traditional algorithm runs in \\(\\tilde{\\mathcal{O}}(2^{0.291n})\\) \\cite{becker2011improved}, the fastest quantum algorithm in time \\(\\tilde{\\mathcal{O}}(2^{0.226n})\\) \\cite{bernstein2013quantum}, where the notation \\(\\tilde{\\mathcal{O}}\\) suppresses polynomial factors. 
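For intuition only, the following naive sketch enumerates the search space of the constrained multi\\hyp dimensional subset sum problem from \\Secref{sec:equivalence} for toy sizes. It is not one of the algorithms cited above; its cost is simply the number of \\(K\\)\\hyp element subsets of the \\(2K\\) messages, which is why it (and any comparably naive attack) becomes infeasible long before the secure choice \\(K=dm\/2\\) is reached.\n\\begin{verbatim}\nimport itertools, math, random\n\n# Naive brute-force search for the constrained d-dimensional subset sum problem faced by an\n# adversarial server (cf. the equivalence argument above); it only illustrates the search\n# space and is not one of the algorithms cited in the text.\nd, m, K = 3, 8, 4          # toy sizes; the secure choice K = d*m/2 is far beyond brute force\nmod = 2 ** m\nrandom.seed(1)\n\nb = [tuple(random.randrange(mod) for _ in range(d)) for _ in range(2 * K)]  # public messages\nsecret = random.sample(range(2 * K), K)                                     # hidden K-subset\ntarget = tuple(sum(b[i][j] for i in secret) % mod for j in range(d))        # plays the role of w\n\nsolutions = [c for c in itertools.combinations(range(2 * K), K)\n             if tuple(sum(b[i][j] for i in c) % mod for j in range(d)) == target]\nprint(f'{len(solutions)} matching K-subset(s) among {math.comb(2 * K, K)} candidates')\n\\end{verbatim}\n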
Thus, despite significant efforts, no efficient algorithm has been found for the hardest set of one\\hyp dimensional SSS instances and it seems likely that this will be also the case for its multi\\hyp dimensional counterpart.\nNote that for small \\(n\\), the problem is still solvable. This translates to both \\(d\\) and \\(m\\) being small. Since \\(m=\\lceil\\log(N)\\,\\tilde{m}\\rceil\\) and the server could lie to the clients about \\(N\\), in the case of \\(d\\) being small, \\(\\tilde{m}\\) has to be chosen sufficiently large. Because this choice is transparent to the clients, they are able to detect when the server chooses a too small value and can refuse to participate in the training.\n\n\n\n\n\n\\subsection{Protecting the Global Gradient via Differential Privacy}\n\\label{sec:differential}\nUp to this point we allowed the server to learn the exact global gradient \\(g^t\\) and ensured that the server cannot learn anything about the clients' data beyond what can be learned from \\(g^t\\). In practice this is often not sufficient to keep the data private. Just the value of the global gradient can already reveal the values of all local gradients \\(g_i^t\\); for an example, see \\Appref{appsec:differential}. This problem can be solved by adding noise to the \\(g_i^t\\) to ensure differential privacy \\cite{dwork2014differential}. Let us assume that we want to achieve a certain privacy level \\((\\varepsilon, \\delta)\\) and that the corresponding necessary variance when adding Gaussian noise is \\(\\sigma^2\\). In standard distributed gradient descent we would have to add \\(\\mathcal{N}(0,\\sigma^2 \\mathbb{I})\\) to each \\(g_i^t\\), resulting in \\(\\mathcal{N}(0,N\\sigma^2 \\mathbb{I})\\) noise for \\(g^t\\). When using SecGD, however, the local gradients are already protected and we only need to ensure differential privacy for the global gradient. If we assume that there are \\(\\tilde{N}\\) honest clients, then each of them only has to add \\(\\mathcal{N}(0,\\sigma^2\/\\tilde{N} \\mathbb{I})\\) to their \\(g_i^t\\) to ensure a final noise of at least \\(\\mathcal{N}(0,\\sigma^2 \\mathbb{I})\\). Thus, the noise added to \\(g^t\\) will be only \\(\\mathcal{N}(0,(1 + N - \\tilde{N})\\sigma^2 \\mathbb{I})\\). A more detailed exposition can be found in \\Appref{appsec:differential}.\n\n\n\\subsection{Preprocessing}\n\\label{sec:preprocessing}\nFor our solution we need to represent the entries of the gradient vectors as elements from the group \\(\\mathbb{Z}_{2^m}\\), {i.e.}\\xspace, the integers modulo \\(2^m\\) for an integer \\(m\\). Here, \\(2^m\\) is an upper bound on the entries of the global gradient \\(g^t\\), derived from an upper bound \\(2^{\\tilde{m}}\\) on the entries of the local gradients and the number \\(N\\) of users: \\(m=\\lceil\\log(N)\\,\\tilde{m}\\rceil\\). The clients can transform their real\\hyp valued gradient vectors to elements from \\(\\mathbb{Z}_{2^m}\\) in a preprocessing step that we detail in \\Appref{sec:preprocessing}.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Protocol}\n\\xhdr{Main idea} The idea behind SecGD is to first add noise to the gradients prior to sending them, let the server sum them up to obtain a noisy version of \\(g^t\\), and to then tell the server how much noise was added so that it can remove the noise from \\(g^t\\). However, the noise vectors of the different clients get mixed throughout the process, turning the reconstruction of any of the \\(g_i^t\\) into a computationally hard subset sum problem. 
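In code, the bookkeeping behind this idea is tiny. The following sketch (with toy sizes, standing in for the real protocol only schematically) verifies that summing all messages returns the true global gradient modulo \\(2^m\\), even though each individual message, taken on its own, is uniformly distributed:\n\\begin{verbatim}\nimport random\n\n# Toy check of the masking idea over Z_{2^m}: each client splits its gradient into K+1\n# messages whose sum is the gradient; individually the messages look like uniform noise.\nrandom.seed(0)\nd, m, K, N = 4, 16, 5, 3           # toy sizes only\nmod = 2 ** m\n\ndef mask(gradient):\n    noise = [[random.randrange(mod) for _ in range(d)] for _ in range(K)]\n    masked = [(gradient[j] - sum(s[j] for s in noise)) % mod for j in range(d)]\n    return [masked] + noise        # the K+1 messages that are actually sent\n\ngradients = [[random.randrange(mod) for _ in range(d)] for _ in range(N)]\nmessages = [msg for g in gradients for msg in mask(g)]   # what the server sees (unordered)\n\nserver_sum = [sum(msg[j] for msg in messages) % mod for j in range(d)]\ntrue_sum = [sum(g[j] for g in gradients) % mod for j in range(d)]\nassert server_sum == true_sum\nprint('global gradient recovered exactly:', server_sum)\n\\end{verbatim}\n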
So instead of sending one message containing \\(g_i^t\\), each client additionally generates \\(K\\) independent random vectors \\(s_{i1}^t,\\dots,s_{iK}^t\\) and sends the following \\(K+1\\) messages:\n\\begin{equation*}\n \\tilde{g}_i^t \\mathrel{\\vcentcolon=} g_i^t - \\sum_{k=1}^K s_{ik}^t, \\hspace{5mm}\n s_{i1}^t,\n \\hspace{5mm}\n \\dots,\n \\hspace{5mm}\n s_{iK}^t.\n\\end{equation*}\nTo obtain \\(g^t\\), the server simply has to sum up all messages it received in the \\(t\\)\\hyp th training round, so from the utility perspective nothing has changed over regular distributed gradient descent. What about privacy? If the server knows which of the messages were sent by the same client \\(i\\), summing them up reveals \\(g_i^t\\)---exactly what we want to avoid.\nThe server has two ways to link messages from the same client with each other: (1) via their content and (2) via the metadata of the network packages. We will discuss both of those in the following two paragraphs.\n\n\\xhdr{Package content} For making the messages unlinkable via the vectors they contain, we need to make them, or at least the \\(s_{ik}^t\\), all look the same. This can easily be done by sampling them\nindependently\nfrom the same distribution. We, however, also do not want \\(\\tilde{g}^t_i\\) itself to carry any information about \\(g^t_i\\). This could for example happen if \\(K\\) were small and the \\(s_{ik}^t\\) were sampled from a distribution with small variance. This is why we choose the uniform distribution on \\(\\mathbb{Z}_{2^m}^d\\) for the additional summands, that is,\n\\(s_{ik}^t {\\sim} \\mathcal{U}(\\mathbb{Z}_{2^m}^d)\\) i.i.d.\nAs a consequence, \\(\\tilde{g}_i^t\\), too, is uniformly distributed on \\(\\mathbb{Z}_{2^m}^d\\). Furthermore, any \\(K\\)\\hyp element subset of the \\(K+1\\) messages a client sends is statistically independent. In \\Secref{sec:guarantee} we show that the information that still remains in the set of messages cannot be used by a computationally bounded adversary if we choose \\(K=dm\/2\\) (we do not\nhave to send \\(dm\/2\\) vectors; {cf.}\\xspace\\ \\Secref{sec:efficiency}).\n\n\\xhdr{Metadata} There are two types of metadata that the server receives,\n(1)~the IP address from which a package was sent and\n(2)~the time at which a package arrives.\nTo make this information useless, we first route all messages through an anonymization network, such as Tor \\cite{syverson2004tor}, or through a similar proxy server infrastructure, thereby removing all information that was originally carried by the IP address.\nSecond, to remove any information that the arrival times of packages might contain, we set the length of one training iteration to \\(n\\) seconds and make clients send their packages not all at once, but spread them randomly over the \\(n\\) seconds, thus making the packages' arrival times useless for attacks.\nWithout this measure, all update packages from the same user might be sent right after one another, and the server would receive groups of packages with larger breaks after each group and could thus assume that each such group contains all packages from exactly one user.\n\n\\xhdr{Malicious adversary} So far we worked under the assumption that the server is honest but curious, i.e., that it might try to infer additional information from the data it receives but that it at least honestly follows the protocol. 
Due to the nature of our protocol, the only way the server could deviate from it would be to send different parameter vectors or different additional organizational information (such as the length of a training period) to different clients.\nIn order to prevent this, we give clients a way to recognize when the server violates the protocol:\nInstead of requesting the current parameter vectors and information once from the server, the clients request it multiple times per iteration.\nOnly if they get the same response every time do they respond; otherwise they must assume an attack.\nSince the clients' requests are routed through an anonymization network, the server cannot identify subsequent requests from the same client and cannot maliciously send the same spurious data every time. To reduce communication costs, the clients do not actually request the data multiple times, but only once in the beginning, and afterwards request hashes of it.\nAs an even safer countermeasure, one could distribute the data that would otherwise be obtained directly from the server via a blockchain. This way, each user would be able to verify the integrity of the data they receive.\nIn the initial setup phase, the server also has to tell each client the total number of clients \\(N\\) so that they can compute \\(m\\), the number of bits to use for their vectors. Lying about \\(N\\), however, would not give the server any significant advantage since it would have to lie to all clients in the same way. We go more into detail about this in \\Secref{sec:hardness_practice}.\n\n\n\n\n\\subsection{Improving Communication Efficiency}\n\\label{sec:efficiency}\nIt is not necessary to send the \\(s_{ik}^t\\) as vectors, which would be of the same, potentially high, dimension as the gradient. Instead, the server and the clients agree on a common random number generator (RNG) beforehand, e.g., by hardcoding it. A client then generates \\(K\\) seeds \\(S_{i1}^t, \\dots, S_{iK}^t\\) and uses the RNG to compute \\(s_{i1}^t, \\dots, s_{iK}^t\\). It then sends the vector \\(\\tilde{g}_i^t\\) and the scalars \\(S_{i1}^t, \\dots, S_{iK}^t\\), which are used by the server to compute \\(s_{i1}^t, \\dots, s_{iK}^t\\) once again.\n\nHow many bits do we need for the seeds? Using seeds with fewer bits than the random vectors that are generated from them increases the probability of collisions, i.e., two users generating the same seeds and hence the same random vectors by chance. This might weaken the hardness guarantee in \\Secref{sec:hardness}. As we will see later, only collisions between the vectors of two of the users are to be avoided. If \\(q\\) is the number of bits used for the seeds, we can easily upper bound the collision probability \\(p\\) by assuming that the event of the collision of any two seeds is independent of the event of the collision of any two other seeds. We can then arrange the seeds in a list and compute the probability that the second seed collides with the first one, the probability that the third seed collides with the first or the second one and so on. 
Summing up yields \\(p \\leq \\frac{2K(2K-1)}{2}\\frac{1}{2^q}\\).\nFor a desired target probability \\(p\\), we need to choose \\(q = \\log\\left(\\frac{2K(2K-1)}{2p}\\right)\\).\nFor \\(d=10^6\\) dimensions, an encoding length of \\(m=30\\) bits, a collision probability of \\(p=10^{-10}\\) and the most secure choice for \\(K\\), namely \\(K=dm\/2\\) ({cf.}\\xspace\\ \\Secref{sec:guarantee}), only 82 bits are required per seed.\n\n\n\\subsection{Computation and Communication Cost}\nFor the runtime and communication analysis we will assume that the computation of the gradients \\(g_i^t\\) takes time in the order of the bit length, \\(\\mathcal{O}(d\\tilde{m})\\), where \\(\\tilde{m}\\) is the number of bits used to represent each gradient entry. \\(d\\tilde{m}\\) is the size of the output of the gradient computation and hence a lower bound; a higher computation time would favor our protocol because it would reduce the multiplicative overhead of our method. We further assume that the parameter vector \\(w\\) is encoded using one 32 bit float per dimension. We analyze a single training iteration. In the baseline case where each client sends their gradient directly, the runtime complexity for a client is \\(\\mathcal{O}(d\\tilde{m})\\), for the server it is \\(\\mathcal{O}(Nd\\tilde{m})\\). Each client needs to request the parameters from the server and send the gradient, which are \\(d(32 + \\tilde{m})\\) bits in total.\n\nMoving to SecGD, we show in the next section that the most secure choice for \\(K\\) is \\(K=dm\/2\\). For this parameter choice, every client has to sample \\(dm\/2\\) random seeds, generate the corresponding random vectors and add them to their gradient. This has complexity \\(\\mathcal{O}(d^2 m^2)\\). The server also has to generate those random vectors and add them up, leading to a complexity of \\(\\mathcal{O}(N d^2 m^2) = \\mathcal{O}(N\\log(N)^2 d^2 \\tilde{m}^2)\\). Each client needs to receive the model parameters and send both the sum of the gradient and the random vectors, together with the seeds. These are \\(\\mathcal{O}(32d + dm + Kq) = \\mathcal{O}(dm\\log(d m)) = \\mathcal{O}(d\\tilde{m}\\log(N)\\log(d \\tilde{m}\\log(N)))\\) bits.\n\nTo conclude, the computation increases by a factor of \\(\\mathcal{O}(d\\tilde{m}\\log(N)^2)\\) and the communication by a factor of \\(\\mathcal{O}(\\log(N)\\log(d\\tilde{m}\\log(N)))\\) over directly sending the gradients, for both the clients and the server.\n\n\n\n\\subsection{SecGD in a Nutshell}\n\\label{sec:nutshell}\nSecGD operates in the group \\(\\mathbb{Z}_{2^m}^d\\), to which real valued vectors must be mapped. In each training iteration, the server sends the current parameters \\(w^t\\) to the clients, which then compute the gradients \\(g_i^t\\) w.r.t. their local dataset. They sample \\(K=dm\/2\\) random seeds, use them to generate \\(K\\) uniformly random vectors in \\(\\mathbb{Z}_{2^m}^d\\) and subtract them from their gradient to obtain \\(\\tilde{g}_i^t\\). Then they send \\(\\tilde{g}_i^t\\) and the random seeds through an anonymization network to the server. The server uses the seeds to generate the corresponding random vectors, and adds them to the vectors \\(\\tilde{g}_i^t\\) it has received to get the global gradient \\(g^t\\), which it uses to update the model parameters. 
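A compact sketch of one such training step is given below; it uses Python's random module, seeded with the transmitted seed, as a stand-in for the agreed-upon RNG, and toy sizes throughout. It mirrors the protocol as summarized above and is not meant as production code.\n\\begin{verbatim}\nimport random\n\n# Sketch of one SecGD iteration with seed-based masking (toy sizes).\nd, m, K, N = 4, 16, 3, 5\nmod = 2 ** m\n\ndef rng_vector(seed):\n    r = random.Random(seed)            # stand-in for the common RNG agreed upon beforehand\n    return [r.randrange(mod) for _ in range(d)]\n\ndef client_messages(gradient):\n    seeds = [random.getrandbits(82) for _ in range(K)]\n    masked = gradient[:]\n    for seed in seeds:\n        v = rng_vector(seed)\n        masked = [(masked[j] - v[j]) % mod for j in range(d)]\n    # K+1 separate messages, each sent individually through the anonymization network\n    return [('vec', masked)] + [('seed', s) for s in seeds]\n\ndef server_aggregate(messages):\n    # The server never needs to know which messages came from the same client.\n    total = [0] * d\n    for kind, payload in messages:\n        v = payload if kind == 'vec' else rng_vector(payload)\n        total = [(total[j] + v[j]) % mod for j in range(d)]\n    return total                        # the global gradient g^t\n\ngradients = [[random.randrange(mod) for _ in range(d)] for _ in range(N)]\ninbox = [msg for g in gradients for msg in client_messages(g)]\nrandom.shuffle(inbox)                   # the arrival order carries no information\ng_t = server_aggregate(inbox)\nassert g_t == [sum(g[j] for g in gradients) % mod for j in range(d)]\nprint('global gradient g^t =', g_t)\n\\end{verbatim}\n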
See \\Figref{fig:protocol} for a graphical overview of SecGD.\n\n\\begin{figure}\n\\procedure{\\(t\\)\\hyp th training step}{\n\\textbf{Server} \\> \\> \\textbf{Client \\(i\\)} \\\\\n\\rightmsgsmall{\\text{Send model parameters } w^t} \\\\\n\\> \\> \\text{Compute local gradient } g_i^t \\leftarrow \\nabla_w L(w^t,D_i) \\\\\n\\> \\> \\text{Sample seeds } S_{i1}^t,\\dots,S_{iK}^t \\\\\n\\> \\> s_{i1}^t\\leftarrow\\RNG(S_{i1}^t),\\dots,s_{iK}^t\\leftarrow\\RNG(S_{iK}^t) \\\\\n\\leftmsgsmall{\\text{Send } \\tilde{g}_i^t = g_i^t - \\sum_{k=1}^K s_{ik}^t} \\\\\n\\leftmsgsmall{\\text{Send } S_{i1}^t,\\dots,S_{iK}^t} \\\\ s_{11}^t\\leftarrow\\RNG(S_{11}^t),\\dots,s_{NK}^t\\leftarrow\\RNG(S_{NK}^t) \\> \\> \\\\\n\\text{Compute global gradient } g^t\\leftarrow\\sum_{i=1}^N \\left(\\tilde{g}_i^t + \\sum_{k=1}^K s_{ik}^t\\right) \\> \\> \\\\\n\\text{Update model } w^{t+1} \\leftarrow w^{t} - \\eta^t \\left(\\frac{1}{N} g^t + \\lambda \\nabla R(w^t)\\right) \\> \\>\n}\n\\caption{One step of the training procedure with SecGD.}\n\\label{fig:protocol}\n\\end{figure}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}