diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznebb" "b/data_all_eng_slimpj/shuffled/split2/finalzznebb" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznebb" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{i}\n\nThe halo mass function (HMF hereafter) is a unique prediction of\ncosmological models of structure formation. \nThe evolution of the HMF traced by galaxy clusters has been recognized\nsince a long time as a powerful tool to trace the growth of cosmic\nstructures and, therefore, to constrain cosmological parameters\n\\cite[see][for reviews, and references\ntherein]{Rosati2002,Allen2011}. In particular, cosmological\napplications of the HMF require to know its shape and evolution to a\nhigh precision, in order to fully exploit its potential as a\ncosmological tool to be applied to ongoing and future large surveys of\ngalaxy clusters \\citep[e.g.][]{Wu2010,Murray2013}.\n\nN--body simulations covering wide dynamic ranges are nowadays\nproviding rather accurate calibration of the mass function of Dark\nMatter (DM) halos \\citep[e.g.][]{Jenkins2001, Reed2003, Reed2007,\n Reed2013,Lukic2007,Tinker2008,Crocce2010,Courtin2011,\n Bhattacharya2011,Angulo2012,Watson2013}. Various extensions of the\nstandard $\\Lambda$CDM cosmology model, such as coupled dark energy\nmodels \\citep[e.g.][]{Cui2012b, Baldi2012}, modified gravity models\n\\citep[e.g.][]{Schmidt2009,Zhang2013,Puchwein2013}, non-Gaussian\ninitial conditions \\citep[e.g.][]{Grossi2009,Pillepich2010}, massive\nneutrinos \\citep[e.g.][]{Brandbyge2010,Ichiki2012,Costanzi2013}, Warm\nDark Matter \\citep[e.g.][]{Schneider2013,Angulo2013}, have been\nstudied using numerical simulations, and their effect on the HMF has\nbeen investigated.\n\nA crucial aspect in the \ncalibration of the HMF is related to the algorithm used to identify\nhalos, the two most widely used being the Friend of Friend (FoF) one\nand the Spherical Overdensity (SO) one. 
The choice of a specific\nalgorithm clearly impacts both the number of identified halos and\ntheir mass \citep[e.g.][; see also \citealt{Knebe2011} for a\ndetailed comparison between different halo finders]{White2001,\n White2002, Lukic2009, More2011, Watson2013}.\n\nAll the above mentioned HMF calibrations are based on N--body\nsimulations that follow the evolution of a collisionless DM fluid. On\nthe other hand, the presence of baryons is known to add subtle but\nsizeable effects to halo formation and internal structure, whose\ndetails also depend on the physical processes included in the\nnumerical treatment of the baryonic component, such as gas\ncooling, star formation and energy feedback from supernovae (SN) and\nActive Galactic Nuclei (AGN) \citep[e.g.][and references\ntherein]{KravtsovBorgani2012}.\n\nA number of studies based on cosmological hydrodynamical simulations\nhave recently been carried out to analyse in detail the effect of\nbaryonic processes on different properties of the total mass\ndistribution, such as the power spectrum of matter density\nfluctuations \citep[e.g.][]{Rudd2008, Daalen2011, Casarini2012}, the\nhalo correlation functions \citep[e.g.][]{Zhu2012, Daalen2013}, the\nhalo density profiles \citep[e.g.][]{Duffy2010, Lin2006} and\nconcentrations \citep[e.g.][]{Rasia2013, Bhattacharya2013}, and the HMF\n\citep[e.g.][]{Stanek2009, Cui2012a, Sawala2013, Martizzi2013,\nCusworth2013, Balaguera2013, Wu2013}.\n\nAs for the effect of non--radiative hydrodynamics, the presence of\nbaryons has been shown to induce a slight increase of the HMF\n\citep[][hereafter Paper I]{Cui2012a}. When extra--heating is included,\n\cite{Stanek2009} found instead a decrease in the HMF. As for the\neffect of radiative cooling, star formation and SN feedback, different\ngroups consistently found an increase of the HMF, an effect that is\nmore evident at the high--mass end\n\citep{Stanek2009,Cui2012a,Martizzi2013}. 
On the other hand,\n\citet{Sawala2013} found that efficient SN feedback produces the\nopposite effect in low--mass halos.\n\nIn the last few years, a number of analyses have shown\nthat including AGN feedback in cosmological simulations provides\npopulations of galaxy clusters in better agreement with observational\nresults \citep[e.g.][]{Puchwein2008, ShortThomas2010, Fabjan2010,\n McCarthy2011, Planelles2013, LeBrun2014}.\nWhen AGN feedback is included, different results were found by\n\citet{Martizzi2013} and \citet{Cusworth2013}. The former showed that\nthe HMF with AGN feedback is higher than the fitting function from\n\citet{Tinker2008}, while the latter predicted a lower HMF compared to\nthe same fitting function. However, their implementations of AGN\nfeedback differ. \cite{Martizzi2013} described AGN feedback by\nexplicitly computing gas accretion rates onto super-massive black\nholes (SMBHs), included as sink particles in simulations that also\ninclude radiative cooling, star formation and SN feedback\n\citep[e.g.][]{Springel2005,BoothSchaye2009}. \cite{Cusworth2013}\nincluded AGN feedback by computing the associated feedback energy from\nthe semi--analytic model of galaxy formation by \cite{Guo2011},\nwithout including radiative cooling and assuming zero mass for gas\nparticles, so that no back-reaction of baryons on the DM distribution\nis allowed.\n\nIn this paper we extend our previous analysis of baryonic effects on\nthe HMF, presented in \citetalias{Cui2012a}, by also including in our\nsimulations the effect of AGN feedback. We directly compare the HMFs\nobtained from DM--only simulations to those produced by radiative\nhydrodynamical simulations, both with and without AGN feedback, using\nexactly the same initial conditions, mass and force resolutions. The\nplan of the paper is as follows. In Section \ref{simulation}, we\npresent the simulations analysed in this paper. 
Section \ref{halo} is\ndevoted to the description of the halo identification methods. In\nSection \ref{results} we present the results of our analysis and\ndescribe in detail the differences in the HMF induced by the different\nfeedback models. Our results are discussed and summarised in Section\n\ref{concl}.\n\n\section{The Simulations}\n\label{simulation}\n\nThree large--volume simulations are analysed in this paper, namely two\nhydrodynamical simulations, which include different descriptions of the\nfeedback processes affecting the evolution of baryons, and one N--body\nsimulation including only DM particles. Initial conditions for these\nsimulations are the same as described in \citetalias{Cui2012a} and we refer\nto that paper for further details. The hydrodynamical simulations have\nthe same number of dark matter particles ($1024^3$) and gas particles\n($1024^3$). The first hydrodynamical simulation includes radiative\ncooling, star formation and kinetic SN feedback (CSF hereafter), while\nthe second one also includes the effect of AGN feedback (AGN\nhereafter). As for the DM simulation, it starts from the same initial\nconditions as the hydrodynamical simulations, with the gas particles\nreplaced by collisionless particles, so as to have the same\ndescription of the initial density and velocity fields as in the\nhydrodynamical simulations.\n \nThe three simulations have been carried out using the TreePM-SPH code\n{\small GADGET-3}, an improved version of the {\small GADGET-2} code\n\citep{Gadget2}.\nGravitational forces have been computed using a Plummer--equivalent\nsoftening, which is fixed to $\epsilon_{Pl}=7.5h^{-1}$ physical kpc\nfrom $z=0$ to $z=2$, and fixed in comoving units at higher\nredshift. 
The simulations assume a flat $\Lambda$CDM cosmology with\n$\Omega_{\rm m} = 0.24$ for the matter density parameter, $\Omega_{\rm\n b} = 0.0413$ for the baryon contribution, $\sigma_8=0.8$ for the\npower spectrum normalisation, $n_{\rm s} = 0.96$ for the primordial\nspectral index, and $h =0.73$ for the Hubble parameter in units of\n100$\vel {\rm Mpc}^{-1}$. Initial conditions have been generated at\n$z=49$ using the Zeldovich Approximation for a periodic cosmological\nbox with comoving size $L=410\Mpc$.\nThe masses of gas and DM particles are in a ratio such as to reproduce\nthe cosmic baryon fraction, with $m_g\simeq 7.36 \times 10^{8} \hMsun$\nand $m_{DM}\simeq 3.54 \times 10^{9} \hMsun$, respectively.\n\nIn the hydrodynamical simulations, radiative cooling is computed for\nnon--zero metallicity using the cooling tables by\n\cite{sutherland_dopita93}, also including heating\/cooling from a\nspatially uniform and evolving UV background. Gas particles above a\ngiven threshold density are treated as multi-phase, so as to provide a\nsub--resolution description of the inter--stellar medium, according to\nthe model described by \cite{springel_hernquist03}.\nConversion of collisional gas particles into collisionless star\nparticles proceeds in a stochastic way, with gas particles spawning a\nmaximum of two generations of star particles. We also include a\ndescription of metal production from chemical enrichment contributed\nby SN-II, SN-Ia and AGB stars, as described by\n\cite{tornatore_etal07}. Kinetic feedback is implemented by mimicking\ngalactic ejecta powered by SN explosions, with a wind mass upload\nproportional to the local star-formation rate, $\dot M_w=\eta \dot\nM_*$. 
In the CSF simulation we use $\eta=2$ and $v_w = 500\, {\rm\n km}\, s^{-1}$ for the wind velocity, which corresponds to assuming an\nefficiency of about unity for the conversion of energy released by SN-II\ninto kinetic energy for the adopted Salpeter IMF.\n\nAs for the AGN simulation, it includes both the effect of galactic\nwinds, with $v_w = 350\, {\rm km}\, s^{-1}$ and the same mass--load\nparameter $\eta=2$, and energy feedback\nresulting from gas accretion onto SMBHs. The model of AGN feedback\nused in this simulation is the same as that adopted by\n\cite{Fabjan2010} and is largely inspired by the model originally\nintroduced by \cite{Springel2005b}. SMBHs, seeded with an initial mass\nof $10^6M_\odot$ in halos resolved with at least 100 DM particles,\nsubsequently grow by merging with other BHs and by gas accretion. The\nlatter proceeds at the Bondi rate and is Eddington--limited. A\nfraction $\epsilon_r=0.1$ of the accreted mass is converted into\nradiation, with a fraction $\epsilon_f$ of this radiation thermally\ncoupled to the surrounding gas. We assume $\epsilon_f=0.1$, a value which\nincreases by a factor of four whenever accretion takes place at a rate\nbelow one-hundredth of the Eddington limit. \n\nWe note that the main motivation for efficient SN feedback with $v_w =\n500\, {\rm km}\, s^{-1}$ in the CSF simulation lies in the need to\nreconcile simulation predictions for the cosmic star formation rate\nwith observations, at least at redshift $z>2$, a choice that still\nproduces too efficient star formation at lower redshift\n\citep[e.g.][]{Tornatore10}. Although AGN feedback is motivated by the\nneed to reduce the star formation rate at lower redshift, its\neffect is already quite significant at $z\sim2$. 
Therefore, in order\nto prevent too strong a reduction of star formation around this\nredshift when SN and AGN feedback are both included, we decided to\nreduce by a factor of two the kinetic energy associated with the\nformer. This lowers the resulting wind velocity to $v_w = 350\, {\rm\n km}\, s^{-1}$.\n\n\section{Halo identification}\n\label{halo}\n\nThe two most common methods used for halo identification in\nsimulations are the Friend-of-Friend (FoF) algorithm\n\citep[e.g.][]{Davis1985} and the spherical overdensity (SO) algorithm\n\citep{Lacey1994}. The FoF algorithm has only one parameter, $b$,\nwhich defines the linking length as $b l$, where $l=n^{-1\/3}$ is the\nmean inter-particle separation, with $n$ the mean particle number\ndensity. The SO algorithm also has only one free parameter,\nnamely the overdensity $\Delta_c$. The overdensity determines the\nradius of the sphere within which the mean total density equals\n$\Delta_c ~ \rho_{crit}$. Here, $\rho_{crit}$ is the critical cosmic\ndensity. Each of the two halo finders has its own advantages and\nshortcomings \citep[see more details in][and references\n therein]{Jenkins2001, White2001, \n Tinker2008}, and the differences between the two methods in terms of\nhalo masses and HMFs have been discussed in several analyses\n\citep[e.g.][]{White2002, Reed2003, Reed2007, Cohn2008, More2011,\n Anderhalden2011, Knebe2013, Watson2013}. We adopt both methods to\nidentify halos in this paper.\n\n\subsection{Friend-of-Friend Halos}\n\nIn our three simulations FoF halos are identified by an on-the-fly FoF\nfinder, with a slightly smaller linking length, $b=0.16$, compared to the\ncommonly used $b = 0.2$. Dark matter particles are linked\nfirst. 
Then, each gas and star particle is linked to the nearest\ndark matter particle, whenever the linking criterion is satisfied.\n\n\subsection{Spherical Overdensity Halos -- {\small PIAO}}\n\n\begin{CJK*}{UTF8}{gkai}\nWe carry out a spherical overdensity (SO) halo search by using an\nefficient memory-controlled parallel {\bf P}ython spher{\bf I}c{\bf\n A}l {\bf O}verdensity halo finding code --- {\small PIAO} (Chinese\ncharacter: \u6f02). This code is based on the standard SO algorithm. Its\naim is not to provide a new halo identification method, but to make it\npossible to analyse large simulations on a small computer server or PC\nwith limited memory. To overcome memory limitations, we adopt a simple\nstrategy, which is based on splitting the whole simulation box into\nsmall mesh-boxes and analysing them one-by-one. The details of this\nstrategy and how to incorporate it within the SO method are discussed\nin Appendix \ref{A:Piao}. {\small PIAO} is parallelised with a Python\nMPI package (MPI4py) to speed up the calculation by taking advantage\nof multi-core CPUs.\n\end{CJK*} \n\nWe applied {\small PIAO} to the three simulations analysed in this\npaper. For all of them, SO halos are identified at three different\noverdensity values\footnote{In the following, the overdensity value\n $\Delta_c$ is expressed in units of the cosmic critical density at a\n given redshift, $\rho_c(z)=3H^2(z)\/(8\pi G)$.}, $\Delta_c = 2500,\n500, 200$. As detailed in the appendix, local density maxima, around\nwhich spheres are grown until they encompass a given overdensity, are\nidentified by assigning a density at the position of each particle using 64 SPH neighbours,\nwithout allowing halos to overlap with each other.\n\n\subsection{Matching halos}\n\n\label{sec:match}\nSince all three simulations share the same initial\nconditions, dark matter particles have the same progressive\nidentification numbers (IDs). We exploit this information to match\nhalos identified in different simulations. 
Using a given halo\nidentified in the\nDM simulation as the reference, a halo in the CSF or AGN simulation is\ndefined to be the counterpart of the DM halo whenever it includes the\nlargest number of DM particles belonging to the latter. We define the\nmatching rate as the ratio of the number of matched dark matter\nparticles to the total number in the DM halo. Clearly, the larger this rate, the more\naccurate is the matching. In order to avoid multi-matching, i.e. two\nhalos from the CSF\/AGN simulations matched to one halo in the DM one, only\nhalos with a matching rate larger than $0.5$ are selected. We\nverified that the fractions of matched SO halos for $\Delta_c = 500$\nare $97.5\%$ at $z = 0$, $98.3\%$ at $z = 0.6$, $98.6\%$ at $z =\n1.0$ and $99.4\%$ at $z = 2.2$. Most of the mismatched halos have\nsmall halo masses, e.g. $85\%$ of them have halo mass $M_{500} <\n10^{13} \hMsun$ at $z = 0$. \n\nAt each overdensity $\Delta_c$, we only consider halos with\n$M_{\Delta_c} \geq 10^{12.5} \hMsun$. With this choice, the smallest\nhalo still contains $\sim 1000$ particles within the corresponding\n$R_{\Delta_c}$. However, to allow for a complete matching, we consider\nhalos as small as $M_{\Delta_c}=10^{11.5} \hMsun$ in the AGN and CSF\nsimulations to be matched to the halos in the DM simulation. As shown\nby \cite{Reed2013}, halos resolved with fewer than $N \sim 1000$\nparticles are not suitable for a high-accuracy HMF measurement.\nFurthermore, \cite{Watson2013} also pointed out that the correction of\n\cite{Warren2006} for the low number of particles sampling FoF halos\nis $\sim 2$ per cent for FoF halos containing 1000 particles.\nWe used a fixed mass bin $\Delta \log M = 0.2$ for the calculation of\nthe HMF, without further correction. 
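The ID-based matching procedure above can be sketched in a few lines. The following is a minimal illustration, not the production pipeline, with the catalogues represented as hypothetical dictionaries mapping halo labels to sets of particle IDs:

```python
from collections import Counter

def match_halos(dm_halos, hydro_halos, min_rate=0.5):
    """For each DM halo (a set of particle IDs), find its counterpart in the
    hydro run: the halo containing the largest number of the DM halo's
    particles.  Keep the match only if the matching rate (shared particles /
    total DM particles in the DM halo) exceeds min_rate."""
    # Invert the hydro catalogue: particle ID -> halo label.
    particle_to_halo = {}
    for label, ids in hydro_halos.items():
        for pid in ids:
            particle_to_halo[pid] = label
    matches = {}
    for label, ids in dm_halos.items():
        counts = Counter(particle_to_halo[pid] for pid in ids
                         if pid in particle_to_halo)
        if not counts:
            continue
        best, shared = counts.most_common(1)[0]
        rate = shared / len(ids)
        if rate > min_rate:          # reject multi-matches / poor matches
            matches[label] = (best, rate)
    return matches
```

For example, a DM halo sharing 3 of its 4 particles with a hydro halo is matched with rate 0.75, while one sharing exactly half of its particles is rejected by the $>0.5$ criterion.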
As discussed by \cite{Lukic2007},\nthe uncertainty in the HMF resulting from the choice of the binning is\nnegligible as long as the bin width does not exceed $\Delta \log M =\n0.5$.\n\n\section{Results}\n\label{results}\n\nBasic information on the number of halos identified by the FoF and SO\nfinders can be obtained from the cumulative HMF. We just mention here\nthat over $10^4$ halos are always found with both methods at $z = 0$\nwith halo mass $M \geq 10^{12.5} \hMsun$. This number reaches $\sim\n70000$ for FoF halos and for SO halos with $\Delta_c = 200$. At the\nhighest considered redshift, $z = 2.2$, this number is still $\sim\n10^4$ for FoF halos and for SO halos with $M_{200} \geq 10^{12.5}\n\hMsun$. However,\nat the same redshift we only have $\sim 10^3$ SO halos with $M_{2500}\n\geq 10^{12.5}\hMsun$. The CSF simulation has more halos, both SO and FoF,\nthan the DM one at all redshifts and halo masses, an increase\nthat is less apparent for $\Delta_c = 200$. On the contrary, the AGN\nsimulation produces fewer halos of fixed mass than the DM one. Due to\nthe limited simulation box size, only a few halos have mass $M \geq\n10^{15} \hMsun$ for FoF and for SO with $\Delta_c = 200$. Given the\nlimited dynamical range accessible to our simulations, we attempt in\nthe following to provide fitting expressions for the corrections to the\nHMF induced by baryon effects, while we avoid providing absolute\nfitting functions for the HMF.\n\n\subsection{The HMF from Friend-of-Friend}\n\begin{figure}\n\includegraphics[width=0.5\textwidth]{HMF_fof.eps}\n\caption{Baryon effects on the halo mass function (HMF) for halos\n identified with the Friend-of-Friend (FoF) algorithm. Upper panel:\n FoF halo mass functions (HMFs). Different line styles are for\n different redshifts, as indicated in the legend in the bottom left\n corner, while different colours refer to the different simulations\n (legend in upper right corner). 
Lower panel: ratios between each of\n the HMFs from the hydrodynamical simulations and the HMF of the DM\n simulation.}\n\label{fig:hmf_fof}\n\end{figure}\n\nWe compare in the upper panel of Fig. \ref{fig:hmf_fof} the HMFs for\n the three different simulations, while the lower panel shows the\nrelative difference between each of the two hydrodynamical simulations\nand the DM one. As for the effect of the baryonic physics described by\nthe CSF model, we note that the difference with respect to the DM case\nhas a clear redshift evolution and halo mass dependence. As redshift\ndecreases from $z = 2.2$ to 0, the HMF ratio drops from $\sim 1.6$ to\n$\sim 1.1$, with a weak increasing trend of this ratio with halo mass\nat all redshifts. Quite remarkably, including AGN feedback has the\neffect of reducing the difference with respect to the DM-only case:\nthe HMF ratio drops to about unity for massive halos with $M_{FoF}\n\simgt 10^{14} \hMsun$, while at smaller halo masses it decreases to $\sim\n0.9$ for $M_{FoF} \approx 10^{13} \hMsun$. Unlike the CSF case,\nthese differences do not show any evidence of redshift evolution\nfrom $z = 1$ to $z = 0$. At higher redshift, $z = 2.2$, the HMF\nratio fluctuates around unity, as a consequence of the limited\nstatistics of halos due to the finite box size.\n\n\subsection{The HMF from Spherical Overdensity}\n\begin{figure*}\n\includegraphics[width=1.0\textwidth]{HMF_SO.eps}\n\caption{Effect of baryons on the halo mass function (HMF) for\n spherical overdensity (SO) halos. Each panel is the same as in\n Fig. \protect\ref{fig:hmf_fof}, but for the SO HMFs\n with halo masses computed for $\Delta_c = 2500$ (left panel), 500\n (middle panel) and 200 (right panel). }\n\label{fig:hmf_so}\n\end{figure*}\n\nWe compare in the upper panels of Fig. 
\ref{fig:hmf_so} the HMFs\nobtained from the SO halo finder at three overdensities, $\Delta_c =\n2500, 500, 200$ (from left to right), along with the ratios of the\nHMFs from the CSF and AGN simulations with respect to the DM-only\nresult (lower panels). As expected, baryons have a larger impact at\nthe highest considered overdensity, $\Delta_c = 2500$. In this case,\nthe ratio between the CSF and DM HMFs shows a redshift evolution similar\nto the FoF results but with a higher amplitude, ranging from $\sim\n1.4$ at $z = 0$ to $\sim 2.5$ at $z = 2.2$, but with no significant\ndependence on the halo mass. At lower overdensities, $\Delta_c = 200$\nand 500, the redshift evolution becomes weaker and the differences\nwith respect to the DM case are reduced, with only a $\simlt 5$ per\ncent difference at $\Delta_c=200$ \citepalias[similar results for the\nCSF case were also found by][]{Cui2012a}.\n\nWhen AGN feedback is included in the simulation, the corresponding HMF\ndrops below the HMF from the DM simulation, by an amount that\ndecreases for lower $\Delta_c$ values, with no evidence for a redshift\ndependence of the HMF difference. At $\Delta_c = 2500$, this ratio has\na weak halo mass dependence, ranging from $\sim 0.7$ at $10^{12.5}\n\hMsun$ to $\sim 0.5$ at $10^{14} \hMsun$. At $\Delta_c = 500$ and\n200, the difference between the AGN and DM HMFs is reduced, with a mild\ndependence on halo mass, ranging from $dn\/dn_{DM} \approx 0.7$ and $\approx 0.8$ at $M\n\approx 10^{13} \hMsun$ to $dn\/dn_{DM} \approx 0.9$ and $\approx 1.0$\nat the high--mass end (for $\Delta_c = 500$ and 200, respectively).\n\nIn general, the effect of including baryons on the HMF goes in the\nsame direction, independently of whether the FoF or SO halo finder is\nused. While this holds at a qualitative level, quantitative\ndifferences between FoF and SO results are found, as expected,\nespecially for the AGN case. 
As we will discuss in the following, the\neffect of including AGN feedback is that of producing halos that are\nless concentrated than in the CSF case. As a result, one expects that\nmatching SO and FoF HMFs requires a higher $\Delta_c$ in the CSF\nsimulation than in the AGN simulation. Several efforts have been made to\nmatch the two halo mass functions by tuning $b$ and $\Delta_c$\n\citep[for example][]{Lukic2009, Courtin2011, More2011}. However, as\nshown by \cite{Watson2013}, even in dark-matter-only simulations,\nmatching FoF HMFs to SO HMFs depends not only on the choice of $b$ and\n$\Delta_c$, but also on the halo concentration, the pseudo-evolution of\nhalo mass, and intrinsic limitations of the two algorithms. These\nquantitative differences between FoF and SO results make this matching\nprocedure even more complex when baryonic models are taken into account.\n\n\begin{figure*}\n\includegraphics[width=1.0\textwidth]{MD_fit.eps}\n\caption{Mass dependence of the ratio of masses of matched SO halos\n computed for $\Delta_c = 500$ at different redshifts, as reported in\n the upper--right corner of each panel. Each point represents the halo\n mass ratio between a matched CSF (red points) or AGN (green\n points) halo and its DM counterpart, as a function of the mass of the matched\n halo in the DM simulation. The thick magenta and blue lines show\n the mean values of this ratio within each mass bin for the CSF and\n AGN simulations, respectively. The best--fitting relation for the\n mass correction of Eq. \protect\ref{eq:1} is shown with the solid\n black lines. We note that the same relation provides a good fit at\n all redshifts, at least up to $z=1$. See Table 1 for the values of\n the parameters defining this best-fit relation. 
}\n\label{fig:md_so}\n\end{figure*}\n\nIn order to understand the origin of the baryonic effects on the HMFs\npredicted by our simulations, we further focus on the difference of\nmasses of matched halos at overdensity $\Delta_c = 500$ (see Section\n\ref{sec:match} for the description of the matching procedure). We\nshow in Fig. \ref{fig:md_so} the ratio between the masses of matched halos\nin each of the two hydrodynamical simulations and in the DM\nsimulation (red and green points for the CSF and AGN cases,\nrespectively). In each panel, the thick lines show the mean value of\nthese ratios computed within each mass bin (magenta for CSF and blue\nfor AGN). As for the CSF case, the effect of baryons is that of\nincreasing halo masses by an amount which is almost independent of\nredshift. At each redshift, the halo mass ratio weakly decreases with\nhalo mass, from $\sim 1.1$ at $M_{500} = 10^{12.5} \hMsun$ to $\sim\n1.05$ at $M_{500} \simgt 10^{13.5} \hMsun$, then becoming constant\n\citepalias[see also][]{Cui2012a}. As for the AGN simulation, the\neffect of baryons goes in the opposite direction of decreasing halo\nmasses, thereby decreasing the corresponding HMF, as shown in\nFig. \ref{fig:hmf_so}. Also in this case, there is no evidence for a\nredshift evolution of the halo mass ratio, at least below $z = 1.0$.\nHowever, there is a clear increase of this ratio with halo mass,\nwhich ranges from $\sim 0.8$ at $M_{500} = 10^{12.5} \hMsun$ to $\simeq\n1$ for the most massive halos found in our simulation box. Similar\ntrends are also found for the mass ratios at $\Delta_c = 2500$ and 200,\nwhich likewise show no evidence of redshift dependence for either\nhydrodynamical simulation. We verified that using the median values of\nthe data points gives lines almost identical to the mean ones.\nAs discussed in Appendix C, this reduction of halo\nmasses in the presence of AGN feedback is quite robust against\nnumerical resolution. 
We refer to this Appendix for a more detailed\ndiscussion of the resolution test that we carried out.\n\n\begin{figure}\n\includegraphics[width=0.5\textwidth]{MD_sr.eps}\n\caption{The same as the bottom right panel of Fig. \ref{fig:md_so},\n but for halo masses in the CSF\/AGN simulations computed within the\n $R_{500}$ radius of the corresponding halos from the DM\n simulation. The dashed thick lines show the corresponding results from\n Fig. \ref{fig:md_so} at $z = 0$.}\n\label{fig:sr}\n\end{figure}\n\nSince the masses of SO halos are computed by adding up all the\nparticles within a sphere of radius $R_{\Delta_c}$, it is clear that\na change of the halo density profiles induced by the presence of\nbaryons would also change the corresponding values of\n$R_{\Delta_c}$. In order to quantify the effect of this variation, we\nalso compute the mass of each halo in the CSF\/AGN simulations by using\nthe value of $R_{\Delta_c}$ of the corresponding halo identified in\nthe DM simulation. In Fig. \ref{fig:sr}, we show again the halo mass\ndifference at $\Delta_c = 500$ after applying this re-tuning of the\nhalo radii. A comparison with the $z = 0$ result in\nFig. \ref{fig:md_so} demonstrates that these ratios are only slightly\nshifted towards unity, for both the CSF and AGN models. This small change\nimplies that the differences in halo masses are mostly contributed by\nthe baryon effects on the halo density profiles, which cannot be\nrecovered by simply changing the halo radius.\n\nIncluding only SN feedback in the form of galactic ejecta is\nknown to be unable to regulate overcooling at the centre of\nrelatively massive halos, with $M> 10^{12.5} \hMsun$. Adiabatic\ncontraction \citep[e.g.][]{gnedin2004}, associated with the condensation\nof an exceedingly large amount of cooled gas, then leads to an\nincrease of density within a fixed halo aperture radius and,\ntherefore, to an increase of the halo mass with respect to the DM\ncase. 
The opposite effect is instead associated with the inclusion of\nAGN feedback. In this case, the sudden displacement of a large amount of\ngas at epochs corresponding to the peak of AGN feedback efficiency,\ntaking place at $z\sim 2$--3, causes sudden variations of the halo\npotential, which reacts by expanding, thus decreasing halo masses\n(see the discussion in Section \ref{sec:profs} below).\n\n\n\subsection{Density profiles}\n\label{sec:profs}\nHaving quantified the variation of halo masses, we now discuss how\nthis variation is associated with changes in the total density profiles\nof halos induced by baryonic processes.\n\n\begin{figure*}\n\includegraphics[width=1.0\textwidth]{DP_tot.eps}\n\caption{The stacked ratios of cumulative density profiles,\n $\rho( 0$, with probability one, there exists a time $T$ such that if $t \ge T$, then\n\t\t$$(1-\epsilon)A \subseteq \tfrac{1}{t} B_t \subseteq (1+\epsilon)A.$$\n\t\n\tJust as standard FPP is a model of random geometry on the discrete lattice $\mathbb Z^d$, Riemannian FPP is a model of random geometry in the continuum. We consider a random Riemannian metric $g$ on $\mathbb R^d$ whose distribution is translation-invariant, has finite-range dependence and satisfies certain moment conditions. By the standard construction in Riemannian geometry, this defines a random distance function $d(x,y)$ in $\mathbb R^d$. Kingman's theorem can again be applied to prove that for each unit vector $v$, there exists a non-random constant $\mu_v \ge 0$ such that $\tfrac{1}{n} d(0,nv) \to \mu_v$ a.s. and in $L^1$. From the positive-definiteness of $g$, we prove Theorem \ref{posconst}: $\mu_v > 0$ for all $v \in S^{d-1}$. 
Consider the set\n\t\t$$A = \{x \in \mathbb R^d : |x| \le \mu_{x\/|x|}^{-1} \},$$\n\tand the Riemannian ball of radius $t$ centered at the origin\n\t\t$$B_t = \{ x \in \mathbb R^d : d(0,x) \le t \}.$$\n\tTheorem \ref{shapethm} is the shape theorem for Riemannian FPP, which states that for all $\epsilon > 0$, with probability one, there exists a random time $T > 0$ such that if $t \ge T$, then\n\t\t$$(1-\epsilon)A \subseteq \tfrac{1}{t} B_t \subseteq (1+\epsilon)A.$$\n\tConsequently, we call $A$ the limiting shape of the model. \n\t\n\tTheorem \ref{completeness} follows from the shape theorem: with probability one, no curve $\gamma$, parametrized by Riemannian length, reaches infinity in finite time. When the metric is further assumed to be smooth with probability one, this is geometrically significant: by the Hopf-Rinow theorem \cite{lee1997rmi}, it is equivalent to geodesic completeness of the metric.\n\t\n\n\tTo prove our results, we need some technical estimates on the distance function $d$, which we obtain in Section \ref{disclemmas}. To prove these, we discretize the continuum model: to each point $z \in \mathbb Z^d$, we associate a certain value $X_z$ based on the Riemannian metric $g(x)$ over the unit cube $C_z = [z-1\/2,z+1\/2)^d$. By treating $X_z$ as defining a dependent FPP model on the lattice, we prove some estimates of the ``energy-entropy type'' on $X_z$, and from these derive the desired estimates on $d$.\n\t\n\t\n\t\subsection{Geometry Background and Notation} \n\t\n\tBefore introducing any probabilistic structure, we fix some geometric notation. Consider $\mathbb R^d$ with $d \ge 2$ and the standard Euclidean coordinates. Write\n\t\t$$\operatorname{SPD} = \{ \mbox{symmetric, positive-definite $d \times d$ real matrices} \},$$\n\tand let $g \in C(\mathbb R^d,\operatorname{SPD})$ be a continuous matrix-valued function on $\mathbb R^d$ with values in $\operatorname{SPD}$. 
$g$ defines a Riemannian structure on $\mathbb R^d$: for tangent vectors $v, v' \in T_x \mathbb R^d$, we consider the inner product $\langle v, g(x) v' \rangle$. For a single vector $v$, we denote by $\|v\| = \sqrt{\langle v, g(x) v \rangle}$ and $|v| = \sqrt{\langle v, v \rangle}$ the Riemannian and Euclidean lengths of $v$, respectively. For a $C^1$-curve $\gamma : [a,b] \to \mathbb R^d$, we define the Riemannian and Euclidean lengths of $\gamma$ by\n\t\t$$R(\gamma) = \int_a^b \| \dot \gamma(t) \| \sD t \qquad \mathrm{and} \qquad L(\gamma) = \int_a^b | \dot \gamma(t) | \sD t,$$\n\trespectively. We say that a curve is finite if it has finite Euclidean length; for our model, Theorem \ref{completeness} will imply that curves of finite Riemannian length are finite. The Riemannian distance between two points $x$ and $y$ is defined by\n\t\t$$d(x,y) = \inf_\gamma R(\gamma),$$\n\twhere the infimum is over all $C^1$-curves $\gamma$ connecting $x$ to $y$.\n\t\n\tFor a Riemannian metric $g$, we define the real, positive functions\n\t\t$$\Lambda(x) = \mbox{maximum eigenvalue of $g(x)$} \qquad \mathrm{and} \qquad \lambda(x) = \mbox{minimum eigenvalue of $g(x)$}.$$\n\tFor any $K \subseteq \mathbb R^d$, define\n\t\t$$\Lambda(K) = \sup_{x\in K} \Lambda(x) \qquad \mathrm{and} \qquad \lambda(K) = \inf_{x\in K} \lambda(x).$$\n\tBy the continuity and positivity of $g$, if $K$ is bounded then\n\t\t$$0 < \lambda(K) \le \Lambda(K) < \infty.$$\n\tFor $z \in \mathbb Z^d$, let $C_z = [z - 1\/2, z + 1\/2)^d$ be the unit cube centered at $z$. Write \n\t\t$$\Lambda_z = \Lambda(C_z) \qquad \mathrm{and} \qquad \lambda_z = \lambda(C_z).$$\t\n\n\n\t\n\t\subsection{Riemannian FPP}\n\t\n\n\t\n\tLet $\Omega = C(\mathbb R^d, \operatorname{SPD})$ and let $\mathcal F$ be the $\sigma$-algebra generated by cylinder sets. 
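It may help to record here the standard comparison between the two length functionals that the eigenvalue bounds above provide; this inequality is not labelled in the original but is the mechanism behind the estimates that follow (a reading aid, stated under the same hypotheses as above):

```latex
% For any $C^1$-curve $\gamma$ contained in a bounded set $K$, the
% pointwise bound $\lambda(K)\,|v|^2 \le \langle v, g(x) v \rangle
% \le \Lambda(K)\,|v|^2$ integrates along $\gamma$ to give
$$\sqrt{\lambda(K)}\, L(\gamma) \;\le\; R(\gamma) \;\le\; \sqrt{\Lambda(K)}\, L(\gamma).$$
```

In particular, on any bounded set the Riemannian and Euclidean lengths are comparable, which is what makes the cube-by-cube discretization via $\Lambda_z$ and $\lambda_z$ effective.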
Let $\\mathbb P$ be a translation-invariant probability measure on $(\\Omega, \\mathcal F)$ which has finite-range dependence, and consider a random Riemannian metric $g \\in \\Omega$ with distribution $\\mathbb P$. The finite-range dependence means there exists some $R > 0$ such that if $|x-y|\\ge R$, then $g(x)$ and $g(y)$ are independent. Furthermore, suppose that $\\Lambda_0$ has a finite moment-generating function. That is,\n\t\t\\begin{equation} \\label{finmom}\n\t\t\tM(r) = \\mathbb E[ \\E^{r \\Lambda_0} ] < \\infty \\qquad \\mbox{for all $r \\in \\mathbb R$,} \\end{equation}\n\twhere $\\mathbb E$ denotes expectation with respect to $\\mathbb P$. By translation invariance, the family $\\{\\Lambda_z\\}$ is identically distributed, and we refer to its generic element as $\\Lambda$; similarly for $\\{\\lambda_z\\}$ and $\\lambda$. By Chebyshev's inequality \\cite{durrett1996probability}, \\eqref{finmom} implies that $\\Lambda$ and $\\lambda$ have exponential tail decay:\n\t\t$$\\mathbb P( \\lambda > u ) \\le \\mathbb P( \\Lambda > u ) \\le M(r) \\E^{-r u},$$\n\tfor all $u > 0$ and $r > 0$.\t\n\t\n\tWe provide a concrete example.\n\t\n\t\\begin{env_exa}\n\t\tLet $c : [0,\\infty) \\to \\mathbb R$ be a compactly-supported covariance function (see \\cite{gneiting2002csc} for examples). Let $\\xi : \\mathbb R^d \\to \\mathbb R$ be a mean-zero, stationary, isotropic Gaussian field with covariance function $c$; that is,\n\t\t\t$$\\mathbb E[\\xi(x)] = 0 \\qquad \\mathrm{and} \\qquad \\mathbb E[\\xi(x)\\xi(y)] = c(|x-y|)$$ \n\t\tfor all $x, y \\in \\mathbb R^d$. The covariance $c$ must satisfy certain necessary and sufficient conditions \\cite{talagrand1987regularity} for the field $\\xi$ to be everywhere continuous with probability one; suppose this is the case.\n\t\t\n\t\tLet $g : \\mathbb R^d \\to \\operatorname{SPD}$ be the diagonal matrix-valued function with entries\n\t\t\t$$g_{ii}(x) = \\log(1 + \\E^{\\xi(x)}),$$\n\t\tfor $1 \\le i \\le d$.
This is continuous and positive, so it suffices to show that the assumption \\eqref{finmom} is satisfied. \n\t\t\n\t\t\\begin{env_pro}\n\t\t\tAssumption \\eqref{finmom} is satisfied for this choice of $g$, and the shape theorem (Theorem \\ref{shapethm}) applies. Let $A$ be the limiting shape, defined in \\eqref{limshape}. The measure $\\mathbb P$ is isotropic, so Corollary \\ref{isotropic} implies that $A$ is a Euclidean ball. Furthermore, if the field $\\xi$ is $C^1$ with probability one, then Corollary \\ref{realized} implies that the metric $g$ is geodesically complete.\n\t\t\\end{env_pro}\n\t\t\\begin{proof}\n\n\t\tAssume for simplicity that $c(0) = 1$. The Gaussian concentration inequality \\cite{adler07} implies that\n\t\t\t$$\\mathbb P(\\Lambda_0 > u) = \\mathbb P(\\sup \\xi > \\log(e^u - 1) ) \\le \\exp(-\\log(\\E^u-1)^2\/2) \\le 2 \\E^{-u^2\/2}.$$\n\t\tThe fundamental theorem of calculus and Fubini's theorem imply that\n\t\t\t$$\\mathbb E \\E^{r \\Lambda_0} = \\mathbb E \\! \\left( 1 + \\int_0^{\\Lambda_0} r \\E^{ru} \\sD u \\right) = 1 + \\int_0^{\\infty} r \\E^{ru} \\mathbb P( \\Lambda_0 > u ) \\sD u \\le 1 + 2 \\int_0^{\\infty} r \\E^{ru} \\E^{-u^2\/2} \\sD u,$$\n\t\twhich is finite for all $r$. Thus \\eqref{finmom} is satisfied, and the results of this paper apply to the random Riemannian metric $g$.\n\t\t\\end{proof}\n\t\\end{env_exa}\n\n\tThe random variables $\\lambda_z$ and $\\Lambda_z$ give rise to a dependent FPP model on sites of the lattice $\\mathbb Z^d$. In Section \\ref{depfpp}, we prove some general estimates for dependent FPP, then in Section \\ref{contapps} we apply these to estimates on our distance function $d$. Our techniques are based on the energy-entropy methods of mathematical physics, where one shows that an event occurs with extremely low probability over one particular connected set (``high energy''), but sums this over all possible connected sets at the origin (``high entropy''). 
One adjusts parameters in the problem so that this sum converges, then applies the Borel-Cantelli lemma. A large-deviations estimate like \\eqref{finmom} is critical: the number of connected sets of size $n$ containing the origin grows exponentially in $n$, so the probabilities must decay exponentially in $n$ for the arguments to hold.\n\n\t\n\t\n\t\n\tIn the standard FPP setting of passage times $t_b$ across bonds $b$, Cox and Durrett \\cite{cox1981slt} prove that a necessary and sufficient condition for a shape theorem is that $\\mathbb E \\min\\{t_1, \\dots, t_{2d}\\}^d < \\infty$, where $t_i$ are $2d$ independent copies of $t_b$. Thus we believe that our assumption \\eqref{finmom} is not the most general, and can be replaced by a finite moment estimate on $\\Lambda$ to prove a more general result. \n\t\n\tIn probability theory, Kingman's subadditive ergodic theorem \\cite{durrett1996probability} is used to prove that stationary, subadditive sequences obey laws of large numbers. If $X_{n,m}$ is a non-negative, stationary sequence which satisfies $X_{n,m} \\le X_{n,r} + X_{r,m}$ for $n \\le r \\le m$, then $\\tfrac{1}{n} X_{0,n}$ converges almost surely and in $L^1$. Furthermore, if the sequence is ergodic, this convergence is to a non-random constant. In our context, the sequence in question is $X_{n,m} = d(nv,mv)$ for a fixed unit vector $v$. The subadditivity condition is exactly the triangle inequality for $d$, so Kingman's theorem implies that for each $v \\in S^{d-1}$, there exists a non-random constant $\\mu_v \\ge 0$ such that\n\t\t\\begin{equation} \\label{kingman}\n\t\t\t\\lim_{t\\to\\infty} \\tfrac{1}{t} d(0,tv) = \\mu_v \\end{equation}\n\talmost surely and in $L^1$. The constants $\\mu_v$ may depend on the direction $v$, though if the measure $\\mathbb P$ is isotropic (rotationally-invariant) then $\\mu_v = \\mu$ will be independent of $v$.
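As a toy illustration of the limit \eqref{kingman}, consider the degenerate one-dimensional situation in which the cube costs are i.i.d.; there Kingman's theorem reduces to the strong law of large numbers. A Python sketch (illustrative only; the Uniform$(0.5, 1.5)$ cost law is an arbitrary choice):

```python
import random

def toy_costs(n, rng):
    """I.i.d. stand-ins for the cube costs (toy example, not the actual model)."""
    return [rng.uniform(0.5, 1.5) for _ in range(n)]

def d_toy(costs, n):
    """In this degenerate 1-d model, the 'distance' from 0 to n is the sum of the first n costs."""
    return sum(costs[:n])

rng = random.Random(0)
costs = toy_costs(100000, rng)
# d(0, n)/n approaches the time constant mu = E[cost] = 1.0 as n grows.
ratios = {n: d_toy(costs, n) / n for n in (10, 1000, 100000)}
```

In the genuine model the summands $d(nv, (n+1)v)$ are neither independent nor a sum of the costs, which is exactly why the subadditive ergodic theorem is needed in place of the law of large numbers.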
We show in Theorem \\ref{posconst} that $\\mu_v > 0$ for all $v$.\n\t\n\t\\begin{env_pro} \\label{contmu}\n\t \t$\\mu_v$ is a continuous function of $v$.\n\t\\end{env_pro}\n\t\\begin{proof}\n\t\tThis remarkably short proof is due to Kesten \\cite{kesten1180arp} (see his Proof of Theorem 1.7 on page 158). First we show that the function $\\mu_v$ is bounded above on $S^{d-1}$. Write $v = \\sum_1^d v^i \\E_i$, for the standard basis vectors $\\E_i$ in $\\mathbb R^d$. We use the triangle inequality to bound $d(0,tv)$ by the sum of the distances between successive points $0$, $t v^1 \\E_1$, $t v^1 \\E_1 + t v^2 \\E_2$ and so on until $tv$. Translation-invariance implies\n\t \t\t$$\\mathbb E d(0,tv) \\le \\mathbb E d(0,tv^1 \\E_1) + \\dots + \\mathbb E d(0,tv^d \\E_d).$$\n\t \tDividing by $t$ and taking the limit $t \\to \\infty$ gives\n \t\t\t$$\\mu_v = \\lim_{t\\to\\infty} \\tfrac{1}{t} \\mathbb E d(0,tv) \\le \\sum_{i=1}^d \\lim_{t\\to\\infty} \\tfrac{1}{t} \\mathbb E d(0, tv^i \\E_i).$$\n \t\tThe terms which equal zero we may ignore; for the non-zero terms, we make the substitution $t' = |v^i| t,$ so that the right-hand side equals\n \t\t\t$$\\sum_{i=1}^d \\lim_{t'\\to\\infty} \\tfrac{|v^i|}{t'} \\mathbb E d(0, \\pm t' \\E_i) = \\sum_{i=1}^d |v^i| \\mu_{\\E_i} \\le d \\max\\{\\mu_{\\E_i}\\},$$\n \t\tas desired. \n \t\t \t \t\t\n\t \tNow, consider two different unit vectors $v$ and $v'$, and write $u = \\tfrac{v-v'}{|v-v'|}$. By the same arguments,\n\t \t\t$$|\\mu_v - \\mu_{v'}| \\le \\lim_{t\\to\\infty} \\tfrac{1}{t} \\mathbb E d(tv,tv') = \\mu_u |v - v'|.$$\n\t \tThis tends to zero as $v' \\to v$ since $\\mu_u$ is bounded above.\n\t\\end{proof}\t\n\t\n\t\n\t\n\t\\section{Discretization Lemmas} \\label{disclemmas}\n\t\n\t\\subsection{Dependent FPP on a Lattice} \\label{depfpp}\n\n\t\tFor a continuous curve $\\gamma$, we would like to introduce a discrete analogue $\\Gamma$ on the lattice. For example, $z \\in \\Gamma$ if $\\gamma$ meets the cube $C_z$.
However, this set $\\Gamma$ is not connected on $\\mathbb Z^d$; consider the straight line from $0$ to $(1,1)$ in $\\mathbb R^2$. We get around this by modifying the familiar graph structure of $\\mathbb Z^d$ to introduce a new lattice, which we call the $*$-lattice. In this section, we prove some estimates for dependent FPP on the $*$-lattice, then in Section \\ref{contapps} we apply these estimates to the continuum model.\n\t\t\n\t\tFor $z \\in \\mathbb Z^d$, we write $z = (z^1, \\dots, z^d)$. We say that $z, z' \\in \\mathbb Z^d$ are $*$-adjacent if $z \\ne z'$ and $\\max_{1\\le i\\le d} |(z - z')^i| \\le 1$. The $*$-lattice is the graph with vertex set $\\mathbb Z^d$, and edge set given by $*$-adjacency; that is, the usual lattice $\\mathbb Z^d$ along with all the diagonal edges.\n\t\t\n\t\tWe say that a set $\\Gamma \\subseteq \\mathbb Z^d$ is $*$-connected if for all $z, z' \\in \\Gamma$, there is a path from $z$ to $z'$ along the $*$-lattice which remains in the set $\\Gamma$. Technically, this means that there is a finite sequence of $*$-adjacent points beginning with $z$ and ending with $z'$, all contained in $\\Gamma$. \n\t\t\n\t\tLet $S_n$ be the number of $*$-connected sets of size $n$ which contain the origin.\n\t\t\n\t\t\\begin{env_lem} \\label{numsets}\n\t\t\tThere exists $\\sigma$ such that $S_n \\le \\sigma^n$. Obviously, $\\sigma > 1$. \n\t\t\\end{env_lem}\n\t\t\\begin{proof}\n\t\t\tThe sequence $\\log S_n$ grows at most linearly: a $*$-connected set of size $n$ containing the origin is encoded by a depth-first exploration path which starts at the origin and has length at most $2n$, each step of which has at most $3^d - 1$ choices, so $\\log S_n \\le 2n \\log(3^d - 1)$. Hence the constant $a = \\sup_n \\tfrac{1}{n} \\log S_n$ is finite (and, since a concatenation argument shows that $\\log S_n$ is superadditive, Fekete's lemma \\cite{fekete1923verteilung} identifies $a$ as the limit of $\\tfrac{1}{n} \\log S_n$), and $\\log S_n \\le a n$ for all $n$.
Defining $\\sigma = \\E^a$ proves the result.\n\t\t\\end{proof}\n\t\t\n\t\tLet $X_z$ be a stationary, non-negative random field on the $*$-lattice with finite-range dependence, and with a finite moment-generating function\n\t\t\t\\begin{equation} \\label{finmom2}\n\t\t\t\tM(r) = \\mathbb E[\\E^{r X}] < \\infty \\qquad \\mbox{for all $r \\in \\mathbb R$.} \\end{equation}\n\t\tThe finite-range dependence means that there is an integer $R \\ge 1$ such that if $|z - z'| \\ge R$, then $X_z$ and $X_{z'}$ are independent.\n\t\t\n\t\tIf $\\Gamma \\subseteq \\mathbb Z^d$ is a collection of lattice points, we write\n\t\t\t$$X(\\Gamma) = \\sum_{z \\in \\Gamma} X_z,$$\n\t\tand call this the passage time of $\\Gamma$. \n\t\t\n\t\tThe following two lemmas can be thought of as spatial laws of large numbers. The first says that for sufficiently large $n$, if there is a uniform bound on $X(\\Gamma)\/n$, then there is also a uniform bound on $|\\Gamma|\/n$. The second lemma reverses the implication, though with different constants.\n\t\t\t\t\n\t\t\\begin{env_lem} \\label{passstep}\n\t\t\tSuppose $X_z$ additionally satisfies\n\t\t\t\t\\begin{equation} \\label{zeroatom}\n\t\t\t\t\t\\mathbb P(X = 0) < \\sigma^{-(2R+1)^d}. 
\\end{equation}\n\t\t\tFor any $A > 0$ there is a non-random $B > 0$ such that, with probability one, for any sequence $a_n \\in \\mathbb Z^d$, there exists $N > 0$ such that for all $n \\ge N$, if $\\Gamma$ is a $*$-connected set which contains the point $a_n$ and $X(\\Gamma) \\le An$, then $|\\Gamma| \\le Bn.$\n\t\t\\end{env_lem}\n\t\n\t\t\\begin{env_lem} \\label{upbound}\n\t\t\tFor any $B > 0$ there is a non-random $C > 0$ such that, with probability one, for any sequence $a_n \\in \\mathbb Z^d$, there exists $N > 0$ such that for all $n \\ge N$, if $\\Gamma$ is a $*$-connected set which contains the point $a_n$ and $|\\Gamma| \\le Bn$, then $X(\\Gamma) \\le Cn.$\n\t\t\\end{env_lem}\n\t\t\n\t\tSome assumption like \\eqref{zeroatom} is necessary for Lemma \\ref{passstep}. Let $p = \\mathbb P(X = 0)$, and suppose that $p > p_c$, the critical probability for site percolation on the $*$-lattice. By percolation theory, with probability one, the set\n\t\t\t$$\\Gamma = \\{ z \\in \\mathbb Z^d : X_z = 0 \\}$$\n\t\tcontains an infinite $*$-connected component. That is, $|\\Gamma| = \\infty$ but $X(\\Gamma) = 0$. No such assumption is necessary for Lemma \\ref{upbound}. \n\t\t\n\t\tIn this paper, our assumptions of non-negativity and \\eqref{zeroatom} are stronger than necessary: we apply these lemmas only to the positive fields $\\lambda_z$ and $\\Lambda_z$. However, we anticipate these lemmas to be of independent use in future work on Riemannian FPP, where one may consider fields $X_z$ which take the value $0$ on a cube $C_z$ with small but non-zero probability. For example, if $E_z$ is the event that $\\Lambda_z \\le h$ for a sufficiently large value of $h$, one may apply these lemmas to $X_z = 1_{E_z}$, the indicator function of $E_z$. \n\t\t\n\t\tThe generality of the sequence $a_n$ is needed for Lemma \\ref{unifT}. 
In the proof of that result, we fix a point $x \\in \\mathbb R^d$, and define the sequence $a_n = \\widetilde{nx} \\in \\mathbb Z^d$ to be the nearest lattice point to $nx$.\n\t\t\n\t\t\n\t\t\\begin{proof}[Proof of Lemma \\ref{passstep}]\n\t\tIn what follows, we assume that $\\Gamma$ is a $*$-connected set. Fix $A > 0$ and the sequence $a_n$. Consider the events\n\t\t\t$$E_n = \\{ \\exists ~ \\Gamma \\mathrm{~such~that~} a_n \\in \\Gamma, ~ X(\\Gamma) \\le An,\\mathrm{~and~} |\\Gamma| > Bn \\}.$$\n\t\tWe claim that we can choose an integer $B$ large enough so that $\\mathbb P(E_n)$ decays exponentially in $n$. From the Borel-Cantelli lemma it will follow that, with probability one, only finitely many of the events $E_n$ occur, which will prove the result.\n\n\t\tLet $z_i$ be a predetermined enumeration of $\\mathbb Z^d$; for example, a spiral path beginning at $a_n$. For any $*$-connected set $\\Gamma$, we define $\\Gamma' \\subseteq \\Gamma$ by proceeding along the sequence $z_i$ and including each point of $\\Gamma$ at a distance greater than $R$ away from the previous points chosen. In the form of an algorithm:\n\t\t\t\\begin{itemize}\n\t\t\t\t\\item Let $i_1$ be the first index for which $z_{i_1} \\in \\Gamma$. Let $g_1 = z_{i_1}$.\n\t\t\t\t\\item Given $\\{g_1, \\dots, g_{j-1}\\}$, let $i_j$ be the first index for which $z_{i_j} \\in \\Gamma$ and so that $|z_{i_j} - g_{j'}| > R$ for $1 \\le j' < j$. Let $g_j = z_{i_j}$.\n\t\t\t\\end{itemize}\n\t\tLet $\\Gamma' = \\{g_1, g_2, \\dots\\}$. If $\\Gamma$ is finite, then so is $\\Gamma'$. Let \n\t\t\t$$B(z,R) = \\{z' \\in \\mathbb Z^d : |z-z'| \\le R\\}$$\n\t\tbe the Euclidean ball of radius $R$ centered at $z$ in $\\mathbb Z^d$. Note that $|B(z,R)| \\le (2R+1)^d$.
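The greedy selection just described is straightforward to transcribe into code; the following Python sketch (with Euclidean distance and an arbitrary fixed enumeration standing in for the spiral) returns a subset whose points are pairwise more than $R$ apart and whose $R$-balls cover $\Gamma$:

```python
import math

def select_separated(enumeration, Gamma, R):
    """Greedy pass: keep each point of Gamma whose distance to every
    previously kept point exceeds R (mirrors the algorithm above)."""
    chosen = []
    for z in enumeration:
        if z in Gamma and all(math.dist(z, g) > R for g in chosen):
            chosen.append(z)
    return chosen
```

By construction, any point of $\Gamma$ that was skipped lies within distance $R$ of some kept point, which is the covering property used next.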
To avoid this cumbersome factor $(2R+1)^d$ which appears frequently, for the remainder of this proof we write\n\t\t\t$$K = (2R+1)^d.$$\n\t\tBy construction, the set $\\Gamma$ is covered by taking balls around every point in $\\Gamma'$:\n\t\t\t$$\\Gamma \\subseteq \\bigcup_{z \\in \\Gamma'} B(z,R).$$\n\n\t\tLet $\\Gamma$ be a $*$-connected set described in the event $E_n$, so that\n\t\t\t$$Bn < |\\Gamma| \\le \\sum_{z \\in \\Gamma'} |B(z,R)| \\le |\\Gamma'| K.$$\n\t\tThus, $|\\Gamma'| > Bn\/K$.\n\t\t\n\t\tWe may assume that $\\Gamma$ consists only of its first $Bn$ points in the enumeration $z_i$; the non-negativity of $X_z$ implies that the passage time $X(\\Gamma)$ is still at most $An$. Similarly, since $|\\Gamma'| > Bn\/K$, we may discard points of $\\Gamma'$ and assume $|\\Gamma'| = \\lceil Bn\/K \\rceil$. Furthermore, since $\\Gamma' \\subseteq \\Gamma$,\n\t\t\t$$X(\\Gamma') \\le X(\\Gamma) \\le An.$$\n\t\tThus the probability $\\mathbb P(E_n)$ is bounded by\n\t\t\t\\begin{equation} \\label{splitestimate}\n\t\t\t\t\\mathbb P \\left( \\exists ~\\Gamma \\mathrm{~s.t.~} a_n \\in \\Gamma, ~ X(\\Gamma') \\le An, \\mathrm{~and~} |\\Gamma| = Bn \\right) \\le \\sum_{\\Gamma} \\mathbb P \\left( X(\\Gamma') \\le An \\right), \\end{equation}\n\t\twhere the outer sum is taken over all $*$-connected sets $\\Gamma$ for which $a_n \\in \\Gamma$ and $|\\Gamma| = Bn$. Lemma \\ref{numsets} implies that the number of such sets is bounded by $\\sigma^{Bn}$.\n\t\t\n\t\tThe family of random variables $\\{X_z\\}_{\\Gamma'}$ is independent, since the points $z \\in \\Gamma'$ are separated by distances greater than $R$. Let $\\{X_i\\}$ be $\\lceil Bn\/K \\rceil$ independent copies of $X$.
The exponential Chebyshev inequality \\cite{durrett1988lnp} implies that the right-hand side of \\eqref{splitestimate} is bounded above by\n\t\t\t$$\\sigma^{Bn} \\, \\mathbb P \\left( \\sum_{i=1}^{\\lceil Bn\/K \\rceil} X_i \\le An \\right) \\le \\sigma^{Bn} \\, \\E^{r An} \\, \\mathbb E \\left( \\E^{-r \\sum X_i} \\right)$$\n\t\tfor any $r > 0$. Again by independence, if we write $M(-r) = \\mathbb E \\E^{-r X}$, this is equal to\n\t\t\t$$\\sigma^{Bn} \\E^{r An} \\left(\\mathbb E \\E^{-r X} \\right)^{\\lceil Bn\/K \\rceil} \\le \\sigma^{Bn} \\E^{r An} M(-r)^{Bn\/K} = \\left( \\sigma \\, \\E^{r A \/ B} \\, M(-r)^{1\/K} \\right)^{Bn}.$$\n\t\n\t\tBy the bounded convergence theorem, as $r$ tends to infinity, $M(-r) \\to \\mathbb P(X = 0)$, which is strictly less than $\\sigma^{-K}$ by assumption \\eqref{zeroatom}. Let $r$ be large enough so that $M(-r) < \\sigma^{-K}$. Write $p = \\sigma M(-r)^{1\/K} < 1$, so that\n\t\t\t$$\\mathbb P(E_n) \\le (p \\E^{rA\/B})^{Bn}.$$\n\t\tChoose the number $B$ to satisfy \n\t\t\t\\begin{equation} \\label{defB}\n\t\t\t\trA\/\\log(1\/p) < B < rA\/\\log((1+p)\/2p). \\end{equation}\n\t\tThis implies\n\t\t\t$$\\tfrac{1+p}{2} < p \\E^{r A\/B} < 1.$$\n\t\tThe left inequality will be used later in the proof of Theorem \\ref{posconst}; the right inequality implies that\n\t\t\t$$\\sum \\mathbb P(E_n) \\le \\sum (p \\E^{r A\/B})^{Bn} < \\infty,$$\n\t\tso by the Borel-Cantelli lemma, with probability one, only finitely many of the events $E_n$ hold.\n\t\\end{proof}\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\\begin{proof}[Proof of Lemma \\ref{upbound}]\n\t\tIn what follows, we assume that $\\Gamma$ is a $*$-connected set. Fix an integer $B > 0$ and the sequence $a_n$. Consider the events\n\t\t\t$$E_n = \\{ \\exists ~ \\Gamma \\mathrm{~such~that~} a_n \\in \\Gamma,~ X(\\Gamma) > Cn,\\mathrm{~and~} |\\Gamma| \\le Bn \\}.$$\n\t\tWe claim that we can choose $C$ large enough so that $\\mathbb P(E_n)$ decays exponentially in $n$.
From the Borel-Cantelli lemma, with probability one, it will follow that only finitely many of the events $E_n$ occur, which will prove the result.\n\t\n\t\tAs in the proof of Lemma \\ref{passstep}, we consider passage times of subsets of $\\Gamma$ to exploit independence. Unlike in that proof, it does not suffice to consider just one subset $\\Gamma'$. Instead, we partition the set into $k$ disjoint subsets $\\Gamma_1, \\dots, \\Gamma_k$, and consider the passage times of each.\n\t\t\n\t\tThere are $k = R^d$ points in the cube $\\{0, \\dots, R-1\\}^d$; order them $z_1, \\dots, z_k$. We can partition the lattice $\\mathbb Z^d$ into $k$ subsets by considering $R$-translations of these points. We write $z = (z^1, \\dots, z^d)$ for all $z \\in \\mathbb Z^d$. For any $*$-connected set $\\Gamma$, we partition it into $k$ subsets by defining\n\t\t\t$$\\Gamma_j = \\{ z \\in \\Gamma : \\mbox{$R$ divides $z^i - z_j^i$ for all $i = 1, \\dots, d$} \\}$$\n\t\tfor $j = 1, \\dots, k$. Since points in $\\Gamma_j$ are separated by distance at least $R$, for each $j$ the family of random variables $\\{X_z\\}_{\\Gamma_j}$ is independent.\n\t\t\n\t\tLet $\\Gamma$ be a set described in the event $E_n$. We may assume that $|\\Gamma| = Bn$, since if it is less, the inclusion of additional points will only increase the passage time calculation. Since\n\t\t\t$$X(\\Gamma) = X(\\Gamma_1) + \\dots + X(\\Gamma_k),$$\n\t\tif $X(\\Gamma) > Cn$, then $X(\\Gamma_j) > Cn\/k$ for some $j$. Furthermore, $|\\Gamma_j| \\le |\\Gamma| \\le Bn$ for each $j$.\n\t\t\n\t\t As in Lemma \\ref{passstep}, the probability $\\mathbb P(E_n)$ is bounded above by\n\t\t\t\\begin{equation} \\label{splitestimate2}\n\t\t\t\t\\sum_{\\Gamma} \\mathbb P \\left( \\exists ~ j \\in \\{1,\\dots,k\\} \\mathrm{~s.t.~} X(\\Gamma_j) > Cn\/k \\right), \\end{equation}\n\t\twhere again the sum is over $*$-connected sets $\\Gamma$ for which $a_n \\in \\Gamma$ and $|\\Gamma| = Bn$. 
For each $j$, the family of random variables $\\{X_z\\}_{\\Gamma_j}$ is independent. Let $\\{X_i\\}$ be $Bn$ independent copies of $X$. The exponential Chebyshev inequality \\cite{durrett1988lnp} implies that the right-hand side of \\eqref{splitestimate2} is bounded above by\n\t\t\t$$k \\, \\sigma^{Bn} \\, \\mathbb P \\left( \\sum_{i=1}^{Bn} X_i > Cn\/k \\right) \\le k \\, \\sigma^{Bn} \\, \\E^{-Cn\/k} \\, \\mathbb E \\left( \\E^{\\sum X_i} \\right),$$\n\t\twhere we have added extra independent copies of $X_i$ to make the number of terms exactly $Bn$. If we write $M = M(1) = \\mathbb E \\E^X$, then this is equal to\n\t\t\t$$k \\, \\sigma^{Bn} \\E^{-Cn\/k} M^{Bn} = k \\left( \\sigma^{B} \\E^{-C\/k} M^{B} \\right)^{n}.$$\n\t\tWe choose $C \\gg 1$ so that $\\sigma^{B} \\E^{-C\/k} M^{B} < 1$. Thus\n\t\t\t$$\\sum \\mathbb P(E_n) \\le k \\sum \\left( \\sigma^{B} \\E^{-C\/k} M^{B} \\right)^{n} < \\infty,$$\n\t\tso by the Borel-Cantelli lemma, with probability one only finitely many of the events $E_n$ hold.\n\t\\end{proof}\n\n\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\\subsection{Applications to Continuum Model} \\label{contapps}\n\t\t\n\tIn this section, we apply the estimates from the previous section to the continuum model. Lemma \\ref{unifT} says that at large scales, the Riemannian distance function between two points is bounded by a uniform constant $K$ times the Euclidean distance between them. Theorem \\ref{posconst} says that $\\mu_v$ is positive for all $v$.\n\t\n\tThe cubes $C_z$ form a partition of $\\mathbb R^d$. For $x \\in \\mathbb R^d$, let $\\tilde x$ be the unique point on the lattice such that $x \\in C_{\\tilde x}$.
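The rounding map $x \mapsto \tilde x$ is concrete enough to state in code; a minimal Python sketch for the half-open cubes $C_z = [z - 1/2, z + 1/2)^d$:

```python
import math

def nearest_lattice_point(x):
    """The map x -> x~ : the unique z in Z^d with x in C_z = [z - 1/2, z + 1/2)^d.

    Because the cubes are half-open, ties at the boundary round upward."""
    return tuple(math.floor(xi + 0.5) for xi in x)
```

The half-open convention matters only on the boundary faces; it is what makes $\tilde x$ unique.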
The metric $g$ induces two notions of ``passage time'' over a discrete set $\\Gamma$:\n\t\t$$\\Lambda(\\Gamma) = \\sum_{z \\in \\Gamma} \\Lambda_z \\qquad \\mathrm{and} \\qquad \\lambda(\\Gamma) = \\sum_{z \\in \\Gamma} \\lambda_z.$$\n\n\t\\begin{env_lem} \\label{unifT}\n\t\tThere exists a non-random $K > 0$ such that, with probability one, for all $x \\in \\mathbb R^d$ and $\\rho > 0$, there exists $T(x) > 0$ such that if $t \\ge T$ and $|x-y| \\le \\rho$, then\n\t\t\t$$d(tx,ty) \\le Kt\\rho.$$ \n\t\\end{env_lem}\n\t\\begin{proof}\n\t\tBy rescaling $x$ and $y$, it suffices to prove the lemma with $\\rho = 1$. \n\t\n\t\tApply Lemma \\ref{upbound} with $B = 2d$ and $X_z = \\Lambda_z$. Thus there exists a non-random $C > 0$ such that, with probability one, for any sequence $a_n \\in \\mathbb Z^d$, there exists $N > 0$ such that for all $n \\ge N$, if $\\Gamma$ is a finite $*$-connected set which contains the point $a_n$ and $|\\Gamma| \\le 2d n$, then $\\Lambda(\\Gamma) \\le Cn$.\n\t\t\n\t\tLet $K = C\\sqrt{d}$, and suppose that with positive probability, there is some $x \\in \\mathbb R^d$ such that for any $n > 0$, there exists $y$ (depending on $n$) such that $|x - y| \\le 1$ but $d(nx, ny) > Kn$. We will show that this leads to a contradiction. Let $N$ be as in Lemma \\ref{upbound} applied with the sequence $a_n = \\widetilde{nx}$.\n\t\t\n\t\tSuppose that $n \\ge N$. Let $\\gamma$ be the straight-line segment from $nx$ to $ny$. Let\n\t\t\t$$\\Gamma = \\{ z \\in \\mathbb Z^d : \\gamma \\cap C_z \\ne \\emptyset \\},$$\n\t\tindex the cubes $C_z$ which $\\gamma$ meets. Note that $\\widetilde{nx} \\in \\Gamma$. Clearly, \n\t\t\t$$|\\Gamma| \\le 2d |nx-ny| \\le 2d n,$$\n\t\tsince $|x-y| \\le 1$.
By Lemma \\ref{upbound},\n\t\t\t$$\\Lambda(\\Gamma) \\le Cn.$$\n\t\t\n\t\tThe distance between $nx$ and $ny$ is a lower bound for the Riemannian length of $\\gamma$:\n\t\t\t$$d(nx, ny) \\le R(\\gamma).$$\n\t\tFurthermore, since $\\gamma$ is a line segment, the Euclidean length of $\\gamma$ in each cube is at most $\\sqrt{d}$. Thus we can estimate the Riemannian length of $\\gamma$ by summing $\\Lambda_z$ over $\\Gamma$:\n\t\t\t\\begin{equation} \\label{partitionR}\n\t\t\t\td(nx, ny) \\le R(\\gamma) = \\sum_{z \\in \\Gamma} R(\\gamma \\cap C_z) \\le \\sum_{z \\in \\Gamma} \\Lambda_z \\sqrt{d} = \\Lambda(\\Gamma) \\sqrt{d} \\le C n \\sqrt{d} = Kn, \\end{equation}\n\t\tsince $K = C\\sqrt{d}$. This contradicts the assumption that $d(nx,ny) > Kn$.\n\t\t\t\\end{proof}\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\n\n\t\\begin{env_thm} \\label{posconst}\n\t\tThe constants $\\mu_v$ are all positive.\n\t\\end{env_thm}\n\t\\begin{proof}\n\t\tSuppose $\\mu = \\mu_v = 0$ for some unit vector $v$. \n\t\t\n\t\tLet $\\epsilon > 0$, and apply Lemma \\ref{passstep} with the constant sequence $a_n \\equiv 0$ to $A = 4\\epsilon$ and $X_z = \\lambda_z$. Thus there exists a non-random $B > 0$ such that, with probability one, there exists $N_1 > 0$ such that for $n \\ge N_1$, if $\\Gamma$ is a $*$-connected set which contains the origin and $\\lambda(\\Gamma) \\le 4\\epsilon n$, then $|\\Gamma| \\le Bn$. \n\t\t\n\t\tBy Kingman's subadditive ergodic theorem, with probability one there exists $N_2 > 0$ such that if $n \\ge N_2$, then\n\t\t\t\\begin{equation} \\label{kingmanposconst}\n\t\t\t\td(0,nv) \\le \\epsilon n \/ 2, \\end{equation}\n\t\tsince we assumed that $\\mu_v = 0$.\n\t\t\n\t\tLet $N = \\max\\{N_1, N_2\\}$, and suppose $n \\ge N$. 
\\emph{A priori}, the distance $d(0,nv)$ need not be realized as the Riemannian length of a curve, so let $\\gamma$ be a $C^1$-curve from $0$ to $nv$ with\n\t\t\t$$R(\\gamma) \\le d(0,nv) + \\epsilon n \/ 2 \\le \\epsilon n,$$\n\t\twhere the second inequality follows from \\eqref{kingmanposconst}.\n\t\n\t\tDefine the discrete set\n\t\t\t\\begin{equation} \\label{Gammadef}\n\t\t\t\t\\Gamma = \\{ z \\in \\mathbb Z^d : L(\\gamma \\cap C_z) \\ge 1\/4 \\}; \\end{equation}\n\t\tthat is, $z \\in \\Gamma$ provided the Euclidean length of $\\gamma$ in the cube $C_z$ is at least $1\/4$. \n\t\t\n\t\tWe claim that $\\Gamma$ is $*$-connected. Suppose not, so that the continuum set $W = \\bigcup_{z \\in \\Gamma} C_z$ has at least two components, separated by Euclidean distance at least $1$. Let $W'$ be the $1\/4$-neighborhood around $W$, so that the components of $W'$ are separated by Euclidean distance at least $1\/2$. By definition of $\\Gamma$, the curve $\\gamma$ meets each component of $W'$, but not the complement $\\mathbb R^d \\setminus W'$. Since $\\gamma$ is continuous, this is a contradiction; hence, $\\Gamma$ is $*$-connected. \n\n\t\tIn each cube $C_z$, we can estimate the Riemannian length of $\\gamma$ using $\\lambda_z$:\n\t\t\t$$L(\\gamma \\cap C_z) \\lambda_z \\le R(\\gamma \\cap C_z),$$\n\t\twhere $L$ denotes Euclidean length. Furthermore, by summing $\\lambda_z$ over the points of $\\Gamma$, we get a lower bound for $R(\\gamma)$:\n\t\t\t$$\\tfrac{1}{4} \\lambda(\\Gamma) \\le \\sum_{z \\in \\Gamma} L(\\gamma \\cap C_z) \\lambda_z \\le \\sum_{z \\in \\Gamma} R(\\gamma \\cap C_z) \\le R(\\gamma) \\le \\epsilon n.$$\n\t\tClearly, $0 \\in \\Gamma$, so Lemma \\ref{passstep} implies that $|\\Gamma| \\le Bn$.\n\n\t\tIn the proof of that lemma, we chose $B$ so that\n\t\t\t$$B \\le \\tfrac{r}{\\log((1+p)\/2p)} A,$$\n\t\tfor positive constants $r$ and $p < 1$ not depending on $A$; see \\eqref{defB}. 
Since $A = 4\\epsilon$, if we write $B' = 4r\/\\log((1+p)\/2p)$, then \n\t\t\t\\begin{equation} \\label{GammaB}\n\t\t\t\t|\\Gamma| \\le B' \\epsilon n. \\end{equation}\n\t\t\t\n\t\tLet $z$ be a lattice point in $\\Gamma$ which minimizes the distance $|nv - z|$. Clearly, $|z| \\ge n\/2$. Since $\\Gamma$ is $*$-connected and contains both $0$ and $z$, and successive points of a $*$-path differ by at most $\\sqrt{d}$ in Euclidean norm,\n\t\t\t$$|\\Gamma| \\ge n\/(2\\sqrt{d}).$$\n\t\tFor small $\\epsilon$, this contradicts \\eqref{GammaB}, so $\\mu_v$ must be positive.\n\t\\end{proof}\n\t\n\t\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\\section{The Shape Theorem and Consequences} \\label{results}\n\t\n\tDefine the function\n\t\t$$\\mu(x) = \\begin{cases} \\mu_{x\/|x|} |x|, & x \\ne 0 \\\\ 0, & x = 0. \\end{cases}$$\n\tProposition \\ref{contmu} and Theorem \\ref{posconst} imply that $\\mu$ is continuous and, for $x \\ne 0$, strictly positive. It follows from the triangle inequality for the distance function that $\\mu(x)$ is a norm on $\\mathbb R^d$. Consider the unit ball in this norm,\n\t\t\\begin{equation} \\label{limshape}\n\t\t\tA = \\{ x : \\mu(x) \\le 1 \\} = \\{ x : |x| \\le \\mu_{x\/|x|}^{-1} \\}, \\end{equation}\n\tas well as the random Riemannian ball of radius $t$ centered at the origin,\n\t\t$$B_t = \\{ x : d(0,x) \\le t \\}.$$\n\t\t\n\t\\begin{env_thm}[Shape Theorem]\t\\label{shapethm}\n\t\t\tFor all $\\epsilon > 0$, with probability one, there exists $T$ such that if $t \\ge T$, then\n\t\t\t\t\\begin{equation} \\label{shapestatement}\n\t\t\t\t\t(1-\\epsilon) A \\subseteq \\tfrac{1}{t} B_t \\subseteq (1+\\epsilon) A.\n\t\t\t\t\\end{equation} \n\t\\end{env_thm}\n\t\n\tThe set $A$ is called the limiting shape of the random Riemannian metric $g$. The shape theorem for lattice first-passage percolation was proved by Cox and Durrett \\cite{cox1981slt}; our proof is modeled on the arguments in Durrett \\cite{durrett1988lnp}.\n\t\t\n\t\t\\begin{proof}\n\t\t\tIt suffices to prove the theorem for $\\epsilon \\in (0,1)$.
Fix $\\delta$ satisfying\n\t\t\t\t\\begin{equation} \\label{delta}\n\t\t\t\t\t\\delta < \\min \\left\\{ \\frac{1}{K}, 1 - \\frac{1+\\epsilon^2}{1+\\epsilon} \\right\\}, \\end{equation}\n\t\t\twhere $K$ is as in Lemma \\ref{unifT}. Let $B^{\\mathrm{E}}(x,r)$ denote the Euclidean ball of radius $r$ centered at $x$.\n\t\t\t\n\t\t\tWe will show that with probability one, there exists $T > 0$ such that if $t \\ge T$, then $(1-\\epsilon)A \\subseteq \\tfrac{1}{t} B_t$. To do this, we will first prove that for every $x$, there is a random $T(x)$ such that if $t \\ge T$, then the small Euclidean ball $B^{\\mathrm{E}}(x,\\delta\\epsilon^2)$ is contained in $\\tfrac{1}{t} B_t$. Since the set $(1-\\epsilon)A$ is compact, it can be covered by finitely many balls $B^{\\mathrm{E}}(x_i, \\delta\\epsilon^2)$. Letting $T = \\max\\{T(x_i)\\}$ will prove the result.\n\t\t\t\n\t\tFix $x \\in (1-\\epsilon)A$. We claim that with probability one, there exists $T(x)$ such that if $t \\ge T$ and $|x-y| \\le \\delta \\epsilon^2$, then $d(0,ty) \\le t$, hence $B^{\\mathrm{E}}(x,\\delta \\epsilon^2) \\subseteq \\tfrac{1}{t} B_t$ for $t \\ge T$. By the triangle inequality,\n\t\t\t$$d(0,ty) \\le d(0,tx) + d(tx,ty).$$\n\t\t\n\t\tThe first term is controlled by Kingman's theorem: with probability one, there exists $T_1(x)$ such that if $t \\ge T_1$, then\n\t\t\t$$d(0,tx) \\le (1+\\epsilon) t \\mu(x) \\le (1-\\epsilon^2) t,$$\n\t\tsince $\\mu(x) \\le (1-\\epsilon)$.\n\t\n\t\tThe second term is controlled by Lemma \\ref{unifT} applied to this $x$ and $\\rho = \\delta\\epsilon^2$. With probability one, there exists $T_2(x)$ such that if $t \\ge T_2$ and $|x-y| \\le \\delta \\epsilon^2$, then\n\t\t\t$$d(tx,ty) \\le Kt \\delta \\epsilon^2 < \\epsilon^2 t,$$\n\t\tsince $\\delta < 1\/K$.\n\t\t\n\t\tLet $T(x) = \\max\\{T_1, T_2\\}$.
If $t \\ge T(x)$, then for all $y \\in B^{\\mathrm{E}}(x,\\delta \\epsilon^2)$, $$d(0,ty) \\le (1-\\epsilon^2)t + \\epsilon^2 t = t,$$ implying that $B^{\\mathrm{E}}(x, \\delta \\epsilon^2) \\subseteq \\tfrac{1}{t} B_t$ for all $t \\ge T(x)$. Let $x_i$ be finitely many points such that $\\bigcup B^{\\mathrm{E}}(x_i, \\delta \\epsilon^2)$ covers $(1-\\epsilon)A$, and let $T = \\max_i T(x_i)$. Then for all $t \\ge T$,\n\t\t\t$$(1-\\epsilon)A \\subseteq \\bigcup_i B^{\\mathrm{E}}(x_i, \\delta \\epsilon^2) \\subseteq \\tfrac{1}{t} B_t,$$\n\t\tcompleting the lower half of the shape theorem.\n\t\t\n\t\tNow we prove the upper half. For any $x$, let $T_2(x)$ be as above, so that if $t \\ge T_2$ and $|x-y| < \\delta \\epsilon^2$, then $d(tx,ty) \\le \\epsilon^2 t$. By Kingman's theorem, with probability one, there exists $T_3(x)$ such that if $t \\ge T_3$, then\n\t\t\t$$d(0,tx) \\ge (1-\\delta) t \\mu(x).$$\n\t\tChoose finitely many $x_i \\in 2A \\setminus (1+\\epsilon)A$ such that the closure of $2A \\setminus (1+\\epsilon)A$ is covered by $\\bigcup B^{\\mathrm{E}}(x_i, \\delta \\epsilon^2)$. Belonging to $2A \\setminus (1+\\epsilon)A$ implies that $\\mu(x_i) > (1+\\epsilon)$. Let $T = \\max_i \\{T_2(x_i), T_3(x_i)\\}$, and let $t \\ge T$. Then if $y \\in B^{\\mathrm{E}}(x_i, \\delta \\epsilon^2)$,\n\t\t\t\\begin{eqnarray*}\n\t\t\t \td(0,ty) &\\ge& d(0, tx_i) - d(tx_i, ty) \\\\\n\t\t\t\t\t&\\ge& (1-\\delta)t \\mu(x_i) - \\epsilon^2 t \\\\\n\t\t\t\t\t&\\ge& (1-\\delta)(1+\\epsilon)t - \\epsilon^2 t \\\\\n\t\t\t\t\t&>& (1+\\epsilon^2) t - \\epsilon^2 t = t,\n\t\t\t\\end{eqnarray*}\n\t\twhere the final inequality follows from the assumption \\eqref{delta} on $\\delta$. 
Thus $\\bigcup B^{\\mathrm{E}}(x_i, \\delta \\epsilon^2) \\subseteq \\tfrac{1}{t} B_t^c$, so\n\t\t\t$$2A \\setminus (1+\\epsilon)A \\subseteq \\bigcup B^{\\mathrm{E}}(x_i, \\delta \\epsilon^2) \\subseteq \\tfrac{1}{t} B_t^c.$$\n\t\tThe set $\\tfrac{1}{t} B_t$ is connected and contains the origin; since any path in $\\tfrac{1}{t} B_t$ from the origin to a point outside $(1+\\epsilon)A$ would have to meet the annulus $2A \\setminus (1+\\epsilon)A$, which lies in $\\tfrac{1}{t} B_t^c$, we conclude that $\\tfrac{1}{t} B_t \\subseteq (1+\\epsilon)A$ as desired.\n\t\\end{proof}\n\t\n\t\\begin{env_cor} \\label{isotropic}\n\t\tIf the measure $\\mathbb P$ is isotropic (rotationally-invariant), then the limiting shape $A$ is the Euclidean ball of radius $\\mu^{-1}$. \\hfill $\\square$\n\t\\end{env_cor}\n\t\n\tA simple consequence of Lemma \\ref{unifT} is that the convergence \\eqref{kingman} given by Kingman's theorem is uniform:\n\t\n\t\\begin{env_pro} \\label{unifconv}\n\t\tFor all $\\epsilon > 0$, with probability one, there exists $T>0$ such that if $t \\ge T$, then for all $v \\in S^{d-1}$,\n\t\t\t$$\\left| \\tfrac{1}{t} d(0,tv) - \\mu_v \\right| \\le \\epsilon.$$\n\t\\end{env_pro}\n\t\n\t\\begin{proof}\n\t\tSuppose not. Then with positive probability, there exist $\\epsilon > 0$, $t_n \\to \\infty$ and $v_n \\in S^{d-1}$ such that\n\t\t\t$$\\left| \\tfrac{1}{t_n} d(0,t_n v_n) - \\mu_{v_n} \\right| > \\epsilon,$$\n\t\tfor all $n$. By compactness of the sphere, a subsequence of $v_n$ converges to some $v \\in S^{d-1}$; assume without loss of generality that $v_n \\to v$. By the triangle inequality,\n\t\t\t\\begin{equation} \\label{epsunif}\n\t\t\t\t\\epsilon < \\left| \\tfrac{1}{t_n} d(0,t_nv_n) - \\tfrac{1}{t_n} d(0, t_nv) \\right| + \\left| \\tfrac{1}{t_n} d(0,t_nv) - \\mu_v \\right| + \\left| \\mu_{v_n} - \\mu_v \\right|. \\end{equation}\n\t\tThe second and third terms tend to zero a.s. as $n \\to \\infty$ by Kingman's theorem and Proposition \\ref{contmu}, respectively. By the triangle inequality, the first term is bounded by $\\tfrac{1}{t_n} d(t_nv, t_nv_n)$. Let $\\rho = \\epsilon \/ 2K$, where $K$ is as in Lemma \\ref{unifT}. 
Since for large $n$, $|v - v_n| \\le \\rho$, we can apply that lemma with $x = v$ to get\n\t\t\t$$\\tfrac{1}{t_n} d(t_nv,t_nv_n) \\le K\\rho = \\epsilon \/2,$$\n\t\tfor large $n$ almost surely. This contradicts \\eqref{epsunif}.\n\t\\end{proof}\n\t\n\t\n\t\\begin{env_thm} \\label{completeness}\n\t \tWith probability one, if $\\gamma$ is a $C^1$-curve parametrized by Riemannian length, then $|\\gamma(t)| < \\infty$ for all $t \\ge 0$. \n\t\\end{env_thm}\n\t\\begin{proof}\n\t\tIt suffices to consider curves starting from the origin. Suppose that with positive probability, there exists a $C^1$-curve $\\gamma$ parametrized by Riemannian length which starts from the origin and for which\n\t\t\t\\begin{equation} \\label{Tlim}\n\t\t\t\t\\lim_{t \\to T} |\\gamma(t)| = \\infty \\end{equation}\n\t\tfor some finite $T$. Since $\\gamma$ is parametrized by Riemannian length, for all $t \\ge 0$,\n\t\t\t$$t \\ge d(0,\\gamma(t)).$$\n\t\tProposition \\ref{unifconv} and \\eqref{Tlim} imply that\n\t\t\t$$0 = \\lim_{t\\to T} \\frac{t}{|\\gamma(t)|} \\ge \\lim_{t\\to T} \\frac{d(0,\\gamma(t))}{|\\gamma(t)|} \\ge \\min_{v} \\mu_v > 0,$$\n\t\ta contradiction.\n\t\\end{proof}\n\t\n\tWe can refine this result if we impose an additional smoothness constraint on the metric. If $g$ is a $C^2$-smooth Riemannian metric, i.e. $g \\in C^2(\\mathbb R^d, \\operatorname{SPD})$, then we can use the calculus of variations to derive the Euler-Lagrange equations for the functional $R$. These are called the geodesic equations \\cite{lee1997rmi} for the Riemannian metric $g$, and solutions to this system are called geodesics. The geodesic equations form a second-order system with locally-Lipschitz coefficients, so a geodesic is uniquely determined by its starting point and velocity. We call a geodesic $\\gamma$ length-minimizing if for all $x, y \\in \\gamma$, the distance $d(x,y)$ is realized as the Riemannian length of the part of the curve $\\gamma$ which connects the two points. 
Not all geodesics are length-minimizing; for example, on the sphere, the geodesics are great circles, which do not minimize length past antipodal points.\n\n\tA metric is said to be geodesically complete if for all $x \\in \\mathbb R^d$ and $v \\in T_x \\mathbb R^d$, the unique geodesic $\\gamma$ at $x$ in direction $v$ can be continued for all time. Part of the Hopf-Rinow theorem \\cite{lee1997rmi} of Riemannian geometry is that geodesic completeness is equivalent to the condition stated in Theorem \\ref{completeness} for our random metric $g$. A further corollary \\cite{lee1997rmi} is that distances are always realized by geodesics. Summarizing, we have\n\t\n\t\\begin{env_cor} \\label{realized}\n\t\tSuppose that, in addition to the assumptions of this paper, $g$ is a $C^2$-smooth random Riemannian metric. Then, with probability one, $g$ is geodesically complete. Consequently, for all $x, y \\in \\mathbb R^d$, there is a finite length-minimizing geodesic $\\gamma$ connecting $x$ to $y$ such that\n\t\t\t$$d(x,y) = R(\\gamma).$$\n\t\\end{env_cor}\n\n\nWe alert the reader to a different meaning of the word ``geodesic,'' used often in the first-passage percolation literature. The term is used there to denote a globally length-minimizing path. This is very different from the standard meaning of the word in differential geometry: as described above, geodesics are the curves which locally minimize length, but not necessarily globally. We adhere to this meaning in the present paper.\n\n\n\tThe existence of two-sided minimizing paths is an open question for all FPP models. For two-dimensional standard FPP, Licea, Newman and Piza \\cite{licea1996geodesics, licea1996superdiffusivity, newman1997tds} have a number of results in this direction. Their work relies on certain curvature assumptions about the limiting shape, which have not been verified for models of independent FPP. 
Howard and Newman's \\cite{howard1997euclidean, howard2001special} model of Euclidean FPP, on the other hand, is rotationally invariant. Consequently, the limiting shape is a Euclidean ball, and they prove many results not available in the lattice setting. See the excellent survey \\cite{howard2004mfp} for more details. By Corollary \\ref{isotropic}, the limiting shape for isotropic Riemannian FPP is a Euclidean ball. Our hope is for this setting to be a fertile ground for adapting the results referenced in this paragraph.\\newline\n\n\t{\\bf Remark added after publication:} The moment assumptions \\eqref{finmom} and \\eqref{finmom2} can be slightly weakened to: $$M(r) = \\mathbb E[ \\E^{r \\Lambda_0} ] < \\infty \\qquad \\mbox{for all $r \\le a$},$$ for some $a > 0$. The only modification to the paper is in the proof of Lemma \\ref{passstep}, where one uses $M = M(a)$ instead of $M = M(1)$. We leave the details to the reader.\\newpage\n\n\\section{Introduction}\n\nAqueous solutions under confinement are present and essential in a wide range of chemical and biological processes. Confinement of aqueous electrolytes in carbon nanopores is of particular interest due to its relevance in several technological areas such as elec\\-tro\\-che\\-mi\\-cal double layer capacitors (EDLCs)\\cite{Fic12,Sajjad21} and capacitive deionization.\\cite{Porada13,Luciano20} Confinement alters significantly the properties of the adsorbed species with respect to their bulk counterparts which can in turn affect the performance of these devices. Characterization of quantities such as the hydration structure of the ions in the pores, ion dynamics, ion distribution across pores of different sizes, and ion-carbon wall interactions is thus necessary in order to improve these technologies and has been the subject of numerous studies. 
\n\nExperimental attempts to understand the interfacial behaviour of aqueous electrolytes under confinement have involved techniques such as X-ray diffraction,\\cite{Ohba12} extended X-ray absorption fine structure,\\cite{Ohkubo02} small-angle X-ray scattering,\\cite{Prehal15,Prehal17} electrochemical quartz crystal microbalance,\\cite{Escobar-Teran22,Wu18} Raman spectroscopy,\\cite{Nunes20} nuclear magnetic resonance spectroscopy,\\cite{Cervini19,Luo15a,Luo15b} and have provided valuable information about the electrolyte confinement in materials such as carbon nanotubes, slit-shaped nanospaces, graphene nanowrinkles, and nanoporous carbons. However, a quantitative microscopic characterization is still out of reach with many investigation techniques, especially for disordered porous carbons, due to their structural complexity and the existence of fast motion in the systems.\n\nThanks to its quantitative and nucleus specific features, nuclear magnetic resonance (NMR) spectroscopy is a remarkable technique to study a wide range of systems. The chemical shifts of NMR active nuclei are sensitive to their local environment making this technique compatible with the investigation of aqueous electrolyte species in the bulk and under confinement. 
More precisely, the peaks observed for species adsorbed to the carbon surface are shifted in the NMR spectrum relative to the species in the bulk electrolyte allowing for a clear identification and quantification of the confined species.\\cite{Harris96,Dickinson2000,Griffin14,Griffin16} The additional possibility to conduct NMR measurements while applying a potential difference between the electrodes has also been exploited to study charging mechanisms in EDLCs\\cite{Forse16,Griffin14} and even characterize ion diffusion for different states of charge.\\cite{Forse17} \n\nThe chemical shift of the species confined in porous carbonaceous materials is mainly the consequence of the secondary magnetic shielding generated by ring currents arising on the carbon surface when applying a primary magnetic field to the system.\\cite{Lazzeretti2000,Anderson10} This phenomenon can be gauged using a Nucleus Independent Chemical Shift (NICS) approach.\\cite{Chen05,Gershoni-Poranne21} NICS can be evaluated using density functional theory (DFT) calculations and previous studies have shown, amongst other effects, that the shielding effect is larger when the ions are closer to the surface of the carbon.\\cite{Xing14,Forse14} Hence the distribution of the ions at the interface and the porosity of the carbon are expected to have a major influence on the chemical shift of the adsorbed species. The local structure, and especially the size of the aromatic domains, was also shown to have a large effect which could be used to get insights into the structures of a range of porous carbons.\\cite{Forse15b} \n\nWhile the ring currents are predominant in determining the chemical shifts for a range of electrolytes,~\\cite{Forse21,Sasikumar21} other parameters can lead to significant modifications of the chemical shifts. 
NMR spectra are known to be influenced by the hydration number of the ions under consideration; the dehydration of ions upon entering the micropores can thus lead to large chemical shift changes.\\cite{Luo15b,Gerken02} Due to exchange between the environments inside and outside of the pores and motional averaging, the carbon particle size can also lead to large variations.\\cite{Cervini19} The concentration of ions in organic and aqueous electrolytes adsorbed on microporous carbon can further influence the overall chemical shift.\\cite{Cervini19,Fulik18} The interpretation of NMR spectra is, as a consequence, both rich and complex, and distinguishing the contributions from the above mentioned factors is an essential task for a better understanding of the behaviour of the electrolyte species at a local scale. \n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.58]{Figures\/Figure1.png}\n\\caption{Depending on the carbonaceous material and electrolyte, several factors can contribute differently to the chemical shifts of electrolyte species and consequently to the chemical shift difference, $\\Delta\\delta$, between bulk (ex-pore) and confined (in-pore) ions.}\n\\label{factors}\n\\end{figure}\n\nWhile DFT calculations are appropriate for estimating NMR parameters of small static systems, they are not adequate to represent the variety of environments existing in real systems and the ion motion. Molecular dynamics (MD) simulations are more suited for such studies. Feng~\\textit{et~al.}\\cite{Feng10} conducted si\\-mu\\-la\\-tions of K$^+$ ions in water, in contact with slit pores, and demonstrated that the distribution of these ions in electrified pores of different sizes depends on the ion hydration, the water-water interactions and the pore size. 
Beckstein~\\textit{et~al.}\\cite{Beckstein04} used molecular simulations of an aqueous electrolyte in contact with different nanopores and channels to evaluate the energy barriers to Na$^+$ ion permeation, related to ion dehydration, through pores with various radii smaller than 1~nm. Another study on MD simulations of an aqueous NaCl electrolyte and carbide derived carbons (of average pore sizes 0.75~nm and 1.0~nm) has revealed that unlike large organic ions dissolved in acetonitrile, the desolvation of Na$^+$ ions is limited in the carbon nanopores.\\cite{ganfoud19} Overall, these works show that different ions are more or less susceptible to dehydration and, as a consequence, to fully interpret NMR spectra, one has to consider both the NMR parameters for specific ion configurations and the distribution and exchange of ions between these environments.\n\nIn previous works, we proposed to combine the results from DFT calculations and MD simulations using a mesoscopic model to determine NMR spectra.\\cite{Merlet15,Sasikumar21} This method takes into account information on i)~ion adsorption and organization, ii)~local magnetic shielding, and iii)~pore size distribution. This technique was shown to give results in good agreement with experimental NMR, Raman spectroscopy and pair distribution function analysis for a range of porous carbons.\\cite{Forse15b} The mesoscopic model is also suitable for predicting \\emph{in situ} NMR spectra for supercapacitors based on organic electrolytes.\\cite{Sasikumar21} One of the assets of the model is to be able to modify independently the ion adsorption and local magnetic shielding entries, allowing one to assess the relative importance of these two factors. 
Thanks to this feature, it was possible to demonstrate that, for organic electrolytes, the variations of the magnetic shieldings, as a consequence of the changes in electronic structure with charging (negatively or positively), have a predominant influence on the overall chemical shift compared to ion organization.\\cite{Sasikumar21} This result is consistent with experiments conducted on a range of electrolytes showing similar variations of the chemical shift with applied potential.\\cite{Forse21,Fulik18,Wang11b} \n\nIn this work, we investigate aqueous electrolytes with various alkali metal ions in contact with porous carbon particles corresponding to polyether ether ketone derived carbons (PDCs). The motivation for this study is to explore additional effects observed experimentally with aqueous electrolytes compared to organic electrolytes (see Figure~\\ref{factors}).\\cite{Cervini19} One interesting aspect of the PDCs is that their pore size distributions depend on the activation conditions. Here the PDCs will be referred to using burn-off values, which denote the percentage of mass lost after the activation step and are thus indicative of the porosity generated. \n\n\nThe $^7$Li, $^{87}$Rb, $^{133}$Cs, and $^1$H NMR spectra for LiCl, RbCl, and CsCl solutions (1M) in various PDCs were simulated using the mesoscopic model previously developed (also referred to as ``lattice model'' in the remainder of the article).\\cite{Merlet15,Sasikumar21} The information on ion adsorption and organization is provided as free energy profiles (extracted from MD simulations and shown in Figure~S2), the information on local magnetic shielding is given by NICS profiles (calculated through DFT) and the pore size distributions are the ones obtained previously from adsorption isotherms (see Figure~S3).\\cite{Cervini19} All data considered correspond to a neutral carbon particle. 
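To make the role of these three inputs concrete, the averaging performed by a lattice-type model can be sketched as follows. This is a deliberately minimal illustration, not the code used in this work: the grid, profiles and pore size distribution below are invented placeholders, the two-wall free energies and NICS are simply mirrored and added (following the symmetry and additivity assumptions described in the text), and exchange between environments is ignored.

```python
import math

def slit_pore_average(z, F, nics, width, kT=1.0):
    """Boltzmann-weighted mean NICS (ppm) in a slit pore of a given width (nm),
    assuming symmetric free energy and additive NICS from the two walls."""
    num = den = 0.0
    for i, zi in enumerate(z):
        if zi > width / 2:  # sample positions up to the pore centre only
            continue
        # index of the grid point closest to the mirrored position (width - zi)
        j = min(range(len(z)), key=lambda k: abs(z[k] - (width - zi)))
        weight = math.exp(-(F[i] + F[j]) / kT)   # two-wall Boltzmann factor
        num += weight * (nics[i] + nics[j])      # additive two-wall NICS
        den += weight
    return num / den

def lattice_model_shift(z, F, nics, psd, kT=1.0):
    """In-pore shift averaged over a pore size distribution,
    given as (pore_width_nm, fraction) pairs summing to 1."""
    return sum(frac * slit_pore_average(z, F, nics, width, kT)
               for width, frac in psd)

# Toy single-wall profiles (placeholders, not data from this work):
z    = [0.3, 0.4, 0.5, 0.6]        # nm from one carbon wall
F    = [1.0, 0.2, 0.0, 0.1]        # free energy in units of kT
nics = [-6.0, -4.0, -2.5, -1.5]    # ppm
psd  = [(0.8, 0.3), (1.1, 0.5), (2.2, 0.2)]

delta_nics = lattice_model_shift(z, F, nics, psd)
```

With these placeholder inputs, narrow pores contribute the most negative shifts, so increasing the weight of wide pores in the distribution makes the computed average less negative, which mirrors the trend with burn-off discussed in the text.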
While the free energy and NICS profiles are calculated in the proximity of a single carbon surface, the lattice model considers slit pores with two carbon surfaces, assuming symmetrical free energies and additive NICS. It is worth noting that any specific interaction or charge transfer between ions \/ water molecules and the carbon surface is neglected at this stage. The chemical shifts of in-pore species relative to the bulk ($\\Delta\\delta_{\\rm NICS}$) were evaluated and compared against experimental chemical shifts obtained by Cervini \\emph{et~al.}\\cite{Cervini19} Results are shown in Figure~\\ref{compare_shift}.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.3]{Figures\/Figure2.png}\n\\caption{Comparison of average chemical shifts of the in-pore alkali metal ions and hydrogen atoms of water relative to the bulk species in various PDCs evaluated by lattice simulations and experiments.\\cite{Cervini19} Different symbols and colors correspond to different nuclei probed. Filled symbols correspond to experimental results while open symbols correspond to lattice model results. The $\\Delta\\delta_{\\rm NICS}$ values for Rb$^+$ and Cs$^+$ are superimposed.}\n\\label{compare_shift}\n\\end{figure}\n\nIn general, the simulated $\\Delta\\delta_{\\rm NICS}$ values decrease with an increase in the burn-off value of the PDC. This is in qualitative agreement with the experiments. The pore size distributions for the PDCs corresponding to the studied range of burn-off values show pores in the micropore and mesopore ranges with a large amount of pores around 0.8~nm, 1.1~nm and 2.2~nm.\\cite{Cervini19} With the increase in burn-off value, there is usually an increase of pores in the 2-3~nm range. As DFT calculations show that NICS are larger for small pore sizes\\cite{Xing14,Forse14}, an increase of the average pore size is expected to lead to smaller $\\Delta\\delta$ values. 
The NICS data are included in the lattice model, as well as the pore size distributions, which allows for a reproduction of the $\\Delta\\delta$ variation with burn-off values.\n\nLooking closer at the trend, the $\\Delta\\delta_{\\rm NICS}$ observed for all the species under consideration at a burn-off value of 50\\% are slightly smaller (- 5\\%) than those at 56\\%. In addition to the $\\Delta\\delta_{\\rm NICS}$ values, the lattice model also gives access to the ion and molecule distributions across pores of different sizes. The ion distributions for Li$^+$ and Cs$^+$ in the three PDCs considered are given in Figures~S3 and~S4 (results for Rb$^+$ are very similar to the results for Cs$^+$). For the PDC with 50\\% burn off, the fraction of ions occupying pore sizes in the range 1.9-3.2~nm is larger than for the PDC with 56\\% burn off, leading to a smaller $\\Delta\\delta_{\\rm NICS}$ value for this PDC compared to the 56\\% material.\n\nWhile the trend with burn-off values is qualitatively similar for the various PDCs, the variation of the chemical shifts with the porosity, i.e. the slope of the curve, is slightly overestimated by the simulations. This discrepancy can come from the hypotheses made in the lattice model regarding the NICS calculated through DFT (\\emph{e.g.} averages across NICS profiles calculated for different aromatic molecules, positions at which the NICS are calculated close to the carbon surface, ...) which are described in detail in previous works.\\cite{Merlet15,Sasikumar21} This could also be due to the difference in the carbon structures used for NMR experiments and adsorption studies to determine the pore size distribution. It is worth mentioning that the determination of pore size distributions is subject to some limitations. 
Different pore size distributions can be obtained for a given porous carbon depending for example on the probe molecule chosen and the model adopted to interpret the adsorption isotherms.~\\cite{Dombrowski00,Neimark09} Overall, the agreement between the lattice model and experiments for the trend across burn-off values is satisfying. \n\nThe $\\Delta\\delta_{\\rm NICS}$ calculated using the lattice model for all cations ($^7$Li, $^{87}$Rb, $^{133}$Cs) are similar and smaller than the ones of $^1$H for all porous carbons considered. The small difference of 1-2~ppm between $^1$H, $^7$Li and $^{87}$Rb\/$^{133}$Cs can be ascribed to the ion organization at the interface with carbon. Indeed, the free energy profiles extracted from MD simulations show that hydrogen atoms come closer to the carbon surface than the other species, followed by Li$^+$ and by Rb$^+$ and Cs$^+$ (see Figure~S2). As a consequence, the lattice model predicts that the population of Li$^+$ ions (or hydrogen atoms) in small pores is larger than the one of Cs$^+$ (or Rb$^+$) ions leading to a smaller $\\Delta\\delta_{\\rm NICS}$ value for large ions. This is illustrated in Figures~S3 and~S4 giving the distribution of Li$^+$ and Cs$^+$ ions in pores of different sizes. However, one should keep in mind that the lattice model only takes adsorption into account through average free energy profiles obtained at a planar surface and specific energy penalties related to desolvation could change the picture in very small pores. \n\nWhile the lattice model satisfactorily predicts the $\\Delta\\delta$ values of $^1$H and $^7$Li with respect to the experiments\\cite{Cervini19}, the values calculated for $^{87}$Rb and $^{133}$Cs are significantly underestimated, by 4-5 ppm. In addition, while the relative variation in $\\Delta\\delta$ between $^1$H and $^7$Li is reproduced by the lattice model, the relative difference between these two species and $^{87}$Rb\/$^{133}$Cs is completely wrong. 
This shows that the ion organization at the carbon surface and the resulting distribution in the pores, already taken into account in the lattice model, cannot be responsible for this shift. \n\nIn previous works, it was shown that, beyond ion organization, notable contributions of the ring currents to the total shift are to be expected.\\cite{Sasikumar21} In the case of interest here, variations in ring currents may arise due to specific interactions and charge transfer between the electronic density of the alkali metal ions and the carbon. These effects are neglected in the NICS calculations used so far but they can be explored with DFT. \n\nThe chemical shift profiles for Li$^+$, Rb$^+$, and Cs$^+$ ions at various distances from a circumcoronene molecule were calculated by DFT (see Figure~S5). The shifts calculated, referenced to the bare ion in vacuum, are shown in Fi\\-gu\\-re~\\ref{shift_M+}. For completeness, the chemical shifts for other alkali metal ions in the series (Na$^+$ and K$^+$) are given in Figure~S6. For distances below 0.6~nm, the chemical shifts deviate increasingly from the NICS as the ion size increases, which corresponds to an increasing polarizability. The values for Li$^+$ are extremely close to the NICS values. Overall, more negative chemical shifts compared to NICS are observed for Li$^+$ and Rb$^+$, with the deviation being much larger for the latter. The deviation in the shift is -0.2~ppm and -3.7~ppm for Li$^+$ and Rb$^+$, respectively, at a distance of 0.35~nm. The case of Cs$^+$ is strikingly different from the other ions, with an increase of shift by 10.7~ppm compared to the NICS at 0.35~nm. 
\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=0.3]{Figures\/Figure3A.png}\n\\includegraphics[scale=0.3]{Figures\/Figure3B.png}\n\\caption{a) Chemical shift profiles of various alkali metal ions with respect to vacuum calculated at various distances from the centre of the circumcoronene molecule and NICS values at the same distances. b) $\\Delta\\delta$ values of the in-pore species relative to the bulk species obtained using the lattice model with the NICS profiles (NICS) and the chemical shift profiles in presence of the ions (CSP).}\n\\label{shift_M+}\n\\end{figure}\n\nThe origin of the deviation of the chemical shift of the ion from the NICS is not well understood. As the ring currents are sensitive to the molecule charge,\\cite{Sasikumar21} a charge transfer between the ion and the aromatic molecule could be a reason for such a variation. To test this hypothesis, Bader and Mulliken charges on atoms of the cation - circumcoronene systems were calculated.\\cite{g16,bader85,Lu12} The calculations were done for cations 0.35~nm away from the centre of the circumcoronene. The results show evidence of a limited charge transfer of 0.03~e maximum between the cations and the carbon. Interestingly, previous studies have reported only a small change in the ring currents upon interaction with an ion, the nature of the ion not affecting the ring current changes in the carbon.\\cite{Guell05,Rodriguez-otero08,Mary18} \n\nOne way to evaluate the variation in ring currents due to the proximity of the ion is the calculation of the NICS(1),\\cite{Chen05} i.e. the chemical shift of a ``ghost'' atom located 0.1~nm from the ion-circumcoronene system, below the plane of the circumcoronene molecule on the opposite side of the ion. The NICS(1) is evaluated to be -13.6~ppm and -14.0~ppm for Li$^+$ and Cs$^+$, respectively. The NICS(1) corresponding to the circumcoronene molecule without any ion nearby is -15.3~ppm. 
While there is a small change between the systems with and without ion, the variation is probably too small to explain the change in $\\Delta\\delta$ values for large ions. \n\nTo evaluate the influence on the $\\Delta\\delta$ values of the variations in chemical shifts observed for large polarizable ions, with respect to NICS, the lattice simulations were repeated with the chemical shift profiles calculated with the actual ions. The results obtained are shown in Figure~\\ref{shift_M+}. The $\\Delta\\delta_{\\rm CSP}$ and $\\Delta\\delta_{\\rm NICS}$ values are very similar and as such cannot explain the $\\Delta\\delta$ values observed experimentally for Rb$^+$ and Cs$^+$. It is worth noting that the free energy profiles for these large ions (Figure~S2) show a first minimum close to 0.6~nm indicating that very few ions get closer to the carbon than this distance. As the chemical shift profiles and NICS values are not significantly different at these distances even for the large ions, it is not surprising that the variations in chemical shifts due to ion-carbon interactions do not lead to noticeable changes in $\\Delta\\delta$. \n\nOne possibility worth investigating is the fact that MD simulations with planar electrodes, used here to extract free energy profiles, may lead to an underestimation of the amount of ions in smaller pores as shown in other systems.\\cite{Sasikumar21} A more realistic representation of the experiment could be obtained from MD simulations with porous carbon electrodes to check the occupancy of the small pores by the cations. This is usually much more involved computationally and less general so it is out of scope of the current work. Instead, MD simulations of the electrolytes confined in slit pores of width 0.8~nm or 1.1~nm were conducted. These simulations do not show any evidence of cations inside the pore with a width of 0.8~nm. 
In the slit pore of width 1.1~nm, it was observed that Cs$^+$, Rb$^+$ and Li$^+$ ions occupy preferentially the centre of the pore, approximately 0.55~nm away from the carbon surface (see ion and water densities in Figure~S9). Interestingly, the MD simulations suggest that the adsorption of Cs$^+$ ions is slightly more favorable than that of the other cations, in contrast with the lattice simulation results, but the effect is limited. Following these considerations, no major shift is expected from specific ion-carbon interactions. \n\nOverall, the results from the lattice model, relying on MD simulations and DFT calculations, suggest that ion distributions in pores of different sizes and NICS values are sufficient to explain chemical shifts for Li$^+$ and hydrogen atoms from the water molecules but not for the larger Rb$^+$ and Cs$^+$ ions. \n\n\nTheoretical calculations have indicated that the hydration number of ions can have a major effect on their chemical shift; this was shown for example for Na$^+$ and F$^-$ ions for various hydration numbers.\\cite{Luo15b,Gerken02} This is not accounted for in the lattice simulations. The adsorption and dehydration of ions in hydrophobic carbon pores are dependent on an energy barrier related to the solvation energy of the ions\\cite{Luo15b,Beckstein04} and on the pore size. The larger ions, Cs$^+$ and Rb$^+$, are more polarizable than the Li$^+$ ion and have a weaker hydration shell that can distort easily as opposed to the case of Li$^+$. They are therefore more prone to dehydration, potentially leading to a significant variation of the chemical shift for confined species. Furthermore, in the vicinity of the carbon, the presence of solvent molecules can affect both the interaction of the ion with the carbon\\cite{Rao09} and the carbon itself, thus altering the ring currents and the chemical shift. 
Here, we first investigate the dehydration effect in the bulk before exploring the interface between carbon and ion-solvent complexes. We mainly focus on Cs$^+$ as we expect a similar behaviour for Rb$^+$. \n\n\\begin{figure*}[ht!]\n\\centering\n\\includegraphics[scale=0.62]{Figures\/Figure4.png}\n\\caption{Scheme depicting the chemical shift variations for different ion-water-circumcoronene systems corresponding to various local environments for the Cs$^+$ and Li$^+$ ions.}\n\\label{dehydration_shifts}\n\\end{figure*}\n\nTo study the effect of the hydration number on the chemical shift, four configurations of a fully hydrated Cs$^+$ ion in the bulk were extracted from the MD simulations and the chemical shift of the hydrated ion relative to the bare ion was assessed for different hydration numbers by removing water molecules from the hydration shell (see Supplementary Information for details). Starting with configurations with 9 water molecules (a frequent arrangement in the MD simulations), water molecules were removed one by one until reaching the bare ion state which serves as a reference. Between the full hydration and the bare ion, the variation in chemical shift is huge, reaching almost 200~ppm. It is worth noting that low solvation numbers (below 4) are not present in solution, as can be seen in the MD simulations (see Figure~S10). For hydration numbers between 9 and 6, the variation of the chemical shift with the removal of one water molecule is relatively small ranging from 2 to 10~ppm. For hydration numbers below 6, the difference in chemical shift becomes larger in the range of 10 to 50~ppm. In addition to the effect of the number of water molecules, the calculations indicate that, for a given hydration number, the orientation of the water molecules and the ion-water distances affect the chemical shift significantly. A variation as large as 75~ppm is observed as local structure changes. 
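Because the species exchange rapidly between hydration states, the measurable shift is a population-weighted (motionally averaged) mean over those states. The sketch below illustrates this bookkeeping only; the populations and per-state shifts are invented placeholders, not the DFT values reported here:

```python
def motional_average(populations, shifts):
    """Fast-exchange average: population-weighted mean chemical shift (ppm)."""
    if abs(sum(populations) - 1.0) > 1e-9:
        raise ValueError("populations must sum to 1")
    return sum(p * s for p, s in zip(populations, shifts))

# Hypothetical shifts vs the bare ion for hydration numbers 6-9 (ppm),
# loosely reflecting the 2-10 ppm steps per removed water quoted in the text.
shifts_by_n = {6: 160.0, 7: 170.0, 8: 178.0, 9: 183.0}
pops_by_n   = {6: 0.05, 7: 0.25, 8: 0.45, 9: 0.25}

ns  = sorted(shifts_by_n)
avg = motional_average([pops_by_n[n] for n in ns],
                       [shifts_by_n[n] for n in ns])
```

With such placeholder numbers, a small redistribution among the highest hydration states moves the average by only a few ppm, whereas losing several waters would move it by tens of ppm, which is why the hydration-number distributions extracted from the MD simulations matter.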
\n\nFrom the DFT calculations, it seems that a dehydration of the ions in the pores could lead to a very large shift. To complement this study, the MD simulations were used to determine the distribution of hydration numbers for the electrolyte species in the bulk and under confinement in a 1.1~nm slit pore. The results are shown in Figure~S10. The distributions for the bulk and in-pore species are superimposed indicating that the extent of dehydration is minimal in this case. The ion densities across the pores for Cs$^+$ and water atoms indicate that the ions mostly reside at the centre of the pore in a fully hydrated state while the water molecules line the pore walls (see Figure~S9). The effect of the water reorientation in the pores can also be neglected as Cs$^+$-O, Cs$^+$-H pair distribution functions are very similar in the bulk and in the pores (see Figure~S11). It is worth noting that the first coordination shell corresponding to chloride ions is also mostly unaffected by the confinement as can be seen from the Cs$^+$-Cl$^-$ pair distribution functions (see Figure~S11). While the situation could be different in slightly smaller pores, the pore size distributions, showing peaks for 0.8~nm and 1.1~nm, and the fact that ions do not enter pores of 0.8~nm in the simulations suggest that no significant difference in $\\Delta\\delta$ is to be expected from a variation in the hydration number. \n\nTo go beyond the effect of dehydration alone and include the influence of the hydration in the vicinity of a carbon surface, ion-water complexes for fully and partially hydrated cations were extracted from the MD simulations and DFT calculations of the chemical shifts of the ions next to a circumcoronene molecule were conducted. 
The distance of the Cs$^+$ ion from the centre of the closest circumcoronene molecule is set to be 0.35~nm for a partially dehydrated state, which was the closest distance between the Cs$^+$ ion and the carbon surface observed in MD simulations, and 0.45~nm for the fully hydrated configuration, which is a distance commonly observed in the MD simulations. To investigate the chemical shift trend in a slit pore, one more circumcoronene molecule is placed parallel to the first such that the pore width is 1.1~nm. For a detailed understanding of the trends, the chemical shifts of bare ions near a carbon wall and in the slit pore were also evaluated. Similar calculations were conducted for Rb$^+$. For Li$^+$, no desolvation is observed in the MD simulations, so a single fully hydrated configuration was considered with a circumcoronene-ion distance of 0.41~nm. The most relevant computed shifts are shown in Figure~\ref{dehydration_shifts}. The full set of results is given in Supplementary Information (Figures~S12-S16).\n\nThe $\Delta\delta$ value calculated between a partially hydrated Cs$^+$ ion in vacuum, i.e. without the influence of any carbon surface, and the same ion-water complex in a slit pore of 1.1~nm, 0.35~nm away from the closest surface, is -13.0~ppm. This shift is larger than the one observed in experiments, but much closer to it than a simple NICS or bare-ion estimate. Indeed, the $\Delta\delta$ value calculated between a bare Cs$^+$ ion in vacuum and in the same position in a slit pore is 3.7~ppm. For a fully hydrated Cs$^+$ ion in vacuum and at 0.45~nm away from the closest carbon surface, a case observed much more commonly according to MD simulations, the $\Delta\delta$ value calculated is -3.2~ppm. In this case, the shift is smaller in magnitude than the $\Delta\delta$ value of -6.3~ppm obtained for the bare ion.
\n\nInterestingly, the $\Delta\delta$ values calculated for Li$^+$ with or without hydration are very similar (-6.7~ppm and -6.8~ppm, respectively). This reinforces the idea that NICS (or chemical shift profiles) are sufficient to estimate the experimental chemical shift difference between bulk and in-pore Li$^+$ ions. On the contrary, for Rb$^+$, large chemical shift differences between hydrated and dehydrated configurations are obtained, in agreement with the fact that NICS (or chemical shift profiles) are, as for Cs$^+$, not sufficient to explain the $\Delta\delta$ values obtained. \n\nThe DFT calculations presented above suggest that the change in ring currents is not the main origin of the large shift observed. One way to check this is to use the NICS(1), as was done for the bare cations. The NICS(1) is computed to be -14.3~ppm for the partially hydrated Cs$^+$ ion against -13.9~ppm for the bare ion. Clearly, the variation in the chemical shift is not due to an alteration of the ring currents, which this calculation shows to be insignificant. \n\nThe observations made for various hydration numbers and cations underline the challenge of understanding the factors determining the chemical shifts for large polarizable ions such as Cs$^+$ and Rb$^+$, which are strongly affected by their hydration shells. Indeed, experimental results also show a variation of the Cs$^+$ chemical shift with ion concentration even in the bulk,~\cite{Cervini19} an effect we could not explore in this study. To get a better understanding of the relationship between local structure and chemical shift, a much broader study including a large number of configurations and, ideally, more realistic nanopores would be needed. Such a study would represent a considerable computational investment and is beyond the scope of the current work.
While the study conducted here on a few configurations is very limited, it shows that $\Delta\delta$ values in the range of -9 to -11~ppm, similar to what is measured experimentally, can be reached when partially or fully hydrated ions go from the bulk to a confined state in a nanopore. \n\n\nIn this work, we have used a range of computational methods to investigate different factors affecting the chemical shifts of alkali metal ions adsorbed in nanopores with respect to their counterparts in bulk aqueous electrolytes. Chemical shift differences calculated between the in-pore and bulk ions, the so-called $\Delta\delta$, are systematically compared with experiments conducted with polyether ether ketone-derived carbons. A lattice model was first used to explore the importance of the pore size distribution, the ion organization at the carbon surface and the ring currents in the carbon materials in determining the $\Delta\delta$ value. The results show that while this approach seems sufficient to evaluate $\Delta\delta$ for Li$^+$ and hydrogen atoms from water, it leads to poor estimates for Cs$^+$ and Rb$^+$. DFT calculations performed on ion-water configurations extracted from MD simulations suggest that the hydration shell has a large effect on the chemical shifts for large polarizable alkali metal ions. While the solvation shell of these ions seems mostly unaffected by the confinement, the inclusion of water molecules in the chemical shift calculations in the vicinity of a carbon surface is essential to reproduce values in good agreement with experiments.
In the future, a better understanding of the relationship between local environment and chemical shift would require additional computational studies on a wide variety of ion-water configurations under confinement.\n\n\section*{Methods}\n\nA thorough interpretation of the experimental NMR spectra of several aqueous electrolytes in PDCs\cite{Cervini19} is attempted here through a combination of classical MD simulations, DFT calculations and a previously developed lattice model,\cite{Merlet15,Sasikumar21} which allows one to simulate the NMR spectra of electrolyte species diffusing inside porous carbon materials. In order to model a given system, i.e. a given electrolyte in contact with a PDC, one needs to provide the following information:\n\begin{itemize}\n\item a pore size distribution;\n\item the free energy of ions in pores of different sizes;\n\item the chemical shift for ions in pores of different sizes.\n\end{itemize}\nFollowing the lattice model calculations, additional MD simulations and DFT calculations were conducted to test possible origins for the discrepancies between calculations and experiments.\\\n\n\n\emph{Molecular Dynamics simulations}\\\n\nThe systems simulated consist of two graphite slabs, placed parallel to each other, either encompassing or surrounded by an aqueous electrolyte in order to represent an unconfined or a nano-confined electrolyte. In the first geometry considered (Figure~S1a), the distance between the graphite slabs is sufficiently large to recover electrolyte bulk properties in the middle of the fluid region. These calculations are used to determine the free energy profiles of ions and water molecules adsorbed at planar carbon surfaces. In the second geometry considered (Figure~S1b), a slit pore, with a pore size of 0.8~nm or 1.1~nm, is immersed in the electrolyte. These calculations are used to investigate confinement effects.
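Free energy profiles of this kind are commonly obtained from the equilibrium density along the surface normal via Boltzmann inversion, $F(z)/k_{\mathrm{B}}T = -\ln\left[\rho(z)/\rho_{\mathrm{bulk}}\right]$. The sketch below illustrates this standard relation (in units of $k_{\mathrm{B}}T$, with illustrative numbers); the exact procedure used in this work may differ.

```python
import math

def free_energy_profile(density):
    """Boltzmann inversion of a density profile along the surface normal:
    F(z)/kT = -ln(rho(z)/rho_bulk). The bulk density is taken from the
    last bin, assumed to lie far from the carbon surface."""
    rho_bulk = density[-1]
    return [-math.log(rho / rho_bulk) if rho > 0 else math.inf
            for rho in density]

# Toy density histogram: depletion at contact, adsorption peak, then bulk.
rho = [0.5, 2.0, 1.2, 1.0, 1.0]
profile = free_energy_profile(rho)
# The adsorption peak (rho > rho_bulk) maps to a free energy minimum (F < 0).
```

In the lattice model, such profiles then set the relative occupancy of sites corresponding to different pore sizes.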
The choice of the 0.8~nm and 1.1~nm pore widths is based on the pore size distribution obtained for PDCs through adsorption studies,\cite{Cervini19} which show high proportions of pores with these sizes. The electrolytes considered are aqueous solutions of LiCl (1M), RbCl (1M) and CsCl (1M). In all the simulations, 60 ion pairs and 3300 water molecules are used for the electrolyte.\n\nAll-atom MD simulations are conducted using the LAMMPS software.\cite{LAMMPS} Intermolecular interactions are represented by the sum of an electrostatic and a Lennard-Jones potential. For the water molecules, the commonly used extended simple point charge (SPC\/E) force field is chosen.\cite{Berendsen87} The parameters for the alkali metal ions, chloride ions and carbon atoms can be found in published works.~\cite{Koneshan98,Cole83} The systems are first simulated in the NPT ensemble at 1~atm and 298~K for 1~ns until they reach a constant volume. Following this NPT step, simulations are conducted in the NVT ensemble at 298~K for at least 20~ns (14.5~ns for RbCl) for the first geometry and 10~ns for the second geometry. The results from the last 19~ns of the first geometry (except for RbCl, where the last 12~ns are considered) and the last 9~ns of the second geometry are considered for the analysis. A timestep of 1~fs is used for all simulations. Additional details on the simulations are given in Supplementary Information. \\\n\n\n\emph{Density Functional Theory calculations}\\\n\nAll chemical shift calculations are performed using the Gaussian software\cite{frisch2013,g16} and the gauge-including atomic orbital (GIAO) method. A 6-31G(d) basis set and B3LYP exchange-correlation functional are used for NICS calculations. A 3-21G basis set and B3LYP functional are used for chemical shift calculations involving ions, given the basis set availability and the computational expense of calculations involving heavy atoms such as Cs.
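As an illustration of the kind of input involved, the sketch below assembles a minimal Gaussian route section requesting a GIAO shielding calculation at the B3LYP\/3-21G level mentioned above. The title line, geometry and helper name are hypothetical, and actual inputs should follow the Gaussian documentation.

```python
def gaussian_nmr_input(atoms, charge=1, multiplicity=1,
                       route="#P B3LYP/3-21G NMR=GIAO"):
    """Build a minimal Gaussian input for a GIAO shielding calculation.
    `atoms` is a list of (symbol, x, y, z) tuples in Angstrom."""
    lines = [route, "", "Cs+ hydrated cluster, GIAO shielding", "",
             f"{charge} {multiplicity}"]
    for sym, x, y, z in atoms:
        lines.append(f"{sym:2s} {x:12.6f} {y:12.6f} {z:12.6f}")
    lines.append("")  # Gaussian inputs end with a blank line
    return "\n".join(lines)

# Toy geometry: a Cs+ ion with a single water molecule.
cluster = [("Cs", 0.0, 0.0, 0.0),
           ("O", 3.1, 0.0, 0.0),
           ("H", 3.7, 0.8, 0.0),
           ("H", 3.7, -0.8, 0.0)]
print(gaussian_nmr_input(cluster))
```

In practice, one such input would be generated per cluster extracted from the MD trajectory.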
\n\nNICS profiles were evaluated using aromatic molecules (coronene, circumcoronene and dicircumcoronene) as model molecules representing the pore surface. Following previous works, the shielding tensors are calculated at various distances from the carbon surface along a line passing through the centre of the molecule, perpendicular to the molecular plane.\cite{Xing14,Forse14,Kilymis20} The shielding constants are obtained by averaging the diagonal elements of the tensor. The same methodology was used for alkali metal ions at various distances from the centre of the circumcoronene molecule to study the effect of specific ion-carbon interactions (and possible charge transfer) on the chemical shift. The selection of circumcoronene in this case is based on previous simulations, which provided good agreement with experiments for a range of systems\cite{Merlet15,Sasikumar21}. The results of the calculations in the presence of the alkali metal ions are designated as chemical shift profiles.\n\nChemical shifts of the cations in hydrated clusters, extracted from MD simulations, in the presence and absence of circumcoronene molecules are also determined. To study the effect of the hydration number on the chemical shift, water molecules are removed one at a time starting from a fully hydrated configuration, removing the farthest water molecule first, and the chemical shift is calculated at each step. Several starting configurations were considered to investigate a possible effect of the orientation of neighboring water molecules on the chemical shift of the ions.
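The averaging of the diagonal tensor elements amounts to taking one third of the trace of the shielding tensor; the sketch below shows this step and the conversion of a set of tensors into a shift profile (the tensor values are illustrative only, not results from this work).

```python
def isotropic_shielding(tensor):
    """Isotropic shielding constant: average of the diagonal elements
    (one third of the trace) of the 3x3 shielding tensor, in ppm."""
    return sum(tensor[i][i] for i in range(3)) / 3.0

def shift_profile(tensors, sigma_ref):
    """Convert shielding tensors computed at increasing distances from the
    ring centre into chemical shifts relative to a reference shielding."""
    return [sigma_ref - isotropic_shielding(t) for t in tensors]

# Toy tensors at two distances above a ring (illustrative values).
t_close = [[-20.0, 1.0, 0.0], [0.5, -18.0, 0.0], [0.0, 0.0, -40.0]]
t_far = [[-2.0, 0.1, 0.0], [0.0, -2.0, 0.0], [0.0, 0.0, -5.0]]
print(shift_profile([t_close, t_far], sigma_ref=0.0))  # [26.0, 3.0]
```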
To evaluate the chemical shifts of fully or partially dehydrated ions in the vicinity of a carbon surface, configurations were extracted from the MD simulations and the chemical shifts were calculated in the presence of circumcoronene at representative distances for the solvation structures considered.\\\n \n\n\emph{Lattice simulations}\\\n\nIn the mesoscopic model,\cite{Merlet15,Sasikumar21} the carbon particles are represented by a cubic lattice whose sites correspond to a collection of slit pores. Here, we used 20$\times$20$\times$20 lattice sites. PDCs with burn-off values of 41\%, 50\% and 56\% are considered for this study through the inclusion of their pore size distributions obtained experimentally from gas adsorption studies.\cite{Cervini19} Following previous studies, the pore surface distribution is chosen to be a log-normal distribution with a mean of -0.1 and a standard deviation of 0.25 in the case of NICS~\cite{Merlet15,Anouar19,Sasikumar21} and a pore surface corresponding to circumcoronene in the case of chemical shift profiles.\n\nTo account for the distribution of ions in pores of different sizes in the carbon material, free energy profiles are obtained from MD simulations. The free energy profiles determined in this work for the species of interest in the range of electrolytes studied are shown in Figure~S2. The shielding environments of the ions adsorbed at each site are defined using the NICS profiles or the chemical shift profiles obtained from DFT calculations. \n\nThe ion and water dynamics between the pores of the carbon particle are modelled through kinetic Monte Carlo moves. The adsorbed species explore different shielding environments (and thus different resonance frequencies) throughout their motion. The NMR signal is calculated for each ion or water molecule, and a Fourier transform gives access to the NMR spectrum and effective chemical shift.
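The phase-accumulation principle behind this last step can be sketched with a deliberately simplified two-site version: one spin hops between sites with different local shifts, the signal is $S(t) = \exp[i\phi(t)]$, and in the fast-exchange limit the effective shift is the accumulated phase divided by the total time. The actual model handles full lattices and spectra; the function below is only an illustration.

```python
import random

def effective_shift(shifts_ppm, hop_prob, n_steps, dt=1.0, seed=0):
    """Random walk of one spin between lattice sites with different
    chemical shifts (in ppm, used directly as precession frequencies for
    simplicity). Returns the accumulated phase divided by the total time,
    i.e. the effective shift in the fast-exchange limit."""
    rng = random.Random(seed)
    site, phase = 0, 0.0
    for _ in range(n_steps):
        phase += shifts_ppm[site] * dt       # precess at the local frequency
        if rng.random() < hop_prob:          # kinetic Monte Carlo style hop
            site = rng.randrange(len(shifts_ppm))
    return phase / (n_steps * dt)

# Fast exchange between a "bulk" site (0 ppm) and an "in-pore" site (-8 ppm)
# averages the two environments into a single effective resonance near -4 ppm.
print(effective_shift([0.0, -8.0], hop_prob=0.9, n_steps=200000))
```

Slowing the exchange (small `hop_prob`) would instead leave distinct bulk and in-pore resonances, which is why the dynamics between pores matters for the computed spectrum.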
A more detailed description of lattice simulations is available in published works\cite{Merlet15,Anouar19}. \n\nThe mesoscopic model allows one to calculate the chemical shift difference, $\Delta\delta$, between species located between the carbon particles (bulk) and species inside the pores (in-pore), and to analyse the effects of ring currents and ion organization on the global shift. It is worth noting that the lattice model does not explicitly account for any effects of ion hydration on the global shift.\n\n\begin{acknowledgement} \n\nThis project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement no. 714581). JMG acknowledges funding from EPSRC with grant number EP\/V05001X\/1. This work was granted access to the HPC resources of the CALMIP supercomputing centre under the allocations P17037 and P21014, and of TGCC under the allocations A0070911061 and A0080910463 made by GENCI. The authors acknowledge Luca Cervini for sharing experimental data and for fruitful discussions, and Patrice Simon and Alexander Forse for useful conversations.\n\n\end{acknowledgement}\n\n\begin{suppinfo}\n\nSupplementary Information available: details of the molecular dynamics simulations, free energy profiles, pore size distributions and distributions of ions in the different pore sizes, chemical shift profiles for alkali metal ions in the vicinity of circumcoronene, characterization of the ion hydration and its effect on the chemical shift in the bulk, chemical shift for different ion-water-circumcoronene configurations. \n\n\end{suppinfo}\n\n\section*{Data availability}\n\nThe program used to perform the lattice simulations is available, along with a manual, on Github (https:\/\/github.com\/cmerlet\/LPC3D).
The data corresponding to the plots reported in this paper, as well as input files for the MD and lattice simulations, and DFT calculations, are available in the Zenodo repository with the identifier 00.0000\/zenodo.0000000.\n\n\\providecommand{\\latin}[1]{#1}\n\\makeatletter\n\\providecommand{\\doi}\n {\\begingroup\\let\\do\\@makeother\\dospecials\n \\catcode`\\{=1 \\catcode`\\}=2 \\doi@aux}\n\\providecommand{\\doi@aux}[1]{\\endgroup\\texttt{#1}}\n\\makeatother\n\\providecommand*\\mcitethebibliography{\\thebibliography}\n\\csname @ifundefined\\endcsname{endmcitethebibliography}\n {\\let\\endmcitethebibliography\\endthebibliography}{}\n\\begin{mcitethebibliography}{53}\n\\providecommand*\\natexlab[1]{#1}\n\\providecommand*\\mciteSetBstSublistMode[1]{}\n\\providecommand*\\mciteSetBstMaxWidthForm[2]{}\n\\providecommand*\\mciteBstWouldAddEndPuncttrue\n {\\def\\unskip.}{\\unskip.}}\n\\providecommand*\\mciteBstWouldAddEndPunctfalse\n {\\let\\unskip.}\\relax}\n\\providecommand*\\mciteSetBstMidEndSepPunct[3]{}\n\\providecommand*\\mciteSetBstSublistLabelBeginEnd[3]{}\n\\providecommand*\\unskip.}{}\n\\mciteSetBstSublistMode{f}\n\\mciteSetBstMaxWidthForm{subitem}{(\\alph{mcitesubitemcount})}\n\\mciteSetBstSublistLabelBeginEnd\n {\\mcitemaxwidthsubitemform\\space}\n {\\relax}\n {\\relax}\n\n\\bibitem[Fic \\latin{et~al.}(2012)Fic, Lota, Meller, and Frackowiak]{Fic12}\nFic,~K.; Lota,~G.; Meller,~M.; Frackowiak,~E. Novel insight into neutral medium\n as electrolyte for high-voltage supercapacitors. \\emph{Energy Environ. Sci.}\n \\textbf{2012}, \\emph{5}, 5842--5850\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Sajjad \\latin{et~al.}(2021)Sajjad, Khan, Cheng, and Lu]{Sajjad21}\nSajjad,~M.; Khan,~M.~I.; Cheng,~F.; Lu,~W. A review on selection criteria of\n aqueous electrolytes performance evaluation for advanced asymmetric\n supercapacitors. \\emph{J. 
Ener. Stor.} \\textbf{2021}, \\emph{40}, 102729\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Porada \\latin{et~al.}(2013)Porada, Borchardt, Oschatz, Bryjak,\n Atchison, Keesman, Kaskel, Biesheuvel, and Presser]{Porada13}\nPorada,~S.; Borchardt,~L.; Oschatz,~M.; Bryjak,~M.; Atchison,~J.~S.;\n Keesman,~K.~J.; Kaskel,~S.; Biesheuvel,~P.~M.; Presser,~V. Direct prediction\n of the desalination performance of porous carbon electrodes for capacitive\n deionization. \\emph{Energy Environ. Sci.} \\textbf{2013}, \\emph{6},\n 3700--3712\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Luciano \\latin{et~al.}(2020)Luciano, Ribeiro, Bruch, and\n Silva]{Luciano20}\nLuciano,~M.~A.; Ribeiro,~H.; Bruch,~G.~E.; Silva,~G.~G. Efficiency of\n capacitive deionization using carbon materials based electrodes for water\n desalination. \\emph{J. Electroanal. Chem.} \\textbf{2020}, \\emph{859},\n 113840\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Ohba \\latin{et~al.}(2012)Ohba, Hata, and Kanoh]{Ohba12}\nOhba,~T.; Hata,~K.; Kanoh,~H. Significant hydration shell formation instead of\n hydrogen bonds in nanoconfined aqueous electrolyte solutions. \\emph{J. Am.\n Chem. 
Soc.} \\textbf{2012}, \\emph{134}, 17850--17853\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Ohkubo \\latin{et~al.}(2002)Ohkubo, Konishi, Hattori, Kanoh, Fujikawa,\n and Kaneko]{Ohkubo02}\nOhkubo,~T.; Konishi,~T.; Hattori,~Y.; Kanoh,~H.; Fujikawa,~T.; Kaneko,~K.\n Restricted hydration structures of Rb and Br ions confined in slit-shaped\n carbon nanospace. \\emph{J. Am. Chem. Soc.} \\textbf{2002}, \\emph{124},\n 11860--11861\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Prehal \\latin{et~al.}(2015)Prehal, Weingarth, Perre, Lechner,\n Amenitsch, Paris, and Presser]{Prehal15}\nPrehal,~C.; Weingarth,~D.; Perre,~E.; Lechner,~R.~T.; Amenitsch,~H.; Paris,~O.;\n Presser,~V. Tracking the structural arrangement of ions in carbon\n supercapacitor nanopores using in situ {S}mall-{A}ngle X-ray {S}cattering.\n \\emph{Energy Environ. Sci.} \\textbf{2015}, \\emph{8}, 1725--1735\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Prehal \\latin{et~al.}(2017)Prehal, Koczwara, J\u00e4ckel, Amenitsch,\n Presser, and Paris]{Prehal17}\nPrehal,~C.; Koczwara,~C.; J\u00e4ckel,~N.; Amenitsch,~H.; Presser,~V.; Paris,~O. A\n carbon nanopore model to quantify structure and kinetics of ion\n electrosorption with in situ {Small-Angle X-ray Scattering}. \\emph{Phys.\n Chem. Chem. Phys.} \\textbf{2017}, \\emph{19}, 15549--15561\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Escobar-Teran \\latin{et~al.}(2022)Escobar-Teran, Perrot, and\n Sel]{Escobar-Teran22}\nEscobar-Teran,~F.; Perrot,~H.; Sel,~O. 
Ion dynamics at the carbon\n electrode\/electrolyte interface: Influence of carbon nanotubes types.\n \\emph{Materials} \\textbf{2022}, \\emph{15}, 1867\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Wu \\latin{et~al.}(2018)Wu, Taberna, and Simon]{Wu18}\nWu,~Y.-C.; Taberna,~P.-L.; Simon,~P. Tracking ionic fluxes in porous carbon\n electrodes from aqueous electrolyte mixture at various pH. \\emph{Electrochem.\n Commun.} \\textbf{2018}, \\emph{93}, 119--122\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Nunes \\latin{et~al.}(2020)Nunes, Pires, {De Oliveira}, {de Marque},\n Cremasco, Vicentini, Doubek, {Da Silva}, and Zanin]{Nunes20}\nNunes,~W.~G.; Pires,~B.~M.; {De Oliveira},~F.~E.; {de Marque},~A.~M.;\n Cremasco,~L.~F.; Vicentini,~R.; Doubek,~G.; {Da Silva},~L.~M.; Zanin,~H.\n Study of the aging process of nanostructured porous carbon-based electrodes\n in electrochemical capacitors filled with aqueous or organic electrolytes.\n \\emph{J. Ener. Stor.} \\textbf{2020}, \\emph{28}, 101249\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Cervini \\latin{et~al.}(2019)Cervini, Lynes, Akien, Kerridge, Barrow,\n and Griffin]{Cervini19}\nCervini,~L.; Lynes,~O.~D.; Akien,~G.~R.; Kerridge,~A.; Barrow,~N.~S.;\n Griffin,~J.~M. Factors affecting the nucleus-independent chemical shift in\n {NMR} studies of microporous carbon electrode materials. 
\\emph{Energy Storage\n Mater.} \\textbf{2019}, \\emph{21}, 335--346\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Luo \\latin{et~al.}(2015)Luo, Xing, Ling, Kleinhammes, and Wu]{Luo15a}\nLuo,~Z.-X.; Xing,~Y.-Z.; Ling,~Y.-C.; Kleinhammes,~A.; Wu,~Y. Electroneutrality\n breakdown and specific ion effects in nanoconfined aqueous electrolytes\n observed by NMR. \\emph{Nature commun.} \\textbf{2015}, \\emph{6}, 1--8\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Luo \\latin{et~al.}(2015)Luo, Xing, Liu, Ling, Kleinhammes, and\n Wu]{Luo15b}\nLuo,~Z.-X.; Xing,~Y.-Z.; Liu,~S.; Ling,~Y.-C.; Kleinhammes,~A.; Wu,~Y.\n Dehydration of ions in voltage-gated carbon nanopores observed by in situ\n {NMR}. \\emph{J. Phys. Chem. Lett.} \\textbf{2015}, \\emph{6}, 5022\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Harris \\latin{et~al.}(1996)Harris, Thompson, Forshaw, Foley, Thomas,\n Norman, and Pottage]{Harris96}\nHarris,~R.; Thompson,~T.; Forshaw,~P.; Foley,~N.; Thomas,~K.; Norman,~P.;\n Pottage,~C. A magic-angle spinning NMR study into the adsorption of\n deuterated water by activated carbon. \\emph{Carbon} \\textbf{1996}, \\emph{34},\n 1275--1279\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Dickinson \\latin{et~al.}(2000)Dickinson, Harris, Shaw, Chinn, and\n Norman]{Dickinson2000}\nDickinson,~L.~M.; Harris,~R.~K.; Shaw,~J.~A.; Chinn,~M.; Norman,~P.~R.\n Oxygen-17 and deuterium NMR investigation into the adsorption of water on\n activated carbon. \\emph{Magn. Reson. 
Chem.} \\textbf{2000}, \\emph{38},\n 918--924\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Griffin \\latin{et~al.}(2014)Griffin, Forse, Wang, Trease, Taberna,\n Simon, and Grey]{Griffin14}\nGriffin,~J.~M.; Forse,~A.~C.; Wang,~H.; Trease,~N.~M.; Taberna,~P.-L.;\n Simon,~P.; Grey,~C.~P. Ion counting in supercapacitor electrodes using {NMR}\n spectroscopy. \\emph{Faraday Discuss.} \\textbf{2014}, \\emph{176}, 49--68\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Griffin \\latin{et~al.}(2016)Griffin, Forse, and Grey]{Griffin16}\nGriffin,~J.~M.; Forse,~A.~C.; Grey,~C.~P. Solid-state NMR studies of\n supercapacitors. \\emph{Solid State Nucl. Mag.} \\textbf{2016}, \\emph{74-75},\n 16 -- 35\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Forse \\latin{et~al.}(2016)Forse, Merlet, Griffin, and Grey]{Forse16}\nForse,~A.~C.; Merlet,~C.; Griffin,~J.~M.; Grey,~C.~P. New perspectives on the\n charging mechanisms of supercapacitors. \\emph{J. Am. Chem. Soc.}\n \\textbf{2016}, \\emph{138}, 5731--5744\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Forse \\latin{et~al.}(2017)Forse, Griffin, Merlet, Carreteo-Gonzalez,\n Raji, Trease, and Grey]{Forse17}\nForse,~A.~C.; Griffin,~J.~M.; Merlet,~C.; Carreteo-Gonzalez,~J.;\n Raji,~A.-R.~O.; Trease,~N.~M.; Grey,~C.~P. Direct observation of ion dynamics\n in supercapacitor electrodes using \\emph{in situ} diffusion {NMR}\n spectroscopy. 
\\emph{Nature Energy} \\textbf{2017}, \\emph{2}, 16216\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Lazzeretti(2000)]{Lazzeretti2000}\nLazzeretti,~P. Ring currents. \\emph{Prog. Nucl. Magn. Res. Sp.} \\textbf{2000},\n \\emph{36}, 1--88\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Anderson \\latin{et~al.}(2010)Anderson, McNicholas, Kleinhammes, Wang,\n Liu, and Wu]{Anderson10}\nAnderson,~R.~J.; McNicholas,~T.~P.; Kleinhammes,~A.; Wang,~A.; Liu,~J.; Wu,~Y.\n NMR methods for characterizing the pore structures and hydrogen storage\n properties of microporous carbons. \\emph{J. Am. Chem. Soc.} \\textbf{2010},\n \\emph{132}, 8618--8626\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Chen \\latin{et~al.}(2005)Chen, Wannere, Corminboeuf, Puchta, and\n Schleyer]{Chen05}\nChen,~Z.; Wannere,~C.~S.; Corminboeuf,~C.; Puchta,~R.; Schleyer,~P. v.~R.\n Nucleus-Independent Chemical Shifts (NICS) as an aromaticity criterion.\n \\emph{Chem. Rev.} \\textbf{2005}, \\emph{105}, 3842--3888\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Gershoni-Poranne and Stanger(2021)Gershoni-Poranne, and\n Stanger]{Gershoni-Poranne21}\nGershoni-Poranne,~R.; Stanger,~A. In \\emph{Aromaticity}; Fernandez,~I., Ed.;\n Elsevier, 2021; pp 99--154\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Xing \\latin{et~al.}(2014)Xing, Luo, K., and Wu]{Xing14}\nXing,~Y.-Z.; Luo,~Z.-X.; K.,~A.; Wu,~Y. 
Probing carbon micropore size\n distribution by nucleus independent chemical shift. \\emph{Carbon}\n \\textbf{2014}, \\emph{77}, 1132--1139\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Forse \\latin{et~al.}(2014)Forse, Griffin, Presser, Gogotsi, and\n Grey]{Forse14}\nForse,~A.~C.; Griffin,~J.~M.; Presser,~V.; Gogotsi,~Y.; Grey,~C.~P. Ring\n current effects: {F}actors Affecting the {NMR} chemical shift of molecules\n adsorbed on porous carbons. \\emph{J. Phys. Chem. C} \\textbf{2014},\n \\emph{118}, 7508--7514\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Forse \\latin{et~al.}(2015)Forse, Merlet, Allan, Humphreys, Griffin,\n Aslan, Zeiger, Presser, Gogotsi, and Grey]{Forse15b}\nForse,~A.~C.; Merlet,~C.; Allan,~P.~K.; Humphreys,~E.~K.; Griffin,~J.~M.;\n Aslan,~M.; Zeiger,~M.; Presser,~V.; Gogotsi,~Y.; Grey,~C.~P. New insights\n into the structure of nanoporous carbons from {NMR}, {R}aman, and pair\n distribution function Analysis. \\emph{Chem. Mater.} \\textbf{2015}, \\emph{27},\n 6848--6857\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Forse \\latin{et~al.}(2021)Forse, Merlet, Grey, and Griffin]{Forse21}\nForse,~A.~C.; Merlet,~C.; Grey,~C.~P.; Griffin,~J.~M. NMR Studies of Adsorption\n and Diffusion in Porous Carbonaceous Materials. \\emph{Prog. Nucl. Mag. 
Res.\n Sp.} \\textbf{2021}, \\emph{124}, 57\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Sasikumar \\latin{et~al.}(2021)Sasikumar, Belhboub, Bacon, Forse,\n Griffin, Grey, Simon, and Merlet]{Sasikumar21}\nSasikumar,~A.; Belhboub,~A.; Bacon,~C.; Forse,~A.~C.; Griffin,~J.~M.;\n Grey,~C.~P.; Simon,~P.; Merlet,~C. Mesoscopic simulations of the in situ NMR\n spectra of porous carbon based supercapacitors: electronic structure and\n adsorbent reorganisation effects. \\emph{Phys. Chem. Chem. Phys.}\n \\textbf{2021}, \\emph{2}, 15925--15934\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Gerken \\latin{et~al.}(2002)Gerken, Boatz, Kornath, Haiges, Schneider,\n Schroer, and Christe]{Gerken02}\nGerken,~M.; Boatz,~J.; Kornath,~A.; Haiges,~R.; Schneider,~S.; Schroer,~T.;\n Christe,~K. The $^{19}$F NMR shifts are not a measure for the nakedness of\n the fluoride anion. \\emph{J. Fluorine Chem.} \\textbf{2002}, \\emph{116},\n 49--58\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Fulik \\latin{et~al.}(2018)Fulik, Hippauf, Leistenschneider, Paasch,\n Kaskel, Brunner, and Borchardt]{Fulik18}\nFulik,~N.; Hippauf,~F.; Leistenschneider,~D.; Paasch,~S.; Kaskel,~S.;\n Brunner,~E.; Borchardt,~L. 
Electrolyte mobility in supercapacitor electrodes\n \u2013 Solid state NMR studies on hierarchical and narrow pore sized carbons.\n \\emph{Energy Storage Mater.} \\textbf{2018}, \\emph{12}, 183--190\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Feng \\latin{et~al.}(2010)Feng, Qiao, Huang, Sumpter, and\n Meunier]{Feng10}\nFeng,~G.; Qiao,~R.; Huang,~J.; Sumpter,~B.~G.; Meunier,~V. Ion distribution in\n electrified micropores and its role in the anomalous enhancement of\n capacitance. \\emph{ACS Nano} \\textbf{2010}, \\emph{4}, 2382--2390\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Beckstein \\latin{et~al.}(2004)Beckstein, Tai, and Sansom]{Beckstein04}\nBeckstein,~O.; Tai,~K.; Sansom,~M. S.~P. Not ions alone: Barriers to ion\n permeation in nanopores and channels. \\emph{J. Am. Chem. Society}\n \\textbf{2004}, \\emph{126}, 14694--14695\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Ganfoud \\latin{et~al.}(2019)Ganfoud, Sene, Haefele,\n Marin-Lafl{\\`e}che, Daffos, Taberna, Salanne, Simon, and\n Rotenberg]{ganfoud19}\nGanfoud,~N.; Sene,~A.; Haefele,~M.; Marin-Lafl{\\`e}che,~A.; Daffos,~B.;\n Taberna,~P.-L.; Salanne,~M.; Simon,~P.; Rotenberg,~B. 
Effect of the carbon\n microporous structure on the capacitance of aqueous supercapacitors.\n \\emph{Energy Storage Mater.} \\textbf{2019}, \\emph{21}, 190--195\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Merlet \\latin{et~al.}(2015)Merlet, Forse, Griffin, Frenkel, and\n Grey]{Merlet15}\nMerlet,~C.; Forse,~A.~C.; Griffin,~J.~M.; Frenkel,~D.; Grey,~C.~P. Lattice\n simulation method to model diffusion and {NMR} spectra in porous materials.\n \\emph{J. Chem. Phys.} \\textbf{2015}, \\emph{142}, 094701\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Wang \\latin{et~al.}(2011)Wang, K{\\\"o}ster, Trease, S{\\'e}galini,\n Taberna, Simon, Gogotsi, and Grey]{Wang11b}\nWang,~H.; K{\\\"o}ster,~T. K.~J.; Trease,~N.~M.; S{\\'e}galini,~J.;\n Taberna,~P.-L.; Simon,~P.; Gogotsi,~Y.; Grey,~C.~P. Real-time NMR studies of\n electrochemical double-Layer capacitors. \\emph{J. Am. Chem. Soc.}\n \\textbf{2011}, \\emph{133}, 19270--19273\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Dombrowski \\latin{et~al.}(2000)Dombrowski, Hyduke, and\n Lastoskie]{Dombrowski00}\nDombrowski,~R.~J.; Hyduke,~D.~R.; Lastoskie,~C.~M. Pore size analysis of\n activated carbons from argon and nitrogen porosimetry using density\n functional theory. \\emph{Langmuir} \\textbf{2000}, \\emph{16}, 5041--5050\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Neimark \\latin{et~al.}(2009)Neimark, Lin, Ravikovitch, and\n Thommes]{Neimark09}\nNeimark,~A.~V.; Lin,~Y.; Ravikovitch,~P.~I.; Thommes,~M. 
Quenched solid density\n functional theory and pore size analysis of micro-mesoporous carbons.\n \\emph{Carbon} \\textbf{2009}, \\emph{47}, 1617 -- 1628\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Frisch \\latin{et~al.}(2016)Frisch, Trucks, Schlegel, Scuseria, Robb,\n Cheeseman, Scalmani, Barone, Petersson, \\latin{et~al.} others]{g16}\nFrisch,~M.~J.; Trucks,~G.~W.; Schlegel,~H.~B.; Scuseria,~G.~E.; Robb,~M.~A.;\n Cheeseman,~J.~R.; Scalmani,~G.; Barone,~V.; Petersson,~G.~A., \\latin{et~al.}\n Gaussian16 {R}evision {C}.01. \\emph{Gaussian Inc. Wallingford CT}\n \\textbf{2016}, \\relax\n\\mciteBstWouldAddEndPunctfalse\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Bader(1985)]{bader85}\nBader,~R.~F. Atoms in molecules. \\emph{Acc. Chem. Res.} \\textbf{1985},\n \\emph{18}, 9--15\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Lu and Chen(2012)Lu, and Chen]{Lu12}\nLu,~T.; Chen,~F. Multiwfn: a multifunctional wavefunction analyzer. \\emph{J.\n Comput. 
Chem.} \\textbf{2012}, \\emph{33}, 580--592\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[G\u00fcell \\latin{et~al.}(2005)G\u00fcell, Poater, Luis, M\u00f3, Y\u00e1\u00f1ez, and\n Sol\u00e0]{Guell05}\nG\u00fcell,~M.; Poater,~J.; Luis,~J.~M.; M\u00f3,~O.; Y\u00e1\u00f1ez,~M.; Sol\u00e0,~M.\n Aromaticity analysis of lithium Cation\/$\\pi$ complexes of aromatic systems.\n \\emph{ChemPhysChem} \\textbf{2005}, \\emph{6}, 2552--2561\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Rodr\u00edguez-Otero \\latin{et~al.}(2008)Rodr\u00edguez-Otero, Cabaleiro-Lago,\n and Pe\u00f1a-Gallego]{Rodriguez-otero08}\nRodr\u00edguez-Otero,~J.; Cabaleiro-Lago,~E.~M.; Pe\u00f1a-Gallego,~A. Cation-$\\pi$ and\n anion-$\\pi$ interactions: Changes in aromaticity upon complexation.\n \\emph{Chem. Phys. Lett.} \\textbf{2008}, \\emph{452}, 49--53\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Mary and Gupta(2018)Mary, and Gupta]{Mary18}\nMary,~A.; Gupta,~R. Effect of counterion on the reactivity, stability,\n aromaticity and charge distribution in mono- and polyphosphacyclopentadienide\n ions \u2013 A theoretical investigation. \\emph{Comput. Theor. Chem.}\n \\textbf{2018}, \\emph{1139}, 27--37\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Rao \\latin{et~al.}(2009)Rao, Zipse, and Sastry]{Rao09}\nRao,~J.~S.; Zipse,~H.; Sastry,~G.~N. Explicit solvent effect on cation-$\\pi$\n interactions: A first principle investigation. \\emph{J. Phys. Chem. 
B}\n \\textbf{2009}, \\emph{113}, 7225--7236\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Plimpton(1995)]{LAMMPS}\nPlimpton,~S. Fast parallel algorithms for short-range molecular dynamics.\n \\emph{J. Comp. Phys.} \\textbf{1995}, \\emph{117}, 1--19\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Berendsen \\latin{et~al.}(1987)Berendsen, Grigera, and\n Straatsma]{Berendsen87}\nBerendsen,~H. J.~C.; Grigera,~J.~R.; Straatsma,~T.~P. The missing term in\n effective pair potentials. \\emph{J. Phys. Chem.} \\textbf{1987}, \\emph{91},\n 6269--6271\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Koneshan \\latin{et~al.}(1998)Koneshan, Rasaiah, Lynden-Bell, and\n Lee]{Koneshan98}\nKoneshan,~S.; Rasaiah,~J.~C.; Lynden-Bell,~R.~M.; Lee,~S.~H. Solvent structure,\n dynamics, and ion mobility in aqueous solutions at 25 \u00b0C. \\emph{J. Phys.\n Chem. B} \\textbf{1998}, \\emph{102}, 4193--4204\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Cole and Klein(1983)Cole, and Klein]{Cole83}\nCole,~M.~W.; Klein,~J.~R. The interaction between noble gases and the basal\n plane surface of graphite. \\emph{Surf. 
Sci.} \\textbf{1983}, \\emph{124}, 547\n -- 554\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Frisch \\latin{et~al.}(2013)Frisch, Trucks, Schlegel, Scuseria, Robb,\n Cheeseman, Scalmani, Barone, Mennucci, Petersson, \\latin{et~al.}\n others]{frisch2013}\nFrisch,~M.~J.; Trucks,~G.~W.; Schlegel,~H.~B.; Scuseria,~G.~E.; Robb,~M.~A.;\n Cheeseman,~J.~R.; Scalmani,~G.; Barone,~V.; Mennucci,~B.; Petersson,~G.~A.\n \\latin{et~al.} {Gaussian 09, Revision D. 01, 2013, Gaussian}. \\emph{Inc.,\n Wallingford CT} \\textbf{2013}, \\relax\n\\mciteBstWouldAddEndPunctfalse\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Kilymis \\latin{et~al.}(2020)Kilymis, Bart\\'ok, Pickard, Forse, and\n Merlet]{Kilymis20}\nKilymis,~D.; Bart\\'ok,~A.~P.; Pickard,~C.~J.; Forse,~A.~C.; Merlet,~C.\n Efficient prediction of nucleus independent chemical shifts for polycyclic\n aromatic hydrocarbons. \\emph{Phys. Chem. Chem. Phys.} \\textbf{2020},\n \\emph{22}, 13746--13755\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\bibitem[Belhboub \\latin{et~al.}(2019)Belhboub, Lahrar, Simon, and\n Merlet]{Anouar19}\nBelhboub,~A.; Lahrar,~E.~H.; Simon,~P.; Merlet,~C. On the development of an\n original mesoscopic model to predict the capacitive properties of\n carbon-carbon supercapacitors. \\emph{Electrochim. 
Acta} \\textbf{2019},\n \\emph{327}, 135022\\relax\n\\mciteBstWouldAddEndPuncttrue\n\\mciteSetBstMidEndSepPunct{\\mcitedefaultmidpunct}\n{\\mcitedefaultendpunct}{\\mcitedefaultseppunct}\\relax\n\\unskip.}\n\\end{mcitethebibliography}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCommodity programmable graphics hardware such as the AMD R600\/R700 and NVIDIA G80\/GT200 architectures have made available on the desktop TFLOP-scale floating-point performance formerly accessible only on dedicated supercomputers, with peak throughput improvements of over an order of magnitude relative to conventional CPUs. This extremely high performance makes GPUs attractive in computational chemistry, as many applications are bound by the limits of available computational power (e.g., the simulation time, simulated system size, and forcefield detail in molecular dynamics are all fundamentally constrained by available computation). Indeed, GPUs have been applied to several important problems in computational chemistry including molecular dynamics \\cite{Friedrichs09,Stone07}, Poisson-Boltzmann electrostatics \\cite{Narumi09}, DFT and MP2 quantum chemistry models \\cite{Yasuda08,Ufimtsev08,Vogt08}, and molecular comparison \\cite{Haque09}. Scientific problems outside chemistry, including biological sequence alignment \\cite{Manavski08} and machine learning \\cite{Catanzaro08} have also shown significant speedups from reimplementation for GPU execution. \n\nWhile GPGPU (general-purpose computation on GPUs) is attractive from the perspective of throughput, its origin in the relatively-error-tolerant area of consumer graphics has raised concerns about reliability. The reliability of GPUs in error-intolerant applications such as scientific simulations is largely unproven. Previous work \\cite{Sheaffer06, Sheaffer07} has questioned the reliability of GPU logic and proposed methods for logic hardening. 
A particular point of concern, however, is the reliability of memory on GPUs. The lack of ECC (error-checking-and-correcting) functionality in GPGPU systems has been previously noted \\cite{Sheaffer07}. \n\nWhile circumstantial evidence raises reliability questions, no prior work has actually quantified the reliability of GPGPU. Our contribution in this work is a quantification of the reliability of GPU memory subsystems in the context of GPGPU which has important implications for both developers and users of GPU-based scientific codes. We have carried out a large-scale assessment of GPU reliability by using a custom test code to measure error rates on more than 20,000 NVIDIA GPUs on the Folding@home distributed computing platform. In addition to the version of the tester deployed on Folding@home, we have released MemtestG80, a standalone version of our test code, under an open-source LGPL license at \\texttt{https:\/\/simtk.org\/home\/memtest}.\n\nOur experiment comprises over 200 terabyte-hours of memory testing distributed over more than 20,000 individual GPUs worldwide. We also present control experiments carried out on individual nodes and a GPU cluster. Our results demonstrate a detectable, pattern-sensitive rate of memory faults in the installed base of commercial GPU hardware. Specifically, even after controlling for overclocked cards and time of day of test execution (as a proxy for ambient temperature) we find that two-thirds of all tested GPUs exhibit sensitivity to memory faults in a pattern-dependent manner. This error rate depends strongly on board architecture, with devices based on the newer GT200 GPU exhibiting soft error rates nearly an order of magnitude lower than those based on the older G80\/G92 design. \n\nOur paper is organized as follows. We first offer a primer on soft error rate generation and detection mechanisms, and explore prior work (Section \\ref{sec:background}). 
We then present the design and validation of MemtestG80, our software-based tester to detect soft errors in logic and memory on NVIDIA GPUs (Section \\ref{sec:memtestG80}), and explain the methodology of our large-scale study of error rates using the Folding@home distributed computing network (Section \\ref{sec:methodology}). In Section \\ref{sec:results}, we present the results of our experiment. We subsequently present a detailed analysis of the data in Section \\ref{sec:analysis}, examining possible error-inducing conditions. We conclude with a discussion of the implications of our findings for the field of reliable GPGPU and steps to be taken by both hardware and software developers.\n\n\\section{Background and Previous Work}\n\\label{sec:background}\nErrors in hardware systems can be classified as ``soft'' or ``hard'': hard errors are triggered by physical hardware defects, whereas soft errors are random, transient faults not due to physical defects. The term ``soft error'' has traditionally referred to radiation-induced upsets in electronic circuits, in which high-energy particles from the environment (e.g., cosmic rays \\cite{Ziegler96} or radiation from chip packaging materials \\cite{Gordon08}) cause erroneous operation by creating localized charge disturbances \\cite{Shivakumar02, Baumann02}. These soft errors are a significant problem in traditional supercomputing, as reflected in the error-checking-and-correcting design of the IBM BlueGene\/L supercomputer \\cite{Adiga02}. \n\nMechanisms other than radiation can also induce transient errors. In memories, reads and writes can occasionally affect the state of adjacent cells; in both memories and logic, timing violations (e.g., from improper specification or overclocking) can cause transient errors \\cite{Cockburn94}. 
Furthermore, at both the system and chip level, improper signal routing and power design can induce transient errors by various mechanisms such as voltage droop, crosstalk and timing jitter \\cite{Sheaffer06, Metra00}. In this work, we use the term ``soft error'' to refer to the larger class of transient faults, rather than just radiation-induced errors.\n\nWhile memory manufacturers typically do not disclose soft error rate (SER) information \\cite{Cataldo01}, estimates have been drawn for RAM SER from a variety of sources \\cite{Tezzaron04}, ranging from $5 \\times 10^{-14}$ to $4 \\times 10^{-7}$ errors per bit-hour. Data from IBM indicate that in natural environments, even with hundreds of devices under test, more than 1,000 testing hours may be required to accumulate statistically meaningful test results \\cite{Gordon08}. Additionally, possible environmental (e.g., thermal or radiation) effects on SER dictate that a variety of conditions be tested. For example, cosmic ray flux varies by a factor of two depending on latitude \\cite{Ziegler96}, and approximately 13x between sea level and 10,200 ft in altitude \\cite{Gordon08}. Thus, a very-large-scale approach is required in order to efficiently accumulate statistically-significant data regarding SER.\n\nBecause of the significance of SER to the semiconductor and computer industries, much past work has been done on techniques for measuring memory and logic SER. Cockburn \\cite{Cockburn94} presents an introduction to hardware testing methods and fault models used by semiconductor device manufacturers to test memories, and also presents the use of random test patterns as a simple method that achieves good test coverage across a variety of failure modes. Suk and Reddy \\cite{Suk80} establish the importance of using different test patterns to detect pattern-sensitive device faults; however, their presented patterns depend on intimate knowledge of the underlying memory layout. 
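The SER estimates quoted above span nearly seven orders of magnitude. A back-of-the-envelope calculation, sketched below for a hypothetical 1 GiB board (the capacity is our illustrative assumption, not a figure from any cited study), shows why the low end of that range demands very-large-scale testing:

```python
# Expected bit-error counts over the published SER range of
# 5e-14 to 4e-7 errors per bit-hour. The 1 GiB board capacity is an
# illustrative assumption, not a figure from the study.

BITS_PER_GIB = 8 * 2**30  # 8,589,934,592 bits

def expected_errors(ser_per_bit_hour, n_bits, hours):
    """Expected number of bit errors accumulated over a test interval."""
    return ser_per_bit_hour * n_bits * hours

for ser in (5e-14, 4e-7):
    per_hour = expected_errors(ser, BITS_PER_GIB, 1.0)
    print(f"SER {ser:.0e}: {per_hour:.3g} expected errors per card-hour")
```

At the low end of the range this works out to roughly one error per card every hundred days, so a single machine cannot accumulate statistically meaningful counts in a reasonable time; only testing many thousands of devices in parallel can.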
\n\nThese hardware-based testing methods are useful for verifying sensitivity of hardware to design or manufacturing flaws. However, the low radiation flux in most (habitable) environments makes this sort of testing impractical to detect radiation-sensitivity faults in a high-throughput fashion. Consequently, semiconductor manufacturers will sometimes use high-radiation test environments to further characterize devices. Past work includes the use of radioactive isotopes \\cite{Blandford85} and particle accelerators \\cite{Violante07} to bombard chips with high levels of radiation. An alternative approach is the use of high-altitude testing \\cite{Gordon08} (cosmic ray neutron flux rises exponentially with altitude \\cite{Ziegler96}).\n\nHigher-level software and modeling-based techniques have also been proposed for soft error detection and correction. Software hardening techniques can be used to detect logic and memory soft errors \\cite{Nicolescu03, Nicolescu01}. Walcott et al.\\ describe a method for predicting the vulnerability of logic architectures to soft errors and the ``architectural vulnerability'' \\cite{Mukherjee05} of different codes running on the same architecture \\cite{Walcott07} and validate their approach through software simulation. Sheaffer et al.\\ also take a simulation-based approach and specifically characterize the architectural vulnerability of graphics algorithms on GPUs \\cite{Sheaffer06}. As their later work points out, however, this characterization is inappropriate for GPGPU, which requires tighter error guarantees than does conventional graphics \\cite{Sheaffer07}.\n\nWhile prior work has laid out testing methods to detect soft errors, no previous studies have applied these techniques to a large installed base to broadly assess the impact of soft errors on the emerging GPGPU platform. 
Our contributions in this work are twofold: first, a test code using proven memory and logic testing methods to detect soft errors; second, a large body of data using this tester to assess SER on tens of thousands of installed GPUs worldwide.\n\n\\section{Design and Validation of MemtestG80}\n\\label{sec:memtestG80}\nIn this section we describe the design and validation of Mem\\-testG80, our CUDA-based \\cite{Nickolls08} code to test for memory errors on NVIDIA GPUs based on the G80\/Tesla architecture \\cite{Lindholm08}. MemtestG80 is a CUDA implementation of most of the memory tests in Memtest86 \\cite{Memtest86}, a widely-used open-source memory tester for x86-based machines that implements many popular memory-test patterns, including randomized tests and tests for pattern-sensitive errors. Section \\ref{sec:testMI} briefly lists the tests implemented in MemtestG80. For conciseness, we refer the reader to the Memtest86 documentation \\cite{Memtest86} for descriptions of most of the test patterns, and here highlight instead the design decisions made in a parallel GPU implementation of the code. We also detail the implementation of a logic test of our own devising that is unique to MemtestG80. \n\nIn the text that follows, one ``iteration'' of MemtestG80 refers to the execution of each implemented test once. For tests that take place in multiple rounds, every round is executed (with one exception, explained in Section \\ref{sec:procedure}). Such an iteration over 64 MiB of memory typically takes between 1 and 5 seconds to complete, depending on GPU speed; runtime scales linearly with the amount of memory tested.\n\n\\subsection{Offload and Parallelization Scheme}\n\nSeveral design parameters affect the sensitivity and speed of a software-based GPU memory tester. 
Specifically, the three components of the memory tester --- pattern generation, memory access (writing and reading patterns to\/from memory), and pattern verification --- can be performed either by the CPU or the GPU itself; and if performed on the GPU, can be performed either serially or in parallel across the multiple GPU cores. The decisions made in MemtestG80 are informed both by the assumptions we make about the relative error rates of various system components and by responsiveness requirements dictated by operation on donated distributed-computing resources. \n\nTo improve the speed and responsiveness of the memory tester, all pattern generation, memory access, and verification is done in parallel on the GPU. We assume that the memory error rate is sufficiently low that the on-GPU code (which occupies a much smaller footprint than the tested region) will not be corrupted during the test execution. Performing verification on the GPU leaves the tester vulnerable to GPU logic errors. We therefore implicitly assume that the GPU logic error rate is lower than the GPU memory error rate; however, we verify this assumption by also running a custom logic test that should report errors on architectural paths similar to those used in other parts of the tester. \n\n\\subsection{Logic Testing}\n\nBecause results from the GPU can be passed back to the host CPU only by a copy from the GPU main memory, detection of GPU logic errors under the assumption that memory errors are more frequent than logic errors is nontrivial --- an error in a computed result may be caused by an error in logic or memory. To overcome this problem, a test can be designed which produces the same expected result after varying amounts of logic operations. The same test can be run twice with (for example) four times the number of logic operations in the second execution. 
Since both tests write the same data to memory, the expected rate of errors due to memory faults will be equal between the two executions; since the latter test runs more logic operations, errors from logic faults should scale with the number of operations.\n\nThe design of our logic test, unique to MemtestG80, is based on the preceding principle. For the core calculation, we use a linear congruential random number generator (LCG) with a short period $k$ starting from zero. Such a generator, when started from zero, will return to zero after a fixed number of iterations $k$. Because the generator only reaches $k$ states, of $2^{32}$ possible (in the 32-bit generator), assuming a uniform probability of error over bits, most logic errors will cause the generator to transition to a state outside the normal operation cycle. Such a state is unlikely to return to zero in the correct number of steps, and therefore whether the generator returns to zero is a good indication of whether a logic error occurred. Our logic test starts the generator from zero and runs it for $k$ or $4k$ cycles, each time writing the results out to memory, reading it back, and verifying that it contains only zeros. Any nonzero values indicate either the presence of a logic or a memory error. Scaling of the number of nonzero values with the number of LCG iterations indicates logic, rather than memory, errors. The use of constant zero as the test pattern further increases the sensitivity of the test to logic rather than memory errors; as we show in Section \\ref{sec:validation}, the constant zero pattern is insensitive to faulty memory.\n\n\n\\subsection{Validation}\n\\label{sec:validation}\nTo validate MemtestG80, we carried out both negative and positive control experiments. Since the purpose of this study is to investigate the latent error rate in GPU hardware, a true negative control (one in which we are guaranteed no errors) is not possible. 
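The logic-test design described above can be illustrated with a short sketch. The generator below is a toy period-256 LCG, s -> (5s + 1) mod 256; its constants are ours, chosen for illustration, and are not the constants used in MemtestG80, whose generator runs in full 32-bit GPU arithmetic (so that a flip in any of the 32 state bits leaves the short cycle, whereas this mod-256 toy only models flips of the low 8 bits):

```python
# Sketch of the MemtestG80 logic-test principle with a toy LCG.
# s -> (5*s + 1) mod 256 has full period 256 (Hull-Dobell conditions),
# so starting from 0 it returns to 0 after exactly 256 steps -- and,
# equally, after 4*256 steps. Constants are illustrative only.

K = 256

def lcg_step(s):
    return (5 * s + 1) % K

def run_logic_test(n_steps, flip_bit_at=None, bit=0):
    """Run the generator from 0; optionally flip one state bit mid-run.

    A return value of 0 indicates a clean run; anything else indicates
    that the simulated transient fault knocked the generator off its
    cycle phase. Only bits 0-7 keep the toy state in range.
    """
    s = 0
    for step in range(n_steps):
        s = lcg_step(s)
        if step == flip_bit_at:
            s ^= 1 << bit  # simulated transient logic fault
    return s

assert run_logic_test(K) == 0          # k iterations: back to zero
assert run_logic_test(4 * K) == 0      # 4k iterations: same expected result
assert run_logic_test(K, flip_bit_at=100, bit=3) != 0  # fault detected
```

Because both the k- and 4k-iteration runs write the same all-zero result pattern to memory, their expected memory-fault error counts are equal, while logic-fault counts scale with the iteration count; this is what lets the two fault classes be distinguished.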
To minimize the possibility of errors from overheating or power disturbances, the controls were run on machines in controlled-temperature environments with known-good power sources. Shielding against ambient radiation such as cosmic rays requires meters of concrete or rock \\cite{Tezzaron04} and was therefore deemed impractical.\n\nWe ran negative controls on two types of hardware. The first was a GeForce 8800 GTX, a high-end consumer-grade graphics board operating at NVIDIA reference clock frequencies. Second, we ran tests on a cluster of 8 NVIDIA Tesla C870 boards. The Tesla C870 is a board designed specifically for GPGPU, is based on the same GPU architecture as the 8800 GTX, and should reflect any engineering changes made by NVIDIA for ``pro'' or GPGPU-grade cards. Over 93,000 iterations of MemtestG80 were run over 700 MiB of GPU memory on the 8800 GTX. An aggregate 1.48 million iterations were run over 320 MiB on each Tesla board. No errors were ever detected on the negative control 8800 GTX or Tesla boards, demonstrating that errors detected by MemtestG80 are unlikely to be spurious or due to errors in the code.\n\nTo ensure that MemtestG80 detects errors under conditions known to generate memory errors, we also carried out a positive control experiment. Since overclocking is known to generate memory errors (due to violations of timing constraints on both memory and memory controller circuitry), we used overclocking as our positive control. MemtestG80 was run on a GeForce 9500 GT video card (default memory clock rate = 400 MHz) with its memory clock rate set to 400, 420, 430, 440, 450, 475, 500, and 530 MHz. Each test comprised 20 iterations of MemtestG80 (10 at 530 MHz because of prompt instability). Between each clock frequency the board was reset to a memory clock of 400 MHz and allowed to cool to a constant temperature to avoid thermal effects from affecting test results. 
We finally ran an additional 20 iterations at 400 MHz to verify that the results were unaffected by the intermediate overclocking. The results of the positive control experiment are presented in Figure \\ref{fig:testSensitivity}.\n\nTwo results deserve special attention. First, both variants of the moving-inversions test (which write the same constant --- zero, all ones, or a random number --- to all words in tested memory, and read it back) are completely insensitive to overclocking-induced errors. This inspires our choice of the all-zero pattern as the logic test pattern, as it seems to be insensitive to memory errors. \n\nSecond, the modulo-20 test is far more sensitive to over\\-clock\\-ing-induced errors than are the other tests, demonstrated by the fact that it started detecting errors at clock rates lower than those required to trigger errors in other tests. The modulo-20 test proceeds in 20 rounds. In round $i$, a 32-bit pattern is written to each memory location whose offset from the start of tested memory is equal to $i$ modulo 20; the bitwise complement of the pattern is then written twice to every other memory location. In the verification kernel, the offsets equal to $i$ modulo 20 are read back and it is verified that they contain the original pattern. This procedure is repeated for $i \\in \\{0,1,2,\\cdots,19\\}$. \n\nThe sensitivity of this test likely stems from the fact that the test's stride of 20 is unlikely to align with any architectural stride in the memory chips (e.g., a row length or chip interleave factor), so test locations are more likely to be physically scattered and to be influenced by neighboring cells. The sensitivity of the modulo-20 test persists in real-world situations and is key to our sampling of error rates. 
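As a concrete illustration, the modulo-20 write/verify procedure just described can be sketched over a simulated array of 32-bit words; the fault-injection hook is our own addition, included to show how a corrupted pattern location surfaces in the verification pass:

```python
# Sketch of the Memtest86-style modulo-20 test described above, run over a
# simulated array of 32-bit words. inject_fault is our own demonstration
# hook; the real test reads back actual GPU memory.

N_WORDS = 1000
STRIDE = 20
MASK32 = 0xFFFFFFFF

def modulo20_test(memory, pattern, inject_fault=None):
    """Return the number of mismatched pattern words over all 20 rounds."""
    errors = 0
    for i in range(STRIDE):
        # write the pattern at offsets == i (mod 20) ...
        for off in range(i, len(memory), STRIDE):
            memory[off] = pattern
        # ... then its bitwise complement twice everywhere else
        for _ in range(2):
            for off in range(len(memory)):
                if off % STRIDE != i:
                    memory[off] = ~pattern & MASK32
        if inject_fault is not None:
            inject_fault(memory)
        # verification reads back only the offsets == i (mod 20)
        errors += sum(1 for off in range(i, len(memory), STRIDE)
                      if memory[off] != pattern)
    return errors

mem = [0] * N_WORDS
assert modulo20_test(mem, 0xA5A5A5A5) == 0        # fault-free run

def flip(memory):                                  # flip one bit at offset 40
    memory[40] ^= 1

assert modulo20_test(mem, 0xA5A5A5A5, inject_fault=flip) > 0
```

Because only the pattern locations are verified, the test specifically checks whether they survived the two complement writes to their (physically scattered) neighbors.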
The pattern sensitivity of the memory subsystem is in itself an interesting result, as it demonstrates that the likelihood of encountering a memory error is highly dependent on the access and data patterns.\n\n\n\\begin{figure*}[ht]\n\\centering\n\\includegraphics[height=0.33\\textheight]{images\/testSensitivity_WER.pdf}\n\\caption{Overclocking positive control experiment: word error rate (ratio of incorrect to total words tested) versus clock rate for each memory test type. Both moving inversions tests displayed together as neither ever reported an error. Dashed lines represent zero errors found at the tested frequency and are arbitrarily set two log units below lowest number of errors. }\n\\label{fig:testSensitivity}\n\\end{figure*}\n\n\n\\section{Experimental Methodology}\n\\label{sec:methodology}\n\\subsection{Testing Procedure}\n\\label{sec:procedure}\nTo carry out the large-scale assessment presented in Section \\ref{sec:results}, MemtestG80 was deployed on the Folding@home (FAH) distributed computing network \\cite{ShirtsPande00,Beberg09}. For each execution of MemtestG80, we collected device information (card name, memory size, and shader-domain clock speed). Because of the widely varying capabilities of GPUs on Folding@home, the size of the tested memory region varied between 32 and 128 MiB; sizes larger than 128MiB were disallowed because of the negative impact on the responsiveness of donors' computers. Because of the high memory bandwidths required on GPUs, it is likely that memory blocks are interleaved across physical memory chips to speed total throughput. We believe that the tested memory region sizes are sufficiently large that they are likely to be spread across all or a substantial fraction of physical chips on the tested devices. The LCG period used in the logic test was 256, 512, or 1024; like the size of the test region, the LCG period varied based on the capability of donor graphics cards. 
Finally, on Folding@home, only 2 rounds of the modulo-20 test were run per test iteration because of responsiveness concerns. Later rounds were performed in following iterations.\n\nOn these test regions we ran a variable number of test iterations, collecting individual results for every test iteration. Rather than measuring the exact bit-error-rate (which may be unreliable due to GPU logic errors), we check only whether \\emph{any} errors were detected during a test iteration. This ensures the robustness of the test. It is possible that a test which should have returned errors may have its ``error flag'' bits toggled off by a subsequent GPU memory or logic error and will therefore not be detected, creating a false negative. If a test successfully executes, it is possible that a GPU error will toggle an error flag bit on --- but this is in itself a GPU error. Thus, this binary decision approach removes the problem of false positive results. In the worst case, our results will underestimate the error rate; they will never overestimate it.\n\n\\subsection{Test Device Statistics}\n\nThe test was run at least once on over 20,000 distinct GPUs, with an aggregate total of over 4.6 billion test iterations executed. 96.6\\% of our data were collected with a test region size of 64 MiB and an LCG period of 512; 3.2\\% were collected with 32 MiB regions and period-256 LCGs, and the remainder with 128 MiB test regions and period-1024 LCGs. The tested boards are distributed worldwide with excellent coverage of North America and Europe; distributions elsewhere cluster sharply near large population centers (however, besides South Africa, Africa is poorly sampled).\n\nUsing NVIDIA specifications and the shader clock speeds reported by each board, we are largely able to classify boards as overclocked or running at ``stock'' frequencies. 
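A minimal sketch of this classification rule follows, assuming a lookup table of stock shader clocks per model; the model names and frequencies below are invented placeholders, not NVIDIA specifications, and boards whose reported clock falls between two possible stock variants are flagged indeterminate:

```python
# Sketch of the stock/overclocked/indeterminate classification described
# above. The entries in STOCK_SHADER_MHZ are illustrative placeholders,
# not actual NVIDIA specifications.

STOCK_SHADER_MHZ = {
    "ExampleCard A": [1500],        # single stock clock
    "ExampleCard B": [1200, 1350],  # two stock variants -> may be ambiguous
}

def classify(card_name, reported_mhz, tolerance=5):
    stocks = STOCK_SHADER_MHZ.get(card_name)
    if stocks is None:
        return "unknown model"
    if reported_mhz > max(stocks) + tolerance:
        return "overclocked"
    if any(abs(reported_mhz - s) <= tolerance for s in stocks):
        return "stock"
    if min(stocks) < reported_mhz < max(stocks):
        return "indeterminate"   # between two possible stock clocks
    return "stock"               # at or below stock, grouped as "stock"

assert classify("ExampleCard A", 1500) == "stock"
assert classify("ExampleCard A", 1650) == "overclocked"
assert classify("ExampleCard B", 1300) == "indeterminate"
```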
In a few cases, identically-named boards have multiple possible stock clock rates (e.g., models that fit into different thermal envelopes); in these cases it is not possible to determine whether or not a board is overclocked. Although memory clock rates will have a larger impact on memory error rate than logic (shader) clock frequencies, memory overclocking can be expected to covary with shader overclocking. The typical reason for Folding@home users to overclock their boards is to increase their throughput on Folding@home molecular dynamics (MD) work units. Empirical testing has shown that shader clock frequencies have more of an impact on MD runtimes than memory clocks, so it is unlikely that a board on Folding@home would have overclocked memory but not overclocked shaders. Some cards are shipped by board vendors in an overclocked state; here too, it would be rare to see overclocked memory without overclocked shaders, as shader clock frequencies can have a major impact on graphics performance. \n\nFigure \\ref{fig:NcardsVcutoff} displays the number of cards that completed a given number of MemtestG80 iterations during the course of this experiment. It further breaks the data into cards running at or below stock frequencies (grouped together as ``stock''), overclocked cards, and cards whose overclocking state is indeterminate. The results show that a slight majority of cards on Folding@home are overclocked; for most iteration count cutoffs, the number of overclocked and stock cards is comparable. \n\nFinally, we achieve good coverage of GPUs across the NVIDIA product line. Table \\ref{tab:cards300k} shows the counts of cards in particular NVIDIA product families that completed at least 300,000 iterations during the course of the experiment. 
Although the dataset is strongly biased towards NVIDIA consumer graphics cards (GeForce) due to the consumer-oriented nature of Folding@home, we do sample a few professional (Quadro) and GPGPU-dedicated (Tesla) boards.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[trim=1cm 5mm 1cm 8mm,clip,width=\\columnwidth]{images\/Ncards_v_cutoff.pdf}\n\\caption{Number of cards that completed at least a given number of test iterations}\n\\label{fig:NcardsVcutoff}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{| c | c |}\n\\hline\n\\textbf{Card Family} & \\textbf{\\# cards $\\geq$ 300,000 iter.} \\\\ \\hline \\hline\n\\textit{Consumer graphics cards} & \\textit{4754 total} \\\\ \\hline\nGeForce 8800 & 1779 \\\\ \\hline\nGeForce 9800\/GTS & 1375 \\\\ \\hline\nGeForce GTX & 1137 \\\\ \\hline\nGeForce 9600 & 368 \\\\ \\hline\nGeForce 8600 & 65 \\\\ \\hline\nGeForce 9500 & 19 \\\\ \\hline\nMobile GeForce & 11 \\\\ \\hline \\hline\n\\textit{Professional graphics cards} & \\textit{36 total} \\\\ \\hline\nQuadro FX & 29 \\\\ \\hline\nQuadroplex 2200 & 6 \\\\ \\hline\nQuadro NVS & 1 \\\\ \\hline \\hline\n\\textit{Dedicated GPGPU cards} & \\textit{22 total} \\\\ \\hline\nTesla T10 & 20 \\\\ \\hline\nTesla C1060 & 2 \\\\ \\hline\n\\end{tabular}\n\\caption{Distribution of cards tested on FAH that ran at least 300,000 test iterations}\n\\label{tab:cards300k}\n\\end{table}\n\n\\section{Results}\n\\label{sec:results}\nWe classified each test iteration returned as having failed or not using the method described in Section \\ref{sec:procedure}. We then inferred an empirical probability of failure (in a given test iteration) for each card tested as the ratio of failed tests to total tests, thereby estimating an empirical probability distribution that any card might have a given probability of failure. 
To add statistical validity, we applied various cutoffs for the minimum number of test iterations a card must have completed to be used in constructing this empirical card-reliability probability distribution. Our underlying statistical model is that each card (in combination with its environment) has its own probability of failure $\\mathit{P\\left( fail \\right)}$, and that each card is drawn independently from some underlying distribution $\\mathit{P\\left( P \\left( fail \\right) \\right)}$. In all following plots and analyses, ``$\\mathit{P\\left( fail \\right)}$'' refers to any given card's probability of failing a single MemtestG80 iteration.\n\nFigure \\ref{fig:PfailCDF_few} displays the cumulative distribution functions (integrated probability distributions) derived from this data for 4 values of the iteration threshold. Each trace represents the distribution calculated using a different cutoff for the number of iterations required to have been completed to consider a card for inclusion. A larger number of completed iterations for a card increases the statistical certainty that its probability of failure lies in the given bin of the estimated probability distribution. We present cutoffs only up to 1 million test iterations because the number of cards sampled falls off rapidly past this limit; estimates made using only cards past a cutoff beyond this are not statistically useful.\n\nThe most apparent trend in the data is the strongly bimodal distribution. All the CDFs start with a nonzero value at $\\mathit{P\\left( fail \\right)} = 0$, representing the fraction of cards at each threshold which never failed a test. All CDFs further show a second population with a mean $\\mathit{P\\left( fail \\right)}$ around $2 \\times 10^{-5}$, which represents nearly all the remaining cards. Finally, there is a very small population of cards with failure rates higher than $1 \\times 10^{-4}$, likely representing faulty hardware. 
This bimodal trend is statistically relevant, as it continues to appear in the data even at the largest cutoff. In particular, the distributions at thresholds of 50,000, 300,000, and 1,000,000 iterations all have similar fractions of cards with zero errors, indicating that this particular population is stable (i.e., measuring with more iterations does not find errors from the zero-error population).\n\nAs these trends are stable with respect to iteration cutoff, it is instructive to examine the distribution of failure probabilities at a single, representative cutoff that maintains statistical validity. Figure \\ref{fig:Pfail_PMFCDF_300k} illustrates both the probability mass function and the cumulative distribution function over failure probabilities at an iteration cutoff of $\\geq$ 300,000 iterations. At this threshold, approximately one-third of cards tested never exhibited a memory error. Nearly all of the remainder had failure probabilities between 0 and $10^{-4}$; only about 2\\% had failure probabilities higher than this.\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[Empirical CDFs of card failure probability at several test-iteration-count thresholds] {\n\\includegraphics[width=0.95\\columnwidth]{images\/CDF_Pfail_few.pdf}\n\\label{fig:PfailCDF_few}\n}\n\\subfigure[Empirical PMF and CDF of card failure probabilities for cards running at least 300,000 MemtestG80 iterations] {\n\\includegraphics[width=0.95\\columnwidth]{images\/PMF_CDF_Pfail_300k.pdf}\n\\label{fig:Pfail_PMFCDF_300k}\n}\n\\caption{Empirical probability mass functions (PMFs) and cumulative distribution functions (CDFs) of card failure probabilities}\n\\end{figure*}\n\n\\section{Analysis}\n\\label{sec:analysis}\nIn this section we explore various hypotheses explaining features of the returned results, breaking the results down by test and by properties of the tested cards. 
The main statistical methods we apply are an examination of the mutual information between two probability distributions, and the information gain criterion for data partitioning attributes. Colloquially speaking, the mutual information between two distributions is a nonlinear measure of correlation between the two, and the information gain in an attribute measures the amount of variability in an underlying distribution that is explained by the attribute. These techniques are well-known in the statistical literature \\cite{CoverThomas, WittenFrank}.\n\n\\subsection{Hypothesis testing by information gain}\n\\label{sec:hypotest}\n\nTo test our hypotheses we apply the information-theoretic measure known as \\emph{information gain}, which is broadly used in data mining as a heuristic criterion for building decision tree models of data \\cite{WittenFrank}. The hypothesis testing problem is formulated as follows: given a labeled dataset $D$, we partition $D$ according to an indicator variable $V$ into multiple subsets $D_1, D_2, \\cdots, D_{|V|}$. We would like to know how good $V$ is at explaining the variability in $D$. \n\nWe measure the ``variability'' of D and each of its subsets by their respective Shannon entropies, $H(D)$, defined as\n\\begin{displaymath}\n\\nonumber H(D) = -\\sum_{x \\in D} p(x) \\log _2\\left( p(x) \\right)\n\\end{displaymath}\nThe information gain on $D$ from $V$, $I(D;V)$ (also known as the mutual information between $D$ and $V$), is defined as:\n\\begin{eqnarray}\n\\nonumber I(D;V) &=& H(D) - H(D|V) \\\\\n\\nonumber I(D;V) &=& H(D) - \\sum _{v \\in V} H(D_v)P(V = v)\n\\end{eqnarray}\n\nIf $I(D;V)$ is large compared to $H(D)$, then $V$ explains a significant portion of the distribution of $D$. 
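To make these definitions concrete, the sketch below computes $H(D)$ and $I(D;V)$ from binned per-card failure probabilities represented as symbolic labels. The data and names here are illustrative toys of our own choosing, not the study's dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H (in bits) of a list of histogram-bin labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(data, indicator):
    """I(D;V) = H(D) - sum_v P(V=v) * H(D_v).

    `data` holds each card's binned P(fail) label; `indicator` holds the
    value of the partitioning variable V for the same card (parallel lists).
    """
    n = len(data)
    gain = entropy(data)
    for v in set(indicator):
        subset = [d for d, i in zip(data, indicator) if i == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy check: a perfect indicator (one that exactly separates the two
# label classes) recovers all of H(D).
bins = ['zero', 'low', 'zero', 'zero', 'low', 'zero']
split = ['a' if b == 'zero' else 'b' for b in bins]
assert abs(information_gain(bins, split) - entropy(bins)) < 1e-12
```

A perfect indicator recovers all of $H(D)$, while a partition whose subsets have the same label mix as the whole recovers none of it; real indicators fall in between, which is how the gain figures quoted in the analysis are interpreted.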
\n\nTo estimate probability distributions $D$ in our hypothesis testing, we histogrammed failure probabilities on a per-card basis as was done for each distribution in Figure \\ref{fig:PfailCDF_few}, but across the entire range of probabilities from 0 to 1. Although this resolution is too high for the low number of counts at higher probabilities, most of these bins will be zero-valued and will not affect the entropy calculations.\n\n\n\\subsection{Bimodality of P(fail)}\nThe bimodal distribution of card failure probabilities illustrated in Figure \\ref{fig:Pfail_PMFCDF_300k} raises an obvious question: is the existence of cards with nonzero failure probability easily explained through a simple structural or environmental variable, or is it inherent to the population of boards? To answer this we tested a set of obvious hypotheses on our dataset: that failures are due to overclocking, that they are caused by thermal problems, or that they reflect architectural variability in the hardware.\n\nAs a reference, a perfect indicator variable on this dataset (separating the data into cards which never failed and those which did) has an information gain $I(D;V)$ of 0.8896 bits.\n\n\\subsubsection{Shader Overclocking}\n\nTo test whether shader-overclocked cards are responsible for the second mode in the distribution, around $\\mathit{P\\left( fail \\right)} = 2 \\times 10^{-5}$, we let $D$ be the set of all cards with a known overclocking status, and partitioned it into known stock and known overclocked cards. The entropy of $D$ was calculated to be 6.184 bits, and the mutual information between $D$ and the stock\/overclocked indicator variable was $7.5 \\times 10^{-2}$ bits. $I(D;V)$ for this split is much lower than that for a perfect indicator, indicating that overclocking status explains very little of the information in the distribution. 
Thus, it is unlikely that overclocking is the cause of bimodality.\n\n\subsubsection{Time of day}\n\nHigh temperatures are a known cause of transient errors in electronics. Although the APIs used by MemtestG80 do not allow us to monitor board temperatures, the status of Folding@home as a donor project suggests that most tested boards will not be in closely-temperature-controlled environments such as machine rooms. We combine the estimated test runtime, the time at which results were received, and IP geolocation data \cite{GeoLite} to estimate the local time of day during the test. Assuming that the ambient temperature will fluctuate with local time of day, we test the hypothesis that time of day controls the shape of our error distribution. \n\nWe define ``day'' to run from 6am to 6pm, and ``night'' as 6pm to 6am, local to where the test was run. We let $D$ be the set of all tests which both started and ended during the day or during the night, and split it into tests that ran exclusively during the day and tests which ran exclusively at night. This hypothesis has an information gain of 0.0413 bits, indicating that it is exceedingly poor at explaining the shape of the error distribution; local time of day is therefore not an adequate explanation for error rates.\n\n\subsubsection{Board architecture}\n\nThe GT200 series GPU from NVIDIA has a more advanced memory controller than the original G80\/G92 GPU, supporting (perhaps among other redesigned features) additional memory coalescing modes. To test whether this change in GPU architecture was the cause of bimodality, we let $D$ be all GeForce graphics cards, and partitioned the set into GT200-based and non-GT200-based boards. $I(D;V)$ for this indicator was 0.453 bits, which is a large fraction of the information gained from a perfect indicator. Thus, it is likely that this division significantly explains the error distribution we observe. 
\n\nFigure \\ref{fig:PfailCDF_few_byarch} shows that GT200-based boards (comprising about one-fifth to one-fourth of our dataset, depending on iteration threshold) were far less likely to fail MemtestG80 iterations --- regardless of iteration threshold, approximately 90\\% of GT200-based cards never reported any errors. The population of GT200-based boards producing errors clusters at a failure probability of $2.2 \\times 10^{-6}$, an order of magnitude lower than the mode failure probability for the overall dataset. \n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{images\/CDF_Pfail_few_byarch.pdf}\n\\caption{Empirical CDFs of card failure probability at several test-iteration-count thresholds, by architecture (G80\/G92 in red, GT200 in blue)}\n\\label{fig:PfailCDF_few_byarch}\n\\end{figure}\n\n\\subsubsection{Bimodality conclusions}\n\nOur data suggest that the bimodal structure of the failure probability distributions is caused by differing architectures in the boards tested. Specifically, the newer GT200 architecture has an apparent soft error rate nearly tenfold lower than that of G80. The most obvious user-visible enhancement on the GT200 memory controller relative to that on G80 is improved support for coalescing memory operations, or combining multiple memory reads or writes into single transactions. However, this support is insufficient to explain the change in error rates. \n\nUsing the published guidelines for memory coalescing on either architecture \\cite{CudaGuide2.2}, we simulated the memory access patterns for G80 and GT200 on the modulo-20 test (the most sensitive test). As performed in MemtestG80 on Folding@home, the modulo-20 test executes the same number of transactions on both architectures. However, GT200 is able to shrink transactions to be smaller (in terms of bytes) than those performed on G80. As a consequence, G80 generates 16.7\\% more memory traffic (in terms of bytes, not transactions) than GT200 on the test. 
By itself this does not appear to explain a 10-fold reduction in error probability by GT200. \n\nThe large sample sizes for boards on both architectures make it unlikely that there is a consistent environmental difference between installations of either board type. While it is possible that the error rate is an age-induced effect (G80 is an older architecture than GT200, and it is possible that G80 boards in our sample are physically older than GT200 boards), our data seem to indicate that GT200 is in fact more resistant to soft error generation than is G80.\n\n\\subsection{Impact of errors on molecular dynamics}\n\nTo assess whether memory errors have an impact on scientific computing, we looked for mutual information between the probability that a given card generates memory errors on its Folding@home work units and the probability that the same card triggers an ``early unit end'' (EUE), or simulation failure, on its Folding@home work units. Counting only work units in which at least one MemtestG80 iteration was executed, the mutual information between MemtestG80 errors and Folding@home EUEs was 0.131 bits, compared to overall distribution entropies of 1.965 and 1.018 bits respectively for memory error and EUE distributions. This indicates that MemtestG80 errors likely do not correlate well with EUEs. However, we believe that this measure underreports the true impact on the simulations. EUEs have a variety of causes (including improper simulation setup) which may be unrelated to errors on the board; furthermore, because of the design of the Folding@home client used, certain simulation errors were not reported to the servers and could not be logged. 
Hence, our results are inconclusive as to the true impact of observed errors on scientific simulations.\n\n\\subsection{Failure modes of tests}\n\\label{sec:testMI}\n\nBy examining the mutual information between the results of each individual test comprising a MemtestG80 iteration, it is possible to better understand the mechanisms triggering failures under various conditions. For each test, we construct a list in which each element corresponds to a single execution of MemtestG80, and the value of each element is the number of failures on that test for that execution. Corresponding elements in each vector map to the same MemtestG80 execution. Each list of failure counts was then histogrammed into 10 bins and normalized to build an empirical probability mass function for the number of failures in that test on a given execution of MemtestG80. Using these probability distributions for tests $X$ and $Y$ we calculated the entropies $H(X)$ and $H(Y)$ according to the formulas in Section \\ref{sec:hypotest}; in this case we use an alternative (equivalent) formulation for the mutual information $I(X;Y)$:\n\\begin{displaymath}\n\\nonumber I(X;Y) = \\sum_{x \\in X} \\sum_{y \\in Y} p(x,y) \\log _2 \\frac{p(x,y)}{p(x)p(y)}\n\\end{displaymath}\n\nThe entropy $H(X)$ can be interpreted as the uncertainty in $X$, as measured by the number of bits required by an optimal code to specify a value from the distribution $p_X(x)$. The mutual information $I(X;Y)$ can be interpreted as the reduction in uncertainty in $X$ caused by knowledge of the value of $Y$, or vice versa (mutual information is symmetric) \\cite{CoverThomas}. Figure \\ref{fig:testMI} shows the ratio of $I(X;Y)$ to $H(X)$ for all tests $X$ and $Y$ used in MemtestG80; this ratio is the fraction of the uncertainty in $X$ explained by knowledge of $Y$. In Figure \\ref{fig:testMI}, the $Y$ (the ``explaining'' distributions) are along the rows; the $X$ (the ``explained'' distributions) are along the columns. 
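The ratio $I(X;Y)/H(X)$ plotted in Figure \ref{fig:testMI} can be estimated directly from paired per-execution failure counts. The sketch below is our own minimal illustration (toy integer counts and a naive equal-width ten-bin histogram), not the study's exact binning procedure:

```python
import math
from collections import Counter

def mi_over_entropy(xs, ys, bins=10):
    """Fraction of the uncertainty H(X) in test X explained by test Y,
    i.e. I(X;Y) / H(X), estimated from paired integer failure counts
    by histogramming each list into `bins` equal-width bins."""
    hi = max(max(xs), max(ys), 1)
    bx = [min(x * bins // (hi + 1), bins - 1) for x in xs]
    by = [min(y * bins // (hi + 1), bins - 1) for y in ys]
    n = len(xs)
    px, py, pxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    # I(X;Y) = sum over joint bins of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    mi = sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
             for (x, y), c in pxy.items())
    hx = -sum((c / n) * math.log2(c / n) for c in px.values())
    return mi / hx if hx else 0.0

# A test fully explains itself (ratio 1); a constant test explains nothing.
assert abs(mi_over_entropy([0, 5, 0, 5], [0, 5, 0, 5]) - 1.0) < 1e-9
```

Each cell of Figure \ref{fig:testMI} is exactly this kind of ratio, with the row test playing the role of $Y$ (the explainer) and the column test the role of $X$ (the explained).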
The following codes are used to refer to tests within MemtestG80:\n\\begin{description}\n\\item[MI10] Moving inversions, ones and zeros. Writes constant patterns of all-ones and all-zeros to memory.\n\\item[MIR] Moving inversions, random. Writes a constant (host-chosen) pseudorandom number to memory.\n\\item[1WM] Memtest86 variant of walking 1-byte test pattern.\n\\item[1W0\/1] True walking zeros\/ones pattern, 1-byte width.\n\\item[4W0\/1] True walking zeros\/ones pattern, 4-byte width.\n\\item[RB] Random blocks. Writes a different pseudorandom number (generated on the GPU) to each memory block.\n\\item[M20] Modulo-20 test. Described in Section \\ref{sec:validation}.\n\\item[L, L4] Logic test, one or four iterations through LCG cycle.\n\\item[LS(4)] Logic test as L\/L4, but with intermediate LCG state stored in shared memory rather than registers.\n\\end{description}\n\nSeveral interesting trends emerge from this data:\n\\begin{enumerate}\n\\item \\textbf{The Modulo-20 test stands on its own}\\\\\nBoth the M20 column and the M20 row have small values across their lengths, indicating the Modulo-20 test covaried strongly with no other test. This is likely due to the Modulo-20 test's increased sensitivity relative to other tests and reinforces the notion that it probes a different failure mechanism than do other tests.\n\\item \\textbf{The Random Blocks test is a good logic test}\\\\\nAlthough it was not intended as a logic test, the large values in the RB row for the columns corresponding to the LCG-based logic tests indicate that RB does a good job of capturing the errors measured by the LCG tests. Conversely, the small values in the RB column for the LCG tests demonstrate that RB is measuring a superset of errors relative to the LCG tests. This result is reasonable in retrospect: the RB test is very shader-logic intensive. 
We have designed it around a multithreaded, multi-core Park-Miller Minimal Standard pseudorandom number generator \\cite{Park88}, which in the course of generating a new random number for each memory location performs many more logic operations than any other MemtestG80 test.\n\\item \\textbf{The logic tests measure a distinct failure mode from most memory tests}\\\\\nThe four-iteration variants of the logic test (L4 and LS4) are poorly explained by most memory tests, and in particular, are less-well-explained by the memory tests than are their one-iteration counterparts (L and LS). This is to be expected, as the one-iteration variants are more influenced by memory errors. However, the bright block in the bottom-right of the mutual information plot shows that the logic tests covary strongly among themselves. Furthermore, memory tests have higher mutual information to the L4 test than the LS4 test, indicating that the use of shared memory in the logic test is a significant variable. Together, these results show that the logic tests detect a failure mode distinct from that tested in the memory tests, and that apparent logic errors can be triggered by soft errors in the on-GPU shared memory.\n\\end{enumerate}\n\n\\begin{figure}\n\\centering\n\\includegraphics[trim=2.2cm 1.2cm 4.5cm 2cm,clip,width=\\columnwidth]{images\/testMI.pdf}\n\\caption{Mutual information-to-entropy ratios for each test pair. Each entry is the fraction of the entropy of the test in that column explained by the test in that row. Brighter squares indicate that more of the variance of the explained test is explained by the explainer test. Test codes defined in Section \\ref{sec:testMI}.}\n\\label{fig:testMI}\n\\end{figure}\n\n\\section{Conclusions}\n\nWe have presented the first large-scale study of error rates in GPGPU hardware, conducted over more than 20,000 GPUs on the Folding@home distributed computing network. 
Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our large-scale experimental results show that approximately two-thirds of tested cards exhibited a pattern-sensitive susceptibility to soft errors in GPU memory or logic, confirming concerns about the reliability of the installed base of GPUs for GPGPU computation. We have further demonstrated that this nonzero error rate cannot be adequately explained by overclocking or time of day of execution (a proxy for ambient temperature). However, it appears to correlate strongly with GPU architecture, with boards based on the newer GT200 GPU having much lower error rates than those based on the older G80\/G92 design. While we cannot rule out user error, misconfiguration on the part of Folding@home donors, or environmental effects as the cause behind nonzero error rates, our results strongly suggest that GPGPU is susceptible to soft errors under normal conditions on non-negligible timescales. \n\nOur negative control results suggest (but do not conclusively prove) that with environmental control and the use of dedicated-GPGPU hardware, GPGPU can be reliable. However, our experimental results raise concerns about the reliability of GPGPU on consumer-level hardware as installed in the wild. These data are particularly relevant both to GPU-based distributed computing applications and to vendors of consumer-targeted software that relies on GPU acceleration, such as recent video-encoding applications. 
We emphasize that although our data were collected only on NVIDIA GPUs, we have no reason to believe that the reliability picture would be significantly different for GPUs from ATI, Intel, Via, or other manufacturers, as the driving forces behind GPU development to date have not emphasized the strict reliability concerns found in GPGPU applications.\n\nWe have furthermore presented the design and validation of MemtestG80, our custom code to test NVIDIA CUDA-enabled GPUs for memory errors. We have released this tool under an open-source LGPL license at \texttt{https:\/\/simtk.org\/home\/memtest} in the hope that it can be used by others for GPU stress testing or self-checking.\n\n\subsubsection*{Is dedicated GPGPU hardware the answer?}\n\n\n\nOur work suggests several future avenues of investigation. One question which our sampling is unable to adequately answer is whether professional-level and GPGPU-dedicated boards are significantly more reliable than consumer-grade GPUs. While the underlying architectures are identical to those in consumer-grade cards, this specialist hardware is marketed as being more capable than consumer hardware, and NVIDIA has suggested that Tesla boards are recommended for mission-critical applications. \n\nWhile we were unable to sample a large enough number of Quadro and Tesla cards in our experiment to provide a conclusive answer to this question, the results of our negative control experiment (Section \ref{sec:validation}) suggest that the Tesla line may truly be more reliable than consumer hardware. In the course of our control experiment, we accumulated approximately 1.48 million MemtestG80 iterations over 8 Tesla boards. Each test operated over 5 times the typical amount of memory tested by a Folding@home MemtestG80 iteration, so the control experiment was equivalent to approximately 7.4 million Folding@home tester iterations, or over 925,000 per card. No memory or logic errors were ever observed. 
Because the empirical probability of having a zero-error card in the Folding@home dataset is approximately one-third, the probability that our control Tesla cards were drawn independently from the same distribution is $\left( \frac{1}{3} \right)^8$, or roughly 0.015\% (even less if drawn from the G80-only dataset). While our data do not rule out the possibility that environmental factors are at work (machine room versus uncontrolled environment) or that we tested an unusually good batch of cards, they suggest that the Tesla line is in fact more reliable than consumer-grade hardware.\n\n\subsubsection*{What can be done?}\n\n\n\nAn obvious first step for software developers is to incorporate active memory test functionality, like that found in MemtestG80, to proactively detect malfunctioning cards. On the hardware side, the addition of parity or ECC functionality to the memory subsystem would guard against memory-induced silent errors. While the addition of partial ECC to the GDDR5 specification is a first step, it is not an end-to-end system, and cannot protect against errors in the memory controller or RAM itself. \n\nWhile memory testing and ECC can guard against errors in GPU memory, our logic tests indicate that transient errors on the GPU itself must also be considered. To combat these sources of faults, it may be necessary to implement measures such as redundant computation, advanced software error-detection \cite{Nicolescu03, Nicolescu01} or hardware redundancy \cite{Sheaffer07} mechanisms. \n\nNevertheless, our data demonstrate that it is certainly possible to perform reliable GPGPU computing on consumer-grade hardware, but that doing so requires close attention to the characteristics of the hardware. 
With the great power of TFLOP-scale computation on a single board comes a great responsibility for the developer to ensure data integrity.\n\n\\section*{Acknowledgments}\nWe foremost thank all the donors on the Folding@home network, without whose donated computer time this study would not have been possible. We further thank Dr. Paul Coteus of IBM for discussions regarding soft error detection and correction mechanisms, Ewen Cheslack-Postava of Stanford for discussions on GPU architectures and the idea of an iterated logic tester, and Adam Beberg, Ewen Cheslack-Postava, Philip Guo, Peter Kasson, and Alex Rasmussen for helpful comments on the manuscript. ISH gratefully acknowledges support from an NSF graduate fellowship. We acknowledge support from NIH (R01-GM062868, U54 GM072970) and NSF (CHE-0535616).\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Summary}\nSynergistic use of controlled sampling from a generative autoencoder and molecular simulation-based screening accelerates design of novel, safe, and potent antimicrobial peptides.\n\n\n\\begin{sciabstract}\n\n\\section*{Abstract}\n\n\nDe novo therapeutic design is challenged by a vast chemical repertoire and multiple constraints, e.g., high broad-spectrum potency and low toxicity. We propose CLaSS (Controlled Latent attribute Space Sampling) \u2014 an efficient computational method for attribute-controlled generation of molecules, which leverages guidance from classifiers trained on an informative latent space of molecules modeled using a deep generative autoencoder. We screen the generated molecules for additional key attributes by using deep learning classifiers in conjunction with novel features derived from atomistic simulations. The proposed approach is demonstrated for designing non-toxic antimicrobial peptides (AMPs) with strong broad-spectrum potency, which are emerging drug candidates for tackling antibiotic resistance. 
Synthesis and testing of only twenty designed sequences identified two novel and minimalist AMPs with high potency against diverse Gram-positive and Gram-negative pathogens, including one multidrug-resistant and one antibiotic-resistant K. pneumoniae, via membrane pore formation. Both antimicrobials exhibit low in vitro and in vivo toxicity and mitigate the onset of drug resistance. The proposed approach thus presents a viable path for faster and efficient discovery of potent and selective broad-spectrum antimicrobials.\n\n\\end{sciabstract}\n\n\\section*{Introduction}\n\n\\textit{De novo} therapeutic molecule design remains a cost and time-intensive process: It typically requires more than ten years and \\$2-3 B USD for a new drug to reach the market, and the success rate could be as low as $<1$\\%. \\cite{dimasi2016innovation, desselle2017institutional}. Efficient computational strategies for targeted generation and screening of molecules with desired therapeutic properties are therefore urgently required. As a specific example, we consider here the antimicrobial peptide (AMP) design problem.\nAntimicrobial peptides are emerging drug candidates for tackling antibiotic resistance, \none of the biggest threats in global health, food security, and development. Patients at a higher risk from drug-resistant pathogens are also more vulnerable to illness from viral lung infections like influenza, severe acute respiratory syndrome (SARS), and COVID-19.\nDrug-resistant diseases claim 700,000 lives a year globally~\\cite{UN_2019}, which is expected to rise to 10 million deaths per year by 2050 based on current trends~\\cite{oneill_2016}. 
Of particular concern are multidrug-resistant Gram-negative bacteria~\\cite{WHO_2019}.\nAntimicrobial peptides that are believed to be the antibiotic of last resort\nare typically 12-50 amino acids long and produced by multiple higher-order organisms to combat invading microorganisms.\nDue to their exceptional structural and functional variety \\cite{Powers2003}, promising activity, and low tendency to induce (or even reduce) resistance,\nnatural AMPs have been proposed as promising alternatives to traditional antibiotics and as potential next-generation antimicrobial agents \\cite{Mahlapuu2016}.\nMost reported antimicrobials are cationic and amphiphilic in nature, and possess properties thought to be crucial for insertion into and disruption of bacterial membrane\n~\\cite{Mahlapuu2016}.\n\nRational methods for novel therapeutic peptide design,\nboth in wet lab and \\textit{in silico}, \nheavily rely upon \nstructure-activity relationship (SAR) studies \\cite{chen2019simulation, torres2018structure, tucker2018discovery, field2013saturation, fjell2012designing, li2017membrane}. Such methods struggle with\nthe prohibitively large molecular space,\ncomplex structure-function relationships, and multiple competing constraints \nsuch as activity, toxicity, synthesis cost, and stability associated with the design task.\nRecently, artificial intelligence (AI) methods, in particular, statistical learning and optimization-based approaches, have shown promise in designing small- and macro-molecules, including antimicrobial peptides. \nA comprehensive review of computational methodologies for AMP design can be found in ref \\cite{cardoso2020computer}. \nA conventional approach is to build a predictive model that estimates the properties of a given molecule, which is then used for candidate screening \\cite{jenssen2008qsar, vishnepolsky2019novo, maccari2013antimicrobial, meher2017predicting, thomas2010camp, witten2019deep,xiao2013iamp,veltri2018deep}. 
Either a manually selected or automatically learned set of compositional, structural, or physicochemical features, or the direct sequence itself, is used to build the predictive model. A candidate is typically obtained by combinatorial enumeration of chemically plausible fragments (or sub-sequences) followed by random selection, from an existing molecular library, or modification thereof. Sequence optimization using genetic algorithms \cite{porto2018silico, fjell2011optimization}, pattern insertion \cite{porto2018joker}, or sub-graph matching \cite{nagarajan2019omega76} has also been used in the development of new drugs, which requires selection of an initial (or template) sequence and\/or defined patterns. An alternative is to develop a generative model \nfor automated \textit{de novo} design of novel molecules with user-specified properties. Deep learning-based architectures, such as neural language models as well as deep generative neural networks, have emerged as a popular choice \n\cite{muller2018recurrent, grisoni2018designing, gupta2018generative,\ngomez1610automatic, jin2018junction, blaschke2018application, chan2019advancing, sanchez2018inverse, nagarajan2018computational}. Probabilistic autoencoders \cite{hinton2006reducing, kingma2013auto}, a powerful class of deep generative models that learn a bidirectional mapping of the input molecules (and their attributes) to a continuous latent space, have been used for this design task. \n\n\n Earlier deep generative models for targeted generation have often limited the learning to a fixed library of molecules with desired attributes, to restrict the exhaustive search to a defined section of the chemical space. 
Such an approach can affect the novelty as well as the validity of the generated molecules, as the fixed library represents a small portion of the combinatorial molecular space \\cite{porto2018silico}.\nAlternative methods include Bayesian optimization (BO) on a learned latent space \\cite{gomez1610automatic}, reinforcement learning (RL) \\cite{guimaraes2017objective, popova2018deep}, or semi-supervised learning (SS) \\cite{kang2018conditional}.\nHowever, those approaches require surrogate model fitting (as in BO), optimal policy learning (as in RL), or minimizing attribute-specific loss objectives (as in SS), each of which incurs additional computational complexity. As a result, efficiently controlling attribute(s) of designed molecules remains a non-trivial task.\n\nTo tackle these challenges, we propose a computational framework for targeted design and screening of molecules, which combines attribute-controlled deep generative models and physics-driven simulations.\nFor targeted generation, we propose Conditional Latent (attribute) Space Sampling -- CLaSS --\nwhich leverages guidance from attribute classifier(s) trained on the latent space of the system of interest and uses a rejection sampling scheme for generating molecules with desired attributes.\nCLaSS has several advantages, as it is efficient and easily repurposable in comparison to existing machine learning algorithms for targeted generation.\nTo encourage novelty and validity of designed sequences, we performed CLaSS on the latent space of a deep generative autoencoder that was trained on a larger dataset consisting of all known peptide sequences, instead of a limited number of known antimicrobials. Extensive analyses showed that the resulting latent space is informative of peptide properties. As a result, the antimicrobial peptides generated from this informative space are novel, diverse, valid, and optimized.
\n\nAlthough several antimicrobial peptides are in clinical trials \\cite{Mahlapuu2016}, the future design of novel AMP therapeutics requires minimizing the high production cost due to longer sequence length, proteolytic degradation, poor solubility, and off-target toxicity. A rational path for resolving these problems is to design short peptides as a minimal physical model\n\\cite{Losasso2019, Cipcigan2018} that captures the high selectivity of natural AMPs, that is, maximizing antimicrobial activity while minimizing toxicity towards the host.\n\nTo account for these additional key requirements, such as broad-spectrum potency and low toxicity, we further provide an efficient \\textit{in silico} screening method that uses deep learning classifiers augmented with high-throughput physics-driven molecular simulations (Fig. \\ref{fig:Overview}). To our knowledge, this is the first computational approach for \\textit{de novo} antimicrobial design that explicitly accounts for broad-spectrum potency and low toxicity, and in which those properties are experimentally verified. Synthesis of 20 candidate sequences (from a pool of $\\sim$ 90,000 generated sequences) that passed the screening enabled discovery of two novel, short peptides with experimentally validated strong antimicrobial activity against diverse pathogens, including a hard-to-treat multidrug-resistant Gram-negative \\textit{K. pneumoniae}. Importantly, both sequences demonstrated low \\textit{in vitro} hemolytic (HC50) and \\textit{in vivo} lethal (LD50) toxicity. Circular dichroism experiments revealed the amphiphilic helical topology of the two novel cationic AMPs. All-atom simulations of helical YI12 and FK13 show distinct modes of early lipid membrane interaction. Wet lab experiments further confirmed their bactericidal nature, while live imaging with confocal microscopy showed bacterial membrane permeability. Both peptides displayed low propensity to induce resistance onset in \\textit{E.
coli} compared to imipenem, an existing antibiotic. No cross-resistance was seen for either YI12 or FK13 when tested using a polymyxin-resistant strain. Taken together, both YI12 and FK13 appear to be promising therapeutic candidates that deserve further investigation.\nThe present strategy, therefore, provides an efficient \\textit{de novo} approach for discovering novel, broad-spectrum, and low-toxicity antimicrobials with therapeutic potential at a 10\\% success rate and a rapid (48-day) pace.\n\n\n\\section*{Results}\n\n\\subsection*{Peptide autoencoder}\n\\label{sec:lvm}\n\nFor modeling the peptide latent space, we used generative models based on a deep autoencoder \\cite{hinton2006reducing, kingma2013auto} composed of two neural networks, an encoder and a decoder. The encoder $q_{\\phi}({\\mathbf{z}}|{\\mathbf{x}})$, parameterized with $\\phi$, learns to map the input ${\\mathbf{x}}$ to a variational distribution, and the decoder $p_{\\theta}({\\mathbf{x}}|{\\mathbf{z}})$, parameterized with $\\theta$, aims to reconstruct the input ${\\mathbf{x}}$ given the latent vector ${\\mathbf{z}}$ from the learned distribution, as illustrated in Fig. \\ref{fig:overview-gen}A. The Variational Autoencoder (VAE), the most popular model in this family \\cite{kingma2013auto}, assumes that the \\emph{latent variable} ${\\mathbf{z}} \\sim p({\\mathbf{z}})$ follows a simple prior (\\textit{e.g.} Gaussian) distribution; the decoder then produces a distribution over sequences given the continuous representation ${\\mathbf{z}}$.\nThus, the generative process is specified as $p({\\mathbf{x}}) = \\int p({\\mathbf{z}}) p_\\theta({\\mathbf{x}} | {\\mathbf{z}}) \\, \\mathrm{d}{\\mathbf{z}}$, where the latent variable is integrated out.
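As a minimal numerical illustration of this generative process, the marginal $p({\mathbf{x}})$ can be estimated by Monte Carlo integration over the Gaussian prior. The toy two-dimensional Gaussian decoder below is our own stand-in for the trained sequence decoder $p_\theta({\mathbf{x}}|{\mathbf{z}})$, not the paper's network:

```python
import math
import random

random.seed(0)

def decoder_loglik(x, z):
    # Toy Gaussian decoder p(x|z) with mean tanh(z_i) per dimension;
    # a hypothetical stand-in for the learned sequence decoder.
    return sum(-0.5 * (xi - math.tanh(zi)) ** 2 - 0.5 * math.log(2 * math.pi)
               for xi, zi in zip(x, z))

def marginal_likelihood(x, n_samples=5000):
    # Monte Carlo estimate of p(x) = E_{z ~ N(0, I)}[ p(x|z) ],
    # i.e. the integral over the latent variable in the text.
    total = 0.0
    for _ in range(n_samples):
        z = [random.gauss(0.0, 1.0) for _ in x]
        total += math.exp(decoder_loglik(x, z))
    return total / n_samples
```

In practice the (intractable) marginal is bounded via the ELBO rather than estimated this way; the sketch only makes the integral concrete.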
\nAn alternative (and arguably improved) variant of the standard VAE is the\nWasserstein Autoencoder (WAE; for details, see SI).\nWithin the VAE\/WAE framework, peptide generation is formulated as a density modeling problem, \\textit{i.e.} estimating $p({\\mathbf{x}})$, where ${\\mathbf{x}}$ are short variable-length strings of amino acids. The density estimation procedure has to assign a high likelihood to known peptides; model generalization then implies that plausible novel peptides can be generated from regions with a high probability density under the model. Peptide sequences are represented as text strings composed of the 20 natural amino acid characters. Only sequences with length $\\le 25$ were considered for model training and generation, as short AMPs are desired.\n\nHowever, instead of learning a model only over known AMP sequences, one can learn a model over all peptide sequences reported in the UniProt database \\cite{uniprot} -- an extensive database of protein\/peptide sequences that may or may not have an annotation. For example, the number of annotated AMP sequences is $\\sim$9000, whereas the number of peptide sequences in UniProt is $\\sim$1.7M, when sequence lengths up to 50 are considered.\nTherefore, in this work we learn a density model over all known peptide sequences. This approach is also inspired by the fact that unsupervised representation learning by pre-training on a large corpus has recently led to impressive results for downstream tasks in text and speech~\\cite{ElmoPeters:2018, gpt2:radford2019language, cove:mccann2017learned, devlin2018bert}, as well as in protein biology \\cite{rao2019evaluating, Madani2020.03.07.982272}.
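The curation rule above (strings over the 20 natural amino acids, length at most 25 for training and generation) can be sketched as a simple filter; the function name is ours, not taken from the paper's code:

```python
# The 20 natural amino acid one-letter codes.
VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")

def keep_for_training(seq, max_len=25):
    # Keep non-empty sequences of natural amino acids with length <= max_len;
    # short AMPs are the design target, hence the cutoff of 25.
    return 0 < len(seq) <= max_len and set(seq) <= VALID_AA
```

A longer cutoff (50) was used only for the dataset-size comparison quoted above.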
\nAdditionally, in contrast to similar models for protein sequence generation \\cite{riesselman2017deep}, we do not restrict ourselves to learning the density associated with a single protein family or a specific 3D fold.\nInstead, we learn a global model over all known short peptide sequences expressed in different organisms.\nThis global approach should enable meaningful density modeling across multiple families, interpolation between them, better learning of the ``grammar'' of plausible peptides, and exploration beyond known antimicrobial templates, as shown next.\n\nThe advantage of training a WAE (instead of a plain VAE) on peptide sequences\nis evident from the evaluation metrics reported in Supplementary Information (SI) Table \\ref{tab:model_results}. We also observed high reconstruction accuracy and diversity of generated sequences when the WAE was trained on all peptide sequences, instead of only on AMP sequences (Table S\\ref{tab:model_results}). Next, we analyzed the information content of the peptide WAE, inspired by recent\ninvestigations in natural language processing, where so-called ``probing'' methods have shown that encoded sentences can retain much linguistic information~\\cite{shi2016does}.\nIn a similar vein, we investigated whether the pairwise similarity between sequences, defined by the global alignment\/concordance between two sequences \\cite{yu2003compositional}, is captured by their encoding in the latent ${\\mathbf{z}}$-space, as such information is known to specify the biological function and fold of peptide sequences.\nFig. \\ref{fig:wae-latent}A reveals a negative correlation (Pearson correlation coefficient = $-0.63$) between sequence similarities and\nEuclidean distances in the ${\\mathbf{z}}$-space of the WAE model, suggesting that the WAE intrinsically captures the sequence relationships within the peptide space. The VAE latent space fails to capture such a relation.
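The probing analysis above can be mimicked on toy data: correlate pairwise sequence similarities with Euclidean distances between latent vectors. Here `difflib`'s match ratio is a crude stand-in for global alignment, and a composition-based "encoder" replaces the learned WAE encoder; all names and the toy encoder are our own illustrative assumptions:

```python
import difflib
import math
from itertools import combinations

AA = "ACDEFGHIKLMNPQRSTVWY"

def toy_encode(seq):
    # Hypothetical encoder: amino-acid composition as a 20-d "latent" vector.
    return [seq.count(a) / len(seq) for a in AA]

def similarity(a, b):
    # Crude proxy for global-alignment similarity between two sequences.
    return difflib.SequenceMatcher(None, a, b).ratio()

def euclidean(u, v):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def similarity_distance_correlation(seqs):
    # Pearson r between pairwise similarity and pairwise latent distance.
    pairs = list(combinations(seqs, 2))
    sims = [similarity(a, b) for a, b in pairs]
    dists = [euclidean(toy_encode(a), toy_encode(b)) for a, b in pairs]
    return pearson(sims, dists)
```

On two artificial sequence families (one K\/L-rich, one D\/E-rich) this yields a negative coefficient, qualitatively matching the sign of the reported $-0.63$.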
\n\n\nWith the end-goal of conditional generation of novel peptide sequences, it is crucial to ensure that the learned encoding in the ${\\mathbf{z}}$-space retains identifiable information about functional attributes of the original sequence.\nSpecifically, we investigate whether the space is \\emph{linearly} separable into different attributes, such that sampling from a specific region of that space yields consistent and controlled generations. For this purpose, we trained linear classifiers for binary (yes\/no) functional attribute prediction using the ${\\mathbf{z}}$ encodings of sequences (Fig. \\ref{fig:overview-gen}B).\nProbing the ${\\mathbf{z}}$-space modeled by the WAE uncovers that the space is indeed linearly separable into different functional attributes, as evident from the test accuracy of binary logistic classifiers: the class prediction accuracies for the attribute ``AMP'' using the WAE z-classifiers and the\nsequence-level classifiers on test data are 87.4\\% and 88.0\\%, respectively\n(also see Table S\\ref{tab:classifier_results}). Supplementary Table \\ref{tab:AMP-compare} shows the reported accuracy of several existing AMP classification methods, as well as of our sequence-level LSTM classifier. The reported accuracy varies widely, from 66.8\\% for iAMP Pred \\cite{meher2017predicting}, to 79\\% for DBAASP-SP \\cite{vishnepolsky2018predictive}, which relies on local density-based sampling using physico-chemical features, to 94\\% for the method of Witten \\textit{et al.} \\cite{witten2019deep}, which uses a convolutional neural net trained directly on a large (similar to ours) corpus of peptide sequences. Our sequence-level LSTM model shows a comparable 88\\% accuracy.
This comparison reveals that the ${\\mathbf{z}}$-level classifier performs close to classifiers that have access to the original sequences, whether reported in the literature~\\cite{veltri2018deep,witten2019deep,xiao2013iamp, vishnepolsky2019novo, meher2017predicting, thomas2010camp} or trained in-house. We emphasize that the goal of this study is not to provide a new AMP prediction method that outperforms existing machine learning-based AMP classifiers. Rather, the goal is to have a predictor trained on latent features with comparable accuracy, which can be used to automatically generate new AMP candidates by conditional sampling directly from the latent space using CLaSS. It should be noted that comparing different AMP prediction models is non-trivial, as different methods vary widely in training AMP dataset size (\\textit{e.g.} 712 for AMP Scanner v2 \\cite{veltri2018deep}, 3417 for iAMPpred \\cite{meher2017predicting}, 2578 for CAMP \\cite{thomas2010camp}, 140 for DBAASP-SP prediction \\cite{vishnepolsky2019novo}, 1486 for iAMP-2L \\cite{xiao2013iamp}, 4050 for \\cite{witten2019deep}, and 6482 in the present study), sequence length, definition of AMP and non-AMP, and other data curation criteria. Also, both the z-level and the sequence-level AMP classifiers used in the current study do not require any manually defined set of features, in contrast to many existing prediction tools, e.g. \\cite{vishnepolsky2019novo} and \\cite{xiao2013iamp}.\n\nOn the toxicity classification task, a much lower accuracy was found using models trained on latent features, when compared to similar sequence-level deep classifiers~\\cite{gupta2013silico} (also see Fig.~\\ref{fig:wae-latent} and Table S\\ref{tab:classifier_results}), which report accuracy as high as 90\\%.
\nThese results imply that some attributes, such as toxicity, are more challenging to predict from the learned latent peptide representation; one possible reason being the higher class imbalance in the training data (see SI).\n\nWe also investigated the smoothness of the latent space by analyzing the sequences generated along a linear interpolation vector in the ${\\mathbf{z}}$-space between two distant training sequences (Fig. \\ref{fig:wae-latent}B-C). Sequence similarity, functional attributes (AMP and Toxic class probabilities), as well as several physicochemical properties including aromaticity, charge, and hydrophobic moment (indicating amphiphilicity of a helix) change smoothly during the interpolation. These results are encouraging, as the WAE latent space trained on the much larger amount of unlabeled data appears to carry significant structure in terms of functional, physicochemical, and sequence similarity.\nFigure \\ref{fig:wae-latent}C also demonstrates that it is possible to identify sequence(s) during linear interpolation that are visibly different from both endpoint sequences, indicating the potential of the learned latent space for novel sequence generation.\n\n\\subsection*{CLaSS for controlled sequence generation}\n\\label{sec:class}\nFor controlled generation, we aim to control a set of binary (yes\/no) attributes of interest, such as antimicrobial function and\/or toxicity. We propose CLaSS -- Conditional Latent (attribute) Space Sampling -- for this purpose. CLaSS leverages attribute classifiers directly trained on the peptide ${\\mathbf{z}}$-space, as those can capture important attribute information (Fig. \\ref{fig:wae-latent}).\nThe goal is to sample from the conditional distribution $p({\\mathbf{x}} | {\\mathbf{a}}_t)$ for a specified target attribute combination ${\\mathbf{a}}_t$.\nThis task was approached through CLaSS (Fig.
\\ref{fig:overview-gen}C), which assumes that the attribute-conditional density factorizes as\n$p({\\mathbf{x}}|{\\mathbf{a}}) = \\int \\! \\mathrm{d}{\\mathbf{z}} \\, p({\\mathbf{z}}|{\\mathbf{a}}) p({\\mathbf{x}}|{\\mathbf{z}})$.\nWe sample from $p({\\mathbf{z}}|{\\mathbf{a}}_t)$\napproximately using rejection sampling from models in the latent ${\\mathbf{z}}$-space, appealing to Bayes' rule and to $p({\\mathbf{a}}_t|{\\mathbf{z}})$ modeled by the attribute classifiers (Fig. \\ref{fig:overview-gen}B-C; see SI). Since CLaSS only employs simple attribute predictor models and rejection sampling from models of the ${\\mathbf{z}}$-space, it is a simple and efficient forward-only screening method. It does not require any complex optimization over the latent space, in contrast to existing methods for controlled generation, \\textit{e.g.} Bayesian optimization \\cite{gomez2018automatic}, reinforcement learning \\cite{guimaraes2017objective, popova2018deep}, or semi-supervised generative models \\cite{kang2018conditional}. CLaSS is easily repurposable, embarrassingly parallelizable, and does not require defining a starting point in the latent space.\n\nSince the toxicity classifier trained on latent features appears weaker (Fig. \\ref{fig:wae-latent}), antimicrobial function (yes\/no) was used as the sole condition for controlling the sampling from the latent peptide space. Generated antimicrobial candidates were then screened for toxicity using the sequence-level classifier during post-generation filtering. It is noteworthy that CLaSS does not perform a heuristic-based search on the latent space (as in genetic algorithms or the node-based sampling reported in \\cite{sattarov2019novo}); rather, it relies on a probabilistic rejection sampling-based scheme for attribute-conditioned generation.
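A minimal sketch of the rejection-sampling step: propose ${\mathbf{z}}$ from the latent density model and accept with probability $p(a_t|{\mathbf{z}})$, so that accepted samples follow $p({\mathbf{z}}|a_t) \propto p(a_t|{\mathbf{z}})p({\mathbf{z}})$ and would then be decoded into sequences. The standard-normal proposal and the logistic classifier weights below are made-up stand-ins for the trained latent density model and z-classifier:

```python
import math
import random

random.seed(2)

def classifier_prob(z):
    # Toy latent-space attribute classifier p(a=AMP | z): a logistic model
    # with arbitrary weights, standing in for the trained z-classifier.
    return 1.0 / (1.0 + math.exp(-(2.0 * z[0] - 1.0 * z[1])))

def class_sample(n, dim=2):
    # CLaSS-style rejection sampling: propose z from the latent prior,
    # accept with probability p(a|z); forward-only, no optimization over z.
    accepted = []
    while len(accepted) < n:
        z = [random.gauss(0.0, 1.0) for _ in range(dim)]
        if random.random() < classifier_prob(z):
            accepted.append(z)
    return accepted
```

Accepted latent points score systematically higher under the classifier than raw proposals, which is the whole effect CLaSS relies on.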
CLaSS is also different from local density-based sampling approaches (\\textit{e.g.} \\cite{vishnepolsky2019novo}), as those methods rely on clustering the labeled data and then finding the cluster assignment of the test sample by using similarity search, and are thus suited for a forward design task. CLaSS, in contrast, is formulated for the inverse design problem, and allows targeted generation by attribute-conditioned sampling from the latent space followed by decoding (see Methods).\n\n\\subsection*{Features of CLaSS-generated AMPs}\n\\label{sec:seqanalysis}\nTo check the homology of CLaSS-generated AMP sequences with the training data, we performed a BLAST sequence similarity search. We use the Expect value (E-value) of the alignment score of the actual sequences to assess sequence homology, while using other alignment metrics, such as raw alignment scores, percent identity and positive matches, and gaps and coverage in alignment, to get an overall sense of sequence similarity. The E-value indicates the statistical (\\textit{a.k.a.} biological) significance of the match between the query and sequences from a database of a particular size. A larger E-value indicates a higher chance that the similarity between the hit and the query is merely a coincidence, \\textit{i.e.} that the query is not homologous or related to the hit.\nWe analyzed the E-value for the matches with the highest alignment score.\n\nTypically, E-values $\\le$ 0.001 when querying the Uniprot $nr$ database of size $\\sim$ 220 M are used to infer homology \\cite{pearson2013introduction}. Since our training database is $\\sim$ 1000 times smaller than Uniprot, an E-value of $\\le$ 10$^{-6}$ can be used for indicating homology.\nAs shown in Table S\\ref{tab:Evalue}, about 14\\% of generated sequences show an E-value of $\\ge$ 10, and another 36\\% have an E-value $>$ 1, when considering the match with the highest alignment score, indicating insignificant similarity between generated and training sequences.
If only the alignments with score $>$ 20 are considered, the average E-value is found to be $>$ 2, further implying the non-homologous nature of the generated sequences. Similar criteria have also been used for detecting novelty of designed short antimicrobials \\cite{chen2019simulation}.\nCLaSS-generated AMPs are also more diverse, as unique (\\textit{i.e.} found only once in an ensemble of sequences) $k$-mers ($k$ = 3-6) are more abundant compared to training sequences or their fragments (Figure S\\ref{tab:combined-comp}).\nThese results highlight the ability of the present approach to generate short-length AMP sequences that are, on average, novel with respect to the training data, as well as diverse among themselves.\n\nDistributions of key molecular features implicated in antimicrobial nature, such as amino acid composition, charge, hydrophobicity (H), and hydrophobic moment ($\\mu$H), were compared between the training and generated AMPs, as illustrated in Fig. \\ref{fig_biometrics}A-D. Additional features are reported in Figure S\\ref{tab:combined-comp}. CLaSS-generated AMP sequences show a distinct character:\nspecifically, they are richer in R, L, S, Q, and C, whereas the A, G, D, H, N, and W content is reduced, in comparison to training antimicrobial sequences (Fig. \\ref{fig_biometrics}A). We also present the most abundant $k$-mers ($k$=3, 4) in Figure S\\ref{tab:combined-comp}; the most frequent 3- and 4-mers are K- and L-rich in both generated and training AMPs, with higher frequencies in the generated sequences. Generated AMPs are characterized by a global net positive charge and an aromaticity in between those of unlabeled and AMP-labeled training sequences, while the hydrophobic moment is comparable to that of known AMPs (Fig. \\ref{fig_biometrics}B-D and Figure S\\ref{tab:combined-comp}).
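Illustrative versions of descriptors used in these comparisons -- the unique $k$-mer fraction as a diversity proxy, a crude net charge, and an Eisenberg-style helical hydrophobic moment -- can be sketched as below. The Kyte-Doolittle hydropathy scale is our choice here; the paper's exact scales and definitions may differ:

```python
import math
from collections import Counter

# Kyte-Doolittle hydropathy values for the 20 natural amino acids.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def unique_kmer_fraction(seqs, k=3):
    # Diversity proxy: share of distinct k-mers occurring exactly once
    # across the whole ensemble of sequences.
    counts = Counter(s[i:i + k] for s in seqs for i in range(len(s) - k + 1))
    return (sum(1 for c in counts.values() if c == 1) / len(counts)
            if counts else 0.0)

def net_charge(seq):
    # Crude net charge at neutral pH: +1 for K and R, -1 for D and E
    # (His and termini ignored in this sketch).
    return sum(1 for a in seq if a in "KR") - sum(1 for a in seq if a in "DE")

def hydrophobic_moment(seq, delta_deg=100.0):
    # Eisenberg-style hydrophobic moment per residue; 100 degrees per
    # residue corresponds to an ideal alpha-helix.
    s = sum(KD[a] * math.sin(math.radians(delta_deg * i))
            for i, a in enumerate(seq))
    c = sum(KD[a] * math.cos(math.radians(delta_deg * i))
            for i, a in enumerate(seq))
    return math.hypot(s, c) / len(seq)
```

For instance, the discovered peptide YI12 (YLRLIRYMAKMI, before amidation) has a net charge of +3 under this crude count.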
These trends imply that the generated antimicrobials are still cationic and can form a putative amphiphilic $\\alpha$-helix, similar to the majority of known antimicrobials. Interestingly, they also exhibit a moderately higher hydrophobic ratio and aliphatic index compared to training sequences (Figure S\\ref{tab:combined-comp}). These observations highlight the distinct physicochemical nature of the CLaSS-generated AMP sequences, a result of the semi-supervised nature of our learning paradigm, which might help in their therapeutic application. For example, lower aromaticity and a higher aliphatic index are known to induce reduced oxidation susceptibility and higher heat stability in short peptides \\cite{li2016molecular}, while lower hydrophobicity is associated with reduced toxicity \\cite{hawrani2008origin}.\n\n\\subsection*{\\textit{In silico} post-generation screening}\n\\label{sec:res-md}\n\nTo screen the $\\sim$90,000 CLaSS-generated AMP sequences, we first used an independent set of binary (yes\/no) sequence-level deep neural net-based classifiers that screen for antimicrobial function, broad-spectrum efficacy, presence of secondary structure, as well as toxicity\n(see Fig. \\ref{fig:Overview} and Table S\\ref{tab:classifier_results}). The 163 candidates that passed this screening were then subjected to coarse-grained Molecular Dynamics (CGMD) simulations of peptide-membrane interactions. The computational efficiency of these simulations makes them an attractive choice for high-throughput and physically-inspired filtering of peptide sequences.\n\nSince there exists no standardized protocol for screening antimicrobial candidates using molecular simulations, we performed a set of control simulations of known sequences with or without antimicrobial activity.
From those control runs, we found for the first time that the variance of the number of contacts between positive residues and membrane lipids is predictive of antimicrobial activity (Supplementary Fig. \\ref{fig:cgmd}): specifically, the contact variance differentiates between high-potency AMPs and non-antimicrobial sequences with a sensitivity of 88\\% and a specificity of 63\\% (see SI). Physically, this feature can be interpreted as measuring the robust binding tendency of a peptide sequence to the model membrane.\nTherefore, we used a contact variance cutoff of $2$ for further filtering of the 163 generated AMPs that passed the classifier screening.\n\n\\subsection*{Wet lab characterization}\n\\label{sec:expt}\nA final set of 20 CLaSS-generated AMP sequences that passed the contact variance-based screening mentioned above, along with their simulated and physico-chemical characteristics, are reported in\nTables S\\ref{tab:20P-sim} and S\\ref{tab:sicgmd}. Those sequences were tested in the wet lab for antimicrobial activity, as measured using the minimum inhibitory concentration (MIC, the lower the better) against Gram-positive \\textit{S. aureus} and Gram-negative \\textit{E. coli} (Table S\\ref{tab:20P_MIC}).\nEleven generated non-AMP sequences were also screened for antimicrobial activity (Table S\\ref{tab:11N-MIC}).\nNone of the designed non-AMP sequences showed MIC values low enough to be considered antimicrobial, implying that our approach is not prone to false-negative predictions.
We also speculate that a domain shift between the AMP test set and the sequences generated from the latent space (which was trained on both labeled and unlabeled peptides) likely explains the false-positive predictions (18 out of 20).\n\nAmong the 20 AI-designed AMP candidates, two sequences, \\textit{YLRLIRYMAKMI-CONH2} (YI12, 12 amino acids) and \\textit{FPLTWLKWWKWKK-CONH2} (FK13, 13 amino acids), were identified as the best candidates, with the lowest MIC values (Table \\ref{tab:tab-mic-compare} and Table S\\ref{tab:20P_MIC}). Both peptides are positively charged and have a nonzero hydrophobic moment (Table S\\ref{tab:sicgmd}), indicating a cationic and amphiphilic nature consistent with known antimicrobials. These peptides were further evaluated against the more difficult-to-treat Gram-negative \\textit{P. aeruginosa} and \\textit{A. baumannii}, as well as a multi-drug resistant Gram-negative \\textit{K. pneumoniae}. As listed in Fig. \\ref{fig:6_7}, both YI12 and FK13 showed potent broad-spectrum antimicrobial activity with comparable MIC values. We compared the MIC values of YI12 and FK13 with those of LLKKLLKKLLKK, an existing alpha helix-forming antimicrobial peptide with excellent antimicrobial activity and selectivity reported in \\cite{wiradharma2013rationally}. The MIC of LLKKLLKKLLKK against \\textit{S. aureus}, \\textit{E. coli}, and \\textit{P. aeruginosa} is $>$500, $>$500, and 63, respectively. The MIC values of FK13 and YI12 are comparable to that of LLKKLLKKLLKK against \\textit{P. aeruginosa}. However, the MIC values of FK13 and YI12 are significantly lower than those of LLKKLLKKLLKK against \\textit{S. aureus} and \\textit{E. coli}, demonstrating the greater efficacy of the antimicrobial peptides discovered in this study.\nWe also report the results of several existing AMP prediction methods on YI12 and FK13 in Table \\ref{tab:tab-mic-compare}.
iAMP Pred \\cite{meher2017predicting} and CAMP-RF \\cite{thomas2010camp} predict both of them as AMPs, whereas other methods misclassify one of the two. For example, DBAASP-SP \\cite{vishnepolsky2018predictive}, which relies on local similarity search, misclassifies FK13. The method of Witten \\textit{et al.} \\cite{witten2019deep} predicts AMP activity against \\textit{S. aureus} for both sequences, but does not recognize FK13 as an effective antimicrobial against \\textit{E. coli}.\n\nWe further performed \\textit{in vitro} and \\textit{in vivo} testing for toxicity.\nBased on the 50\\% hemolysis activity ($HC_{50}$) and lethal dose ($LD_{50}$) toxicity values (Table \\ref{tab:tab-mic-compare} and Fig. S\\ref{fig:cd_n_tox}), both peptides appear biocompatible (as the $HC_{50}$ and $LD_{50}$ values are much higher than the MIC values), FK13 being more biocompatible than YI12. More importantly, the $LD_{50}$ values of both peptides compare favorably with that of polymyxin B (20.5 mg\/kg) \\cite{rifkind1967prevention}, which is a clinically used antimicrobial drug for the treatment of antibiotic-resistant Gram-negative bacterial infections.\n\n\\subsection*{Sequence similarity analyses}\n\\label{sec:novelty}\nTo investigate the similarity of YI12 and FK13 with respect to the training sequences, we analyzed the alignment scores returned by the BLAST homology search in detail (Fig. S\\ref{fig:blast_result} and Fig. S\\ref{fig:cd_n_tox}), in line with earlier works \\cite{ronvcevic2018parallel, chen2019simulation}. Scoring metrics include the raw alignment score, E-value, percentage of alignment coverage, percentage of identity, percentage of positive matches or similarity, and percentage of alignment gap (indicating the presence of additional amino acids). BLAST searching with an E-value threshold of 10 against the training database did not reveal any match for YI12, suggesting that there exists no statistically significant match of YI12.
Therefore, we further searched for related sequences of YI12 in the much larger Uniprot database, consisting of $\\sim$ 223.5 M non-redundant sequences, only a fraction of which was included in our model training. YI12 shows an E-value of 2.9 to its closest match,\nwhich is an 11-residue segment from a bacterial EAL domain-containing protein (Fig. S\\ref{fig:blast_result}). This result suggests that YI12 shares low similarity with known sequences, even when all protein sequences in Uniprot are considered. We also performed a BLAST search of YI12 against the PATSEQ database, which contains $\\sim$ 65.5 M patented peptides, and still received a minimum E-value of 1.66. The sequence nearest to YI12 in PATSEQ is an eight amino acid long segment from a 79 amino acid long human protein, with 87.5\\% similarity and only 66.7\\% coverage, further confirming YI12's low similarity to known sequences.\n\nFK13 shows less than 75\\% identity, a gap in the alignment, and 85\\% query coverage to its closest match in the training database, implying that FK13 also shares low similarity with the training sequences (Fig. S\\ref{fig:blast_result}); YI12 is more novel than FK13, though. The closest match of FK13 in the training database is a synthetic variant of a 13 amino acid long bactericidal domain (PuroA: FPVTWRWWKWWKG) of the Puroindoline-A protein from wheat endosperm. The antimicrobial and hemolysis activities of FK13 are close to those reported for PuroA \\cite{jing2003conformation, haney2013mechanism}. Nevertheless, FK13 is significantly different from PuroA; FK13 is K-rich and low in W-content, resulting in a lower Grand Average of Hydropathy (GRAVY) score ($-0.854$ \\textit{vs.} $-0.962$), a higher aliphatic index ($60.0$ \\textit{vs.} $22.3$), and a lower instability index ($15.45$ \\textit{vs.} $58.30$), all together indicative of higher peptide stability.
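The GRAVY score quoted above is simply the mean Kyte-Doolittle hydropathy over the sequence; a few lines of code reproduce the reported values for FK13 and PuroA:

```python
# Kyte-Doolittle hydropathy scale for the 20 natural amino acids.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def gravy(seq):
    # Grand Average of Hydropathy: mean per-residue hydropathy value.
    return sum(KD[a] for a in seq) / len(seq)
```

Applied to the unmodified sequences, `gravy("FPLTWLKWWKWKK")` gives $\approx -0.854$ (FK13) and `gravy("FPVTWRWWKWWKG")` gives $\approx -0.962$ (PuroA), matching the text.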
In fact, lower W-content was found beneficial for stabilizing FK13 during wet-lab experiments, since Tryptophan (W) is susceptible to oxidation in air. Lower W-content has also been implicated in improving \\textit{in vivo} peptide stability \\cite{mathur2018silico}.\nTaken together, these results illustrate that CLaSS on the latent peptide space modeled by the WAE is able to generate novel and optimal antimicrobial sequences by efficiently learning the complicated sequence-function relationship in peptides and exploiting that knowledge for controlled exploration. When combined with subsequent \\textit{in silico} screening, novel and optimal lead candidates with experimentally confirmed high broad-spectrum efficacy and selectivity are identified at a success rate of 10\\%. The whole cycle (from database curation to wet lab confirmation) took a single iteration and 48 days in total (Fig. \\ref{fig:Overview}).\n\n\\subsection*{Structural and mechanistic analyses}\n\\label{sec:str}\n\nWe performed all-atom explicit-water simulations (see SI) of these two sequences in the presence of a lipid membrane, starting from an $\\alpha$-helical structure. Different membrane binding mechanisms were observed for the two sequences, as illustrated in Fig.~\\ref{fig:6_7}A. YI12 embeds into the membrane using its positively charged N-terminal Arginine (R) residues, while FK13 embeds either with its N-terminal Phenylalanine (F) or with its C-terminal Tryptophan (W) and Lysine (K) residues. These results provide mechanistic insights into the different modes of action adopted by YI12 and FK13 during the early stages of membrane interaction.\n\nPeptides were further experimentally characterized using CD spectroscopy (see SI).\nBoth YI12 and FK13 showed a random coil-like structure in water, but formed an $\\alpha$-helix in 20\\% SDS buffer (Fig. S\\ref{fig:cd_n_tox}). Structure classifier predictions (see SI) and all-atom simulations are consistent with the CD results.
From the CD spectra, the $\\alpha$-helicity of YI12 appears stronger than that of FK13, in line with its stronger hydrophobic moment (Table S\\ref{tab:sicgmd}). In summary, physicochemical analyses and CD spectroscopy together suggest that a cationic nature and an amphiphilic helical topology are the underlying factors inducing the antimicrobial nature of YI12 and FK13.\n\nTo provide insight into the mechanism of action underlying the antimicrobial activity of YI12 and FK13, we conducted an agar plate assay and found that both peptides are bactericidal: there was a 99.9\\% reduction of colonies at 2 x MIC.\n\nSince $\\alpha$-helical peptides like YI12 and FK13 are known to disrupt membranes by leaky pore formation~\\cite{kumar2018antimicrobial,guha2019mechanistic}, we performed live imaging with confocal fluorescence microscopy for FK13 against \\textit{E. coli} (Figure \\ref{fig:confocal}). The results were compared with those of polymyxin B, one of a group of basic polypeptide antibiotics derived from \\textit{B. polymyxa}. After 2 hours of incubating \\textit{E. coli} with polymyxin B or with FK13 at 2 x MIC, confocal imaging was performed. The emergence of fluorescence, as shown in Figure \\ref{fig:confocal}, confirmed that red propidium iodide (PI) had entered the bacterial cell and interacted with bacterial DNA in the presence of either FK13 or polymyxin B. This finding implies that both polymyxin B and FK13 induce pore formation in the bacterial membrane and allow the PI dye to enter the bacteria; without pore formation, the PI dye would not be able to enter.\n\n\\subsection*{Resistance analyses}\nFinally, we performed resistance acquisition studies of \\textit{E. coli} in the presence of imipenem (an intravenous $\\beta$-lactam antibiotic), YI12, or FK13 at sub-MIC concentrations. Results shown in Figure \\ref{fig:6_7}B confirm that both YI12 and FK13 do not induce resistance after 25 passages, while E.
coli began developing resistance to the antibiotic imipenem after just 6 passages. We also investigated the efficacy of these peptides against a polymyxin B-resistant strain of \\textit{K. pneumoniae}; polymyxin B is an antibiotic of last resort. Table \\ref{tab:tab-mic-compare} shows the MIC values of YI12, FK13, and polymyxin B, revealing no MIC increase for either of the two discovered peptides when compared to the MIC against the MDR \\textit{K. pneumoniae} strain (from ATCC). In contrast, the MIC of polymyxin B is 2 $\\mu$g\/ml against the same MDR \\textit{K. pneumoniae} strain (from ATCC), but increases to $>$125 $\\mu$g\/ml once the \\textit{K. pneumoniae} became resistant to polymyxin B. The MIC values of YI12 and FK13 remain lower than that of polymyxin B in the polymyxin B-resistant strain, indicating that resistance to polymyxin B does not extend to YI12 and FK13. Taken together, these results indicate that YI12 and FK13 hold therapeutic potential for treating resistant strains and therefore demand further investigation.




\\section*{Discussion and Conclusions}
\\label{sec:conclusion}
Learning the implicit interaction rule(s) of complex molecular systems is a major goal of artificial intelligence (AI) research.
This direction is critical for designing new molecules\/materials with specific structural and\/or functional requirements, one of the most anticipated and acutely needed applications.
Antimicrobial peptides considered here represent an archetypal system for molecular discovery problems. They exhibit a near-infinite and mostly unexplored chemical repertoire and a well-defined chemical palette (natural amino acids), as well as potentially conflicting or opposing design objectives, and are of high importance due to the global increase in antibiotic resistance and a depleted antibiotic discovery pipeline.
Recent work has shown that deep learning can be used to help screen libraries of existing chemicals for antibiotic properties \\cite{stokes2020deep}. A number of recent studies have also used AI methods for the design of antimicrobial peptides and provided experimental validation \\cite{loose2006linguistic, nagarajan2018computational, maccari2013antimicrobial, porto2018silico, fjell2011optimization, vishnepolsky2018predictive, nagarajan2019omega76}. However, to our knowledge, the present work provides for the first time a fully automated computational framework that combines controllable generative modeling, deep learning, and physics-driven learning for \\textit{de novo} design of broad-spectrum potent and selective AMP sequences and experimentally validates them for broad-spectrum efficacy and toxicity. Further, the discovered peptides show high efficacy against a strain that is resistant to an antibiotic of last resort, and mitigate the onset of drug resistance.
Wet-lab results confirmed the efficiency of the proposed approach for designing novel and optimized sequences with a very modest number of candidate compounds synthesized and tested. The present design approach in this proof-of-concept study yielded a 10\\% success rate and a rapid turnaround of
48 days, highlighting the importance of combining AI-driven computational strategies with experiments to arrive at more effective drug candidates.
The generative modeling approach presented here can be tuned not only to generate novel candidates, but also to design novel combination therapies and antibiotic adjuvants, to further advance antibiotic treatments.

Since CLaSS is a generic approach, it is suitable for a variety of controlled generation tasks and can handle multiple controls simultaneously. The method is simple to implement, fast, efficient, and scalable, as it does not require any optimization over the latent space.
CLaSS has additional advantages regarding repurposability, as adding a new constraint requires only training a simple predictor.
Therefore, future directions of this work will explore the effect of
additional relevant constraints, such as induced resistance, efficacy in animal models of infection, and fine-grained strain specificity, on the designed AMPs using the approach presented here. Extending CLaSS to other controlled molecule design tasks, such as target-specific and selective drug-like small molecule generation, is also underway \\cite{chenthamarakshan2020target}.
Finally, the AI models will be further optimized in an iterative manner
by using feedback from simulations and\/or experiments in an active learning framework.




\\clearpage



\\subsection*{Online Methods}
\\subsubsection*{Generative Autoencoders}
The Variational Autoencoder (VAE) \\cite{kingma2013auto} family has emerged as a principled and successful approach for learning meaningful continuous latent representations from sequences without supervision.
The data distribution $p({\\mathbf{x}})$ over samples ${\\mathbf{x}}$ is represented as the marginal of a joint distribution $p({\\mathbf{x}}, {\\mathbf{z}})$ that factors out as $p({\\mathbf{z}}) p_\\theta({\\mathbf{x}}|{\\mathbf{z}})$.
The prior $p({\\mathbf{z}})$ is a simple smooth distribution, while $p_\\theta({\\mathbf{x}} | {\\mathbf{z}})$ is the decoder that maps a point in latent ${\\mathbf{z}}$-space to a distribution in ${\\mathbf{x}}$ data space.
Exact inference of the hidden variable ${\\mathbf{z}}$ for a given input ${\\mathbf{x}}$ would require integration over the full latent space: $p({\\mathbf{z}} | {\\mathbf{x}}) = \\frac{p({\\mathbf{z}}) p_\\theta({\\mathbf{x}}|{\\mathbf{z}})}{\\int d{\\mathbf{z}} \\, p({\\mathbf{z}}) p_\\theta({\\mathbf{x}}|{\\mathbf{z}})}$.
To avoid this computational burden, the inference is approximated through an inference neural network or encoder
$q_\\phi({\\mathbf{z}}|{\\mathbf{x}})$.
Our implementation follows \\cite{bowman2015large}, where both encoder and decoder are single-layer LSTM recurrent neural networks \\cite{hochreiter1997long},
and the encoder specifies a diagonal Gaussian distribution, i.e. $q_\\phi(z|x) = \\mathcal{N}(z; \\mu(x), \\Sigma(x))$ (Fig. \\ref{fig:overview-gen}).

Autoencoder training optimizes an objective that is the sum of a reconstruction loss and a regularization (constraint) loss term: ${\\mathcal{L}}(\\theta, \\phi) = {\\mathcal{L}}_{rec}(\\theta, \\phi) + {\\mathcal{L}}_c(\\phi)$.
In the standard VAE objective \\cite{kingma2013auto}, the reconstruction loss ${\\mathcal{L}}_{rec}(\\theta, \\phi)$ is based on the negative log likelihood of the training sample,
and the constraint ${\\mathcal{L}}_c(\\phi)$ uses $D_{KL}$, the Kullback-Leibler divergence:
\\[
 \\mathcal{L}_{\\text{VAE}}(\\theta, \\phi) =
 \\mathbb{E}_{q_\\phi(z|x)}[\\log p_\\theta(x|z)]
 - D_{KL}(q_\\phi(z|x) || p(z))
\\]
for a single sample.
This objective is a lower bound on the data log-likelihood; hence it is called the ELBO (Evidence Lower Bound). With the standard VAE, we observed the same posterior collapse as detailed for natural language in the literature \\cite{bowman2015generating},
meaning $q(z|x) \\approx p(z)$ such that no meaningful information is encoded in $z$ space.
Further extensions tackle posterior collapse: $\\beta$-VAE adds a multiplier (``weight'') hyperparameter $\\beta$ on the regularization term, $\\delta$-VAE encourages the $D_{KL}$ term to be close to a nonzero $\\delta$, \\textit{etc.} However, finding a setting of these VAE variants that reliably avoids posterior collapse is tricky.
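For a diagonal Gaussian encoder, the $D_{KL}$ term above has a simple closed form, $\\frac{1}{2}\\sum_d (\\mu_d^2 + \\sigma_d^2 - 1 - \\log \\sigma_d^2)$, which also makes posterior collapse easy to monitor numerically. The following numpy sketch is illustrative only; function and variable names are ours, not taken from the released code:

```python
import numpy as np

def kl_diag_gaussian_to_standard_normal(mu, log_var):
    """D_KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.

    Closed form: 0.5 * sum(mu^2 + sigma^2 - 1 - log sigma^2).
    """
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - 1.0 - log_var, axis=-1)

# At mu = 0, log_var = 0 the posterior equals the prior, so the KL is zero;
# "posterior collapse" drives the encoder toward exactly this point.
mu = np.zeros((1, 8))
log_var = np.zeros((1, 8))
print(kl_diag_gaussian_to_standard_normal(mu, log_var))  # -> [0.]
```

Tracking this quantity per batch during training makes it easy to detect when it approaches zero for all inputs.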
\n\n\nTherefore, many variations within the VAE family have been recently proposed, such as Wasserstein Autoencoder (WAE) \\cite{tolstikhin2017wasserstein, bahuleyan2018probabilistic} and Adversarial Autoencoder (AAE) \\cite{makhzani2015adversarial}.\n\nWAE factors an optimal transport plan through the encoder-decoder pair,\non the constraint that marginal posterior $q_\\phi(z) = \\mathbb{E}_{x \\sim p(x)} q_\\phi(z|x)$\nequals a prior distribution, i.e. $q_\\phi(z)=p(z)$.\nThis is relaxed to an objective similar to $\\mathcal{L}_{\\text{VAE}}$ above. However, \nin the WAE objective \\cite{tolstikhin2017wasserstein}, instead of each individual $q_\\phi({\\mathbf{z}} | {\\mathbf{x}})$, the marginal posterior $q_\\phi({\\mathbf{z}}) = \\mathbb{E}_{\\mathbf{x}} [q_\\phi({\\mathbf{z}}|{\\mathbf{x}})]$ is constrained to be close to the prior $p({\\mathbf{z}})$.\nWe enforce the constraint by penalizing maximum mean discrepancy \\cite{gretton2007kernel} with random features approximation of the radial basis function \\cite{rahimi2007random}:\n${\\mathcal{L}}_c(\\phi) = \\text{MMD}(q_\\phi({\\mathbf{z}}), p({\\mathbf{z}}))$.\nThe total objective for WAE is ${\\mathcal{L}} = {\\mathcal{L}}_{rec} + {\\mathcal{L}}_c$ where we use the reconstruction loss ${\\mathcal{L}}_{rec} = -\\mathbb{E}_{q_\\phi({\\mathbf{z}}|{\\mathbf{x}})}[\\log p_\\theta({\\mathbf{x}}|{\\mathbf{z}})]$. In WAE training with maximum mean discrepancy (MMD) or with a discriminator, we found a benefit of regularizing the encoder variance as in the literature~\\cite{rubenstein2018latent,bahuleyan2018probabilistic}. For MMD, we used a random features approximation of the Gaussian kernel \\cite{rahimi2007random}.\n\nDetails of autoencoder architecture and training, as well as an experimental comparison between different auto-encoder variations tested in this study, can be found in Supplementary Material sections \\ref{si:models}, \\ref{method:training} and \\ref{si:vaewaeaae}. 
Python code for training peptide autoencoders is available via GitHub at https:\/\/github.com\/IBM\/controlled-peptide-generation.

\\subsubsection*{CLaSS - Conditional Latent (Attribute) Space Sampling}
\\label{si:lcs}
We propose Conditional Latent (attribute) Space Sampling, CLaSS, a simple but efficient method to sample from a targeted region of the latent space of an autoencoder trained in an unsupervised manner (Fig. \\ref{fig:overview-gen}).


\\textbf{Density Modeling in Latent Space}
We assume a latent variable model (e.g., an autoencoder) that has been trained in an unsupervised manner to meet the evaluation criteria outlined in the SI.
 All training data ${\\mathbf{x}}_j$ are then encoded in latent space: ${\\mathbf{z}}_{j,k} \\sim q_\\phi({\\mathbf{z}} | {\\mathbf{x}}_j)$.
These ${\\mathbf{z}}_{j,k}$ are used to fit an explicit density model $Q_\\xi({\\mathbf{z}})$ to approximate the marginal posterior $q_\\phi({\\mathbf{z}})$, and a classifier model $q_\\xi(a_i|{\\mathbf{z}})$ for attribute $a_i$ to approximate the probability $p(a_i|{\\mathbf{x}})$.
The motivation for fitting $Q_\\xi({\\mathbf{z}})$ is to sample from $Q_\\xi$ rather than from $p({\\mathbf{z}})$, since at the end of training the discrepancy between $q_\\phi({\\mathbf{z}})$ and $p({\\mathbf{z}})$ can be significant.

Although any explicit density estimator could be used for $Q_\\xi({\\mathbf{z}})$,
here we consider Gaussian mixture density models and evaluate the negative log-likelihood on a held-out set to determine the optimal complexity.
We find 100 components and untied diagonal covariance matrices to be optimal, giving a held-out log likelihood of $105.1$.
To fit $Q_\\xi$, we use $K=10$ random samples from the encoding distribution of the training data, ${\\mathbf{z}}_{j,k} \\sim q_\\phi({\\mathbf{z}} | {\\mathbf{x}}_j) = \\mathcal{N}(\\mu({\\mathbf{x}}_j), \\sigma({\\mathbf{x}}_j))$, with $k=1 \\dots K$.

 Independent simple linear attribute
classifiers $q_\\xi(a_i | {\\mathbf{z}})$ are then fitted per attribute.
For each attribute $a_i$, the procedure consists of:
(1) collecting a dataset of all labeled samples for this attribute $({\\mathbf{x}}_j, a_i)$,
(2) encoding the labeled data as before, ${\\mathbf{z}}_{j,k} \\sim q_\\phi({\\mathbf{z}} | {\\mathbf{x}}_j)$, and
(3) fitting $\\xi$, the parameters of a logistic regression classifier $q_\\xi(a_i | {\\mathbf{z}})$, with inverse regularization strength $C=1.0$ and 300 lbfgs iterations.

\\textbf{Rejection Sampling for Attribute-Conditioned Generation}
Let us formalize that there are $n$ different (and possibly independent) binary attributes of interest ${\\mathbf{a}} \\in \\{0,1\\}^n = [a_1, a_2, \\dots, a_n]$; each attribute is available (labeled) only for a small and possibly disjoint subset of the dataset. Since functional annotation of peptide sequences is expensive, current databases typically represent a small ($\\approx 100-10000$) subset of the unlabeled corpus.
We posit that all plausible datapoints have those attributes, albeit mostly without label annotation. Therefore, the data distribution is implicitly generated as $p({\\mathbf{x}}) = \\mathbb{E}_{{\\mathbf{a}} \\sim p({\\mathbf{a}})} [ p({\\mathbf{x}} | {\\mathbf{a}})]$, where the distribution over the (potentially huge) discrete set of attribute combinations $p({\\mathbf{a}})$ is integrated out, and for each attribute combination the set of possible sequences is specified as $p({\\mathbf{x}} | {\\mathbf{a}})$.
As our aim is to sample novel sequences ${\\mathbf{x}} \\sim p({\\mathbf{x}} | {\\mathbf{a}})$, for a desired attribute combination ${\\mathbf{a}} = [a_1, \\dots, a_n]$,
we can now approach this task through conditional sampling in latent space:
\\begin{align}
 p({\\mathbf{x}}|{\\mathbf{a}}) & = \\int \\! \\mathrm{d}z \\, p({\\mathbf{z}}|{\\mathbf{a}}) p({\\mathbf{x}}|{\\mathbf{z}}) \\\\
 & \\approx \\int \\!
\\mathrm{d}z \\, \\hat{p}_\\xi({\\mathbf{z}}|{\\mathbf{a}}) p_\\theta({\\mathbf{x}}|{\\mathbf{z}})
\\end{align}
where $\\hat{p}_\\xi({\\mathbf{z}}|{\\mathbf{a}})$ is not approximated explicitly; rather, we use rejection sampling with the models $Q_\\xi({\\mathbf{z}})$ and $q_\\xi(a_i | {\\mathbf{z}})$ to approximate samples from $p({\\mathbf{z}} | {\\mathbf{a}})$.

To approach this, we first use Bayes' rule and the conditional independence of the attributes $a_i$ conditioned on ${\\mathbf{z}}$, since we assume the latent variable captures all information needed to model the attributes: $a_i \\perp a_j | {\\mathbf{z}}$ (\\textit{i.e.} two attributes $a_i$ and $a_j$ are independent when conditioned on ${\\mathbf{z}}$):
\\begin{align}
p({\\mathbf{z}}|{\\mathbf{a}})
 &= \\frac{p({\\mathbf{a}} | {\\mathbf{z}}) q_\\phi({\\mathbf{z}})} {p({\\mathbf{a}})} \\\\
 & = \\frac{ q_\\phi({\\mathbf{z}}) \\prod_i p(a_i | {\\mathbf{z}})} {p({\\mathbf{a}})}
\\end{align}

We then introduce the approximation $\\hat{p}_\\xi({\\mathbf{z}}|{\\mathbf{a}})$,
using the models $Q_\\xi$ and $q_\\xi$ above:
\\begin{align}
\\hat{p}_\\xi({\\mathbf{z}}|{\\mathbf{a}}) &= \\frac{Q_\\xi({\\mathbf{z}}) \\prod_i q_\\xi (a_i | {\\mathbf{z}})}{q_\\xi({\\mathbf{a}})}
\\label{eq:rejsamplingbase}
\\end{align}

The denominator $q_\\xi({\\mathbf{a}})$ in Eq.
(\\ref{eq:rejsamplingbase}) could be estimated by approximating the expectation $q_\\xi({\\mathbf{a}}) = \\mathbb{E}_{Q_\\xi({\\mathbf{z}})} q_\\xi({\\mathbf{a}} | {\\mathbf{z}}) \\approx \\frac{1}{N} \\sum_{{\\mathbf{z}}_j \\sim Q_\\xi({\\mathbf{z}})}^N q_\\xi({\\mathbf{a}} | {\\mathbf{z}})$.
However, the denominator is not needed \\textit{a priori} in our rejection sampling scheme; instead, $q_\\xi({\\mathbf{a}})$ naturally appears as the acceptance rate of samples from the proposal distribution (see below).

To perform rejection sampling from a distribution with pdf $f({\\mathbf{z}})$, we need a proposal distribution $g({\\mathbf{z}})$ and a constant $M$, such that $f({\\mathbf{z}}) \\leq M g({\\mathbf{z}})$ for all ${\\mathbf{z}}$, i.e. $M g({\\mathbf{z}})$ envelops $f({\\mathbf{z}})$.
We draw samples from $g({\\mathbf{z}})$ and accept each sample with probability $\\frac{f({\\mathbf{z}})}{M g({\\mathbf{z}})} \\leq 1$.

To sample from Eq. (\\ref{eq:rejsamplingbase}), we consider ${\\mathbf{a}}$ to be fixed.
We perform rejection sampling using the proposal distribution $g({\\mathbf{z}}) = Q_\\xi({\\mathbf{z}})$, which can be sampled directly.
Now set $M=1\/q_\\xi({\\mathbf{a}})$, so $M g({\\mathbf{z}}) = Q_\\xi({\\mathbf{z}}) \/ q_\\xi ({\\mathbf{a}})$,
while the pdf to sample from is $f({\\mathbf{z}}) = Q_\\xi({\\mathbf{z}}) \\prod_i q_\\xi (a_i | {\\mathbf{z}}) \/ q_\\xi({\\mathbf{a}})$.
Therefore, we accept a sample from $Q_\\xi({\\mathbf{z}})$ with probability
\\[
\\frac{f({\\mathbf{z}})}{M g({\\mathbf{z}})} = \\prod_i q_\\xi (a_i | {\\mathbf{z}}) \\leq 1
\\]
The inequality follows trivially, since each factor is a normalized probability.
The acceptance rate is $1\/M = q_\\xi({\\mathbf{a}})$.
Intuitively, the acceptance probability equals the product of the classifiers' scores, while sampling from the explicit density $Q_\\xi({\\mathbf{z}})$.
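The full CLaSS loop (density model, attribute classifiers, rejection step) can be sketched on a toy 2-D latent space. The mixture, classifier weights, and attribute names below are invented stand-ins for the fitted GMM $Q_\\xi({\\mathbf{z}})$ and logistic classifiers $q_\\xi(a_i|{\\mathbf{z}})$, not the models trained in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the fitted models: a 2-component GMM in place of the
# 100-component Q_xi(z), and hand-set logistic weights in place of the
# fitted per-attribute classifiers q_xi(a_i|z).  Illustrative only.
means = np.array([[-2.0, 0.0], [2.0, 0.0]])

def sample_Q(n):
    """Sample z ~ Q_xi(z) from the toy 2-component Gaussian mixture."""
    comp = rng.integers(0, 2, size=n)
    return means[comp] + rng.normal(size=(n, 2))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

classifiers = [lambda z: sigmoid(3.0 * z[:, 0]),    # q(a_1=1|z): "AMP"
               lambda z: sigmoid(-2.0 * z[:, 1])]   # q(a_2=1|z): "non-toxic"

def class_sample(n_proposals=10000):
    """CLaSS: propose z ~ Q_xi(z), accept with prob prod_i q_xi(a_i|z)."""
    z = sample_Q(n_proposals)
    accept_prob = np.prod([c(z) for c in classifiers], axis=0)
    keep = rng.random(n_proposals) < accept_prob
    return z[keep]

z_acc = class_sample()
# Accepted samples concentrate where both classifiers score high
# (here: first coordinate > 0, second < 0); each accepted z would then
# be decoded into a sequence via p_theta(x|z).
print(z_acc.mean(axis=0))
```

Note that no optimization over the latent space is involved; the cost of a stricter attribute combination shows up only as a lower acceptance rate.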
\nIn order to accept any samples, we need a region in ${\\mathbf{z}}$ space to exist where $Q_\\xi({\\mathbf{z}}) > 0$ and the classifiers assign a nonzero probability to all desired attributes, \\textit{i.e.} the combination of attributes has to be realizable in ${\\mathbf{z}}$-space.\n\n\n\\clearpage\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/natureBME_fig1_finalV2.pdf}\n\\caption{Overview and timeline of the proposed AI-driven approach for accelerated antimicrobial design. \n}\n\\label{fig:Overview}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/pipeline_figure-v3.edited.png}\n\\caption{Phases of attribute-controlled peptide sequence generation. (A) Training a generative Autoencoder (AE) model on peptide sequences (AE training in Fig. 1), (B) Mapping sparse peptide attributes to the model's latent ${\\mathbf{z}}$-space and constructing the density model of the ${\\mathbf{z}}$-space (Autoencoder Evaluation in Fig. 1), and (C) Sampling from the ${\\mathbf{z}}$-space using our CLaSS method (Controlled Generation in Fig. 1). }\n\\label{fig:overview-gen}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[width=\\textwidth]{figures\/natureBME_fig3_finalV2a.pdf}\n\n \n\\caption{Characteristics of the generative autoencoder latent space. (A) Relation between sequence similarity and Euclidean distance in latent ${\\mathbf{z}}$-space, when sequences were modeled using WAE (VAE in Inset). Darker points indicate similarity with itself (\\textit{i.e.} the same exact sequence). \nClassifiers were trained either using WAE ${\\mathbf{z}}$-space encodings (${\\mathbf{z}}$-) or on sequences (sequence-level). Note that the class prediction accuracy of attribute AMP using WAE z-classifiers and sequence classifiers on test data are 87.4 and 88.0, respectively. The same for toxicity attribute are 68.9 and 93.7. 
\n(B) Decoded sequences and their attributes during a linear interpolation between two distant sequences in the WAE latent space.\nAttributes include (1) physicochemical properties,\n (2) sequence similarity (\\texttt{evo\\_start}, \\texttt{evo\\_end}) from endpoint sequences, \nand (3) AMP (\\texttt{z\\_amp}) and Toxic (\\texttt{z\\_tox}) class probabilities from ${\\mathbf{z}}$-classifiers.\nValues in orange and blue are in the upper and lower quartile, respectively. \nBlack rectangle indicates sequences with low attribute similarity to endpoint sequences.\n(C) As an example of further analysis, we show the relation of AMP class probability and instability index for all candidates in the interpolation and a ball-stick rendering of a selected sequence in the path. An interactive demo of interpolations in peptide latent space is available at https:\/\/peptide-walk.mybluemix.net.\n}\n\\label{fig:wae-latent}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/natureBME_fig4_finalV2.pdf}\n\\caption{Physico-chemical property comparison. (A) Comparison of amino acid composition, (B) global hydrophobicity, (C) hydrophobic moment (C), and (D) charge distribution of CLaSS-generated AMPs with training sequences. Mean and standard deviation were estimated on three different sets, each consisting 3000 randomly chosen samples. Generated AMP: {\\color{our_orange}orange}; training AMP: {\\color{our_blue}blue}; training unlabeled: {\\color{our_gray}gray}. \n}\n\\label{fig_biometrics}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{figures\/figure_6_7.edited.png}\n\\caption{Atomistic simulation and resistance acquisition studies. (A) Snapshot from all-atom simulation of YI12 (A1) and FK13 (A2). Selected residues that interact with the membrane are highlighted. (B) Resistance acquisition studies of E. coli. 
in the presence of sub-MIC (1\/2x) levels of imipenem, YI12, and FK13.
}
\\label{fig:6_7}
\\end{figure}


\\newcommand{\\specialcell}[2][c]{%
\\begin{tabular}[#1]{@{}c@{}}#2\\end{tabular}}

\\begin{table}
 \\centering
 \\begin{tabular}{c|c|c|c|c|c|c|c|c}
 \\toprule
 Sequence & \\specialcell{SA} & \\specialcell{EC} & \\specialcell{PA} & \\specialcell{AB} & \\specialcell{MDR-KP} & \\specialcell{polyR-KP} & HC$_{50}$ & LD$_{50}$ \\\\
 \\hline
 YI12 & 7.80 & 31.25 & 125.00 & 15.60 & 31.25 & 31 & 125 & 182 \\\\
 FK13 & 15.60 & 31.25 & 62.50 & 31.25 & 15.60 & 16 & 500 & 158 \\\\
 \\bottomrule
 \\end{tabular}
 \\caption{Antimicrobial activity and toxicity values of YI12 and FK13, the two best CLaSS-designed AMPs.
 MIC values against diverse strains, including SA (\\textit{S. aureus}), EC (\\textit{E. coli}), PA (\\textit{P. aeruginosa}), AB (\\textit{A. baumannii}), one multidrug-resistant (MDR) \\textit{K. pneumoniae} (MDR-KP), and one polymyxin B-resistant \\textit{K. pneumoniae} (polyR-KP). Hemolytic activity is reported as the concentration causing 50\\% hemolysis (HC$_{50}$) of rat red blood cells, and lethal dose toxicity (LD$_{50}$) values are for Balb\/C mice. All reported MIC and HC$_{50}$ values are in $\\mu$g\/mL, while the unit for LD$_{50}$ is mg\/kg. As a baseline, the MIC value of polymyxin B against polyR-KP is $>$125 $\\mu$g\/mL.}
 \\label{tab:tab-mic-compare}
\\end{table}



\\clearpage

\\section*{Acknowledgments}
We acknowledge Youssef Mroueh and Kush Varshney for insightful discussions. We also thank Oscar Chang, Elham Khabiri, and Matt Riemer for help with the initial phase of the work. We would like to acknowledge David Cox, Yuhai Tu, Pablo Meyer Rojas, and Mattia Rigotti for providing valuable feedback on the manuscript. F.C. thanks Patrick Simcock for sharing knowledge.
We thank anonymous reviewers for constructive feedback that significantly improved the quality of this manuscript.


\\section*{Author Information}
Correspondence and requests for materials should be addressed to
Payel Das~(email: daspa@us.ibm.com).

\\textbf{Author Contributions:} P.D., J.C., and A.M. conceived the project; P.D. designed and managed the project; P.D and T.S. designed and implemented the sequence generation and screening framework and algorithm with help from K.W., I.P., and S.G.; Autoencoder experiments were run and analyzed by P.D., T.S., K.W., I.P., P.C., V.C., and C.D.S.; Generated sequences were analyzed in silico by P.D., T.S., H.S., and K.W.; F.C. with help from P.D. and J.C. designed, performed and analyzed the molecular dynamics simulations; Y.Y.Y., J.T., J.H. performed and analyzed the wet lab experiments; H.S. created final figures; All authors wrote the paper.

\\section*{Ethics Statement}
The authors declare that they have no competing financial interests.

\\section*{Data and materials availability} A full description of the model and method
is available in the supplementary materials. Data and code will be available upon request and are also accessible via github at https:\/\/github.com\/IBM\/controlled-peptide-generation\/tree\/master\/models.

\\section*{Supplementary Information}
Supplementary tables, figures, descriptions of the dataset and details of models and methods.

\\textbf{Reporting Summary}
 Further information on research design is available in the Nature Research Reporting Summary linked to this article.


\\clearpage

\\bibliographystyle{Science}

\\section*{Table of Contents}

Supplementary Text\\\\
Dataset\\\\
Model and Methods\\\\
Fig.
S1 to S4 \\\\\nTables S1 to S8\\\\\nSupplementary References \\textit{(1-67)}\n\\section{Supplementary Text}\n\n\n\n\n\n\n\n\n\\subsection{Conditional Generation with Autoencoders}\nSince the peptide sequences are represented as text strings here, we limit our discussion to the literature on text generation with constraints. \nControlled text sequence generation is non-trivial, as the discrete and non-differentiable nature of text samples does not allow the use of a global discriminator, which is commonly used in image generation tasks to guide generation. To tackle this issue of non-differentiability, policy learning has been suggested, which suffers from high variance during training \\cite{yu2017seqgan,guimaraes2017objective}. Therefore, specialized distributions, such as the Gumbel-softmax \\cite{jang2016categorical,kusner2016gans}, a concrete distribution \\cite{maddison2016concrete}, or a soft-argmax function \\cite{zhang2016generating}, have been proposed to approximate the gradient of the model from discrete samples.\n\nAlternatively, in a semi-supervised model setting, the minimization of element-wise reconstruction error has been employed \\cite{kingma2014semi}, which tends to lose the holistic view of a full sentence. Hu et al. \\cite{hu2017toward} proposed a VAE variant that allows both controllable generation and semi-supervised learning. The working principle requires labels to be present during training and encourages the latent space to represent them, so the addition of new attributes requires retraining the latent variable model itself.\nThe framework relies on a set of new discrete binary variables in latent space to control the attributes and on an ad-hoc wake-sleep training procedure, which requires striking the right balance between multiple competing and tightly interacting loss objectives, a balance that is tricky to achieve.
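As an illustration of the relaxation techniques cited above, the Gumbel-softmax trick can be sketched in a few lines of numpy; this is a generic sketch, not code from any of the cited works, and the temperature and toy logits are arbitrary choices:

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw an approximately one-hot, differentiable sample from a categorical
    distribution by perturbing the logits with Gumbel noise and applying softmax."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF transform
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + g) / tau  # lower tau pushes the sample closer to a hard one-hot
    y = y - y.max()         # stabilize the softmax
    e = np.exp(y)
    return e / e.sum()

sample = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.5)
```

Because the output is a smooth function of the logits, gradients can flow through the sampling step, which is the property the discrete-text generation methods above rely on.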
\n\nEngel et al.~\\cite{engel2017latent} propose a conditional generation framework without retraining the model, similar in concept to ours, by modeling in latent space post-hoc. Their approach does not need an explicit density model in ${\\mathbf{z}}$-space; rather, it relies on adversarial training of a generator and an attribute discriminator, and it focuses on modifying sample reconstructions rather than on generating novel samples. \nThe recent Plug and Play Language Model (PPLM) for controllable language generation combines a pre-trained language model (LM) with one or more simple attribute classifiers that guide text generation without any further training of the LM \\cite{dathathri2019plug}. \n\n\\subsection{Conditional Generation for Molecule Design}\nFollowing the work of G\u00f3mez-Bombarelli et al. \\cite{gomez2018automatic}, Bayesian Optimization (BO) in the learned latent space has been employed for molecular optimization for properties such as drug-likeness (QED) or penalized logP. The standard BO routine consists of two key steps: (i) estimating the black-box function from data through a probabilistic surrogate model, usually a Gaussian process (GP), referred to as the response surface; (ii) maximizing an acquisition function that computes a score that trades off exploration and exploitation according to uncertainty and optimality of the response surface. As the dimensionality of the input latent space increases, these two steps become challenging. In most cases, such a method is restricted to local optimization using training data points as starting points, as optimizers are likely to follow gradients into regions of the latent space that the model has not been exposed to during training.\nReinforcement learning (RL) based methods provide an alternative approach for molecular optimization \\cite{zhou2019optimization, you2018graph, popova2018deep, Zhavoronkov2019natbio}, in which RL policies are learned by incorporating the desired attribute as part of the reward.
However, a large number of evaluations are typically needed for both BO and RL-based optimizations while trading off exploration and exploitation \\cite{korovina2019chembo}. \nSemi-supervised learning has also been used for conditional generation of molecules \\cite{lim2018molecular, kang2018conditional, li2018multi, mendez2020novo}, which requires labels to be available during generative model training. \n\nCLaSS is fundamentally different from these existing approaches, as it does not need expensive optimization over latent space, policy learning, or minimization of complex loss objectives, and therefore does not suffer from their computational complexity. Furthermore, CLaSS is not limited to local optimization around an initial starting point. Adding a new constraint in CLaSS is relatively simple, as it only requires training a simple predictor; CLaSS is therefore easily repurposable. CLaSS is embarrassingly parallelizable as well. \n\n\n\n\\section{Dataset}\n\n\n\n\\subsection{A Dataset for Semi-Supervised Training of the AMP Generative Model}\n\\label{si:data}\n\nWe compiled a new two-part (unlabeled and labeled) dataset for learning a meaningful representation of the peptide space and conditionally generating safe antimicrobial peptides from that space using the proposed CLaSS method. We consider discriminating for several functional attributes as well as for the presence of structure in peptides. Only linear and monomeric sequences with no terminal modifications and length up to 50 amino acids were considered in curating this dataset.\nAs a further pre-processing step, the sequences with non-natural amino acids (B, J, O, U, X, and Z) and those containing lower-case letters were eliminated.
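The curation rules above (length cap, natural amino acids only, upper-case letters only) amount to a simple sequence filter; a minimal sketch, with an illustrative function name and toy sequences that are not taken from the dataset:

```python
NON_NATURAL = set("BJOUXZ")  # amino-acid letters excluded during curation

def keep_sequence(seq, max_len=50):
    """Return True if a peptide passes the pre-processing filters described
    above: length at most max_len, no non-natural letters, no lower case."""
    return (len(seq) <= max_len
            and seq.isupper()
            and not (set(seq) & NON_NATURAL))

# Toy examples: the second is lower case, the third contains a non-natural X.
kept = [s for s in ["GLFDIVKKVV", "glfdivkkvv", "GLXDIV"] if keep_sequence(s)]
```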
\n\n\\textbf{Unlabeled Sequences:} The unlabeled data come from the UniProt-SwissProt and UniProt-TrEMBL databases \\cite{uniprot} and contain just over 1.7 M sequences, when considering sequences with length up to 50 amino acids.\n\n\\textbf{Labeled Sequences:}\nOur labeled dataset comprises sequences with different attributes curated from a number of publicly available databases~\\cite{singh2015satpdb,pirtskhalava2016dbaasp, khurana2018deepsol, bhadra2018ampep, gupta2013silico}. Below we provide details of the labeled dataset:\n\\begin{itemize}\n \\item Antimicrobial (8683 AMP, 6536 non-AMP);\n \\item Toxic (3149 Toxic, 16280 non-Toxic);\n \\item Broad-spectrum (1302 Positive, 1238 Negative);\n \\item Structured (1170 Positive, 2136 Negative);\n \\item Hormone (569 Positive);\n \\item Antihypertensive (1659 Positive);\n \\item Anticancer (504 Positive).\n\\end{itemize}\n\n\\subsubsection{Details of Labeled Datasets}\n\\label{si:datadetails}\n\\textbf{Sequences with AMP\/non-AMP Annotation.}\nThe AMP-labeled dataset comprises sequences\nfrom two major AMP databases: satPDB \\cite{singh2015satpdb} and DBAASP \\cite{pirtskhalava2016dbaasp}, as well as a dataset used in an earlier AMP classification study named AMPEP \\cite{bhadra2018ampep}. \nSequences with an antimicrobial function annotation in satPDB and AMPEP or a MIC value against any target species less than 25 $\\mu$g$\/$ml in DBAASP \nwere considered as AMP-labeled instances. The duplicates between these three datasets were removed to generate a non-redundant AMP dataset. Sequences with mean activity against all target species $>$ 100 $\\mu$g$\/$ml in DBAASP were considered negative instances (non-AMP).
Since experimentally verified non-AMP sequences are rare to find, the non-AMP instances in AMPEP were generated from UniProt sequences after discarding sequences that were annotated as AMP, membrane, toxic, secretory, defensive, antibiotic, anticancer, antiviral, and antifungal, and these were used in this study as well.\n\n\n\\textbf{Sequences with Toxic\/nonToxic Annotation:}\nSequences with toxicity labels are curated from the satPDB and DBAASP databases as well as from the ToxinPred dataset \\cite{gupta2013silico}. Sequences with ``Major Function'' or ``Sub Function'' annotated as toxic in satPDB and sequences with hemolytic\/cytotoxic activities against all reported target species less than 200 $\\mu$g$\/$ml in DBAASP were considered as Toxic instances. The toxic-annotated instances from ToxinPred were added to this set after removing duplicates, resulting in a total of 3149 Toxic sequences.\nSequences with hemolytic\/cytotoxic activities $>$ 250 $\\mu$g$\/$ml were considered as nonToxic. The nonToxic instances reported in ToxinPred (sequences from SwissProt or TrEMBL that are not found in a search using keywords associated with toxins, \\textit{i.e.,} keyword (NOT KW800 NOT KW20) or keyword (NOT KW800 AND KW33090)) were added to the nonToxic set, totaling 16280 nonToxic sequences. \n\n\\textbf{Sequences with Broad-Spectrum Annotation:}\nAntimicrobial sequences reported in the satPDB or DBAASP database can have both Gram-positive and Gram-negative strains as target groups. We consider such sequences as broad-spectrum. Otherwise, they are treated as narrow-spectrum. Through our filtering, we found 1302 broad-spectrum and 1238 narrow-spectrum sequences.\n\n\\textbf{Sequences with Structure\/No-Structure Annotation:}\nSecondary structure assignment was performed for structures from satPDB using the STRIDE algorithm~\\cite{Frishman_Argos_1995}. If more than 60\\% of the amino acids are helix or beta-strand, we label the sequence as structured (positive).
Otherwise, they are labeled as negative. Through this filtering, we found 1170 positive sequences and 2136 negative sequences.\n\n\\textbf{Peptide Dataset for Baseline Simulations:} For the control simulations, three datasets were prepared. The first two were taken from the satPDB dataset, filtering sequences with length smaller than 20 amino acids. The high-potency dataset contains the 51 sequences with the lowest average MIC, excluding sequences with cysteine residues. All these sequences have an average MIC of less than 10 $\\mu$g\/ml. The low-potency dataset contains the 41 sequences with the highest MIC, excluding sequences with cysteine residues. All these have an average MIC over 300 $\\mu$g\/ml.\n\nTo create a dataset of inactive sequences, we queried UniProt using the following keywords: \\emph{NOT keyword:\"Antimicrobial [KW-0929]\" length:[1 TO 20] NOT keyword:\"Toxin [KW-0800]\" NOT keyword:\"Disulfide bond [KW-1015]\" NOT annotation:(type:ptm) NOT keyword:\"Lipid-binding [KW-0446]\" NOT keyword:\"Membrane [KW-0472]\" NOT keyword:\"Cytolysis [KW-0204]\" NOT keyword:\"Cell wall biogenesis\/degradation [KW-0961]\" NOT keyword:\"Amphibian defense peptide [KW-0878]\" NOT keyword:\"Secreted [KW-0964]\" NOT keyword:\"Defensin [KW-0211]\" NOT keyword:\"Antiviral protein [KW-0930]\" AND reviewed:yes}.
From this, we picked 54 random sequences to act as the inactive dataset for simulation.\n\n\\section{Model and Methods}\n\\subsection{Autoencoder Details}\n\\label{sec:model}\n\n\n\n\\subsubsection{Autoencoder Architecture}\n\\label{si:models}\nWe investigate two different types of autoencoding approaches in this study: $\\beta$-VAE \\cite{bowman2015generating} and WAE \\cite{tolstikhin2017wasserstein}.\nFor each of these AEs, the default architecture involves a bidirectional-GRU encoder and a GRU decoder.\nFor the encoder, we used a hidden state size of 80.\nThe latent capacity was set at $D=100$.\n\nFor the VAE, we annealed the KL term weight $\\beta$ from 0 to 0.03 by default.\nWe also present an unmodified VAE with $\\beta$ annealed from 0 to 1.0.\nFor the WAE, we found the random-features approximation of the Gaussian kernel with kernel bandwidth $\\sigma=7$ to perform the best. For comparison's sake, we have included variations with $\\sigma$ values of 3 and 15 too.\nThe inclusion of ${\\mathbf{z}}$-space noise logvar regularization, $R(logVar)$, helped avoid collapse to a deterministic encoder.\nAmong the different regularization weights tried, $1e-3$ had the most desirable behavior on the metrics (see Section \\ref{si:vaewaeaae}).\n\n\\subsubsection{Autoencoder Training}\n\\label{method:training}\n\nWhen training the AE model, we sub-selected sequences with length $\\leq$ the hyperparameter\n$max\\_seq\\_length$.\nFurthermore, both AMP-labeled and unlabeled data were split into train, held-out, and test sets.\nThis reduces the available sequences for training; \\textit{e.g.,} for the unlabeled set the number of available training sequences is 93k for $max\\_seq\\_length$=25, whereas the number of AMP-labeled sequences is 5000. \nThe sequences with reported activities were considered as confirmed labeled data, and those with confirmed labels were up-sampled at a 1:20 ratio.
\nSuch upsampling of peptides with a specific attribute label helps mitigate possible domain shift due to unlabeled peptides coming from a different distribution.\n In any case, the\n benefit of transfer learning from unlabeled to labeled data likely outweighs the effects of domain shift.\n\n\nTo obtain the optimal hyperparameter setting for autoencoder training, we adopted an automated hyperparameter optimization procedure. Specifically, we performed a grid search in the hyperparameter space and tracked an $L_2$ distance between the reconstructions of held-out data and the training sequences, estimated using a weighted combination of BLEU, PPL, ${\\mathbf{z}}$-classifier accuracy, and amino acid composition-based heuristics.\nThe best hyperparameter configuration obtained using this process was the following: learning rate = 0.001, number of iterations = 200000, minibatch size = 32, word dropout = 0.3. A beam search decoder was used with a beam size of 5. \n\n\n\n\n\\subsubsection{Autoencoder Evaluation}\nEvaluation of generative models is notoriously difficult \\cite{theis2015note}.\nIn the variational auto-encoder family, two competing objective terms are minimized: reconstruction of the input and a form of regularization in the latent space, which form a fundamental trade-off \\cite{alemi2017fixing}.\nSince we want a meaningful and consistent latent space, models that do not compromise the reconstruction quality to achieve lower constraint loss are preferred.\nWe propose an evaluation protocol using four metrics to judge the quality of both heldout reconstructions and prior samples \\cite{sercu2019interactive}.
The metrics are\n\\begin{enumerate}[(i)]\n \\item The objective terms, evaluated on heldout data: reconstruction log likelihood $-\\log p_\\theta({\\mathbf{x}}|{\\mathbf{z}})$ and $D(q^h_\\phi({\\mathbf{z}}) \\,\\|\\, p({\\mathbf{z}}))$, where $q^h_\\phi({\\mathbf{z}}) = \\frac{1}{N_{hld}}\\sum_{{\\mathbf{x}}^i \\sim \\text{hld}} q_\\phi({\\mathbf{z}}|{\\mathbf{x}}^i)$ is the average over heldout encodings.\n \\item Encoder variance $\\log(\\sigma_j^2({\\mathbf{x}}^i))$ averaged over heldout samples, in $L^2$ over components $j$. In order to achieve a meaningful latent space, we needed to regularize the encoder variance so that it does not become vanishingly small, i.e., so that the encoder does not become deterministic \\cite{rubenstein2018latent}. Large negative values indicate that the encoder has collapsed to a deterministic one.\n \\item Reconstruction BLEU score on held-out samples.\n \\item Perplexity (PPL) evaluated by an external language model, for samples from prior $p({\\mathbf{z}})$ and heldout encoding $q_\\phi({\\mathbf{z}}|{\\mathbf{x}})$.\n\\end{enumerate}\nNote that (iii) and (iv) involve sampling the decoder (we use beam search with beam 5), which will therefore also take into account any exposure bias \\cite{ranzato2015sequence,bengio2015scheduled}.\nWe propose to evaluate peptide generation using the perplexity under an independently trained language model (iv), which is a reasonable heuristic \\cite{zhao2017adversarially} if we assume the independent language model captures the distribution $p({\\mathbf{x}})$ well. The perplexity of a language model is the inverse probability of the test set, normalized by the number of words.\nA lower perplexity indicates a better model.\n\n\\textbf{External Language Model}\n\nWe trained an external language model on both labeled and unlabeled sequences to determine the perplexity of the generated sequences.
Specifically, we used a character-based LSTM language model (LM), built using the LSTM and QRNN Language Model Toolkit \\cite{meritylm} and trained on both AMP-labeled and unlabeled data. We trained our language model with a total of $92624$ sequences, with a maximum sequence length of $25$. Our best model achieves a test perplexity of $13.26$. \n\nTo further validate the performance of our language model, we tested it on synthetic sequences of amino acids drawn at random from the vocabulary, with lengths ranging between $10$ and $25$. As expected, we found it to have a high perplexity of $27.29$. Also, when evaluating it on repeated amino acids (sequences consisting of a single vocabulary token repeated), with lengths ranging between 10 and 25, we found the perplexity to be very low ($3.48$). Upon further investigation, we observed that the training data contain amino-acid sequences with repeated sub-sequences, which the language model, by nature, fails to penalize heavily. Due to this behavior, we can conclude that the perplexity of a collapsed peptide model will be close to $3.48$ (as seen in the case of $\\beta$-VAE).
We have summarized these observations in Table \\ref{tab:model_results}.\n\n\n\\subsubsection{Autoencoder Variants: Comparison}\n\\label{si:vaewaeaae}\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{l|l|rrrr}\n\\toprule\n\\multicolumn{2}{c}{Architecture} & PPL & BLEU & Recon & Encoder variance \\\\ \\hline\n\\multirow{6}{*}{Reference} & Repeated Sequences & 3.48 & & & \\\\ \n & Random & 27.29 & & & \\\\ \n & AMP Labeled & 5.580 & & & \\\\ \n & Labeled + Unlabeled & 13.26 & & & \\\\\n & Peptide LSTM, \\cite{muller2018recurrent} (Labeled) & 22.40 & & & \\\\\n & Peptide LSTM, \\cite{muller2018recurrent} (Unlabeled) & 20.26 & & & \\\\\\hline\n \\multicolumn{2}{l}{$\\beta$-VAE (1.0)} & 3.820 & 4e-03 & 2.768 & -2.5e-4 \\\\ \n\\multicolumn{2}{l}{$\\beta$-VAE (0.03)} & 15.34 & 0.475 & 1.075 & -0.620 \\\\\n\\cmidrule{1-6} \\multicolumn{2}{l}{WAE, $\\sigma=3$, $R(logVar)=1e-3$} & 13.25 & 0.853 & 0.257 & -3.078 \\\\\n\\multicolumn{2}{l}{WAE, $\\sigma=15$, $R(logVar)=1e-3$} & 12.98 & 0.909 & 0.224 & -4.180 \\\\\n\\multicolumn{2}{l}{WAE, $\\sigma=7$, $R(logVar) =0$} & 12.77 & 0.881 & 0.214 & -13.81 \\\\\n\\multicolumn{2}{l}{WAE, $\\sigma=7$, $R(logVar)=1e-2$} & 15.16 & 0.665 & 0.685 & -0.3962 \\\\\n \\multicolumn{2}{l}{WAE, $\\sigma=7$, $R(logVar)=1e-3$} & 12.87 & 0.892 & 0.216 & -4.141 \\\\ \n \\cmidrule{1-6} \\multicolumn{2}{l}{WAE (trained on AMP-labeled)} &16.12 & 0.510 & 0.354 & -4.316 \\\\ \n \\bottomrule\n\\end{tabular}\n\\caption{Performance of various autoencoder schemes against different baselines. }\n\\label{tab:model_results}\n\\end{table}\n\nThe evaluated metrics on held-out samples for different autoencoder models trained on either the labeled or the full dataset are also reported in Table \\ref{tab:model_results}.
We observed that the reconstruction of the WAE is more accurate than that of the $\\beta$-VAE: we achieve a reconstruction error of $0.2163$ and a BLEU score of $0.892$ on a held-out set using the WAE with $\\sigma$ of 7 and $R(logVar)$ of 1e-3 (values are $1.079$ and $0.493$ for $\\beta$-VAE with $\\beta$ set to $0.03$). The advantage of using abundant unlabeled data compared to only the labeled ones for representation learning is evident, as the language model perplexity (PPL; captures sequence diversity) for the WAE model trained on the full dataset is closer to the test perplexity ($13.26$), and the BLEU score is also higher when compared to the WAE model trained only on AMP-labeled sequences. For reference, the PPL of random peptide sequences is $> 25$, and that of repeated sequences is $3.48$. We have also compared the perplexity of the sequences generated using an LSTM model from \\cite{muller2018recurrent} and have reported the perplexity in Table \\ref{tab:model_results}. Results show that the perplexity of the generated samples using the LSTM model is around 22.4, which is close to the random sequence perplexity. In contrast, the WAE (as well as the VAE with appropriate KL annealing) achieves a much lower perplexity, close to the perplexity of real peptide sequences.\nAttribute-controlled sampling using a simple LSTM model is non-trivial.\n\nThe ${\\mathbf{z}}$-classifier trained on the latent space of the best WAE model achieved test accuracies of 87.4, 68.9, 77.4, 98.3, and 76.3\\%, respectively, for detecting peptides with AMP\/non-AMP, toxic\/non-toxic, anticancer, antihypertensive, and hormone annotations.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Post-Generation Screening}\n\n\\subsubsection{Sequence-Level Attribute Classifiers}\n\\label{si:xclf}\nFor post-generation screening, we used four sets of monolithic classifiers that are trained directly on peptide sequences.
Each of these binary sequence-level classifiers was aimed at capturing one of the following four properties of peptide sequences, namely, \n\\begin{itemize}\n \\item AMP\/Non-AMP: Is the sequence an AMP or not?\n \\item Toxic\/Non-Toxic: Is the sequence toxic or not?\n \\item Broad\/Narrow: Does the sequence show antibacterial activity on both Gram+ and Gram- strains or not?\n \\item Structure\/No-Structure: Does the sequence have secondary structure or not? \n\\end{itemize}\n\nFor each attribute, we trained a bidirectional LSTM-based classifier on the labeled dataset.\nWe used a hidden layer size of 100 and a dropout of 0.3. The size of the dataset used, as well as the accuracies, are reported in Table \\ref{tab:classifier_results}.\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{l|lcc|cc|c}\n\\toprule\n\\multicolumn{1}{c|}{\\textbf{Attribute}} & \\multicolumn{3}{c|}{\\textbf{Data-Split}} & \\multicolumn{2}{|c|}{\\textbf{Accuracy (\\%)}} & \\textbf{Screening} \\\\ \n\\cmidrule{2-4} \\cmidrule{5-6}\n{} & \\textit{Train} & \\textit{Valid} & \\textit{Test} & \\textit{Majority Class} & \\textit{Test} & \\textbf{Threshold} \\\\ \\hline\n\\big\\{AMP, Non-AMP\\big\\} & 6489 & 811 & 812 & 68.9 & 88.0 & 7.944 \\\\ \n\\big\\{Toxic, Non-Toxic\\big\\} & 8153 & 1019 & 1020 & 82.73 & 93.7 & -1.573 \\\\\n\\big\\{Broad, Narrow\\big\\} & 2031 & 254 & 255 & 51.37 & 76.0 & -7.323 \\\\\n\\big\\{Structure, No-Structure\\big\\} & 2644 & 331 & 331 & 64.65 & 95.1 & -5.382 \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Performance of classifiers based on different attributes.}\n\\label{tab:classifier_results}\n\\end{table}\n\n\n\\begin{table}[ht]\n \\centering\n \n \\begin{tabular}{p{4cm}|p{2cm}|p{3.5cm}|p{4cm}}\n \\toprule\n \\textbf{Classifiers} & Reported Accuracy & YI12 & FK13 \\\\ \n \\hline \n AMP Classifier (ours) & 88.00 & AMP & AMP \\\\\n AMP Scanner v2 \\cite{veltri2018deep} & 91.00 & Non-AMP & AMP \\\\\n iAMPpred \\cite{meher2017predicting} & 66.80 & AMP & AMP \\\\\n
DBAASP-SP \\cite{vishnepolsky2018predictive} & 79.00 & AMP & Non-AMP \\\\\n iAMP-2L \\cite{xiao2013iamp} & 92.23 & Non-AMP & AMP \\\\\n CAMP-RF \\cite{thomas2010camp} & 87.57 & AMP & AMP \\\\\n Witten \\textit{E. coli} \\cite{witten2019deep} & 94.30 & AMP & Borderline \\footnotemark \\\\ \n Witten \\textit{S. aureus} \\cite{witten2019deep} & 94.30 & AMP & AMP \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{Reported accuracy and prediction for YI12 and FK13 using different AMP classifiers: AMP Scanner v2 \\cite{veltri2018deep}, iAMPpred \\cite{meher2017predicting}, DBAASP-SP \\cite{vishnepolsky2018predictive}, iAMP-2L \\cite{xiao2013iamp}, CAMP-RF \\cite{thomas2010camp}, Witten (CNN-70\\%) \\cite{witten2019deep}, and our LSTM-based sequence-level classifier. \n}\n \\label{tab:AMP-compare}\n\\end{table}\n\n\\footnotetext{The Witten model predicts logMIC activity values. We followed their criteria for classification: if logMIC is within -1 to 3.5, the peptide is active (AMP); if $>3.9$, inactive (Non-AMP); and between 3.5 and 3.9 is considered borderline. LogMIC values: YI12 (\\textit{E. coli})=1.648, YI12 (\\textit{S. aureus})=3.607, FK13 (\\textit{E. coli})=1.389, FK13 (\\textit{S. aureus})=1.634.}\n\nBased on the distribution of the scores (classification probabilities\/logits), we determined the threshold by considering the $50^{th}$ percentile (median) of the scores (reported in the last column of Table \\ref{tab:classifier_results}). Similarly, we selected a PPL threshold of $16.04$, which is the $25^{th}$ percentile of the PPL distribution of samples generated from the prior distribution of the best WAE model and is also close to the perplexity of our trained language model on test data. \n\n\n\n\n\\subsubsection{CGMD Simulations: Contact Variance as a Metric for Classifying Membrane Binding}\n\\label{method:sim}\nGiven a peptide sequence as an input, PeptideBuilder~\\cite{Tien2013} is used to prepare a PDB file of the all-atom representation of the peptide.
This is prepared either as an alpha helix (with dihedral angles $\\phi=-57^\\circ$, $\\psi=-47^\\circ$) or as a random coil, with $\\phi$ and $\\psi$ dihedral angles taking random values between $-50^\\circ$ and $50^\\circ$. \n\nThis initial structure is then passed as an input to \\texttt{martinize.py}~\\cite{deJong2012}, which coarse-grains the system. The resulting files are passed into \\texttt{insane.py}~\\cite{Wassenaar2015} to create the peptide-membrane system. The solvent is a 90:10 ratio of water to antifreeze particles, with the membrane being a 3:1 mixture of POPC to POPG. The system measures 15 nm $\\times$ 15 nm $\\times$ 30 nm, with the membrane perpendicular to the longest direction. Ions are added to neutralize the system.\n\nThis is a minimal model of a bacterial membrane that serves as a high-throughput filter. While this model does not replicate the complex physics of the peptide-membrane interactions in exact detail (e.g., differences in membrane composition between Gram-positive and Gram-negative bacteria), it allows us to prioritise the peptides that show the strongest interaction with a membrane for experimental characterisation.\n\nFor the CGMD simulations, we used the Martini forcefield~\\cite{Marrink2007}. As Martini is optimized for predicting the interactions between proteins and membranes while being computationally efficient, it is well suited to the task of quick but physically-inspired filtering of peptide sequences.\n\nAfter building, the system is minimized for 50,000 steps using Gromacs 2019.1~\\cite{Berendsen1995, Abraham2015} and the 2.0 version of the Martini forcefield \\cite{Marrink2007}. After minimization, the production run is carried out for 1 $\\mu$s at a 20 fs timestep. Temperature is kept constant at 310 K using Stochastic Velocity Rescaling \\cite{Bussi2007} applied independently to the protein, lipid, and solvent groups.
The pressure is kept constant at 1 atmosphere using a Parrinello-Rahman barostat~\\cite{Parrinello1981, Nos1983} applied independently to the same groups as the thermostat. \n\nAfter 1 $\\mu$s of sampling, we estimated the number of peptide-membrane contacts using Tcl scripting and in-house Python scripts.\nThe number of contacts between positive residues and the lipid membranes is defined as the number of atoms belonging to a lipid at a distance less than 7.5 \\AA{} from a positive residue.\n\nFor the control simulations, three datasets consisting of reported high-potency AMP, low-potency AMP, and non-AMP sequences, discussed in Section \\ref{si:datadetails}, were used. We performed a set of 130 control simulations. We found that the variance of the number of contacts (cutoff 7.5 \\AA{}) between positive residues and Martini beads of the membrane lipids is predictive of antimicrobial activity. Specifically, the contact variance distinguishes between high-potency and non-antimicrobial sequences with a sensitivity of 88\\% and a specificity of 63\\%. \nTo screen, we used a cutoff value of 2 beads for the contact variance. We carried out a set of simulations for the 163 AMP-positive and 179 AMP-negative generated sequences. \nWe further restricted the analysis to sequences that bind in less than $500$ ns during the $1\\,\\mu$s long simulation, so that the contact variance is calculated over at least half of the total simulation time. Only sequences that formed at least $5$ contacts (averaged over the duration of the simulation) were considered. \n\n\n\n\\subsection{All-Atom Simulations}\n\\label{method:aa}\n\nWe used the CHARMM36m \\cite{Huang2016} forcefield to simulate the binding of four copies of YI12 and FK13 to a model membrane. Phi and Psi angles in the initial peptide structure were set to the values predicted by a recent deep learning model \\cite{qin2020artificial}.
A 3:1 DLPC:DLPG bilayer, whose lipids have shorter tails, was used to speed up the simulation, alongside a smaller water box (98 \\AA{}) than in the Martini simulations, to investigate the short-term effects of peptide-membrane interactions. A 160 ns long trajectory was run for the FK13 system. The length of the YI12 simulation was 200 ns. The number of peptide-membrane and peptide-peptide contacts (using a threshold of 7.5 \\AA{} and ignoring hydrogen atoms) was found to converge in less time than the maximum simulation length for both systems. \n\nThe bilayer is prepared using CHARMM-GUI~\\cite{Jo2009}, and the peptide sequence is prepared using PeptideBuilder~\\cite{Tien2013}. Solvation and assembly are performed using VMD 1.9.3 \\cite{HUMP96}. The system is simulated using NAMD 2.13 \\cite{Phillips2005}. Temperature is kept constant using a Langevin thermostat, and pressure using a Nos\\'e-Hoover Langevin piston barostat. The Particle-Mesh Ewald method was used for long-range electrostatics. All simulations used a time step of 2 fs.\n\n\\subsection{Peptide Sequence Analysis}\n\\label{method:analysis}\nPhysicochemical properties like aromaticity, Eisenberg hydrophobicity, charge, charge density, aliphatic index, hydrophobic moment, hydrophobic ratio, isoelectric point, and instability index were estimated using the GlobalAnalysis method in modLAMP \\cite{muller2017modlamp}. The ProtParam tool from Expasy (https:\/\/web.expasy.org\/protparam) was used to estimate the grand average of hydropathicity (GRAVY) score. \n\nPairwise sequence similarity was estimated using a global alignment method with the PAM30 matrix \\cite{yu2003compositional}, a gap open penalty of -9, and a gap extension penalty of -1, via the Pairwise2 function of the Biopython package \\cite{cock2009biopython}. Higher positive values indicate better similarity.
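As a toy illustration of the global alignment scoring used here, the following sketch computes a Needleman-Wunsch score in pure Python; it is a deliberate simplification of the actual setup (it uses a match\/mismatch scheme and a linear gap penalty instead of the PAM30 matrix with affine -9\/-1 gap penalties, and the example sequences are illustrative only):

```python
def nw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Global (Needleman-Wunsch) alignment score via dynamic programming.
    Simplification: toy match/mismatch scores and a linear gap penalty,
    rather than the PAM30 / affine-gap scheme described in the text."""
    prev = [j * gap for j in range(len(b) + 1)]  # first DP row: all gaps
    for i in range(1, len(a) + 1):
        cur = [i * gap] + [0] * len(b)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(prev[j - 1] + s,   # align a[i-1] with b[j-1]
                         prev[j] + gap,     # gap in b
                         cur[j - 1] + gap)  # gap in a
        prev = cur
    return prev[-1]

# Two illustrative peptide fragments differing by one residue (V vs I):
score = nw_score("GLFDIVKK", "GLFDIIKK")
```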
To check the correspondence between sequence similarity and Euclidean distance in ${\\mathbf{z}}$-space, a random set of sequence encodings was first selected, and then the sequence similarity and ${\\mathbf{z}}$-distance to their close latent-space neighbors were estimated.\nSequence similarity with respect to a sequence database was estimated using the ``blastp-short'' command of the NCBI BLAST sequence similarity search tool \\cite{madden2013blast, altschul1990basic} to query the generated short sequences, using a word size of 2, the PAM30 matrix \\cite{yu2003compositional}, a gap open penalty of -9, a gap extension penalty of -1, a threshold of 16, \\texttt{comp\\_based\\_stats} set to 0, and a window size of 15. Alignment score (Bit score), Expect value (E-value), percentage of alignment coverage, percentage of identity, percentage of positive matches or similarity, and percentage of alignment gaps were used for analyzing sequence novelty. \nThe E-value is a measure of the probability that a similarity score this high occurs by chance when searching a database of a particular size. E-values decrease exponentially as the score of the match increases.\nFor the search against patented sequences, we used the PATSEQ database \\cite{Patseq}.\n\n\\subsection{Wet Lab Experiments}\n\\label{sec:exptdetails}\n\n\n\\subsubsection{MIC Measurement}\nAll of the peptides were amidated at their C-terminus to remove the negative charge of the C-terminal carboxyl group. \nThe antimicrobial activity of the best AMP hits was evaluated against a broad spectrum of bacteria for minimum inhibitory concentration (MIC), which included Gram-positive \\textit{Staphylococcus aureus} (ATCC 29737), Gram-negative \\textit{Escherichia coli} (ATCC 25922), \\textit{Pseudomonas aeruginosa} (ATCC 9027), and the multi-drug resistant \\textit{K. pneumoniae} (ATCC 700603), which produces beta-lactamase SHV-18.
The broth microdilution method was used to measure MIC values of the AMPs; the detailed protocol was reported previously \\cite{chin2018macromolecular,ng2013synergistic}.\n\\subsubsection{Hemolytic Activity}\nThe selectivity of AMPs towards bacteria over mammalian cells was studied using rat red blood cells (rRBCs), which were obtained from the Animal Handling Unit of the Biomedical Research Center, Singapore. Negative control: untreated rRBC suspension in phosphate-buffered saline (PBS); positive control: rRBC suspension treated with 0.1\\% Triton X. The percentage of hemolysis of rRBCs was obtained using the following formula:\n\\begin{equation}\n \\textit{Hemolysis (\\%)}=\\frac{O.D._{576 nm} \\textit{ of treated samples } - O.D._{576 nm} \\textit{ of negative control}}{O.D._{576 nm} \\textit{ of positive control } - O.D._{576 nm} \\textit{ of negative control}} \\times 100\n\\end{equation}\n\n\\subsubsection{Acute in Vivo Toxicity Analysis}\nThe animal study protocols were approved by the Institutional Animal Care and Use Committee of the Biological Resource Center, Agency for Science, Technology and Research (A*STAR), Singapore. LD50 values of the AMPs, i.e., the dose required to kill 50\\% of the mice, were determined using a previously reported protocol \\cite{liu2017highly}. Specifically, female Balb\/c mice (8 weeks old, 18-22 g) were employed. AMPs were dissolved in saline and administered to mice by intraperitoneal (i.p.) injection at various doses. Mortality was monitored for 14 days post-AMP administration, and LD50 was estimated using the maximum likelihood method. \n\\subsubsection{CD Spectroscopy} The peptides were dissolved at 0.5 mg\/mL in either deionized water or deionized water containing 25 mM SDS surfactant; SDS forms anionic micelles in aqueous solution, which mimic the bacterial membrane. The CD spectra were measured at room temperature using a Jasco J-810 CD spectropolarimeter and a quartz cuvette with a 1 mm path length. 
The spectra were acquired by scanning from 190 to 260 nm at 10 nm\/min after subtraction of the solvent spectrum.\n\n\\subsection{Mechanism Study Using Confocal Microscopy}\nThe mechanism study was performed on an FV3000 Olympus confocal laser scanning microscope. \\textit{E. coli} was grown as in the MIC measurement. Briefly, 500 $\\mu$l of $10^8$ CFU\/ml of bacteria was placed in a centrifuge tube, and 500 $\\mu$l of peptide or polymyxin B at 4xMIC concentration was added. The mixture was incubated at 37 $^{\\circ}$C and shaken at 100 rpm for 2 h. The sample was centrifuged at 4000 rpm for 10 min, the supernatant was discarded, and 1 ml of fresh PBS was added. After washing three times, the pellet was re-suspended in 500 $\\mu$l of glutaraldehyde (2.5\\% v\/v) for 30 min with occasional mixing. The sample was again washed three times, re-suspended in 100 $\\mu$l of PBS, and applied onto a poly-L-lysine coated slide. The bacteria were left to attach for an hour before unattached bacteria were washed away with PBS. The slides were left to air dry before imaging with the FV3000 confocal microscope.\n\n\\subsection{Resistance Study}\nDrug resistance in \\textit{E. coli} was induced by repeated treatment with peptides or imipenem at sub-MIC concentration (1\/2 MIC). The MIC experiment was performed as stated earlier using the microdilution method. The next generation of \\textit{E. coli} was initiated by growing the bacterial suspension from the sub-MIC concentration and performing another MIC assay. After 18 h of inoculation, the bacteria grown at the sub-MIC concentration of the tested peptides\/antibiotics were assayed for MIC. This was continued for 25 generations. 
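The bookkeeping for this serial-passage assay reduces to dividing each generation's MIC by the generation-1 MIC to obtain a fold-change trajectory; a minimal sketch (the MIC values below are hypothetical, for illustration only):

```python
def mic_fold_change(mics):
    """Normalize each generation's MIC by the generation-1 MIC.

    `mics` holds one MIC value per passage generation, in order.
    """
    if not mics:
        raise ValueError("need at least one generation")
    baseline = mics[0]
    return [m / baseline for m in mics]

# Hypothetical MICs (ug/mL) over five passage generations:
trajectory = mic_fold_change([31.25, 31.25, 62.5, 62.5, 125.0])
# trajectory == [1.0, 1.0, 2.0, 2.0, 4.0]
```

A flat trajectory (all values near 1) indicates that the bacteria did not develop resistance to the treatment over the passages.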
The change in MIC was documented as the MIC at generation X normalized by the MIC at generation 1.\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\toprule\n \\textbf{E-value} & $\\leq 0.001$ & $\\leq 0.01$ & $\\leq 0.1$ & $\\leq 1$ & $\\leq 10$ & $>10$ \\\\ \\hline \n Labeled & 9.36 & 3.42 & 9.70 & 29.59 & 29.88 & 18.05 \\\\\n Unlabeled & 5.16 & 4.05 & 9.07 & 30.97 & 36.65 & 14.08 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Percentage of CLaSS-generated AMP sequences in different categories of Expect value (E-value). The E-value for the match with the highest score was considered, as obtained by performing a BLAST similarity search against AMP-labeled training sequences (top row) and unlabeled training sequences (bottom row). \n}\n \\label{tab:Evalue}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n\n\\subcaptionbox{Fraction of unique $k$-mers present in different datasets, $k$ = 3--6. The mean and standard errors were estimated on three different sets, each consisting of 3000 randomly chosen samples.}{\\begin{tabular}{c|c|c|c}\n\\toprule\n$k$ & Generated AMP & AMP-labeled Training & Unlabeled Training \\\\ \\hline\n3 & $0.299 \\pm 0.006$ & $0.223 \\pm 0.006$ & $0.110 \\pm 0.005$ \\\\\n4 & $0.771 \\pm 0.004$ & $0.607 \\pm 0.002$ & $0.777 \\pm 0.001$ \\\\\n5 & $0.947 \\pm 0.002$ & $0.706 \\pm 0.002$ & $0.965 \\pm 0.000$ \\\\\n6 & $0.978 \\pm 0.001$ & $0.742 \\pm 0.002$ & $0.983 \\pm 0.001$ \\\\\n\\bottomrule\n\\end{tabular}}%\n\\centering\n\\hfill\n\\subcaptionbox{Top 3 $k$-mers ($k$ = 3 and 4) and their corresponding frequencies in generated AMPs, training AMPs, and unlabeled training sequences. 
The estimated standard deviation values were close to zero.}{\\begin{tabular}{l|ll|ll|ll}\n\\toprule\n\\multirow{2}{*}{$k$} & \\multicolumn{2}{|c|}{Generated AMP} & \\multicolumn{2}{|c|}{AMP-labeled Training} & \\multicolumn{2}{|c}{Unlabeled Training} \\\\\n\\cmidrule(lr){2-3} \\cmidrule(lr){4-5} \\cmidrule(lr){6-7}\n & top-3 & Frequency & top-3 & Frequency & top-3 & Frequency \\\\ \\hline\n\\multirow{3}{*}{3} & KKK & $0.011$ & KKK & $0.006$ & LLL & $0.002$ \\\\\n & LKK & $0.007$ & KKL & $0.004$ & LAA & $0.001$ \\\\\n & KLK & $0.007$ & GLL & $0.004$ & RRR & $0.001$ \\\\\n\\cmidrule{1-7} \n\\multirow{3}{*}{4} & KKKK & $0.005$ & KKKK & $0.003$ & PLDL & $0.001$ \\\\\n & KLKK & $0.004$ & KKLL & $0.002$ & LDLA & $0.001$ \\\\\n & KKLK & $0.003$ & KLLK & $0.002$ & NFPL & $0.001$ \\\\\n\\bottomrule\n\\end{tabular}}%\n\n\\hfill\n\\subcaptionbox{Physicochemical properties, such as charge, charge density, aliphatic index, aromaticity, hydrophobicity, hydrophobic moment, hydrophobic ratio, and instability index, were estimated on unlabeled training, AMP-labeled training, and CLaSS-generated AMP sequences. Mean and standard deviation were estimated on three different sets, each consisting of 3000 randomly chosen samples. 
}{\\begin{tabular}{l|rrr}\n\\toprule\n{} & Generated AMP & AMP-labeled Training & Unlabeled Training \\\\ \\hline\nCharge & $2.695 \\pm 0.039$ & $3.074 \\pm 0.238$ & $1.172 \\pm 0.218$\\\\\nCharge Density & $0.002 \\pm 0.000$ & $0.002 \\pm 0.000$ & $0.001 \\pm 0.000$\\\\\nAliphatic Index & $107.814 \\pm 0.649$ & $101.232 \\pm 1.550$ & $82.088 \\pm 8.440$ \\\\\nAromaticity & $0.095 \\pm 0.002$ & $0.102 \\pm 0.002$ & $0.082 \\pm 0.003$ \\\\\nHydrophobicity & $0.048 \\pm 0.007$ & $0.068 \\pm 0.002$ & $0.052 \\pm 0.032$\\\\\nHydrophobic Moment & $0.331 \\pm 0.000$ & $0.335 \\pm 0.019$ & $0.222 \\pm 0.003$ \\\\\nHydrophobic Ratio & $0.446 \\pm 0.001$ & $0.431 \\pm 0.010$ & $0.425 \\pm 0.013$ \\\\\n \\bottomrule\n\\end{tabular}\n}%\n\\caption{Comparison of CLaSS-generated AMPs with AMP-labeled and unlabeled peptides.}\n\\label{tab:combined-comp}\n\\end{figure}\n\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\n\\begin{tabular}{|l|c|c|c|c|}\n\\hline\n\n\\multicolumn{1}{|c|}{\\textbf{Sequence}} & \\textbf{Positive Residue} & \\textbf{Binding time (ns)} & \\textbf{Mean} & \\textbf{Variance } \\\\ \\hline\nYLRLIRYMAKMI & 3 & 210 & 6.45 & 1.27 \\\\ \\hline\nFPLTWLKWWKWKK & 4 & 90 & 5.90 & 1.39 \\\\ \\hline\n\nHILRMRIRQMMT & 3 & 17 & 7.84 & 1.44 \\\\ \\hline\nILLHAILGVRKKL & 3 & 105 & 7.16 & 1.19 \\\\ \\hline\nYRAAMLRRQYMMT & 3 & 19 & 8.79 & 1.25 \\\\ \\hline\nHIRLMRIRQMMT & 3 & 493 & 8.38 & 1.50 \\\\ \\hline\nHIRAMRIRAQMMT & 3 & 39 & 7.20 & 1.39 \\\\ \\hline\nKTLAQLSAGVKRWH & 3 & 177 & 7.62 & 1.46 \\\\ \\hline\n\nHILRMRIRQGMMT & 3 & 62 & 8.37 & 1.53 \\\\ \\hline\nHRAIMLRIRQMMT & 3 & 297 & 7.46 & 1.35 \\\\ \\hline\nEYLIEVRESAKMTQ & 2 & 150 & 6.65 & 1.79 \\\\ \\hline\nGLITMLKVGLAKVQ & 2 & 341 & 8.34 & 1.58 \\\\ \\hline\nYQLLRIMRINIA & 2 & 239 & 6.29 & 1.71 \\\\ \\hline\nVRWIEYWREKWRT & 4 & 125 & 6.41 & 1.28 \\\\ \\hline\nLIQVAPLGRLLKRR & 4 & 37 & 6.52 & 1.24 \\\\ \\hline\nYQLRLIMKYAI & 2 & 192 & 7.75 & 1.86 \\\\ \\hline\nHRALMRIRQCMT & 3 & 80 & 9.15 & 1.27 \\\\ 
\\hline\nGWLPTEKWRKLC & 3 & 227 & 6.11 & 1.63 \\\\ \\hline\nYQLRLMRIMSRI & 3 & 349 & 8.28 & 1.80 \\\\ \\hline\n\nLRPAFKVSK & 3 & 151 & 7.73 & 1.85 \\\\\n\\hline\n\\end{tabular}\n\\caption{Physics-derived features (the mean and variance of the number of contacts between positively charged amino acids and membrane beads), which in this study were found to be associated with antimicrobial function, extracted from CGMD simulations of peptide-membrane interactions for the top 20\nAI-designed AMP sequences.}\n\\label{tab:20P-sim}\n\\end{table}\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|l|c|c|r|c|}\n\\toprule\n\\multicolumn{1}{|c|}{\\textbf{Sequence}} & \\textbf{Length} & \\textbf{Charge} & \\textbf{H} & \\textbf{$\\mu$H} \\\\ \\hline\nYLRLIRYMAKMI & 12 & 3.99 & 0.08 & 0.79 \\\\ \\hline\nFPLTWLKWWKWKK & 13 & 5.00 & 0.05 & 0.20 \\\\ \\hline\nHILRMRIRQMMT & 12 & 4.10 & -0.25 & 0.36 \\\\ \\hline\nILLHAILGVRKKL & 13 & 4.09 & 0.27 & 0.33 \\\\ \\hline\nYRAAMLRRQYMMT & 13 & 3.99 & -0.28 & 0.06 \\\\ \\hline\nHIRLMRIRQMMT & 12 & 4.10 & -0.25 & 0.16 \\\\ \\hline\nHIRAMRIRAQMMT & 13 & 4.10 & -0.22 & 0.24 \\\\ \\hline\nKTLAQLSAGVKRWH & 14 & 4.09 & -0.08 & 0.49 \\\\ \\hline\n\nHILRMRIRQGMMT & 13 & 4.10 & -0.19 & 0.27 \\\\ \\hline\nHRAIMLRIRQMMT & 13 & 4.10 & -0.18 & 0.41 \\\\ \\hline\nEYLIEVRESAKMTQ & 14 & 0.00 & -0.16 & 0.26 \\\\ \\hline\nGLITMLKVGLAKVQ & 14 & 3.00 & 0.37 & 0.28 \\\\ \\hline\nYQLLRIMRINIA & 12 & 3.00 & 0.11 & 0.38 \\\\ \\hline\nVRWIEYWREKWRT & 13 & 3.00 & -0.41 & 0.55 \\\\ \\hline\nLIQVAPLGRLLKRR & 14 & 5.00 & -0.12 & 0.38 \\\\ \\hline\nYQLRLIMKYAI & 11 & 2.99 & 0.18 & 0.40 \\\\ \\hline\nHRALMRIRQCMT & 12 & 4.03 & -0.34 & 0.56 \\\\ \\hline\nGWLPTEKWRKLC & 12 & 2.93 & -0.13 & 0.33 \\\\ \\hline\nYQLRLMRIMSRI & 12 & 4.00 & -0.17 & 0.64 \\\\ \\hline\n\nLRPAFKVSK & 9 & 4.00 & -0.17 & 0.70 \\\\\n\\bottomrule\n\\end{tabular}\n\n\\caption{Physicochemical properties for the top 20 AI-designed AMP 
sequences.}\n\\label{tab:sicgmd}\n\\end{table}\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|l|c|c|}\n\\hline\n\n\\multicolumn{1}{|c|}{\\textbf{Sequence}} & \\textbf{S. aureus ($\\mu$g\/mL)} & \\textbf{E. coli ($\\mu$g\/mL)} \\\\ \\hline\nYLRLIRYMAKMI-CONH2 & 7.8 & 31.25 \\\\ \\hline\nFPLTWLKWWKWKK-CONH2 & 15.6 & 31.25 \\\\ \\hline\nHILRMRIRQMMT-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nILLHAILGVRKKL-CONH2 & 250 & 250 \\\\ \\hline\nYRAAMLRRQYMMT-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nHIRLMRIRQMMT-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nHIRAMRIRAQMMT-CONH2 & $>$1000 & 1000 \\\\ \\hline\nKTLAQLSAGVKRWH-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\n\nHILRMRIRQGMMT-CONH2 & $>$1000 & $>$1000\\\\ \\hline\nHRAIMLRIRQMMT-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nEYLIEVRESAKMTQ-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nGLITMLKVGLAKVQ-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nYQLLRIMRINIA-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nVRWIEYWREKWRT-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\n\nYQLRLIMKYAI-CONH2 & 125 & 125 \\\\ \\hline\nHRALMRIRQCMT-CONH2 & 1000 & 1000 \\\\ \\hline\nGWLPTEKWRKLC-CONH2 & 1000 & $>$1000 \\\\ \\hline\nYQLRLMRIMSRI-CONH2 & 250 & 500\\\\ \\hline\nFFPLPAISTELKRL-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nLIQVAPLGRLLKRR-CONH2 & $>$1000 & 1000 \\\\ \\hline\n\nLRPAFKVSK-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\n\\end{tabular}\n\\caption{Broad-spectrum MIC values of the top 20 AI-designed AMP sequences.}\n\\label{tab:20P_MIC}\n\\end{table}\n\n\n\n\n\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|l|c|c|}\n\\hline\n\\multicolumn{1}{|c|}{\\textbf{Sequence}}& \\textbf{S. 
aureus ($\\mu$g\/mL)} & \\textbf{E. coli ($\\mu$g\/mL)} \\\\ \\hline\nAMLELARIIGRR-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nIPRPGPFVDPRSR-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nVAKVFRAPKVPICP-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nFPSFTFRLRKWKRG-CONH2 & 62.5 & 62.5 \\\\ \\hline\nRPPFGPPFRR-CONH2 & $>$1000& $>$1000 \\\\ \\hline\nWEEMDSLRKWRIWS-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nRRQAQEVRGPRH-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nKKKKPLTPDFVFF-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nTRGPPPTFRAFR-CONH2 & $>$1000 & $>$1000 \\\\ \\hline\nLALHLEALIAGRR-CONH2 & 250 & $>$1000\\\\ \\hline\n\\end{tabular}\n\\caption{Broad-spectrum MIC values of 11 AI-designed non-AMP sequences.}\n\\label{tab:11N-MIC}\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.0\\textwidth]{figures\/figure-sim_2.pdf}\n\\caption{(A) Snapshot from a coarse-grained molecular dynamics simulation of an AMP (in orange) binding to a lipid bilayer (gray). \n(B) Confusion matrix of the simulation-based classifier that uses peptide-membrane contact variance as a feature for detecting AMP sequences.\n}\n\\label{fig:cgmd}\n\\end{figure}\n\n\\renewcommand{\\thesubfigure}{\\Alph{subfigure}}\n\\begin{figure}\n \\captionsetup{singlelinecheck = false, justification=justified}\n \\begin{subfigure}{0.33\\linewidth}\n \\centering\n \\caption{}\n \\includegraphics[scale=0.26]{figures\/AIdesigned_twoAMP_toxicity.png}\n \\label{fig:toxicity}\n \\end{subfigure}\n \\begin{subfigure}{0.32\\linewidth}\n \\centering\n \\caption{}\n \\includegraphics[scale=0.27]{figures\/YI12_CD.png}\n \\label{fig:YI-CD}\n \\end{subfigure}\n \\begin{subfigure}{0.32\\linewidth}\n \\centering\n \\caption{}\n \\includegraphics[scale=0.27]{figures\/FK13-CD.png}\n \\label{fig:FK-CD}\n \\end{subfigure}\n \n \n \\center\n \n\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\toprule\n\\multicolumn{1}{|c|}{\\textbf{Sequence}} & \\textbf{Score} & \\textbf{E-Value} & \\textbf{\\% Coverage} & \\textbf{\\% Identity} & \\textbf{\\% Positive} & 
\\textbf{\\% Gap}\\\\ \\hline\n\nYLRLIRYMAKMI & 21.88 & 1.20 & 66.67 & 75.00 & 75.00 & 75.00 \\\\ \\hline\nFPLTWLKWWKWKK & 33.30 & 4e-05 & 84.61 & 73.00 & 73.00 & 9.00\\\\ \n\\bottomrule\n\\end{tabular}\n\n\\caption{(A) Percentage of hemolysis of rat red blood cells as a function of peptide concentration. (B) and (C) CD spectra of the YI12 and FK13 peptides, respectively, at 0.5 mg\/ml in DI water and in the presence of 20 mM SDS. Both YI12 and FK13 showed a random coil-like structure in the absence of SDS; in the presence of SDS, both sequences formed an $\\alpha$-helical structure (evident from the 208 nm and 222 nm peaks). (D) BLAST search results (alignment score, E-value, percentage of alignment coverage, percentage of identity, percentage of positive matches or similarity, percentage of alignment gaps) against the full training data for YI12 and FK13.}\n\\label{fig:cd_n_tox}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/Confocal.png}\n\\caption{Confocal images of \\textit{E. coli} (A-B) without any treatment, (C-D) treated with 2xMIC FK13, and (E-F) treated with 2xMIC polymyxin B for 2 hours. Red colour denotes propidium iodide. Images A, C, and E are dark-field images; B, D, and F are bright-field images. All scale bars are 10 $\\mu$m.}\n\\label{fig:confocal}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/figure_6_7b.edited.png}\n\\caption{BLAST search results of YI12 and FK13.}\n\\label{fig:blast_result}\n\\end{figure}\n
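As a numeric illustration of the E-value analysis in the sequence-similarity tables above: for bit scores, the Karlin-Altschul statistics give $E = m \cdot n \cdot 2^{-S'}$, so the E-value falls off exponentially with the score. A minimal sketch (the query and database lengths below are hypothetical, for illustration only):

```python
import math

def evalue_from_bitscore(bitscore, query_len, db_len):
    """Karlin-Altschul relation for bit scores: E = m * n * 2**(-S').

    query_len (m) and db_len (n) set the search-space size; BLAST
    actually uses effective (edge-corrected) lengths, which this
    sketch ignores.
    """
    return float(query_len) * float(db_len) * 2.0 ** (-bitscore)

# Hypothetical 13-residue query against a 1,000,000-residue database:
e_low = evalue_from_bitscore(20.0, 13, 1_000_000)
e_high = evalue_from_bitscore(40.0, 13, 1_000_000)
```

Doubling the bit score from 20 to 40 shrinks the E-value by a factor of $2^{20}$, which is why a score like 33.30 for FK13 yields an E-value of 4e-05 while the lower-scoring YI12 match sits near E = 1.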