diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfayy" "b/data_all_eng_slimpj/shuffled/split2/finalzzfayy" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfayy" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nWhen a liquid is cooled below its freezing point it is supposed to freeze.\nUsually, impurities or the solid boundaries of the liquid provide preferential\nsites for the formation of the solid phase.\nHowever, even in the absence of impurities, small nuclei of the new phase may\nbe formed within the bulk metastable liquid.\nThis mechanism of formation of the solid phase is called homogeneous\nnucleation.\\cite{debenedettibook,kashchievbook}\nHomogeneous nucleation is an activated process since the formation of a\ncritical nucleus requires the surmounting of a free energy barrier.\nAfter that, the crystalline nucleus can grow (nucleation-and-growth mechanism).\nIn general, at moderate supercooling, the limiting step is the formation of\nthe critical cluster rather than the crystal growth. The most relevant\nquantity to characterize nucleation is the nucleation rate, {\\em i.e.} the\nnumber of nucleating clusters per unit time and volume.\n\nWater freezing is arguably the most important liquid-to-solid transition. \nFor example, ice formation in atmospheric clouds is a key factor to the global radiation \nbudget and to \nclimate change.\\cite{review_ice_formation_clouds_2005,baker97,demott10}\nWater freezing is also a big issue, for instance, in the cryopreservation of cells and tissues.\\cite{cryopres}\nMoreover, ice formation is relevant to microbiology\\cite{hirano00},\nfood industry\\cite{maki74,bacterial_ice_nucleation}, materials \nscience\\cite{michaelides07}, geology\\cite{weathering_book} \nor physics.\\cite{pruppacher1995,debenedettibook,taborek,knopf11,nature_valeria,galli_mw,riechers13}\n\nDespite its great importance, our understanding of water freezing is far from \ncomplete. Not even homogeneous nucleation, the simplest \nconceivable mechanism by which ice can be formed, is fully understood.\nOne of the reasons for this is the need to perform experiments \nwith small droplets (10-100$~\\mu$m) \nto avoid heterogeneous nucleation.\\cite{kramer1999,stockel2005,murray2010,mishima10}\nThis, and the time that the droplets can be stabilized, sets the order of magnitude\nthat can be probed for the nucleation rate, $J$. Thus, experimental measurements for \n$\\log_{10} (J \/$(m$^{-3}$s$^{-1}$)) typically \nrange between $4$ and and $14$ . This corresponds to a temperature window spanning from 239~K to 233~K, \nthe latter often referred to as ``homogeneous nucleation temperature''.\\cite{debenedetti03} \nOur knowledge of the nucleation rate outside this temperature window is limited\nto extrapolations based on CNT. Such extrapolations\nmust be taken with care since the uncertainties in the nucleation rate and the\nnarrow range of temperatures for which J can be measured lead to important\ndifferences in the estimated value of the interfacial free energy and\/or the\nkinetic prefactor.\\cite{murray2010}\nMoreover, it has not been possible so far to observe a \ncritical ice nucleus in experiments because critical nuclei \nare relatively small and short-lived. 
Therefore, we only have \nestimates of the critical cluster size based on experimental measurements\nof $J$.\\cite{pruppacher1995,furukawa1996,jeffery1997,kramer1999,kiselev01,micelas}\nThe purpose of this paper is to fill these gaps by obtaining\nthe first estimate of the size of the critical\ncluster and of the nucleation rate at high temperatures which \nis not entirely based on theoretical extrapolations from measurements\nat low temperatures. We will make use of computer simulations\nto achieve these goals.\n\nComputer simulations are a valuable tool to investigate nucleation\\cite{S_1997_277_01975,Nature_2001_409_1020} since they\nprovide a microscopic description of the process.\nIt is therefore somehow surprising that the number of simulation studies dealing\nwith ice nucleation is rather small.\\cite{sear2012} On the one hand, it has\nbeen shown that ice nucleation can occur spontaneously (without the aid of \nspecial simulation techniques) when \nan electric field is applied\\cite{kusalik_electric_field}, when crystallization is assisted by a \nsubstrate\\cite{koga01,moore10} or by an interface\\cite{subsurface}, when coarse-grained models with accelerated dynamics are \nsimulated at high supercoolings,\\cite{nature_valeria,valeria_jcp_2010,valeria_pccp_2011} or when small systems \nare simulated\\cite{matsumoto02,PhysRevLett.88.195701,st2_recent}. \nOn the other hand, if nucleation does not\nhappen spontaneously, rare event techniques must be used. \nThe number of such works is limited and the agreement\nbetween different groups is not entirely satisfactory.\nRadhakrishnan and\nTrout\\cite{trout03,trout_prl_2003}, Quigley and Rodger\\cite{quigley08} and Brukhno et\nal.\\cite{anwar_tip4p_nucleation} determined\nthe free energy barrier for the formation of ice critical clusters with the\nTIP4P water model at 180~K (50 degrees below\nthe model's melting temperature), but mutually consistent results were not found.\nReinhardt and Doye\\cite{doye12} and Li {\\em et al.}\\cite{galli_mw} evaluated\nthe nucleation rate of the mW model at 55 K below freezing finding a \ndiscrepancy of \nsix orders of magnitude. \nVery recently, \nReinhardt {\\em et al.} investigated ice nucleation at moderate supercoolings,\\cite{doyejcp2013} \nto estimate the free energy of formation of small pre-critical clusters. \nIt is almost certain that more ice nucleation studies are on the way and,\nhopefully, the discrepancies will become smaller.\n\nNone of the studies mentioned in the previous paragraph deal with large systems at moderate \nsupercoolings like the present investigation does. \nBy supercooling, $\\Delta T$, we mean \nthe difference between the melting temperature and the temperature of interest. \nNote that the melting temperature of a model does not necessarily coincide with \nthe experimental melting temperature or with the melting temperature of other models.\nIn this work we determine, by means of\ncomputer simulations, the size of critical ice clusters and the nucleation rate \nfor $\\Delta T$ ranging from 15 to 35~K. 
\n\\edu{In this way we provide, for the first time, nucleation rates for $\\Delta T$ lower\nthan 35 K, where experimental measurements are not currently feasible (CNT based\nestimates of $J$ can in principle be made for any supercooling but, to the best of our knowledge, \nthere are no such estimates available for $\\Delta T < 30 K$).\\cite{pruppacher1995,kashchievbook,murray2010}}\nOur simulations predict that for $\\Delta T < 20$~K\nit is impossible that homogeneous ice nucleation takes place.\nTherefore, ice must necessarily nucleate heterogeneously \nfor supercoolings lower than 20~K.\nMoreover, we can directly compare our results for the \nlargest studied supercoolings to the experimental measurements. \nWe find, within uncertainty, a good agreement \nwith experimental nucleation rates. \nWe \npredict that the radius of the critical cluster goes from $\\sim$40~\\AA (8000 molecules)\nat $\\Delta T$ {\\em ca.} 15~K to $\\sim$17~\\AA (600 molecules)\nat $\\Delta T$ {\\em ca.} 35~K. \nWe also estimate the surface free energy via CNT. \nWe obtain, in agreement with \npredictions based on experimental measurements,\\cite{pruppacher1995,zobrist07,alpert11} that the surface free energy decreases \nwith temperature. An extrapolation of the interfacial free energy to the melting\ntemperature gives a value of $\\sim$29~mN\/m, in reasonable \nagreement with experimental results\\cite{gamma_exp}, and with\ncalculations by simulation.\\cite{gammadavid}\n\nWe use two simple, yet realistic, water models; namely \nTIP4P\/2005\\cite{TIP4P2005} and TIP4P\/Ice\\cite{TIP4PICE}. The \nmelting temperature\\cite{TIP4P2005,TIP4PICE} and the ability of these models to \npredict properties of real water has already been well established.\\cite{vega11}\nThe results obtained for both water models are quite similar provided that they are compared at the\nsame $\\Delta T$.\n\n\n\n\\section{Methodology}\n\nTo evaluate the size of critical ice clusters we follow a similar approach\nto that proposed by Bai and Li\\cite{bai06} to calculate the solid-liquid\ninterfacial energy for a Lennard-Jones system. They employ spherical crystal\nnuclei embedded in the supercooled liquid and determine the temperature at\nwhich the solid neither grows nor melts.\nThe key issue of this methodology is that determining the melting temperature\nof a solid cluster embedded in its corresponding supercooled liquid water is\nequivalent to the determination of the critical size of the cluster for a\ncertain given temperature.\nThus, in a sense, this methodology can be regarded as the extension to nucleation\nphenomena of the well known direct coexistence technique.\\cite{woodcook2}\nA similar method was applied to water by Pereyra et al.\\cite{carignano} They\ninserted an infinitely long (through periodical boundary conditions) ice\ncylinder in water and determined the melting temperature of the\ncylinder.\nRecently, the approach of Bai and Li has been used to investigate \nthe nucleation of clathrate \nhydrates.\\cite{jacobson11,knott12}\n\nHere we shall implement this methodology to study a three-dimensional spherical\nice cluster embedded in supercooled water. This follows closely the\nexperimental situation where the incipient ice embryo is fully immersed into\nliquid water.\nSuch {\\it brute force} approach requires very large systems\n(containing up to 2$\\times 10^5$ water molecules). 
\nHowever, molecular dynamics simulations can be efficiently parallelised so that it is \nnowadays possible to deal with such system size.\nThe methodology can then be implemented in a rather straightforward way, and\nis particularly useful at moderate supercooling,\nwhere other techniques (such as umbrella sampling\\cite{ARPC_2004_55_333,auer_frenkel}, Forward Flux Sampling\\cite{PRL_2005_94_018104} or \nTransition Path Sampling\\cite{ARPC_2002_53_0291}) may become numerically too expensive.\n\nOnce we calculate the critical cluster size we make use of \nCNT \\cite{ZPC_1926_119_277_nolotengo,becker-doring,kelton} in its version\nfor spherical clusters to \nestimate the surface free energy, $\\gamma$:\n\\begin{equation}\n\\label{ncrit}\n\\gamma=\\left(\\frac{3 N_{c} \\rho^2_s |\\Delta \\mu|^3}{32 \\pi}\\right)^{1\/3}\n\\end{equation}\nwhere $\\rho_s$ is the number density of the solid and $\\Delta \\mu$ is \nthe chemical potential difference between the metastable liquid and the solid at the temperature under consideration.\nThis expression allows us to obtain a value for $\\gamma$ associated to each cluster.\nCNT can also be used to estimate the height of the nucleation free-energy barrier, $\\Delta G_c$: \n\\begin{equation}\n\\label{eq_G_cnt}\n\\Delta G_c = \\frac{16 \\pi \\gamma^3}{3 \\rho^2_s |\\Delta \\mu|^2}.\n\\end{equation}\nOnce $\\Delta G_c$ is known, we can use the following CNT-based expression to evaluate the \nnucleation rate\\cite{auerjcp}:\n\\begin{equation}\nJ=Z f^{+} \\rho_f \\exp(-\\Delta G_{c}\/(k_B T))\n\\label{eqrate}\n\\end{equation}\nwhere $Z$ is the Zeldovich factor, $Z=\\sqrt{(|\\Delta G^{''}|_{N_c}\/(2\\pi k_B T))}$,\nand $f^{+}$ is the attachment rate of particles to the critical cluster.\nThe CNT form of the Zeldovich factor is \n\\begin{equation}\nZ=\\sqrt{|\\Delta \\mu|\/(6 \\pi k_B T N_c)},\n\\end{equation}\nwhich can be obtained from our calculations of $N_c$.\nWe follow Ref.~\\onlinecite{auerjcp} to calculate $f^{+}$ as a diffusion coefficient of the \ncluster at the top of the barrier: \n\\begin{equation}\nf^{+}=\\frac{<(N(t)-N_c)^2>}{2t}. \n\\label{eqattach}\n\\end{equation}\nTherefore, in order to obtain nucleation rates we combine CNT predictions with simulations\nof the critical clusters. \n\nBy using the methodology here described, the nucleation rate of clathrate hydrates \nhas been recently calculated.\\cite{knott12} The validity of this approach \nrelies on the ability of CNT to make good estimates of the free energy barrier\nfrom measured values of the critical cluster size. CNT is expected to work well for big \ncritical clusters. We are confident that the cluster sizes we deal with in this work \nare big enough for CNT to produce meaningful predictions. \nWe discuss why in Sec.~\\ref{validity}. \n\n\n\\section{Technical details}\n\n\\subsection{Simulation details}\nWe carry out $NpT$ GROMACS\\cite{hess08} molecular dynamics simulations (MD) \nof a system that consists of one spherical ice-Ih cluster surrounded by supercooled water molecules. \nWe use two different rigid non-polarizable models of water: \nTIP4P\/2005\\cite{TIP4P2005} and TIP4P\/Ice.\\cite{TIP4PICE}\nTIP4P\/2005 is a model that provides a quantitative account of many water\nproperties\\cite{vega09,vega11} including not only the well known thermodynamic\nanomalies but also the dynamical ones.\\cite{pi09,gonzalez10}\nTIP4P\/Ice was designed to reproduce the melting temperature, the densities and\nthe coexistence curves of several ice phases. 
\nOne of the main differences between the two models is \ntheir ice Ih melting temperature at 1~bar: $T_m = 252$~K for TIP4P\/2005 and\n$T_m = 272$~K for TIP4P\/Ice.\nWe evaluate long range electrostatic interactions using the smooth\nParticle Mesh Ewald method\\cite{essmann95} and truncate both the LJ and real\npart of the Coulombic interactions at 9~\\AA. \nWe preserve the rigid geometry of the water model by\nusing constraints. All simulations are run at the constant pressure of\n$p = 1$~bar, using an isotropic Parrinello-Rahman barostat\\cite{parrinello81}\nand at constant temperature, using the velocity-rescaling\nthermostat.\\cite{bussi07}\nWe set the MD time-step to 3~fs.\n\n\\subsection{Order parameter}\nTo determine the time evolution of the cluster size, we use the \nrotationally invariant order parameters proposed by Lechner and Dellago, $\\bar{q_{i}}$.\\cite{dellago}\nIn Fig.~\\ref{q6q4} we show the $\\bar{q}_{4},\\bar{q}_{6}$ values for 5000~molecules of either liquid water, \nice Ih or ice Ic at 1~bar and 237~K for TIP4P\/2005. The cut-off distance to identify neighbors for the calculation\nof $\\bar{q_{i}}$ is $3.5$~\\AA\\ between the oxygen atoms. \nThis approximately corresponds to the position of the first minimum of the \noxygen-oxygen pair correlation function in the liquid phase. \n\\begin{figure}[h!]\n\\includegraphics[width=0.4\\textwidth,clip=]{q6q4_1-108_T237K.eps}\\\\\n\\caption{Values of $\\bar{q}_6$ and $\\bar{q}_4$\\cite{dellago} \nfor 5000~molecules of the liquid phase (blue), of ice-Ih (red), and of ice-Ic (green) \nat 237~K for the TIP4P\/2005 model.} \n\\label{q6q4}\n\\end{figure}\n\nFrom Fig.~\\ref{q6q4} it is clear that $\\bar{q}_{6}$ alone is enough to\ndiscriminate between solid-like and fluid-like molecules, as already suggested\nin Ref.~\\onlinecite{doye_tip4p_2005}. As a threshold to separate the \nliquid from the solid clouds in Fig.~\\ref{q6q4} we \nchoose $\\bar{q}_{6,t}=0.358$, represented as a horizontal\ndashed line in the figure. \nThis threshold separates the liquid from both ice Ih and Ic. Therefore, \neven though we prepare the clusters with ice-Ih structure, ice-Ic molecules would \nbe detected as solid-like should they appear as the clusters grow.\nUnlike Refs.~\\onlinecite{ghiringhelli08} and \\onlinecite{li13} we do not consider\nas solid-like particles on the surface which are neighbor to solid-like particles. \nOnce molecules are labelled either as\nsolid or liquid-like, the solid cluster is found by means of a clustering\nalgorithm that uses a cut-off of 3.5~\\AA\\ to find neighbors of the same\ncluster. \n\n\\subsection{Initial configuration}\nWe prepare the initial configuration by inserting a spherical ice-Ih cluster\n(see Fig.~\\ref{cluster} for a cluster of 4648~molecules) into a configuration\nof supercooled water with $\\sim 20$~times as many molecules as the cluster. \n\\begin{figure}[h!]\n\\includegraphics[width=0.4\\textwidth,clip=]{cluster4648.eps}\n\\caption{Snapshot of a spherical ice-Ih cluster of 4648~molecules.} \n\\label{cluster}\n\\end{figure}\nTo obtain the cluster, we simply cut a spherical portion of a large\nequilibrated ice Ih crystal. \nNext, we insert the ice cluster in the supercooled liquid \nremoving the liquid molecules that overlap with the cluster.\nFinally, we equilibrate the system for about 0.2~ns at 200~K.\nThis time is long enough to equilibrate the cluster-liquid interface \n(see Supporting Information). 
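\nFor concreteness, the geometric part of this preparation protocol can be sketched in a few lines of Python-like pseudocode (the function and variable names are ours and purely illustrative, periodic boundary conditions are ignored, and the 3~\\AA\\ clearance used to define an overlap is only an assumed value):\n\\begin{verbatim}\nimport numpy as np\n\ndef carve_sphere(ice_positions, center, radius):\n    # keep the ice molecules whose oxygen lies within `radius` of `center`\n    d = np.linalg.norm(ice_positions - center, axis=1)\n    return ice_positions[d <= radius]\n\ndef insert_cluster(cluster_positions, liquid_positions, clearance=3.0):\n    # discard liquid molecules closer than `clearance` (in Angstrom)\n    # to any molecule of the inserted ice cluster\n    kept = [r for r in liquid_positions\n            if np.min(np.linalg.norm(cluster_positions - r, axis=1)) > clearance]\n    return cluster_positions, np.array(kept)\n\\end{verbatim}\nThe composite configuration produced in this way is the one that is subsequently relaxed for 0.2~ns.\n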
\nWe then perform simulations for three different system\/cluster sizes labeled as H (Huge), L (Large) and B (Big) \n(see Table~\\ref{systemsize}). As far as we are aware, the studied system size are \nbeyond any previous numerical study of ice nucleation. Calculations were performed in the Spanish super-computer \nTirant. \nFor system H we use 150~nodes yielding 0.72~ns\/day; \nfor system L, 50~nodes at 1.5~ns\/day and, \nfor system B, 32~nodes at 4.7~ns\/day. \n\nOur order parameter allows us to correctly identify as solid-like the great\nmajority of the molecules belonging to the cluster shown in Fig.~\\ref{cluster}\n(4498 out of 4648). \nFig.~\\ref{cluster-po}(a) shows that indeed most molecules of the inserted ice cluster are detected as solid-like (red) \nas opposed to liquid-like (blue). Notice that most blue particles in Fig.~\\ref{cluster-po}(a) are located at the interface. \nThis is not surprising giving that our order parameter was tuned to distinguish between liquid-like and solid-like particles in the bulk. \nFig. ~\\ref{cluster-po}(a) corresponds to the cluster {\\em just} inserted in the liquid. \nAfter 0.2~ns of equilibration our order parameter detects that the number of molecules in the cluster drops down to 3170. \nTo explain the origin of this drop we show in Fig.~\\ref{cluster-po}(b) \na snapshot of the 4648 inserted molecules after the 0.2~ns equilibration period. \nClearly, the drop comes from the fact that the outermost layer of molecules of the inserted cluster becomes liquid-like during equilibration.\nBy removing the liquid-like molecules from Fig.~\\ref{cluster-po}(b) one can easily identify again the hexagonal \nchannels typical of ice (Fig.~\\ref{cluster-po}(c)). \nTherefore, the drop from 4648 to 3170~molecules in the ice cluster is due to the equilibration of the ice-water interface. \nThe size of the equilibrated clusters, $N_c$, is given in Table~\\ref{systemsize}. \n\n\\begin{figure}[h!]\n\\includegraphics[width=0.2\\textwidth,clip=]{cluster0.eps}(a)\n\\includegraphics[width=0.2\\textwidth,clip=]{cluster200ps.eps}(b)\\\\\n\\includegraphics[width=0.2\\textwidth,clip=]{cluster200psnaked.eps}(c)\n\\caption{Snapshot of the 4648~molecules inserted as an ice cluster just after insertion (a),\nand after 0.2~ns equilibration (b). \nMolecules are colored in red if detected as solid-like and \nin blue if detected as liquid-like. In (c) only solid-like molecules of snapshot (b) are shown.} \n\\label{cluster-po}\n\\end{figure}\n\n\\begin{table}[h!]\n\\caption{Total\nnumber of molecules in the system, $N_t$ (ice cluster + surrounding liquid water molecules) \nand number of molecules of the inserted spherical ice cluster, $N_i$ for the \nthree configurations prepared. $N_c$ is the number of molecules in the ice cluster after equilibration of the interface. \nThe radius of the equilibrated clusters $r_c$ in \\AA ~is also presented.}\n\\label{systemsize} \n\\centerline{\n\\begin{tabular}{ccccccc}\n\\hline \n System & $N_{t}$ & $N_{i}$ & $N_{c}^{2005}$ & $N_{c}^{Ice}$ & $r_{c}^{2005}$ & $r_{c}^{Ice}$ \\\\\n\\hline\nB & 22712 & 1089 & \t\t600 & 600 & 16.7 & 16.8 \\\\\nL & 76781 & 4648 & \t\t3170 & 3167 & 29.1 & 29.2 \\\\\nH & 182585 & 9998 & \t\t7931 & 7926 & 39.5 & 39.7 \\\\\n\\hline \n\\end{tabular}}\n\\end{table}\n\nOnce the interface is equilibrated for $0.2$ ns, the number of molecules in the cluster grows\nor shrinks (depending on the temperature) at a much slower rate (typically\nrequiring several nanoseconds as it is shown in Fig.~\\ref{figlargest}). 
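\nAll the cluster sizes quoted in this work are obtained by combining the $\\bar{q}_{6}$ criterion of the previous section with a standard clustering step. A minimal sketch of this analysis step (schematic Python; it assumes that the $\\bar{q}_{6}$ value of every molecule and an oxygen-oxygen neighbour list built with the 3.5~\\AA\\ cut-off are already available, and the function name is ours) reads:\n\\begin{verbatim}\ndef largest_ice_cluster(q6bar, neighbours, threshold=0.358):\n    # label molecules as solid-like when q6bar exceeds the threshold\n    solid = {i for i, q in enumerate(q6bar) if q > threshold}\n    seen, best = set(), []\n    for start in solid:\n        if start in seen:\n            continue\n        # depth-first search over connected solid-like molecules\n        stack, cluster = [start], []\n        seen.add(start)\n        while stack:\n            i = stack.pop()\n            cluster.append(i)\n            for j in neighbours[i]:\n                if j in solid and j not in seen:\n                    seen.add(j)\n                    stack.append(j)\n        if len(cluster) > len(best):\n            best = cluster\n    return len(best), best\n\\end{verbatim}\nThe number of molecules reported in Table~\\ref{systemsize} and monitored in Fig.~\\ref{figlargest} is essentially the size of the largest cluster identified in this way.\n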
\nThe initial time in our simulations corresponds to the configuration equilibrated after 0.2~ns. \nWe run MD simulations of the system with the equilibrated interface at \nseveral temperatures below the bulk melting temperature of the model.\nThe objective is to find a temperature range within which the cluster\ncan be considered to be critical.\nThe temperature range is comprised between the lowest temperature at which the solid cluster\nmelts and the highest at which it grows. \nWe monitor the number of molecules in the cluster \nand the global potential energy to find whether the cluster melts or grows.\n\n\\section{Results}\n\n\\subsection{Size of the critical clusters}\n\nIn Fig.~\\ref{figlargest} we represent the number of molecules in the ice cluster\nversus time for system H, TIP4P\/2005. \n\\begin{figure}[h!]\n\\includegraphics[width=0.45\\textwidth,clip=]{nbigvstime_7805.eps}\n\\caption{Number of molecules in the ice cluster versus time for system H and the TIP4P\/2005 potential. Results are shown for different temperatures as\nindicated in the legend.} \\label{figlargest}\n\\end{figure}\nDepending on the temperature the cluster either grows (230~K and 235~K) or\nshrinks (240~K). The highest temperature at which the cluster grows is 235~K\nand the lowest temperature at which it melts is 240~K. Hence, a cluster of\n$\\sim$7900~molecules (as detected by our order parameter) is critical at\n$237.5 \\pm 2.5$~K. \nAn analogous result can be obtained by monitoring the potential energy of the\nsystem as a function of time.\n(see Supporting Information).\nA decrease in the energy corresponds to the cluster's growth \nwhereas an increase in the energy corresponds to its melting. \nBy doing this analysis for both models (TIP4P\/2005 and TIP4P\/Ice ) and for the\nthree cluster sizes (H, L, and B ), \nwe obtain the results summarized in Table~\\ref{tabcluster}. \n\n\\begin{table}[h!]\n\\caption{\nWe report the\ntemperature ($T$ in K) for which the prepared ice clusters are found to be critical, \nthe supercooling ($\\Delta T$ in K) for the corresponding water model, \nthe chemical potential difference between the fluid and the solid ($\\Delta \\mu$ in kcal\/mol), \nthe liquid-solid surface free energy ($\\gamma$ in mN\/m) estimated from Eq.~\\ref{ncrit},\nand the nucleation free energy barrier height ($\\Delta G_{c}$ in $k_B T$) estimated from Eq.~\\ref{eq_G_cnt}. \n\\label{tabcluster}} \n\\centerline{\n\\begin{tabular}{ccccccccc}\n\\hline \n Model & System & $N_c$ & $T$ & $\\Delta T$ & $\\Delta \\mu$ & $\\gamma$ & $\\Delta G_{c}$\\\\\n\\hline\nTIP4P\/2005 & B & 600 & 222.5 & 29.5 & 0.114 & 20.4 & 77 \\\\\nTIP4P\/2005 & L & 3170 & 232.5 & 19.5 & 0.080 & 24.9 & 275 \\\\\nTIP4P\/2005 & H & 7931 & 237.5 & 14.5 & 0.061 & 25.9 & 515 \\\\\n\\hline\nTIP4P\/Ice & B & 600 & 237.5 & 34.5 & 0.133 & 23.6 & 85 \\\\\nTIP4P\/Ice & L & 3167 & 252.5 & 19.5 & 0.083 & 25.4 & 261 \\\\\nTIP4P\/Ice & H & 7926 & 257.5 & 14.5 & 0.063 & 26.3 & 487 \\\\\n\\hline \n\\end{tabular}}\n\\end{table}\n\n\nFor the temperatures explored in this work (from about 15~K to 35~K below the\nmelting temperature of both TIP4P\/2005 and TIP4P\/Ice) the size of the ice\ncritical cluster ranges from nearly 8000 (radius of 4 nm) to about 600 molecules (radius of 1.7 nm). 
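\nThe radii quoted above and in Table~\\ref{systemsize} are consistent with the simple spherical-cluster relation $r_c=\\left(3N_c\/(4\\pi\\rho_s)\\right)^{1\/3}$: with the coexistence number density of TIP4P\/2005 ice ($\\rho_s\\simeq 0.031$~molecules\/\\AA$^{3}$), $N_c=600$ gives $r_c\\simeq 17$~\\AA\\ and $N_c\\simeq 7900$ gives $r_c\\simeq 40$~\\AA.\n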
\nThis compares reasonably well with a \ncritical cluster radius of $\\sim$ 1.3 nm obtained by \napplying CNT to \nexperimental measurements at a supercooling of about 40 K.\\cite{micelas,kiselev01} \n\\edu{Our results are also consistent with CNT based estimates of the critical size at lower\nsupersaturations.\\cite{bogdan,kashchievbook} For instance, in Fig. \n15.7 of Ref. \\cite{kashchievbook}, a critical\ncluster size ranging from 1000 to 300 molecules is predicted for 25 K $<\\Delta T <$ 30 K.} \nAn interesting remark is\nthat the temperatures of the TIP4P\/Ice are basically shifted 20~K above the\nthe corresponding ones for TIP4P\/2005 with the same nucleus size. \nThis is precisely the difference between the melting temperatures of both\nmodels and, thus, the supercoolings are very similar for a given ice cluster size in both models.\nThis is more clearly shown in Fig.~\\ref{masterfigure} where the size of\nthe critical cluster is plotted as a function of the the difference between the\nmelting temperature of the model and the temperature of interest $\\Delta T=T_m-T$.\nWe observe that, within our error bar, the critical cluster size of both models \nscales in the same way with respect to their melting temperatures. This is not so surprising since \nTIP4P\/2005 and TIP4P\/Ice present a similar charge distribution and mainly differ in \nthe choice of the potential parameters.\n\\begin{figure}[h!]\n\\includegraphics[width=0.45\\textwidth,clip=,angle=0]{new_nc_vs_deltat.eps}\n\\caption{Critical cluster size versus $\\Delta T$ for the studied water models.\nNotice that the points corresponding to both models at low supercooling are essentially on top \nof each other.}\n\\label{masterfigure}\n\\end{figure}\n\nIn previous works\\cite{vega07,vega09,vega11} we observed that, for a number of properties, \nthe values of TIP4P\/2005 lie in the middle of the values obtained for TIP4P and TIP4P\/Ice.\nTherefore, it is expected that TIP4P gives similar results to TIP4P\/2005 and TIP4P\/Ice regarding\nthe dependence of $N_c$ with $\\Delta T$. \nMatsumoto et al.\\cite{matsumoto02} studied ice nucleation at 230~K and a density of 0.96~g\/cm$^3$ using the\nTIP4P model. This thermodynamic state point corresponds to a pressure of about -1000~bar and $\\Delta T$ 5 K.\\cite{sanz_prl} \nBy extrapolating the data of Fig.~\\ref{masterfigure} to $\\Delta T=5 K$\none gets a critical cluster of the order of hundreds of thousand molecules. \nTherefore, \nit is likely that the results obtained by Matsumoto et al.\\cite{matsumoto02}, although pioneering \nand useful to learn about the \nice nucleation pathway, may suffer from system size effects and may not be valid\nto estimate either the size of the critical cluster or the nucleation rate. \n\n\nIn an important paper, Koop et al.\\cite{koop_nature} showed that the\nhomogeneous nucleation rate (and therefore the temperature of homogeneous\nnucleation) of pure water and of water solutions can be described quite well by\na function that depends only on the water activity.\nThis conclusion has been confirmed in more recent experiments.\\cite{knopf11}\nAlthough the nucleation rate for an aqueous solution is the same as for pure\nwater, the freezing points are different. One is then tempted to suggest that\nthe size of the critical cluster at the homogeneous nucleation temperature\ncould be the same for pure water and for aqueous solutions. 
Moreover, the fact\nthat thermodynamics is sufficient to predict the rate seems to indicate that\nthe water mobility is also determined by the free energy of water. A\nmicroscopic study of the relationship between crystallization rates, structure\nand thermodynamics of water which may explain the empirical findings of Koop\nand coworkers has recently been presented in Ref.~\\onlinecite{nature_valeria}.\n\n\\subsection{Interfacial free energy and free energy barrier}\n\nOnce the size of the critical cluster is known, \none can use Eq.~\\ref{ncrit} to estimate the solid-liquid interfacial free energy. \nSince ice density changes little with temperature\\cite{mendu_eos}, the density at coexistence is used in our calculations\n($\\rho_{m,TIP4P\/Ice}$=0.906 g\/cm$^3$ and $\\rho_{m,TIP4P\/2005}$=0.921 g\/cm$^3$). \nFor most substances it is possible to approximate $\\Delta \\mu$ by $\\Delta h_m\n(T_m-T)\/T_m$, where $\\Delta h_m$ is the melting enthalpy and $T_m$ is the\nmelting temperature. For water, however, this may not be a good approximation because $\\Delta\nh$ significantly changes with temperature as a manifestation of the anomalous\nsharp increase of the heat capacity of water as temperature\ndecreases.\\cite{jeffery1997,Kumar05062007} Hence, one needs to do a proper\nevaluation of the chemical potential difference between both phases to get the\nsurface free energy from Eq.~\\ref{ncrit}. \nWe have calculated $\\Delta \\mu$ at every temperature by means of standard\nthermodynamic integration \\cite{frenkel96} from the coexistence \ntemperature, at which $\\Delta \\mu = 0$. \nIn Table~\\ref{tabcluster} we report the values we obtain for $\\Delta \\mu$\nand $\\gamma$. \n\nFirst of all we note that $\\gamma$ decreases with temperature for both models.\nThis is in qualitative agreement with experimental estimates of the behavior\nof $\\gamma$ with $T$.\\cite{pruppacher1995,zobrist07,alpert11} A more quantitative comparison is\nnot possible in view of the large discrepancies between different estimates\n(see Fig.~10 in Ref.~\\onlinecite{pruppacher1995}). \nMotivated by the fact that the interfacial free energy can only be measured at\ncoexistence, we extrapolate our results to the melting temperature. \nTo do that, we take the two largest clusters and evaluate the slope of\n$\\gamma(T)$.\nWe get a value for the slope of $\\sim$0.18~mN\/(m~K) for both models, in\nvery good agreement with a recent calculation for the TIP4P\/2005\nmodel.\\cite{doyejcp2013} \nWith a linear extrapolation we get a value for $\\gamma$ at $T_m$ of\n$\\sim$28.7~mN\/m for both models, which can be compared to experimental\nmeasurements.\nIn contrast with the vapor-liquid surface tension, the value of\n$\\gamma$ for the solid-fluid interface is not well established.\nExperimental values range from 25 to 35~mN\/m.\\cite{pusztai2002} \nOur calculated data for $\\gamma$ at coexistence lies in the middle of that\nrange, so our models predict a surface free energy which is consistent with\ncurrent experimental data.\nWe now compare our estimated $\\gamma$ to direct calculations from simulations\nusing a planar interface. The value of $\\gamma$ depends on the plane in\ncontact with the liquid. Since the cluster used here is spherical \nwe shall compare with the average of the values obtained for the basal and prismatic planes. \nDavidchak et al. 
computed $\\gamma$ for a planar fluid-solid interface using two models similar to those\nused in this work: TIP4P and TIP4P-Ew.\nFor TIP4P, in an initial publication the authors reported a value of $\\gamma\n= 23.9$~mN\/m\\cite{gammadavid_old} that was later on modified (after improving\ntheir methodology) to $\\gamma = 26.5$~mN\/m.\\cite{gammadavid} For the TIP4P-Ew\\cite{horn04}\nDavidchak et al. reported (using the improved methodology) a value of\n$27.6$~mN\/m.\\cite{gammadavid} TIP4P-Ew is known to predict water properties in\nrelatively close agreement to those of TIP4P\/2005. Therefore, our results are also\nconsistent with the calculations reported in the literature for similar models.\nTo conclude, our values of $\\gamma$ seem to be\nreasonable estimates of the interfacial free-energy of the planar ice-water\ninterface.\n\nTo estimate the height of the nucleation free-energy barrier \nwe make use of Eq.~\\ref{eq_G_cnt}.\nOur results are summarized in Table~\\ref{tabcluster}. \nIn view of the height of the nucleation barrier for the clusters of systems\nL and H, around 250 and 500 $k_B T$ respectively, it seems virtually impossible\nto observe homogeneous nucleation of ice for supercoolings lower than \n20~K. \nThe height of the nucleation barrier provides an estimate of the \nconcentration of critical clusters in the metastable fluid as \n$\\rho_f \\exp(-\\Delta G_{c}\/(k_B T))$, where $\\rho_f$ is the number density \nof the fluid. \nFor $\\Delta G_{c} = 250$~$k_B T$, one critical cluster would appear on average\nin a volume $\\sim$10$^{60}$ times larger than the volume of the whole\nhydrosphere. From the values of $\\Delta G_{c}$ of Table~\\ref{tabcluster} we may infer why \nspontaneous ice nucleation has never been observed in previous studies of\nsupercooled water with the TIP4P\/2005 model\\cite{abascal10,abascal11,pettersson11}. Our \nresults show that the free energy\nbarrier for nucleation even for temperatures as low as 35~K below melting is\nstill of about 80 $k_B T$. This is much larger than the typical barrier found in studies where spontaneous \ncrystallization occurs in brute force simulations \\cite{hs_filion,lundrigan:104503} (about $18 k_B T$). \nIt is worth mentioning that neither Shevchuk and Rao\\cite{rao} nor Overduin and Patey\\cite{patey_sc} \nfind any evidence of ice nucleation in TIP4P models after runs of several microseconds which is consistent with the\nresults of this work. \nOur results may be of great interest to studies in which the competition between the \ncrystallization time and the equilibration time of water is crucial\\cite{limmer13,poole13,liu12}. \n\n\n\\subsection{Nucleation rate}\n\nAlthough the free energy barriers alone provide a strong indication that ice\ncan not appear on our planet via homogeneous nucleation at moderate\nsupercoolings ($\\Delta T < 20$~K), it is worth calculating the nucleation rate,\n$J$, to confirm such statement. \nThe nucleation rate takes into account not only the concentration of the \nclusters but also the speed at which these are formed. \nMoreover, the supercoolings for the smallest clusters we investigate are\ncomparable to those where most experimental measurements of $J$ have been made\n($\\Delta T \\sim$~35~K).\\cite{taborek,pruppacher1995,knopf11,riechers13,pruppacher_book}\n\nTo calculate the nucleation rate we use Eq.~\\ref{eqrate}. \nFirst, we compute $f^{+}$ from Eq.~\\ref{eqattach} by running 30 simulations of\nthe cluster at the temperature at which it was determined to be critical. 
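\nBefore describing this measurement in detail, it is perhaps useful to see how the different ingredients of Eqs.~\\ref{ncrit}--\\ref{eqattach} are eventually combined. The sketch below (schematic Python; the unit bookkeeping and the function name are ours) assembles $J$ from $N_c$, $\\Delta \\mu$, the densities and $f^{+}$:\n\\begin{verbatim}\nimport math\n\nkB = 1.380649e-23  # Boltzmann constant in J per K\n\ndef cnt_rate(Nc, dmu, rho_s, rho_f, f_plus, T):\n    # dmu in J per molecule, densities in molecules per m^3, f_plus in 1\/s\n    # interfacial free energy, barrier height and Zeldovich factor\n    gamma = (3.0 * Nc * rho_s**2 * dmu**3 \/ (32.0 * math.pi)) ** (1.0 \/ 3.0)\n    dG = 16.0 * math.pi * gamma**3 \/ (3.0 * rho_s**2 * dmu**2)\n    Z = math.sqrt(dmu \/ (6.0 * math.pi * kB * T * Nc))\n    return Z * f_plus * rho_f * math.exp(-dG \/ (kB * T))  # in m^-3 s^-1\n\\end{verbatim}\nFed with the values reported below for system L of TIP4P\/2005, this bookkeeping gives back a value of $\\log_{10} (J \/$(m$^{-3}$s$^{-1}$)) close to $-83$.\n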
We\nmonitor $(N(t)-N_c)^2$ and average it over all the runs. In Fig.~\\ref{attach}\nwe plot $<(N(t)-N_c)^2>$ versus time for the system L, TIP4P\/2005.\nFrom the slope at long times we can infer $f^{+}$.\\cite{auerjcp} We get \n$f^{+} = 70\\cdot10^9$~$s^{-1}$. The Zeldovich factor for this particular case is $1.77\\cdot10^{-3}$, \nand the density of the liquid is 0.977~g\/cm$^3$. With this, we have all the ingredients needed\nto calculate the nucleation rate via Eq.~\\ref{eqrate}. The final result for this \ncase is $\\log_{10} (J \/$(m$^{-3}$s$^{-1}$))=-83. \n\nThe same procedure is used to calculate the nucleation rate for the rest of the\nsystems described in Table~\\ref{systemsize}. The results for the nucleation\nrate as a function of the supercooling are presented in Fig.~\\ref{rate}\nand compared to the experimental measurements of Pruppacher\\cite{pruppacher1995}\nand Taborek\\cite{taborek}.\nThe horizontal dashed line corresponds to the nucleation rate required for the\nappearance of one critical cluster in the volume of Earth's hydrosphere in\nthe age of the universe, which we call ``impossible nucleation rate''.\nThe vertical line shows at which temperature the impossible nucleation rate\nline intercepts the upper limit of our error bars (grey and orange shadows for\nTIP4P\/2005 and TIP4P\/Ice respectively). \nIn view of this figure we can confidently claim what the free energy barriers\npreviously hinted: it is impossible that ice nucleates homogeneously in our\nplanet for $\\Delta T < 20$~K. \nIn other words, heterogeneous nucleation must take place in order for water to\nfreeze for supercoolings lower than 20~K. This is consistent with the fact\nthat, when heterogeneous nucleation is suppressed, moderately supercooled water\ncan remain metastable long enough for its thermodynamic properties to be\nmeasured.\\cite{speedy76,hare_sorensen_eos_supercooled_water,mishima10,debenedetti03,holten12} \nFrom our results it is also clear that ice formation should not be expected in\nbrute force molecular dynamics simulations at moderate supercoolings (provided\nthat the system is large enough not to be affected by finite size\neffects).\\cite{matsumoto02} To observe ice formation in brute force\nsimulations the nucleation rate should be higher than $\\log_{10} (J\n\/$(m$^{-3}$s$^{-1}$)) = 32 (this number is obtained assuming the formation of\nice after running about 100~ns in a system of about 50nm$^3$, which are typical\nvalues in computer simulations of supercooled water). Notice also that the\nmaximum in the isothermal compressibility at room\npressure\\cite{abascal10,abascal11} found at about $\\Delta T=20$~K for the\nTIP4P\/2005 model can not be the ascribed to the transient formation of ice\nas the nucleation rate of ice at this temperature is negligible. \n\nAnother interesting aspect of Fig.~\\ref{rate} is the comparison with\nexperiment. Both models give nucleation rates that reproduce the experimental\nmeasurements within the uncertainty of our method. This excellent result\nbrings confidence in the ability of the selected models to predict relevant\nquantities for the nucleation of ice such as the nucleation rate, the critical\ncluster size, and the surface free energy. \n\n\\edu{We also include in Fig.~\\ref{rate}\na green dashed line that corresponds to\nthe CNT based estimates of $J$ shown in Fig. 13.6 of Ref. \\cite{kashchievbook} The agreement \nbetween CNT, simulations and experiments is quite satisfactory. 
\nTo the best of our knowledge, there are no CNT estimates of $J$ available for\nsupersaturations lower than 30 K to compare our results with.\\cite{pruppacher1995,kashchievbook,murray2010}} \n\n\\begin{figure}[h!]\n\\includegraphics[width=0.45\\textwidth,clip=,angle=0]{figura30_trajs.eps}\n\\caption{$<(N(t)-N_c)^2>$ versus time for configuration L, TIP4P\/2005. The attachment rate $f^{+}$ is obtained as half the\nvalue of the slope. The curve above is obtained as an average over 30 trajectories. \nIn approximately half of these trajectories the critical \ncluster ended up growing, whereas it eventually melted in the other half. \n}\n\\label{attach}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\includegraphics[width=0.45\\textwidth,clip=,angle=0]{ultimissimamitica.eps}\n\\caption{Nucleation rate as a function of the supercooling.\n\\edu{Symbols correspond to our simulation results and to \nexperimental measurements as indicated in the legend. The green dashed line corresponds\nto CNT estimates of $J$.\\cite{kashchievbook}} The grey and orange shadows \nrepresent the estimated error bars for TIP4P\/2005 and TIP4P\/Ice respectively interpolated\nby splines. The horizontal \\edu{dotted} line indicates the rate given by the growth of one cluster in\nthe age of the universe in all the water of the Earth's hydrosphere. The vertical \n\\edu{dotted} line indicates the supercooling below which homogeneous nucleation is impossible.}\n\\label{rate}\n\\end{figure}\n\nBy using Forward Flux Sampling\\cite{PRL_2005_94_018104}, Li {\\em et al.}\ndetermined $J$ for the mW model of water for temperatures between 35 and 55~K\nbelow the model's melting temperature.\\cite{galli_mw} Since we are\ninterested in ice nucleation at moderate supersaturation, our study deals with\nlower supercoolings \n($14.5$~K$ < \\Delta T < 34.5$~K). \nNonetheless, our highest supercooling (34.5~K) is very close to the lowest one of \nLi {\\em et al.} (35~K) so we can compare both results. \nThe values of Li {\\em et al.} for $J$ are 5-8 orders of\nmagnitude below the experimental ones when compared at the same absolute\ntemperature (the deviation increases when the comparison is made at the same degree of\nsupercooling). The nucleation rates calculated in this work for TIP4P\/2005 and\nTIP4P\/ice are similar (although slightly larger) to those for the mW model. Initially this may\nappear surprising as the mW model is a coarse grained model of water with no\nhydrogens, which makes its dynamics faster than that of both real water and TIP4P-like models.\\cite{mw}\nHowever the free energy barrier of mW may be larger, compensating this kinetic effect. \nIn fact the interfacial free energy of mW has been found\\cite{galli_mw} to be $\\gamma=31$~mN\/m \n(larger than the values found in this work for TIP4P\/2005 and TIP4P\/Ice).\nThis high value of $\\gamma$ may be partially compensated by a significant overestimate\nof the ice density of this model (0.978 g\/cm$^3$ to be compared to the experimental result 0.91 g\/cm$^3$).\nThe net balance is that the values of $J$ of the mW model are similar,\nalthough somewhat lower, than those for TIP4P\/2005 and TIP4P\/ice.\n\nAs for the size of the critical cluster, we find that it is of about 600\nwater molecules for TIP4P\/Ice at 237.5~K ($\\Delta T = 34.5$~K). 
\nLi {\\em et al.} have reported a critical cluster size of about 850\nmolecules for the mW model at 240~K ($\\Delta T = 35$~K).\nBoth results are compatible since Li {\\em et al.} include in the ice cluster molecules which \nare neighbor to the solid cluster, and we do not. \nIn summary, our results for TIP4P\/2005 and TIP4P\/Ice \nare consistent with Li's {\\em et al.} for mW. \n\n\n\n\\section{Discussion}\n\\subsection{Validity and possible sources of error}\n\\label{validity}\n\nThe methodology we have used is subject to two main error sources: the determination\nof the cluster size and the location of the temperature at which the clusters\nare found to be critical. Moreover, our approach relies on the validity of \nCNT. In the following paragraphs we discuss the extent to which our \nresults may be affected by these issues. \n\nIn nucleation studies, the size of the largest solid cluster is usually\nconsidered a good reaction coordinate.\nTo identify the cluster, we first need to distinguish between liquid-like and solid-like molecules.\nThe chosen criterion should be able to identify the majority of molecules of the bulk solid as solid-like, \nand the majority of molecules of the bulk fluid as liquid-like. \nOne could in principle find several criteria that successfully perform this task. \nHowever, when interfaces are present in the system \n(as in the case of a solid-liquid\\cite{hs_filion} \nor a solid-vapor\\cite{maria06b,conde08} interface) depending on the chosen criterion \none might assign differently the interfacial molecules (see for instance \nReferences~\\onlinecite{doye12} and \\onlinecite{galli_mw} \nfor an illustration of this problem for the mW water model). \n\nHow does the choice of a criterion to distinguish liquid from solid-like molecules affect our results? \nWhether the cluster grows or shrinks for a given temperature does\nnot depend on the particular choice of the order parameter (see Supporting Information).\nThe same trend can be obtained by monitoring \nglobal thermodynamic properties of the system, such as the total potential energy (see Supporting Ingormation). \nTherefore, the fact that the cluster shown in Fig.~\\ref{cluster-po} \nis critical at 232.5~K is independent on the particular choice of the criterion to distinguish liquid from solid-like molecules. \n\nA different problem arises if one asks the question: how many ice molecules are \npresent in Fig.3b? Different criteria provide different answers even though the configuration \npresented in Fig.3b is unique. \nSince the origin of this arbitrarity is due to the interfacial region, it is expected that\nthe arbitrarity will become smaller as the ice cluster becomes larger. \nHowever, for the system sizes considered in this work the interface region still matters. \nTo take this effect into account we have estimated the error bars in Fig.~\\ref{rate}\nconsidering an arbitrarity of 60\\% in the labeling of {\\em interfacial} molecules. \nThis would affect the value of $\\gamma$ by 7\\%, and the free-energy barriers height \nby up to 20\\%. 
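\nThese estimates can be rationalized through Eqs.~\\ref{ncrit} and~\\ref{eq_G_cnt}: at fixed $\\Delta \\mu$ one has $\\gamma \\propto N_c^{1\/3}$ and $\\Delta G_c \\propto \\gamma^3 \\propto N_c$, so a relative uncertainty $\\epsilon$ in $N_c$ translates into roughly $\\epsilon\/3$ in $\\gamma$ and $\\epsilon$ in $\\Delta G_c$. For the cluster sizes studied here, a surface shell one neighbour cut-off thick contains of the order of one third of the cluster molecules (more for system B, less for system H), so a 60\\% ambiguity in the interfacial labels amounts to $\\epsilon$ of roughly 20\\%, in line with the $\\sim$7\\% and $\\sim$20\\% figures just quoted.\n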
\nAlthough this estimated error seems large, it is worth pointing out that differences between the free-energy barrier estimated by\ndifferent groups may be, in the case of water, much larger than that.\\cite{trout_prl_2003,trout03,quigley08,anwar_tip4p_nucleation} \nIn summary we conclude that the liquid\/solid criterion chosen in this work provides reasonable estimates of \n$\\gamma$, and when used within the CNT framework allows to interpret our simulations results \nin a rather straightforward way. \n\nAnother important error source in the calculation of $J$ is the location of the temperature \nat which a cluster is critical. As we show in Fig.~\\ref{figlargest}, by performing runs at different temperatures we identify, within a certain\nrange, the temperature that makes critical a given ice cluster. \nWe assign the temperature in the middle of the range to the corresponding cluster, but the temperature\nthat really makes the cluster critical \ncould in principle be any other within the range. \nThis uncertainty \nhas a strong contribution to the error bars in Fig~\\ref{rate}, particularly at low supercoolings,\nwhere the variation of $J$ with $T$ is very steep. \nThis error could, in principle, be easier to reduce than that coming from the arbitrarity in the determination\nof the number of particles in the cluster. One simply has to do more runs to narrow \nthe temperature range. However, these simulations are very expensive given the large system \nsizes we are dealing with. \nIt is interesting to point out that temperature control is also seen as a major\nerror source in experiments.\\cite{riechers13} \n\nOur results for $\\gamma$, $\\Delta G_c$, and $J$ rely on the validity of CNT. \nClassical Nucleation Theory is expected to break down for small clusters, when \nthe view of nucleation as a competition between bulk and surface free energies starts to be\nquestionable (in clusters of a few hundred particles most molecules are placed at\nthe surface). However, for the large cluster sizes investigated in this work \nit seems reasonable to assume that CNT works well. The satisfactory comparison \nof our estimate of $\\gamma$ with that obtained in simulations of a flat interface\\cite{gammadavid}\nis certainly encouraging in this respect. \nMoreover, we have applied the methodology described in this paper \nto calculate the nucleation rate of the mW water model\nand we get, within error, the same nucleation rate as in Ref.\\cite{galli_mw} This is a very\nstringent test to our approach, given that in Ref\\cite{galli_mw} \na method that relies neither on CNT nor on the definition of the\ncluster size was used (Forward Flux Sampling). This comparison is made for a supercooling of 35 K, the deepest\ninvestigated in this work. \nFor lower supercoolings, where the critical cluster is larger,\nthe methodology is expected to be even more robust.\nThe advantage of the approach used here is that it allows to estimate (at a \nreasonable computational cost) critical cluster sizes and \nnucleation rates at low and moderate supercooling. \n\n\n\\subsection{Novelty}\n\nIn this paper\nwe provide values for the homogeneous nucleation rate of ice at moderate \nsupercoolings ($\\Delta T < 33$~K). For the first time, this is done without \nextrapolating from measurements at high supercoolings.\nThe experimental determination of $J$ is limited to a narrow temperature window \nat high supercoolings\n(between 233~K and 239~K). 
In that window, $J$ can be directly measured \nwithout introducing any type of approximation. \nIt only requires the knowledge of the droplet volume, the cooling\nrate and the fraction of freezing events. \nDifferences in the value of $J$ \nbetween different experimental groups are\nrelatively small (between one and two orders of magnitude). \nTherefore, the experimental\nvalue of $J$ is well established for the narrow range of temperatures in which the current experimental\ntechniques can probe the nucleation rate.\\cite{taborek,pruppacher1995,knopf11,riechers13,pruppacher_book}\nTo obtain values of $J$ outside that temperature window \none can either extrapolate the data or make an estimate via CNT. \nAn extrapolation from such a narrow temperature window would not be very reliable because $J$ changes sharply with T. \nIn turn, an estimate of $J$ based on CNT relies in the knowledge of the interfacial free energy. \nUnfortunately, our current knowledge of $\\gamma$ for the water-ice\ninterface is far from satisfactory in at least three respects. Firstly, the\ncalculated values of different groups using CNT differ\nsignificantly (see for instance Tables I and II in Ref~\\onlinecite{murray2010}). \nSecondly, the values obtained for $\\gamma$ from CNT seem to\nbe different from those determined for a planar ice-water interface at the\nmelting point (see for instance Fig.~8 in Ref~\\onlinecite{bartell94}).\nFinally, there is even no consensus about the\nvalue of $\\gamma$ for a planar interface at the melting point of water, a\nmagnitude that in principle could be obtained from direct experiments without\ninvoking CNT (values between 25 and 35~mN\/m have been reported). A look to\nFig.~10 of the classic paper of Pruppacher\\cite{pruppacher1995}\nis particularly useful. It shows the enormous\nuncertainty that exists at any temperature about the value of $\\gamma$ for the\nice-water interface. Since $\\gamma$ enters in the estimation of $J$ as\na power of three in an exponential term, the enormous scatter implies that,\nat this moment, there is no reliable estimate of the value of $J$ for moderately supercooled water arising from\nCNT.\nIn other words, you can get many different estimates of $J$ from the different\nestimates of $\\gamma$ shown in the paper by Pruppacher. \\edu{In addition, to the best of our knowledge, \nno one has estimated $J$ using CNT for supersaturations lower than 30 K. \\cite{pruppacher1995,kashchievbook,murray2010}}\n\n\\edu{Regarding the critical nucleus size}, it is not possible at the moment to \nmeasure it experimentally\nby direct observation. \nTherefore, the prediction of the critical cluster at moderate and experimentally accessible supercoolings\nis a novel result. \nSince the TIP4P\/2005 has been quite successful in describing a number of\nproperties of water (notably including the surface tension for the vapor-liquid\nequilibrium) we believe that the values reported here for $\\gamma$ and $J$ from our\nanalysis of the critical cluster are a reasonable estimate for the\ncorresponding values for real water.\n\n\n\\subsection{Summary and outlook}\n\nWe have studied homogeneous ice nucleation by means of computer simulations \nusing the TIP4P\/2005 and TIP4P\/Ice water models. \nThis is the first calculation of the size of the critical cluster and \nthe nucleation rate at moderate supercoolings (14.5-35~K). \nBoth models give similar results when compared at the same supercooling. 
\n\nTo determine the size of the critical cluster, we use a numerical approach in\nthe spirit of direct coexistence methods.\nWe prepare an initial configuration by inserting a large ice cluster (about\n10000, 4600 and 1000~molecules) in an equilibrated sample of liquid water.\nThen, we let the interface equilibrate for 0.2 ns at 200~K. Finally, we\nperform molecular dynamic runs at several temperatures to detect either the\nmelting or the growth of the inserted cluster by monitoring its size. We find\nthat the size of the critical cluster varies from $\\sim$8000~molecules\n(radius$ = 4$~nm) at 15~K below melting to $\\sim$600 molecules (radius$ =\n1.7$~nm) at 35~K below melting. \n\nWe use CNT to estimate\nthe interfacial free energy and the nucleation free energy barrier.\nOur predictions show that \nthe interfacial free energy decreases as the supercooling increases, in agreement\nwith experimental predictions. An extrapolation of the interfacial free energy to the melting\ntemperature gives a value of 29(3)~mN\/m, which is in reasonable agreement \nwith experimental measurements\nand with estimates obtained from computer simulations for TIP4P-like models.\nWe get free energy barriers higher than 250~kT\nfor supercoolings lower than 20~K. This strongly suggests that homogeneous ice nucleation\nfor supercoolings lower than 20~K is virtually impossible. \nWe confirm this by calculating the nucleation rate. To do that we compute, by means of molecular \ndynamics\nsimulations, \nthe rate at which particles attach to the critical clusters. \nThese calculations show that, indeed, for supercoolings lower than 20~K it is impossible\nthat ice nucleates homogeneously. According to this prediction, ice nucleation must necessarily \nbe heterogeneous for supercoolings lower than 20~K. The nucleation rate we obtain at higher \nsupercoolings (30-35~K) agrees, within the statistical uncertainty of our methodology, \nwith experimental measurements. \n\nIt would be interesting to extend this work in several directions. Modifying the shape of the inserted \ncluster (inserting for instance a small crystal with planar faces) or even inserting a block of \ncubic ice Ic to analyse whether this cluster may be more stable as suggested by some studies\\cite{huang_bartell,murray2010} \nare interesting issues that deserve further studies. \nSecondly, it would be of interest to consider other water models, to analyse the possible similarities\/differences\nwith respect to nucleation of different potential models varying significantly either in the charge distribution \nas TIP5P\\cite{mahoney00} or in the way the tetrahedral order is induced as in the mW model.\\cite{mw}\nAnalyzing the behaviour at higher degrees of supercooling than those presented here is another interesting problem as \nwell as the determination of the growth rate of ice.\\cite{kusalik_growth_ice}\nWe foresee that all these issues will be the centre of significant activity in the near future. \\\\ \n\n\n\n\n\\section{Supporting Information to ``Homogeneous ice nucleation at moderate supercooling from molecular simulation\"}\n\\subsection{Equilibration of the initial configuration}\n\nThe preparation protocol used for all cluster sizes is the following.\nAfter having inserted the ice cluster in the supercooled liquid, we remove the liquid molecules \noverlapping with the solid ones. 
Next, we equilibrate the system for about 0.2~ns at 200 K.\nTo make sure that the chosen 0.2~ns is a proper equilibration time, long enough to allow \nfor annealing mismatches at the interface, we run a simulation \nstarting from the initial configuration at time zero. \nIn Fig.~\\ref{fig:purity}, we represent the cluster size for the Large system of\nthe TIP4P\/2005 model as a function of time. \n\\begin{figure}[h!]\n\\centerline{\\includegraphics[clip,width=0.5\\columnwidth]{200_equilibrado.eps}}\n\\caption{Equilibration for the L system of the TIP4P\/2005 model. After an initial drop of the cluster size due to equilibration of the interphase, the cluster size changes very slowly with time.}\n\\label{fig:purity}\n\\end{figure}\nThe figure shows that, even though the dynamics is very slow at such low\ntemperatures, the chosen equilibration time is long enough. This result is\nindependent on the chosen cluster size or water model potential.\n\n\\subsection{Choice of the order parameter to distinguish between liquid\/solid particles}\n\nThe use of an alternative order parameter to identify solid-like particles\n($\\bar{q}_{3}$) does not affect the observed response of the cluster to\ntemperature. This is shown in Fig.~\\ref{q3op}, where the number of particles in\nthe cluster is monitored with two different order parameters for three\ndifferent temperatures. Both order parameters allow to conclude that the\ninserted cluster is critical between 235 and 240 K.\nObviously, the number of particles that belong to the cluster does depend on the order parameter. \nThe order parameter we use in this work should at least work well for the inner particles \ngiven that, according to Fig.~1 of the main text, it is able to \ndiscriminate between bulk liquid and bulk solid particles. The main ambiguity in the number of particles\nbelonging to the cluster comes from those particles that lie in the interface. \nIn view of Fig.~3 of the main text, it seems that our order parameter is doing reasonably well \nin identifying such particles either. Nonetheless, we have considered an error as large\nas 60\\% in the identification of the {\\em interfacial particles} to estimate the error of the nucleation \nrate. In this way the unavoidable ambiguity in the determination of the cluster size is reflected\nin the error bar of $J$.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.5\\columnwidth]{comparetwoorderparams.eps}\n\\caption{Number of particles in the cluster versus time for system size H, TIP4P\/2005 model. Black curves correspond to 230 K, red ones to 235 K and green to 240 K. \nSolid lines correspond to the analysis made with the order parameter described in the main text. Dashed lines correspond to the use of an alternative order parameter. \nWith such order parameter particles are considered as neighbors if their oxygen atoms are closer than 3 ~\\AA\\ and are labelled as solid-like whenever \ntheir $\\bar{q}_{3}$ is larger than 0.28.}\n\\label{q3op}\n\\centering\n\\end{figure}\n\n\\subsection{Cluster size and potential energy versus time for all system sizes and model potentials studied}\n\nIn order to determine the temperature at which the cluster was critical, we\nevaluated the highest temperature at which the cluster grows and the lowest\ntemperature at which it melts. In Fig.~4 of the main text we represented the\nnumber of molecules in the ice cluster versus time for system H simulated with\nTIP4P\/2005 potential. 
In what follows, we present the results of the cluster\nsize versus time for all sizes (B,L and H) for both the TIP4P\/2005\n(Fig.\\ref{fig:cluster-energy-2005}) and TIP4P\/Ice water models\n(Fig.\\ref{fig:cluster-energy-ice}). An analogous result could have been\nobtained by monitoring the potential energy of the system as a function of time\n(see the right panels at Figs.~\\ref{fig:cluster-energy-2005} and\n\\ref{fig:cluster-energy-ice}).\n\\begin{figure}[h!]\n\\includegraphics[clip,width=0.5\\columnwidth]{22712-clusters-TIP4P2005.eps}%\n\\includegraphics[clip,width=0.5\\columnwidth]{deltaE-661-tip4p2005.eps}\n\\includegraphics[clip,width=0.5\\columnwidth]{76781-clusters-TIP4P2005.eps}%\n\\includegraphics[clip,width=0.5\\columnwidth]{deltaE-4648-tip4p2005.eps}\n\\includegraphics[clip,width=0.5\\columnwidth]{182585-clusters-TIP4P2005.eps}%\n\\includegraphics[clip,width=0.5\\columnwidth]{deltaE-7805-tip4p2005.eps}\n\\caption{Left-hand panels: Number of molecules in the ice cluster versus time for system B (top), L (middle) and H (bottom) simulated with the TIP4P\/2005 potential. \nRight-hand panels: energy difference (between the energy at time $t$ and the one at time zero) versus time. Results are shown for different temperatures as indicated in the legend.}\n\\label{fig:cluster-energy-2005}\n\\end{figure}\n\\begin{figure}[h!]\n\\includegraphics[clip,width=0.5\\columnwidth]{22712-clusters-tip4pice.eps}%\n\\includegraphics[clip,width=0.5\\columnwidth]{deltaE-661-tip4pice.eps}\n\\includegraphics[clip,width=0.5\\columnwidth]{76781_clusters_tip4pice.eps}%\n\\includegraphics[clip,width=0.5\\columnwidth]{deltaE-4648-tip4pice.eps}\n\\includegraphics[clip,width=0.5\\columnwidth]{182585-clusters-tip4pice.eps}%\n\\includegraphics[clip,width=0.5\\columnwidth]{deltaE-7805-tip4pice.eps}\n\\caption{Same as Fig. \\ref{fig:cluster-energy-2005} but for TIP4P\/Ice instead of TIP4P\/2005.}\n\\label{fig:cluster-energy-ice}\n\\end{figure}\n\nThe energy is much less sensitive to changes in the cluster size than the order parameter. \nThis is due to the fact that the number of molecules in the cluster is a small fraction of the total number of molecules. \nNonetheless, by making a linear fit to the time evolution of the energy\nwe obtain in all cases consistent results with the analysis based in the order parameter: \nthe cluster grows when the slope is negative and shrinks when it is positive. \n\n\\section{Acknowledgments}\nE. Sanz acknowledges financial support from the EU grant 322326-COSAAC-FP7-PEOPLE-2012-CIG \nand from the Spanish grant Ramon y Cajal. C. Valeriani acknowledges financial support\nfrom the EU grant 303941-ANISOKINEQ-FP7-PEOPLE-2011-CIG and from the Spanish grant Juan de la Cierva. \nFundings also come from MECD Project FIS2010-16159 and from CAM MODELICO P2009\/ESP\/1691. \nAll authors acknowledge the use of the super-computational facility Tirant at Valencia \nfrom the Spanish Supercomputing Network (RES), along with the technical support and the\ngenerous allocation of CPU time to carry out this project (through projects QCM-2012-2-0017,\nQCM-2012-3-0038 and QCM-2013-1-0047).\nOne of us (C. Vega) would like to dedicate this paper to the memory of Prof. Tomas Boublik.\nWe thank the three referees for their useful comments and to A. Reinhardt and J.P.K. Doye for sending us a preprint\nof their work prior to publication. 
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe reconstruction problem for ordered sets asks if it is possible to reconstruct the isomorphism type of\na given ordered set from its collection\n(the ``deck\")\nof one-point-deleted subsets.\nIn \\cite{Sanun}, Sands asked\nif ordered sets might even be reconstructible from\nthe the collection of subsets obtained by erasing single maximal elements (the ``maximal deck\").\nThe negative answer to Sands' question in\n\\cite{KRex}\ntogether with the paper \\cite{KRtow} were the starting point for serious investigation of\nthe reconstruction problem for ordered sets.\n(Reconstruction of graphs and other relations has a\nlonger history, see \\cite{BoHem,Manv,Pouinterdit,Stockwindmill}.)\nSince then, results that show reconstructibility given\ncertain types of information (for example, see \\cite{IlRaCG}) as well as\nresults on reconstructibility of certain classes of ordered sets (for example, see \\cite{RaSch})\nhave been proved. For a more comprehensive survey of results\nand references available to date, consider \\cite{JXsurvey,Schbook}.\n\n\n\nRecently in \\cite{Schexam} it has been shown that even the maximal and minimal decks together\nare not sufficient to reconstruct ordered sets.\nMoreover, it was shown in \\cite{Schexam} that there are families of $\\displaystyle{ 2^{O(\\sqrt[3]{n} )} } $\npairwise nonisomorphic ordered sets of size $n$ that\nall have equal maximal and minimal decks.\n\nThe desire to\nrely on only a limited, focused amount of information in a reconstruction proof,\nand the fact that almost all ordered sets are reconstructible from two identifiable maximal cards\n(see \\cite{Schrigid}, Corollary 3.10)\nmotivates extensions of Sands' question.\nWhat subsets of the deck have a reasonable chance to\neffect reconstruction? In \\cite{Schmoreexam} it was shown that the maximal deck plus the minimal deck plus one deck\nobtained by removing points of a rank $k$ that contains no extremal elements are not sufficient for\nreconstruction.\nThe examples in \\cite{Schmoreexam}\nare somewhat limited. They\ncould not be extended to an ambiguity with more than two sets.\nIt also appeared as if they could\nnot be extended to more than two maximal elements, two minimal elements and two elements of rank $k$\nor to more than one middle rank producing equal decks.\nImmediately two questions arise.\n\\begin{itemize}\n\\item\nAre ordered sets reconstructible from the maximal deck, the minimal deck and a deck obtained\nby removing points of rank $k$ if one of these decks has at least three cards?\n\n\\item\nAre ordered sets reconstructible from the maximal deck, the minimal deck and\ntwo decks that were obtained by removing points of ranks $k$ and $l$ with $k\\not= l $?\n\n\\end{itemize}\n\nIn this paper these two questions are answered negatively, even if the neighborhood\ndecks are equal and for any two isomorphic cards the neighborhoods of the\nremoved elements are also isomorphic.\nThe present examples provide new guidance as to what kind of\npartial information is at least needed to\nreconstruct ordered sets.\nIn particular, they show that information derived from\n``small\" ranks, and even from many small ranks, is not sufficient\nto effect reconstruction.\nAnalysis of the examples also leads to results that underscore the role of rigidity in\norder reconstruction (cf. 
Section \\ref{rigsect}).\nIdeas on what types of information to consider next are given in the conclusion.\n\n\n\\section{Basic Definitions and Preliminaries}\n\nAn {\\bf ordered set} is a set $P$ equipped with a reflexive, antisymmetric and\ntransitive relation $\\leq $, the order relation.\nThroughout this paper we will assume that all ordered sets involved are finite.\nElements $x,y\\in P$ are called {\\bf comparable} iff $x\\leq y$ or $y\\leq x$.\nAn {\\bf antichain} is an ordered set in which each element is only comparable to itself.\nA {\\bf chain} is an ordered set\nin which any two elements are comparable.\nThe {\\bf length} of a chain is its number of elements minus $1$.\nAn element $m\\in P$ is called {\\bf maximal} iff for all $x$ comparable to $m$ we\nhave $x\\leq m$. {\\bf Minimal} elements are defined dually.\nThe {\\bf rank} of an element $x\\in P$ is the length of the longest chain that has a minimal element\nas its smallest element and $x$ as its largest element.\n\n\nThe {\\bf dual} $P^d $ of an ordered set $P$ is the ordered set obtained by reversing all\ncomparabilities.\nThe {\\bf dual rank} of an element $x\\in P$ is the length of the longest chain that has a maximal element\nas its largest element and $x$ as its smallest element.\n\n\nA function $f:P\\to Q$ from the ordered set $P$ to the ordered set $Q$ is called {\\bf order-preserving}\niff for all $x,y\\in P$ we have that $x\\leq y$ implies $f(x)\\leq f(y)$.\nThe function $\\varphi :P\\to Q$ is called an {\\bf (order) isomorphism} iff $\\varphi $ is bijective, order-preserving\nand $\\varphi ^{-1} $ is order-preserving, too.\nAn order isomorphism with equal domain and range is called an {\\bf (order) automorphism}.\nAn ordered set with exactly one order automorphism (the identity)\nis called {\\bf rigid}.\n\n\n\n\nFor precise overall reconstruction terminology, cf. \\cite{Schbook,Schrigid,Schmoreexam}.\nFor the purposes of this paper, a {\\bf card} of an ordered set is a subset\nwith one point deleted.\nIf the deleted element is of rank $k$ we shall also call the card a {\\bf rank $k$ card}.\nThe set of all cards obtained by erasing elements of rank $k$ is called the\n{\\bf rank $k$ deck}.\nThe set of all rank $k$ decks is called the {\\bf ranked deck}.\nA rank $k$ card is {\\bf marked} iff there is a function that indicates the\nrank of each element in the original set.\nThe set of all marked rank $k$ cards is the {\\bf marked rank $k$ deck} and the\nset of all marked rank $k$ decks is the {\\bf marked ranked deck}.\nA {\\bf maximal card} is a subset in which a maximal element is erased\nand a {\\bf minimal card} is a subset in which a minimal element is erased.\nThe sets of maximal and minimal cards respectively are called the {\\bf maximal deck} and the\n{\\bf minimal deck}.\n{\\bf Marked maximal cards} are maximal cards for which there is a function that indicates which\nelements are maximal in the original set. The set of all marked maximal cards is called the\n{\\bf marked maximal deck}.\n{\\bf Marked minimal cards} and the {\\bf marked minimal deck} are defined dually.\nIsomorphic cards will also be called {\\bf equal cards}, because their isomorphism\nclasses are equal. Decks will be called {\\bf equal} iff there is a\nbijection such that each card is isomorphic to its image.\nMarked cards will be called equal iff there is an isomorphism that preserves the marked property\n(rank in the original set or maximality\/minimality in the original set). 
Marked decks will be called\nequal iff there is a bijection such that any set is isomorphic to its image and each\nisomorphism also preserves the marked property.\nThe {\\bf up-set} of an element $x$ is the set $\\uparrow x=\\{ p\\in P:p\\geq x\\} $\nand the {\\bf down-set} is $\\downarrow x=\\{ p\\in P:p\\leq x\\} $.\nThe {\\bf neighborhood} of an element $x$ is the set\n$\\updownarrow x=\\uparrow x\\cup \\downarrow x$.\nThe set of all neighborhoods of points of rank $k$ is called the\n{\\bf rank $k$ neighborhood deck}.\n\n\nWe are concerned with results that show what type of information is {\\em not}\nsufficient to effect reconstruction. Therefore, throughout we\nconstruct nonisomorphic ordered sets such that between some of their cards\nthere are isomorphisms with certain\nproperties.\n\n\n\n\\section{Pairs of Nonisomorphic Ordered Sets with Equal Maximal and Minimal Decks and\n$n+1$ Non-Extremal Ranks for Which the Rank $k$ Decks are Equal}\n\n\nIn this section we describe the fundamental construction used to build the examples.\nLemma \\ref{severalmiddle} gives the overall idea, which is an extension of the work in \\cite{Schmoreexam}.\nLemma \\ref{severalmiddle} is also quite similar to\nthe examples in \\cite{Pouinterdit}.\nIn a way, Lemma \\ref{severalmiddle} is reminiscent of a ``vertical M\\\"obius strip\".\n\n\n\nLemma \\ref{getsmallerR} then shows that sets as needed in the construction in\nLemma \\ref{severalmiddle} actually exist.\nIn the following, we explore some features of the construction as well as variations\nthat lead to examples with\nother properties.\n\n\n\\begin{lem}\n\\label{severalmiddle}\n\nLet $Q$ be an ordered set such that\n\\begin{enumerate}\n\\item\n$Q$ has exactly two maximal elements $d$ and $p$,\n\n\\item\n$d$ and $p$ have the same rank,\n\n\\item\n\\label{Qpsi}\nThere is an isomorphism $\\psi :Q\\setminus \\{ d\\} \\to Q\\setminus \\{ p\\} $\nwith $\\psi (p)=d$,\n\n\\item\n$Q$ has two minimal elements $a$ and $b$,\n\n\\item\n\\label{Qpsia}\n$Q\\setminus \\{ a\\} $ has an automorphism $\\psi ^a $ with $\\psi ^a\n(p) = d$, $\\psi ^a (d) = p$, and $\\psi ^a (b) = b$,\n\n\\item\n\\label{Qpsib}\n$Q\\setminus \\{ b\\} $ has an automorphism $\\psi ^b $ with $\\psi ^b\n(p) = d$ and $\\psi ^b (d) = p$, and $\\psi ^b (a) = a$,\n\n\\item\n$Q$ is rigid.\n\n\\setcounter{itemtransfer}{\\value{enumi}}\n\n\\end{enumerate}\n\nLet $R$ be an ordered set such that\n\\begin{enumerate}\n\\setcounter{enumi}{\\value{itemtransfer}}\n\n\\item\n\\label{Rtwomax}\n$R$ has exactly two maximal elements $d$ and $p$,\n\n\\item\n\\label{Rtwomaxrnk}\n$d$ and $p$ have the same rank,\n\n\\item\n\\label{Rtwomin}\n$R$ has exactly two minimal elements $\\overline{d}$ and\n$\\overline{p}$,\n\n\n\\item\n\\label{Rpaut}\n$R\\setminus \\{ \\overline{p} \\} $ has an automorphism\n$\\psi ^{\\overline{p} } $ such that $\\psi ^{\\overline{p} } (d)=p $,\n$\\psi ^{\\overline{p} } (p)=d $, and $\\psi ^{\\overline{p} } (\\overline{d} )=\\overline{d} $,\n\n\\item\n\\label{Rdaut}\n$R\\setminus \\{ \\overline{d} \\} $ has an automorphism\n$\\psi ^{\\overline{d} } $ such that $\\psi ^{\\overline{d} } (d)=p $,\n$\\psi ^{\\overline{d} } (p)=d $, and $\\psi ^{\\overline{d} } (\\overline{p} )=\\overline{p} $,\n\n\n\\item\n\\label{Rswit}\n$R$ has an automorphism $\\varphi $ with $\\varphi (d)=p$, $\\varphi (p)=d$,\n$\\varphi (\\overline{d})=\\overline{p}$, $\\varphi (\\overline{p})=\\overline{d}$,\n\n\\item\n\\label{notwistR}\n$R$ has no automorphism that is the identity on the minimal elements and not 
the\nidentity on the maximal elements.\n\n\n\\end{enumerate}\n\nLet $\\tilde{Q} $ be the dual of $Q$ and let $R_1 , \\ldots , R_n $ be isomorphic copies of\n$R$ such that $Q, \\tilde{Q} , R_1 , \\ldots , R_n $ are all mutually disjoint.\nLet the elements of $R_i $ be distinguished by subscripts $i$, that is,\nthe maximal and minimal elements of $R_i $ are $d_i $, $p_i $ and $\\overline{d} _i $, $\\overline{p} _i $.\nLet the elements of $\\tilde{Q}$ similarly be distinguished by tildes.\n\nDefine $P_1 $ to be the ordered set obtained from\n$Q, R_1 , \\ldots , R_n , \\tilde{Q} $ as follows (also cf. Figure \\ref{switstck}).\n\n\\renewcommand{\\theenumi}{\\roman{enumi}}\n\n\\begin{enumerate}\n\\item\nAll non-maximal elements of $Q$ are below all non-minimal elements of $R_1 $.\n\n\\item\nThe element $d$ is identified with the element $\\overline{d} _1 $ and\nthe element $p$ is identified with the element $\\overline{p} _1 $.\nCall the thus obtained elements $d_0 $ and $p_0 $, respectively.\n\n\\item\nFor $i=1, \\ldots , n-1$,\nall non-maximal elements of $R_i $ are below all non-minimal elements of $R_{i+1} $.\n\n\\item\nFor $i=1, \\ldots , n-1$,\nthe element $d_i $ is identified with the element $\\overline{d} _{i+1} $ and\nthe element $p_i $ is identified with the element $\\overline{p} _{i+1} $.\nCall the thus obtained elements $d_i $ and $p_i $, respectively.\n\n\\item\nAll non-maximal elements of $R_n $ are below all non-minimal elements of $\\tilde{Q} $.\n\n\\item\n\\label{lastmerge}\nThe element $d_n $ is identified with the element $\\tilde{d} $ and\nthe element $p_n $ is identified with the element $\\tilde{p} $.\nCall the thus obtained elements $d_n $ and $p_n $, respectively.\n\n\\item\nPlus all comparabilities forced by transitivity.\n\n\\end{enumerate}\n\n\nDefine $P_2 $ to be the ordered set obtained from\n$Q, R_1 , \\ldots , R_n , \\tilde{Q} $ in the same way as $P_1 $ except that\n\\ref{lastmerge} is replaced with the following (also cf. 
Figure \\ref{switstck}).\n\n\n\\begin{enumerate}\n\\item[\\ref{lastmerge}']\nThe element $d_n $ is identified with the element $\\tilde{p} $ and\nthe element $p_n $ is identified with the element $\\tilde{d} $.\nCall the thus obtained elements $d_n $ and $p_n $, respectively.\n\n\n\\end{enumerate}\n\n\n\n\\renewcommand{\\theenumi}{\\alph{enumi}}\n\n\nThen\n\\begin{enumerate}\n\\item\n\\label{P1P2notiso}\n$P_1 $ and $P_2 $ are not isomorphic,\n\\item\n\\label{P1P2eqmarkedextr}\n$P_1 $ and $P_2 $ have\nequal marked maximal and minimal decks,\n\n\\item\n\\label{dipieqmark}\nFor $i=0, \\ldots , n$\nthe\ncard $P_1 \\setminus \\{ d_i \\} $ is isomorphic to the card $P_2\\setminus \\{ d_i \\} $ and\nthe\ncard $P_1 \\setminus \\{ p_i \\} $ is isomorphic to the card $P_2\\setminus \\{ p_i \\} $,\nand for all isomorphisms $\\Phi $ of cards and all\nelements $x\\in P_1 $ we have that ${\\rm rank} _{P_2 } (\\Phi (x) )={\\rm rank} _{P_1 } (x)$.\n\n\\item\n\\label{P1P2eqnremhood}\nFor all the above mentioned isomorphic cards $P_1 \\setminus \\{ x\\} $\nand $P_2 \\setminus \\{ x\\} $ the\nneighborhoods $\\updownarrow _{P_1 } x $ and $\\updownarrow _{P_2 } x $\nare isomorphic.\n\n\\end{enumerate}\n\n\\renewcommand{\\theenumi}{\\arabic{enumi}}\n\n\\end{lem}\n\n\n\\begin{figure}\n\n\\centerline{\n\\unitlength 1.1mm\n\\linethickness{0.4pt}\n\\begin{picture}(110.00,146.00)\n\\put(20.00,10.00){\\line(1,-1){5.00}}\n\\put(25.00,5.00){\\line(1,1){5.00}}\n\\put(30.00,10.00){\\line(1,-1){5.00}}\n\\put(35.00,5.00){\\line(1,1){5.00}}\n\\put(40.00,10.00){\\line(0,1){10.00}}\n\\put(40.00,20.00){\\line(-1,1){5.00}}\n\\put(35.00,25.00){\\line(-1,-1){5.00}}\n\\put(30.00,20.00){\\line(-1,1){5.00}}\n\\put(25.00,25.00){\\line(-1,-1){5.00}}\n\\put(20.00,20.00){\\line(0,-1){10.00}}\n\\put(30.00,17.00){\\makebox(0,0)[cc]{\\footnotesize $Q$}}\n\\put(30.00,13.00){\\makebox(0,0)[cc]{\\footnotesize {\\em not} symmetric}}\n\\put(25.00,15.00){\\vector(1,0){10.00}}\n\\put(35.00,5.00){\\circle*{2.00}}\n\\put(25.00,5.00){\\circle*{2.00}}\n\\put(25.00,2.00){\\makebox(0,0)[cc]{\\footnotesize $a$}}\n\\put(35.00,2.00){\\makebox(0,0)[cc]{\\footnotesize $b$}}\n\\put(37.00,25.00){\\makebox(0,0)[lc]{\\footnotesize $p_0 = p=\\overline{p}_1 $}}\n\\put(23.00,25.00){\\makebox(0,0)[rc]{\\footnotesize $d_0 = d=\\overline{d}_1 $}}\n\\put(20.00,30.00){\\line(1,-1){5.00}}\n\\put(25.00,25.00){\\line(1,1){5.00}}\n\\put(30.00,30.00){\\line(1,-1){5.00}}\n\\put(35.00,25.00){\\line(1,1){5.00}}\n\\put(40.00,30.00){\\line(0,1){10.00}}\n\\put(40.00,40.00){\\line(-1,1){5.00}}\n\\put(35.00,45.00){\\line(-1,-1){5.00}}\n\\put(30.00,40.00){\\line(-1,1){5.00}}\n\\put(25.00,45.00){\\line(-1,-1){5.00}}\n\\put(20.00,40.00){\\line(0,-1){10.00}}\n\\put(30.00,37.00){\\makebox(0,0)[cc]{\\footnotesize $R_1 $}}\n\\put(30.00,33.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(35.00,25.00){\\circle*{2.00}}\n\\put(25.00,25.00){\\circle*{2.00}}\n\\put(37.00,45.00){\\makebox(0,0)[lc]{\\footnotesize $p_1 =\\overline{p}_2 $}}\n\\put(23.00,45.00){\\makebox(0,0)[rc]{\\footnotesize $d_1=\\overline{d}_2 $}}\n\\put(20.00,50.00){\\line(1,-1){5.00}}\n\\put(25.00,45.00){\\line(1,1){5.00}}\n\\put(30.00,50.00){\\line(1,-1){5.00}}\n\\put(35.00,45.00){\\line(1,1){5.00}}\n\\put(40.00,50.00){\\line(0,1){10.00}}\n\\put(40.00,60.00){\\line(-1,1){5.00}}\n\\put(35.00,65.00){\\line(-1,-1){5.00}}\n\\put(30.00,60.00){\\line(-1,1){5.00}}\n\\put(25.00,65.00){\\line(-1,-1){5.00}}\n\\put(20.00,60.00){\\line(0,-1){10.00}}\n\\put(30.00,57.00){\\makebox(0,0)[cc]{\\footnotesize $R_2 
$}}\n\\put(30.00,53.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(35.00,45.00){\\circle*{2.00}}\n\\put(25.00,45.00){\\circle*{2.00}}\n\\put(37.00,65.00){\\makebox(0,0)[lc]{\\footnotesize $p_2 =\\overline{p}_3 $}}\n\\put(23.00,65.00){\\makebox(0,0)[rc]{\\footnotesize $d_2=\\overline{d}_3 $}}\n\\put(35.00,65.00){\\circle*{2.00}}\n\\put(25.00,65.00){\\circle*{2.00}}\n\\put(30.00,75.00){\\makebox(0,0)[cc]{$\\vdots $}}\n\\put(37.00,85.00){\\makebox(0,0)[lc]{\\footnotesize $p_{n-2} =\\overline{p}_{n-1} $}}\n\\put(23.00,85.00){\\makebox(0,0)[rc]{\\footnotesize $d_{n-2} =\\overline{d}_{n-1} $}}\n\\put(20.00,90.00){\\line(1,-1){5.00}}\n\\put(25.00,85.00){\\line(1,1){5.00}}\n\\put(30.00,90.00){\\line(1,-1){5.00}}\n\\put(35.00,85.00){\\line(1,1){5.00}}\n\\put(40.00,90.00){\\line(0,1){10.00}}\n\\put(40.00,100.00){\\line(-1,1){5.00}}\n\\put(35.00,105.00){\\line(-1,-1){5.00}}\n\\put(30.00,100.00){\\line(-1,1){5.00}}\n\\put(25.00,105.00){\\line(-1,-1){5.00}}\n\\put(20.00,100.00){\\line(0,-1){10.00}}\n\\put(30.00,97.00){\\makebox(0,0)[cc]{\\footnotesize $R_{n-1} $}}\n\\put(30.00,93.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(35.00,85.00){\\circle*{2.00}}\n\\put(25.00,85.00){\\circle*{2.00}}\n\\put(37.00,105.00){\\makebox(0,0)[lc]{\\footnotesize $p_{n-1} =\\overline{p}_n $}}\n\\put(23.00,105.00){\\makebox(0,0)[rc]{\\footnotesize $d_{n-1} =\\overline{d}_n $}}\n\\put(20.00,110.00){\\line(1,-1){5.00}}\n\\put(25.00,105.00){\\line(1,1){5.00}}\n\\put(30.00,110.00){\\line(1,-1){5.00}}\n\\put(35.00,105.00){\\line(1,1){5.00}}\n\\put(40.00,110.00){\\line(0,1){10.00}}\n\\put(40.00,120.00){\\line(-1,1){5.00}}\n\\put(35.00,125.00){\\line(-1,-1){5.00}}\n\\put(30.00,120.00){\\line(-1,1){5.00}}\n\\put(25.00,125.00){\\line(-1,-1){5.00}}\n\\put(20.00,120.00){\\line(0,-1){10.00}}\n\\put(30.00,117.00){\\makebox(0,0)[cc]{\\footnotesize $R_n $}}\n\\put(30.00,113.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(35.00,105.00){\\circle*{2.00}}\n\\put(25.00,105.00){\\circle*{2.00}}\n\\put(37.00,125.00){\\makebox(0,0)[lc]{\\footnotesize $p_n =\\tilde{p} $}}\n\\put(23.00,125.00){\\makebox(0,0)[rc]{\\footnotesize $d_n=\\tilde{d} $}}\n\\put(35.00,125.00){\\circle*{2.00}}\n\\put(25.00,125.00){\\circle*{2.00}}\n\\put(20.00,130.00){\\line(1,-1){5.00}}\n\\put(25.00,125.00){\\line(1,1){5.00}}\n\\put(30.00,130.00){\\line(1,-1){5.00}}\n\\put(35.00,125.00){\\line(1,1){5.00}}\n\\put(40.00,130.00){\\line(0,1){10.00}}\n\\put(40.00,140.00){\\line(-1,1){5.00}}\n\\put(35.00,145.00){\\line(-1,-1){5.00}}\n\\put(30.00,140.00){\\line(-1,1){5.00}}\n\\put(25.00,145.00){\\line(-1,-1){5.00}}\n\\put(20.00,140.00){\\line(0,-1){10.00}}\n\\put(30.00,137.67){\\makebox(0,0)[cc]{\\footnotesize $\\tilde{Q} $}}\n\\put(30.00,133.00){\\makebox(0,0)[cc]{\\footnotesize {\\em not} symmetric}}\n\\put(25.00,135.00){\\vector(1,0){10.00}}\n\\put(37.00,145.00){\\makebox(0,0)[lc]{\\footnotesize $\\tilde{b} $}}\n\\put(23.00,145.00){\\makebox(0,0)[rc]{\\footnotesize $\\tilde{a} $}}\n\\put(35.00,145.00){\\circle*{2.00}}\n\\put(25.00,145.00){\\circle*{2.00}}\n\\put(5.00,145.00){\\makebox(0,0)[cc]{$P_1 
$}}\n\\put(90.00,10.00){\\line(1,-1){5.00}}\n\\put(95.00,5.00){\\line(1,1){5.00}}\n\\put(100.00,10.00){\\line(1,-1){5.00}}\n\\put(105.00,5.00){\\line(1,1){5.00}}\n\\put(110.00,10.00){\\line(0,1){10.00}}\n\\put(110.00,20.00){\\line(-1,1){5.00}}\n\\put(105.00,25.00){\\line(-1,-1){5.00}}\n\\put(100.00,20.00){\\line(-1,1){5.00}}\n\\put(95.00,25.00){\\line(-1,-1){5.00}}\n\\put(90.00,20.00){\\line(0,-1){10.00}}\n\\put(100.00,17.00){\\makebox(0,0)[cc]{\\footnotesize $Q$}}\n\\put(100.00,13.00){\\makebox(0,0)[cc]{\\footnotesize {\\em not} symmetric}}\n\\put(95.00,15.00){\\vector(1,0){10.00}}\n\\put(105.00,5.00){\\circle*{2.00}}\n\\put(95.00,5.00){\\circle*{2.00}}\n\\put(95.00,2.00){\\makebox(0,0)[cc]{\\footnotesize $a$}}\n\\put(105.00,2.00){\\makebox(0,0)[cc]{\\footnotesize $b$}}\n\\put(107.00,25.00){\\makebox(0,0)[lc]{\\footnotesize $p_0 = p=\\overline{p}_1 $}}\n\\put(93.00,25.00){\\makebox(0,0)[rc]{\\footnotesize $d_0 = d=\\overline{d}_1 $}}\n\\put(90.00,30.00){\\line(1,-1){5.00}}\n\\put(95.00,25.00){\\line(1,1){5.00}}\n\\put(100.00,30.00){\\line(1,-1){5.00}}\n\\put(105.00,25.00){\\line(1,1){5.00}}\n\\put(110.00,30.00){\\line(0,1){10.00}}\n\\put(110.00,40.00){\\line(-1,1){5.00}}\n\\put(105.00,45.00){\\line(-1,-1){5.00}}\n\\put(100.00,40.00){\\line(-1,1){5.00}}\n\\put(95.00,45.00){\\line(-1,-1){5.00}}\n\\put(90.00,40.00){\\line(0,-1){10.00}}\n\\put(100.00,37.00){\\makebox(0,0)[cc]{\\footnotesize $R_1 $}}\n\\put(100.00,33.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(105.00,25.00){\\circle*{2.00}}\n\\put(95.00,25.00){\\circle*{2.00}}\n\\put(107.00,45.00){\\makebox(0,0)[lc]{\\footnotesize $p_1 =\\overline{p}_2 $}}\n\\put(93.00,45.00){\\makebox(0,0)[rc]{\\footnotesize $d_1=\\overline{d}_2 $}}\n\\put(90.00,50.00){\\line(1,-1){5.00}}\n\\put(95.00,45.00){\\line(1,1){5.00}}\n\\put(100.00,50.00){\\line(1,-1){5.00}}\n\\put(105.00,45.00){\\line(1,1){5.00}}\n\\put(110.00,50.00){\\line(0,1){10.00}}\n\\put(110.00,60.00){\\line(-1,1){5.00}}\n\\put(105.00,65.00){\\line(-1,-1){5.00}}\n\\put(100.00,60.00){\\line(-1,1){5.00}}\n\\put(95.00,65.00){\\line(-1,-1){5.00}}\n\\put(90.00,60.00){\\line(0,-1){10.00}}\n\\put(100.00,57.00){\\makebox(0,0)[cc]{\\footnotesize $R_2 $}}\n\\put(100.00,53.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(105.00,45.00){\\circle*{2.00}}\n\\put(95.00,45.00){\\circle*{2.00}}\n\\put(107.00,65.00){\\makebox(0,0)[lc]{\\footnotesize $p_2 =\\overline{p}_3 $}}\n\\put(93.00,65.00){\\makebox(0,0)[rc]{\\footnotesize $d_2=\\overline{d}_3 $}}\n\\put(105.00,65.00){\\circle*{2.00}}\n\\put(95.00,65.00){\\circle*{2.00}}\n\\put(100.00,75.00){\\makebox(0,0)[cc]{$\\vdots $}}\n\\put(107.00,85.00){\\makebox(0,0)[lc]{\\footnotesize $p_{n-2} =\\overline{p}_{n-1} $}}\n\\put(93.00,85.00){\\makebox(0,0)[rc]{\\footnotesize $d_{n-2} =\\overline{d}_{n-1} $}}\n\\put(90.00,90.00){\\line(1,-1){5.00}}\n\\put(95.00,85.00){\\line(1,1){5.00}}\n\\put(100.00,90.00){\\line(1,-1){5.00}}\n\\put(105.00,85.00){\\line(1,1){5.00}}\n\\put(110.00,90.00){\\line(0,1){10.00}}\n\\put(110.00,100.00){\\line(-1,1){5.00}}\n\\put(105.00,105.00){\\line(-1,-1){5.00}}\n\\put(100.00,100.00){\\line(-1,1){5.00}}\n\\put(95.00,105.00){\\line(-1,-1){5.00}}\n\\put(90.00,100.00){\\line(0,-1){10.00}}\n\\put(100.00,97.00){\\makebox(0,0)[cc]{\\footnotesize $R_{n-1} $}}\n\\put(100.00,93.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(105.00,85.00){\\circle*{2.00}}\n\\put(95.00,85.00){\\circle*{2.00}}\n\\put(107.00,105.00){\\makebox(0,0)[lc]{\\footnotesize $p_{n-1} =\\overline{p}_n 
$}}\n\\put(93.00,105.00){\\makebox(0,0)[rc]{\\footnotesize $d_{n-1} =\\overline{d}_n $}}\n\\put(90.00,110.00){\\line(1,-1){5.00}}\n\\put(95.00,105.00){\\line(1,1){5.00}}\n\\put(100.00,110.00){\\line(1,-1){5.00}}\n\\put(105.00,105.00){\\line(1,1){5.00}}\n\\put(110.00,110.00){\\line(0,1){10.00}}\n\\put(110.00,120.00){\\line(-1,1){5.00}}\n\\put(105.00,125.00){\\line(-1,-1){5.00}}\n\\put(100.00,120.00){\\line(-1,1){5.00}}\n\\put(95.00,125.00){\\line(-1,-1){5.00}}\n\\put(90.00,120.00){\\line(0,-1){10.00}}\n\\put(100.00,117.00){\\makebox(0,0)[cc]{\\footnotesize $R_n $}}\n\\put(100.00,113.00){\\makebox(0,0)[cc]{\\footnotesize symmetric}}\n\\put(105.00,105.00){\\circle*{2.00}}\n\\put(95.00,105.00){\\circle*{2.00}}\n\\put(107.00,125.00){\\makebox(0,0)[lc]{\\footnotesize $p_n =\\tilde{d} $}}\n\\put(93.00,125.00){\\makebox(0,0)[rc]{\\footnotesize $d_n=\\tilde{p} $}}\n\\put(105.00,125.00){\\circle*{2.00}}\n\\put(95.00,125.00){\\circle*{2.00}}\n\\put(90.00,130.00){\\line(1,-1){5.00}}\n\\put(95.00,125.00){\\line(1,1){5.00}}\n\\put(100.00,130.00){\\line(1,-1){5.00}}\n\\put(105.00,125.00){\\line(1,1){5.00}}\n\\put(110.00,130.00){\\line(0,1){10.00}}\n\\put(110.00,140.00){\\line(-1,1){5.00}}\n\\put(105.00,145.00){\\line(-1,-1){5.00}}\n\\put(100.00,140.00){\\line(-1,1){5.00}}\n\\put(95.00,145.00){\\line(-1,-1){5.00}}\n\\put(90.00,140.00){\\line(0,-1){10.00}}\n\\put(100.00,137.67){\\makebox(0,0)[cc]{\\footnotesize $\\tilde{Q} $}}\n\\put(100.00,133.00){\\makebox(0,0)[cc]{\\footnotesize {\\em not} symmetric}}\n\\put(107.00,145.00){\\makebox(0,0)[lc]{\\footnotesize $\\tilde{a} $}}\n\\put(93.00,145.00){\\makebox(0,0)[rc]{\\footnotesize $\\tilde{b} $}}\n\\put(105.00,145.00){\\circle*{2.00}}\n\\put(95.00,145.00){\\circle*{2.00}}\n\\put(75.00,145.00){\\makebox(0,0)[cc]{$P_2 $}}\n\\put(105.00,135.00){\\vector(-1,0){10.00}}\n\\end{picture}\n}\n\n\\caption{Ordered sets $P_1 $ and $P_2 $ as constructed in Lemma \\protect\\ref{severalmiddle}.\nThe mentioned symmetry is symmetry along the vertical axis.}\n\\label{switstck}\n\n\\end{figure}\n\n\n{\\bf Proof.}\nTo prove \\ref{P1P2notiso}, we assume that $P_1 $ and $P_2 $ are isomorphic.\nSo suppose that $\\Phi :P_1 \\to P_2 $ is an isomorphism.\nThen, because $Q$ is rigid, we have that $\\Phi (a)=a$, $\\Phi (b)=b$,\n$\\Phi (d_0 ) = d_0 $ and\n$\\Phi (p_0 )= p_0 $.\nBy property \\ref{notwistR},\nthis implies that\n$\\Phi (d_1 ) = d_1 $,\n$\\Phi (p_1 )= p_1 $, $\\ldots $, $\\Phi (d_n ) = d_n $,\n$\\Phi (p_n )= p_n $.\nBut then, because in $P_1 $ we have $d_n = \\tilde{d} $ and $p_n = \\tilde{p}$, while\nin $P_2 $ we have $d_n = \\tilde{p} $ and $p_n = \\tilde{d}$, $\\Phi |_{\\tilde{Q} } $ would be an\nautomorphism of $\\tilde{Q}$ with\n$\n{ \\Phi |_{\\tilde{Q} } ( \\tilde{d} ) = \\tilde{p} } $ and\n${ \\Phi |_{\\tilde{Q} } ( \\tilde{p} ) = \\tilde{d} } $.\nThis is a contradiction to the rigidity of $\\tilde{Q} $.\nTherefore, $P_1 $ and $P_2 $ cannot be isomorphic.\n\n\n\nTo show\n\\ref{P1P2eqmarkedextr},\nfirst note that\n$P_1 \\setminus \\{ a\\} $ is isomorphic to $P_2 \\setminus \\{ a\\} $.\nTo see this\nlet $\\varphi _i $ denote the automorphism for $R_i $ guaranteed by\nproperty \\ref{Rswit}.\nWe define\n\n$$\\Phi (x):=\\cases{ \\psi ^a (x); & if $x\\in Q\\setminus \\{ a\\} $, \\cr\n\\varphi _i (x); & if $x\\in R_i $, \\cr\nx; & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{d}, \\tilde{p} \\} $. 
\\cr\n} $$\n\nThe function $\\Phi $ is well-defined and bijective between\n$P_1 \\setminus \\{ a\\} $ and $P_2 \\setminus \\{ a\\} $ and it maps minimal elements of $P_1 $\nto minimal elements of $P_2 $.\nTo see that $\\Phi $ is order-preserving both ways, let $xx$ we have $y\\in \\tilde{Q} $ and thus $\\Phi (y)=y$. The other direction, as well as the proof for\n$x=\\tilde{p} = p_n $ is similar.\n\n\nWe have shown that\n$P_1 \\setminus \\{ a\\} $ is isomorphic to $P_2 \\setminus \\{ a\\} $ and that the isomorphism\npreserves the marked property ``minimality\".\nSimilarly, $P_1 \\setminus \\{ b\\} $ is isomorphic to $P_2 \\setminus \\{ b\\} $\n(and minimal elements of $P_1 $ are mapped to minimal elements of $P_2 $) via\n\n$$\\Phi (x):=\\cases{ \\psi ^b (x); & if $x\\in Q\\setminus \\{ b\\} $, \\cr\n\\varphi _i (x); & if $x\\in R_i $, \\cr\nx; & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{d}, \\tilde{p} \\} $. \\cr\n} $$\n\nThe proof that $P_1 $ and $P_2 $ have equal marked maximal decks is similar.\nThe set\n$P_1 \\setminus \\{ \\tilde{a} \\} $ is isomorphic to $P_2 \\setminus \\{ \\tilde{a} \\} $ via\n\n$$\\Phi (x):=\\cases{ \\psi ^{\\tilde{a} } (x); & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{a}\\} $, \\cr\nx; & if $x\\not\\in \\tilde{Q} $, \\cr\n} $$\n\nwhere $\\psi ^{\\tilde{a} } $ denotes the automorphism of $\\tilde{Q} \\setminus \\{ \\tilde{a} \\} $\nthat is guaranteed by the dual of property \\ref{Qpsia}.\nClearly $\\Phi $ maps maximal elements of $P_1 $ to maximal elements of $P_2 $.\nThe set\n$P_1 \\setminus \\{ \\tilde{b} \\} $ is isomorphic to $P_2 \\setminus \\{ \\tilde{b} \\} $\n(and maximal elements of $P_1 $ are mapped to maximal elements of $P_2 $)\nvia\n\n$$\\Phi (x):=\\cases{ \\psi ^{\\tilde{b} } (x); & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{b}\\} $, \\cr\nx; & if $x\\not\\in \\tilde{Q} $, \\cr\n} $$\n\nwhere $\\psi ^{\\tilde{b} } $ denotes the automorphism of $\\tilde{Q} \\setminus \\{ \\tilde{b} \\} $\nthat is guaranteed by the dual of property \\ref{Qpsib}.\n\n\nIn regards to \\ref{P1P2eqnremhood} note that the above isomorphisms show that\nfor $x\\in \\{ a,b,\\tilde{a},\\tilde{b} \\} $\nthe\nneighborhoods $\\updownarrow _{P_1 } x $ and $\\updownarrow _{P_2 } x $\nare isomorphic.\n(For example, the isomorphism between $P_1 \\setminus \\{ b\\} $ and\n$P_2 \\setminus \\{ b\\} $ provides an isomorphism\nbetween\n$\\updownarrow _{P_1 } a= \\uparrow _{P_1 } a$ and\n$\\updownarrow _{P_2 } a =\\uparrow _{P_2 } a$.)\n\n\nFor \\ref{dipieqmark}, let it be stated here that it is easy to see that\nall isomorphisms $\\Phi $ constructed in the following satisfy\n${\\rm rank} _{P_2 } (\\Phi (x) )={\\rm rank} _{P_1 } (x)$ for all\nelements $x\\in P_1 $.\n\n\n\nNow first notice that\nthe set\n$P_1 \\setminus \\{ d_n \\} $ is isomorphic to $P_2 \\setminus \\{ d_n \\} $ via\n$$\\Phi (x):=\\cases{ \\tilde{\\psi } (x); & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{d}\\} $, \\cr\nx; & if $x\\not\\in \\tilde{Q} $, \\cr\n} $$\n\nwhere $\\tilde{\\psi } $ is the isomorphism guaranteed by the dual of\nproperty \\ref{Qpsi}.\nThe set\n$P_1 \\setminus \\{ p_n \\} $ is isomorphic to $P_2 \\setminus \\{ p_n \\} $ via\n$$\\Phi (x):=\\cases{ \\left( \\tilde{\\psi } \\right) ^{-1} (x); & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{p}\\} $, \\cr\nx; & if $x\\not\\in \\tilde{Q} $. 
\\cr\n} $$\n\n\n\n\n\nFinally let $i\\in \\{ 0,\\ldots , n-1\\} $.\nFor $R_i $ denote the automorphisms $\\psi ^{\\overline{p} } $ and $\\psi ^{\\overline{d} } $\nof the respective cards of $R$\nguaranteed by properties\n\\ref{Rpaut} and \\ref{Rdaut} by $\\psi ^{\\overline{p} } _i $ and $\\psi ^{\\overline{d} } _i $.\nThen\nthe set\n$P_1 \\setminus \\{ p_i \\} $ is isomorphic to $P_2 \\setminus \\{ p_i \\} $ via\n\n$$\\Phi (x):=\\cases{\nx; & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{d}, \\tilde{p} \\} $ or $x\\in R_j $ for $j\\leq i$ or $x\\in Q$, \\cr\n\\varphi _j (x); & if $x\\in R_j $ for $j>i+1$, \\cr\n\\psi ^{\\overline{p} } _i (x) ; & if $x\\in R_{i+1} \\setminus \\{ p_i \\} $.\\cr\n} $$\n\nAll parts of the definition of isomorphism are readily verified.\nSimilarly,\nthe set\n$P_1 \\setminus \\{ d_i \\} $ is isomorphic to $P_2 \\setminus \\{ d_i \\} $ via\n\n$$\\Phi (x):=\\cases{\nx; & if $x\\in \\tilde{Q}\\setminus \\{ \\tilde{d}, \\tilde{p} \\} $ or $x\\in R_j $ for $j\\leq i$ or $x\\in Q$, \\cr\n\\varphi _j (x); & if $x\\in R_j $ for $j>i+1$, \\cr\n\\psi ^{\\overline{d} } _i (x) ; & if $x\\in R_{i+1} \\setminus \\{ d_i \\} $.\\cr\n} $$\n\nTo finish the proof of \\ref{P1P2eqnremhood}, similar to\nwhat was said after the proof of \\ref{P1P2eqmarkedextr},\nthe above isomorphisms show that\nfor $x\\in \\{ d_0 , p_0 , \\ldots , d_{n-1} , p_{n-1} \\} $\nthe\nneighborhoods $\\updownarrow _{P_1 } x $ and $\\updownarrow _{P_2 } x $\nare isomorphic.\nJust use the isomorphism between the cards on which the respective\n``other\" element of that rank has been erased.\n\\hfill\\ \\rule{2mm}{2mm} \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\n\\centerline{\n\\unitlength 1mm\n\\linethickness{0.4pt}\n\\ifx\\plotpoint\\undefined\\newsavebox{\\plotpoint}\\fi\n\\begin{picture}(146,93)(0,0)\n\\put(50,50){\\line(1,-4){2.5}}\n\\put(15,40){\\line(1,4){2.5}}\n\\put(60,50){\\line(1,-4){2.5}}\n\\put(25,40){\\line(1,4){2.5}}\n\\put(70,50){\\line(1,-4){2.5}}\n\\put(35,40){\\line(1,4){2.5}}\n\\put(85,50){\\line(1,-4){2.5}}\n\\put(120,40){\\line(1,4){2.5}}\n\\put(95,50){\\line(1,-4){2.5}}\n\\put(130,40){\\line(1,4){2.5}}\n\\put(105,50){\\line(1,-4){2.5}}\n\\put(140,40){\\line(1,4){2.5}}\n\\put(52.5,40){\\line(1,4){2.5}}\n\\put(17.5,50){\\line(1,-4){2.5}}\n\\put(62.5,40){\\line(1,4){2.5}}\n\\put(27.5,50){\\line(1,-4){2.5}}\n\\put(72.5,40){\\line(1,4){2.5}}\n\\put(37.5,50){\\line(1,-4){2.5}}\n\\put(87.5,40){\\line(1,4){2.5}}\n\\put(122.5,50){\\line(1,-4){2.5}}\n\\put(97.5,40){\\line(1,4){2.5}}\n\\put(132.5,50){\\line(1,-4){2.5}}\n\\put(107.5,40){\\line(1,4){2.5}}\n\\put(142.5,50){\\line(1,-4){2.5}}\n\\put(50,50){\\circle*{1}}\n\\put(15,40){\\circle*{1}}\n\\put(60,50){\\circle*{1}}\n\\put(25,40){\\circle*{1}}\n\\put(70,50){\\circle*{1}}\n\\put(35,40){\\circle*{1}}\n\\put(85,50){\\circle*{1}}\n\\put(120,40){\\circle*{1}}\n\\put(95,50){\\circle*{1}}\n\\put(130,40){\\circle*{1}}\n\\put(105,50){\\circle*{1}}\n\\put(140,40){\\circle*{1}}\n\\put(52.5,40){\\circle*{1}}\n\\put(17.5,50){\\circle*{1}}\n\\put(62.5,40){\\circle*{1}}\n\\put(27.5,50){\\circle*{1}}\n\\put(72.5,40){\\circle*{1}}\n\\put(37.5,50){\\circle*{1}}\n\\put(87.5,40){\\circle*{1}}\n\\put(122.5,50){\\circle*{1}}\n\\put(97.5,40){\\circle*{1}}\n\\put(132.5,50){\\circle*{1}}\n\\put(107.5,40){\\circle*{1}}\n\\put(142.5,50){\\circle*{1}}\n\\put(55,50){\\circle*{1}}\n\\put(20,40){\\circle*{1}}\n\\put(65,50){\\circle*{1}}\n\\put(30,40){\\circle*{1}}\n\\put(75,50){\\circle*{1}}\n\\put(40,40){\\circle*{1}}\n\\put(90,50){\\circle*{1}}\n\\put(125,40){\\circle
*{1}}\n\\put(100,50){\\circle*{1}}\n\\put(135,40){\\circle*{1}}\n\\put(110,50){\\circle*{1}}\n\\put(145,40){\\circle*{1}}\n\\put(75,50){\\line(1,-4){5}}\n\\put(80,30){\\line(1,4){5}}\n\\put(80,30){\\circle*{2}}\n\\qbezier(55,50)(57.5,31.5)(80,30)\n\\qbezier(105,50)(102.5,31.5)(80,30)\n\\qbezier(80,30)(67.5,35)(65,50)\n\\qbezier(80,30)(92.5,35)(95,50)\n\\put(75,50){\\circle{2}}\n\\put(40,40){\\circle{2}}\n\\put(60,50){\\circle{2}}\n\\put(25,40){\\circle{2}}\n\\put(90,50){\\circle{2}}\n\\put(125,40){\\circle{2}}\n\\put(95,50){\\circle{2}}\n\\put(130,40){\\circle{2}}\n\\put(69,49){\\framebox(2,2)[cc]{}}\n\\put(34,39){\\framebox(2,2)[]{}}\n\\put(54,49){\\framebox(2,2)[cc]{}}\n\\put(19,39){\\framebox(2,2)[]{}}\n\\put(84,49){\\framebox(2,2)[cc]{}}\n\\put(119,39){\\framebox(2,2)[]{}}\n\\put(109,49){\\framebox(2,2)[cc]{}}\n\\put(144,39){\\framebox(2,2)[]{}}\n\\put(80,70){\\circle*{2}}\n\\qbezier(20,40)(23,66)(80,70)\n\\qbezier(140,40)(137,66)(80,70)\n\\qbezier(80,70)(33,64)(30,40)\n\\qbezier(80,70)(127,64)(130,40)\n\\qbezier(80,70)(44.5,62)(40,40)\n\\qbezier(80,70)(115.5,62)(120,40)\n\\bezier{40}(15,49)(27.5,47)(40,49)\n\\bezier{40}(145,49)(132.5,47)(120,49)\n\\bezier{40}(75,41)(62.5,43)(50,41)\n\\bezier{40}(85,41)(97.5,43)(110,41)\n\\put(55,41.75){\\line(-3,1){20}}\n\\put(105,41.75){\\line(3,1){20}}\n\\put(83,27){\\makebox(0,0)[cc]{$c_b $}}\n\\put(83,73){\\makebox(0,0)[cc]{$c_t $}}\n\\put(17.5,37){\\makebox(0,0)[cc]{$A_1 $}}\n\\put(52.5,53){\\makebox(0,0)[cc]{$B_1 $}}\n\\put(97.5,53){\\makebox(0,0)[cc]{$C_1 $}}\n\\put(132.5,37){\\makebox(0,0)[cc]{$D_1 $}}\n\\put(27.5,37){\\makebox(0,0)[cc]{$A_2 $}}\n\\put(62.5,53){\\makebox(0,0)[cc]{$B_2 $}}\n\\put(107.5,53){\\makebox(0,0)[cc]{$C_2 $}}\n\\put(142.5,37){\\makebox(0,0)[cc]{$D_2 $}}\n\\put(37.5,37){\\makebox(0,0)[cc]{$A_{1,2} $}}\n\\put(72.5,53){\\makebox(0,0)[cc]{$B_{1,2} $}}\n\\put(87.5,53){\\makebox(0,0)[cc]{$C_{1,2} $}}\n\\put(122.5,37){\\makebox(0,0)[cc]{$D_{1,2} $}}\n\\put(75,10){\\circle*{2}}\n\\put(75,90){\\circle*{2}}\n\\put(85,10){\\circle*{2}}\n\\put(85,90){\\circle*{2}}\n\\put(75,7){\\makebox(0,0)[cc]{$\\overline{d}$}}\n\\put(75,93){\\makebox(0,0)[cc]{${d}$}}\n\\put(85,7){\\makebox(0,0)[cc]{$\\overline{p}$}}\n\\put(85,93){\\makebox(0,0)[cc]{${p}$}}\n\\put(15,85){\\makebox(0,0)[lc]{$A=A_1 \\cup A_2 \\cup A_{1,2} $}}\n\\put(15,75){\\makebox(0,0)[lc]{$B=B_1 \\cup B_2 \\cup B_{1,2} $}}\n\\put(145,85){\\makebox(0,0)[rc]{$C=C_1 \\cup C_2 \\cup C_{1,2} $}}\n\\put(145,75){\\makebox(0,0)[rc]{$D=D_1 \\cup D_2 \\cup D_{1,2} $}}\n\\end{picture}\n}\n\n\\caption{An ordered set $R$ as needed in Lemma \\protect\\ref{severalmiddle} and constructed in\nLemma \\ref{getsmallerR}.\nThe middle levels form an ordered set of height $1$.\nThe maximal element $d$ is above all maximal elements of the\nmiddle levels {\\em except} the circled maximal elements\nof the middle levels.\nThe maximal element $p$ is above all maximal elements of the\nmiddle levels {\\em except} the boxed maximal elements\nof the middle levels.\nSimilarly,\nthe minimal element $\\overline{d}$ is below all minimal elements of the\nmiddle levels {\\em except} the circled minimal elements and\nthe minimal element $\\overline{p}$ is below all minimal elements of the\nmiddle levels {\\em except} the boxed minimal elements.\nWithin the middle levels, the connected dotted arches indicate that the\nelements immediately above the upper arch and the elements immediately below the lower arch form a 
complete\nbipartite.\n}\n\\label{flatswit}\n\n\\end{figure}\n\n\n\n\n\\begin{lem}\n\\label{getsmallerR}\n\nThere is\nan ordered set $R$ as described in Lemma \\ref{severalmiddle}.\n\n\\end{lem}\n\n\n{\\bf Proof.}\nLet $R$ be the ordered set indicated in Figure \\ref{flatswit}.\nWe claim that $R$ is a set as desired in the description of the set $R$ in\nLemma \\ref{severalmiddle}.\n\n\nClaims \\ref{Rtwomax}, \\ref{Rtwomaxrnk}, \\ref{Rtwomin} are trivial.\nThe rest of the proof relies on the following, easy to verify, properties\nof the sets $A$, $B$, $C$ and $D$.\nFirst, $A\\cup \\{ c_t , \\overline{d} ,\\overline{p} \\} $ is dually isomorphic to\n$B\\cup \\{ c_b , d,p\\} $ and\n$D\\cup \\{ c_t , \\overline{d} ,\\overline{p} \\} $ is dually isomorphic to\n$C\\cup \\{ c_b , d,p\\} $.\nTherefore, properties proved for $B$ and $C$ will hold dually for $A$ and $D$.\nThe set $B\\cup \\{ c_b , d,p\\} $ is rigid.\nThis is because any automorphism of $B\\cup \\{ c_b , d,p\\} $ must map $c_b $ to itself\nand the sets $B_1 $, $B_2 $ and $B_{1,2} $ to themselves, respectively.\nThis implies that the automorphism must be the identity on $B$ and then it must be the identity on\n$\\{ d,p\\} $ also.\n\nMoreover,\nthere is exactly one isomorphism\nfrom\n$B\\cup \\{ c_b , d,p\\} $ to $C\\cup \\{ c_b , d,p\\} $. This isomorphism\nmaps $d$ to $p$, $p$ to $d$, $c_b $ to $c_b $ and $B_{1,2} $ to $C_{1,2} $, $B_{1} $ to $C_{1} $ and\n$B_{2} $ to $C_{2} $.\nFurthermore, there is exactly one isomorphism from\n$B\\cup \\{ c_b , d\\} $ to $C\\cup \\{ c_b , d\\} $.\nThis isomorphism maps $d$ to $d$, $c_b $ to $c_b $,\n$B_{1,2} $ to $C_{1} $, $B_{2} $ to $C_{1,2} $ and $B_{1} $ to $C_{2} $.\nSimilarly, there is exactly one isomorphism\nfrom\n$B\\cup \\{ c_b , p\\} $ to $C\\cup \\{ c_b , p\\} $.\nThis isomorphism maps $p$ to $p$, $c_b $ to $c_b $,\n$B_{1,2} $ to $C_{2} $, $B_{2} $ to $C_{1} $ and $B_{1} $ to $C_{1,2} $.\nThese facts and their duals will be used freely in the following.\n\n\n\n\nFor Claim \\ref{Rpaut} define $\\psi ^{\\overline{p} } $ to be\n\\begin{enumerate}\n\\item\n$\\psi ^{\\overline{p} } (\\overline{d} ):=\\overline{d} $,\n$\\psi ^{\\overline{p} } (d ):=p $,\n$\\psi ^{\\overline{p} } (p ):=d $,\n\n\\item\nOn $A\\cup \\{ c_t \\} $, $\\psi ^{\\overline{p} } $ is\nthe restriction of\nthe unique isomorphism from\n$A\\cup \\{ c_t , \\overline{d} \\} $ to\n$D\\cup \\{ c_t , \\overline{d} \\} $,\n\n\\item\nOn $D\\cup \\{ c_t \\} $, $\\psi ^{\\overline{p} } $ is\nthe restriction of\nthe unique isomorphism from\n$D\\cup \\{ c_t , \\overline{d} \\} $ to\n$A\\cup \\{ c_t \\overline{d} \\} $,\n\n\\item\nOn $B\\cup \\{ c_b \\} $, $\\psi ^{\\overline{p} } $ is\nthe restriction of the unique\nisomorphism from\n$B\\cup \\{ c_b, d,p \\} $ to\n$C\\cup \\{ c_b, d,p \\} $,\n\n\n\n\\item\nOn $C\\cup \\{ c_b \\} $, $\\psi ^{\\overline{p} } $ is\nthe restriction of the unique\nisomorphism from\n$C\\cup \\{ c_b, d,p \\} $ to\n$B\\cup \\{ c_b, d,p \\} $.\n\n\\end{enumerate}\n\nThen $\\psi ^{\\overline{p} } $ is as desired.\nClaim \\ref{Rdaut} is proved similarly.\n\nFor Claim \\ref{Rswit} define $\\varphi $ to be\n\\begin{enumerate}\n\\item\n$\\varphi (\\overline{d} ):=\\overline{p} $,\n$\\varphi (\\overline{p} ):=\\overline{d} $,\n$\\varphi (d ):=p $,\n$\\varphi (p ):=d $,\n\n\n\\item\nOn $A\\cup \\{ c_t \\} $, $\\varphi $ is\nthe restriction of the unique\nisomorphism from\n$A\\cup \\{ c_t , \\overline{d}, \\overline{p} \\} $ to\n$D\\cup \\{ c_t, \\overline{d}, \\overline{p} \\} $,\n\n\n\\item\nOn $D\\cup \\{ c_t \\} $, 
$\\varphi $ is\nthe restriction of the unique\nisomorphism from\n$D\\cup \\{ c_t, \\overline{d}, \\overline{p} \\} $ to\n$A\\cup \\{ c_t \\overline{d}, \\overline{p} \\} $,\n\n\\item\nOn $B\\cup \\{ c_b\\} $, $\\varphi $ is\nthe restriction of the unique\nisomorphism from\n$B\\cup \\{ c_b, d,p \\} $ to\n$C\\cup \\{ c_b , d,p \\} $,\n\n\n\\item\nOn $C\\cup \\{ c_b\\} $, $\\varphi $ is\nthe restriction of the unique\nisomorphism from\n$C\\cup \\{ c_b , d,p \\} $ to\n$B\\cup \\{ c_b, d,p \\} $.\n\n\\end{enumerate}\n\n\nFinally, for Claim \\ref{notwistR} suppose the automorphism\n$\\Psi :R\\to R$\nis the identity on the minimal elements.\nThen, because the unique isomorphism between\n$A\\cup \\{ c_t, \\overline{d},\\overline{p} \\} $ and $D\\cup \\{ c_t ,\\overline{d},\\overline{p} \\} $\nswitches $\\overline{d} $ and $\\overline{p} $,\n$\\Psi $ must map $A \\cup \\{c_t \\} $ to $A \\cup \\{ c_t\\} $ and\n$D \\cup \\{ c_t\\} $ to $D \\cup \\{ c_t\\} $.\nTherefore, $\\Psi $ must map\n$B \\cup \\{ c_b\\} $ to $B \\cup \\{ c_b\\} $ and\n$C \\cup \\{ c_b\\} $ to $C \\cup \\{ c_b\\} $.\nSince $B\\cup \\{ c_b , d , p \\} $ is\nrigid, this means that\n$\\Psi $ must fix $d $ and $p $.\n\\hfill\\ \\rule{2mm}{2mm} \n\n\n\\vspace{.1in}\n\n\nThe set $R$ in Figure \\ref{flatswit} has an additional property that will allow us to\nprove further properties of our examples.\n\n\\begin{lem}\n\\label{takeoutc}\n\nThe sets $R\\setminus \\{ c_b\\} $ and $R\\setminus \\{ c_t \\} $\nwith $R$ as in Figure \\ref{flatswit} each have an automorphism\n$\\Psi $ with $\\Psi (\\overline{d} )=\\overline{d} $, $\\Psi (\\overline{p} )=\\overline{p} $,\n$\\Psi (d)=p$ and $\\Psi (p)=d$.\n\n\\end{lem}\n\n{\\bf Proof.}\nFor $R\\setminus \\{ c_t \\} $ we define $\\Psi (\\overline{d} ):=\\overline{d} $, $\\Psi (\\overline{p} ):=\\overline{p} $,\n$\\Psi (d):=p$ and $\\Psi (p):=d$. We let\n$\\Psi $ map $A_{1,2} $ to $D_{1,2} $, $A_{2} $ to $D_{1} $, $A_{1} $ to $D_{2} $ and vice versa.\n(Visually, in each case the map is obtained by sliding one wedge horizontally onto the other.)\n\nOn $B\\cup \\{ c_b, d,p\\} $ we define $\\Psi $ to be the unique isomorphism from\n$B\\cup \\{ c_b, d,p\\} $ to $C\\cup \\{ c_b, d,p\\} $.\nFinally, on $C\\cup \\{ c_b, d,p\\} $ we define $\\Psi $ to be the unique isomorphism from\n$C\\cup \\{ c_b, d,p\\} $ to $B\\cup \\{ c_b, d,p\\} $.\n\nFor $R\\setminus \\{ c_b \\} $ we let $\\Psi $ be the identity on $\\{ \\overline{d} , \\overline{p} , c_t \\} \\cup A\\cup D$.\nWe let $\\Psi (d)=p$ and $\\Psi (p)=d$. 
Finally $\\Psi $ maps $B_{1,2} $ to itself, $B_1 $ to $B_2 $, $B_2 $ to $B_1 $,\n$C_{1,2} $ to itself, $C_1 $ to $C_2 $, $C_2 $ to $C_1 $, each in such a way that the comparabilities\nwith the appropriate maximal elements are preserved.\n\\hfill\\ \\rule{2mm}{2mm} \n\n\n\n\n\n\n\\vspace{.1in}\n\n\n\nWith Lemmas \\ref{severalmiddle}\nand \\ref{getsmallerR} proved, we can state our first main result.\nAside from insights on ranks and maximal and minimal cards, we also see that\nour construction yields pairs of nonisomorphic sets\nfor which a significant\nnumber of cards is isomorphic.\n\n\n\n\\begin{define}\n\nFor two ordered sets $P_1 $ and $P_2 $ with $n$ elements each, define the\n{\\bf equal card ratio} $ECR(P_1 , P_2 )$ to be the number of\nisomorphic cards divided by the size of the set.\n\n\\end{define}\n\n\n\n\n\n\\begin{theorem}\n\\label{tallones}\n\nThere is a sequence of pairs of ordered sets $(P_1 ^n , P_2 ^n )$\nsuch that\n\\begin{enumerate}\n\\item\n$P_1 ^n $ is not isomorphic to $P_2 ^n $,\n\\item\n\\label{eqmaxmindeck}\n$P_1 ^n $ and $P_2 ^n $ have equal marked maximal and minimal decks,\n\\item\n\\label{eqrankkdecks}\nThere are ranks $k_0, \\ldots , k_n $ such that $P_1 ^n $ and $P_2 ^n $ have\nequal marked rank $k_i $ decks,\n\\item\nFor any\npair of isomorphic cards as in parts \\ref{eqmaxmindeck} and \\ref{eqrankkdecks},\nthe neighborhoods of the respective removed elements are isomorphic,\n\\item\n\\label{onlyfourleft}\nThere are only four ranks that do not produce any isomorphic cards,\n\\item\n\\label{ecr>0}\n$\\displaystyle{ \\liminf _{n\\to \\infty } ECR(P_1 ^n , P_2 ^n )\\geq {1\\over 10} } $,\n\\item\n\\label{equalnhooddecks}\nFor every rank $k$, the rank $k$ neighborhood decks of $P_1 ^n $ and $P_2 ^n $\nare equal.\n\\end{enumerate}\n\n\\end{theorem}\n\n\n{\\bf Proof.}\nTo construct sets as indicated,\nwith notation as in Lemma \\ref{severalmiddle}, we make the following choices.\n\\begin{enumerate}\n\\item\nFor all $n$, as the set $Q$, use a fixed set $Q$ as in the proof of Theorem 5.3 of \\cite{Schmoreexam}.\nThese sets have height $3$.\n\\item\nFor all $n$, as the set $R$, use a fixed set $R$ as guaranteed by\nLemma \\ref{getsmallerR}.\n\\item\nThe ranks $k_0 , \\ldots , k_n $ are\n$k_i = {\\rm height} (Q)+i\\cdot {\\rm height} (R)$.\n\\end{enumerate}\n\nThis construction, independent of the choices for $Q$ and $R$,\nyields a sequence of sets that satisfy all parts of this theorem except\npossibly parts \\ref{onlyfourleft}, \\ref{ecr>0} and \\ref{equalnhooddecks}.\nFor these parts the construction must be done with the set $R$ indicated in the proof of\nLemma \\ref{getsmallerR}.\n\nFor part \\ref{onlyfourleft} note that\nby Lemma \\ref{takeoutc} the erasure of an element $c_t $ or $c_b $\nat corresponding ranks produces isomorphic marked cards for $P_1 ^n $ and $P_2 ^n $.\nThis means that only at the ranks $1$ and $2$ and at the two ranks immediately below the\nmaximal elements will there be no elements whose removal produces isomorphic cards.\n\n\n\nFor part \\ref{ecr>0},\nfirst note that because each pair of sets has at least $4n+6$ equal cards we have\n$$\\displaystyle{ ECR(P_1 ^n , P_2 ^n )\\geq {4n+6\\over n(|R|-2)+2|Q|-2}\n\\ontop{n\\to \\infty }{\\longrightarrow } {4\\over |R|-2}. 
} $$\n\nThat is,\nour lower bound on\n$\\displaystyle{\n\\liminf _{n\\to \\infty } ECR(P_1 ^n , P_2 ^n )} $ will solely depend on the number of elements that $R$ has.\n\nWith the set $R$ as given in the proof of Lemma \\ref{getsmallerR} we have $|R|=42$ and so\n$\\displaystyle{ \\liminf _{n\\to \\infty } ECR(P_1 ^n , P_2 ^n )\\geq {4\\over 42-2}={1\\over 10} .} $\n\nFor part \\ref{equalnhooddecks} we know that\nthe neighborhoods $\\updownarrow _{P_1 ^n } x$ and $\\updownarrow _{P_2 ^n } x$\nare isomorphic for $x\\in \\{ a,b, \\tilde{a}, \\tilde{b},d_0 , p_0 , \\ldots , d_n , p_n \\} $.\nThis leaves the non-extremal elements of the $R_i $ and the non-extremal\nelements of $Q$ and $\\tilde{Q} $.\n\nWe first consider the non-extremal elements of the $R_i $.\nFor $x\\in A\\cup D$\nwe have that $\\uparrow _R x\\setminus \\{ x\\} $ is a four-crown if $x$ is minimal and\na two-antichain if $x$ is maximal.\nSimilarly,\n$\\uparrow _R x\\setminus \\{ x\\} $ is\na two-antichain for the maximal elements of $C$ and $D$ that are below both\n$d$ and $p$ as well as for $c_t $.\nThis means that $\\updownarrow _R x$ has an automorphism that fixes\n$\\overline{d} $ and $\\overline{p} $ and that switches $d$ and $p$,\nwhich means that for these elements\n(in any of the $R_i $) we have that\n$\\updownarrow _{P_1 ^n } x$ and $\\updownarrow _{P_2 ^n } x$\nare isomorphic.\nFor the minimal elements of $B_{1,2} $ and $C_{1,2} $, call them\n$b_{1,2} $ and $c_{1,2} $, the set\n$\\uparrow _R x\\setminus \\{ x\\} $ is the disjoint union of two 2-chains\nand for these elements (in any $R_i $) we have\n$\\updownarrow _{P_1 ^n } b_{1,2} $ is isomorphic to\n$\\updownarrow _{P_2 ^n } c_{1,2} $\nand\n$\\updownarrow _{P_1 ^n } c_{1,2} $ is isomorphic to\n$\\updownarrow _{P_2 ^n } b_{1,2} $.\nFor the minimal elements of $B_1 $ and $B_2 $, call them\n$b_1 $ and $b_2 $,\nthe set\n$\\uparrow _R x\\setminus \\{ x\\} $ is an ``N\"\nand for these elements (in any $R_i $) we have\n$\\updownarrow _{P_1 ^n } b_{1} $ is isomorphic to\n$\\updownarrow _{P_2 ^n } b_{2} $.\nThe corresponding elements of $C$ are handled similarly.\nThe remaining maximal elements of $B$ and $C$ are below exactly one of $d$ and $p$,\ncall this element $m$.\nFor these elements (in any $R_i $) we prove that\n$\\updownarrow _{P_1 ^n } m $ is isomorphic to\n$\\updownarrow _{P_2 ^n } m $ as follows.\nIn an $R_i $ with $i3$ cards is due to\nIlle and Rampon (cf. \\cite{JXsurvey}, Section 8.2.2).\nThe size of their sets is exponential in the number of maximal elements.\nThe size of the sets in\nTheorem \\ref{populatedlevels} is linear in the number of maximal\nelements.\nIn this construction, to date the largest ratio of extremal elements to the size of the set is\nachieved with sets $R$ as in Figure \\ref{switch4}. 
The ratio approaches $\\displaystyle{1\\over 19} $ and it is\nachieved by folding at every merge of sets $R$ and at the merge of the top set $R$ with $\\tilde{Q} $.\nWith sets as presented earlier, the ratio approaches $\\displaystyle{ 1\\over 20} $.\n}\n\n\\end{remark}\n\n\n\n\\section{Further Consequences of the Folding Construction}\n\n\n\nAside from giving access to examples on reconstruction, the folding construction of Definition \\ref{foldoper}\nalso allows access to some results that can simplify the start of a reconstruction proof.\n\nRecall that two elements $x,y$ of an ordered set are called {\\bf adjacent} iff\n$x1$ and $|Q_k |>1 $, and\n\\item\nNo two elements of rank $k$\nhave the same strict upper and lower bounds\nin either $P$ or $Q$, and\n\\item\n$P\\setminus P_k $ and $Q\\setminus Q_k $ are both rigid,\n\\end{enumerate}\nthen $P$ is isomorphic to $Q$.\n\n\\end{prop}\n\n{\\bf Proof.}\nLet $P\\setminus \\{ p\\} $ and $Q\\setminus \\{ q\\} $ be isomorphic rank $k$ cards and\nlet $\\psi :P\\setminus \\{ p\\} \\to Q\\setminus \\{ q\\} $ be an isomorphism that\npreserves the original rank of each element.\nLet $P\\setminus \\{ p'\\} $ and $Q\\setminus \\{ q'\\} $ be isomorphic rank $k$ cards\nwith $p\\not= p'$, $q\\not= q'$ and\nlet $\\phi :P\\setminus \\{ p'\\} \\to Q\\setminus \\{ q'\\} $ be an isomorphism that\npreserves the original rank of each element.\nThen $\\phi |_{P\\setminus P_k } =\\psi |_{P\\setminus P_k } $.\n\n\nIf $\\psi (x)=q'$ and $x\\not= p'$, then $\\psi (p')\\not= q'$, which means $y:=\\phi ^{-1} (\\psi (p'))\\not= p'$.\nThis means that $y$ and $p'$ have the same sets of strict upper and lower bounds, a contradiction.\nThus $\\psi (p')=q'$ and symmetrically $\\phi (p)=q$.\n\nNow for any element $x\\in P_k \\setminus \\{ p,p'\\} $, the inequality $\\psi (x)\\not= \\phi (x)$ would imply\n$\\phi ^{-1} (\\psi (x))\\not= x$ has the same strict upper and lower bounds as $x$, a contradiction.\nThus\nfor all $x\\in P_k \\setminus \\{ p,p'\\} $, we have $\\psi (x)= \\phi (x)$.\n\nVia $\\phi |_{P\\setminus P_k } =\\psi |_{P\\setminus P_k } $ we immediately conclude that\n$$\\Phi (x):=\\cases{\n\\psi (x); & for $x\\not= p$, \\cr\n\\phi (p); & for $x=p$, \\cr\n} $$\nis an isomorphism between $P$ and $Q$.\n\\hfill\\ \\rule{2mm}{2mm} \n\n\n\n\n\n\n\\begin{define}\n\nLet $P$ be an ordered set and let $0l$ and\nno element of rank $l$.\n\n\\end{define}\n\n\n\n\n\\begin{prop}\n\\label{rigsep}\n\n\nLet $P$ and $Q$ be ordered sets and let $0l$ in $P$ and $Q$, respectively, and where\n${\\rm rank} _Q (\\phi (x)) ={\\rm rank} _P (x)$ for all $x$,\n\\item\nThere is an isomorphism\n$\\psi :P\\setminus \\{ m_P \\} \\to Q\\setminus \\{ m_Q \\} $, where\n$m_P $ and $m_Q $ denote\nminimal elements in $P$ and $Q$, respectively,\nand where\n${\\rm rank} _Q (\\psi (x)) ={\\rm rank} _P (x)$ for all $x$,\n\\end{enumerate}\nthen\n$P$ is isomorphic to $Q$.\n\n\\end{prop}\n\n\n\n{\\bf Proof.}\nBecause we must have\n$\\phi |_{P_{k\\uparrow } \\cap P_{l\\downarrow } } =\\psi |_{P_{k\\uparrow } \\cap P_{l\\downarrow } } $\nand because there are no adjacencies that ``cross\" $P_{k\\uparrow } \\cap P_{l\\downarrow } $\nthe function\n$$\\Phi (x):=\\cases{\n\\psi (x); & if ${\\rm rank} (x)>l$, \\cr\n\\phi (x); & if ${\\rm rank} (x)\\leq l$, \\cr\n} $$\nis an isomorphism between $P$ and $Q$.\n\\hfill\\ \\rule{2mm}{2mm} \n\n\n\n\n\\section{Conclusion}\n\n\n\n\n\nThe examples presented in this paper show that even substantial partial information\non the deck of an ordered set is not sufficient 
to\neffect reconstruction. In particular (see Theorem \\ref{populatedlevels}),\nthe maximal deck plus the minimal deck plus\n$n+1$ rank $k$ decks, plus the rank $k$ neighborhood decks\nare not sufficient for reconstruction\neven if all these decks have substantially more than $2$ cards.\nMoreover, by Remark \\ref{allbut2} even all rank $k$ decks except for the ones for\nrank $1$ and rank $2$ are not sufficient to effect reconstruction. Again the\nabsolute number of cards within these ranks is immaterial.\n\nFuture reconstruction research has to take these facts into account.\nProof attempts that do not consider enough information from the deck will not succeed.\nOn the other hand, the examples presented here show what kind of\ninformation might effect reconstruction or lead to a counterexample.\n\n\n\n\\begin{enumerate}\n\\item\nAll examples of ordered sets with equal maximal, minimal and\nsome equal rank $k$ decks so far have ``small waists\".\nThat is, the ranks that have equal decks all have fewer elements\n(by at least a factor $6$, even if we use sets $R$ as in Figure \\ref{switch4})\nthan other ranks nearby.\nIt thus should be instructive to investigate what can be\nconcluded from equality of rank $k$ decks, where $k$ is such that\nno other rank has more elements than the $k^{\\rm th} $ rank.\n\n\\item\nAlternatively, the construction of examples could be expanded to the\npoint where the overlap between the decks of two nonisomorphic sets\nbecomes so large that the decks would indeed have to be equal.\nThe present examples have been shown to be quite malleable. They might point the\nway towards examples with similarly strong properties and\nlarger equal card ratios or maybe even a counterexample\noverall. The largest equal card ratios in ordered sets observed so far slightly exceed 50\\%\n(see \\cite{JHthes}), but the examples do not have the equal subdecks that the examples presented here have.\n\n\\item\nAlong these lines, would it be true that\nif there is a sequence of pairs of nonisomorphic ordered sets $(P_1 ^n , P_2 ^n )$\nsuch that $ECR (P_1 ^n , P_2 ^n ) \\to 1$ as $n\\to \\infty $, then\nthere is a counterexample to the reconstruction conjecture?\n\n\\item\nOn p. 185 of \\cite{Manv}, P. Stockmeyer is quoted to have said\nonly {\\em half} in jest that ``The reconstruction conjecture [for graphs]\nis not true, but the smallest counterexample has 87 vertices and will\nnever be found.\" The present examples seem to show that if there is a\ncounterexample to the order reconstruction conjecture, it would have to be\nof substantial size and complexity.\nAt the same time, the approach of analyzing macrostructure\nas in Lemma \\ref{severalmiddle}\nand microstructure as in Lemma \\ref{getsmallerR}\nseparately may allow to\nbreak down the complexity to manageable stages.\n\n\n\n\\item\nFinally, Proposition \\ref{onlyoneseam} shows that we can concentrate on\nordered sets that have a certain type of ``vertical connectivity\".\n\n\n\n\n\\end{enumerate}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Background: computational methods for epidemiology}\n\\label{sec:models}\nEpidemiological models fall in two broad classes: statistical models that are largely data driven and mechanistic models that are based on underlying theoretical principles developed by scientists on how the disease spreads.\n\nData-driven models use statistical and machine learning methods to forecast outcomes, such as case counts, mortality and hospital demands. 
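\nAs a purely illustrative example (not taken from any of the cited studies), a minimal autoregressive forecast of daily case counts can be written as follows; the lag order, the forecast horizon and the absence of an intercept term are arbitrary choices made only for this sketch.\n\\begin{verbatim}\n# Illustrative sketch only: least-squares AR(lags) forecast of daily case counts.\nimport numpy as np\n\ndef ar_forecast(cases, lags=7, horizon=14):\n    cases = np.asarray(cases, dtype=float)\n    X = np.column_stack([cases[i:len(cases) - lags + i] for i in range(lags)])\n    y = cases[lags:]                          # next-day counts to be predicted\n    coef, *_ = np.linalg.lstsq(X, y, rcond=None)\n    history = list(cases)\n    for _ in range(horizon):\n        history.append(float(np.dot(coef, history[-lags:])))\n    return history[-horizon:]                 # the forecast for the next days\n\\end{verbatim}\n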
This is a very active area of research, and a broad class of techniques has been developed, including auto-regressive time series methods, Bayesian techniques and deep learning \\cite{adhikari:kdd19, perone:ssrn20, desai:hs19, Reich2019AccuracyOR, funk:epidemics18, murray:ihme}. \nMechanistic models of disease spread within a population~\\cite{Brauer-2008,Ne03, marathe:cacm13,ekmsw-2006} use mechanistic (also referred to as procedural or algorithmic) \nmethods to describe the evolution of an epidemic through a population. The most common of these are the SIR-type models. Hybrid models that combine mechanistic models with data driven machine learning approaches are also starting to become popular, e.g.,~\\cite{Wang2019DEFSIDL}.\n\n\\subsection{Mass action compartmental models}\nThere are a number of models that are collectively referred to as the SIR class of models. These partition a population of $N$ agents into three sets, each corresponding to a disease state, which is one of: susceptible ($S$), infective ($I$) and removed or recovered ($R$). \nThe specific model then specifies how susceptible individuals become infectious, and then recover. In its simplest form (referred to as the basic compartmental model)~\\cite{Brauer-2008,Ne03, marathe:cacm13}, the population is assumed to be completely mixed. \nLet $S(t)$, $I(t)$ and $R(t)$ denote the number of people in the susceptible, infected and recovered states at time $t$, respectively.\nLet $s(t)=S(t)\/N$, $i(t)=I(t)\/N$ and $r(t)=R(t)\/N$; then,\n$s(t)+i(t)+r(t)=1$. The SIR model can then be described by the following system of ordinary differential equations\n\\[\n\\frac{ds}{dt} = -\\beta si, \\qquad \\frac{di}{dt} = \\beta si - \\gamma i, \\qquad \\frac{dr}{dt} = \\gamma i,\n\\]\nwhere $\\beta$ is referred to as the transmission rate, and $\\gamma$ is the recovery rate.\nA key parameter in such a model is the ``reproductive number'', denoted by $R_0=\\beta\/\\gamma$.\nAt the start of an epidemic, much of\nthe public health effort is focused on estimating $R_0$ from observed infections~\\cite{lipsitch:science03}.\n\n Mass action compartmental models have been the workhorse for epidemiologists\n and have been widely used for over 100 years. Their strength comes from their simplicity, both analytically and from the standpoint of understanding the\n outcomes. Software systems have been developed to solve such models and a number of associated tools have been built to support analysis using such models.\n \n\\subsection{Structured metapopulation models}\nAlthough simple and powerful, mass action compartmental models\ndo not capture the inherent heterogeneity of the underlying populations.\nA significant amount of research has been conducted to extend the model, usually in two broad ways. \nThe first involves structured metapopulation models---these construct an abstraction of the mixing patterns in the population into $m$ different sub-populations, e.g., age groups and small geographical regions, and attempt to capture the heterogeneity in mixing patterns across subpopulations. In other words, the model has states $S_j(t), I_j(t), R_j(t)$ for each subpopulation $j$. The evolution of a compartment $X_j(t)$ is determined by mixing within and across compartments.\nFor instance, survey data on mixing across age groups~\\cite{Mossong2008SocialCA} has been used to construct age structured metapopulation models~\\cite{medlock+optvacc09}. 
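To make the last point concrete, the short sketch below simulates a two-group (children and adults) age-structured SIR model driven by a contact matrix. It is our own illustration: the contact matrix, rates and initial conditions are made-up values rather than the survey-based estimates cited above, and with a single group it reduces to the mass-action equations given earlier.

\begin{verbatim}
import numpy as np

# Toy age-structured SIR with a contact matrix C, where C[a, b] is the average
# number of daily contacts a member of group a has with members of group b.
# All numbers below are illustrative, not survey-based estimates.
C = np.array([[8.0, 3.0],
              [3.0, 5.0]])
beta, gamma, dt = 0.05, 0.2, 0.1    # per-contact transmission, recovery rate, time step
N = np.array([0.25, 0.75])          # population fractions of the two groups

s, i, r = N - 1e-5, np.full(2, 1e-5), np.zeros(2)
for _ in range(int(150 / dt)):      # 150 days
    lam = beta * (C @ (i / N))      # force of infection on each group
    new_inf, new_rec = lam * s * dt, gamma * i * dt
    s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec

print("final attack rate by group:", np.round(r / N, 2))
\end{verbatim}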
More relevant for our paper are spatial metapopulation models, in which the subpopulations are connected through airline and commuter flow networks~\\cite{balcan:pnas2009, srini:ploscb19, gomes:plos-curr14, zhang:pnas17, chinazzi:science20}. \n\n\\medskip\n\\noindent \n\\textbf{Main steps in constructing structured metapopulation models:} This depends on the disease, population and the type of question being studied. The key steps in the development of such models for the spread of diseases over large populations include\n\\begin{itemize}\n\\item \nConstructing subpopulations and compartments: the entire population $V$ is partitioned into subpopulations $V_j$, within which the mixing is assumed to be complete. Depending on the disease model, there are $S_j, E_j, I_j, R_j$ compartments corresponding to the subpopulation $V_j$ (and more, depending on the disease)---these represent the number of individuals in $V_j$ in the corresponding state\n\\item \nMixing patterns among compartments: state transitions between compartments might depend on the states of individuals within the subpopulations associated with those compartments, as well as those who they come in contact with. For instance, the $S_j\\rightarrow E_j$ transition rate might depend on $I_k$ for all the subpopulations who come in contact with individuals in $V_j$. Mobility and behavioral datasets are needed to model such interactions.\n\\end{itemize}\n\n\nSuch models are very useful at the early days of the outbreak, when the disease dynamics are\ndriven to a large extent by mobility---these can be captured more easily within such models,\nand there is significant uncertainty in the disease model parameters. They can also model coarser interventions such as reduced mobility between spatial units and reduced mixing rates. However, these models become less useful to model the effect of detailed interventions\n(e.g., voluntary home isolation, school closures) on disease spread in and across communities. \n\n \n\\subsection{Agent based network models}\nAgent-based networked models (sometimes just called as agent-based models) \nextend metapopulation models further by explicitly capturing the \ninteraction structure of the underlying populations. Often such models are also resolved at the level of single individual entities (animals, humans etc.).\nIn this class of models, the epidemic dynamics can be modeled as a diffusion process on a specific undirected contact network $G(V,E)$ \non a population $V$ -- each edge $e=(u,v)\\in E$ implies that\nindividuals (also referred to as nodes) $u, v\\in V$ come into contact\\footnote{%\nNote that though edge $e$ is\nrepresented as a tuple $(u, v)$, it actually denotes the set $\\{u, v\\}$, as\nis common in graph theory.}\nLet $N(v)$ denote the set of neighbors of $v$. \nFor instance, in the graph in Figure \\ref{fig:network-ex}, we have $V=\\{a, b, c, d\\}$\nand $E=\\{(a, b), (a, c), (b, d), (c d)\\}$. Node $a$ has $b$ and $c$ as neighbors, so $N(a)=\\{b, c\\}$.\nThe SIR model on the graph $G$ is a dynamical process in which\neach node is in one of $S$, $I$ or $R$ states. Infection can\npotentially spread from $u$ to $v$ along edge $e=(u,v)$ with a probability of\n$\\beta(e,t)$ at time instant $t$\nafter $u$ becomes infected, conditional on node $v$ remaining uninfected\nuntil time $t$--- this is a discrete version of the rate of infection for the\nODE model discussed earlier. \nWe let $I(t)$ denote the set of nodes that become\ninfected at time $t$. 
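A toy implementation of this discrete-time process is sketched below. The four-node graph is the example from the text, while the uniform per-edge transmission probability and the one-step infectious period are simplifications we have chosen purely for illustration; they are not features of the calibrated models discussed later.

\begin{verbatim}
import random

def network_sir(adj, seeds, p_edge=0.3, seed=0):
    """Toy discrete-time SIR on a contact network given as an adjacency dict."""
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    for v in seeds:
        state[v] = "I"
    waves = []                                  # nodes newly infected at each step
    while any(x == "I" for x in state.values()):
        infectious = [v for v in adj if state[v] == "I"]
        newly_infected = set()
        for u in infectious:
            for v in adj[u]:                    # u tries to infect each neighbor
                if state[v] == "S" and rng.random() < p_edge:
                    newly_infected.add(v)
            state[u] = "R"                      # illustrative one-step infectious period
        for v in newly_infected:
            state[v] = "I"
        waves.append(sorted(newly_infected))
    return waves, state

# the four-node example graph on V = {a, b, c, d}
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(network_sir(adj, seeds=["a"]))
\end{verbatim}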
\nThe (random) subset of edges on which the infections spread represents\na disease outcome, and is referred to as a \\emph{dendogram}.\nThis dynamical system starts with a configuration in which there are one or more nodes\nin state {\\sf I} and reaches a fixed point in which all nodes are in states {\\sf S} or {\\sf R}.\nFigure \\ref{fig:network-ex} shows an example of the SIR model on a network.\n\\medskip \n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.5\\linewidth]{figs\/sir-nw.pdf}\n\\caption{The SIR process on a graph. The contact graph $G=(V, E)$ is defined on\na population $V=\\{a, b, c, d\\}$. The node colors white, black and grey represent\nthe Susceptible, Infected and Recovered states, respectively. Initially, only node $a$\nis infected, and all other nodes are susceptible. A possible outcome at $t=1$\nis shown, in which node $c$ becomes infected, while node $a$ recovers. Node $a$ tries\nto independently infect both its neighbors $b$ and $c$, but only node $c$ gets\ninfected--- this is indicated by the solid edge $(a, c)$.\nThe probability of getting\nthis outcome is $(1-p(a,b))p(a,c)$.}\n\\label{fig:network-ex}\n\\end{center}\n\\end{figure*}\n\n\\medskip\n\\noindent \n\\textbf{Main steps in setting up an agent based model.} While the specific steps depend on the disease, the population, and the type of question being studied, the general process involves the following steps:\n\\begin{itemize}\n\\item \nConstruct a network representation $G$: the set $V$ is the population in a region, and is available from different sources, such as Census and Landscan. However, the contact patterns are more difficult to model, as no real data is available on contacts between people at a large scale. Instead, researchers have tried to model activities and mobility, from which contacts can be inferred, based on co-location. Multiple approaches have been developed for this, including random mobility based on statistical models, and very detailed models based on activities in urban regions, which have been estimated through surveys, transportation data, and other sources, e.g.,~\\cite{ekmsw-2006,barrett:wsc09,eubank2004modelling, longini05:science,Ferg20}.\n\\item \nDevelop models of within-host disease progression: such models can be represented as finite state probabilistic timed transition models, which are designed in close coordination with biologists, epidemiologists, and parameterized using detailed incidence data (see~\\cite{marathe:cacm13} for discussion and additional pointers).\n\\item \nDevelop high performance computer (HPC) simulations to study epidemic dynamics in such models, e.g.,~\\cite{barrett2008episimdemics, bisset2009epifast, deodhar2012enhancing, grefenstette2013fred}. 
Typical public health analyses involve large experimental designs, and the models are stochastic; this necessitates use of such HPC simulations on large computing clusters.\n\\item \nIncorporate interventions and behavioral changes: interventions include closure of schools and workplaces~\\cite{Ferg20,halloran:pnas08} and vaccinations~\\cite{eubank2004modelling}, whereas behavioral changes include individual level social distancing, changes in mobility, and use of protective measures.\n\\end{itemize}\n\n\nSuch a network model captures the interplay between the three components of \ncomputational epidemiology: ($i$) individual behaviors of agents, \n($ii$) unstructured, heterogeneous multi-scale networks, \nand ($iii$) the dynamical processes on these networks.\nIt is based on the hypothesis that a better understanding of the\ncharacteristics of the underlying network and individual behavioral\nadaptation can give better insights into contagion dynamics and\nresponse strategies.\nAlthough computationally expensive and data intensive, network-based epidemiology\nalters the types of questions that can be posed, providing\nqualitatively different insights into disease dynamics and public health policies.\nIt also allows policy makers to formulate\nand investigate potentially novel and context specific interventions. \n\n\n\\subsection{Models for epidemic forecasting}\nLike projection approaches, models for epidemic forecasting can be broadly classified into two broad groups: ($i$) statistical and machine learning based data driven models, ($ii$) causal or mechanistic models -- see~\\cite{desai:hs19,Reich2019AccuracyOR,nsoesie2014systematic,chretien2014influenza,kandula2019near,tabataba2017framework,brooks2020pancasting} and the references therein for the current state of the art in this rapidly evolving field.\n\nStatistical methods employ statistical and time-series based methodologies to\nlearn patterns in historical epidemic data and leverage those patterns for forecasting. Of course the simplest yet useful class is called\n{\\em method of analogues}. One simply compares the current epidemic with one of the earlier outbreaks and then uses the best match to forecast the\ncurrent epidemic. Popular statistical methods for forecasting influenza\nlike illnesses (that includes COVID-19) \ninclude e.g. generalized linear models (GLM), autoregressive integrated moving average (ARIMA), and generalized autoregressive moving average (GARMA)~\\cite{kandula2019near,LANLforecasts,IHMEcovid2020forecasting}.\nStatistical methods are fast, but they crucially depend on the availability of training data. Furthermore, since they are purely data driven, they do\nnot capture the underlying causal mechanisms. As a result epidemic dynamics affected by behavioral adaptations are usually hard to capture.\nArtificial neural networks (ANN) have gained increased\nprominence in epidemic forecasting due to their self-learning ability without prior knowledge~(See \\cite{Wang2019DEFSIDL,wang2020tdefsi,adhikari:kdd19} and the\nreferences therein). \nSuch models have used a wide variety of data as surrogates\nfor producing forecasts. 
This includes: ($i$) social media data, ($ii$) weather data, ($iii$) incidence curves and ($iv$) demographic data.\n\nCausal models can be used for epidemic forecasting in a natural \nmanner~\\cite{funk:epidemics18,nsoesie2014systematic,\nfadikar2018calibrating,tabataba2017epidemic,chretien2014influenza,yamana2017individual}.\nThese models, calibrate the internal model parameters using the disease\nincidence data seen until a given day and then execute the model forward in time to produce the future time series. Compartmental as well as agent-based models can be used to produce such forecasts. The choice of the models depends on the specific question at hand and the computational and data resource constraints.\nOne of the key ideas in forecasting is to develop ensemble models -- models that combine forecasts from multiple models~\\cite{chakraborty2014forecasting,Reich2019AccuracyOR,yamana2017individual,tabataba2017epidemic}. The idea which originated in the domain of weather forecasting has found methodological advances in the machine learning literature. Ensemble models\ntypically show better performance than the individual models.\n\n\n\\section{Biographies}\n\n\\begin{wrapfigure}{r}{0.16\\textwidth}\n\\vspace{-13pt}\n\\centering\n\\includegraphics[width=0.14\\textwidth]{figs\/aniruddha_adiga_webedited.jpg}\n\\end{wrapfigure}\n\nAniruddha Adiga is a Postdoctoral Research Associate at the NSSAC Division of the Biocomplexity Institute and initiative. He completed his PhD from the Department of Electrical Engineering, Indian Institute of Science (IISc), Bangalore, India and has held the position of Postdoctoral fellow at IISc and North Carolina State University, Raleigh, USA. \nHis research areas include signal processing, machine learning, data mining, forecasting, big data analysis etc. At NSSAC, his primary focus has been the analysis and development of forecasting systems for epidemiological signals such as influenza-like illness and COVID-19 using auxiliary data sources.\\\\ \n\n\\begin{wrapfigure}{l}{0.18\\textwidth}\n\\vspace{-10pt}\n\\centering\n\\includegraphics[width=0.16\\textwidth]{figs\/Devdatt_Dubhashi-3.jpg}\n\\end{wrapfigure}\n\\noindent\nDevdatt Dubhashi is a Professor in the Data Science and AI Division in the Department of computer Science and Engineering, Chalmers University Sweden. His research interests are in machine learning, algorithms and cognitive science. \nHe received his Ph.D. in Computer Science at Cornell University USA and has held positions at the Max Planck Institute for Computer Science, Saarbr\\\"ucken Germany, BRICS (Basic Research in computer Science) a center of the Danish National Science Foundation at the University of Aarhus Denmark and at the Indian Institute of Technology, Delhi India.\\\\\n\n\\begin{wrapfigure}{r}{0.15\\textwidth}\n\\vspace{-14pt}\n\\centering\n\\includegraphics[width=0.13\\textwidth]{figs\/Bryan-Lews.png}\n\\end{wrapfigure}\nBryan Lewis is a research associate professor in the Network Systems Science and Advanced Computing division. His research has focused on understanding the transmission dynamics of infectious diseases within specific populations through both analysis and simulation. Lewis is a computational epidemiologist with more than 15 years of experience in crafting, analyzing, and interpreting the results of models in the context of real public health problems. 
As a computational epidemiologist, for more than a decade, Lewis has been heavily involved in a series of projects forecasting the spread of infectious disease as well as evaluating the response to them in support of the federal government. These projects have tackled diseases from Ebola to pandemic influenza and melioidosis to cholera.\\\\\n\n\n\n\\begin{wrapfigure}{l}{0.17\\textwidth}\n\\vspace{-13pt}\n\\centering\n\\includegraphics[width=0.14\\textwidth]{figs\/Madhav-Picture-Feb2019.pdf}\n\\end{wrapfigure}\nMadhav Marathe is a Distinguished Professor in \nBiocomplexity, the division director of the Networks, Simulation Science and Advanced Computing (NSSAC) Division at the Biocomplexity Institute and \nInitiative, and a Professor in the Department of Computer Science at the University of Virginia (UVA). \nHis research interests are in network\nscience, computational epidemiology, AI, foundations of computing,\nsocially coupled system science and high performance computing. \nBefore joining UVA, he held positions at Virginia Tech and the Los Alamos National Laboratory. He is a Fellow of the IEEE, ACM, SIAM and AAAS.\\\\\n\n\\noindent\n\\begin{wrapfigure}{r}{0.15\\textwidth}\n\\vspace{-13pt}\n\\centering\n\\includegraphics[width=0.13\\textwidth]{figs\/Srini_Headshot_2019.jpg}\n\\end{wrapfigure}\nSrinivasan (Srini) Venkatramanan is a Research Scientist at the Biocomplexity Institute \\& Initiative, University of Virginia and his research focuses on developing, analyzing and optimizing computational models in the field of network epidemiology. He received his PhD from the Department of Electrical and Communication Engineering, Indian Institute of Science (IISc), and did his postdoctoral research at Virginia Tech. His areas of interest include network science, stochastic modeling and big data analytics. He has used in-silico models of society to study the spread of infectious diseases and invasive species. Recent research includes modeling and forecasting emerging infectious disease outbreaks (e.g., Ebola, COVID-19), impact of human mobility on disease spread and resource allocation problems in the context of epidemic control (e.g., seasonal influenza vaccines). \\\\\n\n\n\\begin{wrapfigure}{l}{0.15\\textwidth}\n\\vspace{-14pt}\n\\centering\n\\includegraphics[width=0.13\\textwidth]{figs\/anil.jpg}\n\\end{wrapfigure}\n\\medskip\n\\noindent\nAnil Vullikanti is a Professor in the Biocomplexity Institute and the Department of Computer Science\nat the University of Virginia. He got his PhD at the Indian Institute of Science, and was a postdoctoral associate at the Max Planck Institute for Computer Science, Saarbr\\\"ucken, Germany, and the Los Alamos National Lab. His research interests are in the broad areas of network science, dynamical systems, combinatorial optimization, and distributed computing, and their applications to computational epidemiology and social networks.\n\\section{Models and Policy making}\n\\label{sec:discussion}\n\\noindent\n\\textbf{Were some of the models wrong?} \\ \nIn a recent opinion piece\\footnote{\\emph{Indian Express}, July 30, 2020}, Professor Vikram Patel of the Harvard School of Public Health makes a stinging criticism of modelling:\n\\begin{quote}\n Crowning these scientific disciplines is the field of modelling, for it was its estimates of mountains of dead bodies which fuelled the panic and led to the unprecedented restrictions on public life around the world. 
None of these early models, however, explicitly acknowledged the huge assumptions that were made,\n\\end{quote}\nA similar article in NY Times recounted the \\emph{mistakes} in COVID-19 response in Europe\\footnote{NY Times July 20, 2020: \\url{https:\/\/www.nytimes.com\/2020\/07\/20\/world\/europe\/coronavirus-mistakes-france-uk-italy.html}}; also see~\\cite{avery2020policy}.\n\n\\medskip\n\\noindent \n\\textbf{Our point of view.} \\\nIt is indeed important to ensure that assumptions underlying mathematical models be made transparent and explicit. But we respectfully \ndisagree with Professor Patel's statement: most of the \\emph{good} models tried to be very explicit about their assumptions. The mountains of deaths that are being referred to are explicitly calculated when {\\bf no} interventions are put in place and are often used as a worst case scenario. Now, one might argue that the authors be explicit and state that this worst case scenario will never occur in practice.\nForecasting dynamics in social systems is inherently challenging: individual\nbehavior, predictions and epidemic dynamics {\\em co-evolve}; this coevolution immediately implies that a dire prediction can lead to extreme change in\nindividual and collective behavior leading to reduction in the incidence numbers. Would one say forecasts were wrong in such a case or they were\ninfluential in ensuring the worst case never happens? None of this implies that one should not explicitly state the assumption underlying their model.\nOf course our experience is that policy makers,\nnews reporters and common public are looking exactly for such a forecast --\nwe have been constantly asked \"when will peak occur\" or \"how many people are likely to die\". A few possible ways to overcome this tension between the unsatiable appetite for forecasts and the inherent challenges that lie\nin doing this accurately, include: \n\\begin{itemize}\n \\item \nWe believe that in general it might not be prudent to provide long term\nforecasts for such systems. \n \\item\nState the assumptions underlying the models as clearly as possible. Modelers need to be much more disciplined about this. They also need to ensure that the models are transparent and can be reviewed broadly (and expeditiously).\n \\item\nAccept that the forecasts are provisional and that they will be revised\nas new data comes in, society adapts, the virus adapts and we understand the biological impact of the pandemic.\n \\item\nImprove surveillance systems that would produce data that the models can use more effectively. Even with data, it is very hard to estimate the prevalence of COVID-19 in society.\n\\end{itemize}\nCommunicating scientific findings and risks is an important topical\narea in this context, see~\\cite{fischhoff2019evaluating,metcalf2020mathematical,adam2020modelling,vaezi2020infodemic}. \n\n\\medskip\n\\noindent \n\\textbf{Use of models for evidence-based policy making.} \\\nIn a new book, \\cite{KK20}, \\emph{Radical Uncertainty}, economists John Kay and Mervyn King (formerly Governor of the Bank of England) urge caution when using complex models. They argue that models should be valued for the insights they provide but not relied upon to provide accurate forecasts. The so-called \"evidence--based policy\" comes in for criticism where it relies on models but also supplies a false sense of certainty where none exists, or seeks out the evidence that is desired \\emph{ex ante} -- or \"cover\" -- to justify a policy decision. 
\\emph{\"Evidence based policy has become policy based evidence\".} \n\n\\medskip\n\\noindent \n\\textbf{Our point of view.} \\\nThe authors make a good point here. But again, everyone, from public\nto citizens and reporters clamour for a forecast. We argue that this can\nbe addressed in two ways:($i$) viewing the problem from the lens \nof control theory so that we forecast only to control the deviation\nfrom the path we want to follow and ($ii$) not insisting on exact numbers but general trends. As Kay and King opine, the value of models, especially in the face of \\emph{radical uncertainty}, is more in exploring alternative scenarios resulting from different policies:\n\\begin{quote}\n a model is useful only if the person using it understands that it does not represent the \"the world as it really is\" but is a tool for exploring ways in which a decision might or might not go wrong. \n\\end{quote}\n\n\n\\medskip\n\\noindent \n\\textbf{Supporting science beyond the pandemic.} \\\nIn his new book \\emph{The Rules of Contagion}, Adam Kucharski \\cite{K20} draws on lessons from the past. In 2015 and 2016, during the Zika outbreak, researchers planned large-scale clinical studies and vaccine trials. But these were discontinued as soon as the infection ebbed. \n\\begin{quote}\n This is a common frustration in outbreak research; by the time the infections end, fundamental questions about the contagion can remain unanswered. That's why building long term research capacity is essential.\n\\end{quote}\n\n\\medskip\n\\noindent \n\\textbf{Our point of view.} \\\nThe author makes an important point. We hope that today, after witnessing the devastating impacts of the pandemic on the economy and society, the correct lessons will be learnt: sustained investments need to be made in the field to be ready for the impact of the next pandemic.\n\n\\textbf{Concluding remarks}\nThe paper discusses a few important computational models \ndeveloped by researchers in the US, UK and Sweden for COVID-19 pandemic planning and response. The models have been used by \npolicy makers and public health officials in their respective \ncountries to assess the evolution of the pandemic, design and analyze\ncontrol measures and study various what-if scenarios.\nAs noted, all models faced challenges due to availability of data,\nrapidly evolving pandemic and unprecedented control measures put in place.\nDespite these challenges, we believe that mathematical models can provide\nuseful and timely information to the policy makers. On on hand the\nmodelers need to be transparent in the description of their models, \nclearly state the limitations and carry out detailed sensitivity \nand uncertainty quantification. Having these models reviewed independently\nis certainly very helpful. On the other hand, policy makers should be aware\nof the fact that using mathematical models \nfor pandemic planning, forecast\nresponse rely on a number of assumptions and lack data to over these\nassumptions. \n\n\n\\section*{Acknowledgments}\nThe authors would like to thank members of the Biocomplexity COVID-19 Response Team and Network Systems Science and Advanced Computing (NSSAC) Division for their thoughtful comments and suggestions related to epidemic modeling and response support. We thank members of the Biocomplexity Institute and Initiative, University of Virginia for useful discussion and suggestions. 
\nThis work was partially supported by National Institutes of Health (NIH) Grant 1R01GM109718, NSF BIG DATA Grant IIS-1633028, NSF DIBBS Grant ACI-1443054, \nNSF Grant No.: OAC-1916805, NSF Expeditions in Computing Grant CCF-1918656, CCF-1917819, NSF RAPID CNS-2028004, NSF RAPID OAC-2027541, US Centers for Disease Control and Prevention 75D30119C05935, DTRA subcontract\/ARA S-D00189-15-TO-01-UVA. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies.\n\n\n\\section{Introduction}\n\n\n\\iffalse\n\\textcolor{blue}{Maybe the following para is not needed. Also might not need some of the previous para, as it is fairly common knowledge now. I have\nleft his as is but moved things around. Over time the article becomes archival and so good to have it.}\nThe word epidemic is derived from two Greek works {\\em epi} (upon) and {\\em demos} (people) \nmeaning ``upon people''.\nAn epidemic is an occurrence in a community or region, of cases of an\nillness, specified health behavior or other health related events\nclearly in excess of normal expectancy. A pandemic refers to incidence of\nthe disease of global proportions, e.g., the H1N1 outbreak in 2009.\nIn contrast, an endemic disease is one\nwhose cases are constantly occurring in the population. \n{\\em Epidemiology} is the study of space-time patterns of illness in a population and\nthe factors that contribute to these patterns. It plays an essential\nrole in public health through the elucidation of the processes that\nlead to ill health as well as the evaluation of strategies designed to\npromote good health. Epidemiologists are primarily concerned with\npublic health data, which includes the design of studies, evaluation\nand interpretation of public health data, and the maintenance of data\ncollection systems. \n\\fi\n\n\nThe ongoing COVID-19 pandemic is the most significant pandemic since the 1918 Influenza pandemic. It has already caused over 21 Million confirmed cases and 758,000 deaths\\footnote{The numbers reported are as of August 14, 2020.\nSee \\url{https:\/\/coronavirus.jhu.edu\/map.html} and\n\\url{https:\/\/nssac.bii.virginia.edu\/covid-19\/dashboard\/} for most up to date\nsurveillance information}. \nThe economic impact is already in trillions of dollars.\nAs in other pandemics, researchers and public health policy makers are\ninterested in questions such as\\footnote{see \\url{https:\/\/www.nytimes.com\/news-event\/coronavirus}},\n($i$) How did it start?\n($ii$) How is it likely to progress and how can we control it?\n($iii$) How can we intervene while balancing public health and economic impact ?\n($iv$) Why did some countries do better than other countries thus far into the pandemic?\nIn particular, models and their projections\/forecasts have received unprecedented attention. With a multitude of modeling frameworks, underlying assumptions, available datasets and the region\/timeframe being modeled, these projections have varied widely, causing confusion among end-users and consumers. We believe an overview (non-exhaustive) of the current modeling landscape will benefit the readers and also serve as a historical record for future efforts. \n\n\\subsection{Role of models} \\ Models have been used by mathematical epidemiologists\nto support a broad range of policy questions. Their use during COVID-19\nhas been widespread. 
In general, the type and form of models used in epidemiology depends on the phase of the epidemic. \nBefore an epidemic, models are used for planning and identifying critical gaps and prepare plans to detect and respond in the event of a pandemic. At the start of a pandemic, policy makers are interested in asking questions such as: ($i$) where and how did the pandemic start, ($ii$) risk of its spread in the region, ($iii$) risk of importation in other regions of the world, ($iv$) basic understanding of the pathogen and its epidemiological characteristics. As the pandemic takes hold researchers begin investigating: ($i$) various intervention and\ncontrol strategies; usually pharmaceutical interventions do not work in the event of a pandemic and thus non-pharmaceutical interventions are most appropriate, ($ii$) forecasting the epidemic incidence rate, hospitalization rate and mortality rate, ($iii$) efficiently allocating scarce medical resources to treat the patients and ($iv$) understanding the change in individual and collective behavior and adherence to public policies.\nAfter the pandemic starts to slow down, modelers are interested in developing models related to recovery and long term impacts caused by the pandemic.\n\nAs a result comparing models needs to be done with care. When comparing models: one needs to specify: ($a$) the purpose of the model, ($b$) the end user to whom the model is targeted, ($c$) the spatial and temporal resolution of the model, ($d$) and the underlying assumptions and limitations. We illustrate these issues by summarizing a few key methods for \\emph{projection and forecasting} of disease outcomes in the US and Sweden.\n\n\n\n\n\\medskip\n\\noindent\n\\textbf{Organization.} \\ The paper is organized as follows. In \nSection~\\ref{sec:models} we give preliminary definitions. \nSection~\\ref{sec:imperial} discusses US and UK centric models developed by researchers at the Imperial College.\nSection~\\ref{sec:usa-models} discusses metapopulation models focused on the US that were developed by our group at UVA and the models developed by researchers at Northeastern University. \nSection~\\ref{sec:sweden-models} describes models developed Swedish researchers for studying the outbreak in Sweden. In Section~\\ref{sec:forecasting} \nwe discuss methods developed for forecasting.\nSection~\\ref{sec:discussion} contains discussion, model limitations and \nconcluding remarks. In a companion paper that appears in this special issue, we address certain complementary issues related to pandemic planning and response, including role of data and analytics.\n\n\\medskip\n\\noindent\n\\textbf{Important note.} \\ The primary purpose of the paper\nis to highlight some of the salient computational \nmodels that are currently being used to support COVID-19 pandemic response.\nThese models, like all models, have their strengths and weaknesses---they have all faced challenges arising from the lack of timely data.\nOur goal is \\textbf{not} to pick winners and losers among these model; \neach model has been used by policy makers and continues to be used to advice various agencies. Rather, our goal is to introduce to the reader\na range of models that can be used in such situations. A simple model\nis no better or worse than a complicated model. 
The suitability of\na specific model for a given question needs to be evaluated by the \ndecision maker and the modeler.\n\n\n\n\n\n\n\n\n\\section{Models from the Imperial College Modeling Group}\n\\label{sec:imperial}\n\\noindent\n\\textbf{Background.} \\ The modeling group led by Neil Ferguson\nwas to our knowledge the first model to study the impact of \nCOVID-19 across two large countries: US and UK. \nThe basic model was first developed in 2005 -- it was used to inform\npolicy pertaining to H5N1 pandemic and was one of three models used\nto informal the federal pandemic influenza plan and led to the now well\naccepted targeted layered containment (TLC) strategy. It was adopted to COVID-19 as discussed below.\n\n\\noindent\n\\textbf{Model Structure.} \\ The basic model structure consists of \ndeveloping a set of households based on census information for a given country. Individual members of the household interact with other members\nof the household. The data to produce these households is obtained using Census information for these countries. The paper is not clear if a\nnational census is used or more detailed state wise census information is used. Schools, workplaces and random meeting points are then added.\nThe number of schools, workplaces and random meeting points are used to represent the potential places of interaction between individuals. \nCensus data is used to assign age and household sizes. \nData on average class sizes and staff-student ratios were used to generate a synthetic population of schools distributed proportional to local population density. Data on the distribution of workplace size was used to generate workplaces with commuting distance data used to locate workplaces appropriately across the population. Individuals are assigned to each of these locations at the start of the simulation. The gravity style kernel is used to decide how far a person can go in terms of attending\nwork, school or community interaction place. The number of contacts between individuals at school, work and community meeting points are calibrated to produce a given attack rate.\n\nEach individual has an associated disease transmission model. The disease\ntransmission model parameters are based on data collected when the pandemic\nwas evolving in Wuhan; see page 4 of \\cite{ferguson-report9}.\n\nFinally, the model also has rich set of interventions. These include:\n($i$) case isolation, ($ii$) voluntary home quarantine, ($iii$) Social distancing of those over 70 years, ($iv$) social distancing of the entire population, ($v$) closure of schools and universities; see page 6 \\cite{ferguson-report9}. The code hsa now been available but the details of how such interventions are implemented remains a bit unclear. This is important as the interpretation of these interventions can have substantial\nimpact on the outcome.\n\n\\noindent\n\\textbf{What did the model suggest.} \\ \nThe Imperial college (IC Model) model was one of the first models to evaluate the COVID-19 pandemic using detailed agent-based model. The predictions made by the model were quite dire. The results show that to be able to reduce $R$ to close to 1 or below, a combination of case isolation, social distancing of the entire population and either household quarantine or school and university closure are required. The model had tremendous impact --\nUK and US both decide to start considering complete lock downs -- a\npolicy that was practically impossible to even talk about earlier in Western world. 
The paper came out around the same time that Wuhan epidemic was raging and the epidemic in Italy had taken the turn for the worse. This made the\nmodel results even more critical. So far the numbers have not played out per model forecasts -- this is a general issue that we will discuss across all models.\n\n\\noindent\n\\textbf{Strengths and Limitations.} \\\nIC model was one of the first model by a very reputed group to report the\npotential impact of COVID-19 with and without interventions. The model\nwas far more detailed than other models that were published by then.\nThe authors also took great care parameterizing the model with the best disease transmission data that was available until then. The model also considered a very rich set of interventions and was one of the first to analyze pulsing intervention. The work also had a few weaknesses. The\nrepresentation of the underlying social contact network was relatively \nsimple. Second, the details of how interventions were represented were\nnot made clear. Since then the modelers have made their code open and\nthe research community has witnessed an intense debate on the pros and cons of various modeling assumptions and the resulting software system. \nWe believe that despite some criticisms that were valid, overall the results\nrepresented a breakthrough in terms of when the results were put out and\nthe level of details the model had. \n\n\\section{Spatial metapopulation models}\n\\label{sec:metapop}\n\n\\noindent\n\\textbf{Background.}\nThis approach is an alternative to detailed agent based models, and has been used in\nmodeling the spread of multiple diseases, including Influenza~\\cite{balcan:pnas2009, srini:ploscb19},\nEbola~\\cite{gomes:plos-curr14} and Zika~\\cite{zhang:pnas17}. It has been adapted for\nstudying the importation risk of SARS-CoV-2 across the world~\\cite{chinazzi:science20}.\nStructured metapopulation models construct a simple abstraction of the mixing patterns in the population,\nin which the entire plane is decomposed into fully connected geographical regions,\nrepresenting subpopulation, which are connected through airline and commuter flow networks.\nThus, they lack the rich detail of agent based models, but have fewer parameters, and\nare therefore, easy to set up and scale to large regions.\n\n\\noindent\n\\textbf{Model structure.}\nHere, we summarize GLEaM~\\cite{balcan:pnas2009} and PatchSim~\\cite{srini:ploscb19}.\nGLEaM uses two classes of datasets-- population estimates and mobility.\nPopulation data is used from the ``Gridded Population of the World''~\\cite{ciesin}, which\ngives an estimated population value at a $15\\times 15$ minutes of arc (referred\nto as a ``cell'') over the entire planet.\nTwo different kinds of mobility processes are considered-- airline travel and commuter flow.\nThe former captures long-distance travel, whereas the latter captures localized mobility.\nAirline data is obtained from the International Air Transport Association (IATA)~\\cite{iata}, and\nthe Official Airline Guide (OAG)~\\cite{oag}. 
There are about 3300 airports world-wide; these\nare aggregated at the level of urban regions served by multiple airport (e.g., as in London).\nA voronoi tesselation is constructed with the resulting airport locations as centers, and\nthe population cells are assigned to these cells, with a 200 mile cutoff from the center.\nThe commuter flows connect cells at a much smaller spatial scale.\nWe represent this mobility pattern as a directed graph on the cells, and refer to it\nas the mobility network.\n\nIn the basic SEIR model, the subpopulation in each cell $j$ is partitioned into \ncompartments $S_j, E_j, I_j$ and $R_j$, corresponding to the disease states.\nFor each cell $j$, we define\nthe force of infection $\\lambda_j$ as the rate at which a susceptible individual in the\nsubpopulation in cell $j$ becomes infected---this is determined by the interactions the\nperson has with infectious individuals in cell $j$ or any cell $j'$ connected in the mobility network.\nAn individual in the susceptible compartment $S_j$ becomes infected with probability\n$\\lambda_j\\Delta t$ and enters the compartment $E_j$, in a time interval $\\Delta t$.\nFrom this compartment, the individual moves to the $I_j$ and then the $R_j$ compartments,\nwith appropriate probabilities, corresponding to the diease model parameters.\n\nThe PatchSim~\\cite{srini:ploscb19} model has a similar structure, except that it uses\nadministrative boundaries (e.g., counties), instead of\na voronoi tesselation, which are connected using the mobility network. Further, the disease model\nis deterministic, i.e., the transitions across the compartments occur at fixed rates.\n\n\\noindent\n\\textbf{What did the models suggest?}\n\n\\noindent\n\\textbf{Strength and limitations.}\nStructured metapopulation models provide a good tradeoff between the power and realism of\ndetailed agent-based models, and simplicity and need for fewer inputs for modeling,\nand scalability. This is especially true in the early days of the outbreak, when the disease dynamics are\ndriven to a large extent by mobility, which can be captured more easily within such models,\nand there is significant uncertainty in the disease model parameters.\nHowever, once the outbreak has spread, it is harder to model detailed interventions\n(e.g., social distancing), which are much more localized, and hard to model by a single parameter.\n\n\n\\section{Models in Sweden}\n\\label{sec:sweden-models}\nSweden was an outlier amongst countries in that it decided to implement public health interventions without a lockdown. Schools and universities were not closed and restaurants and bars remained open. Instead Swedish citizens\nimplemented \"work from home\" policies where possible.Moderate social distancing based on individual responsibility and without police enforcement was employed but emphasis was attempted to be placed on shielding the over 65 age group.\n\n\\subsection{Simple model}\n\n\\textbf{Background.} \\ \nStatistician Tom Britton developed a very simple model with a focus on predicting the number of infected over time in Stockholm.\n\n\\noindent\n\\textbf{Model structure.} \\ \nBritton \\cite{Britton2020} used a very simple SIR General epidemic model. It is used to make a coarse grain prediction of the behaviour of the outbreak based on knowing the basic reproduction number $R_0$ and the doubling time $d$ in the initial phase of the epidemic. 
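(Our reading of how these two quantities pin the model down, not necessarily the exact calculation in \cite{Britton2020}, is as follows: early on, while $s \approx 1$, the mass-action SIR equations give $di/dt \approx (\beta - \gamma) i$, so incidence grows like $e^{rt}$ with $r = \beta - \gamma = \ln 2 / d$. Combined with $R_0 = \beta/\gamma$, this yields $\gamma = r/(R_0 - 1)$ and $\beta = R_0\,\gamma$; for instance, $R_0 = 2.5$ and $d = 3.5$ days give $r \approx 0.20$, $\gamma \approx 0.13$ and $\beta \approx 0.33$ per day.)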
Calibration to calendar time was done using the observed number of case fatalities, together with estimates of the time between infection to death, and the infection fatality risk. Predictions were made assuming no change of behaviour, as well as for the situation where preventive measures are put in place at one specific time--point. \n\n\\noindent\n\\textbf{What did the model suggest.} \\ One of the controversial predictions from this model was that the number of infections in the Stockholm area would quickly rise towards attaining \\emph{herd immunity} within a short period. However, mass testing carried out in Stockholm during June indicated a far smaller percentage of infections.\n\n\\noindent\n\\textbf{Strength and Limitations.} \\ \nBritton's model was intended as a quick and simple method to estimate and predict an on-going epidemic outbreak both with and without preventive measures put in place. It was intended as a complement to more realistic and detailed modelling. The estimation-prediction methodology is much simpler and straight- forward to implement for this simple model. It is more transparent to see how the few model assumptions affect the results, and it is easy to vary the few parameters to see their effect on predictions so one could see which parameter- uncertainties have biggest impact on predictions, and which parameter-uncertainties are less influential.\n\n\\subsection{Compartmentalized Models}\n\n\\noindent\n\\textbf{Background.} \\ \nA group around statistician Joacim Rockl\\\"ov developed a model to estimate the impact of COVID-19 on the Swedish population at the municipality level, considering demography and\nhuman mobility under various scenarios of mitigation and suppression. They attempted to estimate the\ntime course of infections, health care needs, and the mortality in relation to the Swedish ICU capacity, as well as the costs of care, and compared alternative policies and counterfactual\nscenarios.\n\n\\noindent\n\\textbf{Model structure.} \\ \\cite{Rocklov2020} used a SEIR compartmentalized model with age structured compartments (0-59, 60-79, 80+) susceptibles, infected, in-patient carem, ICU and recovered populations based on Swedish population data at the municipal level. It also incorporated inter-municipality travel using a radiation model. Parameters were calibrated based on a combination of values available from international literature and fitting to available outbreak data. The effect of a number of different intervention strategies were considered ranging from no intervention to modest social distancing and finally to imposed isolation of various groups. \n\n\\noindent\n\\textbf{What did the model suggest.} \\ \nThe model predicted an estimated death toll of around 40,000 for the strategies based only on social distancing and between 5000 and 8000 for policies imposing stricter isolation. It predicted ICU cases of upto 10,000 without much intervention and upto 6000 with modest social distancing, way above the available capacity of about 500 ICU beds. 
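As an aside on the mobility component mentioned under the model structure above, the sketch below spells out the radiation-model flow in its commonly used parameter-free form; the population figures are made up for illustration and are not the inputs used in \cite{Rocklov2020}.

\begin{verbatim}
# Radiation-model estimate of the flow of travellers from region i to region j:
#   T_ij = T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
# m_i, n_j: populations of i and j; s_ij: population living closer to i than j
# (excluding i and j); T_i: total trips leaving i. Made-up numbers below.

def radiation_flow(T_i, m_i, n_j, s_ij):
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# e.g. 10,000 commuters leaving a town of 50,000 towards a city of 200,000,
# with 100,000 people living in between
print(round(radiation_flow(10_000, 50_000, 200_000, 100_000)))
\end{verbatim}

Its appeal is that, beyond the populations and the trip counts, there are no free parameters to calibrate.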
\n\\noindent\n\\textbf{Strength and limitations.} \\ \nThe model showed a good fit against the reported COVID-19 related deaths in Sweden up to 20th of April, 2020,\nHowever, the predictions of the total deaths and ICU demand turned out to be way off the mark.\n\n\\subsection{Agent Based microsimulations}\n\n\\noindent\n\\textbf{Background.} \\ \nFinally, \\cite{Gardner2020}used an individual-based model parameterized on Swedish demographics to assess the anticipated spread of COVID-19.\n\\noindent\n\\textbf{model structure.} \\ \n\\cite{Gardner2020} employed the individual agent-based model based on work by Ferguson et al \\cite{Ferg20}. Individuals are randomly assigned an age based on Swedish\ndemographic data and they are also assigned a household. Household size is normally distributed around the average household size in Sweden in 2018, 2.2 people per household. Households placed on a lattice using high-resolution population data from Landscan and census dara from the Statstics Sweden and each household is additionally allocated to a city based on\nthe closest city centre by distance and to a county based on city designation. Each individual is placed in a school or workplace at a rate similar to the current participation\nin Sweden.\n\nTransmission between individuals occurs through contact at each\nindividual's workplace or school, within their household, and in their communities. Infectiousness is thus a property dependent on contacts from household members,\nschool\/workplace members and community members with a probability based on household distances. Transmissibility was calibrated against data for the period 21\nMarch \u2013 6 April to reproduce either the doubling time reported using pan-European data35 or the growth in reported Swedish deaths for that period\n\nVarious types of interventions were studied including the policy implemented in Sweden by the public health authorities as well as more aggressive interventions approaching full lockdown.\n\n\\noindent\n\\textbf{What did the model suggest.} \\ \nTheir prediction was that \"under conservative epidemiological parameter estimates, the current Swedish public-health strategy will result in a peak intensive-care load in May that exceeds pre-pandemic capacity by over 40-fold, with a median mortality of 96,000 (95\\% CI 52,000 to 183,000)\". \n\n\\noindent\n\\textbf{Strength and limitations.} \\ \nThis model was based on adapting the well known Imperial model discussed in section~\\ref{sec:imperial} to Sweden and considered a wide range of intervention strategies. Unfortunately the predictions of the model were woefully off the mark on both counts: the deaths by June 18 are under 5000 and at he peak the ICU infrastructure had at least 20\\% unutilized capacity.\n\\section{Forecasting Models}\nIn the US, the Centers for Disease Control and Prevention (CDC) provides 1-,2-,3-, and 4-week ahead forecasts on cumulative deaths at the national level and across all states. The forecasts are provided by 21 (as of June 24, 2020) modelling groups and the CDC reports the cumulative deaths based on an ensemble of these forecasts.\n\\subsection{COVID-19 ForecastHub}\n\\noindent\n\\textbf{Model structure} It has been observed previously in the case of other infectious disease that an ensemble of forecasts from multiple models tends to be better than any individual contributing model \\cite{yamana2017individual}. 
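As a minimal illustration of this point, the toy sketch below combines point forecasts from three hypothetical models with equal weights; the operational ensemble works with the full set of submitted forecasts rather than three made-up series, so this is only meant to show the mechanics of equal weighting.

\begin{verbatim}
# Toy equal-weight ensemble of point forecasts (e.g. deaths, 1-4 weeks ahead).
# The individual "model" forecasts are made-up numbers for illustration.
forecasts = {
    "model_A": [610, 640, 660, 690],
    "model_B": [590, 620, 655, 700],
    "model_C": [630, 650, 670, 685],
}

def equal_weight_ensemble(forecasts):
    by_horizon = zip(*forecasts.values())     # group the values by horizon
    return [sum(vals) / len(vals) for vals in by_horizon]

print(equal_weight_ensemble(forecasts))       # one combined value per week ahead
\end{verbatim}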
Most of the models considered in the COVID-19 ForecastHub are variants of the SEIR model, with two models relying on agent-based models and one deep-learning-based model. The simulation outcomes are heavily dependent on the intervention scenarios considered. Some models assume that the effects of the interventions are reflected in the observed data and do not explicitly encode them in the model. Models such as [] incorporate the effects of interventions by suitably varying the contact rate, while models [IHME,] incorporate the change in mobility patterns to reflect in the simulations. The current ensemble model equally weights the submitted forecasts. \n\\section{Models from the Imperial College Modeling Group (UK Model)}\n\\label{sec:imperial}\n\n\\noindent\n\\textbf{Background.} \\ The modeling group led by Neil Ferguson\ndeveloped, to our knowledge, the first model to study the impact of \nCOVID-19 across two large countries: the US and the UK, see~\\cite{Ferg20}. \nThe basic model was first developed in 2005 -- it was used to inform\npolicy pertaining to the H5N1 pandemic and was one of the three models used\nto inform the federal pandemic influenza plan and led to the now well\naccepted targeted layered containment (TLC) strategy. It was adapted to COVID-19 as discussed below. The model was widely discussed and covered in the scientific as well as popular press~\\cite{adam2020modelling}.\nWe will refer to this as the IC-model.\n\n\\medskip\n\\noindent\n\\textbf{Model Structure.} \\ The basic model structure consists of \ndeveloping a set of households based on census information for a given country. The structure of the model is largely borrowed from their earlier work, see~\\cite{halloran:pnas08,ferguson2006strategies}.\nLandscan data was used to spatially distribute the population. Individual members of the household interact with other members\nof the household. The data to produce these households is obtained using Census information for these countries. Census data is used to assign age and household sizes. \nDetails on the resolution of the census data and the dates were not clear. \nSchools, workplaces and random meeting points are then added.\nThe school data for the US was obtained from the National Centre of Educational\nStatistics, while for the UK, schools were assigned randomly based on population density.\n Data on average class sizes and staff-student ratios were used to generate a synthetic population of schools distributed proportional to local population density. Data on the distribution of workplace size was used to generate workplaces with commuting distance data used to locate workplaces appropriately across the population. Individuals are assigned to each of these locations at the start of the simulation. A gravity-style kernel is used to decide how far a person can go in terms of attending\nwork, school or a community interaction place. The number of contacts between individuals at school, work and community meeting points is calibrated to produce a given attack rate.\n\nEach individual has an associated disease transmission model. The disease\ntransmission model parameters are based on data collected when the pandemic\nwas evolving in Wuhan; see page 4 of \\cite{Ferg20}.\n\nFinally, the model also has a rich set of interventions. These include:\n($i$) case isolation, ($ii$) voluntary home quarantine, ($iii$) social distancing of those over 70 years, ($iv$) social distancing of the entire population, and ($v$) closure of schools and universities; see page 6 of \\cite{Ferg20}. 
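Purely as an illustration of one common way to encode such policies, and not as a description of how the IC model implements them, the sketch below scales layer-specific contact rates by policy-dependent multipliers; the layers, baseline rates and multipliers are made-up values.

\begin{verbatim}
# Illustrative encoding of interventions as multipliers on layer-specific
# contact rates. All numbers are made up; in practice, case isolation and home
# quarantine would apply only to detected cases and their households.
BASELINE_CONTACTS = {"household": 4.0, "school": 6.0, "work": 5.0, "community": 3.0}

POLICIES = {
    "school_closure":    {"school": 0.0, "household": 1.2},  # some compensation at home
    "social_distancing": {"work": 0.5, "community": 0.25},
    "case_isolation":    {"school": 0.25, "work": 0.25, "community": 0.25},
}

def effective_contacts(active_policies):
    rates = dict(BASELINE_CONTACTS)
    for policy in active_policies:
        for layer, factor in POLICIES[policy].items():
            rates[layer] *= factor
    return rates

print(effective_contacts(["school_closure", "social_distancing"]))
\end{verbatim}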
The code was recently released and is being analyzed. This is important as the interpretation of these interventions can have substantial impact on the outcome.\n\n\\medskip\n\\noindent\n\\textbf{Model predictions.} \\ \nThe Imperial college (IC Model) model was one of the first models to evaluate the COVID-19 pandemic using detailed agent-based model. The predictions made by the model were quite dire. The results show that to be able to reduce $R$ to close to 1 or below, a combination of case isolation, social distancing of the entire population and either household quarantine or school and university closure are required. The model had tremendous impact --\nUK and US both decide to start considering complete lock downs -- a\npolicy that was practically impossible to even talk about earlier in the Western world. The paper came out around the same time that Wuhan epidemic was raging and the epidemic in Italy had taken a turn for the worse. This made the\nmodel results even more critical. \n\n\\medskip\n\\noindent\n\\textbf{Strengths and Limitations.} \\\nIC model was one of the first model by a reputed group to report the\npotential impact of COVID-19 with and without interventions. The model\nwas far more detailed than other models that were published until then.\nThe authors also took great care parameterizing the model with the best disease transmission data that was available until then. The model also considered a very rich set of interventions and was one of the first to analyze pulsing intervention. On the flip side, the\nrepresentation of the underlying social contact network was relatively \nsimple. Second, often the details of how interventions were represented were\nnot clear. Since the publication of their article, the modelers have made their code open and\nthe research community has witnessed an intense debate on the pros and cons of various modeling assumptions and the resulting \nsoftware system, see~\\cite{chawla2020critiqued}. \nWe believe that despite certain valid criticisms, overall, the results\nrepresented a significant advance in terms of the \nwhen the results were put out and the level of details incorporated in the models.\n\n\\section{Spatial metapopulation models: Northeastern and UVA models (US Models)}\n\\label{sec:usa-models}\n\n\\noindent\n\\textbf{Background.}\nThis approach is an alternative to detailed agent based models, and \nhas been used in modeling the spread of multiple diseases, including Influenza~\\cite{balcan:pnas2009, srini:ploscb19}, Ebola~\\cite{gomes:plos-curr14} and Zika~\\cite{zhang:pnas17}. \nIt has been adapted for studying the importation risk of COVID-19 \nacross the world~\\cite{chinazzi:science20}.\nStructured metapopulation models construct a simple abstraction \nof the mixing patterns in the population, in which the entire region under study is decomposed into fully connected geographical regions,\nrepresenting subpopulations, which are connected through airline and commuter flow networks. 
Thus, they lack the rich detail of agent based models, \nbut have fewer parameters, and are therefore, easy to set up and scale to large regions.\n\n\\medskip\n\\noindent\n\\textbf{Model structure.}\nHere, we summarize GLEaM~\\cite{balcan:pnas2009} (Northeastern model) and PatchSim~\\cite{srini:ploscb19} (UVA model).\nGLEaM uses two classes of datasets-- population estimates and mobility.\nPopulation data is used from the ``Gridded Population of the World''~\\cite{lloyd2017high}, which gives an estimated population value at a $15\\times 15$ minutes of arc (referred to as a ``cell'') over the entire planet. Two different kinds of mobility processes are considered-- airline travel and commuter flow. The former captures long-distance travel, whereas the latter captures localized mobility. Airline data is obtained from the International Air Transport Association (IATA)~\\cite{IATA}, and\nthe Official Airline Guide (OAG)~\\cite{oag}. There are about 3300 airports world-wide; these are aggregated at the level of urban regions served by multiple airport (e.g., as in London). A Voronoi tessellation is constructed with the resulting airport locations as centers, and the population cells are assigned to these cells, with a 200 mile cutoff from the center.\nThe commuter flows connect cells at a much smaller spatial scale.\nWe represent this mobility pattern as a directed graph on the cells, \nand refer to it as the mobility network.\n\n\n\nIn the basic SEIR model, the subpopulation in each cell $j$ is partitioned into compartments $S_j, E_j, I_j$ and $R_j$, corresponding to the disease states. For each cell $j$, we define the force of infection \n$\\lambda_j$ as the rate at which a susceptible individual in the\nsubpopulation in cell $j$ becomes infected---this is determined by the interactions the person has with infectious individuals in cell $j$ or any cell $j'$ connected in the mobility network. An individual in the susceptible compartment $S_j$ becomes infected with probability $\\lambda_j\\Delta t$ and enters the compartment $E_j$, in a time interval $\\Delta t$. From this compartment, the individual moves to the $I_j$ and then the $R_j$ compartments, with appropriate probabilities, corresponding to the \ndisease model parameters.\n\nThe PatchSim~\\cite{srini:ploscb19} model has a similar structure, except that it uses administrative boundaries (e.g., counties), instead of\na Voronoi tesselation, which are connected using a mobility network. The mobility network is derived by combining commuter and airline networks, to model time spent per day by individuals of region (patch) $i$ in region (patch) $j$. Since it explicitly captures the level of connectivity through a commuter-like mixing, it is capable of incorporating week-to-week and month-to-month variations in mobility and connectivity. In addition to its capability to run in deterministic or stochastic mode, the open source implementation \\cite{NSSACPat38:online} allows fine grained control of disease parameters across space and time. Although the model has a more generic force of infection mode of operation (where patches can be more general than spatial regions), we will mainly summarize the results from the mobility model, which was used for COVID-19 response. \n\n\n\\medskip\n\\noindent\n\\textbf{What did the models suggest?}\nGLEaM model is being used in a number of COVID-19 related studies and analysis. 
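Before turning to those studies, the sketch below illustrates the patch-level force-of-infection update described under the model structure; the mixing matrix, populations and disease parameters are made-up values rather than GLEaM's or PatchSim's calibrated inputs, and the update omits refinements such as letting visitors contribute to prevalence in the patches they visit.

\begin{verbatim}
import numpy as np

# Toy one-day update of a patch (metapopulation) SEIR model. W[j, k] is the
# fraction of time residents of patch j spend mixing in patch k (a stand-in
# for the commuter/airline network). All values below are illustrative.
def seir_patch_step(S, E, I, R, N, W, beta=0.4, sigma=1/3, gamma=1/5, dt=1.0):
    lam = beta * (W @ (I / N))        # force of infection felt by each patch
    new_E = lam * S * dt
    new_I = sigma * E * dt
    new_R = gamma * I * dt
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

N = np.array([1e6, 5e5, 2e5])         # three patches
W = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])
S, E, I, R = N - np.array([0, 0, 10.0]), np.zeros(3), np.array([0, 0, 10.0]), np.zeros(3)
for _ in range(60):                   # 60 days
    S, E, I, R = seir_patch_step(S, E, I, R, N, W)
print("infectious per patch after 60 days:", np.round(I, 1))
\end{verbatim}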
In~\\cite{kraemer2020effect}, the Northeastern University team \nused the model to understand the spread of COVID-19 within China and the relative risk of importation of the disease internationally. \nTheir analysis suggested that the spread of COVID-19 out of Wuhan into other parts of mainland China was not contained well due to the delays induced by detection and official reporting.\nThe results are hard to interpret. The paper suggested that international importation could be contained substantially by a strong travel ban. While such a ban might have delayed the onset of cases, the subsequent spread across the world suggests that we were not able to arrest the spread effectively.\nThe model has also been used to provide\nweekly projections (see \\url{https:\/\/covid19.gleamproject.org\/}); this site does not appear to be maintained with the most current forecasts\n(likely because the team is participating in the CDC forecasting group).\n\nThe PatchSim model is being used to support federal agencies as well\nas the state of Virginia. Due to our past experience, we have refrained from providing longer term forecasts, instead focusing on short term projections. The model is used within a \\emph{Forecasting via Projection Selection} approach, where a set of counterfactual scenarios is generated based on on-the-ground response efforts and surveillance data, and the best fits are selected based on historical performance. While allowing for future scenarios to be described, these projections also help provide a reasonable narrative of past trajectories, and retrospective comparisons are used for metrics such as ``cases averted by doing X''. These projections are revised weekly based on stakeholder feedback and surveillance updates. Further discussion of how the model is used by the Virginia Department of Health each week can be found at \\url{https:\/\/www.vdh.virginia.gov\/coronavirus\/covid-19-data-insights\/#model}.\n\n\n\n\\medskip\n\\noindent\n\\textbf{Strengths and limitations.}\nStructured metapopulation models provide a good tradeoff between the realism (and computational cost) of detailed agent-based models and the simplicity (and speed) of mass-action compartmental models: they need far fewer inputs and scale well to large regions. This is especially true in the early days of an outbreak, when the disease dynamics are\ndriven to a large extent by mobility, which can be captured more easily within such models, and there is significant uncertainty in the disease model parameters. However, once the outbreak has spread, it is harder to model detailed interventions (e.g., social distancing), which are much more localized. Further, these are hard to model using a single parameter. Both the GLEaM and PatchSim models also faced their share of challenges in projecting case counts, due to the rapidly evolving pandemic, inadequate testing, a lack of understanding of the number of asymptomatic cases, and the difficulty of assessing the compliance levels of the population at large.\n\n\n\\section{Models by KTH, Ume\u00e5 and Uppsala researchers (Swedish Models)}\n\\label{sec:sweden-models}\nSweden was an outlier amongst countries in that it decided to implement public health interventions without a lockdown. Schools and universities were not closed, and restaurants and bars remained open. Swedish citizens\nimplemented ``work from home'' policies where possible. 
Moderate social distancing, based on individual responsibility and without police enforcement, was employed, and an attempt was made to place emphasis on shielding the 65+ age group.\n\n\\subsection{Simple model}\n\n\\medskip\n\\noindent\n\\textbf{Background.} \\ \nStatistician Tom Britton developed a very simple model with a focus on predicting the number of infected over time in Stockholm.\n\n\\medskip\n\\noindent\n\\textbf{Model structure.} \\ \nBritton \\cite{Britton2020} used a very simple SIR general epidemic model. It is used to make a coarse-grained prediction of the behaviour of the outbreak based on knowing the basic reproduction number $R_0$ and the doubling time $d$ in the initial phase of the epidemic. Calibration to calendar time was done using the observed number of case fatalities, together with estimates of the time from infection to death, and the infection fatality risk. Predictions were made assuming no change of behaviour, as well as for the situation where preventive measures are put in place at one specific time point. \n\n\\medskip\n\\noindent\n\\textbf{Model predictions.} \\ One of the controversial predictions from this model was that the number of infections in the Stockholm area would quickly rise towards attaining \\emph{herd immunity} within a short period. However, mass testing carried out in Stockholm during June indicated a far smaller percentage of infections.\n\n\\medskip\n\\noindent\n\\textbf{Strengths and Limitations.} \\ \nBritton's model was intended as a quick and simple method to estimate and predict an ongoing epidemic outbreak both with and without preventive measures put in place. It was intended as a complement to more realistic and detailed modelling. The estimation-prediction methodology is much simpler and more straightforward to implement for this simple model. It is more transparent how the few model assumptions affect the results, and it is easy to vary the few parameters to see their effect on predictions, so that one can see which parameter uncertainties have the biggest impact on predictions and which are less influential.\n\n\\subsection{Compartmentalized Models I: FHM Model}\n\n\\textbf{Background.} \\\nThe Public Health Authority (FHM) of Sweden produced a model\nto study the spread of COVID-19 in four regions in Sweden: Dalarna, Sk\\aa ne, Stockholm, and V\\\"astra G\\\"otaland~\\cite{FHM20}.\n\n\\medskip\n\\noindent\n\\textbf{Model structure.} \\ \nIt is a standard compartmentalized SEIR model, and each compartment is homogeneous: individuals are assumed to have the same characteristics and act in the same way. Data used in the fitting of the model include point prevalences found by PCR-testing in Stockholm at two different time points.\n\n\\medskip\n\\noindent\n\\textbf{Model predictions.} \\ \nThe model estimated the number of infected individuals at different time points and the date with the largest number of infectious individuals. It predicted that by July 1, 8.5\\% (5.9 \u2013 12.9\\%) of the population in Dalarna would have been infected, 4\\% (2.4 \u2013 9.9\\%) of the population in Sk\u00e5ne would have been infected, 19\\% (17.7 \u2013 20.2\\%) of the population in Stockholm would have been infected, and 9\\% (6.3 \u2013 12.2\\%) of the population in V\u00e4stra G\u00f6taland would have been infected. 
It was hard to test these predictions because of the great uncertainty in the immune response to SARS-CoV-2 -- the prevalence of antibodies was surprisingly low, but recent studies show that mild cases seem never to develop antibodies against SARS-CoV-2, but only T-cell-mediated immunity \\cite{karolinska}.\n\nThe model also investigated the effect of contact rates that increase during the summer and stabilise in autumn. It found that if the contacts in Stockholm and Dalarna increase by less than 60\\% in comparison to the contact rate at the beginning of June, the second wave would not exceed the observed first wave.\n\n\\medskip\n\\noindent\n\\textbf{Strengths and limitations.} \\ \nThe simplicity of the model is a strength for ease of calibration and understanding, but it is also a major limitation in view of the well known characteristics of COVID-19: since it is primarily transmitted through droplet infection, the social contact structure in the population is of primary importance for the dynamics of infection. The compartmental model used in this analysis does not account for variation in contacts, where few individuals may have many contacts while the majority have fewer. The model is also not age--stratified, but COVID-19 strikingly affects different age groups differently; e.g., young people seem to get milder infections. In this model, each infected individual has the same infectivity and the same risk of becoming a reported case, regardless of age. Different age groups normally have varied degrees of contacts and have changed their behaviour differently during the COVID-19 pandemic. This is not captured in the model.\n\n\\subsection{Compartmentalized Models II}\n\n\\medskip\n\\noindent\n\\textbf{Background.} \\ \nA group led by statistician Joacim Rockl\\\"ov developed a model to estimate the impact of COVID-19 on the Swedish population at the municipality level, considering demography and\nhuman mobility under various scenarios of mitigation and suppression. They attempted to estimate the\ntime course of infections, health care needs, and the mortality in relation to the Swedish ICU capacity, as well as the costs of care, and compared alternative policies and counterfactual\nscenarios.\n\n\\medskip\n\\noindent\n\\textbf{Model structure.} \\ \nThe authors of \\cite{Rocklov2020} used a compartmentalized SEIR model with age-structured compartments (0-59, 60-79, 80+) for the susceptible, infected, in-patient care, ICU and recovered populations, based on Swedish population data at the municipal level. It also incorporated inter-municipality travel using a radiation model. Parameters were calibrated based on a combination of values available from the international literature and fitting to available outbreak data. The effects of a number of different intervention strategies were considered, ranging from no intervention to modest social distancing and, finally, to imposed isolation of various groups. \n\n\\medskip\n\\noindent\n\\textbf{Model predictions.} \\ \nThe model predicted an estimated death toll of around 40,000 for the strategies based only on social distancing and between 5000 and 8000 for policies imposing stricter isolation. It predicted up to 10,000 ICU cases without much intervention and up to 6000 with modest social distancing, far above the available capacity of about 500 ICU beds. 
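The inter-municipality travel mentioned in the model structure above follows a radiation model. As an illustration (the calibration and implementation details of \\cite{Rocklov2020} may differ, and the commuting fraction below is hypothetical), the average commuter flux between patches under the standard radiation model can be computed as follows.
\\begin{verbatim}
import numpy as np

def radiation_flux(pop, dist, frac_commuting=0.1):
    # Average flux T[i, j] of the radiation model (Simini et al., 2012):
    #   T_ij = T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij)),
    # where m_i, n_j are the populations of patches i and j, s_ij is the
    # population inside the circle of radius dist[i, j] centred at i
    # (source and destination excluded), and T_i is the number of commuters
    # leaving patch i (here a hypothetical fraction of its population).
    pop = np.asarray(pop, dtype=float)
    T_out = frac_commuting * pop
    n = len(pop)
    flux = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            closer = dist[i] < dist[i, j]   # patches strictly closer to i than j is
            s_ij = max(pop[closer].sum() - pop[i], 0.0)
            flux[i, j] = (T_out[i] * pop[i] * pop[j]
                          / ((pop[i] + s_ij) * (pop[i] + pop[j] + s_ij)))
    return flux
\\end{verbatim}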
\n\n\\medskip\n\\noindent\n\\textbf{Strengths and limitations.} \\ \nThe model showed a good fit against the reported COVID-19 related deaths in Sweden up to the 20th of April, 2020.\nHowever, the predictions of the total deaths and ICU demand turned out to be far off the mark.\n\n\\subsection{Agent Based microsimulations}\n\n\\noindent\n\\textbf{Background.} \\ \nFinally, \\cite{Gardner2020,kamerlin2020managing} used an individual-based model parameterized with Swedish demographics to assess the anticipated spread of COVID-19.\n\n\\medskip\n\\noindent\n\\textbf{Model structure.} \\ \nThe authors of \\cite{Gardner2020} employed an individual agent-based model based on work by Ferguson et al.~\\cite{Ferg20}. Individuals are randomly assigned an age based on Swedish\ndemographic data and they are also assigned a household. Household size is normally distributed around the average household size in Sweden in 2018, 2.2 people per household. Households were placed on a lattice using high-resolution population data from LandScan and census data from Statistics Sweden, and each household is additionally allocated to a city based on\nthe closest city centre by distance and to a county based on city designation. Each individual is placed in a school or workplace at a rate similar to the current participation\nin Sweden.\n\nTransmission between individuals occurs through contact at each\nindividual's workplace or school, within their household, and in their communities. Infection risk is thus a property dependent on contacts with household members,\nschool\/workplace members and community members, with a probability based on household distances. Transmissibility was calibrated against data for the period 21\nMarch \u2013 6 April to reproduce either the doubling time reported using pan-European data or the growth in reported Swedish deaths for that period. Various types of interventions were studied, including the policy implemented in Sweden by the public health authorities as well as more aggressive interventions approaching full lockdown.\n\n\\medskip\n\\noindent\n\\textbf{Model predictions.} \\ \nTheir prediction was that ``under conservative epidemiological parameter estimates, the current Swedish public-health strategy will result in a peak intensive-care load in May that exceeds pre-pandemic capacity by over 40-fold, with a median mortality of 96,000 (95\\% CI 52,000 to 183,000)''. \n\n\\medskip\n\\noindent\n\\textbf{Strengths and limitations.} \\ \nThis model was based on adapting the well known Imperial model discussed in section~\\ref{sec:imperial} to Sweden and considered a wide range of intervention strategies. Unfortunately, the predictions of the model were woefully off the mark on both counts: the deaths by June 18 were under 5000, and at the peak the ICU infrastructure had at least 20\\% unutilized capacity.\n\n\n\n\\section{Forecasting Models}\n\\label{sec:forecasting}\nForecasting is of particular interest to policy makers, since forecasts attempt to provide actual counts. As the surveillance systems have relatively stabilized in recent weeks, the development of forecasting models has gained traction and several models are available in the literature. In the US, the Centers for Disease Control and Prevention (CDC) has provided a platform for modelers to share their forecasts, which are analyzed and combined in a suitable manner to produce ensemble multi-week forecasts for cumulative\/incident deaths, hospitalizations and, more recently, cases at the national, state, and county level. 
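A simple way to see how such an ensemble can be built is sketched below: the predictive quantiles submitted by the member models for a single target are combined by taking the median across models at each quantile level. This is purely illustrative (the member names and numbers are made up); the exact combination rules used by the Hub are documented in \\cite{CDChub}.
\\begin{verbatim}
import numpy as np

QUANTILES = (0.025, 0.25, 0.5, 0.75, 0.975)

def ensemble_forecast(member_quantiles):
    # member_quantiles: dict mapping a model name to its predictive quantiles
    # (at the levels in QUANTILES) for one target, e.g. incident deaths in a
    # given state for a given week.  The combined forecast takes the median
    # across members at each quantile level.
    stacked = np.vstack(list(member_quantiles.values()))
    return dict(zip(QUANTILES, np.median(stacked, axis=0)))

# hypothetical example with three member models
print(ensemble_forecast({
    "model_A": np.array([120, 180, 230, 290, 380]),
    "model_B": np.array([100, 160, 210, 260, 330]),
    "model_C": np.array([140, 200, 260, 320, 410]),
}))
\\end{verbatim}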
Probabilistic forecasts are provided by 36 teams as of July 28, 2020 (there were 21 models as of June 24, 2020), and the CDC, with the help of \\cite{Reichlab}, has developed a uniform ensemble model for multi-step forecasts \\cite{CDChub}.\n\n\\subsection{COVID-19 Forecast Hub ensemble model}\nIt has been observed previously for other infectious diseases that an ensemble of forecasts from multiple models performs better than any individual contributing model \\cite{yamana2017individual}. In the context of COVID-19 case count modeling and forecasting, a multitude of models have been developed based on different assumptions that capture specific aspects of the disease dynamics (reproduction number evolution, contact network construction, etc.). The models employed in the CDC Forecast Hub can be broadly classified into three categories: data-driven, hybrid, and mechanistic models, with some of the models being open source. \n\n\\medskip\n\\noindent\n\\textbf{Data-driven models:} They do not model the disease dynamics but attempt to find patterns in the available data and combine them appropriately to make short-term forecasts. In such data-driven models it is hard to incorporate interventions directly; hence, the model is presented with a variety of exogenous data sources such as mobility data, hospital records, etc., with the hope that their effects are captured implicitly. Early iterations of the Institute for Health Metrics and Evaluation (IHME) model \\cite{IHMEcovid2020forecasting} for death forecasting at the state level employed a statistical model that fits a time-varying Gaussian error function to the cumulative death counts and is parameterized to control for the maximum death rate, the maximum death rate epoch, and a growth parameter (with many parameters learnt using data from the outbreak in China). The IHME models are undergoing revisions (moving towards the hybrid models) and updated implementable versions are available at \\cite{IHMEgithub}. The University of Texas at Austin\nCOVID-19 Modeling Consortium model \\cite{UTwoody2020projections} uses a statistical model very similar to that of \\cite{IHMEcovid2020forecasting}, but employs real-time mobility data as an additional predictor and also differs in the fitting process. The Carnegie Mellon Delphi Group employs a well known auto-regressive (AR) approach that uses lagged versions of the case counts and deaths as predictors and selects, via LASSO regression, a sparse set of them that best describes the observations \\cite{delphi}. The model of \\cite{deepGTcovid} is a deep learning model developed along the lines of \\cite{adhikari:kdd19}, which attempts to learn the dependence between the death rate and other available syndromic, demographic, mobility and clinical data. \n\n\\medskip\n\\noindent\n\\textbf{Hybrid models}: These methods typically employ statistical techniques to model disease parameters, which are then used in epidemiological models to forecast cases. Most statistical models \\cite{IHMEcovid2020forecasting, UTwoody2020projections} are evolving to become hybrid models. A model that gained significant interest is the Youyang Gu (YYG) model, which uses a machine learning layer over an SEIR model to learn the set of parameters (mortality rate, initial R$_0$,\n post-lockdown R) specific to a region that best fits the region's observed data. The authors (YYG) share the optimal parameters, the SEIR model and the evaluation scripts with the general public for experimentation~\\cite{yygeval}. 
The Los Alamos National Lab (LANL) model \\cite{LANLforecasts} uses a statistical model to determine how the number of COVID-19 infections changes over time; a second process then maps the number of infections to the reported data. The number of deaths is modeled as a fraction of the number of new cases obtained, and is computed using the observed mortality data.\n\n\\medskip\n\\noindent \n\\textbf{Mechanistic models:} The GLEaM and JHU models are based on county-level stochastic SEIR dynamics. The JHU model incorporates the effectiveness of state-wide intervention policies on social distancing through the R$_0$ parameter. More recently, model outputs from UVA's PatchSim model were included as part of a multi-model ensemble (including autoregressive and LSTM components) to forecast weekly confirmed cases. \n\n\\section{Comparative analysis across modeling types}\nWe end the discussion of the models above by qualitatively comparing\nmodel types. As discussed in the preliminaries, \nat one end of the spectrum are models that are largely data driven: these models range from simple statistical models (various forms\nof regression models) to the more complicated deep learning models.\nThe differences among such models lie in the amount of training data needed,\nthe computational resources needed and how complicated a mathematical\nfunction one is trying to fit to the observed data. These models are strictly data driven and hence unable to capture the constant behavioral\nadaptation at an individual and collective level. On the other end of the\nspectrum, SEIR, meta-population and agent-based networked models\nare based on an underlying procedural representation of the dynamics -- in theory they are able to represent behavioral adaptation endogenously.\nBut both classes of models face immense challenges due to the availability of data, as discussed below.\n\n\\begin{enumerate}\n \\item \n Agent-based and SEIR models were used in all three countries in the early part of the outbreak and continue to be used for counter-factual analysis. The primary reason is the lack of \n surveillance and disease-specific data, and hence purely data-driven models were not easy to use. SEIR models lacked heterogeneity but were simple to program and analyze. Agent-based models were more computationally intensive and required a fair bit of data to instantiate, but captured the heterogeneity of the underlying countries. \n By now it has become clear that the use of such models for long-term forecasting is challenging and likely to lead to misleading results. The fundamental reason is adaptive human behavior and the lack of data about it.\n \\item\n Forecasting, on the other hand, has seen the use of data-driven methods as well as causal methods. Short-term forecasts have been generally reasonable. Given the intense interest in the pandemic, a lot of data is also becoming available for researchers to use. This helps in validating some of the models further. 
Even so, real-time data on\n behavioral adaptation and compliance remains very hard to get and is one of the central modeling challenges.\n\\end{enumerate}\n\n\n \n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}\\setcounter{equation}{0}}\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\\newcommand{\\nsect}{\\setcounter{equation}{0}\n\\def\\theequation{\\thesection.\\arabic{equation}}\\section}\n\\newcommand{\\nsubsect}{\\setcounter{equation}{0}\n\\def\\theequation{\\thesubsection.\\arabic{equation}}\\subsection}\n\n\\relax\n\n\\def\\d{\\partial}\n\\def\\a{\\alpha'}\n\\def\\R{\\mathcal{R}}\n\\def\\O{\\mathcal{O}}\n\n\\title{Tensorial perturbations and stability of spherically symmetric $d$--dimensional black holes in string theory}\n\n\\author{Filipe Moura\n\\\\\nCentro de Matem\\'atica da Universidade do Minho, \\\\Escola de Ci\\^encias, Campus de Gualtar, \\\\4710-057 Braga, Portugal\\\\\n\\\\\n\\email{fmoura@math.uminho.pt}\n}\n\n\n\\abstract{We compute the tensorial perturbations to a general spherically symmetric metric in $d$ dimensions with string--theoretical corrections quadratic in the Riemann tensor, from which we derive their respective potential. We use this result to study the stability of corresponding black hole solutions under such perturbations.\n}\n\n\n\n\\begin{document}\n\n\n\n\n\\vfill\n\n\\eject\n\n\n\\section{Introduction and Summary}\n\\indent\n\nIn string theory there is a large variety of black hole solutions in four and more spacetime dimensions. Studying the stability of such exact solutions is a very important subject: if a stationary black hole solution is stable under perturbations, it implies that such solution describes a possible final state of dynamical evolution of a gravitating system. If, on the other hand, it is shown to be unstable, that indicates the existence of a different branch of solutions which the original solution may decay into, which means that one can anticipate a wider variety of black hole solutions. The analysis of perturbations also gives information about physical properties of the black hole solutions, an example being the spectra of quasinormal modes. It is interesting to study how such properties, including the stability, are affected in the context of string theory.\n\nThe analysis of linear perturbations of stationary black holes in arbitrary $d$ spacetime dimensions was initiated in \\cite{iks00, ik03a, ik03c} for static, nonrotating black holes (without and with charge), for the three kinds of metric perturbations: tensor, vector and scalar. The analysis of stability was performed, and stability was obtained for asymptotically flat \\cite{ik03b, ik03d} and anti--de Sitter black holes \\cite{Konoplya:2008rq}, while an instability was found in de Sitter, for large charge and $\\Lambda$ \\cite{Konoplya:2008au}.\n\nMore recently the stability analysis was performed for rotating Myers--Perry black holes, first for tensor perturbations, where an instability was found in anti--de Sitter \\cite{Kunduri:2006qa, Kodama:2009rq}, but not for asymptotically flat black holes. Considering time--dependent perturbations, for the first time an instability was found for asymptotically flat black holes, for sufficiently rapid rotations (the ultraspinning instability) \\cite{Dias:2009iu, Dias:2010eu, Dias:2010maa}. 
Later the result was generalized for the case of anti--de Sitter \\cite{Dias:2010gk}.\n\nThe analysis of perturbations and stability was also extended to Lovelock theories in $d$ dimensions, considering corrections quadratic \\cite{dg04,dg05a} and cubic \\cite{Takahashi:2009dz, Takahashi:2010ye, Takahashi:2010gz} in the Riemann tensor. Detailed studies of black hole perturbations in Lovelock theories were\nperformed, in the context of the AdS\/CFT correspondence, in \\cite{Camanho:2009vw,Camanho:2009hu,Camanho:2010ru}. Depending on the gravitational potential resulting from the couplings of the theory, different kinds of instabilities were found, which could imply either causality violations or plasma instabilities in the holographic dual thermal field theory.\n\nAlthough superstring theories also require higher order corrections to the Einstein--Hilbert action (actually to supergravity, which is the low energy effective theory coming from superstrings), there are important differences between string and Lovelock theory effective actions. In Lovelock theories, the actions are made just with powers of the Riemann curvature tensor (i.e. one can have an action just with the metric field and its derivatives); in string theory, besides the graviton we always have to consider at least the dilaton field which, as we will see later in this article (after eq. (\\ref{bdfe})), cannot be set to zero in the presence of higher order terms. Besides, Lovelock actions are not seen as ``effective'' in the sense of string perturbation theory, but as ``exact''. To be precise, in perturbative string theory one only works up to the order of the perturbation constant (in the case of higher order string corrections, this is the inverse string tension $\\a$) which appears in the effective action, neglecting all the higher order terms which are meaningless in such effective action, while in Lovelock theories one considers exact solutions to the equations of motion, no matter the order of the constant which appears in such solutions, even if the constant only appears to first order in the lagrangian (multiplying the higher order term) and in the field equations. Because of these differences, results and conclusions which are obtained in Lovelock theories may be very different from those in string theory.\n\nString theory is the most promising candidate for a consistent description of quantum gravity; therefore, it is natural to study its effects on the stability of its black hole solutions. The study of tensorial perturbations to the metric and the respective stability of black holes with higher order corrections in the context of string theory was initiated in \\cite{Moura:2006pz}, concerning a particular spherically symmetric solution. In this article we extend such study to incorporate the most general spherically symmetric black hole solutions with gravitational corrections to first order in $\\a$ in string theory in arbitrary $d$ dimensions.\n\nThe article is organized as follows: in section 2, we review the general formalism for field perturbations, and then we compute the tensorial perturbation equations for the most general spherically symmetric background metric in $d$ dimensions. In section 3, we apply these perturbation equations to the field equations obtained from a string theory effective action with string corrections to first order in $\\a$, i.e. quadratic in the Riemann tensor. 
We obtain the master equation for the perturbation variable and the potential for tensorial perturbations including such corrections. Then in section 4 we review the ``S--deformation'' approach and we obtain a criterion for the gravitational stability of black holes under tensor perturbations from the master equation. Finally, in section 5 we apply this criterion to study the stability of two different solutions with leading $\\a$ corrections in $d$ dimensions: the dilatonic compactified black hole and the double--charged black hole.\n\n\n\\section{General setup of the perturbation theory}\n\n\n\\subsection{Perturbations on a $(d-2)$--sphere}\n\\indent\n\nWe will study the behavior, under gravitational perturbations, of string--corrected black hole solutions in a generic spacetime dimension $d$. For such analysis we use the framework developed by Ishibashi and Kodama \\cite{iks00,ik03a,ik03c} for black holes. This framework applies to generic spacetimes of the form $\\mathcal{M}^{d} = \\mathcal{N}^{d-n} \\times \\mathcal{K}^n$, with coordinates $\\left\\{ x^\\mu \\right\\} = \\left\\{ y^a, \\theta^i \\right\\}$. Here $\\mathcal{K}^n$ is a manifold with constant sectional curvature $K.$ The metric in the total space $\\mathcal{M}^{d}$ is then written as\n\n\\be \\label{ikmetric}\ng = g_{ab}(y)\\ dy^a\\ dy^b + r^2(y)\\ \\gamma_{ij}(\\theta)\\ d\\theta^i\\ d \\theta^j.\n\\ee\nFor our purposes, we take $n = d-2$ and the manifold $\\mathcal{K}^n$, describing the geometry of the black hole event horizon, will be a $(d-2)$--sphere (thus, with $K=1$). Also, $\\mathcal{N}^{d-n}$ coordinates will be $\\left\\{ y^a \\right\\} = \\left\\{ t, r \\right\\},$ with $\\left\\{ r, \\theta^i \\right\\}$ being the usual spherical coordinates so that $r(y) = r$ and $\\gamma_{ij}(\\theta)\\ d\\theta^i d\\theta^j = d\\Omega_{d-2}^2$.\n\nDefining generic perturbations to the metric as $h_{\\mu\\nu} = \\delta g_{\\mu\\nu}, \\, h^{\\mu\\nu} = -\\delta g^{\\mu\\nu},$\nwe get for the variation of the Levi-Civita connection\n\n\\be \\label{deltagamma}\n\\delta \\Gamma_{\\mu\\nu}^\\rho = \\frac{1}{2} \\left( \\nabla_\\mu {h_{\\nu}}^{\\rho} + \\nabla_\\nu {h_{\\mu}}^{\\rho} - \\nabla^{\\rho} h_{\\mu\\nu} \\right).\n\\ee\nFrom this variation and the Palatini equation\n\n\\be\n\\delta {\\R^{\\rho}}_{\\sigma\\mu\\nu} = \\nabla_\\mu\\ \\delta \\Gamma_{\\nu\\sigma}^\\rho - \\nabla_\\nu\\ \\delta \\Gamma_{\\mu\\sigma}^\\rho, \\label{palatini}\n\\ee\none can easily derive the variation of the Riemann tensor:\n\n\\be \\label{palatiniexp}\n\\delta \\R_{\\rho\\sigma\\mu\\nu} = \\frac{1}{2} \\left( {\\R_{\\mu\\nu\\rho}}^{\\lambda} h_{\\lambda\\sigma} - {\\R_{\\mu\\nu\\sigma}}^{\\lambda} h_{\\lambda \\rho} - \\nabla_\\mu \\nabla_\\rho h_{\\nu\\sigma} + \\nabla_\\mu \\nabla_\\sigma h_{\\nu\\rho} - \\nabla_\\nu \\nabla_\\sigma h_{\\mu\\rho} + \\nabla_\\nu \\nabla_\\rho h_{\\mu\\sigma} \\right).\n\\ee\n\nGeneral tensors, of rank at most equal to two, can be uniquely decomposed into tensor, vector and scalar components, according to their tensorial behavior on the $(d-2)$--sphere, the geometry of the black hole event horizon \\cite{ik03a}. In particular, this is also true for the perturbations to the metric, but one should note that metric perturbations of tensor type only exist for dimensions $d>4$, unlike perturbations of vector and scalar type, which also exist for $d=4.$ This is because the 2--sphere does not admit any tensor harmonics \\cite{Higuchi:1986wu}. 
Tensor perturbations are therefore intrinsically higher--dimensional.\n\n\\subsection{Tensorial perturbations of a spherically symmetric static metric}\n\\indent\n\nIn this work we will only consider tensor type gravitational perturbations to the metric field, for $\\a$--corrected $\\R^{\\mu\\nu\\rho\\sigma} \\R_{\\mu\\nu\\rho\\sigma}$ black holes in string theory. One should consider perturbations to all the fields present in the low--energy effective action (in our case, the metric and the dilaton), but, as we will show later, one can consistently set tensor type perturbations to the dilaton field to zero. These metric perturbations were studied in \\cite{ik03a}, where it is shown that they can be written as\n\n\\be \\label{htensor}\nh_{ij} = 2 r^2 (y^a)\\ H_T (y^a)\\ \\mathcal{T}_{ij} (\\theta^i), \\quad h_{ia} = 0, \\quad h_{ab} = 0,\n\\ee\n\n\\noindent\nwith $\\mathcal{T}_{ij}$ satisfying\n\n\\be \\label{propt}\n\\left( \\gamma^{kl} D_k D_l + k_T \\right) \\mathcal{T}_{ij} = 0, \\quad D^i \\mathcal{T}_{ij} = 0, \\quad g^{ij} \\mathcal{T}_{ij} = 0.\n\\ee\n\n\\noindent\nHere, $D_i$ is the covariant derivative on the $(d-2)$--sphere, associated to the metric $\\gamma_{ij}$. Thus, the tensor harmonics $\\mathcal{T}_{ij}$ are the eigentensors of the $(d-2)$--laplacian $D^2$, whose eigenvalues are given by $k_T + 2 = \\ell \\left( \\ell + d - 3 \\right)$, with $\\ell = 2,3,4,\\ldots$. It should be further noticed that the expansion coefficient $H_T$ is gauge--invariant by itself. This is rather important: when dealing with linear perturbations to a system with gauge invariance one might always worry that final results could be an artifact of the particular gauge one chooses to work with. Of course the simplest way out of this is to work with gauge--invariant variables, and this is precisely implemented in the Ishibashi--Kodama framework \\cite{iks00, ik03a, ik03c}. As it was noticed in \\cite{Moura:2006pz}, the Ishibashi--Kodama gauge--invariant variables are also valid for higher derivative theories as long as diffeomorphisms keep implementing gauge transformations. This is because up to now we have only chosen the background metric we wish to perturb: so far, no choice of equations of motion has been done.\n\nNow we consider a static, spherically symmetric background metric. Such a metric is clearly of the type (\\ref{ikmetric}), and is given by\n\n\\be \\label{schwarz}\nds^2 = -f(r)\\ dt^2 + g^{-1}(r)\\ dr^2 + r^2 d\\Omega^2_{d-2}.\n\\ee\n\n\\noindent\nThe nonzero components of the Riemann tensor for this metric are\n\n\\bea \\label{ikriemann}\n\\R_{trtr} &=& \\frac{1}{2} f^{\\prime \\prime} + \\frac{1}{4} \\frac{f^\\prime g^\\prime}{g} - \\frac{1}{4} \\frac{f'^2}{f}\\,, \\nonumber \\\\\n\\R_{itjt} &=& \\frac{1}{2} \\frac{gf^\\prime}{r} g_{ij}\\,, \\nonumber \\\\\n\\R_{irjr} &=& - \\frac{1}{2} \\frac{g^\\prime}{rg} g_{ij}\\,, \\nonumber \\\\\\\n\\R_{ijkl} &=& \\frac{1}{r^2} \\big( 1 - g \\big) \\Big( g_{ik} g_{jl} - g_{il} g_{jk} \\Big).\n\\eea\n\n\\noindent\n\nOne first needs to obtain the variation of the Riemann tensor under generic perturbations of the metric. 
If one collects the expressions for $h_{\\mu\\nu}$ given in (\\ref{htensor}), their covariant derivatives, and further the components of the Riemann tensor given in (\\ref{ikriemann}), and replaces them on the Palatini equation (\\ref{palatiniexp}), one obtains\n\n\\bea\n\\delta\\R_{ijkl} &=& \\Big( \\big( 3 g - 1 \\big) H_T + r g \\partial_r H_T \\Big) \\Big( g_{il} \\mathcal{T}_{jk} - g_{ik} \\mathcal{T}_{jl} - g_{jl} \\mathcal{T}_{ik} + g_{jk} \\mathcal{T}_{il} \\Big) \\nonumber \\\\\n&+& r^2 H_T \\Big( D_i D_l \\mathcal{T}_{jk} - D_i D_k \\mathcal{T}_{jl} - D_j D_l \\mathcal{T}_{ik} + D_j D_k \\mathcal{T}_{il} \\Big), \\label{drtensori} \\\\\n\\delta\\R_{itjt} &=& \\left( - r^2 \\partial_t^2 H_T + \\frac{1}{2} r^2 f f' \\partial_r H_T + r f f' H_T \\right) \\mathcal{T}_{ij}\\,, \\\\\n\\delta\\R_{itjr} &=& \\left(- r^2 \\partial_t \\partial_r H_T -r \\partial_t H_T + \\frac{1}{2} r^2 \\frac{f'}{f} \\partial_t H_T \\right) \\mathcal{T}_{ij}\\,, \\\\\n\\delta\\R_{irjr} &=& \\left( - r \\frac{g'}{g} H_T - \\frac{1}{2} r^2 \\frac{g'}{g} \\partial_r H_T - 2 r \\partial_r H_T - r^2 \\partial^2_r H_T \\right) \\mathcal{T}_{ij}\\,, \\\\\n\\delta\\R_{abcd} &=& 0,\n\\eea\nand further\n\\bea\n\\delta \\mathcal{R}_{ij} &=& \\frac{r^2}{f} \\left( \\partial^2_t H_T \\right) \\mathcal{T}_{ij} - r^2 g \\left( \\partial^2_r H_T \\right) \\mathcal{T}_{ij} - \\frac{1}{2} r^2 \\left(f^\\prime + g^\\prime\\right) \\left( \\partial_r H_T \\right) \\mathcal{T}_{ij} - r \\left(f^\\prime + g^\\prime\\right) H_T \\mathcal{T}_{ij} \\nonumber \\\\ &-&\\left(d-2\\right) r g \\left( \\partial_r H_T \\right) \\mathcal{T}_{ij}\n+ 2\\left(d-3\\right) (1-g) H_T \\mathcal{T}_{ij} + \\left(k_T +2 \\right) H_T \\mathcal{T}_{ij} \\,, \\\\\n\\delta \\mathcal{R}_{ia} &=& 0, \\quad\n\\delta \\mathcal{R}_{ab} = 0, \\quad\n\\delta \\mathcal{R} = 0. \\label{drtensora}\n\\eea\n\\noindent\n\nThese are the equations we will need in order to perturb the $\\a$--corrected field equations.\n\n\\section{Gravitational perturbations to the $\\a$--corrected field equations}\n\n\\subsection{Analysis on the Einstein frame}\n\\indent\n\nThe $d$--dimensional effective action with $\\a$ corrections we will be dealing with is given, in the Einstein frame, by\n\\be \\label{eef} \\frac{1}{16 \\pi G} \\int \\sqrt{-g} \\left( \\R -\n\\frac{4}{d-2} \\left( \\d^\\mu \\phi \\right) \\d_\\mu \\phi +\n\\mbox{e}^{\\frac{4}{d-2} \\phi} \\frac{\\lambda}{2}\\\n\\R^{\\mu\\nu\\rho\\sigma} \\R_{\\mu\\nu\\rho\\sigma} \\right) \\mbox{d}^dx .\n\\ee\n\nHere $\\lambda = \\frac{\\a}{2}, \\frac{\\a}{4}$ and $0$, for\nbosonic, heterotic and type II strings, respectively. We are only\nconsidering gravitational terms: we can consistently settle all\nfermions and gauge fields to zero for the moment. That is not the\ncase of the dilaton, as it can be seen from the resulting field equations:\n\n\\bea\n\\nabla^2 \\phi - \\frac{\\lambda}{4}\\ \\mbox{e}^{\\frac{4}{2-d} \\phi} \\left(\n\\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau} \\right) &=&\n0, \\label{bdfe} \\\\ \\R_{\\mu\\nu} + \\lambda\\ \\mbox{e}^{\\frac{4}{2-d}\n\\phi} \\left( \\R_{\\mu\\rho\\sigma\\tau} {\\R_{\\nu}}^{\\rho\\sigma\\tau} -\n\\frac{1}{2(d-2)} g_{\\mu\\nu} \\R_{\\rho\\sigma\\lambda\\tau}\n\\R^{\\rho\\sigma\\lambda\\tau} \\right) &=& 0. 
\\label{bgfe} \\eea\n\nThe correction term we are considering in (\\ref{eef}) is $\\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau}$, the square of the Riemann tensor, which we generically designate as $\\R^2$: since we are not considering the Ricci tensor in the corrections (it would only contribute at a higher order in $\\lambda$), there is no possible confusion. From (\\ref{bdfe}) one sees that this correction term acts as a source for the dilaton and, therefore, one cannot set the dilaton to zero without setting this term to zero too. Still, as it was shown in \\cite{Moura:2009it} and we will review later (see eq. (\\ref{fr2})), for a spherically symmetric metric like (\\ref{schwarz}), at order $\\lambda=0$ the dilaton is a constant (which can be always set to 0). The dilaton only gets nonconstant terms at order $\\lambda;$ this is why we could neglect terms which are quadratic in $\\phi$ while deriving these field equations, since we are only working perturbatively to first order in $\\lambda.$\n\nIn the present context, any black hole solution is built perturbatively in $\\lambda,$ and a solution will only be valid in regions where $r^2 \\gg \\lambda$, \\textit{i.e.}, any perturbative solution is only valid for black holes whose event horizon is much bigger than the string length.\n\nWe want to study scattering processes associated to solutions to the field equations above and, therefore, they are the ones which we will perturb.\n\nThe perturbation of the $\\a$--corrected field equations (\\ref{bdfe}) and (\\ref{bgfe}) has already been taken in \\cite{Moura:2006pz}, for a spherically symmetric metric like (\\ref{schwarz}) but with $f(r)=g(r);$ here we consider the general case (\\ref{schwarz}) and apply the results to concrete metrics.\n\nBy perturbing (\\ref{bdfe}) and (\\ref{bgfe}) one gets\n\n\\bea\n\\delta \\nabla^2 \\phi &-& \\frac{\\lambda}{4}\\ \\mbox{e}^{\\frac{4}{2-d} \\phi}\\ \\delta \\left( \\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau} \\right) + \\frac{\\lambda}{d-2}\\ \\mbox{e}^{\\frac{4}{2-d} \\phi}\\ \\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau}\\ \\delta \\phi = 0, \\label{pbdfe} \\\\\n\\delta \\R_{ij} &+& \\lambda\\ \\mbox{e}^{\\frac{4}{2-d} \\phi} \\left[ \\delta \\left( \\R_{i\\rho\\sigma\\tau} {\\R_{j}}^{\\rho\\sigma\\tau} \\right) - \\frac{1}{2(d-2)} \\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau}\\ h_{ij} \\right. \\nonumber \\\\\n&-& \\left. \\frac{1}{2(d-2)}\\ g_{ij}\\ \\delta \\left( \\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau} \\right) \\right] + \\frac{4}{d-2}\\ \\R_{ij}\\ \\delta \\phi = 0. \\label{pbgfe}\n\\eea\n\n\\noindent\nUsing the explicit form of the Riemann tensor (\\ref{ikriemann}) together with the variations (\\ref{htensor}) and (\\ref{drtensori}--\\ref{drtensora}), one can compute the terms in (\\ref{pbdfe}) and (\\ref{pbgfe}).\n\nUsing these variations, it is a simple computation to verify that $\\delta \\left( \\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau} \\right) \\equiv 0.$ From this fact and (\\ref{pbdfe}), we see that one can consistently set $\\delta \\phi=0,$ as expected for a tensorial perturbation of a scalar field. The derivation is explicitly given in article \\cite{Moura:2006pz}, to which we refer the reader. Eq. 
(\\ref{pbdfe}) does not give us any other relevant information.\n\nCollecting the several expressions, the result for (\\ref{pbgfe}) finally becomes\n\n\\bea\n&&\n\\left( 1 - 2 \\lambda \\frac{f'}{r} \\right) \\frac{r^2}{f} \\partial^2_t H_T - \\left( 1 - 2 \\lambda \\frac{g'}{r} \\right) r^2 g \\,\\partial^2_r H_T \\nonumber \\\\\n&&\n- \\left[(d-2) r g + \\frac{1}{2} r^2 \\left(f'+g'\\right) + 4 \\lambda (d-4) \\frac{g \\left( 1 - g \\right)}{r} - 4 \\lambda g g' - \\lambda r \\left( f'^2 + g'^2 \\right) \\right] \\partial_r H_T \\nonumber \\\\\n&&\n+ \\left[ \\left( \\ell \\left( \\ell + d - 3 \\right) -2 \\right) \\left(1+\\frac{4\\lambda}{r^2}\\left(1-g\\right)\\right) + 2(d-2)- 2(d-3)g- r\\left(f' + g'\\right) \\right. \\nonumber \\\\\n&&\n+ \\left. \\lambda \\left( 8 \\frac{1 - g}{r^2} + 2 \\left( d - 3 \\right)\\frac{\\left( 1 - g \\right)^2}{r^2} - \\frac{r^2}{d-2} \\left[f'' + \\frac{1}{2} \\left( \\frac{f' g'}{g} - \\frac{f'^2}{f}\\right)\\right]^2 \\right) \\right] H_T = 0. \\label{master0}\n\\eea\n\n\\noindent\nThis is a second order partial differential equation for the perturbation function $H_T.$ If we now divide (\\ref{master0}) by $\\left( 1 - 2 \\lambda \\frac{f'}{r} \\right) \\frac{r^2}{f},$ we obtain an equation of the form\n\\be \\label{dottigen}\n\\partial^2_t H_T - F^2(r)\\ \\partial^2_r H_T + P(r)\\ \\partial_r H_T + Q(r)\\ H_T = 0\n\\ee\nwith\n\\bea\nF &=& \\sqrt{\\frac{1 - 2 \\lambda \\frac{g'}{r}}{1 - 2 \\lambda \\frac{f'}{r}} fg}, \\nonumber \\\\\nP &=& - \\frac{f}{1 - 2 \\lambda \\frac{f'}{r}} \\left[ (d-2) \\frac{g}{r} + \\frac{1}{2} \\left(f'+g'\\right) + \\frac{2 \\lambda}{r} \\left( 2 ( d - 4 ) \\frac{g \\left( 1 - g \\right)}{r^2}\n- 2 \\frac{g g'}{r} - \\frac{1}{2} \\left(f'^2+g'^2\\right) \\right) \\right], \\nonumber \\\\\nQ &=& \\frac{f}{1 - 2 \\lambda \\frac{f'}{r}} \\left[ \\frac{\\ell \\left( \\ell + d - 3 \\right)}{r^2} - \\frac{f'+g'}{r} + 2 (d-3) \\frac{1-g}{r^2} \\right. \\nonumber \\\\ &+& \\left. \\frac{\\lambda}{r^2} \\left( 4 \\ell \\left( \\ell + d - 3 \\right) \\frac{1-g}{r^2} + 2 (d-3) \\frac{\\left( 1-g \\right)^2}{r^2} - \\frac{r^2}{d-2} \\left[f'' + \\frac{1}{2} \\left( \\frac{f' g'}{g} - \\frac{f'^2}{f}\\right)\\right]^2\n\\right) \\right]. \\label{fpqnp}\n\\eea\n\\noindent\nFor our purposes, we would like to re--write the above equation (\\ref{dottigen}) in a more tractable form, as a master equation. In order to achieve so, we follow a procedure similar to the one in \\cite{dg05a}, defining a gauge--invariant ``master variable'' for the gravitational perturbation as\n\n\\be\n\\Phi = k(r) H_T, \\quad k(r) = \\frac{1}{\\sqrt{F}} \\exp \\left( - \\int \\, \\frac{P}{2F^2} \\, dr \\right), \\label{k}\n\\ee\nand replacing $\\partial\/\\partial r$ by $\\partial\/\\partial r_*,$ $r_*$ being the tortoise coordinate defined in this case by $dr_* = \\frac{dr}{F(r)}.$ It is then easy to see that an equation like (\\ref{dottigen}) may be written as a master equation:\n\n\\be\n\\frac{\\partial^2 \\Phi}{\\partial r_*^2} - \\frac{\\partial^2 \\Phi }{\\partial t^2} = \\left( Q + \\frac{F'^2}{4} - \\frac{F F''}{2} - \\frac{P'}{2} + \\frac{P^2}{4 F^2} + \\frac{P F'}{F} \\right) \\Phi \\equiv V_{\\textsf{T}} \\left[ f(r), g(r) \\right] \\Phi. \\label{potential0}\n\\ee\nThis is the equation we shall be working with for the rest of the article. 
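As a consistency check of this reduction -- given here purely as an illustration, since the computation can also be done by hand -- one may verify with a symbolic algebra package that the substitution (\\ref{k}), together with the tortoise coordinate, turns (\\ref{dottigen}) into (\\ref{potential0}) with precisely the potential quoted there. A minimal sketch using the sympy library:
\\begin{verbatim}
import sympy as sp

r, omega = sp.symbols('r omega', positive=True)
F, P, Q, h, k = [sp.Function(n)(r) for n in ('F', 'P', 'Q', 'h', 'k')]

# eq. (k) fixes k(r) through its logarithmic derivative u = k'/k
u = -(sp.diff(F, r) / (2 * F) + P / (2 * F**2))

# master variable Phi = k(r) h(r), for H_T = exp(i omega t) h(r)
phi = k * h

# potential quoted in the master equation
V = (Q + sp.diff(F, r)**2 / 4 - F * sp.diff(F, r, 2) / 2
     - sp.diff(P, r) / 2 + P**2 / (4 * F**2) + P * sp.diff(F, r) / F)

# with d/dr_* = F d/dr, the master equation reads F (F phi')' + (omega^2 - V) phi = 0
residual = F * sp.diff(F * sp.diff(phi, r), r) + (omega**2 - V) * phi

# eliminate h'' using the original equation and k', k'' using u = k'/k
rules = {
    sp.Derivative(h, (r, 2)): (-omega**2 * h + P * sp.diff(h, r) + Q * h) / F**2,
    sp.Derivative(k, (r, 2)): (sp.diff(u, r) + u**2) * k,
    sp.Derivative(k, r): u * k,
}
print(sp.simplify(sp.expand(residual.subs(rules) / k)))   # prints 0
\\end{verbatim}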
We see that the field equations for the tensorial perturbations of a background spherically symmetric metric like (\\ref{schwarz}) in $d$ dimensions can be reduced to a single second order partial differential equation, in $(r_*,t)$ coordinates, for a master variable $\\Phi$ which is a simple combination of gauge--invariant variables. This result had been obtained in \\cite{ik03a} for Einstein gravity; here, we see that it is also valid in the presence of string corrections quadratic in the Riemann tensor.\n\n\nIn order to explicitly compute the potential $V_{\\textsf{T}} \\left[ f(r), g(r) \\right],$ one should first simplify the expressions from (\\ref{fpqnp}). To begin with, one can judiciously use the field equation (\\ref{bgfe}) to derive the relation\n$$\\lambda \\R_{abcd} \\R^{abcd} = 2 g^{ij} \\R_{ij} + \\lambda \\R_{ijkl} \\R^{ijkl}.$$\nHere we used the fact that, in the presence of a metric like (\\ref{schwarz}), the dilaton field is of order $\\lambda,$ as it was explicitly shown in \\cite{Moura:2009it}. This way we could neglect all the dilaton terms, which would only contribute at least to order ${\\mathcal{O}} (\\lambda^2)$). Using the above relation and the explicit form of the Riemann tensor (\\ref{ikriemann}), we can obtain the (on--shell) relation\n\\be \\label{f''}\n\\lambda \\left[f'' + \\frac{1}{2} \\left( \\frac{f' g'}{g} - \\frac{f'^2}{f}\\right)\\right]^2 -\\frac{2 (d-3) (d-2)}{r^2} \\left(\\frac{\\lambda (1-g)}{r^2}+1\\right) (1-g)+\\frac{(d-2)}{r} \\left(\\frac{g f'}{f}+g'\\right)=0.\n\\ee\nWe may use the relation above in order to remove the $\\left[f'' + \\frac{1}{2} \\left( \\frac{f' g'}{g} - \\frac{f'^2}{f}\\right)\\right]^2$ term from $Q$ in (\\ref{fpqnp}). Also, although the expressions in (\\ref{fpqnp}) are non--polynomial in $\\lambda$, we can expand them and take only the first order terms, since that is the order in $\\lambda$ to which we are working. A simple power series expansion yields\n\n\\bea\nF &=& \\sqrt{fg}\\left(1 +\\lambda \\frac{f'-g'}{r}\\right), \\nonumber \\\\\nP &=& - f \\left[ (d-2) \\frac{g}{r} + \\frac{1}{2} \\left(f'+g'\\right) + \\frac{\\lambda}{r^2} \\left(4 (d-4) \\frac{g(1-g)}{r}+r g'\\left(f'-g'\\right)-4 g g' +2 (d-2) g f' \\right) \\right], \\nonumber \\\\\nQ &=& \\frac{\\ell \\left( \\ell + d - 3 \\right)}{r^2} f + \\frac{(g-f)f'}{r} +\n\\frac{2\\lambda}{r^2} \\left[ \\frac{\\ell \\left( \\ell + d - 3 \\right)}{r} f \\left( 2 \\frac{1-g}{r} + f' \\right) + (g-f) f'^2 \\right].\n\\label{fpq}\n\\eea\nReplacing the expressions from (\\ref{fpq}) on $V_{\\textsf{T}} \\left[ f(r), g(r) \\right]$ given in (\\ref{potential0}), one finally obtains\n\n\\bea V_{\\textsf{T}} [f(r),g(r)] &=&\n\\frac{1}{16 r^2 f g}\n\\left[(16 \\ell (\\ell +d-3) f^2 g+ r^2 f^2 f'^2 +3 r^2 g^2 f'^2-2 r^2 f (f+g) f' g' -4 r^2 fg (g-f) f'' \\right. \\nonumber \\\\\n&+& \\left. 16r f g^2 f' +4 r (d-6)f^2 g f' +4 (d-2)r f^2 g g' +4 (d-4) (d-2) f^2 g^2 \\right] \\nonumber \\\\\n&+&\\frac{\\lambda}{8 r^4 f g} \\left[32 \\ell (\\ell +d-3) f^2 (1-g) g +16 \\ell (d+\\ell -3) f^2 g f' r -r^3 f^2 f'^2 \\left(f'-g'\\right) \\right. 
\\nonumber \\\\\n&+& 3 r^3 g^2 f'^2 \\left(f'-g'\\right)-2 r^3 f g f' \\left(f'-g'\\right) g' -4 r^3 f^2 g f' \\left(f''-g''\\right)-2 r^3 f^2 g g' \\left(f''-g''\\right) \\nonumber \\\\\n&+& 2 r^3 f g^2 \\left(-3 f' f''+2 g' f''+f' g''\\right) -4 r^3 f^2 g^2 \\left(f'''-g'''\\right) +18 r^2 f g^2 f'^2 -12 r^2 f^2 g f'^2 \\nonumber \\\\\n&-& 10 r^2 f^2 g g'^2 -2 r^2 f g^2 f' g' +2 r^2 (4 d-13) f^2 g f' g' +8 r^2 f^2 g^2 f'' +8 (d-5) r^2 f^2 g^2 g''\n \\nonumber \\\\\n&+& 4 r(d-4)^2 f^2 g^2 (f' + g') +8r f^2 g^2(g'-f') +8 (d-4) r f^2 g (f' + g'-4g g') \\nonumber \\\\\n&+& \\left. 16 (d-5) (d-4) f^2 g^2 (1-g) \\right].\n\\label{potential}\n\\eea\nTaking $f=g$ in (\\ref{schwarz}), (\\ref{potential}) matches the result of \\cite{Moura:2006pz}, as it should. Also, at order $\\lambda=0,$ (\\ref{potential}) matches the Einstein-Hilbert potential obtained in \\cite{ik03a}.\n\nEquation (\\ref{potential}) gives the generic expression for the potential for tensor--type gravitational perturbations of any kind of static, spherically symmetric $\\R^2$ string--corrected black hole in $d$--dimensions of the form (\\ref{schwarz}). This is also one of the main results of this article. Knowing this potential, we are now ready to study the stability of such black holes under those perturbations.\n\n\\subsection{Analysis in different frames}\n\\label{diffr}\n\\indent\n\nUnder a conformal transformation, the metric and Riemann tensor transform as:\n\\bea \\label{confgen}\ng_{\\mu\\nu} &\\rightarrow& \\exp \\left( \\Omega \\right) g_{\\mu\\nu}, \\\\\n{\\R_{\\mu\\nu}}^{\\rho\\sigma} &\\rightarrow& \\exp \\left( -\\Omega \\right) \\left( {\\R_{\\mu\\nu}}^{\\rho\\sigma} - 2 {\\delta_{\\left[\\mu\\right.}}^{\\left[\\rho\\right.} \\nabla_{\\left.\\nu \\right]} \\nabla^{\\left.\\sigma \\right]} \\Omega + {\\delta_{\\left[\\mu\\right.}}^{\\left[\\rho\\right.} \\left(\\nabla_{\\left.\\nu \\right]} \\Omega \\right) \\nabla^{\\left.\\sigma \\right]} \\Omega - {\\delta_{\\left[\\mu\\right.}}^{\\left[\\rho\\right.} \\delta_{\\left.\\nu\\right]}^{\\left. \\, \\, \\, \\, \\sigma\\right]} \\left(\\nabla^{\\lambda}\\Omega\\right) \\nabla_{\\lambda}\\Omega \\right).\n\\eea\n\nIn our perturbation analysis we assumed an action\/solution taken in the Einstein frame. 
But the action obtained directly from string perturbation theory comes in the string frame as\n\\be \\label{esf}\n\\frac{1}{16 \\pi G} \\int \\sqrt{-g}\\ \\mbox{e}^{-2 \\phi} \\Big( \\R + 4 \\left( \\d^\\mu \\phi \\right) \\d_\\mu \\phi + \\frac{\\lambda}{2}\\ \\R^{\\mu\\nu\\rho\\sigma} \\R_{\\mu\\nu\\rho\\sigma} \\Big) \\mbox{d}^dx.\n\\ee\nTaking a dilaton--dependent conformal transformation (\\ref{confgen}) with $\\Omega=\\frac{4}{d-2} \\phi,$ we obtain from (\\ref{esf}) the action (\\ref{eef}) in the Einstein frame, plus some terms with derivatives of the dilaton, which would only contribute at higher orders in $\\lambda,$ since as we saw the dilaton itself is already of order $\\lambda.$\n\nIn general, if $Y(\\R)$ is a scalar polynomial in the Riemann tensor representing the higher derivative corrections, with conformal weight $w$ and the convention that $w \\left( g_{\\mu\\nu} \\right) = +1,$ an arbitrary action with $\\a$ corrections is (just the gravitational sector)\n\\be\n\\frac{1}{16 \\pi G} \\int \\sqrt{-g} \\Big( \\R - \\frac{4}{d-2} \\left( \\d^\\mu \\phi \\right) \\d_\\mu \\phi + z Y(\\R) \\Big) \\mbox{d}^dx,\n\\ee\nwith $z$ being, up to a numerical factor, the suitable power of the inverse string tension $\\a$ for $Y(\\R)$.\n\nIn a different frame, after the conformal transformation (\\ref{confgen}), this action becomes\n\\be\n\\frac{1}{16 \\pi G} \\int \\sqrt{-g}\\ \\mbox{e}^{\\frac{d-2}{2} \\Omega} \\left( \\widetilde{\\R} - \\frac{4}{d-2} \\left( \\d^\\mu \\phi \\right) \\d_\\mu \\phi + z\\ \\mbox{e}^{\\left( 1 + w \\right) \\Omega} Y(\\widetilde{\\R}) \\right) \\mbox{d}^dx.\n\\ee\nAgain, if the conformal transformation (\\ref{confgen}) is dilaton--dependent, it will generate from $Y(\\widetilde{\\R})$ some dilaton terms which are of higher order in $\\lambda.$ Besides that, other lower order parcels containing the dilaton may be generated, affecting the numerical factor (and sign) of its kinetic term, as one can see comparing (\\ref{esf}) to (\\ref{eef}). These dilaton terms are the only change in the graviton field equation generated by a dilaton--dependent conformal transformation (\\ref{confgen}).\n\nThe field equation (\\ref{bgfe}) we considered was obtained after removing the trace term $-\\frac{1}{2} g_{\\mu\\nu} \\R,$ obtained after contracting (\\ref{bgfe}) itself. This procedure is totally independent of the chosen frame. Other dilaton terms were removed by considering the dilaton field equation, which obviously depends on the choice of frame (equation (\\ref{bdfe}) is valid in the Einstein frame). Still, as we have seen under tensorial perturbations the dilaton $\\phi$ is inert, and so is its field equation, in any chosen frame. Indeed the arguments in the discussion after (\\ref{bgfe}), namely the fact that $\\delta \\left( \\R_{\\rho\\sigma\\lambda\\tau} \\R^{\\rho\\sigma\\lambda\\tau} \\right) \\equiv 0$, do not depend on the choice of frame. This means that, in order to study tensorial perturbations, only (\\ref{bgfe}) is relevant, and the results obtained from it are valid in any chosen frame, namely the equation governing tensorial perturbations of the metric (\\ref{master0}) (or, equivalently, (\\ref{potential0})).\n\nFinally we should mention that, although the dilaton field equation looks different in the string and in the Einstein frames (see \\cite{cmp89}), their spherically symmetric $d$--dimensional solution is the same in both frames. 
Indeed the string frame dilaton solution obtained in \\cite{cmp89} is equivalent to the Einstein frame dilaton solution from \\cite{Moura:2009it}, apart from the normalization at infinity (which in \\cite{Moura:2009it} was taken to be zero - this is the solution we use). This is not obvious if one looks at the two solutions, but it can be verified case by case in $d$ using symbolic computation software. The reason becomes clear after comparing the two field equations and neglecting terms of higher order in $\\lambda.$ This fact makes the change from the string to the Einstein frame (and vice versa) unambiguous.\n\n\\section{General analysis of perturbative stability}\n\\indent\n\nIn order to study the stability of a solution, we use the ``S--deformation approach'', first introduced in \\cite{ik03b} and later further developed in \\cite{dg04, dg05a}. Let us briefly review this technique in the following (for more details we refer the reader to the original discussion in \\cite{ik03b}).\n\nAfter having obtained the potential $V_{\\textsf{T}}$ for the master equation (\\ref{potential0}), one assumes that its solutions are of the form $\\Phi(x,t) = e^{i\\omega t} \\phi(x)$, such that $\\frac{\\partial\\Phi}{\\partial t} = i\\omega \\Phi$. In this way the master equation (\\ref{potential0}) may be written in Schr\\\"odinger form, for a generic potential $V(x),$ as\n\n\\be\n\\left[ - \\frac{d^2}{dx^2} + V(x) \\right] \\phi(x) \\equiv A \\phi(x) = \\omega^2\n\\phi(x).\n\\ee\n\n\\noindent\nA given solution of the gravitational field equations will then be perturbatively stable if and only if the operator $A$ defined above has no negative eigenvalues for $x \\in {\\mathbb{R}}$ \\cite{ik03b}. The above condition is equivalent to the positivity, for any given $\\phi,$ of the inner product \\cite{ik03b}\n\n\\be\n\\left \\langle \\phi \\left| A \\phi \\right \\rangle \\right. = \\int_{-\\infty}^{+\\infty} \\phi^\\dagger (x) \\left[ - \\frac{d^2}{dx^2} + V(x) \\right] \\phi(x)\\ dx,\n\\ee\n\n\\noindent\nwhich, after some integrations by parts and further algebra, may be rewritten as\n\n\\be\n\\left \\langle \\phi \\left| A \\phi \\right \\rangle \\right. = \\int_{-\\infty}^{+\\infty} \\left[ \\left| \\frac{d\\phi}{dx} \\right|^2 + V(x) \\left|\\phi \\right|^2 \\right] dx = \\int_{-\\infty}^{+\\infty} \\left[ \\left| D \\phi \\right|^2 + \\widetilde{V}(x) \\left| \\phi \\right|^2 \\right] dx.\n\\ee\n\n\\noindent\nHere we have defined $D = \\frac{d}{dx} + S$ and $\\widetilde{V}(x) = V(x) + F \\frac{dS}{dr} - S^2$, with $S$ a completely arbitrary function. Taking $S = - \\frac{F}{k} \\frac{dk}{dr},$ with $k(r)$ given by (\\ref{k}), we simply obtain $\\widetilde{V}(x) = Q$ and\n\n\\be\n\\left \\langle \\phi \\left| A \\phi \\right \\rangle \\right. = \\int_{-\\infty}^{+\\infty} \\left|D \\phi \\right|^2 dx + \\int_{-\\infty}^{+\\infty} Q(x) \\left|\\phi \\right|^2 dx.\n\\ee\n\n\\noindent\nThe second term of the expression above may be written as ($R_H$ being the radius of the event horizon)\n\n\\be\n\\int_{R_H}^{+\\infty} \\frac{Q(r)}{F(r)} \\left|\\phi \\right|^2 dr.\n\\ee\n\n\\noindent\nSince $\\left|\\phi \\right|^2$ and $\\left|D \\phi \\right|^2$ are positive, perturbative stability of a given black hole solution then follows if one can prove that $\\frac{Q(r)}{F(r)}$ is a positive function for $r \\ge R_H.$ Here we note that, by definition, the horizon is the largest root of $f(r),$ i.e. $f>0$ for $r>R_H.$ Here we assume the same is valid for $g,$ i.e. 
the classical part of $F=F_0 + \\lambda F_1,$ given by $F_0=\\sqrt{fg},$ is well defined and positive for $r>R_H.$ This does not mean that $F$ is necessarily positive: close to the horizon, $F_0=\\sqrt{fg}$ is expected to have very small values, which we cannot assume to be larger than the $\\lambda$--corrections, as one usually does. These corrections, from (\\ref{fpq}) given by $F_1= \\sqrt{fg} \\frac{f'-g'}{r},$ may well be negative if $f'R_H$ clearly $f_0^T >0,$ from (\\ref{qf}) we get\n$$r^4 \\sqrt{fg} \\left.\\frac{Q}{F}\\right|_1 = \\frac{2}{r^2} \\frac{\\ell \\left( \\ell + d - 3 \\right)}{r} f_0^T \\left( 2 \\frac{1-f_0^T}{r} + f_0^{T'} \\right).$$\nWe only need to compute\n$$2 \\frac{1-f_0^T}{r} + f_0^{T'}= (d-3) \\frac{R_H^{d-3}}{r^{d-2}},$$\nwhich is always a positive quantity. This way, $\\left.\\frac{Q}{F}\\right|_1$ is always positive.\n\n$r^2 \\sqrt{fg} \\left.\\frac{Q}{F}\\right|_0=\\frac{\\ell \\left( \\ell + d - 3 \\right)}{r^2} f + \\frac{(g-f)f'}{r} $ must be computed with the full, $\\lambda$--corrected metric (therefore, it will also include $\\lambda$ corrections).\n\nBy definition, the horizon is the largest root of $f(r).$ Since asymptotically $f(r) \\rightarrow 1,$ $f(r)$ must be positive for $r>R_H.$\n\nAs we have seen, the dilaton solution (\\ref{fr2}) is negative and its derivative (\\ref{fr3}) is a positive function.\nTherefore $\\phi - r \\phi'$ is a negative quantity. Since $1 - \\left(\\frac{R_H}{r}\\right)^{d-3} >0,$ from (\\ref{fnew}) we get $g-f>0$ (always for $r>R_H$). Also from (\\ref{fnew}) $g-f$ is of order $\\lambda,$ i.e. its classical part is zero. This way, in the term $(g-f)f'$ of $\\left.\\frac{Q}{F}\\right|_0,$ one can take just the classical part $f_0^{T'}$ to compute $f'$: its $\\lambda$ correction would only contribute to order $\\lambda^2.$ But, as we know, $f_0^{T'}>0.$ This means $\\left.\\frac{Q}{F}\\right|_0$ (and therefore $\\frac{Q}{F}$) is indeed positive for $r>R_H.$ This way, the dilatonic $d$--dimensional black hole solution (\\ref{fnew}) is indeed stable under tensorial perturbations.\n\n\\subsection{The double--charged black hole}\n\\label{giveon}\n\\subsubsection{Description of the solution}\n\\indent\n\nIn article \\cite{Giveon:2009da} one can find black holes in any dimension formed by a fundamental string compactified on an internal circle with any momentum $n$ and winding $w,$ both at leading order and with leading $\\a$\ncorrections. One starts with the Callan--Myers--Perry solution in the string frame \\cite{cmp89}, which is of the form (\\ref{schwarz}), with $f, g$ replaced by $f^{CMP}_S, g^{CMP}_S,$ given by\n\\bea\nf^{CMP}_S(r)&=&f_0^T \\left(1+2 \\frac{\\lambda}{R_H^2} \\mu(r)\\right), \\nonumber \\\\\ng^{CMP}_S(r)&=&f_0^T \\left( 1-2 \\frac{\\lambda}{R_H^2} \\epsilon(r)\\right), \\nonumber \\\\\n\\epsilon(r)&=&\\frac{d-3}{4} \\frac{\\left(\\frac{R_H}{r}\\right)^{d-3}}{1-\\left(\\frac{R_H}{r}\\right)^{d-3}} \\left[\\frac{(d-2)(d-3)}{2}\n-\\frac{2(2d-3)}{d-1} + (d-2)\\left(\\psi^{(0)}\\left(\\frac{2}{d-3}\\right) + \\gamma \\right)\\right. \\nonumber \\\\ &+& \\left. d \\left(\\frac{R_H}{r}\\right)^{d-1} +\\frac{4}{d-2} \\varphi(r) \\right], \\label{ep} \\\\\n\\mu(r)&=&-\\epsilon(r)+\\frac{2}{d-2} (\\varphi(r)-r \\varphi'(r)). \\label{mu}\n\\eea\n$f_0^T$ is given by (\\ref{tangher}) and $\\varphi(r)$ is given by (\\ref{fr2}). 
This dilaton solution $\\varphi(r)$ is such that $\\epsilon(r), \\mu(r)$ are finite at the horizon $r=R_H$ \\cite{Moura:2009it}: indeed, from (\\ref{firh}), (\\ref{firhd}) and the definitions (\\ref{ep}), (\\ref{mu}) one gets\n\\be\n\\epsilon(R_H)= -\\frac{d-1}{2}, \\, \\mu(R_H)=-\\frac{3d\\left(d-3\\right)\\left(d-\\frac{5}{3}\\right)}{4\\left(d-1\\right)}\n-\\frac{d-2}{2}\\left(\\psi^{(0)}\\left(\\frac{2}{d-3}\\right) + \\gamma \\right).\n\\ee\n\nIn the same way as $\\varphi(r)$, also $\\epsilon(r)$ in (\\ref{ep}) is negative (always for $r>R_H$): it has a negative value at the horizon and grows up to 0 at asymptotic infinity. Indeed one can easily show (for instance, constructing a table with Mathematica) that, for relevant values of $d,$ the first line in (\\ref{ep}) obeys $\\frac{(d-2)(d-3)}{2}-\\frac{2(2d-3)}{d-1} + (d-2)\\left(\\psi^{(0)}\\left(\\frac{2}{d-3}\\right) + \\gamma \\right)<0.$ There is a positive contribution from $d \\left(\\frac{R_H}{r}\\right)^{d-1}$, but it decreases with increasing $r,$ differently from other negative contributions. Another remarkable fact from $\\epsilon(r)$ is that, for most of its range, it has very small numerical values (when compared to $\\varphi(r)$). This is due to the overall factor $\\frac{\\left(\\frac{R_H}{r}\\right)^{d-3}}{1-\\left(\\frac{R_H}{r}\\right)^{d-3}},$ because of which the absolute value of $\\epsilon(r)$ decreases much faster than that of $\\varphi(r).$ Because of that, $\\mu(r)$ which, from (\\ref{mu}), is a difference of negative quantities, is also negative in all its range. This negativeness of $\\epsilon(r), \\mu(r)$ can be checked simply by plotting these two functions of $r\/R_H$ for all relevant values of $d.$\n\nThis Callan--Myers--Perry metric in the string frame is lifted to an additional dimension by adding an extra coordinate, taken to be compact (this means to produce a uniform black string). One then performs a boost along this extra direction, with parameter $\\alpha_w$, and $T$--dualizes around it (to change string momentum into winding), obtaining a $(d+1)$--dimensional black string winding around a circle. Finally one boosts one other time along this extra direction, with parameter $\\alpha_p$, in order to add back momentum charge. One finally obtains a spherically symmetric black hole in $d$ dimensions with two electrical charges.\n\nThe whole process is worked out in detail in \\cite{Giveon:2009da}; the final metric, in the string frame, is of the form (\\ref{schwarz}), with $f, g$ given by\n\\bea\nf_S(r)&=&\\frac{f_0^T}{\\Delta(\\alpha_n)\\Delta(\\alpha_w)}\n\\left[1+\\frac{2 \\lambda}{R_H^2} \\frac{\\mu(r)}{\\Delta(\\alpha_n)\\Delta(\\alpha_w)}\n-\\frac{2 \\lambda}{R_H^2} \\mu(r) \\frac{\\sinh^2(\\alpha_n)\\sinh^2(\\alpha_w)}{\\Delta(\\alpha_n)\\Delta(\\alpha_w)} \\left(\\frac{R_H}{r}\\right)^{2(d-3)}\\right. \\\\\n&+&\\frac{2 \\lambda}{R_H^2} \\mu(r)\\left(\\frac{\\sinh^{2}\\alpha_n}{\\Delta(\\alpha_n)}\n+\\frac{\\sinh^{2}\\alpha_w}{\\Delta(\\alpha_w)}\\right)+\\left. 
\\frac{\\lambda}{R_H^2} (d-3)^2 f_0^T \\left(\\frac{R_H}{r}\\right)^{2(d-2)}\n\\frac{\\sinh^2(\\alpha_n)\\sinh^2(\\alpha_w)}{\\Delta(\\alpha_n)\\Delta(\\alpha_w)}\\right], \\nonumber\\\\\n\\Delta\\left(x\\right)&:=&1+\\left(\\frac{R_H}{r}\\right)^{d-3}\\sinh^2x, \\label{delta} \\\\\ng_S(r)&=&f_0^T\\left( 1-2\\frac{\\lambda}{R_H^2} \\epsilon(r)\\right).\n\\eea\nThe dilaton in this case is given by\n\n\\bea\ne^{-2\\phi}&=&\\sqrt{\\Delta(\\alpha_n)\\Delta(\\alpha_w)}\n\\left[1-2 \\frac{\\lambda}{R_H^2} \\varphi(r)-\\frac{\\lambda}{R_H^2} \\mu(r) f_0^T \\left(\\frac{\\sinh^2\\alpha_n}{\\Delta(\\alpha_n)}\n+\\frac{\\sinh^2\\alpha_w}{\\Delta(\\alpha_w)}\\right) \\right. \\nonumber\\\\\n&-&\\left.\\frac{\\lambda}{R_H^2} \\frac{(d-3)^2}{2} f_0^T \\left(\\frac{R_H}{r}\\right)^{2(d-2)}\n\\frac{\\sinh^2(\\alpha_n)\\sinh^2(\\alpha_w)}{\\Delta(\\alpha_n)\\Delta(\\alpha_w)}\\right], \\label{conf}\n\\eea\nwith $\\varphi(r)$ still given by (\\ref{fr2}).\n\nFor later purposes it will useful to have $f=g,$ at least to order $\\lambda=0.$ This can be achieved with a conformal transformation which changes the frame, like we have seen in section \\ref{diffr}, defining a new metric\n\\be\ng_{\\mu\\nu}^I = e^{-2\\phi} g_{\\mu\\nu}^S. \\label{frs}\n\\ee\nThis way we get a solution of the form\n\\bea\nf(r) &=& f_0^I(r) \\left(1+ \\frac{\\lambda}{R_H^2} f_c(r) \\right), \\, \\, g(r)= f_0^I(r) \\left(1+ \\frac{\\lambda}{R_H^2} g_c(r) \\right), \\label{fcgc} \\\\\nf_0^I&=&\\frac{f_0^T}{\\sqrt{\\Delta(\\alpha_n)\\Delta(\\alpha_w)}}, \\label{fcgi}\n\\eea\n$f_0^T$ being given by (\\ref{tangher}). $f_c, g_c$ are given by\n\\bea\nf_c^I(r) &=& \\frac{1}{2 \\Delta(\\alpha_n)\\Delta(\\alpha_w)} \\Big(-4 \\Delta(\\alpha_n)\\Delta(\\alpha_w) \\varphi(r) + 2\\left(2-f_0^T\\right) \\left(\\Delta(\\alpha_n) \\sinh^2(\\alpha_w) + \\Delta(\\alpha_w) \\sinh^2(\\alpha_n)\\right)\\mu(r) \\nonumber\\\\\n&+& 4 \\left. \\left(1- \\left(\\frac{R_H}{r}\\right)^{2(d-3)} \\sinh^2(\\alpha_w) \\sinh^2(\\alpha_n) \\right) \\mu(r) + (d-3)^2 f_0^T \\left(\\frac{R_H}{r}\\right)^{2(d-2)} \\sinh^2(\\alpha_w) \\sinh^2(\\alpha_n) \\right), \\nonumber\\\\\ng_c^I(r)&=&\\frac{1}{2 \\Delta(\\alpha_n)\\Delta(\\alpha_w)} \\Big( 2 \\left(\\Delta(\\alpha_n) \\sinh^2(\\alpha_w) + \\Delta(\\alpha_w) \\sinh^2(\\alpha_n)\\right)\\mu(r) f_0^T \\nonumber\\\\\n&+& \\left. (d-3)^2 f_0^T \\left(\\frac{R_H}{r}\\right)^{2(d-2)} \\sinh^2(\\alpha_w) \\sinh^2(\\alpha_n)+ 4 \\Delta(\\alpha_n)\\Delta(\\alpha_w) \\left(\\varphi(r)-\\epsilon(r)\\right) \\right). \\label{fcgorb}\n\\eea\nSince, as discussed also in section \\ref{diffr}, the analysis of the stability under tensorial perturbations is independent of the chosen frame, this is the form of the metric we will take.\n\n\\subsubsection{Study of the stability}\n\\indent\n\nIn order to prove the stability of this solution, we need to show the positivity of (\\ref{qf}), as we have seen. Again it is simpler to split this expression in its \"classical\" and $\\lambda$--corrected parts: $\\frac{Q}{F} = \\left.\\frac{Q}{F}\\right|_0 + \\lambda \\left.\\frac{Q}{F}\\right|_1,$ with $\\left.\\frac{Q}{F}\\right|_1$ being evaluated using the $\\lambda=0$ parts of the corresponding functions. 
For this concrete solution,\n\\bea\nr^2 \\sqrt{fg} \\left.\\frac{Q}{F}\\right|_0 &=& \\frac{\\ell \\left( \\ell + d - 3 \\right)}{r^2} f + \\frac{(g-f)f'}{r} \\label{qf0} \\\\\nr^4 \\sqrt{fg} \\left.\\frac{Q}{F}\\right|_1 &=& \\frac{2}{r^2} \\ell \\left( \\ell + d - 3 \\right) f_0^I \\left[ 2 (1-f_0^I) + r (f_0^I)' \\right].\n\\eea\n$f_0^I(r)$ vanishes at $r=R_H,$ and from there it grows monotonically to the asymptotic value 1. Therefore one has both $(f_0^I)'>0$ and $1-f_0^I>0$ and therefore, in the range we are interested, $\\left.\\frac{Q}{F}\\right|_1>0.$\n\n$\\left.\\frac{Q}{F}\\right|_0$ given by (\\ref{qf0}) must be computed with the full, $\\lambda$ corrected metric. In this case, from (\\ref{fcgc}) and (\\ref{fcgorb}) we get\n\\bea\ng-f &=& \\frac{2 \\lambda f_0^T(r)}{\\left(\\Delta(\\alpha_w) \\Delta(\\alpha_n) \\right)^{\\frac{3}{2}}}\n\\Big[ \\Delta(\\alpha_w) \\Delta(\\alpha_n) \\left(2\\varphi(r)-\\epsilon(r)\\right) + \\left(\\left(\\frac{R_H}{r}\\right)^{2(d-3)} \\sinh^2(\\alpha_w) \\sinh^2(\\alpha_n) -1 \\right) \\mu(r) \\nonumber\\\\\n&-& \\left(\\frac{R_H}{r}\\right)^{d-3} \\left(\\sinh^2(\\alpha_n) \\Delta(\\alpha_w) + \\sinh^2(\\alpha_w) \\Delta(\\alpha_n) \\right) \\mu(r) \\Big]\n\\eea\nwhich, using the definitions (\\ref{mu}), (\\ref{delta}) and (\\ref{fcgi}), may be simplified to\n\\be\ng-f = 2 \\lambda \\frac{f_0^T(r)}{\\sqrt{\\Delta(\\alpha_w) \\Delta(\\alpha_n)}}\n\\Big[2\\varphi(r)-\\epsilon(r)- \\mu(r)\\Big] = \\frac{4}{d-2} \\lambda f_0^I(r)\n\\left[\\left(d-3\\right)\\varphi(r)+r \\varphi'(r)\\right].\\label{gmf}\n\\ee\n\nIn any other frame but the one we chose by (\\ref{frs}), this expression would be much more complicated.\n\n$\\Delta(\\alpha_w),\\, \\Delta(\\alpha_n)$ are strictly positive functions, as one can see from the definition (\\ref{delta}), and so is therefore $f_0^I(r)$. Therefore in order to analyze the positivity of (\\ref{gmf}) one needs to concentrate only on the factor inside brackets, $\\left(d-3\\right)\\varphi(r)+r \\varphi'(r).$ One can already anticipate that the final result for the stability will not depend on the magnitude of the charges\/boost parameters $\\alpha_w, \\alpha_n,$ a quite remarkable fact.\n\nAs we previously mentioned, the dilaton solution $\\varphi(r)$ is negative, but its derivative $\\varphi'(r)$ is positive: $\\varphi(r)$ have a negative value at the horizon and grow up to 0 at asymptotic infinity. The difference $g-f$ is therefore a sum of a positive and a negative term; now, does this mean that $g-f$ does not have a fixed sign, i.e. its sign may change for different values of $r?$ Or is there any dominant term in all the range $r>R_H$, such that the sign of $g-f$ does not change in such range? 
If this is the case, then which is that sign?\n\nClose to the horizon we have\n\\bea\ng-f &=& -\\frac{(d-3)(d-2)}{2 \\cosh(\\alpha_n) \\cosh(\\alpha_w)} \\frac{\\lambda}{R_H^2} \\nonumber \\\\\n&\\times& \\left(\\frac{8}{d-1} +d^2 -6d +1 +2(d-3) \\left(\\psi^{(0)}\\left(\\frac{2}{d-3}\\right) + \\gamma \\right)\\right) \\frac{r-R_H}{R_H} + {\\mathcal{O}} \\left( \\left( r-R_H \\right)^2 \\right)\n\\eea\nOne can easily show (again, for instance, constructing a table with Mathematica) that, for relevant values of $d,$ one has $\\frac{8}{d-1} +d^2 -6d +1 +2(d-3) \\left(\\psi^{(0)}\\left(\\frac{2}{d-3}\\right) + \\gamma \\right) <0$ and, therefore, in some neighborhood of $R_H$ (but of course with $r>R_H$) one has $g-f>0$ (for $r=R_H$ evidently $g=f$).\n\nFor large $r$ we obtain the following asymptotic expansion:\n\\be\ng-f = \\frac{d-3}{2} \\frac{\\lambda}{R_H^2} \\left(\\frac{R_H}{r}\\right)^{2(d-3)} \\left[d-2 - (d-1) \\left(\\frac{R_H}{r}\\right)^2 \\right] + {\\mathcal{O}} \\left( \\left(\\frac{R_H}{r}\\right)^{2d-3} \\right) .\n\\ee\nIn the limit $r\\rightarrow\\infty$ one has $g=f=1,$ i.e. $g-f=0.$ For large $r$ we see that the leading correction is positive: one also has $g-f>0,$ as one had close to the horizon. The same conclusions are reached when one restricts themselves to $\\left(d-3\\right)\\varphi(r)+r \\varphi'(r):$ positive at the horizon and vanishing at infinity. It remains to analyze the behavior of $\\left(d-3\\right)\\varphi(r)+r \\varphi'(r)$ in the intermediate range (neither close to $R_H$ nor asymptotically). Rather we analyze its derivative:\n\\be\n\\left(\\left(d-3\\right)\\varphi(r)+r \\varphi'(r)\\right)'=-\\frac{(d-3)(d-2)^2}{4r} \\left(\\frac{R_H}{r}\\right)^{2d-6} \\frac{d-3-(d-1) \\left(\\frac{R_H}{r}\\right)^{2} +2\\left(\\frac{R_H}{r}\\right)^{d-1}}{\\left(1-\\left(\\frac{R_H}{r}\\right)^{d-3}\\right)^2}. \\label{gmflinha}\n\\ee\nOne can easily check that, for relevant values of $d$, and for $r>R_H,$ $d-3-(d-1) \\left(\\frac{R_H}{r}\\right)^{2} +2\\left(\\frac{R_H}{r}\\right)^{d-1}>0.$ The other factors in (\\ref{gmflinha}) are clearly positive, except for the overall minus sign. Therefore one has $\\left(\\left(d-3\\right)\\varphi(r)+r \\varphi'(r)\\right)'<0$ for $r>R_H$ (in particular this derivative has no zeroes). This means $\\left(d-3\\right)\\varphi(r)+r \\varphi'(r)$ is a positive function which decreases to zero asymptotically. The behavior of $g-f$ is the same.\n\nSince for this solution $g-f$ is of order $\\lambda$ but it vanishes at the classical level, $f'$ in $\\left.\\frac{Q}{F}\\right|_0$ must be computed with the classical metric, exactly from the same argument we gave for the dilatonic solution (\\ref{fnew}) (the $\\lambda$--correction of the metric to $f'$ would only contribute at order $\\lambda^2$.) As we have mentioned, at order $\\lambda=0$ we have $(f_0^I)'>0;$ from our previous result, this way, to order $\\lambda,$ $(g-f) f' >0.$\n\nSince, by definition, $f(R_H)=0$ and $f>0$ for $r>R_H,$ we conclude that $\\left.\\frac{Q}{F}\\right|_0$ given by (\\ref{qf0}) is indeed positive for $r>R_H.$ The same is valid for $\\frac{Q}{F}$ given by (\\ref{qf}). This way, the double--charged $d$--dimensional black hole solution (\\ref{fcgc}) is indeed stable under tensorial perturbations, for any value of the magnitude of the charges\/boost parameters $\\alpha_w, \\alpha_n$.\n\n\\section{Discussion and Future Directions}\n\\indent\n\nWe have computed the tensorial perturbations for the most general spherically symmetric metric in $d$ dimensions. 
We applied it to the field equations resulting from a string effective action with $\\R^2$ corrections. We have shown that the master equation for the perturbation variable is of second order, regardless of the presence of the higher order terms in the lagrangian and in the field equations. We have also obtained the corresponding $\\a$ corrections to the potential for these perturbations. From these results we have studied the stability of two different $d$--dimensional black hole solutions with $\\R^2$ corrections in string theory. In both cases we concluded that they were stable under such perturbations.\n\nThese results are to be compared with the corresponding ones in Lovelock theory. As we have mentioned in the introduction, for tensorial perturbations of solutions to these theories several instabilities have been found \\cite{dg04,dg05a,Takahashi:2009dz,Takahashi:2010ye,Takahashi:2010gz}, depending on the dimensionality of spacetime (the power of curvature needed in order to get a Lovelock theory also depends on $d$). Even more interesting are two other aspects of Lovelock theories which were found in these articles: the instabilities manifest themselves mainly at shorter scales (for ``small'' black holes), and there are domains of the parameters (the coupling constants of the higher order terms) in which linear perturbation theory breaks down and is not applicable. The justification for such facts lies in the properties of Lovelock theories we mentioned: they are seen as exact and not effective theories; the dependence of the solutions on the coupling constants goes beyond perturbation theory, i.e. the order at which they appear in the lagrangian does not matter for such dependence, which is often nonlinear. This is the reason for the nonapplicability of linear perturbation theory in certain domains.\n\nThe string--theoretical solutions we have considered are perturbative in $\\a$: their dependence on $\\a$ is of the same order at which $\\a$ appears in the lagrangian (first order). This is why linear perturbation theory is fully applicable to the solutions we have studied, but one must keep in mind that the stability we have shown is just perturbative. Nothing in our work guarantees that, if one considers higher order corrections in $\\a$ to these solutions, an instability does not appear.\n\nThere is a lot of work still to be done in studying perturbations and stability of black holes in higher dimensions, already in classical Einstein gravity. With respect to static black holes for which the formalism of Ishibashi and Kodama is applicable, there is a lot to be understood concerning scalar perturbations (for generic horizon topologies) and tensor perturbations (for non--maximally symmetric black holes). For a recent review see \\cite{Ishibashi:2011ws}. There is even more to be done concerning the perturbations and stability of non--static black holes.\n\nWith respect to perturbative string--theoretical black holes, the main question one can ask is the following: do string $\\a$ corrections preserve the stability properties of the corresponding classical black hole solutions, or do they introduce a new behavior? The examples we have analyzed in this article provide some evidence in favor of the first answer, but they represent far too limited a range of solutions for more definite conclusions to be drawn. 
Still a lot of work remains to be done, with other string--corrected solutions but even also with those solutions we have taken here, considering other kinds of metric perturbations (vector and scalar).\n\nKnowing the master equation and potential for tensor--type gravitational perturbations for these solutions, as we do, another study that can be made is the determination of the corresponding spectrum of quasinormal modes, including the leading $\\a$ corrections. We leave this for a future work.\n\n\\section*{Acknowledgments}\nThe author wishes to acknowledge useful discussions with Akihiro Ishibashi and Jos\\'e Sande Lemos. This work has been supported by CMAT - U.Minho through the FCT Pluriannual Funding Program, and by FCT and FEDER through CERN\/FP\/123609\/2011 project.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\begin{figure}\n \\centering\n \\includegraphics[scale=1.1]{small-diagram.pdf}\n \\caption{A simplified diagram of the experiments followed by this work towards the comparison of classical vs transfer learning from synthetic data for speaker recognition. A more detailed diagram can be found in Figure \\ref{fig:method}}\n \\label{fig:smalldiagram}\n\\end{figure}\nData scarcity is an issue that arises often outside of the lab, due to the large amount of data required for classification activities. This includes speaker classification in order to enable personalised Human-Machine (HMI) and Human-Robot Interaction (HRI), a technology growing in consumer usefulness within smart device biometric security on devices such as smartphones and tablets, as well as for multiple-user smarthome assistants (operating on a per-person basis) which are not yet available. Speaker recognition, i.e., autonomously recognising a person from their voice, is a well-explored topic in the state-of-the-art within the bounds of data availability, which causes difficulty in real-world use. It is unrealistic to expect a user to willingly provide many minutes or hours of speech data to a device unless it is allowed to constantly record daily life, something which is a modern cause for concern with the virtual home assistant. In this work, we show that data scarcity in speaker recognition can be overcome by collecting only several short spoken sentences of audio from a user and then using extracted Mel-Frequency Cepstral Coefficients (MFCC) data in both supervised and unsupervised learning paradigms to generate synthetic speech, which is then used in a process of transfer learning to better recognise the speaker in question.\n\nAutonomous speaker classification can suffer issues of data scarcity since the user is compared to a large database of many speakers. The most obvious solution to this is to collect more data, but with Smart Home Assistants existing within private environments and potentially listening to private data, this produces an obvious problem of privacy and security~\\cite{logsdon2018alexa,dunin2020alexa}. Not collecting more data on the other hand, presents an issue of a large class imbalance between the speaker to classify against the examples of other speakers, producing lower accuracies and less trustworthy results~\\cite{babbar2019data}, which must be solved for purposes such as biometrics since results must be trusted when used for security. In this study, weighting of errors is performed to introduce balance, but it is noted that the results still have room for improvement regardless. 
\n\n\n\nData augmentation is the idea that useful new data can be generated by algorithms or models that would improve the classification of the original, scarce dataset. A simple but prominent example of this is the warping, flipping, mirroring and noising of images to better prepare image classification algorithms~\\cite{wang2017effectiveness}. A more complex example through generative models can be seen in recent work that utilise methods such as the Generative Adversarial Network (GAN) to create synthetic data which itself also holds useful information for learning from and classification of data~\\cite{zhu2018emotion,frid2018synthetic}. Although image classification is the most common and most obvious application of generative models for data augmentation, recent works have also enjoyed success in augmenting audio data for sound classification~\\cite{yang2018se,madhu2019data}. This work extends upon a previous conference paper that explored the hypothesis \\textit{``can the synthetic data produced by a generative model aid in speaker recognition?\"}~\\cite{bird2020overcoming} within which a Character-Level Recurrent Neural Network (RNN), as a generative model, produced synthetic data useful for learning from in order to recognise the speaker. This work extends upon these preliminary experiments by the following:\n\\begin{enumerate}\n \\item The extension of the dataset to more subjects from multiple international backgrounds and the extraction of the MFCCs of each subject;\n \\item Benchmarking of a Long Short Term Memory (LSTM) architecture for 64, 128, 256 and 512 LSTM units in one to three hidden layers towards reduction of loss in generating synthetic data. The best model is selected as the candidate for the LSTM data generator;\n \\item The inclusion of OpenAI's GPT-2 model as a data generator in order to compare the approaches of supervised (LSTM) and attention-based (GPT-2) methods for synthetic data augmentation for speaker classification.\n\\end{enumerate}\nThe scientific contributions of this work, thus, are related to the application of synthetic MFCCs for improvement of speaker recognition. A diagram of the experiments can be observed in Figure \\ref{fig:smalldiagram}. To the authors' knowledge, the paper that this work extends is the first instance of this research being explored\\footnote{to the best of our knowledge and based on literature review.}. The best LSTM and the GPT-2 model are tasked with generating 2,500, 5,000, 7,500, and 10,000 synthetic data objects for each subject after learning from the scarce datasets extracted from their speech. A network then learns from these data and transfers their weights to a network aiming to learn and classify the real data, and many show an improvement. For all subjects, the results show that several of the networks perform best after experiencing exposure to synthetic data. \n\nThe remainder of this article is as follows. Section \\ref{sec1:background} initially explores important scientific concepts of the processes followed by this work and also the current State-of-the-Art in the synthetic data augmentation field. Following this, Section \\ref{sec2:method} then outlines the method followed by this work including data collection, synthetic data augmentation, MFCC extraction to transform audio into a numerical dataset, and finally the learning processes followed to achieve results. 
The final results of the experiments are then discussed in Section \\ref{sec:results}, and then future work is outlined and conclusions presented in Section \\ref{sec4:futureworkconclusion}.\n\n\n\n\\section{Background and Related Work}\n\\label{sec1:background}\n\n\nVerification of a speaker is the process of identifying a single individual against many others by spoken audio data~\\cite{poddar2017speaker}. That is, the recognition of a set of the person's speech data $X$ specifically from a speech set $Y$ where $X \\in Y$. In the simplest sense, this can be given as a binary classification problem; for each data object $o$, is $o \\in X$? Is the speaker to be recognised speaking, or is it another individual? Speaker recognition is important for social HRI~\\cite{mumolo2003distant} (the robot's perception of the individual based on their acoustic utterances), Biometrics~\\cite{ratha2001automated}, and Forensics~\\cite{rose2002forensic} among many others. In~\\cite{hasan2004speaker}, researchers found relative ease of classifying 21 speakers from a limited set, but the problem becomes more difficult as it becomes more realistic, where classifying a speaker based on their utterances is increasingly difficult as the dataset grows~\\cite{nagrani2017voxceleb,yadav2018learning,zeinali2019but}. In this work, the speaker is recognised from many thousands of other examples of human speech from the Flickr8k speakers dataset. \n\n\\subsection{LSTM and GPT-2}\nLong Short Term Memory (LSTM) is a form of recurrent Artificial Neural Network whose gated units learn from previous states as well as the current state. Initially, the LSTM selects data to delete via the forget gate: \n\\begin{equation}\n \\mathop f\\nolimits_{\\text{t}} = \\sigma \\left( {\\mathop W\\nolimits_{f} \\cdot \\left[ {\\mathop h\\nolimits_{t - 1} ,\\mathop x\\nolimits_{t} } \\right] + \\mathop b\\nolimits_{f} } \\right),\n\\end{equation}\n\\noindent where $W_f$ are the weights of the units, $h_{t-1}$ is the output of the previous time step, $x_t$ are inputs and $b_f$ is an applied bias. Data to be stored is then selected via the input gate $i_t$, generating candidate values $\\tilde{C}_t$:\n\\begin{equation}\n\\mathop i\\nolimits_{\\text{t}} = \\sigma \\left( {\\mathop W\\nolimits_{i} \\cdot \\left[ {\\mathop h\\nolimits_{t - 1} ,\\mathop x\\nolimits_{t} } \\right] + \\mathop b\\nolimits_{i} } \\right),\n\\end{equation}\n\n\\begin{equation}\n\\mathop {\\tilde{C}}\\nolimits_{\\text{t}} = \\tanh \\left( {\\mathop W\\nolimits_{c} \\cdot \\left[ {\\mathop h\\nolimits_{t - 1} ,\\mathop x\\nolimits_{t} } \\right] + \\mathop b\\nolimits_{c} } \\right).\n\\end{equation}\n\nThe cell state is then updated element-wise:\n\\begin{equation}\nC_{t} = f_{t} * C_{t-1} + i_{t} * \\tilde{C}_{t} .\n\\end{equation}\n\nOutput $o_t$ is presented, and the hidden state is updated:\n\\begin{equation}\n\\mathop o\\nolimits_{\\text{t}} = \\sigma \\left( {\\mathop W\\nolimits_{o} \\cdot \\left[ {\\mathop h\\nolimits_{t - 1} ,\\mathop x\\nolimits_{t} } \\right] + \\mathop b\\nolimits_{o} } \\right),\n\\end{equation}\n\\begin{equation}\nh_{t} = o_{t} * \\tanh(C_{t}).\n\\end{equation}\nOwing to this consideration of previous states, time-dependent data are often very effectively classified thanks to the memory-like nature of the LSTM. LSTM is thus a particularly powerful technique in terms of speech recognition~\\cite{graves2013hybrid} due to the temporal nature of the data~\\cite{belin2000voice}. 
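To make this concrete, a minimal sketch of the kind of character-level LSTM generator used later in this work for synthetic MFCC data is given below, assuming a Keras implementation; the vocabulary size, character window length and training call are illustrative placeholders rather than the exact values used, while the three layers of 128 units with dropout 0.2 follow the configuration benchmarked in Section \\ref{sec:results}.

\\begin{verbatim}
# Sketch: character-level LSTM generator over the MFCC CSV text of a subject.
# Three stacked layers of 128 units with dropout 0.2 between layers.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

vocab_size = 16   # assumed: digits, '.', ',', '-' and the newline character
seq_len = 40      # assumed length of the character window fed to the model

model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(seq_len, vocab_size)),
    Dropout(0.2),
    LSTM(128, return_sequences=True),
    Dropout(0.2),
    LSTM(128),
    Dropout(0.2),
    Dense(vocab_size, activation="softmax"),   # next-character distribution
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# model.fit(X, y)  with X one-hot character windows and y the following character
\\end{verbatim}

Sampling repeatedly from the softmax output then yields new comma-separated lines in the same format as the training data.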
\\\\\n\nIn addition to LSTM, this study also considers OpenAI's Generative Pretrained Transformer 2 (GPT-2) model~\\cite{radford2018improving,radford2019language} as a candidate for producing synthetic MFCC data to improve speaker recognition. The model in question, \\textit{335M}, is much deeper than the LSTMs explored at 12 layers. In the OpenAI paper, the GPT modelling is given for a set of samples $x_1, x_2, ..., x_n$ composed of variable symbol-sequence lengths $s_1, s_2, ..., s_n$ factorised by joint probabilities over symbols as the product of conditional probabilities~\\cite{jelinek1980interpolated,bengio2003neural}:\n\\begin{equation}\n p(x) = \\prod_{i=1}^n p(s_{i} | s_{1}, ..., s_{n-1}).\n\\end{equation}\nAttention is given to various parts of the input vector:\n\\begin{equation}\n Attention(Q,K,V) = softmax \\left( \\frac{QK^T}{ \\sqrt{d_{k}} } \\right) V,\n\\end{equation}\nwhere Q is the \\textit{query} i.e., the single object in the sequence, in this case, a word. K are the keys, which are vector representations of the input sequence, and V are values as vector representations of all words in the sequence. In the initial encoder, decoder, and attention blocks $Q=V$ whereas later on the attention block that takes these outputs as input, $Q \\neq V$ since both are derived from the block's 'memory'. In order to combine multiple queries, that is, to consider previously learnt rules from text, multi-headed attention is presented:\n\\begin{equation}\n\\begin{aligned}\n MultiHead(Q,K,V) = Concatenate(head_{1}, ..., head_{h})W^{O} \\\\\n head_{i} = Attention(QW^{Q}_{i}, KW^{K}_{i}, VW^{V}_{i}).\n\\end{aligned}\n\\end{equation}\nAs in the previous equation, it can be seen that previously learnt $h$ projections $d_Q, d_K$ and $d_V$ are also considered given that the block has multiple heads. The above equations and further detail on both attention and multi-headed attention can be found in \\cite{vaswani2017attention}. The unsupervised nature of the GPT models is apparent since $Q, K$ and $V$ are from the same source. \n\nImportantly, GPT produces output with consideration not only to input, but also to the task. The GPT-2 model has been shown to be a powerful state-of-the-art tool for language learning, understanding, and generation; researchers noted that the model could be used with ease to generate realistic propaganda for extremist terrorist groups~\\cite{solaiman2019release}, as well as noting that generated text by the model was difficult to detect~\\cite{gehrmann2019gltr,wolff2020attacking,adelani2020generating}. The latter aforementioned papers are promising, since a difficulty of detection suggests statistical similarities, which are likely to aid in the problem of improving classification accuracy of a model by exposing it to synthetic data objects output by such a model. \n\n\\subsection{Dataset Augmentation through Synthesis}\n\\label{subsec:dataaugmentation}\n\nSimilarities between real-life experiences and imagined perception have shown in psychological studies that the human imagination, though mentally augmenting and changing situations~\\cite{beres1960perception}, aids in improving the human learning process~\\cite{egan1989memory,heath2008exploring,macintyre2012emotions,egan2014imagination}. The importance of this ability in the learning process shows the usefulness of data augmentation in human learning, and as such, is being explored as a potential solution to data scarcity and quality in the machine learning field. 
Even though the synthetic data may not be realistic alone, minute similarities between it and reality allow for better pattern recognition. \n\nThe idea of data augmentation as the first stage in fine-tune learning is inspired by the aforementioned findings, and follows a similar approach. Synthetic data is generated by learning from the real data, and algorithms are exposed to them in a learning process prior to the learning process of real data; this is then compared to the classical approach of learning from the data solely, where the performance of the former model compared to the latter shows the effect of the data augmentation learning process. Much of the work is recent, many from the last decade, and a pattern of success is noticeable for many prominent works when comparing the approach to the classical method of learning from real data alone. \n\nAs described, the field of exploring augmented data to improve classification algorithms is relatively young, but there exist several prominent works that show success in applying this approach. When augmented data from the SemEval dataset is learned from by a Recurrent Neural Network (RNN), researchers found that the overall best F-1 score was achieved for relation classification in comparison to the model only learning from the dataset itself~\\cite{xu2016improved}. Due to data scarcity in the medical field, classification of liver lesions~\\cite{frid2018synthetic} and Alzheimer's Disease~\\cite{shin2018medical} have also shown improvement when the learning models (CNNs) considered data augmented by Convolutional GANs. In Natural Language Processing, it was found that word augmentation aids to improve sentence classification by both CNN and RNN models~\\cite{kobayashi2018contextual}. The \\textit{DADA} model has been suggested as a strong method to produce synthetic data for improvement of data-scarce problems through the Deep Adversarial method via a dedicated discriminator network aiming to augment data specifically~\\cite{zhang2019dada} which has noted success in machine learning problems~\\cite{barz2020deep}. \n\nData augmentation has shown promise in improving multi-modal emotion recognition when considering audio and images~\\cite{huang2018multimodal}, digital signal classification~\\cite{tang2018digital}, as well as for a variety of audio classification problems such as segments of the Hub500 problem~\\cite{park2019specaugment}. Additionally, synthetic data augmentation of mel-spectrograms have shown to improve acoustic scene recognition~\\cite{yang2018se}. Realistic text-to-speech is achieved by producing realistic sound such as the Tacotron model~\\cite{wang2017tacotron} when considering the textual representation of the audio being considered, and reverse engineering the model to produce audio based on text input. A recent preliminary study showed GANs may be able to aid in producing synthetic data for speaker recognition~\\cite{chien2018adversarial}. \n\nThe temporal models considered in this work to generate synthetic speech data have recently shown success in generating acoustic sounds~\\cite{eck2002finding} and accurate timeseries data~\\cite{senjyu2006application}, written text~\\cite{pawade2018story,sha2018order}, artistic images~\\cite{gregor2015draw}. Specifically, temporal models are also observed to be successful in generating MFCC data~\\cite{wang2017rnn}, which is the data type considered in this work. 
Many prominent works in speech recognition consider temporal learning to be highly important~\\cite{fernandez2007application,he2019streaming,sak2014long}, as well as for the generation of likewise temporal data~\\cite{valentini2016investigating,wang2017rnn,tachibana2018efficiently} (which is what this study aims to perform). If it is possible to generate data that bears similarity to the real data, then it could improve the models while also reducing the need for large amounts of real data to be collected. \n\n\n\\section{Proposed Approach}\n\\label{sec2:method}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.9]{overalldiagram_new.pdf}\n \\caption{A diagram of the experimental method in this work. Note that the two networks being directly compared are classifying the same data, with the difference being the initial weight distribution either from standard random distribution or transfer learning from GPT-2 and LSTM produced synthetic data.}\n \\label{fig:method}\n\\end{figure*}\n\nThis section describes the development of the proposed approach, which can be observed overall in Figure \\ref{fig:method}. For each test, five networks are trained in order to gain results. Firstly, a network is trained simply to perform the speaker classification experiment without transfer learning (from a standard random weight distribution). Synthetic data produced by the LSTM and GPT-2 models is then used to train two further networks, whose weights are used as the initial weight distributions of two final networks performing the same experiment as described for the first network (classifying the speaker's real data against the Flickr8k speakers). Thus, the two networks leading to the final classification score in the diagram are directly comparable, since they learn from the same data and differ only in their initial weight distribution (where the latter network has weights learnt from synthetic data).\n\n\n\n\\subsection{Real and Synthetic Data Collection}\nThe data collected, as previously described, presents a binary classification problem. That is, whether or not the individual in question is the one producing the acoustic utterance. \n\nThe large corpus of data for the \\textit{``not the speaker\"} class is gathered via the Flickr8k dataset~\\cite{harwath2015deep}, which contains 40,000 individual utterances describing 8,000 images by a large number of speakers, unspecified by the authors. MFCCs are extracted (described in Section \\ref{subsec:mfcc}) to generate temporal numerical vectors which represent a short amount of time from each audio clip. 100,000 data objects are selected through 50 blocks of 1,000 objects and then 50,000 other data objects selected randomly from the remainder of the dataset. This is performed so the dataset contains individuals' speech at length as well as short samples of many other thousands of speakers. 
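As a brief illustrative sketch of this selection (the array name flickr_mfccs is a hypothetical placeholder holding the extracted Flickr8k MFCC rows in their original order, and the block choice is simplified), the 100,000 background objects could be assembled as follows:

\\begin{verbatim}
# Sketch: 50 contiguous blocks of 1,000 rows plus 50,000 further random rows.
import numpy as np

rng = np.random.default_rng(0)
n_rows = len(flickr_mfccs)            # hypothetical array of Flickr8k MFCC rows

# 50 non-overlapping blocks of 1,000 consecutive rows: speech at length.
block_ids = rng.choice(n_rows // 1000, size=50, replace=False)
block_idx = np.concatenate([np.arange(b * 1000, (b + 1) * 1000)
                            for b in block_ids])

# 50,000 further rows drawn at random from the remainder of the dataset.
remainder = np.setdiff1d(np.arange(n_rows), block_idx)
random_idx = rng.choice(remainder, size=50000, replace=False)

background = np.concatenate([block_idx, random_idx])   # 100,000 indices in total
\\end{verbatim}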
\n\n\\begin{table*}[]\n\\footnotesize\n\\centering\n\\caption{Information regarding the data collection from the seven subjects}\n\\label{tab:subjects}\n\\begin{tabular}{@{}lllllll@{}}\n\\toprule\n\\textbf{Subject} & \\textbf{Sex} & \\textbf{Age} & \\textbf{Nationality} & \\textbf{Dialect} & \\textbf{Time Taken (s)} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Real\\\\Data\\end{tabular}} \\\\ \\midrule\n\\textit{\\textbf{1}} & M & 23 & British & Birmingham & 24 & 4978 \\\\\n\\textit{\\textbf{2}} & M & 24 & American & Florida & 13 & 2421 \\\\\n\\textit{\\textbf{3}} & F & 28 & Irish & Dublin & 12 & 2542 \\\\\n\\textit{\\textbf{4}} & F & 30 & British & London & 12 & 2590 \\\\\n\\textit{\\textbf{5}} & F & 40 & British & London & 10 & 2189 \\\\\n\\textit{\\textbf{6}} & M & 21 & French & Paris & 8 & 1706 \\\\\n\\textit{\\textbf{7}} & F & 23 & French & Paris & 9 & 1952 \\\\\n\\midrule\n\\textit{\\textbf{Flickr8K}} & & & & & & 100000 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table*}\n\n\n\nTo gather data for recognising speakers, seven subjects are considered. Information on the subjects can be seen in Table \\ref{tab:subjects}. Subjects speak five random Harvard Sentences from the \\textit{IEEE recommended practice for speech quality measurements}~\\cite{rothauser1969ieee}, which together contain most of the spoken phonetic sounds in the English language~\\cite{bird2019phoneme2}. Importantly, this is a user-friendly process, because it requires only a few short seconds of audio data. The longest time taken was by subject 1, at 24 seconds producing 4978 data objects, and the shortest were the two French individuals, who required 8 and 9 seconds respectively to speak the five sentences. All of the audio data were recorded using consumer-available recording devices such as smartphones and computer headsets. Synthetic datasets are generated following the learning processes of the best LSTM and the GPT-2 model, where the probability of the next character is decided by the respective learning algorithm. Data are generated in blocks of 1,000 characters within a loop, and the final line of each block is removed (since it was often cut off at the 1,000-character limit). Illogical lines of data (those that did not have 26 comma separated values and a class) were removed, but were observed to be somewhat rare, as both the LSTM and GPT-2 models had learnt the data format relatively well since it was uniform throughout. The format, throughout the datasets, was a uniform 27 comma separated values, where the values were all numerical and the final value was `1' followed by a line break character.\n\n\\subsection{Feature Extraction}\n\\label{subsec:mfcc}\n\nThe nature of acoustic data is that the previous and following points of data from a single point in particular are also related to the class. Audio data is temporal in this regard, and thus classification of a single point in time is an extremely difficult problem~\\cite{xiong2003comparing,bird2019phoneme}. In order to overcome this issue, features are extracted from the wave through a sliding window approach. The statistical features extracted in this work are the first 26 Mel-Frequency Cepstral Coefficients, due to findings in the scientific state of the art arguing for their prominence over other methods~\\cite{muda2010voice,sahidullah2012design}. 
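Ahead of the step-by-step description that follows, the extraction of these coefficients can be illustrated with a short sketch, assuming the python_speech_features package (an assumed library choice, since the exact implementation is not specified; the file names are placeholders, and any MFCC implementation following the steps below would serve):

\\begin{verbatim}
# Sketch: 26 MFCCs per 25 ms window with a 10 ms step, one CSV row per window.
import scipy.io.wavfile as wav
from python_speech_features import mfcc

rate, signal = wav.read("subject_1.wav")     # hypothetical recording
features = mfcc(signal, samplerate=rate,
                winlen=0.025, winstep=0.01,  # window length and step, in seconds
                numcep=26, nfilt=26,         # 26 Mel filters, 26 coefficients
                nfft=2048)                   # FFT size large enough for the window
with open("subject_1_mfcc.csv", "w") as f:
    for row in features:
        # 26 coefficients plus the class label '1' (the subject's own speech)
        print(",".join(f"{v:.6f}" for v in row) + ",1", file=f)
\\end{verbatim}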
Sliding windows are set to a length of 0.025 seconds with a step of 0.01 seconds, and extraction is performed as follows:\n\\begin{enumerate}\n\\item The Fourier Transform of the time window data $\\omega$ is calculated:\n\\begin{equation}\nX(j\\omega)=\\int_{-\\infty}^\\infty x(t) e^{-j\\omega t} dt.\n\\end{equation}\n\\item The powers from the FT are mapped to the Mel-scale, that is, the human psychological scale of audible pitch~\\cite{stevens1937scale}, via a bank of overlapping triangular windows. \n\\item The power spectrum is considered and the $\\log$ of each of the powers is taken.\n\\item The derived Mel-log powers are then treated as a signal, and a Discrete Cosine Transform (DCT) is calculated:\n\\begin{equation}\nX_k = \\sum_{n=0}^{N-1} x_n \\cos \\left[ \\frac{\\pi}{N} \\left(n+ \\frac{1}{2}\\right) k \\right], \\qquad k=0,\\ldots,N-1,\n\\end{equation}\n\\noindent where $x$ is the array of length $N$ and $k$ is the index of the output coefficient being calculated; the $N$ real numbers $x_{0} \\ldots x_{N-1}$ are transformed into the $N$ real numbers $X_{0} \\ldots X_{N-1}$. \n\\end{enumerate}\nThe amplitudes of the spectrum produced are taken as the \\textit{MFCCs}. The resultant data thus provide a mathematical description of wave behaviour in terms of sounds; each data object, made of the 26 attributes produced from the sliding window, is then treated as the set of input attributes for the neural networks for both speaker recognition and synthetic data generation (with a class label also). \n\nThis process is performed for all of the selected Flickr8K data as well as the real data recorded from the subjects. The MFCC data from each of the 7 subjects' audio recordings is used as input to the LSTM and GPT-2 generative models for training and subsequent data augmentation.\n\n\n\\subsection{Speaker Classification Learning Process}\nFor each subject, the Flickr data and recorded audio form the basis dataset and thus the speaker recognition problem. Eight datasets for transfer learning are then formed on a per-subject basis, which are the aforementioned data plus $2500, 5000, 7500$ and $10000$ synthetic data objects generated by either the LSTM or the GPT-2 model. The LSTM has a standard dropout of 0.2 between each layer.\n\nThe baseline accuracy for comparison is given as ``Synth. Data: 0\" later in Table \\ref{tab:bigresults}, which denotes a model that has not been exposed to any of the synthetic data. This baseline gives scores that are directly comparable to those of identical networks whose initial weight distributions are taken from networks trained to classify the synthetic data generated for the subject, and which then learn from the real data. As previously described, the two sets of synthetic data to which the models are exposed during pre-training of the real speaker classification problem are generated by either an LSTM or a GPT-2 language model. Please note that, due to this, the results presented have no bearing on whether or not the network could classify the synthetic data well or otherwise; the weights are simply used as the initial distribution for the same problem. If the pattern holds that the transfer learning networks achieve better results than the networks which have not been trained on such data, it supports the hypothesis that speaker classification can be improved when considering either of these methods of data augmentation. This process can be observed in Figure \\ref{fig:method}. 
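A minimal sketch of this pre-training and weight-transfer procedure is given below, assuming a Keras implementation with a single sigmoid output for the binary task; the topology and training settings are those detailed in the following paragraph, the data arrays are placeholders, and the large epoch count simply stands in for the unlimited training governed by early stopping:

\\begin{verbatim}
# Sketch: pre-train on synthetic MFCCs, then transfer the weights to the
# network learning the real speaker-recognition data.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import EarlyStopping

def build_model():
    m = Sequential([
        Dense(30, activation="relu", input_shape=(26,)),   # 26 MFCC attributes
        Dense(7, activation="relu"),
        Dense(29, activation="relu"),
        Dense(1, activation="sigmoid"),                    # speaker vs. not-speaker
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return m

stop = EarlyStopping(patience=25, restore_best_weights=True)

# Pre-training on synthetic data (X_synth, y_synth are placeholder arrays).
pretrained = build_model()
pretrained.fit(X_synth, y_synth, validation_split=0.1,
               epochs=10000, callbacks=[stop])

# Transfer: same topology, initial weights taken from the pre-trained network.
final = build_model()
final.set_weights(pretrained.get_weights())
w1 = float((y_real == 0).sum()) / float((y_real == 1).sum())   # balance classes
final.fit(X_real, y_real, validation_split=0.1, epochs=10000,
          callbacks=[stop], class_weight={0: 1.0, 1: w1})
\\end{verbatim}

The baseline network of the comparison is simply build_model() trained on the real data alone, so that the only difference between the two compared networks lies in their initial weight distribution.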
\n\nFor the Deep Neural Network that classifies the speaker, a topology of three hidden layers consisting of 30, 7, and 29 neurons respectively with ReLu activation functions and an ADAM optimiser~\\cite{kingma2014adam} is initialised. These hyperparameters are chosen due to a previous study that performed a genetic search of neural network topologies for the classification of phonetic sounds in the form of MFCCs~\\cite{bird2020optimisation}. The networks are given an unlimited number of epochs to train, only ceasing through a set early stopping callback of 25 epochs with no improvement of ability. The best weights are restored before final scores are calculated. This is allowed in order to make sure that all models stabilise to an asymptote and reduce the risk of stopping models prior to them achieving their potential best abilities. \n\nClassification errors are weighted equally by class prominence since there exists a large imbalance between the speaker and the rest of the data. All of the LSTM experiments performed in this work were executed on an Nvidia GTX980Ti GPU, while the GPT-2 experiment was performed on an Nvidia Tesla K80 GPU provided by Google Colab. \n\n\\section{Results}\n\\label{sec:results}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{128-lstms.pdf}\n \\caption{The training processes of the best performing models in terms of loss, separated for readability purposes. Results are given for a benchmarking experiment on all of the dataset rather than an individual.}\n \\label{fig:128-lstms-benchmark}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{bad-lstms.pdf}\n \\caption{The training processes of LSTMs with 64, 256, and 512 units in 1-3 hidden layers, separated for readability purposes. Results are given for a benchmarking experiment on all of the dataset rather than an individual.}\n \\label{fig:bad-lstms-benchmark}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{gpt-graph.pdf}\n \\caption{The training process of the GPT-2 model. Results are given for a benchmarking experiment on all of the dataset rather than an individual.}\n \\label{fig:gpt-benchmark}\n\\end{figure}\n\n\\begin{table}[]\n\\centering\n\\caption{Best epochs and their losses for the 12 LSTM Benchmarks and GPT-2 training process. All models are benchmarked on the whole set of subjects for 100 epochs each in order to search for promising hyperparameters.}\n\\label{tab:benchmarking-models}\n\\begin{tabular}{@{}lll@{}}\n\\toprule\n\\textbf{Model} & \\textbf{Best Loss} & \\textbf{Epoch} \\\\ \\midrule\nLSTM(64) & 0.88 & 99 \\\\\nLSTM(64,64) & 0.86 & 99 \\\\\nLSTM(64,64,64) & 0.85 & 99 \\\\\nLSTM(128) & 0.53 & 71 \\\\\nLSTM(128,128) & 0.53 & 80 \\\\\nLSTM(128,128,128) & \\textbf{0.52} & 93 \\\\\nLSTM(256) & 0.83 & 83 \\\\\nLSTM(256,256) & 0.82 & 46 \\\\\nLSTM(256,256,256) & 0.82 & 39 \\\\\nLSTM(512) & 0.81 & 33 \\\\\nLSTM(512,512) & 0.81 & 31 \\\\\nLSTM(512,512,512) & 0.82 & 25 \\\\\nGPT-2 & 0.92 & 94 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:benchmarking-models} shows the best results discovered for each LSTM hyperparameter set and the GPT-2 model. Figures \\ref{fig:128-lstms-benchmark} and \\ref{fig:bad-lstms-benchmark} show the epoch-loss training processes for the LSTMs separated for readability purposes and Figure \\ref{fig:gpt-benchmark} shows the same training process for the GPT-2 model. 
These generalised experiments for all data provide a tuning point for synthetic data to be generated for each of the individuals (given respective personally trained models). LSTMs with 128 hidden units far outperformed the other models, which were also sometimes erratic in terms of their attempt at loss reduction over time. The GPT-2 model is observed to be especially erratic, which is possibly due to its unsupervised attention-based approach. \n\nAlthough some training processes were not as smooth as others, manual exploration showed that acceptable sets of data were able to be produced.\n\n\\subsection{Transfer Learning for Data-scarce Speaker Recognition}\n\n\\begin{table*}[]\n\\centering\n\\caption{Results of the experiments for all subjects. Best models for each Transfer Learning experiment are bold, and the best overall result per-subject is also underlined. Red font denotes a synthetic data-exposed model that scored lower than the classical learning approach. }\n\\label{tab:bigresults}\n\\footnotesize\n\\begin{tabular}{@{}llllllllll@{}}\n\\toprule\n\\multicolumn{1}{c}{} & & \\multicolumn{4}{l}{\\textbf{LSTM}} & \\multicolumn{4}{l}{\\textbf{GPT-2}} \\\\ \\cmidrule(l){3-10} \n\\multicolumn{1}{c}{\\multirow{-2}{*}{\\textbf{Subject}}} & \\multirow{-2}{*}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Synth.\\\\ Data\\end{tabular}}} & \\textit{\\textbf{Acc.}} & \\textit{\\textbf{F1}} & \\textit{\\textbf{Prec.}} & \\multicolumn{1}{l|}{\\textit{\\textbf{Rec.}}} & \\textit{\\textbf{Acc.}} & \\textit{\\textbf{F1}} & \\textit{\\textbf{Prec.}} & \\textit{\\textbf{Rec.}} \\\\ \\midrule\n & \\textit{\\textbf{0}} & 93.57 & 0.94 & 0.93 & \\multicolumn{1}{l|}{0.93} & 93.57 & 0.94 & 0.93 & 0.93 \\\\\n & \\textit{\\textbf{2500}} & {\\ul \\textbf{99.5}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & \\multicolumn{1}{l|}{{\\ul \\textbf{$\\sim$1}}} & 97.32 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{5000}} & 97.37 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & \\textbf{97.77} & \\textbf{0.98} & \\textbf{0.98} & \\textbf{0.98} \\\\\n & \\textit{\\textbf{7500}} & \\textbf{99.33} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 99.2 & 0.99 & 0.99 & 0.99 \\\\\n\\multirow{-5}{*}{\\textbf{1}} & \\textit{\\textbf{10000}} & 99.1 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & \\textbf{99.3} & \\textbf{0.99} & \\textbf{0.99} & \\textbf{0.99} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 95.13 & 0.95 & 0.95 & \\multicolumn{1}{l|}{0.95} & 95.13 & 0.95 & 0.95 & 0.95 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.6} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.5 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{5000}} & \\textbf{99.5} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.41 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{7500}} & {\\ul \\textbf{99.7}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & \\multicolumn{1}{l|}{{\\ul \\textbf{$\\sim$1}}} & {\\ul \\textbf{99.7}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\\n\\multirow{-5}{*}{\\textbf{2}} & \\textit{\\textbf{10000}} & \\textbf{99.42} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 99.38 & 0.99 & 0.99 & 0.99 \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 96.58 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & 96.58 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.2} & \\textbf{0.99} & \\textbf{0.99} & 
\\multicolumn{1}{l|}{\\textbf{0.99}} & 98.41 & 0.98 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{5000}} & 98.4 & 0.98 & 0.98 & \\multicolumn{1}{l|}{0.98} & \\textbf{99} & \\textbf{0.99} & \\textbf{0.99} & \\textbf{0.99} \\\\\n & \\textit{\\textbf{7500}} & \\textbf{99.07} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 98.84 & 0.99 & 0.99 & 0.99 \\\\\n\\multirow{-5}{*}{\\textbf{3}} & \\textit{\\textbf{10000}} & 98.44 & 0.98 & 0.98 & \\multicolumn{1}{l|}{0.98} & {\\ul \\textbf{99.47}} & {\\ul \\textbf{0.99}} & {\\ul \\textbf{0.99}} & {\\ul \\textbf{0.99}} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 98.5 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & 98.5 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{2500}} & {\\color[HTML]{FE0000} 97.86} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} & \\multicolumn{1}{l|}{{\\color[HTML]{FE0000} 0.98}} & \\textbf{99.42} & \\textbf{0.99} & \\textbf{0.99} & \\textbf{0.99} \\\\\n & \\textit{\\textbf{5000}} & \\textbf{99.22} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & {\\color[HTML]{FE0000} 97.75} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} \\\\\n & \\textit{\\textbf{7500}} & {\\color[HTML]{FE0000} 97.6} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} & \\multicolumn{1}{l|}{{\\color[HTML]{FE0000} 0.98}} & {\\color[HTML]{FE0000} \\textbf{98.15}} & {\\color[HTML]{FE0000} \\textbf{0.98}} & {\\color[HTML]{FE0000} \\textbf{0.98}} & {\\color[HTML]{FE0000} \\textbf{0.98}} \\\\\n\\multirow{-5}{*}{\\textbf{4}} & \\textit{\\textbf{10000}} & 99.22 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & {\\ul \\textbf{99.56}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 96.6 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & 96.6 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.47} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 99.23 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{5000}} & 99.4 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & \\textbf{99.83} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{7500}} & 99.2 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & {\\ul \\textbf{99.85}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\\n\\multirow{-5}{*}{\\textbf{5}} & \\textit{\\textbf{10000}} & 99.67 & $\\sim$1 & $\\sim$1 & \\multicolumn{1}{l|}{$\\sim$1} & \\textbf{99.78} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 97.3 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & 97.3 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.8} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.75 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{5000}} & 99.75 & $\\sim$1 & $\\sim$1 & \\multicolumn{1}{l|}{$\\sim$1} & {\\color[HTML]{FE0000} 96.1} & {\\color[HTML]{FE0000} 0.96} & {\\color[HTML]{FE0000} 0.96} & {\\color[HTML]{FE0000} 0.96} \\\\\n & \\textit{\\textbf{7500}} & {\\color[HTML]{000000} 97.63} & {\\color[HTML]{000000} 0.98} & {\\color[HTML]{000000} 0.98} & \\multicolumn{1}{l|}{{\\color[HTML]{000000} 0.98}} & {\\ul \\textbf{99.82}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\\n\\multirow{-5}{*}{\\textit{\\textbf{6}}} & \\textit{\\textbf{10000}} & 99.67 & $\\sim$1 & $\\sim$1 & 
\\multicolumn{1}{l|}{$\\sim$1} & \\textbf{99.73} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 90.7 & 0.91 & 0.91 & \\multicolumn{1}{l|}{0.91} & 90.7 & 0.91 & 0.91 & 0.91 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.86} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.78 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{5000}} & \\textbf{99.89} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.86 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{7500}} & \\textbf{99.91} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.84 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n\\multirow{-5}{*}{\\textit{\\textbf{7}}} & \\textit{\\textbf{10000}} & {\\ul \\textbf{99.94}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & \\multicolumn{1}{l|}{{\\ul \\textbf{$\\sim$1}}} & 99.73 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\ \\midrule\n\\textit{\\textbf{Avg.}} & & 98.43 & 0.98 & 0.98 & 0.98 & 98.40 & 0.98 & 0.98 & 0.98 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table*}\n\nTable \\ref{tab:bigresults} shows all of the results for each subject, both with and without exposure to synthetic data. Per-run, the LSTM achieved better results over the GPT-2 in 14 instances whereas the GPT-2 achieved better results over the LSTM in 13 instances. Of the five runs that scored lower than no synthetic data exposure, two were LSTM and three were GPT-2. Otherwise, 51 of the 56 experiments all outperformed the original model without synthetic data exposure and every single subject experienced their best classification result in all cases when the model had been exposed to synthetic data. The best score on a per-subject basis was achieved by exposing the network to data produced by the LSTM three times and the GPT-2 five times (both including Subject 2 where both were best at 99.7\\%). The maximum diversion of training accuracy to validation accuracy was $\\sim1\\%$ showing that although high results were attained, overfitting was relatively low; with more computational resources, k-fold and LOO cross validation are suggested as future works to achieve more accurate measures of variance within classification. \n\nThese results attained show that speaker classification can be improved by exposing the network to synthetic data produced by both supervised and attention-based models and then transferring the weights to the initial problem, which most often scores lower without synthetic data exposure in all cases but five although those subjects still experienced their absolute best result through synthetic data exposure regardless. 
\n\n\\section{Introduction}\n\\begin{figure}\n \\centering\n \\includegraphics[scale=1.1]{small-diagram.pdf}\n \\caption{A simplified diagram of the experiments followed by this work towards the comparison of classical vs transfer learning from synthetic data for speaker recognition. A more detailed diagram can be found in Figure \\ref{fig:method}}\n \\label{fig:smalldiagram}\n\\end{figure}\nData scarcity is an issue that arises often outside of the lab, due to the large amount of data required for classification activities. 
This includes speaker classification to enable personalised Human-Machine Interaction (HMI) and Human-Robot Interaction (HRI), a technology of growing consumer usefulness in smart-device biometric security on smartphones and tablets, as well as in multiple-user smarthome assistants (operating on a per-person basis), which are not yet available. Speaker recognition, i.e., autonomously recognising a person from their voice, is a well-explored topic in the state of the art, but only within the bounds of data availability, which causes difficulty in real-world use. It is unrealistic to expect a user to willingly provide many minutes or hours of speech data to a device unless it is allowed to constantly record daily life, which is itself a modern cause for concern with virtual home assistants. In this work, we show that data scarcity in speaker recognition can be overcome by collecting only several short spoken sentences of audio from a user and then using extracted Mel-Frequency Cepstral Coefficient (MFCC) data in both supervised and unsupervised learning paradigms to generate synthetic speech, which is then used in a process of transfer learning to better recognise the speaker in question.\n\nAutonomous speaker classification can suffer from data scarcity since the user is compared against a large database of many speakers. The most obvious solution is to collect more data, but with smart home assistants existing within private environments and potentially listening to private data, this produces an obvious problem of privacy and security~\\cite{logsdon2018alexa,dunin2020alexa}. Not collecting more data, on the other hand, presents a large class imbalance between the speaker to be classified and the examples of other speakers, producing lower accuracies and less trustworthy results~\\cite{babbar2019data}; this must be solved for purposes such as biometrics, where results must be trusted when used for security. In this study, weighting of errors is performed to introduce balance, but it is noted that the results still have room for improvement regardless. \n\nData augmentation is the idea that useful new data can be generated by algorithms or models in order to improve the classification of the original, scarce dataset. A simple but prominent example of this is the warping, flipping, mirroring and noising of images to better prepare image classification algorithms~\\cite{wang2017effectiveness}. A more complex example through generative models can be seen in recent works that utilise methods such as the Generative Adversarial Network (GAN) to create synthetic data which itself also holds useful information for learning and classification~\\cite{zhu2018emotion,frid2018synthetic}. Although image classification is the most common and most obvious application of generative models for data augmentation, recent works have also enjoyed success in augmenting audio data for sound classification~\\cite{yang2018se,madhu2019data}. This work extends upon a previous conference paper that explored the hypothesis \\textit{``can the synthetic data produced by a generative model aid in speaker recognition?\"}~\\cite{bird2020overcoming}, within which a Character-Level Recurrent Neural Network (RNN), as a generative model, produced synthetic data useful for learning to recognise the speaker. 
This work extends upon these preliminary experiments in the following ways:\n\\begin{enumerate}\n \\item The extension of the dataset to more subjects from multiple international backgrounds and the extraction of the MFCCs of each subject;\n \\item Benchmarking of a Long Short Term Memory (LSTM) architecture with 64, 128, 256 and 512 LSTM units in one to three hidden layers towards reduction of loss in generating synthetic data. The best model is selected as the candidate for the LSTM data generator;\n \\item The inclusion of OpenAI's GPT-2 model as a data generator in order to compare the approaches of supervised (LSTM) and attention-based (GPT-2) methods for synthetic data augmentation for speaker classification.\n\\end{enumerate}\nThe scientific contributions of this work are thus related to the application of synthetic MFCCs for the improvement of speaker recognition. A diagram of the experiments can be observed in Figure \\ref{fig:smalldiagram}. To the authors' knowledge, the paper that this work extends is the first instance of this research being explored\\footnote{to the best of our knowledge and based on literature review.}. The best LSTM and the GPT-2 model are tasked with generating 2,500, 5,000, 7,500, and 10,000 synthetic data objects for each subject after learning from the scarce datasets extracted from their speech. A network then learns from these data and its weights are transferred to a network aiming to learn and classify the real data; many of these networks show an improvement over learning from the real data alone. For all subjects, the results show that several of the networks perform best after exposure to synthetic data. \n\nThe remainder of this article is as follows. Section \\ref{sec1:background} initially explores important scientific concepts of the processes followed by this work and also the current State-of-the-Art in the synthetic data augmentation field. Following this, Section \\ref{sec2:method} outlines the method followed by this work, including data collection, synthetic data augmentation, MFCC extraction to transform audio into a numerical dataset, and finally the learning processes followed to achieve results. The final results of the experiments are then discussed in Section \\ref{sec:results}, and future work is outlined and conclusions presented in Section \\ref{sec4:futureworkconclusion}.\n\n\\section{Background and Related Work}\n\\label{sec1:background}\n\nVerification of a speaker is the process of identifying a single individual against many others by spoken audio data~\\cite{poddar2017speaker}. That is, the recognition of a set of the person's speech data $X$ specifically from a speech set $Y$, where $X \\subset Y$. In the simplest sense, this can be given as a binary classification problem; for each data object $o$, is $o \\in X$? Is the speaker to be recognised speaking, or is it another individual? Speaker recognition is important for social HRI~\\cite{mumolo2003distant} (the robot's perception of the individual based on their acoustic utterances), Biometrics~\\cite{ratha2001automated}, and Forensics~\\cite{rose2002forensic}, among many others. In~\\cite{hasan2004speaker}, researchers found relative ease in classifying 21 speakers from a limited set, but the problem becomes more difficult as it becomes more realistic: classifying a speaker based on their utterances grows increasingly difficult as the dataset grows~\\cite{nagrani2017voxceleb,yadav2018learning,zeinali2019but}. 
In this work, the speaker is recognised from many thousands of other examples of human speech from the Flickr8k speakers dataset. \n\n\\subsection{LSTM and GPT-2}\nLong Short Term Memory (LSTM) is a form of recurrent artificial neural network in which gated units allow the network to learn from previous states as well as the current state. Initially, the forget gate selects which information to discard from the cell state: \n\\begin{equation}\n f_{t} = \\sigma \\left( W_{f} \\cdot \\left[ h_{t-1}, x_{t} \\right] + b_{f} \\right),\n\\end{equation}\n\\noindent where $W_f$ are the weights of the units, $h_{t-1}$ is the output of the previous time step, $x_t$ are the inputs and $b_f$ is an applied bias. Data to be stored is then selected by the input gate $i_t$, which generates candidate values $\\hat{C}_t$:\n\\begin{equation}\ni_{t} = \\sigma \\left( W_{i} \\cdot \\left[ h_{t-1}, x_{t} \\right] + b_{i} \\right),\n\\end{equation}\n\\begin{equation}\n\\hat{C}_{t} = \\tanh \\left( W_{c} \\cdot \\left[ h_{t-1}, x_{t} \\right] + b_{c} \\right).\n\\end{equation}\n\nAn element-wise operation then updates the cell state:\n\\begin{equation}\nC_{t} = f_{t} * C_{t-1} + i_{t} * \\hat{C}_{t}.\n\\end{equation}\n\nFinally, the output gate $o_t$ is computed and the hidden state is updated:\n\\begin{equation}\no_{t} = \\sigma \\left( W_{o} \\cdot \\left[ h_{t-1}, x_{t} \\right] + b_{o} \\right),\n\\end{equation}\n\\begin{equation}\nh_{t} = o_{t} * \\tanh(C_{t}).\n\\end{equation}\nBecause previous states are explicitly carried forward, time-dependent data are often classified very effectively by the memory-like nature of the LSTM. LSTM is thus a particularly powerful technique for speech recognition~\\cite{graves2013hybrid} due to the temporal nature of the data~\\cite{belin2000voice}.\n\nIn addition to LSTM, this study also considers OpenAI's Generative Pretrained Transformer 2 (GPT-2) model~\\cite{radford2018improving,radford2019language} as a candidate for producing synthetic MFCC data to improve speaker recognition. The model in question, \\textit{335M}, is much deeper than the LSTMs explored, at 12 layers. In the OpenAI paper, language modelling is given for a set of samples $x_1, x_2, ..., x_n$, each composed of a variable-length symbol sequence $s_1, s_2, ..., s_n$, where the joint probability over symbols is factorised as the product of conditional probabilities~\\cite{jelinek1980interpolated,bengio2003neural}:\n\\begin{equation}\n p(x) = \\prod_{i=1}^n p(s_{i} \\mid s_{1}, ..., s_{i-1}).\n\\end{equation}\nAttention is given to various parts of the input vector:\n\\begin{equation}\n \\mathrm{Attention}(Q,K,V) = \\mathrm{softmax} \\left( \\frac{QK^T}{ \\sqrt{d_{k}} } \\right) V,\n\\end{equation}\nwhere $Q$ is the \\textit{query}, i.e., the single object in the sequence, in this case a word; $K$ are the keys, which are vector representations of the input sequence; and $V$ are the values, vector representations of all words in the sequence. In the initial encoder, decoder, and attention blocks $Q=V$, whereas in the later attention block that takes these outputs as input, $Q \\neq V$, since both are derived from the block's `memory'. 
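\n\nTo make the attention mechanism above concrete, the following minimal NumPy sketch computes scaled dot-product attention for illustrative matrix shapes. It is not the implementation used in this work; all names and dimensions are illustrative assumptions.\n\\begin{verbatim}\nimport numpy as np\n\ndef scaled_dot_product_attention(Q, K, V):\n    # similarity between queries and keys, scaled by sqrt(d_k)\n    d_k = K.shape[-1]\n    scores = Q @ K.T / np.sqrt(d_k)\n    # softmax over the key dimension\n    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))\n    weights /= weights.sum(axis=-1, keepdims=True)\n    # weighted sum of the value vectors\n    return weights @ V\n\n# Illustrative shapes: 4 queries, 6 keys/values, dimension 8.\nrng = np.random.default_rng(0)\nQ = rng.normal(size=(4, 8))\nK = rng.normal(size=(6, 8))\nV = rng.normal(size=(6, 8))\nprint(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)\n\\end{verbatim}\n\n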
In order to combine multiple queries, that is, to consider previously learnt rules from text, multi-headed attention is presented:\n\\begin{equation}\n\\begin{aligned}\n \\mathrm{MultiHead}(Q,K,V) = \\mathrm{Concatenate}(head_{1}, ..., head_{h})W^{O}, \\\\\n head_{i} = \\mathrm{Attention}(QW^{Q}_{i}, KW^{K}_{i}, VW^{V}_{i}).\n\\end{aligned}\n\\end{equation}\nAs in the previous equation, the $h$ learnt projections to dimensions $d_q$, $d_k$ and $d_v$ are also considered, given that the block has multiple heads. The above equations and further detail on both attention and multi-headed attention can be found in \\cite{vaswani2017attention}. The unsupervised nature of the GPT models is apparent since $Q$, $K$ and $V$ are derived from the same source. \n\nImportantly, GPT produces output conditioned not only on the input, but also on the task. The GPT-2 model has been shown to be a powerful state-of-the-art tool for language learning, understanding, and generation; researchers noted that the model could be used with ease to generate realistic propaganda for extremist terrorist groups~\\cite{solaiman2019release}, and that text generated by the model was difficult to detect~\\cite{gehrmann2019gltr,wolff2020attacking,adelani2020generating}. The latter papers are promising, since difficulty of detection suggests statistical similarity to real data, which is likely to aid in improving the classification accuracy of a model by exposing it to synthetic data objects output by such a model. \n\n\\subsection{Dataset Augmentation through Synthesis}\n\\label{subsec:dataaugmentation}\n\nPsychological studies of the similarities between real-life experiences and imagined perception have shown that the human imagination, through mentally augmenting and changing situations~\\cite{beres1960perception}, aids in improving the human learning process~\\cite{egan1989memory,heath2008exploring,macintyre2012emotions,egan2014imagination}. The importance of this ability to human learning suggests the usefulness of data augmentation, and as such, it is being explored as a potential solution to data scarcity and quality issues in the machine learning field. Even though the synthetic data may not be realistic alone, minute similarities between it and reality allow for better pattern recognition. \n\nThe idea of data augmentation as the first stage of fine-tuned learning is inspired by the aforementioned findings and follows a similar approach. Synthetic data are generated by learning from the real data, and algorithms are exposed to them in a learning process prior to learning from the real data; this is then compared to the classical approach of learning from the real data alone, where the performance of the former model relative to the latter shows the effect of the data augmentation learning process. Much of this work is recent, most of it from the last decade, and a pattern of success is noticeable in many prominent works when comparing the approach to the classical method of learning from real data alone. \n\nAs described, the field of exploring augmented data to improve classification algorithms is relatively young, but several prominent works show success in applying this approach. When augmented data from the SemEval dataset are learned from by a Recurrent Neural Network (RNN), researchers found that the overall best F-1 score was achieved for relation classification in comparison to the model learning from the dataset alone~\\cite{xu2016improved}. 
Due to data scarcity in the medical field, classification of liver lesions~\\cite{frid2018synthetic} and Alzheimer's Disease~\\cite{shin2018medical} has also shown improvement when the learning models (CNNs) considered data augmented by Convolutional GANs. In Natural Language Processing, it was found that word augmentation aids in improving sentence classification by both CNN and RNN models~\\cite{kobayashi2018contextual}. The \\textit{DADA} model has been suggested as a strong method to produce synthetic data for the improvement of data-scarce problems through a deep adversarial approach, via a dedicated discriminator network aiming specifically to augment data~\\cite{zhang2019dada}, and has noted success in machine learning problems~\\cite{barz2020deep}. \n\nData augmentation has shown promise in improving multi-modal emotion recognition when considering audio and images~\\cite{huang2018multimodal}, digital signal classification~\\cite{tang2018digital}, as well as a variety of audio classification problems such as segments of the Hub500 problem~\\cite{park2019specaugment}. Additionally, synthetic data augmentation of mel-spectrograms has been shown to improve acoustic scene recognition~\\cite{yang2018se}. Realistic text-to-speech, such as with the Tacotron model~\\cite{wang2017tacotron}, is achieved by learning from textual representations of the audio considered and reverse engineering the model to produce audio based on text input. A recent preliminary study showed that GANs may be able to aid in producing synthetic data for speaker recognition~\\cite{chien2018adversarial}. \n\nThe temporal models considered in this work to generate synthetic speech data have recently shown success in generating acoustic sounds~\\cite{eck2002finding}, accurate timeseries data~\\cite{senjyu2006application}, written text~\\cite{pawade2018story,sha2018order}, and artistic images~\\cite{gregor2015draw}. Specifically, temporal models have also been observed to be successful in generating MFCC data~\\cite{wang2017rnn}, which is the data type considered in this work. Many prominent works in speech recognition consider temporal learning to be highly important~\\cite{fernandez2007application,he2019streaming,sak2014long}, and likewise for the generation of temporal data~\\cite{valentini2016investigating,wang2017rnn,tachibana2018efficiently} (which is what this study aims to perform). If it is possible to generate data that bears similarity to the real data, then it could improve the models while also reducing the need for large amounts of real data to be collected. \n\n\\section{Proposed Approach}\n\\label{sec2:method}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.9]{overalldiagram_new.pdf}\n \\caption{A diagram of the experimental method in this work. Note that the two networks being directly compared are classifying the same data, with the difference being the initial weight distribution, either from a standard random distribution or from transfer learning from GPT-2 and LSTM produced synthetic data.}\n \\label{fig:method}\n\\end{figure*}\n\nThis section describes the development of the proposed approach, which can be observed overall in Figure \\ref{fig:method}. For each test, five networks are trained in order to gain results. Firstly, a network is trained simply to perform the speaker classification experiment without transfer learning (from a standard random weight distribution). 
Produced by the LSTM and GPT-2, synthetic data is used to train another network, whose weights are then used as the initial distribution of a final network performing the same experiment as the first network (classifying the speaker's real data against the Flickr8k speakers). Thus, the two networks leading to the final classification score in the diagram are directly comparable since they learn from the same data; they differ only in their initial weight distribution (where the latter network has weights learnt from synthetic data).\n\n\\subsection{Real and Synthetic Data Collection}\nThe data collected, as previously described, presents a binary classification problem: whether or not the individual in question is the one producing the acoustic utterance. \n\nThe large corpus of data for the \\textit{``not the speaker\"} class is gathered via the Flickr8k dataset~\\cite{harwath2015deep}, which contains 40,000 individual utterances describing 8,000 images by a large number of speakers, unspecified by the authors. MFCCs are extracted (described in Section \\ref{subsec:mfcc}) to generate temporal numerical vectors which represent a short amount of time from each audio clip. 100,000 data objects are selected: 50 blocks of 1,000 objects, plus 50,000 other data objects selected randomly from the remainder of the dataset. This is performed so that the dataset contains individuals' speech at length as well as short samples from many thousands of other speakers. \n\n\\begin{table*}[]\n\\footnotesize\n\\centering\n\\caption{Information regarding the data collection from the seven subjects.}\n\\label{tab:subjects}\n\\begin{tabular}{@{}lllllll@{}}\n\\toprule\n\\textbf{Subject} & \\textbf{Sex} & \\textbf{Age} & \\textbf{Nationality} & \\textbf{Dialect} & \\textbf{Time Taken (s)} & \\textbf{\\begin{tabular}[c]{@{}l@{}}Real\\\\Data\\end{tabular}} \\\\ \\midrule\n\\textit{\\textbf{1}} & M & 23 & British & Birmingham & 24 & 4978 \\\\\n\\textit{\\textbf{2}} & M & 24 & American & Florida & 13 & 2421 \\\\\n\\textit{\\textbf{3}} & F & 28 & Irish & Dublin & 12 & 2542 \\\\ \n\\textit{\\textbf{4}} & F & 30 & British & London & 12 & 2590 \\\\ \n\\textit{\\textbf{5}} & F & 40 & British & London & 10 & 2189 \\\\ \n\\textit{\\textbf{6}} & M & 21 & French & Paris & 8 & 1706 \\\\ \n\\textit{\\textbf{7}} & F & 23 & French & Paris & 9 & 1952 \\\\ \n\\midrule\n\\textit{\\textbf{Flickr8K}} & & & & & & 100000 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table*}\n\nTo gather data for recognising speakers, seven subjects are considered. Information on the subjects can be seen in Table \\ref{tab:subjects}. Subjects speak five random Harvard Sentences from the \\textit{IEEE recommended practice for speech quality measurements}~\\cite{rothauser1969ieee}, which together contain most of the spoken phonetic sounds in the English language~\\cite{bird2019phoneme2}. Importantly, this is a user-friendly process, because it requires only a few short seconds of audio data. The longest time taken was by Subject 1, at 24 seconds, producing 4978 data objects; the shortest were the two French individuals, who required 8 and 9 seconds respectively to speak the five sentences. All of the audio data were recorded using consumer-available recording devices such as smartphones and computer headsets. 
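\n\nTo illustrate how such a binary dataset could be assembled, the following minimal pandas sketch combines per-subject MFCC rows with the Flickr8k sample described above. The file names, column layout and balancing heuristic are illustrative assumptions; this work does not prescribe a particular file organisation or weighting formula.\n\\begin{verbatim}\nimport pandas as pd\n\n# Hypothetical CSV files of MFCC rows (26 coefficients per row).\nsubject = pd.read_csv('subject_1_mfcc.csv')       # the target speaker\nflickr = pd.read_csv('flickr8k_mfcc_sample.csv')  # 100,000 Flickr8k rows\n\nsubject['class'] = 1   # 'the speaker'\nflickr['class'] = 0    # 'not the speaker'\n\ndataset = pd.concat([subject, flickr], ignore_index=True)\n\n# The classes are heavily imbalanced, so per-class error weights are\n# computed; this is one common balancing heuristic, not necessarily the\n# exact weighting used in this work.\ncounts = dataset['class'].value_counts()\nclass_weight = {c: len(dataset) / (2 * n) for c, n in counts.items()}\nprint(class_weight)\n\\end{verbatim}\n\n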
Synthetic datasets are generated following the learning processes of the best LSTM and the GPT-2 model, where the probability of the next character is decided by the respective learning algorithm. Data are generated in blocks of 1,000 characters within a loop, and the final line of each block is removed (since it was often cut off at the 1,000-character boundary). Illogical lines of data (those that did not have 26 comma-separated values and a class) were removed, but these were observed to be somewhat rare, as both the LSTM and GPT-2 models had learnt the data format relatively well since it was uniform throughout. Throughout the datasets, the format was a uniform 27 comma-separated values, where the values were all numerical and the final value was `1', followed by a line-break character.\n\n\\subsection{Feature Extraction}\n\\label{subsec:mfcc}\n\nThe nature of acoustic data is that the points immediately before and after a given point in time are also related to its class. Audio data is temporal in this regard, and thus classification of a single point in time is an extremely difficult problem~\\cite{xiong2003comparing,bird2019phoneme}. In order to overcome this issue, features are extracted from the wave through a sliding window approach. The statistical features extracted in this work are the first 26 Mel-Frequency Cepstral Coefficients, due to findings in the scientific state of the art arguing for their prominence over other methods~\\cite{muda2010voice,sahidullah2012design}. Sliding windows are set to a length of 0.025 seconds with a step of 0.01 seconds, and extraction is performed as follows:\n\\begin{enumerate}\n\\item The Fourier Transform of the time window data $\\omega$ is calculated:\n\\begin{equation}\nX(j\\omega)=\\int_{-\\infty}^\\infty x(t) e^{-j\\omega t} dt.\n\\end{equation}\n\\item The powers from the FT are mapped to the Mel scale, that is, the human psychological scale of audible pitch~\\cite{stevens1937scale}, via triangular overlapping windows. \n\\item The logarithms of the powers at each of the Mel frequencies are taken.\n\\item The derived Mel-log powers are then treated as a signal, and a Discrete Cosine Transform (DCT) is calculated:\n\\begin{equation}\n\\begin{aligned}\nX_k = \\sum_{n=0}^{N-1} x_n \\cos \\left[ \\frac{\\pi}{N} \\left(n+ \\frac{1}{2}\\right) k \\right], \\\\ \nk=0,...,N-1,\n\\end{aligned}\n\\end{equation}\n\\noindent where $x$ is the array of length $N$ and $k$ is the index of the output coefficient being calculated, such that the $N$ real numbers $x_{0} ... x_{N-1}$ are transformed into the $N$ real numbers $X_{0} ... X_{N-1}$. \n\\end{enumerate}\nThe amplitudes of the resulting spectrum are taken as the \\textit{MFCCs}. The resultant data provide a mathematical description of wave behaviour in terms of sounds; each data object, made of the 26 attributes produced from a sliding window, is then treated as the set of input attributes for the neural networks for both speaker recognition and synthetic data generation (with a class label also). \n\nThis process is performed for all of the selected Flickr8K data as well as the real data recorded from the subjects. The MFCC data from each of the 7 subjects' audio recordings is used as input to the LSTM and GPT-2 generative models for training and subsequent data augmentation.\n
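\nAs a brief illustration of the extraction described above, the following sketch computes 26 MFCCs over 0.025-second windows with a 0.01-second step. The \\texttt{python\\_speech\\_features} library and the file name are assumptions for illustration only, as this work does not prescribe a particular implementation.\n\\begin{verbatim}\nfrom scipy.io import wavfile\nfrom python_speech_features import mfcc\n\n# Hypothetical mono recording of one subject.\nrate, signal = wavfile.read('subject_1.wav')\n\n# 26 coefficients, 0.025 s windows, 0.01 s step, as described above.\nfeatures = mfcc(signal, samplerate=rate, winlen=0.025, winstep=0.01,\n                numcep=26, nfilt=26)\nprint(features.shape)  # (number of windows, 26)\n\\end{verbatim}\n\n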
\n\\subsection{Speaker Classification Learning Process}\nFor each subject, the Flickr data and recorded audio form the basis dataset for the speaker recognition problem. Eight datasets for transfer learning are then formed on a per-subject basis; these are the aforementioned data plus $2500$, $5000$, $7500$ or $10000$ synthetic data objects generated by either the LSTM or the GPT-2 model. The LSTM has a standard dropout of 0.2 between each layer.\n\nThe baseline accuracy for comparison is given as ``Synth. Data: 0\" later in Table \\ref{tab:bigresults}, which denotes a model that has not been exposed to any of the synthetic data. This baseline gives scores that are directly comparable to identical networks whose initial weight distributions are those trained to classify the synthetic data generated for the subject, which are then used to learn from the real data. As previously described, the two sets of synthetic data to which the models are exposed during pre-training of the real speaker classification problem are generated by either an LSTM or a GPT-2 language model. Please note that, due to this, the results presented have no bearing on whether or not the network could classify the synthetic data well; the weights are simply used as the initial distribution for the same problem. If the pattern holds that the transfer learning networks achieve better results than the networks which have not been trained on such data, it supports the hypothesis that speaker classification can be improved when considering either of these methods of data augmentation. This process can be observed in Figure \\ref{fig:method}. \n\nFor the Deep Neural Network that classifies the speaker, a topology of three hidden layers consisting of 30, 7, and 29 neurons respectively, with ReLU activation functions and an Adam optimiser~\\cite{kingma2014adam}, is initialised. These hyperparameters are chosen due to a previous study that performed a genetic search of neural network topologies for the classification of phonetic sounds in the form of MFCCs~\\cite{bird2020optimisation}. The networks are given an unlimited number of epochs to train, ceasing only through an early stopping callback of 25 epochs with no improvement. The best weights are restored before final scores are calculated. This is allowed in order to make sure that all models stabilise to an asymptote and to reduce the risk of stopping models before they achieve their potential best abilities. \n\nClassification errors are weighted by class prominence so that both classes contribute equally, since there exists a large imbalance between the speaker and the rest of the data. All of the LSTM experiments performed in this work were executed on an Nvidia GTX980Ti GPU, while the GPT-2 experiment was performed on an Nvidia Tesla K80 GPU provided by Google Colab. \n\n\\section{Results}\n\\label{sec:results}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{128-lstms.pdf}\n \\caption{The training processes of the best performing models in terms of loss, separated for readability purposes. Results are given for a benchmarking experiment on all of the dataset rather than an individual.}\n \\label{fig:128-lstms-benchmark}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{bad-lstms.pdf}\n \\caption{The training processes of LSTMs with 64, 256, and 512 units in 1-3 hidden layers, separated for readability purposes. Results are given for a benchmarking experiment on all of the dataset rather than an individual.}\n \\label{fig:bad-lstms-benchmark}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.5]{gpt-graph.pdf}\n \\caption{The training process of the GPT-2 model. 
Results are given for a benchmarking experiment on all of the dataset rather than an individual.}\n \\label{fig:gpt-benchmark}\n\\end{figure}\n\n\\begin{table}[]\n\\centering\n\\caption{Best epochs and their losses for the 12 LSTM benchmarks and the GPT-2 training process. All models are benchmarked on the whole set of subjects for 100 epochs each in order to search for promising hyperparameters.}\n\\label{tab:benchmarking-models}\n\\begin{tabular}{@{}lll@{}}\n\\toprule\n\\textbf{Model} & \\textbf{Best Loss} & \\textbf{Epoch} \\\\ \\midrule\nLSTM(64) & 0.88 & 99 \\\\\nLSTM(64,64) & 0.86 & 99 \\\\\nLSTM(64,64,64) & 0.85 & 99 \\\\\nLSTM(128) & 0.53 & 71 \\\\\nLSTM(128,128) & 0.53 & 80 \\\\\nLSTM(128,128,128) & \\textbf{0.52} & 93 \\\\\nLSTM(256) & 0.83 & 83 \\\\\nLSTM(256,256) & 0.82 & 46 \\\\\nLSTM(256,256,256) & 0.82 & 39 \\\\\nLSTM(512) & 0.81 & 33 \\\\\nLSTM(512,512) & 0.81 & 31 \\\\\nLSTM(512,512,512) & 0.82 & 25 \\\\\nGPT-2 & 0.92 & 94 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:benchmarking-models} shows the best results discovered for each LSTM hyperparameter set and for the GPT-2 model. Figures \\ref{fig:128-lstms-benchmark} and \\ref{fig:bad-lstms-benchmark} show the epoch-loss training processes for the LSTMs, separated for readability purposes, and Figure \\ref{fig:gpt-benchmark} shows the same training process for the GPT-2 model. These generalised experiments over all of the data provide the tuned hyperparameters with which synthetic data are then generated for each individual (via respective, personally trained models). LSTMs with 128 hidden units far outperformed the other models, some of which were also erratic in their loss reduction over time. The GPT-2 model is observed to be especially erratic, which is possibly due to its unsupervised attention-based approach. \n\nAlthough some training processes were not as smooth as others, manual exploration showed that acceptable sets of data could be produced.\n\n\\subsection{Transfer Learning for Data-scarce Speaker Recognition}\n\n\\begin{table*}[]\n\\centering\n\\caption{Results of the experiments for all subjects. Best models for each Transfer Learning experiment are bold, and the best overall result per-subject is also underlined. Red font denotes a synthetic data-exposed model that scored lower than the classical learning approach. 
}\n\\label{tab:bigresults}\n\\footnotesize\n\\begin{tabular}{@{}llllllllll@{}}\n\\toprule\n\\multicolumn{1}{c}{} & & \\multicolumn{4}{l}{\\textbf{LSTM}} & \\multicolumn{4}{l}{\\textbf{GPT-2}} \\\\ \\cmidrule(l){3-10} \n\\multicolumn{1}{c}{\\multirow{-2}{*}{\\textbf{Subject}}} & \\multirow{-2}{*}{\\textbf{\\begin{tabular}[c]{@{}l@{}}Synth.\\\\ Data\\end{tabular}}} & \\textit{\\textbf{Acc.}} & \\textit{\\textbf{F1}} & \\textit{\\textbf{Prec.}} & \\multicolumn{1}{l|}{\\textit{\\textbf{Rec.}}} & \\textit{\\textbf{Acc.}} & \\textit{\\textbf{F1}} & \\textit{\\textbf{Prec.}} & \\textit{\\textbf{Rec.}} \\\\ \\midrule\n & \\textit{\\textbf{0}} & 93.57 & 0.94 & 0.93 & \\multicolumn{1}{l|}{0.93} & 93.57 & 0.94 & 0.93 & 0.93 \\\\\n & \\textit{\\textbf{2500}} & {\\ul \\textbf{99.5}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & \\multicolumn{1}{l|}{{\\ul \\textbf{$\\sim$1}}} & 97.32 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{5000}} & 97.37 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & \\textbf{97.77} & \\textbf{0.98} & \\textbf{0.98} & \\textbf{0.98} \\\\\n & \\textit{\\textbf{7500}} & \\textbf{99.33} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 99.2 & 0.99 & 0.99 & 0.99 \\\\\n\\multirow{-5}{*}{\\textbf{1}} & \\textit{\\textbf{10000}} & 99.1 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & \\textbf{99.3} & \\textbf{0.99} & \\textbf{0.99} & \\textbf{0.99} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 95.13 & 0.95 & 0.95 & \\multicolumn{1}{l|}{0.95} & 95.13 & 0.95 & 0.95 & 0.95 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.6} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.5 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{5000}} & \\textbf{99.5} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.41 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{7500}} & {\\ul \\textbf{99.7}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & \\multicolumn{1}{l|}{{\\ul \\textbf{$\\sim$1}}} & {\\ul \\textbf{99.7}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\\n\\multirow{-5}{*}{\\textbf{2}} & \\textit{\\textbf{10000}} & \\textbf{99.42} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 99.38 & 0.99 & 0.99 & 0.99 \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 96.58 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & 96.58 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.2} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 98.41 & 0.98 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{5000}} & 98.4 & 0.98 & 0.98 & \\multicolumn{1}{l|}{0.98} & \\textbf{99} & \\textbf{0.99} & \\textbf{0.99} & \\textbf{0.99} \\\\\n & \\textit{\\textbf{7500}} & \\textbf{99.07} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 98.84 & 0.99 & 0.99 & 0.99 \\\\\n\\multirow{-5}{*}{\\textbf{3}} & \\textit{\\textbf{10000}} & 98.44 & 0.98 & 0.98 & \\multicolumn{1}{l|}{0.98} & {\\ul \\textbf{99.47}} & {\\ul \\textbf{0.99}} & {\\ul \\textbf{0.99}} & {\\ul \\textbf{0.99}} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 98.5 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & 98.5 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{2500}} & {\\color[HTML]{FE0000} 97.86} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} & \\multicolumn{1}{l|}{{\\color[HTML]{FE0000} 0.98}} & \\textbf{99.42} & \\textbf{0.99} & \\textbf{0.99} & \\textbf{0.99} \\\\\n & \\textit{\\textbf{5000}} & 
\\textbf{99.22} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & {\\color[HTML]{FE0000} 97.75} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} \\\\\n & \\textit{\\textbf{7500}} & {\\color[HTML]{FE0000} 97.6} & {\\color[HTML]{FE0000} 0.98} & {\\color[HTML]{FE0000} 0.98} & \\multicolumn{1}{l|}{{\\color[HTML]{FE0000} 0.98}} & {\\color[HTML]{FE0000} \\textbf{98.15}} & {\\color[HTML]{FE0000} \\textbf{0.98}} & {\\color[HTML]{FE0000} \\textbf{0.98}} & {\\color[HTML]{FE0000} \\textbf{0.98}} \\\\\n\\multirow{-5}{*}{\\textbf{4}} & \\textit{\\textbf{10000}} & 99.22 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & {\\ul \\textbf{99.56}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 96.6 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & 96.6 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.47} & \\textbf{0.99} & \\textbf{0.99} & \\multicolumn{1}{l|}{\\textbf{0.99}} & 99.23 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{5000}} & 99.4 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & \\textbf{99.83} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{7500}} & 99.2 & 0.99 & 0.99 & \\multicolumn{1}{l|}{0.99} & {\\ul \\textbf{99.85}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\\n\\multirow{-5}{*}{\\textbf{5}} & \\textit{\\textbf{10000}} & 99.67 & $\\sim$1 & $\\sim$1 & \\multicolumn{1}{l|}{$\\sim$1} & \\textbf{99.78} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 97.3 & 0.97 & 0.97 & \\multicolumn{1}{l|}{0.97} & 97.3 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.8} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.75 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{5000}} & 99.75 & $\\sim$1 & $\\sim$1 & \\multicolumn{1}{l|}{$\\sim$1} & {\\color[HTML]{FE0000} 96.1} & {\\color[HTML]{FE0000} 0.96} & {\\color[HTML]{FE0000} 0.96} & {\\color[HTML]{FE0000} 0.96} \\\\\n & \\textit{\\textbf{7500}} & {\\color[HTML]{000000} 97.63} & {\\color[HTML]{000000} 0.98} & {\\color[HTML]{000000} 0.98} & \\multicolumn{1}{l|}{{\\color[HTML]{000000} 0.98}} & {\\ul \\textbf{99.82}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} \\\\\n\\multirow{-5}{*}{\\textit{\\textbf{6}}} & \\textit{\\textbf{10000}} & 99.67 & $\\sim$1 & $\\sim$1 & \\multicolumn{1}{l|}{$\\sim$1} & \\textbf{99.73} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\ \\cmidrule(r){1-1}\n & \\textit{\\textbf{0}} & 90.7 & 0.91 & 0.91 & \\multicolumn{1}{l|}{0.91} & 90.7 & 0.91 & 0.91 & 0.91 \\\\\n & \\textit{\\textbf{2500}} & \\textbf{99.86} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.78 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{5000}} & \\textbf{99.89} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.86 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{7500}} & \\textbf{99.91} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\multicolumn{1}{l|}{\\textbf{$\\sim$1}} & 99.84 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n\\multirow{-5}{*}{\\textit{\\textbf{7}}} & \\textit{\\textbf{10000}} & {\\ul \\textbf{99.94}} & {\\ul \\textbf{$\\sim$1}} & {\\ul \\textbf{$\\sim$1}} & \\multicolumn{1}{l|}{{\\ul \\textbf{$\\sim$1}}} & 99.73 & $\\sim$1 
& $\\sim$1 & $\\sim$1 \\\\ \\midrule\n\\textit{\\textbf{Avg.}} & & 98.43 & 0.98 & 0.98 & 0.98 & 98.40 & 0.98 & 0.98 & 0.98 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table*}\n\nTable \\ref{tab:bigresults} shows all of the results for each subject, both with and without exposure to synthetic data. Per run, the LSTM achieved better results than the GPT-2 in 14 instances, whereas the GPT-2 achieved better results than the LSTM in 13 instances. Of the five runs that scored lower than no synthetic data exposure, two were LSTM and three were GPT-2. Otherwise, 51 of the 56 experiments outperformed the original model without synthetic data exposure, and every single subject achieved their best classification result with a model that had been exposed to synthetic data. The best score on a per-subject basis was achieved by exposing the network to data produced by the LSTM three times and by the GPT-2 five times (both including Subject 2, where both were best at 99.7\\%). The maximum divergence between training accuracy and validation accuracy was $\\sim1\\%$, showing that although high results were attained, overfitting was relatively low; with more computational resources, k-fold and leave-one-out cross validation are suggested as future work to achieve more accurate measures of variance within classification. \n\nThese results show that speaker classification can be improved by exposing the network to synthetic data produced by both supervised and attention-based models and then transferring the weights to the initial problem. The classical approach without synthetic data exposure scored lower in all cases but five, and even in those five cases the subjects concerned still achieved their absolute best results through synthetic data exposure. \n\n\\subsection{Comparison to other methods of speaker recognition}\n\\begin{table}[]\n\\caption{Comparison of the best models found in this work and other classical methods of speaker recognition (sorted by accuracy)}\n\\label{tab:bigcomparison}\n\\footnotesize\n\\begin{tabular}{@{}clllll@{}}\n\\toprule\n\\multicolumn{1}{l}{\\textbf{Subject}} & \\textbf{Model} & \\textbf{Acc.} & \\textbf{F-1} & \\textbf{Prec.} & \\textbf{Rec.} \\\\ \\midrule\n\\multirow{7}{*}{\\textbf{1}} & \\textit{\\textbf{DNN (LSTM TL 2500)}} & \\textbf{99.5} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{DNN (GPT-2 TL 5000)}} & 97.77 & 0.98 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{SMO}} & 97.71 & 0.98 & 0.95 & 0.95 \\\\\n & \\textit{\\textbf{Random Forest}} & 97.48 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 97.47 & 0.97 & 0.97 & 0.97 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 82.3 & 0.87 & 0.96 & 0.82 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 78.96 & 0.84 & 0.953 & 0.77 \\\\ \\cmidrule(r){1-1}\n\\multirow{7}{*}{\\textbf{2}} & \\textit{\\textbf{DNN (LSTM TL 7500)}} & \\textbf{99.7} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{DNN (GPT-2 TL 7500)}} & 99.7 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{SMO}} & 98.94 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 98.33 & 0.98 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{Random Forest}} & 98.28 & 0.98 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 84.9 & 0.9 & 0.97 & 0.85 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 76.58 & 0.85 & 0.97 & 0.77 \\\\ \\cmidrule(r){1-1}\n\\multirow{7}{*}{\\textbf{3}} & \\textit{\\textbf{DNN (GPT-2 TL 10000)}} & \\textbf{99.47} & \\textbf{0.99} & \\textbf{0.99} & 
\\textbf{0.99} \\\\\n & \\textit{\\textbf{DNN (LSTM TL 2500)}} & 99.2 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{SMO}} & 99.15 & 0.99 & 0.99 & 0.98 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 98.85 & 0.99 & 0.99 & 0.98 \\\\\n & \\textit{\\textbf{Random Forest}} & 98.79 & 0.99 & 0.99 & 0.98 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 91.49 & 0.94 & 0.98 & 0.92 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 74.37 & 0.83 & 0.96 & 0.74 \\\\ \\cmidrule(r){1-1}\n\\multirow{7}{*}{\\textbf{4}} & \\textit{\\textbf{DNN (GPT-2 TL 10000)}} & \\textbf{99.56} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{DNN (LSTM TL 5000)}} & 99.22 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 98.66 & 0.99 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{SMO}} & 98.66 & 0.99 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{Random Forest}} & 98 & 0.98 & 0.98 & 0.98 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 95.53 & 0.96 & 0.98 & 0.96 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 88.74 & 0.92 & 0.97 & 0.89 \\\\ \\cmidrule(r){1-1}\n\\multirow{7}{*}{\\textbf{5}} & \\textit{\\textbf{DNN (GPT-2 TL 10000)}} & \\textbf{99.85} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{DNN (LSTM TL 10000)}} & 99.67 & 1 & 1 & 1 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 98.86 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Random Forest}} & 98.7 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{SMO}} & 98.6 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 90.55 & 0.94 & 0.98 & 0.9 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 88.95 & 0.93 & 0.98 & 0.89 \\\\ \\cmidrule(r){1-1}\n\\multirow{7}{*}{\\textbf{6}} & \\textit{\\textbf{DNN (GPT-2 TL 7500)}} & \\textbf{99.82} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{DNN (LSTM TL 2500)}} & 99.8 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 99.1 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Random Forest}} & 98.9 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{SMO}} & 98.86 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 90.52 & 0.94 & 0.98 & 0.9 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 89.27 & 0.93 & 0.98 & 0.89 \\\\ \\cmidrule(r){1-1}\n\\multirow{7}{*}{\\textbf{7}} & \\textit{\\textbf{DNN (LSTM TL 10000)}} & \\textbf{99.91} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} & \\textbf{$\\sim$1} \\\\\n & \\textit{\\textbf{DNN (GPT-2 TL 5000)}} & 99.86 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n & \\textit{\\textbf{SMO}} & 99.4 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Logistic Regression}} & 99.13 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Random Forest}} & 99 & 0.99 & 0.99 & 0.99 \\\\\n & \\textit{\\textbf{Bayesian Network}} & 88.67 & 0.93 & 0.98 & 0.89 \\\\\n & \\textit{\\textbf{Naive Bayes}} & 86.9 & 0.91 & 0.98 & 0.87 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[]\n\\caption{Average performance of the chosen models for each of the 7 subjects.}\n\\label{tab:average-comparison}\n\\centering\n\\begin{tabular}{@{}lllll@{}}\n\\toprule\n\\textbf{Model} & \\textbf{Avg acc} & \\textbf{F-1} & \\textbf{Prec.} & \\textbf{Rec.} \\\\ \\midrule\n\\textit{\\textbf{DNN (LSTM TL)}} & 99.57 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n\\textit{\\textbf{DNN (GPT-2 TL)}} & 99.43 & $\\sim$1 & $\\sim$1 & $\\sim$1 \\\\\n\\textit{\\textbf{SMO}} & 98.76 & 0.99 & 0.98 & 0.98 \\\\\n\\textit{\\textbf{Logistic Regression}} & 98.63 & 0.99 & 0.98 & 0.98 
\\\\\n\\textit{\\textbf{Random Forest}} & 98.45 & 0.98 & 0.98 & 0.98 \\\\\n\\textit{\\textbf{Bayesian Network}} & 88.73 & 0.92 & 0.98 & 0.89 \\\\\n\\textit{\\textbf{Naive Bayes}} & 83.80 & 0.89 & 0.97 & 0.83 \\\\ \\bottomrule\n\\end{tabular}\n\\end{table}\n\nTable \\ref{tab:bigcomparison} shows a comparison of the models proposed in this paper to other state-of-the-art methods of speaker recognition, namely Sequential Minimal Optimisation (SMO), Logistic Regression, Random Forests, Bayesian Networks, and Naive Bayes. It can be observed that, although in some cases the scores are close, the DNNs fine-tuned from synthetic data generated by both the LSTM and GPT-2 achieve higher scores than the other methods. Finally, Table \\ref{tab:average-comparison} shows the average scores of the chosen models over the seven subjects. \n\n\\section{Conclusion and Future Work}\n\\label{sec4:futureworkconclusion}\nTo conclude, this work found strong success for all seven subjects in improving the speaker recognition classification problem by generating augmented data with both LSTM and OpenAI GPT-2 models. Future work aims to solidify this hypothesis by running the experiments for a larger range of subjects and comparing the patterns that emerge from the results. \n\nThe experiments in this work have provided a strong argument for the usage of deep neural network transfer learning from MFCCs synthesised by both LSTM and GPT-2 models for the problem of speaker recognition. One of the limitations of this study was hardware availability, since it was focused on hardware available to consumers today. The Flickr8k dataset was thus limited to 8,000 data objects and new datasets were created, which prevents a direct comparison to other speaker recognition works, which often operate on larger data and with hardware beyond consumer availability. It is worth noting that the computational cost of training LSTM and GPT-2 models to generate MFCCs is beyond that of the speaker recognition task itself, and as such, devices with access to TPU- or CUDA-based hardware would need to perform the task in the background over time. The tasks in question took several minutes on the two GPUs used in this work, for both LSTM and GPT-2, and as such are not instantaneous. As previously mentioned, although it was observed that overfitting did not occur too strongly, it would be useful in future to perform similar experiments with either K-fold or leave-one-out Cross Validation in order to achieve even more accurate representations of the classification metrics. In terms of future application, the optimised model would then be implemented within real robots and smarthome assistants through compatible software.\n\nIn this work, seven subjects were benchmarked with both a tuned LSTM and OpenAI's GPT-2 model. In future, as was seen with related works, a GAN could also be implemented in order to provide a third possible solution to the problem of data scarcity in speaker recognition, as well as other related speech recognition classification problems; bias is an open issue that has been noted for data augmentation with GANs~\\cite{hu2019exploring}, and as such, this issue must be studied if a GAN is implemented for problems of this nature. This work further argued for the hypothesis presented, that is, that data augmentation can aid in improving speaker recognition for scarce datasets. 
Following the 14 successful runs with LSTMs and GPT-2, the overall process followed by this experiment could be scripted and thus completely automated, allowing for the benchmarking of many more subjects to give a much more generalised set of results, which would be more representative of the general population. Additionally, samples spoken in many languages should also be considered in order to provide language generalisation, rather than just the English language spoken in multiple international dialects as in this study. Should generalisation be possible, future models may require only a small amount of fine-tuning to produce synthetic speech for a given person, rather than training from scratch as was performed in this work. More generally, the literature review shows that much of the prominent work is recent, leaving many fields of machine learning in which improvement via the methods described in this work has not yet been attempted. \n\n\\bibliographystyle{ieeetr}\n