\\section{Introduction}\n\nThe mode-coupling theory of the glass transition (MCT) has been conceived and elaborated to describe the rapid slowing-down of the structural relaxation in (supercooled) liquids and has become a well-established and very successful approach for the theoretical analysis of the slow dynamics in a variety of systems. In particular, mode-coupling theory and its extensions now include simple liquids~\\cite{Bengtzelius_1984,Gotze_1992,Gotze2009}, colloidal mixtures \\cite{Gotze1987,Bosse1987,Barrat_1990,Szamel:PRA_44:1991,Kob1995,Voigtmann2003}, non-spherical particles (rigid molecules) \\cite{Franosch1997a,Goetze2000c,Schilling:PRE_56_1997,Kaemmerer:PRE_56_1997,Kaemmerer:PRE_58_1998,Kaemmerer:PRE_58_1998b,Fabbian1999,Winkler2000,Chong1998,Chong2000}, random host structures \\cite{Krakoviack:PRL_94_2005,Krakoviack:PRE_84_2011}, confined systems \\cite{Lang2010,Lang2012,Mandal2014,Mandal2017a}, sheared colloidal liquids~\\cite{Fuchs2002,Brader2007,Brader2009,Fuchs2009}, driven granular fluids \\cite{Sperl:PhysRevLett.104.225701,Sperl:PhysRevE.87.022207}, active microrheology \\cite{Gazuz2009,Gruber2016,Gruber2019_2,Senbil2019} and active particles \\cite{Nandi2017,Liluashvili2017,Janssen_2019}.\n\nA central prediction of MCT for the dynamics of simple liquids is the universal relaxation behavior close to the ideal glass transition. In precisely defined asymptotic limits, MCT predicts two time fractals for any time-correlation function of observables in the vicinity of a critical plateau value. The \\emph{critical law} emerges directly as long-time relaxation towards this plateau as the glass-transition singularity is reached in the non-equilibrium state diagram. 
Already at small distances below the singularity this critical law becomes apparent, yet it is superseded by another power law, referred to as the von Schweidler law, characterizing the initial decay from the critical plateau~\\cite{Gotze_1990,Gotze_1992,Gotze2009,Franosch1997}. The two time fractals are intimately related and give rise to the first scaling law ($\\beta$-scaling) of the theory in terms of a universal scaling function. In particular, the exponents are related by a non-algebraic relation and the factorization theorem holds, i.e., the dynamical correlation functions asymptotically close to the transition are all governed by the same scaling function, with observable-dependent prefactors entering only via a critical amplitude. The governing equations for the scaling function were first derived by G{\\\"o}tze \\cite{Gotze_1990} and later extended to include the leading corrections \\cite{Franosch1997} for scalar quantities. The case of matrix-valued time-correlation functions \\cite{Voigtmann2019} and the scaling function for generalized mode-coupling theory \\cite{Szamel:PhysRevLett.90.228301,JansenReview2018,Janssen_2019_GMCT} have been elaborated only recently. \n\nA second scaling law ($\\alpha$-scaling) is predicted for the ultimate long-time relaxation characterizing the decay of time-correlation functions below the glass-transition singularity~\\cite{Gotze2009}. This long-time dynamics is stretched: it extends over many decades in time and provides the theoretical explanation for the phenomenological time-temperature superposition principle and the empirical Kohlrausch function. 
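The non-algebraic relation between the two exponents can be made explicit: the critical exponent $a$ of the critical law and the von Schweidler exponent $b$ are both fixed by a single exponent parameter $\lambda$ through $\Gamma(1-a)^2/\Gamma(1-2a) = \Gamma(1+b)^2/\Gamma(1+2b) = \lambda$. A minimal Python sketch solving these transcendental equations by bisection; the value $\lambda = 0.735$ is the often-quoted hard-sphere-like number and serves purely as an illustration:

```python
import math

def lam_from_a(a):
    # exponent parameter as a function of the critical exponent a in (0, 1/2)
    return math.gamma(1.0 - a)**2 / math.gamma(1.0 - 2.0*a)

def lam_from_b(b):
    # exponent parameter as a function of the von Schweidler exponent b > 0
    return math.gamma(1.0 + b)**2 / math.gamma(1.0 + 2.0*b)

def bisect_decreasing(f, target, lo, hi, tol=1e-12):
    # both functions above decrease monotonically from 1 towards 0
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

lam = 0.735  # illustrative hard-sphere-like exponent parameter
a = bisect_decreasing(lam_from_a, lam, 1e-9, 0.5 - 1e-9)
b = bisect_decreasing(lam_from_b, lam, 1e-9, 5.0)
print(a, b)  # a ≈ 0.312, b ≈ 0.583
```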
Both scaling laws have been observed in a series of photon-correlation experiments on colloids \\cite{VanMegen1993,VanMegen1994}, depolarized light scattering \\cite{Li:PRA_45_1992,Li:PRA_46_1992,Franosch1997,Singh1998} or dielectric spectroscopy \\cite{Lunkenheimer:PRL_77_1996,Schneider1999,Goetze2000} on supercooled liquids as well as in computer simulations of simple mixtures \\cite{Kob1994,Horbach2008}, water \\cite{Gallo:PRL_76_1996,Sciortino:PRE_54_1996,Sciortino:PRE_56_1997} or silica glasses \\cite{Sciortino:PRL_86_2001,Horbach1998,Voigtmann2006}. \n\nMany of the above-mentioned extensions to MCT, viz. rigid molecules, confined systems, microrheology and active particles, have in common that the relaxation of \\gj{identical} particles in a system is governed by distinct decay channels for different degrees of freedom. For molecules and active particles these are the rotational and translational degrees of freedom~\\cite{Franosch1997a,Chong1998,Fabbian1999,Chong2000,Winkler2000,Nandi2017,Liluashvili2017}, in confined systems the directions parallel and perpendicular to the wall~\\cite{Lang2010,Lang2012,Mandal2014,Mandal2017a} and in active microrheology the directions along and perpendicular to the active force~\\cite{Gruber2016,Gruber2019_2}. The emergence of multiple decay channels changes the overall mathematical structure of the underlying equations of motion, and it is not obvious whether the $\\beta $-scaling equation carries over to these situations.\n\nFor the case of a single molecule dissolved in a simple liquid, the validity of the factorization theorem has been proved \\footnote{See Ref.~[\\citen{Franosch1997a}] and A.~P.~Singh, \\emph{PhD thesis}, Technische Universit{\\\"a}t M{\\\"u}nchen; 1998}, stating that the critical dynamics can be factorized into a (matrix-valued) critical amplitude and a scalar, time-dependent function, called the $ \\beta$-correlator. We will encounter this theorem later as a result of the first-order expansion. 
\n\nFor the collective dynamics in situations where parallel relaxation plays a role, asymptotic expansions have been used to determine critical amplitudes \\cite{Winkler2000,Rinaldi2001} (based on the factorization theorem), yet no theoretical justification of the $ \\beta $-scaling equation was presented. In fact, a recent asymptotic analysis for a particle with constant pulling force in a simple liquid (active microrheology) \\cite{Gruber2019_2} showed that in this special case a mixed continuous\/discontinuous transition could be observed, but no two-step relaxation scenario. A rigorous calculation is thus required to establish the $\\beta$-scaling regime for the recent extensions of MCT mentioned above.\n\nThe purpose of this work is to derive the $ \\beta $-scaling equation for mode-coupling theories with multiple decay channels from an asymptotic analysis. First, we derive in Sec.~\\ref{sec:analysis} the evolution equations for the structural relaxation and show that new terms arise in the asymptotic expansion due to the existence of distinct decay channels. Using the scale invariance of the scaling equation, we suggest a straightforward rescaling of the governing dynamical quantities to recover the original form of the $\\beta$-scaling equation. To validate the calculations we introduce in Sec.~\\ref{sec:schematic} a schematic model with two decay channels, based on the Bosse-Krieger model \\cite{Krieger1987}. We perform a detailed analysis of the critical dynamics of this schematic model and evaluate the results using the previously derived asymptotic scaling laws. We summarize and conclude in Sec.~\\ref{sec:conclusion}. \n\n\\section{Asymptotic analysis and scaling equations}\n\\label{sec:analysis}\n\nIn the first part of this section, we will recapitulate the mode-coupling equations for systems with multiple decay channels as they were presented in Ref.~[\\citen{Lang_2013}]. 
We will focus on identifying the introduced quantities using the example of a simple liquid in confined geometry \\gj{and subsequently discuss how the formalism can be generalized, for example to describe molecular liquids\\cite{Schilling:PRE_56_1997,Kaemmerer:PRE_58_1998,Kaemmerer:PRE_58_1998b,Fabbian1999}}. We will then write down the equations for structural relaxation close to the glass transition and perform an asymptotic expansion to derive the factorization theorem and the $ \\beta $-scaling equation. For readers who are not interested in the technical details, we discuss the most important results of the asymptotic analysis in Sec.~\\ref{sec:discussion}.\n\n\\subsection{\\gj{Evolution equations of a simple liquid in confinement}}\n\n\\gj{Let us consider a system of $ N $ identical particles, confined in the $z$-direction between two parallel hard walls \\cite{Lang2012}. The system can be described by the microscopic particle density,\n\\begin{equation}\n\\rho(\\bm{r},z,t) = \\sum_{i=1}^{N} \\delta \\left[\\bm{r} - \\bm{r}_i(t)\\right] \\delta\\left[ z-z_i(t) \\right],\n\\end{equation}\nwith $ \\bm{r} = (x,y) $. Due to translational symmetry in the directions parallel to the walls, the averaged density $ \\left\\langle \\rho (\\bm{r},z,t)\\right\\rangle = n(z) $ depends only on the $z$-coordinate. To characterize the dynamics of the microscopic system we introduce the van Hove correlation function,\n\\begin{equation}\nG(|\\bm{r}-\\bm{r}'|,z,z',t) = n_0^{-1} \\left\\langle \\delta \\rho (\\bm{r},z,t)\\delta \\rho (\\bm{r}',z',0)\\right\\rangle,\n\\end{equation}\nas the time-correlation function of the microscopic density fluctuations, $ \\delta \\rho (\\bm{r},z,t) = \\rho (\\bm{r},z,t) - n(z) $, normalized by the area density $ n_0 $. 
It is now natural and numerically convenient to introduce density modes as the Fourier transform of the particle density, $ \\rho_\\mu(\\bm{q},t)=\\sum_{i=1}^N \\exp[{\\rm i}Q_\\mu z_i(t)]\\exp[{\\rm i}\\bm{q}\\cdot\\bm {r}_i(t)] $. The modes perpendicular to the confining direction are discrete, $ Q_\\mu = 2 \\pi \\mu \/ L $, in contrast to the continuous wave vectors $ \\bm{q} $ in the lateral direction. Here, $ L $ is the accessible slit width. From this we can define the generalized intermediate scattering function (ISF),\n\\begin{equation}\\label{key}\nS_{\\mu\\nu}(q,t)= \\frac{1}{N} \\left \\langle \\rho_\\mu(\\bm{q},t)^* \\rho_\\nu(\\bm{q},0) \\right \\rangle.\n\\end{equation}\nBy Fourier transformation one immediately finds that the ISF is nothing but the Fourier transform of the above-introduced van Hove correlation function \\cite{Lang2012},\n\\begin{align}\nS_{\\mu \\nu}(q,t) = \\int_{-L\/2}^{L\/2} \\text{d}z\\int_{-L\/2}^{L\/2} \\text{d}z' \\int_{A}^{} \\text{d}(\\bm{r}-\\bm{r}')G(|\\bm{r}-\\bm{r}'|,z,z',t)\n \\exp \\left[-\\textrm{i}(Q_\\mu z - Q_\\nu z') \\right] e^{-\\textrm{i}\\bm{q}\\cdot(\\bm{r}-\\bm{r}')}.\n\\end{align} }\n\nChoosing the density modes $ \\left\\{ \\rho_\\mu(\\bm{q},t) \\right\\} $ as the set of distinguished variables, the Zwanzig-Mori projection operator formalism\\cite{Zwanzig2001,Gotze2009,Hansen:Theory_of_Simple_Liquids} yields the equations of motion for the intermediate scattering function,\n\\begin{equation}\\label{eq:eom1}\n\\dot{\\mathbf{S}}(t)+\\int_0^t \\mathbf{K}(t-t')\\mathbf{S}^{-1}\\mathbf{S}(t') \\text{d}t' =0.\n\\end{equation}\nHere, we have introduced the matrix notation $ \\left[ \\mathbf{S}(t) \\right]_{\\mu \\nu} = S_{\\mu\\nu}(q,t) $ and the generalized structure factor $ \\mathbf{S} = \\mathbf{S}(t=0) $. Occasionally, the dependence on the wavevector $\\bm{q}$ will be suppressed in cases where it serves merely as a parameter. 
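The generalized ISF can be sampled directly from particle configurations. The following Python sketch estimates the equal-time value $S_{\mu\nu}(q,t=0)$ from a set of statistically independent frames; the array layout and parameters are illustrative assumptions, and time-dependent correlations would additionally require time-displaced configurations:

```python
import numpy as np

def isf_equal_time(frames, q, L, mu_max=2):
    # frames: list of (N, 3) arrays of particle coordinates (x, y, z), |z| < L/2
    # returns S_{mu nu}(q, t=0) = <rho_mu(q)^* rho_nu(q)> / N for mu, nu = -mu_max..mu_max
    mus = np.arange(-mu_max, mu_max + 1)
    S = np.zeros((len(mus), len(mus)), dtype=complex)
    for frame in frames:
        r, z = frame[:, :2], frame[:, 2]
        phase_lateral = np.exp(1j * q * r[:, 0])          # lateral wave vector chosen along x
        rho = np.array([np.sum(np.exp(1j * 2*np.pi*mu/L * z) * phase_lateral)
                        for mu in mus])                    # density modes rho_mu(q)
        S += np.outer(rho.conj(), rho)
    return S / (frames[0].shape[0] * len(frames))
```

By construction the estimate is a Hermitian, positive-semidefinite matrix for every wavenumber, as required for a correlation function.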
\nThe \\emph{a priori} unknown memory kernel $ \\left[\\mathbf{K}(t)\\right]_{\\mu\\nu} = K_{\\mu\\nu}(q,t) $ defines the non-Markovian dynamics of the ISF and corresponds formally to the time-correlation function of the time derivatives of the density fluctuations, $ \\dot{\\rho}_\\mu(\\bm{q}) $, albeit with projected dynamics. More precisely, it is the correlation function of $ \\dot{\\rho}_\\mu(\\bm{q},0) $ with $ e^{(1-\\mathcal{P})\\mathcal{L}t } \\dot{\\rho}_\\mu(\\bm{q},0) $, where $ \\mathcal{L} $ is the standard Liouville operator and $ \\mathcal{P} $ the Zwanzig-Mori projection operator \\cite{Zwanzig2001,Gotze2009}.\n\nTo derive an expression for the memory kernel, we use the fact that the density modes fulfill the continuity equation,\n\\begin{equation}\\label{eq:continuity}\n\\dot{\\rho}_\\mu(\\bm{q},t) = { \\rm i} \\sum_{\\alpha=1}^m q_\\mu^\\alpha j^\\alpha_\\mu(\\bm{q},t),\n\\end{equation}\nwhere the superscript $\\alpha$ is referred to as the channel index and the $ q_\\mu^\\alpha $ define the fundamental couplings of the current channel $ j^\\alpha_\\mu(\\bm{q},t) $ to the density modes. Since we are interested in systems with multiple relaxation channels, we have generalized the standard continuity equation such that the currents split into $ {m} \\in \\mathbb{N} $ distinct contributions. In the case of the slit geometry the decay channels correspond to the currents parallel and perpendicular to the walls, and we have $ q_\\mu^\\alpha = q\\delta_{\\alpha \\parallel} + Q_\\mu \\delta_{\\alpha \\perp} $. This splitting is physically motivated since we expect the relaxation dynamics to be significantly different in the directions parallel and perpendicular to the walls. 
From the above continuity equation we conclude that also the memory kernel naturally splits into multiple decay channels,\n\\begin{equation}\\label{eq:contract}\nK_{\\mu \\nu}(q,t) = \\left[ \\mathcal{C} \\{\\bm{\\mathcal{K}} \\} \\right]_{\\mu \\nu} \\coloneqq \\sum_{\\alpha=1}^{m}\\sum_{\\beta=1}^{m} q_\\mu^\\alpha \\mathcal{K}_{\\mu \\nu}^{\\alpha \\beta}(q,t) q_\\nu^\\beta.\n\\end{equation}\nHere, the generalized memory kernel $\\left[\\bm{\\mathcal{K}}(t)\\right]^{\\alpha\\beta}_{\\mu\\nu}= \\mathcal{K}_{\\mu\\nu}^{\\alpha\\beta}(q,t)$ carries in addition to the mode indices $\\mu,\\nu$ also the channel indices $\\alpha,\\beta \\in \\{1,\\ldots,m\\}$. Generally, we will denote matrix-valued objects with mode and channel indices by calligraphic letters. The conventional current kernel $\\mathbf{K}(t)$ is obtained as the \\emph{contraction} $ \\mathcal{C} \\{\\bm{\\mathcal{K}}(t) \\} $ of the generalized current kernel. We can perform a second Zwanzig-Mori projection step using the current modes $\\{j_\\mu^\\alpha(\\bm{q},t)\\}$ as distinguished variables and derive evolution equations for the current kernels,\n\\begin{equation}\\label{eq:eom2}\n\\dot{\\bm{\\mathcal{K}}}(t) + \\bm{\\mathcal{J}}\\bm{\\mathcal{D}}^{-1}\\bm{\\mathcal{K}}(t)+ \\int_0^t \\bm{\\mathcal{J}}\\bm{\\mathcal{M}}(t-t')\\bm{\\mathcal{K}}(t') \\text{d}t'= 0,\n\\end{equation}\nwith the static current correlator $ {\\mathcal{J}}^{\\alpha \\beta}_{\\mu \\nu}(q) \\coloneqq N^{-1} \\left\\langle j^\\alpha_\\mu(\\bm{q},0)^*j^\\beta_\\nu(\\bm{q},0) \\right\\rangle ={\\mathcal{K}}^{\\alpha \\beta}_{\\mu \\nu}(q,t=0) $. Here, the matrix $\\bm{\\mathcal{J}} \\succ 0$ is positive definite, while the instantaneous damping $ \\bm{\\mathcal{D}}^{-1} \\succeq 0$ is positive semidefinite. The new memory kernel $\\bm{\\mathcal{M}}(t)$ formally corresponds to the correlation function of generalized forces with the further-reduced time evolution. 
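The contraction $\mathcal{C}\{\cdot\}$ just defined preserves positivity: writing $K_{\mu\nu} = \sum_{\alpha\beta} q_\mu^\alpha \mathcal{K}^{\alpha\beta}_{\mu\nu} q_\nu^\beta$ as a congruence transformation shows that a positive-semidefinite generalized kernel yields a positive-semidefinite contracted kernel. A small numerical sketch (the `(m, m, M, M)` array layout and the slit-geometry couplings are illustrative choices):

```python
import numpy as np

def contract(Kcal, qc):
    # K_{mu nu} = sum_{alpha, beta} q_mu^alpha Kcal^{alpha beta}_{mu nu} q_nu^beta
    # Kcal: shape (m, m, M, M); qc[alpha, mu] = q_mu^alpha
    return np.einsum('am,abmn,bn->mn', qc, Kcal, qc)

# slit-geometry couplings for m = 2 channels: q_mu^par = q, q_mu^perp = Q_mu = 2*pi*mu/L
M, q, L = 4, 1.5, 1.0
qc = np.stack([np.full(M, q), 2.0*np.pi*np.arange(M)/L])
```

For $m=1$ the contraction reduces to the familiar multiplication by $q^2$ of single-channel MCT.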
\n\nThe mode-coupling theory approach is now to approximate the force kernel $ \\bm{\\mathcal{M}}(t) $ as a bilinear functional of the intermediate scattering functions,\n\\begin{equation}\\label{key}\n\\mathcal{M}_{\\mu \\nu}^{\\alpha \\beta}(q,t) = \\mathcal{F}^{\\alpha \\beta}_{\\mu \\nu} \\left[ \\mathbf{S}(t),\\mathbf{S}(t);q \\right],\n\\end{equation}\nwith,\n\\begin{align}\\label{eq:memory_mct}\n&\\hspace*{-0.4cm}\\mathcal{F}^{\\alpha \\beta}_{\\mu \\nu} \\left[ \\mathbf{E},\\mathbf{F};q \\right] = \\frac{1}{4N} \\sum_{\\bm{q}_1,\\bm{q}_2=\\bm{q}-\\bm{q}_1} \\sum_{\\substack{\\mu_1,\\mu_2\\\\\\nu_1,\\nu_2}} \\mathcal{Y}^\\alpha_{\\mu \\mu_1\\mu_2}(\\bm{q},\\bm{q}_1,\\bm{q}_2) \\\\ &\\hspace*{-0.9cm}\\times(E_{\\mu_1\\nu_1}(q_1)F_{\\mu_2\\nu_2}(q_2)+F_{\\mu_1\\nu_1}(q_1)E_{\\mu_2\\nu_2}(q_2))\\mathcal{Y}^\\beta_{\\nu \\nu_1\\nu_2}(\\bm{q},\\bm{q}_1,\\bm{q}_2)^*,\\nonumber\n\\end{align}\nwhere the vertices $ \\mathcal{Y}^\\alpha_{\\mu \\mu_1\\mu_2}(\\bm{q},\\bm{q}_1,\\bm{q}_2) $ depend on the specific system that is described with this kind of MCT, such as confined liquids \\cite{Lang2012} or molecules \\cite{Schilling:PRE_56_1997}. Here, we only need to assume that the vertices are smooth functions of the control parameters. We will anticipate the thermodynamic limit; for the moment, however, we assume that the wave vectors are discrete. Additionally, both the wave vectors and the modes will be truncated, i.e., the matrices are finite-dimensional. 
\n\nTo find a set of integro-differential equations similar to the ones used in previously performed asymptotic analyses for a single decay channel \\cite{Franosch1997,Voigtmann2019} we introduce the Laplace transformation of a time-dependent matrix $ \\mathbf{A}(t) $,\n\\begin{equation}\\label{key}\n\\text{LT} \\left\\{ \\mathbf{A}(t) \\right\\}(z) = \\hat{\\mathbf{A}}(z) := {\\rm i} \\int_0^\\infty \\mathbf{A}(t) e^{{\\rm i} zt} \\text{d}t,\n\\end{equation}\nand rewrite Eqs.~(\\ref{eq:eom1}) and (\\ref{eq:eom2}),\n\\newcommand{\\disphat}[2][3mu]{\\hat{#2\\mkern#1}\\mkern-#1}\n\\begin{align}\n\\hat{\\mathbf{S}}(z) &= - [z\\mathbf{S}^{-1}+\\mathbf{S}^{-1}\\hat{\\mathbf{K}}(z)\\mathbf{S}^{-1}]^{-1},\\label{eq:eom1_Laplace}\\\\\n\\label{eq:eom2_Laplace}\n\\hat{\\bm{\\mathcal{K}}}(z) &= - [z\\bm{\\mathcal{J}}^{-1}+{\\rm i}\\bm{\\mathcal{D}}^{-1}+\\disphat{\\bm{\\mathcal{M}}}(z)]^{-1}.\n\\end{align}\nWith this we can define an effective memory kernel, $ \\hat{\\mathbf{M}}(z) $, as,\n\\begin{equation}\\label{eq:def_meff}\n\\hat{\\mathbf{K}}(z) \\eqqcolon -\\left[ z\\mathbf{J}^{-1} + {\\rm i}\\mathbf{{D}}^{-1} + \\hat{\\mathbf{M}}(z) \\right]^{-1},\n\\end{equation}\nwith static current correlator $ \\mathbf{{J}} = \\mathbf{K}(t=0)= \\mathcal{C}\\{\\bm{\\mathcal{J}}\\} \\succ 0$ and effective damping matrix $ \\mathbf{{D}}^{-1} = \\mathbf{J}^{-1}\\mathcal{C}\\{ \\bm{\\mathcal{J}}\\bm{\\mathcal{D}}^{-1} \\bm{\\mathcal{J}}\\} \\mathbf{J}^{-1} \\succeq 0 $.\nOne can show that $\\hat{\\mathbf{M}}(z)$ indeed shares all the properties of a matrix-valued correlation function \\cite{Franosch_2014}; in particular, its spectrum is non-negative. Here, it corresponds precisely to the force kernel provided that the second Zwanzig-Mori step is performed without splitting the currents. 
\n\nFor the ISF in the time domain we therefore find the standard generalized harmonic oscillator equation for matrix-valued correlation functions, \n\\begin{align}\\label{eq:eom_eff}\n\\mathbf{J}^{-1}\\ddot{\\mathbf{S}}(t) &+\\mathbf{{D}}^{-1} \\dot{\\mathbf{S}}(t)+\\mathbf{S}^{-1}{\\mathbf{S}}(t) + \\int_{0}^{t} \\mathbf{M}(t-t') \\dot{\\mathbf{S}}(t') \\text{d}t' = 0, \n\\end{align}\nsubject to the initial conditions $\\mathbf{S}(t=0) = \\mathbf{S}, \\dot{\\mathbf{S}}(t=0)=0$. \nThe introduction of the effective memory kernel, $ \\mathbf{M}(t) $, has also proved useful for deriving a stable numerical integrator \\cite{Chong2000,Gruber2019_2}. A similar scheme will be introduced for the schematic model in Sec.~\\ref{sec:schematic}.\n\n\\subsection{\\gj{Generalized evolution equations with parallel relaxation}}\n\n\\gj{The equations of motion (\\ref{eq:eom1}), (\\ref{eq:contract}), (\\ref{eq:eom2}) and (\\ref{eq:memory_mct}) form a closed set of equations that can describe systems beyond the example of simple liquids in confined geometry. Generally, the mode index $ \\mu $ may refer to internal degrees of freedom (such as orientations) or other broken symmetries. This includes systems like aspherical particles such as molecules (where the angular dependence of the microscopic densities is expanded in spherical harmonics), active microrheology or active particles. In these systems parallel relaxation stems from different relaxation channels for translational and rotational motion. Note that although the mathematical structure of these equations, including matrix-valued correlation functions, is similar to previously published formalisms on mixtures (corresponding to $m=1$), we consider identical particles in this manuscript (and focus on $m>1$). 
The formal similarity, however, allows us to adapt proof strategies presented in Ref.~\\citen{Voigtmann2019}.}\n\n\\subsection{Glass form factors}\n\\label{sec:glass_form}\nThe long-time limit of the correlation functions (for each wavenumber $q$)\n\\begin{equation}\\label{key}\n\\mathbf{F} \\coloneqq \\lim\\limits_{t\\rightarrow \\infty} \\mathbf{S}(t),\n\\end{equation}\nis referred to as the glass form factor. Solutions with vanishing glass form factor are called ergodic or liquid, while non-vanishing glass form factors correspond to glass states. These long-time limits exist within the mode-coupling approximation under mild conditions even for Newtonian dynamics\\cite{Franosch_2014} and rather obviously for overdamped dynamics\\cite{Lang_2013}. In the case of matrix-valued correlation functions the glass form factors are necessarily positive-semidefinite matrices for each wavenumber~\\cite{Franosch2002}. \n\nThe glass form factors $\\mathbf{F}$ can be calculated without solving for the complete dynamics. The arguments can be literally transferred from the case of a single decay channel. A non-vanishing glass form factor implies a pole in the Laplace transform, $\\hat{\\mathbf{S}}(z) = -\\mathbf{F}\/z + \\{\\text{smooth}\\}$, and the representation of the current kernel in terms of the effective memory kernel, Eq.~\\eqref{eq:def_meff}, shows that \n\\begin{align}\\label{eq:nonergodic1}\n \\mathbf{S} - \\mathbf{F} = [ \\mathbf{S}^{-1} + \\mathbf{N} ]^{-1}\n ,\n\\end{align}\nwhere $\\mathbf{N} = - \\lim_{z\\to 0} z \\hat{\\mathbf{M}}(z) = \\mathbf{M}(t\\to\\infty)$ corresponds to the long-time limit of the effective memory kernel. The abbreviation $\\tilde{\\mathbf{S}} := \\mathbf{S}-\\mathbf{F}\\succ 0$ for the left-hand side of Eq.~\\eqref{eq:nonergodic1} turns out to be useful for the further discussion. 
\nAssuming that all memory kernels reach a positive-definite long-time limit~\\cite{Franosch2002,Lang2012}, $ \\bm{\\mathcal{F}}[\\mathbf{F},\\mathbf{F}] \\succ 0 $, the contraction of the current kernel reveals that \n\\begin{align}\\label{eq:nonergodic2}\n\\mathbf{N} = \\mathbf{N}[\\mathbf{F}] = \\left[ \\mathcal{C}\\{ \\bm{\\mathcal{{F}}}[\\mathbf{F},\\mathbf{F}]^{-1} \\} \\right]^{-1}. \n\\end{align}\nMore generally, $\\mathbf{N} = \\mathbf{N}[\\mathbf{E},\\mathbf{F}]$ can be viewed as a functional with two distinct entries in the mode-coupling functional $\\bm{\\mathcal{F}}[\\mathbf{E},\\mathbf{F}]$. It has been shown in Ref.~[\\citen{Lang2012}] that $\\mathbf{N}[\\mathbf{E},\\mathbf{F}]$ maps positive-semidefinite matrices $\\mathbf{E} \\succeq 0, \\mathbf{F} \\succeq 0$ (for each $q$) to positive-semidefinite ones $\\mathbf{N} \\succeq 0$. As a consequence, the long-time limit can be obtained by a convergent iteration scheme of Eqs.~\\eqref{eq:nonergodic1}, \\eqref{eq:nonergodic2} starting with $\\mathbf{F}^{(0)} = \\mathbf{S}$. The solution also satisfies the maximum principle and corresponds to the long-time limit of the MCT equations. \n\nUpon changing the control parameters of the mode-coupling functional, the glass form factors will also change. The definition of the glass-transition singularity implies that these changes are not smooth for smooth changes in the control parameters. To discuss the stability of the solutions of Eqs.~\\eqref{eq:nonergodic1}, \\eqref{eq:nonergodic2}, the functional $\\tilde{\\mathbf{S}} \\mathbf{N}[\\mathbf{F}] \\tilde{\\mathbf{S}} $ is linearized at the fixed-point solution, i.e., the linear map $\\mathbf{C}$ is introduced such that $\\tilde{\\mathbf{S}}( \\mathbf{N}[\\mathbf{F}+\\delta\\mathbf{F}] - \\mathbf{N}[\\mathbf{F}]) \\tilde{\\mathbf{S}} = \\mathbf{C}[\\delta \\mathbf{F}]+ {\\cal O}(\\delta \\mathbf{F})^2$. 
From the explicit representation Eq.~\\eqref{eq:nonergodic2} one finds \n\\begin{align}\\label{eq:Mlinearized}\n\\mathbf{C}[\\delta\\mathbf{F}] = 2 \\tilde{\\mathbf{S}}\\mathbf{N}[\\mathbf{F}] \\mathcal{C} \\big \\{ \\bm{\\mathcal{{F}}}[\\mathbf{F},\\mathbf{F}]^{-1} \\bm{\\mathcal{{F}}}[\\delta \\mathbf{F},\\mathbf{F}] \\bm{\\mathcal{{F}}}[\\mathbf{F},\\mathbf{F}]^{-1} \\big\\} \\mathbf{N}[\\mathbf{F}] \\tilde{\\mathbf{S}} .\n\\end{align}\nThe linearization of Eq.~\\eqref{eq:nonergodic1} then yields the condition\n\\begin{align}\\label{eq:nonergodic_linearized}\n \\delta \\mathbf{F} = \\mathbf{C}[\\delta \\mathbf{F}] + \\Delta \\mathbf{N},\n\\end{align}\nwhere $\\Delta \\mathbf{N}[\\mathbf{F}]$ is the small change of the functional $\\mathbf{N}[\\mathbf{F}]$ due to changes of the control parameters when evaluated at the fixed-point solution $\\mathbf{F}$. Correspondingly, $\\mathbf{C}[\\delta \\mathbf{F}]$ is the proper generalization of the stability matrix discussed in MCT for scalars \\cite{Franosch2002,Gotze2009}.\n\nSince the functional $\\mathbf{N}[\\mathbf{F}]$ for multi-channel relaxation shares all the mathematical properties discussed in Ref.~[\\citen{Franosch2002}], the conclusions on the positive linear map $\\mathbf{C}[\\delta \\mathbf{F}]$ inferred from the Perron-Frobenius theorem remain valid. In particular, it displays a maximal non-degenerate eigenvalue $\\gj{e}\\leq 1$ with a positive-definite eigenvector. For eigenvalues $e<1$ the solution of Eq.~\\eqref{eq:nonergodic_linearized} depends smoothly on the control parameters, $\\delta \\mathbf{F} \\propto \\Delta \\mathbf{N}[\\mathbf{F}]$, while $e=1$ corresponds to the glass-transition singularity. 
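The role of the maximal eigenvalue can be illustrated with a scalar toy model: for the one-component schematic $F_2$ functional $m[f] = v f^2$ (a standard single-channel toy model, used here only as a hypothetical stand-in for the multi-channel functional), the stability matrix reduces to the number $e = (1-f)^2\, \mathrm{d}m/\mathrm{d}f$, which approaches unity as the transition at $v_{\rm c} = 4$ is approached from the glass side:

```python
def glass_form_factor(v):
    # maximal fixed point of f = v f^2 / (1 + v f^2), reached by iteration from f = 1
    f = 1.0
    for _ in range(10000):
        f = v*f*f / (1.0 + v*f*f)
    return f

def stability_eigenvalue(v):
    # scalar analogue of C: e = (1 - f)^2 * dm/df  with  m[f] = v f^2
    f = glass_form_factor(v)
    return (1.0 - f)**2 * 2.0*v*f

# e grows towards 1 as the glass-transition singularity at v_c = 4 is approached
print([round(stability_eigenvalue(v), 3) for v in (6.0, 5.0, 4.5, 4.05)])
# → [0.423, 0.553, 0.667, 0.889]
```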
\nThe corresponding glass form factors at the singularity are denoted by $\\mathbf{F}_{\\text{c}} \\succ 0$. Furthermore, the eigenvector corresponding to such a critical point is denoted by $\\mathbf{\\tilde{H}} \\succeq 0$, and we indicate the fact that the control parameters are evaluated at the glass-transition singularity by a subscript $\\text{c}$, hence \n\\begin{equation}\\label{eq:cfp}\n\\mathbf{C}_\\text{c}[\\mathbf{\\tilde{H}}] = \\mathbf{\\tilde{H}}. \n\\end{equation}\nFor the moment, this eigenvector is defined only up to positive multiples. \n\nAs mentioned above, in deriving the properties of the effective memory kernel we heavily relied on the assumption that all memory kernels reach a positive-definite long-time limit, $\\bm{\\mathcal{F}}[\\mathbf{F},\\mathbf{F}] \\succ 0$, such that the inverse in Eq.~\\eqref{eq:nonergodic2} exists. It is, however, conceivable that parts of the parallel relaxation structure display type-A phenomenology (i.e., a continuous increase of the long-time limit as the transition is reached) \\cite{Gruber2019_2} with vanishing glass form factor or that some of the relaxation channels remain ergodic \\cite{Sentjabrskaja2016}. These cases will have to be studied separately.\n\n\\subsection{Equations of structural relaxation}\n\nFor the asymptotic analysis we assume that there exists a time window on a time scale $ t_\\sigma \\gg t_0 $ where the correlator $ \\mathbf{S}(t) $ is close to $ \\mathbf{F}_\\text{c} $. Here, $ t_0 $ is an \\emph{a priori} unknown time scale describing the initial transient dynamics. In this regime close to kinetic arrest, the memory kernel $\\bm{\\mathcal{M}}(t)$ also remains almost constant, implying that $\\hat{\\bm{\\mathcal{M}}}(z)$ becomes large, dominating the term $z \\bm{\\mathcal{J}}^{-1} + {\\rm i} \\bm{\\mathcal{D}}^{-1}$. 
\nWe can therefore approximate \n\\begin{equation} \\label{eq:eom_K}\n\\hat{\\mathbf{K}}(z) \\approx - \\mathcal{C} \\left\\{ \\disphat{\\bm{\\mathcal{M}}}(z)^{-1} \\right\\} = - \\mathcal{C} \\Big\\{ \\text{LT} \\left[ \\bm{\\mathcal{F}} \\left[ \\mathbf{S}(t),\\mathbf{S}(t)\\right] \\right](z)^{-1} \\Big\\}.\n\\end{equation}\nFor a single decay channel, the contraction is not needed and we recover the equations considered for the analysis in Ref.~[\\citen{Voigtmann2019}] for matrix-valued correlation functions.\n\n\\subsection{Asymptotic expansion}\\label{Sec:Asymptotic_Expansion}\n\nThe strategy in this subsection is to expand the channel-resolved memory kernel $\\disphat{\\bm{\\mathcal{M}}}(z)$ in the window of the $\\beta$-relaxation and to adapt the steps of Ref.~[\\citen{Voigtmann2019}]. \n\nFirst, we will expand the intermediate scattering function $ \\mathbf{S}(t) $ close to the critical plateau $ \\mathbf{F}_\\text{c} $ on the divergent time scale $ t_\\sigma $. The precise definition of $t_\\sigma$ will be elaborated only at the end of this subsection. For rescaled times $ \\hat{t} = t\/t_\\sigma $ we thus identify $ \\mathbf{S}(t) - \\mathbf{F}_\\text{c} $ as a small quantity, which enables an asymptotic expansion. In the following, it is assumed that the underlying bifurcation scenario is of type $ A_2 $, i.e., a Whitney fold bifurcation, which is also the scenario discussed in the previous literature\\footnote{For detailed discussions of the bifurcation scenario and its consequences we refer the reader to Ref.~\\citen{Franosch1997} ($ l=2 $) and Ref.~\\citen{Gotze2002} ($ l\\geq 3 $)}. This leads to the ansatz for the asymptotic expansion,\n\\begin{equation}\\label{eq:ansatz_expansion2}\n{\\mathbf{S}}({t}) = \\mathbf{F}_\\text{c} + \\sqrt{\\left| \\sigma \\right|} \\mathbf{G}^{(1)}(t) + \\sum_{n=2}^{\\infty} \\left|\\sigma\\right|^{n\/2}\\mathbf{G}^{(n)}(t),\n\\end{equation}\nfor some \\emph{a priori} unknown separation parameter $ \\sigma $. 
\nAdditionally, we assume a regular variation of the control parameters,\n\\begin{equation}\\label{eq:ansatz_expansion3}\n\\mathbf{S} = \\mathbf{S}_\\text{c} + \\sigma \\mathbf{S}^{(1)} + \\mathcal{O}(\\sigma^2),\n\\end{equation}\nwhich applies similarly to the mode-coupling functional~$ \\bm{\\mathcal{F}} $. The procedure is now to expand $\\disphat{\\bm{\\mathcal{M}}}(z)$ in powers of $\\sqrt{|\\sigma|}$, then by Eq.~(\\ref{eq:eom_K}) also the current kernel $\\hat{\\mathbf{K}}(z)$, and finally with Eq.~(\\ref{eq:eom1_Laplace}) the density correlation function $\\hat{\\mathbf{S}}(z)$. For details of the following derivation we refer the reader to Refs.~[\\citen{Franosch1997,Voigtmann2019}], since we will mainly focus on the discussion of differences from previously published results.\n\n\\subsubsection{Zeroth order expansion}\n\n\\noindent To zeroth order in $ \\sqrt{\\left| \\sigma \\right|} $ we find an equation for the critical nonergodicity parameter, \n\\begin{equation}\\label{key}\n\\mathbf{F}_\\text{c} = \\left[ \\mathbf{S}_\\text{c}^{-1} + \\mathbf{S}_\\text{c}^{-1} \\mathbf{N}_\\text{c}^{-1} \\mathbf{S}_\\text{c}^{-1} \\right]^{-1}\n=\\mathbf{S}_\\text{c} - \\left[ \\mathbf{S}^{-1}_\\text{c} + \\mathbf{N}_\\text{c} \\right]^{-1}, \n\\end{equation}\nwhich is equivalent to Eq.~(11) of Ref.~[\\citen{Lang2010}]. The expression for $ \\mathbf{N}_\\text{c} $ is given by\n\\begin{equation}\\label{eq:Fc}\n\\mathbf{N}_\\text{c}^{-1}=\\mathcal{C}\\{ \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1} \\} .\n\\end{equation}\nAs shown in Sec.~\\ref{sec:glass_form}, the two equations above readily yield an iteration scheme that is guaranteed to converge to the long-time limit of the dynamic equations and thus to the physically desired solution (see also Refs.~[\\citen{Lang2012},\\citen{Lang_2013}]). 
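The iteration scheme has a direct scalar analogue that is easy to experiment with. With $S = 1$ and the schematic $F_2$ functional $N[f] = v f^2$ (only a hypothetical single-channel stand-in for the multi-channel functional), Eq.~\eqref{eq:nonergodic1} becomes $f = 1 - 1/(1 + v f^2)$, and iterating from $f^{(0)} = S$ converges monotonically to the maximal solution:

```python
def long_time_limit(v, max_iter=100000, tol=1e-12):
    # iterate f_{k+1} = S - [S^{-1} + N[f_k]]^{-1} with S = 1 and N[f] = v f^2
    f = 1.0  # f^(0) = S guarantees convergence to the maximal (physical) solution
    for _ in range(max_iter):
        f_new = 1.0 - 1.0/(1.0 + v*f*f)
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# discontinuous (type-B) transition at v_c = 4: f jumps from 0 to f_c = 1/2
print(long_time_limit(3.9), long_time_limit(4.1))  # ≈ 0.0 and ≈ 0.578
```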
\n\n\\subsubsection{First order expansion}\n\n\\noindent To first order we obtain (details of the expansion are presented in App.~\\ref{app:exp}),\n\\begin{equation}\\label{eq:eigenvector}\n\\mathbf{G}^{(1)}(t) - \\mathbf{C}_\\text{c}[\\mathbf{G}^{(1)}(t)]=0,\n\\end{equation}\nwith the linear map,\n\\begin{equation}\\label{key}\n \\mathbf{C}_\\text{c}[\\mathbf{G}^{(1)}(t)] \\coloneqq 2\\tilde{\\mathbf{S}}_\\text{c} \\mathbf{N}_\\text{c}^{(1)}[\\mathbf{G}^{(1)}(t),\\mathbf{F}_\\text{c}] \\tilde{\\mathbf{S}}_\\text{c},\n\\end{equation}\nand,\n\\begin{align}\\label{eq:FcG}\n\\mathbf{N}^{(1)}_\\text{c}[\\mathbf{E},\\mathbf{F}] = \\mathbf{N}_\\text{c} \\mathcal{C} \\big \\{ \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1} \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{E},\\mathbf{F}] \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1} \\big\\} \\mathbf{N}_\\text{c}.\n\\end{align}\nComparing Eq.~\\eqref{eq:eigenvector} with Eq.~\\eqref{eq:cfp} shows immediately that $ \\mathbf{G}^{(1)}(q,t) $ is proportional to the critical Perron-Frobenius eigenvector\\cite{Franosch1997,Gotze2009} $ \\tilde{\\mathbf{H}} $ for all times in the window of the $\\beta$-relaxation. The time dependence thus separates from the wave-vector and mode-index dependence, and we find \n\\begin{equation}\\label{eq:factorization}\n\\mathbf{G}^{(1)}(q,t) = \\tilde{\\mathbf{H}}(q) \\tilde{g}(\\hat{t}), \n\\end{equation}\nwhere we reinstated the dependence on rescaled times $\\hat{t} = t\/t_\\sigma$ in the $\\beta$-relaxation.\nEq.~(\\ref{eq:factorization}) is called the factorization theorem; it involves the critical amplitude $ \\tilde{\\mathbf{H}}(q) $ and the $\\beta$-correlator $ \\tilde{g}(\\hat{t}) $. It states that close to the critical point, on the time scale $ t_\\sigma $, all dynamical correlation functions can be rescaled such that they collapse onto a single universal master curve. 
In the following, we will fix this eigenvector uniquely using the normalizations,\n\\begin{align}\\label{eq:norm}\n\\text{tr}\\left( \\hat{\\mathbf{H}} \\tilde{\\mathbf{H}} \\right) &= 1,\\nonumber\\\\\n\\text{tr}\\left( \\hat{\\mathbf{H}} \\tilde{\\mathbf{H}} \\tilde{\\mathbf{S}}_\\text{c}^{-1} \\tilde{\\mathbf{H}} \\right) &= 1.\n\\end{align}\nThe multiplication of the matrices is to be understood separately for each wavenumber, while the trace is extended to account for the set of wavenumbers $\\text{tr}\\left(\\mathbf{A} \\right) = \\sum_q \\text{tr} \\mathbf{A}(q) $. We also introduced $ \\hat{\\mathbf{H}} $ as the corresponding left-eigenvector of the above discussed linear map, defined as $ \\text{tr}( \\hat{\\mathbf{H}} \\mathbf{f} ) = \\text{tr}( \\hat{\\mathbf{H}} \\mathbf{C}[\\mathbf{f}] )$ for any trial matrix $ \\mathbf{f} $.\\cite{Voigtmann2019}\n\n\\subsubsection{Second order expansion}\n\n\\noindent To second order we find,\n\\begin{align}\\label{eq:2nd_order}\n\\mathbf{G}^{(2)}({t}) &- \\mathbf{C}_\\text{c}[\\mathbf{G}^{(2)}({t})] = \\tilde{\\mathbf{S}}_\\text{c} \\mathbf{N}_\\text{c}^{(1)}[\\tilde{\\mathbf{H}},\\tilde{\\mathbf{H}}] \\tilde{\\mathbf{S}}_\\text{c}g(\\hat{t})^2 - 2\\tilde{\\mathbf{S}}_\\text{c}\\mathbf{N}_\\text{c}^{(1)}[\\tilde{\\mathbf{H}},\\mathbf{F}_\\text{c}] \\tilde{\\mathbf{H}} \\frac{\\text{d}}{\\text{d} \\hat{t}} (g \\ast g) (\\hat{t}) \\nonumber\\\\\n&+ \\tilde{\\mathbf{S}}_\\text{c} \\mathbf{A} \\tilde{\\mathbf{S}}_\\text{c} \\frac{\\text{d}}{\\text{d} \\hat{t}} (g \\ast g) (\\hat{t}) \n+\\tilde{\\mathbf{S}}_\\text{c}\\mathbf{S}_\\text{c}^{-1} \\left( \\mathbf{S} \\mathbf{N}_\\epsilon(\\mathbf{S}-\\mathbf{F}_\\text{c}) -\\mathbf{S}_\\text{c} \\mathbf{N}_\\text{c}\\tilde{\\mathbf{S}}_\\text{c} \\right)\/|\\sigma|,\n\\end{align}\nwith,\n\\begin{equation}\\label{eq:def_A}\n\\mathbf{A} = 4 \\left ( \\mathbf{N}^{(1)}_\\text{c}[\\tilde{\\mathbf{H}},\\mathbf{F}_\\text{c}] \\mathbf{N}_\\text{c}^{-1} 
\\mathbf{N}^{(1)}_\\text{c}[\\tilde{\\mathbf{H}},\\mathbf{F}_\\text{c}] -\\mathbf{N}_\\text{c}^{(2)}[\\tilde{\\mathbf{H}},\\tilde{\\mathbf{H}}] \\right),\n\\end{equation}\nand the definitions,\n\\begin{align}\n\\mathbf{N}_\\epsilon&=\\left[ \\mathcal{C} \\{ \\bm{\\mathcal{F}}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1} \\}\\right]^{-1}, \\label{eq:Mnoncrit}\\\\\n\\mathbf{N}_\\text{c}^{(2)}[\\mathbf{E},\\mathbf{F}] &= \\mathbf{N}_\\text{c} \\mathcal{C} \\big\\{ \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1} \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{E},\\mathbf{F}_\\text{c}] \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1}\n\\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F},\\mathbf{F}_\\text{c}] \\bm{\\mathcal{{F}}}_\\text{c}[\\mathbf{F}_\\text{c},\\mathbf{F}_\\text{c}]^{-1} \\big\\} \\mathbf{N}_\\text{c}.\\label{eq:M2}\n\\end{align}\nDetails of the expansion are presented in App.~\\ref{app:exp} and the factorization theorem (\\ref{eq:factorization}) has been used. One readily checks that the matrix $ \\mathbf{A} $ evaluates to zero if there is only a single relaxation channel. The last term in Eq.~(\\ref{eq:2nd_order}) collects the linear terms in $ \\left|\\sigma \\right|$ due to variations of the control parameters. 
We now follow the standard route utilizing the solubility conditions \\cite{Franosch1997,Voigtmann2019}, \n\\begin{equation}\\label{key}\n\\text{tr}\\big( \\hat{\\mathbf{H}} \\mathbf{G}^{(n)}({t}) - \\hat{\\mathbf{H}} \\mathbf{C}_\\text{c}[\\mathbf{G}^{(n)}({t})] \\big)= 0 = \\text{tr}\\big( \\hat{\\mathbf{H}} \\mathbf{I}^{(n)} \\big),\n\\end{equation}\nand identifying the different terms in Eq.~(\\ref{eq:2nd_order}) as,\n\\begin{align}\\label{key}\n\\sigma &= \\text{tr} \\left[ \\hat{\\mathbf{H}}\\tilde{\\mathbf{S}}_\\text{c}\\mathbf{S}_\\text{c}^{-1} \\left( \\mathbf{S} \\mathbf{N}_\\epsilon(\\mathbf{S}-\\mathbf{F}_\\text{c}) -\\mathbf{S}_\\text{c} \\mathbf{N}_\\text{c}\\tilde{\\mathbf{S}}_\\text{c} \\right) \\right],\\\\\n\\tilde{\\lambda} &= \\text{tr} \\left[ \\hat{\\mathbf{H}} \\tilde{\\mathbf{S}}_\\text{c} \\mathbf{N}_\\text{c}^{(1)}[\\tilde{\\mathbf{H}},\\tilde{\\mathbf{H}}] \\tilde{\\mathbf{S}}_\\text{c} \\right],\\\\\n\\Delta &= \\text{tr} \\left[ \\hat{\\mathbf{H}} \\tilde{\\mathbf{S}}_\\text{c} \\mathbf{A} \\tilde{\\mathbf{S}}_\\text{c} \\right], \\label{eq:Delta_term}\n\\end{align}\nto find,\n\\begin{align}\\label{eq:beta_scaling}\n\\left(1- \\Delta \\right) \\frac{\\text{d}}{\\text{d} \\hat{t}} (\\tilde{g} \\ast \\tilde{g}) (\\hat{t}) =\\tilde{\\lambda} \\tilde{g}(\\hat{t})^2 + \\text{sgn}\\, \\sigma,\n\\end{align}\nwith the convolution,\n\\begin{equation}\\label{key}\n(f \\ast g)(t) = \\int_0^t f(t-t^\\prime)g(t^\\prime)\\text{d}t^\\prime.\n\\end{equation}\nIn the case of a single decay channel, the new term vanishes, $ \\Delta=0 $, and the result reduces to the well-known $ \\beta $-scaling equation derived in Refs.~[\\citen{Gotze2002,Voigtmann2019,Franosch1997}]. 
To recover their result in the presence of multiple relaxation channels we employ \\emph{a posteriori} a rescaling,\n\\begin{align}\\label{key}\n{g}(\\hat{t}) = \\tilde{g}(\\hat{t})\\sqrt{1-\\Delta},\\\\\n{\\mathbf{H}}(q)= \\tilde{{\\mathbf{H}}}(q)\/\\sqrt{1-\\Delta},\\\\\n{\\lambda} = \\tilde{\\lambda}\/(1-\\Delta).\n\\end{align}\nThis is possible due to the scale invariance of the scaling equation and it also leaves the factorization theorem untouched. This eventually leads to the original $ \\beta $-scaling equation,\n\\begin{align}\\label{eq:beta_scaling2}\n \\frac{\\text{d}}{\\text{d} \\hat{t}} ({g} \\ast {g}) (\\hat{t}) ={\\lambda} {g}(\\hat{t})^2 + \\text{sgn}\\, \\sigma. \n\\end{align}\nThe scaling equation displays a power-law solution $ g(\\hat{t}) \\sim \\hat{t}^{-a} $ with exponent $0 < a \\leq 1\/2$ for short rescaled times $\\hat{t} \\ll 1$. For $\\text{sgn } \\sigma = -1$ a second power law with exponent $0 < b \\leq 1$ emerges at long rescaled times $\\hat{t}\\gg 1$, while the scaling function converges for long times to a finite value for $\\text{sgn } \\sigma = 1$. The exponent parameter \n $ \\lambda $ connects the exponents $ a,b $ of the $ \\beta $- and $ \\alpha $-relaxation processes via \\emph{G\\\"otze's exponent relation},\n\\begin{equation}\\label{key}\n\\frac{\\Gamma(1+b)^2}{\\Gamma(1+2b)} = \\lambda = \\frac{\\Gamma(1-a)^2}{\\Gamma(1-2a)}.\n\\end{equation}\nTo fix a unique solution of the scaling equation, the condition $g(\\hat{t} \\ll 1) = \\hat{t}^{-a}$ is imposed. \n\n\\subsubsection{Critical dynamics}\n\\label{ch:scaling}\n\nWe recall the most important conclusions for the critical dynamics close to the glass transition that can be drawn from Eqs.~(\\ref{eq:ansatz_expansion2}),(\\ref{eq:factorization}) and (\\ref{eq:beta_scaling2}) (see Refs.~[\\citen{Franosch1997,Gotze2009}] for extended derivations). 
\n\n\\begin{itemize}\n\t\\item For $ t \\gg t_0 $ and $ {t} \\ll t_\\sigma $ the short-time solution $ g(\\hat{t} \\ll 1) = \\hat{t}^{-a} $ sets $ t_\\sigma = t_0 \\left| \\sigma\\right|^{-1\/2a} $ and we find,\n\t\\begin{align}\\label{eq:crit_asymptote}\n\t\\mathbf{S}(q,t) &= \\mathbf{F}_\\text{c}(q) + \\mathbf{{H}}(q) (t\/t_\\sigma)^{-a} \\sqrt{\\left|\\sigma \\right|} + \\mathcal{O}(\\sqrt{\\left|\\sigma \\right|}^2),\\\\\n\t\\boldsymbol{\\chi}''(q,\\omega) &= \\mathbf{{H}}(q) \\Gamma(1-a)\\sin\\left(\\pi a\/2\\right)(\\omega t_\\sigma)^a\\sqrt{\\left|\\sigma \\right|}+ \\mathcal{O}(\\sqrt{\\left|\\sigma \\right|}^2) .\\label{eq:crit_asymptote_freq}\n\t\\end{align}\n\tHere, $ \\boldsymbol{\\chi}''(q,\\omega) = \\omega \\mathbf{S}''(q,\\omega) $ is the susceptibility spectrum, calculated from the correlation spectrum, $ \\mathbf{S}''(q,\\omega) = \\int_0^\\infty \\cos (\\omega t ) \\mathbf{S}(q,t) \\text{d}t. $\n\t\n\t\\item For $ \\sigma \\geq 0 $ and $ {t} \\gg t_\\sigma $ the asymptotic expansion of the glass form factor follows,\n\t\\begin{equation}\\label{eq:static_asymptote}\n\t\\mathbf{F}(q) = \\lim\\limits_{t\\rightarrow \\infty }\t\\mathbf{S}(q,t) = \\mathbf{F}_\\text{c}(q) + \\mathbf{{H}}(q) \\sqrt{\\frac{\\sigma}{1-\\lambda}}+ \\mathcal{O}(\\sqrt{\\left|\\sigma \\right|}^2).\t\n\t\\end{equation}\n\t\\item For $ \\sigma < 0 $ and $ {t} \\gg t_\\sigma $ we find the emergence of a second power law, $ g(\\hat{t} \\gg 1 ) = -B\\hat{t}^b $, which sets the $ \\alpha $-relaxation time $ t'_\\sigma = (t_0\/B^{1\/b})\\left| \\sigma \\right|^{-\\gamma} $, with $ \\gamma = 1\/2a + 1\/2b $. 
For the intermediate scattering function and the susceptibility spectrum we obtain,\n\t\\begin{align}\\label{eq:alpha_asymptote}\n\t\\mathbf{S}(q,t) &= \\mathbf{F}_\\text{c}(q) - \\mathbf{{H}}(q) (t\/t'_\\sigma)^{b}+ \\mathcal{O}(\\sqrt{\\left|\\sigma \\right|}^2),\\\\\n\t\\boldsymbol{\\chi}''(q,\\omega) &= \\mathbf{{H}}(q) \\Gamma(1+b)\\sin\\left(\\pi b\/2\\right)(\\omega t'_\\sigma)^{-b}+ \\mathcal{O}(\\sqrt{\\left|\\sigma \\right|}^2).\\label{eq:alpha_asymptote_freq}\n\t\\end{align}\n\\end{itemize}\n\n\\subsection{Discussion}\n\\label{sec:discussion}\n\n\nWe find that the existence of multiple decay channels has no fundamental influence on the asymptotic scaling laws which can be derived from various kinds of mode-coupling theories. This confirms that the standard scaling analysis can indeed also be applied to systems where multiple decay channels emerge naturally, such as molecules, confined fluids or active particles. This result has been anticipated before \\cite{Winkler2000}, but to the best of the authors' knowledge it has not yet been {shown explicitly}. \n\nThe microscopic expression of the exponent parameter differs from the case of the single relaxation channel due to a new contribution as highlighted in the term $\\Delta$, Eq.~\\eqref{eq:Delta_term}. \nThese non-trivial terms could also be hidden in the normalizations of the eigenvectors, Eq.~(\\ref{eq:norm}), at the price of lengthy expressions. In both cases the derivation shows that the difference introduced in Eq.~(\\ref{eq:def_A}), which is largest if the decay channels are very different, will have an influence on the emergent power-law exponents. \n\n\n\n\n\n\\section{Schematic model with two decay channels}\n\\label{sec:schematic}\n\nTo understand the emergent static and dynamic properties of mode-coupling theory, schematic models have proven to be very useful \\cite{Krieger1987,Gotze2009}. 
These models are characterized by a very small number of modes $ M $ (usually $ M \\leq 2 $) which significantly simplifies and accelerates the numerical solution of the equations. Inspired by the MCT equations for liquids in confinement, we suggest a schematic model to study the impact of multiple decay channels. We restrict ourselves to diagonal $ 2\\times 2 $ matrices with two distinct decay channels, leading to the equations of motion,\n \\begin{align}\n\\hat{S}_i(z) &= -\\left[z+\\hat{K}_i(z)\\right]^{-1}, \\qquad \\vspace*{3cm} i = 1,2,\\label{eq:mod1}\\\\\n\\hat{K}_i(z) &= Q^\\parallel \\hat{\\mathcal{K}}^\\parallel_i(z) + Q^\\perp \\hat{\\mathcal{K}}^\\perp_i(z),\\label{eq:mod2}\\\\\n\\hat{\\mathcal{K}}_i^\\alpha(z)&=-\\left[ z + {\\rm i} \\nu_i+ \\hat{\\mathcal{M}}^\\alpha_i(z)\\right]^{-1}, \\quad \\,\\,\\alpha = \\parallel,\\perp.\\label{eq:mod3}\n\\end{align}\nHere, Eqs.~(\\ref{eq:mod1}),(\\ref{eq:mod2}) and (\\ref{eq:mod3}), are the direct equivalents of Eqs.~(\\ref{eq:eom1_Laplace}),(\\ref{eq:contract}) and (\\ref{eq:eom2_Laplace}), respectively, where $ i \\equalhat (q,\\mu) $. For the memory kernels, we now use the Bosse-Krieger model\\cite{Krieger1987},\n \\begin{align}\n\\mathcal{M}_1^\\alpha(t)&= Q^\\alpha k\\left[\\delta^\\alpha S_1(t)^2+(1-\\delta^\\alpha)S_2(t)^2\\right] = Q^\\alpha \\tilde{\\mathcal{M}}_1^\\alpha(t)\\label{eq:schematic_mem1},\\\\\n\\mathcal{M}_2^\\alpha(t)&= Q^\\alpha krS_1(t)S_2(t) = Q^\\alpha \\tilde{\\mathcal{M}}_2^\\alpha(t)\\label{eq:schematic_mem2}.\n\\end{align}\nHere, the parameters $ 0 \\leq \\delta^\\parallel, \\delta^\\perp \\leq 1 $ and $ 0 < r < \\infty $ are assumed to be fixed and $ k $ is the control parameter. If one of the decay channels, $ Q^\\parallel, Q^\\perp, $ is zero, the schematic model reduces to the well-known single-channel Bosse-Krieger model \\cite{Krieger1987}. In the following, we will use $ \\delta^\\parallel = 0.9 $ and $ \\delta^\\perp = 0.1 $. 
The two decay channels therefore differ such that in parallel ``direction'' mode 1 couples stronger to itself than to mode 2, while in perpendicular ``direction'' it couples weakly to itself. To increase the stability and avoid unrealistic oscillations we include an instantaneous damping $ \\nu_1 > 0 $ and $\\nu_2 = 0$, as is commonly done in MCT \\cite{Franosch1997,Gotze2009}.\n\nEqs.~(\\ref{eq:mod1})-(\\ref{eq:mod3}) together with the expressions for the memory kernels Eq.~(\\ref{eq:schematic_mem1}) and Eq.~(\\ref{eq:schematic_mem2}) can be solved numerically by combining standard techniques for bulk MCT \\cite{Sperl2000} and methods that were recently suggested to solve MCT for active microrheology \\cite{Gruber2016,Gruber2019_2}. Details can be found in Appendix \\ref{ap:numerics}. In Table~\\ref{tab:parameters} the parameters for the model used in this work are summarized.\n\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{table}\n\t\\centering \\begin{tabular}{cc|cc|cc} \n\t\t$ \\delta^\\parallel $ & \\hphantom{1111} $ 0.9 $ \\hphantom{1111} & $ k^\\text{c} $ & 42.7021257192(1) & $ \\tilde{\\lambda} $ & \\hphantom{11} 0.6871 \\hphantom{11} \\\\\n\t\t\\rowcolor[gray]{0.925}\n\t\t$ \\delta^\\perp $ & $ 0.1 $ & $ F_1^\\text{c} $ & 0.7555& $ {\\lambda} $ & 0.7039 \\\\\n\t\t$ r $ & $ 0.075 $ & $ F_2^\\text{c} $ & 0.1734 & $a $ & 0.3322 \\\\\n\t\t\\rowcolor[gray]{0.925}\n\t\t$ Q^\\parallel $ & $ 1.0 $ & $ {H}_1 $ & 0.5177 & $ b $ & 0.6622\\\\ \n\t\t$ Q^\\perp $ & $ 1.0 $ & $ {H}_2 $ & 0.5664 &$ B $ & 0.6872 \\\\ \n\t\t\\rowcolor[gray]{0.925}\n\t\t$ \\nu_1 $ & $ 4.0 $ & $ \\Delta $ & -0.0244 & $ t_0 $ & 0.0371 \\\\\n\t\\end{tabular} \n\t\\caption{Summary of the parameters and critical properties of the schematic Bosse-Krieger model with two decay channels as defined in Eqs.~(\\ref{eq:mod1})-(\\ref{eq:schematic_mem2}). The parameter $ B $ was determined by linear interpolation of Tab.~3 in Ref.~[\\citen{Gotze_1990}]. 
}\n\t\\label{tab:parameters}\n\\end{table} \n\\renewcommand{\\arraystretch}{1.0}\n\n\\subsection*{Statics: Nonergodicity parameter}\n\n\\begin{figure}\n\t\\includegraphics[scale=1.0]{fig_plot_statics_full.eps}\n\t\\caption{Nonergodicity parameters $ F_i $ for various values of the {control-parameter distance} $ \\epsilon = (k-k^\\text{c})\/k^\\text{c}. $ The asymptotes were determined according to Eq.~(\\ref{eq:static_asymptote}). The inset shows the deviation from the critical nonergodicity parameters as a function of the separation parameter $ \\sigma $ in double-logarithmic representation. } \n\t\\label{fig:nonergo}\n\\end{figure}\n\nThe bifurcation of the nonergodicity parameter above the critical point is displayed in \nFig.~\\ref{fig:nonergo}. As anticipated, the curves can be described by the leading-order asymptote, $ F_i \\propto \\sqrt{\\epsilon} = \\sqrt{ (k-k^\\text{c})\/k^\\text{c}} $, for small control-parameter distance $\\epsilon$ to the transition (see also Eq.~(\\ref{eq:static_asymptote})). The inset provides numerical evidence that the expansion presented in the previous chapter indeed accurately describes the asymptotic behavior of a system with multiple decay channels. \n\n\\subsection*{Dynamics: Correlators and susceptibility}\n\n\\begin{figure}\n\t\\hspace*{-1.2cm}\\includegraphics[scale=0.95]{fig_plot_full_q1.eps}\\includegraphics[scale=0.95]{fig_plot_full_q2.eps}\n\t\\caption{ Time-dependent correlators $ S_1(t) $ (left) and $ S_2(t) $ (right). The different curves are calculated for control parameters $ \\epsilon=\\pm 10^{-n\/3} $ ($ n $ increases from left to right for $ \\epsilon<0 $ and from top to bottom for $ \\epsilon > 0 $ ). The critical correlator for $ k=k^\\text{c} $ ($ \\epsilon = 0 $) is displayed as a thick line and labeled by ``c''. } \n\t\\label{fig:time_full}\n\\end{figure}\n\nThe time dependences of the correlators $ S_i(t) $ are visualized in Fig.~\\ref{fig:time_full}. 
Both curves show qualitatively the same time dependence as known from bulk MCT, i.e. they are smooth functions of $ \\epsilon $ on any finite interval of time with divergent time scales, $ t_\\sigma $ and $ t_\\sigma^\\prime $, when approaching the critical point, $ \\epsilon \\rightarrow 0 $. For a detailed discussion of these general features we refer the reader to Ref.~[\\citen{Franosch1997}]. More quantitatively it can be observed that the Markovian contribution to the memory kernel leads to a damping of the correlator $ S_1(t) $ on a time scale $ \\nu_1^{-1} $, as expected from the equations of motion. This becomes even more apparent from the appearance of strong oscillations in the dynamics of correlator $ S_2(t) $ for which no Markovian damping was included.\n\n\\begin{figure}\n\t\\hspace*{-1.2cm}\\includegraphics[scale=0.95]{fig_plot_crit.eps}\\includegraphics[scale=0.95]{fig_plot_alpha.eps}\n\t\\caption{Time dependence of the critical correlators $ S_i(t) $ (left) and the $ \\alpha $-relaxation process (right). The asymptotes in the left panel ($ \\beta $-relaxation) were calculated according to Eq.~(\\ref{eq:crit_asymptote}). The time scale $ t_0 $ was determined by matching the asymptotic solution of $ S_1(t) $. The inset shows the convergence to the critical nonergodicity parameter in double logarithmic representation. Similarly, the right panel shows the $ \\alpha $-relaxation with the asymptotes calculated according to Eq.~(\\ref{eq:alpha_asymptote}) for a separation parameter $ \\sigma = 6.36 \\cdot 10^{-9} $ and thus $ t_\\sigma^\\prime=2.182\\cdot 10^{17} $. } \n\t\\label{fig:time_crit}\n\\end{figure}\n\nThe $ \\alpha$- and $ \\beta $-relaxation processes are shown separately and in more detail in Fig.~\\ref{fig:time_crit}, including the asymptotic scaling laws derived in Ch.~\\ref{ch:scaling}. The $ \\beta $-relaxation of the critical correlator is depicted in the left panel. 
For $ t > 10^2 $ there is no visible difference between the time fractal, $ g(\\hat{t}) = \\hat{t}^{-a} $, as derived in Eq.~(\\ref{eq:crit_asymptote}), and the correlator $ S_i(t,\\epsilon \\rightarrow +0 ) $. Here, we emphasize that the emergent power-law exponent $ a $ is a non-trivial result of the asymptotic expansion in a scenario with multiple decay channels presented in this manuscript. When approaching the critical point from below, a second relaxation process, called $ \\alpha $-relaxation, is observed (see right panel of Fig.~\\ref{fig:time_crit}). This decay from the plateau value $ F_i^\\text{c} $ is described by a second power law, $ g(\\hat{t}) = - B \\hat{t}^{b} $, which is commonly referred to as von Schweidler law \\cite{Gotze2009}. Interestingly, a distinct ``kink'' is observed in the $ \\alpha $-relaxation at $t\/t_\\sigma'\\approx 10$ that has not yet been reported for MCT dynamics. We will postpone its discussion to the next paragraph.\n\n\\begin{figure}\n\t\\hspace*{-1.2cm}\\includegraphics[scale=0.95]{fig_sus_full_q1.eps}\\includegraphics[scale=0.95]{fig_sus_full_q2.eps}\n\t\\caption{Frequency-dependent susceptibility $ \\chi_1^{\\prime \\prime}(\\omega) = \\omega {S}_1^{\\prime \\prime}(\\omega) $ (left) and $ \\chi_2^{\\prime \\prime}(\\omega) $ (right) ($ n $ increases from right to left)\n\t for the same parameters as in Fig.~\\ref{fig:time_full}. The dashed and dotted, black lines show Debye peaks, $ \\chi^{\\prime \\prime}_{\\text{D}_i}(\\omega) = 2 \\chi_{\\text{max}_i} \\omega \\tau_{ \\text{\\tiny D}_i}\/ \\left[ 1+ (\\omega \\tau_{ \\text{\\tiny D}_i} )^2 \\right] $, for mode 1 and 2, respectively ($ \\chi_{\\text{max}_1} = 0.26 $, $ \\tau_{ \\text{\\tiny D}_1} = 1.4\\cdot 10^{13} $, $ \\chi_{\\text{max}_2} = 0.06 $, $ \\tau_{ \\text{\\tiny D}_2}= 1.0\\cdot 10^{11} $). The dotted line is also included to the spectrum $ \\chi_1^{\\prime \\prime}(\\omega) $ for the sake of visualization. 
} \n\t\\label{fig:susc_full}\n\\end{figure}\n\n\\begin{figure}\n\t\\hspace*{-1.2cm}\\includegraphics[scale=0.95]{fig_sus_crit.eps}\\includegraphics[scale=0.95]{fig_sus_alpha.eps}\n\t\\caption{Susceptibility spectra $ \\chi_i^{\\prime \\prime}(\\omega) $ of the critical correlators (left) and the $ \\alpha $-relaxation processes (right) for the same parameters as in Fig.~\\ref{fig:time_crit}. The asymptotes are calculated according to Eqs.~(\\ref{eq:crit_asymptote_freq}) (left) and (\\ref{eq:alpha_asymptote_freq}) (right). } \n\t\\label{fig:susc_crit}\n\\end{figure}\n\nFrom the correlators discussed in the last paragraphs we also calculate the susceptibility spectra \\emph{via} Fourier transformation, relying on the modified Filon-Tuck algorithm \\cite{Abramowitz19070,Tuck1967}.\n The results are presented in Fig.~\\ref{fig:susc_full}. As expected from the two-step relaxation scenario, two distinct peaks are observed in the spectra for $ \\epsilon < 0 $. Both peaks are described by the critical spectra derived in the previous chapter (see Fig.~\\ref{fig:susc_crit}). The figure shows that the asymptotic power laws overlap very well with the numerical solution of the full MCT equations over several orders of magnitude in frequency. All of these findings are in accordance with similar observations in bulk MCT \\cite{Franosch1997}. There is, however, one remarkable feature in the calculated susceptibility spectra, viz. the shoulder in the low-frequency peak in $ \\chi^{\\prime\\prime}_1(\\omega) $, that is specific to our model and that corresponds to the observed ``kink'' in the correlation function. \nA similar feature has been identified as the Cole-Cole law in Ref.~\\cite{Sperl:PhysRevE.74.011503}, but in that case the ``kink'' emerges at frequencies above the von Schweidler power law and thus cannot explain the shoulder observed in our model. 
The low-frequency susceptibility spectrum predicted by MCT with a single decay channel can be described by an asymmetric peak (see also Fig.~\\ref{fig:susc_full}, right panel, since mode 2 has only one relaxation channel). The right flank of this peak is given by the von Schweidler law \nand the left side corresponds to a Debye peak. The emergence of an additional shoulder therefore seems to be intrinsic to multiple decay channels: \nThe correlator $ S_2(t) $ decays slightly faster than $ S_1(t) $ due to the smaller amplitude of $ F_2^\\text{c} $. We thus observe a faster relaxation of $ \\mathcal{M}^\\perp_1(t) $ compared to $ \\mathcal{M}^\\parallel_1(t) $ due to the asymmetry between $ \\delta^{\\parallel} $ and $ \\delta^{\\perp} $. Therefore, two overlapping peaks emerge in $ \\chi^{\\prime\\prime}_1(\\omega) $, directly connected to those two decay channels. To support this explanation we have included the corresponding Debye peaks in Fig.~\\ref{fig:susc_full}, showing that the resulting spectrum, $ \\chi^{\\prime\\prime}_1(\\omega) $, indeed emerges from two distinct relaxation processes. \n\n The observation of double peaks is an interesting finding in itself, not directly connected to mode-coupling theory, and could also be observable in simulations or experiments for systems where multiple, very different decay channels come into play.\n\n\n\\section{Summary and conclusion}\n\\label{sec:conclusion}\n\nAn asymptotic analysis for MCT with multiple decay channels has been elaborated based on rather mild assumptions. We have found that non-trivial terms arise in the expansion in the vicinity of the glass form factors, \nleading to a slightly different form of the $ \\beta $-scaling equation. Using the scale invariance of this equation, we showed that its original form and thus its universality can be recovered by a straightforward rescaling of the critical amplitude and the fundamental constant $ \\lambda $. 
The rescaling factor $ \\Delta $ is largest if there is a strong asymmetry between the different decay channels. \n\n\nTo demonstrate the applicability of the derived asymptotic scaling laws we have suggested a novel schematic model, a generalization of the Bosse-Krieger model, with two decay channels. As anticipated, \nthe numerical solution of the MCT and the asymptotic time fractals are in very good agreement. As an interesting new feature emerging already in this schematic model, we have observed a pronounced shoulder in the low-frequency susceptibility spectrum, which we have rationalized as an indicator of multiple decay channels.\n\nThe results of this work finally enable an asymptotic analysis of MCT for systems that are characterized by multiple decay channels. Considering that most of the recently applied MCTs, viz. MCT for active particles, MCT for molecules and MCT in strongly confined systems, fall into this category, several important applications of the presented theory could arise. Furthermore, it will be very interesting to investigate whether the observed asymmetry, as a signature of multiple decay channels, is measurable in realistic systems via simulations or experiments.\n\n\nThe derivation of the $\\beta$-scaling law for the dynamics in the vicinity of the glass form factors relies, first, on a discontinuous type-B transition and, second, on the assumption that the wave vector dependence can be discretized. \nThe first assumption is violated in active microrheology, where a mixed continuous\/discontinuous transition was observed \\cite{Gruber2016,Gruber2019_2}, and correspondingly a separate analysis is required. The second assumption is, strictly speaking, never fulfilled. For small wavenumbers, hydrodynamic modes become arbitrarily slow and the structural relaxation does not dominate the dynamics of the intermediate scattering functions. 
Differently speaking the limit of control parameter approaching the critical value and the limit of small wavenumber do not commute. Then the hydrodynamic regime is expected to shrink as the glass transition is approached. Nevertheless the long-wavelength singularities are encoded properly in mode-coupling theory, yet the $\\beta$-scaling law cannot hold uniformly for all wavenumbers. Recently it has been shown that the interplay of structural relaxation and the hydrodynamic behavior can be addressed within mode-coupling theory~\\cite{Mandal:PhysRevLett.123.168001} by using a logarithmically spaced grid of wavenumbers. \n\n\n\\section*{Acknowledgments}\n\nGJ and TF gratefully acknowledge support by the Austrian Science Fund (FWF): I 2887 and TV acknowledges partial funding by DFG VO 1270\/7-2.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\n In autonomous driving scenarios, the number of traffic participants and lanes surrounding the agent can vary considerably over time. Common autonomous driving systems use modular pipelines, where a perception component extracts a list of surrounding objects and passes this list to other modules, including localization, mapping, motion planning and high-level decision making components. Classical rule-based decision-making systems are able to deal with variable-sized object lists, but are limited in terms of generalization to unseen situations or are unable to cover all interactions in dense traffic. Since Deep Reinforcement Learning (DRL) methods can learn decision policies from data and off-policy methods can improve from previous experience, they offer a promising alternative to rule-based systems. In the past years, DRL has shown promising results in various domains \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15, DBLP:journals\/nature\/SilverHMGSDSAPL16, DBLP:conf\/nips\/WatterSBR15, DBLP:journals\/jmlr\/LevineFDA16, Mirchevska2018HighlevelDM}. 
However, classical DRL architectures like fully-connected or convolutional neural networks (CNNs) are limited in their ability to deal with variable-sized, structured inputs or to model interactions between objects. \n \n Prior works on reinforcement learning for autonomous driving that used fully-connected network architectures and fixed-sized inputs \\cite{adaptive_behavior_kit, branka_rl_highway, Mirchevska2018HighlevelDM, 2018TowardsPH, overtaking_maneuvers_kaushik} are limited in the number of vehicles that can be considered. CNNs using occupancy grids \\cite{tactical_decision_making, deeptraffic} are limited to their initial grid size. Recurrent neural networks are useful to cover temporal context, but are not able to handle a variable number of objects in a manner that is permutation-invariant w.r.t. the input order at a fixed time step. In \\cite{DeepSetQ}, limitations of these architectures are shown and a more flexible architecture based on Deep Sets \\cite{NIPS2017_6931} is proposed for off-policy reinforcement learning of lane-change maneuvers, outperforming traditional approaches in evaluations with the open-source simulator SUMO.\n \nIn this paper, we propose to use Graph Networks \\cite{DBLP:journals\/corr\/KipfW16} as an interaction-aware input module in reinforcement learning for autonomous driving. We employ the structure of Graphs in off-policy DRL and formalize the Graph-Q algorithm. In addition, to cope with multiple object classes of different feature representations, such as different vehicle types, traffic signs or lanes, we introduce the formalism of Deep Scenes, which can extend Deep Sets and Graph Networks to fuse multiple variable-sized input sets of different feature representations. Both of these can be used in our novel DeepScene-Q algorithm for off-policy DRL. 
Our main contributions are:\n\\begin{enumerate}\n \\item Using Graph Convolutional Networks to model interactions between vehicles in DRL for autonomous driving.\n \\item Extending existing set input architectures for DRL to deal with multiple lists of different object types.\n\\end{enumerate}\n\n\n\\section{RELATED WORK}\n\nGraph Networks are a class of neural networks that can learn functions on graphs as input \\cite{Scarselli:2009:GNN:1657477.1657482, DBLP:journals\/corr\/abs-1812-08434, DBLP:journals\/corr\/abs-1806-01242,DBLP:journals\/corr\/BattagliaPLRK16, DBLP:journals\/corr\/abs-1806-01261} and can reason about how objects in complex systems interact. They can be used in DRL to learn state representations \\cite{DBLP:journals\/corr\/DaiKZDS17, DBLP:journals\/corr\/abs-1806-01203, DBLP:journals\/corr\/abs-1810-09202, DBLP:journals\/corr\/abs-1806-01242}, e.g. for inference and control of physical systems with bodies (objects) and joints (relations). In the context of autonomous driving, Graph Networks were used for supervised traffic prediction while modeling traffic participant interactions \\cite{DBLP:journals\/corr\/abs-1903-01254}, where vehicles were modeled as objects and interactions between them as relations. Another type of interaction-aware network architecture, Interaction Networks, was proposed to reason about how objects in complex systems interact \\cite{DBLP:journals\/corr\/BattagliaPLRK16}. A vehicle behavior interaction network that captures vehicle interactions was presented in \\cite{DBLP:journals\/corr\/abs-1903-00848}. 
In \\cite{DBLP:journals\/corr\/abs-1805-06771}, a convolutional social pooling component was proposed using a CNN to model spatial connections between vehicles for vehicle trajectory prediction.\n\n\n \n \\begin{figure*}[h]\n \\centering\n (a) \\hspace{-1mm} \\includegraphics[width=0.30\\textwidth]{fig\/deepscene.pdf}\n \\hspace{10mm} (b) \\includegraphics[width=0.31\\textwidth]{fig\/deepscene-graphs.pdf}\n\\caption{Scheme of DeepScene-Q, using (a) Deep Sets and (b) Graphs. Both architectures combine multiple variable-length object lists in a scene, here a traffic sign $s_1$, lanes $l_{1}, l_2$ and vehicles $x_1, x_2$. The modules $\\phi_i$, $\\rho$ and $Q$ are fully-connected networks. As permutation invariant pooling operator, we use the \\textit{sum}. The vector $x^{\\text{static}}$ includes static features and $q$ the action value output.}\n\\label{fig:deepscene}\n\\end{figure*}\n\n\n\\section{PRELIMINARIES}\n\nWe model the task of high-level decision making for autonomous driving as a Markov Decision Process (MDP), where the agent follows a policy $\\pi$ in an environment in a state $s_t$, applying a discrete action $a_t \\sim \\pi$ to reach a successor state $s_{t+1}\\sim\\mathcal{M}$ according to a transition model $\\mathcal{M}$. In every time step $t$, the agent receives a reward $r_{t}$, e.g. for driving as close as possible to a desired velocity. The agent tries to maximize the discounted long-term return $R(s_t) = \\sum_{i \\geq t} \\gamma^{i-t}r_i$, where $\\gamma \\in [0, 1]$ is the discount factor. In this work, we use Q-learning \\cite{Watkins92q-learning}. The Q-function $Q^\\pi(s_t,a_t)=\\mathbf{E}_{a_{i>t}\\sim\\pi}[R(s_t)|a_t]$ represents the value of following a policy $\\pi$ after applying action $a_t$. 
The optimal policy can be inferred from the optimal action-value function $Q^*$ by maximization over actions.\n\n\\subsection{Q-Function Approximation}\n\n We use DQN \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15} to estimate the optimal $Q$-function by a function approximator $Q$, parameterized by $\\theta^{Q}$. It is trained in an offline fashion on minibatches sampled from a fixed replay buffer $\\mathcal{R}$ with transitions collected by a driver policy $\\hat{\\pi}$. As loss, we use\n$L(\\theta^{Q}) = \\frac{1}{b}\\sum_i \\left(y_i - Q(s_i, a_i| \\theta^{Q})\\right)^2$ with targets $y_i=r_i + \\gamma \\max_{a} Q'(s_{i+1}, a| \\theta^{Q'}),$ where $Q'$ is a target network, parameterized by $\\theta^{Q'}$, and $(s_i, a_i, s_{i+1}, r_i)|_{0\\leq i \\leq b}$ is a randomly sampled minibatch from $\\mathcal{R}$. For the target network, we use a soft update, i.e. $\\theta^{Q'}\\leftarrow\\tau\\theta^{Q}+ (1-\\tau)\\theta^{Q'}$ with update step-size $\\tau \\in [0,1]$. Further, we use a variant of Double-$Q$-learning \\cite{DBLP:journals\/corr\/HasseltGS15} which is based on two \\textit{Q}-network pairs and uses the minimum of the predictions for the target calculation, similar to \\cite{DBLP:conf\/icml\/FujimotoHM18}.\n\n\\subsection{Deep Sets}\n\nA network $Q_{\\mathcal{DS}}$ can be trained to estimate the $Q$-function for a state representation $s=(X^{\\text{dyn}}, x^{\\text{static}})$ and action $a$. The representation consists of a static input $x^{\\text{static}}$ and a dynamic, variable-length input set $X^{\\text{dyn}}=[x^1, .., x^{\\text{seq len}}]^\\top$, where $x^j|_{1 \\le j \\le \\text{seq len}}$ are feature vectors for surrounding vehicles in sensor range. In \\cite{DeepSetQ}, it was proposed to use Deep Sets to handle this input representation, where the Q-network consists of three network modules $\\phi,\\rho$ and $Q$. 
The representation of the dynamic input set is computed by $ \\Psi(X^{\\text{dyn}})= \\rho\\left(\\sum_{x\\in X^{\\text{dyn}}}\\phi(x) \\right),$\n which makes the Q-function permutation invariant w.r.t. the order of the dynamic input \\cite{NIPS2017_6931}. Static feature representations $x^{\\text{static}}$ are fed directly to the $Q$-module, and the Q-values can be computed by $Q_{\\mathcal{DS}} = Q(\\Psi(X^{\\text{dyn}}) || x^{\\text{static}})$, where $||$ denotes a concatenation of two vectors. The Q-learning algorithm is called DeepSet-Q \\cite{DeepSetQ}. \n \n\n\n\\section{METHODS}\n\\label{sec:methods}\n\n\\subsection{Deep Scene-Sets}\n\\label{sec:deepscenes}\n\n To overcome the limitation of DeepSet-Q to one variable-sized list of the same object type, we propose a novel architecture, Deep Scene-Sets, that is able to deal with $K$ input sets $X^{\\text{dyn}_1}, ..., X^{\\text{dyn}_K}$, where every set has variable length. A combined, permutation invariant representation of all sets can be computed by \n$$ \\Psi(X^{\\text{dyn}_1}, ..., X^{\\text{dyn}_K}) = \\rho \\left( \\sum_{k} \\sum_{x \\in X^{ \\text{dyn}_k}}\\phi^k(x) \\right),$$\n\nwhere $1 \\le k \\le K$. The output vectors $\\phi^k(\\cdot) \\in \\mathbb{R}^F $ of the neural network modules $\\phi^k$ have the same length $F$. We additionally propose to share the parameters of the last layer for the different $\\phi$ networks. Then, $\\phi^k(\\cdot)$ can be seen as a projection of all input objects to the same encoded \\textit{object space}. We combine the encoded objects of different types by the \\textit{sum} (or other permutation invariant pooling operators, such as \\textit{max}) and use the network module $\\rho$ to create an encoded \\textit{scene}, which is a fixed-sized vector. The encoded scene is concatenated to $x^{\\text{static}}$ and the Q-values can be computed by $ Q_\\mathcal{D} = Q(\\Psi(X^{\\text{dyn}_1}, ..., X^{\\text{dyn}_K}) || x^{\\text{static}})$. 
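A minimal sketch of this combined representation $\Psi$ in plain Python, with toy linear encoders standing in for the fully-connected modules $\phi^k$ and $\rho$ (all encoder definitions and feature values are illustrative):

```python
# Toy Deep Scene-Sets encoder: K object types, each with a variable-length
# list of feature vectors; per-type encoders phi_k map into a common
# F-dimensional object space, sum-pooling makes the result permutation
# invariant, and rho encodes the pooled vector into a fixed-size scene.
def psi(object_lists, phis, rho, F=3):
    pooled = [0.0] * F
    for X, phi in zip(object_lists, phis):   # one encoder per object type
        for x in X:
            for i, v in enumerate(phi(x)):
                pooled[i] += v
    return rho(pooled)

phi_veh  = lambda x: [x[0], x[1], 0.0]       # toy encoders with equal output length F
phi_lane = lambda x: [0.0, x[0], x[1]]
rho      = lambda h: [2.0 * v for v in h]

scene = psi([[(1.0, 2.0), (3.0, 4.0)], [(5.0, 6.0)]], [phi_veh, phi_lane], rho)
# permutation invariance: shuffling the vehicle list leaves the encoding unchanged
same  = psi([[(3.0, 4.0), (1.0, 2.0)], [(5.0, 6.0)]], [phi_veh, phi_lane], rho)
assert scene == same
```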
We call the corresponding Q-learning algorithm DeepScene-Q, shown in Algorithm \\ref{alg:sceneq} (Option 1) and \\Cref{fig:deepscene} (a).\n\n\\begin{algorithm}[t]\n\\small\n \\SetAlgoLined\n \\DontPrintSemicolon\n initialize $Q_{\\mathcal{G}}=(\\phi,\\rho, H, Q)$ and $Q'_{\\mathcal{G}}=(\\phi',\\rho', H', Q')$, set replay buffer $\\mathcal{R}$\\\\\n \\For{\\text{optimization step} o=1,2,\\dots}{\n \\SetKwProg{Fn}{}{:}{}\n \n get minibatch $(s_i,a_i,(X^{\\text{dyn}}_{i+1}, x_{i+1}^{\\text{static}}),r_{i+1})$ from $\\mathcal{R}$\\\\\n \\ForEach{\\text{transition}}{\n \\ForEach{\\text{object }$x_{i+1}^j$ in $X^{\\text{dyn}}_{i+1}$}{\n $(\\phi'_{i+1})^j=\\phi'\\left(x_{i+1}^j\\right)$\\\\\n }\n compute $H^{'(L)}_{i+1}$ by GCN with $H^{'(0)}_{i+1} = [(\\phi'_{i+1})^1, ..., (\\phi'_{i+1})^\\text{seq len}]^\\top$\\;\n get $\\rho'_{i+1}= \\rho'\\left(\\sum\\limits_j H^{'(L)}_{i+1}\\right)$\\;\n }\n $y_i = r_{i+1}+\\gamma\\max_a Q'(\\rho'_{i+1}, x_{i+1}^{\\text{static}}, a)$\\\\\n \n perform a gradient step on loss: $\\frac{1}{b}\\sum\\limits_i\\left(Q_{\\mathcal{G}}(s_i, a_i)-y_i\\right)^2$\\\\\n update target network by: $\\theta^{Q_{\\mathcal{G}}'}\\leftarrow\\tau\\theta^{Q_{\\mathcal{G}}} + (1-\\tau)\\theta^{Q_{\\mathcal{G}}'}$\n }\n \\caption{Graph-Q}\n \\label{alg:graphq}\n\\end{algorithm}\n\n\\subsection{Graphs}\nIn the Deep Set architecture, relations between vehicles are not explicitly modeled and have to be inferred in $\\rho$. We extend this approach by using Graph Networks, considering graphs as input. Graph Convolutional Networks (GCNs) \\cite{DBLP:journals\/corr\/KipfW16} operate on graphs defined by a set of node features $X^{\\text{dyn}}=[x^1, .., x^{\\text{seq len}}]^\\top$ and a set of edges represented by an adjacency matrix $A$. 
The propagation rule of the GCN is $H^{(l)}= \\sigma(D^{-\\frac{1}{2}} \\tilde A D^{-\\frac{1}{2}} H^{(l-1)}W^{(l)}) \\text{ with } 1 \\le l \\le L,$ where we set $H^{(0)} = [\\phi(x_1), ..., \\phi(x_\\text{seq len})]^\\top$ using an encoder module similar to the Deep Sets approach. $\\tilde A \\in \\mathbb{R}^{N \\times N}$ is an adjacency matrix with added self-connections, $D_{i,i} = \\sum_j \\tilde A_{i,j}$, $\\sigma$ the activation function, $H^{(l)} \\in \\mathbb{R}^{N \\times F}$ hidden layer activations and $W^{(l)}$ the learnable matrix of the $l$-th layer. The dynamic input representation can be computed from the last layer $L$ of the GCN: $ \\Psi(X^{\\text{dyn}})= \\rho\\left(\\sum_{x \\in X^{\\text{dyn}}} H^{(L)} \\right),$\n where $\\phi$ is a neural network and the output vector $\\phi(\\cdot) \\in \\mathbb{R}^F $ has length $F$. The Q-values can be computed by $ Q_\\mathcal{G} = Q(\\Psi(X^{\\text{dyn}}) || x^{\\text{static}})$. We call the corresponding Q-learning algorithm Graph-Q, see \\Cref{alg:graphq}.\n \n\n\n\\subsection{Deep Scene-Graphs}\n\nThe graph representation can be extended to deal with multiple variable-length lists of different object types $X^{\\text{dyn}_1}, ..., X^{\\text{dyn}_K}$ by using $K$ encoder networks. As node features, we use $H^{(0)} = [\\Phi^1, ..., \\Phi^K]^\\top$ and $\\Phi^k = [\\phi^k(x_1), ..., \\phi^k(x_{\\text{seq len}_k})] \\text{ for } 1 \\le k \\le K,$\nand compute the dynamic input representation from the last layer of the GCN:\n $$ \\Psi(X^{\\text{dyn}_1}, ..., X^{\\text{dyn}_K})= \\rho\\left(\\sum_{k} \\sum_{x \\in X^{ \\text{dyn}_k}} H^{(L)} \\right),$$\n with $1 \\le k \\le K$. Similar to the Deep Scene-Sets architecture, $\\phi^k$ are neural network modules with output vector length $F$ and parameter sharing in the last layer. To create a fixed vector representation, we combine all node features by the \\text{sum} into an encoded \\text{scene}. 
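One such propagation step can be sketched in NumPy, using the symmetric normalization with added self-connections of Kipf and Welling (the toy graph, features and weight matrix are illustrative):

```python
import numpy as np

# One GCN layer: H' = sigma(D^{-1/2} A_tilde D^{-1/2} H W), where A_tilde is
# the adjacency matrix with added self-connections and D its degree matrix.
def gcn_layer(A, H, W, sigma=lambda z: np.maximum(z, 0.0)):  # ReLU activation
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return sigma(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W)

# toy graph of 3 vehicles: 0-1 and 1-2 connected
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H0 = np.eye(3)                 # one-hot node features (stand-in for phi(x))
W0 = np.ones((3, 2))           # toy weight matrix
H1 = gcn_layer(A, H0, W0)
print(H1.shape)                # (3, 2)
```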
The Q-values can be computed by $Q_\\mathcal{D} = Q(\\Psi(X^{\\text{dyn}_1}, ..., X^{\\text{dyn}_K}) || x^{\\text{static}})$. This module can replace the DeepScene-Sets module in DeepScene-Q as shown in Algorithm \\ref{alg:sceneq} (Option 2) and in \\Cref{fig:deepscene} (b).\n\n\n\n\\begin{algorithm}[t]\n \\label{alg:sceneq}\n\\small\n \\SetAlgoLined\n \\DontPrintSemicolon\n initialize $Q_{\\mathcal{D}}=(\\phi^1, ..., \\phi^K,\\rho, H, Q)$ and $Q'_{\\mathcal{D}}=({\\phi^1}', ..., {\\phi^K}', \\rho', H', Q')$, set replay buffer $\\mathcal{R}$\\\\\n \\For{\\text{optimization step} o=1,2,\\dots}{\n \\SetKwProg{Fn}{}{:}{}\n \n get minibatch $(s_i,a_i,(X^{\\text{dyn}_1}_{i+1}, ..., X^{\\text{dyn}_K}_{i+1}, x_{i+1}^{\\text{static}}),r_{i+1})$ from $\\mathcal{R}$\\\\\n \\ForEach{\\text{transition}}{\n \\ForEach{\\text{object type} $k \\in (1, ..., K)$}{\n \\ForEach{\\text{object }$x_{i+1}^j$ in $X^{\\text{dyn}_k}_{i+1}$}{\n ${({\\phi^k_{i+1}}')}^j = {\\phi^k}' \\left(x_{i+1}^j\\right)$\\\\\n } \n }\n \\vspace{0.2cm}\n \\Fn{{Set (Option 1) }}{\n \\hspace{-0.3cm} get $\\rho'_{i+1}= \\rho'\\left(\\sum\\limits_k \\sum\\limits_j({\\phi^k_{i+1}}')^j\\right)$ \n }\n \n \\Fn{{Graph (Option 2) }}{\n \\hspace{-0.3cm} compute $H^{'(L)}_{i+1}$ by GCN with \n $H^{'(0)}_{i+1} = [\\Phi^1, ..., \\Phi^K ]^\\top \\text{ and } \\Phi^k = [({\\phi^k_{i+1}}')^1, ..., ({\\phi^k_{i+1}}')^{\\text{seq len}_k}]$\\;\n \n \\hspace{-0.3cm} get $\\rho'_{i+1}= \\rho'\\left(\\sum\\limits_k \\sum\\limits_j H^{'(L)}_{i+1} \\right)$\\;\n \n }\n $y_i = r_{i+1}+\\gamma\\max_a Q'(\\rho'_{i+1}, x_{i+1}^{\\text{static}}, a)$\\\\\n }\n perform a gradient step on loss and update target network as in Algorithm \\ref{alg:graphq}. 
\n }\n \\caption{DeepScene-Q}\n\\end{algorithm}\n\n\n\\subsection{Graph Construction}\n\nWe propose two different strategies to construct bidirectional edge connections between vehicles for Graphs and Deep Scene-Graphs representations:\n\\vspace{-0.1cm}\n\\begin{enumerate}\n \\item Close agent connections: Connect the agent vehicle to its direct leader and follower in its own and the left and right neighboring lanes ($6 \\cdot 2$ edges).\n \\item All close vehicles connections: Connect all vehicles to their leader and follower in their own and the left and right lanes ($N \\cdot 6 \\cdot 2$ edges for $N$ surrounding vehicles).\n\\end{enumerate}\n\nEdge weights are computed by the inverse absolute distance between two vehicles, as shown in \\cite{DBLP:journals\/corr\/abs-1903-01254}. A fully-connected graph is avoided due to computational complexity.\n\n\n\\subsection{MDP Formulation}\n\nThe feature representations of the surrounding cars and lanes are shown in \\cref{sec:inputfeatures}. The action space $\\mathcal{A}$ consists of a discrete set of three possible actions in lateral direction: \\textit{keep lane}, \\textit{left lane-change} and \\textit{right lane-change}.\n Acceleration and collision avoidance are controlled by low-level controllers that are fixed and not updated during training. Maintaining a safe distance to the preceding vehicle is handled by an integrated safety module, as proposed in \\cite{deeptraffic,Mirchevska2018HighlevelDM}. If the chosen lane-change action is not safe, the agent keeps the lane. 
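The safety-module masking just described can be sketched as follows (function and action names are illustrative, not taken from the paper):

```python
# Action masking with the safety module: if the chosen lane-change action is
# not considered safe, the agent keeps the lane.
ACTIONS = ("keep", "left", "right")

def apply_safety_module(chosen_action, is_safe):
    # is_safe: callable deciding whether a lane change is safe in the current state
    if chosen_action in ("left", "right") and not is_safe(chosen_action):
        return "keep"
    return chosen_action

assert apply_safety_module("left", lambda a: False) == "keep"   # unsafe -> keep lane
assert apply_safety_module("left", lambda a: True) == "left"    # safe -> execute
```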
The reward function \n$r: \\mathcal{S}\\times\\mathcal{A}\\to \\mathbb{R}$ is defined as:\n$r(s, a) = 1 - \\frac{|v_{\\text{current}}(s)- v_{\\text{desired}}(s)|}{v_{\\text{desired}}(s)} - p_{\\text{lc}}(a),$\nwhere $v_{\\text{current}}$ and $v_{\\text{desired}}$ are the actual and desired velocity of the agent, and $p_{\\text{lc}}$ is a penalty for choosing a lane-change action, which discourages unnecessary lane changes for additional comfort.\n\n\n\\begin{table*}[t!]\n \\centering\n \\footnotesize\n \\begin{tabular}{c c c c c c }\n \\toprule\n Driver Type & maxSpeed & lcCooperative & accel\/ decel & length & lcSpeedGain\\\\\n \\midrule\n agent driver & 10 & - & 2.6\/4.5 & 4.5 & - \\\\\n passenger drivers 1& $\\mathcal{U}(8,12)$ & $0.2$ & 2.6\/4.5 & $\\mathcal{U}(4, 5)$& $\\mathcal{U}(5, 10)$ \\\\\n passenger drivers 2 & $\\mathcal{U}(5,9)$ & $1.0$ & 2.6\/4.5 & $\\mathcal{U}(4,5)$ & $\\mathcal{U}(5, 10)$ \\\\\n passenger drivers 3 & $\\mathcal{U}(3, 7)$ & $0.8$ & 2.6\/4.5 & $\\mathcal{U}(4, 5)$ & $\\mathcal{U}(5, 10)$ \\\\\n truck drivers & $\\mathcal{U}(2, 4)$& $0.4$ & 1.3 \/ 2.25 & $\\mathcal{U}(9.5, 14.5)$ & $ \\mathcal{U}(0,3)$ \\\\ \n motorcycle drivers & $ \\mathcal{U}(7, 11)$ & $0.2$ & 3.0\/5.0 & $ \\mathcal{U}(2,3)$ & $\\mathcal{U}(15, 20)$\\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{0.2cm}\n \\caption{SUMO parameters for different driver types. In each scenario, trucks and motorcycles are sampled with $10\\%$ and $5\\%$ probability; passenger cars and their driver types are sampled uniformly for the remaining number of vehicles.}\n \\label{tab:driver_types}\n\\end{table*}\n\n\n\\section{EXPERIMENTAL SETUP}\n\nWe use the open-source SUMO \\cite{SUMO} traffic simulation to learn lane-change maneuvers.\n\n\\subsection{Scenarios}\n\n\\paragraph{Highway} To evaluate and show the advantages of Graph-Q, we use the $\\SI{1000}{\\metre}$ circular highway environment shown in \\cite{DeepSetQ} with three continuous lanes and one object class (passenger cars). 
To train our agents, we used a dataset with 500,000 transitions. \n\n\\paragraph{Fast Lanes} To evaluate the performance of DeepScene-Q, we use a more complex scenario with a variable number of lanes, shown in \\Cref{fig:sumo}. It consists of a $\\SI{1000}{\\metre}$ circular highway with three continuous lanes and additional fast lanes in two $\\SI{250}{\\metre}$ sections. At the end of a fast lane, vehicles slow down and stop until they can merge into an ongoing lane. The agent receives information about additional lanes in the form of traffic signs starting $\\SI{200}{\\metre}$ before every lane start or end. Further, different vehicle types are included, i.e. cars, trucks and motorcycles with different lengths and behaviors. For simplicity, we use the same feature representation for all vehicle classes. As dataset, we collected 500,000 transitions in the same manner as for the \\textit{Highway} environment. \n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{fig\/fast_lanes.png}\n \\caption{\\textit{Fast Lanes} scenario in SUMO. The agent (blue) is overtaking other vehicles (red) on the fast lane and has to merge before the lane ends.}\n \\vspace{-2mm}\n \\label{fig:sumo}\n\\end{figure}\n\n\\subsection{Input Features}\n\\label{sec:inputfeatures}\nIn the \\textit{Highway} scenario, we use the same input features as proposed in \\cite{DeepSetQ}. 
For the \\textit{Fast Lanes} scenario, the input features used for vehicle $i$ are:\n\n\\begin{itemize}\n\\item \\textit{relative distance}:\n$dr_i = (p_i - p_{\\text{agent}})\/d_{\\text{max}}\\in\\mathbb{R}$,\\\\ $p_{\\text{agent}}$, $p_i$ are longitudinal positions in a curvilinear coordinate system of the lane.\n \n\\item \\textit{relative velocity}: \n$ dv_i = (v_i-v_{\\text{agent}}) \/v_{\\text{allowed}}$\n \n\\item \\textit{relative lane index}: \n$dl_i = l_i - l_{\\text{agent}} \\in \\mathbb{Z}$,\\\\\nwhere $l_i$, $l_{\\text{agent}} $ are lane indices.\n \n\\item \\textit{vehicle length}: \n$\\text{len}_i \/ 10.0$\n\\end{itemize}\n\n\\noindent The state representation for lane $j$ is:\n\\begin{itemize}\n \\item \\textit{lane start and end}: distances (km) to lane start and end\n \\item \\textit{lane valid}: lane currently passable\n \\item \\textit{relative lane index}: $dl_j = l_j - l_{\\text{agent}} \\in \\mathbb{Z}$,\\\\\n where $l_j$, $l_{\\text{agent}} $ are lane indices.\n\\end{itemize}\nFor the agent, the normalized velocity $v_{\\text{current}} \/ v_{\\text{desired}}$ is included, where $v_{\\text{current}}$ and $v_{\\text{desired}}$ are the current and desired velocity of the agent. Passenger cars, trucks and motorcycles use the same feature representation. When the agent reaches a traffic sign indicating a starting (ending) lane, the lane features are updated until the start (end) of the lane.\n\\label{sec:appendix:states}\n\n\n\\subsection{Training \\& Evaluation Setup}\nAll agents are trained off-policy on datasets collected by a rule-based agent with the integrated SUMO safety module enabled, performing random lane changes to the left or right whenever possible. For training, traffic scenarios with a random number of $n \\in (30, 60)$ vehicles for \\textit{Highway} and with $n \\in (30, 90)$ vehicles for \\textit{Fast Lanes} are used. Evaluation scenarios vary in the number of vehicles $n \\in (30 , 35, ..., 90)$. 
For each fixed $n$, we evaluate 20 scenarios with different \\textit{a priori} randomly sampled positions and driver types for each vehicle, to smooth the high variance. \n\n\\label{sec:sumosettings}\nIn SUMO, we set the time step length to $\\SI{0.5}{\\second}$. The action step length of the reinforcement learning agents is $\\SI{2}{\\second}$ and the lane change duration is $\\SI{2}{\\second}$. Desired time headway $\\tau$ and minimum gap are $\\SI{0.5}{\\second}$ and $\\SI{2}{\\metre}$. All vehicles have no desire to keep right ($\\text{lcKeepRight} = 0.0$). The sensor range of the agent is $d_{\\text{max}}=\\SI{80}{\\metre}$. \\textit{LC2013} is used as lane-change controller for all other vehicles. To simulate traffic conditions as realistically as possible, different driver types are used with parameters shown in \\Cref{tab:driver_types}.\n\n\n\n\\begin{table}[h!]\n \\centering\n \\footnotesize\n \\begin{tabular}{c|c| c}\n \\toprule\n Social CNN & VBIN & GCN\\\\\n \\midrule\n \\footnotesize{Input}($B \\times 80 \\times 5$) & \\footnotesize{Input}($B \\times 15$) & \\footnotesize{Input}($B \\times \\text{seq} \\times 3$)\\\\\n \\midrule\n $\\phi$: FC($20$), FC($80$) & $\\phi$: FC($20$), FC($80$) & $\\phi$: FC($20$), FC($80$) \\\\\n $16 \\times \\text{Conv2D} (3 \\times 1)$ & concat($\\cdot$) & $1 \\times \\text{GCN}(80)$\\\\\n $32 \\times \\text{Conv2D} (3 \\times 1)$ & $\\rho$: FC($80$), FC($20$) & sum($\\cdot$)\\\\\n \\midrule\n \\multicolumn{3}{c}{concat($\\cdot$, Input($B \\times 3$))}\\\\ \n \\multicolumn{3}{c}{FC(100)$^*$, FC(100), Linear(3)}\\\\ \n \\bottomrule\n \\end{tabular}\n \\vspace{0.15cm}\n\n \\begin{tabular}{c|c}\n \\toprule\n Deep Scene-Sets & Deep Scene-Graphs\\\\\n \\midrule\n \\multicolumn{2}{c}{Input($B \\times \\text{seq}_0 \\times 4$) and Input($B \\times \\text{seq}_{1} \\times 4$)} \\\\\n \\midrule\n $\\phi_{0}$: FC(20), FC(80), FC(80)$^{**}$ & $\\phi_{0}$: FC(20), FC(80), FC(80)$^{**}$\\\\\n $\\phi_{1}$: FC(20), FC(80), FC(80)$^{**}$& $\\phi_{1}$: 
FC(20), FC(80), FC(80)$^{**}$ \\\\\n sum($\\cdot$) & $1 \\times \\text{GCN}(80)$\\\\\n $\\rho$: FC($80$), FC($80$) & sum($\\cdot$)\\\\\n \\midrule\n \\multicolumn{2}{c}{concat($\\cdot$, Input($B \\times 3$))}\\\\ \n \\multicolumn{2}{c}{FC(100), FC(100), Linear(3)}\\\\ \n \\bottomrule\n \\end{tabular}\n \n \\caption{Network architectures. FC($\\cdot$) are fully-connected layers. The CNN uses strides of $(2 \\times 1)$. (*) For VBIN FC(200). (**) Parameters of the last layers are shared.}\n \\label{tab:networks1}\n\\end{table}\n\n\n\\subsection{Comparative Analysis}\n\nEach network is trained with a batch size of $64$ and optimized by Adam \\cite{DBLP:journals\/corr\/KingmaB14} with a learning rate of $10^{-4}$. As the activation function, we use Rectified Linear Units (ReLU) in all hidden layers of all architectures. The target networks are updated with a step-size of $\\tau=10^{-4}$.\nAll network architectures, including the baselines, were optimized using Random Search with the same budget of 20 training runs. We preferred Random Search over Grid Search, since it has been shown to result in better performance using budgets in this range \\cite{Bergstra:2012:RSH:2188385.2188395}. The Deep Sets architecture and hyperparameter-optimized settings for all encoder networks are used from \\cite{DeepSetQ}. The network architectures are shown in \\Cref{tab:networks1}. Graph-Q is compared to two other interaction-aware Q-learning algorithms that use input modules originally proposed for supervised vehicle trajectory prediction. 
\nTo support our architecture choices for the Deep Scene-Sets, we compare to a modification with separate $\\rho$ networks.\nWe use the following baselines\\footnote{Since we do not focus on including temporal context, we adapt recurrent layers to fully-connected layers in all baselines.}:\n\n\\paragraph{Rule-Based Controller}\nA naive rule-based controller that uses the SUMO lane-change model \\textit{LC2013}.\n\n\\paragraph{Convolutional Social Pooling (SocialCNN)} In \\cite{DBLP:journals\/corr\/abs-1805-06771}, a social tensor is created by learning latent vectors of all cars by an encoder network and projecting them to a grid map in order to learn spatial dependencies. \n\n\\paragraph{Vehicle Behaviour Interaction Networks (VBIN)} In \\cite{DBLP:journals\/corr\/abs-1903-00848},\ninstead of summarizing the output vectors as in the Deep Sets approach, the vectors are concatenated, which results in a limitation to a fixed number of cars. We consider the 6 vehicles surrounding the agent (leader and follower on own, left and right lane).\n\n\\paragraph{Multiple $\\rho$-networks}\nA Deep Scene architecture where all object types are processed separately by using $K$ different $\\rho$-network modules. The $K$ resulting output vectors are concatenated as \n$\\big \\lbrack \\rho^1 \\left( \\sum_{x \\in X^{ \\text{dyn}_1}}\\phi^1(x) \\right), ..., \\rho^K \\left( \\sum_{x \\in X^{ \\text{dyn}_K}}\\phi^K(x) \\right) \\big \\rbrack$ and fed into the Q-network module.\n\n\n\n\n\\subsection{Implementation Details \\& Hyperparameter Optimization}\n\\label{sec:appendix:networks}\n\nAll networks were trained for $1.25 \\cdot 10^6$ optimization steps. The Random Search configuration space is shown in \\Cref{tab:hyperopt}. For all approaches except VBIN, we used the same $\\phi$ and $Q$ architectures. Due to stability issues, we adapted these parameters for VBIN. For SocialCNN, we used the optimized grid from \\cite{DeepSetQ} with a size of $80 \\times 5$. 
The GCN architectures were implemented using the PyTorch Geometric library \\cite{DBLP:journals\/corr\/abs-1903-02428}.\n\n\n \\begin{table}[t!]\n \\centering\n \\begin{tabular}{c c c}\n \\toprule\n Architecture & Parameter & Configuration Space\\\\\n \\midrule\n Encoders & $\\phi$: num layers & $1,2,3$\\\\\n & $\\phi$: hidden\/ output dims & $5, 20, 80, 100$ \\\\\n Deep Sets & $\\rho$: num layers & $1,2,3$\\\\\n & $\\rho$: hidden\/ output dims & $5, 20, 100$\\\\\nGCN & num GCN layers & 1,2,3\\\\\n& hidden and output dim & 20, 80\\\\\n& use edge weights & True, False\\\\\n SocialCNN & CONV: num layers & $2, 3$ \\\\\n & kernel sizes& $([7, 3, 2], [2, 1])$ \\\\\n & strides & $([2, 1], [2, 1])$ \\\\\n & filters & $8, 16, 32$ \\\\\n VBIN & $\\phi$ : output dim & 20, 80 \\\\\n & $\\rho$ : hidden dim & 20, 80, 160, 200 \\\\\n & $Q$ : hidden dim & 100, 200\\\\\n Deep Scene-Sets & $\\rho$ : output dim & 20, 80\\\\\n & shared parameters & True, False\\\\\n \n Deep Scene-Graphs & use $\\rho$ network & True, False\\\\\n & $\\rho$ : output dim & 20, 80\\\\\n & shared parameters & True, False\\\\\n \\bottomrule\n \\end{tabular} \n \\vspace{0.2cm}\n \\caption{Random Search configuration space. For every architecture, we sampled 20 configurations to find the best setting.}\n \\label{tab:hyperopt}\n\\end{table}\n\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.39\\textwidth]{fig\/exps.pdf}\n \\includegraphics[width=0.39\\textwidth]{fig\/exps_ablation.pdf}\n\\caption{ Mean performance and standard deviation in the \\textit{Highway} scenario over 10 training runs for Graph-Q with all close vehicle connections, the Deep Sets \\cite{DeepSetQ} and two other interaction-aware Q-function input modules (left), and Graph-Q using the two proposed graph construction strategies (right). 
The number of vehicles indicates the traffic intensity, from light to dense traffic.}\n\\label{fig:perfhighway}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.38\\textwidth]{fig\/exps_1000_allinone_new.pdf}\n \\includegraphics[width=0.38\\textwidth]{fig\/exps_1000_allinone_ablation.pdf}\n\\caption{Mean performance and standard deviation in the \\textit{Fast Lanes} scenario over 10 training runs for Deep Scene-Sets, Deep Scene-Graphs and the rule-based controller from SUMO (left), and different architecture choices of the Deep Scenes (right). The number of vehicles indicates the traffic intensity.}\n\\label{fig:perfdeepsetq}\n\\end{figure*}\n\n\\section{RESULTS}\n\nThe results for the \\textit{Highway} scenario are shown in \\Cref{fig:perfhighway}. Graph-Q using the GCN input representation (with all close vehicle connections) outperforms VBIN and Social CNN. Further, the GCN input module yields a better performance compared to Deep Sets in all scenarios except very light traffic with rare interactions between vehicles. While the Social CNN architecture has a high variance, VBIN shows a better and more robust performance and also outperforms the Deep Sets architecture in high traffic scenarios. This underlines the importance of interaction-aware network modules for autonomous driving, especially in urban scenarios. However, VBIN are still limited to fixed-sized input and additional gains can be achieved by combining both variable input and interaction-aware methods as in Graph Networks. To verify that the shown performance increases are significant, we performed a t-test, exemplarily for scenarios with 90 cars:\n\\begin{itemize}\n \n\\item The difference between the mean performances of DeepSet-Q and Graph-Q is highly significant ($p < 0.001$), with a p-value of 0.0011.\n\\item The difference between the mean performances of Graph-Q and VBIN is significant ($p < 0.1$), with a p-value of 0.0848. 
Graph-Q is additionally more flexible and can consider a variable number of surrounding vehicles.\n\\end{itemize}\n\n\\Cref{fig:perfhighway} (right) shows the performance of the two graph construction strategies. A graph built with connections for all close vehicles outperforms a graph built with close agent connections only. However, the performance increase is only slight, which indicates that interactions with the direct neighbors of the agent are most important. \n\n\nThe evaluation results for \\textit{Fast Lanes} are shown in \\Cref{fig:perfdeepsetq} (left). The vehicles controlled by the rule-based controller rarely use the fast lane. In contrast, our agent learns to drive on the fast lane as much as possible ($39.0\\%$ of the driving time). We assume that the Deep Scene-Sets slightly outperform Deep Scene-Graphs because the agent has to deal with fewer interactions than in the \\textit{Highway} scenario. Finally, we compare Deep Scene-Sets to a basic Deep Sets architecture with a fixed feature representation. Using the exact same lane features (if necessary filled with dummy values), both architectures show similar performance. However, the performance collapse of the Deep Sets agent considering only its own, left and right lane shows that the ability to deal with an arbitrary number of lanes (or other object types) can be very important in certain situations. Due to its limited lane representation, the Deep Sets (closest lanes) agent is not able to see the fast lane and thus is significantly slower. \\Cref{fig:perfdeepsetq} (right) shows an ablation study, comparing the performance of the Deep Scene-Sets with and without shared parameters in the last layer of the encoder networks. 
Using shared parameters in the last layer leads to a slight increase in robustness and performance, and outperforms the architecture with separate $\\rho$ networks.\n\n\n\\section{CONCLUSION} \n\\label{sec:conclusion}\n\nIn this paper, we propose Graph-Q and DeepScene-Q, interaction-aware reinforcement learning algorithms that can deal with variable input sizes and multiple object types in the problem of high-level decision making for autonomous driving. We showed that interaction-aware neural networks, and among them especially GCNs, can boost the performance in dense traffic situations. The Deep Scene architecture overcomes the limitation of fixed-sized inputs and can deal with multiple object types by projecting them into the same encoded object space. The ability to deal with objects of different types is especially necessary in urban environments. In the future, this approach could be extended by devising algorithms that dynamically adapt the graph structure of GCNs to the current traffic conditions. Based on our results, it would be promising to omit graph edges in light traffic, essentially falling back to the Deep Sets approach, while it is beneficial to model more interactions with increasing traffic density.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\\clearpage\n\n\\balance\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe question we address here is whether it is possible to indicate a representation of the system of interest which allows for a reduction of the system size with no loss of information. Our solution of this problem is based on the system symmetry, which allows us to divide the state space of the system into sets of similar states. If this is the case, the system can then be represented by these sets. The proposed method is applicable to systems described by discrete states. 
Here, any two states of the system are connected if there is an elementary process which transforms one state into another. Such a description allows for the construction of a network whose nodes are identified with particular states of the system, and edges represent possible transitions between the states. The graph obtained in this way reflects the considered structure, i.e. the pattern of connections between the states of the analysed system. The graph is equivalent to a Kripke structure \\cite{kri}. In the simplest case, the graph is undirected and unweighted. However, more often than not this is not the case, and not all transitions are equally probable. This leads to the application of weighted graphs. It is also possible that the weights of transitions between two states in both directions are not the same; in the limiting case the transition in one direction may not be possible.\\\\\nThe simplest observation one can make on the similarity of states concerns the number of nearest neighbours of each node of the obtained graph. However, it is possible that the properties of a particular node depend also on its connections with further neighbours. We can indirectly take this into account by an analysis of the neighbourhood of the nearest neighbours of each node. Such an analysis allows us to indicate nodes for which the same structure of connections is observed. We say that those nodes belong to the same class of nodes. To summarize, two nodes belong to the same class if they have the same number of nearest neighbours which belong to the same classes. The existence of classes of nodes is an indication of the symmetry of the state space.\\\\\nThe advantage of the class structure is that the obtained classes can be used instead of states to represent the analysed system. If the system walks randomly in the state space, after a long time all states which belong to a given class have the same probability. 
Thus, one can construct a new graph whose nodes represent classes, and edges represent connections between states which form a given class. The transformation from the initial to the new network of states preserves the stationary probability distribution of the system; hence the calculations of the probability distribution can be carried out for a much smaller system. The presented method is general, and it can be applied to any system which can be described as a graph of states connected by elementary processes.\\\\\n\nBelow we present a summary of our method, with an indication of its crucial elements. We present results of the application of the method to the state space formed by the set of all possible states of Boolean networks of a given size. We also show some other examples of the application of the method. In each case, the method results in a significant reduction of the system size. The paper is organised as follows: In the next section the class identification method is presented. The following two sections are devoted to the presentation of the results of the application of the method to different systems. The last section concludes the presented method.\n\n\\section{Class identification procedure}\nThe idea of class identification is independent of the type of network we deal with. As long as the analysed system is characterised by some kind of symmetry, which is the case for many real systems, it is possible to indicate groups of nodes which have the same properties in the network. Here, we are interested in properties which manifest as the same patterns of connections of particular nodes. As already mentioned in the Introduction, the same pattern of connections of some group of nodes means that they have the same number of nearest neighbours which are of the same kind. Further, if the considered network is weighted, not only the number of connections must be taken into account but also the weights of particular connections. 
The exact procedure differs slightly depending on the type of graph, as one has to take into account whether the considered graph is directed and\/or weighted. It should also be emphasized that even though we explicitly take into account only the nearest neighbours of each node, in \nfact the procedure preserves information about connections with further neighbours. This is because the class of any node depends on the classes of its neighbours, which in turn depend on the classes of their neighbours, and so on. A similar concept, termed 'regular equivalence', has been described in \\cite{bor}.\n\n\\subsection{Simple graphs}\nThe simplest case is a graph which is undirected and unweighted. An example network is presented in Tab.\\ref{tab_uu}. The first column of the table contains the node index, while the next columns contain the indices of the nearest neighbours of this node. As the considered network is undirected and unweighted, the table provides the full information needed for the class identification. The subsequent steps of the procedure are presented in Tab.\\ref{cl_uu}. At the very beginning one has to assign the same symbol to all nodes which have the same degree (i.e. the same number of nearest neighbours). Thus, the number of different symbols equals the number of different values of the node degree present in the considered network. In our example this number is equal to $5$, and we use the letters from $A$ to $E$ to indicate nodes with different degrees. Now, the numbers which identify the neighbours of the nodes should also be replaced by the just-assigned symbols. This is done in the first part of Table \\ref{cl_uu}. \nNext, one has to check whether, for all nodes decorated with a given symbol, the symbols assigned to their neighbours are the same (it should be emphasized that not all neighbours of a given node need to belong to the same class). If this is the case, the proper classes are already obtained. 
Otherwise, the procedure has to be continued, as one step is not enough. In our example, in the case of the nodes which have degree $4$ and were decorated by the symbol $B$, in one case the neighbours are decorated by the symbol $D$ and in the other case by the symbol $E$, as in the second and the last lines of the first part of Table \\ref{cl_uu}. This means that although the degrees of those two nodes are the same, the patterns of their connections with other nodes of the network are different. One has to introduce a further distinction between those nodes. Here we add a number which follows a given class symbol. As only the nodes decorated by the symbol $B$ have to be distinguished, next to all remaining symbols we have $1$, and after $B$ we have $1$ and $2$. The introduced change involves an update of all already assigned symbols, which is done and presented in the second part of Tab.\\ref{cl_uu}. The last step ensures the proper class identification for the analysed network. As a result, presented in Tab.\\ref{clf_uu}, we obtain that the $16$ possible states are classified into $6$ classes.\n\n\\begin{table*}\n\\begin{tabular}{c|*9{c}}\nnode number&\\multicolumn{9}{c}{neighbours of a given node}\\\\\\hline\n0&2&4&5&6&7&9&&&\\\\\n1&2&4&5&7&&&&&\\\\\n2&0&1&3&9&10&11&12&&\\\\\n3&2&4&5&6&7&9&&&\\\\\n4&0&1&3&6&10&11&12&&\\\\\n5&0&1&3&6&10&11&14&&\\\\\n6&0&3&4&5&8&12&14&15&\\\\\n7&0&1&3&9&10&11&14&&\\\\\n8&6&9&&&&&&&\\\\\n9&0&2&3&7&8&12&14&15&\\\\\n10&2&4&5&7&12&14&&&\\\\\n11&2&4&5&7&12&14&&&\\\\\n12&2&4&6&9&10&11&13&15&\\\\\n13&12&14&&&&&&&\\\\\n14&5&6&7&9&10&11&13&15&\\\\\n15&6&9&12&14&&&&&\\\\\n\\end{tabular}\n\\caption{The example of the undirected and unweighted graph}\n\\label{tab_uu}\n\\end{table*}\n\n\\begin{table*}\n\\begin{tabular}{c|*8{c}}\nnode class&\\multicolumn{8}{c}{classes of neighbouring nodes}\\\\\\hline\n\\multicolumn{9}{c}{first 
iteration}\\\\\\hline\nC&D&D&D&D&E&E\\\\\nB&D&D&D&D\\\\\nD&B&C&C&C&C&E&E\\\\\nC&D&D&D&D&E&E\\\\\nD&B&C&C&C&C&E&E\\\\\nD&B&C&C&C&C&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nD&B&C&C&C&C&E&E\\\\\nA&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nC&D&D&D&D&E&E\\\\\nC&D&D&D&D&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nA&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nB&E&E&E&E\\\\\\hline\n\\multicolumn{9}{c}{second iteration}\\\\\\hline\nC1&D1&D1&D1&D1&E1&E1\\\\\nB1&D1&D1&D1&D1\\\\\nD1&B1&C1&C1&C1&C1&E1&E1\\\\\nC1&D1&D1&D1&D1&E1&E1\\\\\nD1&B1&C1&C1&C1&C1&E1&E1\\\\\nD1&B1&C1&C1&C1&C1&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D1&E1&E1\\\\\nD1&B1&C1&C1&C1&C1&E1&E1\\\\\nA1&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D1&E1&E1\\\\\nC1&D1&D1&D1&D1&E1&E1\\\\\nC1&D1&D1&D1&D1&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D1&E1&E1\\\\\nA1&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D1&E1&E1\\\\\nB2&E1&E1&E1&E1\\\\\n\\end{tabular}\n\\caption{Steps of the class identification procedure for the example of the undirected and unweighted graph presented in Tab.\\ref{tab_uu}}\n\\label{cl_uu}\n\\end{table*}\n\n\n\\begin{table*}\n\\begin{tabular}{c|*8{c}}\nnode class&\\multicolumn{8}{c}{classes of neighbouring nodes}\\\\\\hline\nA1&E1&E1\\\\\nB1&D1&D1&D1&D1\\\\\nB2&E1&E1&E1&E1\\\\\nC1&D1&D1&D1&D1&E1&E1\\\\\nD1&B1&C1&C1&C1&C1&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D1&E1&E1\\\\\n\\end{tabular}\n\\caption{The final class set for the example of the undirected and unweighted graph presented in Tab.\\ref{tab_uu}}\n\\label{clf_uu}\n\\end{table*}\n\n\n\\subsection{Weighted graphs}\nConsider the same network as in the previous subsection but with different weights of the edges. The modified network is presented in Tab.\\ref{tab_uw}. For simplicity we consider just two different weight values, marked in the table by two different font styles. Now, not only the number of connections between nodes but also their weights must be taken into account. Also in this case, the class identification procedure starts with an assignment of symbols to the nodes to indicate their different degrees. 
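The refinement loop described above is the same for all graph types; only the node signature changes (for weighted graphs the edge weights enter the signature, and for directed graphs the in- and out-neighbour lists are treated separately). As a minimal illustration - our own sketch, not the implementation used in the cited papers - the following Python code reproduces the $6$ classes obtained for the unweighted example of Tab.\\ref{tab_uu}:

```python
# Adjacency list of the example graph from Tab.\ref{tab_uu}
# (undirected, unweighted).
adj = {
    0: [2, 4, 5, 6, 7, 9],          1: [2, 4, 5, 7],
    2: [0, 1, 3, 9, 10, 11, 12],    3: [2, 4, 5, 6, 7, 9],
    4: [0, 1, 3, 6, 10, 11, 12],    5: [0, 1, 3, 6, 10, 11, 14],
    6: [0, 3, 4, 5, 8, 12, 14, 15], 7: [0, 1, 3, 9, 10, 11, 14],
    8: [6, 9],                      9: [0, 2, 3, 7, 8, 12, 14, 15],
    10: [2, 4, 5, 7, 12, 14],       11: [2, 4, 5, 7, 12, 14],
    12: [2, 4, 6, 9, 10, 11, 13, 15], 13: [12, 14],
    14: [5, 6, 7, 9, 10, 11, 13, 15], 15: [6, 9, 12, 14],
}

def identify_classes(adj):
    """Iteratively refine node classes until the partition is stable."""
    # Initial classes are given by the node degree.
    cls = {v: len(nb) for v, nb in adj.items()}
    while True:
        # Signature of a node: its current class together with the
        # multiset of classes of its nearest neighbours.
        sig = {v: (cls[v], tuple(sorted(cls[u] for u in adj[v])))
               for v in adj}
        # Relabel the signatures with small integers.
        labels = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_cls = {v: labels[sig[v]] for v in adj}
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls          # no further split: classes found
        cls = new_cls

classes = identify_classes(adj)
print(len(set(classes.values())))   # 6 classes for the 16-node example
```

The loop stops as soon as a refinement step no longer increases the number of distinct classes; since refinement can only split classes, the partition is then stable.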
As in the previous case, we have $5$ different values of the node degree. Both nodes decorated by the symbol $A$ have exactly the same kinds of neighbours, and we do not need to introduce any further distinction between them. In the case of the nodes decorated by the symbol $B$, we observe exactly the same situation as before; one of the nodes has neighbours which are decorated by the symbol $D$, and the other has neighbours which are decorated by the \nsymbol $E$. This of course means that they are not the same, so we append two different numbers to the symbol to enable their distinction. We also see that although the symbols assigned to the neighbours of the nodes decorated by the symbols $C$ and $D$ are the same, the weights of some edges are different. Because of that, we also add an additional number to both those symbols. The $E$ nodes are the same, so they do not need to be distinguished. After the second iteration of the procedure we see that the class identification is not finished, because the neighbours of the $E$ nodes belong to different classes. The remaining symbols are correct at this point, so we append the additional number $1$ to all of them, while one of the $E$ nodes gets $1$ appended and the other gets $2$. This is the third iteration in Tab.\\ref{cl_uw}. The introduced changes are also taken into account in the lists of the neighbours of all nodes. Still, we have not yet obtained the proper classification. The last modification makes it necessary to distinguish the nodes decorated by the symbol $A$. After this modification we obtain the proper set of classes for the analysed network. The final result is presented in Tab.\\ref{clf_uw}. In this case the original state space of size $16$ is reduced to $10$ classes.\n\n\\subsection{Directed graphs, unweighted and weighted}\n\\label{duw}\nIn the case of directed graphs we have to indicate not only the lists of nodes which are reached from a given node, but also the lists of nodes which lead to a given node. 
The whole procedure is then the same as in the case described above; the only difference is that now two lists of neighbours are taken into account. Two examples, for unweighted and weighted graphs, are presented in Fig.\\ref{gr_du} and Fig.\\ref{gr_dw}, respectively. The final results of the class identification procedure are presented in Tab.\\ref{clf_du} and Tab.\\ref{clf_dw}. We see that in the case of directed and unweighted graph the system size is reduced to $9$ classes, and in the case of directed and weighted graph to $12$ classes.\\\\ \n\n\n\\begin{figure}[!hptb]\n\\begin{center}\n\\includegraphics[width=.5\\columnwidth, angle=0]{graph_du.ps}\n\\caption{The example of the directed and unweighted graph}\n\\label{gr_du}\n\\end{center}\n\\end{figure}\n\n\\begin{table*}\n\\begin{tabular}{c|*9{c}}\nnode number&\\multicolumn{9}{c}{neighbours of a given node}\\\\\\hline\n0&\\textbf{2}&4&\\textbf{5}&6&7&9&&&\\\\\n1&2&4&5&7&&&&&\\\\\n2&\\textbf{0}&1&\\textbf{3}&9&10&11&12&&\\\\\n3&\\textbf{2}&4&\\textbf{5}&6&7&9&&&\\\\\n4&0&1&3&6&10&11&12&&\\\\\n5&\\textbf{0}&1&\\textbf{3}&6&10&11&14&&\\\\\n6&0&3&4&5&8&12&14&15&\\\\\n7&0&1&3&9&10&11&14&&\\\\\n8&6&9&&&&&&&\\\\\n9&0&2&3&7&8&12&14&15&\\\\\n10&2&4&5&7&12&14&&&\\\\\n11&2&4&5&7&12&14&&&\\\\\n12&2&4&6&9&10&11&13&15&\\\\\n13&12&14&&&&&&&\\\\\n14&5&6&7&9&10&11&13&15&\\\\\n15&6&9&12&14&&&&&\\\\\n\\end{tabular}\n\\caption{The example of the undirected and weighted graph; in bold connections with different weights are marked}\n\\label{tab_uw}\n\\end{table*}\n\n\\begin{table*}\n{\\scriptsize{\n\\begin{tabular}{ccc}\n\\hspace{-2cm}\n\\begin{tabular}{c|*8{c}}\nnode class&\\multicolumn{8}{c}{classes of neighbouring nodes}\\\\\\hline\n\\multicolumn{9}{c}{first 
iteration}\\\\\\hline\nC&D&D&\\textbf{D}&\\textbf{D}&E&E\\\\\nB&D&D&D&D\\\\\nD&B&C&C&\\textbf{C}&\\textbf{C}&E&E\\\\\nC&D&D&\\textbf{D}&\\textbf{D}&E&E\\\\\nD&B&C&C&C&C&E&E\\\\\nD&B&C&C&\\textbf{C}&\\textbf{C}&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nD&B&C&C&C&C&E&E\\\\\nA&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nC&D&D&D&D&E&E\\\\\nC&D&D&D&D&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nA&E&E\\\\\nE&A&B&C&C&D&D&E&E\\\\\nB&E&E&E&E\\\\\\hline\n\\multicolumn{9}{c}{third iteration}\\\\\\hline\nC21&D11&D11&\\textbf{D21}&\\textbf{D21}&E12&E12\\\\\nB11&D11&D11&D21&D21\\\\\nD21&B11&C11&C11&\\textbf{C21}&\\textbf{C21}&E11&E12\\\\\nC21&D11&D11&\\textbf{D21}&\\textbf{D21}&E12&E12\\\\\nD11&B11&C11&C11&C21&C21&E11&E12\\\\\nD21&B11&C11&C11&\\textbf{C21}&\\textbf{C21}&E11&E12\\\\\nE12&A11&B21&C21&C21&D11&D21&E11&E11\\\\\nD11&B11&C11&C11&C21&C21&E11&E12\\\\\nA11&E12&E12\\\\\nE12&A11&B21&C21&C21&D11&D21&E11&E11\\\\\nC11&D11&D11&D21&D21&E11&E11\\\\\nC11&D11&D11&D21&D21&E11&E11\\\\\nE11&A11&B21&C11&C11&D11&D21&E12&E12\\\\\nA11&E11&E11\\\\\nE11&A11&B21&C11&C11&D11&D21&E12&E12\\\\\nB21&E11&E11&E12&E12\\\\\n\\end{tabular}\n&&\n\\begin{tabular}{c|*8{c}}\nnode class&\\multicolumn{8}{c}{classes of neighbouring nodes}\\\\\\hline\n\\multicolumn{9}{c}{second iteration}\\\\\\hline\nC2&D1&D1&\\textbf{D2}&\\textbf{D2}&E1&E1\\\\\nB1&D1&D1&D2&D2\\\\\nD2&B1&C1&C1&\\textbf{C2}&\\textbf{C2}&E1&E1\\\\\nC2&D1&D1&\\textbf{D2}&\\textbf{D2}&E1&E1\\\\\nD1&B1&C1&C1&C2&C2&E1&E1\\\\\nD2&B1&C1&C1&\\textbf{C2}&\\textbf{C2}&E1&E1\\\\\nE1&A1&B2&C2&C2&D1&D2&E1&E1\\\\\nD1&B1&C1&C1&C2&C2&E1&E1\\\\\nA1&E1&E1\\\\\nE1&A1&B2&C2&C2&D1&D2&E1&E1\\\\\nC1&D1&D1&D2&D2&E1&E1\\\\\nC1&D1&D1&D2&D2&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D2&E1&E1\\\\\nA1&E1&E1\\\\\nE1&A1&B2&C1&C1&D1&D2&E1&E1\\\\\nB2&E1&E1&E1&E1\\\\\\hline\n\\multicolumn{9}{c}{fourth 
iteration}\\\\\\hline\nC211&D111&D111&\\textbf{D211}&\\textbf{D211}&E121&E121\\\\\nB111&D111&D111&D211&D211\\\\\nD211&B111&C111&C111&\\textbf{C211}&\\textbf{C211}&E111&E121\\\\\nC211&D111&D111&\\textbf{D211}&\\textbf{D211}&E121&E121\\\\\nD111&B111&C111&C111&C211&C211&E111&E121\\\\\nD211&B111&C111&C111&\\textbf{C211}&\\textbf{C211}&E111&E121\\\\\nE121&A112&B211&C211&C211&D111&D211&E111&E111\\\\\nD111&B111&C111&C111&C211&C211&E111&E121\\\\\nA112&E121&E121\\\\\nE121&A112&B211&C211&C211&D111&D211&E111&E111\\\\\nC111&D111&D111&D211&D211&E111&E111\\\\\nC111&D111&D111&D211&D211&E111&E111\\\\\nE111&A111&B211&C111&C111&D111&D211&E121&E121\\\\\nA111&E111&E111\\\\\nE111&A111&B211&C111&C111&D111&D211&E121&E121\\\\\nB211&E111&E111&E121&E121\\\\\n\\end{tabular}\n\\end{tabular}\n\\caption{Steps of classes identification procedure for the example of the undirected and weighted graph presented in Tab.\\ref{tab_uw}; in bold connections with different weights are marked}\n\\label{cl_uw}\n}}\n\\end{table*}\n\n\\begin{table*}\n\\begin{tabular}{c|*8{c}}\nnode class&\\multicolumn{8}{c}{classes of neighbouring nodes}\\\\\\hline\nA111&E111&E111\\\\\nA112&E121&E121\\\\\nB111&D111&D111&D211&D211\\\\\nB211&E111&E111&E121&E121\\\\\nC111&D111&D111&D211&D211&E111&E111\\\\\nC211&D111&D111&\\textbf{D211}&\\textbf{D211}&E121&E121\\\\\nD111&B111&C111&C111&C211&C211&E111&E121\\\\\nD211&B111&C111&C111&\\textbf{C211}&\\textbf{C211}&E111&E121\\\\\nE111&A111&B211&C111&C111&D111&D211&E121&E121\\\\\nE121&A112&B211&C211&C211&D111&D211&E111&E111\\\\\n\\end{tabular}\n\\caption{The final class set for the example of the undirected and weighted graph presented in Tab.\\ref{tab_uw}; in bold connections with different weights are marked}\n\\label{clf_uw}\n\\end{table*}\n\n\n\n\n\\begin{table*}\n\\begin{tabular}{c|*8{c}|*8{c}}\nnode&\\multicolumn{16}{c}{classes of neighbouring nodes}\\\\\nclass&\\multicolumn{8}{c|}{in-neighbours}&\\multicolumn{8}{c}{out-neighbours}\\\\\\hline\nA1&E1&E1& & & & & & 
&E1&E1\\\\\nA2&G1&G1& & & & & & &G1&G1\\\\\nB1&E1&E1&G1&G1& & & & &E1&E1&G1&G1\\\\\nB2&F1&F1&F1&F1& & & & &F1&F1&F1&F1\\\\\nC1&E1&E1&F1&F1&F1&F1& & &F1&F1&F1&F1\\\\\nD1&F1&F1&F1&F1&G1&G1& & &F1&F1&F1&F1&G1&G1\\\\\nE1&A1&B1&F1&F1&G1&G1& & &A1&B1&C1&C1&F1&F1&G1&G1\\\\\nF1&B2&C1&C1&D1&D1&E1&G1& &B2&C1&C1&D1&D1&E1&G1\\\\\nG1&A2&B1&D1&D1&E1&E1&F1&F1&A2&B1&D1&D1&E1&E1&F1&F1\\\\\n\\end{tabular}\n\\caption{The final class set for the example of the directed and unweighted graph presented in Fig.\\ref{gr_du}}\n\\label{clf_du}\n\\end{table*}\n\n\n\\begin{figure}[!hptb]\n\\begin{center}\n\\includegraphics[width=.5\\columnwidth, angle=0]{graph_dw.ps}\n\\caption{The example of the directed and weighted graph, by dashed lines edges with different weights are marked}\n\\label{gr_dw}\n\\end{center}\n\\end{figure}\n\n\n\\begin{table*}\n{\\scriptsize{\n\\begin{tabular}{c|*7{c}|*8{c}}\nnode&\\multicolumn{15}{c}{classes of neighbouring nodes}\\\\\nclass&\\multicolumn{7}{c|}{in-neighbours}&\\multicolumn{8}{c}{out-neighbours}\\\\\\hline\nA11 &B11 &B12 & & & & & &B11 &B12\\\\\nA21 &H11 &H11 & & & & & &H11 &H11\\\\\nB11 &A11 &G11 &G11 & & & & &A11 &C11 &F11 &F11 &G11 &G11 &H11 &H11\\\\\nB12 &A11 &G21 &G21 & & & & &A11 &C11 &F11 &F11 &G21 &G21 &H11 &H11\\\\\nC11 &B11 &B12 &H11 &H11 & & & &H11 &H11\\\\ \nD11 &G11 &G11 &\\textbf{G21} &\\textbf{G21} & & & &G11 &G11 &\\textbf{G21} &\\textbf{G21}\\\\\nE11 &G11 &G11 &G21 &G21 &H11 &H11 & &H11 &H11\\\\\nF11 &B11 &B12 &G11 &G11 &G21 &G21 & &G11 &G11 &G21 &G21\\\\\nF21 &G11 &G11 &\\textbf{G21} &\\textbf{G21} &H11 &H11 & &G11 &G11 &\\textbf{G21} &\\textbf{G21}\\\\\nG11 &B11 &D11 &F11 &F11 &F21 &H11 & &B11 &D11 &E11 &F11 &F11 &F21 &H11\\\\\nG21 &B12 &\\textbf{D11} &F11 &F11 &\\textbf{F21} &H11 & &B12 &\\textbf{D11} &E11 &F11 &F11 &\\textbf{F21} &H11\\\\\nH11 &A21 &B11 &B12 &C11 &E11 &G11 &G21 &A21 &C11 &E11 &F21 &G11 &G21\\\\\n\\end{tabular}\n\\caption{The final class set for the example of the directed and weighted graph presented in 
Fig.\\ref{gr_dw}; connections with different weights are marked in bold}\n\\label{clf_dw}\n}}\n\\end{table*}\n\nThe rate of the system size reduction depends on the structure of the particular network. However, as will be shown in Sec.\\ref{res}, in the case of large, real systems it can be significant.\n\n\\section{Equivalence of the representation of the system by classes to the representation by states}\nThe usefulness of the method arises from the fact that, as long as we are interested in properties of the system in the stationary state, the representation of the system by means of classes is equivalent to the representation of the system by means of states. The core of this equivalence is that the probability of a given class equals the sum of the probabilities of the states which form this class. In the simplest case, if all states of the system are equally probable, the class probability $p_c$ is expressed as $p_c=n\/N$, where $n$ is the number of states which belong to a given class and $N$ is the system size: the number of nodes. In general, the probabilities of particular states can be calculated from the transition matrix \\cite{kamp}. This matrix should be constructed both for the network of states and for the network of classes. In the transition matrix, the $i$-th column is related to the rates of possible transitions from the $i$-th state\/class to other states\/classes of the system. The probabilities of particular states\/classes are obtained from the components of the normalized dominant eigenvector associated with the dominant eigenvalue $1$ of the appropriate transition matrix. 
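As a minimal numerical sketch of this eigenvector route (using a hypothetical five-state chain with random-walk transitions, not one of the systems analysed in this paper), the stationary distribution can be read off from the eigenvector of the eigenvalue $1$:

```python
import numpy as np

# A small hypothetical state graph: a path 0-1-2-3-4.  By symmetry the
# states fall into 3 classes: the ends {0,4}, the inner pair {1,3}, {2}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
n = len(adj)

# Transition matrix: column i holds the rates of transitions out of
# state i (here: a random walk to a uniformly chosen neighbour).
P = np.zeros((n, n))
for i, nbrs in adj.items():
    for j in nbrs:
        P[j, i] = 1.0 / len(nbrs)

# Stationary distribution = eigenvector of the dominant eigenvalue 1.
vals, vecs = np.linalg.eig(P)
k = np.argmin(np.abs(vals - 1.0))
pi = np.real(vecs[:, k])
pi /= pi.sum()                      # normalise to a probability vector

# States within one class carry equal probability ...
assert np.isclose(pi[0], pi[4]) and np.isclose(pi[1], pi[3])
# ... so each class probability is (state probability) x (class size).
p_class = {'ends': 2 * pi[0], 'inner': 2 * pi[1], 'centre': pi[2]}
print({key: round(v, 3) for key, v in p_class.items()})
```

Here the two end states form one class and indeed carry equal stationary probability, so the class probabilities are obtained by simple multiplication.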
As a criterion of the correctness of the obtained classification of the state space into classes, one may check that all states which belong to a given class have the same probability, and that the probability of each class is equal to the probability of a single state it contains multiplied by the number of states it contains.\\\\\n\nThis method of probability determination cannot, however, be applied if more than one absorbing state is present in the system; such states act as traps. An absorbing state is a state which can be reached from other states of the system but cannot itself be transformed into any other state. In such a case, the transition matrix contains columns which consist of zeros except for a single unity as the diagonal term. The final stationary probability distribution then depends on the initial distribution, and the transition matrix cannot be the basis of the calculation of the probabilities. To solve the problem, the set of master equations \\cite{kamp} can be used: \\[\\dfrac{dP_i(t)}{dt}=\\sum\\limits_{j\\in S_i}P_j(t)w_{j\\rightarrow i}-\\sum\\limits_{j\\in S_i}P_i(t)w_{i\\rightarrow j}\\] In the above equation we sum over the set $S_i$ of nearest neighbours of a given node $i$, and $w$ is the weight of a given edge in the considered graph. The probabilities can then be found from its solution in the limit of asymptotically long times, given the initial distribution.\\\\\n\nFor some directed networks it is possible that some states are neither end points of the evolution nor elements of limit cycles. Accordingly, the probabilities of these states tend to zero in time. One of our examples is cellular automata, where these states are known as \"Garden of Eden\" states \\cite{wolf}.\n\n\n\\section{Examples of applications of the method}\n\\label{res}\nAs mentioned above, the method of reducing the system size by representing it by means of classes is general. 
Below we present a brief review of the systems to which the method has been applied, together with the main results obtained for each system. Details are presented in the cited literature. \n\\begin{enumerate}\n\\item\\textit{Ising and Potts models in spatial structures with geometrical frustration} \\cite{mk1} We analyse all periodic states of small pieces of some spatial structures for the Ising model with two possible spin orientations and the Potts model with three possible spin orientations, with antiferromagnetic interaction. The first analysed system is the triangular lattice with periodic boundary conditions. In this case each node has six neighbours. The system properties, including the frustration effect, strongly depend on the size of the system and the model used. The second analysed lattice is the cubic Laves phase $C15$. An example is the $YMn_2$ intermetallic \\cite{ymn}, where $16$ Mn atoms form four tetrahedra. Similarly to the triangular lattice, each node has six neighbours, but their mutual positions form a three-dimensional structure. The structure of the lattice causes the spins in the system to be frustrated, both for $2$ and $3$ possible spin orientations. For both systems, all ground states which fulfil the periodic boundary conditions are listed. Those ground states are used as nodes of the constructed network, while edges are defined by single spin flips which transform one ground state of the system into another. The obtained network is undirected and unweighted. The reduction of the system size due to symmetry is possible in the case of the triangular lattice with Ising antiferromagnetic interactions; there, the system size is reduced from $3\\;630$ states to $12$ classes for $25$ atoms, and from $263\\;640$ states to $409$ classes for $36$ atoms. 
In the case of the Potts model in the $C15$ structure, our procedure allows us to reduce the system size from $90\\;936$ states to $28$ classes.\n\n\\item\\textit{Traffic system} \\cite{mk2} We analyse the network formed by the state space of vehicles on a small roundabout. In our system, there are three access and three exit roads. The maximal number of vehicles on each road is equal to $2$, so on each road two, one, or no cars are permitted. The state of the system is denoted as a sequence of six symbols, where each symbol reflects the current state of one road. As at most $2$ vehicles are allowed on each road at the same time, a new car can appear on an access road only if its current occupation is less than two. Neither the change of the state of any access road from $0$ to $1$ or from $1$ to $2$ nor the decrease of the number of vehicles on an exit road influences the state of the remaining roads of the system. A shift of a vehicle from an access to an exit road is possible if the access road is occupied by at least one car and the exit road is occupied by at most one car. When the roundabout is empty, a car can appear on any of the access roads with equal \nprobability. If at least one vehicle is on an access road, a passage is possible through the roundabout to each of the exit roads occupied by at most one vehicle. The probability of a passage to a given exit road depends on the distance to cover (the distance covered on the roundabout is treated by analogy with electrical resistance). The assumed values of the model parameters - the number of roads and the number of permitted vehicles - mean that the whole state space contains $3^6=729$ states, which determine the nodes of the network. Any two nodes (states) are connected if a change of the position of one vehicle transforms one of those states into the other. The obtained network of states is directed and weighted. 
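Ignoring the passage weights, the construction of this state space can be sketched as follows (an illustrative re-implementation based on the rules stated above, not the code of \\cite{mk2}):

```python
from itertools import product

# States: (a0, a1, a2, e0, e1, e2) - occupations 0..2 of the three
# access and the three exit roads of the roundabout.
states = list(product(range(3), repeat=6))          # 3**6 = 729 states

def moves(s):
    """All states reachable from s by moving a single vehicle."""
    a, e = s[:3], s[3:]
    out = []
    for i in range(3):
        if a[i] < 2:        # a new car appears on access road i
            out.append(s[:i] + (a[i] + 1,) + s[i + 1:])
        if e[i] > 0:        # a car leaves exit road i
            out.append(s[:3 + i] + (e[i] - 1,) + s[4 + i:])
    for i in range(3):      # passage from access road i to exit road j
        for j in range(3):
            if a[i] > 0 and e[j] < 2:
                t = list(s)
                t[i] -= 1
                t[3 + j] += 1
                out.append(tuple(t))
    return out

# Directed graph of states; the weights of the passages are omitted here.
graph = {s: moves(s) for s in states}
print(len(graph))           # 729
```

From the empty roundabout, for instance, only the three "car appears" transitions are possible, in agreement with the rules above.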
We note here that, unfortunately, the results presented in our paper \\cite{mk2} are inconsistent with the model description given there. In fact, the division of the $729$ possible states of the system into $55$ classes, reported in \\cite{mk2}, occurs for the case when the weights of all passages are equal. If this is not the case, and passages from \ninput to output roads have different weights - as described above - $138$ classes are obtained.\n\n\\item\\textit{Polymer chain} \\cite{mk4} We analyse the state space formed by the set of all possible conformations of a circular polymer molecule in the repton model \\cite{deg,new,el1,el2}. We assume that reptons are indistinguishable, so, as a consequence, a shift of the whole molecule along its contour does not lead to a different state. What we do distinguish, however, are the lattice cells. Yet, a translation of the whole molecule in the lattice does not generate a new state. With these assumptions, we find all possible conformations of a molecule of a given length. The length of the molecule is expressed by the number of reptons $N$. An obvious symmetry of the system, which allows for the reduction of the system size, is the symmetry of reflection in the lattice axes; all states of the reflected shape are equivalent to their initial counterparts. However, our method allows us to take into account also less trivial symmetries resulting from the similarities in the properties of particular states, which are connected with the possible transitions between the states. These similarities concern both the number of possible transitions and their probabilities. As the size of the state space depends on the molecule length and the probabilities of transitions depend on the strength and direction of the external electric field, our method allows for an analysis of the system in different motion regimes: strong field, weak field and zero field. Yet, in each case the size of the system can be reduced. 
The reduction rate depends on the current values of the model parameters. In the case of zero electric field, the network of states is undirected and unweighted. For a molecule consisting of $N=5$ reptons, $35$ states are reduced to $9$ classes, and for $N=9$, $4891$ states are reduced to $702$ classes. If the molecule is placed in a gel medium, for $N=5$ we get $31$ molecule conformations; the number of classes is equal to $6$. In the case with a field, where the network of states is directed and weighted, the number of classes depends on the direction of the applied field with respect to the square lattice. If the direction of the field is parallel to the lattice axes, $21$ classes are identified for the $5$-repton-long molecule. A slightly smaller number of classes - $14$ - is identified for field directions parallel to the diagonal of the lattice cells.\n\n\\item\\textit{Elementary cellular automata} \\cite{mk5} We analyse one-dimensional cellular automata with two possible states of each cell. The rule of the change of a cell state is determined on the basis of the state of the cell itself and its two nearest neighbours. Such a definition leads to $2^8=256$ different rules of the evolution process, the so-called elementary cellular automata \\cite{wolf1}. It has been shown that the number of unique rules is lower, as some of the rules are equivalent. As a result, the set of all elementary automata is represented by $88$ unique rules \\cite{wolf1}. We consider a system of a specified length $N$, whose state space consists of the $2^N$ possible sequences of zeros and ones, with periodic boundary conditions. Each sequence is transformed into a single successor sequence determined by the automaton rule. Each sequence may, however, be obtained by the transformation of different sequences. The exact number of states (sequences) leading to a given state depends on the automaton rule and the system size $N$. 
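A minimal sketch of such a successor map is given below; rule $90$ and a ring of $8$ cells are chosen here purely for illustration (our own example, not taken from \\cite{mk5}). States with in-degree zero are the Garden of Eden states mentioned in the previous section.

```python
n, rule = 8, 90                    # ring length and Wolfram rule number

def step(state):
    """One parallel update of an elementary CA on a ring of n cells."""
    nxt = 0
    for i in range(n):
        left  = (state >> ((i + 1) % n)) & 1
        self_ = (state >> i) & 1
        right = (state >> ((i - 1) % n)) & 1
        idx = (left << 2) | (self_ << 1) | right
        nxt |= ((rule >> idx) & 1) << i
    return nxt

# The state graph: out-degree is 1 by construction, in-degrees differ.
succ = [step(s) for s in range(2 ** n)]
indeg = [0] * 2 ** n
for t in succ:
    indeg[t] += 1
print(sum(1 for d in indeg if d == 0))   # number of Garden of Eden states
```

Rule $90$ sets each cell to the XOR of its two neighbours, so the map is linear and the in-degree of every reachable state is the same; all remaining states have no predecessor.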
\nThe state space can be represented as a directed unweighted graph, where $2^N$ nodes are identified with the possible states of the system. The out-degree of all nodes is equal to $1$, as each state can be transformed into only one other state, while the in-degree differs between nodes. We have shown in \\cite{mk5} that the relation of the number of classes $\\#$ to the system size $N$ has a different character for different rules. For most of the rules an exponential increase is observed, but there are also some rules which show different kinds of behaviour. Namely, besides the obvious case of automata No $0$ and $255$, a non-trivial dependence of the number of classes as a function of the system size is observed for automata No $15, 45, 60, 90, 105$ and $154$ (and their equivalent rules). In most cases, the class identification allows a significant reduction of the system size. The proposed method indicates that the number of symmetry groups is equal to $80$, as for some rules indicated as different \nby Wolfram, the same pattern $\\#(N)$ is observed.\n\n\\item\\textit{Hubbard ring} \\cite{mk6} We analyse the Hubbard ring, which is an example of a quantum system. We consider a ring, i.e. a one-dimensional circular chain of atoms, within the single-band Hubbard model in the atomic limit \\cite{ham}. In accordance with the Pauli principle, each atom can be occupied by at most two electrons, and if so, their spins are opposite. Then, for a given chain length we can find all possible states; they form the state space of the analysed system. The set of states can be represented in the form of a network, where each state of the system can be treated as a node. An edge appears between two nodes if one state can be obtained from the other in one of two possible processes: electron hopping and spin flip. The rates of those two processes are different, and the rate of hopping is expected to be higher than the rate of flips \\cite{kha}. 
Further, a spin can flip only if at most a single electron is located at the target atom, again because of the Pauli principle. \nHere we take into account only hopping between neighbouring atoms of the ring, as hoppings over larger distances have very low probabilities \\cite{ham}. The electron hopping occurs with no change of the spin orientation. The obtained network is undirected and weighted. In our approach, the translational symmetry is taken into account explicitly, i.e. we do not distinguish states which can be obtained from one another by a shift along the ring.\\\\\nIn our paper we analysed four cases: the full set of states and the ground states alone, in both cases with or without the duplicates which are mirror reflections of other states of the system. In all cases the ratio of the number of classes to the number of states depends on the occupation ratio (the number of electrons to twice the number of atoms). Also, as is intuitively clear, a difference is seen between the cases with and without mirror reflections. However, in all cases the reduction is significant: the ratio ranges from $0.75$ to $0.003$.\\\\\n\\end{enumerate}\n\n\\section{Boolean networks}\nThe list of examples brought up in the preceding section provides a summary of the applications of the method to date. Below we add a new application, to random Boolean networks \\cite{dros,kauf}. These networks are of interest for their relevance to genetic networks \\cite{born}.\\\\\n\nThe state space consists of the set of all possible states of a Boolean network of a given size $n$, where each node can be in one of two possible states, let us say $0$ and $1$. This means that the size of the obtained state space equals $N=2^n$. The definition of the Boolean network requires an indication of the neighbours of each node, which is done by randomly assigning a set of $k$ nodes to each node, with the restriction that a node cannot be a neighbour of itself. 
The state of each node changes in accordance with a Boolean function, which indicates the state of a given node in the next time step based on the states of its neighbours. As each node has $k$ neighbours, there are $2^{2^k}$ possible Boolean functions, one of which is randomly assigned to each node. We assume that these functions do not change over time, and the states of all nodes are updated in parallel. A change of the states of all nodes leads to another state of the state space. As the process is deterministic, each state is transformed into exactly one state, while a given state can be obtained from more than one state. The obtained graph is directed and unweighted. In accordance with what was written in Sec.\\ref{duw}, the class identification procedure must take into account both the in- and out-neighbour lists.\\\\\n\nThe Boolean networks are somewhat similar to the cellular automata analysed in our other paper \\cite{mk5}; in the latter case, however, the state of a node in the next generation depends not only on the states of its neighbours but also on its own state. The main difference is that the rules of cellular automata are the same for each cell, while in the case of the Boolean networks the change of the node state depends on the Boolean function chosen for this node, so for the same structure of the network the connections between particular states may be different. \\\\\n\nAn example of the network for the state space of size $N=2^{10}$ states, formed from a Boolean network with $n=10$ and $k=3$, where $n$ is the number of nodes and $k$ is the node degree, is presented in Fig.\\ref{netS}. 
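The construction of such a state graph can be sketched as follows (our illustrative code, not that of the original study; the random choice of inputs and Boolean functions is fixed by a seed, and $k=2$ is used to keep the truth tables small):

```python
import random

random.seed(1)
n, k = 10, 2                  # network size and node in-degree

# Each node gets k random input nodes (not itself) and a random Boolean
# function, stored as a truth table over the 2**k input combinations.
inputs = [random.sample([j for j in range(n) if j != i], k)
          for i in range(n)]
tables = [[random.randint(0, 1) for _ in range(2 ** k)]
          for i in range(n)]

def successor(state):
    """Parallel update: every node reads its k inputs, applies its rule."""
    nxt = 0
    for i in range(n):
        idx = 0
        for j in inputs[i]:
            idx = (idx << 1) | ((state >> j) & 1)
        nxt |= tables[i][idx] << i
    return nxt

# The state graph: 2**n nodes, each with out-degree exactly 1.
edges = {s: successor(s) for s in range(2 ** n)}
print(len(edges))             # 1024 states
```

Class identification then proceeds as in Sec.\\ref{duw}, using for each state both its single out-neighbour and its (possibly empty) list of in-neighbours.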
The related network of classes is presented in Fig.\\ref{netC}.\n\n\\begin{figure}[!hptb]\n\\begin{center}\n\\includegraphics[width=.8\\columnwidth, angle=270]{stany1.ps}\n\\caption{Example network for the state space of size $N=2^{10}$ states, formed from a Boolean network for $n=10$ and $k=3$, where $n$ - number of nodes, $k$ - node degree.}\n\\label{netS}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!hptb]\n\\begin{center}\n\\includegraphics[width=.7\\columnwidth, angle=270]{klasy.ps}\n\\caption{Network of classes for the network presented in Fig.\\ref{netS}.}\n\\label{netC}\n\\end{center}\n\\end{figure}\n\nFig.\\ref{N10_k2} presents the reduction of the system size expressed as the ratio $N_c\/N_s$, where $N_c$ is the number of classes and $N_s$ is the number of states, for the state space of the Boolean networks which consists of $N=2^{10}$ states, with node degree $k=2$ and $5\\times10^3$ repetitions. The average value is equal to $0.13\\pm0.09$. If we change the node degree in the Boolean networks to $k=3$, the reduction rate is lower, as presented in Fig.\\ref{N10_k3}. In this case, the average value is equal to $0.39\\pm0.12$. 
Cycles are observed in $90$ percent of the analysed set of $5\\times10^3$ networks.\n\n\\begin{figure}[!hptb]\n\\begin{center}\n\\includegraphics[width=.8\\columnwidth, angle=270]{hist_N10_k2.ps}\n\\caption{Reduction of the system size $N_c\/N_s$ for the random Boolean networks for $N=2^{10}$ states and $k=2$ for $5\\times10^3$ repetitions, where $k$ - node degree, $N_c$ - number of classes and $N_s$ - number of states.}\n\\label{N10_k2}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[!hptb]\n\\begin{center}\n\\includegraphics[width=.8\\columnwidth, angle=270]{hist_N10_k3.ps}\n\\caption{Reduction of the system size $N_c\/N_s$ for the random Boolean networks for $N=2^{10}$ states and $k=3$ for $5\\times10^3$ repetitions, where $k$ - node degree, $N_c$ - number of classes and $N_s$ - number of states.}\n\\label{N10_k3}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Discussion}\nThe advantage of the presented method is that it allows us to reconstruct the probability distribution of the states of the analysed system. This is done as follows: first, the graph which reflects the mutual relations between the elements of the system is constructed. Then, the obtained graph is used for the identification of classes of states, which reduces the graph and yields an equivalent representation of the system by classes. For the obtained network of classes one can calculate the probability distribution. When this is done, the probability distribution of the original system can be obtained, as the class probability is equal to the probability of the states which form a given class multiplied by the number of states in this class.\\\\\nAs shown, the presented method of symmetry-driven reduction of the system size is general, and it can be applied to different kinds of discrete systems, not only those studied in classical physics. Here, we have presented six examples of systems to which the method has been applied.
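The final reconstruction step described above can be sketched directly: since all states in a class are equiprobable, each state of a class receives the class probability divided by the class size. The following fragment uses hypothetical class data of our own; the class labels and numbers are illustrative only.

```python
def state_probabilities(class_prob, class_members):
    """Map a stationary distribution over classes back to states:
    every state of class c gets class_prob[c] / len(class_members[c])."""
    p = {}
    for c, members in class_members.items():
        share = class_prob[c] / len(members)
        for s in members:
            p[s] = share
    return p

# Hypothetical reduced system: class 'a' groups 3 states, class 'b' one state.
probs = state_probabilities({'a': 0.6, 'b': 0.4}, {'a': [0, 1, 2], 'b': [3]})
```

The recovered state probabilities again sum to one, so any stationary analysis can be carried out on the reduced representation and mapped back afterwards.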
In all cases the method ensures that the probability of each state which belongs to a given class is the same. Thus, any analysis which concerns a stationary probability distribution of states of the analysed system can be performed using its reduced, equivalent representation. The rate of the system size reduction depends on the considered system, but in most cases it is significant.\n\n\n\\section*{Acknowledgments}\nThe author is grateful to Krzysztof~Ku\u0142akowski for critical reading of the manuscript and helpful discussions. The author is also grateful to the anonymous Referee for valuable comments that improved the manuscript. This research was supported in part by PL-Grid Infrastructure and in part by the Ministry of Science and Higher Education (MNiSW).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $(X_{1},Y_{1}),(X_{2},Y_{2}),...,(X_{n},Y_{n})$ be independent and\nidentically distributed (iid) random vectors with joint distribution function\n(cdf) $F(x,y)$ and $X_{1:n}\\leq X_{2:n}\\leq\\cdots\\leq X_{n:n}$ be the order\nstatistics of the sample of the first coordinates $X_{1},X_{2},...,X_{n}.$ Denote\nthe $Y$-variate associated with $X_{i:n}$ by $Y_{[i:n]},$ $i=1,2,...,n$,\ni.e. $Y_{[i:n]}=Y_{k}$ iff $X_{i:n}=X_{k}.$ The random variables\n$Y_{[1:n]},Y_{[2:n]},...,Y_{[n:n]}$ are called concomitants of the order\nstatistics $X_{1:n},X_{2:n},...,X_{n:n}.$ The theory of order statistics is\nwell documented in David (1981), David and Nagaraja (2003), and Arnold et al.\n(1992). The concomitants of order statistics are described in David (1973),\nBhattacharya (1974), David and Galambos (1974), David and Nagaraja (1998),\nand Wang (2008). Denote by $F_{i:n}(x,y)$ the joint distribution of the order\nstatistic $X_{i:n}$ and its concomitant $Y_{[i:n]}.$\n\nLet $X_{1},X_{2},...,X_{n}$ be a univariate sample with cdf $F(x)$ and\n$F_{i:n}(x)=P\\{X_{i:n}\\leq x\\},$ where $X_{i:n}$ is the $i$th order statistic\nof this sample.
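The definition of concomitants can be made concrete with a short sketch (our own illustrative fragment, not from the paper): sorting the sample pairs by the first coordinate yields the order statistics $X_{i:n}$, and the second coordinates taken in that order are the concomitants $Y_{[i:n]}$.

```python
def concomitants(pairs):
    """Order the sample by the X-coordinate; the Y-values taken in the
    same order are the concomitants Y_[1:n], ..., Y_[n:n]."""
    ordered = sorted(pairs, key=lambda xy: xy[0])
    return [x for x, _ in ordered], [y for _, y in ordered]

# Toy sample of three (X, Y) pairs.
x_ord, y_conc = concomitants([(2.0, 5.0), (1.0, 7.0), (3.0, 4.0)])
```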
Recently, Bairamov (2011) considered mixtures of distribution\nfunctions of order statistics $K_{n}(x):=\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{i:n}(x)$ and $H_{n}(x):=\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{n-i+1:n}(x)$ and, using inequalities of majorization theory, showed that\nfor a particular choice of $p_{i}$'s, $H_{n}(x)\\leq F(x)\\leq K_{n}(x)$ for\nall $x\\in\n\\mathbb{R}\n$ and the $L_{2}$ distance between $H_{n}(x)$ and $K_{n}(x)$ can be made\nsufficiently small. In other words, there exists a sequence $(p_{1},p_{2},...,p_{n})=(p_{1}(m),p_{2}(m),...,p_{n}(m))$ such that for this\nsequence $H_{n}(x)$ and $K_{n}(x)$ converge to $F(x)$ as $m\\rightarrow\\infty$\nwith rate $o(1\/m^{1-\\alpha}),$ $0<\\alpha<1$, and the $L_{1}$ distance between\n$H_{n}(x)$ and $K_{n}(x)$ can be made as small as required.\n\nIn this paper we extend the results presented in Bairamov (2011) to the\nbivariate case. We consider mixtures of joint distribution functions of order\nstatistics and concomitants $K_{n}(x,y):=\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{i:n}(x,y)$ and $H_{n}(x,y):=\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{n-i+1:n}(x,y).$ Using majorization inequalities it is shown that for\na particular sequence $(p_{1},p_{2},...,p_{n})=(p_{1}(m),p_{2}(m),...,p_{n}(m)),$ $m=1,2,...$, it is true that $H_{n}(x,y)\\leq\nF(x,y)\\leq K_{n}(x,y)$ for all $(x,y)\\in\n\\mathbb{R}\n^{2}$ and the distance between $H_{n}(x,y)$ and $K_{n}(x,y)$ goes to zero as\n$m\\rightarrow\\infty.$\n\n\\section{Auxiliary Results}\n\nLet $\\mathbf{a}=(a_{1},a_{2},...,a_{n})\\in\n\\mathbb{R}\n^{n}$, $\\mathbf{b}=(b_{1},b_{2},...,b_{n})\\in\n\\mathbb{R}\n^{n}$ and $a_{[1]}\\geq a_{[2]}\\geq\\cdots\\geq a_{[n]}$ denote the components of\n$\\mathbf{a}$ in decreasing order. 
The vector $\\mathbf{a}$ is said to be\nmajorized by the vector $\\mathbf{b}$, denoted by $\\mathbf{a}\\prec\\mathbf{b}$, if\n\\[\n\\sum_{i=1}^{k}a_{[i]}\\leq\\sum_{i=1}^{k}b_{[i]}\\text{ for }k=1,2,\\cdots,n-1\n\\]\nand\n\\[\n\\sum_{i=1}^{n}a_{[i]}=\\sum_{i=1}^{n}b_{[i]}.\n\\]\nThe details of the theory of majorization can be found in Marshall et al.\n(2011). The following two theorems are important for our study.\n\n\\begin{proposition}\n\\label{Proposition 1} Denote $D=\\{(x_{1},x_{2},...,x_{n}):x_{1}\\geq x_{2}\\geq\\cdots\\geq x_{n}\\},$ $\\mathbf{a}=(a_{1},a_{2},...,a_{n}),$ $\\mathbf{b}=(b_{1},b_{2},...,b_{n}).$ The inequality\n\\[\n\\sum_{i=1}^{n}a_{i}x_{i}\\leq\\sum_{i=1}^{n}b_{i}x_{i}\n\\]\nholds for all $(x_{1},x_{2},...,x_{n})\\in D$ if and only if $\\mathbf{a}\\prec\\mathbf{b}$ in $D$ (Marshall et al. 2011, page 160).\n\\end{proposition}\n\n\\begin{proposition}\n\\label{Proposition 2}The inequality\n\\[\n\\sum_{i=1}^{n}a_{i}x_{i}\\leq\\sum_{i=1}^{n}b_{i}x_{i}\n\\]\nholds whenever $x_{1}\\leq x_{2}\\leq\\cdots\\leq x_{n}$ if and only if\n\\begin{align*}\n{\\displaystyle\\sum\\limits_{i=1}^{k}}\na_{i} & \\geq\n{\\displaystyle\\sum\\limits_{i=1}^{k}}\nb_{i},\\text{ }k=1,2,...,n-1,\\\\%\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\na_{i} & =\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\nb_{i}.\n\\end{align*}\n(Marshall et al. 2011, page 639).\n\\end{proposition}\n\n\\section{Main results in bivariate case}\n\nLet $(X,Y)$ be an absolutely continuous random vector with joint cdf $F(x,y)$ and\nprobability density function (pdf) $f(x,y)$.
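The majorization relation defined in the Auxiliary Results can be checked numerically straight from the partial-sum definition. The following is an illustrative sketch of our own (function names are ours, not from the paper); it also confirms the fact, used repeatedly below, that the uniform vector $(1/n,\dots,1/n)$ is majorized by every probability vector.

```python
def majorizes(b, a):
    """Return True iff a is majorized by b (a < b in the majorization
    order): partial sums of the decreasing rearrangement of b dominate
    those of a, and the total sums coincide."""
    a_dec, b_dec = sorted(a, reverse=True), sorted(b, reverse=True)
    ta = tb = 0.0
    for x, y in zip(a_dec[:-1], b_dec[:-1]):
        ta, tb = ta + x, tb + y
        if ta > tb + 1e-12:
            return False
    return abs(sum(a_dec) - sum(b_dec)) < 1e-9

# The uniform vector (1/n, ..., 1/n) is majorized by every probability vector.
assert majorizes([0.5, 0.3, 0.2], [1/3, 1/3, 1/3])
```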
Let $(X_{1},Y_{1}),(X_{2},Y_{2}),...,(X_{n},Y_{n})$ be independent copies of $(X,Y).$ Let $X_{r:n}$ be\nthe $r$th order statistic and $Y_{[r:n]}$ be its concomitant, i.e.\n$Y_{[r:n]}=Y_{i}$ iff $X_{r:n}=X_{i}.$ The joint distribution of $X_{r:n}$\nand $Y_{[r:n]}$ can be easily derived and it is\n\\begin{align*}\nF_{r:n}(x,y) & =B_{n}\n{\\displaystyle\\int\\limits_{-\\infty}^{x}}\nF_{X}^{r-1}(u)(1-F_{X}(u))^{n-r}F(du,y)du\\\\\n& =B_{n}\n{\\displaystyle\\int\\limits_{-\\infty}^{x}}\nF_{X}^{r-1}(u)(1-F_{X}(u))^{n-r}\\left(\n{\\displaystyle\\int\\limits_{-\\infty}^{y}}\nf(u,v)dv\\right) du,\n\\end{align*}\nwhere\n\\[\nF(du,y)=\\frac{\\partial}{\\partial u}F(u,y)=\n{\\displaystyle\\int\\limits_{-\\infty}^{y}}\nf(u,v)dv,\\text{ and }B_{n}=n\\binom{n-1}{r-1}.\n\\]\nIt is easy to check that\n\\begin{equation}\n\\frac{1}{n}\n{\\displaystyle\\sum\\limits_{r=1}^{n}}\nF_{r:n}(x,y)=F(x,y). \\label{d1}\n\\end{equation}\n\n\\begin{lemma}\n\\label{Lemma 1}$F_{r+1:n}(x,y)\\leq F_{r:n}(x,y)$, $r=1,2,...,n-1,$ for\nall $(x,y)\\in\n\\mathbb{R}\n^{2}.$\n\\end{lemma}\n\n\\begin{proof}\nWe have\n\\begin{align}\n& F_{r+1:n}(x,y)-F_{r:n}(x,y)\\nonumber\\\\\n& =n\\binom{n-1}{r}\n{\\displaystyle\\int\\limits_{-\\infty}^{x}}\nF_{X}^{r}(u)(1-F_{X}(u))^{n-r-1}F(du,y)du\\nonumber\\\\\n& -n\\binom{n-1}{r-1}\n{\\displaystyle\\int\\limits_{-\\infty}^{x}}\nF_{X}^{r-1}(u)(1-F_{X}(u))^{n-r}F(du,y)du\\nonumber\\\\\n& =\n{\\displaystyle\\int\\limits_{-\\infty}^{x}}\n\\left[ n\\binom{n-1}{r}F_{X}^{r}(u)(1-F_{X}(u))^{n-r-1}-\\right. \\nonumber\\\\\n& \\left. n\\binom{n-1}{r-1}F_{X}^{r-1}(u)(1-F_{X}(u))^{n-r}\\right]\nF(du,y)du\\nonumber\\\\\n& =\n{\\displaystyle\\int\\limits_{0}^{F_{X}(x)}}\n\\left[ n\\binom{n-1}{r}t^{r}(1-t)^{n-r-1}\\right. \\label{d2}\\\\\n& \\left. 
-n\\binom{n-1}{r-1}t^{r-1}(1-t)^{n-r}\\right] F(dF_{X}^{-1}(t),y)dt,\\nonumber\n\\end{align}\nwhere\n\\[\nF(dF_{X}^{-1}(t),y)=\n{\\displaystyle\\int\\limits_{-\\infty}^{y}}\nf(F_{X}^{-1}(t),v)dv.\n\\]\nSince\n\\[\nh(t):=n\\binom{n-1}{r}t^{r}(1-t)^{n-r-1}-n\\binom{n-1}{r-1}t^{r-1}(1-t)^{n-r}\\text{ and }g(t):=F(dF_{X}^{-1}(t),y)\n\\]\nare both bounded integrable functions of $t\\in\\lbrack0,F_{X}(x)],$ for\nall $x,y\\in\n\\mathbb{R}\n,$ and $g(t)$ does not change sign in this interval, then by the first mean\nvalue theorem for integrals (see Gradshteyn and Ryzhik (2007), 12.1, page\n1053)\n\\[\n{\\displaystyle\\int\\limits_{0}^{F_{X}(x)}}\nh(t)g(t)dt=g(\\xi_{x})\n{\\displaystyle\\int\\limits_{0}^{F_{X}(x)}}\nh(t)dt,\n\\]\nwhere $0\\leq\\xi_{x}\\leq F_{X}(x),$ $x\\in\n\\mathbb{R}\n,$ $g(\\xi_{x})\\geq0.$ The last equality together with (\\ref{d2}) leads to\n\\begin{align*}\n& F_{r+1:n}(x,y)-F_{r:n}(x,y)\\\\\n& =g(\\xi_{x})\n{\\displaystyle\\int\\limits_{0}^{F_{X}(x)}}\n\\left[ n\\binom{n-1}{r}t^{r}(1-t)^{n-r-1}-n\\binom{n-1}{r-1}t^{r-1}(1-t)^{n-r}\\right] dt\\\\\n& =g(\\xi_{x})\\left\\{\n{\\displaystyle\\int\\limits_{0}^{F_{X}(x)}}\nn\\binom{n-1}{r}t^{r}(1-t)^{n-r-1}dt-\n{\\displaystyle\\int\\limits_{0}^{F_{X}(x)}}\n\\frac{n!}{(r-1)!(n-r)!}t^{r-1}(1-t)^{n-r}dt\\right\\} \\\\\n& =g(\\xi_{x})[P\\{X_{r+1:n}\\leq x\\}-P\\{X_{r:n}\\leq x\\}]\\leq0.\n\\end{align*}\n\n\\end{proof}\n\nDenote\n\\[\nD_{+}^{1}=\\{(x_{1},x_{2},...,x_{n}):x_{i}\\geq0,i=1,2,...,n;\\text{ }x_{1}\\geq\nx_{2}\\geq\\cdots\\geq x_{n},\\sum_{i=1}^{n}x_{i}=1\\}.\n\\]\n\n\\begin{lemma}\n\\label{Lemma 2}Let $(p_{1},p_{2},...,p_{n})\\in D_{+}^{1}$.
Then\n\\begin{align}\nH_{n}(x,y) & \\equiv\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{n-i+1:n}(x,y)\\leq F(x,y)\\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{i:n}(x,y)\\equiv K_{n}(x,y)\\label{b5}\\\\\n\\text{for all }(x,y) & \\in\n\\mathbb{R}\n^{2}\\nonumber\n\\end{align}\nand the equality holds if and only if $(p_{1},p_{2},\\cdots,p_{n})=(\\frac{1}{n},\\frac{1}{n},...,\\frac{1}{n}).$ Furthermore, if $\\mathbf{p}=(p_{1},p_{2},...,p_{n})\\in D_{+}^{1},$ $\\mathbf{q}=(q_{1},q_{2},...,q_{n})\\in\nD_{+}^{1}$ and $\\mathbf{p}\\prec\\mathbf{q},$ then\n\\begin{align}\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\nq_{i}F_{n-i+1:n}(x,y) & \\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{n-i+1:n}(x,y)\\leq F(x,y)\\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}F_{i:n}(x,y)\\label{b6}\\\\\n& \\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\nq_{i}F_{i:n}(x,y).\\nonumber\n\\end{align}\n\n\\end{lemma}\n\n\\begin{proof}\nBy Lemma 1, $F_{1:n}(x,y)\\geq F_{2:n}(x,y)\\geq\\cdots\\geq F_{n:n}(x,y)$ for\nall $(x,y)\\in\n\\mathbb{R}\n^{2},$ and $(\\frac{1}{n},\\frac{1}{n},...,\\frac{1}{n})\\prec(p_{1},p_{2},...,p_{n});$ hence the right hand side of the inequality (\\ref{b5}) follows from\nProposition 1 and the left hand side follows from Proposition 2.\n\\end{proof}\n\n\\begin{theorem}\nLet $p_{i}(m)=\\frac{m+n-i+1}{a_{n}(m)},$ $i=1,2,...,n;$ $m\\in\\{0,1,2,...\\},$ where $a_{n}(m)=nm+\\frac{n(n+1)}{2}.$ Then\n\\begin{align}\nH_{n}^{(m)}(x,y) & \\equiv\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{n-i+1:n}(x,y)\\leq F(x,y)\\label{b7}\\\\\n& \\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{i:n}(x,y)\\equiv K_{n}^{(m)}(x,y)\\text{ for all }(x,y)\\in\n\\mathbb{R}\n^{2}\\nonumber\n\\end{align}\nand\n\\begin{align}\n\\underset{m\\rightarrow\\infty}{\\lim}\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{n-i+1:n}(x,y) & =\\underset{m\\rightarrow\\infty}{\\lim}\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{i:n}(x,y)\\label{b8}\\\\\n& 
=F(x,y)\\text{ for all }(x,y)\\in\n\\mathbb{R}\n^{2}.\\nonumber\n\\end{align}\nFurthermore,\n\\begin{equation}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert K_{n}^{(m)}(x,y)-H_{n}^{(m)}(x,y)\\right\\vert dxdy=o(\\frac{1}{m^{1-\\alpha}}),\\text{ }0<\\alpha<1.\\label{b9}\n\\end{equation}\n\n\\end{theorem}\n\n\\begin{proof}\nConsider $p_{i}(m)=\\frac{m+n-i+1}{a_{n}(m)},$ $i=1,2,...,n;$\n$m\\in\\{0,1,2,...\\},$ where $a_{n}(m)=nm+\\frac{n(n+1)}{2}.$ It is clear\nthat $p_{1}(m)\\geq p_{2}(m)\\geq\\cdots\\geq p_{n}(m)$ and $\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)=1.$ Since $(\\frac{1}{n},\\frac{1}{n},...,\\frac{1}{n})\\prec(p_{1}(m),p_{2}(m),...,p_{n}(m)),$ from Lemma 2 we have\n\\begin{equation}\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{n-i+1:n}(x,y)\\leq F(x,y)\\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{i:n}(x,y). \\label{3b}\n\\end{equation}\nSince\n\\[\n\\underset{m\\rightarrow\\infty}{\\lim}p_{i}(m)=\\underset{m\\rightarrow\\infty}{\\lim}\\frac{m+n-i+1}{nm+\\frac{n(n+1)}{2}}=\\frac{1}{n},\\text{ }i=1,2,...,n,\n\\]\n(\\ref{b8}) follows.
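The elementary properties of the weights $p_i(m)$ used in the proof — monotonicity, normalization, and the limit $p_i(m)\to 1/n$ — are easy to confirm numerically. The following is an illustrative sketch of our own using exact rational arithmetic; it is not part of the proof.

```python
from fractions import Fraction

def weights(n, m):
    """p_i(m) = (m + n - i + 1) / a_n(m), with a_n(m) = n*m + n*(n+1)/2."""
    a = Fraction(n * m) + Fraction(n * (n + 1), 2)
    return [Fraction(m + n - i + 1) / a for i in range(1, n + 1)]

p = weights(5, 3)
assert sum(p) == 1                              # normalization (exact)
assert all(p[i] >= p[i + 1] for i in range(4))  # decreasing weights
```

For large $m$ every weight is within $(n-1)/(2a_n(m)) = O(1/m)$ of $1/n$, which is the estimate used in the $L_1$ bound below.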
To prove (\\ref{b9}) consider the $L_{1}$ distance between\n$K_{n}^{(m)}(x,y)$ and $H_{n}^{(m)}(x,y).$ We have\n\\begin{align*}\n\\Delta_{m} & \\equiv\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert K_{n}^{(m)}(x,y)-H_{n}^{(m)}(x,y)\\right\\vert dxdy\\\\\n& =\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{i:n}(x,y)-\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{n-i+1:n}(x,y)\\right\\vert dxdy\\\\\n& =\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{i:n}(x,y)-F(x,y)+F(x,y)-\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\np_{i}(m)F_{n-i+1:n}(x,y)\\right\\vert dxdy\\\\\n& =\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\n(p_{i}(m)-\\frac{1}{n})F_{i:n}(x,y)+(\\frac{1}{n}-p_{i}(m))F_{n-i+1:n}(x,y)\\right\\vert dxdy\\\\\n& \\leq\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\n\\left\\vert p_{i}(m)-\\frac{1}{n}\\right\\vert\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert F_{i:n}(x,y)-F_{n-i+1:n}(x,y)\\right\\vert dxdy\\\\\n& \\leq(p_{1}(m)-\\frac{1}{n})c_{n}=\\frac{\\frac{1}{m}\\frac{n(n-1)}{2}}{n^{2}+\\frac{n^{2}(n+1)}{2}\\frac{1}{m}}c_{n},\n\\end{align*}\nwhere $c_{n}=\n{\\displaystyle\\sum\\limits_{i=1}^{n}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n{\\displaystyle\\int\\limits_{-\\infty}^{\\infty}}\n\\left\\vert F_{i:n}(x,y)-F_{n-i+1:n}(x,y)\\right\\vert dxdy.$ Since this bound is of order $1\/m$, it is $o(1\/m^{1-\\alpha})$ for any $0<\\alpha<1$, which proves (\\ref{b9}).\n\\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe formalism of transition state theory (TST), as developed by Volmer and Weber~\\cite{vol26}, Becker and
D\\\"{o}ring~\\cite{becker35}, Zeldovich~\\cite{zeldo}, and Frenkel~\\cite{frenkel46}, \ncontinues to be of fundamental importance for understanding phase transformations in a great variety of systems. The assumptions upon which TST is based nominally restrict this approach to predicting nucleation rates for systems that are only mildly metastable and for which the nucleation barrier is large relative to $kT$, where $T$ is the temperature and $k$ is Boltzmann's constant. However, many interesting phase changes occur in the deeply metastable regime where the system is approaching a limit of stability and the free energy barrier to nucleation is expected to disappear~\\cite{cahn,BZ,TO,parrinello06,vanish,S}. Some recent simulation studies~\\cite{SPB,Maibaum08,reguera09} find that the predictions of TST remain surprisingly robust in this deeply metastable regime, but others suggest that TST breaks down~\\cite{parrinello06,bagchi07,bagchi11}. Understanding whether TST remains applicable, or how the formalism should be adapted, when the nucleation barrier becomes low remains an open question.\n\nThe presence of a heterogeneous interface in a metastable system can dramatically lower the nucleation barrier in phase transformations such as vapor condensation and crystallization~\\cite{pablo, kelton}. Consequently, heterogeneous nucleation plays an important role in a variety of phenomena including atmospheric physics~\\cite{CK,kulmala08,winkler08}, the use of templates to form complex structures~\\cite{zak98,cacciuto04,cacciuto05} and protein crystallization~\\cite{chayen06,S07}. The basic principles of heterogeneous nucleation involving macroscopic, bulk surfaces are relatively well established. 
However, in many cases the heterogeneities are microscopic in size and there is considerable interest in understanding how particle size influences the nucleation mechanism and rate~\\cite{oh01,winkler08,cacciuto04,cacciuto05,sear,kea10,rkb11}, especially as the barrier approaches $kT$.\n\nHere we study heterogeneous nucleation in the two-dimensional ($2D$) Ising model to explore the nature of the nucleation barrier on approach to the limit of stability of a metastable phase. We seek to clarify the definition of the barrier in this limit, and to test the degree to which theories (in particular TST, and also the nucleation theorem) are able to predict the behaviour observed directly in this regime. The Ising system we examine was studied previously by Sear~\\cite{sear}, who demonstrated that a small cluster of fixed ``impurity\" spins increased the nucleation rate significantly relative to the homogeneous nucleation rate. In the present study, we exploit the fact that by increasing the size of the impurity we can systematically lower the nucleation barrier and also bring the system to a limit of stability. At the same time, as we will show, this simple model allows the heterogeneous nucleation barrier to be defined in a way that is free of significant approximations that affect the definition of the homogeneous nucleation barrier when the barrier height is low. As a consequence, this model provides an excellent opportunity to compare \nthe free energy barrier, critical cluster size, and nucleation rate\nas predicted by theory, with values obtained by direct simulations.\n\n\\section{Methods}\n\nOur results are based on Monte Carlo (MC) simulations of a 2D Ising model of a ferromagnet. We employ a $L\\times L$ square lattice with periodic boundary conditions, and choose $L=45$, the same system size studied in Ref.~\\cite{sear}. 
The energy of the system in spin configuration $c$ is given by,\n\\begin{equation}\nE_c=-J\\sum_{\\langle i,j \\rangle} s_i s_j - H\\sum_{i=1}^N s_i\n\\label{ham}\n\\end{equation}\nwhere $s_i=\\pm 1$ is the spin value of site $i$, $J>0$ quantifies the ferromagnetic exchange interaction, $H$ is the value of the external magnetic field, and $N=L^2$ is the number of sites in the lattice. The sum in the first term is taken over all nearest-neighbor pairs of spins. We explore the configuration space of the system using Metropolis single-spin-flip MC dynamics, in which one Monte Carlo step (MCS) corresponds to $N$ spin-flip attempts, and where spins are chosen at random. \n\nIn each of our runs, we initialize all free spins to $s_i=-1$, and equilibrate the system in the spin-down phase at $\\beta J=0.65$ (i.e. 0.678 of the critical temperature) and $\\beta H=-0.05$, where $\\beta=1\/kT$. We then create a metastable state by instantaneously changing the sign of the magnetic field, so that $\\beta H=+0.05$. These choices of $T$ and $H$ are the same as those used in Ref.~\\cite{sear}. Under these conditions the spin-down phase is metastable, and the system persists in this phase until nucleation of the stable spin-up phase occurs. \n\n\\section{Homogeneous Nucleation}\n\nThe aim of the present work is to study a system in which heterogeneous nucleation is the dominant process for transforming the metastable to the stable phase. In this section, we quantify the homogeneous nucleation process that occurs when no impurity is present. Doing so allows us to confirm that we are working under conditions where homogeneous nucleation can be neglected once we introduce an impurity into the system. We emphasize that we are not attempting here to conduct a detailed examination of homogeneous nucleation in the Ising model. 
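For concreteness, the single-spin-flip Metropolis dynamics can be sketched as follows. This is our own minimal Python fragment, written for the standard ferromagnetic convention $E=-J\sum_{\langle i,j\rangle}s_is_j-H\sum_i s_i$ (under which a negative field favours the spin-down phase); the lattice size and random seed are illustrative, not those of the production runs.

```python
import math
import random

def metropolis_sweep(spins, L, beta_J, beta_H, rng):
    """One Monte Carlo step (MCS): L*L single-spin-flip attempts at
    randomly chosen sites of a periodic L x L lattice, with Metropolis
    acceptance for E = -J sum_<ij> s_i s_j - H sum_i s_i."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        s = spins[i][j]
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        beta_dE = 2.0 * s * (beta_J * nn + beta_H)  # beta*(E_flip - E)
        if beta_dE <= 0.0 or rng.random() < math.exp(-beta_dE):
            spins[i][j] = -s

rng = random.Random(1)
L = 16
spins = [[-1] * L for _ in range(L)]       # start in the spin-down phase
for _ in range(20):
    metropolis_sweep(spins, L, 0.65, -0.05, rng)
m = sum(map(sum, spins)) / L**2            # magnetization remains negative
```

Equilibrating with $\beta H=-0.05$ and then flipping the sign of the field reproduces the quench protocol described above.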
There have been a number of recent and very thorough studies of homogeneous nucleation in the Ising model, to which we refer the interested reader~\\cite{pan,brendel,Cai2010a,Cai2010b}.\n\nWe begin by evaluating the free energy barrier for homogeneous nucleation, following the same approach as used in Refs.~\\cite{brendel,Cai2010a,Cai2010b}.\nThis method exploits the fact that when up-spin clusters occurring in a metastable down-spin phase are rare and do not interact, the free energy $g$ to form an up-spin cluster of size $m$ is well approximated by,\n\\begin{equation}\n\\beta g(m)=-\\log\\frac{{\\cal N}(m)}{N},\n\\label{homo}\n\\end{equation}\nwhere ${\\cal N}(m)$ is the average number of up-spin clusters of size $m$~\\cite{thesis,reiss1999,auer2004}.\n\nAs we will see, the homogeneous nucleation barrier in our case is large relative to $kT$. As in Refs.~\\cite{brendel,Cai2010a,Cai2010b}, we therefore use an umbrella sampling method to access the relatively rare configurations of the system that occur near the top of the nucleation barrier. In this approach, a biasing potential $U_B=\\kappa(M-M_0)^2$ is added to the system potential energy given in Eq.~\\ref{ham}, where $M$ is the size of the largest cluster of up-spins in the system, and $M_0$ is a target value of $M$. The effect of $U_B$ is to drive the system to sample configurations for which $M$ is close to $M_0$, over a range of $M$ that is controlled by the value of the parameter $\\kappa$. For a given value of $M_0$, we determine a segment of the ${\\cal N}(m)$ curve from,\n\\begin{equation}\n{\\cal N}(m)=\\bigl\\langle {\\cal N}_B(m) \\exp[\\beta U_B(M,M_0) ] \\bigr\\rangle,\n\\label{use}\n\\end{equation}\nwhere ${\\cal N}_B(m)$ is the number of up-spin clusters of size $m$ for a system configuration sampled during the biased simulation. 
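Measuring the cluster statistics ${\cal N}_B(m)$ that enter Eq.~\ref{use} requires identifying all nearest-neighbour clusters of up-spins in each sampled configuration. A minimal sketch of such a labeling pass (our own breadth-first-search routine with periodic boundaries, not the authors' code):

```python
from collections import Counter, deque

def cluster_sizes(spins, L):
    """Sizes of all nearest-neighbour clusters of up-spins (s = +1) on a
    periodic L x L lattice, found by breadth-first search."""
    seen = [[False] * L for _ in range(L)]
    sizes = []
    for i in range(L):
        for j in range(L):
            if spins[i][j] == 1 and not seen[i][j]:
                seen[i][j] = True
                queue, size = deque([(i, j)]), 0
                while queue:
                    a, b = queue.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = (a + da) % L, (b + db) % L
                        if spins[na][nb] == 1 and not seen[na][nb]:
                            seen[na][nb] = True
                            queue.append((na, nb))
                sizes.append(size)
    return sizes

def mean_cluster_counts(samples, L):
    """Average number of up-spin clusters of each size m over configurations."""
    counts = Counter()
    for spins in samples:
        counts.update(cluster_sizes(spins, L))
    return {m: c / len(samples) for m, c in counts.items()}
```

Averaging these counts over unbiased configurations gives ${\cal N}(m)$, from which $\beta g(m)=-\log[{\cal N}(m)/N]$ follows.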
In Eq.~\\ref{use}, $\\langle \\cdots \\rangle$ denotes an ensemble average computed during the biased simulation, and the exponential factor reweights the result to provide the estimate of ${\\cal N}(m)$ that would be found from an unbiased simulation (i.e. one with $U_B=0$).\n\nBy carrying out several simulations each for a different choice of $M_0$, we obtain estimates for overlapping segments of $g(m)$, which are then spliced together to form the complete $g(m)$ curve, as shown in Fig.~\\ref{us}. From the location of the maximum in $g(m)$, we find that the free energy barrier for homogeneous nucleation is approximately $27kT$, and that the size of the critical nucleus is approximately 200. \n\n\\begin{figure}\n\\bigskip\n\\centerline{\\includegraphics[scale=0.35]{fig-1.eps}}\n\\caption{(Color online) Free energy barrier $g(m)$ for homogenous nucleation at $\\beta J=0.65$ and $\\beta H=0.05$. The symbols (circles and plus signs) give the result obtained from umbrella sampling simulations using $\\kappa\/J=0.01$. The full curve is constructed from 16 separate simulations, where $M_0$ is varied from 0 to 300 in steps of 20. The data sets corresponding to each value of $M_0$ are represented by either black open circles or red plus signs. \nThe dashed blue line gives our estimate for $g(m)$ obtained from CNT using the approach of Ryu and Cai~\\cite{Cai2010a,Cai2010b}.\n}\n\\label{us}\n\\end{figure}\n\nBy way of comparison, we note that Ref.~\\cite{sear} estimates that the homogeneous nucleation barrier is $22kT$, and that the size of the critical nucleus is 219. These estimates are obtained using classical nucleation theory (CNT), assume a square droplet, and use the Onsager result for the interfacial tension in the 2D Ising model. Using a forward-flux sampling method, Ref.~\\cite{sear} also estimates the homogeneous nucleation rate of the present system to be $3.3\\times 10^{-13}$ events per MCS per lattice site. 
Thus the homogeneous nucleation time for a system of $L^2=2025$ sites is $1.5\\times 10^{9}$ MCS.\n\nA more accurate procedure for evaluating $g(m)$ from CNT has recently been described by Ryu and Cai~\\cite{Cai2010a,Cai2010b}. Although the two key ingredients for CNT, the surface tension and the difference in chemical potential between the metastable and stable phases, are known exactly for the 2D Ising model, Ryu and Cai found that the standard CNT expression for $g(m)$ failed to fit simulation-based calculations of the free energy barrier. However, by adding two additional terms to the CNT expression, one for shape fluctuations, and a constant term that ensures that the free energy of a single spin [i.e. $g(1)$] is correct, they were able to predict the free energy of forming a cluster within 1\\% of their umbrella sampling simulation results, with no fitting parameters, over a wide range of temperatures and field strengths. Our evaluation of $g(m)$ using Ryu and Cai's corrected CNT expression (Eq.~6 of Ref.~\\cite{Cai2010a}) is included in Fig.~\\ref{us} and shows a similar level of agreement with our simulation results.\n\nAs shown in the following sections, for the cases of heterogeneous nucleation studied here, the height of the heterogeneous nucleation barrier is always less than $14kT$, and the system nucleation time is always less than $5\\times10^6$~MCS. Heterogeneous nucleation processes are thus always more than 300 times faster than the homogeneous process, under all conditions studied here. On this basis, we are assured that homogeneous nucleation events (i.e. events that do not involve the impurity sites introduced below) are rare relative to heterogeneous events, and can be neglected in our analysis of the nucleation process in the presence of an impurity.\n\nThe above considerations also justify the choice of the system size ($L=45$) used here and in Ref.~\\cite{sear}. 
Since the homogeneous nucleation time of the system is inversely proportional to $N$, the smaller the system, the easier it is for heterogeneous nucleation events triggered by a single impurity to dominate the transformation of the metastable to the stable phase. At the same time, the system must be chosen large enough so that the critical cluster does not interact with its images across the periodic boundaries. For both homogeneous and heterogeneous nucleation, we find that the size of the critical nucleus is always 200 or less. In a system of size $N=45^2=2025$, the critical cluster will thus occupy 10\\% or less of the total system volume. Furthermore, since we conduct our simulations close to the coexistence curve at $H=0$, and well away from the critical temperature, we expect that the critical nucleus will be a relatively compact cluster, and that spin-spin correlations are negligible beyond a few lattice spacings. It is thus extremely unlikely for a critical cluster in our system to interact with spins in its periodic images.\n\n\\begin{figure}[t]\n\\vskip 0.25in\n\\centerline{\\includegraphics[scale=0.3]{fig-2.eps}}\n\\caption{(Color online) Example configurations of the system in the metastable phase with $\\beta J=0.65$, $\\beta H=+0.05$, and $l=7$ (a) when $n=n_0=21$, and (b) when $n=n^\\ast=174$. Impurity sites are shown as red open squares. Up-spins ($s_i=+1$) are shown as black filled squares. White regions correspond to down-spins ($s_i=-1$).}\n\\label{pic}\n\\end{figure}\n\n\\section{Free energy barrier for Heterogeneous Nucleation}\n\nTo induce heterogeneous nucleation, we next study the case where our system contains an impurity consisting of a line of $l$ spins fixed to $s_i=+1$; see Fig.~\\ref{pic}. To find the free energy barrier for heterogeneous nucleation, we seek to evaluate the minimum reversible work of formation of a critical cluster of the stable phase. 
However, since homogeneous nucleation can be neglected, the critical cluster is necessarily a cluster of up-spins (i.e. sites with $s_i=+1$) attached to the impurity. \nIn the following, we define the ``impurity cluster'' as the contiguous cluster of up-spins that contains the impurity spins; thus the number of spins $n$ in the impurity cluster includes the impurity spins themselves. Under this definition there can only be one impurity cluster, and so $n$ is a {\\it system} property (and hence an order parameter) with respect to which the nucleation free energy barrier may be defined. \n\nTo define the free energy barrier for heterogeneous nucleation, we first consider the partition function of the system for fixed $(N,H,T,l)$. We write the system partition function ${\\cal Z}=\\sum_{n=l}^N Z(n)$ as a sum over the conditional partition function $Z(n)=\\sum_{c(n)} \\exp(-\\beta E_c)$. The sum in $Z(n)$ is over all system configurations $c$ in which the impurity cluster consists of exactly $n$ spins. The corresponding conditional free energy is $G(n)=-kT\\log Z(n)$, which is the free energy of the system when it contains an impurity cluster of size $n$.\n\n\\begin{figure}\n\\bigskip\n\\centerline{\\includegraphics[scale=0.35]{fig-3.eps}}\n\\caption{(Color online) (a) Free energy relative to a ``bare'' impurity, $G(n)-G(l)$ as a function of $n$ for impurities of size $l=3$ to $12$. (b) Free energy relative to the equilibrium metastable phase, $G(n)-G_m$ versus $n$ for $l=3$ to $11$. For all curves, the statistical error is less than $0.02kT$.}\n\\label{barrier}\n\\end{center}\n\\end{figure}\n\nTo compute $G(n)$, we note that the probability to observe an impurity cluster of $n$ spins is $P(n)=Z(n)\/{\\cal Z}$. Consequently, the work of formation of an $n$-spin impurity cluster, starting from a ``bare'' impurity (i.e. 
$n=l$), is given by the free energy difference,\n\\begin{eqnarray}\nG(n)-G(l) = -kT\\log \\frac{Z(n)}{Z(l)}\n= -kT\\log \\frac{P(n)}{P(l)}.\n\\label{hetg}\n\\end{eqnarray}\n\nWe evaluate $G(n)$ using Eq.~\\ref{hetg} from simulations in which $l$ ranges from 3 to 12. As shown below, for this range of $l$ the variation of $G(n)$ is never more than $12kT$, and therefore multi-window umbrella sampling is not required. Rather, we simply impose a constraint on our MC sampling such that $n\\leq n_{\\rm max}=300$. This choice of $n_{\\rm max}$ restricts our simulations to the metastable phase, and to configurations in the vicinity of transition states to the stable phase. This approach is equivalent to using a single umbrella sampling window in which $U_B=0$ for $n\\leq n_{\\rm max}$ and $U_B=\\infty$ for $n>n_{\\rm max}$. As in all umbrella sampling simulations, the relative probabilities with which configurations occur inside the umbrella window are correctly estimated after the appropriate reweighting, regardless of the specific form of $U_B$. Consequently, our results for $n\\leq n_{\\rm max}$ are independent of the choice of $n_{\\rm max}$.\n\nWe evaluate the equilibrium ratio $P(n)\/P(l)$ from our simulations, and plot the result for $G(n)-G(l)$ in Fig.~\\ref{barrier}(a), for various $l$. For $3\\leq l \\leq 11$, each curve exhibits a maximum at $n=n^\\ast$, indicating the size of the critical cluster. The value of $n^\\ast$ demarcates the boundary between the metastable and stable phases of the system, and we define the configuration space of the metastable phase as the set of microstates for which $l\\leq n\\leq n^\\ast$. \n\nFig.~\\ref{barrier}(a) also shows that as $l$ increases, a minimum in $G(n)$ at $n=n_0$ emerges and grows; this feature corresponds to wetting of the impurity by a finite cluster of the stable phase. \nFor $l=12$, $G(n)$ is a monotonically decreasing function of $n$, and the metastable phase has ceased to exist. 
This qualitative change in the shape of $G(n)$ as $l$ increases thus represents the limit of stability of the metastable phase. This limit of stability is also seen in Fig.~\ref{n}, where we show that $n^\ast$ and $n_0$ approach one another, and then become undefined for $l\geq 12$. \n\n\begin{figure}\n\centerline{\includegraphics[scale=0.34]{fig-4.eps}}\n\caption{(Color online) Size of the impurity cluster at the maximum ($n^\ast$) and minimum ($n_0$) of $G(n)$ as a function of impurity size $l$. Statistical errors are smaller than the symbol size.}\n\label{n}\n\end{figure}\n\nTo obtain the free energy barrier for nucleation from $G(n)$, we must take care to identify the appropriate thermodynamic reference state with respect to which the barrier height should be measured. \nWe follow the reasoning of Ref.~\cite{frenkel-true}, which studied homogeneous nucleation, adapted here to the case of heterogeneous nucleation. That is, the free energy barrier for nucleation is defined as the minimum reversible work required to apply a constraint that confines the system to the transition state at $n=n^\ast$, starting from a reference state that comprises the entire configuration space of the metastable phase, i.e. all configurations in the range $l\leq n\leq n^\ast$. \nTo implement this definition, we define the partition function of the metastable phase ${\cal Z}_m=\sum_{n=l}^{n^\ast} Z(n)$ as a restricted sum over all states such that $l\leq n\leq n^\ast$.\nThe corresponding free energy of the metastable phase is $G_m=-kT\log {\cal Z}_m$. 
The work of formation of an $n$-spin impurity cluster, starting from the equilibrium metastable phase, is then given by the free energy difference,\n\begin{eqnarray}\nG(n)-G_m &=& -kT\log \frac{Z(n)}{{\cal Z}_m} \\\n&=& -kT\log \frac{P(n)}{\sum_{n'=l}^{n^\ast} P(n')}.\n\label{eq:qm}\n\end{eqnarray}\nThe second equality above emphasizes that $G(n)-G_m$ can also be evaluated in our simulations from the relative probabilities of observing impurity clusters of various sizes $n$. \n\n\begin{figure}\n\centerline{\includegraphics[scale=0.35]{fig-5.eps}}\n\caption{(Color online) (a) Comparison of the nucleation barriers $\Delta G$ and $\Delta G_0$ as a function of impurity size $l$. (b) Nucleation times $\tau^0_{\rm TST}$, $\tau_{\rm TST}$, $\tau_{\rm MFPT}$, and $\tau_{\rm Sear}$ as a function of $l$. For comparison, note that the homogeneous nucleation time when no impurity is present is $1.5\times10^9$~MCS~\cite{sear}. For all quantities, statistical errors are smaller than the symbol size.}\n\label{G}\n\end{figure}\n\nOur results for $G(n)-G_m$ are plotted in Fig.~\ref{barrier}(b). \nThe difference between the free energy curves in Fig.~\ref{barrier}(a) and (b) is a change in the reference state, giving rise to an $l$-dependent \nvertical shift without a change in shape.\nThe work of formation of the transition state from the metastable phase (i.e. the free energy barrier for nucleation) is given by $\Delta G = G(n^\ast)-G_m$. \nAs shown in Fig.~\ref{G}(a), $\Delta G$ does not go to zero at the limit of stability. Although paradoxical at first glance, this result is physically reasonable for our system. Since $n^\ast$ remains non-zero even at the limit of stability, the metastable phase encompasses a considerable region of configuration space ($l\leq n\leq n^\ast$) up to the point where stability is lost. 
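The shift of reference state in Eq.~\ref{eq:qm} is equally simple to apply as a post-processing step. A sketch, assuming a hypothetical unnormalized histogram $P(n)$ with the maximum of $G(n)$ at a known $n^\ast$:

```python
import math

def barrier_from_histogram(P, l, n_star, kT=1.0):
    """Nucleation barrier Delta G = G(n*) - G_m, using
    G(n) - G_m = -kT log( P(n) / sum_{n'=l..n*} P(n') ).
    Only ratios of P(n) enter, so P may be unnormalized counts."""
    Z_m = sum(p for n, p in P.items() if l <= n <= n_star)  # restricted sum
    return -kT * math.log(P[n_star] / Z_m)

# Toy histogram with a transition state at n* = 8.
P = {3: 50.0, 4: 80.0, 5: 100.0, 6: 60.0, 7: 20.0, 8: 5.0}
dG = barrier_from_histogram(P, l=3, n_star=8)
```

Because ${\cal Z}_m \geq Z(l)$, this barrier is at least as large as $G(n^\ast)-G(l)$; the two coincide only when the histogram is dominated by configurations near $n=l$.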
Hence the work required to create the transition state remains finite, even as the metastable state ceases to exist as a distinct phase. In previous work, it has been assumed that the nucleation barrier should go to zero as the thermodynamic stability of a metastable phase is lost~\cite{BZ,TO,parrinello06,vanish,S}. Our system provides a counter-example.\n\n\section{Nucleation time}\n\nWe next assess what our results for $\Delta G$ imply for estimates of the nucleation time based on TST~\cite{Cai2010a,Cai2010b}. For our system, the TST prediction for the nucleation time is,\n\begin{equation}\n\tau_{\rm TST}=(f^+_c z)^{-1}\exp(\beta \Delta G),\n\label{TST}\n\end{equation}\nwhere $\tau_{\rm TST}$ is the average time (in MCS), per impurity, for a critical cluster to appear in the system and subsequently evolve into the stable phase.\n$z=\sqrt{\beta \eta\/2 \pi}$ is the Zeldovich factor, where $\eta=-(\partial^2 G\/\partial n^2)_{n=n^\ast}$ is the curvature of $G(n)$ at the top of the barrier. We estimate $\eta$ from a quadratic fit to data that lie within $0.2kT$ of the maximum of $G(n)$. $f^+_c$ is the attachment rate of monomers to the critical cluster. We determine $f^+_c$ from the time dependence of fluctuations in the size of critical clusters, following the same procedure used in Refs.~\cite{Cai2010a,Cai2010b}. The result for $\tau_{\rm TST}$ obtained from our data is shown in Fig.~\ref{G}(b).\n\n\begin{figure}\n\centerline{\includegraphics[scale=0.34]{fig-6.eps}}\n\caption{(Color online) Comparison of our estimates for the excess number of up-spins in the critical cluster $\Delta n_{\rm NT}$, $\Delta n^0_{\rm NT}$, and $\Delta n_{\rm sim}$ as a function of $\beta H$, for the $l=7$ system.}\n\label{thm}\n\end{figure}\n\nTo test the accuracy of $\tau_{\rm TST}$, we directly evaluate the nucleation time in terms of the mean first passage time (MFPT) for the impurity cluster to grow to the critical size. 
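Given the barrier, the curvature at its top, and the attachment rate, assembling Eq.~\ref{TST} is straightforward. A sketch with illustrative inputs only (the quadratic fit for $\eta$ and the fluctuation analysis for $f^+_c$ are taken as precomputed and are not reproduced here):

```python
import math

def tau_tst(delta_G, eta, f_plus, kT=1.0):
    """TST nucleation time tau = (f+ z)^{-1} exp(beta dG), with the
    Zeldovich factor z = sqrt(beta * eta / (2 pi)) and beta = 1/kT."""
    beta = 1.0 / kT
    z = math.sqrt(beta * eta / (2.0 * math.pi))  # Zeldovich factor
    return math.exp(beta * delta_G) / (f_plus * z)

# Illustrative numbers only, not values measured in the simulations.
tau = tau_tst(delta_G=10.0, eta=0.05, f_plus=1.0)
```

Each $kT$ of barrier height multiplies the predicted time by a factor of $e$, which is why the choice of reference state for $\Delta G$ matters so much in the low barrier regime.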
For a given $l$, we set $n_{\rm max}=n^\ast$ so that the system is confined to explore only the configuration space of the metastable phase, and bring this constrained system into equilibrium. Then, at a randomly selected time, we set $t=0$ and measure the time it takes for the system to first reach $n=n_{\rm max}$. The MFPT is the average of many such measurements. We define the nucleation time $\tau_{\rm MFPT}$ as twice the MFPT, because only half of the runs that reach the transition state would ultimately evolve into the stable phase. As shown in Fig.~\ref{G}(b), $\tau_{\rm TST}$ is in excellent agreement with $\tau_{\rm MFPT}$. Fig.~\ref{G}(b) also shows that $\tau_{\rm MFPT}$ is consistent with the nucleation times ($\tau_{\rm Sear}$) reported in Ref.~\cite{sear} for the same system, as found using a forward-flux sampling method. Our results thus demonstrate that in our case TST is capable of predicting the nucleation time with remarkable accuracy even at the very limit of stability of the metastable phase.\n\n\section{Nucleation Theorem}\n\nIt is also possible to validate our results for $\Delta G$ by testing the nucleation theorem. The nucleation theorem~\cite{kash1982,oxtoby1994,ford1996,bowles2001,kash2006} states that,\n\begin{equation}\n\left(\frac{\partial \Delta G}{\partial \Delta \mu}\right)_{T}=\frac{1}{2}\left(\frac{\partial \Delta G}{\partial H}\right)_{T}=-\Delta n\mbox{,}\n\label{NT}\n\end{equation}\nwhere $\Delta \mu$ is the difference in chemical potential between the stable and metastable phases, and $\Delta n$ is the excess number of up-spins in the critical cluster. In the first equality, we have used $\Delta \mu\approx2H$, which for the Ising model is a good approximation for $T$ below the Curie temperature~\cite{Cai2010a,Cai2010b,harris1984}. To conduct this test, we carry out new runs for the case of $l=7$ over a range of $\beta H$ from $0.048$ to $0.056$. 
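The derivative in Eq.~\ref{NT} must then be extracted from $\Delta G$ evaluated on a uniform grid in $H$. One standard way to do this is a five-point central difference; a sketch, with a hypothetical analytic function standing in for the measured $\Delta G(H)$ so the result can be checked exactly:

```python
def five_point_derivative(f, x, h):
    """Five-point central-difference estimate of f'(x), with O(h^4) error:
    f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)."""
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

# The formula is exact (up to rounding) for cubics, a convenient sanity check.
deriv = five_point_derivative(lambda H: H**3 - 2.0*H, 0.052, 0.002)
exact = 3.0 * 0.052**2 - 2.0
```

With $\Delta\mu\approx 2H$, the nucleation-theorem estimate of the excess size is then $\Delta n_{\rm NT}=-\tfrac{1}{2}\,\partial\Delta G/\partial H$ evaluated in this way.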
Although $\\Delta n$ can be approximated as $n^\\ast - n_0$, in the low barrier regime it is more accurate to directly evaluate $\\Delta n$ as the difference in the average number of up-spins in the entire system (including those not in the impurity cluster) when the system is at $n=n^\\ast$, and the average number of up-spins in the metastable phase averaged over all $l\\leq n\\leq n^\\ast$; these results are shown in Fig.~\\ref{thm} and denoted as $\\Delta n_{\\rm sim}$. We also evaluate $\\Delta G$ as a function of $H$, and estimate the derivative in Eq.~\\ref{NT} using a five-point central-difference numerical method. The estimate of $\\Delta n$ thus obtained from Eq.~\\ref{NT} is denoted $\\Delta n_{\\rm NT}$ in Fig.~\\ref{thm}, and is in good agreement with $\\Delta n_{\\rm sim}$.\n\n\\section{Comparison of barrier definitions}\n\nAlthough our definition of $\\Delta G$ is straightforward, we note that almost all previous studies of heterogeneous nucleation on small impurities use a different definition. Specifically, when the free energy as a function of $n$ exhibits both a minimum (at $n_0$) and a maximum (at $n^\\ast$) the nucleation barrier is usually defined as $\\Delta G_0=G(n^\\ast)-G(n_0)$~\\cite{cacciuto05,vanish,kelton,CK,OR,BZ,TO,WWY,KO,AF,S,S07,DD,oh01}. However, this definition is an approximation that becomes increasingly inaccurate in the low barrier regime. \nTo illustrate the problem, we show our results for $\\Delta G_0$ as a function of $l$ in Fig.~\\ref{G}(a). Whereas $\\Delta G$ remains finite at the limit of stability, $\\Delta G_0$ vanishes.\nIn Fig.~\\ref{G}(b), we show $\\tau^0_{\\rm TST}$, the TST prediction for the nucleation time obtained if we use $\\Delta G_0$ instead of $\\Delta G$ in Eq.~\\ref{TST}. 
We find that for the lowest barriers (at large $l$), $\tau^0_{\rm TST}$ underestimates $\tau_{\rm MFPT}$ by more than two orders of magnitude.\nSimilarly, if we use $\Delta G_0$ in Eq.~\ref{NT}, the estimate obtained for $\Delta n$ (denoted $\Delta n^0_{\rm NT}$) is distinctly less accurate than that found using $\Delta G$ (Fig.~\ref{thm}). \n\nThe above results demonstrate that the use of $\Delta G_0$ instead of $\Delta G$ leads to a qualitatively different and erroneous physical picture for nucleation in the low barrier regime: Using $\Delta G_0$, the barrier vanishes, and theories such as TST break down, whereas using $\Delta G$ we find exactly the opposite: the barrier remains finite and TST continues to predict the nucleation time accurately.\nWe emphasize that the difference between $\Delta G$ and $\Delta G_0$ becomes apparent only in the low barrier regime. When the barrier is high, even small clusters are rare, and the properties of the metastable phase are dominated by system configurations found near $n=l$. In this limit $G(l)\approx G_m$, and $\Delta G$ and $\Delta G_0$ become equivalent. However, when approaching a limit of stability, the correct definition of the free energy barrier must be used.\n\n\section{Discussion}\n\nIt is important to note how the definition of the free energy of cluster formation for the homogeneous system [$g(m)$ in Eq.~\ref{homo}] differs from that for the heterogeneous case [$G(n)$ in Eq.~\ref{hetg}] in the low barrier regime.\nThe definition of $g(m)$ is correct in the limit that stable-phase clusters are rare and non-interacting. In a finite-sized system near the transition state, this limit is realized only if there is at most one large cluster in the system.\nHowever, when the homogeneous nucleation barrier approaches $kT$, several large clusters may form simultaneously. 
In this case, cluster interactions cannot be neglected, and Eq.~\ref{homo} is no longer accurate.\nIn contrast, our definition of $G(n)$ for heterogeneous nucleation depends only on taking the limit that homogeneous nucleation events are rare, which we have ensured by our choice of $T$, $H$, and $N$.\nBy construction, there is always exactly one impurity cluster present in our heterogeneous system, whatever its size, regardless of the height of the heterogeneous nucleation barrier. As a consequence, multiple large clusters do not occur in our system, even as we approach the stability limit of the metastable phase, and thus Eq.~\ref{hetg} does not break down when the barrier height approaches $kT$. \n\nWe also emphasize that our definitions of the metastable phase and its limit of stability are well-defined only for a finite-sized system. As discussed in Section III, our system size is deliberately chosen to be small enough that homogeneous nucleation processes can be neglected. If we take the limit $N\to \infty$ in a system that contains only one impurity, a homogeneous nucleation event somewhere in the system becomes overwhelmingly more probable than an event triggered by a lone impurity. This is a well-known conceptual challenge associated with the definition of metastability for any system (homogeneous or heterogeneous) in the thermodynamic limit~\cite{pablo,langer,penrose}.\n\nIn addition, our results demonstrate the importance of the reference state when calculating the nucleation rate from a measure of the nucleation barrier. This insight is facilitated here by the fact that the definition of the heterogeneous nucleation barrier is free of the complications that arise in the homogeneous case when the barrier is low, as discussed above. 
Although further work is required, we anticipate that a similar examination of the reference state appropriate to homogeneous nucleation may elucidate the rate and its relation to thermodynamic quantities in the low barrier regime.\n\nIn summary, for heterogeneous nucleation on small impurities, we show that $\Delta G$ remains well-defined, and does not vanish, at the limit of stability of the metastable phase. Furthermore, we find that both TST and the nucleation theorem are impressively accurate, even at the limit of stability, so long as the correct reference state is used to define the height of the nucleation barrier. \nWe expect that the pattern of behavior found here will be common to all low-barrier systems in which a free energy minimum and maximum converge at a finite value of the order parameter, and thus may be generic for heterogeneous nucleation on small impurities. That is, for impurity-induced nucleation, it is only when the size of the critical cluster goes to zero ($n^\ast \to 0$) that we should expect the nucleation barrier to vanish at the limit of stability.\n\n\section{Acknowledgements} \nWe thank ACEnet and WestGrid for providing computational resources, and NSERC for financial support. PHP thanks the CRC program for support.\n