{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nHD 98800 is an interesting and unusual system of four 10 Myr old post\nT-Tauri K stars -- two spectroscopic binaries A and B in orbit about\none another -- located in the TW Hydrae association (\\citealt{T95},\n\\citealt{Ka97}). It has a large infrared excess attributed to a\ncircumbinary disc around the B pair \\citep{K00, P01, Fu07}. Substantial extinction towards this pair is evidence that\nthey are observed through some of this material (\\citealt{S98},\n\\citealt{T99}). Photometric variability is also seen, but with no\ndefinite period \\citep{S98}. Both the apparent absence of CO molecular\ngas in the disc \\citep{De05} and infrared spectrum modelling indicate\nthat this is a T-Tauri transition disc that is just reaching the\ndebris disc stage, with a collisional cascade having been recently\ninitiated \\citep{Fu07, W07}. The orbits of the stars are all highly\neccentric and inclined, creating a dynamical environment unlike almost\nall other known debris discs \\citep{T99,P01}.\n\nThe dust disc here is generally agreed to be an annulus around the B\nbinary, but the exact structure varies from model to\nmodel. \\citet{K00} estimate a narrow coplanar ring outward of about\n$5.0\\pm 2.5$ au from the two stars, themselves separated by\napproximately 1 au. \\citet{P01}, however, determine a coplanar ring\n1 au high extending from 2 to 5 au. \\citet{B05} argue that the line of sight\nextinction means that the disc is not coplanar with the sub-binary\nunless it is very flared. \\citet{Fu07} suggest an inner optically thin\nring at 2 au and an outer puffed-up optically thick wall 0.75 au high\nat 5.9 au, with a gap between the two. Recently, \\citet{Ak07} showed\nthat a single continuous physically and optically thick coplanar disc\nbetween 3 and 10 au can reproduce the observed spectral energy\ndistribution. 
To explain the extinction, they used dynamical models of\ntest particles to show that the inclined stellar orbits could create a\nwarp in the dust disc, the outer layers of which could then just\nintercept the line of sight.\n\nThe unusually large infrared excess has been argued by \\citet{LBA00}\nto have one of four possible causes. The first is that a dust avalanche\nis currently in progress. Radiation pressure acts to push dust grains\noutwards. When the disc is very dusty, these can impact other dust\ngrains, creating more particles that are themselves pushed outwards,\ncolliding and creating yet more dust, and so on. The second\npossibility is that there is undetected gas maintaining the dust population\nagainst radiation pressure. The third and fourth causes are\napplicable when the wide orbit has an eccentricity near unity, much\nhigher than currently believed. In this case, it is possible that\nan encounter with the A pair has either heated the outer edges of the\ndisc, resulting in abnormally high infrared radiation, or disrupted a\nKuiper Belt-like structure, resulting in collisions and releasing large\namounts of dust. As the eccentricity is now known to be much lower,\nthese two possibilities are less likely, although it should be noted\nthat the wide orbit is currently very near periastron.\n\nThe first two possibilities present difficulties in modelling the\nsystem. If the system is undergoing a dust avalanche or is gas-rich,\nthen radiation pressure, collisions and gas dynamics must be included.\nFor example, \\citet{W07} has calculated the dust collision timescale\nto be very short (0.36 years), so collisional effects are important in\nmodelling the dust evolution. However, in debris discs, dust is\ngenerated from an underlying planetesimal population, usually through\na collisional cascade. In this case, the dust population follows the\nplanetesimal population, which is less complex to model. 
If the disc\nhas large amounts of gas or a dust avalanche is occurring, this may, of\ncourse, no longer be the case. Models of the planetesimal population\nnevertheless remain interesting, especially in such an unusual stellar\nenvironment. Indeed, the high inclinations and eccentricities suggest\nthat little stability is likely, yet some must exist. Constraining\npossible planetesimal locations and then comparing with observations\nof the dust location may even be able to distinguish between the\npossible scenarios above. Such models would also serve as an indication of\nwhether any planetary stability might be possible in this system.\n\nThere is one other debris disc known in a similar stellar system to HD\n98800. This is GG Tau, which has a circumbinary debris disc of several\nhundred au radius in a quadruple star system with the same hierarchy\n\\citep{Gu99}. However, in this case the stars are about ten times more\nwidely separated, the mass ratios much smaller and the orbits relatively\ncoplanar, so more stability would be\nexpected~\\citep{Be05,Be06}. Dynamical modelling has shown that the\ncircumbinary material there forms a sharp ring and a more diffuse disc\ncomponent.\n\nHD 98800 remains an unusual environment, and the high inclination and\neccentricity of the stellar orbits will have a significant effect on\nthe dynamics and structure of the planetesimal population of the\ndebris disc. This effect has yet to be studied in detail, and is\ninvestigated here using direct numerical integrations. Section\n\\ref{sec:method} gives a description of the stellar system and the\nsimulation parameters. The results are then presented in Section\n\\ref{sec:results} and conclusions given in Section\n\\ref{sec:conclusion}.\n\n\n\n\\section{Method}\n\\label{sec:method}\n\n\\subsection{The Stellar System}\n\nThe wide orbit of the sub-binaries A and B is reasonably well determined,\nas is that of the stars Ba and Bb in the B stellar pair. 
However, the\norbit of the other pair, Aa and Ab, is only partly known, as the\nsmaller star is not resolved. The parameters for all three orbits are\nlisted in Table \\ref{tab:orbit} and shown in Figure \\ref{fig:orbit}.\n\n\\citet{T99} derives three possible fits to the wide orbit, fixing the\neccentricity in each case as 0.3, 0.5 and 0.6 and assuming a total\nsystem mass of 2.6 $\\rm M_{\\rm\\odot}$. Table \\ref{tab:orbit} shows the 0.5\neccentricity case, and Table \\ref{tab:ABorbit} shows the others. Apart\nfrom the period (and hence separation), the orbital parameters do not\nvary much between the different solutions.\n\nThe A pair is a single-lined spectroscopic binary, so only a partial\nradial velocity orbit has been determined \\citep{T95}. \\citet{P01} use\nevolutionary tracks to estimate the mass of the Aa star as 1.1$\\pm$0.1\n$\\rm M_{\\rm\\odot}$. The orbital inclination is not known, but the mass and\nseparation of Ab can be found as a function of this parameter, as\nshown in Table \\ref{tab:Aab}. A wide range of inclinations is still\npossible even given the constraint that the star is small and\nunobservable. It is likely that this small star is unimportant to the\ndynamics of the circumbinary disc, and so the sub-binary can be\nmodelled as a single object. However, it might alter the dynamics of\nthe stellar orbits, and this is investigated in the next sub-section.\n\n\\begin{table}\n\\caption{\n\\label{tab:orbit}\nThe orbital and physical parameters for the stellar system HD 98800. The wide orbit of AB is fit II from \\citet{T99}, with the semimajor axis calculated assuming a total system mass of 2.6 $\\rm M_{\\rm\\odot}$ and the time of periastron of the AB pair given as the MJD corresponding to the middle of the year 2025. The Bab pair orbit is taken from the joint fit given by \\citet{B05}. The orbit of the A sub-binary is from \\citet{T95}. 
Note that the reference plane is that perpendicular to the line of sight and the longitudes of the ascending nodes are measured from North through East.\n}\n\\centerline{\n\\scriptsize\n\\begin{tabular}{l|cc@{}c@{}c@{}c@{}c@{}c@{}cc@{}c@{}c@{}c@{}c@{}c@{}c}\nOrbital & Wide & \\multicolumn{7}{c}{A sub-binary} & \\multicolumn{7}{c}{B sub-binary} \\\\\nParameter & AB & \\multicolumn{3}{c}{Aa} & &\\multicolumn{3}{c}{Ab} & \\multicolumn{3}{c}{Ba} & &\\multicolumn{3}{c}{Bb} \\\\\n\\hline\nMass ($\\rm M_{\\rm\\odot}$) & -- & 1.1 &$\\pm$& 0.1 & & & & & 0.699 &$\\pm$& 0.064 & & 0.582 &$\\pm$& 0.051 \\\\\nRadius ($\\rm R_{\\rm\\odot}$) & -- & & & & & & & & 1.09 &$\\pm$& 0.14 & & 0.85 &$\\pm$& 0.11 \\\\\n$a$ (au) & 67.6 & & & & & & & & 0.447 &$\\pm$& 0.013 & & 0.536 &$\\pm$& 0.013 \\\\\nPeriod & 345 yr & \\multicolumn{3}{r@{}}{262.15} &$\\pm$&\\multicolumn{3}{@{}l}{0.51 d}\n& \\multicolumn{3}{r@{}}{314.327} &$\\pm$& \\multicolumn{3}{@{}l}{0.028 d} \\\\\n$e$ & 0.5 & \\multicolumn{3}{r@{}}{0.484} &$\\pm$&\\multicolumn{3}{@{}l}{0.020}\n& \\multicolumn{3}{r@{}}{0.7849} &$\\pm$& \\multicolumn{3}{@{}l}{0.0053} \\\\\n$i$ (${}^\\circ$) & 88.3 & & & & & & &\n& \\multicolumn{3}{r@{}}{66.8} &$\\pm$& \\multicolumn{3}{@{}l}{3.2} \\\\\n$\\omega$ (${}^\\circ$) & 224.6 & \\multicolumn{3}{r@{}}{64.4} &$\\pm$&\\multicolumn{3}{@{}l}{2.1}\n& \\multicolumn{3}{r@{}}{289.6} &$\\pm$& \\multicolumn{3}{@{}l}{1.1} \\\\\n$\\Omega$ (${}^\\circ$) & 184.8 & & & & & & &\n& \\multicolumn{3}{r@{}}{337.6} &$\\pm$& \\multicolumn{3}{@{}l}{2.4} \\\\\n$\\tau$ (MJD) & 60840 & \\multicolumn{3}{r@{}}{8737.1} &$\\pm$&\\multicolumn{3}{@{}l}{1.6}\n& \\multicolumn{3}{r@{}}{52481.34} &$\\pm$& \\multicolumn{3}{@{}l}{0.028} \\\\ \n\\end{tabular}}\n\\large\n\\end{table}\n\n\\begin{table}\n\\caption{\n\\label{tab:ABorbit}\nThe three different orbital 
fits for the wide orbit AB from \\citet{T99}. Here $\\phi$ is the mutual inclination to the orbit of the B stellar pair.\n}\n\\centerline{\n\\scriptsize\n\\begin{tabular}{l|ccc}\nOrbital\t\t \t\t &\t\\multicolumn{3}{c}{Orbit} \t\\\\\nParameter\t \t\t &\tI & II & III\t\t\\\\\n\\hline\n$a$ (au) \t\t & 61.9 & 67.6 & 78.6 \\\\\nPeriod (years) \t\t & 302 & 345 & 429\t \\\\\n$e$ \t\t & 0.3 & 0.5 & 0.6\t \\\\\n$i$ (${}^\\circ$) & 87.4 & 88.3 & 88.7 \\\\\n$\\omega$ (${}^\\circ$) & 210.7 & 224.6 & 224.0 \\\\\n$\\Omega$ (${}^\\circ$) & 184.8 & 184.8 & 184.8 \\\\\n$\\tau$ (MJD) \t\t & 59379 & 60840 & 61205 \\\\\n$\\phi$ (${}^\\circ$) & 143.0 & 143.7 & 143.9 \\\\\n\\end{tabular}}\n\\large\n\\end{table}\n\n\n\\begin{table}\n\\caption{\n\\label{tab:Aab}\nThe mass of star Ab and the A binary's semimajor axes as a function of inclination to the plane of the sky.\n}\n\\centerline{\n\\scriptsize\n\\begin{tabular}{l|llllllll}\nInclination (${}^\\circ$) & 90 & 70 & 60 & 50 & 40 & 30 & 20 & 10 \\\\\n\\hline\n$M_{Ab}$ ($\\rm M_{\\rm\\odot}$) & 0.22 & 0.23 & 0.25 & 0.29 & 0.36 & 0.49 & 0.80 & 2.36 \\\\\n$a$ (au) & 0.88 & 0.88 & 0.89 & 0.89 & 0.91 & 0.94 & 0.99 & 1.21 \\\\\n\\end{tabular}}\n\\large\n\\end{table}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=84mm]{fig1.eps}\n\\caption{\n\\label{fig:orbit}\nThe physical orbit of the quadruple star system HD 98800 in three\ndimensions. The wide orbit is fit II from \\citet{T99} and the B pair\norbit that of \\citet{B05}. Units are in au, and the plot is centred on\nthe barycentre of the B stellar pair. A projection of the orbits onto\nthe plane of the sky is also plotted along with the relevant\ndirections. The B pair orbit is overplotted magnified 20 times as\ndotted lines. The position of stars at the epoch MJD 52481.34 (mid\n2002, inner orbit at periastron) and the direction of the orbits are\ngiven. 
Finally, the line of intersection of the two orbits is also\nplotted as a dashed line.}\n\\end{figure}\n\n\\subsection{The Stellar Dynamics}\n\nThe mutual inclination for the stellar orbits given in\nTable \\ref{tab:orbit} can be calculated using the formula\n\\begin{equation}\n\\cos \\phi = \\cos i_1 \\cos i_2 + \\sin i_1 \\sin i_2 \\cos(\\Omega_2-\\Omega_1)\n\\end{equation}\nwhere $\\Omega_j$ are the longitudes of ascending node, $i_j$ are the\ninclinations relative to a given reference plane and $\\phi$ is the\nmutual inclination, the angle between the angular momentum vectors of\nthe two orbits (see e.g. \\citealt{S53}). In this case it is found to\nbe $143.7^\\circ$, and it is similar for the other two orbits (see Table\n\\ref{tab:ABorbit}). Notably, it is retrograde.\n\nAs mentioned, for an investigation of the dynamics of the\ncircumbinary disc the Aab system can likely be reasonably approximated\nby a single star of mass 1.3 $\\rm M_{\\rm\\odot}$, consistent with the minimum mass\nof Ab and the total system mass assumed by \\citet{T99}. In this case,\nthe system becomes a hierarchical triple and can be numerically\nstudied using the {\\sc{Moirai}} code \\citep{VE07}. A million-year\nsimulation of orbit II in this case is shown in Figure\n\\ref{fig:3stars}. Energy is conserved to a relative error of $10^{-7}$\nand the integration was checked with a standard Bulirsch-Stoer\nintegrator \\citep{P89} and found to be in agreement. The semimajor\naxes of the stars and the wide orbit's eccentricity are not shown, as\nthey remain constant throughout the simulation. Interestingly, the\nsystem is currently near its maximum eccentricity and mutual\ninclination. For all three possible outer orbits, the eccentricity and\nmutual inclination vary smoothly in a secular manner, with maximum\nangular separation between the orbital planes occurring for minimum\neccentricity of the inner orbit. 
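The mutual inclination quoted above can be checked directly by evaluating the formula for the orbital elements of the wide orbit and the Ba-Bb orbit in Table \ref{tab:orbit} (a minimal sketch; the function name is illustrative):

```python
import math

def mutual_inclination(i1_deg, node1_deg, i2_deg, node2_deg):
    """Angle between the angular momentum vectors of two orbits, from
    cos(phi) = cos i1 cos i2 + sin i1 sin i2 cos(Omega2 - Omega1)."""
    i1, i2 = math.radians(i1_deg), math.radians(i2_deg)
    dnode = math.radians(node2_deg - node1_deg)
    cos_phi = (math.cos(i1) * math.cos(i2)
               + math.sin(i1) * math.sin(i2) * math.cos(dnode))
    return math.degrees(math.acos(cos_phi))

# Wide orbit II (i = 88.3 deg, Omega = 184.8 deg) against the Ba-Bb
# orbit (i = 66.8 deg, Omega = 337.6 deg):
phi = mutual_inclination(88.3, 184.8, 66.8, 337.6)
print(round(phi, 1))  # 143.7 (> 90 deg, i.e. retrograde)
```

A value above $90^\circ$ confirms the retrograde configuration of the two orbital planes.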
The amplitude of the variation is\nvery similar in all cases and the periods are 65, 60 and 75 Kyr for\norbits I, II and III respectively. In fact the system remains in this\nstable configuration for at least a Gyr, as well as to at least 10 Myr\nago.\n\nThe orbital behaviour is well described by the octupole secular theory\nof \\citet{FKR00}, as overplotted in Figure \\ref{fig:3stars}. This\ntheory uses a third-order expansion of the system's Hamiltonian in the\nratio of the semimajor axes of the stellar orbits to obtain a set of\ncoupled first-order differential equations, their equations (29) to\n(32), for the time variation of the eccentricities and arguments of\nperiastron of the inner and outer orbits that can be numerically\nsolved. The inclination and nodes of the system are then derivable\nfrom these elements, and the semimajor axes remain constant. From the\nfigure, it can be seen that the theory is in excellent agreement with\nthe results from the full equations of motion, with only a small phase\ndrift after a Myr. In fact, because the two orbits are fairly well\nseparated, a quadrupole-level theory is sufficient to describe the\nsystem. In this approximation the outer eccentricity is a constant,\nas seen here. Thus, in the three body approximation the stars follow a\nstable secular evolution and can be accurately integrated by the\n{\\sc{Moirai}} code.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.15in]{fig2.eps}\n\\caption{\n\\label{fig:3stars}\nThe three body stellar orbital evolution for orbit II over 1 Myr from\nnumerical and theoretical modelling. The blue line shows the results\nfrom the symplectic integrator, identical to those from the\nBulirsch-Stoer shown in red. The octupole theory results are shown in\nblack, and differ only slightly in phase. The semimajor axes of both\norbits and eccentricity of the outer orbit are constant and not\nshown.}\n\\end{figure}\n\nThe full four body system can now be considered. 
The mass of Ab and\nthe semimajor axis of the orbit can only be resolved by assuming the\ninclination to the line of sight (see Table \\ref{tab:Aab}). The\nlongitude of ascending node of the orbit remains an unknown, but is\nneeded to constrain the mutual inclination of the orbital\nplanes. Therefore, to investigate possible dynamics, a set of\nsimulations were run for inclinations in the range $90^\\circ$ to\n$30^\\circ$ (shown in Table \\ref{tab:Aab}) with values of ascending\nnode ranging from $0^\\circ$ to $315^\\circ$ in $45^\\circ$ steps and\nusing wide orbit II. To do this, the fourth star was approximated as a\nplanet around its primary, Aa, in the {\\sc{Moirai}} code.\n\nIn all cases, there is little difference to the three body results. An\nexample is shown in Figure \\ref{fig:muti}. The only change is to the\nsecular period of orbit Ba-Bb together with a slight modulation of\ntheir minimum eccentricity, and in some cases even this does not\noccur. A slight change in the period of the secular variations is\nunlikely to alter the overall structure of the planetesimal disc. Indeed,\n\\citet{Be06} also find that modelling the distant sub-binary in GG Tau\nas a single object has little effect on the disc structure\nthere. Hence, the three body approximation is a reasonable assumption\nand will be used for the purposes of this investigation.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.15in]{fig3.eps}\n\\caption{\n\\label{fig:muti}\nThe evolution of the stellar orbits for the four body case with\n$i_{Ab}=30^\\circ$ (relative to the plane of the sky) and\n$\\Omega_{Ab}=45^\\circ$ compared to the three body approximation. The\nleft hand panels show the orbit of the inner binary Ba-Bb, red\nindicating the three body approximation and black the four body\ncase. The right panels show the evolution of the orbit of the Aa-Ab\nbinary. 
The eccentricity of the wide binary A-B once again remains\neffectively constant, as do the semimajor axes of all orbits.}\n\\end{figure}\n\n\\subsection{Other Physics}\n\nAs the disc mass is low (0.002$\\rm{M}_\oplus$, \\citealt{P01}), it is\nreasonable to model the planetesimals as non-interacting test\nparticles. However, there is one interaction process that may be\nimportant in the modelling of the planetesimals, namely\ncollisions. Following \\citet{W07}, it is possible to estimate the\ncollision timescale from the observed infrared excess. Although this\nassumes a collisional cascade is underway, it should provide a lower\nlimit to the timescale if dust is in fact maintained by gas or\ngenerated in an avalanche. The collision timescale is given by their\nequation (13) as\n\\begin{equation}\nt_c = \\left( \\frac{3.8\\rho r^{2.5} dr D_c}{M_{\\star}^{0.5}M_{tot}}\\right)\n \\left( \\frac{(12q-20)(1+1.25(e\/I)^2)^{-0.5}}{(18-9q)G(q,X_c)} \\right)\n\\end{equation}\nwhere $q$ is assumed to be $11\/6$ for a collisional cascade and $G$ is\na function of $q$ and $X_c=D_{cc}\/D_c$, $D_c$ being the diameter of the\nlargest planetesimal in the cascade, taken as $2000$ km, and $D_{cc}$\nbeing the smallest planetesimal that has enough energy to destroy\nanother of size $D_c$. $\\rho$ is the planetesimal density, taken as\n$2700$ kg m${}^{-3}$, $r$ is the planetesimal belt radius in au, $dr$\nis the planetesimal belt width in au, $M_{tot}$ is the total mass of\nmaterial in the cascade in $M_{\\oplus}$, $e$ is the mean orbital\neccentricity of planetesimals and $I$ is the mean orbital inclination\nof planetesimals in radians. 
The factor $G(q,X_c)$ is given by their\nequation (9) as\n\\begin{eqnarray}\nG(q,X_c) &=& (X_c^{5-3q}-1) + \\frac{6q-10}{3q-4}(X_c^{4-3q}-1) \\nonumber \\\\\n{} & & + \\frac{3q-5}{3q-3}(X_c^{3-3q}-1) \n\\end{eqnarray}\nand $X_c$ by their equation (11) as\n\\begin{equation}\nX_c = 1.3 \\times 10^{-3} \\left( Q_D^{\\star} r M_{\\star}^{-1}f(e,I)^{-2} \\right)^{1\/3}\n\\end{equation}\nwhere $f(e,I)$ is the ratio of the relative collision velocity to the\nKeplerian velocity, taken as 0.1, and $Q_D^{\\star}$ is the dispersal\nthreshold, the specific incident energy needed to destroy a\nparticle, assumed to be 200 J kg${}^{-1}$. In addition, it is also\nassumed that $e \\approx I$.\n\nThe two stars Ba and Bb are approximated as a single object of mass\n1.57 $M_\\odot$ and luminosity 0.58 $L_{\\odot}$\n\\citep{P01}. The total mass of the disc can be calculated from\nequations (4) to (6) of \\citet{W07} as\n\\begin{equation}\nM_{tot} = 5.194 \\sigma_{tot} = 14.36 r^2\n\\end{equation}\nand hence the collision timescale as a function of $r$ and $dr$ is\n\\begin{equation}\nt_c = 1.014 \\times 10^{6} r^{0.5} dr \\frac{1}{G(q,X_c)}.\n\\end{equation}\nThe exact extent of the planetesimal disc is as yet unknown, but the\nlocation of the dust can be used as an approximation. The minimum\nradius for dust to be able to exist is given as 2.2 au by \\citet{L05},\nbut \\citet{K00} estimate the disc to be between 5.0 and 18 au. In the\nfirst case of a wide disc starting around 2 au, the collision timescale is 30\nKyr, and in the second case of a disc from 5 to 18 au, it is 200\nKyr. Preliminary results from test simulations indicated that the disc\nmust lie further out than 2 au, so the longer timescale is applicable\nand gravitational effects will dominate the behaviour of the\nplanetesimals here. 
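The chain of expressions above can be evaluated numerically. The sketch below assumes the parameter values quoted in the text ($q=11/6$, $Q_D^{\star}=200$ J kg$^{-1}$, $f(e,I)=0.1$, $M_{\star}=1.57\,M_\odot$), with the prefactor $1.014\times10^6$ already absorbing the belt mass $M_{tot}=14.36\,r^2$; the representative radii and widths chosen for the two cases are illustrative assumptions, not values fixed by the text:

```python
import math

Q_D = 200.0      # dispersal threshold, J/kg
F_EI = 0.1       # f(e, I): relative collision velocity / Keplerian velocity
M_STAR = 1.57    # combined mass of Ba + Bb, solar masses
Q = 11.0 / 6.0   # collisional cascade size-distribution index

def G(q, xc):
    # Equation (9): weighting of the projectile size distribution
    return ((xc ** (5 - 3 * q) - 1)
            + (6 * q - 10) / (3 * q - 4) * (xc ** (4 - 3 * q) - 1)
            + (3 * q - 5) / (3 * q - 3) * (xc ** (3 - 3 * q) - 1))

def X_c(r):
    # Equation (11): smallest projectile (relative to D_c) able to
    # destroy the largest body, at belt radius r in au
    return 1.3e-3 * (Q_D * r / (M_STAR * F_EI ** 2)) ** (1.0 / 3.0)

def t_c(r, dr):
    # Collision timescale in years: t_c = 1.014e6 r^0.5 dr / G(q, X_c),
    # with M_tot = 14.36 r^2 already folded into the prefactor
    return 1.014e6 * math.sqrt(r) * dr / G(Q, X_c(r))

# Illustrative cases (representative r and dr are assumptions):
# a wide disc starting near 2 au, and a disc spanning 5 to 18 au.
print(t_c(2.2, 15.8))   # of order 3e4 yr  (about 30 Kyr)
print(t_c(11.5, 13.0))  # of order 2e5 yr  (about 200 Kyr)
```

Both values reproduce the order of magnitude quoted in the text, supporting the conclusion that gravity dominates over collisions for the disc structure.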
Thus, for the purpose of determining overall disc\nstructure, no additional physics needs to be included.\n\n\n\\subsection{Method of investigation}\n\nTo summarise, the stars are modelled as a three body system with a\ndisc of massless planetesimals interacting through gravitational\neffects with the stars only. The three possible orbital configurations\nfor the wide binary will all be considered. To model the disc, the\n{\\sc{Moirai}} code has been shown to be accurate, and the test\nparticle disc can be implemented as circumbinary particles around the\nB stellar pair.\n\nThe models discussed in the introduction have all assumed that the\ndisc is coplanar with the B binary, but there is evidence for inclined\ndiscs around similar multiple stars. For example, the close binary\npair in the T Tauri system was recently determined to have misaligned\ncircumstellar discs \\citep{Sk07}, and polarization surveys have found\na small number of similar cases \\citep{Mo06}. Because of this, the\ntest particle distribution is not initially taken as\ncoplanar. Instead, the particles are spaced uniformly in inclination and\nlongitude relative to the B binary pair. The initial surface density\nof the disc is taken as $1\/r$ and the grid of test particles runs from\n1.75 au to 33.85 au (the inner binary's radius to half the outer\nbinary's radius), as shown in Table \\ref{tab:grids}. As the present-day\nconfiguration, stability and geometry of the system are of interest to\ncompare to observations, the simulations are run from 1 Myr ago to the\npresent day (defined as the periastron time of the Ba-Bb pair from\nTable \\ref{tab:orbit}). 
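The size of the initial-condition grid in Table \ref{tab:grids} follows directly from the ranges and step sizes listed there; a short sketch (the helper name is illustrative, and this only reproduces the particle count, not the {\sc{Moirai}} set-up itself):

```python
def grid_count(lo, hi, step):
    # number of grid points from lo to hi inclusive at the given step
    return int(round((hi - lo) / step)) + 1

n_a     = grid_count(1.75, 33.85, 0.1)  # semimajor axis, au
n_e     = grid_count(0.0, 0.04, 0.02)   # eccentricity
n_i     = grid_count(0, 180, 5)         # inclination, deg
n_angle = grid_count(0, 240, 120) ** 3  # omega, Omega, mean anomaly

total = n_a * n_e * n_i * n_angle
print(total)  # 965034 test particles
```

The product matches the total of 965\,034 test particles quoted in the table.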
The results from these simulations are\npresented in the next section.\n\n\\begin{table}\n\\caption{\n\\label{tab:grids}\n The test particle grid used in the simulations.\n}\n\\centerline{\n\\scriptsize\n\\begin{tabular}{l|ccc}\nOrbital Parameter & Min & Max & Step Size \\\\\n\\hline\n$a$ (au)\t\t\t & 1.75 & 33.85 & 0.1 \\\\\n$e$\t\t\t\t\t & 0.0 & 0.04 & 0.02 \\\\\n$i$ (${}^\\circ$)\t & 0 & 180 & 5 \\\\\n$\\omega$ (${}^\\circ$)& 0 & 240 & 120 \\\\\n$\\Omega$ (${}^\\circ$)& 0 & 240 & 120 \\\\ \n$M$ (${}^\\circ$)\t & 0 & 240 & 120 \\\\\n\\hline\n$\\rm{N_{tp}}$\t\t\t & \\multicolumn{3}{c}{965034} \n\\end{tabular}}\t\n\\large\n\\end{table}\n\n\n\n\n\\section{Results}\n\\label{sec:results}\n\nThe stability of planetesimals is indicated by the stability of the test\nparticles. These are removed from the simulation if they cross\neither of the stellar orbits or if they become unbound. The test\nparticle decay rates are shown in Figure \\ref{fig:decay} for the three\ndifferent simulations. The half-life is short and by the end of the\nsimulations the decay rate has levelled out, although some particles\nare still being slowly eroded from very unstable locations. The\nmajority of unstable particles are removed on 10 to 100 Kyr timescales\nas they become unbound or cross the outer orbit. If the simulations\nare run for an additional million years, there is no further\nsignificant loss of particles so the simulation length is sufficient\nto determine the system state, especially given the system age of\nabout 10 Myr.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=3.15in]{fig4.eps}\n\\caption{\n\\label{fig:decay}\nThe test particle decay rates for the three simulations. 
The half-life\n(ignoring the first 0.1 Myr when all initially unstable particles are\nremoved) is about 0.3 Myr for orbit I, 0.2 Myr for orbit II and 0.1\nMyr for orbit III.}\n\\end{figure}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=7in]{fig5.eps}\n\\caption{\n\\label{fig:II_faceon}\nEdge-on and face-on views of the circumbinary material for the simulation\nusing orbit II. The inner binary's orbit lies in the $z=0$ plane and\nthe line of intersection with the outer stellar orbit is along the\n$y$-axis. The orbits of the stars and their current positions are\noverplotted in red, but the wide orbit is shown at a tenth of its\nactual size. The line of sight is also shown as a dashed line.}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=7in]{fig6new.eps}\n\\caption{\n\\label{fig:II_graphs}\nThe final test particle distributions for the simulation using orbit\nII. The two panels on the left show the inclination relative to the\ninner binary as a function of semimajor axis and radial distance from\nthe centre of the binary, with colour indicating inner mutual\ninclination as shown. The two panels on the right show the inclination\nrelative to the wide orbit, but the colour still indicates inner\ninclination. Overplotted as a dashed line is the mutual inclination of\nthe two stellar orbits.}\n\\end{figure*}\n\nThe final structure at the end of the simulation is similar in all\nthree orbital cases, and is well illustrated by the case of orbit\nII. Shown in Figure \\ref{fig:II_faceon} is the spatial distribution of\nthe circumbinary material. A warped coplanar disc is apparent around\nthe inner binary, as well as a large amount of higher inclination\nmaterial. The disc appears to have a small inner ring separate from\nthe main bulk of particles. There is also a slight clumping of material\napparent, perpendicular to the inner orbit's line of apses. 
The high\ninclination material appears to form another ring or halo\nperpendicular to the disc, most obvious in the middle plot of the\nfigure.\n\nThese different structures are more clearly seen in a plot of\ninclination as a function of distance from the inner binary, as shown\nin Figure \\ref{fig:II_graphs}. Here, the particle distribution\nrelative to the inner and outer orbits is plotted for both radial distance and\nsemimajor axis. To identify different populations, particles\nare colour-coded by their inclination relative to the inner binary's\norbit.\n\nThree distinct features are seen in these plots: a prograde coplanar\ndisc with two gaps; a retrograde disc with one gap; and an extended\nring or halo, as seen in the spatial plot, from $50^\\circ$ to\n$130^\\circ$ inner inclination. These populations are well separated in\ninclination by two gaps around $50^\\circ$ and $140^\\circ$, the latter\nbeing centred on the inclination of the outer binary's orbit.\n\nThe coplanar disc extends from about 4 to 15 au, with two gaps, between\n4.5 and 5 au (seen very clearly in Fig. \\ref{fig:II_faceon}) and between 7 and\n8 au. The gaps are less obvious in the radial distribution due to\nparticle eccentricity, but are still apparent. The outer regions of\nthis disc are inclined up to almost $50^\\circ$ and provide the warped\nmaterial seen in the spatial plot. There is also a series of breaks in\nthe semimajor axis distribution slightly apparent both here and in the\nretrograde disc (these are very clear in Figure~\\ref{fig:stabmap}),\npresumably due to resonant features, as they are much larger than the\ngrid resolution of 0.1 au. Relative to the outer orbit, the disc\nmaterial in the outer warped regions is perturbed up to almost\ncompletely retrograde orbits, explaining its stability.\n\nThe retrograde (relative to the inner binary) disc is smaller,\nextending from 4 to 9 au, but also has a gap between 4.5 and 5 au. 
The\nouter regions here are perturbed towards the same inclination as the\nouter orbit, similar but opposite to the coplanar disc's\nstructure. This disc is in fact much sharper relative to the outer\nstar, where it is prograde with an inclination of around $30^\\circ$.\n\nThe last component is the halo-like structure. This extends only from\n5.0 to 8.0 au but covers a large range of inclinations relative to\nboth stellar orbits. This material is unusual in remaining stable at\nvery high inclinations, but similar stable high inclination particles\nhave been seen before by \\citet{Be06} in simulations of GG Tau.\n\nThese three separate populations are also very distinct in a plot of\nfinal versus initial inclination, as shown in Figure\n\\ref{fig:II_initial}. Particles in the prograde disc start with\ninitial inner inclinations in the range $0^\\circ$ to $50^\\circ$ and\nremain within that range. Those in the halo, starting in the range\n$60^\\circ$ to $125^\\circ$, also remain there, although there is less\nvariation in the middle of this group. Finally, the retrograde disc is\nclearly confined by the inclination of the outer stellar orbit, and\nparticles here start and remain in the range $145^\\circ$ to\n$180^\\circ$. These populations are again clear in the outer\ninclination distribution, as can be seen from the colour coding of the\nparticles. There is in fact a fourth population visible here,\nat around $70^\\circ$ to $90^\\circ$, which forms a more diffuse ring at\na different angle to that of the main halo.\n\nThe initial outer inclination distribution is different for the orbit\nI and III simulations, as the stars start in slightly different\nplaces. However, the components evolve to the same final inclination\ndistribution, showing that the results are robust and do not strongly\ndepend on parameters such as the initial longitudes. 
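The separation of surviving particles into components by initial inner inclination can be expressed as a simple classifier (a sketch only; the boundaries are those of the orbit II case, the exact gap edges are approximate, and the function name is illustrative):

```python
def classify_population(inc_inner_deg):
    """Assign a surviving test particle to a component by its initial
    inclination (degrees) relative to the inner binary's orbit.
    Boundaries follow the orbit II simulation and are approximate."""
    if inc_inner_deg <= 50:
        return "prograde disc"
    if 60 <= inc_inner_deg <= 125:
        return "halo"
    if inc_inner_deg >= 145:
        return "retrograde disc"
    return "unstable gap"

print(classify_population(10))   # prograde disc
print(classify_population(90))   # halo
print(classify_population(170))  # retrograde disc
print(classify_population(135))  # unstable gap
```

The two "unstable gap" bands around $50^\circ$ and $140^\circ$ correspond to the inclination gaps separating the populations, the latter centred on the outer binary's inclination.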
Note also that in\nthe orbit II case no particles start with initial outer inclinations\nless than about $30^\\circ$, so another region of stability could\nexist here. However, a disc of particles started coplanar with this\nouter orbit very rapidly became unstable, with no\nmaterial surviving to the end of the simulation.\n\nA radial profile of the three main structures can be plotted by using\nthe inner inclination range to characterise them. This is shown in\nFigure \\ref{fig:II_profi} for each simulation. The coplanar disc\n(black) is defined as starting between $0^\\circ$ and $50^\\circ$, the\nhalo (green) between $55^\\circ$ and $140^\\circ$ and the retrograde disc\nbetween $145^\\circ$ and $180^\\circ$. The profiles of the components\nare very different in each case, and the gaps in the two discs very\napparent.\n\nThe radial profiles here are the result of the evolution of an initial\n$1\/r$ distribution. A simulation was rerun with a flat initial density\nprofile in the orbit II case. The resulting disc profiles were\nsimilar, although the inner and outer edges were slightly steeper. The\nprofile of the halo also remained largely unchanged.\n\nThe most notable difference between the three simulations is the\ngenerally greater stability in the lower eccentricity cases and the\nexistence of an extended retrograde disc in the orbit I case. This\nstability is shown in greater detail in Figure \\ref{fig:stabmap}, and\ncompared to the other simulations. Just as the extended prograde coplanar\ndisc is retrograde relative to the outer star, this retrograde disc is\nprograde relative to it. The extended prograde disc is itself less populated\nhere, but both features can partly be explained by the initial particle\ninclinations. 
The stellar orbits are different here, and particles that\nare retrograde relative to the inner star are more coplanar with the\nouter star than in the other orbital cases, while those with\nprograde inner inclinations have lower outer inclinations. Thus, fewer\nparticles start in the prograde extended disc, while more start in the\nretrograde case. It should be noted, however, that even if the initial\ninclination distribution is similar to the other two simulations, a\nretrograde extended disc is still seen. The eccentricity of the\nstellar orbits is therefore still likely to be important to these\nfeatures, as can be seen from the decreasing radial extent of the\nprograde disc from orbit I to III in Figure \\ref{fig:stabmap}. Indeed,\nas discussed above, a retrograde disc relative to the outer star in\nthe orbit II case is completely unstable.\n\n\nThe radial extent of the structures seen in the three simulations is\ndetailed in Table \\ref{tab:rlimits} and compared to both previous\nestimates of the size of the observed dust disc and empirical\nstability limits from numerical studies of general binary and triple\nsystems. These empirical limits show the inner and outer radius for\ncoplanar circumbinary stability by modelling the stars as a low\ninclination triple system \\citep{VE07} and as two coplanar decoupled\nbinary systems \\citep{HW99}. A more general result from \\citet{VE07}\nis that in cases of high stellar eccentricity and mass ratio the\nstable region is more complex than a completely stable ring, with\ngaps appearing that are most likely due to overlapping resonances\nfrom the two stellar orbits, as seen here. The extended outer regions\nof the prograde and retrograde discs are outside the empirically\npredicted stable region. This is most likely due to the high\ninclinations and the particles' retrograde nature relative to the outer\nor inner orbit, as retrograde orbits are generally more stable than\nprograde. 
However, the inner edges are well predicted.\n\n\n\begin{figure}\n\centering\n\includegraphics[width=3.15in]{fig7new.eps}\n\caption{\n\label{fig:II_initial}\nThe initial test particle inclinations plotted against their final\ninclinations for the simulation using orbit II. The left panel shows\ninclination relative to the inner binary and the right panel relative\nto the outer wide orbit. Colour now indicates initial outer\ninclination to highlight the small population around $80^\circ$ in the\nright hand plot. The dotted vertical line shows the initial mutual\nstellar inclination and the horizontal line the final.}\n\end{figure}\n\n\begin{figure*}\n\centering\n\includegraphics[width=7in]{fig8.eps}\n\caption{\n\label{fig:II_profi}\nThe radial profile of the three inclination components of the\ncircumbinary material for all three simulations, showing the surface\ndensity as a function of radius. The coplanar disc is plotted in\nblack, the halo in green and the retrograde disc in red. Note that for\nthe orbit I case the retrograde disc starts at $105^\circ$ instead of\n$145^\circ$ due to the differences in the initial inclinations of the\nstars for this orbit. The retrograde disc and halo here also now\noverlap in inner inclination, and are separated using their outer\ninclination instead.}\n\end{figure*}\n\n\n\n\n\begin{table}\n\caption{\n\label{tab:rlimits}\nThe radial extent of the disc around HD 98800 B. The top four rows show other models of this system, the rest show empirical fits from general binary and triple systems and the extent of the three different inclination components for each possible outer orbit from the simulations here. 
\n}\n\\centerline{\n\\scriptsize\n\\begin{tabular}{l|l@{\\hspace{0.05in}}c@{\\hspace{0.05in}}l@{\\hspace{0.05in}}c@{\\hspace{0.05in}}l@{\\hspace{0.05in}}c@{\\hspace{0.05in}}r}\nModel \t\t& \\multicolumn{4}{l}{Inner edge (au)} \t & \\multicolumn{3}{r}{Outer edge (au)}\t\\\\\n\\hline\n\\citet{K00} & \\multicolumn{7}{c}{$5.0 \\pm 2.5$}\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n\\citet{P01} & 2.0\t & & & -- & &\t & 5.0\t\t\t\t\\\\\n\\citet{Fu07}& 2.0 & \\multicolumn{5}{c}{gap} & 5.9\t\t\t\t\\\\\n\\citet{Ak07}& 3.0 \t & & & -- & & \t& 10.0\t\t\t\t\\\\\n\\hline\n\\multicolumn{8}{c}{Orbit I}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n\\hline\n\\citet{HW99}& \\multicolumn{3}{l}{4.1}& -- & \\multicolumn{3}{r}{10.9}\t\\\\ \n\\citet{VE07}& \\multicolumn{3}{l}{3.9}& -- & \\multicolumn{3}{r}{11.0}\t\\\\\n\\hline\nDisc\t\t& 3.0 -- 4.0 & & & gap & 4.5 -- 8.0 & gap & 9.0 -- 20.0\t\t\\\\\nHalo \t& 5.0\t\t & & & -- & & \t& 8.0\t\t\t\t\\\\\nRetrograde & 3.5 -- 4.0 & & & gap & 4.5 -- 7.0 & gap & 7.5 -- 13.0 \t\t\\\\\n\\hline\n\\multicolumn{8}{c}{Orbit II}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n\\hline\n\\citet{HW99}& \\multicolumn{3}{l}{4.1}& -- & \\multicolumn{3}{r}{4.3}\t\\\\ \n\\citet{VE07}& \\multicolumn{3}{l}{3.9}& -- & \\multicolumn{3}{r}{0.6}\t\\\\\n\\hline\nDisc\t\t& 4.0 -- 4.5 & & & gap & 5.0 -- 7.0 & gap & 8.0 -- 15.0 \t\t\\\\\nHalo \t& 5.0\t\t & & & -- & & \t& 8.0\t\t\t\t\\\\\nRetrograde & 4.0 -- 4.5 & & & gap & 5.0 & -- & 9.0 \t\t\t\t\\\\\n\\hline\n\\multicolumn{8}{c}{Orbit III}\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n\\hline\n\\citet{HW99}& \\multicolumn{3}{l}{4.1}& -- & \\multicolumn{3}{r}{7.0}\t\\\\ \n\\citet{VE07}& \\multicolumn{3}{l}{3.9}& -- & \\multicolumn{3}{r}{7.3}\t\\\\\n\\hline\nDisc\t\t& 3.0 -- 3.5 & gap & 4.0 -- 4.5 & gap & 5.0 -- 7.0 & gap & 8.0 -- 15.0\t\t\\\\\nHalo \t& 5.0\t\t & & & -- & & & 8.0\t\t\t\t\\\\\nRetrograde & 4.0 -- 4.5 & & & gap & 5.0 & -- & 8.0\t\n\\end{tabular}}\t\n\\large\n\\end{table}\n\n\nAn important feature of the observations of the system is the\nextinction 
and photometric variability towards the B binary,\nattributed to disc material along the line of sight by \citet{T99} and\n\citet{B05}. \citet{Ak07} find that for the system they investigate\nthe line of sight can just intercept the top of a warped disc of test\nparticles. To see what material, if any, occurs along the line of\nsight here, we plot the azimuthal structure of the disc, as shown in\nFigures \ref{fig:II_rzl} and \ref{fig:warp} for the case of orbit\nII. Here, the system has been rotated so that the inner binary lies in\nthe horizontal plane with its periastron at an azimuthal angle of\n$0^\circ$. Each panel then shows the height above the plane as a\nfunction of radial distance within the plane of the inner binary in\nsegments of $10^\circ$ in azimuthal angle. Particles are colour-coded\nby inner mutual inclination as for Figure \ref{fig:II_graphs} to\ndistinguish the different components. The locations of the three stars\nare also shown (the position of the outer star is shown reduced by a\nfactor of ten), as is the line of sight on the relevant plot.\n\nThe warp in the prograde disc is very apparent, with material here\nreaching heights of almost 10 au above the plane. The retrograde disc\nis slightly less warped, but does not extend out as far. The halo\nmaterial shows up very clearly as a ring, instead of a continuous\nshell covering all angles. There is a small amount of material\nperpendicular to the main ring only at very high $z$, the fourth\npopulation visible in Figure \ref{fig:II_initial}. The warp lies along\nthe $30^\circ$-$210^\circ$ line, as illustrated in the left-hand panel\nin Figure \ref{fig:warp}, but the line of sight at around $160^\circ$\nstill intercepts a large amount of perturbed coplanar disc material,\nfully 5 au above the plane of the binary's orbit. Figure \ref{fig:Irz}\nshows similar plots for the other two orbital cases for the azimuthal\nsegment containing the line of sight. 
In the high eccentricity case of\norbit III, almost no material is intercepted, while in the low\neccentricity orbit I simulation far more prograde and retrograde\nmaterial remains, making either this orbit or orbit II the most likely\nconfiguration if a warped planetesimal disc and associated dust\nis the cause of the observed extinction.\n\nOver the length of the simulation, the warp precesses with a timescale\nequal to twice the secular period of the stars, remaining almost\nperpendicular to the line of intersection of the two stellar orbits as\nit circulates. The extent of the warp in fact decreases as the mutual\ninclination between the stars increases (since the orbit is\nretrograde, the higher the inclination, the closer the two orbital\nplanes). The non-symmetrical distribution seen in Figure\n\ref{fig:warp} persists, with one side usually greater in height than\nthe other. The warp is not a short term feature, and the system will\npersist in its current configuration for some time, so if dust follows\nthe planetesimal distribution it is very likely that the extinction is\nindeed caused by the warp in the disc. It should also be noted that\nthe warped material remains after an additional Myr, and is not\nslowly eroded away.\n\nThe azimuthal particle distribution shown in the right hand panel of\nFigure \ref{fig:warp} shows that the two discs are not skewed in any\nparticular direction, but that the halo is aligned along the minor\naxis of the inner binary's orbit. In fact, it remains aligned at this\nangle over the entire simulation. The material in this halo is seen in\nprojection in the right most panel of Figure \ref{fig:II_faceon} as\nthe two clumps discussed earlier. 
As this material follows the\npericentre of the inner orbit, which precesses on the secular time\nscale, it will not intercept the line of sight for some time and is\nunlikely to be related to the observed variability and extinction.\n\nAn important final question, raised by \citet{P01}, is the absence of a\nsimilar circumbinary disc around the other binary, A. As the\neccentricity and mass ratio of this pair are both smaller, a disc\nshould be more likely here. In fact, the empirical criteria place the\nstable zone from around 3 au to 11, 8 and 7 au for orbits I, II and\nIII respectively. A preliminary simulation taking A as a single star\nconfirms this greater stability, so the question remains as to why no\ndisc is observed here. It is possible that the inclination of this\nbinary's orbit is such that no planetesimals can remain stable, and if\nso this would provide limits on the orbit of this stellar\npair. Alternatively, if there is indeed a disc of stable planetesimals\nhere, it may be far less dusty and so unobserved.\n\n\n\begin{figure*}\n\centering\n\includegraphics[width=7in]{fig9.eps}\n\caption{\n\label{fig:stabmap}\nStability maps comparing the results from the three simulations using orbits I, II and III. The average survival time is shown as a function of initial semimajor axis and inner inclination, with black indicating that all particles starting at a given grid point survived to the end of the simulation, through to white indicating that the location was very quickly unstable.}\n\end{figure*}\n\n\begin{figure*}\n\centering\n\includegraphics[width=4in]{fig10.eps}\n\caption{\n\label{fig:II_rzl}\nThe azimuthal distribution of material for the simulation using orbit\nII. The panels show the radial distance within the plane of the inner\nbinary and the height above it in segments of $10^\circ$. The\nperiastron of the inner binary is at $0^\circ$. 
The stellar positions\nare overplotted with the outer star shown reduced by a factor of ten,\nand the line of sight shown as a dashed line. The test particle colour\nindicates current inner inclination, as in Figure\n\ref{fig:II_graphs}.}\n\end{figure*}\n\n\begin{figure}\n\centering\n\includegraphics[width=3.15in]{fig11.eps}\n\caption{\n\label{fig:warp}\nThe azimuthal distribution of particles for the simulation using orbit\nII. The left hand panel shows a polar plot of the warp in the prograde\n(black) and retrograde (red) discs. The halo is not a disc with\nrespect to this plane so is not plotted. The average height away from the\nplane of the inner binary is shown as a function of azimuthal angle,\nincreasing anticlockwise from the horizontal axis, and the periastron\nof the inner binary is at $0^\circ$. The line of sight is shown as a\nsolid line and the line of intersection of the two orbits as a dashed\nline (the ascending node is towards the bottom right). A plus marks the\nside of the warp that is above the plane. The right hand panel shows a\npolar plot of the number of particles in each $10^\circ$ angle\nbin. Red and black are as before and the halo is now also plotted in\ngreen.}\n\end{figure}\n\n\begin{figure*}\n\centering\n\includegraphics[width=7in]{fig12.eps}\n\caption{\n\label{fig:Irz}\nThe material intercepting the line of sight in all three\nsimulations. As for Figure \ref{fig:II_rzl}, radial distance within the\nplane and height above it are shown with plot points colour-coded to\ninitial inclination. Here, however, only the segment near the line of\nsight has been shown for each simulation.}\n\end{figure*}\n\n\section{Conclusion}\n\label{sec:conclusion}\n\nDynamical simulations of a planetesimal population in the debris disc\naround the Bab stellar pair in HD 98800 have been run. By studying a\nwide range of inclinations, three distinct stable populations have\nbeen identified. 
These are a prograde disc, a retrograde disc and a\nhigh inclination halo. The radial profiles of each component are\ndifferent and distinct. The discs both have large radial gaps, caused\npresumably by overlapping resonances from the stellar orbits. The\nradial extent of the discs is summarised in Table \ref{tab:rlimits}\nbut is generally from 3 to 15 au for all three orbits, with gaps at\naround 4.5 au and 8 au.\n\nThe line of sight can currently pass through slightly warped material\nin the prograde disc, and this would account for the observed\nextinction and variability if the dust distribution followed the\nplanetesimals. However, this alignment with the line of sight\neffectively occurs only for the I and II orbits (with eccentricities\nof $0.3$ and $0.5$ respectively), which would rule out the higher\neccentricity case ($e = 0.6$) as a possible orbit for the outer star.\n\nThe radial profile is complex, and illustrates that, for discs in\nmultiple systems, dynamical effects purely due to the stars are very\nimportant. Indeed, if most triple stars are within the limit for\nresonance overlap to sculpt the stability zone, then this has large\nconsequences for debris discs and planet formation in such systems.\n\nThe simulation results here can be compared to other models of the\nsystem and estimates of the dust distribution. Indeed, as previously\nmentioned, comparisons to observations of the dust could place\nconstraints on the physical processes occurring in the circumbinary\ndisc. The bulk of the dynamically stable planetesimals are between 5\nand 7 au. This matches up well with the prediction of \citet{K00} of a\nring outwards from 5 au. The model of \citet{P01} places the disc from\n2 to 5 au, which, apart from the ring around 4.5 au, is unstable\nhere. \citet{Fu07} suggest an inner ring at 2 au and a thicker puffed\nup component at 5.9 au, the only model to predict gaps, and similar to\nthe prograde discs here. 
The latter of these components matches up to\nthe bulk of the material here, and there is indeed an inner ring seen,\njust further out at 4 au. \citet{P01} estimate the height of the disc\nas 1 au, and \citet{Fu07} as 0.75 au. However, here material in the\nprograde planetesimal disc is not only warped but very flared,\nreaching heights of 5 to 10 au. This is an important dynamical feature\nthat should be taken into account in models of the\nobservations. Although roughly agreeing in location, these models do\nnot have enough detail as yet to further compare to the planetesimals.\n\nThe dynamical model of \citet{Ak07} finds stability from 3 to 10 au,\nwhich does not match the limits here that well, and in fact\ntheir resulting disc has a very different geometry. Their model uses\ndifferent orbital parameters for the stellar system, most notably a\ndifferent mutual inclination -- as their aim is not to reproduce the\ncurrent configuration but to look at the general morphological\nstructure of debris discs in highly inclined triple star systems. They\nalso find that the line of sight can intercept a warp in the disc, but\ndue to the different geometry it crosses at a different point and at\nthe maximum warp in the disc, whereas here the warp is only marginally\norientated towards the line of sight. It is in fact material in the\nmore sparsely populated outer regions of the prograde disc that intercepts\nthe line of sight here, from 8 to 15 au, which is a region that in a\ncoplanar stellar system is predicted to be unstable.\n\nThe possible dynamical reason for the high infrared excess already\nmentioned in the Introduction is a close pericentre passage of the A\nbinary stirring up planetesimals and resulting in high collision\nrates. Since the binary's period is about 300 years, many such\npericentre passages have occurred, and by the end of the simulations\nthe particles appear very stable with regards to this. 
For example,\nthere appears to be no periodic increase in planetesimal\neccentricities as the outer star passes close to the disc. There is,\nhowever, another feature that may cause higher than normal collision\nrates. The extent of the flare and warp in the extended disc decreases\nas the acute angle between the stellar orbits decreases (and the\nmutual inclination increases). As this occurs, planetesimals that have\nbeen spread out in inclination are now packed into less space. This is\nlikely to result in an increased number of collisions and is\nparticularly relevant as the current stellar configuration is such\nthat mutual inclination is large and the disc is near its narrowest\npoint. However, modelling including collisions is needed to quantify\nthis effect.\n\nThe stable high inclination particles in the halo are curious from a\ndynamical perspective. The Kozai mechanism \\citep{Ko62} might be\nexpected to remove such particles very quickly -- however these\nparticles do not seem subject to this. It is possible that they are in\nfact an artifact of the numerical method, but checks with a standard\nBulirsch-Stoer integrator \\citep{P89} and the similar observation in\nsimulations of GG Tau discount this. The high stellar inclinations\ninvolved in the system may be one explanation, but it is worth looking\nat the stability should one of the binaries be removed. If the outer\nstar is not present, a sharp inner disc edge is seen at around 4 au\nfor all inclinations, with all particles outwards from this remaining\nstable. If the inner binary is approximated as a single star instead,\nthen a reasonably sharp outer edge is seen at around 7 to 10 au, for\nthe lower inclinations only. The near polar inclinations in this case\nare now unstable. The high inclination particles are within this\nregion which must be shaped by the combined perturbations from both\nstellar orbits, so are in a new regime of dynamical behaviour. 
The\nmechanism stabilising these particles, as well as that stabilising the\nextended prograde and retrograde discs, is the subject of a future\npaper.\n\nSome consideration is needed, however, of how such material would\nform. T Tauri multiple systems are common and believed to form\nprimordially through fragmentation processes, which could result in\nnon-coplanar discs (\citealt{Ba00}, \citealt{BB00}). As mentioned,\n\citet{Mo06} find a small number of multiple T Tauri systems with\nmisaligned discs in their polarization survey, but suggest that\nperhaps these are perturbed discs soon to be realigned. So there is\nsome evidence that non-coplanar particle distributions are\nplausible. It may be possible as well that particles from the disc can\nbe captured into the polar stability region, and certainly a coplanar\nparticle distribution is quickly perturbed to fairly high\ninclinations.\n\nThere is much further work to be done in modelling this system. A\nmodel of the planetesimals including collisions, although expected to\nbe a minor effect on the dynamics, would indicate whether the collision\nrate is currently high and may explain the unusually dusty nature of the\nsystem. A model that also included dust and dust collisions, as well\nas interactions with any gas in the system, would then fully model the\nsystem and allow detailed comparisons with the observations. Although\nnot an easy task given the nature of the system, it would narrow down\nthe physics and dynamics at work, important to our understanding of\nthis stage of the planetary formation process in multiple star\nsystems.\n\n\n\section*{Acknowledgements}\n\nWe would like to thank Mark Wyatt for bringing this unique system to\nour attention, and for helpful discussions with him and Ken Rice. We\nwould also like to thank the anonymous referee for helpful\ncomments. 
PEV acknowledges financial support from the Science and\nTechnology Facilities Council.\n\n\n\section{An abstract Cops and Robbers{} game}\label{sec:absmodel}\nWe now present a general model of \emph{Probabilistic} Cops and Robbers{} games; it is played with perfect information, is turn-based starting with the cops, and takes place on a discrete structure. \nFrom each state\/configuration of the game, after choosing their actions, the cops and robbers will jump to a state according to their transition matrices, denoted $T_{\petitsrobbers}$ and $T_{\petitscops}$. These matrices may encode probabilistic behaviours: $T_{\petitscops}(s,a,s')$\footnote{The notation $T(s,a,s')$ refers to a \n\emph{transition matrix} view. In this way, it corresponds to annotating the edge $[s,s']$ of the transition system with an action $a$ and a positive value, the probability. In the Markov Decision Processes (MDP) community, it is also written $T_a(s,s')$, or $\mathbb{P}(s'| s,a)$. } is interpreted as the probability that the cop, starting in $s$ and playing action $a$, will arrive in $s'$.\n\n\begin{definition}\label{def:genabspurgame}\nA \emph{Generalized Probabilistic Cops and Robbers{}} game (GPCR) is played by two players, the \emph{cop} team and the \emph{robber} team. It is given by the following tuple \n\n\begin{align}\n\cl{G} \n&=\n\left(\nS, i_0, F, A, T_{\petitscops}, T_{\petitsrobbers}\n\right),\n\end{align}\nsatisfying\n\begin{enumerate}\n\n\item $S = S_{\petitscops} \times S_{\petitsrobbers} \times S_{\objects}$, the non-empty finite set of states representing the possible configurations of the game. 
The sets $S_{\petitscops}$ and $S_{\petitsrobbers}$ hold the possible cop and robber positions while $S_{\objects}$ may contain other relevant information (like whose turn it is).\n\item $i_0\in S$ is the initial state.\n\item $F\subseteq S$ is the set of final (winning) states for the cops.\n\item $A = A_{\petitscops}\cup A_{\petitsrobbers}$, with $A_{\petitscops}$ and $A_{\petitsrobbers}$ the non-empty, finite sets of actions of the cops and robbers, respectively.\n\item $T_{\petitscops} : S\times A_{\petitscops} \times S \rightarrow [0,1]$ is a \emph{transition function} for the cops, that is, \n$$\textstyle\sum_{s'\in S}T_{\petitscops}(s,a,s')\in \se{0,1} \mbox{ for all $s\in S$ and } a\in A_{\petitscops}.$$\nWhen the sum is 1, we say that $a$ is playable in $s$, and we write $A_{\petitscops}(s)$ for the set of playable actions for the cops at state $s\in S$. \nFurthermore, $T_{\petitscops}$ also satisfies\n\begin{itemize}\n\item for all $s\in S$, $A_{\petitscops}(s)\neq\emptyset$;\n\item if $s\in F$, then $T_{\petitscops}(s,a,s)=1$ for all actions $a\in A_{\petitscops}$; hence $T_{\petitscops}(s,a,s')=0$ for all $s'\neq s$.\n\end{itemize}\n\n\item $T_{\petitsrobbers}$ is a transition function for the robbers, similar to $T_{\petitscops}$. $A_{\petitsrobbers}(s)$ is the set of playable actions by the robbers in state $s\in S$.\n\end{enumerate}\nA \emph{play} of $\cl{G}$ is an infinite sequence $i_0 a_0s_1a_1s_2a_2\dots \in (SA_{\petitscops} SA_{\petitsrobbers})^\omega$ of states and playable actions of $\cl{G}$ that alternates the \emph{moves}\ of $\mathrm{cop}\!$ and $\mathrm{rob}$. It thus satisfies $T_{\petitscops}(s_j,a_j,s_{j+1})>0$ for $j = 0,2,4,\dots $ and $T_{\petitsrobbers}(s_j,a_j,s_{j+1})>0$ for $j = 1,3,5,\dots $. The cops win whenever a final state $s\in F$ is encountered, otherwise the robbers win. A turn\nis a subsequence\nof two moves, starting from $\mathrm{cop}$. 
\nWe also consider finite plays and we write $\cl{G}_n$ for the game where plays are finite with $n$ (complete) turns.\n\end{definition}\nAn equivalent, and sometimes handier, formulation is to define $T_{\petitscops}(s,a)$ as a distribution on $S$, for an action $a$ playable in $s$. \nThe correspondence is $T_{\petitscops}(s,a)(X) =\sum_{s'\in X} T_{\petitscops}(s,a,s')$ for $X\subseteq S$. For example, the second condition of the fifth item in the preceding definition could have been stated $T_{\petitscops}(s,a)=\dirac{s}$, where $\dirac{s}$ is the Dirac distribution on an element $s$, that is, $\dirac s$ has value 1 on $\{s\}$, and is 0 elsewhere.\n\nA play progresses as follows: from a state $s$, the cops choose an action $a_{\petitscops}\in \acs{s}$, which results in a new state $s'$, randomly chosen according to distribution $T_{\petitscops}(s,a_{\petitscops})$; then the robbers play an action $a_{\petitsrobbers}\in \ars{s'}$, which results in the next state $s''$, drawn with probability $T_{\petitsrobbers}(s', a_{\petitsrobbers}, s'')$. Once a final state is reached, the players are forced to stay in the same state. Notice that one could record whose turn it is in the third component of the states: $S_{\objects}=\{\mathrm{cop},\mathrm{rob}\}$. However, this doubles the state set and complicates the definition of the transition function. In most games, it is more intuitive to define the rules for movement independently of when this transition will be taken, as in chess.\n\nWe sometimes use the notation $s_{\textsf{x}}$, for $\textsf{x}\in \se{\mathrm{cop}, \mathrm{rob}, \mathrm{o}}$, to denote the projection of a state $s\in S$ on the set $S_{\textsf{x}}$. The set $S_{\objects}$ is rarely used in the current section, but will be valuable further on, such as in Example~\ref{ex:dynamicgraphgame} on dynamic graphs whose structures vary with time. 
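To make the progression of a play concrete, here is a minimal Python sketch of a simulator for $\cl{G}_n$. This is an illustration under assumed conventions, not code from the paper: the transition functions are represented as dictionaries mapping (state, action) pairs to distributions, and the policies standing in for the players' choices are hypothetical.

```python
import random

def sample(dist):
    """Draw a state from a distribution given as {state: probability}."""
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs)[0]

def play(i0, F, T_cop, T_rob, policy_cop, policy_rob, n_turns):
    """Simulate n_turns turns of a GPCR game; return True iff the cops win.

    T_cop and T_rob map (state, action) pairs to distributions on states;
    policy_cop and policy_rob choose a playable action in a given state.
    """
    s = i0
    for _ in range(n_turns):
        for T, policy in ((T_cop, policy_cop), (T_rob, policy_rob)):
            if s in F:           # final states are absorbing: the cops have won
                return True
            s = sample(T[(s, policy(s))])
    return s in F
```

The early return on a final state mirrors the requirement that final states are absorbing, i.e.\ $T_{\petitscops}(s,a)=\dirac{s}$ for $s\in F$.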
\n\nIn what follows, we write $\\dist{B}$ as the set of discrete distributions on a set $B$ and $\\unif{B}\\in \\dist{B}$ for the discrete uniform distribution on the same set. \n\n\n\n\nMost of the example games we will describe will be between a single cop and a single robber, even if the definition specifies a cop team and a robber team. The usual way of presenting the positions of the cop team is with a single vertex in the strong product of each member's possible territory. \n\\begin{comment}\n\\end{comment}\n\n\n\n\n\\subsection{Encoding of known games and processes, stochastic or not}\nWe now describe a few known games, following the structure of Definition~\\ref{def:genabspurgame}. The first one is a typical, deterministic example of a Cops and Robbers{} game.\nWe say a game is \\emph{deterministic} when both distributions defined by $T_{\\petitscops}$ and $T_{\\petitsrobbers}$ are concentrated on a single point, in other words if $T_{\\petitscops}(s,a)$ and $T_{\\petitsrobbers}(s,a)$ are Dirac for all $s\\in S$ and $a\\in A$. The reader can safely skip this section.\n\n\\begin{example}[\\bf Classic Cop and Robber game]\\label{ex:classiccrgame}\\rm\nLet $G=(V,E)$ be a finite graph. In this game, both players play alone and walk on the vertices of the graph, successively choosing their next moves among their neighbourhoods. The final states are those in which both players share a vertex, in which case the cop wins. The tricky part for encoding this game is that in their first move, the cop and the robber can choose whatever vertices they want, so the rule of moving differs at the first move from the rest of the play. So we let $i_{\\petitscops}, i_{\\petitsrobbers}\\notin V$ be two elements that will serve as starting points for the cop and the robber. 
Because the first moves are chosen in turn, the set of states $S$ below must contain states in $V\times \{i_{\petitsrobbers}\}$, which can only be reached after the cop's first move, but before the robber's. To simplify $S$, we include states that will not be reached, and this will be governed by the transition functions. The different sets are:\n\begin{align*}\ni_0 &= (i_{\petitscops}, i_{\petitsrobbers}) \\\nS &= (\se{i_{\petitscops}}\cup V)\times (\se{i_{\petitsrobbers}}\cup V)\\\nF &= \se{(x,x)\in V^2}\\\nA_{\petitscops} &= V\\\nA_{\petitsrobbers} &= V.\n\end{align*}\nLet $(c,r)\in S$, $x\in V$, and actions $c'\in A_{\petitscops}$ and $r'\in A_{\petitsrobbers}$. We define:\n\begin{align*}\nT_{\petitscops}((c,r), c', (x,r)) \n&=\n\begin{cases}\n1, &\mbox{ if } x=c' \mbox{ and } (c = i_{\petitscops} \mbox{ or } c'\in N[c]),\\\n0, &\mbox{ otherwise.}\n\end{cases}\\\nT_{\petitsrobbers}((c,r), r', (c,x))\n&=\n\begin{cases}\n1, &\mbox{ if } x=r' \mbox{ and } (r=i_{\petitsrobbers} \mbox{ or } r'\in N[r]),\\\n0, &\mbox{ otherwise.}\n\end{cases}\n\end{align*}\nThus, for a state $(c,r)\in S\setminus \{i_0\}$, the playable action set is $\acs{c,r}=N[c]$. Similarly, for the robber we get $\ars{c,r} = N[r]$. Because a play starts with the cop, it is not required to specify the condition $c\neq i_{\petitscops}$ in function $T_{\petitsrobbers}$. Similarly, it is not necessary to make a special case of state $c=r$, since the play ends anyway. \n\end{example}\n\nThe stochasticity of Definition~\ref{def:genabspurgame} is motivated by the following example, called the Cop and Drunk Robber game. 
It is rather similar to the one just presented except that the robber moves randomly on the vertices of the graph.\n\n\\begin{example}[\\bf Cop and Drunk Robber game]\\label{ex:copdrunkrobber}\\rm \\label{ex:drunk}\nFrom the preceding example, only the robber's transition function $T_{\\petitsrobbers}$ is modified, the rest stays the same. Let $(c,r)\\in S$ and $r'\\in A_{\\petitsrobbers}$. The robber's transition function is then:\n\\begin{align*}\nT_{\\petitsrobbers}((c,r), r')\n&=\n\\begin{cases}\n\\dirac{(c,r')}, &\\mbox{ if } r=i_{\\petitsrobbers}, \\\\\n\\unif{\\{c\\}\\times N[r]}, &\\mbox{ otherwise. }\n\\end{cases}\n\\end{align*}\nThe robber, after the first move, moves uniformly randomly on her neighbourhood, which amounts to ignoring her action $r'\\in A_{\\petitsrobbers}$.\nOne could also restrict her actions by $\\ars{s} = \\se{1}$\nwhen $s\\in S\\setminus \\{i_0\\}$. \n\\end{example}\n\nIn the Cop and Drunk Robber game, the robber moves according to a uniform distribution on her neighbourhood. Varying her transition function could represent various scenarios. For example, the robber's probability of ending on a vertex $r'$ from vertex $r$ could depend on the distance between $r$ and $r'$.\n\nIn addition to the Cop and Drunk Robber game itself, a recent paper by Simard et al.~\\cite{Simard2015} presented a variant of this game in which the robber can evade capture. The main difference between these games is that the cop may not catch the robber even when standing on the same vertex. This game is presented in the next example.\n\n\n\\begin{example}[\\bf Cop and Drunk Defending Robber]\\label{ex:copdrunkdefrobber}\\rm \nThe game's main structure is again similar to that of Example~\\ref{ex:classiccrgame}, but we need a jail to simulate the catch of the robber, $j^*\\notin V$. 
The initial state is the same, and we have:\n\begin{align*}\ni_0 &= (i_{\petitscops}, i_{\petitsrobbers}) \\\nS &= (\se{i_{\petitscops}}\cup V)\times (\se{i_{\petitsrobbers}}\cup V) \cup \se{(j^*, j^*)}\\\nF &=\se{(j^*, j^*)}.\n\end{align*}\nWhen players do not meet, they move on $G$ as before. Yet, when the cop steps on the same vertex $v$ as the robber, there is a probability $p(v)$ that the robber is captured, where $p : V\rightarrow [0,1]$. For $(c,r)\not\in F$, the robber's transition function is then:\n\begin{align*}\nT_{\petitsrobbers}((c,r), r')\n&=\n\begin{cases}\n\dirac{(c,r')}, &\mbox{ if } r=i_{\petitsrobbers}, \\\n\unif{\{c\}\times N[r]}, &\mbox{ if } c\neq r\mbox{ and } r\neq i_{\petitsrobbers},\\\nD_{r}, &\mbox{ if } c=r\mbox{ and } r\neq i_{\petitsrobbers},\n\end{cases}\\\n\mbox{ where }\nD_{r}(x)\n&= \n\begin{cases}\n\frac{1-p(r)}{|N[r]|}, &\mbox{ if } x\in \se{c}\times N[r],\\\np(r), &\mbox{ if } x=(j^*, j^*).\n\end{cases}\n\end{align*}\nWhen the cop steps on the robber's vertex ($c=r$), at the end of his turn, the next move for the robber follows the distribution $D_r$. The robber is caught by the cop with probability $p(r)$, bringing the play to a final state; otherwise she proceeds as expected: the target state is chosen uniformly at random from the robber's neighbourhood. Variations of this game could be defined through different distributions for $T_{\petitsrobbers}((c,r), r')$ with $c\neq r$. Likewise, in $D_r$, the factor $\frac{1}{|N[r]|}$ could be replaced with any distribution on ${N[r]}$.\n\end{example}\n\nWe now present the Cop and Fast Robber game with surveillance zone as first formulated in Marcoux \cite{Marcoux}. This example is reconsidered further on in Section \ref{sec:concrgames}. 
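The capture mechanism of the Cop and Drunk Defending Robber above can be sketched in a few lines of Python. This is an illustrative sampler for the distribution $D_r$ (and for the uniform move when the players are apart), with hypothetical helper names, not code from the paper:

```python
import random

def robber_step(c, r, neighbours, p):
    """One robber move: cop at vertex c, robber at vertex r.

    neighbours maps each vertex v to a list of its closed neighbourhood
    N[v]; p maps each vertex to its capture probability.  Returns the
    next state (cop position, robber position), with ("j*", "j*")
    standing in for the jail state.
    """
    if c == r and random.random() < p[r]:
        return ("j*", "j*")                   # captured with probability p(r)
    return (c, random.choice(neighbours[r]))  # uniform move on N[r]
```

With $p\equiv 1$ this reduces to capture on contact, as in the plain drunk robber game; with $p\equiv 0$ the robber can never be caught.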
Chalopin et al.\\ also studied a game of Cop and Fast Robber with the aim of characterizing graph classes~\\cite{Chalopin2011}.\n\n\\begin{example}[\\bf Cop and Fast Robber]\\label{ex:copfastrobber}\\rm\nThis game is similar to the classic one (Example~\\ref{ex:classiccrgame}) except that the robber is not limited to a single transition. It has been studied by Fomin et al.~\\cite{FominGKNS10}. We present a variation where the cop can capture the robber when she appears in his watch zone, even in the middle of a path movement. This watch zone can simulate the use of a weapon by the cop. The states will now contain, in addition to both players' positions, the set of vertices watched by the cop. We assume here that the cop's watch zone is his neighbourhood, as in Marcoux \\cite{Marcoux}; Fomin et al.'s version is retrieved with a watch zone consisting of a single vertex, the cop's position. In the initial state, the cop's watch zone is empty since the robber cannot be captured before her first step. We again use a jail state $j^*\\notin V$. When both players find themselves there, the game ends and the robber has lost. Hence, we let:\n\\begin{align*}\ni_0 &= (i_{\\petitscops}, \\emptyset, i_{\\petitsrobbers}) \\mbox{ with } i_{\\petitscops}, i_{\\petitsrobbers} \\notin V,\\\\\nF &= \\se{(j^*, \\emptyset, j^*)},\\\\\nS &= \n\\left(\n\\se{(i_{\\petitscops}, \\emptyset)} \\cup \\se{(c, N[c])\\mid c\\in V}\n\\right)\n\\times \n\\left(\n\\se{i_{\\petitsrobbers}}\\cup V\n\\right)\n\\cup F\n\\end{align*}\nLet $(c,C,r)\\in S$ be the current state and $c'\\in N[c]$ an action of the cop. 
Here is the cop's transition function, for $(c,C,r)\not\in F$:\n\begin{align*}\nT_{\petitscops}((c,C,r), c')\n&=\n\begin{cases}\n\dirac{(c', N[c'], r)}, &\mbox{ if } c=i_{\petitscops}\mbox{ and } c'\in V \mbox{ or }\\\n\t\t\t&\mbox{ if } c\in V\mbox{ and } c'\in N[c],\\\n0, &\mbox{ otherwise.}\n\end{cases}\n\end{align*}\nAs in the classic game, the cop can jump to any vertex in his first move; after that he moves in the neighbourhood of his current position. His watch zone then changes to $N[c']$. We use $C$ as watch zone in this definition to emphasize the fact that it does not influence the cop's next state. On her turn, on vertex $r_1\in V$, the robber's action consists of choosing a path $\pi=(r_1, r_2, \dots, r_n)$ of finite length $n>0$, that is, $[r_i,r_{i+1}]$ is an edge in $E$ for each $i=1, \dots, n-1$. The robber's transition function is:\n\begin{align*}\nT_{\petitsrobbers}((c,C, r_1), \pi)\n&=\n\begin{cases}\n\dirac{(c, C, r_n)}, &\mbox{ if } r_1=i_{\petitsrobbers} \mbox{ and } r_n \in V\setminus N[c], \mbox{ or}\\\n\t\t &\mbox{ if } r_1\in V \mbox{ and } r_i\notin C\mbox{, for all } \;2\leq i\leq n,\\\n\dirac{(j^*, \emptyset, j^*)}, &\mbox{ otherwise.}\n\end{cases}\n\end{align*}\nThe robber is thus guaranteed to reach her destination $r_n$ provided that she never crosses the cop's watch zone on her path $\pi$. If she does cross it, then the robber is taken to the jail state $(j^*, \emptyset, j^*)$. \n\nIn Section \ref{sec:concrgames}, we present this game again, but with the possibility for the robber to evade capture.\n\end{example}\n\n\n\nHence, because of Definition~\ref{def:genabspurgame}'s rather general description, it is possible to encode a great variety of random events resulting from the cops' or the robbers' actions. In the following example, we encode a simple inhomogeneous Markov chain by ignoring the robber and trivializing the cop's actions. 
This makes the example fairly degenerate but it also shows the generality of Definition~\ref{def:genabspurgame}.\n\n \begin{example}[\bf Finite Markov chain]\label{ex:finiteMarkovchain}\rm\n A Markov chain is a sequence of random variables $X_0,X_1,\dots$ on a space $E$, having the Markov property. So we can assume that the evolution is given by an initial distribution $q$ on $E$ and a family of matrices $M_0,M_1,\dots$, where $M_i(s,s')$ is the probability that $X_{i+1}=s'$ given that $X_i = s$. We can encode it as a GPCR game from Definition~\ref{def:genabspurgame}. In previous examples, we have ignored the third component of states, $S_{\objects}$, but here we can ignore one of the player sets, like $S_{\petitsrobbers}$; equivalently, we can assume a single state for the robber and no effect by $T_{\petitsrobbers}$. We define\n \begin{align*}\n i_0&\notin E\\\n S &= \se{i_0} \cup (E\times \mathbb N)\\\n F&=\emptyset\\\n A&=\se{1}\\\n T_{\petitscops}(i_0,1, (e,0)) &=q(e)\\\n T_{\petitscops}((e,j),1,(e',j+1))\n &=M_j(e,e').\n \end{align*}\n Since the action of the player has no influence on the progress of the game, it is natural to define $A$ as a singleton. Technically, a play alternates between the moves of cops and robbers, so it is a sequence $i_0 1 (e_0,0) 1 (e_0,0) 1 (e_1,1) 1 (e_1,1) \dots$; the repetitions reflect the fact that the robber has no effect. If we ignore the useless information of such a play, we obtain a sequence $i_0 e_0 e_1 e_2 \dots$, which is just a walk in the Markov chain (and the robber wins). Another way to write down this model would have been to let the two players play similarly, with $T_{\petitsrobbers}=T_{\petitscops}$, but the states would then have to be triplets, and the initial state would force a less simple encoding. 
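To make the encoding concrete, a play of this degenerate game can be simulated directly. The sketch below is our own illustration: the two-state chain on $E=\{0,1\}$, the initial distribution $q$ and the matrices $M_j$ are invented for the occasion.

```python
import random

E = [0, 1]        # hypothetical state space of the chain
q = [0.5, 0.5]    # initial distribution on E (invented)

def M(j):
    # Inhomogeneous transition matrix at step j (invented for illustration).
    p = 1.0 / (j + 2)
    return [[1 - p, p], [p, 1 - p]]

def sample(dist):
    # Draw one element of E according to the distribution dist.
    return random.choices(E, weights=dist, k=1)[0]

def gpcr_play(n_steps):
    """Unfold the GPCR encoding: the single action 1 drives the chain via
    T_cops, while the robber's turn merely repeats the current state."""
    e = sample(q)                          # T_cops(i0, 1, (e, 0)) = q(e)
    play = ["i0", 1, (e, 0), 1, (e, 0)]    # robber's move repeats (e, 0)
    for j in range(n_steps):
        e = sample(M(j)[e])                # T_cops((e,j), 1, (e',j+1)) = M_j(e,e')
        play += [1, (e, j + 1), 1, (e, j + 1)]
    return play

def walk(play):
    # Drop the actions and the robber's duplicated states: i0 e0 e1 e2 ...
    return [e for e, _ in play[2::4]]
```

Discarding the constant action and the duplicated states recovers exactly the sequence $i_0 e_0 e_1 e_2 \dots$ described above, that is, a walk in the Markov chain.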
\n \end{example}\n\nSimilarly, we can encode a finite state Markov Decision Process (MDP) with reachability objectives~\cite{Puterman2014} using Definition~\ref{def:genabspurgame}.\nThe encoding is such that the optimal value of the MDP is 1 if the cops win; otherwise it is 0 and the robbers win. \n\n\nThe probabilistic Zombies and Survivors game on graphs \cite{Bonato2016APV} can also be viewed as a GPCR game, one in which only the robbers play optimally. It models a situation in which a single robber (the survivor) tries to escape a set of cops (the zombies). However, the cops have to choose their initial vertices at random and, on each turn, choose randomly among the set of vertices that minimize the distance to the robber. \n\n\n\subsection{Strategies}\n A deterministic (or pure) strategy is a function that prescribes to a player which action to play on each possible game history. Some strategies are better than others; we will be interested in the probability of winning for the cops, which is attained by following an optimal strategy. Ultimately, we are interested in memoryless strategies, that is, those that only depend on the present state, and not on the previous moves; nevertheless, we need to define more general strategies as well.\n \begin{definition}\nLet $\cl{G}$ be a game. A history on $\cl{G}$ is an initial fragment of a play on $\cl{G}$ ending in a state. $H_\cl{G}$ is the set of histories on $\cl{G}$.\vspace{-2mm}\n\begin{itemize}\addtolength{\itemsep}{-6pt}\n\item the set of \emph{general strategies} is $\Omega^\mathrm{g} = \{\sigma: H_\cl{G} \rightarrow A\}$.\n\item the set of \emph{memoryless strategies} is $\Omega=\{\sigma: S\rightarrow A\}$.\n\item the set of \emph{finite horizon strategies} is $\Omega^{\mathrm {f}}=\{\sigma: (S\times {\mathbb N}) \rightarrow A\}$.\n\end{itemize}\n\end{definition}\n A finite horizon strategy counts the number of turns remaining, and it is otherwise memoryless. 
A finite horizon strategy is conveniently defined on $\cl{G}$ but it is actually played on $\cl{G}_n$, hence the following definition of how such a strategy is followed. At turn 0 of a play $h$ (histories $i_0$ and $i_0 a_0 s_1$), there are $n$ turns remaining, so $\sigma$ is evaluated with $n$ on the second coordinate of its argument; at turn 1 (histories $i_0 a_0 s_1 a_1 s_2$ and $i_0 a_0 s_1 a_1 s_2 a_2 s_3$), there are $n-1$ turns remaining. \n\n\begin{definition}\nLet $h=i_0 a_0 s_1 a_1 s_2 a_2 s_3\dots$ be a (finite or infinite) play of $\cl{G}$.\n\begin{itemize}\addtolength{\itemsep}{-6pt}\n\item $h$ follows a general strategy $\sigma\in\Omega^{\mathrm{g}}$ for the cops if for all $j=0,2,4,\dots$ we have $a_j = \sigma(i_0 a_0 s_1 a_1 s_2 a_2 s_3\dots s_j)$. Similarly for the robbers.\n\item $h$ follows a memoryless strategy $\sigma\in\Omega_{\cops}$ for the cops if for all $j=0,2,4,\dots$ we have $a_j = \sigma(s_j)$. Similarly for the robbers. \n\item $h$ follows a finite horizon strategy $\sigma\in\Omega^{\mathrm {f}}_{\cops}$ on $\cl{G}_n$ for the cops if for $j=0,2,4,\dots, 2n$ we have $a_j = \sigma(s_j,n-\frac{j}{2})$. \n\item $h$ follows a finite horizon strategy $\sigma\in\Omega^{\mathrm {f}}_{\robbers}$ on $\cl{G}_n$ for the robbers if for $j=1,3,5,\dots, 2n+1$ we have $a_j = \sigma(s_j,n-\frac{j-1}{2})$. \n\end{itemize}\n\n\end{definition}\n\nThese strategies are all deterministic, or pure: a single action is chosen. Some papers consider \emph{mixed} or \emph{behavioral} strategies, where this choice is randomized. \nThis is unnecessary in our setting because, as is well known in perfect information games, among all optimal strategies, there is always a pure one. 
We will come back to this when we study optimal strategies later on.\n\nWe now present an example where the optimal strategy for the infinite game is memoryless (only depends on the states), but, for any finite horizon game $\cl{G}_n$, it is a finite horizon strategy.\n\n\begin{example}\nThis example is in the spirit of the Cop and Drunk Robber game, presented in Example~\ref{ex:drunk}. As in that example, the cop moves within his closed neighbourhood and so does the robber, who cannot choose her action; the difference with Example~\ref{ex:drunk} is that the robber's movement is not uniform. The graph is a cycle of length 5. The robber moves clockwise with probability 0.9, \n and counterclockwise with probability 0.1. If the cop is at distance 1 from the robber on his turn, he of course wins in this turn. Otherwise, the cop is at distance 2, more specifically at \emph{clockwise distance} 2 or 3. Let us focus on states $s$ where this clockwise distance is 2 (from the cop to the robber). In the long run, the cop's best choice is to move counterclockwise. However, if only one turn remains, the best move for the cop is the clockwise move because then with probability $0.1$, the robber will jump to his position, whereas the probability of winning is zero in the counterclockwise direction. So the best strategy $\sigma$ for $\cl{G}_n$ satisfies $\sigma(s,n)\neq \sigma(s,1)$ in such a state $s$, for $n>1$, hence it is not memoryless. 
Indeed, for example, $\sigma(s,2)\neq \sigma(s,1)$ because the probability of catching the robber by playing counterclockwise when 2 turns remain is $0.91$ ($0.9$ from capturing after the robber's clockwise step, plus $0.01$ from the robber twice stepping counterclockwise onto the cop's vertex), while it is $0.19$ by playing clockwise ($0.1$ in one move of the robber plus $0.09$ in two moves).\n\n\end{example}\n\n\n\n\subsection{Winning conditions in GPCR games}\n\nIn this section we are interested in winning strategies for the cops, their probability of winning in a given number $n$ of turns (that is, in $\cl{G}_n$) and their probability of winning without any limit on the number of turns (in $\cl{G}$).\n\n\nGiven finite horizon strategies $\sigma_{\cops}$ and $\sigma_{\robbers}$, for the cops and for the robbers, we consider the probability that the robbers are captured in $n$ steps or less:\n\begin{align*}\np_n(\sigma_{\cops},\sigma_{\robbers}) &:= \n\p{\mbox{\say{capture in at most $n$ steps}} \mid \sigma_{\cops},\sigma_{\robbers}}.\nonumber\n\end{align*}\n Since the cops want to maximize this probability, and the robbers want to minimize it, the probability for the cops to win in $n$ turns or less (playing optimally), whatever the robbers' strategy, is:\n \begin{align}\np_n^*&:= \n\max_{\sigma_{\cops}\in \Omega^{\mathrm {f}}_{\cops}}\min_{\sigma_{\robbers}\in \Omega^{\mathrm {f}}_{\robbers}} p_n(\sigma_{\cops},\sigma_{\robbers}).\n\label{eq:optgameval1}\n\end{align}\nThis is in fact the \emph{value} of $\cl{G}_n$ in the sense of game theory. 
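The finite-horizon reasoning of the cycle example above can be checked by brute force. The sketch below is our own illustration; the vertex labels $0,\dots,4$, clockwise meaning $+1$, and the convention that a turn is a cop move followed by a robber move are assumptions of the illustration.

```python
from functools import lru_cache

N = 5  # cycle C_5: the robber steps clockwise (+1) w.p. 0.9, counterclockwise w.p. 0.1

def robber_moves(r):
    # The robber's random step: clockwise with probability 0.9, else counterclockwise.
    return (((r + 1) % N, 0.9), ((r - 1) % N, 0.1))

def action_value(c2, r, k):
    """Capture probability when the cop steps to c2, the robber sits on r,
    and k turns (cop move then robber move) remain, counting this one."""
    if c2 == r:
        return 1.0
    return sum(q * (1.0 if r2 == c2 else value(c2, r2, k - 1))
               for r2, q in robber_moves(r))

@lru_cache(maxsize=None)
def value(c, r, k):
    """Optimal capture probability from cop at c, robber at r, k turns left."""
    if c == r:
        return 1.0
    if k == 0:
        return 0.0
    return max(action_value(c2, r, k) for c2 in (c, (c + 1) % N, (c - 1) % N))
```

With the cop on vertex $0$ and the robber at clockwise distance $2$ (vertex $2$), the clockwise step `action_value(1, 2, 1)` beats the counterclockwise step `action_value(4, 2, 1)` when one turn remains, and the comparison is reversed with two turns remaining, matching the discussion above.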
In game theory, the value for $\cl{G}_n$ exists if\n \begin{align}\n\max_{\sigma_{\cops}\in \Omega_{\cops}^\mathrm{g}}\min_{\sigma_{\robbers}\in \Omega_{\robbers}^\mathrm{g}} p_n(\sigma_{\cops},\sigma_{\robbers})\n=\n\min_{\sigma_{\robbers}\in \Omega_{\robbers}^\mathrm{g}}\max_{\sigma_{\cops}\in \Omega_{\cops}^\mathrm{g}} p_n(\sigma_{\cops},\sigma_{\robbers})\n.\n\end{align}\nIn our setting, defining the payoff function of a play as 1 when the robbers are captured and 0 otherwise, we have, by Wal and Wessels~\cite{markovGames}, that the game $\cl{G}_n$ has value $p_n^*$. That the value of $\cl{G}_n$ is still achieved when both players are restricted to finite horizon strategies is again given by Wal and Wessels, who call such strategies Markov strategies. Finally, since $\cl{G}_n$ is finite and with perfect information, a standard game-theoretical argument \cite{osborne1994course} justifies that the optimal strategies are deterministic (or pure).\n\nWe say that the cops and the robbers play optimally in $\cl{G}_n$ if they each follow a strategy that yields probability $p_n^*$ for the cops to win. We will show later on, but it is also straightforward\footnote{This can be proven by induction, since for $n+1$ the cops can choose their optimal strategy for $n$ and simply do anything on the last turn.} from the definition, that $p_n^*$ is non-decreasing in $n$; since it is moreover bounded by 1, the limit always exists, and we will prove that it is equal to the value of $\cl{G}$.\n\nIndeed, from a known result in Simple Stochastic Games (SSG), one can show that $\cl{G}$ has a value and that this value is achieved by a pair of optimal strategies that are deterministic (or pure) and memoryless. The argument is well known in the literature on SSGs, but requires a construction, so we leave it to Appendix \ref{sec:annex_ssg}. 
Thus,\nlet us write the value of game $\cl{G}$ as $p_\cl{G}^*$, that is,\n\begin{align}\np_\cl{G}^* &=\n\max_{\sigma_{\cops}\in \Omega_{\cops}}\n\min_{\sigma_{\robbers}\in \Omega_{\robbers}}\n\p{\mbox{\say{capture in a play}} \mid \sigma_{\cops}, \sigma_{\robbers}},\n\end{align}\nand the equality still holds when the $\min$ and $\max$ operators are switched. This value is guaranteed by Theorem \ref{thm:ssgvalexist} \cite{condon1992complexity,shapley1953stochastic}.\nIn Proposition \ref{prop:epsoptimal}, we will show that the difference between the cops using a finite horizon strategy in $\cl{G}_n$ and a memoryless one in $\cl{G}$ is negligible for a sufficiently large integer $n$.\n\nEquation \eqref{eq:optgameval1} returns either $0$ or $1$ in deterministic games such as the Classic Cop and Robber game. We seek here to study games that can be stochastic, where $p_n^*$ can take any value in $[0,1]$. Thus, we adapt the usual definition of \emph{\textrm{copwin}{}} to our broader model.\n\n\begin{definition}\label{def:winningconditions}\nLet $\cl{G}$ be a GPCR game. We say $\cl{G}$ is \vspace{-2.5mm}\n\begin{itemize}\addtolength{\itemsep}{-6pt}\n\item \emph{\cpwin{n}} if the cops can ensure a win with probability at least $p$ in at most $n$ turns, that is, $p_n^* \geq p$;\n\item \n \emph{$p$-copwin} if it is \cpwin{n} for some $n\in\mathbb{N}$;\n\item \emph{almost surely copwin} if the cops can win when they are allowed to play indefinitely, that is, $p_\cl{G}^* = 1$;\n \item \n \emph{\textrm{copwin}{}} if it is \cwin{n} for some $n\in\mathbb{N}$. \n\end{itemize}\n\end{definition}\nIt is easy to see that when $\cl{G}$ corresponds to the Classic Cop and Robber game, as defined in Example~\ref{ex:classiccrgame}, this definition of copwin coincides with the classical one. 
In that sense, it can be considered a generalization of the classical one, because in any copwin finite graph, the cop wins in at most $n=\@ifstar{\oldabs}{\oldabs*}{V(G)}^2$ turns.\n\n\n\begin{remark}\nWe will see in Proposition \ref{prop:epsoptimal} that $\lim_{n\to\infty}p_n^* = p_\cl{G}^*$. Thus, if there exists $n$ such that $p_n^* >0$ and if all states reachable within a finite number of moves of the cop's optimal strategy are in the same strongly connected component, then $p_\cl{G}^* = 1$. Indeed, after $n$ turns, if the play is not over, the cops can go back to the configuration where $p_n^*>0$: the initial position that is proposed by the cops' strategy. In that state, the probability that the robbers have not been caught is at most $1-p_n^*$; the probability that the robbers are not caught after $m$ repetitions of this cycle is at most $(1-p_n^*)^m$. It is thus zero in the limit. This happens, for example, if $p_n^* >0$ and $\cl{G}$ is played on a strongly connected graph. However, we cannot, in general, claim that if $p_n^*>0$ after $n>\@ifstar{\oldabs}{\oldabs*}{S}$ turns have been played, then $p_\cl{G}^* = 1$.\n\end{remark}\n\nWe define a probabilistic analog to the cop number, $c(G)$, which is the minimal number of cops required on a graph $G$ in order for the cops to capture the robbers. It is an important subject of research in Classic Cops and Robbers{} games~\cite{Bonato2011f}, in particular relating to Meyniel's conjecture that $c(G) \in \bigO{\sqrt{\@ifstar{\oldabs}{\oldabs*}{V(G)}}}$. Furthermore, one of the main areas of research on cops and robbers games that involve random events is the expected capture time of the robbers \cite{Komarov2013b,Komarov,Kehagias2012}. Thus, we generalize the expected capture time of the robbers to any game $\cl{G}$. \n\n{Adding cops in a game $\cl{G}$ is done in the natural way. 
The set of cop states $S_{\petitscops}$ is the Cartesian product of the sets of single cop positions, and the transition function is updated so as to let all cops move in one step.}\n\begin{definition}\nThe \pcopntext{n} $\pcopnsym{n}$ of a game $\cl{G}$ is the minimal number of cops required for the capture of the robbers in at most $n$ turns with probability at least $p$. In other words, $\pcopnsym{n}$ is the minimal number of cops required for a game $\cl{G}$ to be $\cpwin{n}$. The $p$-cop number, $c_p(\mathcal{G}) = \pcopnsym{\infty}$, is the minimal number of cops necessary for having $p_\cl{G}^*\geq p$.\n\nLet $T_\cl{G}^p$ be the random variable giving the number of turns required for the robbers to be captured with probability at least $p$ in $\cl{G}$ under optimal strategies. Then, the $p$-expected capture time of the robbers is $\e{T_\cl{G}^p}$. The \emph{expected capture time of the robbers} is $\e{T_\cl{G}^1}$.\n\end{definition}\n\nSince some of the optimal strategies of $\cl{G}$ are memoryless, we can turn the question of computing $\e{T_{\cl{G}}^p}$ into a question of computing an expected hitting time in a Markov chain. Let us write $\sigma_{\mathrm{cop}}^*$ ($\sigma_{\mathrm{rob}}^*$) for the optimal strategy of the cops (robbers) in $\cl{G}$ and let $\cl{M}$ be the Markov chain such that for any state $s\in S$, it has two states $(s,\sigma_{\mathrm{cop}}^*(s))$ and $(s,\sigma_{\mathrm{rob}}^*(s))$. Furthermore, let $M$ be its transition matrix, which is governed by the distributions $T_{\petitscops}(s,\sigma_{\mathrm{cop}}^*(s))$ and $T_{\petitsrobbers}(s,\sigma_{\mathrm{rob}}^*(s))$. If $(X_n)_{n\geq 0}$ describes the stochastic process on $\cl{M}$ beginning at the initial state $i_0$, then $T:= \frac 1 2 \min\se{n\geq 0 \mid X_n\in F}$ is the hitting time of $F$ from $i_0$, the factor $\frac 1 2$ accounting for the two states of $\cl{M}$ traversed in each game turn. 
The expectation of $T$ is $\e{T_\cl{G}^1}$.\n\subsection{Solving GPCR games}\label{subsec:wneq}\n\nAs with Bonato and MacGillivray's model, we define a method for solving GPCR games, that is, for computing the probability for the cops to capture the robbers in an optimal play, and the strategy to follow. This method takes the form of a recursion, defining the probability $w_n(s)$ that state $s$ leads to a final state in at most $n$ steps ($w$ is for \emph{winning} in the following theorem). This recursion also yields a strategy for the cops.\n\n\n\begin{theorem}\label{thm:copwinthm}\nLet $\cl{G}$ be a GPCR game, and let:\n\renewcommand{a_{\petitscops}}{a}\n\renewcommand{a_{\petitsrobbers}}{a'}\n\begin{align}\label{eq:absgamewn}\nw_0&(s) :=\n\begin{cases}\n1, &\mbox{ if } s\in F,\\\n0, &\mbox{ otherwise.}\n\end{cases}\nonumber\\\nw_n&(s) :=\nonumber\\&\n\begin{cases}\n\mbox{~}~1, &\mbox{ if } s\in F,\\\n\displaystyle\n\max_{\!\!a_{\petitscops}\in \scalebox{0.8}{$\acs{\!s\!}\!\!$}} \sum_{s'\in S} \!T_{\petitscops}(s,a_{\petitscops}, s')\displaystyle\n\min_{\!\!\!\!a_{\petitsrobbers}\in \scalebox{0.8}{$\ars{\!s'\!}\!\!$}} \sum_{\scalebox{0.7}{$s''$}\in S} \hspace{-0.8mm}T_{\petitsrobbers}(s', a_{\petitsrobbers}, s''\hspace{-0.2mm}) w_{n-1}(s''\hspace{-0.1mm} ), \hspace{-3.3mm}\mbox{}\n&\mbox{ otherwise.}\n\end{cases}\n\end{align}\n Then $w_n(s)$ gives the probability for the robbers to be captured in $n$ turns or less, given that both players play optimally, starting in state $s$. 
Thus,\n$$\nw_n(i_0) = p_n^*.\n$$\nThis also says that $\cl{G}$ is \cpwin{n} if and only if $w_n(i_0) \geq p$.\nFor $(s,k)\in S\times\mathbb{N}$, let\n$\sigma^*_\mathrm{cop}(s,k)$ be given by the $\operatornamewithlimits{argmax}$ in place of the $\max$ in Equation~\eqref{eq:absgamewn}.\nThis defines finite horizon strategies that are optimal in $\cl{G}_n$\footnote{The argmax is not necessarily unique.}.\n\n\end{theorem}\nThe recursive part of $w_n$'s definition is as follows: to win, the cops must take the best action $a$; this leads them to state $s'$ with probability $T_{\petitscops}(s,a,s')$; from this state, the robbers must choose the action $a'$ that will give them the smallest probability of being caught. Action $a'$ leads the robbers to state $s''$ with probability $T_{\petitsrobbers}(s',a',s'')$ and then we multiply by the probability that the cops catch the robbers from this state, $w_{n-1}(s'')$. Since the cops want a high probability, a maximum is taken; the converse holds for the robbers. The full equation gives the expected probability of capture of the robbers by the cops when both players move optimally.\n\n\begin{proof}\nThe proof is by induction on $n$. We prove that $w_n(s)$ gives the probability for the robbers to be captured in $n$ turns or less, given that both players play optimally, starting in state $s$.\nLet $s$ be any state. \n\nIf $n=0$, then the cops win if and only if $s\in F$, in which case, by definition we have $w_0(s)=1$. Otherwise the robbers win and $w_0(s)=0$, as wanted.\n\n\nIf $n>0$, suppose the result holds for all $k\leq n-1$ and let $s$ be the current state. If this state is final, then the robbers are caught in $n$ turns or less with probability $1$ and $w_n(s) =1$ as desired. Otherwise, let the cops, playing first, choose an action $a_{\petitscops}\in \acs{s}$, after which the next state $s'$ is drawn according to $T_{\petitscops}(s, a_{\petitscops}, s')$. 
Then, the robbers can choose an action $a_{\\petitsrobbers} \\in \\ars{s'}$, in which case the next state $s''$ will be drawn with probability $T_{\\petitsrobbers}(s', a_{\\petitsrobbers}, s'')$. By the induction hypothesis, we know a final state will be encountered in $n-1$ turns or less with probability $w_{n-1}(s'')$ starting from state $s''$. Thus, the probability the robbers are caught in $n$ turns or less by playing action $a_{\\petitsrobbers}$ after the cops have reached state $s'$ is given by:\n\\[\n\\sum_{s''\\in S} T_{\\petitsrobbers}(s', a_{\\petitsrobbers}, s'')w_{n-1}(s'').\n\\]\nNote that if $s'\\in F$, this value is exactly $w_{n-1}(s')$, since by definition, we must have $T_{\\petitsrobbers}(s', a_{\\petitsrobbers}, s'') = 1$ if $s''=s'$ and 0 otherwise.\nThe robbers wish to minimize this value among their set of available actions, which is possible since both sets $S$ and $A_{\\petitsrobbers}$ are finite. Hence, supposing action $a_{\\petitscops}\\in\\acs{s}$ has been chosen by the cops, the game stochastically transits to some other state $s'\\in S$ with probability $T_{\\petitscops}(s,a_{\\petitscops}, s')$. Thus, with probability \n\\[\n\\sum_{s'\\in S}T_{\\petitscops}(s,a_{\\petitscops}, s')\n\\min_{a_{\\petitsrobbers}\\in \\ars{s'}}\n\\sum_{s''\\in S}\nT_{\\petitsrobbers}(s', a_{\\petitsrobbers}, s'')\nw_{n-1}(s''),\n\\]\nthe robbers are caught in at most $n$ turns from state $s$, when the cops play action $a_{\\petitscops}$. The cops want to maximize this value and, as for the robbers, this is possible because the considered sets are finite. Thus, the cops must play the action \n\\[\n\\operatornamewithlimits{argmax}_{a_{\\petitscops} \\in \\acs{s}}\n\\sum_{s'\\in S}T_{\\petitscops}(s,a_{\\petitscops}, s')\n\\min_{a_{\\petitsrobbers}\\in \\ars{s'}}\n\\sum_{s''\\in S}\nT_{\\petitsrobbers}(s', a_{\\petitsrobbers}, s'')\nw_{n-1}(s'').\n\\]\nThe claim about $\\sigma^*_\\mathrm{cop}$ is straightforward from this result. 
The choices of actions at the initial state thus give the probability $w_n(i_0)$. Because $p_n^*$ is, by definition, the probability of capture of the robbers in $n$ turns or less when both players play optimally, we conclude that $w_n(i_0) = p_n^*$.\n\end{proof}\n\nThis result implies that the $w_n$'s are probabilities that are non-decreasing in $n$. In other words, we have the following corollary. \n\n\begin{corollary}\label{cor:mono}\nFor any $n\in\mathbb{N}, s\in S$ we have $0\leq w_n(s)\leq w_{n+1}(s)\leq 1$.\n\end{corollary}\n\nNote that there are many optimal strategies for the cops in $\cl{G}_n$, that is, strategies that have value $p_n^*$, but they are not all as efficient. Consider a game $\cl{G}_{n}$ where the robbers can be caught in $k < n$ turns: the cops can idle during the first $n-k$ turns and still capture the robbers with probability $p_n^*$, but later than necessary.\n\n\begin{proposition}\label{prop:epsoptimal}\nLet $\cl{G}$ be a GPCR game. Then $\lim_{n\to\infty}p_n^* = p_\cl{G}^*$; that is, $\@ifstar{\oldabs}{\oldabs*}{p_\cl{G}^* - p_n^*} < \epsilon$ for any $\epsilon>0$ and sufficiently large integer $n$.\n\end{proposition}\n\begin{proof}\nFrom a previous argument, we know that some pair $(s_c,s_r)$ of optimal memoryless strategies for the cops and the robbers yields a probability $p_\cl{G}^*$ of winning for the cops. \nIt holds that $p_n^* \leq p_\cl{G}^*$, for any integer $n$, since the value of $\cl{G}_n$ can only be at most the value of $\cl{G}$. Recall that since $p_n^*$ is non-decreasing in $n$ and bounded above by $p_\cl{G}^*$, we have that $\lim_{n\to\infty}p_n^* \leq p_\cl{G}^*$.\n\nNow, let us play strategies $(s_c,s_r)$, chosen above, in the game $\cl{G}_n$ for any integer $n$. Consider the probability that the cops win in $\cl{G}_n$ when both players follow those strategies. These probabilities, for each $n$, form a sequence $(v_n)_{n\in \mathbb{N}}:=(v_1,v_2, \dots)$. This sequence is non-decreasing and bounded above by $p_\cl{G}^*$. \n\nLet $A_n$ be the event \say{there is a capture in at most $n$ turns under strategies $s_c$ and $s_r$}. Observe that $A_0\subseteq A_1 \subseteq \dots$ is a non-decreasing sequence. 
Thus, by the Monotone Convergence Theorem:\n\begin{align*}\np_\cl{G}^* \n&= \p{\{h\mid \mbox{$h$ is a play following $s_c, s_r$ where cops win} \}}\\\n&= \p{\cup_{i=0}^\infty A_i}\\\n&= \lim_{n\to\infty}\p{A_n}\\\n&= \lim_{n\to\infty}v_n.\n\end{align*}\nThus, for any $\epsilon>0$ there exists an integer $N$ such that for all $n\geq N$, $p_\cl{G}^*-v_n < \epsilon$. But, we also have that $v_n\leq p_n^*$ for any integer $n$, since $w_n(i_0)$ is the value of $\cl{G}_n$. Hence, it follows that $0\leq p_\cl{G}^* - p_n^*\leq p_\cl{G}^* - v_n = \@ifstar{\oldabs}{\oldabs*}{p_\cl{G}^*-v_n} <\epsilon $. This completes the proof.\n\end{proof}\n\nIt is interesting to note that this proposition only applies if there are best strategies for the cops and robbers. In particular, it is not true if $\cl{G}$ is played on the infinite graph of the following example.\n\begin{example}\nConsider an \emph{infinite star graph} with a central vertex, from which a path of length $n$ is attached for every integer $n$, and consider the Classic Cops and Robbers{} game $\cl{G}$ on this graph with one cop and one robber. The best move for the cop is to start on the (infinitely branching) central vertex. Then whatever state the robber chooses, the cop will catch her in a finite number of turns, so this graph is copwin in the sense of Definition~\ref{def:winningconditions}. However, this number of turns is unbounded, so when playing in $\cl{G}_n$, the robber can simply choose a vertex at distance greater than $n$; so the value of $\cl{G}_n$ is 0 for all $n$. The proof of the proposition fails in that case because, the graph being infinite, there is no optimal strategy for the robber in $\cl{G}$. 
Whatever state the robber chooses, there is always a further state that would allow her to be captured in more turns, that is, there is always a better strategy.\n\end{example}\n\n\nUnder certain conditions that will be further studied in Subsection \ref{subsec:station}, the sequence $(w_n)_{n\in\mathbb{N}}$ eventually becomes constant. \n\n\begin{definition}\nWe say that $(w_n)_{n\in\mathbb{N}}$ is \emph{stationary} if there exists an integer $N\in \mathbb{N}$ such that $w_{n}(s) = w_{n+1}(s)$, for all $n>N$, $s\in S$.\nWe write $\overline{w}$ for the stationary part of $(w_n)_{n\in\mathbb{N}}$.\n\end{definition}\n\n\begin{remark}\label{r:grandN}\nIt follows from the definition of $w_n$ that, if for some $N$, $w_N(s)=w_{N+1}(s)$ for all $s\in S$, then $(w_n)_{n\in\mathbb{N}}$ is stationary and $\overline{w}$ starts at $n=N$ or less. \n\end{remark}\n\nFrom Theorem \ref{thm:copwinthm}, we deduce Theorem \ref{th:fixpointcor}, which is more in line with traditional game-theoretic arguments and shows that, in addition to the equality $\lim_{n\to\infty} w_n (i_0) = p_\cl{G}^*$, we can compute explicitly the optimal strategy of the cops in $\cl{G}$ from the limit of the $w_n$'s.\n\begin{theorem}\label{th:fixpointcor}\n\renewcommand{a_{\petitscops}}{a}\n\renewcommand{a_{\petitsrobbers}}{a'}\nThe (point-wise) limit $w_\infty := \lim_{n\to\infty} w_n$ exists and it satisfies \n\begin{align}\label{eq:w_infini}\nw_\infty&(s) =\nonumber\\&\n\begin{cases}\n\mbox{~}~1, &\mbox{ if } s\in F,\\\n\displaystyle\n\max_{\!\!a_{\petitscops}\in \scalebox{0.8}{$\acs{\!s\!}\!\!$}} \n\sum_{s'\in S} \!T_{\petitscops}(s,a_{\petitscops}, s')\displaystyle\n\min_{\!\!\!\!a_{\petitsrobbers}\in \scalebox{0.8}{$\ars{\!s'\!}\!\!$}} \n\sum_{\scalebox{0.7}{$s''$}\in S} \n\hspace{-0.8mm}\nT_{\petitsrobbers}(s', a_{\petitsrobbers}, s''\hspace{-0.2mm}) \nw_{\infty}(s''\hspace{-0.1mm} ), \hspace{-3.3mm}\mbox{}\n&\mbox{ 
otherwise.}\n\end{cases}\n\end{align}\nMoreover, the optimal (memoryless) strategy for the cops in $\cl{G}$, from any state $s$, can be retrieved from a cop action for which the maximum in Equation~\eqref{eq:w_infini} is achieved.\n\end{theorem}\n\begin{proof}\n\renewcommand{a_{\petitscops}}{a}\n\renewcommand{a_{\petitsrobbers}}{a'}\nLet $L$ be the lattice of functions $S \rightarrow [0,1]$, ordered point-wise, with the null function as bottom element $\bot$. Equation~\eqref{eq:absgamewn} determines the following function ${\cal F} : L\rightarrow L$. For $f: S\to [0,1]$ and $s\in S$,\n\begin{align}\n{\cal F} (f) &(s) :=\nonumber\n\\&\n\begin{cases}\n\mbox{~}~1, &\mbox{ if } s\in F,\\\n\displaystyle\n\max_{\!a_{\petitscops}\in \scalebox{0.8}{$\acs{s\!}\!$}} \sum_{s'\in S} \!T_{\petitscops}(s,a_{\petitscops}, s')\displaystyle\n\min_{\!\!\!\!a_{\petitsrobbers}\in \scalebox{0.8}{$\ars{s'\!}\!$}} \sum_{\scalebox{0.7}{$s''$}\in S} \hspace{-0.8mm}T_{\petitsrobbers}(s', a_{\petitsrobbers}, s''\hspace{-0.2mm}) f(s''\hspace{-0.1mm} ), \hspace{-3.3mm}\mbox{}\n&\mbox{ otherwise.}\n\end{cases}\nonumber\n\end{align}\nFrom previous remarks, ${\cal F}$ is monotone increasing. Thus, we deduce from the Knaster-Tarski fixed point theorem \cite{granas2003} that ${\cal F}$ has a least fixed point given by $w_\infty :=\lim_{n\to \infty} {\cal F}^n(\bot)$. \nFurthermore, we have\n${\cal F}(\bot) = w_0$ and ${\cal F}(w_{n-1}) = w_n$, so ${\cal F}^{n+1}(\bot) = w_n$, for every integer $n$, and thus $w_\infty = \lim_{n\to \infty} w_n$ satisfies Equation~\eqref{eq:w_infini}. \n\nWe showed in Theorem~\ref{thm:copwinthm} that $w_n(i_0) = p_n^*$, and in Proposition \ref{prop:epsoptimal} that $\lim_{n\to\infty} p_n^*= p_\cl{G}^*$. \nConsequently, $w_\infty(i_0) = p_\cl{G}^*$. Hence, $w_\infty(i_0)$ is the probability that the cops capture the robbers when both teams play optimally. 
Similarly, one can show that $w_\infty(s)$ is the probability that, starting at $s$, the cops capture the robbers when both teams play optimally.\nThis, together with the fact that $w_\infty$ satisfies Equation~\eqref{eq:w_infini}, implies that the optimal strategy for the cops consists in choosing, in each state, an action achieving the $\operatornamewithlimits{argmax}$ in place of the $\max$ operator in Equation~\eqref{eq:w_infini}. One cannot choose just any such action: for example, a temporarily bad action, like staying idle, can give the same probability of winning as another action, yet it may only be played a finite number of times, which is incompatible with a memoryless strategy.\n\end{proof}\n\n\begin{remark}\nRecall that we have $w_n(i_0) = p_n^*$ and that, by definition, it holds that $p_n^* = \min_{\sigma_{\robbers}\in \Omega_{\robbers}^\mathrm{g}}\max_{\sigma_{\cops}\in \Omega_{\cops}^\mathrm{g}} p_n(\sigma_{\cops},\sigma_{\robbers})$. Thus, we could have defined $w_n(i_0)$ with switched operators $\min$ and $\max$. Then, we can deduce the optimal robbers' strategies by flipping those operators and replacing the $\min$ operator by an $\operatornamewithlimits{argmin}$ operator. This also holds in $w_\infty$.\n\end{remark}\n\n\nNow, with the help of Equation \eqref{eq:absgamewn} we can generalize the classic theorem of \nCops and Robbers{} games. This is done in the next corollary. \n\begin{corollary}\nLet $\cl{G}$ be a GPCR game. 
Then, $\\cl{G}$ is \\emph{\\textrm{copwin}{}} if and only if the sequence $(w_n)_{n\\in\\mathbb{N}}$ is stationary and\n\\[\n\\overline{w}(i_0) = 1.\n\\]\nMoreover, the game is $p$\\textrm{-\\win{}}{} if and only if the sequence is stationary and \n\\[\n\\overline{w}(i_0) \\geq p.\n\\]\nIf $\\cl{G}$ is not $p$\\textrm{-\\win{}}{} for any $p$, then the game is almost surely \\textrm{copwin}{} if and only if the sequence is not stationary and\n\\[\nw_\\infty(i_0) =1.\n\\]\n\\end{corollary}\n\n\\begin{remark}\\label{r:determstation}\nIf the GPCR game $\\cl{G}$ is deterministic, then $w_n(s)$ is $0$ or $1$ for any $n\\in\\mathbb{N}$ and $s\\in S$. It therefore follows from monotonicity of $(w_n)_{n\\in\\mathbb{N}}$ (see Corollary~\\ref{cor:mono}) and from Remark \\ref{r:grandN} that the stationary part starts at some $N\\leq \\@ifstar{\\oldabs}{\\oldabs*}{S}$. Indeed, if $w_n\\neq w_{n+1}$, there is at least one $s$ such that $w_n(s)=0$ and $w_{n+1}(s)=1$. This difference can be observed at most $\\@ifstar{\\oldabs}{\\oldabs*}{S}$ times.\n\\end{remark}\n\nThe conditions under which $(w_n)_{n\\in\\mathbb{N}}$ is stationary are presented in Proposition \\ref{prop:wnstation}.\n\n\n\\subsection{The computational complexity of the \\texorpdfstring{$w_n$}{wn} recursion}\\label{sec:compwn}\n\nWe show a result on the algorithmic complexity of computing the function $w_n$ (Equation \\eqref{eq:absgamewn}). This function is computable with dynamic programming, yet it may require a high number of operations, especially as its complexity is a function of the size of the state space. Recall that Equation \\eqref{eq:absgamewn} was devised to be as general and efficient as possible. 
However, given the generality of Definition~\\ref{def:genabspurgame}, the best one can hope for is polynomial complexity in the size of the state and action spaces.\n\\begin{proposition}\\label{prop:compabswn}\nIn the worst case and under a dynamic programming approach, computing $w_n$ requires $\\bigO{n\\@ifstar{\\oldabs}{\\oldabs*}{S}^3\\max |A_{\\petitscops}|\\max |A_{\\petitsrobbers}|}$ operations, where $\\max |A_{\\petitscops}|$ stands for $\\max_{s\\in S}\\@ifstar{\\oldabs}{\\oldabs*}{A_{\\petitscops}(s)}$, and similarly for $\\max |A_{\\petitsrobbers}|$. The spatial complexity is $\\bigO{n\\@ifstar{\\oldabs}{\\oldabs*}{S}}$.\n\\end{proposition}\n\\begin{proof}\nLet $a_n$ be the number of operations required for computing the recursion of $w_n$. Assume that computing the probabilities $T_{\\petitscops}$ and $T_{\\petitsrobbers}$ requires unit cost. Clearly, $a_0=1$. In the worst case, when $n>0$, all elements of the sets $A_{\\petitscops}$ and $A_{\\petitsrobbers}$ must be considered in order to ensure optimality of the actions chosen, and thus $\\textstyle\\max |A_{\\petitsrobbers}|\\max |A_{\\petitscops}|$ operations are required. We always have $\\@ifstar{\\oldabs}{\\oldabs*}{S}\\geq\\@ifstar{\\oldabs}{\\oldabs*}{\\se{s'\\in S \\mid T_{\\petitscops}(s,a_{\\petitscops},s')>0}}$ and similarly for $T_{\\petitsrobbers}(s,a_{\\petitsrobbers},s')$. Then, in the worst case, \n\\begin{align*}\na_n &\\leq \\@ifstar{\\oldabs}{\\oldabs*}{S}^3\\textstyle\\max |A_{\\petitsrobbers}|\\max |A_{\\petitscops}| + a_{n-1}\\\\\n&\\leq n\\@ifstar{\\oldabs}{\\oldabs*}{S}^3\\textstyle\\max |A_{\\petitscops}|\\max |A_{\\petitsrobbers}|+ 1,\n\\end{align*}\nwhere we assumed that the values of $w_{n-1}$ were saved in memory at each step. Memorizing those values requires a spatial complexity of $\\bigO{n\\@ifstar{\\oldabs}{\\oldabs*}{S}}$ at most. 
The final complexity is thus $\\bigO{n\\@ifstar{\\oldabs}{\\oldabs*}{S}^3\\max |A_{\\petitscops}|\\max |A_{\\petitsrobbers}|}$.\n\\end{proof}\n\nConsequently, both spatial and temporal algorithmic complexities depend on the three sets $S$, $A_{\\petitscops}$ and $A_{\\petitsrobbers}$. This suggests that these complexities may be high if the number of available actions is. One could imagine a game in which actions are paths, resulting in exponential complexity in $\\@ifstar{\\oldabs}{\\oldabs*}{S}$. Still, whenever $\\max |A_{\\petitscops}|\\in \\bigO{p(\\@ifstar{\\oldabs}{\\oldabs*}{S})}$ and $\\max |A_{\\petitsrobbers}|\\in \\bigO{q(\\@ifstar{\\oldabs}{\\oldabs*}{S})}$ for some polynomials $p$ and $q$, then Equation \\eqref{eq:absgamewn} is clearly computable in polynomial time in the size of $S$. \nMoreover, as we will see in Corollary~\\ref{cor:wncomp}, $w_n$ does not have to be computed for all $n$ in order to determine if the cops have a winning strategy or not; essentially, $n=\\@ifstar{\\oldabs}{\\oldabs*}{S}$ suffices. \nIn many studied cases, $|S|$ is itself polynomial in the size of the structure on which the game is played, leading to polynomial-time algorithms for solving the game. \n\\begin{comment}\n\\subsubsection{A hardness result of Cops and Robbers{} games}\n\nKinnersley~\\cite{Kinnersley2015} showed that the decision formulation of the cop number problem is \\textrm{EXPTIME-complete}{}. The class $\\mathrm{EXPTIME}$ contains all decision problems decidable in exponential time on a deterministic Turing machine. Observe that the game of $k$ cops and robbers is embedded in our model.\n\nWe can hint at the hardness of computing Equation~\\eqref{eq:absgamewn}. Indeed, Mamino~\\cite{Mamino2013OnTC} demonstrated that determining which player has a winning strategy in the game of $k$ cops and robbers is \\textrm{PSPACE-hard}{}. 
Since Equation~\\eqref{eq:absgamewn} can also determine which player has a winning strategy in deterministic cops and robbers games, it follows that computing it is also in general \\textrm{PSPACE-hard}{}.\n\n\\begin{proposition}\\label{prop:pspacehard}\nThe question:\n\\begin{quote}\nGiven a Cops and Robbers{} game $\\cl{G}$ from Definition~\\ref{def:genabspurgame}, is the value of $\\overline{w}$ greater or equal to some number $p\\in [0,1]$?\n\\end{quote}\nis \\textrm{PSPACE-hard}{} when both $\\cl{G}$ and $p$ are part of the input.\n\\end{proposition}\n\\begin{proof}\nThe classical game of $k$ cops and one robber can be polynomially mapped to a game $\\cl{G}$ of Definition~\\ref{def:genabspurgame} since $k$ is fixed. By way of contradiction, suppose this problem is easier to solve than \\textrm{PSPACE-hard}{}. Since this classical game is deterministic, one can compute Equation~\\eqref{eq:absgamewn} up to stationarity, which will incur at most at $N=\\@ifstar{\\oldabs}{\\oldabs*}{S}$ steps (see Remark~\\ref{r:determstation}). This would use polynomial space. This polynomial-space algorithm would have computed whether the cops or the robber have a winning strategy, contradicting Mamino's theorem.\n\\end{proof}\n\\end{comment}\n\n\\subsection{A stationarity result}\\label{subsec:station}\n\nIn traditional games of Cops and Robbers{} where a relation $\\preceq_n$ is defined (such as the classic game \\cite{Nowakowski1983} and the game with $k$ cops \\cite{Clarke2012}), it is useful to prove results on the convergence of the recursion $\\preceq_n$. One demonstrates the relation becomes \\emph{stationary}, that is, there exists a number $N\\in \\mathbb{N}$ such that for all integers $n>N$ and all pairs of vertices $(u,v)\\in V^2$, if $u\\preceq_n v$, then $u\\preceq_{n+1}v$. One then writes $\\preceq$ for the stationary part of the sequence, i.e. $\\preceq = \\preceq_N$. 
This result is vital for solving Cops and Robbers{} games as it ensures the relation $\\preceq$ can be computed in finite time.\n\nContrary to the relation $\\preceq_n$ found in deterministic Cops and Robbers games (such as the classic game in Example~\\ref{ex:classiccrgame}), the relation $w_n$ does not always become stationary. For example, on the triangle $K_3$, with one cop and one robber, although it is \\textrm{copwin}{} in the classical sense, whenever one adds a probability of capture on the vertices, say $1\/M$ for $M\\geq 1$, then after $n$ turns the cop will have captured the robber with probability only $1 - (1-\\frac 1 M)^n$. Thus, after $n$ turns, the cop can only ensure a probability of capture strictly less than $1$, although he can clearly win with probability $p$ for any $p\\in [0,1)$. In other words, a game may be almost surely copwin, but not \\cwin{n} for any integer $n$. In the following proposition, we formulate and prove an upper bound on the minimal number of steps $n$ required to determine $p_\\cl{G}^*$, the probability of capture in an infinite game.\n\nRecall that it does not hold in general that, in a copwin graph (in the classical sense of one cop against one robber), every optimal strategy of the cop prevents him from visiting any vertex more than once~\\cite{boyer2013cops}. Were this to be true, we could easily upper bound the capture time of the robber. However, we show in Lemma~\\ref{lem:uniquestatesplay} that a milder version of this result holds for states instead of mere cop positions.\n\nTo gain some intuition about why the following lemma holds, it is important to note that the condition of stationarity is a very strong one. The contrapositive of the lemma may be more informative: the only way for $w_n$ to become stationary is that no loop is possible in any play following the optimal strategies of the players. 
An example of a graph where stationarity does not happen is the cycle of length 3, where the robbers move in either direction with equal probability in every state: there are plays where the robbers are caught only after an arbitrarily large number of turns. On the other hand, an acyclic game graph does induce stationarity for $w_n$. \n\\begin{lemma}\\label{lem:uniquestatesplay}\nSuppose that, in a game $\\cl{G}$, the sequence $(w_n(s))_{n\\in\\mathbb{N}}$ is stationary at $N>0$ for a state $s$, and that the cops and robbers follow their optimal strategies from Proposition~\\ref{pr:sigma*N}.\nThen, every winning play (for the cops) from $s$ brings the cops in any given state at most once (at the end of a turn).\n\\end{lemma}\n\\begin{proof}\nWe prove the result for both the cops and the robbers, that is, in a winning play where they follow their optimal strategies, neither of them visits the same state twice at their turn.\nBecause of stationarity and Proposition~\\ref{prop:epsoptimal}, the optimal strategy for the cops in $\\cl{G}$ is also optimal in $\\cl{G}_{n}$, $n\\geq N$. Suppose the lemma is false. Then there is a winning play $\\pi$ (i.e., reaching $F$ in $N$ turns or less) from state $s$ containing a loop through a state $s_k$ that is thus reached twice by the same player in the play, the second time being at $s_l$, with $k<l$. The loop between turns $k$ and $l$ is traversed with positive probability under the optimal strategies, and so is the winning suffix of $\\pi$ after $s_l$. Hence, for every $m\\geq 0$, there are plays consistent with these strategies that traverse the loop $m$ additional times before reaching $F$, each with positive probability. Part of the probability of capture thus materializes only after turn $N$, so that $w_{N+m}(s)>w_N(s)$ for some $m>0$, contradicting the stationarity of $(w_n(s))_{n\\in\\mathbb{N}}$ at $N$.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:wnstation}\nLet $s\\in S$ be a state of a GPCR game $\\cl{G}$. Then:\n\\begin{enumerate}\n\\item if $w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}}(s)=0$, then for all $k>0$, $w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+k}(s) =0$;\n\\item if $w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+1}(s)>w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}}(s)$, then $(w_n(s))_{n\\in\\mathbb{N}}$ is not stationary.\n\\end{enumerate}\n\n\\end{proposition}\n\\begin{proof}\nFor the first claim, assume that $w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+k}(s) > 0$. Then there is a path $\\pi$ from state $s$ to a final state in $F$ that follows $\\sigma^*_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+k}$ (and that has positive probability). 
If this path is longer than $\\@ifstar{\\oldabs}{\\oldabs*}{S}$ then it contains a repetition of at least one state $s'$, at turns, say, $m_1$ and $m_2$, with $m_1<m_2$.\nConsider the finite horizon strategy that follows $\\sigma^*_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+k}$ for the first $m_1$ turns, and then follows $\\sigma^*_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+k -m_2}$, that is, the strategy that $\\sigma^*_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+k}$ prescribes from the second occurrence of $s'$ in $\\pi$.\nRemoving from $\\pi$ the subpath between $m_1$ and $m_2$, we thus obtain a shorter path that has positive probability and that follows this strategy. \nBy continuing this procedure, we obtain a path of length at most $\\@ifstar{\\oldabs}{\\oldabs*}{S}$, and Claim 1 is proved. \n\n\n\nFrom Lemma \\ref{lem:uniquestatesplay}, if $(w_n(s))_{n\\in\\mathbb{N}}$ is stationary from $N$, there are no (positive probability, winning) plays where the same state is encountered twice in the first $N$ turns of $\\cl{G}_{N}$ following $\\sigma_{N}^*$. Now, suppose $N\\geq |S|$. Then there is no repetition of states, which implies that for all $s\\in S$, all paths that contribute to the value $w_N(s)$ are of length at most $|S|$, and the result follows.\n\\end{proof}\n\nIt is interesting to note the contrapositive of the second item in Proposition~\\ref{prop:wnstation}: if $(w_n(s))_{n\\in \\mathbb{N}}$ is stationary for some state $s$, then $w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}}(s) = w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+1}(s)$. In other words, the stationary part starts at turn $\\@ifstar{\\oldabs}{\\oldabs*}{S}$ at the latest. This result holds \\emph{state by state}, so the sequence may fail to be stationary at other states. 
Note however that we cannot deduce stationarity from observing $w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}}(s) = w_{\\@ifstar{\\oldabs}{\\oldabs*}{S}+1}(s)$, because the sequence may stay constant for a few turns and then be updated with a positive value.\nWe can nonetheless complete the algorithmic complexity result presented in Proposition \\ref{prop:compabswn}.\n\\begin{corollary}\\label{cor:wncomp}\nIn the worst case, under a dynamic programming approach, at most $\\bigO{\\@ifstar{\\oldabs}{\\oldabs*}{S}^4\\max |A_{\\petitscops}|\\max |A_{\\petitsrobbers}|}$ operations are sufficient in order to determine whether $w_n$ is null, stationary and equal to a number $p\\in (0,1]$, or increasing indefinitely.\n\\end{corollary}\n\\begin{proof}\nThe result follows from Propositions \\ref{prop:wnstation} and \\ref{prop:compabswn} by substituting $\\@ifstar{\\oldabs}{\\oldabs*}{S}$ for $n$. For stationarity, for example, if $(w_n)_{n\\in \\mathbb{N}}$ is stationary, then $(w_n(s))_{n\\in \\mathbb{N}}$ is stationary for all $s\\in S$, so we can conclude that $(w_n)_{n\\in \\mathbb{N}}$ is stationary at $n=\\@ifstar{\\oldabs}{\\oldabs*}{S}$.\n\\end{proof}\n\n\\begin{comment}\nAn elementary result on Markov chains is the following, that we present without demonstration. 
The proof of this proposition can be found in the book by Norris \\cite{Norris1998} on Markov chains.\n\\begin{proposition}[Norris \\cite{Norris1998}]\\label{prop:exptcaptime}\nFor any $\\bs{s}\\in S^{\\cl{M}}$, the expected capture time from $\\bs{s}$ is the minimal non-negative solution to the following system of linear equations:\n\\begin{align*}\n\\e{T\\mid s} &=\n\\begin{cases}\n0, &\\mbox{ if } s\\in F;\\\\\n\\displaystyle\n1 + \\sum_{\\bs{s'}\\in S^{\\cl{M}}}M({\\bs{s},\\bs{s'}})\\e{T\\mid \\bs{s'}}, &\\mbox{ if }s\\notin F.\n\\end{cases}\n\\end{align*}\n\\end{proposition}\n\\end{comment}\n\n\\begin{comment}\n\\color{darkross}\n\n\\modiffre{The Markov Chain of Definition \\ref{def:markovchain} can be used to describe the game $\\cl{G}$ in a structural manner.} The theory of Markov chains is rather extensive and contains many deep results. Notably, one of the most investigated questions in this field concerns the expected hitting time of some state from another. This question directly links to another question, this time in GPCR games, that of the expected capture time of the robbers, which was investigated by Komarov and Winkler \\cite{Komarov2013b,Komarov} as well as Pralat and Kehagias \\cite{Kehagias2012}. We can thus, without fully answering this question, present cues for further research on the topic.\n\nLet $\\cl{M}$ be the chain described in Definition \\ref{def:markovchain} and $M$ its transition matrix. If $(X_n)_{n\\geq 0}$ describes the stochastic process on $\\cl{M}$ beginning at the initial state $i_0$, then we define $T:= \\frac 1 2 \\min_{n\\geq 0}(X_n\\in F)$. Thus, $T$ is the capture time of the robbers \\added{in $\\cl{G}$}. The expected capture time, beginning on $\\bs{s}\\in S^{\\cl{M}}$ is written $\\e{T\\mid s}$. An elementary result on Markov chains is the following, that we present without demonstration. 
The proof of this proposition can be found in the book by Norris \\cite{Norris1998} on Markov chains.\n\\begin{proposition}[Norris \\cite{Norris1998}]\\label{prop:exptcaptime}\nFor any $\\bs{s}\\in S^{\\cl{M}}$, the expected capture time from $\\bs{s}$ is the minimal non-negative solution to the following system of linear equations:\n\\begin{align*}\n\\e{T\\mid s} &=\n\\begin{cases}\n0, &\\mbox{ if } s\\in F;\\\\\n\\displaystyle\n1 + \\sum_{\\bs{s'}\\in S^{\\cl{M}}}M({\\bs{s},\\bs{s'}})\\e{T\\mid \\bs{s'}}, &\\mbox{ if }s\\notin F.\n\\end{cases}\n\\end{align*}\n\\end{proposition}\n\nThus, once the optimal strategies have been determined, we can simply solve a, possibly large, linear system of equations in order to compute the expected capture times. Although it requires a strong hypothesis, namely the full knowledge of the chain $\\cl{M}$, Proposition \\ref{prop:exptcaptime} can be interesting if each value of $M$ can be computed online. Other, more complex, results exist in the literature on Markov chains that should enable one to compute $\\e{T\\mid s}$ more efficiently. Some require more in-depth analysis of $\\cl{M}$. We briefly refer to some recent papers dealing with the question of the expected hitting time, corresponding here to $\\e{T\\mid s}$, of a Markov chain \\cite{Chen2008,Cogill2010,Palacios2010}.\n\nA final interesting aspect of the chain $\\cl{M}$ is the possibility to rewrite Proposition \\ref{prop:wnstation} directly in term of its structure. Indeed, looking at the proof of this proposition, we see that if one could characterize the structure of this chain, one could achieve the same conclusions without explicitly having to compute the recursion $w_n$, except possibly for determining the optimal actions to be played.\n\n\\begin{corollary}\\label{cor:wnstation}\nLet $s\\in S$ be a game state and $\\bs{s}$ its corresponding state in $\\cl{M}$. Suppose $\\cl{M}$ has been constructed. 
Let $X$ be the set of states of $\\cl{M}$ from which $F$ is reachable. Then, three cases are possible:\n\\begin{enumerate}\n\\item $\\bs{s}\\notin X$, in which case $w_n(s) =0$ for all $n\\geq 0$;\n\\item $X\\setminus F$ contains no reachable cycle starting from $\\bs{s}$, in which case $(w_n(s))_{n\\in \\mathbb{N}}$ becomes stationary;\n\\item $X\\setminus F$ contains a reachable cycle starting from $\\bs{s}$, in which case $(w_n(s))_{n\\in \\mathbb{N}}$ never becomes stationary.\n\\end{enumerate}\n\\end{corollary}\n\\begin{proof}\nWe again make use of the notion of course $P_{(s, a^*)}^F$.\n\\begin{enumerate}\n\\item $\\bs{s}\\notin X$. Then, by definition, $\\bs{s}$ cannot reach a final state. \\modiffre{This corresponds to the case in which $P_{(s, a^*)}^F = \\emptyset$} and $w_n(s) =0$ for all $n\\geq 0$.\n\\item $X\\setminus F$ contains no reachable cycle from $\\bs{s}$. Then, $P_{(s, a^*)}^F$ is finite and $(w_n(s))_{n\\in \\mathbb{N}}$ becomes stationary.\n\\item $X\\setminus F$ contains a reachable cycle from $\\bs{s}$. Since $X$ contains only states that can reach $F$, all those states have a positive path towards $F$ and the argument on the cycle that increases the probability can be applied. Hence, $P_{(s, a^*)}^F$ is infinite and $(w_n(s))_{n\\in \\mathbb{N}}$ never becomes stationary.\n\\end{enumerate}\n\\end{proof}\n\nCorollary \\ref{cor:wnstation} gives some sort of structural characterization of GPCR games. Clearly, this result is not as deep as classic characterizations by dismantling that exist on Cops and Robbers{} games (see \\cite{Marcoux,Nowakowski1983}). However, this corollary states that, whenever the Markov chain is constructed, then it suffices to analyze its structure in order to deduce the properties of the relation $w_n$ from Equation \\eqref{eq:absgamewn}. 
Corollary \\ref{cor:wnstation} is merely a first step towards a full structural characterization of GPCR games, independent of Equation \\eqref{eq:absgamewn}, albeit a non-trivial one.\n\\begin{remark}\nOnce the chain $\\cl{M}$ has been constructed, the set $X$ can be built from a backwards depth-first search method beginning in $F$. In other words, let $A$ be the adjacency matrix defined as:\n\\begin{align*}\nA(\\bs{s'}, \\bs{s}) &:= \n\\begin{cases}\n1, &\\mbox{ if } M(\\bs{s}, \\bs{s'})>0,\\\\\n0, &\\mbox{ otherwise.}\n\\end{cases}\n\\end{align*}\nThen, for each state $f\\in F$, one can search the directed graph described by $A$ and include each encountered state in $X$. The adjacency matrix $A$ describes a graph whose edges are reversed compared to the one underlying $\\cl{M}$. Once $X$ has been constructed, the cycles of $X\\setminus F$ can be recovered again with a depth-first search algorithm.\n\\end{remark}\n\nWe conclude this section by noting how $\\cl{M}$ is still left to be constructed. Without going further, we note that this chain can easily be constructed in at most $\\bigO{\\@ifstar{\\oldabs}{\\oldabs*}{S}^4\\max |A_{\\petitscops}|\\max |A_{\\petitsrobbers}|}$ operations, the time required to compute the recursion $w_n$ up to stationarity.\n\\color{black}\n\\end{comment}\n\n\\subsection{Bonato and MacGillivray's generalized Cops and Robbers game}\nThis subsection is dedicated to our comparison with Bonato and MacGilli\\-vray's generalized Cops and Robbers{} game \\cite{Bonatoa}, which is another attempt at studying Cops and Robbers{} games in general forms. For the sake of self-containment, their model is transcribed here. This model is completely deterministic and thus is included as a special case of Definition \\ref{def:genabspurgame}. 
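For a deterministic game such as Bonato and MacGillivray's, the general recursion of Equation \eqref{eq:absgamewn} collapses to a pure max--min computation. The following Python sketch illustrates this specialization on a toy instance, one cop and one robber on the path graph $P_3$, with capture whenever the two positions coincide; the graph, the state encoding and the capture rule are illustrative assumptions made here, not part of the formal model.

```python
from itertools import product

# Toy deterministic game: one cop and one robber on the path P3 (0 - 1 - 2).
# Closed neighbourhoods: on his turn, a player may stay put or move to a
# neighbouring vertex.
V = [0, 1, 2]
adj = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}

states = list(product(V, V))          # (cop position, robber position)

def w(n):
    """Dynamic-programming computation of w_n(s) for every state s:
    max over cop moves, then min over robber replies."""
    vals = {(c, r): 1.0 if c == r else 0.0 for (c, r) in states}   # w_0
    for _ in range(n):
        new = {}
        for (c, r) in states:
            if c == r:                # final state: robber already caught
                new[(c, r)] = 1.0
                continue
            best = 0.0
            for c2 in adj[c]:
                if c2 == r:           # the cop steps onto the robber: capture
                    move_val = 1.0
                else:                 # the robber replies so as to minimize
                    move_val = min(1.0 if r2 == c2 else vals[(c2, r2)]
                                   for r2 in adj[r])
                best = max(best, move_val)
            new[(c, r)] = best
        vals = new
    return vals

# The path is copwin: the sequence (w_n) becomes stationary, equal to 1.
print(all(v == 1.0 for v in w(len(states)).values()))   # prints: True
```

Monotonicity is visible directly: for instance $w_1(0,2)=0$ while $w_2(0,2)=1$, and the sequence is stationary well before $n=\@ifstar{\oldabs}{\oldabs*}{S}=9$, consistent with Remark~\ref{r:determstation}.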
\n\nBonato and MacGillivray's game is presented in the following definition.\n\\begin{definition}[Bonato and MacGillivray's game]\\label{def:bonatomacggame}\nA discrete time process $\\cl{G}$ is a generalized Cops and Robbers{} game if it satisfies the following rules:\n\\begin{enumerate}\n\\item Two players, the \\emph{pursuer} and the \\emph{evader}, compete against each other.\n\\item There is perfect information.\n\\item There is a set $\\cl{P}_P$ of admissible positions for the pursuer and a set $\\cl{P}_E$ for the evader. The set of admissible positions of the game is the subset $\\cl{P}\\subseteq \\cl{P}_P\\times \\cl{P}_E$ of positions that can be reached according to the rules of the game. The set of game states is the subset $\\cl{S}\\subseteq \\cl{P}\\times \\se{P, E}$ such that $((p_P, q_E), X)\\in \\cl{S}$ if, when $X$ is the player next to play, the position $(p_P, q_E)$ can be reached by following the rules of the game.\n\\item For each game state and each player, there exists a non-empty set of allowed movements. Each movement leaves the other player's position unchanged. We write $\\cl{A}_P(p_P, q_E)$ for the set of allowed movements of the pursuer when the game state is $((p_P, q_E), P)$ and $\\cl{A}_E(p_P, q_E)$ for the set of movements allowed to the evader when the game state is $((p_P, q_E), E)$.\n\\item The rules of the game specify how the game begins. Thus, there exists a set $\\cl{I}\\subseteq \\cl{P}_P\\times \\cl{P}_E$ of admissible starting positions. We define $\\cl{I}_P = \\se{p_P : \\exists\\; q_E\\in \\cl{P}_E, (p_P, q_E)\\in \\cl{I}}$ and, for $p_P\\in \\cl{P}_P$, we define the set $\\cl{I}_E(p_P) = \\se{q_E\\in \\cl{P}_E : (p_P, q_E)\\in \\cl{I}}$. The game $\\cl{G}$ starts with the pursuer choosing a starting position $p_P\\in \\cl{I}_P$ and then the evader choosing a starting position $q_E\\in \\cl{I}_E(p_P)$.\n\\item After both players have chosen their initial positions, the game unfolds alternately, with the pursuer moving first. 
Each player, on his turn, must choose an admissible action given the current state.\n\\item The rules of the game specify when the pursuer has captured the evader. In other words, there is a subset $\\cl{F}$ of final positions. The pursuer wins $\\cl{G}$ if, at any moment, the current position belongs to $\\cl{F}$. The evader wins if his position never belongs to $\\cl{F}$.\n\\end{enumerate}\nOnly Cops and Robbers{} games in which the set $\\cl{P}$ is finite are considered. The games considered are played on a finite sequence of turns indexed by the natural numbers, including $0$.\n\\end{definition}\n\nWe also present how the same authors defined an extension of the relation $\\preceq_n$ of Nowakowski and Winkler \\cite{Nowakowski1983} in order to solve the set of games characterized by their model.\n\\begin{definition}[Bonato and MacGillivray's $\\preceq_n$]\\label{def:bonatopreceq}\nLet $\\cl{G}$ be a Cops and Robbers{} game given by Definition \\ref{def:bonatomacggame}. We let:\n\\begin{enumerate}\n\\item $q_E\\preceq_0 p_P$ if and only if $(p_P, q_E)\\in \\cl{F}$.\n\\item Suppose that $\\preceq_0, \\preceq_1, \\dots, \\preceq_{i-1}$ have all been defined for some $i\\geq 1$. Define $q_E\\preceq_i p_P$ if $(p_P, q_E)\\in \\cl{F}$ or if $((p_P, q_E), E)\\in \\cl{S}$ and for all $x_E\\in \\cl{A}_E(p_P, q_E)$ either $(p_P, x_E)\\in \\cl{F}$ or there exists some $w_P\\in \\cl{A}_P(p_P, x_E)$ such that $x_E\\preceq_j w_P$ for some $j<i$.\n\\end{enumerate}\n\\end{definition}\n\nBoth Equation \\eqref{eq:absgamewn} and the relation $\\preceq_n$ distinguish three cases: a base case, when $n=0$; a case, when $n>0$, but the current state is final; finally, a last case, again when $n>0$, when both players must choose an action that is optimal in the subsequent turns. Thus, we formally show how those two equations are related in the coming lines.\n\nWe first note that Equation \\eqref{eq:absgamewn} can be simplified when following Bonato and MacGillivray's model. Since the component $s_{\\mathrm{o}}$ is not used in what follows, we simply write $(c,r)\\in S$. Since the game is deterministic, we let players choose their next position directly. 
The recursion $w_n$ is thus given by:\n\\begin{align}\\label{eq:detergenabswn}\nw_0(c,r) &=1 \\iff (c,r)\\in F;\\nonumber\\\\\nw_n(c,r) &=\n\\begin{cases}\n1, &\\mbox{ if } (c,r)\\in F;\\\\\n\\displaystyle\n\\max_{\nc'\\in \\acs{c,r}}\n\\min_{\nr'\\in \\ars{c',r}}w_{n-1}(c', r'), &\\mbox{ otherwise.}\n\\end{cases}\n\\end{align}\n\nThe following theorem makes the connection between the two formalisms, that of our model and that of Bonato and MacGillivray. In order to clarify the exposition, the relation $\\preceq_n$ is written in our model, that of Definition \\ref{def:genabspurgame}. Given the preceding remarks, this should incur no loss of generality. \n\n\\begin{theorem}\nLet the relation $\\preceq_n$ be given by Definition \\ref{def:bonatopreceq} and $w_n$ the recursion given by Equation \\eqref{eq:absgamewn}. Assume $\\cl{G}$ is a GPCR game given by Definition \\ref{def:genabspurgame}, but following the specifications of Definition \\ref{def:bonatomacggame}. Then, we have:\n\\begin{align}\\label{eq:wncorpreceq}\nw_n(c,r) =1 \\iff \\exists \\; c' \\in \\acs{c,r} : r\\preceq_n c'.\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nFirst, observe that the relation $\\preceq_n$ compares the positions of the pursuer and the evader. These positions are encoded in the game states $S$ of our model. Moreover, the sets of actions $\\cl{A}$ defined in Definition \\ref{def:bonatomacggame} are in fact restrictions of the action sets from Definition \\ref{def:genabspurgame}. Indeed, actions in $\\cl{A}$ directly correspond to game positions, whereas, in Definition \\ref{def:genabspurgame}, we allow the action sets to be disjoint from the set of states. It is thus possible to define a game $\\cl{G}$ that respects the hypotheses of Definition \\ref{def:bonatomacggame} and where Expression \\eqref{eq:wncorpreceq} is well-defined. 
A subtle difference between both formalisms has to do with the turn counters: in relation $w_n$ the cops are next to play, while in relation $\\preceq_n$ the robbers are to make their move. This does not change the fact that the cops play first in both games. Now, we prove the result by induction, similarly to the proof of Proposition~\\ref{prop:classwneq}.\n\\\\\\emph{Base case: $n=0$.} $w_0(c,r) = 1$ if and only if $(c,r)\\in F$ and $(c,r)\\in F$ if and only if $r\\preceq_0 c$.\n\\\\\\emph{Induction step.} Assume the result holds for $n\\leq k$ and let us show it for $n=k+1$. It holds that $w_{k+1}(c,r)=1$ if and only if $(c,r)\\in F$, in which case $r\\preceq_{k+1}c$ by definition, or there exists an action $c' \\in \\acs{c,r}$ for the cops such that no matter the response $r'\\in \\ars{c', r}$ of the robbers, we have $w_k(c', r') = 1$. By the induction hypothesis, we have $w_k(c', r')=1$ if and only if there exists an action $c'' \\in \\acs{c', r'}$ such that $r'\\preceq_k c''$. Thus, if the cops play action $c'$, they position themselves on a state in which $r\\preceq_{k+1} c'$. Conversely, assume there exists an action $c'\\in \\acs{c,r}$ such that $r\\preceq_{k+1}c'$. Then, by definition, for every response $r'\\in \\ars{c', r}$ of the robbers there exists an action $c''\\in \\acs{c', r'}$ of the cops such that $r'\\preceq_k c''$. In this case, by the induction hypothesis, we have $w_k(c', r')=1$. The cops play action $c'\\in \\acs{c,r}$, in which case we have $w_{k+1}(c,r)=1$.\n\\end{proof}\n\n\\section{Constructing a GPCR game as a Simple Stochastic Game}\\label{sec:annex_ssg}\nThe following argument is inspired by the SSG exposition of Gimbert and Horn \\cite{gimbert2008simple}. A simple stochastic game is a tuple $(V, V_{\\max}, V_{\\min}, V_R, E, t, p)$, where $(V,E)$ describes a directed graph $G$ and $V_{\\max}, V_{\\min}, V_R$ form a partition of $V$. 
There is a special vertex $t\\in V$, called the target, and $p$ is a probability function such that for every vertex $w\\in V$ and $v\\in V_R$, $p(w\\mid v)$ is the probability of transiting from $v$ to $w$. There are two players, $\\max$ and $\\min$, and the game is played with perfect information. The set $V_{\\max}$ contains those nodes controlled by player $\\max$, i.e. where this player is next to play, and $V_{\\min}$ those nodes controlled by player $\\min$. The set of edges $E$ is defined by the possible moves in the game. The game proceeds as follows. A token is placed on some initial vertex $i\\in V_{\\max}\\cup V_{\\min}$ and the player who is next to play moves the token along an edge. Either the token lands again on some vertex of $V_{\\max}\\cup V_{\\min}$, where one of the players has to make a move, or it lands on some vertex $v$ of $V_R$. When the token is on $v$, an outneighbour of $v$ is chosen randomly according to the distribution $p(\\cdot\\mid v)$, and the token is moved there. The game ends if $t$ is ever encountered, in which case $\\max$ wins; otherwise it continues indefinitely and the other player wins.\n\nFollowing Gimbert and Horn, we define a play as an infinite sequence of vertices $v_0v_1\\dots$ of $G$ such that $(v_i,v_{i+1})\\in E$ for all $i$, and a finite play (what we called a history) as a finite prefix of a play. A strategy for $\\max$ is a function $\\sigma : V^*V_{\\max}\\rightarrow V$ and a strategy for $\\min$ is a function $\\tau : V^*V_{\\min} \\rightarrow V$, where $V^*$ is the set of finite plays. We suppose that for each finite play $(v_0\\dots v_n)$ and vertex $v\\in V_{\\max}$, $(v,\\sigma(v_0\\dots v_n v))\\in E$, and similarly for $\\tau$. Note that such strategies are deterministic, which is without loss of generality. We write for convenience $\\Gamma_{\\max}$ and $\\Gamma_{\\min}$ for the sets of $\\max$ (resp. $\\min$) strategies. Now, for any node $v\\in V$, we can define the value of $v$ for $\\max$ (resp. 
$\\min$) as the probability that the target node is reached from that node. If $p(t \\mid \\sigma, \\tau, v)$ is the probability that $t$ is reached from $v$ under strategies $\\sigma$ and $\\tau$, then we let \n\\begin{align*}\n\\underline{val}(v) \n&:= \\sup_{\\sigma\\in \\Gamma_{\\max}}\\inf_{\\tau\\in\\Gamma_{\\min}}\np(t\\mid \\sigma,\\tau,v),\\\\\n\\overline{val}(v)\n&:=\\inf_{\\tau\\in\\Gamma_{\\min}}\\sup_{\\sigma\\in \\Gamma_{\\max}}\np(t\\mid \\sigma,\\tau,v).\n\\end{align*} \n\nThe following theorem about simple stochastic games is well known \\cite{condon1992complexity,gimbert2008simple,shapley1953stochastic}.\n\\begin{theorem}\\label{thm:ssgvalexist}\nIn any simple stochastic game and from any vertex $v$, $\\underline{val}(v) = \\overline{val}(v)$ and we write $val(v):= \\underline{val}(v)$. Furthermore, there exist deterministic and memoryless strategies for players $\\max$ and $\\min$ that achieve the value $val(v)$.\n\\end{theorem}\n\nWe write a GPCR game $\\cl{G}$ as a simple stochastic game by describing a directed graph $G = (V = V_\\mathrm{cop}\\cup V_\\mathrm{rob}\\cup V_R\\cup\\se{t}, E)$, where $V_\\mathrm{cop}$ is the set of vertices controlled by the cops, $V_\\mathrm{rob}$ the set of vertices controlled by the robbers, $V_R$ is the set of random vertices and $t$ is the target vertex for the cops. The set of edges $E$ is induced by the transition functions $T_{\\petitscops}$ and $T_{\\petitsrobbers}$. If there exists a play in $\\cl{G}$ with a subsequence $sas'$, then we add an edge from $s$ to some node $v\\in V_R$, labelled $a$. We add an edge from $v$ to $s'$ weighted either by $T_{\\petitscops}(s,a,s')$ or $T_{\\petitsrobbers}(s,a,s')$ depending on whether $a$ was played by the cops or by the robbers. 
We assume that vertex $t$ holds all final states of $F$; thus all transitions of the form $T_{\petitscops}(s,a,f)>0$ or $T_{\petitsrobbers}(s,a,f)>0$, for any state $s$, action $a$ and final state $f$, induce edges from some vertex of $V_R$ to $t$. Now, in this game the cops win if and only if they can reach $t$ from the initial vertex $i_0$. Thus, this is a simple stochastic game that corresponds to $\cl{G}$. We deduce from \autoref{thm:ssgvalexist} that $val(v)$ exists and that it is the probability that the cops capture the robbers from a vertex, or state, $v$ in $\cl{G}$. Since this SSG has a value, so does $\cl{G}$, and this value is the probability just mentioned.\n\n\section{Conclusion}\label{sec:conclusion}\nThis paper presented a relatively simple yet very general model in order to describe games of Cops and Robbers{} that, notably, may include stochastic aspects. \n The game $\cl{G}$ was presented along with a method of resolution in the form of a recursion $w_n$ in Theorem~\ref{thm:copwinthm}. \nWe show in Proposition~\ref{prop:epsoptimal} that we can always retrieve an $\epsilon$-optimal strategy for $\cl{G}$ from the recursion $w_n$ (for large enough $n$). \nMoreover, in Proposition \ref{prop:wnstation} we show that if the recursion becomes stationary, stationarity must occur at an index of at most $\@ifstar{\oldabs}{\oldabs*}{S}$. This is a first step in the analysis of the rate of convergence of the recursion.\n\nWe have shown how some classic Cops and Robbers{} games can be expressed in our model and \nextended. Many more games could now be studied as GPCR games, such as the Firefighting game (under certain conditions), in which a team of firefighters seeks to prevent the nodes of a graph from burning.\n An interesting notion that is captured by our framework, in Definition~\ref{def:genconcretmodel}, is that of the surveillance zones of the cops, which can be chosen at each step. 
\nThus, we claim that a wide variety of games of Cops and Robbers{} can be solved with the concepts developed in this paper. Furthermore, such a broad exposition of games of Cops and Robbers{} as ours enables one to study the effects of modifying certain rules, for example on the number of cops or on the speed of players, on the games. That is, one can use Equation~\eqref{eq:absgamewn} and probe its values in order to test how modifying these rules affects the ability of the cops to capture the robbers.\n\nWe have extended the classic notion of cop number with the $p$\textrm{-cop number}{}, although the behaviour of this function remains an open question. \nThe expected capture time of the robbers is also of great interest. This function can now be studied on large swaths of Cops and Robbers{} games. In part, this question can be motivated by a paper of Simard et al.~\cite{Simard2015} on the relation between an Operations Research problem and the resolution of a Cop and Drunk Robber game. Specifically, the authors tackled the problem of upper bounding the probability of detecting a hidden and randomly moving object on a graph with a single optimally moving searcher. This problem, being \textrm{NP-hard}{}~\cite{Trummel}, is constrained to be solved in a maximum number of time steps $T\in \mathbb{N}$. In particular, it appears that if one could tightly upper bound the expected capture time of a game derived from Definition~\ref{def:genabspurgame}, then one could, following the ideas presented in this paper, deduce the optimal number of searchers to send on a mission to rescue the object. 
Then, if this number were deduced, one could further apply the ideas of this article along with Equation~\eqref{eq:absgamewn} in order to help solve this search problem with multiple searchers.\n\nFinally, a last avenue of research that is worth mentioning, and that is possibly of most interest to researchers in robotics and operations research, concerns the extension of the model of Definition~\ref{def:genabspurgame} to games of imperfect information. Imperfect information refers to the lack of knowledge of one or both players. Cops and Robbers games of imperfect information thus contain games in which robbers are invisible, which can model problems of graph search such as the one mentioned above. Game theory seems apt to enable the transition from perfect information to imperfect information games with the use of belief states. Such a generalization could be paired with the \emph{branch and bound} method presented in Simard et al.~\cite{Simard2015} in order to solve more general search problems.\n\nIn light of the literature on Cops and Robbers{} games, it appears that this paper distances itself from most studies on the subject. Indeed, we do not claim any results on typical Cops and Robbers{} questions such as the asymptotic behaviour of $c_p(\mathcal{G})$ or on dismantling schemes to characterize classes of winning graphs. However, we think that modelling such a wide variety of games opens the door to further studies on Cops and Robbers{} games that can now be tackled in their generality, which was not possible before. Thus, although our model may not enable one to compute analytical solutions on classical questions of Cops and Robbers{} games, we have good hope that algorithmic ones will be devised in order to solve more general problems on classes, not of graphs, but of games. 
In short, it appears that new and promising avenues of research have come to light with the objects presented in this paper, and we hope researchers will be driven to tackle those open questions that were unearthed.\n\section{A concrete model of GPCR games}\label{sec:concrgames}\n\nIn this section we present a more concrete model of GPCR games that is closer to the usual definitions in the literature. Thus, we specify that the game is played on a graph, without fixing its particular structure. The actions of the players will correspond to paths, as in the game of Cop and Fast Robber \cite{Marcoux}. The game presented in Definition~\ref{def:genabspurgame} is abstract because its sets do not depend on any precise structure, and so neither does the algorithmic complexity of computing Equation \eqref{eq:absgamewn}. The point of reformulating Definition~\ref{def:genabspurgame} is to refine some results and formulate them in terms of the graph's structure.\n\n\subsection{Definition of concrete Cops and Robbers{} games}\nIn the game presented below, players walk on paths since it appears, in light of the literature, that such actions are most general. We also grant the cops a watch zone that enables them to capture the robbers whenever they are observed. We write $\pt{}$ for the set of finite paths in a graph and $\pt{v}\subseteq \pt{}$ for the set of paths that start on vertex $v\in V$. To simplify the notation, we formulate the concrete model in the setting where there is one cop and one robber, and without the auxiliary information set $S_{\objects}$. 
The extension to the general case is straightforward.\n\n\begin{definition}\label{def:genconcretmodel}\nA GPCR game $\cl{G}=\left(S, i_0, F, A, T_{\petitscops}, T_{\petitsrobbers} \right)$ with one cop and one robber (Definition~\ref{def:genabspurgame}) \n is \emph{concrete} if there is a graph $G=(V,E)$ satisfying:\n\begin{enumerate}\n\item $S = S_{\petitscops} \times S_{\petitsrobbers} $ is a finite set of configurations of the game.\n\item $i_0 = (i_\mathrm{cop},i_\mathrm{rob})$, where $i_\mathrm{cop}, i_\mathrm{rob}\not\in V$.\n\item $S_{\petitscops} \subseteq V\times \cl{P}(E) \cup\{ i_\mathrm{cop}\}$ is the set of configurations of the cop. \nThe second coordinate is the cop's watch zone.\n\item $S_{\petitsrobbers} \subseteq V\cup\{ i_\mathrm{rob}\}$ is the set of positions of the robber. \n\item $\acs{(c,z),r} \subseteq \pt{c}\times \cl{P}(E)$ is the set of available actions for the cop. He can move along a path from his present position $c$ and choose a watch~zone. From the initial state, $\acs{i_\mathrm{cop}} \subseteq V\times \cl{P}(E)$.\n\item $\ars{(c,z),r} \subseteq \pt{r}$ is the set of available actions for the robber. She can move along a path from her present position $r$. From the initial state, $\ars{i_\mathrm{rob}} \subseteq V$.\n\end{enumerate}\n\end{definition}\n\n{The definition of a play} and all previous remarks and details that apply to Definition~\ref{def:genabspurgame} are still applicable in Definition~\ref{def:genconcretmodel}. \nA peculiarity here is that we let the cop have his own watch zone, consisting of a set of edges. Thus, the cop can only capture the robber on the robber's turn. Indeed, since the robber moves along paths, we can explicitly determine at what point the robber is liable to be caught crossing the cop's watch zone. This is a natural modelling choice that makes the probability of capture easier to write down. 
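The watch-zone capture rule just described can be sketched in a few lines: the robber, moving along a path, is caught exactly when one of her edges crosses the cop's watch zone. The encoding of paths as vertex lists and of watch zones as sets of `frozenset` edges is our own illustration, not the paper's notation.

```python
# Hedged sketch of the watch-zone capture rule: the robber is caught on her
# turn, at her first crossing of an edge in the cop's watch zone.

def crossing_edges(path, watch_zone):
    """Edges of `path` (a vertex sequence) that lie in `watch_zone`,
    a set of undirected edges stored as frozensets."""
    edges = [frozenset(e) for e in zip(path, path[1:])]
    return [e for e in edges if e in watch_zone]

def robber_is_caught(path, watch_zone):
    # Capture happens as soon as the robber's walk crosses the watch zone.
    return len(crossing_edges(path, watch_zone)) > 0
```

For a cop sitting on the middle vertex $b$ of a path $a$--$b$--$c$ with watch zone $E_b$, any robber path using an edge incident to $b$ is caught, while a path avoiding those edges is not.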
\n\n\subsection{Example: Classic Cop and Robber game}\label{ex:classiccr}\nNowakowski and Winkler's, and Quilliot's, game is now presented in the form of Definition \ref{def:genconcretmodel}. In this game, we will consider that the game is over not when the cop reaches the same position as the robber, but exactly after that, during the robber's turn, when she tries to escape. This slightly different interpretation leads to the same game. Our presentation allows one to model and solve a more general situation where the robber could have a possibility of escaping, even if the cop reaches the robber's position. Let $G=(V,E)$ be a finite, undirected, reflexive and connected graph and let:\n\begin{align*}\nS_{\petitscops} &= V\times \cl{P}(E)\\\nS_{\petitsrobbers} &= V\\\n\acs{c,r}&= \{([c,c'],E_{c'})\mid [c,c']\in E \}.\n\end{align*}\nThe watch zone $E_{c'}$ of the next state is the set of edges adjacent to the cop's next position $c'$. The final states are those in which both players stand on the same vertex, $F=\se{(c,r)\in S : c=r}$. The initial state is $i_0=(i_{\petitscops}, i_{\petitsrobbers})$ and we let players choose any vertex from it, that is, $\acs{i_{\petitscops}, i_{\petitsrobbers}} = \ars{c, i_{\petitsrobbers}} = V$, with $c\in V$. Finally, the probabilities of transition are trivial since the game is deterministic. \n\nNow, in order to show that Equation \eqref{eq:absgamewn} is well-defined, we demonstrate how it encodes the relation $\preceq_n$ of Nowakowski and Winkler \cite{Nowakowski1983}. Since the game is deterministic, Equation \eqref{eq:absgamewn} reduces to:\n\begin{align}\label{eq:wnclassic}\nw_0(c,r) &=1 \iff c=r \nonumber\\\nw_n(c,r) &= \n\max_{c'\in N[c]}\min_{r'\in N[r]}\nw_{n-1}(c', r').\n\end{align}\nThis equation is also a particular case of Equation~\eqref{eq:detergenabswn}. The next proposition shows that Equation \eqref{eq:wnclassic} simulates the relation $\preceq_n$. 
\n\begin{proposition}\label{prop:classwneq}\nIt holds that $w_n(c,r) = 1$ if and only if there exists a vertex $c'\in N[c]$ such that $r\preceq_n c'$.\n\end{proposition}\n\begin{proof}\nWe prove the result by induction. We note that in recursion $w_n$ it is the cop's turn to play, while in relation $\preceq_n$ the robber is next to move.\n\\\textit{Base case: $n=0$.} $w_0(c,r)=1$ if and only if $r=c$ and $r=c$ if and only if $r\preceq_0 c$.\n\\\textit{Induction step.} Assume the result holds for $n\leq k$ and let us show it holds for $n=k+1$. Then, $w_{k+1}(c,r)=1$ if and only if there exists an action $c'$ for the cop from which, no matter the response $r'$ of the robber, we have $w_k(c',r')=1$. By the induction hypothesis, $w_k(c',r')=1$ if and only if there exists a vertex $c''\in N[c']$ such that $r'\preceq_k c''$. Thus, the cop can play action $c'\in N[c]$ and we have $r\preceq_{k+1} c'$. Conversely, if there exists a vertex $c'\in N[c]$ such that $r\preceq_{k+1}c'$, then, by definition, for any action $r'\in N[r]$ of the robber there exists a response $c''\in N[c']$ of the cop such that $r'\preceq_k c''$. By the induction hypothesis, we thus have $w_k(c', r')=1$. In this case, the cop can play action $c'\in N[c]$ such that, no matter the answer of the robber $r'\in N[r]$, $w_k(c',r')=1$. By definition, we thus have $w_{k+1}(c,r)=1$.\n\end{proof}\n\n\n\subsection{Example: Cop and Fast Defending Robber game}\n\label{ex:fastdefrobber}\n\nDefinition~\ref{def:genconcretmodel} is further illustrated by the following example. 
It describes the game of Cop and Fast Robber with probability of capture, which is a variant of the one presented by Fomin et al.~\cite{FominGKNS10}, already mentioned in Example~\ref{ex:copfastrobber}, and a variant of Example~\ref{ex:copdrunkdefrobber}, where the robber could evade capture.\nUnsurprisingly, since both games require the robbers to move along paths, it is easier to write this new game following Definition~\ref{def:genconcretmodel}. \n\nFor a path $\pi\in \pt{}$ on a graph $G$, we write $\pi[k]$ for its $k$-th vertex and $\pi[*]$ for its last one.\nLet $G=(V,E)$ be a finite graph. Assume that the cop guards a watch zone $C\subset E$ and that each time the robber crosses an edge $e$ she survives her walk with probability $q_C(e)$ (between 0 and 1). In Example~\ref{ex:copdrunkdefrobber}, a capture probability was used; here we define a survival probability, as it is simpler to use in the current context. \nContrary to the Defending Robber game of Example~\ref{ex:copdrunkdefrobber}, the probability of survival depends on the cop's watch zone as well as the robber's action. Here, only the cop's watch zone and the transition functions are modified compared to Example~\ref{ex:copfastrobber}. So we have an element $j^*\notin V$ and the set of final states is $F=\se{(j^*, \emptyset, j^*)}$. \nWe write $E_c$ for the set of edges incident to $c$. \n Similarly, we write $E_\pi$ for the set of edges of a path $\pi$. 
Let:\n\n\\begin{align*}\nT_{\\petitscops}((c,E_c,r), c')\n&=\n\\begin{cases}\n\\dirac{(c', E_{c'}, r)}, &\\mbox{ if } c=i_{\\petitscops}\\mbox{ and } c'\\in V \\\\\n\t\t\t&\\mbox{ or if } c\\in V \\mbox{ and } c'\\in N[c];\\\\\n0, &\\mbox{ otherwise.}\n\\end{cases}\n\\end{align*}\nThe robber's transition function is given by:\n\\begin{align*}\nT_{\\petitsrobbers}((c,E_c, r), \\pi) \n&=\n\\begin{cases}\n\\dirac{(c,E_c,\\pi[*])}, &\\mbox{ if } E_\\pi \\cap E_c =\\emptyset;\\\\\nD_{(r, \\pi[*])}, &\\mbox{ if } E_\\pi \\cap E_c \\neq \\emptyset;\n\\end{cases}\n\\end{align*}\nwhere $D_{(r, \\pi[*])}$ is a function satisfying:\n\\[\nD_{(r, \\pi[*])}(x) \n= \n\\begin{cases}\n\\prod_{e\\in E_\\pi}q_{E_c}(e), &\\mbox{ if } x=(c, E_c, \\pi[*]);\\\\\n1-\\prod_{e\\in E_\\pi}q_{E_c}(e), &\\mbox{ if } x=(j^*, \\emptyset, j^*).\n\\end{cases}\n\\]\nNote that to retrieve the game considered in Example~\\ref{ex:copfastrobber} and Marcoux's thesis~\\cite{Marcoux}, we should rather use a watch zone $E_c$ containing all edges on paths of length 2 from $c$ and change the conditions on $T_{\\petitsrobbers}$ for $E_{\\pi_1}\\cap E_c =\\emptyset$ and $E_{\\pi_1} \\cap E_c \\neq\\emptyset$, where ${\\pi_1}$ is the subpath of $\\pi$ starting in $\\pi[1]$.\n\nSince the watch zone is determined by the cop's position, we can use the simplified notation $(c,r)$ for a state $(c,E_c,r)$. Thus, the recursion of Equation \\eqref{eq:absgamewn} can be written as follows.\nFor the jail state: $w_i (j^*, \\emptyset, j^*) = 1$ for all $i\\geq 0$. 
For $(c,E_c,r) \neq (j^*, \emptyset, j^*)$, we have $w_0(c,r) = 0$ and, for $n\geq 1$,\n\begin{align*}\nw_n(c,r) = \n\max_{c'\in N[c]}\n\min_{\pi\in\pt{r}}\n\Big(\nT_{\petitsrobbers}((c', r), \pi, (c', \pi[*]))\, w_{n-1}(c', \pi[*])+T_{\petitsrobbers}((c', r), \pi, (j^*, j^*))\n\Big).\n\end{align*}\n\nFollowing Proposition \ref{prop:compabswn}, the algorithmic complexity of the previous recursion is at most $\bigO{n\Delta\@ifstar{\oldabs}{\oldabs*}{V}^6\@ifstar{\oldabs}{\oldabs*}{\pt{}}}$, where $\Delta$ is the maximal degree of $G$. Indeed, $S$ corresponds to the set of pairs of vertices, the cop can only move within his neighbourhood and the robber is allowed to choose any path of finite length. Hence, even if we restrict the paths the robber can choose to elementary paths (paths that do not visit the same vertex twice), the size of the set of possible robber actions, and therefore the size of $\pt{}$, is exponential in the size of the graph on which the game is played. However, as shown in the next proposition, $w_n$ can be computed in polynomial time in the size of the graph itself.\n\begin{proposition}\nComputing $w_n(i)$ in the Cop and Fast Defending Robber game requires at most \n$\bigO{\@ifstar{\oldabs}{\oldabs*}{V}^3\log\@ifstar{\oldabs}{\oldabs*}{V} + (n+1)\@ifstar{\oldabs}{\oldabs*}{V}^2\@ifstar{\oldabs}{\oldabs*}{E}}$\noperations and uses at most $O({\@ifstar{\oldabs}{\oldabs*}{V}^3})$ space, for any $n\in \mathbb{N}$.\n\end{proposition}\n\begin{proof}\nLet $\pth{r}{r'}$ be the set of paths beginning in $r$ and ending in $r'$. \nLet $(c,E_c)$ be a cop position. \nThe robber's transition function can be simplified by assuming $q_{E_c}(e) = 1$ if $e\notin E_c$. Then, $T_{\petitsrobbers}((c, r), \pi, (c,r'))= \prod_{e\in E_\pi}q_{E_c}(e)$ if the robber reaches $r'=\pi[*]$ without being caught. 
The previous recursion, when state $(c,r)$ is not final, can be simplified to:\n\begin{align*}\nw_n(c,r) &=\n\max_{c'\in N[c]}\n\min_{\substack{r'\in V\\\pi\in \pth{r}{r'}}}\n\left(\n\prod_{e\in E_\pi} q_{E_{c'}}(e)\, w_{n-1}(c',r')\n+1 - \prod_{e\in E_\pi}\nq_{E_{c'}}(e)\n\right)\n\\\n&=\max_{c'\in N[c]}\n\min_{\substack{r'\in V\\\pi\in \pth{r}{r'}}}\n\Big(\n(w_{n-1}(c',r') -1)\prod_{e\in E_\pi} q_{E_{c'}}(e)\n+1 \Big).\n\end{align*}\nIf $c'$ and $r'$ are fixed, we look for the path $\pi$ minimizing the expression in parentheses, hence maximizing $\prod_{e\in E_\pi}q_{E_{c'}}(e)$. This is the same path that maximizes $\sum_{e\in E_\pi}\log q_{E_{c'}}(e)$, because $\log$ is a monotone increasing function. Since $\log q_{E_{c'}}(e)\leqslant 0$ when $q_{E_{c'}}(e) \in [0,1]$, we can equivalently minimize $\sum_{e\in E_\pi} -\log q_{E_{c'}}(e)$, a sum of nonnegative edge weights, which is a shortest path problem. \n\nObserve that the survival probabilities $q_{E_{c'}}(e)$ depend only on the vertex $c'$ and the edge $e$. Thus, prior to evaluating $w_n(c,r)$, we can precompute $\@ifstar{\oldabs}{\oldabs*}{V}$ all-pairs shortest path computations (one for each possible cop position) by weighting each edge $e\in E$ with $-\log q_{E_{x}}(e)$ for each source $x\in V$. This is done in $\bigO{\@ifstar{\oldabs}{\oldabs*}{E}\@ifstar{\oldabs}{\oldabs*}{V}^2 + \@ifstar{\oldabs}{\oldabs*}{V}^3\log\@ifstar{\oldabs}{\oldabs*}{V}}$ operations, for example by using the algorithm of Fredman and Tarjan \cite{Fredman1987}. This takes $\bigO{\@ifstar{\oldabs}{\oldabs*}{V}^3}$ space (because a path does not have to be stored: it can be recomputed in $\bigO{\Delta\@ifstar{\oldabs}{\oldabs*}{V}}$ operations). Thus, for each $c'$ we store the values $\prod_{e\in E_\pi} q_{E_{c'}}(e)$ for the shortest path $\pi$ between $r$ and $r'$. 
Finding the next robber position $r'$ thus requires at most $\bigO{\@ifstar{\oldabs}{\oldabs*}{V}}$ operations.\n\nNow, assume $w_{n-1}(c',r')$ is computed for all $c'$, $r'$, in time $a_{n-1}$. We look for the vertex $c' \in N[c]$ maximizing\n\[\n\min_{\substack{r'\in V\\\pi\in \pth{r}{r'}}}\n\Big((w_{n-1}(c',r') -1)\prod_{e\in E_\pi} q_{E_{c'}}(e)\n+1 \Big).\n\] \nThe values $w_{n-1}(c',r') $ are already computed, as well as the $\prod_{e\in E_\pi} q_{E_{c'}}(e)$'s. Thus, at most $\bigO{\@ifstar{\oldabs}{\oldabs*}{N[c]}\@ifstar{\oldabs}{\oldabs*}{V}}$ operations are required to evaluate the expression $w_n(c,r)$ when $c$ and $r$ are fixed. To find all maxima, that is, for all $c\in V$ and $r\in V$, we need to make at most $\bigO{\@ifstar{\oldabs}{\oldabs*}{V}\sum_{c\in V} \@ifstar{\oldabs}{\oldabs*}{N[c]}}=\bigO{\@ifstar{\oldabs}{\oldabs*}{V}\@ifstar{\oldabs}{\oldabs*}{E}}$ operations. On turn $n$, we make a number of operations $a_n \in \n\bigO{\@ifstar{\oldabs}{\oldabs*}{V}\@ifstar{\oldabs}{\oldabs*}{E}} + a_{n-1}\n\subseteq\n\bigO{n\@ifstar{\oldabs}{\oldabs*}{V}\@ifstar{\oldabs}{\oldabs*}{E}}.\n$\nThe total complexity is thus: \n\[\n\bigO{\@ifstar{\oldabs}{\oldabs*}{E}\@ifstar{\oldabs}{\oldabs*}{V}^2 + \@ifstar{\oldabs}{\oldabs*}{V}^3\log\@ifstar{\oldabs}{\oldabs*}{V}} + a_n = \bigO{(n+1)\@ifstar{\oldabs}{\oldabs*}{E}\@ifstar{\oldabs}{\oldabs*}{V}^2 + \@ifstar{\oldabs}{\oldabs*}{V}^3\log\@ifstar{\oldabs}{\oldabs*}{V}}.\n\] \nThe bottlenecks for the spatial complexity are the shortest path algorithms, which require at most $\bigO{\@ifstar{\oldabs}{\oldabs*}{V}^3}$ space. For each $w_n$ we only need $w_{n-1}$, so we do not need to store any other $w_k$ for $k < n-1$.\n\end{proof}\n
\section{Introduction}\n\nCops and Robbers games have been studied as examples of discrete-time pursuit games on graphs since the publication of Quilliot's doctoral thesis \cite{Quilliot1978f} in 1978 and, independently, Nowakowski and Winkler's article \cite{Nowakowski1983} in 1983. Both works describe a turn-based game in which a lone cop pursues a robber on the vertices of a graph. The game evolves in discrete time and with perfect information. The cop wins if he eventually occupies the same vertex as the robber; otherwise, if the play continues indefinitely, the latter wins. A given graph is \textrm{copwin}{} if the cop has a winning strategy: for any possible move the robber makes, the cop has an answer that leads him to eventually catch the robber (in finite time). As there is no tie, it is always true that one player has a (deterministic) winning strategy.\n\nSince the first exposition of the game of Cop and Robber, many variants have emerged. Aigner and Fromme \cite{Aigner1984a} notably presented in 1984 the \emph{cop number}: the minimal number of cops required on a graph to capture a robber. Since then, more alternatives have been described, each modifying one or more game parameters, such as the speed of the players, the radius of capture of the cops, etc. We refer to Bonato and Nowakowski's book \cite{Bonato2011f} for a comprehensive description of these different formulations. The survey on guaranteed graph searching problems by Fomin and Thilikos \cite{Fomin2008} is also a great reference on the subject. In graph searching games, the objective is to capture a fugitive on a graph. The problems in which the object is always found are called guaranteed. \n\nIn 2017 Bonato and MacGillivray \cite{Bonatoa} presented a first generalization of Cops and Robbers{} games that encompasses the majority of the variants described previously. 
Indeed, all two-player, turn-based, discrete-time pursuit games of perfect information on graphs in which both players play optimally are contained in Bonato and MacGillivray's model. As such, this model encompasses all pursuit games deemed \emph{combinatorial} (we refer to Conway's book \emph{On Numbers and Games} \cite{Conway1976} for an introduction to the subject of combinatorial games). Those games include the turn-based, perfect-information games played on a discrete structure without any randomness. \n\nRecently, some researchers such as Pra\l{}at and Kehagias \cite{Kehagias2012}, Komarov and Winkler \cite{Komarov2013b} and Simard et al. \cite{Simard2015} described a game, called the Cop and Drunk Robber game, in which the robber walks in a random fashion: each of her movements is described by a uniform random walk on the vertices of the graph. In general, this strategy is suboptimal. Since this particular game cannot be described by Bonato and MacGillivray's model, it appears natural to seek to extend their framework to integrate games with random events.\n\nThere has also been a recent push towards more game-theoretic approaches to modeling Cops and Robbers{} games, notably by Konstantinidis, Kehagias and others (see for example \cite{Konstantinidis2017,KEHAGIAS2013100,Kehagias2018,KEHAGIAS201725,KONSTANTINIDIS201648}). Our paper can be considered more in line with this way of treating Cops and Robbers{} games than with more traditional approaches. \n\nThis paper thus presents a model of Cops and Robbers{} games that is more general than that of Bonato and MacGillivray. The main objective of this model is to incorporate games such as the Cop and Drunk Robber game. The probabilistic nature of this game leads us to define a framework different from the one of Bonato and MacGillivray.\n\n\nIn Cops and Robbers{} games, one is generally interested in the question of \emph{solving} a game. 
This question is universal to game theory, where one defines a \emph{solution concept} such as the \emph{Nash Equilibrium}. In Cops and Robbers{} games, oftentimes the cops' point of view is adopted and one seeks to determine whether it is feasible, and if so how, for them to capture the robbers. In stochastic Cops and Robbers{} games, one can generalize the question to a quantitative scale of success: what is the (best) probability for the cops to capture the robbers, and which strategy achieves it. One can also ask the dual question of what would be the minimal number of cops required in order to capture the robbers with some probability. In deterministic games, this graph parameter is known as the \emph{cop number}.\n\nOne can note that many solutions of Cops and Robbers{} games share the same structure, and this is reflected in the fact that they can be solved with a recursive expression. Indeed, Nowakowski and Winkler \cite{Nowakowski1983} in 1983 presented a preorder relation on vertices, writing $x\preceq_n y$ when the cop has a winning strategy in at most $n$ moves if positioned on vertex $y$, while the robber is on vertex $x$. An important aspect of this relation $\preceq_n$ is that it can be computed recursively and thus leads to a polynomial time algorithm to compute its values, as well as the strategy of the cop. This relation was extended some $20$ years later by Hahn and MacGillivray \cite{Hahn2006} \nin order to solve games with $k$ cops by letting players move on the graph's strong product.\n Clarke and MacGillivray \cite{Clarke2012} have also defined a characterization of $k$-cop-win graphs through a dismantling strategy and studied the algorithmic complexity of the problem. \nFor a fixed $k$, the problem can be solved in polynomial time of degree $2k+2$. 
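The quantitative scale of success mentioned above can be illustrated with a minimal sketch of a capture-probability recursion for a cop chasing a drunk robber performing a uniform random walk. The encoding and the capture convention assumed here (either player stepping onto the other's vertex ends the game) are our own illustration, not the paper's exact model.

```python
from itertools import product

# Hedged sketch: w[(x, y)] approximates the probability that a cop on y,
# moving first, captures a uniformly random ("drunk") robber on x within
# n_steps rounds.  Assumed convention: capture happens when either player
# steps onto the other's current vertex.

def drunk_capture(N, n_steps):
    """N[v] is the closed neighbourhood of vertex v (reflexive graph)."""
    V = list(N)
    w = {(x, y): 1.0 if x == y else 0.0 for x, y in product(V, V)}
    for _ in range(n_steps):
        new = {}
        for x, y in product(V, V):
            if x == y:
                new[(x, y)] = 1.0  # captured states are absorbing
                continue
            best = 0.0
            for y2 in N[y]:
                if y2 == x:          # the cop steps onto the robber
                    p = 1.0
                else:                # the robber moves uniformly at random
                    p = sum(1.0 if x2 == y2 else w[(x2, y2)]
                            for x2 in N[x]) / len(N[x])
                best = max(best, p)
            new[(x, y)] = best
        w = new
    return w
```

On the path $a$--$b$--$c$, a cop starting on $a$ against a drunk robber on $c$ captures her with probability $1/2$ within one round, and with probability $1$ within two.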
On a related note, Kinnersley \cite{Kinnersley2015} proved that it is \textrm{EXPTIME-complete}{} to determine whether the cop-number of a graph $G$ is less than some integer $k$ when both $G$ and $k$ are part of the input. This shows that Clarke and MacGillivray's result is essentially optimal.\n\nIn games with stochastic components, such order relations can be generalized by considering the probability of capture, as is done in a recent paper about the \emph{Optimal Search Path} (OSP) problem \cite{Simard2015}. A recursion $w_n(x,y)$ is defined: it represents the probability that a cop standing on vertex $y$ captures the robber, positioned on vertex $x$, in at most $n$ steps. This relation, defined on the Cop and Drunk Robber game \cite{Kehagias2012,Komarov2013b,Simard2015}, is analogous to Nowakowski and Winkler's $x\preceq_n y$ and is slightly more general, as it enables one to model the robber's random movement. \n One may wonder to what extent the relation $w_n$ can be extended while preserving its polynomial nature. Theorem~\ref{thm:copwinthm} and Proposition~\ref{prop:compabswn} give an answer to this question.\n\n\n\nThis paper is divided as follows. Section \ref{sec:absmodel} presents our model of Cops and Robbers{} games and the $w_n$ recursion, along with some complexity results, notably on $w_n$. Stationarity results on $w_n$ are also included. Since most Cops and Robbers{} games are played on graphs, another formulation of our model is presented on such a structure in Section \ref{sec:concrgames}. We conclude in Section \ref{sec:conclusion}.\n\section*{Acknowledgement}\n The authors acknowledge the careful reading of the reviewers, which has helped improve the paper's presentation. 
Josée Desharnais and François Laviolette acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC, grant numbers 239294 and 262067).\n\n\bibliographystyle{plain}\n\n\section{Introduction}\n\nA key property of Sobolev functions in Euclidean spaces is their absolute continuity on almost every line\nparallel to the coordinate axes. The restrictions to arbitrary lines need not even be bounded\nfor functions in Sobolev spaces $W^{1,s}$, $1\leqslant s\leqslant n$. However, for $s>n$\nthe restriction of a Sobolev function to \textit{any} line has finite $p$-variation with $p=s\/(s-n+1)$, see Proposition~\ref{morrey}. Here we refer to the generalized variation~\cite{MO,Wa,Yo}, which is defined as follows.\n\nFor distinct points $a,b\in \mathbb{R}^n$ we write $[a,b]= \{(1-t)a + tb \colon 0 \leqslant t \leqslant 1\}$ and call $[a,b]$ the line segment with the endpoints $a$ and $b$. Any partition $0=t_0 < t_1 < \dots < t_N=1$ yields points $x_j=(1-t_j)a+t_j b$ on $[a,b]$, and for a nondecreasing function $\phi\colon [0,\infty)\to[0,\infty)$ with $\phi(0)=0$ the $\phi$-variation of a mapping $f$ on $[a,b]$ is\n\[\n\var_{[a,b]}(f;\phi)=\sup\sum_{j=1}^{N}\phi\left(\abs{f(x_j)-f(x_{j-1})}\right),\n\]\nthe supremum being taken over all such partitions. When $\phi(t)=t^p$ we write $\var_{[a,b]}(f;p)$ and speak of the $p$-variation of $f$.\n\nRecall that a sense-preserving homeomorphism $f\in W^{1,n}_{\loc}$ is $K$-quasiconformal if\n\begin{equation}\label{Kav22}\n\abs{Df(x)}^n \leqslant K J_f(x) \quad \textnormal{a.e.}\n\end{equation}\nQuasiconformal mappings belong to $W^{1,s}_{\loc}$ with some $s>n$ and therefore\nhave finite $p$-variation on lines for some $p<\infty$. For example, the radial stretch mapping $f(x)=\abs{x}^{\alpha-1}x$, $\alpha>0$, is $\delta$-monotone for some $\delta= \delta(\alpha)$.\nThis mapping is locally H\"older continuous with exponent $\min\{\alpha,1\}$, which can be arbitrarily close to $0$. This shows that $\delta$-monotone mappings are no more regular on the H\"older and Sobolev scales than general quasiconformal mappings. However, they have bounded variation on $C^1$-smooth curves, see~\cite[Thm. 3.11.7]{AIMbook} and~\cite[Thm. 1.10]{IKO3}. 
In particular,\n\begin{equation}\label{property}\n\var_{[a,b]}(f;1)<\infty \quad \mbox{if } a,b \in \mathbb{R}^n.\n\end{equation}\n\nWhen $n=2$, we often identify $\mathbb{R}^n$ with $\mathbb{C}$ and use the complex derivatives $f_z$ and $f_{\bar z}$.\nThen the inequality~\eqref{Kav22} reads as\n\begin{equation}\label{Kav23}\n\left|f_{\bar z} \right| \leqslant k \left|f_{ z} \right| \quad \textnormal{a.e.,} \quad \textnormal{where } \ k= \frac{K-1}{K+1}.\n\end{equation}\n\nA $\delta$-monotone mapping $f \colon \mathbb{C} \to \mathbb{C}$ satisfies the stronger, \textit{reduced} distortion inequality\n\begin{equation}\label{BeIn}\n\abs{f_{\bar z}} \leqslant k \re f_z \quad \textnormal{a.e. in } \mathbb{C},\n\end{equation}\nfor some constant $0 < k < 1$; mappings satisfying~\eqref{BeIn} are called \textit{reduced} quasiconformal.\n\n\begin{theorem}\label{fvar} Let $f\colon \mathbb{C}\to \mathbb{C}$ be a reduced quasiconformal mapping.\nThen for any $q>1$ the mapping $f$ has finite $\phi$-variation on lines with\n\begin{equation}\label{fvar1}\n\phi(t)=\frac{t}{(\log (e+1\/t))^q} \ \mbox{ if } t>0 \quad \mbox{ and }\quad \phi(0)=0.\n\end{equation}\n\end{theorem}\n\nThe conclusion of Theorem~\ref{fvar} is false for $0\leqslant q<1\/2$, see Remark~\ref{negphi}.\nThe gap between exponents $1\/2$ and $1$ remains open.\n\n\begin{question}\label{questionq} What is the smallest value of $q$ for which the conclusion of Theorem~\ref{fvar} holds?\n\end{question}\n\nSince $\delta$-monotone mappings exist in any dimension $n\geqslant 2$, one may ask whether it is possible to extend the\ndefinition of reduced quasiconformal mappings to higher dimensions. 
This question is addressed in Section~\ref{higher}, where we use quaternions\nto define reduced quasiconformal mappings in four dimensions, and extend Theorem~\ref{fvar} to them.\n\n\begin{question}\label{questionred} Is there a natural analogue of reduced\nquasiconformal mappings in dimensions other than $2$ and $4$?\n\end{question}\n\n\n\section{Preliminaries}\n\n\n\nIn this section we first estimate the $p$-variation of Sobolev functions on lines. Although this result is probably known, we give a proof for the sake of completeness. Later in the section we define quasisymmetric and monotone mappings and introduce some relevant notation.\nIn this paper $\Omega$ stands for a domain in $\mathbb{R}^n$.\n\n\begin{proposition}\label{morrey}\nLet $u\in W^{1,s}(\Omega)$, $s>n$. Then the restriction of $u$ to any closed line segment $I \subset \Omega$ has finite $p$-variation with $p=s\/(s-n+1)$.\n\end{proposition}\n\n\nThe Morrey-Sobolev embedding theorem states that $ W^{1,s}(\Omega) \subset C^\alpha_{\loc} (\Omega)$ with $\alpha= 1- n\/s$. Clearly, any function $u \in C^\alpha_{\loc} (\Omega)$ has finite $p$-variation on lines with $p=1\/\alpha= s\/(s-n)$. However, Proposition~\ref{morrey} gives a better value of $p$. Its proof requires the following lemma.\n\n\begin{lemma}\label{union}\nLet $I$ be a line segment partitioned into smaller segments $I_m$, $m=1,2, \dots , M$. For any mapping $f \colon I \to \mathbb{R}^n$ we have\n\begin{equation}\n\var_{I} (f; \phi) \leqslant \sum_{m=1}^M \var_{I_m} (f;\phi)+ (M-1)\, \phi\big(\osc_I f\big) .\n\end{equation}\n\end{lemma}\n\begin{proof}\nFix a partition $(a_j)_{j=0}^{N}$ of $I$. 
We divide the set of indices as follows.\n\\[E= \\{j=1, \\dots , N \\colon [a_{j-1},a_j] \\subset I_m \\text{ for some } m \\} , \\qquad F= \\{1, \\dots , N\\} \\setminus E .\\]\nThen\n\\[\\sum_{j\\in E} \\phi (\\abs{f(a_j)-f(a_{j-1})}) \\leqslant \\sum_{m=1}^M \\var_{I_m} (f;\\varphi)\\]\nand\n\\[\\sum_{j\\in F} \\phi (\\abs{f(a_j)-f(a_{j-1})}) \\leqslant (M-1) \\osc_I f . \\]\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{morrey}.]\nDividing $I$ into subintervals and using Lemma~\\ref{union} we may reduce our task to the case\n\\[\\diam I < \\dist (I, \\partial \\Omega) .\\]\nLet $(a_j)_{j=0}^N$ be a partition of $I$. For $j=1, \\dots , N$ let $\\overline{B}_j$ be the closed ball with segment $[a_{j-1}, a_j]$ as a diameter. Morrey's inequality~\\cite[p.~143]{EG} yields\n\\[\\osc_{\\overline{B}_j} u \\leqslant C\\, \\abs{a_j - a_{j-1}}^{1-n\/s} \\left(\\int_{B_j} \\abs{\\nabla u(x)}^s \\, d x \\right)^{1\/s} . \\]\nRaising to the power $p$ and noticing that $(1-n\/s)p= 1-p\/s$ we arrive at\n\\[\\big( \\osc_{\\overline{B}_j} u \\big)^p \\leqslant C\\, \\abs{a_j - a_{j-1}}^{1-p\/s} \\left(\\int_{B_j} \\abs{\\nabla u(x)}^s \\, d x \\right)^{p\/s} . \\]\nSumming over $j$ and applying H\\\"older's inequality we obtain\n \\begin{equation}\\label{E}\n\\sum_{j=1}^N \\big(\\osc_{\\overline{B}_j} u\\big)^p \\leqslant C \\left( \\sum_{j=1}^N \\abs{a_j - a_{j-1}} \\right)^{1-p\/s} \\left( \\sum_{j=1}^N \\int_{B_j} \\abs{\\nabla u(x)}^s \\, d x \\right)^{p\/s}.\n \\end{equation}\nTherefore\n\\\n \\var_I(u; p) \\leqslant C (\\diam I)^{1-p\/s} \\left( \\int_{\\Omega} \\abs{\\nabla u(x)}^s \\, d x \\right)^{p\/s}\n \\\nas desired.\n\\end{proof}\n\n\\begin{definition}\\label{qs}\nLet $\\eta \\colon [0, \\infty) \\to [0, \\infty)$ be a homeomorphism. 
An injective mapping $f \\colon \\mathbb{R}^n \\to \\mathbb{R}^n$ is $\\eta$-quasisymmetric if\n\\[\\frac{\\abs{f(c)-f(a)}}{\\abs{f(b)-f(a)}} \\leqslant \\eta \\left(\\frac{\\abs{c-a}}{\\abs{b-a}} \\right)\\]\nfor any distinct points $a,b,c \\in \\mathbb{R}^n$.\nThe function $\\eta$ is called a modulus of quasisymmetry of $f$.\n\\end{definition}\nIt is well-known that a mapping $f \\colon \\mathbb{R}^n \\to \\mathbb{R}^n$ is quasiconformal if and only if it is sense-preserving and quasisymmetric~\\cite{Hebook}.\n\nGiven a mapping $f \\colon \\Omega \\to \\mathbb{R}^n$, where $\\Omega \\subset\\mathbb{R}^n$, we define\nthe modulus of monotonicity $\\Delta_f \\colon \\Omega \\times \\Omega \\to \\mathbb{R}$ by the rule\n\\begin{equation}\\label{Deltadef}\n\\Delta_f(a,b)= \\begin{cases} \\left\\langle f(a)-f(b), \\frac{a-b}{|a-b|} \\right\\rangle & \\quad \\textnormal{if } a \\ne b\\\\\n0 & \\quad \\textnormal{if } a = b \\end{cases}\n\\end{equation}\nClearly $\\abs{\\Delta_f(a,b)}\\leqslant\\abs{f(a)-f(b)}$. 
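To make the definition~\eqref{Deltadef} concrete, here is a small numerical illustration of ours (not part of the paper): it evaluates $\Delta_f(a,b)$ for a hypothetical planar shear map and checks the elementary bound $\abs{\Delta_f(a,b)}\leqslant\abs{f(a)-f(b)}$ noted above.

```python
import math

def delta_f(f, a, b):
    """Modulus of monotonicity: <f(a)-f(b), (a-b)/|a-b|>, and 0 when a = b."""
    if a == b:
        return 0.0
    d = [p - q for p, q in zip(a, b)]            # a - b
    w = [p - q for p, q in zip(f(a), f(b))]      # f(a) - f(b)
    norm_d = math.sqrt(sum(x * x for x in d))
    return sum(wi * di for wi, di in zip(w, d)) / norm_d

# A hypothetical shear map, used only for illustration; it is strictly monotone.
def shear(p):
    x, y = p
    return (x + 0.2 * y, y)

a, b = (1.0, 2.0), (-0.5, 0.3)
dab = delta_f(shear, a, b)
fd = math.sqrt(sum((p - q) ** 2 for p, q in zip(shear(a), shear(b))))
assert abs(dab) <= fd + 1e-12   # |Delta_f(a,b)| <= |f(a)-f(b)|
assert dab > 0                  # monotonicity: Delta_f(a,b) >= 0
```

For a $\delta$-monotone map the stronger bound $\Delta_f(a,b)\geqslant \delta\abs{f(a)-f(b)}$ would hold; for this near-identity shear the ratio of the two sides is close to $1$.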
By definition, $f$ is a monotone mapping if $\\Delta_f(a,b)\\geqslant 0$\nfor all $a$ and $b$, and is strictly monotone if $\\Delta_f(a,b)>0$ unless $a=b$.\nAny reduced quasiconformal mapping is monotone by (1.9) in~\\cite{IKO3}.\nAlso, $f$ is $\\delta$-monotone\nif and only if $\\Delta_f(a,b)\\geqslant\\delta\\abs{f(a)-f(b)}$ for all $a$ and $b$.\nWhen $n=2$, the modulus of monotonicity can be expressed in complex notation:\n\\[\\Delta_f(a,b)=\\re\\left(\\frac{f(a)-f(b)}{a-b}\\right)\\abs{a-b}.\\]\n\n\n\\section{Generalized variation on lines: Proof of Theorem~\\ref{fvar}}\n\nWe will obtain Theorem~\\ref{fvar} as a consequence of the following result.\n\n\\begin{theorem}\\label{fvgen} Let $f\\colon \\Omega\\to \\mathbb{R}^n$ be a mapping and suppose that there\nis a homeomorphism $\\eta\\colon[0,\\infty) \\to[0,\\infty)$ such that\n\\begin{equation}\\label{fvgen1}\n\\abs{f(c)-f(a)}\\leqslant \\frac{\\abs{c-a}}{\\abs{b-a}}\\abs{f(b)-f(a)} +\n\\eta\\left(\\frac{\\abs{c-a}}{\\abs{b-a}}\\right)\\Delta_f(a,b)\n\\end{equation}\nfor any distinct points $a,b,c\\in\\Omega$. Then for any $q>1$ the mapping $f$ has finite $\\phi$-variation\non lines with $\\phi$ as in~\\eqref{fvar1}.\n\\end{theorem}\n\nBefore proving Theorem~\\ref{fvgen} we derive Theorem~\\ref{fvar} from it.\n\n\\begin{proof}[Proof of Theorem~\\ref{fvar}] Let $f\\colon\\mathbb{C} \\to\\mathbb{C}$ be a reduced quasiconformal\nmapping. We may assume that $f$ is nonlinear. For any $\\lambda \\in \\mathbb{R} $ the mapping $f^{\\lambda}(z)=f(z)+i\\lambda z$ also satisfies the reduced distortion inequality~\\eqref{BeIn} with the same constant $k$ as $f$. By~\\cite[Cor. 1.5]{IKO} $f^\\lambda$ is a homeomorphism. Therefore, $f^\\lambda$ is $K$-quasiconformal with $K$ independent of $\\lambda$. 
Since quasiconformality implies quasisymmetry in $\\mathbb{C}$~\\cite[Thm 11.14]{Hebook}, there is a homeomorphism $\\eta\\colon[0,\\infty) \\to[0,\\infty)$ such that $f^\\lambda$ is $\\eta$-quasisymmetric in $\\mathbb{C}$ for all $\\lambda \\in \\mathbb{R}$. Given distinct points $a,b,c\\in \\mathbb{C}$, let $\\lambda=-\\im\\frac{f(b)-f(a)}{b-a}$, so that $\\abs{f^{\\lambda}(b)-f^{\\lambda}(a)}=\\Delta_f(a,b)$. Since $f^{\\lambda} $ is $\\eta$-quasisymmetric, we have\n\\[\n\\abs{f^{\\lambda}(c)-f^{\\lambda}(a)}\\leqslant \\eta\\left(\\frac{\\abs{c-a}}{\\abs{b-a}}\\right)\\abs{f^{\\lambda}(b)-f^{\\lambda}(a)},\n\\]\nfrom which~\\eqref{fvgen1} follows by means of the triangle inequality. It remains to apply Theorem~\\ref{fvgen} to $f$.\n\\end{proof}\n\nOur proof of Theorem~\\ref{fvgen} is based on two lemmas.\n\n\n\n\\begin{lemma}\\label{vargrowth} Let $f\\colon \\Omega\\to\\mathbb{R}^n$ be as in Theorem~\\ref{fvgen}. Given\ndistinct points $a,b\\in \\Omega$ and a partition $(a_j)_{j=0}^{N}$ of the segment $[a,b]\\subset\\Omega$, let $d_1\\geqslant \\dots \\geqslant d_N$ be the numbers $\\abs{f(a_j)-f(a_{j-1})}$ arranged in nonincreasing order. Then the partial sums $s_j=d_1+\\dots+d_j$ satisfy\n\\begin{equation}\\label{vargrowth1}\ns_j\\leqslant C\\log(2j+2)\\abs{f(a)-f(b)}, \\quad j=1,\\dots,N,\n\\end{equation}\nwhere $C$ depends only on $\\eta$.\n\\end{lemma}\n\n\\begin{lemma}\\label{series} Let $d_1\\geqslant d_2\\geqslant \\dots \\geqslant d_N\\geqslant 0$ and suppose that the partial sums $s_j=d_1+\\dots+d_j$ satisfy $s_j\\leqslant C\\log(j+1)$ for $j=1,\\dots,N$. Let $q>1$ and the function $\\phi$ be defined by the formula~\\eqref{fvar1}. Then\n\\begin{equation}\\label{series1}\n\\sum_{j=1}^N \\phi(d_j)\\leqslant C'\n\\end{equation}\nwhere $C'$ depends only on $C$ and $q$.\n\\end{lemma}\n\n\\begin{proof} Since $d_j\\leqslant C\\log(j+1)\/j\\leqslant C\/\\sqrt{j}$, it follows\nthat $\\log(e+1\/d_j)\\geqslant C_1\\log(j+1)$, where $C_1$ depends only on $C$. 
Therefore,\n\\begin{equation}\\label{series2}\n\\sum_{j=1}^N \\phi(d_j)\\leqslant C_1^{-q}\\sum_{j=1}^N \\frac{d_j}{(\\log(j+1))^{q}}.\n\\end{equation}\nNext we use summation by parts, replacing $d_j$ with $s_j-s_{j-1}$, where $s_0=0$ by convention:\n\\begin{equation}\\label{series3}\n\\begin{split}\n&\\sum_{j=1}^N \\frac{d_j}{(\\log(j+1))^{q}} \\\\\n&=\\frac{s_N}{(\\log(N+1))^{q}}+\n\\sum_{j=1}^{N-1}s_j\\left(\\frac{1}{(\\log(j+1))^{q}}-\\frac{1}{(\\log(j+2))^{q}}\\right).\n\\end{split}\n\\end{equation}\nThe first term on the right is bounded by $C\/(\\log (N+1))^{q-1}$.\nSince $s_j\\leqslant C\\log(j+1)$ and\n\\[\\frac{1}{(\\log(j+1))^{q}}-\\frac{1}{(\\log(j+2))^{q}}\\leqslant\n\\frac{q}{(j+1)(\\log(j+1))^{1+q}},\\]\nit follows that\n\\begin{equation}\\label{series4}\n\\begin{split}\n&\\sum_{j=1}^{N-1}s_j\\left(\\frac{1}{(\\log(j+1))^{q}}-\\frac{1}{(\\log(j+2))^{q}}\\right) \\\\\n&\\leqslant Cq\\sum_{j=1}^{\\infty}\\frac{1}{(j+1)(\\log(j+1))^{q}}=:C_2\n\\end{split}\n\\end{equation}\nwhere $C_2$ depends only on $C$ and $q$. Combining~\\eqref{series2}, \\eqref{series3},\nand \\eqref{series4}, we obtain~\\eqref{series1}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{fvgen}]\nLet $a,b\\in\\Omega$ be two distinct points.\nGiven a partition $(a_j)_{j=0}^N$ of $[a,b]$, let $d_1 \\geqslant \\dots \\geqslant d_N$ be the numbers $\\abs{f(a_j)-f(a_{j-1})}$ arranged in the nonincreasing order. 
By Lemma~\\ref{vargrowth} the partial sums $s_j=d_1+ \\dots +d_j$ are bounded by $C\\log (2j+2)\\abs{f(a)-f(b)}$, where $C$ is the constant in~\\eqref{vargrowth1}.\nApplying Lemma~\\ref{series} we arrive at the conclusion of the theorem.\n\\end{proof}\n\n\n\n\n\\section{Failure of bounded variation: Proof of Theorem~\\ref{rBexample}}\n\n\\begin{proof}\nLet $Q\\subset \\mathbb{C}$ be the open square $\\{x+iy\\colon 2< x< 6, \\abs{y}< 2\\}$.\nThe closure of $Q$ contains two smaller closed squares $Q_1=\\{x+iy\\colon \\abs{x-3}+\\abs{y}\\leqslant 1\\}$ and\n$Q_2=\\{x+iy\\colon \\abs{x-5}+\\abs{y}\\leqslant 1\\}$.\n\n\n\\begin{figure}[!h]\n\\begin{center}\n\\psfrag{q}{\\small {${Q}$}}\n\\psfrag{q1}{\\small ${Q_1}$}\n\\psfrag{q2}{\\small ${Q_2}$}\n\\psfrag{2}{\\small ${2}$}\n\\psfrag{4}{\\small ${4}$}\n\\psfrag{6}{\\small ${6}$}\n\n\\includegraphics*[height=2.0in]{pic.eps}\n\\caption{}\n\\end{center}\n\\end{figure}\n\n\n\n\n\nLet $g\\colon \\overline{Q}\\to \\mathbb{C}$ be a Lipschitz function such that\n\\[\ng(z)=\\begin{cases} i(z-2),\\quad &z\\in Q_1; \\\\\ni(6-z), \\quad &z\\in Q_2; \\\\\n0,\\quad &z\\in \\partial Q.\n\\end{cases}\n\\]\nExtend $g$ to the set $A=\\bigcup_{k\\in\\mathbb{Z}}(\\overline{Q}+8k)$ so that $g(z+8)=g(z)$. Finally, set $g(z)=0$ for $z\\notin A$.\n\nLet $L$ be the Lipschitz constant of $g$. We shall prove that for $0<\\epsilon<1\/(2L)$ the mapping\n\\begin{equation}\\label{rBex1}\nf(z)=z+\\epsilon\\sum_{m=0}^{\\infty}4^{-m}g(4^m z),\\quad z\\in\\mathbb{C},\n\\end{equation}\nsatisfies\n\\begin{equation}\\label{rBexdiff}\n \\left| f_{\\bar z} \\right| \\leqslant k \\re f_z \\quad \\textnormal{a.e. 
} \\quad \\textnormal{ with } \\quad k=\\frac{\\epsilon L}{1-\\epsilon L}.\n \\end{equation}\nFirst of all, the series in~\\eqref{rBex1} converges uniformly because $g$ is bounded.\nLet\n\\[B=\\bigcup_{\\ell \\in\\mathbb{Z}} \\big\\{ [Q\\setminus(Q_1\\cup Q_2)]+8\\ell \\big\\} \\]\nand note that\n\\begin{equation}\\label{Bprop2}\nB\\subset \\bigcup_{j\\in \\mathbb{Z}}\\{z\\colon \\abs{\\re z-2j}<\\abs{\\im z}\\}\n\\end{equation}\nand\n\\begin{equation}\\label{Bprop3}\nB\\cap \\bigcup_{j\\in \\mathbb{Z}}\\{z\\colon \\abs{\\re z-8j}<\\abs{\\im z}\\}=\\varnothing.\n\\end{equation}\nWe claim that for any $z\\in \\mathbb{C}$ there exists at most one integer $m\\geqslant 0$ such that $4^m z\\in B$.\nIndeed, let $m_0$ be the smallest such integer. Replacing $z$ with $4^{m_0}z$, we may assume that $m_0=0$, i.e., $z\\in B$.\nAccording to~\\eqref{Bprop2}, there exists $j\\in\\mathbb{Z}$ such that $\\abs{\\re z-2j}<\\abs{\\im z}$.\nFor any $m\\geqslant 1$ the number $\\zeta=4^mz$ satisfies\n$\\abs{\\re \\zeta-8\\cdot 4^{m-1}j}<\\abs{\\im \\zeta}$, which implies $\\zeta\\notin B$ by virtue of~\\eqref{Bprop3}.\nThis proves the claim.\n\nSince both $\\re g_z$ and $g_{\\bar z}$ vanish a.e. outside of $B$, it follows that\n\\[\n1-\\epsilon L \\leqslant \\re f_z (z) \\leqslant 1+ \\epsilon L ,\\quad \\abs{f_{\\bar z}(z)} \\leqslant \\epsilon L \\quad \\mbox{a.e. in } \\mathbb{C}.\n\\]\nThis proves~\\eqref{rBexdiff}. Since $g(z)=0$ when $\\abs{\\im z}>2$, it follows that\n$|\\im f_z(z)|\\leqslant \\epsilon \\, m L$ when $\\abs{\\im z}\\geqslant 2\\cdot 4^{-m}$. This implies\n\\begin{equation}\\label{Df}\n\\abs{Df(z)} \\leqslant C \\log \\left(e+ 1\/\\abs{\\im z }\\right)\n\\end{equation}\nfor some constant $C$. Hence $f\\in W^{1,p}_{\\textrm{loc}}(\\mathbb{C};\\mathbb{C})$\nfor all $p<\\infty$. Therefore, $f$ is quasiregular. By~\\cite[Cor. 
1.5]{IKO} it is quasiconformal.\nIt is clear that $\\re f(x)=x$ for all $x\\in \\mathbb{R}$.\n\nIt remains to prove that the function\n\\[h(x):=\\epsilon^{-1}\\im f(x)=-i \\sum_{m=0}^{\\infty}\\frac{1}{4^{m}}g(4^mx),\\quad x\\in\\mathbb{R}\\]\nhas infinite variation on any nontrivial interval $[a,b]\\subset\\mathbb{R}$.\nDue to the self-similar structure of $h$ it suffices to consider the interval $[0,8]$.\nWe will show that the sum\n\\[\nV_N:=\\sum_{j=1}^{4^N} \\left|h\\left(\\frac{8j}{4^{N}}\\right)-h\\left(\\frac{8j-8}{4^{N}}\\right)\\right|\n\\]\nsatisfies\n\\begin{equation}\\label{nonr1}\nV_N\\geqslant c\\sqrt{N}\n\\end{equation}\nwith an absolute constant $c>0$. Let $x_j=(8j)4^{-N}$, $j=0,\\dots, 4^N$.\nWhen $m\\geqslant N$, we have $g(4^m x_j)=0$ for all $j$.\nTherefore, in the definition of $V_N$ we can replace $h$ with the partial sum\n\\[h_N(x)=-i \\sum_{m=0}^{N-1}\\frac{1}{4^{m}}g(4^mx).\\]\nSince $h_N$ is affine on each interval $[x_{j-1},x_j]$, it follows that\n$V_N=\\displaystyle \\int_0^8 \\abs{h_N'(x)}\\,dx$. We claim that for a.e. $x\\in\\mathbb{R}$\n\\begin{equation}\\label{derg}\n\\frac{d}{dx} \\left(-i g(x)\\right) = \\frac{1}{2} (s_{0}(x\/8)-s_{1}(x\/8)),\n\\end{equation}\nwhere $s_m$ is the $m$th Rademacher function, defined by \\[s_m(x)=\\sign \\sin(2^{m+1}\\pi x).\\]\nSince both sides of~\\eqref{derg} are periodic functions with period $8$, it suffices to check that equality holds a.e. 
on the interval $(0,8)$.\nFrom the definitions of $g$ and $s_m$ one can see that both sides of~\\eqref{derg}\nagree with $\\chi_{[2,4]}-\\chi_{[4,6]}$ when $0<x<8$, which proves~\\eqref{derg}. Consequently, for a.e. $x\\in\\mathbb{R}$,\n\\[h_N'(x)=\\frac{1}{2}\\sum_{m=0}^{N-1}\\big(s_{2m}(x\/8)-s_{2m+1}(x\/8)\\big),\\]\nbecause $s_0(4^m x\/8)=s_{2m}(x\/8)$ and $s_1(4^m x\/8)=s_{2m+1}(x\/8)$. Thus, up to the factor $1\/2$, the function $h_N'$ is a sum of $2N$ Rademacher functions, and Khinchin's inequality yields\n\\[V_N=\\int_0^8 \\abs{h_N'(x)}\\,dx\\geqslant c\\sqrt{N}\\]\nwith an absolute constant $c>0$.\nThis completes the proof of~\\eqref{nonr1}.\n\\end{proof}\n\n\\begin{remark}\\label{negphi} The mapping $f$ constructed in the proof of Theorem~\\ref{rBexample}\ndoes not have finite $\\phi$-variation on lines with $\\phi$ as in~\\eqref{fvar1} for $0\\leqslant q<1\/2$.\n\\end{remark}\n\\begin{proof} Using Jensen's inequality and the estimate~\\eqref{nonr1}, we obtain\n\\[\n\\begin{split}\n\\sum_{j=1}^{4^N} \\phi(\\abs{h(x_j)-h(x_{j-1})})&\\geqslant 4^N\\phi(V_N\/4^N) \\\\\n&\\geqslant \\frac{c\\sqrt{N}}{\\big(\\log(e+4^N\/(c\\sqrt{N}))\\big)^{q}}\\to\\infty\n\\end{split}\n\\]\nas $N\\to\\infty$.\n\\end{proof}\n\n\\begin{proof}[Proof of Corollary~\\ref{countlines}] We may assume that the lines $L_j$ are parallel to the real axis; that is, $L_j= \\{z \\in \\mathbb{C} \\colon \\im z=b_j\\}$ where $b_j$ are distinct real numbers. For $m=1,2,\\dots$ let $\\epsilon_m= \\min_{1 \\leqslant j < \\ell \\leqslant m} \\abs{b_j- b_\\ell}$ and choose $c_m > 0$ so that\n\\begin{equation}\\label{cm1}\nc_m \\abs{b_m} < 2^{-m}\n\\end{equation}\nand\n\\begin{equation}\\label{cm2}\nc_m \\log \\left(e + 1\/\\epsilon_m\\right) < 2^{-m}.\n\\end{equation}\nWe define\n\\begin{equation}\\label{mapF}\nF(z)= \\sum_{m=1}^\\infty c_m f(z-ib_m)\n\\end{equation}\nwhere $f$ is the mapping in~\\eqref{rBex1}. Note that $\\abs{f(z)} \\leqslant \\abs{z}+M$ for some constant $M$. The sum in~\\eqref{mapF} converges locally uniformly because by~\\eqref{cm1}--\\eqref{cm2}\n\\[\n\\abs{ c_m f(z-ib_m)} \\leqslant c_m \\left(\\abs{z}+ \\abs{b_m}+M\\right)\\leqslant 2^{-m} \\left( \\abs{z} +1 +M\\right) .\n\\]\nTherefore $F$ is quasiconformal~\\cite[II 5.3]{LV}. We claim that for every $j=1,2,\\dots$ the sum\n\\[R_j(z)=\\sum_{m\\ne j} c_m f(z-ib_m)\\]\nis Lipschitz on the line $L_j$. 
By~\\eqref{Df} the restriction of $f(z-ib_m)$ to $L_j$ is Lipschitz with a constant $C\\log \\left( e+ 1\/\\abs{b_m-b_j} \\right)$. We estimate the Lipschitz constant of $R_j$ by\n\\[ \n\\sum_{m \\ne j} c_m \\log \\left( e+ 1\/\\abs{b_m-b_j} \\right) \\leqslant \\sum_{m<j} c_m \\log \\left( e+ 1\/\\abs{b_m-b_j} \\right) + \\sum_{m>j} c_m \\log \\left( e+ 1\/ \\epsilon_m \\right) .\n\\]\nThe first sum on the right has finitely many terms, and the second sum converges by~\\eqref{cm2}. \n Since $F(z)=c_j f(z-ib_j)+R_j(z)$, where $R_j$ is Lipschitz on $L_j$ and $f$ has infinite variation on every nontrivial subinterval of the real axis, it follows that $F(L_j)$ is not locally rectifiable at any of its points.\n\\end{proof}\n\n\n\n\n\n\\section{Reduced quasiconformal mappings in four dimensions}\\label{higher}\n\nOur first goal in this section is to reformulate the definition of reduced quasiconformal mappings (Definition~\\ref{redqcdef})\nin terms of differential matrices $Df(x)\\in \\mathbb{M}_n$ rather than complex derivatives. This is done in Proposition~\\ref{realred} below. We write $\\mathbb{M}_n$ for the set of all $n\\times n$ matrices with real entries. Also, for $\\delta\\in [-1,1]$ we\nlet\n\\[\\mathbb{M}_n(\\delta)=\\{A\\in\\mathbb{M}_n\\colon \\inn{Av,v}\\geqslant \\delta \\abs{Av}\\abs{v} \\text{ for all }\nv\\in\\mathbb{R}^n\\}.\\]\n\n\\begin{proposition}\\label{deltaqc} For $\\delta\\in (0,1)$ let $H(\\delta)= \\frac{1+ \\sqrt{1-\\delta^2}}{1- \\sqrt{1-\\delta^2}}$. Then\n$\\norm{A}\\norm{A^{-1}}\\leqslant H(\\delta)$ for all nonzero matrices $A\\in \\mathbb{M}_n(\\delta)$.\n\\end{proposition}\n\\begin{proof}\nFor $n=2$ this proposition was proved in~\\cite[p.84]{AIMbook}. If $n \\geqslant 3$, let $v$ and $w$ be distinct unit vectors such that $\\abs{Av}= \\norm{A}$ and $\\abs{Aw}= \\norm{A^{-1}}^{-1}$. Applying the two-dimensional case to the subspace spanned by $v$ and $w$, we arrive at the desired conclusion.\n\\end{proof}\n\nA matrix $A\\in \\mathbb{M}_2$ determines a linear mapping $x\\mapsto Ax$ of the plane $\\mathbb{R}^2$. 
The same linear mapping\ncan be written as $z\\mapsto \\alpha^+ z+\\alpha^-\\bar z$ for some $\\alpha^+,\\alpha^- \\in\\mathbb{C}$. The numbers $\\alpha^+$ and $\\alpha^-$ can be thought of as the conformal and anticonformal parts of $A$ (cf.~\\cite{FS}).\n\n\\begin{proposition}\\label{compdelta}~\\cite[Thm. 3.11.6]{AIMbook} A matrix $A$ belongs to\n$\\mathbb{M}_2(\\delta)$ if and only if $\\abs{\\alpha^-}+\\delta\\abs{\\im \\alpha^+}\\leqslant \\sqrt{1-\\delta^2}\\re \\alpha^+$.\n\\end{proposition}\n\nA complex number $a+ib\\in\\mathbb{C}$ can be identified with the $2\\times 2$ matrix\n$Z=\\left(\\begin{smallmatrix}a & -b \\\\ b & a\\end{smallmatrix}\\right)$.\nThus we may consider $\\mathbb{C}$ as a linear subspace of $\\mathbb{M}_2$.\nWithin this subspace each matrix decomposes into real and imaginary parts:\n$\\re Z = \\left(\\begin{smallmatrix}a & 0 \\\\ 0 & a\\end{smallmatrix}\\right)$ and\n$\\im Z = \\left(\\begin{smallmatrix}0 & -b \\\\ b & 0\\end{smallmatrix}\\right)$.\nGiven $A\\in\\mathbb M_2$, let $\\mathbb{C}(A)$ be the orthogonal projection of $A$ onto the subspace $\\mathbb{C}$.\n\n\n\\begin{proposition}\\label{realred} A mapping $f\\in W_{\\loc}^{1,2}(\\mathbb{R}^2;\\mathbb{R}^2)$ is reduced\nquasiconformal in the sense of Definition~\\ref{redqcdef} if and only if there exists $\\delta>0$ such that for a.e. $x\\in\\mathbb{R}^2$ the derivative $A=Df(x)$ satisfies $A-\\im \\mathbb{C}(A)\\in \\mathbb M_2(\\delta)$.\n\\end{proposition}\n\\begin{proof} Writing the matrix $A=Df(x)$ in conformal-anticonformal coordinates as $(\\alpha^+,\\alpha^-)$, we observe\nthat $A-\\im \\mathbb{C}(A)$ corresponds to $(\\re \\alpha^+,\\alpha^-)$. 
According to Proposition~\\ref{compdelta}, the condition\n$A-\\im \\mathbb{C}(A)\\in \\mathbb M_2(\\delta)$ is equivalent to $\\abs{\\alpha^-}\\leqslant \\sqrt{1-\\delta^2}\\re \\alpha^+$.\nThe latter inequality is the same as~\\eqref{BeIn} with $k=\\sqrt{1-\\delta^2}$.\n\\end{proof}\n\nA quaternion $\\alpha+\\beta\\mathbf{i}+\\gamma\\mathbf{j}+\\zeta\\mathbf{k}$ can be identified with a $4\\times 4$ real matrix\n\\begin{equation}\\label{quat1}\nQ=\\begin{pmatrix} \\alpha&-\\beta&-\\gamma&-\\zeta \\\\ \\beta&\\alpha&-\\zeta&\\gamma \\\\ \\gamma&\\zeta&\\alpha&-\\beta \\\\ \\zeta&-\\gamma&\\beta&\\alpha\n\\end{pmatrix}\n\\end{equation}\nWith this identification we consider the set of quaternions $\\mathbb H$ as a subset of $\\mathbb{M}_4$.\nSince quaternion conjugation corresponds to matrix transposition, we have $Q^TQ=\\norm{Q}^2I$, where\n$\\norm{Q}$ is the operator norm of matrix $Q$, also equal to the absolute value of the quaternion.\nConsequently, $\\abs{Qv}=\\norm{Q}\\abs{v}$ for any vector $v\\in\\mathbb{R}^4$.\n\nA quaternion $Q$ is the sum of its real (scalar) and imaginary parts:\n\\begin{equation}\\label{quat1n}\n\\re Q=\\begin{pmatrix} \\alpha&0&0&0 \\\\ 0&\\alpha&0&0 \\\\ 0&0&\\alpha&0 \\\\ 0&0&0&\\alpha\n\\end{pmatrix},\\quad \\im Q=Q-\\re Q.\n\\end{equation}\nIf $\\re Q=0$, the quaternion $Q$ is purely imaginary.\nFor a matrix $A\\in\\mathbb M_4$, we define $\\mathbb{H}(A)$ to be the orthogonal projection of $A$ onto the subspace\n$\\mathbb{H}\\subset \\mathbb M_4$.\n\n\\begin{definition}\\label{quatred} A homeomorphic mapping $f\\in W_{\\loc}^{1,4}(\\mathbb{R}^4;\\mathbb{R}^4)$\nis \\textit{reduced quasiconformal} if there exists $\\delta>0$ such that for a.e. 
$x\\in\\mathbb{R}^4$ the derivative\n$A=Df(x)$ satisfies $A-\\im \\mathbb{H}(A)\\in \\mathbb M_4(\\delta)$.\n\\end{definition}\n\nFirst of all, we need to justify the terminology by proving the following proposition.\n\n\\begin{proposition}\\label{quatprop} Any reduced quasiconformal mapping $f\\colon\\mathbb{R}^4\\to\\mathbb{R}^4$ is\n$K$-quasiconformal, where $K$ depends only on $\\delta$ in Definition~\\ref{quatred}. In addition, $f$ is monotone.\n\\end{proposition}\n\n\\begin{proof} The essence of this proposition is the algebraic implication\n\\begin{equation}\\label{quatp1}\nA-\\im \\mathbb{H}(A)\\in \\mathbb M_4(\\delta) \\implies \\norm{A} \\norm{A^{-1}} \\leqslant \\widetilde{H}(\\delta).\n\\end{equation}\n We may assume that $A$ is a nonzero matrix.\nLet $Q=\\im \\mathbb{H}(A)$ and $B=A-Q$. If $Q=0$, then Proposition~\\ref{deltaqc} gives~\\eqref{quatp1} with $ \\widetilde{H}(\\delta)= H(\\delta)$. Assume $Q\\ne 0$. Fix a unit vector $v\\in\\mathbb{R}^4$.\nSince $Qv\/\\norm{Q}$ is a unit vector orthogonal to $v$, it follows that\n\\[\\inn{Bv,v}^2+\\norm{Q}^{-2}\\inn{Bv,Qv}^2\\leqslant \\abs{Bv}^2\\]\nUsing the inequality $\\inn{Bv,v}\\geqslant \\delta \\abs{Bv}$, we obtain\n\\begin{equation}\\label{quatprop3}\n\\abs{\\inn{Bv,Qv}}\\leqslant \\sqrt{1-\\delta^2}\\norm{Q}\\abs{Bv},\n\\end{equation}\nwhich in turn yields\n\\begin{equation}\\label{quatprop2}\n\\begin{split}\n\\abs{Av}^2&=\\abs{Bv+Qv}^2 = \\abs{Bv}^2+\\norm{Q}^2+2\\inn{Bv,Qv}\\\\\n&\\geqslant (1-\\sqrt{1-\\delta^2})(\\abs{Bv}^2+\\norm{Q}^2)+\\sqrt{1-\\delta^2}(\\abs{Bv}-\\norm{Q})^2\\\\\n&\\geqslant (1-\\sqrt{1-\\delta^2})(\\abs{Bv}^2+\\norm{Q}^2).\n\\end{split}\n\\end{equation}\nIn particular, $A$ is invertible. 
We also have the trivial estimate\n\\begin{equation}\\label{quatprop1}\n\\abs{Av}^2\\leqslant 2(\\abs{Bv}^2+\\norm{Q}^2).\n\\end{equation}\nCombining~\\eqref{quatprop2} and~\\eqref{quatprop1}, we conclude that\n\\[\\begin{split}\n\\norm{A}^2\\norm{A^{-1}}^2&=\n\\frac{\\max\\{\\abs{Av}^2\\colon \\abs{v}=1\\}}{\\min\\{\\abs{Av}^2\\colon \\abs{v}=1\\}}\\\\\n&\\leqslant \\frac{2}{1-\\sqrt{1-\\delta^2}}\n\\frac{\\max\\{\\abs{Bv}^2\\colon \\abs{v}=1\\}+\\norm{Q}^2}{\\min\\{\\abs{Bv}^2\\colon \\abs{v}=1\\}+\\norm{Q}^2} \\\\\n&\\leqslant \\frac{2}{1-\\sqrt{1-\\delta^2}}\n\\frac{\\max\\{\\abs{Bv}^2\\colon \\abs{v}=1\\}}{\\min\\{\\abs{Bv}^2\\colon \\abs{v}=1\\}}\\leqslant \\frac{2(1+ \\sqrt{1-\\delta^2})^2}{(1-\\sqrt{1-\\delta^2})^3}\n\\end{split}\\]\nwhere the last step uses Proposition~\\ref{deltaqc}. This proves~\\eqref{quatp1}.\nApplying~\\eqref{quatp1} to the derivative matrix $A=Df(x)$, we find that $f$ is $K$-quasiconformal with $K= \\widetilde{H}(\\delta)^3$.\n\nWith $A=Df(x)$ and $B=A-\\im \\mathbb{H}(A)$ as above, we have\n\\[\\inn{Av,v} = \\inn{Bv,v} \\geqslant 0,\\quad v\\in\\mathbb{R}^4.\\]\nIntegrating this inequality along the segments $[a,b]$ on which $f$ is absolutely continuous, we obtain $\\Delta_f(a,b)\\geqslant 0$.\nThe continuity of $f$ then implies $\\Delta_f(a,b)\\geqslant 0$ for all $a,b\\in\\mathbb{R}^4$.\n\\end{proof}\n\n\\begin{remark}\\label{remarkk}\nIf in Definition~\\ref{quatred} we do not require $f$ to be homeomorphic, then the proof of Proposition~\\ref{quatprop} shows that $f$ is $K$-quasiregular\n(see Definition~\\ref{qc}).\n\\end{remark}\n\n\nIt follows from Definition~\\ref{quatred} that the set of reduced quasiconformal mappings is a convex cone in four dimensions\nas well as in two dimensions. 
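As a numerical sanity check of the identification~\eqref{quat1} — our illustration, not part of the paper — the following script builds the $4\times 4$ matrix of a sample quaternion and verifies $Q^TQ=\norm{Q}^2I$, and hence $\abs{Qv}=\norm{Q}\abs{v}$.

```python
def quat_matrix(a, b, g, z):
    """4x4 real matrix of the quaternion a + b*i + g*j + z*k, as in (quat1)."""
    return [[a, -b, -g, -z],
            [b,  a, -z,  g],
            [g,  z,  a, -b],
            [z, -g,  b,  a]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(X):
    return [list(row) for row in zip(*X)]

a, b, g, z = 1.0, 2.0, -0.5, 3.0       # a sample quaternion
Q = quat_matrix(a, b, g, z)
QtQ = mat_mul(transpose(Q), Q)
norm2 = a * a + b * b + g * g + z * z  # squared absolute value of the quaternion

# Q^T Q must equal norm2 * I: the columns of Q are orthogonal with equal length.
for i in range(4):
    for j in range(4):
        expected = norm2 if i == j else 0.0
        assert abs(QtQ[i][j] - expected) < 1e-12
```

The decomposition~\eqref{quat1n} is then the diagonal part $\alpha I$ plus an antisymmetric remainder, which is why $\inn{Qv,v}=0$ for purely imaginary $Q$, as used in the proof of Proposition~\ref{delquat}.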
Another similarity with the planar case is provided by the following result.\n\n\\begin{proposition}\\label{delquat} Any nonconstant $\\delta$-monotone mapping $f\\colon\\mathbb{R}^4\\to\\mathbb{R}^4$ is\nreduced quasiconformal in the sense of Definition~\\ref{quatred}.\n\\end{proposition}\n\\begin{proof} Since $f$ is quasiconformal by~\\cite[Cor. 7]{Ko}, we have $f\\in W^{1,4}_{\\loc}(\\mathbb{R}^4 ; \\mathbb{R}^4)$.\nFix a point $x\\in \\mathbb{R}^4$ where $f$ is differentiable and let $A=Df(x)$, $Q=\\im \\mathbb{H}(A)$, $B=A-Q$.\nThe definition of $\\delta$-monotonicity implies $A\\in \\mathbb{M}_4(\\delta)$.\nFor all $v\\in\\mathbb{R}^4$ we have $\\inn{Qv,v}=0$ since $Q$ is an antisymmetric matrix. Thus,\n\\begin{equation}\\label{shouldname}\n\\inn{Bv,v}=\\inn{Av,v}\\geqslant \\delta\\abs{Av}\\abs{v},\\quad v\\in\\mathbb{R}^4.\n\\end{equation}\n It remains to prove that $\\abs{Av}\\geqslant c\\abs{Bv}$ for a constant $c>0$ that depends only on $\\delta$.\nNote that $\\im \\mathbb{H}(A)$ is the orthogonal projection of $A$ onto the space of purely imaginary quaternions, considered as a linear subspace of $\\mathbb{M}_4$. Therefore, $B=A-\\im\\mathbb{H}(A)$ is the projection of $A$ onto the orthogonal\ncomplement of purely imaginary quaternions. Since orthogonal projections in $\\mathbb{M}_n$ do not\nincrease the Frobenius norm $\\norm{\\cdot}_F$, it follows that\n\\begin{equation}\\label{aname}\n\\norm{B}\\leqslant \\norm{B}_F\\leqslant \\norm{A}_F\\leqslant 2\\norm{A}\n\\end{equation}\nwhere we have used the relation between operator norm and Frobenius norm~\\cite[p.313]{HJbook}. Combining\nProposition~\\ref{deltaqc} with~\\eqref{aname} we obtain\n\\[\\abs{Av} \\geqslant \\frac{\\norm{A} \\abs{v} }{H(\\delta)} \\geqslant \\frac{\\norm{B} \\abs{v}}{2 H(\\delta)} \\geqslant \\frac{\\abs{Bv}}{2H(\\delta)}. 
\\]\nThis estimate together with~\\eqref{shouldname} implies $B \\in \\mathbb{M}_4(\\delta\/(2H(\\delta)))$.\n\\end{proof}\n\nOur last result is an extension of Theorem~\\ref{fvar} to four dimensions.\n\n\\begin{theorem}\\label{quatvar} Let $f\\colon \\mathbb{R}^4 \\to \\mathbb{R}^4$ be a reduced quasiconformal mapping in the sense of Definition~\\ref{quatred}.\nThen for any $q>1$ the mapping $f$ has finite $\\phi$-variation on lines with $\\phi$ as in~\\eqref{fvar1}.\n\\end{theorem}\n\\begin{proof} For a purely imaginary quaternion $Q\\in\\mathbb{M}_4$ we define $f^Q(x)=f(x)+Qx$.\nRecall the definition of the modulus of monotonicity $\\Delta_f$ in~\\eqref{Deltadef}.\nWe claim that\n\\begin{equation}\\label{quatvar1}\n\\Delta_f(a,b)=\\min_{Q}\\abs{f^Q(a)-f^Q(b)}\n\\end{equation}\nwhere the minimum is taken over all purely imaginary quaternions (and is attained). Indeed,\n\\[\\Delta_f(a,b)=\\Delta_{f^Q}(a,b)\\leqslant \\abs{f^Q(a)-f^Q(b)},\\quad \\text{for any $Q$ with }\\re Q=0.\\]\nIn proving the converse inequality we may assume that $v:=a-b$ is a nonzero vector.\nApplying the unit quaternions $\\mathbf{i}$, $\\mathbf{j}$, and $\\mathbf{k}$ to $v$,\nwe obtain an orthogonal basis of $\\mathbb{R}^4$, namely $\\{v,\\mathbf{i}v,\\mathbf{j}v,\\mathbf{k}v\\}$.\nExpand the vector $f(a)-f(b)$ in this basis:\n\\begin{equation}\\label{quatexp}\nf(a)-f(b)=\\alpha v+\\beta\\mathbf{i}v+\\gamma\\mathbf{j}v+\\zeta\\mathbf{k}v.\n\\end{equation}\nIn these terms, $\\Delta_f(a,b)=\\inn{\\alpha v,v\/\\abs{v}}=\\alpha\\abs{v}$. 
Since $f$ is monotone by Proposition~\\ref{quatprop}, we have $\\alpha\\geqslant 0$.\nThe quaternion $Q=-\\beta\\mathbf{i}-\\gamma\\mathbf{j}-\\zeta\\mathbf{k}$ satisfies\n\\[\\abs{f^Q(a)-f^Q(b)}=\\alpha\\abs{v}=\\Delta_f(a,b)\\]\nwhich proves~\\eqref{quatvar1}.\nFor future reference, observe that~\\eqref{quatexp} implies $\\abs{f(a)-f(b)}\\geqslant \\abs{Qv}=\\norm{Q}\\abs{v}$, hence\n\\begin{equation}\\label{quatest}\n\\norm{Q}\\leqslant \\frac{\\abs{f(a)-f(b)}}{\\abs{a-b}}.\n\\end{equation}\n\nSince linear mappings trivially satisfy the conclusion of the theorem, we may assume that $f$\nis nonlinear. By Remark~\\ref{remarkk} there exists $K<\\infty$ such that $f^Q$ is $K$-quasiregular for all purely imaginary quaternions $Q$. Also, $f^Q$ is monotone by Proposition~\\ref{quatprop}. By~\\cite[Thm. 1.2]{KO} any monotone quasiregular mapping defined on $\\mathbb{R}^n$ is either constant or a homeomorphism.\nSince $f$ is not linear, $f^Q$ cannot be constant. Thus $f^Q$ is $K$-quasiconformal.\nBy~\\cite[Thm 11.14]{Hebook} the family $f^Q$ has a common modulus of quasisymmetry $\\eta$. Given distinct points $a,b,c\\in \\mathbb{R}^4$, let $Q$ be a minimizing quaternion in~\\eqref{quatvar1}.\nThe quasisymmetry of $f^{Q}$ implies\n\\[\n\\abs{f^{Q}(c)-f^{Q}(a)}\\leqslant \\eta\\left(\\frac{\\abs{c-a}}{\\abs{b-a}}\\right)\\abs{f^{Q}(b)-f^{Q}(a)}.\n\\]\nHere $\\abs{f^{Q}(b)-f^{Q}(a)}=\\Delta_f(a,b)$. Using~\\eqref{quatest} we obtain\n\\[\\begin{split}\n\\abs{f(c)-f(a)}&\\leqslant \\abs{Q(c-a)}+\\abs{f^{Q}(c)-f^{Q}(a)} \\\\\n&\\leqslant \\frac{\\abs{c-a}}{\\abs{a-b}}\\abs{f(a)-f(b)}+\\eta\\left(\\frac{\\abs{c-a}}{\\abs{b-a}}\\right)\\Delta_f(a,b),\n\\end{split}\\]\nwhich is~\\eqref{fvgen1}. 
It remains to apply Theorem~\\ref{fvgen} to $f$.\n\\end{proof}\n\n\\section*{Acknowledgement}\nWe thank Tadeusz Iwaniec for many stimulating discussions on the subject of this paper.\n\n\n\n\n\\bibliographystyle{amsplain}\n\n\\section{In Model}\nUnder the independence assumption for each bond and the conditional independence for each atom given the bonds, the probability of sampling a molecule $M$ from $\\mathcal{M}$ is:\n\\begin{equation}\n\\begin{aligned}\nP_{\\mathcal{M}}(M) &= P_{\\mathcal{B}}(B) P_{\\mathcal{A|B}}(A|B) \\\\\n&=\\prod_{l=1}^{c} \\prod_{i} \\dots\n\\end{aligned}\n\\end{equation}\n\n\\begin{itemize}\n\\item {\\it Setup:} Let $X$ be a random variable with distribution function $F(x) = Pr(X \\le x)$, and we observe $n$ data samples $x_1$, ..., $x_n$. Here we assume that $F(x)$ is absolutely continuous.\n$f(x)$ and $S(x) = 1 - F(x)$ are the probability density function and the survival function (or complementary cumulative distribution function) of $X$ respectively. \n\n\\item {\\it Survival analysis:}\nThe hazard function $\\lambda(x)$ of $X$ is defined as:\n \\begin{equation}\n \\footnotesize\n\\label{equ:hazard}\n\\lambda(x) = \\lim_{\\Delta x \\to 0^+} \\frac{Pr(x \\le X < x + \\Delta x | X \\ge x)}{\\Delta x} = \\frac{f(x)}{S(x)} \\ ,\n\\end{equation} interpreted as the probability of $X$ being sampled with value $x^+$, conditional on $X$ not being sampled with a value smaller than $x$. \nWe define $\\Lambda(x) = \\int^x_{x_0}\\lambda(s) ds$ as the cumulative hazard function. Since $\\Lambda(x)$ is monotonically increasing, it is invertible, and we define $\\Lambda^{-1} : \\mathscr{R^+} \\to \\mathscr{R} $, $ \\Lambda^{-1}(\\Lambda(x)) = x$. Similarly, we can define $F^{-1} : \\mathscr{R^+} \\to \\mathscr{R} $, $ F^{-1}(F(x)) = x$.\n\\item {\\it Point process:}\nWe use $\\mathscr{P}(t | \\lambda_p) = \\{t_1, ..., t_i, ...| 0 < t_1 \\le...\\le t_i \\le ...\\le t \\}$ to denote a Poisson point process until time $t$ with the occurrence time $t_i$ of event $i$, and intensity rate $\\lambda_p > 0$. 
\n\\end{itemize}\n\n\\subsection{Proposed Theorem and Corollary}\nOur main theorem gives the generative dynamics of an arbitrary distribution function $F(x)$ as follows:\n\\begin{theorem}\\label{theorem:main}\nGiven a dynamical system $\\mathscr{D}(t) = \\{x_i(t) > 0| \\frac{d x_i(t)}{dt}, \\\\ x_i{(t_i)} = x_0; i = 1, 2, ... \\}$ consisting of agent $i$ who arrives in the system at time $t_i$ according to a Poisson process $\\mathscr{P}(t | \\lambda_p) = \\{t_1, ..., t_i, ...| 0 < t_1 \\le...\\le t_i \\le ... \\le t \\}$, the state of agent $i$ changes according to a differential equation $\\frac{d x_i(t)}{dt}|_{x_0}$ with initial value $x_0$, and the cross-sectional states of $\\mathscr{D}(t)$ at time point $t$, namely $ x(t) = \\{x_1(t ), ..., x_i(t),... \\}$, follow the distribution $F(x(t))$ if and only if $\\ \\frac{d x_i(t)}{dt}|_{x_0} = \\frac{d F^{-1}(1-\\frac{t_i}{t})}{dt}$.\n\\end{theorem}\n\n\n\n\n\n\\subsection{Proof}\n\\label{sec:app:si_proof}\n \n\\begin{lemma}\\label{lemma:poisson} \\cite{daley2003introduction}\nGiven a Poisson process $\\mathscr{P}(t | \\lambda_p) = \\{t_1, ..., t_i, ...| 0< t_1 \\le...\\le t_i \\le ...\\le t \\}$ with $N(t|\\lambda_p) = n$, the probability density function of a random event time $t_i$ given the total time $t$ is $f(t_i) = \\frac{1}{t}$, indicating a uniform distribution on $(0, t]$.\n\\end{lemma}\n\\begin{proof}\nThe joint probability density function of the random variables $t_i, i = 1, ..., n $ is:\n \\begin{equation}\n \\footnotesize\n\\begin{aligned}\n\\label{equ:poissonuni}\n&Pr(t_i < T_i \\le t_i + \\delta_i, i = 1, ..., n | N(t) = n) = \\\\\n&\\left[\\dfrac{\\splitdfrac{Pr(N(t_i + \\delta_i) - N(t_i) = 1, N(t_{j+1}) - N(t_{j} + \\delta_{j}) = 0, \n}{i = 1,...,n, j = 0, ...,n, t_0 = 0, \\delta_0 = 0) }}\n{Pr( N(t) = n)}\\right]\\\\\n&= \\frac{\\prod_{i = 1}^{n} \\lambda_p \\delta_i e^{-\\lambda_p \\delta_i} e^{-\\lambda_p (t-\\sum_{i=1}^{n}\\delta_i)}}{e^{-\\lambda_p t }(\\lambda_p t)^n \/ n!} = 
\frac{n!}{t^n} \prod_{i = 1}^{n} \delta_i,
\end{aligned}
\end{equation}
 and thus:
 \begin{equation}
 \footnotesize
\begin{aligned}
&f(t_i, i = 1, ..., n | N(t) = n) \\
&= \lim_{\delta_i \to 0, i = 1,...,n} \frac{Pr(t_i < T_i \le t_i + \delta_i, i = 1, ..., n | N(t) = n) }{\prod_{i = 1}^{n} \delta_i} \\
 &= \frac{n!}{t^n},
\end{aligned}
\end{equation} where $0 < t_1 \le ...\le t_i \le ... \le t$. This is exactly the joint density of the order statistics of $n$ i.i.d. uniform random variables on $(0, t]$, so a randomly chosen event time $t_i$ has marginal density $f(t_i) = \frac{1}{t}$.
\end{proof}


\begin{table*}[!htb] 
\centering
\caption{Generative performance on QM9}
\begin{tabular}{l c c c c c c}
\toprule
 & \textbf{\% Validity} & \textbf{\% Novelty} & \textbf{\% Uniqueness} & \textbf{\% Reconstruct} & \textbf{\% N.U.V.} & \textbf{\% Uniqueness2} \\
\midrule
\textbf{MoFlow} & $96.04\pm0.45$ & $98.39\pm0.24$ & $98.42\pm0.29$ &$100$ & $93.00\pm0.43$& $94.52\pm0.49$\\ 
\textbf{GraphNVP} & $83.1\pm0.5$ & $58.2\pm1.9$ & $99.2\pm0.3$ &$100$ & $47.97$& $82.43$\\ 
\bottomrule 
\end{tabular}
\label{tab:app:qm9}
\end{table*}



\section{Introduction}
\label{sec:intro}

Drug discovery aims at finding candidate molecules with desired chemical properties for clinical trials, which is a long (10-20 years) and costly (\$0.5-\$2.6 billion) process with a high failure rate \cite{paul2010improve,avorn20152}. Recently, deep graph generative models have demonstrated great potential to accelerate the drug discovery process by exploring large chemical space in a data-driven manner \cite{jin2018junction,zhavoronkov2019deep}. 
These models usually first learn a continuous latent space by encoding\footnote{In this paper, we use inference, embedding or encoding interchangeably to refer to the transformation from molecular graphs to the learned latent space, and we use decoding or generation for the reverse transformation.} the training molecular graphs and then generate novel and optimized ones through decoding from the learned latent space guided by targeted properties \cite{gomez2018automatic,jin2018junction}. 
However, it is still very challenging to generate novel and chemically-valid molecular graphs with desired properties since: a) the scale of the chemical space of drug-like compounds is $10^{60}$ \cite{mullard2017drug} but the scale of molecular graphs that existing methods can possibly generate is much smaller, and
b) generating molecular graphs that have both multi-type nodes and edges and follow bond-valence constraints
is a hard combinatorial task.

Prior works leverage different deep generative frameworks for generating molecular SMILES codes \cite{weininger1989smiles} or molecular graphs, including variational autoencoder (VAE)-based models \cite{kusner2017grammar,dai2018syntax,simonovsky2018graphvae,ma2018constrained,liu2018constrained,bresson2019two,jin2018junction}, generative adversarial networks (GAN)-based models \cite{de2018molgan,you2018graph}, and autoregressive (AR)-based models \cite{popova2019molecularrnn,you2018graph}. 
In this paper, we explore a different deep generative framework, namely the normalizing flow \cite{dinh2014nice,madhawa2019graphnvp,kingma2018glow}, to generate molecular graphs. Compared with the above three frameworks, the flow-based framework is the only one that can memorize and exactly reconstruct all the input data, while at the same time having the potential to generate more novel, unique and valid molecules, which implies a capability for deeper exploration of the huge chemical space. 
To the best of our knowledge, three flow-based models have been proposed for molecular graph generation. The GraphAF \cite{shi2020graphaf} model is an autoregressive flow-based model that achieves state-of-the-art performance in molecular graph generation. GraphAF generates molecules in a sequential manner by adding each new atom or bond followed by a validity check. GraphNVP \cite{madhawa2019graphnvp} and GRF \cite{honda2019graph} are proposed for molecular graph generation in a one-shot manner. However, they cannot guarantee chemical validity and thus show poor performance in generating valid and novel molecules. 

In this paper, we propose a novel deep graph generative model named MoFlow\ to generate molecular graphs. Our MoFlow\ is the first of its kind that not only generates molecular graphs efficiently by invertible mapping in one shot, but also has a chemical validity guarantee. More specifically, to capture the combinatorial atom-and-bond structures of molecular graphs, we propose a variant of the Glow model \cite{kingma2018glow} to generate bonds (multi-type edges, e.g., single, double and triple bonds), a novel \textit{graph conditional flow} to generate atoms (multi-type nodes, e.g., C, N) given bonds
by leveraging graph convolutions, and finally assemble the atoms and bonds into a valid molecular graph that follows bond-valence constraints.
We illustrate our modelling framework in Figure~\ref{fig:model}. Our MoFlow\ is trained by exact and tractable likelihood estimation, and its one-pass inference and generation can be efficiently utilized for molecular graph optimization.


We validate our MoFlow\ through a wide range of experiments, from molecular graph generation, reconstruction and visualization to optimization. 
As baselines, we compare against the state-of-the-art VAE-based model \cite{jin2018junction}, autoregressive-based models \cite{you2018graph,popova2019molecularrnn}, and all three flow-based models \cite{madhawa2019graphnvp,honda2019graph, shi2020graphaf}. 
As for memorizing input data, MoFlow\ achieves a $100\%$ reconstruction rate. As for exploring the unknown chemical space, MoFlow\ outperforms the above models by generating more novel, unique and valid molecules (as demonstrated by the N.U.V. scores in Table~\ref{tab:qm9} and~\ref{tab:zinc}). MoFlow\ generates $100\%$ chemically-valid molecules when sampling from prior distributions. Furthermore, even without validity correction, MoFlow\ still generates many more valid molecules than existing models (validity-without-check scores in Table~\ref{tab:qm9} and~\ref{tab:zinc}). For example, the state-of-the-art autoregressive-flow-based model GraphAF \cite{shi2020graphaf} achieves $67\%$ and $68\%$ validity-without-check scores for the two datasets while MoFlow\ achieves $96\%$ and $82\%$ respectively, thanks to its capability of capturing the chemical structures in a holistic way.
As for chemical property optimization, MoFlow\ can find many more novel molecules with top drug-likeness scores than existing models (Table~\ref{tab:topqed} and Figure~\ref{fig:topqed}). As for constrained property optimization, MoFlow\ finds novel and optimized molecules with the best similarity scores and the second best property improvement (Table~\ref{tab:plogp}). 

It is worthwhile to highlight our contributions as follows:
 \begin{compactitem}
 \item {\bf Novel MoFlow\ model:}
 our MoFlow\ is one of the first flow-based graph generative models which not only generates molecular graphs in one shot by invertible mapping but also has a validity guarantee. 
To capture the combinatorial atom-and-bond structures of molecular graphs, we propose a variant of the Glow model for bonds (edges) and a novel graph conditional flow for atoms (nodes) given bonds, and then assemble them into valid molecular graphs.
 
\item {\bf State-of-the-art performance:} our MoFlow\ achieves many state-of-the-art results w.r.t. molecular graph generation, reconstruction, optimization, etc., and at the same time our one-shot inference and generation are very efficient, which implies its potential for deep exploration of the huge chemical space for drug discovery.
\end{compactitem}



The outline of this paper is:
survey (Sec.~\ref{sec:related}), proposed method (Sec.~\ref{sec:mp} and \ref{sec:model}), experiments (Sec.~\ref{sec:exp}), and conclusions (Sec.~\ref{sec:conclusion}). In order to promote reproducibility, our codes and datasets are open-sourced at \url{https://github.com/calvin-zcx/moflow}.
 

\section{Related Work}
\label{sec:related}
\mytag{Molecular Generation.}
Different deep generative frameworks have been proposed for generating molecular SMILES or molecular graphs. Among the variational autoencoder (VAE)-based models \cite{kusner2017grammar,dai2018syntax,simonovsky2018graphvae,ma2018constrained,liu2018constrained,bresson2019two,jin2018junction}, the JT-VAE \cite{jin2018junction} generates valid tree-structured molecules by first generating a tree-structured scaffold of chemical substructures and then assembling substructures according to the generated scaffold.
The MolGAN \cite{de2018molgan} is a generative adversarial networks (GAN)-based model but shows very limited performance in generating valid and unique molecules. The autoregressive-based models generate molecules in a sequential manner with a validity check at each generation step. 
For example, the MolecularRNN \cite{popova2019molecularrnn} sequentially generates each character of a SMILES string and the GCPN \cite{you2018graph} sequentially generates each atom/bond in a molecular graph.
In this paper, we explore a different deep generative framework, namely the normalizing flow models \cite{dinh2014nice,madhawa2019graphnvp,kingma2018glow}, for molecular graph generation, which have the potential to memorize and reconstruct all the training data and to generalize by generating more valid, novel and unique molecules.

 
 
 

\noindent \mytag{Flow-based Models.}
The (normalizing) flow-based models try to learn mappings between complex distributions and simple prior distributions through invertible neural networks. Such a framework has the merits of exact and tractable likelihood estimation for training, efficient one-pass inference and sampling, and invertible mapping that can reconstruct all the training data. Examples include NICE \cite{dinh2014nice}, RealNVP \cite{dinh2016density}, Glow \cite{kingma2018glow} and GNF \cite{liu2019graph}, which show promising results in generating images or even graphs \cite{liu2019graph}. See the latest reviews in \cite{papamakarios2019normalizing,kobyzev2019normalizing} and more technical details in Section~\ref{sec:mp}.


To the best of our knowledge, there are three flow-based models for molecular graph generation. The GraphAF \cite{shi2020graphaf} is an autoregressive flow-based model which achieves state-of-the-art performance in molecular graph generation. The GraphAF generates molecular graphs in a sequential manner with a validity check when adding any new atom or bond. The GraphNVP \cite{madhawa2019graphnvp} and GRF \cite{honda2019graph} are proposed for molecular graph generation in a one-shot manner. However, they have no guarantee of chemical validity and thus show very limited performance in generating valid and novel molecular graphs. 

Our MoFlow\ is the first of its kind that not only generates molecular graphs efficiently by invertible mapping in one shot but also has a validity guarantee. In order to capture the atom-and-bond composition of molecules, we propose a variant of the Glow \cite{kingma2018glow} model for bonds and a novel graph conditional flow for atoms given bonds, and then combine them with a post-hoc validity correction. Our MoFlow\ achieves many state-of-the-art results thanks to capturing the chemical structures in a holistic way, and our one-shot inference and generation are more efficient than sequential models.
 




\section{Model Preliminary}
\label{sec:mp}
\mytag{The flow framework.} The flow-based models aim to learn a sequence of invertible transformations $f_{\Theta}=f_L \circ ... \circ f_1$ between complex high-dimensional data $X \sim P_\mathcal{X}(X)$ and $Z \sim P_\mathcal{Z}(Z)$ in a latent space with the same number of dimensions, where the latent distribution $P_\mathcal{Z}(Z)$ is easy to model (e.g., strong independence assumptions hold in such a latent space). The potentially complex data in the original space can be modelled by the \textbf{change of variable formula} where $Z=f_{\Theta}(X)$ and:
\begin{equation}
\small
P_{\mathcal{X}}(X)=P_{\mathcal{Z}}(Z) \mid \det (\frac{\partial Z}{\partial X}) \mid.
\end{equation}
Sampling $\tilde{X}\sim P_\mathcal{X}(X)$ is achieved by sampling $\tilde{Z} \sim P_\mathcal{Z}(Z)$ and then transforming $\tilde{X} = f_{\Theta}^{-1} (\tilde{Z})$ by the reverse mapping of $f_{\Theta}$.

Let $Z = f_{\Theta}(X) = f_L \circ ... \circ f_1 (X)$, \ $H_l = f_l (H_{l-1})$ where $f_l$ ($l = 1, ... L \in \mathbb{N^+}$) are invertible mappings, $H_{0}=X$, $H_{L}=Z$ and $P_\mathcal{Z}(Z)$ follows a standard isotropic Gaussian with independent dimensions. 
Then we get the log-likelihood of $X$ by the change of variable formula as follows:
\begin{equation}
\small
\begin{aligned}
\log P_{\mathcal{X}}(X) &=\log P_{\mathcal{Z}}(Z) + \log \mid \det (\frac{\partial Z}{\partial X}) \mid \\
&=\sum_i \log P_{\mathcal{Z}_i}(Z_i) + \sum_{l=1}^{L}\log \mid \det (\frac{\partial f_l}{\partial H_{l-1}}) \mid 
\end{aligned}
\end{equation} where $P_{\mathcal{Z}_i}(Z_i)$ is the probability of the $i^{th}$ dimension of $Z$ and $f_{\Theta}=f_L \circ ... \circ f_1$ is an invertible deep neural network to be learnt. Thus, the exact-likelihood-based training is tractable.

\mytag{Invertible affine coupling layers.} However, how to design a.) an invertible function $f_{\Theta}$ with b.) expressive structures and c.) efficient computation of the Jacobian determinant is nontrivial. NICE \cite{dinh2014nice} and RealNVP \cite{dinh2016density} design an affine coupling transformation $Z = f_{\Theta}(X):\mathbb{R}^{n} \mapsto \mathbb{R}^{n}$:
\begin{equation}
\small
\begin{aligned}
Z_{1:d} &= X_{1:d} \\
Z_{d+1:n} &= X_{d+1:n} \odot e^{S_{\Theta}(X_{1:d})} + T_{\Theta}(X_{1:d}),
\end{aligned}
\end{equation} by splitting $X$ into two partitions $X = (X_{1:d}, X_{d+1:n})$. Thus, a.) the invertibility is guaranteed by:
\begin{equation}
\small
\begin{aligned}
X_{1:d} &= Z_{1:d} \\
X_{d+1:n} &= (Z_{d+1:n} - T_{\Theta}(Z_{1:d})) / e^{S_{\Theta}(Z_{1:d})},
\end{aligned}
\end{equation}
b.) the expressive power depends on arbitrary neural structures of the \textbf{{S}cale function} $S_{\Theta}:\mathbb{R}^{d} \mapsto \mathbb{R}^{n-d}$ and the \textbf{{T}ransformation function} $T_{\Theta}:\mathbb{R}^{d} \mapsto \mathbb{R}^{n-d}$ in the affine transformation of $X_{d+1:n}$, and c.) the Jacobian determinant can be computed efficiently by $\det (\frac{\partial Z}{\partial X})=\exp{(\sum_j S_{\Theta}(X_{1:d})_j)}$. 
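To make the coupling mechanics concrete, the following minimal NumPy sketch checks both the exact invertibility and the Jacobian determinant formula above; the linear-plus-tanh maps standing in for $S_{\Theta}$ and $T_{\Theta}$ are illustrative placeholders, not the networks used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 4                        # total dims, size of the unchanged partition

# Placeholder scale/transform functions; any functions of x_{1:d} work.
W_s = rng.normal(scale=0.1, size=(d, n - d))
W_t = rng.normal(scale=0.1, size=(d, n - d))
S = lambda x1: np.tanh(x1 @ W_s)   # bounded, so exp(S) stays well-behaved
T = lambda x1: x1 @ W_t

def coupling_forward(x):
    x1, x2 = x[:d], x[d:]
    z2 = x2 * np.exp(S(x1)) + T(x1)          # affine coupling of x_{d+1:n}
    return np.concatenate([x1, z2])

def coupling_inverse(z):
    z1, z2 = z[:d], z[d:]
    x2 = (z2 - T(z1)) / np.exp(S(z1))        # exact algebraic inverse
    return np.concatenate([z1, x2])

def log_det_jacobian(x):
    # det(dZ/dX) = exp(sum_j S(x_{1:d})_j), so the log-det is just the sum.
    return S(x[:d]).sum()

x = rng.normal(size=n)
z = coupling_forward(x)
assert np.allclose(coupling_inverse(z), x)   # invertibility is exact

# Compare the analytic log-det against a brute-force numerical Jacobian.
eps = 1e-6
J = np.stack([(coupling_forward(x + eps * e) - z) / eps
              for e in np.eye(n)], axis=1)
assert np.isclose(np.log(abs(np.linalg.det(J))), log_det_jacobian(x), atol=1e-4)
```

The block-triangular structure of the Jacobian is what makes the determinant a simple sum, regardless of how complex $S_{\Theta}$ and $T_{\Theta}$ are.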


\mytag{Splitting Dimensions.} 
The flow-based models, e.g., RealNVP \cite{dinh2016density} and Glow \cite{kingma2018glow}, adopt a
\mytag{squeeze operation} which compresses the spatial dimension $X^{c \times n \times n}$ into $X^{(ch^2) \times \frac{n}{h} \times \frac{n}{h}}$ to make more channels and then splits the channels into two halves for the coupling layer. Successive layers of a deep flow model transform the dimensions left unchanged by the previous layer, so that all the dimensions get transformed. 
In order to learn an optimal partition of $X$, the Glow \cite{kingma2018glow} model introduces an \mytag{invertible $1 \times 1$ convolution} $: \mathbb{R}^{c \times n \times n} \times \mathbb{R}^{c \times c} \mapsto \mathbb{R}^{c \times n \times n}$
with a learnable convolution kernel $W \in \mathbb{R}^{c \times c}$ which is initialized as a random rotation matrix. After the transformation $Y = \text{invertible $1 \times 1$ convolution}(X, W)$, a fixed partition $Y = (Y_{1:\frac{c}{2}, :, :}, Y_{\frac{c}{2}+1:c, :, :})$ over the channel dimension $c$ is used for the affine coupling layers.







\mytag{Numerical stability by actnorm.}
In order to ensure the numerical stability of the flow-based models, the \mytag{actnorm layer} is introduced in Glow \cite{kingma2018glow}, which normalizes the dimensions in each channel over a batch by an affine transformation with learnable scale and bias. The scale and the bias are initialized as the mean and the inverse of the standard deviation of the dimensions in each channel over the batch.








\section{Proposed {\bf MoFlow\ } Model}
\label{sec:model}
In this section, we first define the problem and then introduce our \underline{Mo}lecular \underline{Flow} (MoFlow) model in detail. 
We show the outline of our MoFlow\ in Figure~\ref{fig:model} as a roadmap for this section.


\begin{figure}[!tb]
\vspace{-0.1in}
\centering
\includegraphics[width=0.52\textwidth]{Fig/framework2.pdf}
\vspace{-0.2in}
\caption{ 
The outline of our MoFlow. A molecular graph $M$ (e.g., Metformin) is represented by a feature matrix $A$ for atoms and an adjacency tensor $B$ for bonds. Inference:
the graph conditional flow (GCF) $f_{\mathcal{A|B}}$ for atoms (Sec.~\ref{sec:model:gcf}) transforms $A$ given $B$ into the conditional latent vector $Z_{A|B}$, and the Glow $f_{\mathcal{B}}$ for bonds (Sec.~\ref{sec:model:glow}) transforms $B$ into the latent vector $Z_B$. The latent space follows a spherical Gaussian distribution. Generation: the generation process consists of the reverse transformations of the previous operations, followed by a validity correction (Sec.~\ref{sec:model:validity}) procedure which ensures chemical validity. We summarize MoFlow\ in Sec.~\ref{sec:model:all}. Regression and optimization: 
 the mapping $y(Z)$ between the latent space and molecular properties is used for
molecular graph optimization and property prediction
(Sec.~\ref{sec:exp:opt}, Sec.~\ref{sec:exp:copt}).
\label{fig:model}}
\vspace{-0.1in}
\end{figure}





\subsection{Problem Definition: Learning a Probability Model of Molecular Graphs}
\label{sec:model:problem}
Let $\mathcal{M}=\mathcal{A} \times \mathcal{B} \subset \mathbb{R}^{n \times k} \times \mathbb{R}^{c \times n \times n}$ denote the set of $\mathcal{M}$olecules, which is the Cartesian product of the $\mathcal{A}$tom set $\mathcal{A}$ with at most $n \in \mathbb{N^+}$ atoms belonging to $k \in \mathbb{N^+}$ atom types and the $\mathcal{B}$ond set $\mathcal{B}$ with $c\in \mathbb{N^+}$ bond types. 
A molecule $M=(A,B) \in \mathcal{A} \times \mathcal{B}$ is a pair of an atom matrix $A \in \mathbb{R}^{n \times k}$ and a bond tensor $B \in \mathbb{R}^{c \times n \times n}$.
We use one-hot encoding for the empirical molecule data, where
$A (i, k) = 1$ represents that atom $i$ has atom type $k$, and $B (c, i, j)=B (c, j, i)=1$ represents a type-$c$ bond between atoms $i$ and $j$. Thus, a molecule $M$ can be viewed as an undirected graph with multi-type nodes and multi-type edges.

Our primary goal is to learn a molecule generative model $P_{\mathcal{M}} (M)$, which is the probability of sampling any molecule $M$ from $P_{\mathcal{M}}$.
In order to capture the combinatorial atom-and-bond structures of molecular graphs,
we decompose $P_{\mathcal{M}} (M)$ into two parts:
\begin{equation}\small
P_{\mathcal{M}}(M)=P_{\mathcal{M}}((A,B)) \approx P_{\mathcal{A|B}}(A|B; \theta_\mathcal{A|B}) P_{\mathcal{B}}(B;\theta_\mathcal{B}) 
\end{equation}
where 
$P_{\mathcal{M}}$ is the distribution of molecular graphs,
$P_{\mathcal{B}}$ is the distribution of bonds (edges) in analogy to modelling multi-channel images, and 
$P_{\mathcal{A|B}}$ is the conditional distribution of atoms (nodes) given the bonds, modelled by leveraging graph convolution operations. The $\theta_\mathcal{B}$ and $\theta_\mathcal{A|B}$ are learnable modelling parameters. 
In contrast with VAE- or GAN-based frameworks, we can learn the parameters by exact maximum likelihood estimation (MLE), maximizing:
\begin{equation}\small
\argmax_{\theta_\mathcal{B}, \theta_\mathcal{A|B}} \ \mathbb{E}_{M=(A,B) \sim p_{\mathcal{M}-data}} [\log P_{\mathcal{A|B}}(A|B; \theta_\mathcal{A|B}) + \log P_{\mathcal{B}}(B;\theta_\mathcal{B})]
\end{equation}
Our model thus consists of two parts, namely a \emph{graph conditional flow for atoms} to learn the atom matrix conditional on the bond tensors and 
a \emph{flow for bonds} to learn bond tensors. 
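As a minimal illustration of this one-hot representation, the sketch below encodes a toy molecule into an atom matrix $A$ and bond tensor $B$; the tiny atom and bond vocabularies are illustrative stand-ins, not the ones actually used for the QM9 or ZINC datasets.

```python
import numpy as np

# Illustrative vocabularies (the paper's actual ones depend on the dataset).
ATOMS = ["C", "N", "O", "pad"]                    # k = 4 atom types (incl. padding)
BONDS = {"single": 0, "double": 1, "triple": 2}   # c = 3 bond types
n = 5                                             # maximum number of atoms

def encode(atoms, bonds):
    """atoms: list of symbols; bonds: list of (i, j, type) triples."""
    k, c = len(ATOMS), len(BONDS)
    A = np.zeros((n, k))
    for i in range(n):
        sym = atoms[i] if i < len(atoms) else "pad"
        A[i, ATOMS.index(sym)] = 1                # A(i, k) = 1: atom i has type k
    B = np.zeros((c, n, n))
    for i, j, t in bonds:
        B[BONDS[t], i, j] = B[BONDS[t], j, i] = 1 # symmetric, undirected bonds
    return A, B

# A small fragment: C-C single bond and C=O double bond.
A, B = encode(["C", "C", "O"], [(0, 1, "single"), (1, 2, "double")])
assert A.shape == (5, 4) and B.shape == (3, 5, 5)
assert np.array_equal(B[1], B[1].T)               # each bond channel is symmetric
```

The padding type keeps every molecule at a fixed tensor shape, which is what allows the one-shot flow to treat $B$ like a multi-channel image.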
We further learn a mapping between the learned latent vectors and molecular properties to regress graph-based molecular properties and to guide the generation of optimized molecular graphs.

\subsection{Graph Conditional Flow for Atoms}
\label{sec:model:gcf}
Given a bond tensor $B \in \mathcal{B} \subset \mathbb{R}^{c \times n \times n}$, the goal of our atom flow is to generate the right atom-type matrix $A \in \mathcal{A} \subset \mathbb{R}^{n \times k} $ to assemble valid molecules $M=(A,B) \in \mathcal{M} \subset \mathbb{R}^{n \times k + c \times n \times n} $. We first define the \mytag{$\mathcal{B} $-conditional flow} and the \mytag{graph conditional flow} $f_{\mathcal{A|B}}$ to transform $A$ given $B$ into the conditional latent variable ${Z_{A|B}} = f_{\mathcal{A|B}} (A|B)$, which follows an isotropic Gaussian $P_{\mathcal{{Z_{A|B}}}}$.
We then get the conditional probability of atom features given the bond graphs, $P_{\mathcal{A|B}}$, by a conditional version of the change of variable formula.

\subsubsection{$\mathcal{B} $-Conditional Flow and Graph Conditional Flow}
\begin{definition} {\mytag{$\mathcal{B} $-conditional flow}:}
A \mytag{$\mathcal{B} $-conditional flow} \newline $Z_{A|B} | B = f_{\mathcal{A|B}}(A | B)$ is an invertible and dimension-preserving mapping, 
and there exists a reverse transformation $f_{\mathcal{A|B}}^{-1}(Z_{A|B}|B)=A|B$ where $f_{\mathcal{A|B}} \text{ and } f_{\mathcal{A|B}}^{-1}: \mathcal{A} \times \mathcal{B} \mapsto \mathcal{A}\times \mathcal{B}$.
\end{definition}

The condition $B \in \mathcal{B}$ remains fixed during the transformation. 
Because the condition $B$ is held fixed under $f_{\mathcal{A|B}}$, the Jacobian of $f_{\mathcal{A|B}}$ is block triangular:
\begin{equation}\small
\begin{aligned}
 \frac{\partial f_{\mathcal{A|B}}}{\partial (A,B)} = \begin{bmatrix}
\frac{\partial f_{\mathcal{A|B}}}{\partial A} & \frac{\partial f_{\mathcal{A|B}}}{\partial B}\\
0 & \mathbb{1}_B
\end{bmatrix},
\end{aligned}
\end{equation}
the determinant of this Jacobian is $\det \frac{\partial f_{\mathcal{A|B}}}{\partial (A,B)} = \det \frac{\partial f_{\mathcal{A|B}}}{\partial A}$, and thus the \textit{conditional version of the change of variable formula} in the form of log-likelihood is: 
\begin{equation} \small
\begin{aligned}
\log P_\mathcal{A|B} (A | B) = \log P_{\mathcal{Z_{A|B}}}(Z_{A|B}) + \log \mid \det \frac{\partial f_{\mathcal{A|B}}}{\partial A} \mid.
\end{aligned}
\end{equation} 

\begin{definition} {\mytag{Graph conditional flow}:}
A graph conditional flow is a $\mathcal{B} $-conditional flow $Z_{A|B} | B= f_{\mathcal{A|B}}(A|B)$ where $B \in \mathcal{B} \subset \mathbb{R}^{c \times n \times n}$ is the adjacency tensor for edges with $c$ types and $A \in \mathcal{A} \subset \mathbb{R}^{n \times k}$ is the feature matrix of the corresponding $n$ nodes.
\end{definition}

\begin{figure}[!tb]
\vspace{-0.2in}
\centering
\includegraphics[width=0.35\textwidth]{Fig/cgflow.pdf}
\vspace{-0.15in}
\caption{ 
Graph conditional flow $f_{\mathcal{A|B}}$ for the atom matrix. We show the details of one invertible graph coupling layer and a multiscale structure consisting of a cascade of $L$ such graph coupling layers. The graphnorm is computed only once.
\label{fig:cgflow}}
\vspace{-0.15in}
\end{figure}
\subsubsection{Graph coupling layer} We construct the aforementioned invertible mappings $f_{\mathcal{A|B}}$ and $f_{\mathcal{A|B}}^{-1}$ by the scheme of the affine coupling layer. 
Different from the traditional affine coupling layer, our coupling transformation relies on graph convolutions \cite{sun2019graph}, and thus we name it a \textbf{graph coupling layer}. 

For each graph coupling layer, we split the input $A \in \mathbb{R}^{n \times k}$ into two parts $A=(A_1, A_2)$ along the row dimension $n$, and we get the output $Z_{A|B} = (Z_{A_1|B}, Z_{A_2|B}) = f_{\mathcal{A|B}}(A|B)$ as follows:
\begin{equation}\small
\begin{aligned}
Z_{A_1|B} &= A_{1} \\
Z_{A_2|B} &= A_{2} \odot \text{Sigmoid}(S_{\Theta}(A_{1}| B)) + T_{\Theta}(A_{1}| B)
\end{aligned}
\end{equation} 
where $\odot$ is the element-wise product. We design the scale function $S_{\Theta}$ and the transformation function $T_{\Theta}$ in each graph coupling layer by incorporating \textbf{graph convolution} structures. The bond tensor $B \in \mathbb{R}^{c \times n \times n}$ remains fixed while transforming the atom matrix $A$. We also apply the masked convolution idea in \cite{dinh2016density} to the graph convolution in the graph coupling layer.
Here, we adopt Relational Graph Convolutional Networks (R-GCN) \cite{schlichtkrull2018modeling} to build the {graph convolution layer} \textbf{graphconv} as follows:
\begin{equation}\small
\begin{aligned}
\text{graphconv}(A_1) = \sum_{i=1}^{c} \hat{B_i}(M\odot A)W_i + (M \odot A)W_0
\end{aligned}
\end{equation} 
where $\hat{B_i} = D^{-1}B_i$ is the normalized adjacency matrix at channel $i$, $D$ is the degree matrix with $D_{jj} = \sum_{c,i}B(c,i,j)$, the in-degree summed over all the channels for each node, and $M \in \{0,1\}^{n \times k}$ is a binary mask to select the partition $A_1$ from $A$. 
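A minimal NumPy sketch of this masked relational graph convolution follows; the weights and dimensions are random and illustrative, and the degree normalization is one plausible reading of the formula above (the released implementation may differ in detail).

```python
import numpy as np

rng = np.random.default_rng(1)
c, n, k, h = 3, 5, 4, 6            # bond types, atoms, atom features, hidden dim

B = rng.integers(0, 2, size=(c, n, n)).astype(float)
B = (B + B.transpose(0, 2, 1)) / 2           # symmetrize (values 0, 0.5, 1)
A = rng.normal(size=(n, k))
M = np.zeros((n, k)); M[: n // 2] = 1        # binary mask selecting partition A_1

# graphnorm: per-node degree summed over all channels, computed only once.
deg = B.sum(axis=(0, 2))                     # total degree of each node
B_hat = B / np.maximum(deg, 1e-8)[None, :, None]   # D^{-1} B_i per channel

W = rng.normal(scale=0.1, size=(c, k, h))    # one weight matrix per bond type
W0 = rng.normal(scale=0.1, size=(k, h))      # self-connection weights

def graphconv(A, B_hat):
    out = (M * A) @ W0                       # self loop on the masked input
    for i in range(c):
        out += B_hat[i] @ (M * A) @ W[i]     # message passing per bond channel
    return out

H = graphconv(A, B_hat)
assert H.shape == (n, h)
```

Stacking such layers (with nonlinearities and an MLP head) would give the scale and transformation functions of the graph coupling layer.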
Because the bond graph is fixed across the graph coupling layers, the graph normalization, denoted as \textbf{graphnorm}, is computed only once.

We use multiple stacked graphconv->BatchNorm1d->ReLU layers with a multi-layer perceptron (MLP) output layer to build the graph scale function $S_{\Theta}$ and the graph transformation function $T_{\Theta}$. Moreover, instead of using the exponential function for $S_{\Theta}$ as discussed in Sec.~\ref{sec:mp}, we adopt the Sigmoid function for the sake of numerical stability when cascading multiple flow layers.
The reverse mapping of the graph coupling layer $f_{\mathcal{A|B}}^{-1}$ is:
\begin{equation}
\small
\begin{aligned}
A_{1} &= Z_{A_1|B} \\
A_{2} &= (Z_{A_2|B} - T_{\Theta}(Z_{A_1|B}| B))/ \text{Sigmoid}(S_{\Theta}(Z_{A_1|B} | B)).
\end{aligned}
\end{equation} 
The logarithm of the Jacobian determinant of each graph coupling layer can be efficiently computed by:
\begin{equation}
\small
\begin{aligned}
 \log \mid \det (\frac{\partial f_{\mathcal{A|B}}}{\partial A}) \mid=\sum_j \log \text{Sigmoid}(S_{\Theta}(A_1 | B))_j
 \end{aligned}
\end{equation} 
where $j$ iterates over each element. In principle, we can use arbitrarily complex graph convolution structures for $S_{\Theta}$ and $T_{\Theta}$, since computing the above Jacobian determinant of $f_{\mathcal{A|B}}$ does not involve computing the Jacobian of $S_{\Theta}$ or $T_{\Theta}$.



\subsubsection{Actnorm for 2-dimensional matrix}
For the sake of numerical stability, we design a variant of the invertible actnorm layer \cite{kingma2018glow} for the 2-dimensional atom matrix, denoted as \textbf{actnorm2D} (activation normalization for 2D matrix), to normalize each row, namely the feature dimension for each node, over a batch of 2-dimensional atom matrices. 
Given the mean $\mu \in \mathbb{R}^{n \times 1}$ and the variance $\sigma^2 \in \mathbb{R}^{n \times 1}$ for each row dimension, the normalized input is $\hat{A} = \frac{A-\mu}{\sqrt{\sigma^2 + \epsilon}}$ where $\epsilon$ is a small constant, the reverse transformation is $A = \hat{A} * \sqrt{\sigma^2 + \epsilon} + \mu$, and the logarithmic Jacobian determinant is:
\begin{equation}
\small
\begin{aligned}
\log \mid \det \frac{\partial \textbf{actnorm2D}}{\partial X} \mid = -\frac{k}{2}\sum_i^{n} \log({\sigma^2_i + \epsilon})
\end{aligned}
\end{equation} 


\subsubsection{Deep architectures}
We summarize our deep graph conditional flow in Figure~\ref{fig:cgflow}. We stack multiple graph coupling layers to form the graph conditional flow.
We alternate different partitions of $A=(A_1, A_2)$ in each layer to transform the part left unchanged by the previous layer.


\subsection{Glow for Bonds}
\label{sec:model:glow}
The bond flow aims to learn an invertible mapping $f_{\mathcal{B}}: \mathcal{B} \subset \mathbb{R}^{c \times n \times n} \mapsto \mathcal{B} \subset \mathbb{R}^{c \times n \times n}$ where the transformed latent variable $Z_B = f_{\mathcal{B}} (B)$ follows an isotropic Gaussian. According to the change of variable formula, we can get the logarithmic probability of bonds by $\log P_{\mathcal{B}}(B)=\log P_{\mathcal{Z_B}}(Z_B) + \log \mid \det (\frac{\partial f_{\mathcal{B}}}{\partial B}) \mid$ and generate the bond tensor by reversing the mapping, $\tilde{B} = f_{\mathcal{B}}^{-1} (\tilde{Z})$ where $\tilde{Z} \sim P_{\mathcal{Z}}(Z)$. We can use an arbitrary flow model for the bond tensor, and we build our bond flow $f_{\mathcal{B}}$ based on a variant of the Glow \cite{kingma2018glow} framework. 

We also follow the scheme of the affine coupling layer to build invertible mappings. 

For each affine coupling layer, we split the input $B \in \mathbb{R}^{c \times n \times n}$ into two parts $B=(B_1, B_2)$ along the channel dimension $c$, and we get the output $Z_B=(Z_{B_1}, Z_{B_2})$ as follows:
\begin{equation}
\small
\begin{aligned}
Z_{B_1} &= B_{1} \\
Z_{B_2} &= B_{2} \odot \text{Sigmoid}(S_{\Theta}(B_{1})) + T_{\Theta}(B_{1}) .
\end{aligned}
\end{equation} 
Thus the reverse mapping $f_{\mathcal{B}}^{-1}$ is:
\begin{equation}
\small
\begin{aligned}
B_{1} &= Z_{B_1} \\
B_{2} &= (Z_{B_2} - T_{\Theta}(Z_{B_1}))/ \text{Sigmoid}(S_{\Theta}(Z_{B_1})).
\end{aligned}
\end{equation} 
Instead of using the exponential function as the scale function, we use the Sigmoid function with range $(0,1)$ to ensure numerical stability when stacking many layers. We find that the exponential scale function leads to a large reconstruction error when the number of affine coupling layers increases. The scale function $S_{\Theta}$ and the transformation function $T_{\Theta}$ in each affine coupling layer can have arbitrary structures. We use multiple $3\times3$ conv2d->BatchNorm2d->ReLU layers to build them. 
The logarithm of the Jacobian determinant of each affine coupling layer is 
\begin{equation}
\small
\begin{aligned}\log \mid \det (\frac{\partial Z_B}{\partial B}) \mid=\sum_j \log \text{Sigmoid}(S_{\Theta}(B_1))_j.
\end{aligned}
\end{equation}

\begin{figure}[!tb]
\vspace{-0.3in}
\centering
\includegraphics[width=0.35\textwidth]{Fig/glow.pdf}
\vspace{-0.15in}
\caption{ 
A variant of Glow $f_{\mathcal{B}}$ for bonds' adjacency tensors.
\label{fig:glow}}
\vspace{-0.3in}
\end{figure}

In order to learn an optimal partition and to ensure the model's numerical stability, we also use the 
invertible $1 \times 1$ convolution layer and 
the actnorm layer adopted in Glow.
In order to get more channels for masking and transformation, we \mytag{squeeze} the spatial size of $B$ from $\mathbb{R}^{c \times n \times n}$ to $\mathbb{R}^{(c*h*h) \times \frac{n}{h} \times \frac{n}{h}}$ by a factor $h$ and apply the affine coupling transformation to the squeezed data. The reverse \mytag{unsqueeze} operation is applied to the output.
We summarize our bond flow in Figure~\ref{fig:glow}.

\subsection{Validity Correction}
\label{sec:model:validity}
 Molecules must follow the valency constraints for each atom, but assembling a molecule from a generated bond tensor and atom matrix may lead to chemically invalid ones. Here we define the valency constraint for the $i^{th}$ atom as:
 \begin{equation}
 \small
\begin{aligned}
\sum_{c,j} c\times B(c,i,j) \le \text{Valency}(\text{Atom}_i) + Ch
\end{aligned}
\end{equation}
where $B \in \{0,1\}^{c\times n \times n}$ is the one-hot bond tensor over the $c \in \{1,2,3\}$ orders of chemical bonds (single, double, triple) and $Ch \in \mathbb{N}$ represents the formal charge. 
Different from the existing valency constraints defined in \\cite{you2018graph, popova2019molecularrnn}, we consider the effect of formal charge, which may introduce extra bonds for the charged atoms. For example, ammonium [NH4]$^+$ may have 4 bonds for N instead of 3. Similarly, S$^+$ and O$^+$ may have 3 bonds instead of 2. Here we only consider $Ch=1$ for N$^+$, S$^+$ and O$^+$ and set $Ch=0$ for other atoms.\n\nIn contrast with the existing rejection-sampling-based validity checks adopted in the autoregressive models \\cite{you2018graph, popova2019molecularrnn}, we introduce a new post-hoc validity correction procedure \nafter generating a molecule $M$ at once: 1) check the valency constraints of $M$; 2) if all the atoms of $M$ follow the valency constraints, return the largest connected component of the molecule $M$ and end the procedure; 3) if there exists an invalid atom $i$, namely $\\sum_{c,j} c \\times B(c,i,j) > \\text{Valency}(\\text{Atom}_i) + Ch$, sort the bonds of $i$ by their order and delete $1$ order from the bond with the largest order; 4) go to step 1). Our validity correction procedure tries to make a minimum change to the existing molecule and to keep the largest connected component as large as possible.\n\n\n\n\\subsection{Inference and Generation}\n\\label{sec:model:all}\nWe summarize the inference (encoding) and generation (decoding) of molecular graphs by our MoFlow\\ in Algorithm~\\ref{alg:inference} and Algorithm~\\ref{alg:generate} respectively. We visualize the overall framework in Figure~\\ref{fig:model}. 
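The four-step correction loop above can be sketched directly on the one-hot bond tensor. This is a simplified illustration, not the released code: the valency table covers only a few atom types, and the final largest-connected-component step is left to the caller.

```python
import numpy as np

# Illustrative valency table; Ch = 1 only for the listed charged atoms.
VALENCY = {'C': 4, 'N': 3, 'O': 2, 'S': 2}
CHARGE_BONUS = {'N+': 1, 'S+': 1, 'O+': 1}

def bond_order_sum(B, i):
    # sum_{c,j} c * B(c, i, j) with bond orders c in {1, 2, 3}
    orders = np.arange(1, B.shape[0] + 1)[:, None]
    return float(np.sum(orders * B[:, i, :]))

def valency_ok(B, atoms, i):
    ch = CHARGE_BONUS.get(atoms[i], 0)
    return bond_order_sum(B, i) <= VALENCY[atoms[i].rstrip('+')] + ch

def validity_correction(B, atoms):
    """Steps 1)-4): repeatedly reduce the largest-order bond of an invalid atom."""
    B = B.copy()
    while True:
        bad = [i for i in range(len(atoms)) if not valency_ok(B, atoms, i)]
        if not bad:
            return B  # step 2): caller would keep the largest connected component
        i = bad[0]
        for c in range(B.shape[0] - 1, -1, -1):  # step 3): highest order first
            js = np.nonzero(B[c, i, :])[0]
            if js.size:
                j = int(js[0])
                B[c, i, j] = B[c, j, i] = 0      # delete 1 order:
                if c > 0:                         # triple->double, double->single,
                    B[c - 1, i, j] = B[c - 1, j, i] = 1  # single->no bond
                break

# Neutral N with four single bonds is invalid; correction should drop one bond.
atoms = ['N', 'C', 'C', 'C', 'C']
B = np.zeros((3, 5, 5))
for k in range(1, 5):
    B[0, 0, k] = B[0, k, 0] = 1
B_fixed = validity_correction(B, atoms)
```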
As shown in the algorithms, our MoFlow\\ has the merits of exact likelihood estimation\/training, one-pass inference, invertible one-pass generation, and a chemical validity guarantee.\n\\begin{algorithm}[!h]\n\\footnotesize\n\\SetAlgoLined\n \\textbf{Input:} $f_\\mathcal{A|B}$: graph conditional flow for atoms, $f_\\mathcal{B}$: Glow for bonds, $A$: atom matrix, $B$: bond tensor, $P_{\\mathcal{Z_*}}$: isotropic Gaussian distributions.\\\\\n\\textbf{Output:} \n$Z_M$: latent representation of molecule $M$, \n$\\log P_\\mathcal{M}(M)$: log-likelihood of molecule $M$. \\\\\n\\quad $Z_B = f_\\mathcal{B}(B)$ \\\\\n\\quad $\\log P_{\\mathcal{B}}(B)=\\log P_{\\mathcal{Z_B}}(Z_B) + \\log \\mid \\det (\\frac{\\partial f_{\\mathcal{B}}}{\\partial B}) \\mid$ \\\\\n\\quad $\\hat{B} = \\text{graphnorm}(B)$ \\\\\n\\quad $Z_{A|B} = f_\\mathcal{A|B}(A|\\hat{B})$ \\\\\n\\quad $ \\log P_\\mathcal{A|B} (A | B) = \\log P_{\\mathcal{Z_{A|B}}}(Z_{A|B}) + \\log \\mid \\det (\\frac{\\partial f_{\\mathcal{A|B}}}{\\partial A} )\\mid $ \\\\\n\\quad $Z_M = (Z_{A|B}, Z_B)$\\\\\n\\quad $\\log P_{\\mathcal{M}} (M) = \\log P_\\mathcal{B}(B) + \\log P_\\mathcal{A|B}(A|B)$ \\\\\n\\quad \\textbf{Return:} $Z_M$, $\\log P_\\mathcal{M}(M)$\\\\\n \\caption{Exact Likelihood Inference (Encoding) of Molecular Graphs by MoFlow\\ \\label{alg:inference}}\n\\end{algorithm}\n\n\\begin{algorithm}[!h]\n\\footnotesize\n\\SetAlgoLined\n \\textbf{Input:} $f_\\mathcal{A|B}$: graph conditional flow for atoms, $f_\\mathcal{B}$: Glow for bonds, \n $Z_M$: latent representation of molecule $M$ or a sample from the prior Gaussian, \n validity-correction: validity correction rules.\\\\\n\\textbf{Output:} $M$: a molecule\\\\\n\\quad $(Z_{A|B}, Z_B) = Z_M$\\\\\n\\quad $B = f_\\mathcal{B}^{-1}(Z_B)$ \\\\\n\\quad $\\hat{B} = \\text{graphnorm}(B)$ \\\\\n\\quad $A = f_\\mathcal{A|B}^{-1}(Z_{A|B}|\\hat{B})$ \\\\\n\\quad $M = \\text{validity-correction}(A,B)$\\\\\n\\quad \\textbf{Return:} $M$\\\\\n \\caption{Molecular Graph 
Generation (Decoding) by the Reverse Transformation of MoFlow\\ \\label{alg:generate}}\n\\end{algorithm}\n\n \n\n\\section{Experiments}\n\\label{sec:exp}\n\nFollowing previous works \\cite{jin2018junction,shi2020graphaf}, we validate our MoFlow\\ by answering the following questions:\n\\begin{compactitem}\n\\item \\mytag{Molecular graph generation and reconstruction (Sec.~\\ref{sec:exp:generation}):}\nCan our MoFlow\\ memorize and reconstruct all the molecules in the training datasets? Can our MoFlow\\ generalize to generate as many novel, unique and valid molecules as possible?\n\n\\item \\mytag{Visualizing continuous latent space (Sec.~\\ref{sec:exp:viz}):}\nCan our MoFlow\\ embed molecular graphs into a continuous latent space with reasonable chemical similarity?\n\n\\item \\mytag{Property optimization (Sec.~\\ref{sec:exp:opt}):}\nCan our MoFlow\\ generate novel molecular graphs with optimized properties?\n\n\\item \\mytag{Constrained property optimization (Sec.~\\ref{sec:exp:copt}):}\nCan our MoFlow\\ generate novel molecular graphs with optimized properties and at the same time preserve the chemical similarity as much as possible?\n\\end{compactitem}\n\n\\mytag{Baselines.} \nWe compare our MoFlow\\ with: a) the state-of-the-art VAE-based method JT-VAE \\cite{jin2018junction}, which captures the chemical validity by encoding and decoding a tree-structured scaffold of molecular graphs; b) the state-of-the-art autoregressive models GCPN \\cite{you2018graph} and MolecularRNN (MRNN) \\cite{popova2019molecularrnn} with reinforcement learning for property optimization, which generate molecules in a sequential manner; c) the flow-based methods GraphNVP \\cite{madhawa2019graphnvp} and GRF \\cite{honda2019graph}, which generate molecules in one shot, and the state-of-the-art autoregressive-flow-based model GraphAF \\cite{shi2020graphaf}, which generates molecules in a sequential way.\n\n\n\n\\mytag{Datasets.} \nWe use two datasets, QM9 \\cite{ramakrishnan2014quantum} and ZINC250K 
\\cite{irwin2012zinc}, for our experiments and summarize them in Table~\\ref{tab:data}. QM9 contains $133,885$ molecules with a maximum of $9$ atoms of $4$ different types, and ZINC250K has $249,455$ drug-like molecules with a maximum of $38$ atoms of $9$ different types. The molecules are kekulized by the chemical software RDKit \\cite{landrum2006rdkit} and the hydrogen atoms are removed. There are three types of edges, namely single, double, and triple bonds, for all molecules. Following the pre-processing procedure in \\cite{madhawa2019graphnvp}, we encode each atom and bond by one-hot encoding, pad the molecules which have fewer than the maximum number of atoms with a virtual atom, augment the adjacency tensor of each molecule by a virtual edge channel representing no bonds between atoms, and dequantize \\cite{madhawa2019graphnvp,dinh2016density} the discrete one-hot-encoded data by adding uniform random noise $U[0,0.6]$ to each dimension,\nleading to atom matrix $A\\in \\mathbb{R}^{9\\times5}$ and bond tensor $B\\in \\mathbb{R}^{4 \\times 9\\times 9}$ for QM9, and $A\\in \\mathbb{R}^{38\\times10}$ and\n$B\\in \\mathbb{R}^{4 \\times 38\\times 38}$ for ZINC250K. \n\\begin{table}[!htb] \n\\vspace{-0.15in}\n\\small\n\\centering \n\\caption{Statistics of the datasets.}\n\\vspace{-0.05in}\n\\begin{tabular\n{ l p{1.2cm} p{1.2 cm} p{1.2cm} p{1.2cm} p{1.2cm} }\n\\toprule \n \\bf{} & \\bf{\\#Mol. Graphs} & \\bf{Max. \\#Nodes} & \\bf{\\#Node Types} & \\bf{\\#Edge Types}\\\\ \n\\midrule\nQM9 & 133,885 & 9 & 4+1 & 3+1 \\\\ \nZINC250K & 249,455 & 38 & 9+1 & 3+1 \\\\ \n\\bottomrule \n\\end{tabular}\n\\label{tab:data} \n\\vspace{-0.1in}\n\\end{table}\n\n\\mytag{{\\bf MoFlow\\ } Setup.} \nTo be comparable with the one-shot flow baseline GraphNVP \\cite{madhawa2019graphnvp}, for ZINC250K we adopt $10$ coupling layers and $38$ graph coupling layers for the bonds' Glow and the atoms' graph conditional flow respectively. 
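The preprocessing pipeline above (one-hot encoding, virtual-atom padding, a virtual no-bond channel, and $U[0,0.6]$ dequantization) can be sketched for the QM9 shapes as follows; the function and variable names are illustrative, not from the released code. Because the noise is bounded by $0.6<1$, an argmax per one-hot dimension still recovers the discrete molecule.

```python
import numpy as np

rng = np.random.default_rng(0)
MAX_ATOMS, ATOM_TYPES, BOND_CHANNELS = 9, 4 + 1, 3 + 1  # QM9: +1 virtual type/channel

def preprocess(atom_ids, bonds):
    """atom_ids: atom type indices in [0, 4); bonds: (i, j, order) with order in {1,2,3}."""
    A = np.zeros((MAX_ATOMS, ATOM_TYPES))
    for i, t in enumerate(atom_ids):
        A[i, t] = 1
    A[len(atom_ids):, -1] = 1                  # pad missing atoms with the virtual type
    B = np.zeros((BOND_CHANNELS, MAX_ATOMS, MAX_ATOMS))
    B[-1] = 1                                  # virtual "no bond" channel everywhere...
    for i, j, order in bonds:
        B[order - 1, i, j] = B[order - 1, j, i] = 1
        B[-1, i, j] = B[-1, j, i] = 0          # ...except where a real bond exists
    # dequantize: add uniform noise U[0, 0.6] to every dimension
    A = A + rng.uniform(0, 0.6, size=A.shape)
    B = B + rng.uniform(0, 0.6, size=B.shape)
    return A, B

# a toy 3-atom molecule with one single and one double bond
A, B = preprocess([0, 1, 2], [(0, 1, 1), (1, 2, 2)])
```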
We use two $3\\times3$ convolution layers with $512,512$ hidden dimensions in each coupling layer. \nFor each graph coupling layer, we set one relational graph convolution layer with $256$ dimensions followed by a two-layer multilayer perceptron with $512,64$ hidden dimensions. \nAs for QM9, we adopt $10$ coupling layers and $27$ graph coupling layers for the bonds' Glow and the atoms' graph conditional flow respectively. There are two $3\\times3$ convolution layers with $128,128$ hidden dimensions in each coupling layer, and one graph convolution layer with $64$ dimensions followed by a two-layer multilayer perceptron with $128,64$ hidden dimensions in each graph coupling layer. \n As for the optimization experiments, we further train a regression model to map the latent embeddings to different property scalars (discussed in Sec.~\\ref{sec:exp:opt} and \\ref{sec:exp:copt}) by a multi-layer perceptron with an 18-dim linear layer -> ReLU -> 1-dim linear layer structure. For each dataset, we use the same trained model for all the following experiments.\n\n\\mytag{Empirical Running Time.} \nFollowing the above setup, we implemented our MoFlow\\ in PyTorch-1.3.1 and trained it with the Adam optimizer \\cite{kingma2014adam} with learning rate $0.001$, batch size $256$, and $200$ epochs for both datasets on $1$ GeForce RTX 2080 Ti GPU and $16$ CPU cores. \nOur MoFlow\\ finished $200$-epoch training within $22$ hours ($6.6$ minutes\/epoch) for ZINC250K and $3.3$ hours ($0.99$ minutes\/epoch) for QM9. Thanks to efficient one-pass inference\/embedding, our MoFlow\\ takes a negligible $7$ minutes to learn an additional regression layer trained in $3$ epochs for the optimization experiments on ZINC250K. 
In comparison, on the ZINC250K dataset, GraphNVP \\cite{madhawa2019graphnvp} costs $38.4$ hours ($11.5$ minutes\/epoch) in our PyTorch implementation with the same configurations, and the estimated total running time of GraphAF \\cite{shi2020graphaf} is $124$ hours ($24$ minutes\/epoch), which consists of the reported $4$ hours for a generation model trained for $10$ epochs and an estimated $120$ hours for another optimization model trained for $300$ epochs. The reported running time of JT-VAE \\cite{jin2018junction} is roughly $24$ hours in \\cite{you2018graph}. \n\n\\subsection{Generation and Reconstruction}\n\\label{sec:exp:generation}\n\\begin{table*}[!htb] \n\\small\n\\centering\n\\caption{Generation and reconstruction performance on the QM9 dataset.}\n\\vspace{-0.1in}\n\\begin{tabular}{l c c c c c c }\n\\toprule\n & \\textbf{\\% Validity} & \\textbf{\\% Validity w\/o check} & \\textbf{\\% Uniqueness} & \\textbf{\\% Novelty} & \\textbf{\\% N.U.V.} & \\textbf{\\% Reconstruct}\\\\\n\\midrule\nGraphNVP \\cite{madhawa2019graphnvp} & $83.1\\pm0.5$ & n\/a & $99.2\\pm0.3$& $58.2\\pm1.9$ & $47.97$ & $100$\\\\ \nGRF \\cite{honda2019graph}& $84.5\\pm 0.70$ & n\/a & $66.0\\pm1.15$& $58.6\\pm 0.82$ & $32.68$ &$100$\\\\ \nGraphAF \\cite{shi2020graphaf}& $100$ & $67$& $94.51$ & $88.83$ & $83.95$ &$100$\\\\ \n\\midrule\n\\textbf{MoFlow} & $\\mathbf{100.00\\pm0.00}$ & $\\mathbf{96.17\\pm0.18}$ & $\\mathbf{99.20\\pm0.12}$ & $\\mathbf{98.03\\pm0.14}$ & $\\mathbf{97.24\\pm0.21}$ & $\\mathbf{100.00\\pm0.00}$ \\\\ \n\\bottomrule \n\\end{tabular}\n\\label{tab:qm9}\n\\end{table*}\n\n\n\\begin{table*}[!htb] \n\\small\n\\centering\n\\caption{Generation and reconstruction performance on the ZINC250K dataset.\n}\n\\vspace{-0.1in}\n\\begin{tabular}{l c c c c c c }\n\\toprule\n & \\textbf{\\% Validity} & \\textbf{\\% Validity w\/o check} & \\textbf{\\% Uniqueness} & \\textbf{\\% Novelty} & \\textbf{\\% N.U.V.} & \\textbf{\\% Reconstruct}\\\\\n\\midrule\nJT-VAE 
\\cite{jin2018junction} & $100$ & n\/a & $100$& $100$ & $100$ & $76.7$\\\\ \nGCPN \\cite{you2018graph} & $100$ & $20$ & $99.97$& $100$ & $99.97$ & n\/a\\\\ \nMRNN \\cite{popova2019molecularrnn}& $100$ & $65$ & $99.89$ & $100$ & $99.89$ & n\/a\\\\ \nGraphNVP \\cite{madhawa2019graphnvp} & $42.6\\pm1.6$ & n\/a & $94.8\\pm0.6$& $100$ & $40.38$ & $100$\\\\ \nGRF \\cite{honda2019graph} & $73.4\\pm 0.62$ & n\/a & $53.7\\pm 2.13$& $100$ & $39.42$ &$100$\\\\ \nGraphAF \\cite{shi2020graphaf}& $100$ & $68$& $99.10$ & $100$ & $99.10$ &$100$\\\\ \n\\midrule\n\\textbf{MoFlow} & $\\mathbf{100.00\\pm0.00}$ & $\\mathbf{81.76\\pm0.21}$ & $\\mathbf{99.99\\pm0.01}$ & $\\mathbf{100.00\\pm0.00}$ & $\\mathbf{99.99\\pm0.01}$ & $\\mathbf{100.00\\pm0.00}$ \\\\ \n\\bottomrule \n\\end{tabular}\n\\label{tab:zinc}\n\\end{table*}\n\\mytag{Setup.} In this task, we evaluate the capability of our MoFlow\\ to generate novel, unique and valid molecular graphs, and whether it can reconstruct input molecular graphs from their latent representations. We adopt the widely-used metrics, including: \\textbf{Validity}, which is the percentage of chemically valid molecules in all the generated molecules; \\textbf{Uniqueness}, which is the percentage of unique valid molecules in all the generated molecules; \\textbf{Novelty}, which is the percentage of generated valid molecules which are not in the training dataset; and the \\textbf{Reconstruction} rate, which is the percentage of molecules in the input dataset which can be reconstructed from their latent representations. Because the novelty score also counts potentially duplicated novel molecules, we propose a new metric, \\textbf{N.U.V.}, which is the percentage of \\underline{N}ovel, \\underline{U}nique, and \\underline{V}alid molecules in all the generated molecules. We also compare the validity of the ablated models without the validity check or validity correction, denoted as \\textbf{Validity w\/o check} as in \\cite{shi2020graphaf}. 
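Given lists of generated and training molecules plus a validity predicate, the metrics above can be computed as fractions of all generated samples. This sketch follows the definitions as stated (all denominators are the full generated set); `is_valid` would be an RDKit-style sanitization check in practice and is supplied by the caller here.

```python
def generation_metrics(generated, training_set, is_valid):
    """Validity, uniqueness, novelty, and N.U.V. as fractions of all generated samples."""
    n = len(generated)
    valid = [m for m in generated if is_valid(m)]
    unique_valid = set(valid)
    novel_valid = [m for m in valid if m not in training_set]  # duplicates counted
    nuv = unique_valid - set(training_set)                     # novel, unique AND valid
    return {
        'validity': len(valid) / n,
        'uniqueness': len(unique_valid) / n,
        'novelty': len(novel_valid) / n,
        'nuv': len(nuv) / n,
    }

# toy check with 5 "molecules": 'x' is invalid, 'c' is already in the training set
scores = generation_metrics(['a', 'a', 'b', 'c', 'x'], {'c'}, lambda m: m != 'x')
```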
\n\nThe prior distribution of the latent space follows a spherical multivariate Gaussian distribution\n$\\mathcal{N}(0, {(t \\sigma)}^2 \\mathbf{I})$, where $\\sigma$ is the learned standard deviation and the hyper-parameter $t$ is the temperature for the reduced-temperature generative model \\cite{parmar2018image, kingma2018glow,madhawa2019graphnvp}. We use $t=0.85$ in the generation for both the QM9 and ZINC250K datasets, and $t=0.6$ for the ablation study without validity correction. To be comparable with the state-of-the-art baseline GraphAF \\cite{shi2020graphaf}, we generate $10,000$ molecules, i.e., sampling $10,000$ latent vectors from the prior and then decoding them by the reverse transformation of our MoFlow. We report the mean and standard deviation of the results over $5$ runs. \nAs for the reconstruction, we encode all the molecules from the training dataset into latent vectors by the encoding transformation of our MoFlow\\ and then reconstruct the input molecules from these latent vectors by the reverse transformation of MoFlow.\n\n\\mytag{Results.} Table~\\ref{tab:qm9} and Table~\\ref{tab:zinc} show that our MoFlow\\ outperforms the state-of-the-art models on all the six metrics for both the QM9 and ZINC250K datasets. \nThanks to the invertible characteristic of the flow-based models, our MoFlow\\ builds a one-to-one mapping from the input molecule $M$ to its corresponding latent vector $Z$, enabling a $100\\%$ reconstruction rate as shown in Table~\\ref{tab:qm9} and Table~\\ref{tab:zinc}. In contrast, the VAE-based method JT-VAE and the autoregressive methods GCPN and MRNN cannot reconstruct all the input molecules. Compared with the one-shot flow-based models GraphNVP and GRF, by incorporating the validity correction mechanism, our MoFlow\\ achieves $100\\%$ validity, leading to significant improvements of the validity and N.U.V. scores for both datasets. Specifically, the N.U.V. scores of MoFlow\\ are $2$ and $3$ times as large as the N.U.V. 
scores of GraphNVP and GRF respectively in Table~\\ref{tab:qm9}. \nEven without validity correction, our MoFlow\\ still outperforms \nthe validity scores of GraphNVP and GRF by a large margin.\nCompared with the autoregressive flow-based model GraphAF, we find\nour MoFlow\\ outperforms GraphAF\nby an additional $16\\%$ and $0.8\\%$ with respect to the N.U.V. scores for QM9 and ZINC250K respectively, indicating that our MoFlow\\ generates more novel, unique and valid molecules. Indeed, MoFlow\\ achieves better uniqueness and novelty scores than GraphAF for both datasets. Moreover, our MoFlow\\ without validity correction still outperforms GraphAF without the validity check by a large margin w.r.t. the validity score (validity w\/o check in Table~\\ref{tab:qm9} and Table~\\ref{tab:zinc}) for both datasets, implying the superiority of our MoFlow, which captures the molecular structures in a holistic way, over the autoregressive models, which capture them sequentially.\n\nIn conclusion, our MoFlow\\ not only memorizes and reconstructs all the training molecular graphs, but also generates more novel, unique and valid molecular graphs than existing models, indicating that our MoFlow\\ learns a strict superset of the training data and explores the unknown chemical space better.\n\n\n\\subsection{Visualizing Continuous Latent Space}\n\\label{sec:exp:viz}\n\\begin{figure*}[!th]\n\\vspace{-0.2in}\n\\centering\n\\includegraphics[width=.6\\textwidth]{Fig\/zinc_int.pdf}\n\\vspace{-0.1in}\n\\caption{ \nVisualization of the learned latent space by our MoFlow. Top: Visualization of the grid neighbors of a seed molecule in the center, which serves as the baseline for measuring similarity. Bottom: Interpolation between two seed molecular graphs; the left one is the baseline molecule for measuring similarity. Seed molecules are highlighted in red boxes and they are randomly selected from ZINC250K. 
\n\\label{fig:zincint}}\n\\end{figure*}\n\n\n\n\\mytag{Setup.} We examine the learned latent space of our MoFlow, denoted as $f$, by visualizing the decoded molecular graphs from a neighborhood of a latent vector in the latent space. Similar to \\cite{kusner2017grammar, jin2018junction}, we encode a seed molecule $M$ into $Z=f(M)$ and then grid search along two random orthogonal directions with unit vectors $X$ and $Y$ based on $Z$; we then get a new latent vector by $Z' = Z + \\lambda_X*X + \\lambda_Y*Y$, where $\\lambda_X$ and $\\lambda_Y$ are the searching steps. Different from VAE-based models, our MoFlow\\ gets the decoded molecules efficiently by the one-pass inverse transformation $M'=f^{-1}(Z')$. In contrast, VAE-based models such as JT-VAE need to decode each latent vector $10-100$ times and autoregressive models like GCPN, MRNN and GraphAF need to generate each molecule sequentially.\nFurthermore, we measure the chemical similarity between each neighboring molecule and the central molecule. We choose the Tanimoto index \\cite{bajusz2015tanimoto} as the chemical similarity metric and indicate the similarity values by a heatmap. We further visualize a linear interpolation between two molecules to show their changing trajectory, \nsimilar to the interpolation between images \\cite{kingma2018glow}.\n\n\\mytag{Results.} We show the visualization of the latent space in Figure~\\ref{fig:zincint}. We find the latent space is very smooth and the interpolations between two latent points change a molecular graph only slightly. Quantitatively, we find the chemical similarity between molecules largely corresponds to the Euclidean distance between their latent vectors, implying that our MoFlow\\ embeds similar molecular graph structures into similar latent embeddings. 
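A minimal sketch of the neighborhood construction: draw two random directions, orthonormalize them by Gram-Schmidt, and form the grid $Z' = Z + \lambda_X X + \lambda_Y Y$; each $Z'$ would then be decoded in one pass by $f^{-1}$, which is omitted here (the dimensionality and step range are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                  # latent dimensionality (illustrative)
Z = rng.normal(size=d)                  # Z = f(M) for a seed molecule M

# two random orthonormal directions X, Y via Gram-Schmidt
X = rng.normal(size=d)
X /= np.linalg.norm(X)
Y = rng.normal(size=d)
Y -= (Y @ X) * X
Y /= np.linalg.norm(Y)

steps = np.linspace(-2.0, 2.0, 5)       # searching steps lambda_X, lambda_Y
grid = [[Z + lx * X + ly * Y for lx in steps] for ly in steps]
# each grid point would be decoded in one pass: M' = f^{-1}(Z')
```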
Searching in such a continuous latent space learnt by our MoFlow\\ is the basis for the molecular property optimization and constrained optimization discussed in the following sections.\n\n\n\n\n\\vspace{-0.1in}\n\\subsection{Property Optimization}\n\\label{sec:exp:opt}\n\\mytag{Setup.} \nThe property optimization task aims at generating novel molecules with the best Quantitative Estimate of Druglikeness (QED) score \\cite{bickerton2012quantifying}, which measures the drug-likeness of generated molecules. Following the previous works \\cite{you2018graph,popova2019molecularrnn}, we report the best property scores of novel molecules discovered by each method.\n\nWe use the pre-trained MoFlow, denoted as $f$, from the generation experiment to encode a molecule $M$ and get the molecular embedding $Z = f(M)$, and further train a multilayer perceptron to regress the embedding $Z$ of the molecules to their property values $y$. We then search for the best molecules by the gradient ascent method, namely $Z' = Z + \\lambda * \\frac{d y}{ dZ}$, where $\\lambda$ is the search step size. We conduct the above gradient ascent for $K$ steps. \nWe decode the new embedding $Z'$ in the latent space to the discovered molecule by the reverse mapping $M' = f^{-1}(Z')$. The molecule $M'$ is novel if $M'$ does not exist in the training dataset. \n\n\\begin{table}[!h] \n\\vspace{-0.1in}\n\\footnotesize\n\\centering \n\\caption{Discovered novel molecules with the best QED scores. Our MoFlow\\ finds more molecules with the best QED scores. 
More results in Figure~\\ref{fig:topqed}.}\n\\vspace{-0.1in}\n\\begin{tabular\n{ l p{1.2cm} p{1.2 cm} p{1.2cm} p{1.2cm} p{1.2cm} }\n\\toprule \n \\bf{Method} & \\bf{1st} & \\bf{2nd} & \\bf{3rd} & \\bf{4th}\\\\ \n\\midrule\nZINC (Dataset) & 0.948 & 0.948 & 0.948 & 0.948 \\\\ \n\\midrule\nJT-VAE & 0.925 & 0.911 & 0.910 & -\\\\\nGCPN & 0.948 & 0.947 & 0.946 & -\\\\\nMRNN & 0.948 & 0.948 & 0.947 & -\\\\\nGraphAF & 0.948 & 0.948 & 0.947 & 0.946\\\\\n\\midrule\n\\bf{MoFlow\\ } & \\bf{0.948} & \\bf{0.948} & \\bf{0.948} & \\bf{0.948}\\\\\n\\bottomrule \n\\end{tabular}\n\\label{tab:topqed} \n\\vspace{-0.2in}\n\\end{table}\n\\begin{figure}[!h]\n\\vspace{-0.25in}\n\\small\n\\centering\n\\includegraphics[width=.4\\textwidth]{Fig\/top_qed2.pdf}\n\\vspace{-0.1in}\n\\caption{ \nIllustration of discovered novel molecules with the best druglikeness QED scores.\n\\label{fig:topqed}}\n\\vspace{-0.1in}\n\\end{figure}\n\\mytag{Results.}\nWe report the discovered novel molecules sorted by their QED scores in Table~\\ref{tab:topqed}. We find that previous methods can only find very few molecules with the best QED score ($=0.948$). In contrast, our MoFlow\\ finds many more novel molecules with the best QED value than all the baselines. We show more molecular structures with top QED values in Figure~\\ref{fig:topqed}.\n\n\n\n\\vspace{-0.1in}\n\\subsection{Constrained Property Optimization}\n\\label{sec:exp:copt}\n\\mytag{Setup.}\nThe constrained property optimization aims at finding a new molecule $M'$ with the largest similarity score $sim(M,M')$ and the largest improvement of a targeted property value $y(M') - y(M)$ given a molecule $M$. 
Following the experimental setup of \\cite{jin2018junction,you2018graph}, we choose the Tanimoto similarity of Morgan fingerprints \\cite{rogers2010extended} as the similarity metric, the penalized logP (plogp) as the target property, and draw $M$ from the $800$ molecules with the lowest plogp scores in the training dataset of ZINC250K. \nWe use a similar gradient ascent method as discussed in the previous subsection to search for optimized molecules.\nAn optimization succeeds if we find a novel molecule $M'$ which is different from $M$, with $y(M') - y(M) \\ge 0$ and $sim(M,M') \\ge \\delta$, within $K$ steps, where $\\delta$ is the smallest similarity threshold used to screen the optimized molecules.\n\n\\mytag{Results.} Results are summarized in Table~\\ref{tab:plogp}. We find that our MoFlow\\ finds the most similar new molecules while at the same time achieving very good plogp improvements. Compared with the state-of-the-art VAE model JT-VAE, our MoFlow\\ achieves a much higher similarity score and property improvement, implying that our model is good at interpolation and learning continuous molecular embeddings. Compared with the state-of-the-art reinforcement-learning-based methods GCPN and GraphAF, which are good at generating molecules step-by-step with targeted property rewards, our MoFlow\\ achieves the best similarity scores and the second-best property improvements. 
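The gradient-ascent search used in both optimization experiments can be sketched as below. To keep the example self-contained we use an identity decoder and a toy quadratic "property" in place of the trained flow and regressor, so all names and numbers here are illustrative stand-ins, not the paper's configuration.

```python
import numpy as np

def constrained_search(Z0, y, grad_y, similarity, decode, K=100, lam=0.1, delta=0.4):
    """K-step gradient ascent Z' = Z + lam * dy/dZ, keeping candidates that are
    different from the start, improve the property, and stay within similarity delta."""
    Z, best = Z0.copy(), None
    M0 = decode(Z0)
    for _ in range(K):
        Z = Z + lam * grad_y(Z)
        M = decode(Z)
        if (not np.allclose(M, M0) and y(M) >= y(M0)
                and similarity(M0, M) >= delta):
            best = M  # success: novel, improved, similar enough
    return best

# toy stand-ins: identity decoder and property y(Z) = -||Z - target||^2
target = np.array([1.0, 2.0])
y = lambda Z: -np.sum((Z - target) ** 2)
grad_y = lambda Z: -2.0 * (Z - target)
sim = lambda a, b: 1.0 / (1.0 + np.linalg.norm(a - b))
result = constrained_search(np.zeros(2), y, grad_y, sim, decode=lambda Z: Z)
```

Dropping the similarity condition (i.e. $\delta = 0$ with no screening) recovers the unconstrained property-optimization search of the previous subsection.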
We illustrate one optimization example in Figure~\\ref{fig:copt} with very similar structures but a large improvement w.r.t. the penalized logP.\n\n\\begin{table}[!tb] \n\\vspace{-0.05in}\n\\scriptsize\n\\centering\n\\caption{Constrained optimization on penalized logP}\n\\vspace{-0.1in}\n\\begin{tabular}{l c c c c c c c}\n\\toprule\n& \\multicolumn{3}{c}{JT-VAE} & \\multicolumn{3}{c}{GCPN}\\\\\n\\cmidrule(l){2-4} \\cmidrule(l){5-7}\n $\\delta$ & \\textbf{Improvement} & \\textbf{Similarity} & \\textbf{Success} & \\textbf{Improvement} & \\textbf{Similarity} & \\textbf{Success} \\\\\n\\midrule\n\\textbf{0.0} & $ 1.91\\pm 2.04$ & $ 0.28\\pm 0.15$ & $97.5\\%$ &$ 4.20\\pm 1.28$ & $ \\mathbf{0.32\\pm 0.12}$& $100\\%$\\\\ \n\\textbf{0.2} & $ 1.68\\pm 1.85$ & $ 0.33\\pm 0.13$ & $97.1\\%$ &$4.12 \\pm 1.19$ & $ 0.34 \\pm 0.11$& $100\\%$\\\\ \n\\textbf{0.4} & $0.84 \\pm 1.45$ & $0.51 \\pm 0.10$ & $83.6\\%$ &$2.49 \\pm 1.30$ & $ 0.48\\pm 0.08$& $100\\%$\\\\ \n\\textbf{0.6} & $ 0.21\\pm 0.71$ & $ 0.69\\pm 0.06$ & $46.4\\%$ &$0.79 \\pm 0.63$ & $ 0.68\\pm 0.08$& $100\\%$\\\\ \n\\midrule\n& \\multicolumn{3}{c}{GraphAF} & \\multicolumn{3}{c}{\\textbf{MoFlow\\ }}\\\\\n\\cmidrule(l){2-4} \\cmidrule(l){5-7}\n $\\delta$ & \\textbf{Improvement} & \\textbf{Similarity} & \\textbf{Success} & \\textbf{Improvement} & \\textbf{Similarity} & \\textbf{Success} \\\\\n\\midrule\n\\textbf{0.0} & $ 13.13\\pm 6.89$ & $0.29 \\pm 0.15$ & $100\\%$ &$ 8.61\\pm 5.44$ & $0.30 \\pm 0.20 $ & $98.88\\%$\\\\ \n\\textbf{0.2} & $ 11.90\\pm 6.86$ & $ 0.33\\pm 0.12$ & $100\\%$ &$7.06 \\pm 5.04$ & $\\mathbf{0.43 \\pm 0.20 }$& $96.75\\%$\\\\ \n\\textbf{0.4} & $ 8.21\\pm 6.51$ & $0.49 \\pm 0.09$ & $99.88\\%$ &$4.71 \\pm4.55 $ & $ \\mathbf{0.61\\pm0.18} $& $85.75\\%$\\\\ \n\\textbf{0.6} & $4.98 \\pm 6.49$ & $0.66 \\pm 0.05$ & $96.88\\%$ &$ 2.10\\pm 2.86$ & $ \\mathbf{0.79\\pm 0.14}$& $58.25\\%$\\\\ \n\\bottomrule \n\\end{tabular}\n\\label{tab:plogp} 
\n\\end{table}\n\n\\begin{figure}[!t]\n\\vspace{-0.1in}\n\\centering\n\\includegraphics[width=0.4\\textwidth]{Fig\/copt.pdf}\n\\vspace{-0.1in}\n\\caption{ \nAn illustration of the constrained optimization of a molecule leading to an improvement of $+16.48$ w.r.t. the penalized logP and with Tanimoto similarity $0.624$. The modified part is highlighted.\n\\label{fig:copt}}\n\\vspace{-0.1in}\n\\end{figure}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we propose a novel deep graph generative model MoFlow\\ for molecular graph generation. Our MoFlow\\ is one of the first flow-based models which not only generates molecular graphs in one shot by invertible mappings but also has a validity guarantee. \nOur MoFlow\\ consists of a variant of the Glow model for bonds and a novel graph conditional flow for atoms given bonds, combined with post-hoc validity correction. \nOur MoFlow\\ achieves state-of-the-art performance on molecular generation, reconstruction and optimization.\nFor future work, we plan to combine the advantages of both sequential generative models and one-shot generative models to generate chemically feasible molecular graphs. Codes and datasets are open-sourced at \\url{https:\/\/github.com\/calvin-zcx\/moflow}.\n\n\n\\small{\n\\section*{Acknowledgement}\nThis work is supported by NSF IIS 1716432, 1750326, ONR N00014-18-1-2585, Amazon Web Service (AWS) Machine Learning for Research Award and Google Faculty Research Award.\n}\n\n\\section{Introduction}\n\nAlthough $\\mathrm{\\Lambda CDM}$ cosmology provides an excellent fit to the observational data on $\\apprge\\mathrm{Mpc}$ scales \\cite{Ade:2013zuv}, its success is less certain over the strongly nonlinear, $\\apprle\\mathrm{kpc}$ regime relevant to the substructure within galactic halos. 
\nThe deviation of galactic cores from an expected cuspy density profile \\cite{Moore:1994yx,Moore:1997sg} and an apparent shortfall of observed Milky Way satellites relative to expectations from simulations \\cite{Klypin:1999uc,Moore:1999nt} originally motivated considerations that the dark matter might have non-negligible self-interactions \\cite{Spergel:1999mh}. \nAlthough a combination of improved theoretical understanding and additional observations had appeared to alleviate these problems and remove the phenomenologically interesting parameter space for self-interacting dark matter (SIDM) \\cite{Yoshida:2000uw,Markevitch:2003at,Gnedin:2000ea}, a recent reevaluation of the constraints \\cite{Rocha:2012jg,Peter:2012jh} has demonstrated that SIDM with a velocity-independent elastic self-interaction cross section per unit mass $\\sigma\\simeq0.1-1\\ \\mathrm{cm^{2}\/g}\\simeq0.2-2\\ \\mathrm{b\/GeV}$ can simultaneously meet all constraints and alleviate the discrepancies between $\\mathrm{\\Lambda CDM}$ and observations.\n\nIn this paper, we exhibit a distinct area of the SIDM parameter space which is likewise both allowed by observations and potentially interesting phenomenologically. \nIn particular, we examine the case in which most of the dark matter remains non-self-interacting (or weakly self-interacting) as in the standard $\\mathrm{\\Lambda CDM}$ picture, but a small fraction $f\\ll1$ of the dark matter is made up of a subdominant component that is \\textit{ultra-strongly self-interacting}, abbreviated as uSIDM, with $\\sigma\\gg1\\ \\mathrm{cm^{2}\/g}$ (where $\\sigma$ denotes the cross section \\emph{per unit mass}). \nBecause most of the dark matter remains inert, constraints that rely on distinguishing the overall behavior of SIDM halos from their $\\mathrm{CDM}$ counterparts are no longer relevant. \n\nConsider, for example, the constraints placed on the SIDM cross section from observations of the Bullet Cluster (1E 0657-6). 
\nObservations reveal an offset between the gas ``bullet'' and the dark matter centroid of the currently merging subcluster.\nUnder the assumption that the subcluster has already passed through the main cluster, this offset arises from stripping and deceleration of gas in the subcluster through interactions with the main cluster itself.\nThe observation that the dark matter has not been slowed to the same degree allows limits to be placed on the dark matter self-interaction cross section.\nThe strongest constraint \\cite{Randall:2007ph} comes from the measurement of the ratio of mass-to-light ratios of the subcluster and the main cluster, which is found to be $0.84\\pm0.07$. \nUnder the assumption that the subcluster and main cluster had the same initial mass-to-light ratio before merger, this means that the subcluster cannot have lost more than 23\\% of its mass.\n\nIn \\cite{Randall:2007ph}, this measurement plus estimates of the subcluster escape velocity and merger speed were used to constrain $\\sigma\\apprle0.6\\ \\mathrm{cm^2\/g}$ when $f=1$.\nHowever, it is clear that, even in the extreme example that \\emph{all} of the SIDM mass in the subcluster was lost to scattering, current observations would not be able to detect the SIDM subcomponent if $f<0.07$, within the uncertainty on the mass-to-light ratio. \nSo constraints from the Bullet Cluster certainly do not apply when $f\\ll10^{-1}$, regardless of the size of the self-interaction cross section per unit mass $\\sigma$. 
\nEven when $f\\sim0.1$, $\\sigma$ may not be well-constrained, since one pass through the main cluster would not suffice to strip all of the SIDM from the bullet.\n\nWe note that observations of another cluster undergoing a major merger, A520 \\cite{Mahdavi:2007yp,Jee:2012sr,Clowe:2012am,Jee:2014hja}, have not provided similar constraints on the SIDM cross section; here the dark matter centroid of the subcluster is in fact coincident with the (presumably stripped) gas.\nUnder certain assumptions, this can be taken as evidence of a nonzero dark matter self-interaction cross section per unit mass, as large as $0.94\\pm0.06\\ \\mathrm{cm^2\/g}$ in the latest observations \\cite{Jee:2014hja}. \nThe limited number of ongoing major merger events in the observable universe makes it hard to give an overall estimate of the self-interaction cross section from major mergers, but future surveys could potentially combine many minor merger events to measure $\\sigma$ with a precision of $0.1\\ \\mathrm{cm^2\/g}$\n\\cite{Harvey:2013tfa}.\n\nRegardless of the situation for $f=1$ SIDM, we have seen that there are no observational constraints on a uSIDM component of the dark matter with $f\\apprle 0.1$. \nAt the same time, of course, a small component of uSIDM by itself is unable to produce cores or dissolve substructure to any observable degree. \nWe point out, though, that a uSIDM component\nof the dark matter could instead explain another potential discrepancy with the $\\mathrm{\\Lambda CDM}$ picture: the existence of billion-solar-mass quasars at high redshifts $z\\apprge6.5-7$ (for reviews, see \\cite{Dokuchaev:2007mf, Volonteri:2010wz, Sesana:2011qi, Treister:2011yi, Kelly:2011ab, 2013ASSL..396..293H}). \nIn \\textsection2 we review the observational situation and the difficulties with explaining it within $\\mathrm{\\Lambda CDM}$. \nIn \\textsection3 we suggest an alternative: gravothermal collapse of an ultra-strongly self-interacting dark matter component. 
We review the mechanism of gravothermal collapse, specialize to the case of a halo containing uSIDM, and solve the problem numerically. \nWe apply the results of \textsection3 to individual observations of high-redshift quasars in \textsection4, then discuss broader cosmological implications in \textsection5, including a potential way for uSIDM to indirectly produce cores in dwarf halos. \nWe finally conclude in \textsection6.\n\n\section{Supermassive Black Holes}\label{sec:smbh}\n\nSupermassive black holes (SMBHs) which grow primarily via gas accretion are Eddington-limited: the gravitational force on the accreting gas is balanced by its own radiation pressure. \nHence growth via gas accretion cannot proceed faster than exponentially, with an $e$-folding rate bounded by the inverse of the Salpeter time \cite{Salpeter:1964kb}:\n\begin{equation}\n\label{eq:tsal}\nt_{\mathrm{Sal}}=\frac{\epsilon_{r}\sigma_{T}c}{4\pi Gm_{p}}\approx\left(\frac{\epsilon_{r}}{0.1}\right)45.1\ \mathrm{Myr},\n\end{equation}\nwhere $\sigma_{T}$ is the Thomson cross section,\n\begin{equation}\n\sigma_{T}=\frac{8\pi}{3}\left(\frac{e^{2}}{4\pi\epsilon_{0}m_{e}c^{2}}\right)^{2},\n\end{equation}\n$m_p$ and $m_e$ are respectively the proton and electron masses, and $\epsilon_r$ is the radiative efficiency, which ranges from $1-\sqrt{8\/9}\approx0.057$ to $1-\sqrt{1\/3}\approx0.42$ as the angular momentum of the black hole increases from zero to its extremal value \cite{Shapiro:2004ud}; in astrophysical applications, $\epsilon_r$ is typically taken to be $\epsilon_r=0.1$.\nAccretion faster than the Eddington limit, $\dot{M}_\mathrm{Edd}=M t_\mathrm{Sal}^{-1}$, onto a black hole of mass $M$ will result in a radiation pressure exceeding the gravitational force, driving outflows which should quickly halt this excessive accretion.\nYet several dozen quasars with masses a few $\times\ 10^{9}\ M_{\odot}$ have been detected at redshifts $z\apprge6$, 
including a quasar, ULAS J1120+0641, with mass $2.0_{-0.7}^{+1.5}\times10^{9}\ M_{\odot}$ at redshift $z=7.085$ \cite{Mortlock:2011va,Venemans:2012dt}. \nUsing the Planck Collaboration's best-fit cosmological values \cite{Ade:2013zuv}, $z=7.085$ corresponds to $747\ \mathrm{Myr}$ after the Big Bang, so even continuous Eddington accretion since the Big Bang can only increase the mass of a seed black hole by a factor of $1.6\times10^{7}$. \nIf we make the standard assumption that black hole seeds are formed from Pop III stars, the seed cannot have formed before around $z\sim30$, so the maximum growth factor shrinks by another order of magnitude, to $1.75\times10^{6}$, requiring a seed black hole mass $\sim10^{3}M_{\odot}$. \n\nMore generally, in order to explain the observed abundance of $\sim1\/\mathrm{Gpc^{3}}$ billion-solar-mass quasars at $z\simeq6$ \cite{2013ASSL..396..293H} within $\Lambda\mathrm{CDM}$, we must form $10^{2-3}M_{\odot}$ seed black holes soon after the beginning of baryonic structure formation and grow these black holes continuously at near-Eddington rates for $\sim800\ \mathrm{Myr}$.\nSome simulations have shown this can be achieved \cite{Li:2006ti}, but only by making optimistic assumptions about cooling and star formation \cite{Tegmark:1996yt,Gao:2006ug}, fragmentation \cite{Turk:2009ae,Stacy:2009zt,McKee:2007yx}, photoevacuation \cite{Johnson:2006gd,Abel:2006gw,Yoshida:2006dv}, black hole spin \cite{Bardeen:1972fi,Zhang:1997dy,Narayan:2011eb}, and black hole mergers \cite{1983MNRAS.203.1049F,Merritt:2004xa,Haiman:2004ve}. 
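These growth factors can be checked directly from the definition of the Salpeter time. A minimal numerical sketch follows; the physical constants are standard SI values, and the $\approx99\ \mathrm{Myr}$ age of the universe at $z\sim30$ is our assumption, chosen consistent with the Planck cosmology and growth factors quoted above.

```python
import math

# Physical constants (SI units)
G = 6.674e-11         # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8           # speed of light [m/s]
m_p = 1.6726e-27      # proton mass [kg]
sigma_T = 6.6524e-29  # Thomson cross section [m^2]
Myr = 3.156e13        # one megayear [s]

def salpeter_time(eps_r=0.1):
    """e-folding time for Eddington-limited accretion, in Myr."""
    return eps_r * sigma_T * c / (4 * math.pi * G * m_p) / Myr

t_sal = salpeter_time(0.1)            # ~45 Myr (cf. 45.1 Myr above)

# Continuous Eddington accretion from the Big Bang to z = 7.085 (747 Myr):
growth_full = math.exp(747 / t_sal)   # ~1.6e7

# Seed formation delayed to z ~ 30 (age ~99 Myr is our assumption):
t_seed = 99
growth_seeded = math.exp((747 - t_seed) / t_sal)  # ~1.8e6

# Required seed mass for the 2e9 solar-mass quasar ULAS J1120+0641:
seed_mass = 2.0e9 / growth_seeded     # ~1e3 solar masses
```

As the text notes, the required seed mass is exponentially sensitive to the assumed radiative efficiency through $t_{\mathrm{Sal}}$.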
\nWe emphasize, in particular, that these results depend critically on the assumption of $\epsilon_r=0.1$; because the $e$-folding time itself depends linearly on the radiative efficiency, the maximum mass formed by a given time is \textit{exponentially} sensitive to its value.\nBecause quasar masses are inferred by measuring their luminosities and assuming they are Eddington-limited, increasing the assumed radiative efficiency will decrease the inferred quasar mass by $\epsilon_r^{-1}$. \nHowever, this reduction in required mass is made negligible by the much larger number of $e$-folds required to reach it. \nRecent work, both theoretical \cite{Shapiro:2004ud} and observational \cite{Trakhtenbrot:2014dza}, has found $\epsilon_r\gtrsim0.2$, which would be catastrophically incompatible with an assumption of black hole growth driven by Eddington accretion. \n\nOne alternative is to allow for extended periods of super-Eddington gas accretion. \nSuper-Eddington accretion is known to be possible, for example when outflows of gas and radiation are collimated \cite{Shakura:1972te, Jiang:2014tpa}, and extended periods of super-Eddington growth could account for the observed supermassive high-redshift quasars \cite{Volonteri:2014lja, Madau:2014pta}.\nHowever, estimates of quasar masses and luminosities at low redshifts using emission line widths indicate that, at least in the late universe, the vast majority of quasars are constrained to radiate at the Eddington limit \cite{Kollmeier:2005cw}, or possibly well below it \cite{Steinhardt:2009ig,Steinhardt:2011wr}.\n\nIn this paper, we will therefore neglect the possibility of extended super-Eddington accretion.\nWe will assume that growth of black holes from baryonic accretion is limited to exponential growth with an $e$-folding time given by the Salpeter time (\ref{eq:tsal}).\nIn order to facilitate comparison of uSIDM to the standard picture, we will, however, allow for continuous accretion of baryons at this 
limit once a seed black hole has formed, despite the potential issues mentioned in the previous paragraph.\nIn other words, we attempt to modify the mechanism by which black hole seeds are formed, while leaving the simplest conventional mechanism for their growth from seeds to supermassive black holes intact.\nIt would be easy to combine our results with more realistic baryon accretion histories.\n\nFinally, we note that future observations in the near-infrared, e.g.\\ with the James Webb Space Telescope and Wide Field Infrared Survey Telescope (WFIRST), and in the radio, e.g.\\ with the Square Kilometer Array, should be able to detect (or place limits on the density of) even intermediate-mass ($\\sim10^5M_\\odot$) quasars out to $z\\sim10$ \\cite{Haiman:1997bv, Haiman:2000ky, Haiman:2004ny,Whalen:2012ib}, providing vastly more information about the formation and growth of high-redshift quasars.\n\n\\section{Gravothermal Collapse}\\label{sec:gravothermal_collapse}\n\nMotivated by the tensions within the standard ($\\Lambda\\mathrm{CDM}$) picture discussed in the previous section, we propose an alternative mechanism for black hole seed formation: the gravothermal collapse \\cite{LyndenBell:1968yw} of the uSIDM component of a dark matter galactic halo. \nThe simplest form of gravothermal collapse occurs in a population of gravitating point particles with elastic short-range interactions.\nThe classic illustration of the mechanism is globular clusters, where the point particles are stars. \nStellar short-range interactions are not purely elastic, so in this case collapse is eventually halted by binary formation.\nA gas of SIDM, however, has only elastic interactions, so core collapse continues until relativistic instability results in the formation of a black hole, which promptly Bondi accretes \\cite{Bondi:1952ni} the optically thick core of SIDM that surrounds it. 
\n\nIn this section we make this intuitive picture precise.\nFull expressions will be given below, but in brief we find that the uSIDM component of a galactic halo undergoes gravothermal collapse in $\\sim460$ halo relaxation times, forming a black hole which contains $\\sim2\\%$ of the uSIDM mass of the galaxy. \nThe halo relaxation time is a complicated expression which depends on the halo mass and time of formation as well as the uSIDM properties, but we show in the following sections that, for reasonable values of uSIDM fraction $f$ and cross section per unit mass $\\sigma$, there exist halos that can easily form seed black holes, and grow them using uSIDM and baryons, to achieve $10^{9}M_{\\odot}$ SMBHs by redshift 6. \n\nBefore formulating the problem, we first review the gravothermal collapse mechanism itself.\nIntuitively, gravothermal collapse depends on the simple observation that gravitationally bound systems have negative specific heat. \nFor a virialized system, this is immediate:\n\\begin{equation}\n0=2T+V=T+E\\rightarrow E=-T.\n\\end{equation}\nNow consider two systems, an inner, gravitationally bound system with negative specific heat and an outer system surrounding it with positive specific heat---the inner and outer parts of a globular cluster, for example. \nEvolution towards equilibrium will direct both mass and heat outward, causing both the inner and the outer system to increase in temperature. \nA possible physical mechanism is a two-body scattering in the inner system which sends one star closer to the core (where it gains potential energy and thus speeds up, increasing the temperature of the inner system) and kicks one star out to the periphery (where its higher speed increases the temperature of the outer system). \nImportantly, we see that the inner system \\emph{shrinks} as it heats up. \n\nNow two outcomes are possible, depending on the specific heat of the two systems as a function of their masses. 
\nIf the outer system always has the smaller (magnitude of) specific heat, its temperature will eventually grow to exceed that of the inner system, and the entire assemblage of masses will reach equilibrium. \nOn the other hand, if the outer system grows in mass too quickly, its specific heat will become too large and its temperature will never catch up to the inner system. Hence the inner system will continue shrinking in mass and growing in temperature until the thermodynamic description breaks down. \nThis is precisely the \\emph{gravothermal catastrophe }\\cite{LyndenBell:1968yw}. \nIn the case of a globular cluster (at least an idealized one with uniform-mass stars), the gravothermal collapse process is halted by binary formation, which acts as an energy sink \\cite{Heggie:1975tg,Hut:1992wz}. \nIf the uSIDM interacts purely via elastic scattering, however, no bound state formation is possible, and gravothermal collapse can drive the core to relativistic velocities, where it undergoes catastrophic collapse into a black hole via the radial instability \\cite{1966SvA.....9..742Z,1985ApJ...298...34S,1985ApJ...298...58S,1986ApJ...307..575S}.\n\n\\subsection{The Gravothermal Fluid Equations}\n\nWe now consider the gravothermal collapse of a general two-component dark matter halo, where the self-interacting component comprises some fraction $f$ of the mass of the halo. \nAt this stage we do not yet specialize to the uSIDM case, with $f\\ll1$. \nTo avoid confusion, we will therefore refer to the two different components of the halo as SIDM (making up a fraction $f$ of the total mass of the halo) and (ordinary) CDM (making up the remainder), denoting the SIDM as uSIDM only when $f\\ll1$.\nTo simulate the collapse, we employ the gravothermal fluid approximation \\cite{1980MNRAS.191..483L,Balberg:2002ue,2011MNRAS.415.1125K}, which reduces the problem to a set of coupled partial differential equations that can then be solved numerically. 
\nFirst consider the general case for an $f=1$ fluid, i.e.\ a halo composed entirely of SIDM. \nA spherically symmetric ideal gas of point particles in hydrostatic equilibrium with arbitrary conductivity $\kappa$ obeys the following equations \cite{1980MNRAS.191..483L}:\n\begin{equation}\n\frac{\partial M}{\partial r}=4\pi r^{2}\rho\label{eq:fund1}\n\end{equation}\n\begin{equation}\n\frac{\partial\left(\rho\nu^{2}\right)}{\partial r}=-\frac{GM\rho}{r^{2}}\label{eq:fund2}\n\end{equation}\n\begin{equation}\n\frac{L}{4\pi r^{2}}=-\kappa\frac{\partial T}{\partial r}\label{eq:fund3}\n\end{equation}\n\begin{equation}\n\frac{\partial L}{\partial r}=-4\pi\rho r^{2}\nu^{2}\left(\frac{\partial}{\partial t}\right)_{M}\ln\frac{\nu^{3}}{\rho},\label{eq:fund4}\n\end{equation}\nwhere $\nu(r)$ is the one-dimensional velocity dispersion and $L(r)$ the total heat radiated \textit{inward} through a sphere of radius $r$. \nThe first equation (\ref{eq:fund1}) simply defines the integrated mass distribution in terms of the density. \nThe second (\ref{eq:fund2}) is the statement of hydrostatic equilibrium: we inserted Euler's equation into the Poisson equation for a spherically symmetric potential and used the equation of state for an ideal gas, $p=\rho\nu^{2}$. \nThe third (\ref{eq:fund3}) states that the heat flux is proportional to the temperature gradient, with proportionality constant given by the conductivity $\kappa$. \nThe fourth (\ref{eq:fund4}) is the second law of thermodynamics, inserting the specific entropy of an ideal gas of point particles, $s=\frac{k_{\mathrm{B}}}{m}\ln(\frac{T^{3\/2}}{\rho})$, and using the relation $\nu^{2}=k_{\mathrm{B}}T\/m$. 
\nThis gives a set of four differential equations with four dependent variables $\{M,\ \rho,\ \nu,\ L\}$ and two independent variables $\{r,\ t\}$.\n(The temperature $T$ is directly related to $\nu$ by $\nu^{2}=k_{\mathrm{B}}T\/m$.)\n\nTo make progress, we need an expression for the form of the thermal conductivity $\kappa$ in terms of our physical parameter, the elastic scattering cross section per unit mass $\sigma$.\nDimensional analysis alone will not suffice: we have one time scale, the fluid relaxation time \n\begin{equation}\nt_{r}\equiv1\/(a\rho\sigma\nu),\n\end{equation}\n with $a=\sqrt{16\/\pi}\approx2.257$ for hard-sphere interactions, but two length scales, the mean free path $\lambda\equiv1\/(\rho\sigma)$ and the Jeans length or gravitational scale height $H\equiv\sqrt{\nu^{2}\/(4\pi G\rho)}$.\nFollowing \cite{Balberg:2002ue,2011MNRAS.415.1125K}, we find the unique length scales in the two limiting cases, the short mean free path (smfp) regime $\lambda\ll H\rightarrow\ell_\mathrm{smfp}=\lambda$ and the long mean free path (lmfp) regime $\lambda\gg H\rightarrow\ell_\mathrm{lmfp}=H$, and combine them in reciprocal to get a final length scale, $\ell\equiv\left(\ell_\mathrm{smfp}^{-1}+\ell_\mathrm{lmfp}^{-1}\right)^{-1}$. 
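These scales can be packaged in a short helper (a sketch in SI units; the function name and arguments are ours):

```python
import math

A = math.sqrt(16 / math.pi)  # hard-sphere coefficient a ~ 2.257

def transport_scales(rho, nu, sigma, G=6.674e-11):
    """Relaxation time and length scales for density rho [kg/m^3],
    1-d velocity dispersion nu [m/s], cross section per mass sigma [m^2/kg]."""
    t_r = 1.0 / (A * rho * sigma * nu)              # fluid relaxation time
    lam = 1.0 / (rho * sigma)                       # mean free path
    H = math.sqrt(nu**2 / (4 * math.pi * G * rho))  # gravitational scale height
    ell = 1.0 / (1.0 / lam + 1.0 / H)               # combined in reciprocal
    return t_r, lam, H, ell
```

By construction $\ell\rightarrow\lambda$ in the smfp limit $\lambda\ll H$ and $\ell\rightarrow H$ in the lmfp limit, as required.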
\nIn the smfp regime, transport theory tells us that \n\begin{equation}\n\frac{L}{4\pi r^{2}}\approx-\frac{3}{2}a^{-1}b\rho\frac{\lambda^{2}}{t_{r}}\frac{\partial\nu^{2}}{\partial r}.\n\end{equation}\nThe coefficient $b$ is calculated perturbatively in Chapman-Enskog theory \cite{1981phki.book.....L}: $b=25\sqrt{\pi}\/32\approx1.385$.\nIn the lmfp regime, the flux equation is well approximated as\n\begin{equation}\n\frac{L}{4\pi r^{2}}\approx-\frac{3}{2}C\rho\frac{H^{2}}{t_{r}}\frac{\partial\nu^{2}}{\partial r},\n\end{equation}\nwhere $C$ is a constant setting the scale on which the two conduction mechanisms are equally effective, determined by N-body simulations \cite{2011MNRAS.415.1125K} to be $C\approx290\/385\approx0.75$. \nHence the final expression is\n\begin{equation}\n\frac{L}{4\pi r^{2}}=-\frac{3}{2}ab\nu\sigma\left[a\sigma^{2}+\frac{b}{C}\frac{4\pi G}{\rho\nu^{2}}\right]^{-1}\frac{\partial\nu^{2}}{\partial r}.\n\end{equation}\n\nNow consider the more general case, $f\ne1$. Hydrostatic equilibrium is separately satisfied for each species of particle, but the gravitational potential is of course sourced by both species, giving the coupling between the two components. 
\nBecause the non-SIDM component is taken to be collisionless, it has $\\sigma=0$, so $L^{ni}=0$.\nSo the total system is governed by six partial differential equations with six dependent variables $\\{M,\\ \\rho^{int},\\ \\rho^{ni},\\ \\nu^{int},\\ \\nu^{ni},\\ L^{int}\\}$ and two independent variables $\\{r,\\ t\\}$:\n\\begin{equation}\n\\frac{\\partial M}{\\partial r}=4\\pi r^{2}\\left(\\rho^{int}+\\rho^{ni}\\right)\\label{eq:master 1}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial\\left(\\rho^{int}\\left(\\nu^{int}\\right)^{2}\\right)}{\\partial r}=-\\frac{GM\\rho^{int}}{r^{2}}\\label{eq:master he}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial\\left(\\rho^{ni}\\left(\\nu^{ni}\\right)^{2}\\right)}{\\partial r}=-\\frac{GM\\rho^{ni}}{r^{2}}\n\\end{equation}\n\\begin{equation}\n\\frac{L^{int}}{4\\pi r^{2}}=-\\frac{3}{2}ab\\nu^{int}\\sigma\\left[a\\sigma^{2}+\\frac{b}{C}\\frac{4\\pi G}{\\rho^{int}\\left(\\nu^{int}\\right)^{2}}\\right]^{-1}\\frac{\\partial\\left(\\nu^{int}\\right)^{2}}{\\partial r}\\label{eq:master luminosity}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial L^{int}}{\\partial r}=-4\\pi\\rho^{int}r^{2}\\left(\\nu^{int}\\right)^{2}\\left(\\frac{\\partial}{\\partial t}\\right)_{M}\\ln\\frac{\\left(\\nu^{int}\\right)^{3}}{\\rho^{int}}\n\\end{equation}\n\\begin{equation}\n0=\\left(\\frac{\\partial}{\\partial t}\\right)_{M}\\ln\\frac{\\left(\\nu^{ni}\\right)^{3}}{\\rho^{ni}}.\\label{eq:master 6}\n\\end{equation}\nAs before, the first equation gives the total mass distribution, while the second and third enforce hydrostatic equilibrium. \nThe fourth determines how the SIDM fluid conducts heat and the fifth how the flux gradient affects the fluid. \nFinally, the sixth equation ensures that the entropy of the collisionless component is conserved, $3\\dot{\\nu}\/\\nu=\\dot{\\rho}\/\\rho$. 
\nNotice that the fraction $f$ does not appear in the differential equations themselves, but only in the boundary conditions: we must have \n\begin{equation}\n\frac{\int_{0}^{\infty}4\pi r^{\prime2}\rho^{int}(r^{\prime})dr^{\prime}}{\int_{0}^{\infty}4\pi r^{\prime2}\rho^{ni}(r^{\prime})dr^{\prime}}=\frac{f}{1-f}\n\end{equation}\nat all times.\n\n\subsection{Initial Conditions}\label{sub:initial}\n\nIn principle (\ref{eq:master 1}--\ref{eq:master 6}) can be solved exactly given appropriate boundary conditions at $r=0$ and $r=\infty$ and a set of initial radial profiles which obey the equations. \nIn practice, this is computationally impossible: even finding the initial profiles for an arbitrary $\sigma$ is infeasible. \nBalberg, Shapiro, and Inagaki \cite{Balberg:2002ue}, considering the $f=1$ case, took the $\sigma\rightarrow0$ limit, which admits a self-similar solution where separation of variables is possible, then found the eigenvalues of the resulting system of ordinary spatial differential equations and took the resulting profiles as their initial conditions for the more general $\sigma\ne0$ case. \n\nWe will instead \textit{assume} that SIDM self-interactions are unimportant during the process of halo formation, so that the SIDM and collisionless components have the same initial profile. \nThis allows us to use the results of (collisionless) $\mathrm{\Lambda CDM}$ simulations. \nWe simplify further by approximating the initial halo by an NFW profile, \n\begin{equation}\n\rho_{\mathrm{NFW}}(r)=\frac{\rho_{s}}{(r\/r_{s})(1+r\/r_{s})^{2}},\label{eq:nfw profile}\n\end{equation}\nwhere $\rho_{s}$ and $r_{s}$ are the characteristic density and scale radius, respectively. 
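A short numerical sketch of (\ref{eq:nfw profile}) and its enclosed mass, cross-checked against direct integration (variable names are ours):

```python
import math

def rho_nfw(r, rho_s, r_s):
    """NFW density profile rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1 + x)**2)

def mass_nfw(r, rho_s, r_s):
    """Enclosed mass: 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)], x = r/r_s."""
    x = r / r_s
    return 4 * math.pi * rho_s * r_s**3 * (math.log(1 + x) - x / (1 + x))

# Cross-check by midpoint-rule integration out to r = c * r_s with c = 9:
rho_s, r_s, c = 1.0, 1.0, 9.0
N = 200000
dr = c * r_s / N
m_num = sum(4 * math.pi * ((i + 0.5) * dr)**2
            * rho_nfw((i + 0.5) * dr, rho_s, r_s) * dr
            for i in range(N))
# Note that rho_nfw(r) * r -> rho_s * r_s as r -> 0, so the optical
# depth f * rho * r * sigma stays finite in the cusp.
```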
\nSince the NFW profile has a characteristic radius, we can state our assumption more precisely: we assume that halo formation proceeds much faster than heat conduction, which is true when the dynamical timescale of collapse is much less than the relaxation timescale due to collisions:\n\\begin{equation}\nt_{\\mathrm{dyn}}(r_{s})\\ll t_{\\mathrm{rel}}(r_{s})\\approx\\frac{1}{\\tau_{s}}t_{\\mathrm{dyn}}(r_{s})\\rightarrow\\tau_{s}\\ll1;\n\\end{equation}\ni.e.\\ so long as the halo is \\textit{optically thin at its characteristic radius}.\nAgain, if the optical depth is small, \n\\begin{equation}\n\\label{optically_thin}\n\\tau\\equiv f\\rho_{\\mathrm{NFW}}r\\sigma\\ll1\\rightarrow\n\\sigma f\\le\\frac{1}{\\rho_{s}r_{s}},\n\\end{equation}\ntypical SIDM particles have not yet undergone any self-interaction by the time of halo formation, so we are justified in assuming they follow the same initial profile as the collisionless dark matter, $\\rho_{0}^{int}(r)=f\\rho_{\\mathrm{NFW}}(r)$. \n\nBefore checking the validity of this assumption, we comment on the consequences of taking a different initial profile. \nThe NFW profile is particularly simple: its form means that the optical depth at small radii, $r\\ll r_{s}$, is independent of radius, so a small characteristic optical depth implies that the central regions are also optically thin despite the presence of a cusp. \nModern $\\mathrm{\\Lambda CDM}$ simulations, however, have tended to find density profiles more complicated than the NFW profile. \nProfiles with cores or at least less cuspy behavior, e.g.\\ Einasto profiles \\cite{Merritt:2005xc,Graham:2006ae}, will have $\\tau\\ll1$ everywhere if $\\tau_{s}\\ll1$. \nBelow we will see that SIDM halos with initial NFW density profiles grow cores on a scale of tens of halo relaxation times anyway, so shallower initial profiles will only result in slightly smaller times before black hole formation. 
\nProfiles with more cuspy behavior, e.g.\ generalized NFW or Zhao profiles \cite{Zhao:1995cp} with inner slope $\alpha\apprge1$, will unavoidably have regions at very small radii in the optically thick regime. \nBelow we will see that SIDM halos with initial NFW profiles first evacuate the cusp to form cores before beginning the gravothermal collapse process, and it seems reasonable to conclude that the same thing will happen for non-pathological cuspier profiles.\nWe conclude that imposing a different profile should not significantly change the behavior investigated below.\n\nWhen is the assumption that $\tau_{s}\ll1$ justified? \nRecall that the characteristic density $\rho_{s}$ and scale radius $r_{s}$ for an NFW profile are given in terms of the halo virial mass $M_{\Delta}$ and concentration $c$:\n\begin{equation}\nr_{\Delta}\equiv cr_{s},\n\end{equation}\n\begin{equation}\nM_{\Delta}\equiv M(r_{\Delta})=\int_{0}^{r_{\Delta}}4\pi r^{2}\rho_{\mathrm{NFW}}(r)dr=4\pi\rho_{s}r_{s}^{3}\left[\ln(1+c)-\frac{c}{1+c}\right],\n\end{equation}\n\begin{equation}\n\rho_{s}\equiv\delta_{c}\rho_{crit}(z).\n\end{equation}\nThe density contrast $\delta_c$ is in turn given by \n\begin{equation}\n\delta_{c}=\frac{\Delta}{3}\frac{c^{3}}{K_{c}}, \n\end{equation}\nwhere $K_{c}\equiv\ln(1+c)-c\/(1+c)$.\nThe problem thus reduces to finding an expression for $\Delta$, the virial overdensity.\nIn the spherical collapse model, this is given by $\Delta\sim18\pi^2\Omega_{m}(z)^{0.45}$ for a flat universe \cite{Lahav:1991wc,Eke:1996ds,Bryan:1997dn,Neto:2007vq}; $\Delta$ hence approaches the familiar value of 178 in the matter-dominated era.\n\nInserting these expressions into (\ref{optically_thin}) above yields an inequality for $\sigma f$ in terms of $c$ and $M_\Delta$, along with the redshift of virialization $z$:\n\begin{equation}\n\sigma 
f\\le\\frac{1}{\\rho_{s}r_{s}}=(4\\pi)^{-1\/3}M_{\\Delta}^{-1\/3}\\left(\\frac{\\Delta\\rho_{crit}(z)}{3}\\right)^{-2\/3}K_c\\,c^{-2}\n\\label{eq:opt_thin}\n\\end{equation}\n \n\\begin{equation}\n\\label{optically_thin_bound}\n=24.56\\ \\mathrm{cm^{2}\/g}\\times\\left(\\frac{M_{\\Delta}}{10^{12}M_{\\odot}}\\right)^{-1\/3}\\times\\left(\\frac{\\rho_{crit}(z)}{\\rho_{crit}(z=15)}\\right)^{-2\/3}K_c\\,c^{-2}.\n\\end{equation}\nIn the second line we have inserted the typical halo parameters we will consider below: $z=15$, $M_\\Delta=10^{12} M_\\odot$.\n\nIt remains to insert plausible values for the concentration $c$.\nIndividual halos of mass $M_\\Delta$ formed at a fixed redshift $z$ will have varying concentrations, but there should be some mass- and redshift-dependent median concentration, $c(M_\\Delta,z)$.\nPrada \\textit{et al.} \\cite{Prada:2011jf} used the Millennium \\cite{Springel:2005nw,BoylanKolchin:2009nc}, Bolshoi \\cite{Klypin:2010qw}, and MultiDark \\cite{Riebe:2011gp} simulations to examine the shape of the $c(M_\\Delta,z)$ curve with varying mass and redshifts.\nThey found that for each redshift considered (from $z\\sim0-6$) the concentration formed a U-shaped curve: it was minimized at a certain value of the mass, but increased steeply both above and below this mass.\nFurthermore, they found that both the minimum value of the concentration and the mass at which this minimum was realized decreased with increasing redshift.\nAt the large redshifts we consider, the cluster-sized halos needed to form supermassive black holes are far more massive than the bottom of the U-shaped curve; accordingly, the fitting formulae given in \\cite{Prada:2011jf} predict that the concentration for these halos will be extremely large, of the order of $c\\sim10^5$ for the halo parameters above.\nIf this were true, the initial density profiles of these large, early halos would be extremely concentrated, so that their inner regions are extremely thick even for $\\sigma 
f\\apprge10^{-6}$. \nIn this case the simulations presented in this paper would not be reliable.\n\n\n\nWe emphasize, however, that the fitting formulae of \\cite{Prada:2011jf} were devised using simulated halos only out to $z\\sim6$; they should not be trusted so far away from their domain of validity.\nAccordingly, we have consulted the high-redshift halo catalogs of the FIRE simulation \\cite{Hopkins:2013vha}, which attempted to resolve an overdense region at high redshift.\nThe catalogs use the Amiga Halo Finder \\cite{Knollmann:2009pb} to measure $c$ in the same way as defined in \\cite{Prada:2011jf}. \nWe are interested in the concentration parameters of the most massive halos formed at a given redshift.\nPerhaps unsurprisingly, we find that, even at $z\\sim30$, halo concentrations range from 2 to 11, similar to the values found at lower redshifts in the simulations consulted in \\cite{Prada:2011jf}, rather than the much higher values predicted by naively applying the fitting formulae.\nWe do not attempt to construct the full $c(M_\\Delta,z)$ curve at high redshifts on the basis of this limited data, but we do assume that realistic halos will take concentrations in this observed range.\n\nThe upper bound on $\\sigma f$ for which $\\tau_{s}\\le1$ ranges from $0.32-2.65\\ \\mathrm{cm^{2}\/g}$ as concentrations decreasing from 11 to 2 are inserted into (\\ref{optically_thin_bound}).\nIn the remainder of this paper we will typically set $c=9$, which gives a bound of $0.425\\ \\mathrm{cm^{2}\/g}$.\nIn Section \\ref{discussion} below we will find that this bound is of the same order of magnitude as the cross section needed to produce the desired high-redshift supermassive black holes using uSIDM. 
\nAccordingly, there is a surprisingly small region of parameter space where both the assumption of an initial NFW profile is valid and the desired black holes are produced.\nWe will discuss this further in Section \\ref{discussion}.\nFor now, we note only that the qualitative results of this paper should still hold even when our assumption of an initial NFW profile is invalid. \nOutside of this range, we expect that gravothermal collapse should still occur---in fact, it should occur \\emph{faster} because core formation will have begun even before virialization---but the particular expressions given here will no longer be valid.\n\n\\subsection{Integration of the Equations}\\label{sub:integration}\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=1\\textwidth]{nfw_r1_cropped}\n \\end{center}\n \\caption{The dimensionless initial profiles for an NFW halo, with $f=0.01$.\n The CDM and SIDM have the same velocity profile, and their density profiles have the same shape but a normalization differing by $f\/(1+f)$.\n As expected for SIDM, the initial luminosity at small radii is negative (shown as dashed on the plot), indicating that the cusp is being forced outward as a core begins to form. \n The glitch in the luminosity at $\\tilde{r}=200$ is a numerical artifact. \n \\label{fig:nfw init}}\n\\end{figure}\n\nGiven the initial conditions, we can proceed to integrate the system of equations (\\ref{eq:master 1} --\\ref{eq:master 6}). \nWe first move to a dimensionless form of the problem by choosing fiducial mass and length scales $\\{M_{0},\\ R_{0}\\}$. 
\nThen the remaining dependent variables are given naturally in terms of these quantities, e.g.\\ $\\nu_{0}=\\sqrt{GM_{0}\/R_{0}}$.\nFull expressions for all dependent variables in terms of $M_0$ and $R_0$ are given in Section 5 of \\cite{Balberg:2002ue}.\nThe cross section per unit mass is now expressed dimensionlessly by $\\hat{\\sigma}=\\sigma\/\\sigma_{0},\\ \\sigma_{0}=4\\pi R_{0}^{2}\/M_{0}$.\nIt is convenient to use the two quantities already specified in the NFW profile, $\\{\\rho_{s},\\ r_{s}\\}$; we therefore take \n\\begin{equation}\n\\{M_{0}=4\\pi R_{0}^{3}\\rho_{s},\\ R_{0}=r_{s}\\}.\n\\end{equation}\nNote that we have made a different choice of $\\{M_{0},\\ R_{0}\\}$ than \\cite{Balberg:2002ue}, since we consider a cuspy NFW profile rather than a cored one and thus work with characteristic rather than central quantities.\nFinally, the timescale is set by the initial relaxation time at the characteristic radius, \n\\begin{equation}\nt_{r,c}(0)=1\/(fa\\rho_{s}\\nu_{s}\\sigma),\n\\end{equation}\nso the independent variable can also be made dimensionless.\nDimensionless quantities are written with tildes (e.g.\\ $\\tilde{\\rho}, \\tilde{t}$).\nThe resulting initial profiles for an $f=0.01$ halo are shown in Figure \\ref{fig:nfw init}. \n\nWe solve the problem by spatially discretizing into $N$ concentric spherical shells, initially evenly logarithmically spaced in radius. 
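Putting the pieces together for our typical halo, the fiducial scales can be assembled as follows (a sketch under the same assumed Planck-like cosmology as above; taking $\nu_{s}=\nu_{0}=\sqrt{GM_{0}\/R_{0}}$ is our assumption):

```python
import math

Msun, Myr = 1.989e30, 3.156e13   # [kg], [s]
G = 6.674e-11
H0 = 67.8 * 1e3 / 3.0857e22      # [1/s]
Om, OL = 0.307, 0.693            # assumed Planck-like parameters

def nfw_scales(M_delta=1e12 * Msun, z=15.0, c=9.0):
    """Characteristic density rho_s [kg/m^3], scale radius r_s [m],
    and fiducial velocity nu_0 = sqrt(G M_0 / R_0) [m/s]."""
    E2 = Om * (1 + z)**3 + OL
    rho_crit = 3 * H0**2 / (8 * math.pi * G) * E2
    Delta = 18 * math.pi**2 * (Om * (1 + z)**3 / E2)**0.45
    K_c = math.log(1 + c) - c / (1 + c)
    rho_s = (Delta / 3) * c**3 / K_c * rho_crit
    r_s = (M_delta / (4 * math.pi * rho_s * K_c))**(1 / 3)
    nu_0 = math.sqrt(4 * math.pi * G * rho_s * r_s**2)  # M_0 = 4 pi rho_s r_s^3
    return rho_s, r_s, nu_0

def t_relax(f, sigma_cgs, rho_s, nu_0):
    """Initial relaxation time t_{r,c}(0) = 1/(f a rho_s nu_0 sigma), in Myr."""
    a = math.sqrt(16 / math.pi)
    return 1.0 / (f * a * rho_s * nu_0 * sigma_cgs / 10.0) / Myr

rho_s, r_s, nu_0 = nfw_scales()
# e.g. f = 0.01 uSIDM with sigma = 42.5 cm^2/g (sigma * f at the
# optical-thinness bound): collapse after the ~460 relaxation times
# quoted above takes a few hundred Myr, comfortably before z = 6.
t_collapse = 460 * t_relax(0.01, 42.5, rho_s, nu_0)
```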
\nAt each timestep, we first apply the effects of heat conduction, which increases the energy within each shell, then adjust the profile to maintain hydrostatic equilibrium.\nThe heat conduction step is simple: we determine the luminosity profile from the density and velocity dispersion using the dimensionless, discretized form of (\ref{eq:master luminosity}), then adjust the energy of each shell accordingly (using $dU\equiv L dt$ for a finite but small timestep).\nTimesteps are chosen so that the change of (dimensionless) specific energy $\tilde{u}_{i}=3\tilde{\nu}_{i}^{2}\/2$ is not large: we require $\Delta\tilde{u}_{i}\/\tilde{u}_{i}<\varepsilon\ll1$ for each shell $i$, typically taking $\varepsilon=0.001$. \nThis means that as the gravothermal catastrophe approaches and core temperatures and densities become large, the size of timesteps will decrease dramatically: as expected, we cannot integrate through the collapse because the fluid approximation itself breaks down there. \n\nTo carry out the hydrostatic equilibrium step, we use the method of Lagrangian zones, in which the radius of each shell is adjusted while the mass it contains is left constant. \nThe relaxation process, which involves long-range gravitational interactions rather than heat conduction via collisions, is entropy-preserving, so it preserves the adiabatic invariants $A_i\equiv\tilde{\rho}_i \tilde{V}_i^{5\/3}$ for each shell $i$. After the heat conduction step, each shell is temporarily out of hydrostatic equilibrium, so that the equality (\ref{eq:master he}) is violated by some amount $\Delta_i$. \nThe problem is to adjust the density, velocity dispersion, and radius of each shell $i$, such that hydrostatic equilibrium is again satisfied ($\Delta_i=0\ \forall i$) while preserving the adiabatic invariants. 
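Before turning to the equilibrium step, the conduction step and its adaptive timestep can be sketched compactly. This is a schematic, not our production code; lum[i] denotes the luminosity through the inner boundary of shell i, a sign convention we adopt here so that heat entering at the outer boundary raises a shell's energy.

```python
import numpy as np

def conduction_substep(u, shell_mass, lum, eps=1e-3):
    """One dimensionless heat-conduction substep (dU = L dt per shell).

    u          : specific energy of each shell, u_i = 3 nu_i^2 / 2
    shell_mass : fixed shell masses (Lagrangian zones)
    lum        : luminosity through the len(u) + 1 shell boundaries
    Returns the updated energies and the timestep taken.
    """
    # net heating rate per unit mass of each shell
    udot = (lum[1:] - lum[:-1]) / shell_mass
    nonzero = udot != 0
    if not np.any(nonzero):
        return u.copy(), eps
    # timestep chosen so no shell's specific energy changes by more than eps
    dt = eps * np.min(np.abs(u[nonzero] / udot[nonzero]))
    return u + udot * dt, dt

u = np.ones(4)
u_new, dt = conduction_substep(u, np.ones(4),
                               np.array([0.0, 1.0, 0.5, 0.0, 0.0]))
```

Near collapse $|\tilde{u}\/\dot{\tilde{u}}|$ shrinks in the core, so the timestep shrinks automatically, as described above.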
\nThe assumption of adiabaticity, along with the use of Lagrangian zones to keep the mass of each shell fixed, fixes the density and velocity changes as functions of the set of changes of radii $\Delta\tilde{r}_i$.\nHence the requirement of hydrostatic equilibrium gives a system of differential equations for the changes of radii which, when linearized, is tridiagonal (since the thickness of each shell depends not only on its own central radius but that of its nearest neighbors). \nThe resulting system is solved using a standard linear algebra library\footnote{\url{http:\/\/www.gnu.org\/software\/gsl}}. \n\n\subsection{Results}\n\n\begin{figure}\n \begin{center}\n \subfloat[\label{fig:nfw_frac_density.pdf}][]{\includegraphics[width=.7\textwidth]{nfw_frac_density}}\n \\\n \subfloat[\label{fig:nfw_density.pdf}][]{\includegraphics[width=.7\textwidth]{density_central_evolution}}\n \end{center}\n \caption{(a) Evolution of SIDM density profiles, starting with an $f=0.01$ NFW halo. \nOnly the inner part of the halo is shown; the outer part still asymptotes to $r^{-3}$ as in Figure \ref{fig:nfw init} above for all halos. \nFrom top to bottom, profiles are at $0.0$, $2.51$, $4.79$, $7.07$, $9.35$, and $11.63$ central relaxation times.\nBecause $t_{r}\propto\rho^{-1}$, this corresponds to integrating for $\sim1000$ relaxation times in an $f=1$ halo. \nHowever, comparison to the $f=1$ results below suggests that the density profile flattens in the same manner, just $f^{-1}$ times slower: evidently the non-interacting dark matter has little influence on the central SIDM evolution. \n(b) Evolution of an $f=1$ halo starting from NFW initial conditions. \nFor clarity, only the inner portion of the density profile is shown: the outer profile has not yet changed significantly at this stage. 
\nFrom top to bottom, profiles are at $0.0$, $9.73$, $22.87$, $36.86$, and $65.49$ central relaxation times.\nAs in the $f=0.01$ case, the density profile is flattening as a core develops. \n\\label{fig:initial evolution}}\n\\end{figure}\n\n\nUnfortunately, the above procedure is still insufficient to integrate (\\ref{eq:master 1}--\\ref{eq:master 6}) in full generality. \nThe problem is that, because the SIDM and collisionless dark matter are \\emph{separately} in hydrostatic equilibrium, the method of Lagrangian zones will result in different sets of radii for the two species. \nBut in order to perform subsequent timesteps, we need the total mass distribution at each radius for both types of DM. \nFor computationally feasible numbers of shells ($N\\sim400$), interpolation is not accurate enough to preserve numerical stability and the distributions cannot be integrated all the way up to the point of gravothermal collapse. \n\nWe can, however, consider the two limiting cases.\n(Luckily, these happen to be the cases we are interested in!)\nIn the pure SIDM case $f=1$, there is only one species and the problem does not arise. \nIn the uSIDM case, $f\\ll1$, we can ignore the gravitational backreaction of the uSIDM component on the collisionless DM and assume that it maintains an NFW profile throughout, allowing the calculation of its mass distribution analytically at every point. \nWe expect that the two cases should yield similar results, because the temporary violation of (\\ref{eq:master he}), the hydrostatic equilibrium condition, after each heat conduction timestep is overwhelmingly due to the increase on the LHS of the equation, from heat conduction, rather than from interactions with the collisionless component, on the RHS of the equation. \nThis is just the statement that the self-interaction is much larger than gravitational strength. \nWe indeed find that this is the case, at least qualitatively. 
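In the $f\ll1$ limit, the analytic NFW mass distribution assumed for the collisionless component is cheap to evaluate at every shell radius, which is what removes the need for interpolation. A minimal sketch (the function name is ours):

```python
import numpy as np

def nfw_mass(r):
    """Dimensionless NFW enclosed mass: the integral from 0 to r of
    rho(r') r'^2 dr' with rho(r') = 1/(r' (1+r')^2), which evaluates
    to ln(1+r) - r/(1+r)."""
    r = np.asarray(r, dtype=float)
    return np.log(1.0 + r) - r / (1.0 + r)
```

Evaluating this closed form at each shell radius replaces the interpolation step that would be needed if both species were evolved simultaneously.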
\nConsider Figure \\ref{fig:initial evolution}, which shows the early evolution of $f=0.01$ and $f=1$ halos with the same value of $\\hat{\\sigma}$. \nWe see that the behavior is indeed qualitatively the same: in both cases, a core begins to form as heat conduction dissolves the initial cusp. \nNote that the time scales are different: in the uSIDM case the relaxation time is increased by a factor of $f^{-1}$ since the uSIDM density is a factor of $f$ lower. \nSo Figure \\ref{fig:initial evolution} suggests that uSIDM evolution is the same as the $f=1$ case, just $f^{-1}$ times slower. \n\n\\begin{figure}\n\\includegraphics[width=1\\textwidth]{3d_density_v5_v2}\n\\caption{Runaway collapse of an $f=1$ SIDM halo with $\\hat{\\sigma}=0.088$, starting from an initial NFW profile. \nThe inner profile starts cuspy, rapidly relaxes to a self-similar profile (as in \\cite{Balberg:2002ue} and Figure \\ref{fig:initial evolution} above) with a $\\tilde{\\rho}=1$ core, then slowly increases in density in a self-similar manner. \nAfter $\\sim450$ relaxation times, the core of the halo becomes optically thick, and self-similarity is broken: the core splits into a very dense inner core and an outer core which transitions between the two regions. \nCatastrophic collapse occurs as $\\tilde{t}\/\\tilde{t}_{\\mathrm{r,c}}(0)$ approaches $\\sim455.65$. \n\\label{fig:collapse_sfmp}}\n\\end{figure}\n\nWe will focus on the $f=1$ case in the following, and then rescale our final results by $f^{-1}$ as just described.\nFigures \\ref{fig:collapse_sfmp} and \\ref{fig:collapse_smfp_mass} show the entire evolution of an $f=1$ halo with $\\hat{\\sigma}=0.088$ (chosen to allow comparison with \\cite{2011MNRAS.415.1125K}) from an initial NFW profile through to gravothermal collapse. 
\nFirst consider Figure \\ref{fig:collapse_sfmp}, which shows the evolution of the density profile.\nAlthough the halo is initially in an NFW profile, the initial negative luminosity at small radii causes the cusp to empty out, driving evolution towards the cored, self-similar profile found by \\cite{Balberg:2002ue}, as was already seen in Figure \\ref{fig:initial evolution} above. \nWhen the self-similar profile is reached after a few tens of relaxation times, the luminosity profile becomes everywhere positive, and the core increases in density while its mass steadily shrinks.\nWhile the entire profile is in the lmfp regime, evolution is self-similar, and the central density increases steadily.\nInevitably, there comes a time, about 450 relaxation times after virialization, when the inner density increases enough that the most central regions enter the smfp regime, and the core bifurcates into a very dense inner core and an outer core which transitions between the two regions.\n\n\\begin{figure}\n\\centering{}\\includegraphics[width=1\\linewidth]{3d_mass_v2}\n\\caption{Mass profile history of a cored SIDM halo with $\\hat{\\sigma}=0.088$, starting from an initial NFW profile. \nOnce the core enters the optically thick regime, around $\\tilde{t}\/\\tilde{t}_{r,c}(0)=450$, the inner core contains a constant total mass, around $2.5$--$3\\%$ of the characteristic mass $M_{0}$. \n\\label{fig:collapse_smfp_mass}}\n\\end{figure}\n\nImportantly, once the smfp regime has been reached, mass loss from the inner core is no longer efficient: the inner core has become so optically thick that evaporation is only possible from its boundary, not from the entire volume. 
\nThis means that the mass in the inner core is essentially constant over the very short time ($\\apprle10t_{r,c}(0)$) between breaking of self-similarity and catastrophic collapse.\nAs mentioned in subsection \\ref{sub:integration} above, the size of successive timesteps decreases rapidly as the gravothermal catastrophe approaches, so this short time takes very many (increasingly small) timesteps to integrate over, and the time of collapse can be precisely given as $455.65$ relaxation times after the start of integration. \nBecause evaporation is inefficient after the loss of self-similarity, the mass in the inner core is still nonzero at the moment of collapse, unlike in the globular cluster case, and a black hole will form. \nFigure \\ref{fig:collapse_smfp_mass} shows that the inner core at collapse contains a mass of around $0.025M_{0}$.\nBecause the fluid approximation breaks down, we do not know that the entire inner core will collapse directly into a black hole, but, because it is optically thick, Bondi accretion \\cite{Bondi:1952ni} is extremely efficient. Hence, we expect that the black hole will rapidly grow to encompass the entire region regardless.\n\n\\section{Supermassive Black Holes from uSIDM}\n\nWe have found that halos with a uSIDM component (and pure SIDM halos, on much longer timescales) grow black holes of mass $M_{BH}\\equiv0.025fM_{0}$ in a time $455.65t_{r,c}(0)$. \nGiven the considerations discussed above, uSIDM can help explain the existence of massive high-redshift quasars if the resulting black holes are large enough and form early enough that baryonic accretion can grow them to $\\sim10^{9}M_{\\odot}$ by $z\\apprge6$. 
\nIt remains to evaluate $M_{0}$ and $t_{r,c}(0)$ in terms of the halo parameters and use this requirement to place constraints on the uSIDM parameters $\\{\\sigma,\\ f\\}$.\n\n\\subsection{Halo Parameters}\n\nInstead of using the characteristic NFW parameters $\\{\\rho_{s},r_{s}\\}$, it is convenient to again parameterize a halo by its virial mass $M_{\\Delta}$ and concentration $c$. \nIn dimensionless units, the mass contained within the $i$th shell is\n\\begin{equation}\n\\tilde{M}_{i}=\\int_{0}^{\\tilde{r}_{i}}\\tilde{\\rho}\\tilde{r}^{2}d\\tilde{r}=\\int_{0}^{\\tilde{r}_{i}}\\tilde{r}^{-1}\\left(1+\\tilde{r}\\right)^{-2}\\tilde{r}^{2}d\\tilde{r}=\\ln(1+\\tilde{r}_{i})-\\frac{\\tilde{r}_{i}}{1+\\tilde{r}_{i}},\n\\end{equation}\nand the virial radius is $r_{\\Delta}\\equiv cr_{s}$, so\n\\begin{equation}\n\\frac{M_{BH}}{M_{\\Delta}}=\\frac{\\tilde{M}_{BH}}{\\tilde{M}(c)}=\\frac{0.025f}{\\ln(1+c)-c\/(1+c)}.\\label{eq:bh mass}\n\\end{equation}\nThis gives the desired expression for the seed black hole mass $M_{BH}$ in terms of the halo and uSIDM parameters. The denominator ranges from $\\sim0.5$ to $2$ for realistic values of the halo concentration, so the BH mass is a few percent of the total uSIDM mass in the halo.\n\nRecall that the relaxation time is $t_{r,c}(0)=1\/(af\\rho_{s}\\nu_{s}\\sigma)$, i.e.\\ the scattering time at the characteristic radius. \nThe lower end of the interesting range for $f=1$ SIDM is $\\sim0.1\\ \\mathrm{cm^{2}\/g}$, for which the relaxation time at the characteristic radius of a Milky-Way scale halo is approximately a Hubble time. \nTo grow a black hole in galactic halos by $z\\sim6$, the relaxation time needs to be $\\sim10^{4}$ times smaller to ensure $\\sim500$ relaxation times by the time the universe was a twentieth of its present age. \nThis does not mean that $\\sigma\\approx1000f^{-1}\\ \\mathrm{cm^{2}\/g}$, though! 
\nRecall that $\\rho_{s}=\\delta_{c}\\rho_{crit}$, where $\\delta_{c}$ is a function of $c$ and the cosmology given below, and the critical density goes as $(1+z)^{3}$ in the matter-dominated era. \nAlso $r_{s}\\propto r_{\\Delta}\\propto(M_{\\Delta}\/\\rho_{crit})^{1\/3}$ implies $\\nu_{s}\\propto\\sqrt{M_{\\Delta}\/r_{\\Delta}}\\propto M_{\\Delta}^{1\/3}\\rho_{crit}^{1\/6}$.\nHence the mass and approximate redshift dependence of the relaxation time are\n\\begin{equation}\nt_{r,c}(0)\\propto(1+z)^{-7\/2}M_{\\Delta}^{-1\/3}\\label{eq:approximate_collapse_time}\n\\end{equation}\nand we expect that $\\sigma f$ need not be that much larger than the interesting range for $f=1$, i.e.\\ we expect $\\sigma f\\apprge 0.1-1\\ \\mathrm{cm^{2}\/g}$.\n\nAt this point, the reader might worry that this conclusion combined with the observation of non-collapsed cores in the nearby ($z\\sim0$) universe rules out the existence of standard ($f\\approx1$) SIDM.\nWe emphasize, however, that large values of $\\sigma f$ mean that core collapse in times much smaller than the age of the universe is \\textit{possible}, but not, we expect, \\textit{typical}; it occurs only in the rare halos which virialize at very high redshifts and remain uninterrupted, i.e.\\ do not experience major mergers, for long enough to complete the gravothermal collapse process. \nSee subsection \\ref{sec:caveats} below for further discussion of this point. 
\n\nThe exact expression for the halo relaxation time is\n\\begin{equation}\nt_{r,c}(0)=\\frac{1}{af\\sigma}\\left(\\frac{K_c^{2}}{4\\pi G^{3}}\\right)^{1\/6}\\delta_{c}^{-7\/6}\\rho_{crit}(z)^{-7\/6}M_{\\Delta}^{-1\/3}\\label{eq:exact_collapse_time}\n\\end{equation}\n\\begin{equation}\n=\\mathrm{0.354\\ Myr}\\times\\left(\\frac{M_{\\Delta}}{10^{12}M_{\\odot}}\\right)^{-1\/3}\\left(\\frac{K_{c}}{K_{9}}\\right)^{3\/2}\\left(\\frac{c}{9}\\right)^{-7\/2}\\left(\\frac{\\rho_{crit}(z)}{\\rho_{crit}(z=15)}\\right)^{-7\/6}\\left(\\frac{\\sigma f}{1\\mathrm{\\ cm^2\/g}}\\right)^{-1},\n\\end{equation}\nwhere $K_c\\equiv\\ln(1+c)-c\/(1+c)$, $\\delta_{c}=(\\Delta\/3)c^{3}\/K_c$, and $\\Delta$, the virial overdensity, is $18\\pi^2\\Omega_{m}^{0.45}$ for a flat universe, approximately 178 in the matter-dominated era.\nSo the relaxation time, and hence the collapse time, is given in terms of the halo and uSIDM parameters. To match observations, we need some seed black holes to grow by a large enough factor via Eddington accretion to reach $M_{BH}\\approx10^{9}M_{\\odot}$ by $z\\apprge6$; this leads to an inequality on $\\sigma$ when the halo parameters and $f$ are specified.\n\n\\subsection{Explaining Observations}\n\nLet us spell out the procedure more precisely.\nAn observation of a particular high-redshift quasar at redshift $z_{obs}$ yields a value for the luminosity, which corresponds to a supermassive black hole of mass $M_{SMBH}$ once the measured luminosity is identified with the Eddington luminosity and a particular value for the radiative efficiency $\\epsilon_r$ is assumed. 
\n(We have already discussed potential issues with these assumptions in section \\ref{sec:smbh} above; in the remainder of the paper, we will take the published observations at face value and assume their quoted SMBH masses, which take $\\epsilon_r=0.1$ as input, are correct.)\n\nAt the same time, the uSIDM framework developed in this paper tells us that NFW halos of virial mass $M_\\Delta$ and concentration $c$ virialized at redshift $z$ form seed black holes of mass $M_{BH}$ in a time $455.65t_{r,c}(0)$, i.e.\\ the seed black holes are formed at redshift $z_{coll}$, where \n\\begin{equation}\nt(z_{coll})-t(z)=455.65t_{r,c}(0),\n\\label{eq:collapse_time}\n\\end{equation} \nand the time $t(z)$ after the Big Bang corresponding to redshift $z$ is given by the usual cosmology-dependent expression, \n\\begin{equation}\nt(z)=t_{0}\\int_{0}^{1\/(1+z)}\\frac{da}{\\dot{a}}.\n\\end{equation}\nEquations (\\ref{eq:bh mass}) and (\\ref{eq:exact_collapse_time}) then give expressions for these quantities in terms of the halo properties $\\{M_\\Delta,c,z\\}$ and the uSIDM parameters $\\{\\sigma,f\\}$.\n\nThere is still one parameter that must be specified: the fraction of SMBH mass which is due to accretion of baryons as opposed to the initial seed black hole. \nFor simplicity, we will assume that the central black hole accretes continuously at the Eddington limit from the time of formation to the time at which it is observed. 
\nOf course, more complicated growth histories are both possible and likely.\nNevertheless, this simplifying assumption allows us to specify the fraction by instead giving $N_e$, the number of $e$-folds of accretion at the Eddington limit.\nThis finally allows us to compute the observable quantities: we have \n\\begin{equation}\nt(z_{obs})=t(z_{coll})+N_e t_{\\mathrm{Sal}}\\label{eq:z_obs},\n\\end{equation}\n\\begin{equation}\nM_{SMBH}=M_{BH}\\exp(N_e)\\label{eq:M_SMBH}.\n\\end{equation}\nTo find acceptable values of $\\sigma$ and $f$ given the SMBH observables, we must specify (or marginalize over) the halo parameters and the baryonic contribution to the SMBH mass.\nThe latter quantity directly sets, via (\\ref{eq:z_obs}), the redshift of seed black hole collapse, $z_{coll}$, which yields the required collapse time and thus the required value of $\\sigma f$ via (\\ref{eq:exact_collapse_time}).\nKnowing the growth due to accretion of baryons also tells us, via (\\ref{eq:M_SMBH}), the required seed black hole mass $M_{BH}$, which specifies $f$ via (\\ref{eq:bh mass}).\n\n\\subsection{Examples}\\label{sub_examples}\n\nAs an example, consider again ULAS J1120+0641, with mass $M_{SMBH}\\approx2\\times10^{9}M_{\\odot}$ at $z_{obs}=7.085$. \nTo grow four orders of magnitude ($N_e=\\ln 10^4$) by Eddington-limited baryon accretion, for example, \nwe must form a seed black hole with mass $M_{BH}=2\\times10^{5}M_{\\odot}$ by $z_{coll}=12.9$. 
\nWith a halo of mass $M_\\Delta=10^{12}M_{\\odot}$ and concentration $c=9$ formed at redshift $z=15$, we find that $t_{r,c}(0)=0.354\\ \\mathrm{Myr}\\times(1\\ \\mathrm{cm^{2}\/g})\/(\\sigma f)$.\nIn order for $455.65$ relaxation times to have passed in the $64.5\\ \\mathrm{Myr}$ between $z=15$ and $z_{coll}=12.9$, we must have $\\sigma f=2.50\\ \\mathrm{cm^{2}\/g}$.\nFrom (\\ref{eq:bh mass}), we require $f=1.12\\times10^{-5}$ to get the correct seed mass, so $\\sigma=2.23\\times10^{5}\\ \\mathrm{cm^{2}\/g}=3.97\\times10^{5}\\ \\mathrm{b\/GeV}$.\nThe large value of $\\sigma$ is unsurprising: we chose to start with a halo much larger than the seed black hole we wanted to form, so $f$ had to be small and $\\sigma$ large in order to compensate. \n\nAlternatively, we could start with the same halo but produce the black hole entirely from uSIDM. \nThe relaxation time is unchanged: $t_{r,c}(0)=0.354\\ \\mathrm{Myr}\\times(1\\ \\mathrm{cm^{2}\/g})\/(\\sigma f)$. \nBut now the black hole need not form until $z=7.085$, $479\\ \\mathrm{Myr}$ after halo formation, so the required value of $\\sigma f$ is smaller, $\\sigma f=0.336\\ \\mathrm{cm^{2}\/g}$. \nAgain applying (\\ref{eq:bh mass}) yields $f=0.112$, $\\sigma=2.99\\ \\mathrm{cm^{2}\/g}=5.36\\ \\mathrm{b\/GeV}$, coming much closer to the classic SIDM cross section.\n\nOf course, in the absence of direct measurements of the host halo of ULAS J1120+0641 the problem is underdetermined. 
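The arithmetic of these two examples can be reproduced with a short script. The helper names are ours; the numerical inputs are exactly the numbers quoted above.

```python
import math

def sigma_f_required(dt_myr, t_rc_norm_myr=0.354, n_relax=455.65):
    """sigma*f (cm^2/g) needed for n_relax relaxation times to elapse in
    dt_myr, given t_rc(0) = t_rc_norm_myr * (1 cm^2/g)/(sigma*f)."""
    return n_relax * t_rc_norm_myr / dt_myr

def fraction_required(M_BH, M_halo, c=9.0):
    """uSIDM fraction f from the seed-mass relation
    M_BH/M_halo = 0.025 f / (ln(1+c) - c/(1+c))."""
    K_c = math.log(1.0 + c) - c / (1.0 + c)
    return (M_BH / M_halo) * K_c / 0.025

# Example 1: a 2e5 Msun seed by z_coll = 12.9, i.e. 64.5 Myr after z = 15
sf1 = sigma_f_required(64.5)        # ~2.50 cm^2/g
f1 = fraction_required(2e5, 1e12)   # ~1.12e-5
sigma1 = sf1 / f1                   # ~2.2e5 cm^2/g

# Example 2: the full 2e9 Msun black hole from uSIDM alone, 479 Myr available
sf2 = sigma_f_required(479.0)       # ~0.336 cm^2/g
f2 = fraction_required(2e9, 1e12)   # ~0.112
sigma2 = sf2 / f2                   # ~3 cm^2/g
```

The two examples thus differ only in the time available for collapse and in the seed mass demanded of the uSIDM component.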
\nThe point is that $\\sigma f$ takes reasonable values of $\\mathcal{O}(1)\\ \\mathrm{cm^{2}\/g}$, well within the regime described by the gravothermal fluid approximation starting from an initial NFW profile.\n\n\\section{Discussion}\\label{discussion}\n\nThe examples in the previous section show that \\textit{individual} observations of high-redshift quasars can successfully be explained within the uSIDM paradigm.\nFor uSIDM to be fruitful, however, we should ideally be able to find (or rule out) a consistent choice of these parameters which successfully explains the \\textit{cosmological} abundance of high-redshift quasars.\nIt is unsurprising that some choice of cross section per unit mass $\\sigma$ and fraction $f$ can reproduce one particular observation, e.g.\\ ULAS J1120+0641, but it is more suggestive if that choice can reproduce the entire observed number density of supermassive black holes as a function of mass and redshift.\nThe minimal requirement for a viable uSIDM model is that it explain (or at least not conflict with) what has currently been observed.\nThat means producing the correct abundance of $\\sim 10^9 M_\\odot$ quasars at redshift $6-7$, as has already been discussed, and ensuring that supermassive black holes are not overproduced in the nearby (lower-redshift) universe.\nBeyond that, one would like to make concrete predictions for the next generation of experiments, which should be sensitive to smaller masses and higher redshifts.\n\nThis task is difficult for a number of reasons.\nThe essential problem is that a number of nuisance parameters must be constrained or marginalized over in order to connect the uSIDM properties to the SMBH distribution (and then further to the quasar distribution). Even in the simplified setup described above there were already the $e$-folds of baryonic accretion, $N_e$, and the halo parameters $M_\\Delta$ and $c$. 
\nIn the cosmological context, these nuisance parameters are promoted to entire unknown functions that are currently only poorly constrained by observations and simulations.\nEven when constraints or functional forms are available, they are often trustworthy only in regimes far separated from the ones of interest to us here (for example, in the low-redshift universe, or in a lower mass range).\nWe have already encountered this problem in Section \\ref{sec:gravothermal_collapse} above, when considering the concentrations of massive NFW profiles at high redshifts.\n\nNevertheless, in the remainder of this section we attempt to estimate the constraints that our existing knowledge places on the uSIDM parameter space.\nWe first explain the source of our cosmological uncertainty and means by which it could be improved.\nNext we note a different source of tension within $\\Lambda\\mathrm{CDM}$, independent of the existence of high-redshift SMBHs, that could be relieved by uSIDM.\nFinally, we present tentative maps of the uSIDM parameter space relevant to the resolution of these tensions.\n\n\\subsection{Cosmological Caveats}\\label{sec:caveats}\n\nPredicting the cosmological consequences of gravothermal collapse given a choice of the uSIDM parameters requires a unified picture of the SIDM profile at galaxy formation in terms of the halo mass and redshift, which will be easier given proper $N$-body simulations of halos containing uSIDM. \nThere are several reasons why using the fluid approximation to simulate an isolated halo does not suffice.\n\nFirst, although the process of gravothermal collapse can be quite short on cosmological timescales, which is why it allows massive quasars to form faster than in the standard $\\Lambda\\mathrm{CDM}$ picture, we have seen that it is long in terms of halo time scales (several hundred characteristic relaxation times). 
\nIt is therefore necessary for the halo to remain essentially undisturbed for this length of time in order for core collapse to occur and seed black holes to form.\nThe beginning and end of the collapse process---the elimination of the initial cusp and the catastrophic collapse itself after the core becomes optically thick---are driven entirely by dynamics in the innermost part of the profile, so we might expect them to be insensitive to accretion or mergers in the outer halo. \nFigures \\ref{fig:initial evolution} and \\ref{fig:collapse_sfmp} make clear, though, that these stages are very short compared to the length of the overall process.\nThe vast majority of the time required for collapse involves the slow increase of density in the core as mass flows inward from the outer halo, which we expect to be sensitive to accretion or mergers.\nIn other words, the halo must be isolated for several hundred relaxation times.\nStrong interactions with other masses, such as major mergers, will disrupt the collapse process, essentially resetting the clock for seed black hole formation.\nEven accounting for more controlled accretion via minor mergers will technically necessitate the tracking of substructure within the collapsing halo, since such substructure breaks the spherical symmetry required by the fluid approximation, although we expect it will not change our qualitative conclusions. 
Such tracking of substructure is only truly possible using $N$-body simulations.\n\nMore importantly, determining how often the collapse process is disrupted, and therefore predicting the spectrum of black hole masses as a function of redshift for particular values of $\\sigma$ and $f$, in order to compare with existing and upcoming observations, requires detailed cosmological information.\nWe need not only the halo mass function at very high redshifts (up to redshift $15$ in the above example, and ideally out to at least $z\\apprge30$--$50$) but also information on halo shape (the concentration parameter $c(M_\\Delta,z)$ at the same high values of $z$, in the case that the halos form in NFW profiles) and, most importantly, detailed merger probabilities and histories as functions of mass and redshift.\nEven when analytical approximations to these quantities at $z\\apprle1$ exist, it is unclear how confidently they can be extrapolated to $z\\sim50$. Hence dedicated $N$-body simulations are desirable. 
\nWe will briefly note some additional interesting results, beyond the prediction of the history of the black hole mass function, which could be investigated given this cosmological information.\n\n\\subsection{The Too Big to Fail Problem}\n\nThis paper has noted that gravothermal collapse of uSIDM can produce seed black holes in the center of virialized halos.\nWe have primarily been concerned with using this mechanism to explain the abundance of massive high-redshift quasars, but we now mention a few other areas where it could prove useful.\nWe emphasize that these are logically independent of the quasar issue: we should not necessarily expect that the same choice of uSIDM parameters will be useful in both cases.\n\nFirst, it is intriguing that there exists a well-known (and relatively tight) relation between the properties of a host galaxy and the massive black hole it contains, the $M$--$\\sigma$ relation \\cite{Magorrian:1997hw,Ferrarese:2000se,Gebhardt:2000fk}, which suggests some sort of causal mechanism connecting the central portions of the galaxy containing the black hole with the more distant regions where the velocity dispersion is measured.\nGravothermal collapse naturally provides one such mechanism, and it would be suggestive if it produced the correct relation for some choice of the uSIDM parameters.\nAt a minimum, it should not spoil the observed relationship in nearby galaxies; this has been used previously to constrain the cross section of $f=1$ SIDM \\cite{Hennawi:2001be,Hu:2005cd}.\n\nMore speculatively, the presence of central black holes in dwarf galaxies could resolve the ``too big to fail'' problem \\cite{BoylanKolchin:2011de, BoylanKolchin:2011dk}, in which the brightest Milky Way satellites have much lower central densities than the most massive subhalos in $\\Lambda\\mathrm{CDM}$ simulations of Milky-Way sized galaxies. 
\nOne way to resolve the problem is to invoke physics not present in the simulations to reduce the central densities (within $\\sim1\\ \\mathrm{kpc}$ of the subhalo center) by a factor of order unity. \nIf all of the dark matter is self-interacting with $\\sigma\\simeq0.1\\ \\mathrm{cm^{2}\/g}$, it naturally smooths out cusps to form cores, which could provide the needed reduction in density \\cite{Rocha:2012jg}.\nBut the small fractions $f\\ll1$ we consider in this paper cannot solve the problem in this manner; another method of removing substantial mass from the central $\\sim\\mathrm{kpc}$ is needed.\n\nUnder some circumstances, it is possible that black holes could provide the needed reduction in mass.\nMerging black hole binaries emit gravitational waves anisotropically and thus receive an impulsive kick, up to several hundred $\\mathrm{km\/s}$. \nThis energy can be distributed to the surrounding baryons, kicking out a substantial portion of the central mass and forming a core \\cite{Merritt:2004xa, BoylanKolchin:2004tf, Lippai:2008fx}.\nSuch a scenario is only viable if the required binary black hole mergers are sufficiently common within dwarf galaxies or their progenitors.\nAlthough the standard cosmological model predicts the presence of black holes in the center of nearly all large halos, it is not clear that $\\Lambda\\mathrm{CDM}$ produces enough black holes within the smaller halos which are the progenitors of dwarf galaxies.\nHere we propose instead to use uSIDM to produce them.\n\nSolving the Too Big to Fail problem using black holes formed from uSIDM requires a particular sequence of events: first, small halos must remain isolated enough to form seed black holes; second, the probability of major mergers must become large enough that essentially all of the Milky Way satellites have binary black holes coalesce within them in order to reduce their central densities. 
\nDuring the epoch of matter domination, we see that the black hole formation time for a halo of fixed mass goes as $(1+z)^{-7\/2}$ (\\ref{eq:approximate_collapse_time}), while we expect the merger timescale to be set roughly by the Hubble time, $H^{-1}(z)\\sim(1+z)^{-3\/2}$. \nSo halos of a given mass that form before some critical redshift will indeed grow black holes before they merge.\nIn the next subsection we consider the parameter space where black hole seeds are ubiquitously formed in the progenitors of today's dwarf galaxies.\n\n\\subsection{Parameter Space}\n\n\\subsubsection{High-Redshift Quasars}\n\nIn subsection \\ref{sub_examples} above, we presented two possible routes to produce a supermassive black hole matching observations. \nHere we move from specific examples to a discussion of the entire parameter space relevant to the production of high-redshift quasars like ULAS J1120+0641.\nRecall that we have six input parameters: $\\{M_\\Delta,c,z,N_e,\\sigma,f\\}$, respectively the halo mass, concentration, redshift of virialization, $e$-folds of Eddington-limited accretion after collapse, and uSIDM cross section per unit mass and fraction.\nWe specify the halo properties as above: $M_\\Delta=10^{12} M_\\odot$, $c=9$, $z=15$.\nWe then use (\\ref{eq:z_obs}) and the redshift at which a quasar is observed, in this case $z_{obs}=7.085$, to eliminate $N_e$, leaving a two-dimensional parameter space for production of black holes by this time.\nFinally, the requirement that the mass of the quasar match observations, $M_{SMBH}\\approx2\\times10^{9}M_{\\odot}$, combined with the assumption of continuous Eddington-limited growth since black hole formation, reduces the parameter space to one dimension, a curve $\\sigma(f)$.\n\n\\begin{figure}\n\\centering{}\\includegraphics[width=1\\linewidth]{quasarparam7_r1_cropped}\n\\caption{uSIDM parameter space for production of massive high-redshift quasars. 
\nWe have used the numbers considered in the example above: $M_{SMBH}\\approx2\\times10^{9}M_{\\odot}$, $z_{obs}=7.085$, $M_\\Delta=10^{12}M_{\\odot}$, $c=9$, $z=15$. \nThe solid line plots values of $\\sigma$ and $f$ that result in an SMBH of the desired size at the time of observation, assuming continuous Eddington accretion from the time the core collapses and the seed black hole is formed. \nThe green dotted vertical line marks the largest allowed value of $f$. \nTo its right, collapsed black holes are already larger than $M_{SMBH}$.\nTo its left, collapsed black holes form smaller than $M_{SMBH}$, but can grow larger by accreting baryons.\nThe points on the blue dashed line all result in collapse precisely at the redshift of observation; below this line, a black hole has not yet formed by $z_{obs}$.\nPoints on and above the red dashed line result in a halo that is already optically thick at the time of virialization, i.e.\\ optically thick at the characteristic radius for the initial NFW profile.\nAs discussed in the text, the methods used in this paper are not directly applicable here, but we still expect gravothermal collapse.\nNumerical values for all of these bounding lines are given in the text.\n\\label{fig:quasarparam}}\n\\end{figure}\n\nWe present the parameter space in Figure \\ref{fig:quasarparam}.\nThe one-dimensional curve $\\sigma(f)$, where continual Eddington-limited accretion since black hole formation results in a supermassive black hole with $M_{SMBH}=2\\times10^{9}M_{\\odot}$ at $z_{obs}=7.085$, is the solid black line.\nBecause the baryonic accretion history after seed black hole formation is uncertain, as discussed in Section \\ref{sec:smbh} above, we also indicate with the shaded regions the entire portion of the full $\\sigma$--$f$ plane in which black holes of any size smaller than $M_{SMBH}$ are produced by $z_{obs}$.\n\nThere are several constraints on this reduced parameter space.\nFirst is the simple requirement that gravothermal 
collapse indeed occurs before $z_{obs}=7.085$.\nWe have already seen in subsection \\ref{sub_examples} above that this constrains $\\sigma f\\ge0.336\\ \\mathrm{cm^{2}\/g}$.\nSecond is the requirement that the black hole produced by gravothermal collapse must not be larger than the observed mass of ULAS J1120+0641.\nCombined with additional assumptions about baryonic accretion, this excludes the entire region above the black curve in Figure \\ref{fig:quasarparam}.\nEven without the assumptions, this still constrains the black hole mass via equation (\\ref{eq:bh mass}), and therefore the uSIDM fraction $f$, provided that a black hole is actually produced.\nAgain, the resulting constraint was calculated in subsection \\ref{sub_examples}: $f\\le0.112$. \nLarger values of $f$ would produce black holes which contained too large a portion of the mass of the entire halo.\n\nFinally, recall that our expressions for the collapse time and resulting black hole mass are based on simulations.\nAs discussed in subsection \\ref{sub:initial} above, the simulations assume the uSIDM is initially in an NFW profile, which is only valid if uSIDM interactions were slow compared to the timescale of halo formation, i.e.\\ when the halo is initially optically thin.\nThis places a constraint on $\\sigma f$ as a function of the halo parameters, given by equation (\\ref{optically_thin_bound}).\nFor our chosen values this gives $\\sigma f\\le0.425\\ \\mathrm{cm^{2}\/g}$.\n\nBecause $\\sigma f$ directly sets the collapse time via (\\ref{eq:exact_collapse_time}, \\ref{eq:collapse_time}), the upper bound on $\\sigma f$ is also a lower bound on the time of formation of a black hole from an initially optically thin uSIDM halo: in this case, we must have $z_{coll}\\le7.90$.\nIn turn, this places an upper bound on the number of $e$-folds of growth from baryons that can occur before $z_{obs}=7.085$, via (\\ref{eq:z_obs}).\nWe find $N_e\\le2.24$, i.e.\\ black holes formed from optically thin uSIDM halos 
have time to grow less than an order of magnitude from baryons.\nIn particular, we cannot trust the precise results of our simulations for the example we considered in subsection \ref{sub_examples} above, with $N_e=\ln 10^4$.\nNote, however, that the upper bound on $\sigma f$, and thus on $z_{coll}$, is independent of $z_{obs}$: it depends only on $z$, the redshift of halo formation.\nMost high-redshift quasars are seen near $z_{obs}\sim6$: ULAS J1120+0641 is an outlier.\nBlack holes at redshift 6 have had time for another $180\ \mathrm{Myr}\sim4t_\mathrm{sal}$ of baryonic growth, so they can grow by up to a factor of $\sim500$ from baryons.\n\nWe have seen that there is an extremely narrow range, $0.336\ \mathrm{cm^{2}\/g}\le\sigma f\le0.425\ \mathrm{cm^{2}\/g}$, in which the uSIDM halo considered here is optically thin at virialization but nevertheless rapidly collapses to form a black hole.\nHow can we explain the closeness of these two bounds?\nIn general, they are not independent.\nThe upper bound requires that the initial halo be optically thin, i.e.\ that the scattering cross section be less than the ``characteristic cross section'' of the halo, $1\/(\rho_s r_s)$.\nBut the lower bound requires that collapse not take too long, i.e.\ that the scattering time at the characteristic radius, $1\/(\sigma f \rho_s \nu_s)$, be small compared to a Hubble time.\nThese bounds can be simultaneously satisfied when $H^{-1}\sim r_s\/\nu_s$.\nBut, ignoring concentration dependence and numerical factors, $\nu_s\sim\sqrt{GM_\Delta\/r_s}\sim\sqrt{G \rho_{crit} r_s^2}\sim H r_s$, so $r_s\/\nu_s\sim H^{-1}$ as desired. \nThat is, both bounds exhibit the same mass dependence, and their redshift dependence is identical when $z$ and $z_{coll}$ are similar, as can be verified from (\ref{eq:opt_thin}, \ref{eq:exact_collapse_time}, \ref{eq:collapse_time}). 
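As a sanity check on this scaling argument, the following sketch (our own illustration, not from the paper) evaluates $r_s/\nu_s$ against $H^{-1}$ for the example halo above; the halo numbers match the text, while $\Delta=200$ and the Planck-like cosmological parameters are our assumptions:

```python
import math

# Numerical check (our own; not from the paper) that r_s / v_s ~ H^{-1}
# for a just-virialized halo.  Halo numbers match the example in the text;
# Delta = 200 and the Planck-like cosmology are assumptions.
G = 4.301e-9                  # gravitational constant, km^2 s^-2 Mpc Msun^-1
H0, Om, OL = 67.7, 0.31, 0.69
M, c, z, Delta = 1e12, 9.0, 15.0, 200.0

H = H0 * math.sqrt(Om * (1 + z)**3 + OL)                       # km/s/Mpc at z = 15
rho_crit = 3 * H**2 / (8 * math.pi * G)                        # Msun/Mpc^3
r_Delta = (3 * M / (4 * math.pi * Delta * rho_crit))**(1 / 3)  # virial radius, Mpc
r_s = r_Delta / c                                              # NFW scale radius
v_Delta = math.sqrt(G * M / r_Delta)                           # virial velocity, km/s

# Up to concentration dependence and O(1) factors, v_s ~ v_Delta, so the
# crossing time at r_s times H should be O(1/c):
print(round((r_s / v_Delta) * H, 3))   # → 0.011
```

The crossing time at $r_s$ is indeed within the expected factor of $c$ of the Hubble time at formation, consistent with the argument above.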
\nFor our particular choice of $c=9$, the numerical factors are nearly canceled by the concentration dependence, so the bounds are especially close.\n\nWe emphasize again, however, that the upper bound on $\\sigma f$ (the red dashed curve in Figure \\ref{fig:quasarparam}) is not a true physical exclusion of the uSIDM parameter space above it.\nIt merely signals that the fluid approximation used in this paper is no longer valid outside this space.\nAs discussed extensively in subsection \\ref{sub:initial} above, we expect that profiles in which the uSIDM starts optically thick should in fact undergo collapse even faster.\nThe requirement of an optically thin initial profile would only be physical if starting otherwise led to fragmentation, turbulence, or some other mechanism by which the core was destroyed or core collapse avoided.\n\nFigure \\ref{fig:quasarparam} presents the uSIDM parameter space for a particular choice of halo parameters $\\{M_\\Delta,c,z\\}$.\nWe briefly consider how constraints on the parameter space are changed when these parameters are altered.\nFirst consider the halo mass $M_\\Delta$. 
\nThe collapse time (\ref{eq:exact_collapse_time}) scales as $M_\Delta^{-1\/3}$, so smaller values of the halo mass require larger values of $\sigma f$ to form black holes in the same time.\nAt the same time, the largest value of $\sigma f$ for which the halo starts optically thin (\ref{eq:opt_thin}) has the same scaling with halo mass.\nSo decreasing $M_\Delta$ will shift both bounds to higher values of $\sigma f$ (up and to the right on the $\sigma$--$f$ plane), but it will not \textit{qualitatively} change the shape of the allowed parameter space.\nThis shift accounts for the main difference between the high-redshift quasar parameter space and the dwarf satellite parameter space we will consider next.\nWe note, however, that at relatively low redshifts there is a well-known black hole--bulge relation \cite{Magorrian:1997hw,Marconi:2003hj,Haring:2004hr}, $M_{SMBH}\sim10^{-3}M_{bulge}$.\nIf this relation persists at high redshifts, we should not depart too far from $M_\Delta\sim10^{12}M_\odot$ to explain $M_{SMBH}\sim10^9 M_\odot$. \n\nNext consider the concentration parameter $c$.\nAgain consulting (\ref{eq:exact_collapse_time}), we see that the collapse time depends strongly on concentration, scaling roughly as $c^{-7\/2}$.\nThe collapse time depends more strongly on the concentration than does the optically thin condition, so for small enough values of $c$ it will be impossible to form black holes before a given redshift starting from an optically thin halo.\nWhen the other halo parameters and $z_{obs}$ are kept fixed, we find that this critical value of $c$ is $7.4$.\nConversely, by going to larger and larger values of $c$ we can form black holes by any desired time at smaller and smaller values of $\sigma f$.\nHowever, extremely high values of the concentration parameter correspond (unsurprisingly) to extremely concentrated halos, with $r_s\ll r_\Delta$. 
\nIt is not clear that such halos are actually produced in $\Lambda \mathrm{CDM}$.\n\nFinally, consider the redshift of halo formation $z$.\nOnce more consulting (\ref{eq:exact_collapse_time}), we see that we can take the collapse time to zero by increasing $z$.\nHeuristically, this is because the critical density, and thus the characteristic density, increases with increasing redshift, so a just-virialized halo is closer to the densities needed to start the catastrophic collapse process.\nHowever, producing large virialized halos at higher and higher redshifts becomes increasingly unphysical given the bottom-up structure formation mechanism in $\Lambda \mathrm{CDM}$.\nDecreasing $z$ has the opposite effect: halos of a given size become more common, but larger values of $\sigma f$ are required to produce a black hole by a given $z_{obs}$.\nAs in the case of a small concentration parameter, for small enough $z$ it is impossible to form black holes starting from optically thin halos before a given time.\nIn this case the bound on the redshift of formation for the halo considered here is $z>13.53$.\n\n\subsubsection{Dwarf Galaxies}\n\nRecall from the previous subsection that one resolution to the Too Big to Fail problem is the formation of cores in dwarf galaxies if matter is ejected during binary black hole mergers.\nOur goal here is to specify the parameter space in which uSIDM produces black holes in the progenitors of dwarf galaxies before the epoch in which binary mergers are common.\nAs in the case of high-redshift quasars above, we start by specifying a set of typical values for the halo parameters $\{M_\Delta,c,z\}$.\nRef. 
\cite{BoylanKolchin:2011dk} compared the Milky Way dwarf galaxies to subhalos around similarly-sized galaxies in the Aquarius simulations \cite{Springel:2008cc} to derive probable values for the virial mass $M_\Delta$ and maximum central velocity $v_\mathrm{max}$ of each halo at the time of its infall into the main Milky Way halo.\n\nRecall that the maximum velocity of an NFW profile is\n\begin{equation}\nv_\mathrm{max}=0.465\sqrt{\frac{c}{K_c}}v_\Delta,\n\end{equation}\nwith $K_c=\ln(1+c)-c\/(1+c)$, at radius\n\begin{equation}\nr_\mathrm{max}=2.163r_s,\n\end{equation}\nas can easily be verified numerically using the definition of the NFW density profile (\ref{eq:nfw profile}) and $v=\sqrt{G M(r)\/r}$.\nRearranging gives an expression for $K_c\/c$ in terms of $M_\Delta$, $v_\mathrm{max}$, and $z_{infall}$. \nIn particular, since $r_\Delta\sim\rho_{crit}(z)^{-1\/3}$ at fixed $M_\Delta$, so that $v_\Delta^2\sim GM_\Delta\/r_\Delta\sim\rho_{crit}(z)^{1\/3}$, we have $K_c\/c\sim\rho_{crit}(z)^{1\/3}$.\nBut $K_c\/c$ has a maximum value of $0.216$, so above some value of $z_{infall}$, there is no possible NFW profile with the given values of $M_\Delta$ and $v_\mathrm{max}$. \nFor the derived values for the Milky Way dwarfs, we find in general that $z_{infall}\apprle6$. \nAs the infall redshift moves lower for each particular dwarf, the concentration increases from a minimal value $c=2.16$. 
(Actually, $K_c\/c$ attains its maximum at $c=2.16$ and approaches zero both as $c\rightarrow0$ and $c\rightarrow\infty$, but we neglect the former branch, with $c<2.16$, as unphysical.)\nIn particular, we choose $z_{infall}\sim4.5$, which results in $c\sim9$ for a typical dwarf, the same as considered above.\n\nA typical Milky Way dwarf in \cite{BoylanKolchin:2011dk} has $M_\Delta=2\times10^8 M_\odot$ at the time of infall.\nIf Too Big to Fail is to be explained by means of binary black hole mergers, a typical dwarf should have undergone a major merger, so that a binary black hole merger occurs in the first place; each progenitor halo then carries roughly half the infall mass.\nWe will therefore take $M_\Delta=10^8 M_\odot$, $c=9$, $z_{obs}=4.5$ as our typical parameters. \n\n\begin{figure}\n\centering{}\includegraphics[width=1\linewidth]{dwarfparamv2_r1_cropped}\n\caption{uSIDM parameter space for production of black holes in dwarf galaxies. \nThe parameters used are $M_{SMBH}=10^5M_{\odot}$, $z_{obs}=4.5$, $M_\Delta=10^{8}M_{\odot}$, $c=9$, $z=15$. \nThe solid line plots values of $\sigma$ and $f$ that result in an SMBH of the desired size at the time of observation, assuming continuous Eddington accretion from the time the core collapses and the seed black hole is formed. \nThe green dotted vertical line marks the largest allowed value of $f$. 
\nTo its right, collapsed black holes are already larger than $M_{SMBH}$.\nTo its left, collapsed black holes form smaller than $M_{SMBH}$, but can grow larger by accreting baryons.\nThe points on the blue dashed line all result in collapse precisely at the redshift of observation; below this line, a black hole has not yet formed by $z_{obs}$.\nPoints on and above the red dashed line result in a halo that is already optically thick at the time of virialization, i.e.\\ optically thick at the characteristic radius for the initial NFW profile.\nAs discussed in the text, the methods used in this paper are not directly applicable here, but we still expect gravothermal collapse.\nNumerical values for all of these bounding lines are given in the text.\n\\label{fig:dwarfparam}}\n\\end{figure}\n\nFigure \\ref{fig:dwarfparam} presents the parameter space for our typical dwarf halo.\nWe have taken the black hole mass to be $M_{SMBH}=10^5 M_\\odot$, in accordance with the black hole--bulge relation, and again assumed that the redshift of formation of the progenitor halo is $z=15$.\nThe various bounds in the figure are attained in the same manner as they were for the quasar bounds shown in Figure \\ref{fig:quasarparam}, so we simply quote them here and refer to the discussion above for details of their calculation.\nThe lower bound on $\\sigma f$, which comes from requiring collapse before $z_{obs}=4.5$, is $\\sigma f\\ge3.26\\ \\mathrm{cm^{2}\/g}$.\nThe upper bound on $f$, which is calculated using (\\ref{eq:bh mass}) and scales linearly with $M_{SMBH}$, is $f\\le0.056$.\n\nThe upper bound on $\\sigma f$, set by the requirement of an optically thin initial profile, is $\\sigma f\\le9.16\\ \\mathrm{cm^{2}\/g}$.\nThis corresponds to an upper bound on the redshift of collapse, $z_{coll}\\le7.90$.\n(As discussed above, the upper and lower bounds have the same mass dependence, and we are considering the same values of $z$ and $c$ as we did for high-redshift quasars, so we recover 
the same bound on the collapse time.)\nOnce again, this gives an upper bound on the number of $e$-folds of growth from baryons, via (\ref{eq:z_obs}).\nBecause there is much more time for growth after black hole formation than in the high-redshift quasar case, this bound is much looser: $N_e\le15.2$.\nAs before, the allowed range on $\sigma f$ is a factor of only a few.\nBut the significantly looser bound on $N_e$ means that black holes can grow by a factor of nearly $4\times10^6$.\nSo $f$ can be decreased by over six orders of magnitude from its maximal value, and $\sigma$ increased by a corresponding amount, while still maintaining an optically thin initial profile and allowing reasonably large black holes to form.\nThis explains the much larger range in $\sigma$ and $f$ seen in Figure \ref{fig:dwarfparam} compared to Figure \ref{fig:quasarparam}.\n\n\subsubsection{Both Simultaneously?}\n\nWe have just seen that the minimum value of $\sigma f$ needed to produce massive high-redshift quasars is about an order of magnitude lower than that needed to produce black holes in dwarf galaxies before major mergers.\nWe can understand this qualitatively from the expression for the halo relaxation time, (\ref{eq:exact_collapse_time}): it scales as $M_\Delta^{-1\/3}$.\nThe black holes in dwarf galaxies have about twice as long to form, until $z_{obs}=4.5$ instead of $z_{obs}=7.085$, so $\sigma f$ is scaled by a factor of $10^{4\/3}\/2\approx10$.\n\nThis scaling of $\sigma f$ implies that the uSIDM parameters which produce black holes in dwarfs are a strict subset of those which produce high-redshift quasars.\nIt is then easy to choose values which solve both problems: one simply takes $\sigma f\ge3.26\ \mathrm{cm^{2}\/g}$ and chooses compatible values of $\sigma$ and $f$ to taste. 
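The numerical factors quoted in this subsection can be checked with a few lines of arithmetic. The sketch below (our own check; all input numbers are taken from the text, with $N_e$ counting Salpeter $e$-folds of Eddington-limited growth) verifies the growth factors implied by the $e$-fold bounds and the $M_\Delta^{-1/3}$ scaling of the bounds on $\sigma f$ between the quasar and dwarf cases:

```python
import math

# Growth factors implied by the e-fold bounds (N_e = t / t_Salpeter):
print(round(math.exp(2.24), 1))    # quasar case: 9.4, "less than an order of magnitude"
print(round(math.exp(2.24 + 4)))   # with ~4 more Salpeter times by z ~ 6: 513, "~500"
print(f"{math.exp(15.2):.1e}")     # dwarf case: 4.0e+06, "nearly 4e6"

# Both bounds on sigma*f scale as M_Delta^{-1/3}; the lower (collapse-time)
# bound is additionally relaxed by the factor ~2 longer time available:
shift = (1e12 / 1e8) ** (1 / 3)    # ~21.5 between the quasar and dwarf halos
print(round(0.425 * shift, 2))     # optically-thin bound: 9.16 cm^2/g, as quoted
print(round(shift / 2, 1))         # lower-bound ratio: 10.8; cf. 3.26/0.336 ≈ 9.7
```

The pure $M_\Delta^{-1/3}$ shift reproduces the dwarf optically-thin bound exactly, while the lower bound also absorbs the factor of two in available time.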
\nBecause $\\sigma f$ is significantly larger than the minimum value needed to produce high-redshift quasars, the uSIDM halos which produce them will start initially optically thick, above the (red dashed) upper bound on $\\sigma f$ shown in Figure \\ref{fig:quasarparam}.\nIn this optically thick regime, the expressions for the gravothermal collapse time and black hole seed mass derived from the simulations of Section \\ref{sec:gravothermal_collapse} should be taken as limits: we expect that gravothermal collapse should occur a slightly shorter time after halo formation and result in slightly larger seed black holes.\n\nAs an example, consider the $\\{\\sigma,f\\}$ values that fall in the one-dimensional parameter space discussed at the beginning of this subsection, where continuous Eddington accretion from the time of black hole collapse until $z_{obs}$ just produces a supermassive black hole with the observed value of $M_{SMBH}=2\\times10^9M_\\odot$ (the solid black line in Figure \\ref{fig:quasarparam}). 
\nIf $\\sigma f=3.26$, the smallest possible value needed to also produce black holes in dwarfs by redshift $4.5$, we would find using (\\ref{eq:exact_collapse_time}, \\ref{eq:z_obs}) that $z_{coll}=13.3$.\n(Strictly speaking (\\ref{eq:z_obs}) is not valid in the context of an optically thick initial halo, since it uses an expression for the collapse time derived from an initially optically thin NFW profile.\nWe are simply using it here for the sake of illustration.)\nIn this case there is time for $9.54$ $e$-folds of baryonic accretion before $z=7.085$, and the initial black hole has mass $7\\times10^4 M_\\odot$.\nThis determines the USDIM parameters for this example as $\\sigma=8.14\\times10^5\\ \\mathrm{cm^2\/g}$, $f=4.01\\times10^{-6}$.\n\nAs the beginning of this section emphasized, it is largely beyond this scope of this paper to describe a fully consistent uSIDM cosmology.\nNevertheless, this subsection suggests that the USDIM paradigm is flexible enough to resolve both of the potential tensions within $\\mathrm{\\Lambda CDM}$ discussed here.\nIt is possible that a single species of ultra-strongly self-interacting dark matter could in fact resolve both tensions simultaneously.\nIn investigating this question further, it will be important to move beyond the simplifying assumptions employed herein, especially the stipulations of an initial optically thin profile and a cosmologically isolated profile.\n\n\\section{Conclusion}\n\nIn this paper, we considered a minimal extension of the SIDM parameter space, in which a self-interacting component comprises only a fraction of the dark matter.\nFor $f\\apprle0.1$, this evades all prior constraints on SIDM models.\nWe highlighted the uSIDM regime, where the SIDM component is subdominant but ultra-strongly self-interacting, with $f\\ll1$ and $\\sigma\\gg1\\ \\mathrm{cm^{2}\/g}$. 
\nIn the setup considered here, the presence of uSIDM leads to the production of black holes with a mass of around $2\\%$ of the total uSIDM mass in the halo at very early times. \nIn particular, such black holes can act as seeds for baryon accretion starting soon after halo formation, alleviating potential difficulties with accommodating massive quasars at high redshifts within the standard $\\Lambda\\mathrm{CDM}$ cosmology. \nIf black holes are formed ubiquitously in dwarf halos before they undergo mergers, they may also resolve the Too Big to Fail problem by ejecting matter from cores during black hole mergers.\nMore detailed cosmological simulations are needed to confirm the conclusions of this paper and suggest other potential observational consequences of uSIDM.\n\nSetting aside the detailed predictions, this paper has demonstrated that multi-component dark matter can have strong effects on small scales while still evading existing constraints.\nIn the toy model discussed here, the strong effect was the result of the gravothermal catastrophe.\nGravothermal collapse of a strongly-interacting dark matter component is a novel mechanism for production of seed black holes, potentially one with many implications.\nGiven its appearance in the simple extension of $\\Lambda\\mathrm{CDM}$ considered here, it is plausible that gravothermal collapse and its observational consequences, such as seed black hole formation, are generic features of more detailed models.\nIt is important to consider, and then observe or constrain, this and other observational consequences that are qualitatively different from the predictions of the standard cosmological model.\n\nOur discussion has been purely phenomenological, so it is reassuring to note the existence of a class of hidden-sector models \\cite{Boddy:2014yra} which naturally produce a subdominant strongly-interacting dark matter component, with self-interaction cross-sections ranging as high as $\\sigma\\sim10^{11}\\ 
\mathrm{cm^{2}\/g}$.\nVery interestingly, some models give both a dominant component with $\sigma\simeq0.1-1\ \mathrm{cm^{2}\/g}$, as needed to alleviate discrepancies between $\Lambda\mathrm{CDM}$ and observations, and a uSIDM component with $\sigma\simeq10^5-10^7\ \mathrm{cm^{2}\/g}$, which could produce seed black holes via the mechanism described in this paper.\n\nWe thank Shmulik Balberg, James Bullock, Renyue Cen, Phil Hopkins, Jun Koda, Sasha Muratov, Lisa Randall, Paul Shapiro, Stu Shapiro, Charles Steinhardt, and Naoki Yoshida for helpful discussions. We thank especially Sasha Muratov for measuring concentration parameters at high redshifts in the FIRE runs and providing us with the resulting halo catalogs. This research is funded in part by DOE Grant \#DE-SC0011632, and by the Gordon and Betty Moore Foundation through Grant \#776 to the Caltech Moore Center for Theoretical Cosmology and Physics.\n\n\bibliographystyle{utphys}\n