\\section{Introduction}\n\\label{sec:intro}\nThe ``Hall insulator'' defines a peculiar insulating state, in which the\nlongitudinal resistivity $\\rho_{xx}$ diverges in the limit of zero\ntemperature and frequency, yet the Hall resistivity $\\rho_{xy}$\nremains {\\it finite}. Such behavior of $\\rho_{xy}$ has been argued to be a\nquite generic property of disordered single-electron models\\cite{Fuku,KLZlett,Joe}, provided\n$\\sigma_{xy}\\sim\\sigma^2_{xx}$ in the limit of vanishing conductivities\n$\\sigma_{xx}$ and $\\sigma_{xy}$\\cite{Ora}. That is to say,\n\\begin{equation}\n\\lim_{\\sigma_{xx}\\to 0}\\rho_{xy}=\\sigma_{yx}\/(\\sigma_{xx}^2+\\sigma_{xy}^2)\n< \\infty\\; .\n\\label{rxydef}\n\\end{equation}\nExperimentally,\nseveral groups have observed Hall insulating behavior in strong\nmagnetic fields -- both in three-dimensional samples \\cite{Hop} and in\nthe quantum Hall (QH) regime \\cite{Vladimir,SSS}.\nIn the global phase diagram of Kivelson, Lee and Zhang \\cite{KLZ}, the\nentire insulating phase\nsurrounding the QH liquid phases is predicted to exhibit\nHall insulating behavior with\n$\\rho_{xy}\\sim B\/nec$, as in a {\\em classical} Hall conductor --\nin agreement with the data of, e.g., Ref.~\\cite{Vladimir}.\n(Here $B$ is the perpendicular magnetic field and $n$ is the electron density.)\n\nConfusion has been compounded recently by a measurement of the Hall voltage near the transition between a\n$1\/3$--QH liquid and the insulator \\cite{SSS}, which indicates a different\nbehavior of $\\rho_{xy}$ in the insulating phase: quite remarkably, it\npreserves the quantized value $3h\/e^2$ over a finite range of the\nmagnetic field (the parameter which drives the transition) {\\it beyond}\nthe critical point. 
Moreover, the Hall voltage\n$V_H(I)$ is linear in the low current range where the longitudinal voltage\n$V(I)$ exhibits an insulating--like non--linear dependence.\nA deviation from the quantized Hall resistance, approaching a linear rise as a\nfunction of $B$, is observed only deeper in the insulating phase. This\npersistence of the QH plateau cannot be explained by means of transport models\nbased on hopping between strongly localized single--electron sites, as such\nmodels do not pose any particular restriction on the {\\it value} of the finite\nHall coefficient.\n\nIn this paper we propose a transport model which reflects the prominent\nphenomenology of the exotic insulating phase described above -- hereon dubbed\n``a quantized Hall insulator'' (QHI). It is clear that one needs to take into account\nboth electron interactions, which are responsible for the fractional QH effect, and the random potential. This task is manageable in the limit of potential variations that are slow\non the scale of the magnetic length. The incompressibility of the electron liquid\nat magic densities creates\npuddles of QH liquid at these densities, shaped by the equipotential contours.\n\nThus we extend Chalker and Coddington's network model \\cite{CC}\nfrom the integer to the fractional QH regime.\nIn place of the semiclassical single--electron orbits, transport here is conducted through a random network of edge states surrounding\nthe puddles of $1\/k$--QH liquid of density\n$n=B\/(k\\phi_0)$, where $\\phi_0$ is a flux quantum.\nThe edge states are connected to\neach other by tunnel barriers. As we show below, if the percolating network\nof edge states belongs to a {\\it single fraction} $1\/k$, $\\rho_{xy}$ of the\nentire network acquires the quantized value $kh\/e^2$ \\cite{ruzin}. 
\nSince tunneling between fractional QH edge states involves a power--law density of states, the\nlongitudinal current--voltage characteristic of the system,\n$V(I)$, is generally non--linear, and $dV\/dI\\rightarrow \\infty$ as $I\\rightarrow 0$ in the\nweak tunneling limit. Below, we\ncalculate the Hall and longitudinal response of the edge state network,\nand relate its properties to the total carrier density, the magnetic field\nand the potential fluctuations.\n\nIn the framework of the present paper we do not elaborate on the\njustification of the puddles model; rather, we focus on its consequences\nfor transport properties, some of which are yet to be\nconfronted with experiment. It is worthwhile pointing out, though,\nthat the true ground state of certain realistic systems is quite\nlikely to be well imitated by such a model. In restricted regions\nof the sample, where considerable variations in the disorder potential\noccur over length scales much larger than the magnetic length, the\nformation of puddles of incompressible liquid is energetically\nfavorable. Since transport inside such a puddle is dissipationless, a\ncurrent carrying path that crosses the entire sample is expected to be\ndominated by channels that `hop' between neighboring puddles\nat places of minimal separation.\n\nIn section \\ref{sec:hall} we prove that the Hall resistance of a $1\/k$\npuddle--network is quantized at $\\rho_{xy}=kh\/e^2$. In section\n\\ref{sec:long} we calculate the non--linear longitudinal response $V(I)$\nof the network. Deviations from a pure power--law behavior are estimated in\nAppendix A; the parameters of the model are related to the magnetic\nfield and potential fluctuations (using the theory of Renn and Arovas\n\\cite{RA} for single QH tunnel junctions) in Appendix B. 
In section\n\\ref{sec:summary} we summarize our main results and point out some\nopen questions and suggestions for experimental tests of our model.\n\n\\section{The Hall Resistance of a Puddle--Network}\n\\label{sec:hall}\nConsider a random two-dimensional network, composed of the basic elements\nschematically depicted in Fig.~\\ref{fig:fig1}.\nCircles denote the ``puddles''; each pair of puddles is separated by\na tunnel junction which involves four edge currents $I_1,\\ldots,I_4$.\nBy current conservation the tunneling current $I$ is given by\n\\begin{equation}\nI = I_1-I_3=I_2-I_4.\n\\label{Ii}\n\\end{equation}\nThe macroscopic theory of a Hall liquid in a confining potential yields a fundamental relation between the\nexcess chemical potentials at the edges\n$\\delta\\mu_i$ and the edge currents\\cite{Beenakker}:\n\\begin{equation}\n\\delta\\mu_i = \\mbox{sgn}(B) {h\\over e^2} k I_i,\n\\label{Vi}\n\\end{equation}\nwhere the $I_i$'s are positive in the clockwise direction around the puddle.\nEqs.~(\\ref{Ii},\\ref{Vi}) yield a simple proportionality between the Hall voltage and the tunnel current,\n\\begin{equation}\nV_H \\equiv \\delta\\mu_1-\\delta\\mu_3=\\delta\\mu_2-\\delta\\mu_4=\\mbox{sgn}(B) {h\\over e^2} k I.\n\\label{V_H}\n\\end{equation}\nThe relation between the {\\em longitudinal} voltage drop and the tunneling current through the barrier is\n\\begin{equation}\nV(I) \\equiv \\delta\\mu_1-\\delta\\mu_2=\\delta\\mu_3-\\delta\\mu_4,\n\\label{VI}\\end{equation}\nwhich is in general a non--linear function. Here we assume symmetry under reversal of the magnetic field, $V(I,B) = V(I,-B)$, which is expected for dissipative\ncurrent transport across a narrow channel. 
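The single-junction bookkeeping above is simple enough to check directly. The following sketch (illustrative only; chemical potentials are expressed as voltages, and the von Klitzing constant $h\/e^2\approx 25812.8\,\Omega$ is hard-coded) verifies that the two expressions for $V_H$ agree and are proportional to the tunneling current $I$:

```python
R_K = 25812.80745  # von Klitzing constant h/e^2 in ohms

def hall_voltage(k, B, I1, I2, I3, I4):
    """Edge chemical potentials (as voltages) and the two equivalent
    Hall-voltage differences for a single junction between 1/k puddles:
    delta_mu_i = sgn(B) (h/e^2) k I_i."""
    sgnB = 1.0 if B > 0 else -1.0
    dmu = [sgnB * R_K * k * Ii for Ii in (I1, I2, I3, I4)]
    VH_13 = dmu[0] - dmu[2]  # delta_mu_1 - delta_mu_3
    VH_24 = dmu[1] - dmu[3]  # delta_mu_2 - delta_mu_4
    return VH_13, VH_24
```

For any edge currents satisfying $I = I_1-I_3 = I_2-I_4$, both returned values equal $\mbox{sgn}(B)(h\/e^2)kI$, independent of the individual $I_i$.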
\n\nWe now consider the response of a random network of puddles and tunnel barriers with two current leads at $-x$ and $+x$, and two voltage leads at $-y$ and $+y$.\nThe network is described by a general two dimensional graph with $N_v$\nvertices at the locations of the puddles, and $N_b$ bonds, one for each of the\ntunnel barriers (see, e.g., Fig.~2).\nThe two dimensional layout of the puddles network ensures that bonds do not\ncross.\n\nHenceforth we shall\nassume that all quantum interference effects take place within the tunnel barrier length scales $L_{ij}$,\nbeyond which dissipation due to low lying edge excitations destroys coherence between tunneling events.\nThus,\nthe response of the puddles network is given by classical Kirchhoff's laws.\nFirst, current conservation at each vertex (puddle) $i$ is given by\n\\begin{equation}\n\\sum_{j\\in \\{ (ij)\\}} I_{ij}=0,~~~~~i=1,\\ldots N_v,\n\\label{K-Nv}\n\\end{equation}\nwhere $\\{(ij)\\}$ denotes the set of bonds emanating from $i$. Second,\nthe sum of the voltage\ndifferences around each plaquette $p$ must vanish,\n\\begin{equation}\n\\sum_{(ij)\\in p} V_{(ij)}(I_{ij})=0,~~~~~p=1,\\ldots N_p,\n\\label{K-Np}\n\\end{equation}\nwhere $V_{(ij)}(I_{ij})$ is the non--linear function of Eq.~(\\ref{VI}).\nA total current $I$ is forced through the network through a lead coming from $-x$ and leaving\ntoward $+x$. There are no currents flowing through external leads in the\n$\\pm y$ directions. It is easy to prove the following:\n\\newline\n{\\em Lemma: The currents $I_{ij}$ in the network are completely determined by $I$.}\n\\newline\nThe proof uses Euler's theorem for two dimensional graphs \\cite{EulerT},\n\\begin{equation}\nN_v + N_p -N_b =1.\n\\end{equation}\nThus the number of Kirchhoff's equations (\\ref{K-Nv}, \\ref{K-Np}) is $N_v+N_p$,\nwhich exceeds the number of unknown currents $N_b$ by one. The additional equation determines that the current flowing out of\nthe $+x$ lead must be, of course, $I$. Q.E.D. 
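The counting argument in the lemma can be checked directly for a rectangular lattice of puddles (a sketch; the lattice dimensions are illustrative):

```python
def lattice_counts(n, m):
    """Vertex, bond, and plaquette counts for an n x m square lattice
    of puddles (vertices) connected by tunnel barriers (bonds)."""
    N_v = n * m                       # puddles
    N_b = n * (m - 1) + m * (n - 1)   # horizontal + vertical bonds
    N_p = (n - 1) * (m - 1)           # elementary plaquettes
    return N_v, N_b, N_p

# Euler's relation N_v + N_p - N_b = 1 holds for every lattice size,
# so the N_v node equations plus N_p loop equations outnumber the
# N_b unknown bond currents by exactly one -- the extra equation
# fixing the total current I through the leads.
for n in range(2, 6):
    for m in range(2, 6):
        N_v, N_b, N_p = lattice_counts(n, m)
        assert N_v + N_p - N_b == 1
```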
\n\nAs shown above, the Hall relations (\\ref{V_H}) have no effect on the currents $I_{ij}$.\nThe total transverse voltage $V_y$ is given by choosing any path of bonds\n${\\cal C}$ which connects the $-y$ lead to the $+y$ lead (see Fig.~2)\nand summing the voltages,\n\\begin{equation}\nV_y= \\sum_{i\\in{\\cal C}} V_{i,i+1}(I_{i,i+1},|B|) +\\mbox{sgn}(B) k {h\\over e^2}\n\\sum_{i\\in{\\cal C},(ij)'} I_{i,j},\n\\end{equation}\nwhere $(ij)'$ denotes all currents entering vertex $i$ from $-x$. By global current conservation, the second term is proportional to the total current. Defining the Hall voltage $V_H$ to be the antisymmetric component of $V_y$,\nwe thus obtain\n\\begin{equation}\nV_H = \\mbox{sgn}(B) k {h\\over e^2}I,\n\\end{equation}\nwhich yields a quantized Hall resistance of $\\rho_{H}= k (h\/ e^2)$\nthat is completely independent of $B$ and $I$.\n\nThis relation should hold as long as the network does not involve\nappreciable contributions from edge states of puddles of different $k$ values.\nThe width of the QHI regime therefore depends on the relative abundance of\npuddles of different densities, which depends in turn on the distribution of\npotential fluctuations. As the magnetic field increases, a wide distribution\nof potential minima will create mixed phases with puddles of different\ndensities. Relation (\\ref{V_H}) does not apply for tunneling between\ndifferent $1\/k$--QH liquids,\nand thus the above analysis fails for the mixed phase.\n\n\\section{The Non--Linear Longitudinal Transport}\n\\label{sec:long}\nThe dissipative response in the model introduced above is\nassociated with the longitudinal transport through the tunnel barriers. 
The\nbarriers connect edge states of neighboring puddles of density\n$n=B\/(k\\phi_0)$, and we assume henceforth that $k$ is the same in all puddles.\n\nA non--linear current--voltage relation\nfor a tunnel junction between $1\/k$ QH liquids was first proposed by Wen \\cite{Wen},\nwho mapped the fractional QH edge states\nto chiral Luttinger liquids. For small currents, the relation is a power--law\\cite{Wen,KF}\n\\begin{equation}\nI\\sim \\mbox{sgn}(V) |V|^{2g-1}\\; ,\n\\label{ivg}\n\\end{equation}\nwhere $g=k$ is the Luttinger liquid interaction parameter (and is equal to unity for\nthe integer Hall liquid).\n\nRenn and Arovas (RA) \\cite{RA} have extended Wen's result to long tunnel\nbarriers following Giamarchi and Schulz's renormalization group equations\nfor disordered Luttinger liquids\\cite{GS}. They consider the ``disordered\nantiwire'' geometry, i.e., a long barrier with $n_t$ tunnel couplings\nof average magnitude $t$. In the small current limit they found that\n$g$ is renormalized, $g\\to \\tg > k$, and the longitudinal response is\n\\begin{equation}\nV_{RA}(I) \\simeq V_0 \\mbox{sgn}(I) \\left({|I|\\over I_0D} \\right)^{1\/(2\\tg-1)} ,\n\\label{VI-RA}\n\\end{equation}\nwhere\n\\begin{eqnarray}\nV_0&=& {\\hbar v\\over el},\\nonumber\\\\\nI_0&=& {ev\\over 2\\pi l k},\\nonumber\\\\\nD&\\simeq& {n_t |t|^2 \\over 2\\pi v^2 \\hbar^2 }.\n\\label{i0-v0}\n\\end{eqnarray}\nHere $v$ is the edge\nstate velocity, and $l=\\sqrt{\\hbar c\/eB}$ is the magnetic length.\n\nHere we consider a network of RA junctions, and assume that the\ndephasing time is short enough that the tunneling events through consecutive\njunctions in the network are incoherent (coherent backscattering effects are\nincluded in RA's calculation of the single junction). 
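The RA single-junction response of Eq.~(\ref{VI-RA}) is straightforward to implement. In this sketch $V_0$ and $I_0$ are set to unity, and the parameter values used in the checks are illustrative rather than fitted:

```python
import math

def V_RA(I, D, gtilde, V0=1.0, I0=1.0):
    """Single-junction response V = V0 sgn(I) (|I|/(I0 D))**(1/(2g~-1)),
    with V0 and I0 set to unity (arbitrary units)."""
    alpha = 1.0 / (2.0 * gtilde - 1.0)
    return V0 * math.copysign((abs(I) / (I0 * D)) ** alpha, I)
```

Since $\alpha=1\/(2\tg-1)<1$ for $\tg>1$, the differential resistance $dV\/dI\propto |I|^{\alpha-1}$ diverges as $I\to0$, reproducing the insulating-like non-linearity quoted above.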
Our model consists of\na random network of\nclassical non--linear resistors, each characterized by a power $\\alpha_n$\nand a conductance prefactor $D_n$:\n\\begin{equation}\n{V_n\\over V_0}=\\mbox{sgn}(I_n)\\left({|I_n|\\over I_0D_n}\\right)^{\\alpha_n}\\; .\n\\label{ivan}\n\\end{equation}\nBy (\\ref{i0-v0}), $V_0$ and $I_0$ depend only weakly on\nthe barrier height fluctuations and the magnetic field, compared, e.g., to $D$.\nThus for simplicity they are taken to be uniform in the entire\nnetwork. The network of junctions with $D_n\\leq 1$ is assumed to percolate\nthrough the sample. Thus we can choose $D_n,\\alpha_n$ to be\nrandom variables whose distribution is bounded by\n\\begin{eqnarray}\n0 &\\leq D_n \\leq&1,\\nonumber\\\\\n(2k-1) &\\geq \\alpha_n \\geq & 1\/(2k-1).\n\\end{eqnarray}\nIn appendix \\ref{app:B} we estimate the magnetic field dependence of the average\nconductance prefactor to be\n\\begin{equation}\n{\\bar D}(B)\\propto \\exp\\left( - k n_p \\left({B-B_c \\over 2B_c} \\right)^2 \\right),\n\\label{barD}\n\\end{equation}\nwhere $n_p$ is the typical number of electrons in a puddle, and $B_c$ is the\nmagnetic field at which the puddles percolate through the sample.\nThe average power law is estimated using RA's renormalization group equations.\nWe find (see App.~\\ref{app:B}) that, in the limit of small $D$,\n\\begin{equation}\n{\\bar\\alpha}(B)\\sim {1\\over 2k-1}+{ k-3\/2 \\over 2k-1}{\\bar D}(B)\\; ,\n\\label{baralpha}\n\\end{equation}\nand for $ {\\bar D}\\rightarrow D_c=(2\\ln 2-1)$,\n\\begin{eqnarray}\n{\\bar\\alpha}\\simeq {1\\over 2+3\\sqrt{D_c-\\langle D\\rangle}}\\; .\n\\label{barAlpha}\n\\end{eqnarray}\n\nWe have solved Eqs.~(\\ref{K-Nv}) and (\\ref{K-Np}) numerically, using an IMSL Levenberg--Marquardt algorithm, for square lattices of sizes up to\n$5\\times 5$. 
The distributions of $(D_n,\\alpha_n)$ were taken to be\n\\begin{eqnarray}\nP_1(D)&=& \\Theta(D)-\\Theta(D-1),\\nonumber\\\\\nP_2(\\alpha)&=&{1\\over\\sqrt{2\\pi}\\sigma}\\exp\\left\\{-{(\\alpha-{\\bar\\alpha})^2\\over\n2\\sigma^2}\\right\\}\\; .\n\\label{Palpha}\n\\end{eqnarray}\nWe take the width $\\sigma$, according\nto our estimate in appendix \\ref{app:B}, to be 5 to 10 times smaller\nthan the mean ${\\bar\\alpha}$. The numerical results in the regime\n$I_0\/10 < I < I_0$, averaging over 5 realizations of disorder,\ncan be summarized by the averaged network's $I-V$ response:\n\\begin{equation}\n{ \\log (V\/V_0) \\over \\log (I\/I_0) } \\equiv \\alpha_{eff} = {\\bar\\alpha} \\pm \\epsilon,\n\\label{aeff}\n\\end{equation}\nwhere $\\epsilon\\sim 10^{-2}-10^{-3}$.\nThat is to say, in the\nmoderate current regime {\\em the total voltage-current relation is given\nquite well by the average power law}. In the extremely small current limit,\none expects (\\ref{aeff}) to break down, since due to the power law resistors,\nthe currents choose to flow through percolating subnetworks with the highest power\nlaws. In this regime the numerical algorithm also fails to converge properly.\n\nIn order to better estimate the corrections to the\npure power law, we examine a toy model dubbed\nthe Parallel--Series (PS) network. This model consists of a random\ncombination of serial and parallel connections of elements $P$ and $S$, where\n$P$ is composed of $N_p$ resistors in parallel, and $S$ is its dual -- a\nlinear chain of $N_s$ resistors in series (see Fig.~3).\nThe $S$ and $P$ components can be created from\nan ordinary two dimensional network by a three--peaked distribution of $D_n$'s\n(shorts, disconnections, and resistors of $D_n=1$). 
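The building blocks of the PS network can be sketched directly: in a series chain the bond voltages add at a common current, while in a parallel element the currents add at a common voltage. The helper names and parameter values below are illustrative (all quantities in units of $V_0$ and $I_0$):

```python
import math

def v_series(I, resistors):
    """Series chain: common current, voltages add.
    Each resistor is (D, alpha) with V = (I/D)**alpha, in units V0 = I0 = 1."""
    return sum((I / D) ** alpha for D, alpha in resistors)

def i_parallel(V, resistors):
    """Parallel element: common voltage, currents add; invert V = (I/D)**alpha."""
    return sum(D * V ** (1.0 / alpha) for D, alpha in resistors)

def alpha_eff_series(I, resistors, h=1e-4):
    """Effective exponent d log V / d log I of a series chain,
    by a symmetric finite difference."""
    lv_plus = math.log(v_series(I * (1 + h), resistors))
    lv_minus = math.log(v_series(I * (1 - h), resistors))
    return (lv_plus - lv_minus) / (math.log(1 + h) - math.log(1 - h))
```

For identical resistors the chain reproduces the bare exponent exactly; for mixed exponents the effective $\alpha_{eff}$ interpolates between the smallest and largest $\alpha_n$, drifting with the current as in Eq.~(\ref{IVfinal}).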
This model is\nsymmetric on average with respect to exchange of the $x$ and $y$ directions, and hence is an adequate description of macroscopically isotropic samples.\n\nIn Appendix~\\ref{app:a} we show that for currents which obey $\\sigma^2|\\ln (I\/I_0)|\\ll{\\bar\\alpha}$,\n\\begin{equation}\n{V\\over V_0}=\\left({I\\over I_0}\\right)^{\\alpha_{eff}},\\quad \\alpha_{eff}=\n{\\bar\\alpha}+\\left({\\sigma^2\\ln (I\/I_0)(1-1\/{\\bar\\alpha})\\over 4}\\right)\\; .\n\\label{IVfinal}\n\\end{equation}\nThe deviation of $\\alpha_{eff}$ from ${\\bar\\alpha}$ is\npositive for ${\\bar\\alpha}<1$. This indicates that in the `insulating' regime,\nalthough serial and\nparallel connections are equally represented, parallel configurations dominate at low\ncurrents.\nThe situation is reversed on the QH liquid side of the transition, where\n${\\bar\\alpha}>1$, while at the critical filling fraction (where ${\\bar\\alpha}=1$)\nthe $S$ and $P$ elements balance each other and $\\alpha_{eff}={\\bar\\alpha}$.\nWe note that under a duality transformation, which exchanges each resistor in\nthe network by a perpendicular resistor with $(V_n\/V_0)$ and $(I_n\/I_0)$\ninterchanged, the $I-V$ characteristic of the whole network is inverted:\n${\\bar\\alpha}\\rightarrow 1\/{\\bar\\alpha}$, $\\sigma\\rightarrow \\sigma\/{\\bar\\alpha}^2$,\nand consequently $\\alpha_{eff}\\rightarrow 1\/\\alpha_{eff}$,\nwhich is consistent with our requirement of macroscopic isotropy.\n\nIn comparing the results of the PS model to the square lattice simulations,\nwe find that the correction to a pure power law in the numerical results\nis smaller by at least a factor of 10 than that of the PS model (\\ref{IVfinal}).\nWe suggest that the difference arises because the PS model\nassumes greater inhomogeneity in $D_n$, as mentioned before. 
\nEq.~(\\ref{IVfinal}) can therefore be regarded as an estimate of the upper\nlimit on the discrepancy between the macroscopic $\\alpha_{eff}$ and\n${\\bar\\alpha}$ at moderate currents. The principal conclusion to be taken away from this calculation\nis that, due to the self--averaging property, the macroscopic\n$I-V$ is directly related to the physics of the single junction,\nand the non--linear tunneling response between fractional quantum Hall edge states.\n\n\n\\section{Summary and Final Remarks}\n\\label{sec:summary}\nAs demonstrated in the previous sections, the QHI phase observed in proximity\nto a QH liquid can be modelled by a network of puddles. Although similar in\nspirit to the semiclassical percolation description of Ref.~\\cite{CC},\nit naturally incorporates the electron interaction effects under the same\nassumption: potentials that vary smoothly on the scale of the magnetic length $l$.\nThe most important feature of this\nmodel is that, in contrast with models based on single--electron hopping,\nit yields a\nquantized Hall resistance. The quantization is not affected by non--linearity\nof the dissipative part of the response. The latter is studied for the $1\/k$--QHI\nwith $k>1$, yielding a power--law behavior of the longitudinal $I-V$ curve\nwhich is closely determined by the behavior of an average single junction\nbetween adjacent puddles.\nDeviations from a pure power--law are at most of\norder $\\sigma^2\\ln (I\/I_0)$ (where $\\sigma$ is the width of the power\ndistribution), estimated in appendix \\ref{app:B} to be typically small.\n\nThe magnetic field dependence of the average tunneling rate\nis Gaussian, as shown in Eq.~(\\ref{barD}), with a width defined by the inverse number of electrons in a typical puddle. Thus, smaller puddle sizes allow a larger regime of the Quantized Hall Insulator phase. 
However,\nif the incompressible puddles are too small, our assumption of a slowly\nvarying potential becomes invalid.\n\nWe note that the integer QH case of $k=1$ implies $\\alpha_n=1$ throughout the network. That is to say, the puddles model reduces naturally to a random Ohmic resistor network with conductances proportional to Eq.~(\\ref{barD}). Interference effects between junctions \\cite{CC} are ignored here, since we assume an inelastic scattering length of the order of the inter-junction separation,\nan assumption that breaks down at low enough $T$.\n\nOur analysis so far has concentrated on the non--linear transport of tunnel junctions,\napplicable for large enough bias and low $T$.\nAt finite $T$, transport in the junctions -- and\nhence through the entire network -- crosses over to linear response at\nsufficiently low bias, $khi\/e < k_BT$, where $i=I\/N_y$ is the average\ncurrent through a single junction and $N_y$ is the typical number of junctions\nacross the sample.\nThe linear conductivity is\nthen predicted to vary as a power--law of temperature \\cite{RA}, i.e.,\n$V\\propto IT^{1-1\/{\\bar\\alpha}}$. A temperature--dependence\nmeasurement of the resistance in the Ohmic regime can thus provide a\nfurther test of our model. In addition, for a given $T$ the crossover from a\nlinear to a non--linear $I(V)$ can provide an estimate of $N_y$.\n\nOne of the most interesting implications of our suggested puddles--network\nmodel is that the insulating phase, surrounding the fundamental QH liquids\nin the phase diagram of Ref.~\\cite{KLZ}, is not a homogeneous phase.\nRestricted regions in the phase diagram which are in proximity to specific\n$1\/k$--QH liquids are dominated by weakly coupled puddles of the corresponding\nliquid. 
It is therefore implied that a measurement of $\\rho_{xy}$ as a\nfunction of magnetic field at moderate disorder may exhibit plateaux at odd\ninteger multiples of $h\/e^2$ -- even though the longitudinal transport\nindicates insulating--like behavior. The width of the plateaux is expected,\nhowever, to depend on details of the disorder potential in the sample. The\nwidth of the ``mixed phases'', where $\\rho_{xy}$ rises with the magnetic field\nbetween consecutive plateaux, is\n{\\it not} expected to vanish for $T\\rightarrow 0$ as it does in the QH liquid regime.\n\nFinally, we would like to comment on an open problem with regard to the comparison\nof this theory with the experimental results of Ref.~\\cite{SSS}. The experiment has indicated a duality symmetry\nbetween $I-V$ curves on\nopposite sides of the $1\/3$--QH liquid--to--insulator transition. This\nphenomenon was interpreted in terms of charge--vortex duality,\nor equivalently as particle--hole symmetry \\cite{SSS}. In the puddles network\nmodel, such duality would be observed if each tunnel barrier with\nresponse $I=F(V)$ at $B>B_c$ is related to a narrow channel with response $V=F(I)$ formed at the mirror field $B'<B_c$.\nResolution of this point is left to further research.\n\n\n\\acknowledgements\nWe thank D. Shahar for discussions that motivated this work, and\nacknowledge useful conversations with D. Arovas, Y. Avron, D. Bergman,\nY. Gefen, C. Henley and U. Sivan. 
This work was partly supported by the\nTechnion -- Haifa University Collaborative Research Foundation, the Fund for Promotion of Research at Technion, and a grant from the Israeli Academy of Sciences.\n\n\\section{Introduction}\n\nOver the last several decades, abundant astronomical evidence for black holes has accumulated from a variety of sources \\cite{Narayan2013}, most notably the recent spectacular observations \\cite{Abbott2016a,Abbott2016b,Abbott2017a,Abbott2017b} of gravitational waves emitted from black hole mergers. In all of these observations, the black holes appear as point-like objects, as the detectors have been far from being able to resolve distances on the scale of the Schwarzschild radius. The existence of the most striking feature of a black hole---namely, the event horizon---is only indirectly inferred.\n\nAll of this is expected to change dramatically within the coming year, when the Event Horizon Telescope (EHT) obtains images of black holes comprised of pixels smaller than the Schwarzschild radius.\n\nThis opens an exciting new chapter in experimental black hole astrophysics. It also presents a host of challenges to theorists who need to predict what will be seen by the EHT \\cite{Bardeen1973,Luminet1979,Broderick2006,Doeleman2008a,Doeleman2008b,Doeleman2009,Doeleman2012,Johnson2015}. While the Kerr solution is itself relatively simple, the nearby environment can contain complex magnetospheres, accretion disks, and jets that are the origin of the actual observed signal. The predicted signal in general depends on many a priori undetermined parameters describing this environment.\n\nUniversal and sometimes striking predictions are possible for the case of rapidly spinning black holes \\cite{Andersson2000,Glampedakis2001,Yang2013,Porfyriadis2014a,Porfyriadis2014b,Hadar2015,Gralla2016a,Gralla2016b,Burko2016,Hadar2017,Compere2017}. 
At the maximal allowed value of the spin, $J=GM^2$, the region near the horizon of the black hole acquires an infinite-dimensional conformal symmetry \\cite{Bardeen1973,Bardeen1999,Guica2009}. This is a precise astrophysical analog of the universal critical behavior appearing in many condensed matter systems. Not only do the symmetries supply powerful computational tools, but the universality reduces the dependence of physical predictions on undetermined parameters. For example, it was found recently that gravitational waves from a near-horizon orbiting body can end with a slow decay to silence on a single characteristic frequency \\cite{Gralla2016b}, in stark contrast to the rapid ``chirp'' of ordinary black hole binaries.\n\nIn this paper, we analyze the signal produced by a ``hotspot'' (localized emissivity enhancement) orbiting near a high-spin black hole, and find a very striking signal. The primary image moves along a line segment (the ``NHEKline\"), which is rotated by $90^\\circ$ relative to the orbital plane, just inside the shadow from black hole backlighting. Secondary images are generally negligible except for bright caustic flashes which extend to the whole line segment. These emissions pulsate in a complex periodic manner. This signature is strongest in the edge-on case, when the observer lies near the equatorial plane ($\\theta_o\\approx90^\\circ$), and disappears entirely when $\\theta_o<\\arctan{\\pa{4\/3}^{1\/4}}\\approx47^\\circ$, a critical angle determined by near-horizon physics. In general, emission signals of this type can only be computed numerically, but the emergent symmetries at high spin enable us herein to study the problem analytically. 
We perform a detailed calculation for a uniformly emitting sphere orbiting in the equatorial plane, but the main conclusions generalize to all near-horizon sources.\n\nOf course, it would require a fortuitous alignment of circumstances for an EHT target to have both high spin and a sufficiently long-lived brightness enhancement in the near-horizon region. Nevertheless, as we move into the era of precision black hole observation, it is not unreasonable to hope that such a configuration might eventually be observed. With enough resolution, the unique features of the signal would serve as a ``smoking gun'' for a high-spin black hole.\n\nThe paper is organized as follows. In Sec.~\\ref{sec:OrbitingEmitter}, we define the problem at any finite, non-extremal value of spin, and write down the equations to be solved. In Sec.~\\ref{sec:NearExtremalExpansion}, we solve the equations in the high-spin limit, keeping leading and subleading terms. In Sec.~\\ref{sec:ObservationalAppearance}, we explore the detailed observational appearance. In App.~\\ref{app:Shadow}, we discuss the black hole shadow in the high-spin regime. In App.~\\ref{app:NHEK}, we explain the connection of our computation to near-horizon geometry and argue that the signal persists more generally. We relegate technical aspects of our calculations to the remaining appendices.\n\n\\begin{figure}[ht!]\n\t\\includegraphics[width=\\textwidth]{Fig1.pdf}\n\t\\caption{Observational appearance of a point emitter (``hotspot\") orbiting near a rapidly spinning black hole. All light appears on a vertical line segment, the so-called NHEKline, which forms a portion of the black hole shadow's edge (dashed line). Over each cycle of the periodic image, the primary image appears near the center of the NHEKline before moving downward while blueshifting and spiking in brightness (right panels). 
On the right, we display the height of the image on the NHEKline (relative to the maximum), its flux (relative to a comparable Newtonian problem, with spin-dependence $\\epsilon$ of the black hole scaled out), and its redshift factor (ratio of observed to emitted frequency). Notice the net blueshift ($g>1$) at peak brightness, reflecting the Doppler boost from the ultrarelativistic near-horizon orbit overcoming the gravitational redshift.\t Video animations are available \\href{https:\/\/youtu.be\/ZRfTj5JKPkA}{here}. Secondary images have a rich caustic structure shown in Fig.~\\ref{fig:3x3} below. (The primary image, depicted here in black, is colored green in Fig.~\\ref{fig:3x3}.) The position of the source at the time of emission is shown in Fig.~\\ref{fig:Winding} for the primary image. The spin is $a\/M=99.99995\\%$ ($\\epsilon=.01$) and the viewing angle is nearly edge-on, $\\theta_o=84^\\circ$. Complete parameter choices are given in Eq.~\\eqref{eq:ExampleParameters}. The appearance is qualitatively similar for other parameter choices.}\n\t\\label{fig:OpticalAppearance}\n\\end{figure}\n\n\\section{Orbiting emitter}\n\\label{sec:OrbitingEmitter}\n\nWe work in the Kerr spacetime in Boyer-Lindquist coordinates $(t,r,\\theta,\\phi)$. The metric is\n\\begin{align}\n\tds^2=-\\frac{\\Delta\\Sigma}{\\Xi}\\mathop{}\\!\\mathrm{d} t^2+\\frac{\\Sigma}{\\Delta}\\mathop{}\\!\\mathrm{d} r^2+\\Sigma\\mathop{}\\!\\mathrm{d}\\theta^2+\\frac{\\Xi\\sin^2{\\theta}}{\\Sigma}\\pa{\\mathop{}\\!\\mathrm{d}\\phi-\\omega\\mathop{}\\!\\mathrm{d} t}^2,\n\\end{align}\nwhere\n\\begin{align}\n\t\\omega=\\frac{2aMr}{\\Xi},\\qquad\n\t\\Delta=r^2-2Mr+a^2,\\qquad\n\t\\Sigma=r^2+a^2\\cos^2{\\theta},\\qquad\n\t\\Xi=\\pa{r^2+a^2}^2-\\Delta a^2\\sin^2{\\theta}.\n\\end{align}\nOur emitter will be a point source orbiting on a circular, equatorial geodesic at radius $r_s$. 
The angular velocity is \\cite{Bardeen1972}\n\\begin{align}\n\t\\Omega_s=\\pm\\frac{M^{1\/2}}{r_s^{3\/2}\\pm aM^{1\/2}},\n\\end{align}\nwhere the upper\/lower sign corresponds to prograde\/retrograde orbits. Here and hereafter, the subscript $s$ stands for ``source.\" The local rest frame of the emitter consists of the four-velocity $u^\\mu=e_{(t)}^\\mu$ ($u_\\mu u^\\mu=-1$) along with three orthogonal unit spacelike vectors,\n\\begin{subequations}\n\\label{eq:EmitterFrame}\n\\begin{align}\n\te_{(t)}&=\\gamma\\sqrt{\\frac{\\Xi}{\\Delta\\Sigma}}\\pa{\\mathop{}\\!\\partial_t+\\Omega_s\\mathop{}\\!\\partial_\\phi},\\qquad\n\te_{(r)}=\\sqrt{\\frac{\\Delta}{\\Sigma}}\\mathop{}\\!\\partial_r,\\qquad\n\te_{(\\theta)}=\\frac{1}{\\sqrt{\\Sigma}}\\mathop{}\\!\\partial_\\theta,\\\\\n\te_{(\\phi)}&=\\gamma v_s\\sqrt{\\frac{\\Xi}{\\Delta\\Sigma}}\\pa{\\mathop{}\\!\\partial_t+\\omega\\mathop{}\\!\\partial_\\phi}+\\gamma\\sqrt{\\frac{\\Sigma}{\\Xi}}\\mathop{}\\!\\partial_\\phi,\n\\end{align}\n\\end{subequations}\nwhere\\footnote{Note that $v_s$ and $\\gamma$ are the velocity and boost factor according to the zero-angular momentum observer with four-velocity proportional to $\\mathop{}\\!\\partial_t+\\omega\\mathop{}\\!\\partial_\\phi$.}\n\\begin{align}\n\t\\label{eq:EmitterVelocity}\n\tv_s=\\frac{\\Xi}{\\Sigma\\sqrt{\\Delta}}\\pa{\\Omega_s-\\omega}\n\t=\\frac{\\pm M^{1\/2}\\pa{r_s^2\\mp2aM^{1\/2}r_s^{1\/2}+a^2}}{\\sqrt{\\Delta}\\pa{r_s^{3\/2}\\pm aM^{1\/2}}},\\qquad\n\t\\gamma=\\frac{1}{\\sqrt{1-v_s^2}}.\n\\end{align}\nWe define frame components of four-vectors $V^\\mu$ in the usual way,\n\\begin{align}\n\tV^{(b)}=\\eta^{(a)(b)}e_{(b)}^\\mu V_\\mu,\n\\end{align}\nwhere $\\eta^{(a)(b)}=\\mathrm{diag}\\pa{-1,1,1,1}$ and summation over repeated indices is implied. 
We raise and lower frame indices with $\\eta^{(a)(b)}$.\n\n\\subsection{Photon conserved quantities and interpretation} \n\nThe wavelength of light from astrophysically realistic sources is much smaller than the size of the black hole. This allows us to work in the geometric optics limit, where the emission corresponds to photons traveling on null geodesics. Each such photon with four-momentum $p^\\mu$ possesses four conserved quantities:\n\\begin{enumerate}\n\\item\nthe invariant mass $p_\\mu p^\\mu=0$,\n\\item\nthe total energy $E=-p_t$,\n\\item\nthe component of angular momentum parallel to the axis of symmetry $L=p_\\phi$, and\n\\item\nthe Carter constant $Q=p_\\theta^2-\\cos^2{\\theta}\\pa{a^2p_t^2-p_\\phi^2\\csc^2{\\theta}}$.\n\\end{enumerate}\nThe trajectory of the photon is independent of its energy and may be described by two rescaled quantities,\n\\begin{align}\n\t\\label{eq:ConservedQuantities}\n\t\\hat{\\lambda}=\\frac{L}{E},\\qquad\n\t\\hat{q}=\\frac{\\sqrt{Q}}{E}.\n\\end{align}\nWe follow the conventions of Refs.~\\cite{Cunningham1972,Cunningham1973}, but put hats over these quantities to distinguish them from the unhatted $\\lambda$ and $q$ that we introduce in Sec.~\\ref{sec:NearExtremalExpansion}. Note that while $Q$ can be negative and therefore $\\hat{q}$ imaginary, only $\\hat{q}^2$ appears in subsequent formulas. Furthermore, since $Q=p_\\theta^2\\ge0$ when $\\theta=\\pi\/2$, any photon passing through the equatorial plane must have a nonnegative Carter constant, and hence real $\\hat{q}$. Since we restrict to photons emitted by an equatorial source, we will always have real $\\hat{q}>0$. 
\n\nThe four-momentum may be reconstructed from the conserved quantities up to two choices of sign corresponding to the direction of travel,\n\\begin{subequations}\n\\label{eq:GeodesicEquation}\n\\begin{align}\n\t\\label{eq:RadialGeodesicEquation}\n\t\\frac{\\Sigma}{E}p^r&=\\pm\\sqrt{\\mathcal{R}(r)},\\\\\n\t\\label{eq:PolarGeodesicEquation}\n\t\\frac{\\Sigma}{E}p^\\theta&=\\pm\\sqrt{\\Theta(\\theta)},\\\\\n\t\\label{eq:AzimuthalGeodesicEquation}\n\t\\frac{\\Sigma}{E}p^\\phi&=-\\pa{a-\\frac{\\hat{\\lambda}}{\\sin^2{\\theta}}}+\\frac{a}{\\Delta}\\pa{r^2+a^2-a\\hat{\\lambda}},\\\\\n\t\\frac{\\Sigma}{E}p^t&=-a\\pa{a\\sin^2{\\theta}-\\hat{\\lambda}}+\\frac{r^2+a^2}{\\Delta}\\pa{r^2+a^2-a\\hat{\\lambda}},\n\\end{align}\n\\end{subequations}\nwhere \n\\begin{subequations}\n\\begin{align}\n\t\\mathcal{R}(r)&=\\pa{r^2+a^2-a\\hat{\\lambda}}^2-\\Delta\\br{\\hat{q}^2+\\pa{a-\\hat{\\lambda}}^2},\\\\\n\t\\Theta(\\theta)&=\\hat{q}^2+a^2\\cos^2{\\theta}-\\hat{\\lambda}^2\\cot^2{\\theta}.\n\\end{align}\n\\end{subequations}\nThe functions $\\mathcal{R}(r)$ and $\\Theta(\\theta)$ are generally called the radial and angular ``potentials''. Zeros of these functions correspond to turning points in the trajectories, where the sign $\\pm$ flips in \\eqref{eq:RadialGeodesicEquation} and \\eqref{eq:PolarGeodesicEquation}, respectively. The radial potential is quartic in $r$. The closed-form expression for the roots is not particularly helpful. On the other hand, the $\\theta$ turning points [zeroes of $\\Theta$] have a simple expression:\n\\begin{align}\n\t\\label{eq:AngularTurningPoints}\n\t\\theta_\\pm=\\arccos\\pa{\\mp\\sqrt{\\Delta_\\theta+\\sqrt{\\Delta_\\theta^2+\\frac{\\hat{q}^2}{a^2}}}},\\qquad\n\t\\Delta_\\theta=\\frac{1}{2}\\pa{1-\\frac{\\hat{q}^2+\\hat{\\lambda}^2}{a^2}}.\n\\end{align}\nFor photons that reach infinity, the conserved quantity $E$ is equal to the energy measured by stationary observers at infinity. 
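The closed-form turning points \\eqref{eq:AngularTurningPoints} can be spot-checked numerically. A minimal Python sketch, using arbitrary test values for $(a,\\hat{\\lambda},\\hat{q})$ in units $M=1$:

```python
import numpy as np

# Sketch: verify that the closed-form angular turning points are zeros of
# the angular potential Theta(theta). Test values are arbitrary (M = 1).
a, lam, q = 0.998, 1.3, 2.1        # lam, q stand for hat-lambda, hat-q

def Theta(th):
    return q**2 + a**2*np.cos(th)**2 - lam**2/np.tan(th)**2

Dth = 0.5*(1 - (q**2 + lam**2)/a**2)
u_plus = Dth + np.sqrt(Dth**2 + q**2/a**2)    # cos^2 at the turning points
th_minus = np.arccos(+np.sqrt(u_plus))        # northern turning point
th_plus  = np.arccos(-np.sqrt(u_plus))        # southern turning point
```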
The energy measured in the rest frame of the emitting source is\n\\begin{align}\n\tE_s=p^{(t)}\n\t=-p_\\mu u^\\mu\n\t=\\gamma E\\sqrt{\\frac{\\Xi}{\\Delta\\Sigma}}\\pa{1-\\Omega_s\\hat{\\lambda}}.\n\\end{align}\nThe ratio is the ``redshift factor'' $g$, given by\n\\begin{align}\n\t\\label{eq:Redshift}\n\tg=\\frac{E}{E_s}\n\t=\\frac{1}{\\gamma}\\sqrt{\\frac{\\Delta\\Sigma}{\\Xi}}\\frac{1}{1-\\Omega_s\\hat{\\lambda}}\n\t=\\frac{\\sqrt{r_s^3-3Mr_s^2\\pm2aM^{1\/2}r_s^{3\/2}}}{r_s^{3\/2}\\pm M^{1\/2}\\pa{a-\\hat{\\lambda}}},\n\\end{align}\nwhere again the upper\/lower sign corresponds to a prograde\/retrograde orbit. Notice that the redshift depends only on $\\hat{\\lambda}$ and not $\\hat{q}$.\n\nIn general, the system of equations \\eqref{eq:GeodesicEquation} cannot be solved in closed form, and its solutions must be obtained numerically. In Sec.~\\ref{sec:NearExtremalExpansion}, we will find tremendous simplifications in the high-spin limit, which will allow us to proceed mostly analytically. Moreover, we will see in Sec.~\\ref{sec:ObservationalAppearance} that these solutions exhibit a variety of surprising observable phenomena not previously encountered in numerical studies.\n\nThe conserved quantities $\\hat{\\lambda}$ and $\\hat{q}$ help connect the angle of emission to the angle of reception or, equivalently, the image position on the screen.
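The frame form and the closed form of the redshift factor in Eq.~\\eqref{eq:Redshift} can be checked against each other numerically. A Python sketch for a prograde orbit with arbitrary test values:

```python
import numpy as np

# Sketch: check that the ZAMO-frame form of the redshift factor matches
# its closed form for a prograde orbit. Test values are arbitrary.
M, a, rs, lam_hat = 1.0, 0.9, 6.0, 2.0

Delta = rs**2 - 2*M*rs + a**2
Sigma = rs**2                       # equatorial source
Xi    = (rs**2 + a**2)**2 - Delta*a**2
omega = 2*a*M*rs/Xi

Omega_s = np.sqrt(M)/(rs**1.5 + a*np.sqrt(M))
v_s = Xi/(Sigma*np.sqrt(Delta))*(Omega_s - omega)
gamma = 1/np.sqrt(1 - v_s**2)

g_frame  = np.sqrt(Delta*Sigma/Xi)/(gamma*(1 - Omega_s*lam_hat))
g_closed = np.sqrt(rs**3 - 3*M*rs**2 + 2*a*np.sqrt(M)*rs**1.5) / (
    rs**1.5 + np.sqrt(M)*(a - lam_hat))
```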
We parameterize the emission angle by $(\\Theta,\\Phi)$ defined as the direction cosines in the local rest frame\\footnote{That is, we denote by $\\Phi$ the angle that the photon three-velocity in the rest frame of the emitter makes with the direction of motion ($\\phi$-direction), and we denote by $\\Theta$ the angle relative to the local zenith perpendicular to the equatorial plane ($-\\theta$-direction).}\n\\begin{align}\n\t\\label{eq:DirectionCosines}\n\t\\cos{\\Phi}=\\frac{p^{(\\phi)}}{p^{(t)}}\n\t=\\frac{\\pa{\\Sigma\/\\Xi}\\sqrt{\\Delta}\\hat{\\lambda}-v_s\\pa{1-\\omega\\hat{\\lambda}}}{1-\\Omega_s\\hat{\\lambda}},\\qquad\n\t\\cos{\\Theta}=-\\frac{p^{(\\theta)}}{p^{(t)}}\n\t=\\mp\\frac{\\hat{q}g}{r_s},\n\\end{align}\nwhere the upper\/lower sign corresponds to that in Eq.~\\eqref{eq:PolarGeodesicEquation}. Inverting these relations gives $\\hat{\\lambda}$ and $\\hat{q}$ as a function of the emission angles,\n\\begin{align}\n\t\\label{eq:ConservedQuantitiesDirectionCosines}\n\t\\hat{\\lambda}=\\frac{\\cos{\\Phi}+v_s}{\\pa{\\Sigma\/\\Xi}\\sqrt{\\Delta}+\\Omega_s\\cos{\\Phi}+\\omega v_s},\\qquad\n\t\\hat{q}=\\mp\\frac{r_s\\cos{\\Theta}}{g},\n\\end{align}\nwhere again the upper\/lower sign corresponds to that in Eq.~\\eqref{eq:PolarGeodesicEquation} (and ensures $\\hat{q}>0$). Photons with $\\theta\\to\\theta_o$ as $r\\to\\infty$ correspond to an image of the emitter on the observer's screen. Here and hereafter, the subscript $o$ stands for ``observer.\" The angle of approach to $\\theta_o$ corresponds to the position of the image on the observer screen. Following Ref.~\\cite{Bardeen1973}, we use ``screen coordinates'' $\\pa{\\alpha,\\beta}$ corresponding to the apparent position on the plane of the sky.
As we review in App.~\\ref{app:ScreenCoordinates}, these are related to the conserved quantities by\n\\begin{align}\n\t\\label{eq:ScreenCoordinates}\n\t\\alpha=-\\frac{\\hat{\\lambda}}{\\sin{\\theta_o}},\\qquad\n\t\\beta=\\pm\\sqrt{\\hat{q}^2+a^2\\cos^2{\\theta_o}-\\hat{\\lambda}^2\\cot^2{\\theta_o}}\n\t=\\pm\\sqrt{\\Theta(\\theta_o)}.\n\\end{align}\nThe sign $\\pm$ is equal to the sign of $p_\\theta$ (the $\\theta$ component of the photon four-momentum) at the observer, which determines whether the photon arrives from above or below. The angles on the observer sky are given by $\\alpha\/r_o$ and $\\beta\/r_o$, where $r_o$ is the distance to the source.\n\n\\subsection{Image positions}\n\nIntegrating up Eqs.~\\eqref{eq:GeodesicEquation} reduces the geodesic equation to quadratures. That is, the geodesic(s) connecting a source $(t_s,r_s,\\theta_s,\\phi_s)$ to an observer $(t_o,r_o,\\theta_o,\\phi_o)$ satisfy\\footnote{In the special case of a completely equatorial geodesic, Eq.~\\eqref{eq:RTheta} must be discarded. The trajectory is instead governed by Eqs.~\\eqref{eq:DeltaPhi}--\\eqref{eq:DeltaT} without the angular integrals. 
We do not consider equatorial geodesics in this paper, as they are only relevant for a measure-zero set of observers.}\n\\begin{subequations}\n\\label{eq:RaytracingEquation}\n\\begin{align}\n\t\\label{eq:RTheta}\n\t&\\fint_{r_s}^{r_o}\\frac{\\mathop{}\\!\\mathrm{d} r}{\\pm\\sqrt{\\mathcal{R}(r)}}\n\t=\\fint_{\\theta_s}^{\\theta_o}\\frac{\\mathop{}\\!\\mathrm{d}\\theta}{\\pm\\sqrt{\\Theta(\\theta)}},\\\\\n\t\\label{eq:DeltaPhi}\n\t\\Delta\\phi=\\phi_o-\\phi_s=&\\fint_{r_s}^{r_o}\\frac{a}{\\pm\\Delta\\sqrt{\\mathcal{R}(r)}}\\pa{2Mr-a\\hat{\\lambda}}\\mathop{}\\!\\mathrm{d} r\n\t+\\fint_{\\theta_s}^{\\theta_o}\\frac{\\hat{\\lambda}\\csc^2{\\theta}}{\\pm\\sqrt{\\Theta(\\theta)}}\\mathop{}\\!\\mathrm{d}\\theta,\\\\\n\t\\label{eq:DeltaT}\n\t\\Delta t=t_o-t_s=&\\fint_{r_s}^{r_o}\\frac{r}{\\pm\\Delta\\sqrt{\\mathcal{R}(r)}}\\br{r^3+a^2\\pa{r+2M}-2aM\\hat{\\lambda}}\\mathop{}\\!\\mathrm{d} r\n\t+\\fint_{\\theta_s}^{\\theta_o}\\frac{a^2\\cos^2{\\theta}}{\\pm\\sqrt{\\Theta(\\theta)}}\\mathop{}\\!\\mathrm{d}\\theta.\n\\end{align}\n\\end{subequations}\nThe slash notation $\\fint$ indicates that these integrals are to be considered line integrals along a trajectory connecting the two points, where turning points in $r$ or $\\theta$ occur any time the corresponding potential $\\mathcal{R}(r)$ or $\\Theta(\\theta)$ vanishes. The signs $\\pm$ in front of $\\sqrt{\\mathcal{R}(r)}$ and $\\sqrt{\\Theta(\\theta)}$ are chosen to be the same as that of $\\mathop{}\\!\\mathrm{d} r$ and $\\mathop{}\\!\\mathrm{d}\\theta$, respectively. Each solution of Eqs.~\\eqref{eq:RaytracingEquation} corresponds to a null geodesic (labeled by $\\hat{\\lambda}$ and $\\hat{q}$) connecting the source point to the observer point. For any given pair of points, there may be no solutions, a single solution, or many solutions.\n\nThe problem has an equatorial reflection symmetry. Without loss of generality, we take the observer to sit in the northern hemisphere $\\theta_o\\in\\pa{0,\\pi\/2}$. 
We exclude the measure-zero (and mathematically inconvenient) cases of an exactly face-on $(\\theta_o=0)$ or edge-on $(\\theta_o=\\pi\/2)$ observation. We place the observer at angular coordinate $\\phi_o=0$ for all time $t_o$, while we place the source at angular coordinate $\\phi_s$ at the initial time $t_s=0$. The coordinates of the source and observer are thus chosen and interpreted as follows:\n\\begin{subequations}\n\\label{eq:GeodesicLegs}\n\\begin{align}\n\tt_s&:\\text{emission time},&\n\tt_o&:\\text{reception time},\\\\\n\tr_s&:\\text{orbital radius},&\n\tr_o&\\to\\infty,\\\\\n\t\\theta_s&=\\pi\/2,&\n\t\\theta_o&\\in\\pa{0,\\pi\/2}:\\text{inclination angle},\\\\\n\t\\phi_s&=\\Omega_st_s,&\n\t\\phi_o&=0.\n\\end{align}\n\\end{subequations}\nThe images seen at inclination angle $\\theta_o$ of a source orbiting at $r_s$ may thus be determined as follows: For each observer time $t_o$, one makes the choices \\eqref{eq:GeodesicLegs} for $r_s,\\theta_s,\\phi_s,r_o,\\theta_o,\\phi_o$ and plugs them into the basic equations \\eqref{eq:RaytracingEquation}. This produces a set of three integral equations for three variables $t_s,\\hat{\\lambda},\\hat{q}$ in terms of $t_o$. Each solution then corresponds to an image whose location is fixed by $\\hat{\\lambda}$ and $\\hat{q}$ using Eq.~\\eqref{eq:ScreenCoordinates}. (The emission time $t_s$ may be computed, but is not of observable interest. We will decouple it from the equations, so that we solve two equations for the two variables $\\hat{\\lambda},\\hat{q}$.) Solving this problem as a function of the time $t_o$ provides the time-dependent positions of the images. The total image will be periodic with periodicity equal to that of the source ($T_s=2\\pi\/\\Omega_s$). 
However, individual images may evolve on longer timescales (see Fig.~\\ref{fig:3x3}), with the required periodicity emerging only after the totality of images is summed over.\n\nWe now make the ray-tracing equations \\eqref{eq:RaytracingEquation} more explicit by introducing labels to account for the number of rotations ($i.e.$, increases of $\\phi$ by $2\\pi$) and librations ($i.e.$, turning points in $\\theta$). See Ref.~\\cite{Vazquez2004} for a similar treatment. For the winding in $\\phi$, we could introduce an integral winding number $n$, $i.e.$, $n=\\mod_{2\\pi}\\Delta\\phi$. However, we find it more convenient to instead allow the observation point $\\phi_o=0$ to take on any physically equivalent value, $i.e.$, $\\phi_o=2\\pi N$ for an integer $N$. Using $\\phi_s=\\Omega_st_s$, we then have\n\\begin{align}\n\t2\\pi N=\\phi_o\n\t=\\Delta\\phi+\\phi_s\n\t=\\Delta\\phi+\\Omega_st_s\n\t=\\Delta\\phi-\\Omega_s\\Delta t+\\Omega_st_o,\n\\end{align}\nwhich implies\n\\begin{align}\n\t\\label{eq:RaytracingCondition}\n\t\\Delta\\phi-\\Omega_s\\Delta t=-\\Omega_st_o+2\\pi N.\n\\end{align}\nThis form is natural because $\\phi-\\Omega_st$ is conjugate to the $\\partial_t+\\Omega_s\\partial_\\phi$ symmetry of the problem. Note that the periodicity of the image is manifest in that $N\\to N+1$ absorbs the shift $t_o\\to t_o+T_s$ (with $T_s=2\\pi\/\\Omega_s$), leaving Eq.~\\eqref{eq:RaytracingCondition} invariant. To fix the physical meaning of $N$, we restrict to a single period $t_o\\in\\pa{0,T_s}$. Then \n$N$ tracks the number of \\textit{extra} windings executed by the photon relative to the emitter between its time of emission and reception.\\footnote{The winding number of the photon trajectory is $n=\\mod_{2\\pi}\\Delta\\phi$. In the time interval $\\br{t_s,t_o}$, the emitter undergoes $n_s=\\mod_{2\\pi}\\Omega_s\\Delta t$ windings. 
Since $t_o\\in\\pa{0,T_s}$ by assumption, $\\mod_{2\\pi}\\Omega_st_o=0$ and therefore Eq.~\\eqref{eq:RaytracingCondition} implies that $N=n-n_s$.} Note that in the near-horizon, near-extremal limit below, $\\Delta t$ and $\\Delta \\phi$ (and hence the winding number $n$) separately diverge linearly in $1\/\\epsilon$ [Eq.~\\eqref{eq:DeltaPhiLimit}], with the combination $\\Delta\\phi-\\Omega_s\\Delta t$ and its net winding number $N$ displaying a milder log divergence. This reflects the fact that photons received at a fixed time can have been emitted arbitrarily far in the past.\n\nFor the radial turning points, we note that there are two possibilities for light reaching infinity: ``direct'' trajectories that are initially outward-bound and have no turning points, and ``reflected'' trajectories that are initially inward-bound but have one radial turning point. We label these by $b=0$ (direct) and $b=1$ (reflected). For the $\\theta$ turning points, we let $m\\geq0$ denote the number of turning points and $s\\in\\cu{+1,-1}$ denote the \\textit{final} sign of $p_\\theta$, which is equal to the sign of $\\beta$ in Eq.~\\eqref{eq:ScreenCoordinates}.\n\nPutting everything together, the basic equations \\eqref{eq:RTheta} and \\eqref{eq:RaytracingCondition} can be re-expressed as the ``Kerr lens equations''\n\\begin{subequations}\n\\label{eq:LensEquations}\n\\begin{align}\n\t\\label{eq:LensEquation1}\n\tI_r+b\\tilde{I}_r&=G_\\theta^{m,s},\\\\\n\t\\label{eq:LensEquation2}\n\tJ_r+b\\tilde{J}_r+\\frac{\\hat{\\lambda}G^{m,s}_\\phi-\\Omega_sa^2G^{m,s}_t}{M}&=-\\Omega_st_o+2\\pi N,\n\\end{align}\n\\end{subequations}\nwhere the factor of $M$ was introduced to make both equations dimensionless, and we
defined\n\\begin{align}\n\tG^{m,s}_i=\n\t\\begin{cases}\n\t\t\\hat{G}_i\\qquad\\qquad&m=0,\\\\\n\t\tmG_i-s\\hat{G}_i\\qquad&m\\ge1,\n\t\\end{cases}\n\t\\qquad\n\ti\\in\\cu{t,\\theta,\\phi},\n\\end{align}\nwith\n\\begin{subequations}\n\\begin{align}\n\tG_\\theta&=M\\int_{\\theta_-}^{\\theta_+}\\frac{\\mathop{}\\!\\mathrm{d}\\theta}{\\sqrt{\\Theta(\\theta)}},&\n\t\\hat{G}_\\theta&=M\\int_{\\theta_o}^{\\pi\/2}\\frac{\\mathop{}\\!\\mathrm{d}\\theta}{\\sqrt{\\Theta(\\theta)}},\\\\\n\tG_\\phi&=M\\int_{\\theta_-}^{\\theta_+}\\frac{\\csc^2{\\theta}}{\\sqrt{\\Theta(\\theta)}}\\mathop{}\\!\\mathrm{d}\\theta,&\n\t\\hat{G}_\\phi&=M\\int_{\\theta_o}^{\\pi\/2}\\frac{\\csc^2{\\theta}}{\\sqrt{\\Theta(\\theta)}}\\mathop{}\\!\\mathrm{d}\\theta,\\\\\n\tG_t&=M\\int_{\\theta_-}^{\\theta_+}\\frac{\\cos^2{\\theta}}{\\sqrt{\\Theta(\\theta)}}\\mathop{}\\!\\mathrm{d}\\theta,&\n\t\\hat{G}_t&=M\\int_{\\theta_o}^{\\pi\/2}\\frac{\\cos^2{\\theta}}{\\sqrt{\\Theta(\\theta)}}\\mathop{}\\!\\mathrm{d}\\theta,\n\\end{align}\n\\end{subequations}\nand\n\\begin{subequations}\n\\begin{align}\n\tI_r&=M\\int_{r_s}^{r_o}\\frac{\\mathop{}\\!\\mathrm{d} r}{\\sqrt{\\mathcal{R}(r)}},&\n\t\\tilde{I}_r&=2M\\int_{r_\\mathrm{min}}^{r_s}\\frac{\\mathop{}\\!\\mathrm{d} r}{\\sqrt{\\mathcal{R}(r)}},\\\\\n\tJ_r&=\\int_{r_s}^{r_o}\\frac{\\mathcal{J}_r}{\\sqrt{\\mathcal{R}(r)}}\\mathop{}\\!\\mathrm{d} r,&\n\t\\tilde{J}_r&=2\\int^{r_s}_{r_\\mathrm{min}}\\frac{\\mathcal{J}_r}{\\sqrt{\\mathcal{R}(r)}}\\mathop{}\\!\\mathrm{d} r,\\\\\n\t\\label{eq:RadialIntegrand}\n\t\\mathcal{J}_r&=\\frac{a\\pa{2Mr-a\\hat{\\lambda}}-\\Omega_sr\\br{r^3+a^2\\pa{r+2M}-2aM\\hat{\\lambda}}}{\\Delta},\n\\end{align}\n\\end{subequations}\nwhere $r_\\mathrm{min}$ is the largest (real) root of $\\mathcal{R}(r)$. 
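The angular integrals can also be evaluated by direct numerical quadrature, which provides a useful cross-check on closed-form evaluations. A Python sketch with arbitrary test values (the integrable $1\/\\sqrt{\\cdot}$ endpoint singularities are handled by the adaptive quadrature):

```python
import numpy as np
from scipy.integrate import quad

# Numerical spot check of the angular integrals (arbitrary test values for
# the hatted quantities; the paper evaluates these as elliptic integrals).
M, a, lam, q = 1.0, 0.998, 1.3, 2.1      # lam, q stand for hat-lambda, hat-q
theta_o = np.pi/2 - 0.1

def Theta(th):                            # angular potential
    return q**2 + a**2*np.cos(th)**2 - lam**2/np.tan(th)**2

Dth = 0.5*(1 - (q**2 + lam**2)/a**2)
u_plus = Dth + np.sqrt(Dth**2 + q**2/a**2)
th_m = np.arccos(np.sqrt(u_plus))         # theta_-
th_p = np.pi - th_m                       # theta_+ by equatorial symmetry

f = lambda th: 1/np.sqrt(Theta(th))
G_theta = M*quad(f, th_m, th_p, limit=200)[0]        # full libration
G_half  = M*quad(f, th_m, np.pi/2, limit=200)[0]     # half libration
G_hat   = M*quad(f, theta_o, np.pi/2, limit=200)[0]  # hat-G_theta
```

Since the integrand is symmetric about the equator, the full libration integral should equal twice the half-libration integral, which the quadrature confirms.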
These equations are valid when the turning point $r_\\mathrm{min}$ lies outside the horizon, with $\\Upsilon>0$ defined by\n\\begin{align}\n\t\\label{eq:Upsilon}\n\t\\Upsilon\\equiv\\frac{q^4R_o}{q^2+qD_o+2R_o}e^{-qG_\\theta^{\\bar{m},s}}=\\frac{q^4}{q+2}e^{-qG_\\theta^{\\bar{m},s}}+\\O{\\frac{1}{R_o}}.\n\\end{align}\n\nWe now consider the direct $(b=0)$ and reflected $(b=1)$ cases separately. For direct images ($b=0$), we can rearrange \\eqref{eq:SimplifiedFirstEquation} to give\n\\begin{align}\n\t\\label{eq:DirectFirstEquation}\n\tq^2\\bar{R}+4\\pa{\\lambda-\\Upsilon}=-qD_s.\n\\end{align}\nSquaring both sides produces a quadratic equation in $\\lambda$,\n\\begin{align}\n\t\\label{eq:DirectFirstEquationSquared}\n\t\\pa{4-q^2}\\lambda^2-8\\Upsilon\\lambda-2\\Upsilon\\pa{q^2\\bar{R}-2\\Upsilon}=0,\n\\end{align}\nwhose solutions are\n\\begin{align}\n\t\\label{eq:LambdaSolutions}\n\t\\lambda_\\pm&=\\frac{2\\Upsilon}{4-q^2}\\br{2\\pm q\\sqrt{1+\\frac{\\bar{R}}{2\\Upsilon}\\pa{4-q^2}}}.\n\\end{align}\nThese solutions to the squared equation \\eqref{eq:DirectFirstEquationSquared} will only solve the original equation \\eqref{eq:DirectFirstEquation} when its LHS is negative.\\footnote{In the special case that $D_s=0$, corresponding to an emitter orbiting precisely at the photon's radial turning point, the equation is solved when the LHS is zero. We will consider this measure-zero case in Sec.~\\ref{sec:m}, but for the time being we can exclude it and thereby make Eq.~\\eqref{eq:AdditionalCondition} a strict inequality.} Thus, an additional condition on a valid solution $\\lambda_\\pm$ is\n\\begin{align}\n\t\\label{eq:AdditionalCondition}\n\t\\lambda_\\pm<\\Upsilon-\\frac{q^2\\bar{R}}{4}.\n\\end{align}\nHowever, Eq.~\\eqref{eq:LambdaSolutions} shows that $\\lambda_+$ is always larger than $\\Upsilon>0$, and hence can never satisfy \\eqref{eq:AdditionalCondition}. Thus, only $\\lambda_-$ can be a solution.
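The passage from the quadratic \\eqref{eq:DirectFirstEquationSquared} to the roots \\eqref{eq:LambdaSolutions} can be verified symbolically, e.g.\\ in Python with \\textsc{SymPy}:

```python
import sympy as sp

# Symbolic sketch: check that lambda_± of eq. (LambdaSolutions) are exactly
# the two roots of the quadratic (DirectFirstEquationSquared).
lam, q, U, R = sp.symbols('lambda q Upsilon Rbar', positive=True)

quadratic = (4 - q**2)*lam**2 - 8*U*lam - 2*U*(q**2*R - 2*U)

residues = []
for sign in (1, -1):
    lam_pm = 2*U/(4 - q**2)*(2 + sign*q*sp.sqrt(1 + R*(4 - q**2)/(2*U)))
    residues.append(sp.simplify(quadratic.subs(lam, lam_pm)))
```

Both residues simplify to zero, confirming the quoted roots.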
Plugging the formula \\eqref{eq:LambdaSolutions} for $\\lambda_-$ into \\eqref{eq:AdditionalCondition}, we find that the condition for $\\lambda_-$ to be valid is\n\\begin{align}\n\t\\bar{R}\\in\\br{z_-,z_+},\\qquad\n\tz_\\pm=\\frac{4\\Upsilon}{q^2}\\pa{1\\pm\\frac{2}{\\sqrt{4-q^2}}}\\gtrless0.\n\\end{align}\nHowever, recall that $\\bar{R}>0$ in the extremal case and $\\bar{R}\\ge2^{1\/3}$ in the near-extremal case. Thus we can simplify the condition to\n\\begin{align}\n\t\\label{eq:DirectCondition}\n\t\\bar{R}<\\frac{4\\Upsilon}{q^2}\\pa{1+\\frac{2}{\\sqrt{4-q^2}}}.\n\\end{align}\n\nFor reflected images with $b=1$, we can rearrange \\eqref{eq:SimplifiedFirstEquation} to produce the analog of \\eqref{eq:DirectFirstEquation},\n\\begin{align}\n\t\\label{eq:ReflectedFirstEquation}\n\tq^2\\bar{R}+4\\lambda-\\frac{\\pa{4-q^2}\\lambda^2}{\\Upsilon}=-qD_s.\n\\end{align}\nSquaring both sides gives a quartic equation,\n\\begin{align}\n\t\\label{eq:QuarticEquation}\n\t\\frac{\\pa{4-q^2}\\lambda^2}{\\Upsilon^2}\\br{\\pa{4-q^2}\\lambda^2-8\\Upsilon\\lambda-2\\Upsilon\\pa{q^2\\bar{R}-2\\Upsilon}}=0.\n\\end{align}\nThe factor in brackets is actually the same quadratic equation as in the direct case \\eqref{eq:DirectFirstEquationSquared}. Thus the solutions to \\eqref{eq:QuarticEquation} are $\\lambda=0$ and $\\lambda_\\pm$ defined in \\eqref{eq:LambdaSolutions}. To be a true solution of the original equation \\eqref{eq:ReflectedFirstEquation}, the LHS of \\eqref{eq:ReflectedFirstEquation} must be strictly negative. 
Thus $\\lambda=0$ is inadmissible and $\\lambda_\\pm$ must satisfy\n\\begin{align}\n\t\\lambda_\\pm<\\frac{2\\Upsilon}{4-q^2}\\pa{1-\\sqrt{1+\\frac{q^2\\bar{R}}{4\\Upsilon}\\pa{4-q^2}}}<0\n\t\\quad\\text{or}\\quad\n\t\\lambda_\\pm>\\frac{2\\Upsilon}{4-q^2}\\pa{1+\\sqrt{1+\\frac{q^2\\bar{R}}{4\\Upsilon}\\pa{4-q^2}}}>0.\n\\end{align}\nHowever, when $\\lambda>0$, the outermost turning point is inside the horizon [see Eq.~\\eqref{eq:TurningRadius}], meaning there can be no reflected image in this case. [The lack of a valid trajectory for $b=1$ and $\\lambda>0$ can also be seen from the failure of $\\tilde{J}_r$ to exist---see the $1\/\\Delta$ factor in Eq.~\\eqref{eq:RadialIntegrand}.] Thus only $\\lambda_-$ is admissible and only the first condition can possibly be satisfied. Plugging in the formula \\eqref{eq:LambdaSolutions} for $\\lambda_-$, this condition becomes\n\\begin{align}\n\t\\bar{R}>\\frac{4\\Upsilon}{q^2}\\pa{1+\\frac{2}{\\sqrt{4-q^2}}},\n\\end{align}\nwhich is the opposite inequality of the direct condition \\eqref{eq:DirectCondition}. Both of these inequalities are saturated for the boundary case of emission precisely from a photon's radial turning point.\n\nTo summarize, for each choice of $m$, $b$, $s$, and $q$, there is either zero or one solution for $\\lambda$. The solution exists provided\n\\begin{subequations}\n\\label{eq:LambdaCondition}\n\\begin{align}\n\t\\bar{R}<\\frac{4\\Upsilon}{q^2}\\pa{1+\\frac{2}{\\sqrt{4-q^2}}}\n\t&\\qquad\\text{if $b=0$ (direct)},\\\\\n\t\\bar{R}>\\frac{4\\Upsilon}{q^2}\\pa{1+\\frac{2}{\\sqrt{4-q^2}}}\n\t&\\qquad\\textrm{if $b=1$ (reflected)},\n\\end{align}\n\\end{subequations}\nin which case it is given by [repeating Eq.~\\eqref{eq:LambdaSolutions}, choosing the minus branch]\n\\begin{align}\n\t\\label{eq:Lambda}\n\t\\lambda=\\frac{2\\Upsilon}{4-q^2}\\br{2-q\\sqrt{1+\\frac{\\bar{R}}{2\\Upsilon}\\pa{4-q^2}}},\\qquad\n\t\\Upsilon=\\frac{q^4}{q+2}e^{-qG_\\theta^{\\bar{m},s}}. 
\n\\end{align}\nWe have included the large-$R_o$ expression for $\\Upsilon$, making the expressions independent of $R_o$. The full version of $\\Upsilon$ is given above in Eq.~\\eqref{eq:Upsilon}. Note that one may equivalently choose $m$, $s$, and $q$, and hence determine $b$ from Eq.~\\eqref{eq:LambdaCondition}.\n\n\\subsubsection{Second equation}\n\nThe second equation to solve is Eq.~\\eqref{eq:LensEquation2}. We will work with a dimensionless time coordinate $\\hat{t}_o$ in terms of which the emitter has unit periodicity,\n\\begin{align}\n\t\\hat{t}_o=\\frac{t_o}{T_s}=\\frac{t_o}{4\\pi M}+\\O{\\epsilon}.\n\\end{align}\nIn terms of this phase, Eq.~\\eqref{eq:LensEquation2} can be rewritten as\n\\begin{align}\n\t\\label{eq:SecondEquation}\n\t\\hat{t}_o=N+\\mathcal{G},\\qquad\n\t\\mathcal{G}\\equiv-\\frac{1}{2\\pi}\\pa{J_r+b\\tilde{J}_r+2G^{m,s}_\\phi-\\frac{1}{2}G^{m,s}_t}.\n\\end{align}\nWithout loss of generality, we restrict to the single period $\\hat{t}_o\\in\\br{0,1}$.\n\nThe $G$ integrals are given as elliptic functions in Ref.~\\cite{InProgress} and reproduced in App.~\\ref{app:AngularIntegrals}. The $J$ integrals may be computed using the method of matched asymptotic expansions (App.~\\ref{app:MAE}), giving\n\\begin{align}\n\t\\label{eq:Jr}\n\tJ_r&=-\\frac{7}{2}I_r+\\frac{q}{2}\\pa{1-\\frac{3}{4}\\frac{\\bar{R}}{\\lambda}}-\\frac{1}{2}\\pa{D_o-\\frac{3}{4}\\frac{D_s}{\\lambda}}+\\log\\br{\\frac{\\pa{q+2}^2\\bar{R}}{\\pa{D_o+R_o+2}\\pa{D_s+2\\bar{R}+2\\lambda}}}+\\O{\\epsilon},\\\\\n\t\\label{eq:ReflectedJr}\n\t\\tilde{J}_r&=-\\frac{7}{2}\\tilde{I}_r-\\frac{3}{4}\\frac{D_s}{\\lambda}+\\log\\br{\\frac{\\pa{D_s+2\\bar{R}+2\\lambda}^2}{\\pa{4-q^2}\\bar{R}^2}}+\\O{\\epsilon}.\n\\end{align}\nNote that this expression for $\\tilde{J}_r$ is only valid for $\\lambda<0$. 
When $\\lambda>0$, the radial turning point $r_\\mathrm{min}$ is inside the horizon (see App.~\\ref{app:ReflectedRadialIntegral}) and the integral for $\\tilde{J}_r$ does not exist [see the $1\/\\Delta$ factor in Eq.~\\eqref{eq:RadialIntegrand}]. This corresponds to the fact that all reflected light $(b=1)$ has $\\lambda<0$, which shows that the region $\\lambda>0$ lies within the shadow.\n\nFor each choice of discrete parameters $m,s,b$ having a non-zero range of $q$ satisfying the condition \\eqref{eq:LambdaCondition}, $\\lambda(q)$ is determined by \\eqref{eq:Lambda} and $\\mathcal{G}$ becomes a function of $q$. Equation~\\eqref{eq:SecondEquation} then gives the observation time $t_o(q)$ for each choice of an integer $N$. Restricting to $0\\leq\\hat{t}_o<1$ determines $N$ uniquely for each $q$, and the multivalued inverse $q\\pa{\\hat{t}_o}$ corresponds to the tracks of images moving along the NHEKline. That is, if there are domains $\\br{\\bar{t}^{\\rm min}_p,\\bar{t}_p^{\\rm max}}$, labeled by $p$, where $\\hat{t}_o(q)$ is invertible and lies between $0$ and $1$, then each inverse $q_p\\pa{\\hat{t}_o}$, taken over its range $0<\\bar{t}^\\mathrm{min}_p<\\hat{t}_o<\\bar{t}_p^\\mathrm{max}<1$, describes the track of an image over one period (or just a portion thereof). The number of inverses changes at local maxima and minima of $\\mathcal{G}(q)$, corresponding to a change in the number of images at the associated time $\\hat{t}_o$. Minima correspond to pair creation of images, while maxima correspond to pair annihilation. Finding the tracks of all such images for all choices of $N,m,s,b$ completes the task of finding the time-dependent locations of the images.
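The inversion of $\\hat{t}_o(q)$ into image tracks can be sketched generically: sample the curve, split it into monotone branches at local extrema, and invert each branch. The following Python toy uses a smooth stand-in function (not the actual $\\mathcal{G}$) purely to illustrate the bookkeeping; the number of solutions returned at a given time changes by two across an extremum, mirroring pair creation and annihilation of images:

```python
import numpy as np

# Toy sketch of the branch-inversion step. The sampled function below is a
# hypothetical stand-in for \hat{t}_o(q), not the paper's actual curve.
q = np.linspace(0.1, 1.7, 2001)
t_of_q = 0.5 + 0.3*np.sin(3*q)*np.exp(-q)

dq_sign = np.sign(np.diff(t_of_q))
breaks = np.where(np.diff(dq_sign) != 0)[0] + 1   # indices of local extrema
edges = np.concatenate(([0], breaks, [len(q) - 1]))

branches = []
for lo, hi in zip(edges[:-1], edges[1:]):
    qs, ts = q[lo:hi + 1], t_of_q[lo:hi + 1]
    if ts[0] > ts[-1]:                 # np.interp needs increasing abscissae
        qs, ts = qs[::-1], ts[::-1]
    branches.append((ts, qs))

def images_at(t):
    """Return all q solving t_of_q(q) = t, one per monotone branch."""
    out = []
    for ts, qs in branches:
        if ts[0] <= t <= ts[-1]:
            out.append(np.interp(t, ts, qs))
    return out
```

In practice the same branch splitting is applied to $\\hat{t}_o(q)=N+\\mathcal{G}(q)$ for each admissible choice of $(m,s,b,N)$.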
We describe a practical approach, along with an example, in Sec.~\\ref{sec:ObservationalAppearance} below.\n\n\\subsubsection{Winding number around the axis of symmetry}\n\nThe winding number around the axis of symmetry for a photon trajectory is $n=\\mod_{2\\pi}\\Delta\\phi$, where $\\Delta\\phi$ is computed from Eq.~\\eqref{eq:DeltaPhi}. Using the method of matched asymptotic expansions described in App.~\\ref{app:MAE}, we find that to leading order in $\\epsilon$,\n\\begin{align}\n\t\\label{eq:DeltaPhiLimit}\n\t\\Delta\\phi=\\frac{1}{2\\lambda\\epsilon}\\pa{\\frac{D_s}{\\bar{R}}-q}+\\O{\\log{\\epsilon}}.\n\\end{align}\nNote that this leading-order expression diverges linearly in $1\/\\epsilon$.\n\n\\subsubsection{Scaling with $\\epsilon$ and $R_o$}\n\nIt is instructive to examine the scaling of various quantities as $\\epsilon\\to0$ and $R_o\\to\\infty$. Plugging \\eqref{eq:m} for $m$ into the definition \\eqref{eq:SecondEquation} of $\\mathcal{G}$, we find \n\\begin{align}\n\t\\label{eq:G}\n\t\\mathcal{G}=-\\frac{1}{2\\pi}\\br{J_r+b\\tilde{J}_r-\\frac{1}{qG_\\theta}\\pa{2G_\\phi-\\frac{1}{2}G_t}\\log{\\epsilon}+2G^{\\bar{m},s}_\\phi-\\frac{1}{2}G^{\\bar{m},s}_t}.\n\\end{align}\nThe integral $\\tilde{J}_r$ [Eq.~\\eqref{eq:ReflectedJr}] is finite for all parameter values, while the integral $J_r$ [Eq.~\\eqref{eq:Jr}] diverges as $\\epsilon\\to0$ and as $R_o\\to\\infty$,\n\\begin{align}\n\tJ_r&=\\frac{q}{2}\\pa{1-\\frac{3}{4}\\frac{\\bar{R}}{\\lambda}}-\\pa{1-\\frac{3}{8}\\frac{D_s}{\\lambda}}-\\frac{7}{2q}\\log\\br{\\frac{4q^4}{\\pa{q+2}\\pa{q^2\\bar{R}+qD_s+4\\lambda}}}+\\log\\br{\\frac{\\pa{q+2}^2\\bar{R}}{2\\pa{D_s+2\\bar{R}+2\\lambda}}}\\nonumber\\\\\n\t&\\quad+\\frac{7}{2q}\\log{\\epsilon}-\\pa{\\frac{R_o}{2}+\\log{R_o}}+\\O{\\frac{1}{R_o}}+\\O{\\epsilon}.\n\\end{align}\nThus Eq.~\\eqref{eq:G} has logarithmic divergences in $\\epsilon\\to0$ as well as linear and logarithmic divergences in $R_o\\to\\infty$.
These signal that the integer $N$ will become asymptotically large. Similarly to Eq.~\\eqref{eq:m} for $m$, we may define\n\\begin{align}\n\t\\label{eq:N}\n\tN=\\frac{1}{2\\pi}\\br{\\frac{1}{2q}\\pa{7-\\frac{4G_\\phi-G_t}{G_\\theta}}\\log{\\epsilon}-\\pa{\\frac{R_o}{2}+\\log{R_o}}}+\\bar{N},\n\\end{align}\nwhere $\\bar{N}$ can be regarded as $\\O{\\epsilon^0}$ and $\\O{R_o^0}$ (see footnote \\ref{fn:ContinuousLabels}). Plugging \\eqref{eq:N} into \\eqref{eq:G} gives an equation with all terms $\\O{\\epsilon^0}$,\n\\begin{align}\n\t\\hat{t}_o=\\bar{N}-\\frac{1}{2\\pi}\\Bigg\\{&\n\t\\frac{q}{2}\\pa{1-\\frac{3}{4}\\frac{\\bar{R}}{\\lambda}}-\\pa{1+\\pa{2b-1}\\frac{3}{8}\\frac{D_s}{\\lambda}}-\\frac{7}{2q}\\log\\br{\\frac{4q^4}{\\pa{q+2}}\\frac{\\pa{q^2\\bar{R}+qD_s+4\\lambda}^{2b-1}}{\\br{4\\pa{4-q^2}\\lambda^2}^b}}\\nonumber\\\\\n\t&\\quad+\\log\\br{\\frac{\\pa{q+2}^2}{2\\pa{4-q^2}^b}\\pa{\\frac{D_s+2\\bar{R}+2\\lambda}{\\bar{R}}}^{2b-1}}+2G^{\\bar{m},s}_\\phi-\\frac{1}{2}G^{\\bar{m},s}_t\n\t\\Bigg\\}.\n\\end{align}\nSolving this equation is in effect the main step in producing an image, but in practice it is easier to work directly with \\eqref{eq:SecondEquation}, where the constraint that $m$ and $N$ be integers can be more directly imposed.\n\n\\subsection{Image fluxes}\n\nWe now expand formula \\eqref{eq:Flux} for the flux of an individual image to leading order in $\\epsilon$. 
Noting that\n\\begin{align}\n\t\\label{eq:HatJacobians}\n\t\\frac{\\mathop{}\\!\\partial\\lambda}{\\mathop{}\\!\\partial\\hat{\\lambda}}=-\\frac{1}{2M\\epsilon},\\qquad\n\t\\frac{\\mathop{}\\!\\partial q}{\\mathop{}\\!\\partial\\hat{q}}=-\\frac{\\sqrt{3-q^2}}{qM},\\qquad\n\t\\ab{\\det\n\t\\begin{pmatrix}\n\t\t\\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial\\hat{\\lambda}} & \\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial\\hat{q}}\\vspace{2pt}\\\\\n\t\t\\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial\\hat{\\lambda}} & \\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial\\hat{q}}\n\t\\end{pmatrix}}\n\t=\\frac{\\sqrt{3-q^2}}{2qM^2\\epsilon}\\ab{\\det\n\t\\begin{pmatrix}\n\t\t\\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial q}\\vspace{2pt}\\\\\n\t\t\\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial q}\n\t\\end{pmatrix}},\n\\end{align}\nwe have\n\\begin{align}\n\t\\label{eq:FluxExpansion}\n\t\\frac{F_o}{F_N}=\\frac{\\epsilon\\bar{R}}{2D_s}\\frac{qg^3}{\\sin{\\theta_o}\\sqrt{1-\\frac{q^2}{3}}\\sqrt{\\Theta_0(\\theta_o)}}\\ab{\\det\n\t\\begin{pmatrix}\n\t\t\\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial q}\\vspace{2pt}\\\\\n\t\t\\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial q}\n\t\\end{pmatrix}}^{-1},\n\\end{align}\nwhere $g$ and $D_s$ are given in Eqs.~\\eqref{eq:RedshiftLimit} and \\eqref{eq:RadialIntegrandLimits}, respectively, and [see Eq.~\\eqref{eq:AlphaBetaExpansion}]\n\\begin{align}\n\t\\Theta_0(\\theta_o)&=\\Theta(\\theta_o)|_{\\lambda=0}\n\t=3-q^2+\\cos^2{\\theta_o}-4\\cot^2{\\theta_o}\n\t=\\pa{\\frac{\\beta}{M}}^2.\n\\end{align}\nRecall that $A$ and $B$ are defined in terms of the various integrals $I$, $J$, and $G$ by Eqs.~\\eqref{eq:A} and 
\\eqref{eq:B}. To leading order in $\\epsilon$, the determinant is\n\\begin{align}\n\t\\label{eq:Determinant}\n\t\\ab{\\det\n\t\\begin{pmatrix}\n\t\t\\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial q}\\vspace{2pt}\\\\\n\t\t\\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial q}\n\t\\end{pmatrix}}\n\t=\\Bigg|&\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial\\lambda}\\pa{J_r+b\\tilde{J}_r}\\br{\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial q}\\pa{I_r+b\\tilde{I}_r}-\\frac{\\mathop{}\\!\\partial G_\\theta^{m,s}}{\\mathop{}\\!\\partial q}}\\\\\n\t&\\quad-\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial\\lambda}\\pa{I_r+b\\tilde{I}_r}\\br{\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial q}\\pa{J_r+b\\tilde{J}_r}+\\frac{\\mathop{}\\!\\partial G_{t\\phi}^{m,s}}{\\mathop{}\\!\\partial q}}\\Bigg|+\\O{\\epsilon\\log{\\epsilon}},\\nonumber\n\\end{align}\nwhere we introduced\n\\begin{align}\n\t\\label{eq:RelevantG}\n\tG_{t\\phi}^{m,s}=\\frac{\\hat{\\lambda}G^{m,s}_\\phi-\\Omega_sa^2G^{m,s}_t}{M}.\n\\end{align}\nNotice that the $\\mathop{}\\!\\partial_\\lambda G_\\theta^{m,s}$ and $\\mathop{}\\!\\partial_\\lambda G_{t\\phi}^{m,s}$ terms do not contribute in \\eqref{eq:Determinant}; these terms are subleading on account of the factor of $\\epsilon$ in the Jacobian $\\mathop{}\\!\\partial_\\lambda\\hat{\\lambda}$ [see Eq.~\\eqref{eq:HatJacobians}]. The $\\log{\\epsilon}$ scaling of $m$ makes them part of the $\\O{\\epsilon\\log{\\epsilon}}$ error.\n\nThe expressions for the $G$ integrals are given at the end of App.~\\ref{app:AngularIntegrals}. The unhatted integrals scale as $\\log{\\epsilon}$ on account of the $\\log{\\epsilon}$ scaling of $m$, while the hatted integrals are $\\O{\\epsilon^0}$, and therefore subleading. 
The derivatives of the $I$ and $J$ integrals may be computed for small $\\epsilon$ by the method of matched asymptotic expansions, and the results are listed in Eqs.~\\eqref{eq:RadialIntegralsVariation} of App.~\\ref{app:MAE}. Using these expressions, the remaining terms in \\eqref{eq:Determinant} are given by\n\\begin{subequations}\n\\begin{align}\n\t\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial\\lambda}\\pa{I_r+b\\tilde{I}_r}&=-\\frac{1}{\\lambda}\\br{\\pa{2b-1}\\frac{\\bar{R}}{D_s}+\\frac{1}{q}},\\\\\n\t\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial q}\\pa{I_r+b\\tilde{I}_r}&=-\\frac{I_r+b\\tilde{I}_r}{q}+\\frac{1}{q\\pa{4-q^2}}\\br{\\pa{8-q^2}\\pa{\\pa{2b-1}\\frac{\\bar{R}}{D_s}-\\frac{1}{D_o}+\\frac{2}{q}}+\\pa{2b-1}\\frac{4\\lambda}{D_s}-\\frac{2R_o}{D_o}},\\\\\n\t\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial\\lambda}\\pa{J_r+b\\tilde{J}_r}&=\\frac{1}{2\\lambda}\\br{\\pa{2b-1}\\frac{4\\bar{R}+\\lambda}{D_s}+\\frac{7}{q}+\\frac{3}{4}\\frac{\\pa{2b-1}D_s+q\\bar{R}}{\\lambda}},\\\\\n\t\\frac{\\mathop{}\\!\\partial}{\\mathop{}\\!\\partial q}\\pa{J_r+b\\tilde{J}_r}&=\\frac{7}{2q}\\pa{I_r+b\\tilde{I}_r}+\\frac{1}{2}-\\frac{3}{8q}\\frac{\\pa{2b-1}D_s+q\\bar{R}}{\\lambda}-\\frac{11}{q^2}\\\\\n\t&\\quad-\\frac{1}{2q\\pa{4-q^2}}\\br{\\pa{2b-1}\\frac{\\pa{32-5q^2}\\bar{R}+\\pa{16-q^2}\\lambda}{D_s}-\\frac{\\pa{7-q^2}\\pa{8-q^2+2R_o}}{D_o}+\\frac{24}{q}}.\\nonumber\n\\end{align}\n\\end{subequations}\nIn these expressions, we kept both the leading $\\O{\\log{\\epsilon}}$ and subleading $\\O{\\epsilon}$ contributions. It follows that the determinant \\eqref{eq:Determinant} scales as $\\log{\\epsilon}$ with subleading $\\O{\\epsilon^0}$ corrections included. Putting everything together, the flux \\eqref{eq:FluxExpansion} scales as $\\epsilon\/\\log{\\epsilon}$ with subleading $\\O{\\epsilon}$ contributions included. 
These subleading corrections are numerically important at reasonable values of $\\epsilon$.\n\n\\section{Observational appearance}\n\\label{sec:ObservationalAppearance}\n\nWe now describe our practical approach to implementing the method described above and discuss details of the results. We implemented this procedure in a \\textsc{Mathematica} notebook, which is included with the arXiv version of this submission.\n\nThe image depends on four parameters: $\\epsilon$, $\\bar{R}$, $R_o$, and $\\theta_o$. One must choose $\\epsilon\\ll1$ and $R_o\\gg1$ for our approximations to be accurate. The radius $\\bar{R}$ must satisfy $\\bar{R}\\geq2^{1\/3}$ for the emitter to be on a stable orbit of a near-extremal black hole. The observer inclination $\\theta_o$ must satisfy $\\arctan{(4\/3)^{1\/4}}<\\theta_o<\\pi\/2$ for there to be any flux at all (under our approximations). Here we will focus on the following example:\n\\begin{align}\n\t\\label{eq:ExampleParameters}\n\tR_o=100,\\qquad\n\t\\theta_o=\\frac{\\pi}{2}-\\frac{1}{10}=84.27^\\circ,\\qquad\n\t\\epsilon=10^{-2},\\qquad\n\t\\bar{R}=\\bar{R}_\\mathrm{ISCO}=2^{1\/3}.\n\\end{align} \nThis describes an emitter (or hotspot) on the ISCO of a near-extremal black hole with spin $a\/M=99.99995\\%$, viewed from a nearly edge-on inclination.\n\nAs described below Eq.~\\eqref{eq:ReflectedJr}, each image is labeled by discrete parameters $m,s,b,N$ as well as an additional label if the function $\\mathcal{G}(q)$ has maxima or minima. In practice, it is easiest to choose $m,s,b$ first and then determine what range of $N$ is allowed and whether there are any extra images due to maxima or minima. For each choice of integers $m,b,s$, the allowed values of $q$ are determined by the condition \\eqref{eq:LambdaCondition}. We can then determine $\\lambda(q)$ [Eq.~\\eqref{eq:Lambda}] and hence $\\mathcal{G}(q)$ [Eq.~\\eqref{eq:SecondEquation}] over the allowed domain. 
The range of $\\mathcal{G}(q)$ over this domain determines the allowed values of $N$ by the requirement that $\\hat{t}_o=\\mathcal{G}+N$ lies between $0$ and $1$. (In practice we impose a small-$q$ cutoff, as explained in Fig.~\\ref{fig:Segments}.) For each allowed value of $N$ we invert $\\hat{t}_o(q)$, labeling the inverses $q_i\\pa{\\hat{t}_o}$ by a discrete integer $i$. These functions $q_i\\pa{\\hat{t}_o}$ correspond to segments of an image track; each such track segment is uniquely labeled by $(m,b,s,N,i)$. An example is shown in Fig.~\\ref{fig:Segments}.\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{Fig2.pdf}\n\t\\caption{Determining track segments for $m=2$, $b=0$, $s=+1$ with the parameter choices of \\eqref{eq:ExampleParameters}. The condition \\eqref{eq:LambdaCondition} allows the whole range of $q$. On the left, we plot $\\mathcal{G}(q)$ with color-coding explained below. In the middle, we plot the tracks $q(\\hat{t}_o)$ of the images over the single period $0<\\hat{t}_o<1$. The yellow, magenta, and cyan curves have $N=-6,-7,-8$, respectively, and no extra label $i$, while the red, green, and blue curves have $N=-9$ and $i=1,2,3$, respectively. The function $\\mathcal{G}(q)$ actually extends to negatively infinite values near $q=0$, corresponding formally to infinitely many images (values of $N$) near the ends of the NHEKline. However, these images are negligibly faint (rightmost plot). The complete image is formed by stitching together all track segments (labeled by $m,b,s,N,i$).}\n\t\\label{fig:Segments}\n\\end{figure}\n\nFor each track segment $q\\pa{\\hat{t}_o}$, we may determine $\\lambda\\pa{\\hat{t}_o}$ by \\eqref{eq:Lambda}. From these two conserved quantities, we may then compute the main observables for the segment: image position $\\pa{\\alpha,\\beta}$ [Eq.~\\eqref{eq:AlphaBetaExpansion}], image redshift $g$ [Eq.~\\eqref{eq:RedshiftLimit}], and image flux $F_o$ [Eq.~\\eqref{eq:FluxExpansion}]. 
The complete observable information is built up by including all such track segments (all choices of $m,b,s,N,i$). Formally, there are infinitely many segments since $m$ and $-N$ can become arbitrarily large, but in practice the flux is vanishingly small for all but a few values of $m$ and $N$ (see Sec.~\\ref{sec:m} below and Fig.~\\ref{fig:Segments} for details). We find that the track segments line up into continuous tracks that begin and end either at the endpoints of the NHEKline with vanishing flux or as part of a pair creation\/annihilation event with infinite flux (a geometrical caustic). More generally, caustics appear when different tracks intersect. The infinite flux can be traced to the vanishing of the derivatives $\\mathop{}\\!\\partial\\theta_s\/\\mathop{}\\!\\partial\\hat{q}$ and $\\mathop{}\\!\\partial\\phi_s^\\star\/\\mathop{}\\!\\partial\\hat{q}$, indicating that the image extends in the vertical direction. That is, the whole NHEKline flashes at caustics.\n\nFigure~\\ref{fig:3x3} shows the main observables for three different values of spin. In each case, there is a bright primary image (green) together with secondary images that are important only near caustics. The primary image is a combination of direct ($b=0$) and reflected ($b=1$) light, with the transition occurring near peak flux. These photons are emitted near the forward direction (equivalently $g$ near $\\sqrt{3}$) and orbit the black hole $\\O{\\epsilon^{-1}}$ times, while crossing the equatorial plane $\\O{\\log{\\epsilon}}$ times, before emerging from the throat. For example, at $\\epsilon=.01$, the primary image is composed of segments with two and three equatorial crossings ($i.e.$, $m=2$ and $3$), and with winding number around the axis of symmetry [Eq.~\\eqref{eq:DeltaPhiLimit}] ranging between $17$ and $23$. The peak redshift factor of $g\\approx1.6$ corresponds to light emitted in a cone of $27^\\circ$ around the forward direction. 
For the secondary images, we have included a representative selection to illustrate the structure. The typical redshift in this case is $g=1\/\\sqrt{3}$ (corresponding to $\\lambda\\sim0$). As the spin is increased, the typical position and redshift of the images do not change, while the typical flux scales as $\\epsilon\/\\log{\\epsilon}$.\n\nWe see that the light emerges with typical redshift factor of either $g=\\sqrt{3}$ (blueshifted primary image) or $g=1\/\\sqrt{3}$ (redshifted secondary images). These factors will shift the iron line at $E_{\\mathrm{FeK}\\alpha}=6.4$~keV to $11.1$~keV and $3.7$~keV, respectively. Astronomical observation of spectral lines at such frequencies could be an indication of high-spin black holes. This is tantalizingly close to the observed peak at 3.5 keV \\cite{Carlson2015}.\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{Fig3.pdf}\n\t\\caption{Positions, fluxes, and redshift factors of the brightest few images for three different values of near-extremal spin. Left to right, we have $\\epsilon=.15$ (Thorne limit), $\\epsilon=.01$, and $\\epsilon=.001$. We have color-coded by continuous image tracks, each of which may be composed of multiple track segments in our accounting (different values of $m,b,N,s,i$). For example, the primary image (green) is composed of $3$, $4$, and $5$ segments in the $\\epsilon=.15$, $.01$, $.001$ cases respectively.}\n\n\t\\label{fig:3x3}\n\\end{figure}\n\n\\subsection{Peak flux}\n\\label{sec:m}\n\nThe complete image is assembled from all the track segments $(m,b,s,N,i)$. The parameters $b,s,N,i$ have finite ranges, while in principle $m$ can take any value $m\\geq0$. However, according to the discussion surrounding Eq.~\\eqref{eq:m}, we expect that only values of $m\\sim-1\/(qG_\\theta)\\log{\\epsilon}$ should matter. 
Our numerical analysis confirms this suspicion and reveals that the flux is maximized at the special value $m=m_0$ for which the radius of emission $\\bar{R}$ precisely coincides with the radial turning point $x_\\mathrm{min}$ [Eq.~\\eqref{eq:TurningRadius}] of the photon trajectory. This is determined by plugging $\\bar{R}=x_\\mathrm{min}$ (equivalently $D_s=0$) into the geodesic equation \\eqref{eq:FirstEquation},\n\\begin{align}\n I_r|_{\\bar{R}=x_\\mathrm{min}}=m_0G_\\theta-s\\hat{G}_\\theta.\n\\end{align}\nNote that the dependence on $b$ has dropped out because $\\tilde{I}_r$ vanishes when $D_s=0$. Explicitly, we have\n\\begin{align}\n \\label{eq:mPeak}\n\tm_0=s\\frac{\\hat{G}_\\theta}{G_\\theta}+\\frac{1}{qG_\\theta}\\log\\br{\\frac{4q^2}{q^2+qD_o+2R_o}\\pa{1+\\frac{2}{\\sqrt{4-q^2}}}\\frac{R_o}{\\epsilon\\bar{R}}}.\n\\end{align}\nThis special value of $m$ also corresponds to the boundary between direct ($b=0$) and reflected ($b=1$) light. Indeed, using $e^{qG_\\theta^{\\bar{m},s}}=\\epsilon e^{qG_\\theta^{m,s}}$, the conditions \\eqref{eq:LambdaCondition} can be expressed as\n\\begin{subequations}\n\\begin{align}\n\tm&<m_0\\qquad\\text{if $b=0$ (direct)},\\\\\n\tm&>m_0\\qquad\\text{if $b=1$ (reflected)}.\n\\end{align}\n\\end{subequations}\nFigures~\\ref{fig:mFlux} and \\ref{fig:Winding} demonstrate the peak flux and its properties. In Fig.~\\ref{fig:mFlux}, we show the dependence of flux on $m$ at fixed $q$, showing the peak at $m=m_0$ and exponential falloff on either side. In Fig.~\\ref{fig:Winding}, we show the peak flux in the time domain along with the winding number $\\Delta\\phi\/2\\pi$. 
The cusp in the winding number is associated with $D_s=0$ [see Eq.~\\eqref{eq:DeltaPhiLimit}], showing that peak flux indeed occurs at $D_s=0$ ($i.e.$, $m=m_0$).\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{Fig4.pdf}\n\t\\caption{Left to right: plots of $F_o\/F_N$ for $\\epsilon=.15$ (Thorne limit), $\\epsilon=.01$, and $\\epsilon=.001$, with parameters otherwise as in Eq.~\\eqref{eq:ExampleParameters}. We set $q=3\/2$ and let $m$ vary. Blue and red correspond to direct ($b=0$) and reflected ($b=1$), respectively. At each value of $m$ and $b$, there are two images, corresponding to $s=\\pm1$, except for the special case $m=0$, which has only one sign of $s$. The vertical lines are at the predicted location of peak flux, $m=m_0$, for each value of $s\\in\\cu{1,-1}$.}\n\t\\label{fig:mFlux}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[width=.4\\textwidth]{Fig5.pdf}\n\t\\caption{The flux (black) and winding number (gray) of the primary image for the parameters \\eqref{eq:ExampleParameters}. The position of the source at the time of emission is given by $\\mod_{2\\pi}\\Delta\\phi$. Notice that the single peak in observed flux corresponds to several orbits of the source.}\n\t\\label{fig:Winding}\n\\end{figure}\n\nFinally, we derive simple formulae for the flux, redshift factor, and winding number at peak flux. Since $m$ must be an integer, $m=m_0$ forces $q$ to take specific values found by solving Eq.~\\eqref{eq:mPeak}. Of the several such solutions for $q$, we find numerically that the peak flux corresponds to the largest value of $q$ in the allowed range \\eqref{eq:NHEKline}. (We also find that as $\\epsilon\\to0$, the other solutions for $q$ correspond to the fluxes of secondary images at the same moment.) 
When $m=m_0$ and $\\bar{R}=x_\\mathrm{min}$, we have\n\\begin{align}\n \\label{eq:BrightestImage}\n \\lambda=-P\\bar{R},\\qquad\n \\Upsilon=(1-P)P\\bar{R},\\qquad\n P\\equiv1-\\sqrt{1-\\frac{q^2}{4}}\\ge0.\n\\end{align}\nWe may now plug $m=m_0$ ($i.e.$, Eq.~\\eqref{eq:BrightestImage} and therefore $D_s=0$) into Eqs.~\\eqref{eq:RedshiftLimit}, \\eqref{eq:DeltaPhiLimit}, and \\eqref{eq:FluxExpansion} to determine the flux, redshift, and winding number at the moment of peak flux. We find\n\\begin{subequations}\n\\begin{align}\n \\label{eq:Peak}\n &\\frac{F_o}{F_N}=\\frac{\\epsilon\\bar{R}}{X}\\frac{qg^3}{\\sin{\\theta_o}\\sqrt{1-\\frac{q^2}{3}}\\sqrt{\\Theta_0(\\theta_o)}},\\qquad\n g=\\frac{1}{\\sqrt{3}-\\frac{4}{\\sqrt{3}}P},\\qquad\n\t\\Delta\\phi=\\frac{q}{2P}\\frac{1}{\\epsilon\\bar{R}},\\\\\n\tX&\\equiv2D_s\\ab{\\det\n\t\\begin{pmatrix}\n\t\t\\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial B}{\\mathop{}\\!\\partial q}\\vspace{2pt}\\\\\n\t\t\\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial\\lambda} & \\frac{\\mathop{}\\!\\partial A}{\\mathop{}\\!\\partial q}\n\t\\end{pmatrix}}\\\\\n\t&=\\ab{\\pa{1-\\frac{4}{P}}\\frac{\\mathop{}\\!\\partial G_\\theta^{m,s}}{\\mathop{}\\!\\partial q}+\\frac{2}{P}\\frac{\\mathop{}\\!\\partial G_{t\\phi}^{m,s}}{\\mathop{}\\!\\partial q}+\\frac{1+\\frac{3-q^2}{P}}{4-q^2}\\pa{1-\\frac{q^2-2R_o-8}{qD_o}-\\frac{16}{q^2}}+\\pa{1+\\frac{3}{P}}\\frac{I_r|_{\\bar{R}=x_\\mathrm{min}}}{q}}.\n\\end{align}\n\\end{subequations}\nNote that the dependence on the parameter $b\\in\\cu{0,1}$, which is undefined at the transition between direct and reflected light, has again dropped out, this time because it only entered through an overall factor of $\\ab{2b-1}=1$ multiplying $X$. We see from Eq.~\\eqref{eq:Peak} that the redshift factor at peak flux is bounded below by $1\/\\sqrt{3}$, so a redshift factor of at least $1\/\\sqrt{3}$ will always be reached by at least one image over each period. 
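To make these bounds explicit, note that (assuming the allowed range \\eqref{eq:NHEKline} is $0<q\\leq\\sqrt{3}$, as suggested by the factor $\\sqrt{1-\\frac{q^2}{3}}$ above) we have $0<P\\leq\\frac{1}{2}$, and the peak redshift factor in Eq.~\\eqref{eq:Peak} may be rewritten as\n\\begin{align*}\ng=\\frac{\\sqrt{3}}{3-4P},\\qquad\ng\\big|_{P\\to0}=\\frac{1}{\\sqrt{3}},\\qquad\ng\\big|_{P=\\frac{1}{2}}=\\sqrt{3},\n\\end{align*}\nwhich increases monotonically between the two typical redshift factors quoted above. At the upper endpoint $q=\\sqrt{3}$ (where $P=\\frac{1}{2}$), the winding number at peak flux evaluates to $\\Delta\\phi=\\sqrt{3}\/(\\epsilon\\bar{R})$, i.e. $\\Delta\\phi\/2\\pi\\approx22$ for the parameters \\eqref{eq:ExampleParameters}, consistent with the winding numbers $17$--$23$ quoted above for the primary image. 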
Numerically, we also find that the redshift factor is always maximized at peak flux, so Eq.~\\eqref{eq:Peak} gives an \\textit{upper} bound on the redshift factor over the image period.\n\nTo summarize: The peak flux, as well as the associated redshift factor and winding number, are given by Eq.~\\eqref{eq:Peak}, using the largest value of $q$ [in the allowed range \\eqref{eq:NHEKline}] such that $m_0$ [Eq.~\\eqref{eq:mPeak}] is an integer.\n\n\\subsection*{Acknowledgements}\n\nThis work was supported in part by NSF grants 1205550 to Harvard University and 1506027 to the University of Arizona. S.G. thanks Dimitrios Psaltis and Feryal \\\"Ozel for helpful conversations. A.L. is grateful to Sheperd Doeleman, Michael D. Johnson, Achilleas Porfyriadis, and Yichen Shi for fruitful discussions. Many of these took place at the Black Hole Initiative at Harvard University, which is supported by a grant from the John Templeton Foundation.\n\\hfill\\includegraphics[scale=.02]{cow.pdf}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn this work, we study an interacting diffusive particle system in ${\\mathbb{R}^d}$ and the heat kernel type estimate for its semigroup. Let us give an informal introduction to the model and main result at first. We denote by $\\mathcal{M}_\\delta({\\mathbb{R}^d})$ the set of point measures of type $\\mu = \\sum_{i=1}^{\\infty} \\delta_{x_i}$ on ${\\mathbb{R}^d}$, which we call \\textit{configurations} of particles, by $\\mathcal F_U$ the $\\sigma$-algebra generated by $\\mu(V)$ tested with all the Borel set $V \\subset U$, and use the shorthand $\\mathcal F := \\mathcal F_{{\\mathbb{R}^d}}$. Let $\\Pr$ be the Poisson point process of density $\\rho \\in (0,\\infty)$ as the law for the configuration $\\mu$, with $\\mathbb{E}_{\\rho}, \\mathbb{V}\\!\\mathrm{ar}_{\\rho}$ the associated expectation and variance. 
Let $\\a_\\circ : \\mathcal{M}_\\delta({\\mathbb{R}^d}) \\to \\mathbb{R}^{d\\times d}_{sym}$ be an $\\mathcal F_{B_1}$-measurable symmetric matrix, i.e. one that only depends on the configuration in the unit ball $B_1$, satisfying $\\vert \\xi \\vert^2 \\leq \\xi \\cdot \\a_\\circ \\xi \\leq \\Lambda \\vert \\xi \\vert^2$ for any $\\xi \\in {\\mathbb{R}^d}$. Then let $\\a(\\mu, x) := \\a_\\circ(\\tau_{-x} \\mu)$ be the diffusion coefficient with local interaction at $x$, where $\\tau_{-x}$ denotes translation by $-x$. Denoting by ${\\mu_t := \\sum_{i=1}^{\\infty}\\delta_{x_{i,t}} }$ the configuration at time $t \\geq 0$, our model can be informally described as an infinite-dimensional system with local interaction such that every particle $x_{i,t}$ evolves as a diffusion associated to the divergence-form operator $-\\nabla \\cdot \\a(\\mu_t, x_{i,t}) \\nabla$. More precisely, it is a Markov process $\\left(\\Omega, (\\mathscr{F}_t)_{t \\geq 0}, \\Pr\\right)$ defined by the \\textit{Dirichlet form}\n\\begin{align}\\label{eq:Dirichlet}\n\\mathcal{E}^{\\a}(f, f) := \\mathbb{E}_{\\rho}\\left[ \\int_{{\\mathbb{R}^d}} \\nabla f(\\mu, x) \\cdot \\a(\\mu, x) \\nabla f(\\mu, x) \\, \\d \\mu(x)\\right],\n\\end{align}\nwhere the directional derivative ${\\mathbf{e}_k \\cdot \\nabla f(\\mu, x) := \\lim_{h \\to 0} \\frac{1}{h} (f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - f(\\mu))}$ along the canonical directions $\\{\\mathbf{e}_k\\}_{1 \\leq k \\leq d}$ is defined for a family of suitable functions and $x \\in \\supp(\\mu)$.\n\nOne may expect that the diffusion obeys a heat kernel estimate of the type established in the pioneering work of John Nash \\cite{nash1958continuity}, since every single particle is a diffusion of divergence type. This is the object of our main theorem. 
Let $u : \\mathcal{M}_\\delta({\\mathbb{R}^d}) \\to \\mathbb{R}$ be an $\\mathcal F$-measurable function, depending only on the configuration in the cube $Q_{l_u} := \\left[-\\frac{l_u}{2}, \\frac{l_u}{2}\\right]^d$, and smooth with respect to the transport of every particle (i.e. $u$ belongs to the function space $C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ defined in \\Cref{subsubsec:Ccinfty}), and let $u_t := \\mathbb{E}_{\\rho}[u(\\mu_t) \\vert \\mathscr{F}_0]$. Denoting $L^{\\infty}:= L^{\\infty}(\\mathcal{M}_\\delta({\\mathbb{R}^d}), \\mathcal F, \\Pr)$, we have the following estimate. \n\n\\begin{theorem}[Decay of variance]\\label{thm:main}\nThere exist two finite positive constants ${\\gamma := \\gamma(\\rho, d, \\Lambda)}$, ${C:=C(\\rho, d, \\Lambda)}$ such that for any $u \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ supported in $Q_{l_u}$, we have\n\\begin{align}\\label{eq:main}\n\\mathbb{V}\\!\\mathrm{ar}_{\\rho}[u_t] \\leq C( 1 + \\vert \\log t \\vert )^{\\gamma} \\left(\\frac{1 + l_u}{\\sqrt{t}}\\right)^d \\Vert u \\Vert^2_{L^{\\infty}}.\n\\end{align} \n\\end{theorem}\n\n\\smallskip\n\nInteracting particle systems remain an active research topic, and it is hard to list all the references. We refer to the excellent monographs \\cite{kipnis1998scaling, komorowski2012fluctuations, liggett2012interacting, spohn2012large} for a panorama of the field. 
In recent years, many works in probability and stochastic processes have illustrated diffusion universality in various models: a well-understood model is the \\textit{random conductance model}, see \\cite{biskup2011recent} for a survey; in particular, the \\textit{heat kernel bound} and \\textit{invariance principle} are established for percolation clusters in \\cite{berger2007quenched, mathieu2007quenched, sidoravicius2004quenched, mathieu2008quenched, barlow2004random, hambly2009parabolic, sapozhnikov2017random}; from the viewpoint of \\textit{stochastic homogenization}, \\textit{quantitative} results are also proved in a series of works \\cite{armstrong2016lipschitz, armstrong2016mesoscopic, armstrong2016quantitative, armstrong2017additive, gloria2011optimal,gloria2012optimal,gloria2014optimal,gloria2014regularity,gloria2015quantification}, and the monograph \\cite{armstrong2018quantitative}, and these techniques also apply in the percolation cluster setting, as shown in \\cite{armstrong2018elliptic, dario2018optimal, gu2019efficient, dario2019quantitative}; for the system of hard spheres, Bodineau, Gallagher and Saint-Raymond prove that Brownian motion is the Boltzmann-Grad limit of a tagged particle in \\cite{bodineau2015hard, bodineau2016brownian, bodineau2018derivation}. All these works lead us to believe that the model considered here should also exhibit diffusive behavior at large scales and long times.\n\n\\smallskip\n\nNotice that our model is of \\textit{non-gradient type}, and our result is established in the continuum configuration space rather than a function space on ${\\mathbb{R}^d}$. In previous works, the construction of similar diffusion processes is studied by Albeverio, Kondratiev and R{\\\"o}ckner using Dirichlet forms in \\cite{albeverio1996canonical, albeverio1996differential, albeverio1998analysis, albeverio1998analysis2}; see also the survey \\cite{rockner1998stochastic}. 
To the best of our knowledge, \\Cref{thm:main} does not appear in the literature. On the lattice side, let us mention one important work \\cite{janvresse1999relaxation} by Janvresse, Landim, Quastel and Yau, where the decay of variance is proved in the ${\\mathbb{Z}^d}$ zero range model, which is of \\textit{gradient type}. Since our research is inspired by \\cite{janvresse1999relaxation} and also uses some of their techniques, we point out our contributions in the following.\n\nFirstly, we give an explicit bound with respect to the size of the support of the local function $u$, which is uniform over $t$; the bound $\\left(\\frac{l_u}{\\sqrt{t}}\\right)^{d}$ captures the correct typical scale. For comparison, \\cite[Theorem 1.1]{janvresse1999relaxation} states the result\n\\begin{align}\\label{eq:Longtime}\n\\mathbb{V}\\!\\mathrm{ar}_{\\rho}[u_t] = \\frac{[\\tilde{u}'(\\rho)]^2 \\chi(\\rho)}{[8\\pi \\phi'(\\rho)t]^{\\frac{d}{2}}} + o\\left(t^{-\\frac{d}{2}}\\right),\n\\end{align}\nwhich should be understood as the long-time asymptotic behavior, and the term $o\\left(t^{-\\frac{d}{2}}\\right)$ is of type $(l_u)^{5d}t^{-\\left(\\frac{d}{2}+\\epsilon\\right)}$ if one carefully tracks the dependence on $l_u$ in the steps of the proof of \\cite[Theorem 1.1]{janvresse1999relaxation}. To get the typical scale $\\left(\\frac{l_u}{\\sqrt{t}}\\right)^{d}$, we make a combinatorial improvement in the intermediate coarse-graining argument in \\cref{eq:PerTwoa}; see also \\Cref{fig:coarse-graining} for illustration. On the other hand, we also wonder whether one could establish a result similar to \\cref{eq:Longtime} identifying the diffusion constant in the long-time behavior. This is an interesting question and a natural direction for future research, but a major difficulty here is to characterize the effective diffusion constant, because the zero range model satisfies the \\textit{gradient condition} while our model does not. 
We believe that it is related to the \\textit{bulk diffusion coefficient} and the equilibrium density fluctuation in the lattice nongradient model as indicated in \\cite[eq.(2.14), Proposition 2.1]{spohn2012large}. \n\nSecondly, we extend a localization estimate to the continuum configuration space: in the same context as \\Cref{thm:main}, and recalling that $\\mathcal F_{Q_K}$ represents the information of $\\mu$ in the cube ${Q_K = \\left[-\\frac{K}{2}, \\frac{K}{2}\\right]^d}$, we define ${\\mathcal{A}}_K u_t := \\mathbb{E}_{\\rho}[u_t \\vert \\mathcal F_{Q_K}]$, and show that for every ${t \\geq \\max\\left\\{(l_u)^2, 16 \\Lambda^2 \\right\\}}$ and $K \\geq \\sqrt{t}$ that\n\\begin{equation}\\label{eq:LocalIntro}\n\\mathbb{E}_{\\rho}\\left[(u_t - {\\mathcal{A}}_K u_t)^2\\right] \\leq C(\\Lambda)\\exp\\left(- \\frac{K}{\\sqrt{t}} \\right)\\mathbb{E}_{\\rho}\\left[u^2\\right].\n\\end{equation}\nThis is a key estimate appearing in \\cite[Proposition 3.1]{janvresse1999relaxation}, and is also natural: since $\\sqrt{t}$ is the typical scale of diffusion, when $K \\gg \\sqrt{t}$ one gets a very good approximation in \\cref{eq:LocalIntro}. Its generalization to the continuum configuration space is non-trivial, since in the proof of \\cite[Proposition 3.1]{janvresse1999relaxation}, one tests the Dirichlet form with ${\\mathcal{A}}_K u_t$, but in our model it is not in the domain $\\mathcal{D}(\\mathcal{E}^{\\a})$ of the Dirichlet form, and one cannot put ${\\mathcal{A}}_K u_t$ directly in the Dirichlet form \\cref{eq:Dirichlet}. This is one essential difference between our model and a lattice model. To resolve this, we apply some regularization steps, which we present in \\Cref{thm:localization}. \n\nFinally, we point out a minor error in the proof in \\cite{janvresse1999relaxation} and fix it when revisiting the argument. This will be presented in \\Cref{subsec:Outline} and \\Cref{rmk:Error}.\n\n\\smallskip\n\nThe rest of this article is organized as follows. 
In \\Cref{sec:Pre}, we introduce all the notation and give the rigorous construction of our model. \\Cref{sec:Strategy} is the main part of the proof of \\Cref{thm:main}, where \\Cref{subsec:Outline} gives its outline and we fix the minor error in \\cite{janvresse1999relaxation} mentioned above. The proofs of some technical estimates used in \\Cref{sec:Strategy} are given in the last two sections: \\Cref{sec:Localization} proves the localization estimate \\cref{eq:LocalIntro} in the continuum configuration space, and \\Cref{sec:Toolbox} serves as a toolbox of other estimates, including a spectral inequality, a perturbation estimate and a calculation of the entropy.\n\n\\section{Preliminaries}\\label{sec:Pre}\n\\subsection{Notations}\\label{subsec:Notations}\nIn this part, we introduce the notations used in this paper. We write~${\\mathbb{R}^d}$ for the $d$-dimensional Euclidean space, $B_r(x)$ for the ball of radius $r$ centered at $x$, and $Q_s(x) := x + \\left[- \\frac{s}{2}, \\frac{s}{2}\\right]^d$ for the cube of edge length $s$ centered at $x$. We also write $B_r$ and $Q_s$ respectively as shorthand for $B_r(0)$ and $Q_s(0)$. The lattice set is defined by $ \\mathcal Z_s := {\\mathbb{Z}^d} \\cap Q_s$.\n\\subsubsection{Continuum configuration space}\nFor any metric space $(E,d)$, we denote by $\\mathcal M(E)$ the set of Radon measures on $E$. For every Borel set $U \\subset E$, we denote by $\\mathcal F_U$ the smallest {$\\sigma$-algebra} such that for every Borel subset $V \\subset U$, the mapping ${\\mu \\in \\mathcal M(E) \\mapsto \\mu(V)}$ is measurable. For an $\\mathcal F_U$-measurable function $f : \\mathcal M(E) \\to \\mathbb{R}$, we say that $f$ is supported in $U$, i.e. $\\supp(f) \\subset U$. 
In the case where $\\mu \\in \\mathcal M(E)$ is of finite total mass, we write\n\\begin{equation} \n\\label{e.def.fint}\n\\fint f \\, \\d \\mu := \\frac{\\int f \\, \\d \\mu}{\\int \\d \\mu}.\n\\end{equation}\n\nWe also define the collection of point measures $\\mathcal{M}_\\delta(E) \\subset \\mathcal M(E)$ by\n\\begin{align*}\n\\mathcal{M}_\\delta (E) := \\left\\{ \\mu \\in \\mathcal M(E) : \\mu = \\sum_{i \\in I} \\delta_{x_i} \\text{ for some } I \\text{ finite or countable}, \\text{ and } x_i \\in E \\text{ for any } i \\in I \\right\\},\n\\end{align*}\nwhich serves as the \\textit{continuum configuration space}, where each Dirac measure stands for the position of a particle. In this work we will mainly focus on the Euclidean space ${\\mathbb{R}^d}$ and its associated point measure space $\\mathcal{M}_\\delta({\\mathbb{R}^d})$, and use the shorthand notation $\\mathcal F := \\mathcal F_{{\\mathbb{R}^d}}$.\n\nWe define two operations for elements in $\\mathcal{M}_\\delta({\\mathbb{R}^d})$: \\textit{restriction} and \\textit{transport}.\n\\begin{itemize}\n\\item For every $\\mu \\in \\mathcal{M}_\\delta({\\mathbb{R}^d})$ and Borel set $U \\subset {\\mathbb{R}^d}$, we define the restriction operation $\\mu \\mres U$, such that for every Borel set $V \\subset {\\mathbb{R}^d}$, $(\\mu \\mres U) (V) = \\mu (U \\cap V)$. 
Then for a function $f : \\mathcal{M}_\\delta({\\mathbb{R}^d}) \\to \\mathbb{R}$ which is $\\mathcal F_U$-measurable, we have $f(\\mu) = f(\\mu \\mres U)$.\n\\item The transport of a set is defined as \n\\begin{align*}\n\\forall h \\in {\\mathbb{R}^d}, U \\subset {\\mathbb{R}^d}, \\tau_h U := \\{y + h: y \\in U\\}.\n\\end{align*}\nThen for every $\\mu \\in \\mathcal{M}_\\delta({\\mathbb{R}^d})$ and $h \\in {\\mathbb{R}^d}$, we define the transport operation $\\tau_h \\mu$ such that for every Borel set $U$, we have \n\\begin{align}\\label{eq:transportMu}\n\\tau_h \\mu (U) := \\mu(\\tau_{-h} U).\n\\end{align} \nFor $f$ an $\\mathcal F_V$-measurable function, we also define the transport operation $\\tau_h f$ as a pullback:\n\\begin{align}\\label{eq:transportFunction}\n\\tau_h f(\\mu) := f(\\tau_{-h} \\mu),\n\\end{align} \nwhich is an $\\mathcal F_{\\tau_h V}$-measurable function.\n\\end{itemize}\nNotice that the restriction operation can be defined similarly in $\\mathcal M(E)$ for a metric space, but the transport operation requires that $E$ is at least a vector space.\n\n\\smallskip\n\nWe fix $\\rho > 0$ once and for all, and define $\\Pr$, a probability measure on $(\\mathcal{M}_\\delta({\\mathbb{R}^d}),\\mathcal F)$, to be the Poisson measure on ${\\mathbb{R}^d}$ with density $\\rho$ (see \\cite{kingman2005p}). We denote by $\\mathbb{E}_{\\rho}$ the expectation and by $\\mathbb{V}\\!\\mathrm{ar}_{\\rho}$ the variance associated with the law~$\\Pr$, and by $\\mu$ the canonical $\\mathcal{M}_\\delta({\\mathbb{R}^d})$-valued random variable on the probability space $(\\mathcal{M}_\\delta({\\mathbb{R}^d}), \\mathcal F, \\Pr)$. 
In the case where $U \\subset {\\mathbb{R}^d}$ is a bounded Borel set and $f$ is an $\\mathcal F_{U}$-measurable function, we can rewrite the expectation $\\mathbb{E}_{\\rho}[f]$ as the explicit expression\n\\begin{equation} \n\\label{e.bulky}\n\\mathbb{E}_{\\rho} \\left[ f \\right] = \\sum_{N = 0}^{+\\infty} e^{-\\rho|U|} \\frac{(\\rho |U|)^N}{N!} \\fint_{U^N} f \\left( \\sum_{i = 1}^N \\delta_{x_i} \\right) \\, \\d x_1 \\cdots \\d x_N. \n\\end{equation}\nFor instance, for every bounded Borel set $U \\subset {\\mathbb{R}^d}$ and bounded measurable function $g : U \\to \\mathbb{R}$, we can write\n\\begin{equation*} \n\\mathbb{E}_{\\rho} \\left[ \\int_U g(x) \\, \\d \\mu(x) \\right] = \\rho \\int_U g(x) \\, \\d x.\n\\end{equation*}\nNotice that the measure $\\mu$ is a Poisson point process under $\\Pr$. In particular, the measures $\\mu \\mres U$ and $\\mu \\mres ({\\mathbb{R}^d} \\setminus U)$ are independent, and the conditional expectation $\\mathbb{E}_{\\rho} \\left[ \\cdot \\vert \\mathcal F_{({\\mathbb{R}^d} \\setminus U)}\\right]$ can thus be described equivalently as an averaging over the law of~$\\mu \\mres U$.\n\nFor any $1 \\leq p < \\infty$, we denote by $L^p(\\mathcal{M}_\\delta(U))$ the set of $\\mathcal F_U$-measurable functions $f : \\mathcal{M}_\\delta(U) \\to \\mathbb{R}$ such that the norm\n\\begin{equation*} \n\\|f \\|_{L^p(\\mathcal{M}_\\delta(U))} := \\left(\\mathbb{E}_{\\rho} \\left[ |f|^p \\right] \\right)^\\frac 1 p \n\\end{equation*}\nis finite, and we write $L^p$ as shorthand for $L^p(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$. We denote by $L^{\\infty}(\\mathcal{M}_\\delta(U))$ the analogous space, with norm given by the essential supremum under $\\Pr$. \n\n\n\\subsubsection{Derivative and $C_c^{\\infty}(\\mathcal{M}_\\delta(U))$}\\label{subsubsec:Ccinfty}\nWe define the directional derivative for an $\\mathcal F_{U}$-measurable function $f : \\mathcal{M}_\\delta(U) \\to \\mathbb{R}$. 
Let $\\{\\mathbf{e}_k\\}_{1 \\leq k \\leq d}$ be the $d$ canonical directions. For $x \\in \\supp(\\mu)$, we define \n\\begin{align*}\n\\partial_k f(\\mu, x) := \\lim_{h \\to 0} \\frac{1}{h} (f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - f(\\mu)),\n\\end{align*}\nif the limit exists, and the gradient as the vector \n\\begin{align*}\n{\\nabla f(\\mu, x) := (\\partial_1 f(\\mu, x), \\partial_2 f(\\mu, x), \\cdots, \\partial_d f(\\mu, x))}.\n\\end{align*}\nHigher-order derivatives could be defined iteratively, but here we use a more natural approach: for every Borel set $U \\subset {\\mathbb{R}^d}$ and $N \\in \\mathbb{N}$, let $\\mathcal{M}_\\delta(U, N) \\subset \\mathcal{M}_\\delta({\\mathbb{R}^d})$ be defined as\n\\begin{align*}\n\\mathcal{M}_\\delta(U, N) := \\left\\{\\mu \\in \\mathcal{M}_\\delta({\\mathbb{R}^d}) : \\mu = \\sum_{i=1}^N \\delta_{x_i}, x_i \\in U \\text{ for every } 1 \\leq i \\leq N \\right\\}.\n\\end{align*} \nThen a function $f : \\mathcal M_\\delta(U,N) \\to \\mathbb{R}$ can be identified with a function $\\widetilde f : U^N \\to \\mathbb{R}$ by setting\n\\begin{equation} \n\\label{e.def.tdf}\n\\widetilde f(x) = \\widetilde f(x_1,\\ldots,x_N) := f \\left( \\sum_{i = 1}^N \\delta_{x_i} \\right) .\n\\end{equation}\nThe function $\\widetilde f$ is invariant under permutations of its $N$ coordinates. Conversely, any function satisfying this symmetry can be identified with a function from $\\mathcal{M}_\\delta(U,N)$ to $\\mathbb{R}$. We denote by $C^\\infty(\\mathcal{M}_\\delta(U,N))$ the set of functions $f : \\mathcal{M}_\\delta(U,N) \\to \\mathbb{R}$ such that $\\widetilde f$ is infinitely differentiable. 
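The identification \cref{e.def.tdf} can be illustrated by a small numerical sketch (purely illustrative: the observable $g(x) = e^{-x^2}$ and the representation of $\mu$ as a list of atoms are our own choices, not part of the formal development):

```python
import math

# Illustrative observable: f(mu) = integral of g d(mu), with g(x) = exp(-x^2).
# A point measure mu = sum of Dirac masses is represented as a list of atoms.
def f(mu):
    return sum(math.exp(-x * x) for x in mu)

# Identification of f with a symmetric function on U^N:
# f_tilde(x_1, ..., x_N) = f(delta_{x_1} + ... + delta_{x_N})
def f_tilde(*xs):
    return f(list(xs))

# f_tilde is invariant under permutations of its N coordinates
assert abs(f_tilde(0.3, -1.2, 2.0) - f_tilde(2.0, 0.3, -1.2)) < 1e-12
```

The permutation symmetry of $\widetilde f$ reflects the fact that a point measure carries no ordering of its atoms.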
\n For every $f \\in C^\\infty(\\mathcal{M}_\\delta(U,N))$ and $x_1,\\ldots,x_N \\in U$, the gradient at $x_1$ coincides with the usual gradient with respect to the coordinate $x_1$:\n\\begin{equation}\\label{def.grad.mu}\n\\nabla f \\left( \\sum_{i = 1}^N \\delta_{x_i} ,x_1 \\right) = \\nabla_{x_1} \\widetilde f(x_1,\\ldots,x_N).\n\\end{equation}\n\n\nWe denote by $C^\\infty_c(\\mathcal{M}_\\delta(U))$ the set of functions $f : \\mathcal{M}_\\delta(U) \\to \\mathbb{R}$ that satisfy:\n\\begin{enumerate} \n\\item there exists a compact Borel set $V \\subset U$ such that $f$ is $\\mathcal F_V$-measurable; \n\\item for every $N \\in \\mathbb{N}$,\n\\begin{equation*} \n\\text{the mapping } \\left\\{\n\\begin{array}{rcl} \n\\mathcal{M}_\\delta(U,N) & \\to & \\mathbb{R} \\\\\n\\mu & \\mapsto & f(\\mu)\n\\end{array}\n\\right.\n\\mbox{belongs to $C^\\infty(\\mathcal{M}_\\delta(U,N))$;}\n\\end{equation*}\n\\item the function $f$ is bounded. \n\\end{enumerate}\nHeuristically, $f \\in C^\\infty_c(\\mathcal{M}_\\delta(U))$ is a uniformly bounded function that depends only on the configuration inside a compact subset $V \\subset U$, so that $f(\\mu) = f(\\mu \\mres V)$; for each fixed number of particles in $V$, it can be identified with a $C^{\\infty}$ function of finitely many coordinates. 
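The identity \cref{def.grad.mu} can be checked numerically on a toy observable (an illustrative sketch with a hypothetical $g$; atoms of $\mu$ are stored as tuples of coordinates):

```python
import math

# Toy observable f(mu) = sum over atoms (x, y) of g(x, y), with g(x, y) = sin(x) * y.
def f(points):
    return sum(math.sin(x) * y for (x, y) in points)

# Directional derivative of f at the atom points[i] in direction e_k:
# move that single atom by h * e_k, as in the definition of the derivative.
def partial_k(points, i, k, h=1e-6):
    moved = [list(p) for p in points]
    moved[i][k] += h
    return (f(moved) - f(points)) / h

mu = [(0.5, 1.0), (2.0, -1.0)]
# for this f, the gradient at an atom (x, y) is (cos(x) * y, sin(x)),
# i.e. the usual gradient of g with respect to that atom's coordinates
assert abs(partial_k(mu, 0, 0) - math.cos(0.5) * 1.0) < 1e-4
assert abs(partial_k(mu, 1, 1) - math.sin(2.0)) < 1e-4
```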
\n\n\\subsubsection{Sobolev space on $\\mathcal{M}_\\delta(U)$}\nWe define the $H^1(\\mathcal{M}_\\delta(U))$ norm by\n\\begin{equation*} \n\\|f\\|_{H^1(\\mathcal{M}_\\delta(U))} := \\left( \\|f\\|_{L^2(\\mathcal{M}_\\delta(U))}^2 + \\mathbb{E}_{\\rho} \\left[ \\int_U |\\nabla f|^2 \\, \\d \\mu \\right] \\right)^\\frac 1 2,\n\\end{equation*}\nand let $H^1_0(\\mathcal{M}_\\delta(U))$ denote the completion with respect to this norm of the space \n\\begin{equation*} \n\\left\\{f \\in C^\\infty_c(\\mathcal{M}_\\delta(U)) \\ : \\ \\|f\\|_{H^1(\\mathcal{M}_\\delta(U))} < \\infty\\right\\}.\n\\end{equation*}\n\n\\subsection{Construction of the model}\\label{subsec:Construction}\n\\subsubsection{Diffusion coefficient}\nIn this part, we define the coefficient field of the diffusion. We give ourselves a symmetric matrix-valued function $\\a_\\circ : \\mathcal{M}_\\delta({\\mathbb{R}^d}) \\to \\mathbb{R}^{d \\times d}_{sym}$ which satisfies the following properties:\n\\begin{itemize} \n\\item uniform ellipticity: there exists $\\Lambda \\in [1,+\\infty)$ such that for every $\\mu \\in \\mathcal{M}_\\delta({\\mathbb{R}^d})$ and every $\\xi \\in {\\mathbb{R}^d}$, \n\\begin{equation} \n\\label{e.ellipticity}\n\\vert \\xi \\vert^2 \\le \\xi \\cdot \\a_\\circ(\\mu) \\xi \\le \\Lambda \\vert \\xi \\vert^2\\, ;\n\\end{equation}\n\\item locality: for every $\\mu \\in \\mathcal{M}_\\delta({\\mathbb{R}^d})$, $\\a_\\circ(\\mu) = \\a_\\circ\\left(\\mu \\mres B_1\\right)$.\n\\end{itemize}\nWe extend $\\a_\\circ$ by stationarity using the transport operation defined in \\cref{eq:transportFunction}: for every $\\mu \\in \\mathcal{M}_\\delta({\\mathbb{R}^d})$ and $x \\in {\\mathbb{R}^d}$,\n\\begin{equation*} \n\\a(\\mu,x) := \\tau_x \\a_{\\circ}(\\mu) = \\a_\\circ(\\tau_{-x}\\mu).\n\\end{equation*}\nA typical example of a coefficient field $\\a$ of interest is $\\a_\\circ(\\mu) := \\left(1 + \\mathbf{1}_{\\{ \\mu(B_1) = 1 \\}}\\right)\\mathbf{Id}$,\nwhose extension is given by $\\a(\\mu,x) := \\left(1 + \\mathbf{1}_{\\{ \\mu(B_1(x)) = 1 \\}}\\right) \\mathbf{Id}$.\nIn words, for $x \\in \\supp(\\mu)$, the quantity $\\a(\\mu,x)$ is equal to $2$ whenever there is no other point than $x$ in the unit ball around $x$, and is equal to $1$ otherwise.\n\n\\subsubsection{Markov process defined by a Dirichlet form}\nIn this part, we construct our infinite particle system on $\\mathcal{M}_\\delta({\\mathbb{R}^d})$ via a Dirichlet form (see \\cite{fukushima2010dirichlet, ma2012introduction} for the notation). We first define the non-negative symmetric bilinear form \n\\begin{align*}\n\\mathcal{E}^{\\a}(f, g) := \\mathbb{E}_{\\rho}\\left[ \\int_{{\\mathbb{R}^d}} \\nabla f(\\mu, x) \\cdot \\a(\\mu, x) \\nabla g(\\mu, x) \\, \\d \\mu(x)\\right],\n\\end{align*}\nwith \\textit{domain} $\\mathcal{D}(\\mathcal{E}^{\\a})$ given by \n\\begin{align*}\n\\mathcal{D}(\\mathcal{E}^{\\a}) := H^1_0(\\mathcal{M}_\\delta({\\mathbb{R}^d})).\n\\end{align*}\nWe also write $\\mathcal{E}^{\\a}(f) := \\mathcal{E}^{\\a}(f, f)$ for short. It is clear that $\\mathcal{E}^{\\a}$ is \\textit{closed} and \\textit{Markovian}, hence a \\textit{Dirichlet form}; it is therefore associated with a generator $\\L$ through the correspondence \n\\begin{align*}\n\\mathcal{E}^{\\a}(f,g) = \\mathbb{E}_{\\rho}\\left[f (-\\L)g\\right], \\qquad \\mathcal{D}(\\mathcal{E}^{\\a}) = \\mathcal{D}(-\\L),\n\\end{align*}\nand with an $L^2(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ strongly continuous Markov semigroup $(P_t)_{t \\geq 0}$. 
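Returning to the example coefficient field $\a(\mu,x) = (1 + \mathbf{1}_{\{\mu(B_1(x)) = 1\}})\mathbf{Id}$, its locality can be sketched in code (a toy implementation under the assumption that the configuration is a finite list of points of the plane):

```python
import math

# Toy version of the example coefficient a(mu, x) = (1 + 1{mu(B_1(x)) = 1}) Id,
# with the configuration mu given as a finite list of points of R^2.
def a(points, x, dim=2):
    # mu(B_1(x)) counts all atoms within distance 1 of x (including x itself)
    count = sum(1 for p in points if math.dist(p, x) < 1.0)
    scale = 2 if count == 1 else 1
    return [[scale if i == j else 0 for j in range(dim)] for i in range(dim)]

mu = [(0.0, 0.0), (5.0, 5.0), (5.3, 5.0)]
assert a(mu, (0.0, 0.0)) == [[2, 0], [0, 2]]  # x is alone in its unit ball
assert a(mu, (5.0, 5.0)) == [[1, 0], [0, 1]]  # a neighbour lies within distance 1
```

Only the atoms in the unit ball around $x$ enter the computation, which is precisely the locality property combined with the extension by stationarity.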
We denote by $(\\mathscr{F}_t)_{t \\geq 0}$ the associated filtration and by $(\\mu_t)_{t \\geq 0}$ the associated $\\mathcal{M}_\\delta({\\mathbb{R}^d})$-valued Markov process, which represents the configuration of the particles. Then, for any $u \\in L^2(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$, \n\\begin{align*}\nu_t(\\mu) := P_t u(\\mu) = \\mathbb{E}_{\\rho}[u(\\mu_t) \\vert \\mathscr{F}_0],\n\\end{align*}\nis an element of $\\mathcal{D}(\\mathcal{E}^{\\a})$ and is characterized by the weak parabolic equation on $\\mathcal{M}_\\delta({\\mathbb{R}^d})$: for any $v \\in \\mathcal{D}(\\mathcal{E}^{\\a})$,\n\\begin{align}\\label{eq:Evolution}\n\\mathbb{E}_{\\rho}[u_t v] - \\mathbb{E}_{\\rho}[u v] = - \\int_{0}^t \\mathcal{E}^{\\a}(u_s, v) \\, \\d s.\n\\end{align}\nFinally, we remark that the mean of $u_t$ is conserved: testing \\cref{eq:Evolution} with the constant function $1$ gives \n\\begin{align}\\label{eq:uAverage}\n\\mathbb{E}_{\\rho}[u_t] - \\mathbb{E}_{\\rho}[u] = - \\int_{0}^t \\mathbb{E}_{\\rho}\\left[ \\int_{{\\mathbb{R}^d}} \\nabla 1 \\cdot \\a(\\mu, x) \\nabla u_s(\\mu, x) \\, \\d \\mu(x)\\right] \\, \\d s = 0.\n\\end{align}\nIn this work, we focus on the quantitative properties of $P_t$; see \\cite{rockner1998stochastic} for more details on the trajectory properties of similar types of processes. \n\n\\subsection{A solvable case}\\label{subsec:Solvable}\nWe present a solvable model to illustrate that the behavior of this process is close to a diffusion, and that the rate of decay obtained is the best one can expect.\n\nIn the following, we suppose that $\\a = \\frac{1}{2}\\mathbf{Id}$, which means that every particle evolves as an independent Brownian motion, i.e. 
$\\mu = \\sum_{i=1}^{\\infty} \\delta_{x_i} $ and $\\mu_t = \\sum_{i=1}^{\\infty} \\delta_{B^{(i)}_t}$, where $\\left(B^{(i)}_t\\right)_{t \\geq 0}$ is a Brownian motion issued from $x_i$ and the family $\\left(B^{(i)}_{\\cdot}\\right)_{i\\in \\mathbb{N}}$ is independent.\n\n\\begin{example}\nLet $u(\\mu) := \\int_{{\\mathbb{R}^d}} f \\, \\d \\mu $ with $f \\in C^{\\infty}_c({\\mathbb{R}^d})$. \nIn this case, we have\n\\begin{align*}\nu_t(\\mu) = P_t u(\\mu) &= \\mathbb{E}_{\\rho}\\left[ u(\\mu_t) \\vert \\mathscr{F}_0 \\right] = \\mathbb{E}_{\\rho}\\left[ \\left. \\sum_{i \\in \\mathbb{N}} f\\left(B^{(i)}_t\\right) \\right\\vert \\mathscr{F}_0 \\right] = \\int_{{\\mathbb{R}^d}} f_t(x) \\, \\d \\mu(x),\n\\end{align*} \nwhere $f_t \\in C^{\\infty}({\\mathbb{R}^d})$ solves the Cauchy problem for the standard heat equation: with the heat kernel ${\\Phi_t(x) = \\frac{1}{(2\\pi t)^{\\frac{d}{2}}} \\exp\\left(- \\frac{\\vert x \\vert^2}{2t}\\right)}$, we have $f_t(x) = \\Phi_t \\star f(x)$. Then we use the variance formula for the Poisson point process:\n\\begin{align*}\n\\mathbb{V}\\!\\mathrm{ar}_{\\rho}\\left[u \\right] &= \\rho\\int_{{\\mathbb{R}^d}} f^2(x) \\, \\d x = \\rho\\Vert f \\Vert^2_{L^2\\left({\\mathbb{R}^d}\\right)}, \\\\\n\\mathbb{V}\\!\\mathrm{ar}_{\\rho}\\left[u_t \\right] &= \\rho\\int_{{\\mathbb{R}^d}} f_t^2(x) \\, \\d x = \\rho\\Vert f_t \\Vert^2_{L^2\\left({\\mathbb{R}^d}\\right)}.\n\\end{align*}\nBy the heat kernel estimate for the standard heat equation, we know that ${\\Vert f_t \\Vert^2_{L^2({\\mathbb{R}^d})} \\simeq C(d)t^{-\\frac{d}{2}} \\Vert f \\Vert^2_{L^1({\\mathbb{R}^d})}}$, so the rate $t^{-\\frac{d}{2}}$ is the best one can obtain. Moreover, if we take $f = \\Ind{Q_r}$ and ${t=r^{2(1-\\epsilon)}}$ for a small $\\epsilon >0$, then we see that the typical scale of diffusion is a ball of size $r^{1-\\epsilon}$. 
So for every $x \\in Q_{r\\left(1-r^{-\\frac{\\epsilon}{2}}\\right)}$, the value satisfies $f_t(x) \\simeq 1 - e^{-r^{\\frac{\\epsilon}{2}}}$, and we have \n\\begin{align*}\n\\mathbb{V}\\!\\mathrm{ar}_{\\rho}\\left[u_t \\right] = \\rho\\int_{{\\mathbb{R}^d}} f_t^2(x) \\, \\d x \\geq \\rho r^d(1-r^{-\\frac{\\epsilon}{2}}) = (1-r^{-\\frac{\\epsilon}{2}})\\mathbb{V}\\!\\mathrm{ar}_{\\rho}\\left[u \\right].\n\\end{align*}\nThis illustrates that before the time scale $t = r^2$ the decay is very slow, so the factor $\\left(\\frac{l_u}{\\sqrt{t}}\\right)^d$ in \\Cref{thm:main} is reasonable.\n\\end{example}\n\n\n\n\\section{Strategy of proof}\\label{sec:Strategy}\nIn this part, we state the strategy of the proof of \\Cref{thm:main}. We give a short outline in \\Cref{subsec:Outline}, which can be seen as an ``approximation-variance decomposition\", and then focus on the approximation term in \\Cref{subsec:Approximation}. Several technical estimates will be used in this procedure, and their proofs are postponed to \\Cref{sec:Localization} and \\Cref{sec:Toolbox}.\n\\subsection{Outline}\\label{subsec:Outline}\nAs mentioned, this work is inspired by \\cite{janvresse1999relaxation}, and we revisit the strategy here.\nWe pick $u \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ supported in $Q_{l_u}$ and centered, i.e. $\\mathbb{E}_{\\rho}[u] = 0$, which implies $\\mathbb{E}_{\\rho}[u_t] = 0$ by \\cref{eq:uAverage}. Then we set a sequence of time scales $\\{t_n\\}_{n\\geq 0}, t_{n+1} = Rt_n$, where $R > 1$ is a scale factor to be fixed later. 
It suffices to prove \\cref{eq:main} for every $t_n$: for $t \\in [t_n, t_{n+1}]$, one can use the $L^2$ decay to get \n\\begin{align*}\n\\mathbb{E}_{\\rho}[(u_t)^2] \\leq \\mathbb{E}_{\\rho}[(u_{t_n})^2] \\leq C(1+\\log(t_n))^{\\gamma}\\left(\\frac{1+l_u}{\\sqrt{t_n}}\\right)^d \\Vert u \\Vert^2_{L^{\\infty}} \\leq C R^{\\frac{d}{2}}(1 + \\log t)^{\\gamma}\\left(\\frac{1 + l_u}{\\sqrt{t}}\\right)^d \\Vert u \\Vert^2_{L^{\\infty}},\n\\end{align*}\nand by adjusting the constant $C$ one concludes the main theorem. Another ingredient of the proof is an ``approximation-variance type decomposition\": \n\\begin{equation}\\label{eq:defTwoTerms}\n\\begin{split}\nu_t &= v_t + w_t, \\\\\nv_t &:= u_t - \\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K} \\tau_{y} u_t,\\\\\nw_t &:= \\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K} \\tau_{y} u_t,\n\\end{split}\n\\end{equation}\nwhere we recall that $\\mathcal Z_K = Q_K \\cap {\\mathbb{Z}^d}$ is the set of lattice points at scale $K$. The philosophy of this decomposition is that, after a long time, the information at the local scale $K$ is well mixed; thus the spatial average $w_t$ is a good approximation of $u_t$, and $v_t$ is the error term. 
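The variance gain expected from the spatial average $w_t$ comes from a pair-counting argument: correlations $\mathbb{E}_{\rho}[(\tau_{x-y}u)u]$ vanish for $\vert x - y \vert \geq l_u$. A toy computation of this counting (illustrative only, with $\mathbb{E}_{\rho}[u^2]$ normalized to $1$ and the sup-norm used for $\vert x - y \vert$):

```python
from itertools import product

# E[(1/|Z_K| sum_y tau_y u)^2] expands into (1/|Z_K|^2) sum_{x,y} E[(tau_{x-y} u) u];
# the correlation vanishes as soon as |x - y| >= l_u, and each surviving
# term is bounded by E[u^2] (normalized to 1 here).
def pair_bound(K, l_u, d):
    Z = list(product(range(K), repeat=d))
    pairs = sum(1 for x in Z for y in Z
                if max(abs(a - b) for a, b in zip(x, y)) < l_u)
    return pairs / len(Z) ** 2

# the resulting bound is of order (l_u / K)^d, here with a small constant
assert pair_bound(K=20, l_u=2, d=2) <= (2 * 2 / 20) ** 2
```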
Thus, the controls of the two terms $w_t$ and $v_t$ in \\Cref{prop:Variance} and \\Cref{prop:Approximation} prove the main theorem, \\Cref{thm:main}.\n\n\\begin{proposition}\\label{prop:Variance}\nThere exists a finite positive number $C := C(d)$ such that for any ${u \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))}$ with mean zero, supported in $Q_{l_u}$, and for any $K \\geq l_u$, we have \n\\begin{align}\\label{eq:Variance}\n\\mathbb{E}_{\\rho}\\left[ \\left( \\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K} \\tau_{y} u_t \\right)^2\\right] \\leq C(d)\\left(\\frac{l_u}{K}\\right)^d \\mathbb{E}_{\\rho}[u^2].\n\\end{align} \n\\end{proposition}\n\\begin{proof}\nWe can estimate this quantity simply by the $L^2$ decay:\n\\begin{multline*}\n\\mathbb{E}_{\\rho}[(w_t)^2] = \\mathbb{E}_{\\rho}\\left[\\left( P_t\\left(\\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K} \\tau_{y}u \\right)\\right)^2\\right] \n\\leq \\mathbb{E}_{\\rho}\\left[\\left(\\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K} \\tau_{y}u \\right)^2\\right] \n= \\frac{1}{\\vert \\mathcal Z_K \\vert^2} \\sum_{x, y \\in \\mathcal Z_K} \\mathbb{E}_{\\rho}\\left[ (\\tau_{x-y} u) u\\right].\n\\end{multline*}\nFor $\\vert x-y \\vert \\geq l_u$, the functions $\\tau_{x-y}u$ and $u$ are independent and centered, so $\\mathbb{E}_{\\rho}\\left[ (\\tau_{x-y} u) u\\right] = 0$. The number of remaining pairs is at most $C(d)(l_u)^d \\vert \\mathcal Z_K \\vert$, and each of them is bounded by $\\mathbb{E}_{\\rho}[u^2]$ thanks to the Cauchy-Schwarz inequality. 
This concludes \\cref{eq:Variance}.\n\\end{proof}\n\n\\begin{proposition}\\label{prop:Approximation}\nThere exist two finite positive numbers $C := C(d, \\rho), \\gamma := \\gamma(d, \\rho)$ such that for any ${u \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))}$ supported in $Q_{l_u}$, any $K \\geq l_u$, and $v_t$ defined in \\cref{eq:defTwoTerms}, for ${t_n \\geq \\max\\left\\{l_u^2, 16 \\Lambda^2 \\right\\}}$ and $t_{n+1} = Rt_n$ with $R>1$, we have \n\\begin{align}\\label{eq:Approximation}\n(t_{n+1})^{\\frac{d+2}{2}} \\mathbb{E}_{\\rho}[(v_{t_{n+1}})^2] - (t_n)^{\\frac{d+2}{2}} \\mathbb{E}_{\\rho}[(v_{t_n})^2] \\leq C (\\log(t_{n+1}))^{\\gamma} K^2 (l_u)^{d} \\Vert u\\Vert^2_{L^{\\infty}} + \\mathbb{E}_{\\rho}[u^2].\n\\end{align} \n\\end{proposition}\n\\begin{proof}[Proof of \\Cref{thm:main} from \\Cref{prop:Approximation} and \\Cref{prop:Variance}]\nIn the case $t \\leq (l_u)^2$ or ${t \\leq 16 \\Lambda^2}$, the right-hand side of \\cref{eq:main} is larger than $\\mathbb{E}_{\\rho}[u^2]$ and we can use the $L^2$ decay to prove the theorem. 
Thus, without loss of generality, we set ${t_0 := \\max\\left\\{l_u^2, 16 \\Lambda^2 \\right\\}}$ and apply \\cref{eq:Approximation} with the choice $K:= \\sqrt{t_{n+1}}$ to obtain\n\\begin{equation}\\label{eq:FixError}\n\\begin{split}\n &\\mathbb{E}_{\\rho}[(u_{t_{n+1}})^2] \\\\\n\\leq & 2\\mathbb{E}_{\\rho}[(v_{t_{n+1}})^2] + 2\\mathbb{E}_{\\rho}[(w_{t_{n+1}})^2] \\\\\n\\leq & 2 \\left(\\frac{t_n}{t_{n+1}}\\right)^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[(v_{t_{n}})^2] + 2 (t_{n+1})^{-\\frac{d+2}{2}}\\left(C (\\log(t_{n+1}))^{\\gamma} t_{n+1} (l_u)^{d} \\Vert u\\Vert^2_{L^{\\infty}} + \\mathbb{E}_{\\rho}[u^2]\\right) \\\\\n& \\qquad \\qquad + 2\\mathbb{E}_{\\rho}[(w_{t_{n+1}})^2]\\\\\n\\leq & 4 \\left(\\frac{t_n}{t_{n+1}}\\right)^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[(u_{t_{n}})^2] + 2 (t_{n+1})^{-\\frac{d+2}{2}}\\left(C (\\log(t_{n+1}))^{\\gamma} t_{n+1} (l_u)^{d} \\Vert u\\Vert^2_{L^{\\infty}} + \\mathbb{E}_{\\rho}[u^2]\\right)\\\\\n& \\qquad + 4 \\left(\\frac{t_n}{t_{n+1}}\\right)^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[(w_{t_{n}})^2] + 2\\mathbb{E}_{\\rho}[(w_{t_{n+1}})^2].\n\\end{split}\n\\end{equation}\nWe set $U_n = (t_n)^{\\frac{d}{2}}\\mathbb{E}_{\\rho}[(u_{t_{n}})^2]$ and insert\n\\cref{eq:Variance} into the inequality above to get \n\\begin{align*}\nU_{n+1} \\leq \\theta U_{n} + C_2 \\left((\\log(t_{n+1}))^{\\gamma} (l_u)^{d} \\Vert u\\Vert^2_{L^{\\infty}} + (t_{n+1})^{-1} \\mathbb{E}_{\\rho}[u^2]\\right) + C_3 (l_u)^{d}\\mathbb{E}_{\\rho}[u^2] ,\n\\end{align*}\nwhere $\\theta = 4R^{-1}$. 
By choosing $R$ large enough that $\\theta \\in (0,1)$, we iterate the inequality above to obtain \n\\begin{align*}\nU_{n+1} &\\leq \\sum_{k=1}^n\\left(C_2 \\left((\\log(t_{n+1}))^{\\gamma} (l_u)^{d} \\Vert u\\Vert^2_{L^{\\infty}} + \\mathbb{E}_{\\rho}[u^2]\\right) + C_3 (l_u)^{d}\\mathbb{E}_{\\rho}[u^2]\\right)\\theta^{n-k} + U_0 \\theta^{n+1} \\\\\n&\\leq \\frac{1}{1-\\theta}\\left(C_2 \\left((\\log(t_{n+1}))^{\\gamma} (l_u)^{d} \\Vert u\\Vert^2_{L^{\\infty}} + \\mathbb{E}_{\\rho}[u^2]\\right) + C_3 (l_u)^{d}\\mathbb{E}_{\\rho}[u^2]\\right) + (l_u)^{d}\\mathbb{E}_{\\rho}[u^2] \\\\\n\\Longrightarrow & \\mathbb{E}_{\\rho}[(u_{t_{n+1}})^2] \\leq C_4( \\log(t_{n+1}) )^{\\gamma}\\left(\\frac{l_u}{\\sqrt{t_{n+1}}}\\right)^d \\Vert u \\Vert^2_{L^{\\infty}}.\n\\end{align*}\n\\end{proof}\n\\begin{remark}\\label{rmk:Error}\nWe remark that there is a small error in the similar argument in \\cite[Proof of Proposition 2.2]{janvresse1999relaxation}: the authors apply \\cref{eq:Approximation} from $t_0$ to $t_n$, and they neglect the change of the scale $K$ at the endpoints $\\{t_{n}\\}_{n \\geq 0}$. However, it does not harm the whole proof, and we fix it here: we add one more step of decomposition in \\cref{eq:FixError}, and run the iteration directly on $u_t$ instead of $v_t$, which avoids the problem of the changing $K$.\n\\end{remark}\n\n\\subsection{Error for the approximation}\\label{subsec:Approximation}\nIn this part, we prove \\Cref{prop:Approximation}. The proof is divided into six steps.\n\n\\begin{proof}[Proof of \\Cref{prop:Approximation}]\n\\textit{Step 1: Setting up.}\nTo shorten the formulas, we define\n\\begin{align}\n\\Delta_n := (t_{n+1})^{\\frac{d+2}{2}} \\mathbb{E}_{\\rho}[(v_{t_{n+1}})^2] - (t_n)^{\\frac{d+2}{2}} \\mathbb{E}_{\\rho}[(v_{t_n})^2],\n\\end{align}\nand bounding this quantity is the goal of the whole subsection. 
In this setting-up step, we differentiate the flow $t^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[(v_t)^2]$ to obtain \n\\begin{align}\\label{eq:ApproxDerivative}\n\\Delta_n = \\int_{t_n}^{t_{n+1}}\\left(\\frac{d+2}{2}\\right)t^{\\frac{d}{2}}\\mathbb{E}_{\\rho}[(v_t)^2] - 2t^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[v_t (-\\L v_t)] \\, \\d t.\n\\end{align}\n\n\\textit{Step 2: Localization.}\nWe set ${\\mathcal{A}}_L v_t := \\mathbb{E}\\left[v_t | \\mathcal F_{Q_L} \\right]$ and use it to approximate $v_t$ in $L^2$. Since it is a diffusion process, one naturally guesses that a scale larger than $\\sqrt{t}$ contains enough information for this approximation. In \\Cref{thm:localization} we prove the estimate \n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[(v_t - {\\mathcal{A}}_L v_t)^2\\right] \\leq C(\\Lambda)\\exp\\left(-\\frac{L}{\\sqrt{t}}\\right)\\mathbb{E}_{\\rho}\\left[(v_0)^2\\right],\n\\end{align*}\nand we choose $L = \\lfloor \\gamma \\log(t_{n+1})\\rfloor \\sqrt{t_{n+1}}$ with $\\gamma > \\frac{d+4}{2}$ here, and insert it back into \\cref{eq:ApproxDerivative} to obtain \n\\begin{equation}\\label{eq:ApproxLocalization}\n\\begin{split}\n\\Delta_n \\leq &\\int_{t_n}^{t_{n+1}}(d+2)t^{\\frac{d}{2}}\\mathbb{E}_{\\rho}\\left[\\left({\\mathcal{A}}_L v_t\\right)^2\\right]+ (d+2)t^{\\frac{d}{2}-\\gamma} \\mathbb{E}_{\\rho}\\left[(v_0)^2\\right] - 2t^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[v_t (-\\L v_t)] \\, \\d t \\\\\n\\leq & \\mathbb{E}_{\\rho}[(u_0)^2] + \\int_{t_n}^{t_{n+1}}(d+2)t^{\\frac{d}{2}}\\mathbb{E}_{\\rho}\\left[\\left({\\mathcal{A}}_L v_t\\right)^2\\right] - 2t^{\\frac{d+2}{2}}\\mathbb{E}_{\\rho}[v_t (-\\L v_t)] \\, \\d t . \n\\end{split}\n\\end{equation}\n\n\n\\textit{Step 3: Approximation by density.}\nWe apply a second approximation: we choose another scale $l>0$, whose value will be fixed later, subject to $L\/l \\in \\mathbb{N}$ and $l \\simeq \\sqrt{t_{n+1}}$. 
We set $q = (L\/l)^d$ and denote by $\\mathbf{M}_{L,l} = (\\mathbf{M}_1, \\mathbf{M}_2, \\cdots, \\mathbf{M}_q)$ the random vector, where $\\mathbf{M}_i$ is the number of particles in the $i$-th cube of scale $l$. Then we define an operator \n\\begin{align*}\n\\mathcal{B}_{L,l} v_t := \\mathbb{E}_{\\rho}\\left[v_t \\vert \\mathbf{M}_{L,l} \\right].\n\\end{align*}\nThe main idea here is that the random vector $\\mathbf{M}_{L,l}$ captures the relevant information, once we know that the density in every cube of scale $l \\simeq \\sqrt{t_{n+1}}$ converges to $\\rho$. In \\Cref{prop:AB} we will prove a spectral inequality:\n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_L v_t - \\mathcal{B}_{L,l} v_t)^2\\right] \\leq R_0 l^2 \\mathbb{E}_{\\rho}\\left[v_t (-\\L v_t)\\right].\n\\end{align*}\nWe put this estimate into \\cref{eq:ApproxLocalization} \n\\begin{align*}\n\\Delta_n \\leq & \\mathbb{E}_{\\rho}[(u_0)^2] + \\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}}\\mathbb{E}_{\\rho}\\left[\\left(\\mathcal{B}_{L,l} v_t\\right)^2\\right] + 2t^{\\frac{d}{2}}((d+2)R_0 l^2 -t)\\mathbb{E}_{\\rho}[v_t (-\\L v_t)] \\, \\d t \\\\\n\\leq & \\mathbb{E}_{\\rho}[(u_0)^2] + \\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}}\\mathbb{E}_{\\rho}\\left[\\left(\\mathcal{B}_{L,l} v_t\\right)^2\\right] \\, \\d t, \n\\end{align*}\nwhere we obtain the last line by choosing a scale $l = c \\sqrt{t_{n+1}}$ such that $(d+2)R_0 l^2 \\leq t_n $ and $L\/l \\in \\mathbb{N}$. \n\nIt remains to estimate how small $\\mathbb{E}_{\\rho}\\left[\\left(\\mathcal{B}_{L,l} v_t\\right)^2\\right]$ is. The typical case is that the density is close to $\\rho$ in every cube of scale $l$ in $Q_L$. Let us define $M = (M_1, M_2, \\cdots, M_q)$, so that we have \n\\begin{align*}\n\\mathcal{B}_{L,l} v_t(M) = \\mathbb{E}_{\\rho}\\left[v_t \\vert \\mathbf{M}_{L,l} = M \\right]. 
\n\\end{align*}\nThen we call $\\mathcal C_{L,l,\\rho,\\delta}$ the set of $\\delta$-good configurations:\n\\begin{align}\n\\mathcal C_{L,l,\\rho,\\delta} := \\left\\{M \\in \\mathbb{N}^{q} \\left\\vert \\forall 1 \\leq i\\leq q, \\left\\vert\\frac{M_i}{\\rho\\vert Q_l \\vert} - 1\\right\\vert \\leq \\delta \\right.\\right\\}.\n\\end{align}\nWe can use a standard Chernoff bound and a union bound to obtain an upper bound on $\\Pr\\left[\\mathbf{M}_{L,l} \\notin \\mathcal C_{L,l,\\rho,\\delta} \\right]$: for any $\\lambda > 0$, we have \n\\begin{align*}\n\\Pr\\left[\\exists 1 \\leq i\\leq q, \\frac{\\mathbf{M}_i}{\\rho\\vert Q_l \\vert} \\geq 1+\\delta \\right] &\\leq \\left(\\frac{L}{l}\\right)^d \\exp(-\\lambda(1+\\delta))\\mathbb{E}_{\\rho}\\left[\\exp\\left(\\frac{\\lambda\\mu(Q_l)}{\\rho \\vert Q_l\\vert}\\right)\\right] \\\\\n&=\\left(\\frac{L}{l}\\right)^d \\exp\\left(-\\lambda(1+\\delta)+ \\rho \\vert Q_l\\vert \\left(e^{\\frac{\\lambda }{\\rho \\vert Q_l\\vert}}-1\\right)\\right) \\\\\n&\\leq \\left(\\frac{L}{l}\\right)^d \\exp\\left(-\\lambda \\delta + \\frac{\\lambda^2}{\\rho \\vert Q_l\\vert}\\right).\n\\end{align*}\nIn the second line we use the exact Laplace transform for $\\mu(Q_l)$, as we know ${\\mu(Q_l) \\stackrel{\\text{law}}{\\sim} \\text{Poisson}(\\rho \\vert Q_l \\vert)}$. Then we optimize by choosing $\\lambda = \\frac{\\delta \\rho \\vert Q_l \\vert}{2}$. 
The bound for the other side is similar, and we conclude\n\\begin{align}\n\\Pr\\left[\\mathbf{M}_{L,l} \\notin \\mathcal C_{L,l,\\rho,\\delta} \\right] \\leq \\left(\\gamma \\log(t_{n+1})\\right)^d \\exp\\left(-\\frac{\\rho \\vert Q_l\\vert \\delta^2}{4}\\right).\n\\end{align}\nIn the case $M \\notin \\mathcal C_{L,l,\\rho,\\delta}$, we can bound $\\mathcal{B}_{L,l} v_t(M)$ naively by $\\vert \\mathcal{B}_{L,l} v_t(M) \\vert \\leq C\\Vert u_0\\Vert_{L^{\\infty}}$, thus we have \n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[\\left(\\mathcal{B}_{L,l} v_t\\right)^2\\right] \\leq \\sum_{M\\in \\mathcal C_{L,l,\\rho,\\delta}} \\Pr[\\mathbf{M}_{L,l} = M] (\\mathcal{B}_{L,l} v_t(M))^2 + \\left(\\gamma \\log(t_{n+1})\\right)^d \\exp\\left(-\\frac{\\rho \\vert Q_l\\vert \\delta^2}{4}\\right)\\Vert u_0\\Vert^2_{L^{\\infty}} \n\\end{align*}\nand we finish this step with \n\\begin{equation}\\label{eq:ApproxDensity}\n\\begin{split}\n\\Delta_n \\leq & \\mathbb{E}_{\\rho}[(u_0)^2] + (t_{n+1})^{\\frac{d+2}{2}}\\left(\\gamma \\log(t_{n+1})\\right)^d \\exp\\left(-\\frac{\\rho \\vert Q_l\\vert \\delta^2}{4}\\right)\\Vert u_0\\Vert^2_{L^{\\infty}} \\\\\n& \\qquad + \\sum_{M\\in \\mathcal C_{L,l,\\rho,\\delta}} \\Pr[\\mathbf{M}_{L,l} = M] \\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}} (\\mathcal{B}_{L,l} v_t(M))^2 \\, \\d t.\n\\end{split}\n\\end{equation}\nWe remark that the parameter $\\delta > 0$ will be fixed at the end of the proof.\n\n\\textit{Step 4: Perturbation estimate.}\nIt remains to estimate the term $(\\mathcal{B}_{L,l} v_t(M))^2$ for the $\\delta$-good configurations. 
Now we insert the expression of $v_t$ and obtain \n\\begin{align*}\n(\\mathcal{B}_{L,l} v_t(M))^2 &= \\left(\\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K } (\\mathcal{B}_{L,l} (u_t - \\tau_{y} u_t))(M)\\right)^2,\n\\end{align*}\nand our aim is to control \n\\begin{equation}\\label{eq:PerTwo}\n\\begin{split}\n\\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}} \\left(\\frac{1}{\\vert \\mathcal Z_K \\vert} \\sum_{y \\in \\mathcal Z_K } (\\mathcal{B}_{L,l} (u_t - \\tau_{y} u_t))(M)\\right)^2 \\, \\d t .\n\\end{split}\n\\end{equation}\n\nTo treat \\text{\\cref{eq:PerTwo}}, we calculate the Radon-Nikodym derivative \n\\begin{align}\\label{eq:defgM}\ng_M := \\frac{\\d \\Pr[\\cdot \\vert \\mathbf{M}_{L,l} = M]}{\\d \\Pr} = \\frac{1}{\\Pr[\\mathbf{M}_{L,l} = M]} \\Ind{\\mathbf{M}_{L,l} = M}.\n\\end{align}\nThen we use the reversibility of the semigroup $P_t$ and write $g_{M,t} := P_t g_M$, so that\n\\begin{align*}\n\\mathcal{B}_{L,l}(u_t - \\tau_y u_t)(M) = \\mathbb{E}_{\\rho}[g_M (u_t - \\tau_y u_t)] = \\mathbb{E}_{\\rho}\\left[g_{M,t} (u - \\tau_y u)\\right]. \n\\end{align*}\nThen we would like to apply a perturbation estimate, \\Cref{prop:Perturbation}, to control it: let $l_k := l_u + 2k$; then for any $\\vert y \\vert \\leq k$, we have \n\\begin{align*}\n\\mathbb{E}_{\\rho}[g_M (u_t - \\tau_y u_t)] \\leq C(d) (l_k \\Vert u \\Vert_{L^\\infty})^2 \\mathcal{E}_{Q_{l_k}}(\\sqrt{g_M}),\n\\end{align*}\nwhere $\\mathcal{E}_{Q_{l_k}}(\\sqrt{g_M})$ is a localized Dirichlet form defined in \\cref{eq:DirichletLocal}. A heuristic analysis of the order gives $\\mathcal{E}_{Q_{l_k}}(\\sqrt{g_M}) \\simeq O\\left((l_k)^d\\right)$, since it is a Dirichlet form on $Q_{l_k}$. If we choose $k = K$ here to cover the whole shift in one step, the bound will be of order $O(K^{d+2})$, which is too big when $K \\simeq \\sqrt{t} \\geq l_u$. 
Therefore, we apply a coarse-graining argument: let $\\overline{[0,y]}_k := \\{z_i\\}_{0 \\leq i \\leq n(y)}$ be a lattice path of scale $k$, with ${z_0 = 0}, {z_{n(y)} = y},{\\{z_i\\}_{1 \\leq i < n(y)} \\in (k\\mathbb{Z})^d}$, chosen so that the length of the path is minimal. (See \\Cref{fig:coarse-graining} for an illustration.) Then we have \n\\begin{align*}\n(u - \\tau_y u) = \\sum_{i=0}^{n(y) - 1} (\\tau_{z_i} u - \\tau_{z_{i+1}} u) = \\sum_{i=0}^{n(y) - 1} \\tau_{z_i} (u - \\tau_{h_{z_i}}u),\n\\end{align*}\nwhere $h_{z_i} = z_{i+1} - z_i$ is the vector connecting two consecutive points, with $\\vert h_{z_i} \\vert \\leq k$. This expression, combined with the transport invariance of the Poisson point process and the Cauchy-Schwarz inequality, implies\n\\begin{equation}\\label{eq:Telescope}\n\\begin{split}\n\\left(\\mathcal{B}_{L,l}(u_t - \\tau_y u_t)(M)\\right)^2 &= \\left(\\sum_{z \\in \\overline{[0,y]}_k} \\mathbb{E}_{\\rho}\\left[g_{M,t} \\tau_z(u - \\tau_{h_z} u)\\right]\\right)^2 \\\\\n&= \\left(\\sum_{z \\in \\overline{[0,y]}_k} \\mathbb{E}_{\\rho}\\left[\\left(\\tau_{-z}g_{M,t} \\right)(u - \\tau_{h_z} u)\\right]\\right)^2 \\\\\n&\\leq C(d) n(y) \\sum_{z \\in \\overline{[0,y]}_k} \\left(\\mathbb{E}_{\\rho}\\left[\\left(\\tau_{-z}g_{M,t}\\right) (u - \\tau_{h_z} u)\\right]\\right)^2.\n\\end{split}\n\\end{equation}\nEach summand calls for a perturbation estimate, which will be proved in \\Cref{prop:Perturbation}:\n\\begin{align*}\n\\left(\\mathbb{E}_{\\rho}\\left[\\left(\\tau_{-z}g_{M,t}\\right) (u - \\tau_{h_z} u)\\right]\\right)^2 &\\leq C(d) (l_k \\Vert u \\Vert_{L^\\infty})^2 \\mathcal{E}_{Q_{l_k}}\\left(\\sqrt{\\tau_{-z}g_{M,t}}\\right) \\\\\n& = C(d) (l_k \\Vert u \\Vert_{L^\\infty})^2 \\mathcal{E}_{\\tau_z Q_{l_k}}\\left(\\sqrt{g_{M,t}}\\right),\n\\end{align*}\nwhere in the last step we use the transport invariance of the Poisson point process. Now we turn to the choice of the scale $k$. 
By the heuristic analysis that every $\\mathcal{E}_{Q_{l_k}}$ contributes order $O((l_k)^d)$, and taking into account that $n(y) \\leq K\/k$, we obtain in \\cref{eq:Telescope}\n\\begin{align*}\n\\left(\\mathcal{B}_{L,l}(u_t - \\tau_y u_t)(M)\\right)^2 \\simeq O\\left(\\left(\\frac{K}{k}\\right)^2 (l_k)^{d+2}\\right) \\simeq O\\left(\\left(\\frac{K}{k}\\right)^2 (l_u + 2k)^{d+2}\\right).\n\\end{align*}\nFrom this we see that a good scale should be $k = l_u$, so that the term above is of order $O(K^2(l_u)^d)$. We put these estimates back into \\cref{eq:PerTwo}\n\\begin{multline}\\label{eq:PerTwoa}\n\\text{\\cref{eq:PerTwo}} \\leq \\Vert u \\Vert^2_{L^\\infty} \\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}} K l_u \\left(\\frac{1}{\\vert \\mathcal Z_K \\vert}\\sum_{y \\in \\mathcal Z_K} \\sum_{z \\in \\overline{[0,y]}_{l_u}} \\mathcal{E}_{\\tau_z Q_{3 l_u}}\\left(\\sqrt{g_{M,t}}\\right)\\right)\\, \\d t .\n\\end{multline} \n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[scale=0.45]{path2}\n\\caption{An illustration of the coarse-graining argument, where we take a lattice path of scale $k$ to connect $0$ and $y$. The ball in blue is the support of $u$ and the box in red is $Q_{l_k}$. On the left, the scale is $k = l_u$; on the right, the scale is finer and we see that the coarse-graining is too dense. }\\label{fig:coarse-graining}\n\\end{figure}\n\n\n\\textit{Step 5: Covering argument.} In this step, we estimate the right-hand side of \\cref{eq:PerTwoa}, where we notice one essential problem: there are in total about $K^{d+1}\/l_u$ terms of the Dirichlet form $\\mathcal{E}_{\\tau_z Q_{3 l_u}}\\left(\\sqrt{g_{M,t}}\\right)$ in the sum $\\sum_{y \\in \\mathcal Z_K} \\sum_{z \\in \\overline{[0,y]}_{l_u}} \\mathcal{E}_{\\tau_z Q_{3 l_u}}\\left(\\sqrt{g_{M,t}}\\right)$, but the terms with $z$ close to $0$ are counted of order $K^d$ times, while those with $z$ near $\\partial \\mathcal Z_K$ are counted only a constant number of times. 
To solve this problem, we have to re-average the sum: by the transport invariance of the Poisson point process, going back to the beginning of Step 1, we can write \n\\begin{align*}\n\\Delta_n = \\frac{1}{\\vert \\mathcal Z_l \\vert}\\sum_{x \\in \\mathcal Z_l}\\left((t_{n+1})^{\\frac{d+2}{2}} \\mathbb{E}_{\\rho}[(\\tau_x v_{t_{n+1}})^2] - (t_n)^{\\frac{d+2}{2}} \\mathbb{E}_{\\rho}[(\\tau_x v_{t_n})^2]\\right).\n\\end{align*}\nThen all the estimates in Steps 1, 2 and 3 remain valid after replacing ${v_t \\mapsto \\tau_x v_t}$ and ${u_t \\mapsto \\tau_x u_t}$. In Step 4, this operation changes our target term \\cref{eq:PerTwo} into \n\\begin{align*}\n\\text{\\cref{eq:PerTwo}-avg} = \\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}} \\left(\\frac{1}{\\vert \\mathcal Z_l \\vert}\\sum_{w \\in \\mathcal Z_l}\\frac{1}{\\vert \\mathcal Z_K \\vert}\\sum_{y \\in \\mathcal Z_K} (\\mathcal{B}_{L,l} \\tau_w(u_t - \\tau_y u_t)(M))^2\\right) \\, \\d t,\n\\end{align*}\nand the perturbation argument \\Cref{prop:Perturbation} reduces the problem to \n\\begin{multline}\\label{eq:PerTwoaNew}\n\\text{\\cref{eq:PerTwo}-avg} \\leq \\Vert u \\Vert^2_{L^\\infty} \\int_{t_n}^{t_{n+1}} 2(d+2)t^{\\frac{d}{2}}K l_u \\\\\n\\qquad \\qquad \\times \\left(\\frac{1}{\\vert \\mathcal Z_l \\vert}\\sum_{w \\in \\mathcal Z_l} \\frac{1}{\\vert \\mathcal Z_K \\vert}\\sum_{y \\in \\mathcal Z_K} \\sum_{z \\in \\overline{[0,y]}_{l_u}} \\mathcal{E}_{\\tau_{w+z} Q_{3 l_u}}\\left(\\sqrt{g_{M,t}}\\right)\\right)\\, \\d t .\\\\\n\\end{multline}\nNow we can apply Fubini's theorem\n\\begin{align*}\n&\\frac{1}{\\vert \\mathcal Z_l \\vert}\\sum_{w \\in \\mathcal Z_l} \\frac{1}{\\vert \\mathcal Z_K \\vert}\\sum_{y \\in \\mathcal Z_K} \\sum_{z \\in \\overline{[0,y]}_{l_u}} \\mathcal{E}_{\\tau_{w+z} Q_{3 l_u}}\\left(\\sqrt{g_{M,t}}\\right) \\\\\n= & \\frac{1}{\\vert \\mathcal Z_l \\vert} \\frac{1}{\\vert \\mathcal Z_K \\vert} \\mathbb{E}_{\\rho}\\left[\\int_{{\\mathbb{R}^d}} \\left(\\sum_{w \\in \\mathcal Z_l}\\sum_{y \\in 
\\mathcal Z_K}\\sum_{z \\in \\overline{[0,y]}_{l_u}}\\Ind{x \\in \\tau_{w+z} Q_{3 l_u}} \\right) \\nabla \\sqrt{g_{M,t}}(\\mu, x) \\cdot \\nabla \\sqrt{g_{M,t}}(\\mu, x)\\, \\d \\mu(x)\\right],\n\\end{align*}\nwhile we note that \n\\begin{multline*}\n\\sum_{w \\in \\mathcal Z_l}\\sum_{y \\in \\mathcal Z_K}\\sum_{z \\in \\overline{[0,y]}_{l_u}}\\Ind{x \\in \\tau_{w+z} Q_{3 l_u}} = \\sum_{y \\in \\mathcal Z_K}\\sum_{z \\in \\overline{[0,y]}_{l_u}}\\underbrace{\\sum_{w \\in \\mathcal Z_l}\\Ind{x-w \\in \\tau_{z} Q_{3 l_u}}}_{\\leq \\vert Q_{3 l_u} \\vert}\\\\\n\\leq \\sum_{y \\in \\mathcal Z_K}\\sum_{z \\in \\overline{[0,y]}_{l_u}} (3 l_u)^d \\leq C(d)(l_u)^{d-1} K^{d+1},\n\\end{multline*}\nso we have\n\\begin{align*}\n\\frac{1}{\\vert \\mathcal Z_l \\vert}\\sum_{w \\in \\mathcal Z_l} \\frac{1}{\\vert \\mathcal Z_K \\vert}\\sum_{y \\in \\mathcal Z_K} \\sum_{z \\in \\overline{[0,y]}_{l_u}} \\mathcal{E}_{\\tau_{w+z} Q_{3 l_u}}\\left(\\sqrt{g_{M,t}}\\right) \\leq \\frac{C(d)(l_u)^{d-1} K}{\\vert \\mathcal Z_l \\vert} \\mathcal{E}(\\sqrt{g_{M,t}}).\n\\end{align*}\nInserting this estimate into \\cref{eq:PerTwoaNew} and using $l = c\\sqrt{t_{n+1}}$, we obtain \n\\begin{equation}\\label{eq:PerTwoa2}\n\\begin{split}\n\\text{\\cref{eq:PerTwo}-avg} &\\leq C(d) \\Vert u \\Vert^2_{L^\\infty}K^2 (l_u)^{d} \\int_{t_n}^{t_{n+1}} \\left(\\frac{t^{\\frac{1}{2}}}{l}\\right)^d \\mathcal{E}(\\sqrt{g_{M,t}}) \\, \\d t \\\\\n&\\leq C(d) \\Vert u \\Vert^2_{L^\\infty} K^2 (l_u)^{d} \\int_{t_n}^{t_{n+1}} \\mathcal{E}(\\sqrt{g_{M,t}}) \\, \\d t.\n\\end{split}\n\\end{equation}\nSubstituting \\cref{eq:PerTwoa2} back into \\cref{eq:PerTwo} and \\cref{eq:ApproxDensity}, we conclude\n\\begin{equation}\n\\begin{split}\n\\Delta_n \\leq & \\mathbb{E}_{\\rho}[(u_0)^2] + (t_{n+1})^{\\frac{d+2}{2}}\\left(\\gamma \\log(t_{n+1})\\right)^d \\exp\\left(-\\frac{\\rho \\vert Q_l\\vert \\delta^2}{4}\\right)\\Vert u_0\\Vert^2_{L^{\\infty}} \\\\\n& \\qquad + C(d) \\Vert u \\Vert^2_{L^\\infty} K^2 (l_u)^{d}\\sum_{M\\in \\mathcal 
C_{L,l,\\rho,\\delta}} \\Pr[\\mathbf{M}_{L,l} = M] \\int_{t_n}^{t_{n+1}} \\mathcal{E}(\\sqrt{g_{M,t}}) \\, \\d t.\n\\end{split}\n\\end{equation}\n\n\n\\textit{Step 6: Entropy inequality.}\nIn this step, we analyze the quantity $\\int_{t_n}^{t_{n+1}} \\mathcal{E}(\\sqrt{g_{M,t}})\\, \\d t$. We recall the entropy identity: letting $H(g_M) = \\mathbb{E}_{\\rho}[g_M \\log(g_M)]$, we have \n\\begin{align}\nH(g_{M,t}) = H(g_{M}) - 4\\int_{0}^t \\mathbb{E}_{\\rho}[\\sqrt{g_{M,s}}(-\\L \\sqrt{g_{M,s}})]\\, \\d s,\n\\end{align}\nand since the entropy is non-negative, \n\\begin{align*}\n\\int_{t_n}^{t_{n+1}} \\mathcal{E}(\\sqrt{g_{M,t}})\\, \\d t \\leq \\int_{0}^{t_{n+1}} \\mathbb{E}_{\\rho}[\\sqrt{g_{M,t}}(-\\L \\sqrt{g_{M,t}})]\\, \\d t = \\frac{1}{4}\\left(H(g_{M}) - H(g_{M,t_{n+1}})\\right) \\leq H(g_M).\n\\end{align*}\nFor any $M \\in \\mathcal C_{L,l,\\rho,\\delta}$, the entropy can be bounded as follows; we prove this in \\Cref{lem:Entropy}:\n\\begin{align*}\nH(g_M) \\leq C(d, \\rho)\\left(\\frac{L}{l}\\right)^d \\left(\\log(l) + l^d\\delta^2\\right).\n\\end{align*}\nThis allows us to conclude that \n\\begin{multline*}\n\\Delta_n \\leq \\mathbb{E}_{\\rho}[(u_0)^2] \\\\\n+ \\Vert u_0\\Vert^2_{L^{\\infty}} \\left(\\gamma \\log(t_{n+1})\\right)^d \\left((t_{n+1})^{\\frac{d+2}{2}}\\exp\\left(-\\frac{\\rho \\vert Q_l\\vert \\delta^2}{4}\\right) + K^2 (l_u)^{d} \\left(\\log(l) + l^d\\delta^2\\right)\\right).\n\\end{multline*}\nTo make the bound small, we choose the parameter $\\delta = c(d,\\rho) (\\log{t_{n+1}})^{\\frac{1}{2}}(t_{n+1})^{-\\frac{d}{2}}$, where $c(d,\\rho)$ is a positive number large enough to compensate the term $(t_{n+1})^{\\frac{d+2}{2}}$, and this proves \\cref{eq:Approximation}.\n\\end{proof}\n\n\n\n\\section{Localization estimate}\\label{sec:Localization}\nIn this section, we prove the key localization estimate. We recall the notation for the conditional expectation: ${\\mathcal{A}}_s f = \\mathbb{E}\\left[f | \\mathcal F_{Q_s} \\right]$, where $Q_s$ denotes the closed cube $\\left[- \\frac{s}{2}, 
\\frac{s}{2}\\right]^d$.\n\\begin{theorem}\\label{thm:localization}\nFor $u \\in L^2\\left(\\mathcal{M}_\\delta({\\mathbb{R}^d})\\right)$ with compact support $\\supp(u) \\subset Q_{l_u}$, any ${t \\geq \\max\\left\\{l_u^2, 16 \\Lambda^2 \\right\\}}$ and $K \\geq \\sqrt{t}$, and $u_t$ the evolution of $u$ under the semigroup generated by $\\L$ at time $t$, we have the estimate\n\\begin{equation}\n\\mathbb{E}_{\\rho}\\left[(u_t - {\\mathcal{A}}_K u_t)^2\\right] \\leq C(\\Lambda)\\exp\\left(- \\frac{K}{\\sqrt{t}} \\right)\\mathbb{E}_{\\rho}\\left[u^2\\right].\n\\end{equation}\n\\end{theorem}\n\nThis important inequality allows us to localize the function at the price of a small error; it was introduced in \\cite{janvresse1999relaxation} and also used in \\cite{giunti2019heat}. The main idea of the proof is to use a multi-scale functional and analyze its evolution in time. Let us introduce its continuous version: for any $f \\in L^2(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$, $f \\mapsto \\left({\\mathcal{A}}_s f \\right)_{s \\geq 0}$ is a c\\`adl\\`ag $L^2$-martingale with respect to $\\left(\\Omega, \\left(\\mathcal F_{Q_s}\\right)_{s \\geq 0}, \\P\\right)$. \n\nOur multi-scale functional for $f \\in H^1_0(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ is defined as \n\\begin{equation}\\label{eq:FunctionalMultiscaleContinous}\nS_{k,K,\\beta}(f) = \\alpha_k \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_k f)^2\\right] + \\int_{k}^K \\alpha_s \\,d \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_s f)^2\\right] + \\alpha_K \\mathbb{E}_{\\rho}\\left[ (f - {\\mathcal{A}}_K f)^2\\right],\n\\end{equation}\nwith $\\alpha_s = \\exp\\left(\\frac{s}{\\beta}\\right), \\beta > 0$. 
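As a purely numerical sanity check of the identity behind this functional, the sketch below (with a hypothetical increasing profile $b(s)$ standing in for $\mathbb{E}_{\rho}[({\mathcal{A}}_s f)^2]$ and a hypothetical value for $\mathbb{E}_{\rho}[f^2]$) verifies that integration by parts, together with the orthogonality $\mathbb{E}_{\rho}[(f - {\mathcal{A}}_K f)^2] = \mathbb{E}_{\rho}[f^2] - \mathbb{E}_{\rho}[({\mathcal{A}}_K f)^2]$, collapses the three terms of $S_{k,K,\beta}$ into $\alpha_K \mathbb{E}_{\rho}[f^2] - \int_k^K \alpha'_s\, \mathbb{E}_{\rho}[({\mathcal{A}}_s f)^2]\, ds$.

```python
import math

# Hypothetical stand-ins: b(s) plays the role of E_rho[(A_s f)^2], an
# increasing function bounded by Ef2 = E_rho[f^2]; alpha_s = exp(s / beta).
k, K, beta = 1.0, 5.0, 2.0
Ef2 = 2.0

def b(s):
    return Ef2 * (1.0 - math.exp(-s))

def b_prime(s):
    return Ef2 * math.exp(-s)

def alpha(s):
    return math.exp(s / beta)

def alpha_prime(s):
    return alpha(s) / beta

def trapz(g, a, c, n=200000):
    # composite trapezoidal rule on [a, c]
    h = (c - a) / n
    total = 0.5 * (g(a) + g(c))
    total += sum(g(a + i * h) for i in range(1, n))
    return total * h

# Three-term functional: alpha_k b(k) + int_k^K alpha_s db(s)
# + alpha_K (Ef2 - b(K)), where Ef2 - b(K) = E_rho[(f - A_K f)^2]
# by the martingale orthogonality.
lhs = alpha(k) * b(k) + trapz(lambda s: alpha(s) * b_prime(s), k, K) \
      + alpha(K) * (Ef2 - b(K))
# Integrated-by-parts form: alpha_K Ef2 - int_k^K alpha'_s b(s) ds.
rhs = alpha(K) * Ef2 - trapz(lambda s: alpha_prime(s) * b(s), k, K)

print(abs(lhs - rhs))
```

The two quantities agree up to quadrature error, which is exactly the passage from \cref{eq:FunctionalMultiscaleContinous} to its integrated-by-parts form.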
We can apply the integration by parts formula for the Lebesgue-Stieltjes integral and obtain \n\\begin{equation}\\label{eq:FunctionalMultiscaleContinous2}\nS_{k,K,\\beta}(f) = \\alpha_K \\mathbb{E}_{\\rho}\\left[f^2\\right] - \\int_{k}^K \\alpha'_s \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_s f)^2\\right]\\, ds,\n\\end{equation}\nwhere $\\alpha'_s$ is the derivative with respect to $s$. The main idea is to insert $u_t$ into \\cref{eq:FunctionalMultiscaleContinous2}, study the derivative $\\frac{d}{dt} S_{k,K,\\beta}(u_t)$, and use it to prove \\Cref{thm:localization}. In this procedure, we will use the Dirichlet form for ${\\mathcal{A}}_s u_t$, but we remark that we do not know a priori that this is a function in $ H^1_0(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$. We will give a counterexample to clarify this in the next subsection and introduce a regularized version of ${\\mathcal{A}}_s f$ to bypass this difficulty.\n\n\n\\subsection{Conditional expectation, spatial martingale and its regularization}\n$\\left({\\mathcal{A}}_s f \\right)_{s \\geq 0}$ has a useful double nature: we can treat it either as a localized function or as a martingale. Thus we use the notation \n\\begin{align}\n\\mathscr{M}^f_s := {\\mathcal{A}}_s f,\n\\end{align}\nwhich is a more canonical notation in martingale theory. In this subsection, we would like to understand the regularity of the closed martingale $\\left(\\mathscr{M}^f_s \\right)_{s \\geq 0}$. We will see that it is a c\\`adl\\`ag $L^2$-martingale whose jumps occur when a particle lies on the boundary $\\partial Q_s$. 
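The proof of the next lemma rests on the elementary Poisson tail bound $\Pr[N \geq 2] = 1 - e^{-\lambda}(1+\lambda) \leq \lambda^2/2$ for $N \sim \mathrm{Poisson}(\lambda)$; a quick numerical check (the intensity values below are hypothetical):

```python
import math

def p_at_least_two(lam):
    # P[N >= 2] = 1 - P[N = 0] - P[N = 1] for N ~ Poisson(lam)
    return 1.0 - math.exp(-lam) * (1.0 + lam)

# Verify the quadratic bound on a few hypothetical intensities; applied to
# lam = rho * |C_k| = rho * eps / k and summed over k, it yields a bound of
# order (rho * eps)^2, which vanishes as eps -> 0 in the lemma's proof.
for lam in [1e-3, 1e-2, 0.1, 0.5, 1.0, 2.0]:
    p = p_at_least_two(lam)
    assert 0.0 <= p <= lam ** 2 / 2.0
```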
First, we record a useful property of the Poisson point process.\n\n\\begin{lemma}\\label{lem:OneParticle}\nWith probability $1$, for any $0 < s < \\infty$, there is at most one particle on the boundary $\\partial Q_s$.\n\\end{lemma}\n\\begin{proof}\nWe denote by \n\\begin{align*}\n\\mathcal N := \\{\\mu : \\exists 0 < s < \\infty, \\text{ there exist at least two particles on } \\partial Q_s \\}.\n\\end{align*}\nThen we choose an increasing sequence $\\{s^{\\epsilon}_{k}\\}_{k \\geq 0}$ with $s^{\\epsilon}_{0} = 0$, such that\n\\begin{align*}\n{\\mathbb{R}^d} = \\bigsqcup_{k=1}^{\\infty}C_{s^{\\epsilon}_{k}}, \n\\qquad C_{s^{\\epsilon}_{k}} := Q_{s^{\\epsilon}_{k}} \\backslash Q_{s^{\\epsilon}_{k-1}}, \n\\qquad \\vert C_{s^{\\epsilon}_{k}} \\vert = \\frac{\\epsilon}{k}.\n\\end{align*}\nThen we have that \n\\begin{align*}\n\\Pr \\left[\\mathcal N\\right] &\\leq \\Pr \\left[\\exists k, \\mu(C_{s^{\\epsilon}_{k}}) \\geq 2\\right] \\\\\n&\\leq \\sum_{k = 1}^{\\infty} \\Pr \\left[\\mu(C_{s^{\\epsilon}_{k}}) \\geq 2\\right]\\\\\n&\\leq \\sum_{k = 1}^{\\infty} \\left(\\rho \\vert C_{s^{\\epsilon}_{k}}\\vert \\right)^2 \\\\ \n&\\leq \\frac{\\pi^2}{6}(\\rho \\epsilon)^2.\n\\end{align*}\nLetting $\\epsilon$ go down to $0$, we conclude that $\\Pr \\left[\\mathcal N\\right] = 0$.\n\\end{proof}\n\nFor this reason, in the following, we may modify the probability space and always assume that there is at most one particle on the boundary. This allows us to prove the following regularity property for $\\left(\\mathscr{M}^f_s \\right)_{s \\geq 0}$.\n\n\\begin{lemma}\\label{lem:MtJump}\nAfter a modification, for any $f \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ the process $\\left(\\mathscr{M}^f_s \\right)_{s \\geq 0}$ is a c\\`adl\\`ag $L^2$-martingale with finite variation, and its discontinuities occur only at those $s$ for which $\\mu(\\partial Q_s) = 1$. 
\n\\end{lemma}\n\\begin{proof}\nBy classical martingale theory, we know that $\\{\\mathcal{F}_{Q_s}\\}_{s \\geq 0}$ is a right continuous filtration, thus after a modification the process is c\\`adl\\`ag. Moreover, from \\Cref{lem:OneParticle} we can modify the value to $0$ on a negligible set so that $\\mu(\\partial Q_s) \\leq 1$ for all positive $s$. It remains to prove that if $\\mu(\\partial Q_s) = 0$, then the process is also left continuous at $s$. In this case, there exists a $0 < \\epsilon_0 < s$ such that for any $0 < \\epsilon < \\epsilon_0$, we have $\\mu(Q_{s-\\epsilon}) = \\mu(Q_s)$. Then \n\\begin{align*}\n{\\mathcal{A}}_s f(\\mu) = {\\mathcal{A}}_s f(\\mu \\mres Q_s) = {\\mathcal{A}}_s f(\\mu \\mres Q_{s - \\epsilon}).\n\\end{align*}\nWe use $\\mu \\mres Q_{s} = (\\mu \\mres Q_{s - \\epsilon}) + (\\mu \\mres (Q_s \\backslash Q_{s - \\epsilon}))$, then \n\\begin{align*}\n{\\mathcal{A}}_{s-\\epsilon} f(\\mu) &= \\mathbb{E}_{\\rho} \\left[ {\\mathcal{A}}_s f(\\mu \\mres Q_{s}) \\vert \\mathcal{F}_{Q_{s-\\epsilon}} \\right] \\\\\n&= \\mathbb{E}_{\\rho} \\left[ {\\mathcal{A}}_s f(\\mu \\mres Q_{s - \\epsilon} + \\mu \\mres (Q_s \\backslash Q_{s - \\epsilon})) \\vert \\mathcal{F}_{Q_{s-\\epsilon}} \\right]\\\\\n&= \\Pr\\left[\\mu(Q_s \\backslash Q_{s - \\epsilon}) = 0\\right]{\\mathcal{A}}_s f(\\mu \\mres Q_{s - \\epsilon}) \\\\\n& \\qquad + \\Pr\\left[\\mu(Q_s \\backslash Q_{s - \\epsilon}) \\geq 1\\right] \\mathbb{E}_{\\rho}\\left[{\\mathcal{A}}_s f \\vert \\mathcal{F}_{Q_{s-\\epsilon}}, \\mu(Q_s \\backslash Q_{s - \\epsilon}) \\geq 1 \\right]\\\\\n&= e^{-\\rho\\vert Q_s \\backslash Q_{s-\\epsilon}\\vert} {\\mathcal{A}}_s f(\\mu \\mres Q_{s - \\epsilon}) + \\left(1 - e^{-\\rho\\vert Q_s \\backslash Q_{s-\\epsilon}\\vert}\\right)\\mathbb{E}_{\\rho}\\left[{\\mathcal{A}}_s f \\vert \\mathcal{F}_{Q_{s-\\epsilon}}, \\mu(Q_s \\backslash Q_{s - \\epsilon}) \\geq 1 \\right].\n\\end{align*}\nSince $\\Vert f \\Vert_{L^\\infty}$ is finite, we have 
$\\lim_{\\epsilon \\searrow 0} {\\mathcal{A}}_{s-\\epsilon} f(\\mu) = {\\mathcal{A}}_{s} f(\\mu)$. Moreover, we have the estimate \n\\begin{align*}\n\\left\\vert{\\mathcal{A}}_{s-\\epsilon} f(\\mu) - {\\mathcal{A}}_{s} f(\\mu) \\right\\vert \\leq C \\rho \\epsilon s^{d-1} \\Vert f \\Vert_{L^\\infty}.\n\\end{align*}\nThis implies that the c\\`adl\\`ag martingale $\\left(\\mathscr{M}^f_s \\right)_{s \\geq 0}$ is locally Lipschitz in its continuous part; thus it is almost surely of finite variation.\n\\end{proof}\n\n\n\nThe following corollaries are simple applications of the result above.\n\n\\begin{corollary}\\label{cor:Bracket}\nFor $f \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$, the bracket process of $\\left(\\mathscr{M}^f_s \\right)_{s \\geq 0}$ is defined by \n\\begin{align}\\label{eq:Bracket}\n\\left[\\mathscr{M}^f\\right]_s := \\sum_{0 < \\tau \\leq s}\\left(\\Delta \\mathscr{M}^f_\\tau \\right)^2, \\qquad \\Delta \\mathscr{M}^f_\\tau = \\mathscr{M}^f_\\tau - \\mathscr{M}^f_{\\tau-}, \\qquad \\tau \\text{ ranging over the jump points}.\n\\end{align}\nThen $\\left(\\left(\\mathscr{M}^f_s\\right)^2 - \\left[\\mathscr{M}^f\\right]_s\\right)_{s \\geq 0}$ is a martingale with respect to $\\left(\\Omega, \\left(\\mathcal F_{Q_s}\\right)_{s \\geq 0}, \\Pr\\right)$.\n\\end{corollary}\n\\begin{proof}\nThis is a standard result for jump processes; see \\cite[Chapter 4e]{jacod2013limit}.\n\\end{proof}\n \n\\begin{corollary}\\label{cor:LeftLimit}\nLet $x \\in \\supp(\\mu)$, define the stopping time\n\\begin{align}\\label{eq:StoppingTime}\n\\tau(x) := \\min\\{s \\geq 0 \\vert x \\in Q_s\\},\n\\end{align}\nlet $\\mathbf{n}(x)$ denote the outward normal direction at $x$, and define \n\\begin{align}\n{\\mathcal{A}}_{\\tau(x)-}f(\\mu - \\delta_x + \\delta_{x-}) := \\lim_{\\epsilon \\searrow 0} {\\mathcal{A}}_{\\tau(x)-\\epsilon}f(\\mu - \\delta_x + \\delta_{x-\\epsilon \\mathbf{n}(x)}). 
\n\\end{align}\nThen we have almost surely\n\\begin{align}\n{\\mathcal{A}}_{\\tau(x)-}f(\\mu) = {\\mathcal{A}}_{\\tau(x)} f(\\mu - \\delta_x), \\qquad {\\mathcal{A}}_{\\tau(x)-}f(\\mu - \\delta_x + \\delta_{x-}) = {\\mathcal{A}}_{\\tau(x)}f(\\mu).\n\\end{align}\n\\end{corollary}\n\\begin{proof}\nThe equation ${\\mathcal{A}}_{\\tau(x)-}f(\\mu) = {\\mathcal{A}}_{\\tau(x)} f(\\mu - \\delta_x)$ follows from left continuity: from \\Cref{lem:OneParticle}, with probability $1$ the only particle on $\\partial Q_{\\tau(x)}$ is $x$, and $\\mu - \\delta_x$ has no particle on the boundary, so we apply \\Cref{lem:MtJump} and obtain this equation.\n\nFor the second equation, we have \n\\begin{align*}\n{\\mathcal{A}}_{\\tau(x)}f(\\mu) &= \\lim_{\\epsilon_1 \\searrow 0}{\\mathcal{A}}_{\\tau(x)}f(\\mu - \\delta_x + \\delta_{x-\\epsilon_1 \\mathbf{n}(x)}) \\\\\n&= \\lim_{\\epsilon_2 \\searrow 0}\\lim_{\\epsilon_1 \\searrow \\epsilon_2}{\\mathcal{A}}_{\\tau(x)-\\epsilon_2}f(\\mu - \\delta_x + \\delta_{x-\\epsilon_1 \\mathbf{n}(x)}) \\\\\n&= \\lim_{\\epsilon \\searrow 0} {\\mathcal{A}}_{\\tau(x)-\\epsilon}f(\\mu - \\delta_x + \\delta_{x-\\epsilon \\mathbf{n}(x)}). \n\\end{align*}\nIn the last step, we use the uniform left continuity of ${\\mathcal{A}}_s f$ and the continuity with respect to $x$.\n\\end{proof}\n\nAn important remark about the conditional expectation is that for $f \\in C_c^{\\infty}(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ we may have ${\\mathcal{A}}_L f \\notin C_c^{\\infty}(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$: the conditional expectation creates a jump at the boundary. 
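The example below exhibits this jump via a capped linear statistic $f(\mu) = \mu(\eta) \wedge 3$; the capping mechanism can be sketched numerically (the piecewise-linear radial profile used here is a hypothetical stand-in for the smooth plateau function of the example):

```python
def eta(r):
    # hypothetical radial plateau profile: 1 on [0, 1/2], linear down to 0 at 1
    if r <= 0.5:
        return 1.0
    if r <= 1.0:
        return 2.0 * (1.0 - r)
    return 0.0

def f(radii):
    # capped linear statistic of a configuration given by particle radii
    return min(sum(eta(r) for r in radii), 3.0)

mu_three = [0.1, 0.2, 0.6]          # two particles in B_{1/2}, one in the ring
mu_four  = [0.1, 0.2, 0.6, 0.55]    # a fourth particle enters the ring

assert f(mu_three) < 3.0   # below the cap: the value still moves with particles
assert f(mu_four) == 3.0   # cap reached: the value is frozen at 3
```

Once the cap is active, the conditional expectation given the particles in the plateau region no longer varies smoothly in the remaining particles, which is the source of the discontinuity.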
\nHere we give an example of the conditional expectation for $\\mathbb{E}_{\\rho}[f \\vert \\mathcal F_{B_r}]$, which is easier to state but shares the same defect as ${\\mathcal{A}}_L f$.\n\n\\begin{example}\\label{ex:CounterExample}\nLet $\\eta \\in C^{\\infty}_c({\\mathbb{R}^d})$ be a plateau function: \n\\begin{align*}\n\\supp(\\eta) \\subset B_1, \\quad 0 \\leq \\eta \\leq 1, \\quad \\eta \\equiv 1 \\text{ in } B_{\\frac{1}{2}}, \\quad \\eta(x) = \\eta(\\vert x \\vert) \\text{ decreasing with respect to } \\vert x \\vert,\n\\end{align*}\nand define the function \n\\begin{align*}\nf(\\mu) = \\left(\\int_{{\\mathbb{R}^d}} \\eta(x) \\, \\d\\mu(x)\\right) \\wedge 3.\n\\end{align*}\nWe define the level set $B_r$ by \n\\begin{align*}\nB_r := \\left\\{x \\in {\\mathbb{R}^d} \\left| \\frac{1}{2} \\leq \\eta(x) \\leq 1 \\right. \\right\\}.\n\\end{align*}\nThen, we have $\\mathbb{E}_{\\rho}[f \\vert \\mathcal{F}_{B_r}] \\notin C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$.\n\\end{example}\n\\begin{proof}\nLet $\\mu_1 = \\mu \\mres B_r, \\mu_2 = \\mu \\mres (B_1 \\backslash B_r) $, then since $f$ depends only on the particles in $B_1$, we have that \n\\begin{align*}\n\\mathbb{E}_{\\rho}[f \\vert \\mathcal{F}_{B_r}] = \\mathbb{E}_{\\rho}\\left[\\left( \\mu_1(\\eta) + \\mu_2(\\eta)\\right) \\wedge 3 \\,\\middle\\vert\\, \\mathcal{F}_{B_r}\\right],\n\\end{align*}\nwhere the conditional expectation averages over $\\mu_2$. Let us choose a specific configuration to see that $\\mathbb{E}_{\\rho}[f \\vert \\mathcal{F}_{B_r}](\\mu)$ is not even continuous:\n\\begin{align*}\n\\mu_1 = \\delta_{x_1} + \\delta_{x_2} + \\delta_{x_3}, \\text{ where } x_1, x_2 \\in B_{\\frac{1}{2}}, x_3 \\in B_r \\backslash B_{\\frac{1}{2}}.\n\\end{align*}\nThen we can calculate that $2.5 \\leq \\mu_1(\\eta) < 3$ and $2.5 \\leq \\mathbb{E}_{\\rho}[f \\vert \\mathcal{F}_{B_r}](\\mu) < 3$. 
However, if we take another $\\mu_1$ with \n\\begin{align*}\n\\mu_1 = \\delta_{x_1} + \\delta_{x_2} + \\delta_{x_3} + \\delta_{x_4}, \\text{ where } x_1, x_2 \\in B_{\\frac{1}{2}}, x_3 \\in B_r \\backslash B_{\\frac{1}{2}}, x_4 \\in B_r,\n\\end{align*}\nthen we see that $\\mu_1(\\eta) \\geq 3$ and we have $\\mathbb{E}_{\\rho}[f \\vert \\mathcal{F}_{B_r}](\\mu) = 3$. Therefore, once the fourth particle $x_4$ enters $B_r$, the value of the function jumps to $3$. From this we conclude that $\\mathbb{E}_{\\rho}[f \\vert \\mathcal{F}_{B_r}] \\notin C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$.\n\\end{proof}\n\n\\bigskip\nTo make the conditional expectation more regular, we introduce its regularized version: for any $0 < \\epsilon < \\infty$, we define \n\\begin{align}\n{\\mathcal{A}}_{s,\\epsilon} f := \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} {\\mathcal{A}}_{s + t} f \\, dt.\n\\end{align}\nThen we have the following properties.\n\n\\begin{proposition}\nFor any $f \\in H^1_0(\\mathcal{M}_\\delta ({\\mathbb{R}^d}))$, the function ${\\mathcal{A}}_{s,\\epsilon} f \\in H^1_0(\\mathcal{M}_\\delta ({\\mathbb{R}^d}))$ and $\\left(\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s,\\epsilon} f)^2 \\right]\\right)_{s \\geq 0}$ is a $C^1$ increasing process.\n\\end{proposition}\n\\begin{proof}\nWe calculate the formula for $\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s,\\epsilon} f)^2 \\right]$:\n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s,\\epsilon} f)^2 \\right] = \\frac{1}{\\epsilon^2} \\int_{0}^{\\epsilon} \\int_{0}^{\\epsilon}\\mathbb{E}_{\\rho}\\left[ {\\mathcal{A}}_{s+t_1} f {\\mathcal{A}}_{s+t_2} f \\right] \\, \\d t_1 \\d t_2.\n\\end{align*}\nSince the martingale property gives $\\mathbb{E}_{\\rho}\\left[ {\\mathcal{A}}_{s+t_1} f {\\mathcal{A}}_{s+t_2} f \\right] = \\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+ (t_1 \\wedge t_2)} f)^2 \\right] $, we obtain that \n\\begin{align}\\label{eq:E2Regularized}\n\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s,\\epsilon} 
f)^2 \\right] = \\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon - t) \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s+ t} f)^2 \\right] \\, \\d t.\n\\end{align}\nWe next compute its derivative: for $0 < h < \\epsilon$,\n\\begin{align*}\n& \\lim_{h \\searrow 0}\\frac{1}{h}\\left(\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+h, \\epsilon} f)^2 \\right] - \\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s, \\epsilon} f)^2 \\right]\\right) \\\\\n= & \\lim_{h \\searrow 0} \\frac{2}{h\\epsilon^2}\\left( \\int_{\\epsilon}^{\\epsilon + h} (\\epsilon + h - t)\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+t} f)^2 \\right] \\, \\d t - \\int_{0}^{h} (\\epsilon - t)\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+t} f)^2 \\right] \\, \\d t + \\int_{h}^{\\epsilon} h \\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+t} f)^2 \\right] \\, \\d t \\right)\\\\\n= & \\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} \\left(\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+t} f)^2 \\right] - \\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s} f)^2 \\right]\\right) \\, \\d t. \n\\end{align*}\nIn the last step we use the right continuity, and this proves that \n\\begin{align}\\label{eq:E2ds}\n\\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s, \\epsilon} f)^2 \\right] = \\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} \\left(\\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s+t} f)^2 \\right] - \\mathbb{E}_{\\rho}\\left[ ({\\mathcal{A}}_{s} f)^2 \\right]\\right) \\, \\d t.\n\\end{align}\nThen we calculate the partial derivative, using the formula \n\\begin{align}\\label{eq:Derivative}\n\\mathbf{e}_k \\cdot \\nabla {\\mathcal{A}}_{s,\\epsilon} f(\\mu, x) = \\lim_{h \\to 0}\\frac{1}{h}\\left( \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - {\\mathcal{A}}_{s + t} f(\\mu)\\right)\\, dt\\right).\n\\end{align}\nWe study this derivative case by case.\n\\begin{enumerate}\n\\item Case $x \\in Q_{s+\\epsilon}^c$. 
In this case, in \\cref{eq:Derivative}, for $h$ small enough and any $t \\in [0, \\epsilon]$, neither $x$ nor $x + h \\mathbf{e}_k$ is in $Q_{t+s}$, so we have ${\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) = {\\mathcal{A}}_{s + t} f(\\mu \\mres Q_{s+t})$. This implies that \\cref{eq:Derivative} is $0$ in this case.\n\\item Case $x \\in \\op{Q_{s}}$. In this case, for $h$ small enough and any $t \\in [0, \\epsilon]$, both $x$ and $x + h \\mathbf{e}_k$ are in $Q_{t+s}$, and we have \n\\begin{align*}\n\\mathbf{e}_k \\cdot \\nabla {\\mathcal{A}}_{s,\\epsilon} f(\\mu, x) &= \\lim_{h \\to 0}\\frac{1}{h}\\left( \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - {\\mathcal{A}}_{s + t} f(\\mu)\\right)\\, dt\\right) \\\\\n&= \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\lim_{h \\to 0}\\frac{1}{h} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - {\\mathcal{A}}_{s + t} f(\\mu)\\right)\\, dt \\\\\n&= {\\mathcal{A}}_{s,\\epsilon} \\left(\\mathbf{e}_k \\cdot \\nabla f(\\mu, x)\\right).\n\\end{align*}\n\\item Case $x \\in \\left( Q_{s+\\epsilon} \\backslash \\op{Q_{s}} \\right)$, $\\mathbf{e}_k$ is the normal direction. In this case, we first study the situation $\\mathbf{e}_k = \\mathbf{n}(x)$ and $h \\searrow 0$. 
We divide \\cref{eq:Derivative} into three terms:\n\\begin{align*}\n\\mathbf{e}_k \\cdot \\nabla {\\mathcal{A}}_{s,\\epsilon} f(\\mu, x) &= \\mathbf{I} + \\mathbf{II} + \\mathbf{III} \\\\\n\\mathbf{I} &= \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\Ind{s + t < \\tau(x)} \\frac{1}{h} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - {\\mathcal{A}}_{s + t} f(\\mu)\\right)\\, dt \\\\\n\\mathbf{II} &= \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\Ind{s + t \\geq \\tau(x) + h} \\frac{1}{h} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - {\\mathcal{A}}_{s + t} f(\\mu)\\right)\\, dt \\\\ \n\\mathbf{III} &= \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\Ind{\\tau(x) \\leq s + t < \\tau(x) + h} \\frac{1}{h} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) - {\\mathcal{A}}_{s + t} f(\\mu)\\right)\\, dt. \n\\end{align*}\nThe terms $\\mathbf{I}$ and $\\mathbf{II}$ are handled as in the previous cases, and we have \n\\begin{align*}\n\\lim_{h \\searrow 0} \\mathbf{I} + \\mathbf{II} = \\frac{1}{\\epsilon} \\int_{0}^{\\epsilon} \\Ind{s+t > \\tau(x)} {\\mathcal{A}}_{s+t} \\left(\\mathbf{e}_k \\cdot \\nabla f(\\mu, x)\\right)\\, dt.\n\\end{align*}\nFor the term $\\mathbf{III}$, since $x + h\\mathbf{e}_k \\notin Q_{s+t}$, we have ${\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x + h \\mathbf{e}_k}) = {\\mathcal{A}}_{s + t} f(\\mu - \\delta_x)$. Then, we use the right continuity of ${\\mathcal{A}}_s f$:\n\\begin{align*}\n\\lim_{h \\searrow 0}\\mathbf{III} &= \\lim_{h \\searrow 0}\\frac{1}{h \\epsilon} \\int_{\\tau(x)-s}^{\\tau(x)-s+h} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x) - {\\mathcal{A}}_{s + t} f(\\mu)\\right) \\, dt \\\\\n&= \\frac{1}{\\epsilon} \\left({\\mathcal{A}}_{\\tau(x)} f(\\mu - \\delta_x) - {\\mathcal{A}}_{\\tau(x)} f(\\mu)\\right). 
\n\\end{align*}\nWe should also treat the case where the partial derivative is taken from the left; there the term requiring attention is $\\mathbf{III}'$, given by \n\\begin{align*}\n\\lim_{h \\searrow 0}\\mathbf{III}' &= \\lim_{h \\searrow 0} \\frac{1}{h\\epsilon} \\int_{0}^{\\epsilon} \\Ind{\\tau(x)-h \\leq s + t < \\tau(x)} \\left({\\mathcal{A}}_{s + t} f(\\mu - \\delta_x) - {\\mathcal{A}}_{s + t} f(\\mu - \\delta_x + \\delta_{x - h \\mathbf{e}_k})\\right)\\, dt \\\\\n&= \\frac{1}{\\epsilon} \\left({\\mathcal{A}}_{\\tau(x)-}f(\\mu - \\delta_x) - {\\mathcal{A}}_{\\tau(x)-}f(\\mu - \\delta_x + \\delta_{x-})\\right).\n\\end{align*}\nIn the last step, we use the left continuity of $s \\mapsto {\\mathcal{A}}_{s} f$ once the particle on the boundary is removed. Thanks to \\Cref{cor:LeftLimit}, we know this limit coincides with that of $\\mathbf{III}$. In conclusion, we can use the notation of \\cref{eq:Bracket}\n\\begin{align}\n\\Delta A_{\\tau(x)} f = A_{\\tau(x)} f - A_{\\tau(x)-} f,\n\\end{align}\nto unify the two cases: this correction is nothing but the jump of the c\\`adl\\`ag martingale.\n\n\\item Case $x \\in \\left( Q_{s+\\epsilon} \\backslash \\op{Q_{s}} \\right)$, $\\mathbf{e}_k$ is not the normal direction. This case is simpler than the normal direction: the term $\\mathbf{III}$ does not appear in the discussion above.\n\\end{enumerate}\nIn summary, we obtain that for any $ x \\in \\supp(\\mu)$,\n\\begin{align}\n\\nabla {\\mathcal{A}}_{s,\\epsilon} f(\\mu, x) = \\left\\{ \\begin{array}{ll}\n {\\mathcal{A}}_{s,\\epsilon} \\left(\\nabla f(\\mu, x)\\right) & x \\in \\op{Q_{s}};\\\\\n \\frac{1}{\\epsilon} \\int_{\\tau(x)-s}^{\\epsilon} {\\mathcal{A}}_{s+t} \\left(\\nabla f(\\mu, x)\\right)\\, dt - \\frac{\\mathbf{n}(x)}{\\epsilon}\\Delta A_{\\tau(x)} f & x \\in \\left( Q_{s+\\epsilon} \\backslash \\op{Q_{s}} \\right);\\\\\n 0 & x \\in Q_{s+\\epsilon}^c.\\end{array} \\right. 
\n\\end{align}\nFinally, we prove that ${\\mathcal{A}}_{s,\\epsilon} f \\in H^1_0(\\mathcal{M}_\\delta ({\\mathbb{R}^d}))$. It is clear that ${\\mathcal{A}}_{s,\\epsilon} f \\in L^2(\\mathcal{M}_\\delta ({\\mathbb{R}^d}))$ by Jensen's inequality for conditional expectation. For its gradient, we have \n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[\\int_{{\\mathbb{R}^d}} \\vert\\nabla {\\mathcal{A}}_{s,\\epsilon} f \\vert^2 \\, \\d \\mu \\right] & \\leq \\mathbb{E}_{\\rho}\\left[\\int_{Q_s} \\vert {\\mathcal{A}}_{s,\\epsilon} \\left(\\nabla f\\right) \\vert^2 \\, \\d \\mu \\right] + 2\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\left\\vert \\frac{1}{\\epsilon} \\int_{\\tau(x)-s}^{\\epsilon} {\\mathcal{A}}_{s+t} \\left(\\nabla f \\right)\\, dt \\right\\vert^2 \\, \\d \\mu \\right]\\\\\n& \\qquad + \\frac{2}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\vert\\Delta A_{\\tau(x)} f \\vert^2 \\, \\d \\mu \\right].\n\\end{align*}\nFor the first and second terms above, we use Jensen's inequality for conditional expectation and the Cauchy-Schwarz inequality:\n\\begin{multline*}\n\\mathbb{E}_{\\rho}\\left[\\int_{Q_s} \\vert {\\mathcal{A}}_{s,\\epsilon} \\left(\\nabla f\\right) \\vert^2 \\, \\d \\mu \\right] + 2\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\left\\vert \\frac{1}{\\epsilon} \\int_{\\tau(x)-s}^{\\epsilon} {\\mathcal{A}}_{s+t} \\left(\\nabla f \\right)\\, dt \\right\\vert^2 \\, \\d \\mu \\right] \\\\\n\\leq \\mathbb{E}_{\\rho}\\left[\\int_{Q_s} \\vert \\nabla f \\vert^2 \\, \\d \\mu \\right] + \\frac{2}{\\epsilon}\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\vert \\nabla f \\vert^2 \\, \\d \\mu \\right].\n\\end{multline*}\nThe third term is in fact the sum of the squares of the jumps of the martingale $\\left(\\mathscr{M}^{f}_s\\right)_{s \\geq 0}$, so we use \\Cref{cor:Bracket} to write 
\n\\begin{multline*}\n\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\vert\\Delta A_{\\tau(x)} f \\vert^2 \\, \\d \\mu \\right] = \\mathbb{E}_{\\rho}\\left[\\sum_{s < \\tau \\leq s+\\epsilon} \\vert \\Delta \\mathscr{M}^{f}_{\\tau} \\vert^2 \\right] = \\mathbb{E}_{\\rho}\\left[[\\mathscr{M}^{f}]_{s+\\epsilon} - [\\mathscr{M}^{f}]_{s} \\right] \\\\\n= \\mathbb{E}_{\\rho}\\left[\\left(\\mathscr{M}^{f}_{s+\\epsilon}\\right)^2 - \\left(\\mathscr{M}^{f}_{s}\\right)^2 \\right] =\\mathbb{E}_{\\rho}\\left[\\left( A_{s+\\epsilon} f\\right)^2 - \\left( A_{s} f\\right)^2 \\right],\n\\end{multline*}\nwhere in the last step we also use the $L^2$ isometry for martingales. This yields the desired result ${\\mathcal{A}}_{s,\\epsilon} f \\in H^1_0(\\mathcal{M}_\\delta ({\\mathbb{R}^d}))$. \n\\end{proof}\n\n\n\\subsection{Proof of \\Cref{thm:localization}}\nIn this part, we prove \\Cref{thm:localization} in three steps.\n\\begin{proof}\n\\textit{Step 1: Setting up.}\nWe introduce a regularized version of the multi-scale functional \\cref{eq:FunctionalMultiscaleContinous}:\n\\begin{equation}\\label{eq:FunctionalMultiscaleRegulized1}\nS_{k,K,\\beta, \\epsilon}(f) = \\alpha_k \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{k,\\epsilon} f)^2\\right] + \\int_{k}^K \\alpha_s \\left(\\frac{d}{ds}\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s, \\epsilon} f)^2\\right]\\right) \\, ds + \\alpha_K \\mathbb{E}_{\\rho}\\left[ f^2 - \\left({\\mathcal{A}}_{K, \\epsilon} f\\right)^2\\right],\n\\end{equation}\nwhere we recall that $\\alpha_s = \\exp\\left(\\frac{s}{\\beta}\\right)$. 
The advantage is that $\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s, \\epsilon} f)^2\\right]$ is $C^1$ in $s$ by \\cref{eq:E2ds}, so we can treat the integral as a usual Riemann integral and apply integration by parts to obtain the equivalent definition\n\\begin{equation}\\label{eq:FunctionalMultiscaleRegulized2}\nS_{k,K,\\beta, \\epsilon}(f) = \\alpha_K \\mathbb{E}_{\\rho}\\left[f^2\\right] - \\int_{k}^K \\alpha'_s \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s,\\epsilon} f)^2\\right]\\, ds.\n\\end{equation}\nOur goal is to calculate $\\frac{d}{dt}S_{k,K,\\beta, \\epsilon}(u_t)$, and we pay attention to $\\frac{d}{dt}\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s,\\epsilon} u_t)^2\\right]$. We use the formula from \\cref{eq:E2Regularized}:\n\\begin{align*}\n\\frac{d}{dt}\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s,\\epsilon} u_t)^2\\right] &= \\frac{d}{dt}\\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon - r) \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s+ r} u_t)^2 \\right] \\, \\d r \\\\\n&= \\frac{d}{dt}\\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon - r) \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s+ r} u_t) u_t\\right] \\, \\d r. \n\\end{align*}\nWe define \n\\begin{align}\\label{eq:Atilde}\n\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f := \\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon - r) {\\mathcal{A}}_{s+ r} f \\, \\d r,\n\\end{align}\nand it satisfies properties similar to those of ${\\mathcal{A}}_{s,\\epsilon} f$. 
For example, we also have the formula \n\\begin{align}\\label{eq:AtildeDerivative}\n\\nabla \\tilde{{\\mathcal{A}}_{s,\\epsilon}}f(\\mu, x) = \\left\\{ \\begin{array}{ll}\n \\tilde{{\\mathcal{A}}_{s,\\epsilon}} \\left(\\nabla f(\\mu, x)\\right) & x \\in \\op{Q_{s}};\\\\\n \\frac{2}{\\epsilon^2} \\left(\\int_{\\tau(x)-s}^{\\epsilon}(\\epsilon - r){\\mathcal{A}}_{s+r} \\left(\\nabla f(\\mu, x)\\right)\\, dr - (s+ \\epsilon - \\tau(x))\\Delta A_{\\tau(x)} f \\mathbf{n}(x)\\right) & x \\in \\left( Q_{s+\\epsilon} \\backslash \\op{Q_{s}} \\right);\\\\\n 0 & x \\in Q_{s+\\epsilon}^c.\\end{array} \\right. \n\\end{align}\nThen we have \n\\begin{align}\\label{eq:dtSemigroupConditional}\n\\frac{d}{dt}\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s,\\epsilon} u_t)^2\\right] = \\frac{d}{dt}\\mathbb{E}_{\\rho}\\left[\\left(\\tilde{{\\mathcal{A}}_{s,\\epsilon}}u_t\\right) u_t\\right] = \\mathbb{E}_{\\rho}\\left[\\left(\\frac{d}{dt}\\tilde{{\\mathcal{A}}_{s,\\epsilon}}u_t\\right) u_t\\right] + \\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}}u_t (\\L u_t)\\right].\n\\end{align}\nWe first study the semigroup. 
For a function $g \\in H^1_0(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$, we recall the definition \n\\begin{align*}\ng_t(\\mu) = P_t g(\\mu) := \\mathbb{E}_{\\rho}\\left[g(\\mu_t) | \\mathscr{F}_0\\right].\n\\end{align*}\nThe semigroup property gives \n\\begin{align*}\n\\frac{d}{dt} P_t g(\\mu) = \\L P_t g(\\mu) \\Rightarrow \\partial_t g_t(\\mu) = \\L g_t(\\mu).\n\\end{align*}\nNow we take $g = \\tilde{{\\mathcal{A}}_{s,\\epsilon}} u_0$; then we have \n\\begin{align*}\ng_t(\\mu) &= P_t \\left(\\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon-r)\\mathbb{E}_{\\rho}[u(\\mu) | \\mathcal{F}_{Q_{s+r}}] \\, \\d r \\right) \\\\\n&= \\mathbb{E}_{\\rho}\\left[\\left(\\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon-r)\\mathbb{E}_{\\rho}[u(\\mu_t) | \\mathcal{F}_{Q_{s+r}}] \\, \\d r \\right)\\left \\vert \\mathscr{F}_0 \\right.\\right] \\\\\n&= \\frac{2}{\\epsilon^2} \\int_{0}^{\\epsilon} (\\epsilon-r)\\mathbb{E}_{\\rho}\\left[ \\mathbb{E}_{\\rho}\\left[ u(\\mu_t)\\left \\vert \\mathscr{F}_0 \\right.\\right] | \\mathcal{F}_{Q_{s+r}}\\right] \\, \\d r \\\\\n&= \\tilde{{\\mathcal{A}}_{s,\\epsilon}} u_t(\\mu). 
\n\\end{align*}\nTherefore, we have $\\frac{d}{dt} \\tilde{{\\mathcal{A}}_{s,\\epsilon}} u_t(\\mu)= \\L \\tilde{{\\mathcal{A}}_{s,\\epsilon}} u_t(\\mu)$; substituting this back into \\cref{eq:dtSemigroupConditional} and using reversibility, we obtain \n\\begin{align*}\n\\frac{d}{dt}\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s,\\epsilon} u_t)^2\\right] = 2\\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}} u_t (\\L u_t)\\right].\n\\end{align*}\nWe conclude that\n\\begin{align}\\label{eq:ThmLocPfSetup}\n\\frac{d}{dt} S_{k,K,\\beta, \\epsilon}(u_t) = 2\\alpha_K \\mathbb{E}_{\\rho}\\left[u_t (\\L u_t)\\right] + \\int_{k}^K 2\\alpha'_s \\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}}u_t (-\\L u_t)\\right]\\, ds.\n\\end{align}\n\n\\textit{Step 2: Estimate of a localized Dirichlet energy.}\nIn this step, we will give an estimate for the term $\\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}}u_t (-\\L u_t)\\right]$ appearing in \\cref{eq:ThmLocPfSetup}. We will establish the following lemma.\n\n\n\\begin{lemma}\nFor any $f \\in H^1_0(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$, we define \n\\begin{equation}\\label{eq:I}\nI^f_s := \\mathbb{E}_{\\rho}\\left[ \\int_{Q_s} \\nabla f \\cdot \\a \\nabla f \\, \\d\\mu \\right],\n\\end{equation}\nthen, for $\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f$ introduced in \\cref{eq:Atilde} and any $s, \\theta, \\epsilon \\in (0,\\infty)$, we have \n\\begin{equation}\\label{eq:Perturbation}\n\\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f (-\\L f)\\right] \\leq I^f_{s-1} + \\Lambda\\left(I^f_{s} - I^f_{s-1}\\right) + \\Lambda\\left(\\frac{\\theta}{\\epsilon} + 1\\right) \\left(I^f_{s+\\epsilon} - I^f_{s}\\right) + \\frac{\\Lambda}{2\\theta} \\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[\\left({\\mathcal{A}}_{s,\\epsilon} f\\right)^2 \\right].\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFrom \\cref{eq:AtildeDerivative}, we can decompose the quantity 
$\\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f (-\\L f)\\right]$ into three terms\n\\begin{equation}\\label{eq:PerThreeTerms}\n\\begin{split}\n\\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f (-\\L f)\\right] & = \\underbrace{\\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s-1}} \\nabla (\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f) \\cdot \\a \\nabla f \\, \\d\\mu\\right]}_{\\text{\\cref{eq:PerThreeTerms}-a}} + \\underbrace{\\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\nabla (\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f) \\cdot \\a \\nabla f \\, \\d\\mu\\right]}_{\\text{\\cref{eq:PerThreeTerms}-b}} \\\\\n& \\qquad + \\underbrace{\\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s+\\epsilon} \\backslash Q_{s}} \\nabla (\\tilde{{\\mathcal{A}}_{s,\\epsilon}} f) \\cdot \\a \\nabla f \\, \\d\\mu\\right]}_{\\text{\\cref{eq:PerThreeTerms}-c}}.\n\\end{split}\n\\end{equation}\n\nFor the first term \\cref{eq:PerThreeTerms}-a, since $x \\in Q_{s-1}$, the coefficient $\\a$ is $\\mathcal F_{Q_s}$-measurable, hence also $\\mathcal F_{Q_{s+r}}$-measurable for every $r \\geq 0$. 
We use the formulas \\cref{eq:AtildeDerivative} and \\cref{eq:Atilde}, and apply Jensen's inequality for the conditional expectation:\n\\begin{align*}\n\\text{\\cref{eq:PerThreeTerms}-a} &= \\frac{2}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s-1}} \\int_{0}^{\\epsilon} (\\epsilon - r) {\\mathcal{A}}_{s+ r} (\\nabla f) \\cdot \\a \\nabla f \\, \\d r \\, \\d\\mu\\right] \\\\\n&= \\frac{2}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s-1}} \\int_{0}^{\\epsilon} (\\epsilon - r) \\mathbb{E}_{\\rho}\\left[{\\mathcal{A}}_{s+ r} (\\nabla f) \\cdot \\a {\\mathcal{A}}_{s+ r} (\\nabla f) \\, \\vert \\mathcal{F}_{Q_{s+r}}\\right] \\, \\d r \\, \\d\\mu\\right] \\\\\n&\\leq \\frac{2}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s-1}} \\int_{0}^{\\epsilon} (\\epsilon - r) \\mathbb{E}_{\\rho}\\left[\\nabla f \\cdot \\a \\nabla f \\, \\vert \\mathcal{F}_{Q_{s+r}}\\right] \\, \\d r \\, \\d\\mu\\right]\\\\\n&= \\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s-1}} \\nabla f \\cdot \\a \\nabla f \\, \\d\\mu\\right].\n\\end{align*}\n\nThe second term \\cref{eq:PerThreeTerms}-b is similar, but $\\a$ is no longer $\\mathcal{F}_{Q_s}$-measurable. 
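As an aside, the conditional Jensen step used for term (a) above, namely that $\mathbb{E}[X \vert \mathcal{F}] \cdot a\, \mathbb{E}[X \vert \mathcal{F}] \leq \mathbb{E}[X \cdot a X \vert \mathcal{F}]$ when $a$ is $\mathcal{F}$-measurable and positive semi-definite, can be sanity-checked numerically. The following sketch is only an illustration on a toy six-point probability space; the random data and the block structure are my choices and are not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy probability space: 6 equally likely outcomes; the sigma-algebra F is
# generated by the partition {0,1,2} | {3,4,5}.  X is an R^2-valued random
# vector and a(w) is an F-measurable (block-constant) positive-definite matrix.
X = rng.normal(size=(6, 2))
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]

def random_spd(dim, rng):
    m = rng.normal(size=(dim, dim))
    return m @ m.T + np.eye(dim)  # symmetric positive definite by construction

a_blocks = [random_spd(2, rng), random_spd(2, rng)]

# On each block: E[X|F] . a E[X|F]  <=  E[X . a X | F]  (conditional Jensen).
gaps = []
for block, a in zip(blocks, a_blocks):
    cond_mean = X[block].mean(axis=0)              # E[X | F] on this block
    lhs = cond_mean @ a @ cond_mean
    rhs = np.mean([x @ a @ x for x in X[block]])   # E[X . a X | F] on this block
    gaps.append(rhs - lhs)

assert all(g >= -1e-12 for g in gaps)
```

The inequality holds blockwise because the quadratic form $x \mapsto x \cdot a x$ is convex whenever $a$ is positive semi-definite.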
We first use Young's inequality\n\\begin{align*}\n\\text{\\cref{eq:PerThreeTerms}-b} &\\leq \\frac{2}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\int_{0}^{\\epsilon} (\\epsilon - r) {\\mathcal{A}}_{s+ r} (\\nabla f) \\cdot \\a \\nabla f \\, \\d r \\, \\d\\mu\\right] \\\\\n&\\leq \\frac{\\Lambda}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\int_{0}^{\\epsilon} (\\epsilon - r) \\left( \\left\\vert{\\mathcal{A}}_{s+ r} (\\nabla f)\\right\\vert^2 + \\left\\vert\\nabla f \\right\\vert^2 \\right) \\, \\d r \\, \\d\\mu\\right].\n\\end{align*}\nThen, for the part with the conditional expectation, we use Jensen's inequality and the uniform bound $1 \\leq \\a \\leq \\Lambda$ to get \n\\begin{align*}\n\\frac{\\Lambda}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\int_{0}^{\\epsilon} (\\epsilon - r) \\left\\vert{\\mathcal{A}}_{s+ r} (\\nabla f)\\right\\vert^2 \\, \\d r \\, \\d\\mu\\right] & \\leq \\frac{\\Lambda}{2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\vert \\nabla f \\vert^2 \\d\\mu\\right] \\\\\n& \\leq \\frac{\\Lambda}{2}\\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\nabla f \\cdot \\a \\nabla f \\d\\mu\\right]. 
\n\\end{align*}\nThis concludes that $\\text{\\cref{eq:PerThreeTerms}-b} \\leq \\Lambda \\mathbb{E}_{\\rho}\\left[ \\int_{Q_s \\backslash Q_{s-1}} \\nabla f \\cdot \\a \\nabla f \\d\\mu\\right]$.\n\nFor the third term \\cref{eq:PerThreeTerms}-c, we use \\cref{eq:AtildeDerivative} and obtain \n\\begin{align*}\n\\text{\\cref{eq:PerThreeTerms}-c} &\\leq \\text{\\cref{eq:PerThreeTerms}-c1} + \\text{\\cref{eq:PerThreeTerms}-c2} \\\\\n\\text{\\cref{eq:PerThreeTerms}-c1} &= \\frac{2}{\\epsilon^2}\\left\\vert\\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} \\int_{\\tau(x)-s}^{\\epsilon} (\\epsilon - r) {\\mathcal{A}}_{s+ r} (\\nabla f) \\cdot \\a \\nabla f \\, \\d r \\, \\d\\mu\\right] \\right\\vert\\\\\n\\text{\\cref{eq:PerThreeTerms}-c2} &= \\frac{2}{\\epsilon^2}\\left\\vert\\mathbb{E}_{\\rho}\\left[\\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} (s+ \\epsilon - \\tau(x))\\Delta A_{\\tau(x)} f \\mathbf{n}(x) \\cdot \\a \\nabla f \\, \\d\\mu\\right] \\right\\vert.\n\\end{align*}\nThe part \\cref{eq:PerThreeTerms}-c1 is similar to that of \\cref{eq:PerThreeTerms}-b and we have \n\\begin{align*}\n\\text{\\cref{eq:PerThreeTerms}-c1} \\leq \\Lambda \\mathbb{E}_{\\rho}\\left[ \\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} \\nabla f \\cdot \\a \\nabla f \\d\\mu\\right].\n\\end{align*}\nWe study the part \\cref{eq:PerThreeTerms}-c2 with Young's inequality\n\\begin{align*}\n&\\frac{2}{\\epsilon^2}\\left\\vert\\mathbb{E}_{\\rho}\\left[\\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} (s+ \\epsilon - \\tau(x))\\Delta A_{\\tau(x)} f \\mathbf{n}(x) \\cdot \\a \\nabla f \\, \\d\\mu \\right] \\right\\vert \\\\\n\\leq & \\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} (s+ \\epsilon - \\tau(x))\\Delta \\vert A_{\\tau(x)} f \\vert^2 \\, \\d\\mu\\right] + \\frac{\\theta\\Lambda}{\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} (s + \\epsilon - \\tau(x)) \\vert 
\\nabla f \\vert^2 \\, \\d\\mu \\right] \\\\\n\\leq & \\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}}} (s+ \\epsilon - \\tau(x))\\Delta \\vert A_{\\tau(x)} f \\vert^2 \\, \\d\\mu\\right] + \\frac{\\theta\\Lambda}{\\epsilon}\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\nabla f \\cdot \\a \\nabla f \\, \\d\\mu \\right]. \\\\\n\\end{align*}\nThe first part is in fact the bracket process defined in \\Cref{cor:Bracket}\n\\begin{align*}\n\\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } (s+ \\epsilon - \\tau(x))\\Delta \\vert A_{\\tau(x)} f \\vert^2 \\, \\d\\mu\\right] = \\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\sum_{s \\leq \\tau \\leq s+\\epsilon} (s+ \\epsilon - \\tau) \\vert \\Delta \\mathscr{M}^{f}_{\\tau} \\vert^2 \\right].\n\\end{align*}\nThen we develop it with Fubini's theorem and the $L^2$-isometry that ${\\mathbb{E}_{\\rho}\\left[\\left[\\mathscr{M}^f \\right]_{s}\\right] = \\mathbb{E}_{\\rho}\\left[(\\mathscr{M}^f_s)^2\\right] = \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_s f)^2\\right]}$\n\\begin{align*}\n\\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\sum_{s \\leq \\tau \\leq s+\\epsilon} (s+ \\epsilon - \\tau) \\vert \\Delta \\mathscr{M}^{f}_{\\tau} \\vert^2 \\right] &= \\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\sum_{s \\leq \\tau \\leq s+\\epsilon} \\int_{s}^{s+\\epsilon} \\Ind{\\tau \\leq r \\leq s+ \\epsilon} \\, \\d r \\vert \\Delta \\mathscr{M}^{f}_{\\tau} \\vert^2 \\right]\\\\\n&= \\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{s}^{s+\\epsilon} \\sum_{s \\leq \\tau \\leq r} \\vert \\Delta \\mathscr{M}^{f}_{\\tau} \\vert^2 \\, \\d r\\right]\\\\\n&= \\frac{\\Lambda}{\\theta\\epsilon^2}\\mathbb{E}_{\\rho}\\left[\\int_{s}^{s+\\epsilon} \\left[\\mathscr{M}^f \\right]_{r} - \\left[\\mathscr{M}^f \\right]_{s} \\, \\d r\\right] 
\\\\\n&= \\frac{\\Lambda}{\\theta\\epsilon^2}\\int_{0}^{\\epsilon} \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s+r} f)^2\\right] - \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_s f)^2\\right] \\, \\d r \\\\\n&= \\frac{\\Lambda}{2\\theta} \\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s, \\epsilon} f)^2\\right].\n\\end{align*}\nIn the last step, we use the identity \\cref{eq:E2ds}. This concludes that \n\\begin{align*}\n\\text{\\cref{eq:PerThreeTerms}-c} \\leq \\left(\\frac{\\theta\\Lambda}{\\epsilon} + \\Lambda\\right)\\mathbb{E}_{\\rho}\\left[\\int_{ Q_{s+\\epsilon} \\backslash \\op{Q_{s}} } \\nabla f \\cdot \\a \\nabla f \\, \\d\\mu \\right] + \\frac{\\Lambda}{2\\theta} \\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_{s, \\epsilon} f)^2\\right],\n\\end{align*}\nand we combine all the estimates for the three terms \\cref{eq:PerThreeTerms}-a, \\cref{eq:PerThreeTerms}-b, \\cref{eq:PerThreeTerms}-c to obtain the desired result in \\cref{eq:Perturbation}.\n\\end{proof}\n\n\n\\textit{Step 3: End of the proof.} We take $k = \\sqrt{t}$, $K > k$, and put the estimate \\cref{eq:Perturbation} into \\cref{eq:ThmLocPfSetup} with $\\theta, \\epsilon, \\beta> 0$ to be fixed, \n\\begin{align*}\n&\\frac{d}{dt} S_{k,K,\\beta, \\epsilon}(u_t) \\\\\n=& 2\\alpha_K \\mathbb{E}_{\\rho}\\left[u_t (\\L u_t)\\right] + \\int_{k}^K 2\\alpha'_s \\mathbb{E}_{\\rho}\\left[\\tilde{{\\mathcal{A}}_{s,\\epsilon}}u_t (-\\L u_t)\\right]\\, ds \\\\\n\\leq & -2\\alpha_K I^{u_t}_{\\infty} + \\int_{k}^{K} 2\\alpha'_s \\left\\{I^{u_t}_{s-1} + \\Lambda\\left(I^{u_t}_{s} - I^{u_t}_{s-1}\\right) + \\Lambda\\left(\\frac{\\theta}{\\epsilon} + 1\\right) \\left(I^{u_t}_{s+\\epsilon} - I^{u_t}_{s}\\right) + \\frac{\\Lambda}{2\\theta} \\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[\\left({\\mathcal{A}}_{s,\\epsilon} u_t\\right)^2 \\right]\\right\\} \\, \\d s.\n\\end{align*}\nWe recall that $\\alpha'_s = \\frac{\\alpha_s}{\\beta}$; then a direct computation (reindexing the integrals in $s$) yields\n\\begin{align*}\n\\frac{d}{dt} 
S_{k,K,\\beta, \\epsilon}(u_t) \\leq &\\int_{k-1}^{K+\\epsilon} \\left(-2 \\alpha_{K \\wedge (s+1)} + 2\\Lambda(\\alpha_{s+1} - \\alpha_s) + 2\\Lambda \\left(\\frac{\\theta}{\\epsilon} + 1\\right)(\\alpha_s - \\alpha_{s-\\epsilon})\\right)\\, d I^{u_t}_{s} \\\\\n& \\qquad + \\int_{0}^{k-1} -2 \\alpha_k \\, d I^{u_t}_{s} + \\int_{K+\\epsilon}^{\\infty} -2 \\alpha_K \\, d I^{u_t}_{s} + \\frac{\\Lambda}{\\beta\\theta} \\int_{k}^K \\alpha_s \\left(\\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[\\left({\\mathcal{A}}_{s,\\epsilon} u_t\\right)^2 \\right]\\right) \\, \\d s. \n\\end{align*}\nWe see that $2\\Lambda(\\alpha_{s+1} - \\alpha_s) \\simeq \\frac{2\\Lambda}{\\beta}\\alpha_s$ and $2\\Lambda \\left(\\frac{\\theta}{\\epsilon} + 1\\right)(\\alpha_s - \\alpha_{s-\\epsilon}) \\simeq 2\\Lambda\\left(\\frac{\\theta}{\\beta} + \\frac{\\epsilon}{\\beta}\\right)\\alpha_s$.\nOne can choose the parameters $\\theta = \\frac{\\beta}{2\\Lambda}$, $\\epsilon = \\frac{1}{2}$; then, for $\\beta > 4\\Lambda$, the integrand of the integral with respect to $d I^{u_t}_{s}$ is negative. 
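The negativity claim can also be checked exactly, without the $\simeq$ approximations. The sketch below assumes the exponential weight $\alpha_s = e^{s/\beta}$ (one concrete choice consistent with $\alpha'_s = \alpha_s/\beta$; the normalization is irrelevant and this choice is my assumption, not stated in the proof), replaces $\alpha_{K \wedge (s+1)}$ by its lower bound $\alpha_s$ for $s \leq K$, and verifies that the resulting coefficient is negative with $\theta = \beta/(2\Lambda)$ and $\epsilon = 1/2$ whenever $\beta > 4\Lambda$:

```python
import math

def coefficient(s, beta, Lam, eps=0.5):
    """Coefficient of dI_s (divided by 2, with alpha_{K ∧ (s+1)} replaced by
    its lower bound alpha_s), assuming alpha_s = exp(s / beta) and the choice
    theta = beta / (2 * Lam)."""
    theta = beta / (2 * Lam)
    alpha = lambda t: math.exp(t / beta)
    return (-alpha(s)
            + Lam * (alpha(s + 1) - alpha(s))
            + Lam * (theta / eps + 1) * (alpha(s) - alpha(s - eps)))

# The coefficient should be negative for beta > 4 * Lam, uniformly in s.
results = [coefficient(s, beta, Lam)
           for Lam in (1.0, 2.0, 5.0)
           for beta in (4.1 * Lam, 10.0 * Lam, 100.0 * Lam)
           for s in (1.0, 10.0, 100.0)]
assert all(c < 0 for c in results)
```

Since the coefficient scales multiplicatively by $\alpha_s > 0$, its sign does not depend on $s$; the grid over $s$ only confirms this.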
We use the definition \\cref{eq:FunctionalMultiscaleRegulized1} and obtain that \n\\begin{align*}\n\\frac{d}{dt}S_{k,K,\\beta, \\epsilon}(u_t) \\leq \\frac{\\Lambda}{\\beta\\theta} \\int_{k}^K \\alpha_s \\left(\\frac{d}{ds} \\mathbb{E}_{\\rho}\\left[\\left({\\mathcal{A}}_{s,\\epsilon} u_t\\right)^2 \\right]\\right) \\, \\d s \\leq \\frac{2\\Lambda^2}{\\beta^2} S_{k,K,\\beta, \\epsilon}(u_t),\n\\end{align*}\nwhich implies, for $k = \\sqrt{t} \\geq l_u$ (with $l_u$ the diameter of the support of $u_0$ in \\Cref{thm:localization}), that\n\\begin{align*}\n\\alpha_K \\mathbb{E}_{\\rho}\\left[(u_t)^2 - ({\\mathcal{A}}_{K,\\epsilon} u_t)^2\\right] \\leq S_{k,K,\\beta,\\epsilon}(u_t) \\leq \\exp\\left(\\frac{2\\Lambda^2 t}{\\beta^2}\\right)S_{k,K,\\beta,\\epsilon}(u_0) = \\exp\\left(\\frac{2\\Lambda^2 t}{\\beta^2}\\right)\\alpha_k \\mathbb{E}_{\\rho}\\left[(u_0)^2\\right].\n\\end{align*} \nFinally, we remark that \n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[(u_t - {\\mathcal{A}}_{K+\\epsilon} u_t)^2\\right] = \\mathbb{E}_{\\rho}\\left[(u_t)^2 - ({\\mathcal{A}}_{K+\\epsilon} u_t)^2\\right] \\leq \\mathbb{E}_{\\rho}\\left[(u_t)^2 - ({\\mathcal{A}}_{K,\\epsilon} u_t)^2\\right],\n\\end{align*}\nand choosing $\\beta = \\sqrt{t}$ gives the desired result, after slightly shrinking the value of $K$.\n\\end{proof}\n\n\n\\section{Spectral inequality, perturbation and entropy}\\label{sec:Toolbox}\nIn this section, we collect several estimates used in the proof of the main result. They may also be of independent interest.\n\n\\subsection{Spectral inequality}\nThe spectral inequality is an important topic in probability theory and the study of Markov processes, and it has a counterpart in analysis known as the \nPoincar\\'e inequality. \n\nLet $L > l >0$ with $L\/l \\in \\mathbb{N}$, set $q = (L\/l)^d$, and denote by $\\{Q_l^i\\}_{1 \\leq i\\leq q}$ the partition of $Q_L$ into small cubes of scale $l$. 
Let $\\mathbf{M}_{L,l} = (\\mathbf{M}_1, \\mathbf{M}_2, \\cdots, \\mathbf{M}_q)$ be the random vector with $\\mathbf{M}_i = \\mu\\left(Q^i_l\\right)$, and define ${\\mathcal{B}_{L,l} f := \\mathbb{E}_{\\rho}\\left[f \\vert \\mathbf{M}_{L,l} \\right]}$; then we have the following estimate.\n\\begin{proposition}[Spectral inequality]\\label{prop:AB}\nThere exists a finite positive number $R_0(d)$, such that for any $0 < l < L < \\infty$, $L\/l \\in \\mathbb{N}$, we have the estimate, for any $f \\in H^1(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$, \n\\begin{align}\\label{eq:AB}\n\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_L f - \\mathcal{B}_{L,l} f)^2\\right] \\leq R_0 l^2 \\mathbb{E}_{\\rho}\\left[\\int_{Q_L} \\vert\\nabla f \\vert^2 \\, \\d \\mu\\right].\n\\end{align}\n\\end{proposition}\n\\begin{proof}\nWe first prove a simple corollary of the Efron--Stein inequality \\cite[Chapter 3]{boucheron2013concentration}: let $f_n \\in C^1({\\mathbb{R}^{d \\times n}})$ and $X = (X_1, X_2, \\cdots, X_n)$, where $(X_i)_{1 \\leq i \\leq n}$ is a family of independent ${\\mathbb{R}^d}$-valued random variables, each uniformly distributed in $Q_l$; then the Efron--Stein inequality states\n\\begin{align}\\label{eq:EfronStein}\n\\mathbb{V}\\!\\mathrm{ar}\\left[ f_n(X)\\right] \\leq \\sum_{i=1}^n \\mathbb{E}\\left[ \\left(f_n(X) - f_n(X^{i})\\right)^2\\right],\n\\end{align}\nwhere $f_n(X^{i}) := \\mathbb{E}\\left[ f_n(X) \\vert X_1 \\cdots X_{i-1}, X_{i+1}, \\cdots X_n\\right]$. 
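As a toy numerical check of the conditional-expectation form of the Efron--Stein inequality, $\mathbb{V}\mathrm{ar}[f] \leq \sum_i \mathbb{E}[(f - f^i)^2]$ with $f^i$ the conditional expectation given the other coordinates, the sketch below uses deterministic midpoint quadrature for two independent uniform variables on $[0,1]$; the test function $f(x,y) = xy$ and the grid size are my illustrative choices:

```python
import numpy as np

# Midpoint-rule grid on [0,1]^2: a deterministic quadrature check of
#   Var[f(X1, X2)] <= sum_i E[(f - f^i)^2],   f^i = E[f | X_j, j != i],
# for X1, X2 independent uniform on [0,1] and the toy choice f(x, y) = x * y.
n = 400
t = (np.arange(n) + 0.5) / n
x, y = np.meshgrid(t, t, indexing="ij")
f = x * y

var = f.var()
f1 = f.mean(axis=0, keepdims=True)  # E[f | X2]: integrate out x
f2 = f.mean(axis=1, keepdims=True)  # E[f | X1]: integrate out y
rhs = ((f - f1) ** 2).mean() + ((f - f2) ** 2).mean()

# Analytic values for this f: Var = 7/144, and each E[(f - f^i)^2] = 1/36.
assert var <= rhs + 1e-9
```

For this particular $f$ the two sides are $7/144 \approx 0.049$ and $1/18 \approx 0.056$, so the inequality holds with a visible margin.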
From this, we compute the expectation with respect to $X_i$ of $\\left(f_n(X) - f_n(X^{i})\\right)^2$, and apply the standard Poincar\\'e inequality in $X_i$:\n\\begin{align*}\n\\mathbb{E}_{X_i}\\left[ \\left(f_n(X) - f_n(X^{i})\\right)^2\\right] &= \\fint_{Q_l} \\left(f_n(x_1, x_2, \\cdots x_n ) - \\fint_{Q_l} f_n(x_1, x_2, \\cdots x_n ) \\, \\d x_i\\right)^2 \\, \\d x_i \\\\\n& \\leq C(d) l^2 \\fint_{Q_l} \\vert \\nabla_{x_i} f_n \\vert^2 (x_1, x_2, \\cdots x_n ) \\, \\d x_i,\\\\\n\\Longrightarrow \\mathbb{E} \\left[ \\left(f_n(X) - f_n(X^{i})\\right)^2\\right] &\\leq C(d) l^2 \\mathbb{E}[\\vert \\nabla_{x_i} f_n(X)\\vert^2].\n\\end{align*}\nSumming over $i$, we obtain \n\\begin{align}\\label{eq:SpectralLemma}\n\\mathbb{V}\\!\\mathrm{ar}\\left[ f_n(X)\\right] \\leq C(d)l^2 \\sum_{i=1}^n \\mathbb{E}[\\vert \\nabla_{x_i} f_n(X)\\vert^2].\n\\end{align}\n\n\nWe then apply \\cref{eq:SpectralLemma} to prove \\cref{eq:AB}. We first decompose\n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_L f - \\mathcal{B}_{L,l} f)^2\\right] = \\sum_{M \\in \\mathbb{N}^{q}} \\Pr[\\mathbf{M}_{L,l} = M] \\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_L f - \\mathcal{B}_{L,l} f)^2 \\vert \\mathbf{M}_{L,l} = M \\right].\n\\end{align*}\nConditionally on $\\{\\mathbf{M}_{L,l} = M\\}$, the expectation of ${\\mathcal{A}}_L f$ is $\\mathcal{B}_{L,l} f(M)$ and the particles are distributed uniformly in their respective small cubes of size $l$; thus we can apply \\cref{eq:SpectralLemma} to get \n\\begin{align*}\n\\mathbb{E}_{\\rho}\\left[({\\mathcal{A}}_L f - \\mathcal{B}_{L,l} f)^2 \\vert \\mathbf{M}_{L,l} = M \\right] &= \\mathbb{V}\\!\\mathrm{ar}_{\\rho}\\left[ {\\mathcal{A}}_L f \\vert \\mathbf{M}_{L,l}= M \\right] \\\\\n&\\leq C(d)l^2 \\mathbb{E}_{\\rho}\\left[\\int_{Q_L}\\vert \\nabla {\\mathcal{A}}_L f \\vert^2 \\, \\d \\mu \\left\\vert \\mathbf{M}_{L,l} = M \\right.\\right] \\\\\n&\\leq C(d)l^2 \\mathbb{E}_{\\rho}\\left[\\int_{Q_L}\\vert {\\mathcal{A}}_L \\nabla f \\vert^2 \\, \\d \\mu 
\\left\\vert \\mathbf{M}_{L,l} = M \\right.\\right].\n\\end{align*}\nSumming over $M$ concludes \\cref{eq:AB}.\n\\end{proof}\n\n\n\\subsection{Perturbation}\nA similar version of the following lemma appears in \\cite{janvresse1999relaxation}, where the authors give a sketch of the proof; here we prove it in our model in more detail. We define a localized Dirichlet form for a Borel set $U \\subset {\\mathbb{R}^d}$ by \n\\begin{align}\\label{eq:DirichletLocal}\n\\mathcal{E}_U(f,g) = \\mathbb{E}_{\\rho}[g(-\\Delta_{U} f)] := \\mathbb{E}_{\\rho}\\left[\\int_U \\nabla g(\\mu, x) \\cdot \\nabla f(\\mu, x) \\, \\d \\mu(x)\\right],\n\\end{align}\nand we use ${\\mathcal{E}_U(f) := \\mathcal{E}_U(f,f)}$ and ${\\mathcal{E}(f) := \\mathcal{E}_{{\\mathbb{R}^d}}(f)}$ for short.\n\\begin{proposition}[Perturbation]\\label{prop:Perturbation}\nLet $u \\in C^{\\infty}_c(\\mathcal{M}_\\delta({\\mathbb{R}^d}))$ and $l_k := l_u + 2k$ be the minimal scale such that for any $\\vert h\\vert \\leq k, \\supp(\\tau_h u) \\subset Q_{l_k}$, then for any $g$ such that ${\\mathbb{E}_{\\rho}[g] = 1}, {\\sqrt{g} \\in H^1_0(\\mathcal{M}_\\delta({\\mathbb{R}^d}))}$, we have \n\\begin{align}\n\\left(\\mathbb{E}_{\\rho}[g(u - \\tau_h u)]\\right)^2 \\leq C(d) (l_k \\Vert u \\Vert_{L^\\infty})^2 \\mathcal{E}_{Q_{l_k}}(\\sqrt{g}).\n\\end{align}\n\\end{proposition}\n\\begin{proof}\nThe proof of this proposition relies on the following lemma:\n\\begin{lemma}[Lemma 4.2 of \\cite{janvresse1999relaxation}]\\label{lem:4.2}\nLet $(\\Omega, \\P, \\mathcal F)$ be a probability space and let ${\\langle f, g \\rangle = \\int_{\\Omega} fg \\, \\d \\P}$ denote the standard inner product on $L^2(\\Omega, \\P, \\mathcal F)$. Let $A$ be a non-negative definite symmetric operator on $L^2(\\Omega, \\P, \\mathcal F)$, which has $0$ as a simple eigenvalue with corresponding eigenfunction the constant function $1$, and second eigenvalue $\\delta > 0$ (the spectral gap). 
Let $V$ be a function of mean zero, $\\langle 1, V\\rangle = 0$, and assume that $V$ is essentially bounded. Denote by $\\lambda_{\\epsilon}$ the principal eigenvalue of $-A + \\epsilon V$ given by the variational formula \n\\begin{align}\\label{eq:4.2.1}\n\\lambda_{\\epsilon} = \\sup_{\\Vert f \\Vert_{L^2} = 1} \\langle f, (-A + \\epsilon V)f\\rangle.\n\\end{align}\nThen for $0 < \\epsilon < \\delta(2 \\Vert V \\Vert_{L^{\\infty}})^{-1}$,\n\\begin{align}\\label{eq:4.2.2}\n0 \\leq \\lambda_{\\epsilon} \\leq \\frac{\\epsilon^2 \\langle V, A^{-1}V\\rangle}{1 - 2\\Vert V \\Vert_{L^{\\infty}} \\epsilon \\delta^{-1}}.\n\\end{align}\n\\end{lemma}\n\n\nIn our context, we need to find a suitable framework in which to apply this lemma. Since, for any ${\\vert h \\vert \\leq k}$, $(u - \\tau_h u)$ is $\\mathcal F_{Q_{l_k}}$-measurable, we have \n\\begin{equation}\\label{eq:PerSetUp}\n\\begin{split}\n\\mathbb{E}_{\\rho}[g(u - \\tau_h u)] &= \\mathbb{E}_{\\rho}[({\\mathcal{A}}_{Q_{l_k}} g)(u - \\tau_h u)]\\\\\n&= \\sum_{n=0}^{\\infty}\\Pr[\\mu(Q_{l_k}) = n]\\mathbb{E}_{\\rho}[({\\mathcal{A}}_{Q_{l_k}} g)(u - \\tau_h u) | \\mu(Q_{l_k}) = n].\n\\end{split}\n\\end{equation}\nThen we focus on the estimate of $\\mathbb{E}_{\\rho}[({\\mathcal{A}}_{Q_{l_k}} g)(u - \\tau_h u) | \\mu(Q_{l_k}) = n]$: to shorten the notation, we use $\\P_{\\rho, n}$ for the probability $\\Pr[\\cdot \\vert \\mu(Q_{l_k}) = n]$ and $\\mathbb{E}_{\\rho, n}$ for its associated expectation. 
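Before applying the lemma in our setting, it may help to see it in action in a finite-dimensional toy example. In the sketch below, the probability space is the uniform measure on $n$ points, $A$ is the Laplacian of the complete graph $K_n$ (so $A\mathbf{1} = 0$ and the spectral gap is $\delta = n$), and $V$ is a bounded mean-zero vector acting by multiplication; these concrete choices are mine, for illustration only:

```python
import numpy as np

# Probability space: uniform measure on n points, <f, g> = (1/n) sum f_i g_i.
n = 5
A = n * np.eye(n) - np.ones((n, n))          # K_n Laplacian: eigenvalues 0, n
delta = float(n)                             # spectral gap
V = np.array([2.0, -1.0, -1.0, 1.0, -1.0])   # <1, V> = 0, ||V||_inf = 2
eps = 0.5                                    # < delta / (2 ||V||_inf) = 1.25

# lambda_eps = sup_{||f|| = 1} <f, (-A + eps V) f>; under the uniform measure
# this is the top eigenvalue of the symmetric matrix -A + eps * diag(V).
lam = np.linalg.eigvalsh(-A + eps * np.diag(V)).max()

# <V, A^{-1} V> in the normalized inner product, computed with the
# Moore-Penrose pseudo-inverse (A is invertible on mean-zero vectors).
quad = (V @ np.linalg.pinv(A) @ V) / n
bound = eps**2 * quad / (1 - 2 * np.abs(V).max() * eps / delta)

assert 0.0 <= lam <= bound + 1e-12
```

Here second-order perturbation theory predicts $\lambda_\epsilon \approx \epsilon^2 \langle V, A^{-1}V\rangle = 0.08$, comfortably below the bound $\approx 0.133$.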
We then apply \\Cref{lem:4.2} on the probability space $(\\Omega, \\mathcal F_{Q_{l_k}}, \\P_{\\rho, n})$, where we set $V = u - \\tau_h u$, and the symmetric non-negative operator $A$ is $(-\\Delta_{Q_{l_k}})$ defined for any $f \\in H^1(\\mathcal{M}_\\delta(Q_{l_k}))$ by\n\\begin{align*}\n\\mathbb{E}_{\\rho, n}[f (-\\Delta_{Q_{l_k}} f)] := \\mathbb{E}_{\\rho, n}\\left[\\int_{Q_{l_k}} \\vert \\nabla f\\vert^2 \\, \\d \\mu \\right].\n\\end{align*}\nWe check that this setting satisfies the conditions of \\Cref{lem:4.2}: \n\\begin{itemize}\n\\item Spectral gap for $A = -\\Delta_{Q_{l_k}}$: by \\cref{eq:SpectralLemma} we have the spectral gap $\\delta = (l_k)^{-2}$ for any function $f \\in H^1(\\mathcal{M}_\\delta(Q_{l_k}))$ with ${\\mathbb{E}_{\\rho, n}[f] = 0}$\n\\begin{align*}\n\\mathbb{E}_{\\rho, n}[f^2] \\leq (l_k)^2\\mathbb{E}_{\\rho, n}[f (-\\Delta_{Q_{l_k}} f)].\n\\end{align*}\n\\item Mean zero for $V = u - \\tau_h u$: under the probability $\\Pr$ this is clear by the translation invariance of the Poisson point process, while under $\\P_{\\rho, n}$ this requires some computation. By the definition of $l_k$, we know that ${\\supp(u) \\subset Q_{l_u}}$, thus we write ${u(\\mu) = \\tilde{u}_m(x_1, x_2, \\cdots x_m)}$ in the case $\\mu \\mres Q_{l_u} = \\sum_{i=1}^m \\delta_{x_i}$. Then we have \n\\begin{align*}\n\\mathbb{E}_{\\rho, n}[u] &= \\sum_{m=0}^n \\P_{\\rho, n}[\\mu(Q_{l_u})=m]\\mathbb{E}_{\\rho, n}[u \\vert \\mu(Q_{l_u})=m] \\\\\n&= \\sum_{m=0}^n {n \\choose m} \\left(\\frac{\\vert Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)^m \\left(1 - \\frac{\\vert Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)^{n-m} \\fint_{(Q_{l_u})^m} \\tilde{u}_m(x_1, \\cdots x_m) \\, \\d x_1 \\cdots \\d x_m,\n\\end{align*}\nbecause under $\\P_{\\rho, n}$, the number of particles in $Q_{l_u}$ follows the law $\\text{Bin}\\left(n, \\frac{\\vert Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)$ and, conditionally on this number, the particles are uniformly distributed. 
We use a similar argument for the expectation of $\\tau_h u$, now considering the particles in $\\tau_{-h} Q_{l_u} \\subset Q_{l_k}$:\n\\begin{align*}\n\\mathbb{E}_{\\rho, n}[\\tau_h u]=& \\sum_{m=0}^n \\P_{\\rho, n}[\\mu(\\tau_{-h} Q_{l_u})=m]\\mathbb{E}_{\\rho, n}[\\tau_h u \\vert \\mu(\\tau_{-h} Q_{l_u})=m] \\\\\n=& \\sum_{m=0}^n {n \\choose m} \\left(\\frac{\\vert \\tau_{-h} Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)^m \\left(1 - \\frac{\\vert \\tau_{-h} Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)^{n-m} \\\\\n& \\qquad \\qquad \\times \\fint_{(\\tau_{-h} Q_{l_u})^m} \\tilde{u}_m(x_1+h, \\cdots x_m+h) \\, \\d x_1 \\cdots \\d x_m \\\\\n=& \\sum_{m=0}^n {n \\choose m} \\left(\\frac{\\vert Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)^m \\left(1 - \\frac{\\vert Q_{l_u} \\vert}{\\vert Q_{l_k}\\vert}\\right)^{n-m} \\fint_{(Q_{l_u})^m} \\tilde{u}_m(x_1, \\cdots x_m) \\, \\d x_1 \\cdots \\d x_m.\n\\end{align*}\nThus we establish $\\mathbb{E}_{\\rho, n}[\\tau_h u] = \\mathbb{E}_{\\rho, n}[u]$ and $V$ has mean zero.\n\\end{itemize}\n\n\nNow we can apply the lemma: for any $0 < \\epsilon < \\frac{1}{8}(\\Vert u\\Vert_{L^\\infty} (l_k)^2)^{-1}$, we put $\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\/\\left(\\mathbb{E}_{\\rho, n}[{\\mathcal{A}}_{Q_{l_k}}g]\\right)^{1\/2}$ in the place of $f$ in \\cref{eq:4.2.1} and combine with \\cref{eq:4.2.2} to obtain that \n\\begin{align*}\n\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g (u- \\tau_h u)\\right] &\\leq 2\\epsilon\\mathbb{E}_{\\rho, n}[(u- \\tau_h u) ((-\\Delta_{Q_{l_k}})^{-1}(u- \\tau_h u))]\\mathbb{E}_{\\rho, n}[{\\mathcal{A}}_{Q_{l_k}}g]\\\\\n& \\qquad + \\frac{1}{\\epsilon}\\mathbb{E}_{\\rho, n}\\left[\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\\right)\\right].\n\\end{align*}\nSince $(-\\Delta_{Q_{l_k}})^{-1}:L^2 \\to H^1$ is well-defined on mean-zero functions thanks to the Lax--Milgram theorem and the spectral gap, we get \n\\begin{multline}\\label{eq:PtbSmall}\n\\mathbb{E}_{\\rho, 
n}\\left[{\\mathcal{A}}_{Q_{l_k}}g (u- \\tau_h u)\\right] \\\\\n\\leq 8\\epsilon (l_k)^2 \\Vert u\\Vert^2_{L^{\\infty}}\\mathbb{E}_{\\rho, n}[{\\mathcal{A}}_{Q_{l_k}}g]+ \\frac{1}{\\epsilon}\\mathbb{E}_{\\rho, n}\\left[\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\\right)\\right].\n\\end{multline}\nFor the case $\\epsilon > \\frac{1}{8}(\\Vert u\\Vert_{L^\\infty} (l_k)^2)^{-1}$, we have $1 \\leq 8 \\epsilon \\Vert u\\Vert_{L^\\infty} (l_k)^2$, thus we use the trivial bound \n\\begin{align}\\label{eq:PtbBig}\n\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g (u- \\tau_h u)\\right] \\leq 2\\Vert u\\Vert_{L^\\infty} \\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g\\right] \\leq 16 \\epsilon (l_k)^2\\Vert u\\Vert^2_{L^\\infty}\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g\\right].\n\\end{align}\nWe combine \\cref{eq:PtbSmall} and \\cref{eq:PtbBig} and optimize over $\\epsilon$ to obtain \n\\begin{align*}\n\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g (u- \\tau_h u)\\right] \\leq 4\nl_k \\Vert u\\Vert_{L^\\infty} \\left(\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g\\right]\\mathbb{E}_{\\rho, n}\\left[\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\\right)\\right]\\right)^{\\frac{1}{2}}.\n\\end{align*}\nHere the term $\\mathbb{E}_{\\rho, n}\\left[\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\\right)\\right]$ is not yet the desired term, and we still need to remove the conditional expectation. 
For any $x \\in Q_{l_k}$, using the Cauchy--Schwarz inequality we have \n\\begin{align*}\n{\\mathcal{A}}_{Q_{l_k}}\\left(\\frac{\\vert\\nabla g(\\mu, x)\\vert^2}{g(\\mu)}\\right) {\\mathcal{A}}_{Q_{l_k}}g(\\mu) \\geq \\left({\\mathcal{A}}_{Q_{l_k}}\\vert \\nabla g(\\mu, x)\\vert\\right)^2 \\geq \\left\\vert {\\mathcal{A}}_{Q_{l_k}} \\nabla g(\\mu, x)\\right\\vert^2.\n\\end{align*}\nThus, in the term ${\\mathbb{E}_{\\rho, n}\\left[\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\\right)\\right]}$ we have \n\\begin{align*}\n\\mathbb{E}_{\\rho, n}\\left[\\sqrt{{\\mathcal{A}}_{Q_{l_k}} g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{{\\mathcal{A}}_{Q_{l_k}} g}\\right)\\right] &= \\frac{1}{4}\\mathbb{E}_{\\rho, n}\\left[\\int_{Q_{l_k}} \\frac{\\left\\vert {\\mathcal{A}}_{Q_{l_k}} \\nabla g(\\mu, x)\\right\\vert^2}{{\\mathcal{A}}_{Q_{l_k}}g(\\mu)}\\, \\d \\mu \\right]\\\\\n&\\leq \\frac{1}{4}\\mathbb{E}_{\\rho, n}\\left[\\int_{Q_{l_k}} {\\mathcal{A}}_{Q_{l_k}}\\left(\\frac{\\vert\\nabla g(\\mu, x)\\vert^2}{g(\\mu)}\\right)\\, \\d \\mu \\right]\\\\\n&= \\mathbb{E}_{\\rho, n}\\left[\\sqrt{g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{g}\\right)\\right].\n\\end{align*}\nUsing the translation invariance of $\\mu$, we obtain\n\\begin{align*}\n\\left\\vert\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g (u- \\tau_h u)\\right]\\right\\vert \\leq 4\nl_k \\Vert u\\Vert_{L^\\infty} \\left(\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g\\right]\\mathbb{E}_{\\rho, n}\\left[\\sqrt{g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{g}\\right)\\right]\\right)^{\\frac{1}{2}}.\n\\end{align*}\nWe put this back into \\cref{eq:PerSetUp} and use the Cauchy--Schwarz inequality:\n\\begin{align*}\n&\\left(\\mathbb{E}_{\\rho}[g(u - \\tau_h u)]\\right)^2 \\\\\n\\leq & 16(l_k \\Vert u\\Vert_{L^\\infty})^2\\left(\\sum_{n=0}^{\\infty}\\Pr[\\mu(Q_{l_k}) = n]\\left(\\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g\\right]\\mathbb{E}_{\\rho, n}\\left[\\sqrt{g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{g}\\right)\\right]\\right)^{\\frac{1}{2}}\\right)^2\\\\\n\\leq & 16(l_k \\Vert u\\Vert_{L^\\infty} )^2\\underbrace{\\left(\\sum_{n=0}^{\\infty}\\Pr[\\mu(Q_{l_k}) = n] \\mathbb{E}_{\\rho, n}\\left[{\\mathcal{A}}_{Q_{l_k}}g\\right]\\right)}_{= \\mathbb{E}_{\\rho}[{\\mathcal{A}}_{Q_{l_k}}g]= \\mathbb{E}_{\\rho}[g] = 1} \\left(\\sum_{n=0}^{\\infty}\\Pr[\\mu(Q_{l_k}) = n] \\mathbb{E}_{\\rho, n}\\left[\\sqrt{g} \\left((-\\Delta_{Q_{l_k}}) \\sqrt{g}\\right)\\right]\\right)\\\\\n=& 16(l_k \\Vert u\\Vert_{L^\\infty} )^2\\mathbb{E}_{\\rho}[\\sqrt{g}(-\\Delta_{Q_{l_k}} \\sqrt{g})].\n\\end{align*}\n\\end{proof}\n\n\n\n\\subsection{Entropy}\n\nWe recall the definition of the set of $\\delta$-good configurations for $\\frac{L}{l} \\in \\mathbb{N}$:\n\\begin{align*}\n\\mathcal C_{L,l,\\rho,\\delta} = \\left\\{M \\in \\mathbb{N}^{\\left(\\frac{L}{l}\\right)^d} \\left\\vert \\forall 1 \\leq i\\leq \\left(\\frac{L}{l}\\right)^d, \\left\\vert\\frac{M_i}{\\rho \\vert Q_l \\vert} - 1 \\right\\vert \\leq \\delta \\right.\\right\\}.\n\\end{align*}\n\\begin{lemma}[Bound for entropy]\\label{lem:Entropy}\nGiven $l \\geq 1$, $\\frac{L}{l} \\in \\mathbb{N}$ and $0 < \\delta < \\frac{\\rho}{2}$, for any $M \\in \\mathcal C_{L,l,\\rho,\\delta}$ we have the following bound for the entropy of $g_M$ defined in \\cref{eq:defgM}:\n\\begin{align}\\label{eq:Entropy}\nH(g_M) \\leq C(d, \\rho)\\left(\\frac{L}{l}\\right)^d \\left(\\log(l) + l^d\\delta^2\\right). 
\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nWe write\n\\begin{align*}\nH(g_M) = \\mathbb{E}_{\\rho}[g_M \\log(g_M)] = -\\mathbb{E}_{\\rho}[g_M \\log(\\Pr[\\mathbf{M}_{L,l} = M])].\n\\end{align*}\nIt suffices to prove an upper bound for $-\\log(\\Pr[\\mathbf{M}_{L,l} = M])$, which is \n\\begin{align}\\label{eq:LogPM}\n-\\log(\\Pr[\\mathbf{M}_{L,l} = M]) = -\\log\\left(\\prod_{i=1}^{\\left(\\frac{L}{l}\\right)^d} e^{-\\rho \\vert Q_l\\vert}\\frac{(\\rho \\vert Q_l\\vert)^{M_i}}{M_i !}\\right) = \\sum_{i=1}^{\\left(\\frac{L}{l}\\right)^d} -\\log\\left(e^{-\\rho \\vert Q_l\\vert}\\frac{(\\rho \\vert Q_l\\vert)^{M_i}}{M_i !}\\right).\n\\end{align}\nFor every term $M_i$, we set $\\delta_i := \\frac{M_i}{\\rho \\vert Q_l \\vert} - 1$, and use Stirling's formula upper bound ${n! \\leq e\\sqrt{n}\\left(\\frac{n}{e}\\right)^n}$, valid for any $n \\geq 1$:\n\\begin{align*}\n-\\log\\left(e^{-\\rho \\vert Q_l\\vert}\\frac{(\\rho \\vert Q_l\\vert)^{M_i}}{M_i !}\\right) & = \\rho \\vert Q_l\\vert - M_i \\log(\\rho \\vert Q_l\\vert)+ \\log(M_i !) \\\\\n&\\leq \\rho \\vert Q_l\\vert - M_i \\log(\\rho \\vert Q_l\\vert)+ \\log\\left(e \\sqrt{M_i}\\left(\\frac{M_i}{e}\\right)^{M_i}\\right) \\\\\n&\\leq \\rho\\vert Q_l\\vert\\left( \\frac{M_i}{\\rho\\vert Q_l\\vert} \\log\\left(\\frac{M_i}{\\rho\\vert Q_l\\vert}\\right) + 1 - \\frac{M_i}{\\rho\\vert Q_l\\vert}\\right) + \\frac{1}{2}\\log(M_i) \\\\\n&= \\rho\\vert Q_l\\vert\\left( (1+\\delta_i) \\underbrace{\\log\\left(1+\\delta_i\\right)}_{\\leq \\delta_i} - \\delta_i \\right) + \\frac{1}{2}\\underbrace{\\log(M_i)}_{\\leq C\\log(l)} \\\\\n&\\leq \\rho\\vert Q_l\\vert(\\delta_i)^2 + C\\log(l). \n\\end{align*}\nWe use $\\vert \\delta_i \\vert \\leq \\delta$, substitute back into \\cref{eq:LogPM}, and obtain the desired result.\n\\end{proof}\n\n\\bigskip\n\n\\subsection*{Acknowledgments}\nI am grateful to Jean-Christophe Mourrat for suggesting this topic and for inspiring discussions, and to Chenmin Sun and Jiangang Ying for helpful discussions. 
I would like to thank NYU Courant Institute for supporting the academic visit, where part of this project is carried out. \n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\\setcounter{equation}{0} \nThe most straightforward procedure for quantizing a Lagrangian\ndynamical system with configuration space $Q$ is to specify the\nquantum state by a wavefunction $\\psi:Q\\ra\\C$, but many other\nprocedures are possible \\cite{simwoo}. One may take $\\psi$ to be a section of a\ncomplex line bundle over $Q$. Clearly we recover the original\nquantization if the bundle is trivial. Depending on the topology of\n$Q$ there may be many inequivalent line bundles, giving rise to\nquantization ambiguity. The set of equivalence classes of line bundles\nis parametrized by $H^2(Q,\\Z)$. Quantization ambiguity can be used to\ngenerate fermionic or bosonic quantizations of the same classical\nsystem. The example of a charged particle under the influence of a\nmagnetic monopole studied by Dirac \\cite{dirac1, dirac2} clearly\ndemonstrates the utility of complex line bundles for analyzing quantum\ndynamics. See also, the discussion in \\cite{z} and \\cite[chapter\n6]{bmss}.\n\nIn the case of a Lagrangian field theory supporting topological\nsolitons, configuration space is typically the space of (sufficiently\nregular) maps from some 3-manifold (representing physical space) to\nsome target manifold. A famous example of this is the Skyrme model with target\nspace ${\\rm SU}(N)$, $N\\geq 2$ and physical space ${\\mathbb{R}}^3$. Here fermionic\nquantization is phenomenologically crucial since the solitons are\ntaken to represent protons and\nneutrons. 
\n\nRecall the distinction between bosons and fermions: a macroscopic ensemble\n of identical bosons behaves statistically as if\narbitrarily many particles can lie in the same state, while a\nmacroscopic ensemble of identical fermions behaves as if no two particles can\nlie in the same state. Photons are examples of particles with\nbosonic statistics and electrons are examples of particles with\nfermionic statistics. There are several theoretical models of particle\nstatistics. In quantum mechanics, the wavefunction representing a\nmultiparticle state is symmetric under exchange of any pair of\nidentical bosons, and antisymmetric under exchange of any pair of\nidentical fermions. In conventional perturbative quantum field\ntheory, commuting fields are used to represent bosons and\nanti-commuting fields are used to represent fermions. More precisely,\nbosons have commuting creation operators and fermions have\nanti-commuting creation operators. However, there are times when\nfermions may arise within a field theory with purely bosonic\nfundamental fields. This phenomenon is called emergent fermionicity,\nand it relies crucially on the topological properties of the\nunderlying configuration space of the model. Analogous instances of\nemergent fermionicity in quantum mechanical (rather than field\ntheoretic) settings are described in \\cite{B, sch,z} and \\cite[chapter\n7]{bmss}. \n\nWhen Skyrme originally proposed\nhis model, it was not clear how fermionic quantization of the solitons\ncould be achieved, a fundamental gap which he acknowledged\n\\cite{sky}. \nThe possibility of fermionically quantizing the Skyrme model was first \ndemonstrated (for $N=2$) by\nFinkelstein and Rubinstein \\cite{finrub}. Full consistency of their\nquantization procedure was established by Giulini \\cite{giu}. The\ncase $N\\geq 3$ was dealt with in a rather different way by Witten\n\\cite{wit} at the cost of introducing a new term into the Skyrme\naction. 
This was a crucial development, since the $N=3$ model is\nparticularly phenomenologically favoured. Although the approaches of\nFinkelstein-Rubinstein and Witten appear quite different, they can be\ntreated in a common framework, as demonstrated by Sorkin\n\\cite{sorkin}. See \\cite{at1} and \\cite{bmss} for exposition about\nwhere the Skyrme model fits into modern physics. Also see \\cite{at2},\nand \\cite{ramadas} for a discussion of fermionic quantization of\n${\\rm SU}(N)$ valued Skyrme fields. We will review the Finkelstein-Rubinstein and \nSorkin models of particle statistics in section \\ref{phy} below, and \ndescribe the obvious generalization of their models when the domain of the \nsoliton is something other than ${\\mathbb{R}}^3$.\n\nSpin is a property of a particle state associated with how it transforms\nunder spatial rotations. Let us briefly\nreview the usual mathematical models of spin. There are two\ngeneral situations. In one, space admits a global rotational\nsymmetry, in the other, it does not. \nWhen physical space is ${\\mathbb{R}}^3$, so space-time is usual\nMinkowski space, the classical rotational symmetries induce quantum\nsymmetries that are representations of the $Spin$ group. These\nirreducible representations are labeled by half integers (one half of\nthe number of boxes in a Young diagram for a representation of\nSU$(2)$.) The integral representations are honest representations\nof the rotation group, but the fractional ones are not. The spin of a \nparticle is the half integer labelling the ${\\rm SU}(2)$ representation under\nwhich its wavefunction transforms. If the spin is not an integer, the\nparticle\nis said to be spinorial.\nWhen physical\nspace is not ${\\mathbb{R}}^3$, one can instead consider the bundle of frames\nover physical space (vierbeins). Spin may then be modeled by the\naction of the rotation group on these frames \\cite{birrell}. The\neasiest case in this direction is when space-time admits a spin\nstructure. 
This reconciliation of spin with the possibility of a\ncurved space-time was an important discovery in the last century. The\nsituation is parallel in nonlinear models. In the easiest example\nof a model with solitons incorporating spin, the configuration space\nis not ${\\mathbb{R}}^3$, but a space of maps defined on ${\\mathbb{R}}^3$. The rotation group acts on\nsuch maps by precomposition. Sorkin described a model for the spin of\nsolitons which generalizes to any maps defined on any domain. We cover\nthis in section \\ref{phy}. \n\nAlthough spin and particle statistics have completely different conceptual\norigins, there are strong connexions between the two.\nThe spin-statistics\ntheorem asserts, in the context of axiomatic quantum field theory,\nthat particles are fermionic if and only if they are spinorial. Said\ndifferently, any particle with fractional spin is a fermion, and any\nparticle with integral spin is a boson. Analogous spin-statistics \ntheorems have been found for solitons also.\nSuch a theorem was proved for ${\\rm SU}(2)$ Skyrmions on ${\\mathbb{R}}^3$ by\nFinkelstein and Rubinstein \\cite{finrub}, and for arbitrary textures\non ${\\mathbb{R}}^3$ by Sorkin \\cite{sorkin}, using only the topology of\nconfiguration space. By a texture, we mean that the\nfield must approach a constant limiting value at spatial infinity, in\ncontrast to, say, monopoles and vortices. So pervasive is the link between fermionicity and\nspinoriality that the two are often elided. For example,\nRamadas argued that it was\npossible to fermionically quantize SU$(N)$ Skyrmions on ${\\mathbb{R}}^3$ because\nit was possible to spinorially quantize them \\cite{ramadas}.\n\nIsospin is the conserved quantity analogous to spin associated with\n an {\\em internal} rotational \nsymmetry. 
As in the simplest model of spin, a particle's \nisospin is a half integer labelling the representation of the quantum\nsymmetry group corresponding to the classical internal ${\\rm SO}(3)$\nsymmetry.\nIn all the models we consider, the target space\nhas a natural $SO(3)$ action, so it will make sense to determine\nwhether these models admit isospinorial quantization in the \nusual sense.\nKrusch \\cite{kru} has shown that ${\\rm SU}(2)$\nSkyrmions are spinorial if and only if they are isospinorial, which is\nin good agreement with nuclear phenomenology, since they represent\nbound states of nucleons. Recall that \nnucleon is the collective term for the proton\nand neutron. Both have spin and isospin 1\/2 but are in different\neigenstates of the 3rd component of isospin: the proton has $I_3=1\/2$,\nand the neutron has $I_3=-1\/2$. In general, the integrality of spin is\nunrelated to that of isospin. Strange hadrons can have half-integer\nspin and integer isospin (and vice-versa). For example, the\n$\\Sigma$-baryon has isospin $1$, but spin $1\/2$, and the $K$\nmesons have isospin $1\/2$ and spin $0$. One would hope, therefore,\nthat the correlation found by Krusch fails in the ${\\rm SU}(N)$ Skyrme\nmodel with $N>2$, since this is supposed to model low energy QCD with\nmore than two light quark flavours, and should therefore be able to\naccommodate the more exotic baryons. The mathematical reader\nunfamiliar with spin, isospin, strangeness etc.\\ may find the book by\nHalzen and Martin \\cite{HM} and the comprehensive listing of particles\nand their properties in \\cite{pdg} helpful.\n\nEmergent fermionicity, like (iso)spinoriality, \ncan often be incorporated into a quantum system\nby exploiting the possibility of \ndifferences between the classical and quantum symmetries of\n the space of quantum states \\cite[chapter 7]{bmss}. A spinning top is\n a well known example of this. 
The classical symmetry group is\n SO$(3)$, while the quantum symmetry group for some quantizations is\n SU$(2)$, \\cite{B, sch}. An electron in the field of a magnetic\n monopole is also a good example, \\cite[chapter 7]{bmss} and\n \\cite{z}. We emphasize, however, that \nemergent fermionicity does {\\em not} depend on any symmetry assumptions. In\n fact, the model of particle statistics that we mainly consider\n (Sorkin's model) depends only on the topology of the configuration\n space.\n\n\nThe purpose of this paper is to determine the quantization ambiguity\nfor a wide class of field theories supporting topological solitons of\ntexture type in 3 spatial dimensions. We will allow (the one\npoint compactification of) physical space to be any compact, oriented\n3-manifold $M$ and target space to be any Lie group $G$, or the\n2-sphere. The results use only the topology of configuration space and\nare completely independent of the dynamical details of the field\ntheory. They cover, therefore, the Faddeev-Hopf and general Skyrme\nmodels on any orientable domain. Our main mathematical results will\nbe the computation of the fundamental group and the rational or real\ncohomology ring of $Q$. (The universal coefficient theorem implies\nthat the rational dimension of the rational cohomology is equal to the\nreal dimension of the real cohomology. Homotopy theorists tend to\nexpress results using rational coefficients and physicists tend to use\nreal or complex coefficients.) We shall see that quantization ambiguity, as \ndescribed by $H^2(Q,\\Z)$, may be reconstructed from these\ndata. We also give geometric interpretations of the algebraic results\nwhich are useful for purposes of visualization. We then determine\nunder what circumstances the quantization ambiguity allows for\nconsistent fermionic quantization of Skyrmions and hopfions within the\nframeworks of Finkelstein-Rubinstein and Sorkin. 
We finally discuss the \nspinorial and isospinorial quantization of these models.\n\nThe main motivation for this work was to test the phenomenon of\nemergent fermionicity (i.e.\\ fermionic solitons in a bosonic theory)\nin the Skyrme model to see, in particular, whether it survives the\ngeneralization from domain ${\\mathbb{R}}^3$ to domain $M$. Our philosophy is\nthat a concept in field theory which cannot be properly formulated on\nany oriented domain should not be considered fundamental. In fact, we\nshall see indications that emergent fermionicity is insensitive to\nthe topology of $M$, but depends crucially on the topology of the\ntarget space. \n\nWe would like to thank Louis Crane, Steffen Krusch and Larry Weaver for \nhelpful conversations about particle physics.\n\n\\section{Notation and statement of results}\n\\label{not}\n\\setcounter{equation}{0}\n\nRecall that topologically distinct complex line bundles over a\ntopological space $Q$ are classified by $H^2(Q,\\Z)$. Note that $Q$\nneed have no differentiable structure to make sense of this: we can\ndefine $c_1(L) \\in H^2(Q,\\Z)$ corresponding to bundle $L$ directly in\nterms of the transition functions of $L$ rather than thinking of it as\nthe curvature of a unitary connexion on $L$ \\cite{simwoo}. So this\nclassification applies in the cases of interest. The free part of\n$H^2(Q,\\Z)$ is determined by $H^2(Q,{\\mathbb{R}})$, while its torsion is\nisomorphic to the torsion of $H_1(Q)$. For any topological space,\n$H_1(Q)$ is isomorphic to the abelianization of $\\pi_1(Q)$. The\nbundle classification problem is solved, therefore, once we know\n$\\pi_1(Q)$ and $H^2(Q,{\\mathbb{R}})$.\n\nIn this section we will define the configuration spaces that we\nconsider, set up notation and state our main topological\nresults. 
There are in fact many different but related configuration\nspaces that we could consider (for example spaces of free maps versus\nspaces of base pointed maps) and several different possibilities\ndepending on whether the domain is connected etc. We give clean\nstatements of our results for special cases in this section, and\ndescribe how to obtain the most general results in the next section.\nThe next section will also include some specific examples. Of course\nhomotopy theorists have studied the algebraic topology of spaces of\nmaps. The paper by Federer gives a spectral sequence whose limit\ngroup is a sum of composition factors of homotopy groups for a space\nof based maps \\cite{fed}. We do not need a way to compute -- we\nactually need the computations, and this is what is contained here.\n\nLet $M$ be a compact, oriented 3-manifold and $G$ be any Lie group.\nThen the first configuration space we consider is either\n$\\free{M}{G}$, the space of continuous maps $M\\ra G$, or $G^M$, the\nsubset of $\\free{M}{G}$ consisting of those maps which send a chosen\nbasepoint $x_0\\in M$ to $1\\in G$. We will address configuration spaces\nof $S^2$-valued maps later in this section and paper. Both\n$\\free{M}{G}$ and $G^M$ are given the compact open topology. In\npractice some Sobolev topology depending on the energy functional is\nprobably appropriate. The issue of checking the algebraic topology\narguments given in this paper for classes of Sobolev maps is\ninteresting. See \\cite{AK3} for a discussion of the correct setting\nand arguments generalizing the labels of the path components of these\nconfiguration spaces for Sobolev maps. 
The space $\\free{M}{G}$ is\nappropriate to the $G$-valued Skyrme model on a genuinely compact\ndomain, while $G^M$ is appropriate to the case where physical space\n$\\hat{M}$ is noncompact but has a connected end at infinity which, for\nfinite energy maps, may be regarded as a single point $x_0$ in the\none point compactification $M$ of $\\hat{M}$. \n\nThe space $G^M$ splits into disjoint path components which are labeled\nby certain cohomology classes on $M$ \\cite{AK1}. Let $(G^M)_0$ be the\nidentity component of the topological group $G^M$, that is, the path\ncomponent containing the constant map $u(x)=1$. In physical terms\n$(G^M)_0$ is the vacuum sector of the model. Then $(G^M)_0$ is a\nnormal subgroup of $G^M$ whose cosets are precisely the other path\ncomponents. The set of path components itself has a natural group\nstructure. As a set, the space of path components of the based maps is\ngiven by the following proposition.\n\n\\begin{prop}[Auckly-Kapitanski]\\label{components}\nLet $G$ be a compact, connected Lie group and $M$ be a connected,\nclosed $3$-manifold. The set of path components of $G^M$ is\n$$\nG^M\/(G^M)_0\\,\\cong\\, H^3(M; \\pi_3(G))\\times H^1(M; H_1(G)).\n$$\n\\end{prop} \n\n\\noindent\nThe reason the above proposition only describes the set of path\ncomponents is that Auckly and Kapitanski only establish an exact\nsequence\n$$\n0\\to H^3(M; \\pi_3(G))\\to G^M\/(G^M)_0\\to H^1(M; H_1(G))\\to 0.\n$$\nTo understand the group structure on the set of path components one\nwould have to understand a bit more about this sequence (e.g. does it\nsplit?). Every path component of $G^M$ is homeomorphic to $(G^M)_0$\nsince $\\widetilde{u}(x)\\mapsto u(x)^{-1}\\widetilde{u}(x)$ is a homeomorphism $u\\,\n(G^M)_0\\ra (G^M)_0$. 
Our first result computes the fundamental group\nof the configuration space of based $G$-valued maps.\n\n\\begin{thm}\\label{thm1} If $M$ is a closed, connected, orientable \n$3$-manifold, and $G$ is any Lie group, then \n$$\n\\pi_1(G^M)\\, \\cong\\,{\\mathbb Z}_2^s\\,\\oplus H^2(M; \\pi_3(G)).\n$$\nHere $s$ is the number of symplectic factors in the Lie algebra of $G$.\n\\end{thm}\n\n\\noindent\nOur next result gives the whole real cohomology ring\n$H^*((G^M)_0,{\\mathbb{R}})$, including its multiplicative structure. This, of\ncourse, includes the required computation of $H^2((G^M)_0,{\\mathbb{R}})$.\n\nSimilarly to Yang-Mills theory, there is a $\\mu$ map, \n$$\n\\mu:H_d(M;{\\mathbb R})\\otimes H^j(G;{\\mathbb R}) \\to\nH^{j-d}(G^M;{\\mathbb R}),\n$$\nand the cohomology ring is generated (as an algebra) by the images of\nthis map. To state the theorem we do not need the definition of this\n$\\mu$ map, but the definition may be found in subsection\n\\ref{cohomdesc} of section \\ref{geo}, in particular, equation\n(\\ref{mudef}).\n\n\\begin{thm}\\label{thm1co} Let $G$ be a compact, simply-connected, simple Lie \ngroup. The cohomology ring of any of these groups is a free\ngraded-commutative unital algebra over ${\\mathbb{R}}$ generated by degree $k$\nelements $x_k$ for certain values of $k$ (and with at most one\nexception at most one generator for any given degree). The values of\n$k$ depend on the group and are listed in table \\ref{tbl1} in section\n\\ref{hom}. Let $M$ be a closed, connected, orientable\n$3$-manifold. The cohomology ring $H^*((G^M)_0;{\\mathbb{R}})$ is the free\ngraded-commutative unital algebra over ${\\mathbb{R}}$ generated by the elements\n$\\mu(\\Sigma_j^d\\otimes x_k)$, where $\\Sigma_j^d$ form a basis for\n$H_d(M;{\\mathbb{R}})$ for $d>0$ and $k-d>0$. 
\n\\end{thm}\n\\noindent\nThe examples in the next section best illuminate the details of the\nabove theorem.\n\nTurning to the Faddeev-Hopf model, the configuration space of interest\nis either the space of free $S^2$-valued maps $\\free{M}{S^2}$, or\n$(S^2)^M$, the space of based continuous maps $M\\ra S^2$. One can\nanalyze $\\free{M}{S^2}$ in terms of $(S^2)^M$ by making use of the\nnatural fibration \n$$ (S^2)^M\\hookrightarrow\n\\free{M}{S^2}\\stackrel{\\pi}{\\ra}S^2,\\qquad \\pi:u(x)\\mapsto u(x_0).\n$$\n The fundamental cohomology class (orientation class),\n$\\mu_{S^2}\\in H^2(S^2,\\Z)$ plays an important role in the description\nof the mapping spaces of $S^2$-valued maps. The path components of\n$(S^2)^M$ were determined by Pontrjagin \\cite{pont}:\n\n\n\\begin{thm}[Pontrjagin] \n\\label{ponthm}\nLet $M$ be a closed, connected, oriented 3-manifold, and $\\mu_{S^2}$\nbe a generator of $H^2(S^2;\\Z)\\cong\\Z$. To any based map $\\varphi$\nfrom $M$ to $S^2$ one may associate the cohomology class,\n$\\varphi^*\\mu_{S^2}\\in H^2(M;\\Z)$. Every second cohomology class may\nbe obtained from some map and any two maps with different cohomology\nclasses lie in distinct path components of $(S^2)^M$. Furthermore,\nthe set of path components corresponding to a cohomology class,\n$\\alpha\\in H^2(M)$ is in bijective correspondence with\n$H^3(M)\/(2\\alpha\\smallsmile H^1(M))$.\n\\end{thm}\n\n\\noindent\nA discussion of this theorem in the setting of the Faddeev model may\nbe found in \\cite{AK2} and \\cite{AK3}. Let $(S^2)^M_0$ denote the\nvacuum sector, that is the path component of the constant map, and\n$(S^2)^M_\\varphi$ denote the path component containing\n$\\varphi$. Since $(S^2)^M$ is not a topological group, there is no\nreason to expect all its path components to be homeomorphic. 
We will\nprove, however, that two components $(S^2)^M_\\varphi$ and\n$(S^2)^M_\\psi$ are homeomorphic if $\\varphi^*\\mu_{S^2} =\n\\psi^*\\mu_{S^2}$:\n\n\\begin{thm}\n\\label{fhhom} \nLet $\\varphi,\\psi\\in (S^2)^M$ be such that $\\varphi^*\\mu_{S^2}=\n\\psi^*\\mu_{S^2}$. Then $(S^2)^M_\\varphi\\cong(S^2)^M_\\psi$.\n\\end{thm}\n\n\\noindent\nMoreover, the fundamental group of any component can be computed, as\nfollows.\n\n\\begin{thm}\\label{thm2} Let $M$ be closed, connected and orientable. For \nany $\\varphi\\in (S^2)^M$,\nthe fundamental group of $(S^2)^M_\\varphi$ is given by\n$$\n\\pi_1((S^2)^M_\\varphi)\\cong {\\mathbb Z}_2\\oplus H^2(M;{\\mathbb\nZ})\\oplus \\hbox{\\rm ker}(2\\varphi^*\\mu_{S^2}\\smallsmile).\n$$\nHere $2\\varphi^*\\mu_{S^2}\\smallsmile:H^1(M;{\\mathbb Z})\\to H^3(M;{\\mathbb\nZ})$ is the usual map given by the cup product. \n\\end{thm}\n\nThere is a general relationship between the fundamental group of the\nconfiguration space of based $S^2$-valued maps and the corresponding\nconfiguration space of free maps. It implies the following result for\nthe fundamental group of the space of free $S^2$-valued maps.\n\n\n\\begin{thm}\\label{freefun}\nWe have $\\pi_1(\\free{M}{S^2}_\\varphi)\\cong {\\mathbb Z}_2 \\oplus\n\\left(H^2(M;{\\mathbb Z})\/ \\langle 2\\varphi^*\\mu_{S^2}\\rangle\n\\right)\\oplus \\hbox{\\rm ker}(2\\varphi^*\\mu_{S^2}\\smallsmile)$. \n\\end{thm}\n\n\nTo complete the classification of complex line bundles over\n$(S^2)^M_\\varphi$ one also needs the second cohomology $H^2((S^2)^M_\\varphi;{\\mathbb{R}})$, which can\nbe extracted from the above theorem and the following computation of\nthe cohomology ring $H^*((S^2)^M_\\varphi,{\\mathbb{R}})$:\n\n\\begin{thm}\\label{thm2co}\nLet $M$ be closed, connected and orientable, let $\\varphi:M\\to S^2$,\nlet $\\Sigma_j^d$ form a basis for $H_d(M;{\\mathbb{R}})$ for $d<3$, and let\n$\\{\\alpha_k\\}$ form a basis for\n$\\hbox{ker}(2\\varphi^*\\mu_{S^2}\\cup:H^1(M;\\Z)\\to H^3(M;\\Z))$. 
The\ncohomology ring $H^*((S^2)^M_\\varphi;{\\mathbb{R}})$ is the free\ngraded-commutative unital algebra over ${\\mathbb{R}}$ generated by the elements\n$\\alpha_k$ and $\\mu(\\Sigma_j^d\\otimes x)$, where $x\\in H^3(Sp_1;\\Z)$\nis the orientation class. The classes $\\alpha_k$ have degree $1$ and\n$\\mu(\\Sigma_j^d\\otimes x)$ have degree $3-d$.\n\\end{thm}\n\nWe can compute the cohomology of the space of free $S^2$-valued maps\nusing the following theorem.\n\\begin{thm}\\label{freeco}\nThere is a spectral sequence with $E_2^{p,q}=H^p(S^2;{\\mathbb{R}})\\otimes\nH^q((S^2)^M_\\varphi;{\\mathbb{R}})$ converging to\n$H^*(\\free{M}{S^2}_\\varphi;{\\mathbb{R}})$. The second differential is given by\n$d_2\\mu(\\Sigma^{(2)}\\otimes x)=2\\varphi^*\\mu_{S^2}[\\Sigma]\\mu_{S^2}$\nwith $d_2$ of any other generator trivial. All higher differentials\nare trivial as well.\n\\end{thm}\n\nIn order to compare the classical and quantum isospin symmetries, we\nwill use the following theorem due to Gottlieb \\cite{Got}. It is\nbased on earlier work of Hattori and Yoshida \\cite{HY}.\n\n\\begin{thm}[Gottlieb]\\label{thmgot}\nLet $L\\to X$ be a complex line bundle over a locally compact space. An\naction of a compact connected Lie group on $X$, say $\\rho:X\\times G\\to\nX$, lifts to a bundle action on $L$ if and only if two obstructions\nvanish. The first obstruction is the pullback of the first Chern\nclass, $L_{x_0}^*c_1(L)\\in H^2(G;\\Z)$. Here $L_{x_0}$ is the map\ninduced by applying the group action to the base point. The second\nobstruction lives in $H^1(X;H^1(G;\\Z))$.\n\\end{thm}\n\n\\noindent We have taken the liberty of radically changing the notation\nfrom the original theorem, and we have only stated the result for\nline bundles. The actual theorem is stated for principal torus\nbundles. Since our configuration spaces are not locally compact, we\nshould point out that we will use one direction of this theorem by\nrestricting to a locally compact equivariant subset. 
In the other\ndirection, we will just outline a construction of the lifted action.\n\nOur main physical conclusions are:\n\\begin{description}\n\\item[C1] In these models, there is a portion of\nquantization ambiguity that depends only on the codomain and is\ncompletely independent of the topology of the domain. This allows for\nthe possibility that emergent fermionicity may only depend on the\ntarget.\n\\item[C2] It is possible to quantize $G$-valued\nsolitons fermionically (with odd exchange statistics) if and only if the Lie algebra \ncontains a\nsymplectic ($C_n$) or special unitary ($A_n$) factor.\n\\item[C3] It is possible to quantize $G$-valued solitons\nwith fractional isospin when the Lie algebra of $G$ contains a\nsymplectic ($C_n$) or special unitary ($A_n$) factor.\n\\item[C4] It is not possible to quantize $G$-valued\nsolitons with fractional isospin when the Lie algebra does not\ncontain such a factor.\n\\item[C5] It is always possible to choose a quantization\nof these systems with integral isospin (although such a choice might not be\nconsistent with other constraints on the model).\n\\item[C6] It is always possible to quantize $S^2$-valued\nsolitons with fractional isospin and odd exchange statistics.\n\\end{description}\n\nThe rest of this paper is structured as follows. In section \\ref{hom}\nwe describe how to reduce the description of the topology of general\n$G$-valued and $S^2$-valued mapping spaces to the theorems listed in\nthis section. We also provide several illustrative examples. In\nsection \\ref{geo} we review the Pontrjagin-Thom construction and\nuse it to describe geometric interpretations of some of our\nresults. Physical applications, particularly the\npossibility of consistent fermionic quantization of Skyrmions, are\ndiscussed in section \\ref{phy}. 
Finally, section \\ref{pro} contains\nthe proofs of our results.\n\n\n\n\\section{Preliminary reductions and examples}\n\\label{hom}\\label{reduct}\n\\setcounter{equation}{0}\n\nWe begin this section with a collection of observations that allow one\n to reduce questions about the topology of various mapping spaces of\n $G$-valued and $S^2$-valued maps to the theorems listed in the\n previous section. Many of these observations will reduce a more\n general mapping space to a product of special mapping spaces, or put\n such spaces into fibrations. These reductions ensure that our results\n are valid for arbitrary closed, orientable $3$-manifolds, and valid\n for {\\it any} Lie group.\n\nIt follows directly from the definition of $\\pi_1$ that\n$\\pi_1(X\\times Y) \\cong \\pi_1(X)\\times\\pi_1(Y)$. The cohomology of a\nproduct is described by the K\\\"unneth theorem, see \\cite{span}. For\nreal coefficients it takes the simple form, $H^*(X\\times Y)\\cong\nH^*(X)\\otimes H^*(Y)$. The cohomology ring of a disjoint union of\nspaces is the direct sum of the corresponding cohomology rings,\ni. e. $H^*(\\disju X_\\nu;A)= \\bigoplus H^*(X_\\nu;A)$. Recall that a\nfibration is a map with the covering homotopy property, see for\nexample \\cite{span}. Given a fibration $F\\hra E\\ra B$, there is an\ninduced long exact sequence of homotopy groups, $\\dots\\ra\n\\pi_{k+1}(B)\\ra \\pi_k(F)\\ra \\pi_k(E)\\ra\\pi_k(B)\\ra \\dots$, see\n\\cite{span}. By itself this sequence is not enough to determine the\nfundamental group of a term in a fibration from the other\nterms. However, combined with a bit of information about the twisting\nin the bundle it will be enough information. One can also relate the\ncohomology rings of the terms in a fibration. 
This is accomplished by\nthe Serre spectral sequence, see \\cite{span}.\n\n\\begin{red}\\label{r1}\nWe have $\\free{ \\disju X_\\nu}{Y}\\cong \\prod\\free{X_\\nu}{Y}$ and\n$Y^{\\disju X_\\nu}\\cong Y^{X_0}\\times \\prod_{\\nu\\ne 0}\\free{X_\\nu}{Y}$,\nwhere $X_0$ is the component of $X$ containing the base point.\n\\end{red}\n\nIt follows that there is no loss of generality in assuming that $M$ is\nconnected. Likewise there is no loss of generality in assuming that\nthe target is connected because of the following reduction.\n\n\\begin{red}\\label{r2}\nWe have $\\free{X}{\\disju Y_\\nu}\\cong\\disju\\free{X}{Y_\\nu}$ and,\nassuming $X$ is connected, $Y^X=Y_0^X$, where $Y_0$ is the component\ncontaining the base point.\n\\end{red}\n\nBoth $\\free{M}{G}$ and $G^M$ are topological groups under pointwise\nmultiplication. In fact $\\free{M}{G}\\cong G^M\\rtimes G$, the\nisomorphism being $u(x)\\mapsto (u(x)u(x_0)^{-1},u(x_0))$, which is\nclearly a homeomorphism $\\free{M}{G}\\ra G^M\\times G.$ It is thus\nstraightforward to deduce $\\pi_1(\\free{M}{G})$ and\n$H^*(\\free{M}{G},{\\mathbb{R}})$ from $\\pi_1(G^M)$ and $H^*(G^M,\\Z)$. Note that\nthe based case includes the standard choice $\\hat{M}={\\mathbb{R}}^3$.\n\n\\begin{red}\\label{r3}\nWe have $\\free{M}{G}\\cong G^M\\times G$.\n\\end{red}\n\nIn the same way, we can reduce the free maps case to the based case\nfor $S^2$-valued maps. In this case we only obtain a fibration. See\nLemmas \\ref{s2tofree}, \\ref{Fhseq} and \\ref{Fhsplit}.\n\n\\begin{red}\\label{r4}\nWe have a fibration, $(S^2)^M\\hra \\free{M}{S^2}\\ra S^2$,\n$\\pi_0(\\free{M}{S^2})=\\pi_0((S^2)^M)$, and\n$\\pi_1(\\free{M}{S^2}_\\varphi)= \\pi_1((S^2)^M_\\varphi)$.\n\\end{red}\n\n\\noindent\nThe relevant information about the twisting in this fibration, as far\nas the fundamental group detects it, is given in the proof of Theorem\n\\ref{freefun} contained in subsection \\ref{subsecpc}. 
For cohomology,\nthe information is encoded in the second differential of the\nassociated spectral sequence. Returning to the case of group-valued\nmaps, we know by the Cartan-Malcev-Iwasawa theorem that any connected\nLie group is homeomorphic to a product $G=K\\times {\\mathbb{R}}^n$, where $K$ is\ncompact \\cite{iwa}.\n\n\\begin{red}\\label{r5}\nIf $X^\\prime$ and $Y^\\prime$ are homotopy equivalent to $X$ and $Y$\nrespectively, then $(Y^\\prime)^{X^\\prime}$ is homotopy equivalent to\n$Y^X$. In particular we have $G^M\\simeq K^M$.\n\\end{red}\n\n\nRecall that every path component of $G^M$ is homeomorphic to $(G^M)_0$.\nWe may therefore consider only the vacuum sector $(G^M)_0$, without\nloss of generality. We shall see that things are very different for\nthe Faddeev-Hopf configuration space, where we must keep track of\nwhich path component we are studying.\n\n\n\\begin{red}\\label{r6}\nIf $G$ is a Lie group, $\\tilde{G}$ is the universal covering group of\nits identity component and $M$ is a $3$-manifold, then\n$$(\\tilde G^M)_0 \\cong (G^M)_0.\n$$\n\\end{red}\n\\medskip\n{\\it Proof:} Without loss of generality we may assume that $G$ is\nconnected. We have the exact sequence,\n$$\n1\\to\\tilde G^M\\to G^M \\to H^1(M;H_1(G))\\to 0,\n$$\nfrom \\cite{AK1}. The exactness follows from the unique path lifting\nproperty of covers at the first term, the lifting criterion for maps\nto the universal cover at the center term, and induction on the\nskeleton of $M$ at the last term. Clearly, the identity component of\n$\\tilde G^M$ maps to the identity component of $G^M$. By the above\nsequence, this map is injective. Any element of $(G^M)_0$, say $u$,\nmaps to $0$ in $H^1(M;H_1(G))$, so is the image of some map in\n$\\tilde G^M$, say $\\tilde u$. Using the homotopy lifting property of\ncovering spaces, we may lift the homotopy of $u$ to a constant map,\nto a homotopy of $\\tilde u$ to a constant map and conclude that\n$\\tilde u\\in(\\tilde G^M)_0$. 
It follows that the map $(\\tilde\nG^M)_0\\to (G^M)_0$ is a homeomorphism. \\hfill $\\Box$\n\n\\begin{red}\\label{r7}\nThe universal covering group of any compact Lie group is a product of\n${\\mathbb R}^m$ with a finite number of compact, simple,\nsimply-connected factors \\cite{Mimura}. Furthermore, $\\left(\\prod\nY_\\nu\\right)^X \\cong \\prod Y_\\nu^X$ and $\\free{X}{\\prod\nY_\\nu}\\cong\\prod\\free{X}{Y_\\nu}$. \n\\end{red}\n\nWe have therefore reduced to the case of closed, connected, orientable\n$M$ and compact, simple, simply-connected Lie groups. All compact,\nsimple, simply-connected Lie groups are listed together with their\ncenter and rational cohomology in table \\ref{tab1}.\n\n\n\\begin{table}\\label{tbl1}\\label{table1}\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\ngroup, $G$ & center, $Z(G)$ &\n$H^*(G;{\\mathbb Q})$ \\\\ \\hline $A_n= \\hbox{SU}(n+1)$, $n\\ge 2$ &\n${\\mathbb Z}_{n+1}$ & ${\\mathbb Q}[x_3, x_5, \\dots x_{2n+1}]$ \\\\\n\\hline $B_n=\\hbox{Spin}(2n+1)$, $n\\ge 3$ & ${\\mathbb Z}_2$ &\n${\\mathbb Q}[x_3, x_7, \\dots x_{4n-1}]$ \\\\ \\hline $C_n=\\hbox{Sp}(n)$,\n$n \\ge 1$ & ${\\mathbb Z}_2$ & ${\\mathbb Q}[x_3, x_7, \\dots x_{4n-1}]$\n\\\\ \\hline $D_n=\\hbox{Spin}(2n)$, $n \\ge 4$ & ${\\mathbb Z}_2\\oplus\n{\\mathbb Z}_2$ for $n\\equiv_2 0$ & ${\\mathbb Q}[x_3, x_7, \\dots\nx_{4n-5}, y_{2n-1}]$ \\\\ \\, & ${\\mathbb Z}_4$ for $n\\equiv_2 1$ & \\\\\n\\hline $E_6$ & ${\\mathbb Z}_3$ & ${\\mathbb Q}[x_3, x_9, x_{11},\nx_{15}, x_{17}, x_{23}]$ \\\\ \\hline $E_7$ & ${\\mathbb Z}_2$ &\n${\\mathbb Q}[x_3, x_{11}, x_{15}, x_{19}, x_{23}, x_{27}, x_{35}]$ \\\\\n\\hline $E_8$ & 0 & ${\\mathbb Q}[x_3, x_{15}, x_{23}, x_{27}, x_{35},\nx_{39}, x_{47}, x_{59}]$ \\\\ \\hline $F_4$ & 0 & ${\\mathbb Q}[x_3,\nx_{11}, x_{15}, x_{23}]$ \\\\ \\hline $G_2$ & 0 & ${\\mathbb Q}[x_3,\nx_{11}]$ \\\\ \\hline%\n\\end{tabular}\n\\caption{Simple groups}\\label{tab1} \n\\end{table}\n\nRecall from Proposition \\ref{components} that the path components of 
a\nconfiguration space of group-valued maps depend on the fundamental\ngroup of the group. The fundamental group of any Lie group is a\ndiscrete subgroup of the center of the universal covering group.\nThe center of such a group is just the product of the centers of the\nfactors.\n\nSome comments about table \\ref{tab1} are in order at this point. The\nlast generator of the cohomology ring of $D_n$ is labeled with a $y$\ninstead of an $x$ because there are two generators in degree $2n-1$\nwhen $n$ is even. As usual, $\\hbox{SU}(k)$ is the set of special\nunitary matrices, that is complex matrices with unit determinant\nsatisfying, $A^*A=I$. The symplectic groups, $\\hbox{Sp}(k)$, consist\nof the quaternionic matrices satisfying $A^*A=I$, and the special\northogonal groups, $\\hbox{SO}(k)$ consist of the real matrices with\nunit determinant satisfying $A^*A=I$. The spin groups,\n$\\hbox{Spin}(k)$ are the universal covering groups of the special\northogonal groups. The definitions of the exceptional groups may be\nfound in \\cite{Adams}. The following isomorphisms hold,\n$\\hbox{SU}(2)\\cong\\hbox{Sp}(1)\\cong\\hbox{Spin}(3)$,\n$\\hbox{Spin}(5)\\cong\\hbox{Sp}(2)$, and\n$\\hbox{Spin}(6)\\cong\\hbox{SU}(4)$, \\cite{Adams}. We will need some\nhomotopy groups of Lie groups. Recall that the higher homotopy groups\nof a space are isomorphic to the higher homotopy groups of the\nuniversal cover of the space, and the higher homotopy groups take\nproducts to products. We have $\\pi_3(G)\\cong {\\mathbb Z}$ for any\nof the simple $G$, and $\\pi_4(\\hbox{Sp}(n))\\cong {\\mathbb Z}_2$ and\n$\\pi_4(G)=0$ for all other simple groups \\cite{Mimura}. This is the\nreason we grouped the simple groups as we did. 
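As a quick check of this grouping against the exceptional isomorphisms just listed (a standard computation, recorded here for the reader's convenience), note that\n$$\n\pi_4(\hbox{Spin}(5))\cong\pi_4(\hbox{Sp}(2))\cong{\mathbb Z}_2,\qquad\n\pi_4(\hbox{Spin}(6))\cong\pi_4(\hbox{SU}(4))=0.\n$$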
Note in particular\nthat we are calling SU$(2)$ a symplectic group.\n\n\n\subsection{Examples}\label{ex}\n\nIn this subsection, we present two examples that suffice to illustrate\nall seven reductions described earlier.\n\n\noindent\n{\bf Example 1} For our first example, we take $M=(S^2\times\nS^1)\disju {\mathbb{R}{{P}}}^3$ and \n$$\nG=\hbox{Sp}(2)\times\left\{\left(\begin{array}{cc} a\n&b\\0&c\end{array} \right)\in\hbox{GL}_2{\mathbb{R}}\right\}.\n$$\nWe take $((1,0,0),(1,0))\in S^2\times S^1$ as the base point in\n$M$. In this example, neither the domain nor the codomain is connected\n($G\/G_0\cong \Z_2\times\Z_2$). In addition, the group is not\nreductive. We also see exactly what is meant by the number of\nsymplectic factors in the Lie algebra: it is just the number of $C_n$\nfactors in the Lie algebra of the maximal compact subgroup of the\nidentity component of $G$. This example requires Reductions \ref{r1},\n\ref{r2}, \ref{r3}, and \ref{r5}. To analyze the topology of the\nspaces of free and based maps, it suffices to understand maps from\n$S^2\times S^1$ and ${\mathbb{R}{{P}}}^3$ into the identity component, $G_0$\n(Reductions \ref{r1}, \ref{r2} and \ref{r3}). \nIn fact, we may replace $G_0$ with\n$\hbox{Sp}(2)$ (Reduction \ref{r5}). Proposition \ref{components} implies\nthat $\pi_0(\hbox{Sp}(2)^{S^2\times S^1})=\Z$ and\n$\pi_0(\hbox{Sp}(2)^{{\mathbb{R}{{P}}}^3})=\Z$, so $\pi_0(\free{M}{G})=\Z_2^4\times\n\Z^2$ and $\pi_0(G^M)=\Z_2^2\times\Z^2$. Similarly, Theorem\n\ref{thm1} implies that $\pi_1(\hbox{Sp}(2)^{S^2\times\nS^1})=\Z_2\oplus\Z$ and $\pi_1(\hbox{Sp}(2)^{{\mathbb{R}{{P}}}^3})=\Z_2\oplus\Z_2$,\nso $\pi_1(\free{M}{G})=\pi_1(G^M)=\Z_2^3\oplus \Z$. \n\nTurning to the cohomology, we know that $H^*(\hbox{Sp}(2);{\mathbb{R}})$ is the\nfree graded-commutative unital algebra generated by $x_3$ and\n$x_7$. Graded-commutative means $xy=(-1)^{|x||y|}yx$. 
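Indeed, any odd-degree class anticommutes with itself, so over ${\mathbb{R}}$ we have\n$$\nx_3^2=(-1)^{3\cdot 3}x_3^2=-x_3^2=0,\n$$\nand similarly $x_7^2=0$.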
It follows that\nany term with repeated factors is zero. We can list the generators of\nthe groups in each degree. In the expression below we list the\ngenerators left to right from degree $0$ with each degree separated by\nvertical lines:\n$$\nH^*(\\hbox{Sp}(2);{\\mathbb{R}})=|1|0|0|x_3|0|0|0|x_7|0|0|x_3x_7|.\n$$\nThe product structure is apparent. Theorem \\ref{thm1co} tells us that\n$H^*((\\hbox{Sp}(2))^{S^2\\times S^1}_0;{\\mathbb{R}})$ is the free unital\ngraded-commutative algebra generated by $\\mu([S^2\\times\n\\hbox{pt}]\\otimes x_3)^{(1)}$, $\\mu([\\hbox{pt}\\times S^1]\\otimes\nx_3)^{(2)}$, $\\mu([S^2\\times S^1]\\otimes x_7)^{(4)}$,\n$\\mu([S^2\\times \\hbox{pt}]\\otimes x_7)^{(5)}$, and\n$\\mu([\\hbox{pt}\\times S^1]\\otimes x_7)^{(6)}$. Here we have included\nthe degree of the generator as a subscript. In the same way we see\nthat $H^*((\\hbox{Sp}(2))^{{\\mathbb{R}{{P}}}^3}_0;{\\mathbb{R}})$ is the free unital\ngraded-commutative algebra (FUGCA) generated by $\\mu([{\\mathbb{R}{{P}}}^3]\\otimes\nx_7)^{(4)}$. Using the reductions and the K\\\"unneth theorem we see\nthat $H^*((G^M)_0;{\\mathbb{R}})$ is the FUGCA generated by $\\mu([S^2\\times\n\\hbox{pt}]\\otimes x_3)^{(1)}$, $\\mu([\\hbox{pt}\\times S^1]\\otimes\nx_3)^{(2)}$, $x_3$, $\\mu([S^2\\times S^1]\\otimes x_7)^{(4)}$,\n$\\mu([{\\mathbb{R}{{P}}}^3]\\otimes x_7)^{(4)}$, $\\mu([S^2\\times \\hbox{pt}]\\otimes\nx_7)^{(5)}$, $\\mu([\\hbox{pt}\\times S^1]\\otimes x_7)^{(6)}$, $x_7$.\nNotice that this is not finitely generated as a vector space even\nthough it is finitely generated as an algebra. This is because it is\npossible to have repeated even degree factors. The vector space in\neach degree is still finite dimensional.\n\n\nThe cohomology ring of the configuration space of based loops is just\nthe direct sum, $H^*(G^M;{\\mathbb{R}})=\\bigoplus_{\\pi_0(G^M)}\nH^*((G^M)_0;{\\mathbb{R}})$. Notice that it is infinitely generated as an\nalgebra. 
The cohomology of the identity component will usually be the\nimportant thing. Using the reductions, we see that the identity\ncomponent of the space of free maps is up to homotopy just the\nproduct, $\\free{M}{G}_0=G^M\\times \\hbox{Sp}(2)$, so the cohomology\nring $H^*(\\free{M}{G}_0;{\\mathbb{R}})$ is obtained from $H^*((G^M)_0;{\\mathbb{R}})$ by\nadjoining new generators in degrees $3$ and $7$, say $y_3$ and\n$y_7$. Thus $H^2((G^M)_0;\\Z)\\cong H^2(\\free{M}{G}_0;\\Z)\\cong\n\\Z\\oplus\\Z_2^3$.\n\n\n\n\n\\medskip\n\n\n\n\n\\noindent\n{\\bf Example 2} For this example, we take $M=T^3\\# L(m,1)$,\n$G_1=\\hbox{SO} (8)$ and $G= \\hbox{U}(2)\\times \\hbox{SO}(8)$. Recall\nthat the lens space $L(m,1)$ is the quotient $\\hbox{Sp}(1)\/\\Z_m$ where\nwe view $\\Z_m$ as the $m$-th roots of unity in\n$S^1\\subset\\hbox{Sp}(1)$. In this example we will need to use\nReductions \\ref{r6} and \\ref{r7}. The unitary group is isomorphic to\n$\\hbox{Sp}(1)\\times_{\\Z_2} S^1$, where $\\Z_2$ is viewed as the\ndiagonal subgroup, $\\pm(1,1)$. The universal covering group of SO$(8)$\nis Spin$(8)$. It follows that $G_1$ and $G$ are connected, $G$ has\nuniversal covering group $\\hbox{Sp}(1)\\times{\\mathbb{R}}\\times\\hbox{Spin}(8)$,\nand fundamental groups are $\\pi_1(\\hbox{Spin}(8))=\\Z_2$ and\n$\\pi_1(G)=\\Z\\oplus\\Z_2$. The group $G$ has two simple factors, one of\nwhich is symplectic. The integral cohomology of $M$ is given by\n$H^1(M;\\Z)\\cong \\Z^3$ and $H^2(M;\\Z)\\cong \\Z^3\\oplus\\Z_m$. The\nuniversal coefficient theorem and Proposition \\ref{components} imply\n$\\pi_0(G_1^M)=\\pi_0(\\free{M}{G_1})=\\Z\\times\\Z_2^4$ if $m$ is even,\n$\\Z\\times\\Z_2^3$ if $m$ is odd, and\n$\\pi_0(G^M)=\\pi_0(\\free{M}{G})=\\Z^5\\times\\Z_2^4$ if $m$ is even and\n$\\Z^5\\times\\Z_2^3$ if $m$ is odd. 
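The dependence on the parity of $m$ can be traced through the universal coefficient theorem: since $H_1(M;\Z)\cong\Z^3\oplus\Z_m$ and $H_0(M;\Z)$ is free, we have\n$$\nH^1(M;\Z_2)\cong\hbox{Hom}(\Z^3\oplus\Z_m,\Z_2)\cong\Z_2^3\oplus\hbox{Hom}(\Z_m,\Z_2),\n$$\nand $\hbox{Hom}(\Z_m,\Z_2)$ is $\Z_2$ for even $m$ and $0$ for odd $m$.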
Theorem \\ref{thm1} implies that\n$\\pi_1(G_1^M)=\\Z^3\\oplus\\Z_m$,\n$\\pi_1(\\free{M}{G_1})=\\Z^3\\oplus\\Z_m\\oplus\\Z_2$,\n$\\pi_1(G^M)=\\Z^6\\oplus\\Z_m^2\\oplus\\Z_2$ and\n$\\pi_1(\\free{M}{G})=\\Z^7\\oplus\\Z_m^2\\oplus\\Z_2^2$.\n\nTurning once again to cohomology, we see from Theorem \\ref{thm1co}\nthat $H^*((\\hbox{U}(2)^M)_0;{\\mathbb{R}})$ is the FUGCA generated by\n$\\mu([T^2\\times \\hbox{pt}]\\otimes x_3)^{(1)}$, $\\mu([S^1\\times\n\\hbox{pt}\\times S^1]\\otimes x_3)^{(1)}$, $\\mu([ \\hbox{pt}\\times\nT^2]\\otimes x_3)^{(1)}$, $\\mu([S^1\\times \\hbox{pt}]\\otimes\nx_3)^{(2)}$, $\\mu([ \\hbox{pt}\\times S^1 \\hbox{pt}]\\otimes x_3)^{(2)}$,\nand $\\mu([ \\hbox{pt}\\times S^1]\\otimes x_3)^{(2)}$.\n\n\n\nAlso, $H^*((G_1^M)_0;{\\mathbb{R}})$ is the FUGCA generated by $\\mu([T^2\\times\n\\hbox{pt}]\\otimes y_3)^{(1)}$, $\\mu([S^1\\times \\hbox{pt}\\times\nS^1]\\otimes y_3)^{(1)}$, $\\mu([ \\hbox{pt}\\times T^2]\\otimes\ny_3)^{(1)}$, $\\mu([S^1\\times \\hbox{pt}]\\otimes y_3)^{(2)}$, $\\mu([\n\\hbox{pt}\\times S^1 \\hbox{pt}]\\otimes y_3)^{(2)}$, $\\mu([\n\\hbox{pt}\\times S^1]\\otimes y_3)^{(2)}$, $\\mu([T^3]\\otimes\ny_7)^{(4)}$, $\\mu([T^3]\\otimes z_7)^{(4)}$, $\\mu([T^2\\times\n\\hbox{pt}]\\otimes y_7)^{(5)}$, $\\mu([S^1\\times \\hbox{pt}\\times\nS^1]\\otimes y_7)^{(5)}$, $\\mu([ \\hbox{pt}\\times T^2]\\otimes\ny_7)^{(5)}$, $\\mu([S^1\\times \\hbox{pt}]\\otimes y_7)^{(6)}$, $\\mu([\n\\hbox{pt}\\times S^1 \\hbox{pt}]\\otimes y_7)^{(6)}$, $\\mu([\n\\hbox{pt}\\times S^1]\\otimes y_7)^{(6)}$, $\\mu([T^2\\times\n\\hbox{pt}]\\otimes z_7)^{(5)}$, $\\mu([S^1\\times \\hbox{pt}\\times\nS^1]\\otimes z_7)^{(5)}$, $\\mu([ \\hbox{pt}\\times T^2]\\otimes\nz_7)^{(5)}$, $\\mu([S^1\\times \\hbox{pt}]\\otimes z_7)^{(6)}$, $\\mu([\n\\hbox{pt}\\times S^1 \\hbox{pt}]\\otimes z_7)^{(6)}$, $\\mu([\n\\hbox{pt}\\times S^1]\\otimes z_7)^{(6)}$, $\\mu([T^3]\\otimes\ny_{11})^{(8)}$, $\\mu([T^2\\times \\hbox{pt}]\\otimes y_{11})^{(9)}$,\n$\\mu([S^1\\times \\hbox{pt}\\times 
S^1]\\otimes y_{11})^{(9)}$, $\\mu([\n\\hbox{pt}\\times T^2]\\otimes y_{11})^{(9)}$, $\\mu([S^1\\times\n\\hbox{pt}]\\otimes y_{11})^{(10)}$, $\\mu([ \\hbox{pt}\\times S^1\n\\hbox{pt}]\\otimes y_{11})^{(10)}$, and $\\mu([ \\hbox{pt}\\times\nS^1]\\otimes y_{11})^{(10)}$.\n\nTherefore, $H^*((G^M)_0;{\\mathbb{R}})$ is the FUGCA generated by all of the\ngenerators listed for the two previous algebras. We changed the\nnotation for the generators of the cohomology of the lie groups as\nneeded. To get to the cohomology of the identity component of the\nspace of free maps, we would just have to add generators for the\ncohomology of the group $G_0$ to this list. In general the cohomology\nof a connected lie group is the same as the cohomology of the maximal\ncompact subgroup, and every compact Lie group has a finite cover that\nis a product of simple, simply-connected, compact Lie groups and a\ntorus. In this case, we need to add generators, $t_1$, $u_3$, $w_3$,\n$u_7$, $v_7$, and $u_{11}$. \n\nThus, $H^2((G_1^M)_0;\\Z)\\cong\\Z^6\\oplus\\Z_m$,\n$H^2(\\free{G_1}{M}_0;\\Z)\\cong\\Z^6\\oplus\\Z_m\\oplus\\Z_2$,\n$H^2((G^M)_0;\\Z)\\cong \\Z^{21}\\oplus\\Z_m^2\\oplus\\Z_2$, and\n$H^2(\\free{G}{M}_0;\\Z)\\cong \\Z^{21}\\oplus\\Z_m^2\\oplus\\Z_2^2$. We can\nalso analyze the topology of the space of $S^2$-valued maps with\ndomain $M$. The path components of $(S^2)^M$ agree with the path\ncomponents of $\\free{M}{S^2}$ (Reduction \\ref{r4}) and are given by Theorem\n\\ref{ponthm}. Let $\\varphi_0:M\\to S^2$ be the constant map and let\n$\\varphi_3:M\\to S^2$ be the map constructed as the composition of the\nmap $M\\to T^3$ (collapse the $L(m,1)$), the projection $T^3\\to T^2$,\nand a degree three map $T^2\\to S^2$. 
According to Theorem \\ref{thm2}\nand Theorem \\ref{freefun}, we have\n$$\n\\begin{array}{rcl}\n\\pi_1(\\free{M}{S^2}_{\\varphi_0})=\\pi_1((S^2)^M_{\\varphi_0})&\\cong&\n\\Z^6\\oplus\\Z_m\\oplus\\Z_2, \\nonumber \\\\ \n\\pi_1((S^2)^M_{\\varphi_3})&\\cong&\n\\Z^5\\oplus\\Z_m\\oplus\\Z_2, \\qquad\\hbox{and} \\nonumber \\\\\n\\pi_1(\\free{M}{S^2}_{\\varphi_3})&\\cong&\n\\Z^4\\oplus\\Z_m\\oplus\\Z_6\\oplus\\Z_2.\n\\end{array}\n$$\n\nUsing Theorem \\ref{thm2co} we can write out generators for the\ncohomology. The cohomology, $H^*((S^2)^M_{\\varphi_0};{\\mathbb{R}})$ is the FGCUA\ngenerated by $PD([T^2\\times \\hbox{pt}])^{(1)} $, $PD([S^1\\times\n\\hbox{pt}\\times S^1]) ^{(1)} $, $PD([ \\hbox{pt}\\times T^2]) ^{(1)} $,\n$\\mu([T^2\\times \\hbox{pt}]\\otimes x)^{(1)} $, $\\mu([S^1\\times\n\\hbox{pt}\\times S^1]\\otimes x)^{(1)}$, $\\mu([ \\hbox{pt}\\times\nT^2]\\otimes x)^{(1)}$, $\\mu([S^1\\times \\hbox{pt}]\\otimes x)^{(2)}$,\n$\\mu([ \\hbox{pt}\\times S^1 \\hbox{pt}]\\otimes x)^{(2)}$, and $\\mu([\n\\hbox{pt}\\times S^1]\\otimes x)^{(2)}$.\n\nSimilarly, $H^*((S^2)^M_{\\varphi_3};{\\mathbb{R}})$ is the FGCUA generated by\n$PD([S^1\\times \\hbox{pt}\\times S^1]) ^{(1)} $, $PD([ \\hbox{pt}\\times\nT^2]) ^{(1)} $, $\\mu([T^2\\times \\hbox{pt}]\\otimes x)^{(1)} $,\n$\\mu([S^1\\times \\hbox{pt}\\times S^1]\\otimes x)^{(1)}$, $\\mu([\n\\hbox{pt}\\times T^2]\\otimes x)^{(1)}$, $\\mu([S^1\\times\n\\hbox{pt}]\\otimes x)^{(2)}$, $\\mu([ \\hbox{pt}\\times S^1\n\\hbox{pt}]\\otimes x)^{(2)}$, and $\\mu([ \\hbox{pt}\\times S^1]\\otimes\nx)^{(2)}$.\n\nThe reason why there is no generator corresponding to $PD([T^2\\times\n\\hbox{pt}])^{(1)}$ in the $\\varphi_3$ cohomology is that it is not in\nthe kernel since $2\\varphi_3^*\\mu_{S^2}\\cup PD([T^2\\times\n\\hbox{pt}])^{(1)} = 6\\mu_M$.\n\nWe can use Theorem \\ref{freeco} to compute the cohomology of the space\nof free maps. 
In the component with $\\varphi_0$ we notice that the\nsecond differential is trivial because $\\varphi_0^*\\mu_{S^2}=0$. It\nfollows that $H^*(\\free{M}{S^2}_{\\varphi_0};{\\mathbb{R}})$ is the\ngraded-commutative, unital algebra generated by $PD([T^2\\times\n\\hbox{pt}])^{(1)} $, $PD([S^1\\times \\hbox{pt}\\times S^1]) ^{(1)} $,\n$PD([ \\hbox{pt}\\times T^2]) ^{(1)} $, $\\mu([T^2\\times\n\\hbox{pt}]\\otimes x)^{(1)} $, $\\mu([S^1\\times \\hbox{pt}\\times\nS^1]\\otimes x)^{(1)}$, $\\mu([ \\hbox{pt}\\times T^2]\\otimes x)^{(1)}$,\n$\\mu([S^1\\times \\hbox{pt}]\\otimes x)^{(2)}$, $\\mu([ \\hbox{pt}\\times\nS^1 \\hbox{pt}]\\otimes x)^{(2)}$, $\\mu([ \\hbox{pt}\\times S^1]\\otimes\nx)^{(2)}$, and $\\mu_{S^2}$. Notice that this algebra is not free. It\nis subject to the single relation, $\\mu_{S^2}^2=0$.\n\nIn the component containing $\\varphi_3$ all of the generators of\n$H^*((S^2)^M_{\\varphi_3};{\\mathbb{R}})$ except $\\mu([T^2\\times \\hbox{pt}]\\otimes\nx)^{(1)} $ survive to $H^*(\\free{M}{S^2}_{\\varphi_3};{\\mathbb{R}})$ because they\nare in the kernel of $d_2$. 
However, $d_2\\mu([T^2\\times\n\\hbox{pt}]\\otimes x)^{(1)}=6\\mu_{S^2}$ so $\\mu_{S^2}$ does not survive\nand $H^*(\\free{M}{S^2}_{\\varphi_3};{\\mathbb{R}})$ is the FUGCA\ngenerated by $PD([S^1\\times \\hbox{pt}\\times S^1]) ^{(1)} $, $PD([\n\\hbox{pt}\\times T^2]) ^{(1)} $, $\\mu([S^1\\times \\hbox{pt}\\times\nS^1]\\otimes x)^{(1)}$, $\\mu([ \\hbox{pt}\\times T^2]\\otimes x)^{(1)}$,\n$\\mu([S^1\\times \\hbox{pt}]\\otimes x)^{(2)}$, $\\mu([ \\hbox{pt}\\times\nS^1 \\hbox{pt}]\\otimes x)^{(2)}$, and $\\mu([ \\hbox{pt}\\times\nS^1]\\otimes x)^{(2)}$.\n\nThus $H^2((S^2)^M_{\\varphi_0};\\Z)\\cong \\Z^{18}\\oplus\\Z_m\\oplus\\Z_2$,\n$H^2(\\free{M}{S^2}_{\\varphi_0};\\Z)\\cong \\Z^{19}\\oplus\\Z_m\\oplus\\Z_2$,\n$H^2((S^2)^M_{\\varphi_3};\\Z)\\cong \\Z^{13}\\oplus\\Z_m\\oplus\\Z_2$, and\n$H^2(\\free{M}{S^2}_{\\varphi_3};\\Z)\\cong \\Z^{9}\\oplus\\Z_m\\oplus\\Z_2$.\n\n\n\n\\section{Geometric interpretations}\n\\label{geo}\n\\setcounter{equation}{0}\n\nWe will follow the folklore maxim: think with intersection theory and\nprove with cohomology. The combination of Poincar\\'e duality and the\nPontrjagin-Thom construction gives a powerful tool for visualizing\nresults in algebraic topology. If $W$ is an $n$-dimensional homology\nmanifold, Poincar\\'e duality is the isomorphism $H^k(W)\\cong\nH_{n-k}(W)$. It is tempting to think of the $k$-th cohomology as the\ndual of the $k$-th homology. This is not far from the truth. The\nuniversal coefficient theorem is the split exact sequence \n$$\n0\\to\n\\hbox{Ext}^1_\\Z(H_{k-1}(W;\\Z),A)\\to H^k(W;A)\\to\n\\hbox{Hom}_\\Z(H_k(W;\\Z),A)\\to 0. \n$$\nPutting this together, we see that\nevery degree $k$ cohomology class corresponds to a unique $(n-k)$-cycle\n(codimension $k$ homology cycle), and the image of the cocycle applied\nto a $k$-cycle is the weighted number of intersection points with the\ncorresponding $(n-k)$-cycle. 
For field coefficients, this is the entire\nstory since there is no torsion and the $\hbox{Ext}$ group\nvanishes. With other coefficients, this gives the correct answer up to\ntorsion. The Pontrjagin-Thom construction associates a framed\ncodimension $k$ submanifold of $W$ to any map $W\to S^k$. The\nassociated submanifold is just the inverse image of a regular\npoint. This is well defined up to framed cobordism. Going the other way, a\nframed submanifold produces a map $W\to S^k$ defined via the\nexponential map on fibers of a tubular neighborhood of the submanifold\nand as the constant map outside of the neighborhood. We will take this\nup in greater detail later in this section. Before addressing the\ntopology of our configuration spaces, we need to understand the\ncohomology of Lie groups.\n\nA number of different approaches may be utilized to compute the real\ncohomology of a compact Lie group: H-space methods, equivariant Morse\ntheory, the Leray-Serre spectral sequence, and Hodge theory. The\ncohomology is a free graded-commutative algebra over ${\mathbb\nR}$. Recall that this means that\n$xy=(-1)^{\hbox{deg}(x)\hbox{deg}(y)}yx$. For our purposes, the\nspectral sequence and Hodge theory are the two most important. The\nfibration $\hbox{SU}(N)\hookrightarrow\hbox{SU}(N+1)\to S^{2N+1}$ may\nbe used to compute the cohomology of SU$(N)$, and we will use it and\nother similar fibrations to compute the cohomology of various\nconfiguration spaces. According to Hodge theory, the real cohomology\nis isomorphic to the collection of harmonic forms. Any compact Lie\ngroup admits an Ad-invariant inner product on the Lie algebra, obtained\nby averaging any inner product over the group, or as the Killing form,\n$\langle X, Y \rangle=-\hbox{Tr}(\hbox{ad}(X)\hbox{ad}(Y))$ in the\nsemisimple case. Such an inner product induces a bi-invariant metric on\nthe group. 
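For $\hbox{SU}(n)$, for instance, a standard computation gives\n$$\n-\hbox{Tr}(\hbox{ad}(X)\hbox{ad}(Y))=-2n\,\hbox{Tr}(XY),\n$$\nwhich is positive definite on the traceless anti-Hermitian matrices.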
With respect to this metric, the space of harmonic forms\nis isomorphic to the space of Ad-invariant forms on the Lie\nalgebra. Any harmonic form induces a form on the Lie algebra by\nrestriction and any Ad invariant form on the Lie algebra induces a\nharmonic form via left translation.\n\nIn the case of SU$(N)$, these forms may be described as products of\nthe elements, $x_j=\\hbox{Tr}((u^{-1}du)^j)$. In some applications it\nmight be appropriate to include a normalizing constant so that the\nintegral of each of these forms on an associated primitive homology\nclass is $1$. \n\n\\subsection{Components of $G^M$}\n\nFor simplicity, we will just consider geometric descriptions of\n $G$-valued maps for the compact, simple, simply-connected Lie groups.\n By applying the Pontrjagin-Thom construction, we will obtain a\n correspondence between homotopy classes of based maps $M\\ra G$ and\n finite collections of signed points in $M$. This may be used to give\n a geometric interpretation of Proposition \\ref{components}. In\n physical terms, the signed points may be thought of as particles and\n anti-particles in the theory. \n\nTo use the Pontrjagin-Thom construction in this setting we need a\nspecial basis for $H_*(G;{\\mathbb{R}})$. By the universal coefficient theorem,\nthere are $(2k+1)$-cycles $\\beta_{2k+1}$ in $H_{2k+1}(G;{\\mathbb R})$\ndual to $x_{2k+1}$. Assuming that the generators $x_{2k+1}$ are\nsuitably normalized, we may assume that the $\\beta_{2k+1}$ are\nintegral classes i.e.\\ images of elements of the form\n$\\beta^\\prime_{2k+1}$ for $\\beta^\\prime_{2k+1}\\in H_{2k+1}(G;\\Z)$. We\nwill often use notation from de~Rham theory to denote the analogous\nconstructions in singular, or cellular theory. For example, the\nevaluation pairing between cohomology and homology is called the cap\nproduct. It is usually denoted, $x\\cap\\beta$ or $x[\\beta]$. The cap\nproduct corresponds to integration ($\\int_\\beta x$) in de Rham\ntheory. 
By Poincar\\'e duality, we can identify each cocycle\n$x_{2k+1}$ with a codimension $(2k+1)$-cycle $F$ in $G$ so that the\nimage of any $(2k+1)$-chain $c_{2k+1}$ under $x_{2k+1}$ is precisely\nthe algebraic intersection number of $F$ and $c_{2k+1}$. Hence, each\ncompact, simple, simply-connected Lie group contains a codimension $3$\ncycle $F$ Poincar\\'e dual to $x_3$, which intersects $\\beta_{3}$\nalgebraically in one positively oriented point. We will shortly\ndescribe these codimension 3 cycles in greater detail, but we first\ndescribe how these cycles may be used to determine the path\ncomponents of the configuration space. \n\nAssume for now that the cycle $F$ has a trivial normal bundle. We will\njustify this assumption later. (Throughout this paper we will use\nnormal bundles, open and closed tubular neighborhoods and the\nrelation between them via the exponential map without explicitly\nwriting the map. If $\\Sigma\\subset M$ then $\\nu\\Sigma\\subset TM$,\nwill denote the normal bundle and $N\\Sigma\\subset M$ will denote the\nclosed tubular neighborhood.) Fix a trivialization of the normal\nbundle. Using this trivialization, we may associate a finite\ncollection of signed points to any generic based map, $u:M\\to G$. To\nsuch a map we associate the collection of points, $u^{-1}(F)$. Such a\npoint is positively oriented if the push forward of an oriented frame\nat the point has the same orientation as the trivialization of the\nnormal bundle at the image. Conversely, to any finite collection of\nsigned points we may associate a based map, $u:M\\to G$. Using a\npositively or negatively oriented frame at each point, we construct a\ndiffeomorphism from the closed tubular neighborhood of each point to\nthe $3$-disk of radius $\\pi$ in the space of purely imaginary\nquaternions, ${\\mathfrak sp}(1)$. 
Via the exponential map,\n$\\hbox{exp}:{\\mathfrak sp}(1)\\to \\hbox{Sp}(1)$ given by,\nexp$(x)=\\cos(|x|)+\\frac{\\sin(|x|)}{|x|}x$ we define a map from the\nclosed tubular neighborhood of the points to Sp$(1)$. This map may be\nextended to the whole $3$-manifold by sending points in the complement\nof the neighborhood to $-1$. We next modify the map by multiplying by\n$-1$, so that the base point will be $1$. Finally, we notice that the\nclass, $\\beta_3$ is represented by a homomorphic image of ${\\rm Sp}(1)$\nin any Lie group. For the classical groups, this homomorphism is just\nthe standard inclusion, ${\\rm Sp}(1)={\\rm SU}(2)\\hookrightarrow{\\rm SU}(n+1)$,\n${\\rm Sp}(1)=\\Spin(3)\\hookrightarrow\\Spin(n)$, or\n${\\rm Sp}(1)\\hookrightarrow{\\rm Sp}(n)$. The homomorphism for each exceptional\ngroup is described in \\cite{AK1}. This matches exactly with the\nstatement of Proposition \\ref{components}. In the case we are\nconsidering here, $H_1(G;\\Z)=0$, and an element of\n$H^3(M;\\pi_3(G))\\cong H^3(M;H_3(\\tilde G;\\Z))\\cong\nH^3(M;H^{g-3}(\\tilde G;\\Z))$ is just a machine that eats a $3$-cycle\nin $M$, i.e. $[M]$, and spits out a machine that eats a codimension\n$3$-cycle in $G$, i.e. $F$, and spits out an integer. If $G$ is not\nsimple, there will be independent codimension $3$-cycles for each\nsimple factor, and one could interpret the intersection number with\neach cycle as a different type of particle (soliton). If $G$ were not\nsimply connected, the element of $H^1(M;H_1(G_0))$ would be the\nobvious one, and one obtains the element of $H^3(M;\\pi_3(G))$ from a\nmodification of the map into $G$ that lifts to $\\tilde G$.\n\nIt is not difficult to describe the cycles, $\\beta_{2k+1}$ and $F$ for\n${\\rm SU}(n+1)$. Recall that the suspension of a pointed topological space\nis $SX=X\\times [0,1]\/(X\\times\\{0,1\\}\\cup \\{p_0\\}\\times [0,1])$. 
This\nmay be visualized as the product $X\times S^1$ with the circle above\nthe marked point in $X$ and the copy of $X$ above a marked point in\n$S^1$ collapsed to a point. Identify ${\mathbb C}P^k$ with\nU$(k+1)\/(\hbox{U}(1)\times\hbox{U}(k))$ and define $\beta_{2k+1}\n:S{\mathbb C}P^k\to \hbox{SU}(n+1)$ by \n$$\n\beta_{2k+1}([A,t])= [A, e^{\pi it}\oplus e^{-\pi it}I_k]_{\rm\ncom}\oplus I_{n-k}.\n$$\nHere, $[A,B]_{\rm com}=ABA^{-1}B^{-1}$ is the usual commutator in a\ngroup.\n\nThe normalization constants of $x_{2k+1}$ would ensure that\n$\int_{\beta_{2k+1}} x_{2k+1} =1$. The values of these constants for\n$k=1$ have been computed in \cite{AK1}. We do not need these constants\nfor the present work. The value of the normalization constants for\n$k=2$ would, for example, be important if one wished to add a\nWess-Zumino term to the Skyrme Lagrangian.\n\nThe multiplication on a Lie group may be used to endow the homology of\nthe Lie group with a unital, graded-commutative algebra structure,\nand the cohomology with a comultiplication. The homology product is\ngiven by $(\sigma:\Sigma\to G)\cdot(\sigma^\prime:\Sigma^\prime\to\nG):= (\sigma\sigma^\prime:\Sigma\times\Sigma^\prime\to G)$ and the\ncomultiplication on cohomology is dual to this. The multiplication and\ncomultiplication give $H^*(G;{\mathbb R})$ the structure of a Hopf\nalgebra. It is exactly in this context that Hopf algebras were first\ndefined. Using this algebra structure, we may give an explicit\ndescription of the Poincar\'e duality isomorphism. Any product of\ngenerators $x_j$ in $H^*(G;{\mathbb R})$ is sent to the element of\n$H_*(G;{\mathbb R})$ obtained from the product $\prod_{k=1}^{n}\n\beta_{2k+1}$ by removing the corresponding $\beta_j$. In particular,\n$F=\prod_{k=2}^{n} \beta_{2k+1}$ is the cycle Poincar\'e dual to\n$x_3$. Geometrically, Poincar\'e duality is described by the equation\n$\int_\Sigma \omega = \#(PD(\omega)\cap\Sigma)$. 
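In $\hbox{SU}(3)$, for example, this recipe gives $PD(x_3)=\beta_5$ and $PD(x_5)=\beta_3$, while $PD(x_3x_5)$ is the class of a point and $PD(1)=\beta_3\beta_5$ is the fundamental class.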
Since $S{\\mathbb\nC}P^k$ is not a manifold some words about our interpretation of the\nnormal bundle to $F$ are in order at this point. For ${\\rm SU}(2)$ we may\ntake $F=\\{-1\\}$. This is a codimension $3$ submanifold, so there are\nno problems. Recall that ${\\mathbb C}P^k-{\\mathbb C}P^{k-1}$ is\nhomeomorphic to ${\\mathbb R}^{2k}$. It follows that the subset of\n$F$, call it $F_0$, obtained from the product of the $S{\\mathbb\nC}P^k-S{\\mathbb C}P^{k-1}$ is a codimension $3$ cell properly\nembedded in ${\\rm SU}(n+1)-(F-F_0)$. Since $F-F_0$ has codimension $5$, we\nmay assume, using general position, that any map of a $3$-manifold\ninto SU$(n+1)$ avoids $F-F_0$. As $F_0$ is contractible, it has a\ntrivial normal bundle, justifying our assumption at the beginning of\nthis description.\n\n\\subsection{The fundamental group of $G^M$}\\label{fundesc}\n\nThe Pontrjagin-Thom construction may also be used to understand the\nisomorphism,\n$$\n\\phi:\\pi_1(G^M)\\, \\to\\,{\\mathbb Z}_2^s\\,\\oplus H^2(M; \\pi_3(G)),\n$$\nasserted in Theorem \\ref{thm1}. A loop in $(G^M)_0$ based at the\nconstant map $u(x)=1$, may be regarded as a based map $\\gamma:SM\\to\nG$. The identifications in the suspension provide a particularly nice\nway to summarize all of the constraints on $\\gamma$ imposed by the\nbase points. We will use the same notation for the map,\n$\\gamma:M\\times [0,1]\\to G$ obtained from $\\gamma$ by composition\nwith the natural projection. The inverse image $\\gamma^{-1}(F)$ with\nframing obtained by pulling back the trivialization of $\\nu(F)$ may\nbe associated to $\\gamma$. Conversely, given a framed link in\n$(M-p_0)\\times (0,1)$ one may construct an element of\n$\\pi_1(G^M)$. Using the framing each fiber of the closed tubular\nneighborhood to the link may be identified with the disk of radius\n$\\pi$ in ${\\mathfrak sp}(1)$. 
As before, $-1$ times the exponential\nmap may be used to construct a map $\gamma:SM\to G$ representing an\nelement of $\pi_1(G^M)$.\n\nIt is now possible to describe the geometric content of the\nisomorphism in Theorem \ref{thm1}. For a homotopy class of loops $[\gamma]\in\n\pi_1((G^M)_0)$, let $\phi(\gamma)=(\phi_1(\gamma),\phi_2(\gamma))$. Restrict\nattention to the case of simply-connected $G$, and make the\nidentifications $\pi_3(G)\cong H_3(G;{\mathbb Z})\cong\nH^{g-3}(G;{\mathbb Z})$. An element of $H^2(M; \pi_3(G))$ may be\ninterpreted as a function that associates an integer to a surface in\n$M$, say $\Sigma$, and a codimension $3$ cycle in $G$, say $F$. Set\n$\phi_2(\gamma)(\Sigma,F)=\#(\Sigma\times [0,1]\cap\gamma^{-1}(F))$.\nNote that $\gamma^{-1}(F)$ inherits an orientation from the framing\nand orientation on $M$. Using Poincar\'e duality, this may be said in\na different way. The homology class of $\gamma^{-1}(F)$ in\n$(M-p_0)\times (0,1)$ projects to an element of $H_1(M)$ dual to the\nelement associated to $\phi_2(\gamma)$. The first component of the\nisomorphism counts the parity of the number of twists in the framing. \n\nConsider the framing in greater detail. Using a spin structure on $M$,\nwe associate a canonical framing to any oriented $1$-dimensional\nsubmanifold of $(M-p_0)\times (0,1)$. See Proposition \ref{split} in\nthe proofs section. For now, restrict attention to null-homologous\nsubmanifolds. Let $\Sigma$ be an oriented $2$-dimensional\nsubmanifold of $(M-N(p_0))\times (0,1)$ with nontrivial\nboundary. The normal bundle to $\Sigma$ inherits an orientation from\nthe orientations on $(M-p_0)\times (0,1)$ and $\Sigma$. Oriented\n$2$-plane bundles are classified by the second cohomology. Since\n$H^2(\Sigma;{\mathbb Z})=0$, the normal bundle is trivial. Let $(e_1,\ne_2)$ be an oriented trivialization of this bundle. Let\n$e_3\in\Gamma(T\Sigma|_{\partial\Sigma})$ be the outward unit\nnormal. 
The canonical framing on $\\partial\\Sigma$ is $(e_1, e_2,\ne_3)$. Given a second framing, $(f_1, f_2, f_3)$ on $\\partial\\Sigma$\nand an orientation preserving parameterization of the boundary, we\nobtain an element $A\\in\\pi_1(GL^+(3,{\\mathbb\nR}))=\\pi_1(\\hbox{SO}(3))\\cong {\\mathbb Z}_2$ satisfying $(f_1, f_2,\nf_3)=(e_1, e_2, e_3)A$. This is the origin of the first component of\nthe isomorphism. The generator of $\\pi_1(\\hbox{Sp}(1)^{S^3})\\cong\n{\\mathbb Z}_2$ is represented by,\n$$\n\\gamma:(\\lambda,x_1,x_2)\\mapsto\\left(\\begin{array}{cc}\nx_1&-\\bar{\\lambda}\\bar{x}_2 \\\\ \\lambda x_2&\\bar{x}_1\\end{array}\\right),\n$$\nhaving identified $S^3$ with the unit sphere in ${\\mathbb C}^2$ (so\n$|x_1^2+|x_2|^2=1$), $S^1\\cong$U$(1)$ and Sp$(1)\\cong$SU$(2)$. The\nimage of $\\gamma$ under the obvious inclusion\n$\\iota:$SU$(2)\\rightarrow$SU$(3)$, that is,\n$\\iota(U)=\\hbox{diag}(U,1)$, is homotopically trivial, as can be seen\nby constructing an explicit homotopy between it and $\\iota\n\\circ\\gamma(1,\\cdot)$. First note that any SU$(3)$ matrix is uniquely\ndetermined by its first two columns, which must be an orthonormal\npair. For all $t\\in [0,1]$, let $\\mu_t(\\lambda)=t\\lambda+1-t$ (so\n$\\mu_1=\\hbox{id}$ and $\\mu_0=1$) and define\n$$\ne:=\\left(\\begin{array}{c}x_1\\\\ \\mu_t(\\lambda)x_2\\\\\n\\sqrt{1-|\\mu_t(\\lambda)|^2}x_2\\end{array}\\right),\\qquad\nv:=\\left(\\begin{array}{c}-\\bar{\\lambda}\\bar{x}_2\\\\ \\bar{x}_1 \\\\ 0 \n\\end{array}\\right),\\qquad\nv_\\perp:=v-(e^\\dagger v)e.\n$$\nThen\n$$\n(t,\\lambda,x_1,x_2)\\mapsto \\left(e,\\frac{v_\\perp}{|v_\\perp|},*\\right)\n$$\nis the required homotopy between $\\iota\\circ\\gamma$ ($t=1$) and the\ntrivial loop based at $\\iota:S^3\\rightarrow $SU$(3)$. 
It is\nstraightforward to check that $e$ and $v$ are never parallel (so the\nmap is well defined), that $(t,\\lambda,1,0)\\mapsto I_3$ for all\n$t,\\lambda$ (this is a homotopy through loops of based maps\n$S^3\\rightarrow$SU$(3)$) and that\n$(t,1,x_1,x_2)\\mapsto\\iota(x_1,x_2)$ for all $t$ (each loop is based\nat $\\iota$). \n\nThe homomorphic image of Sp$(1)$ is contained in a standardly embedded\nSU$(3)$ in each of the exceptional groups and the classical groups\nSU$(n+1)$, $n\\ge 2$, and Spin$(N)$, $N\\ge 7$, \\cite{AK1}. This is the\nreason why the ${\\mathbb Z}_2$ factors only correspond to the\nsymplectic factors of the Lie group.\n\nThe following figures show some loops in the configuration spaces. For\nthe first two figures, the horizontal direction represents the\ninterval direction in $M\\times [0,1]$. The disks represent the $x-y$\nplane in a coordinate chart in $M$, and we suppress the $z$ direction\ndue to lack of space. Figure \\ref{fig1} shows two copies of a typical\nloop representing an element in a symplectic ${\\mathbb Z}_2$\nfactor. Only the first vector of the framing is shown in figure\n\\ref{fig1}. The second vector is obtained by taking the cross product\nwith the tangent vector to the curve in the displayed slice, and the\nfinal vector is the $z$-direction. It is easy to see that the left\ncopy may be deformed into the right copy. We describe the left copy as\nfollows: a particle and antiparticle are born; the particle undergoes\na full rotation; the two particles then annihilate. The right copy\nmay be described as follows: a first particle-antiparticle pair is\nborn; a second pair is born; the two particles exchange positions\nwithout rotating; the first particle and second antiparticle\nannihilate; the remaining pair annihilates. Notice that there are two\nways a pair of particles can exchange positions. 
Representing the
particles by people in a room, the two people may exchange places by
stepping sideways, then forwards or backwards, then sideways again,
following diametrically opposite points on a circle while always
facing the back of the room. This is the exchange without rotating
described in figure \ref{fig1}. This exchange is non-trivial in
$\pi_1(\hbox{Sp}(1)^{S^3})$. The second way a pair of people may
change positions is to walk around a circle at diametrically opposite
points, always facing the direction in which they walk, so as to end
up facing the opposite direction from the one in which they
started. This second change of position is actually homotopically
trivial. Since the framed links in figure \ref{fig1} avoid the
slices $M\times \{0, 1\}$, they represent a loop based at the
constant identity map.

\begin{figure}
\hskip105bp\epsfig{file=skfig1.eps,width=4truein}%
\caption{The rotation or exchange loop}\label{fig1}
\end{figure}

It is possible to describe a framing without drawing any normal
vectors at all. The first vector may be taken perpendicular to the
plane of the figure, the second vector may be obtained from the cross
product with the tangent vector, and the third vector may be taken to
be the suppressed $z$-direction. The framing obtained by following
this convention is called the blackboard framing. We use the
blackboard framing in figure \ref{fig2}. The Pontrjagin-Thom
construction may also be used to visualize loops in other components
of the configuration space. Figure \ref{fig2} shows a loop in the
degree $2$ component of the space of maps from $M$ to Sp$(1)$.

\begin{figure}
\hskip170bp\epsfig{file=skfig2.eps,width=2truein}%
\caption{The degree $2$ exchange loop}
\label{fig2}
\end{figure}

We can also use the Pontrjagin-Thom construction to draw figures of
homotopies between loops in configuration space. Figure \ref{fig3}
displays a homotopy between the loop corresponding to a canonically
framed unknot and the constant loop. 
In this figure, the horizontal
direction represents the second interval factor of $M\times
[0,1]\times [0,1]$, the direction out of the page represents the
first interval factor, the vertical direction represents the $x$
direction, and the $y$ and $z$ directions are suppressed. The framing
is given by the normal vector to the hemisphere, the $y$ direction
and the $z$ direction.

\begin{figure}
\hskip170bp\epsfig{file=skfig3.eps,width=1.8truein}%
\caption{The contraction of a canonically framed contractible link}
\label{fig3}
\end{figure}

\subsection{Cohomology of $G^M$}\label{cohomdesc}

We now turn to a description of the real cohomology of $G^M$. We will
use the slant product to associate a cohomology class on $G^M$ to a
pair consisting of a homology class on $M$ and a cohomology class on
$G$. Recall that the slant product is a map $H^n(X\times Y;A)\otimes
H_k(X;B)\to H^{n-k}(Y;A\otimes B)$, \cite{span}. In addition the
universal coefficient theorem allows us to identify $H^k(G^M;{\mathbb{R}})$ with
$\hbox{Hom}(H_k(G^M;\Z),{\mathbb{R}})$. Let $\sigma:\Sigma\to M$ be a singular
chain representing a homology class in $H_d(M;{\mathbb{R}})=H_d(M;\Z)\otimes {\mathbb{R}}$
(instead of viewing singular chains as linear combinations of singular
simplices, we will combine them together and view a singular chain as
a map of a special polytope into the space), and let $x_j$ be a
cohomology class in $H^j(G;{\mathbb{R}})$. To define the image of the $\mu$ map,
$\mu(\Sigma\otimes x_j)$, let $u:F\to G^M$ be a singular chain
representing an element of $H_{j-d}(G^M)$. This induces a natural
singular chain $\widehat u:M\times F\to G$. The pull-back produces
$\widehat u^*x_j\in H^j(M\times F;{\mathbb{R}})$. The formal definition of the
$\mu$ map is then, 
\beq
\label{mudef} 
\mu(\Sigma\otimes x_j)(u):=(\widehat
u^*x_j/\Sigma)[F]. 
\n\\end{equation} \nWriting this in notation from the de Rham\nmodel of cohomology may help to clarify the definitions. In principle\none could construct a homology theory based on smooth chains and make\nthe following rigorous. The $\\mu$ map produces a $(j-d)$-cocycle in\n$G^M$ from a $d$-cycle in $M$ and a $j$-cocycle in $G$. On the level\nof chains, let $e^d:D^d\\to M$ be a $d$-cell, and $x_j$ be a closed\n$j$-form on $G$. Given a singular simplex, $u:\\Delta^{j-d}\\to G^M$,\nlet $\\widehat u:M\\times \\Delta^{j-d}\\to G$ be the natural map and write\n$$\n\\mu(e^d\\otimes x_j)(u)= \\int_{D^d\\times \\Delta^{j-d}} \\widehat u^* x_j.\n$$\nUsing the product formula for the boundary,\n$$\n\\partial(D^d\\times\\Delta^{j-d+1}) = (\\partial\nD^d)\\times\\Delta^{j-d+1} + (-1)^{d}D^d\\times \\partial \\Delta^{j-d+1},\n$$\nwe can get a simple formula for the coboundary of the image of an\nelement under the $\\mu$-map. Let $v:\\Delta^{j-d+1}\\to G^M$, be a\nsingular simplex, then \n\\bea \n\\delta(\\mu(e^d\\otimes\nx_j))(v)&=&\\sum_{k=0}^{j-d+1} (-1)^k \\int_{D^d\\times \\Delta^{j-d}}\n\\widehat{(v\\circ f_k)}^* x_j = \\int_{D^d\\times\n\\partial\\Delta^{j-d+1}} \\widehat v^* x_j \\nonumber \\\\ &=&\n(-1)^{d+1}\\int_{(\\partial D^d)\\times \\Delta^{j-d+1}} \\widehat v^*\nx_j+(-1)^d\\int_{\\partial(D^d\\times\\Delta^{j-d+1})}\\widehat v^*\nx_j\\nonumber\\\\\n\\label{delta}\n&=& (-1)^{d+1}\\mu((\\partial e^d)\\otimes x_j)(v). \n\\end{eqnarray} \nWe used Stokes'\ntheorem in the last line. It follows that $\\mu$ is well defined at\nthe level of homology. Theorem \\ref{thm1co} asserts that\n$H^*(G^M_0;{\\mathbb{R}})$ is a finitely generated algebra with generators\n$\\mu(\\Sigma_j^d \\otimes x_k)$ where $\\{\\Sigma_j^d\\}$ and $\\{x_k\\}$ are\nbases for $H_*(M;{\\mathbb{R}})$ and $H^*(G;{\\mathbb{R}})$ respectively. The multiplication\non $H^*(G^M;{\\mathbb{R}})$ is given by the cup product. 
Recall that this is
defined at the level of cochains by
$(\alpha\smallsmile\beta)(w)=\alpha(~_kw) \beta(w_\ell)$, where $\alpha$ is a
$k$-cocycle, $\beta$ is an $\ell$-cocycle, $w$ is a
$(k+\ell)$-singular simplex, $~_kw$ is the front $k$-face and
$w_\ell$ is the back $\ell$-face \cite{span}. Note that $\smallsmile$ is
graded-commutative, that is, $\alpha\smallsmile\beta
=(-1)^{k\ell}\beta\smallsmile\alpha$. 

It is instructive to understand some classes that do not appear as
generators. One might expect $\mu(\hbox{pt}\otimes x_j)$ to be a
generator in degree $j$. However, since $G^M$ consists of based maps,
the induced map $\widehat u:M\times F\to G$ arising from a chain
$u:F\to G^M$ restricts to a constant map on $\hbox{pt}\times F$. It
follows that $\mu(\hbox{pt}\otimes x_j)=0$. There would be an
analogous class if we considered the cohomology of the space of free
maps. Turning to the other end of the spectrum, one might expect to
see classes of the form $\mu(M\otimes x_3)$ in degree zero. Such
classes certainly could not appear in the cohomology of the identity
component $G^M_0$. In fact we stated our theorem for the identity
component because the argument leading to generators of the form
$\mu(\Sigma\otimes x_3)$ breaks down when $\Sigma$ is a $3$-cycle and
$x_3$ is a $3$-cocycle. The argument starts by considering maps of
spheres into the group $G$, and then assembles the cohomology of these
mapping spaces (which are denoted by $\Omega^kG$) into the cohomology
of $G^M$. The path fibration is used to compute the cohomology of the
$\Omega^kG$. The fibration leading to the cohomology of $\Omega^3G$
does not have a simply connected base, and this is where the argument
breaks down. See Lemma \ref{og}. Finally one might expect to see
classes of the form $\mu(\Sigma\otimes x_j\cup x_k)$. 
It will turn out in the course of
the proof (Lemma \ref{og}) that such classes vanish.


Up to this point, our geometric descriptions of the algebraic topology
of configuration spaces have been simpler than we had any right to
expect. We were able to describe the space of path components and the
fundamental group of the configuration space of maps from an
orientable $3$-manifold into an arbitrary simply-connected Lie group
by just considering subgroups isomorphic to Sp$(1)$. This will not
hold for all homotopy invariants of $G^M$. The main object of
interest to us is the second cohomology of the configuration space
with integral coefficients, because this classifies the complex line
bundles over the configuration space (the quantization
ambiguity). It is possible to describe one second cohomology class
on Sp$(n)^M$ in terms of Sp$(1)$ geometry. However we need to pass to
SU$(3)$ subgroups to get at the second cohomology in general. 

Before considering these geometric representatives of the second
cohomology, briefly recall the definition of the Ext groups. Given
$R$-modules $A$ and $B$, pick a free resolution of $A$, say $\to C_2\to
C_1\to C_0\to A$. The $k$th Ext group is just defined to be the $k$th
homology of the complex $\hbox{Hom}(C_*,B)$,
i.e. $\hbox{Ext}^k_R(A,B)=H_k(\hbox{Hom}(C_*,B))$. When $R$ is a PID
(principal ideal domain)
every $R$-module has a free resolution of the form $0\to C_1\to
C_0\to A$. Given such a resolution one obtains the exact sequence,
\beq
\label{ext} 0\to \hbox{Hom}(A,B)\to \hbox{Hom}(C_0,B)\to
\hbox{Hom}(C_1,B)\to \hbox{Ext}^1_R(A,B)\to 0, 
\end{equation} 
and all higher Ext
groups vanish. We will always take $R=\Z$ and drop the ground ring
from the notation. Based on the above exact sequence, we say that the
Ext groups measure the failure of Hom to be exact, i.e. to take exact
sequences to exact sequences. 
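To make the definition concrete, here is the standard textbook
computation (not needed for the proofs) of an Ext group over $R=\Z$.
Take $A=\Z_n$ and $B=\Z$, with free resolution
$0\to\Z\stackrel{\times n}{\longrightarrow}\Z\to\Z_n\to 0$. Applying
$\hbox{Hom}(-,\Z)$ to the resolution leaves the complex
$\Z\stackrel{\times n}{\longrightarrow}\Z$, whence
$$
\hbox{Hom}(\Z_n,\Z)=\ker(\times n)=0, \qquad
\hbox{Ext}^1(\Z_n,\Z)=\Z/n\Z\cong\Z_n,
$$
consistent with the fact, used below, that $\hbox{Ext}^1(A,\Z)$ is the
torsion subgroup of a finitely generated abelian group $A$.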
The Ext group may also be identified
with the collection of extensions of $A$ by $B$ \cite{span}. By the
universal coefficient theorem 
$$
\begin{array}{rcl}
H^2(G^M;\Z)&\cong&\hbox{Ext}^1(H_1(G^M;\Z),\Z)\oplus
\hbox{Hom}(H_2(G^M;\Z),\Z)
 \\
H^2(G^M;{\mathbb{R}})&\cong&\hbox{Ext}^1(H_1(G^M;\Z),{\mathbb{R}})\oplus
\hbox{Hom}(H_2(G^M;\Z),{\mathbb{R}}).
\end{array}
$$ 
Now for all $A$, $\hbox{Ext}^1(A,{\mathbb{R}})=0$, so
$\hbox{Hom}(H_2(G^M;\Z),\Z)$ is a free abelian group of rank
$b_2=\hbox{dim}_{{\mathbb{R}}}H^2(G^M;{\mathbb{R}})$. In addition, $\hbox{Ext}^1(A,\Z)$ is
just the torsion subgroup of $A$, and $H_1(G^M;\Z)\cong
\pi_1^{\rm ab}(G^M)= \pi_1(G^M)$, since $\pi_1(G^M)$ is abelian. Hence 
$$
H^2(G^M;\Z)\cong\Z^{b_2}\oplus{\rm Tor}(\pi_1(G^M)), 
$$ where $\pi_1(G^M)$ and
the Betti number $b_2$ may be obtained from Theorems \ref{thm1} and
\ref{thm1co}.

We will use the universal coefficient theorem and Ext groups to
describe some cohomology classes of our configuration spaces.
There is a natural ${\mathbb Z}_2$ contained in
the fundamental group of the configuration space for any group with a
symplectic factor. This ${\mathbb Z}_2$ is generated by the exchange
loop. Wrapping twice around the exchange loop is the boundary of a
disk in the configuration space. Since ${\mathbb R}P^2$ is the result
of identifying the points on the boundary of a disk via a degree $2$
map, one expects to find an ${\mathbb R}P^2$ embedded in any of the
Skyrme configuration spaces with a symplectic factor. In
\cite{sorkin}, R.~Sorkin describes an embedding,
$f_{\rm{stat}}:{\mathbb R}P^2\to \hbox{SU}(n+1)^{S^3}$. He also
describes an embedding, $f_{\rm{spin}}:{\mathbb R}P^3\to
\hbox{SU}(n+1)^{S^3}$. He further shows that $f_{\rm{spin}}$
restricted to the ${\mathbb R}P^2$ subspace is homotopic to
$f_{\rm{stat}}$. Using the map $M\to M^{(3)}/M^{(2)}\cong S^3$,
these induce maps into SU$(n+1)^M$. 
Here $M^{(k)}$ is the $k$-skeleton
of $M$ with respect to some CW structure. 
In fact, using the inclusion of Sp$(1)=\hbox{SU}(2)$ into any
simply-connected simple Lie group one obtains maps from ${\mathbb{R}{{P}}}^2$ and
${\mathbb{R}{{P}}}^3$ into any configuration space of Lie group valued maps. This is
most interesting when the map factors through a symplectic factor.

We briefly recall Sorkin's elegant construction. Describe
${\mathbb R}P^2$ as the $2$-sphere with antipodal points
identified. By the addition of particle-antiparticle pairs, we may
assume that there are two particles in a coordinate chart. We may
place the particles at antipodal points of a sphere in a coordinate
chart using frames parallel to the coordinate directions. The map
obtained from these frames using the Pontrjagin-Thom construction is
$f_{\rm{stat}}$. The projective space ${\mathbb R}P^3$ is
homeomorphic to the rotation group SO$(3)$. The map $f_{\rm{spin}}$
may be described by using SO$(3)$ to rotate a single frame and then
applying the Pontrjagin-Thom construction. Sorkin includes a second
unaffected particle in his description of $f_{\rm{spin}}$ to make
the comparison with $f_{\rm{stat}}$ easier.

A degree one map $M\to S^3$ (which always exists) induces a map
$G^{S^3}\to G^M$. The space $G^{S^3}$ is typically denoted
$\Omega^3G$. If the Lie algebra of the maximal compact subgroup admits
a symplectic factor, then we have an interesting map Sp$(1)\to G$
which induces a map $\Omega^3\hbox{Sp}(1)\to \Omega^3G$. We will see
in the course of our proofs that on the level of $\pi_1$ or $H_1$
these maps give a sequence of injections,
$$
H_1({\mathbb{R}{{P}}}^2;\Z)\to H_1({\mathbb{R}{{P}}}^3;\Z)\to H_1(\Omega^3\hbox{Sp}(1);\Z)\to
H_1(\Omega^3G;\Z)\to H_1(G^M;\Z).
$$
The universal coefficient theorem implies that there is a $\Z_2$
factor in the second cohomology of $G^M$ when $G$ contains a
symplectic factor. 
In fact, in this case we see that twice the
exchange loop is a generator of the group $B_1$ of $1$-dimensional
boundaries. This means we can define a homomorphism from $B_1$ to
$\Z$ taking twice the exchange loop to $1$. The cocycle defined by
composing the boundary map on $2$-chains with this homomorphism
generates the $\Z_2$ in $H^2(G^M;\Z)$. We see that this class
evaluates nontrivially on the Sorkin ${\mathbb{R}{{P}}}^2$.

When $G$ has unitary factors, there will be infinite cyclic factors in
the second cohomology of $G^M$. This is nicely explained by a
construction of Ramadas, \cite{ramadas}. Ramadas constructs a map
$S^2\times S^3\to \hbox{SU}(3)$. This construction goes as follows. He
first defines a map $K: \hbox{SU}(2)\to \hbox{SU}(2)$ by
$$
K(\left(\begin{array}{cc}a&b\\-\bar b&\bar a\end{array}\right)
)=(|a|^4+|b|^4)^{-\frac12}\left(\begin{array}{cc}a^2&-\bar b^2\\
b^2&\bar a^2\end{array}\right).
$$
This map satisfies
$K(\hbox{diag}(\lambda,\bar\lambda)A)=K(A)\hbox{diag}(\lambda^2,
\bar\lambda^2)$.
Finally define $\widehat\sigma: S^1\backslash \hbox{SU}(2)
\times\hbox{SU}(2)\to\hbox{SU}(3)$ by 
$$
\widehat\sigma([A],B)=
\hbox{diag}(1,K(A)) \hbox{diag}(ABA^*,1)\hbox{diag}(1,K(A)^*).
$$
Here we are
viewing $S^1$ as the subgroup $\hbox{diag}(\lambda,\bar\lambda)$ of
SU$(2)$. It is well known that $ S^1\backslash \hbox{SU}(2)\cong S^2$
and SU$(2)\cong S^3$. The map $\widehat\sigma:S^2\times S^3\to
\hbox{SU}(3)$ induces a map
$\sigma:S^2\to\Omega^3\hbox{SU}(3)$. Ramadas shows that this map
generates $H_2(\Omega^3\hbox{SU}(3);\Z)\cong \Z$. Combining with the
degree one map from $M$ and the inclusion into a special unitary
factor of $G$, we obtain a map $S^2\to G^M$ generating an
infinite cyclic factor of $H_2(G^M;\Z)$. 
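Ramadas' map $K$ and its equivariance property lend themselves to a
quick numerical sanity check. The sketch below (illustrative only,
assuming NumPy; the helper names \texttt{su2} and \texttt{K} are ours)
verifies on random samples that $K(A)$ lies in SU$(2)$ and that
$K(\hbox{diag}(\lambda,\bar\lambda)A)
=K(A)\hbox{diag}(\lambda^2,\bar\lambda^2)$.

```python
import numpy as np

def su2(a, b):
    # The general SU(2) matrix [[a, b], [-conj(b), conj(a)]], |a|^2 + |b|^2 = 1.
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def K(A):
    # Ramadas' map K: SU(2) -> SU(2), read off from the first row of A.
    a, b = A[0, 0], A[0, 1]
    return np.array([[a ** 2, -np.conj(b) ** 2],
                     [b ** 2,  np.conj(a) ** 2]]) / np.sqrt(abs(a) ** 4 + abs(b) ** 4)

rng = np.random.default_rng(0)
for _ in range(200):
    z = rng.normal(size=2) + 1j * rng.normal(size=2)
    a, b = z / np.linalg.norm(z)                    # |a|^2 + |b|^2 = 1
    A = su2(a, b)
    KA = K(A)
    # K(A) is again in SU(2)
    assert np.allclose(KA.conj().T @ KA, np.eye(2))
    assert np.isclose(np.linalg.det(KA), 1.0)
    # equivariance: K(diag(lam, conj(lam)) A) = K(A) diag(lam^2, conj(lam)^2)
    lam = np.exp(2j * np.pi * rng.random())
    assert np.allclose(K(np.diag([lam, np.conj(lam)]) @ A),
                       KA @ np.diag([lam ** 2, np.conj(lam) ** 2]))
```

The equivariance is exactly what makes $\widehat\sigma$ well defined on
the quotient $S^1\backslash \hbox{SU}(2)$.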
By the universal coefficient theorem\na map from $H_2(G^M;\\Z)$ to $\\Z$ taking this generator to $1$ is a cohomology\nclass in $H^2(G^M;\\Z)$. Clearly this class evaluates non-trivially on\nthis $S^2$.\n\nIf $G$ does not have a symplectic or special unitary factor, then\nthere is no reason to expect any elements of the second cohomology. In\nfact under this hypothesis, $H^2(\\Omega^3G;\\Z)=0$. It is worth\nmentioning how these maps behave in general. The third homotopy group\nof any Lie group is generated by homomorphic images of Sp$(1)$. Each\ntime one of these generators is contained in a symplectic factor, we\nget a $\\Z_2$ in the second cohomology detected by a Sorkin\n${\\mathbb{R}{{P}}}^2$. When one of these factors is not contained in a symplectic\nfactor, it is contained in a copy of SU$(3)$. This kills the $\\Z_2$\nfactor in $\\pi_1$ as explained above in subsection \\ref{fundesc}. If\nthe SU$(3)$ is contained in a special unitary factor, the Sorkin map\n${\\mathbb{R}{{P}}}^2\\to \\hbox{Sp}(1)^M\\to \\hbox{SU}(3)^M \\to G^M$ pulls back the\nsecond cohomology class described by Ramadas (and extended to\narbitrary $M$ and $G$ with special unitary factor as above) to the\ngenerator of $H^2({\\mathbb{R}{{P}}}^2;\\Z)\\cong \\Z_2$. (Ramadas proves that the\ngenerator of $H^2(\\Omega^3\\hbox{SU}(3);\\Z)$ pulls back to the\ngenerator of $H^2({\\mathbb{R}{{P}}}^2;\\Z)$ and the rest follows from our proofs.)\n If this\nSU$(3)$ is not contained in a special unitary factor, it follows from\nour proofs that the second homology class associated to $S^2\\to G^M$\nbounds, so there is no associated cohomology class. \n\n\\subsection{Components of $(S^2)^M_\\varphi$}\\label{scomp}\n\nThe picture of the components of $(S^2)^M_\\varphi$ arising from the\nPontrjagin-Thom construction and Poincar\\'e duality is quite nice. The\ninverse image of a regular point in $S^2$ is Poincar\\'e dual to\n$\\varphi^*\\mu_{S^2}$. 
The number of twists in the framing of a second
map with the same pull-back is an element of $H^3(M;\Z)/\langle
2\varphi^*\mu_{S^2}\rangle$. This is very similar to the description
of elements of the fundamental group of $G^M$ when $G$ has symplectic
factors. We will identify $S^2$ with ${\mathbb{C}}{\mathbb{P}}^1$ and consider several
maps. We have $\varphi_1, \varphi_1^\prime, \varphi_3:{\mathbb{C}}{\mathbb{P}}^1\times
S^1\to {\mathbb{C}}{\mathbb{P}}^1$ given by,
$$
\varphi_1([z:w],\lambda)= [z:w],\qquad
\varphi_1^\prime([z:w],\lambda)=[\lambda z:w],\quad\hbox{and}\quad
\varphi_3([z:w],\lambda)=[z^3:w^3].
$$
We can view ${\mathbb{C}}{\mathbb{P}}^1\times S^1$ as $S^2\times [0,1]$ (a spherical shell)
with the inner and outer ($S^2\times\{0\}$ and $S^2\times\{1\}$)
spheres identified. Using this convention and the framing conventions
from subsection \ref{fundesc}, we have displayed the framed
$1$-manifolds arising as the inverse images of a regular point in
figure \ref{sputnik}.
\begin{figure}
\hskip95bp\epsfig{file=skfig3-1.eps,width=4truein}%
\caption{Pontrjagin-Thom representatives of $S^2$-valued
maps}\label{sputnik}
\end{figure}
It may appear that there is a well defined twist number associated to
an $S^2$-valued map. However, there is a homeomorphism of ${\mathbb{C}}{\mathbb{P}}^1\times
S^1$ twisting the $2$-sphere (such a map is given by
$([z:w],\lambda)\mapsto ([\lambda z:w],\lambda)$). This will change
the number of twists in a framing, but will not change the relative
number of twists. 
The reason why this relative number of twists is
only well defined modulo twice the divisibility of the cohomology
class $\varphi^*\mu_{S^2}$ is demonstrated for $\varphi_1$ in figure
\ref{untwist}.

\begin{figure}
\hskip80bp\epsfig{file=skfig3-2.eps,width=4.5truein}%
\caption{Introducing $2d$ twists}\label{untwist}
\end{figure}




\subsection{Fundamental group of $(S^2)^M_\varphi$}

An element of $\pi_1((S^2)^M_\varphi)$ is represented by a map
$\gamma:M\times S^1\to S^2$. The inverse image of a regular point is a
$2$-dimensional submanifold, say $\Sigma$. This defines an element of
$H^1(M;\Z)$ as follows. To any $1$-cycle in $M$, say $\sigma$, we
associate the intersection number of $\Sigma$ and $\sigma\times
S^1$. Since our loop is in the path component of $\varphi$, the
surface $\Sigma$ is parallel to the $\varphi$-inverse image of a
regular point. This implies that our element of $H^1(M;\Z)$ is in the
kernel of the map $2\varphi^*\mu_{S^2}:H^1(M;\Z)\to H^3(M;\Z)$. Given
any element of this kernel, we can define a loop in $(S^2)^M_\varphi$
via the ${\mathfrak q}$-map defined below at line \ref{qdef}. There is
a map $u:M\times S^1\to \hbox{Sp}(1)$ that may be used to change
this new loop back into $\gamma$. The remaining homotopy invariants of
$\gamma$ are just those of $u$ as described in subsection
\ref{fundesc}.




\section{Physical consequences}
\label{phy}
\setcounter{equation}{0}

As explained in section \ref{reduct}, the configuration space of the Skyrme 
model with arbitrary
target group is homotopy equivalent to the configuration space of a
collection of uncoupled Skyrme fields, each taking values in a
compact, simply connected, simple Lie group. We
will therefore assume throughout this section that $G$ is compact, simply 
connected
and simple. 
In this case, by Proposition \ref{components}, the
path components of $G^M$ are labeled by $H^3(M;\Z)\cong\Z$, identified with
the baryon number $B$ of the configuration. This identification has already
been justified by consideration of the Pontrjagin-Thom construction.
Let us denote the baryon number $B$ sector by $Q_B$. 

We first recall how Finkelstein and Rubinstein introduced fermionicity to
the Skyrme model \cite{finrub}. 
The idea is that the quantum state is specified by
a wavefunction on $\widetilde{Q}_B$, the universal cover of $Q_B$, rather than
$Q_B$ itself. By the uniqueness of lifts, there is a natural action of
$\pi_1(Q_B)$ on $\widetilde{Q}_B$ by deck transformations. Let $\pi:\widetilde{Q}_B\ra
Q_B$ denote the covering projection, $\lambda\in \pi_1(Q_B)$ and $D_\lambda$
be the associated deck transformation. Since all points in $\pi^{-1}(u)$
are physically indistinguishable, we must impose the constraint
$$
|\psi(D_\lambda q)|=|\psi(q)|
$$
on the wavefunction $\psi:\widetilde{Q}_B\ra \C$, for all $q\in \widetilde{Q}_B$ and
$\lambda\in \pi_1(Q_B)$. This leaves us the freedom to assign phases
to the deck transformations; that is, the remaining quantization ambiguity
consists of a choice of $U(1)$ representation of $\pi_1(Q_B)$. The 
possibility of fermionic quantization arises if the two-Skyrmion exchange
loop in $Q_2$ is noncontractible with even order: we can then choose
a representation which assigns this loop the phase $-1$. In this case our 
wavefunction acquires a minus sign under Skyrmion interchange. Clearly, the
Finkelstein-Rubinstein model could apply to any sigma model with a
configuration space admitting non-trivial elements of the fundamental group
representing the exchange of identical particles. In particular, the domain
does not have to be ${\mathbb{R}}^3$. 
\n\nNote we have insisted that the wavefunction $\\psi$ have support on a single \npath component\n$Q_B$, because baryon number is conserved in nature, so transitions which \nchange $B$ have zero probability. It seems, then, that the choice of\nrepresentation of $\\pi_1(Q_B)$ can be made independently for each $B$, but\nin fact there is a strong consistency requirement between the \nrepresentations associated with the various components.\nRecall that all the sectors are homeomorphic and\nthat given any $u\\in Q_B$ one obtains a homeomorphism $Q_0\\ra Q_B$ by\npointwise multiplication by $u$. Hence, to each $u\\in Q_B$ there is \nassociated an isomorphism $\\pi_1(Q_0)\\ra\\pi_1(Q_B)$, so one has a map\n$Q_B\\ra {\\rm Iso}(\\pi_1(Q_0),\\pi_1(Q_B))$. Since $Q_B$ is connected and \n$\\pi_1$\nis discrete, this map is constant, that is, there is a {\\em canonical}\nisomorphism $\\pi_1(Q_0)\\ra\\pi_1(Q_B)$, which may be obtained by pointwise\nmultiplication by {\\em any} charge $B$ configuration. Having chosen a\nrepresentation of $\\pi_1(Q_0)$, we obtain canonical representations of\n$\\pi_1(Q_B)$ for all other $B$. Physically, we are demanding that the\nphase introduced by transporting a configuration around a closed loop \nshould be independent of the presence of static Skyrmions held remote\nfrom the loop. This places nontrivial consistency conditions, if we are\nto obtain a genuinely fermionic quantization. In particular, the loop in\n$Q_{2B}$ consisting of the exchange of a pair of identical charge $B$\nSkyrmions must be assigned the phase $(-1)^B$, since a charge $B$ Skyrmion\nrepresents a bound state of $B$ nucleons, which is a fermion for $B$ odd\nand a boson for $B$ even. \n\nThe Finkelstein-Rubinstein formalism can be used to give a consistent\nfermionic quantization of the Skyrme model on any domain $M$ if\n$G={\\rm Sp}(n)$, but not for any of the other simple target groups. 
In this
case, Theorem \ref{thm1} tells us that 
$$ 
\pi_1(Q_B)\cong\Z_2\oplus H_1(M) 
$$
and we can choose (and fix) a $U(1)$ representation which maps the
generator of $\Z_2$ to $(-1)$. The generator of the $\Z_2$-factor in the
baryon number zero component is exactly the rotation--exchange loop, as may
be seen in the proof of Proposition \ref{prop1} in the next section. To see
that this assigns phase $(-1)$
to the 2-Skyrmion exchange loop, we may consider the Pontrjagin-Thom
representative of the loop. This is a framed 1-cycle in $S^1\times M$
depicted in figure \ref{fighwb}. It is framed-cobordant to the
representative of the loop in which one of the Skyrmions remains
static, while the other rotates through $2\pi$ about its center. 
Figure \ref{fighwb} gives a sketch of the cobordism. The
horizontal direction represents the loop parameter (``time''), the vertical
direction represents $M$ and the direction into the page represents the
cobordism parameter. The framing has been omitted, and the start and
end 1-cycles of the cobordism have been repeated, for clarity. Note that the
apparent self-intersection of the cobordism (along the dashed line) is an
artifact of the pictorial projection from 5 dimensions to 3. Hence, the 
exchange loop in
$Q_2$ is homotopic to the loop represented by one static Skyrmion and one
Skyrmion that undergoes a full rotation. 

To identify the phase assigned to this homotopy
class, we must transfer the loops to $Q_0$ by adding a pair of
anti-Skyrmions, as depicted in figure \ref{figexch1}. This changes
each configuration by multiplying by a fixed charge $-2$ configuration
which is $1$ outside a small ball -- precisely one of the
homeomorphisms discussed above. 
The figure may be described thus: the
exchange loop is homotopic to the rotation loop with an extra static
$1$-Skyrmion lump (far left), which is 
transferred to the vacuum sector by adding a stationary pair of 
anti-Skyrmions (2nd box). 
This loop is homotopic to the charge $0$ rotation loop of\nfigure \\ref{fig1}, via the sequence of moves shown. The orientations on the\ncurves indicate how to assign a framing via the blackboard framing \nconvention.\nThe resulting Pontrjagin-Thom\nrepresentative is framed cobordant to the charge $0$ exchange loop\ndescribed in section \\ref{fundesc}, which, as explained, generates the\n$\\Z_2$ factor in $\\pi_1(Q_0)$. Hence, the loop along which two\nidentical $1$-Skyrmions are exchanged (without rotating) around a contractible path\nin $M$ is assigned the phase $(-1)$. \n\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=hwb1.eps,width=5truein}%\n\\caption{ The cobordism between Skyrmion rotation and Skyrmion\nexchange. }\n\\label{fighwb}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=exch1.eps,width=6truein}%\n\\caption{ Mapping the Skyrmion exchange loop into the vacuum sector\n$Q_0$. }\n\\label{figexch1}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=exch2.eps,width=6truein}%\n\\caption{ Exchange of baryon number 2 Skyrmions is contractible in $Q_4$. }\n\\label{figexch2}\n\\end{center}\n\\end{figure}\n\nExchange of higher charge Skyrmions may be treated by considering\ncomposites of $B$ unit Skyrmions, as depicted for $B=2$ in figure\n\\ref{figexch2}. The loop may be deformed into one with four distinct\nsingle exchange events (surrounded by dashed boxes). Each of these may\nbe replaced by a pair of uncrossed strands, one of which has a $2\\pi$\ntwist, using the homotopy described in figures \\ref{fig1} and\n\\ref{fighwb} in each box. Since each strand has an even number of\ntwists, this is homotopic to the constant loop. Hence it must be\nassigned the phase $(+1)$. The argument clearly generalizes: given an\nexchange loop of a pair of charge $B$ composites, one may isolate\n$B^2$ single exchange events, each of which can be replaced by a\nsingle twist in one of the uncrossed strands. 
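The phase bookkeeping of this counting argument is elementary
arithmetic: $B^2$ single exchange events, each contributing a factor
$(-1)$, give the total phase $(-1)^{B^2}=(-1)^B$, as required for a
bound state of $B$ nucleons. A trivial numerical restatement (the
function name is ours):

```python
def composite_exchange_phase(B):
    # Exchanging two charge-B composites isolates B*B single exchange
    # events; each contributes a 2*pi twist, i.e. a phase factor of -1.
    return (-1) ** (B * B)

# B and B^2 have the same parity, so the total phase is (-1)^B:
# fermionic for odd B, bosonic for even B.
assert all(composite_exchange_phase(B) == (-1) ** B for B in range(1, 50))
```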
It is easy to see that
the twists may be distributed so that every strand except at most one
has an even number of twists. Hence if $B$ is even, this last strand
also has an even number of twists and the loop is necessarily
contractible. If $B$ is odd, the last strand has an odd number of
twists, so the loop is homotopic to the loop where $2B-1$ Skyrmions
remain static and one Skyrmion executes a $2\pi$ twist. Adding $2B$
anti-Skyrmions, this loop is identified with the baryon number $0$ exchange
loop and hence receives a phase of $(-1)$.

Finkelstein and Rubinstein also model spin in this framework. 
In this model, spin is determined by the phase associated to the rotation
loop. As we saw in the previous section, the rotation and exchange loops
agree up to homotopy (figure \ref{fighwb}), confirming the spin-statistics
theorem in this model.

Note that throughout the above discussion we have used only a local
version of exchange to model particle statistics and to verify the
spin-statistics correlation. This definition of particle statistics and
spin makes sense on general $M$, even
without an action of $SO(3)$, because the exchange and rotation loops
have support over a single coordinate chart, so we have a local
notion of rotating a Skyrmion. Things become much more subtle when a
loop has a Pontrjagin-Thom representative which projects to a nontrivial
cycle in $M$. It should not be surprising that it requires a spin
structure on $M$ to specify whether the constituent Skyrmions of such
a loop undergo an even or odd number of rotations. It was precisely
the generalization from ${\mathbb{R}}^3$ to general spaces that motivated the
definition of spin structures in the first place. 
Notice that by
changing the spin structure, we can interpret a loop as having either
an even or an odd number of rotations, so one must fix a spin structure
on space before discussing spin. (This is similar to the reason,
discussed in section \ref{scomp} above, why the secondary invariant
for path components of $S^2$-valued maps is only a relative
invariant.) Even in the simple case of quantization of many {\em
point} particles on a topologically nontrivial domain, the statistical
type (boson, fermion or something more exotic) of the particles is
usually taken to be determined only by their exchange behavior
around trivial loops in $M$ \cite{ptpart}. It may be more reasonable
to require that spin be determined by the behavior of locally
supported rotations, but to insist that the statistical type be
consistent under any particle exchange. It follows from Proposition
\ref{prop1} that the notion of an exchange or rotation loop around a
contractible loop is well defined independent of the choice of spin
structure. This is just the image of $\pi_4(G)$. That the parity of a
rotation around a non-contractible loop is determined by a spin
structure is explained in Proposition \ref{split} in the next section.

In fact, the Finkelstein-Rubinstein quantization scheme remains
consistently fermionic in this extended sense, provided that the correct
representation into U$(1)$ is chosen. As with more traditional models of
spin, a spin structure on the domain will be required. When the domain has
nontrivial first cohomology with $\Z_2$ coefficients, there are many spin
structures to choose from. Selecting a spin structure produces an
isomorphism of $\pi_1(Q_B)$ with $\Z_2\oplus H_1(M)$.
The required representation is just projection onto the $\Z_2$ factor. 
Exchange around a (possibly) non-contractible simple closed curve in space means that two identical solitons start at antipodal points on the curve and each one moves without rotating halfway around the curve to exchange places with the other soliton. The notion of moving without rotating is where the spin structure enters. We will define this after describing the representation of an exchange displayed in figure \n\ref{fig9}. Each rectangle in this figure represents a slice of a cobordism. The horizontal direction represents time, the vertical direction represents space, and the thick lines are the world lines of the solitons. The top and bottom of each rectangle are to be identified to make each slice a cylinder representing the curve cross time. We can imagine different spin structures obtained by identifying the top and bottom in the straightforward way or by putting a full twist before making the identification. The first slice is just the exchange around the curve. One of the solitons makes a left-hand rotation followed by a right-hand rotation, but this wobble is the same as no rotation at all. Adding a ribbon between the non-rotating soliton and the right rotation of the bottom soliton produces the second slice. This slice may be described as one soliton making a full left rotation in a fixed location while a second soliton traverses once around the curve without rotation. This slice homotopes to the third slice, and a second ribbon gives the fourth slice. The fourth slice may be described as follows. The vertical S-curve represents the birth of a Skyrmion--anti-Skyrmion pair after which the Skyrmion and anti-Skyrmion move in opposite directions around the curve until they collide and annihilate. The horizontal lines are two (nearly) static Skyrmions, and the figure-eight curve is a contractible (left) rotation loop. 
By definition, the exchange is non-rotating with respect to the spin structure if the $\Z_2$ representation of the vertical S-curve resulting from a baryon number $1$ exchange is trivial. We now see that the general exchange is consistent because the two horizontal lines contribute nothing to the representation, the S-curve contributes nothing, and the baryon number $B$ contractible rotation loop contributes $(-1)^B$ as described previously and seen in figure \ref{figexch2}.\n\n\n\n\n\begin{figure}\n\begin{center}\n\epsfig{file=fig9.eps,width=6truein}%\n\caption{ Sum of a loop with the rotation loop. }\n\label{fig9}\n\end{center}\n\end{figure}\n\n\n\nAs will be seen in section \ref{subsecpc}, there is a close connexion\nbetween ${\rm Sp}(1)^M$ and $(S^2)^M$. This allows us to transfer the\nFinkelstein-Rubinstein construction of a fermionic quantization scheme\nfor ${\rm Sp}(N)$-valued Skyrmions to the Faddeev-Hopf model. Recall that\nhere, unless $H^2(M;\Z)=0$, the path components of $Q$ are not\nlabelled simply by an integer, but rather are separated by an\ninvariant $\alpha\in H^2(M)$, and a relative invariant $c\in\nH^3(M)\/2\alpha\smallsmile H^1(M)$. Configurations with $\alpha\neq 0$\nnecessarily have support which wraps around a nontrivial cycle in\n$M$. They therefore lack one of the key point-like characteristics of\nconventional solitons: they are not homotopic to arbitrarily highly\nlocalized configurations. Such configurations are intrinsically tied\nto some topological ``defect'' in physical space, and so are somewhat\nexotic. We therefore mainly restrict our attention to configurations\nwith $\alpha=0$. As with Skyrmions, these configurations are labelled\nby $B\in\Z\cong H^3(M;\Z)$, which we identify with the Hopfion\nnumber. This is the relative invariant between the given\nconfiguration and the constant map. 
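As a simple check on these labels, consider the classical domain, the one-point compactification of ${\mathbb{R}}^3$: for $M=S^3$ we have $H^2(S^3;\Z)=0$, so every configuration automatically has $\alpha=0$, and\n$$\nc\in H^3(S^3;\Z)\/(2\alpha\smallsmile H^1(S^3;\Z))\cong\Z\n$$\nis an absolute rather than a merely relative invariant, recovering the usual integer Hopfion number. 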
It turns out that all the\n$\alpha=0$ sectors, $Q_B$, are homeomorphic (Theorem \ref{fhhom}) and,\nfurther, that $Q_B$ is homotopy equivalent to $({\rm Sp}(1))^M_0$, the\nvacuum sector of the classical Skyrme model. The homotopy equivalence\nis given by the fibration $f^*$ of Lemma \ref{Fhseq}. This may be\nused to map the charge zero Skyrmion exchange loop to a charge 2\nHopfion exchange loop, generating a $\Z_2$ factor in $\pi_1(Q_2)$. It\nfollows that Hopfions can be quantized fermionically within the\nFinkelstein-Rubinstein scheme. In the more exotic configurations with\n$\alpha\neq 0$, the relative invariant takes values in $\Z\/(2d\Z)\cong\nH^3(M;\Z)\/(2\alpha\smile H^1(M;\Z))$. So even though the Hopfion number\nis not defined in this case, the parity of the Hopfion number is well\ndefined and the Finkelstein-Rubinstein formalism still yields a consistently\nfermionic quantization scheme.\n\nWe return now to the Skyrme model, but in the case where $G$ is not\n${\rm Sp}(N)$ but is rather ${\rm SU}(N)$, $\Spin(N)$ or exceptional. In this case\nthe Skyrmion exchange loop is contractible, so it must be assigned the phase\n$(+1)$ in the Finkelstein-Rubinstein quantization scheme, and only\nbosonic quantization is possible in that framework. To proceed, one\nmay take the wavefunction to be a section of a complex line bundle\nover $Q_B$ equipped with a unitary connexion. Parallel transport with\nrespect to the connexion associates phases to closed loops in $Q_B$ in\na way that one might hope will mimic fermionic behaviour. The problem\nwith this is that the holonomy of a loop is not (for non-flat\nconnexions) homotopy invariant, so the phase assigned to an exchange\nloop will depend on the fine detail of how the exchange is\ntransacted. To get round this, Sorkin introduced a purely topological\ndefinition of statistical type and spin for solitons defined on\n${\mathbb{R}}^3$. His definition extends immediately to solitons defined on\narbitrary domains. 
We review these definitions next.\n\nRecall Sorkin's definition of $f_{\rm stat}:{\mathbb{R}{{P}}}^2\ra Q_2$: choose a\nsphere in ${\mathbb{R}}^3$ and associate to each antipodal pair of points on\nthis sphere the charge 2 configuration with Pontrjagin-Thom\nrepresentation given by that pair of points, framed by the coordinate\nbasis vectors. The subscript stat in $f_{\rm stat}$ refers to\nstatistics. There is an associated homomorphism $f_{\rm\nstat}^*:H^2(Q_2)\ra H^2({\mathbb{R}{{P}}}^2)\cong\Z_2$ given by pullback. According to\nSorkin's definition, the quantization wherein the wavefunction is a\nsection of the line bundle over $Q$ associated with class $c\in\nH^2(Q;\Z)$ is fermionic if $f_{\rm stat}^*(c)=1$, bosonic otherwise,\n\cite{sorkin}. Thinking of $f_{\rm stat}$ as an inclusion map, the\npulled-back class $f_{\rm stat}^*(c)$ represents the Chern class of\nthe restriction of the bundle associated to $c$ over $Q_2$ to the\nsubset ${\mathbb{R}{{P}}}^2$. The intuition behind this definition is that if there\nwere a unitary connexion on the bundle with parallel transport equal\nto $(-1)$ around exchange loops, then the restriction of the bundle to\nsuch an ${\mathbb{R}{{P}}}^2$ would have to be non-trivial. This definition\ngeneralizes to solitons defined on arbitrary domains by analogous maps\nfrom ${\mathbb{R}{{P}}}^2$ into the configuration space based on embeddings of $S^2$\nin the domain. To make sense of the framing, one must pick a\ntrivialization of the tangent bundle of the domain restricted to the\n$S^2$. Up to homotopy there is a unique such framing. The elementary\nSorkin maps will be the ones associated with sufficiently small\nspheres that lie in a single coordinate chart.\n\n\nTo model spin for solitons with domain ${\mathbb{R}}^3$, Sorkin considers the\naction of SO$(3)$ on the configuration space given by precomposing\nany field with the rotation. 
The orbit of a basic soliton with Pontrjagin-Thom\nrepresentative given by one point and an arbitrary frame has\nrepresentatives obtained by rotations of the frame. This rotation may be\nperformed on any isolated lump in any component of configuration space\nto define a map $f_{\rm spin}:\hbox{SO}(3)\to Q_B$. Sorkin defines the\nquantization associated to a class $c\in H^2(Q_B;\Z)$ to be spinorial\nif and only if the pull-back $f_{\rm spin}^*c\in\nH^2(\hbox{SO}(3);\Z)\cong \Z_2$ is non-trivial, \cite{sorkin}. This definition generalizes immediately to solitons on arbitrary domains. The\nintuition behind this definition is clearly explained in the paper of\nRamadas, \cite{ramadas}. The idea is that the classical SO$(3)$\nsymmetry of $Q_B$ lifts to a quantum\nSp$(1)=\hbox{SU}(2)=\hbox{Spin}(3)$ symmetry that descends to an\nSO$(3)$ action on the space of quantum states if and only if the\npull-back $f_{\rm spin}^*c\in H^2(\hbox{SO}(3);\Z)$ is trivial. In the\ncase where the domain is not ${\mathbb{R}}^3$, there is no SO$(3)$ symmetry, but\nthere is still an SO$(3)$ orbit of any single soliton, obtained by\nrotating the framing of a single-point Pontrjagin-Thom representative,\nso one still has a local notion of what it means to rotate a soliton.\nOne may still define a map $f_{\rm spin}:\hbox{SO}(3)\to Q_B$ and\ndefine the quantization corresponding to class $c$ to be spinorial if\nand only if $f_{\rm spin}^*c\neq 0$, though there is no corresponding\nstatement about quantum symmetries. This should be contrasted with\nthe case of isospin, which we discuss later in this section. \n\nSorkin proves a version of the spin statistics theorem when the domain\nis ${\mathbb{R}}^3$. Recall that rotations may be represented by vectors along\nthe axis of the rotation with magnitude equal to the angle of\nrotation. A one-half rotation in one direction is equivalent to a\none-half rotation in the opposite direction. 
This gives a natural\ninclusion of ${\mathbb{R}{{P}}}^2$ into SO$(3)$ as the set of one-half\nrotations. This inclusion induces an isomorphism on cohomology,\n$\iota^*:H^2(\hbox{SO}(3);\Z)\to H^2({\mathbb{R}{{P}}}^2;\Z)$. Sorkin's version of\nthe spin statistics theorem states that $\iota^*f_{\rm\nspin}^*=f_{\rm stat}^*$. There is one slightly stronger version of the\nspin statistics correspondence that one may hope for when the domain\nis arbitrary. We will discuss this later in this section.\n\nRamadas proved that the Sorkin definitions of statistical type and\nspinoriality are strict generalizations of the Finkelstein-Rubinstein\ndefinitions when the target group is SU$(N)$. The statement works as\nfollows. One first notices that the universal coefficient theorem\ngives an isomorphism \n$$ \nH^2(Q;\Z)\cong\n\hbox{Hom}(H_2(Q;\Z),\Z)\oplus \hbox{Ext}^1(H_1(Q;\Z),\Z) \cong\n\hbox{Hom}(H_2(Q;\Z),\Z)\oplus \hbox{Ext}^1(\pi_1(Q),\Z). \n$$ \nWhen\nfermionic quantization is possible in the framework of\nFinkelstein-Rubinstein, the exchange loop is an element of order $2$\nin $\pi_1(Q)$. Ramadas shows that the corresponding element of\n$H^2(Q;\Z)$ pulls back to the non-trivial element of\n$H^2(\hbox{SO}(3);\Z)$ under $f_{\rm spin}$ (when the target is\nSU$(2)$ so that $f_{\rm spin}$ is defined). More precisely, he shows\nseveral things. He shows that $H^2(\QS{N};\Z)\cong \Z$ for $N>2$ and\n$H^2(\QS{2};\Z)\cong \Z_2$. He shows that the inclusion\nSU$(N)\hookrightarrow \hbox{SU}(N+1)$ induces an isomorphism \n$$\nH^2(\QS{N+1};\Z)\to H^2(\QS{N};\Z) \n$$ \nfor $N>2$ and a surjection\nfor $N=2$. The $N>2$ case follows from the fibration\nSU$(N)\hookrightarrow \hbox{SU}(N+1)\to S^{2N+1}$. The $N=2$ case\nfollows from the four-term exact sequence induced by the Ext functor\ntogether with several ingeniously defined maps, see\n\cite{ramadas}. 
Since $H^2(\QS{2};\Z)\cong \Z_2$, the exchange loop in\n$\pi_1(\QS{2})$ corresponds to the generator of $H^2(\QS{2};\Z)$ under\nthe universal coefficient isomorphism. This class pulls back to the\ngenerator of $H^2(\hbox{SO}(3);\Z)$ under $f_{\rm spin}$. Thus, when\nit is possible to quantize fermionically in the\nFinkelstein-Rubinstein framework, the exchange loop is an element of\norder $2$, so it corresponds to a cohomology class which pulls back\nnon-trivially under $f_{\rm spin}$ and $f_{\rm stat}$. Hence it is\npossible to quantize fermionically in the Sorkin framework as well.\n\nWe now turn to the case of an arbitrary domain and compact, simply\nconnected, simple target group. Given an arbitrary domain, $M$, we can\nconstruct a degree one map to $S^3$ by collapsing the\n$2$-skeleton. This map induces, via precomposition, a map between the\ncorresponding configuration spaces, $\iota_M:G^{S^3}\to G^M$. (Given a\nsoliton configuration on $S^3$, define one on $M$ by mapping points in\n$M$ to points in $S^3$ and then following the configuration into the\ntarget.) The induced map on cohomology is surjective, so there is a\nportion of the quantization ambiguity ($H^2(Q_B;\Z)$) that depends\nonly on the codomain and is completely independent of the domain. This\nconfirms our first physical conclusion, C1. Recall that we call\n$H^2(Q_B;\Z)$ the quantization ambiguity because line bundles are\nclassified by elements of $H^2(Q_B;\Z)$ and wave functions are\nsections of such bundles. By Theorems \ref{thm1} and \ref{thm1co}\nthis cohomology is \n$$\nH^2(Q_B;\Z)=\Z^{b_2(Q_B)}\oplus{\rm Tor}(H_1(M))\oplus \pi_4(G) \n$$ \nwhere\n$b_2(Q_B)=b_1(M)+1$ for $G={\rm SU}(N)$, $N\geq 3$, and $b_2(Q_B)=b_1(M)$\nfor $G=\Spin(N)$, $N\geq 7$, $G=\hbox{Sp}(N)$, $N\geq 1$, or $G$\nexceptional. Here $b_k(X)$ denotes the $k^{\rm th}$ Betti number of $X$,\nthat is, $\dim_{\mathbb{R}}\, H_k(X;{\mathbb{R}})$. 
Also $\pi_4(G)=\Z_2$ for\n$G=\hbox{Sp}(N)$, $N\geq 1$, and $\pi_4(G)=0$ otherwise. Notice that\nthe elementary Sorkin maps factor through $\iota_M:G^{S^3}\to G^M$.\nUse $f_{\rm Estat}$ and $f_{\rm Espin}$ to denote the elementary\nSorkin maps defined on an arbitrary domain. By definition, we have\n$f_{\rm Estat}=\iota_M\circ f_{\rm stat}$ and $f_{\rm\nEspin}=\iota_M\circ f_{\rm spin}$. If $G=\Spin(N)$, $N\geq 7$, or\nexceptional, then $H^2(G^{S^3};\Z)=0$ so $\iota_M^*$ is\ntrivial. Hence, fermionic quantization is impossible in these\ncases. The inclusions Sp$(N)\hookrightarrow\hbox{Sp}(N+1)$ give maps\non the configuration spaces ${\rm Sp}(N)^{S^3}$ that induce isomorphisms on\ncohomology and Sp$(1)\cong{\rm SU}(2)$, so we have reduced to the special\nunitary case. If $G={\rm SU}(N)$, $N\geq 3$, then $H^2(G^{S^3};\Z)=\Z$ and\n$\iota_M^*$ maps ${\rm Tor}(H_1(M))$ and all the generators\n$\mu(\Sigma_1^k\otimes x_3)$ to $0$, and $\mu([M]\otimes x_5)$ to\n$\mu([S^3]\otimes x_5)$. Since the map on cohomology induced by\n$\iota_M$ is surjective, fermionic quantization in the generalized\nsense of Sorkin is possible over an arbitrary domain if and\nonly if it is possible for domain $S^3$. Combined with the result of\nRamadas that $f_{\rm spin}^*:H^2(G^{S^3};\Z)\to H^2(\hbox{SO}(3);\Z)$\nsurjects and Sorkin's spin statistics correlation that $f_{\rm\nstat}^*=\iota^*f_{\rm spin}^*$, one obtains $f_{\rm stat}^*(m\mu([M]\otimes\nx_5))= m\in\Z_2$, so quantization on the bundle represented by the\nclass $m\mu([M]\otimes x_5)$ is fermionic if and only if $m$ is odd.\nOur second physical conclusion, C2, establishing necessary and\nsufficient conditions for the existence of fermionic quantizations,\nfollows from these comments.\n\nThere are consistency conditions that one would like to check with\nregard to the generalized Sorkin model of particle statistics. 
Since\n$Q_B$ is connected and $H^2(Q_0;\Z)$ is discrete, there is a canonical\nisomorphism $H^2(Q_0)\ra H^2(Q_B)$, so a class $c\in H^2(Q_0;\Z)$\ndefines a fixed class over each sector $Q_B$. As in the discussion of\nthe Finkelstein-Rubinstein model of particle statistics, we would like\nto know that baryon number $B^\prime$ lumps in $Q_B$ are all bosonic\nwhen the one lump class in $Q_1$ is bosonic ($f_{\rm Estat}^*c=0$).\nWe would also like to know that baryon number $B^\prime$ lumps in\n$Q_B$ are bosonic or fermionic according to the parity of $B^\prime$\nwhen the one lump class in $Q_1$ is fermionic ($f_{\rm\nEstat}^*c=1$). The statistical type of a baryon number $B^\prime$ lump\nmay be defined via a generalization of the Sorkin map in which a pair\nof baryon number $B^\prime$ lumps is placed at antipodal points on a\nsphere. With this definition, cobordism arguments similar to those\ngiven in the Finkelstein-Rubinstein case show that this model of\nparticle statistics is indeed consistent. \n\nAs in the Finkelstein-Rubinstein case, the spin of a particle will\njust be determined by a local picture, and the statistical type may be\nbased on non-local exchanges of identical particles. A non-local\nexchange will be defined by a generalization of the Sorkin map\nassociated to an arbitrary embedded $2$-sphere, say $S\hookrightarrow\nM$. Denote the associated map by $f_{\rm S:stat}:{\mathbb{R}{{P}}}^2\ra Q_B$. There\nare two cases: either $S$ separates the domain, so $M-S=M_1\cup M_2$,\nor $S$ does not separate. If the 2-sphere in $M$ separates, we can\ndefine a degree one map $p:M\to S^3$ by collapsing the relative\n$2$-skeleta of $\overline M_1$ and $\overline M_2$. As in the case\nof the elementary Sorkin map, we obtain $f_{\rm S:stat}=\widehat p\circ\nf_{\rm stat}$ where $\widehat p:G^{S^3}\to G^M$. It follows\nthat this model of particle statistics is consistent with these\nnon-local exchanges. 
If the sphere does not separate, then there is a\nsimple path from one side of the sphere to the other side of the\nsphere. A tubular neighborhood of the union of this path and the\nsphere is homeomorphic to a punctured $S^2\times S^1$. We may\nconstruct a degree one projection from $M$ to $S^2\times S^1$ by\ncollapsing the complement of this tubular neighborhood. This\nintertwines the Sorkin map defined using the non-separating sphere\nwith the Sorkin map defined using $S^2\times\{1\}\subset S^2\times\nS^1$. If we knew the following conjecture, then this model of particle\nstatistics would be consistent in this larger sense. As it is, we know\nthat it satisfies the stronger consistency condition in the typical\ncase where the domain does not contain a non-separating sphere.\n\n\noindent\n{\bf Conjecture} \n$$\nf^*_{S^2\times \{1\}:{\rm stat}}\mu([S^2\times S^1]\otimes x_5)\neq 0.\n$$ \n\nTo discuss isospin, we recall some standard facts about extending\ngroup actions on a configuration space. Given a complex line bundle over\n$Q$, let $\widehat Q$ be the associated principal U$(1)$ bundle. If a group\n$\Gamma$ acts on $Q$, it is possible to construct an extension of $\Gamma$ \nby U$(1)$\nthat acts on $\widehat Q$ so that the projection to $\Gamma$\nintertwines the two actions. The extension $\widehat \Gamma$ may be defined\nas equivalence classes of paths in $\Gamma$, see \cite{bms-sp}. The quantum \nsymmetry group is a subgroup of this extension. When\n$\Gamma=\hbox{SO}(3)$, the possible U$(1)$ extensions are\nSO$(3)\times\hbox{U}(1)$ and U$(2)$. These correspond to integral and\nfractional isospin respectively when SO$(3)$ acts as rotations on the\ntarget. Recall that every compact, simply connected, simple Lie group \n$G$ has an ${\rm Sp}(1)$ subgroup. We define the isospin action on $G$ to be\nthe adjoint action of this ${\rm Sp}(1)$ subgroup. This coincides with the\nusual definition if $G={\rm Sp}(1)$. 
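For orientation, the two extensions can be presented explicitly: the trivial one is the product SO$(3)\times\hbox{U}(1)$, while U$(2)$ fits into the exact sequence\n$$\n1\to \hbox{U}(1)\to \hbox{U}(2)\to \hbox{U}(2)\/\hbox{U}(1)\cong \hbox{SO}(3)\to 1,\n$$\nwhere U$(1)$ sits inside U$(2)$ as the scalar matrices. Representations of half-integral isospin are genuine representations of U$(2)$ but only projective representations of SO$(3)$, which is why the two extensions correspond to integral and fractional isospin respectively. 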
Of course, we can always take \na trivial line bundle over $Q$, so any of our configuration spaces admit\nquantizations with integral isospin, confirming our fifth physical \nconclusion, C5.\n\nTo justify our remaining physical conclusions about isospin, we review the \nrequired\nconstructions. The Sorkin map SO$(3)\to \hbox{SU}(2)^{S^3}$\nis the map obtained from the isospin action. To see this, notice that we can \nrotate the frame in the Pontrjagin-Thom representative by either rotating \nthe domain or by rotating the codomain. When a class in\n$H^2(G^M;\Z)$ pulls back to the generator of $H^2({\mathbb{R}{{P}}}^2;\Z)$\nunder the Sorkin map, we claim that the associated quantization \nhas fractional isospin. Assume otherwise, so that the extension of the\nrotation group is SO$(3)\times\hbox{U}(1)$. This means that the\nSO$(3)$ subgroup is a lift of the SO$(3)$ action on the configuration\nspace to the bundle over the space. Restricting this to the image of SO$(3)$ \nunder $f_{\rm spin}$, we\nobtain a contradiction from Theorem \ref{thmgot}. \nSince such classes exist whenever the configuration space admits a fermionic \nquantization, we obtain our third physical conclusion, C3.\nTo show that quantizations with fractional isospin are not possible when the \ngroup does not have a symplectic or special unitary factor, one must just \nfollow through the construction of the extension given in \cite{bms-sp} to \nsee that the resulting extension is trivial. This establishes our fourth \nconclusion, C4.\n\nAs we noted earlier, the relation between the configuration space of \nSp$(1)$-valued maps and $S^2$-valued maps implies that it is always possible \nto fermionically quantize Hopfions. Since it is possible to quantize \nSp$(1)$-valued solitons with fractional isospin, the same relation implies \nthat it is possible to quantize $S^2$-valued solitons with fractional \nisospin. 
This is our sixth physical conclusion, C6.\n\n\n\section{Proofs}\n\label{proofs}\n\label{pro}\n\setcounter{equation}{0}\n\nWe begin by recalling some basic homotopy and homology theory\n\cite{span}. For pointed spaces $X,Y$, let $[X,Y]$ denote the set of\nbased homotopy classes of maps $X\ra Y$. There is a distinguished\nelement $0$ in $[X,Y]$, namely the class of the constant map. Given a\nmap $f:X\ra X'$, there is for each $Y$ a natural map $f^*:[X',Y]\ra\n[X,Y]$ defined by composition. We define $\ker f^*\subset [X',Y]$ to\nbe the inverse image of the null class $0\in[X,Y]$. A sequence of\nmaps $X\stackrel{f}{\ra}X'\stackrel{g}{\ra}X''$ is {\em coexact} if\n$\ker f^*={\rm Im}\, g^*$ for every choice of codomain $Y$. Longer sequences\nof maps are coexact if every constituent triple is coexact. Note that\nthis makes sense even in the absence of a group structure. If $Y$\nhappens to be a Lie group $G$, as it will be for us, then $[X,G]$\ninherits a group structure by pointwise multiplication, $f^*$ and\n$g^*$ are homomorphisms, and the sequence\n$[X'',G]\stackrel{g^*}{\ra}[X',G]\stackrel{f^*}{\ra}[X,G]$ is exact in\nthe usual sense. In what follows, we will make extensive use of the\nfollowing standard result \cite{span}:\n\n\begin{prop}\label{coexact} If $X$ is a CW complex and $A\subset X$ is a \nsubcomplex then there is an infinite coexact sequence,\n$$\nA\hookrightarrow X\ra X\/A\ra SA\hookrightarrow SX\ra S(X\/A)\ra\cdots\n\ra S^nA\hookrightarrow S^nX\ra S^n(X\/A)\ra\cdots\n$$\nwhere $S^n$ denotes iterated suspension.\n\end{prop}\n\nThe proofs will use several naturally defined homomorphisms. Any map\n$f:X\ra Y$ defines homomorphisms $f_*:H_k(X)\ra H_k(Y)$ which depend\nonly on the homotopy class of $f$. Hence, one has natural maps ${\cal\nH}_k:[X,Y]\ra \homo{H_k(X)}{H_k(Y)}$. 
\n There is a natural (Hurewicz) homomorphism $\hur_k:\pi_k(X)\ra\nH_k(X)$ sending each map $S^k\ra X$ to the push-forward of the\nfundamental class of $S^k$ under the map. If $X$ is $(k-1)$-connected then\n$\hur_k$ is an isomorphism. There is also a natural isomorphism\n${\rm Susp}_k:H_k(SX)\ra H_{k-1}(X)$ relating the homologies of $X$ and\n$SX$.\n\nWe may now prove a preliminary lemma that is used in the computation\nof both the fundamental group and the real cohomology. This lemma is\nthe place where we use the assumption that the domain is\norientable. This lemma was used in \cite{AK1} as well. Note that\n$SX^{(k)}$ denotes the suspension of the $k$-skeleton of $X$. The\n$k$-skeleton of the suspension of $X$ will always be denoted\n$(SX)^{(k)}$.\n\n\begin{lemma}\label{surj}\nFor a closed, connected, orientable $3$-manifold $M$ and a\nsimply-connected, compact Lie group $G$, the map\n$$\n[SM,G]\to [SM^{(2)},G]\n$$\ninduced by inclusion is surjective.\n\end{lemma}\n{\it Proof:\, } Start with a cell decomposition of $M$ with exactly\none $0$-cell and exactly one $3$-cell. The sequences,\n\begin{diagram}\nM\/M^{(2)} & \rTo^{\partial} & SM^{(2)} & \rTo & SM, \n\end{diagram}\n\vskip-.1in\n\noindent and\n\vskip-.1in\n\begin{diagram} \nSM^{(1)} & \rTo & SM^{(2)} & \rTo^{q} & S(M^{(2)}\/M^{(1)}),\n\end{diagram}\nare coexact by Proposition \ref{coexact} with $X=M$, $A=M^{(2)}$ and\n$X=M^{(2)}$, $A=M^{(1)}$ respectively. Hence, the sequences, \n\begin{diagram}\n[SM,\, G]&\rTo&[SM^{(2)},\, G]&\rTo^{\partial^*}&[M\/M^{(2)},\, G], \n\end{diagram}\n\vskip-.1in\n\noindent and\n\vskip-.1in\n\begin{diagram}\n[S(M^{(2)}\/M^{(1)}),\, G]&\rTo^{q^*}&[SM^{(2)},\, G]&\rTo&[SM^{(1)},\,\nG]=0,\n\end{diagram}\nare exact. The group $[SM^{(1)},\, G]$ is trivial because $G$ is\n$2$-connected. Hence $q^*$ is surjective. The space\n$\,M^{(3)}\/M^{(2)}\,$ is homeomorphic to $\,D^3\/S^2$. 
Under this\nidentification, $\,\partial(x) = \left(f^{(3)}(\frac{x}{|x|}),\n|x|\right)$ for $\,x\in D^3$, where $\,f^{(3)}:\,S^2\to M^{(2)}\,$ is\nthe attaching map for the 3-cell. Now we can construct the following\ncommutative diagram:\n\begin{diagram}\n[S(M^{(2)}\/M^{(1)}), G] & \rTo^{\partial^*\circ q^*} &\n[M^{(3)}\/M^{(2)}, G]\\ \dTo^{{\cal H}_3} & & \dTo_{{\cal H}_3}\\\n\homo{H_3(S(M^{(2)}\/M^{(1)}))}{H_3(G)}&\n&\homo{H_3(M^{(3)}\/M^{(2)})}{H_3(G)}\\ \dTo^{{\rm Susp}_3^{-1\,\n*}\circ\hur_{3\, *}}& & \dTo_{\hur_{3\, *}}\\\n\homo{H_2(M^{(2)}\/M^{(1)})}{\pi_3(G)}&\rTo^{\delta}&\n\homo{H_3(M^{(3)}\/M^{(2)})}{\pi_3(G)}.\n\end{diagram}\nNote that both maps ${\cal H}_3$ are isomorphisms, and \n$\hur_3:\pi_3(G)\ra H_3(G)$\nis an isomorphism because $G$ is 2-connected, so the vertical maps are\nall isomorphisms. We may therefore identify $\partial^*\circ q^*$ with\nthe map $\delta$. But $\delta$ is the coboundary map in the CW cochain\ncomplex for $H^*(M;\pi_3(G))$. Since $M$ is closed and orientable,\n$H^3(M;\pi_3(G))\cong\pi_3(G)$; as the $3$-cochain group is also\n$\pi_3(G)$ (there is a single $3$-cell), the image of $\delta$ must\nvanish, so this coboundary is trivial. Since $q^*$ is surjective, we conclude that $\partial^*=0$. \hfill\n$\Box$\n \n\n\subsection{The fundamental group of the Skyrme configuration space}\n\label{subsecpa}\nWe saw in section \ref{hom} that it was sufficient to study simple,\nsimply-connected Lie groups. We begin our study of the fundamental\ngroup by showing that the fundamental group of the configuration\nspace fits into a short exact sequence in this case.\n\n\begin{prop}\label{prop1}\nIf $M$ is a closed, connected, orientable $3$-manifold and $G$ is a\nsimple, simply-connected Lie group, then\n$$\n0\to \pi_4(G)\to\pi_1(G^M)\to H^2(M;\pi_3(G))\to 0\n$$\nis an exact sequence of abelian groups. The maps in this sequence will\nbe defined in the course of the proof.\n\end{prop}\n{\it Proof:\, } We have $\pi_1(G^M)=[S^1,\, G^M]\cong [SM,\, G]$,\nwhere $SM$ is the suspension of $M$. 
The sequence\n$$\n(SM)^{(3)}\hookrightarrow SM \to SM\/(SM)^{(3)}\n$$\nis coexact by Proposition \ref{coexact} with $X=SM$, $A=(SM)^{(3)}$,\nand hence induces the exact sequence of groups:\n\begin{equation}\label{sq1}\n[SM\/(SM)^{(3)},\, G]\to [SM,\, G]\to [(SM)^{(3)},\, G].\n\end{equation}\nNoting that $(SM)^{(3)}=SM^{(2)}$, Lemma \ref{surj} implies that the\nlast map in the above sequence is surjective. This will become the\nexact sequence we seek, after suitable identifications. First we\nidentify the third group in sequence \ref{sq1} with the second\ncohomology of $M$, in similar fashion to the proof of Lemma\n\ref{surj}. From Proposition \ref{coexact}, we have exact sequences,\n\begin{diagram}\n[S((SM)^{(2)}\/(SM)^{(1)}),\, G] & \rTo^{q^*} & [S((SM)^{(2)}),\, G] &\n\rTo & [S((SM)^{(1)}),\, G] \n\end{diagram}\n\vskip-.1in\n\n\noindent\n($X=(SM)^{(2)}$, $A=(SM)^{(1)}$) and\n\n\vskip-.1in\n\begin{diagram} \n[S((SM)^{(2)}),\, G] & \rTo^{\partial^*} & [(SM)^{(3)}\/(SM)^{(2)},\,\nG] & \rTo & [(SM)^{(3)},\, G],\n\end{diagram}\n($X=(SM)^{(3)}$, $A=(SM)^{(2)}$). Since $G$ is $2$-connected, \n$[S((SM)^{(1)}),\,G]=0$, and $q^*$ is surjective. \nThus coker$(\partial^*\circ\nq^*)=\hbox{coker}(\partial^*)=[(SM)^{(3)},\, G]$. Using the Hurewicz\nand suspension isomorphisms as before, we may identify\n$\partial^*\circ q^*$ with a coboundary map in the CW cochain complex\nfor $H^*(M;\pi_3(G))$:\n\begin{diagram}\n[S((SM)^{(2)}\/(SM)^{(1)}),\, G] & \rTo^{\partial^*\circ q^*} &\n[(SM)^{(3)}\/(SM)^{(2)},\, G]\\ \dTo & & \dTo \\\n\hbox{Hom}(H_2((SM)^{(2)}\/(SM)^{(1)}),\pi_3(G)) & \n\hskip-3pt\rTo_{\delta}\hskip-3pt &\n\hbox{Hom}(H_3((SM)^{(3)}\/(SM)^{(2)}),\pi_3(G)). 
\\\\ \n\\end{diagram}\nNow,\n$$\n[(SM)^{(3)},G] \\cong\\hbox{coker}(\\partial^*\\circ q^*)\n\\cong\\hbox{coker}(\\delta) \\cong H^3(SM;\\pi_3(G))\\cong H^2(M;\\pi_3(G)).\n$$\n\nThe first group in sequence \\ref{sq1} is $\\pi_4(G)$ because\n$SM\/(SM)^{(3)}\\cong S^4$. For the non-symplectic groups $\\pi_4(G)=0$,\nand we are done. For the higher symplectic groups, the fibration\n$$\n{\\rm Sp}(n)\\to {\\rm Sp}(n+1)\\to S^{4n+3}\n$$\ninduces a fibration\n$$\n({\\rm Sp}(n))^M\\to ({\\rm Sp}(n+1))^M\\to (S^{4n+3})^M.\n$$\nThe homotopy exact sequence of this fibration reads\n$$\n\\pi_2((S^{4n+3})^M)\\to\\pi_1(({\\rm Sp}(n))^M)\\to \\pi_1(({\\rm Sp}(n+1))^M)\\to\n\\pi_1((S^{4n+3})^M).\n$$\nNow $\\pi_k((S^{4n+3})^M)=[S^k,\\, (S^{4n+3})^M] \\cong [S^kM,\\,\nS^{4n+3}]$. For $k=1, 2$ these groups are trivial since $S^{4n+3}$ is\n$5$-connected. Hence $\\pi_1({{\\rm Sp}}(n+1)^M)\\cong \\pi_1({{\\rm Sp}}(n)^M)$ for\nall $n\\geq 1$, so the proposition reduces to showing that the first\nmap in sequence \\ref{sq1} is injective for $G={\\rm Sp}(1)$. In the special\ncase of $M\\cong S^3$ the exchange loop depicted in\nfigure \\ref{fig1} represents the generator of\n$\\pi_4(\\hbox{Sp}(1))\\cong \\pi_1((\\hbox{Sp}(1))^{S^3})$. Our final task\nis to show that the image of this generator under push forward by the\ncollapsing map $M\\ra M\/M^{(2)}$ is non-trivial in\n$\\pi_1(({\\rm Sp}(1))^{M})$. \n\nProceed indirectly and assume that there is a homotopy between the\nconstant loop and the exchange loop, say $H:M\\times[0,1]\\times [0,1]\n\\to \\hbox{Sp}(1)$. Set $\\Sigma=H^{-1}(-1)$. Now glue the homotopy\nfrom figure \\ref{fig3} to this homotopy and let\n$\\widehat\\Sigma=\\Sigma\\cap \\Delta$ where $\\Delta$ is the hemisphere\nin figure \\ref{fig3}. The trivialization of the normal bundle to\n$\\widehat\\Sigma$ defined over $\\Sigma$ and the trivialization of the\nnormal bundle to $\\widehat\\Sigma$ defined over $\\Delta$ do not\nmatch. 
The discrepancy is the generator of $\pi_1(\hbox{SO}(3))$. It\nfollows that the second Stiefel-Whitney class,\n$w_2(N(\widehat\Sigma))\in H^2(M\times[0,1]\times [0,1];\n\pi_1(\hbox{SO}(3)))$ is non-trivial \cite{ste}. However, the Whitney\nproduct formula yields, \n$$\n\begin{array}{rl}\nw(N(\widehat\Sigma))= & w(N(\widehat\Sigma))\smallsmile w(T\widehat\Sigma)\\ =\n& w(T\widehat\Sigma\oplus N(\widehat\Sigma)) = w(T(M\times[0,1]\times\n[0,1])|_{\widehat\Sigma}) = 1. \n\end{array}\n$$\nHere $w$ is the total Stiefel-Whitney class, $ w(T\widehat\Sigma)=1$\nbecause the Stiefel-Whitney class of any orientable surface is $1$,\nand $T(M\times[0,1]\times [0,1])|_{\widehat\Sigma}$ is trivial. This\ncontradiction establishes the proposition. \hfill$\Box$\n \n\nIt is well known that a split exact sequence of abelian groups, $0\to\nK\to G \stackrel{\leftarrow}{\rightarrow} H \to 0$, induces an\nisomorphism $K\oplus H \cong G$. The following proposition will\nestablish such a splitting, and therefore complete our computation\nof the fundamental group of the Skyrme configuration spaces. The proof\nwill require surgery descriptions of $3$-manifolds, so we recall what\nthis means. Given a framed link $L$ (i.e.\ a $1$-dimensional\nsubmanifold with a trivialized normal bundle, equivalently an identification of a\nclosed tubular neighborhood with $\disju S^1\times D^2$) in\n$S^3=\partial D^4$, we define a $4$-manifold by $D^4\cup_{\disju\nS^1\times D^2}D^2\times D^2$. The boundary of this $4$-manifold is\nsaid to be the $3$-manifold obtained by surgery on $L$. It is denoted\n$S^3_L$.\n\n\begin{prop}\label{split}\nThe sequence,\n$$\n0\to \pi_4(G)\to\pi_1(G^M)\to H^2(M;\pi_3(G))\to 0\n$$\nsplits, and there is a splitting associated to each spin structure on\n$M$.\n\end{prop}\n{\it Proof:\, } As we saw at the end of the proof of the previous\nproposition, it is sufficient to check the result for\n$G=\hbox{Sp}(1)$. 
Since the three dimensional Spin cobordism group is\ntrivial, every $3$-manifold is surgery on a framed link with even\nself-linking numbers \\cite{kirby}. \n Such a surgery description induces a Spin structure in $M$. Let\n$M=S^3_L$ be such a surgery description, orient the link and let\n$\\{\\mu_j\\}_{j=1}^c$ be the positively oriented meridians to the\ncomponents of the link. These meridians generate $H_1(M)\\cong\nH^2(M;\\pi_3(\\hbox{Sp}(1)))$. This last isomorphism is Poincar\\'e\nduality. Define a splitting by:\n$$\ns:H_1(M)\\to\\pi_1((\\hbox{Sp}(1))^M); \\quad s(\\mu_j)=\nPT(\\mu_j\\times\\{\\frac12\\}, \\hbox{canonical framing}).\n$$\nHere $PT$ represents the Pontrjagin-Thom construction and the\ncanonical framing is constructed as follows. The first vector is\nchosen to be the $0$-framing on $\\mu_j$ considered as an unknot in\n$S^3$. The second vector is obtained by taking the cross product of\nthe tangent vector with the first vector, and the third vector is\njust the direction of the interval. We will now check that this map\nrespects the relations in $H_1(M)$. Let $Q_L=(n_{jk})$ be the linking\nmatrix so that, $H_1(M)=\\langle \\mu_j| n_{jk}\\mu_j=0 \\rangle$. We are\nusing the summation convention in this description. The $2$-cycle\nrepresenting the relation, $n_{jk}\\mu_j=0$ may be constructed from a\nSeifert surface to the $j^{\\rm th}$ component of the link, when\nthis component is viewed as a knot in $S^3$. Let $\\Sigma_j$ denote\nthis Seifert surface. The desired $2$-cycle is then\n$\\widehat\\Sigma_j=(\\Sigma_j-\\stackrel{\\circ}{N}(L))\\cup\\sigma_j$. Here\n$\\sigma_j$ is the surface in $S^1\\times D^2$ with $n_{jj}$ meridians\ndepicted on the left in figure \\ref{fig4}.\n\\begin{figure}\n\\hskip105bp\\epsfig{file=skfig4.eps,width=4truein}%\n\\caption{The $2$-cycle in the proof of Proposition \\ref{split}}\n\\label{fig4}\n\\end{figure}\nThe boundary of $\\widehat\\Sigma_j$ is exactly the relation,\n$n_{jk}\\mu_j=0$. 
The framing on each copy of $\\mu_k$ for $k\\neq j$\ninduced from this surface agrees with the $0$-framing. The framing on\neach copy of $\\mu_j$ is $-\\hbox{sign}(n_{jj})$. The surface\n$\\widehat\\Sigma_j$ may be extended to a surface in $M\\times\n[0,1]\\times [0,1]$ by adding a collar of the boundary in the\ndirection of the second interval followed by one band for each pair of\nthe $\\mu_j$ as depicted on the right of figure \\ref{fig4}. The\nresulting surface has a canonical framing, and the corresponding\nhomotopy given by the Pontrjagin-Thom construction homotopes the loop\ncorresponding to the relation to a loop corresponding to a\n$\\pm2$-framed unlink. Such a loop is null-homotopic, as\nrequired. \\hfill $\\Box$\n \n\nWe remark that the Spin structures on $M$ correspond to\n$H^1(M;{\\mathbb Z}_2)$. In addition, the splittings of ${\\mathbb\nZ}_2\\to \\pi_1(\\hbox{Sp}(1)^M)\\to H^2(M;{\\mathbb Z})$ correspond to\nthe group cohomology, \n$$\nH^1(H^2(M;{\\mathbb Z});{\\mathbb Z}_2) \\cong H^1(H_1(M;{\\mathbb\nZ});{\\mathbb Z}_2) \\cong H^1(M;{\\mathbb Z}_2).\n$$\nThe last isomorphism holds because the $2$-skeleton of $M$ is the\n$2$-skeleton of a $K(H_1(M;{\\mathbb Z}), 1)$. A combination of\nPropositions \\ref{prop1} and \\ref{split} together with Reductions\n\\ref{r5}, \\ref{r6} and \\ref{r7} and the corollary of the universal\ncoefficient theorem that $H^*(X;A\\oplus B)\\cong H^*(X;A)\\oplus\nH^*(X;B)$ gives Theorem \\ref{thm1}.\n\n\n\n\\subsection{Cohomology of Skyrme configuration spaces}\n\\label{subsecpb} \n\nAs we have seen, we may restrict our attention to compact, simple,\nsimply-connected $G$. Recall the cohomology classes, $x_j$, and the\n$\\mu$-map defined in section \\ref{geo}. Throughout this section we\nwill take the coefficients of any homology or cohomology to be the\nreal numbers unless noted to the contrary.
To compute the cohomology\nof $G^M$ we will use the cofibrations $M^{(k)}\\to M^{(k+1)} \\to\nM^{(k+1)}\/ M^{(k)}$ and the fact that $ M^{(k+1)}\/ M^{(k)}$ is a\nbouquet of spheres to reduce the problem to the case where the domain\nis a sphere. \n\nBriefly recall the computation of the cohomology of the loop\nspaces. These are well-known results, but we sketch a proof because\nthis explains why the classes $\\mu(\\Sigma,x_jx_k)$ are trivial. As\nusual let $\\Omega^kG=G^{S^k}$ denote the $k$-iterated loop space. We\nhave the following lemma.\n\\begin{lemma}\\label{og}\nThe cohomology rings of the first loop groups are given by,\n$H^*(\\Omega G)={\\mathbb R}[\\mu([S^1]\\otimes x_j)]$, $H^*(\\Omega^2\nG)={\\mathbb R}[\\mu([S^2]\\otimes x_j)]$, and $H^*(\\Omega^3_0G)={\\mathbb\nR}[\\mu([S^3]\\otimes x_j),j>3]$.\n\\end{lemma}\n{\\it Proof:\\, } Recall that the path space $PG$ is contractible, and\nfits into a fibration, $\\Omega G\\hookrightarrow PG\\to G$. The Serre\nspectral sequence of this fibration has, $E_2^{p,q} =\nH^p(G;H^q(\\Omega G))$. Since $G$ is simply-connected, the coefficient\nsystem is untwisted. Since $PG$ is contractible, all classes of\npositive degree have to die at some point in this spectral\nsequence. By its position in the spectral sequence we know that all differentials of $x_3$ vanish,\nso there must be some class in $H^2(\\Omega G)$ mapping to $x_3$. The\nclass $\\mu([S^1]\\otimes x_3)$ is one such class, and it is the only class\nthere can be without something else surviving to the limit\ngroup of the spectral sequence. Notice that classes of the form\n$x_3\\prod x_{j_k}$ are images of classes of the form $\\mu([S^1]\\otimes\nx_3) \\prod x_{j_k}$, so we have killed all classes with a factor of\n$x_3$. In the same way, we can kill terms with a factor of the next\n$x_j$. We conclude that $H^*(\\Omega G)={\\mathbb R}[\\mu([S^1]\\otimes\nx_j)]$.
Repeating the argument with the\nfibration, $\\Omega^2G\\to P\\Omega G\\to \\Omega G$ we obtain,\n$H^*(\\Omega^2 G)={\\mathbb R}[\\mu([S^2]\\otimes x_j)]$. This time the\ncoefficient system is untwisted because $\\pi_1(\\Omega G)\\cong\n\\pi_2(G) = 0$.\n\nWe need to adjust the argument a bit at the next stage because\n$\\pi_0(\\Omega^3G)\\cong\\pi_1(\\Omega^2 G)\\cong \\pi_3(G) \\cong {\\mathbb\nZ}$. This shows that the path components of $\\Omega^3G$ may be\nlabeled by the integers. Each component is homeomorphic to the\nidentity component since $\\Omega^3G$ is a topological group. In this\ncase, we have no guarantee that the coefficient system is untwisted,\nso we will use a different approach that will be useful again in section\n\\ref{subsecpd}. Let $\\widetilde{\\Omega^2G}$ denote the universal cover\nof $\\Omega^2G$ and let $\\Omega_0^3G$ denote the identity component of\n$\\Omega^3G$. These fit into a fibration, $\\Omega_0^3G\\to\nP\\widetilde{\\Omega^2G} \\to \\widetilde{\\Omega^2G}$ that may be used to obtain,\n$H^*(\\Omega^3_0G)={\\mathbb R}[\\mu([S^3]\\otimes x_j),j>3]$. We will use \nequivariant cohomology to compute the cohomology of $\\widetilde{\\Omega^2G}$.\n\nRecall that any Lie group, say $\\Gamma$, acts properly on a\ncontractible space called the total space of the universal\nbundle. This space is denoted $E\\Gamma$. The quotient of this by\n$\\Gamma$ is the classifying space $B\\Gamma$. Let $X$ be a $\\Gamma$\nspace (i.e. a space with a $\\Gamma$ action) and consider the space\n$X_\\Gamma :=E\\Gamma\\times_\\Gamma X$. The cohomology of the space\n$X_\\Gamma$ is called the equivariant cohomology of $X$. It is denoted\nby $H^*_\\Gamma(X)$. When the $\\Gamma$ action on $X$ is free and proper\n(as it is in our case), we have a fibration $X_\\Gamma\\to X\/\\Gamma$\nobtained by ignoring the $E\\Gamma$ component in the definition of\n$X_\\Gamma$.
The fiber of this fibration is just $E\\Gamma$ which is\ncontractible, so the spectral sequence of the fibration implies that\nthe cohomology of $X\/\\Gamma$ is isomorphic to the equivariant\ncohomology of $X$. By ignoring the $X$ component in the definition of\n$X_\\Gamma$ we obtain a fibration $X_\\Gamma\\to B\\Gamma$ that may be\nused to relate the equivariant cohomology of $X$ to the cohomology of\n$X$.\n\nIf we apply these ideas with $X=\\widetilde{\\Omega^2G}$ and\n$\\Gamma=\\pi_1(\\Omega^2G)\\cong \\Z$, we obtain a spectral sequence that\nmay be used to show that the cohomology of $\\tilde{\\Omega^2G}$ is\ngenerated by $\\mu([S^2]\\otimes x_j)$ for $j>3$. This then plugs in to\ngive the stated result for $\\Omega^3_0G$. \\hfill $\\Box$\n\nReturning to the situation of a $3$-manifold domain, let $M$ have a\ncell decomposition with one $0$-cell ($p_0$) several $1$-cells\n($e_r$) several $2$-cells ($f_s$) and one $3$-cell ($[M]$). Since\n$G^{X\\vee Y}=G^X\\times G^Y$, and $ M^{(k+1)}\/ M^{(k)}$ is a bouquet of\nspheres we have, \n$$\n\\begin{array}{rcl} H^*(G^{M^{(1)}})&=&{\\mathbb R}[\\mu(e_r\\otimes\nx_j)], \\nonumber \\\\ H^*(G^{M^{(2)}\/M^{(1)}})&=&{\\mathbb\nR}[\\mu(f_s\\otimes x_j)],\\nonumber \\\\\nH^*(G^{M^{(3)}\/M^{(2)}}_0)&=&{\\mathbb R}[\\mu([M]\\otimes x_j), j>0].\n\\end{array}\n$$ \nThe next lemma assembles these facts into the cohomology of\n$G^{M^{(2)}}$.\n\n\\begin{lemma}\\label{gm2}\nIf $\\Sigma^1_r$ form a basis for $H_1(M)$, and $\\Sigma^2_s$ form a\nbasis for $H_2(M)$ we have,\n$$\nH^*(G^{M^{(2)}})= {\\mathbb R}[\\mu(\\Sigma^1_r\\otimes x_j),\n\\mu(\\Sigma^2_s\\otimes x_k)].\n$$\n\\end{lemma}\n{\\it Proof:\\, } The cofibration $M^{(1)}\\to M^{(2)} \\to M^{(2)}\/\nM^{(1)} $ leads to a fibration,\n$$\nG^{M^{(2)}\/ M^{(1)}}\\to G^{M^{(2)}}\\to G^{M^{(1)}}.\n$$ \nSince $G$ is $2$-connected, $\\pi_1(G^{M^{(1)}})=0$, so the\ncoefficients in the cohomology appearing in the second term of the\nSerre spectral sequence are untwisted. 
We have, \n$$\n\\begin{array}{rcl}\nE_2^{*,*}&=&H^*( G^{M^{(1)}}; H^*(G^{M^{(2)}\/M^{(1)}})) \\\\\n&=& H^*(G^{M^{(1)}})\\otimes H^*(G^{M^{(2)}\/M^{(1)}}) \\\\\n&\\cong& {\\mathbb R}[\\mu(e_r\\otimes x_j), \\mu(f_s\\otimes x_k)]. \n\\end{array}\n$$\nTo go further we need to understand the differentials in this spectral\nsequence. Since $\\mu(e_r\\otimes x_j)\\in E_2^{j-1,0}$ we have\n$d_k\\mu(e_r\\otimes x_j)=0$ for all $k$. We will show that $d_\\ell\n\\mu(f_s\\otimes x_k)=0$ for all $\\ell$; the spectral sequence then\ndegenerates and the lemma follows. \\hfill $\\Box$\n\nThe cofibration $M^{(2)}\\to M\\to M\/M^{(2)}$ now leads to a fibration\nwhose Serre spectral sequence has\n$$\n\\begin{array}{rcl}\nE_2^{*,*}&\\cong&{\\mathbb R}[\\mu(\\Sigma^1_r\\otimes x_j),\n\\mu(\\Sigma^2_s\\otimes x_k), \\mu([M]\\otimes x_j), j>0].\n\\end{array} \n$$\nRepeating the argument from Lemma \\ref{gm2} with computation\n\\ref{delta}, we see that all of the differentials of this spectral\nsequence vanish. This completes our computation of the cohomology of\nthe Skyrme configuration space. \\hfill $\\Box$\n\n\n\\subsection{The fundamental group of Faddeev-Hopf configuration spaces}\n\\label{subsecpc}\n\nIn this subsection we compute the fundamental group of the\nFaddeev-Hopf configuration space, $(S^2)^M$. Recall (Theorem\n\\ref{ponthm}) that the path components $(S^2)^M_\\varphi$ (where\n$\\varphi$ is any representative of the component) fall into families\nlabelled by $\\varphi^*\\mu_{S^2}\\in H^2(M;\\Z)$, where $\\mu_{S^2}$ is a\ngenerator of $H^2(S^2;\\Z)$, and that components within a given family\nare labelled by $\\alpha\\in H^3(M;\\Z)\/2\\varphi^*\\mu_{S^2}\\smallsmile H^1(M;\\Z)$.\n\nTo analyze the Faddeev-Hopf configuration space in more detail we will\nfurther exploit its natural relationship with the classical\n($G={\\rm SU}(2)={\\rm Sp}(1)$) Skyrme configuration space. These ideas were\nconcurrently introduced in \\cite{AK2}. We identify $S^2$ with the unit\npurely imaginary quaternions, and $S^1$ with the unit complex\nnumbers. The quotient Sp$(1)\/S^1$ is homeomorphic to $S^2$, with an\nexplicit homeomorphism given by $[q]\\mapsto qiq^*$.
Our main tool\nwill be the map \n\\beq\\label{qdef} \n{\\mathfrak q}:S^2\\times S^1\\to\n\\hbox{Sp}(1),\\qquad {\\mathfrak q}(x,\\lambda)=q\\lambda q^*,\n\\qquad\\hbox{where} \\ x=qiq^*. \n\\end{equation} \nIt is not difficult to verify the\nfollowing properties of ${\\mathfrak q}$.\n\\begin{enumerate}\n\\item It is well defined and smooth.\n\\item ${\\mathfrak q}(x,\\lambda_1\\lambda_2)= {\\mathfrak q}(x,\\lambda_1)\n{\\mathfrak q}(x,\\lambda_2)$.\n\\item ${\\mathfrak q}^{-1}(1)=S^2\\times\\{1\\}$.\n\\item ${\\mathfrak q}(x,\\lambda)x({\\mathfrak q}(x,\\lambda))^*=x$.\n\\item ${\\mathfrak q}(x,\\cdot):S^1\\to \\{q|qxq^*=x\\}$ is a\ndiffeomorphism.\n\\item deg$({\\mathfrak q})=2$ with the standard ``outer normal first''\norientations. \n\\end{enumerate}\n\nFor example writing $x=qiq^*$, the fourth property may be verified as\n${\\mathfrak q}(x,\\lambda)x({\\mathfrak q}(x,\\lambda))^*=q\\lambda\nq^*qiq^*q\\lambda^* q^*= x$. We will let $\\lambda^x$ denote the\ninverse to ${\\mathfrak q}(x,\\cdot)$. We will also use the related\nmaps, $\\rho: S^2\\times\\hbox{Sp}(1) \\times S^1\\to\nS^2\\times\\hbox{Sp}(1)$ and $f: S^2\\times\\hbox{Sp}(1)\\to S^2\\times\nS^2$ defined by $\\rho(x,y,\\lambda)=(x,y{\\mathfrak q}(x,\\lambda))$ and\n$f(x, q)=(x, qxq^*)$. \nProperties (2) and (3) show that $\\rho$ is a free right\naction. Properties (4) and (5) show that $f$ is a principal fibration\nwith action $\\rho$. 
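For instance, property (2) follows from the same kind of direct computation as property (4): writing $x=qiq^*$,

```latex
{\mathfrak q}(x,\lambda_1\lambda_2)
  = q\lambda_1\lambda_2\,q^*
  = (q\lambda_1 q^*)(q\lambda_2 q^*)
  = {\mathfrak q}(x,\lambda_1)\,{\mathfrak q}(x,\lambda_2).
```

Well-definedness in property (1) is checked the same way: any other $q'$ with $x=q'iq'^*$ satisfies $q'=q\mu$ for a unit complex number $\mu$, and since unit complex numbers commute with $\lambda\in S^1$, we get $q'\lambda q'^*=q\mu\lambda\mu^*q^*=q\lambda q^*$.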
As our first application of these maps we show\nthat the evaluation map is a fibration.\n\\begin{lemma}\\label{s2tofree}\nThe evaluation map,\n$$\n\\hbox{ev}_{p_0}:\\hbox{FreeMap}(M, S^2) \\to S^2,\n$$\ngiven by $\\hbox{ev}_{p_0}(\\varphi)=\\varphi(p_0)$ is a fibration.\n\\end{lemma}\n{\\it Proof:\\, } We just need to construct the diagonal map in the\nfollowing diagram.\n\\begin{diagram}\nX\\times \\{0\\} & \\rTo^{\\overline h} & \\hbox{FreeMap}(M, S^2) \\\\\n\\dTo^{i} & \\ruTo^{\\overline H} & \\dTo_{\\hbox{ev}_{p_0}} \\\\ X\\times [0,\n1] & \\rTo_{H} & S^2 \\\\\n\\end{diagram}\nIf we define the horizontal maps in the following diagram by $\\mu(x,\nt)= (\\overline h(x)(p_0), H(x,t))$ and $\\overline\\nu(x)= (\\overline\nh(x)(p_0),1)$, then the existence of the diagonal map will follow\nbecause $f$ is a fibration. \n\\begin{diagram}\nX\\times \\{0\\} & \\rTo^{\\overline \\nu} & S^2\\times\\hbox{Sp}(1) \\\\\n\\dTo^{i} & \\ruTo^{\\overline \\mu} & \\dTo_{f} \\\\ X\\times [0, 1] &\n\\rTo_{\\mu} & S^2\\times S^2\\\\\n\\end{diagram}\nThe desired map is just, $\\overline H(x, t)(p)= \\overline\\mu_2(x, t)\n\\overline h(x)(p)(\\overline\\mu_2(x, t))^*$ \\hfill $\\Box$\n\n\\noindent \nClearly the fiber of this fibration is $(S^2)^M$. Recall that we are\nusing $(S^2)^M_\\varphi$ to denote the $\\varphi$-component of the\nspace of based maps. \n\nIn \\cite{AK2} these ideas were used to give a new proof of Pontrjagin's \nhomotopy classification of maps from a $3$-manifold to $S^2$. The following \nlemma comes from that paper. A second proof of this lemma may be found in \n\\cite{AK3}.\n\n\\begin{lemma}[Auckly-Kapitanski]\nThere exists a map $u:M\\to \\hbox{Sp}(1)$ such that $\\psi:M\\to S^2$ and \n$\\varphi:M\\to S^2$ are related by $\\psi=u\\varphi u^*$ if and only if \n$\\psi^*\\mu_{S^2}=\\varphi^*\\mu_{S^2}$.\n\\end{lemma}\n\nTheorem \\ref{fhhom} follows directly from this lemma. 
Assuming \n$\\psi^*\\mu_{S^2}=\\varphi^*\\mu_{S^2}$, define a map $F: (S^2)^M_\\varphi \n\\to (S^2)^M_\\psi$ by $F(\\xi)=u\\xi u^*$. This is clearly well-defined \nbecause any map homotopic to $\\varphi$ will be mapped to a map homotopic \nto $\\psi$ under $F$. There is a well-defined inverse given by \n$F^{-1}(\\zeta)=u^*\\zeta u$.\n\nWe have a fibration relating the identity\ncomponent of the Skyrme configuration space to any component of the\nFaddeev-Hopf configuration space.\n\n\\begin{lemma}\\label{Fhseq}\nThe map induced by $f$,\n$$\n\\{\\varphi\\}\\times \\hbox{Sp}(1)^M_0\n\\stackrel{f_*}{\\rightarrow}\\{\\varphi\\}\\times (S^2)^M_\\varphi\n$$\nis a fibration.\n\\end{lemma}\n{\\it Proof:\\, } Once again we just need to construct the diagonal map\nin a diagram.\n\\begin{diagram}\nX\\times \\{0\\} & \\rTo^{\\overline h} & \\hbox{Sp}(1)^M_0 \\\\ \\dTo^{i} &\n\\ruTo^{\\overline H} & \\dTo_{f_*} \\\\ X\\times [0, 1] & \\rTo_{H} &\n(S^2)^M_\\varphi\\\\\n\\end{diagram}\nSo once again we consider a second diagram.\n\\begin{diagram}\nM\\times X\\times \\{0\\} & \\rTo^{\\overline \\nu} & S^2\\times\\hbox{Sp}(1)\n\\\\ \\dTo^{i} & \\ruTo^{\\overline \\mu} & \\dTo_{f} \\\\ M\\times X\\times [0,\n1] & \\rTo_{\\mu} & S^2\\times S^2\\\\\n\\end{diagram}\nHere the horizontal maps are given by $\\overline\\nu(p, x)=(\\varphi(p),\n\\overline h(x)(p))$ and $\\mu(p, x, t)=(\\varphi(p), H(x, t)(p))$. The\ndiagonal lift exists because $f$ is a fibration. We need to use\nproperty (5) of ${\\mathfrak q}$ to adjust the base points. Let $x_0$\nbe the basepoint of $S^2$ and define the desired lift by\n$$\n\\overline H(x, t)(p)=\\overline\\mu_2(p, x, t) {\\mathfrak\nq}(\\varphi(p),\\lambda^{x_0} (\\overline\\mu_2(p, x, t)^{-1})).\n$$\nThis completes the proof. \\hfill $\\Box$\n \nBy property (5) of ${\\mathfrak q}$, we see that any element of the\nfiber of the above fibration may be written in the form ${\\mathfrak\nq}(\\varphi, \\lambda)$ for some map $\\lambda:M\\to S^1$.
Since\n${\\mathfrak q}(\\varphi, \\lambda)$ is null homotopic, its degree must\nbe zero. By property (6) of ${\\mathfrak q}$, this implies that\n$\\lambda^*\\mu_{S^1}$ must be in the kernel of the cup product\n$2\\varphi^*\\mu_{S^2}\\smallsmile$. Conversely, given any map $\\lambda$ with\n$\\lambda^*\\mu_{S^1}\\in\\hbox{ker}(2\\varphi^*\\mu_{S^2}\\smallsmile)$ we get an\nelement of the fiber. Recall that the components of the space of maps\nfrom $M$ to $S^1$ correspond to $H^1(M;{\\mathbb Z})$ and each\ncomponent is homeomorphic to the identity component which is\nhomeomorphic to ${\\mathbb R}^M$ which is contractible. It follows\nthat up to homotopy Sp$(1)^M_0$ is a regular\nker$(2\\varphi^*\\mu_{S^2}\\smallsmile)$ cover of $(S^2)^M_\\varphi$ (the fiber\nis homotopy equivalent to ker$(2\\varphi^*\\mu_{S^2}\\smallsmile)$). The homotopy\nsequence of the fibration then gives us the following sequence:\n\\begin{equation}\\label{fhseq}\n0\\to \\pi_1(\\hbox{Sp}(1)^M)\\to \\pi_1((S^2)^M_\\varphi)\\to\n\\hbox{ker}(2\\varphi^*\\mu_{S^2}\\smallsmile)\\to 0.\n\\end{equation}\nSince we do not already know that $\\pi_1((S^2)^M_\\varphi)$ is abelian,\nwe not only need to show that the sequence splits, we also need to\nshow that the image of the splitting commutes with the image of\n$\\pi_1(\\hbox{Sp}(1)^M)$. This is the content of the following\nlemma. This lemma will complete the proof of Theorem \\ref{thm2}.\n\n\\begin{lemma}\\label{Fhsplit} The sequence (\\ref{fhseq}) splits and the \nimage of the splitting\ncommutes with the image of $\\pi_1(\\hbox{Sp}(1)^M)$.\n\\end{lemma}\n{\\it Proof:\\, } Given $\\theta\\in\\hbox{ker}(2\\varphi^*\\mu_{S^2})$\ndefine a corresponding map $\\lambda_\\theta:M\\to S^1$ in the usual\nway, $\\lambda_\\theta(p)=e^{\\int^p_{p_0}\\theta}$. This induces a map,\n$q_\\theta:M\\to \\hbox{Sp}(1)$ by $q_\\theta(p)={\\mathfrak\nq}(\\varphi(p),\\lambda_\\theta(p))$. 
We compute the degree as follows:\n$$\n\\begin{array}{rl}\n\\hbox{deg}(q_\\theta)= & \\int_M q_\\theta^*\\mu_{\\hbox{Sp}(1)} \\\\ = &\n2\\int_M \\varphi^*\\mu_{S^2}\\wedge\\lambda^*_\\theta\\mu_{S^1} = 2\\int_M\n\\varphi^*\\mu_{S^2}\\wedge\\theta = 0.\n\\end{array}\n$$\nIt follows that there is a homotopy $\\overline H_\\theta$ with\n$\\overline H_\\theta(0)=1$ and $\\overline H_\\theta(1)=q_\\theta$. Define\nthe splitting by sending $\\theta$ to\n$H_\\theta\\in\\pi_1((S^2)^M_\\varphi)$ given by\n$H_\\theta(t)(p)=f(\\varphi(p),\\overline H_\\theta(t)(p))$. To see that\nthe image of this splitting commutes with the image of\n$\\pi_1(\\hbox{Sp}(1)^M)$, let $\\gamma:[0,1]\\to \\hbox{Sp}(1)^M$ be a\nloop and define maps $\\delta_1$ and $\\delta_2$ by\n$\\delta_1(t,s)=\\frac{2t}{s+1}$ for $t\\le \\frac12 (s+1)$,\n$\\delta_1(t,s)=1$ otherwise and $\\delta_2(t,s)=1-\\delta_1(1-t,s)$. We\nsee that $f(\\varphi,(\\gamma\\circ\\delta_1)\\cdot (\\overline\nH_\\theta\\circ\\delta_2))$ is a homotopy between $f(\\varphi,\n\\gamma)*f(\\varphi,\\overline H_\\theta)$ and $f(\\varphi,\n\\gamma\\cdot\\overline H_\\theta)$. Likewise,\n$f(\\varphi,(\\gamma\\circ\\delta_2)\\cdot(\\overline\nH_\\theta\\circ\\delta_1))$ is a homotopy between $f(\\varphi,\\overline\nH_\\theta)* f(\\varphi, \\gamma)$ and $f(\\varphi, \\gamma\\cdot\\overline\nH_\\theta)$. \\hfill $\\Box$\n\n\nTo prove Theorem \\ref{freefun}, notice that we have a left $S^1$\naction on $(S^2)^M_\\varphi$ given by $z\\cdot\\psi:=z\\psi z^*$. We\nclaim that the fibration, $ \\hbox{FreeMap}(M, S^2)_\\varphi \\to S^2, $\nis just the fiber bundle with associated principal bundle Sp$(1)\\to\nS^2$ and fiber $(S^2)^M_\\varphi$. In fact, the map\nSp$(1)\\times_{S^1}(S^2)^M_\\varphi \\to \\free{S^2}{M}$ given by\n$[q,\\psi]\\mapsto q\\psi q^*$ is the desired isomorphism.
Now consider\nthe homotopy exact sequence of the fibration, $ \\hbox{FreeMap}(M,\nS^2)_\\varphi \\to S^2, $\n$$\n\\to\\pi_2(S^2)\\to \\pi_1((S^2)^M_\\varphi) \\to\n\\pi_1(\\free{S^2}{M}_\\varphi) \\to 0.\n$$\nIt follows that $\\pi_1(\\free{S^2}{M}_\\varphi)$ is just the quotient of\n$\\pi_1((S^2)^M_\\varphi)$ by the image of $\\pi_2(S^2)$. The next lemma\nidentifies this image, to complete the proof of Theorem \\ref{freefun}.\n\n\n\\begin{lemma}\nThe image of the map from $\\pi_2$ is the subgroup of $H^2(M;\\Z)<\n\\pi_1((S^2)^M_\\varphi)$ generated by $2\\varphi^*\\mu_{S^2}$.\n\\end{lemma}\n{\\it Proof:\\, } Recall that the map from $\\pi_2$ of the base to\n$\\pi_1$ of the fiber is defined by taking a map of a disk into the base to\nthe restriction to the boundary of a lift of the disk to the total\nspace. Since the boundary of the disk maps to the base point, the\nrestriction to the boundary of the lift lies in the fiber. The\nhomotopy exact sequence of the fibration, Sp$(1)\\to S^2$ implies that\nthe disk representing a generator of $\\pi_2(S^2)$ lifts to a disk with\nboundary generating the fundamental group of the fiber $S^1$. This\nlift, say $\\widehat\\gamma$ to Sp$(1)$ gives a lift $D^2\\to\n\\hbox{Sp}(1)\\times_{S^1}(S^2)^M_\\varphi \\cong\\free{M}{S^2}$ defined by\n$z\\mapsto [\\widehat\\gamma(z),1]$. Restricted to the boundary, this map\nis just $z\\mapsto [z,\\varphi]=[1,z\\varphi z^*]$. It follows that the\nimage of $\\pi_2(S^2)$ is just the subgroup generated by the loop\n$z\\varphi z^*$. 
We now just have to trace this loop through the proof\nof the isomorphism, \n$$\n\\pi_1((S^2)^M_\\varphi) \\cong {\\mathbb Z}_2\\oplus H^2(M;{\\mathbb\nZ})\\oplus \\hbox{\\rm ker}(2\\varphi^*\\mu_{S^2}\\smallsmile).\n$$\nThe projection to $\\hbox{\\rm ker}(2\\varphi^*\\mu_{S^2}\\smallsmile)$ was defined\n by taking a lift of each map in the $1$-parameter family representing\n the loop in $\\pi_1((S^2)^M_\\varphi)$ to Sp$(1)^M_0$ and comparing the\n maps at the beginning and end. In our case the entire path\n consistently lifts to the path $\\gamma_\\varphi:S^1\\to\n \\hbox{Sp}(1)^M_0$ given by $\\gamma_\\varphi(z)=z{\\mathfrak\n q}(\\varphi,z^*)$. It follows that the component in $\\hbox{\\rm\n ker}(2\\varphi^*\\mu_{S^2}\\smallsmile)$ is zero. A loop such as\n $\\gamma_\\varphi$ naturally defines a map, $\\bar\\gamma_\\varphi:M\\times\n S^1\\to \\hbox{Sp}(1)$. The image of our loop in $H^2(M;\\Z)$ is just\n $\\bar\\gamma_\\varphi^*\\mu_{{\\rm Sp}(1)}\/[\\hbox{pt}\\times S^1]$. In\n notation reminiscent of differential forms this would be\n $\\int_{{\\rm pt}\\times S^1 }\\bar\\gamma_\\varphi^*\n \\mu_{{\\rm Sp}(1)}$. In order to evaluate this, we write\n $\\bar\\gamma_\\varphi$ as the composition of the map\n $(\\varphi,\\hbox{id}_{S^1}):M\\times S^1\\to S^2\\times S^1$ and the map\n $\\tilde{\\mathfrak q}:S^2\\times S^1\\to \\hbox{Sp}(1)$ given by\n $\\tilde{\\mathfrak q}(x,z)=z{\\mathfrak q}(x,z^*)$. This latter map is\n then expressed as the composition of $({\\mathfrak q},\n \\hbox{pr}_2^*):S^2\\times S^1 \\to \\hbox{Sp}(1)\\times S^1$ and the map\n $\\hbox{Sp}(1)\\times S^1\\to \\hbox{Sp}(1)$ given by $(u,\\lambda)\\mapsto\n \\lambda u$. The form $\\mu_{{\\rm Sp}(1)}$ pulls back to\n $\\mu_{{\\rm Sp}(1)}\\cup 1$ under the first map, and this pulls back\n to $2\\mu_{S^2}\\cup\\mu_{S^1}$ under the first factor of\n $\\tilde{\\mathfrak q}$ since ${\\mathfrak q}$ has degree two. In\n particular $\\tilde{\\mathfrak q}$ has degree two as well. 
We can now\n complete this computation to see that our loop projects to\n $2\\varphi^*\\mu_{S^2}$ in $H^2(M;\\Z)$. To complete the proof, we need\n to compute the projection of our loop in the $\\Z_2$-factor. The\n projection to $\\Z_2$ is defined by multiplying the inverse of our map\n by the image of $2\\varphi^*\\mu_{S^2}$ under the splitting\n $H^2\\to\\pi_1$ and taking the framing of the inverse image of a\n regular point. The equivalence classes of framings may be identified\n with $\\Z_2$ since the inverse image is homologically\n trivial. Alternatively we may compare the framing coming from our map\n to the framing of the map coming from the splitting. The image under\n the splitting of $2\\varphi^*\\mu_{S^2}$ is, of course, just two times\n the image of $\\varphi^*\\mu_{S^2}$ under the splitting. The inverse\n image coming from our map is just two copies of the inverse image of\n a frame under the map $(\\varphi,\\hbox{id}_{S^1}):M\\times S^1\\to\n S^2\\times S^1$. This means that the projection is even, so zero in\n $\\Z_2$. \\hfill $\\Box$ \n\n\\subsection{The cohomology of Faddeev-Hopf configuration spaces}\n\\label{subsecpd}\n\nIn order to compute the cohomology of $(S^2)^M_\\varphi$ we will use\nthe fibration, Sp$(1)^M_0\\to (S^2)^M_\\varphi$. The fiber of this\nfibration is just $\\disju_{\\alpha\\in K} (S^1)^M_\\alpha$ where\n$K=\\hbox{ker}(2\\varphi^*\\mu_{S^2}\\cup)$. Up to homotopy, the fiber is\njust $K$. In fact we can assume that the fiber is exactly $K$ if we\nfirst take the quotient by $(S^1)^M_0$ (which is contractible by\nReduction \\ref{r6}). It is slightly tricky to use a spectral sequence\nto compute the cohomology of the base of a fibration, so we will use\nequivariant cohomology to recast the problem. Recall that any Lie\ngroup acts properly on a contractible space called the total space of\nthe universal bundle. In our case, this space is denoted $EK$. The\nquotient of this by $K$ is the classifying space $BK$. 
Let $X$ be a\n$K$ space (i.e. a space with a $K$ action) and consider the space\n$X_K:=EK\\times_K X$. We will be interested in the situation when\n$X=\\hbox{Sp}(1)^M_0$ (really this divided by $(S^1)^M_0$ but this has\nthe same homotopy type). The cohomology of the space $X_K$ is called\nthe equivariant cohomology of $X$. It is denoted by $H^*_K(X)$. When\nthe $K$ action on $X$ is free and proper (as it is in our case), we\nhave a fibration $X_K\\to X\/K$ obtained by ignoring the $EK$ component\nin the definition of $X_K$. The fiber of this fibration is just $EK$\nwhich is contractible, so the spectral sequence of the fibration\nimplies that the cohomology of $X\/K$ is isomorphic to the equivariant\ncohomology of $X$. By ignoring the $X$ component in the definition of\n$X_K$ we obtain a fibration $X_K\\to BK$ which may be used to compute\nthe equivariant cohomology of $X$. \n\nSince $H^1(M;\\Z)$ is a free abelian group, the kernel $K$ is as\nwell. It follows that we may take $EK$ to be ${\\mathbb{R}}^n$ with $n$ equal to\nthe rank of $K$ and with $K$ acting by translations. It follows that\n$BK$ is just an $n$-torus, and we have a spectral sequence with $E_2$\nterm, $E_2^{p,q}=H^p(T^n;\\tilde{ H^q(\\hbox{Sp}(1)^M_0) })$ converging\nto the cohomology of $(S^2)^M_\\varphi$. Clearly the fundamental group\nof $T^n$ is just $K$. To compute the action of $K$ on\n$H^*(\\hbox{Sp}(1)^M_0)$, let $\\lambda:M\\to S^1$ satisfy\n$\\lambda^*\\mu_{S^1}\\in K$, and $\\mu(\\Sigma\\otimes x)\\in\nH^q(\\hbox{Sp}(1)^M_0)$ with $\\sigma:\\Sigma\\to M$. Let\n$u:\\Delta^q\\to\\hbox{Sp}(1)^M_0$ be a singular $q$-simplex and let\n$m:\\hbox{Sp}(1)\\times\\hbox{Sp}(1)\\to\\hbox{Sp}(1)$ be the\nmultiplication. 
Then we have,\n$$\n\\begin{array}{rl}\n(\\lambda^*\\mu_{S^1}\\cdot\\mu(\\Sigma\\otimes x))(u) &=\n\\int_{\\Sigma\\times\\Delta^q} m(\\widehat u,{\\mathfrak\nq}(\\varphi,\\lambda) \\circ(\\sigma,1))^*x \\\\ &=\n\\int_{\\Sigma\\times\\Delta^q} (\\sigma,1)^* {\\mathfrak\nq}(\\varphi,\\lambda)^*L_{\\widehat u}^*x+R_{{\\mathfrak\nq}(\\varphi,\\lambda) \\circ(\\sigma,1)}^*\\widehat u^*x \\\\\n&=\\int_{\\Sigma\\times\\Delta^q}\\widehat u^*x = \\mu(\\Sigma\\otimes x)(u).\n\\end{array}\n$$\nThus the fundamental group of the base acts trivially on the\ncohomology of the fiber. Because this fibration has an associated\nprincipal fibration with discrete group, all of the higher\ndifferentials vanish, and we obtain Theorem \\ref{thm2co}.\n\n\nTheorem \\ref{freeco} will follow from considerations of a general\nfiber bundle with structure group $S^1$ and one computation. Let $P\\to\nX$ be a principal $S^1$ bundle with simply-connected base and let\n$\\tau:S^1\\times F\\to F$ be a left action. The Serre spectral sequence\nof the fibration $E=P\\times_{S^1}F\\to X$ has $E_2^{p,q}=H^p(X;\n{H^q(F;{\\mathbb{R}})})$ and second differential $d_2\\omega=c_1(P)\\cup\n\\tau^*\\omega\/[S^1]$. In our case, the principal bundle is Sp$(1)\\to\nS^2$. It follows immediately that the coefficient system in the $E_2$\nterm of the Serre spectral sequence is untwisted, and that the only\nnon-trivial differential is the $d_2$ differential. In this case, the\nfirst Chern class is $\\mu_{S^2}$. The action that we consider is the\nmap $\\tau:S^1\\times (S^2)^M_\\varphi\\to (S^2)^M_\\varphi$ given by $\\tau(z,u)=zuz^*$. In\nfact, we only need to consider the effect of this action on terms\ncoming from Sp$(1)^M_0$. This is because the action is trivial on the\nclasses coming from $BK$. This can be seen by considering a map from\n$(S^2)^M_\\varphi$ to $BK$.
However, the easiest way to see this is first to\ncompute the cohomology of the fiber bundle with fiber\nSp$(1)^M_0$, and then recognize that, up to homotopy, the total space of\nthis bundle is a regular $K$-cover of $\\free{M}{S^2}_\\varphi$. Either\nway, we need to compute the second differential coming from the action,\n$\\tau_0:S^1\\times \\hbox{Sp}(1)^M_0\\to\\hbox{Sp}(1)^M_0$ given by\n$\\tau_0(z,u)=\\tilde{\\mathfrak q}(\\varphi,z)u$. Let $u:F\\to\n\\hbox{Sp}(1)^M_0$ be a singular chain and compute \n$$\n\\left(\\tau_0^*\\mu(\\Sigma\\otimes x)\/[S^1]\\right)(u) =\n\\int_{S^1\\times\\Sigma\\times F} \\left(m\\circ(\\tilde{\\mathfrak\nq}(\\varphi\\circ\\sigma,\\hbox{pr}_{S^1}),\\widehat\nu\\circ(\\sigma,\\hbox{pr}_F))\\right)^*x.\n$$\nHere $m:\\hbox{Sp}(1)\\times\\hbox{Sp}(1)\\to\\hbox{Sp}(1)$ is\nmultiplication, and the rest of the maps are as in the definition of\n$\\mu(\\Sigma\\otimes x)$ in line \\ref{mudef}. This vanishes for\ndimensional reasons when $\\Sigma$ is a $1$-cycle ($\\varphi\\circ\\sigma$\nwould push it forward to a $1$-cycle in $S^2$). When $\\Sigma$ is a\n$2$-cycle, we use the product rule and the fact that $\\tilde{\\mathfrak\nq}:S^2\\times S^1\\to\\hbox{Sp}(1)$ has degree two to conclude that\n$\\left(\\tau_0^*\\mu(\\Sigma\\otimes\nx)\/[S^1]\\right)=2\\varphi^*\\mu_{S^2}[\\Sigma]$. This completes the proof\nof our last theorem.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Choice of the Unlabeled Dataset}\n\n\n\n\n\n\n\\section{Numerical Results on IJB-B, IJB-C}\nIn Table~\\ref{tab:appendix_ijbc} and Table~\\ref{tab:appendix_ijbb}, we show more numerical results on the IJB-C and IJB-B datasets, respectively. Since all the baseline methods (from other papers) are trained on different numbers of labeled images, we report the performance of our models trained on different labeled subsets\nfor a fairer comparison.
From the tables, we observe that our models outperform most of the baselines while using 2M or fewer labeled images.\n\n\n\\begin{table}[h]\n\\captionsetup{font=small}\n\\newcommand{\\mr}[1]{\\multirow{2}{*}{#1}}\n\\setlength{\\tabcolsep}{0.6pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{1.0\\linewidth}{X | c|c || c|c|c|c|c|c}\n\\Xhline{2\\arrayrulewidth}\n\\mr{Method} & \\mr{Data} & \\mr{Model} & \\multicolumn{4}{c|}{Verification} & \\multicolumn{2}{c}{Identification} \\\\\\cline{4-9} & & & 1e-7 & 1e-6 & 1e-5 & 1e-4 & Rank1 & Rank5 \\\\\n\\Xhline{2\\arrayrulewidth}\nCao et al.~\\cite{cao2018vggface2} & 13.3M & SE-ResNet-50 & - & - & 76.8 & 86.2 & 91.4 & 95.1 \\\\\\hline\nPFE~\\cite{shi2019probabilistic} & 4.4M & ResNet-64 & - & - & 89.64 & 93.25 & 95.49 & 97.17 \\\\\\hline\nArcFace~\\cite{deng2018arcface} & 5.8M & ResNet-50 & 67.40 & 80.52 & 88.36 & 92.52 & 93.26 & 95.33 \\\\\\hline\nRanjan et al.~\\cite{ranjan2019fast} & 5.6M & ResNet-101 & 67.4 & 76.4 & 86.2 & 91.9 & 94.6 & 97.5 \\\\\\hline\nAFRN~\\cite{kang2019attentional} & 3.1M & ResNet-101 & - & - & 88.3 & 93.0 & \\textbf{95.7} & \\textbf{97.6} \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\nBaseline & 500K & ResNet-50 & 51.13 & 66.44 & 77.58 & 87.73 & 90.90 & 94.50 \\\\\nProposed & 500K+70K & ResNet-50 & 60.33 & 71.24 & 80.31 & 88.18 & 91.81 & 94.96 \\\\\\hline\nBaseline & 1.0M & ResNet-50 & 59.53 & 77.70 & 86.16 & 92.13 & 93.62 & 95.93 \\\\\nProposed & 1.0M+70K & ResNet-50 & 61.87 & 79.76 & 87.16 & 92.39 & 94.19 & 96.30 \\\\\\hline\nBaseline & 2.0M & ResNet-50 & 67.64 & 78.66 & 88.16 & 93.48 & 94.34 & 96.34 \\\\\nProposed & 2.0M+70K & ResNet-50 & \\textbf{78.62} & 84.91 & 90.61 & 93.77 & 95.04 & 96.80 \\\\\\hline\nBaseline & 3.0M & ResNet-50 & 62.65 & 79.20 & 89.20 & 94.20 & 94.76 & 96.49 \\\\\nProposed & 3.0M+70K & ResNet-50 & 78.38 & 85.91 & 91.56 & 94.48 & 95.51 & 97.04 \\\\\\hline\nBaseline & 3.9M & ResNet-50 & 62.90 & 82.94 & 90.73 & 94.57 & 94.90 & 96.77 \\\\\nProposed & 3.9M+70K & ResNet-50 \n
& 77.39 & \\textbf{87.92} & \\textbf{91.86} & \\textbf{94.66} & 95.61 & 97.13 \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{-2em}\n\\end{center}\n\\caption{Performance comparison with state-of-the-art methods on the IJB-C dataset.}\\vspace{-0.5em}\n\\label{tab:appendix_ijbc}\n\\end{table}\n\n\n\\begin{table}[h]\n\\captionsetup{font=small}\n\\newcommand{\\mr}[1]{\\multirow{2}{*}{#1}}\n\\setlength{\\tabcolsep}{0.6pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{1.00\\linewidth}{X | c|c || c|c|c|c|c|c}\n\\Xhline{2\\arrayrulewidth}\n\\mr{Method} & \\mr{Data} & \\mr{Model} & \\multicolumn{4}{c|}{Verification} & \\multicolumn{2}{c}{Identification} \\\\\\cline{4-9}\n & & & 1e-6 & 1e-5 & 1e-4 & 1e-3 & Rank1 & Rank5 \\\\\n\\Xhline{2\\arrayrulewidth}\nCao et al.~\\cite{cao2018vggface2} & 13.3M & SE-ResNet-50 & - & 70.5 & 83.1 & 90.8 & 90.2 & 94.6 \\\\\\hline\nComparator~\\cite{xie2018comparator} & 3.3M & ResNet-50 & - & - & 84.9 & 93.7 & - & - \\\\\\hline\nArcFace~\\cite{deng2018arcface} & 5.8M & ResNet-50 & 40.77 & 84.28 & 91.66 & 94.81 & 92.95 & 95.60 \\\\\\hline\nRanjan et al.~\\cite{ranjan2019fast} & 5.6M & ResNet-101 & \\textbf{48.4} & 80.4 & 89.8 & 94.4 & 93.3 & 96.6 \\\\\\hline\nAFRN~\\cite{kang2019attentional} & 3.1M & ResNet-101 & - & 77.1 & 88.5 & 94.9 & \\textbf{97.3} & \\textbf{97.6} \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\nBaseline & 500K & ResNet-50 & 39.35 & 71.14 & 84.37 & 92.12 & 89.74 & 94.16 \\\\\nProposed & 500K+70K & ResNet-50 & 45.39 & 72.35 & 84.75 & 92.00 & 90.46 & 94.42 \\\\\\hline\nBaseline & 1.0M & ResNet-50 & 45.75 & 80.11 & 90.19 & 94.48 & 92.37 & 95.78 \\\\\nProposed & 1.0M+70K & ResNet-50 & 41.59 & 82.10 & 90.09 & 94.64 & 92.88 & 95.91 \\\\\\hline\nBaseline & 2.0M & ResNet-50 & 47.62 & 82.30 & 91.82 & 95.46 & 93.25 & 96.05 \\\\\nProposed & 2.0M+70K & ResNet-50 & 44.76 & 86.26 & 91.92 & 95.27 & 94.01 & 96.23 \\\\\\hline\nBaseline & 3.0M & ResNet-50 & 42.77 & 82.86 & 92.48 & 95.78 & 93.80 & 96.23 \\\\\nProposed & 
3.0M+70K & ResNet-50 & 43.09 & 87.31 & \\textbf{92.80} & 95.70 & 94.35 & 96.53 \\\\\\hline\nBaseline & 3.9M & ResNet-50 & 40.12 & 84.38 & 92.79 & \\textbf{95.90} & 93.85 & 96.55 \\\\\nProposed & 3.9M+70K & ResNet-50 & 43.38 & \\textbf{88.19} & 92.78 & 95.86 & 94.62 & 96.72 \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{-2em}\n\\end{center}\n\\caption{Performance comparison with state-of-the-art methods on the IJB-B dataset.}\\vspace{-0.5em}\n\\label{tab:appendix_ijbb}\n\\end{table}\n\n\\section{Architecture of Augmentation Network}\nThe architecture of our augmentation network is based on MUNIT~\\cite{MUNIT}. Let \\texttt{c5s1-k} be a $5\\times5$ convolutional layer with $k$ filters and stride $1$; \\texttt{c3s2-k} is defined analogously with a $3\\times3$ kernel and stride $2$. \\texttt{dk-IN} denotes a $3\\times 3$ convolutional layer with $k$ filters and dilation $2$, where IN means Instance Normalization~\\cite{InstanceNorm}. Similarly, AdaIN means Adaptive Instance Normalization~\\cite{huang2017adain} and LN denotes Layer Normalization~\\cite{LayerNorm}. \\texttt{fc8} denotes a fully connected layer with $8$ output units. \\texttt{avgpool} denotes a global average pooling layer. No normalization is used in the style encoder. We use Leaky ReLU with slope 0.2 in the discriminator and ReLU activations everywhere else. The architectures of the different modules are as follows:\n\\begin{itemize}\\vspace{-0.5em}\n \\item Style Encoder: \\\\ \\texttt{c5s1-32,c3s2-64,c3s2-128,avgpool,fc8} \\vspace{-0.5em}\n \\item Generator: \\\\ \\texttt{c5s1-32-IN,d32-IN,d32-AdaIN,d32-LN,}\\\\\\texttt{d32-LN,c5s1-3} \\vspace{-0.5em}\n \\item Discriminator: \\\\ \\texttt{c5s1-32,c3s2-64,c3s2-128}\n\\end{itemize}\nThe length of the latent style code is set to $8$. 
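For reference, the three normalization variants named above differ only in which axes the statistics are aggregated over. A minimal NumPy sketch (illustrative, not the authors' implementation) for feature maps of shape (N, C, H, W):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # IN: statistics per (sample, channel), over the spatial axes H, W
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    # LN: statistics per sample, over channel and spatial axes
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def adain(x, gamma, beta, eps=1e-5):
    # AdaIN: instance-normalize, then scale/shift with style-derived
    # per-channel parameters gamma, beta of shape (N, C, 1, 1)
    return gamma * instance_norm(x, eps) + beta
```

IN whitens each channel of each sample independently, LN whitens each sample as a whole, and AdaIN re-introduces style through the externally supplied per-channel scale and shift.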
A style decoder (a multi-layer perceptron with two hidden fully connected layers of $128$ units each, without normalization) transforms the latent style code into the parameters of the AdaIN layer.\n\n\\begin{figure*}[h]\n\\captionsetup{font=footnotesize}\n\\setlength\\tabcolsep{1px}\n\\newcolumntype{Y}{>{\\centering\\arraybackslash}X}\n \\centering\n \\begin{tabularx}{\\linewidth}{YYYYYYY}\n Input & Model (a) & Model (b) & Model (c) & Model (d) & Model (e) & Model (f) \\\\\n \\midrule\n \\includegraphics[width=0.97\\linewidth]{fig\/generated\/1_0.png} & \n \\includegraphics[width=0.97\\linewidth]{fig\/generated_singlemode\/1_1.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/1_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/1_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/1_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/1_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/1_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/1_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/1_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/1_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/1_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/1_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/1_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/1_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/1_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/1_2.png} \\hspace{-0.6em}\n 
\\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/1_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/1_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/1_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/1_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/1_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/1_4.png} \\\\\\midrule\n \n \\includegraphics[width=0.97\\linewidth]{fig\/generated\/2_0.png} & \n \\includegraphics[width=0.97\\linewidth]{fig\/generated_singlemode\/2_1.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/2_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/2_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/2_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/2_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/2_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/2_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/2_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/2_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/2_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/2_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/2_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/2_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/2_1.png} \\hspace{-0.6em}\n 
\\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/2_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/2_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/2_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/2_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/2_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/2_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/2_4.png} \\\\\\midrule\n \n \\includegraphics[width=0.97\\linewidth]{fig\/generated\/3_0.png} & \n \\includegraphics[width=0.97\\linewidth]{fig\/generated_singlemode\/3_1.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/3_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/3_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/3_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_noadv\/3_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/3_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/3_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/3_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_norec\/3_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/3_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/3_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/3_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_nostyle\/3_4.png} & \\vspace{-0.97\\linewidth}\n 
\\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/3_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/3_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/3_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated_scale\/3_4.png} & \\vspace{-0.97\\linewidth}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/3_1.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/3_2.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/3_3.png} \\hspace{-0.6em}\n \\includegraphics[width=0.48\\linewidth]{fig\/generated\/3_4.png} \\\\\n \\bottomrule\n \\end{tabularx}\n \\caption{Ablation study of the augmentation network. Input images are shown in the first column. The subsequent columns show the results of different models trained without a certain module or loss. The texture style codes are randomly sampled from the normal distribution.}\n \\label{fig:appendix_ablate_generated}\n\\end{figure*}\n\n\n\\begin{table*}[h]\n\\captionsetup{font=footnotesize}\n\\setlength{\\tabcolsep}{5.0pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{0.8\\linewidth}{X || c|c|c|c|c || c|c|c|c|c|c|c|c}\n\\Xhline{2\\arrayrulewidth}\n\\multirow{2}{*}{Model} & \\multicolumn{5}{c||}{Modules} & \\multicolumn{3}{c|}{IJB-C (Vrf)} & \\multicolumn{2}{c|}{IJB-C (Idt)} & \\multicolumn{2}{c|}{IJB-S (V2S)} & LFW \\\\\\cline{2-14}\n& MM & $D_I$ & Rec & $D_Z$ & ND & 1e-7 & 1e-6 & 1e-5 & Rank1 & Rank5& Rank1 & Rank5 & Accuracy \\\\\\Xhline{2\\arrayrulewidth}\n & & & & & & 72.74 & 85.33 & 90.52 & 94.99 & 96.75 & 56.35 & 66.77 & 99.82 \\\\\\hline\n(a) & & \\checkmark & & & \\checkmark & 74.80 & 87.58 & 91.94 & 95.51 & 97.09 & 56.98 & 65.66 & 99.80\\\\\\hline\n(b) & \\checkmark & & \\checkmark & \\checkmark & \\checkmark & 75.32 & 88.00 & 91.71 & 95.42 & 97.04 & 57.54 & 66.72 & 99.75 \\\\\\hline\n(c) & \\checkmark & \\checkmark & & & 
\\checkmark & 74.51 & 87.49 & 91.97 & 95.61 & 97.18 & 57.17 & 66.24 & 99.78 \\\\\\hline\n(d) & \\checkmark & \\checkmark & \\checkmark & & \\checkmark & 75.07 & 88.11 & 92.19 & 95.66 & 97.12 & 56.85 & 64.87 & 99.78 \\\\\\hline\n(e) & \\checkmark & \\checkmark & \\checkmark & \\checkmark & & 73.99 & 86.52 & 91.33 & 95.33 & 97.04 & 58.47 & 66.00 & 99.73 \\\\\\hline\n(f) & \\checkmark & \\checkmark & \\checkmark & \\checkmark & \\checkmark & 77.39 & 87.92 & 91.86 & 95.61 & 97.13 & 57.33 & 65.37 & 99.75 \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{-0.5em}\n\\end{center}\n\\caption{Ablation study over different training methods of the augmentation network. ``MM'', ``$D_I$'', ``Rec'', ``$D_Z$'' and ``ND'' refer to ``Multi-mode'', ``Image Discriminator'', ``Reconstruction Loss'', ``Latent Style Discriminator'' and ``No Downsampling'', respectively. The first row is a baseline that uses only the domain adversarial loss but no augmentation network. ``Model (a)'' is a single-mode translation network that does not use a latent style code.}\n\\label{tab:appendix_augmentation}\n\\end{table*}\n\n\n\\section{Ablation over the Settings of Augmentation Network}\nIn this section, we ablate over the training modules of the augmentation network. In particular, we consider removing the following modules to obtain different variants:\nthe latent style code for multi-mode generation (MM), the Image Discriminator ($D_I$), the Reconstruction Loss (Rec), the Latent Style Discriminator ($D_Z$) and the architecture without downsampling (ND). The qualitative results of the different models are shown in Fig.~\\ref{fig:appendix_ablate_generated}. Without the latent style code (Model a), the augmentation network can only output one deterministic image for each input, which mainly applies blurring to the input image. Without the image adversarial loss (Model b), the model cannot capture the realistic variations in the unlabeled dataset, and the style code can only change the color channels in this case. 
Without the Reconstruction Loss (Model c), the model is trained only with the adversarial loss, without the regularization of content preservation; we therefore see clear artifacts in the output images. However, adding the reconstruction loss alone hardly helps, since the latent code used in the reconstruction of the unlabeled images could be very different from the prior distribution $p(z)$ that we use for generation. Therefore, similar artifacts can be observed if we do not add the latent code adversarial loss (Model d). As for the architecture, if we use an encoder-decoder style network as in the original MUNIT~\\cite{MUNIT}, with downsampling and upsampling (Model e), we observe that the output images are always blurred due to the loss of spatial information. In contrast, with our architecture (Model f), the network is capable of augmenting images with diverse color, blurring and illumination styles but without clear artifacts.\n\nFurthermore, we incorporate these different variants of the augmentation network into training and show the results in Table~\\ref{tab:appendix_augmentation}. The baseline here is a model that only uses the domain alignment loss, without the augmentation network. Compared with this baseline, all variants of the augmentation network improve performance in spite of the artifacts in the generated images, but a more consistent improvement is observed for the proposed augmentation network across different evaluation protocols. We also show more examples of augmented images in Figure~\\ref{fig:appendix_generated}.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[trim=0 340px 0 0, clip, width=\\linewidth]{fig\/generated_more.jpg}\n \\caption{More examples of augmented images. The photos in the first column are the input images. 
The remaining images in each row are generated by the augmentation network with different style codes.}\n \\label{fig:appendix_generated}\n\\end{figure*}\n\n\n\\section{Introduction}\n\\label{sec:intro}\nMachine learning algorithms typically assume that training and testing data come from the same underlying distribution. However, in practice, we often encounter testing domains that differ from the population from which the training data are drawn. Since it is non-trivial to collect data for all possible testing domains, learning representations that generalize to heterogeneous testing data is desired~\\cite{muandet2013domain,ghifary2015domain,motiian2017unified,li2018domain,carlucci2019domain}. Particularly for face recognition, this problem is reflected by the domain gap between the semi-constrained training datasets and unconstrained testing datasets. Nearly all of the state-of-the-art deep face networks are trained on large-scale web-crawled face images, most of which are high-quality celebrity photos~\\cite{yi2014learning,guo2016msceleb}. But in practice, we wish to deploy the trained face recognition (FR) systems in many other scenarios, e.g. unconstrained photos~\\cite{IJBA,IJBC} and surveillance~\\cite{IJBS}. The large degree of face variation in the testing scenarios, compared to the training set, can result in a significant performance drop of the trained face models~\\cite{IJBC,IJBS}. \n\n\\begin{figure}[t]\n\\captionsetup{font=footnotesize}\n \\centering\n \\includegraphics[width=1.00\\linewidth]{fig\/problem_setting.pdf}\n \\vspace{-1.0em}\\caption{Illustration of the problem settings in our work. Blue circles indicate the domains that the training face images belong to. 
By utilizing diverse unlabeled images, we want to regularize the learning of the face embedding for more unconstrained face recognition scenarios.}\\vspace{-1.0em}\n \\label{fig:problem_setting}\n\\end{figure}\n\nThe simplest solution to such a domain gap problem is to collect a large number of unconstrained labeled face images from different sources. However, due to privacy issues and human-labeling costs, it is extremely hard to collect such a database. Other popular solutions to this problem include transfer learning and domain adaptation, which require domain-specific data to train a model for each of the target domains~\\cite{pan2010domain,ganin2014unsupervised,long2017deep,sohn2017unsupervised,saito2018maximum,kang2019contrastive}. However, in unconstrained face recognition, a face representation that is robust to all different kinds of variations is needed, so these domain-specific solutions are not appropriate. \\emph{Instead, it would be useful if we could utilize the commonly available, unlabeled data to achieve a domain-agnostic face representation that generalizes to unconstrained testing scenarios} (see Fig.~\\ref{fig:problem_setting}). To achieve this goal, we ask the following questions in this paper:\n\\begin{itemize}\\vspace{-0.5em}\n \\item Is it possible to improve model generalizability to unconstrained faces by introducing more diversity from auxiliary unlabeled data?\\vspace{-0.5em}\n \\item What kind of, and how much, unlabeled data do we need?\\vspace{-0.5em}\n \\item How much of a performance boost can we achieve with the unlabeled data?\\vspace{-0.5em}\n\\end{itemize}\n\n\nIn this paper, we propose such a semi-supervised framework for learning robust face representations. The unlabeled images are collected from a public face detection dataset, i.e. WiderFace~\\cite{yang2016wider}, which contains more diverse types (sub-domains) of face images than the typical labeled face datasets used for training. 
\nTo utilize the unlabeled data, the proposed method jointly regularizes the embedding model in both the feature space and the image space. We show that adversarial regularization can help to reduce domain gaps caused by facial variations, even in the absence of sub-domain labels. On the other hand, an image augmentation module is trained to discover the hidden sub-domain styles in the unlabeled data and apply them to the labeled training samples, thus increasing the discrimination power on difficult face examples. To our knowledge, this is the first study to use a heterogeneous unlabeled dataset to boost model performance for general unconstrained face recognition. The contributions of this paper are summarized below:\n\\begin{itemize}\\vspace{-0.5em}\n \\item A semi-supervised learning framework for generalizing face representations with auxiliary unlabeled data.\\vspace{-0.5em}\n \n \\item A multi-mode image translation module that performs data-driven augmentation and increases the diversity of the labeled training samples.\\vspace{-0.5em}\n \\item Empirical results showing that the regularization with unlabeled data helps to improve the recognition performance on challenging testing datasets, e.g. IJB-B, IJB-C, and IJB-S.\n\\end{itemize}\n\n\\section{Related Work}\n\\subsection{Deep Face Recognition}\nDeep neural networks are widely adopted in the ongoing research in face recognition~\\cite{taigman2014deepface,deepid2,schroff2015facenet,masi2016we,liu2017sphereface,hasnat2017deepvisage,ranjan2017l2,wang2018additive,deng2018arcface}. Taigman et al.~\\cite{taigman2014deepface} were the first to propose using deep convolutional neural networks for learning face representations. Subsequent studies have explored different loss functions to improve the discrimination power of the learned feature representation. A number of studies proposed to use metric learning methods for face recognition~\\cite{schroff2015facenet,sohn2016improved}. 
Recent work has been trying to achieve discriminative embeddings with a single identification loss function, where proxy\/prototype vectors are used to represent each class in the embedding space~\\cite{liu2017sphereface,wang2018additive,wang2018cosface,ranjan2017l2,deng2018arcface,zhang2019adacos,sun2020circle}.\n\n\\subsection{Semi-supervised Learning}\nClassic semi-supervised learning involves a small number of labeled images and a large number of unlabeled images~\\cite{lee2013pseudo,rasmus2015semi,laine2017temporal,tarvainen2017mean,xie2019unsupervised,zhai2019s4l,berthelot2019mixmatch,sohn2020fixmatch}. The goal is to improve the recognition performance when labeled data are insufficient. State-of-the-art semi-supervised learning methods can mainly be classified into four categories. (1) Pseudo-labeling methods generate labels for unlabeled data with the trained model and then use them for training~\\cite{lee2013pseudo}. Despite their simplicity, they have been shown to be effective primarily for classification tasks where labeled data and unlabeled data share the same label space. (2) Temporal ensemble models maintain different versions of model parameters to serve as teacher models for the current model~\\cite{laine2017temporal,tarvainen2017mean}. (3) Consistency-regularization methods apply certain types of augmentation to the unlabeled data while making sure the output prediction remains consistent after augmentation~\\cite{rasmus2015semi,berthelot2019mixmatch,sohn2020fixmatch}. (4) Self-supervised learning, originally proposed for unsupervised learning, has recently been shown to be effective for semi-supervised learning as well~\\cite{zhai2019s4l}. Compared with classic semi-supervised learning addressed in the literature, our problem differs in two senses of heterogeneity: different domains and different identities between the labeled and unlabeled data. 
These differences make many classic semi-supervised learning methods unsuitable for our task.\n\n\\subsection{Domain Adaptation and Generalization}\nIn domain adaptation, the user has a dataset for a source domain and another for a fixed target domain~\\cite{pan2010domain,ganin2014unsupervised,long2017deep,saito2018maximum,kang2019contrastive}. If the target domain is unlabeled, this leads to an \\emph{unsupervised domain adaptation} setting~\\cite{ganin2014unsupervised,tzeng2017adversarial,saito2018maximum,kang2019contrastive}. The goal is to improve the performance on the target domain so that it matches the performance on the source domain. This is achieved by reducing the domain gap between the two datasets in feature space. The problem with domain adaptation is that one needs to acquire a new dataset and train a new model whenever there is a new target domain. In \\emph{domain generalization}, the user is given a set of labeled datasets from different domains. The model is jointly trained on these datasets so that it can better generalize to unseen domains~\\cite{muandet2013domain,ghifary2015domain,motiian2017unified,li2018domain,carlucci2019domain,guo2020learning}. Our problem lies between domain generalization and unsupervised domain adaptation: we want to generalize the model to broader domains, yet instead of multi-domain labeled data, we use unlabeled data from other sources to achieve this goal.\n\n\n\\begin{figure}[t]\n \\centering\n \\captionsetup{font=footnotesize}\n \\scriptsize\n \\includegraphics[width=1.00\\linewidth]{fig\/overview.pdf}\n \\caption{The training framework of the embedding network. In each mini-batch, a random subset of the labeled data is augmented by the augmentation network to introduce additional diversity. The non-augmented labeled data are used to train the feature discriminator. 
The adversarial loss forces the distribution of the unlabeled features to align with the labeled one.}\n \\label{fig:overview}\n\\end{figure}\n\n\\section{Methodology}\n\\label{sec:method}\nGenerally, in face representation learning, we are given a large labeled dataset $\\mathcal{X}=\\{(x_1,y_1),(x_2,y_2),\\dots,(x_n,y_n)\\}$, where $x_i$ and $y_i$ are the face images and identity labels, respectively. The goal is to learn an embedding model $f$ such that $f(x)$ is discriminative enough to distinguish between different identities. However, since $f$ is only trained on the domain defined by $\\mathcal{X}$, which usually consists of semi-constrained celebrity photos, it might not generalize to unconstrained settings. In our framework, we assume the availability of another unlabeled dataset $\\mathcal{U}=\\mathcal{U}_1\\cup \\mathcal{U}_2\\cup\\dots\\cup\\mathcal{U}_k=\\{u_1,u_2,\\dots,u_m\\}$,\ncollected from different sources (sub-domains). However, these sub-domain labels may not be available in real applications, so we do not assume access to them but instead seek solutions that automatically leverage these hidden sub-domains. \nWe then wish to simultaneously minimize three types of errors:\n\\begin{itemize}\n \\item Error due to limited discrimination power within the labeled domain $\\mathcal{X}$.\n \\item Error due to the feature-domain gap between the labeled domain $\\mathcal{X}$ and the hidden sub-domains $\\mathcal{U}_i$.\n \\item Error due to limited discrimination power within the unlabeled domain $\\mathcal{U}$.\n\\end{itemize}\nAn overview of the framework is shown in Fig.~\\ref{fig:overview}.\n\n\\subsection{Minimizing Error in the Labeled Domain}\n\\label{sec:method_labeled}\nThe deep representation of a face image is usually a point in a hyper-spherical embedding space, where $\\norm{f(x_i)}^2=1$. 
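With embeddings constrained to the unit hypersphere, class logits reduce to cosine similarities between embeddings and class proxies. A hedged NumPy sketch of a cosine-margin softmax loss of the kind adopted for the labeled data (the scale and margin values below are illustrative, not the paper's settings):

```python
import numpy as np

def cosface_loss(f, W, labels, s=64.0, m=0.35):
    """Cosine-margin softmax loss on l2-normalized embeddings/proxies.

    f: (N, d) embeddings, W: (C, d) class proxies, labels: (N,) ints.
    """
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = s * (f @ W.T)                 # scaled cosine similarities
    # subtract the margin from the target-class logit only
    logits[np.arange(len(labels)), labels] -= s * m
    # batch-averaged negative log-softmax of the target logit
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()
```

The margin forces the target cosine to exceed the others by at least $m$ before the loss becomes small, which tightens each identity's cluster on the sphere.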
State-of-the-art supervised face recognition methods all seek an objective function that maximizes the inter-class margin such that the representation remains discriminative when tested on unseen identities. In this work, we use the CosFace loss function~\\cite{wang2018cosface,wang2018additive} for training on the labeled images:\n\\begin{equation}\n \\mathcal{L}_{idt} = -\\mathbb{E}_{x_i,y_i\\sim\\mathcal{X}}[\\log \\frac{e^{s(W_{y_i}^Tf_i-m)}}{e^{s(W_{y_i}^Tf_i-m)}+\\sum_{j\\neq y_i}{e^{sW_{j}^Tf_i}}}].\n\\end{equation}\nHere $s$ is a temperature hyper-parameter, $m$ is a margin hyper-parameter, $f_i=f(x_i)$, and $W_{j}$ is the proxy vector of the $j^{th}$ identity in the embedding space, which is also $\\ell_2$ normalized. We choose the CosFace loss because of its stability and high performance; it could be replaced by any other supervised identification loss function.\n\n\\begin{figure}[t]\n \\captionsetup{font=footnotesize}\n \\centering\n \\subfloat[w\/o Domain Adversarial Loss]{\\includegraphics[width=0.48\\linewidth]{fig\/tsne_semi_synthesized_baseline.pdf}}\\hfill\n \\subfloat[w\/ Domain Adversarial Loss]{\\includegraphics[width=0.48\\linewidth]{fig\/tsne_semi_synthesized_dadv.pdf}}\n \\caption{t-SNE visualization of the face embeddings using synthesized unlabeled images. Using part of MS-Celeb-1M as the unlabeled dataset, we create three sub-domains by processing the images with either random Gaussian noise, random occlusion or downsampling. (a) Different sub-domains show different domain shifts in the embedding space of the supervised baseline. (b) With the holistic binary domain adversarial loss, each of the sub-domains is aligned with the distribution of the labeled data.}\n \\label{fig:tsne_subdomain}\n\\end{figure}\n\n\\subsection{Minimizing Domain Gaps}\n\\label{sec:method_domaingap}\nThe unlabeled dataset $\\mathcal{U}$ is assumed to be a diverse dataset collected from different sources, i.e. 
covering different sub-domains (types) of face images. If we had access to such sub-domain labels, a natural way to obtain a domain-agnostic model would be to align each of the sub-domains with the feature distribution of the labeled images. However, the sub-domain labels might not be available in many cases. In our experiments, we find that pairwise domain alignment is not necessary; a binary domain alignment loss is sufficient to align the sub-domains. Formally, given a feature discriminator network $D$, we can reduce the domain gap via an adversarial loss:\n\\begin{gather}\n\\begin{split}\n \\mathcal{L}_{D} = -\\mathbb{E}_{x\\sim\\mathcal{X}}[\\log D(y=0|f(x))]\\\\\n -\\mathbb{E}_{u\\sim\\mathcal{U}}[\\log D(y=1|f(u))],\n\\end{split}\\\\\n\\begin{split}\n \\mathcal{L}_{adv} = -\\mathbb{E}_{x\\sim\\mathcal{X}}[\\log D(y=1|f(x))]\\\\\n -\\mathbb{E}_{u\\sim\\mathcal{U}}[\\log D(y=0|f(u))].\n\\end{split}\n\\label{eq:adv}\n\\end{gather}\nThe discriminator $D$ is a multi-layer binary classifier optimized by $\\mathcal{L}_{D}$. It tries to learn a non-linear classification boundary between the two datasets, while the embedding network needs to fool the discriminator by reducing the divergence between the distributions of $f(x)$ and $f(u)$. \nTo see the effect of the domain alignment loss, we conduct a controlled experiment with a toy dataset. We split the MS-Celeb-1M~\\cite{guo2016msceleb} dataset into labeled images and unlabeled images (no identity overlap). The unlabeled images are then processed with one of three degradations: random Gaussian noise, random occlusion or downsampling. Thus, we create three sub-domains in the unlabeled dataset. The corresponding domain shift can be observed in the t-SNE plot in Fig.~\\ref{fig:tsne_subdomain} (a), where the model is trained only on the labeled split. Then, we incorporate the processed unlabeled images into training with the binary domain adversarial loss. 
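The pair of objectives $\mathcal{L}_D$ and $\mathcal{L}_{adv}$ above amount to two binary cross-entropies with swapped targets. A minimal NumPy sketch, using a hypothetical logistic head in place of the paper's multi-layer discriminator $D$:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def domain_losses(feat_labeled, feat_unlabeled, w, b):
    """Binary domain-adversarial losses for a logistic discriminator.

    D predicts p(unlabeled | feature). L_D trains D to separate the two
    sets; L_adv trains the embedding to fool D by swapping the targets.
    """
    p_x = sigmoid(feat_labeled @ w + b)    # labeled features, target y=0
    p_u = sigmoid(feat_unlabeled @ w + b)  # unlabeled features, target y=1
    L_D = -np.log(1 - p_x).mean() - np.log(p_u).mean()
    L_adv = -np.log(p_x).mean() - np.log(1 - p_u).mean()
    return L_D, L_adv
```

When the two feature distributions coincide, the discriminator is reduced to chance ($p=0.5$) and both losses settle at $2\log 2$; separable features drive $\mathcal{L}_D$ down and $\mathcal{L}_{adv}$ up, which is the pressure the embedding network works against.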
In Fig.~\\ref{fig:tsne_subdomain} (b), we observe that with the binary domain alignment loss, the distribution of each of the sub-domains is aligned with the original domain, indicating reduced domain gaps.\n\n\\subsection{Minimizing Error in the Unlabeled Domains}\n\\label{sec:method_unlabeled}\n\nThe domain alignment loss in Section~\\ref{sec:method_domaingap} helps to eliminate the error caused by domain gaps between unconstrained faces. Thus, the remaining task is to improve the discrimination power of the face representation among the unlabeled faces. Many semi-supervised classification methods address this problem by pseudo-labeling the unlabeled data~\\cite{lee2013pseudo,berthelot2019mixmatch,sohn2020fixmatch}, but this is not applicable to our problem since our unlabeled dataset does not share the same label space with the labeled one. Furthermore, because of the data collection protocols, there is very little chance that one identity would have multiple unlabeled images, so clustering-based methods are also infeasible for our task. Here, we address this issue with a multi-mode augmentation method.\nPrior studies have shown that an image translation network, such as CycleGAN~\\cite{zhu2017unpaired}, can be effectively used as a data augmentation module for domain adaptation~\\cite{hoffman2018cycada}. The main idea is to learn the difference between two domains in the image space and then translate samples from the source domain to create training data with pseudo-labels in the target domain. Since our goal is to generalize the deep face representation to unconstrained faces, which involve a large variety of styles, a deterministic method such as CycleGAN is unsuitable. Therefore, we propose a multi-mode image translation network that discovers the hidden domains in the unlabeled data and then augments the labeled training data with different styles. 
In particular, we need a function $G$ which maps labeled samples $x$ into the image space defined by the unlabeled faces, i.e. $p(x)\\rightarrow p(u)$. Then, training the embedding $f$ on $G(x)$ could make it more discriminative in the image space defined by $U$. There are two requirements on the function $G$: (1) it should not change the identity of the input image and (2) it should be able to capture the different styles that are present in the unlabeled images. Inspired by recent progress in image translation frameworks~\\cite{zhu2017unpaired,MUNIT}, we propose to train $G$ as a style-transfer network that learns the visual styles during transfer in an unsupervised manner. The network $G$ can then be used as a data-driven augmentation module that generates diverse samples given an input from the labeled dataset. During training, we randomly select a subset of the labeled images to be augmented and feed them into our identification learning framework. The details of training the augmentation network $G$ are given in Section~\\ref{sec:generator}.\n\nThe overall loss function for the embedding network is given by:\n\\begin{equation}\n \\mathcal{L} = \\lambda_{idt}\\mathcal{L}_{idt} + \\lambda_{adv}\\mathcal{L}_{adv},\n\\end{equation}\nwhere $\\mathcal{L}_{idt}$ also includes the augmented labeled samples.\n\n\\subsubsection{Multi-mode Augmentation Network}\n\\label{sec:generator}\n\n\\begin{figure}[t]\n\\captionsetup{font=footnotesize}\n \\centering\n \\scriptsize\n \\includegraphics[width=1.00\\linewidth]{fig\/overview_generator.pdf}\n \\caption{Training framework of the augmentation network $G$. The two pipelines are optimized jointly during training.}\n \\label{fig:overview_generator}\n\\end{figure}\n\nThe augmentation network $G$ is a fully convolutional network that maps one image to another. To preserve the geometric structure, our architecture does not involve any downsampling or upsampling.
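As a concrete illustration of this resolution-preserving design, the following NumPy sketch implements a single stride-1, zero-padded 3x3 convolution; stacking only such layers, as the generator does, keeps the feature map at the input resolution (the helper name is illustrative, not from the paper's code):

```python
import numpy as np

def conv3x3_same(x, w):
    """Single 3x3 'same' convolution (stride 1, zero padding of 1)
    on a feature map x of shape (C_in, H, W) with weights w of shape
    (C_out, C_in, 3, 3). The output has shape (C_out, H, W), i.e. the
    spatial resolution is preserved."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad H and W by 1
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for di in range(3):
                for dj in range(3):
                    # Shifted view of the padded input times one weight.
                    out[o] += w[o, i, di, dj] * xp[i, di:di + h, dj:dj + wd]
    return out
```

With stride 1 and one pixel of zero padding on each side, the output spatial size equals the input size, so a stack of such layers never changes the image resolution.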
In order to generate styles similar to the unlabeled images, an image discriminator $D_I$ is trained to distinguish between the texture styles of unlabeled images and generated images:\n\\begin{align}\n\\begin{split}\n \\mathcal{L}_{D_I} = & -\\mathbb{E}_{x\\sim\\mathcal{X}}[\\log D_I(y=0|G(x,z))]\\\\\n & -\\mathbb{E}_{u\\sim\\mathcal{U}}[\\log D_I(y=1|u)], \n\\end{split}\\\\\n \\mathcal{L}^{G}_{adv} = & -\\mathbb{E}_{x\\sim\\mathcal{X}}[\\log D_I(y=1|G(x,z))].\n\\end{align}\nHere $z\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I})$ is a random style vector that controls the style of the output image; it is injected into the generation process via Adaptive Instance Normalization (AdaIN)~\\cite{huang2017adain}. Although adversarial learning ensures that the outputs lie in the unlabeled image space, it cannot ensure that (1) the content of the input is maintained in the output image and (2) the random style $z$ is used to generate diverse visual styles, corresponding to different sub-domains in the unlabeled images. We propose to utilize an additional reconstruction pipeline to simultaneously satisfy these two requirements. First, we introduce an additional style encoder $E_z$ to capture the corresponding style in the input image, as in~\\cite{MUNIT}.
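The AdaIN operation mentioned above can be sketched as follows, assuming per-channel scale and shift parameters produced from the style vector $z$ by a small network (the names and interface are illustrative): each channel is normalized to zero mean and unit variance, then re-modulated by the style parameters.

```python
import numpy as np

def adain(content, style_gamma, style_beta, eps=1e-5):
    """Adaptive Instance Normalization (AdaIN) sketch.

    content: feature map of shape (C, H, W).
    style_gamma, style_beta: per-channel scale/shift of shape (C,),
    e.g. produced by a small network from the random style vector z
    (an assumption here; the paper defers architecture details to the
    supplementary material).
    """
    c = np.asarray(content, dtype=float)
    gamma = np.asarray(style_gamma, dtype=float)
    beta = np.asarray(style_beta, dtype=float)
    # Per-channel statistics over the spatial dimensions.
    mu = c.mean(axis=(1, 2), keepdims=True)
    sigma = c.std(axis=(1, 2), keepdims=True)
    normed = (c - mu) / (sigma + eps)
    # Re-modulate with the style parameters.
    return gamma[:, None, None] * normed + beta[:, None, None]
```

Feeding parameters derived from $E_z(x)$ instead of a random $z$ reproduces the input's own style, which is what the reconstruction pipeline relies on.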
A reconstruction loss is then enforced to keep the consistency of the image content:\n\\begin{align}\n \\mathcal{L}^{G}_{rec} = & \\; \\mathbb{E}_{x\\sim\\mathcal{X}}[\\norm{x-G(x,E_z(x))}^2] \\\\\n & + \\mathbb{E}_{u\\sim\\mathcal{U}}[\\norm{u-G(u,E_z(u))}^2].\n\\end{align}\nDuring the reconstruction, we add another latent style discriminator $D_z$ to guarantee that the distribution of $E_z(u)$ aligns with the prior distribution $\\mathcal{N}(\\mathbf{0},\\mathbf{I})$:\n\\begin{align}\n\\begin{split}\n \\mathcal{L}_{D_z} = & -\\mathbb{E}_{u\\sim\\mathcal{U}}[\\log D_z(y=0|E_z(u))]\\\\\n & -\\mathbb{E}_{z\\sim\\mathcal{N}(\\mathbf{0},\\mathbf{I})}[\\log D_z(y=1|z)], \n\\end{split}\\\\\n \\mathcal{L}^{z}_{adv} = & -\\mathbb{E}_{u\\sim\\mathcal{U}}[\\log D_z(y=1|E_z(u))].\n\\end{align}\n\nThe overall loss function of the generator is given by:\n\\begin{equation}\n \\mathcal{L}^{G} = \\lambda^{G}_{adv}\\mathcal{L}^{G}_{adv} + \\lambda^{G}_{rec}\\mathcal{L}^{G}_{rec} + \\lambda^{z}_{adv}\\mathcal{L}^{z}_{adv}.\n\\end{equation}\nAn overview of the training framework of $G$ is given in Fig.~\\ref{fig:overview_generator} and example generated images are shown in Fig.~\\ref{fig:augmentation}.
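A minimal sketch of how these generator terms combine (the interface is illustrative; the default weights follow the values 1.0, 10.0 and 1.0 reported in the implementation details):

```python
import numpy as np

def generator_losses(x, x_rec, u, u_rec, adv_img, adv_z,
                     w_adv=1.0, w_rec=10.0, w_z=1.0):
    """Weighted sum of the generator objectives: image-level
    adversarial term (adv_img), L2 reconstruction on both the labeled
    and unlabeled domains, and latent adversarial term (adv_z).
    x_rec = G(x, E_z(x)) and u_rec = G(u, E_z(u)) are the
    reconstructions; adv_img and adv_z are assumed to be precomputed
    scalar losses. The function name and signature are illustrative,
    not from the paper's code."""
    l_rec = np.mean((np.asarray(x, dtype=float)
                     - np.asarray(x_rec, dtype=float)) ** 2) \
        + np.mean((np.asarray(u, dtype=float)
                   - np.asarray(u_rec, dtype=float)) ** 2)
    return w_adv * adv_img + w_rec * l_rec + w_z * adv_z
```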
The architecture details of different modules are given in the supplementary file.\n\n\n\\begin{figure}[t]\n \\centering\n \\footnotesize\n \\captionsetup{font=footnotesize}\n \\begin{minipage}{1.0\\linewidth}\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_0.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_1.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_2.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_3.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_4.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_5.png}\\hfill\n \n \n \n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/1_9.png}\\hfill\\\\[-0.05em]\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_0.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_1.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_2.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_3.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_4.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_5.png}\\hfill\n \n \n \n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/2_9.png}\\hfill\\\\[-0.05em]\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_0.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_1.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_2.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_3.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_4.png}\\hfill\n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_5.png}\\hfill\n \n \n \n \\includegraphics[width=0.14\\linewidth]{fig\/generated\/3_9.png}\\hfill\\\\[-0.05em]\n \n \n \n \n \n \n \n \n \n \n \\end{minipage}\\hfill\n \\vspace{-1.0em}\\caption{Example generated images of the augmentation network. 
Each row shows augmented images with different styles for the input in the first column.}\n \\label{fig:augmentation}\\vspace{-1.0em}\n\\end{figure}\n\n\\section{Experiments}\n\n\\subsection{Implementation Details}\n\n\\noindent\\textbf{Training Details of the Recognition Models}\nAll the models are implemented in PyTorch v1.1. We use RetinaFace~\\cite{deng2019retinaface} for face detection and alignment. All images are resized to $112\\times112$ pixels. A modified 50-layer ResNet from~\\cite{deng2018arcface} is used as our architecture. The embedding size is $512$ for all models. By default, all the models are trained for $150,000$ steps with a batch size of 256. For semi-supervised models, we use $64$ unlabeled images and $192$ labeled images in each mini-batch. For models which use the augmentation module, $20\\%$ of the labeled images are augmented by the generator network. The scale parameter $s$ and margin parameter $m$ are set to $30$ and $0.5$, respectively. We empirically set $\\lambda_{idt}$ and $\\lambda_{adv}$ to 1.0 and 0.01, respectively.\n\n\\noindent\\textbf{Training Details of the Generator Models}\nThe generator is trained for $160,000$ steps with a batch size of $8$ images ($4$ from each dataset). The Adam optimizer is used with $\\beta_1=0.5$ and $\\beta_2=0.99$. The learning rate starts at $10^{-4}$ and drops to $10^{-5}$ after $80,000$ steps. The detailed architectures are provided in the supplementary material.
$\\lambda^{G}_{adv}$, $\\lambda^{G}_{rec}$ and \n$\\lambda^{z}_{adv}$ are set to 1.0, 10.0 and 1.0, respectively.\n\n\\begin{figure}[t]\n \\centering\n \\footnotesize\n \\captionsetup{font=footnotesize}\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822654\/0.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822654\/12.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822654\/31.jpg}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822655\/0.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822655\/1.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822655\/13.jpg}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822656\/0.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822656\/1.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ms\/0_5822656\/19.jpg}\\hfill\\\\\n \\vspace{-2.0em}\\begin{center}(a) MS-Celeb-1M\\end{center}\n \\end{minipage}\\hfill\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/2_Demonstration_Demonstration_Or_Protest_2_257_face-65.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/2_Demonstration_Political_Rally_2_667_face-9.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/18_Concerts_Concerts_18_369_face-1.jpg}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/44_Aerobics_Aerobics_44_873_face-1.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/45_Balloonist_Balloonist_45_366_face-2.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/23_Shoppers_Shoppers_23_33_face-4.jpg}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/55_Sports_Coach_Trainer_sportcoaching_55_540_face-4.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/23_Shoppers_Shoppers_23_14_face-38.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/widerface\/39_Ice_Skating_Ice_Skating_39_869_face-106.jpg}\\hfill\\\\\n \\vspace{-2.0em}\\begin{center}(b) WiderFace\\end{center}\n \\end{minipage}\\\\\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_7993.png}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_13028.png}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_13086.png}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_20095.png}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_23408.png}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_20094.png}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_125729.png}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_126058.png}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijba\/frames_126521.png}\\hfill\\\\\n \\vspace{-2.0em}\\begin{center}(c) IJB-C\\end{center}\n \\end{minipage}\\hfill\n \\begin{minipage}{0.49\\linewidth}\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_107_1318.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_1101_89958.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_5078_17500.jpg}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_2039_5274.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_5008_10906.jpg}\\hfill\n
\\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_7010_27059.jpg}\\hfill\\\\[-0.1em]\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_2057_2308.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_4021_31909.jpg}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/datasets\/ijbs\/videos_3024_1169.jpg}\\hfill\\\\\n \\vspace{-2.0em}\\begin{center}(d) IJB-S\\end{center}\n \\end{minipage}\\\\\n \\caption{Example face images from the different datasets used in this work. The unconstrained testing datasets (IJB-C, IJB-S) are significantly different from the labeled training data (MS-Celeb-1M), with more types and degrees of variation. Thus, auxiliary data from WiderFace are utilized to generalize the model.}\n \\label{fig:exp_dataset}\\vspace{-1.0em}\n\\end{figure}\n\n\\subsection{Datasets}\nWe use \\textbf{MS-Celeb-1M}~\\cite{guo2016msceleb} as our labeled training dataset. MS-Celeb-1M is a large-scale public face dataset of celebrity photos. \nThe original dataset is known to contain a large number of noisy labels~\\cite{cao2018vggface2}, so we use a cleaned version from ArcFace~\\cite{deng2018arcface} as training data. After removing subjects that overlap with the testing sets, as well as duplicate images, we are left with 3.9M images of 85.7K classes. For the unlabeled images, we choose \\textbf{WiderFace}~\\cite{yang2016wider} as our auxiliary training data. WiderFace is a dataset collected by retrieving images from search engines with different event keywords. As a face detection dataset, WiderFace includes a more diverse set of faces (see Fig.~\\ref{fig:exp_dataset}). Many faces in this dataset still cannot be detected by state-of-the-art detection methods~\\cite{deng2019retinaface}. Thus, we only keep the detectable faces in the WiderFace training set as our training data.
Our goal is to close the gap between the face detection and recognition engines and to improve the general recognition performance for any detectable face. In the end, we were able to detect about 70K faces in WiderFace, fewer than 2\\% of our labeled training data.\n\nTo evaluate the proposed method, we test on three unconstrained face recognition benchmarks, namely IJB-B, IJB-C and IJB-S.\nThese datasets represent real-world testing scenarios where faces are significantly different from the celebrity photos in the training set. The details of these datasets are as follows:\n\\begin{itemize}\\vspace{-0.5em}\n \\item \\textbf{IJB-B}~\\cite{IJBB} includes both high-quality celebrity photos taken in the wild and low-quality photos or video frames with large variations of illumination, occlusion, head pose, etc. There are 68,195 images of 1,845 identities in all. We test on both the verification and identification protocols of the IJB-B benchmark.\\vspace{-0.5em}\n \\item \\textbf{IJB-C}~\\cite{IJBC} is a newer version of the IJB-B dataset. It has a similar protocol but with 140,732 images of 3,531 identities.\\vspace{-0.5em}\n \\item \\textbf{IJB-S}~\\cite{IJBS} is an extremely challenging benchmark where the images were collected from surveillance cameras. There are 202 identities in all, with an average of 12 videos per person. Each person also has 7 high-quality enrollment photos (with different poses) which constitute the gallery. We test on two protocols of the IJB-S dataset, Surveillance-to-Still (\\textbf{V2S}) and Surveillance-to-Booking (\\textbf{V2B}), both of which are identification protocols. The difference between them is that in Surveillance-to-Still (V2S) the gallery of each person is a single frontal photo, while Surveillance-to-Booking (V2B) uses all 7 registration photos as the gallery.
To reduce the evaluation time, the experiments in Sec.~\\ref{sec:exp_ablation} and Sec.~\\ref{sec:exp_quantity} are conducted with subsampled frames from each video, whose performance is close to that of using the whole videos (Sec.~\\ref{sec:exp_final}).\n\\end{itemize}\n\nAlthough our goal is to improve the recognition performance on unconstrained faces, we do not want to lose the discrimination power in the original domain (high-quality photos). Therefore, during the ablations we also evaluate our models on the standard \\textbf{LFW}~\\cite{LFWTech} protocol, which is a celebrity photo dataset similar to the labeled training data (MS-Celeb-1M). Note that the accuracy on the LFW protocol is highly saturated, so the main goal is just to check whether there is a significant performance drop on the constrained faces while increasing the generalizability to unconstrained ones. Example images of different datasets are shown in Figure~\\ref{fig:exp_dataset}.\n\n\\subsection{Ablation Study}\n\\label{sec:exp_ablation}\nIn this section, we conduct an ablation study to quantitatively evaluate the effect of the different modules proposed in this paper. In particular, we have two modules to study: Domain Alignment (DA) and Augmentation Network (AN). The performance is shown in Table~\\ref{tab:ablation}. As we already showed in Fig.~\\ref{fig:tsne_subdomain}, the domain adversarial loss is able to reduce the domain gaps between the sub-domains in WiderFace and the celebrity faces, even though we do not have access to those domain labels. Consequently, we observe performance improvements on most of the protocols of IJB-C and IJB-S. Introducing the augmentation network (AN) further improves the performance on the unconstrained benchmarks, with the multi-mode (MM) augmentation network outperforming the single-mode (SM) one.
More detailed ablations of the augmentation network can be found in the supplementary material.\n\n\\begin{table}[t]\n\\captionsetup{font=footnotesize}\n\\setlength{\\tabcolsep}{1.5pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{1.00\\linewidth}{X || c|c|c|c|c|c|c|c}\n\\Xhline{2\\arrayrulewidth}\n\\multirow{2}{*}{Method} & \\multicolumn{3}{c|}{IJB-C (Vrf)} & \\multicolumn{2}{c|}{IJB-C (Idt)} & \\multicolumn{2}{c|}{IJB-S (V2S)} & LFW \\\\\\cline{2-9}\n & 1e-7 & 1e-6 & 1e-5 & Rank1 & Rank5 & Rank1 & Rank5 & Accuracy \\\\\n\\Xhline{2\\arrayrulewidth}\nBaseline & 62.90 & 82.94 & 90.73 & 94.90 & 96.77 & 53.23 & 62.91 & 99.80 \\\\\\hline\n + DA & 72.74 & 85.33 & 90.52 & 94.99 & 96.75 & 56.35 & \\textbf{66.77} & \\textbf{99.82} \\\\\\hline\n + DA + AN (SM) & 74.80 & 87.58 & \\textbf{91.94} & 95.51 & 97.09 & 56.98 & 65.66 & 99.80 \\\\\\hline\n\\textbf{ + DA + AN (MM)} & \\textbf{77.39} & \\textbf{87.92} & 91.86 & \\textbf{95.61} & \\textbf{97.13} & \\textbf{57.33} & 65.37 & 99.75 \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{0.5em}\n\\caption{Ablation study over different training methods of the embedding network. All models use the identification loss by default. ``DA'', ``AN'', ``SM'' and ``MM'' refer to ``Domain Alignment'', ``Augmentation Network'', ``Single-mode'' and ``Multi-mode'', respectively.}\\vspace{-1.0em}\n\\label{tab:ablation}\n\\end{center}\n\\end{table}\n\n\\subsection{Quantity vs.
Diversity}\n\\label{sec:exp_quantity}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.33\\linewidth]{fig\/lineplot_quantity_ijbs_vs.pdf}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/lineplot_quantity_ijbs_vb.pdf}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/lineplot_quantity_ijbc_rank1.pdf}\\\\\n \\includegraphics[width=0.33\\linewidth]{fig\/lineplot_quantity_ijbc_1e-7.pdf}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/lineplot_quantity_ijbc_1e-6.pdf}\\hfill\n \\includegraphics[width=0.33\\linewidth]{fig\/lineplot_quantity_ijbc_1e-5.pdf}\\\\[-0.8em]\n \\caption{Evaluation results on IJB-C and IJB-S with different protocols and different amounts of labeled training data.}\n \\label{fig:quanitity}\n\\end{figure*}\n\nAlthough we have shown in Sec.~\\ref{sec:exp_ablation} that utilizing unlabeled data leads to better performance on challenging testing benchmarks, one might expect that simply increasing the amount of labeled training data would have a similar effect. Therefore, in this section, we conduct a more detailed study to answer the following question: \\textit{which is more important for feature generalizability: quantity or diversity of the training data?} In particular, we train several supervised models by varying the amount of labeled training data. For each such model, we also train a corresponding model with additional unlabeled data. The evaluation results are shown in Figure~\\ref{fig:quanitity}.\n\nOn the IJB-S dataset, which is significantly different from the labeled training data, we see that the models trained with unlabeled data consistently outperform the supervised baselines by a large margin. In particular, the proposed method achieves better performance than the supervised baseline even with only one-fourth of the overall labeled training data (1M vs. 4M), indicating the value of data diversity during training.
Note that there is a significant performance boost when increasing the number of labeled samples from 0.5M to 1M. However, after that, the benefit of acquiring more labeled data plateaus; in fact, it is more helpful to introduce 70K unlabeled images than 3M additional labeled images.\n\nOn the IJB-C dataset, for both verification and identification protocols, we observe a similar trend as on the IJB-S dataset. In particular, a larger improvement is achieved at lower FARs. This is because the verification threshold at lower FARs is affected by the low-quality test data (difficult impostor pairs), which is more similar to our unlabeled data. Another interesting observation is that the improvement margin increases when there is more labeled data. Note that in general semi-supervised learning, we would expect less improvement from unlabeled data when there is more labeled data. But it is the opposite in our case, because the unlabeled data has different characteristics than the labeled data. So when the performance of the supervised model saturates with sufficient labeled data, transferring the knowledge from diverse unlabeled data becomes more helpful.\n\nFor both IJB-S and IJB-C (TAR@FAR=1e-7), we observe that after a certain point, adding more labeled data no longer boosts performance, which instead starts to fluctuate. This happens because the new labeled data does not necessarily help with those hard cases. Based on these results, we conclude that \\emph{when the number of labeled training data is small, it is more important to increase the quantity of the labeled dataset. Once there is sufficient labeled training data, the generalizability of the representation tends to saturate while the diversity of the training data becomes more important}.
\nAdditional experimental results on the choice of the unlabeled dataset can be found in Section~\\ref{sec:appendix_unlabeled}.\n\n\\begin{figure}[h]\n\\captionsetup{font=small}\n \\centering\n \\includegraphics[width=0.49\\linewidth]{fig\/ablate_unlabeled\/ijbs_vs.pdf}\\hfill\n \\includegraphics[width=0.49\\linewidth]{fig\/ablate_unlabeled\/ijbs_vb.pdf}\\\\\n \\includegraphics[width=0.49\\linewidth]{fig\/ablate_unlabeled\/ijbc_rank1.pdf}\\hfill\n \\includegraphics[width=0.49\\linewidth]{fig\/ablate_unlabeled\/ijbc_1e-7.pdf}\\\\[-0.5em]\n \\caption{Evaluation results on IJB-S and IJB-C with different protocols and different amounts and choices of unlabeled training data. The red line refers to the performance of the supervised baseline, which does not use any unlabeled data.}\\vspace{-1.0em}\n \\label{fig:appendix_unlabeled}\n\\end{figure}\n\n\\section{Choice of the Unlabeled Dataset}\n\\label{sec:appendix_unlabeled}\n\nIn Section~\\ref{sec:exp_quantity}, we discussed the impact of the quantity and diversity of the training data on feature generalizability. A remaining question is how the choice and amount of unlabeled data affect the performance. Here, we show additional experiments on different choices of the unlabeled dataset. In addition to WiderFace, we consider two other datasets: MegaFace~\\cite{MegaFace} and CASIA-WebFace~\\cite{yi2014learning}. For MegaFace, we only use the distractor images in their identification protocol, which are crawled from album photos on Flickr and present a larger degree of variation compared with the faces in MS-Celeb-1M. CASIA-WebFace, similar to MS-Celeb-1M, is mainly composed of celebrity photos, and therefore it should not introduce much additional diversity. Note that CASIA-WebFace is a labeled dataset, but we ignore its labels for this experiment. The diversity (facial variation) of the three datasets can be ranked as: \\textit{WiderFace $>$ MegaFace $>$ CASIA-WebFace}.
Example images of the three datasets are shown in Figure~\\ref{fig:exp_dataset}. For both MegaFace and CASIA-WebFace, we choose a random subset to match the size of the WiderFace data. Furthermore, to see the impact of the quantity of unlabeled data, we also train the models with different amounts of unlabeled data. Then, we evaluate all the models on IJB-S, IJB-C and LFW. We evaluate on LFW here to see the impact of different unlabeled datasets on the performance in the original domain. The results are shown in Fig.~\\ref{fig:appendix_unlabeled}. Note that due to the large number of experiments, we do not use the augmentation network here, but empirically we found that the trends are similar with the augmentation network.\n\nFrom Fig.~\\ref{fig:appendix_unlabeled}, it can be seen that, in general, the more diverse the unlabeled dataset is, the larger the performance boost it brings. In particular, using CASIA-WebFace as the unlabeled dataset hardly improves performance on any protocol. This is expected because CASIA-WebFace is very similar to MS-Celeb-1M and hence it cannot introduce additional diversity to regularize the training of face representations. Using MegaFace distractors as the unlabeled dataset improves the performance on both IJB-C and IJB-S, both of which have more variations than MS-Celeb-1M. Using WiderFace as the unlabeled dataset further improves the performance on the IJB-S dataset. Note that all the models in this experiment maintain the high performance on the LFW dataset. In other words, \\textit{using a more diverse unlabeled dataset does not impair the performance on the original domain and safely improves the performance on the challenging new domains}. We also observe that the size of the unlabeled dataset has a much less clear effect than its diversity.
\n\n\n\\begin{table}[t]\n\\captionsetup{font=small}\n\\newcommand{\\mr}[1]{\\multirow{2}{*}{#1}}\n\\setlength{\\tabcolsep}{1.2pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{1.0\\linewidth}{X | c|c || c|c|c|c |c|c}\n\\Xhline{2\\arrayrulewidth}\n\\mr{Method} & \\mr{Data} & \\mr{Model} & \\multicolumn{4}{c|}{Verification} & \\multicolumn{2}{c}{Identification} \\\\\\cline{4-9} & & & 1e-7 & 1e-6 & 1e-5 & 1e-4 & Rank1 & Rank5 \\\\\n\\Xhline{2\\arrayrulewidth}\nCao et al.~\\cite{cao2018vggface2} & 13.3M & SE-ResNet-50 & - & - & 76.8 & 86.2 & 91.4 & 95.1 \\\\\\hline\nPFE~\\cite{shi2019probabilistic} & 4.4M & ResNet-64 & - & - & 89.64 & 93.25 & 95.49 & 97.17 \\\\\\hline\nArcFace~\\cite{deng2018arcface} & 5.8M & ResNet-50 & 67.40 & 80.52 & 88.36 & 92.52 & 93.26 & 95.33 \\\\\\hline\nRanjan et al.~\\cite{ranjan2019fast} & 5.6M & ResNet-101 & 67.4 & 76.4 & 86.2 & 91.9 & 94.6 & 97.5 \\\\\\hline\nAFRN~\\cite{kang2019attentional} & 3.1M & ResNet-101 & - & - & 88.3 & 93.0 & \\textbf{95.7} & \\textbf{97.6} \\\\\\hline\nDUL~\\cite{chang2020data} & 3.6M & ResNet-64 & - & - & 90.23 & 94.2 & \\textbf{95.7} & \\textbf{97.6} \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\nBaseline & 3.9M & ResNet-50 & 62.90 & 82.94 & 90.73 & 94.57 & 94.90 & 96.77 \\\\\\hline\nProposed & 4.0M & ResNet-50 & \\textbf{77.39} & \\textbf{87.92} & \\textbf{91.86} & \\textbf{94.66} & 95.61 & 97.13 \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{-1.0em}\n\\caption{Performance comparison with state-of-the-art methods on the IJB-C dataset.}\\vspace{-1.0em}\n\\label{tab:ijbc}\n\\end{center}\n\\end{table}\n\n\n\\begin{table}[t]\n\\captionsetup{font=small}\n\\newcommand{\\mr}[1]{\\multirow{2}{*}{#1}}\n\\setlength{\\tabcolsep}{1.2pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{1.00\\linewidth}{X | c|c || c|c|c|c|c|c}\n\\Xhline{2\\arrayrulewidth}\n\\mr{Method} & \\mr{Data} & \\mr{Model} & \\multicolumn{4}{c|}{Verification} & \\multicolumn{2}{c}{Identification} \\\\\\cline{4-9}\n & & & 1e-6 & 1e-5 
& 1e-4 & 1e-3 & Rank1 & Rank5 \\\\\n\\Xhline{2\\arrayrulewidth}\nCao et al.~\\cite{cao2018vggface2} & 13.3M & SE-ResNet-50 & - & 70.5 & 83.1 & 90.8 & 90.2 & 94.6 \\\\\\hline\nComparator~\\cite{xie2018comparator} & 3.3M & ResNet-50 & - & - & 84.9 & 93.7 & - & - \\\\\\hline\nArcFace~\\cite{deng2018arcface} & 5.8M & ResNet-50 & 40.77 & 84.28 & 91.66 & 94.81 & 92.95 & 95.60 \\\\\\hline\nRanjan et al.~\\cite{ranjan2019fast} & 5.6M & ResNet-101 & \\textbf{48.4} & 80.4 & 89.8 & 94.4 & 93.3 & 96.6 \\\\\\hline\nAFRN~\\cite{kang2019attentional} & 3.1M & ResNet-101 & - & 77.1 & 88.5 & 94.9 & \\textbf{97.3} & \\textbf{97.6} \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\nBaseline & 3.9M & ResNet-50 & 40.12 & 84.38 & \\textbf{92.79} & \\textbf{95.90} & 93.85 & 96.55 \\\\\\hline\nProposed & 4.0M & ResNet-50 & 43.38 & \\textbf{88.19} & 92.78 & 95.86 & 94.62 & 96.72 \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{-1.0em}\n\\caption{Performance comparison with state-of-the-art methods on the IJB-B dataset.}\\vspace{-1.0em}\n\\label{tab:ijbb}\n\\end{center}\n\\end{table}\n\n\\subsection{Comparison with State-of-the-Art FR Methods}\n\\label{sec:exp_final}\nIn Table~\\ref{tab:ijbc} we show more complete results on the IJB-C dataset and compare our method with other state-of-the-art methods. In general, we observe that with fewer labeled training samples and fewer parameters, we are able to achieve state-of-the-art performance on most of the protocols. Particularly at low FARs, the proposed method outperforms the baseline methods by a clear margin. This is because at a low FAR, the verification threshold is mainly determined by low-quality impostor pairs, which are instances of the difficult face samples that we are targeting with additional unlabeled data. A similar trend is observed on the IJB-B dataset (Table~\\ref{tab:ijbb}).
Note that because of the smaller number of face pairs, we are only able to test at higher FARs on the IJB-B dataset.\n\nIn Table~\\ref{tab:ijbs} we show the results on two different protocols of IJB-S. Both the Surveillance-to-Still (V2S) and Surveillance-to-Booking (V2B) protocols use surveillance videos as probes and mugshots as the gallery. Therefore, the IJB-S results represent a cross-domain comparison problem. Overall, the proposed system achieves new state-of-the-art performance on both protocols.\n\n\\begin{table}[t]\n\\captionsetup{font=small}\n\\setlength{\\tabcolsep}{0.8pt}\n\\begin{center}\n\\scriptsize\n\\begin{tabularx}{1.0\\linewidth}{X || c|c|c|c|c|| c|c|c|c|c}\n\\Xhline{2\\arrayrulewidth}\n\\multirow{2}{*}{Method} & \\multicolumn{5}{c||}{Surveillance-to-Still} & \\multicolumn{5}{c}{Surveillance-to-Booking} \\\\\\cline{2-11}\n & Rank1 & Rank5 & Rank10 & 1\\% & 10\\% & Rank1 & Rank5 & Rank10 & 1\\% & 10\\% \\\\\n\\Xhline{2\\arrayrulewidth}\nMARN~\\cite{gong2019low} & 58.14 & 64.11 & - & 21.47 & - & 59.26 & 65.93 & - & 32.07 & - \\\\\\hline\nPFE~\\cite{shi2019probabilistic} & 50.16 & 58.33 & 62.28 & 31.88 & 35.33 & 53.60 & 61.75 & 62.97 & 35.99 & 39.82 \\\\\\hline\nArcFace~\\cite{deng2018arcface} & 50.39 & 60.42 & 64.74 & 32.39 & 42.99 & 52.25 & 61.19 & 65.63 & 34.87 & 43.50 \\\\\n\\Xhline{2\\arrayrulewidth}\nBaseline & 53.23 & 62.91 & 67.83 & 31.88 & 43.32 & 54.26 & 64.18 & 69.26 & 32.39 & 44.32 \\\\\\hline\nProposed & \\textbf{59.29} & \\textbf{66.91} & \\textbf{69.63} & \\textbf{39.92} & \\textbf{50.49} & \\textbf{60.58} & \\textbf{67.70} & \\textbf{70.63} & \\textbf{40.80} & \\textbf{50.31} \\\\\\hline\n\\Xhline{2\\arrayrulewidth}\n\\end{tabularx}\\vspace{-0.8em}\n\\caption{Performance on the IJB-S benchmark.}\\vspace{-1.0em}\n\\label{tab:ijbs}\n\\end{center}\n\\end{table}\n\n\\section{Conclusions}\nWe have proposed a semi-supervised framework for learning robust face representations that generalize to unconstrained faces beyond the labeled training data.
Without collecting domain-specific data, we utilized a relatively small unlabeled dataset containing diverse styles of face images. To fully utilize the unlabeled dataset, we proposed two methods. First, we showed that domain adversarial learning, which is common in domain adaptation methods, can be applied in our setting to reduce the domain gaps between labeled faces and hidden sub-domains of unlabeled faces. Second, we proposed an augmentation network that captures the different visual styles in the unlabeled dataset and applies them to the labeled images during training, making the face representation more discriminative for unconstrained faces. Our experimental results show that as the number of labeled images increases, the performance of the supervised baseline tends to saturate on the challenging testing scenarios; instead, introducing more diverse training data becomes more important and helpful. On a few challenging protocols, we showed that the proposed method can outperform the supervised baseline with less than half of the labeled data. By training on the labeled MS-Celeb-1M dataset and the unlabeled WiderFace dataset, our final model achieves state-of-the-art performance on challenging benchmarks such as IJB-B, IJB-C and IJB-S.\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n