\\section{Introduction}\n\nThe Hanbury Brown and Twiss (HBT) effect can be thought of as Young's double slit experiment using narrowband but incoherent light. Since the source is incoherent the phase is fluctuating \\citep[see for example Figure 7.1 in][]{2006iai..book.....L} and there is no consistent phase difference between the two waves emitted at the slits. Thus no interference pattern produced by constructive and destructive superposition is visible on the screen; the time averaged intensities (the complex conjugate products of the incoming electric fields) coming from both slits simply add up. But due to the random fluctuations of the phases the intensities will fluctuate. Counting individual photons at two different points on the screen will result in a coincidence rate slightly higher than the Poisson coincidence rate: the intensity fluctuations are correlated. This additional term is what is called the HBT effect.\n\nIn astronomy we need to replace the slits with a two-dimensional superposition of sub-sources. Integrating the amplitude contributions over the new, extended source amounts to taking the Fourier transform of the intensity or brightness distribution on the sky (Van Cittert--Zernike theorem). The normalized spatial Fourier transform is called the complex visibility $V$ and can, in principle, be measured at different spatial distances or baselines. But measuring the visibility directly is very difficult for astronomical sources because it requires maintaining optical-quality mirrors over long baselines to sub-wavelength accuracy. 
Alternatively, information on the visibility can be obtained by considering the normalized intensity correlation (the probability of coincident detection) for two detectors ($1$ and $2$):\n\\begin{equation}\nC_{12} = 1 + |V_{12}|^2\n\\label{eq:corr}\n\\end{equation}\nThis equation is valid over a coherence time which is the time scale\n\\begin{equation}\n\\Delta\\tau \\sim 1\/\\Delta\\nu = \\frac{\\lambda^2}{c \\,\\Delta\\lambda}\n\\end{equation}\non which the phases of incoherent light fluctuate. The phase information of the electric field is lost when measuring the amplitude squared. The time resolution $\\Delta t$ for measuring intensity correlations or photon coincidences should be as short as possible. If $\\Delta t$ is shorter than $\\Delta\\tau$, the full benefit of the signal in equation \\eqref{eq:corr} is obtained. If $\\Delta t\\gg\\Delta\\tau$ the signal to noise per measurement interval ${\\rm SNR}(\\Delta t)\\ll1$. Nevertheless, sufficient SNR can be built up by collecting data over many intervals $\\Delta t$. The great advantage of HBT interferometry is that optical parts need to be accurate only to $\\ll c\\Delta t$. For the $\\Delta t$ achievable nowadays, optical-path tolerances of a millimetre would be adequate. In contrast, standard interferometry requires optical paths to be precise to better than a wavelength.\n\nHanbury Brown and Twiss \\cite{1958RSPSA.248..222B} famously implemented these ideas in the first stellar intensity interferometer, which measured the diameter of Sirius. In the 1960s the Narrabri intensity interferometer was built to measure more stellar diameters \\citep{1974MNRAS.167..121H}. At that time intensity interferometry was limited to blue-sensitive counting equipment, and after the Narrabri instrument had observed all the hot stars it could, the technique was abandoned for a long time. 
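The time scales involved in the counting argument above can be made concrete with a short numerical sketch; the 500~nm wavelength, 1~nm filter bandwidth, and 50~ps counter resolution are illustrative values, not measurements.

```python
# Coherence time Delta tau ~ lambda^2 / (c * Delta lambda) and the resulting
# per-interval SNR suppression for a counter with Delta t >> Delta tau.
# All parameter values here are illustrative assumptions.
C = 2.99792458e8  # speed of light, m/s

def coherence_time(lam, dlam):
    """Coherence time of light of wavelength lam filtered to bandwidth dlam."""
    return lam ** 2 / (C * dlam)

dtau = coherence_time(500e-9, 1e-9)  # ~0.8 ps for a 1 nm filter at 500 nm
dt = 50e-12                          # assumed counter time resolution, 50 ps

# With dt >> dtau the SNR per counting interval is reduced by roughly dtau/dt;
# averaging over T_obs/dt intervals builds it back up as sqrt(T_obs/dt).
suppression = dtau / dt
```

Note that the millimetre-scale optical tolerance quoted above is just $c\,\Delta t \approx 1.5$~cm for these numbers, relaxed by orders of magnitude compared with sub-wavelength accuracy.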
The possibilities of new, faster counters in the red and infrared ranges have recently brought intensity interferometry back into focus, and arrays of telescopes such as the proposed Cherenkov Telescope Array (CTA) will allow us to exploit the effect to a greater extent in the years to come \\citep[see for example][]{2013APh....43..331D}. Some illustrative examples are shown in Figures \\ref{fig:simone} and \\ref{fig:fftld}.\n\nThe loss of phase information, as is evident from the absolute value in equation (\\ref{eq:corr}), is the major shortcoming of intensity interferometry. But it is not a fundamental shortcoming, because from the theory of quantum optics developed in the 1960s \\citep[see][for a review]{glauber2006}, phase information can be obtained by using three telescopes in concert.\n\nThe normalized three point correlation is:\n\\begin{equation}\nC_{123} = 1 + |V_{12}|^2 + |V_{23}|^2 + |V_{31}|^2 + 2{\\rm Re}[V_{12}V_{23}V_{31}] .\n\\label{eq:3corr}\n\\end{equation}\nThe product in the last term is the real part of the spatial bispectrum of the source. Its phase is well known in radio astronomy as the closure phase, and is useful for eliminating local atmospheric and other influences on phase measurements for individual detectors in a three-antenna setup. The term has also long been studied in the context of diverse laboratory experiments: \\cite{1963JAP....34..875G} already used photon triple-coincidences as a tool for spectral measurement; \\cite{1978ApOpt..17.2047S} constructed an imaging system using three-point HBT; \\cite{1983JAP....54..473F} performed some remarkable experiments showing three- and four-point acoustic HBT. 
The analyses in these works are in terms of classical waves and intensities, so they do not necessarily apply to photon counting, but fortunately it turns out \\citep[see for example][]{1964PPS....84..435M} that for ordinary light, quantum fields can be replaced with classical ones and intensities interpreted as photon-detection probabilities.\n\n\\cite{2014MNRAS.437..798M} recently reviewed the theory of higher order correlations and provided a simple way of estimating the achievable SNR. In the present work, we will study the feasibility of detecting three-point HBT for astronomical sources. In the following sections we will simulate signals for two and three point correlation measurements of different, simple sources and make some estimates of the achievable SNR. We will not consider the problem of reconstruction of the actual phase, but algorithms for related problems are known \\citep[see, for example][]{Kang:1991:529}.\n\nLet us now briefly preview our results.\n\nIn the next Section we show simulations of the complex visibilities and resulting correlation signals. For example, Figure~\\ref{fig:simone} shows a simulation of a structured disc, inspired by reconstructions of Betelgeuse. Comparing that to the other figures in Section~\\ref{sec:cv} we can immediately see that adequate $(u,v)$ coverage allows us to see differences in shape and structure. We also plot the 3-point correlation signals (specifically the bispectra) for a given baseline for different sources.\n\nIn Section~\\ref{snrchap} we rederive the well-known but counterintuitive result that for two-point correlation, the SNR is independent of bandwidth, meaning that decreasing the bandwidth and hence decreasing the count rates will not change the SNR for HBT detection. A related feature is that the observation time needed goes as the inverse square of the collection area. With larger telescopes such as H.E.S.S. 
(12~m diameter dishes) and the planned CTA (7~m, 12~m, and 23~m diameters are proposed), this will allow for significant measurements on very short timescales. We also want to emphasize again the possibilities in using arrays with many telescopes, increasing the possible $(u,v)$ coverage \\citep[see for example][]{2013APh....43..331D}. For the brightest stars, size measurements would be possible with even a 10~cm diameter mirror with modern off-the-shelf single-photon correlators. With a 10~cm diameter telescope and a counting system at 50\\% efficiency one could, for example, achieve an SNR of 1 in 36 seconds of integration time for Capella b. The small baselines possible with such a setup should allow the detection of the two-point correlation for this binary system, which would require about a 2~m aperture to be resolved by a single telescope.\n\nWhile we have considered nearby stars as examples, the results can be trivially scaled. For example, scaling Sirius up in size and distance by $10^4$ and in luminosity by $10^8$ (keeping apparent size, apparent brightness and effective temperature the same) would be not unlike a supernova in the LMC. This suggests that supernova shells in nearby galaxies could be rather easily resolved --- the equivalent of SN~1987a with very modest equipment and the equivalent of SN~2011fe in M101 with the sort of instruments now being proposed.\n\nSection~\\ref{snrchap} also shows the necessary observation times to reach a signal to noise ratio of one for the three-point correlation signal. Skipping ahead to Figure \\ref{fig:qtobs3} we can immediately see that measuring the bispectrum in the visible range using off-the-shelf photon counters is easily achievable for larger telescopes, as the observation time goes as the inverse cube of the collection area. 
To emphasize this point: measuring the three-point HBT signal will be feasible and informative for CTA-type mirrors.\n\n\n\\begin{figure}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{structured11.png}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{structured12.png}\n\\end{minipage}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{structured13.png}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{structured14.png}\n\\end{minipage}\n\\caption{Simulation of a structured, crudely limb-darkened disc of 1~mas radius (inspired by Betelgeuse): the top left panel shows the simulated source brightness distribution; the top right panel shows the absolute value of the visibility squared, i.e.\\ the two-point correlation term in equation (\\ref{eq:corr}), as simulated using a 2D FFT algorithm. The axes of this and the bottom panels are baselines in metres, for $\\lambda=500\\rm\\,nm$. For completeness the bottom panels show the real and imaginary parts of the complex visibility. Note that the imaginary part goes to zero for point-symmetric sources.}\n\\label{fig:simone}\n\\end{figure}\n\n\\section{Complex visibility vs HBT observables}\n\\label{sec:cv}\n\nThe Van Cittert--Zernike theorem allows us to simulate complex visibilities using a 2D FFT. Figure~\\ref{fig:simone} shows a simulation of a structured, crudely limb-darkened disc of 1~mas radius.\\footnote{The baseline scales are always in metres: units of frequency space $(u,v)$ multiplied by an assumed measurement wavelength $\\lambda=500\\rm\\,nm$.} It features three spots that are 25\\% brighter than the rest of the disc. This mimics the features observed on Betelgeuse using standard optical interferometry \\citep{2000MNRAS.315..635Y,2009A&A...508..923H}. Figure \\ref{fig:fftld} shows a similar crudely limb-darkened disc without the spots. 
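A minimal numerical sketch of this FFT procedure, under illustrative assumptions (a 256-pixel grid, a 20-pixel disc radius, and a simple square-root darkening profile rather than the exact model behind the figures): the disc is built on a grid, a 2D FFT gives the complex visibility, and the two-point term of equation (\ref{eq:corr}) and the triple product of equation (\ref{eq:3corr}) follow by sampling it.

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)
R = 20.0                                        # disc radius in pixels

# crude limb darkening: brightness falls towards the rim (assumed profile)
disc = np.zeros((N, N))
disc[r <= R] = np.sqrt(1.0 - (r[r <= R] / R) ** 2)

# Van Cittert-Zernike: the normalized FFT of the brightness distribution
V = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(disc)))
V /= V[N // 2, N // 2]                          # normalize so that V(0,0) = 1

def vis(u, v):
    """Complex visibility sampled at integer grid frequencies (u, v)."""
    return V[N // 2 + v, N // 2 + u]

# two-point observable |V|^2 and the triple product, for a closed triangle
# of baselines b12 + b23 + b31 = 0 (baseline values are arbitrary)
b12, b23 = (5, 0), (0, 7)
b31 = (-b12[0] - b23[0], -b12[1] - b23[1])
two_point = abs(vis(*b12)) ** 2
triple = 2 * np.real(vis(*b12) * vis(*b23) * vis(*b31))
```

Because this source is point symmetric, the imaginary part of `V` vanishes up to roundoff, just as in the figures.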
The power spectra show only extremely subtle differences; looking at the real and imaginary parts, however, it is clear that one brightness distribution is point symmetric and the other one is not. Clearly it is desirable to have information about the phase of the complex visibility.\n\n\n\\begin{figure}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{limbdark11.png}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{limbdark12.png}\n\\end{minipage}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{limbdark13.png}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{limbdark14.png}\n\\end{minipage}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{examplelimbdark26ab.png}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{examplelimbdark26ph.png}\n\\end{minipage}\n\\caption{Simulation of a crudely limb-darkened disc of $1\\rm\\,mas$ radius: on the top left the simulated source, on the top right the squared visibility; the middle panels show the real and imaginary parts of the complex visibility. As we can see, the imaginary part is zero except for roundoff errors. The two lower panels show the absolute value and the cosine of the phase of the bispectrum as functions of the position of a third detector, for a fixed 107~m long baseline between two detectors as indicated by the black line. Note that these quantities cannot be measured directly, but their product can.}\n\\label{fig:fftld}\n\\end{figure}\n\nThe bispectrum is a function of two baselines, so it cannot be shown in a single figure. In Figure \\ref{fig:diffat} we show an example with two detectors (hence one baseline) fixed. 
\\footnote{A program to simulate other baselines as well as different sources, such as a binary, is provided in the supplementary material.}\n\nFeatures of 25\\% difference in brightness appear at the level of $10^{-3}$ in the bispectrum. To assess the feasibility of measuring this, one has to calculate the signal to noise ratio.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1.\\linewidth]{compare26.png}\n\\caption{Bispectra $2{\\rm Re}[V_{12}V_{23}V_{31}]$ as the position of one detector is varied while the other two detectors are fixed (at opposite ends of the black line indicating a baseline of 107~m as before). The left panel corresponds to the source in figure \\ref{fig:fftld}, while the right panel corresponds to the disc in figure \\ref{fig:simone}. The center panel shows the difference.}\n\\label{fig:diffat}\n\\end{figure}\n\n\\section{Count rates and Signal to Noise}\n\\label{snrchap}\n\n\\subsection{Count rates}\n\n\\begin{figure}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{qeffpdm-eps-converted-to.pdf}\n\\end{minipage}\n\\hspace{0.5cm}\n\\begin{minipage}[b]{0.5\\linewidth}\n\\centering\n\\includegraphics[scale=.45]{qeffidq-eps-converted-to.pdf}\n\\end{minipage}\n\\caption{Seventh-order polynomial fits to the quantum efficiencies of the PicoQuant PDM series$^{1}$ (left) and the IDQuantique ID100 series$^{2}$ (right), extracted from figures on the manufacturers' data sheets using the WebPlotDigitizer$^{3}$ by Ankit Rohatgi.}\n\\label{fig:qefffit} \n\\end{figure}\nFor purposes of estimating count rates we approximate stars by black body discs. In particular we approximate Sirius (temperature 9940~K, diameter $0.006''$), Betelgeuse (3500~K, $0.04''$), and Capella~a (5700~K, $0.003''$). 
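These black body parameters translate directly into photon counts per coherence time, $\gamma A\Omega/\lambda^2 \cdot (\exp[hc/\lambda k_BT]-1)^{-1}$; a quick numerical sketch, in which the star parameters are the approximate values above while the 0.1~m mirror and 50\% efficiency are assumptions:

```python
import math

# Counts per coherence time for a blackbody disc:
#   gamma * A * Omega / lambda^2 * 1 / (exp(h c / (lambda k_B T)) - 1).
# Star parameters are the approximate values quoted in the text; the mirror
# size and 50% efficiency are illustrative assumptions.
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def counts_per_coherence_time(gamma, area, diam_arcsec, lam, T):
    theta = diam_arcsec * math.pi / (180 * 3600)        # diameter in rad
    omega = math.pi * (theta / 2) ** 2                  # solid angle of disc
    occupation = 1.0 / (math.exp(H * C / (lam * KB * T)) - 1.0)
    return gamma * area * omega / lam ** 2 * occupation

area = math.pi * 0.05 ** 2                              # 0.1 m diameter mirror
n_sirius = counts_per_coherence_time(0.5, area, 0.006, 550e-9, 9940)
n_betelgeuse = counts_per_coherence_time(0.5, area, 0.04, 550e-9, 3500)
```

The results come out far below one count per coherence time, which is why the correlations must be accumulated over many counting intervals.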
Let $r$ be the average count rate for one detector; the number of counts per coherence time $\\Delta\\tau$ then has the simple form \\citep[][]{2014MNRAS.437..798M}:\n\\begin{equation}\n\\gamma r\\Delta\\tau = \\frac{\\gamma A\\Omega} {\\lambda^2 (\\exp{[h c\/\\lambda k_B T]} - 1)}\n\\label{eq:snrarea2det}\n\\end{equation}\nwhere $\\Omega$ is the solid angle of the source, $A$ the collection area of the detectors, $\\lambda$ the measurement wavelength, $T$ the surface temperature of the star, and $\\gamma$ the quantum efficiency of the detector. Today's possibilities allow for reasonable quantum efficiencies at high time resolutions in the middle of the visible spectrum. \nOff-the-shelf photon counters such as the PicoQuant PDM series$^{1}$ or the ID100 series of IDQuantique$^{2}$ can help to get a feeling for achievable count rates (see best-case parameters in Table \\ref{detectors} and fits to quantum efficiency measurements by the manufacturers in Figure \\ref{fig:qefffit}). \nZooming in on the wavelength region that is covered by these detectors, Figure \\ref{fig:qrate} shows the expected count rates for different stars using narrowband filters with $\\Delta\\lambda = 1\\rm\\,nm$.\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lcccc}\ndetector & max.\\ efficiency & wavelength of max.\\ efficiency & time resolution $\\Delta t$ & dead time \\\\\n\\hline\nPicoQuant & 49 \\% & 550 nm & 50 ps & 80 ns \\\\\nPDM series$^{1}$ &&&& \\\\\n\\hline\nIDQuantique & 35 \\% & 500 nm & 40 ps & 45 ns \\\\\nID100 series$^{2}$ \\\\\n\\hline\n\\end{tabular}\n\\caption{Some best-case specifications of the two mentioned single photon counters.}\n\\label{detectors}\n\\end{table}\n\\footnotetext[1]{http:\/\/www.picoquant.com\/images\/uploads\/downloads\/pdm\\_series.pdf} 
\n\\footnotetext[2]{http:\/\/www.idquantique.com\/images\/stories\/PDF\/id100-single-photon-detector\/id100-specs.pdf}\n\\footnotetext[3]{http:\/\/arohatgi.info\/WebPlotDigitizer\/app\/}\n\n\n\\begin{figure}\n\\begin{minipage}[b]{\\linewidth}\n\\centering\n\\includegraphics[scale=.8]{qeffratepico.png} \n\\end{minipage}\n\\begin{minipage}[b]{\\linewidth}\n\\centering\n\\includegraphics[scale=.8]{qeffrateidq.png} \n\\end{minipage}\n\\caption{Upper panel: expected count rates per coherence time $\\Delta\\tau$ using two PicoQuant PDM series photon counters after filtering to a bandwidth of $\\Delta\\lambda = 1\\rm\\,nm$ for different stars, using the quantum efficiency estimates given in the left panel of figure \\ref{fig:qefffit}. The plot shows equation (\\ref{eq:snrarea2det}) evaluated for Sirius (blue), Betelgeuse (magenta), and Capella a (cyan) at the approximate parameters given in table \\ref{stars}. The lower panel shows the expected count rates per coherence time for measurements using the IDQuantique ID100 series.}\n\\label{fig:qrate}\n\\end{figure}\n\n\n\n\n\\subsection{Signal to Noise Ratio for 2 Detectors}\n\nOver a coherence time $\\Delta \\tau$ there will be $|V_{12}|^2(\\gamma r\\Delta \\tau)^2$ excess coincidences. \nThe time resolution, or counting time, $\\Delta t$ of a detector will be much larger than the coherence time, $\\Delta t \\gg \\Delta\\tau$, so in one counting interval there will be $\\Delta t\/\\Delta\\tau$ such contributions, giving $|V_{12}|^2 (\\gamma r)^2\\Delta\\tau\\Delta t$. 
Meanwhile there will be $(\\gamma r\\Delta t)^2$ random coincidences corresponding to the first term of equation (\\ref{eq:corr}), giving $\\gamma r\\Delta t$ of noise, so the signal to noise ratio in $\\Delta t$ becomes:\n\\begin{equation}\nSNR(\\Delta t) = |V_{12}|^2 \\gamma r \\Delta\\tau .\n\\end{equation}\nNote that the signal to noise ratio is independent of the bandwidth: decreasing the bandwidth results in lower rates but will not change the signal to noise ratio.\n\nEvaluating the signal to noise ratio over an observation time $T_{\\rm obs}$, we assume that the intensity fluctuations measured in one counting time $\\Delta t$ are not correlated with the ones measured in the next interval, which is a reasonable assumption, since at relevant wavelengths $\\Delta t\\gg\\Delta\\tau$ ($\\Delta\\tau \\sim 10^{-12}\\rm\\,s$). Thus, uncorrelated, Gaussian noise dominates and the SNR adds in quadrature:\n\\begin{equation}\nSNR = SNR(\\Delta t) \\sqrt{T_{\\rm obs}\/\\Delta t}\n\\end{equation}\nWe can now find the required observation time for a given signal to noise ratio using equation (\\ref{eq:snrarea2det}):\n\\begin{equation}\nT_{\\rm obs}(SNR) = \\frac{SNR^2}{|V_{12}|^4}\\Delta t \\Big[\\frac{\\gamma A \\Omega} {\\lambda^2 (\\exp{[h c\/\\lambda k_B T]} - 1)}\\Big]^{-2}\n\\label{eq:TOBS}\n\\end{equation}\nFigure \\ref{fig:qtobs2} shows the observation times necessary for some example cases to reach an SNR of $1$ for $|V_{12}|^2 = 1$. Clearly detector technology has improved greatly in the past half century: observing Sirius, Hanbury Brown and Twiss needed a few minutes to reach a signal to noise ratio of 1 using $5$-foot searchlight mirrors. 
Now the same could be done using $10\\rm\\,cm$ diameter mirrors.\n\n\\begin{figure}\n\\begin{minipage}[b]{\\linewidth}\n\\centering\n\\includegraphics[scale=.8]{2pthbtpicoq.png} \n\\end{minipage}\n\\vskip -1.4cm\n\\begin{minipage}[b]{\\linewidth}\n\\centering\n\\includegraphics[scale=.8]{2pthbtidq.png} \n\\end{minipage}\n\\caption{Upper panel: observation time estimates in seconds to reach an HBT SNR of $1$ for dishes with diameters of $0.1\\rm\\,m$ (scales on the left) and $10\\rm\\,m$ (scales on the right) using the PicoQuant PDM series single photon counters, see table \\ref{stars} for the color code. The lower panel shows observation time estimates using ID100 detectors by IDQuantique.}\n\\label{fig:qtobs2}\n\\end{figure}\n\n\n\n\n\\subsection{Signal to Noise Ratio for 3 Detectors}\n\\begin{figure}\n\\begin{minipage}[b]{\\linewidth}\n\\centering\n\\includegraphics[scale=.8]{3pthbtpico.png} \n\\end{minipage}\n\\vskip -1.4cm\n\\begin{minipage}[b]{\\linewidth}\n\\centering\n\\includegraphics[scale=.8]{3pthbtidq.png} \n\\end{minipage}\n\\caption{Upper panel: observation time estimates to reach an SNR of $1$ for $|V_{123}| = 10^{-3}$ for dishes with diameters of $0.1\\rm\\,m$ (scales on the left) and $10\\rm\\,m$ (scales on the right) using PicoQuant PDM series single photon counters versus wavelength, see table \\ref{stars} for the color code. The lower panel shows observation time estimates using the ID100.}\n\\label{fig:qtobs3}\n\\end{figure}\n\nOver a coherence time $\\Delta \\tau$ there will be $V_{123}(\\gamma r\\Delta \\tau)^3$ excess coincidences, where $V_{123}$ is the last term in equation (\\ref{eq:3corr}). So over a time $\\Delta t$ there will be $V_{123}(\\gamma r)^3 \\Delta \\tau^2\\Delta t$ signal coincidences. 
Meanwhile there will be $(\\gamma r \\Delta t)^{3\/2}$ of noise, hence the signal to noise ratio is:\n\\begin{equation}\nSNR_{3}(\\Delta t) \\sim V_{123}(\\gamma r\\Delta\\tau)^{3\/2} (\\Delta\\tau\/\\Delta t)^{1\/2} ,\n\\label{3detSNR}\n\\end{equation}\nagain understanding $r\\Delta \\tau$ as the count rate at one detector over a coherence time. We can now rewrite and expand as before to find the observation time for a specific signal to noise ratio:\n\\begin{equation}\nT_{\\rm obs}(SNR) = \\frac{SNR^2}{V_{123}^2}\\frac{\\Delta t^2}{\\Delta\\tau} \\Big[\\frac{\\gamma A \\Omega} {\\lambda^2 (\\exp{[h c\/\\lambda k_B T]} - 1)}\\Big]^{-3}\n\\label{eq:SNRT3}\n\\end{equation}\nComparing with equation (\\ref{eq:TOBS}) we note the steeper dependence not only on the quantum efficiency $\\gamma$ but also on the counting time resolution $\\Delta t$ and the detector area $A$.\nAs we saw from Figures \\ref{fig:simone} and \\ref{fig:diffat}, features with brightness differences of around 25\\% show up in $V_{123}$ at the level of $10^{-3}$. Figure \\ref{fig:qtobs3} shows observation time estimates for our example cases for an $SNR$ of $1$ and $V_{123} = 10^{-3}$. We can immediately see that attempting three-point HBT with small telescopes is not an option, but increasing the telescope diameters dramatically reduces observation times, which fall as the inverse cube of the collection area, thus making it possible to measure the bispectrum. With the equivalent of $10\\rm\\,m$ diameter mirrors the signal should be easily detectable in the case of Betelgeuse or Sirius.\n\n\\section{Outlook}\nThe results shown in Figures \\ref{fig:qtobs2} and \\ref{fig:qtobs3} suggest that the recovery of HBT phase is feasible with present day detector technology and may lead to major advances in stellar imaging. Many technical problems will need to be solved first. The most important of these are the following.\n\\begin{itemize}\n\\item Designing suitable configurations for three detectors is essential. 
In particular, for Betelgeuse $10\\rm\\,m$ diameter mirrors would wash out the interference patterns. Some combination of small mirrors to resolve the large-scale and large mirrors to resolve the small-scale structures is desirable.\n\\item An image reconstruction algorithm using the two- and three-point HBT signals is needed.\n\\item Large mirrors such as those proposed for CTA introduce non-isochronicity of a nanosecond or more \\citep{2006ApJ...649..399L}. In order to benefit from fast photon counting, the light paths from different parts of the mirror need to be equalized to the sub-$c\\,\\Delta t$ (submillimetre) level. Since only small fields of view are involved, it is plausible that a simple spherical-aberration compensator could do the job.\n\\end{itemize}\n\n\\section*{Acknowledgments}\n\nWe thank the referee, Paul Nu\\~nez, for many helpful comments.\n\n\\clearpage\n\\bibliographystyle{astron}\n\n\n\\section{Introduction}\nUniaxially modulated structures are observed in very different classes of magnetic and ferroelectric substances. In many cases they exhibit rather complex phase diagrams with a large variety of phases. Phase transitions from a high temperature paramagnetic or paraelectric phase (paraphase) to commensurately and incommensurately modulated phases occur, as external control parameters like temperature and elastic stresses are varied. Microscopic models are successfully used for the description of these modulated systems. They were reviewed e.g.\\ in \\cite{Sel88}. 
An interesting example is the $p$-state chiral clock model \\cite{Yeo82}, whose Hamiltonian is\n\\begin{equation}\n\\fl H = - J_0 \\sum\\limits_{\\alpha} \\sum\\limits_{\\left< ij \\right>} \\cos \\left[ \\frac{2 \\pi}{p}\n\\left( n_{i, \\alpha} - n_{j, \\alpha} \\right) \\right] \n- J \\sum\\limits_i \\sum\\limits_{\\alpha} \\cos \\left[ \\frac{2 \\pi}{p}\n\\left( n_{i, \\alpha} - n_{i, \\alpha+1} + \\Delta\n\\right) \\right].\n\\label{hamiltonian}\n\\end{equation}\n$\\alpha$ labels the layers perpendicular to the direction of the modulation (chiral direction) and $i$, $j$ the crystal units in these layers. $\\left< ij \\right>$ runs over neighbouring pairs in the layers. The integer variables $n_{i, \\alpha}$ describe the state of the unit $(i,\\alpha)$. They assume one of the values from 0 to $p-1$. Below they are called spins. The two terms in equation (\\ref{hamiltonian}) describe couplings ($J_0 > 0$, $J > 0$) between nearest neighbours in the same and in adjacent layers, respectively.\\\\\nIn the ground state every layer is ferromagnetically ordered. Depending on the value of $\\Delta$, various ordering patterns of the different layers are realized. For $0 \\leq \\Delta < \\frac{1}{2}$ nearest neighbours in the chiral direction couple ferromagnetically (ferromagnetic bond), thus leading to a ferromagnetic ground state where all spins are equal. For $\\frac{1}{2} < \\Delta \\leq 1$ the spin increases by one for successive layers (chiral bond), thus yielding the right-handed chiral pattern\n\\begin{displaymath}\n\\ldots ~ 0 ~ 1 ~ 2 ~ \\ldots ~ (p-1) ~ 0 ~ 1 ~ \\ldots\n\\end{displaymath}\n$\\Delta = \\frac{1}{2}$ is a multiphase point at which infinitely many different phases are degenerate, since ferromagnetic and chiral bonds have the same energy.\\\\\nWhereas the three-state model ($p = 3$) \\cite{Ost81,Hus81} has been very thoroughly investigated, only a few results are known for the general case $p \\geq 4$. 
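The bond competition described above is easy to verify numerically; a short sketch for the $p = 4$ case, with $J$ set to 1 as an arbitrary normalization:

```python
import math

# Axial bond energies of the chiral clock Hamiltonian for p = 4, J = 1:
# a ferromagnetic bond has spin difference 0, while for a right-handed
# chiral bond the spin increases by one, i.e. n_alpha - n_{alpha+1} = -1.
def ferro_bond(delta, p=4, J=1.0):
    return -J * math.cos(2 * math.pi * delta / p)

def chiral_bond(delta, p=4, J=1.0):
    return -J * math.cos(2 * math.pi * (delta - 1) / p)

# For 0 <= delta < 1/2 the ferromagnetic bond is lower in energy, for
# 1/2 < delta <= 1 the chiral bond wins, and delta = 1/2 is the degenerate
# multiphase point where both bonds cost the same energy.
assert ferro_bond(0.2) < chiral_bond(0.2)
assert chiral_bond(0.8) < ferro_bond(0.8)
assert abs(ferro_bond(0.5) - chiral_bond(0.5)) < 1e-12
```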
\nThere are derivations of the low\ntemperature phase diagram of the general $p$-state model by an expansion of the free\nenergy in the vicinity of the multiphase point \\cite{Yeo82} as well as by a low\ntemperature mean-field theory \\cite{Yeo84}, in which it was claimed that, for\nthe four-state model ($p = 4$), only the phases $\\left< 1 2^k \\right>$, $\\left< 1 2^k 1 2^{k+1}\n\\right>$, $\\left< 2^k 3 \\right>$, $\\left< 2^k 3 2^{k+1} 3 \\right>$, $\\left< 4\n\\right>$, and $\\left< \\infty \\right>$ ($k = 0, 1, 2, \\ldots$) are stable at low temperatures.\n$\\left< u_1 \\ldots u_r \\right>$ is a shorthand notation for the phase with a period\nconsisting of $r$ bands with $u_1, u_2, \\ldots , u_r$ \nlayers with spins $n$, $n+1, \\ldots, n+r$ (all modulo $p$) respectively.\nThe phase $\\left< 12 \\right>$, for example, is given by the layer sequence\n\\begin{displaymath}\n\\ldots ~ 0 ~ 1 ~ 1 ~ 2 ~ 3 ~ 3 ~ 0 ~ 1 ~ 1 ~ \\ldots\n\\end{displaymath}\nThe ferromagnetic and chiral ground states are denoted by $\\left< \\infty \\right>$ \nand $\\left< 1 \\right>$ respectively.\\\\\nMcCullough \\cite{Mcc92} investigated the phase diagram for\n$p = 3$, 4, and 5 using the mean-field transfer-matrix (MFTM) method.\nFrom the numerical \nextrapolation of the data it was concluded that the low temperature phase diagrams\nfor $p = 3$ and $p = 4$ were consistent \nwith the results of the low-temperature\nseries expansion \\cite{Yeo82,Yeo81}. It is interesting, that, for $p = 5$,\nnew phases not predicted by the\nlow-temperature series expansion \\cite{Yeo82} were found to be stable at low\ntemperatures.\\\\\nScholten and King \\cite{Sch96} presented Monte Carlo simulations of\nthe four- and the six-state models. They investigated especially the transition\nfrom the modulated phases to the ferromagnetic phase (i.e. 
$\\Delta < \\frac{1}{2}$).\nAs it was not possible to resolve particular phases, they determined the \"interface\nspacing\" as the average number of layers in a band for a given phase.\nThey claimed that, for $\\Delta = 0.45$, the results were not\ninconsistent with the predictions of Yeomans. In the case $p =4$ and $\\Delta = 0.2$\nnew phases with an interface spacing larger than the interface spacings of the phases\npredicted in \\cite{Yeo82} were observed \nclose to the transition to the ferromagnetic phase.\\\\\nRecently the four-state chiral clock model was shown \\cite{Ple97} to be \na special case of the Double Ising Spin (DIS) model \\cite{Ple94,Neu94,Ple96},\nwhich was introduced to describe uniaxially modulated ferroelectrics.\\\\\nIn the following new results for the four-state chiral clock model are presented. \nIn section \\ref{sec2} and \\ref{sec3}\nwe will reexamine the low temperature phase diagram and \ndiscuss discrepancies with previous results.\nIn section \\ref{sec4} it is shown that the \ntransition from the modulated phases to the paramagnetic phase belongs to the\nuniversality class of the 3d-$XY$ model, and in section 5\nshort conclusions are given.\n\n\\section{The low temperature series expansion\\label{sec2}}\nThe present series expansion technique for the four-state chiral clock ($CC_4$) model\nis similar to the method\ndeveloped by Fisher and Selke \\cite{Fis80} for the Axial Next Nearest \nNeighbour Ising (ANNNI) model. At low temperatures the reduced free energy\nper spin $f = \\frac{F}{Nk_BT}$ ($N$ is the total number of spins)\nmay be expanded in the form \\cite{Fis80}\n\\begin{equation}\nf = \\frac{E_0}{k_BT} - \\frac{1}{N} \\sum\\limits_{n \\geq 1} \\Delta Z_N^{(n)}.\n\\label{free_energy}\n\\end{equation}\n$\\Delta Z_N^{(n)}$ is the total contribution \nto the partition function from configurations in which \n$n$ spins have flipped (as compared to the ground state). 
$E_0$,\nthe ground state energy per spin, can be expressed \\cite{Yeo82,Yeo81} in terms\nof the structural variables \\cite{Fis80}\n$l_k = L_k\/L$ ($L_k$: number of $k$-layer bands; $L$: total number of layers):\n\\begin{displaymath}\nE_0 \\left( \\left\\{ l_k \\right\\} \\right) = - \\frac{1}{2} q_\\perp J_0 - J_1 \n-J_1 \\, \\delta \\, \\sum\\limits_{k \\geq 1} l_k\n\\end{displaymath}\nwith $J_1 = J \\cos \\left( \\frac{\\pi}{2} \\Delta \\right)$ and $\\delta = \\tan \\left( \n\\frac{\\pi}{2} \\Delta \\right) - 1$. The number of nearest neighbours in the \nlayers is $q_\\perp$; it is 4 for the primitive cubic lattice.\\\\\nThe contributions $\\Delta Z_N^{(n)}$ are expressed\nin terms of the elementary Boltzmann factors\n\\begin{displaymath}\n\\fl w = \\exp \\left( - K_0 \\right), ~ x = \\exp \\left( - 2 K \\, \\cos \\left( \n\\frac{\\pi}{2} \\Delta \\right) \\right)\n~~ \\mbox{and} ~~ y = x^{1+ \\delta} = \\exp \\left( - 2 K \\, \\sin \n\\left( \\frac{\\pi}{2} \\Delta \\right) \\right)\n\\end{displaymath}\nwith $K_0 = J_0\/(k_BT)$ and $K = J\/(k_BT)$. \nThe reduced free energy per spin can be expanded in a convergent power series\nof $w$, provided that $x \\gg w$, i.e.\\ if $J_0$ is large compared to $J$ (which\nis assumed throughout this paper). \nThe weight $w$ results from changing\nan in-layer bond between spins with equal values to a bond between spins\nwith values differing by 1. 
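Before turning to the $w$ expansion, the ground-state energy formula $E_0\left(\{l_k\}\right)$ above can be checked against direct bond counting; a sketch for the $\left< 12 \right>$ structure of the $p = 4$ model, with arbitrary illustrative couplings:

```python
import math

# Compare E0({l_k}) with a direct axial-bond count for the <12> structure of
# the p = 4 model (layer period 0 1 1 2 3 3).  J0, J, Delta are arbitrary
# illustrative values; in-layer bonds contribute -q_perp*J0/2 per spin.
p, q_perp = 4, 4
J0, J, Delta = 1.0, 0.7, 0.6

J1 = J * math.cos(math.pi / 2 * Delta)
delta = math.tan(math.pi / 2 * Delta) - 1

# per period of three layers: one 1-band and one 2-band -> l_1 = l_2 = 1/3
E0_formula = -0.5 * q_perp * J0 - J1 - J1 * delta * (1 / 3 + 1 / 3)

# direct count of the axial bond energies over one full period of six layers
seq = [0, 1, 1, 2, 3, 3]
E_axial = -J / len(seq) * sum(
    math.cos(2 * math.pi / p * (seq[i] - seq[(i + 1) % len(seq)] + Delta))
    for i in range(len(seq)))
E0_direct = -0.5 * q_perp * J0 + E_axial

assert abs(E0_formula - E0_direct) < 1e-12
```

Because the cosine is $2\pi$-periodic, the spin differences may be taken modulo $p$ without changing any bond energy, so the wrap-around bond $3 \to 0$ is correctly counted as a chiral bond.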
\nThe lowest orders involved are $w^{q_\\perp}$ (overturning one spin),\n$w^{2 q_\\perp-2}$ (overturning two neighbouring spins in one layer) and\n$w^{2 q_\\perp}$ (overturning two spins not being in-layer nearest neighbours).\\\\\nThere are\nthree possible environments of a given spin (the numbers in\nparentheses are the values of the spins in three consecutive layers where the\nconsidered spin belongs to the middle layer): (a) spins with\ntwo ferromagnetic bonds in the chiral direction (e.g.\\ $0\\hat{0}0$), (b) spins\nwith one ferromagnetic and one chiral bond (e.g.\\ $0\\hat{1}1$), and (c) spins \nwith two chiral bonds (e.g.\\ $0\\hat{1}2$).\\\\\nLet us discuss, as an example, the contribution\nto $\\Delta Z_N^{(1)}$ [first-order term in equation (\\ref{free_energy})] for case (a). \nBy overturning one spin, three different final states can be obtained\n($m$ being the initial state): $(m+1) ~ \\mbox{mod} ~ 4$, $(m+2) ~ \\mbox{mod} ~ 4$, and \n$(m+3) ~ \\mbox{mod} ~ 4$. This leads to the Boltzmann factor\n\\begin{eqnarray}\n\\fl \\sum\\limits_{n = 1}^3 \\exp \\left[ - \\left( E_f(n) - E_i \\right)\/\\left( k_BT \\right) \\right]\n\\nonumber \\\\\n\\lo= \\sum\\limits_{n = 1}^3 \\left( \\exp \\left\\{ 2K \\, \\cos \\left( \\frac{\\pi}{2} \\Delta \\right)\n\\, \\left[ \\cos \\left( \\frac{\\pi}{2} n \\right) -1 \\right] \\right\\} \\right. \\nonumber \\\\\n\\left. \\times\n\\exp \\left\\{ q_\\perp \\, K_0 \\left[ \\cos \\left( \\frac{\\pi}{2} n \\right) -1 \\right] \\right\\} \\right)\n\\nonumber \\\\\n\\lo= x \\, w^{q_\\perp} + x^2 \\, w^{2q_\\perp} + x \\, w^{q_\\perp}.\n\\label{boltz}\n\\end{eqnarray}\nIt is obvious from\nequation (\\ref{boltz}) that the process $m \\longrightarrow (m+2) ~ \\mbox{mod} ~ 4$ does not contribute\nto the lowest order term in the expansion, as it has the same in-layer Boltzmann factor\n$w^{2q_\\perp}$ as the higher order process by which the values of two uncoupled\nspins change by 1. 
In fact, this process of the order $w^{2q_\\perp}$\ndoes not even contribute to the \nlowest order correction term, which is of the order $w^{2q_\\perp-2}$ (flipping of two\nneighbouring spins in one layer \\cite{Fis80}).\\\\\nIn reference \\cite{Yeo82} the following contribution to $\\Delta Z_N^{(1)}$ for the case (a) is\ngiven:\n\\begin{eqnarray}\n\\fl \\sum\\limits_{n = 1}^3 \\exp \\left[ - \\left( E_f(n) - E_i \\right)\/\\left( k_BT \\right) \\right]\n\\nonumber \\\\\n\\lo= \\sum\\limits_{n = 1}^3 \\exp \\left\\{ 2K \\, \\cos \\left( \\frac{\\pi}{2} \\Delta\n\\right) \\, \\left[ \\cos \\left( \\frac{\\pi}{2} n \\right) -1 \\right] \\right\\} \\omega^{q_\\perp}\n\\nonumber \\\\\n\\lo= \\left( x + x^2 + x \\right) \\omega^{q_\\perp}\n\\label{boltzyeo}\n\\end{eqnarray}\nwith\n\\begin{equation}\n\\omega = \\sum\\limits_{n = 1}^3 \\exp \\left\\{ K_0 \\left[ \\cos \\left( \\frac{\\pi}{2} n \\right)\n-1 \\right] \\right\\} \n= w + w^2 + w .\n\\label{boltzyeo2}\n\\end{equation}\nA comparison of equations (\\ref{boltzyeo}) and (\\ref{boltzyeo2}) \nwith equation (\\ref{boltz}) reveals that\nthe treatment of the in-layer bonds is erroneous in reference \\cite{Yeo82}. \nThe free energy is written in reference \\cite{Yeo82} as an expansion in terms of the (erroneous)\nBoltzmann factor $\\omega$.\nAs a consequence, contributions from different\norders of the expansion are treated in reference \\cite{Yeo82} as \nif they were of the same order. Thus, in our example,\nthe term $x^2$, resulting from the process $m \\longrightarrow (m+2) ~ \\mbox{mod}\n~ 4$ and contributing to a higher order correction in the polynomial\nexpansion in $w$ [see equation (\\ref{boltz})], contributes \nto the lowest order in \\cite{Yeo82} [see equation (\\ref{boltzyeo})]. 
This error\nis repeated for all considered spin configurations and for all considered $p$-state\nmodels ($p \\geq 4$), thus\nleading to a wrong low temperature phase diagram not only for the $CC_4$ model, but\nalso for the generalised $p$-state chiral clock model with $p \\geq 4$. One should\nemphasise that the treatment\nof the in-layer bonds is correct in the analysis of the $CC_3$ model \\cite{Yeo81}.\\\\\nWith the correct contributions, the reduced free energy [equation (\\ref{free_energy})] in first order is given by\n\\begin{equation}\n\\fl f = - \\frac{1}{2} q_\\perp K_0 - K_1 - \\frac{1}{2} K_1 \\delta - \\left( 1 + x \\, y \\right) \nw^{q_\\perp} + a_1( \\delta ) \\, l_1 + \\sum\\limits_{k \\geq 3} a_k( \\delta ) \\, l_k\n+ O(w^{2 q_\\perp -2})\n\\label{f_firstorder}\n\\end{equation}\nwith \n\\begin{displaymath}\na_1( \\delta ) = - \\frac{1}{2} K_1 \\delta - \\left( 2 y - x y -1 \\right) w^{q_\\perp}\n\\end{displaymath}\nand\n\\begin{displaymath} \na_k( \\delta ) = \\left( k - 2 \\right) \\left[ \\frac{1}{2} K_1 \\delta - \\left( 2 x - x y -1 \n\\right) w^{q_\\perp} \\right].\n\\end{displaymath}\nThe set of structural variables $l_k$ minimizing\n$f$ for given values of $\\delta$ and $T$ determines the stable phases occurring in\nfirst order (see figure 1): the $\\left< \\infty \\right>$-, the $\\left< 1 \\right>$-,\nand the $\\left< 2 \\right>$-phase.\nPhases $\\left< \\infty \\right>$ and $\\left< 2 \\right>$ are, in this order\nof the expansion, separated by a boundary, at which all phases that are degenerate at\nthe multiphase point and that do not contain 1-layer bands have the same free energy. Likewise,\nphases containing only 1- and 2-layer bands are still degenerate on the boundary\nbetween the $\\left< 1 \\right>$- and the $\\left< 2 \\right>$-phase.\\\\\nOne could now proceed in considering processes involving two spins, then three\nspins and so on. This is very cumbersome and only feasible for processes involving\nfew spins. 
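The single-spin bookkeeping of equation (\ref{boltz}) is easy to cross-check numerically. The following sketch (parameter values are illustrative assumptions only) sums the three flipping processes of case (a) directly from the energy differences and compares the result with the closed form $x \, w^{q_\perp} + x^2 \, w^{2q_\perp} + x \, w^{q_\perp}$:

```python
import math

# Numerical check of equation (boltz): flipping one spin m -> (m+n) mod 4
# in the ferromagnetic environment of case (a).  The parameter values are
# illustrative assumptions; any K0 > 0, K > 0, 0 < Delta < 1 works.
K0, K, Delta, q_perp = 3.0, 1.0, 0.6, 4  # q_perp = 4: primitive cubic lattice

w = math.exp(-K0)
x = math.exp(-2.0 * K * math.cos(math.pi / 2 * Delta))

# Left-hand side: sum over the three final states n = 1, 2, 3 of the product
# of the axial (chiral-direction) and the in-layer Boltzmann factors.
lhs = sum(
    math.exp(2.0 * K * math.cos(math.pi / 2 * Delta) * (math.cos(math.pi / 2 * n) - 1.0))
    * math.exp(q_perp * K0 * (math.cos(math.pi / 2 * n) - 1.0))
    for n in (1, 2, 3)
)

# Right-hand side: the closed form x w^{q_perp} + x^2 w^{2 q_perp} + x w^{q_perp}.
rhs = x * w**q_perp + x**2 * w**(2 * q_perp) + x * w**q_perp

assert math.isclose(lhs, rhs, rel_tol=1e-12)
```

The $n = 2$ process contributes $x^2 w^{2q_\perp}$ and is therefore of higher order in $w$, exactly as stated above.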
\nIn the next section the phases stable in general order in the series expansion\nwill be determined using a transfer-matrix method.\n\n\\section{Transfer-matrix method\\label{sec3}}\n\\subsection{Introductory remarks}\nOne should first note that the Hamiltonian (\\ref{hamiltonian}) is left invariant by\nthe transformation\n\\begin{eqnarray}\n&& \\Delta \\longrightarrow \\Delta ' = 1 - \\Delta \\nonumber \\\\\n&& n_{i,\\alpha} \\longrightarrow n_{i,\\alpha}' = \\left( - n_{i,\\alpha}\n + \\alpha \\right) ~ \\mbox{mod} ~ 4 .\n\\label{trafo}\n\\end{eqnarray}\nTherefore, the phase diagram of the $CC_4$ model is invariant under a reflection \nin the line $\\Delta = \\frac{1}{2}$. In the following we will discuss the low\ntemperature phase diagram for the case $\\Delta > \\frac{1}{2}$, i.e.\\ we will analyse\nin detail the stability of the boundary line between the $\\left< 1 \\right>$- and\nthe $\\left< 2 \\right>$-phase, the phase diagram for $\\Delta < \\frac{1}{2}$ being\ninferred by the transformation (\\ref{trafo}).\\\\\nIn the ground state and in the low temperature expansion every phase $\\left< \\nu\n\\right>$ consists of a periodic arrangement of a sequence of $n(\\nu)$ layers called\n$\\nu$-sequences [$n(\\nu)$ is the period of the phase]. Suppose now that in a certain\norder of the series expansion two stable phases, $\\left< \\nu_1 \\right>$ and \n$\\left< \\nu_2 \\right>$, are separated by a boundary at which the phases produced by \n$\\nu_1$- and $\\nu_2$-sequences are degenerate (see figure \\ref{fig2}).\nIn first order the boundary under\nconsideration separates the phases $\\left< 1 \\right>$ and $\\left< 2 \\right>$.\nAt higher order a new phase $\\left< \\nu \\right> = \\left< \\nu_1 \\nu_2 \\right>$ \nconsisting of a structure\nwith alternating $\\nu_1$- and $\\nu_2$-sequences might be stable in the vicinity\nof the boundary. 
If \n\\begin{equation}\na_\\nu = f_{\\left< \\nu \\right> } - \\frac{n(\\nu_1)}{n(\\nu_1)+n(\\nu_2)} \nf_{\\left< \\nu_1 \\right> } - \\frac{n(\\nu_2)}{n(\\nu_1)+n(\\nu_2)}\nf_{\\left< \\nu_2 \\right> }\n\\label{anu}\n\\end{equation}\nis negative, the new phase has a lower free energy than the phases $\\left< \\nu_1 \\right>$ and \n$\\left< \\nu_2 \\right>$ \\cite{Fis80,Sen93} and it will be stabilized in the vicinity of the\n$\\left< \\nu_1 \\right> : \\left< \\nu_2 \\right>$ boundary (see figure 2a).\nThe stability of the boundaries between the phases $\\left< \\nu_1 \\right>$ and \n$\\left< \\nu_1 \\nu_2 \\right>$ and the phases $\\left< \\nu_1 \\nu_2 \\right>$ and\n$\\left< \\nu_2 \\right>$ must then be examined at higher orders.\nIf, on the other hand, $a_\\nu$ is positive,\nthe phase $\\left< \\nu_1 \\nu_2 \\right>$ (and therefore every phase consisting of\n$\\nu_1$- and $\\nu_2$-sequences) has a higher free energy than either $\\left< \\nu_1 \\right>$\nor $\\left< \\nu_2 \\right>$. The boundary is a true phase boundary which remains stable\nin all orders of the low temperature series expansion (see figure 2b).\\\\\nThe reader is referred to references \\cite{Fis80} and \\cite{Yeo81}\nfor details concerning\nthe construction of the series expansion to general order.\\\\\n\n\\subsection{Formulation in terms of transfer matrices and vectors\\label{sec3b}} \nThe sign of $a_\\nu$, and therefore the stability of the phase $\\left< \\nu \\right>$,\nis determined by the leading term in its expansion in terms of $w$.\nThis term is obtained by considering all flipping processes involving\na spin chain of $n(\\nu)-1$ spins in $n(\\nu)-1$ different layers\n\\cite{Yeo81}. Besides the linear configuration with all $n(\\nu)-1$ spins \nconnected, the various decompositions of this configuration into 2, 3, $\\ldots$, $n(\\nu)-1$\ndifferent parts must be taken into account.\nThe contributions from these processes can be written\nas a product of transfer matrices and vectors. 
\nThe matrices describe a bond between two flipping spins, the vectors \nan initial or a final bond preceding the first or following the last flipped spin\nrespectively.\\\\\nEvery spin can flip to three different values and hence\n$3 \\times 3$ matrices occur. As we are only interested in the\nsign of the $a_\\nu$ we can restrict ourselves to the two processes contributing\nin lowest order, thus excluding the process $m \\longrightarrow (m+2) ~ \\mbox{mod}\n~ 4$ only relevant for the correction term. Of course,\nif one considers all possible processes (i.e.\\ $3 \\times 3$ matrices),\nthe leading term is identical to the term obtained by the $2 \\times 2$ matrices.\nThis has already been noticed in the low temperature analyses of a six-state\nclock model with competing axial nearest and next-nearest neighbour couplings \\cite{Sen93},\nwhere the corresponding $2 \\times 2$ matrices have been considered instead\nof the general $5 \\times 5$ matrices.\\\\\nAs two axial next nearest neighbours\nare either coupled by a ferromagnetic or by a chiral bond, only two different matrices\nare to be constructed. For a ferromagnetic or a chiral bond between two spins\nin the layers $\\alpha$ and $\\alpha$+1 one obtains, respectively, the transfer matrices\n\\begin{equation}\n\\altmat{F}_{\\alpha, \\alpha+1} =\n\\left( \\begin{array}{ll} \n1-x ~ & ~ x(1-y) \\\\\nx(1-y^{-1}) ~ & ~ 1-x\n\\end{array} \\right) w^{q_\\perp}\n\\label{matrix_ferro}\n\\end{equation}\nand\n\\begin{equation}\n\\altmat{C}_{\\alpha, \\alpha+1} =\n\\left( \\begin{array}{ll}\n1-y ~ & ~ y(1-x^{-1}) \\\\\ny(1-x) ~ & ~ 1-y\n\\end{array} \\right) w^{q_\\perp}.\n\\label{matrix_chiral}\n\\end{equation}\nThe matrix elements are the Boltzmann factors\nfor a simultaneous change of the values of the two\nspins. 
The first (second) row corresponds to a\nchange $\\Delta n_{i,\\alpha} = +1 (-1)$ and\nthe first (second) column to $\\Delta n_{i,\\alpha+1} = +1 (-1)$.\nEvery element of the matrices \\altmat{F} and \\altmat{C} is a sum of two terms,\nthe first term resulting from changing the values of two axially coupled \nspins. As already mentioned, disconnected pairs of spins\n(i.e.\\ two spins that are not neighbours of each other but are neighbours\nof an unchanged spin) also contribute to the partition sum. Since every\ndisconnected pair must be associated with a minus sign \\cite{Fis80}, the\ncorresponding Boltzmann factors enter the different matrices\nwith a negative sign.\\\\\nThe factor $w^{q_\\perp}$ resulting from changing \nthe in-layer bonds in layer $\\alpha$+1\nis common to all elements of the matrices \\altmat{F} and \\altmat{C}.\nThis is a direct consequence of the fact that only flipping processes\n$m \\longrightarrow (m \\pm 1) ~ \\mbox{mod} ~ 4 $ are to be considered for\nobtaining the leading order in the expansion of $a_\\nu$.\nFor the full $3 \\times 3$ matrices this is not\nthe case as the flipping process $m \\longrightarrow (m+2) ~ \\mbox{mod} ~ 4 $ has the in-layer\nBoltzmann factor $w^{2 q_\\perp}$.\nIn reference \\cite{Yeo82} the phase diagram has been determined to general order using $3 \\times 3$\ntransfer matrices. Due to the erroneous treatment of the in-layer interactions \n(see section \\ref{sec2}) the \"common term\" $\\omega^{q_\\perp}$ has been\nfactorized, thus leading, again,\nto the treatment of terms belonging to different orders as being of the same order.\\\\\nA spin at the end of the spin chain is a neighbour of an unchanged spin. 
To determine\nthe contributions of these spins, four different cases are to be distinguished:\n(a) the considered spin is the first spin of the chain and its bond to the left (i.e.\\\nto an unchanged spin) is a ferromagnetic or a chiral bond (subscripts $f$ and $c$\nrespectively) or (b) it\nis the last spin of the chain and its bond to the right is \na ferromagnetic or a chiral bond. The Boltzmann factors for the\nflipping of these single spins are written as vectors:\n\\begin{eqnarray}\n\\altvec{a}_f & = & \\left(\n\\begin{array}{l}\ny^{- \\frac{1}{2}} \\\\\ny^{\\frac{1}{2}}\n\\end{array} \\right) \\, x^{\\frac{1}{2}} \\, w^{q_\\perp} \n\\label{af} \\\\\n\\altvec{a}_c & = & \\left(\n\\begin{array}{l}\nx^{\\frac{1}{2}} \\\\\nx^{- \\frac{1}{2}}\n\\end{array} \\right) \\, y^{\\frac{1}{2}} \\, w^{q_\\perp} \n\\label{ac} \\\\\n\\altvec{b}_f & = & \\left(\n\\begin{array}{l}\ny^{\\frac{1}{2}} \\\\\ny^{- \\frac{1}{2}}\n\\end{array} \\right) \\, x^{\\frac{1}{2}} \n\\label{bf} \\\\\n\\altvec{b}_c & = & \\left(\n\\begin{array}{l}\nx^{- \\frac{1}{2}} \\\\\nx^{\\frac{1}{2}}\n\\end{array} \\right) \\, y^{\\frac{1}{2}}\n\\label{bc}\n\\end{eqnarray}\nThe vectors (\\ref{bf}) and (\\ref{bc}) do not include the Boltzmann factor resulting from \nthe change of the in-layer bonds. This factor has already been included in the matrix\ndescribing the overturning of the two last spins in the spin chain.\n\n\\subsection{Derivation of the low temperature phase diagram}\nWith the matrices (\\ref{matrix_ferro}) and (\\ref{matrix_chiral}) and the vectors\n(\\ref{af})-(\\ref{bc}) it is now possible to compute the leading order term $b_\\nu$\nof the quantities $a_\\nu$ (and, thus, to determine the sign of $a_\\nu$) for all phases\ndegenerate at the multiphase point and containing only 1- and 2-layer bands.\nAll considered phases can be viewed as periodic arrangements\nof spin sequences with a 1-layer band as the first and a 2-layer band as the last band\nin the sequence \n\\cite{Yeo81}. 
The sequence $\\tilde{\\nu}$ obtained by stripping\nthe original sequence $\\nu$ by its last and first band is called core. All sequences\nbased on the same core $\\tilde{\\nu}$ enter in the computation of the $b_\\nu$: The\nsequences $ 1 \\tilde{\\nu} 2$ and $ 2 \\tilde{\\nu} 1$ contribute negatively, the\nsequences $ 1 \\tilde{\\nu} 1$ and $ 2 \\tilde{\\nu} 2$ contribute positively \\cite{Fis80}.\nThe expressions $b_\\nu$ for different families of phases \nare summarised in table 1.\n\n\\subsubsection{Stability of some series of phases}\nFor the series of phases $\\left< 1 2^k \\right>$ the expression\n(see table 1)\n\\begin{equation}\nb_{12^k} = - \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\left( \\altmat{C} \\, \n\\altmat{F} \\right)^{k-1} \\altmat{C} \\left( \\altvec{b}_f - \\altvec{b}_c \\right)\n\\label{b12k}\n\\end{equation}\ngives the leading contribution to $a_{12^k}$. \nThe four different sequences based on the core $\\tilde{\\nu}= 2^{k-1}$ yield\nthe four different contributions to $b_{12^k}$.\\\\\nThe eigenvalues $\\exp \\left( - \\Gamma_\\pm \\right)$ of the \nmatrix $\\altmat{C} \\, \\altmat{F}$ are \nreal and positive. Expression (\\ref{b12k}) can be written in the form\n\\begin{displaymath}\nb_{12^k} = A_+ \\, \\exp \\left( - \\frac{k}{2} \\Gamma_+ \\right) + A_- \\, \\exp \\left( - \\frac{k}{2}\n\\Gamma_- \\right)\n\\end{displaymath}\nwith $\\Gamma_+ < \\Gamma_-$. 
A close\nexamination reveals that for finite temperatures $A_+ < 0$, $A_- > 0$ and $A_+ + A_- < 0$.\nThus, $b_{12^k}$ is negative for all $k$, i.e.\\ all phases of the form\n$\\left< 1 2^k \\right>$ spring from the multiphase point and have a finite stability \nrange at temperatures above zero.\\\\\nThe leading order contribution for the phases $\\left< 1^k 2 \\right>$ is\n\\begin{equation}\nb_{1^k2} = - \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\altmat{C}^k \n\\left( \\altvec{b}_f - \\altvec{b}_c \\right).\n\\label{b1k2}\n\\end{equation}\nThe eigenvalues of the matrix $\\altmat{C}$ are complex conjugates of each other. They are\nwritten in the form\n\\begin{equation}\n\\xi_1 \\pm \\i \\xi_2 = \\exp \\left( - \\Gamma_0 \\pm \\i \\, \\Omega \\right)\n\\label{eigen}\n\\end{equation}\nwith $\\Gamma_0 = - \\frac{1}{2} \\ln \\left( \\xi_1^2 + \\xi_2^2 \\right) > 0$, \n$\\Omega = \\arctan \\left( \\frac{\\xi_2}{\\xi_1} \\right)$ and $\\xi_1 = 1 -x^{1+\\delta}$, $\\xi_2 =\n\\left( 1 - x \\right) x^{\\frac{1}{2} + \\delta}$. We then obtain the expression\n\\begin{displaymath}\n\\fl\nb_{1^k2} = - (A_1 + \\i A_2) (\\xi_1 + \\i \\xi_2 )^k - (A_1 - \\i A_2) (\\xi_1 - \\i \\xi_2 )^k\n= - 2 \\left| \\Delta \\right| \\exp \\left( - k \\, \\Gamma_0 \\right) \\cos \\left(\nk \\, \\Omega + \\phi \\right)\n\\end{displaymath}\nwith $\\left| \\Delta \\right| \\exp \\i \\phi = A_1 + \\i A_2$.\nThe temperature-dependent quantities $\\left| \\Delta \\right|$, $\\phi$, \n$\\Gamma_0$, and $\\Omega$ do not depend on $k$.\\\\ \n$b_{1^k2}$ is negative for small\nvalues of $k$. If $k$ exceeds the value $k_{max} = \\frac{1}{\\Omega} \\left( \\frac{\\pi}{2} -\n\\phi \\right)$, then $b_{1^k2}$ becomes positive and, thus, all phases with\n$k > k_{max}$ are unstable at the considered point of the phase diagram. \nSince $k_{max} \\longrightarrow \\infty$ for $T \\longrightarrow 0$, there is, \nfor every $k$, a temperature\nbelow which the phase $\\left< 1^k 2 \\right>$ is stable. 
Thus, all phases \n$\\left< 1^k 2 \\right>$ spring from the multiphase point, but the higher\ncommensurate phases disappear at higher temperatures. Such a cut-off of the\nhigh commensurate phases at finite temperatures is also observed in the ANNNI \nmodel \\cite{Fis87}.\\\\\nFollowing the general line we\nalso examined the series of phases $\\left< 1 2^k 1 2^{k+1} \\right>$ and\n$\\left< 1^k 2 1^{k-1} 2 \\right>$.\nFor the case $\\left< 1 2^k 1 2^{k+1} \\right>$ we find that all these phases are\nstable at finite temperatures in the vicinity of the multiphase point with no\ncut-off for the phases with a large value of $k$, i.e.\\ the results for the series\n$\\left< 1 2^k 1 2^{k+1} \\right>$ resemble the results for the series $\\left< 1 2^k \\right>$.\nAnalysing the leading contribution for the phases $\\left< 1^k 2 1^{k-1} 2 \\right>$\nwe find a behaviour similar to the behaviour of the phases $\\left< 1^k 2 \\right>$,\ni.e.\\ all phases with $k < k_{max}$ (the value of $k_{max}$ being series-dependent)\nare stable and $k_{max} \\longrightarrow \\infty$ as $T \\longrightarrow 0$.\n\n\\subsubsection{Phases containing general sequences of 1- and 2-layer bands}\nIn the following we will show\nthat all phases consisting only of 1- and 2-layer bands and obeying the rules\nof the structure combination\nspring from the multiphase point, the higher commensurate phases of some series\nbecoming unstable at higher temperatures.\nThe leading contribution to $a_\\nu$ for all these phases is\nof the form (see table 1)\n\\begin{equation}\nb_\\nu = - \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\altmat{D} \\, \n\\altmat{C} \\left( \\altvec{b}_f - \\altvec{b}_c \\right)\n\\label{bnu}\n\\end{equation}\nwhere $\\altmat{D}$ is a product of powers of matrices $\\altmat{C}$ \nand $\\left( \\altmat{C} \\, \\altmat{F} \\right)$. 
The contributions of the\nfirst and last band are given by $\\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right)$\nand $\\altmat{C}\n\\left( \\altvec{b}_f - \\altvec{b}_c \\right)$ respectively.\nA 1-layer band in the core contributes a\nmatrix $\\altmat{C}$, whereas a 2-layer band yields the matrix product\n$\\left( \\altmat{C} \\, \\altmat{F} \\right)$. The product over all bands\nin the core yields the matrix $\\altmat{D}$ [see equation (\\ref{bnu})].\\\\\nThe diagonal elements of the matrix\n\\begin{equation}\n\\fl \n\\altmat{C} \\, \\altmat{F} = \\left( \\begin{array}{ll}\n2 \\left( 1 -y \\right) \\left( 1 - x \\right) & x \\left( 1 - y \\right)^2 - x^{-1} y\n\\left( 1 - x \\right)^2 \\\\\ny \\left( 1 - x \\right)^2 - x y^{-1} \\left( 1 - y \\right)^2 & \\left( 1 \n+ x y \\right) \\left( 1 -y \\right) \\left( 1 - x \\right)\n\\end{array} \\right) \\, w^{2 q_\\perp}\n\\nonumber\n\\end{equation}\nare positive whereas the non-diagonal elements are negative, since\n$y = x^{1 + \\delta}$ with $x \\ll 1$. 
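The sign statements above can be illustrated numerically. In the sketch below the parameter values are illustrative assumptions, and the common positive factor $w^{q_\perp}$ per flipped spin is set to 1; this rescales every $b_\nu$ by a positive constant and therefore leaves all signs unchanged while avoiding numerical underflow. The matrices (\ref{matrix_ferro}) and (\ref{matrix_chiral}) and the vectors (\ref{af})-(\ref{bc}) are built explicitly, and the leading terms $b_{12^k}$ and $b_{1^k2}$ are evaluated:

```python
import numpy as np

# Illustrative parameters (assumed): Delta > 1/2 and low temperature, so x << 1.
Delta, K = 0.7, 4.0
delta = np.tan(np.pi / 2 * Delta) - 1.0
x = np.exp(-2.0 * K * np.cos(np.pi / 2 * Delta))
y = x ** (1.0 + delta)  # = exp(-2 K sin(pi Delta / 2))

# Transfer matrices (matrix_ferro) and (matrix_chiral); the common positive
# factor w^{q_perp} is dropped -- it cannot change the sign of b_nu.
F = np.array([[1 - x, x * (1 - y)], [x * (1 - 1 / y), 1 - x]])
C = np.array([[1 - y, y * (1 - 1 / x)], [y * (1 - x), 1 - y]])

# Boundary vectors (af)-(bc), again without the w^{q_perp} factor.
a_f = np.array([y**-0.5, y**0.5]) * x**0.5
a_c = np.array([x**0.5, x**-0.5]) * y**0.5
b_f = np.array([y**0.5, y**-0.5]) * x**0.5
b_c = np.array([x**-0.5, x**0.5]) * y**0.5

def b_12k(k):
    """Leading term (b12k) for the phase <1 2^k>."""
    return -(a_c - a_f) @ np.linalg.matrix_power(C @ F, k - 1) @ C @ (b_f - b_c)

def b_1k2(k):
    """Leading term (b1k2) for the phase <1^k 2>."""
    return -(a_c - a_f) @ np.linalg.matrix_power(C, k) @ (b_f - b_c)

CF = C @ F
# Sign structure of C F: positive diagonal, negative non-diagonal elements.
assert CF[0, 0] > 0 and CF[1, 1] > 0 and CF[0, 1] < 0 and CF[1, 0] < 0
# Its eigenvalues are real and positive.
ev = np.linalg.eigvals(CF)
assert np.all(np.isreal(ev)) and np.all(ev.real > 0)
# b_{12^k} < 0 for all k: the phases <1 2^k> spring from the multiphase point.
assert all(b_12k(k) < 0 for k in range(1, 9))
# b_{1^k 2} < 0 for small k; at this low temperature the cut-off k_max lies
# well beyond the values of k probed here.
assert all(b_1k2(k) < 0 for k in range(1, 9))
```

At higher temperatures (larger $x$) the same script shows the sign change of $b_{1^k2}$ at the series-dependent cut-off discussed above.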
We now follow reference\n\\cite{Sen93} and introduce the unitary matrix\n\\begin{displaymath} \n\\altmat{U} = \\left( \\begin{array}{ll}\n-1 & ~0 \\\\\n~0 & ~1 \n\\end{array} \\right) = \\altmat{U}^{-1}.\n\\end{displaymath} \nAll elements of $\\altmat{U} \\, \\left( \\altmat{C} \\, \\altmat{F} \\right) \\, \\altmat{U}$,\nand therefore of\n$\\altmat{U} \\, \\left( \\altmat{C} \\, \\altmat{F} \\right)^k \\, \\altmat{U}$, are positive.\nThis is also the case for\nthe two vectors $\\left( \\altvec{a}^T_c - \\altvec{a}^T_f \\right) \\, \\altmat{U}$\nand $\\altmat{U} \\, \\altmat{C} \\, \\left( \\altvec{b}_f - \\altvec{b}_c \\right)$,\nthus [see equation (\\ref{b12k}) and table 1]\n\\begin{displaymath}\n\\fl \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\left( \\altmat{C} \\, \n\\altmat{F} \\right)^{k-1} \\altmat{C} \\left( \\altvec{b}_f - \\altvec{b}_c \\right)\n= \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\, \\altmat{U} \\, \\altmat{U} \\,\n\\left( \\altmat{C} \\, \\altmat{F} \\right)^{k-1} \\, \\altmat{U} \\, \\altmat{U} \\,\n\\altmat{C} \\left( \\altvec{b}_f - \\altvec{b}_c \\right)\n\\end{displaymath}\nis positive, i.e.\\ $b_{12^k} < 0$, in agreement\nwith the aforementioned calculations.\\\\\nPhases of the series $\\left< 1 2^k 1 2^{k+1} \\right>$ contain a single\n1-layer-band in the core yielding the matrix product $\\left( \\altmat{C} \\, \\altmat{F} \n\\right) \\altmat{C} \\left( \\altmat{C} \\, \\altmat{F} \\right)$ with positive diagonal\nand negative non-diagonal elements for small $x$. 
Hence, the product (see table 1)\n\\begin{eqnarray}\n\\fl \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\left( \\altmat{C} \\,\n\\altmat{F} \\right)^k \\altmat{C} \\left( \\altmat{C} \\,\n\\altmat{F} \\right)^k \\altmat{C} \\left( \\altvec{b}_f - \\altvec{b}_c \\right)\n\\nonumber \\\\\n\\lo= \\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\, \\altmat{U} \\, \\altmat{U} \\,\n\\left( \\altmat{C} \\, \\altmat{F} \\right)^{k-1} \\, \\altmat{U} \\, \\altmat{U} \\,\n\\left( \\altmat{C} \\, \\altmat{F} \\right) \n\\altmat{C} \\left( \\altmat{C} \\, \\altmat{F} \\right) \\, \\altmat{U} \\nonumber \\\\\n\\times \\altmat{U} \\,\n\\left( \\altmat{C} \\, \\altmat{F} \\right)^{k-1} \\altmat{U} \\, \\altmat{U} \\,\n\\altmat{C} \\left( \\altvec{b}_f - \\altvec{b}_c \\right)\n\\nonumber\n\\end{eqnarray}\nis positive, showing the stability of the phases $\\left< 1 2^k 1 2^{k+1} \\right>$.\nFollowing this line of thought one easily shows that all phases appearing between \nthe phases $\\left< 2 \\right>$ and $\\left< 12 \\right>$ (i.e. phases with only isolated\n1-layer-bands in the core) are stable in the vicinity of the multiphase point.\nIndeed, as no new matrix products show up in the computation of \nthe different $b_\\nu$, all these\nexpressions can be written, using the matrix $\\altmat{U}$,\nas a product of vectors and matrices having only positive elements.\\\\\nFor phases containing consecutive 1-layer-bands in the core\nthe following additional vectors and matrices may contribute \nto the $b_\\nu$ as can be seen\nfrom table 1: $\\altmat{U} \\, \\altmat{C}^k \\, \\left( \\altvec{b}_f - \\altvec{b}_c \\right)$,\n$\\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\, \\altmat{C}^k \\,\n\\altmat{U}$, and $\\altmat{U} \\, \\left( \\altmat{C} \\, \\altmat{F} \\right) \\,\n\\altmat{C}^k \\, \\altmat{U}$ with $k \\geq 2$. 
Introducing\nthe eigenvalues of the matrix $\\altmat{C}$ [see equation (\\ref{eigen})], we obtain\n\\begin{eqnarray}\n\\mbox{v}_1 & = & \\exp \\left( -k \\Gamma_0 \\right) \\left[ x^\\frac{\\delta}{2} \\left( 1 - x \\right)\n\\cos k \\Omega + x^{-\\frac{1+\\delta}{2}} \\left( 1 - x^{1+\\delta} \\right)\n\\sin k \\Omega \\right] \\, w^{k q_\\perp} \\nonumber \\\\\n\\mbox{v}_2 & = & \\exp \\left( -k \\Gamma_0 \\right) \\left[ x^{-\\frac{\\delta}{2}} \n\\left( 1 - x^{1+\\delta} \\right) \\cos k \\Omega - x^{1+\\delta} \\left( 1 - x \\right)\n\\sin k \\Omega \\right] \\, w^{k q_\\perp} \\nonumber\n\\end{eqnarray}\nfor the components of the vector $\\altvec{v}= \\altmat{U} \\, \\altmat{C}^k \\,\n\\left( \\altvec{b}_f - \\altvec{b}_c \\right)$.\nWhereas $\\mbox{v}_1$ is always positive, $\\mbox{v}_2 > 0$ only if the\ninequality \n\\begin{displaymath}\n\\tan k \\Omega < \\frac{x^{-\\frac{\\delta}{2}} \\, \\left( 1 - x^{1+ \\delta } \\right)}\n{x^{1+ \\delta } \\, \\left( 1 - x \\right)}\n\\end{displaymath}\nholds. This is the case for temperatures smaller than an upper limit which\ndepends on $\\delta$ and $k$.\nIn a similar way one shows that for temperatures smaller than some $k$-dependent\ntemperature all the components of the vector \n$\\left( \\altvec{a}_c^T - \\altvec{a}_f^T \\right) \\, \\altmat{C}^k \\,\n\\altmat{U}$ and of the matrix $\\altmat{U} \\, \\left( \\altmat{C} \\, \\altmat{F} \\right) \\,\n\\altmat{C}^k \\, \\altmat{U}$ are positive.\nThe free energy differences $a_\\nu$ for all phases \ncontaining consecutive 1-layer-bands in the core are therefore negative below\na certain temperature, \ni.e.\\ these phases possess a stability region below this\ntemperature.\n\n\\subsubsection{Conclusion}\nThe results obtained so far can be summarised as follows: All\nphases consisting only of 1- and 2-layer-bands, that can be formed\nby means of the aforementioned structure combination rules, spring from\nthe multiphase point, where they are degenerate. 
The higher commensurate\nphases of some series, i.e.\\ those phases formed in higher orders of the combination\nprocess, disappear again at temperatures individually depending\non the series under consideration.\\\\ \nFrom these results the complete low temperature phase diagram\nof the $CC_4$ model is deduced by applying the transformation (\\ref{trafo}).\nAt non-zero temperatures all phases appearing between\nthe phases $\\left< 12 \\right>$ and $\\left< 3 \\right>$ are stable since the \ntransformation (\\ref{trafo}) transforms phase $\\left< 12 \\right>$ into\n$\\left< 3 \\right>$ and leaves the phase $\\left< 2 \\right>$ invariant. \nSome of the long commensurate phases\nappearing between the phases $\\left< 1 \\right>$ and $\\left< 12 \\right>$\nfor $\\Delta > \\frac{1}{2}$ \nand between the phases $\\left< \\infty \\right>$ and $\\left< 3 \\right>$\nfor $\\Delta < \\frac{1}{2}$ are unstable at a given temperature.\nUpon reducing the temperature,\nmore and more of these phases become stable, and in the limit $T \\longrightarrow 0$ all phases\nobeying the rules of the structure combination\nare stable. Therefore, the $CC_4$ model\nexhibits a complete devil's staircase in the low-temperature limit.\n\n\\subsection{Comparison with other work}\nThe low-temperature behaviour of the general $p$-state chiral clock model was\nanalysed in reference \\cite{Yeo82} using a series expansion technique similar \nto the one presented here. Due to the incorrect expansion (see section 2)\nonly some specific families of phases were shown to \npossess a finite stability region at small temperatures. 
In particular, it was\nclaimed that the phases $\\left< 1^k 2 \\right>$ with $k > 2$ are not stable\nat low temperatures, implying, due to the transformation (\\ref{trafo}), that\nfor $\\Delta < \\frac{1}{2}$ a direct transition from \nthe ferromagnetic phase to the $\\left< 4 \\right>$-phase exists.\nIn order to corroborate these calculations a low temperature mean-field analysis \nof the $CC_p$ model was presented in \\cite{Yeo84} where it was claimed that\nin the vicinity of the multiphase point the mean-field approximation\nyields the same stable phases as reference \\cite{Yeo82}. In that work the model in mean-field\napproximation was mapped onto a one-dimensional array of interacting domain walls. This\nmapping was derived under the approximation that \nthe mean-field average spin\n$( \\langle \\cos \\frac{\\pi}{2} n_{i,\\alpha} \\rangle_{MF},\n\\langle \\sin \\frac{\\pi}{2} n_{i,\\alpha} \\rangle_{MF} )$ \nin each layer (layer spin)\ndeviates from\nthe $T = 0$ value only in amplitude, not in phase.\nIn a detailed analysis of the mean-field phase diagram of the $CC_3$ model \nSiegert and Everts \\cite{Sie85} showed that\nthis approximation leads to a wrong phase diagram at low temperatures, thus concluding\nthat the layer spin must also be allowed to deviate in phase from its ground-state\nvalue. This should be the case not only for the three-state model but also for the\ngeneral $p$-state model. The results of reference \\cite{Yeo84} for the \nmean-field low temperature\nbehaviour of the $CC_p$ model must therefore be considered with care.\\\\\nAs we have shown in the preceding sections, the results of the series\nexpansion in reference \\cite{Yeo82} are erroneous due\nto wrong Boltzmann factors for the in-layer bonds. 
In fact, the four-state\nmodel exhibits a complete devil's staircase in the low-temperature limit.\nFurthermore, our calculations show that no direct\ntransition from the ferromagnetic to the $\\left< 4 \\right>$-phase exists,\nas phases with longer periods are stable between these two phases.\\\\\nIn the Monte Carlo simulation of the $CC_4$\nmodel \\cite{Sch96} long-period spin patterns were observed\nwhen going from the ferromagnetic phase to the modulated phases\nat rather high temperatures.\nIn view of the present work one must interpret\nthese patterns as reflecting the existence of phases\nspringing from the multiphase point and intercalating\nbetween the ferromagnetic\nand the $\\left< 4 \\right>$-phases.\n\n\\section{The critical behaviour\\label{sec4}}\nThe critical behaviour of the general $p$-state chiral clock \nmodel at the transition to the paraphase is an interesting topic since \nfor $p=2$ the chiral clock model reduces to the anisotropic Ising model, while for\n$p = \\infty$ it corresponds to the classical 3d-$XY$ model. 
Siegert and Everts \\cite{Sie89}\nshowed that the $CC_3$ model belongs to the universality class of the 3d-$XY$ model.\nOn the basis of his contradictory MFTM results, McCullough \\cite{Mcc92} speculated about\na change in the universality class from 3d-Ising behaviour to 3d-$XY$ behaviour for\n$p$ close to 5.\nIn the following we will show that for $p=4$ an\neffective Ginzburg-Landau-Wilson Hamiltonian can be derived which can be\ntransformed to the effective Hamiltonian of the 3d $XY-$model.\\\\\nFor the case $p=4$\nthe Hamiltonian (\\ref{hamiltonian}) can be rewritten\nin the form\n\\begin{displaymath}\nH = -J \\sum\\limits_{i} \\sum\\limits_{\\alpha} \\altvec{S}_{i,\\alpha} \\, \\altmat{R} \\left( \n\\Delta \\right) \\,\n\\altvec{S}_{i,\\alpha+1} - J_0 \\sum\\limits_{\\alpha} \\sum\\limits_{\\left< ij \\right>} \n\\altvec{S}_{i,\\alpha} \\, \\altvec{S}_{j,\\alpha}\n\\end{displaymath}\nwhere we introduced the spin vector $\\altvec{S}_{i,\\alpha} = \\left( \n\\cos \\frac{\\pi}{2} n_{i,\\alpha},\n\\sin \\frac{\\pi}{2} n_{i,\\alpha} \\right)$ and the rotation matrix\n\\begin{displaymath}\n\\altmat{R} \\left( \\Delta \\right) = \\left( \\begin{array}{ll}\n~ \\cos \\frac{\\pi}{2} \\Delta & \\sin \\frac{\\pi}{2} \\Delta \\\\\n- \\sin \\frac{\\pi}{2} \\Delta & \\cos \\frac{\\pi}{2} \\Delta \n\\end{array}\n\\right).\n\\end{displaymath}\nRotating all spins in layer $\\alpha$ by the angle $\\frac{\\pi}{2} \\alpha \\Delta$, i.e.\\\nintroducing new vectors $\\altvec{\\sigma}_{i,\\alpha}= \\altmat{R} \\left( \\alpha \\Delta \\right) \\,\n\\altvec{S}_{i,\\alpha}$, leads to the expression\n\\begin{equation}\nZ=\\sum\\limits_{\\left\\{ \\altvec{\\sigma} \\right\\} } \\exp \\left[- \\frac{1}{2} \\sum\\limits_{ij}\n\\sum\\limits_{\\alpha \\, \\beta} \\sum\\limits_{\\kappa=1}^{2} \\sigma_{i,\\alpha}^\\kappa \\,\nK_{i \\, \\alpha, j \\, \\beta} \\, \\sigma_{j, \\beta}^\\kappa \\right]\n\\label{partition}\n\\end{equation} \nfor the partition function, $\\kappa$ labelling the two 
spin components. The elements \n$K_{i \\, \\alpha, j \\, \\beta}$ of the coupling matrix are zero unless the lattice sites\n$\\left( i,\\alpha \\right)$ and $\\left( j , \\beta \\right)$ are nearest neighbours.\\\\\nExpression (\\ref{partition}) may be transformed \\cite{Bak62,Hub72} to\n\\begin{displaymath}\n\\fl\nZ = C \\sum\\limits_{\\left\\{ \\altvec{\\sigma} \\right\\}} \\left( \\prod_{\\rho=1}^2 \\prod_{k \\, \\gamma} \n\\int dh_{k, \\gamma}^\\rho \\right) \\exp \\left[ - \\frac{1}{2} \\sum\\limits_{ij}\n\\sum\\limits_{\\alpha \\, \\beta} \\sum\\limits_{\\kappa} h_{i,\\alpha}^\\kappa \\,\nL_{i \\, \\alpha, j \\, \\beta}^{-1} \\, h_{j, \\beta}^\\kappa + \n\\sum\\limits_{i} \\sum\\limits_{\\alpha} \\sum\\limits_{\\kappa} h_{i,\\alpha}^\\kappa \\,\n\\sigma_{i,\\alpha}^\\kappa \\right].\n\\end{displaymath}\nHere $C$ is a numerical constant, $N$ is the total number of lattice sites and $\\altmat{I}$ is \nthe $N \\times N$ identity matrix. The matrix $\\altmat{L}$ is given by\n$\\altmat{L}= \\mu \\, \\altmat{I}- \\altmat{K}$ where the positive number $\\mu$ is\nchosen large enough to ensure that all the eigenvalues of $\\altmat{L}$ are positive.\\\\\nThe sum over all states can be easily computed:\n\\begin{eqnarray} \n\\fl \n\\sum\\limits_{\\left\\{ \\altvec{\\sigma} \\right\\}} \\exp \\left( \n\\sum\\limits_{i} \\sum\\limits_{\\alpha} \\sum\\limits_{\\kappa} h_{i,\\alpha}^\\kappa \\,\n\\sigma_{i,\\alpha}^\\kappa \\right) \\nonumber \\\\\n\\lo= 2^N \\prod_{i \\, \\alpha} \\left[ \\cosh \\left(\nh^1_{i, \\alpha} \\, c_\\alpha - h^2_{i, \\alpha} \\, s_\\alpha \\right) + \\cosh \\left(\nh^1_{i, \\alpha} \\, s_\\alpha + h^2_{i, \\alpha} \\, c_\\alpha \\right) \\right],\n\\label{partcosh}\n\\end{eqnarray}\nwith $c_\\alpha = \\cos \\left( \\frac{\\pi}{2}\n\\alpha \\Delta \\right)$ and $s_\\alpha = \\sin \\left( \\frac{\\pi}{2}\n\\alpha \\Delta \\right)$. 
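The per-site factor on the right-hand side of (\ref{partcosh}) can be verified independently: summing $\exp(h\cdot\sigma)$ over the four spin states, with the spin vector rotated through an angle $\varphi$, reproduces $2\left[\cosh(h^1 c - h^2 s)+\cosh(h^1 s + h^2 c)\right]$. A minimal numerical check (ours, in Python, with a generic angle $\varphi$ standing in for $\frac{\pi}{2}\alpha\Delta$):

```python
import numpy as np

def spin_state_sum(h1, h2, phi):
    """Sum exp(h . sigma) over the four rotated spin states
    sigma_n = R(phi) S_n, with S_n = (cos(pi n/2), sin(pi n/2))."""
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, s], [-s, c]])  # rotation matrix R as defined in the text
    total = 0.0
    for n in range(4):
        S = np.array([np.cos(np.pi * n / 2), np.sin(np.pi * n / 2)])
        sigma = R @ S
        total += np.exp(h1 * sigma[0] + h2 * sigma[1])
    return total

def cosh_form(h1, h2, phi):
    """Closed form appearing in the per-site partition-function factor."""
    c, s = np.cos(phi), np.sin(phi)
    return 2.0 * (np.cosh(h1 * c - h2 * s) + np.cosh(h1 * s + h2 * c))

# The two expressions agree for arbitrary fields h and angle phi.
rng = np.random.default_rng(0)
for h1, h2, phi in rng.normal(size=(100, 3)):
    assert abs(spin_state_sum(h1, h2, phi) - cosh_form(h1, h2, phi)) < 1e-9
```

Taking the product over all sites then gives the $2^N \prod_{i\,\alpha}\left[\cdots\right]$ structure of (\ref{partcosh}).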
Using the expansion \n\\begin{displaymath}\n\\ln \\cosh x = - \\sum\\limits_n \\left( - 1 \\right)^n\n\\frac{2^{2n-1} \\left( 2^{2n} -1 \\right) B_n}{n \\left( 2 n \\right) !} x^{2n}\n\\end{displaymath}\n($B_n$: Bernoulli number) the expression on the righthand side of (\\ref{partcosh})\ncan be written as the exponential of a sum of powers of $h_{i,\\alpha}^1$ and\n$h_{i,\\alpha}^2$:\n\\begin{equation}\n\\fl\nC_1 \\, \\exp \\left[ \\sum\\limits_n \\sum\\limits_{k=0}^{2n} \\sum\\limits_{l=0}^k \\sum\\limits_{l'=0}^{2n-k}\nc(n,k,l,l') \\sum\\limits_i \\sum\\limits_\\alpha c_\\alpha^{l+l'} \\, s_\\alpha^{2n-l-l'}\n\\, \\left( h^1_{i, \\alpha} \\right)^k \\, \\left( h^2_{i, \\alpha} \\right)^{2n-k}\n\\right]\n\\label{h1h2exp}\n\\end{equation}\nwhere $c(n,k,l,l')$ is a number depending on $n$, $k$, $l$ and $l'$.\\\\\nIntroducing new variables $\\phi^\\kappa_{i,\\alpha} = \nL_{i \\, \\alpha, j \\, \\beta}^{-1} \\, h_{j, \\beta}^\\kappa$, taking the continuum limit\nand turning to the wavenumber representation leads to the \npartition function\n\\begin{displaymath}\nZ \\propto \\left( \\prod\\limits_{\\altvec{q}} \\int d\\altvec{\\tau}_{\\altvec{q}} \\right) \\exp \\left[ - \\bar{H} \\right]\n\\end{displaymath}\nwith the effective Hamiltonian\n\\begin{eqnarray}\n\\fl \\bar{H} = - \\frac{1}{2} \\int_{BZ} \\frac{d^3q}{( 2 \\pi )^3} \\left(\nr + q^2_{\\perp} + \\frac{\\Upsilon}{\\Upsilon_0} q^2_{\\parallel} \\right) \\left[\n\\underline{\\tau} ( \\underline{q} ) ~ \\underline{\\tau} ( - \\underline{q} )\n\\right] \\nonumber \\\\[2mm]\n- u \\int_{BZ} \\frac{d^3q \\, d^3q' \\, d^3q''}{( 2 \\pi )^9}\n\\left[ \\underline{\\tau} ( \\underline{q} ) ~ \\underline{\\tau} (\n\\underline{q}' ) \\right] \\, \\left[ \\underline{\\tau} ( \\underline{q}'' )\n~ \\underline{\\tau} (- \\underline{q} - \\underline{q}' - \\underline{q}'' )\n\\right]\n\\label{effham}\n\\end{eqnarray}\nwith $\\altvec{\\tau}_{\\altvec{q}}=\\left( \\tau_{\\altvec{q}}^1 , \\tau_{\\altvec{q}}^2\n\\right)$ \nand 
$\\phi^{\\kappa} \\left( \\altvec{r} \\right) = \\int_{BZ} \\frac{d^3q}{( 2 \\pi )^3} \\,\n\\exp ( \\i \\altvec{q} \\cdot \\altvec{r} ) \\, \\tau_{\\altvec{q}}^\\kappa$.\nThe integration is over the first Brillouin zone with $\\altvec{q} = \\left( \n\\altvec{q}_{\\perp}, \\altvec{q}_{\\parallel} \\right)$, \nits components $\\altvec{q}_{\\perp}$ and $\\altvec{q}_{\\parallel}$ being perpendicular\nand parallel to the direction of the modulation respectively.\n$r = \\frac{1}{\\Upsilon_0} \\left( 1 - 2 \\Upsilon_0 - \\Upsilon \\right)$ with\n$\\Upsilon = \\frac{J}{k_BT}$ and $\\Upsilon_0 = \\frac{J_0}{k_BT}$ \nvaries linearly with temperature.\\\\\nIn deriving equation (\\ref{effham}) we neglected\nfourth and higher harmonics, i.e.\\ fast oscillating terms containing\n$\\exp \\left( \\mbox{i} n \\frac{\\pi}{2} \\alpha \\Delta \\right)$ with \n$n \\geq 4$. Furthermore\nwe did not include terms of higher than fourth order in $\\tau$. If we rescale\n$q_\\parallel$ in the effective Hamiltonian \\cite{Nel75} we end up with the\neffective Ginzburg-Landau-Wilson Hamiltonian of the 3d-$XY$ model. \n\n\\section{Conclusions\\label{sec5}}\nA low temperature series expansion technique is suitable to obtain exact\nresults on the low \ntemperature behaviour of the four-state chiral clock model. \nAll phases that are degenerate at the multiphase point ($T=0$, $\\Delta=\\frac{1}{2}$)\nand obey the structure combination rules spring from\nthe multiphase point. 

Some of these phases disappear at higher temperatures.\nIn the low-temperature limit the $CC_4$ model exhibits\na complete devil's staircase.\nDifferences in the low temperature phase diagrams derived in the present and in\na previous publication can be traced back to an inconsistency in the series\nexpansion of the latter.\nLong-period spin patterns derived in the present paper as \nstable phases between the ferromagnetic and\nthe $\\left< 4 \\right>$-phase and not occurring in the analyses presented \nin \\cite{Yeo82}, were recently\nseen in Monte Carlo\nsimulations just above the boundary of the ferromagnetic phase.\\\\\nFurthermore,\nthe critical behaviour at the boundary between the paraphase and \nthe modulated structures follows from the derivation of\nan effective Ginzburg-Landau-Wilson\nHamiltonian. It is shown that the latter can be \ntransformed to the effective Hamiltonian of the\n3d-$XY$ model. The four-state model thus belongs to the universality class\nof the $XY$ model.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLast year, an interesting enhancement was reported at 750 GeV by the ATLAS \\cite{ATLAS} and CMS \\cite{CMS} groups at LHC.\\footnote{At ICHEP 2016 (August 3-10), this enhancement was reported to disappear after adding the new data collected by ATLAS and CMS in 2016. Even if this paper is triggered by the enhancement, we study the monopolium in general, so that the contents are not affected by whether it may exist or not.}\nThe enhancement $X$ was observed in the $\\gamma \\gamma$ channel and was not observed in the other di-boson channels such as ${ZZ}$, ${W^+} $ and ${W^-}$: \n\\begin{eqnarray}\n\\sigma(p+p \\to X \\to \\gamma \\gamma)=10 \\pm 3~\\mbox{fb}~\\mbox{(ATLAS)} ~\\mbox{and}~ 6 \\pm 3 ~\\mbox{fb}~\\mbox{(CMS)}, \n\\end{eqnarray}\nat 13 TeV (Run2) experiment, but no such enhancement was reported at 8~TeV (Run1). 
Upon combining the ATLAS and CMS data and including 8~TeV data the following cross-section estimate is obtained \\cite{Buttazzo:2015txu}:\n\\begin{equation}\n\\sigma (pp\\to X \\to \\gamma\\gamma) \\approx (4.6\\pm1.2)~ {\\rm fb}.\n\\label{combined}\n\\end{equation}\nThe total width $\\Gamma_\\mathrm{tot}$ has not been well determined, but appears to be large $ \\sim45 $ GeV. Various models have been proposed for $X$ \\cite{750 GeV models}, but if $X$ couples exclusively to $\\gamma \\gamma$ with a large coupling constant, then it may be possible to consider the enhancement as a monopolium, or a monopole-antimonopole dumbbell as described by Nambu \\cite{Nambu}. A recent paper by M. Yamada {\\it et al.} \\cite{Yamada et al.} considers a similar idea, but they identify the enhancement as a magnetic Higgs particle where it is dual to the electric Higgs particle in a hidden $\\mbox{U(1)}_H$ gauge theory. Ours is more directly a bound state system of a monopole and an antimonopole. \n\nThe detection of monopolium has already been studied by L. I. Epele {\\it et al.}, aiming for observation at Tevatron and LHC \\cite{Epele1, Epele2}. The main difference between their paper and the following analysis is that we adopt a strong coupling expansion in lattice gauge theory and accordingly a linear term is included in the monopole-antimonopole potential. Therefore, the large binding energy case can be studied. \n \nThis work is motivated by papers ~\\cite{Neil1, Neil2} written by one of the authors (N. D. B.) on explaining the diphoton excess via photon fusion production of $ X $, in particular where $ X $ is identified as a leptonium of highly charged leptons having a charge $Q=(5-7)~e$. Higher charge can be naturally understood in the context of monopoles due to the large magnetic charge, which is opposite to the small electric charge. 
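To see why such a large effective coupling is natural for monopoles, recall the Dirac quantization condition $eg = 2\pi n$ (in units $\hbar=c=1$), which makes the magnetic analogue of the fine structure constant of order $1/(4\alpha)$. A rough numerical aside (our own illustration, not taken from the references):

```python
import math

# Dirac quantization e*g = 2*pi*n (hbar = c = 1) implies a magnetic
# "fine structure constant" g^2/(4*pi) = n^2/(4*alpha).
alpha = 1 / 137.035999           # electric fine structure constant
e = math.sqrt(4 * math.pi * alpha)
g = 2 * math.pi / e              # minimal magnetic charge (n = 1)
alpha_g = g**2 / (4 * math.pi)
print(round(alpha_g, 2))         # about 34.26: the magnetic coupling is strong
```

This order-30 coupling is what motivates the strong-coupling treatment adopted below.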
Also, the stability of the monopole is less crucial than that of the lepton, since our monopole is confined and cannot exist in isolation.\n \nThe difficulty of highly charged particles is the estimation of the potential between them. One photon exchange is not enough, and so we will use lattice gauge theory with a finite lattice constant $a$, in which a strong coupling expansion is possible \\cite{lattice gauge theory1, lattice gauge theory2, lattice gauge theory3}. We will apply this to the magnetic $ U(1) $ part of the manifestly electromagnetic dual formulation of Zwanziger \\cite{Zwanziger1, Zwanziger2}. In lattice gauge theory, however, there exists another difficulty, that is, $ U(1) $ gauge theories are not well defined; a first-order phase transition exists between the weak coupling perturbative region and the strong coupling confinement region, and so the continuum limit of $a \\to 0$ cannot be taken properly \\cite{Creutz-Jacobs-Rebbi}.\nOn the other hand, the continuum limit is properly taken for $ SU(2) $ \\cite{Creutz} and other asymptotically free gauge theories. So, we consider that a welcome non-Abelian structure is revealed when we go inside the finite-sized $ U(1) $ monopole; that is, at some point in taking $a \\to 0$, the gauge group is expected to be enhanced from $ U(1) $ to $ SU(2) $ and a 't Hooft-Polyakov-like~\\cite{monopole1, monopole2, monopole3} internal structure will appear. Even in the 't Hooft-Polyakov monopole, the $ U(1) $ magnetic charge is unfortunately located at the origin as a point-like singularity. To relax this situation, we look for a solution in which the $ U(1) $ magnetic charge is distributed nonlocally. Fortunately, there exists a solution in which the magnetic charge is distributed uniformly on the surface of a sphere with radius $R$. 
Inside the sphere ($r 100~ \\mbox{GeV}), \\\\\n&=& (0.1-0.2) \/\\sqrt{E\/\\mbox{GeV}} ~~(\\mbox{for}~ E < 100~ \\mbox{GeV}).\n\\end{eqnarray}\n\nHowever, to analyze the data of two photon events, low energy photons are discarded by the energy cutoff $E^\\mathrm{cut}_{\\gamma}$, \n\\begin{eqnarray}\nE^\\mathrm{cut}_{\\gamma}=25-35~\\mbox{GeV},\n\\end{eqnarray}\nwhich is much larger than the energy resolution \\cite{Energy cut1, Energy cut2}. We choose the energy cut as 35 GeV. The strategy is, by using the infrared technique, to sum up the emitted but undetectable photons with energy \n\\begin{eqnarray}\n0 < E^\\mathrm{soft}_{\\gamma} < E^\\mathrm{cut}_{\\gamma}= 35~\\mbox{GeV} \\ll \\frac{m_X}{2}.\n\\end{eqnarray}\n\nThere is another cut of eliminating the photons emitted forward or backward, which is the cut on the pseudo-rapidity $\\eta=-\\ln \\tan(\\theta\/2)$ ($\\theta$ is the scattering angle), namely,\n\\begin{eqnarray}\n\\vert \\eta \\vert ~<~\\eta^\\mathrm{cut}=2.37 .\n\\end{eqnarray}\nIt is rather technical, but we can convert the energy cut and the rapidity cut into nonvanishing photon masses $\\lambda$, since the introduction of the photon mass prevents the infrared singularity which occurs at the zero photon energy, as well as the collinear (mass) singularity that occurs when the photon is emitted parallel to the emitting fermion (monopole in this case)\n\\footnote{The lower bound of the massless photon propagator is identified with the lower bound of massive photon propagator, namely $\\left. E^\\mathrm{cut}_{\\gamma}=\\sqrt{\\vert \\bm{k} \\vert^2 + \\lambda^2}\\right\\vert_{\\vert \\bm{k}\\vert=0}$, which gives Eq. (\\ref{energy cut}). Similarly the denominator of the fermion propagator $p\\cdot k$ is compared between the case of a massless photon with an angle cut and that of a massive photon without an angle cut. This gives a massless fermion, $1-\\cos \\theta^\\mathrm{cut}=\\sqrt{1+(\\lambda \/ \\vert \\bm{k} \\vert)^2}-1$, from which Eq. 
(\\ref{rapidity cut}) is obtained.}.\nThe conversion is done by the following dictionary:\n\\begin{eqnarray}\nE^\\mathrm{cut}_{\\gamma} &\\Leftrightarrow& \\lambda=E^\\mathrm{cut}_{\\gamma}=35 ~\\mbox{GeV}, \\label{energy cut} ~\\mbox{and} \\\\\n\\eta^\\mathrm{cut} &\\Leftrightarrow& \\lambda=2 E_{\\gamma} e^{-\\eta^\\mathrm{cut}}=0.19 ~E_{\\gamma}< 0.19 \\left(\\frac{m_X}{2}\\right). \\label{rapidity cut}\n\\end{eqnarray} \nUpon combining the two cuts, in the case of $m_X \\ge $ 750 GeV, the angle cut is stronger, so that the soft photons emitted at LHC satisfy \n\\begin{eqnarray}\n0< E^\\mathrm{soft}_{\\gamma} \\le 0.19 \\left(\\frac{m_X}{2}\\right). \\label{real emission photon}\n\\end{eqnarray}\nThe discussion so far has pertained to the emitted photons.\n\nThe technique of summing infrared photons is as follows: The contribution of one soft photon emission to the decay width (or the cross section) has an infrared divergence, but the divergence is canceled by the other infrared divergence coming from the radiative corrections of the one soft photon exchange. As a result the decay width is multiplied by an infrared free factor $\\Phi$, called the eikonal factor. This eikonal factor can be easily summed to form an exponential factor, by including multi-photons. Virtual photons should also be soft, since the following approximation is taken in the numerator of the fermion propagator, \n\\begin{eqnarray}\n\\gamma_{\\mu} (p+k)^{\\mu}+m \\approx \\gamma_{\\mu} p^{\\mu}+m,\n\\end{eqnarray}\nwhere $p$ is the fermion's (monopole's) momentum and $k$ is the photon's momentum.\nThis means\n\\begin{eqnarray}\n\\vert \\bm{k} \\vert \\ll \\{m~ \\mbox{and}~ \\vert \\bm{p} \\vert\\} \\le \\frac{1}{2} \\sqrt{(2m)^2+ (2\\bm{p})^2} \\approx \\frac{1}{2} \\left(2m+ \\frac{\\bm{p}^2}{m} \\right) \\approx \\frac{1}{2} m_X.\n\\end{eqnarray}\nTherefore, for the virtual soft photon corrections, we have\n\\begin{eqnarray}\n0< E^\\mathrm{soft}_{\\gamma} \\le \\frac{1}{2} m_X. 
\\label{virtual soft photon}\n\\end{eqnarray}\nThen, the cancellation between real emissions and virtual corrections leaves the soft photons in the following energy interval:\n\\begin{eqnarray}\n0.19 \\times \\frac{1}{2} m_X \\le E^\\mathrm{soft}_{\\gamma} \\le \\frac{1}{2} m_X.\n\\end{eqnarray}\n\nIf we assume that the monopole has four-momentum $p_1$, the antimonopole has $p_2$ inside the monopolium, and $t=(p_1-p_2)^2=4p_{rel}^2=-4\\bm{p}^2_{rel}<0$, then\n\\begin{eqnarray}\n\\Phi= -\\frac{g^2}{2} \\int_{0.19 \\times \\frac{1}{2} m_X}^{\\frac{1}{2} m_X} \\frac{d^4k}{(2\\pi)^4}\\frac{-i}{k^2+i \\varepsilon} \\left(\\frac{p_1^{\\mu}}{p_1\\cdot k} - \\frac{p_2^{\\mu}}{p_2\\cdot k}\\right)^2.\n\\end{eqnarray}\nWe can rewrite $\\Phi$ using the above dictionary as follows:\n\\begin{eqnarray}\n\\Phi &=& \\frac{g^2}{2} \\left[ \\int_{0}^{\\infty} \\frac{d^4k}{(2\\pi)^4}\\frac{-i}{k^2-\\lambda^2+i \\varepsilon} \\left(\\frac{p_1^{\\mu}}{p_1\\cdot k} - \\frac{p_2^{\\mu}}{p_2\\cdot k} \\right)^2 \\right]_{\\lambda=0.19 \\times \\frac{1}{2} m_X}^{\\lambda=\\frac{1}{2}m_X}\\\\\n&=&\\left(\\frac{g}{4\\pi}\\right)^2 \\left[-\\int_0^1 dy~\\frac{y}{y^2+(1-y) (\\lambda\/m)^2} \\right. \\nonumber \\\\ \n& & ~~~~~~~~~~+\\left. 4 \\left(1-2m^2\/t \\right)\\times \\int_0^1 dy~\\frac{1}{\\sqrt C} \\ln \\left\\vert \\frac{\\sqrt{C}+y}{\\sqrt{C}-y} \\right\\vert \\right] _{\\lambda=0.19 \\times \\frac{1}{2} m_X}^{\\lambda=\\frac{1}{2} m_X}\\label{Eikonal} \\\\\n&\\approx& \\pi^2 \\left(\\frac{g^2}{4 \\pi} \\right) \\left[\\ln^2 \\frac{\\vert t \\vert}{m^2}+4 \\ln \\frac{\\vert t \\vert}{m^2} \\ln \\frac{m}{\\lambda} \\right]_{\\lambda=0.19 \\times \\frac{1}{2} m_X}^{\\lambda=\\frac{1}{2} m_X},\\label{high energy Eikonal}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\nC=C\\left(y, \\frac{m^2}{t}, \\frac{\\lambda^2}{t}\\right) =y^2 \\left(1-\\frac{4m^2}{t}\\right) - 4 (1-y) \\frac{\\lambda^2}{t}.\n\\end{eqnarray}\nThe last approximate formula, in Eq. 
(\\ref{high energy Eikonal}), being valid in the high energy limit, is written as a reference, but is useful to estimate $\\Phi$ roughly (see for example Ref.~\\cite{Landau-Lifshitz-QED}). We will use Eq. (\\ref{Eikonal}), since the high energy limit $\\vert t \\vert \\gg m^2$ does not necessarily work in our case. This double logarithmic approximation in the high energy limit was used to estimate the total cross section $\\sigma(e^{+}e^{-} \\to \\mbox{multi-photons})$. Please refer to Ref.~\\cite{Eidelman} and references therein.\n\nUsing this eikonal factor, the total decay widths can be written as\n\\begin{eqnarray}\n\\Gamma_\\mathrm{tot}(X_0)&=& \\Gamma(X_0 \\to 2 \\gamma) \\cosh \\Phi, \\\\\n\\Gamma_\\mathrm{tot}(X_1)&=&\\Gamma(X_1 \\to 3 \\gamma) \\cosh \\Phi \\nonumber \\\\\n&+&\\sum_{i} \\Gamma(X_1 \\to q_i\\bar{q}_i)+ \\sum_{i}\\Gamma(X_1 \\to \\ell^+_i\\ell^-_i)+\\Gamma(X_1 \\to W^+W^-);\n\\end{eqnarray}\nthe $\\cosh \\Phi$ factor appears because an even number of soft photons has to be added to the $2\\gamma$ decay or the $3\\gamma$ decay. Now we understand that the branching ratio is roughly given by\n\\begin{eqnarray}\n\\mathrm{Br}(X_0) \\approx \\mathrm{Br}(X_1) \\approx \\frac{1}{\\cosh \\Phi \\left(g, t\/m^2, m_X\/m \\right)}. \\label{Br}\n\\end{eqnarray}\nThe derivation of this suppression factor in the branching ratios is admittedly naive, but it is indispensable for the strong coupling dynamics of the monopolium.\n\\section{A model of a monopole and a monopolium based on the Zwanziger model and lattice gauge theory}\nThe bound state of the monopoles is formed by the exchange of magnetic photons $\\gamma_M$. 
However, the coupling of the monopole to the magnetic photon is very strong, so that we adopt a lattice gauge theory approach with a lattice constant $a$ as a UV cutoff \\cite{lattice gauge theory1, lattice gauge theory2, lattice gauge theory3}.\nThe space-time is considered to be a square lattice $n=(n_0, n_1, n_2, n_3)$ ($n_{\\mu}$ is an integer) with a lattice constant $a$. The link variables $U^{(A)}_{n\\hat{\\mu}}$ and $U^{(B)}_{n\\hat{\\mu}}$ are introduced as usual for the electric and magnetic photons $A_{\\mu}(x)$ and $B_{\\mu}(x)$, respectively.\n\\begin{eqnarray}\nU^{(A)}_{n\\hat{\\mu}}=e^{i \\{e a A_{\\mu}(na)\\}}, ~~U^{(B)}_{n\\hat{\\mu}}=e^{i \\{g a B_{\\mu}(na)\\}},\n\\end{eqnarray}\nand the Wilson loops $W^{(A)}[C]$ and $W^{(B)}[C]$ are defined as the product of the link variables along the loop $C$:\n\n\\begin{eqnarray}\nW^{(A)}[C]= \\prod_{n \\in C, ~\\mu \\parallel C} U^{(A)}_{n\\hat{\\mu}}, ~~W^{(B)}[C]= \\prod_{n \\in C, ~\\mu \\parallel C} U^{(B)}_{n\\hat{\\mu}}.\n\\end{eqnarray}\nThe minimum Wilson loop is given for the boundary curve $C_{n\\mu\\nu}\\equiv \\partial P_{n\\mu\\nu}$ of the minimum plaquette $P_{n\\mu\\nu}=\\mbox{rectangular}~(n, n+\\hat{\\mu}, n+\\hat{\\mu}+\\hat{\\nu}, n+\\hat{\\nu}, n)$. Then, the Zwanziger action can be written as the lattice gauge theory action in the Euclidean metric:\n\\begin{eqnarray}\nS^{\\rm{ZW}}_\\mathrm{lattice}&=&-\\sum_{n, \\nu} \\left( \\frac{1}{2e^2} W^{(A)} [C_{n\\eta\\nu}] +\\frac{1}{2g^2} W^{(B)} [C_{n\\eta\\nu}]+ (h.c.) 
\\right) \\nonumber \\\\\n&-&\\sum_{n, \\nu} \\frac{1}{2eg} \\left(W^{(A)} [C_{n\\eta\\nu}]\\tilde{W}^{(B)} [C_{n\\eta\\nu}] - W^{(B)} [C_{n\\eta\\nu}]\\tilde{W}^{(A)} [C_{n\\eta\\nu}] \\right) \\nonumber \\\\\n&+&\\frac{a^3}{2} \\sum_{i, n\\nu} \\left( \\overline{\\psi}_{i, n}\\gamma_{\\nu} \\left(U^{(A)}_{n\\nu}+U^{(B)}_{n\\nu}\\right) \\psi_{i, n+\\nu}-\\overline{\\psi}_{i, n}\\gamma_{\\nu} \\left(U^{(A)}_{n\\nu}+U^{(B)}_{n\\nu}\\right)^{\\dagger} \\psi_{i, n-\\nu} \\right) \\nonumber \\\\\n&-& a^4 \\sum_{i, n} m_i ~\\overline{\\psi}_{i, n} \\psi_{i, n} ,\n\\end{eqnarray}\nwhere the dual Wilson loop reads\n\\begin{eqnarray}\n\\tilde{W}^{(A,B)} [C_{n\\eta\\nu}] =-\\frac{i}{2} \\epsilon_{\\eta\\nu\\lambda\\rho}W^{(A,B)} [C_{n\\lambda\\rho}] ,\n\\end{eqnarray}\nwhere $\\epsilon_{1234}=1$.\n\nIn the action, we assume that the magnetic coupling $g$ is strong, while the electric coupling $e$ is weak and perturbative. So, in estimating the expectation value of the large Wilson loop $W^{(B)}[C]$, the strong coupling expansion is used \\cite{lattice gauge theory1, lattice gauge theory2, lattice gauge theory3}. We choose $C$ to be a rectangle of length $T$ in time and length $r$ in the space-like direction $\\nu$; then we have \n\\begin{eqnarray}\n\\langle W^{(B)}[C] \\rangle &=& \\frac{\\int dU^{(B)}_{n\\nu}~ W^{(B)}[C] ~e^{-S^{\\rm{ZW}}_\\mathrm{lattice}}}{\\int dU^{(B)}_{n\\nu} ~e^{-S^{\\rm{ZW}}_\\mathrm{lattice} } }\\\\\n&=& \\delta_{\\mu\\eta} \\exp\\left(-\\ln(2g^2) \\frac{T \\cdot r}{a^2} \\right) \\left(1+ \\cdots \\right),\n\\end{eqnarray}\nwhere $T \\cdot r$ denotes the minimum area of the rectangle $C$, so that Wilson's area law is realized only if $\\mu$ is in the $\\eta$-direction. This point is very important. Define the potential between a heavy monopole and its antimonopole separated by a distance $r$ to be $V_{M\\overline{M}}(r)$. 
Then, the potential has a linear term in $r$, if the monopole and antimonopole are separated in the $\\eta$-direction;\n\\begin{eqnarray}\nV_{M\\overline{M}}(r)= \\delta_{r \\parallel \\eta} \\frac{\\ln(2g^2)}{a^2} r + \\cdots.\n\\end{eqnarray}\nThis clarifies the meaning of the special direction $\\eta$ that appears in the Zwanziger formulation. It gives the direction of the Dirac string starting from the monopole. Therefore, the monopole and antimonopole are connected by the string, starting from the monopole and ending at the antimonopole, which contributes to the linear potential between them.\n\nIn addition to this strong coupling contribution, we will add the usual perturbative weak coupling contribution, and so the potential at this stage is\n\\begin{eqnarray}\nV_{M\\overline{M}}(r)= -\\frac{g^2}{4\\pi r} + \\frac{\\ln(2g^2)}{a^2} r.\n\\end{eqnarray}\nThis matches high-precision QCD calculations in which the potential between a quark and antiquark pair is well approximated by the linear plus Coulomb potential \\cite{Bali1, Bali2, Bali3}; this potential is good for point-like quarks. The monopole, however, may not be point-like, and may have an internal structure. The $ U(1) $ lattice gauge theory is usually considered not to be well defined, since there exists a first-order phase transition between the confinement phase and the perturbative phase \\cite{Creutz-Jacobs-Rebbi}, and it obstructs the continuum limit of $a \\to 0$. One way out of this difficulty is to lift the $ U(1) $ theory to an $ SU(2) $ theory, or another asymptotically free theory, when we approach the short-distance region. As shown by 't Hooft and Polyakov \\cite{monopole1, monopole2, monopole3}, the $ U(1) $ monopole was given as a classical solution of $ SU(2) $ gauge theory, which is broken to $ U(1) $ with a triplet Higgs field $\\phi^a(x)~(a=1-3)$. Therefore, if we go inside the monopole, the non-Abelian gauge theory may appear. 
If this happens we may take the continuum limit properly. \n\nThe 't Hooft-Polyakov monopole is a classical solution of the $ SU(2) $ gauge theory with a triplet Higgs, based on \n\\begin{eqnarray}\n{\\cal L}_{SU(2)\\, \\mathrm{monopole}}=-\\frac{1}{4} (F^a_{\\mu\\nu})^2+ (D_{\\mu}\\phi^a)^2-\\lambda(\\vert\\phi \\vert^2-v)^2,\n\\end{eqnarray}\nand the following ansatz:\n\\begin{eqnarray}\nA^a_i(x)=v~\\epsilon^{aij}\\hat{r}^j \\frac{1-K(\\xi)}{\\xi}, ~\\phi^a(x)=v~\\hat{r}^a \\frac{H(\\xi)}{\\xi},\n\\end{eqnarray}\nwhere we define a dimensionless parameter $\\xi=evr$.\nIf we take the limit $\\lambda \\to 0$ while keeping $v \\ne 0$, the solution, called the BPS solution, is given analytically \\cite{BPS1, BPS2}. Then, the equations of motion become the first order:\n\\begin{eqnarray}\n\\xi \\frac{dK}{d\\xi}=-KH, ~~\\xi \\frac{dH}{d\\xi}=H-K^2+1,\n\\end{eqnarray}\nfrom which the solutions read\n\\begin{eqnarray}\nK(\\xi)=\\frac{\\xi}{\\sinh \\xi}, ~~H(\\xi)=\\xi \\coth\\xi-1.\n\\end{eqnarray}\nWe want to know the distribution of the magnetic charge inside the monopole, $\\xi <1$ or $r< 1\/ev$. The $ SU(2) $ gauge potential and the Higgs field are properly reduced by factors of $1-K(\\xi)$ and $H(\\xi)$, but the monopole charge is unfortunately not smeared even inside the monopole. This can be understood from the $ U(1) $ field strength proposed by 't Hooft. This gauge invariant expression can be rewritten as follows \\cite{Arafune-Freund-Goebel}:\n\\begin{eqnarray}\nF_{\\mu\\nu}=\\partial_{\\mu} (\\hat{\\phi}^a A^a_{\\nu})-\\partial_{\\nu} (\\hat{\\phi}^a A^a_{\\mu})-\\frac{1}{e}\\epsilon^{abc}\\hat{\\phi}^a \\partial_{\\mu} \\hat{\\phi}^b \\partial_{\\nu} \\hat{\\phi}^c,\n\\end{eqnarray}\nwhere $\\hat{\\phi}^a=\\phi^a\/\\vert \\phi \\vert$. 
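As a quick consistency check (our own aside, not part of the original derivation), the BPS profiles $K(\xi)=\xi/\sinh\xi$ and $H(\xi)=\xi\coth\xi-1$ quoted above can be substituted back into the first-order equations numerically:

```python
import numpy as np

def K(xi):
    # BPS gauge-field profile, K(xi) = xi / sinh(xi)
    return xi / np.sinh(xi)

def H(xi):
    # BPS Higgs profile, H(xi) = xi * coth(xi) - 1
    return xi / np.tanh(xi) - 1.0

def bps_residuals(xi, h=1e-6):
    """Residuals of  xi K' = -K H  and  xi H' = H - K^2 + 1,
    with derivatives taken by central finite differences."""
    Kp = (K(xi + h) - K(xi - h)) / (2.0 * h)
    Hp = (H(xi + h) - H(xi - h)) / (2.0 * h)
    r1 = xi * Kp + K(xi) * H(xi)
    r2 = xi * Hp - (H(xi) - K(xi) ** 2 + 1.0)
    return r1, r2

xi = np.linspace(0.5, 5.0, 10)
r1, r2 = bps_residuals(xi)
# Both residuals vanish up to finite-difference error.
assert np.max(np.abs(r1)) < 1e-5 and np.max(np.abs(r2)) < 1e-5
```

The residuals are at the level of the finite-difference rounding error, confirming that the quoted profiles solve the BPS equations.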
From this expression the magnetic charge is found to be a topological number:\n\\begin{eqnarray}\n\\Phi_m=\\int \\bm{B}d\\bm{S}=\\frac{4\\pi}{e} \\int d^3x \\frac{\\partial(\\phi^1, \\phi^2, \\phi^3)}{\\partial(x^1, x^2, x^3)}=\\frac{4\\pi}{e} N,\n\\end{eqnarray}\nwhere $\\Phi_m$ is the magnetic flux, and $N$ is the winding (wrapping) number of the sphere of the Higgs fields ($\\bm{\\phi}^2=v^2)$ by the sphere in space ($\\bm{x}^2=1$).\n\nWe can treat the charged sphere as follows: If the sphere of radius $R$ is uniformly charged, it can be viewed as a point charge $Q(\\infty)$ from a long distance, but inside the sphere $(rR$. Accordingly, $K$ and $H$ are modified:\n\\begin{eqnarray}\n1-\\tilde{K}(r)=\\theta(r-R) (1-K(r)), ~~\\tilde{H}(r)=\\theta(r-R)H(r).\n\\end{eqnarray}\nThis modification inside the sphere ($rR), \\\\\n&=&\\mbox{const} = - \\frac{g^2}{4\\pi R} + \\frac{\\ln(2g^2)}{a^2} R~~ (\\mbox{for}~ rR), \\\\\n&=&\\mbox{const} = - \\frac{g^2}{4\\pi R} + \\frac{\\ln(2g^2)}{a^2} R~~ (\\mbox{for}~ rR), \\\\\n&=&0~~ (\\mbox{for}~ rr_{*}, \\\\\n\\chi(r)_{II}&=& \\frac{C}{\\sqrt{p_r} } \\sin \\left(-\\int_r^{r_{*}} p_r dr + \\frac{\\pi}{4} \\right)~~\\mbox{for}~~0r_{*}, \\\\\n\\int_r^{r_{*}} p_r dr&=& \\frac{2\\sqrt{m}}{3\\kappa} \\left((\\tilde{E}+g^2\/4\\pi R)-\\kappa r \\right)^{3\/2}-(r=r_{*})~~\\mbox{for}~~RR), \\nonumber \\\\\n&=&\\mbox{const} = - \\frac{g^2}{4\\pi R} + \\frac{\\ln(2g^2)}{a^2} R~~ (\\mbox{for}~ r\\bar{I}^{(k)}=\\frac{\\sum_{\\{m,n\\}\\in w^{(k)}}I(m,n)}{|w^{(k)}|}, k\\!=\\!\\{1,2,3,4,5\\},\n\\end{equation}\nwhere $|w^{(k)}|$ stands for the window size,\nare satisfied for all color channels, $I(i,j)$ is recognized as a rain pixel\nand the corresponding term $S_R(i,j)$ in the so-called binary location map $S_R$ is set to be 1;\notherwise $S_R(i,j)$ is assigned as 0.\nDetection result $S_{R}$ is a binary image as shown in Fig. 
\\ref{fig:initial_location}.\n\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{0.48\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_loca}}\n\\centerline{(a)}\n\\end{minipage}\n\\caption{Initial rain location map.}\n\\label{fig:initial_location}\n\\end{figure}\n\n\\subsection{An analysis of mis-detections}\n\nIt can be seen from Fig. \\ref{fig:initial_location}\nthat not only rain streaks but also some non-rain components\nappear in the rain detection result. How to recognize those\nnon-rain components and eliminate their influence is\nthus critical; a lot of image detail and useful\ninformation would otherwise be lost after the removal of rain streaks.\n\nIn order to separate rain from non-rain objects,\nsome characteristics of rain can be useful.\nWe describe them as follows:\n\\begin{itemize}\n\\item rain streaks are usually not very wide,\n\\item the directions of all rain streaks in a scene are nearly consistent,\n\\item the color of a rain streak is usually a pale white, and\n\\item the length of a rain streak is usually larger than its width.\n\\end{itemize}\nThese characteristics are robust descriptors of rain, and\nsome of them have been utilized in existing rain-removal\nworks, such as \\cite{Chen_2014_CSVT} and \\cite{Kim_2013_ICIP}.\nLater on, we will see that when these characteristics are\ncombined with our proposed morphological processing,\ndetection errors are reduced considerably.\n\n\\subsection{Refining of initial locations of rain streaks}\n\n\\noindent \\textbf{First}, all connected components shown in Fig. 
\\ref{fig:initial_location}\nare extracted by the morphology method; the details can be found in \\cite{Gonzalez_2002_PUSR}.\n\n\\begin{figure}[t]\n\\begin{minipage}{0.48\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L_single_streak}}\n\\centerline{(a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.48\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L_width}}\n\\centerline{(b)}\n\\end{minipage}\n\\caption{(a) An example of PCA description. (b) Refined result by connected component width.}\n\\label{fig:single_streak}\n\\end{figure}\n\n\\noindent \\textbf{Second}, PCA is used to describe the shape of every connected component.\nIn order to describe this step more visually, we select one connected component\nfrom Fig. \\ref{fig:initial_location} as an example\nto show the refining process; the selected component is shown in Fig. \\ref{fig:single_streak}(a).\nBecause some colors cannot be seen clearly on a black background,\nwe have changed the selected streak to black and the background to white in Fig. \\ref{fig:single_streak}(a).\n\n\\begin{figure*}[t]\n\\begin{minipage}{0.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L_angle}}\n\\centerline{(a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L_color}}\n\\centerline{(b)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L_aspect_ratio}}\n\\centerline{(c)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L}}\n\\centerline{(d)}\n\\end{minipage}\n\\caption{(a) Refined result by the connected component angle. (b) Refined result by the connected component color. (c) Refined result by the connected component aspect ratio. 
(d) Dilation of rain streaks.}\n\\label{fig:revision}\n\\end{figure*}\n\nFor the $p^{th}~(p=1, 2, ..., P)$ connected component, we calculate the covariance matrix of the location vectors of all pixels in it.\nSuppose that there are $N$ pixels in the $p^{th}$ connected component.\nHence, there are $N$ sample vectors of pixel locations so that the mean location\nvector $\\bm{m}_{\\bm{z}}$ and covariance matrix $\\bm{C}_{\\bm{z}}$ can be calculated as\n\\begin{equation} \\label{eq:mean_approximation}\n\\bm{m}_{\\bm{z}} = \\frac {1}{N} \\sum^{N}_{n=1} \\bm{z}_{n}\n\\end{equation}\n\\begin{equation} \\label{eq:covariance_matrix2}\n\\bm{C}_{\\bm{z}} = \\frac {1}{N} \\sum^{N}_{n=1} \\bm{z}_{n}\\bm{z}_{n}^{T} - \\bm{m}_{\\bm{z}}\\bm{m}_{\\bm{z}}^{T}\n\\end{equation}\nwhere $\\bm{z}_{n}= [x_{n}, y_{n}]^{T}$, and $x_n$ and $y_n$ are respectively\nthe corresponding coordinates of the $n^{th}$ pixel ($n=1, 2, \\cdots, N$).\n\nAfter the covariance matrix $\\bm{C}_{\\bm{z}}$ of the $p^{th}$ connected component is obtained,\nwe perform the eigenvalue decomposition of $\\bm{C}_{\\bm{z}}$\nand obtain the eigenvalues $\\lambda_{1}$, $\\lambda_{2}$ and\ntheir corresponding eigenvectors $\\bm{e}_{1}$, $\\bm{e}_{2}$ ($\\lambda_{1}$ is the larger eigenvalue).\nThe PCA description of the shape of a connected\ncomponent is shown in Fig. \\ref{fig:single_streak}(a).\nThe red arrows stand for the two eigenvectors, while the two yellow arrows denote\nthe coordinate axes. Here, $\\theta$ is the angle between the $x$-axis and the eigenvector\n$\\bm{e}_{1}$ and it can be calculated as $\\theta=\\arctan(\\frac {\\bm{e}_{1}(2)}{\\bm{e}_{1}(1)})$.\nNotice that in order to prevent the red direction arrow from occluding the connected component,\nthe origin of the coordinate system is not placed on the connected component.\n\nFrom Fig. 
\ref{fig:single_streak}(a), we learn that $\bm{e}_{1}$\n(corresponding to the larger eigenvalue $\lambda_{1}$) points in the direction\nwhere the location variance is maximal, whereas $\bm{e}_{2}$\n(corresponding to the smaller eigenvalue $\lambda_{2}$) is perpendicular to the maximum variance direction.\n\nAccordingly, we define the length of a connected component as\n\begin{equation} \label{eq:length}\nL=c\lambda_{1}\n\end{equation}\nand its width as\n\begin{equation} \label{eq:width}\nW=c\lambda_{2}\n\end{equation}\nwhere $c$ is a proportional parameter. We assume that $c$ is a constant in an image.\nThe specific value of $c$ is not important, because it does not affect the ratio of the\nlength and width of a connected component.\nThe more important quantity is the direction angle of a connected component,\nwhich is denoted as $\theta$ in Fig. \ref{fig:single_streak}(a), but is now re-defined as\n\begin{equation} \label{eq:angle}\nD=\theta\n\end{equation}\nand we call $D$ the direction of a connected component.\n\nIn our experiment, the values $\lambda_{1}$, $\lambda_{2}$, $\bm{e}_{1}$,\n$\bm{e}_{2}$ and $D$ of all connected components $p$ ($p=1, 2, \ldots, P$) are calculated,\nwhere $P$ is the number of connected components.\nAs an example, these values for the connected component shown\nin Fig. 
\ref{fig:single_streak}(a) are\n$\lambda_{1}=172.8949$, $\lambda_{2}=0.5852$,\n$\bm{e}_{1}=(0.9309, 0.3653)^{T}$,\n$\bm{e}_{2}=(-0.3653, 0.9309)^{T}$ and $D=21.4286^{\circ}$, respectively.\n\n\noindent \textbf{Third}, after obtaining the quantified characteristics of all connected components,\nwe recognize non-rain connected components as follows.\n\begin{itemize}\n\n\item As noted above, rain streaks usually do not have a large width\ncompared to some non-rain objects.\nHence, $K$-means clustering is used here to classify the connected components by their width $W$.\nThe connected components with larger width are mis-detected non-rain components, and\nwe set their corresponding values in the location map $S_{R}$ to $0$.\nThe result refined in this way is shown in Fig. \ref{fig:single_streak}(b).\n\nThere are not many wide non-rain objects in this image, hence\nthe refinement by width is not very apparent.\nWe can see that some non-rain components at the bottom-right\ncorner disappear in Fig. \ref{fig:single_streak}(b).\nThis is because, when the textures of an image are complex,\nsome non-rain streaks merge into\na larger connected component whose width becomes large.\n\n\item An apparent characteristic of rain streaks is that they follow\nnearly the same falling direction, and the angle is generally not too large.\nIf we use the direction angle $D$ of a connected component, defined in Equation\n(\ref{eq:angle}), to describe this characteristic, $\vert D \vert$\nof rain components must be less than a threshold $T1$ ($\vert D \vert$ is\nthe absolute value of $D$).\nHence, by the threshold $T1$, we can recognize the mis-detected non-rain connected components\nin Fig. \ref{fig:single_streak}(b). Then the non-rain connected\ncomponents are set to $0$, and the refined result is shown in Fig. \ref{fig:revision}(a).\n\n\item After refining by the width and direction constraints,\nthe majority of non-rain components are recognized. 
However, some non-rain components that\nare similar in shape to the rain streaks still remain.\nRain streaks usually possess neutral color.\nBased on this feature, Chen \emph{et al.} \cite{Chen_2014_CSVT} proposed to\nidentify rain dictionary atoms by the eigen color feature \cite{Tsai_2008_IET_CV}.\nIn our work, we utilize the color characteristics of\nrain to revise the mis-detected non-rain connected components.\n\nFor the $p^{th}$ connected component in Fig. \ref{fig:revision}(a),\nwe calculate the mean color vector of all pixels in it,\ndenoted $[\bar{R}, \bar{G}, \bar{B}]$.\nThen we transform this 3-D RGB color vector into a 2-D vector as follows:\n\begin{small}\n\begin{equation}\label{eq:color_transform}\n\begin{split}\n u & =\frac{2\Phi-\bar{G}-\bar{B}} {\Phi} \\\n v & =\max \left \{\n \frac{\Phi-\bar{G}}{\Phi}, \frac{\Phi-\bar{B}}{\Phi}\n \right \}\n\end{split}\n\end{equation}\n\end{small}\nwhere $\Phi=\frac{1}{3}(\bar{R}+\bar{G}+\bar{B})$.\nIt is clear from (\ref{eq:color_transform}) that,\nafter the transform, any connected component having neutral color will\nbe clustered around $(0, 0)$ in the $u$-$v$ space.\nHence, we calculate the magnitude of this 2-D vector $(u, v)$\n(i.e., the Euclidean distance to the origin of the $u$-$v$ space);\nif the magnitude is larger than a pre-set threshold $T2$,\nthe $p^{th}$ connected component is recognized as a mis-detected non-rain connected component.\nFor all remaining connected components in Fig. \ref{fig:revision}(a),\nwe repeat this process and revise the mis-detected non-rain connected components.\nThe refined result is shown in Fig. \ref{fig:revision}(b).\n\n\item According to \cite{Kim_2013_ICIP}, a rain streak is larger\nin length than in width. Hence, we classify the connected components\nwhose aspect ratios are less than $\mu$ as non-rain components.\nBy excluding the connected components that have small aspect ratios,\nthe refined result is shown in Fig. 
\ref{fig:revision}(c).\n\n\item Finally, to prevent some slim rain edges\nfrom remaining in our final rain-removed result,\nwe dilate the connected components in\nFig. \ref{fig:revision}(c) by a $3\times 3$ `disk' mask\nto obtain the final result of rain streak detection,\nas shown in Fig. \ref{fig:revision}(d).\n\end{itemize}\n\nOur rain detection is a stepwise revision method.\nBy utilizing morphology and PCA, we quantify the rain's\ncharacteristics and detect rain streaks relatively accurately.\n\n\section{Image Reconstruction}\n\label{sec:ImageReconstruction}\nIn this section, we verify the sparsity\nof natural rain images and utilize a single Laplacian distribution\nto approximate the sparsity prior of natural rain images;\nwe name this approximate prior the \emph{quasi-sparsity prior}.\nThen, based on the quasi-sparsity prior and several constraints,\nthe rain-removed result is obtained by\nseparating a rain image into a rain layer and a background layer.\n\n\subsection{Quasi-sparsity of rain images}\n\nIn \cite{Levin_2007_PAMI}, Levin and Weiss tried to separate\nthe background and reflection from an image using the sparsity prior of natural images.\nWe also utilize image sparsity in our rain removal task.\nThe sparsity of an image, as described in \cite{Levin_2007_PAMI}, can\nbe depicted as follows: when a derivative filter is applied to an image,\nthe logarithm of the histogram of the resulting gradient image reaches\nits peak value at zero and falls off much faster than a Gaussian.\nLevin \emph{et al.} demonstrated that sparse distributions\nlead to a better image decomposition \cite{Levin_2002_NIPS}.\nHence, the sparsity of a natural\nimage is crucial to its decomposition into several 
layers.\n\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{0.48\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_distribution_compare}}\n\\centerline{(a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.48\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_sparsity_prior}}\n\\centerline{(b)}\n\\end{minipage}\n\\caption{(a) Log-probability of several distributions. (b) Sparsity verification on one rain image.}\n\\label{fig:sparsity}\n\\end{figure}\n\n\\begin{figure*}[t]\n\\centering\n\\begin{minipage}{0.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18}}\n\\centerline{(a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_I_nd}}\n\\centerline{(b)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_removed_rain}}\n\\centerline{(c)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.24\\linewidth}\n\\centering{\\includegraphics[width=.9\\linewidth]{images\/rain18_L_nonrain}}\n\\centerline{(d)}\n\\end{minipage}\n\\caption{(a) Original rain image. 
(b) Rain-removed image.\n(c) Rain component removed from (a).\n(d) Non-rain location $S_{NR}$ that is obtained by $S_{NR}=1-S_{R}$ (the white area).}\n\label{fig:results}\n\end{figure*}\n\n\begin{figure*}[t]\n\centering\n\begin{minipage}{0.195\linewidth}\n\centering{\includegraphics[width=.9\linewidth]{images\/test75}}\n\centerline{(a)}\n\end{minipage}\n\hfill\n\begin{minipage}{0.195\linewidth}\n\centering{\includegraphics[width=.9\linewidth]{images\/test75_I_nd}}\n\centerline{(b)}\n\end{minipage}\n\hfill\n\begin{minipage}{.195\linewidth}\n\centering{\includegraphics[width=.9\linewidth]{images\/test75_I_nd_rain}}\n\centerline{(c)}\n\end{minipage}\n\hfill\n\begin{minipage}{.195\linewidth}\n\centering{\includegraphics[width=.9\linewidth]{images\/test75_I_nd_revised}}\n\centerline{(d)}\n\end{minipage}\n\hfill\n\begin{minipage}{.195\linewidth}\n\centering{\includegraphics[width=.9\linewidth]{images\/test75_I_nd_rain_revised}}\n\centerline{(e)}\n\end{minipage}\n\caption{(a) One rain image; (b) background layer without the third constraint;\n(c) rain layer without the third constraint;\n(d) background layer with the third constraint; (e) rain layer with the third constraint.}\n\label{fig:correct_separate}\n\end{figure*}\n\nFig. \ref{fig:sparsity}(a) illustrates the logarithm probabilities of\nseveral distributions. The Laplacian distribution yields exactly a straight line\nconnecting the maximum and minimum values.\nWe can see that the Gaussian distribution falls off the slowest and lies above the\nstraight line, so it is viewed as non-sparse. The other two distributions,\nbelow the straight line, are classified as sparse according to \cite{Levin_2007_PAMI}.\nThe Laplacian distribution lies on the border between\nsparsity and non-sparsity.\n\nIn order to verify the sparsity of rain images,\nwe conduct an experiment on nearly 200 rain images; some of these\nimages are also used in the experiments section. 
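This heavy-tailed behaviour is easy to check numerically. The sketch below is an illustrative NumPy version (the function name, bin settings, and the synthetic piecewise-constant test image are ours, not from the paper): the log-histogram of horizontal derivatives of a mostly flat image peaks sharply at zero, as in Fig. \ref{fig:sparsity}(b).

```python
import numpy as np

def gradient_log_histogram(image, bins=41, vrange=0.5):
    """Log-probability histogram of horizontal derivatives of an image."""
    dx = image[:, 1:] - image[:, :-1]  # horizontal derivative filter [-1, 1]
    hist, edges = np.histogram(dx, bins=bins, range=(-vrange, vrange), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.log(hist + 1e-12)  # epsilon avoids log(0) in empty bins

# Synthetic piecewise-constant image: flat patches give many near-zero
# derivatives plus a few large jumps -- a heavy-tailed (quasi-sparse) profile.
rng = np.random.default_rng(0)
patches = np.repeat(np.repeat(rng.random((32, 32)), 8, axis=0), 8, axis=1)
image = patches + 0.01 * rng.standard_normal(patches.shape)

centers, logp = gradient_log_histogram(image)
peak = centers[np.argmax(logp)]  # the mode of log p(x) sits at (or near) zero
```

A Gaussian of matching variance would place far less mass in the central bin and far more in the mid-range bins, which is the visual gap between the blue curve and the straight Laplacian line in Fig. \ref{fig:sparsity}.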
Here, we use the image\nin Fig. \ref{fig:results}(a) as an example to illustrate the sparsity of rain images.\nFig. \ref{fig:sparsity}(b) shows the logarithm curve (the blue curve)\nof the histogram obtained after applying a horizontal derivative filter to it.\nThe result clearly reveals that the rain image satisfies the sparsity requirement.\n\nHowever, decomposing a rain image $I$ into the rain layer $I_{R}$ and background layer $I_{NR}$ as\n\begin{equation} \label{eq:image_decomposition}\nI=I_{R}+I_{NR}\n\end{equation}\nis a massively ill-posed problem. To simplify this kind of problem,\nLevin \emph{et al.} proposed that users can label some edges or areas that belong\nto $I_{R}$ and some other edges or areas that belong to $I_{NR}$\nto add constraints to this kind of problem \cite{Levin_2007_PAMI}.\n\nSparsity ensures that an edge of unit contrast will not be split,\nand will appear in one layer \cite{Levin_2007_PAMI}.\nIn our task, we have detected nearly all rain locations, and\nthe remaining region is labelled as the non-rain area.\nOur detection offers better constraints for this ill-posed problem\nthan the manually-labeled operation in \cite{Levin_2007_PAMI},\nand also realizes the role of sparsity to a certain degree.\nUnlike \cite{Levin_2007_PAMI}, we relax the probability constraint and utilize a single\nLaplacian function to approximate the sparsity of rain images,\nwhich we name the \emph{quasi-sparse distribution}:\n\begin{equation} \label{eq:histogram_approximation}\nP(x)=e^{- \vert x \vert}\n\end{equation}\nHence, the quasi-sparsity prior over the whole image $I$ is as follows:\n\begin{equation}\label{eq:laplacian_approximation}\nP(I)=\prod_{i, k}e^{- \vert \omega_{i, k} \cdot I \vert}\n\end{equation}\nwhere $\omega_{i,k}$ is the $k^{th}$ filter centered\nat the $i^{th}$ pixel. 
The filters\nwith two orientations (horizontal and vertical) and two orders (the\nfirst derivative and the second derivative) are used here.\n\n\subsection{Optimization}\n\nFor a given rain image $I$, $S_{R}$ is the detected rain location,\nand the non-rain location can be obtained by $S_{NR}=1-S_{R}$.\nThe following constraints are imposed to separate a rain image\ninto the rain layer $I_{R}$ and the background (non-rain) layer $I_{NR}$:\n\begin{enumerate}\n\item $I=I_{R}+I_{NR}$;\n\item the gradients of $I_{R}$ and $I_{NR}$ at their corresponding locations\nin $S_{R}$ and $S_{NR}$ respectively agree with the gradient of image $I$;\n\item the values of $I_{NR}$ at location $S_{NR}$ are close to the values of $I$.\n\end{enumerate}\nThe first two constraints are also utilized in \cite{Levin_2007_PAMI}.\nAs shown later, these alone lead to abnormal separation for some specific images.\nTo improve the separation, we add the third constraint, which plays the role\nof a boundary condition.\n\nAs in \cite{Weiss_2001_ICCV}, we assume that\nderivative filters are independent over space and orientation, and that the\nrain layer $I_{R}$ and background layer $I_{NR}$ are independent.\nThen the quasi-sparsity prior can be written as follows according\nto the first constraint:\n\begin{equation} \label{eq:prior_define}\nP(I)=P(I_{R})P(I_{NR})=\prod_{i, k}e^{- ( \vert \omega_{i, k} \cdot I_{R} \vert + \vert \omega_{i, k} \cdot I_{NR} \vert)}\n\end{equation}\nWe would like to obtain the $I_{R}$ and $I_{NR}$ that maximize the\nabove likelihood function. 
This is equivalent to minimizing the following loss function:\n\begin{equation}\label{eq:loss_function}\nJ(I_{R}, I_{NR})=\sum_{i, k} \vert \omega_{i, k} \cdot I_{R} \vert + \vert \omega_{i, k} \cdot I_{NR} \vert\n\end{equation}\nCombining the second and third constraints,\nwe rewrite Equation (\ref{eq:loss_function}) as\n\begin{equation}\label{eq:loss_function1}\n\begin{split}\n&J_{1}(I_{R})=\sum_{i, k} \vert \omega_{i, k} \cdot I_{R} \vert + \vert \omega_{i, k} \cdot (I-I_{R}) \vert \\\n& \qquad \quad + \lambda \sum_{i \in S_{R}, k} \vert \omega_{i,k} \cdot I_{R} - \omega_{i, k} \cdot I \vert \\\n& \qquad \quad +\lambda \sum_{i \in S_{NR}, k} \vert \omega_{i, k} \cdot I_{R} \vert \\\n& \qquad \quad +\eta \sum_{i \in S_{NR}} \vert I_{R} \vert\n\end{split}\n\end{equation}\nwhere $\lambda$ and $\eta$ are regularization parameters.\n\nIf $v$ is defined as the vectorized version of the image $I_{R}$,\nEquation (\ref{eq:loss_function1}) becomes\n\begin{equation}\label{eq:loss_function2}\nJ_{2}(v)= \Vert Av-b \Vert_{1}\n\end{equation}\nwhere $\Vert \cdot \Vert_{1}$ is the $L_{1}$ norm, $A$\nis determined by the derivative filters and $\lambda$, $\eta$, and $b$ is determined by the image derivatives,\nthe values of $I$ at location $S_{NR}$, zeros, and $\lambda$, $\eta$.\n\nThis is an $L_{1}$-norm optimization problem,\nand it can be solved by iterative reweighted least squares (IRLS) \cite{Burrus_2009_CPAM}.\nWe summarize the process in Algorithm \ref{alg:whole_algorithm}.\nOnce $v$ is obtained, we reshape it back into the rain-layer image $I_{R}$.\nThen, the rain-removed image $I_{NR}$ can be obtained as\n\begin{equation}\label{eq:rain_remove}\nI_{NR}=I-I_{R}\n\end{equation}\nOne example of the original rain image and the rain-removed image is shown in Fig. \ref{fig:results}(a) and (b), respectively.\nIn Fig. 
\ref{fig:results}(c)(d), we show the constructed rain layer and the non-rain location $S_{NR}$.\n\nAs mentioned above, the third constraint plays an important role\nin the correct separation of rain images. Here, we show\nan example in Fig. \ref{fig:correct_separate} to illustrate the role of this constraint.\nIn Fig. \ref{fig:correct_separate}(b)(c), we can see that a serious\ncolor shift (meaning that the colors of non-rain details in (b) are abnormal) will\nappear without the third constraint.\nThe reason is that some colors go to the rain layer (c) in under-determined conditions.\nBy adding the third constraint, the separation quality\ncan be improved and we can obtain a natural rain-removed image.\n\n\begin{algorithm}\n\renewcommand{\algorithmicrequire}{\textbf{Input:}}\n\caption{IRLS}\n\label{alg:whole_algorithm}\n\begin{algorithmic}\n\REQUIRE $A$, $b$, $Iter$\n\renewcommand{\algorithmicrequire}{Initialization}\n\REQUIRE $v=[ A^{T}A ]^{-1}A^{T}b$\n\FOR{$t$=1 to $Iter$ }\n\STATE $e=abs(Av-b)$\n\STATE $z(i) = e(i)^{-0.5}, i=1, 2, ...$\n\STATE $\Omega = diag(z)$\n\STATE $v =[ A^{T}\Omega^{T}\Omega A ]^{-1}A^{T}\Omega^{T}\Omega b$\n\ENDFOR\n\renewcommand{\algorithmicensure}{\textbf{Output:}}\n\ENSURE $v$\n\end{algorithmic}\n\end{algorithm}\n\n\n\section{Experimental Results}\n\label{sec:ExperimentalResults}\n\n\begin{table*} [t]\n\centering\n\caption{The Average Time Consumed by Selected Methods on $256 \times 256$ Images.}\n\begin{tabular}{lccccccc}\n\hline\nMethod & \cite{Ding_2015_MTA} & \cite{Chen_2014_CSVT} & \cite{Luo_2015_ICCV} & \cite{Li_2016_CVPR} & \cite{Fu_2017_CVPR} & \cite{Zhang_2018_CVPR} & Ours \\\nTime (s) & 1.25 & 97.15 & 69.69 & 1260.40 & 5.30 & 0.20 & 28.01 \\\n\hline\n\end{tabular}\n\label{tab:time}\n\end{table*}\n\n\begin{table*} [t]\n\small\n\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}\n\centering\n\caption{Image Performances (Top: \textbf{PSNR}, Bottom: 
\\textbf{SSIM}) of Different Methods (Rows) on $11$ Synthesized Rain Images (Columns) against Ground-truth.}\n\\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}\n\\hline\n & Image 1 & Image 2 & Image 3 & Image 4 & Image 5 & Image 6 & Image 7 & Image 8 & Image 9 & Image 10 & Image 11 \\\\\n\\hline\n\\cite{Ding_2015_MTA} & \\tabincell{c}{34.65 \\\\ 0.867} & \\tabincell{c}{33.70 \\\\ 0.889} & \\tabincell{c}{33.89 \\\\ 0.802} & \\tabincell{c}{34.17 \\\\ 0.805} & \\tabincell{c}{35.16 \\\\ 0.861} & \\tabincell{c}{35.93 \\\\ 0.835} & \\tabincell{c}{41.29 \\\\ 0.796} & \\tabincell{c}{31.77 \\\\ 0.811} & \\tabincell{c}{32.50 \\\\ 0.874} & \\tabincell{c}{34.58 \\\\ 0.907} & \\tabincell{c}{33.22 \\\\0.832 } \\\\\n\\hline\n\\cite{Chen_2014_CSVT} & \\tabincell{c}{34.31 \\\\ 0.803} & \\tabincell{c}{32.36 \\\\ 0.759} & \\tabincell{c}{34.92 \\\\ 0.750} & \\tabincell{c}{34.68 \\\\ 0.738} & \\tabincell{c}{34.95 \\\\ 0.774} & \\tabincell{c}{32.55 \\\\ 0.824} & \\tabincell{c}{38.58 \\\\ 0.775} & \\tabincell{c}{31.84 \\\\ 0.602} & \\tabincell{c}{32.11 \\\\ 0.704} & \\tabincell{c}{34.59 \\\\ 0.854} & \\tabincell{c}{34.15 \\\\ 0.784} \\\\\n\\hline\n\\cite{Luo_2015_ICCV} & \\tabincell{c}{32.69 \\\\ 0.767} & \\tabincell{c}{30.23 \\\\ 0.703} & \\tabincell{c}{31.53 \\\\ 0.748} & \\tabincell{c}{32.43 \\\\ 0.820} & \\tabincell{c}{33.73 \\\\ 0.888} & \\tabincell{c}{29.45 \\\\ 0.841} & \\tabincell{c}{35.95 \\\\ 0.784} & \\tabincell{c}{29.45 \\\\ 0.790} & \\tabincell{c}{30.43 \\\\ 0.879} & \\tabincell{c}{31.63 \\\\ 0.864} & \\tabincell{c}{32.99 \\\\ 0.843} \\\\\n\\hline\n\\cite{Li_2016_CVPR} & \\tabincell{c}{31.55 \\\\ 0.701} & \\tabincell{c}{30.45 \\\\ 0.686} & \\tabincell{c}{31.23 \\\\ 0.789} & \\tabincell{c}{32.27 \\\\ 0.691} & \\tabincell{c}{33.34 \\\\ 0.748} & \\tabincell{c}{31.13 \\\\ 0.754} & \\tabincell{c}{36.39 \\\\ 0.681} & \\tabincell{c}{29.54 \\\\ 0.570} & \\tabincell{c}{30.32 \\\\ 0.686} & \\tabincell{c}{32.35 \\\\ 0.786} & \\tabincell{c}{32.42 \\\\ 0.749} \\\\\n\\hline\nOurs & 
\\tabincell{c}{\\textbf{35.46} \\\\ \\textbf{0.886}} & \\tabincell{c}{\\textbf{35.30} \\\\ \\textbf{0.901}} & \\tabincell{c}{\\textbf{35.04} \\\\ \\textbf{0.827}} & \\tabincell{c}{\\textbf{34.86} \\\\ \\textbf{0.832}} & \\tabincell{c}{\\textbf{35.38} \\\\ \\textbf{0.897}} & \\tabincell{c}{\\textbf{36.03} \\\\ \\textbf{0.842}} & \\tabincell{c}{\\textbf{41.31} \\\\ \\textbf{0.846}} & \\tabincell{c}{\\textbf{31.94} \\\\ \\textbf{0.854}} & \\tabincell{c}{\\textbf{33.42} \\\\ \\textbf{0.883}} & \\tabincell{c}{\\textbf{34.91} \\\\ \\textbf{0.916}} & \\tabincell{c}{\\textbf{34.53} \\\\ \\textbf{0.866}} \\\\\n\\hline\n\\end{tabular}\n\\label{tab:psnrssim}\n\\end{table*}\n\n\\begin{figure*}[t]\n\\centering\n\\begin{minipage}{0.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/001_GT}}\n\\end{minipage}\n\\begin{minipage}{0.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd_by_Ding}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd_by_Kang}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd_by_luo}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd_by_li}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd_by_Fu_label}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd_by_Zhang_label}}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR1_I_nd}}\n\\end{minipage} 
\\\\\n\\vspace{0.5mm}\n\\begin{minipage}{0.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/005_GT}}\n\\centerline{(a)}\n\\end{minipage}\n\\begin{minipage}{0.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5}}\n\\centerline{(b)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd_by_Ding}}\n\\centerline{(c)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd_by_Kang}}\n\\centerline{(d)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd_by_luo}}\n\\centerline{(e)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd_by_li}}\n\\centerline{(f)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd_by_Fu_label}}\n\\centerline{(g)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd_by_Zhang_label}}\n\\centerline{(h)}\n\\end{minipage}\n\\begin{minipage}{.1\\linewidth}\n\\centering{\\includegraphics[width=.99\\linewidth]{images\/RR5_I_nd}}\n\\centerline{(i)}\n\\end{minipage}\n\\caption{(a) Groudtruth. (b) Original synthesized rain images. (c) Results by Ding \\emph{et al.} in \\cite{Ding_2015_MTA}.\n(d) Results by Chen \\emph{et al.} in \\cite{Chen_2014_CSVT}. (e) Results by Luo \\emph{et al.} in \\cite{Luo_2015_ICCV}.\n(f) Results by Li \\emph{et al.} in \\cite{Li_2016_CVPR}. (g) Results by Fu \\emph{et al.} in \\cite{Fu_2017_CVPR}.\n(h) Results by Zhang \\emph{et al.} in \\cite{Zhang_2018_CVPR}. 
(i) Results by our method.}\n\label{fig:result_render_compare}\n\end{figure*}\n\n\begin{figure}[t]\n\begin{minipage}{0.48\linewidth}\n\centering{\includegraphics[width=1\linewidth]{images\/PSNR_Comparison}}\n\centerline{(a)}\n\end{minipage}\n\hfill\n\begin{minipage}{.48\linewidth}\n\centering{\includegraphics[width=1\linewidth]{images\/SSIM_Comparison}}\n\centerline{(b)}\n\end{minipage}\n\caption{Objective comparisons with two state-of-the-art deep learning works: (a) PSNR comparison. (b) SSIM comparison.}\n\label{fig:PSNR_SSIM}\n\end{figure}\n\nIn order to verify the effectiveness of our method, several state-of-the-art traditional and deep learning based rain removal works are selected for comparison. The method by Ding \emph{et al.} \cite{Ding_2015_MTA} removes rain streaks in a single image by developing an $L_0$ smoothing filter that is derived from the guided filter by He \emph{et al.} \cite{He_2013_PAMI}. This work produces excellent rain removal results for some kinds of images and maintains good visual quality. Meanwhile, several rain removal works that are based on dictionary learning have appeared in recent years \cite{Fu_2011_ASSP,Kang_2012_TIP,Chen_2014_CSVT}. Among them, the work by Chen \emph{et al.} \cite{Chen_2014_CSVT} produces the best rain removal effect. 
In addition, the two most recent works, by Luo\n\emph{et al.} \cite{Luo_2015_ICCV} and Li \emph{et al.} \cite{Li_2016_CVPR}, respectively, are also selected in our comparisons.\nFor deep learning based rain removal methods, we select the two most recent works \cite{Fu_2017_CVPR} and \cite{Zhang_2018_CVPR}.\nCompared with other deep learning based works, these two works are more robust and obtain better rain-removed visual quality.\n\nWe implement our rain removal algorithm using MATLAB on\nan Intel (R) Xeon (R) CPU E5-2643 v2 @ 3.5 GHz (2 processors) with 64G RAM.\nSome parameters used in our work are: the size of the window in\nEquation (\ref{eq:detect_condition}) is $7 \times 7$;\nthe number of iterations of $K$-means in Section \ref{sec:RainStreaksDetection} is 100;\nthe thresholds $T1$, $T2$ and $\mu$ are 10, 0.08 and 2, respectively;\nthe regularization parameters $\lambda$ and $\eta$ in the loss function (\ref{eq:loss_function1}) are $0.25$ and $0.1$;\nand the number of iterations in IRLS is $3$.\nThese parameters are robust in our experiments,\nalthough the parameter $T1$ can be slightly adjusted for different images.\nBecause rain falls downward\nand its direction $D$ is close to $0$ in most images,\nwe set the threshold $T1$ to $10$ in our paper.\nThe rain direction $D$ can easily be estimated approximately by the user.\nFor rain with a large falling angle (e.g. the sixth row in Fig. \ref{fig:result_compare}),\nthe threshold $T1$ can be changed to a larger value.\n\nWe first test the run-time consumed by the selected methods on images of size $256 \times 256$. Our method takes $28.01$ seconds. Specifically, the initial detection of rain streaks takes $5.30$ seconds; refining the rain streaks by morphology takes $2.19$ seconds; and the majority of the time, $20.02$ seconds, is spent on rain and non-rain layer separation using the quasi-sparsity prior. 
On the same image, the times consumed by the other selected methods are listed in Table \ref{tab:time}. By comparison, our algorithm is the fourth fastest among the selected methods.\n\nBecause the task in this work is to remove rain streaks in single images,\nwe need to evaluate the effectiveness of our algorithm subjectively and objectively. For the purpose of objective evaluation, we synthesize rain images from clean images. Two such ground-truth images and synthesized rain images are shown in Fig. \ref{fig:result_render_compare}(a) and (b), respectively, and the other columns are the corresponding rain-removed results by different state-of-the-art algorithms and our method.\n\nWe also collect many real rain images and present the corresponding rain removal results, as shown in Figs. \ref{fig:result_compare} and \ref{fig:result_compare1}, for subjective assessments.\n\n\begin{table*} [t]\n\centering\n\caption{User Study Results. The Numbers Are the Percentages of Votes Obtained by Each Method.}\n\begin{tabular}{lccccccc}\n\hline\nMethod & \cite{Ding_2015_MTA} & \cite{Chen_2014_CSVT} & \cite{Luo_2015_ICCV} & \cite{Li_2016_CVPR} & \cite{Fu_2017_CVPR} & \cite{Zhang_2018_CVPR} & Ours \\\nPercentage & 5.50\% & 1.25\% & 2.50\% & 3.75\% & 21.00\% & 9.50\% & 56.50\% \\\n\hline\n\end{tabular}\n\label{tab:statistics}\n\end{table*}\n\n\subsection{Objective assessment}\n\nIn order to evaluate the performances of different methods more completely\nand accurately, we synthesize rain images by the method in \cite{Luo_2015_ICCV} and run different rain removal algorithms on these synthesized images. Then, we calculate the PSNR and SSIM \cite{Wang_2004_TIP} values between the rain-removed images and the ground-truth images.\n\nFig. \ref{fig:result_render_compare} shows two examples where each row presents a ground-truth image, the rain image (obtained by synthesis),\nand the rain-removed results by different methods. 
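For reference, the PSNR used in this objective comparison can be computed in a few lines. The sketch below is an illustrative NumPy version assuming an 8-bit dynamic range (the function and variable names are ours; SSIM is omitted here because its windowed formulation is considerably longer):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth image and a
    rain-removed estimate; higher means the estimate is closer to the truth."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy calibration: Gaussian perturbation with sigma = 5 on an 8-bit image
# gives roughly 10*log10(255^2 / 25), i.e. about 34 dB.
rng = np.random.default_rng(1)
truth = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
degraded = np.clip(truth + rng.normal(0.0, 5.0, truth.shape), 0.0, 255.0)
value = psnr(truth, degraded)
```

This back-of-the-envelope mapping between residual error and dB is a useful mental calibration when reading the 30--41 dB range reported in Table \ref{tab:psnrssim}.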
Note that we show the corresponding PSNR\/SSIM values at the top-left corner of each rain-removed image. The PSNR\/SSIM values of more examples by selected traditional methods are shown in Table \\ref{tab:psnrssim}. The comparisons of PSNR\/SSIM with deep learning methods are shown in Fig. \\ref{fig:PSNR_SSIM}.\n\nAccording to the PSNR\/SSIM values, the method by Ding \\emph{et al.} \\cite{Ding_2015_MTA} produces very good results compared with the other traditional methods. Because of the use of an $L_0$ threshold, objects with large structures in the image will usually be preserved well, thus leading to higher SSIM values. In the meantime, rain streaks below the $L_0$ threshold will be removed, leading to higher PSNR values.\n\n\\begin{figure*}[!htb]\n\\centering\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegrap
hics[width=.995\\linewidth]{images\/rain77_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Ding_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Kang_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_luo_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_li_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Fu_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_by_Zhang_part}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain77_I_nd_part}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\
begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain74_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test78_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd_by_luo}}
\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test9_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test113_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linew
idth]{images\/test25_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test25_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\cente
ring{\\includegraphics[width=.995\\linewidth]{images\/test105_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test154_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\b
egin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test26_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110}}\n\\centerline{(a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd_by_Ding}}\n\\centerline{(b)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd_by_Kang}}\n\\centerline{(c)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd_by_luo}}\n\\centerline{(d)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd_by_li}}\n\\centerline{(e)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd_by_Fu}}\n\\centerline{(f)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd_by_Zhang}}\n\\centerline{(g)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test110_I_nd}}\n\\centerline{(h)}\n\\end{minipage}\n\\vspace{0.5mm}\n\\caption{(a) Original rain images. (b) Results by Ding \\emph{et al.} in \\cite{Ding_2015_MTA}.\n(c) Results by Chen \\emph{et al.} in \\cite{Chen_2014_CSVT}. 
(d) Results by Luo \\emph{et al.} in \\cite{Luo_2015_ICCV}.\n(e) Results by Li \\emph{et al.} in \\cite{Li_2016_CVPR}. (f) Results by Fu \\emph{et al.} in \\cite{Fu_2017_CVPR}.\n(g) Results by Zhang \\emph{et al.} in \\cite{Zhang_2018_CVPR}. (h) Results by our proposed method.}\n\\label{fig:result_compare}\n\\end{figure*}\n\nThe method by Chen \\emph{et al.} \\cite{Chen_2014_CSVT} can remove the rain streaks that possess lower pixel intensities, but the rain streaks with higher intensities will remain (the reason will be described later).\nFurthermore, because the HOG descriptor used in this method cannot distinguish rain streaks from tenuous details well, it loses many details (the second image in Fig. \\ref{fig:result_render_compare}).\nFor the above two reasons, the PSNR\/SSIM values are relatively lower than those of the method by Ding \\emph{et al.}\n\nThe work by Luo \\emph{et al.} \\cite{Luo_2015_ICCV} cannot remove rain streaks well. It makes rain streaks more tenuous in size and weaker in intensity. We show the results of Li \\emph{et al.} \\cite{Li_2016_CVPR} in the sixth column of Fig. \\ref{fig:result_render_compare}. This method removes rain streaks quite well. However, a lot of image details have also been removed at the same time. It can be seen from Table \\ref{tab:psnrssim} that these two methods produce lower PSNR and SSIM values.\n\nFinally, it is seen from Table \\ref{tab:psnrssim} that our proposed method consistently produces the best PSNR\/SSIM results for all 11 test images compared with the selected traditional methods. For some test images (5 out of 11), the PSNR value of our method is about 1 dB higher than that of the second-best method (i.e., Ding's method).\n\nWe can see from Fig. \\ref{fig:PSNR_SSIM} that our method produces PSNR\/SSIM values comparable to those of the state-of-the-art\ndeep learning methods. Only for the two rendering rain images shown in Fig. 
\\ref{fig:result_render_compare}, the work by Fu \\emph{et al.} \\cite{Fu_2017_CVPR} removes nearly all rain streaks and keeps image details relatively well. But its PSNR\/SSIM values are slightly lower than ours. The reason is that our method only removes rain streaks at\nthe detected rain pixels and leaves the non-rain pixels nearly unchanged.\nThough a few light rain streaks remain in our results on these two selected images, our method keeps good image details in the majority of the image. The work by Zhang \\emph{et al.} removes rain streaks well, but the image details are seriously lost.\nThat is why this method has high PSNR values, while the SSIM values are low.\nBesides, these two methods also cannot remove all rain streaks in some rendering rain images, but our method can achieve\ngood results, especially for practical images, as will be shown later. As we know, deep learning methods\nare very good at dealing with rendering rain images, because they are trained on them. Real-world images are really challenging\nfor them.\n\n\\subsection{User study}\n\nTo conduct a visual (subjective) evaluation of the performance of the selected methods, we invited 20 viewers (14 males and 6 females, all of whom are undergraduate, master's, or Ph.D. students in the computer vision field) to evaluate the visual quality of different methods in terms of the following three aspects:\n\\begin{itemize}\n\\item less rain residue,\n\\item the maintenance of image details, and\n\\item overall perception.\n\\end{itemize}\nIn the evaluation, $20$ groups of results are selected, and every group involves the results by Ding \\emph{et al.}, Chen \\emph{et al.}, Luo \\emph{et al.}, Li \\emph{et al.}, Fu \\emph{et al.}, Zhang \\emph{et al.}, and our method. To ensure fairness, the results in each group are arranged randomly. 
For each group, the viewers are asked to select only one result which they like most by considering the three criteria together.\n\nThe evaluation result is shown in Table \\ref{tab:statistics}. It is clear that our rain removal results are favored by a majority of viewers (56.50\\%).\n\n\\begin{figure*}[!htb]\n\\centering\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test57_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]
{images\/rain9_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain9_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain17_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphi
cs[width=.995\\linewidth]{images\/rain23_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain23_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewi
dth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain36_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain53_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd_by_li}}\n\\end{minipage}\n\\hfill\
n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/rain73_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd_by_Kang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test8_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd_by_Ding}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd_by_Kang}}\n
\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd_by_luo}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd_by_li}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd_by_Fu}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd_by_Zhang}}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test58_I_nd}}\n\\end{minipage}\n\\vspace{0.5mm}\n\\vfill\n\\begin{minipage}{0.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67}}\n\\centerline{(a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67_I_nd_by_Ding}}\n\\centerline{(b)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67_I_nd_by_Kang}}\n\\centerline{(c)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67_I_nd_by_luo}}\n\\centerline{(d)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67_I_nd_by_li}}\n\\centerline{(e)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67_I_nd_by_Fu}}\n\\centerline{(f)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/test67_I_nd_by_Zhang}}\n\\centerline{(g)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{.115\\linewidth}\n\\centering{\\includegraphics[width=.995\\linewidth]{images\/tes
t67_I_nd}}\n\\centerline{(h)}\n\\end{minipage}\n\\vspace{0.5mm}\n\\caption{(a) Original rain images. (b) Results by Ding \\emph{et al.} in \\cite{Ding_2015_MTA}.\n(c) Results by Chen \\emph{et al.} in \\cite{Chen_2014_CSVT}. (d) Results by Luo \\emph{et al.} in \\cite{Luo_2015_ICCV}.\n(e) Results by Li \\emph{et al.} in \\cite{Li_2016_CVPR}.\n(f) Results by Fu \\emph{et al.} in \\cite{Fu_2017_CVPR}. (g) Results by Zhang \\emph{et al.} in \\cite{Zhang_2018_CVPR}.\n(h) Results by our proposed method.}\n\\label{fig:result_compare1}\n\\end{figure*}\n\n\\subsection{Results analysis}\n\nIn this subsection, we analyze the rain removal effectiveness\nof the different methods. The advantages and disadvantages of the different\nmethods are discussed according to the rain-removed results that\nare obtained by applying the selected methods to practical rain images. Notice that some images employed in these experiments have a large size, so the rain streaks look tenuous.\n\n\\textbf{Method by Ding \\emph{et al.}:} The first row of Fig. \\ref{fig:result_compare} shows a rain image with light rain streaks. The result by Ding \\emph{et al.}, as shown in the second column, seems to have removed the rain streaks quite well at first glance. However, when the picture is zoomed in, it is found that a lot of non-rain details are lost. To verify this point more clearly, a small part of the rain picture and its corresponding rain-removed results by the selected methods are shown in the second row of Fig. \\ref{fig:result_compare}. Now, it becomes obvious that some details of the tree leaves have been removed together with the rain streaks. This is due to the threshold of the $L_0$ filters used in \\cite{Ding_2015_MTA}: some non-rain objects whose size is relatively small are mistaken for rain streaks and removed.\n\nThe third row still shows a light rain image, but the rain streaks are denser. When zoomed in, the detail loss becomes more apparent. 
The heavy rain streaks in the images shown in the sixth, seventh, and eighth rows of Fig. \\ref{fig:result_compare} cannot be removed by the method of Ding \\emph{et al.}. This is because the size of the rain streaks in these images is beyond the preset threshold of the $L_0$ filters. If we set a larger threshold, wide rain streaks will be removed; however, more details in the images will also be removed at the same time. For the light rain images which have fewer tenuous details (the third, fourth, and sixth rows in Fig. \\ref{fig:result_compare1}), this method has satisfactory rain removal effectiveness.\n\n\\textbf{Method by Chen \\emph{et al.}:} The results by Chen \\emph{et al.} are shown in the third column. For the light rain images that have fewer subtle details (such as the image in the fifth row of Fig. \\ref{fig:result_compare} and the third, fourth, and sixth rows in Fig. \\ref{fig:result_compare1}), this method can obtain good rain removal results. However, if the rain images possess subtle details (such as the first, third, and fourth rows of Fig. \\ref{fig:result_compare}), detail loss and image blurring are inevitable. The reason is that the HOG descriptor used here cannot separate rain streaks and subtle details well.\nThe lost details can be seen clearly in the third image of the second row of Fig. \\ref{fig:result_compare}, which is obtained by zooming in on a part of the image in the first row. Moreover, the low-pass filter cannot remove bright\nrain streaks completely. Consequently, the method by Chen \\emph{et al.} cannot deal with heavy rain images (such as the images in the sixth and seventh rows of Fig. \\ref{fig:result_compare}).\n\n\\textbf{Method by Luo \\emph{et al.}:} The results by Luo \\emph{et al.} are in the fourth column of Fig. \\ref{fig:result_compare} and \\ref{fig:result_compare1}. Obviously, this method cannot remove rain streaks well. 
This is due to the limited discriminability of the sparse code used in this work, which is not good at separating a rain image into the rain layer\nand the non-rain layer. However, this method can make the intensity of rain streaks a little weaker. Hence, for the tenuous rain streaks considered in their work, their method seems to have removed rain well. When rain streaks become brighter or wider, they cannot be removed well.\n\n\\textbf{Method by Li \\emph{et al.}:} Li \\emph{et al.} used priors for both the background and the rain layer (which are based on Gaussian mixture models) to remove rain streaks. We show the results by this method in the fifth column. For the images that have few subtle details (the fifth, seventh, ninth, and tenth images in Fig. \\ref{fig:result_compare}, as well as\nthe third, fourth, and sixth images in Fig. \\ref{fig:result_compare1}),\nthis method can obtain good rain-removal effectiveness. However, for rain images that have subtle details (e.g., the first, third, and fourth rows of Fig. \\ref{fig:result_compare}), many subtle details are lost. This point can be seen clearly in the fifth image of the second row of Fig. \\ref{fig:result_compare}.\nAs mentioned above, this image is part of the image in the first row that is zoomed in to see the details more clearly.\n\n\\textbf{Method by Fu \\emph{et al.}:} For the majority of the selected practical images, this work can achieve good results.\nBut there are still some defects. The first apparent one is that this method can cause slight blur for some rainy images,\nsuch as the second and eighth images in Fig. \\ref{fig:result_compare}. That is also the reason that this method has\nlower PSNR\/SSIM values than ours for the images in Fig. \\ref{fig:result_render_compare}. The second is generalization:\nthis method cannot handle some rain images. For example, for the seventh and eighth images in Fig. 
\\ref{fig:result_compare}, the rain streaks are left in the results.\n\n\\textbf{Method by Zhang \\emph{et al.}:} The work by Zhang \\emph{et al.} is the most recent work\npublished at CVPR. We can see that this method faces similar problems to the work by Fu \\emph{et al.} \\cite{Fu_2017_CVPR}.\nDetails are lost seriously for some practical images, especially images with slim details (the last image in Fig.\n\\ref{fig:result_compare} and Fig. \\ref{fig:result_compare1}, respectively; enlarge the images in this paper to see this clearly).\nThis method also cannot deal with some rainy images, and some apparent rain streaks are left in some rain-removed results.\n\n\\textbf{Our work:} The results by our proposed method are shown in the eighth column. Compared with the other traditional rain removal works, our proposed approach achieves better rain removal results. When compared with deep learning based works,\nour method produces comparable results for the majority of rain images. But there are also rain images that the selected deep learning\nbased methods cannot handle well, for which better rain-removed results are obtained by our method.\nBecause our method acquires relatively accurate rain locations, the remaining image details can be preserved well. Besides, the image quasi-sparsity prior offers a robust tool for image recovery. Hence, better PSNR\/SSIM values and good visual quality have been achieved by our proposed method.\n\n\\subsection{Limitations}\n\nExperiments show that our method can deal with the majority of rain images.\nHowever, every algorithm has its drawbacks, and ours is no exception. For some images with non-rain objects that are very similar in shape and color to rain streaks, some mis-detections are inevitable. This will result in the loss of some useful information. Besides, when the rain is very heavy, the rain streaks will combine to produce fog. 
A natural first approach to this situation is to remove the rain streaks with our method first, and then apply a dehazing method to remove the haze caused by the heavy rain. We note that this situation has been discussed in a very recent work \cite{Li_2016_CVPR}. We will continue to work on this situation in the future. Another direction for future work is to further improve the rain detection.\n\n\section{Conclusions}\n\label{sec:Conclusion}\nIn this paper, we have proposed a new method for detecting and removing rain streaks in a single color image. Our results suggest that using morphological image processing to extract connected components and quantifying the characteristics of the extracted connected components by\nprincipal component analysis (PCA) are effective in detecting rain streaks. Once rain streaks are detected, we employ an image sparsity prior to accurately decompose a rain image into the rain layer and non-rain layer, which has also been proven to be effective. In addition, quantitative (objective) evaluations and a user study (subjective) validate the overall rain removal effectiveness of our method, which outperforms four selected traditional methods and is comparable to the most recent deep learning based works, which are widely regarded as the state-of-the-art.\n\n\n\n\n\n\n\n\n\n\n\ifCLASSOPTIONcaptionsoff\n \newpage\n\fi\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzamnz b/data_all_eng_slimpj/shuffled/split2/finalzzamnz new file mode 100644 index 0000000000000000000000000000000000000000..0d9e8d00942f6741e7460160e0f9c0d85a5d97bb --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzamnz @@ -0,0 +1,5 @@ +{"text":"\section{Introduction}\n\label{introduction}\n\n\nIn general relativity and cosmology, our knowledge about spatially\nhomogeneous cosmological models has increased substantially over the\nyears, and we are able to say that, 
for a large number of models,\nthe qualitative behaviour of solutions is now well understood;\nsee~\cite{waiell97} for an overview. The majority of results,\nhowever, concerns solutions of the Einstein equations coupled to a\nperfect fluid (usually with a linear equation of state). It is thus\nimportant to note that these results are in general not robust,\ni.e., not structurally stable, under a change of the matter model;\nsignificant changes of the qualitative behaviour of solutions occur,\nfor instance, for collisionless matter.\n\nSeveral fundamental results on spatially homogeneous diagonal models of\nBianchi type~I with collisionless matter have been obtained in~\cite{ren96}.\nDiagonal locally rotationally symmetric (LRS) models have been investigated\nsuccessfully by using dynamical systems methods, see~\cite{rentod99} for the\ncase of massless particles, and~\cite{renugg00,ren02} for the massive case.\nIn particular, solutions have been found whose qualitative behaviour is\ndifferent from that of any perfect fluid model of the same Bianchi type.\n\nThe purpose of this article is to re-investigate the diagonal (non-LRS)\nBianchi type~I models with collisionless matter. Our analysis is based on\ndynamical systems techniques, which enable us to obtain a more detailed\npicture of the global dynamics than the one previously given.\nIn particular, we show that the dynamical behaviour toward\nthe past singularity of the collisionless matter model\ndiffers considerably from that of the Bianchi type~I perfect fluid model.\n\nThe outline of the paper is as follows. In Section~\ref{einsteinvlasov}\nwe reformulate Einstein's field equations for the diagonal Bianchi\ntype~I case with collisionless matter as a reduced \ndimensionless dynamical system on a compact state space. 
In\nSection~\\ref{invfixed} we give the fixed points of the system and list\nand discuss a hierarchy of invariant subsets of the state space,\nwhich is associated with a hierarchy of monotone functions. In\nSection~\\ref{locglo} we first present the results of the local\ndynamical systems analysis; subsequently, we focus on\nthe analysis of the global dynamics: we establish two theorems that\nformulate the future and past asymptotic behaviour of solutions,\nrespectively. As regards the future asymptotics we show that all\nmodels isotropize asymptotically toward the future. The past\nasymptotic behaviour is more complicated since there exists several\ntypes of past asymptotic behaviour; in particular we establish that\nthe past attractive set resides on a set that contains a so-called\nheteroclinic network. The proof of the theorems is based on methods\nfrom global dynamical systems analysis; in particular, we exploit\nthe hierarchy of monotone functions in conjunction with the\nmonotonicity principle. Finally, in Section~\\ref{conc} we conclude with\nsome remarks about our results and their implications.\nAppendix~\\ref{dynsys} provides a brief introduction to relevant\nbackground material from the theory of dynamical systems; in\nparticular we cite the monotonicity principle. The proofs of some of\nthe statements in the main text are given in Appendix~\\ref{FRWLRS}\nand~\\ref{futureproof}. In Appendix~\\ref{Si0space} we discuss the\nphysical interpretation of one of the most important boundaries of\nour state space formulation.\n\n\n\\section{The reflection-symmetric Bianchi type~I Einstein-Vlasov system}\n\\label{einsteinvlasov}\n\nIn a spacetime with Bianchi type~I symmetry the spacetime metric can\nbe written as\n\\begin{equation}\nd s^2 = -d t^2 + g_{i j}(t) d x^i d x^j\\:,\\quad i,j=1,2,3\\:,\n\\end{equation}\nwhere $g_{i j}$ is the induced Riemannian metric on the spatially\nhomogeneous surfaces $t=\\mathrm{const}$. 
Since the metric is\nconstant on $t=\\mathrm{const}$, it follows that the Ricci tensor of\n$g_{i j}$ vanishes. Einstein's equations, in units $G=1=c$,\ndecompose into the evolution equations,\n\\begin{subequations}\\label{einsteinvlasovsystem}\n\\begin{equation}\\label{evolution}\n\\partial_t g_{i j} = -2 k_{i j} \\:,\\quad\n\\partial_t k^i_{\\:\\, j} = \\mathrm{tr}\\hspace{0.15ex} k\\: k^i_{\\:\\, j} - 8 \\pi T^i_{\\:\\, j} +\n4 \\pi \\delta^i_{\\:\\, j} ( T^k_{\\:\\, k} -\\rho) - \\Lambda \\delta^i_{\\:\\, j}\\:,\n\\end{equation}\nand the Hamiltonian and momentum constraint\n\\begin{equation}\\label{constraints} (\\mathrm{tr}\\hspace{0.15ex} k)^2\n- k^i_{\\:\\, j} k^j_{\\:\\, i} - 16 \\pi \\rho -2 \\Lambda = 0\\:,\\qquad\nj_k = 0\\:.\n\\end{equation}\nHere, $k_{i j}$ denotes the second fundamental form of the surfaces $t=\\mathrm{const}$.\nThe matter variables are defined as components of the energy-momentum\ntensor $T_{\\mu\\nu}$ ($\\mu=0,1,2,3$), according to $\\rho = T_{00}$,\n$j_k= T_{0k}$; $T_{i j}$ denotes the spatial components.\nThe cosmological constant $\\Lambda$ is set to zero in the following;\nthe treatment of the case $\\Lambda>0$ is straightforward once the case $\\Lambda=0$ has been solved,\ncf.~the remarks in the conclusions.\n\nIn this paper we consider collisionless matter (Vlasov matter),\ni.e., an ensemble of freely moving particles described by a\nnon-negative distribution function $f$ defined on the mass shell bundle\n$PM\\subseteq TM$ of the spacetime; for simplicity we consider\nparticles with equal mass $m$.\nThe spacetime coordinates $(t,x^i)$\nand the spatial components $v^i$ of the four-momentum $v^\\mu$\n(measured w.r.t.\\ $\\partial\/\\partial x^\\mu$) provide local\ncoordinates on $PM$ so that $f = f(t,x^i,v^j)$. Compatibility with\nBianchi type~I symmetry forces the distribution function $f$ to\nbe homogeneous, i.e., $f = f(t, v^j)$. 
The evolution equation for\n$f$ is the Vlasov equation (the Liouville equation)\n\\begin{equation}\\label{Vlasovequation}\n\\partial_t f + \\frac{v^j}{v^0} \\partial_{x^j} f -\n\\frac{1}{v^0} \\Gamma^j_{\\mu\\nu} v^\\mu v^\\nu \\partial_{v^j} f =\n\\partial_t f + 2 k^j_{\\:\\, l} v^l \\partial_{v^j} f = 0 \\:.\n\\end{equation}\nThe energy-momentum tensor associated with the distribution\nfunction $f$ is given by\n\\[\nT^{\\mu\\nu} = \\int f v^\\mu v^\\nu \\mathrm{vol}_{PM}\\:,\n\\]\nwhere $\\mathrm{vol}_{PM} = (\\det g)^{1\/2} v_0^{-1} d v^1 d v^2 d\nv^3$ is the induced volume form on the mass shell; $v_0$ is\nunderstood as a function of the spatial components, i.e., $v_0^2 =\nm^2 + g_{i j} v^i v^j$. The components $\\rho$, $j_k$, and $T_{i j}$,\nwhich enter in~\\eqref{evolution} and~\\eqref{constraints} can thus be\nwritten as\n\\begin{align}\n\\label{matterrho}\n\\rho & = \\int f \\left(m^2 + g^{i j} v_i v_j\\right)^{1\/2}\n(\\det g)^{-1\/2} d v_1 d v_2 d v_3\\:, \\\\\n\\label{matterj}\nj_k & = \\int f v_k (\\det g)^{-1\/2} d v_1 d v_2 d v_3 \\:,\\\\\n\\label{matterT}\nT_{i j} & = \\int f v_i v_j\n\\left(m^2 + g^{k l} v_k v_l\\right)^{-1\/2}\n(\\det g)^{-1\/2} d v_1 d v_2 d v_3 \\:.\n\\end{align}\n\\end{subequations}\nThe Einstein-Vlasov system~\\eqref{einsteinvlasovsystem} is usually\nconsidered for particles of mass $m>0$, however, the system also\ndescribes massless particles if we set $m=0$. 
(For a detailed\nintroduction to the Einstein-Vlasov system we refer to~\\cite{and05}\nand~\\cite{ren04}.)\n\nThe general spatially homogeneous solution\nof the Vlasov equation~(\\ref{Vlasovequation}) in\nBianchi type~I is\n\\begin{equation}\\label{fisf0}\nf(t,v^i) = f_0(v_i)\\:,\n\\end{equation}\nwhere the $v_i$ are the covariant components of the momenta and\n$f_0$ is an arbitrary non-negative function, see~\\cite{maamah90}.\n(By inserting~(\\ref{fisf0})\ninto~(\\ref{Vlasovequation}) and using that $v_i = g_{i j}(t) v^j$\nit is easy to check that $f_0(v_i)$ is a solution.)\nThe momentum constraint in~(\\ref{constraints}) then reads\n\\begin{equation}\\label{momcons}\n\\int f_0(v_i) v_k d v_1 d v_2 d v_3 = 0\\:.\n\\end{equation}\nHenceforth, for simplicity, $f_0$ is assumed to be compactly supported.\n\nThere exists a subclass of Bianchi type~I Einstein-Vlasov models\nthat is naturally associated with the constraint~\\eqref{momcons}:\nthe class of ``reflection-symmetric'' (or ``diagonal'') models.\nThe following symmetry conditions are imposed on the initial data:\n\\begin{subequations}\\label{reflsymm}\n\\begin{equation}\\label{reflsymmf0}\nf_0(v_1, v_2,v_3) = f_0(-v_1,-v_2,v_3) = f_0(-v_1,v_2,-v_3) = f_0(v_1,-v_2,-v_3)\\:,\n\\end{equation}\n\\begin{equation}\\label{reflsymmgk}\ng_{i j}(t_0)\\:, k_{i j}(t_0) \\quad\\text{diagonal}\\:.\n\\end{equation}\n\\end{subequations}\nThese conditions ensure that $T_{i j}(t_0)$ is diagonal, hence $g_{i\nj}$, $k_{i j}$, and $T_{i j}$ are diagonal for all times by~(\\ref{einsteinvlasovsystem}). \nIn the present paper, we will be\nconcerned with this class of reflection-symmetric models.\n\nThe Einstein-Vlasov system~\\eqref{einsteinvlasovsystem} thus reduces\nto a system for six unknowns, the diagonal components of the metric\n$g_{i i}(t)$ and the second fundamental form $k^i_{\\:\\, i}(t)$ (no\nsummation). The equations are~\\eqref{evolution} and the Hamiltonian\nconstraint in~\\eqref{constraints}. 
The initial data consists of\n$g_{i i}(t_0)$, $k^i_{\\:\\, i}(t_0)$; in addition we prescribe a\ndistribution function $f_0(v_i)$ that provides the source terms in\nthe equations via~\\eqref{matterrho} and~\\eqref{matterT}.\n\nIn the following we reformulate the Einstein-Vlasov system as a\ndimensionless system on a compact state space.\nTo that end we introduce new variables and modified matter quantities. Let\n\\begin{equation}\\label{hx}\nH := -\\frac{\\mathrm{tr}\\hspace{0.15ex} k}{3}\\:, \\quad\\qquad x := \\sum_i g^{i i}\\:,\n\\end{equation}\nand define the dimensionless variables\n\\begin{subequations}\n\\begin{align}\n\\label{defdimless}\n& s_i := \\frac{g^{i i}}{x}\\; , & & \\Sigma_i := -\\frac{k^i_{\\:\\, i}}{H} - 1\\; ,\n& z & := \\frac{m^2}{m^2 + x}\\:, \\\\[1ex]\n\\text{where} \\qquad & \\!\\sum_i s_i =1\\; , & & \\sum_i\\Sigma_i = 0\\:.\n\\end{align}\n\\end{subequations}\nThe transformation from the variables $(g_{ii}, k^i_{\\:\\, i})$ to\n$(s_i,\\Sigma_i, x, H)$, where $(s_i,\\Sigma_i)$ are subject to the\nabove constraints, is one-to-one. (Note that $x$ can be obtained\nfrom $z$ when $m>0$.) 
By distinguishing one direction ($1$, $2$, or\n$3$), one can decompose the $s_i$ and simultaneously introduce a\ntrace-free adaptation of the shear to new $\Sigma_\pm$ variables as is\ndone in, e.g.,~\cite{waiell97}; however, since Bianchi type~I does\nnot have a preferred direction, we will refrain from doing so here.\n\nNext we replace the matter quantities $\rho$, $T^i_{\:\, i}$ (no\nsummation) by the dimensionless quantities\n\begin{equation}\n\Omega :=\frac{8\pi\rho}{3 H^2}\,,\qquad w_i := \frac{T^i_{\:\,\ni}}{\rho}\,,\qquad w := \frac{1}{3} \sum_i w_i = \frac{1}{3}\n\frac{\sum_i T^i_{\:\, i}}{\rho}\,.\n\end{equation}\nExpressed in the new variables, $w_i$ can be written as\n\begin{equation}\label{omegai}\nw_i = \frac{(1-z) s_i {\displaystyle\int} f_0\, v_i^2\n\left[z+(1-z) \sum_k s_k \, v_k^2\right]^{-1\/2} d v_1 d v_2 d v_3}%\n{{\displaystyle\int} f_0 \left[z+(1-z) \sum_k s_k \,\nv_k^2\right]^{1\/2} d v_1 d v_2 d v_3}\:.\n\end{equation}\n\nFinally, let us introduce a new dimensionless time variable $\tau$ defined by\n\begin{equation}\n\partial_\tau = H^{-1}\partial_t\:;\n\end{equation}\nhenceforth a prime denotes differentiation w.r.t.\ $\tau$.\n\nWe now rewrite the Einstein-Vlasov equations as a set of dimensional\nequations that decouple for dimensional reasons and a reduced system\nof dimensionless coupled equations on a compact state space. 
The\ndecoupled dimensional equations are\n\\begin{subequations}\n\\begin{align}\n\\label{Heq}\nH^\\prime &= -3 H \\left[1 -\\frac{\\Omega}{2} (1-w)\\right]\\\\\n\\label{xeq}\nx^\\prime &= -2 x \\left(1 + \\sum_k \\Sigma_k s_k \\right)\\:.\n\\end{align}\n\\end{subequations}\nThe reduced dimensionless system consists of the Hamiltonian\nconstraint, cf.~\\eqref{constraints},\n\\begin{equation}\\label{omega}\n1- \\Sigma^2-\\Omega = 0\\:, \\qquad\\text{where}\\quad \\Sigma^2 :=\n\\sfrac{1}{6} \\sum_k \\Sigma_k^2\\:,\n\\end{equation}\nwhich we use to solve for $\\Omega$,\nand a coupled system of evolution equations\n\\begin{subequations}\\label{eq}\n\\begin{align}\n\\label{Sigeq}\n\\Sigma_i^\\prime & = -3 \\Omega \\left[ \\frac{1}{2} (1-w) \\Sigma_i -(w_i - w)\\right]\\\\\n\\label{seq}\ns_i^\\prime & = -2 s_i \\left[\\Sigma_i - \\sum_k \\Sigma_k s_k \\right] \\\\\n\\label{zeq}\nz^\\prime & = 2 z\\,(1 - z)\\left(\\,1 + \\sum_k s_k \\, \\Sigma_k\\,\\right)\\:.\n\\end{align}\n\\end{subequations}\nIn the massive case $m>0$ the decoupled equation for $x$ is\nredundant since the equation for $z$ is equivalent. In the massless\ncase $m=0$ we have $z=0$; hence, although the equation for $x$ does\nnot contribute to the dynamics, $x$ is needed in order to\nreconstruct the spatial metric from the new variables.\n\nThe dimensionless dynamical system~(\\ref{eq}) together with the\nconstraint~\\eqref{omega} describes the full dynamics of the\nEinstein-Vlasov system of Bianchi type~I. In the massive case the state space\nassociated with this system is the space of the variables\n$\\{(\\Sigma_i, s_i, z)\\}$, i.e.,\n\\begin{equation}\\label{statespace}\n\\mathcal{X} :=\n\\left\\{(\\Sigma_i, s_i,z)\\:\\big|\\: \\left(\\Sigma^2 < 1\\right)\n\\wedge \\left(s_i > 0\\right) \\wedge \\left(0< z < 1\\right)\\right\\}\\:,\n\\end{equation}\nwhere the $s_i$ and $\\Sigma_i$ are subject to the constraints\n$\\sum_k\\, s_k=1\\:,\\,\\sum_k\\,\\Sigma_k = 0$. 
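To make the reduced system concrete, the following sketch integrates the coupled system~\eqref{eq} numerically, with $\Omega$ eliminated via the constraint~\eqref{omega} and the $w_i$ evaluated from~\eqref{omegai} for a toy, compactly supported, reflection-symmetric $f_0$ sampled at finitely many momenta. All numerical choices here (the discrete sample standing in for $f_0$, the initial data, the step size) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy discrete sample standing in for a compactly supported f_0, symmetrized so
# that f_0(v1,v2,v3)=f_0(-v1,-v2,v3)=f_0(-v1,v2,-v3)=f_0(v1,-v2,-v3), cf. (reflsymmf0);
# this also makes the momentum constraint (momcons) hold exactly.
rng = np.random.default_rng(1)
V = rng.uniform(0.2, 1.0, size=(25, 3))                 # base momenta (v_1, v_2, v_3)
signs = np.array([[1, 1, 1], [-1, -1, 1], [-1, 1, -1], [1, -1, -1]])
V = (V[:, None, :] * signs[None, :, :]).reshape(-1, 3)  # reflection-symmetric sample
wts = np.full(len(V), 1.0 / len(V))                     # weights playing the role of f_0

def w_vec(s, z):
    """Dimensionless pressures w_i of (omegai), as discrete sums over the sample."""
    E = np.sqrt(z + (1.0 - z) * (V**2 @ s))             # sqrt(z + (1-z) sum_k s_k v_k^2)
    num = (1.0 - z) * s * (wts @ (V**2 / E[:, None]))
    return num / (wts @ E)

def rhs(y):
    """Right-hand side of (Sigeq), (seq), (zeq); Omega from the constraint (omega)."""
    Sig, s, z = y[:3], y[3:6], y[6]
    w_i = w_vec(s, z)
    w = w_i.sum() / 3.0
    Omega = 1.0 - (Sig**2).sum() / 6.0
    dSig = -3.0 * Omega * (0.5 * (1.0 - w) * Sig - (w_i - w))
    ds = -2.0 * s * (Sig - s @ Sig)
    dz = 2.0 * z * (1.0 - z) * (1.0 + s @ Sig)
    return np.concatenate([dSig, ds, [dz]])

# Initial data satisfying sum Sigma_i = 0, sum s_i = 1, Sigma^2 < 1, 0 < z < 1.
y = np.array([0.8, -0.3, -0.5, 0.2, 0.3, 0.5, 0.5])
h = 0.02
for _ in range(2500):                                   # classical RK4, tau from 0 to 50
    k1 = rhs(y); k2 = rhs(y + h / 2 * k1)
    k3 = rhs(y + h / 2 * k2); k4 = rhs(y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

Sig, s, z = y[:3], y[3:6], y[6]
print((Sig**2).sum() / 6.0, z)   # Sigma^2 -> 0 and z -> 1: approach to FS^1
```

Running the loop toward large $\tau$ exhibits the future asymptotics established later in the paper: $\Sigma^2\to 0$ and $z\to 1$, i.e., the orbit approaches the fixed point set $\mathrm{FS}^1$, while the constraints $\sum_k\Sigma_k=0$ and $\sum_k s_k=1$ are preserved to integration accuracy.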
(The inequalities for\n$s_i$ and $\Sigma_i$ follow from the definition~\eqref{defdimless}\nand the constraint~\eqref{omega}, respectively.) The state space\n$\mathcal{X}$ is thus five-dimensional.\n\nIt will turn out eventually that all solutions asymptotically\napproach the boundaries of $\mathcal{X}$: $z=0$, $z=1$, $s_i=0$,\n$\Omega = 0$ ($\Leftrightarrow \Sigma^2 =1$). This suggests including\nthese sets in the analysis, whereby we obtain a compact\nstate space $\bar{\mathcal{X}}$.\n\nThe equations on the invariant subset $z=0$ of $\bar{\mathcal{X}}$\nare identical to the coupled dimensionless system in the case of\nmassless particles $m=0$. We will therefore refer to the subset\n$z=0$ as the massless subset; it represents the four-dimensional\nstate space for the massless case.\n\nWe conclude this section by looking at some variables in more\ndetail. The inequality $\Sigma^2 \leq 1$ together with the\nconstraint $\sum_k \Sigma_k =0$ results in $|\Sigma_i| \leq 2$ for\nall $i$. Note that equality is achieved when\n$(\Sigma_1,\Sigma_2,\Sigma_3) = (\pm 2 ,\mp 1, \mp 1)$ and\npermutations thereof, cf.~Figure~\ref{kasneri}. 
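Spelled out (a small added verification), the bound follows for, e.g., $i=1$ from eliminating $\Sigma_2,\Sigma_3$ via $\Sigma_2+\Sigma_3=-\Sigma_1$:

```latex
\Sigma_2^2+\Sigma_3^2 \;\geq\; \tfrac{1}{2}\left(\Sigma_2+\Sigma_3\right)^2
= \tfrac{1}{2}\,\Sigma_1^2
\quad\Longrightarrow\quad
6\,\Sigma^2 \;=\; \Sigma_1^2+\Sigma_2^2+\Sigma_3^2 \;\geq\; \tfrac{3}{2}\,\Sigma_1^2\:,
```

so that $\Sigma^2\leq 1$ yields $\Sigma_1^2\leq 4$; equality requires $\Sigma_2=\Sigma_3=-\Sigma_1/2$ together with $\Sigma^2=1$, which gives exactly the configurations $(\pm 2,\mp 1,\mp 1)$ above.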
The matter quantities\nsatisfy\n\begin{equation}\label{wrelations}\n0 \leq w \leq \textfrac{1}{3} \:, \qquad 0 \leq w_i \leq 3 w \leq 1\:.\n\end{equation}\nThe equalities hold at the boundaries of the state space: $w =0$ iff\n$z=1$, and $w = \textfrac{1}{3}$ iff $z=0$; $w_i = 0$ iff $s_i = 0$,\nand $w_i = 3 w$ iff $s_i =1$ (provided that $z<1$; for $z=1$,\n$w_i = 3 w = 0$).\n\nThere exist a number of useful auxiliary equations that complement the system~\eqref{eq}:\n\begin{align}\n\label{omegaeq}\n\Omega^\prime & =\n\Omega\, \left[\,3(1-w)\Sigma^2 - \sum_k w_k\,\Sigma_k\,\right] \:,\\\n\label{rhoeq} \rho' & = -\rho\, [\,3(1+w) + \sum_k w_k\,\Sigma_k\,]\n\leq -2\rho\:.\n\end{align}\nThe inequality in~\eqref{rhoeq} follows by using $\Sigma_i \geq -2$\n$\forall i$ and~\eqref{wrelations}. This shows that $\rho$ increases\nmonotonically toward the past, which yields a matter singularity,\ni.e., $\rho\rightarrow\infty$ for $\tau\rightarrow -\infty$. 
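Spelled out (an added intermediate step), the estimate behind the inequality in~\eqref{rhoeq} is:

```latex
\sum_k w_k\,\Sigma_k \;\geq\; -2\sum_k w_k \;=\; -6 w
\quad\Longrightarrow\quad
3(1+w)+\sum_k w_k\,\Sigma_k \;\geq\; 3-3w \;\geq\; 2\:,
```

where the first step uses $\Sigma_k\geq -2$ and $w_k\geq 0$, and the last step uses $w\leq\textfrac{1}{3}$ from~\eqref{wrelations}.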
It is\noften beneficial to consider the equations of the original variables\nas auxiliary equations, e.g., $(g^{ii})^\\prime= -2g^{ii}\\,(1 +\n\\Sigma_i)$.\n\n\n\n\\section{Fixed points, invariant subsets, and monotone functions}\n\\label{invfixed}\n\n\\subsection{Fixed points}\n\\label{fixed}\n\nThe dynamical system~\\eqref{eq} possesses a number of fixed points,\nall residing on the boundaries of the state space;\nsee~Table~\\ref{fixtab}.\n\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|c|c|}\\hline\nFixed point set & defined by \\\\ \\hline\nFS$^1$ & $z=1$, $\\Sigma_j = 0 \\:\\,\\forall j$ \\\\\nKC$_i^1$ & $z=1$, $\\Sigma^2 =1$, $s_i=1$, $s_j = 0\\:\\,\\forall j \\neq\ni$\n\\\\ \\hline\nTS$_i$ & $0\\leq z\\leq 1$, $\\Sigma_i = 2$, $\\Sigma_j =-1 \\:\\,\\forall\nj \\neq i$, $s_i=0$ \\\\ \\hline\nF$^0$ & $z=0$, $\\Sigma_j = 0 \\:\\,\\forall j$, $w_j = 1\/3 \\:\\,\\forall j$ \\\\\nD$_i^0$ & $z=0$, $s_i = 0$, $\\Sigma_i =-1$, $\\Sigma_{j}=1\/2 =w_j\\:\\,\n\\forall j\\neq i$ \\\\\nQL$_i^0$ & $z=0$, $\\Sigma_i = -2$, $\\Sigma_j = 1 \\:\\,\\forall j \\neq\ni$,\n$s_i=0$ \\\\\nKC$_i^0$ & $z=0$, $\\Sigma^2 =1$, $s_i=1$, $s_j = 0 \\:\\,\\forall j\n\\neq i$ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{The fixed point sets. The range of the index $i$ is always\n$i=1\\ldots 3$. 
The superscript denotes the value of $z$; the first\nkernel letter describes the type of fixed point set; if there is no\nsecond kernel letter the fixed point set is just a point; if there\nis a second kernel letter this letter denotes the dimensionality and\ncharacter of the set --- S refers to surface, L stands for line, and\nC for circle.} \\label{fixtab}\n\\end{table}\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{Ti1}[cc][cc][1][0]{$\\text{T}^0_{i1}$}\n\\psfrag{Ti2}[cc][cc][1][0]{$\\text{T}^0_{i2}$}\n\\psfrag{Ti3}[cc][cc][1][0]{$\\text{T}^0_{i3}$}\n\\psfrag{Qi1}[cc][cc][1][0]{$\\text{Q}^0_{i1}$}\n\\psfrag{Qi2}[cc][cc][1][0]{$\\text{Q}^0_{i2}$}\n\\psfrag{Qi3}[cc][cc][1][0]{$\\text{Q}^0_{i3}$}\n\\psfrag{S1}[cc][cc][1.2][0]{$\\Sigma_1$}\n\\psfrag{S2}[cc][cc][1.2][0]{$\\Sigma_2$}\n\\psfrag{S3}[cc][cc][1.2][0]{$\\Sigma_3$}\n\\psfrag{S1m}[cc][cc][1][-90]{$\\Sigma_1=-1$}\n\\psfrag{S2m}[cc][cc][1][30]{$\\Sigma_2=-1$}\n\\psfrag{S3m}[cc][cc][1][-30]{$\\Sigma_3=-1$}\n\\psfrag{S1p}[cc][cc][0.6][90]{$\\Sigma_1=1$}\n\\psfrag{S2p}[cc][cc][0.6][30]{$\\Sigma_2=1$}\n\\psfrag{S3p}[cc][cc][0.6][-30]{$\\Sigma_3=1$}\n\\psfrag{s1}[cc][cc][1.0][90]{$\\longleftarrow\\: s_1 =0\\:\n\\longrightarrow$} \\psfrag{s2}[cc][cc][1.0][35]{$\\longleftarrow \\:s_2\n=0 \\:\\longrightarrow$} \\psfrag{s3}[cc][cc][1.0][-35]{$\\longleftarrow\n\\:s_3 =0 \\:\\longrightarrow$}\n\\includegraphics[width=0.6\\textwidth]{kasnercircle.eps}\n\\caption{The disc $\\Sigma^2\\leq 1$ and the Kasner circle $\\text{KC}_i^0$.} \\label{kasneri}\n\\end{center}\n\\end{figure}\n\n$\\text{FS}^1$ is a surface of fixed points that correspond to the\nflat isotropic dust solution. The circles $\\text{KC}_i^{1,0}$\nconsist of fixed points with constant $\\Sigma_ i$, satisfying\n$\\Sigma^2=1,\\sum_k\\,\\Sigma_k=0$, see Figure~\\ref{kasneri}; these fixed\npoints correspond to Kasner solutions. The fixed points on\n$\\text{TS}_i$ are associated with the Taub representation of the\nflat Minkowski spacetime. 
The intersection of $\\text{TS}_i$ with\n$(z=0)$ yields a line of fixed points which we denote\n$\\text{TL}_i^0$. The fixed points on $\\text{QL}_i^0$ correspond to\nthe non-flat LRS Kasner solutions.\n$\\text{F}^0$ is a fixed point that corresponds to the flat isotropic\nradiation solution. In Appendix~\\ref{FRWLRS} we prove that $\\text{F}^0$\nis well-defined through the equations $w_1 = w_2= w_3 = 1\/3$ (which\nare to be solved for $(s_1,s_2,s_3)\\,$). The location of\n$\\text{F}^0$ depends on the chosen distribution function, since the\nequations $w_1 = w_2= w_3 = 1\/3$ involve $f_0$.\nAnalogously, the equations $w_j = 1\/2$ ($\\forall \\: j\\neq i$) yield\na unique solution, the fixed point D$_i^0$; the location of the\npoint D$_i^0$ also depends on $f_0$. The fixed points D$_i^0$ are\nassociated with a scale-invariant LRS solution (related to a\ndistributional $f_0$; see Appendix~\\ref{Si0space} for details).\n\nThe LRS points on $\\text{KC}_i^0$ play a particularly important role\nin the following, which motivates that they are given individual\nnames. We denote the three Taub points on KC$_i^0$ defined by\n$\\Sigma_j = 2$ (and thus $\\Sigma_l = -1$ $\\forall l\\neq j$) by\n$\\text{T}_{ij}^0$, while we denote the three non-flat LRS point on\nKC$_i^0$ given by $\\Sigma_j = -2$ (and thus $\\Sigma_l = 1$ $\\forall\nl\\neq j$) by Q$_{ij}^0$. The Kasner circles $\\text{KC}_j^0$ and\n$\\text{KC}_k^0$ are connected by the lines $\\text{TL}_i^0$ and\n$\\text{QL}_i^0$; the end points of the line $\\text{TL}_i^0$ are the\nTaub points $\\text{T}_{ji}^0$ and $\\text{T}_{ki}^0$; analogously,\nthe end points of $\\text{QL}_i^0$ are the points $\\text{Q}_{ji}^0$\nand $\\text{Q}_{ki}^0$. (Here, $(i,j,k)$ is an arbitrary permutation\nof $(1,2,3)$.) The remaining points $\\text{T}_{ll}^0$ and\n$\\text{Q}_{ll}^0$ ($l=1\\ldots 3$) do not lie on any of the fixed\npoint sets $\\text{TL}^0_i$ or $\\text{QL}^0_i$. 
This fixed point\nstructure is depicted in Figure~\\ref{fixedp}.\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{TL1}[cc][cc][0.8][0]{$\\text{TL}^0_1$}\n\\psfrag{TL2}[cc][cc][0.8][0]{$\\text{TL}^0_2$}\n\\psfrag{TL3}[cc][cc][0.8][0]{$\\text{TL}^0_3$}\n\\psfrag{QL1}[cc][cc][0.8][0]{$\\text{QL}^0_1$}\n\\psfrag{QL2}[cc][cc][0.8][0]{$\\text{QL}^0_2$}\n\\psfrag{QL3}[cc][cc][0.8][0]{$\\text{QL}^0_3$}\n\\psfrag{T21}[cc][cc][0.6][0]{$\\text{T}^0_{21}$}\n\\psfrag{T22}[cc][cc][0.6][0]{$\\text{T}^0_{22}$}\n\\psfrag{T23}[cc][cc][0.6][0]{$\\text{T}^0_{23}$}\n\\psfrag{Q21}[cc][cc][0.6][0]{$\\text{Q}^0_{21}$}\n\\psfrag{Q22}[cc][cc][0.6][0]{$\\text{Q}^0_{22}$}\n\\psfrag{Q23}[cc][cc][0.6][0]{$\\text{Q}^0_{23}$}\n\\psfrag{T11}[cc][cc][0.6][0]{$\\text{T}^0_{11}$}\n\\psfrag{T12}[cc][cc][0.6][0]{$\\text{T}^0_{12}$}\n\\psfrag{T13}[cc][cc][0.6][0]{$\\text{T}^0_{13}$}\n\\psfrag{Q11}[cc][cc][0.6][0]{$\\text{Q}^0_{11}$}\n\\psfrag{Q12}[cc][cc][0.6][0]{$\\text{Q}^0_{12}$}\n\\psfrag{Q13}[cc][cc][0.6][0]{$\\text{Q}^0_{13}$}\n\\psfrag{T31}[cc][cc][0.6][0]{$\\text{T}^0_{31}$}\n\\psfrag{T32}[cc][cc][0.6][0]{$\\text{T}^0_{32}$}\n\\psfrag{T33}[cc][cc][0.6][0]{$\\text{T}^0_{33}$}\n\\psfrag{Q31}[cc][cc][0.6][0]{$\\text{Q}^0_{31}$}\n\\psfrag{Q32}[cc][cc][0.6][0]{$\\text{Q}^0_{32}$}\n\\psfrag{Q33}[cc][cc][0.6][0]{$\\text{Q}^0_{33}$}\n\\psfrag{KC1}[cc][cc][1.2][0]{$\\text{KC}^0_1$}\n\\psfrag{KC2}[cc][cc][1.2][0]{$\\text{KC}^0_2$}\n\\psfrag{KC3}[cc][cc][1.2][0]{$\\text{KC}^0_3$}\n\\psfrag{s1}[cc][cc][1.0][90]{$\\longleftarrow\\: s_1 =0\\:\n\\longrightarrow$} \\psfrag{s2}[cc][cc][1.0][35]{$\\longleftarrow \\:s_2\n=0 \\:\\longrightarrow$} \\psfrag{s3}[cc][cc][1.0][-35]{$\\longleftarrow\n\\:s_3 =0 \\:\\longrightarrow$}\n\\includegraphics[width=0.8\\textwidth]{fixedpoints9.eps}\n\\caption{A schematic depiction of the fixed points on $z=0$.\nThe underlying structure is the three sides of the $s_i$-triangle $s_1+s_2+s_3 =1$:\neach point represents a disc $\\Sigma^2\\leq 1$; the vertices contain the Kasner 
circles $\\text{KC}_i^0$.\nBold lines denote the lines of fixed points $\\text{TL}_i^0$, $\\text{QL}_i^0$,\nand $\\mathrm{KC}_i^0$.} \\label{fixedp}\n\\end{center}\n\\end{figure}\n\n\n\n\\subsection{Invariant subsets and monotone functions}\n\\label{inv}\n\nThe dynamical system~\\eqref{eq} possesses a hierarchy of invariant\nsubsets and monotone functions. Since this feature of the dynamical\nsystem will turn out to be of crucial importance in the analysis of\nthe global dynamics, we give a detailed discussion.\n\n$\\mathcal{X}$: On the full (interior) state space $\\mathcal{X}$ define\n\\begin{subequations}\\label{m1eqs}\n\\begin{equation}\\label{m1}\nM_{(1)} = (s_1 s_2 s_3)^{-1\/3} \\frac{z}{1-z} \\:.\n\\end{equation}\nA straightforward computation shows\n\\begin{equation}\nM_{(1)}^\\prime = 2 M_{(1)}\\:,\n\\end{equation}\n\\end{subequations}\ni.e., $M_{(1)}$ is strictly monotonically increasing along orbits in\n$\\mathcal{X}$. Note that $M_{(1)}$ is intimately related to the\nspatial volume density since $M_{(1)} = m^2 \\det(g_{ij})^{1\/3}$.\n\n$\\mathcal{Z}^1$: This subset is characterized by $z=1$. Since $w_i = w =0$,\nthe equations for $s_i$ decouple, and the essential dynamics is\ndescribed by the equations $\\Sigma_i^\\prime = -(3\/2)(1-\\Sigma^2)\n\\Sigma_i$. (Note that these equations are identical to the Bianchi\ntype~I equations for dust --- it is therefore natural to refer to\n$\\mathcal{Z}^1$ as the dust subset.) 
Explicit solutions for these equations\ncan be obtained by noting that\n$\\Sigma_1\\propto\\Sigma_2\\propto\\Sigma_3$ for all solutions, or by\nusing that $\\Omega^\\prime = 3\\Sigma^2\\,\\Omega$.\n\n$\\mathcal{Z}^0$: This subset is the massless boundary set $z=0$.\nSince $w =1\/3$, the dynamical system~\\eqref{eq} reduces to\n\\begin{equation}\\label{z0eq}\n\\Sigma_i^\\prime = - \\Omega\\,[\\,1+\\Sigma_i -3 w_i\\,]\\:,\\qquad\ns_i^\\prime = -2 s_i \\,[\\,\\Sigma_i - \\sum\\nolimits_k s_k\\, \\Sigma_k \\,]\\:.\n\\end{equation}\nConsider the function\n\\begin{subequations}\\label{m2eqs}\n\\begin{equation}\\label{m2}\nM_{(2)} =\\left(1 -\\Sigma^2\\right)^{-1} (s_1 s_2 s_3)^{-1\/6} \\int f_0\n\\left[ \\sum\\nolimits_k s_k v_k^2\\right]^{1\/2} d v_1 d v_2 d v_3\\:.\n\\end{equation}\nThe derivative is\n\\begin{equation}\nM_{(2)}^\\prime = -2 \\Sigma^2 M_{(2)}\\:,\n\\end{equation}\nwhich yields monotonicity when $\\Sigma^2 \\neq 0$. If $\\Sigma^2 = 0$, then\n\\begin{equation}\nM_{(2)}^\\prime = 0 \\:, \\quad M_{(2)}^{\\prime\\prime} = 0 \\:, \\quad\nM_{(2)}^{\\prime\\prime\\prime} = -6 M_{(2)} \\sum_i \\left(w_i\n-\\textfrac{1}{3}\\right)^2 \\:.\n\\end{equation}\n\\end{subequations}\nHence, $M_{(2)}$ is strictly monotonically decreasing everywhere on\n$z=0$, except at the fixed point $\\text{F}^0$ (for which\n$\\Sigma^2=0$ and $w_1 = w_2 = w_3 = 1\/3$), where $M_{(2)}$ attains a\n(positive) minimum. The latter follows from the fact that\n$(1-\\Sigma^2)^{-1}$ is minimal at the point $\\Sigma_i = 0\n\\:\\,\\forall i$ and that $\\partial M_{(2)}\/\\partial s_i = (2\ns_i)^{-1} [w_i - 1\/3] M_{(2)}$.\n\n$\\mathcal{S}_i$ ($i=1,2,3$): These invariant boundary subsets are defined by\n$s_i=0$ (which yields $w_i=0$). 
There exists a monotone function on\n$\\mathcal{S}_1$,\n\\begin{equation}\\label{m3}\nM_{(3)}= (s_2 s_3)^{-1\/2}\\,\\frac{z}{1-z}\\, ,\\qquad M_{(3)}^\\prime =\n(2-\\Sigma_1)\\,M_{(3)}\\, ;\n\\end{equation}\nanalogous functions can be obtained on $\\mathcal{S}_2$ and $\\mathcal{S}_3$ through\npermutations.\n\n$\\mathcal{K}$: This boundary subset is the vacuum subset defined by\n$\\Omega=0$ (or equivalently $\\Sigma^2=1$). The $\\Sigma_i$ are\nconstant on this subset, which completely determines the dynamics of\nthe $s_i$ variables (via~\\eqref{seq} or via the auxiliary equation\nfor $g^{ii}$). The Bianchi type~I vacuum solution is the familiar\nKasner solution and we thus refer to $\\mathcal{K}$ as the Kasner subset.\n\nIntersections of the above boundary subsets yield boundary subsets\nof lower dimensions; those that are relevant for the global dynamics\nare discussed in the following.\n\n$\\mathcal{S}_i^0$ and $\\mathcal{S}_i^1$: The intersection between the subset $\\mathcal{S}_i$\nand $\\mathcal{Z}^0$ and $\\mathcal{Z}^1$ yields three-dimensional invariant subsets\n$(s_i=0) \\cap (z=0)$ and $(s_i=0) \\cap (z=1)$, respectively. On\n$\\mathcal{S}_i^0$ there exists a monotonically decreasing function:\n\\begin{equation}\\label{m4}\nM_{(4)} = (1+\\Sigma_i)^2\\:, \\qquad M_{(4)}^\\prime = - 2 \\Omega M_{(4)} \\:.\n\\end{equation}\n\n$\\mathcal{S}_{ij}$: These subsets are defined by setting $s_i =0$ and $s_j\n=0$ ($j\\neq i$), i.e., $\\mathcal{S}_{ij} = \\mathcal{S}_i \\cap \\mathcal{S}_j$. 
On $\\mathcal{S}_{ij}$,\nwe have $s_k =1$ ($k \\neq i,j$) and $w_k = 3 w$, because $w_i = w_j\n=0$.\n\n$\\mathcal{D}_i^0$: The subsets $\\mathcal{S}_i^0$ admit two-dimensional invariant\nsubsets $\\mathcal{D}_i^0$ characterized by $(z=0) \\cap (s_i=0) \\cap (\\Sigma_i =-1)$.\nOn $\\mathcal{D}_1^0$ consider the function\n\\begin{subequations}\\label{m5eqs}\n\\begin{equation}\nM_{(5)} = \\left(2 + \\Sigma_2 \\Sigma_3 \\right)^{-1} (s_2 s_3)^{-1\/4}\n\\int f_0 \\left[ s_2 v_2^2 + s_3 v_3^2\\right]^{1\/2} d v_1 d v_2 d\nv_3\\:;\n\\end{equation}\nanalogous functions can be defined on $\\mathcal{D}_2^0$ and $\\mathcal{D}_3^0$.\nEqs.~\\eqref{z0eq} imply\n\\begin{equation}\\label{m5d}\nM_{(5)}^\\prime = -\\textfrac{1}{12}\\, M_{(5)} \\left[ \\left(1 -\n2\\Sigma_2\\right)^2 + \\left(1 - 2\\Sigma_3\\right)^2\\right] \\,,\n\\end{equation}\ni.e., $M_{(5)}$ is strictly monotonically decreasing unless\n$\\Sigma_2 = 1\/2 = \\Sigma_3$. In the special case $\\Sigma_2 = 1\/2 =\n\\Sigma_3$ we obtain\n\\begin{equation}\nM_{(5)}' = 0 \\:, \\quad M_{(5)}^{\\prime\\prime} = 0 \\:, \\quad\nM_{(5)}^{\\prime\\prime\\prime} =\n-\\textfrac{27}{8} M_{(5)} \\left[ \\left(w_2 -\n\\textfrac{1}{2}\\right)^2 +\\left(w_3 -\n\\textfrac{1}{2}\\right)^2\\right] \\:.\n\\end{equation}\n\\end{subequations}\nHence, $M_{(5)}$ is strictly monotonically decreasing everywhere on\n$\\mathcal{D}_1^0$ except for at the fixed point $\\text{D}_1$, for which\n$\\Sigma_2=\\Sigma_3=w_2=w_3=\\frac{1}{2}$, cf.~Table~\\ref{fixtab}. The\nfunction $M_{(5)}$ possesses a positive minimum at $\\text{D}_1$.\nThis is because $(2+\\Sigma_2 \\Sigma_3)^{-1}$ is minimal at the point\n$\\Sigma_2 = \\Sigma_3 = 1\/2$ and $\\partial M_{(5)}\/\\partial s_i = (2\ns_i)^{-1} [w_i - 1\/2] M_{(5)}$ for $i=2,3$.\n\n$\\mathcal{K}^0$: The intersection of the Kasner subset $\\mathcal{K} = (\\Sigma^2 =1)$\nwith the $z=0$ subset yields a 3-dimensional subset, $\\mathcal{K}^0$. 
This\nsubset will play a prominent role in the analysis of the past\nasymptotic behaviour of solutions.\n\nThe remaining subsets we consider are not located at the boundaries\nof the state space, but in the interior; these subsets are invariant\nunder the flow of the dynamical system, if the distribution function\n$f_0$ satisfies certain symmetry conditions.\n\n$\text{LRS}_i$: We define the subset $\text{LRS}_1$ of $\mathcal{X}$\nthrough the equations $\Sigma_2=\Sigma_3$, $w_2=w_3$;\n$\text{LRS}_{2,3}$ are defined analogously. In order for these sets\nto be invariant under the flow of the dynamical system, the\ndistribution function $f_0$ must satisfy conditions that ensure\ncompatibility with the LRS symmetry, see Appendix~\ref{FRWLRS} for\ndetails. For an orbit lying on $\text{LRS}_1$, Equation~(\ref{seq})\nentails that $s_2(\tau) \propto s_3(\tau)$ (where the\nproportionality constant exhibits a dependence on $f_0$, which\nenters through the equation $w_2 =w_3$), and hence $g_{22}\propto\ng_{33}$; by rescaling the coordinates one can achieve\n$g_{22}=g_{33}$, i.e., a line element in an explicit LRS form.\nHence, the $\text{LRS}_i$ subsets, if present as invariant subsets,\ncomprise the solutions with LRS geometry.\n\nFRW: If $f_0$ is compatible with an isotropic geometry, see\nAppendix~\ref{FRWLRS} for details, the one-dimensional subset\ncharacterized by the equations $\Sigma_i=0$ $\forall i$ and $w_1\n=w_2=w_3 =w$ is an invariant subset (in fact: orbit), the FRW\nsubset. The equations $\Sigma_i=0$ yield $s_{i}=\mathrm{const}$,\nwhereby we obtain a Friedmann-Robertson-Walker (FRW) geometry, since\nthe spatial coordinates can be rescaled so that\n$g_{ij}\propto\delta_{ij}$. Note that the location in $s_i$ of the\nFRW subset depends on $f_0$, since the equations $w_1 = w_2= w_3$,\nwhich are to be solved for $(s_1,s_2,s_3)$, involve $f_0$,\ncf.~Appendix~\ref{FRWLRS}. 
Remarkably, in the massless case the\nexistence of a FRW solution (which corresponds to the fixed point\n$\\text{F}^0$) does not require any symmetry conditions on $f_0$.\n\n\n\\section{Local and global dynamics}\n\\label{locglo}\n\n\\subsection{Local dynamics}\n\nLet us consider smooth reflection-symmetric\nBianchi type~I Vlasov solutions that approach fixed point sets when\n$\\tau\\rightarrow -\\infty$.\n\n\\begin{theorem}\\label{locthm}\nIn the massive (massless) case there exists\n\\begin{itemize}\n\\item[(a)] a single orbit that approaches (corresponds\nto) $\\mathrm{F}^0$,\n\\item[(b)] three equivalent one-parameter sets of orbits (three single orbits)\nthat approach $\\mathrm{D}_i^0$, $i=1 \\ldots 3$,\n\\item[(c)] one three-parameter\n(two-parameter) set of orbits that approaches $\\mathrm{QL}_1^0$;\n$\\mathrm{QL}_2^0$ and $\\mathrm{QL}_3^0$ yield equivalent sets,\n\\item[(d)] one four-parameter (three-parameter) set of orbits that\napproaches the part of {\\rm KC}$_1^0$ defined by $1<\\Sigma_1<2$;\n{\\rm KC}$_2^0$ and {\\rm KC}$_3^0$ yield equivalent sets.\n\\end{itemize}\n\\end{theorem}\n\n\\proof The statements of the theorem follow from the local stability\nanalysis of the fixed point sets F$^0$, D$_i^0$, $\\text{QL}_i^0$,\n$\\text{KC}_i^0$, when combined with the Hartman-Grobman and the\nreduction theorem, since the fixed points F$^0$, D$_i^0$ are\nhyperbolic and $\\text{QL}_i^0$, $\\text{KC}_i^0$ are transversally\nhyperbolic. This requires the dynamical system to be $\\mathcal{C}^1$\nand this leads to some restrictions on $f_0$. However, it is\npossible to obtain an alternative proof that does not require such\nrestrictions. 
Such a proof can be obtained from the hierarchical\nstructure of invariant sets; we will refrain from making the details\nexplicit here, since our analysis of the global dynamics below\ncontains all essential ingredients implicitly.\n\n\\textit{Interpretation of Theorem~\\ref{locthm} (massive case)}:\nA three-parameter set of solutions \nconverges to every non-LRS Kasner solution as $t\\rightarrow 0$.\n(In the state space description three equivalent sets of orbits approach three equivalent\ntransversally stable Kasner arcs that cover all non-LRS Kasner\nsolutions; the \nequivalence reflects the freedom of permuting the coordinates.)\nFurthermore, a three-parameter set of\nsolutions approaches the non-flat LRS Kasner solution. \nHence, in total, a four-parameter set of solutions asymptotically approaches \nnon-flat Kasner states. \nThere exist special solutions with\nnon-Kasner behaviour toward the singularity: one solution\nisotropizes toward the singularity and a one-parameter set of solutions\napproaches a non-Kasner LRS solution of the type~\\eqref{Disol} \n(three equivalent one-parameter\nsets of orbits approach three equivalent non-Kasner LRS fixed points\nassociated with this solution).\nFor the latter solutions \n$\\Omega=3\/4$; these solutions cannot be interpreted as perfect\nfluid solutions since they possess anisotropic pressures.\n\nIn the following we show that the list of Theorem~\\ref{locthm} is\nalmost complete: there exist no other attractors toward the singularity,\nwith the exception of a heteroclinic network that connects the flat LRS-Kasner\npoints.\n\n\n\\subsection{Global dynamics}\n\\label{globaldynamics}\n\n\\begin{theorem}\\label{futurethm}\nAll orbits in the interior of the state space $\\mathcal{X}$ of\nmassive particles [state space $\\mathcal{Z}^0$ of massless\nparticles] converge to {\\rm FS}$^1$ [{\\rm F}$^0$] when\n$\\tau\\rightarrow +\\infty$; i.e., all smooth reflection-symmetric\nBianchi type~I Vlasov solutions isotropize toward 
the future.\n\\end{theorem}\n\nA proof of this theorem has been given in~\\cite{ren96}.\nIn Appendix~\\ref{futureproof} we present an alternative proof based on dynamical\nsystems techniques.\n\n\\begin{theorem}\\label{alphathm}\nThe $\\alpha$-limit set of an orbit in the interior of the state\nspace is one of the fixed points $\\mathrm{F}^0$, $\\mathrm{D}_i^0$,\n$\\mathrm{QL}_i^0$, $\\mathrm{KC}_i^0$, see Theorem~\\ref{locthm}, or\nit is the heteroclinic network $\\mathcal{H}^0$. The $\\alpha$-limit set of a\ngeneric orbit resides on the union of the fixed point sets {\\rm\nKC}$_i^0$ and possibly the heteroclinic network $\\mathcal{H}^0$.\n\\end{theorem}\n\n\\begin{remark}\nA heteroclinic network is defined as a compact connected\nflow-invariant set that is indecomposable (all points are connected\nby pseudo-orbits), has a finite nodal set (the set of recurrent\npoints is a finite union of disjoint compact connected\nflow-invariant subsets), and has finite depth (taking the\n$\\alpha$\/$\\omega$-limit iteratively yields the set of recurrent\npoints after a finite number of iterative steps); for details\nsee~\\cite{Ashwin\/Field:1999} and references therein. A simple case\nis a heteroclinic network of depth $1$ whose nodes are equilibrium\npoints: it can be regarded as a collection of entangled heteroclinic\ncycles. The heteroclinic network $\\mathcal{H}^0$ is of the latter type; it\nwill be introduced in the proof of the theorem.\n\\end{remark}\n\n\nThe remainder of this section is concerned with the proof of Theorem~\\ref{alphathm}.\nThe first step in the proof is to gain a detailed understanding of the dynamics\non the relevant invariant subspaces of the dynamical system.\n\n\\subsubsection*{Dynamics on $\\mathcal{S}_i^0$}\n\n\\begin{lemma}\\label{lemmaSi0}\nConsider an orbit in the interior of $\\mathcal{S}_i^0$. 
Its $\\alpha$-limit\nset is either a fixed point on {\\rm KC}$_j^0$ or {\\rm KC}$_k^0$\n($i\\neq j \\neq k$), {\\rm QL}$_i^0$ or {\\rm TL}$_i^0$, or it is the\nheteroclinic cycle $\\mathcal{H}_i^0$. The $\\omega$-limit set consists of the\nfixed point {\\rm D}$_i^0$.\n\\end{lemma}\n\n\\begin{remark}\nThe heteroclinic cycle $\\mathcal{H}_i^0$ will be defined\nin~\\eqref{heterocycle}.\n\\end{remark}\n\n\\proof Without loss of generality we consider $\\mathcal{S}_1^0$, which can\nbe described by the variables\n\\begin{equation}\n0< s_2 < 1 \\;\\, (s_3 = 1- s_2)\n\\quad\\text{ and }\\quad \\Sigma_1, \\Sigma_2,\n\\Sigma_3 \\quad\\left( \\sum\\nolimits_i \\Sigma_i = 0,\n\\Sigma^2 < 1\\right) \\:;\n\\end{equation}\nhence $\\mathcal{S}_1^0$ is represented by the interior of a cylinder, cf.~Figure~\\ref{cylinder}.\nThe boundary of $\\mathcal{S}_1^0$ consists of the lateral boundary $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$,\nthe base $\\mathcal{S}_{12}^0$, and the top surface $\\mathcal{S}_{13}^0$.\n\nSince $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$ is part of $\\mathcal{K}$, it follows that\n$\\Sigma_i \\equiv \\mathrm{const}$ for all orbits on $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$.\nWe observe that $s_2$ is monotonically increasing\n(decreasing) when $\\Sigma_2 < \\Sigma_3$ ($\\Sigma_2 > \\Sigma_3$), since\n$s_2^\\prime = -2 s_2 (1-s_2)(\\Sigma_2 -\\Sigma_3)$;\nthe two domains are separated by the lines of fixed points\nTL$_1^0$ and QL$_1^0$, see Figure~\\ref{cylinder}.\n\nThe key equations to understand the flow on $\\mathcal{S}_{12}^0$ are\n\\begin{equation}\n\\Omega^\\prime = \\Omega (2 \\Sigma^2 - \\Sigma_3)\\quad\\text{ and } \\quad\n\\Sigma_3^\\prime = \\Omega (2-\\Sigma_3)\\, .\n\\end{equation}\nFrom the first equation it follows that all points on\nKC$_3^0$ are transversally hyperbolic repelling fixed points\nexcept for T$_{33}^0$; from the second\nequation we infer that T$_{33}^0$ is the attractor of the entire\ninterior of $\\mathcal{S}_{12}^0$.\nSimilarly, 
T$_{22}^0$ is the attractor on $\\mathcal{S}_{13}^0$, see Figure~\\ref{cylinder}.\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{A}[cc][cc][1.2][0]{$\\text{TL}^0_1$}\n\\psfrag{B}[cc][rc][1.2][0]{$\\text{QL}^0_1$}\n\\psfrag{K2}[cc][cc][1.2][0]{$\\text{KC}^0_2$}\n\\psfrag{K3}[cc][cc][1.2][0]{$\\text{KC}^0_3$}\n\\psfrag{T2}[cc][cc][0.9][0]{$\\text{T}_{32}^0$}\n\\psfrag{T3}[cc][cc][0.9][0]{$\\text{T}_{23}^0$}\n\\psfrag{P2}[cc][cc][0.9][0]{$\\text{T}_{22}^0$}\n\\psfrag{P3}[cc][cc][0.9][0]{$\\text{T}_{33}^0$}\n\\psfrag{S1}[cc][cc][0.8][0]{$\\Sigma_1$}\n\\psfrag{S2}[cc][cc][0.8][0]{$\\Sigma_2$}\n\\psfrag{S3}[cc][cc][0.8][0]{$\\Sigma_3$}\n\\psfrag{Si}[cc][cc][0.8][0]{$\\Sigma_i$}\n\\psfrag{s2}[cc][cc][0.8][0]{$s_2$}\n\\includegraphics[width=0.9\\textwidth]{cylinder2.eps}\n\\caption{Flow on the boundaries and on the invariant subset\n$\\Sigma_1 = -1$ on $\\mathcal{S}_1^0$. The fixed point on $\\Sigma_1=-1$ is\nthe point D$_1^0$; the heteroclinic cycle $\\mathcal{H}_1^0$ consists of the\nfixed points $\\mathrm{T}_{22}^0$, $\\mathrm{T}_{32}^0$,\n$\\mathrm{T}_{33}^0$, $\\mathrm{T}_{23}^0$, and the connecting\norbits.}\n\\label{cylinder}\n\\end{center}\n\\end{figure}\n\nThe plane $\\mathcal{D}_1^0$, defined by $\\Sigma_1 = -1$, is an invariant subset in $\\mathcal{S}_1^0$.\nIn the interior of the plane we find the fixed point\nD$_1^0$; the boundary consists of a heteroclinic cycle $\\mathcal{H}_1^0$,\n\\begin{equation}\\label{heterocycle}\n\\mathcal{H}_1^0: \\mathrm{T}_{22}^0 \\rightarrow \\mathrm{T}_{32}^0 \\rightarrow\n\\mathrm{T}_{33}^0 \\rightarrow \\mathrm{T}_{23}^0 \\rightarrow\n\\mathrm{T}_{22}^0\\, .\n\\end{equation}\n(Note that analogous cycles $\\mathcal{H}_2^0$ and $\\mathcal{H}_3^0$ exist\non $\\mathcal{S}_2^0$ and $\\mathcal{S}_3^0$, respectively.)\nThe function $M_{(5)}$ is monotonically decreasing on $\\mathcal{D}_1^0$, cf.~\\eqref{m5eqs}.\nApplication of the monotonicity principle, see Appendix~\\ref{dynsys},\nyields that $\\text{D}_1^0$ is the 
$\\omega$-limit and that $\\mathcal{H}_1^0$ is the\n$\\alpha$-limit for all orbits on $\\mathcal{D}_1^0$, cf.~Figure~\\ref{cylinder}.\n\nConsider now an orbit in $\\mathcal{S}_1^0$ with $\\Sigma_1 \\neq -1$.\nThe function $M_{(4)} = (1+\\Sigma_1)^2$ is monotonically decreasing\non $\\mathcal{S}_1^0$, cf.~\\eqref{m4}. The monotonicity\nprinciple implies that the $\\omega$-limit lies on\n$\\Sigma_1 = -1$ or $\\Sigma^2 =1$ (but $\\Sigma_1 \\neq \\pm 2$).\nSince the logarithmic derivative of $\\Omega$ is positive everywhere on $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$\n(except at $\\text{T}^0_{22}$ and $\\text{T}^0_{33}$), i.e.,\n$\\Omega^{-1}\\Omega^\\prime|_{\\Omega =0} = 2 -\\sum_k w_k \\Sigma_k> 0$,\nit follows that the ``wall'' $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$ of the cylinder\nis repelling everywhere away from $\\Sigma_1 = -1$.\nConsequently, the\n$\\omega$-limit of the orbit cannot lie on $\\Sigma^2 =1$\nbut is contained in $\\Sigma_1 = -1$.\nThe fixed point $\\text{D}_1^0$ is a hyperbolic sink, as we conclude\nfrom the dynamics on $\\Sigma_1 = -1$ and from\n$(1+\\Sigma_1)^{-1} (1+ \\Sigma_1)^\\prime|_{\\text{D}_1^0} = -3\/4$.\nTherefore, the a priori possible $\\omega$-limit sets on $\\Sigma_1 = -1$\nare $\\text{D}_1^0$ and the heteroclinic cycle $\\mathcal{H}_1^0$.\n\nTo prove that $\\text{D}_1^0$ is the actual $\\omega$-limit we again\nconsider the function $M_{(5)}$. However, we no longer restrict its\ndomain of definition to $\\mathcal{D}_1^0$, but view it as a function\non $\\mathcal{S}_1^0$; we obtain\n\\begin{equation}\\label{M52}\n12 M_{(5)}^\\prime = -M_{(5)} \\left[(\\Sigma_1 + 2 \\Sigma_2)^2 +\n(\\Sigma_1+2 \\Sigma_3)^2 + 6 (\\Sigma_1+1)^2 - 6(\\Sigma_1+1)\\right]\\:.\n\\end{equation}\nThe bracket is positive when $\\Sigma_1 < -1$; hence $M_{(5)}$ is\ndecreasing when $\\Sigma_1 < -1$. This prevents orbits with $\\Sigma_1\n< -1$ from approaching $\\mathcal{H}_1^0$, since the cycle is characterized\nby $M_{(5)} = \\infty$. 
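As a cross-check (a one-line computation, not part of the original argument), the constraint $\sum_i \Sigma_i = 0$ makes the relation between~\eqref{M52} and~\eqref{m5d} explicit:

```latex
% Decomposition of the bracket in (M52):
6(\Sigma_1+1)^2 - 6(\Sigma_1+1) = 6\,\Sigma_1(\Sigma_1+1) \:, \qquad
\left.\Bigl[(\Sigma_1 + 2\Sigma_2)^2 + (\Sigma_1 + 2\Sigma_3)^2\Bigr]\right|_{\Sigma_1=-1}
= (1-2\Sigma_2)^2 + (1-2\Sigma_3)^2 \:.
```

For $\Sigma_1 < -1$ both factors of $6\,\Sigma_1(\Sigma_1+1)$ are negative, so this term is positive and hence the whole bracket in~\eqref{M52} is positive; on $\Sigma_1 = -1$ the term vanishes and, since then $\Sigma_2 + \Sigma_3 = 1$, equation~\eqref{M52} reduces to~\eqref{m5d}.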
Now suppose that there exists an orbit in\n$\\Sigma_1 > -1$ whose $\\omega$-limit is $\\mathcal{H}_1^0$. At late times\nthe trajectory shadows the cycle; hence, for late times, the bracket\nin~\\eqref{M52} is almost always positive along the trajectory --\nonly when the trajectory passes through a small neighbourhood of\n$(\\Sigma_1,\\Sigma_2,\\Sigma_3) = (-1,1\/2,1\/2)$ is the bracket\nmarginally negative. Since the trajectory spends large amounts of\ntime near the fixed points and the passages from one fixed point to\nanother become shorter and shorter in proportion,\nit follows that at late times $M_{(5)}$ is decreasing along the\norbit (with ever shorter periods of limited increase). This\ncontradicts the assumption that the orbit is attracted by the\nheteroclinic cycle. We therefore draw the conclusion that\n$\\text{D}_1^0$ is the global sink on $\\mathcal{S}_1^0$.\n\nConsider again an orbit in $\\mathcal{S}_1^0$ with $\\Sigma_1 \\neq -1$.\nInvoking the monotonicity principle with the function $M_{(4)}$ we\nfind that the $\\alpha$-limit of the orbit must be located on\n$\\Sigma^2 =1$, $\\Sigma_1 \\neq -1$. From the analysis of the flow on\nthe boundaries of the cylinder we obtain that all fixed points on\n$\\Sigma^2=1$ except for T$_{22}^0$ and T$_{33}^0$ are transversally\nhyperbolic. The fixed points on KC$_2^0$ with $\\Sigma_2 < \\Sigma_3$\nand the points on KC$_3^0$ with $\\Sigma_2\n> \\Sigma_3$ are saddles; the fixed points on KC$_2^0$ with\n$\\Sigma_2 > \\Sigma_3$ and those on KC$_3^0$ with $\\Sigma_2 <\n\\Sigma_3$ are transversally hyperbolic sources (except for\nT$_{22}^0$, T$_{33}^0$): every point attracts a one-parameter set of\norbits from $\\mathcal{S}_1^0$ as $\\tau\\rightarrow -\\infty$. In contrast,\neach fixed point on TL$_1^0$ and QL$_1^0$ is a source for exactly\none orbit. 
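On the lateral boundary $\mathcal{S}_1^0 \cap \mathcal{K}^0$ the $\Sigma_i$ are constant, so the equation $s_2^\prime = -2 s_2 (1-s_2)(\Sigma_2 - \Sigma_3)$ quoted earlier in the proof decouples into a logistic-type ODE. A minimal numerical sketch (the Kasner values $\Sigma = (0, -\sqrt{3}, \sqrt{3})$ below are an arbitrary admissible choice, for illustration only) confirms the monotone drift of $s_2$ when $\Sigma_2 < \Sigma_3$:

```python
import numpy as np

def s2_flow(s2_0, sigma2, sigma3, tau_max=4.0, n=8000):
    """RK4 integration of s2' = -2 s2 (1 - s2) (sigma2 - sigma3)
    for constant (Kasner) shear values sigma2, sigma3."""
    f = lambda s: -2.0 * s * (1.0 - s) * (sigma2 - sigma3)
    h = tau_max / n
    s, traj = s2_0, [s2_0]
    for _ in range(n):
        k1 = f(s); k2 = f(s + 0.5 * h * k1)
        k3 = f(s + 0.5 * h * k2); k4 = f(s + h * k3)
        s += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s)
    return np.array(traj)

# Sigma_2 < Sigma_3: s_2 increases monotonically toward 1, consistent
# with the sign discussion separating the regimes at TL_1^0 and QL_1^0.
traj = s2_flow(0.1, sigma2=-np.sqrt(3.0), sigma3=np.sqrt(3.0))
assert np.all(np.diff(traj) > 0) and traj[-1] > 0.95
```

With $\Sigma_2 > \Sigma_3$ the same sketch shows the mirror-image monotone decay of $s_2$ toward $0$.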
The structure of the flow on $\\Sigma^2=1$ implies that\nthe $\\alpha$-limit of the orbit in $\\mathcal{S}_1^0$ with $\\Sigma_1 \\neq -1$\nmust be one of the transversally hyperbolic sources.\n\nThis establishes Lemma~\\ref{lemmaSi0}.\n\n\n\\subsubsection*{Dynamics on $\\mathcal{K}^0$}\n\nThe invariant subset $\\mathcal{K}^0$ is defined by setting $z=0$ and $\\Sigma^2=1$;\nit can be represented by the Cartesian\nproduct of the circle ($\\Sigma^2=1$) in the $\\Sigma_i$-space times\nthe $s_i$-triangle given by $\\{0 < s_i < 1 \\;\\forall i,\\: \\sum_k s_k =1\\}$;\nthe flow on $\\mathcal{K}^0$ is depicted in Figure~\\ref{sigmaquadgleich1}.\n\n\\subsection*{Global dynamics in the massless case}\n\nLet $\\gamma$ be an arbitrary orbit in the interior of the state\nspace $\\mathcal{Z}^0$ and assume that $\\alpha(\\gamma)$ contains a point\n$\\mathrm{P}$ in the interior of $\\mathcal{K}^0$ such that $\\Sigma_i \\neq \\pm 2$\nfor all $i$. Then $\\alpha(\\mathrm{P})$ is contained in\n$\\alpha(\\gamma)$ as well; the dynamics on $\\mathcal{K}^0$ implies that\n$\\alpha(\\mathrm{P})$ is a fixed point $\\mathrm{K}_\\mathrm{P}$ on\n$\\mathcal{K}^0$. Evaluating $\\Omega^{-1} \\Omega^\\prime|_{\\mathrm{K}_\\mathrm{P}}\n= 2 - \\sum_k w_k \\Sigma_k > 0$ yields that $\\mathrm{K}_\\mathrm{P}$ is a\ntransversally hyperbolic source on the whole space $\\mathcal{Z}^0$. Since\n$\\alpha(\\gamma)$ contains the transversally hyperbolic source\n$\\mathrm{K}_\\mathrm{P}$, that fixed point necessarily constitutes\nthe entire $\\alpha$-limit set, i.e., $\\alpha(\\gamma)\n=\\mathrm{K}_\\mathrm{P}$. This is in contradiction to our assumption\n$\\alpha(\\gamma) \\ni \\mathrm{P}$. The omitted cases $\\Sigma_i = \\pm\n2$ for some $i$ will be dealt with next.\n\nSuppose that $\\Sigma_i = -2$ for one index $i$. Assume that\n$\\mathrm{P}$ lies in $\\alpha(\\gamma)$, therefore\n$\\alpha(\\mathrm{P})$ is contained in $\\alpha(\\gamma)$ as well. The\ndynamics on $\\mathcal{K}^0$ implies that $\\alpha(\\mathrm{P})$ is a fixed\npoint $\\mathrm{Q}_{\\mathrm{P}}$ on QL$_i^0$,\ncf.~Figure~\\ref{sigmaquadgleich1}. This point is a transversally\nhyperbolic source; $\\Omega^{-1} \\Omega^\\prime|_{\\text{QL}_i^0} = 1$\nin this case. By the same argument as above we obtain a\ncontradiction to the assumption $\\alpha(\\gamma) \\ni \\mathrm{P}$.\n\nFinally suppose that $\\Sigma_i = 2$ for one index $i$. When we\nassume that $\\mathrm{P}$ is in $\\alpha(\\gamma)$, then the\n$\\omega$-limit $\\omega(\\mathrm{P})$ is contained in\n$\\alpha(\\gamma)$. From Figure~\\ref{sigmaquadgleich1} we see that\n$\\omega(\\mathrm{P})$ is a fixed point $\\mathrm{T}_{\\mathrm{P}}$ on\nTL$_i^0$. 
The point $\\mathrm{T}_{\\mathrm{P}}$ is a transversally\nhyperbolic saddle, since $\\Omega^{-1} \\Omega^\\prime|_{\\text{TL}_i^0}\n= 3$, and there exists exactly one orbit that emanates from it,\nnamely the orbit that connects $\\mathrm{T}_{\\mathrm{P}}$ with\n$\\text{D}_i$ in $\\mathcal{S}_i^0$. Since $\\mathrm{T}_{\\mathrm{P}} \\in\n\\alpha(\\gamma)$, that orbit must also be contained in\n$\\alpha(\\gamma)$. This is in contradiction to the previous result:\n$\\alpha(\\gamma)$ cannot contain interior points of $\\mathcal{S}_i^0$. Hence\nour assumption $\\alpha(\\gamma) \\ni \\mathrm{P}$ was false: the\n$\\alpha$-limit of $\\gamma$ cannot contain any interior point of\n$\\mathcal{K}^0$.\n\nOur analysis results in the following statement: There exist four\nspecial orbits, one trivial orbit corresponding to the fixed point\n$\\text{F}^0$ and three orbits, the orbits $\\delta^0_i$, that\nconverge to the fixed points $\\text{D}^0_i \\in \\mathcal{D}^0_i$. The\n$\\alpha$-limit set of every orbit $\\gamma$ in $\\mathcal{Z}^0$ different from\n$\\text{F}^0$ and $\\delta^0_i$ must be located on the boundaries of\nthe spaces $\\mathcal{S}_i^0$ and $\\mathcal{K}^0$, i.e., on the union of the\nboundaries of the cylinders $\\mathcal{S}_i^0$, which we denote by\n$\\partial\\mathcal{S}^0=\\partial \\mathcal{S}_1^0 \\cup\n\\partial \\mathcal{S}_2^0 \\cup\n\\partial \\mathcal{S}_3^0$.\nThe set $\\partial\\mathcal{S}^0$ is depicted in Figure~\\ref{fixedp}: it\ncomprises the lateral surfaces of the cylinders and the base\/top\nsurfaces.\n\nAll fixed points on $\\partial\\mathcal{S}^0$ are transversally hyperbolic except\nfor the points $\\text{T}_{ii}^0$:\n$\\text{TL}_i^0$ consists of transversally hyperbolic saddles; in\ncontrast, the fixed points on $\\text{QL}_i^0$ are transversally\nhyperbolic sources; points on $\\text{KC}_i^0$ with $\\Sigma_i\n> 1$, $\\Sigma_i \\neq 2$ are sources while those with $\\Sigma_i < 1$\nare saddles.\nCombining the analysis of the preceding 
sections,\nsee~Figs.~\\ref{cylinder} and~\\ref{sigmaquadgleich1}, we obtain, more\nspecifically: each point on $\\text{QL}_i^0$ is a source for a\none-parameter family of orbits that emanate into the interior of\n$\\mathcal{Z}^0$, and each point on $\\text{KC}_i^0$ with $\\Sigma_i >\n1$ ($\\Sigma_i \\neq 2$) is the source for a two-parameter family.\n(The points with $\\Sigma_i =1$ on $\\text{KC}_i^0$ are the two points\n$\\text{Q}^0_{ij} \\in \\text{QL}_j^0$ and $\\text{Q}^0_{ik}\\in\n\\text{QL}_k^0$. Each of these two points is a transversally\nhyperbolic source for a one-parameter family of orbits, however,\nthose orbits are not interior orbits, but remain on the boundary of\n$\\mathcal{Z}^0$.)\n\nThe non-transversally hyperbolic fixed points $\\text{T}_{ii}^0$ are\npart of a special structure that is present on $\\partial\\mathcal{S}^0$: the\nset $\\partial\\mathcal{S}^0$ exhibits a robust heteroclinic network $\\mathcal{H}^0$\n(of depth $1$), see, e.g.,~\\cite{Ashwin\/Field:1999} for a discussion\nof heteroclinic networks; the network $\\mathcal{H}^0$ is depicted in\nFigure~\\ref{network}; a schematic depiction is given in\nFigure~\\ref{networkschematic}. 
In particular we observe that the\nheteroclinic cycles $\\mathcal{H}_i^0$ of the spaces $\\mathcal{S}_i^0$ appear as\nheteroclinic subcycles of the network.\n\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{TL1}[cc][cc][0.8][0]{$\\text{TL}^0_1$}\n\\psfrag{TL2}[cc][cc][0.8][0]{$\\text{TL}^0_2$}\n\\psfrag{TL3}[cc][cc][0.8][0]{$\\text{TL}^0_3$}\n\\psfrag{QL1}[cc][cc][0.8][0]{$\\text{QL}^0_1$}\n\\psfrag{QL2}[cc][cc][0.8][0]{$\\text{QL}^0_2$}\n\\psfrag{QL3}[cc][cc][0.8][0]{$\\text{QL}^0_3$}\n\\psfrag{T21}[cc][cc][0.6][0]{$\\text{T}^0_{21}$}\n\\psfrag{T22}[cc][cc][0.6][0]{$\\text{T}^0_{22}$}\n\\psfrag{T23}[cc][cc][0.6][0]{$\\text{T}^0_{23}$}\n\\psfrag{Q21}[cc][cc][0.6][0]{$\\text{Q}^0_{21}$}\n\\psfrag{Q22}[cc][cc][0.6][0]{$\\text{Q}^0_{22}$}\n\\psfrag{Q23}[cc][cc][0.6][0]{$\\text{Q}^0_{23}$}\n\\psfrag{T11}[cc][cc][0.6][0]{$\\text{T}^0_{11}$}\n\\psfrag{T12}[cc][cc][0.6][0]{$\\text{T}^0_{12}$}\n\\psfrag{T13}[cc][cc][0.6][0]{$\\text{T}^0_{13}$}\n\\psfrag{Q11}[cc][cc][0.6][0]{$\\text{Q}^0_{11}$}\n\\psfrag{Q12}[cc][cc][0.6][0]{$\\text{Q}^0_{12}$}\n\\psfrag{Q13}[cc][cc][0.6][0]{$\\text{Q}^0_{13}$}\n\\psfrag{T31}[cc][cc][0.6][0]{$\\text{T}^0_{31}$}\n\\psfrag{T32}[cc][cc][0.6][0]{$\\text{T}^0_{32}$}\n\\psfrag{T33}[cc][cc][0.6][0]{$\\text{T}^0_{33}$}\n\\psfrag{Q31}[cc][cc][0.6][0]{$\\text{Q}^0_{31}$}\n\\psfrag{Q32}[cc][cc][0.6][0]{$\\text{Q}^0_{32}$}\n\\psfrag{Q33}[cc][cc][0.6][0]{$\\text{Q}^0_{33}$}\n\\psfrag{KC1}[cc][cc][1.2][0]{$\\text{KC}^0_1$}\n\\psfrag{KC2}[cc][cc][1.2][0]{$\\text{KC}^0_2$}\n\\psfrag{KC3}[cc][cc][1.2][0]{$\\text{KC}^0_3$}\n\\psfrag{s1}[cc][cc][1.0][90]{$\\longleftarrow\\: s_1 =0\\: \\longrightarrow$}\n\\psfrag{s2}[cc][cc][1.0][35]{$\\longleftarrow \\:s_2 =0 \\:\\longrightarrow$}\n\\psfrag{s3}[cc][cc][1.0][-35]{$\\longleftarrow \\:s_3 =0 \\:\\longrightarrow$}\n\\includegraphics[width=0.8\\textwidth]{network3D3.eps}\n\\caption{The heteroclinic network $\\mathcal{H}^0$ that exists on the set\n$\\partial\\mathcal{S}^0$. 
Its building blocks are the heteroclinic cycles\n$\\mathcal{H}^0_1$, $\\mathcal{H}^0_2$, $\\mathcal{H}^0_3$.} \\label{network}\n\\end{center}\n\\end{figure}\n\n\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{T11}[cc][cc][0.9][0]{$\\text{T}_{11}^0$}\n\\psfrag{T22}[cc][cc][0.9][0]{$\\text{T}_{22}^0$}\n\\psfrag{T33}[cc][cc][0.9][0]{$\\text{T}_{33}^0$}\n\\psfrag{T12}[cc][cc][0.9][0]{$\\text{T}_{12}^0$}\n\\psfrag{T21}[cc][cc][0.9][0]{$\\text{T}_{21}^0$}\n\\psfrag{T13}[cc][cc][0.9][0]{$\\text{T}_{13}^0$}\n\\psfrag{T31}[cc][cc][0.9][0]{$\\text{T}_{31}^0$}\n\\psfrag{T23}[cc][cc][0.9][0]{$\\text{T}_{23}^0$}\n\\psfrag{T32}[cc][cc][0.9][0]{$\\text{T}_{32}^0$}\n\\includegraphics[width=0.7\\textwidth]{network.eps}\n\\caption{Schematic representation of $\\mathcal{H}^0$.}\n\\label{networkschematic}\n\\end{center}\n\\end{figure}\n\nA straightforward analysis of the flow on $\\partial \\mathcal{S}^0$ using the\nsame type of reasoning as above leads to the result that there exist\nno other structures on $\\partial \\mathcal{S}^0$ that could serve as\n$\\alpha$-limits for an interior $\\mathcal{Z}^0$-orbit $\\gamma$. We\nhave thus proved the following statement: The $\\alpha$-limit of\n$\\gamma$ is one of the transversally hyperbolic sources listed\nabove, or it is the heteroclinic network (or a heteroclinic subcycle\nthereof). This concludes the proof of the massless case of\nTheorem~\\ref{alphathm}.\n\n\n\\subsection*{Global dynamics in the massive case}\n\nLet $\\gamma$ be an arbitrary orbit in the interior of the state\nspace $\\mathcal{X}$. 
The function $M_{(1)}$ is strictly\nmonotonically increasing on $\\mathcal{X}$ (and on $\\mathcal{K}$),\ncf.~\\eqref{m1eqs}ff.; moreover, $M_{(1)}$ vanishes for $z\\rightarrow\n1$ and $s_i \\rightarrow 0$ (unless $z\\rightarrow 0$ simultaneously).\nHence, by applying the monotonicity principle we obtain that the\n$\\alpha$-limit set $\\alpha(\\gamma)$ of $\\gamma$ must be located on\n$\\mathcal{Z}^0$ including its boundaries.\n\nConsider the fixed point $\\text{F}^0 \\in \\mathcal{Z}^0$. By\nTheorem~\\ref{futurethm} this fixed point is a global sink on\n$\\mathcal{Z}^0$. In the orthogonal direction, however, we have $z^{-1}\nz^\\prime |_{\\text{F}^0} = 2$. It follows that $\\text{F}^0$ is a\nhyperbolic saddle in the space $\\mathcal{X}$ and that there exists exactly\none orbit $\\phi$ that emanates from $\\text{F}^0$ into the interior\nof $\\mathcal{X}$. (Theorem~\\ref{futurethm} implies that $\\phi$ converges to\n$\\text{FS}^1$ as $\\tau\\rightarrow \\infty$; thus, $\\phi$ represents\nthe unique solution of the Einstein-Vlasov equations that\nisotropizes toward the past and the future.)\n\nLet $\\gamma$ be different from $\\phi$. Assume that $\\alpha(\\gamma)$\ncontains a point $\\mathrm{P}$ of the interior of $\\mathcal{Z}^0$; then the\nwhole orbit through $\\mathrm{P}$ and the $\\omega$-limit\n$\\omega(\\mathrm{P})$ must be contained in $\\alpha(\\gamma)$.\nTheorem~\\ref{futurethm} implies $\\omega(\\mathrm{P}) = \\text{F}^0$,\nhence $\\text{F}^0 \\in \\alpha(\\gamma)$. Since the saddle $\\text{F}^0$\nis in $\\alpha(\\gamma)$, the unique orbit $\\phi$ emanating from it is\ncontained in $\\alpha(\\gamma)$ as well. Thus, ultimately,\n$\\omega(\\phi)$, i.e., a point on $\\text{FS}^1$, must be contained in\n$\\alpha(\\gamma)$; this is a contradiction, since $\\text{FS}^1$\nconsists of transversally hyperbolic sinks. 
We conclude that\n$\\gamma$ cannot have any $\\alpha$-limit point in the interior of\n$\\mathcal{Z}^0$.\n\nSince $\\alpha(\\gamma)$ must be located on the boundary of $\\mathcal{Z}^0$,\ni.e., on $\\mathcal{S}_i^0$ or $\\mathcal{K}^0$, the proof can be completed in close\nanalogy to the proof in the massless case. We thus restrict\nourselves here to giving some relations that establish that the\nsources on $\\mathcal{Z}^0$ generalize to sources on $\\mathcal{X}$: on\n$\\text{KC}_i^0$ we have $z^{-1} z^\\prime |_{\\text{KC}_i^0} = 2 ( 1+\n\\Sigma_i)$, which is positive for all $\\Sigma_i> -1$ and thus for\n$\\Sigma_i> 1$ in particular; for $\\text{QL}_i^0$ we obtain $z^{-1}\nz^\\prime |_{\\text{QL}_i^0} = 4$. We note further that $z^{-1}\nz^\\prime |_{\\text{D}_i^0} = 3$; thus $\\text{D}_i^0$ possesses a\ntwo-dimensional unstable manifold. (Orbits in that manifold converge\nto $\\text{FS}^1$.) Finally, note that along the heteroclinic cycle\n$\\mathcal{H}_1^0: \\mathrm{T}_{22}^0 \\rightarrow \\mathrm{T}_{32}^0\n\\rightarrow \\mathrm{T}_{33}^0 \\rightarrow \\mathrm{T}_{23}^0\n\\rightarrow \\mathrm{T}_{22}^0$, we obtain that $z^{-1} z^\\prime$\nequals $6 s_2$, $2(1+\\Sigma_3)$, $2(1+\\Sigma_2)$, $6(1-s_2)$,\nrespectively; hence $z^{-1} z^\\prime$ is non-negative along the\nheteroclinic network $\\mathcal{H}^0$.\n\nThis concludes the proof of Theorem~\\ref{alphathm}.\n\n\n\n\n\\section{Concluding remarks}\n\\label{conc}\n\nIn this article we have analyzed the asymptotic behaviour of\nsolutions of the Einstein-Vlasov equations with Bianchi type I\nsymmetry. 
To that end we have reformulated the equations as a system\nof autonomous differential equations on a compact state space, which\nenabled us to employ powerful techniques from dynamical systems\ntheory.\n\nBased on the global dynamical systems analysis we have identified\nall possible past attractors of orbits\n--- both in the massless and massive case.\nWe have found that an open set of solutions converges to the Kasner\ncircle(s); in particular, for these solutions, the rescaled matter\nquantity $\\Omega$ satisfies $\\Omega\\rightarrow 0$ toward the\nsingularity, so that ``matter does not matter.'' However, we have\nseen that there exists an interesting structure that might\ncomplicate matters: there exists a heteroclinic network $\\mathcal{H}^0$ that\nmight be part of the past attractor set. For solutions that converge\nto $\\mathcal{H}^0$, $\\Omega$ has no limit toward the singularity, since\n$\\Omega \\neq 0$ along parts of the network $\\mathcal{H}^0$, i.e., matter\ndoes matter for such solutions. It is not clear at the present stage\nwhether the set of solutions converging to $\\mathcal{H}^0$ is empty, or, if\nnon-empty, of measure zero or not (the flow on the boundary subsets\ngives a hint that it might be a three-parameter set, i.e., a set of\nmeasure zero). In any case $\\mathcal{H}^0$ will be important for the\nintermediate dynamical behaviour of some models, and thus there are\nsignificant differences between Bianchi type I perfect fluid models\nand models with Vlasov matter.\n\nIf a generic set of orbits converges to $\\mathcal{H}^0$, this will have\nconsiderable consequences. Bianchi type I perfect fluid models play\na central role in understanding the singularity of more general\nspatially homogeneous models~\\cite{waiell97}, as well as general\ninhomogeneous models~\\cite{uggetal03}. 
The importance of Bianchi\ntype I perfect fluid models is due to Lie and source\n``contractions'' in spatially homogeneous cosmology and the\nassociated invariant subset and monotone function structure (which\nis quite similar to the hierarchical structure we have encountered\nin the present paper), and asymptotic silence in general\ninhomogeneous models~\\cite{uggetal03}. Similarly, we expect that\nBianchi type I Einstein-Vlasov models hold an equally prominent\nplace as regards singularities in more general\n--- spatially homogeneous and inhomogeneous --- Einstein-Vlasov\nmodels. Hence the resolution of the problem of whether the\nheteroclinic network attracts generic solutions determines if\nEinstein-Vlasov models are generically different from general\nrelativistic perfect fluid models in the vicinity of a generic\nsingularity, or not.\n\nIn this article we have not considered a cosmological constant,\n$\\Lambda$. The effects of a positive cosmological constant can be\noutlined as follows: since $\\rho \\rightarrow \\infty$ toward the\nsingularity, it follows that $\\Lambda$ can be asymptotically\nneglected and hence that the singularity structure is qualitatively\nthe same as for $\\Lambda =0$. However, toward the future $\\Lambda$\ndestabilizes FS$^1$, which becomes a saddle, and instead solutions\nisotropize and asymptotically reach a de Sitter state.\n\nWe conclude with some remarks on different formulations. We have\nseen that the variables we have used to reformulate the equations as\na dynamical system yielded multiple representations of some\nstructures, e.g., the Kasner circle. Replacing $s_i$ with\n$E_i=\\sqrt{g^{ii}}\/H$, i.e., the Hubble-normalized spatial frame\nvariables of~\\cite{uggetal03,rohugg05}, and using $y=m^2H^{-2}$\ninstead of $z$, yields a single Kasner circle. The latter variables,\nhowever, are not bounded; indeed, they blow up toward the future in\nthe present case. 
It is possible to replace the variables by bounded\nvariables; however, variables of this type lead to differentiability\ndifficulties toward the singularity. Issues like these made the\nvariables we employed in this article more suitable for the kind of\nanalysis we have performed. However, $E_i$-variables, or\n``$E_i$-based'' variables, would have been more suitable to relate\nthe present results to a larger context; but it is not difficult to\ntranslate our results to the $E_i$-variables used\nin~\\cite{uggetal03,rohugg05}, where the relationship between the\ndynamics of inhomogeneous and spatially homogeneous models was\ninvestigated and exploited.\n\n\\\n\n\\noindent\n{\\bf Acknowledgement}\n\n\\noindent We gratefully acknowledge the hospitality and the support\nof the Isaac Newton Institute for Mathematical Sciences in\nCambridge, England, where part of this work was done. We also thank\nAlan Rendall for useful questions and comments. CU is supported by\nthe Swedish Research Council.\n\n\\\n\n\n\\begin{appendix}\n\n\\section{Dynamical systems}\n\\label{dynsys}\n\n\nIn this appendix we briefly recall some concepts from the theory of\ndynamical systems which we use in the article.\n\nConsider a dynamical system defined on an invariant set $X\\subseteq\n\\mathbb{R}^m$. The $\\omega$-limit set $\\omega(x)$ [$\\alpha$-limit\nset $\\alpha(x)$] of a point $x\\in X$ is defined as the set of all\naccumulation points of the future [past] orbit of $x$. The simplest\nexamples are fixed points and periodic orbits.\n\nThe monotonicity principle~\\cite{waiell97} gives\ninformation about the global asymptotic behaviour of the dynamical\nsystem. 
If $M: X\\rightarrow \\mathbb{R}$ is a ${\\mathcal C}^1$\nfunction which is strictly decreasing along orbits in $X$, then\n\\begin{subequations}\\label{omegalimitmon}\n\\begin{align}\n\\omega(x) &\\subseteq\n\\{\\xi \\in \\bar{X}\\backslash X\\:|\\: \\lim\\limits_{\\zeta\\rightarrow \\xi} M(\\zeta) \\neq\n\\sup\\limits_{X} M\\} \\\\\n\\alpha(x) &\\subseteq\n\\{\\xi \\in \\bar{X}\\backslash X\\:|\\:\\lim\\limits_{\\zeta\\rightarrow \\xi} M(\\zeta) \\neq\n\\inf\\limits_{X} M\\}\n\\end{align}\n\\end{subequations}\nfor all $x\\in X$.\n\nLocally in the neighbourhood of a fixed point, the flow of the\ndynamical system is determined by the stability features of the\nfixed point. If the fixed point is hyperbolic, i.e., if the\nlinearization of the system at the fixed point is a matrix\npossessing eigenvalues with non-vanishing real parts, then the\nHartman-Grobman theorem applies: in a neighbourhood of a hyperbolic\nfixed point the full nonlinear dynamical system and the linearized\nsystem are topologically equivalent. Non-hyperbolic fixed points are\ntreated in centre manifold theory: the reduction theorem generalizes\nthe Hartman-Grobman theorem; for further details see,\ne.g.,~\\cite{cra91}. If a fixed point is an element of a connected\nfixed point set (line, surface,\\nolinebreak \\ldots) and the number\nof eigenvalues with zero real parts is equal to the dimension of the\nfixed point set, then the fixed point is called transversally\nhyperbolic. Application of the centre manifold reduction theorem is\nparticularly simple in this case. 
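As a toy illustration of the monotonicity principle (our example, unrelated to the Einstein-Vlasov system): on $X = \{0 < x^2+y^2 < 1\}$ the planar system $\dot{x} = x(1-r^2) - y$, $\dot{y} = y(1-r^2) + x$ admits the strictly decreasing function $M = 1 - r^2$, and the inclusion~\eqref{omegalimitmon} correctly confines $\omega$-limit sets to the boundary circle $r=1$, where $M$ does not attain $\sup_X M$:

```python
import numpy as np

def rhs(p):
    """Planar toy field: radial growth toward r = 1 plus a rotation."""
    x, y = p
    r2 = x * x + y * y
    return np.array([x * (1.0 - r2) - y, y * (1.0 - r2) + x])

def integrate(p0, t_max=15.0, n=15000):
    """RK4 integration, recording M = 1 - r^2 along the orbit."""
    h = t_max / n
    p = np.array(p0, dtype=float)
    m_vals = [1.0 - p @ p]
    for _ in range(n):
        k1 = rhs(p); k2 = rhs(p + 0.5 * h * k1)
        k3 = rhs(p + 0.5 * h * k2); k4 = rhs(p + h * k3)
        p += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        m_vals.append(1.0 - p @ p)
    return p, np.array(m_vals)

p_end, m_vals = integrate([0.05, 0.0])
assert np.all(np.diff(m_vals) < 0)          # M strictly decreasing along the orbit
assert abs(np.hypot(*p_end) - 1.0) < 1e-3   # omega-limit on the boundary r = 1
```

The excluded part of the boundary ($r=0$, where $M$ attains its supremum) is never approached, exactly as the principle predicts.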
(The situation is analogous in the\nmore general case when the fixed point is an element of an a priori\nknown invariant set that coincides with the centre manifold of the\nfixed point.)\n\n\\section{FRW and LRS$_i$ symmetry}\n\\label{FRWLRS}\n\nIn this section we discuss in detail the sets $\\text{FRW}$ and $\\text{LRS}_i$,\nconnected with solutions exhibiting FRW or LRS geometry.\n\nTo begin with, we prove that the fixed point $\\text{F}^0$ on $z=0$\nis well-defined. Since the defining equations for $\\text{F}^0$ are\n$w_1 = w_2= w_3 = 1\/3$, we must show that these equations indeed\npossess a unique solution $(s_1,s_2,s_3)$ for all distribution\nfunctions $f_0$. Setting $z=0$ in~\\eqref{omegai} implies that \nequations $w_1 = w_2= w_3 = 1\/3$ are equivalent to the system\n\\begin{equation}\\label{udef}\nu := \\int f_0 \\, \\left[s_1 v_1^2 -s_2 v_2^2\\right] \\left(\\sum\\nolimits_k s_k v_k^2 \\right)^{-1\/2} d^3 v =0\n\\end{equation}\nand $v = 0$, where $v$ is defined by replacing $[s_1 v_1^2 -s_2\nv_2^2]$ by $[s_1 v_1^2 -s_3 v_3^2]$ in~\\eqref{udef}. On the three\nboundaries of the space $\\{(s_1,s_2,s_3)\\:|\\: s_i \\geq 0, \\sum_k s_k\n=1\\}$ the functions $u$ and $v$ are monotonic; their signs are given\nin Figure~\\ref{dreiecken}. The derivative $\\partial u\/\\partial s_1$ is\nmanifestly positive, $\\partial u\/\\partial s_2$ is negative, hence\n$\\mathrm{grad}\\, u$ is linearly independent of the surface normal\n$(1,1,1)$, and it follows that\n$u=\\mathrm{const}$ describes a curve\nfor all $\\mathrm{const} \\in \\mathbb{R}$. The same argument applies\nto $v$, since $\\partial v\/\\partial s_1>0$ and $\\partial v\/\\partial\ns_3 <0$. Figure~\\ref{dreiecken} reveals that $u=0$ ($v=0$) connects\nthe upper (right) vertex of the $(s_1,s_2,s_3)$-space with the\nopposite side. 
Investigating $(\\mathrm{grad} \\,u - \\lambda\\,\n\\mathrm{grad} \\,v)$ we find that the first component is manifestly\npositive when $\\lambda \\leq 2\/3$ and negative when $\\lambda\\geq\n3\/2$, the second component is negative when $\\lambda \\leq 3$, and\nthe third component is positive when $\\lambda \\geq 1\/3$, which\nimplies that $(\\mathrm{grad}\\, u - \\lambda\\, \\mathrm{grad}\\, v)$ is\nlinearly independent of the surface normal $(1,1,1)$ for all\n$\\lambda$. It follows that all equipotential curves of the functions\n$u$ and $v$ intersect transversally; hence $u=0$ and $v=0$ possess a\nunique point of intersection, which proves the claim.\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{s1}[cc][cc][0.8][0]{$s_1$}\n\\psfrag{s2}[cc][cc][0.8][0]{$s_2$}\n\\psfrag{s3}[cc][cc][0.8][0]{$s_3$}\n\\psfrag{a}[cc][cc][0.7][0]{$v>0$}\n\\psfrag{b}[cc][cc][0.7][0]{$\\begin{array}{cc} u<0 \\\\ v<0 \\end{array}$}\n\\psfrag{c}[cc][cc][0.7][0]{$u>0$}\n\\psfrag{er}[cc][cc][0.7][0]{$\\begin{array}{cc} u<0 \\\\ v=0 \\end{array}$}\n\\psfrag{el}[cc][cc][0.7][0]{$\\begin{array}{cc} u>0 \\\\ v>0 \\end{array}$}\n\\psfrag{eo}[cc][cc][0.7][0]{$\\begin{array}{cc} u=0 \\\\ v<0 \\end{array}$}\n\\includegraphics[width=0.5\\textwidth]{dreiecken.eps}\n\\caption{The functions $u$ and $v$ are monotonic along the boundaries of the space\n$\\{(s_1,s_2,s_3)\\:|\\: s_i \\geq 0, \\sum_k s_k =1\\}$.}\n\\label{dreiecken}\n\\end{center}\n\\end{figure}\n\nBy establishing existence and uniqueness of the fixed point $\\text{F}^0$ for all $f_0$,\nwe have shown that for all distribution functions\nthere exists a unique FRW solution of the massless Einstein-Vlasov\nequations.\n\nThe situation is different in the massive case.\nA FRW solution is characterized by the equations\n$\\Sigma_i=0$ $\\forall i$, $w_1 =w_2=w_3 =w$, since\nthis yields $s_{i}=\\mathrm{const}$\n(and a rescaling of the spatial coordinates then results in $g_{ij}\\propto\\delta_{ij}$.)\nHowever, for a general distribution function 
$f_0$, these equations\nare incompatible with\nthe Einstein-Vlasov equations; in other words, the straight line\n$\\Sigma_i=0$ $\\forall i$, $w_1 =w_2=w_3 =w$ is not an orbit of the\ndynamical system. Hence, in the massive case, the Einstein-Vlasov\nequations do not admit a FRW solution for arbitrary $f_0$; the\ndistribution function $f_0$ is required to satisfy FRW compatibility\nconditions, see below, in order for a FRW solution to exist.\n\nNote, however, that for each $f_0$, there exists exactly one orbit\nthat originates from $\\text{F}^0$ and ends on $\\text{FS}^1$, see\nSection~\\ref{locglo}, i.e., there exists a unique solution of\nthe Einstein-Vlasov equations that isotropizes toward the past and\ntoward the future. This anisotropic solution can be regarded as a\ngeneralized FRW solution; if $f_0$ is compatible with the FRW\ngeometry, then the generalized FRW solution reduces to an ordinary\nFRW solution.\n\nThe treatment of the LRS case is analogous: the subset\n$\\text{LRS}_1$ (and, analogously, $\\text{LRS}_{2,3}$), defined\nthrough the equations $\\Sigma_2=\\Sigma_3$, $w_2=w_3$, describes\nsolutions exhibiting LRS geometry. (For a solution on\n$\\text{LRS}_1$, Equation~(\\ref{seq}) entails $s_2(\\tau) \\propto\ns_3(\\tau)$; by rescaling the coordinates one can achieve\n$g_{22}=g_{33}$, i.e., a line element in an explicit LRS form.)\nHowever, for general $f_0$, the set $\\text{LRS}_1$ is not invariant\nunder the flow of the dynamical system. 
Consequently, for general\n$f_0$, the Einstein-Vlasov equations do not admit solutions with LRS\ngeometry.\n\nMore specifically, consider\n\\begin{equation}\n\\left(\\Sigma_2 - \\Sigma_3\\right)^\\prime =\n- 3 \\Omega \\left[ \\textfrac{1}{2} (1-w) (\\Sigma_2 -\\Sigma_3) - (w_2-w_3)\\right]\\:.\n\\end{equation}\nHence, $(\\Sigma_2-\\Sigma_3)^\\prime$ vanishes when $\\Sigma_2 = \\Sigma_3$ and $w_2 = w_3$.\nFrom~\\eqref{seq} and~\\eqref{zeq} we obtain an equation for $w_i^\\prime$,\n\\begin{equation}\\label{omegaieq}\nw_i^\\prime = -2 w_i \\left[ \\Sigma_i -\n\\sum_k \\Sigma_k \\left(\\frac{1}{2}w_k + \\frac{1}{2} w_i^{-1} \\beta_{i k}^{\\text{\\tiny (0)}}\\right)\n+\\frac{z}{2} \\left(w_i^{-1} \\beta_{i}^{\\text{\\tiny (1)}} + \\beta^{\\text{\\tiny (1)}} \\right) \\right]\\:,\n\\end{equation}\nwhere we have defined\n\\begin{equation}\n\\beta_{i_1\\ldots i_k}^{\\text{\\tiny (m)}} =\n\\frac{(1-z)^k {\\displaystyle\\int} f_0\\,\n\\left(\\Pi_{n=1}^{k} s_{i_n} v_{i_n}^2\\right) \\left[z+(1-z) \\sum_k s_k v_k^2\\right]^{1\/2-k-m} d v_1 d v_2 d v_3}%\n{{\\displaystyle\\int} f_0 \\left[z+(1-z) \\sum_k s_k v_k^2\\right]^{1\/2} d v_1 d v_2 d v_3}\\:;\n\\end{equation}\nnote that $w_i = \\beta_{i}^{\\text{\\tiny (0)}}$. Equation~\\eqref{omegaieq} implies\n\\begin{equation}\n(w_2-w_3)^\\prime = -\\left(\\Sigma_1 -\\Sigma_2\\right)\n\\left(\\beta_{22}^{\\text{\\tiny (0)}} - \\beta_{33}^{\\text{\\tiny (0)}}\\right) -\nz \\left(\\Sigma_1 +1\\right) \\left(\\beta_{2}^{\\text{\\tiny (1)}}- \\beta_{3}^{\\text{\\tiny (1)}}\\right)\\:,\n\\end{equation}\nwhen $\\Sigma_2 = \\Sigma_3$ and $w_2 = w_3$. We conclude that the set\n$\\Sigma_2 \\equiv \\Sigma_3$, $w_2 \\equiv w_3$ is an invariant set of\nthe dynamical system, iff $w_2 = w_3$ implies\n$\\beta_{22}^{\\text{\\tiny (0)}} = \\beta_{33}^{\\text{\\tiny (0)}}$ and\n$\\beta_{2}^{\\text{\\tiny (1)}} = \\beta_{3}^{\\text{\\tiny (1)}}$. (In\nthe massless case, only the first condition is required.) 
These\nconditions are violated for general distribution functions; for the\ncondition to hold $f_0$ must be of a certain type that ensures\ncompatibility with the LRS symmetry. This is the case, for instance,\nwhen there exist constants $a_2>0$, $a_3>0$, such that $f_0$ is\ninvariant under the transformation $v_2 \\rightarrow (a_3\/a_2)\\, v_3$,\n$v_3\\rightarrow (a_2\/a_3)\\, v_2$; e.g., $f_0 = \\tilde{f}_0(v_1,\nv_2^2 v_3^2)$, or $f_0 = \\tilde{f}_0(v_1, a_2^2 v_2^2 + a_3^2\nv_3^2)$; in the latter case $w_2(\\tau) \\equiv w_3(\\tau)$ implies\n$a_3^2 s_2(\\tau) \\equiv a_2^2 s_3(\\tau)$.\n\nNote finally that a distribution function $f_0$ is compatible with a\nFRW geometry, if it is compatible with all LRS symmetries. This\nmeans that, for instance, $f_0 = \\tilde{f}_0(a_1^2 v_1^2 +a_2^2 v_2^2\n+a_3^2 v_3^2)$ is compatible with the FRW symmetry and thus admits a\nunique FRW solution of the Einstein-Vlasov equations.\n\n\n\n\\section{Future asymptotics}\n\\label{futureproof}\n\nIn this section we give the proof of Theorem~\\ref{futurethm}:\n\n\\noindent\n\\textbf{Theorem~\\ref{futurethm}.}\n\\textit{The $\\omega$-limit of every orbit in the interior of the\nmassive state space $\\mathcal{X}$ [massless state space $\\mathcal{Z}^0$]\nis one of the fixed points $\\,\\mathrm{FS}^1$ [the fixed point $\\,\\mathrm{F}^0$].}\n\n\n\\proof Consider first the state space $\\mathcal{Z}^0$ of massless\nparticles and the associated system~\\eqref{z0eq}. The function\n$M_{(2)}$, cf.~\\eqref{m2eqs}ff., is well-defined and monotonically\ndecreasing everywhere except at the fixed point $\\text{F}^0$,\nwhere it has a global minimum. 
On the boundaries $\\mathcal{S}_i^0$\n(given by $s_i=0$) and $\\mathcal{K}^0$ ($\\Sigma^2 =1$) of the state\nspace $\\mathcal{Z}^0$, the function $M_{(2)}$ is infinite.\nTherefore, application of the monotonicity principle yields that the\n$\\omega$-limit of every orbit must be the fixed point $\\text{F}^0$.\n\nIn the massive case consider~\\eqref{Sigeq} in the form\n\\begin{equation}\\label{sigmaiprime}\n\\Sigma_i^\\prime = -3 \\Omega \\left[ \\frac{1}{2} (1-w) (1 +\\Sigma_i) - \\frac{1}{2} (1- 3w) - w_i \\right]\\:.\n\\end{equation}\nThe r.h.s.\\ is positive when $\\Sigma_i \\leq -1$ and $z>0$ ($w<1\/3$).\nThis implies that the hyperplanes $\\Sigma_i = -1$ constitute semipermeable membranes in the state space $\\mathcal{X}$,\nwhereby the ``triangle'' $(\\Sigma_1 > -1) \\cap (\\Sigma_2 > -1) \\cap (\\Sigma_3 > -1)$ becomes a future invariant\nsubset of the flow~\\eqref{eq}.\n\nThe first part of the proof is to show that every orbit enters the\ntriangle at some time $\\tau_e$ (and consequently remains inside for\nall later times).\n\nAssume that there exists an orbit with $\\Sigma_i(\\tau) \\leq -1$ for all $\\tau$ (for some $i$).\nFrom~\\eqref{seq} we infer that\n\\begin{equation}\ns_i^\\prime = -2 s_i \\left[ s_j (\\Sigma_i - \\Sigma_j) + s_k (\\Sigma_i - \\Sigma_k) \\right] > 0\n\\end{equation}\nif $\\Sigma_i< -1$, and that $s_i^\\prime \\geq 0$ if $\\Sigma_i =-1$;\nhence $s_i(\\tau) \\geq s_i(\\tau_0)=\\mathrm{const} > 0$ for all $\\tau \\in [\\tau_0,\\infty)$.\nFrom~\\eqref{omegaeq} we obtain\n\\begin{equation}\n\\begin{split}\n\\frac{1}{3} \\Omega^{-1} \\Omega^\\prime \\,\\Big|_{\\Omega=0} & =\n1 - \\frac{1}{3} w_i (1+\\Sigma_i) - \\frac{1}{3} w_j (1+\\Sigma_j) - \\frac{1}{3} w_k (1+\\Sigma_k) \\,\\geq \\\\\n& \\geq 1 -w_j - w_k = (1-3 w) + w_i \\geq \\mathrm{const} >0 \\:,\n\\end{split}\n\\end{equation}\nsince $s_i \\geq \\mathrm{const} > 0$. Consequently, $\\Omega(\\tau)\n\\geq \\mathrm{const} > 0$ for all $\\tau \\in [\\tau_0,\\infty)$. 
It\nfollows from~\\eqref{sigmaiprime} that\n\\begin{equation}\n\\Sigma_i^\\prime \\geq \\mathrm{const} > 0\n\\end{equation}\nfor all $\\tau\\in [\\tau_0,\\infty)$ by the same argument.\nThis is in contradiction to the assumption $\\Sigma_i \\leq -1$ for all $\\tau$.\n\nThus, in the second part of the proof, we can consider an arbitrary\norbit $\\gamma$ and assume, without loss of generality, that\n$\\gamma(\\tau)$ lies in the $\\Sigma$-triangle for all $\\tau \\in\n[\\tau_e,\\infty)$. Equation~\\eqref{zeq} leads to\n\\begin{equation}\nz^\\prime = 2 z (1-z) \\sum\\limits_n s_n (1+\\Sigma_n) \\geq 0\n\\end{equation}\nfor all $\\tau \\in [\\tau_e,\\infty)$, hence\n$z(\\tau) \\geq z(\\tau_e) > 0$ for all $\\tau \\in [\\tau_e,\\infty)$.\n\nWe define the function $N$ by\n\\begin{equation}\nN = (1+\\Sigma_1) (1+ \\Sigma_2) (1+\\Sigma_3)\\:.\n\\end{equation}\nThe derivative can be estimated by\n\\begin{equation}\nN^\\prime \\geq 3 \\Omega N \\left[ -\\frac{3}{2} (1-w) + \\frac{1}{2} \\sum_n \\frac{1-3 w}{1+\\Sigma_n}\\right]\\:.\n\\end{equation}\nSince $w(\\tau) \\leq \\mathrm{const} < 1\/3$ (because $z(\\tau) \\geq\n\\mathrm{const} > 0$), $N^\\prime$ is positive when at least one of\nthe $\\Sigma_i$ is sufficiently small, i.e., when $N$ itself is small\n(a detailed analysis shows that $N^\\prime \\geq 3 \\Omega N [-(3\/2)\n(1-w) + \\sqrt{3} (1-3 w) N^{-1\/2}]$). We conclude that there exists\na positive constant $N_0$ such that $N(\\tau) \\geq N_0$ for all $\\tau\n\\in [\\tau_e,\\infty)$. This in turn implies that there exists $\\nu>0$\nsuch that $\\Sigma_i(\\tau) \\geq -1 + \\nu$ for all $i$ for all $\\tau\n\\in [\\tau_e,\\infty)$, whereby $z^\\prime \\geq 2 z (1-z) \\nu$ for all\n$\\tau \\in [\\tau_e,\\infty)$.\n\nIt follows that the $\\omega$-limit of $\\gamma$ must lie on $z=1$,\ni.e., on $\\mathcal{Z}^1$. 
Taking into account the simple structure\nof the flow on $\\mathcal{Z}^1$, characterized by $\\Omega^\\prime = 3\n(1-\\Omega) \\Omega$, we conclude that the fixed points $\\text{FS}^1$\ngiven by $\\Sigma_1 =\\Sigma_2 =\\Sigma_3 = 0$ are the only possible\n$\\omega$-limits. \\hspace*{\\fill}\\rule{0.2cm}{0.2cm}\n\n\\begin{remark}\nIn order to demonstrate the versatility of the dynamical systems\nmethods, we have chosen here to prove Theorem~\\ref{futurethm} by\nusing techniques that are slightly different from those employed in\nSection~\\ref{locglo} (which exploit the monotonicity principle).\nHowever, it is straightforward (in fact, even simpler) to give a\nproof by making use of the hierarchy of monotone functions. Indeed,\nthe function $M_{(1)}$ ensures that the $\\omega$-limit of every\norbit lies on $\\mathcal{Z}^1$ or $\\mathcal{S}_i$; modulo some\nsubtleties, we can exclude that $\\mathcal{S}_i$ is attractive by\nusing the monotone function $M_{(3)}$ and the local properties of\nthe fixed points.\n\\end{remark}\n\n\n\n\\section{The spaces $\\mathcal{S}^0_i$ -- interpretation of solutions}\n\\label{Si0space}\n\nThe flow on the boundary subsets $\\mathcal{S}^0_i$\nis of fundamental importance in the analysis of the global dynamics of the state space,\nsee Section~\\ref{globaldynamics}. Note that except for $\\text{F}^0$ all attractors\n($\\mathrm{D}_i^0$, $\\mathrm{QL}_i^0$, $\\mathrm{KC}_i^0$, and the heteroclinic network)\nlie on $\\mathcal{S}^0_i$. For a depiction of the flow on $\\mathcal{S}^0_1$ see Figure~\\ref{cylinder}.\nIn the following we show that orbits on $\\mathcal{S}^0_1$ represent solutions\nof the Einstein-Vlasov system that are associated with a special class of\ndistribution functions. 
Furthermore, we investigate in detail solutions that\nconverge to the subcycle $\\mathcal{H}^0_1$ of the heteroclinic network.\n\nConsider a distribution function $f_0$ of the form\n\\begin{equation}\\label{disdisfct}\nf_0(v_1,v_2,v_3) = \\delta(v_1) f_0^{\\mathrm{red}}(v_2,v_3)\\:,\n\\end{equation}\nwhere $f_0^{\\mathrm{red}}(v_2,v_3)$ is even in $v_2$ and $v_3$.\nIn the case of massless particles, $m=0$ (and $z=0$ respectively),\nwe obtain\n\\begin{equation}\nw_1 = 0 \\:,\\qquad w_j = \\frac{g^{jj} {\\displaystyle\\int} f_0^{\\mathrm{red}} \\, v_j^2\n\\left[g^{22} v_2^2 + g^{33} v_3^2 \\right]^{-1\/2} d v_1 d v_2 d v_3}%\n{{\\displaystyle\\int} f_0^{\\mathrm{red}}\n\\left[g^{22} v_2^2 + g^{33} v_3^2 \\right]^{1\/2} d v_1 d v_2 d v_3}\\:\\,(j=2,3)\\:,\n\\end{equation}\nwhere $g^{22}$ and $g^{33}$ can be replaced by $s_2$ and $s_3$, if desired.\nIn the unbounded variables $g^{ii}$ the equations read\n\\begin{subequations}\\label{unboundedvarsz0sys}\n\\begin{align}\n\\Sigma_1^ \\prime & = - \\Omega [1+\\Sigma_1] \\:, & (g^{11})^\\prime & = -2 g^{11} (1+\\Sigma_1) \\\\\n\\Sigma_j^\\prime &= - \\Omega [1+\\Sigma_j -3 w_j] \\:, & (g^{jj})^\\prime &= -2 g^{jj} (1+\\Sigma_j) \\quad\\qquad(j=2,3)\\:,\n\\end{align}\n\\end{subequations}\ncf.~the remark at the end of Section~\\ref{einsteinvlasov}.\nIn particular we note that the equation for $g^{11}$ decouples; hence the full dynamics is\nrepresented by a reduced system in the variables $(\\Sigma_1, \\Sigma_2, \\Sigma_3, g^{22}, g^{33})$,\nwhich coincides with the system~\\eqref{unboundedvarsz0sys} on the invariant subset $g^{11} = 0$.\nIn analogy to the definitions~\\eqref{defdimless} we set\n\\begin{equation}\ns_1 =0 \\:, \\qquad s_2 = \\frac{g^{22}}{g^{22}+g^{33}} \\:,\\qquad s_3 = \\frac{g^{33}}{g^{22}+g^{33}}\\:,\n\\end{equation}\nso that $s_2 + s_3 =1$. 
This results in the dynamical system\n\\begin{subequations}\\label{boundedvarsz0sys}\n\\begin{align}\n\\Sigma_1^ \\prime & = - \\Omega [1+\\Sigma_1] \\:, & s_1 & \\equiv 0 \\\\\n\\Sigma_j^\\prime &= - \\Omega [1+\\Sigma_j -3 w_j] \\:,\n& s_j^\\prime &= -2 s_j [\\Sigma_j - (s_2 \\Sigma_2 + s_3 \\Sigma_3)] \\quad\\qquad(j=2,3)\\:.\n\\end{align}\n\\end{subequations}\nThis system~\\eqref{boundedvarsz0sys} coincides with the dynamical system~\\eqref{eq} induced on $\\mathcal{S}^0_1$\n(which is obtained by setting $z=0$, thus $w=1\/3$, and $s_1 = 0$ in~\\eqref{eq}).\n\nOur considerations show that the flow on $\\mathcal{S}^0_1$ possesses a direct physical interpretation:\norbits on $\\mathcal{S}^0_1$ represent solutions of the massless Einstein-Vlasov system\nof Bianchi type~I with a ``distributional'' distribution function of the\ntype~\\eqref{disdisfct}.\nNote that the system~\\eqref{boundedvarsz0sys} on $\\mathcal{S}^0_1$ must be supplemented\nby the decoupled equations~\\eqref{xeq} and $(g^{11})^\\prime = -2 g^{11} (1+\\Sigma_1)$\nin order to construct the actual solution from an orbit in $\\mathcal{S}^0_1$.\n\nTwo structures in $\\mathcal{S}^0_1$ are of particular interest: the fixed point $\\mathrm{D}_1^0$ and\nthe heteroclinic cycle $\\mathcal{H}^0_1$, see Figure~\\ref{cylinder}.\nThe fixed point $\\mathrm{D}_1^0$ represents an LRS solution (associated with a distributional\n$f_0$); it is straightforward to show that the metric is of the form\n\\begin{equation}\\label{Disol}\ng_{11} = \\mathrm{const} \\:,\\qquad\ng_{22} \\propto t^{4\/3} \\:,\\qquad\ng_{33} \\propto t^{4\/3}\\:,\n\\end{equation}\nand $H = (4\/9) t^{-1}$.\n\nThe orbit $\\mathrm{T}^0_{22} \\rightarrow \\mathrm{T}^0_{32}$, which is part of $\\mathcal{H}^0_1$, corresponds to a solution\n\\begin{equation}\\label{teil1}\ng_{11} = g_{11}^0 \\:, \\qquad\ng_{22} = g_{22}^0 ( 3 H_0 t)^2 \\:, \\qquad\ng_{33} = g_{33}^0\\:;\n\\end{equation}\nhere, $H = (3 t)^{-1}$; $H_0$ is a characteristic value of $H$.\nFor 
the orbit $\\mathrm{T}^0_{33} \\rightarrow \\mathrm{T}^0_{23}$\nthe result is analogous with $g_{22}$ and $g_{33}$ interchanged.\nA more extensive computation shows that the orbit $\\mathrm{T}^0_{32} \\rightarrow \\mathrm{T}^0_{33}$\nleads to\n\\begin{equation}\\label{teil2}\ng_{11} = g_{11}^0 \\:, \\qquad\ng_{22} = g_{22}^0 \\left[ \\log(1+3 H_0 t) \\right]^2 \\:,\\qquad\ng_{33} = g_{33}^0 (1 +3 H_0 t)^2\\:,\n\\end{equation}\ntogether with $H= H_0 (1 +3 H_0 t)^{-1} (1 + [\\log (1+3 H_0 t)]^{-1})$.\n(Note that $3 H t$ is always close to $1$ and approaches $1$ for $t \\rightarrow 0$\nand $t\\rightarrow \\infty$.)\nThe result for the orbit $\\mathrm{T}^0_{23} \\rightarrow \\mathrm{T}^0_{22}$ is analogous with\n$g_{22}$ and $g_{33}$ interchanged.\n\nNow consider an orbit converging to the heteroclinic cycle as $\\tau \\rightarrow -\\infty$, i.e., $t \\searrow 0$.\nSince the orbit alternates between episodes where it is close to one of the four heteroclinic orbits,\nwe obtain a solution with alternating episodes of characteristic behaviour of\nthe type~\\eqref{teil1} and~\\eqref{teil2}; transitions between the episodes\ncorrespond to the orbit being close to the fixed points.\n\nLet $t^{(n)}$ denote a monotone sequence of times such that\nthe solution is in episode $(n)$ at time $t^{(n)}$\n(i.e., the orbit is close to one of the four heteroclinic orbits and\nfar from the fixed points); $t^{(n)}\\searrow 0$ as $n\\rightarrow \\infty$.\nSince $3 H t \\approx 1$ as $t\\searrow 0$, the sequence $t^{(n)}$ gives rise\nto a sequence $H^{(n)}$ defined by $3 H^{(n)} t^{(n)} = 1$.\nDuring episode $(n)$ the solution exhibits a characteristic behaviour of\nthe type~\\eqref{teil1} or~\\eqref{teil2} with $H_0 =H^{(n)}$\n(and $g_{22}^0 = g_{22}^{(n)}$, $g_{33}^0 = g_{33}^{(n)}$).\nA transition from one episode to another involves\na matching of the constants.\n\n\\textit{Example}.\nSuppose that the orbit is close to the heteroclinic orbit\n$\\mathrm{T}^0_{32} \\rightarrow 
\\mathrm{T}^0_{33}$ in episode $(n)$.\nWe obtain a behaviour of the type~\\eqref{teil2} with $H_0=H^{(n)}$.\nAs $H^{(n)} t$ gets small we see that $g_{22} \\approx g_{22}^{(n)} (3 H^{(n)} t)^2$,\n$g_{33} \\approx g_{33}^{(n)}$.\nThe next (as $t\\searrow 0$) episode corresponds to the orbit running close\nto $\\mathrm{T}^0_{22} \\rightarrow \\mathrm{T}^0_{32}$;\nthe behaviour of the solution is~\\eqref{teil1} with $g_{22}^{(n+1)}$, $g_{33}^{(n+1)}$,\nand $H_0=H^{(n+1)}$.\nThe transition between the episodes $(n)$ and $(n+1)$ is thus straightforward:\n$g_{22}^{(n+1)} (H^{(n+1)})^2 = g_{22}^{(n)} (H^{(n)})^2$ and\n$g_{33}^{(n+1)} = g_{33}^{(n)}$.\nMatching episodes $(n+1)$ and $(n+2)$ is slightly more involved.\nThe orbit is close to the heteroclinic orbit $\\mathrm{T}^0_{23} \\rightarrow \\mathrm{T}^0_{22}$\nin episode $(n+2)$, where\n\\begin{equation}\\label{teil4}\ng_{11} = g_{11}^0 \\:, \\qquad\ng_{22} = g_{22}^{(n+2)} (1 +3 H^{(n+2)} t)^2 \\:,\\qquad\ng_{33} = g_{33}^{(n+2)} \\left[ \\log(1+3 H^{(n+2)} t) \\right]^2 \\:.\n\\end{equation}\nClose to transition time, when $H^{(n+2)} t$ is large,\nwe get $g_{22} = g_{22}^{(n+2)} (3 H^{(n+2)} t)^2$ and $g_{33} = g_{33}^{(n+2)} (\\log 3 H^{(n+2)} t)^2$.\nThe transition between episode $(n+1)$ and $(n+2)$ thus involves\nthat $g_{33}$ begins to decay logarithmically from having been approximately\nconstant.\n\n\n\n\n\n\n\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\n\\label{intro}\n\n\\textit{Cloud computing} is inexorably becoming the technology of choice among big and small businesses to deploy and manage their IT infrastructures and applications \\cite{wang2016spatial}. Infrastructure-as-a-Service (\\textit{IaaS}) is a key cloud service delivery model. Large companies such as Amazon, Google, Microsoft, and IBM provide IaaS solutions to their consumers. 
Examples of IaaS consumers include Software-as-a-Service (SaaS) providers and organizations such as governments, universities, and research centers \\cite{fattah2020long}. The computing resources or Virtual Machine (VM) instances are the most common IaaS services \\cite{hwang2015cloud}. The \\textit{functional} properties of a VM instance or an IaaS service include computing resources such as CPU units, memory units, storage, and network bandwidths. Examples of \\textit{non-functional} properties or Quality of Service (QoS) attributes of IaaS services are availability, price, response time, throughput, and energy efficiency \\cite{fattah2020event,fattah2020signature}.\n\n\n\nThe \\textit{market-driven} cloud service provisioning is a topical research area \\cite{varghese2018next}. There exist several key service \\textit{provisioning} models in the cloud market such as on-demand, reservation \\cite{chaisiri2011optimization, zheng2016probabilistic}, and economic models \\cite{sharma2012pricing,Thanakornworakij,PAL2013113}. We propose an \\textit{alternative strategy to the on-demand model} which would be used in conjunction with it. The premise is that the on-demand or reservation model makes it difficult to accurately predict service demand, thus potentially leading to either under-provisioning or over-provisioning \\cite{dustdar2011principles,jiang2011asap, islam2012empirical}.\nThe alternative model focuses on the economic model-based cloud service selection and provision for long-term IaaS composition. In that regard, the economic model-based service provisioning approach is fundamentally different from the typical on-demand and reservation models. According to the \\textit{on-demand} model, the provider has a fixed set of VM instances associated with QoS and price \\cite{Hong}. The consumer may acquire and release on-demand VM instances anytime and only pay for the usage per hour or per second. 
\nThe provider usually sets a discounted flat rate for the reserved instances in the \\textit{reservation} model \\cite{fattah2019long}. The consumers reserve the VM instance for a fixed time period and pay for it regardless of usage.\nAccording to \\textit{economic model} based service provisioning approaches, there exists \\textit{market competition} among providers to set the price and QoS of their services \\cite{PAL2013113}. The market competition forms \\textit{non-cooperative games} among competitive providers and consumers \\cite{Thanakornworakij}.\n\n\n\n\\textit{We focus on the economic model-based cloud service selection and provision for a long-term period}. Economic expectations are \\textit{formally} expressed in terms of economic models \\cite{ye2014economic,sajibtsc2015,goiri2012economic}. According to the economic model-based service selection approaches \\cite{ye2014economic,yu2007efficient,kholidy2014qos}, a consumer's requests include custom VM instances, QoS parameters and the price the consumer is willing to pay for the instance (usually determined by their market research or the \\textit{consumer economic models}). Similarly, the providers follow their own economic model to \\textit{accept} or \\textit{reject} the requests from the consumers \\cite{sajibtsc2015}.\nThe economic model-based service selection and provisioning approaches are applied in different cloud markets such as spot market, SLA negotiation, and auction-based reservation models \\cite{zaman2013combinatorial}. \n\n\n\nWe assume that consumers create their \\textit{IaaS requests} following their own economic models. An IaaS provider receives a set of IaaS requests from \\textit{different} consumers. \\textit{The IaaS composition from the provider's perspective is defined as the selection of an optimal set of consumer requests \\cite{sajibtsc2015}}. 
The IaaS composition is a \\textit{decision-making} problem where the provider decides which requests it should \\textit{accept} or \\textit{reject}. An effective IaaS composition \\textit{maximizes} the provider's long-term economic expectations, such as profit, reputation, and consumer growth \\cite{fattah2018cp}. The economic model is the \\textit{formal tool} to select the optimal set of consumer requests to meet the provider's expectations \\cite{dash2009economic}.\n\n\n\nOur objective is to design a \\textit{qualitative economic model} for the long-term IaaS composition \\cite{sajibicsoc2016}. The qualitative economic models provide an \\textit{effective} way to select consumer requests where there exists \\textit{uncertainties} or \\textit{incomplete} information. The consumer requirements are typically uncertain and probabilistic \\cite{fattah2018cp} for the long-term period. \\textit{The provider's long-term economic expectations are also dynamic} \\cite{mistry2018long}. The qualitative economic model specifies the provider's \\textit{temporal business strategies} such as reputation building, risk management, revenue, and profit maximization \\cite{sajibicsoc2016}. These business strategies determine the service provisioning \\textit{preferences}. For example, the provider may observe very high demand for \\textit{Network-intensive services} (i.e., VM instances designed for Network-intensive applications, e.g., C5n instance type in Amazon EC2\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/c5\/}) in the Christmas or holiday period. 
The provider may prefer to provision Network-intensive services rather than CPU-intensive services (i.e., VM instances designed for CPU-intensive applications, e.g., P3 instance in Amazon EC2\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/p3\/}) to increase its revenue.\n\n\n\nWe assume that an IaaS provider has its long-term qualitative economic model, i.e., the temporal service provisioning \\textit{preferences} \\cite{sajibicsoc2016}. The provider receives long-term IaaS requests from different consumers which are represented in \\textit{time series} and associated with QoS parameters and price. The \\textit{qualitative IaaS composition} is defined as the \\textit{selection} or \\textit{acceptance} of an optimal set of IaaS requests using the qualitative economic model of the provider. \\textit{We aim to provide a comprehensive framework for long-term qualitative IaaS composition}. To the best of our knowledge, apart from our previous work \\cite{sajibicsoc2016,mistry2018long}, existing research mainly focuses on the \\textit{quantitative IaaS composition}. The target of the quantitative composition is to \\textit{maximize revenue and profit} of the provider for a \\textit{short-term} period without any long-term business strategies or economic model \\cite{ye2013qos,chaisiri2012optimization,zhu2010resource}. In contrast, the target of the qualitative composition is to \\textit{maximize the similarity measure} between a given set of consumer requests and the provider's qualitative economic model.\n\n\n\nWe represent the provider's long-term qualitative economic model using \\textit{Temporal Conditional Preference Networks} (TempCP-nets) \\cite{mistry2017probabilistic}. The TempCP-net \\textit{ranks} the short-term and long-term consumer requests using a k-d tree according to the provider's preferences \\cite{sajibtsc2014}. 
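The idea of ranking requests by the provider's temporal preferences can be sketched in a few lines. This is a deliberately simplified illustration: a real TempCP-net encodes conditional preference statements per time interval, whereas the flat rank table, service types, and quarters below are hypothetical stand-ins.

```python
# Toy illustration of ranking consumer requests by a provider's temporal
# preferences. Lower rank = more preferred; the table is hypothetical,
# e.g., the holiday quarter (Q4) favours Network-intensive VMs.
PREF_RANK = {
    ("Q4", "network"): 1,
    ("Q4", "cpu"): 2,
    ("Q1", "cpu"): 1,
    ("Q1", "network"): 2,
}

def rank(request):
    """Rank a request {"segment": ..., "type": ...}; unknown cases rank last."""
    return PREF_RANK.get((request["segment"], request["type"]), 3)

requests = [
    {"id": "r1", "segment": "Q4", "type": "cpu"},
    {"id": "r2", "segment": "Q4", "type": "network"},
    {"id": "r3", "segment": "Q1", "type": "network"},
]

# Most-preferred first; Python's sort is stable, so ties keep input order.
ordered = sorted(requests, key=rank)
print([r["id"] for r in ordered])  # -> ['r2', 'r1', 'r3']
```

Ranking alone is not yet a composition: once resource limits are added, the provider must still decide which of the highly ranked requests to accept, which is what makes the selection problem combinatorial.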
The qualitative composition is \\textit{transformed} into a \\textit{combinatorial optimization problem} where the objective is to select the consumer requests that maximize the preference rankings. We explore two composition approaches: a) \\textit{global composition}, and b) \\textit{local composition} \\cite{alrifai2009combining, yu2008framework}. The global composition approach considers all the consumer requests within the composition interval, which is computationally expensive \\cite{sajibicsoc2016}. The local composition approach \\textit{divides} the composition interval into several time segments and optimizes a \\textit{partial set} of requests (acceptance or rejection) in each time segment. It may \\textit{significantly improve} the runtime efficiency as we do not need to consider the whole set of requests during the entire composition period. However, the local composition approach is a \\textit{greedy approach} of sequential optimization \\cite{gnanlet2009sequential,pednault2002sequential} and may not produce the optimal result as the request selection is \\textit{temporally dependent} on the previous acceptance or rejection decisions in other time segments. For example, when we optimize the requests from left to right time segments (i.e., January, February, March), the composition result may be different from the optimization from right to left time segments (i.e., March, February, and January). A reinforcement learning based approach called 3d Q-learning \\cite{mistry2018long} is proposed to \\textit{find the optimal sequence} of temporal selections. The proposed 3d Q-learning based composition approach is considered \\textit{off-policy} as the \\textit{learning approach has no restrictions over exploration}. The proposed approach does not consider the \\textit{temporal distribution} and \\textit{correlations} of the \\textit{historical request sets} to compose a new set of requests using \\textit{policy reuse}. 
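The order dependence of the local, segment-by-segment greedy optimization described above can be reproduced with a toy example; the requests, preference scores, and one-VM-slot-per-segment capacity below are hypothetical, not taken from the paper's experiments.

```python
# Toy local (segment-by-segment) greedy composition. A request occupies
# its slot in every time segment it spans; capacity is one slot/segment.
REQUESTS = {
    "A": {"segments": {1}, "score": 3},
    "B": {"segments": {1, 2}, "score": 4},
    "C": {"segments": {2}, "score": 5},
}

def greedy_compose(order):
    """Greedily accept requests, optimizing one time segment at a time."""
    free = {1: True, 2: True}  # one VM slot per segment
    accepted = []
    for seg in order:
        # undecided requests active in this segment, best score first
        cands = sorted(
            (r for r, d in REQUESTS.items()
             if seg in d["segments"] and r not in accepted),
            key=lambda r: -REQUESTS[r]["score"],
        )
        for r in cands:
            # accept only if every segment the request spans is still free
            if all(free[s] for s in REQUESTS[r]["segments"]):
                accepted.append(r)
                for s in REQUESTS[r]["segments"]:
                    free[s] = False
    return accepted, sum(REQUESTS[r]["score"] for r in accepted)

print(greedy_compose([1, 2]))  # left-to-right:  (['B'], 4)
print(greedy_compose([2, 1]))  # right-to-left:  (['C', 'A'], 8)
```

Optimizing the segments left to right accepts only B for a total score of 4, while the reverse order accepts C and A for a total of 8, which illustrates why the selection sequence itself has to be learned rather than fixed in advance.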
\n\n\nWe propose a novel \\textit{on-policy} based 3d Q-learning approach that effectively utilizes historical information to find the optimal selection of requests. First, the proposed learning approach reduces the run-time by removing redundant state transitions in the off-policy based 3d Q-learning approaches. Next, a novel request annotation approach based on agglomerative clustering techniques \\cite{fernandez2008solving,bouguettaya2015efficient} is proposed to capture the \\textit{intrinsic characteristics} of the historical requests such as the temporal distribution and the global preference ranking. A \\textit{novel policy reuse approach} is proposed to compose a new set of requests that effectively utilizes previous policies learned from historical information. The key contributions of this work are as follows:\n\n\\begin{itemize}\n \\item A comprehensive framework to compose long-term and short-term IaaS requests based on the provider's qualitative economic model.\n \\item An on-policy based 3d Q-learning approach to finding the optimal request selection sequence.\n \\item A novel request annotation approach to capture the intrinsic characteristics of historical request sets.\n \\item A novel policy reuse approach to enable effective utilization of historical information in the proposed 3d Q-learning. \n\\end{itemize}\n\nThe rest of the paper is structured as follows. In Section 2, we introduce a set of terminologies and concepts that are used to formulate the qualitative composition problem. Section 3 illustrates a motivation scenario to explain the need for sequential learning in IaaS composition. Section 4 provides a general overview of the proposed qualitative IaaS composition framework. In Section 5, we describe the proposed IaaS composition approach for a new set of requests. Section 6 describes the proposed long-term qualitative composition with the previous learning experience. 
Section 7 presents our experiments to evaluate the proposed approaches. Section 8 summarizes the related work on economic model based cloud service composition and sequential learning approaches. Finally, Section 9 concludes the paper and discusses its limitations and future work. \n\n\\section{Preliminaries}\n\nIn this section, we introduce a set of terminologies and concepts that are used to formulate the qualitative IaaS composition problem. We use these terminologies throughout the paper to describe the problem and the proposed solution.\n\n\n\\begin{itemize}\n \\item \\textit{Providers and Services:} An IaaS provider is referred to as the provider. We consider the composition problem from the perspective of a single provider. IaaS services usually include a wide selection of VM instance types optimized to fit different use cases such as general purpose, compute optimized, memory optimized, and storage optimized \\cite{hwang2015cloud}. The VM instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give consumers the flexibility to choose the appropriate mix of resources for their applications. We consider VM instances as services. The VM instances designed for Memory-intensive applications are termed Memory-intensive services. For example, the R5 instance in Amazon EC2 is an example of a Memory-intensive service\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/r5\/}. Similarly, VM instances designed for CPU-intensive applications are termed CPU-intensive services. For example, the P3 instance in Amazon EC2 is an example of a CPU-intensive service\\footnote{ https:\/\/aws.amazon.com\/ec2\/instance-types\/p3\/}. \n \n \\item \\textit{Resources:} The resources are the capacity of the physical machines that are used to offer the VM services, such as CPU cores, memory units, and network bandwidth \\cite{hwang2015cloud}. 
\\textit{We assume that the provider has a fixed set of resources.} Here, the fixed set of resources refers to the maximum number of resources the provider may have at a certain point in time. The maximum number of resources, however, may change over time. In such a case, the fixed set of resources is updated according to the new maximum number of resources. The proposed approach can be considered a proactive approach, where the provider anticipates the maximum number of resources it can have. Our aim is to utilize these resources based on the provider's economic model.\n \n \\item \\textit{Consumers}: The targeted consumers are mainly medium to large business organizations such as SaaS providers, governments, universities, and research institutes. These organizations may require services for a long-term period (e.g., 1 to 3 years). \n \n \\item \\textit{IaaS Requests}: IaaS requests refer to the configuration of \\textit{functional} and \\textit{non-functional} requirements of the VM over a period of time. A consumer may need a VM with 2 vCPUs, 2 GB memory, and 99\\% availability in the first six months of a year. The IaaS request for that period is represented as (2 vCPU, 2 GB memory, 99\\% availability). \\textit{We assume deterministic IaaS requests, i.e., the provider has knowledge about the long-term requests prior to the composition}. We compose these requests based on the provider's economic models. The provider defines a request as either \\textit{short-term} or \\textit{long-term}. A request is considered short-term if only one business strategy is applicable for the lifetime of the request. A request is considered long-term if more than one business strategy is applicable to the request. For instance, if the provider changes its business strategies or economic models quarterly, a request that needs reservation of a VM for 1 year is considered a long-term request. 
In these circumstances, a request that reserves the VM for one month is considered a short-term request. Note that we focus on economic model-based service provisioning where VMs are reserved for a certain time period. \\textit{The burstable on-demand resources are outside the focus of this paper}. \n \n \\item \\textit{Conditional Preference Networks (CP-nets)}: The CP-net is a widely used tool that captures a user's conditional preferences qualitatively \\cite{cp1}. CP-nets \\cite{boutilier2004cp} are a compact and intuitive formalism for representing and reasoning with conditional preferences under the ceteris paribus (``all else being equal\") semantics. The dynamic semantics of the preferences are indicated using a Conditional Preference Table (CPT) \\cite{boutilier2004cp}. One CP-net can only represent one business strategy at a time \\cite{sajibicsoc2016}. For example, if the business strategy is to build reputation for the first three months, a CP-net could be constructed that graphically expresses a preference for higher QoS in a service over higher prices. \n \\item \\textit{Temporal Conditional Preference Networks (TempCP-nets)}: The TempCP-net is a set of CP-nets that represents a provider's economic expectations over the long-term period \\cite{sajibicsoc2016}. If there are three business strategies for a year, the TempCP-net could be constructed with a set of three CP-nets.\n \n \\item \\textit{k-d Tree}: The induced graph of a CP-net may contain nodes with multi-dimensional tuples and the annotated ranking of preferences \\cite{boutilier2004cp}. The \\textit{k-d tree} is a graph indexing technique in which every node is a \\textit{k}-dimensional point \\cite{andoni2006near}. Every non-leaf node in a k-d tree can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts, known as half-spaces. 
Points on the left and right sides of this hyperplane are represented by the left and right subtree of that node respectively. We apply the k-d tree to index the service preference rankings based on the provider's TempCP-net.\n \n \\item \\textit{Local IaaS Ranking}: The rank of an IaaS request is determined by its k-d tree index. The ranking of a request or a set of requests considering only one period is called the local IaaS ranking. For example, if an IaaS request spans from January to December, then its local IaaS ranking for January is computed considering only its preference rankings in January.\n \n \\item \\textit{Global IaaS Ranking}: The ranking of a long-term request or set of requests considering every period is called its global IaaS ranking. For example, if an IaaS request spans from January to December, then its global ranking is the aggregate of the local rankings of each month from January to December. \n\n \\item \\textit{Sequential Optimization}: The sequential optimization approach is a series of local optimizations where the initial problem is divided into sub-problems \\cite{gnanlet2009sequential}. The local optimizations have a cascading effect, as the decisions in each local optimization affect the decision making in successive local optimizations. The key benefit of sequential optimization is that it significantly reduces the search space compared to global optimization \\cite{pednault2002sequential}. \n \n \\item \\textit{Q-learning}: Q-learning \\cite{watkins1992q} is a model-free reinforcement learning approach. The goal of Q-learning is to learn a policy, which tells an agent what action to take under what circumstances. It does not require a model of the environment, and it can handle problems with stochastic transitions and rewards without requiring adaptations \\cite{watkins1992q}. In the context of IaaS composition, Q-learning is used to learn the optimal sequence of request selection. 
The two-dimensional Q-learning has no start and terminal states as it accepts only model-free state transitions \\cite{shani2005mdp}. A Q-learning process is termed \\textit{off-policy} if the learning approach has no restrictions over exploration \\cite{munos2016safe}. In contrast, the \\textit{on-policy} based Q-learning process is \\textit{smart} and removes redundant state transitions considering historical information \\cite{van2016deep}.\n\\end{itemize}\n\n\n\n\n\\section{Motivation Scenario}\n\nIn this section, we illustrate an example scenario to describe the need for sequential learning in IaaS composition. Let us consider an IaaS provider that offers virtual CPU services. The provider offers 100 CPU units with 100\\% availability. For simplicity, we consider only availability as a QoS parameter. We define three semantic levels - high, moderate, and low - to express the qualitative preferences of the provider for service attributes, as shown in Figure \\ref{fig:semanticTable}. We assume the provider may change the interpretation of the semantic levels based on the cloud market condition. The provider considers more than \\$1000 a high price according to Figure \\ref{fig:semanticTable} in the first year. The provider considers more than \\$1300 a high price due to predicted inflation in the second and third years. Three different preference rankings are set based on the provider's annual goals in three years.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[width = .8\\textwidth]{semanticTable.pdf}\n \\caption{Semantic preference table}\n \\label{fig:semanticTable}\n \\vspace{-4mm}\n\\end{figure}\n\nWe adopt the economic models of the provider as described in \\cite{mistry2018economic} to continue this example. Figure \\ref{fig:cp} shows the provider's three economic models for three different years. In the first year, the provider wants to offer high-quality services at a lower price to build its reputation. 
The most important attribute in the first year is the ``availability'' of a service, followed by the ``CPU'' and the ``price''. The provider decides to maximize its profit by offering services at a higher price for lower resources and QoS in the second year. The ``price'' therefore conditions the ``CPU'' and the ``availability'' in the second year. The provider's preference for the third year is to provision lower CPU-intensive services. Let us assume the provider wants to receive requests that are long-term. Therefore, the provider offers discounts on long-term service requests. A decision variable labeled $N$ is used to distinguish the type of requests. The value of $N$ is set to true ($T$) when a request is long-term. A request is considered long-term if it spans over the next period. Otherwise, the value of $N$ is set to false ($F$) to indicate the request as a short-term request. In Figure \\ref{fig:cp}, $N$ is associated with ``price'' ($P$) for the first two years. In these periods ($CP1$ and $CP2$), the provider considers the high and moderate ``price'' levels indifferently for long-term requests. $N$ is associated with ``availability'' ($A$) in the third year. According to $CP3$, short-term requests are provided with relatively lower ``availability'' at the same moderate price. More details about how to represent these economic models using CP-nets can be found in \\cite{sajibicsoc2016}.\n\n\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{CPNets} \n \\caption{CP-nets of a provider}\n \\label{fig:cp}\n \\vspace{-4mm}\n\\end{figure}\n\n\n\n\n\n\nLet us assume a set of requests is represented by $A$ in Figure \\ref{fig:req}(a). $A$ has four requests, i.e., $\\{R1\\}$, $\\{R2\\}$, $\\{R3\\}$, and $\\{R4\\}$, as shown in Figure \\ref{fig:req}(a). Each of these requests arrives at the beginning of the composition. A request is represented in annual segments for simplicity. 
For instance, $(C: High, A: low, P: moderate)$ represents a request segment of $\\{R1\\}$ in the first year. Similarly, Figure \\ref{fig:req}(a) shows the annual requirements of the other consumer requests $\\{R2\\}, \\{R3\\}$ and $\\{R4\\}$ for three years. The provider can select from $2^{4} = 16$ possible combinations of these four requests to find the optimal composition in a brute-force manner. \n\n\n\n\nThe number of possible ways to select the requests grows exponentially with the number of requests. A sequential optimization process may be applied to reduce the total number of comparisons needed to find the global optimal composition. Let us consider a sequential optimization approach for the request set $A$ where requests are selected from the right to the left year, i.e., $3^{\\text{rd}}$, $2^{\\text{nd}}$, and $1^{\\text{st}}$. $2^3 = 8$ comparisons are required in the third year to select the highest ranked $R3$ according to $CP3$. Note that in $CP3$ the highest preference order is low CPU, high price, and low availability. In $R3$, the consumer's preference order is low CPU, moderate availability, and low price. Therefore, $R3$ is the closest matching request according to $CP3$. Details of the ranking technique using CP-nets can be found in \\cite{sajibicsoc2016}. Once we accept only $R3$ in the third year, $R1$ and $R2$ are rejected in the subsequent optimization steps. In the following steps, the local optimization accepts $R4$ and updates the solution. The optimal solution $\\{R3, R4\\}$ is calculated in ten comparisons. If we change the sequence of the optimization process, e.g., left to right, i.e., $1^{\\text{st}}$, $2^{\\text{nd}}$ and $3^{\\text{rd}}$, the total number of comparisons in the first year becomes $2^4 = 16$ to select the highest ranked $R1$. The request $R1$ has ``N\/A'' ranking in the following years. As a result, the left to right sequence produces an unacceptable solution. 
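The order dependence in this example can be reproduced with a small sketch. The three requests, their per-year ranks, and the single-unit capacity below are hypothetical and only mirror the structure of the scenario: None plays the role of an "N/A" year, and a request must fit in every year it spans:

```python
CAP = 1  # hypothetical capacity: one request can be served per year

# rank[r][t]: preference rank of request r in year t (lower = better);
# None means the request does not span year t. All values hypothetical.
rank = {
    "R1": [1, 3],      # spans both years
    "R2": [2, None],   # first year only
    "R3": [None, 1],   # second year only
}

def greedy(order):
    """Accept, per year in `order`, the best-ranked request that still fits."""
    accepted = set()
    used = [0, 0]                          # capacity consumed per year
    for t in order:
        # requests present in year t, best local rank first
        for r in sorted((r for r in rank if r not in accepted
                         and rank[r][t] is not None),
                        key=lambda r: rank[r][t]):
            spans = [u for u in (0, 1) if rank[r][u] is not None]
            if all(used[u] < CAP for u in spans):
                accepted.add(r)
                for u in spans:
                    used[u] += 1
    return accepted
```

Processing right to left, greedy((1, 0)) accepts both single-year requests, while greedy((0, 1)) lets the locally best R1 lock the capacity of both years; which order wins depends on the request set, which is what motivates learning the order.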
The right to left sequence generates the optimal result when sequential optimization is applied on $A$. The same sequence may not work or give a good solution for a different set of requests. Let us consider the request set $B$ in Figure \\ref{fig:req}(c). The distribution of requests in $B$ is different from $A$. The number of comparisons becomes $2^5 = 32$ to select the highest ranked $R3$ in the third year (Figure \\ref{fig:req}(d)). As $R3$ has ``N\/A'' ranking in the second year, it cannot be selected. As a result, the right to left sequence does not work for $B$. We propose a model-free learning approach to generate the optimal sequence of local optimizations.\n\n\\begin{figure}\n \n \\centering\n \\includegraphics[width= 0.8\\textwidth]{images\/motive2.pdf} \n \\caption{ Sets of requests (a) request set $A$ (c) request set $B$}\n \\label{fig:req}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\n\\section{A Qualitative IaaS Composition Framework}\n\nIn this section, we provide a general overview of the proposed qualitative IaaS composition framework. We also describe some common concepts in qualitative IaaS composition, such as the long-term IaaS request representation, the long-term qualitative preference representation, and combinatorial optimization in qualitative IaaS composition.\n\nA qualitative composition framework is proposed that learns from the historical information of past request sets, as shown in Figure \\ref{fig:propframe}. We assume that a set of long-term requests of consumers and the provider's qualitative preferences are available at the beginning of the composition. Our target is to find the optimal composition using a learning based approach that utilizes information from past consumer requests. 
For each set of new requests, we learn the sequence of the optimal composition and save it for future request sets.\n\n\\begin{figure}\n\n\t\\centering\n \t\\includegraphics[width= .6\\textwidth]{propframe.pdf} \n \\caption{Long-term qualitative composition framework}\n \t\\label{fig:propframe}\n \n\\end{figure}\n\nWe use TempCP-nets to represent the qualitative preferences, which enables qualitative composition of the requests. The proposed framework performs indexing of the TempCP-net and ranking of the request configurations. The indexing of the TempCP-net is built using \\textit{k}-d tree indexing, which enables efficient searching of the rank of the consumer requests. The ranking of the request configurations is computed using the provider's TempCP-nets. The ranked requests are taken as input to a learning module. The learning module applies a reinforcement learning based method that utilizes the historical information of past requests to find the optimal composition efficiently. \n\n\n\n\n\n\n\n\\subsection{\\textbf{Long-term IaaS Request Representation}}\n\nThe long-term requests of the consumers are represented as a time series group (TSG) of the service attributes. We denote \\(T\\) as the total service usage time. The TSG of a consumer request is defined as \\(R_c = \\{s_{c1},s_{c2},...,s_{cn}\\} \\), where $s_{cn}$ represents a service attribute and \\(cn\\) is the number of service attributes in \\(R_c\\). We represent a service attribute time series as \\( s_{cn} = \\{(x_n,t_n)|n=1,2,3,...., T\\} \\), where \\(x_n\\) is the value of \\(s_{cn}\\) at the time interval \\(t_n\\). Figure \\ref{fig:req} shows two sets of requests where each request in a set has 3 service attributes ($cn=3$), i.e., CPU, availability, and price. Each request has three annual intervals ($T =3$). 
Each service attribute may have different values during these intervals.\n\n\n\n\n\\subsection{\\textbf{Long-term Qualitative Preferences based on TempCP-nets}}\n\\label{sec:cp}\n\n\n\n\nWe need an efficient tool to represent the provider's qualitative preferences. We define a set of attributes $V = \\{X_{1},..., X_{n}\\}$ which are defined over finite, discrete domains $D(X_n)$ and semantic domains $S(X_n)$. The attributes are either functional or non-functional. Examples of functional attributes are CPU ($C$), Memory ($M$), and so on. Availability ($A$), Price ($P$), and Latency ($LT$) are examples of QoS attributes. A mapping table $Sem\\_Table(X_{n}, x_{n})$ is used to map $x_{n}$ in $D(X_{n})$ into $s_{n}$ in $S(X_{n})$, where $s_{n} = Sem\\_Table(X_{n}, x_{n})$. An example of a semantic table is shown in Figure \\ref{fig:semanticTable}. We assume the preference order and semantics of $V$ are static within an interval in a long-term composition period. However, they may vary across different intervals. We consider a set of decision variables $DN = \\{N_{1}, N_{2},....,N_{d}\\}$. A decision variable may represent the request type, the request duration, etc. We assume that a decision variable is binary. Therefore, it takes true or false $\\{T, F\\}$ values. For instance, the decision variable is set to true for a request if it spans into the next interval. \n\n\\begin{figure}\n\n\t\\centering\n \t\\includegraphics[width= .8\\textwidth]{induced.pdf} \n \\caption{Induced preference graph with decision variable for CP1}\n \t\n\t\\label{fig:f3}\n\t\\vspace{-3mm}\n\\end{figure}\n\nThe service provisioning time $T$ is divided into $m$ intervals and represented as $T=\\sum_{k=1}^{m}I_{k}$, where $I_k$ is an interval in $T$. We assume that the provider sets a preference ranking of each service configuration at each interval $I_k$. 
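The semantic mapping table can be sketched as a simple threshold lookup. The thresholds below are hypothetical stand-ins for the entries of the semantic preference table, and, as noted above, a provider may swap in a different table per interval:

```python
# Hypothetical thresholds mapping concrete attribute values to the
# provider's semantic levels (high / moderate / low); the real table
# is provider- and interval-specific.
SEM_TABLE = {
    "price":        [(1000, "high"), (500, "moderate"), (0, "low")],
    "availability": [(0.99, "high"), (0.95, "moderate"), (0.0, "low")],
}

def sem_level(attribute, value):
    """Map a concrete value to its semantic level for `attribute`."""
    for threshold, level in SEM_TABLE[attribute]:
        if value >= threshold:  # thresholds are listed highest first
            return level
    raise ValueError(f"no semantic level for {attribute}={value}")
```

For example, under these hypothetical thresholds a \$1300 price maps to the "high" level, matching the first-year interpretation in the motivation scenario.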
The preference rankings of service configurations are expressed over the complete assignments on $V$ and $DN$ with the semantic domain $Sem\\_D^{I_{k}}(V)$. $O^{I_{k}}$ denotes the set of service configurations for an interval $I_{k}$. A total order $(\\succeq)$ over the service configuration set represents a preference ranking for an interval. For example, $o_{1} \\succeq o_{2}$ denotes that a service configuration $o_{1}$ is equally or more preferred than $o_{2}$. The preference relation $o_{1} \\succ o_{2}$ denotes that the service configuration $o_{1}$ is preferred over $o_{2}$. If the preferences are indifferent or non-comparable, we denote the relation using $o_{1} \\sim o_{2}$. $T \\sim F$ means that the provider does not care about the true and false values of a decision variable. \n\nThe size of the service configuration set $O^{I_{k}}$ grows exponentially with the number of intervals. Therefore, direct assignment of all possible preferences over the long-term period is not always feasible. We represent the provider's long-term preferences on service configurations using a TempCP-net. A TempCP-net is represented as a set of CP-nets with semantic preference tables for each interval of the composition period. We denote a TempCP-net as $\\text{TempCP-Net} = \\{(CP^{I_{k}}, Sem\\_Table^{I_{k}}, I_{k})\\;|\\;\\forall k \\in [1,m]\\}$. A CP-net can be considered a graphical model that formally represents qualitative preferences and reasons about them. A CP-net $CP^{I_{k}}$ in the interval $I_{k}$ consists of a directed graph $G$ which is defined using $V$ and $DN$. Each node in $G$ represents an attribute $X_i \\in V$. The nodes of $DN$ are represented by dashed circles. The nodes of $V$ are represented by solid circles. In this work, we only consider acyclic CP-nets to represent the provider's qualitative preferences. 
The CPT of each node is denoted by $CPT(X_{i})$, which contains a total order $\\succ^{i}_{u}$ for each instantiation $u$ of $X_{i}$'s parents $Pa(X_{i}) = U$ \\cite{cp1}. For example, $Pa(P)=C$ and $CPT(C)$ contains $\\{C1, C2\\}$ in $CP3$, while preferences are made over $\\{P1, P2, P3\\}$ (Figure \\ref{fig:cp}). A preference outcome $o$ of a CP-net is obtained by sweeping through the CP-net from top to bottom, setting each variable to its preferred value given the instantiation of its parents \\cite{wang2012wcp}. A preference order $o \\succ \\acute{o}$ is called a consequence of a CP-net if $o \\succ \\acute{o}$ can be obtained directly from one of the CPTs in the CP-net. For example, the fact that $(A2, C2, P2)$ is preferred to $(A2, C1, P2)$ is a direct consequence of the semantics of $CPT(C)$ in $CP1$ for the long-term requests (Figure \\ref{fig:cp}). The set of consequences $o\\succ \\acute{o}$ creates a partial order over all possible service configurations of an acyclic CP-net.\n\n\n\nFigure \\ref{fig:f3} shows the induced preference graph generated by $CP1$, which is a directed acyclic graph (DAG). Two induced graphs are generated based on the value of the decision variable $N$ (i.e., true and false). The true value of $N$ represents the induced preference graph for long-term requests. The false value of $N$ represents short-term requests. $(A1, C1, P1)$ is considered the most preferred configuration for the short-term requests. There is no outgoing edge from $(A1, C1, P1)$. Similarly, $(A2, C1, P3)$ has no incoming edge because it is the least preferred request configuration. There is an edge between $(A2, C1, P1)$ and $(A1, C1, P1)$. According to the preference statement $A1 \\succ A2$ in $CPT(CP1)$, $(A1, C1, P1) \\succ (A2, C1, P1)$. $(A1, C1, P1)$ and $(A1, C1, P2)$ do not have any outgoing edges because they are the most preferred configurations (Figure \\ref{fig:f3}). 
The induced preference graph of a CP-net is constructed by pairwise comparison of all service configurations. The complexity of ordering queries for a TempCP-net in an interval is $O(ndq^2)$, where $n$ and $d$ are the numbers of attributes and decision variables, respectively, and $q$ is the number of output configurations.\n\n\n\n\n\\subsection{\\textbf{Combinatorial Optimization in Qualitative IaaS Composition}}\n\n\\begin{figure}[t!]\n\n\t\\centering\n \t\\includegraphics[width= 0.8\\textwidth]{kd.pdf} \n \\caption{\\textit{k}-d tree indexing of the induced preference graphs}\n \\vspace{-3mm}\n\t\\label{fig:f4}\n\\end{figure}\n\nGiven a provider's TempCP-net and a set of long-term requests $\\bar{R}$, the IaaS composition is defined as the selection of an optimal set $\\bar{r} \\subseteq \\bar{R}$ that produces the best similarity measure with the TempCP-Net. We consider qualitative preference rankings as the foundation of the similarity measure. First, we index the preference rankings from the TempCP-Net. We then perform a similarity search on the indexed TempCP-Net, which is denoted as $Pref(\\text{TempCP-Net}, \\bar{r})$. Hence, the objective of the IaaS composition is to minimize the ranking output $Pref(\\text{TempCP-Net}, \\bar{r})$. \n\n\\subsubsection{\\textbf{Indexing Preference ranks}} \n\\label{sec:kd}\n\n\n\nThe preference rank of a request configuration is denoted as $Sem\\_Req$ $=(s_{1}, ...,s_{n}) \\;|\\; \\text{where } s_{i} \\in S(X_{i}), \\text{and }X_{i} \\in V$. It is found by a pre-order traversal of the induced graph. The time complexity of searching the preference rank over the induced graph is $O(n)$. A request configuration $(s_{1}, ...,s_{n})$ is considered a multidimensional vector. We use a \\textit{k}-d tree \\cite{jia2010optimizing} to improve the searching process. There exist different multi-dimensional indexing structures such as B-trees, B+-trees, k-d trees, Point Quadtrees, and R, R*, and R+ trees \\cite{sellis1997multidimensional,robinson1981kdb}. 
The k-d tree is a \\textit{suitable} choice in IaaS composition, as the IaaS preference ranking requires multi-dimensional value queries \\cite{sajibicsoc2016,nam2004comparative}. \\textit{Note that finding the optimal multi-dimensional indexing structure for IaaS composition is outside the focus of this paper}.\n\nThe \\textit{k}-d tree is a binary tree that is used for indexing points in a space with k dimensions. We represent each service configuration $o$ (i.e., a node in the induced graph) as a k-dimensional point in the \\textit{k}-d tree. Each node splits all its children along a specific dimension into two subspaces, known as half-spaces. Each subspace is represented by either the left or the right sub-tree of that node. A canonical method is used to build the \\textit{k}-d tree \\cite{jia2010optimizing}. The construction algorithm follows a cycle during the selection of splitting planes. For example, at the root, all children are split along the ``availability'' plane in Figure \\ref{fig:f4}. The children of the root split their children along the ``CPU'' plane. The grandchildren of the root have ``price''-aligned planes. Finally, the great-grandchildren again have planes aligned with availability. \n\nLet us assume there are $n$ points in an induced preference graph. We place the median point found in one dimension at the root of the \\textit{k}-d tree. Every other point is placed into the right or the left sub-tree according to whether it is smaller or larger than the root in the same dimension. This process creates a balanced \\textit{k}-d tree where the runtime is $O(n \\log n)$ \\cite{jia2010optimizing}. We annotate each node of the \\textit{k}-d tree with its respective preference ranking obtained from the induced graph. For instance, the root node $(A2,C2,P2)$ of the \\textit{k}-d tree in Figure \\ref{fig:f4} has the preference ranking 6, which is obtained from its induced graph. 
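A minimal version of the canonical construction and the exact-match rank query can be sketched as follows; the service configurations are encoded as numeric triples and both the example points and their annotated ranks are hypothetical:

```python
# Build a k-d tree over configuration points annotated with preference
# ranks, using the canonical median split with the splitting dimension
# cycling by depth.
def build(points, depth=0):
    """points: list of (config_tuple, rank) pairs."""
    if not points:
        return None
    axis = depth % len(points[0][0])
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2                      # median becomes this node
    return {"config": points[mid][0], "rank": points[mid][1], "axis": axis,
            "left": build(points[:mid], depth + 1),
            "right": build(points[mid + 1:], depth + 1)}

def query(node, config):
    """Return the annotated rank of an exact configuration, else None."""
    if node is None:
        return None
    if node["config"] == config:
        return node["rank"]
    axis = node["axis"]
    if config[axis] < node["config"][axis]:
        return query(node["left"], config)
    found = query(node["right"], config)
    if found is None and config[axis] == node["config"][axis]:
        found = query(node["left"], config)     # equal keys may fall on either side
    return found
```

A lookup such as `query(build(points), config)` then returns the annotated preference rank, in O(log n) steps on average for a balanced tree.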
We construct the \\textit{k}-d tree indexing for each value of the decision variable $N$. For example, two different \\textit{k}-d tree indexes are shown in Figure \\ref{fig:f4} to represent short-term and long-term service configurations. Service configurations with indifferent preferences have the same ranking value. For example, the provider's preferences on $(A2,C1,P2)$ and $(A1,C2, P1)$ are indifferent for long-term requests. Both service configurations are annotated with preference ranking 4 in Figure \\ref{fig:f4}.\n\n\\subsubsection{\\textbf{Local and Global Preference Ranking}}\n\n\n\nA request may not be inclusive, i.e., it may not fit exactly within an interval of the TempCP-net. It may overlap two or more intervals of the TempCP-net. An overlapping request $R$ (interval $[T_{0}, T_{m}]$) is divided into smaller inclusive segments where each segment fits within an interval of the TempCP-net. The attributes that have \\textit{temporal semantics} require such segmentation. For instance, ``Price'' is considered an attribute with \\textit{temporal semantics} in a consumer request. If a request requires 20 units of CPU for 12 months for a total of \\$120, a monthly segmentation interprets this as the provisioning of 20 CPU units for \\$10 per month. Let us consider an attribute $X$ in $R$ that has \\textit{temporal semantics}. If the segmentation is applied in $[T_{j}, T_{k}]$, the new segments are calculated using the following equation according to \\cite{sajibicsoc2016}: \n\\vspace{-3mm}\n\n\\begin{equation}\n\\label{eqn:rank}\nx_{i}^{[T_{j},T_{k}]} = x_{i}^{[T_{0},T_{m}]} \\times \\frac{|T_{k}-T_{j}|}{|T_{m}-T_{0}|}\n\\end{equation} \n\n The requests are ready to be composed after the temporal segmentation. We define a set of $N$ requests as $\\bar{R} = \\sum_{i=1}^{N} R_i$. 
We use the following composition rules to combine the requests in a set $\\bar{R}$ \\cite{sajibtsc2015}:\n\n\\begin{itemize}\n \\item The rule of summation: $\\bar{x_{i}} = \\sum_{i=1}^{N} x_{i}$, where $X_{i} \\in \\{C, M, NB, RT, P\\}$.\n \\item The rule of maximization: $\\bar{y_{i}} = max(y_{i}), \\forall \\; i \\in [1,N]$, where $Y_{i} \\in \\{A, TP\\}$.\n\\end{itemize}\n\nThe preference ranking function of a set of requests is $Pref(\\text{TempCP-Net}, \\bar{R}): V \\rightarrow [1,n]$, which outputs the order of $\\bar{R}$ according to the preferences (the \\textit{k}-d tree of the TempCP-net). $\\bar{R}$ is transformed into $\\acute{\\bar{R}} = \\{(s_{i}, I_{j})\\;|\\;s_{i} \\in S(X_{i}), X_{i} \\in V, \\text{and } T = \\sum_{j=1}^m I_{j}\\}$ based on the $Sem\\_Table$ of the TempCP-net. First, we define the local similarity measure, i.e., the preference ranking for a time segment, and then we define the global objective function for the entire composition period.\n\n\\begin{itemize}\n \\item \\textit{Local preference ranking}: Let $M^{i}(s_{i})$ be the function that, in the interval $i$, outputs the preference ranking by temporally matching the segments of $\\acute{\\bar{R}}$ with the \\textit{k}-d tree. The temporal matching process, or the searching algorithm, starts from the root node and traverses the tree recursively. The search algorithm returns the preference ranking of a node if it matches a request configuration. For instance, the algorithm returns ranking 10 by performing 10 comparisons for the short-term request $(A2, C1, P3)$ in Figure \\ref{fig:f4}. If a query search point is not found in the \\textit{k}-d tree, it is discarded in the composition. 
The complexity of performing a query in a \\textit{k}-d tree is on average $O(\\log n)$ for each service.\n \n \\item \\textit{Global Preference Ranking}: As the long-term requests are divided into local segments, we aggregate the local preference rankings to generate the global preference ranking as follows:\n \\begin{equation}\n \\label{eq:ranking}\n Pref(\\text{TempCP-Net}, \\bar{R}) = \\sum_{i=1}^{m} M^{i}(s_{i})\n \\end{equation}\n\\end{itemize}\n\n\\section{IaaS Composition for a New Set of Requests}\n\nIn this section, we illustrate the proposed IaaS composition approach for a new set of requests, i.e., IaaS composition without any prior knowledge of incoming requests. We introduce a sequential IaaS composition approach that leverages reinforcement learning techniques to compose the incoming requests.\n\nWe assume that initially the IaaS provider does not store a history of incoming requests. Each set of incoming requests is considered a new set of requests, and the composition is performed from scratch for the new set of requests. We identify three approaches to compose a new set of requests:\n\\begin{itemize}\n \\item \\textit{Brute-force approach}: This approach generates all the combinations of requests over the total composition period. The preference ranking of each combination is computed using the global preference ranking in Equation \\ref{eq:ranking} and compared pairwise. The combination of requests that generates the minimum global ranking is returned as the optimal composition. If the number of requests is $N$, the time complexity of this approach is exponential ($O(2^N)$).\n \\item \\textit{Global optimization approach}: The target of the global optimization is to improve the runtime efficiency over the brute-force approach. We apply Dynamic Programming (DP) \\cite{sajibicsoc2016} to reduce the re-computation of the similarity measure for the same combinations of requests. 
The DP is designed to compute the similarity measure of a large combination of request sets by breaking it into smaller combinations (an overlapping-subproblem structure). The results of the subproblems are stored in a temporary array, which avoids repeated computation. We denote $\bar{R}(N)$ as a set of $N$ requests and $i \in [1,N]$ as the $i$-th request. $\tau(\bar{R}(N), k)$ denotes the subset of requests of size $k$ which generates the best, i.e., minimum preference ranking among all subsets of size $k$. We start with the base case $k=1$, i.e., a set consisting of only one request. The highest ranked request $i$ is computed by pairwise comparison of preference rankings:\n \vspace{-5mm}\n \n \begin{align*}\n &\text{Base case, } \tau(\bar{R}(N), k=1) = R_{i}\\ \notag &\text{where } Pref(\text{TempCP-Net}, R_{i}) \text{ is minimum.} \notag\n \end{align*}\n\n \n For $k>1$, the DP either accepts the $N$-th request (filling the $k$-th place) or rejects it (reducing $\bar{R}(N)$ to $\bar{R}(N-1)$). We have two optimal substructures:\n \n \vspace{-6mm}\n \n \begin{align*}\n \bar{R_{i}} &= \{N \cup \tau(\bar{R}(N-1), k-1)\} \\\n \bar{R_{j}} &= \tau(\bar{R}(N-1), k) \notag\n \end{align*}\n \n \n $\bar{R_{i}}$ and $\bar{R_{j}}$ are computed separately. The re-computation of overlapping substructures is avoided by building a temporary array in a bottom-up manner \cite{kimes2004restaurant}. If $\bar{R_{i}}$ returns the minimum preference ranking, it is returned as the optimal composition, i.e., $\tau(\bar{R}(N), k) =\bar{R_{i}}$ if $Pref(\text{TempCP-Net},\bar{R_{i}})$ $< Pref(\text{TempCP-Net}, \bar{R_{j}})$. Otherwise, the request is removed from the composition, i.e., $\tau(\bar{R}(N), k) =\bar{R_{j}},\text{ if } Pref(\text{TempCP-Net}, \bar{R_{i}}) \geq Pref(\text{TempCP-Net}, \bar{R_{j}})$. The complexity of finding $\tau(\bar{R}(N), k)$ is $O(N^{k})$. 
As there are at most $N$ requests to be considered in a set, we solve the DP in a bottom-up manner in the following sequence: $\tau(\bar{R}(N), \;1)$, $\tau(\bar{R}(N), \;2), \cdots, \tau(\bar{R}(N), N)$. The final complexity of the DP-based solution is $O(N^{O(N)})$. \n\n \item \textit{Local sequential optimization approach:} This approach optimizes requests in each time segment. The key reason is that we do not need to consider the whole set of requests during the entire composition period. We only consider a partial set which is applicable in a specific temporal segment. This should reduce the runtime complexity significantly. In Fig \ref{fig:newExample1}, the local optimization could be divided into two segments: optimization with $\{A,B\}$ in the first year ($OP_{i}$) and optimization with $\{A, B, C\}$ ($OP_{j}$) in the second year. As the request sets are deterministic, we can perform the optimization sequences in different orders, i.e., $\langle OP_{i}, OP_{j} \rangle$ or $\langle OP_{j}, OP_{i} \rangle$. Local optimizations are dependent on the accepted or rejected requests during previous optimizations in a sequence. For example, if the sequence is $\langle$first year, second year$\rangle$ in Fig \ref{fig:newExample1} and we reject request $B$ in the first year, the candidate request set for the local optimization in the second year reduces to $\{A, C\}$.\n\end{itemize}\n\n\begin{figure}[t!]\n\t\centering\n \t\includegraphics[width=.5\textwidth]{Example-Overlap.pdf} \n \caption{Overlapping requests in different temporal segments}\n \vspace{-4mm}\n\t\label{fig:newExample1}\n\end{figure}\n\nWe propose a heuristic-based sequential optimization approach for an IaaS composition in \cite{sajibicsoc2016}. To improve the quality of the solution, we develop a reinforcement learning based approach to find the best local service provision policy, i.e., the best selection of requests in the optimal temporal sequence. 
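The accept-or-reject substructure of the global optimization approach above can be sketched as follows. This is a minimal Python sketch under stated assumptions: `best_subset` and `pref` are illustrative names (not from the paper), and `pref` stands in for $Pref(\text{TempCP-Net}, \cdot)$ with lower values preferred.

```python
from functools import lru_cache

def best_subset(requests, pref, k):
    """Sketch of the DP substructure tau(R(N), k): among the first N
    requests, find the size-k subset with the minimum preference
    ranking.  `pref` maps a tuple of requests to a numeric ranking."""
    n = len(requests)

    @lru_cache(maxsize=None)
    def tau(N, k):
        # Base cases: an empty selection, or not enough requests left.
        if k == 0:
            return ()
        if N < k:
            return None
        # Either the N-th request fills the k-th place ...
        accept = tau(N - 1, k - 1)
        accept = None if accept is None else accept + (requests[N - 1],)
        # ... or it is rejected, reducing R(N) to R(N-1).
        reject = tau(N - 1, k)
        if accept is None:
            return reject
        if reject is None:
            return accept
        # Keep the substructure with the minimum preference ranking.
        return accept if pref(accept) < pref(reject) else reject

    return tau(n, k)
```

The memoization (`lru_cache`) plays the role of the temporary array that avoids re-computing overlapping substructures.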
\n\n\n\\subsection{\\textbf{Sequential IaaS Composition using Reinforcement Learning}}\n\nWe formulate the long-term IaaS composition problem, i.e., the selection of requests, as a sequential decision process. We begin with a time segment in the TempCP-net and select a request for the composition. The selection of the requests for the next segment depends on the previous selections of requests as accepted overlapping requests are already committed for both the segments. \n\n\nA sequential decision process is modeled in different approaches such as Multi-Armed Bandit (MAB), Markov Decision Process (MDP), or Partially Observable Markov Decision Process (POMDP) \\cite{kaelbling1996reinforcement}. We observe that the state-action-reward situation in the IaaS composition is similar to the MDP. We may start the selection of requests (actions) in a time segment and can compute the local preference ranking of the selected requests by matching the temporal segment in the selected requests with the corresponding temporal \\textit{k}-d tree of the given TempCP-net. The local preference ranking may be considered as the reward function. After the selection of requests in a segment, the composition approach may transit to any segment and make new selections. The process may continue until total reward cannot be further maximized (convergence) (as we consider ranking values as the reward, maximizing total rewards refer to minimizing global ranking). MAB or POMDP is also applicable in our context as they are special cases of MDP. For example, MAB is a special case of MDP that has only one state. 
We select MDP as the general sequential decision process.\n\n\n\begin{figure}[t!]\n\n\t\centering\n \t\includegraphics[width=.5\textwidth]{newExample2.pdf} \n \caption{State-action transitions for sequential composition}\n \n\t\label{fig:newExample2}\n\t\vspace{-4mm}\n\end{figure}\n\nFigure \ref{fig:newExample2} shows all possible state transitions for the request sets in Figure \ref{fig:newExample1}. The state-action is represented by the pair [interval, Request]. In Fig \ref{fig:newExample2}, [1,A] denotes that request A in Fig \ref{fig:newExample1} is selected in interval 1. We consider bi-directional transition edges. The key reason is that the model does not specify the transition sequence. For example, if request C is selected in the second interval first, i.e., [2,C], the next transition may happen to any of [1,A], [1,B], or [1,AB]. [1,AB] represents that both requests A and B are selected in interval 1. \n\n\n\nAs the composition environment is dynamic, model-free learning, e.g., reinforcement learning (RL), is usually more applicable than model-based learning algorithms to implement the MDP. To solve the composition problem using an RL approach, we treat each new request set as a new environment and learn the optimal selection of requests through multiple interactions with the environment. We focus on Q-learning as the RL approach. Note that other deep learning approaches could be implemented in our context. However, we are not focused on providing a comparative study of machine learning approaches in this paper; instead, we use what we think is a sound approach in our context. Our primary target is to apply an unsupervised machine learning approach in the long-term IaaS composition. In that respect, we evaluate the effectiveness of general reinforcement learning, i.e., Q-learning and its proposed variations, in our context. 
In future work, we will compare the performance of the proposed approach with other deep reinforcement learning approaches.\n\n\subsubsection{\textbf{Q-learning based Approach in IaaS Composition}}\n\nA widely used reinforcement learning method is Q-learning. In a Q-learning based method, past interactions in the same environment are utilized to learn the optimal policy \cite{watkins1992q}. The sequences of request selections over different intervals, i.e., \textit{experiences}, can be treated as past interactions in the context of IaaS composition. We define an experience as a tuple $\langle s, a, r, \acute{s} \rangle$ where $s$ is the current interval, $a$ is the selected request in $s$, $r$ is the reward for selecting $a$, and $\acute{s}$ is the next interval. We represent the history of interactions as a sequence of such experience tuples. We formally define the Q-learning environment in the context of qualitative composition as follows:\n \n\begin{itemize}\n\item \textit{Environment}: The environment consists of consumers and the provider. The consumer requests are represented as time-series groups and the provider's long-term qualitative preferences are represented in the TempCP-net. The environment is deterministic, i.e., the incoming requests and the TempCP-net are given prior to the composition.\n\n\item \textit{State ($s$)}: The TempCP-net usually consists of several temporal CP-nets for different time intervals or segments. Each time interval or segment is treated as a state. The number of states is finite. \n\item \textit{Action ($a$)}: The selection or rejection of a request is treated as an action in our context. In Figure \ref{fig:newExample1}, the second segment (2nd year) has 3 available requests $\{A, B, C\}$. Hence, the possible set of acceptance actions is $\{A,B,C,AB,AC,BC,ABC\}$. \n\n\item \textit{Policy ($\pi$)}: It is a function that determines which action to perform, i.e., the selection of requests, in a given state. 
If $S$ is the set of all states and $A$ is the set of all possible actions, the policy ($\pi$) is represented as $\pi: S \longrightarrow A$.\n\n\item \textit{Reward function ($RWD$)}: We match the action, i.e., the selected request segments, with the corresponding segment in the TempCP-net. We consider the local preference ranking as the reward. The reward function is defined based on Equation \ref{eq:ranking} as $RWD(s,a)=Pref(\text{TempCP-net(s)},a)$ for a state $s$ and action $a$.\n \n\item \textit{Value function ($V$)}: $V^{\pi}(s)$ is the state-value function in the sequential decision process. It is the expected cumulative preference ranking starting from state $s$ following a policy $\pi$.\n\end{itemize}\n\nWe apply the basic Q-learning approach to the IaaS composition. We propose a modified Q-learning approach for the long-term IaaS composition.\n\n\n\subsubsection{\textbf{IaaS Composition using 2d Q-learning}}\n\nA value function $V^{\pi}(s)$ represents how good a temporal segment is for the composer when selecting requests. The value function depends on the policy by which the composer chooses its actions. Among all possible value functions, there exists an optimal value function that has a higher value than the other value functions:\n\vspace{-5mm}\n\n\begin{equation}\n V^{*}(s) = \max_{\pi} V^{\pi}(s) \;\;\forall s \in S \n\end{equation}\n\nThe optimal policy $\pi^{*}$ corresponds to the optimal value function: \vspace{-3mm}\n\n\begin{equation}\n \pi^{*}= \arg\max_{\pi} V^{\pi}(s) \;\;\forall s \in S \n\end{equation}\n\nA recursive function $Q^{\pi}(s,a)$ is used in the Q-learning process \cite{watkins1992q} to calculate the cumulative reward under the policy $\pi$. We map $Q^{\pi}(s,a)$ to the expected global preference ranking of choosing the request $a$ in the interval $s$ under the policy $\pi$. 
The probability of moving from interval $s$ to interval $\acute{s}$ is denoted by $P(\acute{s}| s,a)$ in Equation \ref{eq:q3} where $a$ denotes the selected requests. The current preference ranking of $a$ in $s$ is denoted by $R(s,a,\acute{s})$ where $\acute{s}$ is the next interval. The future global ranking is denoted by $V^{\pi}(\acute{s})$ in Equation \ref{eq:q3}.\n\n\n\n\vspace{-3mm}\n\n\begin{equation}\n\label{eq:q3}\nQ^{\pi}(s,a) = \sum_{\acute{s}} P(\acute{s}| s,a)(R(s,a,\acute{s})+\gamma V^{\pi}(\acute{s}))\n\end{equation}\n\n\n\n\nIn a Q-learning process, a table $Q[S,A]$ is maintained where $S$ denotes the set of states and $A$ denotes the set of actions \cite{watkins1992q}. $Q[S,A]$ is used to store the current value of $Q^{\pi}(S,A)$. The value of $Q^{\pi}(S,A)$ in the context of long-term IaaS composition is computed using temporal differences. Therefore, we create a table $Q[S,A]$, where $S$ denotes the set of intervals or segments and $A$ denotes the set of actions. We set the initial $Q[s,a]$ to 0 for each $(s,a)$. We start the process from a random state ($s$) and execute a random action ($a$) for a reward $r$. The next interval is also selected randomly. We use an $\epsilon$-greedy policy to restrict the randomness over time. The idea is that the composer should explore the state-action sequences randomly in the beginning to find better future discounted preference rankings; later, the randomness should be reduced. Here, $\epsilon$ is defined as the probability of exploration. The exploration is equivalent to picking a random action in the action space. If $\epsilon = 1$, the composer will always explore, and never act greedily with respect to the action-value function. Therefore, $\epsilon < 1$ in practice, so that there is a good balance between exploration and exploitation. A higher value of the learning rate $\alpha$ assigns a higher weight to the current estimate than to the previous estimate. 
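The tabular Q-value update with an $\epsilon$-greedy policy described above can be sketched as follows. This is an illustrative sketch only: the function names, the toy reward, and the random choice of the next segment are assumptions for the sake of a runnable example, not the paper's implementation.

```python
import random

def q_learning(states, actions, reward, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Minimal tabular Q-learning sketch for a 2d table Q[s, a].
    `reward(s, a)` stands in for the local preference-ranking reward
    RWD(s, a); states and actions are small finite lists."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)                    # random start segment
        for _ in range(len(states)):
            # epsilon-greedy: explore with probability epsilon,
            # otherwise act greedily w.r.t. the current Q estimates.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: Q[(s, b)])
            s_next = rng.choice(states)           # next segment (toy model)
            # Temporal-difference update of Q[s, a].
            target = reward(s, a) + gamma * max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * target
            s = s_next
    return Q
```

With a reward that favours one action, the learned greedy policy settles on that action in every segment, which mirrors how the composer learns which request selections yield the best discounted preference rankings.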
The learning process terminates when there are no further updates on the Q-values; this is known as convergence. Once the Q-learning reaches the convergence state, the optimal policy is found using $Q[S,A]$ in Equation \ref{eq:q5}: at each segment, the action with the highest Q-value is selected. An example of a 2d $Q[S,A]$ is shown in Figure \ref{fig:f7} where the number of segments is 5 and the number of actions is 10. The best action in segment 5 is $A10$ or $A5$.\n\n\n\begin{equation}\n\label{eq:q4}\nQ[s,a] = (1-\alpha)Q[s,a] + \alpha (r+\gamma \max_{\acute{a}}Q[\acute{s},\acute{a}]) \n\end{equation} \n\n\n\begin{equation}\n\label{eq:q5}\n\pi^{*}(s) = \arg\max_{a}Q[s,a] \;|\; \forall a \in A(s)\n\end{equation}\n\n\subsubsection{\textbf{IaaS Composition using 3d Q-learning}}\n\n\begin{figure}[t!]\n\n \t\centering\n \t\includegraphics[width=0.8\textwidth]{fig7_qvalues.pdf} \n \caption{(a) Q-values in a 2d $Q[S,A]$ (b) Q-values in a 3d $Q[S,A, O]$}\n \vspace{-3mm}\n\t\label{fig:f7}\n\end{figure} \n\n\nThe 2d Q-learning has no start and terminal states as it accepts only model-free state transitions \cite{shani2005mdp}. The long-term effect of the sequential order is indicated by $Q[S,A]$. However, it is not possible to keep track of the execution order in the 2d Q-learning process. For example, several possible state transitions can take place in Figure \ref{fig:newExample2}, such as $\{[1,A]\rightarrow [2,B]\}$ and $\{[2,B]\rightarrow [1,A]\}$. Here, $Q[1,A]$ represents the expected global preference ranking of selecting action $A$ in the first year irrespective of the sequence of selection (first or second). A similar explanation also applies to $Q[2,B]$. Besides, the existing Q-learning approaches for the composition allow multiple execution orders. One such order in Figure \ref{fig:newExample2} is $\{[1,A] \rightarrow [2,BC] \rightarrow [1,B]\}$. 
Note that the selections of requests $A$ and $B$ in the first year are taken at two different positions in the sequence.\n\n\n\n\nWe introduce a three-dimensional Q-learning in \cite{mistry2018long} using a 3d table $Q[S,A,O]$ to store the Q-values. $O$ represents the set of execution orders. Therefore, a particular state-action pair $(s,a)$ may have different values depending on the execution order $o$. If a set of requests is selected from the first segment at the first step of the composition, it may have a different preference ranking than if it is selected at the last step of the composition. Figure \ref{fig:f7} illustrates an example of a 3d $Q[S,A,O]$. Here, the ranking of $A10$ is 1 in segment 5 when it is performed at the first step. However, the preference ranking of $A10$ changes to 9 when it is performed at the third step. An extension of Equation \ref{eq:q4} is shown in Equation \ref{eq:q6} for a 3d Q-learning process. $\acute{o}$ denotes the next execution order after $o$, and $\alpha$ denotes the learning rate. The 3d Q-learning arbitrarily selects the start state ($s$) and performs an action ($a$) with reward $r$. From the start state, it observes all the possible states in different orders. \n\n\n\n\vspace{-4mm}\n\n\begin{align}\n\label{eq:q6}\nQ[s,a,o] = &(1-\alpha)Q[s,a,o] \\ \notag\n&+ \alpha (r+\gamma \max_{(\acute{a},\acute{o})}Q[\acute{s},\acute{a},\acute{o}]) \n\end{align} \n\n\n\n\n\subsubsection{\textbf{On-policy based 3d Q-learning}}\n\begin{figure}\n\n \t\centering\n \t\includegraphics[width= .8\textwidth]{redundant-states.pdf} \n \caption{On-policy state-action transitions in 3d Q-learning (green-colored transitions are allowed from [1,A,1] and [1,B,2])}\n \vspace{-4mm}\n\t\label{fig:newexample3}\n\end{figure}\n\nThe 3d Q-learning increases the number of explorations in the learning process compared to the 2d Q-learning. The $n\times m$ Q-matrix from two dimensions is extended to an $n\times m \times p$ Q-matrix. 
The number of states is denoted by $n$, the number of actions is denoted by $m$, and $p$ is the number of execution orders. As the exploration space increases, the 3d Q-learning requires more time to learn than the 2d Q-learning. The 3d Q-learning based composition approach is considered off-policy, as the composer has no restrictions over exploration. We transform the off-policy approach into an on-policy learning approach by an intelligent reduction of state transitions as follows:\n\n\n\begin{itemize}\n \item \textit{State sequence policy}: Once a request is rejected, it is not considered anymore in the following local optimizations. Therefore, we do not need to consider the same states multiple times. We introduce a state sequence policy where each state is visited only once in a policy $\pi$. \n \item \textit{Removing redundant state transitions}: A rejected request in a local segment may appear in another segment if the request overlaps the two segments. All the state-action pairs that contain the rejected request should not be considered as next state transitions. For example, if we accept request $A$ in the first year, request $B$ is rejected in Figure \ref{fig:newExample1}. However, request $B$ overlaps into the second year. Hence, it should be removed from the next candidate transitions. Figure \ref{fig:newexample3} depicts the on-policy state transitions when request $A$ in the first year is selected first in the sequence. \n\end{itemize}\n\nWe represent the on-policy 3d Q-learning process for IaaS composition in Algorithm \ref{alg:qlearning}. Algorithm \ref{alg:qlearning} runs multiple episodes or rounds to perform the 3d Q-learning process. The algorithm uses Equation \ref{eq:q6} to update the Q-values in each episode. An $\epsilon$-greedy policy is used by Algorithm \ref{alg:qlearning}, where the optimal action is selected for execution. 
The optimal action is denoted by $\arg\max_{(a,o)}Q(s,a,o)$, where the probability of selecting the optimal action is $(1 - \epsilon)$. The greedy policy optimizes $Q(s,a,o)$ continuously. It incorporates the unique state sequence policy. When a request is rejected, we remove it from the candidate action sets. The learning process continues in a loop up to a maximum number of episodes $K$ or until the Q-values converge to the optimal values. Once the Q-values reach the convergence state, we use Equation \ref{eq:q7} to compute the optimal policy, which is similar to the 2d Q-learning process \cite{wang2010adaptive}. \n\n\vspace{-4mm}\n\begin{align}\n\label{eq:q7}\n\pi^{*}(s) = &\arg\max_{(a,o)}Q[s,a,o] \\ \notag\n&\text{where } \forall a \in A(s) \text{ and } o \in [1, |A(s)|]\n\end{align} \n \n\n\begin{algorithm}\n\fontsize{8pt}{8pt}\selectfont\n \caption{The on-policy 3d Q-learning process to compose IaaS requests}\n \label{alg:qlearning}\n \begin{algorithmic}[1]\n \n \n \STATE Initialize $Q(s,a,o)$ to 0\n \FOR {each episode from $1$ to $K$}\n \STATE $s \gets s_{0}$\n \STATE execution order, $o\gets 1$\n \t\WHILE{$o \neq$ total number of segments}\n \t \t\STATE Choose action $a$ from $s$ in $o$ using the $\epsilon$-greedy policy.\n \t \t\STATE Execute $a$, observe reward $r$ and next state $\acute{s}$\n \t \t\STATE $Q[s,a,o] \gets (1-\alpha)Q[s,a,o] + \alpha (r+\gamma \max_{(\acute{a},\acute{o})}Q[\acute{s},\acute{a},\acute{o}])$\n \t \t\STATE Create candidate $[s,a]$ pairs by removing redundant state transitions\n \t \t\STATE $s \gets \acute{s}$ using the $\epsilon$-greedy policy. \n \t \t\STATE $o \gets o+1$ \n\t\t\ENDWHILE\n \ENDFOR\n \end{algorithmic}\n \n\end{algorithm} \n\n\n\section{Long-term Qualitative Composition with Previous Learning Experiences}\n\nWe aim to utilize the knowledge of composing past requests to compose new incoming requests efficiently. 
In this section, we describe the proposed long-term qualitative composition approach that leverages past learning experience. We analyze different types of request patterns and illustrate how to identify similar request sets. Once we identify a similar request set from the past incoming requests, we reuse the previously learned policy to compose the new incoming requests. \n\nThe optimal state-action sequence may vary depending on the distribution of the requests over time and their rankings. For example, Figure \ref{fig:pattern} shows four types of request patterns, i.e., almost sparse pattern, almost dense pattern, chain pattern, and mixed pattern. \n\n\n\n\begin{itemize}\n \item \textit{Almost Sparse Pattern}: Figure \ref{fig:pattern}(a) shows a set of requests with an \textit{almost sparse request pattern}. Most requests are short and disjoint, i.e., they do not overlap between two intervals. The composition may be performed in parallel by taking the short-term requests.\n \item \textit{Almost Dense Pattern}: Figure \ref{fig:pattern}(b) shows a set of requests where the requests are mostly \textit{overlapped} between intervals. An overlapping request spans into the next interval. Most requests are accepted in the first step of the selection.\n \item \textit{Chain Pattern}: Figure \ref{fig:pattern}(c) shows a chain pattern where the requests are mostly short-term and overlapped between intervals. 
The requests are also evenly distributed between the intervals in a chain pattern.\n \item \textit{Mixed Pattern}: Figure \ref{fig:pattern}(d) shows a mixed pattern where both long-term and short-term requests are overlapped and evenly distributed.\n\end{itemize}\n\n\begin{figure}\n \centering\n \includegraphics[width=.9\textwidth]{requests_pattern} \n \caption{Different request patterns: (a) almost sparse pattern, (b) almost dense pattern, (c) chain pattern, and (d) mixed pattern \cite{sajibicsoc2016}}\n \vspace{-4mm}\n \label{fig:pattern}\n\end{figure} \n\n\nApplying the Q-learning method each time a new set of requests arrives can be expensive. Instead of learning each new set of incoming requests from scratch, we apply the experience from the previously learned request sets. We propose a qualitative composition framework as shown in Figure \ref{fig:flowchart}. The proposed framework takes a set of incoming requests and the TempCP-net of the provider. First, the given request set is annotated with its global preference ranking and overlapping ratio and matched with the existing sets of requests. Initially, there is no existing request set. A Q-learning algorithm is applied to the request set using the TempCP-net. The output of the Q-learning algorithm is a matrix of Q-values. The learned Q-value matrix is stored with the corresponding request set for future use. \n\n \begin{figure}\n \n \centering\n \includegraphics[width=.6\textwidth]{flowcharts.pdf} \n \caption{A qualitative composition framework using a policy library from Q-learning}\n \vspace{-4mm}\n \label{fig:flowchart}\n\end{figure}\n\nEach time a new set of requests arrives, the proposed framework finds the similarity with the existing request sets. The similarity is measured through a hierarchical clustering method called the \textit{agglomerative clustering method} \cite{Growendt2017,fernandez2008solving,bouguettaya2015efficient}. 
For each set of requests, we apply the \textit{agglomerative clustering method} to build a corresponding clustering tree. To measure the similarity between two request sets, the correlation coefficient of their corresponding clustering trees is computed. If the similarity is greater than a predefined threshold $S$, then the Q-value matrix of the corresponding matched request set is applied to compose the new request set. We set the value of $S$ based on trial and error in the experiments. The existing Q-value matrix may not be fully applicable to a new set of requests. In such a case, the proposed framework applies the Q-value matrix partially (a policy reuse approach) and learns the rest of the sequence. \n\n\n\n\label{sec:dq}\n\n\nIn our previous work, we calculated the similarity between different types of requests concerning the statistical distributions of their resource attributes, such as normal, left-skewed, and right-skewed distributions of CPU, memory, and network bandwidth \cite{mistry2018long}. The proposed approach, however, is unable to capture intrinsic characteristics of different request patterns such as the temporal distribution or the global ranking. Therefore, it may not correctly utilize the historical information of the previous consumers' request sets.\n\n\n\n\subsection{\textbf{Clustering Methods to Find Similar Request Sets}}\n\nWe use clustering techniques to compute the similarity between a new request set and past request sets. Clustering is a well-known data analysis technique to capture the intrinsic features of a set of data and group them into different sets based on similarity. Clustering is suitable where manual tagging of data is expensive. Moreover, the prior knowledge for manual tagging may be unavailable or insufficient. In such a case, clustering is a preferred option over supervised learning approaches such as classification and regression \cite{Growendt2017,fernandez2008solving,bouguettaya2015efficient}. 
There are many clustering techniques in the existing literature. We focus on partitional and hierarchical clustering approaches in the IaaS composition. \n\n\subsubsection{\textbf{Partitional Clustering}}\n\nPartitional clustering methods produce a flat partition of the data that optimizes an objective function. The objective function needs to be predefined. The most well-known partitional algorithm is \textit{K-means} clustering. The main steps of K-means clustering of a request set are as follows:\n\begin{enumerate}\n \item Randomly create $K$ centroid points for the requests. \n \item Assign each request to the closest centroid. \n \item Calculate the central point of each newly created cluster and update the centroid accordingly. \n \item Repeat the previous two steps until no request needs to be reassigned to another cluster. \n\end{enumerate}\n\nThe computational complexity of the K-means clustering is $O(NK)$ where $N$ is the number of requests and $K$ is the number of clusters. The K-means clustering is computationally efficient as it requires linear time. However, the performance of K-means depends on how the value of $K$ is chosen. It is difficult to determine the optimal value of $K$ when prior knowledge is inadequate or absent. \n\n\n\subsubsection{\textbf{Hierarchical Clustering}} \n\n\nThe hierarchical clustering method is a mainstream clustering method because it can be applied to most types of data. Although the hierarchical clustering method has a higher complexity compared to K-means, it does not need predefined parameters. Therefore, hierarchical clustering is more suitable for handling real-world data. There are two main approaches for hierarchical clustering: bottom-up and top-down. The bottom-up approach aggregates individual data points into a single top-level cluster. The complexity is usually $O(N^2)$ for the bottom-up approach. However, it may go up to $O(N^2 \log N)$. 
The complexity of the top-down approach is $O(2^N)$. The top-down approach is usually more expensive than the bottom-up approach. The bottom-up approach is generally known as \textit{agglomerative hierarchical clustering} \cite{murtagh2012algorithms,bouguettaya1996line}. \n\n\subsubsection{\textbf{Agglomerative Clustering based Similarity Measure}} \n\nWe use an agglomerative clustering based approach to reuse the existing policies for a new incoming request set based on the history of the past request sets. The clustering approach is applied to a set of requests to construct a clustering tree. The clustering tree captures the intrinsic features of the requests and groups the requests based on their similarities. When a set of requests arrives, we build a clustering tree and compare it with the existing clustering trees to find the most similar clustering tree using the correlation coefficient.\n\n\nWe annotate each request in a set of requests with its global rank and overlapping ratio to construct a clustering tree that captures the temporal aspects and global ranking of a request set. The global rank is computed by Equation \ref{eq:ranking}. The overlapping ratio of a request is the ratio of the number of its operating intervals to the total number of intervals of the composition. The overlapping ratio of a request $R_i$ is computed by the following equation: \n\n\begin{equation}\n O(R_i) = \frac{\text{Number of intervals of } R_i}{\text{Total number of intervals}}\n\end{equation}\n\n\begin{figure}[t!]\n \centering\n \includegraphics[width=.7\textwidth]{plot.pdf}\n \caption{Annotation of a request set using global preference ranking and overlapping ratio: (a) annotation table, (b) annotation plot}\n \label{fig:plot}\n \vspace{-2mm}\n\end{figure}\n\nThe overlapping ratio for the request $R3$ in the request set $A$ (Figure \ref{fig:req}) is $2\/3 = 0.667$. 
Let us assume a request set is annotated with the global preference ranking and the overlapping ratio of each request as shown in Figure \ref{fig:plot}. Now, we construct the clustering tree based on the following steps:\n\n\n\begin{enumerate}\n\n \item Each request is considered as a cluster. If there are $N$ requests in a set of requests, then the number of clusters is $N$.\n \item An $N \times N$ distance matrix is constructed based on the \textit{Euclidean Distance} (Equation \ref{eqn:euclid}) of each pair of requests according to their global preference ranking $GPR$ and overlapping ratio $OR$. \n \item The closest pair of clusters is selected and merged into a single cluster.\n \item The distances between the new cluster and each of the old clusters are computed.\n \item Steps 3 and 4 are repeated until there exists only a single cluster. \n\end{enumerate}\n\n\vspace{-3mm}\n\n\begin{equation}\n E.D. = \sqrt{\left(GPR(R_{1})-GPR(R_{2})\right)^{2}+\left(OR(R_{1})-OR(R_{2})\right)^{2}}\n \label{eqn:euclid}\n\end{equation}\n\nWe can perform step 4 in different ways based on different hierarchical clustering approaches \cite{murtagh2012algorithms,fernandez2008solving,Growendt2017,bouguettaya1996line}. We consider different conventional approaches to measure the distance between two clusters, which are as follows:\n\n\begin{enumerate}\n \item SLINK: SLINK stands for the single linkage clustering method. In this method, two clusters are joined based on the distance of their nearest pair of elements. Only one member of each cluster is considered to compute the distance \cite{murtagh2012algorithms}. It is also known as the \textit{nearest neighbour} (NN) clustering method. \n \item CLINK: CLINK is short for the complete linkage clustering method. Two clusters are joined based on the distance of the farthest pair of elements \cite{Growendt2017}. It is also known as the \textit{farthest neighbour} (FN) clustering method. 
\n \\item UPGMA: This unweighted pair-group method using arithmetic average is also known as the average linkage clustering method \\cite{fernandez2008solving}. We compute the distance between two clusters based on the average distance of each pair of elements from two clusters.\n\\end{enumerate}\n\n\\begin{figure}[t!]\n \\centering\n \n \\includegraphics[width=0.9\\textwidth]{clusters}\n \\caption{Hierarchical clustering steps}\n \\label{fig:clusters}\n \\vspace{-3mm}\n\\end{figure}\n \n\nAny of the above hierarchical clustering approaches, i.e., SLINK, CLINK, UPGMA could be used for the IaaS composition of new requests. We use the SLINK or nearest neighbor approach as it is a widely used clustering approach and effective in time-series data clustering \\cite{berkhin2006survey}. \\textit{Note that, finding the optimal clustering approach for IaaS composition is out of the focus of this paper}.\n\nA clustering construction process is shown in Figure \\ref{fig:clusters}. Initially, we perform the annotation as shown in Figure \\ref{fig:plot}. In Figure \\ref{fig:clusters}(a), \\{R4\\} and \\{R5\\} are the nearest to each other. \\{R4\\} and \\{R5\\} are put in the same cluster. Similarly, \\{R2\\} and \\{R3\\} are the nearest to each other and joined in the same cluster. In Figure \\ref{fig:clusters}(b), \\{R6\\} is the nearest to the \\{R4\\}. Therefore, \\{R6\\} is joined with \\{\\{R4\\},\\{R5\\}\\}. In the next step, \\{R1\\} is the nearest to \\{R3\\}. \\{R1\\} is joined with \\{\\{R2\\},\\{R3\\}\\}. In Figure \\ref{fig:clusters}(c), there are only two clusters which are \\{\\{R4\\},\\{R5\\},\\{R6\\}\\} and \\{\\{R1\\},\\{R2\\},\\{R3\\}\\}. These two clusters are joined based on \\{\\{R5\\} and \\{R2\\} who are the nearest request to each other from two clusters. Finally, there is only one cluster left. \n\n\nOnce we build clustering trees for different sets of requests, we need to compute the coefficient of correlation between different clustering trees. 
We use the \\textit{Cophenetic correlation coefficient} to compare two clustering trees \\cite{sokal1962comparison}. The cophenetic correlation coefficient determines how well a clustering tree preserves the pairwise distances between the original requests before clustering. \\textit{The cophenetic distance between two requests in a clustering tree is the height of the clustering tree at which the two branches that include the two requests merge into a single branch.} We compute the cophenetic correlation coefficient for each clustering tree and use it to measure the similarity between two clustering trees. Given a set of requests $R$ and its corresponding clustering tree $T$, let $R(i,j)$ be the Euclidean distance between the $i$th and $j$th requests and $T(i,j)$ be the cophenetic distance at which the two requests first merge in the clustering tree. We compute the cophenetic correlation coefficient using the following equation \\cite{sokal1962comparison}: \n\\begin{equation}\n
 c = \\frac{\\sum_{i<j} \\left( R(i,j) - \\bar{R} \\right) \\left( T(i,j) - \\bar{T} \\right)}{\\sqrt{\\sum_{i<j} \\left( R(i,j) - \\bar{R} \\right)^2 \\sum_{i<j} \\left( T(i,j) - \\bar{T} \\right)^2}}\n\\end{equation}\nwhere $\\bar{R}$ and $\\bar{T}$ are the means of the distances $R(i,j)$ and $T(i,j)$, respectively.\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\lim_{m \\rightarrow \\pm \\infty} \\tmop{Re} (\\rho^M_{w_n} \\left( m \\right))\n
 & = 0\n
 \\end{array}\n\\end{equation}\nThus,\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\lim_{m \\rightarrow \\pm \\infty} \\arg \\left( \\rho^M_{w_n} \\left( m \\right)\n
 \\right) & = \\frac{\\pi}{2}\n
 \\end{array}\n\\end{equation}\n\n\\subsubsection{Quotients and Differences of $\\rho^M_{w_n} \\left( m \\right)$}\n\nLet\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\Delta \\rho^M_{w_n} (m) & = \\rho^M_{w_n} (m + 1) - \\rho^M_{w_n} \\left( m\n
 \\right)\n
 \\end{array}\n\\end{equation}\nbe the forward difference of consecutive roots of $M [w_n (x) ; x \\rightarrow\ns]$. 
The limiting difference between consecutive roots is determined by the countably\ninfinite set of solutions to the equation $n^{\\frac{s}{2}} + (n +\n1)^{\\frac{s}{2}} = 0$ and is given by\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\Delta \\rho^M_{w_n} (\\pm \\infty) & = \\lim_{m \\rightarrow \\pm \\infty}\n
 \\Delta \\rho^M_{w_n} (m)\\\\\n
 & = \\left\\{ s : n^{\\frac{s}{2}} + (n + 1)^{\\frac{s}{2}} = 0 \\right\\}\\\\\n
 & = \\lim_{m \\rightarrow \\pm \\infty} \\left( \\rho^M_{w_n} (m + 1) -\n
 \\rho^M_{w_n} (m) \\right)\\\\\n
 & = \\lim_{m \\rightarrow \\pm \\infty} \\left( \\frac{\\rho^M_{w_n} (m)}{m}\n
 \\right)\\\\\n
 & = \\frac{2 \\pi i}{M [\\chi \\left( x, I^H_n \\right) ; x \\rightarrow 0]}\\\\\n
 & = \\frac{2 \\pi i}{\\lim_{s \\rightarrow 0} \\left( \\frac{n^{- s} - (n +\n
 1)^{- s}}{s} \\right)}\\\\\n
 & = \\frac{2 \\pi i}{\\ln (n + 1) - \\ln (n)}\n
 \\end{array} \\label{wmrad}\n\\end{equation}\nLet $\\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}}$ denote the limit\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}} & = \\lim_{m \\rightarrow \\pm \\infty}\n
 \\frac{\\rho^M_{w_n} (m)}{\\rho^M_{w_{n - 1}} (m)}\\\\\n
 & = \\frac{\\Delta \\rho^M_{w_n} (\\pm \\infty)}{\\Delta \\rho^M_{w_{n - 1}}\n
 (\\pm \\infty)}\\\\\n
 & = \\frac{M \\left[ \\chi \\left( x, \\left( \\frac{1}{n + 1},\n
 \\frac{1}{n} \\right) \\right) ; x \\rightarrow 0 \\right]}{M\n
 \\left[ \\chi \\left( x, \\left( \\frac{1}{n}, \\frac{1}{n - 1} \\right) \\right)\n
 ; x \\rightarrow 0 \\right]}\\\\\n
 & = \\frac{\\ln (n) - \\ln (n - 1)}{\\ln (n + 1) - \\ln (n)}\n
 \\end{array}\n\\end{equation}\nthen we also have the limit of the limits\n$\\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}}$ as $n \\rightarrow \\infty$ given by\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\lim_{n \\rightarrow \\pm \\infty}\n
 \\text{$\\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}}$ } & = \\lim_{n \\rightarrow \\pm\n
 \\infty} \\left( \\frac{\\Delta \\rho^M_{w_n} (\\pm \\infty)}{\\Delta 
\\rho^M_{w_{n\n - 1}} (\\pm \\infty)} \\right)\\\\\n
 & = \\lim_{n \\rightarrow \\pm \\infty} \\left( \\frac{\\ln (n) - \\ln (n -\n
 1)}{\\ln (n + 1) - \\ln (n)} \\right)\\\\\n
 & = 1\n
 \\end{array}\n\\end{equation}\n\\begin{figure}[h]\n
 \\resizebox{13cm}{9cm}{\\includegraphics{wmroots.eps}}\n
 \\caption{$\\{\\rho^M_{w_n} (m) : m = 1 \\ldots 5\\}$}\n\\end{figure}\n\nThe limiting quotients $\\frac{e^{- \\rho^M_{w_n} (m + 1)} }{e^{- \\rho^M_{w_n}\n(m)}} = e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m + 1)}$ as $m \\rightarrow \\infty$\nare given by\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\lim_{m \\rightarrow \\infty} e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m + 1)}\n
 & = 1 - 2 \\sin \\left( \\frac{\\pi}{\\ln (n + 1) - \\ln (n)}\n
 \\right)^2 - 2 i \\cos \\left( \\frac{\\pi}{\\ln (n + 1) - \\ln (n)} \\right) \\sin\n
 \\left( \\frac{\\pi}{\\ln (n + 1) - \\ln (n)} \\right) \\\\\n
 & = e^{- \\frac{2 \\pi i}{\\ln (n + 1) - \\ln (n)}}\n
 \\end{array}\n\\end{equation}\nwhere we have\n\\begin{equation}\n
 \\begin{array}{lll}\n
 | \\lim_{m \\rightarrow \\infty} e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m + 1)}\n
 | & = \\lim_{m \\rightarrow \\infty} | e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m\n
 + 1)} | & \\\\\n
 & = e^{\\tmop{Re} \\left( - \\frac{2 \\pi i}{\\ln (n + 1) - \\ln (n)} \\right)} & \\\\\n
 & = 1 & \n
 \\end{array}\n\\end{equation}\nand\n\\begin{equation}\n
 \\begin{array}{l}\n
 \\lim_{n \\rightarrow \\infty} \\lim_{m \\rightarrow \\infty} e^{\\rho^M_{w_n}\n
 (m) - \\rho^M_{w_n} (m + 1)} = - 1\n
 \\end{array}\n\\end{equation}\n\n\\subsubsection{The Laplace Transform $L [(s - 1) M [w_n (x) ; x\n\\rightarrow s] ; s \\rightarrow t]$}\n\nThe Laplace transform of $(s - 1) M [w_n (x) ; x \\rightarrow s]$, defined by\n\\begin{equation}\n
 \\begin{array}{lll}\n
 L [(s - 1) M [w_n (x) ; x \\rightarrow s] ; s \\rightarrow t]\n
 & & = L [n (n + 1)^{- s} + s n^{- s} - n^{1 - s} ; s \\rightarrow t]\\\\\n
 & & = \\int_0^{\\infty} \\left( n (n + 1)^{- s} + s n^{- s} - n^{1 - s} \\right) e^{- s t}\n
 \\mathrm{d} s\\\\\n
 & & = \\frac{t + \\ln \\left( \\frac{n^n}{(n + 1)^n} \\right) t + \\ln (n + 1)\n
 + n \\ln (n)^2 - n \\ln (n) \\ln (n + 1)}{(\\ln (n) + t)^2 (\\ln (n + 1) + t)}\n
 \\end{array}\n\\end{equation}\nhas poles at $- \\ln (n)$ and $- \\ln (n + 1)$ with residues\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\underset{t = - \\ln (n)}{\\tmop{Res}} \\left( L [(s\n
 - 1) M [w_n (x) ; x \\rightarrow s] ; s \\rightarrow t] \\right) & = -\n
 n\\\\\n
 \\underset{t = - \\ln (n + 1)}{\\tmop{Res}} \\left( L\n
 [(s - 1) M [w_n (x) ; x \\rightarrow s] ; s \\rightarrow t] \\right) &\n
 = n\n
 \\end{array}\n\\end{equation}\n\n\\subsection{The Gauss Map $h (x)$}\n\n\\subsubsection{Continued Fractions}\n\nThe Gauss map $h (x)$, also known as the Gauss function or Gauss\ntransformation, maps the unit interval onto itself and by iteration gives\nthe continued fraction expansion of a real number\n{\\cite[A.1.7]{Ctsa}}{\\cite[I.1]{cf}}{\\cite[X]{itn}}. The $n$-th component\nfunction $h_n (x)$ of the map $h (x)$ is given by\n\\begin{equation}\n
 \\begin{array}{ll}\n
 h_n (x) & = \\frac{1 - x n}{x} \\chi \\left( x, I^H_n \\right)\n
 \\end{array}\n\\end{equation}\nThe infinite sum of the component functions is the Gauss map\n\\begin{equation}\n
 \\begin{array}{ll}\n
 h (x) & = \\sum_{n = 1}^{\\infty} h_n (x)\\\\\n
 & = \\sum_{n = 1}^{\\infty} \\frac{1 - x n}{x} \\chi \\left( x, I^H_n\n
 \\right)\\\\\n
 & = x^{- 1} - \\left\\lfloor x^{- 1} \\right\\rfloor\\\\\n
 & =\\{x^{- 1} \\}\n
 \\end{array} \\label{h}\n\\end{equation}\n\\begin{figure}[h]\n
 \\resizebox{12cm}{12cm}{\\includegraphics{h.eps}}\n
 \\caption{\\label{gaussmapfig}The Gauss Map}\n\\end{figure}\n\nThe fixed points of $h (x)$ 
are the (positive) solutions to the equation $h_n\n(x) = x$ given by\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\tmop{Fix}_h^n & =\\{x : h_n (x) = x\\}\\\\\n
 & = \\left\\{ x : \\frac{1 - x n}{x} \\chi \\left( x, I^H_n \\right) = x\n
 \\right\\}\\\\\n
 & = \\left\\{ x : \\frac{1 - x n}{x} = x \\right\\}\\\\\n
 & = \\frac{\\sqrt{n^2 + 4}}{2} - \\frac{n}{2}\n
 \\end{array}\n\\end{equation}\n\n\\subsubsection{The Mellin Transform of $h (x)$}\n\nThe Mellin transform (\\ref{mellin}) of the Gauss map $h (x)$ over the unit\ninterval, scaled by $s$ then subtracted from $\\frac{s}{s - 1}$, is an analytic\ncontinuation of $\\zeta (s)$, denoted by $\\zeta_h (s)$, valid for all $s$ with $-\n\\tmop{Re} (s) \\not\\in \\mathbbm{N}$. The transfer operator and thermodynamic\naspects of the Gauss map are discussed in\n{\\cite{gkwoperator}}{\\cite{newtonzeta}}{\\cite{gkwzeta}}{\\cite{yarh}}{\\cite{dzf}}.\nThe Mellin transform of the $n$-th component function $h_n (x)$ is given by\n\\begin{equation}\n
 \\begin{array}{ll}\n
 M [h_n (x) ; x \\rightarrow s] & = \\int_0^1 h_n (x) x^{s - 1} \\mathrm{d} x\\\\\n
 & = \\int_0^1 \\frac{1 - x n}{x} \\chi \\left( x, I^H_n \\right) x^{s - 1}\n
 \\mathrm{d} x\\\\\n
 & = \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\left( x^{- 1} - \\left\\lfloor\n
 x^{- 1} \\right\\rfloor \\right) x^{s - 1} \\mathrm{d} x\\\\\n
 & = \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\frac{1 - x n}{x} x^{s - 1}\n
 \\mathrm{d} x\\\\\n
 & = - \\frac{n (n + 1)^{- s} + s (n + 1)^{- s} - n^{1 - s}}{s^2 - s}\n
 \\end{array}\n\\end{equation}\nwhich provides an analytic continuation $\\zeta_h (s) = \\zeta (s) \\; \\forall (-\n\\tmop{Re} (s)) \\not\\in \\mathbbm{N}$\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\zeta_h (s) & = \\frac{s}{s - 1} - s M [h (x) ; x \\rightarrow s]\\\\\n
 & = \\frac{s}{s - 1} - s \\int_0^1 h (x) x^{s - 1} \\mathrm{d} x\\\\\n
 & = \\frac{s}{s - 1} - s \\int_0^1 \\left( x^{- 1} - \\left\\lfloor x^{- 1}\n
 \\right\\rfloor \\right) x^{s - 1} \\mathrm{d} x\\\\\n
 & = \\frac{s}{s 
- 1} - s \\sum_{n = 1}^{\\infty} M [h_n (x) ; x\n
 \\rightarrow s]\\\\\n
 & = \\frac{s}{s - 1} - \\frac{1}{s - 1} \\sum_{n = 1}^{\\infty} - (n (n +\n
 1)^{- s} + s (n + 1)^{- s} - n^{1 - s})\\\\\n
 & = \\frac{s}{s - 1} - \\frac{1 }{s - 1} \\sum_{n = 1}^{\\infty} \\left( n^{1 - s} -\n
 n (n + 1)^{- s} - s (n + 1)^{- s} \\right)\n
 \\end{array} \\label{gaussmap}\n\\end{equation}\n\n\\subsection{The Harmonic Sawtooth Map $w (x)$ as an Ordinary Fractal String}\n\n\\subsubsection{Definition and Length}\n\nLet\n\\begin{equation}\n
 \\begin{array}{ll}\n
 I^H_n & = \\left( \\frac{1}{n + 1}, \\frac{1}{n} \\right)\n
 \\end{array} \\label{hi}\n\\end{equation}\nbe the $n$-th harmonic interval; then $\\{w (x) \\in \\mathcal{L}_w : x \\in\n\\Omega\\}$ is the piecewise monotone mapping of the unit interval onto itself.\nThe fractal string $\\mathcal{L}_w$ associated with $w (x)$ is the set of\nconnected component functions $w_n (x) \\subset w (x)$ where each $w_n (x)$\nmaps $I_n^H$ onto $(0, 1)$ and vanishes when $x \\not\\in I_n^H$. Thus, the\ndisjoint union of the connected components of $\\mathcal{L}_w$ is the infinite\nsum $w (x) = \\sum_{n = 1}^{\\infty} w_n (x)$ where only one of the $w_n (x)$ is\nnonzero for each $x$; thus $w (x)$ maps the entire unit interval onto itself\nuniquely except for the points of discontinuity on the boundary\n$\\partial \\mathcal{L}_w =\\{0, \\frac{1}{n} : n \\in \\mathbbm{N}^{\\ast} \\}$, where\na choice is to be made between 0 and 1 depending on the direction in which the\nlimit is approached. 
Let\n\\begin{equation}\n
 \\begin{array}{ll}\n
 w_n (x) & = n (x n + x - 1) \\chi (x, I^H_n)\n
 \\end{array} \\label{wn}\n\\end{equation}\nwhere $\\chi (x, I^H_n)$ is the $n$-th harmonic interval indicator\n(\\ref{ii})\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\chi (x, I_n^H) & = \\theta \\left( \\frac{x n + x - 1}{n + 1} \\right)\n
 - \\theta \\left( \\frac{x n - 1}{n} \\right)\n
 \\end{array} \\label{hii}\n\\end{equation}\nThe substitution $n \\rightarrow \\left\\lfloor \\frac{1}{x} \\right\\rfloor$ can be\nmade in (\\ref{hii}), where it is seen that\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\chi \\left( x, I^H_{\\left\\lfloor x^{- 1} \\right\\rfloor}\n
 \\right) & = \\theta \\left( \\frac{x \\left\\lfloor x^{- 1}\n
 \\right\\rfloor + x - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor + 1}\n
 \\right) - \\theta \\left( \\frac{x \\left\\lfloor x^{- 1} \\right\\rfloor\n
 - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor} \\right) = 1 \\label{hieye}\n
 \\end{array}\n\\end{equation}\nand so making the same substitution in (\\ref{wn}) gives\n\\begin{equation}\n
 \\begin{array}{ll}\n
 w (x) & = \\sum_{n = 1}^{\\infty} w_n (x)\\\\\n
 & = \\sum_{n = 1}^{\\infty} n (x n + x - 1) \\chi (x, I^H_n)\\\\\n
 & = \\left\\lfloor x^{- 1} \\right\\rfloor \\left( x \\left\\lfloor x^{-\n
 1} \\right\\rfloor + x - 1 \\right)\\\\\n
 & = x \\left\\lfloor x^{- 1} \\right\\rfloor^2 + x \\left\\lfloor x^{- 1}\n
 \\right\\rfloor - \\left\\lfloor x^{- 1} \\right\\rfloor\n
 \\end{array} \\label{w}\n\\end{equation}\n\\begin{figure}[h]\n
 \\resizebox{12cm}{12cm}{\\includegraphics{harmonicsaw.eps}}\n
 \\caption{The Harmonic Sawtooth Map}\n\\end{figure}\n\nThe intervals $I_n^w$ will be defined such that $\\ell w_n = \\left| I_n^w\n\\right| = \\left| w_n (x) \\right|$. 
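As a quick numerical sanity check of the closed form (\ref{w}) against the component definition (\ref{wn}); this is an illustrative sketch, not part of the derivation:

```python
import math

def w_closed(x):
    # Closed form of the harmonic sawtooth map, valid for 0 < x <= 1:
    # w(x) = x*floor(1/x)^2 + x*floor(1/x) - floor(1/x)
    n = math.floor(1.0 / x)
    return x * n * n + x * n - n

def w_component(x, n):
    # n-th component n(xn + x - 1) restricted to I_n^H = (1/(n+1), 1/n)
    if 1.0 / (n + 1) < x < 1.0 / n:
        return n * (x * n + x - 1)
    return 0.0

# On each I_n^H only the n-th component is nonzero and matches the closed
# form; at the midpoint of I_n^H the map takes the value 1/2.
for n in range(1, 50):
    x = 0.5 * (1.0 / (n + 1) + 1.0 / n)      # midpoint of the interval
    assert abs(w_closed(x) - w_component(x, n)) < 1e-12
    assert abs(w_closed(x) - 0.5) < 1e-12
print("closed form agrees on the midpoints of I_1^H .. I_49^H")
```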
Let\n\\begin{equation}\n \\begin{array}{ll}\n \\mathfrak{h}_n & = \\int^{}_{I^H_n} x (n + 1) n \\mathrm{d} x\\\\\n & = \\frac{\\left( \\frac{1}{n + 1} + \\frac{1}{n} \\right)}{2}\\\\\n & = \\frac{2 n + 1}{2 n (n + 1)}\n \\end{array}\n\\end{equation}\nbe the midpoint of $I^H_n$ then\n\\begin{equation}\n \\begin{array}{ll}\n I_n^w & = \\left( \\mathfrak{h}_n - \\frac{\\left| w_n (x) \\right|}{2},\n \\mathfrak{h}_n + \\frac{\\left| w_n (x) \\right|}{2} \\right)\\\\\n & = \\left( \\frac{4 n + 1}{4 n (n + 1)}, \\frac{4 n + 3}{4 n (n + 1)}\n \\right)\n \\end{array}\n\\end{equation}\nso that\n\\begin{equation}\n \\begin{array}{ll}\n \\ell w_n & = \\left| w_n (x) \\right|\\\\\n & = \\int_0^1 n (x n + x - 1) \\chi_{} (x, I^H_n) \\mathrm{d} x\\\\\n & = \\int^{\\frac{1}{n}}_{\\frac{1}{n + 1}} w (x) \\mathrm{d} x\\\\\n & = \\left| I_n^w \\right|\\\\\n & = \\int_0^1 \\chi (x, I_n^w) \\mathrm{d} x\\\\\n & = \\frac{4 n + 3}{4 n (n + 1)} - \\frac{4 n + 1}{4 n (n + 1)}\\\\\n & = \\frac{1}{2 n (n + 1)}\n \\end{array} \\label{wlen}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{10cm}{8cm}{\\includegraphics{reciplen.eps}}\n \\caption{Reciprocal lengths $\\ell w_n^{- 1}$}\n\\end{figure}\n\nThe total length of $\\mathcal{L}_w$ is\n\\begin{equation}\n \\begin{array}{ll}\n |\\mathcal{L}_w | & = \\int_0^1 w (x) \\mathrm{d} x\\\\\n & = \\sum_{n = 1}^{\\infty} \\ell w_n\\\\\n & = \\sum_{n = 1}^{\\infty} \\frac{1}{2 n (n + 1)}\\\\\n & = \\frac{1}{2}\n \\end{array}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{10cm}{10cm}{\\includegraphics{sawfi.eps}}\n \\caption{$\\chi (x, I_n^w)$ green and $w_n (x)$ blue for $n = 1 \\ldots 3$ and\n $x = \\frac{1}{4} ..1$}\n\\end{figure}\n\n\\subsubsection{Geometry and Volume of the Inner Tubular Neighborhood}\n\nThe geometric counting function (\\ref{geocount}) of $\\mathcal{L}_w$ is\n\\begin{equation}\n \\begin{array}{ll}\n N_{\\mathcal{L}_w} (x) & =\\#\\{n \\geqslant 1 : \\ell w_n^{- 1} \\leqslant\n x\\}\\\\\n & =\\#\\{n \\geqslant 1 : 2 (n + 1) n 
\\leqslant x\\}\\\\\n & = \\left\\lfloor \\frac{\\sqrt{2 x + 1}}{2} - \\frac{1}{2} \\right\\rfloor\n \\end{array} \\label{wgeocount}\n\\end{equation}\nwhich is used to calculate the limiting constant (\\ref{mconst}) $C_w$\nappearing in the equation for the Minkowski content \\\n\\begin{equation}\n \\begin{array}{ll}\n C_w & = \\lim_{x \\rightarrow \\infty} \\frac{N_{\\mathcal{L}_w}\n (x)}{x^{D_{\\mathcal{L}_w}}}\\\\\n & = \\lim_{x \\rightarrow \\infty} \\frac{\\frac{\\sqrt{2 x + 1}}{2} -\n \\frac{1}{2}}{\\sqrt{x}}\\\\\n & = \\frac{\\sqrt{2}}{2}\n \\end{array} \\label{wmconst}\n\\end{equation}\nThe function $N_{\\mathcal{L}_w} (x)$ happens to coincide with\n{\\cite[A095861]{oeis}}, which is the number of primitive Pythagorean triangles\nof the form $\\{(a, b, b + 1) : (b + 1) \\leqslant n\\}$.\n{\\cite[171-176]{bon}}{\\cite[10.1]{pt}}{\\cite[11.2-11.5]{ent}} Let\n\\begin{equation}\n \\begin{array}{lll}\n v_{} (\\varepsilon) & = \\min (j : \\ell w_j < 2 \\varepsilon) & =\n \\left\\lfloor \\frac{\\varepsilon + \\sqrt{\\varepsilon^2 + \\varepsilon}}{2\n \\varepsilon} \\right\\rfloor\n \\end{array}\n\\end{equation}\nwhich is the floor of the solution to the inverse length equation\n\\begin{equation}\n \\begin{array}{ll}\n \\frac{\\varepsilon + \\sqrt{\\varepsilon^2 + \\varepsilon}}{2 \\varepsilon} =\n \\left\\{ n : \\ell w_{n - 1} = 2 \\varepsilon \\right\\} = \\left\\{ n :\n \\frac{1}{2 n (n - 1)} = 2 \\varepsilon \\right\\} & \\\\\n \\frac{\\ell w_{\\frac{\\varepsilon + \\sqrt{\\varepsilon^2 + \\varepsilon}}{2\n \\varepsilon} - 1}}{2} = \\varepsilon & \n \\end{array}\n\\end{equation}\nThen the volume of the inner tubular neighborhood of $\\partial \\mathcal{L}_w$\nwith radius $\\varepsilon$ (\\ref{tnv}) is\n\\begin{equation}\n \\begin{array}{ll}\n V_{\\mathcal{L}_w} (\\varepsilon) & = 2 \\varepsilon N_{\\mathcal{L}_w} \\left(\n \\frac{1}{2 \\varepsilon^{}} \\right) + \\sum_j^{\\ell w_j < 2 \\varepsilon}\n \\ell w_j\\\\\n & = 2 \\varepsilon N_{\\mathcal{L}_w} \\left( 
\\frac{1}{2 \\varepsilon}\n
 \\right) + \\sum_{\\tmscript{\\begin{array}{l}\n
 n = v (\\varepsilon)\n
 \\end{array}}}^{\\infty} \\frac{1}{2 (n + 1) n}\\\\\n
 & = 2 \\varepsilon N_{\\mathcal{L}_w} \\left( \\frac{1}{2 \\varepsilon}\n
 \\right) + \\frac{1}{2 v (\\varepsilon)}\\\\\n
 & = 2 \\varepsilon \\left\\lfloor \\frac{\\sqrt{\\frac{1}{\\varepsilon} + 1}}{2}\n
 - \\frac{1}{2} \\right\\rfloor + \\frac{1}{2 v (\\varepsilon)}\\\\\n
 & = \\frac{4 \\varepsilon v (\\varepsilon)^2 - 4 \\varepsilon v (\\varepsilon)\n
 + 1}{2 v (\\varepsilon)}\n
 \\end{array} \\label{wvol}\n\\end{equation}\nsince\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\sum_{n = m}^{\\infty} \\frac{1}{2 n (n + 1)} & = \\frac{1}{2 m}\n
 \\end{array}\n\\end{equation}\nand by definition we have\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\lim_{\\varepsilon \\rightarrow 0^+} V_{\\mathcal{L}_w} (\\varepsilon) & = 0\\\\\n
 \\lim_{\\varepsilon \\rightarrow \\infty} V_{\\mathcal{L}_w} (\\varepsilon) & =\n
 |\\mathcal{L}_w | = \\frac{1}{2}\n
 \\end{array}\n\\end{equation}\nThus, using (\\ref{wmconst}) and (\\ref{wvol}), the Minkowski content\n(\\ref{mcontent}) of $\\mathcal{L}_w$ is\n\\begin{equation}\n
 \\begin{array}{lll}\n
 \\mathcal{M}_{\\mathcal{L}_w} & = \\lim_{\\varepsilon \\rightarrow 0^+}\n
 \\frac{V_{\\mathcal{L}_w} (\\varepsilon)}{\\varepsilon^{1 - D_{\\mathcal{L}_w}}}\n
 & \\\\\n
 & = \\lim_{\\varepsilon \\rightarrow 0^+} \\frac{1}{\\sqrt{\\varepsilon}} \\left( 2\n
 \\varepsilon \\left\\lfloor \\frac{\\sqrt{\\frac{1}{\\varepsilon} + 1}}{2} -\n
 \\frac{1}{2} \\right\\rfloor + \\frac{1}{2} \\left\\lfloor \\frac{\\varepsilon +\n
 \\sqrt{\\varepsilon^2 + \\varepsilon}}{2 \\varepsilon} \\right\\rfloor^{- 1}\n
 \\right) & \\\\\n
 & = \\frac{C_w 2^{1 - D_{\\mathcal{L}_w}}}{1 - D_{\\mathcal{L}_w}} & \\\\\n
 & = \\frac{\\frac{\\sqrt{2}}{2} 2^{1 - \\frac{1}{2}}}{1 - \\frac{1}{2}} & \\\\\n
 & = 2 & \n
 \\end{array} \\label{wmcontent}\n\\end{equation}\n\\begin{figure}[h]\n
 \\resizebox{12cm}{10cm}{\\includegraphics{sawgeocount.eps}}\n 
\\caption{Geometric Counting Function $N_{\\mathcal{L}_w} (x)$ of $w (x)$}\n\\end{figure}\n\n\\begin{figure}[h]\n
 \\resizebox{12cm}{10cm}{\\includegraphics{minleps.eps}}\n
 \\caption{$\\{v (\\varepsilon) = \\min (n : \\ell w_n < 2 \\varepsilon) :\n
 \\varepsilon = \\frac{1}{1000} \\ldots \\frac{1}{4} \\}$}\n\\end{figure}\n\n\\begin{figure}[h]\n
 \\resizebox{12cm}{10cm}{\\includegraphics{Vw.eps}}\n
 \\caption{Volume of the inner tubular neighborhood of $\\partial\n
 \\mathcal{L}_w$ with radius $\\varepsilon$ $\\left\\{ V_{\\mathcal{L}_w}\n
 (\\varepsilon) : \\varepsilon = 0 \\ldots \\frac{1}{8} \\right\\}$}\n\\end{figure}\n\n\\begin{figure}[h]\n
 \\resizebox{12cm}{10cm}{\\includegraphics{vse.eps}}\n
 \\caption{$\\left\\{ \\frac{V_{\\mathcal{L}_w} (\\varepsilon)}{\\sqrt{\\varepsilon}} :\n
 \\varepsilon = 0 \\ldots \\frac{1}{8} \\right\\}$ and $\\frac{V_{\\mathcal{L}_w} (8^{-\n
 1})}{\\sqrt{8^{- 1}}} = \\sqrt{2}$}\n\\end{figure}\n\n\\subsubsection{The Geometric Zeta Function $\\zeta_{\\mathcal{L}_w} (s)$}\n\nThe geometric zeta function (\\ref{geozeta}) of $w (x)$ is the Dirichlet series\nof the lengths $\\ell w_n$ (\\ref{wlen}) and also an integral over the geometric\nlength counting function (\\ref{wgeocount}) $N_{\\mathcal{L}_w} (x)$\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\zeta_{\\mathcal{L}_w} (s) & = \\sum_{n = 1}^{\\infty} \\ell w_n^s\\\\\n
 & = \\sum_{n = 1}^{\\infty} \\left( \\frac{1}{2 n (n + 1)} \\right)^s\\\\\n
 & = \\sum_{n = 1}^{\\infty} 2^{- s} (n + 1)^{- s} n^{- s}\\\\\n
 & = s \\int_0^{\\infty} N_{\\mathcal{L}_w} (x) x^{- s - 1} \\mathrm{d} x\\\\\n
 & = s \\int_0^{\\infty} \\left\\lfloor \\frac{\\sqrt{2 x + 1}}{2} - \\frac{1}{2}\n
 \\right\\rfloor x^{- s - 1} \\mathrm{d} x\n
 \\end{array} \\label{wgeozeta}\n\\end{equation}\nThe residue (\\ref{geozetares}) of $\\zeta_{\\mathcal{L}_w} (s)$ at\n$D_{\\mathcal{L}_w}$ is\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\underset{s = D_{\\mathcal{L}_w}}{\\tmop{Res} (\\zeta_{\\mathcal{L}_w} (s))} & =\n
 \\lim_{s \\rightarrow 
D_{\\mathcal{L}_w}^+} (s - D_{\\mathcal{L}_w})\n
 \\zeta_{\\mathcal{L}_w} (s)\\\\\n
 & = \\lim_{s \\rightarrow \\frac{1}{2}^+} \\left( s - \\frac{1}{2} \\right)\n
 \\zeta_{\\mathcal{L}_w} (s)\\\\\n
 & = \\lim_{s \\rightarrow \\frac{1}{2}^+} \\left( s - \\frac{1}{2} \\right)\n
 \\sum_{n = 1}^{\\infty} 2^{- s} (n + 1)^{- s} n^{- s}\\\\\n
 & = \\lim_{s \\rightarrow \\frac{1}{2}^+} \\left( s - \\frac{1}{2} \\right) s\n
 \\int_0^{\\infty} \\left\\lfloor \\frac{\\sqrt{2 x + 1}}{2} - \\frac{1}{2}\n
 \\right\\rfloor x^{- s - 1} \\mathrm{d} x\\\\\n
 & = \\frac{\\sqrt{2}}{4}\n
 \\end{array}\n\\end{equation}\nwhich is consistent with (\\ref{mcontent}) and the Minkowski content $\\mathcal{M}_{\\mathcal{L}_w} = 2$ of (\\ref{wmcontent}).\nThe values of $\\zeta_{\\mathcal{L}_w} (n)$ at positive integer values $n \\in\n\\mathbbm{N}^{\\ast}$ are given explicitly by a rather unwieldy sum of binomial\ncoefficients and the Riemann zeta function $\\zeta (n)$ at even integer values.\nFirst, define\n\\begin{equation}\n
 \\begin{array}{ll}\n
 a_n & = \\frac{\\left( n - 1 \\right) \\left( 1 - \\left( - 1 \\right)^{n + 1}\n
 \\right)}{2}\\\\\n
 b_n & = \\frac{\\left( - 1 \\right)^{n + 1} \\left( n - 1 \\right)}{2} + n -\n
 \\frac{7}{4} + \\frac{(- 1)^n}{4}\\\\\n
 c_n & = (- 1)^n (n - 1)\\\\\n
 d_n & = \\frac{(- 1)^n}{2}\n
 \\end{array}\n\\end{equation}\nthen\n\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\zeta_{\\mathcal{L}_w} (n) & = \\frac{( - 1)^n \\binom{2 n - 1}{n - 1}}{\n
 2^n} + \\sum_{m = a_n}^{b_n} \\frac{2 (- 1)^n \\binom{2 m + c_n - d_n +\n
 \\frac{1}{2}}{n - 1} \\zeta \\left( d_n + 2 n - \\frac{3}{2} - 2 m - c_n\n
 \\right)}{ 2^n}\n
 \\end{array}\n\\end{equation}\n\nThe terms of $\\zeta_{\\mathcal{L}_w} (n)$ from $n = 1$ to $10$ are shown below\nin Table \\ref{wgztable}.\n\n\\begin{table}[h]\n
 $\\left( \\begin{array}{llllll}\n
 + \\frac{1}{2} & & & & & \\\\\n
 - \\frac{3}{4} & + \\frac{1}{2} \\hspace{0.25em} \\zeta \\left( 2 \\right) & & \n
 & & \\\\\n
 + \\frac{5}{4} & - \\frac{3}{4} \\hspace{0.25em} \\zeta \\left( 2 \\right) & & \n
 & & \\\\\n
 - \\frac{35}{16} & + \\frac{5}{4} \\hspace{0.25em} \\zeta \\left( 2 \\right) & +\n
 \\frac{1}{8} \\hspace{0.25em} 
\\zeta \\left( 4 \\right) & & & \\\\\n
 + \\frac{63}{16} & - \\frac{35}{16} \\hspace{0.25em} \\zeta \\left( 2 \\right) &\n
 - \\frac{5}{16} \\hspace{0.25em} \\zeta \\left( 4 \\right) & & & \\\\\n
 - \\frac{231}{32} & + \\frac{63}{16} \\hspace{0.25em} \\zeta \\left( 2 \\right)\n
 & + \\frac{21}{32} \\hspace{0.25em} \\zeta \\left( 4 \\right) & + \\frac{1}{32}\n
 \\zeta \\left( 6 \\right) & & \\\\\n
 + \\frac{429}{32} & - \\frac{231}{32} \\hspace{0.25em} \\zeta \\left( 2 \\right)\n
 & - \\frac{21}{16} \\hspace{0.25em} \\zeta \\left( 4 \\right) & - \\frac{7}{64}\n
 \\hspace{0.25em} \\zeta \\left( 6 \\right) & & \\\\\n
 - \\frac{6435}{256} & + \\frac{429}{32} \\hspace{0.25em} \\zeta \\left( 2\n
 \\right) & + \\frac{165}{64} \\hspace{0.25em} \\zeta \\left( 4 \\right) & +\n
 \\frac{9}{32} \\hspace{0.25em} \\zeta \\left( 6 \\right) & + \\frac{1}{128}\n
 \\hspace{0.25em} \\zeta \\left( 8 \\right) & \\\\\n
 + \\frac{12155}{256} & - \\frac{6435}{256} \\hspace{0.25em} \\zeta \\left( 2\n
 \\right) & - \\frac{1287}{256} \\hspace{0.25em} \\zeta \\left( 4 \\right) & -\n
 \\frac{165}{256} \\hspace{0.25em} \\zeta \\left( 6 \\right) & - \\frac{9}{256}\n
 \\hspace{0.25em} \\zeta \\left( 8 \\right) & \\\\\n
 - \\frac{46189}{512} & + \\frac{12155}{256} \\hspace{0.25em} \\zeta \\left( 2\n
 \\right) & + \\frac{5005}{512} \\hspace{0.25em} \\zeta \\left( 4 \\right) & +\n
 \\frac{715}{512} \\hspace{0.25em} \\zeta \\left( 6 \\right) & + \\frac{55}{512}\n
 \\hspace{0.25em} \\zeta \\left( 8 \\right) & + \\frac{1}{512}\n
 \\hspace{0.25em} \\zeta \\left( 10 \\right)\n
 \\end{array} \\right)$\n \n \n
 \\caption{$\\{ \\zeta_{\\mathcal{L}_w} (n) = \\Sigma \\tmop{row}_n : n = 1 \\ldots 10 \\}$\\label{wgztable}}\n\\end{table}\n\n\\section{Fractal Strings and Dynamical Zeta Functions}\n\n\\subsection{Fractal Strings}\n\nA fractal string $\\mathcal{L}$ is defined as a nonempty bounded open subset\nof the real line $\\mathcal{L} \\subseteq \\mathbbm{R}$ consisting of a 
countable\ndisjoint union of open intervals $I_j$\n\\begin{equation}\n
 \\mathcal{L}= \\bigcup_{j = 1}^{\\infty} I_j\n\\end{equation}\nThe length of the $j$-th interval $I_j$ is denoted by\n\\begin{equation}\n
 \\ell_j = \\left| I_j \\right|\n\\end{equation}\nwhere $| \\cdot |$ is the $1$-dimensional Lebesgue\nmeasure. The lengths $\\ell_j$ must form a nonnegative monotonically nonincreasing\nsequence and the total length must be finite, that is\n\\begin{equation}\n
 \\begin{array}{l}\n
 |\\mathcal{L}|_1 = \\sum_{j = 1}^{\\infty} \\ell_j < \\infty\\\\\n
 \\ell_1 \\geqslant \\ell_2 \\geqslant \\ldots \\geqslant \\ell_j \\geqslant\n
 \\ell_{j + 1} \\geqslant \\cdots \\geqslant 0\n
 \\end{array}\n\\end{equation}\nThe case where $\\ell_j = 0$ for some $j$ will be excluded here, since then\n$\\{\\ell_j\\}$ would reduce to a finite sequence. The fractal string is defined completely by its sequence of\nlengths, so it can be denoted\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\mathcal{L} & =\\{\\ell_j \\}_{j = 1}^{\\infty}\n
 \\end{array}\n\\end{equation}\nThe boundary of $\\mathcal{L}$ in $\\mathbbm{R}$, denoted by $\\partial\n\\mathcal{L} \\subset \\Omega$, is a totally disconnected bounded perfect subset\nwhich can be represented as a string of finite length, and generally any\ncompact subset of $\\mathbbm{R}$ also has this property. The boundary $\\partial\n\\mathcal{L}$ is said to be perfect since it is closed and each of its points\nis a limit point. Since the Cantor--Bendixson lemma states that there exists a\nperfect set $P \\subset \\partial \\mathcal{L}$ such that $\\partial \\mathcal{L}-\nP$ is at most countable, we can define $\\mathcal{L}$ as the complement of\n$\\partial \\mathcal{L}$ in its closed convex hull. The connected components of\nthe bounded open set $\\mathcal{L} \\backslash \\partial \\mathcal{L}$ are the\nintervals $I_j$. 
{\\cite[1.2]{gsfs}}{\\cite[2.2\nEx17]{mti}}{\\cite[3.1]{fractalzetastrings}}{\\cite{weylberry}}{\\cite{fdisp}}{\\cite{hearfractaldrumshape}}{\\cite{gmc}}{\\cite{ncfg}}{\\cite{zflf}}{\\cite{possf}}{\\cite{cdssfs}}\n\n\\subsubsection{The Minkowski Dimension $D_{\\mathcal{L}}$ and Content\n$\\mathcal{M}_{\\mathcal{L}}$}\n\nThe Minkowski dimension $D_{\\mathcal{L}} \\in [0, 1]$, also known as the box\ndimension, is the infimum\n\\begin{equation}\n
 \\begin{array}{ll}\n
 D_{\\mathcal{L}} & = \\inf \\{\\alpha \\geqslant 0 : V_{\\mathcal{L}} (\\varepsilon) = O\n
 (\\varepsilon^{1 - \\alpha}) \\tmop{as} \\varepsilon \\rightarrow 0^+ \\}\n
 \\end{array}\n\\end{equation}\nNote that $\\zeta_{\\mathcal{L}} (1) = \\sum_{j = 1}^{\\infty} \\ell_j = |\\mathcal{L}|_1$ is the total length of the string.\nHere $V_{\\mathcal{L}} (\\varepsilon)$ is the volume of the inner tubular neighborhoods of\n$\\partial \\mathcal{L}$ with radius $\\varepsilon$\n\\begin{equation}\n
 \\begin{array}{ll}\n
 V_{\\mathcal{L}} (\\varepsilon) & = \\left| \\{ x \\in \\mathcal{L}: d (x,\n
 \\partial \\mathcal{L}) < \\varepsilon \\} \\right|\\\\\n
 & = \\sum_j^{\\ell_j \\geqslant 2 \\varepsilon} 2 \\varepsilon +\n
 \\sum_j^{\\ell_j < 2 \\varepsilon} \\ell_j\\\\\n
 & = 2 \\varepsilon N_{\\mathcal{L}} \\left( \\frac{1}{2 \\varepsilon}\n
 \\right) + \\sum_j^{\\ell_j < 2 \\varepsilon} \\ell_j\n
 \\end{array} \\label{tnv}\n\\end{equation}\nand $N_{\\mathcal{L}} (x)$ is the geometric counting function, which is the\nnumber of components whose reciprocal length is less than or equal to\n$x$.\n\\begin{equation}\n
 \\begin{array}{ll}\n
 N_{\\mathcal{L}} (x) & =\\#\\{j \\geqslant 1 : \\ell_j^{- 1} \\leqslant x\\}\\\\\n
 & = \\sum^{\\ell_j^{- 1} \\leqslant x}_{\\tmscript{\\begin{array}{l}\n
 j \\geqslant 1\n
 \\end{array}}} 1\n
 \\end{array} \\label{geocount}\n\\end{equation}\nThe Minkowski content of $\\mathcal{L}$ is then defined as\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\mathcal{M}_{\\mathcal{L}} & = \\lim_{\\varepsilon \\rightarrow 0^+}\n
 \\frac{V_{\\mathcal{L}} 
(\\varepsilon)}{\\varepsilon^{1 - D_{\\mathcal{L}}}} \\\\\n
 & = \\frac{C_{\\mathcal{L}} 2^{1 - D_{\\mathcal{L}}}}{1 - D_{\\mathcal{L}}}\\\\\n
 & = \\frac{\\tmop{Res} (\\zeta_{\\mathcal{L}} (s) ; D_{\\mathcal{L}}) 2^{1 -\n
 D_{\\mathcal{L}}}}{D_{\\mathcal{L}} (1 - D_{\\mathcal{L}})} \n
 \\end{array} \\label{mcontent}\n\\end{equation}\nwhere $C_{\\mathcal{L}}$ is the constant\n\\begin{equation}\n
 \\begin{array}{ll}\n
 C_{\\mathcal{L}} = & \\lim_{x \\rightarrow \\infty} \\frac{N_{\\mathcal{L}}\n
 (x)}{x^{D_{\\mathcal{L}}}}\n
 \\end{array} \\label{mconst}\n\\end{equation}\nIf $\\mathcal{M}_{\\mathcal{L}} \\in (0, \\infty)$ exists then $\\mathcal{L}$ is\nsaid to be Minkowski measurable, which necessarily means that the geometry of\n$\\mathcal{L}$ does not oscillate, and vice versa. {\\cite[1]{cdfs}}\n{\\cite{cizlm}}{\\cite{fsscd}}{\\cite[6.2]{fgnt}}\n\n\\subsubsection{The Geometric Zeta Function $\\zeta_{\\mathcal{L}} (s)$}\n\nThe geometric zeta function $\\zeta_{\\mathcal{L}} (s)$ of\n$\\mathcal{L}$ is the Dirichlet series\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\zeta_{\\mathcal{L}} (s) & = \\sum_{j = 1}^{\\infty} \\ell_j^s\\\\\n
 & = s \\int_0^{\\infty} N_{\\mathcal{L}} (x) x^{- s - 1} \\mathrm{d} x\n
 \\end{array} \\label{geozeta}\n\\end{equation}\nwhich is holomorphic for $\\tmop{Re} (s) > D_{\\mathcal{L}}$. If\n$\\mathcal{L}$ is Minkowski measurable then $0 < D_{\\mathcal{L}} < 1$ is the\nunique simple pole of $\\zeta_{\\mathcal{L}} (s)$ on the vertical line\n$\\tmop{Re} (s) = D_{\\mathcal{L}}$. 
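As a concrete numerical illustration (a sketch only, using the harmonic sawtooth lengths $\ell_j = \frac{1}{2 j (j + 1)}$ from the previous section), partial sums of the Dirichlet series can be checked against known closed forms: at $s = 1$ the series telescopes to the total length $\frac{1}{2}$, and at $s = 2$ it equals $\frac{1}{2} \zeta (2) - \frac{3}{4}$.

```python
import math

def geometric_zeta(s, count=200000):
    # Partial sum of zeta_L(s) = sum_j l_j^s for the sawtooth lengths
    # l_j = 1/(2 j (j+1)); the series converges for Re(s) > D = 1/2.
    return sum((1.0 / (2.0 * j * (j + 1))) ** s for j in range(1, count + 1))

# s = 1: telescoping sum 1/(2j(j+1)) = (1/2)(1/j - 1/(j+1)) gives 1/2
assert abs(geometric_zeta(1.0) - 0.5) < 1e-4

# s = 2: closed form zeta(2)/2 - 3/4 = pi^2/12 - 3/4
assert abs(geometric_zeta(2.0) - (math.pi ** 2 / 12.0 - 0.75)) < 1e-9
print("partial sums agree with the closed forms")
```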
Assuming $\\zeta_{\\mathcal{L}} (s)$ has a\nmeromorphic extension to a neighborhood of $D_{\\mathcal{L}}$, then\n$\\zeta_{\\mathcal{L}} (s)$ has a simple pole at $s =\nD_{\\mathcal{L}}$ if\n\\begin{equation}\n
 N_{\\mathcal{L}} (s) = O (s^{D_{\\mathcal{L}}}) \\tmop{as} s \\rightarrow \\infty\n\\end{equation}\nor if the volume of the tubular neighborhoods satisfies\n\\begin{equation}\n
 V_{\\mathcal{L}} (\\varepsilon) = O (\\varepsilon^{1 - D_{\\mathcal{L}}})\n
 \\tmop{as} \\varepsilon \\rightarrow 0^+\n\\end{equation}\nIt is possible that the residue of $\\zeta_{\\mathcal{L}} (s)$ at $s =\nD_{\\mathcal{L}}$ is positive and finite\n\\begin{equation}\n
 0 < \\lim_{s \\rightarrow D_{\\mathcal{L}}} (s - D_{\\mathcal{L}})\n
 \\zeta_{\\mathcal{L}} (s) < \\infty \\label{geozetares}\n\\end{equation}\neven if $N_{\\mathcal{L}} (s)$ is not of order $s^{D_{\\mathcal{L}}}$ as $s\n\\rightarrow \\infty$ and $V_{\\mathcal{L}} (\\varepsilon)$ is not of order\n$\\varepsilon^{1 - D_{\\mathcal{L}}}$; however, this does not contradict the\nMinkowski measurability of $\\mathcal{L}$.\n\n\\subsubsection{Complex Dimensions, Screens and Windows}\n\nThe set of visible complex dimensions of $\\mathcal{L}$, denoted by\n$\\mathcal{D}_{\\mathcal{L}} (W)$, is a discrete subset of $\\mathbbm{C}$\nconsisting of the poles of $\\{\\zeta_{\\mathcal{L}} (s) : s \\in W\\}$\n\\begin{equation}\n
 \\begin{array}{ll}\n
 \\mathcal{D}_{\\mathcal{L}} (W) & =\\{w \\in W : w \\tmop{is} \\tmop{a} \\tmop{pole} \\tmop{of} \\zeta_{\\mathcal{L}}\\}\n
 \\end{array}\n\\end{equation}\nWhen $W$ is the entire complex plane, the set $\\mathcal{D}_{\\mathcal{L}}\n(\\mathbbm{C}) =\\mathcal{D}_{\\mathcal{L}}$ is simply called the set of complex\ndimensions of $\\mathcal{L}$. The presence of oscillations in $V_{\\mathcal{L}} (\\varepsilon)$\nimplies the presence of nonreal complex dimensions with $\\tmop{Re} (\\cdot) =\nD_{\\mathcal{L}}$ and vice versa. 
More generally, the complex dimensions of a\nfractal string $\\mathcal{L}$ describe its geometric and spectral oscillations.\n\n\\subsubsection{Frequencies of Fractal Strings and Spectral Zeta Functions}\n\nThe eigenvalues $\\lambda_n$ of the Dirichlet Laplacian $\\Delta u (x) = -\n\\frac{\\mathrm{d}^2}{\\mathrm{d} x^2} u (x)$ on a bounded open set $\\Omega \\subset\n\\mathbbm{R}$ correspond to the normalized frequencies $f_n =\n\\frac{\\sqrt{\\lambda_n}}{\\pi}$ of a fractal string. The frequencies of the unit\ninterval are the natural numbers $n \\in \\mathbbm{N}^{\\ast}$ and the\nfrequencies of an interval of length $\\ell$ are $n \\ell^{- 1}$. The\nfrequencies of $\\mathcal{L}$ are the numbers\n\\begin{equation}\n \\begin{array}{l}\n f_{k, j} = k \\ell_j^{- 1} \\forall \\text{ $k, j \\in \\mathbbm{N}^{\\ast}$}\n \\end{array}\n\\end{equation}\nThe spectral counting function $N_{\\upsilon \\mathcal{L}} (x)$ counts the frequencies\nof $\\mathcal{L}$ with multiplicity\n\\begin{equation}\n \\begin{array}{ll}\n N_{\\upsilon \\mathcal{L}} (x) & = \\sum_{k = 1}^{\\infty} N_{\\mathcal{L}} \\left(\n \\frac{x}{k} \\right)\\\\\n & = \\sum_{j = 1}^{\\infty} \\left\\lfloor x \\ell_j \\right\\rfloor\n \\end{array} \\label{spectralcount}\n\\end{equation}\nThe spectral zeta function $\\zeta_{\\upsilon \\mathcal{L}} (s)$ of $\\mathcal{L}$\nis connected to the Riemann zeta function (\\ref{zeta}) by\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\upsilon \\mathcal{L}} (s) & = \\sum_{k = 1}^{\\infty} \\sum_{j =\n 1}^{\\infty} k^{- s} \\ell^s_j\\\\\n & = \\zeta (s) \\sum_{j = 1}^{\\infty} \\ell_j^s\\\\\n & = \\zeta (s) \\zeta_{\\mathcal{L}} (s)\n \\end{array} \\label{spectralzeta}\n\\end{equation}\n{\\cite[1.1]{cdfs}}{\\cite[1.2.1]{gsfs}}\n\n\\subsubsection{Generalized Fractal Strings and Dirichlet Integrals}\n\nA generalized fractal string is a local positive or local complex measure $\\eta\n(x)$ on $(0, \\infty)$ such that\n\\begin{equation}\n \\int_0^{x_0} \\eta (x) \\mathrm{d} x = 0 
\\label{gfs}\n\\end{equation}\nfor some $x_0 > 0$. A local positive measure is a standard positive Borel\nmeasure $\\eta (J)$ on $(0, \\infty)$ where $J$ is the set of all bounded\nsubintervals of $(0, \\infty)$, in which case $\\eta (x) = | \\eta (x) |$. More\ngenerally, a measure $\\eta (x)$ is a local complex measure if $\\eta (A)$ is\nwell-defined for any subset $A \\subset [a, b]$ where $[a, b] \\subset (0,\n\\infty)$ is a bounded subinterval of the positive half-line $(0, \\infty)$ and the\nrestriction of $\\eta$ to the Borel subsets of $[a, b]$ is a complex measure on\n$[a, b]$ in the traditional sense. The geometric counting function of $\\eta\n(x)$ is defined as\n\\begin{equation}\n \\begin{array}{ll}\n N_{\\eta} (x) & = \\int_0^x \\eta (t) \\mathrm{d} t\n \\end{array} \\label{ggcf}\n\\end{equation}\nThe dimension $D_{\\eta}$ is the abscissa of convergence of the Dirichlet\nintegral\n\\begin{equation}\n \\zeta_{| \\eta |} (\\sigma) = \\int_0^{\\infty} x^{- \\sigma} | \\eta (x) | \\mathrm{d}\n x \\label{gd}\n\\end{equation}\nIn other terms, it is the smallest positive real $\\sigma$ such that the\nimproper Riemann--Lebesgue integral (\\ref{gd}) converges to a finite value. The geometric zeta\nfunction is defined as the Mellin transform\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\eta} (s) & = \\int_0^{\\infty} x^{- s} \\eta (x) \\mathrm{d} x\n \\end{array} \\label{ggz}\n\\end{equation}\nwhere $\\tmop{Re} (s) > D_{\\eta}$.\n\n\\subsection{Fractal Membranes and Spectral Partitions}\n\n\\subsubsection{Complex Dimensions of Dynamical Zeta Functions}\n\nThe fractal membrane $\\mathcal{T}_{\\mathcal{L}}$ associated with $\\mathcal{L}$ is the\nadelic product\n\\begin{equation}\n \\mathcal{T}_{\\mathcal{L}} = \\coprod_{j = 1}^{\\infty} \\mathcal{T}_j\n\\end{equation}\nwhere each $\\mathcal{T}_j$ is an interval $I_j$ of length $\\log (\\ell_j^{-\n1})^{- 1}$. To each $\\mathcal{T}_j$ is associated a Hilbert space\n$\\mathcal{H}_j = L^2 (I_j)$ of square integrable functions on $I_j$. 
The\nspectral partition function $Z_{\\mathcal{L}} (s)$ of $\\mathcal{L}$ is an Euler\nproduct expansion which has no zeros or poles in $\\tmop{Re} (s) > D_M\n(\\mathcal{L})$.\n\\begin{equation}\n \\begin{array}{ll}\n Z_{\\mathcal{L}} (s) & = \\prod_{j = 1}^{\\infty} \\frac{1}{1 - \\ell_j^s}\\\\\n & = \\prod_{j = 1}^{\\infty} Z_{\\mathcal{L}_j} (s)\n \\end{array}\n\\end{equation}\nwhere $D_M (\\mathcal{L})$ is the Minkowski dimension of $\\mathcal{L}$ and\n$Z_{\\mathcal{L}_j} (s) = \\frac{1}{1 - \\ell_j^s}$ is the $j$-th Euler factor,\nthe partition function of the $j$-th component of the fractal membrane.\n{\\cite[3.2.2]{fractalzetastrings}}\n\n\\subsubsection{Dynamical Zeta Functions of Fractal Membranes}\n\nThe dynamical zeta function of a fractal membrane $\\mathcal{L}$ is the\nnegative of the logarithmic derivative of the zeta function associated with\n$\\mathcal{L}$.\n\\begin{equation}\n \\begin{array}{ll}\n Z_{\\mathcal{L}} (s) & = - \\frac{\\mathrm{d}}{\\mathrm{d} s} \\ln (\\zeta_{\\mathcal{L}}\n (s))\\\\\n & = - \\frac{\\frac{\\mathrm{d}}{\\mathrm{d} s} \\zeta_{\\mathcal{L}}\n (s)}{\\zeta_{\\mathcal{L}} (s)}\n \\end{array} \\label{dzfm}\n\\end{equation}\n\n\\section{Special Functions, Definitions, and Conventions}\n\n\\subsection{Special Functions}\n\n\\subsubsection{The Interval Indicator (Characteristic) Function $\\chi (x, I)$}\n\nThe (left-open, right-closed) interval indicator function is $\\chi (x, I)$\nwhere $I = (a, b]$\n\\begin{equation}\n \\begin{array}{ll}\n \\chi (x, I) & = \\left\\{ \\begin{array}{ll}\n 1 & x \\in I\\\\\n 0 & x \\not\\in I\n \\end{array} \\right.\\\\\n & = \\left\\{ \\begin{array}{ll}\n 1 & a < x \\leqslant b\\\\\n 0 & \\tmop{otherwise}\n \\end{array} \\right.\\\\\n & = \\theta (x - a) - \\theta (x - a) \\theta (x - b)\n \\end{array} \\text{} \\label{ii}\n\\end{equation}\nand $\\theta$ is the Heaviside unit step function, the derivative of which is\nthe Dirac delta function $\\delta$\n\\begin{equation}\n \\begin{array}{ll}\n 
\\text{$\\int \\delta (x) \\mathrm{d} x$} & = \\theta (x)\\\\\n \\theta (x) & = \\left\\{ \\begin{array}{ll}\n 0 & x < 0\\\\\n 1 & x \\geqslant 0\n \\end{array} \\right.\n \\end{array} \\label{deltastep}\n\\end{equation}\nThe discontinuous point of $\\theta (x)$ has the limiting values\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{x \\rightarrow 0^-} \\theta (x) & = 0\\\\\n \\lim_{x \\rightarrow 0^+} \\theta (x) & = 1\n \\end{array}\n\\end{equation}\nthus the values of $\\chi (x, (a, b])$ on the boundary can be chosen according\nto which side the limit is regarded as being approached from.\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{x \\rightarrow a^-} \\text{$\\chi (x, (a, b])$} & = 0\\\\\n \\lim_{x \\rightarrow a^+} \\text{$\\chi (x, (a, b])$} & = 1 - \\theta (a -\n b)\\\\\n \\lim_{x \\rightarrow b^-} \\text{$\\chi (x, (a, b])$} & = \\theta (b - a)\\\\\n \\lim_{x \\rightarrow b^+} \\text{$\\chi (x, (a, b])$} & = 0\n \\end{array}\n\\end{equation}\n\n\\subsubsection{``Harmonic'' Intervals}\n\nLet the $n$-th harmonic (left-open, right-closed) interval be defined as\n\\begin{equation}\n \\begin{array}{ll}\n I^H_n & = \\left( \\frac{1}{n + 1}, \\frac{1}{n} \\right]\n \\end{array} \\label{hi}\n\\end{equation}\nthen its characteristic function is\n\\begin{equation}\n \\begin{array}{lll}\n \\chi (x, I_n^H) & & = \\theta \\left( x - \\frac{1}{n + 1} \\right) -\n \\theta \\left( x - \\frac{1}{n} \\right)\\\\\n & & = \\theta \\left( \\frac{x (n + 1) - 1}{n + 1} \\right) - \\theta \\left(\n \\frac{x n - 1}{n} \\right)\\\\\n & & = \\left\\{ \\begin{array}{ll}\n 1 & \\frac{1}{n + 1} < x \\leqslant \\frac{1}{n}\\\\\n 0 & \\tmop{otherwise}\n \\end{array} \\right.\n \\end{array} \\label{hii}\n\\end{equation}\nAs can be seen\n\\begin{equation}\n \\begin{array}{llll}\n \\bigcup_{n = 1}^{\\infty} I_n^H & = & \\bigcup_{n = 1}^{\\infty} \\left(\n \\frac{1}{n + 1}, \\frac{1}{n} \\right] & = (0, 1]\\\\\n \\sum_{n = 1}^{\\infty} \\chi (x, I_n^H) & = & \\sum_{n = 1}^{\\infty}\n \\chi_{} 
\\left( x, \\left( \\frac{1}{n + 1}, \\frac{1}{n} \\right] \\right) & =\n \\chi (x, (0, 1])\n \\end{array}\n\\end{equation}\nThe substitution $n \\rightarrow \\left\\lfloor \\frac{1}{x} \\right\\rfloor$ can be\nmade in (\\ref{hii}) where it is seen that\n\\begin{equation}\n \\begin{array}{lll}\n \\text{$\\chi_{} \\left( x, I^H_{\\left\\lfloor x^{- 1} \\right\\rfloor}\n \\right)$} & = \\theta \\left( \\frac{x \\text{$\\left\\lfloor x^{- 1}\n \\right\\rfloor$} + x - 1}{\\text{$\\left\\lfloor x^{- 1} \\right\\rfloor$}\n \\text{} + 1} \\right) - \\theta \\left( \\frac{x \\text{$\\left\\lfloor x^{- 1}\n \\right\\rfloor$} - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor} \\right) = 1 &\n \\forall x \\in [- 1, + 1] \\label{hieye}\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Laplace Transform $L_a^b [f (x) ; x \\rightarrow s]$}\n\nThe Laplace transform {\\cite[1.5]{tlt}} is defined as\n\\begin{equation}\n \\begin{array}{ll}\n \\text{} \\text{$L_a^b [f (x) ; x \\rightarrow s]$} & = \\int_a^b f (x) e^{- x\n s} \\mathrm{d} x\n \\end{array} \\label{laplace}\n\\end{equation}\nwhere the unilateral Laplace transform is over the interval ($a, b) = (0,\n\\infty$) and the bilateral transform is over ($a, b) = (- \\infty, \\infty$).\nWhen ($a, b$) is not specified, it is assumed to range over the support of $f\n(x$) if the support is an interval. If the support of $f (x$) is not an\ninterval then ($a, b$) must be specified. 
Applying $L$ to the interval\nindicator function (\\ref{ii}) gives\n\\begin{equation}\n \\begin{array}{lll}\n \\text{$L_a^b [\\chi (x, (a, b]) ; x \\rightarrow s]$} & = \\int_a^b \\chi (x,\n (a, b]) e^{- x s} \\mathrm{d} x & \\\\\n & = \\int_a^b (\\theta (x - a) - \\theta (x - b) \\theta (x - a)) e^{- x s}\n \\mathrm{d} x & \\\\\n & = \\frac{e^{b s} - e^{a s}}{s e^{b s} e^{a s}} & \\\\\n & = \\frac{e^{- a s} - e^{- b s}}{s} & \n \\end{array} \\label{iilt}\n\\end{equation}\nThe limit at the singular point $s = 0$ is\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{s \\rightarrow 0} \\text{$L_a^b [\\chi (x, (a, b]) ; x \\rightarrow s]$}\n & = \\lim_{s \\rightarrow 0} \\frac{e^{- a s} - e^{- b s}}{s}\\\\\n & = b - a\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Mellin Transform $M_a^b [f (x) ; x \\rightarrow s]$}\n\nThe Mellin transform\n{\\cite[3.2]{ambi}}{\\cite[II.10.8]{mmp}}{\\cite[3.6]{piagm}} is defined as\n\\begin{equation}\n \\begin{array}{ll}\n \\text{$M_a^b [f (x) ; x \\rightarrow s]$} & = \\int_a^b f (x) x^{s - 1}\n \\mathrm{d} x\n \\end{array} \\label{mellin}\n\\end{equation}\nwhere the standard Mellin transform is over the interval $(a, b) = (0,\n\\infty)$. Again, as with the notation for the Laplace transform, the integral is\nover the support of $f (x)$ if the support is an interval and $(a, b)$ is not\nspecified, otherwise $(a, b)$ must be specified. 
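The closed form of the indicator's Laplace transform and its $s \to 0$ limit can be checked numerically (an illustrative sketch only; the quadrature routine and parameter values are ad hoc):

```python
import math

# L[chi(x,(a,b]); s] = (e^{-a s} - e^{-b s}) / s, with limit b - a at s = 0
def laplace_indicator(a, b, s):
    if s == 0:
        return b - a  # the singular-point limit
    return (math.exp(-a * s) - math.exp(-b * s)) / s

def quad_laplace(a, b, s, n=50000):
    # midpoint-rule approximation of int_a^b e^{-x s} dx
    h = (b - a) / n
    return sum(math.exp(-(a + (i + 0.5) * h) * s) * h for i in range(n))

a, b = 0.5, 2.0
for s in (0.0, 0.7, 1.3):
    assert abs(laplace_indicator(a, b, s) - quad_laplace(a, b, s)) < 1e-6
```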
Applying $M$ to the interval\nindicator function (\\ref{ii}) gives\n\\begin{equation}\n \\begin{array}{lll}\n & M_a^b [\\chi (x, (a, b]) ; x \\rightarrow s] & = \\int_a^b \\chi (x, (a,\n b]) x^{s - 1} \\mathrm{d} x\\\\\n & & = \\int_a^b (\\theta (x - a) - \\theta (x - b) \\theta (x - a)) x^{s -\n 1} \\mathrm{d} x\\\\\n & & = \\frac{b^s - a^s}{s}\n \\end{array} \\label{iimt}\n\\end{equation}\nThe limit at the singular point $s = 0$ is\n\\begin{equation}\n \\begin{array}{ll}\n M_a^b \\left[ \\chi (x, (a, b]) ; x \\rightarrow 0 \\right] & = \\lim_{s \\rightarrow 0} M_a^b [\\chi (x, (a, b]) ; x \\rightarrow s]\\\\\n & = \\lim_{s \\rightarrow 0} \\frac{b^s - a^s}{s}\\\\\n & = \\ln (b) - \\ln (a)\n \\end{array}\n\\end{equation}\nThe Mellin transform has several identities {\\cite[3.1.2]{ambi}}, including\nbut not limited to\n\\begin{equation}\n \\begin{array}{ll}\n M [f (\\alpha x) ; x \\rightarrow s] & = \\alpha^{- s} M [f (x) ; x\n \\rightarrow s]\\\\\n M [x^{\\alpha} f (x) ; x \\rightarrow s] & = M [f (x) ; x \\rightarrow s +\n \\alpha]\\\\\n M [f (x^{\\alpha}) ; x \\rightarrow s] & = \\frac{1}{\\alpha} M \\left[ f (x) ; x\n \\rightarrow \\frac{s}{\\alpha} \\right]\\\\\n M [f (x^{- \\alpha}) ; x \\rightarrow s] & = \\frac{1}{\\alpha} M \\left[ f (x) ; x\n \\rightarrow - \\frac{s}{\\alpha} \\right]\\\\\n M [x^{\\alpha} f (x^{\\mu}) ; x \\rightarrow s] & = \\frac{1}{\\mu} M \\left[ f\n (x) ; x \\rightarrow \\frac{s + \\alpha}{\\mu} \\right]\\\\\n M [x^{\\alpha} f (x^{- \\mu}) ; x \\rightarrow s] & = \\frac{1}{\\mu} M \\left[\n f (x) ; x \\rightarrow - \\frac{s + \\alpha}{\\mu} \\right]\\\\\n M [\\ln (x)^n f (x) ; x \\rightarrow s] & = \\frac{\\mathrm{d}^n}{\\mathrm{d} s^n} M\n \\left[ f (x) ; x \\rightarrow s \\right]\n \\end{array}\n\\end{equation}\nwhere $\\alpha > 0, \\mu > 0 $, and $n \\in \\mathbbm{N} $. 
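The transform of the indicator and the scaling identity $M[f(\alpha x); x \rightarrow s] = \alpha^{-s} M[f(x); x \rightarrow s]$ can likewise be probed numerically (a sketch with ad hoc parameter values; note that scaling the indicator's argument by $\alpha$ rescales the interval to $(a/\alpha, b/\alpha)$):

```python
import math

# M[chi(x,(a,b)); s] = (b^s - a^s) / s, with the s -> 0 limit ln(b) - ln(a)
def mellin_indicator(a, b, s):
    if s == 0:
        return math.log(b) - math.log(a)  # the singular-point limit
    return (b ** s - a ** s) / s

def quad_mellin(a, b, s, n=50000):
    # midpoint-rule approximation of int_a^b x^{s-1} dx
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) ** (s - 1) * h for i in range(n))

a, b = 0.5, 3.0
for s in (0.7, 1.0, 2.3):
    assert abs(mellin_indicator(a, b, s) - quad_mellin(a, b, s)) < 1e-6
# the scaling identity, specialized to the indicator
alpha, s = 2.0, 1.3
assert abs(mellin_indicator(a / alpha, b / alpha, s)
           - alpha ** (-s) * mellin_indicator(a, b, s)) < 1e-12
```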
The Mellin transform\nof the harmonic interval indicator function (\\ref{hii}) is\n\\begin{equation}\n \\begin{array}{lll}\n M \\left[ \\chi \\left( x, I^H_n \\right) ; x \\rightarrow s \\right] & & =\n \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\chi \\left( x, \\left( \\frac{1}{n +\n 1}, \\frac{1}{n} \\right] \\right) x^{s - 1} \\mathrm{d} x\\\\\n & & = \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\left( \\theta \\left( x -\n \\frac{1}{n + 1} \\right) - \\theta \\left( x - \\frac{1}{n} \\right) \\right)\n x^{s - 1} \\mathrm{d} x\\\\\n & & = \\frac{n^{- s} - (n + 1)^{- s}}{s}\n \\end{array} \\label{himt}\n\\end{equation}\nwhich has the limit\n\\begin{equation}\n \\begin{array}{ll}\n M \\left[ \\chi \\left( x, I^H_n \\right) ; x \\rightarrow 0 \\right] & =\n \\lim_{s \\rightarrow 0} M \\left[ \\chi \\left( x, I^H_n \\right) ; x\n \\rightarrow s \\right]\\\\\n & = \\lim_{s \\rightarrow 0} \\frac{n^{- s} - (n + 1)^{- s}}{s}\\\\\n & = \\ln (n + 1) - \\ln (n)\n \\end{array} \\label{himtl}\n\\end{equation}\nThe Mellin and bilateral Laplace transforms are related by the change of\nvariables $x \\rightarrow - \\ln (y)$ resulting in the identity\n{\\cite[3.1.1]{ambi}}\n\\begin{equation}\n \\begin{array}{lll}\n M_0^{\\infty} [f (- \\ln (x)) ; x \\rightarrow s] & = L_{- \\infty}^{+ \\infty}\n [f (y) ; y \\rightarrow s] & \\\\\n \\int_0^{\\infty} f (- \\ln (x)) x^{s - 1} \\mathrm{d} x & = \\int_{- \\infty}^{+\n \\infty} f (y) e^{- y s} \\mathrm{d} y & \n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Lambert W Function $W (k, x)$}\n\nThe Lambert W function {\\cite{lambertw}}{\\cite{lambertwss}} is the inverse of\n$x e^x$ given by\n\\begin{equation}\n \\begin{array}{ll}\n W (z) & =\\{x : x e^x = z\\}\\\\\n & = W (0, z)\\\\\n & = 1 + (\\ln (z) - 1) \\exp \\left( \\frac{i}{2 \\pi} \\int_0^{\\infty}\n \\frac{1}{x + 1} \\ln \\left( \\frac{x - i \\pi - \\ln (x) + \\ln (z)}{x + i \\pi\n - \\ln (x) + \\ln (z)} \\right) \\hspace{-0.25em} \\mathrm{d} x \\right)\\\\\n & = \\sum_{k = 
1}^{\\infty} \\frac{(- k)^{k - 1} z^k}{k!}\n \\end{array} \\label{linv}\n\\end{equation}\nwhere $W (a, z) \\forall a \\in \\mathbbm{Z}, z \\not\\in \\{0, - e^{- 1} \\}$ is\n\\begin{equation}\n \\begin{array}{ll}\n W (a, z) & = 1 + (2 i \\pi a + \\ln (z) - 1) \\exp \\left( \\frac{i}{2 \\pi}\n \\int_0^{\\infty} \\frac{1}{x + 1} \\ln \\left( \\frac{x + \\left( 2\n \\hspace{0.25em} a - 1 \\right) i \\pi - \\ln \\left( x \\right) + \\ln \\left( z\n \\right)}{x + \\left( 2 \\hspace{0.25em} a + 1 \\right) i \\pi - \\ln \\left( x\n \\right) + \\ln \\left( z \\right)} \\right) \\hspace{-0.25em} \\mathrm{d} x \\right)\n \\end{array} \\label{lambertw}\n\\end{equation}\nA generalization of (\\ref{linv}) is solved by\n\\begin{equation}\n \\begin{array}{ll}\n \\{x : x b^x = z\\} & = \\frac{W (\\ln (b) z)}{\\ln (b)}\n \\end{array}\n\\end{equation}\nThe W function satisfies several identities\n\\begin{equation}\n \\begin{array}{lll}\n W (z) e^{W (z)} & = z & \\\\\n W (z \\ln (z)) & = \\ln (z) & \\forall z \\geqslant e^{- 1}\\\\\n |W (z) | & = W (|z|) & \\\\\n e^{n W (z)} & = z^n W (z)^{- n} & \\\\\n \\ln (W (n, z)) & = \\ln (z) - W (n, z) + 2 i \\pi n & \\\\\n W \\left( - \\frac{\\ln (z)}{z} \\right) & = - \\ln (z) & \\forall z \\in (0,\n e]\\\\\n \\frac{W (- \\ln (z))}{- \\ln (z)} & = z^{z^{z^{z^{.^{.^.}}}}} & \n \\end{array}\n\\end{equation}\nwhere $n \\in \\mathbbm{Z}$. 
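These identities are easy to probe numerically. The sketch below uses a minimal Newton iteration for the principal real branch (an illustration only, not a production routine; it assumes real $z \geqslant -e^{-1}$):

```python
import math

# Solve w * exp(w) = z by Newton's method on the principal real branch.
def lambert_w(z, tol=1e-14):
    w = math.log(1.0 + z)  # starting guess; z > -1 on this branch
    for _ in range(200):
        f = w * math.exp(w) - z
        w -= f / (math.exp(w) * (w + 1.0))
        if abs(f) < tol:
            break
    return w

omega = lambert_w(1.0)
assert abs(omega * math.exp(omega) - 1.0) < 1e-12  # defining identity W(z) e^{W(z)} = z
assert abs(lambert_w(math.e) - 1.0) < 1e-12        # special value W(e) = 1
assert abs(lambert_w(0.0)) < 1e-12                 # special value W(0) = 0
z = 0.5                                            # here ln(z) lies in [-1, 0]
assert abs(lambert_w(z * math.log(z)) - math.log(z)) < 1e-10  # W(z ln z) = ln z
```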
Some special values are\n\\begin{equation}\n \\begin{array}{lll}\n W \\left( - 1, - e^{- 1} \\right) & = - 1 & \\\\\n W (- e^{- 1}) & = - 1 & \\\\\n W (e) & = 1 & \\\\\n W (0) & = 0 & \\\\\n W (\\infty) & = \\infty & \\\\\n W (- \\infty) & = \\infty + i \\pi & \\\\\n W \\left( - \\frac{\\pi}{2} \\right) & = \\frac{i \\pi}{2} & \\\\\n W \\left( - \\ln ( \\sqrt{2}) \\right) & = - \\ln (2) & \\\\\n W \\left( - 1, - \\ln ( \\sqrt{2}) \\right) & = - 2 \\ln (2) & \n \\end{array}\n\\end{equation}\nWe also have the limit\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{a \\rightarrow \\pm \\infty} \\frac{W (a, x)}{a} & = 2 \\pi i\n \\end{array}\n\\end{equation}\nand differential\n\\begin{equation}\n \\begin{array}{lll}\n \\frac{\\mathrm{d}}{\\mathrm{d} z} W (a, f (z)) & & = \\frac{W (a, f (z))\n \\frac{\\mathrm{d}}{\\mathrm{d} z} f (z)}{f (z) (1 + W (a, f (z)))}\n \\end{array}\n\\end{equation}\nas well as the obvious integral\n\\begin{equation}\n \\begin{array}{ll}\n \\int_0^1 W \\left( - \\frac{\\ln (x)}{x} \\right) \\mathrm{d} x & = \\int_0^1 - \\ln\n (x) \\mathrm{d} x = 1\n \\end{array}\n\\end{equation}\nLet us define, for the sake of brevity, the function\n\n\\begin{equation}\n \\begin{array}{ll}\n W_{\\ln} (z) & = W \\left( - 1, - \\frac{\\ln (z)}{z} \\right)\\\\\n & = 1 + \\left( \\ln \\left( - \\frac{\\ln (z)}{z} \\right) - 1 - 2 \\pi i\n \\right) \\exp \\left( \\frac{i}{2 \\pi} \\int_0^{\\infty} \\frac{1}{x + 1} \\ln\n \\left( \\frac{x - 3 i \\pi - \\ln \\left( x \\right) + \\ln \\left( - \\frac{\\ln\n (z)}{z} \\right)}{x - i \\pi - \\ln \\left( x \\right) + \\ln \\left( - \\frac{\\ln\n (z)}{z} \\right)} \\right) \\hspace{-0.25em} \\mathrm{d} x \\right)\n \\end{array}\n\\end{equation}\n\n\\begin{figure}[h]\n \\resizebox{15cm}{12cm}{\\includegraphics{lw1.eps}}\n \\caption{$W \\left( - \\frac{\\ln (x)}{x} \\right) = - \\ln (x)$ and $W_{\\ln} (x)\n = W \\left( - 1, - \\frac{\\ln (x)}{x} \\right)$}\n\\end{figure}\n\nThen we have the limits\n\\begin{equation}\n 
\\begin{array}{ll}\n \\lim_{x \\rightarrow - \\infty} W_{\\ln} (x) & = 0\\\\\n \\lim_{x \\rightarrow + \\infty} W_{\\ln} (x) & = - \\infty\n \\end{array}\n\\end{equation}\nand\n\\begin{equation}\n \\tmop{Im} \\left( W_{\\ln} (x) \\right) = \\left\\{ \\begin{array}{ll}\n - \\pi & - \\infty < x < 0\\\\\n \\ldots & 0 \\leqslant x \\leqslant 1\\\\\n 0 & 1 < x < \\infty\n \\end{array} \\right.\n\\end{equation}\n\\begin{equation}\n \\begin{array}{lll}\n W_{\\ln} (x) & = - \\ln (x) & \\forall x \\not\\in [0, e]\n \\end{array}\n\\end{equation}\nThe root of $\\tmop{Re} \\left( W_{\\ln} (x) \\right)$ is given by\n\\begin{equation}\n \\begin{array}{ll}\n \\left\\{ x : \\tmop{Re} \\left( W_{\\ln} (x) \\right) = 0 \\right\\} & =\\{- x :\n \\left( x^2 \\right)^{\\frac{1}{x}} = e^{3 \\pi} \\}\\\\\n & = \\frac{2}{3 \\pi} W \\left( \\frac{3}{2} \\pi \\right)\\\\\n & \\cong 0.27441063190284810044 \\ldots\n \\end{array} \\label{wlrr}\n\\end{equation}\nwhere the imaginary part of the value at the root of the real part of $W_{\\ln}\n(z)$ is\n\\begin{equation}\n \\begin{array}{ll}\n W_{\\ln} \\left( \\frac{2}{3 \\pi} W \\left( \\frac{3}{2} \\pi \\right) \\right) &\n = W \\left( - 1, - \\frac{\\ln \\left( \\frac{2}{3 \\pi} W \\left( \\frac{3}{2}\n \\pi \\right) \\right)}{\\frac{2}{3 \\pi} W \\left( \\frac{3}{2} \\pi \\right)}\n \\right)\\\\\n & = W \\left( - 1, \\frac{3 \\pi}{2} \\right)\\\\\n & = \\frac{3 \\pi i}{2}\\\\\n & \\cong i 4.712388980384689857 \\ldots\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Lerch Transcendent $\\Phi (z, a, v)$}\n\nThe Lerch Transcendent {\\cite[1.11]{htf1}} is defined by\n\\begin{equation}\n \\begin{array}{lll}\n \\Phi (z, a, v) & = \\sum_{n = 0}^{\\infty} \\frac{z^n}{(v + n)^a} & \\forall\n \\{|z| < 1\\} \\tmop{or} \\{|z| = 1 \\tmop{and} \\tmop{Re} (a) > 1\\}\n \\label{lerch}\n \\end{array}\n\\end{equation}\nThe Riemann zeta function is the special case\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta (s) & = \\Phi (1, s, 1) = \\sum_{n = 
0}^{\\infty} \\frac{1}{(1 + n)^s}\n \\end{array}\n\\end{equation}\n\n\\subsection{Applications of w(x)}\n\n\\subsubsection{Expansion of $\\gamma$}\n\nConsider Euler's constant $\\gamma = 0.577215664901533 \\ldots$ (\\ref{gamma})\n\\begin{equation}\n \\begin{array}{ll}\n w^n (\\gamma) & = a_n - b_n \\gamma\n \\end{array}\n\\end{equation}\nwhere upon iteration we see that\n\\begin{equation}\n \\begin{array}{ll}\n \\left(\\begin{array}{c}\n n\\\\\n - a_n\\\\\n - b_n\n \\end{array}\\right) & \\text{$= \\left(\\begin{array}{cccccccccccc}\n 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \\ldots\\\\\n 0 & 1 & 48 & 290 & 581 & 1163 & 2327 & 13964 & 7492468716 & 14984937433\n & 1078915495184 & \\ldots\\\\\n 1 & 2 & 84 & 504 & 1008 & 2016 & 4032 & 24192 & 12980362752 &\n 25960725504 & 1869172236288 & \\ldots\n \\end{array}\\right)$}\n \\end{array}\n\\end{equation}\n\n\\subsection{Conventions and Symbols}\n\nMany of these symbols are from {\\cite[p491]{fractalzetastrings}}.\n\\begin{equation}\n \\begin{array}{ll}\n i & \\sqrt{- 1}\\\\\n \\mathbbm{R} & \\{x : - \\infty < x < \\infty\\}\\\\\n \\bar{\\mathbbm{R}} & \\{x : - \\infty \\leqslant x \\leqslant \\infty\\}\\\\\n \\mathbbm{R}^+ & \\{x : 0 \\leqslant x < \\infty\\}\\\\\n \\mathbbm{R}^d & \\{x_1 \\ldots x_d : - \\infty < x_i < \\infty\\}\\\\\n \\mathbbm{C} & \\{x + i y : x, y \\in \\mathbbm{R}\\}\\\\\n \\mathbbm{Z} & \\{\\ldots, - 2, - 1, 0, 1, 2, \\ldots\\}\\\\\n \\mathbbm{N} & \\{0, 1, 2, 3, \\ldots\\}\\\\\n \\mathbbm{N}^{\\ast} & \\{1, 2, 3, \\ldots\\}\\\\\n \\text{$\\mathbbm{H}$} & \\left\\{ 0, \\frac{1}{n} : n \\in \\mathbbm{N}^{\\ast}\n \\right\\}\\\\\n f (x) = O (g (x)) & \\limsup_{x \\rightarrow \\infty} \\frac{f (x)}{g (x)} < \\infty\\\\\n f (x) = o (g (x)) & \\lim_{x \\rightarrow \\infty} \\frac{f (x)}{g (x)} = 0\\\\\n f (x) \\asymp g (x) & \\left\\{ a \\leqslant \\frac{f (x)}{g (x)} \\leqslant b :\n \\{a, b\\}> 0 \\right\\}\\\\\n \\#A & \\tmop{number} \\tmop{of} \\tmop{elements} \\tmop{in} \\tmop{the}\n \\tmop{finite} \\tmop{set} A\\\\\n |A |_d & d 
\\text{-dimensional} \\tmop{Lebesgue} \\tmop{measure}\n (\\tmop{volume}) \\tmop{of} A \\subseteq \\mathbbm{R}^d\\\\\n \\text{$d (x, A)$} & \\min \\{|x - y| : y \\in A\\}, \\tmop{the} \\tmop{Euclidean}\n \\tmop{distance} \\tmop{between} x \\tmop{and} \\tmop{the} \\tmop{nearest}\n \\tmop{point} \\tmop{of} A\\\\\n \\exp (x) & \\tmop{exponential} e^x = \\sum_{n = 0}^{\\infty} \\frac{x^n}{n!}\\\\\n \\underset{}{\\underset{x = y}{\\tmop{Res}} (f (x))} & \\tmop{complex}\n \\text{residue of $f (x)$ at $x = y$}\\\\\n \\left\\lfloor x \\right\\rfloor & \\tmop{floor}, \\tmop{the} \\tmop{greatest}\n \\tmop{integer} \\leqslant x\\\\\n \\{x\\} & x - \\left\\lfloor x \\right\\rfloor, \\tmop{the} \\tmop{fractional}\n \\tmop{part} \\tmop{of} x\\\\\n \\text{$\\bar{x}$} & \\tmop{complex} \\tmop{conjugate}, \\tmop{Re} (x) - i\n \\tmop{Im} (x)\\\\\n \\tmop{Fix}_f^n & \\text{$n$-th} \\tmop{fixed} \\tmop{point} \\tmop{of}\n \\tmop{the} \\tmop{map} f (x), \\text{$n$-th} \\tmop{solution} \\tmop{to} f (x)\n = x\\\\\n p_k & \\text{$k$-th} \\tmop{prime} \\tmop{number}\\\\\n \\ln_b (a) & \\frac{\\ln (a)}{\\ln (b)}\n \\end{array} \\label{notation}\n\\end{equation}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
The\npractice of pooling biological samples though is not a new\nphenomenon, as it can be traced back at least to 1940s\n\\citep{Dorfman1943} and has been used in different application\nareas \\citep{Gastwirth2000}, e.g., for the detection of certain\nmedical conditions and estimation of prevalence in a population.\nIn the context of detecting differential gene expressions using\nmicroarrays, divergent views on the wisdom of pooling samples can\nbe found in the literature\n\\citep{Agrawal-etal-JNatCancerInst2002,Affymetrix2004,\nShih-etal-Bioinformatics2004, Churchill&Oliver-NatureGenetics2001,\nPeng-etal-BMCBioinformatics2003,Jolly-etal-PhysiolGenomics2005}.\nOne of the arguments supporting the practice of pooling biological\nsamples is that biological variation can be reduced by pooling RNA\nsamples in microarray\nexperiments\\citep{Churchill&Oliver-NatureGenetics2001}. As more\ncarefully described by Kendziorski {\\em et al}\n\\citep{Kendziorski-etal-PNAS2005}, pooling can reduce the effects\nof biological variation, but not the biological variation itself.\nAnother argument in support of pooling samples in microarray\nexperiments is that it reduces financial cost. However, cost\nreduction is meaningful only if statistical equivalence between\nthe pooled and the non-pooled experimental setups is maintained.\nHere we address this issue and present formulas to determine the\nconditions under which pooled and non-pooled designs are\nstatistically equivalent.\n\nTo compare experimental designs with and without sample pooling\nthe two designs must have something in common that can be\nmeasured, e.g., using the same or equivalent amount of resources,\nor, yielding the same level of detection power. 
Kendziorski {\\em\net al}\\citep{Kendziorski-etal-Biostatistics2003} used the width of\nthe 95\\% confidence interval for gene expression to compare\ndifferent experimental designs with and without sample pooling.\nThe criterion was that the narrower the confidence interval, the\nmore accurate the results from the experimental design. In a\ncomparative study where two groups of biological subjects are\ncompared, the common goal of the different experimental designs is\nto detect a change between the two groups with a given power at a\ngiven false positive rate, as adopted in\n\\citep{Shih-etal-Bioinformatics2004}. We shall use the latter\nmethod to compare different designs. So in this work statistical\nequivalence means that the designs have the same statistical power\nat the same level of significance. Therefore the more appropriate\nexperimental design will be the one which uses fewer resources to\nachieve this statistical equivalence.\n\nThe basic assumption underlying sample pooling is biological\naveraging; that the measure of interest taken on the pool of\nsamples is equal to the average of the same measure taken on each\nof the individual samples which contributed to the pool. For\nexample, in the situation of a microarray experiment, if $r$\nindividual samples contribute equally to a pool, and the\nconcentrations of a gene's mRNA transcripts for the $r$ samples\nare denoted by $T_i$ with $i=1, 2, \\cdots, r$ indexing the\nindividual samples, the assumption of biological averaging says\nthat the concentration of this gene's mRNA transcripts in the pool\nis $T=1\/r\\sum_{i=1}^rT_i$. However, for microarray experiments\nthere is some debate on whether the basic assumption of pooling\nholds. Kendziorski {\\em et al}\n\\citep{Kendziorski-etal-Biostatistics2003,Kendziorski-etal-PNAS2005}\nargue that there is limited support for this assumption. 
Here we\ndo not seek to enter into this debate but rather take the\nassumption of biological averaging as valid, or at least\napproximately so, so that we are in a position to determine\nwhether pooling samples is financially beneficial or not. The\nvalidity of biological averaging makes it possible (or easier) to\nderive a neat theoretical formulation. On a practical level,\nthough, the requirement for the validity of this assumption may\nnot be as stringent as in a theoretical formulation. For\ninstance, in \\citep{Kendziorski-etal-PNAS2005} it was shown that\neven when biological averaging does not hold, pooling can be\nuseful and inferences regarding differential gene expression are\nnot adversely affected by pooling.\n\n\nOne situation where there is little alternative but to pool\nbiological samples is where there is an insufficient amount of RNA\nfrom each individual biological subject to perform a single\nmicroarray hybridization. RNA amplification may be a possible way\nof obtaining more RNA, but may not be practically feasible when\nmany individual biological subjects are involved as in the case of\n\\citep{Jin-etal2001}. In such a circumstance, pooling samples is\njustified by the lack of alternative and will not be considered\nfurther here. Similarly we will not consider here the case where\nall the biological samples of the same group were pooled together,\nand multiple technical replicate measurements were carried out on\nthe sample pool. 
This is sometimes seen in the literature\n\\citep{Muckenthaler-etal2003}, but such an experimental design\nleaves no degree of freedom to estimate the biological variance.\nThus valid inferences about the differences between the two\npopulations of biological subjects under study cannot be made.\nHere we only consider situations other than the above two and\nwhere pooling may reduce the overall costs of the experiments.\n\n\n\\section{A general formalism}\n\\label{sec-formalism}\n\nFor every comparative study, there is at least one measurable\nquantity which is the quantity of interest. The goal of the study\nis to deduce from the data collected if there is any difference\nbetween the means of the two populations. As measuring all the\nbiological subjects in two populations is rarely possible in most\nsituations representatives from a population are randomly selected\nand measurements made on these. These are then taken to infer the\nproperties of the population.\n\nLet $X$ be the measurable quantity that is being determined in the\nthe experiment, e.g., the expression level of a gene. In the case\nof one-channel microarray, $X$ could denote the logarithm (most\ncommonly base 2 is used) of fluorescence intensity; or the\nlogarithm of the fluorescence ratio in the case of two-channel\nmicroarray. Let $x^c_i$ denote the value of $X$ for an individual\nsubject $i$ in the control population (c), and $x^t_j$ that of the\nindividual subject $j$ in the treatment population (t). We assume\nthat $x^c_i$s for all individuals in the control population are\nindependent normally distributed with a mean $\\mu_c$ and a\nvariance $\\sigma _c^2$, denoted by $x^c_i \\sim N(\\mu_c,\\sigma\n_c^2)$ for all $i$. Similarly, $x^t_j \\sim N(\\mu_t,\\sigma _t^2)$\nfor all $j$.\n\n\n\\subsection{A general experimental setup}\nFor a general experimental setup individual subjects from both\npopulations are randomly selected and tissue samples collected\nfrom each. 
Tissue sample pools are made by pooling a given number\n$r$ of randomly selected tissue samples (of the same population)\ntogether. Note that to make $n$ pools we need to have selected\n$nr$ individual subjects from the population. $m$ measurements are\nthen made on each pool of tissue samples. So $m$ is the number of\ntechnical replications of measurement on each pool. Notice that by\nintroducing two parameters $r$ and $m$ a general and flexible\nexperimental setup has been created. For instance, if we set\n$r=1$, the experiment would be equivalent to no pooling of tissue\nsamples. And if we set $m=1$ there is no technical replication.\nUnder the basic assumption of biological averaging, the result of\npooling $r$ tissue samples in equal proportions together is that\nthe value of $X$ for the pool is the average of those subjects\nwhich formed this pool,\n\\begin{equation}\n\\tilde{x}=\\frac{1}{r}\\sum _{i=1}^{r}x_i.\n\\end{equation}\nIt follows that $\\tilde{x} \\sim N(\\mu_c, \\sigma_c ^2\/r)$ for a\npool from the control population, or $\\tilde{x} \\sim N(\\mu_t,\n\\sigma_t ^2\/r)$ for a pool from the treated population. Note that\nin this paper we shall only discuss pooling samples with equal\nindividual contributions. While pools formed by un-equal\ncontributions from individual samples are possible, such pooled\nexperimental design is generally less effective than the equal\npooling, as already shown by Peng {\\em et al}\n\\citep{Peng-etal-BMCBioinformatics2003} with their simulated\nresults.\n\nWhen we take a measurement on a pool $p$, the measured value is\n\\begin{equation}\ny_{p,k}=\\tilde{x}_p+\\epsilon _k,\n\\end{equation}\nwhere $p$ indexes pools, $k$ indexes measurements, and $\\epsilon\n_k$ is a random error term assumed to be independently and\nnormally distributed as $\\epsilon _k \\sim\nN(0,\\sigma^2_{\\epsilon})$. 
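The measurement model $y_{p,k} = \tilde{x}_p + \epsilon_k$ can be simulated directly (a Monte-Carlo sketch; all parameter values here are arbitrary choices): under biological averaging, the empirical variance of a group mean should match the per-group factor $(\sigma^2/r + \sigma^2_{\epsilon}/m)/n$ derived in the text.

```python
import random
import statistics

# n pools of r subjects each, m technical replicates per pool (values arbitrary)
random.seed(7)
mu, sigma, sigma_eps = 5.0, 2.0, 1.0
r, m, n = 4, 2, 3

def group_mean():
    total = 0.0
    for _ in range(n):
        # biological averaging: the pool's true value is the mean of r subjects
        pool = statistics.fmean(random.gauss(mu, sigma) for _ in range(r))
        # each of the m measurements adds independent technical noise
        total += sum(pool + random.gauss(0.0, sigma_eps) for _ in range(m))
    return total / (m * n)

means = [group_mean() for _ in range(50000)]
theory = (sigma ** 2 / r + sigma_eps ** 2 / m) / n
assert abs(statistics.fmean(means) - mu) < 0.03      # unbiased for mu
assert abs(statistics.variance(means) - theory) < 0.03
```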
Hereafter $\\sigma^2_{\\epsilon}$ will be\nreferred to as the technical variance, $\\sigma^2_c$ the biological\nvariance for the control population, and $\\sigma^2_t$ the\nbiological variance for the treatment population.\n\nThe outputs of the experiment are the measurements on the two sets\nof pools. For the control group, we have $y^c_{p,k}$ for\n$p=1,\\dots, n_{c}$ and $k=1,\\dots, m$; for the treatment\ngroup, we have $y^t_{p,k}$ for $p=1,\\dots, n_{t}$ and $k=1,\\dots,\nm$. Here $n_{c}$ and $n_{t}$ are the numbers of pools prepared for\nthe control and treatment population respectively. Our task is to\ninfer population properties from these measured data. In\nparticular, we want to know whether there is any difference\nbetween the two population means $\\mu_c$ and $\\mu_t$. It can be\nshown that\n\\begin{equation}\n\\overline{Y}^c=\\frac{1}{mn_{c}}\\sum_{p=1}^{n_c}\\sum_{k=1}^my^c_{p,k}\n\\label{eq-Y^c}\n\\end{equation}\nis an unbiased estimator of $\\mu_c$, with a variance\n\\begin{equation}\n\\frac{1}{n_c}\\left(\\frac{\\sigma^2_c}{r}+\\frac{\\sigma^2_{\\epsilon}}{m}\\right),\n\\end{equation}\nand similarly,\n\\begin{equation}\n\\overline{Y}^t=\\frac{1}{mn_{t}}\\sum_{p=1}^{n_t}\\sum_{k=1}^my^t_{p,k}\n\\label{eq-Y^t}\n\\end{equation}\nis an unbiased estimator of $\\mu_t$, with a variance\n\\begin{equation}\n\\frac{1}{n_t}\\left(\\frac{\\sigma^2_t}{r}+\\frac{\\sigma^2_{\\epsilon}}{m}\\right).\n\\end{equation}\nIf we make the additional assumption that the variances for the two\npopulations of biological subjects are the same, i.e.,\n$\\sigma^2_c=\\sigma^2_t=\\sigma^2$, then the difference between\nthe estimators of Eqs.(\\ref{eq-Y^t}) and (\\ref{eq-Y^c}),\n$D=\\overline{Y}^t-\\overline{Y}^c$, is an unbiased estimator of\n$\\mu=\\mu_t-\\mu_c$ with a variance\n\\begin{equation}\n\\sigma^2_D=\\left(\\frac{1}{n_c}+\\frac{1}{n_t}\\right)\n\\left(\\frac{\\sigma^2}{r}+\\frac{\\sigma^2_{\\epsilon}}{m}\\right).\n\\label{eq-var(D)}\n\\end{equation}\nThe factor 
$(\\sigma^2\/r+\\sigma^2_{\\epsilon}\/m)$ in\nEq.(\\ref{eq-var(D)}) can be estimated without bias by\n\n\n\\begin{eqnarray}\ns^2_p=\\frac{1}{n_c+n_t-2}\n\\sum_{p=1}^{n_c}\\left(\\frac{1}{m}\\sum_{k=1}^my^c_{p,k}-\\overline{Y}^c\\right)^2 \\nonumber \\\\\n+\\frac{1}{n_c+n_t-2}\n\\sum_{p=1}^{n_t}\\left(\\frac{1}{m}\\sum_{k=1}^my^t_{p,k}-\\overline{Y}^t\\right)^2.\n\\end{eqnarray}\nIt is then clear that\n\\begin{equation}\nt=\\frac{(\\overline{Y}^t-\\overline{Y}^c)-(\\mu_t-\\mu_c)}{s_p\\sqrt{1\/n_c+1\/n_t}}\n\\end{equation}\nfollows the Student's t distribution with $n_c+n_t-2$ degrees of\nfreedom. In detecting differential gene expression, we want to\ntest the null hypothesis $\\mu_c=\\mu_t$ against an alternative\nhypothesis $\\mu_c\\neq \\mu_t$. So our test statistic is\n\\begin{equation}\nt_0=\\frac{(\\overline{Y}^t-\\overline{Y}^c)}{s_p\\sqrt{1\/n_c+1\/n_t}},\n\\label{eq-t_0}\n\\end{equation}\nand there are no unknowns in Eq.(\\ref{eq-t_0}). Note that $t_0$\ncan be seen as a generalized two-sample-t-test statistic, which\nreduces to the statistic of the traditional two-sample t test with\nequal variance when we set the parameters $r=1$ (no pooling of\ntissue samples) and $m=1$ (no technical replication of\nmeasurements). In Ref. \\citep{Shih-etal-Bioinformatics2004},\n\\citeauthor{Shih-etal-Bioinformatics2004} arrived at two separate\nstatistics, one for the non-pooled design, the other for the pooled\ndesign. The $t_0$ defined by Eq.(\\ref{eq-t_0}) is in a more general\nform: setting $r=1$ and $m=1$ in Eq.(\\ref{eq-t_0}) recovers Shih\n{\\em et al}'s statistic for the non-pooled design, while setting $r>1$\nand $m=1$ recovers their statistic for the pooled design.\nNote that $m$ does not need to equal 1. 
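As a concrete illustration, the statistic of Eq.(\ref{eq-t_0}) can be computed directly from the pool-level measurements. A minimal numpy sketch with made-up data (for $r=1$ and $m=1$ it coincides with the classical equal-variance two-sample t statistic):

```python
import numpy as np

def t0_statistic(yc, yt):
    """Generalized two-sample t statistic of Eq. (eq-t_0).

    yc, yt: arrays of shape (n_pools, m) holding the m technical
    replicate measurements on each pool, for control and treatment.
    """
    yc, yt = np.asarray(yc, float), np.asarray(yt, float)
    nc, nt = len(yc), len(yt)
    pc, pt = yc.mean(axis=1), yt.mean(axis=1)   # per-pool measurement averages
    Yc, Yt = pc.mean(), pt.mean()               # overall group means
    s2p = (((pc - Yc) ** 2).sum() + ((pt - Yt) ** 2).sum()) / (nc + nt - 2)
    return (Yt - Yc) / np.sqrt(s2p * (1.0 / nc + 1.0 / nt))

# Hypothetical data with r = 1 and m = 1 (one measurement per pool),
# where t0 reduces to the classical pooled-variance t statistic.
yc = np.array([[1.2], [0.8], [1.0], [1.1]])
yt = np.array([[1.9], [2.2], [1.7], [2.0]])
print(t0_statistic(yc, yt))
```

The data above are hypothetical; in a real analysis `yc` and `yt` would hold the measured (log) expression values for the two groups of pools.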
Here, by incorporating the two\nadditional parameters $r$ and $m$, the statistic $t_0$ can deal\nwith situations where there are pooled tissue samples and multiple\ntechnical replications.\n\n\\subsection{Criteria of significance}\nAs with any statistical test we need to specify a threshold\np-value $P_{th}$ to claim significant results in the test. When\nall the other parameters are given, setting $P_{th}$ is equivalent\nto setting a threshold, say $|\\xi|$, for the statistic $t_0$\ndefined in Eq.(\\ref{eq-t_0}). With this threshold t-value, our\ncriterion for claiming a significant test is as follows: if\n$t_0>|\\xi|$, we declare that $\\mu_t-\\mu_c>0$; if $t_0<-|\\xi|$, we\ndeclare that $\\mu_t-\\mu_c<0$. So the rate at which false positive\nclaims are made is\n\\begin{eqnarray}\nP_{th}=\\int _{-\\infty}^{-|\\xi|}\\rho _{n_c+n_t-2}(t_0)dt_0\n+\\int_{|\\xi|}^{\\infty}\\rho _{n_c+n_t-2}(t_0)dt_0 \\nonumber \\\\\n=2\\int _{-\\infty}^{-|\\xi|}\\rho _{n_c+n_t-2}(t_0)dt_0\n=2T_{n_c+n_t-2}(-|\\xi|), \\label{eq-P_th}\n\\end{eqnarray}\nwhere $\\rho_{n_c+n_t-2}(.)$ is the probability density function\n(PDF) of the Student's t distribution with $n_c+n_t-2$ degrees of\nfreedom, and $T_{n_c+n_t-2}(.)$ is the corresponding cumulative\nprobability distribution function (CDF). It is therefore apparent\nthat the threshold t-value $|\\xi|$ can be obtained by solving the\nequation $2T_{n_c+n_t-2}(-|\\xi|)=P_{th}$ with a given false\npositive rate $P_{th}$.\n\n\n\n\\section{Power function}\n\\label{sec-power-function} In\n\\citep{Zhang&Gant-Bioinformatics2004} we presented a power\nfunction for a new statistical t test (hereafter referred to as\nthe \"two-labelling t test\") in the context of using two-color\nmicroarrays to detect differential gene expression. 
Following\nsimilar steps we can derive the power function for the generalized\ntwo-sample t test presented in this paper, which reads\n\n\n\\begin{eqnarray}\nS=\\int _0^{\\infty}p_{n_c+n_t-2}(Y)\\Phi\\left[{-|\\xi|\\sqrt{Y}\\over\n\\sqrt{n_c+n_t-2}} +\\frac{|\\mu|}{\\sigma_D} \\right]dY,\n \\label{eq-S}\n\\end{eqnarray}\nwhere $p_{n_c+n_t-2}(Y)$ is the PDF for the $\\chi ^2$ distribution\nwith $n_c+n_t-2$ degrees of freedom, and $\\Phi(.)$ is the CDF for\nthe standard normal distribution. The rate $S$ at which a true\ndifference between $\\mu_t$ and $\\mu_c$ can be successfully\ndetected is a function of $n_c$, $n_t$, $|\\mu|\/\\sigma _D$, and\n$|\\xi|$. With $\\sigma_D$ given by the square root of\nEq.(\\ref{eq-var(D)}), and $|\\xi|$ determined by solving\nEq.(\\ref{eq-P_th}) at a given false positive rate $P_{th}$, $S$\nis, eventually, a function of $P_{th}$, $n_c$, $n_t$, and\n$|\\mu|\/\\sigma _D$.\n\n\nA few points are worth noting here.\n\n1. The two-labelling t test presented in\n\\citep{Zhang&Gant-Bioinformatics2004} was designed to deal with\nsystematic labelling biases generated during microarray\nexperimentation. The t test presented in this paper, however,\nassumes no systematic data biases. In the case of two-color\nmicroarrays this requires a common reference design. In such an\nexperimental design the labelling biases cancel out in\nthe calculation of the test statistic.\n\n2. In \\citep{Zhang&Gant-Bioinformatics2004}, the biological\nvariances of the two populations under comparison do not have to\nbe the same, that is, we did not assume $\\sigma^2_c=\\sigma^2_t$.\nFor the t test in this paper, we have made the additional\nassumption that $\\sigma^2_c=\\sigma^2_t$. Relaxing this requirement\nis possible, as in the case of the traditional two-sample t test\nwith unequal variance \\citep{Brownlee1965}, but an exact power\nfunction could then not be readily obtained.\n\n3. 
The exact power function obtained in this paper allows\nevaluation of the effects of pooling biological samples and the\neffects of taking multiple technical measurements, thus giving\nresearchers quantitative guidance on the practice of pooling\nsamples.\n\n4. By setting the parameters $r=1$ and $m=1$, an exact power\nfunction is provided for the traditional two-sample t test with\nequal variance.\n\n\n\\section{Results}\n\nWe have implemented the computation of the power function $S$ of\nEq.(\\ref{eq-S}) as a Java application, which can be accessed at\nthe URL given in the abstract. Here we apply it to microarray\ncomparative studies for finding differentially expressed genes,\nand investigate the effect of pooling RNA samples in the\nexperiments. We also compare our exact results with some\napproximate results presented by other authors\n\\citep{Shih-etal-Bioinformatics2004} to demonstrate why an exact\nformula is desirable.\n\n\\subsection{Comparison with approximate results}\nBased on their approximate formulas,\n\\citeauthor{Shih-etal-Bioinformatics2004} considered two scenarios\nto compare the number of biological subjects and number of\nmicroarrays in the non-pooled and pooled\ndesigns \\citep{Shih-etal-Bioinformatics2004}. Here we give exact\nresults for the two scenarios to show how they differ from the\napproximate results. In the first scenario, we take the\ncommon biological variance of the two populations to be $\\sigma\n^2=0.05$, and the technical variance $\\sigma_{\\epsilon}^2=0.0125$,\nwhich gives the biological-to-technical variance ratio $\\lambda\n=\\sigma^2\/\\sigma_{\\epsilon}^2=4$. The preset target of the\nexperiment in this scenario is that the false positive rate is\ncontrolled at $P_{th}=0.001$ and the power is no less than\n$S=0.95$ to detect a two-fold differential gene expression, which\ncorresponds to $\\mu=1$ with base 2 logarithm\n\\citep{Shih-etal-Bioinformatics2004}. 
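These numbers can be cross-checked by integrating Eq.(\ref{eq-S}) numerically with only standard-library Python. The sketch below first solves Eq.(\ref{eq-P_th}) for $|\xi|$ by bisection on a numerically integrated Student's t tail, and then evaluates the power of the non-pooled design of the first scenario (parameter values from the text; the integration grids and cutoffs are illustrative choices):

```python
import math

def t_pdf(t, df):
    # Student's t PDF, evaluated in log space for numerical safety
    log_c = math.lgamma((df + 1) / 2) - math.lgamma(df / 2) \
            - 0.5 * math.log(df * math.pi)
    return math.exp(log_c - 0.5 * (df + 1) * math.log1p(t * t / df))

def t_lower_tail(x, df, a=-40.0, n=10_000):
    # T_df(x): trapezoidal integration of the PDF from deep in the tail
    h = (x - a) / n
    s = 0.5 * (t_pdf(a, df) + t_pdf(x, df))
    s += sum(t_pdf(a + i * h, df) for i in range(1, n))
    return s * h

def threshold_xi(p_th, df):
    # Solve 2 T_df(-|xi|) = P_th (Eq. (eq-P_th)) by bisection
    lo, hi = 0.0, 50.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if 2.0 * t_lower_tail(-mid, df) > p_th:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def chi2_pdf(y, df):
    return math.exp((df / 2 - 1) * math.log(y) - y / 2
                    - (df / 2) * math.log(2) - math.lgamma(df / 2))

def power_S(p_th, n_c, n_t, mu_over_sigma_D, y_max=400.0, n=20_000):
    # Numerical integration of Eq. (eq-S)
    df = n_c + n_t - 2
    xi = threshold_xi(p_th, df)
    h = y_max / n
    s = 0.0
    for i in range(1, n):
        y = i * h
        s += chi2_pdf(y, df) * normal_cdf(-xi * math.sqrt(y / df)
                                          + mu_over_sigma_D)
    return s * h

# Scenario 1, non-pooled design (r = m = 1): sigma^2 = 0.05,
# sigma_eps^2 = 0.0125, mu = 1, P_th = 0.001, n_c = n_t = 6
sigma_D = math.sqrt((1 / 6 + 1 / 6) * (0.05 + 0.0125))
print(power_S(0.001, 6, 6, 1.0 / sigma_D))
```

A library quantile routine (e.g. scipy's `t.ppf`) would replace the hand-rolled bisection in practice; the self-contained version is shown only to keep the sketch dependency-free.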
In Table\n\\ref{tab-scenario1}, we present results for different values of the\npooling parameter $r$. It can be seen from the first panel of this Table\nthat in order to hit the preset target, the non-pooled design\n($r=1$) requires at least $12$ biological subjects divided evenly\nbetween the two populations, i.e., $6$ from each of the two\npopulations. Having $7$ subjects from one population and $5$\nsubjects from the other is insufficient to achieve the target of\n$95\\%$ detection power. The effects of other levels of pooling on\nthe detection power are also shown in Table \\ref{tab-scenario1}.\nThe minimum numbers of biological subjects ($N_s$) and microarrays\n($N_m$) that meet the preset targets are highlighted in\nbold. It is clear that as the level of pooling is increased (with\nincreasing $r$), the number of microarrays $N_m$ can be reduced,\nbut the number of biological subjects $N_s$ has to be increased.\nFor example, in order to reduce the number of arrays from $12$\n(Table \\ref{tab-scenario1}, first panel) to $8$ (Table\n\\ref{tab-scenario1}, fourth panel), the number of biological\nsubjects to form the pools must be increased from $12$ to $40$.\n\n\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=8.5cm,height=6.0cm,angle=0]{Senario2-Scurve.eps}}\n\\caption{The power $S$ as a function of the total number of pools\n$n_c+n_t$. The parameters used are those of the second scenario: $\\sigma\n^2=0.2$, $\\sigma_{\\epsilon}^2=0.05$, $\\lambda\n=\\sigma^2\/\\sigma_{\\epsilon}^2=4$, $P_{th}=0.001$, $\\mu=1$, and\n$m=1$. The five solid curves correspond to different levels of\npooling, from right to left, $r=1$, $r=2$, $r=4$, $r=6$, and\n$r=15$ respectively. The dashed line indicates the $95\\%$ power,\nthe intersections of which with the power curves specify the total\nnumbers of pools (assuming $n_c=n_t$) needed to achieve the target\npower. 
The total number of biological subjects and the total\nnumber of arrays can then be calculated simply by\n$N_s=r(n_c+n_t)$, and $N_m=m(n_c+n_t)$ respectively.}\n\\label{fig-Scurve}\n\\end{figure}\n\n\n\n\nFor the second scenario we consider the case $\\sigma ^2=0.2$,\n$\\sigma_{\\epsilon}^2=0.05$, which gives $\\lambda\n=\\sigma^2\/\\sigma_{\\epsilon}^2=4$. Again the preset targets are to\ndetect a true differential expression $\\mu=1$ with no less than\n$95\\%$ power while the false positive rate is set at\n$P_{th}=0.001$. Using these parameters, the power $S$ as a\nfunction of $n_c+n_t$ is plotted in Fig.\\ref{fig-Scurve} for\ndifferent levels of sample-pooling. For the non-pooled design\n($r=1$), $N_s=30$ total biological subjects and $N_m=30$ arrays\nare required to hit the preset targets. As in the first\nscenario, when the level of pooling is increased, the number of\narrays $N_m$ is reduced while the number of subjects is increased\nto meet the preset targets.\n\n\nIn Table \\ref{tab-exact-vs-approx}, we summarize our exact results\nand the approximate results of\n\\citep{Shih-etal-Bioinformatics2004}. It can be seen that the\ndifference between the two can be very large, indicating the need\nfor exact results. For example, in the first scenario when\n$N_m=8$ the approximate result of\n\\citep{Shih-etal-Bioinformatics2004} predicts that a minimum of\n$21$ biological subjects are required. In practice $24$ subjects\nare required, as $24$ is the minimum number larger than $21$ and\ndivisible by $8$. However, this experimental setup ($24$ subjects\nforming $8$ pools, $8$ microarrays) will only give a detection\npower of $90\\%$. To meet the target power of $95\\%$, $40$\nbiological subjects are actually required by our exact result. 
If\nan experiment with $N_m=7$ microarrays is planned,\n\\citeauthor{Shih-etal-Bioinformatics2004} predict that $37$\nsubjects are required \\citep{Shih-etal-Bioinformatics2004}, but in\nfact $126$ subjects must be used to achieve the target. Generally,\nthe approximate formulas of \\citep{Shih-etal-Bioinformatics2004}\nare too optimistic in assessing the benefits of pooling samples\nand reducing the number of microarrays, because they underestimate\nthe number of biological subjects required.\n\n\\subsection{Cost analysis}\nDepending on the material costs involved in the biological\nsubjects and microarrays, the conditions under which pooling samples\nbecomes beneficial may differ from lab to lab. Here we show by\nexample how to determine these conditions. Denoting the cost\nassociated with each biological subject as $C_s$ (including\nmaterials and labor, etc.) and the cost associated with a microarray\nas $C_m$, the total cost of an experiment in a microarray\ncomparative study is $C_T=N_sC_s+N_mC_m$. Taking the first\nscenario as an example, the total cost of a non-pooled design to\nachieve our preset targets is\n$$C_T(r=1)=12C_s+12C_m,$$\nand the total cost for the pooled design with $r=2$ is\n$$C_T(r=2)=20C_s+10C_m.$$\nTherefore, in order for the pooled design with $r=2$ to be beneficial\nwe must have\n\n\\begin{equation}\nC_T(r=2) \\le C_T(r=1),\n\\end{equation}\nwhich requires that $C_m \\ge 4C_s$. Put another way, only when the\ncost associated with one microarray $C_m$ is more than $4$ times\nthe cost of a subject $C_s$ does the pooled design with $r=2$\nbecome preferable to the non-pooled design. Similarly, a higher\nlevel of pooling with $r=3$ becomes preferable to $r=2$ only when\n$C_m \\ge 7C_s$. Furthermore, the condition for increasing the\nlevel of pooling from $r=3$ to $r=5$ is $C_m \\ge 13C_s$, and so\non. 
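The cost comparison just described is straightforward to script. A minimal sketch for the $r=1$ versus $r=2$ comparison (the design sizes are the ones quoted in the text for the first scenario):

```python
# Minimum design sizes meeting the scenario-1 target, from the text:
# r = 1 needs N_s = 12 subjects and N_m = 12 arrays;
# r = 2 needs N_s = 20 subjects and N_m = 10 arrays.
designs = {1: (12, 12), 2: (20, 10)}   # r -> (N_s, N_m)

def total_cost(r, C_s, C_m):
    N_s, N_m = designs[r]
    return N_s * C_s + N_m * C_m

# Break-even: 20 C_s + 10 C_m <= 12 C_s + 12 C_m  <=>  C_m >= 4 C_s
for ratio in (3.9, 4.0, 4.1):
    print(ratio, total_cost(2, 1.0, ratio) <= total_cost(1, 1.0, ratio))
```

Only when the array-to-subject cost ratio reaches 4 does the $r=2$ design become at least as cheap, in line with the condition $C_m \ge 4C_s$ above.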
Table \\ref{tab-exact-vs-approx} gives these conditions for\nfurther levels of pooling.\n\nFor the first scenario, using the actual cost figures given in\n\\citep{Shih-etal-Bioinformatics2004}, where $C_s=\\$230$ and\n$C_m=\\$300$, it can be seen that none of the pooling conditions is\nmet. Therefore, for this laboratory, pooling samples is not\nrecommended. However, if we use the cost figures of\n\\citep{Kendziorski-etal-Biostatistics2003}, where $C_s=\\$50$ and\n$C_m=\\$700$, the optimal design is a pooled design with $r=5$.\n\n\nThe second scenario tells a similar story. The cost figures\nof Ref. \\citep{Shih-etal-Bioinformatics2004} ($C_s=\\$230$ and\n$C_m=\\$300$) give $C_m=1.30C_s$, which does not satisfy any of\nthe pooling conditions. So again the non-pooled design with\n$N_m=30$ and $N_s=30$ is recommended. On the other hand, the cost\nfigures of \\citep{Kendziorski-etal-Biostatistics2003} ($C_s=\\$50$\nand $C_m=\\$700$) give $C_m=14C_s$, which satisfies all the pooling\nconditions in the lower panel of Table \\ref{tab-exact-vs-approx}\nexcept the last row. So in Kendziorski et al's lab, the pooled\ndesign with $N_m=14$ and $N_s=84$ would be recommended.\n\n\n\\section{Discussion} \\label{sec-discussion} In this paper we have\npresented exact formulas for calculating the power of microarray\nexperimental designs with different levels of pooling. These\nformulas can be used to determine the conditions of statistical\nequivalence between different pooling setups. As in\n\\citep{Kendziorski-etal-Biostatistics2003} and\n\\citep{Shih-etal-Bioinformatics2004}, the calculations presented\nin this paper are for an individual gene, so the statistical\nequivalence for different designs of pooling can be determined\nwith regard to one particular gene. 
However, microarrays monitor\nthousands of genes simultaneously, and the biological and\ntechnical variances vary from gene to gene; therefore no single\nresult of statistical equivalence between pooled and non-pooled\ndesigns applies equally to all genes on the array. So in practice\nhow would the formulations in this work be used? One possible way,\nas suggested by Kendziorski {\\em et al}.\n\\citep{Kendziorski-etal-Biostatistics2003}, is to specify the\ndistributions of $\\sigma ^2$ and $\\sigma^2_{\\epsilon}$ and calculate\nthe total number of subjects and arrays that maximize the average\npower across the array. In theory, if the biological variances\nand technical variances were known for all genes on the array, an\nequivalence condition between pooled and non-pooled designs could\nbe determined for each gene individually. The overall (or say,\naverage) equivalence condition between pooled and non-pooled\ndesigns could be obtained, for example, by some form of averaging\noperation over all genes. An alternative and probably more\npractical way is to use representative values of $\\sigma ^2$ and\n$\\sigma^2_{\\epsilon}$. We therefore propose that parameters for a\n\"typical gene\" be used as inputs for the power and sample size\ncalculations. A typical gene is a gene whose biological and\ntechnical variances take the most probable values among the genes,\ni.e., the modes of the distributions of the biological and technical\nvariances across genes. Alternatively, the median or mean variances\nacross genes could be used as representative\nvalues \\citep{Shih-etal-Bioinformatics2004}.\n\n\nAn issue associated with microarray experiments is the problem of\nmultiple inferences, where a separate null hypothesis is being\ntested for each gene. Given thousands of null hypotheses being\ntested simultaneously, the customary significance level $\\alpha\n=0.05$ for declaring positive tests will surely give too many\nfalse positives. 
For example, if among a total number $N=10000$ of\ngenes being tested, $N_0=4000$ are truly null genes (genes that\nare non-differentially expressed between the two classes), the\nexpected number of false positive results would be $4000\\times\n0.05=200$, which may be too many to be acceptable. Thus a smaller\nthreshold p-value for declaring differentially expressed genes\nshould be used. Effectively controlling false positives in a\nmultiple testing situation such as microarray experiments is an\narea which has drawn much attention in recent years due to the\nwider application of microarray technology. As discussed in our\nprevious work \\citep{Zhang&Gant-Bioinformatics2004}, generally\nspeaking, all the different multiple-testing adjustment methods\neventually amount to effectively setting a threshold p-value, and\nthen rejecting all the null hypotheses with p-values below this\nthreshold. The classical Bonferroni multiple-testing procedure,\nwhich controls the family-wise error rate at $\\alpha$ by setting the\nthreshold $P_{th}=\\alpha \/N$, is generally regarded as being too\nconservative in the microarray context. The FDR (False Discovery\nRate) idea, initially introduced in \\citep{Benjamini&Hochberg1995}\nto deal with the multiple testing problem, has now been widely\naccepted as appropriate to the microarray situation. Recently,\nEfron \\citep{Efron2004} extended the FDR idea by defining fdr, a\nlocal version of FDR (the local false discovery rate). When planning\nmicroarray experiments in terms of power and sample size\ncalculation, the FDR of \\citep{Benjamini&Hochberg1995} is more\nappropriate and convenient to use. There are now in the literature\na few slightly different variants of the definition of FDR\n\\citep{Benjamini&Hochberg1995,Storey&Tibshrirani-pnas2003,Grant-etal-Bioinformatics2005},\nbut in essence it is defined as the proportion of false positives\namong all positive tests declared. 
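For orientation, the expected number of false positives in the example above, and the proportion of false positives among all declared positives when the test has a given power, can be computed directly (a small sketch; the power value $S=0.95$ used here is an assumed input, not a result from the text):

```python
def false_discovery_rate(p_th, power, pi0):
    # Proportion of false positives among all declared positives:
    # expected false positives / (expected false + expected true positives)
    fp = p_th * pi0
    tp = power * (1.0 - pi0)
    return fp / (fp + tp)

# Example from the text: N = 10000 genes tested, N0 = 4000 true nulls,
# customary significance level alpha = 0.05
N, N0, alpha = 10_000, 4_000, 0.05
print(N0 * alpha)                                  # expected false positives
print(false_discovery_rate(alpha, 0.95, N0 / N))   # with an assumed 95% power
```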
To provide an interface between\nFDR and the formulation in the previous sections, here we show\nthat there is a simple correspondence between controlling FDR and\nspecifying the traditional type I error rate and power. Suppose\nthat there are a total number $N$ of genes being monitored by\nmicroarray, so there will be $N$ hypotheses being tested, one for\neach gene. Suppose that a fraction $\\pi_0$ of the $N$ genes are\ntrue null genes, i.e., genes that are non-differentially expressed\nbetween the two classes. Given the type I error rate $P_{th}$, the\nexpected number of false positive tests is $P_{th}N\\pi_0$; given\nthe power $S$, the expected number of non-null genes (truly\ndifferentially expressed genes) that are declared positive is\n$SN(1-\\pi_0)$. So the FDR achieved by this setting is\n\\begin{equation}\n\\mbox{FDR}=\\frac{P_{th}N\\pi_0}{P_{th}N\\pi_0+SN(1-\\pi_0)}\n=\\frac{P_{th}\\pi_0}{P_{th}\\pi_0+S(1-\\pi_0)}. \\label{eq-fdr}\n\\end{equation}\nHere $\\pi_0$ is an important parameter in controlling FDR, for\nwhich several different estimation methods have been\nproposed\n\\citep{Pounds&Morris2003,Storey&Tibshrirani-pnas2003,Zhang&Gant-Bioinformatics2004}.\nIn particular, the method we presented in\n\\citep{Zhang&Gant-Bioinformatics2004} is an accurate yet\ncomputationally much simpler algorithm than the one proposed by\nStorey and Tibshirani in \\citep{Storey&Tibshrirani-pnas2003}.\nWith the interface Eq.(\\ref{eq-fdr}), FDR can be readily presented\nand incorporated into the calculations.\n\n\n\n\\vspace{0.2cm}\\noindent {\\Large\\bf Acknowledgments}\n\n\\noindent We wish to acknowledge the support of the microarray\nteam of the MRC Toxicology Unit, particularly Reginald Davies,\nJinLi Luo and Joan Riley. 
We also thank two anonymous reviewers\nfor their helpful and constructive comments.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMode-coupling theory (MCT) is considered by many to be the most\ncomprehensive first-principles approach to the dynamics of supercooled\nliquids~\\cite{Gotze}. Nevertheless, its status is rather problematic\nfrom a fundamental point of view, as the physical nature of the glass\nstate and the microscopic interpretation of structural arrest are not\nyet fully elucidated. This is all the more so when we look at the\nhigher-order glass singularities in structured and complex liquids.\nIn this Rapid Communication, I show that multiple glassy states and\nglass-glass transition in MCT can be understood in terms of a\ngeneralisation of the notion of dynamic facilitation~\\cite{FrAn,RiSo}\nand bootstrap percolation~\\cite{ChLeRe,BP_rev}. The latter is known to\nemerge in a variety of contexts including jamming of granular\nmaterials~\\cite{Liu}, NP-hard combinatorial optimization\nproblems~\\cite{MoZe}, neural and immune networks~\\cite{Tlusty,\n Stauffer}, and evolutionary modeling~\\cite{Klimek}.\n\nThe formal structure of glass singularities predicted by MCT is\nencoded in the self-consistent equation\n\\begin{equation}\n\\Phi = (1-\\Phi) \\ {\\mathsf M} (\\Phi),\n\\label{eq.Phi}\n\\end{equation}\nwhere $\\Phi$ is the asymptotic value of the correlator, and ${\\mathsf\n M}$ is the memory kernel describing the retarded friction effect\ncaused by particle caging, a physical feature associated with the de\nGennes narrowing. 
We shall be concerned in the following with\none-component schematic models in which the wavevector dependence of\n$\\Phi$ is disregarded and ${\\mathsf M}$ is a low-order polynomial.\nEquation~(\\ref{eq.Phi})--derived by taking the long-time limit of the\nintegro-differential equation describing the evolution of the\ncorrelator of particle density fluctuations--generates a hierarchy of\ntopologically stable glass singularities, which can be classified in\nterms of bifurcations exhibited by the roots of the real polynomial\n\\begin{equation}\n{\\mathcal Q}(\\Phi) = \\Phi - (1-\\Phi) \\ {\\mathsf M} (\\Phi) .\n\\end{equation}\nFollowing Arnol'd notation, adopted in~\\cite{Gotze}, an ${\\mathsf\n A}_{\\ell}$ glass singularity occurs when the corresponding maximum\nroot of ${\\mathcal Q}$ has a degeneracy $\\ell$, $\\ell \\ge 2$, and is\ndefined by\n\\begin{equation}\n\\frac{d^n {\\mathcal Q}}{d\\Phi^n} = 0 \\,, \\qquad n=0, \\cdots, \\ell-1,\n\\label{eq.dQ}\n\\end{equation}\nwhile the $\\ell$th derivative is nonzero. The polynomial ${\\mathcal\n Q}$ always has the trivial root $\\Phi=0$, corresponding to a liquid\nergodic state, whereas nonzero values of $\\Phi$ correspond to a system\nthat is unable to fully relax and hence can be identified with a glass\nnonergodic state.\n\n\nFor two-parameter systems there are two basic singularities, ${\\mathsf\n A}_2$ and ${\\mathsf A}_3$, also known as {\\em fold} and {\\em cusp}\nbifurcations. They have been extensively studied by using memory\nkernels given by a superposition of linear and nonlinear terms. In\nthe ${\\mathsf F}_{12}$ schematic model the memory kernel is ${\\mathsf\n M}(\\Phi) = v_1 \\Phi + v_2 \\Phi^2$ while the ${\\mathsf F}_{13}$ model\nis defined by ${\\mathsf M}(\\Phi) = v_1 \\Phi + v_3 \\Phi^3$. 
The\ncompetition between the two terms produces a variety of nonergodic\nbehaviors: the linear term gives rise to a continuous liquid-glass\ntransition at which $\\Phi \\sim \\epsilon$, where $\\epsilon$ is the\ndistance from the critical point (e.g., $\\epsilon=T-T_{\\scriptstyle \\rm c}$), while the\nnonlinear term induces a discontinuous liquid-glass transition, with\nthe well-known square-root anomaly $\\Phi-\\Phi_{\\scriptstyle \\rm c} \\sim\n\\epsilon^{1\/2}$. In the ${\\mathsf F}_{12}$ scenario the discontinuous\nline smoothly joins the continuous one at a tricritical point. In the\n${\\mathsf F}_{13}$ scenario, the discontinuous transition line\nterminates at an ${\\mathsf A}_3$ singularity inside the glass phase\ngenerated by the continuous liquid-glass transition, therefore\ninducing a glass-glass transition (see Fig.~1 for a representative\nphase diagram). The scaling form of the order parameter near the\n${\\mathsf A}_3$ endpoint is $\\Phi-\\Phi_{\\scriptstyle \\rm c} \\sim \\epsilon^{1\/3}$, and more\ngenerally $\\Phi-\\Phi_{\\scriptstyle \\rm c} \\sim \\epsilon^{1\/\\ell}$ for an ${\\mathsf\n A}_{\\ell}$ singularity, as implied by the Taylor expansion of\n${\\mathcal Q}$ near the critical surface and Eqs.~(\\ref{eq.dQ}). Thus\none can observe a rich variety of nonergodic behaviors whose\ncomplexity is comparable to that of multicritical points in phase\nequilibria~\\cite{Gilmore}. 
It is a nontrivial result that only\nbifurcation singularities of type ${\\mathsf A}_{\\ell}$ can occur in\nMCT~\\cite{Gotze}.\n\n\nThe ${\\mathsf F}_{12}$ and ${\\mathsf F}_{13}$ scenarios were first\nintroduced with the mere intention of demonstrating the existence of\nhigher-order singularities and glass-glass transition, and were\nsubsequently observed in a number of experiments and numerical\nsimulations of realistic model\nsystems~\\cite{Dawson,Pham,Eckert,Chong,Krako,Kurz,Kim,Sperl,Voigtmann}.\nIt is important to emphasize that the parameters $v_i$ entering the\nmemory kernel are smooth functions of the thermodynamic variables,\ne.g., temperature and density; therefore the nature of the nonergodic\nbehaviors predicted by MCT is purely dynamic. This is rather puzzling\nfrom the statistical mechanics perspective of critical phenomena, where\ndiverging relaxation time-scales are closely tied to thermodynamic\nsingularities. It has been argued that this unusual situation stems\nfrom uncontrolled approximations. For example, the intimate\nconnection of some spin-glass models with MCT has brought to the fore\nthe existence of a genuine {\\em thermodynamic} glass phase at a\nKauzmann temperature $T_{\\scriptscriptstyle \\rm K}$ below the putative dynamic glass transition\npredicted by MCT~\\cite{KiWo,KiTh}. A non-trivial Gibbs measure,\ninduced by replica-symmetry breaking, would therefore actually be\nresponsible for the observed glassy behavior~\\cite{MePa}. 
For this\nreason, the nature of MCT has been much debated since its first\nappearance and several approaches have been attempted to clarify its\nstatus~\\cite{KiWo,KiTh,MePa,BoCuKuMe,Dave,Andrea,Ikeda,Schmid,Szamel,Silvio}.\nI will show here that the idea of dynamic facilitation~\\cite{RiSo},\nfirst introduced by Fredrickson and Andersen~\\cite{FrAn}, offers some\nclues in this direction, as its relation with bootstrap percolation\nprovides a transparent microscopic mechanism of structural\narrest~\\cite{ChLeRe,Branco}. In the dynamic facilitation approach the\ncoarse-grained structure of a supercooled liquid is represented by an\nassembly of higher\/lower density mesoscopic \\emph{cells}. In the\nsimplest version a binary spin variable, $s_i=\\pm 1$, is assigned to\nevery cell $i$ depending on its solid or liquid like structure and no\nenergetic interaction among cells is assumed, ${\\mathcal H} = -h\n\\sum_i s_i$. The crucial assumption is that the supercooled liquid\ndynamics is essentially dominated by the cage effect: fluctuations in\na cell's structure occur if and only if there is a certain number,\nsay $f$, of nearby liquid-like cells. $f$ is called the facilitation\nparameter and can take values in the range $0 \\le f \\le z$, where $z$\nis the lattice coordination: cooperative facilitation imposes $f \\ge\n2$, while non-cooperative dynamics only requires $f=1$. This very\nschematic representation of the cage effect gives rise to a large\nvariety of remarkable glassy behaviors, and it has long been noticed\nthat they are surprisingly similar to those found in the dynamics of\nmean-field disordered systems~\\cite{KuPeSe,Se,SeBiTo}. It has\nrecently been observed that, in a special case, an exact mapping between\nfacilitated and disordered models with $T_{\\scriptscriptstyle \\rm K}=0$\nexists~\\cite{FoKrZa}. 
Since such models are so utterly different in\ntheir premises, it is by no means obvious that such a correspondence\nis not accidental and can be extended to systems with higher-order\nglass singularities. To clarify this issue, I will consider a\ngeneralization of the facilitation approach~\\cite{SeDeCaAr} in which\nevery cell $i$ is allowed to have its own facilitation parameter $f_i$\n(or, equivalently, an inhomogeneous local lattice connectivity).\nPhysically, this situation may arise from the coexistence of different\nlengthscales in the system, e.g., mixtures of more or less mobile\nmolecules or polymers of small and large size (or from a\ngeometrically disordered environment, e.g., a porous matrix). In such\nfacilitated spin mixtures the facilitation strength can be tuned\nsmoothly and is generally described by the probability distribution\n\\begin{eqnarray}\n \\pi(f_i) & = & \\sum_{\\zeta=0}^z \\ w_{\\zeta} \\ \\delta_{f_i,\\zeta} ,\n\\label{eq.distf.general}\n\\end{eqnarray}\nwhere the weights $\\{w_{\\zeta}\\}$ controlling the facilitation strength\nsatisfy the conditions\n\\begin{eqnarray}\n\\sum_{\\zeta=0}^z w_{\\zeta} = 1 ,\\qquad 0 \\le w_{\\zeta} \\le 1.\n\\end{eqnarray}\nBy tuning the weights one can thus explore a variety of different\nsituations. Generally, one observes that when the fraction of spins\nwith facilitation $f=z-1,z$ is larger than that with $2 \\le f \\le\nz-2$, the glass transition is continuous, while in the opposite case it\nis discontinuous. One advantage of the facilitation approach is that\nwhen the lattice topology has a local tree-like structure, one can\ncompute exactly some key quantities, such as the critical temperature\nand the arrested part of the correlation and its scaling properties near\ncriticality. This can be done by exploiting the analogy with bootstrap\npercolation. 
Let $p$ be the density of up spins in thermal\nequilibrium,\n\\begin{eqnarray}\n p &=& \\frac{1}{ 1 + {\\rm e}^{-h\/k_{\\scriptscriptstyle \\rm B} T} },\n\\end{eqnarray}\nfor a generic spin mixture on a Bethe lattice with branching ratio\n$k=z-1$. As usual, one arranges the lattice as a tree with $k$ branches\ngoing up from each node and one going down, and then proceeds\ndownwards. In analogy with the heterogeneous bootstrap percolation\nproblem, the probability $B$ that a cell is in, or can be brought into, the\nliquid-like state by only rearranging the state of $k$ cells above\nit~\\cite{ChLeRe,Branco,SeBiTo,SeDeCaAr}, can be cast in the form\n\\begin{eqnarray}\n 1-B &=& B \\ p \\ \\left\\langle \\sum_{i=k-f+1}^k {k \\choose i}\n B^{k-i-1} (1-B)^i \\right\\rangle_f,\n\\label{eq.B}\n\\end{eqnarray}\nwhere $\\left\\langle \\cdots \\right\\rangle_f$ represents the average\nover the probability distribution Eq.~(\\ref{eq.distf.general}). The\nright-hand side of Eq.~(\\ref{eq.B}) is a polynomial in $1-B$, and\nhence the formal structure of Eq.~(\\ref{eq.B}) is similar to that of\nschematic MCT (once $1-B$ is formally identified with $\\Phi$).\nSingularities can therefore be classified according to the criteria\nalready mentioned in the introduction. Nevertheless, it should be\nnoted that what would be the analog of the MCT kernel in\nEq.~(\\ref{eq.B}) can also have negative coefficients (besides\ncontaining an extra term of the form $(1-B)^k\/B$), while the\npolynomial coefficients of the MCT memory kernel are restricted to\nnon-negative ones. In fact, the sets of critical states which specify\nsome ${\\mathsf A}_{\\ell}$ glass-transition singularity are not\nidentical to those describing the full bifurcation scenario of real\npolynomials of degree $\\ell$, because the coefficients of the\nadmissible polynomials ${\\mathcal Q}$ form only a subset of all real\ncoefficients. 
This observation means that the correspondence between\nMCT and the heterogeneous facilitation approach is not an identity,\nbut this still leaves enough room for building up models with MCT\nfeatures, although some ingenuity may be required. It has already been\nshown, for example, that the ${\\mathsf F}_{12}$ scenario is faithfully\nreproduced in this framework~\\cite{SeDeCaAr,ArSe}. To substantiate the\nabove observation, I will now focus on the next higher-order glass\nsingularity, which is the ${\\mathsf F}_{13}$ scenario.\n\\begin{figure}\n\\includegraphics[width=8.5cm]{phasediagram}\n\\caption{Phase diagram for a Bethe lattice with $z=5$ and facilitation\n as in Eq.~(\\ref{eq.distf}). The liquid-glass 1 transition is\n continuous while the liquid-glass 2 and glass 1-glass 2 transitions are\n discontinuous. The light dashed line is the unstable branch of the\n phase diagram and shows the cuspoid structure of the terminal\n endpoint.}\n\\label{fig.diag}\n\\end{figure}\nLet us consider, for simplicity, a binary mixture on a Bethe lattice with\n$z=5$ and\n\\begin{equation}\n\\pi(f_i) = (1-q) \\delta_{f_i,2} + q \\delta_{f_i,4}.\n\\label{eq.distf}\n\\end{equation}\nFor such a mixture, denoted here as (2,4), the probability $B$ obeys\nthe fixed-point equation:\n\\begin{equation}\n 1-B = p \\left[ q (1-B^4) + (1-q) (1-B)^3 (1+3B)\\right].\n\\label{eq.P245}\n\\end{equation}\nThis equation is always satisfied by $1-B=0$, while an additional\nsolution with $1-B>0$ is found by solving\n\\begin{equation}\n p^{-1} = 1+B-5B^2+3B^3 + 2 q B^2 (3-B).\n\\label{eq.P245_1}\n\\end{equation}\nA continuous glass transition is obtained by setting $B=1$ in the\nprevious equation, giving $p_{\\scriptstyle \\rm c} = 1\/(4q)$. Using the relation between $T$\nand $p$ (and setting $h\/k_{\\scriptscriptstyle \\rm B}=1$), one gets $T_{\\scriptstyle \\rm c}(q) = -1\/\\ln(4q-1)$,\nimplying that the continuous transition exists in the range $1\/2 \\ge q\n\\ge 1\/4$. 
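As a numerical cross-check of the continuous transition at $p_{\\scriptstyle \\rm c}=1\/(4q)$, the fixed point of Eq.~(\\ref{eq.B}) can be computed by direct iteration. The sketch below is illustrative only (the function name and structure are mine, not from the paper); it iterates the recursion written for $x=1-B$, starting from $x=1$ so that it converges to the largest root:

```python
# Illustrative sketch (not from the paper): solve the bootstrap
# fixed-point equation, Eq. (eq.B), by iteration for x = 1 - B.
from math import comb

def solve_bootstrap(p, k, weights, tol=1e-12, max_iter=200_000):
    """Largest fixed point of  x = p * < P[Binomial(k, x) >= k - f + 1] >_f.

    p       : equilibrium density of up spins
    k       : branching ratio, k = z - 1
    weights : dict {facilitation f: weight w_f}, i.e. the distribution pi(f)
    Starting from x = 1, the iteration converges to the largest root,
    which is the arrested solution whenever one exists.
    """
    x = 1.0
    for _ in range(max_iter):
        x_new = p * sum(
            w * sum(comb(k, i) * x**i * (1.0 - x)**(k - i)
                    for i in range(k - f + 1, k + 1))
            for f, w in weights.items())
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For the pure $f=4$ case on the $z=5$ lattice (i.e. $q=1$), the arrested solution disappears below $p=1\/4$, consistent with $p_{\\scriptstyle \\rm c}=1\/(4q)$, and for the (2,4) mixture the iteration reproduces the nontrivial solutions of Eq.~(\\ref{eq.P245}).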
The discontinuous transition instead occurs when\nEq.~(\\ref{eq.P245_1}) is satisfied and its first derivative with\nrespect to $B$ vanishes. The latter condition implies\n\\begin{equation}\n q = \\frac{ (9B-1)(1-B)}{6B(2-B)} ,\n\\label{eq.P245_2}\n\\end{equation}\nand naturally leads to the square-root scaling near the discontinuous\ntransition line. Thus the discontinuous transition can be graphically\nrepresented by expressing Eqs.~(\\ref{eq.P245_1}) and (\\ref{eq.P245_2})\nin parametric form in terms of $B$. The phase diagram in the plane\n$(T,q)$ is shown in Fig.~\\ref{fig.diag}.\nIt exhibits two crossing glass transition lines, of continuous and\ndiscontinuous nature, corresponding to degenerate and generic\n${\\mathsf A}_2$ singularities. The discontinuous branch extends into\nthe glass region below the continuous line up to a terminal endpoint\nwhich corresponds to an ${\\mathsf A}_3$ singularity. The location of\nthe endpoint is found by simultaneously solving the equation\n\\begin{equation}\n B = \\frac{5-6q}{9-6q} ,\n\\label{eq.P245_3}\n\\end{equation}\n(which is obtained by setting the second derivative of\nEq.~(\\ref{eq.P245_1}) to zero), along with Eqs.~(\\ref{eq.P245_1}) and\n(\\ref{eq.P245_2}). The discontinuous branch located between the\ncrossing point and the endpoint corresponds to a transition between\ntwo distinct glass states, called here glass 1 and glass 2. They are\nrespectively characterized by a fractal and compact structure of the\nspanning cluster of frozen spins. The passage from one glass to\nthe other can take place either discontinuously or without meeting any\nsingularity, i.e.~by circling around the endpoint (much as in the\nliquid-gas transition). 
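The endpoint location can also be checked numerically. The following sketch (again purely illustrative; the helper names are mine) evaluates $q$ along the discontinuous line via Eq.~(\\ref{eq.P245_2}), rearranges Eq.~(\\ref{eq.P245_3}) as $q=(5-9B)\/(6(1-B))$, and intersects the two conditions by bisection:

```python
# Illustrative sketch: locate the A3 endpoint of the (2,4) mixture (z = 5)
# by intersecting Eq. (eq.P245_2) with Eq. (eq.P245_3).
from math import log

def q_discontinuous(B):
    # Eq. (eq.P245_2): vanishing first derivative of Eq. (eq.P245_1)
    return (9*B - 1) * (1 - B) / (6*B * (2 - B))

def q_endpoint(B):
    # Eq. (eq.P245_3), B = (5-6q)/(9-6q), rearranged for q
    return (5 - 9*B) / (6 * (1 - B))

def inverse_p(B, q):
    # Eq. (eq.P245_1): 1/p on the nontrivial branch
    return 1 + B - 5*B**2 + 3*B**3 + 2*q*B**2*(3 - B)

def temperature(p):
    # invert p = 1/(1 + exp(-1/T)) with h/k_B = 1
    return 1.0 / log(p / (1.0 - p))

def find_endpoint(lo=0.2, hi=0.5, tol=1e-12):
    """Bisection on q_discontinuous(B) - q_endpoint(B) over (lo, hi)."""
    g = lambda B: q_discontinuous(B) - q_endpoint(B)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    B = 0.5 * (lo + hi)
    q = q_endpoint(B)
    p = 1.0 / inverse_p(B, q)
    return B, q, temperature(p)
```

This yields $B\simeq0.390$, $q\simeq0.406$ and $T\simeq0.490$ for the endpoint, consistent with the cuspoid structure visible in Fig.~\\ref{fig.diag}.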
The existence of two\ntransitions in bootstrap percolation was first discovered by Fontes\nand Schonmann~\\cite{FoSc} in homogeneous trees and then found in\nErd\\H{o}s-R\\'enyi graphs and complex networks in~\\cite{Porto,Cellai}.\nHowever, its relation with the glass-glass transition and MCT went\nunnoticed. In fact, the correspondence between Eqs.~(\\ref{eq.Phi}) and\n(\\ref{eq.B}) naturally suggests the existence of further singularities\nin bootstrap percolation and cooperative facilitated models.\n\n\\begin{figure}\n\\includegraphics[width=8.5cm]{orderparameter}\n\\caption{ Fraction of permanently frozen spins, $\\Phi$, vs temperature\n ratio, $T\/T_{\\scriptstyle \\rm c}(q)$, with values of $q$ larger than the crossing\n point.}\n\\label{fig.Phi}\n\\end{figure}\n\nFig.~\\ref{fig.Phi} reports the behavior of the fraction of frozen\nspins, which is the analog of the nonergodicity parameter in the\nfacilitation approach, when the temperature crosses the liquid-glass\ncontinuous transition and the glass-glass transition. This quantity\ncan be exactly computed from $B$~\\cite{SeDeCaAr,ArSe}, and its\nexpression is not reported here; we only note that its general\nfeatures, and in particular the scaling properties near the critical\nstates, are similar to those of $B$. We observe that the fraction of\nfrozen spins first increases smoothly at the liquid-glass continuous\ntransition and then suddenly jumps at the glass-glass transition. The\njump decreases when $q$ approaches the endpoint and eventually\ndisappears. At this special point, the additional condition that the\nsecond derivative of Eq.~(\\ref{eq.P245_1}) with respect to $B$\nvanishes implies a cube-root scaling near the endpoint. 
These scaling\nfeatures are exactly those expected from the ${\\mathsf F}_{13}$\nscenario, and we obtain similar results for the mixture (3,5) on a\nBethe lattice with $z=6$.\n\nTo summarise, a close relationship exists between the structure\nof glass singularities in MCT and that of heterogeneous bootstrap\npercolation. This allows the construction of microscopic realizations\nof MCT scenarios based on the heterogeneous cooperative facilitation\napproach and provides further insights into the degree of universality\nof MCT. The role of the linear and nonlinear terms in the MCT memory\nkernel is played in facilitated spin mixtures by the fraction of spins\nwith facilitation $f=k, k+1$ and $k-1 \\ge f\\ge 2$, respectively. Their\ncompetition generates continuous and discontinuous liquid-glass\ntransitions, while the order of the singularity is primarily controlled by\nthe lattice connectivity. This leads to multiple glassy states,\nglass-glass transitions and more complex glassy behaviors. In this\nframework, the mechanism of structural arrest can be geometrically\ninterpreted in terms of the formation of a spanning cluster of frozen\nspins having fractal or compact structure depending on the continuous\nor discontinuous nature of the glass transition. 
Finally, from the\nrelation between MCT and mean-field disordered\nsystems~\\cite{KiWo,KiTh} it follows that quenched disorder and\ncooperative facilitation are two complementary, rather than\nalternative, descriptions of glassy matter, and this contributes to\nthe long-sought unifying approach to glass physics.\n\n\n\\bibliographystyle{apsrev}\n\n\\section{Introduction}\n\\label{sec:intro}\n\nSmartphones, tablets, and similar smart mobile devices have been greatly successful as personal communication and computing devices owing to their always-on connectivity, mobility, usability, and operating systems that enable a vast market of applications tailored for the needs of mobile users. Smart devices are undeniably taking over the mobile device market, and this trend is not expected to slow down in the near future. Table \\ref{tab:deviceShipments} gives the number of mobile devices shipped worldwide in 2012 based on data from Canalys~\\cite{bib:canalysMobileMarket13}. The data shows that smartphones and tablets represented $42\\%$ of all mobile devices shipped in 2012, and their combined market share is expected to grow to $66\\%$ by 2016. The significance of these numbers is highlighted by the fact that among all consumer devices, only the television has had such a fast market penetration rate in the USA~\\cite{bib:degustaTech12}. Furthermore, while smartphones represented only $18\\%$ of total global mobile phones in use in 2012, they were responsible for $92\\%$ of total global mobile phone traffic~\\cite{bib:ciscoMobileForecast13}. 
With both the device market share and the amount of data traffic due to smart devices expected to continue to grow significantly, their importance is evident in the current and future mobile landscape.\n\n\\setlength{\\tabcolsep}{8pt}\n\n\\begin{table}[tbp]\n\n\t\\caption{The number of mobile devices shipped worldwide in 2012 and forecast for 2016, in millions of units, according to Canalys (Feb. 2013) \\cite{bib:canalysMobileMarket13}.}\n\t\\label{tab:deviceShipments}\n\t\\centering\n\t\\begin{tabular}{r l l l}\n\t\t\\toprule\n\t\tDevice type\t\t& 2012 shipments & 2016 shipments & 2012--2016 CAGR\\\\\n\t\t\\midrule\n\t\tBasic phone\t\t& $122.0$ & $58.0$ & $-17.0\\%$\\\\\n\t\tFeature phone\t& $770.8$ & $660.9$ & $-3.8\\%$\\\\\n\t\tSmartphone\t\t& $694.8$ & $1,342.5$ & $17.9\\%$\\\\\n\t\tTablet\t\t\t& $114.6$ & $383.5$ & $35.3\\%$\\\\\n\t\tNotebook\t\t\t& $215.7$ & $169.1$ & $-5.9\\%$\\\\\n\t\tNetbook\t\t\t& $18.3$ & $0.3$ & $-64.2\\%$\\\\\n\t\t\\midrule\n\t\tTotal \t\t\t& $1,936.2$ & $2,614.2$ & $7.8\\%$\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\\begin{figure*}[tbp]\n\t\\centering\n\t\\begin{subfigure}[b]{0.45\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth, trim = 0cm -1.6cm 0cm 0cm]{.\/figures\/malwareNumbers.eps}\n\t\t\\caption{The number of detected Android malware in 2012. 
The data shows a rapid increase in malware, especially in Q3-Q4 2012, which was due to aggressive adware.}\n\t\t\\label{fig:androidMalware}\n\t\\end{subfigure} \\hspace{0.5cm}\n\t\\begin{subfigure}[b]{0.45\\linewidth}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{.\/figures\/malwareTypes.eps}\n\t\t\\caption{The distribution of malware types for the top ten Android malware families in 2012}\n\t\t\\label{fig:malwareTypes}\n\t\\end{subfigure}\n\t\\caption{Android malware in 2012, based on data provided by TrendLabs~\\cite{bib:trendmicroMobileThreats12}}\n\t\\label{fig:malware}\n\\end{figure*}\n\nSmart devices increasingly store and provide access to a range of personal, corporate, financial and security-related data. While the availability and ease of access to such data via different types of network connections (e.g.~\\mbox{Wi-Fi}, cellular), the ability to download, install and use mobile applications, and the practicality of using networked services while on the go make the smart mobile device an inseparable companion to the modern man, they also provide the perfect breeding ground for malicious software. Cyber-criminals have not had continuous access to such varied sources of personal and financial data before the advent of the smartphone. Increasing use of mobile payments presents a new area to exploit, and attackers have already devised covert ways to make direct financial gain from the mobile users, e.g. via premium calls and SMS~\\cite{bib:feltSurveyMobileMalware11}. Furthermore, the use of multiple communication technologies in smart devices allows attackers to cross service boundaries. For example, an attack carried out over \\mbox{Wi-Fi} would allow the attacker to launch attacks over the mobile phone network. 
In addition to exposing the mobile user to heterogeneous attack vectors, the increasing use of complementary access to the mobile core network, via \\mbox{Wi-Fi} and femtocells, introduces new vulnerabilities to the core network~\\cite{bib:goldeFemtocell12}. The amount of data carried over complementary access should not be underestimated; according to Cisco~\\cite{bib:ciscoMobileForecast13}, $429$ petabytes per month of global mobile data was offloaded onto the fixed network through \\mbox{Wi-Fi} or femtocell radio access, which represents $33\\%$ of total mobile traffic in 2012. Cisco also estimates that mobile data offload will reach $9.6$ exabytes per month by 2017, which will be $46\\%$ of total mobile traffic.\n\nSmart mobile devices are also increasingly at the center of security systems for managing small or large emergencies in built environments, or during sports or entertainment events~\\cite{bib:gelenbeICCCN12,bib:gelenbeWuSimEvac12}, and they are also used increasingly for online search of difficult-to-get sensitive information~\\cite{bib:gelenbeSearch10,bib:abdelrahmanSearch13}. Thus such technologies will necessarily be targeted and may be breached in conjunction with other physical or cyber attacks, as a means of disrupting safety and confidentiality of individuals and emergency responders~\\cite{bib:gorbilANT11,bib:gorbilISCIS11,bib:gorbilPernem13}.\n\nIn order to address the growing mobile threat, there is an urgent need to detect, analyze and understand the new vulnerabilities and threats in the smart mobile ecosystem, which are a result of the evolution of mobile networks and smart devices, the changing way users interact with technology, the popularity of smart devices, and the heterogeneity of the wireless interfaces, supported platforms and offered services. 
In order to advance in the fast moving field of cyber-security and to counter existing and potential mobile threats, we need to be proactive and work on predicting threats and vulnerabilities to build our defenses before threats materialize. To this purpose, the EU FP7 NEMESYS project\\footnote{\\url{http:\/\/www.nemesys-project.eu\/nemesys\/}} will develop a novel security framework to gather and analyze information about the nature of cyber-attacks targeting mobile devices and the mobile core network, as well as identify and predict abnormal behaviour observed on smart mobile devices and the mobile network so that appropriate countermeasures can be taken. We aim to understand the modus operandi of cyber-criminals, and to identify and reveal the possible shift in the way they launch attacks against mobile devices through root cause analysis and correlation of new findings with known patterns of attacks on wireline networks.\n\n\\section{The Current Mobile Threat Landscape}\n\\label{sec:landscape}\n\nSmart devices are open to both traditional and mobile-specific threats due to the multiple roles smart mobile devices play and the heterogeneity of mobile communication technologies and networked services~\\cite{bib:becherMobileSecurity11}. Among the traditional threats that smart mobile devices face, we include physical attacks that require physical access to the device, device-independent attacks such as eavesdropping on the wireless medium or man-in-the-middle attacks, e-mail-based spam and phishing, and IP-based attacks. Current IP-based attacks encountered on mobile devices~\\cite{bib:wahlischMobileHoneypot13} have been found to be largely similar to those on non-mobile devices~\\cite{bib:gelenbeLoukasDoS07,bib:gelenbeSelfAware09}, but we are more interested in the traits of attacks that are tailored specifically for mobile devices. 
With the growing popularity of smart devices, mobile-specific threats have evolved from SMS\/MMS-based denial-of-service (DoS) attacks~\\cite{bib:traynorSMSAttack09,bib:mullinerSMSAttack11} to more sophisticated attacks that usually come in the form of malware and target both the core network and the mobile users. The ability of smart devices to install and run applications not only from official markets but also from unknown sources exposes them to malware~\\cite{bib:feltSurveyMobileMalware11,bib:zhouAndroidMalware12}, and while the mobile malware threat is not new~\\cite{bib:dagonMobileVirus04}, it is clearly evolving and growing as attackers experiment with new business models by targeting mobile users~\\cite{bib:lookoutMobileSecurity12,bib:kasperskyMalwareEvolution12}. We next provide a taxonomy of mobile malware based on behavioral classification, and present an overview of malware detection techniques.\n\n\\subsection{Mobile Malware}\n\nAndroid has been the most targeted mobile platform in 2012, with almost $99\\%$ of all encountered mobile malware being designed for Android~\\cite{bib:kasperskyStatistics12}. The number of malicious Android applications detected by Kaspersky Lab in 2012 was more than $35,000$, which reflects a six-fold increase from 2011~\\cite{bib:kasperskyMalwareEvolution12}. Figure \\ref{fig:androidMalware} shows the rapid growth of Android malware in 2012 based on data from TrendLabs~\\cite{bib:trendmicroMobileThreats12}, which shows the significance of the growing mobile malware threat. 2012 has also seen the emergence of the first mobile botnets~\\cite{bib:kasperskyStatistics12}. A botnet is a collection of Internet-connected devices acting together to perform tasks, often under the control of a command and control server. In wireline networks, malicious botnets are used to generate various forms of spam, phishing, and distributed denial-of-service (DDoS) attacks. 
Mobile botnets extend such capability to cellular networks, give cyber-criminals the advantages of control and adaptability, and pose a significant threat to the mobile core network as they could be used to launch debilitating signaling-based DDoS attacks~\\cite{bib:leeDetectionDoS3G09,bib:traynorCellularBotnet09,bib:mullineriBot10,bib:mullinerSignaling12}.\n\nMobile malware uses various infection vectors in order to gain access to the device; the two main categories of attacks based on the vulnerabilities they use are:\n\\begin{itemize}\n \\item \\textit{Exploiting} hardware or software vulnerabilities of devices to completely bypass the user and install malware. Among the attack surfaces exploited on smart devices are near field communication (NFC) technology, third-party kernel drivers, Android firmware vulnerabilities, and mobile web browsers. For example, some malicious websites use mobile browser exploits to install malware on the device without any user interaction other than visiting the site. Android firmware and third-party driver exploits have been used by malware to elevate their privileges and thus gain root access to the device, allowing them to practically do anything they want without the user's knowledge~\\cite{bib:liebergeldAndroidSecurity13}.\n \n \\item \\textit{Social engineering} is by far the most common method used to infect smart mobile devices, where users are ``tricked'' into installing the malware themselves. Social engineering includes all techniques that exploit the human user, such as phishing, application repackaging, etc., in order to infect the device. Social engineering is popular since it does not require any technical investment by the attacker, i.e.~the identification of a new exploit and the development of a delivery system that uses it. 
Upcoming malware will continue to employ social engineering in new ways; for example, we have already witnessed the first malicious QR codes, which need to be scanned by the user for activation.\n\\end{itemize}\nIndependently of how it infects the device, once the malware is installed, it performs one or more malicious activities, which are classified in Table \\ref{tab:malwareBehaviour} according to their behavior. Figure \\ref{fig:malwareTypes} shows the distribution of malware types for the top ten Android malware families in 2012, based on data from TrendLabs~\\cite{bib:trendmicroMobileThreats12}. The data shows that some malware families exhibit multiple malicious activities (e.g. both premium service abuse and data stealing), and that premium service abuse has been the favorite malware type, most probably because it is simple to create and provides a direct source of revenue for the cyber-criminal. It is closely followed by adware, which generates indirect profit through advertisement fraud. Malicious downloaders appear to be popular malware delivery methods since they can evade detection by malware detectors and do not alarm the user at installation time by requesting many high-level privileges.\n\n\\setlength{\\tabcolsep}{8pt}\n\\renewcommand{\\arraystretch}{1.5}\n\n\\begin{table}[tbp]\n\n\t\\caption{Malware behavioral classification.}\n\t\\label{tab:malwareBehaviour}\n\t\\centering\n\t\\begin{tabular}{l p{5cm}}\n\t\t\\toprule\n\t\tActivity & Description\\\\\n\t\t\\midrule\n\t\tStealing user information\t& Steals user information and credentials. The most commonly targeted data are the contact list, IMEI and IMSI numbers, API authentication keys, banking credentials, user's location, network operator, phone ID and model, phone number, and text messages.\\\\\n\t\tMonitoring\t\t\t\t\t& Tracks user's location, records conversations, takes photos. 
Spyware is considered to exhibit this type of behavior.\\\\\n\t\tAdware\t\t\t\t\t\t& Presents unwanted advertisements to the user. Most mobile adware has evolved to incorporate other types of behavior, such as monitoring user activity, especially browsing behavior, and stealing user information. Malicious advertisement networks are increasingly finding their way into legitimate applications and being used as infection vectors.\\\\\n\t\tPremium service abuse\t\t& Sends premium SMS\/MMS, makes premium calls, subscribes the user to premium services without the user's knowledge. Cyber-criminals often make direct financial gain from such premium service abuse.\\\\\n\t\tClick fraud\t\t\t\t\t& Generates ``clicks'' on ads shown on websites and applications, generating indirect financial gain for the cyber-criminal through payment from the advertisement networks.\\\\\n\t\tSearch engine optimization\t& Manipulates the search results shown on the mobile browser and other applications to improve website rankings in search engine results.\\\\\n\t\tSMS and e-mail spam\t\t\t& Sends spam SMS and e-mail either to the user's contacts or to a specified list of people. Could be used for phishing attacks.\\\\\n\t\tMalicious downloading\t\t& Downloads malicious content onto the device. Mainly used to evade detection by malware detectors and the user.\\\\\n\t\tBotclient\t\t\t\t\t& Turns the device into a botclient that receives commands from a remote command and control server. Once part of a mobile botnet, the device can be used to launch a variety of attacks, ranging from spam to DDoS attacks on the network.\\\\\n\t\tRooting\t\t\t\t\t\t& Roots the device to allow execution of otherwise restricted commands and programs. 
Malware that has root access potentially has full control of the device.\\\\\n\t\tRansom\t\t\t\t\t\t& Locks the device and demands a ransom to be paid in order to unlock the device.\\\\\n\t\tDestruction\t\t\t\t\t& Causes physical damage by deleting important files or personal information.\\\\\n\t\tDenial-of-service\t\t\t& Launches a denial-of-service attack either on the mobile device itself or on the core network. The mobile device may be attacked by repeatedly switching the device off or depleting the battery.\\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\\subsection{Malware Detection Techniques}\n\nThe traditional approach to malware detection is \\textit{signature-based}, where signatures for identified malware are extracted and used to detect new infections of the same malware on other devices. While this can be very effective for controlling an outbreak, it cannot defend against malware unless samples have already been obtained and analyzed. Therefore, new and unknown malware cannot be detected via signature-based approaches. In order to detect previously unknown malware, \\textit{behavioral methods} are useful in which the activities of an application are analyzed via either static or dynamic analysis. \\textit{Static analysis} provides a quick and efficient way to detect malware without executing them, but they are ultimately limited in their effectiveness unless the source code for the application is available. Unlike static analysis, \\textit{dynamic analysis methods} execute the application code in an isolated environment, for example a virtual machine or a sandbox, so its behavior can be directly observed. Dynamic analysis techniques include function call monitoring, function parameter analysis, information flow tracking, and instruction tracing. An overview of mobile malware detection methods is presented in~\\cite{bib:chandraMobileMalware12}, and a more comprehensive survey is given by~\\cite{bib:egeleMalwareAnalysis12}. 
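The contrast between signature-based and behavior-based detection can be made concrete with a toy sketch. Everything below is invented for illustration (the hash blacklist, the behaviour names, the weights and the threshold are hypothetical, not from any real detector): a one-byte repackaging of a known sample defeats an exact-hash signature, while a simple score over dynamically observed behaviours still flags it.

```python
# Toy illustration only: exact-hash "signatures" vs. a behavioural score.
import hashlib

# Hypothetical blacklist of SHA-256 digests of already-analyzed malware samples
KNOWN_MALWARE_DIGESTS = {hashlib.sha256(b"malicious-payload-v1").hexdigest()}

def signature_scan(binary):
    """Signature-based: flags only exact matches against known samples."""
    return hashlib.sha256(binary).hexdigest() in KNOWN_MALWARE_DIGESTS

# Hypothetical suspicion weights for behaviours observed via dynamic analysis
# (cf. Table tab:malwareBehaviour)
BEHAVIOUR_WEIGHTS = {"SEND_PREMIUM_SMS": 5, "CONTACT_CNC_SERVER": 5,
                     "READ_CONTACTS": 2, "SHOW_ADS": 1}
THRESHOLD = 6

def behaviour_flag(observed_actions):
    """Behavioural: flags when the cumulative suspicion score is too high."""
    return sum(BEHAVIOUR_WEIGHTS.get(a, 0) for a in observed_actions) >= THRESHOLD
```

Here a repackaged variant (`b"malicious-payload-v2"`) evades `signature_scan`, but since it exhibits the same runtime behaviour it is still caught by `behaviour_flag`, which is precisely why behavioral methods can detect previously unknown malware.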
In the rest of this section, we will only consider behavior-based detection methods.\n\nBased on where the detection is performed, we can classify malware detection methods into three categories. \\textit{Client-side detectors} reside in the mobile device, but are constrained by its limited physical resources, especially battery. \\textit{Network-side detection} methods analyze mobile network traffic from many users and offer a complementary means for detecting attacks targeting mobile users. They can be used in conjunction with client-side methods to improve detection rates, and provide a broad view of malicious activities within a carrier's network, enabling detection of anomalous behavior that would not be visible on a per user basis. However, network-based methods are limited in that they can only monitor and analyze mobile traffic that goes through the cellular network, and therefore may not be suitable for certain malware types, such as malware that exclusively uses \\mbox{Wi-Fi} for communication. \\textit{Cloud-based detection} offers a trade-off between network-level analysis and on-device security by offloading intensive security analysis and computations to the cloud while monitoring internal mobile device events as well as different types of wireless communications from many users. However, cloud-based solutions can only protect those users that install the application and require a large number of subscribers in order to identify large-scale events, while network-based detection does not require the user to do anything as all detection is performed using data available to the network operator.\n\n\\section{The NEMESYS Approach}\n\\label{sec:nemesys}\n\nDespite recent advances in mobile security, large gaps remain in our understanding of the new and future threats and vulnerabilities in the smart mobile ecosystem. 
Our initial research has identified the following open issues with respect to the general problem of cyber threats against smart mobile devices:\n\n\\textit{- Missing infrastructure for collecting attack traces.} Without an infrastructure to collect attack traces against mobile devices, we will not be able to detect, analyze and understand the evolving attack strategies employed by cyber-criminals. These are crucial in order to develop effective mitigation strategies and to provide seamless services in the smart mobile ecosystem.\n\n\\textit{- Virtualization.} By leveraging advances in mobile virtualization technology, we can protect mobile users from the consequences of malware by appropriately restricting access to device functionality. Virtualization would also allow an easy way to reset mobile devices to a state before infection.\n\n\\textit{- Mobile honeypot development.} Honeypots are successfully used in wired networks in order to study the strategies of attackers and to protect production systems from attacks. However, the development of mobile honeypots is still at an early stage. For example, the mobile honeypot presented in~\\cite{bib:songMobileHoneypot12} can connect to the Internet only over Wi-Fi or a wired connection. Although the mobile honeypot presented in~\\cite{bib:wahlischMobileHoneypot13} can connect to a UMTS network via a USB dongle, the honeypot uses desktop computers to emulate mobile devices. We aim to address the deficiencies of these early efforts by developing a high-interaction mobile honeypot using real mobile devices.\n\n\\textit{- New potential for exploiting security vulnerabilities through mobile botnets.} While botnets are a well-known phenomenon in the wired Internet, we have just witnessed the first mobile botnets in 2012. Mobile botnets pose interesting questions as to their capabilities and uses since smart mobile devices possess many abilities not present on a desktop computer. 
Such mobile botnets could be used to attack mobile users (e.g. SMS spam) and the core network (e.g. DDoS attack); the ability of a large number of mobile devices to effectively attack the core network has been demonstrated in recent cases of legitimate but poorly-designed mobile applications. We need to explore the new threats posed by the emergence of mobile botnets in more detail.\n\n\\textit{- Adaptability of cyber-criminals and rapidly changing cyber-crime tactics.} Cyber-criminals have become adept at modifying their strategies and tactics as methods are developed to counter their activities. Thus, security solutions should rely less on signatures and instead adopt other forms of detection.\n\n\\textit{- New types of attacks that cross service, platform and network boundaries.} Identification of anomalous behavior within a large set of heterogeneous data is difficult and time-consuming, particularly across layers. Statistical analysis is further challenged by rare anomalous events in massive amounts of data.\n\n\\textit{- Attack attribution and understanding of the cyber-criminals' modus operandi.} The selection of the best mitigation strategy requires understanding of new phenomena and recognizing changes in how the malicious actors operate. This requires that attacks are analyzed in a detailed way, in order to ``attribute'' responsibility to an exact attacker or to protect the true targets.\n\nIn the EU FP7 NEMESYS project, we are developing a data collection, visualization and analysis infrastructure (Fig.~\\ref{fig:architecture}) in order to address these open issues. The core of this architecture consists of a data collection infrastructure that incorporates high-interaction virtualized mobile honeypots and honeyclients in order to gather data regarding mobile attacks. 
The collected data is enriched by the data collection infrastructure through interaction with external sources and made available to anomaly detection, visualization and analysis modules running on the mobile network operator's site. The purpose of the anomaly detection mechanisms is to detect deviations from normal behaviour of mobile users and the core network in real-time. These mechanisms will utilize charging data records (CDRs) for the users and control-plane protocol data, combined with enriched mobile attack traces made available by the data collection infrastructure. In addition to monitoring abnormal behaviour of users connected to the core network through the radio access network, the architecture contains a module that performs anomaly detection within the femtocell complementary access system.\n\n\\begin{figure*}[tbp]\n\t\\centering\n\t\\includegraphics[width=0.8\\linewidth]{.\/figures\/architecture2.eps}\n\t\\caption{The NEMESYS architecture}\n\t\\label{fig:architecture}\n\\end{figure*}\n\nEnriched attack traces and normal behaviour statistics from the data collection infrastructure and the output of the anomaly detection module are fed into the visualization and analysis module. The visualization and analysis module will aid in the detection of existing and emerging threats in the mobile ecosystem through attack attribution, root cause identification, and correlation of observed mobile attacks with known attack patterns. It will present large sets of data related to the observed attacks from heterogeneous sources and facilitate the role of the security analyst in reasoning, hypothesis testing and decision making. 
We describe the components of the NEMESYS architecture in more detail in the following sections.\n\n\subsection{Virtualized Mobile Client Honeypot}\n\label{sec:honeypot}\n\nHoneypots are networked computer system elements that are designed to be attacked and compromised so we can learn about the methods employed by the attackers~\cite{bib:provosHoneypot07}. Traditional honeypots are servers that passively wait to be attacked, whereas client honeypots are security devices that actively search for malware, compromised websites and other forms of attacks. High-interaction client honeypots are fully functional, realistic client systems and they generally do not impose any limitations on the attacker other than those required for containing the attack within the compromised system. Despite their complexity and maintenance difficulty, high-interaction client honeypots are effective at detecting unknown attacks, and they are harder for the attacker to detect~\cite{bib:provosHoneypot07}. They also enable in-depth analysis of attacks both while they take place and afterwards.\n\nAs part of NEMESYS, we are developing a high-interaction virtualized client honeypot for the Android mobile platform in order to attract and collect mobile attack traces. We have chosen Android considering its popularity among mobile users and the extremely high proportion of malware targeting it. Our virtualization technology logically partitions the physical device into two virtual machines (VMs): the \textit{honeypot VM} and the \textit{infrastructure VM}. The honeypot VM will host the largely unmodified mobile device operating system, and it will not have direct access to the device's communication hardware. The infrastructure VM will mediate all access to the communication hardware, and employ sensors to wiretap any communication and detect suspicious behaviour. 
It will also provide the event monitoring, logging and filesystem snapshot facilities, as well as transmit threat information to the NEMESYS data collection infrastructure. It will host a lightweight malware detection module in order to identify malicious applications running on the honeypot VM. For this purpose, both signature-based and behaviour-based approaches will be considered. In order to improve the efficiency of malware detection, we will identify and prioritize the most important attributes in the system state space to monitor.\nOur virtualization technology will ensure that an attack is confined within the compromised device and that it will not put other devices in the network at risk. Furthermore, through this approach, we will be able to stop malware from abusing premium services and from subscribing the user to services without her knowledge. Thus, the user will be spared from any financial distress that may arise as a result of using the mobile honeypot. The virtualization solution also enables taking full snapshots of the honeypot VM filesystem for further forensic analysis of an attack, as well as improving honeypot maintenance since a compromised honeypot could be restored more quickly.\n\nOur initial research has shown that the infection vector of most mobile malware is social engineering, where users are ``tricked'' into installing the malware themselves. This observation has led us to the conclusion that the user should not be ignored in the construction of an effective mobile honeypot. To this end, we introduce the \\textit{nomadic honeypot} concept, which utilizes real smartphone hardware with the virtualization solution that will be developed within NEMESYS~\\cite{bib:liebergeldHoneypot13}. We plan to deploy nomadic honeypots by handing them out to a chosen group of volunteers, who will use the honeypot as their primary mobile device. 
It will be up to these human users to get the honeypot infected by visiting malicious sites, installing dubious applications, and so forth. Traces from malware and other types of mobile attacks collected and identified through the nomadic honeypots will be provided to the data collection infrastructure, which is described next.\n\n\subsection{Data Collection Infrastructure}\n\label{sec:dataCollection}\n\nThe data collection infrastructure will gather and store mobile attack traces that will be provided by the virtualized mobile client honeypot and the honeyclient, and combine them with data from the mobile core network and external sources for enrichment, correlation analysis, and visualization. As an initial step in the design of this infrastructure, we are identifying available external data sources relating to wireline network attacks, which will enable correlation of data from multiple heterogeneous sources. Examples of such data sources are the SGNET~\cite{bib:leitaSgnet08}, HARMUR~\cite{bib:leitaHarmur11}, and VirusTotal databases. A source aggregator is being designed and developed to harvest and consolidate data from these sources and the NEMESYS mobile honeypot in a scalable database. Scalable design of the database is important in order to be able to efficiently store and handle large heterogeneous data sets.\nOnce data from multiple sources have been consolidated, they will be enriched by analyzing the data itself or by accessing external sources. For example, TCP\/IP stack fingerprinting to identify the remote machine's operating system and clustering of the traces are passive methods of data enrichment. On the other hand, DNS reverse name lookup, route tracing, autonomous system identification, and geo-localization are methods to improve the characterization of remote servers, but these functions may require access to external sources, possibly in real time. 
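The consolidation and passive-enrichment steps just described can be sketched as follows. This is a minimal illustration rather than NEMESYS code: the record schema, the feed names and the fingerprint-to-OS table are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackTrace:
    source: str        # feed the record came from
    remote_ip: str     # remote host observed in the trace
    payload_hash: str  # hash of the captured payload
    os_guess: str = "unknown"

# Illustrative passive enrichment: map a crude TCP/IP fingerprint
# (initial TTL, TCP window size) to a likely OS family.
FINGERPRINT_TO_OS = {(64, 5840): "Linux", (128, 65535): "Windows"}

def normalize(feed_name, raw, fingerprint=None):
    """Map a raw feed record onto the common schema, enriching passively."""
    os_guess = FINGERPRINT_TO_OS.get(fingerprint, "unknown")
    return AttackTrace(feed_name, raw["ip"], raw["sha1"], os_guess)

def consolidate(records):
    """Deduplicate records from different feeds by (remote_ip, payload_hash)."""
    seen, merged = set(), []
    for rec in records:
        key = (rec.remote_ip, rec.payload_hash)
        if key not in seen:
            seen.add(key)
            merged.append(rec)
    return merged
```

Active enrichment steps such as reverse DNS lookups, route tracing or geo-localization would plug in at the same point as the fingerprint lookup, but would contact external services, possibly in real time.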
As a final step, the data collection infrastructure will help in the definition of the appropriate inputs representing normal and malicious network activity, which will then be used as the fundamental representation of information in the visualization and analysis module.\n\nThe \textit{honeyclient}~\cite{bib:delosieresMalwareDetection13} being developed as part of the data collection infrastructure is similar in concept to the virtualized mobile honeypot, but instead of using real hardware and being driven by real users, the honeyclient uses an Android emulator driven by artificially generated user input to automate interaction with web sites, application markets and applications in order to collect mobile attack traces. The honeyclient consists of crawler, client, and detector components. The crawler discovers web sites, application markets and applications of interest and generates a list of web pages to visit and applications to download. The client runs the Android emulator (e.g. on a desktop computer) and processes the list generated by the crawler, visiting the web sites using a mobile browser and downloading, installing and executing applications. The client records the behavior of the applications, e.g. function calls, as well as the changes made to the system as a result of executing them; these records are used by the malware detector to identify malware. The honeyclient provides data relating to identified malware and malicious web sites to the data collection infrastructure.\n\n\subsection{Anomaly Detection Using Control Plane and Billing Data}\n\label{sec:anomalyDetection}\n\nThe anomaly detection module that operates at the mobile network operator's site is used for the identification and prediction of abnormal behavior observed on smart mobile devices and the mobile network. In addition to user-oriented attacks, mobile networks are vulnerable to a novel DoS attack called the signaling attack~\cite{bib:leeDetectionDoS3G09}. 
Signaling attacks seek to overload the control plane of the mobile network using low-rate, low-volume attack traffic, based on the structure and characteristics of mobile networks. Unlike conventional DoS attacks that focus on the data plane, the signaling attack creates havoc in the control plane of a mobile network by repeatedly triggering radio channel allocations and revocations. In order to identify such DoS attacks against the mobile network and attacks against the mobile users in real time, we will use signaling data from control-plane protocols and sanitized (anonymized) CDRs from mobile users, respectively. For this purpose, we will use normal user behavior statistics, as well as synthetic ``typical'' user playbacks, to create traces of signaling events and billing data so as to characterize and extract their principal statistics such as frequencies, correlations, times between events, and possible temporal tendencies over short (milliseconds to seconds) and long (hours to days) intervals. We will then employ Bayesian techniques such as maximum likelihood detection, neuronal techniques based on learning, and a combination of these two in order to design and develop robust and accurate change detection algorithms to detect the presence of an attack, and classification algorithms to identify the type of attack when it is detected with high confidence.\n\nNovel femtocell architectures provide a specific opportunity for user-end observation of network usage, while they also have specifics for attacks within the femtocells~\\cite{bib:borgaonkarSecurityFemtocell11}. 
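Returning briefly to the change-detection task described above: a minimal, purely illustrative sketch of one such sequential test is a CUSUM-type log-likelihood-ratio detector on signaling inter-event times. The exponential event model, the rates and the threshold below are assumptions chosen for illustration, not the project's design.

```python
import math

def cusum_rate_increase(inter_event_times, rate0, rate1, threshold):
    """CUSUM-style sequential test on signaling inter-event times.

    rate0: event rate under normal behaviour (events/second)
    rate1: hypothesized event rate under attack (rate1 > rate0)
    Returns the index at which the alarm fires, or None if it never does.
    """
    stat = 0.0
    for i, x in enumerate(inter_event_times):
        # log-likelihood ratio of Exponential(rate1) vs Exponential(rate0)
        llr = math.log(rate1 / rate0) - (rate1 - rate0) * x
        stat = max(0.0, stat + llr)  # reset at zero, accumulate evidence
        if stat > threshold:
            return i
    return None
```

With a normal rate of 1 event per second and an attack rate of 10 events per second, the statistic stays near zero on normal traffic and crosses the threshold within a few events of the onset of attack-like inter-arrival times.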
To address attacks specific to femtocells, we will conduct a survey and evaluation of how users may be monitored and attacks detected within a femtocell, and how these are linked to overall mobile network events.\n\nA number of novel ideas are also being investigated~\cite{bib:abdelrahmanAnomalyDetection13}, such as modeling the signaling and billing network as a queueing network~\cite{bib:gelenbeMuntzProb76,bib:GelenbeActa} to capture the main events that involve hundreds of thousands of mobile calls and interactions, while only a few may be subject to an intrusion or attack at any given time. Detection of anomalies is studied using learning with neural networks~\cite{bib:gelenbeRNN99,bib:gelenbeNatural12}, which provide the fast, low-order polynomial detection complexity required to process massive amounts of data and to detect and respond to threats in real time. Such techniques can also benefit from distributed task decomposition and execution for greater efficiency~\cite{bib:aguilarTask97}.\nOur analytical models and anomaly detection algorithms will be augmented and validated with simulation tools. As an initial step, we are developing realistic simulations of UMTS and LTE networks using the OPNET simulator in order to extract data regarding control-plane events that take place during normal mobile communications. Characteristics of these control events will be used to drive the development of our analytical models. We will later conduct large-scale mobile network simulations to validate our mathematical results. Another set of simulations will focus on user-level events, such as voice calls and packet communications, and include charging system components to monitor the use of internal and external network resources. 
Such simulations will be used to test the performance of our real-time and offline anomaly detection methods.\n\n\\subsection{Root Cause Analysis, Correlation and Visualization}\n\\label{sec:analysisVisualization}\n\nThe role of the visualization and analysis module is to process the data obtained from the data collection infrastructure and the anomaly detection module in order to identify and reveal correlations between network events, and to provide a visual analytics framework for the security analyst to perform hypothesis formulation and testing. The data provided to this module represents a large and heterogeneous data set that needs to be presented in a meaningful way to the operator without overwhelming her or restricting available views and actions on the data. In addition to mere representation of data, the visualization and analysis module aims to provide visual analytics tools to the operator. This task is compounded by different uses of visualization by the operator: (i) real-time monitoring of the status of users and the mobile network, and (ii) exploratory data analysis. For real-time monitoring, the security status of a large set of mobile users and more importantly the mobile network need to be presented to the operator. This includes providing early alerts for abnormal behaviour, DoS attacks, malware spreading among the users of the mobile network, etc. The analytics module must also provide visual analytics tools so the analyst can perform attack attribution and correlation analysis with the help of algorithms running in the background.\n\nIn order to effectively visualize and explore large sets of heterogeneous, dynamic, complex data, it is necessary to create multiple coordinated views of the data that allow a multi-faceted perception and the discovery of any hidden attributes. The analysis methods also need to be scalable for early network alerts and fast access to the underlying data. 
We will therefore focus on enabling a real-time analysis framework by means of incremental analysis and visualization methods~\cite{bib:papaVisualNetwork13}, such as multi-level hierarchical screen visualizations that update smoothly rather than showing abrupt changes.\n\n\subsection{Integration and Validation}\n\nIn order to evaluate and validate the technologies that are being developed and to demonstrate their impact to interested parties, NEMESYS will construct a virtual testing environment based on guidelines provided by our industrial partners that is as close to a real mobile network as possible within feasibility limitations. The different modules being developed by various partners will be integrated in the virtual testing environment, and validation tests will be conducted based on realistic use-cases. We aim to use the OPNET simulator as part of the virtual testing environment in order to conduct simulations of different types of mobile networks, e.g. UMTS and LTE, and to drive the large-scale networking experiments.\n\n\section{Conclusion}\n\label{sec:conclusion}\n\nThe evolving and growing nature of the mobile threat in the smart mobile ecosystem is evident from the increasing market share of smartphones and tablets, the large amount of data generated by smart devices, and the number of detected mobile malware samples. We must therefore address the mobile threat and understand the new and potential vulnerabilities, threats, and operating methods of cyber-criminals. NEMESYS will provide new insight into the nature of next-generation network security in the smart mobile ecosystem. The main innovation of NEMESYS is the research and development of novel security technologies for the identification and prediction of abnormal behavior observed on smart mobile devices, as well as for gathering and analyzing information about the nature of cyber-attacks targeting mobile devices, so that appropriate countermeasures can be taken to combat them. 
It will involve the development of virtualized honeypots for mobile devices, a data collection infrastructure, and the introduction of novel attack attribution and visual analytics technologies for the mining, presentation and representation of large amounts of heterogeneous data that are related to the smart mobile ecosystem.\n\n\\section*{Acknowledgments}\n\nThe work presented in this paper was supported by the EU FP7 collaborative research project NEMESYS (Enhanced Network Security for Seamless Service Provisioning in the Smart Mobile Ecosystem), under grant agreement no. 317888 within the FP7-ICT-2011.1.4 Trustworthy ICT domain.\n\n\n\\IEEEtriggeratref{27}\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:Intro}\nWe extend the stability theorem of the depolarizing channel \nto the output quantum $p$-R\\'{e}nyi entropy \nfor $p \\ge 2$ or $p=1$. \nThe original stability theorem with the output purity is essentially equivalent to our stability theorem for the case $p=2$ \nand was used in proving the equality \n$\\mathrm{QMA}(k)=\\mathrm{QMA}(2)$ for all $k \\ge 2$~\\cite{HM13}. 
\nWe generalize it to the output quantum $p$-R\'{e}nyi entropy \nto create a more powerful tool,\nand we apply it to a type of polygraph test as discussed below.\nGeneralization is accomplished by defining the notion of stability of a quantum channel with respect to any real-valued continuous function.\nThat is, if a state is close to achieving \nthe minimal\/maximal output value of a particular quantity (entropy function) through the channel,\nthen it must be close to an input state giving the minimal\/maximal value.\nIn particular, we show that the depolarizing channel is stable \nwith respect to the output quantum R\'{e}nyi entropy.\n\nOur theorem is constructed by generalizing the Taylor expansion of the von~Neumann entropy~\cite{GF13} to the quantum R\'{e}nyi entropy.\nWhereas the original work employed the output purity~\cite{HM13},\nwhich is a relatively simple function,\nwe use a more general (and more complicated) function,\nnamely the output quantum R\'{e}nyi entropy.\nThe Taylor expansion of the output quantum R\'{e}nyi entropy is the technique we use\nto prove the stability theorem for a depolarizing channel with respect to the output quantum R\'{e}nyi entropy.\nThe protocol is described in \S\ref{sec:Poly} \nand provides meaning and intuition for \nthe stability theorem of the depolarizing channel.\nFurthermore, the protocol shows that our stability theorem offers a concrete benefit: \nit yields a smaller undecidable gap than the original case. \n\nWe organize our paper as follows. \nIn \S\ref{sec:Stable}, we provide some notions needed to define a stable channel clearly.\nOur main result appears in \S\ref{sec:Stability}, \nwhere we generalize the Taylor expansion of the von~Neumann entropy \nto calculate the Taylor expansions of the quantum R\'{e}nyi entropies.\nWe use this result to show that the depolarizing channel is stable with respect to the output quantum $p$-R\'{e}nyi entropies for $p \ge 2$ or $p=1$. 
\nIn \S\ref{sec:Poly}, \nwe introduce a polygraph test as an application of our stability theorem. \nFinally, in \S\ref{sec:Conclusions}, we conclude with a discussion of our results.\n\n\n\section{Stable channels}\n\label{sec:Stable}\nIn this section, we define the notions of \na {\em quantity}, an {\em extremal} state, an {\em~$\epsilon$-almost extremal} state, \nan {\em~$\epsilon$-stable} channel, and a {\em stable} channel. \nIn the rest of the paper, we will use these notions only for \nthe depolarizing channel, with the quantum R\'{e}nyi entropy as the quantity.\n\n\begin{definition}\nLet \n$\mathcal{E}: \mathcal{B}(\mathcal{H}_{\text i}) \rightarrow \mathcal{B}(\mathcal{H}_{\text o})$ \nbe a quantum channel (i.e., a trace-preserving completely positive map) and let $Q$ be a real-valued continuous function on $\mathcal{B}(\mathcal{H}_{\text o})$, \nwhere $\mathcal{B}(\mathcal{H}_{\text i})$ and $\mathcal{B}(\mathcal{H}_{\text o})$ are the sets of all states in the input space $\mathcal{H}_{\text i}$ and the output space $\mathcal{H}_{\text o}$, respectively.\n\nFor any $\epsilon>0$, \na state $\sigma \in \mathcal{B}(\mathcal{H}_{\text i})$ is {\em~$\epsilon$-almost extremal}\nwith respect to the function $Q$ and the channel $\mathcal{E}$ \nif\n\begin{equation}\n\t\left| Q\left(\mathcal{E}(\sigma)\right)-\operatorname{ext}_\rho Q\left( \mathcal{E} \left(\rho \right) \right) \right|\n\t\t\in O(\epsilon),\n\end{equation} \nwhere the extremal value, $\operatorname{ext}_\rho$, refers to \neither the maximal value or the minimal value of $Q$ over all states $\rho$ in $\mathcal{B}(\mathcal{H}_{\text i})$ \naccording to a given quantity.\nA state $\sigma_0$ is said to be {\em extremal} \nwith respect to $Q$ and $\mathcal{E}$ if \n\begin{equation}\n\tQ\left(\mathcal{E}(\sigma_0)\right)=\operatorname{ext}_\rho Q \left( \mathcal{E} \left(\rho\right) 
\right).\n\end{equation}\n\end{definition}\nNow we define a stable channel with respect to the function $Q$.\n\begin{definition}\nFor a given $\epsilon>0$, a channel $\mathcal{E}$ is {\em~$\epsilon$-stable} \nwith respect to a quantity $Q$\nif, for every~$\epsilon$-almost extremal state $\sigma$, \nthere exists an extremal state $\sigma_0$ with respect to $Q$ and $\mathcal{E}$ such that\n\begin{equation}\n\t\left\| \sigma- \sigma_0 \right\|^2_1 \in O(\epsilon),\n\end{equation}\nwhere $\left\| \cdot \right\|_1$ denotes the trace norm.\nA channel $\mathcal{E}$ is {\em stable} with respect to a quantity $Q$\nif it is~$\epsilon$-stable with respect to the quantity $Q$ \nfor all $\epsilon>0$.\n\end{definition}\n\nWe have provided some generalized definitions to establish the notion of a stable channel.\nIn the next section, as our main result, we present the stability theorem of the depolarizing channel for the output quantum R\'{e}nyi entropy and prove that the depolarizing channel is stable with respect to the quantum R\'{e}nyi entropy.\n\n\section{Stability of the depolarizing channel}\n\label{sec:Stability}\nIn this section, we present and prove the stability theorem of the depolarizing channel with respect to the output quantum R\'{e}nyi entropy. This section consists of two subsections. 
In the first subsection, we evaluate the Taylor expansion of the quantum R\'{e}nyi entropy, which is crucial for proving our main theorem in the second subsection.\n\n\n\subsection{The Taylor expansion of the quantum $p$-R\'{e}nyi entropy}\n\label{subsec:Taylor}\nIn this subsection, \nwe extend the Taylor expansion technique for the von~Neumann entropy~\cite{GF13}\nto calculate the Taylor expansion of the quantum $p$-R\'{e}nyi entropy.\nThis technique is key to proving the stability theorem of the depolarizing channel for the output quantum $p$-R\'{e}nyi entropy.\n\nFor $p > 0$ ($p\neq1$),\nthe quantum $p$-R\'{e}nyi entropy~\cite{H3} of a state~$\rho$ is\n\begin{equation}\n\tS_p\left(\rho\right):=\frac{1}{1-p} \log\operatorname{Tr}\rho^{p}.\n\label{eq:Renyi}\n\end{equation}\nThe minimal output quantum $p$-R\'{e}nyi entropy of a quantum channel $\mathcal{E}$ \nis defined as\n\begin{equation}\n\tS_p^{\min}\left(\mathcal{E}\right):=\min_\rho S_p\left(\mathcal{E}(\rho)\right),\n\label{eq:minRenyi}\n\end{equation}\nwhere the minimum is taken over all input states $\rho$ of $\mathcal{E}$.\nThe quantum $p$-R\'{e}nyi entropy converges to the von~Neumann entropy as $p$ tends to one, \nand we can thus consider the quantum R\'{e}nyi entropy \nas a generalization of the von~Neumann entropy~\cite{H3}. \n\nIn order to obtain the Taylor expansion of the quantum R\'{e}nyi entropy, \nwe exploit the following lemma.\n\begin{lemma}[Gour and Friedland~\cite{GF13}]\n\label{lem:DividedDifference}\n\tLet $A=\operatorname{diag}\left(p_1,\cdots,p_m\right) \in \mathbb{C}^{m \times m}$ be a diagonal square matrix, and $B=\left[b_{ij} \right] \in \mathbb{C}^{m \times m}$ be a complex square matrix. 
\nLet $f$ be a $C^2$ function defined on a real open interval $\\left(a,b\\right)$.\nThen\n\\begin{equation}\n\tf\\left(A+tB\\right)\n\t\t=f\\left(A\\right)+t L_A\\left(B\\right)+t^2 Q_A\\left(B\\right)+O\\left(t^3\\right)\n\\label{eq:Lemma_GF}\n\\end{equation}\nfor $L_A : \\mathbb{C}^{m \\times m} \\rightarrow \\mathbb{C}^{m \\times m}$ a linear operator\nand $Q_A:\\mathbb{C}^{m \\times m} \\rightarrow \\mathbb{C}^{m \\times m} $ \na quadratic homogeneous non-commutative polynomial in $B$. \nFor $i,j=1,\\cdots,m$, we have\n\\begin{align}\n&\\left[L_A\\left(B\\right)\\right]_{ij}\n=\\Delta f\\left(p_i, p_j\\right)b_{ij}\n=\\frac{f\\left(p_i\\right)-f\\left(p_j\\right)}{p_i-p_j}b_{ij}, \n\\nonumber \\\\\n&\\left[Q_A\\left(B\\right)\\right]_{ij}\n=\\sum_{k=1}^m\\Delta^2 f\\left(p_i, p_k, p_j\\right)b_{ik}b_{kj}.\n\\label{eq:lemma_GF1}\n\\end{align}\nIn particular,\n\\begin{align}\n&\\operatorname{Tr}\\left(L_A\\left(B\\right)\\right)\n=\\sum_{j=1}^m f^\\prime\\left(p_j\\right)b_{jj}, \n\\nonumber \\\\\n&\\operatorname{Tr}\\left(Q_A\\left(B\\right)\\right)\n=\\sum_{i,j=1}^m \\frac{f^\\prime\\left(p_i\\right)-f^\\prime\\left(p_j\\right)}\n{2\\left(p_i-p_j\\right)}b_{ij}b_{ji}.\n\\label{eq:lemma_GF2}\n\\end{align}\n\\end{lemma}\nNow we use Lemma~\\ref{lem:DividedDifference} \nto calculate the Taylor expansion of $S_p\\left(\\rho\\left(t\\right)\\right)$.\n\\begin{theorem}\n\\label{thm:TER}\nA nonsingular density matrix\n\\begin{equation}\n\t\\rho\\left(t\\right)=\\rho+ t \\gamma_0+t^2 \\gamma_1+ O(t^3),\n\\end{equation}\nwith~$\\rho$ diagonal, $\\gamma_0$ all zeroes along the diagonal \nand~$\\gamma_1$ having zero trace,\nhas quantum $p$-R\\'{e}nyi entropy\n\\begin{align}\n\tS_p\\left(\\rho\\left(t\\right)\\right)\n=&S_p\\left(\\rho\\right) 
+\\frac{1}{1-p}t^2\\left(p\\frac{\\operatorname{Tr}\\left(\\rho^{p-1}\\gamma_1\\right)}{\\operatorname{Tr}\\left(\\rho^{p}\\right)}+\\frac{\\operatorname{Tr}\\left(Q_{\\rho}\\left(\\gamma_0\\right)\\right)}{\\operatorname{Tr}\\left(\\rho^{p}\\right)}\n\\right)+O\\left(t^3\\right).\n\\label{eq:Salphat}\n\\end{align}\n\\end{theorem}\n\\begin{remark}\nAs $p$ tends to one, \nTheorem~\\ref{thm:TER} \nimplies the Taylor expansion of the von~Neumann entropy, \nhence generalizes the von~Neumann entropy.\n\\end{remark}\n\\begin{proof}\nAs $\\rho$ is nonsingular, $\\mathds{1}-\\rho\\left(t\\right) < \\mathds{1}$ for small~$t$. \nThus, we can employ the Taylor expansion with respect to~$t$. \nFrom the following Taylor expansion\n\\begin{equation}\n\t\\rho^{p}\\left(t\\right)\n\t\t=\\left[\\mathds{1}-\\left(\\mathds{1}-\\rho\\left(t\\right)\\right)\\right]^{p}\n=\\sum_{n=0}^{\\infty} \\binom{p}{n}\\left(-1\\right)^n\\left(\\mathds{1}-\\rho\\left(t\\right)\\right)^n,\n\\label{eq:TER1}\n\\end{equation}\nwe obtain\n\\begin{equation}\n\t\\operatorname{Tr}\\left(\\rho^{p}\\left(t\\right)\\right)\n\t\t=\\sum_{n=0}^{\\infty} \\binom{p}{n}\\left(-1\\right)^n\n\\operatorname{Tr}\\left[\\left(\\mathds{1}-\\rho\\left(t\\right)\\right)^n\\right].\n\\label{eq:TER2}\n\\end{equation}\nExpanding the trace term in the right-hand side of Eq.~(\\ref{eq:TER2}) \nup to second order in~$t$ yields\n\\begin{equation}\n\t\\operatorname{Tr}\\left[\\left(\\mathds{1}-\\rho\\left(t\\right)\\right)^n\\right]\n\t\t=\\operatorname{Tr}\\left[\\left(\\mathds{1}-\\sigma\\left(t\\right)\\right)^n\\right]-t^2 n \n\\operatorname{Tr}\\left[\\left(\\mathds{1}-\\rho\\right)^{n-1}\\gamma_1\\right]+O\\left(t^3\\right),\n\\label{eq:TER3}\n\\end{equation}\nwhere $\\sigma\\left(t\\right)=\\rho+t\\gamma_0$.\nFrom Eq.~(\\ref{eq:TER2}) and Eq.~(\\ref{eq:TER3}),\n\\begin{equation}\n\\operatorname{Tr}\\left(\\rho^{p}\\left(t\\right)\\right)\n=\\operatorname{Tr}\\left(\\sigma^{p}\\left(t\\right)\\right)\n+p t^2 
\\operatorname{Tr}\\left( \\rho^{p-1}\\gamma_1\\right)\n+O\\left( t^3\\right).\n\\label{eq:TER4}\n\\end{equation}\nAs Lemma~\\ref{lem:DividedDifference} yields the equality\n\\begin{equation}\n\t\\operatorname{Tr}\\left(\\sigma^{p}\\left(t\\right)\\right)\n\t\t=\\operatorname{Tr}\\left( \\rho^{p}\\right)+t p \\operatorname{Tr}\\left(\\rho^{p-1}\\gamma_0\\right)\n\t\t\t+t^2 \\operatorname{Tr}\\left( Q_{\\rho}\\left( \\gamma_0\\right)\\right)+O\\left( t^3\\right),\n\\label{eq:TER5}\n\\end{equation}\nand $\\gamma_0$ is zero along the diagonal, \nwe obtain\n\\begin{equation}\n\t\\operatorname{Tr}\\left(\\rho^{p}\\left(t\\right)\\right)\n\t\t=\\operatorname{Tr}\\left(\\rho^{p}\\right)+t^2\\left( p \\operatorname{Tr}\\left(\\rho^{p-1}\\gamma_1\\right)\n\t\t\t+\\operatorname{Tr}\\left( Q_{\\rho}\\left( \\gamma_0\\right)\\right)\\right)+O\\left( t^3\\right).\n\\label{eq:TER6}\n\\end{equation}\nUsing the Taylor expansion of the logarithm function,\n\\begin{equation}\n\\log\\left(1+x\\right)=\\sum_{n=1}^{\\infty} (-1)^{n+1}\\frac{x^n}{n},\n\\label{eq:TER7}\n\\end{equation}\nwe obtain\n\\begin{align}\n\t\\log\\operatorname{Tr}\\left(\\rho^{p}\\left(t\\right)\\right)\n\t\t=&\\log\\left[\\operatorname{Tr}\\left(\\rho^{p}\\right)\\left( 1+ t^2\\left( p \\frac{\\operatorname{Tr}\n\t\t\t\\left(\\rho^{p-1}\\gamma_1\\right)}{\\operatorname{Tr}\\left(\\rho^{p}\\right)}\n\t\t\t\t+\\frac{\\operatorname{Tr}\\left(Q_{\\rho}\\left(\\gamma_0\\right)\\right)}\n\t\t\t\t\t{\\operatorname{Tr}\\left(\\rho^{p}\\right)}\\right) +O\\left( t^3\\right) \\right) \\right]\n\t\t\t\t\t\t\\nonumber\\\\\n\t\t=&\\log\\operatorname{Tr}\\left(\\rho^{p}\\right)+ t^2\\left( p \\frac{\\operatorname{Tr}\\left(\\rho^{p-1}\n\t\t\t\\gamma_1\\right)}{\\operatorname{Tr}\\left(\\rho^{p}\\right)}\n\t\t\t\t+\\frac{\\operatorname{Tr}\\left(Q_{\\rho}\\left(\\gamma_0\\right)\\right)}{\\operatorname{Tr}\\left(\\rho^{p}\\right)}\\right)\n\t\t\t\t\t+O\\left( t^3\\right).\n\\label{eq:TER8}\t\t\t\n\\end{align}\nTherefore, by 
definition of the quantum $p$-R\\'{e}nyi entropy in Eq.~(\\ref{eq:Renyi}), \nthe equality~(\\ref{eq:Salphat}) can be readily obtained from Eq.~(\\ref{eq:TER8}).\nThis completes the proof.\n\\end{proof}\nWe have evaluated the Taylor expansion of the quantum R\\'{e}nyi entropy.\nIn the next subsection, we use this result to prove the stability theorem of the depolarizing channel for the output quantum R\\'{e}nyi entropy.\n\n\\subsection{The stability theorem of the depolarizing channel for the output quantum R\\'{e}nyi entropy}\n\\label{subsec:Stability}\n\nIn this subsection, we prove our main theorem, \nnamely the stability theorem of the output quantum $p$-R\\'{e}nyi entropy for the depolarizing channel \nfor $p \\ge 2$. \nFirst, we present the following lemma, \nwhich is crucial to prove the theorem. \n\\begin{lemma}\n\\label{lem:f_p(x)}\nFor $p \\ge 2$, $r> 1$ and $d \\ge 2$, \n\\begin{equation}\n\t f_p(x)\n\t \t:= \\frac{p}{1-p}\\left[\\left(\\frac{({r}^{x})^{p-1}-1}{r^x-1}\\right)\n\t\t\t\\left(\\frac{(r-1)^2}{(d+r-1)^2(r^p+(d-1))}\\right)^{x}\n+\\left(\\frac{r^{p-1}+r+(d-2)}{r^p+(d-1)}\\right)^{x} -1 \\right]\n\\label{eq:falphax}\n\\end{equation}\nis monotonically increasing on $[2,\\infty)$.\n\\end{lemma}\n\\begin{remark}\nLet $\\ket{\\psi}$ be an $n$-qudit pure state satisfying $\\left| \\inn{\\psi}{\\phi}\\right|^{2}=1-t^2$\nfor an $n$-qudit product state $\\ket{\\phi}$. Then the function $f_p$~(\\ref{eq:falphax}) is the coefficient of the second order term \nin the Taylor expansion of $S_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right)$.\n\n\\end{remark}\n\\begin{proof}\n\\label{proof:f_p(x)}\nObserve that\n\\begin{equation}\nf'_p(x)=A_p\\left[ B_p \\log\\left(\\frac{r^p+d-1}{r^{p-1}(r-1)^2}\\right)+C_p\\log r \\right],\n\\label{eq:f_p0}\n\\end{equation}\nwhere\n\\begin{align}\nA_p=&\\frac{p}{1-p}\\frac{(r-1)^2}{(r^x-1)^2(r^p+d-1)}, \\nonumber \\\\\nB_p=&(r^x-1)(1-(r^x)^{p-1}), \\nonumber \\\\\nC_p=&-(r^x)^p+p(r^x-1)+1. 
\n\\label{eq:f_p1}\n\\end{align}\nAs $r>1$, straightforward calculations yield $A_p \\leq 0$, $C_p \\leq B_p \\leq 0$ for $p \\geq 2$.\nThus, we obtain the inequality \n\\begin{align}\n\tf'_p(x) \n\t\t\\ge& A_p \\left[ B_p\\log\\left(\\frac{r^p+d-1}{r^{p-1}(r-1)^2}\\right)+B_p \\log r \\right] \n\t\t\\nonumber\\\\\n\t\t=& A_p B_p \\log\\left(\\frac{r^{p+1}+(d-1)r}{r^{p-1}(r-1)^2}\\right).\t\n\\label{eq:f_p2}\n\\end{align}\nHere the right-hand side of the inequality (\\ref{eq:f_p2}) is clearly nonnegative \nas the inequality\n\\begin{equation}\n\t\\log\\left(\\frac{r^{p+1}+(d-1)r}{r^{p-1}(r-1)^2}\\right) >0\n\t\\label{eq:f_p3}\n\\end{equation}\n\t\tcan be easily proved due to the inequality\n\\begin{equation}\n\t2r^{p-1}-r^{p-2}+(d-1) >0. \n\t\\label{eq:f_p4}\n\\end{equation}\nTherefore, the function $f_p(x)$ is monotonically increasing.\n\\end{proof}\n\nWe now present one more lemma,\nwhich tells us that the minimal output quantum $p$-R\\'{e}nyi entropy of the depolarizing channel is achieved for product state inputs. 
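The monotonicity established in Lemma~\ref{lem:f_p(x)} is also easy to probe numerically. The following Python snippet is an illustrative cross-check, not part of the formal development; the parameter triples are arbitrary, and the limit $p\/(p-1)$ follows from Eq.~(\ref{eq:falphax}) since both power terms have base smaller than one:

```python
# Grid check of Lemma lem:f_p(x): f_p increases on [2, infinity)
# and tends to p/(p-1); the parameter triples are arbitrary.
def f_p(x, p, r, d):
    ratio = ((r ** x) ** (p - 1) - 1) / (r ** x - 1)
    base1 = (r - 1) ** 2 / ((d + r - 1) ** 2 * (r ** p + (d - 1)))
    base2 = (r ** (p - 1) + r + (d - 2)) / (r ** p + (d - 1))
    return p / (1 - p) * (ratio * base1 ** x + base2 ** x - 1)

for (p, r, d) in [(2, 2.0, 2), (3, 1.5, 4), (2.5, 3.0, 3)]:
    values = [f_p(x, p, r, d) for x in range(2, 30)]
    assert all(u < v for u, v in zip(values, values[1:]))
    assert abs(f_p(200, p, r, d) - p / (p - 1)) < 1e-6
```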
\nThe lemma can be readily obtained \nby the additivity of the minimal output quantum $p$-R\\'{e}nyi entropy~\\cite{Kin03}.\n\n\\begin{lemma}\n\\label{lem:minRenyi}\nFor the $n$-partite product depolarizing channel~$\\mathcal{D}_{\\lambda}^{\\otimes n}$,\nthe quantum $p$-R\\'{e}nyi entropy of the output state is minimized for product state inputs\nand furthermore has the same value for all product state inputs;\nthat is, \n\\begin{equation}\n\tS_p^{\\min}\\left( \\mathcal{D}_{\\lambda}^{\\otimes n}\\right)\n\t\t=S_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\phi}\\bra{\\phi}\\right),\n\t\t\\label{eq:lemma_product}\n\\end{equation}\t\nfor any $n$-partite pure product state $\\ket{\\phi}$.\n\\end{lemma}\n\nWe now use Theorem~\\ref{thm:TER} and the above lemmas \nto obtain the stability theorem of the output quantum $p$-R\\'{e}nyi entropy \nfor the depolarizing channel.\n\\begin{lemma}\n\\label{lemma:STRenyi}\nLet $p \\ge 2$, $\\epsilon>0$ and $|\\psi\\rangle\\in(\\mathbb{C}^d)^{\\otimes n}$ be a state. Then\n\\begin{equation}\n\tS_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right)\n\t\t< S_p^{\\min}\\left( \\mathcal{D}_{\\lambda}^{\\otimes n}\\right)\n\t\t\t+2\\epsilon\\frac{p}{p-1}\n\t\t\t\t\\frac{r-1}{r+1}\n\t\t\t\t\t\\frac{(r^{p-1}-1)(2r^{p}+dr+d-2)}{(r^{p}+d-1)^2}\n\t\t\t\t\t+ O(\\epsilon^{3\/2})\n\\label{eq:True}\n\\end{equation}\nholds only if a pure product state~$\\ket{\\phi}$ exists such that~$\\ket{\\psi}$ satisfies\n\\begin{equation}\n\\label{eq:close}\n\t\\left|\\left\\langle\\psi\\right|\\phi\\rangle\\right|^{2} \\ge 1-\\epsilon.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe prove the contrapositive of the theorem. 
\nLet\n\\begin{equation}\n\\epsilon_0 \n= 1-\\max\\left\\{ \\left| \\left\\langle\\psi\\right| \\phi_{1},\\cdots,\\phi _{n}\\rangle\\right|^{2}:\n\t\t\t\\left| \\phi_{i}\\right\\rangle \\in \\mathbb{C}^d\\right\\}>\\epsilon.\n\\label{eq:MaxFidelity}\n\\end{equation} \nWithout loss of generality, \nwe may assume that \none of the states achieving the maximum in Eq.~(\\ref{eq:MaxFidelity}) is $\\ket{0^n}=\\ket{0}$. \nWe then have\n\\begin{equation}\n\t\\ket{\\psi}=\\sqrt{1-\\epsilon_0}\\ket{0}+\\sqrt{\\epsilon_0}\\ket{\\phi}\n\t\\label{eq:main_Thm1}\n\\end{equation}\nfor some state $\\ket{\\phi}$ such that $\\inn{0}{\\phi}=0$;\nthat is, $\\ket{\\phi}=\\sum_{x\\neq0}\\alpha_x\\ket{x}$ \nfor some $\\alpha_x $ such that $\\sum_{x\\neq0}\\left|\\alpha_x\\right|^2=1$.\nWe can write explicitly\n\\begin{equation}\n\\ket{\\psi}\\bra{\\psi}\n=\\left(1-\\epsilon_0\\right)\\ket{0}\\bra{0}+\\sqrt{\\epsilon_0\\left(1-\\epsilon_0\\right)}\n\\left( \\ket{0}\\bra{\\phi}+\\ket{\\phi}\\bra{0}\\right)+\\epsilon_0\\ket{\\phi}\\bra{\\phi}.\n\\label{eq:main_Thm2}\n\\end{equation}\nTherefore, we have\n\\begin{align}\n\t\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\n\t\t=& \\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\\ket{0}\\bra{0}+\\sqrt{\\epsilon_0}\\sqrt{1-\\epsilon_0}~\\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\t\\left(\\ket{0}\\bra{\\phi}+\\ket{\\phi}\\bra{0}\\right)\n\t\t\t\t\t+\\epsilon_0~\\mathcal{D}_{\\lambda}^{\\otimes n}\\left(\\ket{\\phi}\\bra{\\phi}\n\t\t\t\t\t\t-\\ket{0}\\bra{0}\\right) \\nonumber\\\\\n\t\t=& \\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\\ket{0}\\bra{0}+\\sqrt{\\epsilon_0}~\\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\t\\left(\\ket{0}\\bra{\\phi}+\\ket{\\phi}\\bra{0}\\right)\n\t\t\t\t\t+\\epsilon_0~\\mathcal{D}_{\\lambda}^{\\otimes n}\\left(\\ket{\\phi}\\bra{\\phi}\n\t\t\t\t\t\t-\\ket{0}\\bra{0}\\right)+O( \\epsilon_0^{3\/2} ).\n\\label{eq:main_Thm3}\n\\end{align}\nFor the last equality in Eq.~(\\ref{eq:main_Thm3}), \nwe use the Taylor 
expansion\n\\begin{equation}\n\t\\sqrt{1-x}=1-\\frac{1}{2}x+O\\left( x^2\\right)\n\\end{equation}\nfor all $|x|<1$.\nNow we use \n\\begin{align}\n\\rho &= \\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{0}\\bra{0}, \\nonumber \\\\\n\\gamma_0 &= \\mathcal{D}_{\\lambda}^{\\otimes n}\\left(\\ket{0}\\bra{\\phi}+\\ket{\\phi}\\bra{0}\\right), \\nonumber \\\\\n\\gamma_1&= \\mathcal{D}_{\\lambda}^{\\otimes n}\\left(\\ket{\\phi}\\bra{\\phi} -\\ket{0}\\bra{0}\\right), \\nonumber \\\\\nt&=\\sqrt{\\epsilon_0}, \\nonumber \\\\\n\\rho\\left(t\\right) &= \\rho+t \\gamma_0+t^2 \\gamma_1+O\\left(t^3\\right). \\nonumber\n\\end{align}\nThen Theorem~\\ref{thm:TER} implies that\n\\begin{align}\n\tS_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right) \n\t\t=&S_p\\left( \\rho\\left(t\\right)\\right)\n\t\t\t\t\t\t\\nonumber\\\\\n\t\t=&S_p\\left(\\rho\\right)+\\frac{1}{1-p}t^2\n\t\t\t\\left(p\\frac{\\operatorname{Tr}\\left(\\rho^{p-1}\\gamma_1\\right)}\n\t\t\t\t{\\operatorname{Tr}\\left(\\rho^{p}\\right)}+\\frac{\\operatorname{Tr}\\left(Q_{\\rho}\\left(\\gamma_0\\right)\\right)}\n\t\t\t\t\t{\\operatorname{Tr}\\left(\\rho^{p}\\right)}\\right)\n\t\t\t\t\t\t+O\\left(t^3\\right).\n\\label{eq:RentropyPerturbation}\n\\end{align}\nFor convenience, we let\n\\begin{equation}\n\ta=\\left(1+(d-1)\\lambda\\right)\/d,\\;\n\tb=(1-\\lambda)\/d.\n\\end{equation}\nThen we obtain the following three facts.\n\\begin{enumerate}\n\t\\item\tAs $\\rho=\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{0}\\bra{0}$ can be rewritten as\n$\\sum_{y} a^{n-|y|}b^{|y|}\\ket{y}\\bra{y}$, \n\t\\begin{equation}\n\t\t\\operatorname{Tr}\\left( \\rho^{p}\\right)\n\t\t\t=\\operatorname{Tr}\\left(\\sum_{y}\\left(a^{p}\\right)^{n-|y|}\\left(b^{p}\\right)^{|y|}\\ket{y}\\bra{y}\\right)\n\t\t\t=\\left( a^p+(d-1)b^p\\right)^n,\n\t\\end{equation}\n\twhere~$|y|$ denotes the Hamming weight (the number of nonzero entries) of a string~$y\\in\\{0,1,\\dots,d-1\\}^n$. 
\n\t\\item We can evaluate\n\t\\begin{align}\n\t\t\\operatorname{Tr}\\left(\\gamma_1\\rho^{p-1}\\right) \n\t\t=& \\operatorname{Tr}\\left(\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\\ket{\\phi}\\bra{\\phi}\\right)\\left( \\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\t\\ket{0}\\bra{0}\\right)^{p-1}\\right) -\\operatorname{Tr}\\left( \\rho^p\\right) \\nonumber\\\\\n\t\t=& \\sum_{x,x' \\neq 0,y } \\alpha_x {\\alpha}_{x'}^*\n\t\t\t\\left(a^{p-1}\\right)^{n-|y|}\\left( b^{p-1}\\right)^{|y|}\n\t\t\t\t\\operatorname{Tr}\\left( \\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\t\t\\ket{x}\\bra{x'}\\right)\\ket{y}\\bra{y}\\right) -\\operatorname{Tr}\\left( \\rho^p\\right)\\nonumber\\\\\n\t\t=& \\sum_{x \\neq 0,y} |\\alpha_x|^2\\left(a^{p-1}\\right)^{n-|y|}\\left( b^{p-1}\\right)^{|y|} \\prod_{i=1}^n \\frac{1-\\lambda(1-d)^{\\delta_{x_i y_i}}}{d}-\\operatorname{Tr}\\left( \\rho^p\\right) \\nonumber\\\\\n\t\t=& \\sum_{x \\neq 0} |\\alpha_x|^2\\left( a^p+(d-1)b^p\\right)^n\n\t\t\\left(\\frac{ a^{p-1}b+ab^{p-1}+(d-2)b^p}{a^p+(d-1)b^p}\\right)^{|x|}\n\t\t-\\operatorname{Tr}\\left( \\rho^p\\right).\n\t\t\\label{eq:main_Thm4}\n\\end{align}\n\t\\item For length-$n$ strings $j$ and $k$, let \n\t\\begin{equation}\n\t\tg_{jk}\n\t\t\t:= \\frac{\\left(a^{n-|j|}b^{|j|}\\right)^{p-1}-\\left(a^{n-|k|}b^{|k|}\\right)^{p-1}}{2\\left(a^{n-|j|}b^{|j|}-a^{n-|k|}b^{|k|}\\right)}.\n\t\\label{eq:main_Thm5}\n\\end{equation}\nThen we write\n\\begin{align}\n\t\\operatorname{Tr}\\left( Q_\\rho\\left( \\gamma_0\\right)\\right) \n\t\t=& p \\sum_{jk} g_{jk} \\left|\\left( \\mathcal{D}_{\\lambda}^{\\otimes n}\n\t\t\t\\ket{0}\\bra{\\phi}\\right)_{jk}\\right|^2 \n\t\t\t\\label{eq:main_Thm6} \\\\\n\t\t=& p \\sum_{x \\neq 0} |{\\alpha}_x|^2 \\lambda^{2|x|}\n\t\t\t\\left(\\frac{(a^{|x|})^{p-1}-(b^{|x|})^{p-1}}{a^{|x|}-b^{|x|}}\\right)\n\t\t\t\t(a^p+(d-1)b^p)^{n-|x|}.\n\t\t\t\t\\label{eq:main_Thm7}\n\\end{align}\n\\end{enumerate}\nHere $Q_\\rho$~(\\ref{eq:main_Thm6}) is a polynomial defined in 
Eq.~(\\ref{eq:lemma_GF1}), and \nall equalities can be proved by tedious but straightforward calculations \nexcept the last equality in Eq.~(\\ref{eq:main_Thm4}), \nwhich can be shown by mathematical induction on $n$.\n\nCombining the above facts, we have\n\\begin{equation}\n\tS_p\\left(\\rho\\left(t\\right)\\right)-S_p\\left(\\rho\\right)\n\t\t= t^2 \\sum_{x \\neq 0} |{\\alpha_x}|^2 f_p(|x|)+O(t^3)\n\t\t\\label{eq:main_Thm8}\n\\end{equation}\nwith \n\\begin{align}\n\tf_p(|x|)\n\t\t=&\\frac{p}{1-p}\n\t\t\t\\left(\\frac{(a^{|x|})^{p-1}-(b^{|x|})^{p-1}}{a^{|x|}-b^{|x|}}\\right)\n\t\t\t\t\\left(\\frac{\\lambda^2}{a^p+(d-1)b^p}\\right)^{|x|}\n\t\t\t\t\t\t\t\\nonumber\\\\\n\t\t\t\t\t&+\\frac{p}{1-p}\\left[\\left(\\frac{a^{p-1}b\n\t\t\t\t\t\t+ab^{p-1}+(d-2)b^p}{a^p+(d-1)b^p}\\right)^{|x|} -1\\right].\n\t\t\t\t\t\t\\label{eq:main_Thm9}\n\\end{align}\nHere it can be shown that \nthe function $f_p$~(\\ref{eq:main_Thm9}) is equal to \nthe function $f_p$ defined in Eq.~(\\ref{eq:falphax})\nby taking $r=a\/b$. 
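Incidentally, fact 1 can be confirmed by brute force for small $n$: the eigenvalues of $\rho=\mathcal{D}_{\lambda}^{\otimes n}\ket{0}\bra{0}$ are products of $a$'s and $b$'s over the $n$ sites. A short Python check (illustrative only; the parameter choices are arbitrary):

```python
import itertools

# Brute-force check of Tr(rho^p) = (a^p + (d-1) b^p)^n, where the
# eigenvalues of rho are a^{n-|y|} b^{|y|} over d-ary strings y.
def trace_rho_p(n, d, lam, p):
    a = (1 + (d - 1) * lam) / d      # diagonal entry at symbol 0
    b = (1 - lam) / d                # diagonal entry at symbols 1..d-1
    total = 0.0
    for y in itertools.product(range(d), repeat=n):
        eig = 1.0
        for yi in y:
            eig *= a if yi == 0 else b
        total += eig ** p
    closed = (a ** p + (d - 1) * b ** p) ** n
    return total, closed

for (n, d, lam, p) in [(3, 2, 0.3, 2), (2, 3, 0.5, 3), (4, 2, 0.7, 2.5)]:
    brute, closed = trace_rho_p(n, d, lam, p)
    assert abs(brute - closed) < 1e-12
```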
\nHence, Lemma~\\ref{lem:f_p(x)} implies that \n$f_p(|x|)$ is monotonically increasing in $|x|$ for all $p \\geq 2$.\nAs $\\ket{\\phi}$ does not have any weight-one components~\\cite{HM13},\nthat is, $\\alpha_x=0$ for $|x|< 2$, from Theorem \\ref{thm:TER} and Lemma \\ref{lem:minRenyi}\nwe can finally obtain the following inequality, \nand thereby complete the proof.\n\\begin{align}\n\tS_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right)-\n\t\tS_p^{\\min}\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\right)\n\t\t\t=&S_p\\left(\\rho\\left(t\\right)\\right)-S_p\\left(\\rho\\right) \n\t\t\t\t\t\t\t\\nonumber\\\\\n\t\t\t\\ge& \\epsilon_0 f_p(2)+O(\\epsilon_0^{3\/2})\n\\label{eq:ineq}\n\t\t\t\t\t\t\t\\nonumber\\\\\n\t\t\t>&\\epsilon \\frac{p}{p-1}\n\t\t\t\t\\left[\\frac{2\\lambda(1-\\lambda)(a^{p-1}-b^{p-1})\n\t\t\t\t\t(2(a^{p}-b^{p})+db^{p-1}(a+b))}\n\t\t\t\t\t\t{(2+(d-2)\\lambda)(a^{p}+(d-1)b^p )^2}\\right] +O(\\epsilon^{3\/2})\n\t\t\t\t\t\t\t\\nonumber\\\\\n\t\t=& 2\\epsilon\\frac{p}{p-1}\n\t\t\t\\frac{(r-1)(r^{p-1}-1)(2r^{p}+dr+d-2)}{(r+1)(r^{p}+d-1)^2}\n\t\t\t\t+O(\\epsilon^{3\/2}).\n\\end{align}\n\\end{proof}\n\\begin{remark}\nAlthough we have not yet established the stability theorem for $1<p<2$,\nour numerical checks suggest that it also holds in this range\n(see \\S\\ref{sec:Conclusions}).\n\\end{remark}\n\n\\begin{theorem}\n\\label{thm:stability_new}\nFor $p \\ge 2$, the depolarizing channel~$\\mathcal{D}_{\\lambda}^{\\otimes n}$ is stable\nwith respect to the output quantum $p$-R\\'{e}nyi entropy~$S_p$.\n\\end{theorem}\n\\begin{proof}\nLet $\\epsilon>0$ be given,\nand let the quantity $Q$ be~$S_p$.\nLet $\\ket{\\psi}\\bra{\\psi}$ be\nan~$\\epsilon$-almost extremal state with respect to~$S_p$ and $\\mathcal{D}_{\\lambda}^{\\otimes n}$.\nThen\n\\begin{equation}\n\t\\left|S_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right)\n\t\t-S_p^{\\min}\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\right)\\right|\n\t\t\t\\in O(\\epsilon),\n\\end{equation}\nand hence, by Lemma~\\ref{lemma:STRenyi}, \nthere exists some extremal state $\\ket{\\phi}\\bra{\\phi}$ \nwith respect to~$S_p$ and $\\mathcal{D}_{\\lambda}^{\\otimes n}$ such that\n\\begin{equation}\n\\left\\| \\ket{\\psi}\\bra{\\psi}-\\ket{\\phi}\\bra{\\phi} \\right\\|^2_1\n\\le \\epsilon.\n\\end{equation}\nThus, the depolarizing channel 
$\\mathcal{D}_{\\lambda}^{\\otimes n}$ is~$\\epsilon$-stable with respect to the quantum $p$-R\\'{e}nyi entropy~$S_p$\nfor any $\\epsilon>0$, \nand hence it is stable with respect to~$S_p$.\n\\end{proof}\n\n\\begin{remark}\n\tWe obtain the stability theorem of the depolarizing channel with respect to the output purity~\\cite{HM13}\n\tas a corollary of Theorem~\\ref{thm:stability_new}.\n\\end{remark}\n\nFurthermore, we can similarly show that if an $n$-qudit pure state is close to being a product state, \nthen its output quantum $p$-R\\'{e}nyi entropy is close to \nthe minimal output quantum $p$-R\\'{e}nyi entropy to a specific precision, as follows. \n\\begin{theorem}\n\\label{theorem:STRenyi2}\nLet $p \\ge 2$, $\\epsilon>0$ and $|\\psi\\rangle\\in(\\mathbb{C}^d)^{\\otimes n}$ be a state. Then\n\\begin{equation}\n\\label{eq:False}\n\tS_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right)\n\t\t\\geq S_p^{\\min}\\left( \\mathcal{D}_{\\lambda}^{\\otimes n}\\right)\n\t\t\t+\\epsilon\\frac{p}{p-1}+ O(\\epsilon^{3\/2})\n\\end{equation}\nimplies \n\\begin{equation}\n\\left| \\inn{\\psi}{\\phi}\\right|^{2}<1-\\epsilon\n\\label{eq:False2}\n\\end{equation}\n for any product state $|\\phi\\rangle$.\n\\end{theorem}\n\\begin{proof}\nSuppose that\n\\begin{equation}\n\\label{eq:tau}\n1-\\epsilon_1=\\max\\left\\{\\left| \\left\\langle\\psi\\right|\\phi_{1},\\cdots,\\phi _{n}\\rangle\\right|^{2}:\n\t\t\t\\ket{\\phi_{i}} \\in \\mathbb{C}^d\\right\\} \\ge 1-\\epsilon.\n\\end{equation}\nBy the same arguments as in the proof of Lemma~\\ref{lemma:STRenyi}, \nwe obtain the same equality as in Eq.~(\\ref{eq:main_Thm8}).\nThen\n\\begin{align}\n\tS_p\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\ket{\\psi}\\bra{\\psi}\\right)-\n\t\tS_p^{\\min}\\left(\\mathcal{D}_{\\lambda}^{\\otimes n}\\right)\n\t\t\t=& \\epsilon_1 \\sum_{x \\neq 0} |{\\alpha_x}|^2 f_p(|x|)+O(\\epsilon_1^{3\/2})\n\t\t\t\t\t\t\t\\nonumber\\\\\n\t\t\t<& 
\\epsilon_1\\frac{p}{p-1}+O(\\epsilon_1^{3\/2})\n\t\t\t\t\t\t\t\\nonumber\\\\\n\t\t\t\\le& \\epsilon \\frac{p}{p-1}+O(\\epsilon^{3\/2}),\n\\label{eq:epsilon}\t\n\\end{align}\nwhere the first inequality is obtained due to the monotonicity of $f_p(|x|)$\nand the second inequality results from Eq.~(\\ref{eq:tau}). \n\\end{proof}\n\n\\begin{remark}\nThe coefficient of~$\\epsilon$ in Eq.~(\\ref{eq:True}) is smaller than \nthe coefficient of~$\\epsilon$ in Eq.~(\\ref{eq:False}),\nso a gap remains between the two bounds,\nalthough it becomes close to zero for sufficiently small~$\\epsilon$. \nFurthermore, for a sufficiently large $p$, \nthe gap can be smaller than the gap for the case of $p=2$, \nas we will see in \\S\\ref{sec:Poly}.\n\\end{remark}\n\n\\section{An application: A Polygraph Test}\n\\label{sec:Poly}\nIn this section, we introduce a polygraph test \nas an application of our main results,\nnamely Lemma~\\ref{lemma:STRenyi} and Theorem~\\ref{theorem:STRenyi2}.\nLet us consider the following protocol \nwherein sender Alice transmits multiple copies of an $n$-qudit state \nthrough depolarizing channels to receiver Bob.\n\\renewcommand{\\labelenumi}{\\arabic{enumi}.}\n\\begin{enumerate} \n\t\\item Bob informs Alice of a small enough $\\epsilon>0$ chosen as an error bound.\n\t\\item Alice prepares an $n$-qudit pure state \nthat is close to a product state with fidelity at least $1-\\epsilon$ as in Eq.~(\\ref{eq:close})\nand sends multiple copies to Bob through depolarizing channels.\n\t\\item Bob estimates its output quantum R\\'{e}nyi entropy.\n\t\\item Bob determines whether Alice's preparation satisfies the requirement or not,\n\t\tand our results help Bob make the correct decision as discussed below.\n\t\t\\begin{itemize}\n\t\t\t\\item[Accept:]\n\t\t\t\tIf Bob's estimate of the output quantum R\\'{e}nyi entropy \n\t\t\t\tsatisfies Inequality~(\\ref{eq:True}) then \n\t\t\t\tAlice definitely prepared a correct state according 
to\n\t\t\t\tLemma~\\ref{lemma:STRenyi}. \n\t\t\t\\item[Reject:]\n\t\t\t\tIf Bob's estimate of the output quantum R\\'{e}nyi entropy \n\t\t\t\tsatisfies Inequality~(\\ref{eq:False}) then\n\t\t\t\tTheorem~\\ref{theorem:STRenyi2} guarantees\n\t\t\t\tthat Alice's preparation fails the requirement.\n\t\t\\end{itemize}\n\\end{enumerate}\n\\begin{remark}\n\tSome gap exists between the coefficients of~$\\epsilon$ in Eq.~(\\ref{eq:True}) \n\tand of~$\\epsilon$ in Eq.~(\\ref{eq:False}), \n\twhich means that Bob cannot detect Alice's lie \n\twhen neither Eq.~(\\ref{eq:True}) nor Eq.~(\\ref{eq:False}) holds\n\tfor the output quantum R\\'{e}nyi entropy of the state Alice sent. \n\tHowever, the probability that Alice cheats Bob can be forced to be close to zero \n\tif Bob chooses a small enough~$\\epsilon$. \n\tThus, the gap problem can be resolved in this way.\n\\end{remark}\n\nWe note that the above polygraph test can also be realized \nvia the original stability theorem~\\cite{HM13} in the same way as ours,\nas the original stability theorem is essentially equivalent to \nthe case of $p=2$ in ours.\nHowever, we show that \nif $p$ is sufficiently large then \nour gap can be smaller than the gap from the original stability theorem, as follows.\nLet a nonzero~$\\epsilon$ be fixed, \nand define the gap function\n\\begin{equation}\n\t\\operatorname{gap}(p)\n\t\t:=\\frac{p}{p-1}\\left(1-\\frac{2(r-1)(r^{p-1}-1)(2r^p+dr+(d-2))}{(r+1)(r^p+(d-1))^2}\\right),\n\\end{equation}\nwhich is the gap between the coefficients of~$\\epsilon$ \nin Eqs.~(\\ref{eq:True}) and~(\\ref{eq:False}).
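The behaviour of the gap function can be explored numerically. The following Python snippet is an illustrative check only; the closed-form limit $\lim_{p\to\infty}\operatorname{gap}(p)=1-4(r-1)\/(r(r+1))$ used in it is our own simplification, obtained by keeping the leading powers of $r^p$, and the sample values of $r$ and $d$ are arbitrary:

```python
# gap(p) from the definition above; the p -> infinity limit
# 1 - 4(r-1)/(r(r+1)) is our own simplification (leading powers of r^p).
def gap(p, r, d):
    frac = (2 * (r - 1) * (r ** (p - 1) - 1) * (2 * r ** p + d * r + (d - 2))
            / ((r + 1) * (r ** p + (d - 1)) ** 2))
    return p / (p - 1) * (1 - frac)

def gap_limit(r, d):
    return 1 - 4 * (r - 1) / (r * (r + 1))

for (r, d) in [(1.5, 2), (2.0, 3), (3.0, 5)]:
    assert gap(2, r, d) > gap_limit(r, d)    # gap(2) exceeds the limit
    assert abs(gap(150, r, d) - gap_limit(r, d)) < 5e-3
```

The moderate exponent $p=150$ avoids floating-point overflow of $r^{2p}$ while already lying close to the limit.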
\nWe claim \n\\begin{equation}\n\t\\operatorname{gap}(2)>\\lim_{p \\rightarrow \\infty}\\operatorname{gap}(p), \n\\label{eq:gap}\n\\end{equation}\nwhich is equivalent to the inequality\n\\begin{equation}\n\t (r^2+(d-1))^2(r^2+5r-4) > 4r(r-1)^2(2r^2+dr+(d-2)).\n \\label{eq:gap2}\n\\end{equation}\nMoreover, in order to prove the inequality (\\ref{eq:gap2})\nit is enough to show that\n\\begin{equation}\n\th(r)\n\t\t:=r^3-3r^2+(-2d+10)r+(14d-10)\n\\label{eq:gap3}\n\\end{equation}\nis positive for all $r>1$.\nAs its derivative is \n\\begin{equation}\nh'(r)=3r^2-6r+(-2d+10),\n\\label{eq:h2}\n\\end{equation}\n$h(r)$ is evidently positive if $d$ is 2 or 3. \nIf $d\\ge4$ then it can be readily shown that\n\\begin{equation}\nh(r)\\ge h\\left(1+\\frac{\\sqrt{6d-21}}{3}\\right)=\\frac{2\\sqrt{6d-21}}{3}-2+12d>0\n\\label{eq:gap4}\n\\end{equation} \nfor all $r>1$.\n\nWe have introduced the polygraph test as an application of our main theorem,\nand we have proved that the protocol based on our stability theorem has a smaller undecidable gap than the protocol based on the original stability theorem.\n\n\\section{Conclusions}\n\\label{sec:Conclusions}\nWe have shown that the stability theorem of the depolarizing channel holds \nfor the output quantum $p$-R\\'{e}nyi entropy for $p \\ge 2$, \nwhich was one of the open questions in Ref.~\\cite{HM13}. \nFurthermore, we have also proved that the stability theorem of the depolarizing channel holds for the output von~Neumann entropy ($p=1$), \nand have numerically checked that \nthe stability theorem \nholds \nfor several cases of $p$ with $1<p<2$.\n\nThe neutral Wright-Fisher diffusion \\eqref{eq:wfneutral} involves the mutation parameters $\\theta_1,\\theta_2>0$ (note the absence of the selection parameter). The necessary details for simulating a variate $Q_T$ from the model \\eqref{eq:wfneutral} for a fixed $T>0$, given that $Q_0 = q_0\\in(0,1)$, are given in \\citet{jenkinsspano2015}, and we briefly review the main ideas. 
It is well-known \\citep{griffiths1979,tavare1984,ethiergriffiths1993,griffithsspano2013} that, conditional on $Q_0 = q_0$, the variate $Q_T$ has a probability density function given by\n\\begin{equation}\n\\label{eq:pdfWF0}\nf(x\\giv q_0) = \\sum_{m=0}^\\infty q_m^\\theta(T) \\sum_{l=0}^m {\\mathcal Bin}(l;m,q_0){\\mathcal Beta}(x;\\theta_1+l, \\theta_2+m-l)\n\\end{equation}\nwhere $\\theta = \\theta_1 + \\theta_2$, and ${\\mathcal Bin}(\\cdot)$ and ${\\mathcal Beta}(\\cdot)$ are the Binomial and Beta density functions, given by\n\\begin{align*}\n{\\mathcal Bin}(l;m,q_0) &= {m\\choose l} q_0^l (1-q_0)^{m-l} \\\\\n{\\mathcal Beta}(x;\\theta_1, \\theta_2)&\\propto x^{\\theta_1 - 1}(1-x)^{\\theta_2 -1}\\ .\n\\end{align*}\nThe mixture weights $q_m^\\theta(T)$ in \\eqref{eq:pdfWF0} have an alternating series expansion\n\\begin{equation}\n\\label{eq:q0}\nq_m^\\theta(T) = \\sum_{k=m}^\\infty (-1)^{k-m}a_{km}^\\theta e^{-k(k+\\theta-1)T\/2}\n\\end{equation}\nwhere\n\\begin{equation*}\na_{km}^\\theta = \\frac{(\\theta+2k-1)(\\theta+m)_{(k-1)}}{m!(k-m)!}\n\\end{equation*}\nwith the convention that $a_{(x)} \\equiv \\Gamma(a+x)\/\\Gamma(a)$ for $a>0$ and $x\\ge-1$. Sampling from the density \\eqref{eq:pdfWF0} is achieved via Algorithm \\ref{alg:WF0} described below \\citep[see][]{jenkinsspano2015}.\n\n\\begin{algorithm}\n\\caption{Exact Sampling for the neutral WF diffusion:}\n\\label{alg:WF0}\n\\begin{algorithmic}[1]\n\\State Draw $\\sM$ from the discrete distribution $\\{q_m^\\theta(T), m=0,1,\\dots\\}$;\n\\State Draw $\\sL \\sim {\\mathcal Bin}(\\sM, q_0)$;\n\\State Draw $\\sQ \\sim {\\mathcal Beta}(\\theta_1+\\sL, \\theta_2+\\sM-\\sL)$;\n\\State \\textbf{Return} $\\sQ$.\n\\end{algorithmic}\n\\end{algorithm}\n\nThe computational burden in Algorithm \\ref{alg:WF0} is in Step 1, since the weights $q_m^\\theta(T)$ do not have a closed form expression and \\eqref{eq:q0} suggests an infinite amount of computation. 
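For illustration only, one can simply truncate the series \eqref{eq:q0} to obtain approximate weights, after which Steps 2--4 of Algorithm~\ref{alg:WF0} are elementary. The following Python sketch is \emph{not} the exact procedure of \citet{jenkinsspano2015}; it merely checks that the truncated weights behave like a probability distribution. Truncation orders and parameter values are chosen ad hoc, and $\theta>1$ is assumed so that all Gamma arguments stay positive:

```python
import math, random

def q_weights(theta, T, m_max=40, k_max=80):
    """Truncated alternating series for q_m^theta(T); assumes theta > 1."""
    def a_km(k, m):
        # a_{km} = (theta + 2k - 1) (theta + m)_{(k-1)} / (m! (k-m)!)
        poch = math.gamma(theta + m + k - 1) / math.gamma(theta + m)
        return (theta + 2 * k - 1) * poch / (math.factorial(m) * math.factorial(k - m))
    return [sum((-1) ** (k - m) * a_km(k, m) * math.exp(-k * (k + theta - 1) * T / 2)
                for k in range(m, k_max)) for m in range(m_max)]

def sample_wf_neutral(q0, theta1, theta2, T):
    """Steps 1-4 of Algorithm alg:WF0 with truncated weights."""
    w = q_weights(theta1 + theta2, T)
    u, m = random.random() * sum(w), 0
    while m < len(w) - 1 and u > w[m]:   # Step 1: draw M
        u -= w[m]
        m += 1
    l = sum(random.random() < q0 for _ in range(m))        # Step 2: L ~ Bin(M, q0)
    return random.betavariate(theta1 + l, theta2 + m - l)  # Step 3: Q ~ Beta

w = q_weights(2.0, 0.5)
assert abs(sum(w) - 1.0) < 1e-3 and min(w) > -1e-6
assert 0.0 < sample_wf_neutral(0.3, 1.0, 1.0, 0.5) < 1.0
```

The exact method replaces the fixed truncation by the retrospective alternating-series bounds discussed next.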
However, given the alternating series representation in \\eqref{eq:q0}, one can use ideas described in Chapter 5 of \\citet{devroye1986} to devise an exact procedure for simulating the variate $\\sM$, which only requires finite computing time. Details of such a procedure are provided in Section~3.2 of \\citet{jenkinsspano2015}. \n\n\\subsection*{Exact simulation of general WF-diffusion models}\nAs we mention above, our approach relies on being able to obtain exact draws from a general Wright-Fisher diffusion \\eqref{eq:WFgeneric}. We briefly review the general setup for sampling an SDE. Assume that $\\{X_t,\\ 0\\le t\\le T\\}$ is a stochastic process described via \n\\begin{equation}\n\\label{eq:sde}\ndX_t = \\beta(X_t)dt + \\sigma(X_t)dB_t, \\quad\\quad X(0) = x_0\\ , \\quad 0\\le t\\le T\n\\end{equation}\nwhere the functions $\\beta(\\cdot)$ and $\\sigma(\\cdot)$ are assumed to be smooth enough such that \\eqref{eq:sde} has a unique weak solution. Such conditions can be found in \\citet{karatzas2012brownian}, for example. Our goal is to simulate exact draws from the distribution of $X_t$, for some fixed $t>0$, conditional on $X_0=x_0$. In general, the transition density for the process $X_t$ is not available in closed form, not even in an infinite series representation as is the case with the neutral Wright-Fisher diffusion. Recently, in a series of papers \\citep{EA1,EA2,EA3}, the authors have developed a rejection sampling approach, which yields an exact (distribution-wise) skeleton of the full path $(X_t, 0\\le t\\le T)$ for a very large class of diffusions. Briefly, let $\\Omega = {\\mathcal C}([0,T])$ and let $\\omega$ be a typical element of $\\Omega$. Let $\\QQ$ be the probability measure induced by $\\{X_t,\\ 0\\le t\\le T\\}$ onto $\\Omega$ and $\\ZZ$ be another probability measure on $\\Omega$ which is user-selected in an appropriate way. 
Under regularity conditions \\citep[see][]{EA1} one can use the Girsanov formula to derive an expression for the Radon-Nikodym derivative\n$$\n\\frac{d\\QQ}{d\\ZZ}(\\omega) \\propto {\\mathsf G}(\\omega)\n$$\nwhere, in principle, one can arrange that ${\\mathsf G}(\\cdot) \\in (0,1)$. Thus, a rejection sampling strategy will be appropriate:\n\n\\begin{algorithm}\n\\caption{Exact sampling for general diffusions}\n\\label{alg:rejection}\n\\begin{algorithmic}[1]\n\\State Sample $\\omega\\sim \\ZZ$;\n\\State Accept $\\omega$ as a draw from $\\QQ$ with probability $\\sG(\\omega)\\in (0,1)$.\n\\end{algorithmic}\n\\end{algorithm}\n\nThe proposal distribution $\\ZZ$ has to be selected in a way that Step 1 of Algorithm \\ref{alg:rejection} can be done efficiently, e.g.\\ a biased Brownian bridge measure. We refer the reader to \\citet{EA1} for a detailed description of all of the conditions and the general setup. \n\n\\vspace{0.5cm}\n\\noindent\n\\textit{General Wright-Fisher diffusions}. If the diffusion coefficient in \\eqref{eq:sde} takes the form\n$$\n\\sigma(x) = \\sqrt{x(1-x)},\n$$\nthen the SDE \\eqref{eq:sde} is a general WF diffusion. As \\citet{jenkinsspano2015} suggest, in this case, it is efficient to select the proposal measure $\\ZZ$ to be the law induced by the neutral WF diffusion \\eqref{eq:wfneutral}. \n\n\\bibliographystyle{apalike}\n\n\\section{Introduction}\nHigh-speed Monte Carlo simulations are used for a large spectrum of\napplications from mathematics to economy. As input for such simulations, the\nprobability distributions are usually generated by pseudo-random number\nsampling, a method going back to a work of John von Neumann from\n1951~\\cite{vonNeumann:1951}. In the era of ``big data'' such methods have to\nbe fast and reliable, a sign of this necessity being the release of the very\nfirst randomness Quside processing unit in 2023~\\cite{111}. 
Still, these\nsamplings need to be cross-checked by exact methods, and for these the\nknowledge of analytical functions to describe the stochastic processes, among\nthem the error function, is of tremendous importance.\n\nBy definition, a function is called analytic if it is locally given by a\nconvergent Taylor series expansion. Even if a function itself turns out not\nto be analytic, its inverse can be analytic. The error function can be given\nanalytically; one such expression is the integral representation\ngiven by Craig in 1991~\\cite{Craig:1991}. Craig mentioned this representation\nonly in passing and did not give a derivation of it. Since then, there\nhave been a couple of derivations of this formula~\\cite{Lever:1998,%\nTellambura:1999,Stewart:2017}. In Sec.~2 we add a further one which is based\non the same geometric considerations as employed in Ref.~\\cite{Martila:2022}.\nIn Sec.~3 we give the series expansion for Craig's integral representation and\nshow the fast convergence of this series.\n\nFor the inverse error function, handbooks for special functions (cf.\\ e.g.\\\nRef.~\\cite{book}) do not unveil such an analytic property. Instead, this\nfunction has to be approximated. Known approximations date back to the\nlate 1960s and early 1970s~\\cite{Strecok:1968,Blair:1976} and reach up to\nsemi-analytical approximations by asymptotic expansion (cf., e.g.,\nRefs.~\\cite{Bergsma:2006,Dominici:2007,Dominici:2008,Winitzki:2008,Giles:2011,%\nSoranzo:2012}). Using the same geometric considerations, in Sec.~4 we develop a\ncouple of handy approximations which can easily be implemented in different\ncomputer languages, indicating the deviations from an exact treatment. In\nSec.~5 we test the CPU time and give our conclusions.\n\n\\section{Derivation of Craig's integral representation}\nRef.~\\cite{Martila:2022} provides an approximation for the Gaussian normal\ndistribution obtained by geometric considerations. 
The same considerations\napply to the error function $\\mathop{\\rm erf}\\nolimits(t)$ which is given by the normal\ndistribution $P(t)$ via\n\\begin{equation}\\label{erf}\n\\mathop{\\rm erf}\\nolimits(t)=\\frac1{\\sqrt\\pi}\\int_{-t}^te^{-x^2}dx\n =\\frac1{\\sqrt{2\\pi}}\\int_{-\\sqrt2t}^{\\sqrt2t}e^{-x^2\/2}dx=P(\\sqrt2t).\n\\end{equation}\nTranslating the results of Ref.~\\cite{Martila:2022} to the error function, one\nobtains the approximation of order $p$ to be\n\\begin{equation}\\label{Eqf2}\n\\mathop{\\rm erf}\\nolimits_p(t)^2=1-\\frac1N\\,\\sum_{n=1}^Ne^{-k_n^2t^2},\n\\end{equation}\nwhere the $N=2^p$ values $k_n$ ($n=1,2,\\ldots,N$) are found in the intervals\nbetween $1\/\\cos(\\pi(n-1)\/(4N))$ and $1\/\\cos(\\pi n\/(4N))$ and can be optimised.\nIn Ref.~\\cite{Martila:2022} it is shown that\n\\begin{equation}\\label{rteq}\n\\Big|\\mathop{\\rm erf}\\nolimits(t)-\\sqrt{1-e^{-k_{0,1}^2t^2}}\\Big|<0.0033\n\\end{equation}\nfor $k_{0,1}=1.116$, and with $14\\approx 0.0033\/0.00024$ times larger\nprecision\n\\begin{equation}\\label{rteq2}\n\\Big|\\mathop{\\rm erf}\\nolimits(t)-\\sqrt{1-\\frac12(e^{-k_1^2t^2}+e^{-k_2^2t^2})}\\Big|<0.00024,\n\\end{equation}\nwhere $k_{1,1}=1.01$, $k_{1,2}=1.23345$. 
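Both bounds, (\ref{rteq}) and (\ref{rteq2}), can be confirmed against a library implementation of the error function. The Python snippet below scans an arbitrary grid and allows a small numerical margin on top of the quoted bounds:

```python
import math

K0 = 1.116                 # k_{0,1} from Eq. (rteq)
K1, K2 = 1.01, 1.23345     # k_{1,1}, k_{1,2} from Eq. (rteq2)

def erf0(t):               # one-exponential approximation, Eq. (rteq)
    return math.sqrt(1.0 - math.exp(-(K0 * t) ** 2))

def erf1(t):               # two-exponential approximation, Eq. (rteq2)
    return math.sqrt(1.0 - 0.5 * (math.exp(-(K1 * t) ** 2)
                                  + math.exp(-(K2 * t) ** 2)))

grid = [i / 1000.0 for i in range(1, 5001)]          # 0 < t <= 5
dev0 = max(abs(math.erf(t) - erf0(t)) for t in grid)
dev1 = max(abs(math.erf(t) - erf1(t)) for t in grid)
assert dev0 < 3.5e-3       # quoted bound: 0.0033
assert dev1 < 2.6e-4       # quoted bound: 0.00024
```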
For the parameters $k_n$ taking the\nvalues $k_n=1\/\\cos(\\pi n\/(4N))$ of the upper limits of those intervals, it can\nbe shown that the deviation is bounded by\n\\begin{equation}\\label{Eqf299}\n|\\mathop{\\rm erf}\\nolimits(t)-\\mathop{\\rm erf}\\nolimits_p(t)|<\\frac{\\exp(-t^2)}{2N}\\,\\sqrt{1-\\exp(-t^2)}\\,.\n\\end{equation}\nGiven the values $k_n=1\/\\cos\\phi(n)$ with $\\phi(n)=\\pi n\/(4N)$, in the limit\n$N\\to\\infty$ the sum over $n$ in Eq.~(\\ref{Eqf2}) can be replaced by an\nintegral with measure $dn=(4N\/\\pi)d\\phi(n)$ to obtain\n\\begin{equation}\\label{A6}\n\\mathop{\\rm erf}\\nolimits(t)^2=1-\\frac4\\pi\\int_0^{\\pi\/4}\\exp\\left(\\frac{-t^2}{\\cos^2\\phi}\\right)\n \\,d\\phi.\n\\end{equation}\n\n\\section{Power series expansion}\nThe integral in Eq.~(\\ref{A6}) can be expanded into a power series in $t^2$,\n\\begin{equation}\\label{A7}\n\\mathop{\\rm erf}\\nolimits(t)^2=1-\\frac4\\pi\\sum_{n=0}^\\infty c_n\\frac{(-1)^n}{n!}(t^2)^n\n\\end{equation}\nwith\n\\begin{eqnarray}\\label{cn}\nc_n&=&\\int_0^{\\pi\/4}\\frac{d\\phi}{\\cos^{2n}\\phi}\\ =\\ \\int_0^{\\pi\/4}\n (1+\\tan^2\\phi)^nd\\phi\\ =\\ \\int_0^1(1+y^2)^{n-1}dy\\nonumber\\\\\n &=&\\sum_{k=0}^{n-1}\\begin{pmatrix}n-1\\\\k\\\\\\end{pmatrix}\\int_0^1y^{2k}dy\n \\ =\\ \\sum_{k=0}^{n-1}\\frac{1}{2k+1}\\begin{pmatrix}n-1\\\\ k\\\\\\end{pmatrix},\n\\end{eqnarray}\nwhere $y=\\tan\\phi$. The coefficients $c_n$ can be expressed by the\nhypergeometric function, $c_n={}_2F_1(1\/2,1-n;3\/2;-1)$, also known as Barnes'\nextended hypergeometric function. On the other hand, we can derive a\nconstraint for the explicit finite series expression for $c_n$ that renders\nthe series in Eq.~(\\ref{A7}) convergent for all values of $t$. In order\nto be self-contained, intermediate steps to derive this constraint and to show\nthe convergence are presented in the following. 
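Independently of the analytic argument that follows, the expansion (\ref{A7}) with the coefficients (\ref{cn}) can be verified numerically; note that the finite sum in (\ref{cn}) applies for $n\ge1$, while $c_0=\pi\/4$ follows directly from the integral. A small Python check (the truncation order is chosen ad hoc):

```python
import math

def c(n):
    # c_n of Eq. (cn) for n >= 1; c_0 = pi/4 from the integral directly.
    if n == 0:
        return math.pi / 4.0
    return sum(math.comb(n - 1, k) / (2 * k + 1) for k in range(n))

def erf_squared(t, N=60):
    # Truncation of Eq. (A7): erf(t)^2 = 1 - (4/pi) sum_n c_n (-1)^n t^{2n}/n!
    s = sum(c(n) * (-1) ** n * t ** (2 * n) / math.factorial(n)
            for n in range(N + 1))
    return 1.0 - 4.0 / math.pi * s

for t in (0.3, 1.0, 2.0):
    assert abs(erf_squared(t) - math.erf(t) ** 2) < 1e-10
```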
Necessary is Pascal's rule\n\\begin{eqnarray}\n\\lefteqn{\\begin{pmatrix}n\\\\k\\end{pmatrix}+\\begin{pmatrix}n\\\\k-1\\end{pmatrix}\n \\ =\\ \\frac{n!}{k!(n-k)!}+\\frac{n!}{(k-1)!(n-k+1)!}}\\nonumber\\\\\n &=&\\frac{n!(n-k+1+k)}{k!(n-k+1)!}\\ =\\ \\frac{(n+1)!}{k!(n+1-k)!}\n \\ =\\ \\begin{pmatrix}n+1\\\\k\\end{pmatrix}\n\\end{eqnarray}\nand the sum over the rows of Pascal's triangle,\n\\begin{equation}\\label{row}\n\\sum_{k=0}^n\\begin{pmatrix}n\\\\k\\end{pmatrix}=2^n\n\\end{equation}\nwhich can be shown by mathematical induction. The base case $n=0$ is obvious,\nas $(\\begin{smallmatrix}0\\\\0\\end{smallmatrix})=1=2^0$. For the induction step\nfrom $n$ to $n+1$ we write the first and last elements\n$(\\begin{smallmatrix}n+1\\\\0\\end{smallmatrix})=1$ and\n$(\\begin{smallmatrix}n+1\\\\n+1\\end{smallmatrix})=1$ separately and use\nPascal's rule to obtain\n\\begin{eqnarray}\n\\lefteqn{\\sum_{k=0}^{n+1}\\begin{pmatrix}n+1\\\\k\\end{pmatrix}\n \\ =\\ 1+\\sum_{k=1}^n\\begin{pmatrix}n+1\\\\k\\end{pmatrix}\n +1\\ =}\\nonumber\\\\\n &=&1+\\sum_{k=1}^n\\begin{pmatrix}n\\\\k\\end{pmatrix}\n +\\sum_{k=1}^n\\begin{pmatrix}n\\\\k-1\\end{pmatrix}+1\n \\ =\\ 2\\sum_{k=0}^n\\begin{pmatrix}n\\\\k\\end{pmatrix}\\ =\\ 2^{n+1}.\n\\end{eqnarray}\nThis proves Eq.~(\\ref{row}). Returning to Eq.~(\\ref{cn}), one has\n$0\\le k\\le n-1$ and, therefore,\n\\begin{equation}\n\\frac1{2n-1}\\le\\frac1{2k+1}\\le 1.\n\\end{equation}\nFor the result in Eq.~(\\ref{cn}) this means that\n\\begin{equation}\n\\frac1{2n-1}\\sum_{k=0}^{n-1}\\begin{pmatrix}n-1\\\\k\\end{pmatrix}\\le c_n\\le\n\\sum_{k=0}^{n-1}\\begin{pmatrix}n-1\\\\k\\end{pmatrix}=2^{n-1},\n\\end{equation}\ni.e., the existence of a real number $c_n^*$ between $1\/(2n-1)$ and $1$ such\nthat $c_n=c_n^*2^{n-1}$. 
One has\n\\begin{equation}\n\\mathop{\\rm erf}\\nolimits_p(t)^2=1-\\frac4\\pi\\sum_{n=0}^Nc_n\\frac{(-1)^n}{n!}(t^2)^n\n =1-\\frac2\\pi\\sum_{n=0}^Nc_n^*\\frac{(-2t^2)^n}{n!},\n\\end{equation}\nand because of $0\\le c_n^*\\le 1$ there is again a real number $c_N^{**}$ in\nthe corresponding open interval so that\n\\begin{equation}\n\\frac2\\pi\\sum_{n=0}^Nc_n^*\\frac{(-2t^2)^n}{n!}=c_N^{**}\\frac2\\pi\\sum_{n=0}^N\n\\frac{(-2t^2)^n}{n!}<\\frac2\\pi\\sum_{n=0}^N\\frac{(-2t^2)^n}{n!}.\n\\end{equation}\nAs the latter is the power series expansion of $(2\/\\pi)e^{-2t^2}$ which is\nconvergent for all values of $t$, the original series is convergent as well and,\ntherefore, so is $\\mathop{\\rm erf}\\nolimits_p(t)^2$, with the limiting value shown in Eq.~(\\ref{A7}). A\nmore compact form of the power series expansion is given by\n\\begin{equation}\n\\mathop{\\rm erf}\\nolimits(t)^2=\\frac4\\pi\\sum_{n=1}^\\infty c_n\\frac{(-1)^{n-1}}{n!}(t^2)^n,\\qquad\nc_n=\\sum_{k=0}^{n-1}\\frac{1}{2k+1}\\begin{pmatrix}n-1\\\\ k\\\\\\end{pmatrix}.\n\\end{equation}\n\n\\section{Approximations for the inverse error function}\nBased on the geometric approach from Ref.~\\cite{Martila:2022}, in the\nfollowing we describe how to find simple, handy formulas that, guided by\nhigher and higher orders of the approximation~(\\ref{Eqf2}) for the error\nfunction, lead to more and more advanced approximations of the inverse error\nfunction. The starting point is the degree $p=0$, i.e., the approximation in\nEq.~(\\ref{rteq}). Inverting $E=\\mathop{\\rm erf}\\nolimits_0(t)=(1-e^{-k_{0,1}^2t^2})^{1\/2}$ leads\nto $t^2=-\\ln(1-E^2)\/k_{0,1}^2$, and using the parameter $k_{0,1}=1.116$ from\nEq.~(\\ref{rteq}) gives\n\\begin{equation}\nT_0=\\sqrt{-\\ln(1-E^2)}\/k_{0,1}.\n\\label{dft2}\n\\end{equation}\nFor $0\\le E \\le 0.92$ the relative deviation $(T_0-t)\/t$ from the exact\nvalue $t$ is less than $1.11\\%$, for $0\\le E<1$ the deviation is less than\n$10\\%$. Therefore, for $E>0.92$ a more precise formula has to be used. 
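The quoted accuracy of the zeroth-order inverse is easily confirmed numerically; inverting $E=\mathop{\rm erf}\nolimits_0(t)$ gives $t=\sqrt{-\ln(1-E^2)}\/k_{0,1}$, which the following Python snippet compares with a library error function on an arbitrary grid:

```python
import math

K0 = 1.116   # k_{0,1} from Eq. (rteq)

def inv_erf0(E):
    # Inversion of E = erf_0(t) = sqrt(1 - exp(-(K0 t)^2)).
    return math.sqrt(-math.log(1.0 - E * E)) / K0

worst = 0.0
for i in range(1, 201):                  # t = 0.01 ... 2.00
    t = i / 100.0
    E = math.erf(t)
    if E <= 0.92:
        worst = max(worst, abs(inv_erf0(E) - t) / t)
assert worst < 0.0112                    # quoted: below 1.11 per cent
```

The largest relative deviation occurs near $E=0$, in accordance with the small-$t$ limit $(2\/\sqrt\pi)\/k_{0,1}-1\approx1.1\%$.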
Since such high values of $E$ appear in only $8\\%$ of the cases, this will not\nessentially influence the CPU time.\n\nContinuing with $p=1$, we insert $T_0=\\sqrt{-\\ln(1-E^2)}\/k_{0,1}$ into\nEq.~(\\ref{Eqf2}) to obtain\n\\begin{equation}\n\\mathop{\\rm erf}\\nolimits_1(T_0)=\\sqrt{1-\\frac12(e^{-k_{1,1}^2T_0^2}+e^{-k_{1,2}^2T_0^2})},\n\\end{equation}\nwhere $k_{1,1}=1.01$ and $k_{1,2}=1.23345$ are the same as for\nEq.~(\\ref{rteq2}). Taking the derivative of Eq.~(\\ref{erf}) and approximating\nit by the difference quotient, one obtains\n\\begin{equation}\n\\frac{\\mathop{\\rm erf}\\nolimits(t)-\\mathop{\\rm erf}\\nolimits(T_0)}{t-T_0}=\\frac{\\Delta\\mathop{\\rm erf}\\nolimits(t)}{\\Delta t}\\Big|_{t=T_0}\n \\approx\\frac{d\\mathop{\\rm erf}\\nolimits(t)}{dt}\\Big|_{t=T_0}=\\frac2{\\sqrt\\pi}e^{-T_0^2},\n\\end{equation}\nleading to $t\\approx T_1=T_0+\\frac12\\sqrt\\pi e^{T_0^2}(E-\\mathop{\\rm erf}\\nolimits_1(T_0))$. In\nthis case, in the larger interval $0\\le E\\le 0.995$ the relative deviation\n$(T_1-t)\/t$ is less than $0.1\\%$. Using $\\mathop{\\rm erf}\\nolimits_2(t)$ instead of $\\mathop{\\rm erf}\\nolimits_1(t)$ and\ninserting $T_1$ instead of $T_0$, one obtains $T_2$ with a relative deviation\nof at most $0.01\\%$ for the same interval. The results are shown in\nFig.~\\ref{rval012}.\n\n\\begin{figure}[ht]\\begin{center}\n\\epsfig{figure=rval012.eps, scale=0.7}\n\\caption{\\label{rval012}Relative deviations for the static approximations}\n\\end{center}\\end{figure}\n\nThe method can be optimised by a procedure similar to the shooting method in\nboundary value problems, giving dynamics to the calculation. Suppose that,\nfollowing one of the previous methods, for a particular argument $E$ we have\nfound an approximation $t_0$ for the value of the inverse error function at\nthis argument.
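Both steps are one-liners in code. The sketch below (ours, with the constants quoted above) implements $T_0$ and the corrected $T_1$, using the library `math.erf` as the exact reference:

```python
import math

K01 = 1.116                 # k_{0,1}
K11, K12 = 1.01, 1.23345    # k_{1,1}, k_{1,2}

def t0(E):
    # zeroth-order inverse: T_0 = sqrt(-ln(1 - E^2)) / k_{0,1}
    return math.sqrt(-math.log(1.0 - E * E)) / K01

def erf1(t):
    # degree-1 approximation erf_1(t)
    return math.sqrt(1.0 - 0.5 * (math.exp(-(K11 * t) ** 2)
                                  + math.exp(-(K12 * t) ** 2)))

def t1(E):
    # correction step using d erf/dt = (2/sqrt(pi)) e^{-t^2}
    T0 = t0(E)
    return T0 + 0.5 * math.sqrt(math.pi) * math.exp(T0 ** 2) * (E - erf1(T0))

print(math.erf(t1(0.8)))    # close to 0.8
```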
Using $t_1=1.01t_0$, one can adjust the improved ansatz\n\\begin{equation}\nt=t_0+A(E-\\mathop{\\rm erf}\\nolimits(t_0))\n\\end{equation}\nby inserting $E=\\mathop{\\rm erf}\\nolimits(t)$ and calculating $A$ for $t=t_1$. In general, this\nprocedure gives a vanishing deviation close to $E=0$. In this case and for\n$t_0=T_1$, in the interval $0\\le E\\le 0.7$ the maximal deviation is slightly\nlarger than $10^{-6}=0.0001\\%$, while up to $E=0.92$ the deviation is\nrestricted to $10^{-5}=0.001\\%$. A more general ansatz\n\\begin{equation}\nt=t_0+A(E-\\mathop{\\rm erf}\\nolimits(t_0))+B(E-\\mathop{\\rm erf}\\nolimits(t_0))^2\n\\end{equation}\ncan be adjusted by inserting $E=\\mathop{\\rm erf}\\nolimits(t)$ for $t=1.01t_0$ and $t=1.02t_0$, and\nthe system of equations\n\\begin{equation}\n\\Delta t=A\\Delta E_1+B\\Delta E_1^2,\\qquad\n2\\Delta t=A\\Delta E_2+B\\Delta E_2^2\n\\end{equation}\nwith $\\Delta t=0.01t_0$, $\\Delta E_i=\\mathop{\\rm erf}\\nolimits(t_i)-\\mathop{\\rm erf}\\nolimits(t_0)$ can be solved for\n$A$ and $B$ to obtain\n\\begin{equation}\nA=\\frac{(2\\Delta E_1^2-\\Delta E_2^2)\\Delta t}{\\Delta E_1\\Delta E_2\n (\\Delta E_1-\\Delta E_2)},\\quad\nB=\\frac{(-2\\Delta E_1+\\Delta E_2)\\Delta t}{\\Delta E_1\\Delta E_2\n (\\Delta E_1-\\Delta E_2)}.\n\\end{equation}\nFor $0\\le E\\le 0.70$ one obtains a relative deviation of $1.5\\cdot 10^{-8}$;\nfor $0\\le E\\le 0.92$ the maximal deviation is $5\\cdot 10^{-7}$.
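In code it is simpler to solve the $2\times2$ system directly than to transcribe the closed-form coefficients. This sketch (ours) applies the quadratic dynamical correction starting from the crude zeroth-order guess:

```python
import math

def dynamic_quadratic(E, t0):
    # fit t = t0 + A*r + B*r^2 (with r = E - erf(t0)) so that the ansatz
    # is exact at t = 1.01*t0 and t = 1.02*t0, then evaluate it at E
    dt = 0.01 * t0
    e0 = math.erf(t0)
    dE1 = math.erf(1.01 * t0) - e0
    dE2 = math.erf(1.02 * t0) - e0
    det = dE1 * dE2 ** 2 - dE2 * dE1 ** 2   # Cramer's rule for the 2x2 system
    A = (dt * dE2 ** 2 - 2 * dt * dE1 ** 2) / det
    B = (2 * dt * dE1 - dt * dE2) / det
    r = E - e0
    return t0 + A * r + B * r * r

E = 0.6
start = math.sqrt(-math.log(1 - E * E)) / 1.116   # zeroth-order T_0
print(math.erf(dynamic_quadratic(E, start)))      # very close to 0.6
```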
Finally, the\nadjustment of\n\\begin{equation}\nt=t_0+A(E-\\mathop{\\rm erf}\\nolimits(t_0))+B(E-\\mathop{\\rm erf}\\nolimits(t_0))^2+C(E-\\mathop{\\rm erf}\\nolimits(t_0))^3\n\\end{equation}\nleads to\n\\begin{eqnarray}\nA&=&(3\\Delta E_1^2\\Delta E_2^2(\\Delta E_1-\\Delta E_2)\n -2\\Delta E_1^2\\Delta E_3^2(\\Delta E_1-\\Delta E_3)\\strut\\nonumber\\\\&&\\strut\n +\\Delta E_2^2\\Delta E_3^2(\\Delta E_2-\\Delta E_3))\\Delta t\/D,\\nonumber\\\\\nB&=&(-3\\Delta E_1\\Delta E_2(\\Delta E_1^2-\\Delta E_2^2)\n +2\\Delta E_1\\Delta E_3(\\Delta E_1^2-\\Delta E_3^2)\\strut\\nonumber\\\\&&\\strut\n -\\Delta E_2\\Delta E_3(\\Delta E_2^2-\\Delta E_3^2))\\Delta t\/D,\\nonumber\\\\\nC&=&(3\\Delta E_1\\Delta E_2(\\Delta E_1-\\Delta E_2)\n -2\\Delta E_1\\Delta E_3(\\Delta E_1-\\Delta E_3)\\strut\\nonumber\\\\&&\\strut\n +\\Delta E_2\\Delta E_3(\\Delta E_2-\\Delta E_3))\\Delta t\/D,\n\\end{eqnarray}\nwhere $D=\\Delta E_1\\Delta E_2\\Delta E_3(\\Delta E_1-\\Delta E_2)\n(\\Delta E_1-\\Delta E_3)(\\Delta E_2-\\Delta E_3)$. For $0\\le E\\le 0.70$ the\nrelative deviation is restricted to $5\\cdot 10^{-10}$, while up to $E=0.92$\nthe maximal relative deviation is $4\\cdot 10^{-8}$. The results for the\ndeviations of $T_{(n)}$ ($n=1,2,3$) for the linear, quadratic and cubic dynamical\napproximations are shown in Fig.~\\ref{rval456}.\n\n\\begin{figure}[ht]\\begin{center}\n\\epsfig{figure=rval456.eps, scale=0.7}\n\\vspace{7pt}\n\\epsfig{figure=rval4567.eps, scale=0.7}\n\\caption{\\label{rval456}Relative deviation for the dynamical approximations\n (the degree is chosen to be $p=1$)}\n\\end{center}\\end{figure}\n\n\\section{Conclusions}\nIn order to test the feasibility and speed, we have coded our algorithm in the\ncomputer language C under {\\tt Slackware~15.0} on an ordinary HP laptop. The\nCPU time is estimated by calculating the value $10^6$ times in sequence.
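The timing experiment is easy to mimic in other languages; the C program itself is not reproduced in the paper, so the following Python stand-in (ours) times $10^6$ evaluations of a simple two-stage inverse (zeroth-order start plus one correction step on the exact error function):

```python
import math
import time

def inv_erf(E):
    # zeroth-order start plus one Newton-style step on the exact erf
    t = math.sqrt(-math.log(1.0 - E * E)) / 1.116
    return t + 0.5 * math.sqrt(math.pi) * math.exp(t * t) * (E - math.erf(t))

start = time.perf_counter()
for _ in range(10 ** 6):
    inv_erf(0.8)
print(f"{time.perf_counter() - start:.3f} s for 10^6 evaluations")
```

Absolute numbers are of course hardware- and language-dependent; only the relative cost of extra stages is comparable to the table below.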
The speed of the calculation of course does not depend on the value of $E$,\nas the precision is not optimised; this would be necessary for a practical\napplication. For an arbitrary\nstarting value $E=0.8$ we perform this test, and the results are given in\nTable~\\ref{tab1}. An analysis of the table shows that a further step in the\ndegree $p$ doubles the run time, while increasing the dynamics order $n$ adds a\nconstant value of approximately $0.06$ seconds to the result. Although\nincreasing the dynamics requires solving a linear system of\nequations and coding the result, this endeavour is justified, as by\nusing the dynamics one can increase the precision of the result without\nlosing calculational speed.\n\n\\begin{table}[ht]\\begin{center}\n\\caption{\\label{tab1}Run time experiment for our algorithm under C for\n$E=0.8$ and different values of $n$ and $p$ (CPU time in seconds). As\nindicated, the errors are in the last displayed digit, i.e., $\\pm 0.01$\nseconds.}\n\\begin{tabular}{|r||c|c|c|c|}\\hline\n&$n=0$&$n=1$&$n=2$&$n=3$\\\\\\hline\\hline\n$p=0$&$0.06(1)$&$0.12(1)$&$0.15(1)$&$0.18(1)$\\\\\\hline\n$p=1$&$0.13(1)$&$0.19(1)$&$0.22(1)$&$0.25(1)$\\\\\\hline\n$p=2$&$0.24(1)$&$0.29(1)$&$0.33(1)$&$0.36(1)$\\\\\\hline\n\\end{tabular}\\end{center}\n\\end{table}\n\n\\section{Introduction} \\label{sec:introduction}\nRecent semantic segmentation algorithms \\cite{DeeplabV3+:2018Chen,Wider:2019Wu} have successfully solved challenging benchmark datasets for semantic segmentation tasks like PASCAL VOC~\\citep{VOC:Everingham2015} and MS COCO~\\citep{COCO:Lin2014}.\nTo
do so, however, they use a large number of pixel-level annotations,\nwhich require extensive human labor to obtain.\nGreat attention in computer vision research has thus turned to \\emph{weakly-supervised} learning \\citep{OAA:2019Jiang,SEAM:2020Wang,MCIS:2020Sun}.\nWeakly-supervised semantic segmentation aims to classify each pixel of test images, trained only on image-level labels\n(whether a class is present in the image, but not its location).\nAlthough weakly-supervised approaches have seen success in both semantic segmentation and object localization tasks, \\citet{EvalWSOL:Choe2020} cast significant doubt on their validity and practicality.\nThey argue that although weakly-supervised learning algorithms are designed to be trained only on image-level labels, they inevitably use explicit pixel-level labels (or, equivalently, manual judgement of outputs) in hyperparameter tuning.\nSince at least \\emph{some} fully-supervised inputs are necessary,\n\\citeauthor{EvalWSOL:Choe2020} point out that simply using a small number of these,\ne.g., five per class,\nto train a fully-supervised localization model\nsubstantially outperforms a weakly-supervised counterpart.\nTo still take advantage of less-expensive weakly-supervised data points, though,\nperhaps the most natural change is\nto move to the semi-weakly supervised semantic segmentation (SWSSS) task:\nhere only a small number of pixel-level labels are provided, as well as a large number of image-level labels.\n\nSegmentation networks trained on a small number of pixel-level labels often confuse similar classes, e.g., \\labelname{cat} and \\labelname{horse}, as they architecturally tend to focus on local features rather than farther-away distinguishing areas.\nThus, the additional supervision from image-level labels can be potentially quite helpful.\nMost existing SWSSS methods generate pseudo-labels from a classifier using class activation maps (CAMs)
\\cite{CAM:Zhou2015},\nthen train a segmentation network using both pseudo-labels and true pixel labels.\nThese pseudo-labels, however, are difficult to extract:\nthey tend to focus on small discriminative regions of objects,\nignoring less-distinctive bulks of objects and often including nearby pixels that are not part of objects.\nOur analysis shows that as the baseline segmentation model improves with more training data,\nthe pseudo-labels quickly provide more incorrect than correct supervision relative to what the model would already have predicted.\nPrevious methods have thus employed additional information, such as saliency maps, or additional processing methods, adding complexity and many more hyperparameters to tune on a fully-supervised validation set.\nWe use weak supervision data differently,\nwithout requiring any side information and introducing far fewer new hyperparameters.\n\nTo motivate our method, consider \\cref{fig:gt_filtering}.\nBaseline models with small training sets predict many classes which are not present in the image.\nIf we ignore predictions for any class which is not present in the image at all, which we term \\textit{oracle filtering}, then the segmentation performance improves dramatically.\nInspired by this, we propose a simple algorithm we call \\textit{prediction filtering}.\nPrediction filtering uses a multi-label classifier trained on only image-level labels\nto filter out segmentation predictions deemed very unlikely by the classifier, replacing predictions for those pixels with the next-most-likely class allowed through the filter.\nIt is compatible with any segmentation model, and\nthe threshold for ``very unlikely'' is the only new hyperparameter introduced.\n\nAlthough the classifier is not perfect, because it is trained on a large weakly-supervised set, its predictions tend to be quite accurate.\nMoreover, it is trying to solve an easier problem than the segmentation network, using a different
architecture.\nAs we will see in the experiments, even without any additional weakly-supervised data, prediction filtering tends to improve the segmentation performance.\nApplied to baseline segmentation models, it significantly improves performance; adding it to the baseline model variant from \\citet{Dual:2020Luo} achieves (to our knowledge) the new highest performance on PASCAL VOC in the SWSSS regime.\nAs prediction filtering is so general, it can even be easily applied to models which already exploit weakly-supervised data via pseudo-labels;\ndoing so on the state-of-the-art SWSSS algorithms \\citep{CCT:2020Ouali,Dual:2020Luo} yields a new model with significantly higher performance,\nwith more improvement for models trained on fewer fully-labeled images.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{figures\/gt_filtering_smaller.pdf}\n \\caption{Filtering this model's predicted classes drastically improves segmentation quality.}\n \\label{fig:gt_filtering}\n \\vspace*{-3.5mm}\n\\end{figure}\n\n\\section{Approaches to Semantic Segmentation}\n\\label{sec:related_work}\n\n\\paragraph{Fully-supervised semantic segmentation.}\nIn this task, we have a training set of images with pixel-level class labels:\n$\\mathcal{D}_\\mathit{pixel} = \\{(x_i, y_i)\\}_{i=1}^M$, where\n$x_i \\in \\mathbb{R}^{3 \\times H_i\\times W_i}$ are images\nand $y_i \\in \\{0,1\\}^{K\\times H_i\\times W_i}$ are pixel labels,\nwith $K$ the number of classes.\nOur goal is to find a model that can predict pixel labels $y$ given a new image $x$.\n\nCurrent approaches are mostly based on convolutional networks.\nOne important factor in their success is using larger receptive fields via dilated convolutional layers \\citep{Dilated:2015Yu,DeeplabV3:2017Chen}.\nEven so, state-of-the-art algorithms still misclassify many pixels,\neven though multi-label classifiers on the same data obtain near-perfect accuracy.\nWe conjecture this is because segmentation
models still miss the global structure of the image when looking at an individual pixel.\nWe will exploit the fact that a classifier ``looks at images differently.''\n\n\\paragraph{Weakly-supervised semantic segmentation.}\nTo avoid extensive human labor for pixel-level annotation, there have been many attempts to replace pixel-level labels with image-level labels:\n$\\mathcal{D}_\\mathit{image} = \\{(x_i, z_i)\\}_{i=1}^N$,\nwith $z_i \\in \\{0,1\\}^{K}$ a ``logical or'' of each channel of the unknown $y_i$.\nWe still want a model to produce $y$.\nThe most common pipeline for weakly-supervised semantic segmentation is to generate a class activation map (CAM) \\citep{CAM:Zhou2015}, refine it with various post-processing methods, then use it as a pseudo-label to train a semantic segmentation network.\nHowever, CAMs tend to focus only on discriminative regions of an object.\nPrior work has attempted to expand the CAM to entire objects by masking out parts of an image \\citep{HaS:Singh2017} or intermediate features \\citep{ACoL:Zhang2018,ADL:Choe2019};\nthese methods do indeed expand the resulting CAM, but often too much, in very unstable ways.\nAnother popular approach is to grow the CAM regions, via methods proposed by \\citet{CRF:Krahenbuhl2011}, \\citet{DSRG:2018Huang}, or \\citet{PSA:2018Ahn}, until they converge to the object region \\cite{Boundary:2020Chen}.\n\n\\paragraph{Semi-weakly supervised semantic segmentation.}\n\\citet{EvalWSOL:Choe2020} point out fundamental problems with weakly-supervised learning, as discussed in \\cref{sec:introduction}.\nWe thus consider combining a small number of pixel-annotated images $\\mathcal{D}_\\mathit{pixel}$ with many weakly-supervised images $\\mathcal{D}_\\mathit{image}$, which we refer to as semi-weakly supervised semantic segmentation (SWSSS).\nAlthough they have not used exactly this name, many papers have already addressed this setting.\nBroadly speaking, the most common approach is to generate pseudo-labels from a CAM, and
use these in combination with true labels to train a segmentation network\n\\citep{WSSL:2015Papandreou,MDC:2018Wei,GAIN:2018Li,CCT:2020Ouali,Dual:2020Luo,AdvCAM:2021Lee,vessel2022dang,semi_context_const2021lai}.\n(For lack of space, we unfortunately elide all details of these approaches.)\nBecause image-level labels have no spatial information, however, it is fundamentally difficult to make accurate pseudo-labels.\nAs we will now argue, as the number of pixel labels increases and base models improve, the benefit of pseudo-labels drastically diminishes.\n\n\\label{sec:problem}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures\/problems_extended3.pdf}\n \\caption{A segmentation network trained on few pixel-level labels confuses similar classes, as it does not capture the global features (panel a). To alleviate this issue, pseudo-labels have been widely used; this approach is sensitive to the values of additional hyperparameters, and the quality of its supervision drastically diminishes as the number of pixel labels increases (panel c).}\n \\vspace*{-2mm}\n \\label{fig:problems}\n\\end{figure*}\n\nTo demonstrate this, we train a DeeplabV1 segmentation network \\citep{DeeplabV1:2015Chen} on $\\mathcal{D}_\\mathit{pixel}$ consisting of $1{,}464$ images from the PASCAL VOC training set, and a VGG16 classifier on $\\mathcal{D}_\\mathit{image}$ containing all $10{,}582$ images in the full training set.\nAlthough the classifier confidently predicts that only the \\labelname{cat} class is present in the given image ($\\Pr(\\labelname{cat}) = 0.97$, $\\Pr(\\labelname{horse}) = 0.03$), \\cref{fig:problems}(a) shows that the segmentation model predicts most of the cat's body as \\labelname{horse} (pink region).\nTo examine what part of the image causes these predictions, we extract CAMs for the \\labelname{cat} and \\labelname{horse} classes using GradCAM~\\cite{Gradcam:Selvaraju2016}.\nThe CAM shows that the classifier
makes this decision based on the most discriminative region of the object, i.e., the cat's head.\nThe segmentation model does the same at the green (top) location, correctly predicted as \\labelname{cat};\nat the yellow (middle) location, however, the \\labelname{horse} prediction is based mostly on the more ambiguous body area.\nAs $\\mathcal{D}_\\mathit{pixel}$ grows, this phenomenon is largely alleviated; it seems $1{,}464$ images were not enough for the segmentation model to learn ``where to look.''\n\nSupervision added by previous models from classifier CAMs, then,\nwill also tend to focus on discriminative regions of an object,\nand can therefore be misleading.\nTo estimate the effectiveness of the pseudo-labels, we define a measure called \\textit{mNet}, the mean over classes $c$ of\n\\[\n \\mathit{net}_c =\n \\frac{\\sum_{i=1}^N \\!\\!\\splitfrac{%\n \\textcolor{myBlue}{\\operatorname{area}\\left(\n \\left( \\mathit{Pseudo}_{i,c} \\setminus \\mathit{Pred}_{i,c} \\right)\n \\cap \\mathit{GT}_{i,c}\n \\right)} }{%\n - \\textcolor{myRed}{\\operatorname{area}\\left(\n \\left( \\mathit{Pseudo}_{i,c} \\setminus \\mathit{Pred}_{i,c} \\right)\n \\setminus \\mathit{GT}_{i,c}\n \\right)} }\n }{\n \\sum_{i=1}^N \\operatorname{area}(\\mathit{Pseudo}_{i,c} \\setminus \\mathit{Pred}_{i,c})\n }\n \\label{eq:net}\n.\\]\nHere\nthe subscript $\\cdot_{i,c}$ refers to the set of pixels of the $i$-th training image whose label is $c$;\n$\\mathit{GT}$ refers to the ground truth labels $y$,\n$\\mathit{Pred}$ to the predicted labels from a baseline segmentation model,\nand $\\mathit{Pseudo}$ to the CAM-based pseudo-labels.\nThe first (blue) term in the numerator measures how much correct supervision the pseudo-label adds (see \\cref{fig:problems}(b));\nthe second (red) term, how much incorrect supervision is added.\nThe denominator combines these two regions.\n$\\mathit{mNet}$ does not exactly measure how much the pseudo-labels help predictions,
but it gives a rough sense of how correct their information is.\n\n\\Cref{fig:problems}(c) shows that ``plain'' CAM (in green, lowest) indeed helps when $\\mathcal{D}_\\mathit{pixel}$ is very small, but as it grows, $\\mathit{mNet}$ even becomes negative.\nIt is possible to improve these predictions by, for instance, post-processing with a CRF (blue line, top).\nThis, however, requires a complicated structure with several additional hyperparameters to tune on a fully-supervised validation set;\nsetting these parameters correctly significantly affects performance,\nas shown, e.g., by the substantially worse $\\mathit{mNet}$ when changing the threshold for a foreground region $\\theta$ from $0.3$ to $0.2$ (orange line, middle).\n\n\\section{Prediction Filtering}\n\\label{sec:prediction_filtering}\n\\paragraph{Motivation.}\nGiven a segmentation network $f$,\nwhose output on the $i$-th image is $f(x_i) \\in \\mathbb{R}^{K \\times H\\times W}$,\nthe final prediction at each pixel $(h, w)$ is normally\n\\begin{align}\n \\hat{y}_{h,w} = \\argmax_{c \\in \\mathcal K} {f(x_i)_{c, h, w}}\n \\label{eq:normal}\n,\\end{align}\nwhere $\\mathcal K = \\{1, 2, \\dots, K\\}$.\n\\emph{Oracle filtering} (\\cref{fig:gt_filtering}) instead only considers classes actually present in the image, maximizing over $\\mathcal{\\tilde K}_i = \\{ c : z_{i,c} = 1 \\}$.\nThis improves segmentation performance substantially;\nthe mIoU (mean intersection over union, the standard segmentation performance metric)\nof a DeeplabV1 segmentation network trained on $\\mathcal{D}_\\mathit{pixel}$ with $1{,}464$ images improves from $61.8$ to $70.6$.\nWe conjecture this is because the segmentation network has not learned to appropriately consider global features when predicting at each pixel of the image,\nwhile the classifier, solving an easier problem with more data, can immediately identify relevant
areas.\n\n\n\n\n\n\n\\begin{table*}[t!]\n\\centering\n\\fontsize{9.0}{12.0}\\selectfont\n\\begin{tabular}{c|c|l|c|c|c|l}\n\\hline\nBackbone & Add. 9.1K Images & \\multicolumn{1}{c|}{Method} & Bkg. Cues & CRF & Pred. Filter & $\\!$mIoU \\\\\n\\hline\\hline\n\\multirow{8}{*}{VGG} \n & -- & DeeplabV1~\\citep{DeeplabV1:2015Chen} & -- & \\checkmark & -- & 61.8 \\\\\n \\hhline{*{1}{~}*{6}{|-}}\n & \\multirow{6}{*}{Image-level} & \\cellcolor{myGray}DeeplabV1~\\citep{DeeplabV1:2015Chen} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\ub{67.4} \\\\\n & & WSSL~\\citep{WSSL:2015Papandreou} & -- & \\checkmark & -- & 64.6 \\\\\n & & \\cellcolor{myGray}WSSL~\\citep{WSSL:2015Papandreou} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}67.1 \\\\\n & & \\citet{SemiGAN2017Souly} & -- & -- & -- & 65.8$^*$ \\\\\n & & MDC~\\citep{MDC:2018Wei} & \\checkmark & \\checkmark & -- & 65.7$^*$ \\\\\n & & FickleNet~\\citep{FickleNet:2019Lee} & \\checkmark & \\checkmark & -- & 65.8$^*$ \\\\ \n \\hhline{*{1}{~}*{6}{|-}}\n & Pixel-level & DeeplabV1~\\citep{DeeplabV1:2015Chen} & -- & \\checkmark & -- & 69.0 \\\\\n\\Xhline{2\\arrayrulewidth}\n\\multirow{4}{*}{VGG-W}\n & -- & DeeplabV1-W~\\citep{Dual:2020Luo} & -- & \\checkmark & -- & 69.2 \\\\\n \\hhline{*{1}{~}*{6}{|-}}\n & \\multirow{3}{*}{Image-level} & \\cellcolor{myGray}DeeplabV1-W~\\citep{Dual:2020Luo} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}73.8 \\\\\n & & DualNet~\\citep{Dual:2020Luo} & \\checkmark & \\checkmark & -- & 73.9 \\\\\n & & \\cellcolor{myGray}DualNet~\\citep{Dual:2020Luo} & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\ub{75.1} \\\\ \\Xhline{2\\arrayrulewidth}\n\\multirow{9}{*}{ResNet}\n & \\multirow{2}{*}{--} & DeeplabV3~\\citep{DeeplabV3:2017Chen} & -- & 
\\checkmark & -- & 72.4 \\\\\n & & \\citet{semi_context_const2021lai} & -- & -- & -- & 74.5 \\\\\n \\hhline{*{1}{~}*{6}{|-}}\n & \\multirow{6}{*}{Image-level} & \\cellcolor{myGray}DeeplabV3~\\citep{DeeplabV3:2017Chen} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}75.3 \\\\ \n & & \\citet{semi_context_const2021lai} & -- & -- & -- & 76.1 \\\\\n & & CCT~\\citep{CCT:2020Ouali} & -- & \\checkmark & -- & 74.7 \\\\\n & & \\cellcolor{myGray}CCT~\\citep{CCT:2020Ouali} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}76.0 \\\\\n & & AdvCAM~\\citep{AdvCAM:2021Lee} & -- & \\checkmark & -- & 76.1 \\\\\n & & \\cellcolor{myGray}AdvCAM~\\citep{AdvCAM:2021Lee} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\ub{77.1} \\\\ \n \\hhline{*{1}{~}*{6}{|-}}\n & Pixel-level & DeeplabV3~\\citep{DeeplabV3:2017Chen} & -- & \\checkmark &-- & 77.4 \\\\ \n\\Xhline{2\\arrayrulewidth}\n\\multirow{4}{*}{ResNet-W}\n & -- & DeeplabV3-W~\\citep{Dual:2020Luo} & -- & \\checkmark & -- & 76.2 \\\\ \n \\hhline{*{1}{~}*{6}{|-}}\n & \\multirow{3}{*}{Image-level} & \\cellcolor{myGray}DeeplabV3-W~\\citep{Dual:2020Luo} & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\ub{77.5} \\\\ \n & & DualNet~\\citep{Dual:2020Luo} & \\checkmark & \\checkmark & -- & 76.7 \\\\\n & & \\cellcolor{myGray}DualNet~\\citep{Dual:2020Luo} & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}\\checkmark & \\cellcolor{myGray}77.3 \\\\\n\\Xhline{2\\arrayrulewidth}\n\\multirow{3}{*}{HRNetV2-W48}\n & -- & OCRNet~\\citep{OCRNet:2021Strudel} & -- & -- & -- & 74.0 \\\\ \n \\hhline{*{1}{~}*{6}{|-}}\n & Image-level & \\cellcolor{myGray}OCRNet~\\citep{OCRNet:2021Strudel}& \\cellcolor{myGray}-- & \\cellcolor{myGray}-- & \\cellcolor{myGray}\\checkmark & 
\\cellcolor{myGray}\\ub{75.8} \\\\ \n \\hhline{*{1}{~}*{6}{|-}}\n & Pixel-level & OCRNet~\\citep{OCRNet:2021Strudel}& -- & -- & -- & 77.7 \\\\ \n\\Xhline{2\\arrayrulewidth}\n\\end{tabular}\n\\vspace*{-1.0mm}\n\\caption{Comparison of the state-of-the-art methods on $1{,}464$ images with pixel labels. The ``Add. 9.1K Images'' column gives which type of supervision is used for the 9.1K additional images (augmented dataset). Numbers marked with $*$ are as reported by the corresponding paper.}\n\\label{tbl:sota}\n\\vspace*{-1.0mm}\n\\end{table*}\n\n\\newcolumntype{C}{>{\\centering\\arraybackslash}p{3.2em}}\n\\newcolumntype{D}{>{\\centering\\arraybackslash}p{6.1em}}\n\\begin{table*}[t!]\n\\setlength{\\tabcolsep}{0.7pt}\n\\centering\n\\fontsize{6.5}{9.0}\\selectfont\n\\begin{tabu}{D|CCCCCCCCCCCCCCCCCCCC|C}\n\\hline\nMethod & plane & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv & mIoU \\\\\n\\hline\\hline\nDeeplabV1 & 71.1 & 37.1 & 78.5 & 52.7 & 58.3 & 79.4 & 72.0 & 73.2 & 20.6 & 58.0 & 56.1 & 66.5 & 55.9 & 76.1 & 76.4 & 40.6 & 65.4 & 42.8 & 65.2 & 53.3 & 61.4 \\\\\n+ Pred.\\ Filter & \\ub{76.2} & \\ub{38.9} & \\ub{82.2} & \\ub{58.2} & \\ub{61.2} & \\ub{85.1} & \\ub{76.8} & \\ub{84.4} & \\ub{22.6} & \\ub{73.0} & \\ub{56.2} & \\ub{79.3} & \\ub{76.2} & \\ub{82.4} & \\ub{78.2} & \\ub{46.0} & \\ub{80.4} & \\ub{43.6} & \\ub{71.5} & \\ub{55.2} & \\ub{67.6} \\\\\n\\hline\nDeeplabV3-W & \\ub{89.3} & 60.2 & 80.5 & 56.4 & 73.7 & 92.5 & 83.8 & \\ub{92.4} & 31.1 & 83.6 & \\ub{69.9} & \\ub{85.3} & 81.9 & 84.8 & \\ub{85.4} & 63.2 & 84.2 & 52.7 & \\ub{83.9} & \\ub{69.8} & 76.1 \\\\\n+ Pred.\\ Filter & \\ub{89.3} & \\ub{61.7} & \\ub{81.5} & \\ub{58.0} & \\ub{73.8} & \\ub{92.6} & \\ub{84.3} & 91.3 & \\ub{34.4} & \\ub{84.9} & 69.8 & 85.1 & \\ub{89.4} & \\ub{86.2} & 85.1 & \\ub{64.3} & \\ub{89.0} & \\ub{54.3} & 83.2 & 68.1 & \\ub{77.2} 
\\\\\n\\Xhline{2\\arrayrulewidth}\n\\end{tabu}\n\\vspace*{-1.0mm}\n\\caption{Evaluation results of DeeplabV1 (VGG-based) and DeeplabV3-W (ResNet-based) models on the test set.}\n\\vspace*{-4.0mm}\n\\label{tbl:test}\n\\end{table*}\n\n\\paragraph{Prediction filtering.}\nInspired by this phenomenon, we propose a simple post-processing method, \\emph{prediction filtering}.\nGiven $\\mathcal{D}_\\mathit{image}$ with a large number of images, we can train a highly accurate multi-label classifier $g$;\na ResNet50 achieves $99\\%$ accuracy and $97.5\\%$ average precision on PASCAL VOC.\nHence, we constrain predictions to come from the classifier's predictions instead of the ground-truth classes,\n$\\mathcal{\\hat K}_i = \\{ c : g(x_i)_c > \\tau \\}$,\nwhere $g(x_i)_{c}$ is the output logit of $g$ for class $c$,\nand $\\tau$ is a threshold to determine the presence of a class in an image.\nWe provide full pseudocode in the appendix.\n\nCompared to other SWSSS algorithms, prediction filtering has several advantages.\nFirst, the architecture is simple. It only requires an additional classifier, which can be trained in parallel with the segmentation network; most existing methods require training a classifier first.\nSecond, it requires only a single additional hyperparameter, the threshold $\\tau$, far fewer than required by other SWSSS algorithms.\nFor instance, MDC~\\cite{MDC:2018Wei} requires two thresholds to determine the foreground and background regions, in addition to selecting the number of dilation layers with a different rate for each layer.\n(We provide a comprehensive comparison of hyperparameter counts in the appendix.)\nPrediction filtering thus minimizes the requirements on the additional fully-supervised validation set.\nThird, it can be independently added to any segmentation algorithm, including existing SWSSS algorithms; we do so in our experiments.
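Conceptually, prediction filtering is just a masked argmax over the class dimension. A minimal NumPy sketch (ours — the authors' actual pseudocode is in their appendix; shapes and names here are our assumptions):

```python
import numpy as np

def prediction_filter(seg_logits, cls_logits, tau):
    """seg_logits: (K, H, W) scores from the segmentation network;
    cls_logits: (K,) logits from the multi-label classifier g."""
    keep = cls_logits > tau                       # \hat{K} = {c : g(x)_c > tau}
    masked = np.where(keep[:, None, None], seg_logits, -np.inf)
    return masked.argmax(axis=0)                  # per-pixel class prediction

# toy example: the classifier rules out class 1, so class 2 takes over
seg = np.zeros((3, 2, 2))
seg[1] = 5.0          # segmentation net prefers class 1 everywhere
seg[2] = 1.0
cls = np.array([2.0, -3.0, 4.0])
print(prediction_filter(seg, cls, tau=0.0))   # all pixels -> class 2
```

A practical implementation should also guard against the degenerate case where no class passes the threshold (here the argmax over all-`-inf` columns would silently return class 0).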
\n\n\n\n\n\n\n\n\n\n\\paragraph{Effect on performance.}\n\nPrediction filtering helps performance when an incorrect prediction is filtered out and the ``backup'' prediction is correct;\nit hurts when a correctly-predicted object is incorrectly filtered.\nIt can also change an incorrect prediction to a different incorrect prediction;\nthis can in fact either increase or decrease the mIoU score.\n\nFor a non-perfect classifier and reasonable setting of $\\tau$, it is conceivable for prediction filtering to hurt segmentation performance -- although we did not observe this in our experiments.\nThere is always a value of the threshold $\\tau$, however, for which prediction filtering at least does not hurt:\njust take $\\tau \\to -\\infty$, in which case no predictions are changed.\nAs $\\tau$ increases, it likely (though not certainly) removes incorrect predictions before it begins removing any correct predictions.\nIn another extreme, for a perfect classifier, prediction filtering approaches oracle filtering;\nclearly, oracle filtering may not achieve perfect performance, but it can only help.\n\n\n\n\n\n\\begin{figure*}[t!]\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figures\/various_supervision_resnet_6_v4.pdf}\n \\caption{Performance of the prediction filtering on various models and levels of supervision (M).}\n \\label{fig:various_supervision}\n\\end{figure*}\n\n\n\\newcolumntype{F}{>{\\centering\\arraybackslash}p{2.0em}}\n\\newcolumntype{E}{>{\\centering\\arraybackslash}p{4.0em}}\n\\begin{table*}[t!]\n\\begin{minipage}{.53\\textwidth}\n\\centering\n\\fontsize{9.0}{12.0}\\selectfont\n\\begin{tabu}{cc|EE|EE}\n\\hline\n\\multirow{2}{*}{Filtering} &\n\\multirow{2}{*}{CRF} &\n\\multicolumn{2}{c|}{DeeplabV1} &\n\\multicolumn{2}{c}{DeeplabV3}\\\\\n\\cline{3-6}\n& & $M$=$500$ & $M$=$1{,}464$ & $M$=$500$ & $M$=$1{,}464$ \\\\\n\\hline\n\\hline\n & & 49.6 & 57.2 & 57.0 & 70.6 \\\\\n \\checkmark & & 61.4 & 64.6 & 64.8 & 74.0 \\\\\n & \\checkmark & 53.9 & 61.8 & 
58.4 & 72.4 \\\\\n \\checkmark & \\checkmark & \\ub{63.6} & \\ub{67.4} & \\ub{65.9} & \\ub{75.3} \\\\\n\\Xhline{2\\arrayrulewidth}\n\\end{tabu}\n\\end{minipage}%\n\\begin{minipage}{.52\\textwidth}\n\\centering\n\\fontsize{9.0}{12.0}\\selectfont\n\\begin{tabu}{cc|FFFF}\n\\hline\n\\multirow{2}{*}{Filtering} & \\multirow{2}{*}{CRF} & \\multicolumn{4}{c}{Pixel-level labels ($M$)} \\\\\n\\cline{3-6}\n & & 200 & 500 & 800 & 1,464 \\\\\n\\hline\n\\hline\n & & 43.3 & 49.9 & 52.7 & 57.2 \\\\\n \\checkmark & & 46.1 & 54.0 & 57.1 & 62.0\\\\\n & \\checkmark & 46.3 & 53.9 & 56.8 & 61.8 \\\\\n \\checkmark & \\checkmark & \\ub{48.2} & \\ub{56.5} & \\ub{59.8} & \\ub{64.9} \\\\\n\\Xhline{2\\arrayrulewidth}\n\\end{tabu}\n\\end{minipage}%\n\\caption{{Left}: Performance of model variants with $\\lvert\\mathcal{D}_\\mathit{pixel}\\rvert = M$ and $\\lvert\\mathcal{D}_\\mathit{image}\\rvert = 10,582$. {Right}: the same, for a DeeplabV1 baseline, but with the classifier trained only on the same $M$ images in $D_{pixel}$.}\n\\vspace*{-1.0mm}\n\\label{tbl:crf_and_partial}\n\\end{table*}\n\n\n\\section{Experimental Evaluation}\n\\label{sec:experiment}\n\n\n\\paragraph{Dataset.}\n\nWe evaluate prediction filtering on PASCAL VOC 2012 \\citep{VOC:Everingham2015} \nwhich contains $10{,}582$ training, $1{,}449$ validation, and $1{,}456$ test images.\nFor SWSSS, we follow the training splits of \\citet{CCT:2020Ouali}, where $1{,}464$ images are used for $\\mathcal{D}_\\mathit{pixel}$.\nAs with previous work, we evaluate segmentation performance by mean Intersection over Union (mIoU),\ngenerally on the validation set.\nTest set performance is obtained from the PASCAL VOC evaluation server, without any tricks such as multi-scale or flipping.\n\n\\paragraph{Implementation.}\nTo verify the robustness of our method, we experiment with five semantic segmentation baselines: DeeplabV1 (based on VGG16), DeeplabV3 (based on ResNet-101), their deeper variants with wider receptive fields used by 
\\citet{Dual:2020Luo}, which we call DeeplabV1-W and DeeplabV3-W, and a Transformer model called OCRNet~\\cite{OCRNet:2021Strudel} (based on HRNetV2-W48~\\cite{HRNet2020Wang}).\nFor prediction filtering, we use a ResNet50 classifier.\nWe also apply prediction filtering to several existing SWSSS models.\nAlthough CRF post-processing is no longer commonly used in semantic segmentation tasks,\nin our experiments it still significantly improves the performance of models trained on a small number of pixel-level labels.\nWe thus apply CRFs by default except when otherwise specified.\n\n\n\n\n\n\n\n\\begin{figure*}[t!]\n \\centering\n \\begin{subfigure}[b]{0.47\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/orig_resnet.png}\n \\caption{Baseline model}\n \\label{fig:confusion-nofilter}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.47\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/updated_resnet.png}\n \\caption{With prediction filtering}\n \\label{fig:confusion-filtered}\n \\end{subfigure}\n \\vspace*{-0.8mm}\n \\caption{Pixel-level confusion matrices for DeeplabV1 models trained on $1{,}464$ pixel-level labels. Each entry shows the number of pixels (on a logarithmic scale; $0$ values are plotted as if they were $1$) whose true label is given according to the row, and whose predicted label is that in the column. 
Labels are sorted into rough categories to show block structure.}\n \\vspace*{-1.0mm}\n \\label{fig:confusion-matrices}\n\\end{figure*}\n\n\\begin{figure*}[h!]\n \n \\centering\n \\includegraphics[width=0.92\\textwidth]{figures\/qualitative4.pdf}\n \\vspace*{-1.5mm}\n \\caption{Qualitative results: a successful case (top row) and a failure (bottom row),\n \n using DeeplabV1 trained on $1{,}464$ pixel-labeled images.\n \n \n }\n \\vspace*{-3.0mm}\n \\label{fig:qualitative}\n\\end{figure*}\n\n\n\n\n\\paragraph{Comparison with state-of-the-art.}\nIn \\cref{tbl:sota}, we compare the performance of prediction filtering to existing methods when $\\lvert\\mathcal{D}_\\mathit{pixel}\\rvert$ is $1{,}464$.\nWe reproduce results for Deeplab, OCRNet, WSSL, CCT, DualNet, and AdvCAM.\\footnote{%\nOur AdvCAM result is substantially worse than reported by \\citet{AdvCAM:2021Lee}, because we did not apply tricks such as multi-scale and flipping at inference time, for fair comparison with the other state-of-the-art methods.}\nAmong VGG-based methods, DeeplabV1 with prediction filtering outperforms the other methods without using any additional information.\nSimilarly, filtering also improves models with stronger backbones, though the margin of improvement is less dramatic since the baseline is better.\nPrediction filtering can help even when it does not involve adding any new training data:\nit significantly helps WSSL, CCT, AdvCAM, and DualNet, although they already use weakly-labeled data.\nIt is worth highlighting that simply adding prediction filtering to the DeeplabV3-W baseline achieves the new highest performance, slightly higher than DualNet (with a ResNet-W backbone) with prediction filtering; both are notably better than the previous state-of-the-art (DualNet with a ResNet-W backbone without prediction filtering).\nPrediction filtering on top of DualNet also sets the new state-of-the-art for VGG-based models.\n\n\n\n\n\n\n\n\n\n\\paragraph{Results on the test 
set.}\nWe further evaluate prediction filtering on the test set of VOC2012.\nIn \\cref{tbl:test}, we provide the performance of DeeplabV1 and DeeplabV3-W on the test set, as well as with prediction filtering applied.\nWe can observe that prediction filtering improves the intersection-over-union (IoU) scores for most of the $21$ classes,\nleading to significant improvements in terms of mIoU (as on the validation set).\n\n\n\n\n\n\n\n\n\n\\paragraph{Various levels of supervision.}\n\\Cref{fig:various_supervision} shows the segmentation performance of DeeplabV1, DualNet with a VGG backbone, and DeeplabV3-W trained on $200$, $500$, $800$, and $1{,}464$ images with pixel labels, with and without prediction filtering.\nThe blue (bottom) line shows performance of the base model;\norange (middle) shows with prediction filtering;\ngreen (top) is with oracle filtering, which upper-bounds the improvement that prediction filtering could achieve with a better classifier.\nFor smaller numbers of pixel labels, the performance gain from prediction filtering is drastically larger.\nFor example, at $800$ images with pixel labels,\nDeeplabV1 goes from $56.8 \\rightarrow 64.4$,\nDeeplabV3-W from $72.3 \\rightarrow 74.3$.\nThe improvements for $200$ pixel labels are $12.4$, $1.8$, and $4.8$.\n\n\n\n\n\\paragraph{Relationship with CRF.}\nCRFs adjust the prediction for each pixel by encouraging nearby, similar-colored pixels to have the same label. 
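For concreteness, the filtering rule being discussed can be sketched in a few lines of NumPy. This is a minimal illustration under assumed conventions: the function name `prediction_filter`, the `(C, H, W)` logit layout, and the per-class classifier score vector are hypothetical and not taken from the paper.

```python
import numpy as np

def prediction_filter(logits, class_scores, tau):
    """Suppress segmentation channels for classes whose image-level
    classifier score falls below the threshold tau, then take the
    per-pixel argmax over the remaining channels.

    logits:       (C, H, W) per-pixel segmentation scores
    class_scores: (C,) image-level classifier scores
    """
    keep = class_scores >= tau
    keep[0] = True  # assume channel 0 is background; never filter it
    filtered = np.where(keep[:, None, None], logits, -np.inf)
    return filtered.argmax(axis=0)
```

With `tau = -inf` every class is kept and the baseline argmax is returned unchanged, matching the thresholding argument made earlier; CRF refinement then acts on the same per-pixel label map.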
This adjustment usually improves the segmentation performance by refining detailed boundaries and making class predictions across an object more consistent.\nThe latter role overlaps to some extent with prediction filtering.\nIf the size of the wrong prediction is large, though, a CRF might expand the area of the wrong prediction, rather than remove it.\n\\Cref{tbl:crf_and_partial}, as well as the qualitative results to come shortly, shows that the methods complement one another.\n\n\n\\paragraph{Without image-level labels.}\nAlthough image-level labels are easier to obtain than pixel-level labels, annotation effort is still nontrivial.\nOur hypothesis about why prediction filtering works, however, is largely that classifiers ``look at images differently'' than segmentation networks do.\nIt thus might help even if it does not use any additional data: $\\mathcal{D}_\\mathit{pixel}$ and $\\mathcal{D}_\\mathit{image}$ contain the same images.\n\\cref{tbl:crf_and_partial} (right) shows this is the case.\nEven without introducing any actual new information, prediction filtering still improves mIoU significantly.\n\n\n\n\n\n\n\n\n\n\n\\paragraph{Changes between classes.}\nIn \\cref{sec:problem}, we showed some qualitative evidence that a segmentation network trained with few pixel labels tends to confuse similar classes,\nand that prediction filtering can help to compensate for this by looking at other parts of the image.\nTo further demonstrate this, \\cref{fig:confusion-matrices} shows the pixel-level confusion matrix for a DeeplabV1 model with CRF before and after prediction filtering.\n\\cref{fig:confusion-matrices}(a) shows a strong block structure where pixels from one animal class are often confused for another animal class, or vehicles for vehicles.\nIn \\cref{fig:confusion-matrices}(b), prediction filtering has dramatically reduced these types of mistakes.\n\n\n\n\n\n\n\n\n\n\n\\paragraph{Qualitative results.}\n\\Cref{fig:qualitative} shows a success (top) and failure 
(bottom) for prediction filtering.\nAt top, an object is mostly incorrectly predicted -- \\labelname{bike} as \\labelname{motor bike} -- and CRF only makes this worse.\nThe classifier in filtering, however, correctly identifies that there is no \\labelname{motor bike}, and the model's ``second choice'' prediction is largely correct.\nIn the failure case, an object (\\labelname{table}) is mostly occluded;\nthe segmentation model still identifies it, but the classifier misses it.\n\n\\paragraph{Additional ablation studies.}\nThe appendix provides further experiments, including alternative approaches to prediction filtering, performance with model variants, and a runtime comparison between prediction filtering and CRFs. \n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nMost existing semi-weakly supervised semantic segmentation algorithms exploit the pseudo-labels extracted from a classifier.\nDoing so, however, requires a complicated architecture and extensive hyperparameter tuning on fully-supervised validation sets.\nWe propose \\textit{prediction filtering}, a simple post-processing method that restricts segmentation to the classes a classifier is confident are present.\nOur experiments demonstrated that adding this method to baselines achieves the new highest performance on PASCAL VOC in SWSSS regimes, and that adding it to existing SWSSS algorithms uniformly improves their performance.\nWe expect prediction filtering can become a standard post-processing method for segmentation, along with CRFs, at least when a relatively large number of weakly-labeled images are available and the portion of class labels present in most images is low.\n\n\\bibliographystyle{named}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn the last few years there has been a renewed interest in \nthe study of the superconformal index of 4d~$\\mathcal{N}=1$ superconformal field theories (SCFTs) and,\nin particular, $\\mathcal{N}=4$ super Yang-Mills 
(SYM). \nThe index in question is the supersymmetric partition function of the SCFT on~$S^3 \\times S^1$\nwhich receives contributions from BPS states that preserve two supercharges~$(\\mathcal Q, \\overline{\\mathcal{Q}})$. \nIn the large-$N$ limit, the expectation from AdS\/CFT \nis that the index should account for the entropy of the BPS black holes (BH) \nthat preserve the same two supercharges in the dual supergravity on~AdS$_5$. \nThis question was introduced in~\\cite{Sundborg:1999ue, Aharony:2003sx, Kinney:2005ej}, \nand the work of the last few years has shown that the index indeed captures the BH entropy\nin different asymptotic \nlimits~\\cite{Hosseini:2017mds,Cabo-Bizet:2018ehj, Choi:2018hmj,Benini:2018ywd,\nChoi:2018vbz, Honda:2019cio,ArabiArdehali:2019tdm,Kim:2019yrz,Cabo-Bizet:2019osg,\nCabo-Bizet:2019eaf, Benini:2020gjh, Amariti:2019mgp,Lezcano:2019pae,Lanir:2019abx,David:2020ems, \nCabo-Bizet:2020nkr,Murthy:2020rbd,Agarwal:2020zwm,Cabo-Bizet:2020ewf,Copetti:2020dil,Goldstein:2020yvj}. \n\nThe focus of the present paper is the \\emph{Cardy-like limit} in which the BH entropy \nbecomes very large. In the canonical ensemble, this translates to the study of the exponential growth of the \nindex as~$\\t \\to 0$, where the parameter~$\\t$ is the chemical potential dual to the charge.\nAs pointed out in~\\cite{Cabo-Bizet:2019eaf}, the $\\t \\to 0$ limit is in fact one of an infinite number of \ninequivalent Cardy-like limits in which the index is expected to grow exponentially. \nThese limits correspond to~$\\t$ approaching a rational number or, equivalently,~$q = e^{2\\pi \\i \\t}$ \napproaching a root of unity. 
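As an elementary numerical illustration of this statement (a sketch added here, not a computation from the paper), one can check that the nome $q={\\rm e}^{2\\pi \\i \\t}$ tends to a primitive $m$-th root of unity as $\\t\\to -n\/m$ along $\\widetilde\\t = m\\t+n \\to 0$:

```python
import cmath

def q_of(tau):
    """Nome q = exp(2*pi*i*tau) for tau in the upper half-plane."""
    return cmath.exp(2j * cmath.pi * tau)

# approach tau -> -n/m along tilde_tau = m*tau + n -> 0 (upper half-plane)
m, n = 3, 1
tilde_tau = 1e-9 + 1e-9j
tau = (-n + tilde_tau) / m
q = q_of(tau)
```

Here `q**m` is close to $1$ while `q` itself stays far from $1$ (since $\\text{gcd}(m,n)=1$), i.e.~$q$ approaches the primitive root ${\\rm e}^{-2\\pi \\i n\/m}$.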
\nIn this paper we analyze the 4d superconformal index near a general root of unity, \nand find interesting relations to three-dimensional Chern-Simons (CS) theory.\nThe main statement is that the asymptotics of the index near a rational point~$-n\/m$ is equal \n(to all orders in perturbation theory in deviations~$\\widetilde \\t = m\\t +n$ from the rational point) to \nthe partition function of a certain 3d $\\mathcal{N}=2$ gauge theory with \nChern-Simons couplings that involve background as well as dynamical fields on \nan~$S^3\/\\mathbb{Z}_m$ orbifold. The background couplings give rise to singular terms \nat~$\\text{O}(1\/\\widetilde \\t^2)$ and~$\\text{O}(1\/\\widetilde \\t)$ that govern the growth of the index, while the \nconstant~$\\text{O}(1)$ term receives contributions from both background fields and \nthe dynamical Chern-Simons theory. \n\nWe demonstrate this statement from two points of view---by direct asymptotic analysis of \nthe index near rational points, and from an analysis of the reduced three-dimensional theory and calculating the\nvarious couplings using high-temperature effective-field theory (EFT) techniques. \nThe latter method, based on~\\cite{DiPietro:2014bca, DiPietro:2016ond}, \nrelates the high-temperature asymptotics \nof the index to a low-energy effective \nfield theory, in the spirit of the Cardy formula.\\footnote{In the high-temperature picture \nthe (Euclidean) time direction is taken along the $S^1$, while in the low-energy picture \ntime is a fiber inside the $S^3$. Relating the two pictures involves swapping time and \nspace as in the derivation of the 2d Cardy formula~\\cite{Cardy:1986ie}. Unlike in the \ntwo-dimensional context where one uses~$SL(2,\\mathbb{Z})$ automorphy to relate the \nswapped problem to the original one, here we do not have an a priori understanding \nof the automorphic properties of the 4d index~$\\mathcal I(\\tau)$. Aspects of this question are being \naddressed in~\\cite{GMZ}. 
See also~\\cite{Shaghoulian:2016gol,Gadde:2020bov} for \nrelated work on modular-type transformation properties relating different indices, \nand~\\cite{Razamat:2012uv} for a discussion of the automorphic behavior of a different \nindex in $\\mathcal{N}=2$ SCFTs.}\n\n\n\n\\vskip 0.4cm\n\n\\noindent {\\bf The four-dimensional superconformal index and its asymptotic growth}\n\n\\vskip 0.1cm\n\nIn this paper we study~$\\mathcal{N}=1$ gauge theories with a Lagrangian description and a $U(1)_R$ symmetry, \nwith a focus on~$\\mathcal{N}=4$ SYM which we use to illustrate some statements in detail. \nThe symmetry algebra of $\\mathcal{N}=1$ SCFT on $S^1\\times S^3$ is~$SU(2,2|1)$,\nwhich includes the energy~$E$ which generates translations around~$S^1$, \nthe angular momenta~$J_1$, $J_2$ on~$S^3$, \nand the $U(1)$ R-charge~$Q$. \nOne can pick a complex supercharge obeying the following algebra, \n\\be \\label{QQbaralg}\n\\{\\mathcal Q, \\overline{\\mathcal{Q}} \\} \\= E-J_1-J_2-\\tfrac{3}{2}\\,Q \\,.\n\\ee\nThe most general index built out of the~$\\mathcal{N}=1$ superconformal algebra is an extension of the \nWitten index of~$\\mathcal Q$ and is defined as the following trace over the physical Hilbert space,\n\\be \\label{defindex}\n\\mathcal I(\\sigma, \\t) \n\\= \\, {\\rm Tr}_{\\mathcal{H}}\\, (-1)^F {\\rm e}^{- \\gamma \\{\\mathcal Q, \\overline{\\mathcal{Q}} \\} \n+2 \\pi \\i \\sigma (J_1+\\frac{1}{2}Q)+2 \\pi \\i \\t (J_2+\\frac{1}{2}Q)} \\,.\n\\ee\nThe trace~\\eqref{defindex} only receives contributions from states annihilated by the supercharges \n($\\frac14$-BPS states) so that the right-hand side of~\\eqref{QQbaralg} vanishes for these states. 
\nThis index~$\\mathcal I(\\sigma,\\t)$ can be calculated from either Hamiltonian or functional integral methods \nand reduces to a unitary matrix integral~\\cite{Romelsberger:2005eg,Kinney:2005ej,Nawata:2011un,Assel:2014paa}, \nwhich can be written as an integral over the space of gauge holonomies around the~$S^1$\nof certain infinite products, as written in Equation~\\eqref{eq:pqIndex}.\n\n\nOur focus in this paper is the analog, in the present context, of the \nhigh-temperature Cardy limit of 2d CFT. This means fixing the rank and taking the \ncharges~$(J_i,Q)$ to be larger than any other scale in the theory.\nIn the canonical ensemble this translates to taking~$\\Im \\, \\sigma, \\, \\Im \\, \\t \\to 0$ at fixed rank. \nIn order to calculate the asymptotic growth of states along a certain direction in the charge lattice, \none needs to fix the relation between~$\\sigma$ and~$\\t$.\nWe study\\footnote{Our methods can be generalized to study the case where~$\\sigma$ and~$\\t$ are linearly\ndependent over the rationals, but we shall not develop this in the present paper.\n} \nthe slice~$\\sigma=\\t-n_0$ with~$n_0$ an integer, \nas in~\\cite{Cabo-Bizet:2019osg, Cabo-Bizet:2019eaf, Cabo-Bizet:2020nkr}.\nSetting~$2J =J_1+J_2$, the resulting canonical index~$\\mathcal I$ is given by\n\\be \\label{eq:n0Index}\n\\mathcal I(\\t;n_0) \\= \\, {\\rm Tr}_{\\mathcal{H}}\\, (-1)^F {\\rm e}^{-\\gamma \\{\\mathcal Q, \\overline{\\mathcal{Q}} \\} \n-2 \\pi \\i n_0 (J_1+\\frac{1}{2}Q)+2 \\pi \\i \\t (2J+Q)} \\,.\n\\ee\nThe large-charge asymptotics then implies~$\\Im\\,\\t \\to 0$, \nwhile~$\\Re\\,\\t$ is not fixed a priori by the limit. \nWe consider asymptotic limits as~$\\t$ approaches a rational \nnumber~$\\t \\to -n\/m$ with~$\\text{gcd}(m,n)=1$,\nintroduced in the present context as new Cardy-like limits in~\\cite{Cabo-Bizet:2019eaf}. \nThe index $\\mathcal{I}$ clearly depends on the gauge group $G$. 
We generally suppress \nit in our notation, but sometimes use the notation $\\mathcal{I}_N$ to emphasize the \ndependence on $N$ for $U(N)$ or $SU(N)$ $\\mathcal{N}=4$ SYM theory (which \nshould be clear from the context). \n\nOur motivation to consider these rational points comes from the study\nof the index~$\\mathcal I_N(\\t)$ of $\\mathcal{N}=4$ SYM in the large-$N$ \nlimit.\\footnote{Another motivation comes from the mathematical literature on $q$-series, where \nit is also natural to consider expansions around roots of unity. We thank D.~Zagier for emphasizing this point to us.} \nIn this limit one considers charges scaling as~$N^2$ as~$N \\to \\infty$, \nwhich translates to~$N \\to \\infty$ at fixed~$\\t$ in the canonical ensemble~\\cite{Murthy:2020rbd}. \nIn this large-$N$ limit one expects the field theory index~$\\mathcal I_N(\\t)$ to be written as a \nsum over saddles. This picture has been partially realized in the last few years using two different approaches---the \nBethe-ansatz-like approach developed in~\\cite{Closset:2017bse,Benini:2018mlo, Benini:2018ywd}, \nand the direct study of large-$N$ saddle points using an elliptic extension of the \naction~\\cite{Cabo-Bizet:2019eaf, Cabo-Bizet:2020nkr}. \nIn particular, the large-$N$ approach in~\\cite{Cabo-Bizet:2019eaf} found a class of \nsaddles labelled by rational numbers~$-n\/m$, where the perturbation expansion \naround each saddle is given by the asymptotic limit~$\\t \\to -n\/m$.\\footnote{These saddles \nmap to residues of the Bethe-ansatz-type approach---see~\\cite{Cabo-Bizet:2020ewf} \nfor a recent discussion of the connections between the two approaches. \nA larger set of saddles has been classified in~\\cite{Cabo-Bizet:2020nkr}, but \nthe full set of important\/contributing saddles is not understood in either approach. 
\nIn particular, interesting continuum configurations of the Bethe-ansatz equations have been \nrecently discovered in \\cite{ArabiArdehali:2019orz,Benini:2021ano,Lezcano:2021qbj} \nwhose role in the large-$N$ limit is not fully understood.}\nSetting~$n_0=-1$, we have \n\\be \\label{mnasymp}\n\\log \\mathcal I_N(\\t) \\; \\sim \\; -S_\\text{eff}(m,n; \\tau) \\,, \\qquad \\t \\to -n\/m \\,, \n\\ee\nwhere the effective action at each saddle is given by\n\\be\\label{actionEll}\nS_\\text{eff}(m,n; \\tau) \\= \\frac{N^2 \\pi \\i }{27\\,m} \\,\\frac{ \\bigl(2 \\widetilde \\t + \\chi_1(m+n) \\bigr)^3}{{\\widetilde \\t}^2} \\,, \n\\qquad \\widetilde \\t \\; \\coloneqq \\; m\\t +n \\,,\n\\ee\nwhere~$\\chi_1(n)$ is the Dirichlet character equal to~$0, \\pm 1$ when~$n \\equiv 0, \\pm 1$ (mod~3), respectively.\nThere was one caveat in the above result, which was stressed in~\\cite{Cabo-Bizet:2019eaf, Cabo-Bizet:2020nkr}, \nnamely that the pure-imaginary~$\\widetilde \\t$-independent term could not be fixed by the methods used in those papers.\nThe constant term in the effective action~\\eqref{mnasymp}, therefore, was a convenient choice made using \ninputs coming from outside the field-theory analysis. \n\nAlthough we do not have a rigorous notion of the sum over saddles yet,\nit should be clear that if the effective action of the~$(m,n)$ saddle has negative real part it dominates \nover the others as~$m\\tau +n \\to 0$. \nIt is also clear from~\\eqref{actionEll} that the fastest growth among these saddles comes from~$(m,n)=(1,0)$. \nThe~$(1,0)$ saddle \nin the SYM theory is identified as a fully deconfined phase whose \nentropy scales as~$N^2$, while the other $(m,n)$ saddles have entropies that are suppressed by a factor of~$m$.\nFor this reason they can be called partially deconfined saddles (in the sense of asymptotic growth, \nbut not in the sense of center symmetry breaking---cf.~\\cite{ArabiArdehali:2019orz}). 
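Before turning to the gravitational interpretation, a quick numerical sanity check of these statements (our own sketch, not a computation from the paper): evaluating $S_\\text{eff}$ at two saddles with the same $\\chi_1(m+n)$ and the same $\\widetilde\\t$ exhibits both the growth criterion (negative real part when $\\mathrm{arg}(\\widetilde\\t)>\\pi\/2$) and the $1\/m$ suppression of the entropy.

```python
import cmath

def chi1(k):
    # Dirichlet character mod 3: 0, +1, -1 for k congruent to 0, +1, -1 (mod 3)
    return (0, 1, -1)[k % 3]

def S_eff(m, n, tau, N):
    # the saddle action quoted above: (N^2 pi i / 27 m) (2 tt + chi1(m+n))^3 / tt^2
    tt = m * tau + n  # tilde tau
    return (N**2 * cmath.pi * 1j / (27 * m)) * (2 * tt + chi1(m + n)) ** 3 / tt**2

N = 10
eps = -1e-3 + 1e-3j                     # tilde tau with arg = 3*pi/4 > pi/2
S_10 = S_eff(1, 0, eps, N)              # tau -> 0:   tilde tau = eps
S_2m1 = S_eff(2, -1, (1 + eps) / 2, N)  # tau -> 1/2: tilde tau = eps, chi1 = +1 too
```

The real part of $-S_\\text{eff}(1,0;\\tau)$ is positive here (a growing contribution), and the two actions differ by exactly the factor $m=2$, reflecting the $1\/m$ suppression noted above.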
\nOn the gravitational side, the action~$S_\\text{eff}(1,0; \\tau)$ agrees precisely with the canonical on-shell action of the \nblack hole solution in the dual AdS$_5$ supergravity~\\cite{Cabo-Bizet:2018ehj},\nwhich leads to the identification of the AdS$_5$ BH as the saddle~$(1,0)$.\nThe~$(m,n)$ solutions have been identified with orbifolds of the Euclidean AdS$_5$ BH~\\cite{AhaSBTalk}.\n\nBecause of the dominance of the~$(1,0)$ saddle near~$\\t \\to 0$, one can capture it directly\nin an asymptotic expansion---even for finite~$N$.\nIn this calculation, one writes the index~\\eqref{eq:n0Index} as an integral over gauge \nholonomies~$u_i$ (see~\\eqref{eq:pqIndex} below), estimates the integrand in the \nCardy-like limit~$\\t \\to 0$, and then performs the integrals. \nThe initial studies~\\cite{Choi:2018vbz, ArabiArdehali:2019tdm,Honda:2019cio,Kim:2019yrz,Cabo-Bizet:2019osg} \nsuccessfully reproduced the singular parts of the action as~$\\t \\to 0$, \ni.e.~the~$1\/\\t^2$ and the~$1\/\\t$ terms with the correct coefficients.\nMore recently, the complete action~\\eqref{actionEll} for~$(m,n) = (1,0)$ was obtained \nin~\\cite{GonzalezLezcano:2020yeb} by a direct method, \ninvolving a careful analysis of all perturbative terms in the Cardy-like limit. \n(See \\cite{Amariti:2020jyx,Amariti:2021ubd} for more recent related work.)\n\nOur first goal in this paper is to obtain the complete perturbative action at all the~$(m,n)$ saddles by a \ndirect asymptotic analysis of the index as~$\\t \\to -n\/m$. \nThis analysis is described in Section~\\ref{sec:SCI}, the result of which is a \nperfect agreement with the action~\\eqref{actionEll}, up to the constant terms as mentioned above.\nThe asymptotic analysis requires developing the asymptotics of the elliptic gamma function \n\\cite{Ruijsenaars:1997,felder2000elliptic} near rational points. The~$\\tau\\to0$ asymptotic \nestimates were available in previous literature~\\cite{Rains:2006dfy}. 
Here we develop the \nanalysis for~$\\tau$ approaching rational numbers. The analysis is presented in \nAppendix~\\ref{app:Estimates}. (See also~\\cite{Kels:2017vbc} for related work motivated \nby integrable-systems considerations.)\n\nFurthermore, we note that for given~$m,n$, depending on the sign of~$\\mathrm{arg}\\widetilde\\t-\\pi\/2$ \nthe action in~\\eqref{actionEll} can have negative or positive real part, which respectively yields \na growing or decaying contribution to the index. Therefore in essentially half of the parameter \nspace the saddles in~\\eqref{actionEll} do not capture any growth in the index. As demonstrated \nin Section~\\ref{sec:Ccenter}, when the $(m,n)$ saddle in~\\eqref{actionEll} gives a decaying \ncontribution to the index as~$\\widetilde\\t\\to0$, a ``2-center saddle'' takes over which yields exponential \ngrowth again. In other words, in half of the parameter space the growth of the \nindex~$\\mathcal{I}_N(\\tau)$ is captured by 2-center saddles. (These turn out to be partially \ndeconfined saddles both in the sense of asymptotic growth and in the sense of center \nsymmetry breaking---cf.~\\cite{ArabiArdehali:2019orz}.)\n\n\\vskip 0.4cm\n\n\\noindent {\\bf Chern-Simons theory from the asymptotics of the 4d index}\n\n\\vskip 0.1cm\n\n\n\nThe second goal of the paper is partly inspired by an interesting pattern \nappearing in the asymptotic calculations. \nAs emphasized in the context of SU($N$)~$\\mathcal{N}=4$ SYM in~\\cite{GonzalezLezcano:2020yeb}, \nin the part of the parameter space where the index is dominated by isolated, 1-center saddles, \nthe complete asymptotic expansion in~$\\t$\nterminates at~$O(\\t)$---i.e.~the perturbation theory only contains~$1\/\\t^2$, $1\/\\t$, $\\t^0$ and~$\\t$ \nup to exponentially suppressed corrections. (This is, in fact, more generally true when the index is \ndominated by isolated saddles, and not true when there are flat directions; see~\\cite{Ardehali:2015bla}.) 
\nInterestingly, it was found in~\\cite{GonzalezLezcano:2020yeb} that the constant term in the \nexpansion contains the partition function of SU($N$) pure Chern-Simons theory on~$S^3$ at level~$\\pm N$.\n\nIn this paper we find that the same structure persists at all rational points. We see that \nthe constant term in the expansion as~$\\t \\to -n\/m$ involves Chern-Simons theory \nwhose action is~$1\/m$ times the action as~$\\t \\to 0$. We present evidence that this corresponds \nto CS theory on an orbifold space~$S^3\/\\mathbb{Z}_m$ (with the action of~$\\mathbb{Z}_m$ depending on~$n$ \nsuch that the orbifold coincides with the lens space~$L(m,-1)$ when~$n=1$) at \nlevel~$\\pm N$~\\cite{Garoufalidis:2006ew,Gang:2019juz}. In other words, the~4d SYM index appears to play the \nrole of a master index which governs the partition function of three-dimensional CS theory on an infinite family of~$S^3$ orbifolds.\n\nThe appearance of 3d Chern-Simons theory from the 4d superconformal index is intriguing, and gives rise to \ntwo related questions:\\\\\n(a) is there a direct three-dimensional physics explanation of the appearance of Chern-Simons theory?\\\\\n(b) can we also understand the singular terms in the asymptotic expansions around \nrational points as being related to 3d Chern-Simons theory?\\\\\nThe answers to both these questions are positive, as we now explain. \n\n\n\\vskip 0.4cm\n\n\\noindent {\\bf The asymptotics of the 4d index from supersymmetric Chern-Simons theory}\n\n\\vskip 0.1cm\n\nThe natural idea is that the reduction of the four-dimensional theory on~$S^1$ gives rise to a three-dimensional \ntheory on~$S^3$ in a ``high-temperature\" expansion in powers of the circumference~$\\beta$ of the shrinking \ncircle. \nIf we calculate the functional integral of the three-dimensional theory, we should \nrecover the four-dimensional functional integral as~$\\beta \\to 0$. 
\nThe three-dimensional effective field theory\nis known to have a derivative expansion, where the most relevant terms are \nChern-Simons terms~\\cite{Banerjee:2012iz,Jensen:2012jh}.\nThis EFT approach was developed in the supersymmetric context \nin~\\cite{DiPietro:2014bca, DiPietro:2016ond}, who presented \\emph{supersymmetrized} CS actions \ninvolving the dynamical as well as background fields, which are \nnecessary for preserving supersymmetry on~$S^3\\times S^1$. In particular, \nthe~$1\/\\beta^2$ and~$1\/\\beta$ effective actions derived this way in~\\cite{DiPietro:2016ond} \nreproduced the asymptotics of the index as found in~\\cite{Ardehali:2015bla} for $n_0=0$ and $\\mathrm{arg}(\\t)=\\pi\/2$.\n(Note that when the metric on $S^3\\times S^1$ has a direct product form with $S^3$ the unit round three-sphere, \na real value of $\\beta$ determines a purely imaginary~$\\t=\\frac{\\i\\beta}{2\\pi}$.)\nThe coefficient of the leading~$1\/\\beta^2$ term in these works is pure imaginary, and \nalso does not grow as $N^2$ (it is in fact zero for non-chiral theories); therefore the \nexponential growth of states corresponding to the BH is not captured there.\n\n\n\nOne of the motivations for the current paper is to explain the exponential growth associated to the bulk black holes\nfrom the three-dimensional point of view, which \nrequires $\\arg(\\tau) \\neq \\pi\/2$ and $n_0\\neq0$.\\footnote{The leading order~$1\/\\tau^2$ behavior was found \nfrom similar considerations in~\\cite{Choi:2018hmj}\nfor~$\\mathcal{N}=4$ SYM with flavor chemical potentials, and in \\cite{Kim:2019yrz} \nfor more general gauge theories in a setting similar to ours. 
In this paper we follow a \nsystematic, manifestly supersymmetric approach developed in~\\cite{DiPietro:2014bca,DiPietro:2016ond}, \nwhich allows us to obtain all-order results for general gauge theories around generic rational points.}\n(Note, in particular, that the $(m,n)=(1,0)$ saddle in \\eqref{actionEll}, given for $n_0=-1$, would \nhave its leading piece a pure phase if $\\mathrm{arg}(\\t)=\\pi\/2$.) For this purpose we consider, as \nin~\\cite{Cabo-Bizet:2018ehj}, a background geometry of the \nform~$S^3\\times_\\Omega S^1$, with $S^3$ the unit round three-sphere, \n$\\gamma$ the circumference of the circle, and~$\\Omega$ \na twist parameter\\footnote{Similar twists had been described in slightly different contexts \nin~\\cite{Kim:2009wb,Nawata:2011un}.} \ncontrolling the deviation of the metric from a direct product form (Equation~\\eqref{4dbackgnd}).\nThe imaginary part of the twist parameter determines a non-zero real part \nof~$\\tau$ via Equation~\\eqref{Omomrel},\n\\be\n\\t \\= \\frac{\\i\\gamma}{2\\pi} (1-\\O) \\,.\n\\label{eq:tauAndOmega}\n\\ee\nAs shown in~\\cite{Cabo-Bizet:2018ehj}, \nthe integer~$n_0$ in~\\eqref{eq:n0Index} controls the \nperiodicity of the fermions in this background, and~$n_0=\\pm 1$ (which is naturally dual to the BH) corresponds to \nanti-periodic fermions, i.e.,~as in a Scherk-Schwarz reduction. \nIn the present context we insist on supersymmetry being preserved---and that necessitates \nturning on other background fields under which the fermions are charged. \nIn the three-dimensional background supergravity, we have a non-zero graviphoton from the fibration \nas well as non-zero auxiliary background gauge and scalar fields. As we explain in Section~\\ref{sec:4dto3d}, \nthe resulting configuration is effectively described by a circle of radius~$R$, which in the limit~$\\gamma \\to 0$, $\\Omega\\to\\infty$ with~$\\t$ fixed, obeys~$R \\to \\t$.\n\n\nNow, what is the actual calculation? 
\nThere are two types of fields in the three-dimensional functional integral---background fields \nwhich take constant values, and dynamical modes which fluctuate in the integral. The latter are further \nmade up of light modes (with zero momentum around~$S^1$) and heavy (Kaluza-Klein) modes. \nThe first step is to integrate out the heavy modes in order to obtain an effective action for the light modes. \nThe integration over heavy modes also generates corrections to the coefficients of the supersymmetric\nChern-Simons terms of the non-zero background fields; see e.g.~\\cite{Intriligator:2013lca,DiPietro:2014bca}.\nIn these calculations, we need to include, in addition to the couplings discussed \nin~\\cite{DiPietro:2016ond}, the supersymmetrized RR and gravitational CS actions which were \ndiscussed in~\\cite{Closset:2018ghr}. The effective actions of the background gauge fields turn out to \nproduce precisely the singular pieces~$1\/\\t^2$ and~$1\/\\t$ in the asymptotic expansion of the index, \nas well as a constant piece. The remaining functional integral is described by an~$\\mathcal{N}=2$\nSYM theory with a certain one-loop induced CS coupling on~$S^3$, whose partition function is \nknown to agree, up to a sign, with that of pure Chern-Simons theory~\\cite{Kapustin:2009kz}. \nThis explains the appearance of the dynamical Chern-Simons theory in the constant term\nof the asymptotic expansion.\n\n\n\nTwo technical remarks are in order. \nFirstly, recall that supersymmetry implies that the 4d superconformal index should not depend on~$\\gamma$\nand $\\Omega$ separately, but only on their combination~$\\tau$ as in~\\eqref{eq:tauAndOmega}. \nIn~\\cite{Cabo-Bizet:2018ehj} this was shown to be true in 5d gravitational variables, as well as \nthrough a localization computation in 4d field theory. In this paper we verify this also in 3d effective field theory.\nSecondly, the order of limits is important to have a smooth calculational setup. 
\nWe first send~$\\gamma \\to0$ keeping~$\\Omega$ fixed, so that the three-dimensional geometry is smooth and finite. \nThen we take~$\\Omega \\to \\infty$ at fixed~$\\t$ and express the \nresult in terms of~$\\tau$ using~\\eqref{eq:tauAndOmega}. We find there are no singularities generated in \nthe latter step and thus the limiting procedure is perfectly smooth.\n\n\n\n\n\nFinally, we repeat the same analysis as~$\\t$ approaches rational points. The dimensional reduction in this\ncase naturally leads us to considering orbifolds of~$S^3\\times S^1$, \nwhich, as far as we understand,\nare related to the orbifolds discussed in~\\cite{AhaSBTalk}. The three-dimensional calculation \nthen leads to the~$1\/\\widetilde \\t^2$ and~$1\/\\widetilde \\t$ terms as well as a constant piece from the background \nfields, and we provide evidence that the remaining dynamical piece is the partition function \nof~$\\mathcal{N}=2$ SYM with a one-loop induced CS coupling on~$S^3\/\\mathbb{Z}_m$. \n\n\n\n\n\\vskip 0.4cm\n\n\n\\noindent {\\bf Notation}. We have~$\\sigma,\\t \\in \\mathbb{H}$ and~$z, u \\in \\mathbb{C}$, and \nwe set~$p={\\rm e}^{2\\pi i\\sigma}$, $q={\\rm e}^{2 \\pi \\i \\t}$, $\\zeta={\\rm e}^{2\\pi\\i z}$. \\\\\nWe use~$\\simeq$ to mean an all-order asymptotic equality of the logarithms of the two sides. \n\n\n\n\n\n\n\n\\section{The 4d superconformal index and its asymptotic expansion \\label{sec:SCI}}\n\n\n\n\nWe consider a four-dimensional $\\mathcal{N}=1$ gauge theory which flows to a superconformal fixed point. \nThe theory has gauge group~$G$ (which we take to be semi-simple, and separately comment \non the $U(N)$ case), and a number of chiral multiplets labelled by~$I$ \nwith R-charge $r_I$ and in the representation $\\mathcal{R}_I$ of the gauge group. \nWe assume $00$ (respectively~$\\mathrm{arg}(\\tau)-\\frac{\\pi}{2}<0$). 
\nAs found in~\cite{Cabo-Bizet:2019osg}, we can achieve both \nof these requirements in any theory in which~$0<r_I<2$. We first assume~$\mathrm{arg}(\widetilde\tau)-\frac{\pi}{2}>0$, and comment below on what \nhappens for the opposite sign.\nWe find that the potential is (see Figure~\ref{fig:MW}) \n\begin{equation*}\n\begin{split}\n \text{M-shaped for }&0<\{m\xi\}<1\/2 \,,\\\n\text{W-shaped for }&1\/2<\{m\xi\}<1 \,.\n\end{split}\n\end{equation*}\nWe also see from Equation~(\ref{defxi}) that we have~$\{m\xi\}\in\{0,\frac{1}{3},\frac{2}{3}\}$.\n\n\n\begin{figure}[h]\n\centering\n \includegraphics[scale=.6]{MWlineF}\n\caption{The catastrophic behavior of $V^Q(u_{ij})$, drawn over the \nrange~$mu_{ij}\in(-1,1)$, for $\mathrm{arg}\widetilde{\tau}>\frac{\pi}{2}$. \nThe control parameter $m\xi$ determines the M- or W-type behavior. \label{fig:MW}}\n\end{figure}\n\n\n\n\n\subsubsection*{The $\text{O}(1\/\widetilde \t^2)$ exponent}\n\nLet us now assume $m,n$ are chosen such that~$\{m\xi\}=\{\frac{-m n_0-2n}{3}\}=\frac{1}{3}$, \nso we are in the M-region with the dominant holonomy configurations corresponding to~$\{m u_{ij}\}=0$. \nAlthough this is analogous to the~\emph{1-center phase} in~\cite{ArabiArdehali:2019orz}, \nas mentioned around~\eqref{eq:nontrivialSectors} \nhere in fact $u_{ij}$ can be any integer multiple of~$\frac{1}{m}$. 
All these saddles contribute \nequally to the~$\text{O}(1\/\widetilde\t^2)$ exponent though, and hence the preceding analysis \naround~$\underline{u}=0$ gives the correct leading asymptotics of the index, which up to~$\text{O}(1\/\widetilde \t)$ \ncorrections in the exponent reads\n\begin{equation}\n\begin{split}\n &\exp \Bigl(-\frac{\pi \i}{m \, \widetilde \t^{\, 2}}(N^2-1) \overline{B}_3(m\xi) \Bigr) \=\n \exp\Bigl(-\frac{\i\pi }{27 m \, \widetilde \t^{\, 2}}(N^2-1) \Bigr) \,, \n\label{eq:n0IndexRationalAsy1} \\ \n&\qquad \text{for} \quad \mathrm{arg} (\widetilde \t)>\frac{\pi}{2} \,, \quad \{m\xi\}\=\{\frac{-m n_0-2n}{3}\} \=\frac{1}{3} \,. \n\end{split}\n\end{equation}\nFor $\mathrm{arg} (\widetilde \t)-\frac{\pi}{2}<0$, the M- and W-regions switch places. \nSo in order to have $u_{ij}=0$ as the dominant saddle we must assume $m,n$ are such \nthat $ \{m\xi\}=\{\frac{-m n_0-2n}{3}\}=\frac{2}{3}$. In this case we \nhave~$\overline{B}_3(2\/3) = - \overline{B}_3(1\/3) = -1\/27$, which leads to \n\begin{equation}\n\begin{split}\n &\exp \Bigl(-\frac{\pi \i}{m \, \widetilde \t^{\, 2}}(N^2-1) \overline{B}_3(m\xi) \Bigr) \=\n \exp\Bigl(\frac{\i\pi }{27 m \, \widetilde \t^{\, 2}}(N^2-1) \Bigr) \,, \n\label{eq:n0IndexRationalAsy2} \\ \n&\qquad \text{for} \quad \mathrm{arg} (\widetilde \t)<\frac{\pi}{2} \,, \quad \{m\xi\}\=\{\frac{-m n_0-2n}{3}\} \=\frac{2}{3} \,. \n\end{split}\n\end{equation}\n\nIn the remaining case where~$\{m\xi\}=\{\frac{-m n_0-2n}{3}\}=0$, we have~$\widetilde V_2 (\underline{u};\xi)=0$ \nand hence no~$\text{O}(\frac{1}{\widetilde \tau^2})$ exponent. As we discuss momentarily there is \nno~$\text{O}(\frac{1}{\widetilde \tau})$ exponent in this case either. 
There are thus~$\\mathrm{rk}(G)$ \nflat directions in the moduli space, leading to a~$(1\/\\widetilde{\\t})^{\\mathrm{rk}(G)}$ growth for the index, \nas in the~$n_0=0$ and~$\\tau$ pure imaginary case studied in~\\cite{Ardehali:2015bla}.\n\n\\subsubsection*{The $\\text{O}(1\/\\widetilde \\t)$ exponent}\n\nThe~$\\text{O}(1\/\\widetilde \\t)$ exponent comes from $\\widetilde V_1\/m\\widetilde\\t$. Although the expression \nfor~$\\widetilde V_1$ in~\\eqref{eq:simplifiedVtildes} was obtained near~$\\underline{u}=0$, the~$\\text{O}(1\/\\widetilde \\t)$ \nexponent is correctly captured by~\\eqref{eq:gammaRationalEstWithR}, which implies \nthat~\\eqref{eq:simplifiedVtildes} remains correct near the nontrivial \nsaddles with~$u_{ij}\\in\\frac{1}{m}\\mathbb{Z}$ as well.\nSo we can specialize~$\\widetilde V_1$ in~\\eqref{eq:simplifiedVtildes} to the SU($N$) $\\mathcal{N}=4$ theory\nand obtain \n\\begin{equation}\n \\exp \\Bigl(- \\frac{\\pi \\i}{m \\widetilde \\t} (N^2-1) \\, \\bigl(-\\overline{B}_2(m\\xi)+\\tfrac{1}{6} \\bigr) \\Bigr) \\,.\\label{eq:N=4at1\/tau}\n\\end{equation}\nIn this case we have that~$\\overline{B}_2(2\/3) = +\\overline{B}_2(1\/3) = - 1\/18$, which leads to \n\\begin{equation}\n \\exp \\biggl(-\\frac{2\\pi \\i \\,}{9}\\frac{(N^2-1)}{m \\widetilde \\t}\\biggr) \\,,\n\\end{equation}\nfor~$\\mathrm{arg} (\\widetilde \\t)>\\frac{\\pi}{2}$ as well as~$\\mathrm{arg} (\\widetilde \\t)<\\frac{\\pi}{2}$. 
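For reference, the periodic-Bernoulli values used in this subsection (together with the values of~$6\overline{B}_1$ entering the Chern-Simons coupling below) follow directly from $B_1(x)=x-\frac12$, $B_2(x)=x^2-x+\frac16$, and $B_3(x)=x^3-\frac32 x^2+\frac12 x$ evaluated on~$0\le x<1$:

```latex
\be
\begin{split}
6\overline{B}_1(\tfrac13) &\= 6\bigl(\tfrac13-\tfrac12\bigr) \= -1 \,, \qquad 
6\overline{B}_1(\tfrac23) \= 6\bigl(\tfrac23-\tfrac12\bigr) \= +1 \,, \\
\overline{B}_2(\tfrac13) &\= \tfrac19-\tfrac13+\tfrac16 \= -\tfrac{1}{18} \= \overline{B}_2(\tfrac23) \,, \\
\overline{B}_3(\tfrac13) &\= \tfrac{1}{27}-\tfrac16+\tfrac16 \= \tfrac{1}{27} \,, \qquad
\overline{B}_3(\tfrac23) \= -\overline{B}_3(\tfrac13) \= -\tfrac{1}{27} \,,
\end{split}
\ee
```

so that in particular~$-\overline{B}_2(m\xi)+\tfrac16=\tfrac29$ for~$\{m\xi\}=\tfrac13,\tfrac23$, reproducing the exponent just computed.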
\nNote that since~$\overline{B}_2(0) = 1\/6$, we see from~\eqref{eq:N=4at1\/tau} that there is\nno~$\text{O}(1\/\widetilde\t)$ exponent for~$\{m\xi\}=0$, as alluded to above.\n\n\subsubsection*{The Chern-Simons coupling}\n\nSpecializing the Chern-Simons coupling \eqref{eq:CScouplingRat} to SU($N$) $\mathcal{N}=4$ theory we find\n\be\nk_{ij}=-\widetilde\eta \, N \, \delta_{ij},\n\ee\nwith\n\be\n\widetilde\eta \;\coloneqq\; 6\overline{B}_1(m\xi)\= 6\overline{B}_1\bigl(\frac{-mn_0-2n}{3}\bigr) \,.\n\ee\n\vspace{.5cm}\n\nWe emphasize that all the topologically nontrivial sectors necessary for agreement with \nan~$S^3\/\mathbb{Z}_m$ partition function are present in our analysis, but we leave the \ninvestigation of their explicit contributions to future work. \n\n\n\subsubsection*{The $\text{O}(\widetilde \t)$ exponent}\n\nThe linear (in $\widetilde{\tau}$) exponent can be read from~\eqref{eq:In0semi-simpleAsyRat1} to \nbe~$-2\pi i\widetilde\tau E_{\mathrm{susy}}\/m$. Note again that while~\eqref{eq:In0semi-simpleAsyRat1} \nwas derived near~$\underline{u}=0$, as the estimate~\eqref{eq:denomEstRational0.75} shows, the~$\text{O}(\widetilde \t)$ \nexponent remains valid near~$\underline{u}\in\frac{\mathbb{Z}}{m}$ as well (at least for~$n=1$, and \nwe expect more generally as well). 
Since for SU($N$) \n$\\mathcal{N}=4$ theory $E_{\\mathrm{susy}}= \\frac{4}{27}\\big(N^2-1\\big)$, we have the $\\mathcal{O}(\\widetilde{\\tau})$ \nexponent as in\n\\begin{equation}\n \\exp \\biggl(-\\frac{8\\pi \\i}{27m}(N^2-1) \\widetilde{\\tau} \\biggr) \\,.\n\\end{equation}\n\n\n\\subsection*{Summary: the small-$\\widetilde\\t$ asymptotics for $\\mathcal{N}=4$ SYM}\n\nWe can summarize the asymptotics of the SU($N$) $\\mathcal{N}=4$ SYM index analyzed above as follows\n\\begin{equation}\n \\mathcal I_N(\\t;n_0) \\; \\simeq \\; N \\, \\widetilde {C}_N(n_0,m,n)\\, \n \\exp \\biggl(-\\frac{\\i\\pi}{m \\,\\widetilde\\tau^2} \\, (N^2-1) \\Bigl(\\frac{-\\widetilde \\eta+2\\widetilde\\tau}{3} \\Bigr)^3 \\biggr) \\;\nZ^{\\text{CS}}_{S^3\/\\mathbb{Z}_m}(k) \\,,\n \\label{eq:N=4indexAsyRational}\n\\end{equation}\nfor $\\tau$ near any rational point~$-n\/m$, with \n\\be\n\\widetilde \\t \\= m\\t+n \\,, \\qquad \\widetilde \\eta \\= 6\\overline{B}_1(\\frac{-mn_0-2n}{3}) \\= -\\mathrm{sign}(\\mathrm{arg}(\\widetilde \\t)-\\tfrac{\\pi}{2}) \\,,\n\\qquad k \\=- \\widetilde \\eta \\, N \\,,\n\\ee\nand with~$\\widetilde {C}_N(n_0,m,n)$ an overall constant. Note that we have used~$\\widetilde{\\eta}^3=\\widetilde{\\eta}=\\pm1$ \nto simplify the final expression. \nAlso, by completing the cube inside the exponent we have introduced an~$\\mathcal{O}(\\widetilde{\\tau}^0)$ \nfactor at the cost of redefining~$\\widetilde{C}_N(n_0,m,n)$.\n\nWe have only demonstrated that there is a contribution to~$Z^{\\text{CS}}_{S^3\/\\mathbb{Z}_m}(k)$ from \nnear~$\\underline{u}=0$ that coincides with the topologically trivial sector of the~$S^3\/\\mathbb{Z}_m$ \npartition function of Chern-Simons theory with coupling~$k$. 
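To make explicit how the completed cube in~\eqref{eq:N=4indexAsyRational} reproduces the exponents computed above, one can expand it using~$\widetilde\eta^{\,2}=1$ and~$\widetilde\eta^{\,3}=\widetilde\eta$:

```latex
\be
-\frac{\i\pi\,(N^2-1)}{m \,\widetilde\tau^{\,2}}
\Bigl(\frac{-\widetilde\eta+2\widetilde\tau}{3}\Bigr)^{\!3}
\= -\frac{\i\pi\,(N^2-1)}{m}
\Bigl(-\frac{\widetilde\eta}{27\,\widetilde\tau^{\,2}}
+\frac{2}{9\,\widetilde\tau}
-\frac{4\,\widetilde\eta}{9}
+\frac{8\,\widetilde\tau}{27}\Bigr) \,.
\ee
```

The first term reproduces the~$1\/\widetilde\t^{\,2}$ exponents~\eqref{eq:n0IndexRationalAsy1} and~\eqref{eq:n0IndexRationalAsy2} for~$\widetilde\eta=-1$ and~$\widetilde\eta=+1$ respectively, the second is the~$\text{O}(1\/\widetilde\t)$ exponent, the third is a constant absorbed into~$\widetilde{C}_N(n_0,m,n)$, and the fourth is the supersymmetric Casimir term~$-2\pi\i\,\widetilde\tau E_{\mathrm{susy}}\/m$.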
As mentioned below~\eqref{eq:nontrivialSectors} \nwe expect that summing over the contributions from neighborhoods of the non-trivial \nconfigurations~$u_j=m_j\/m$ would lead to the complete orbifold partition function.\n\nWe can include the contribution of a decoupled $U$(1) $\mathcal{N}=4$ multiplet in a straightforward manner. \nThis effectively changes the dimension of the group in the exponent to~$N^2$, introduces a prefactor~$1\/\widetilde \t$,\nand changes the constant from~$\widetilde{C}_N(n_0,m,n)$ to a new constant~$\widetilde{C'}_N(n_0,m,n)$, so that we have\n\begin{equation}\n \mathcal I^{U(N)}(\t;n_0) \; \simeq \; \frac{N}{\i \widetilde \t} \, \widetilde{C'}_N(n_0,m,n) \, \n \exp \biggl(-\frac{\i\pi}{m \,\widetilde\tau^2} \, N^2 \Bigl(\frac{-\widetilde \eta+2\widetilde\tau}{3} \Bigr)^3 \biggr) \;\nZ^{\text{CS}}_{S^3\/\mathbb{Z}_m}(k) \,.\n \label{eq:N=4indexAsyRationalU(N)}\n\end{equation}\nWe see that the background (and the SUSY Casimir) piece in~\eqref{eq:N=4indexAsyRationalU(N)} matches the effective \naction~\eqref{actionEll} and, in addition, we have a dynamical Chern-Simons term.\nIn the following section we explain both these pieces from the point of view of 3d $\mathcal{N}=2$ field theory.\n\n\n\subsection{$C$-center phases}\label{sec:Ccenter}\n\nFocussing on SU($N$) $\mathcal{N}=4$ theory, we now move on to studying the $\widetilde\t\to0$ limit \nof the index in the \emph{W region}, which, as shown in Figure~\ref{fig:MW} for \n$\mathrm{arg}\widetilde{\tau}>\pi\/2$, corresponds to $1\/2<\{m\xi\}<1$. As before we assume \n$\mathrm{arg}\widetilde\t$ is in compact domains avoiding integer multiples of $\pi\/2$ as $|\widetilde\t|\to0$.\n\nRecall from (\ref{defxi}) that only the values $\{m\xi\}=0,1\/3,2\/3$ are realized in our problem. 
\nBut to highlight the parallels with the analysis of partially-deconfined phases in the W regions \nof the (flavored) 4d $\\mathcal{N}=4$ index in \\cite{ArabiArdehali:2019orz}, we will study the \nphase structure for arbitrary $\\{m\\xi\\}\\in(\\frac{1}{2},1)$ below, and only at the end specialize \nour result to the single ``physical'' point $\\{m\\xi\\}=2\/3$ in that interval.\n\n\nAsymptotic analysis of the index for arbitrary $\\{m\\xi\\}\\in(\\frac{1}{2},1)$ is difficult for general $N$, \nbecause finding the dominant holonomy configurations is not possible analytically in the W regions. \nAnalogously to \\cite{ArabiArdehali:2019orz} we consider now the large-$N$ limit (on top of \nthe $\\widetilde\\t\\to0$ limit), and conjecture that the $C$-center phases suffice for extremizing the \npotential in the W region. Also, similarly to \\cite{ArabiArdehali:2019orz} we consider only the \nleading (here $\\text{O}(1\/\\widetilde\\t^2)$) exponent of the index in the W region.\n\nA $C$-center holonomy configuration consists of $C$ packs of $N\/C$ holonomies uniformly \ndistributed on the circle such that the SU($N$) constraint is satisfied. While at finite $N$ it is \npossible to have such configurations only for $C$ a divisor of $N$, in the large-$N$ limit any \ninteger $C\\ge1$ provides an acceptable $C$-center configuration \\cite{ArabiArdehali:2019orz}. \nFor such a distribution the ``on-shell'' value of the potential $\\widetilde V_2$ in (\\ref{eq:QhDef}) becomes\n\\begin{equation}\n \\widetilde V_2^{(C)}=\\i\\pi\\Bigl((N-1)\\overline{B}_3(m\\xi)+\\frac{N}{d}\\frac{d(d-1)}{2}\\,\n 2\\overline{B}_3(m\\xi)+d^2 \\sum_{J=1}^{C-1}J\\big(\\overline{B}_3(m\\xi+m\\frac{J}{C})\n +\\overline{B}_3(m\\xi-m\\frac{J}{C})\\big)\\Bigr),\n\\end{equation}\nwhere $d:=N\/C$. The second term above is the contribution from pairs in the same pack, and the \nthird term is from pairs with each end on a different pack. 
To simplify the above expression further, \nwe use the following identity which can be proven from (\\ref{eq:Raabe}) and (\\ref{eq:remarkableId}):\n\\begin{equation}\n \\sum_{J=1}^{C-1}J\\big(\\overline{B}_3(\\Delta+m\\frac{J}{C})+\\overline{B}_3(\\Delta-m\\frac{J}{C})\\big)\n =\\frac{g^2\\overline{B}_3(C'\\Delta)}{C'}-C\\overline{B}_3(\\Delta),\n\\end{equation}\nwhere $g:=\\mathrm{gcd}(m,C)$ and $C':=C\/g$. Keeping only the $O(N^2)$ terms we hence end up with\n\\begin{equation}\n \\widetilde V_2^{(C)}=\\i\\pi N^2\\,\\frac{\\overline{B}_3(C'm\\xi)}{C'^3}.\n\\end{equation}\nSince the leading asymptotics of the index is given as $\\exp(-\\widetilde V_2\/\\widetilde\\t^2)$, we then find the \nanalog of the main result of \\cite{ArabiArdehali:2019orz} (Equation~(3.19) of that work) for our case to be\n\\begin{equation}\n \\mathcal{I}_{N\\to\\infty}\\xrightarrow{\\widetilde\\t\\to0}\\sum_{C=1}^{\\infty}\n \\exp\\left(-\\frac{i\\pi N^2}{m\\widetilde{\\tau}^{\\,2}}\\,\\frac{\\overline{B}_3(C'm\\xi)}{C'^3}\\right),\n \\label{eq:doubleScalingConjecture}\n\\end{equation}\nwith $m\\xi=-\\frac{mn_0+2n}{3}$ as before.\n\nThe competition between various terms in (\\ref{eq:doubleScalingConjecture}) can be visualized by \ncomparing the exponents as in Figure~\\ref{fig:singleDelta}, which shows the range of $\\Delta:=\\{m\\xi\\}$ \nfor which a given phase dominates when $\\mathrm{arg}\\widetilde\\t-\\pi\/2>0$. The figure implies that for the \n``physical'' values $\\{m\\xi\\}=1\/3,2\/3$, the index is respectively in the 1-center, and 2-center phase \nwhen $\\mathrm{arg}\\widetilde\\t-\\pi\/2>0$, and vice versa for $\\mathrm{arg}\\widetilde\\t-\\pi\/2<0$. As mentioned above, \nfor $\\{m\\xi\\}=0$ the index is in a confined phase and does not yield exponential $O(N^2)$ growth. 
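As a quick sanity check of the cubic-Bernoulli identity used above, one can verify it in exact rational arithmetic. The sketch below is ours (the helper names `B3bar`, `lhs`, `rhs` are not from the paper), and checks the identity for the physical values~$\Delta=\{m\xi\}=1/3,\,2/3$ over a range of~$m$ and~$C$:

```python
from fractions import Fraction as F
from math import gcd

def B3bar(x):
    """Periodic Bernoulli polynomial B3({x}), with B3(x) = x^3 - (3/2)x^2 + (1/2)x."""
    x = x % 1  # fractional part {x}; Python's % on Fractions returns a value in [0, 1)
    return x**3 - F(3, 2) * x**2 + F(1, 2) * x

def lhs(delta, m, C):
    # sum_{J=1}^{C-1} J * ( B3bar(delta + m J/C) + B3bar(delta - m J/C) )
    return sum(J * (B3bar(delta + F(m * J, C)) + B3bar(delta - F(m * J, C)))
               for J in range(1, C))

def rhs(delta, m, C):
    # g^2 * B3bar(C' delta) / C' - C * B3bar(delta), with g = gcd(m, C), C' = C/g
    g = gcd(m, C)
    Cp = C // g
    return F(g * g, Cp) * B3bar(Cp * delta) - C * B3bar(delta)

# check for the "physical" values {m xi} = 1/3, 2/3 and a range of m, C
for delta in (F(1, 3), F(2, 3)):
    for m in range(1, 7):
        for C in range(2, 9):
            assert lhs(delta, m, C) == rhs(delta, m, C)
```

The exact arithmetic avoids floating-point round-off, so the assertions test the identity itself rather than a numerical approximation.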
\nTherefore up to an $o(N^2\/\\widetilde{\\tau}^{2})$ error in the exponents we have the following simplification of \n(\\ref{eq:doubleScalingConjecture}) by restricting to $C'=1,2$:\n\\begin{equation}\n \\mathcal{I}_{N\\to\\infty}\\xrightarrow{\\widetilde\\t\\to0}e^{-\\frac{i\\pi N^2}{m\\widetilde{\\tau}^2}\\,\n \\overline{B}_3(m\\xi)}+e^{-\\frac{i\\pi N^2}{m\\widetilde{\\tau}^2}\\,\\frac{\\overline{B}_3(2m\\xi)}{8}}.\n \\label{eq:doubleScalingConjecture2}\n\\end{equation}\nThis is the analog of Conjecture~1 in \\cite{ArabiArdehali:2019orz}.\n\nSince $\\overline{B}_3(2\/3)=-\\overline{B}_3(1\/3)$, we see from~\\eqref{eq:doubleScalingConjecture2} \nthat the action of the 2-center saddle has the opposite sign and is smaller in absolute value by a \nfactor of 8 compared to that of the 1-center saddle.\n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[scale=.45]{singleDelta}\n\\caption{The functions $C'^{-3}\\overline{B}_3(C'\\Delta)$ with\n$C'=1,\\cdots,13$, for $0\\le\\Delta\\le1$. For $0<\\Delta<1\/2$ the blue curve corresponding to the \nfully-deconfined phase takes over. The take-over of the\norange curve signifies the partially-deconfined 2-center phase in the corresponding \nregion ($1\/2<\\Delta\\lesssim .72$),\nand so on. \\label{fig:singleDelta}}\n\\end{figure}\n\n\n\n\n\n\n\n\\section{Asymptotics of the 4d index from 3d field theory \\label{sec:4dto3d}}\n\nIn this section we consider the dimensional reduction of the four-dimensional~$\\mathcal{N}=1$ \ngauge theory on a Hopf surface. This surface is topologically~$S^3 \\times S^1$ and \nwe reduce along the~$S^1$ fiber. \nThe dimensionally reduced theory describes a three-dimensional dynamical gauge \nsupermultiplet coupled to background three-dimensional supergravity on~$S^3$. 
\nThe Wilsonian effective action of the gauge multiplet can be calculated by integrating \nout the tower of massive Kaluza-Klein modes, and the resulting theory is described by a \nfunctional integral over the gauge multiplet fields with this effective action. \nWe find that the functional integral of the three-dimensional theory \ncan be written as a perturbative expansion in~$\t$. \nThe singular terms in the expansion behave as~$\text{O}(1\/\t^2)$ and~$\text{O}(1\/\t)$, and are captured \nby three-dimensional effective field theory. \nIn particular, these terms are independent of the dynamical fields, and are completely \naccounted for by the (supersymmetrized) Chern-Simons couplings of the background supergravity. \nThe result agrees with the corresponding singular terms in the microscopic \nexpansion~\eqref{eq:almostThere},~\eqref{eq:Zback}.\n\nThe all-order asymptotic formula from the microscopic index includes, in addition to these singular terms, \nconstant and linear terms in~$\tau$. Using a localization argument we show that \nthe constant term in~$\t$, besides a background part, has a dynamical piece captured by the \nintegral over the fluctuations of the dynamical fields in the three-dimensional path integral, which is \nessentially the partition function of~$\mathcal{N}=2$ supersymmetric CS theory at level~$\pm N$. \nFinally, the linear term in the microscopic formula is precisely the supersymmetric Casimir energy \nwhich is needed to translate between the microscopic Hamiltonian index and the macroscopic \nfunctional integral.\footnote{The supersymmetric Casimir energy that appears in our asymptotic \nformulas is the one given in~\cite{Assel:2015nca}. Note in particular that (unlike \nin~\cite{Cabo-Bizet:2018ehj}) this is independent of~$n_0$. 
\nWe can understand this in the path-integral picture by appealing to the result in Section~4 of~\cite{ArabiArdehali:2019tdm} \n(based on the regularization method of \cite{Ardehali:2015hya}) which demonstrated that the \nsupersymmetric Casimir energy is independent of flavor fugacities when they are on the unit circle, \nand by noting that~${\rm e}^{2\pi \i(-n_0 r_I\/2)}$ is effectively a flavor fugacity in our problem.}\nIn this manner the full asymptotic formula for the four-dimensional index is explained \nby three-dimensional physics. \nThe fact that the asymptotic formula does not contain any higher order terms in~$\t$ \nimplies a non-renormalization theorem, namely \nthat there are no corrections to the three-dimensional effective action at any polynomial order in~$\t$. \nWe leave the explanation of this interesting point to future work. \nFinally, we show that corresponding statements also hold near rational points \nwhen~$\t \to -n\/m$. Here we present evidence that the relevant three-dimensional manifold is a~$\mathbb{Z}_m$ orbifold of~$S^3$ and the results agree with the microscopic asymptotic expansion given \nin~\eqref{eq:N=4indexAsyRational}.\n\n\vskip 0.4cm\n\n\nWe begin by recalling the functional integral definition of the~$\mathcal{N}=1$ superconformal index on~$S^3 \times S^1$. \nIn the Hamiltonian trace definition~\eqref{defindex} we have two chemical potentials that couple to linear \ncombinations of the two angular momenta~$J_1, J_2$ on~$S^3$ and the~$U(1)$ R-charge~$Q$. \nThis is equal to the supersymmetric functional integral of the theory on~$S^3 \times S^1$ with twisted boundary \nconditions on the fields as we go around the~$S^1$.\nEquivalently, one can explicitly introduce a background gauge field (for the R charge) and background \noff-diagonal terms in the metric (for the angular momenta) in a manner so as to preserve supersymmetry. 
\nAs explained in \\cite{Festuccia:2011ws}, such background configurations can be obtained as solutions to \nthe condition of vanishing gravitino variations of off-shell supergravity (and then taking a rigid limit so as to decouple the \nfluctuations of gravity). \n\n\nThe relevant background configuration for the calculation of the 4d superconformal index for \ncomplex~$\\tau$ and nonzero $n_0$ was studied\nin~\\cite{Cabo-Bizet:2018ehj} in the context of 4d new minimal supergravity~\\cite{Sohnius:1981tp, Sohnius:1982fw}.\nRecall that the bosonic fields of new minimal supergravity are the metric, a gauge field~$A^\\text{nm}$, \nand another vector field~$V^\\text{nm}$ which is covariantly conserved. \nThe background configuration~\\cite{Cabo-Bizet:2018ehj} preserving the supercharges~$(\\mathcal Q, \\overline{\\mathcal{Q}})$ \nis\\footnote{A real metric corresponds to pure imaginary~$\\Omega_i$. \nGeneral complex $\\Omega_i$ correspond to analytic continuation in the background metric.} \n\\be \\label{4dbackgnd}\n\\begin{split}\n\\mathrm{d} s_4^2 & \\= \\mathrm{d} t_E^2 + \\mathrm{d} \\theta^2 + \\sin^2 \\theta \\, \n\\bigl(\\mathrm{d} \\phi_1 -\\i \\,\\Omega_1 \\, \\mathrm{d} t_E \\bigr)^2 + \\cos^2 \\theta \\, \\bigl(\\mathrm{d} \\phi_2 -\\i \\, \\Omega_2 \\, \\mathrm{d} t_E \\bigr)^2 \\,,\\\\\nA^\\text{nm} & \\= \\i\\, \\Bigl(\\Phi - \\frac{3}{2} \\Bigl) \\mathrm{d} t_E \\,, \\qquad V^\\text{nm} \\= -\\i \\, \\mathrm{d} t_E \\, .\n\\end{split}\n\\ee\nHere~$\\theta \\in [0, \\pi\/2]$, the angles~$\\phi_1$, $\\phi_2$ are $2\\pi$-periodic, and \nthe Euclidean time coordinate has the independent periodicity \ncondition\\footnote{In~\\cite{Cabo-Bizet:2018ehj} the parameter~$\\gamma$ was called~$\\b$.} \n\\be\nt_E \\sim t_E + \\gamma \\,.\n\\ee\nThis configuration admits the following Killing spinor which is identified with~$\\mathcal Q$, \n\\be \\label{KSsol}\n\\varepsilon \\=\n\\left(\n\\begin{array}{c}\n{\\rm e}^{\\i \\, z \\,t_E } \\\\\n0 \\\\\n0 \\\\\n{\\rm e}^{- 
\\i \\, z \\, t_E } \\\\\n\\end{array}\n\\right) \\,, \\qquad z \\= \\frac{\\pi n_0}{\\gamma} \\,.\n\\ee\n\n\nThe twist parameters~$\\Omega_i$, $\\Phi$ are related to the chemical potentials~$\\sigma$,~$\\t$ in the index as \nfollows\\footnote{Here~$(\\Omega_1^*,\\Omega_2^*, \\Phi^*) = (1,1,\\frac32)$ are the values of the potentials\non the supersymmetric BH solution.}\n\\be\n\\Omega_i \\= 1 + \\frac{\\omega_i}{\\gamma} \\,, \\qquad \\Phi \\= \\frac32 + \n\\frac{1}{\\gamma} \\Bigl(\\frac{\\omega_1+\\omega_2}{2} - \\pi \\i \\, n_0 \\Bigr) \\,,\n\\label{Omomrel}\n\\ee \nwith \n\\be\n\\omega_1 \\= 2 \\pi \\i \\,\\sigma \\,, \\qquad \\omega_2 \\= 2 \\pi \\i \\, \\t \\,.\n\\ee\nIn this section for ease of presentation we focus on the case with~$\\Omega_1=\\Omega_2=\\Omega,$ \nwhich implies~$\\sigma=\\tau=\\frac{\\i\\gamma}{2\\pi}(1-\\Omega)$. \nThe partition function on the above background is related to the index~$\\mathcal I(\\sigma-n_0,\\tau)$, \nwhich for $\\sigma=\\tau$ coincides with the index~$\\mathcal I(\\tau;n_0)$ in~\\eqref{eq:n0Index}. 
\nIn Appendix~\\ref{sec:diffomegas} we comment on the more general case \nwith~$\\O_1\\neq\\O_2$ and hence $\\sigma\\neq\\tau$.\n\n\nThe four-dimensional supersymmetric partition function of the theory corresponding to the Hamiltonian \nindex~\\eqref{defindex} can then be expressed as a functional integral \nof the gauge theory with 4d~$\\mathcal{N}=1$ chiral and vector multiplets on the \nbackground~\\eqref{4dbackgnd}.\\footnote{More precisely the Hamiltonian index equals the functional \nintegral for the supersymmetric partition function up to the supersymmetric Casimir energy \nfactor~\\cite{Ardehali:2015hya,Assel:2015nca}.}\nAs discussed in~\\cite{Cabo-Bizet:2018ehj}, this functional integral \nlocalizes to an integral over flat connections of the gauge field on the KK circle, \n\\be \\label{AYval}\n\\oint A^i \\= 2 \\pi \\, u_i \\,.\n\\ee\nThe Wilson loop~\\eqref{AYval} maps to the scalar in the three-dimensional vector multiplet \nin the KK reduction. \nWe now proceed to derive an expression for the supersymmetric partition function of \nthe three-dimensional gauge theory. \n\n\n\n\\subsection{Dimensional reduction to three dimensions \\label{sec:dimred}}\n\nWe first consider the reduction of the above four-dimensional background as a \nconfiguration in three-dimensional supergravity. \nIn three dimensions we use the off-shell supergravity \nformalism~\\cite{Kuzenko:2011rd, Kuzenko:2011xg, Kuzenko:2012bc, Kuzenko:2013uya}, \nand follow the treatment~\\cite{Closset:2012vp, Closset:2012vg, Closset:2012ru, Assel:2014paa} \nfor the reduction from four to three dimensions. The bosonic fields in the off-shell three-dimensional \nsupergravity are the metric, the KK gauge field (the graviphoton) written as a one-form~$c$, a \ntwo-form~$B$, and the R-symmetry gauge field one-form~$\\mathcal{A}^R$. \nThe equations are often presented in terms of the dual one-form~$v=-\\i *\\mathrm{d} c$\nand the dual scalar~$H = \\i * \\mathrm{d} B$. 
\n\n\nWe begin by writing the background in~\\eqref{4dbackgnd} as a Kaluza-Klein (KK) compactification \nto three dimensions, i.e.~a circle fibration on a 3-manifold~$\\mathcal{M}_3$.\nWe define the rescaled~$S^1$ coordinate \n\\be \\label{X4period}\nY \\= \\sqrt{1 - \\Omega^2} \\; t_E \\,,\n\\ee\nwhich obeys the periodicity condition \n\\be\nY \\sim Y + 2 \\pi R\\,, \\qquad R \\= \\frac{\\gamma}{2 \\pi} \\sqrt{1-\\O^2} \\,.\n\\label{Rgamrel}\n\\ee\n\nWriting the metric~\\eqref{4dbackgnd} in the KK form,\n\\be \\label{4dmetricKKform}\n\\mathrm{d} s_4^2 \\= \\mathrm{d} s_3^2 + (\\mathrm{d} Y + c)^2 \\,,\n\\ee \nwe find that the graviphoton field is \n\\be \\label{3dKKc}\nc \\= c_\\mu \\, \\mathrm{d} x^\\mu \n \\= -\\i \\frac{\\O}{\\sqrt{1-\\O^2}} \\, \\bigl( \\sin^2 \\theta \\,\\mathrm{d} \\phi_1 + \\cos^2 \\theta \\,\\mathrm{d} \\phi_2 \\bigr) \\,,\n\\ee\nand the metric on the 3-manifold~$\\mathcal{M}_3$ is \n\\be \\label{3dmetric1}\n\\mathrm{d} s_3^2 \\= g_{\\mu\\nu} \\, \\mathrm{d} x^\\mu \\, \\mathrm{d} x^\\nu \n\\= \\mathrm{d} \\theta^2 + \\sin^2 \\theta \\, \\mathrm{d} \\phi_1^2 + \\cos^2 \\theta \\, \\mathrm{d} \\phi_2^2 - c^2 \\,.\n\\ee\nThe three-dimensional metric obeys \n\\be \\label{sqrtg}\n\\sqrt{g} \\= \\frac{\\sin 2 \\theta}{2 \\sqrt{1-\\O^2}} \\,.\n\\ee\nWe see that we effectively have a KK reduction on a circle of radius~$R$. \n\n\\vskip 0.4cm\n\nIn order to study the effective theory in three dimensions, we consider the limit~$R \\to 0$.\nFrom the relation~\\eqref{Rgamrel} we see that this is implemented by taking the original circle size~$\\gamma \\to 0$. \nOur eventual interest is in the limit~$\\t \\to 0$. The question is how to correlate these two limits of~$\\gamma$ and~$\\t$. \nIf we take~$\\gamma \\to 0$ first, then we see from the relation~\\eqref{Omomrel} that~$\\Omega \\to \\infty$ \nand from~\\eqref{sqrtg} that~$\\mathcal{M}_3$ shrinks to zero size. 
\nAlthough the local Lagrangian involves background fields and terms such as the Ricci scalar which diverge in this limit, \nthe three-dimensional effective action turns out to be finite. \nWe can understand this in a cleaner manner as follows. \nWe first scale~$\t$ and~$\gamma$ to zero at the same rate keeping~$\O$ finite and fixed, \ni.e.~take~$\gamma = \varepsilon \t$ with fixed~$\varepsilon = 2 \pi \i \/(\O -1)$, and only take~$\varepsilon\to0$ at the end of all calculations. \nIn particular, the three-dimensional calculations\nare all performed at finite~$\varepsilon$, i.e.~on smooth backgrounds. \nThe action turns out to have two pieces, one of which stays finite and the other vanishes in the limit~$\varepsilon \to 0$,\nand, in particular, there are no diverging terms in this limit. \nThus we can safely take the limit~$\O \to \infty$ at the end of calculations. \nIn this limit we have that~$R \to \t$, so that the effective field theory answers are naturally written as \na perturbative series in~$\t$. \n\n\n\n\vskip 0.4cm\n\nIn the treatment of three-dimensional background supergravity we need the Hodge dual of the graviphoton,\n\be \label{defv}\nv \= v_\mu \, \mathrm{d} x^\mu \= - \i * \mathrm{d} c \,,\n\ee\nwhose value in the above background is\n\be\nv \= \frac{2 \, \O}{1-\O^2} \, \bigl( \sin^2 \theta \,\mathrm{d} \phi_1 + \cos^2 \theta \,\mathrm{d} \phi_2 \bigr) \,,\n\ee\nso that~$v^\mu = 2 \,\O(1,1,0)$. \nThe associated Chern-Simons action is \n\be\nS^\text{CS} (c) \= \int_{\mathcal{M}_3} \, c \wedge \mathrm{d} c \= \i \int_{\mathcal{M}_3} \mathrm{d}^3 x \, \sqrt{g} \, v^\mu \, c_\mu\,. \n\ee\n\n\n\vskip 0.4cm\n\n\nThe identification between the four-dimensional and the three-dimensional gauge fields \nis made by comparing the respective Killing spinor equations. 
\nAs shown in~\cite{Closset:2012ru, Assel:2014paa}, one has \n\be\n\frac12 v_\mu \= V^\text{nm}_\mu - V^\text{nm}_Y c_\mu \,, \qquad H \= V^\text{nm}_Y \,,\n\qquad \mathcal{A}^R_\mu \= A^\text{nm}_\mu - A^\text{nm}_Y c_\mu + \frac12 v_\mu \,.\n\ee\nThe background gauge fields in~\eqref{4dbackgnd} are given by\n\be \label{AYVYval}\nA^\text{nm}\n \= \Bigl( -\frac{\t}{R}+ \frac{n_0}{2R} \Bigr) \, \mathrm{d} Y \,, \n\qquad V^\text{nm} \= -\frac{\i}{\sqrt{1-\O^2}} \, \mathrm{d} Y \,,\n\ee \nso that the auxiliary fields in the background supergravity multiplet are \n\be \label{vHvalues}\nv_\mu \= - 2 \, V^\text{nm}_Y \, c_\mu \,, \qquad \nH \= V^\text{nm}_Y \,, \qquad \n\mathcal{A}^R_\mu \= - (A^\text{nm}_Y + V^\text{nm}_Y) \, c_\mu \,.\n\ee\n(The above equation for~$v_\mu$ is consistent with Equations~\eqref{3dKKc},~\eqref{defv}.)\n\n\n\vskip 0.4cm\n\n\nWe now discuss the Kaluza-Klein reduction of the dynamical gauge multiplet. \nThe~$\mathcal{N}=1$ gauge multiplet in four dimensions reduces to an~$\mathcal{N}=2$\ngauge multiplet in three dimensions, whose bosonic field content is a vector~$\mathcal{A}_\mu$, \na scalar~$\sigma$, and the auxiliary~$\mathcal{D}$ field. \nThese are related to the four-dimensional fields as follows,\n\be\label{eq:susy3dvec}\n\sigma^i \= A^i_Y \,, \qquad \mathcal{A}^i_\mu \= A^i_\mu - A^i_Y \, c_\mu \,, \qquad \mathcal{D}^i \= D^i - A^i_Y \, H \,,\n\ee\nand the three-dimensional fermions are the reduction of the corresponding four-dimensional fermions.\nAs discussed above, the theory localizes on the BPS configurations given by \n\be \nA^i \= \frac{u_i}{R} \, \mathrm{d} Y \,, \qquad D^i \= 0 \,, \n\label{AYval2}\n\ee\nwith vanishing values of all other fields in the off-shell gauge and chiral multiplets. 
\nIn the three-dimensional theory the non-zero fields on the BPS locus are \n\\be\n\\sigma^i \\= \\frac{u_i}{R} \\,, \\qquad \n\\mathcal{A}^i_\\mu \\= - \\frac{u_i}{R} \\, c_\\mu \\,, \\qquad \\mathcal{D}^i \\= - \\frac{u_i}{R} \\, H \\,.\n\\label{AYval3d}\n\\ee\n\n\n\n\\subsection{Effective action and functional integral of the three-dimensional theory \\label{sec:3deffact}}\n\n\nWe now turn to the calculation of the partition function of the three-dimensional supersymmetric theory\nthat we just discussed. \nOur strategy is to first calculate the three-dimensional Wilsonian effective action of~$u_i$, and then use \nthis to calculate the three-dimensional partition function. \nThe tree-level action (coming from a mode expansion of the four-dimensional theory) \nconsists of matter-coupled super Yang-Mills theory.\nThe full quantum effective action of the three-dimensional theory is obtained by integrating out the \ntower of massive KK modes on the circle. In order to calculate this action, we draw from known results \nin the effective field theory in three dimensions. \n\nThe effective field theory on backgrounds of the type~$\\mathcal{M}_3 \\times S^1_R$ was studied in a \ngeneral context in~\\cite{Banerjee:2012iz,Jensen:2012jh},\nand in the special context of supersymmetry in~\\cite{DiPietro:2014bca, DiPietro:2016ond}. \nThe resulting three-dimensional action begins with a term proportional to~$1\/R^2$,\nand continues as a perturbation expansion as the radius~$R \\to 0$. \nAt each order in~$R$ one has \na combination of three-dimensional actions of the background and the dynamical fields, \nwhich are all related by supersymmetry to a certain Chern-Simons term.\nThe Chern-Simons terms are of the form~$\\int_{\\mathcal{M}_3} A_x \\wedge \\mathrm{d}A_y$, where~$A_x$\nand~$A_y$ represent the various gauge fields. 
As discussed in the previous subsection, \nthese are the dynamical gauge field, the background graviphoton, the background R-gauge field, \nand the spin connection. We follow, and review in Appendix~\ref{sec:CSactions}, \nthe treatment of~\cite{DiPietro:2016ond} for the supersymmetrized Chern-Simons action of all \nthe background and the dynamical gauge fields up to~$O(R^0)$. The full effective action also \nincludes RR and gravitational supersymmetrized CS terms discussed in \cite{Closset:2018ghr}, \nwhich turn out to be crucial for our purposes.\n\nIt follows from the above discussion that the overall coefficient at each order in~$R$ \ncan be fixed by calculating the coefficient of the Chern-Simons terms themselves. \nThese coefficients, in turn, can be obtained by integrating out all the fermions coupling \nto the corresponding gauge fields. The resulting induced Chern-Simons coefficient is one-loop exact. \nThus the strategy is to integrate out the fermions in each KK mode, write the \nresulting Chern-Simons action, and sum over all the fermions in the theory. \nThe KK momenta of the fermions take the values~$p_Y = k_Y\/R$, with $k_Y =n + \frac{n_0}{2}$, $n \in \mathbb{Z}$.\nThe shift~$n_0\/2$ appears because of the gauge fields in the background~\eqref{4dbackgnd}. \n(Recall, for example, that the four-dimensional Killing spinor~\eqref{KSsol} has momentum~$n_0\/2$.) \n\n\nThe result for the complete action obtained by integrating out a fermion~$\tf$ of \nR-charge~$r_\tf$ and transforming in a representation of weight~$\rho_\tf$ under the gauge group \nis given in Appendix~\ref{sec:CSactions} and takes the following form,\n\be\n\delta S^\tf_\text{1-loop} \= \widetilde{S}^\tf_\text{g-g} + 2 \, \widetilde{S}^\tf_\text{g-R} + S^\tf_\text{R-R} + S^\tf_\text{grav} \,.\n\label{Sftot1}\n\ee\nThe terms in~\eqref{Sftot1} depend on the real mass~$m_\tf$ (related to the central charge appearing \nin the three-dimensional algebra). 
\nThe first two terms depend on the dynamical gauge field. \nOn the configuration~\\eqref{AYval3d} they take the following values, \n\\be\n\\begin{split}\n\\widetilde{S}^\\tf_\\text{g-g} & \\=-\\i \\pi \\, \\frac{ \\mathrm{sgn}(m_\\tf)}{8R^2} \\, \n\\bigl(\\rho_{\\tf}\\cdot \\underline{u} - k_Y \\bigr)^2\\, A_{\\mathcal{M}_3} \\,, \\\\\n2 \\, \\widetilde{S}^\\tf_\\text{g-R} & \\=-\\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{8R} \\, \n2 \\, r_\\tf \\, \\bigl(\\rho_{\\tf}\\cdot \\underline{u} - k_Y \\bigr) \\; L_{\\mathcal{M}_3} \\,,\n\\end{split}\n\\label{S1S2fin}\n\\ee\nwhere~$A_{\\mathcal{M}_3}$ and $L_{\\mathcal{M}_3}$ are functions\nof the three-dimensional background given in~\\eqref{defALl}.\nThe last two terms in~\\eqref{Sftot1} do not depend on the dynamical gauge field,\nand are given by \n\\be\n\\begin{split}\n S^\\tf_\\text{R-R}&\\=-\\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{8} \\, \\bigl(r_\\tf^2-\\frac{1}{6}\\bigr) \\, R_{\\mathcal{M}_3} \\,, \\\\\n S^\\tf_\\text{grav}&\\=-\\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{192} \\, G_{\\mathcal{M}_3} \\,, \n\\label{S:susy:explicit3}%\n\\end{split}\n\\ee\nwhere~$R_{\\mathcal{M}_3}$ and~$G_{\\mathcal{M}_3}$ are functions\nof the three-dimensional background given in~\\eqref{eq:CSactions2}.\n\n\n\nIn Appendix~\\ref{sec:CSactionvals} we calculate the values of these background \nactions.\\footnote{We note that there is a subtlety with the gravitational CS term in~\\eqref{eq:CSactions2},\nconcerning the dependence of the term on the frame~\\cite{Witten:1988hf}. 
\nThere should be a choice of frame which is consistent with the supersymmetry and the 4d to 3d reduction.\nWe do not work out the details of this issue in this paper, and instead rely on consistency \nwith~\\cite{Closset:2018ghr} where this term is obtained indirectly by considering integrating out chiral multiplets.\nWe thank Cyril Closset for a discussion on this point.}\nAs explained above, we perform the calculations keeping~$R$, $\\O$ finite so that \nthe three-dimensional physics is manifestly smooth. The result is that there is a smooth \nlimit as~$\\gamma \\to 0$ keeping fixed~$\\t$. \nThe limiting values of the actions are as follows,\n\\be\n\\begin{split}\n&A_{\\mathcal{M}_3} \\= - 4 \\,, \\qquad \n L_{\\mathcal{M}_3} \\= - 4 \\Bigl( 1 - \\frac{n_0}{2R} \\Bigr) \\,, \\\\ \n& R_{\\mathcal{M}_3} \\= - 4 \\, \\Bigl( 1 -\\frac{n_0}{2R} \\Bigr)^2 \\,, \\qquad \nG_{\\mathcal{M}_3} \\= - 16 + 4 \\, R_{\\mathcal{M}_3} \\,.\n\\end{split}\n\\label{ALvals}\n\\ee\nUsing these values, we obtain the total effective action of the fermion~$\\tf$ to be\n\\be\n\\begin{split}\n\\delta S^\\tf_\\text{1-loop} \n& \\= \\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{2R^2} \\, \n\\bigl(\\rho_{\\tf}\\cdot \\underline{u} - k_Y - \\tfrac{1}{2} n_0 \\, r_\\tf \\bigr)^2 \n+ \\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{R} \\, r_\\tf \\, \\bigl(\\rho_{\\tf}\\cdot \\underline{u} - k_Y - \\tfrac{1}{2} n_0 \\, r_\\tf \\bigr) \\\\\n& \\qquad + \\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{2} \\, r_\\tf^2 \n\\; - \\; \\i \\pi \\, \\frac{\\mathrm{sgn}(m_\\tf)}{12} \\,.\n\\end{split}\n\\label{defSf}\n\\ee\n\n\\vskip 0.4cm\n\nNow we turn to the sum over all the fermions in the theory. 
\nThe value of the real mass is given in~\\eqref{realmass} to be, \nas~$R \\to 0$,\\footnote{In fact the first three terms in~\\eqref{defSf} sum up to\n\\be \\i \\frac{\\pi}{2} \\, \\mathrm{sgn}(m_\\tf) \n\\Bigl( \\frac{ \\rho_{\\tf}\\cdot \\underline{u} - n - \\tfrac{1}{2} n_0 \\, r_I + (r_I-1) R }{R} \\Bigr)^2 \\,,\n\\ee\nand, using~\\eqref{eq:KKsumBbar} with~$x= \\rho_{\\tf}\\cdot \\underline{u} - n - \\tfrac{1}{2} n_0 \\, r_I + (r_I-1) R$ \nto perform the sum over the KK modes, we obtain an effective potential which \nreproduces the chiral multiplet contributions in~\\eqref{Veffans3d}. \nEssentially the same comment can be made in the microscopic analysis of Section~\\ref{sec:SCI}.\n} \n\\be\nm_{\\tf,n} \\= -\\frac{1}{R} \\bigl(\\rho_\\tf\\cdot \\underline{u} - n - \\tfrac12 n_0 \\, (r_\\tf+1) \\bigr) \\,.\n\\ee\nIn order to obtain the full effective action we now have to sum over all the fermions.\nFor the chiral multiplets, this implies summing over all the weights in \nrepresentations~$\\rho_\\tf\\in\\mathcal{R}_\\tf$, as well as over all momenta labelled by $n\\in\\mathbb{Z}$. \nThe summation over KK modes can be evaluated using\n\\begin{equation}\n \\sum_{n\\in\\mathbb{Z}}\\mathrm{sgn}(n+x)(n+x)^{j-1} \\= -\\frac{2}{j} \\, \\overline{B}_{j}(x) \\,,\n\\label{eq:KKsumBbar}\n\\end{equation}\nwith~$x=\\rho_\\tf\\cdot \\underline{u} - \\tfrac12 n_0 \\, r_I $, for $j=1,2,3$ (cf.~Section~4 \nof~\\cite{ArabiArdehali:2019tdm}).\\footnote{Here, the~$\\mbox{sgn}$ function is interpreted as applying \nto~$m_\\tf$ with~$R$ a real positive number (which therefore scales out of the formula so \nas to give~$\\mbox{sgn}(\\rho_\\tf\\cdot \\underline{u} - \\tfrac12 n_0 \\, r_I)$). \nNote that in the subsequent formulas $R$ is taken to be complex.}\nHere we have used the relation~$r_\\tf = r_I-1$ between the R-charge of the fermion and that of the \nbottom component of the multiplet~$I$ to which the fermion belongs. 
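The two-sided sum in~\eqref{eq:KKsumBbar} is formally divergent; on the standard zeta-function regularization (our reading, the text does not spell this out), each half-sum is a Hurwitz zeta value, $\sum_{n\geq 0}(n+x)^{j-1}\to\zeta(1-j,x)$. For $0<x<1$, where $\overline{B}_j(x)=B_j(x)$, the identity can then be spot-checked numerically, for instance with mpmath:

```python
# Spot-check of the KK-sum identity (eq:KKsumBbar), assuming Hurwitz-zeta
# regularization of the divergent two-sided sum:
#   sum_{n>=0} (n+x)^{j-1}          -> zeta(1-j, x)
#   sum_{m>=1} (-1)^j (m-x)^{j-1}   -> (-1)^j zeta(1-j, 1-x)
# For 0 < x < 1 the periodic Bernoulli polynomial Bbar_j(x) equals B_j(x).
from mpmath import mp, zeta, bernpoly

mp.dps = 30
for j in (1, 2, 3):
    for x in (mp.mpf('0.2'), mp.mpf('0.5'), mp.mpf('0.7')):
        lhs = zeta(1 - j, x) + (-1)**j * zeta(1 - j, 1 - x)
        rhs = mp.mpf(-2) / j * bernpoly(j, x)
        assert abs(lhs - rhs) < mp.mpf('1e-20'), (j, x)
```

The check passes because $\zeta(1-j,x)=-B_j(x)/j$ together with $B_j(1-x)=(-1)^jB_j(x)$ reproduces $-\tfrac{2}{j}B_j(x)$ exactly.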
\n\n\\vskip 0.4cm\n\nFor the vector multiplet contribution the analysis is quite similar: there is a tower of massive KK gaugino \nmodes that are integrated out. These generate CS actions whose supersymmetrization yields the \nvector multiplet contribution to $\\delta S_{\\text{1-loop}}$. In the present context, however, there is an important difference \nfrom the chiral multiplet analysis. Near $\\underline{u}=0$ there is a single gaugino mode in the tower that \nhas real mass of order $\\alpha\\cdot\\underline{u}\/R$, and is therefore considered a ``light'' mode \nfor small enough~$|\\alpha\\cdot\\underline{u}|$. \nTherefore we do not integrate out this mode and, instead, keep it as a dynamical mode in \nthe path integral of the three-dimensional theory.\n\nMore precisely, recall that the $n$th KK gaugino mode associated to a root~$\\alpha$ of \nthe gauge group has~$p_Y=(n+n_0)\/R$ and hence a real mass~$(\\alpha\\cdot\\underline{u} -n-n_0)\/R$. \nTherefore the mode corresponding to~$n=-n_0$ is light near~$\\alpha\\cdot\\underline{u}=0$. \nWe now describe how removing this term from the sum over the KK tower modifies the result \ncompared to the chiral multiplet computation. \nThe vector multiplet contribution is a sum over roots~$\\alpha$ that come in pairs~$\\pm\\alpha_+$, \nas a result of which they give vanishing contributions to the quadratic and constant terms in~$\\underline{u}$\nin the action of a single KK mode.\nWe therefore focus on the contribution to the linear term in~$\\underline{u}$, which is proportional to~$1\/R$. \nThe calculation is similar to the corresponding chiral multiplet calculation. 
\nUpon summing over all the KK modes, we obtain \nthe vector multiplet contribution \nfrom a root~$\\alpha$ to be \\\\\n\\be\n-\\frac{\\pi \\i}{R} \\, \\sum_{n\\in\\mathbb{Z}}{}^{'}\\mathrm{sgn} \\, (\\alpha\\cdot\\underline{u} -n-n_0)\\, \n\\bigl(\\alpha\\cdot\\underline{u} -n-n_0\\bigr)\\label{eq:sumPrimeVec} \\,,\n\\ee\nwhere the prime indicates that we are not including the light mode corresponding to $n=-n_0$. \nUpon adding and subtracting the $n=-n_0$ contribution, we obtain, using~\\eqref{eq:KKsumBbar}, \n\\be\n\\frac{\\i\\pi}{R} \\, \\Bigl( \\overline{B}_2(\\alpha\\cdot\\underline{u})+ |\\alpha\\cdot\\underline{u}| \\Bigr)\\,.\n\\ee\nNow, since we are interested in the proximity of~$\\underline{u}=0$, we use the fact that for~$|x|<1$ \nwe have~$\\overline{B}_2(x)=x^2-|x|+\\frac{1}{6}$, to simplify the result to\n\\be\n\\frac{\\i\\pi}{R} \\, \\Bigl((\\alpha \\cdot \\underline{u})^2 + \\frac{1}{6}\\Bigr) \\,.\n\\ee\n\n\nUpon putting all the pieces together, we obtain the total one-loop correction to the \nWilsonian action of the three-dimensional theory, \nwhich we call~$V_\\text{eff}(\\underline{u})$ (we justify this name below). 
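The simplification used above, $\overline{B}_2(x)+|x| = x^2+\tfrac{1}{6}$ for $|x|<1$, is elementary and easy to verify numerically with the periodic Bernoulli polynomial $\overline{B}_2(x)=B_2(x-\lfloor x\rfloor)$ (a standalone check in plain Python):

```python
import math

def B2bar(x):
    """Periodic Bernoulli polynomial: B2(t) = t^2 - t + 1/6 on the fractional part."""
    t = x - math.floor(x)
    return t * t - t + 1.0 / 6.0

# For |x| < 1:  B2bar(x) + |x| = x^2 + 1/6, as used in the vector multiplet sum.
for k in range(-9, 10):
    x = k / 10.0
    assert abs(B2bar(x) + abs(x) - (x * x + 1.0 / 6.0)) < 1e-12
```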
We have\n\\be\n\\begin{split}\n V_\\text{eff}(\\underline{u}) \n \\= & \\sum_\\tf\\sum_{\\rho_\\tf\\in\\mathcal{R}_\\tf} \\delta S^\\tf_\\text{1-loop} \\, \\\\\n \\= & \\i\\pi \\sum_{I, \\, \\rho_I} \\,\n \\Bigl(\\frac{1}{3R^2} \\, \\overline{B}_3 \\bigl(\\rho_\\tf\\cdot \\underline{u} - \\tfrac{1}{2} n_0 \\, r_I \\bigr)\n + \\frac{r_I-1}{R} \\, \\overline{B}_2 \\bigl(\\rho_\\tf\\cdot \\underline{u} - \\tfrac{1}{2} n_0 \\, r_I \\bigr) \\\\\n& \\qquad\\qquad\\qquad + \\frac{1}{R} \\sum_{\\alpha} \\, \\bigl((\\alpha \\cdot \\underline{u})^2 + \\tfrac{1}{6} \\bigr) \n + \\bigl( (r_I-1)^2 \\, - \\tfrac{1}{6} \\bigr) \\overline{B}_1 \n \\bigl(\\rho_\\tf\\cdot \\underline{u} - \\tfrac{1}{2} n_0 \\, r_I \\bigr) \\Bigr) \\,.\n\\end{split}\n\\label{Veffans3d}\n\\ee\n\nWe now localize the path integral of the light gauge multiplet mode that was excluded from \nthe sum~\\eqref{eq:sumPrimeVec}, using its Wilsonian effective action, which consists of the \ntree-level action coming from the light~$n=-n_0$ mode in 4d, as well as the one-loop \naction~$\\delta S_{\\text{1-loop}}$ derived above (in the bosonic sector, which is relevant \nfor the localization calculation) from integrating out the heavy modes.\nIt is useful to keep in mind the different but related problem of calculating the partition function of \nsuperconformal CS theory coupled to matter on~$\\mathcal{M}_3$ \\cite{Kapustin:2009kz}, \\cite{Willett:2016adv}. \nIn that case the theory localizes onto \narbitrary constant values of the scalar~$\\sigma$ and is supported by the auxiliary scalar~$H$. \nThe measure including the one-loop determinant of the localizing action in the non-BPS \ndirections is\\footnote{Compare with Section~5 of~\\cite{Aharony:2013dha}, noting that for \nsquashed~$S^3$ with squashing parameter~$b$ one has~$\\omega_{1}^{\\text{thf}}=\\i b$, \n$\\omega_{2}^{\\text{thf}}=\\i b^{-1}$. 
We leave the derivation of~\\eqref{eq:thfModuli} from the \nmetric~\\eqref{3dmetric1} to future work.} \n\\be\n\\int\n\\frac{D \\underline{\\sigma}}{\\sqrt{- \\omega_1^{\\text{thf}}\\, \\omega_2^{\\text{thf}}}}\\, \n\\prod_{\\alpha_+}4\\sinh\\bigl( \\frac{\\pi \\alpha_+ \\cdot \\underline{\\sigma}}{-\\i\\, \\omega_1^{\\text{thf}}} \\bigr)\n\\sinh\\bigl( \\frac{\\pi \\alpha_+ \\cdot \\underline{\\sigma}}{-\\i\\, \\omega_2^{\\text{thf}}} \\bigr) \\,,\n\\label{1loopM3}\n\\ee\nwith~$\\omega_{1,2}^{\\text{thf}}$ the moduli of the transversely holomorphic foliation \n(THF)~\\cite{Closset:2013vra} of~$\\mathcal{M}_3$, which we expect to be\n\\be\n \\omega_{1}^{\\text{thf}}\\=\\omega_{2}^{\\text{thf}}\\=\\i\\sqrt{\\frac{1-\\Omega}{1+\\Omega}}.\\label{eq:thfModuli}\n\\ee\n\n\nRecalling from~\\eqref{AYval3d} that~$\\sigma^i \\= u_i\/R$, and adding the contribution \nfrom~$\\delta S_{\\text{1-loop}}$ in~(\\ref{Veffans3d}) (which, although it arises at one loop \nin the high-temperature EFT, contributes as a ``classical'' piece in the localization computation), \nwe obtain the final result for the \nthree-dimensional partition function \n\\be\nZ(\\t) \\= \\int \\frac{D\\underline{u}}{(-\\i\\, \\tau)^{\\mathrm{rk}(G)}}\\, \n\\prod_{\\alpha_+}4\\sinh^2\\Bigl(\\frac{\\pi \\alpha_+ \\cdot \\underline{u}}{-\\i\\, \\tau}\\Bigr) \\exp \\bigl( - V_\\text{eff}(\\underline{u}) \\bigr) \\,.\n\\label{Z3d}\n\\ee\nNoting that the supersymmetric partition function and the Hamiltonian\nindex are related as \\cite{Ardehali:2015hya,Assel:2015nca}\n\\be\nZ(\\t) \\= e^{2\\pi\\i\\tau E_\\text{susy}} \\, \\mathcal I (\\t) \\,,\n\\ee\nwe see that the result~\\eqref{Z3d} agrees precisely with the microscopic\nresult~\\eqref{eq:In0semi-simpleAsy0}--\\eqref{Esusy}.\n\nWe emphasize that while the above derivation of~$V_{\\text{eff}}$ in~\\eqref{Veffans3d} \napplies to~$\\underline{u}$ near 0, it can be easily extended to generic finite~$\\underline{u}$ by modifying the \nvector multiplet discussion. 
For generic~$\\underline{u}$, the non-Cartan components of the~$n=-n_0$ \nmode of the vector multiplet are also heavy, and ought to be integrated out. Consequently the \nsum in~\\eqref{eq:sumPrimeVec} would no longer have a prime, and we end up with~$V^r_1$ \nas in~\\eqref{eq:V1ren} rather than~$V_1$ in~\\eqref{Veffans3d}. This is the EFT derivation of \nthe finite-$\\underline{u}$ potentials~$V_{1,2}$ found microscopically in~\\cite{Cabo-Bizet:2019osg}.\n\nOn the other hand, when $n_0=0$, the small-$\\underline{u}$ discussion leading up to~\\eqref{Veffans3d} \nneeds to be modified because now the chiral multiplets have light modes (corresponding to $n=0$). \nAs in the discussion around~\\eqref{eq:sumPrimeVec} the light mode should be removed from the \nKK sum and instead be included in the dynamical part (to be localized). Indeed, it is well-known that \nfor~$n_0=0$ the constant piece of the small-$\\tau$ expansion coming from the~$\\underline{u}=0$ saddle \ncontains the (localized)~$S^3$ partition function of the dimensionally reduced chiral as well as \nvector multiplets~\\cite{Ardehali:2015bla} \n(see~\\cite{Dolan:2011rp,Spiridonov:2012ww,Niarchos:2012ah,Gadde:2011ia,Imamura:2011uw} \nfor earlier work on the connection between 4d indices and $S^3$ partition functions). \n\nFinally, as discussed in Appendix~\\ref{sec:diffomegas}, we find that the \neffective potential for~$\\t$ and~$\\sigma$ not necessarily equal is given by \nmaking the replacement \n\\be\n\\frac{1}{R^2} \\to \\frac{1}{\\t \\, \\sigma} \\,, \\qquad \\frac{1}{R} \\to \\frac{\\t + \\sigma}{2\\t \\, \\sigma}\n\\ee\nin the effective potential~\\eqref{Veffans3d}. The singular pieces are indeed in agreement \nwith the microscopic calculations reported \nin~\\cite{Kim:2019yrz,Cabo-Bizet:2019osg}.\n\n\n\n\\subsection{Rational points}\n\nWe now turn our attention to the limit of~$\\t$ approaching a rational point. 
\nIn the discussion of the previous subsection we used the fact that the radius of the circle~$R$ \nequals~$\\t$ which becomes small in the limit, so that we could use an effective three-dimensional description. \nNow we are interested in~$\\widetilde \\t = m\\t +n \\to 0$, with~$n, m \\in \\mathbb{Z}$ (with no common factor) \nas in~\\cite{Cabo-Bizet:2019eaf}. \nIn terms of the variable~$\\widetilde \\t$ we have that~$\\omega = 2 \\pi \\i \\t = 2 \\pi \\i (\\widetilde \\t -n)\/m$ so that\n\\be\n\\O \\= 1+ \\frac{\\omega}{\\gamma} \\= 1- \\frac{2 \\pi \\i n}{m\\gamma} + \\frac{2 \\pi \\i \\widetilde \\t}{m\\gamma} \\,,\n\\ee\nand the four-dimensional metric background~\\eqref{4dbackgnd} is now\n\\be \\label{4dbackgndrat}\n\\begin{split}\n\\mathrm{d} s_4^2 \\= & \\mathrm{d} t_E^2 + \\mathrm{d} \\theta^2 \n+ \\sin^2 \\theta \\, \n\\Bigl(\\mathrm{d} \\phi_1 - \\frac{2 \\pi n}{m\\gamma} \\, \\mathrm{d} t_E -\\i \\, \\bigl(1 + \\frac{2 \\pi \\i \\widetilde \\t}{m\\gamma}\\bigr) \\, \n\\mathrm{d} t_E \\Bigr)^2 \\\\\n& \\qquad \\qquad \\qquad + \\cos^2 \\theta \\, \n\\Bigl(\\mathrm{d} \\phi_2 - \\frac{2 \\pi n}{m\\gamma} \\, \\mathrm{d} t_E -\\i \\, \\bigl(1+ \\frac{2 \\pi \\i \\widetilde \\t}{m\\gamma}\\bigr) \\, \n\\mathrm{d} t_E \\Bigr)^2 \\,.\n\\end{split}\n\\ee\nIn terms of the following new coordinates and new parameters,\n\\be\n\\widetilde \\gamma \\= m \\gamma \\,, \\qquad \n\\widetilde \\O \\= 1+ \\frac{2 \\pi \\i \\widetilde \\t}{\\widetilde \\gamma} \\,, \\qquad \n\\widetilde \\phi_i \\= \\phi_i - \\dfrac{2 \\pi n}{\\widetilde \\gamma} t_E \\,,\n\\ee\nthe above metric is \n\\be \\label{4dbackgndrat1}\n\\mathrm{d} s_4^2 \\= \\mathrm{d} t_E^{2} + \\mathrm{d} \\theta^2 \n+ \\sin^2 \\theta \\, \n\\bigl(\\mathrm{d} \\widetilde \\phi_1 -\\i \\, \\widetilde \\O \\, \\mathrm{d} t_E \\bigr)^2 + \\cos^2 \\theta\\, \\bigl(\\mathrm{d} \\widetilde \\phi_2 -\\i \\, \\widetilde \\O \\,\\mathrm{d} t_E \\bigr)^2 \\,,\n\\ee\nwith~$\\widetilde \\phi_1$, $\\widetilde \\phi_2$ 
being $2\\pi$-periodic as before, \nand the periodic identification going around the time circle is \n\\be \\label{newident}\n\\bigl(t_E \\,, \\; \\widetilde \\phi_1 \\,, \\; \\widetilde \\phi_2 \\bigr) \\; \\sim \\;\n\\Bigl( t_E + \\frac{\\widetilde \\gamma}{m} \\,, \\; \\widetilde \\phi_1 - \\frac{2 \\pi n}{m} \\,, \\; \\widetilde \\phi_2 - \\frac{2 \\pi n}{m} \\Bigr) \\,.\n\\ee\nThe metric configuration~\\eqref{4dbackgndrat1} with the identifications~\\eqref{newident} is \nsimply a global identification, or orbifold, of the configuration considered in the previous subsection with \nthe new parameters~$(\\widetilde \\gamma, \\widetilde\\t, \\widetilde \\O)$ replacing $(\\gamma,\\t, \\O)$.~\\footnote{We learned about \nthese orbifolds from a talk by O.~Aharony at the Stony Brook seminar series in November \n2020~\\cite{AhaSBTalk}.}\n\n\nOn the covering space, \ngoing around the time circle shifts~$\\widetilde t_E \\to \\widetilde t_E + \\widetilde \\gamma$ and~$\\widetilde \\phi_i \\to \\widetilde \\phi_i + 2 \\pi n$.\nThe latter identification can be trivialized by using the independent~$2\\pi$-periodicity of~$\\widetilde \\phi_i$, so that \nwe have the identification~$\\bigl(\\,\\widetilde t_E \\,, \\widetilde \\phi_1 \\,, \\widetilde \\phi_2 \\, \\bigr) \\sim \n\\bigl(\\, \\widetilde t_E + \\widetilde \\gamma \\,, \\widetilde \\phi_1 \\,, \\widetilde \\phi_2 \\, \\bigr)$. \nOn this configuration we can perform the dimensional reduction to three dimensions.\nThe relevant considerations of the previous subsection go through exactly as before \nwith the replacement~$(\\gamma,\\t, \\O) \\mapsto (\\widetilde \\gamma, \\widetilde\\t, \\widetilde \\O)$. Actually, because the gauge \nholonomies on the cover wrap a circle~$m$ times larger than the original~$S^1$, we also get \na replacement~$u_j\\to mu_j$. 
Moreover, since~$\\xi_I$ (which equals~$-n_0r_I\/2$ for~$(m,n)=(1,0)$) \neffectively plays the role of a flavor chemical potential in our problem as mentioned around~\\eqref{eq:defXiGen}, \nwe expect a similar replacement $-n_0r_I\/2\\to m\\xi_I$ (we postpone a direct 3d explanation of this \nreplacement to future work). With the preceding substitutions in the results of the previous \nsubsection, we thus arrive at the potentials~$\\widetilde V_{2,1}$ in~\\eqref{eq:defVsRat}. \nWe then take the~$\\mathbb{Z}_m$ quotient which has two effects as usual. \nFirstly it reduces the volume of the three-dimensional space, and \nsecondly it introduces new topologically non-trivial sectors in the path integral over the \ngauge-field configurations.\nThe change in calculations involving local gauge-invariant Lagrangians will therefore \nbe only a reduction in the action by a factor of~$m$. \nThis explains the reduction of the effective potential by a factor of~$m$ as in~\\eqref{eq:In0semi-simpleAsyRat0}.\n\nFinally we discuss the constant terms (in~$\\widetilde \\t$) arising from the functional integral over the \ndynamical gauge multiplet. \nThere are a few subtleties. Firstly the actions like the gravitational CS action will depend on the \nglobal properties of the orbifold. 
\nThen we need to calculate the partition function of the orbifold space with a background graviphoton.\nAssuming as in the previous subsection that the expected THF moduli arise, and that by \nre-scaling and contour deformation (as discussed around~\\eqref{eq:Z0Rational}) the THF\nmoduli can be replaced with those of round~$S^3$, the calculation presumably reduces to \nan~$S^3\/\\mathbb{Z}_m$ partition function as in~\\cite{Benini:2011nc,Alday:2012au,Gang:2019juz,Willett:2016adv}, \nwith the~$\\mathbb{Z}_m$ action following from~\\eqref{newident} to be\n\\be\n(\\widetilde\\phi_1,\\widetilde\\phi_2) \\; \\sim \\; (\\widetilde\\phi_1-\\frac{2\\pi n}{m},\\widetilde\\phi_2-\\frac{2\\pi n}{m}),\n\\ee\nwhich for~$n=1$ coincides with that of the lens space~$L(m,-1)$.\nHere one has to be careful about how the measure on the space \nof constant scalars~$\\sigma_i$ is affected by the four-dimensional orbifold~\\eqref{newident}. \nWe leave these interesting questions to future work, noting that \nthe result of these considerations indeed agrees with the microscopic \nanswer~\\eqref{eq:N=4indexAsyRational}, with the~$\\text{O}(\\widetilde \\t)$ piece explained by the \nsupersymmetric Casimir energy factor as before.\n\n\\vspace{0.4cm}\n\n\\noindent \\textbf{Note Added.} The paper~\\cite{Cassani:2021fyv}, which appeared on the arXiv \nthe same day as the first version of this paper, has some overlap with our section~\\ref{sec:4dto3d}. 
\nThe paper~\\cite{Jejjala:2021hlt}, which appeared on the arXiv soon after, has some overlap \nwith our section~\\ref{sec:SCI}.\n\n\n\n\n\n\\section*{Acknowledgements}\n\nWe would like to thank Alejandro Cabo-Bizet, Daniel Butter, Davide Cassani, Cyril Closset, Zohar Komargodski, Neil Lambert, Stavros Garoufalidis,\nBernard de Wit, and Don Zagier for useful discussions and comments.\nThis work is supported by the ERC Consolidator Grant N.~681908, ``Quantum black holes: A macroscopic \nwindow into the microstructure of gravity'', and by the STFC grant ST\/P000258\/1. \nAAA would like to especially thank Junho~Hong for several helpful discussions and \ncollaboration on a related project.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n Geometry has been a testbed for automated deduction almost\nas long as computers have existed; the first experiments were done\nin the 1950s. In the nineteenth century, geometry was the testbed for the development \nof the axiomatic method in mathematics, spurred by the efforts to prove Euclid's\nparallel postulate from his other postulates and ultimately the development of\nnon-Euclidean geometry. This effort culminated in Hilbert's seminal 1899 book \\cite{hilbert1899}.\n In the period 1927--1965, Tarski developed his \nsimple and short axiom system (described in \\S\\ref{section:axioms}). \nSome 35 years ago, Wos experimented with finding proofs\nfrom Tarski's axioms, reporting success with simple theorems, but leaving several\nunsolved challenge problems. The subject was revisited\nby Art Quaife, who in his 1992 book \\cite{quaife1992} reported on the successful \nsolution of some of those challenge problems using an early version of McCune's theorem prover, \\Ott. 
\nBut several theorems remained that Quaife was not able to get {\\textsc OTTER\\ } to prove,\nand he stated them as ``challenge problems'' in his book.\nAs far as we know, nobody took up the subject again until 2012, when we \nset out to see whether automated reasoning techniques, and\/or computer\nhardware, had improved enough to let us progress beyond Quaife's achievements.%\n\\footnote{There is also a long tradition, going back to Descartes, of reducing geometric\nproblems to algebra calculations by introducing coordinates. Algorithms for carrying out \nsuch calculations by computer have been extensively studied, including special methods intended\nfor geometry and \nTarski's general decision procedure for real closed fields. We mention these only to \nemphasize that such methods are irrelevant to this paper, which is concerned with proofs\nin an axiomatic system for geometry.}\n\nThe immediate stimulus leading to\n our 2012 work was the existence of the almost-formal development of \nmany theorems in Tarskian geometry in Part I of \\cite{schwabhauser}. This Part I \nis essentially the manuscript developed by Wanda Szmielew for her 1965 Berkeley lectures\non the foundations of geometry, with ``inessential modifications'' by Schw\\\"abhauser.\nThere are 16 chapters. Quaife's challenge problems (listed in \\S\\ref{section:challenges})\n occur in the first nine chapters.\nThe rest contain other important geometrical theorems (described in \\S\\ref{section:results} \nand \\S\\ref{section:hardtheorems}).\nWe set ourselves the goal to find {\\textsc OTTER\\ } proofs of each of the theorems in\nSzmielew's 16 chapters; we completed twelve of them, which is enough to make our points.\nOur methodology focuses on separate problems, each targeting one theorem. 
For\neach problem, we supplied the axioms and the previously-proved \ntheorems, as well as the (negated) goal expressing the theorem to be proved.\nWe were often forced to supply {\\textsc OTTER\\ } with more information than that,\nas we describe in \\S\\ref{section:diagrams} and \\S\\ref{section:hints}.\n\n\nWe do not know of another \nbody of mathematics of this size that has been formalized by using a theorem prover. \nNormally proof checkers are used to formalize a whole theory, and theorem provers\nare used for individual theorems. Our first report on this work \\cite{beeson2014-wos}\nemphasized the solution of two hundred individual problems. We prepared two \nhundred {\\textsc OTTER\\ } input files by hand. Since the time of that report, we developed \na methodology for the automated preparation and checking of those input files, to \nensure that no human error has corrupted the formal development of an entire theory\nas embodied in two hundred input files and proofs. This methodology makes it \npossible to easily conduct experiments involving many files. For example, \none can easily generate input files for the theorem prover of one's choice, \ninstead of for the prover we used (\\Ott). \nOn the web site for this project \\cite{tarski-archive}, we have made available\nour input files, the resulting proofs, and various tools for manipulating theorems\nand files, which we hope will be of use to others wishing to work with these problems.\n\nWe distinguish between proofs that were found completely mechanically (without \nreference to the steps of a book proof) and proofs that were constructed by \nsome technique that involved a human knowing the steps of a book proof. Roughly \nspeaking, we were able to derive mechanically most of the theorems that have \n``short'' proofs (40 or fewer deduction steps). \nProofs of length 40--100, roughly speaking, are difficult\nexercises for a human, and proofs of 100-250 steps belong in a Ph.D. thesis or\npublication. 
At first (that is, in \\cite{beeson2014-wos}), we could not obtain\nsuch long proofs completely mechanically. Our main goal at that time (2013)\nwas to obtain formal proofs by any means possible. That meant starting with \na book proof and using techniques explained in \\S\\ref{section:hints} (lemma adjunction and hint injection)\nto eventually obtain a formal proof. We did obtain proofs of all two hundred theorems. \nBut that left an obvious challenge: Can one develop methods that \nenable a theorem prover to prove these theorems without reference to the steps of a book proof?\nWe were able to prove some (but not all) \nquite long theorems (of Ph.D. difficulty) \ncompletely mechanically, using a technique called the ``subformula strategy''\n\\cite{wos-notebook2008}.\nThe challenge of developing strategies to reliably find proofs \nof such difficult theorems (by methods independent of knowledge of a book proof) still stands. \n\nIn this paper, we give Tarski's axioms, explain the challenge problems of Quaife\nand some of the axioms of Hilbert, discuss the difficulties of finding {\\textsc OTTER\\ } \nproofs of these theorems, and explain what techniques we used to find those proofs. \nThen we explain our automated file-generation\nand checking methodologies and the extent to which those methods ensure the reliability\nof the whole theory. \n\n \n\\section{The challenge problems} \\label{section:challenges}\nQuaife's four challenge problems were as follows: every line segment has a midpoint;\nevery segment is the base of some isosceles triangle; the outer Pasch axiom (assuming \ninner Pasch as an axiom); and the first outer connectivity property of betweenness.\nThese are to be proved without any parallel axiom and without even line-circle continuity.\nThe first proofs of these theorems were the heart of Gupta's \nPh.D. thesis \\cite{gupta1965} under Tarski. {\\textsc OTTER\\ } proved them all in 2013. 
\nAll Quaife's challenge problems occur in Szmielew; the last of them is Satz 9.6,\nso solving these challenge problems is a consequence of formalizing Chapters 2-11 of \nSzmielew. \n\nThe theory of Hilbert (1899) can be translated into Tarski's language, interpreting\nlines as pairs of distinct points and interpreting angles as ordered triples of non-collinear points.\nUnder this interpretation, the axioms of Hilbert either occur among or are easily \ndeduced from theorems in the first 11 (of 16) chapters of Szmielew. We have found\n{\\textsc OTTER\\ } proofs of all of Hilbert's axioms from Tarski's axioms (through Satz 11.49 \nof Szmielew, plus Satz 12.11). \n\n \n\\section{Related work}\nWe know of several other projects involving formal proofs in Tarskian geometry.\nBraun and Narboux \\cite{narboux2012}, \\cite{narboux2015} have checked many theorems of Tarskian geometry in Coq; they have now gotten as far as Pappus's theorem in Chapter 15. \n Richert has checked some in HOL Light (unpublished, but distributed with HOL Light). \n Urban and Veroff are also checking some in HOL-Light. \n Durdevic {\\em et.al.} \\cite{narboux2015b} have used Vampire, E, and {\\textsc SPASS}\n to check some of these theorems. \n \nWos has also conducted many experiments aimed at shortening some of \nthese proofs, and other proofs in Tarski's older axiom system, or finding forward \nproofs instead of backwards or bidirectional proofs; some of these are\ndescribed in \\cite{wos-notebookTarski}. \n\n\n \n \n\\section{Tarski's axioms} \\label{section:axioms}\n In about 1927, Tarski first lectured on his axiom system for geometry, which \nwas an improvement on Hilbert's 1899 axioms in several ways: First, the language\nhad only one sort of variables (for points), instead of having three primitive\nnotions (point, line, and angle). Second, it was a first-order theory (Hilbert's\naxioms mentioned sets, though not in an essential way). 
Third, the axioms were short,\nelegant, and few in number (only twenty in 1927, decreasing to twelve in 1957 \\cite{tarski-givant}). They could be expressed comprehensibly\nin the primitive syntax, without abbreviations. \n\n\\subsection{History}\n The development of Tarski's theory, started in 1927 or before, was delayed, first \n by Tarski's involvement in other projects, and then by \nWorld War II (galley proofs of Tarski's article about it were destroyed by bombs).\nThe first publication of Tarski's axioms came in 1948 \\cite{tarski1951} and contained little more \nthan a list of the axioms and a statement of the important metamathematical theorems \nabout the theory (completeness, representation of models as ${\\mathbb F}^2$ for ${\\mathbb F}$ a \nreal-closed field, quantifier-elimination, and decidability). Tarksi\nthen lectured on the subject at Berkeley in 1956 to 1958, and published a reduced\nset of axioms in 1959 \\cite{tarski1959}. In the 1960s, \nTarski, Szmielew, Gupta, and Schw\\\"abhauser (and some students) \nreduced the number of axioms still further. \nThe manuscript that Szmielew prepared for her 1965 course became Part I of \\cite{schwabhauser}. \nMore details of the history of these axioms can be found in \\cite{tarski-givant} (our main source) \nand \nthe foreword to (the Ishi Press edition of) \\cite{schwabhauser}. \nFor our present purposes, the relevance of the\nhistory is mainly that there are three versions of Tarski's theory: the 1948 version, \nthe 1959 version, and the 1965 version (published in 1983).\nThe earlier experiments of Wos used the 1959 axioms, but Quaife used the 1965 version,\nas we do. The exact differences are explained in \\S\\ref{section:betweenness} and \n\\S\\ref{section:pasch}. 
\n\n\\subsection{Syntax}\nThe fundamental relations in the \ntheory (first introduced by Pasch in 1882) are ``betweenness'', which we here write\n${\\bf T}(a,b,c)$, and ``equidistance'', or ``segment congruence'', which is officially \nwritten $E(a,b,c,d)$ and unofficially as $ab = cd$, segment $ab$ is congruent to segment\n$cd$. The intuitive meaning of ${\\bf T}(a,b,c)$ is that $b$ lies between $a$ and $c$ on\nthe line connecting $a$ and $c$; Tarski used non-strict betweenness, so we do \nhave ${\\bf T}(a,a,c)$ and ${\\bf T}(a,c,c)$ and even ${\\bf T}(a,a,a)$. Hilbert used strict betweenness.\nBoth of them wrote ${\\bf B}(a,b,c)$, which is a potential source of confusion. We therefore\nreserve ${\\bf B}$ for strict betweenness and use ${\\bf T}$ for Tarski's non-strict betweenness.\nThe fact that Tarski's initial is `T' should serve as a mnemonic device. Of course\nthe equality relation between points is also part of the language. \n\n\\subsection{Betweenness and congruence axioms}\\label{section:betweenness}\nWe sometimes write $ab=cd$ instead of $E(a,b,c,d)$ to enhance human readability. In {\\textsc OTTER\\ } files\nof course we use $E(a,b,c,d)$. 
The following are five axioms from the 1965 system.\n\\smallskip\n\n\n\\begin{tabular*} {0.7\\textwidth}{@{\\extracolsep{\\fill}}ll}\n$ab = ba$& (A1) reflexivity for equidistance\\\\\n$ab=pq \\land ab = rs \\ \\rightarrow\\ pq = rs$& (A2) transitivity for equidistance \\\\\n$ab = cc \\ \\rightarrow\\ a=b$& (A3) identity for equidistance \\\\\n$\\exists x\\,({\\bf T}(q,a,x) \\land ax = bc)$& (A4) segment extension\\\\\n${\\bf T}(a,b,a) \\ \\rightarrow\\ a=b$& (A6) identity for betweenness\n\\end{tabular*}\\smallskip\n\nWhen using (A4) in \\Ott, we Skolemize it:\n\\smallskip\n\n\\begin{tabular*} {0.7\\textwidth}{@{\\extracolsep{\\fill}}ll}\n$ {\\bf T}(q,a,ext(q,a,b,c)) \\land E(a,ext(q,a,b,c),b,c)$& (A4) Skolemized\\\\\n\\end{tabular*}\\smallskip\n\nThe original (1948) theory had the following additional fundamental properties\nof betweenness listed as axioms. (We follow the numbering of \\cite{tarski-givant}.)\n\\smallskip\n\n\\hskip-0.5cm\n\\begin{tabular*} {0.7\\textwidth}{@{\\extracolsep{\\fill}}ll}\n${\\bf T}(a,b,b)$& (A12) Reflexivity for ${\\bf T}$ \\\\\n${\\bf T}(a,b,c) \\ \\rightarrow\\ {\\bf T}(c,b,a)$& (A14) Symmetry for ${\\bf T}$ \\\\\n${\\bf T}(a,b,d) \\land {\\bf T}(b,c,d) \\ \\rightarrow\\ {\\bf T}(a,b,c)$& (A15) Inner transitivity \\\\\n${\\bf T}(a,b,c) \\land {\\bf T}(b,c,d) \\land b \\neq c \\ \\rightarrow\\ {\\bf T}(a,b,d)$&(A16) Outer transitivity \\\\\n${\\bf T}(a,b,d) \\land {\\bf T}(a,c,d) \\ \\rightarrow\\ {\\bf T}(a,b,c) \\lor {\\bf T}(a,c,b)$ & (A17) Inner connectivity \\\\\n${\\bf T}(a,b,c) \\land {\\bf T}(a,b,d) \\land a\\neq b$& (A18) Outer connectivity \\\\\n$\\qquad \\ \\rightarrow\\ {\\bf T}(a,c,d) \\lor {\\bf T}(a,d,c)$& \n\\end{tabular*}\\smallskip\n\\smallskip\n\n\\noindent\nOf these only (A15) and (A18) appear in the 1959 version, because in 1956--57 Tarski and \nhis students Kallin and Taylor showed that the other four are dependent (derivable from the \nremaining axioms). H.~N.~Gupta\nshowed in his 1965 Ph.D. 
thesis \\cite{gupta1965} that (A18) is also dependent. The \nproof of (A18) is one of Quaife's challenge problems.\nGupta also showed that (A15) implies (A6) using the other axioms of the 1959 system.\nThen one could have dropped (A6) as an axiom; but instead, Szmielew\ndropped (A15), keeping (A6) instead; then (A15) becomes a \ntheorem. \n\nAll six of these axioms occur\nas theorems in \\cite{schwabhauser}: (A12) is Satz~3.1, (A14) is Satz~3.2, (A15) is Satz~3.5,\n(A16) is Satz~3.7, (A18) is Satz~5.1, and (A17) is Satz~5.3. \nHence, our research program of proving all the \ntheorems in Szmielew's development using {\\textsc OTTER\\ } systematically captured these results as \nsoon as we reached Satz~5.3. \n\n\\subsection{The five-segment axiom}\nHilbert \\cite{hilbert1899} treated angles as primitive objects and angle congruence\nas a primitive relation, and he took SAS (the side-angle-side triangle congruence principle) \nas an axiom. In Tarski's theory, angles are\n treated as ordered triples \nof points, and angle congruence is a defined notion,\nso a points-only formulation of the SAS principle is required. \n The key idea is Tarski's ``five-segment axiom'' (A5), shown in Fig.~\\ref{figure:5segment}.\n\n \n \\begin{figure}[ht]\n\\TarskiFiveSegmentFigure\n\\caption{The five-segment axiom (A5)}\n\\label{figure:5segment}\n\\end{figure}\n\n\nIf the four solid segments in Fig.~\\ref{figure:5segment} are pairwise congruent, then the fifth (dotted) segments are congruent too.\nThis is essentially SAS for triangles $dbc$ and $DBC$. The triangles $abd$ \nand $ABD$ are surrogates, used to express the congruence of angles $dbc$ and $DBC$.\n By using Axiom A5, we \ncan avoid all mention of angles.\n\n\\subsection{Pasch's axiom} \\label{section:pasch}\nMoritz Pasch \\cite{pasch1882} (see also \\cite{pasch1926}, with an historical appendix by Max Dehn)\n supplied (in 1882) an axiom that repaired many of the defects that \nnineteenth-century rigor found in Euclid. 
Roughly, a line that enters a \ntriangle must exit that triangle.\nAs Pasch formulated it, it is not in $\\forall\\exists$ form. There are two $\\forall\\exists$ versions,\nillustrated in Fig.~\\ref{figure:InnerOuterPaschFigure}. These formulations of Pasch's axiom \ngo back to Veblen \\cite{veblen1904}, who proved outer Pasch implies inner Pasch. \nTarski took outer Pasch as an axiom in \\cite{tarski1959}.\n \n \n\\begin{figure}[ht] \n\\hskip 0.7cm\n\\InnerOuterPaschFigure\n\\caption{ Inner Pasch (left) and Outer Pasch (right). Line $pb$ meets triangle $acq$ in one side.\n The open circles show the points asserted to exist on the other side. }\n\\label{figure:InnerOuterPaschFigure}\n\\end{figure}\n\\smallskip\n\n\\begin{tabular*} {0.7\\textwidth}{@{\\extracolsep{\\fill}}ll}\n${\\bf T}(a,p,c) \\land {\\bf T}(b,q,c) \\ \\rightarrow\\ \\exists x\\,({\\bf T}(p,x,b) \\land {\\bf T}(q,x,a))$& (A7) inner Pasch \\\\\n${\\bf T}(a,p,c)\\land {\\bf T}(q,c,b) \\ \\rightarrow\\ \\exists x\\,({\\bf T}(a,x,q) \\land {\\bf T}(b,p,x))$& outer Pasch \n\\end{tabular*}\\smallskip\n\\smallskip\n\nFor use in \\Ott, we introduce Skolem symbols \n$$ip(a,p,c,b,q) \\qquad \\mbox{ and} \\qquad op(p,a,b,c,q)$$ \nfor the point $x$ asserted to exist. The use of these symbols makes these axioms\nquantifier free. \n\n \n\nTarski originally took outer Pasch as an axiom. In \\cite{gupta1965}, Gupta proved\nboth that inner Pasch implies outer Pasch and that outer Pasch implies inner Pasch,\nusing the other axioms of the 1959 system. \nThe proof of outer Pasch from inner Pasch\nis one of Quaife's four challenge problems.\n \n \\subsection{Dimension axioms}\nWith no dimension axioms, Tarski's geometry axiomatizes theorems that are true in $n$-dimensional\ngeometries for all $n$. For each positive integer $n$, we can specify that the dimension of \nspace is at least $n$ (with a lower-dimension axiom A8${}^{(n)}$), or at most $n$ (with an \nupper-dimension axiom A9${}^{(n)}$). 
The upper-dimension axiom says (in a first-order way) that the set of points\nequidistant from $n$ given points is at most a line. The lower-dimension axiom for $n$ is the \nnegation of the upper-dimension axiom for $n-1$. For the exact statements of these axioms \nsee \\cite{tarski-givant}.\n\nInner and outer Pasch have another advantage over Pasch's original version, besides \nlogical simplicity. Namely, they hold even in 3-space, where Pasch's original version fails.\nThat is, they can be used without an ``upper-dimension'' axiom, whose use is thereby postponed\n a long time in Szmielew.\n\n \n\\subsection{Tarski's parallel axiom (A10)}\nIn Fig.~\\ref{figure:parallelaxiom}, open circles indicate points asserted to exist.\n \\vskip-0.5cm\n \\begin{figure}[ht]\n \\hskip 2cm\n \\TarskiParallelFigure \n \\caption{Tarski's parallel axiom}\n \\label{figure:parallelaxiom}\n \\vskip -1.2cm\n \\end{figure}\n\n\\begin{eqnarray*}\n& {\\bf T}(a,d,t) \\land {\\bf T}(b,d,c) \\land a \\neq d \\ \\rightarrow\\ \\\\\n& \\qquad \\exists x \\exists y\\,({\\bf T}(a,b,x) \\land {\\bf T}(a,c,y) \\land {\\bf T}(x,t,y)) \\qquad ({\\rm A}10) \n \\end{eqnarray*}\n\n \n The hypothesis says that $t$ lies in the interior of angle $a$, as witnessed by $b$, $c$, and $d$.\n The conclusion says that some line through $t$ meets both sides of the angle. Of course this fails\n in non-Euclidean geometry when both $ab$ and $ac$ are parallel to some line $L$ through $t$.\n \n According to \\cite{tarski-givant}, Szmielew preferred to use the ``triangle circumscription \n principle'' (A10${}_2$) as the parallel axiom. Substituting (A10${}_2$) was apparently one of the \n``inessential changes'' made by Schwabh\\\"auser. This principle says that if $a$, $b$, and\n $c$ are not collinear, then there exists a point equidistant from all three. (That point is the \n center of the circle circumscribed about the triangle, hence the name.) 
See \\cite{tarski-givant}\n for a formal statement.\n \n \\subsection{Continuity axioms}\n Axiom (A11) is not a single axiom but an axiom schema, essentially asserting that \n first-order Dedekind cuts are filled. See \\cite{tarski-givant}\n for a formal statement.\n Models of A1-A11 are all isomorphic to planes ${\\mathbb F}^2$\n where ${\\mathbb F}$ is a real-closed field. One can also consider instead of (A11) the axioms of \n line-circle continuity and\/or circle-circle continuity, which assert the existence of intersection \n points of lines and circles, or circles and circles, under appropriate hypotheses. None of \n the continuity axioms are used in the work reported in this paper. Szmielew's development \n proceeds strictly on the basis of A1-A10.\n \n \n\n\n\\section{Completely mechanical methods for finding proofs}\nIn this section we describe, with illustrative examples, the techniques we used \nthat do not involve knowing the steps of a book proof (not \neven the diagram accompanying the proof).\n \n\\subsection{How {\\textsc OTTER\\ } works}\nReaders familiar with {\\textsc OTTER\\ } can skip this subsection. It is not an introduction to {\\textsc OTTER\\ } \nbut an attempt to make the subsequent information about our methodology comprehensible to \nthose who do not have expertise with \\Ott; at least, it should enable such readers to \nmake sense of the input files and proofs we exhibit on the project's website\n\\cite{tarski-archive}.\nFor more information about \\Ott, see \\cite{wos-fascinating}.\n\n{\\textsc OTTER\\ } is a clause-based resolution theorem prover. One writes {\\tt -A} for the \nnegation of $A$. One writes {\\tt A | B} for disjunction (``or''). One does not \nwrite ``and'' at all, but instead one enters the two clauses separately. One writes\n$A \\ \\rightarrow\\ B$ as {\\tt -A | B}. Similarly one writes $P \\land Q \\ \\rightarrow\\ R$ as \n{\\tt -P | -Q | R}. \n\nVariables begin with {\\tt x,y,z,w,u,v}. 
Names beginning with any other letter are constants.\nA resolution theorem prover requires the goal to be negated and entered as clauses.\nFor example, to prove $A(x) \\ \\rightarrow\\ \\exists y\\, B(x,y)$, we would enter the following clauses:\n\n\\begin{verbatim}\nA(c).\n-B(c,y).\n\\end{verbatim}\nAfter proving this theorem, if we want to use it to prove the next theorem, we invent \na new Skolem symbol {\\tt f} and enter the theorem as\n{\\tt -A(x) | B(x,f(x))}.\n \n \nThe input to {\\textsc OTTER\\ } is contained in two lists, the ``set of support'' (sos) and \n``usable''. The fundamental run-time loop of {\\textsc OTTER\\ } moves a clause from sos to usable,\nand then tries to use one of the specified inference rules to generate new clauses from \nthat clause and other clauses in usable. If conclusions are generated, {\\textsc OTTER\\ } has to decide\nwhether to keep them. If it decides to keep them, they are placed on sos, where they\ncan eventually be used to generate yet more new clauses. If the empty clause is generated,\nthat means a proof has been found, and it will be output. \n\nThe fundamental problem of automated deduction is to avoid drowning in a sea of useless conclusions\nbefore finding the desired proof. One tries to get control over this by assigning ``weights''\nto clauses, adjusting those weights in various ways, and using them to control both which \nclauses are kept and which clause is selected from sos for the next iteration of the loop.\nBy default: the weight of a clause is the number of its symbols; the next clause selected\nis the lightest one in sos; and clauses are kept if their weight does not exceed a \nparameter {\\tt max\\_weight}. 
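The default loop just described (move the lightest sos clause to usable, infer from it, keep only conclusions below the weight limit) can be sketched in a few lines of Python. This is a toy illustration of our own, not OTTER's actual implementation; the names {\tt weight}, {\tt given\_clause\_loop}, and {\tt infer} are ours:

```python
# Toy sketch of the given-clause loop (ours, not OTTER's code).
# A clause is a frozenset of literal strings; weight = symbol count.
import re

def weight(clause):
    """Default weight: the total number of symbols in the clause."""
    return sum(len(re.findall(r"[A-Za-z_]\w*|\S", lit)) for lit in clause)

def given_clause_loop(sos, usable, infer, max_weight=16, max_iters=10000):
    """Repeatedly move the lightest sos clause to usable and infer from it.
    New clauses are kept only if their weight is at most max_weight.
    Returns True as soon as the empty clause (a refutation) is derived."""
    sos, usable = list(sos), list(usable)
    for _ in range(max_iters):
        if not sos:
            return False                     # search space exhausted
        given = min(sos, key=weight)         # the lightest clause is selected
        sos.remove(given)
        usable.append(given)
        for new in infer(given, usable):
            if not new:                      # empty clause: proof found
                return True
            if weight(new) <= max_weight and new not in sos and new not in usable:
                sos.append(new)              # kept clauses go back on sos
    return False
```

With a toy {\tt infer} that performs binary resolution on complementary literals, placing the negated goal in sos and the axioms in usable mirrors the set-of-support arrangement we normally used.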
More sophisticated ways of setting the weights have been developed\nover the past decades and are discussed in \\S\\ref{section:hints}.\n The idea is to get the weights of the \nimportant clauses to be small, and then to squeeze down {\\tt max\\_weight} to prevent\ndrowning.\n\nIn addition to techniques involving weighting, there are other ways to control \\Ott's search:\n\\begin{itemize}\n\\item Use a propitious combination of rules of inference. For an introduction to these rules please\nrefer to \\cite{wos-fascinating}. \n\\item One can exert some control over which clause will\nbe selected from sos at the next iteration by using \\Ott's {\\tt pick\\_given\\_ratio}. \n\\item One can exert some control over how the search starts and what kind of proof to seek\n(forward, backward, or bi-directional) by choosing which clauses to put in sos and which \nto put in usable.\n\\end{itemize}\n\n\n\\subsection{Hints} \nPutting a clause \ninto {\\tt list(hints)} causes {\\textsc OTTER\\ } to give that clause, if deduced, a low weight, causing \nit to be retained, even if its default weight would have been so large as to cause it to be \ndiscarded. One has options (specified at the top of an {\\textsc OTTER\\ } file) to cause this weight \nadjustment to apply to clauses that match the hints, or subsume the hints, or are \nsubsumed by the hints. We use hints in the implementation of several strategies, \nincluding the subformula strategy and lemma adjunction, which are described \nin \\S\\ref{section:subformulastrategy} and \\S\\ref{section:hints}.\nThe technique of hints was invented by Veroff \\cite{veroff1996} and later incorporated\ninto \\Ott. 
As a technical note: when using hints, one should always include these lines,\nwithout which the hints will not have the desired effect.\n\n\\begin{verbatim}\nassign(bsub_hint_wt,-1).\nset(keep_hint_subsumers).\n\\end{verbatim}\n\nThere is also a technical difference in {\\textsc OTTER\\ } between {\\tt list(hints)} and {\\tt list(hints2)};\n researchers should consult the manual before using either. \nAnother similar technique is known as {\\em resonators}. This is more useful when one has\na proof in hand and wishes to find a shorter proof. For the exact differences between \nhints and resonators, see \\cite{wos2003}, p.~259.\n\n\\subsection{Choice of inference rules and settings}\nWe mentioned that one of the ways {\\textsc OTTER\\ } can be controlled is through a \npropitious choice of inference rules and settings. We tinkered with our choices often,\nin the hope that a different choice would be better. We did not find that one \nchoice was always best, but a carefully chosen default choice served us well \nin most cases. Our default choice was to use all three of \nhyperresolution, binary resolution, and unit resolution. In addition, \nwe always used paramodulation (for equality reasoning). \n We {\\em often} \nchanged the values of the parameters {\\tt max\\_weight} and {\\tt max\\_proofs}.\nIn particular, when using lemma adjunction, we needed {\\tt max\\_proofs} large enough \nto accommodate many intermediate goals (on the passive list); when using many hints,\nwe chose {\\tt max\\_weight} to be 8 or even smaller; but when searching a ``raw''\nsearch space, we often used 16, 20, or 24, and occasionally even higher. \nIf {\\tt max\\_weight} is too high, then one may be swamped with too many conclusions,\nbut if {\\tt max\\_weight} is too low, one may throw away a clause that was needed. 
Novices\nmay fail to realize another danger: one might derive a useful clause, and keep it, \nbut if, say, its weight is 10 and there are many clauses of smaller weight, your \nuseful clause may ``never'' become the given clause and give rise to the further \nconclusions that are needed to complete the proof.\n\nOccasionally we changed the values of \n{\\tt max\\_distinct\\_vars}, but usually without much effect. Rarely do the \nproofs we are seeking contain more than one variable per clause. We think \nthis is because Szmielew has already organized the theorems in such a way that we \ndo not need to prove many extra lemmas; a derived clause with variables amounts to a lemma.\nWe performed experiments in which {\\tt max\\_distinct\\_vars} was given increasing values,\nholding other settings fixed. There were only about four theorems that required \nit to have the value 4, but setting it lower did not speed up finding the other proofs\nnoticeably. \n\n\nWe used paramodulation for equality reasoning. Wos believes that\nfailing to use paramodulation was an important reason for the limited success\nof the experiments he made in the 1980s in Tarskian geometry; but we also note\nthat Quaife writes that paramodulation was available for use but seldom actually used in his proofs.\n\n\\paragraph{Demodulation.} A {\\em demodulator} is an equation, used left-to-right \nto rewrite terms to ``simpler forms.'' We used demodulators in connection with \ndiagram equations, but usually we could also obtain proofs without those demodulators.\nIn other subjects, Wos has made use of the technique of ``demodulating to junk'',\nin which unwanted classes of term or types of conclusions are demodulated to an atom {\\tt junk},\nwhich is eliminated by giving it a high weight. But this technique was not useful in \nour work on geometry. Only one of the two hundred theorems of Tarskian geometry \nis an equation. 
That theorem says that reflection in a point is an involution; that is, \nthe reflection of the reflection of a point is the original point. We write $s(p,x)$ for \nthe reflection of $x$ in $p$; then the equation is $s(p,s(p,x)) = x$.%\n\\footnote{There is a similar theorem, Satz 10.5, about reflection in a line, but that\ntheorem is not quite equational, because it requires that the two points determining the \nline be distinct.} It was \n useful to make this equation a demodulator; but since there is only {\\em one}\nequational theorem, demodulation is not an important technique in the development\nof Tarskian geometry based on Szmielew's manuscript.\n\nWe did, however, make some use of demodulators in the process of lemma adjunction.\nFor example, $midpoint(x,y) = midpoint(y,x)$ is a useful equation (which does not occur\nin Szmielew). Also, in several situations it is natural to rewrite \n(not terms but) formulas, for example, we would naturally regard all the following \nas equivalent ways to express the congruence of two segments: $ab = cd$, $ab=dc$,\n$ba = cd$, $ba = dc$, $cd = ab$, $dc=ab$, $cd = ba$, and $dc = ba$. Similarly \nfor the 4-ary predicate $ab \\perp cd$. {\\textsc OTTER\\ } is able to use demodulators for \nformulas, and setting the {\\tt lrpo} flag tells it to use them to demodulate terms \nto what amounts to alphabetical order; thus we would keep $midpoint(a,b)$ unchanged\nand demodulate $midpoint(b,a)$ to $midpoint(a,b)$. Technically, however, the use of \ndemodulators at a formula level is not permitted in a first-order development, so \nin the end we tried (successfully) to eliminate their use, mostly by hand-editing the \nhints resulting from proof steps using demodulators. Like Gauss, we removed our \nscaffolding, so this paragraph is the only trace of that technique. 
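The formula-level normalization just described can be illustrated with a small sketch (our own code, not OTTER's demodulation machinery): map each of the eight spellings of a segment congruence to a single canonical atom.

```python
# Sketch (ours): normalize the equidistance atom E(a,b,c,d), i.e. ab = cd,
# to one canonical representative of its eight-element equivalence class:
# sort within each segment, then sort the two segments.

def canonical_E(a, b, c, d):
    p = tuple(sorted((a, b)))   # ab = ba
    q = tuple(sorted((c, d)))   # cd = dc
    p, q = sorted((p, q))       # ab = cd  iff  cd = ab
    return ("E",) + p + q

# all eight spellings of "segment ab is congruent to segment cd"
forms = [("a","b","c","d"), ("a","b","d","c"), ("b","a","c","d"),
         ("b","a","d","c"), ("c","d","a","b"), ("c","d","b","a"),
         ("d","c","a","b"), ("d","c","b","a")]
```

The same idea applies to the 4-ary perpendicularity predicate.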
\n\n \n\\paragraph{ Hot list.} The hot list is a feature of {\\textsc OTTER\\ }\\ that works as \nfollows: Whatever clauses are on the hot list are applied immediately to newly deduced\nformulas. This is useful in the following situation: suppose you think that clause\n$P$ should be derived, and then clause $Q$ should be derived from $P$ by axiom 7. \nYou see that $P$ is derived, but then $P$ never becomes the given clause, because \nit has a fairly high weight and the sea of conclusions contains many clauses of \nlower weight. If you put Axiom 7 on the hot list, then $Q$ will be deduced immediately\nwhen $P$ is deduced, without having to wait until $P$ is the given clause.\nWe used this technique in perhaps fifty out of two hundred hand-crafted input \nfiles. Perhaps it wasn't really necessary. In the mechanically generated\nfiles described in \\S\\ref{section:mechanicalinputfiles}, the hot list disappears, but its influence lingers, because\nthe steps of the proof found originally using a hot list remain as hints. \n \n\\paragraph{Set of support.} Normally, we placed the axioms and previously proved theorems\non the list of ``usable'' clauses, and the negated form of the current theorem on the \nset of support. One of the things that can be tried is putting \neverything on the set of support. This goes against the basic idea behind the set of support (to\nprevent deducing many consequences of the axioms that are irrelevant to the current goal),\nbut it has worked for Wos in some situations. In geometry, however, it \nworked on exactly one theorem out of two hundred, namely Satz 3.1.\n \n\n\\paragraph{ Level saturation.} ``Level saturation''\n refers to a gradual increase in allowed complexity.\nThe program starts\nby allowing the simplest next conclusions, for us of weight 1, then 2, 3, $\\ldots$,\n and continues, run after run. This will be useful only if there is a very short proof\n that for obscure reasons does not come out with more normal settings. 
Once or twice\n we found a short proof this way, but we always found a proof later with more normal settings,\nso this was not actually a useful technique.\n \n \n\\paragraph{Ratio strategy.} The pick-given ratio in {\\textsc OTTER\\ } alters the basic strategy \nof selecting the lowest-weight clause from the set of support. For example, if this \nratio is set to 4 (our default), then after four clauses have been selected by weight,\nthe oldest one on the set of support is chosen, regardless of weight. We sometimes,\nbut rarely, had success by setting this ratio to 2.\n\n\\section{Using the diagram} \\label{section:diagrams}\nIn this section, we consider the method of ``diagrams'', which involves looking at the diagram \nin the book accompanying the proof, but not the steps of the proof itself.\n\nWe learned this technique from \nQuaife \\cite{quaife1992}. Suppose we are trying to find an $x$ such that \n$A(a,b,x)$. If we let a theorem prover search, without making any attempt to guide the search, \nessentially it will generate all the points that can be constructed from $a$ and $b$ (and \nother points in the problem or previously constructed in clauses that were kept)\nby the Skolem functions for segment extension and inner Pasch. Imagine trying to prove \na geometry theorem that way: just take your ruler, draw all the lines you can between \nthe points of the diagram, label all the new points formed by their intersections (that is\nby using \nPasch), and construct every point that can be constructed by extending a segment you have by \nanother segment you have. See if any of the new points is the point \nyou are trying to construct. If not, repeat. You will generate a sea of useless points,\neven if you discard those with too many construction steps. \n\nTo guide {\\textsc OTTER\\ } (or any other theorem prover)\n to construct the right points, we ``give it the diagram'' by \ndefining the points to be constructed, using Skolem functions. 
For example, \nconsider the ``crossbar theorem'' (Satz 3.17). Figure~\\ref{figure:crossbar}\n shows the diagram and the input (not shown are the axioms A1-A6 \nand the previous theorems, which are placed in {\\tt list(usable)}).\nThe two lines defining $r$ and $q$ are what we call ``giving {\\textsc OTTER\\ } the diagram.''\nWith those lines present, {\\textsc OTTER\\ } finds a proof instantly. Remove them, \nand {\\textsc OTTER\\ } does not find a proof (at least, not in ten minutes).\n\n\\begin{figure}[ht]\n\\hskip 0.3in\n\\begin{verbatim}\n list(sos).\n T(a,s,c).\n T(b,t,c).\n T(a,p,b).\n -T(p,x,c)|-T(s,x,t).\n r = ip(c,t,b,a,p).\n q = ip(c,s,a,t,r).\n end_of_list.\n\\end{verbatim}\n\\vskip-1.6in\n\\CrossBarFigure\n\\caption{The crossbar theorem asserts that $q$ exists, given the other points\nexcept $r$.\nTo prove it, we construct first $r$ and then $q$, using inner Pasch twice. Here {\\tt ip}\nis the Skolem function for the inner Pasch axiom.}\n\\label{figure:crossbar}\n \n\\end{figure}\n \nThe reason this works is that $q$, being a single letter, has weight 1, so \nclauses involving $q$ have much lower weight than would the same clauses with \n$q$ replaced by $ip(c,s,a,t,r)$, which has weight 6. A clause involving the \nterms defining both $q$ and $r$ would have weight at least 14, so it would \nnot be considered very soon, and meantime many other points will be constructed.\n\nIn this simple example, if one puts all clauses into sos and nothing in usable, \n{\\textsc OTTER\\ } does eventually find a proof without being given the diagram; but it is certainly \nmuch slower than with the diagram. In more complex examples, we \nthink this technique is essential. Here, for example, are the lines \nfrom the input file for Satz~5.1, the outer connectivity of betweenness, \ndescribing the two complicated diagrams on pp.~39--40 of \\cite{schwabhauser}:\n\n \n\\begin{verbatim}\n c1=ext(a,d,c,d). 
\n d1=ext(a,c,c,d).\n p=ext(c1,c,c,d).\n r=ext(d1,c,c,e).\n q=ext(p,r,r,p).\n b1 = ext(a,c1,c,b).\n b2 = ext(a,d1,d,b).\n e = ip(c1,d,b,d1,c).\n\\end{verbatim}\n \n\\section{The subformula strategy} \\label{section:subformulastrategy}\nWos invented the subformula strategy sometime in 2007--2008 and \nused it to study single axioms in relevance logics. He wrote up this work \nin \\cite{wos-notebook2008}. \n The subformula strategy consists of this:\n\n\\begin{quote}\n{\\em For every literal occurring in a clause in the set of support,\nput both the literal and its negation into the hints list.}\n\\end{quote}\n\n\\noindent\nSince clauses that subsume a hint are kept and given low weight, the result is that \nwhen any subformula of a clause in the set of support is deduced, it is kept, and \nmoreover, it will soon be used as the ``given clause'' to deduce more conclusions.\nThe idea is that the proof we are seeking is likely to involve such clauses, so \nwhen we deduce them, we are getting closer to a good result.\n\nThis strategy is particularly easy to implement in a modern programming language \nwith a string manipulation function like PHP's {\\tt explode} or Python's {\\tt split}.\nNo parsing is required. Just explode each clause in the set of support on the \nsymbol for disjunction, resulting in an array of literals. Negating a literal is \nalso a simple string manipulation: for example, in PHP one can use {\\tt str\\_replace} to \nreplace {\\tt !=} by {\\tt =} and vice versa. To compute the required hints and write \nthem into an already-generated input file takes less than 30 lines of code.\n\nThis strategy is completely general; it can be applied to any problem and \nrequires no advance knowledge of a proof. Perhaps not even its creator \nnoticed its generality at the time. 
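The computation just described can be written out as follows. This is our own minimal sketch (the function names are ours, and a real implementation would also have to respect OTTER's comment and whitespace conventions):

```python
# Minimal sketch (ours) of the subformula strategy: split each sos clause
# on "|" and emit every literal together with its negation as a hint.

def negate(lit):
    """Negate a literal by string manipulation alone."""
    if "!=" in lit:
        return lit.replace("!=", "=")
    if "=" in lit:
        return lit.replace("=", "!=")
    return lit[1:] if lit.startswith("-") else "-" + lit

def subformula_hints(sos_clauses):
    hints = []
    for clause in sos_clauses:
        for lit in clause.split("|"):            # "explode" on disjunction
            lit = lit.strip().rstrip(".")
            for h in (lit + ".", negate(lit) + "."):
                if h not in hints:
                    hints.append(h)
    return hints
```

For the crossbar clause {\tt -T(p,x,c)|-T(s,x,t).} this yields the four hints {\tt -T(p,x,c).}, {\tt T(p,x,c).}, {\tt -T(s,x,t).}, and {\tt T(s,x,t).}.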
\n We failed to use\nthe subformula strategy in 2012--2014, and at that time we found no proofs of length \ngreater than 30 in a completely mechanical fashion, that is, without any reference to \na book proof (although we did find proofs of all the Tarski theorems we tried,\nby methods discussed in \\S\\ref{section:hints} that did start by reference to a book proof).\n\nWhen the light bulb came on over our heads, we used the subformula strategy \nand found several amazingly long proofs completely mechanically, as described in \n\\S\\ref{section:results} and \\S\\ref{section:hardtheorems}. We kept {\\tt max\\_weight}\nas low as possible, typically 8. \n\n\n\\section{Methods that require using steps of the book proof} \\label{section:hints}\nIn this section we describe a method, or perhaps combination of methods, \nthat starts with the steps of a book proof (thus, human steps rather than \nmachine steps) and ends with an {\\textsc OTTER\\ } proof.\n\n\n\nOur methodology was as follows: when trying to prove a theorem, we prepared an \ninput file with the negated goal in {\\tt list(sos)} and the axioms and previously proved theorems \nin list(usable), and a default choice of inference rules. If this \ndid not find a proof, and the proof in Szmielew had a diagram, we supplied the diagram.\nThis section concerns what we did with theorems that could not be proved \nfrom these input files alone. Namely, we supplied some hints based on the book proof.\nHere are the details about how this idea led to proofs.\n\n\n\\paragraph{From the passive list to hints.}\nWe tried supplying some intermediate goals, namely, \nsome or even all of the steps of the book proof.\n\nThese steps were supplied in two places: we put them in the hints list, \nand we also put their negated forms in {\\tt list(passive)},\nwith answer literals having numbers for those goals. 
The passive list is used\nas follows: When a clause is generated that conflicts with an element of the passive\nlist, a proof is generated. After that, proof search continues.\nHence, when this file is run,\none sees which of the intermediate goals are being proved. Among other things, this \nshowed us where {\\textsc OTTER\\ } was ``getting stuck.'' But even though we found no proof of the \nmain goal, we had proofs of some intermediate goals. We converted these proof steps\nto hints and ran {\\textsc OTTER\\ } again. Sometimes more intermediate goals would be reached. \nOne can tinker with {\\tt max\\_weight}: sometimes a smaller {\\tt max\\_weight} may keep \none from drowning, and sometimes a larger {\\tt max\\_weight} may allow a vital clause \nto be kept. \nAfter adding the new hints, the previous proofs were found immediately, and perhaps\nmore intermediate goals could be found, resulting in still more hints.\n\nWith luck this process would converge to a proof.\n\n\\paragraph{Lemma adjunction and hint injection.} Often, especially with more difficult theorems,\nthe process described above would lead not to a proof but to a situation in which \nsome of the book steps would not be proved, even though others would be. In that\ncase, our next move was to place one or more of these steps, or their negations, \nin the set of support. If this generated a proof (of the original goal, or of \nmore intermediate steps), then again we put those steps in as hints.\nWith those additional hints, then sometimes we could remove (one, some, or even all)\nof the extra clauses\nwe added to the set of support and still get a proof. If we did, we then \nreplaced the hints that came from the proof obtained by using those additional clauses,\nenabling that same proof to be found more easily. \n\n\\paragraph{Divide and conquer.} \nIf all the extra steps can be removed from the set of support, then we have the \nproof we were seeking. 
But if not, then we wind up, for example, with a \nproof of the theorem from an extra hypothesis $P$. At that point, we replace\nthe extra clause $P$ with its negation $-P$ in the set of support and try \nagain to find a proof, using the techniques described above, accumulating the steps\nof any (partial) proofs found in the hints list. In practice we could \nalways succeed in finding a proof from the extra hypothesis $-P$. Now we have\n two proofs, one from $P$ and one from $-P$. If there had \nbeen two extra hypotheses $P$ and $Q$, we might end up with four proofs, \none from $P,Q$, one from $P,-Q$, one from $-P,Q$, and one from $-P,-Q$. \nAt that point we have ``reduced the problem to combining the cases.''\n\n\n\n\\section{Proof by cases} \\label{section:cases}\nProof by cases turned out to be difficult. A number of proofs\nin the book proceed by cases, for example according to whether $p$ lies on line $L$\nor does not lie on line $L$. Moreover, the lemma adjunction method \noften resulted in uncovering an intermediate step $A$ such that if we added\n$A$ as an additional hypothesis, we could find a proof, and we could also \nprove that step, by adding $-A$ as an additional hypothesis. That amounts\nto a proof by cases according as $A$ or $-A$. It may seem that we must be \nnearly finished at that point. But we found\nthat in such situations it was often not trivial, and sometimes\nnot possible, to combine the two proofs into a single proof with no \nadditional assumptions. In this section, we discuss this difficulty.\n\n Let us suppose that \nwe have a proof of the original goal from $A$ and another from $-A$.\nWhat do we do?\nWe could of course just formulate two lemmas: $A$ implies the goal,\nand $-A$ implies the goal. Then (in a third file), we could prove the goal from those two \nlemmas. 
(Possibly just one of those lemmas would be enough.)\n Sometimes in a mathematics book, that is what is really done, even though \nthe theorem is stated as a single theorem; particularly when there is a certain \nsymmetry to the theorem, instead of formulating a lemma explicitly, one simply says,\n``without loss of generality we can assume.'' Still, we tried to eliminate the \ncases and give a proof in a single file, when we could do so. We now \ndescribe our technique for that. \n\n\\subsection{Tautology adjunction} \n We could sometimes succeed by simply\nadding a clause {\\tt A | -A}, where the proof needs to proceed by cases on $A$.\n{\\textsc OTTER\\ } seems to prefer constructive proofs! This technique is \ncalled ``tautology adjunction'' by Wos, who used it \ndecades ago in proving that subgroups of index 2 are normal. We used this \nin many input files. Here we discuss just one example. The inner connectivity\nof betweenness (A17 above, Satz~5.3 in Szmielew) is derived as an \neasy corollary of Satz~5.1, which is\n$$ a \\neq b \\land {\\bf T}(a,b,c) \\land {\\bf T}(a,b,d) \\ \\rightarrow\\ {\\bf T}(a,c,d) \\lor {\\bf T}(a,d,c).$$\n The natural \nway to formulate {\\tt list(sos)} for this problem would be\n\n\\begin{verbatim}\n a != b.\n T(a,b,c).\n T(a,b,d).\n -T(a,c,d).\n -T(a,d,c).\n\\end{verbatim}\n\n\\noindent\nOf course, that does not suffice to find a proof. So, we added the \ndescription of the diagram, as given in \\S\\ref{section:diagrams}. Unfortunately\n{\\textsc OTTER\\ } could still not find a proof, even with hints. \n\nThe proof in Szmielew proceeds by cases. The methodology we followed \nin such cases was this:\n\n\\begin{itemize}\n\\item Add one case to sos, e.g. {\\tt c=c1}. ({\\tt c1} is a constant from the diagram, above.) \n\\item If we find a proof, add the steps of that proof as hints.\n\\item Remove that case, and add its negation, e.g. 
{\tt c != c1}.\n\\item If we find a proof, add its steps also as hints.\n\\item Now remove both cases, and add their disjunction: {\tt c = c1 | c != c1}.\n\\end{itemize}\n\nOften we could find a proof. The example at hand required two divisions into cases \n(so tautology adjunction was applied recursively).\n The first is the case whether {\tt c=c1} or not, \nand the second whether {\tt d1=e} or not. Here \none can see in the two commented lines in {\tt list(sos)} the \ntraces of completing the last argument by cases. \n\n\\begin{verbatim}\n % d1 = e.\n % c = c1.\n d1=e | d1!=e.\n c = c1 | c != c1.\n\\end{verbatim}\n\nWe do not mean to imply that this was all there was to proving Satz~5.1. This was just \nthe last difficulty. By that time, the input file already contained a long list of hints\nobtained by methods described above. The final proof had 127 steps.\n\n\\subsection{Remarks on tautology adjunction}\nSuppose we have a resolution proof of contradiction from assumptions $L$ \nand the additional assumption $P$, and another proof \nfrom $L$ together with $-P$. Then if we form the list of all clauses $Q | -P$, where $Q$\nis a clause in the proof from $P$, we arrive at the conclusion $-P$, where we formerly\narrived at a contradiction. Then we can continue with the proof of contradiction from $-P$.\nThis method, however, results in a proof only if the original proof from $P$ proceeded\nby binary resolution. If instead it used hyperresolution, then the resulting clauses \n$Q | -P$ do not form a proof, as the steps originally made by hyperresolution \nare no longer legal hyperresolution steps. Steps by paramodulation might also be \nspoiled if the terms involved in the paramodulation occur in $P$. These observations \nexplain why it can be difficult to combine cases. 
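\nTo make the lifting transformation concrete, here is a minimal made-up example (our own illustration, not taken from Szmielew). Suppose the base assumptions are\n$$ L \\ = \\ \\{\\ -P \\lor Q, \\quad -Q, \\quad P \\lor R, \\quad -R\\ \\}. $$\nThe refutation from $L$ together with $P$ resolves $P$ against $-P \\lor Q$ to obtain $Q$, \nthen $Q$ against $-Q$ to obtain the empty clause. Attaching $-P$ to the clauses that depend \non $P$ turns this into a derivation of $-P$ from $L$ alone: the first step now yields \n$Q \\lor -P$ (which is just the input clause $-P \\lor Q$), and the second step yields \n$-P$ instead of the empty clause. Continuing with the refutation from $L$ together with \n$-P$ (resolve $-P$ against $P \\lor R$ to obtain $R$, then $R$ against $-R$ to obtain the \nempty clause) gives a single binary-resolution refutation from $L$. Each lifted step is \nstill a legal binary resolution; it is exactly this property that fails when a step was \nmade by hyperresolution.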
\n\nAt least we would like clauses of the form $Q | -P$ to be kept if they are deduced.\nTo achieve that when the $Q$ are in the hints list, we need (in \\Ott) to use {\\tt hints}\ninstead of {\\tt hints2}, and {\\tt assign(fsub\\_hint\\_wt,-1)} to get clauses kept that are \nsubsumed by hints. Then we set {\\tt max\\_weight} very low, perhaps 6 or 8, which might be \nenough to allow the clauses $Q | -P$ to be deduced (in several steps) by binary resolution.\n Sometimes this approach worked, but {\\tt hints} is a {\\em lot} slower than \n{\\tt hints2}. (In Prover9, {\\tt hints} does not exist; what is called {\\tt hints} is \nsimilar to {\\tt hints2} in \\Ott.) Moreover, it did not always work (e.g., on Satz~12.1),\nprobably because some of the required inferences by binary resolution contain clauses that \nare not in the hints list, and so their weight is too large. In principle there must be \na setting of {\\tt max\\_weight} and a running time that produces the desired proof, but \nit sometimes eluded us.\n\nIn principle it should be possible to implement an algorithm for combining two proofs\n(one from $P$ and one from $-P$) and producing a valid proof by binary resolution and paramodulation.\nThen the steps of that proof could be injected as hints. The resulting proof \nwould not rely for its correctness on the algorithm for combining proofs, since if the \nhints work, it doesn't matter where they came from. But we have not \ntried to implement such an algorithm. We note, however, that such an algorithm could \nbe implemented ``on top of'' existing theorem provers, without modifying the theorem prover,\nusing PHP scripts (or scripts written in some other language). \n\n\\section{The challenge problems} \\label{section:results}\n\nAll the input files and resulting {\\textsc OTTER\\ } proofs that we found are posted \non the web at \\cite{tarski-archive}. 
Here\nwe list some of the more difficult proofs we found, as well as some of interest\nfor reasons other than difficulty.\n\n\\subsection{Properties of betweenness}\nIn \\S\\ref{section:betweenness}, we listed six difficult theorems (A12-A18, except A13),\neach of which Tarski originally took as an axiom. They must have been fairly\ndifficult, since even Tarski did not notice that they could be eliminated as axioms.\n(A13 disappears when equality axioms are included; in 1926 equality axioms were not\nincluded.)\nAll six occur as theorems\nin \\cite{schwabhauser}. \nWe found {\\textsc OTTER\\ } proofs of all those theorems, several of them short in spite\nof the perceived difficulty of the theorems. \nTable~\\ref{table:1} gives the\nlength of the proofs we found, which is perhaps some indication of the relative \ndifficulty of finding them. We found the short proofs completely mechanically.\nSatz~5.1, with 121 steps, is a theorem from Gupta's Ph.D. thesis \\cite{gupta1965},\nand it is the first theorem in Szmielew's development that we could not prove \ncompletely mechanically. 
We had to use lemma adjunction and hint injection.\n\n\n\n\\begin{table}\n\\caption{Proofs found for some of Tarski's original axioms}\n\\label{table:1}\n\\center{\n\\begin{tabular}{l l l r l}\nA12 \\ \\ & Satz 3.1 \\ \\ & Reflexivity for betweenness & 4 steps \\\\ \nA14 & Satz 3.2 & Symmetry for betweenness & 4 steps \\\\\nA15 & Satz 3.5 & Inner transitivity & 4 steps\\\\\nA16 & Satz 3.7 & Outer transitivity & 16 steps\\\\\nA17 & Satz 5.3 & Inner connectivity & 15 steps\\\\\nA18 & Satz 5.1 & Outer connectivity & 121 steps\n\\end{tabular} }\n\\vskip -0.3cm \n\\end{table}\n\nAnother theorem about betweenness occurs as a \nchallenge problem in \\cite{wos1988}, namely, the ``five-point theorem'':\n$$ {\\bf T}(z,w,v) \\land {\\bf T}(z,y,v) \\land {\\bf T}(w,x,y) \\ \\rightarrow\\ {\\bf T}(z,x,v).$$\nThis theorem does not occur in Szmielew's manuscript, \nbut it has a proof with 6 steps using the theorems mentioned above,\nfound completely mechanically. \n \n\n\\subsection{Midpoints, perpendiculars, and isosceles triangles}\nThe ``midpoint theorem'' asserts that every segment has a midpoint. The traditional Euclidean \nconstruction involves the intersection points of two circles, but we are required to \nprove the theorem from A1-A9. (Not even the parallel axiom A10 is to be used). \nThis is a difficult problem, and was apparently not solved until Gupta's 1965 thesis\n\\cite{gupta1965}. Two important preliminary steps are the erection of a perpendicular\nto a line at a given point, and the ``Lotsatz'', which says we can drop a perpendicular\nto a line from a point not on the line. Remember this must be done without circles!\n A clever observation of Gupta was \nthat one can fairly easily construct the midpoint of $ab$ if $ab$ is the base of an isosceles\ntriangle (only two applications of inner Pasch are needed). This plays a key role in \nthe proof of the Lotsatz. The two theorems on perpendiculars are used to construct\nthe midpoint. 
Of course, once we have midpoints and perpendiculars, it is trivial to \nshow that every segment is the base of an isosceles triangle; that theorem does not \neven occur explicitly in \\cite{schwabhauser}. An important lemma used in the proofs\nof these theorems is the ``Krippenlemma'', a key result in Gupta's thesis. For a diagram \nand formal statement, see Lemma~7.22 of \\cite{schwabhauser}, pp.~53--54. \nTable~\\ref{table:2} shows the lengths of our proofs of these theorems, obtained using lemma adjunction and hint injection.\n \n\\begin{table}[ht]\n\\caption{Proofs of Gupta's theorems found using lemma adjunction and hint injection}\n\\label{table:2}\n\\center{\n\\begin{tabular}{l l r l}\n Satz 7.22 \\ \\ & Krippenlemma & 96 steps\\\\\n Satz 7.25 & Base of isosceles triangle has a midpoint & 113 steps\\\\\n Satz 8.18 & Lotsatz: there is a perpendicular to a & \\\\\n & line from a point not on the line & 227 steps \\\\\n Satz 8.21a & There is a perpendicular to a line through \\\\\n & a point on the line on the opposite side from & \\\\\n & a given point not on the line. & 106 steps \\\\\nSatz 8.24b & Given segment $ab$ and perpendiculars $ap$ and $qb$, \\\\\n& and point $t$ on line $ab$ between $p$ and $q$, \\\\\n&with $ap\\le qb$, then segment $ab$ has a midpoint. & 201 steps \\\\\nSatz 8.22 & Every segment has a midpoint & 22 steps\n\\end{tabular} }\n\\vskip-0.3cm\n\\end{table}\n\\medskip\n\nWe obtained a 108-step proof of the Krippenlemma using the subformula strategy, which \nworks without reference to the steps of the book proof. It took 3,189 seconds to find.\nWe were surprised to find such a long proof completely mechanically! 
\nNone of the other theorems listed here could be \nproved by the subformula strategy in one hour, although, \nas described in \\S\\ref{section:hardtheorems}, we did find several more proofs of \nother difficult theorems.\n\n\n\n\\subsection{The diagonals of a rhomboid bisect each other}\nA rhomboid is a quadrilateral whose opposite sides are equal.\nOne of the challenges in \\cite{wos1988} (see p.~214), solved by Quaife,\nwas to prove that the diagonals of a rectangle bisect each other. \nA more general problem is found in Satz~7.21, which asserts that \nif the diagonals of a rhomboid meet, then they bisect each other. Quaife\nalso proved this theorem with \\Ott. \nOur proof of this theorem has 26 steps. \nNote that no upper-dimension axiom is used, so it is necessary to assume the diagonals \nmeet, since otherwise the theorem fails in 3-space. Even though 26 steps is fairly short,\nwe could not find this proof completely mechanically, even with the subformula strategy. \n\n\\subsection{Inner and outer Pasch}\nThe proof that inner Pasch implies outer Pasch (using A1-A6 and A8) was one of the \nmajor results of Gupta's thesis, and enabled Szmielew to replace outer Pasch by \ninner Pasch as an axiom. This theorem was one of Quaife's four challenges. It\nis Satz~9.6 in \\cite{schwabhauser}. The proof posted on our archive is 111 steps,\npreceded by proofs of Satz~9.4 and Satz~9.5, of 57 and 45 steps, respectively. Satz 9.5 is the ``plane\nseparation theorem'', important in its own right. \n\n\\subsection{Hilbert's axioms} \nHilbert's theory can be interpreted in Tarski's, using pairs of \npoints for lines and ordered triples of points for angles and planes. His \naxioms (so interpreted)\n all turn out to be either axioms, theorems proved in \\cite{schwabhauser}, or \nextremely elementary consequences of theorems proved in \\cite{schwabhauser}. 
\n The theorems of \\cite{schwabhauser}\nneeded are 2.3, 2.4, 2.5, 2.8; 3.2, 3.13; 6.16, 6.18; 8.21, 8.22; 9.8, 9.25, 9.26; 11.15, 11.49;\nand Hilbert's parallel axiom is Satz 12.11. We have posted {\\textsc OTTER\\ } proofs of all these\ntheorems. Narboux and Braun have proof-checked Hilbert's axioms in Tarskian geometry, using Coq \\cite{narboux2012}.\n\n\\section{1992 {\\em vs.} 2015: improved strategies or faster computers?}\n \nThe research reported here shows how much progress has occurred in automated reasoning in that \ntime period. Indeed, in the early 1990s, almost all of the theorems cited in this article were out of reach.\nThe question arises whether this advance might be due simply to the increased memory capacity and \nspeed of modern computers. Perhaps Quaife, equipped with one of our computers, would have found \nthese proofs? Perhaps we, constrained to run on a computer from 1990, might not have found them?\nWe argue that this is not the case: the improvements are due not to faster hardware but to \nthe techniques described above, namely generating partial proofs (of intermediate steps) and using \ntheir steps as hints; using the right combination of inference rules and settings; using \ntautology adjunction to help with proofs by cases; divide-and-conquer; and finally,\nthe spectacular success of the subformula strategy. We note that Quaife\ndid have (in fact, invented) the technique of giving {\\textsc OTTER\\ } the diagram. We did not actually try \nto run on a 1990 computer, and we do not doubt that it would have been painful and discouraging;\nbut we think the main credit should go to Veroff's invention of hints, and the uses of hints\ndeveloped by Wos and applied here. Proofs of length more \nthan 40 were out of reach in 2014 for completely mechanical derivations, and could then only \nbe found by lemma adjunction and hint injection. 
But in 2015 several proofs of length more \nthan 90 were found completely mechanically by the subformula strategy, using the same \ncomputer we used in 2014. \n\n\n\\section{Proof checking {\\em vs.} proof finding}\n``Proof checking'' refers to obtaining computer-verified proofs, starting with human-written proofs.\n``Proof finding'' refers to the traditional task of automated deduction, finding a proof by searching a large space of possible proofs, either without possessing a proof or without making use of a known proof.\nOur work shows that this distinction is not as clear-cut as it might seem. \n If we have a proof in hand (whether generated by \nhuman or machine), and we enter its steps as hints, with a low {\tt max\_weight}, we \nforce {\\textsc OTTER\\ } to find a proof containing mostly the same formulas as the proof in the hints. \n(The order of deductions might be different.) This happens immediately if the steps\nwere machine steps, and we have explained above the techniques we use if the original steps\nare human proof steps. By those methods, we can almost always ensure that {\\textsc OTTER\\ } finds \na proof, if we have a proof in hand. One could plausibly claim that this is proof checking,\nnot proof finding. \n\nOn the other hand, today's proof checkers are increasingly incorporating (calls to) external\ntheorem provers to help reduce the number of steps a human must supply. If a theorem prover\ncan get from the existing proof to the next goal, then the returned proof can often \nbe translated into the proof checker's language and checked. For example, the Sledgehammer\nprogram \\cite{paulson2010} is used with Isabelle, and HOL(y)Hammer \\cite{holyhammer}\n is used with HOL Light. \nIn this way, proof checking is incorporating elements of automated theorem proving. 
\nSee \\cite{blanchette2015} for a survey of this research and further references.\n\n\n\\section{Developing an entire theory using a theorem prover}\nWhen we made our first report on this work \\cite{beeson2014-wos}, we \nhad two hundred hand-crafted input files, one for each theorem. These files \nhad been made by cutting and pasting lists of axioms, adding the positive form \n(Skolemization) of the last theorem by hand, and often had been heavily edited\nduring the process of applying our proof-finding methods. There was plenty\nof room for human error in the preparation of these files; and as it turned out \nin 2015, they contained numerous errors; none irreparable, but one wishes \nfor a standard of perfection in computer-generated proofs. \n\nThinking about the sources of error and the means to remedy them, we realized\nthat a major source of error could be the hand-translation from the negated\nform of a theorem (used in the set of support to prove it) to the ``positive form''\nor Skolemization, in which the theorem is entered into subsequent input files \nfor use in proving further theorems. We therefore prepared a ``master list''\ncontaining, for each theorem, the positive and negative forms and, in addition,\nthe ``diagram equations'' defining the diagram for that theorem, if any.\nThe master list also contains the names of Skolem functions to be used in \nSkolemizing that theorem, if the theorem contains (existential) variables.\nWe also optionally included a list of tautologies that were needed to prove \nthat theorem, if we could not eliminate the tautologies.\n\nThe plan was that the master list could then be used to mechanically generate \ninput files. 
The correctness issues would then have been conveniently divided:\n\n\\begin{itemize}\n\\item Is the positive form of each theorem in the master list really the correct\nSkolemization of the negative form?\n\\item Does the diagram entry in the master list have the form of an equation with a new \nconstant on the left? And on the right, does it mention only previously-introduced Skolem functions?\n\\item Is each Skolem function introduced in exactly one theorem? \n\\item Does the master list actually correspond to the statements of the theorems in the book?\n\\item Are the input files being correctly generated?\n\\item Do they all produce proofs?\n\\item And of course: are the proofs produced by {\\textsc OTTER\\ } correct? (This is not a novel\nissue and will not be discussed.)\n\\end{itemize}\n\nThe question whether the diagram in the master list corresponds to the book's\ndiagram is not relevant. If one gets a proof with any diagram whatsoever (meeting the \nabove conditions), it is fine.\nAdding equations defining a new constant is conservative; that is, no new theorems can be \nproved by adding such axioms. However, the meaning of ``new constant'' is not quite\nas obvious as one might think. Consider, for example, the diagram for Satz 3.17.\nThere are two equations:\n$$ e = ip(c,b1,a1,a,p)$$\n$$ d = ip(c,b,a,b1,e)$$\nThe new constants are $d$ and $e$. The equation for $d$ has $e$ on the right, but \nthat is okay. What would be wrong is if the equation for $d$ had $d$ on the right,\nor if the equation for $e$ had $d$ on the right and the equation for $d$ had $e$ on the right.\nWhat we mean by ``new constant'' is ``constant not occurring in the theorem or the \nright sides of the previous diagram equations.''\n\nSimilarly, it does not matter what one puts in the hints list. If one gets a proof,\nit is a proof, regardless of what was in the hints list. 
In our mechanical generation \nof input files, we make use of the (possibly unreliable) proofs we found in 2012.\nIf we cannot find a proof without hints, then we use the steps of an existing (alleged)\nproof as hints. Thus, the sources for generating input files are \n\n\\begin{itemize}\n\\item The master list\n\\item The existing (alleged) proofs\n\\item The program that produces the input files from the master list and proofs\n\\end{itemize}\n \n\n\\subsection{The master list}\nThe master list is a text file, a piece of code in the PHP programming language.\nThat file begins with class definitions of classes called {\\em Theorem}, {\\em Definition},\nand {\\em Axiom}. The class {\\em Theorem}, for example, looks like this (but \nfor simplicity we here omit the constructor function). (Those unfamiliar with PHP\nshould just ignore {\\em var} and the dollar signs.)\n\n\\begin{verbatim}\nclass Theorem\n{ var $name;\n var $PositiveForm; \/\/ array of clauses \n var $NegatedForm; \/\/ array of clauses\n var $Diagram; \/\/ array of diagram equations\n var $SkolemSymbols = \"\"; \/\/ array of Skolem symbols introduced\n var $Cases = \"\"; \/\/ cases we used to split the problem, if any.\n}\n\\end{verbatim}\n\n\\hyphenation{Tarski-Theorems}\n\\noindent\nAfter that come arrays defining {\\em TarskiAxioms}, \n{\\em TarskiDefinitions}, and {\\em TarskiTheorems}. 
\nThe {\\em TarskiTheorems} array is the heart of the master list.\nIt starts off like this:\n\n\\begin{verbatim}\n$TarskiTheorems = array(\n\tnew Theorem( \"Satz2.1\", \n\t\tarray(\"E(xa,xb,xa,xb)\"), \n\t\tarray(\"-E(a,b,a,b)\")\n\t), \n\tnew Theorem( \"Satz2.2\", \n\t\tarray(\"-E(xa,xb,xc,xd) | E(xc,xd,xa,xb)\"),\n\t\tarray(\"E(a,b,c,d)\",\"-E(c,d,a,b)\")\n\t), \n\\end{verbatim}\n\n\\noindent\nAs one can see, the master list is human readable, not much encumbered\nby the occurrences of {\\tt new} and {\\tt array} that make it into PHP code.\nThe master list was prepared by hand, by the following method:\nFirst, we entered the negated form of a theorem, by copying \nit from {\\tt list(sos)} of the existing hand-crafted input file.\nIt was, we realized, important that the constants in the master list\nhave the same names as the constants in the corresponding \nexisting proofs, to avoid incompatibilities when those proofs are\nused as hints. (Of course, that matters only for the theorems that\ncannot be proved without hints.)%\n\\footnote{Therefore we could not satisfy a request that the constants\nhave names unique across all files. Researchers with that wish can \nwrite PHP code to append a number to each constant. Then, however, they \nwon't be able to use hints extracted from our proofs.\n}\n\nThen, we computed (by hand) the positive form. \nHere constants such as $a$ were changed to variables such as $xa$.\nIn some cases we had constants $cu$ or $cx$, where the book proof \nhad $u$ or $x$; in computing the positive form these become $u$\nor $x$ again, rather than $xcx$ or $xcu$. Although this was\noriginally done by hand, we checked it (and other things about the \nmaster list) with a PHP program {\\em TestMasterList.php}. This program\ncarries out Skolemization on a text-only basis, without parsing\nterms or clauses. 
(Any parse errors will turn up when we run the resulting input files.)\nThe program copes correctly when the negative form has one clause that is a disjunction.\nIn the entire Tarski corpus, there are only two theorems whose negative form contains\ntwo disjunctions. These were Skolemized by hand and checked carefully.\nThat the resulting input files do result in proofs is another positive indication.\nThis mechanical check of correct Skolemization is our answer to the first\nitemized correctness concern above.\n\nOne issue was the order of arguments to Skolem functions. In 2015 we could \nnot change the conventions that we adopted in 2013, since that would have rendered the \nhints obtained from our existing proofs useless. That necessitated some \nnot very beautiful code that specifies the order per Skolem function. The correctness\nof this code is evidenced by the fact that it did work with hints obtained from \nexisting proofs. Luckily, in 2013 when we worked on one file at a time, we were consistent\nabout the order of arguments of the same Skolem function in different files.\n\nThe second concern was whether each diagram clause has the form of an \nequation with a new constant on the left, in the precise sense defined above.\nOur program {\tt TestMasterList.php} checks this condition mechanically. It \nalso checks the third concern: that each Skolem function is introduced by exactly one theorem and \nis not used (even in a diagram) before it is introduced. \n\n\nThe fourth concern was whether each theorem in the master list corresponds\nto the version in the printed book. Several considerations arise here:\nFirst, Szmielew treated lines as sets of points, so her treatment is not \nstrictly first order. We translated ``line $L$'' to two points $p,q$ and \nthe literal $p \\neq q$, and $x \\in L$ to $Col(p,q,x)$, where $Col$ means\n``collinear'' and is defined in terms of betweenness. The correctness issue\nhas to be understood modulo this translation. 
Second, as a result of that\ntreatment of lines, some simple theorems had to be formulated in our development\nthat are not in the book. For example, perpendicularity of lines becomes\na 4-ary relation $perp(a,b,c,d)$, or $ab \\perp cd$, and we have to prove that, for example, \n$a$ and $b$ can be interchanged, or $c$ and $d$, or both.\nThis gives rise to some theorems that do not occur in the book. But for those\ntheorems that do occur in the book, one can check by hand that they do correspond \nto the book.\n Since the Skolemization has been checked by machine,\nit is only necessary to check that the negative form corresponds to the book.\nThere are only a few cases where this is at all problematic. Lemma~9.4, p.~68,\nis an example. The question whether the formal theorem corresponds to \nthe intended theorem arises also in using a proof checker. One\ncannot check by means of a computer program that the master list\n corresponds to the book. Mathematics books\nwritten in the twentieth century do not contain machine-readable formulas,\nand even if they did, we would still need to know if those really represented\nSzmielew's intent. (There are a few typographical errors in Szmielew.)\n\n\\subsection{Size and availability of the master list}\nWe briefly contemplated including the master list as an appendix to this paper.\nWe discovered that, when printed, it is 43 pages long. Moreover, in order\nto be useful, it must be in electronic form. It could well turn out that \nthe master list is the most useful product of this research, since it can be \nused immediately by anyone else wishing to study the Tarski axioms and Szmielew's\naxiomatic development from them. We therefore plan to post a separate\ndocument to the ArXiv containing the master list in the \\TeX\\ source. Then,\nas long as the ArXiv lives, the master list will be available. 
Of course\nthe master list, as well as the PHP programs used for this research, will be\navailable at \\cite{tarski-archive} for at least the next few years.\n\n\\subsection{Generating input files mechanically} \\label{section:mechanicalinputfiles}\nOnce the master list is correct, it is a routine programming exercise\ninvolving only textual (string) manipulations to generate input files.\nThe algorithm is simple. Consider, for example, the function\nwith the specification given in the following comments.\n\n\\begin{verbatim}\nfunction InputFileAsArray($t, $diagrams, $sos, $settings, $cases)\n\/\/ $t should be a member of $TarskiTheorems\n\/\/ Generate the lines of an input file for proving $t\n\/\/ and return an array of strings containing those lines.\n\/\/ Make sure no line exceeds 80 characters.\n\/\/ include the diagram if $diagrams is not false.\n\/\/ include the members of $t->Cases if it is an array\n\\end{verbatim}\n\nHere {\tt \\$settings} is an array of the {\\textsc OTTER\\ } settings to be used\nin this file; the strings in the arrays\n {\tt \\$t->Diagram}, {\tt \\$t->NegatedForm}, and {\tt \\$t->Cases}\nare ready to be copied out of {\tt \\$t} into the appropriate places in a file\ntemplate. All the function has to do is \ncreate an array {\tt \\$ans} and add these things to that array in the right\norder, interspersing template lines like ``{\tt list(sos).}'' at the appropriate places.\n\nThis function, however, doesn't put any hints into the file it is creating.\nThat is done by the following code.\n\n\\begin{verbatim}\nfunction InsertHintsIntoInputFile($theorem_name, $outdir, $indir)\n\/\/ Get hints from the proof, i.e. 
from the .prf file \n\/\/ found in directory $outdir, and insert them in \n\/\/ list(hints) in the input file.\n\\end{verbatim}\n\nThe heart of this program is {\\tt GetHintsFromProof}, which we have been \nusing since 2012 to extract hints by hand from a proof; we have confidence \nin the correctness of that program based on long use and on many comparisons\nof the output with the input. Again, the correctness of that program doesn't\nreally matter: whatever we put into the hints list, if we get a proof,\nthe proof is right. In order to ensure correctness of any proofs obtained, \nthe only thing that really matters about {\\tt InsertHintsIntoInputFile}\nis that all it does to the input file is add a hints list. (Of course,\nto actually obtain some proofs, it had better be correct in other ways.)\n\n\\subsection{Experimenting with many files at a time}\n\nWith the aid of the PHP programs discussed above, we wrote short\nPHP programs that test a given set of files (such as, for example, those belonging to Chapter 8),\nwith a given set of settings for the theorem prover, and given values of variables\ncontrolling whether we will use hints extracted from a proof or not, whether we will \nuse diagrams, whether we will use the case listed in the master list,\nwhether we will try to prove theorems that have already been proved or only ones\nthat have not been proved yet, and so forth. We also could have the settings used on each \ninput file depend on previously-obtained results, the record of which was kept in \na machine-readable (but handwritten) file. The PHP tools that we used are\n available, with descriptions, at \\cite{tarski-archive}. \n\n\\subsection{Summary of the correctness discussion}\nWe were able to replace the hand-crafted input files we posted in 2013--2014\nby mechanically generated input files. 
In the case of difficult theorems that \nwe did not prove completely mechanically, our mechanically generated files used\n hint injection, with hints extracted\nfrom the old proofs. Sometimes, if the new proof was shorter, we iterated, injecting\nhints obtained from it. In most cases the resulting proofs were\nshorter than the ones we had before. As discussed above, if our PHP programs for \ngenerating those files are correct and the Skolemizations in the master list are correct,\n then each theorem has been deduced from the \nones preceding it in the master list. In other words, it is a conclusion, not a \nhypothesis, that the theorems are ordered correctly in the master list. \nThe correctness of the Skolemizations in the master list has been machine-checked\n(except for the two theorems with two disjunctions, which were checked manually). \nThat the master list correctly corresponds \nto the book has been checked manually (the only possible way to check it).\n\n\n\\section{Easy and hard theorems and their proofs} \\label{section:hardtheorems}\nWe divide the Tarski theorems into ``easy'' and ``hard'' theorems, according to the somewhat\narbitrary rule that a theorem is ``easy'' if it has a proof of 40 or fewer steps. The idea\nbehind the choice of the number 40 is\n that we were able to prove many of the easy theorems completely automatically,\nwhile to prove the hard theorems, we had to use hints. There are 214 \ntheorems altogether, of which 29 are ``hard'' by this definition. We proved about \ntwo-thirds of the \neasy theorems mechanically, and we proved four very hard theorems completely mechanically\ntoo, using the subformula strategy, leaving 25 theorems that required hints. \nHere ``mechanical'' refers to proofs\n obtained with no \nreference to the book proof. An additional 14 proofs were found by ``giving the prover\nthe diagram'', without additional use of the book proof. 
That raises ``about two-thirds''\nto ``about three-quarters''.\n \nThe rest of the easy theorems, and the rest of the hard ones, \nwere proved using ``hints''. These hints came from using some steps from the \nbook proofs, and from proofs of related theorems obtained by the \nmethods described above (lemma adjunction and eliminating cases). \n\nIf we had to measure the strength of \\Ott\\ on this problem set by a single number, we would say \nthat 40 is the answer: we expect that \\Ott\\ has a good chance to automatically find \nproofs for theorems that have proofs of length less than or equal to 40, and\n only occasionally to find \nproofs much longer than 40 steps. \n \n \nThe book starts with easy theorems. Chapters 2 through 6 contain 81 \ntheorems. Of those only two are hard: Satz 4.2 (44 steps) and Satz 5.1 (127 steps).\n Satz~5.1 expresses in Tarski's language that \nif we fix $a$ and define $x \\le y$ by ${\\bf T}(a,x,y)$, then $x \\le y \\lor y \\le x$. \nThis was first proved from Tarski's axioms in Gupta's thesis \\cite{gupta1965}.\nWe had to use diagrams on six of the remaining theorems. We \n used hints on two ``easy'' theorems: Satz~4.16 (23 steps) \nand Satz~5.12b (16 steps). We could prove Satz~4.16 without hints,\nbut we had to wait 95696.25 seconds (more than 26 hours).\nSatz 4.5 required the subformula strategy in order to find a proof automatically.\nSatz 6.16b appeared to be hard at first, but the \nsubformula strategy found a short proof. Of the remaining 69 theorems, we proved 67 \ncompletely mechanically, without hints or diagrams, using our default settings. The \nother two are Satz 3.1 and Satz 4.6. The former we could prove mechanically\nwith a diagram, but we could also prove it without a diagram if we put everything\nin the set of support, instead of putting the axioms in list(usable). 
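\nIn \\Ott, the set-of-support restriction means that clauses in {\tt list(usable)} are never \nresolved with one another; moving the axioms into {\tt list(sos)} removes that restriction. \nSchematically, the difference is just where the clauses are placed in the input file \n(this is our own illustrative fragment, not one of the actual input files):\n\n\\begin{verbatim}\nlist(usable).   % default: axioms here resolve only against sos clauses\n  % axioms A1-A9 go here\nend_of_list.\n\nlist(sos).\n  % negated goal here; for Satz 3.1 we moved the axioms\n  % into this list as well, leaving list(usable) empty\nend_of_list.\n\\end{verbatim}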
Satz~4.6,\nstrangely enough, could be proved immediately after adding two instances of \nequality axioms explicitly to the set of support, but we could not get a proof \nwithout adding those equality axioms. Of course, once we found the proof by adding\nequality axioms, we could put the steps of the proof in as hints and find a proof\nwithout explicit equality axioms. \n\nIn Chapters 7 and 8, there are 46 more theorems that we proved completely mechanically,\n8 more hard theorems, 3 short theorems on which we had to use hints (i.e., could \nnot prove mechanically even with a diagram), and five theorems that we could \nprove mechanically with a diagram, but not without. Of those eight hard theorems,\ntwo were proved mechanically using the subformula strategy: Satz 7.13 and \nSatz 7.22a. The latter of these is the Krippenlemma, one of the important theorems\nof Gupta's thesis. That a 108-step proof of a major result could be found \ncompletely automatically was entirely unexpected. \n\nChapter 9 gets harder. We could prove only four theorems completely mechanically;\ndiagrams were not needed and did not help with the other theorems. There are five hard theorems,\nwhich we proved using hints. \nWe were also able to prove two of those hard theorems with the subformula strategy,\nincluding the gem of Gupta's thesis, that outer Pasch implies inner Pasch. \nThere are also five short theorems that we could prove only by using hints, although\nfour of them have proofs between 35 and 40 steps, so they are ``borderline short.''\n\nChapter 10 gets easier again; it is about reflection in a line\nand right angles, and the high point is the proof of Euclid's fourth postulate, that \nall right angles are congruent, in the form that two right triangles with congruent legs\nalso have congruent hypotenuses. Although five proofs in Chapter 10 are longer\nthan 45 steps, only two are longer than 75 steps. 
The chapter ends with a proof \nof Hilbert's triangle construction axiom.\n\nChapter 11 deals with the definition and properties\nof angles (as triples of points). Only two proofs in Chapter 11 are longer than 40 steps.\n\nTable~\\ref{table:3} summarizes the results just described.%\n\\footnote{There are 214 theorems in our Master List; but the last three are all the \nHilbert parallel axiom, in two cases and in a combined statement. So really, there are 212\ntheorems, of which 29 are hard and 183 are easy.}\n\n\\begin{table}\n\\caption{One hundred eighty three theorems with proofs of length $\\le 40$}\n\\label{table:3}\n\\center{\n\\begin{tabular} { l r r r r}\n{\\bf Chapter} & {\\bf no diagram} & {\\bf diagram} & {\\bf Subformula strategy} & {\\bf hints} \\\\\n2 to 6 & 72 & 5 & 3 & 2\\\\\n7 & 20 & 0 & 1 & 1 \\\\\n8 & 16 & 6 & 2 & 2 \\\\\n9 & 5 & 0 & 3 & 2 \\\\\n8A & 6 & 1 & 0 & 1 \\\\\n9A & 6 & 0 & 0 & 4 \\\\\n10 & 6 & 1 & 0 & 3 \\\\\n11 & 3 & 1 & 0 & 6 \\\\\n12 & 0 & 0 & 0 & 5 \\\\\nTotal & 134 & 14 & 9 & 26\n\\end{tabular}\n}\n\\end{table}\n\n\\begin{table}\n\\caption{Twenty-nine hard theorems and their proofs}\n\\label{table:4}\n\\center{\n \n\\begin{tabular} {l l r r r }\n {\\bf Theorem} & {\\bf description} & {\\bf sub. str.} & {\\bf hints} & {\\bf book }\\\\\n Satz 4.2 & inner 5-segment theorem && 44 & 13\\\\ \n Satz 5.1 & trichotomy && 127 & 70 \\\\\n Satz 7.13 & refl. in pt. 
preserves congruence & $(99, 921)$ & 72 & 28\\\n Satz 7.22a & Krippenlemma & $(108,3189)$& 96 & 27 \\\n Satz 7.25 & isosceles triangle has midpoint && 113 &34\\\n Satz 8.18 & Lotsatz (dropped perpendicular) && 227 & 32\\\n Satz 8.21a & uniqueness of dropped perp && 106& 5\\\n Satz 8.22b & existence of midpoint &&201 & 42\\\n Satz 8.24a & midpoint details && 171& 42\\\n Satz 8.24b & midpoint details && 163& 42\\\n Satz 9.3 & opposite-side ray theorem && 52 & 14\\\n Satz 9.4b & $r=s$ case of 9.4 &&44 &7\\\n Satz 9.4c & second half of Lemma 9.4 in book &&41 & ?\\\n Satz 9.6 & inner Pasch implies outer Pasch &$(98,186)$ &91& 27 \\\n Satz 9.8 & opposite, same implies opposite &$(63,1646)$ &48 &21\\\n Satz 9.13 & transitivity of samesideline && 71& 1?\\\n Lemma 9.13f & (case 2) && 42 & ? \\\n Satz 10.2a & existence of reflection in line && 45 & 23\\\n Satz 10.2b & uniqueness of reflection in line && 49& 23\\\n Satz 10.10 & refl. in line preserves congruence && 72 & 26\\\n Satz 10.12a & \tright triangles with common vertex & & \\\n & $\ldots$ have congruent hypotenuses & & 60 & 6\\\n Satz 10.12b & right triangle theorem (Euclid 4) && 91 &14\\\n Satz 10.15 & perpendicular on given side of line && 74 & ? \\\n Satz 10.16a & triangle construction && 57 & 24?\\\n Satz 10.16b & triangle uniqueness && 60&22\\\n Satz 11.3a & angle congruence theorem && 43& 18\\\n Satz 11.15a & angle transport (existence) && 78 & 1? \\\n Satz 11.15b & angle transport (uniqueness) && 78 & 1? \\\n Satz 12.11 & Hilbert's parallel axiom && 105 & 18?\n\end{tabular} }\n\end{table}\n\n\\FloatBarrier \n\n\nTable~\\ref{table:4} shows the hard theorems from the entire book, with the \nnumber of steps in the proof(s) that we obtained. A paired number like $(99,921)$ \nindicates \na proof found mechanically using the subformula strategy, of length 99 and found in 921 seconds.\nThe last column is the number of steps in the book proof. 
A question mark in this column indicates\nthat the book proof contains non-trivial gaps or is omitted. \nNot counting the lines with a question mark, we have 2032 {\\textsc OTTER\\ } steps\ncorresponding to 516 book steps, giving a de Bruijn factor (the ratio of the two) \nof about 4. \n\nOf the 212 theorems we tried to prove, we proved 147 completely automatically \n(without any use of the book proof or diagram); we needed the \nsubformula strategy for 13 of those proofs. We were able to prove 15 more theorems\nby using the diagram, without using the steps of the book proof. And when we allowed\nourselves to use some of the steps of the book proof as hints, we achieved a 100\\% success rate.\n \n\n \n \n\\section{What about Prover9? Or E, {\\textsc SPASS}, Vampire?}\n \n \n\nNearly every person with whom \nwe have discussed this work, including the referees, has suggested that another prover\nor provers might do better in some respect than \\Ott. It was not our purpose \nto evaluate the performance of any theorem prover or to compare the performances of \ndifferent theorem provers. Our purpose was to show that a combination of strategies \nin {\\em some} theorem prover could prove all the approximately 200 theorems in Tarski\ngeometry. \n\n\n In \\cite{narboux2015b}, Durdevic {\\em et al.} tried E, {\\textsc SPASS}, and Vampire\non the theorems from Szmielew, with a 37\\% success rate. The corresponding \npercentage in our work, as discussed in the previous section, was 62\\% without the subformula strategy, and \n76\\% with the subformula strategy.\n These numbers may not be exactly comparable, since the set of theorems may not have been identical.\nThey used a very low time\nlimit, but most of the theorems we proved mechanically took less than 20 seconds, so \nthe few that took longer did not change our percentage much. We attribute our higher \nsuccess rate (without any strategy at all) \nto a good choice of settings for \\Ott. 
The fact that we did achieve a 100\\% success\nrate with the use of lemma adjunction and hint injection is another reason for not using\nanother prover: we could not improve on that success rate. \n\n\nOur master list of the Tarski theorems is now available for others to use; also our \nPHP programs for manipulating the master list and files are available. Conducting experiments\nwith other theorem provers then involves only modifying the template for an input file\nthat is used by {\\em InputFileAsArray}. \n We have not conducted such experiments, but others are doing so as this article goes\nto press, and will report on their experiments in due course. We note that in order\nto duplicate our result with \\Ott (134 out of 212 proved mechanically),\n it was necessary to use the default settings we used \nfor \\Ott, which can be found in the PHP code supplied on our website. \nTentative results for Vampire and E are 154 and 142 out of 212.\n\n\\section{Summary}\n\\vskip-0.3cm\nWe used {\\textsc OTTER\\ } to find proofs of the theorems in Tarskian geometry in the first nine \nchapters of Szmielew's development in Part I of \\cite{schwabhauser}. Those theorems \ninclude the four unsolved challenge problems from Quaife's book\\cite{quaife1992}, and \nthe verification of Hilbert's axioms. Of the 214 theorems, we proved\n62\\% mechanically and without strategies (except for a good choice of settings), \nand by the use of the subformula strategy we increased that to 76\\%. By using \nlemma adjunction and hint injection, which depend on starting with a book proof, \nwe achieved a 100\\% success rate.\n\n\nMechanically generated input files and the resulting proofs for all the theorems that we have proved \nare archived at \\cite{tarski-archive}. \n\n \n\n \n\n \n\n \n \n \n\n\n \n\n\n\\begin{acknowledgements}\nThis material was based in part on work supported by the U.S. 
Department of Energy,\nOffice of Science, under contract DE-ACO2-06CH11357.\n\\end{acknowledgements}\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}\\setcounter{equation}{0}}\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\n\\newdimen\\tableauside\\tableauside=1.0ex\n\\newdimen\\tableaurule\\tableaurule=0.4pt\n\\newdimen\\tableaustep\n\\def\\phantomhrule#1{\\hbox{\\vbox to0pt{\\hrule height\\tableaurule width#1\\vss}}}\n\\def\\phantomvrule#1{\\vbox{\\hbox to0pt{\\vrule width\\tableaurule height#1\\hss}}}\n\\def\\sqr{\\vbox{%\n \\phantomhrule\\tableaustep\n \\hbox{\\phantomvrule\\tableaustep\\kern\\tableaustep\\phantomvrule\\tableaustep}%\n \\hbox{\\vbox{\\phantomhrule\\tableauside}\\kern-\\tableaurule}}}\n\\def\\squares#1{\\hbox{\\count0=#1\\noindent\\loop\\sqr\n \\advance\\count0 by-1 \\ifnum\\count0>0\\repeat}}\n\\def\\tableau#1{\\vcenter{\\offinterlineskip\n \\tableaustep=\\tableauside\\advance\\tableaustep by-\\tableaurule\n \\kern\\normallineskip\\hbox\n {\\kern\\normallineskip\\vbox\n {\\gettableau#1 0 }%\n \\kern\\normallineskip\\kern\\tableaurule}%\n \\kern\\normallineskip\\kern\\tableaurule}}\n\\def\\gettableau#1{\\ifnum#1=0\\let\\next=\\null\\else\n\\squares{#1}\\let\\next=\\gettableau\\fi\\next}\n\n\\tableauside=1.0ex\n\\tableaurule=0.4pt\n\\def\\mathbb{E}{\\mathbb{E}}\n\\def{\\bf{r}}{{\\bf{r}}}\n\\def\\MM#1{{ $\\ll\\!\\!\\ll\\!\\!\\ll$ \\bf MM: {\\it #1} $\\gg\\!\\!\\gg\\!\\!\\gg $}}\n\\def\\PP#1{{ $\\ll\\!\\!\\ll\\!\\!\\ll$ \\bf PP: {\\it #1} $\\gg\\!\\!\\gg\\!\\!\\gg $}}\n\\def\\ND#1{{ $\\ll\\!\\!\\ll\\!\\!\\ll$ \\bf ND: {\\it #1} $\\gg\\!\\!\\gg\\!\\!\\gg$}}\n\n\\newcommand{\\figref}[1]{Fig.~\\protect\\ref{#1}}\n\n\n\n\\title{\\boldmath New results in ${\\cal N} =2$ theories from non-perturbative string}\n\n\n\\author{ Giulio Bonelli$^a$, Alba Grassi$^b$ and Alessandro Tanzini$^a$}\n\n\n\\affiliation{\n$^a$ International School of Advanced Studies (SISSA), \\\\\nvia Bonomea 265, 34136 Trieste, Italy and 
INFN, Sezione di Trieste\\\\\n\\\\\n$^b$International Center for Theoretical Physics,\\\\\n ICTP, Strada Costiera 11, Trieste 34151, Italy \n and INFN, Sezione di Trieste\\\\}\n\n\n\n\n\\emailAdd{bonelli@sissa.it, agrassi@ictp.it, tanzini@sissa.it}\n\\preprint{\n\\begin{flushright}\nSISSA 14\/2017\/FISI-MATE \\\\\n\\end{flushright}\n}\n\n\n \n\n\\abstract{We describe the magnetic phase of $SU(N)$ ${\\cal N}=2$ Super Yang-Mills theories in the self-dual $\\Omega$ background in terms of a new class of multi-cut matrix models. These arise from \na non-perturbative completion of topological strings in the dual four dimensional limit which engineers the gauge theory in the strongly coupled magnetic frame. \nThe corresponding spectral determinants provide natural candidates for the $\\tau$-functions of isomonodromy problems\nfor flat spectral connections associated to the Seiberg-Witten geometry.\n }\n\n\n\\begin{document}\n\\maketitle\n\n\\flushbottom\n\n\n\\section{Introduction}\n\nSince the seminal work of Seiberg and Witten (SW) \\cite{sw2}, much progress has been made in understanding ${\\cal N}=2$ gauge theories in four dimensions. \nCrucial progress has been obtained by applying equivariant localisation methods to the supersymmetric path integral coupled to the so-called $\\Omega$-background \\cite{n, Flume:2002az, Bruzzo:2002xf}. This reduces it to a combinatorial expression which has been subsequently linked to quantum integrable systems \\cite{ns} and to two-dimensional Conformal Field Theory \\cite{agt}.\nLocalisation methods have so far been applied in the weak coupling limit of the gauge theory, rebuilding the low-energy effective field theory of SW in the electric\npolarization of the special K\\\"ahler manifold of Coulomb vacua. 
More recently \\cite{gil1,ilt1,blmst} it has been realized that the $SU(2)$ Nekrasov and Okounkov (NO) \\cite{no2} partition functions in the self--dual $\\Omega$-background are $\\tau$-functions of isomonodromy problems related to the corresponding SW geometry, realized as the spectral curve of Hitchin's integrable system.\nIn this case one can use the well-known relation between isomonodromic deformation problems and Painlev\\'e equations to reduce\nthe evaluation of the NO partition function of gauge theories at strong coupling to the calculation of the relevant $\\tau$-function in the long-distance expansion \\cite{blmst}. \n\nOn the other hand, four dimensional gauge theories can be engineered by using topological string theory \\cite{kkv}. Actually, the latter is richer, in particular at the non-perturbative level there are new effects arising from the embedding of the gauge theory in string theory. Along this line of research a lot of work has been done during the last decade to understand topological string beyond perturbation theory starting with the seminal works \\cite{mmopen, Marino:2007te, mmnp}.\n In particular in \\cite{ghm}, in the spirit of large $N$ dualities, a non--perturbative formulation of topological string on toric Calabi-Yau (CY) has been proposed. This formulation has proved to be extremely rich and constructive leading to several new results and applications in various related fields such as integrable systems \\cite{hm,Franco:2015rnr,Marino:2016rsq}, supersymmetric gauge theories \\cite{bgt, hel} and condensed matter \\cite{Hatsuda:2017zwn, Hatsuda:2016mdw}. 
\nThe non-perturbative proposal of \\cite{ghm} was originally formulated only for CYs whose mirror curve has genus one, but it has been extended to higher genus mirror curves in \\cite{cgm2}.\n\nIn \\cite{bgt} a link between this non-perturbative completion of topological string and isomonodromy problems arising from four dimensional gauge theories has been found in the special case of $SU(2)$ Super Yang-Mills (SYM). The relevant isomonodromy problem in this case is the one associated to the Painlev\\'e $\\rm III_3$ (also called \n$ \\rm III^{D_8}$) equation, whose $\\tau$-function has long been known \\cite{zamo} to admit a Fredholm determinant description.\nUpon a suitable four dimensional scaling limit, the non-perturbative completion of topological string has been shown to be directly related to the Fredholm determinant above. This produces a matrix model presentation of the gauge theory partition function in the strongly coupled magnetic frame and provides an operator theory interpretation of the self--dual $\\Omega$ background ($\\epsilon_1=-\\epsilon_2=\\epsilon$). \nMoreover, it also provides an exact S-duality transformation formula for the Nekrasov partition function with self-dual $\\Omega$-background, including non-perturbative corrections\nin $\\epsilon$.\n\nThe purpose of this paper is to extend these results to $SU(N)$ gauge theories. More precisely, in section \\ref{su2rev} we review the consequences of the genus one proposal for $SU(2)$ theories and we provide the exact S-duality transformation for the corresponding gauge theory partition function. In section \\ref{nps}, by following the general prescription of \\cite{cgm2}, we derive the matrix models computing the topological string partition function on the $Y^{N,0}$ geometries. The result is given by the $N-1$ cut matrix model shown in \\eqref{mmsu35d}. 
Then, in section \\ref{sun4}, we perform the so-called {\\it{dual}} four dimensional limit \\cite{bgt} on these models and we make contact with ${\\cal N}=2$ $SU(N)$ SYM in the four dimensional self-dual $\\Omega $ background \\cite{no2}. More precisely we find that the partition function in the magnetic frame is given by \n\\begin{equation} \\label{su3mmintro} {\\begin{aligned} Z_{\\rm N}^{\\rm 4d}(M_1,\\cdots,M_{N-1})=& {1 \\over M_1! \\cdots M_{N-1}!} \\int {{\\rm d} ^M x\\over (2\\pi)^M} \\prod_{j=1}^{N-1} \\prod_{i_j\\in I_j}{\\rm e}^{- \\frac{N \\Lambda }{ \\pi ^2 \\epsilon}{ \\sin \\left(\\frac{\\pi j }{N}\\right)} \\cosh(x_{i_j})} \\\\\n& \\times {\\prod_{ 1\\leq i0}~.\\end{equation}\nLikewise the counterpart of \\eqref{zj} in this limit is\n\\begin{equation} \\label{zj4d} \\begin{aligned} Z_2^{\\rm 4d}(M)=&{\\rm i}^{-1} \\int_{ {\\mathbb R}+\\sigma_0}{{\\rm d} \\sigma} \\tan \\left(2 \\pi \\sigma\\right) Z^{\\rm Nek}(\\sigma,t){\\rm e}^{- \\log \\left[2 \\cos (2 \\pi \\sigma )\\right] M} {\\rm e}^{\\frac{\\log (2)}{12} +3\\zeta'(-1)}t^{-1\/16}{\\rm e}^{4 \\sqrt{t}} , \\\\\n& \\qquad \\sigma_0=(2 \\pi )^{-1}{{\\rm i} \\cosh ^{-1}(2 \\pi )} , \\quad t= \\left(\\frac{\\Lambda }{4 \\pi ^2 \\epsilon }\\right)^4,\\end{aligned}\\end{equation}\nSince $Z^{\\rm Nek}(\\sigma, t)$ has poles only for $\\sigma \\in {\\mathbb Z}\/2$ one can take a generic $\\sigma_0 \\in {\\rm i} {\\mathbb R}_+$. \nThis equality also follows from \\eqref{4dstat} and \\eqref{dic4}. Indeed one has\n\\begin{equation} \\label{interm}Z_2^{\\rm 4d}(M)={1\\over 2 \\pi {\\rm i} } \\oint_0\\kappa^{-N-1}\\det (1+\\kappa \\rho^{\\rm 4D}_{{\\mathbb P}^1\\times {\\mathbb P}^1}) {\\rm d} \\kappa.\\end{equation}\nBy using the change of variable \\eqref{dic4} together with the identity \\eqref{4dstat} we can write \\eqref{interm} as \\eqref{zj4d}.\n Moreover \\eqref{zj4d} can also be tested numerically in an easy way thanks to the good convergent properties of \\eqref{nek4d}. For instance on the l.h.s. 
we have\n\\begin{equation} \\label{tt}\\begin{aligned}& Z_2^{\\rm 4d}(0)=1, \\quad Z_2^{\\rm 4d}(1)=\\left(2 \\pi \\right)^{-1}{K_0\\left(8 \\sqrt[4]{t}\\right)}, \\\\\n& Z_2^{\\rm 4d}(2)=\\frac{G_{1,3}^{3,0}\\left(64 \\sqrt{t}|\n\\begin{array}{c}\n \\frac{3}{2} \\\\\n 0,0,0 \\\\\n\\end{array}\n\\right)}{32 \\pi ^{3\/2}},\\end{aligned}\\end{equation}\nwhere $K_0$ denotes the Bessel function and $G$ the Meijer G function. It is easy to check that the numerical integration on the r.h.s of \\eqref{zj4d} reproduces \\eqref{tt}. \n\nEquation \\eqref{zj4d} provides the integral kernel implementing the exact S-duality transformation of pure $SU(2)$ Nekrasov partition in a self-dual $\\Omega$-background parametrized by $\\epsilon$. It is easy to see that in the semiclassical limit $\\epsilon\\to 0$ it reduces to a simple Fourier transform as it is expected from Seiberg-Witten special geometry relations \\cite{sw2,abk}. Indeed, by introducing the SW variables $ {\\tt a}\/\\epsilon\\equiv \\sigma $ and $ {\\tt a}_D\/\\Lambda \\propto g_s M$ one has in the $\\epsilon\\to 0$ limit \n\\begin{equation} \n\\log 2 \\cos \\frac{2\\pi {\\tt a}}{\\epsilon} =- \\frac{2\\pi {\\rm i} {\\tt a}}{\\epsilon}+ \\mathcal{O}({\\rm e}^{4 \\pi {\\rm i} {\\tt a} \\over \\epsilon }) \\ \\ , \\quad \\ \\ \n {\\rm i}^{-1}\\tan \\frac{2\\pi {\\tt a}}{\\epsilon} =1+ \\mathcal{O}({\\rm e}^{4 \\pi {\\rm i} {\\tt a} \\over \\epsilon }) \\ \\ .\n\\end{equation}\n Because of the imaginary part in the integration contour we have ${\\rm Re}(4 \\pi {\\rm i} {\\tt a} ) <0$, hence the corrections $\\mathcal{O}({\\rm e}^{4 \\pi {\\rm i} {\\tt a} \\over \\epsilon })$ are exponentially suppressed and do not appear in the perturbative limit $\\epsilon \\to 0$. 
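In this limit, then, up to the $\sigma$-independent prefactors in \eqref{zj4d} and modulo the exponentially suppressed corrections above, one is left with the schematic Fourier-transform form\n\begin{equation}\nZ_2^{\rm 4d}(M) \sim \int_{{\mathbb R}+\sigma_0} {\rm d} \sigma \, Z^{\rm Nek}(\sigma,t) \, {\rm e}^{2 \pi {\rm i} \sigma M}=\int {{\rm d} {\tt a} \over \epsilon} \, Z^{\rm Nek}({\tt a}\/\epsilon,t) \, {\rm e}^{2 \pi {\rm i} {\tt a} M\/\epsilon} \ ,\n\end{equation}\nso that $M$ is conjugate to ${\tt a}\/\epsilon$, consistently with the identification ${\tt a}_D\/\Lambda \propto g_s M$ above. 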
Notice also that\nequation \\eqref{zj4d} agrees with the general philosophy for the change of frame in topological strings \\cite{abk} and provides the exact S-duality transformation properties of the gauge theory\npartition function with gravitational corrections. \n\n\n\nAll this procedure can be in principle extended to $SU(2)$ theories with matter multiplets as well, however the details still have to be worked out.\nSummarizing the implementation of the dual limit on the TS\/ST duality leads to the following results in connection with four dimensional ${\\cal N}=2$ gauge theories: it gives an operator theory interpretation of the self-dual $\\Omega$ background, it gives Fredholm determinant representation for the $\\tau$ functions of Painlev\\'e equations, it provides a matrix model for the partition function in the magnetic frame\nand an exact formula for its S-duality transformation.\n\n\n \n\\section{Non-perturbative string on $Y^{N,0}$ geometries } \\label{nps}\n\n\nThe TS\/ST duality \\cite{ghm} has been generalized to higher genus mirror curve in \\cite{cgm2}. \nAccording to this construction one can associate a set of $g$ operators \\begin{equation} \\left\\{{\\rm O}_i \\right\\}_{i=1}^g \\end{equation} to any toric CY manifold, $g$ being the genus of its mirror curve.\nOf particular interest for this paper are those CYs from which one can engineer $SU(N)$ supersymmetric gauge theories \\cite{kkv,ikp3,ta}. 
\nExamples of such geometries are the resolution of the cone over the $Y^{N,0}$ singularity studied for instance in \\cite{bz}.\n The corresponding mirror curve has genus $N-1$ and therefore there are $N-1$ different ``canonical'' forms of this curve, which read \n\\begin{equation} O_i(x_1,x_2, \\xi)+\\kappa_i=0, \\quad i=1, \\cdots , N-1,\\end{equation}\nwhere $\\kappa_i$ denote the complex moduli of the geometry.\nFor instance we have\n\\begin{equation} \\label{cl1}O_1(x_1,x_2, \\xi)+\\kappa_1={\\rm e}^{x_2}+{\\rm e}^{-x_2 +(-N+2)x_1}+\\sum_{i=1}^{N-1} \\kappa_{N-i} {\\rm e}^{(i-N+1)x_1} +\\xi {\\rm e}^{(-N+1)x_1} +{\\rm e}^{ x_1}=0 ,\\end{equation} \nwhere $\\xi$ is the mass parameter and should be distinguished from the other moduli $\\kappa_i$, as emphasized for instance in \\cite{hkp}.\nTherefore, the quantization procedure for the $Y^{N,0}$ geometry leads to the following $N-1$ operators \n\\begin{equation} \\begin{aligned} \n&{ {\\rm O_1}+\\kappa_1={\\rm e}^{\\mathsf x_2}+{\\rm e}^{-\\mathsf x_2 +(-N+2)\\mathsf x_1}+\\sum_{i=1}^{N-1} \\kappa_{N-i} {\\rm e}^{(i-N+1) \\mathsf x_1} +\\xi {\\rm e}^{(-N+1) \\mathsf x_1} +{\\rm e}^{\\mathsf x_1}},\\\\\n& {\\rm O}_{j}+\\kappa_{j}= {\\rm Q}_j^{-1\/2}\\left( {\\rm O}_1+\\kappa_{1}\\right) {\\rm Q}_j^{-1\/2}, \\quad 1 < j \\leq N-1,\n\\end{aligned}\\end{equation}\nwhere $[\\mathsf x_1, \\mathsf x_2]={\\rm i} \\hbar$\nand we denote\n\\begin{equation} {\\rm Q}_j={\\rm e}^{ -(j-1) \\mathsf x_1}.\\end{equation}\nLet us define the following two operators \n\\begin{equation} \\label{rad}\\begin{aligned}& \\rho_{1, N-2,\\xi}=\\left({\\rm e}^{\\mathsf x_2}+{\\rm e}^{-\\mathsf x_2 +(-N+2)\\mathsf x_1}+\\xi {\\rm e}^{(-N+1)\\mathsf x_1} +{\\rm e}^{\\mathsf x_1}\\right)^{-1}, \\\\\n& {\\rm A}_j^{\\rm 5D}=\\rho_{1, N-2,\\xi}{\\rm Q}_j.\\end{aligned}\\end{equation} \nThe conjecture \\cite{cgm2} states that the non--perturbative topological string partition function on the $ Y^{N,0}$ geometry in the conifold frame is given 
by\n\\begin{equation}\n\\label{gen-fred-th}\nZ_N({M_1}, \\cdots M_{N-1})= {1\\over M_1! \\cdots M_{N-1}!} \\int {\\rm det}_{m,n} \\left(R(x_m, x_n)\\right){\\rm d}^M x, \n\\end{equation}\nwhere\n\\begin{equation}\nM=\\sum_{j=1}^{N-1} M_j,\n\\end{equation}\nand \n\\begin{equation}\nR (x_m, x_n) = A_{j}^{\\rm 5D}(x_m , x_n) \\,\\, \\,\\, \\,\\, \\,\\, \\text{if} \\, \\, \\,\\, \\,\\, \\,\\,\\sum_{s=0}^{j-1} M_s\\leq m \\le \\sum_{s=1}^j M_s.\n\\end{equation} We use $M_0=1$ and \n\\begin{equation} A_{j}^{\\rm 5D}(x_m , x_n)\\end{equation}\ndenotes the kernel of the operator ${\\rm A}_{j}^{\\rm 5D}$ defined in \\eqref{rad}.\nThe explicit expression for a kernel of the form \\begin{equation}\\rho_{n,m,\\xi }= \\left({\\rm e}^{\\mathsf x_1}+ {\\rm e}^{\\mathsf x_2} + {\\rm e}^{-m \\mathsf x_1-n \\mathsf x_2} + \\xi {\\rm e}^{-(1+m)\\mathsf x_1-(n-1)\\mathsf x_2} \\right)^{-1}\\end{equation}\nwas computed in \\cite{cgum}. Let us review how this goes. We introduce some new variables $\\mathsf q, \\mathsf p$ such that \\cite{kama,cgum} \n \\begin{equation} \\label{newv} \\begin{aligned} \\mathsf x_1= {2\\pi b \\over m+n+1}\\left({\\mathsf p+n \\mathsf q }\\right)\\quad& \\mathsf x_2=-{2\\pi b \\over m+n+1}\\left({- \\mathsf p+(m+1)\\mathsf q }\\right), \\end{aligned}\\end{equation} \n where \\begin{equation} \\hbar={2\\pi \\over m+n+1} b^2.\\end{equation}\n In particular we have\n\\begin{equation} [\\mathsf q, \\mathsf p]={{\\rm i} \\over 2 \\pi}.\\end{equation}\n Then, the kernel of $\\rho_{n,m,\\xi }$ in the momentum representation w.r.t.~the new variables \n reads \\cite{cgum} \n\\begin{equation} \\label{rhomon} \\rho_{n,m, \\xi}(p, p')= {{\\overline f_\\zeta( p)} f_\\zeta(p') \\over 2 b \\cosh\\left( \\pi {p-p'+{\\mathsf{i}} h \\over b} \\right)},\\end{equation}\nwhere\n \\begin{equation}\n\\label{fx}\\begin{aligned} \nf_\\zeta(x)=&{\\operatorname{\\Phi}_{\\mathsf{b}}(x-\\zeta + {\\rm i} n { c} ) \\over \\operatorname{\\Phi}_{\\mathsf{b}}(x-{\\mathsf{i}} (\\alpha+c))} 
{\\rm e}^{ 2 \\pi (\\alpha+ n c) x } {\\rm e}^{ -2 \\pi c n \\zeta}, \n\\end{aligned}\\end{equation}\n \\begin{equation}\n\\label{fx}\\begin{aligned} \n\\overline f_\\zeta(x)= &{\\operatorname{\\Phi}_{\\mathsf{b}}(x+{\\mathsf{i}} (\\alpha+c)) \\over \\operatorname{\\Phi}_{\\mathsf{b}}(x-\\zeta - {\\rm i} n c ) } {\\rm e}^{ 2 \\pi (\\alpha+ n c) x } {\\rm e}^{ -2 \\pi c n \\zeta}. \\end{aligned}\\end{equation}\nWe denote by $\\operatorname{\\Phi}_{\\mathsf{b}} $ the Faddeev's quantum dilogarithm \\cite{faddeev,fk} and we use\n\\begin{equation}\n{ \\alpha={b m\\over 2(m+n+1)}}, \\qquad c= {b \\over 2(m+n+1)}, \\qquad h = \\alpha+c-nc, \\qquad \\zeta={1\\over 2 \\pi b}\\log \\xi.\n\\end{equation}\nSome useful properties of the quantum dilogarithm can be found in Appendix A of \\cite{kama}.\nIn our case we specialize the above formulae to $n=1$ and $m=N-2$. Therefore, in the particular case of the $Y^{N,0}$ geometries, the partition function \\eqref{gen-fred-th} reads \n\\begin{equation} \\label{gmat} \\begin{aligned} Z_N(M_1, \\cdots, M_{N-1})=&{1\\over M_1! \\cdots M_{N-1}! } \\sum_{\\sigma \\in S_{M}}(-1)^\\sigma\\int {\\rm d} ^M x \\left(\\prod_{i=1}^{M_1} A_1^{\\rm 5D}(x_{\\sigma(i)},x_i) \\right)\\\\\n&\\left(\\prod_{i=1+M_1}^{M_1+M_2} A_2^{\\rm 5D}(x_{\\sigma(i)},x_i)\\right) \\cdots\n \\left( \\prod_{i=1+\\cdots +M_{N-2}}^{M_1+\\cdots +M_{N-1}} A_{N-1}^{\\rm 5D}(x_{\\sigma(i)},x_i)\\right)\\\\\n\\end{aligned}\\end{equation}\nwhere \n\\begin{equation} \\begin{aligned}\n &A_j^{\\rm 5D}(p,p')={\\rm e}^{-{\\rm i} \\pi b^2( j-1)^2\/N^2}{\\rm e}^{-2\\pi ( j-1)b p'\/N } \\rho_{1, N-2,\\xi}(p,p'+{{\\rm i} b ( j-1) \\over N}), \\\\\n\\end{aligned}\\end{equation}\nand \\begin{equation} \\rho_{1, N-2,\\xi}(p,p') \\end{equation} is given by \\eqref{rhomon}.\nBy using the Cauchy identity\n\\begin{equation} \\begin{aligned} &{\\prod_{1\\leq i1$, where $\\Delta_{ions}$ and $\\Delta_{neutrons}$ are doses at the same swelling level for ions and neutrons, respectively. 
As a consequence, the difference between the dose of heavy-ions and neutrons, $(\\Delta_{ions}-\\Delta_{neutrons})=\\Delta_{ions} k' (G_{ions}-G_{neutrons})\/(1+k' G_{ions} )$, increases with increasing ion dose ($\\Delta_{ions}$). Therefore, the radiation tolerance of alloys determined solely by heavy-ion irradiation results may be quantitatively overestimated by this factor, e.g., 1.38 for 304L in Figure 6(d) and 2.39 for T91 in Figure 7(c), where the differences in factors are due to different material parameters of $k'$. This means that a dose of 300 dpa or 600 dpa of ion irradiation of T91 is only equivalent to 125 dpa or 250 dpa (5\/12) of neutron irradiation in terms of void swelling, respectively. Our model indicates that in the high dose regime, increasing the irradiation dose of heavy-ions only leads to a limited increase in the equivalent neutron dose. \n\nIn our model, we assume that there is no saturation of void swelling in materials under irradiation, in agreement with most experiments. In principle, when voids are large enough, the void surfaces may become the predominant sink for all types of radiation defects. Besides, there is a possibility of a reduction of network dislocation densities at high doses. In these cases, void swelling may saturate at high irradiation doses. However, most experiments show no saturation of void swelling within a given irradiation dose. In particular, saturation of swelling is observed only under certain conditions. For instance, electron irradiation of stainless-steel samples that are too thin leads to an apparent saturation of swelling, which is actually an artifact of surface effects. The irradiation of bulk samples, on the other hand, gives rise to steady swelling without saturation. To validate our model, we have chosen experimental data that mostly show no saturation of void swelling under irradiation. Accordingly, the total sink absorption rate bias ($k$) is constant in our model. 
Therefore, our model cannot be used to predict swelling in the saturation regime. Overall, experimental evidence for a high-dose swelling saturation regime is very limited, and convincing evidence for swelling saturation did not emerge until ~100 dpa for neutron-irradiated Fe-Cr-Ni alloys\\cite{Garner2000} or ~1000 dpa for ion irradiated T91.\\cite{gigax2016radiation} Our model is proved to be applicable at such high doses.\n\nOur additional defect absorption model is application-oriented and is based on a simplified description of irradiation-induced volume swelling in alloys considering point defect production and evolution, such as those in the electron irradiation. We extended ADAM to heavy-ion and neutron irradiation cases, but the detailed in-cascade atomistic processes are not explicitly considered. We also did not consider the details of sink strength from different types of defect sinks since our focus is on the effects induced by the dose and dose rate difference. Besides, we assume that the strength of intrinsic defect sinks is approximately constant during the irradiation. The details of defect forms, such as different types of defect clusters, are also neglected. Therefore, although our model can be used to describe the whole stage of volume swelling and fit the experimental data well at different irradiation conditions, the quantitative prediction of equivalent irradiation dose at different dose rates and from low-dose to high-dose cases may not be used for all irradiation conditions. In ADAM, we have omitted the contribution of thermal vacancies at the void surface. This approximation is reasonable at intermediate temperatures but may not be suitable at high temperatures (such as $T > 0.5 T_{Melting}$). Finally, it's worth noting that it is impractical to build equivalent relations based on the present model for all the irradiation effects produced by neutrons and heavy-ions, such as irradiation-induced hardness, embrittlement and creep. 
Hence, it remains highly challenging to study and quantify the comprehensive performance of materials under high-dose neutron irradiation by means of heavy-ion or self-ion irradiation techniques.\n\nIn summary, considering the main characteristics of defect evolution under energetic particle irradiation, this paper constructs an integrated kinetic rate theory model to correlate the volume swelling in alloys under different irradiation conditions. Specifically, our model introduces an additional defect sink in the defect master equations when considering the differences in dose rates of irradiations. With the irradiation-generated additional defect sink, our model provides a unified description of volume swelling at different doses and dose rates, especially for different energetic particles. Based on ADAM, quantitative methods are established to predict irradiation swelling in alloys for advanced nuclear energy systems, from low doses to high doses, and from one dose rate to another. It can particularly be used for predicting the equivalent dose from heavy-ion irradiation to neutron irradiation experiments. In addition, the parameters $\\Delta_0$ and $\\alpha$ in ADAM capture the swelling features well, and the combined parameter $\\Delta_0\/\\alpha$ is proposed to quantitatively characterize the relative swelling resistance of alloys for nuclear plants. 
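The ion-to-neutron dose equivalence discussed above can be sketched numerically. The following is a minimal illustration, not part of ADAM itself: it only assumes that equal swelling in Eq. (13) requires equal effective dose $\Delta\/(1+k'G_{dpa})$, and the parameter values are illustrative placeholders rather than the fitted values for 304L or T91.

```python
# Sketch of the dose-equivalence relation implied by Eq. (13):
# equal swelling requires equal effective dose D / (1 + k' * G).
# All numerical values below are illustrative placeholders.

def equivalent_neutron_dose(d_ion, k_prime, g_ion, g_neutron):
    """Neutron dose (dpa) giving the same swelling as ion dose d_ion (dpa)."""
    return d_ion * (1.0 + k_prime * g_neutron) / (1.0 + k_prime * g_ion)

if __name__ == "__main__":
    k_prime = 2.0e3      # placeholder sink parameter k' (s/dpa)
    g_ion = 1.0e-3       # typical heavy-ion dose rate (dpa/s)
    g_neutron = 1.0e-7   # typical reactor neutron dose rate (dpa/s)

    d_ion = 300.0        # ion dose (dpa)
    d_n = equivalent_neutron_dose(d_ion, k_prime, g_ion, g_neutron)

    # The dose difference reproduces the relation quoted in the text:
    # D_ion - D_n = D_ion * k' * (G_ion - G_n) / (1 + k' * G_ion).
    diff = d_ion * k_prime * (g_ion - g_neutron) / (1.0 + k_prime * g_ion)
    assert abs((d_ion - d_n) - diff) < 1e-9
    print(d_ion, d_n, d_ion / d_n)  # the ratio is the overestimation factor
```

Since the heavy-ion dose rate exceeds the neutron one, the equivalent neutron dose is smaller than the ion dose, i.e.\ the overestimation factor is greater than one, as in the 304L and T91 examples above.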
Therefore, this work provides an innovative practical solution to evaluate the swelling effects of the structural materials in reactor cores and a fast screening approach to assess the performance of newly developed alloys for advanced nuclear energy systems.\n\n\\section*{Materials and Methods}\n\nThe governing equations for interstitials and vacancies under electron irradiation are, respectively,\n\n\\begin{equation}\n \\frac{\\partial c_i}{\\partial \\tau}=G_i-R_{iv}c_i c_v-S_i c_i,\n\\end{equation}\n\n\\begin{equation}\n \\frac{\\partial c_v}{\\partial \\tau}=G_v-R_{iv}c_i c_v-S_v c_v.\n\\end{equation}\n\nIn these two equations, $c_i$ and $c_v$ are the concentrations of interstitials and vacancies, respectively, $G_i$ and $G_v$ are the corresponding generation rates, $S_i$ and $S_v$ are the sink strengths, and $R_{iv}$ is the recombination rate of interstitials and vacancies. If local equilibrium is reached ($\\tau$ is sufficiently large), meaning steady-state conditions, and assuming $G_i=G_v=G$, the solutions of Eqs. (6) and (7) give the defect concentrations:\n\n\\begin{equation}\n c_i=\\frac{S_v}{S_i} c_v, \\quad c_v=\\frac{2G}{S_v+\\sqrt{4GR_{iv} \\frac{S_v}{S_i}+S_v^2}}.\n\\end{equation}\n\nThe steady-state solutions are used in the following volume swelling model, since it is generally considered that defects reach their equilibrated states quickly compared to the time scale of void growth \\cite{Was2007}. The classical rate theory for void growth is \\cite{Was2007}:\n\n\\begin{equation}\n \\frac{dr}{dt}=\\frac{\\Omega}{r}[D_v c_v-D_i c_i],\n\\end{equation}\n\nwhere $r$ is the void radius, $\\Omega$ is the defect volume, and $D_i$ and $D_v$ are the diffusion coefficients of interstitials and vacancies. Here we have omitted the concentration of thermal vacancies at the void surface. After inserting the solutions for $c_{v,i}$, we have\n\n\\begin{equation}\n \\frac{dr}{dt}=\\frac{\\Omega}{r}[D_v S_i-D_i S_v]\\frac{-1+\\sqrt{1+\\frac{4GR_{iv}}{S_i S_v}}}{2R_{iv}}.\n\\end{equation}\n\nWe consider the sink-dominant case, i.e., $\\frac{4GR_{iv}}{S_i S_v}\\ll 1$. In this case, Eq. 
(10) can be simplified to:\n\n\\begin{equation}\n \\frac{dr}{dt}=\\frac{\\Omega}{r}[D_v S_i-D_i S_v]\\frac{G}{S_i S_v}.\n\\end{equation}\n\nThe solution of this equation is $r=\\left[t \\frac{2G\\Omega}{S_{v0}+\\gamma_v \\delta_v G} (D_v-\\frac{D_i}{b}) \\right]^{1\/2}$. After inserting the assumption $b=\\beta b_0$, the total void-induced swelling can be written as:\n\n\\begin{equation}\n swelling(\\%)=\\frac{\\Delta V}{V_0}=\\frac{\\frac{4\\pi r^3}{3}-\\frac{4\\pi r_0^3}{3}}{V_0}=\\frac{4\\pi}{3V_0}\\left[ t \\frac{2G\\Omega}{S_{v0}+\\gamma_v \\delta_v G} \\frac{D_v\\beta S_{i0}-D_i S_{v0}}{\\beta S_{i0} } \\right]^{3\/2}-\\frac{4\\pi r_0^3}{3V_0}.\n\\end{equation}\n\nConsidering the relations among the dose ($\\Delta_{dpa}$, dpa), dose rate ($G_{dpa}$, dpa\/s), and defect generation rate ($G$) for electrons, $G_{dpa,ele}=\\eta G$ and $\\Delta_{dpa,ele}=G_{dpa,ele} t$, we can rewrite the swelling as\n\n\\begin{equation}\n swelling(\\%)\n =\\frac{4\\pi}{3V_0}\\left[ 2\\Omega \\frac{D_v \\beta S_{i0} -D_i S_{v0} }{\\beta \\eta S_{i0} S_{v0} } \\frac{\\Delta_{dpa}}{1+\\frac{\\gamma_v \\delta_v}{\\eta S_{v0}} G_{dpa}} \\right]^{3\/2}-\\frac{4\\pi r_0^3}{3V_0}\n =\\alpha \\left[ \\frac{\\Delta_{dpa,ele}}{1+k'G_{dpa,ele}}\\right]^{3\/2}-c. \n\\end{equation}\n\n\n\n\n\\section*{Data Availability}\nAll data generated or analyzed during this study are included in this published article (and its supplementary information files).\n\n\\section*{Code Availability}\nThe phase-field modeling program used in this paper is deposited at https:\/\/github.com\/pku-ion-beam\/PFM.\n\n\n\n\\section*{Acknowledgments}\nThe authors thank Dr. Roger Stoller and Dr. Yong Dai for valuable discussions and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 11935004 and Grant No. 12192280). S. Zhao was supported by the City University of Hong Kong (Grant No. 9610425).\n\n\n\\section*{Contributions}\nY. W. and S. J. Z. conceived the research. W.G. and S.Z. 
performed the theoretical investigation. C. W., H. L., Y. S., and J. X. assisted in data collection and analysis. S. Z., Y. W., W. G., C. W. and S. J. Z. prepared the manuscript. All authors discussed the results, commented on the manuscript, and contributed to the writing of the paper.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nThe fundamental stability of edge states \\cite{Halperin82, Laughlin81, Hatsugai93-edge} in the quantum Hall effect (QHE) implies that the phase is topological. The bulk is gapped but, with boundaries, there exist localized modes in the gap. Conversely, non-trivial topological phases are necessarily associated with edge states as generic localized modes near the boundaries of the system or near impurities. This applies to any of the topological phases, including the fractional QH states, quantum spin Hall states as topological insulators, the Haldane phase of integer spin chains, and so on. Generically, the emergence of the edge states is reduced to a feature of the bulk; this is the bulk-edge correspondence \\cite{Hatsugai93-bec}.\n\nThat the phase is ``topological'' implies that it is ``hidden'' \\cite{Wen89}, in the sense that one can hardly observe its character in the bulk, even though the phase is characterized by a topological invariant \\cite{Thouless82,KM05,Bernevig06}. Further, such invariants are mostly not physical observables. 
Even in the QHE, whether the Hall conductance observed in experiments is that of the bulk or of the edges is still controversial. The bulk topological character is hidden, but the edge states are real observables. A typical example is the surface Dirac cone of the three-dimensional topological insulators, which is observed by angle-resolved photoemission spectroscopy. The bulk-edge correspondence is thus a rigorous\/conceptual bridge between the bulk and the system with boundaries, which was, at first, rigorously shown for the QHE \\cite{Hatsugai93-bec} and is justified today for the others as well \\cite{Kitaev01,Ryu02,KM05,Bernevig06,Qi06,Schulz00,Graf13}. \n\nThis reverse way of thinking is important, since one can realize why the edges are there from a universal point of view. Some of them are historically well known, such as dangling bonds of semiconductors, and some are new, such as the chiral edge modes of photonic crystals \\cite{Haldane08,Wang08} and of photons \\cite{Hafezi14}. This analogy even extends to the classical Newton equation \\cite{Prodan09,Kane14,Kariyado15}. It implies the universal feature of the bulk-edge correspondence.\n\nUltimate developments of recent quantum technology also trigger further studies of topological phases, such as the realization of the Hofstadter butterfly by a synthetic gauge field in an artificial lattice using cold atoms \\cite{Dalibard11}. Many of the theoretical ideas for the topological phases and gauge structures of matter are now directly measured experimentally, where quantum coherence and structures are under ultimate control and one can handle or even synthesize dimensions \\cite{Dalibard11,Fallani15}. One of the clearest and most fundamental examples is the topological pump proposed a long time ago \\cite{Thouless83}, where time is used as a synthetic dimension and the topological effects of the QHE in 2D are realized in a simple one-dimensional system. Even though the proposal is as old as the QHE, it took a long time until its experimental realization in cold atoms \\cite{Nakajima15,Lohse15}, and intense studies have just begun. Although the bulk-edge correspondence is fundamental in topological phases, it has never been applied to this topological pump. Here we first clarify the role of the edge states in the topological pump. In this letter, a new expression for the pumped charge is derived from the Berry connection for the system with boundaries. In the adiabatic limit, the contributions of the bulk and the edge are clearly separated. The pumped charge is carried by the bulk, but its quantization is guaranteed by the locality of the edge states. Local $U(1)$ gauge symmetry, i.e., charge conservation, is crucially important but plays a role different from that in the QHE. Namely, in the QHE, the right- and left-edge states are always paired on the Fermi surface, whereas in the pumping they can be observed separately at different times. Physically this is a fractionalization of the electron into massive Dirac fermions carrying half a charge quantum.\n\nLet us consider a many-body topological pump described by the time-dependent one-dimensional Hamiltonian of free lattice fermions on $L_x$ sites\n\\begin{eqnarray*}\n H(\\theta,t )\n = \\sum_{j}^{L_x}\\big[ \n -t_x^\\theta c_{j+1}^\\dagger c_j +h.c.\n + v_j(t) c_j ^\\dagger c_j \\big], \n\\end{eqnarray*}\nwhere $t_x^\\theta=t_x e^{-{\\mathrm i}\\theta\/L_x}$, $t_x,t_y\\in\\mathbb{R}$. The twist $\\theta$ is introduced to define the current operator as well as the Berry connection for a generic many-body state. The time-dependent potential $v_j(t)$ can be arbitrary as long as it is periodic, $v_{j+q}(t)=v_j(t)$, where $q$ is a positive integer. For simplicity, we choose $ v_j(t) = -2t_y \\cos ( \\frac {2\\pi t}{T}- 2\\pi \\phi j) $, $ \\phi =p\/q$ with mutually prime $p$ and $q$. 
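As a minimal numerical illustration of this model (a sketch with assumed parameters, not the calculation of this paper), one can diagonalize the snapshot Hamiltonian at $\theta=0$ with open boundaries and follow the center of mass of the occupied one-particle states over one pump cycle; the edge-state crossings of the chemical potential discussed later appear as sudden jumps:

```python
# Minimal sketch (assumed parameters: t_x = t_y = 1, phi = 1/3, T = 2*pi,
# chemical potential mu in the lowest bulk gap).  Builds the open-boundary
# snapshot Harper Hamiltonian H(0, t) and tracks the center of mass
# P(t) = sum_j x_j rho_j of the occupied states over one pump cycle.
import numpy as np

def harper_open(t, Lx=60, tx=1.0, ty=1.0, phi=1.0/3.0, T=2*np.pi):
    """Snapshot Hamiltonian H(0, t) with open boundary conditions."""
    j = np.arange(1, Lx + 1)
    H = np.diag(-2.0 * ty * np.cos(2*np.pi*t/T - 2*np.pi*phi*j))
    H += np.diag(-tx * np.ones(Lx - 1), 1) + np.diag(-tx * np.ones(Lx - 1), -1)
    return H

def center_of_mass(t, mu=-1.0, Lx=60):
    """P(t) for the snapshot ground state in contact with a reservoir at mu."""
    H = harper_open(t, Lx=Lx)
    E, psi = np.linalg.eigh(H)
    occ = psi[:, E < mu]                    # occupied one-particle states
    rho = (np.abs(occ) ** 2).sum(axis=1)    # density profile rho_j
    x = (np.arange(1, Lx + 1) - Lx / 2) / Lx  # x_j = (j - j0)/Lx
    return float(x @ rho)

# One pump cycle: P drifts smoothly (bulk transport) and jumps by ~1/2
# whenever an edge level crosses mu and changes its occupation.
P = [center_of_mass(t) for t in np.linspace(0, 2*np.pi, 61)]
```

Since $H(0,t)$ is periodic in $t$ with period $T$, the center of mass returns to its initial value after a full cycle, and the net smooth drift between the discontinuities gives the quantized pumped charge.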
This is equivalent to the 2D QHE under a periodic potential \\cite{Thouless82,Hatsugai93-edge,Hatsugai93-bec}. When mapping back to the QHE in 2D, the time as a synthetic dimension corresponds to the momentum in the $y$ direction ($\\frac {2\\pi t}{T}\\to k_y $ \\cite{Hatsugai93-edge}). By writing the one-particle state as $|\\psi \\rangle =\\sum_j\\psi_j c_j ^\\dagger | 0 \\rangle $, the wave function $\\psi_j$ satisfies the Harper equation, $-t_x^\\theta \\psi_{j-1}-2 t_y \\cos (2\\pi\\frac {t}{T}- 2\\pi\\phi j )\\psi_{j}-(t_x^\\theta)^* \\psi_{j+1}=E\\psi_{j}$. Time reversal (TR) is broken by a finite $\\theta$ but is recovered by taking the $\\theta\\to 0$ limit after the calculation. The pumping period, $T$, controls the adiabaticity of the pumping. For simplicity, we take $T=2\\pi$ and control the adiabaticity by the energy scales $t_x$ and $t_y$. Two boundary conditions are discussed. One is the open boundary condition with edges, $\\ed$: $\\psi_0=\\psi_{L_x}=0$, and the other is the periodic one for the bulk, $\\bl$: $\\psi_{j+L_x}=\\psi_j$. The many-body eigenstate of the snapshot Hamiltonian $H(t)$ is given by specifying a set of occupied one-particle states $\\{\\ell_L\\}$ as $|\\alpha \\rangle =\\prod_{\\ell_L}\\big(\\sum_j\\psi_{j,\\ell_L}c_j ^\\dagger\\big) |0 \\rangle $. Using the current operator\n \\begin{eqnarray*}\n J\n&=& \\frac {1}{L_x} ({\\mathrm i} \\frac {t_x}{\\hbar } e^{-{\\mathrm i} \\theta\/L_x})\\sum_j c_{j+1} ^\\dagger c_j + h.c.\n= \\hbar ^{-1} \\partial_\\theta H(\\theta,t ),\n \\end{eqnarray*}\nthe measured current at time $t$, $ \\delta j = \\langle G(t)|J|G(t) \\rangle - \\langle g(t) | J| g(t) 
\\rangle $, is evaluated by the adiabatic approximation \\cite{Thouless83}, assuming a finite energy gap above the snapshot ground state $|g(t) \\rangle $, $H(\\theta,t )|g \\rangle = | g \\rangle E$. The state $|G(t) \\rangle $ is the true many-body state that obeys the time-dependent Schr\\\"odinger equation, ${\\mathrm i} \\hbar \\partial _t |G(t) \\rangle =H(\\theta,t )|G(t) \\rangle $. It reads $ \\delta j = -{\\mathrm i} B$, $ B= \\partial _\\theta A_t -\\partial _t A_\\theta $, where $B$ is the field strength of the Berry connection $A_\\mu = \\langle g|\\partial_\\mu g \\rangle $, $\\mu =\\theta,t $. Note that $ \\langle g(t) | J| g(t) \\rangle| _{\\theta =0}=0$ due to the TR invariance. Since the field strength $B$ is invariant under the gauge transformation $| g ^\\prime \\rangle = |g \\rangle e^{{\\mathrm i} \\chi}$, $ A ^\\prime _\\mu = \\langle g ^\\prime | \\partial _\\mu g ^\\prime \\rangle =A_\\mu +{\\mathrm i} \\partial _\\mu \\chi $, let us take a temporal gauge by imposing the gauge condition $A^{(t)}_t=0$, which is quite useful for the discussion of the pumping. Then the field strength is given as $B = -\\partial _t A^{(t)}_\\theta $. Now we have the pumped charge during the time interval $[t_a,t_b]$ as\n\\begin{eqnarray*}\n \\Delta Q_{[t_a,t_b]}\n &=& \\int_{t_a}^{t_b} dt \\, \\delta j_x = {\\mathrm i} \\int_{t_a}^{t_b} dt \\, \\partial _t A^{(t)}_\\theta\n ={\\mathrm i} A^{(t)}_\\theta(t)\\bigg|^{t_b}_{t_a}, \n\\end{eqnarray*}\nwhere the integration over $t$ has been carried out, but it needs special care, as discussed later. For any (regularly) gauge-fixed many-body state $| g \\rangle $, the state in the temporal gauge is given by $ | g ^{(t)}(\\theta,t)\\rangle = | g (\\theta,t)\\rangle e^{{\\mathrm i} \\chi(\\theta,t)} $, with the phase factor $\\chi$, which is path dependent and is explicitly given as $ \\chi (\\theta,t)= {\\mathrm i} \\int_0^t d \\tau\\, A_t(\\theta ,\\tau ) + {\\mathrm i} \\int_0^\\theta d \\vartheta\\, A_\\theta (\\vartheta,0)$, where the path is piecewise linear, $C:(0,0)\\to (\\theta,0 )\\to(\\theta,t)$. It surely satisfies the gauge condition $ A^{(t)}_t (\\theta,t)= 0$, and one has\n\\begin{eqnarray*}\n A^{(t)}_\\theta (\\theta,t) &=& \nA_\\theta (\\theta,t)\n -\\partial _\\theta \n \\int_0^t d \\tau\\, A_t (\\theta,\\tau) \n- A_\\theta (\\theta,0).\n\\end{eqnarray*}\nThis is gauge invariant: it can be confirmed directly, but it is clear in a discretized form \\cite{Fukui05}, as shown in Fig.\\ref{fig:t-gauge}. Substituting this into $\\Delta Q$ above, we have the pumped charge in a {\\it novel} gauge-invariant form.\n\\begin{figure}[h]\n\\includegraphics[width=80mm]{fig1n.eps}\n\\caption{\\label{fig:t-gauge} \n (a) \n The Berry connection $A^{(t)}$ in the temporal gauge\n and (b) its lattice analogue \\cite{Fukui05}.}\n\\end{figure}\nThe discussion up to this point is general and applicable both with and without edges.\n\nNow let us consider a system with edges by imposing the open boundary condition. In this case, 
the twist of the hopping, $e^{-{\\mathrm i} \\theta\/L_x}$, is gauged out by the many-body gauge transformation\n\\begin{eqnarray*}\n {\\cal U}(\\theta) &=& \\prod_{j=1} e^{-{\\mathrm i} \\theta n_j (j-j_0)\/L_x},\n \\quad j_0=L_x\/2,\n\\end{eqnarray*}\nwhich acts on the fermion operator as $ {\\cal U} c_j {\\cal U} ^\\dagger = e^{{\\mathrm i} \\theta j\/L_x}c_j$, and we have $| g(\\theta )\\rangle = {\\cal U}|g_0 \\rangle $, where $|g_0 \\rangle $ is a snapshot ground state of the Hamiltonian $H(0,t)$. Then, noting that $A_t=\\langle g| \\partial _t g \\rangle =\\langle g_0| \\partial _t g_0 \\rangle $ is $\\theta $ independent and $ A_\\theta = \\langle g_0|{\\cal U}^\\dagger \\partial _\\theta {\\cal U} | g_0 \\rangle $, the Berry connection {\\it in the temporal gauge } for the system with edges is\n\\begin{eqnarray*}\n A^{(t),\\ed}_\\theta \n &=& -{\\mathrm i} \\big[ P(t)-P(0)\\big],\\ \n P(t) = \\sum\\nolimits_j x_j \\rho_j(t),\n \\label{eq:CM}\n\\end{eqnarray*}\nwhere $ \\rho_j(t) = \\langle g_0(t)| n_j | g_0 (t)\\rangle$, $ x_j = \\frac {j-j_0}{L_x} $, and $P(t) $ is the center of mass (CM). Now we have the pumped charge as\n\\begin{eqnarray*}\n \\Delta Q_{[t_a,t_b]} &=& P(t_b)-P(t_a).\n\\end{eqnarray*}\nNote that this is only well defined for a system with boundaries. We also stress that the CM derived here is distinct from the Zak phase for infinite systems. We assume $P(t)$ is a CM measured for the {\\it snapshot ground state} {\\it in contact with a particle reservoir}, that is, the system is specified by the chemical potential $\\mu $ and the temperature is sufficiently low. It should also be distinguished from the CM of {\\it the time-dependent wave function}, $ \\sum_j x_j \\langle G(t)| n_j | G (t)\\rangle $, which has recently been observed in real experiments \\cite{Nakajima15,Lohse15,Wang:2013fk_pump}. Even though the Fermi energy is in the bulk gap, when the one-particle energy of the edge state coincides with the Fermi energy, {\\it the many-body gap} of the snapshot Hamiltonian necessarily closes and the edge state suddenly becomes occupied\/unoccupied (see Fig.\\ref{fig:example}). This sudden change of the snapshot ground state causes singularities (discontinuities) in $P$, since the edge state is spatially localized and its contribution to the normalized CM is $\\pm 1\/2 $ in the limit $L_x\\to \\infty$. This is inevitable, since a topologically non-trivial ground state is associated with edge states passing through the gap. Then, labeling the gap closing times by $t_i$'s ($t_i