\\section{Introduction}\n\\label{introduction}\n\n\nIn general relativity and cosmology, our knowledge about spatially\nhomogeneous cosmological models has increased substantially over the\nyears, and we are able to say that, for a large number of models,\nthe qualitative behaviour of solutions is now well understood;\nsee~\\cite{waiell97} for an overview. The majority of results,\nhowever, concerns solutions of the Einstein equations coupled to a\nperfect fluid (usually with a linear equation of state). It is thus\nimportant to note that these results are in general not robust,\ni.e., not structurally stable, under a change of the matter model;\nsignificant changes of the qualitative behaviour of solutions occur,\nfor instance, for collisionless matter.\n\nSeveral fundamental results on spatially homogeneous diagonal models of\nBianchi type~I with collisionless matter have been obtained in~\\cite{ren96}.\nDiagonal locally rotationally symmetric (LRS) models have been investigated\nsuccessfully by using dynamical systems methods, see~\\cite{rentod99} for the\ncase of massless particles, and~\\cite{renugg00,ren02} for the massive case.\nIn particular, solutions have been found whose qualitative behaviour is\ndifferent from that of any perfect fluid model of the same Bianchi type.\n\nThe purpose of this article is to re-investigate the diagonal (non-LRS)\nBianchi type~I models with collisionless matter. 
Our analysis is based on\ndynamical systems techniques, which enable us to obtain a more detailed\npicture of the global dynamics than the one previously given.\nIn particular, we show that the dynamical behaviour toward\nthe past singularity of the collisionless matter model\ndiffers considerably from that of the Bianchi type~I perfect fluid model.\n\nThe outline of the paper is as follows. In Section~\\ref{einsteinvlasov}\nwe reformulate Einstein's field equations for the diagonal Bianchi\ntype~I case with collisionless matter as a reduced \ndimensionless dynamical system on a compact state space. In\nSection~\\ref{invfixed} we give the fixed points of the system and list\nand discuss a hierarchy of invariant subsets of the state space,\nwhich is associated with a hierarchy of monotone functions. In\nSection~\\ref{locglo} we first present the results of the local\ndynamical systems analysis; subsequently, we focus on\nthe analysis of the global dynamics: we establish two theorems that\nformulate the future and past asymptotic behaviour of solutions,\nrespectively. As regards the future asymptotics, we show that all\nmodels isotropize asymptotically toward the future. The past\nasymptotic behaviour is more complicated, since there exist several\ntypes of past asymptotic behaviour; in particular, we establish that\nthe past attractive set resides on a set that contains a so-called\nheteroclinic network. The proofs of the theorems are based on methods\nfrom global dynamical systems analysis; in particular, we exploit\nthe hierarchy of monotone functions in conjunction with the\nmonotonicity principle. Finally, in Section~\\ref{conc} we conclude with\nsome remarks about our results and their implications.\nAppendix~\\ref{dynsys} provides a brief introduction to relevant\nbackground material from the theory of dynamical systems; in\nparticular, we cite the monotonicity principle. 
The proofs of some of\nthe statements in the main text are given in Appendix~\\ref{FRWLRS}\nand~\\ref{futureproof}. In Appendix~\\ref{Si0space} we discuss the\nphysical interpretation of one of the most important boundaries of\nour state space formulation.\n\n\n\\section{The reflection-symmetric Bianchi type~I Einstein-Vlasov system}\n\\label{einsteinvlasov}\n\nIn a spacetime with Bianchi type~I symmetry the spacetime metric can\nbe written as\n\\begin{equation}\nd s^2 = -d t^2 + g_{i j}(t) d x^i d x^j\\:,\\quad i,j=1,2,3\\:,\n\\end{equation}\nwhere $g_{i j}$ is the induced Riemannian metric on the spatially\nhomogeneous surfaces $t=\\mathrm{const}$. Since the metric is\nconstant on $t=\\mathrm{const}$, it follows that the Ricci tensor of\n$g_{i j}$ vanishes. Einstein's equations, in units $G=1=c$,\ndecompose into the evolution equations,\n\\begin{subequations}\\label{einsteinvlasovsystem}\n\\begin{equation}\\label{evolution}\n\\partial_t g_{i j} = -2 k_{i j} \\:,\\quad\n\\partial_t k^i_{\\:\\, j} = \\mathrm{tr}\\hspace{0.15ex} k\\: k^i_{\\:\\, j} - 8 \\pi T^i_{\\:\\, j} +\n4 \\pi \\delta^i_{\\:\\, j} ( T^k_{\\:\\, k} -\\rho) - \\Lambda \\delta^i_{\\:\\, j}\\:,\n\\end{equation}\nand the Hamiltonian and momentum constraint\n\\begin{equation}\\label{constraints} (\\mathrm{tr}\\hspace{0.15ex} k)^2\n- k^i_{\\:\\, j} k^j_{\\:\\, i} - 16 \\pi \\rho -2 \\Lambda = 0\\:,\\qquad\nj_k = 0\\:.\n\\end{equation}\nHere, $k_{i j}$ denotes the second fundamental form of the surfaces $t=\\mathrm{const}$.\nThe matter variables are defined as components of the energy-momentum\ntensor $T_{\\mu\\nu}$ ($\\mu=0,1,2,3$), according to $\\rho = T_{00}$,\n$j_k= T_{0k}$; $T_{i j}$ denotes the spatial components.\nThe cosmological constant $\\Lambda$ is set to zero in the following;\nthe treatment of the case $\\Lambda>0$ is straightforward once the case $\\Lambda=0$ has been solved,\ncf.~the remarks in the conclusions.\n\nIn this paper we consider collisionless matter (Vlasov matter),\ni.e., 
an ensemble of freely moving particles described by a\nnon-negative distribution function $f$ defined on the mass shell bundle\n$PM\\subseteq TM$ of the spacetime; for simplicity we consider\nparticles with equal mass $m$.\nThe spacetime coordinates $(t,x^i)$\nand the spatial components $v^i$ of the four-momentum $v^\\mu$\n(measured w.r.t.\\ $\\partial\/\\partial x^\\mu$) provide local\ncoordinates on $PM$ so that $f = f(t,x^i,v^j)$. Compatibility with\nBianchi type~I symmetry forces the distribution function $f$ to\nbe homogeneous, i.e., $f = f(t, v^j)$. The evolution equation for\n$f$ is the Vlasov equation (the Liouville equation)\n\\begin{equation}\\label{Vlasovequation}\n\\partial_t f + \\frac{v^j}{v^0} \\partial_{x^j} f -\n\\frac{1}{v^0} \\Gamma^j_{\\mu\\nu} v^\\mu v^\\nu \\partial_{v^j} f =\n\\partial_t f + 2 k^j_{\\:\\, l} v^l \\partial_{v^j} f = 0 \\:.\n\\end{equation}\nThe energy-momentum tensor associated with the distribution\nfunction $f$ is given by\n\\[\nT^{\\mu\\nu} = \\int f v^\\mu v^\\nu \\mathrm{vol}_{PM}\\:,\n\\]\nwhere $\\mathrm{vol}_{PM} = (\\det g)^{1\/2} v_0^{-1} d v^1 d v^2 d\nv^3$ is the induced volume form on the mass shell; $v_0$ is\nunderstood as a function of the spatial components, i.e., $v_0^2 =\nm^2 + g_{i j} v^i v^j$. The components $\\rho$, $j_k$, and $T_{i j}$,\nwhich enter in~\\eqref{evolution} and~\\eqref{constraints} can thus be\nwritten as\n\\begin{align}\n\\label{matterrho}\n\\rho & = \\int f \\left(m^2 + g^{i j} v_i v_j\\right)^{1\/2}\n(\\det g)^{-1\/2} d v_1 d v_2 d v_3\\:, \\\\\n\\label{matterj}\nj_k & = \\int f v_k (\\det g)^{-1\/2} d v_1 d v_2 d v_3 \\:,\\\\\n\\label{matterT}\nT_{i j} & = \\int f v_i v_j\n\\left(m^2 + g^{k l} v_k v_l\\right)^{-1\/2}\n(\\det g)^{-1\/2} d v_1 d v_2 d v_3 \\:.\n\\end{align}\n\\end{subequations}\nThe Einstein-Vlasov system~\\eqref{einsteinvlasovsystem} is usually\nconsidered for particles of mass $m>0$, however, the system also\ndescribes massless particles if we set $m=0$. 
(For a detailed\nintroduction to the Einstein-Vlasov system we refer to~\\cite{and05}\nand~\\cite{ren04}.)\n\nThe general spatially homogeneous solution\nof the Vlasov equation~(\\ref{Vlasovequation}) in\nBianchi type~I is\n\\begin{equation}\\label{fisf0}\nf(t,v^i) = f_0(v_i)\\:,\n\\end{equation}\nwhere the $v_i$ are the covariant components of the momenta and\n$f_0$ is an arbitrary non-negative function, see~\\cite{maamah90}.\n(By inserting~(\\ref{fisf0})\ninto~(\\ref{Vlasovequation}) and using that $v_i = g_{i j}(t) v^j$\nit is easy to check that $f_0(v_i)$ is a solution.)\nThe momentum constraint in~(\\ref{constraints}) then reads\n\\begin{equation}\\label{momcons}\n\\int f_0(v_i) v_k d v_1 d v_2 d v_3 = 0\\:.\n\\end{equation}\nHenceforth, for simplicity, $f_0$ is assumed to be compactly supported.\n\nThere exists a subclass of Bianchi type~I Einstein-Vlasov models\nthat is naturally associated with the constraint~\\eqref{momcons}:\nthe class of ``reflection-symmetric'' (or ``diagonal'') models.\nThe following symmetry conditions are imposed on the initial data:\n\\begin{subequations}\\label{reflsymm}\n\\begin{equation}\\label{reflsymmf0}\nf_0(v_1, v_2,v_3) = f_0(-v_1,-v_2,v_3) = f_0(-v_1,v_2,-v_3) = f_0(v_1,-v_2,-v_3)\\:,\n\\end{equation}\n\\begin{equation}\\label{reflsymmgk}\ng_{i j}(t_0)\\:, k_{i j}(t_0) \\quad\\text{diagonal}\\:.\n\\end{equation}\n\\end{subequations}\nThese conditions ensure that $T_{i j}(t_0)$ is diagonal, hence $g_{i\nj}$, $k_{i j}$, and $T_{i j}$ are diagonal for all times by~(\\ref{einsteinvlasovsystem}). \nIn the present paper, we will be\nconcerned with this class of reflection-symmetric models.\n\nThe Einstein-Vlasov system~\\eqref{einsteinvlasovsystem} thus reduces\nto a system for six unknowns, the diagonal components of the metric\n$g_{i i}(t)$ and the second fundamental form $k^i_{\\:\\, i}(t)$ (no\nsummation). The equations are~\\eqref{evolution} and the Hamiltonian\nconstraint in~\\eqref{constraints}. 
The initial data consists of\n$g_{i i}(t_0)$, $k^i_{\\:\\, i}(t_0)$; in addition we prescribe a\ndistribution function $f_0(v_i)$ that provides the source terms in\nthe equations via~\\eqref{matterrho} and~\\eqref{matterT}.\n\nIn the following we reformulate the Einstein-Vlasov system as a\ndimensionless system on a compact state space.\nTo that end we introduce new variables and modified matter quantities. Let\n\\begin{equation}\\label{hx}\nH := -\\frac{\\mathrm{tr}\\hspace{0.15ex} k}{3}\\:, \\quad\\qquad x := \\sum_i g^{i i}\\:,\n\\end{equation}\nand define the dimensionless variables\n\\begin{subequations}\n\\begin{align}\n\\label{defdimless}\n& s_i := \\frac{g^{i i}}{x}\\; , & & \\Sigma_i := -\\frac{k^i_{\\:\\, i}}{H} - 1\\; ,\n& z & := \\frac{m^2}{m^2 + x}\\:, \\\\[1ex]\n\\text{where} \\qquad & \\!\\sum_i s_i =1\\; , & & \\sum_i\\Sigma_i = 0\\:.\n\\end{align}\n\\end{subequations}\nThe transformation from the variables $(g_{ii}, k^i_{\\:\\, i})$ to\n$(s_i,\\Sigma_i, x, H)$, where $(s_i,\\Sigma_i)$ are subject to the\nabove constraints, is one-to-one. (Note that $x$ can be obtained\nfrom $z$ when $m>0$.) 
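To make the change of variables concrete, here is a minimal numerical sketch of the map $(g_{ii}, k^i_{\:\, i}) \mapsto (s_i, \Sigma_i, z, H)$ and its inverse; the function names and sample data below are our own illustrative choices, not taken from the paper. It checks that $\sum_i s_i = 1$ and $\sum_i \Sigma_i = 0$ hold by construction and that the transformation is one-to-one when $m>0$.

```python
import numpy as np

def to_dimensionless(g_diag, k_diag, m):
    """Map the diagonal data (g_ii, k^i_i) to (s_i, Sigma_i, z, H).

    Illustrative helper; names are ours, not from the paper."""
    H = -np.sum(k_diag) / 3.0          # H = -(tr k)/3, cf. (hx)
    x = np.sum(1.0 / g_diag)           # x = sum_i g^{ii}
    s = (1.0 / g_diag) / x             # s_i = g^{ii}/x
    Sigma = -k_diag / H - 1.0          # Sigma_i = -k^i_i/H - 1
    z = m**2 / (m**2 + x)              # z = m^2/(m^2 + x)
    return s, Sigma, z, H

def from_dimensionless(s, Sigma, z, H, m):
    """Inverse map; requires m > 0 so that x is recoverable from z."""
    x = m**2 * (1.0 - z) / z
    return 1.0 / (s * x), -H * (Sigma + 1.0)

g = np.array([1.0, 4.0, 9.0])          # arbitrary diagonal metric data
k = np.array([-0.3, -0.5, -0.7])       # tr k < 0, i.e. an expanding epoch (H > 0)
s, Sigma, z, H = to_dimensionless(g, k, m=1.0)
assert abs(s.sum() - 1.0) < 1e-12      # sum_i s_i = 1 by construction
assert abs(Sigma.sum()) < 1e-12        # sum_i Sigma_i = 0 since H = -(tr k)/3
g2, k2 = from_dimensionless(s, Sigma, z, H, m=1.0)
assert np.allclose(g2, g) and np.allclose(k2, k)   # the map is one-to-one
```

Note that $\sum_i \Sigma_i = 0$ is automatic: summing $\Sigma_i = -k^i_{\:\, i}/H - 1$ gives $-\mathrm{tr}\, k/H - 3 = 0$ by the definition of $H$.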
By distinguishing one direction ($1$, $2$, or\n$3$), one can decompose the $s_i$ and simultaneously introduce a\ntrace-free adaptation of the shear to new $\\Sigma_\\pm$ variables as is\ndone in, e.g.,~\\cite{waiell97}; however, since Bianchi type~I does\nnot have a preferred direction, we will refrain from doing so here.\n\nNext we replace the matter quantities $\\rho$, $T^i_{\\:\\, i}$ (no\nsummation) by the dimensionless quantities\n\\begin{equation}\n\\Omega :=\\frac{8\\pi\\rho}{3 H^2}\\,,\\qquad w_i := \\frac{T^i_{\\:\\,\ni}}{\\rho}\\,,\\qquad w := \\frac{1}{3} \\sum_i w_i = \\frac{1}{3}\n\\frac{\\sum_i T^i_{\\:\\, i}}{\\rho}\\,.\n\\end{equation}\nExpressed in the new variables, $w_i$ can be written as\n\\begin{equation}\\label{omegai}\nw_i = \\frac{(1-z) s_i {\\displaystyle\\int} f_0\\, v_i^2\n\\left[z+(1-z) \\sum_k s_k \\, v_k^2\\right]^{-1\/2} d v_1 d v_2 d v_3}%\n{{\\displaystyle\\int} f_0 \\left[z+(1-z) \\sum_k s_k \\,\nv_k^2\\right]^{1\/2} d v_1 d v_2 d v_3}\\:.\n\\end{equation}\n\nFinally, let us introduce a new dimensionless time variable $\\tau$ defined by\n\\begin{equation}\n\\partial_\\tau = H^{-1}\\partial_t\\:;\n\\end{equation}\nhenceforth a prime denotes differentiation w.r.t.\\ $\\tau$.\n\nWe now rewrite the Einstein-Vlasov equations as a set of dimensional\nequations that decouple for dimensional reasons and a reduced system\nof dimensionless coupled equations on a compact state space. 
The\ndecoupled dimensional equations are\n\\begin{subequations}\n\\begin{align}\n\\label{Heq}\nH^\\prime &= -3 H \\left[1 -\\frac{\\Omega}{2} (1-w)\\right]\\\\\n\\label{xeq}\nx^\\prime &= -2 x \\left(1 + \\sum_k \\Sigma_k s_k \\right)\\:.\n\\end{align}\n\\end{subequations}\nThe reduced dimensionless system consists of the Hamiltonian\nconstraint, cf.~\\eqref{constraints},\n\\begin{equation}\\label{omega}\n1- \\Sigma^2-\\Omega = 0\\:, \\qquad\\text{where}\\quad \\Sigma^2 :=\n\\sfrac{1}{6} \\sum_k \\Sigma_k^2\\:,\n\\end{equation}\nwhich we use to solve for $\\Omega$,\nand a coupled system of evolution equations\n\\begin{subequations}\\label{eq}\n\\begin{align}\n\\label{Sigeq}\n\\Sigma_i^\\prime & = -3 \\Omega \\left[ \\frac{1}{2} (1-w) \\Sigma_i -(w_i - w)\\right]\\\\\n\\label{seq}\ns_i^\\prime & = -2 s_i \\left[\\Sigma_i - \\sum_k \\Sigma_k s_k \\right] \\\\\n\\label{zeq}\nz^\\prime & = 2 z\\,(1 - z)\\left(\\,1 + \\sum_k s_k \\, \\Sigma_k\\,\\right)\\:.\n\\end{align}\n\\end{subequations}\nIn the massive case $m>0$ the decoupled equation for $x$ is\nredundant since the equation for $z$ is equivalent. In the massless\ncase $m=0$ we have $z=0$; hence, although the equation for $x$ does\nnot contribute to the dynamics, $x$ is needed in order to\nreconstruct the spatial metric from the new variables.\n\nThe dimensionless dynamical system~(\\ref{eq}) together with the\nconstraint~\\eqref{omega} describes the full dynamics of the\nEinstein-Vlasov system of Bianchi type~I. In the massive case the state space\nassociated with this system is the space of the variables\n$\\{(\\Sigma_i, s_i, z)\\}$, i.e.,\n\\begin{equation}\\label{statespace}\n\\mathcal{X} :=\n\\left\\{(\\Sigma_i, s_i,z)\\:\\big|\\: \\left(\\Sigma^2 < 1\\right)\n\\wedge \\left(s_i > 0\\right) \\wedge \\left(0< z < 1\\right)\\right\\}\\:,\n\\end{equation}\nwhere the $s_i$ and $\\Sigma_i$ are subject to the constraints\n$\\sum_k\\, s_k=1\\:,\\,\\sum_k\\,\\Sigma_k = 0$. 
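As a consistency check of the reduced system, the following sketch integrates \eqref{Sigeq}--\eqref{zeq} numerically for one illustrative compactly supported $f_0$ (a truncated Gaussian of our own choosing; all variable and function names are ours), with $w_i$ evaluated from \eqref{omegai} by brute-force quadrature. Along the numerical orbit the constraints $\sum_k s_k = 1$ and $\sum_k \Sigma_k = 0$ are preserved, and the combination $(s_1 s_2 s_3)^{-1/3}\, z/(1-z)$ grows exactly like $e^{2\tau}$, in agreement with the monotone function $M_{(1)}$ of Section~\ref{invfixed}.

```python
import numpy as np

# Illustrative compactly supported f_0 (our choice, not from the paper):
# it is even in each v_i, so the reflection symmetry (reflsymmf0) and
# the momentum constraint (momcons) are satisfied.
q = np.linspace(-1.0, 1.0, 25)
V1, V2, V3 = np.meshgrid(q, q, q, indexing="ij")
f0 = np.exp(-4.0 * (V1**2 + V2**2 + V3**2))
dv = (q[1] - q[0]) ** 3

def w_moments(s, z):
    """w_i from the moment integrals (omegai), by tensor-grid quadrature."""
    E = np.sqrt(z + (1.0 - z) * (s[0] * V1**2 + s[1] * V2**2 + s[2] * V3**2))
    den = np.sum(f0 * E) * dv
    num = np.array([np.sum(f0 * V * V / E) for V in (V1, V2, V3)]) * dv
    return (1.0 - z) * s * num / den

def rhs(y):
    """Right-hand side of (Sigeq)-(zeq); Omega from the constraint (omega)."""
    Sig, s, z = y[:3], y[3:6], y[6]
    w_i = w_moments(s, z)
    w = w_i.sum() / 3.0
    Omega = 1.0 - np.sum(Sig**2) / 6.0
    sS = float(np.dot(s, Sig))
    return np.concatenate([
        -3.0 * Omega * (0.5 * (1.0 - w) * Sig - (w_i - w)),   # Sigma_i'
        -2.0 * s * (Sig - sS),                                # s_i'
        [2.0 * z * (1.0 - z) * (1.0 + sS)],                   # z'
    ])

def M1(y):
    """(s1 s2 s3)^(-1/3) z/(1-z), proportional to m^2 (det g)^(1/3)."""
    return y[3:6].prod() ** (-1.0 / 3.0) * y[6] / (1.0 - y[6])

y = np.array([0.4, -0.1, -0.3, 0.5, 0.3, 0.2, 0.5])   # generic interior data
M1_0, h = M1(y), 0.01
for _ in range(800):                                  # RK4 up to tau = 8
    k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
    y += h/6 * (k1 + 2*k2 + 2*k3 + k4)

assert abs(y[3:6].sum() - 1.0) < 1e-9    # sum_k s_k = 1 preserved
assert abs(y[:3].sum()) < 1e-9           # sum_k Sigma_k = 0 preserved
assert np.isclose(M1(y) / M1_0, np.exp(2 * 8.0), rtol=1e-4)
assert np.abs(y[:3]).max() < 0.1         # shear decays: isotropization
```

The last two checks are properties of the exact flow, not of the chosen $f_0$: the growth rate of $M_{(1)}$ follows from \eqref{seq} and \eqref{zeq} alone, and the decay of the $\Sigma_i$ is consistent with the future asymptotics established below.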
(The inequalities for\n$s_i$ and $\\Sigma_i$ follow from the definition~\\eqref{defdimless}\nand the constraint~\\eqref{omega}, respectively.) The state space\n$\\mathcal{X}$ is thus five-dimensional.\n\nIt will turn out eventually that all solutions asymptotically\napproach the boundaries of $\\mathcal{X}$: $z=0$, $z=1$, $s_i=0$,\n$\\Omega = 0$ ($\\Leftrightarrow \\Sigma^2 =1$). This suggests\nincluding these sets in the analysis, whereby we obtain a compact\nstate space $\\bar{\\mathcal{X}}$.\n\nThe equations on the invariant subset $z=0$ of $\\bar{\\mathcal{X}}$\nare identical to the coupled dimensionless system in the case of\nmassless particles ($m=0$). We will therefore refer to the subset\n$z=0$ as the massless subset; it represents the four-dimensional\nstate space for the massless case.\n\nWe conclude this section by looking at some variables in more\ndetail. The inequality $\\Sigma^2 \\leq 1$ together with the\nconstraint $\\sum_k \\Sigma_k =0$ results in $|\\Sigma_i| \\leq 2$ for\nall $i$. Note that equality is achieved when\n$(\\Sigma_1,\\Sigma_2,\\Sigma_3) = (\\pm 2 ,\\mp 1, \\mp 1)$ and\npermutations thereof, cf.~Figure~\\ref{kasneri}. 
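These extreme values can be related to the familiar Kasner exponents; the following remark is standard and follows directly from the definitions above. Evaluating~\eqref{defdimless} on a Kasner metric $g_{ii} = t^{2 p_i}$ gives $\Sigma_i = 3 p_i - 1$, i.e., $p_i = (1+\Sigma_i)/3$, and the constraints $\sum_k \Sigma_k = 0$, $\Sigma^2 = 1$ translate into
\begin{equation*}
\sum_i p_i = 1\:,\qquad
\sum_i p_i^2 = \frac{1}{9}\Big( 3 + 2 \sum_i \Sigma_i + \sum_i \Sigma_i^2 \Big)
= \frac{1}{9}\,(3 + 0 + 6) = 1\:,
\end{equation*}
which are the defining relations of the Kasner exponents. In particular, $(\Sigma_1,\Sigma_2,\Sigma_3) = (2,-1,-1)$ corresponds to the flat (Taub) case $(p_i) = (1,0,0)$, while $(-2,1,1)$ corresponds to the non-flat LRS case $(p_i) = (-\frac{1}{3},\frac{2}{3},\frac{2}{3})$.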
The matter quantities\nsatisfy\n\\begin{equation}\\label{wrelations}\n0 \\leq w \\leq \\textfrac{1}{3} \\:, \\qquad 0 \\leq w_i \\leq 3 w \\leq 1\\:.\n\\end{equation}\nThe equalities hold at the boundaries of the state space: $w =0$ iff\n$z=1$, and $w = \\textfrac{1}{3}$ iff $z=0$; $w_i = 0$ iff $s_i = 0$,\nand $w_i = 3 w$ iff $s_i =1$ (provided that $z<1$; for $z=1$,\n$w_i = 3 w = 0$).\n\nThere exists a number of useful auxiliary equations that complement the system~\\eqref{eq}:\n\\begin{align}\n\\label{omegaeq}\n\\Omega^\\prime & =\n\\Omega\\, \\left[\\,3(1-w)\\Sigma^2 - \\sum_k w_k\\,\\Sigma_k\\,\\right] \\:,\\\\\n\\label{rhoeq} \\rho' & = -\\rho\\, [\\,3(1+w) + \\sum_k w_k\\,\\Sigma_k\\,]\n\\leq -2\\rho\\:.\n\\end{align}\nThe inequality in~\\eqref{rhoeq} follows by using $\\Sigma_i \\geq -2$\n$\\forall i$ and~\\eqref{wrelations}. This shows that $\\rho$ increases\nmonotonically toward the past, which yields a matter singularity,\ni.e., $\\rho\\rightarrow\\infty$ for $\\tau\\rightarrow -\\infty$. 
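For completeness, the chain of estimates behind the inequality in~\eqref{rhoeq} is short: since $w_k \geq 0$ and $\Sigma_k \geq -2$ for all $k$, and $\sum_k w_k = 3 w$,
\begin{equation*}
3(1+w) + \sum_k w_k\,\Sigma_k \;\geq\; 3(1+w) - 2 \sum_k w_k
\;=\; 3(1+w) - 6 w \;=\; 3(1-w) \;\geq\; 2\:,
\end{equation*}
where the last inequality uses $w \leq \frac{1}{3}$ from~\eqref{wrelations}. Integrating $\rho' \leq -2\rho$ toward the past yields $\rho(\tau) \geq \rho(\tau_0)\, e^{-2(\tau-\tau_0)}$ for $\tau \leq \tau_0$.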
It is\noften beneficial to consider the equations of the original variables\nas auxiliary equations, e.g., $(g^{ii})^\\prime= -2g^{ii}\\,(1 +\n\\Sigma_i)$.\n\n\n\n\\section{Fixed points, invariant subsets, and monotone functions}\n\\label{invfixed}\n\n\\subsection{Fixed points}\n\\label{fixed}\n\nThe dynamical system~\\eqref{eq} possesses a number of fixed points,\nall residing on the boundaries of the state space;\nsee~Table~\\ref{fixtab}.\n\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{|c|c|}\\hline\nFixed point set & defined by \\\\ \\hline\nFS$^1$ & $z=1$, $\\Sigma_j = 0 \\:\\,\\forall j$ \\\\\nKC$_i^1$ & $z=1$, $\\Sigma^2 =1$, $s_i=1$, $s_j = 0\\:\\,\\forall j \\neq\ni$\n\\\\ \\hline\nTS$_i$ & $0\\leq z\\leq 1$, $\\Sigma_i = 2$, $\\Sigma_j =-1 \\:\\,\\forall\nj \\neq i$, $s_i=0$ \\\\ \\hline\nF$^0$ & $z=0$, $\\Sigma_j = 0 \\:\\,\\forall j$, $w_j = 1\/3 \\:\\,\\forall j$ \\\\\nD$_i^0$ & $z=0$, $s_i = 0$, $\\Sigma_i =-1$, $\\Sigma_{j}=1\/2 =w_j\\:\\,\n\\forall j\\neq i$ \\\\\nQL$_i^0$ & $z=0$, $\\Sigma_i = -2$, $\\Sigma_j = 1 \\:\\,\\forall j \\neq\ni$,\n$s_i=0$ \\\\\nKC$_i^0$ & $z=0$, $\\Sigma^2 =1$, $s_i=1$, $s_j = 0 \\:\\,\\forall j\n\\neq i$ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{The fixed point sets. The range of the index $i$ is always\n$i=1\\ldots 3$. 
The superscript denotes the value of $z$; the first\nkernel letter describes the type of fixed point set; if there is no\nsecond kernel letter the fixed point set is just a point; if there\nis a second kernel letter this letter denotes the dimensionality and\ncharacter of the set --- S refers to surface, L stands for line, and\nC for circle.} \\label{fixtab}\n\\end{table}\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{Ti1}[cc][cc][1][0]{$\\text{T}^0_{i1}$}\n\\psfrag{Ti2}[cc][cc][1][0]{$\\text{T}^0_{i2}$}\n\\psfrag{Ti3}[cc][cc][1][0]{$\\text{T}^0_{i3}$}\n\\psfrag{Qi1}[cc][cc][1][0]{$\\text{Q}^0_{i1}$}\n\\psfrag{Qi2}[cc][cc][1][0]{$\\text{Q}^0_{i2}$}\n\\psfrag{Qi3}[cc][cc][1][0]{$\\text{Q}^0_{i3}$}\n\\psfrag{S1}[cc][cc][1.2][0]{$\\Sigma_1$}\n\\psfrag{S2}[cc][cc][1.2][0]{$\\Sigma_2$}\n\\psfrag{S3}[cc][cc][1.2][0]{$\\Sigma_3$}\n\\psfrag{S1m}[cc][cc][1][-90]{$\\Sigma_1=-1$}\n\\psfrag{S2m}[cc][cc][1][30]{$\\Sigma_2=-1$}\n\\psfrag{S3m}[cc][cc][1][-30]{$\\Sigma_3=-1$}\n\\psfrag{S1p}[cc][cc][0.6][90]{$\\Sigma_1=1$}\n\\psfrag{S2p}[cc][cc][0.6][30]{$\\Sigma_2=1$}\n\\psfrag{S3p}[cc][cc][0.6][-30]{$\\Sigma_3=1$}\n\\psfrag{s1}[cc][cc][1.0][90]{$\\longleftarrow\\: s_1 =0\\:\n\\longrightarrow$} \\psfrag{s2}[cc][cc][1.0][35]{$\\longleftarrow \\:s_2\n=0 \\:\\longrightarrow$} \\psfrag{s3}[cc][cc][1.0][-35]{$\\longleftarrow\n\\:s_3 =0 \\:\\longrightarrow$}\n\\includegraphics[width=0.6\\textwidth]{kasnercircle.eps}\n\\caption{The disc $\\Sigma^2\\leq 1$ and the Kasner circle $\\text{KC}_i^0$.} \\label{kasneri}\n\\end{center}\n\\end{figure}\n\n$\\text{FS}^1$ is a surface of fixed points that correspond to the\nflat isotropic dust solution. The circles $\\text{KC}_i^{1,0}$\nconsist of fixed points with constant $\\Sigma_ i$, satisfying\n$\\Sigma^2=1,\\sum_k\\,\\Sigma_k=0$, see Figure~\\ref{kasneri}; these fixed\npoints correspond to Kasner solutions. The fixed points on\n$\\text{TS}_i$ are associated with the Taub representation of the\nflat Minkowski spacetime. 
The intersection of $\\text{TS}_i$ with\n$(z=0)$ yields a line of fixed points, which we denote by\n$\\text{TL}_i^0$. The fixed points on $\\text{QL}_i^0$ correspond to\nthe non-flat LRS Kasner solutions.\n$\\text{F}^0$ is a fixed point that corresponds to the flat isotropic\nradiation solution. In Appendix~\\ref{FRWLRS} we prove that $\\text{F}^0$\nis well-defined through the equations $w_1 = w_2= w_3 = 1\/3$ (which\nare to be solved for $(s_1,s_2,s_3)\\,$). The location of\n$\\text{F}^0$ depends on the chosen distribution function, since the\nequations $w_1 = w_2= w_3 = 1\/3$ involve $f_0$.\nAnalogously, the equations $w_j = 1\/2$ ($\\forall \\: j\\neq i$) yield\na unique solution, the fixed point D$_i^0$; the location of the\npoint D$_i^0$ also depends on $f_0$. The fixed points D$_i^0$ are\nassociated with a scale-invariant LRS solution (related to a\ndistributional $f_0$; see Appendix~\\ref{Si0space} for details).\n\nThe LRS points on $\\text{KC}_i^0$ play a particularly important role\nin the following, which motivates giving them individual\nnames. We denote the three Taub points on KC$_i^0$ defined by\n$\\Sigma_j = 2$ (and thus $\\Sigma_l = -1$ $\\forall l\\neq j$) by\n$\\text{T}_{ij}^0$, while we denote the three non-flat LRS points on\nKC$_i^0$ given by $\\Sigma_j = -2$ (and thus $\\Sigma_l = 1$ $\\forall\nl\\neq j$) by Q$_{ij}^0$. The Kasner circles $\\text{KC}_j^0$ and\n$\\text{KC}_k^0$ are connected by the lines $\\text{TL}_i^0$ and\n$\\text{QL}_i^0$; the end points of the line $\\text{TL}_i^0$ are the\nTaub points $\\text{T}_{ji}^0$ and $\\text{T}_{ki}^0$; analogously,\nthe end points of $\\text{QL}_i^0$ are the points $\\text{Q}_{ji}^0$\nand $\\text{Q}_{ki}^0$. (Here, $(i,j,k)$ is an arbitrary permutation\nof $(1,2,3)$.) The remaining points $\\text{T}_{ll}^0$ and\n$\\text{Q}_{ll}^0$ ($l=1\\ldots 3$) do not lie on any of the fixed\npoint sets $\\text{TL}^0_i$ or $\\text{QL}^0_i$. 
This fixed point\nstructure is depicted in Figure~\\ref{fixedp}.\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{TL1}[cc][cc][0.8][0]{$\\text{TL}^0_1$}\n\\psfrag{TL2}[cc][cc][0.8][0]{$\\text{TL}^0_2$}\n\\psfrag{TL3}[cc][cc][0.8][0]{$\\text{TL}^0_3$}\n\\psfrag{QL1}[cc][cc][0.8][0]{$\\text{QL}^0_1$}\n\\psfrag{QL2}[cc][cc][0.8][0]{$\\text{QL}^0_2$}\n\\psfrag{QL3}[cc][cc][0.8][0]{$\\text{QL}^0_3$}\n\\psfrag{T21}[cc][cc][0.6][0]{$\\text{T}^0_{21}$}\n\\psfrag{T22}[cc][cc][0.6][0]{$\\text{T}^0_{22}$}\n\\psfrag{T23}[cc][cc][0.6][0]{$\\text{T}^0_{23}$}\n\\psfrag{Q21}[cc][cc][0.6][0]{$\\text{Q}^0_{21}$}\n\\psfrag{Q22}[cc][cc][0.6][0]{$\\text{Q}^0_{22}$}\n\\psfrag{Q23}[cc][cc][0.6][0]{$\\text{Q}^0_{23}$}\n\\psfrag{T11}[cc][cc][0.6][0]{$\\text{T}^0_{11}$}\n\\psfrag{T12}[cc][cc][0.6][0]{$\\text{T}^0_{12}$}\n\\psfrag{T13}[cc][cc][0.6][0]{$\\text{T}^0_{13}$}\n\\psfrag{Q11}[cc][cc][0.6][0]{$\\text{Q}^0_{11}$}\n\\psfrag{Q12}[cc][cc][0.6][0]{$\\text{Q}^0_{12}$}\n\\psfrag{Q13}[cc][cc][0.6][0]{$\\text{Q}^0_{13}$}\n\\psfrag{T31}[cc][cc][0.6][0]{$\\text{T}^0_{31}$}\n\\psfrag{T32}[cc][cc][0.6][0]{$\\text{T}^0_{32}$}\n\\psfrag{T33}[cc][cc][0.6][0]{$\\text{T}^0_{33}$}\n\\psfrag{Q31}[cc][cc][0.6][0]{$\\text{Q}^0_{31}$}\n\\psfrag{Q32}[cc][cc][0.6][0]{$\\text{Q}^0_{32}$}\n\\psfrag{Q33}[cc][cc][0.6][0]{$\\text{Q}^0_{33}$}\n\\psfrag{KC1}[cc][cc][1.2][0]{$\\text{KC}^0_1$}\n\\psfrag{KC2}[cc][cc][1.2][0]{$\\text{KC}^0_2$}\n\\psfrag{KC3}[cc][cc][1.2][0]{$\\text{KC}^0_3$}\n\\psfrag{s1}[cc][cc][1.0][90]{$\\longleftarrow\\: s_1 =0\\:\n\\longrightarrow$} \\psfrag{s2}[cc][cc][1.0][35]{$\\longleftarrow \\:s_2\n=0 \\:\\longrightarrow$} \\psfrag{s3}[cc][cc][1.0][-35]{$\\longleftarrow\n\\:s_3 =0 \\:\\longrightarrow$}\n\\includegraphics[width=0.8\\textwidth]{fixedpoints9.eps}\n\\caption{A schematic depiction of the fixed points on $z=0$.\nThe underlying structure is the three sides of the $s_i$-triangle $s_1+s_2+s_3 =1$:\neach point represents a disc $\\Sigma^2\\leq 1$; the vertices contain the Kasner 
circles $\\text{KC}_i^0$.\nBold lines denote the lines of fixed points $\\text{TL}_i^0$, $\\text{QL}_i^0$,\nand $\\mathrm{KC}_i^0$.} \\label{fixedp}\n\\end{center}\n\\end{figure}\n\n\n\n\\subsection{Invariant subsets and monotone functions}\n\\label{inv}\n\nThe dynamical system~\\eqref{eq} possesses a hierarchy of invariant\nsubsets and monotone functions. Since this feature of the dynamical\nsystem will turn out to be of crucial importance in the analysis of\nthe global dynamics, we give a detailed discussion.\n\n$\\mathcal{X}$: On the full (interior) state space $\\mathcal{X}$ define\n\\begin{subequations}\\label{m1eqs}\n\\begin{equation}\\label{m1}\nM_{(1)} = (s_1 s_2 s_3)^{-1\/3} \\frac{z}{1-z} \\:.\n\\end{equation}\nA straightforward computation shows\n\\begin{equation}\nM_{(1)}^\\prime = 2 M_{(1)}\\:,\n\\end{equation}\n\\end{subequations}\ni.e., $M_{(1)}$ is strictly monotonically increasing along orbits in\n$\\mathcal{X}$. Note that $M_{(1)}$ is intimately related to the\nspatial volume density since $M_{(1)} = m^2 \\det(g_{ij})^{1\/3}$.\n\n$\\mathcal{Z}^1$: This subset is characterized by $z=1$. Since $w_i = w =0$,\nthe equations for $s_i$ decouple, and the essential dynamics is\ndescribed by the equations $\\Sigma_i^\\prime = -(3\/2)(1-\\Sigma^2)\n\\Sigma_i$. (Note that these equations are identical to the Bianchi\ntype~I equations for dust --- it is therefore natural to refer to\n$\\mathcal{Z}^1$ as the dust subset.) 
Explicit solutions for these equations\ncan be obtained by noting that\n$\\Sigma_1\\propto\\Sigma_2\\propto\\Sigma_3$ for all solutions, or by\nusing that $\\Omega^\\prime = 3\\Sigma^2\\,\\Omega$.\n\n$\\mathcal{Z}^0$: This subset is the massless boundary set $z=0$.\nSince $w =1\/3$, the dynamical system~\\eqref{eq} reduces to\n\\begin{equation}\\label{z0eq}\n\\Sigma_i^\\prime = - \\Omega\\,[\\,1+\\Sigma_i -3 w_i\\,]\\:,\\qquad\ns_i^\\prime = -2 s_i \\,[\\,\\Sigma_i - \\sum\\nolimits_k s_k\\, \\Sigma_k \\,]\\:.\n\\end{equation}\nConsider the function\n\\begin{subequations}\\label{m2eqs}\n\\begin{equation}\\label{m2}\nM_{(2)} =\\left(1 -\\Sigma^2\\right)^{-1} (s_1 s_2 s_3)^{-1\/6} \\int f_0\n\\left[ \\sum\\nolimits_k s_k v_k^2\\right]^{1\/2} d v_1 d v_2 d v_3\\:.\n\\end{equation}\nThe derivative is\n\\begin{equation}\nM_{(2)}^\\prime = -2 \\Sigma^2 M_{(2)}\\:,\n\\end{equation}\nwhich yields monotonicity when $\\Sigma^2 \\neq 0$. If $\\Sigma^2 = 0$, then\n\\begin{equation}\nM_{(2)}^\\prime = 0 \\:, \\quad M_{(2)}^{\\prime\\prime} = 0 \\:, \\quad\nM_{(2)}^{\\prime\\prime\\prime} = -6 M_{(2)} \\sum_i \\left(w_i\n-\\textfrac{1}{3}\\right)^2 \\:.\n\\end{equation}\n\\end{subequations}\nHence, $M_{(2)}$ is strictly monotonically decreasing everywhere on\n$z=0$, except at the fixed point $\\text{F}^0$ (for which\n$\\Sigma^2=0$ and $w_1 = w_2 = w_3 = 1\/3$), where $M_{(2)}$ attains a\n(positive) minimum. The latter follows from the fact that\n$(1-\\Sigma^2)^{-1}$ is minimal at the point $\\Sigma_i = 0\n\\:\\,\\forall i$ and that $\\partial M_{(2)}\/\\partial s_i = (2\ns_i)^{-1} [w_i - 1\/3] M_{(2)}$.\n\n$\\mathcal{S}_i$ ($i=1,2,3$): These invariant boundary subsets are defined by\n$s_i=0$ (which yields $w_i=0$). 
There exists a monotone function on\n$\\mathcal{S}_1$,\n\\begin{equation}\\label{m3}\nM_{(3)}= (s_2 s_3)^{-1\/2}\\,\\frac{z}{1-z}\\, ,\\qquad M_{(3)}^\\prime =\n(2-\\Sigma_1)\\,M_{(3)}\\, ;\n\\end{equation}\nanalogous functions can be obtained on $\\mathcal{S}_2$ and $\\mathcal{S}_3$ through\npermutations.\n\n$\\mathcal{K}$: This boundary subset is the vacuum subset defined by\n$\\Omega=0$ (or equivalently $\\Sigma^2=1$). The $\\Sigma_i$ are\nconstant on this subset, which completely determines the dynamics of\nthe $s_i$ variables (via~\\eqref{seq} or via the auxiliary equation\nfor $g^{ii}$). The Bianchi type~I vacuum solution is the familiar\nKasner solution and we thus refer to $\\mathcal{K}$ as the Kasner subset.\n\nIntersections of the above boundary subsets yield boundary subsets\nof lower dimensions; those that are relevant for the global dynamics\nare discussed in the following.\n\n$\\mathcal{S}_i^0$ and $\\mathcal{S}_i^1$: The intersection between the subset $\\mathcal{S}_i$\nand $\\mathcal{Z}^0$ and $\\mathcal{Z}^1$ yields three-dimensional invariant subsets\n$(s_i=0) \\cap (z=0)$ and $(s_i=0) \\cap (z=1)$, respectively. On\n$\\mathcal{S}_i^0$ there exists a monotonically decreasing function:\n\\begin{equation}\\label{m4}\nM_{(4)} = (1+\\Sigma_i)^2\\:, \\qquad M_{(4)}^\\prime = - 2 \\Omega M_{(4)} \\:.\n\\end{equation}\n\n$\\mathcal{S}_{ij}$: These subsets are defined by setting $s_i =0$ and $s_j\n=0$ ($j\\neq i$), i.e., $\\mathcal{S}_{ij} = \\mathcal{S}_i \\cap \\mathcal{S}_j$. 
On $\\mathcal{S}_{ij}$,\nwe have $s_k =1$ ($k \\neq i,j$) and $w_k = 3 w$, because $w_i = w_j\n=0$.\n\n$\\mathcal{D}_i^0$: The subsets $\\mathcal{S}_i^0$ admit two-dimensional invariant\nsubsets $\\mathcal{D}_i^0$ characterized by $(z=0) \\cap (s_i=0) \\cap (\\Sigma_i =-1)$.\nOn $\\mathcal{D}_1^0$ consider the function\n\\begin{subequations}\\label{m5eqs}\n\\begin{equation}\nM_{(5)} = \\left(2 + \\Sigma_2 \\Sigma_3 \\right)^{-1} (s_2 s_3)^{-1\/4}\n\\int f_0 \\left[ s_2 v_2^2 + s_3 v_3^2\\right]^{1\/2} d v_1 d v_2 d\nv_3\\:;\n\\end{equation}\nanalogous functions can be defined on $\\mathcal{D}_2^0$ and $\\mathcal{D}_3^0$.\nEqs.~\\eqref{z0eq} imply\n\\begin{equation}\\label{m5d}\nM_{(5)}^\\prime = -\\textfrac{1}{12}\\, M_{(5)} \\left[ \\left(1 -\n2\\Sigma_2\\right)^2 + \\left(1 - 2\\Sigma_3\\right)^2\\right] \\,,\n\\end{equation}\ni.e., $M_{(5)}$ is strictly monotonically decreasing unless\n$\\Sigma_2 = 1\/2 = \\Sigma_3$. In the special case $\\Sigma_2 = 1\/2 =\n\\Sigma_3$ we obtain\n\\begin{equation}\nM_{(5)}' = 0 \\:, \\quad M_{(5)}^{\\prime\\prime} = 0 \\:, \\quad\nM_{(5)}^{\\prime\\prime\\prime} =\n-\\textfrac{27}{8} M_{(5)} \\left[ \\left(w_2 -\n\\textfrac{1}{2}\\right)^2 +\\left(w_3 -\n\\textfrac{1}{2}\\right)^2\\right] \\:.\n\\end{equation}\n\\end{subequations}\nHence, $M_{(5)}$ is strictly monotonically decreasing everywhere on\n$\\mathcal{D}_1^0$ except at the fixed point $\\text{D}_1^0$, for which\n$\\Sigma_2=\\Sigma_3=w_2=w_3=\\frac{1}{2}$, cf.~Table~\\ref{fixtab}. The\nfunction $M_{(5)}$ possesses a positive minimum at $\\text{D}_1^0$.\nThis is because $(2+\\Sigma_2 \\Sigma_3)^{-1}$ is minimal at the point\n$\\Sigma_2 = \\Sigma_3 = 1\/2$ and $\\partial M_{(5)}\/\\partial s_i = (2\ns_i)^{-1} [w_i - 1\/2] M_{(5)}$ for $i=2,3$.\n\n$\\mathcal{K}^0$: The intersection of the Kasner subset $\\mathcal{K} = (\\Sigma^2 =1)$\nwith the $z=0$ subset yields a three-dimensional subset, $\\mathcal{K}^0$. 
This\nsubset will play a prominent role in the analysis of the past\nasymptotic behaviour of solutions.\n\nThe remaining subsets we consider are not located at the boundaries\nof the state space, but in the interior; these subsets are invariant\nunder the flow of the dynamical system if the distribution function\n$f_0$ satisfies certain symmetry conditions.\n\n$\\text{LRS}_i$: We define the subset $\\text{LRS}_1$ of $\\mathcal{X}$\nthrough the equations $\\Sigma_2=\\Sigma_3$, $w_2=w_3$;\n$\\text{LRS}_{2,3}$ are defined analogously. In order for these sets\nto be invariant under the flow of the dynamical system, the\ndistribution function $f_0$ must satisfy conditions that ensure\ncompatibility with the LRS symmetry; see Appendix~\\ref{FRWLRS} for\ndetails. For an orbit lying on $\\text{LRS}_1$, Equation~(\\ref{seq})\nentails that $s_2(\\tau) \\propto s_3(\\tau)$ (where the\nproportionality constant exhibits a dependence on $f_0$, which\nenters through the equation $w_2 =w_3$), and hence $g_{22}\\propto\ng_{33}$; by rescaling the coordinates one can achieve\n$g_{22}=g_{33}$, i.e., a line element in an explicit LRS form.\nHence, the $\\text{LRS}_i$ subsets, if present as invariant subsets,\ncomprise the solutions with LRS geometry.\n\nFRW: If $f_0$ is compatible with an isotropic geometry (see\nAppendix~\\ref{FRWLRS} for details), the one-dimensional subset\ncharacterized by the equations $\\Sigma_i=0$ $\\forall i$ and $w_1\n=w_2=w_3 =w$ is an invariant subset (in fact, an orbit), the FRW\nsubset. The equations $\\Sigma_i=0$ yield $s_{i}=\\mathrm{const}$,\nwhereby we obtain a Friedmann-Robertson-Walker (FRW) geometry, since\nthe spatial coordinates can be rescaled so that\n$g_{ij}\\propto\\delta_{ij}$. Note that the location in $s_i$ of the\nFRW subset depends on $f_0$, since the equations $w_1 = w_2= w_3$,\nwhich are to be solved for $(s_1,s_2,s_3)$, involve $f_0$,\ncf.~Appendix~\\ref{FRWLRS}. 
Remarkably, in the massless case the\nexistence of a FRW solution (which corresponds to the fixed point\n$\\text{F}^0$) does not require any symmetry conditions on $f_0$.\n\n\n\\section{Local and global dynamics}\n\\label{locglo}\n\n\\subsection{Local dynamics}\n\nLet us consider smooth reflection-symmetric\nBianchi type~I Vlasov solutions that approach fixed point sets when\n$\\tau\\rightarrow -\\infty$.\n\n\\begin{theorem}\\label{locthm}\nIn the massive (massless) case there exists\n\\begin{itemize}\n\\item[(a)] a single orbit that approaches (corresponds\nto) $\\mathrm{F}^0$,\n\\item[(b)] three equivalent one-parameter sets of orbits (three single orbits)\nthat approach $\\mathrm{D}_i^0$, $i=1 \\ldots 3$,\n\\item[(c)] one three-parameter\n(two-parameter) set of orbits that approaches $\\mathrm{QL}_1^0$;\n$\\mathrm{QL}_2^0$ and $\\mathrm{QL}_3^0$ yield equivalent sets,\n\\item[(d)] one four-parameter (three-parameter) set of orbits that\napproaches the part of {\\rm KC}$_1^0$ defined by $1<\\Sigma_1<2$;\n{\\rm KC}$_2^0$ and {\\rm KC}$_3^0$ yield equivalent sets.\n\\end{itemize}\n\\end{theorem}\n\n\\proof The statements of the theorem follow from the local stability\nanalysis of the fixed point sets F$^0$, D$_i^0$, $\\text{QL}_i^0$,\n$\\text{KC}_i^0$, when combined with the Hartman-Grobman and the\nreduction theorem, since the fixed points F$^0$, D$_i^0$ are\nhyperbolic and $\\text{QL}_i^0$, $\\text{KC}_i^0$ are transversally\nhyperbolic. This requires the dynamical system to be $\\mathcal{C}^1$\nand this leads to some restrictions on $f_0$. However, it is\npossible to obtain an alternative proof that does not require such\nrestrictions. 
Such a proof can be obtained from the hierarchical\nstructure of invariant sets; we will refrain from making the details\nexplicit here, since our analysis of the global dynamics below\ncontains all essential ingredients implicitly.\n\n\\textit{Interpretation of Theorem~\\ref{locthm} (massive case)}:\nTo every non-LRS Kasner solution there converges a three-parameter\nset of solutions as $t\\rightarrow 0$.\n(In the state space description three equivalent sets of orbits approach three equivalent\ntransversally stable Kasner arcs that cover all non-LRS Kasner\nsolutions; the\nequivalence reflects the freedom of permuting the coordinates.)\nFurthermore, a three-parameter set of\nsolutions approaches the non-flat LRS Kasner solution.\nHence, in total, a four-parameter set of solutions asymptotically approaches\nnon-flat Kasner states.\nThere exist special solutions with\nnon-Kasner behaviour toward the singularity: one solution\nisotropizes toward the singularity and a one-parameter set of solutions\napproaches a non-Kasner LRS solution of the type~\\eqref{Disol}\n(three equivalent one-parameter\nsets of orbits approach three equivalent non-Kasner LRS fixed points\nassociated with this solution).\nFor the latter solutions\n$\\Omega=3\/4$; these solutions cannot be interpreted as perfect\nfluid solutions since they possess anisotropic pressures.\n\nIn the following we show that the list of Theorem~\\ref{locthm} is\nalmost complete: there exist no other attractors toward the singularity,\nwith one exception: a heteroclinic network that connects the flat LRS-Kasner\npoints.\n\n\n\\subsection{Global dynamics}\n\\label{globaldynamics}\n\n\\begin{theorem}\\label{futurethm}\nAll orbits in the interior of the state space $\\mathcal{X}$ of\nmassive particles [state space $\\mathcal{Z}^0$ of massless\nparticles] converge to {\\rm FS}$^1$ [{\\rm F}$^0$] when\n$\\tau\\rightarrow +\\infty$; i.e., all smooth reflection-symmetric\nBianchi type~I Vlasov solutions isotropize toward 
the future.\n\\end{theorem}\n\nA proof of this theorem has been given in~\\cite{ren96}.\nIn Appendix~\\ref{futureproof} we present an alternative proof based on dynamical\nsystems techniques.\n\n\\begin{theorem}\\label{alphathm}\nThe $\\alpha$-limit set of an orbit in the interior of the state\nspace is one of the fixed points $\\mathrm{F}^0$, $\\mathrm{D}_i^0$,\n$\\mathrm{QL}_i^0$, $\\mathrm{KC}_i^0$, see Theorem~\\ref{locthm}, or\nit is the heteroclinic network $\\mathcal{H}^0$. The $\\alpha$-limit set of a\ngeneric orbit resides on the union of the fixed point sets {\\rm\nKC}$_i^0$ and possibly the heteroclinic network $\\mathcal{H}^0$.\n\\end{theorem}\n\n\\begin{remark}\nA heteroclinic network is defined as a compact connected\nflow-invariant set that is indecomposable (all points are connected\nby pseudo-orbits), has a finite nodal set (the set of recurrent\npoints is a finite union of disjoint compact connected\nflow-invariant subsets), and has finite depth (taking the\n$\\alpha$\/$\\omega$-limit iteratively yields the set of recurrent\npoints after a finite number of iterative steps); for details\nsee~\\cite{Ashwin\/Field:1999} and references therein. A simple case\nis a heteroclinic network of depth $1$ whose nodes are equilibrium\npoints: it can be regarded as a collection of entangled heteroclinic\ncycles. The heteroclinic network $\\mathcal{H}^0$ is of the latter type; it\nwill be introduced in the proof of the theorem.\n\\end{remark}\n\n\nThe remainder of this section is concerned with the proof of Theorem~\\ref{alphathm}.\nThe first step in the proof is to gain a detailed understanding of the dynamics\non the relevant invariant subspaces of the dynamical system.\n\n\\subsubsection*{Dynamics on $\\mathcal{S}_i^0$}\n\n\\begin{lemma}\\label{lemmaSi0}\nConsider an orbit in the interior of $\\mathcal{S}_i^0$. 
Its $\\alpha$-limit\nset is either a fixed point on {\\rm KC}$_j^0$ or {\\rm KC}$_k^0$\n($i\\neq j \\neq k$), {\\rm QL}$_i^0$ or {\\rm TL}$_i^0$, or it is the\nheteroclinic cycle $\\mathcal{H}_i^0$. The $\\omega$-limit set consists of the\nfixed point {\\rm D}$_i^0$.\n\\end{lemma}\n\n\\begin{remark}\nThe heteroclinic cycle $\\mathcal{H}_i^0$ will be defined\nin~\\eqref{heterocycle}.\n\\end{remark}\n\n\\proof Without loss of generality we consider $\\mathcal{S}_1^0$, which can\nbe described by the variables\n\\begin{equation}\n0< s_2 < 1 \\;\\, (s_3 = 1- s_2)\n\\quad\\text{ and }\\quad \\Sigma_1, \\Sigma_2,\n\\Sigma_3 \\quad\\left( \\sum\\nolimits_i \\Sigma_i = 0,\n\\Sigma^2 < 1\\right) \\:;\n\\end{equation}\nhence $\\mathcal{S}_1^0$ is represented by the interior of a cylinder, cf.~Figure~\\ref{cylinder}.\nThe boundary of $\\mathcal{S}_1^0$ consists of the lateral boundary $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$,\nthe base $\\mathcal{S}_{12}^0$, and the top surface $\\mathcal{S}_{13}^0$.\n\nSince $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$ is part of $\\mathcal{K}$, it follows that\n$\\Sigma_i \\equiv \\mathrm{const}$ for all orbits on $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$.\nWe observe that $s_2$ is monotonically increasing\n(decreasing) when $\\Sigma_2 < \\Sigma_3$ ($\\Sigma_2 > \\Sigma_3$), since\n$s_2^\\prime = -2 s_2 (1-s_2)(\\Sigma_2 -\\Sigma_3)$;\nthe two domains are separated by the lines of fixed points\nTL$_1^0$ and QL$_1^0$, see Figure~\\ref{cylinder}.\n\nThe key equations to understand the flow on $\\mathcal{S}_{12}^0$ are\n\\begin{equation}\n\\Omega^\\prime = \\Omega (2 \\Sigma^2 - \\Sigma_3)\\quad\\text{ and } \\quad\n\\Sigma_3^\\prime = \\Omega (2-\\Sigma_3)\\, .\n\\end{equation}\nFrom the first equation it follows that all points on\nKC$_3^0$ are transversally hyperbolic repelling fixed points\nexcept for T$_{33}^0$; from the second\nequation we infer that T$_{33}^0$ is the attractor of the entire\ninterior of $\\mathcal{S}_{12}^0$.\nSimilarly, 
T$_{22}^0$ is the attractor on $\\mathcal{S}_{13}^0$, see Figure~\\ref{cylinder}.\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{A}[cc][cc][1.2][0]{$\\text{TL}^0_1$}\n\\psfrag{B}[cc][rc][1.2][0]{$\\text{QL}^0_1$}\n\\psfrag{K2}[cc][cc][1.2][0]{$\\text{KC}^0_2$}\n\\psfrag{K3}[cc][cc][1.2][0]{$\\text{KC}^0_3$}\n\\psfrag{T2}[cc][cc][0.9][0]{$\\text{T}_{32}^0$}\n\\psfrag{T3}[cc][cc][0.9][0]{$\\text{T}_{23}^0$}\n\\psfrag{P2}[cc][cc][0.9][0]{$\\text{T}_{22}^0$}\n\\psfrag{P3}[cc][cc][0.9][0]{$\\text{T}_{33}^0$}\n\\psfrag{S1}[cc][cc][0.8][0]{$\\Sigma_1$}\n\\psfrag{S2}[cc][cc][0.8][0]{$\\Sigma_2$}\n\\psfrag{S3}[cc][cc][0.8][0]{$\\Sigma_3$}\n\\psfrag{Si}[cc][cc][0.8][0]{$\\Sigma_i$}\n\\psfrag{s2}[cc][cc][0.8][0]{$s_2$}\n\\includegraphics[width=0.9\\textwidth]{cylinder2.eps}\n\\caption{Flow on the boundaries and on the invariant subset\n$\\Sigma_1 = -1$ on $\\mathcal{S}_1^0$. The fixed point on $\\Sigma_1=-1$ is\nthe point D$_1^0$; the heteroclinic cycle $\\mathcal{H}_1^0$ consists of the\nfixed points $\\mathrm{T}_{22}^0$, $\\mathrm{T}_{32}^0$,\n$\\mathrm{T}_{33}^0$, $\\mathrm{T}_{23}^0$, and the connecting\norbits.}\n\\label{cylinder}\n\\end{center}\n\\end{figure}\n\nThe plane $\\mathcal{D}_1^0$, defined by $\\Sigma_1 = -1$, is an invariant subset in $\\mathcal{S}_1^0$.\nIn the interior of the plane we find the fixed point\nD$_1^0$; the boundary consists of a heteroclinic cycle $\\mathcal{H}_1^0$,\n\\begin{equation}\\label{heterocycle}\n\\mathcal{H}_1^0: \\mathrm{T}_{22}^0 \\rightarrow \\mathrm{T}_{32}^0 \\rightarrow\n\\mathrm{T}_{33}^0 \\rightarrow \\mathrm{T}_{23}^0 \\rightarrow\n\\mathrm{T}_{22}^0\\, .\n\\end{equation}\n(Note that analogous cycles $\\mathcal{H}_2^0$ and $\\mathcal{H}_3^0$ exist\non $\\mathcal{S}_2^0$ and $\\mathcal{S}_3^0$, respectively.)\nThe function $M_{(5)}$ is monotonically decreasing on $\\mathcal{D}_1^0$, cf.~\\eqref{m5eqs}.\nApplication of the monotonicity principle, see Appendix~\\ref{dynsys},\nyields that $\\text{D}_1^0$ is the 
$\\omega$-limit and that $\\mathcal{H}_1^0$ is the\n$\\alpha$-limit for all orbits on $\\mathcal{D}_1^0$, cf.~Figure~\\ref{cylinder}.\n\nConsider now an orbit in $\\mathcal{S}_1^0$ with $\\Sigma_1 \\neq -1$.\nThe function $M_{(4)} = (1+\\Sigma_1)^2$ is monotonically decreasing\non $\\mathcal{S}_1^0$, cf.~\\eqref{m4}. The monotonicity\nprinciple implies that the $\\omega$-limit lies on\n$\\Sigma_1 = -1$ or $\\Sigma^2 =1$ (but $\\Sigma_1 \\neq \\pm 2$).\nSince the logarithmic derivative of $\\Omega$ is positive everywhere on $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$\n(except at $\\text{T}^0_{22}$ and $\\text{T}^0_{33}$), i.e.,\n$\\Omega^{-1}\\Omega^\\prime|_{\\Omega =0} = 2 -\\sum_k w_k \\Sigma_k> 0$,\nit follows that the ``wall'' $\\mathcal{S}_1^0 \\cap \\mathcal{K}^0$ of the cylinder\nis repelling everywhere away from $\\Sigma_1 = -1$.\nConsequently, the\n$\\omega$-limit of the orbit cannot lie on $\\Sigma^2 =1$\nbut is contained in $\\Sigma_1 = -1$.\nThe fixed point $\\text{D}_1^0$ is a hyperbolic sink, as we conclude\nfrom the dynamics on $\\Sigma_1 = -1$ and from\n$(1+\\Sigma_1)^{-1} (1+ \\Sigma_1)^\\prime|_{\\text{D}_1^0} = -3\/4$.\nTherefore, the a priori possible $\\omega$-limit sets on $\\Sigma_1 = -1$\nare $\\text{D}_1^0$ and the heteroclinic cycle $\\mathcal{H}_1^0$.\n\nTo prove that $\\text{D}_1^0$ is the actual $\\omega$-limit we again\nconsider the function $M_{(5)}$. However, we no longer restrict its\ndomain of definition to $\\mathcal{D}_1^0$, but view it as a function\non $\\mathcal{S}_1^0$; we obtain\n\\begin{equation}\\label{M52}\n12 M_{(5)}^\\prime = -M_{(5)} \\left[(\\Sigma_1 + 2 \\Sigma_2)^2 +\n(\\Sigma_1+2 \\Sigma_3)^2 + 6 (\\Sigma_1+1)^2 - 6(\\Sigma_1+1)\\right]\\:.\n\\end{equation}\nThe bracket is positive when $\\Sigma_1 < -1$; hence $M_{(5)}$ is\ndecreasing when $\\Sigma_1 < -1$. This prevents orbits with $\\Sigma_1\n< -1$ from approaching $\\mathcal{H}_1^0$, since the cycle is characterized\nby $M_{(5)} = \\infty$. 
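The sign claims for the bracket in~\\eqref{M52} can be checked directly: for $\\Sigma_1 < -1$ every square is nonnegative and $-6(\\Sigma_1+1) > 0$, while just inside $\\Sigma_1 > -1$, near $(-1,1\/2,1\/2)$, the bracket dips marginally below zero. The following minimal numerical sketch (an illustration only, not part of the proof; all identifiers are our own) samples shear values subject to the trace constraint $\\Sigma_1+\\Sigma_2+\\Sigma_3=0$:

```python
import random

def bracket(sigma1, sigma2):
    # bracket of Eq. (M52); sigma3 is fixed by the constraint sum_i Sigma_i = 0
    sigma3 = -sigma1 - sigma2
    return ((sigma1 + 2 * sigma2) ** 2 + (sigma1 + 2 * sigma3) ** 2
            + 6 * (sigma1 + 1) ** 2 - 6 * (sigma1 + 1))

random.seed(1)

# Sigma_1 < -1: every term is nonnegative and -6(Sigma_1 + 1) > 0,
# so the bracket is strictly positive and M_(5) decreases there
samples = [(-1.0 - random.random(), 2.0 * random.random() - 1.0)
           for _ in range(1000)]
assert all(bracket(s1, s2) > 0 for s1, s2 in samples)

# near (Sigma_1, Sigma_2, Sigma_3) = (-1, 1/2, 1/2) with Sigma_1 > -1
# the bracket is marginally negative, as used in the shadowing argument
assert bracket(-0.99, 0.495) < 0
```

Note that only the trace constraint enters here; the additional physical bound $\\Sigma^2 \\leq 1$ restricts the sample region further but does not affect the sign argument.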
Now suppose that there exists an orbit in\n$\\Sigma_1 > -1$ whose $\\omega$-limit is $\\mathcal{H}_1^0$. At late times\nthe trajectory shadows the cycle; hence, for late times, the bracket\nin~\\eqref{M52} is almost always positive along the trajectory; the\nbracket is marginally negative only when the trajectory passes\nthrough a small neighbourhood of\n$(\\Sigma_1,\\Sigma_2,\\Sigma_3) = (-1,1\/2,1\/2)$. Since the trajectory spends large amounts of\ntime near the fixed points and the passages from one fixed point to\nanother become shorter and shorter in proportion,\nit follows that at late times $M_{(5)}$ is decreasing along the\norbit (with ever shorter periods of limited increase). This\ncontradicts the assumption that the orbit is attracted by the\nheteroclinic cycle. We therefore conclude that\n$\\text{D}_1^0$ is the global sink on $\\mathcal{S}_1^0$.\n\nConsider again an orbit in $\\mathcal{S}_1^0$ with $\\Sigma_1 \\neq -1$.\nInvoking the monotonicity principle with the function $M_{(4)}$ we\nfind that the $\\alpha$-limit of the orbit must be located on\n$\\Sigma^2 =1$, $\\Sigma_1 \\neq -1$. From the analysis of the flow on\nthe boundaries of the cylinder we obtain that all fixed points on\n$\\Sigma^2=1$ except for T$_{22}^0$ and T$_{33}^0$ are transversally\nhyperbolic. The fixed points on KC$_2^0$ with $\\Sigma_2 < \\Sigma_3$\nand the points on KC$_3^0$ with $\\Sigma_2\n> \\Sigma_3$ are saddles; the fixed points on KC$_2^0$ with\n$\\Sigma_2 > \\Sigma_3$ and those on KC$_3^0$ with $\\Sigma_2 <\n\\Sigma_3$ are transversally hyperbolic sources (except for\nT$_{22}^0$, T$_{33}^0$): every such point attracts a one-parameter set of\norbits from $\\mathcal{S}_1^0$ as $\\tau\\rightarrow -\\infty$. In contrast,\neach fixed point on TL$_1^0$ and QL$_1^0$ is a source for exactly\none orbit. 
The structure of the flow on $\\Sigma^2=1$ implies that\nthe $\\alpha$-limit of the orbit in $\\mathcal{S}_1^0$ with $\\Sigma_1 \\neq -1$\nmust be one of the transversally hyperbolic sources.\n\nThis establishes Lemma~\\ref{lemmaSi0}.\n\n\n\\subsubsection*{Dynamics on $\\mathcal{K}^0$}\n\nThe invariant subset $\\mathcal{K}^0$ is defined by setting $z=0$ and $\\Sigma^2=1$;\nit can be represented by the Cartesian\nproduct of the circle ($\\Sigma^2=1$) in the $\\Sigma_i$-space times\nthe $s_i$-triangle given by $\\{0 < s_i < 1 \\:|\\: \\sum\\nolimits_k s_k = 1\\}$.\n\nLet $\\gamma$ be an orbit in the interior of $\\mathcal{Z}^0$ and assume that\nits $\\alpha$-limit set $\\alpha(\\gamma)$ contains a point $\\mathrm{P}$ in the\ninterior of $\\mathcal{K}^0$; suppose first that $\\Sigma_i \\neq \\pm 2$ for all $i$\nat $\\mathrm{P}$. Then $\\alpha(\\mathrm{P})$ is contained in $\\alpha(\\gamma)$ as\nwell; the dynamics on $\\mathcal{K}^0$ implies that $\\alpha(\\mathrm{P})$ is a\nfixed point $\\mathrm{K}_\\mathrm{P}$ on one of the Kasner circles\n$\\text{KC}_j^0$, cf.~Figure~\\ref{sigmaquadgleich1}. The relation\n$\\Omega^{-1} \\Omega^\\prime|_{\\Omega=0} = 2 - \\sum\\nolimits_k w_k \\Sigma_k > 0$ yields that $\\mathrm{K}_\\mathrm{P}$ is a\ntransversally hyperbolic source on the whole space $\\mathcal{Z}^0$. Since\n$\\alpha(\\gamma)$ contains the transversally hyperbolic source\n$\\mathrm{K}_\\mathrm{P}$, that fixed point necessarily constitutes\nthe entire $\\alpha$-limit set, i.e., $\\alpha(\\gamma)\n=\\mathrm{K}_\\mathrm{P}$. This is in contradiction to our assumption\n$\\alpha(\\gamma) \\ni \\mathrm{P}$. The omitted cases $\\Sigma_i = \\pm\n2$ for some $i$ will be dealt with next.\n\nSuppose that $\\Sigma_i = -2$ for one index $i$. Assume that\n$\\mathrm{P}$ lies in $\\alpha(\\gamma)$, therefore\n$\\alpha(\\mathrm{P})$ is contained in $\\alpha(\\gamma)$ as well. The\ndynamics on $\\mathcal{K}^0$ implies that $\\alpha(\\mathrm{P})$ is a fixed\npoint $\\mathrm{Q}_{\\mathrm{P}}$ on QL$_i^0$,\ncf.~Figure~\\ref{sigmaquadgleich1}. This point is a transversally\nhyperbolic source; $\\Omega^{-1} \\Omega^\\prime|_{\\text{QL}_i^0} = 1$\nin this case. By the same argument as above we obtain a\ncontradiction to the assumption $\\alpha(\\gamma) \\ni \\mathrm{P}$.\n\nFinally suppose that $\\Sigma_i = 2$ for one index $i$. When we\nassume that $\\mathrm{P}$ is in $\\alpha(\\gamma)$, then the\n$\\omega$-limit $\\omega(\\mathrm{P})$ is contained in\n$\\alpha(\\gamma)$. From Figure~\\ref{sigmaquadgleich1} we see that\n$\\omega(\\mathrm{P})$ is a fixed point $\\mathrm{T}_{\\mathrm{P}}$ on\nTL$_i^0$. 
The point $\\mathrm{T}_{\\mathrm{P}}$ is a transversally\nhyperbolic saddle, since $\\Omega^{-1} \\Omega^\\prime|_{\\text{TL}_i^0}\n= 3$, and there exists exactly one orbit that emanates from it,\nnamely the orbit that connects $\\mathrm{T}_{\\mathrm{P}}$ with\n$\\text{D}_i$ in $\\mathcal{S}_i^0$. Since $\\mathrm{T}_{\\mathrm{P}} \\in\n\\alpha(\\gamma)$, that orbit must also be contained in\n$\\alpha(\\gamma)$. This is in contradiction to the previous result:\n$\\alpha(\\gamma)$ cannot contain interior points of $\\mathcal{S}_i^0$. Hence\nour assumption $\\alpha(\\gamma) \\ni \\mathrm{P}$ was false: the\n$\\alpha$-limit of $\\gamma$ cannot contain any interior point of\n$\\mathcal{K}^0$.\n\nOur analysis results in the following statement: There exist four\nspecial orbits, one trivial orbit corresponding to the fixed point\n$\\text{F}^0$ and three orbits, the orbits $\\delta^0_i$, that\nconverge to the fixed points $\\text{D}^0_i \\in \\mathcal{D}^0_i$. The\n$\\alpha$-limit set of every orbit $\\gamma$ in $\\mathcal{Z}^0$ different from\n$\\text{F}^0$ and $\\delta^0_i$ must be located on the boundaries of\nthe spaces $\\mathcal{S}_i^0$ and $\\mathcal{K}^0$, i.e., on the union of the\nboundaries of the cylinders $\\mathcal{S}_i^0$, which we denote by\n$\\partial\\mathcal{S}^0=\\partial \\mathcal{S}_1^0 \\cup\n\\partial \\mathcal{S}_2^0 \\cup\n\\partial \\mathcal{S}_3^0$.\nThe set $\\partial\\mathcal{S}^0$ is depicted in Figure~\\ref{fixedp}: it\ncomprises the lateral surfaces of the cylinders and the base\/top\nsurfaces.\n\nAll fixed points on $\\partial\\mathcal{S}^0$ are transversally hyperbolic except\nfor the points $\\text{T}_{ii}^0$:\n$\\text{TL}_i^0$ consists of transversally hyperbolic saddles; in\ncontrast, the fixed points on $\\text{QL}_i^0$ are transversally\nhyperbolic sources; points on $\\text{KC}_i^0$ with $\\Sigma_i\n> 1$, $\\Sigma_i \\neq 2$ are sources while those with $\\Sigma_i < 1$\nare saddles.\nCombining the analysis of the preceding 
sections,\nsee~Figs.~\\ref{cylinder} and~\\ref{sigmaquadgleich1}, we obtain, more\nspecifically: each point on $\\text{QL}_i^0$ is a source for a\none-parameter family of orbits that emanate into the interior of\n$\\mathcal{Z}^0$, and each point on $\\text{KC}_i^0$ with $\\Sigma_i >\n1$ ($\\Sigma_i \\neq 2$) is the source for a two-parameter family.\n(The points with $\\Sigma_i =1$ on $\\text{KC}_i^0$ are the two points\n$\\text{Q}^0_{ij} \\in \\text{QL}_j^0$ and $\\text{Q}^0_{ik}\\in\n\\text{QL}_k^0$. Each of these two points is a transversally\nhyperbolic source for a one-parameter family of orbits; however,\nthose orbits are not interior orbits, but remain on the boundary of\n$\\mathcal{Z}^0$.)\n\nThe non-transversally hyperbolic fixed points $\\text{T}_{ii}^0$ are\npart of a special structure that is present on $\\partial\\mathcal{S}^0$: the\nset $\\partial\\mathcal{S}^0$ exhibits a robust heteroclinic network $\\mathcal{H}^0$\n(of depth $1$), see, e.g.,~\\cite{Ashwin\/Field:1999} for a discussion\nof heteroclinic networks; the network $\\mathcal{H}^0$ is depicted in\nFigure~\\ref{network}; a schematic depiction is given in\nFigure~\\ref{networkschematic}. 
In particular we observe that the\nheteroclinic cycles $\\mathcal{H}_i^0$ of the spaces $\\mathcal{S}_i^0$ appear as\nheteroclinic subcycles of the network.\n\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{TL1}[cc][cc][0.8][0]{$\\text{TL}^0_1$}\n\\psfrag{TL2}[cc][cc][0.8][0]{$\\text{TL}^0_2$}\n\\psfrag{TL3}[cc][cc][0.8][0]{$\\text{TL}^0_3$}\n\\psfrag{QL1}[cc][cc][0.8][0]{$\\text{QL}^0_1$}\n\\psfrag{QL2}[cc][cc][0.8][0]{$\\text{QL}^0_2$}\n\\psfrag{QL3}[cc][cc][0.8][0]{$\\text{QL}^0_3$}\n\\psfrag{T21}[cc][cc][0.6][0]{$\\text{T}^0_{21}$}\n\\psfrag{T22}[cc][cc][0.6][0]{$\\text{T}^0_{22}$}\n\\psfrag{T23}[cc][cc][0.6][0]{$\\text{T}^0_{23}$}\n\\psfrag{Q21}[cc][cc][0.6][0]{$\\text{Q}^0_{21}$}\n\\psfrag{Q22}[cc][cc][0.6][0]{$\\text{Q}^0_{22}$}\n\\psfrag{Q23}[cc][cc][0.6][0]{$\\text{Q}^0_{23}$}\n\\psfrag{T11}[cc][cc][0.6][0]{$\\text{T}^0_{11}$}\n\\psfrag{T12}[cc][cc][0.6][0]{$\\text{T}^0_{12}$}\n\\psfrag{T13}[cc][cc][0.6][0]{$\\text{T}^0_{13}$}\n\\psfrag{Q11}[cc][cc][0.6][0]{$\\text{Q}^0_{11}$}\n\\psfrag{Q12}[cc][cc][0.6][0]{$\\text{Q}^0_{12}$}\n\\psfrag{Q13}[cc][cc][0.6][0]{$\\text{Q}^0_{13}$}\n\\psfrag{T31}[cc][cc][0.6][0]{$\\text{T}^0_{31}$}\n\\psfrag{T32}[cc][cc][0.6][0]{$\\text{T}^0_{32}$}\n\\psfrag{T33}[cc][cc][0.6][0]{$\\text{T}^0_{33}$}\n\\psfrag{Q31}[cc][cc][0.6][0]{$\\text{Q}^0_{31}$}\n\\psfrag{Q32}[cc][cc][0.6][0]{$\\text{Q}^0_{32}$}\n\\psfrag{Q33}[cc][cc][0.6][0]{$\\text{Q}^0_{33}$}\n\\psfrag{KC1}[cc][cc][1.2][0]{$\\text{KC}^0_1$}\n\\psfrag{KC2}[cc][cc][1.2][0]{$\\text{KC}^0_2$}\n\\psfrag{KC3}[cc][cc][1.2][0]{$\\text{KC}^0_3$}\n\\psfrag{s1}[cc][cc][1.0][90]{$\\longleftarrow\\: s_1 =0\\: \\longrightarrow$}\n\\psfrag{s2}[cc][cc][1.0][35]{$\\longleftarrow \\:s_2 =0 \\:\\longrightarrow$}\n\\psfrag{s3}[cc][cc][1.0][-35]{$\\longleftarrow \\:s_3 =0 \\:\\longrightarrow$}\n\\includegraphics[width=0.8\\textwidth]{network3D3.eps}\n\\caption{The heteroclinic network $\\mathcal{H}^0$ that exists on the set\n$\\partial\\mathcal{S}^0$. 
Its building blocks are the heteroclinic cycles\n$\\mathcal{H}^0_1$, $\\mathcal{H}^0_2$, $\\mathcal{H}^0_3$.} \\label{network}\n\\end{center}\n\\end{figure}\n\n\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{T11}[cc][cc][0.9][0]{$\\text{T}_{11}^0$}\n\\psfrag{T22}[cc][cc][0.9][0]{$\\text{T}_{22}^0$}\n\\psfrag{T33}[cc][cc][0.9][0]{$\\text{T}_{33}^0$}\n\\psfrag{T12}[cc][cc][0.9][0]{$\\text{T}_{12}^0$}\n\\psfrag{T21}[cc][cc][0.9][0]{$\\text{T}_{21}^0$}\n\\psfrag{T13}[cc][cc][0.9][0]{$\\text{T}_{13}^0$}\n\\psfrag{T31}[cc][cc][0.9][0]{$\\text{T}_{31}^0$}\n\\psfrag{T23}[cc][cc][0.9][0]{$\\text{T}_{23}^0$}\n\\psfrag{T32}[cc][cc][0.9][0]{$\\text{T}_{32}^0$}\n\\includegraphics[width=0.7\\textwidth]{network.eps}\n\\caption{Schematic representation of $\\mathcal{H}^0$.}\n\\label{networkschematic}\n\\end{center}\n\\end{figure}\n\nA straightforward analysis of the flow on $\\partial \\mathcal{S}^0$ using the\nsame type of reasoning as above leads to the result that there exist\nno other structures on $\\partial \\mathcal{S}^0$ that could serve as\n$\\alpha$-limits for an interior $\\mathcal{Z}^0$-orbit $\\gamma$. We\nhave thus proved the following statement: The $\\alpha$-limit of\n$\\gamma$ is one of the transversally hyperbolic sources listed\nabove, or it is the heteroclinic network (or a heteroclinic subcycle\nthereof). This concludes the proof of the massless case of\nTheorem~\\ref{alphathm}.\n\n\n\\subsection*{Global dynamics in the massive case}\n\nLet $\\gamma$ be an arbitrary orbit in the interior of the state\nspace $\\mathcal{X}$. 
The function $M_{(1)}$ is strictly\nmonotonically increasing on $\\mathcal{X}$ (and on $\\mathcal{K}$),\ncf.~\\eqref{m1eqs}ff.; moreover, $M_{(1)}$ vanishes for $z\\rightarrow\n1$ and $s_i \\rightarrow 0$ (unless $z\\rightarrow 0$ simultaneously).\nHence, by applying the monotonicity principle we obtain that the\n$\\alpha$-limit set $\\alpha(\\gamma)$ of $\\gamma$ must be located on\n$\\mathcal{Z}^0$ including its boundaries.\n\nConsider the fixed point $\\text{F}^0 \\in \\mathcal{Z}^0$. By\nTheorem~\\ref{futurethm} this fixed point is a global sink on\n$\\mathcal{Z}^0$. In the orthogonal direction, however, we have $z^{-1}\nz^\\prime |_{\\text{F}^0} = 2$. It follows that $\\text{F}^0$ is a\nhyperbolic saddle in the space $\\mathcal{X}$ and that there exists exactly\none orbit $\\phi$ that emanates from $\\text{F}^0$ into the interior\nof $\\mathcal{X}$. (Theorem~\\ref{futurethm} implies that $\\phi$ converges to\n$\\text{FS}^1$ as $\\tau\\rightarrow \\infty$; thus, $\\phi$ represents\nthe unique solution of the Einstein-Vlasov equations that\nisotropizes toward the past and the future.)\n\nLet $\\gamma$ be different from $\\phi$. Assume that $\\alpha(\\gamma)$\ncontains a point $\\mathrm{P}$ of the interior of $\\mathcal{Z}^0$; then the\nwhole orbit through $\\mathrm{P}$ and the $\\omega$-limit\n$\\omega(\\mathrm{P})$ must be contained in $\\alpha(\\gamma)$.\nTheorem~\\ref{futurethm} implies $\\omega(\\mathrm{P}) = \\text{F}^0$,\nhence $\\text{F}^0 \\in \\alpha(\\gamma)$. Since the saddle $\\text{F}^0$\nis in $\\alpha(\\gamma)$, the unique orbit $\\phi$ emanating from it is\ncontained in $\\alpha(\\gamma)$ as well. Thus, ultimately,\n$\\omega(\\phi)$, i.e., a point on $\\text{FS}^1$, must be contained in\n$\\alpha(\\gamma)$; this is a contradiction, since $\\text{FS}^1$\nconsists of transversally hyperbolic sinks. 
We conclude that\nthe $\\alpha$-limit set of $\\gamma$ cannot contain any point in the interior of\n$\\mathcal{Z}^0$.\n\nSince $\\alpha(\\gamma)$ must be located on the boundary of $\\mathcal{Z}^0$,\ni.e., on $\\mathcal{S}_i^0$ or $\\mathcal{K}^0$, the proof can be completed in close\nanalogy to the proof in the massless case. We thus restrict\nourselves here to giving some relations that establish that the\nsources on $\\mathcal{Z}^0$ generalize to sources on $\\mathcal{X}$: on\n$\\text{KC}_i^0$ we have $z^{-1} z^\\prime |_{\\text{KC}_i^0} = 2 ( 1+\n\\Sigma_i)$, which is positive for all $\\Sigma_i> -1$ and thus for\n$\\Sigma_i> 1$ in particular; for $\\text{QL}_i^0$ we obtain $z^{-1}\nz^\\prime |_{\\text{QL}_i^0} = 4$. We note further that $z^{-1}\nz^\\prime |_{\\text{D}_i^0} = 3$; thus $\\text{D}_i^0$ possesses a\ntwo-dimensional unstable manifold. (Orbits in that manifold converge\nto $\\text{FS}^1$.) Finally, note that along the heteroclinic cycle\n$\\mathcal{H}_1^0: \\mathrm{T}_{22}^0 \\rightarrow \\mathrm{T}_{32}^0\n\\rightarrow \\mathrm{T}_{33}^0 \\rightarrow \\mathrm{T}_{23}^0\n\\rightarrow \\mathrm{T}_{22}^0$, we obtain that $z^{-1} z^\\prime$\nequals $6 s_2$, $2(1+\\Sigma_3)$, $2(1+\\Sigma_2)$, $6(1-s_2)$,\nrespectively; hence $z^{-1} z^\\prime$ is non-negative along the\nheteroclinic network $\\mathcal{H}^0$.\n\nThis concludes the proof of Theorem~\\ref{alphathm}.\n\n\n\n\n\\section{Concluding remarks}\n\\label{conc}\n\nIn this article we have analyzed the asymptotic behaviour of\nsolutions of the Einstein-Vlasov equations with Bianchi type I\nsymmetry. 
To that end we have reformulated the equations as a system\nof autonomous differential equations on a compact state space, which\nenabled us to employ powerful techniques from dynamical systems\ntheory.\n\nBased on the global dynamical systems analysis we have identified\nall possible past attractors of orbits\n--- both in the massless and massive case.\nWe have found that an open set of solutions converges to the Kasner\ncircle(s); in particular, for these solutions, the rescaled matter\nquantity $\\Omega$ satisfies $\\Omega\\rightarrow 0$ toward the\nsingularity, so that ``matter does not matter.'' However, we have\nseen that there exists an interesting structure that might\ncomplicate matters: there exists a heteroclinic network $\\mathcal{H}^0$ that\nmight be part of the past attractor set. For solutions that converge\nto $\\mathcal{H}^0$, $\\Omega$ has no limit toward the singularity, since\n$\\Omega \\neq 0$ along parts of the network $\\mathcal{H}^0$, i.e., matter\ndoes matter for such solutions. It is not clear at the present stage\nwhether the set of solutions converging to $\\mathcal{H}^0$ is empty, or, if\nnon-empty, of measure zero or not (the flow on the boundary subsets\ngives a hint that it might be a three-parameter set, i.e., a set of\nmeasure zero). In any case $\\mathcal{H}^0$ will be important for the\nintermediate dynamical behaviour of some models, and thus there are\nsignificant differences between Bianchi type I perfect fluid models\nand models with Vlasov matter.\n\nIf a generic set of orbits converges to $\\mathcal{H}^0$, this will have\nconsiderable consequences. Bianchi type I perfect fluid models play\na central role in understanding the singularity of more general\nspatially homogeneous models~\\cite{waiell97}, as well as general\ninhomogeneous models~\\cite{uggetal03}. 
The importance of Bianchi\ntype I perfect fluid models is due to Lie and source\n``contractions'' in spatially homogeneous cosmology and the\nassociated invariant subset and monotone function structure (which\nis quite similar to the hierarchical structure we have encountered\nin the present paper), and asymptotic silence in general\ninhomogeneous models~\\cite{uggetal03}. Similarly, we expect that\nBianchi type I Einstein-Vlasov models hold an equally prominent\nplace as regards singularities in more general\n--- spatially homogeneous and inhomogeneous --- Einstein-Vlasov\nmodels. Hence the resolution of the problem of whether the\nheteroclinic network attracts generic solutions determines if\nEinstein-Vlasov models are generically different from general\nrelativistic perfect fluid models in the vicinity of a generic\nsingularity, or not.\n\nIn this article we have not considered a cosmological constant,\n$\\Lambda$. The effects of a positive cosmological constant can be\noutlined as follows: since $\\rho \\rightarrow \\infty$ toward the\nsingularity, it follows that $\\Lambda$ can be asymptotically\nneglected and hence that the singularity structure is qualitatively\nthe same as for $\\Lambda =0$. However, toward the future $\\Lambda$\ndestabilizes FS$^1$, which becomes a saddle, and instead solutions\nisotropize and asymptotically reach a de Sitter state.\n\nWe conclude with some remarks on different formulations. We have\nseen that the variables we have used to reformulate the equations as\na dynamical system yielded multiple representations of some\nstructures, e.g., the Kasner circle. Replacing $s_i$ with\n$E_i=\\sqrt{g^{ii}}\/H$, i.e., the Hubble-normalized spatial frame\nvariables of~\\cite{uggetal03,rohugg05}, and using $y=m^2H^{-2}$\ninstead of $z$, yields a single Kasner circle. The latter variables,\nhowever, are not bounded; indeed, they blow up toward the future in\nthe present case. 
It is possible to replace the variables by bounded\nvariables; however, variables of this type lead to differentiability\ndifficulties toward the singularity. Issues like these made the\nvariables we employed in this article more suitable for the kind of\nanalysis we have performed. However, $E_i$-variables, or\n``$E_i$-based'' variables, would have been more suitable to relate\nthe present results to a larger context; but it is not difficult to\ntranslate our results to the $E_i$-based variables used\nin~\\cite{uggetal03,rohugg05}, where the relationship between the\ndynamics of inhomogeneous and spatially homogeneous models was\ninvestigated and exploited.\n\n\\\n\n\\noindent\n{\\bf Acknowledgement}\n\n\\noindent We gratefully acknowledge the hospitality and the support\nof the Isaac Newton Institute for Mathematical Sciences in\nCambridge, England, where part of this work was done. We also thank\nAlan Rendall for useful questions and comments. CU is supported by\nthe Swedish Research Council.\n\n\\\n\n\n\\begin{appendix}\n\n\\section{Dynamical systems}\n\\label{dynsys}\n\n\nIn this appendix we briefly recall some concepts from the theory of\ndynamical systems which we use in the article.\n\nConsider a dynamical system defined on an invariant set $X\\subseteq\n\\mathbb{R}^m$. The $\\omega$-limit set $\\omega(x)$ [$\\alpha$-limit\nset $\\alpha(x)$] of a point $x\\in X$ is defined as the set of all\naccumulation points of the future [past] orbit of $x$. The simplest\nexamples are fixed points and periodic orbits.\n\nThe monotonicity principle~\\cite{waiell97} gives\ninformation about the global asymptotic behaviour of the dynamical\nsystem. 
If $M: X\\rightarrow \\mathbb{R}$ is a ${\\mathcal C}^1$\nfunction which is strictly decreasing along orbits in $X$, then\n\\begin{subequations}\\label{omegalimitmon}\n\\begin{align}\n\\omega(x) &\\subseteq\n\\{\\xi \\in \\bar{X}\\backslash X\\:|\\: \\lim\\limits_{\\zeta\\rightarrow \\xi} M(\\zeta) \\neq\n\\sup\\limits_{X} M\\} \\\\\n\\alpha(x) &\\subseteq\n\\{\\xi \\in \\bar{X}\\backslash X\\:|\\:\\lim\\limits_{\\zeta\\rightarrow \\xi} M(\\zeta) \\neq\n\\inf\\limits_{X} M\\}\n\\end{align}\n\\end{subequations}\nfor all $x\\in X$.\n\nLocally in the neighbourhood of a fixed point, the flow of the\ndynamical system is determined by the stability features of the\nfixed point. If the fixed point is hyperbolic, i.e., if the\nlinearization of the system at the fixed point is a matrix\npossessing eigenvalues with non-vanishing real parts, then the\nHartman-Grobman theorem applies: in a neighbourhood of a hyperbolic\nfixed point the full nonlinear dynamical system and the linearized\nsystem are topologically equivalent. Non-hyperbolic fixed points are\ntreated in centre manifold theory: the reduction theorem generalizes\nthe Hartman-Grobman theorem; for further details see,\ne.g.,~\\cite{cra91}. If a fixed point is an element of a connected\nfixed point set (line, surface,\\nolinebreak \\ldots) and the number\nof eigenvalues with zero real parts is equal to the dimension of the\nfixed point set, then the fixed point is called transversally\nhyperbolic. Application of the centre manifold reduction theorem is\nparticularly simple in this case. 
(The situation is analogous in the\nmore general case when the fixed point is an element of an a priori\nknown invariant set that coincides with the centre manifold of the\nfixed point.)\n\n\\section{FRW and LRS$_i$ symmetry}\n\\label{FRWLRS}\n\nIn this section we discuss in detail the sets $\\text{FRW}$ and $\\text{LRS}_i$,\nconnected with solutions exhibiting FRW or LRS geometry.\n\nTo begin with, we prove that the fixed point $\\text{F}^0$ on $z=0$\nis well-defined. Since the defining equations for $\\text{F}^0$ are\n$w_1 = w_2= w_3 = 1\/3$, we must show that these equations indeed\npossess a unique solution $(s_1,s_2,s_3)$ for all distribution\nfunctions $f_0$. Setting $z=0$ in~\\eqref{omegai} implies that the\nequations $w_1 = w_2= w_3 = 1\/3$ are equivalent to the system\n\\begin{equation}\\label{udef}\nu := \\int f_0 \\, \\left[s_1 v_1^2 -s_2 v_2^2\\right] \\left(\\sum\\nolimits_k s_k v_k^2 \\right)^{-1\/2} d^3 v =0\n\\end{equation}\nand $v = 0$, where $v$ is defined by replacing $[s_1 v_1^2 -s_2\nv_2^2]$ by $[s_1 v_1^2 -s_3 v_3^2]$ in~\\eqref{udef}. On the three\nboundaries of the space $\\{(s_1,s_2,s_3)\\:|\\: s_i \\geq 0, \\sum_k s_k\n=1\\}$ the functions $u$ and $v$ are monotonic; their signs are given\nin Figure~\\ref{dreiecken}. The derivative $\\partial u\/\\partial s_1$ is\nmanifestly positive and $\\partial u\/\\partial s_2$ is negative; hence\n$\\mathrm{grad}\\, u$ is linearly independent of the surface normal\n$(1,1,1)$, and it follows that\n$u=\\mathrm{const}$ describes a curve\nfor all $\\mathrm{const} \\in \\mathbb{R}$. The same argument applies\nto $v$, since $\\partial v\/\\partial s_1>0$ and $\\partial v\/\\partial\ns_3 <0$. Figure~\\ref{dreiecken} reveals that $u=0$ ($v=0$) connects\nthe upper (right) vertex of the $(s_1,s_2,s_3)$-space with the\nopposite side. 
Investigating $(\\mathrm{grad} \\,u - \\lambda\\,\n\\mathrm{grad} \\,v)$ we find that the first component is manifestly\npositive when $\\lambda \\leq 2\/3$ and negative when $\\lambda\\geq\n3\/2$, the second component is negative when $\\lambda \\leq 3$, and\nthe third component is positive when $\\lambda \\geq 1\/3$, which\nimplies that $(\\mathrm{grad}\\, u - \\lambda\\, \\mathrm{grad}\\, v)$ is\nlinearly independent of the surface normal $(1,1,1)$ for all\n$\\lambda$. It follows that all equipotential curves of the functions\n$u$ and $v$ intersect transversally; hence $u=0$ and $v=0$ possess a\nunique point of intersection, which proves the claim.\n\n\\begin{figure}[Ht]\n\\begin{center}\n\\psfrag{s1}[cc][cc][0.8][0]{$s_1$}\n\\psfrag{s2}[cc][cc][0.8][0]{$s_2$}\n\\psfrag{s3}[cc][cc][0.8][0]{$s_3$}\n\\psfrag{a}[cc][cc][0.7][0]{$v>0$}\n\\psfrag{b}[cc][cc][0.7][0]{$\\begin{array}{cc} u<0 \\\\ v<0 \\end{array}$}\n\\psfrag{c}[cc][cc][0.7][0]{$u>0$}\n\\psfrag{er}[cc][cc][0.7][0]{$\\begin{array}{cc} u<0 \\\\ v=0 \\end{array}$}\n\\psfrag{el}[cc][cc][0.7][0]{$\\begin{array}{cc} u>0 \\\\ v>0 \\end{array}$}\n\\psfrag{eo}[cc][cc][0.7][0]{$\\begin{array}{cc} u=0 \\\\ v<0 \\end{array}$}\n\\includegraphics[width=0.5\\textwidth]{dreiecken.eps}\n\\caption{The functions $u$ and $v$ are monotonic along the boundaries of the space\n$\\{(s_1,s_2,s_3)\\:|\\: s_i \\geq 0, \\sum_k s_k =1\\}$.}\n\\label{dreiecken}\n\\end{center}\n\\end{figure}\n\nBy establishing existence and uniqueness of the fixed point $\\text{F}^0$ for all $f_0$,\nwe have shown that for all distribution functions\nthere exists a unique FRW solution of the massless Einstein-Vlasov\nequations.\n\nThe situation is different in the massive case.\nA FRW solution is characterized by the equations\n$\\Sigma_i=0$ $\\forall i$, $w_1 =w_2=w_3 =w$, since\nthis yields $s_{i}=\\mathrm{const}$\n(and a rescaling of the spatial coordinates then results in $g_{ij}\\propto\\delta_{ij}$.)\nHowever, for a general distribution function 
$f_0$, these equations\nare incompatible with\nthe Einstein-Vlasov equations; in other words, the straight line\n$\\Sigma_i=0$ $\\forall i$, $w_1 =w_2=w_3 =w$ is not an orbit of the\ndynamical system. Hence, in the massive case, the Einstein-Vlasov\nequations do not admit a FRW solution for arbitrary $f_0$; the\ndistribution function $f_0$ is required to satisfy FRW compatibility\nconditions, see below, in order for a FRW solution to exist.\n\nNote, however, that for each $f_0$, there exists exactly one orbit\nthat originates from $\\text{F}^0$ and ends on $\\text{FS}^1$, see\nSection~\\ref{locglo}, i.e., there exists a unique solution of\nthe Einstein-Vlasov equations that isotropizes toward the past and\ntoward the future. This anisotropic solution can be regarded as a\ngeneralized FRW solution; if $f_0$ is compatible with the FRW\ngeometry, then the generalized FRW solution reduces to an ordinary\nFRW solution.\n\nThe treatment of the LRS case is analogous: the subset\n$\\text{LRS}_1$ (and, analogously, $\\text{LRS}_{2,3}$), defined\nthrough the equations $\\Sigma_2=\\Sigma_3$, $w_2=w_3$, describes\nsolutions exhibiting LRS geometry. (For a solution on\n$\\text{LRS}_1$, Equation~(\\ref{seq}) entails $s_2(\\tau) \\propto\ns_3(\\tau)$; by rescaling the coordinates one can achieve\n$g_{22}=g_{33}$, i.e., a line element in an explicit LRS form.)\nHowever, for general $f_0$, the set $\\text{LRS}_1$ is not invariant\nunder the flow of the dynamical system. 
Consequently, for general\n$f_0$, the Einstein-Vlasov equations do not admit solutions with LRS\ngeometry.\n\nMore specifically, consider\n\\begin{equation}\n\\left(\\Sigma_2 - \\Sigma_3\\right)^\\prime =\n- 3 \\Omega \\left[ \\textfrac{1}{2} (1-w) (\\Sigma_2 -\\Sigma_3) - (w_2-w_3)\\right]\\:.\n\\end{equation}\nHence, $(\\Sigma_2-\\Sigma_3)^\\prime$ vanishes when $\\Sigma_2 = \\Sigma_3$ and $w_2 = w_3$.\nFrom~\\eqref{seq} and~\\eqref{zeq} we obtain an equation for $w_i^\\prime$,\n\\begin{equation}\\label{omegaieq}\nw_i^\\prime = -2 w_i \\left[ \\Sigma_i -\n\\sum_k \\Sigma_k \\left(\\frac{1}{2}w_k + \\frac{1}{2} w_i^{-1} \\beta_{i k}^{\\text{\\tiny (0)}}\\right)\n+\\frac{z}{2} \\left(w_i^{-1} \\beta_{i}^{\\text{\\tiny (1)}} + \\beta^{\\text{\\tiny (1)}} \\right) \\right]\\:,\n\\end{equation}\nwhere we have defined\n\\begin{equation}\n\\beta_{i_1\\ldots i_k}^{\\text{\\tiny (m)}} =\n\\frac{(1-z)^k {\\displaystyle\\int} f_0\\,\n\\left(\\Pi_{n=1}^{k} s_{i_n} v_{i_n}^2\\right) \\left[z+(1-z) \\sum_k s_k v_k^2\\right]^{1\/2-k-m} d v_1 d v_2 d v_3}%\n{{\\displaystyle\\int} f_0 \\left[z+(1-z) \\sum_k s_k v_k^2\\right]^{1\/2} d v_1 d v_2 d v_3}\\:;\n\\end{equation}\nnote that $w_i = \\beta_{i}^{\\text{\\tiny (0)}}$. Equation~\\eqref{omegaieq} implies\n\\begin{equation}\n(w_2-w_3)^\\prime = -\\left(\\Sigma_1 -\\Sigma_2\\right)\n\\left(\\beta_{22}^{\\text{\\tiny (0)}} - \\beta_{33}^{\\text{\\tiny (0)}}\\right) -\nz \\left(\\Sigma_1 +1\\right) \\left(\\beta_{2}^{\\text{\\tiny (1)}}- \\beta_{3}^{\\text{\\tiny (1)}}\\right)\\:,\n\\end{equation}\nwhen $\\Sigma_2 = \\Sigma_3$ and $w_2 = w_3$. We conclude that the set\n$\\Sigma_2 \\equiv \\Sigma_3$, $w_2 \\equiv w_3$ is an invariant set of\nthe dynamical system, iff $w_2 = w_3$ implies\n$\\beta_{22}^{\\text{\\tiny (0)}} = \\beta_{33}^{\\text{\\tiny (0)}}$ and\n$\\beta_{2}^{\\text{\\tiny (1)}} = \\beta_{3}^{\\text{\\tiny (1)}}$. (In\nthe massless case, only the first condition is required.) 
These\nconditions are violated for general distribution functions; for these\nconditions to hold, $f_0$ must be of a certain type that ensures\ncompatibility with the LRS symmetry. This is the case, for instance,\nwhen there exist constants $a_2>0$, $a_3>0$, such that $f_0$ is\ninvariant under the transformation $v_2 \\rightarrow (a_3\/a_2)\\, v_3$,\n$v_3\\rightarrow (a_2\/a_3)\\, v_2$; e.g., $f_0 = \\tilde{f}_0(v_1,\nv_2^2 v_3^2)$, or $f_0 = \\tilde{f}_0(v_1, a_2^2 v_2^2 + a_3^2\nv_3^2)$; in the latter case $w_2(\\tau) \\equiv w_3(\\tau)$ implies\n$a_3^2 s_2(\\tau) \\equiv a_2^2 s_3(\\tau)$.\n\nNote finally that a distribution function $f_0$ is compatible with a\nFRW geometry if it is compatible with all LRS symmetries. This\nmeans that, for instance, $f_0 = \\tilde{f}_0(a_1^2 v_1^2 +a_2^2 v_2^2\n+a_3^2 v_3^2)$ is compatible with the FRW symmetry and thus admits a\nunique FRW solution of the Einstein-Vlasov equations.\n\n\\section{Future asymptotics}\n\\label{futureproof}\n\nIn this section we give the proof of Theorem~\\ref{futurethm}:\n\n\\noindent\n\\textbf{Theorem~\\ref{futurethm}.}\n\\textit{The $\\omega$-limit of every orbit in the interior of the\nmassive state space $\\mathcal{X}$ [massless state space $\\mathcal{Z}^0$]\nis one of the fixed points $\\,\\mathrm{FS}^1$ [the fixed point $\\,\\mathrm{F}^0$].}\n\n\\proof Consider first the state space $\\mathcal{Z}^0$ of massless\nparticles and the associated system~\\eqref{z0eq}. The function\n$M_{(2)}$, cf.~\\eqref{m2eqs}ff., is well-defined and monotonically\ndecreasing everywhere except at the fixed point $\\text{F}^0$,\nwhere it has a global minimum. 
On the boundaries $\\mathcal{S}_i^0$\n(given by $s_i=0$) and $\\mathcal{K}^0$ ($\\Sigma^2 =1$) of the state\nspace $\\mathcal{Z}^0$, the function $M_{(2)}$ is infinite.\nTherefore, application of the monotonicity principle yields that the\n$\\omega$-limit of every orbit must be the fixed point $\\text{F}^0$.\n\nIn the massive case consider~\\eqref{Sigeq} in the form\n\\begin{equation}\\label{sigmaiprime}\n\\Sigma_i^\\prime = -3 \\Omega \\left[ \\frac{1}{2} (1-w) (1 +\\Sigma_i) - \\frac{1}{2} (1- 3w) - w_i \\right]\\:.\n\\end{equation}\nThe r.h.s.\\ is positive when $\\Sigma_i \\leq -1$ and $z>0$ ($w<1\/3$).\nThis implies that the hyperplanes $\\Sigma_i = -1$ constitute semipermeable membranes in the state space $\\mathcal{X}$,\nwhereby the ``triangle'' $(\\Sigma_1 > -1) \\cap (\\Sigma_2 > -1) \\cap (\\Sigma_3 > -1)$ becomes a future invariant\nsubset of the flow~\\eqref{eq}.\n\nThe first part of the proof is to show that every orbit enters the\ntriangle at some time $\\tau_e$ (and consequently remains inside for\nall later times).\n\nAssume that there exists an orbit with $\\Sigma_i(\\tau) \\leq -1$ for all $\\tau$ (for some $i$).\nFrom~\\eqref{seq} we infer that\n\\begin{equation}\ns_i^\\prime = -2 s_i \\left[ s_j (\\Sigma_i - \\Sigma_j) + s_k (\\Sigma_i - \\Sigma_k) \\right] > 0\n\\end{equation}\nif $\\Sigma_i< -1$, and that $s_i^\\prime \\geq 0$ if $\\Sigma_i =-1$;\nhence $s_i(\\tau) \\geq s_i(\\tau_0)=\\mathrm{const} > 0$ for all $\\tau \\in [\\tau_0,\\infty)$.\nFrom~\\eqref{omegaeq} we obtain\n\\begin{equation}\n\\begin{split}\n\\frac{1}{3} \\Omega^{-1} \\Omega^\\prime \\,\\Big|_{\\Omega=0} & =\n1 - \\frac{1}{3} w_i (1+\\Sigma_i) - \\frac{1}{3} w_j (1+\\Sigma_j) - \\frac{1}{3} w_k (1+\\Sigma_k) \\,\\geq \\\\\n& \\geq 1 -w_j - w_k = (1-3 w) + w_i \\geq \\mathrm{const} >0 \\:,\n\\end{split}\n\\end{equation}\nsince $s_i \\geq \\mathrm{const} > 0$. Consequently, $\\Omega(\\tau)\n\\geq \\mathrm{const} > 0$ for all $\\tau \\in [\\tau_0,\\infty)$. 
It\nfollows from~\\eqref{sigmaiprime} that\n\\begin{equation}\n\\Sigma_i^\\prime \\geq \\mathrm{const} > 0\n\\end{equation}\nfor all $\\tau\\in [\\tau_0,\\infty)$ by the same argument.\nThis is in contradiction to the assumption $\\Sigma_i \\leq -1$ for all $\\tau$.\n\nThus, in the second part of the proof, we can consider an arbitrary\norbit $\\gamma$ and assume, without loss of generality, that\n$\\gamma(\\tau)$ lies in the $\\Sigma$-triangle for all $\\tau \\in\n[\\tau_e,\\infty)$. Equation~\\eqref{zeq} leads to\n\\begin{equation}\nz^\\prime = 2 z (1-z) \\sum\\limits_n s_n (1+\\Sigma_n) \\geq 0\n\\end{equation}\nfor all $\\tau \\in [\\tau_e,\\infty)$, hence\n$z(\\tau) \\geq z(\\tau_e) > 0$ for all $\\tau \\in [\\tau_e,\\infty)$.\n\nWe define the function $N$ by\n\\begin{equation}\nN = (1+\\Sigma_1) (1+ \\Sigma_2) (1+\\Sigma_3)\\:.\n\\end{equation}\nThe derivative can be estimated by\n\\begin{equation}\nN^\\prime \\geq 3 \\Omega N \\left[ -\\frac{3}{2} (1-w) + \\frac{1}{2} \\sum_n \\frac{1-3 w}{1+\\Sigma_n}\\right]\\:.\n\\end{equation}\nSince $w(\\tau) \\leq \\mathrm{const} < 1\/3$ (because $z(\\tau) \\geq\n\\mathrm{const} > 0$), $N^\\prime$ is positive when at least one of\nthe $\\Sigma_i$ is sufficiently small, i.e., when $N$ itself is small\n(a detailed analysis shows that $N^\\prime \\geq 3 \\Omega N [-(3\/2)\n(1-w) + \\sqrt{3} (1-3 w) N^{-1\/2}]$). We conclude that there exists\na positive constant $N_0$ such that $N(\\tau) \\geq N_0$ for all $\\tau\n\\in [\\tau_e,\\infty)$. This in turn implies that there exists $\\nu>0$\nsuch that $\\Sigma_i(\\tau) \\geq -1 + \\nu$ for all $i$ for all $\\tau\n\\in [\\tau_e,\\infty)$, whereby $z^\\prime \\geq 2 z (1-z) \\nu$ for all\n$\\tau \\in [\\tau_e,\\infty)$.\n\nIt follows that the $\\omega$-limit of $\\gamma$ must lie on $z=1$,\ni.e., on $\\mathcal{Z}^1$. 
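To make this step explicit: the differential inequality $z^\\prime \\geq 2 \\nu z (1-z)$ with $z(\\tau_e) \\in (0,1)$ can be integrated by comparison with the logistic equation, which gives\n\\begin{equation}\nz(\\tau) \\,\\geq\\, \\left[ 1 + \\frac{1-z(\\tau_e)}{z(\\tau_e)}\\, e^{-2 \\nu (\\tau - \\tau_e)} \\right]^{-1}\\:,\n\\end{equation}\nand the right hand side converges to $1$ as $\\tau \\rightarrow \\infty$.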
Taking into account the simple structure\nof the flow on $\\mathcal{Z}^1$, characterized by $\\Omega^\\prime = 3\n(1-\\Omega) \\Omega$, we conclude that the fixed points $\\text{FS}^1$\ngiven by $\\Sigma_1 =\\Sigma_2 =\\Sigma_3 = 0$ are the only possible\n$\\omega$-limits. \\hspace*{\\fill}\\rule{0.2cm}{0.2cm}\n\n\\begin{remark}\nIn order to demonstrate the versatility of the dynamical systems\nmethods, we have chosen here to prove Theorem~\\ref{futurethm} by\nusing techniques that are slightly different from those employed in\nSection~\\ref{locglo} (which exploit the monotonicity principle).\nHowever, it is straightforward (in fact, even simpler) to give a\nproof by making use of the hierarchy of monotone functions. Indeed,\nthe function $M_{(1)}$ ensures that the $\\omega$-limit of every\norbit lies on $\\mathcal{Z}^1$ or $\\mathcal{S}_i$; modulo some\nsubtleties, we can exclude that $\\mathcal{S}_i$ is attractive by\nusing the monotone function $M_{(3)}$ and the local properties of\nthe fixed points.\n\\end{remark}\n\n\n\n\\section{The spaces $\\mathcal{S}^0_i$ -- interpretation of solutions}\n\\label{Si0space}\n\nThe flow on the boundary subsets $\\mathcal{S}^0_i$\nis of fundamental importance in the analysis of the global dynamics of the state space,\nsee Section~\\ref{globaldynamics}. Note that except for $\\text{F}^0$ all attractors\n($\\mathrm{D}_i^0$, $\\mathrm{QL}_i^0$, $\\mathrm{KC}_i^0$, and the heteroclinic network)\nlie on $\\mathcal{S}^0_i$. For a depiction of the flow on $\\mathcal{S}^0_1$ see Figure~\\ref{cylinder}.\nIn the following we show that orbits on $\\mathcal{S}^0_1$ represent solutions\nof the Einstein-Vlasov system that are associated with a special class of\ndistribution functions. 
Furthermore, we investigate in detail solutions that\nconverge to the subcycle $\\mathcal{H}^0_1$ of the heteroclinic network.\n\nConsider a distribution function $f_0$ of the form\n\\begin{equation}\\label{disdisfct}\nf_0(v_1,v_2,v_3) = \\delta(v_1) f_0^{\\mathrm{red}}(v_2,v_3)\\:,\n\\end{equation}\nwhere $f_0^{\\mathrm{red}}(v_2,v_3)$ is even in $v_2$ and $v_3$.\nIn the case of massless particles, $m=0$ (and thus $z=0$),\nwe obtain\n\\begin{equation}\nw_1 = 0 \\:,\\qquad w_j = \\frac{g^{jj} {\\displaystyle\\int} f_0^{\\mathrm{red}} \\, v_j^2\n\\left[g^{22} v_2^2 + g^{33} v_3^2 \\right]^{-1\/2} d v_2 d v_3}%\n{{\\displaystyle\\int} f_0^{\\mathrm{red}}\n\\left[g^{22} v_2^2 + g^{33} v_3^2 \\right]^{1\/2} d v_2 d v_3}\\:\\,(j=2,3)\\:,\n\\end{equation}\nwhere $g^{22}$ and $g^{33}$ can be replaced by $s_2$ and $s_3$, if desired.\nIn the unbounded variables $g^{ii}$ the equations read\n\\begin{subequations}\\label{unboundedvarsz0sys}\n\\begin{align}\n\\Sigma_1^ \\prime & = - \\Omega [1+\\Sigma_1] \\:, & (g^{11})^\\prime & = -2 g^{11} (1+\\Sigma_1) \\\\\n\\Sigma_j^\\prime &= - \\Omega [1+\\Sigma_j -3 w_j] \\:, & (g^{jj})^\\prime &= -2 g^{jj} (1+\\Sigma_j) \\quad\\qquad(j=2,3)\\:,\n\\end{align}\n\\end{subequations}\ncf.~the remark at the end of Section~\\ref{einsteinvlasov}.\nIn particular we note that the equation for $g^{11}$ decouples; hence the full dynamics is\nrepresented by a reduced system in the variables $(\\Sigma_1, \\Sigma_2, \\Sigma_3, g^{22}, g^{33})$,\nwhich coincides with the system~\\eqref{unboundedvarsz0sys} on the invariant subset $g^{11} = 0$.\nIn analogy to the definitions~\\eqref{defdimless} we set\n\\begin{equation}\ns_1 =0 \\:, \\qquad s_2 = \\frac{g^{22}}{g^{22}+g^{33}} \\:,\\qquad s_3 = \\frac{g^{33}}{g^{22}+g^{33}}\\:,\n\\end{equation}\nso that $s_2 + s_3 =1$. 
This results in the dynamical system\n\\begin{subequations}\\label{boundedvarsz0sys}\n\\begin{align}\n\\Sigma_1^ \\prime & = - \\Omega [1+\\Sigma_1] \\:, & s_1 & \\equiv 0 \\\\\n\\Sigma_j^\\prime &= - \\Omega [1+\\Sigma_j -3 w_j] \\:,\n& s_j^\\prime &= -2 s_j [\\Sigma_j - (s_2 \\Sigma_2 + s_3 \\Sigma_3)] \\quad\\qquad(j=2,3)\\:.\n\\end{align}\n\\end{subequations}\nThis system~\\eqref{boundedvarsz0sys} coincides with the dynamical system~\\eqref{eq} induced on $\\mathcal{S}^0_1$\n(which is obtained by setting $z=0$, thus $w=1\/3$, and $s_1 = 0$ in~\\eqref{eq}).\n\nOur considerations show that the flow on $\\mathcal{S}^0_1$ possesses a direct physical interpretation:\norbits on $\\mathcal{S}^0_1$ represent solutions of the massless Einstein-Vlasov system\nof Bianchi type~I with a ``distributional'' distribution function of the\ntype~\\eqref{disdisfct}.\nNote that the system~\\eqref{boundedvarsz0sys} on $\\mathcal{S}^0_1$ must be supplemented\nby the decoupled equations~\\eqref{xeq} and $(g^{11})^\\prime = -2 g^{11} (1+\\Sigma_1)$\nin order to construct the actual solution from an orbit in $\\mathcal{S}^0_1$.\n\nTwo structures in $\\mathcal{S}^0_1$ are of particular interest: the fixed point $\\mathrm{D}_1^0$ and\nthe heteroclinic cycle $\\mathcal{H}^0_1$, see Figure~\\ref{cylinder}.\nThe fixed point $\\mathrm{D}_1^0$ represents an LRS solution (associated with a distributional\n$f_0$); it is straightforward to show that the metric is of the form\n\\begin{equation}\\label{Disol}\ng_{11} = \\mathrm{const} \\:,\\qquad\ng_{22} \\propto t^{4\/3} \\:,\\qquad\ng_{33} \\propto t^{4\/3}\\:,\n\\end{equation}\nand $H = (4\/9) t^{-1}$.\n\nThe orbit $\\mathrm{T}^0_{22} \\rightarrow \\mathrm{T}^0_{32}$, which is part of $\\mathcal{H}^0_1$, corresponds to a solution\n\\begin{equation}\\label{teil1}\ng_{11} = g_{11}^0 \\:, \\qquad\ng_{22} = g_{22}^0 ( 3 H_0 t)^2 \\:, \\qquad\ng_{33} = g_{33}^0\\:;\n\\end{equation}\nhere, $H = (3 t)^{-1}$; $H_0$ is a characteristic value of $H$.\nFor 
the orbit $\\mathrm{T}^0_{33} \\rightarrow \\mathrm{T}^0_{23}$\nthe result is analogous with $g_{22}$ and $g_{33}$ interchanged.\nA more extensive computation shows that the orbit $\\mathrm{T}^0_{32} \\rightarrow \\mathrm{T}^0_{33}$\nleads to\n\\begin{equation}\\label{teil2}\ng_{11} = g_{11}^0 \\:, \\qquad\ng_{22} = g_{22}^0 \\left[ \\log(1+3 H_0 t) \\right]^2 \\:,\\qquad\ng_{33} = g_{33}^0 (1 +3 H_0 t)^2\\:,\n\\end{equation}\ntogether with $H= H_0 (1 +3 H_0 t)^{-1} (1 + [\\log (1+3 H_0 t)]^{-1})$.\n(Note that $3 H t$ is always close to $1$ and approaches $1$ for $t \\rightarrow 0$\nand $t\\rightarrow \\infty$.)\nThe result for the orbit $\\mathrm{T}^0_{23} \\rightarrow \\mathrm{T}^0_{22}$ is analogous with\n$g_{22}$ and $g_{33}$ interchanged.\n\nNow consider an orbit converging to the heteroclinic cycle as $\\tau \\rightarrow -\\infty$, i.e., $t \\searrow 0$.\nSince the orbit alternates between episodes where it is close to one of the four heteroclinic orbits,\nwe obtain a solution with alternating episodes of characteristic behaviour of\nthe type~\\eqref{teil1} and~\\eqref{teil2}; transitions between the episodes\ncorrespond to the orbit being close to the fixed points.\n\nLet $t^{(n)}$ denote a monotone sequence of times such that\nthe solution is in episode $(n)$ at time $t^{(n)}$\n(i.e., the orbit is close to one of the four heteroclinic orbits and\nfar from the fixed points); $t^{(n)}\\searrow 0$ as $n\\rightarrow \\infty$.\nSince $3 H t \\approx 1$ as $t\\searrow 0$, the sequence $t^{(n)}$ gives rise\nto a sequence $H^{(n)}$ defined by $3 H^{(n)} t^{(n)} = 1$.\nDuring episode $(n)$ the solution exhibits a characteristic behaviour of\nthe type~\\eqref{teil1} or~\\eqref{teil2} with $H_0 =H^{(n)}$\n(and $g_{22}^0 = g_{22}^{(n)}$, $g_{33}^0 = g_{33}^{(n)}$).\nA transition from one episode to another involves\na matching of the constants.\n\n\\textit{Example}.\nSuppose that the orbit is close to the heteroclinic orbit\n$\\mathrm{T}^0_{32} \\rightarrow 
\\mathrm{T}^0_{33}$ in episode $(n)$.\nWe obtain a behaviour of the type~\\eqref{teil2} with $H_0=H^{(n)}$.\nAs $H^{(n)} t$ gets small we see that $g_{22} \\approx g_{22}^{(n)} (3 H^{(n)} t)^2$,\n$g_{33} \\approx g_{33}^{(n)}$.\nThe next (as $t\\searrow 0$) episode corresponds to the orbit running close\nto $\\mathrm{T}^0_{22} \\rightarrow \\mathrm{T}^0_{32}$;\nthe behaviour of the solution is~\\eqref{teil1} with $g_{22}^{(n+1)}$, $g_{33}^{(n+1)}$,\nand $H_0=H^{(n+1)}$.\nThe transition between the episodes $(n)$ and $(n+1)$ is thus straightforward:\n$g_{22}^{(n+1)} (H^{(n+1)})^2 = g_{22}^{(n)} (H^{(n)})^2$ and\n$g_{33}^{(n+1)} = g_{33}^{(n)}$.\nMatching episodes $(n+1)$ and $(n+2)$ is slightly more involved.\nThe orbit is close to the heteroclinic orbit $\\mathrm{T}^0_{23} \\rightarrow \\mathrm{T}^0_{22}$\nin episode $(n+2)$, where\n\\begin{equation}\\label{teil4}\ng_{11} = g_{11}^0 \\:, \\qquad\ng_{22} = g_{22}^{(n+2)} (1 +3 H^{(n+2)} t)^2 \\:,\\qquad\ng_{33} = g_{33}^{(n+2)} \\left[ \\log(1+3 H^{(n+2)} t) \\right]^2 \\:.\n\\end{equation}\nClose to transition time, when $H^{(n+2)} t$ is large,\nwe get $g_{22} = g_{22}^{(n+2)} (3 H^{(n+2)} t)^2$ and $g_{33} = g_{33}^{(n+2)} (\\log 3 H^{(n+2)} t)^2$.\nThe transition between episode $(n+1)$ and $(n+2)$ thus involves\nthat $g_{33}$ begins to decay logarithmically from having been approximately\nconstant.\n\n\n\n\n\n\n\n\\end{appendix}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\n\n\\label{intro}\n\n\\textit{Cloud computing} is inexorably becoming the technology of choice among big and small businesses to deploy and manage their IT infrastructures and applications \\cite{wang2016spatial}. Infrastructure-as-a-Service (\\textit{IaaS}) is a key cloud service delivery model. Large companies such as Amazon, Google, Microsoft, and IBM provide IaaS solutions to their consumers. 
Examples of IaaS consumers include Software-as-a-Service (SaaS) providers and organizations such as governments, universities, and research centers \\cite{fattah2020long}. The computing resources or Virtual Machine (VM) instances are the most common IaaS services \\cite{hwang2015cloud}. The \\textit{functional} properties of a VM instance or an IaaS service include computing resources such as CPU units, memory units, storage, and network bandwidths. Examples of \\textit{non-functional} properties or Quality of Service (QoS) attributes of IaaS services are availability, price, response time, throughput, and energy efficiency \\cite{fattah2020event,fattah2020signature}.\n\nThe \\textit{market-driven} cloud service provisioning is a topical research area \\cite{varghese2018next}. There exist several key service \\textit{provisioning} models in the cloud market such as on-demand, reservation \\cite{chaisiri2011optimization, zheng2016probabilistic}, and economic models \\cite{sharma2012pricing,Thanakornworakij,PAL2013113}. We propose an \\textit{alternative strategy to the on-demand model} which would be used in conjunction with it. The premise is that, under the on-demand or reservation model, it is difficult to predict service demand accurately, which potentially leads to either under-provisioning or over-provisioning \\cite{dustdar2011principles,jiang2011asap, islam2012empirical}.\nThe alternative model focuses on the economic model-based cloud service selection and provision for long-term IaaS composition. In that regard, the economic model-based service provisioning approach is fundamentally different from the typical on-demand and reservation models. According to the \\textit{on-demand} model, the provider has a fixed set of VM instances associated with QoS and price \\cite{Hong}. The consumer may acquire and release on-demand VM instances anytime and only pay for the usage per hour or per second. 
\nThe provider usually sets a discounted flat rate for the reserved instances in the \\textit{reservation} model \\cite{fattah2019long}. The consumers reserve the VM instance for a fixed time period and pay for it regardless of usage.\nAccording to \\textit{economic model} based service provisioning approaches, there exist \\textit{market competitions} among providers to set the price and QoS of their services \\cite{PAL2013113}. The market competition forms \\textit{non-cooperative games} among competitive providers and consumers \\cite{Thanakornworakij}.\n\n\\textit{We focus on the economic model-based cloud service selection and provision for a long-term period}. Economic expectations are \\textit{formally} expressed in terms of economic models \\cite{ye2014economic,sajibtsc2015,goiri2012economic}. According to the economic model-based service selection approaches \\cite{ye2014economic,yu2007efficient,kholidy2014qos}, a consumer's requests include custom VM instances, QoS parameters, and the price the consumer is willing to pay for the instance (usually determined by their market research or the \\textit{consumer economic models}). Similarly, the providers follow their own economic model to \\textit{accept} or \\textit{reject} the requests from the consumers \\cite{sajibtsc2015}.\nThe economic model-based service selection and provisioning approaches are applied in different cloud markets such as the spot market, SLA negotiation, and auction-based reservation models \\cite{zaman2013combinatorial}.\n\nWe assume that consumers create their \\textit{IaaS requests} following their own economic models. An IaaS provider receives a set of IaaS requests from \\textit{different} consumers. \\textit{The IaaS composition from the provider's perspective is defined as the selection of an optimal set of consumer requests \\cite{sajibtsc2015}}. 
The IaaS composition is a \\textit{decision-making} problem where the provider decides which requests it should \\textit{accept} or \\textit{reject}. An effective IaaS composition \\textit{maximizes} the provider's long-term economic expectations, such as profit, reputation, and consumer growth \\cite{fattah2018cp}. The economic model is the \\textit{formal tool} to select the optimal set of consumer requests to meet the provider's expectations \\cite{dash2009economic}.\n\nOur objective is to design a \\textit{qualitative economic model} for the long-term IaaS composition \\cite{sajibicsoc2016}. The qualitative economic models provide an \\textit{effective} way to select consumer requests where there is \\textit{uncertain} or \\textit{incomplete} information. The consumer requirements are typically uncertain and probabilistic \\cite{fattah2018cp} for the long-term period. \\textit{The provider's long-term economic expectations are also dynamic} \\cite{mistry2018long}. The qualitative economic model specifies the provider's \\textit{temporal business strategies} such as reputation building, risk management, revenue, and profit maximization \\cite{sajibicsoc2016}. These business strategies determine the service provisioning \\textit{preferences}. For example, the provider may observe very high demand for \\textit{Network-intensive services} (i.e., VM instances designed for Network-intensive applications, e.g., the C5n instance type in Amazon EC2\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/c5\/}) during the Christmas or holiday period. 
The provider may prefer to provision Network-intensive services rather than CPU-intensive services (i.e., VM instances designed for CPU-intensive applications, e.g., the P3 instance in Amazon EC2\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/p3\/}) to increase its revenue.\n\nWe assume that an IaaS provider has its long-term qualitative economic model, i.e., the temporal service provisioning \\textit{preferences} \\cite{sajibicsoc2016}. The provider receives long-term IaaS requests from different consumers, which are represented in \\textit{time series} and associated with QoS parameters and price. The \\textit{qualitative IaaS composition} is defined as the \\textit{selection} or \\textit{acceptance} of an optimal set of IaaS requests using the qualitative economic model of the provider. \\textit{We aim to provide a comprehensive framework for long-term qualitative IaaS composition}. To the best of our knowledge, apart from our previous work \\cite{sajibicsoc2016,mistry2018long}, existing research mainly focuses on the \\textit{quantitative IaaS composition}. The target of the quantitative composition is to \\textit{maximize revenue and profit} of the provider for a \\textit{short-term} period without any long-term business strategies or economic model \\cite{ye2013qos,chaisiri2012optimization,zhu2010resource}. In contrast, the target of the qualitative composition is to \\textit{maximize the similarity measure} between a given set of consumer requests and the provider's qualitative economic model.\n\nWe represent the provider's long-term qualitative economic model using \\textit{Temporal Conditional Preference Networks} (TempCP-nets) \\cite{mistry2017probabilistic}. The TempCP-net \\textit{ranks} the short-term and long-term consumer requests using a k-d tree according to the provider's preferences \\cite{sajibtsc2014}. 
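To illustrate how a conditional preference specification induces a qualitative ranking of outcomes, consider the following minimal sketch. The variables, values, and preference orders below are invented for illustration and are far simpler than an actual TempCP-net; a lexicographic order over the conditional preference table (CPT) entries is used here as a simplified stand-in for the CP-net ranking semantics.

```python
# Toy conditional preference table (hypothetical, for illustration only).
# Variables: service type S in {net, cpu}; price P in {high, low}.
# Strategy "revenue maximization": prefer Network-intensive services;
# given S = net prefer high price, given S = cpu prefer low price.
pref_S = ["net", "cpu"]            # unconditional preference order on S
pref_P = {"net": ["high", "low"],  # preference order on P, conditional on S
          "cpu": ["low", "high"]}

def rank(outcome):
    """Smaller tuple = more preferred (lexicographic over the CPT orders)."""
    s, p = outcome
    return (pref_S.index(s), pref_P[s].index(p))

outcomes = [(s, p) for s in pref_S for p in ["high", "low"]]
ranking = sorted(outcomes, key=rank)
print(ranking)
# [('net', 'high'), ('net', 'low'), ('cpu', 'low'), ('cpu', 'high')]
```

In the actual framework, the totally ordered outcomes of the induced CP-net graph are indexed in a k-d tree so that incoming requests can be ranked efficiently against the active preference semantics.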
The qualitative composition is \\textit{transformed} into a \\textit{combinatorial optimization problem} where the objective is to select the consumer requests that maximize the preference rankings. We explore two composition approaches: a) \\textit{global composition}, and b) \\textit{local composition} \\cite{alrifai2009combining, yu2008framework}. The global composition approach considers all the consumer requests within the composition interval, which is computationally expensive \\cite{sajibicsoc2016}. The local composition approach \\textit{divides} the composition interval into several time segments and optimizes a \\textit{partial set} of requests (acceptance or rejection) in each time segment. It may \\textit{significantly improve} the runtime efficiency as we do not need to consider the whole set of requests during the entire composition period. However, the local composition approach is a \\textit{greedy approach} of sequential optimization \\cite{gnanlet2009sequential,pednault2002sequential} and may not produce the optimal result as the request selection is \\textit{temporally dependent} on the previous acceptance or rejection decisions in other time segments. For example, when we optimize the requests from left to right time segments (i.e., January, February, March), the composition result may differ from that of the optimization from right to left time segments (i.e., March, February, and January). A reinforcement learning based approach called 3d Q-learning \\cite{mistry2018long} is proposed to \\textit{find the optimal sequence} of temporal selections. The proposed 3d Q-learning based composition approach is considered \\textit{off-policy} as the \\textit{learning approach has no restrictions over exploration}. The proposed approach does not consider the \\textit{temporal distribution} and \\textit{correlations} of the \\textit{historical request sets} to compose a new set of requests using \\textit{policy reuse}. 
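The order-dependence of this sequential optimization can be seen in a minimal sketch. The requests, preference ranks, and the one-request-per-segment capacity rule below are invented for illustration and do not reflect the actual composition semantics; the point is only that a request spanning several time segments can block, or be blocked by, requests selected in earlier-optimized segments.

```python
# Toy illustration (hypothetical requests): greedy per-segment selection
# is order-dependent when a request spans several time segments.
# Each request: (name, occupied segments, preference rank; lower is better).
requests = [
    ("r1", {0, 1}, 2),  # spans segments 0 and 1 (e.g., January-February)
    ("r2", {0}, 1),
    ("r3", {1}, 3),
]
CAPACITY = 1  # at most one accepted request per segment (toy constraint)

def greedy(order):
    accepted, load = [], {0: 0, 1: 0}
    for seg in order:
        # accept the best-ranked unaccepted request that fits in
        # every segment it occupies
        for name, segs, rank in sorted(requests, key=lambda r: r[2]):
            if (seg in segs and name not in accepted
                    and all(load[s] < CAPACITY for s in segs)):
                accepted.append(name)
                for s in segs:
                    load[s] += 1
                break
    return sorted(accepted)

print(greedy([0, 1]))  # left-to-right:  ['r2', 'r3']
print(greedy([1, 0]))  # right-to-left:  ['r1']
```

Optimizing the segments left to right accepts {r2, r3}, whereas right to left accepts only {r1}; finding a good order of temporal selections is precisely the role of the sequential learning approach discussed above.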
\n\nWe propose a novel \\textit{on-policy} based 3d Q-learning approach that effectively utilizes historical information to find the optimal selection of requests. First, the proposed learning approach reduces the run-time by removing redundant state transitions in the off-policy based 3d Q-learning approaches. Next, a novel request annotation approach based on agglomerative clustering techniques \\cite{fernandez2008solving,bouguettaya2015efficient} is proposed to capture the \\textit{intrinsic characteristics} of the historical requests such as the temporal distribution and the global preference ranking. A \\textit{novel policy reuse approach} is proposed to compose a new set of requests that effectively utilizes previous policies learned from historical information. The key contributions of this work are as follows:\n\n\\begin{itemize}\n \\item A comprehensive framework to compose long-term and short-term IaaS requests based on the provider's qualitative economic model.\n \\item An on-policy based 3d Q-learning approach to finding the optimal request selection sequence.\n \\item A novel request annotation approach to capture the intrinsic characteristics of historical request sets.\n \\item A novel policy reuse approach to enable effective utilization of historical information in the proposed 3d Q-learning. \n\\end{itemize}\n\nThe rest of the paper is structured as follows. In Section 2, we introduce a set of terminologies and concepts that are used to formulate the qualitative composition problem. Section 3 illustrates a motivation scenario to explain the need for sequential learning in IaaS composition. Section 4 provides a general overview of the proposed qualitative IaaS composition framework. In Section 5, we describe the proposed IaaS composition approach for a new set of requests. Section 6 describes the proposed long-term qualitative composition with the previous learning experience. 
Section 7 presents our experiments to evaluate the proposed approaches. Section 8 summarizes the related work on economic model based cloud service composition and sequential learning approaches. Finally, Section 9 concludes the paper and discusses its limitations and future work.\n\n\\section{Preliminaries}\n\nIn this section, we introduce a set of terminologies and concepts that are used to formulate the qualitative IaaS composition problem in this paper. We will use these terminologies throughout the paper to describe the problem and proposed solution.\n\n\\begin{itemize}\n \\item \\textit{Providers and Services:} IaaS providers are referred to as the provider. We consider the composition problem from the perspective of a single provider. IaaS services usually include a wide selection of VM instance types optimized to fit different use cases such as general purpose, compute optimized, memory optimized, and storage optimized \\cite{hwang2015cloud}. The VM instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give consumers the flexibility to choose the appropriate mix of resources for their applications. We consider VM instances as services. The VM instances designed for Memory-intensive applications are termed Memory-intensive services. For example, the R5 instance in Amazon EC2 is an example of Memory-intensive services\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/r5\/}. Similarly, VM instances designed for CPU-intensive applications are termed CPU-intensive services. For example, the P3 instance in Amazon EC2 is an example of CPU-intensive services\\footnote{https:\/\/aws.amazon.com\/ec2\/instance-types\/p3\/}. \n \n \\item \\textit{Resources:} The resources are the capacity of the physical machines that are used to offer the VM services, such as CPU cores, memory units, and network bandwidths \\cite{hwang2015cloud}. 
\\textit{We assume that the provider has a fixed set of resources.} Here, the fixed set of resources refers to the maximum amount of resources the provider may have at a certain point in time. The maximum amount of resources may, however, change over time. In such a case, the fixed set of resources needs to be updated according to the new maximum. The proposed approach can be considered a proactive approach, where the provider anticipates the maximum amount of resources it can have. Our aim is to utilize these resources based on the provider's economic model.\n \n \\item \\textit{Consumers}: The targeted consumers are mainly medium to large business organizations such as SaaS providers, governments, universities, and research institutes. These organizations may require services for a long-term period (e.g., 1 to 3 years). \n \n \\item \\textit{IaaS Requests}: IaaS requests refer to the configuration of the \\textit{functional} and \\textit{non-functional} requirements of a VM over a period of time. A consumer may need a VM with 2 vCPUs, 2 GB memory, and 99\\% availability in the first six months of a year. The IaaS request for that period is represented as (2 vCPU, 2 GB memory, 99\\% availability). \\textit{We assume deterministic IaaS requests, i.e., the provider has knowledge about the long-term requests prior to the composition}. We compose these requests based on the provider's economic models. The provider classifies requests as either \\textit{short-term} or \\textit{long-term}. A request is considered short-term if only one business strategy is applicable during the lifetime of the request. A request is considered long-term if more than one business strategy is applicable to the request. For instance, if the provider changes its business strategies or economic models quarterly, a request that reserves a VM for 1 year is considered a long-term request. 
In these circumstances, a request that reserves the VM for one month is considered a short-term request. Note that we focus on economic model-based service provisioning, where VMs are reserved for a certain time period. \\textit{Burstable on-demand resources are outside the focus of this paper}. \n \n \\item \\textit{Conditional Preference Networks (CP-nets)}: The CP-net is a widely used tool that captures a user's conditional preferences qualitatively \\cite{cp1}. A CP-net \\cite{boutilier2004cp} is a compact and intuitive formalism for representing and reasoning with conditional preferences under the ceteris paribus (``all else being equal'') semantics. The dynamic semantics of the preferences are indicated using a Conditional Preference Table (CPT) \\cite{boutilier2004cp}. One CP-net can only represent one business strategy at a time \\cite{sajibicsoc2016}. For example, if the business strategy is to build reputation for the first three months, a CP-net could be constructed that graphically expresses a preference for higher QoS in a service over higher prices. \n \\item \\textit{Temporal Conditional Preference Networks (TempCP-nets)}: The TempCP-net is a set of CP-nets that represents a provider's economic expectations over the long-term period \\cite{sajibicsoc2016}. If there are three business strategies for a year, the TempCP-net could be constructed with a set of three CP-nets.\n \n \\item \\textit{k-d Tree}: The induced graph of a CP-net may contain nodes with multi-dimensional tuples and annotated preference rankings \\cite{boutilier2004cp}. The \\textit{k-d tree} is a graph indexing technique in which every node is a \\textit{k}-dimensional point \\cite{andoni2006near}. Every non-leaf node in a k-d tree can be thought of as implicitly generating a splitting hyperplane that divides the space into two parts, known as half-spaces. 
Points on the left and right sides of this hyperplane are represented by the left and right subtrees of that node, respectively. We apply the k-d tree to index the service preference rankings based on the provider's TempCP-net.\n \n \\item \\textit{Local IaaS Ranking}: The rank of an IaaS request is determined by its k-d tree index. The ranking of a request or a set of requests considering only one period is called its local IaaS ranking. For example, if an IaaS request spans from January to December, its local IaaS ranking for January is computed considering only its preference ranking in January.\n \n \\item \\textit{Global IaaS Ranking}: The ranking of a long-term request or set of requests considering every period is called its global IaaS ranking. For example, if an IaaS request spans from January to December, its global ranking is the aggregate of the local rankings of each month from January to December. \n\n \\item \\textit{Sequential Optimization}: The sequential optimization approach is a series of local optimizations where the initial problem is divided into sub-problems \\cite{gnanlet2009sequential}. The local optimizations have a cascading effect, as the decisions in each local optimization affect the decision making in successive local optimizations. The key benefit of sequential optimization is that it reduces the search space of the global optimization significantly \\cite{pednault2002sequential}. \n \n \\item \\textit{Q-learning}: Q-learning \\cite{watkins1992q} is a model-free reinforcement learning approach. The goal of Q-learning is to learn a policy which tells an agent what action to take under what circumstances. It does not require a model of the environment, and it can handle problems with stochastic transitions and rewards without requiring adaptations \\cite{watkins1992q}. In the context of IaaS composition, Q-learning is used to learn the optimal sequence of request selection. 
The two-dimensional Q-learning has no start and terminal states as it accepts only model-free state transitions \\cite{shani2005mdp}. A Q-learning process is termed \\textit{off-policy} if the learning approach has no restrictions over exploration \\cite{munos2016safe}. In contrast, an \\textit{on-policy} based Q-learning process is \\textit{smart} and removes redundant state transitions by considering historical information \\cite{van2016deep}.\n\\end{itemize}\n\n\n\n\n\\section{Motivation Scenario}\n\nIn this section, we illustrate an example scenario to describe the need for sequential learning in IaaS composition. Let us consider an IaaS provider that offers virtual CPU services. The provider offers 100 CPU units with 100\\% availability. We consider only availability as a QoS parameter for simplicity. We define three semantic levels, i.e., high, moderate, and low, to express the provider's qualitative preferences for service attributes, as shown in Figure \\ref{fig:semanticTable}. We assume the provider may change the interpretation of the semantic levels based on the cloud market condition. The provider considers more than \\$1000 a high price in the first year, according to Figure \\ref{fig:semanticTable}. The provider considers more than \\$1300 a high price in the second and third years due to predicted inflation. Three different preference rankings are set based on the provider's annual goals in the three years.\n\n\\begin{figure}\n \\centering\n \n \\includegraphics[width = .8\\textwidth]{semanticTable.pdf}\n \\caption{Semantic preference table}\n \\label{fig:semanticTable}\n \\vspace{-4mm}\n\\end{figure}\n\nWe adopt the economic models of the provider as described in \\cite{mistry2018economic} to continue this example. Figure \\ref{fig:cp} shows the provider's three economic models for three different years. In the first year, the provider wants to offer high-quality services at a lower price to build its reputation. 
The most important attribute in the first year is the ``availability'' of a service, followed by the ``CPU'' and the ``price''. In the second year, the provider decides to maximize its profit by offering services at a higher price for lower resources and QoS. The ``price'' therefore conditions the preferences on the ``CPU'' and the ``availability'' in the second year. The provider's preference for the third year is to provision lower CPU-intensive services. Let us assume the provider prefers long-term requests. Therefore, the provider offers discounts on long-term service requests. A decision variable labeled $N$ is used to distinguish the type of request. The value of $N$ is set to true ($T$) when a request is long-term. A request is considered long-term if it spans over the next period. Otherwise, the value of $N$ is set to false ($F$) to indicate a short-term request. In Figure \\ref{fig:cp}, $N$ is associated with the ``price'' ($P$) for the first two years. In these periods ($CP1$ and $CP2$), the provider considers the high and moderate ``price'' levels indifferently for long-term requests. $N$ is associated with the ``availability'' ($A$) in the third year. According to $CP3$, short-term requests are provided with relatively lower ``availability'' at the same moderate price. More details about how to represent these economic models using CP-nets can be found in \\cite{sajibicsoc2016}.\n\n\n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.9\\textwidth]{CPNets} \n \\caption{CP-nets of a provider}\n \\label{fig:cp}\n \\vspace{-4mm}\n\\end{figure}\n\n\n\n\n\n\nLet us assume a set of requests is represented by $A$ in Figure \\ref{fig:req}(a). $A$ has four requests, i.e., $\\{R1\\}$, $\\{R2\\}$, $\\{R3\\}$, and $\\{R4\\}$, as shown in Figure \\ref{fig:req}(a). Each of these requests arrives at the beginning of the composition. A request is represented in annual segments for simplicity. 
For instance, $(C: High, A: low, P: moderate)$ represents a request segment of $\\{R1\\}$ in the first year. Similarly, Figure \\ref{fig:req}(a) shows the annual requirements of the other consumer requests $\\{R2\\}$, $\\{R3\\}$, and $\\{R4\\}$ for three years. The provider can select these four requests from $2^{4} = 16$ possible combinations to find the optimal composition in a brute-force manner. \n\n\n\n\nThe number of possible ways to select the requests grows exponentially with the number of requests. A sequential optimization process may be applied to reduce the total number of comparisons required to find the global optimal composition. Let us consider a sequential optimization approach for the request set $A$ where requests are selected from the right to the left year, i.e., $3^{\\text{rd}}$, $2^{\\text{nd}}$, and $1^{\\text{st}}$. A total of $2^3 = 8$ comparisons is required in the third year to select the highest ranked $R3$ according to $CP3$. Note that in $CP3$ the highest preference order is low CPU, high price, and low availability. In $R3$, the consumer's preference order is low CPU, moderate availability, and low price. Therefore, $R3$ is the closest matching request according to $CP3$. Details of the ranking technique using CP-nets can be found in \\cite{sajibicsoc2016}. Once we accept only $R3$ in the third year, $R1$ and $R2$ are rejected in the subsequent local optimizations. In the following optimizations, $R4$ is accepted and the solution is updated. The optimal solution $\\{R3, R4\\}$ is calculated in ten comparisons. If we change the sequence of the optimization process, e.g., left to right, i.e., $1^{\\text{st}}$, $2^{\\text{nd}}$, and $3^{\\text{rd}}$, the total number of comparisons in the first year becomes $2^4 = 16$ to select the highest ranked $R1$. The request $R1$ has an ``N\/A'' ranking in the following years. As a result, the left-to-right sequence produces an unacceptable solution. 
The right-to-left sequence generates the optimal result when sequential optimization is applied on $A$. The same sequence may not work or may not give a good solution for a different set of requests. Let us consider the request set $B$ in Figure \\ref{fig:req}(c). The distribution of requests in $B$ is different from $A$. The number of comparisons becomes $2^5 = 32$ to select the highest ranked $R3$ in the third year (Figure \\ref{fig:req}(d)). As $R3$ has an ``N\/A'' ranking in the second year, it cannot be selected. As a result, the right-to-left sequence does not work for $B$. We propose a model-free learning approach to generate the optimal sequence of local optimizations.\n\n\\begin{figure}\n \n \\centering\n \\includegraphics[width= 0.8\\textwidth]{images\/motive2.pdf} \n \\caption{Sets of requests: (a) request set $A$; (c) request set $B$}\n \\label{fig:req}\n \\vspace{-5mm}\n\\end{figure}\n\n\n\n\\section{A Qualitative IaaS Composition Framework}\n\nIn this section, we provide a general overview of the proposed qualitative IaaS composition framework. We also describe some common concepts in qualitative IaaS composition, such as long-term IaaS request representation, long-term qualitative preference representation, and combinatorial optimization in qualitative IaaS composition.\n\nA qualitative composition framework is proposed that learns from the historical information of past request sets, as shown in Figure \\ref{fig:propframe}. We assume that a set of long-term requests of consumers and the provider's qualitative preferences are available at the beginning of the composition. Our target is to find the optimal composition using a learning based approach that utilizes the information of past consumer requests. 
For each new set of requests, we learn the sequence of the optimal composition and save it for future request sets.\n\n\\begin{figure}\n\n\t\\centering\n \t\\includegraphics[width= .6\\textwidth]{propframe.pdf} \n \\caption{Long-term qualitative composition framework}\n \t\\label{fig:propframe}\n \n\\end{figure}\n\nWe use TempCP-nets to represent the qualitative preferences, which enables the qualitative composition of the requests. The proposed framework performs indexing and ranking of the request configurations of the TempCP-net. The index of the TempCP-net is built using \\textit{k}-d tree indexing, which enables efficient searching of the ranks of consumer requests. The rankings of the request configurations are computed using the provider's TempCP-nets. The ranked requests are taken as the input of a learning module. The learning module applies a reinforcement learning based method that utilizes the historical information of past requests to find the optimal composition efficiently. \n\n\n\n\n\n\n\n\\subsection{\\textbf{Long-term IaaS Request Representation}}\n\nThe long-term requests of the consumers are represented as time series groups (TSGs) of the service attributes. We denote \\(T\\) as the total service usage time. The TSG of a consumer request is defined as \\(R_c = \\{s_{c1},s_{c2},...,s_{cn}\\} \\), where $s_{cn}$ represents a service attribute and \\(cn\\) is the number of service attributes in \\(R_c\\). We represent the service attribute time series as \\( s_{cn} = \\{(x_n,t_n)|n=1,2,3,...., T\\} \\), where \\(x_n\\) is the value of \\(s_{cn}\\) at the time interval \\(t_n\\). Figure \\ref{fig:req} shows two sets of requests where each request in a set has 3 service attributes ($cn=3$), i.e., CPU, availability, and price. Each request spans three yearly intervals ($T = 3$). 
Each service attribute may have a different type of value during these intervals.\n\n\n\n\n\\subsection{\\textbf{Long-term Qualitative Preferences based on TempCP-nets}}\n\\label{sec:cp}\n\n\n\n\nWe need an efficient tool to represent the provider's qualitative preferences. We define a set of attributes $V = \\{X_{1},..., X_{n}\\}$, where each attribute is defined over a finite, discrete domain $D(X_n)$ and a semantic domain $S(X_n)$. The attributes are either functional or non-functional. Examples of functional attributes are CPU ($C$), Memory ($M$), and so on. Availability ($A$), Price ($P$), and Latency ($LT$) are examples of QoS attributes. A mapping table $Sem\\_Table(X_{n}, x_{n})$ is used to map $x_{n}$ in $D(X_{n})$ into $s_{n}$ in $S(X_{n})$, where $s_{n} = Sem\\_Table(X_{n}, x_{n})$. An example of a semantic table is shown in Figure \\ref{fig:semanticTable}. We assume the preference order and the semantics of $V$ are static within an interval of a long-term composition period. However, they may vary across intervals. We consider a set of decision variables $DN = \\{N_{1}, N_{2},....,N_{d}\\}$. A decision variable may represent, e.g., the request type or the request duration. We assume that each decision variable is binary. Therefore, it takes true or false $\\{T, F\\}$ values. For instance, the decision variable is set to true for a request if it spans into the next interval. \n\n\\begin{figure}\n\n\t\\centering\n \t\\includegraphics[width= .8\\textwidth]{induced.pdf} \n \\caption{Induced preference graph with decision variable for CP1}\n \t\n\t\\label{fig:f3}\n\t\\vspace{-3mm}\n\\end{figure}\n\nThe service provisioning time $T$ is divided into $m$ intervals and represented as $T=\\sum_{k=1}^{m}I_{k}$, where $I_k$ is an interval in $T$. We assume that the provider sets a preference ranking of each service configuration at each interval $I_k$. 
The preference rankings of service configurations are expressed over the complete assignments on $V$ and $DN$ with the semantic domain $Sem\\_D^{I_{k}}(V)$. $O^{I_{k}}$ denotes the set of service configurations for an interval $I_{k}$. A total order $(\\succeq)$ of the service configuration set represents a preference ranking for an interval. For example, $o_{1} \\succeq o_{2}$ denotes that a service configuration $o_{1}$ is equally or more preferred than $o_{2}$. The preference relation $o_{1} \\succ o_{2}$ denotes that the service configuration $o_{1}$ is preferred over $o_{2}$. If the preferences are indifferent or non-comparable, we denote the relation by $o_{1} \\sim o_{2}$. $T \\sim F$ means that the provider does not care about the true and false values of a decision variable. \n\nThe size of the service configuration set $O^{I_{k}}$ grows exponentially with the number of attributes and decision variables. Therefore, the direct assignment of all possible preferences over the long-term period is generally not feasible. We represent the provider's long-term preferences on service configurations using a TempCP-net. A TempCP-net is represented as a set of CP-nets with semantic preference tables for each interval of the composition period. We denote a TempCP-net as $\\text{TempCP-Net} = \\{(CP^{I_{k}}, Sem\\_Table^{I_{k}}, I_{k})\\;|\\;\\forall k \\in [1,m]\\}$. A CP-net can be considered a graphical model that formally represents qualitative preferences and reasons about them. A CP-net $CP^{I_{k}}$ in the interval $I_{k}$ consists of a directed graph $G$ defined over $V$ and $DN$. Each node in $G$ represents an attribute $X_i \\in V$. The nodes of $DN$ are represented by dashed circles. The nodes of $V$ are represented by solid circles. In this work, we only consider acyclic CP-nets to represent the provider's qualitative preferences. 
The CPT of each node is denoted by $CPT(X_{i})$; it contains a total order $\\succ^{i}_{u}$ for each instantiation $u$ of $X_{i}$'s parents $Pa(X_{i}) = U$ \\cite{cp1}. For example, $Pa(P)=C$ and $CPT(C)$ contains $\\{C1, C2\\}$ in $CP3$, while preferences are made over $\\{P1, P2, P3\\}$ (Figure \\ref{fig:cp}). A preference outcome $o$ of a CP-net is obtained by sweeping through the CP-net from top to bottom, setting each variable to its preferred value given the instantiation of its parents \\cite{wang2012wcp}. A preference order $o \\succ \\acute{o}$ is called a consequence of a CP-net if $o \\succ \\acute{o}$ can be obtained directly from one of the CPTs in the CP-net. For example, the fact that $(A2, C2, P2)$ is preferred to $(A2, C1, P2)$ is a direct consequence of the semantics of $CPT(C)$ in $CP1$ for the long-term requests (Figure \\ref{fig:cp}). The set of consequences $o\\succ \\acute{o}$ creates a partial order over all possible service configurations of an acyclic CP-net.\n\n\n\nFigure \\ref{fig:f3} shows the induced preference graph generated by $CP1$, which is a directed acyclic graph (DAG). There are two induced graphs generated based on the value of the decision variable $N$ (i.e., true and false). The true value of $N$ represents the induced preference graph for long-term requests. The false value of $N$ represents short-term requests. $(A1, C1, P1)$ is considered the most preferred configuration for short-term requests. There is no outgoing edge from $(A1, C1, P1)$. Similarly, $(A2, C1, P3)$ has no incoming edge because it is the least preferred request configuration. There is an edge between $(A2, C1, P1)$ and $(A1, C1, P1)$: according to the preference statement $A1 \\succ A2$ in the CPT of $CP1$, $(A1, C1, P1) \\succ (A2, C1, P1)$. $(A1, C1, P1)$ and $(A1, C1, P2)$ do not have any outgoing edges because they are the most preferred configurations (Figure \\ref{fig:f3}). 
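The top-to-bottom sweep described above can be sketched in a few lines. The attribute names, parent structure, and CPT entries below are hypothetical stand-ins, not the paper's $CP1$; the sketch only illustrates how the most preferred outcome follows from the CPTs.

```python
# Illustrative sketch of the top-to-bottom CP-net sweep: each attribute is
# set to its most preferred value given the already-chosen parent values.
# The CP-net below is hypothetical, not the paper's CP1.

def best_outcome(nodes, parents, cpt):
    """Sweep the CP-net top to bottom (nodes listed in topological order)."""
    outcome = {}
    for x in nodes:
        u = tuple(outcome[p] for p in parents[x])  # parent instantiation
        outcome[x] = cpt[x][u][0]                  # first entry = most preferred
    return outcome

# Hypothetical CP-net: A (availability) is the root, C (CPU) depends on A,
# P (price) depends on C. Each CPT maps a parent instantiation to an order.
nodes = ["A", "C", "P"]
parents = {"A": [], "C": ["A"], "P": ["C"]}
cpt = {
    "A": {(): ["A1", "A2"]},                              # A1 > A2 always
    "C": {("A1",): ["C1", "C2"], ("A2",): ["C2", "C1"]},
    "P": {("C1",): ["P1", "P2"], ("C2",): ["P2", "P1"]},
}
print(best_outcome(nodes, parents, cpt))  # {'A': 'A1', 'C': 'C1', 'P': 'P1'}
```

Repeating the sweep with the second-best values would enumerate further consequences; the full induced graph is obtained by the pairwise comparisons discussed next.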
The induced preference graph of a CP-net is constructed by a pairwise comparison of all service configurations. The complexity of ordering queries for a TempCP-net in an interval is $O(ndq^2)$, where $n$ and $d$ are the numbers of attributes and decision variables, respectively, and $q$ is the number of output configurations.\n\n\n\n\n\\subsection{\\textbf{Combinatorial Optimization in Qualitative IaaS Composition}}\n\n\\begin{figure}[t!]\n\n\t\\centering\n \t\\includegraphics[width= 0.8\\textwidth]{kd.pdf} \n \\caption{\\textit{k}-d tree indexing of the induced preference graphs}\n \\vspace{-3mm}\n\t\\label{fig:f4}\n\\end{figure}\n\nGiven a provider's TempCP-net and a set of long-term requests $\\bar{R}$, the IaaS composition is defined as the selection of an optimal set $\\bar{r} \\subseteq \\bar{R}$ that produces the best similarity measure with the TempCP-Net. We consider qualitative preference rankings as the foundation of the similarity measure. First, we index the preference rankings from the TempCP-Net. We then perform a similarity search on the indexed TempCP-Net, which is denoted as $Pref(\\text{TempCP-Net}, \\bar{r})$. Hence, the objective of the IaaS composition is to minimize the ranking output $Pref(\\text{TempCP-Net}, \\bar{r})$. \n\n\\subsubsection{\\textbf{Indexing Preference ranks}} \n\\label{sec:kd}\n\n\n\nThe preference rank of a request configuration $Sem\\_Req =(s_{1}, ...,s_{n})$, where $s_{i} \\in S(X_{i})$ and $X_{i} \\in V$, is found by a pre-order traversal of the induced graph. The time complexity of searching the preference rank over the induced graph is $O(n)$. A request configuration $(s_{1}, ...,s_{n})$ is treated as a multidimensional vector. We use a \\textit{k}-d tree \\cite{jia2010optimizing} to improve the searching process. There exist different multi-dimensional indexing structures such as B-trees, B+-trees, k-d trees, point quadtrees, and R, R*, and R+ trees \\cite{sellis1997multidimensional,robinson1981kdb}. 
The k-d tree is a suitable choice for IaaS composition as IaaS preference ranking requires multi-dimensional value queries \\cite{sajibicsoc2016,nam2004comparative}. \\textit{Note that finding the optimal multi-dimensional indexing structure for IaaS composition is outside the focus of this paper}.\n\nThe \\textit{k}-d tree is a binary tree that indexes points in a \\textit{k}-dimensional space. We represent each service configuration $o$ (i.e., a node in the induced graph) as a \\textit{k}-dimensional point in the \\textit{k}-d tree. Each node at each level splits all its children along a specific dimension into two subspaces, known as half-spaces. Each subspace is represented by either the left or the right sub-tree of that node. A canonical method is used to build the \\textit{k}-d tree \\cite{jia2010optimizing}. The construction algorithm cycles through the dimensions during the selection of splitting planes. For example, at the root, all children are split along the ``availability'' plane in Figure \\ref{fig:f4}. The children of the root split their children along the ``CPU'' plane. The grandchildren of the root have ``price''-aligned planes. Finally, the great-grandchildren again have planes aligned with availability. \n\nLet us assume there are $n$ points in an induced preference graph. We place the median point along one dimension at the root of the \\textit{k}-d tree. Every other point is placed into the left or the right sub-tree depending on whether it is smaller or larger than the root in the same dimension. This process creates a balanced \\textit{k}-d tree, where the construction runtime is $O(n \\log n)$ \\cite{jia2010optimizing}. We annotate each node of the \\textit{k}-d tree with its respective preference ranking obtained from the induced graph. For instance, the root node $(A2,C2,P2)$ of the \\textit{k}-d tree in Figure \\ref{fig:f4} has the preference ranking 6, which is obtained from its induced graph. 
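The median-split construction and rank-annotated lookup described above can be sketched as follows. The service configurations are encoded here as integer tuples and the rank values are illustrative stand-ins, not the data of Figure 4.

```python
# Minimal sketch of the canonical k-d tree construction (median split on a
# cycling dimension) and exact-match rank lookup. Configurations and ranks
# are hypothetical integer encodings of (availability, CPU, price) levels.

def build_kdtree(points, depth=0):
    """points: list of (config_tuple, rank). Split on the median per level."""
    if not points:
        return None
    k = len(points[0][0])
    axis = depth % k                     # cycle through the dimensions
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def find_rank(node, config, depth=0):
    """Return the annotated preference rank of an exact configuration.
    Ties on the splitting coordinate descend right, matching the build."""
    if node is None:
        return None                      # not indexed: discard from composition
    point, rank = node["point"]
    if point == config:
        return rank
    axis = depth % len(config)
    branch = "left" if config[axis] < point[axis] else "right"
    return find_rank(node[branch], config, depth + 1)

# Hypothetical configurations with their induced-graph ranks.
configs = [((2, 2, 2), 6), ((1, 1, 1), 1), ((2, 1, 3), 10), ((1, 2, 1), 4)]
tree = build_kdtree(configs)
print(find_rank(tree, (2, 1, 3)))        # 10
```

A query descends one root-to-leaf path, giving the average $O(\log n)$ lookup used later for the local preference ranking.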
We construct the \\textit{k}-d tree index for each value of the decision variable $N$. For example, two different \\textit{k}-d tree indexes are shown in Figure \\ref{fig:f4} to represent the short-term and long-term service configurations. Service configurations with indifferent preferences have the same ranking value. For example, the provider's preferences on $(A2,C1,P2)$ and $(A1,C2, P1)$ are indifferent for long-term requests. Both service configurations are annotated with the preference ranking 4 in Figure \\ref{fig:f4}.\n\n\\subsubsection{\\textbf{Local and Global Preference Ranking}}\n\n\n\nA request may not be inclusive, i.e., it may not fit exactly within an interval of the TempCP-net. It may overlap two or more intervals of the TempCP-net. An overlapping request $R$ (interval $[T_{0}, T_{m}]$) is divided into smaller inclusive segments, where each segment fits within an interval of the TempCP-net. The attributes that have \\textit{temporal semantics} require such segmentation. For instance, ``Price'' is considered an attribute with \\textit{temporal semantics} in a consumer request. If a request requires 20 units of CPU for 12 months for a total of \\$120, a monthly segmentation interprets this as the provisioning of 20 CPU units for \\$10 per month. Let us consider an attribute $X$ in $R$ that has \\textit{temporal semantics}. If the segmentation is applied in $[T_{j}, T_{k}]$, the new segments are calculated using the following equation, according to \\cite{sajibicsoc2016}: \n\\vspace{-3mm}\n\n\\begin{equation}\n\\label{eqn:rank}\nx_{i}^{[T_{j},T_{k}]} = x_{i}^{[T_{0},T_{m}]} \\times \\frac{|T_{k}-T_{j}|}{|T_{m}-T_{0}|}\n\\end{equation} \n\n The requests are ready to be composed after the temporal segmentation. We define a set of $N$ requests as $\\bar{R} = \\sum_{i=1}^{N} R_i$. 
We use the following composition rules to combine the requests in a set $\\bar{R}$ \\cite{sajibtsc2015}:\n\n\\begin{itemize}\n \\item The rule of summation: $\\bar{x_{i}} = \\sum_{i=1}^{N} x_{i}$, where $X_{i} \\in \\{C, M, NB, RT, P\\}$.\n \\item The rule of maximization: $\\bar{y_{i}} = max(y_{i}), \\forall \\; i \\in [1,N]$, where $Y_{i} \\in \\{A, TP\\}$.\n\\end{itemize}\n\nThe preference ranking function of a set of requests is $Pref(\\text{TempCP-Net}, \\bar{R}): V \\rightarrow [1,n]$, which outputs the order of $\\bar{R}$ according to the preferences (the \\textit{k}-d tree of the TempCP-net). $\\bar{R}$ is transformed into $\\acute{\\bar{R}} = \\{(s_{i}, I_{j})\\;|\\;s_{i} \\in S(X_{i}), X_{i} \\in V, \\text{and } T = \\sum_{j=1}^m I_{j}\\}$ based on the $Sem\\_Table$ of the TempCP-net. First, we define the local similarity measure, i.e., the preference ranking for a time segment, and then we define the global objective function for the entire composition period.\n\n\\begin{itemize}\n \\item \\textit{Local preference ranking}: Let us consider $M^{i}(s_{i})$, the function that outputs the preference ranking in the interval $i$ by temporally matching the segments of $\\acute{\\bar{R}}$ with the \\textit{k}-d tree. The temporal matching process, or the searching algorithm, starts from the root node and traverses the tree recursively. The search algorithm returns the preference ranking of a node if it matches a request configuration. For instance, the algorithm returns ranking 10 by performing 10 comparisons for the short-term request $(A2, C1, P3)$ in Figure \\ref{fig:f4}. If a query point is not found in the \\textit{k}-d tree, it is discarded from the composition. 
The complexity of performing a query in a \\textit{k}-d tree is on average $O(\\log n)$ for each service.\n \n \\item \\textit{Global Preference Ranking}: As the long-term requests are divided into local segments, we aggregate the local preference rankings to generate the global preference ranking as follows:\n \\begin{equation}\n \\label{eq:ranking}\n Pref(\\text{TempCP-Net}, \\bar{R}) = \\sum_{i=1}^{m} M^{i}(s_{i})\n \\end{equation}\n\\end{itemize}\n\n\\section{IaaS Composition for a New Set of Requests}\n\nIn this section, we illustrate the proposed IaaS composition approach for a new set of requests, i.e., IaaS composition without any prior knowledge of incoming requests. We introduce a sequential IaaS composition approach that leverages reinforcement learning techniques to compose incoming requests.\n\nWe assume that, initially, the IaaS provider does not store a history of incoming requests. Each set of incoming requests is considered a new set of requests, and the composition is performed from scratch for each new set. We identify three approaches to compose a new set of requests:\n\\begin{itemize}\n \\item \\textit{Brute-force approach}: This approach generates all the combinations of requests over the total composition period. The preference ranking of each combination is computed using the global preference ranking in Equation \\ref{eq:ranking} and compared pairwise. The combination of requests that generates the minimum global ranking is returned as the optimal composition. If the number of requests is $N$, the time complexity of this approach is exponential, i.e., $O(2^N)$.\n \\item \\textit{Global optimization approach}: The target of the global optimization is to improve the runtime efficiency over the brute-force approach. We apply Dynamic Programming (DP) \\cite{sajibicsoc2016} to avoid re-computing the similarity measures of the same combinations of requests. 
The DP computes the similarity measure of a large combination of requests by breaking it into smaller combinations (an overlapping subproblem structure). The results of the subproblems are stored in a temporary array, which avoids repeated computation. We denote $\\bar{R}(N)$ as a set of $N$ requests and $i \\in [1,N]$ as the $i$-th request. $\\tau(\\bar{R}(N), k)$ denotes the subset of requests of size $k$ that generates the best (minimum) preference ranking among all subsets of size $k$. We start with the base case $k=1$, i.e., a set consisting of only one request. The highest ranked request $i$ is computed by a pairwise comparison of preference rankings:\n \\vspace{-5mm}\n \n \\begin{align*}\n &\\text{Base case, } \\tau(\\bar{R}(N), k=1) = R_{i}\\\\ \\notag &\\text{where } Pref(\\text{TempCP-Net}, R_{i}) \\text{ is minimum.} \\notag\n \\end{align*}\n\n \n For $k>1$, we either accept the $N$-th request (the $k$-th place is already filled) or reject it (reducing $\\bar{R}(N)$ to $\\bar{R}(N-1)$). We have two optimal substructures:\n \n \\vspace{-6mm}\n \n \\begin{align*}\n \\bar{R_{i}} &= \\{N \\cup \\tau(\\bar{R}(N-1), k-1)\\} \\\\\n \\bar{R_{j}} &= \\tau(\\bar{R}(N-1), k) \\notag\n \\end{align*}\n \n \n $\\bar{R_{i}}$ and $\\bar{R_{j}}$ are computed separately. The re-computation of overlapping substructures is avoided by building a temporary array in a bottom-up manner \\cite{kimes2004restaurant}. If $\\bar{R_{i}}$ returns the minimum preference ranking, it is returned as the optimal composition, i.e., $\\tau(\\bar{R}(N), k) =\\bar{R_{i}}$ if $Pref(\\text{TempCP-Net},\\bar{R_{i}})$ $< Pref(\\text{TempCP-Net}, \\bar{R_{j}})$. Otherwise, the request is removed from the composition, i.e., $\\tau(\\bar{R}(N), k) =\\bar{R_{j}}$ if $Pref(\\text{TempCP-Net}, \\bar{R_{i}}) \\geq Pref(\\text{TempCP-Net}, \\bar{R_{j}})$. The complexity of finding $\\tau(\\bar{R}(N), k)$ is $O(N^{k})$. 
As there are at most $N$ requests to be considered in a set, we solve the DP in a bottom-up manner in the following sequence: $\\tau(\\bar{R}(N), \\;1)$, $\\tau(\\bar{R}(N), \\;2), \\cdots, \\tau(\\bar{R}(N), N)$. The final complexity of the DP based solution is $O(N^{O(N)})$. \n\n \\item \\textit{Local sequential optimization approach:} This approach optimizes requests in each time segment. The key observation is that we do not need to consider the whole set of requests during the entire composition period; we only consider the partial set that is applicable in a specific temporal segment, which should reduce the runtime complexity significantly. In Fig \\ref{fig:newExample1}, the local optimization could be divided into two segments: optimization with $\\{A,B\\}$ in the first year ($OP_{i}$) and optimization with $\\{A, B, C\\}$ ($OP_{j}$) in the second year. As the request sets are deterministic, we can perform the optimization sequences in different orders, i.e., $\\langle OP_{i}, OP_{j}\\rangle$ or $\\langle OP_{j}, OP_{i}\\rangle$. Local optimizations are dependent on the accepted or rejected requests during previous optimizations in a sequence. For example, if the sequence is $\\langle$first year, second year$\\rangle$ in Fig \\ref{fig:newExample1} and we reject request $B$ in the first year, the candidate request set for local optimization in the second year reduces to $\\{A, C\\}$.\n\\end{itemize}\n\n\\begin{figure}[t!]\n\t\\centering\n \t\\includegraphics[width=.5\\textwidth]{Example-Overlap.pdf} \n \\caption{Overlapping requests in different temporal segments}\n \\vspace{-4mm}\n\t\\label{fig:newExample1}\n\\end{figure}\n\nWe proposed a heuristic-based sequential optimization approach for IaaS composition in \\cite{sajibicsoc2016}. To improve the quality of the solution, we develop a reinforcement learning based approach to find the best local service provision policy, i.e., the best selection of requests in the optimal temporal sequence.
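The accept/reject substructure of the DP can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (an additive preference ranking over 0-indexed requests and a hypothetical `best_subset` helper), not the paper's implementation:

```python
def best_subset(ranks, n, k, memo=None):
    """tau(R(n), k): the size-k subset of the first n requests (0-indexed)
    whose summed preference ranking is minimal (a lower ranking is preferred)."""
    if memo is None:
        memo = {}
    if k == 0:
        return frozenset()          # empty selection
    if n < k:
        return None                 # infeasible: not enough requests left
    if (n, k) not in memo:
        pref = lambda s: sum(ranks[i] for i in s)
        # R_i: accept request n-1 and fill the remaining k-1 slots from R(n-1)
        r_i = best_subset(ranks, n - 1, k - 1, memo) | {n - 1}
        # R_j: reject request n-1 and take tau(R(n-1), k)
        r_j = best_subset(ranks, n - 1, k, memo)
        memo[(n, k)] = r_i if r_j is None or pref(r_i) < pref(r_j) else r_j
    return memo[(n, k)]
```

Because the ranking model here is purely additive, the recursion simply recovers the $k$ lowest-ranked requests; the sketch is meant only to mirror the two optimal substructures and the memoized bottom-up evaluation.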
\n\n\n\\subsection{\\textbf{Sequential IaaS Composition using Reinforcement Learning}}\n\nWe formulate the long-term IaaS composition problem, i.e., the selection of requests, as a sequential decision process. We begin with a time segment in the TempCP-net and select a request for the composition. The selection of requests for the next segment depends on the previous selections, as accepted overlapping requests are already committed to both segments. \n\n\nA sequential decision process can be modeled in different ways, such as a Multi-Armed Bandit (MAB), a Markov Decision Process (MDP), or a Partially Observable Markov Decision Process (POMDP) \\cite{kaelbling1996reinforcement}. We observe that the state-action-reward situation in the IaaS composition is similar to the MDP. We may start the selection of requests (actions) in a time segment and compute the local preference ranking of the selected requests by matching their temporal segment with the corresponding temporal \\textit{k}-d tree of the given TempCP-net. The local preference ranking may be considered as the reward function. After the selection of requests in a segment, the composition approach may transit to any segment and make new selections. The process may continue until the total reward cannot be increased further (convergence); as we consider ranking values as rewards, maximizing the total reward corresponds to minimizing the global ranking. MAB or POMDP could also be applied in our context; for example, MAB is a special case of MDP that has only one state.
We select MDP as the general sequential decision process.\n\n\n\\begin{figure}[t!]\n\n\t\\centering\n \t\\includegraphics[width=.5\\textwidth]{newExample2.pdf} \n \\caption{State-action transitions for sequential composition}\n \n\t\\label{fig:newExample2}\n\t\\vspace{-4mm}\n\\end{figure}\n\nFigure \\ref{fig:newExample2} shows all possible state transitions for the request sets in Figure \\ref{fig:newExample1}. A state-action pair is represented as [interval, Request]. In Fig \\ref{fig:newExample2}, [1,A] denotes that request $A$ from Fig \\ref{fig:newExample1} is selected in interval 1. We consider bi-directional transition edges, as they do not impose a fixed transition sequence. For example, if request $C$ is selected in the second interval first, i.e., [2,C], the next transition may happen to any of [1,A], [1,B], or [1,AB]. [1,AB] represents that both requests $A$ and $B$ are selected in interval 1. \n\n\n\nAs the composition environment is dynamic, model-free learning, e.g., reinforcement learning (RL), is usually more applicable than model-based learning algorithms for implementing the MDP. To solve the composition problem using an RL approach, we treat each new request set as a new environment and learn the optimal selection of requests through multiple interactions with the environment. We focus on Q-learning as the RL method. Note that other deep learning approaches could be implemented in our context. However, we do not aim to provide a comparative study of machine learning approaches in this paper; instead, we use what we consider a sound approach in our context. Our primary target is to apply an unsupervised machine learning approach to the long-term IaaS composition and to evaluate the effectiveness of general reinforcement learning, i.e., Q-learning and its proposed variations, in our context.
In future work, we will compare the performance of the proposed approach with other deep reinforcement learning approaches.\n\n\\subsubsection{\\textbf{Q-learning based Approach in IaaS Composition}}\n\nQ-learning is a widely used reinforcement learning method. In a Q-learning based method, past interactions in the same environment are utilized to learn the optimal policy \\cite{watkins1992q}. The sequences of request selection, i.e., \\textit{experience} over different intervals, can be treated as past interactions in the context of IaaS composition. We define an experience as a tuple $\\langle s, a, r, \\acute{s}\\rangle$ where $s$ is the current interval, $a$ is the selected request in $s$, $r$ is the reward for selecting $a$, and $\\acute{s}$ is the next interval. We represent the history of interactions as a sequence of such experience tuples. We formally define the Q-learning environment in the context of qualitative composition as follows:\n \n\\begin{itemize}\n\\item \\textit{Environment}: The environment consists of consumers and the provider. The consumer requests are represented as time series groups and the provider's long-term qualitative preferences are represented in the TempCP-net. The environment is deterministic, i.e., the incoming requests and the TempCP-net are given prior to the composition.\n\n\\item \\textit{State ($s$)}: The TempCP-net usually consists of several temporal CP-nets for different time intervals or segments. Each time interval or segment is treated as a state. The number of states is finite. \n\\item \\textit{Action ($a$)}: The selection or rejection of a request is treated as an action in our context. In Figure \\ref{fig:newExample1}, the second segment (2nd year) has 3 available requests, $\\{A, B, C\\}$. Hence, the possible set of acceptance actions is $\\{A,B,C,AB,AC,BC,ABC\\}$. \n\n\\item \\textit{Policy ($\\pi$)}: It is a function that determines which action to perform, i.e., which requests to select, in a given state.
If $S$ is the set of all states and $A$ is the set of all possible actions, the policy is represented as $\\pi: S \\longrightarrow A$.\n\n\\item \\textit{Reward function ($RWD$)}: We match the action, i.e., the selected request segments, with the corresponding segment in the TempCP-net. We consider the local preference ranking as the reward. The reward function is defined based on Equation \\ref{eq:ranking} as $RWD(s,a)=Pref(\\text{TempCP-net}(s),a)$ for a state $s$ and action $a$.\n \n\\item \\textit{Value function ($V$)}: $V^{\\pi}(s)$ is the state-value function in the sequential decision process. It is the expected cumulative preference ranking starting from state $s$ and following a policy $\\pi$.\n\\end{itemize}\n\nWe first apply the basic Q-learning approach to the IaaS composition and then propose a modified Q-learning approach for the long-term IaaS composition.\n\n\n\\subsubsection{\\textbf{IaaS Composition using 2d Q-learning}}\n\nA value function $V^{\\pi}(s)$ represents how good it is for the composer to select requests starting from segment $s$. The value function depends on the policy by which the composer chooses its actions. Among all possible value functions, there exists an optimal value function that has a higher value than the others:\n\\vspace{-5mm}\n\n\\begin{equation}\n V^{*}(s) = \\max_{\\pi} V^{\\pi}(s) \\;\\;\\forall s \\in S \n\\end{equation}\n\nThe optimal policy $\\pi^{*}$ corresponds to the optimal value function: \\vspace{-3mm}\n\n\\begin{equation}\n \\pi^{*}= \\arg\\max_{\\pi} V^{\\pi}(s) \\;\\;\\forall s \\in S \n\\end{equation}\n\nA recursive function called $Q$, represented as $Q^{\\pi}(s,a)$, is usually used in a Q-learning process \\cite{watkins1992q}. It calculates the cumulative reward under the policy $\\pi$. We map $Q^{\\pi}(s,a)$ to the expected global preference ranking of choosing request $a$ in interval $s$ under the policy $\\pi$.
The probability of moving from interval $s$ to interval $\\acute{s}$ is denoted by $P(\\acute{s}| s,a)$ in Equation \\ref{eq:q3}, where $a$ is the selected set of requests. The current preference ranking of $a$ in $s$ is denoted by $R(s,a,\\acute{s})$, where $\\acute{s}$ is the next interval. The future global ranking is denoted by $V^{\\pi}(\\acute{s})$ in Equation \\ref{eq:q3}.\n\n\n\n\\vspace{-3mm}\n\n\\begin{equation}\n\\label{eq:q3}\nQ^{\\pi}(s,a) = \\sum_{\\acute{s}} P(\\acute{s}| s,a)(R(s,a,\\acute{s})+\\gamma V^{\\pi}(\\acute{s}))\n\\end{equation}\n\n\n\n\nIn a Q-learning process, a table $Q[S,A]$ is maintained, where $S$ denotes the set of states and $A$ denotes the set of actions \\cite{watkins1992q}. $Q[S,A]$ is used to store the current value of $Q^{\\pi}(S,A)$. The value of $Q^{\\pi}(S,A)$ in the context of long-term IaaS composition is computed using temporal differences. Therefore, we create a table $Q[S,A]$ where $S$ denotes the set of intervals or segments and $A$ denotes the set of actions. We set the initial $Q[s,a]$ to 0 for each $(s,a)$. We start the process from a random state $s$ and execute a random action $a$ for a reward $r$. The next interval is also selected randomly. We use an $\\epsilon$-greedy policy to restrict the randomness over time. The idea is that the composer should explore state-action sequences randomly at the beginning to find better future discounted preference rankings; later, the randomness should be reduced. Here, $\\epsilon$ is defined as the probability of exploration. The exploration is equivalent to picking a random action from the action space. If $\\epsilon = 1$, the composer will always explore and never act greedily with respect to the action-value function. Therefore, we set $\\epsilon < 1$ in practice, so that there is a good balance between exploration and exploitation. A higher value of the learning rate $\\alpha$ assigns a higher weight to the current estimate than to the previous estimate.
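The epsilon-greedy selection and the temporal-difference table update just described can be sketched as follows. This is a minimal sketch with a dictionary-backed Q-table; the helper names `epsilon_greedy` and `td_update` are our own, not the paper's implementation:

```python
import random

def epsilon_greedy(Q, s, actions, eps):
    """Pick a random action with probability eps (exploration);
    otherwise pick the action with the highest current Q-value."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

def td_update(Q, s, a, r, s_next, next_actions, alpha, gamma):
    """Q[s,a] <- (1 - alpha) * Q[s,a] + alpha * (r + gamma * max_a' Q[s',a'])."""
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in next_actions), default=0.0)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
```

Missing table entries default to 0, matching the zero initialization of the Q-table; decaying `eps` over episodes yields the reduction of randomness over time.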
The learning process terminates when there are no further updates on the Q-values, which is known as convergence. Once the Q-learning reaches the convergence state, the optimal policy is found using $Q[S,A]$ in Equation \\ref{eq:q5}. At each segment, the action with the highest Q-value is selected. An example of a 2d $Q[S,A]$ is shown in Figure \\ref{fig:f7}, where the number of segments is 5 and the number of actions is 10. The best action in segment 5 is $A10$ or $A5$.\n\n\n\\begin{equation}\n\\label{eq:q4}\nQ[s,a] = (1-\\alpha)Q[s,a] + \\alpha (r+\\gamma \\max_{\\acute{a}}Q[\\acute{s},\\acute{a}]) \n\\end{equation} \n\n\n\\begin{equation}\n\\label{eq:q5}\n\\pi^{*}(s) = \\arg\\max_{a}Q[s,a] \\;|\\; \\forall a \\in A(s)\n\\end{equation}\n\n\\subsubsection{\\textbf{IaaS Composition using 3d Q-learning}}\n\n\\begin{figure}[t!]\n\n \t\\centering\n \t\\includegraphics[width=0.8\\textwidth]{fig7_qvalues.pdf} \n \\caption{(a) Q-values in a 2d $Q[S,A]$ (b) Q-values in a 3d $Q[S,A, O]$}\n \\vspace{-3mm}\n\t\\label{fig:f7}\n\\end{figure} \n\n\nThe 2d Q-learning has no start and terminal states as it accepts only model-free state transitions \\cite{shani2005mdp}. The long-term effect of the sequential order is indicated by $Q[S,A]$. However, it is not possible to keep track of the execution order in the 2d Q-learning process. For example, several state transitions can take place in Figure \\ref{fig:newExample2}, such as $\\{[1,A]\\rightarrow [2,B]\\}$ and $\\{[2,B]\\rightarrow [1,A]\\}$. Here, $Q[1,A]$ represents the expected global preference ranking of selecting action $A$ in the first year irrespective of the sequence of selection (first or second). A similar explanation applies to $Q[2,B]$. Besides, the existing Q-learning approaches for the composition allow an execution order to revisit the same state. One such order in Figure \\ref{fig:newExample2} is $\\{[1,A] \\rightarrow [2,BC] \\rightarrow [1,B]\\}$.
Note that the selections of requests $A$ and $B$ in the first year are made at two different positions in the sequence.\n\n\n\n\nWe introduced a three-dimensional Q-learning in \\cite{mistry2018long} using a 3d table $Q[S,A,O]$ to store the Q-values. $O$ represents the set of execution orders. Therefore, a particular state-action pair $(s,a)$ may have different values depending on the execution order $o$. If a set of requests is selected from the first segment at the first step of the composition, it may have a different preference ranking than if it is selected at the last step of the composition. Figure \\ref{fig:f7} illustrates an example of a 3d $Q[S,A,O]$. Here, the ranking of $A10$ is 1 in segment 5 when it is selected at the first step. However, the preference ranking of $A10$ changes to 9 when it is selected at the third step. An extension of Equation \\ref{eq:q4} is shown in Equation \\ref{eq:q6} for a 3d Q-learning process. The $\\acute{o}$ denotes the next execution order after $o$ and $\\alpha$ denotes the learning rate. The 3d Q-learning arbitrarily selects the start state $s$ and performs an action $a$ with reward $r$. From the start state, it observes all the possible states in different orders. \n\n\n\n\\vspace{-4mm}\n\n\\begin{align}\n\\label{eq:q6}\nQ[s,a,o] = &(1-\\alpha)Q[s,a,o] \\\\ \\notag\n&+ \\alpha (r+\\gamma \\max_{(\\acute{a},\\acute{o})}Q[\\acute{s},\\acute{a},\\acute{o}]) \n\\end{align} \n\n\n\n\n\\subsubsection{\\textbf{On-policy based 3d Q-learning}}\n\\begin{figure}\n\n \t\\centering\n \t\\includegraphics[width= .8\\textwidth]{redundant-states.pdf} \n \\caption{On-policy state-action transitions in 3d Q-learning (green-colored transitions are allowed from [1,A,1] and [1,B,2])}\n \\vspace{-4mm}\n\t\\label{fig:newexample3}\n\\end{figure}\n\nThe 3d Q-learning increases the number of explorations in the learning process compared to the 2d Q-learning. The $n\\times m$ Q-matrix of two dimensions is extended to an $n\\times m \\times p$ Q-matrix.
The number of states is denoted by $n$, the number of actions is denoted by $m$, and the number of execution orders is denoted by $p$. As the exploration space increases, the 3d Q-learning requires more time to learn compared to the 2d Q-learning. The 3d Q-learning based composition approach is considered off-policy, as the composer has no restrictions over exploration. We transform the off-policy approach into an on-policy learning approach by an intelligent reduction of state transitions as follows:\n\n\n\\begin{itemize}\n \\item \\textit{State sequence policy}: Once a request is rejected, it is not considered anymore in the following local optimizations. Therefore, we do not need to consider the same states multiple times. We introduce a state sequence policy where each state is visited only once in a policy $\\pi$. \n \\item \\textit{Removing redundant state transitions}: A request rejected in a local segment may appear in another segment if the request overlaps the two segments. All the state-action pairs that contain the rejected request should not be considered as the next state transitions. For example, if we accept request $A$ in the first year, request $B$ is rejected in Figure \\ref{fig:newExample1}. However, request $B$ overlaps into the second year; hence, it should be removed from the next candidate transitions. Figure \\ref{fig:newexample3} depicts the on-policy state transitions when request $A$ in the first year is selected first in the sequence. \n\\end{itemize}\n\nWe present the on-policy 3d Q-learning process for IaaS composition in Algorithm \\ref{alg:qlearning}. Algorithm \\ref{alg:qlearning} runs multiple episodes or rounds to perform the 3d Q-learning process. The algorithm uses Equation \\ref{eq:q6} to update the Q-values in each episode. An $\\epsilon$-greedy policy is used by Algorithm \\ref{alg:qlearning}, where the optimal action is selected for execution.
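The two pruning rules above can be sketched as a single filtering step. Representing a candidate transition as a (segment, request-set) pair is an illustrative assumption on our part:

```python
def prune_candidates(candidates, rejected, visited):
    """Keep only transitions to unvisited segments whose action (a set of
    requests) contains no previously rejected request."""
    return [(seg, action) for seg, action in candidates
            if seg not in visited and not (action & rejected)]
```

For instance, after rejecting request B and visiting segment 1, every pair targeting segment 1 and every action containing B is dropped from the candidate transitions.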
The optimal action is denoted by $\\arg\\max_{(a,o)}Q(s,a,o)$, where the probability of the optimal action is $(1 - \\epsilon)$. The greedy policy continuously improves $Q(s,a,o)$ and incorporates the unique state sequence policy. When a request is rejected, we remove it from the candidate action sets. The learning process continues in a loop up to a maximum number of episodes $K$ or until the Q-values converge to the optimal values. Once the Q-values reach the convergence state, we use Equation \\ref{eq:q7} to compute the optimal policy, which is similar to the 2d Q-learning process \\cite{wang2010adaptive}. \n\n\\vspace{-4mm}\n\\begin{align}\n\\label{eq:q7}\n\\pi^{*}(s) = &\\arg\\max_{(a,o)}Q[s,a,o] \\\\ \\notag\n&\\text{where } \\forall a \\in A(s) \\text{ and } o \\in [1, |A(s)|]\n\\end{align} \n \n\n\\begin{algorithm}\n\\fontsize{8pt}{8pt}\\selectfont\n \\caption{The on-policy 3d Q-learning process to compose IaaS requests}\n \\label{alg:qlearning}\n \\begin{algorithmic}[1]\n \n \n \\STATE Initialize $Q(s,a,o)$ to 0\n \\FOR {each episode up to $K$}\n \\STATE $s \\gets s_{0}$\n \\STATE execution order, $o\\gets 1$\n \t\\WHILE{$o \\neq$ total number of segments}\n \t \t\\STATE Choose action $a$ from $s$ in $o$ using the $\\epsilon$-greedy policy.\n \t \t\\STATE Execute $a$, observe reward $r$ and next state $\\acute{s}$\n \t \t\\STATE $Q[s,a,o] \\gets (1-\\alpha)Q[s,a,o] + \\alpha (r+\\gamma \\max_{(\\acute{a},\\acute{o})}Q[\\acute{s},\\acute{a},\\acute{o}])$\n \t \t\\STATE Create candidate $[s,a]$ pairs by removing redundant state transitions\n \t \t\\STATE $s \\gets \\acute{s}$ using the $\\epsilon$-greedy policy. \n \t \t\\STATE $o \\gets o+1$ \n\t\t\\ENDWHILE\n \\ENDFOR\n \\end{algorithmic}\n \n\\end{algorithm} \n\n\n\\section{Long-term Qualitative Composition with Previous Learning Experiences}\n\nWe aim to utilize the knowledge of composing past requests to compose new incoming requests efficiently.
In this section, we describe the proposed long-term qualitative composition approach that leverages past learning experience. We analyze different types of request patterns and illustrate how to identify similar request sets. Once we identify a similar request set among the past incoming requests, we reuse the previously learned policy to compose the new incoming requests. \n\nThe optimal state-action sequence may vary depending on the distribution of the requests over time and their rankings. For example, Figure \\ref{fig:pattern} shows four types of request patterns, i.e., almost sparse pattern, almost dense pattern, chain pattern, and mixed pattern. \n\n\n\n\\begin{itemize}\n \\item \\textit{Almost Sparse Pattern}: Figure \\ref{fig:pattern}(a) shows a set of requests with an \\textit{almost sparse pattern}. Most requests are short and disjoint, i.e., they do not overlap between two intervals. The composition may be performed in parallel by taking the short-term requests.\n \\item \\textit{Almost Dense Pattern}: Figure \\ref{fig:pattern}(b) shows a set of requests that are mostly \\textit{overlapped} between intervals. An overlapping request spans into the next interval. Most requests are accepted in the first step of the selection.\n \\item \\textit{Chain Pattern}: Figure \\ref{fig:pattern}(c) shows a chain pattern where the requests are mostly short-term and overlapped between intervals.
The requests are also evenly distributed between the intervals in a chain pattern.\n \\item \\textit{Mixed Pattern}: Figure \\ref{fig:pattern}(d) shows a mixed pattern where both long-term and short-term requests are overlapped and evenly distributed.\n\\end{itemize}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=.9\\textwidth]{requests_pattern} \n \\caption{Different request patterns: (a) almost sparse, (b) almost dense, (c) chain, and (d) mixed \\cite{sajibicsoc2016}}\n \\vspace{-4mm}\n \\label{fig:pattern}\n\\end{figure} \n\n\nApplying the Q-learning method each time a new set of requests arrives can be expensive. Instead of learning each new set of incoming requests from scratch, we apply the experience from previously learned request sets. We propose a qualitative composition framework as shown in Figure \\ref{fig:flowchart}. The proposed framework takes a set of incoming requests and the TempCP-net of the provider. First, the given request set is annotated with its global preference ranking and overlapping ratio and matched with the existing sets of requests. Initially, there is no existing request set, and a Q-learning algorithm is applied to the request set using the TempCP-net. The output of the Q-learning algorithm is a Q-value matrix. The learned Q-value matrix is stored with the corresponding request set for future use. \n\n \\begin{figure}\n \n \\centering\n \\includegraphics[width=.6\\textwidth]{flowcharts.pdf} \n \\caption{A qualitative composition framework using a policy library from Q-learning}\n \\vspace{-4mm}\n \\label{fig:flowchart}\n\\end{figure}\n\nEach time a new set of requests arrives, the proposed framework finds its similarity with the existing request sets. The similarity is measured through a hierarchical clustering method called the \\textit{agglomerative clustering method} \\cite{Growendt2017,fernandez2008solving,bouguettaya2015efficient}.
For each set of requests, we apply the \\textit{agglomerative clustering method} to build a corresponding clustering tree. To measure the similarity between two request sets, the correlation coefficient of their corresponding clustering trees is computed. If the similarity is greater than a predefined threshold $S$, the Q-value matrix of the matched request set is applied to compose the new request set. We set the value of $S$ based on trial and error in the experiments. The existing Q-value matrix may not be fully applicable to a new set of requests. In such a case, the proposed framework applies the Q-value matrix partially (a policy reuse approach) and learns the rest of the sequence. \n\n\n\n\\label{sec:dq}\n\n\nIn our previous work, we calculated the similarity between different types of requests with respect to the statistical distributions of their resource attributes, such as normal, left-skewed, and right-skewed distributions of CPU, memory, and network bandwidth \\cite{mistry2018long}. That approach, however, is unable to capture intrinsic characteristics of different request patterns such as the temporal distribution or the global ranking. Therefore, it may not correctly utilize the historical information of the previous consumers' request sets.\n\n\n\n\\subsection{\\textbf{Clustering Methods to Find Similar Request Sets}}\n\nWe use clustering techniques to compute the similarity between a new request set and past request sets. Clustering is a well-known data analytics technique that captures the intrinsic features of a data set and groups the data based on similarity. Clustering is suitable where manual tagging of data is expensive. Moreover, the prior knowledge required for manual tagging may be unavailable or insufficient. In such cases, clustering is a preferred option over supervised learning approaches such as classification and regression \\cite{Growendt2017,fernandez2008solving,bouguettaya2015efficient}.
There are many clustering techniques in the existing literature. We focus on partitional and hierarchical clustering approaches for the IaaS composition. \n\n\\subsubsection{\\textbf{Partitional Clustering}}\n\nPartitional clustering methods produce a flat partition of the data that optimizes a predefined objective function. The most well-known partitional algorithm is \\textit{K-means} clustering. The main steps of K-means clustering of a request set are as follows:\n\\begin{enumerate}\n \\item Randomly create a few centroid points for the requests. \n \\item Assign each request to the closest centroid. \n \\item Calculate the central point of each newly created cluster and update the centroid accordingly. \n \\item Repeat the previous two steps until no request is left to reassign to another cluster. \n\\end{enumerate}\n\nThe computational complexity of the K-means clustering is $O(NK)$, where $N$ is the number of requests and $K$ is the number of clusters. The K-means clustering is computationally efficient as it requires linear time. However, the performance of K-means depends on how the value of $K$ is chosen. It is difficult to determine the optimal value of $K$ when prior knowledge is inadequate or absent. \n\n\n\\subsubsection{\\textbf{Hierarchical Clustering}} \n\n\nThe hierarchical clustering method is a mainstream clustering method because it can be applied to most types of data. Although hierarchical clustering has a higher complexity compared to K-means, it does not need predefined parameters. Therefore, hierarchical clustering is more suitable for handling real-world data. There are two main approaches to hierarchical clustering: bottom-up and top-down. The bottom-up approach aggregates individual data points into progressively higher-level clusters. The complexity is usually $O(N^2)$ for the bottom-up approach. However, it may go up to $O(N^2\\log N)$.
The complexity of the top-down approach is $O(2^N)$. The top-down approach is usually more expensive than the bottom-up approach. The bottom-up approach is generally known as \\textit{agglomerative hierarchical clustering} \\cite{murtagh2012algorithms,bouguettaya1996line}. \n\n\\subsubsection{\\textbf{Agglomerative Clustering based Similarity Measure}} \n\nWe use an agglomerative clustering based approach to reuse the existing policies for a new incoming request set based on the history of past request sets. The clustering approach is applied to a set of requests to construct a clustering tree. The clustering tree captures the intrinsic features of the requests and groups the requests based on their similarities. When a set of requests arrives, we build a clustering tree and compare it with the existing clustering trees to find the most similar one using the correlation coefficient.\n\n\nTo capture the temporal aspects and the global ranking of a request set, we annotate each request with its global rank and overlapping ratio before constructing the clustering tree. The global rank is computed by Equation \\ref{eq:ranking}. The overlapping ratio of a request is the ratio of the number of intervals in which the request operates to the total number of intervals of the composition. The overlapping ratio of a request $R_i$ is computed by the following equation: \n\n\\begin{equation}\n O(R_i) = \\frac{\\text{Number of intervals of } R_i}{\\text{Total Number of Intervals}}\n\\end{equation}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=.7\\textwidth]{plot.pdf}\n \\caption{Annotation of a request set using global preference ranking and overlapping ratio (a) annotation table (b) annotation plot}\n \\label{fig:plot}\n \\vspace{-2mm}\n\\end{figure}\n\nThe overlapping ratio for the request $R3$ in the request set $A$ (Figure \\ref{fig:req}) is $2\/3 \\approx 0.667$.
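The two annotation quantities can be sketched as follows (a minimal illustration; the tuple representation of an annotated request is our own assumption):

```python
import math

def overlapping_ratio(spanned_intervals, total_intervals):
    """OR(R) = (# intervals the request spans) / (total # intervals)."""
    return len(spanned_intervals) / total_intervals

def annotation_distance(r1, r2):
    """Euclidean distance between two requests annotated as
    (global preference ranking, overlapping ratio) pairs."""
    return math.hypot(r1[0] - r2[0], r1[1] - r2[1])
```

For a request spanning two of three intervals, `overlapping_ratio` returns 2/3 ≈ 0.667, matching the example above.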
Let us assume a request set is annotated with the global preference ranking and the overlapping ratio of each request as shown in Figure \\ref{fig:plot}. We construct the clustering tree based on the following steps:\n\n\n\\begin{enumerate}\n\n \\item Each request is considered as a cluster. If there are $N$ requests in a set of requests, then the number of clusters is $N$.\n \\item An $N \\times N$ distance matrix is constructed based on the \\textit{Euclidean distance} (Equation \\ref{eqn:euclid}) of each pair of requests according to their global preference ranking $GPR$ and overlapping ratio $OR$. \n \\item The closest pair of clusters is selected and merged into a single cluster.\n \\item The distances between the new cluster and each of the old clusters are computed.\n \\item Steps 3 and 4 are repeated until only a single cluster remains. \n\\end{enumerate}\n\n\\vspace{-3mm}\n\n\\begin{equation}\n E.D. = \\sqrt{(GPR(R_1)-GPR(R_2))^2+(OR(R_1)-OR(R_2))^2}\n \\label{eqn:euclid}\n\\end{equation}\n\nWe can perform step 4 in different ways based on different hierarchical clustering approaches \\cite{murtagh2012algorithms,fernandez2008solving,Growendt2017,bouguettaya1996line}. We consider the following conventional approaches to measure the distance between two clusters:\n\n\\begin{enumerate}\n \\item SLINK: SLINK stands for the single linkage clustering method. In this method, two clusters are joined based on the distance of their nearest pair of elements; only one member of each cluster is considered to compute the distance \\cite{murtagh2012algorithms}. It is also known as the \\textit{nearest neighbour} (NN) clustering method. \n \\item CLINK: CLINK is short for the complete linkage clustering method. Two clusters are joined based on the distance of their farthest pair of elements \\cite{Growendt2017}. It is also known as the \\textit{farthest neighbour} (FN) clustering method.
\n \\item UPGMA: This unweighted pair-group method using the arithmetic average is also known as the average linkage clustering method \\cite{fernandez2008solving}. We compute the distance between two clusters based on the average distance of each pair of elements from the two clusters.\n\\end{enumerate}\n\n\\begin{figure}[t!]\n \\centering\n \n \\includegraphics[width=0.9\\textwidth]{clusters}\n \\caption{Hierarchical clustering steps}\n \\label{fig:clusters}\n \\vspace{-3mm}\n\\end{figure}\n \n\nAny of the above hierarchical clustering approaches, i.e., SLINK, CLINK, or UPGMA, could be used for the IaaS composition of new requests. We use the SLINK or nearest neighbour approach, as it is a widely used clustering approach and effective in time-series data clustering \\cite{berkhin2006survey}. \\textit{Note that finding the optimal clustering approach for IaaS composition is out of the focus of this paper.}\n\nA clustering-tree construction process is shown in Figure \\ref{fig:clusters}. Initially, we perform the annotation as shown in Figure \\ref{fig:plot}. In Figure \\ref{fig:clusters}(a), \\{R4\\} and \\{R5\\} are the nearest to each other and are put in the same cluster. Similarly, \\{R2\\} and \\{R3\\} are the nearest to each other and joined in the same cluster. In Figure \\ref{fig:clusters}(b), \\{R6\\} is the nearest to \\{R4\\}. Therefore, \\{R6\\} is joined with \\{\\{R4\\},\\{R5\\}\\}. In the next step, \\{R1\\} is the nearest to \\{R3\\}, so \\{R1\\} is joined with \\{\\{R2\\},\\{R3\\}\\}. In Figure \\ref{fig:clusters}(c), there are only two clusters, \\{\\{R4\\},\\{R5\\},\\{R6\\}\\} and \\{\\{R1\\},\\{R2\\},\\{R3\\}\\}. These two clusters are joined based on \\{R5\\} and \\{R2\\}, which are the nearest requests to each other across the two clusters. Finally, only one cluster is left. \n\n\nOnce we build clustering trees for different sets of requests, we need to compute the coefficient of correlation between the clustering trees.
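The construction steps listed earlier, under single linkage, can be sketched as a naive procedure. This is an illustrative O(N^3) sketch over (ranking, overlapping-ratio) annotations, not the optimized SLINK algorithm:

```python
import math

def single_linkage(points):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    nearest members are closest, until a single cluster remains.
    Returns the final cluster and the ordered list of merge steps."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    clusters = [[p] for p in points]
    merges = []  # record of each merge step (the dendrogram)
    while len(clusters) > 1:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(a, b)
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]))
        merges.append((clusters[i], clusters[j]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters[0], merges
```

The recorded merge steps play the role of the clustering tree: the step at which two requests first end up in the same cluster determines their cophenetic distance.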
We use the \\textit{cophenetic correlation coefficient} to compare two clustering trees \\cite{sokal1962comparison}. A cophenetic correlation coefficient determines how well a clustering tree preserves the pairwise distances between the original requests before clustering. \\textit{The cophenetic distance between two requests in a clustering tree is the height of the clustering tree where the two branches that include the two requests merge into a single branch.} We compute the cophenetic correlation coefficient for each clustering tree and use it to measure the similarity between two clustering trees. Given a set of requests $R$ and its corresponding clustering tree $T$, where $R(i,j)$ is the Euclidean distance between the $i$th and $j$th requests and $T(i,j)$ is the cophenetic distance at which the two requests first merge in the clustering tree, we compute the cophenetic correlation coefficient using the following equation \\cite{sokal1962comparison}: \n\\begin{equation}\n c = \\frac{\\sum_{i < j} \\left( R (i, j) - \\bar{R} \\right) \\left( T (i, j) - \\bar{T} \\right)}{\\sqrt{\\left[ \\sum_{i < j} \\left( R (i, j) - \\bar{R} \\right)^2 \\right] \\left[ \\sum_{i < j} \\left( T (i, j) - \\bar{T} \\right)^2 \\right]}}\n\\end{equation}\nwhere $\\bar{R}$ and $\\bar{T}$ are the means of $R(i,j)$ and $T(i,j)$ over all pairs $i < j$.\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{m \\rightarrow \\pm \\infty} \\tmop{Re} (\\rho^M_{w_n} \\left( m \\right))\n & = 0\n \\end{array}\n\\end{equation}\nThus,\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{m \\rightarrow \\pm \\infty} \\arg \\left( \\rho^M_{w_n} \\left( m \\right)\n \\right) & = \\frac{\\pi}{2}\n \\end{array}\n\\end{equation}\n\n\\subsubsection{Quotients and Differences of $\\rho^M_{w_n} \\left( m \\right)$}\n\nLet\n\\begin{equation}\n \\begin{array}{ll}\n \\Delta \\rho^M_{w_n} (m) & = \\rho^M_{w_n} (m + 1) - \\rho^M_{w_n} \\left( m \\right)\n \\end{array}\n\\end{equation}\nbe the forward difference of consecutive roots of $M [w_n (x) ; x \\rightarrow s]$. 
The limiting difference between consecutive roots is the countably\ninfinite set of solutions to the equation $n^{\\frac{s}{2}} + (n + 1)^{\\frac{s}{2}} = 0$ given by\n\\begin{equation}\n \\begin{array}{ll}\n \\Delta \\rho^M_{w_n} (\\pm \\infty) & = \\lim_{m \\rightarrow \\pm \\infty} \\Delta \\rho^M_{w_n} (m)\\\\\n & = \\left\\{ s : n^{\\frac{s}{2}} + (n + 1)^{\\frac{s}{2}} = 0 \\right\\}\\\\\n & = \\lim_{m \\rightarrow \\pm \\infty} \\left( \\rho^M_{w_n} (m + 1) -\n \\rho^M_{w_n} (m) \\right)\\\\\n & = \\lim_{m \\rightarrow \\pm \\infty} \\left( \\frac{\\rho^M_{w_n} (m)}{m}\n \\right)\\\\\n & = \\frac{2 \\pi i}{M [\\chi \\left( x, I^H_n \\right) ; x \\rightarrow 0]}\\\\\n & = \\frac{2 \\pi i}{\\lim_{s \\rightarrow 0} \\left( \\frac{n^{- s} - (n +\n 1)^{- s}}{s} \\right)}\\\\\n & = \\frac{2 \\pi i}{\\ln (n + 1) - \\ln (n)}\n \\end{array} \\label{wmrad}\n\\end{equation}\nLet $\\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}}$ denote the limit\n\\begin{equation}\n \\begin{array}{ll}\n \\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}} & = \\lim_{m \\rightarrow \\pm \\infty}\n \\frac{\\rho^M_{w_n} (m)}{\\rho^M_{w_{n - 1}} (m)}\\\\\n & = \\frac{\\Delta \\rho^M_{w_n} (\\pm \\infty)}{\\Delta \\rho^M_{w_{n - 1}}\n (\\pm \\infty)}\\\\\n & = \\frac{M \\left[ \\chi \\left( x, \\left( \\frac{1}{n + 1},\n \\frac{1}{n} \\right) \\right) ; x \\rightarrow 0 \\right]}{M\n \\left[ \\chi \\left( x, \\left( \\frac{1}{n}, \\frac{1}{n - 1} \\right) \\right)\n ; x \\rightarrow 0 \\right]}\\\\\n & = \\frac{\\ln (n) - \\ln (n - 1)}{\\ln (n + 1) - \\ln (n)}\n \\end{array}\n\\end{equation}\nthen we also have the limit of the limits\n$\\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}}$ as $n \\rightarrow \\infty$ given by\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{n \\rightarrow \\pm \\infty}\n \\mathcal{Q}^{\\infty}_{\\rho^M_{w_n}} & = \\lim_{n \\rightarrow \\pm\n \\infty} \\left( \\frac{\\Delta \\rho^M_{w_n} (\\pm \\infty)}{\\Delta 
\\rho^M_{w_{n\n - 1}} (\\pm \\infty)} \\right)\\\\\n & = \\lim_{n \\rightarrow \\pm \\infty} \\left( \\frac{\\frac{\\ln (n) - \\ln (n -\n 1)}{\\ln (n + 1) - \\ln (n)}}{\\frac{\\ln (n - 1) - \\ln (n - 2)}{\\ln (n) - \\ln\n (n - 2)}} \\right)\\\\\n & = \\lim_{n \\rightarrow \\pm \\infty} \\left( \\frac{\\ln (n) - \\ln (n -\n 1)}{\\ln (n + 1) - \\ln (n)} \\right)\\\\\n & = 1\n \\end{array}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{13cm}{9cm}{\\includegraphics{wmroots.eps}}\n \\caption{$\\{\\rho^M_{w_n} (m) : m = 1 \\ldots 5\\}$}\n\\end{figure}\n\nThe limiting quotients $\\frac{e^{- \\rho^M_{w_n} (m + 1)} }{e^{- \\rho^M_{w_n}\n(m)}} = e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m + 1)}$ as $m \\rightarrow \\infty$\nare given by\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{m \\rightarrow \\infty} e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m + 1)}\n & = 1 - 2 \\sin \\left( \\frac{\\pi}{\\ln (n + 1) - \\ln (n)}\n \\right)^2 - 2 i \\cos \\left( \\frac{\\pi}{\\ln (n + 1) - \\ln (n)} \\right) \\sin\n \\left( \\frac{\\pi}{\\ln (n + 1) - \\ln (n)} \\right) \\\\\n & = e^{- \\frac{2 \\pi i}{\\ln (n + 1) - \\ln (n)}}\n \\end{array}\n\\end{equation}\nwhere we have\n\\begin{equation}\n \\begin{array}{lll}\n | \\lim_{m \\rightarrow \\infty} e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m\n + 1)} | & = \\lim_{m \\rightarrow \\infty} | e^{\\rho^M_{w_n} (m) - \\rho^M_{w_n} (m\n + 1)} | & \\\\\n & = \\left| e^{- \\frac{2 \\pi i}{\\ln (n + 1) - \\ln (n)}} \\right| & \\\\\n & = 1 & \n \\end{array}\n\\end{equation}\nand\n\\begin{equation}\n \\begin{array}{l}\n \\lim_{n \\rightarrow \\infty} \\lim_{m \\rightarrow \\infty} e^{\\rho^M_{w_n}\n (m) - \\rho^M_{w_n} (m + 1)} = - 1\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Laplace Transform $L [(s - 1) M [w_n (x) ; x\n\\rightarrow s] ; s \\rightarrow t]$}\n\nThe Laplace transform of $(s - 1) M [w_n (x) ; x \\rightarrow s]$, defined by\n\\begin{equation}\n \\begin{array}{lll}\n \\left. 
L [(s - 1) M [w_n (x) ; x \\rightarrow s \\right] ; s \\rightarrow t]\n & & = L [n (n + 1)^{- s} + sn^{- s} - n^{1 - s} ; s \\rightarrow t]\\\\\n & & = \\int_0^{\\infty} \\left( n (n + 1)^{- s} + sn^{- s} - n^{1 - s} \\right) e^{- s t}\n \\mathrm{d} s\\\\\n & & = \\frac{t + \\ln \\left( \\frac{n^n}{(n + 1)^n} \\right) t + \\ln (n + 1)\n + n \\ln (n)^2 - n \\ln (n) \\ln (n + 1)}{(\\ln (n) + t)^2 (\\ln (n + 1) + t)}\n \\end{array}\n\\end{equation}\nhas poles at $- \\ln (n)$ and $- \\ln (n + 1)$ with residues\n\\begin{equation}\n \\begin{array}{ll}\n \\underset{t = - \\ln (n)}{\\tmop{Res}} \\left( L [(s\n - 1) M [w_n (x) ; x \\rightarrow s] ; s \\rightarrow t] \\right) & = -\n n\\\\\n \\underset{t = - \\ln (n + 1)}{\\tmop{Res}} \\left( L\n [(s - 1) M [w_n (x) ; x \\rightarrow s] ; s \\rightarrow t] \\right) &\n = n\n \\end{array}\n\\end{equation}\n\n\\subsection{The Gauss Map $h (x)$}\n\n\\subsubsection{Continued Fractions}\n\nThe Gauss map $h (x)$, also known as the Gauss function or Gauss\ntransformation, maps the unit interval onto itself and by iteration gives\nthe continued fraction expansion of a real number\n{\\cite[A.1.7]{Ctsa}}{\\cite[I.1]{cf}}{\\cite[X]{itn}}. The $n$-th component\nfunction $h_n (x)$ of the map $h (x)$ is given by\n\\begin{equation}\n \\begin{array}{ll}\n h_n (x) & = \\frac{1 - x n}{x} \\chi \\left( x, I^H_n \\right)\n \\end{array}\n\\end{equation}\nThe infinite sum of the component functions is the Gauss map\n\\begin{equation}\n \\begin{array}{ll}\n h (x) & = \\sum_{n = 1}^{\\infty} h_n (x)\\\\\n & = \\sum_{n = 1}^{\\infty} \\frac{1 - x n}{x} \\chi \\left( x, I^H_n\n \\right)\\\\\n & = x^{- 1} - \\left\\lfloor x^{- 1} \\right\\rfloor\\\\\n & =\\{x^{- 1} \\}\n \\end{array} \\label{h}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{12cm}{12cm}{\\includegraphics{h.eps}}\n \\caption{\\label{gaussmapfig}The Gauss Map}\n\\end{figure}\n\nThe fixed points of $h (x)$ 
are the (positive) solutions to the equation $h_n (x) = x$ given by\n\\begin{equation}\n \\begin{array}{ll}\n \\tmop{Fix}_h^n & =\\{x : h_n (x) = x\\}\\\\\n & = \\left\\{ x : \\frac{1 - x n}{x} \\chi \\left( x, I^H_n \\right) = x\n \\right\\}\\\\\n & = \\left\\{ x : \\frac{1 - x n}{x} = x \\right\\}\\\\\n & = \\frac{\\sqrt{n^2 + 4}}{2} - \\frac{n}{2}\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Mellin Transform of $h (x)$}\n\nThe Mellin transform (\\ref{mellin}) of the Gauss map $h (x)$ over the unit\ninterval, scaled by $s$ then subtracted from $\\frac{s}{s - 1}$, is an analytic\ncontinuation of $\\zeta (s)$, denoted by $\\zeta_h (s)$, valid for all $s$ with $(-\n\\tmop{Re} (s)) \\not\\in \\mathbbm{N}$. The transfer operator and thermodynamic\naspects of the Gauss map are discussed in\n{\\cite{gkwoperator}}{\\cite{newtonzeta}}{\\cite{gkwzeta}}{\\cite{yarh}}{\\cite{dzf}}.\nThe Mellin transform of the $n$-th component function $h_n (x)$ is given by\n\\begin{equation}\n \\begin{array}{ll}\n M [h_n (x) ; x \\rightarrow s] & = \\int_0^1 h_n (x) x^{s - 1} \\mathrm{d} x\\\\\n & = \\int_0^1 \\frac{1 - x n}{x} \\chi \\left( x, I^H_n \\right) x^{s - 1}\n \\mathrm{d} x\\\\\n & = \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\left( x^{- 1} - \\left\\lfloor\n x^{- 1} \\right\\rfloor \\right) x^{s - 1} \\mathrm{d} x\\\\\n & = \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\frac{1 - x n}{x} x^{s - 1}\n \\mathrm{d} x\\\\\n & = - \\frac{n (n + 1)^{- s} + s (n + 1)^{- s} - n^{1 - s}}{s^2 - s}\n \\end{array}\n\\end{equation}\nwhich provides an analytic continuation $\\zeta_h (s) = \\zeta (s)$ for all $(-\n\\tmop{Re} (s)) \\not\\in \\mathbbm{N}$\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_h (s) & = \\frac{s}{s - 1} - s M [h (x) ; x \\rightarrow s]\\\\\n & = \\frac{s}{s - 1} - s \\int_0^1 h (x) x^{s - 1} \\mathrm{d} x\\\\\n & = \\frac{s}{s - 1} - s \\int_0^1 \\left( x^{- 1} - \\left\\lfloor x^{- 1}\n \\right\\rfloor \\right) x^{s - 1} \\mathrm{d} x\\\\\n & = \\frac{s}{s 
- 1} - s \\sum_{n = 1}^{\\infty} M [h_n (x) ; x\n \\rightarrow s]\\\\\n & = \\frac{s}{s - 1} - \\frac{1}{s - 1} \\sum_{n = 1}^{\\infty} - (n (n +\n 1)^{- s} + s (n + 1)^{- s} - n^{1 - s})\\\\\n & = \\frac{s}{s - 1} - \\frac{1}{s - 1} \\sum_{n = 1}^{\\infty} \\left( n^{1 - s} -\n n (n + 1)^{- s} - s (n + 1)^{- s} \\right)\n \\end{array} \\label{gaussmap}\n\\end{equation}\n\n\\subsection{The Harmonic Sawtooth Map $w (x)$ as an Ordinary Fractal String}\n\n\\subsubsection{Definition and Length}\n\nLet\n\\begin{equation}\n \\begin{array}{ll}\n I^H_n & = \\left( \\frac{1}{n + 1}, \\frac{1}{n} \\right)\n \\end{array} \\label{hi}\n\\end{equation}\nbe the $n$-th harmonic interval; then $w (x)$, for $x \\in \\Omega$, is the\npiecewise monotone mapping of the unit interval onto itself.\nThe fractal string $\\mathcal{L}_w$ associated with $w (x)$ is the set of\nconnected component functions $w_n (x) \\subset w (x)$ where each $w_n (x)$\nmaps $I_n^H$ onto $(0, 1)$ and vanishes when $x \\not\\in I_n^H$. Thus, the\ndisjoint union of the connected components of $\\mathcal{L}_w$ is the infinite\nsum $w (x) = \\sum_{n = 1}^{\\infty} w_n (x)$ where only one of the $w_n (x)$ is\nnonzero for each $x$; thus $w (x)$ maps the entire unit interval onto itself\nuniquely except for the points of discontinuity on the boundary\n$\\partial \\mathcal{L}_w =\\{0, \\frac{1}{n} : n \\in \\mathbbm{N}^{\\ast} \\}$ where\na choice is to be made between 0 and 1 depending on the direction in which the\nlimit is approached. 
Let\n\\begin{equation}\n \\begin{array}{ll}\n w_n (x) & = n (x n + x - 1) \\chi (x, I^H_n)\n \\end{array} \\label{wn}\n\\end{equation}\nwhere $\\chi (x, I^H_n)$ is the $n$-th harmonic interval indicator\n(\\ref{ii})\n\\begin{equation}\n \\begin{array}{ll}\n \\chi (x, I_n^H) & = \\theta \\left( \\frac{x n + x - 1}{n + 1} \\right)\n - \\theta \\left( \\frac{x n - 1}{n} \\right)\n \\end{array} \\label{hii}\n\\end{equation}\nThe substitution $n \\rightarrow \\left\\lfloor \\frac{1}{x} \\right\\rfloor$ can be\nmade in (\\ref{hii}) where it is seen that\n\\begin{equation}\n \\begin{array}{ll}\n \\chi \\left( x, I^H_{\\left\\lfloor x^{- 1} \\right\\rfloor}\n \\right) & = \\theta \\left( \\frac{x \\left\\lfloor x^{- 1}\n \\right\\rfloor + x - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor + 1}\n \\right) - \\theta \\left( \\frac{x \\left\\lfloor x^{- 1} \\right\\rfloor\n - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor} \\right) = 1 \\label{hieye}\n \\end{array}\n\\end{equation}\nand so making the same substitution in (\\ref{wn}) gives\n\\begin{equation}\n \\begin{array}{ll}\n w (x) & = \\sum_{n = 1}^{\\infty} w_n (x)\\\\\n & = \\sum_{n = 1}^{\\infty} n (x n + x - 1) \\chi (x, I^H_n)\\\\\n & = \\left\\lfloor x^{- 1} \\right\\rfloor \\left( x \\left\\lfloor x^{-\n 1} \\right\\rfloor + x - 1 \\right)\\\\\n & = x \\left\\lfloor x^{- 1} \\right\\rfloor^2 + x \\left\\lfloor x^{- 1}\n \\right\\rfloor - \\left\\lfloor x^{- 1} \\right\\rfloor\n \\end{array} \\label{w}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{12cm}{12cm}{\\includegraphics{harmonicsaw.eps}}\n \\caption{The Harmonic Sawtooth Map}\n\\end{figure}\n\nThe intervals $I_n^w$ will be defined such that $\\ell w_n = \\left| I_n^w\n\\right| = \\left| w_n (x) \\right|$. 
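As a quick numerical sanity check, the closed form $w (x) = x \\left\\lfloor x^{- 1} \\right\\rfloor^2 + x \\left\\lfloor x^{- 1} \\right\\rfloor - \\left\\lfloor x^{- 1} \\right\\rfloor$ can be compared against the component form $w_n (x) = n (x n + x - 1)$ on the harmonic intervals; this sketch and its function names are ours, not the paper's:

```python
import math

def w(x):
    # closed form: w(x) = x*floor(1/x)^2 + x*floor(1/x) - floor(1/x)
    f = math.floor(1 / x)
    return x * f * f + x * f - f

def w_component(x, n):
    # n-th component w_n(x) = n(xn + x - 1), supported on
    # the harmonic interval I_n^H = (1/(n+1), 1/n), zero elsewhere
    if 1 / (n + 1) < x < 1 / n:
        return n * (x * n + x - 1)
    return 0.0
```

Away from the interval endpoints $\\frac{1}{n}$ the two forms agree, and $w$ maps the unit interval into $[0, 1)$.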
Let\n\\begin{equation}\n \\begin{array}{ll}\n \\mathfrak{h}_n & = \\int_{I^H_n} x (n + 1) n \\mathrm{d} x\\\\\n & = \\frac{\\left( \\frac{1}{n + 1} + \\frac{1}{n} \\right)}{2}\\\\\n & = \\frac{2 n + 1}{2 n (n + 1)}\n \\end{array}\n\\end{equation}\nbe the midpoint of $I^H_n$; then\n\\begin{equation}\n \\begin{array}{ll}\n I_n^w & = \\left( \\mathfrak{h}_n - \\frac{\\left| w_n (x) \\right|}{2},\n \\mathfrak{h}_n + \\frac{\\left| w_n (x) \\right|}{2} \\right)\\\\\n & = \\left( \\frac{4 n + 1}{4 n (n + 1)}, \\frac{4 n + 3}{4 n (n + 1)}\n \\right)\n \\end{array}\n\\end{equation}\nso that\n\\begin{equation}\n \\begin{array}{ll}\n \\ell w_n & = \\left| w_n (x) \\right|\\\\\n & = \\int_0^1 n (x n + x - 1) \\chi (x, I^H_n) \\mathrm{d} x\\\\\n & = \\int^{\\frac{1}{n}}_{\\frac{1}{n + 1}} w (x) \\mathrm{d} x\\\\\n & = \\left| I_n^w \\right|\\\\\n & = \\int_0^1 \\chi (x, I_n^w) \\mathrm{d} x\\\\\n & = \\frac{4 n + 3}{4 n (n + 1)} - \\frac{4 n + 1}{4 n (n + 1)}\\\\\n & = \\frac{1}{2 n (n + 1)}\n \\end{array} \\label{wlen}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{10cm}{8cm}{\\includegraphics{reciplen.eps}}\n \\caption{Reciprocal lengths $\\ell w_n^{- 1}$}\n\\end{figure}\n\nThe total length of $\\mathcal{L}_w$ is\n\\begin{equation}\n \\begin{array}{ll}\n |\\mathcal{L}_w | & = \\int_0^1 w (x) \\mathrm{d} x\\\\\n & = \\sum_{n = 1}^{\\infty} \\ell w_n\\\\\n & = \\sum_{n = 1}^{\\infty} \\frac{1}{2 n (n + 1)}\\\\\n & = \\frac{1}{2}\n \\end{array}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{10cm}{10cm}{\\includegraphics{sawfi.eps}}\n \\caption{$\\chi (x, I_n^w)$ green and $w_n (x)$ blue for $n = 1 \\ldots 3$ and\n $x = \\frac{1}{4} \\ldots 1$}\n\\end{figure}\n\n\\subsubsection{Geometry and Volume of the Inner Tubular Neighborhood}\n\nThe geometric counting function (\\ref{geocount}) of $\\mathcal{L}_w$ is\n\\begin{equation}\n \\begin{array}{ll}\n N_{\\mathcal{L}_w} (x) & =\\#\\{n \\geqslant 1 : \\ell w_n^{- 1} \\leqslant\n x\\}\\\\\n & =\\#\\{n \\geqslant 1 : 2 (n + 1) n 
\\leqslant x\\}\\\\\n & = \\left\\lfloor \\frac{\\sqrt{2 x + 1}}{2} - \\frac{1}{2} \\right\\rfloor\n \\end{array} \\label{wgeocount}\n\\end{equation}\nwhich is used to calculate the limiting constant (\\ref{mconst}) $C_w$\nappearing in the equation for the Minkowski content\n\\begin{equation}\n \\begin{array}{ll}\n C_w & = \\lim_{x \\rightarrow \\infty} \\frac{N_{\\mathcal{L}_w}\n (x)}{x^{D_{\\mathcal{L}_w}}}\\\\\n & = \\lim_{x \\rightarrow \\infty} \\frac{\\frac{\\sqrt{2 x + 1}}{2} -\n \\frac{1}{2}}{\\sqrt{x}}\\\\\n & = \\frac{\\sqrt{2}}{2}\n \\end{array} \\label{wmconst}\n\\end{equation}\nThe function $N_{\\mathcal{L}_w} (x)$ happens to coincide with\n{\\cite[A095861]{oeis}}, which counts the primitive Pythagorean triangles\nof the form $\\{(a, b, b + 1) : (b + 1) \\leqslant x\\}$.\n{\\cite[171-176]{bon}}{\\cite[10.1]{pt}}{\\cite[11.2-11.5]{ent}} Let\n\\begin{equation}\n \\begin{array}{lll}\n v (\\varepsilon) & = \\min (j : \\ell w_j < 2 \\varepsilon) & =\n \\left\\lfloor \\frac{\\varepsilon + \\sqrt{\\varepsilon^2 + \\varepsilon}}{2\n \\varepsilon} \\right\\rfloor\n \\end{array}\n\\end{equation}\nwhich is the floor of the solution to the inverse length equation\n\\begin{equation}\n \\begin{array}{ll}\n \\frac{\\varepsilon + \\sqrt{\\varepsilon^2 + \\varepsilon}}{2 \\varepsilon} =\n \\left\\{ n : \\ell w_{n - 1} = 2 \\varepsilon \\right\\} = \\left\\{ n :\n \\frac{1}{2 n (n - 1)} = 2 \\varepsilon \\right\\} & \\\\\n \\frac{\\ell w_{\\frac{\\varepsilon + \\sqrt{\\varepsilon^2 + \\varepsilon}}{2\n \\varepsilon} - 1}}{2} = \\varepsilon & \n \\end{array}\n\\end{equation}\nThen the volume of the inner tubular neighborhood of $\\partial \\mathcal{L}_w$\nwith radius $\\varepsilon$ (\\ref{tnv}) is\n\\begin{equation}\n \\begin{array}{ll}\n V_{\\mathcal{L}_w} (\\varepsilon) & = 2 \\varepsilon N_{\\mathcal{L}_w} \\left(\n \\frac{1}{2 \\varepsilon} \\right) + \\sum_j^{\\ell w_j < 2 \\varepsilon}\n \\ell w_j\\\\\n & = 2 \\varepsilon N_{\\mathcal{L}_w} \\left( 
\\frac{1}{2 \\varepsilon}\n \\right) + \\sum_{\\tmscript{\\begin{array}{l}\n n = v (\\varepsilon)\n \\end{array}}}^{\\infty} \\frac{1}{2 (n + 1) n}\\\\\n & = 2 \\varepsilon N_{\\mathcal{L}_w} \\left( \\frac{1}{2 \\varepsilon}\n \\right) + \\frac{1}{2 v (\\varepsilon)}\\\\\n & = 2 \\varepsilon \\left\\lfloor \\frac{\\sqrt{\\frac{1}{\\varepsilon} + 1}}{2}\n - \\frac{1}{2} \\right\\rfloor + \\frac{1}{2 v (\\varepsilon)}\\\\\n & = \\frac{4 \\varepsilon v (\\varepsilon)^2 - 4 \\varepsilon v (\\varepsilon)\n + 1}{2 v (\\varepsilon)}\n \\end{array} \\label{wvol}\n\\end{equation}\nsince\n\\begin{equation}\n \\begin{array}{ll}\n \\sum_{n = m}^{\\infty} \\frac{1}{2 n (n + 1)} & = \\frac{1}{2 m}\n \\end{array}\n\\end{equation}\nand by definition we have\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{\\varepsilon \\rightarrow 0^+} V_{\\mathcal{L}_w} (\\varepsilon) & = 0\\\\\n \\lim_{\\varepsilon \\rightarrow \\infty} V_{\\mathcal{L}_w} (\\varepsilon) & =\n |\\mathcal{L}_w | = \\frac{1}{2}\n \\end{array}\n\\end{equation}\nThus, using (\\ref{wmconst}) and (\\ref{wvol}), the Minkowski content\n(\\ref{mcontent}) of $\\mathcal{L}_w$ is\n\\begin{equation}\n \\begin{array}{lll}\n \\mathcal{M}_{\\mathcal{L}_w} & = \\lim_{\\varepsilon \\rightarrow 0^+}\n \\frac{V_{\\mathcal{L}_w} (\\varepsilon)}{\\varepsilon^{1 - D_{\\mathcal{L}_w}}}\n & \\\\\n & = \\lim_{\\varepsilon \\rightarrow 0^+} \\frac{1}{\\sqrt{\\varepsilon}} \\left( 2\n \\varepsilon \\left\\lfloor \\frac{\\sqrt{\\frac{1}{\\varepsilon} + 1}}{2} -\n \\frac{1}{2} \\right\\rfloor + \\frac{1}{2} \\left\\lfloor \\frac{\\varepsilon +\n \\sqrt{\\varepsilon^2 + \\varepsilon}}{2 \\varepsilon} \\right\\rfloor^{- 1}\n \\right) & \\\\\n & = \\frac{C_w 2^{1 - D_{\\mathcal{L}_w}}}{1 - D_{\\mathcal{L}_w}} & \\\\\n & = \\frac{\\frac{\\sqrt{2}}{2} 2^{1 - \\frac{1}{2}}}{1 - \\frac{1}{2}} & \\\\\n & = 2 & \n \\end{array} \\label{wmcontent}\n\\end{equation}\n\\begin{figure}[h]\n \\resizebox{12cm}{10cm}{\\includegraphics{sawgeocount.eps}}\n 
\\caption{Geometric Counting Function $N_{\\mathcal{L}_w} (x)$ of $w (x)$}\n\\end{figure}\n\n\\begin{figure}[h]\n \\resizebox{12cm}{10cm}{\\includegraphics{minleps.eps}}\n \\caption{$\\{v (\\varepsilon) = \\min (n : \\ell w_n < 2 \\varepsilon) :\n \\varepsilon = \\frac{1}{1000} \\ldots \\frac{1}{4} \\}$}\n\\end{figure}\n\n\\begin{figure}[h]\n \\resizebox{12cm}{10cm}{\\includegraphics{Vw.eps}}\n \\caption{Volume of the inner tubular neighborhood of $\\partial\n \\mathcal{L}_w$ with radius $\\varepsilon$ $\\left\\{ V_{\\mathcal{L}_w}\n (\\varepsilon) : \\varepsilon = 0 \\ldots \\frac{1}{8} \\right\\}$}\n\\end{figure}\n\n\\begin{figure}[h]\n \\resizebox{12cm}{10cm}{\\includegraphics{vse.eps}}\n \\caption{$\\left\\{ \\frac{V_{\\mathcal{L}_w} (\\varepsilon)}{\\sqrt{\\varepsilon}} :\n \\varepsilon = 0 \\ldots \\frac{1}{8} \\right\\}$ and $\\frac{V_{\\mathcal{L}_w} (8^{-\n 1})}{\\sqrt{8^{- 1}}} = \\sqrt{2}$}\n\\end{figure}\n\n\\subsubsection{The Geometric Zeta Function $\\zeta_{\\mathcal{L}_w} (s)$}\n\nThe geometric zeta function (\\ref{geozeta}) of $w (x)$ is the Dirichlet series\nof the lengths $\\ell w_n$ (\\ref{wlen}) and also an integral over the geometric\nlength counting function (\\ref{wgeocount}) $N_{\\mathcal{L}_w} (x)$\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\mathcal{L}_w} (s) & = \\sum_{n = 1}^{\\infty} \\ell w_n^s\\\\\n & = \\sum_{n = 1}^{\\infty} \\left( \\frac{1}{2 n (n + 1)} \\right)^s\\\\\n & = \\sum_{n = 1}^{\\infty} 2^{- s} (n + 1)^{- s} n^{- s}\\\\\n & = s \\int_0^{\\infty} N_{\\mathcal{L}_w} (x) x^{- s - 1} \\mathrm{d} x\\\\\n & = s \\int_0^{\\infty} \\left\\lfloor \\frac{\\sqrt{2 x + 1}}{2} - \\frac{1}{2}\n \\right\\rfloor x^{- s - 1} \\mathrm{d} x\n \\end{array} \\label{wgeozeta}\n\\end{equation}\nThe residue (\\ref{geozetares}) of $\\zeta_{\\mathcal{L}_w} (s)$ at\n$D_{\\mathcal{L}_w}$ is\n\\begin{equation}\n \\begin{array}{ll}\n \\underset{s = D_{\\mathcal{L}}}{\\tmop{Res} (\\zeta_{\\mathcal{L}} (s))} & =\n \\lim_{s \\rightarrow 
D_{\\mathcal{L}_w}^+} (s - D_{\\mathcal{L}})\n \\zeta_{\\mathcal{L}} (s)\\\\\n & = \\lim_{s \\rightarrow \\frac{1}{2}^+} \\left( s - \\frac{1}{2} \\right)\n \\zeta_{\\mathcal{L}} (s)\\\\\n & = \\lim_{s \\rightarrow \\frac{1}{2}^+} \\left( s - \\frac{1}{2} \\right)\n \\sum_{n = 1}^{\\infty} 2^{- s} (n + 1)^{- s} n^{- s}\\\\\n & = \\lim_{s \\rightarrow \\frac{1}{2}^+} \\left( s - \\frac{1}{2} \\right) s\n \\int_0^{\\infty} \\left\\lfloor \\frac{\\sqrt{2 x + 1}}{2} - \\frac{1}{2}\n \\right\\rfloor x^{- s - 1} \\mathrm{d} x\\\\\n & = \\frac{\\sqrt{2}}{4}\n \\end{array}\n\\end{equation}\nThe values of $\\zeta_{\\mathcal{L}_w} (n)$ at positive integer values $n \\in\n\\mathbbm{N}^{\\ast}$ are given explicitly by a rather unwieldy sum of binomial\ncoefficients and the Riemann zeta function $\\zeta (n)$ at even integer values.\nFirst, define\n\\begin{equation}\n \\begin{array}{ll}\n a_n & = \\frac{\\left( n - 1 \\right) \\left( 1 - \\left( - 1 \\right)^{n + 1}\n \\right)}{2}\\\\\n b_n & = \\frac{\\left( - 1 \\right)^{n + 1} \\left( n - 1 \\right)}{2} + n -\n \\frac{7}{4} + \\frac{(- 1)^n}{4}\\\\\n c_n & = (- 1)^n (n - 1)\\\\\n d_n & = \\frac{(- 1)^n}{2}\n \\end{array}\n\\end{equation}\nthen\n\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\mathcal{L}_w} (n) & = \\frac{( - 1)^n \\binom{2 n - 1}{n - 1}}{\n 2^n} + \\sum_{m = a_n}^{b_n} \\frac{2 (- 1)^n \\binom{2 m + c_n - d_n +\n \\frac{1}{2}}{n - 1} \\zeta \\left( d_n + 2 n - \\frac{3}{2} - 2 m - c_n\n \\right)}{ 2^n}\n \\end{array}\n\\end{equation}\n\nThe terms of $\\zeta_{\\mathcal{L}_w} (n)$ from $n = 1$ to $10$ are shown below\nin Table \\ref{wgztable}.\n\n\\begin{table}[h]\n $\\left( \\begin{array}{llllll}\n + \\frac{1}{2} & & & & & \\\\\n - \\frac{3}{4} & + \\frac{1}{2} \\hspace{0.25em} \\zeta \\left( 2 \\right) & & \n & & \\\\\n + \\frac{5}{4} & - \\frac{3}{4} \\hspace{0.25em} \\zeta \\left( 2 \\right) & & \n & & \\\\\n - \\frac{35}{16} & + \\frac{5}{4} \\hspace{0.25em} \\zeta \\left( 2 \\right) & +\n \\frac{1}{8} \\hspace{0.25em} 
\\zeta \\left( 4 \\right) & & & \\\\\n + \\frac{63}{16} & - \\frac{35}{16} \\hspace{0.25em} \\zeta \\left( 2 \\right) &\n - \\frac{5}{16} \\hspace{0.25em} \\zeta \\left( 4 \\right) & & & \\\\\n - \\frac{231}{32} & + \\frac{63}{16} \\hspace{0.25em} \\zeta \\left( 2 \\right)\n & + \\frac{21}{32} \\hspace{0.25em} \\zeta \\left( 4 \\right) & + \\frac{1}{32}\n \\zeta \\left( 6 \\right) & & \\\\\n + \\frac{429}{32} & - \\frac{231}{32} \\hspace{0.25em} \\zeta \\left( 2 \\right)\n & - \\frac{21}{16} \\hspace{0.25em} \\zeta \\left( 4 \\right) & - \\frac{7}{64}\n \\hspace{0.25em} \\zeta \\left( 6 \\right) & & \\\\\n - \\frac{6435}{256} & + \\frac{429}{32} \\hspace{0.25em} \\zeta \\left( 2\n \\right) & + \\frac{165}{64} \\hspace{0.25em} \\zeta \\left( 4 \\right) & +\n \\frac{9}{32} \\hspace{0.25em} \\zeta \\left( 6 \\right) & + \\frac{1}{128}\n \\hspace{0.25em} \\zeta \\left( 8 \\right) & \\\\\n + \\frac{12155}{256} & - \\frac{6435}{256} \\hspace{0.25em} \\zeta \\left( 2\n \\right) & - \\frac{1287}{256} \\hspace{0.25em} \\zeta \\left( 4 \\right) & -\n \\frac{165}{256} \\hspace{0.25em} \\zeta \\left( 6 \\right) & - \\frac{9}{256}\n \\hspace{0.25em} \\zeta \\left( 8 \\right) & \\\\\n - \\frac{46189}{512} & + \\frac{12155}{256} \\hspace{0.25em} \\zeta \\left( 2\n \\right) & + \\frac{5005}{512} \\hspace{0.25em} \\zeta \\left( 4 \\right) & +\n \\frac{715}{512} \\hspace{0.25em} \\zeta \\left( 6 \\right) & + \\frac{55}{512}\n \\hspace{0.25em} \\zeta \\left( 8 \\right) & + \\frac{1}{512}\n \\hspace{0.25em} \\zeta \\left( 10 \\right)\n \\end{array} \\right)$\n \n \n \\caption{$\\{\\zeta_{\\mathcal{L}_w} (n) = \\Sigma \\tmop{row}_n : n = 1 \\ldots 10\\}$\\label{wgztable}}\n\\end{table}\n\n\\section{Fractal Strings and Dynamical Zeta Functions}\n\n\\subsection{Fractal Strings}\n\nA fractal string $\\mathcal{L}$ is defined as a nonempty bounded open subset\nof the real line $\\mathcal{L} \\subseteq \\mathbbm{R}$ consisting of a 
countable\ndisjoint union of open intervals $I_j$\n\\begin{equation}\n \\mathcal{L}= \\bigcup_{j = 1}^{\\infty} I_j\n\\end{equation}\nThe length of the $j$-th interval $I_j$ is denoted by\n\\begin{equation}\n \\ell_j = \\left| I_j \\right|\n\\end{equation}\nwhere $| \\cdot |$ is the $1$-dimensional Lebesgue\nmeasure. The lengths $\\ell_j$ must form a nonnegative monotonically decreasing\nsequence and the total length must be finite, that is\n\\begin{equation}\n \\begin{array}{l}\n |\\mathcal{L}|_1 = \\sum_{j = 1}^{\\infty} \\ell_j < \\infty\\\\\n \\ell_1 \\geqslant \\ell_2 \\geqslant \\ldots \\geqslant \\ell_j \\geqslant\n \\ell_{j + 1} \\geqslant \\cdots \\geqslant 0\n \\end{array}\n\\end{equation}\nThe case when $\\ell_j = 0$ for some $j$ is excluded here, since the sequence of\nlengths would then be finite. The fractal string is defined completely by its sequence of\nlengths so it can be denoted\n\\begin{equation}\n \\begin{array}{ll}\n \\mathcal{L} & =\\{\\ell_j \\}_{j = 1}^{\\infty}\n \\end{array}\n\\end{equation}\nThe boundary of $\\mathcal{L}$ in $\\mathbbm{R}$, denoted by $\\partial\n\\mathcal{L} \\subset \\Omega$, is a totally disconnected bounded perfect subset\nwhich can be represented as a string of finite length, and generally any\ncompact subset of $\\mathbbm{R}$ also has this property. The boundary $\\partial\n\\mathcal{L}$ is said to be perfect since it is closed and each of its points\nis a limit point. Since the Cantor-Bendixson lemma states that there exists a\nperfect set $P \\subset \\partial \\mathcal{L}$ such that $\\partial \\mathcal{L}-\nP$ is at most countable, we can define $\\mathcal{L}$ as the complement of\n$\\partial \\mathcal{L}$ in its closed convex hull. The connected components of\nthe bounded open set $\\mathcal{L} \\backslash \\partial \\mathcal{L}$ are the\nintervals $I_j$. 
{\\cite[1.2]{gsfs}}{\\cite[2.2\nEx17]{mti}}{\\cite[3.1]{fractalzetastrings}}{\\cite{weylberry}}{\\cite{fdisp}}{\\cite{hearfractaldrumshape}}{\\cite{gmc}}{\\cite{ncfg}}{\\cite{zflf}}{\\cite{possf}}{\\cite{cdssfs}}\n\n\\subsubsection{The Minkowski Dimension $D_{\\mathcal{L}}$ and Content\n$\\mathcal{M}_{\\mathcal{L}}$}\n\nThe Minkowski dimension $D_{\\mathcal{L}} \\in [0, 1]$, also known as the box\ndimension, is the growth exponent of $V (\\varepsilon)$, which coincides with\nthe abscissa of convergence of the Dirichlet series $\\sum_{j = 1}^{\\infty} \\ell_j^s$\n\\begin{equation}\n \\begin{array}{ll}\n D_{\\mathcal{L}} & = \\inf \\{\\alpha \\geqslant 0 : V (\\varepsilon) = O\n (\\varepsilon^{1 - \\alpha}) \\tmop{as} \\varepsilon \\rightarrow 0^+ \\}\n \\end{array}\n\\end{equation}\nwhere $V (\\varepsilon)$ is the volume of the inner tubular neighborhoods of\n$\\partial \\mathcal{L}$ with radius $\\varepsilon$\n\\begin{equation}\n \\begin{array}{ll}\n V_{\\mathcal{L}} (\\varepsilon) & = \\left| \\{ x \\in \\mathcal{L}: d (x,\n \\partial \\mathcal{L}) < \\varepsilon \\} \\right|\\\\\n & = \\sum_j^{\\ell_j \\geqslant 2 \\varepsilon} 2 \\varepsilon +\n \\sum_j^{\\ell_j < 2 \\varepsilon} \\ell_j\\\\\n & = 2 \\varepsilon N_{\\mathcal{L}} \\left( \\frac{1}{2 \\varepsilon}\n \\right) + \\sum_j^{\\ell_j < 2 \\varepsilon} \\ell_j\n \\end{array} \\label{tnv}\n\\end{equation}\nand $N_{\\mathcal{L}} (x)$ is the geometric counting function, the\nnumber of components whose reciprocal length is less than or equal to\n$x$.\n\\begin{equation}\n \\begin{array}{ll}\n N_{\\mathcal{L}} (x) & =\\#\\{j \\geqslant 1 : \\ell_j^{- 1} \\leqslant x\\}\\\\\n & = \\sum^{\\ell_j^{- 1} \\leqslant x}_{\\tmscript{\\begin{array}{l}\n j \\geqslant 1\n \\end{array}}} 1\n \\end{array} \\label{geocount}\n\\end{equation}\nThe Minkowski content of $\\mathcal{L}$ is then defined as\n\\begin{equation}\n \\begin{array}{ll}\n \\mathcal{M}_{\\mathcal{L}} & = \\lim_{\\varepsilon \\rightarrow 0^+}\n \\frac{V_{\\mathcal{L}} 
(\\varepsilon)}{\\varepsilon^{1 - D_{\\mathcal{L}}}} \\\\\n & = \\frac{C_{\\mathcal{L}} 2^{1 - D_{\\mathcal{L}}}}{1 - D_{\\mathcal{L}}}\\\\\n & = \\frac{\\tmop{Res} (\\zeta_{\\mathcal{L}} (s) ; D_{\\mathcal{L}}) 2^{1 -\n D_{\\mathcal{L}}}}{D_{\\mathcal{L}} (1 - D_{\\mathcal{L}})} \n \\end{array} \\label{mcontent}\n\\end{equation}\nwhere $C_{\\mathcal{L}}$ is the constant\n\\begin{equation}\n \\begin{array}{ll}\n C_{\\mathcal{L}} = & \\lim_{x \\rightarrow \\infty} \\frac{N_{\\mathcal{L}}\n (x)}{x^{D_{\\mathcal{L}}}}\n \\end{array} \\label{mconst}\n\\end{equation}\nIf $\\mathcal{M}_{\\mathcal{L}} \\in (0, \\infty)$ exists then $\\mathcal{L}$ is\nsaid to be Minkowski measurable, which necessarily means that the geometry of\n$\\mathcal{L}$ does not oscillate, and vice versa. {\\cite[1]{cdfs}}\n{\\cite{cizlm}}{\\cite{fsscd}}{\\cite[6.2]{fgnt}}\n\n\\subsubsection{The Geometric Zeta Function $\\zeta_{\\mathcal{L}} (s)$}\n\nThe geometric zeta function $\\zeta_{\\mathcal{L}} (s)$ of\n$\\mathcal{L}$ is the Dirichlet series\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\mathcal{L}} (s) & = \\sum_{j = 1}^{\\infty} \\ell_j^s\\\\\n & = s \\int_0^{\\infty} N_{\\mathcal{L}} (x) x^{- s - 1} \\mathrm{d} x\n \\end{array} \\label{geozeta}\n\\end{equation}\nwhich is holomorphic for $\\tmop{Re} (s) > D_{\\mathcal{L}}$. If\n$\\mathcal{L}$ is Minkowski measurable then $D_{\\mathcal{L}} \\in (0, 1)$ is the\nunique simple pole of $\\zeta_{\\mathcal{L}} (s)$ on the vertical line\n$\\tmop{Re} (s) = D_{\\mathcal{L}}$. 
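For the harmonic sawtooth string $\\mathcal{L}_w$ of the previous section, the counting function, total length, and the constant $C_{\\mathcal{L}}$ can be checked numerically. This is our own sketch; the function names and truncation bounds are arbitrary choices:

```python
import math

def ell(n):
    # lengths of the harmonic sawtooth string: ell_n = 1/(2 n (n + 1))
    return 1.0 / (2 * n * (n + 1))

def N_Lw(x):
    # closed-form geometric counting function of L_w:
    # floor(sqrt(2x + 1)/2 - 1/2)
    return math.floor(math.sqrt(2 * x + 1) / 2 - 0.5)

def N_direct(x, nmax=1000):
    # definition: #{n >= 1 : 1/ell_n <= x}
    return sum(1 for n in range(1, nmax + 1) if 1 / ell(n) <= x)

# C_w = lim N(x)/x^(1/2) should approach sqrt(2)/2 ~ 0.7071
C_estimate = N_Lw(1e12) / math.sqrt(1e12)
```

The truncated total length tends to $|\\mathcal{L}_w| = \\frac{1}{2}$, and `C_estimate` approaches $\\frac{\\sqrt{2}}{2}$, consistent with a Minkowski content of $2$ via the displayed formula.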
Assuming $\\zeta_{\\mathcal{L}} (s)$ has a\nmeromorphic extension to a neighborhood of $D_{\\mathcal{L}}$, then\n$\\zeta_{\\mathcal{L}} (s)$ has a simple pole at $s = D_{\\mathcal{L}}$ if\n\\begin{equation}\n N_{\\mathcal{L}} (s) = O (s^{D_{\\mathcal{L}}}) \\tmop{as} s \\rightarrow \\infty\n\\end{equation}\nor if the volume of the tubular neighborhoods satisfies\n\\begin{equation}\n V_{\\mathcal{L}} (\\varepsilon) = O (\\varepsilon^{1 - D_{\\mathcal{L}}})\n \\tmop{as} \\varepsilon \\rightarrow 0^+\n\\end{equation}\nIt is possible that the residue of $\\zeta_{\\mathcal{L}} (s)$ at $s =\nD_{\\mathcal{L}}$ is positive and finite\n\\begin{equation}\n 0 < \\lim_{s \\rightarrow D_{\\mathcal{L}}} (s - D_{\\mathcal{L}})\n \\zeta_{\\mathcal{L}} (s) < \\infty \\label{geozetares}\n\\end{equation}\neven if $N_{\\mathcal{L}} (s)$ is not of order $s^{D_{\\mathcal{L}}}$ as $s\n\\rightarrow \\infty$ and $V_{\\mathcal{L}} (\\varepsilon)$ is not of order\n$\\varepsilon^{1 - D_{\\mathcal{L}}}$; however, this does not contradict the\nMinkowski measurability of $\\mathcal{L}$.\n\n\\subsubsection{Complex Dimensions, Screens and Windows}\n\nThe set of visible complex dimensions of $\\mathcal{L}$, denoted by\n$\\mathcal{D}_{\\mathcal{L}} (W)$, is a discrete subset of $\\mathbbm{C}$\nconsisting of the poles of $\\{\\zeta_{\\mathcal{L}} (s) : s \\in W\\}$.\n\\begin{equation}\n \\begin{array}{ll}\n \\mathcal{D}_{\\mathcal{L}} (W) & =\\{w \\in W : \\zeta_{\\mathcal{L}}\n \\text{ has a pole at } w\\}\n \\end{array}\n\\end{equation}\nWhen $W$ is the entire complex plane, the set $\\mathcal{D}_{\\mathcal{L}}\n(\\mathbbm{C}) =\\mathcal{D}_{\\mathcal{L}}$ is simply called the set of complex\ndimensions of $\\mathcal{L}$. The presence of oscillations in $V (\\varepsilon)$\nimplies the presence of nonreal complex dimensions with $\\tmop{Re} (\\cdot) =\nD_{\\mathcal{L}}$ and vice versa. 
More generally, the complex dimensions of a\nfractal string $\\mathcal{L}$ describe its geometric and spectral oscillations.\n\n\\subsubsection{Frequencies of Fractal Strings and Spectral Zeta Functions}\n\nThe eigenvalues $\\lambda_n$ of the Dirichlet Laplacian $\\Delta u (x) = -\n\\frac{\\mathrm{d}^2}{\\mathrm{d} x^2} u (x)$ on a bounded open set $\\Omega \\subset\n\\mathbbm{R}$ correspond to the normalized frequencies $f_n =\n\\frac{\\sqrt{\\lambda_n}}{\\pi}$ of a fractal string. The frequencies of the unit\ninterval are the natural numbers $n \\in \\mathbbm{N}^{\\ast}$ and the\nfrequencies of an interval of length $\\ell$ are $n \\ell^{- 1}$. The\nfrequencies of $\\mathcal{L}$ are the numbers\n\\begin{equation}\n \\begin{array}{l}\n f_{k, j} = k \\ell_j^{- 1} \\forall \\text{ $k, j \\in \\mathbbm{N}^{\\ast}$}\n \\end{array}\n\\end{equation}\nThe spectral counting function $N_{v\\mathcal{L}} (x)$ counts the frequencies\nof $\\mathcal{L}$ with multiplicity\n\\begin{equation}\n \\begin{array}{ll}\n N_{v\\mathcal{L}} (x) & = \\sum_{k = 1}^{\\infty} N_{\\mathcal{L}} \\left(\n \\frac{x}{k} \\right)\\\\\n & = \\sum_{j = 1}^{\\infty} \\left\\lfloor x \\ell_j \\right\\rfloor\n \\end{array} \\label{spectralcount}\n\\end{equation}\nThe spectral zeta function $\\zeta_{\\upsilon \\mathcal{L}} (s)$ of $\\mathcal{L}$\nis connected to the Riemann zeta function (\\ref{zeta}) by\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\upsilon \\mathcal{L}} (s) & = \\sum_{k = 1}^{\\infty} \\sum_{j =\n 1}^{\\infty} k^{- s} \\ell^s_j\\\\\n & = \\zeta (s) \\sum_{j = 1}^{\\infty} \\ell_j^s\\\\\n & = \\zeta (s) \\zeta_{\\mathcal{L}} (s)\n \\end{array} \\label{spectralzeta}\n\\end{equation}\n{\\cite[1.1]{cdfs}}{\\cite[1.2.1]{gsfs}}\n\n\\subsubsection{Generalized Fractal Strings and Dirichlet Integrals}\n\nA generalized fractal string is a local positive or local complex measure $\\eta\n(x)$ on $(0, \\infty)$ such that\n\\begin{equation}\n \\int_0^{x_0} | \\eta (x) | \\mathrm{d} x = 0 
\\label{gfs}\n\\end{equation}\nfor some $x_0 > 0$. A local positive measure is a standard positive Borel\nmeasure $\\eta (J)$ on $(0, \\infty)$ where $J$ is the set of all bounded\nsubintervals of $(0, \\infty)$, in which case $\\eta (x) = | \\eta (x) |$. More\ngenerally, a measure $\\eta (x)$ is a local complex measure if $\\eta (A)$ is\nwell-defined for any subset $A \\subset [a, b]$, where $[a, b] \\subset (0,\n\\infty)$ is a bounded subset of the positive half-line, and the\nrestriction of $\\eta$ to the Borel subsets of $[a, b]$ is a complex measure on\n$[a, b]$ in the traditional sense. The geometric counting function of $\\eta\n(x)$ is defined as\n\\begin{equation}\n \\begin{array}{ll}\n N_{\\eta} (x) & = \\int_0^x \\eta (t) \\mathrm{d} t\n \\end{array} \\label{ggcf}\n\\end{equation}\nThe dimension $D_{\\eta}$ is the abscissa of convergence of the Dirichlet\nintegral\n\\begin{equation}\n \\zeta_{| \\eta |} (\\sigma) = \\int_0^{\\infty} x^{- \\sigma} | \\eta (x) | \\mathrm{d}\n x \\label{gd}\n\\end{equation}\nIn other terms, it is the smallest positive real $\\sigma$ such that the\nimproper Riemann--Lebesgue integral converges to a finite value. The geometric zeta\nfunction is defined as the Mellin transform\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta_{\\eta} (s) & = \\int_0^{\\infty} x^{- s} \\eta (x) \\mathrm{d} x\n \\end{array} \\label{ggz}\n\\end{equation}\nwhere $\\tmop{Re} (s) > D_{\\eta}$.\n\n\\subsection{Fractal Membranes and Spectral Partitions}\n\n\\subsubsection{Complex Dimensions of Dynamical Zeta Functions}\n\nThe fractal membrane $\\mathcal{T}_{\\mathcal{L}}$ associated with $\\mathcal{L}$ is the\nadelic product\n\\begin{equation}\n \\mathcal{T}_{\\mathcal{L}} = \\coprod_{j = 1}^{\\infty} \\mathcal{T}_j\n\\end{equation}\nwhere each $\\mathcal{T}_j$ is an interval $I_j$ of length $\\log (\\ell_j^{-\n1})^{- 1}$. To each $\\mathcal{T}_j$ is associated a Hilbert space\n$\\mathcal{H}_j = L^2 (I_j)$ of square integrable functions on $I_j$. 
The\nspectral partition function $Z_{\\mathcal{L}} (s)$ of $\\mathcal{L}$ is a Euler\nproduct expansion which has no zeros or poles in $\\tmop{Re} (s) > D_M\n(\\mathcal{L})$.\n\\begin{equation}\n \\begin{array}{ll}\n Z_{\\mathcal{L}} (s) & = \\prod_{j = 1}^{\\infty} \\frac{1}{1 - \\ell_j^s}\\\\\n & = \\prod_{j = 1}^{\\infty} Z_{\\mathcal{L}_j} (s)\n \\end{array}\n\\end{equation}\nwhere $D_M (\\mathcal{L})$ is the Minkowski dimension of $\\mathcal{L}$ and\n$Z_{\\mathcal{L}_j} (s) = \\frac{1}{1 - \\ell_j^s}$ is the $j$-th Euler factor,\nthe partition function of the $j$-th component of the fractal membrane.\n{\\cite[3.2.2]{fractalzetastrings}}\n\n\\subsubsection{Dynamical Zeta Functions of Fractal Membranes}\n\nThe dynamical zeta function of a fractal membrane $\\mathcal{L}$ is the\nnegative of the logarithmic derivative of the Zeta function associated with\n$\\mathcal{L}$.\n\\begin{equation}\n \\begin{array}{ll}\n Z_{\\mathcal{L}} (s) & = - \\frac{\\mathrm{d}}{\\mathrm{d} s} \\ln (\\zeta_{\\mathcal{L}}\n (s))\\\\\n & = - \\frac{\\frac{\\mathrm{d}}{\\mathrm{d} s} \\zeta_{\\mathcal{L}}\n (s)}{\\zeta_{\\mathcal{L}} (s)}\n \\end{array} \\label{dzfm}\n\\end{equation}\n\n\\section{Special Functions, Definitions, and Conventions}\n\n\\subsection{Special Functions}\n\n\\subsubsection{The Interval Indicator (Characteristic) Function $\\chi (x, I)$}\n\nThe (left-open, right-closed) interval indicator function is $\\chi (x, I)$\nwhere $I = (a, b]$\n\\begin{equation}\n \\begin{array}{ll}\n \\chi (x, I) & = \\left\\{ \\begin{array}{ll}\n 1 & x \\in I\\\\\n 0 & x \\not\\in I\n \\end{array} \\right.\\\\\n & = \\left\\{ \\begin{array}{ll}\n 1 & a < x \\leqslant b\\\\\n 0 & \\tmop{otherwise}\n \\end{array} \\right.\\\\\n & = \\theta (x - a) - \\theta (x - a) \\theta (x - b)\n \\end{array} \\text{} \\label{ii}\n\\end{equation}\nand $\\theta$ is the Heaviside unit step function, the derivative of which is\nthe Dirac delta function $\\delta$\n\\begin{equation}\n \\begin{array}{ll}\n 
\\text{$\\int_{- \\infty}^x \\delta (t) \\mathrm{d} t$} & = \\theta (x)\\\\\n \\theta (x) & = \\left\\{ \\begin{array}{ll}\n 0 & x < 0\\\\\n 1 & x \\geqslant 0\n \\end{array} \\right.\n \\end{array} \\label{deltastep}\n\\end{equation}\nThe point of discontinuity of $\\theta (x)$ has the limiting values\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{x \\rightarrow 0^-} \\theta (x) & = 0\\\\\n \\lim_{x \\rightarrow 0^+} \\theta (x) & = 1\n \\end{array}\n\\end{equation}\nthus the values of $\\chi (x, (a, b])$ on the boundary can be chosen according\nto which side the limit is regarded as being approached from.\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{x \\rightarrow a^-} \\text{$\\chi (x, (a, b])$} & = 0\\\\\n \\lim_{x \\rightarrow a^+} \\text{$\\chi (x, (a, b])$} & = 1 - \\theta (a -\n b)\\\\\n \\lim_{x \\rightarrow b^-} \\text{$\\chi (x, (a, b])$} & = \\theta (b - a)\\\\\n \\lim_{x \\rightarrow b^+} \\text{$\\chi (x, (a, b])$} & = 0\n \\end{array}\n\\end{equation}\n\n\\subsubsection{``Harmonic'' Intervals}\n\nLet the $n$-th harmonic (left-open, right-closed) interval be defined as\n\\begin{equation}\n \\begin{array}{ll}\n I^H_n & = \\left( \\frac{1}{n + 1}, \\frac{1}{n} \\right]\n \\end{array} \\label{hi}\n\\end{equation}\nthen its characteristic function is\n\\begin{equation}\n \\begin{array}{lll}\n \\chi (x, I_n^H) & & = \\theta \\left( x - \\frac{1}{n + 1} \\right) -\n \\theta \\left( x - \\frac{1}{n} \\right)\\\\\n & & = \\left\\{ \\begin{array}{ll}\n 1 & \\frac{1}{n + 1} < x \\leqslant \\frac{1}{n}\\\\\n 0 & \\tmop{otherwise}\n \\end{array} \\right.\n \\end{array} \\label{hii}\n\\end{equation}\nAs can be seen\n\\begin{equation}\n \\begin{array}{llll}\n \\bigcup_{n = 1}^{\\infty} I_n^H & = & \\bigcup_{n = 1}^{\\infty} \\left(\n \\frac{1}{n + 1}, \\frac{1}{n} \\right] & = (0, 1]\\\\\n \\sum_{n = 1}^{\\infty} \\chi (x, I_n^H) & = & \\sum_{n = 1}^{\\infty}\n \\chi 
\\left( x, \\left( \\frac{1}{n + 1}, \\frac{1}{n} \\right] \\right) & =\n \\chi (x, (0, 1])\n \\end{array}\n\\end{equation}\nThe substitution $n \\rightarrow \\left\\lfloor \\frac{1}{x} \\right\\rfloor$ can be\nmade in (\\ref{hii}) where it is seen that\n\\begin{equation}\n \\begin{array}{lll}\n \\text{$\\chi \\left( x, I^H_{\\left\\lfloor x^{- 1} \\right\\rfloor}\n \\right)$} & = \\theta \\left( \\frac{x \\left\\lfloor x^{- 1}\n \\right\\rfloor + x - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor\n + 1} \\right) - \\theta \\left( \\frac{x \\left\\lfloor x^{- 1}\n \\right\\rfloor - 1}{\\left\\lfloor x^{- 1} \\right\\rfloor} \\right) = 1 &\n \\forall x \\in (0, 1] \\label{hieye}\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Laplace Transform $L_a^b [f (x) ; x \\rightarrow s]$}\n\nThe Laplace transform {\\cite[1.5]{tlt}} is defined as\n\\begin{equation}\n \\begin{array}{ll}\n \\text{$L_a^b [f (x) ; x \\rightarrow s]$} & = \\int_a^b f (x) e^{- x\n s} \\mathrm{d} x\n \\end{array} \\label{laplace}\n\\end{equation}\nwhere the unilateral Laplace transform is over the interval $(a, b) = (0,\n\\infty)$ and the bilateral transform is over $(a, b) = (- \\infty, \\infty)$.\nWhen $(a, b)$ is not specified, it is assumed to range over the support of $f\n(x)$ if the support is an interval. If the support of $f (x)$ is not an\ninterval then $(a, b)$ must be specified. 
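The disjoint-cover identity and the floor substitution in (\ref{hieye}) are easy to spot-check numerically; a minimal Python sketch (the sampling range is an arbitrary assumption):

```python
import math
import random

def chi_harmonic(x, n):
    """Indicator of the left-open, right-closed harmonic interval (1/(n+1), 1/n]."""
    return 1 if 1.0 / (n + 1) < x <= 1.0 / n else 0

random.seed(0)
for _ in range(1000):
    x = random.uniform(0.01, 1.0)
    n = math.floor(1.0 / x)
    # x lands in the single harmonic interval indexed by floor(1/x) ...
    assert chi_harmonic(x, n) == 1
    # ... and in no other, so the indicators sum to chi(x, (0, 1]) = 1
    assert sum(chi_harmonic(x, m) for m in range(1, n + 2)) == 1
```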
Applying $L$ to the interval\nindicator function (\\ref{ii}) gives\n\\begin{equation}\n \\begin{array}{lll}\n \\text{$L_a^b [\\chi (x, (a, b]) ; x \\rightarrow s]$} & = \\int_a^b \\chi (x,\n (a, b]) e^{- x s} \\mathrm{d} x & \\\\\n & = \\int_a^b (\\theta (x - a) - \\theta (x - b) \\theta (x - a)) e^{- x s}\n \\mathrm{d} x & \\\\\n & = \\frac{e^{b s} - e^{a s}}{s e^{b s} e^{a s}} & \\\\\n & = - \\frac{(e^{a s} - e^{b s}) e^{- s (b + a)}}{s} & \n \\end{array} \\label{iilt}\n\\end{equation}\nThe limit at the singular point $s = 0$ is\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{s \\rightarrow 0} \\text{$L_a^b [\\chi (x, (a, b]) ; x \\rightarrow s]$}\n & = \\lim_{s \\rightarrow 0} - \\frac{(e^{a s} - e^{b s}) e^{- s (b +\n a)}}{s}\\\\\n & = b - a\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Mellin Transform $M_a^b [f (x) ; x \\rightarrow s]$}\n\nThe Mellin transform\n{\\cite[3.2]{ambi}}{\\cite[II.10.8]{mmp}}{\\cite[3.6]{piagm}} is defined as\n\\begin{equation}\n \\begin{array}{ll}\n \\text{$M_a^b [f (x) ; x \\rightarrow s]$} & = \\int_a^b f (x) x^{s - 1}\n \\mathrm{d} x\n \\end{array} \\label{mellin}\n\\end{equation}\nwhere the standard Mellin transform is over the interval $(a, b) = (0,\n\\infty)$. Again, as with the notation for the Laplace transform, the integral is\nover the support of $f (x)$ if the support is an interval and $(a, b)$ is not\nspecified; otherwise $(a, b)$ must be specified. 
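The closed form in (\ref{iilt}) (algebraically equal to $(e^{-as} - e^{-bs})/s$) can be spot-checked against direct numerical quadrature; a minimal Python sketch with arbitrary test values:

```python
import math

def laplace_indicator(a, b, s):
    """Closed form L[chi(x, (a, b]); x -> s] = (e^{-a s} - e^{-b s}) / s."""
    return (math.exp(-a * s) - math.exp(-b * s)) / s

def laplace_numeric(a, b, s, steps=100000):
    """Midpoint-rule approximation of the defining integral int_a^b e^{-x s} dx."""
    h = (b - a) / steps
    return h * sum(math.exp(-(a + (i + 0.5) * h) * s) for i in range(steps))

a, b, s = 0.5, 2.0, 1.3
assert abs(laplace_indicator(a, b, s) - laplace_numeric(a, b, s)) < 1e-8
# near the removable singularity s = 0 the transform approaches b - a
assert abs(laplace_indicator(a, b, 1e-9) - (b - a)) < 1e-6
```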
Applying $M$ to the interval\nindicator function (\\ref{ii}) gives\n\\begin{equation}\n \\begin{array}{lll}\n & M_a^b [\\chi (x, (a, b]) ; x \\rightarrow s] & = \\int_a^b \\chi (x, (a,\n b]) x^{s - 1} \\mathrm{d} x\\\\\n & & = \\int_a^b (\\theta (x - a) - \\theta (x - b) \\theta (x - a)) x^{s -\n 1} \\mathrm{d} x\\\\\n & & = \\frac{b^s - a^s}{s}\n \\end{array} \\label{iimt}\n\\end{equation}\nThe limit at the singular point $s = 0$ is\n\\begin{equation}\n \\begin{array}{ll}\n M_a^b \\left[ \\chi (x, (a, b]) ; x \\rightarrow 0 \\right] & = M \\left[ \\chi\n (x, (a, b]) ; x \\rightarrow 0 \\right]\\\\\n & = \\lim_{s \\rightarrow 0} M_a^b [\\chi (x, (a, b]) ; x \\rightarrow s]\\\\\n & = \\lim_{s \\rightarrow 0} \\frac{b^s - a^s}{s}\\\\\n & = \\ln (b) - \\ln (a)\n \\end{array}\n\\end{equation}\nThe Mellin transform has several identities {\\cite[3.1.2]{ambi}}, including\nbut not limited to\n\\begin{equation}\n \\begin{array}{ll}\n M [f (\\alpha x) ; x \\rightarrow s] & = \\alpha^{- s} M [f (x) ; x\n \\rightarrow s]\\\\\n M [x^{\\alpha} f (x) ; x \\rightarrow s] & = M [f (x) ; x \\rightarrow s +\n \\alpha]\\\\\n M [f (x^{\\alpha}) ; x \\rightarrow s] & = \\frac{1}{\\alpha} M \\left[ f (x) ; x\n \\rightarrow \\frac{s}{\\alpha} \\right]\\\\\n M [f (x^{- \\alpha}) ; x \\rightarrow s] & = \\frac{1}{\\alpha} M \\left[ f (x) ; x\n \\rightarrow - \\frac{s}{\\alpha} \\right]\\\\\n M [x^{\\alpha} f (x^{\\mu}) ; x \\rightarrow s] & = \\frac{1}{\\mu} M \\left[ f\n (x) ; x \\rightarrow \\frac{s + \\alpha}{\\mu} \\right]\\\\\n M [x^{\\alpha} f (x^{- \\mu}) ; x \\rightarrow s] & = \\frac{1}{\\mu} M \\left[\n f (x) ; x \\rightarrow - \\frac{s + \\alpha}{\\mu} \\right]\\\\\n M [\\ln (x)^n f (x) ; x \\rightarrow s] & = \\frac{\\mathrm{d}^n}{\\mathrm{d} s^n} M\n \\left[ f (x) ; x \\rightarrow s \\right]\n \\end{array}\n\\end{equation}\nwhere $\\alpha > 0$, $\\mu > 0$, and $n \\in \\mathbbm{N}$. 
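Both the closed form (\ref{iimt}) and, e.g., the scaling identity $M [f (\alpha x)] = \alpha^{-s} M [f (x)]$ can be verified numerically; a Python sketch with arbitrary test values:

```python
import math

def mellin_indicator(a, b, s):
    """Closed form M[chi(x, (a, b]); x -> s] = (b^s - a^s)/s for 0 < a < b."""
    return (b ** s - a ** s) / s

def mellin_numeric(f, s, lo, hi, steps=100000):
    """Midpoint-rule approximation of int_lo^hi f(x) x^{s-1} dx."""
    h = (hi - lo) / steps
    mids = (lo + (i + 0.5) * h for i in range(steps))
    return h * sum(f(x) * x ** (s - 1) for x in mids)

a, b, s, alpha = 0.5, 2.0, 1.7, 3.0
chi = lambda x: 1.0 if a < x <= b else 0.0
assert abs(mellin_numeric(chi, s, a, b) - mellin_indicator(a, b, s)) < 1e-7
# scaling identity: chi(alpha * x) is the indicator of (a/alpha, b/alpha]
scaled = mellin_numeric(lambda x: chi(alpha * x), s, a / alpha, b / alpha)
assert abs(scaled - alpha ** (-s) * mellin_indicator(a, b, s)) < 1e-7
# the limit as s -> 0 recovers ln(b) - ln(a)
assert abs(mellin_indicator(a, b, 1e-9) - math.log(b / a)) < 1e-6
```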
The Mellin transform\nof the harmonic interval indicator function (\\ref{hii}) is\n\\begin{equation}\n \\begin{array}{lll}\n M \\left[ \\chi \\left( x, I^H_n \\right) ; x \\rightarrow s \\right] & & =\n \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\chi \\left( x, \\left( \\frac{1}{n +\n 1}, \\frac{1}{n} \\right] \\right) x^{s - 1} \\mathrm{d} x\\\\\n & & = \\int_{\\frac{1}{n + 1}}^{\\frac{1}{n}} \\left( \\theta \\left( x -\n \\frac{1}{n + 1} \\right) - \\theta \\left( x - \\frac{1}{n} \\right) \\right)\n x^{s - 1} \\mathrm{d} x\\\\\n & & = \\frac{n^{- s} - (n + 1)^{- s}}{s}\n \\end{array} \\label{himt}\n\\end{equation}\nwhich has the limit\n\\begin{equation}\n \\begin{array}{ll}\n M \\left[ \\chi \\left( x, I^H_n \\right) ; x \\rightarrow 0 \\right] & =\n \\lim_{s \\rightarrow 0} M \\left[ \\chi \\left( x, I^H_n \\right) ; x\n \\rightarrow s \\right]\\\\\n & = \\lim_{s \\rightarrow 0} \\frac{n^{- s} - (n + 1)^{- s}}{s}\\\\\n & = \\ln (n + 1) - \\ln (n)\n \\end{array} \\label{himtl}\n\\end{equation}\nThe Mellin and bilateral Laplace transforms are related by the change of\nvariables $x \\rightarrow - \\ln (y)$ resulting in the identity\n{\\cite[3.1.1]{ambi}}\n\\begin{equation}\n \\begin{array}{lll}\n M_0^{\\infty} [f (- \\ln (x)) ; x \\rightarrow s] & = L_{- \\infty}^{+ \\infty}\n [f (y) ; y \\rightarrow s] & \\\\\n \\int_0^{\\infty} f (- \\ln (x)) x^{s - 1} \\mathrm{d} x & = \\int_{- \\infty}^{+\n \\infty} f (y) e^{- y s} \\mathrm{d} y & \n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Lambert W Function $W (k, x)$}\n\nThe Lambert W function {\\cite{lambertw}}{\\cite{lambertwss}} is the inverse of\n$x e^x$ given by\n\\begin{equation}\n \\begin{array}{ll}\n W (z) & =\\{x : x e^x = z\\}\\\\\n & = W (0, z)\\\\\n & = 1 + (\\ln (z) - 1) \\exp \\left( \\frac{i}{2 \\pi} \\int_0^{\\infty}\n \\frac{1}{x + 1} \\ln \\left( \\frac{x - i \\pi - \\ln (x) + \\ln (z)}{x + i \\pi\n - \\ln (x) + \\ln (z)} \\right) \\hspace{-0.25em} \\mathrm{d} x \\right)\\\\\n & = \\sum_{k = 
1}^{\\infty} \\frac{(- k)^{k - 1} z^k}{k!}\n \\end{array} \\label{linv}\n\\end{equation}\nwhere $W (a, z) \\forall a \\in \\mathbbm{Z}, z \\not\\in \\{0, - e^{- 1} \\}$ is\n\\begin{equation}\n \\begin{array}{ll}\n W (a, z) & = 1 + (2 i \\pi a + \\ln (z) - 1) \\exp \\left( \\frac{i}{2 \\pi}\n \\int_0^{\\infty} \\frac{1}{x + 1} \\ln \\left( \\frac{x + \\left( 2\n \\hspace{0.25em} a - 1 \\right) i \\pi - \\ln \\left( x \\right) + \\ln \\left( z\n \\right)}{x + \\left( 2 \\hspace{0.25em} a + 1 \\right) i \\pi - \\ln \\left( x\n \\right) + \\ln \\left( z \\right)} \\right) \\hspace{-0.25em} \\mathrm{d} x \\right)\n \\end{array} \\label{lambertw}\n\\end{equation}\nA generalization of (\\ref{linv}) is solved by\n\\begin{equation}\n \\begin{array}{ll}\n \\{x : x b^x = z\\} & = \\frac{W (\\ln (b) z)}{\\ln (b)}\n \\end{array}\n\\end{equation}\nThe W function satisfies several identities\n\\begin{equation}\n \\begin{array}{lll}\n W (z) e^{W (z)} & = z & \\\\\n W (z \\ln (z)) & = \\ln (z) & \\forall z \\geqslant e^{- 1}\\\\\n e^{n W (z)} & = z^n W (z)^{- n} & \\\\\n \\ln (W (n, z)) & = \\ln (z) - W (n, z) + 2 i \\pi n & \\\\\n W \\left( - \\frac{\\ln (z)}{z} \\right) & = - \\ln (z) & \\forall z \\in (0,\n e]\\\\\n \\frac{W (- \\ln (z))}{- \\ln (z)} & = z^{z^{z^{z^{.^{.^.}}}}} & \n \\end{array}\n\\end{equation}\nwhere $n \\in \\mathbbm{Z}$. 
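These identities are easy to spot-check numerically; the sketch below implements the principal branch for real arguments by Newton iteration (the iteration and its starting guesses are ad hoc assumptions, not the integral representation above):

```python
import math

def lambert_w(z, tol=1e-14):
    """Principal branch W(z) for real z > -1/e, by Newton iteration on w*e^w = z."""
    w = math.log(1.0 + z) if z > -0.3 else -0.5   # crude starting guess (assumption)
    for _ in range(200):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol:
            break
    return w

# defining identity W(z) e^{W(z)} = z
for z in (0.25, 1.0, math.e, 10.0):
    w = lambert_w(z)
    assert abs(w * math.exp(w) - z) < 1e-10
assert abs(lambert_w(math.e) - 1.0) < 1e-12                    # W(e) = 1
z = 2.0
assert abs(lambert_w(z * math.log(z)) - math.log(z)) < 1e-10   # W(z ln z) = ln z
z = 1.5
assert abs(lambert_w(-math.log(z) / z) + math.log(z)) < 1e-10  # W(-ln z / z) = -ln z
# the series sum_{k>=1} (-k)^{k-1} z^k / k! converges for |z| < 1/e
z = 0.2
series = sum((-k) ** (k - 1) * z ** k / math.factorial(k) for k in range(1, 60))
assert abs(series - lambert_w(z)) < 1e-10
```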
Some special values are\n\\begin{equation}\n \\begin{array}{lll}\n W \\left( - 1, - e^{- 1} \\right) & = - 1 & \\\\\n W (- e^{- 1}) & = - 1 & \\\\\n W (e) & = 1 & \\\\\n W (0) & = 0 & \\\\\n W (\\infty) & = \\infty & \\\\\n W (- \\infty) & = \\infty + i \\pi & \\\\\n W \\left( - \\frac{\\pi}{2} \\right) & = \\frac{i \\pi}{2} & \\\\\n W \\left( - \\ln ( \\sqrt{2}) \\right) & = - \\ln (2) & \\\\\n W \\left( - 1, - \\ln ( \\sqrt{2}) \\right) & = - 2 \\ln (2) & \n \\end{array}\n\\end{equation}\nWe also have the limit\n\\begin{equation}\n \\begin{array}{ll}\n \\lim_{a \\rightarrow \\pm \\infty} \\frac{W (a, x)}{a} & = 2 \\pi i\n \\end{array}\n\\end{equation}\nand differential\n\\begin{equation}\n \\begin{array}{lll}\n \\frac{\\mathrm{d}}{\\mathrm{d} z} W (a, f (z)) & & = \\frac{W (a, f (z))\n \\frac{\\mathrm{d}}{\\mathrm{d} z} f (z)}{f (z) (1 + W (a, f (z)))}\n \\end{array}\n\\end{equation}\nas well as the obvious integral\n\\begin{equation}\n \\begin{array}{ll}\n \\int_0^1 W \\left( - \\frac{\\ln (x)}{x} \\right) \\mathrm{d} x & = \\int_0^1 - \\ln\n (x) \\mathrm{d} x = 1\n \\end{array}\n\\end{equation}\nLet us define, for the sake of brevity, the function\n\n\\begin{equation}\n \\begin{array}{ll}\n W_{\\ln} (z) & = W \\left( - 1, - \\frac{\\ln (z)}{z} \\right)\\\\\n & = 1 + \\left( \\ln \\left( - \\frac{\\ln (z)}{z} \\right) - 1 - 2 \\pi i\n \\right) \\exp \\left( \\frac{i}{2 \\pi} \\int_0^{\\infty} \\frac{1}{x + 1} \\ln\n \\left( \\frac{x - 3 i \\pi - \\ln \\left( x \\right) + \\ln \\left( - \\frac{\\ln\n (z)}{z} \\right)}{x - i \\pi - \\ln \\left( x \\right) + \\ln \\left( - \\frac{\\ln\n (z)}{z} \\right)} \\right) \\hspace{-0.25em} \\mathrm{d} x \\right)\n \\end{array}\n\\end{equation}\n\n\\begin{figure}[h]\n \\resizebox{15cm}{12cm}{\\includegraphics{lw1.eps}}\n \\caption{$W \\left( - \\frac{\\ln (x)}{x} \\right) = - \\ln (x)$ and $W_{\\ln} (x)\n = W \\left( - 1, - \\frac{\\ln (x)}{x} \\right)$}\n\\end{figure}\n\nThen we have the limits\n\\begin{equation}\n 
\\begin{array}{ll}\n \\lim_{x \\rightarrow - \\infty} W_{\\ln} (x) & = 0\\\\\n \\lim_{x \\rightarrow + \\infty} W_{\\ln} (x) & = - \\infty\n \\end{array}\n\\end{equation}\nand\n\\begin{equation}\n \\tmop{Im} \\left( W_{\\ln} (x) \\right) = \\left\\{ \\begin{array}{ll}\n - \\pi & - \\infty < x < 0\\\\\n \\ldots & 0 \\leqslant x \\leqslant 1\\\\\n 0 & 1 < x < \\infty\n \\end{array} \\right.\n\\end{equation}\n\\begin{equation}\n \\begin{array}{lll}\n W_{\\ln} (x) & = - \\ln (x) & \\forall x \\not\\in [0, e]\n \\end{array}\n\\end{equation}\nThe root of $\\tmop{Re} \\left( W_{\\ln} (x) \\right)$ is given by\n\\begin{equation}\n \\begin{array}{ll}\n \\left\\{ x : \\tmop{Re} \\left( W_{\\ln} (x) \\right) = 0 \\right\\} & =\\{- x :\n \\left( x^2 \\right)^{\\frac{1}{x}} = e^{3 \\pi} \\}\\\\\n & = \\frac{2}{3 \\pi} W \\left( \\frac{3}{2} \\pi \\right)\\\\\n & \\cong 0.27441063190284810044 \\ldots\n \\end{array} \\label{wlrr}\n\\end{equation}\nwhere the imaginary part of the value at the root of the real part of $W_{\\ln}\n(z)$ is\n\\begin{equation}\n \\begin{array}{ll}\n W_{\\ln} \\left( \\frac{2}{3 \\pi} W \\left( \\frac{3}{2} \\pi \\right) \\right) &\n = W \\left( - 1, - \\frac{\\ln \\left( \\frac{2}{3 \\pi} W \\left( \\frac{3}{2}\n \\pi \\right) \\right)}{\\frac{2}{3 \\pi} W \\left( \\frac{3}{2} \\pi \\right)}\n \\right)\\\\\n & = W \\left( - 1, \\frac{3 \\pi}{2} \\right)\\\\\n & = \\frac{3 \\pi i}{2}\\\\\n & \\cong i 4.712388980384689857 \\ldots\n \\end{array}\n\\end{equation}\n\n\\subsubsection{The Lerch Transcendent $\\Phi (z, a, v)$}\n\nThe Lerch Transcendent {\\cite[1.11]{htf1}} is defined by\n\\begin{equation}\n \\begin{array}{lll}\n \\Phi (z, a, v) & = \\sum_{n = 0}^{\\infty} \\frac{z^n}{(v + n)^a} & \\forall\n \\{|z| < 1\\} \\tmop{or} \\{|z| = 1 \\tmop{and} \\tmop{Re} (a) > 1\\}\n \\label{lerch}\n \\end{array}\n\\end{equation}\nThe Riemann zeta function is the special case\n\\begin{equation}\n \\begin{array}{ll}\n \\zeta (s) & = \\Phi (1, s, 1) = \\sum_{n = 
0}^{\\infty} \\frac{1}{(1 + n)^s}\n \\end{array}\n\\end{equation}\n\n\\subsection{Applications of w(x)}\n\n\\subsubsection{Expansion of $\\gamma$}\n\nConsider Euler's constant $\\gamma = 0.577215664901533 \\ldots$ (\\ref{gamma})\n\\begin{equation}\n \\begin{array}{ll}\n w^n (\\gamma) & = a_n - b_n \\gamma\n \\end{array}\n\\end{equation}\nwhereupon iteration we see that\n\\begin{equation}\n \\begin{array}{ll}\n \\left(\\begin{array}{c}\n n\\\\\n - a_n\\\\\n - b_n\n \\end{array}\\right) & \\text{$= \\left(\\begin{array}{cccccccccccc}\n 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \\ldots\\\\\n 0 & 1 & 48 & 290 & 581 & 1163 & 2327 & 13964 & 7492468716 & 14984937433\n & 1078915495184 & \\ldots\\\\\n 1 & 2 & 84 & 504 & 1008 & 2016 & 4032 & 24192 & 12980362752 &\n 25960725504 & 1869172236288 & \\ldots\n \\end{array}\\right)$}\n \\end{array}\n\\end{equation}\n\n\\subsection{Conventions and Symbols}\n\nMany of these symbols are from {\\cite[p491]{fractalzetastrings}}.\n\\begin{equation}\n \\begin{array}{ll}\n i & \\sqrt{- 1}\\\\\n \\mathbbm{R} & \\{x : - \\infty < x < \\infty\\}\\\\\n \\bar{\\mathbbm{R}} & \\{x : - \\infty \\leqslant x \\leqslant \\infty\\}\\\\\n \\mathbbm{R}^+ & \\{x : 0 \\leqslant x < \\infty\\}\\\\\n \\mathbbm{R}^d & \\{x_1 \\ldots x_d : - \\infty < x_i < \\infty\\}\\\\\n \\mathbbm{C} & \\{x + i y : x, y \\in \\mathbbm{R}\\}\\\\\n \\mathbbm{Z} & \\{\\ldots, - 2, - 1, 0, 1, 2, \\ldots\\}\\\\\n \\mathbbm{N} & \\{0, 1, 2, 3, \\ldots .\\}\\\\\n \\mathbbm{N}^{\\ast} & \\{1, 2, 3, \\ldots .\\}\\\\\n \\text{$\\mathbbm{H}$} & \\left\\{ 0, \\frac{1}{n} : n \\in \\mathbbm{Z}\n \\right\\}\\\\\n f (x) = O (g (x)) & \\frac{f (x)}{g (x)} < \\infty\\\\\n f (x) = o (g (x)) & \\lim_{x \\rightarrow \\infty} \\frac{f (x)}{g (x)} = 0\\\\\n f (x) \\asymp g (x) & \\left\\{ a \\leqslant \\frac{f (x)}{g (x)} \\leqslant b :\n \\{a, b\\}> 0 \\right\\}\\\\\n \\#A & \\tmop{numbers} \\tmop{of} \\tmop{elements} \\tmop{in} \\tmop{the}\n \\tmop{finite} \\tmop{set} A\\\\\n |A |_d & d 
\\text{-dimensional} \\tmop{Lebesgue} \\tmop{measure}\n (\\tmop{volume}) \\tmop{of} A \\subseteq \\mathbbm{R}^d\\\\\n \\text{$d (x, A)$} & \\{\\min (|x - y|) : y \\in A\\} \\tmop{Euclidean}\n \\tmop{distance} \\tmop{between} x \\tmop{and} \\tmop{the} \\tmop{nearest}\n \\tmop{point} \\tmop{of} A\\\\\n \\exp (x) & \\tmop{exponential} e^x = \\sum_{n = 0}^{\\infty} \\frac{x^n}{n!}\\\\\n \\underset{}{\\underset{x = y}{\\tmop{Res}} (f (x))} & \\tmop{complex}\n \\text{residue of $f (x)$ at $x = y$}\\\\\n \\left\\lfloor x \\right\\rfloor & \\tmop{floor}, \\tmop{the} \\tmop{greatest}\n \\tmop{integer} \\leqslant x\\\\\n \\{x\\} & x - \\left\\lfloor x \\right\\rfloor, \\tmop{the} \\tmop{fractional}\n \\tmop{part} \\tmop{of} x\\\\\n \\text{$\\bar{x}$} & \\tmop{complex} \\tmop{conjugate}, \\tmop{Re} (x) - i\n \\tmop{Im} (x)\\\\\n \\tmop{Fix}_f^n & \\text{$n$-th} \\tmop{fixed} \\tmop{point} \\tmop{of}\n \\tmop{the} \\tmop{map} f (x), \\text{$n$-th} \\tmop{solution} \\tmop{to} f (x)\n = x\\\\\n p_k & \\text{$k$-th} \\tmop{prime} \\tmop{number}\\\\\n \\ln_b (a) & \\frac{\\ln (a)}{\\ln (b)}\n \\end{array} \\label{notation}\n\\end{equation}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec-introduction}\n\nPooling samples in biomedical studies has now become a frequent\npractice among many researchers. For example, more than $15\\%$ of\nthe data sets deposited in the Gene Expression Omnibus Database\ninvolve pooled RNA samples \\citep{Kendziorski-etal-PNAS2005}. 
The\npractice of pooling biological samples though is not a new\nphenomenon, as it can be traced back at least to 1940s\n\\citep{Dorfman1943} and has been used in different application\nareas \\citep{Gastwirth2000}, e.g., for the detection of certain\nmedical conditions and estimation of prevalence in a population.\nIn the context of detecting differential gene expressions using\nmicroarrays, divergent views on the wisdom of pooling samples can\nbe found in the literature\n\\citep{Agrawal-etal-JNatCancerInst2002,Affymetrix2004,\nShih-etal-Bioinformatics2004, Churchill&Oliver-NatureGenetics2001,\nPeng-etal-BMCBioinformatics2003,Jolly-etal-PhysiolGenomics2005}.\nOne of the arguments supporting the practice of pooling biological\nsamples is that biological variation can be reduced by pooling RNA\nsamples in microarray\nexperiments\\citep{Churchill&Oliver-NatureGenetics2001}. As more\ncarefully described by Kendziorski {\\em et al}\n\\citep{Kendziorski-etal-PNAS2005}, pooling can reduce the effects\nof biological variation, but not the biological variation itself.\nAnother argument in support of pooling samples in microarray\nexperiments is that it reduces financial cost. However, cost\nreduction is meaningful only if statistical equivalence between\nthe pooled and the non-pooled experimental setups is maintained.\nHere we address this issue and present formulas to determine the\nconditions under which pooled and non-pooled designs are\nstatistically equivalent.\n\nTo compare experimental designs with and without sample pooling\nthe two designs must have something in common that can be\nmeasured, e.g., using the same or equivalent amount of resources,\nor, yielding the same level of detection power. 
Kendziorski {\\em\net al}\\citep{Kendziorski-etal-Biostatistics2003} used the width of\nthe 95\\% confidence interval for gene expression to compare\ndifferent experimental designs with and without sample pooling.\nThe criterion was that the narrower the confidence interval, the\nmore accurate the results from the experimental design. In a\ncomparative study where two groups of biological subjects are\ncompared the common goal of the different experimental designs is\nto detect a change between the two groups with a given power at a\ngiven false positive rate, as adopted in\n\\citep{Shih-etal-Bioinformatics2004}. We shall use the latter\nmethod to compare different designs. So in this work statistical\nequivalence means that the designs have the same statistical power\nat the same level of significance. Therefore the more appropriate\nexperimental design will be the one which uses less resources to\nachieve this statistical equivalence.\n\nThe basic assumption underlying sample pooling is biological\naveraging; that the measure of interest taken on the pool of\nsamples is equal to the average of the same measure taken on each\nof the individual samples which contributed to the pool. For\nexample in the situation of a microarray experiment, if $r$\nindividual samples contribute equally to a pool, and the\nconcentrations of a gene's mRNA transcripts for the $r$ samples\nare denoted by $T_i$ with $i=1, 2, \\cdots, r$ indexing the\nindividual samples, the assumption of biological averaging says\nthat the concentration of this gene's mRNA transcripts in the pool\nis $T=1\/r\\sum_{i=1}^rT_i$. However, for microarray experiment\nthere is some debate on whether the basic assumption of pooling\nholds. Kendziorski {\\em et al}\n\\citep{Kendziorski-etal-Biostatistics2003,Kendziorski-etal-PNAS2005}\nargue that there is limited support for this assumption. 
Here we\ndo not seek to enter into this debate but rather take the\nassumption of biological averaging as valid, or at least\napproximately so, so that we are in a position to determine\nwhether pooling samples is financially beneficial or not. The\nvalidity of biological averaging makes it possible (or easier) to\nderive a neat theoretical formulation. On a practical level,\nthough, the requirement for the validity of this assumption may\nnot be as stringent as in a theoretical formulation. For\ninstance, in \\citep{Kendziorski-etal-PNAS2005} it was shown that\neven when biological averaging does not hold, pooling can be\nuseful and inferences regarding differential gene expression are\nnot adversely affected by pooling.\n\n\nOne situation where there is little alternative but to pool\nbiological samples is where there is an insufficient amount of RNA\nfrom each individual biological subject to perform a single\nmicroarray hybridization. RNA amplification may be a possible way\nof obtaining more RNA, but may not be practically feasible when\nmany individual biological subjects are involved, as in the case of\n\\citep{Jin-etal2001}. In such a circumstance, pooling samples is\njustified by the lack of alternatives and will not be considered\nfurther here. Similarly, we will not consider here the case where\nall the biological samples of the same group were pooled together,\nand multiple technical replicate measurements were carried out on\nthe sample pool. 
This is sometimes seen in the literature\n\\citep{Muckenthaler-etal2003}, but such an experimental design\nleaves no degree of freedom to estimate the biological variance.\nThus valid inferences about the differences between the two\npopulations of biological subjects under study cannot be made.\nHere we only consider situations other than the above two, and\nwhere pooling may reduce the overall costs of the experiments.\n\n\n\\section{A general formalism}\n\\label{sec-formalism}\n\nFor every comparative study, there is at least one measurable\nquantity which is the quantity of interest. The goal of the study\nis to deduce from the data collected if there is any difference\nbetween the means of the two populations. As measuring all the\nbiological subjects in the two populations is rarely possible, in most\nsituations representatives from a population are randomly selected\nand measurements made on these. These are then taken to infer the\nproperties of the population.\n\nLet $X$ be the measurable quantity that is being determined in\nthe experiment, e.g., the expression level of a gene. In the case\nof a one-channel microarray, $X$ could denote the logarithm (most\ncommonly base 2 is used) of fluorescence intensity; or the\nlogarithm of the fluorescence ratio in the case of a two-channel\nmicroarray. Let $x^c_i$ denote the value of $X$ for an individual\nsubject $i$ in the control population (c), and $x^t_j$ that of the\nindividual subject $j$ in the treatment population (t). We assume\nthat the $x^c_i$s for all individuals in the control population are\nindependently and normally distributed with a mean $\\mu_c$ and a\nvariance $\\sigma _c^2$, denoted by $x^c_i \\sim N(\\mu_c,\\sigma\n_c^2)$ for all $i$. Similarly, $x^t_j \\sim N(\\mu_t,\\sigma _t^2)$\nfor all $j$.\n\n\n\\subsection{A general experimental setup}\nFor a general experimental setup, individual subjects from both\npopulations are randomly selected and tissue samples collected\nfrom each. 
Tissue sample pools are made by pooling a given number\n$r$ of randomly selected tissue samples (of the same population)\ntogether. Note that to make $n$ pools we need to have selected\n$nr$ individual subjects from the population. $m$ measurements are\nthen made on each pool of tissue samples. So $m$ is the number of\ntechnical replications of measurement on each pool. Notice that by\nintroducing two parameters $r$ and $m$ a general and flexible\nexperimental setup has been created. For instance, if we set\n$r=1$, the experiment would be equivalent to no pooling of tissue\nsamples. And if we set $m=1$ there is no technical replication.\nUnder the basic assumption of biological averaging, the result of\npooling $r$ tissue samples in equal proportions together is that\nthe value of $X$ for the pool is the average of those subjects\nwhich formed this pool,\n\\begin{equation}\n\\tilde{x}=\\frac{1}{r}\\sum _{i=1}^{r}x_i.\n\\end{equation}\nIt follows that $\\tilde{x} \\sim N(\\mu_c, \\sigma_c ^2\/r)$ for a\npool from the control population, or $\\tilde{x} \\sim N(\\mu_t,\n\\sigma_t ^2\/r)$ for a pool from the treated population. Note that\nin this paper we shall only discuss pooling samples with equal\nindividual contributions. While pools formed by unequal\ncontributions from individual samples are possible, such a pooled\nexperimental design is generally less effective than equal\npooling, as already shown by Peng {\\em et al}\n\\citep{Peng-etal-BMCBioinformatics2003} with their simulated\nresults.\n\nWhen we take a measurement on a pool $p$, the measured value is\n\\begin{equation}\ny_{p,k}=\\tilde{x}_p+\\epsilon _k,\n\\end{equation}\nwhere $p$ indexes pools, $k$ indexes measurements, and $\\epsilon\n_k$ is a random error term assumed to be independently and\nnormally distributed as $\\epsilon _k \\sim\nN(0,\\sigma^2_{\\epsilon})$. 
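Under the biological-averaging assumption, the model $y_{p,k} = \tilde{x}_p + \epsilon_k$ and the resulting variance reduction from pooling can be illustrated by simulation; all parameter values below are arbitrary assumptions:

```python
import random
import statistics

random.seed(42)
mu, sigma, sigma_eps = 5.0, 2.0, 0.5   # illustrative values, not from the text
r, m, n_pools = 4, 2, 20000

pool_means = []
for _ in range(n_pools):
    # biological averaging: the pool's true value is the mean of r subject values
    x_tilde = statistics.fmean(random.gauss(mu, sigma) for _ in range(r))
    # m technical replicate measurements y_{p,k} = x_tilde + eps_k, then averaged
    reps = [x_tilde + random.gauss(0.0, sigma_eps) for _ in range(m)]
    pool_means.append(statistics.fmean(reps))

# the averaged measurement on a single pool has variance sigma^2/r + sigma_eps^2/m
expected = sigma ** 2 / r + sigma_eps ** 2 / m
assert abs(statistics.fmean(pool_means) - mu) < 0.05
assert abs(statistics.variance(pool_means) - expected) < 0.06
```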
Hereafter $\sigma^2_{\epsilon}$ will be
referred to as the technical variance, $\sigma^2_c$ the biological
variance of the control population, and $\sigma^2_t$ the
biological variance of the treatment population.

The output of the experiment is the set of measurements on the two
groups of pools. For the control group, we have $y^c_{p,k}$ for
$p=1,\dots, n_{c}$ and $k=1,\dots, m$; for the treatment
group, we have $y^t_{p,k}$ for $p=1,\dots, n_{t}$ and $k=1,\dots,
m$. Here $n_{c}$ and $n_{t}$ are the numbers of pools prepared for
the control and treatment populations respectively. Our task is to
infer population properties from these measured data. In
particular, we want to know whether there is any difference
between the two population means $\mu_c$ and $\mu_t$. It can be
shown that
\begin{equation}
\overline{Y}^c=\frac{1}{mn_{c}}\sum_{p=1}^{n_c}\sum_{k=1}^my^c_{p,k}
\label{eq-Y^c}
\end{equation}
is an unbiased estimator of $\mu_c$, with a variance
\begin{equation}
\frac{1}{n_c}\left(\frac{\sigma^2_c}{r}+\frac{\sigma^2_{\epsilon}}{m}\right),
\end{equation}
and similarly,
\begin{equation}
\overline{Y}^t=\frac{1}{mn_{t}}\sum_{p=1}^{n_t}\sum_{k=1}^my^t_{p,k}
\label{eq-Y^t}
\end{equation}
is an unbiased estimator of $\mu_t$, with a variance
\begin{equation}
\frac{1}{n_t}\left(\frac{\sigma^2_t}{r}+\frac{\sigma^2_{\epsilon}}{m}\right).
\end{equation}
If we make the additional assumption that the variances of the two
populations of biological subjects are the same, i.e.,
$\sigma^2_c=\sigma^2_t=\sigma^2$, then the difference between
the estimators of Eqs.~(\ref{eq-Y^t}) and (\ref{eq-Y^c}),
$D=\overline{Y}^t-\overline{Y}^c$, is an unbiased estimator of
$\mu=\mu_t-\mu_c$ with a variance
\begin{equation}
\sigma^2_D=\left(\frac{1}{n_c}+\frac{1}{n_t}\right)
\left(\frac{\sigma^2}{r}+\frac{\sigma^2_{\epsilon}}{m}\right).
\label{eq-var(D)}
\end{equation}
The factor
$(\sigma^2/r+\sigma^2_{\epsilon}/m)$ in
Eq.~(\ref{eq-var(D)}) can be estimated without bias by
\begin{eqnarray}
s^2_p=\frac{1}{n_c+n_t-2}
\sum_{p=1}^{n_c}\left(\frac{1}{m}\sum_{k=1}^my^c_{p,k}-\overline{Y}^c\right)^2 \nonumber \\
+\frac{1}{n_c+n_t-2}
\sum_{p=1}^{n_t}\left(\frac{1}{m}\sum_{k=1}^my^t_{p,k}-\overline{Y}^t\right)^2.
\end{eqnarray}
It is then clear that
\begin{equation}
t=\frac{(\overline{Y}^t-\overline{Y}^c)-(\mu_t-\mu_c)}{s_p\sqrt{1/n_c+1/n_t}}
\end{equation}
follows the Student's t distribution with $n_c+n_t-2$ degrees of
freedom. In detecting a differential gene expression, we want to
test the null hypothesis $\mu_c=\mu_t$ against the alternative
hypothesis $\mu_c\neq \mu_t$. So our test statistic is
\begin{equation}
t_0=\frac{(\overline{Y}^t-\overline{Y}^c)}{s_p\sqrt{1/n_c+1/n_t}},
\label{eq-t_0}
\end{equation}
and there are no unknowns in Eq.~(\ref{eq-t_0}). Note that $t_0$
can be seen as a generalized two-sample t-test statistic, which
reduces to the statistic of the traditional two-sample t test with
equal variance when we set the parameters $r=1$ (no pooling of
tissue samples) and $m=1$ (no technical replication of
measurements). In Ref.~\citep{Shih-etal-Bioinformatics2004},
\citeauthor{Shih-etal-Bioinformatics2004} arrived at two separate
statistics, one for the non-pooled design and the other for the
pooled design. The $t_0$ defined by Eq.~(\ref{eq-t_0}) is more
general: setting $r=1$ and $m=1$ in Eq.~(\ref{eq-t_0}) recovers
Shih {\em et al}'s statistic for the non-pooled design, while
setting $r>1$ and $m=1$ recovers Shih {\em et al}'s statistic for
the pooled design. Note that $m$ does not need to equal 1.
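As a concrete sketch, the statistic of Eq.~(\ref{eq-t_0}) can be computed directly from the per-pool measurements; with $r=1$ and $m=1$ it agrees with SciPy's equal-variance two-sample t test. The function name and simulated data below are illustrative, not from the paper.

```python
import numpy as np
from scipy import stats

def t0_statistic(yc, yt):
    """Generalized two-sample t statistic of Eq. (eq-t_0).

    yc, yt : arrays of shape (n_pools, m) holding the m technical
    replicates for each pool of the control / treatment group.
    """
    pc = yc.mean(axis=1)          # per-pool means (1/m) sum_k y_{p,k}
    pt = yt.mean(axis=1)
    nc, nt = len(pc), len(pt)
    Yc, Yt = pc.mean(), pt.mean()
    s2p = (((pc - Yc) ** 2).sum() + ((pt - Yt) ** 2).sum()) / (nc + nt - 2)
    return (Yt - Yc) / np.sqrt(s2p * (1.0 / nc + 1.0 / nt))

# With r = 1 and m = 1 this reduces to the classical equal-variance test:
rng = np.random.default_rng(1)
yc = rng.normal(0.0, 1.0, size=(6, 1))
yt = rng.normal(1.0, 1.0, size=(6, 1))
print(t0_statistic(yc, yt))
print(stats.ttest_ind(yt.ravel(), yc.ravel(), equal_var=True).statistic)
```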
By incorporating the two
additional parameters $r$ and $m$, the statistic $t_0$ can deal
with situations in which there are pooled tissue samples and
multiple technical replications.

\subsection{Criteria of significance}
As with any statistical test, we need to specify a threshold
p-value $P_{th}$ for claiming significant results. When
all the other parameters are given, setting $P_{th}$ is equivalent
to setting a threshold, say $|\xi|$, for the statistic $t_0$
defined in Eq.~(\ref{eq-t_0}). With this threshold t-value, our
criterion for claiming a significant test is as follows: if
$t_0>|\xi|$, we declare that $\mu_t-\mu_c>0$; if $t_0<-|\xi|$, we
declare that $\mu_t-\mu_c<0$. The rate at which false positive
claims are made is then
\begin{eqnarray}
P_{th}=\int _{-\infty}^{-|\xi|}\rho _{n_c+n_t-2}(t_0)dt_0
+\int_{|\xi|}^{\infty}\rho _{n_c+n_t-2}(t_0)dt_0 \nonumber \\
=2\int _{-\infty}^{-|\xi|}\rho _{n_c+n_t-2}(t_0)dt_0
=2T_{n_c+n_t-2}(-|\xi|), \label{eq-P_th}
\end{eqnarray}
where $\rho_{n_c+n_t-2}(.)$ is the probability density function
(PDF) of the Student's t distribution with $n_c+n_t-2$ degrees of
freedom, and $T_{n_c+n_t-2}(.)$ is the corresponding cumulative
distribution function (CDF). The threshold t-value $|\xi|$ can
therefore be obtained by solving the equation
$2T_{n_c+n_t-2}(-|\xi|)=P_{th}$ for a given false
positive rate $P_{th}$.



\section{Power function}
\label{sec-power-function} In
\citep{Zhang&Gant-Bioinformatics2004} we presented a power
function for a new statistical t test (hereafter referred to as
the ``two-labelling t test'') in the context of using two-color
microarrays to detect differential gene expression.
Following
similar steps we can derive the power function for the generalized
two-sample t test presented in this paper, which reads
\begin{eqnarray}
S=\int _0^{\infty}p_{n_c+n_t-2}(Y)\Phi\left[{-|\xi|\sqrt{Y}\over
\sqrt{n_c+n_t-2}} +\frac{|\mu|}{\sigma_D} \right]dY,
 \label{eq-S}
\end{eqnarray}
where $p_{n_c+n_t-2}(Y)$ is the PDF of the $\chi ^2$ distribution
with $n_c+n_t-2$ degrees of freedom, and $\Phi(.)$ is the CDF of
the standard normal distribution. The rate $S$ at which a true
difference between $\mu_t$ and $\mu_c$ can be successfully
detected is a function of $n_c$, $n_t$, $|\mu|/\sigma _D$, and
$|\xi|$. With $\sigma_D$ given by the square root of
Eq.~(\ref{eq-var(D)}), and $|\xi|$ determined by solving
Eq.~(\ref{eq-P_th}) at a given false positive rate $P_{th}$, $S$
is ultimately a function of $P_{th}$, $n_c$, $n_t$, and
$|\mu|/\sigma _D$.


A few points are worth noting here.

1. The two-labelling t test presented in
\citep{Zhang&Gant-Bioinformatics2004} was designed to deal with
systematic labelling biases generated during microarray
experimentation. The t test presented in this paper, however,
assumes no systematic data biases. In the case of two-color
microarrays this requires a common reference design. In such an
experimental design the labelling biases cancel out in
the calculation of the test statistic.

2. In \citep{Zhang&Gant-Bioinformatics2004}, the biological
variances of the two populations under comparison do not have to
be the same; that is, we did not assume $\sigma^2_c=\sigma^2_t$.
For the t test in this paper, we have made the additional
assumption that $\sigma^2_c=\sigma^2_t$. Relaxing this assumption
would be possible, as in the case of the traditional two-sample t
test with unequal variance \citep{Brownlee1965}, but an exact power
function could then not be readily obtained.

3.
The exact power function obtained in this paper allows
evaluation of the effects of pooling biological samples and of
taking multiple technical measurements, thus giving
researchers quantitative guidance on the practice of pooling
samples.

4. By setting the parameters $r=1$ and $m=1$, an exact power
function is provided for the traditional two-sample t test with
equal variance.


\section{Results}

We have implemented the computation of the power function $S$ of
Eq.~(\ref{eq-S}) as a Java application, which can be accessed at
the URL given in the abstract. Here we apply it to microarray
comparative studies for finding differentially expressed genes,
and investigate the effect of pooling RNA samples in the
experiments. We also compare our exact results with the
approximate results presented by other authors
\citep{Shih-etal-Bioinformatics2004} to demonstrate why an exact
formula is desirable.

\subsection{Comparison with approximate results}
Based on their approximate formulas,
\citeauthor{Shih-etal-Bioinformatics2004} considered two scenarios
to compare the numbers of biological subjects and of microarrays
in the non-pooled and pooled
designs \citep{Shih-etal-Bioinformatics2004}. Here we give exact
results for the two scenarios to show how they differ from the
approximate results. In the first scenario, we consider a
common biological variance of the two populations $\sigma
^2=0.05$ and a technical variance $\sigma_{\epsilon}^2=0.0125$,
which give the biological-to-technical variance ratio $\lambda
=\sigma^2/\sigma_{\epsilon}^2=4$. The preset target of the
experiment in this scenario is that the false positive rate be
controlled at $P_{th}=0.001$ and that the power be no less than
$S=0.95$ for detecting a two-fold differential gene expression,
which corresponds to $\mu=1$ with a base-2 logarithm
\citep{Shih-etal-Bioinformatics2004}.
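A minimal Python sketch of Eq.~(\ref{eq-S}) (an alternative to the Java application; the function name is an illustrative choice): the threshold $|\xi|$ comes from Eq.~(\ref{eq-P_th}) via one inverse-CDF call, and the integral is evaluated numerically. The integral also coincides with the survival function of a noncentral t distribution with noncentrality $|\mu|/\sigma_D$, which provides an independent check.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def power_S(P_th, n_c, n_t, delta):
    """Power of Eq. (eq-S); delta = |mu| / sigma_D."""
    df = n_c + n_t - 2
    xi = stats.t.isf(P_th / 2, df)   # threshold |xi| from Eq. (eq-P_th)
    integrand = lambda Y: stats.chi2.pdf(Y, df) * stats.norm.cdf(
        -xi * np.sqrt(Y / df) + delta)
    return quad(integrand, 0.0, np.inf)[0]

# First scenario, non-pooled design: sigma^2 = 0.05, sigma_eps^2 = 0.0125,
# r = m = 1, mu = 1, n_c = n_t = 6.
sigma_D = np.sqrt((1/6 + 1/6) * (0.05 + 0.0125))
print(power_S(0.001, 6, 6, 1.0 / sigma_D))   # exceeds the 0.95 target
```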
In Table
\ref{tab-scenario1}, we present results for different values of the
pooling parameter $r$. It can be seen from the first panel of this
table that in order to hit the preset target, the non-pooled design
($r=1$) requires at least $12$ biological subjects divided evenly
between the two populations, i.e., $6$ from each of the two
populations. Having $7$ subjects from one population and $5$
subjects from the other is insufficient to achieve the target of
$95\%$ detection power. The effects of other levels of pooling on
the detection power are also shown in Table \ref{tab-scenario1}.
The minimum numbers of biological subjects ($N_s$) and microarrays
($N_m$) that meet the preset targets are highlighted in boldface.
It is clear that as the level of pooling is increased (with
increasing $r$), the number of microarrays $N_m$ can be reduced,
but the number of biological subjects $N_s$ has to be increased.
For example, in order to reduce the number of arrays from $12$
(Table \ref{tab-scenario1}, first panel) to $8$ (Table
\ref{tab-scenario1}, fourth panel), the number of biological
subjects forming the pools must be increased from $12$ to $40$.



\begin{figure}
\centerline{\includegraphics[width=8.5cm,height=6.0cm,angle=0]{Senario2-Scurve.eps}}
\caption{The power $S$ as a function of the total number of pools
$n_c+n_t$. The parameters used are those of the second scenario:
$\sigma ^2=0.2$, $\sigma_{\epsilon}^2=0.05$, $\lambda
=\sigma^2/\sigma_{\epsilon}^2=4$, $P_{th}=0.001$, $\mu=1$, and
$m=1$. The five solid curves correspond to different levels of
pooling, from right to left $r=1$, $r=2$, $r=4$, $r=6$, and
$r=15$ respectively. The dashed line indicates the $95\%$ power;
its intersections with the power curves specify the total
numbers of pools (assuming $n_c=n_t$) needed to achieve the target
power.
The total number of biological subjects and the total
number of arrays can then be calculated simply as
$N_s=r(n_c+n_t)$ and $N_m=m(n_c+n_t)$ respectively.}
\label{fig-Scurve}
\end{figure}




For the second scenario we consider the case $\sigma ^2=0.2$,
$\sigma_{\epsilon}^2=0.05$, which gives $\lambda
=\sigma^2/\sigma_{\epsilon}^2=4$. Again the preset targets are to
detect a true differential expression $\mu=1$ with no less than
$95\%$ power while the false positive rate is set at
$P_{th}=0.001$. Using these parameters, the power $S$ as a
function of $n_c+n_t$ is plotted in Fig.~\ref{fig-Scurve} for
different levels of sample pooling. For the non-pooled design
($r=1$), $N_s=30$ total biological subjects and $N_m=30$ arrays
are required to hit the preset targets. As in the first
scenario, when the level of pooling is increased, the number of
arrays $N_m$ is reduced while the number of subjects must be
increased to meet the preset targets.


In Table \ref{tab-exact-vs-approx}, we summarize our exact results
and the approximate results of
\citep{Shih-etal-Bioinformatics2004}. It can be seen that the
difference between the two can be very large, indicating the need
for exact results. For example, in the first scenario when
$N_m=8$ the approximate result of
\citep{Shih-etal-Bioinformatics2004} predicts that a minimum of
$21$ biological subjects is required. In practice $24$ subjects
would be used, as $24$ is the smallest number larger than $21$ and
divisible by $8$. However, this experimental setup ($24$ subjects
forming $8$ pools, $8$ microarrays) will only give a detection
power of $90\%$. To meet the target power of $95\%$, $40$
biological subjects are actually required by our exact result.
If
an experiment with $N_m=7$ microarrays is planned,
\citeauthor{Shih-etal-Bioinformatics2004} predict that $37$
subjects are required \citep{Shih-etal-Bioinformatics2004}, but in
fact $126$ subjects must be used to achieve the target. Generally,
the approximate formulas of \citep{Shih-etal-Bioinformatics2004}
are too optimistic in assessing the benefits of pooling samples
and reducing the number of microarrays, because they underestimate
the number of biological subjects required.

\subsection{Cost analysis}
Depending on the material costs involved in the biological
subjects and microarrays, the conditions under which pooling
samples becomes beneficial may differ from lab to lab. Here we
show by example how to determine these conditions. Denoting the
cost associated with each biological subject by $C_s$ (including
materials, labor, etc.) and the cost associated with a microarray
by $C_m$, the total cost of an experiment in a microarray
comparative study is $C_T=N_sC_s+N_mC_m$. Taking the first
scenario as an example, the total cost of a non-pooled design that
achieves our preset targets is
$$C_T(r=1)=12C_s+12C_m,$$
and the total cost of the pooled design with $r=2$ is
$$C_T(r=2)=20C_s+10C_m.$$
Therefore, in order for the pooled design with $r=2$ to be
beneficial we must have
\begin{equation}
C_T(r=2) \le C_T(r=1),
\end{equation}
which requires that $C_m \ge 4C_s$. Put another way, only when the
cost associated with one microarray $C_m$ is more than $4$ times
the cost of a subject $C_s$ does the pooled design with $r=2$
become preferable to the non-pooled design. Similarly, a higher
level of pooling with $r=3$ becomes preferable to $r=2$ only when
$C_m \ge 7C_s$. Furthermore, the condition for increasing the
level of pooling from $r=3$ to $r=5$ is $C_m \ge 13C_s$, and so
on.
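These break-even comparisons are simple arithmetic on $C_T=N_sC_s+N_mC_m$. In the sketch below the $(N_s,N_m)$ pairs for $r=1$ and $r=2$ are the ones quoted above, while those for $r=3$ and $r=5$ are inferred from the stated break-even conditions (an assumption of this sketch); the two pairs of laboratory cost figures are the ones discussed next.

```python
# Total cost C_T = N_s*C_s + N_m*C_m for statistically equivalent designs
# of the first scenario.  (N_s, N_m) for r = 1 and r = 2 are quoted in
# the text; those for r = 3 and r = 5 are inferred from the break-even
# conditions C_m >= 7*C_s and C_m >= 13*C_s.
designs = {1: (12, 12), 2: (20, 10), 3: (27, 9), 5: (40, 8)}

def total_cost(r, C_s, C_m):
    N_s, N_m = designs[r]
    return N_s * C_s + N_m * C_m

# Shih et al.'s lab: C_s = $230, C_m = $300 -> no pooling condition met.
print(min(designs, key=lambda r: total_cost(r, 230, 300)))   # -> 1
# Kendziorski et al.'s lab: C_s = $50, C_m = $700 -> r = 5 is optimal.
print(min(designs, key=lambda r: total_cost(r, 50, 700)))    # -> 5
```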
Table \ref{tab-exact-vs-approx} gives these conditions for
further levels of pooling.

For the first scenario, using the actual cost figures given in
\citep{Shih-etal-Bioinformatics2004}, where $C_s=\$230$ and
$C_m=\$300$, none of the pooling conditions is met. Therefore for
this laboratory pooling samples is not recommended. However, if we
use the cost figures of
\citep{Kendziorski-etal-Biostatistics2003}, where $C_s=\$50$ and
$C_m=\$700$, the optimal design is a pooled design with $r=5$.


The second scenario tells a similar story. The cost figures
of Ref.~\citep{Shih-etal-Bioinformatics2004} ($C_s=\$230$ and
$C_m=\$300$) give $C_m=1.30C_s$, which does not satisfy any of
the pooling conditions, so again the non-pooled design with
$N_m=30$ and $N_s=30$ is recommended. On the other hand, the cost
figures of \citep{Kendziorski-etal-Biostatistics2003} ($C_s=\$50$
and $C_m=\$700$) give $C_m=14C_s$, which satisfies all the pooling
conditions in the lower panel of Table \ref{tab-exact-vs-approx}
except the last row. So in Kendziorski {\em et al}'s lab, the
pooled design with $N_m=14$ and $N_s=84$ would be recommended.


\section{Discussion} \label{sec-discussion} We have in this paper
presented exact formulas for calculating the power of microarray
experimental designs with different levels of pooling. These
formulas can be used to determine the conditions of statistical
equivalence between different pooling setups. As in
\citep{Kendziorski-etal-Biostatistics2003} and
\citep{Shih-etal-Bioinformatics2004}, the calculations presented
in this paper are for an individual gene, so the statistical
equivalence between different pooling designs can be determined
with regard to one particular gene.
However, microarrays monitor
thousands of genes simultaneously, and the biological and
technical variances vary from gene to gene; therefore no single
result of statistical equivalence between pooled and non-pooled
designs applies equally to all genes on the array. How, then,
would the formulations in this work be used in practice? One
possible way, as suggested by Kendziorski {\em et al}.
\citep{Kendziorski-etal-Biostatistics2003}, is to specify the
distributions of $\sigma ^2$ and $\sigma_{\epsilon}$ and calculate
the total number of subjects and arrays that maximize the average
power across the array. In theory, if the biological variances
and technical variances were known for all genes on the array, an
equivalence condition between pooled and non-pooled designs could
be determined for each gene individually. The overall (or, say,
average) equivalence condition between pooled and non-pooled
designs could then be obtained, for example, by some form of
averaging operation over all genes. An alternative and probably
more practical way is to use representative values of $\sigma ^2$
and $\sigma_{\epsilon}$. We therefore propose that the parameters
of a ``typical gene'' be used as inputs for the power and sample
size calculations. A typical gene is a gene whose biological and
technical variances take the most probable values among the genes,
i.e., the modes of the distributions of the biological and
technical variances across genes. Alternatively, the median or
mean variances across genes could be used as representative
values \citep{Shih-etal-Bioinformatics2004}.


An issue associated with microarray experiments is the problem of
multiple inferences, where a separate null hypothesis is being
tested for each gene. Given thousands of null hypotheses being
tested simultaneously, the customary significance level $\alpha
=0.05$ for declaring positive tests will surely give too many
false positives.
For example, if among a total number $N=10000$ of
genes being tested, $N_0=4000$ are truly null genes (genes that
are non-differentially expressed between the two classes), the
expected number of false positive results would be $4000\times
0.05=200$, which may be too many to be acceptable. Thus a smaller
threshold p-value for declaring differentially expressed genes
should be used. Effectively controlling false positives in a
multiple-testing situation such as microarray experiments is an
area which has drawn much attention in recent years due to the
wider application of microarray technology. As discussed in our
previous work \citep{Zhang&Gant-Bioinformatics2004}, generally
speaking, all the different multiple-testing adjustment methods
eventually amount to setting a threshold p-value and then
rejecting all the null hypotheses with p-values below this
threshold. The classical Bonferroni multiple-testing procedure,
which controls the family-wise error rate at $\alpha$ by setting
the threshold $P_{th}=\alpha /N$, is generally regarded as too
conservative in the microarray context. The FDR (false discovery
rate) idea, introduced by \citep{Benjamini&Hochberg1995} to deal
with the multiple-testing problem, has now been widely accepted as
appropriate to the microarray situation. Recently, Efron
\citep{Efron2004} extended the FDR idea by defining fdr, a local
version of FDR (the local false discovery rate). When planning
microarray experiments in terms of power and sample size
calculations, the FDR of \citep{Benjamini&Hochberg1995} is more
appropriate and convenient to use. There are now in the literature
a few slightly different variants of the definition of FDR
\citep{Benjamini&Hochberg1995,Storey&Tibshrirani-pnas2003,Grant-etal-Bioinformatics2005},
but in essence it is defined as the proportion of false positives
among all positive tests declared.
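The arithmetic of this example, together with the FDR correspondence of Eq.~(\ref{eq-fdr}) derived below, fits in a few lines; the power value $S=0.95$ used here is an illustrative choice.

```python
# Worked numbers from the multiple-testing example: N = 10000 genes,
# N0 = 4000 true nulls (so pi0 = 0.4), per-gene threshold alpha = 0.05.
N, N0, alpha = 10_000, 4_000, 0.05
pi0 = N0 / N
expected_fp = N0 * alpha
print(expected_fp)                     # -> 200.0

# FDR achieved at (P_th, S) via Eq. (eq-fdr); S = 0.95 is illustrative.
S = 0.95
fdr = alpha * pi0 / (alpha * pi0 + S * (1 - pi0))
print(round(fdr, 4))                   # about 0.034
```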
To provide an interface between
FDR and the formulation in the previous sections, here we show
that there is a simple correspondence between controlling FDR and
specifying the traditional type I error rate and power. Suppose
that a total number $N$ of genes is monitored by the microarray,
so that $N$ hypotheses are tested, one for each gene. Suppose that
a fraction $\pi_0$ of the $N$ genes are true null genes, i.e.,
genes that are non-differentially expressed between the two
classes. Given the type I error rate $P_{th}$, the expected number
of false positive tests is $P_{th}N\pi_0$; given the power $S$,
the expected number of non-null genes (truly differentially
expressed genes) that are declared positive is $SN(1-\pi_0)$. So
the FDR achieved by this setting is
\begin{equation}
\mbox{FDR}=\frac{P_{th}N\pi_0}{P_{th}N\pi_0+SN(1-\pi_0)}
=\frac{P_{th}\pi_0}{P_{th}\pi_0+S(1-\pi_0)}. \label{eq-fdr}
\end{equation}
Here $\pi_0$ is an important parameter in controlling FDR, and
several different methods for estimating it have been proposed
\citep{Pounds&Morris2003,Storey&Tibshrirani-pnas2003,Zhang&Gant-Bioinformatics2004}.
In particular, the method we presented in
\citep{Zhang&Gant-Bioinformatics2004} is an accurate yet
computationally much simpler algorithm than the one proposed by
Storey and Tibshirani in \citep{Storey&Tibshrirani-pnas2003}.
With the interface Eq.~(\ref{eq-fdr}), FDR can be readily
incorporated into the calculations.



\vspace{0.2cm}\noindent {\Large\bf Acknowledgments}

\noindent We wish to acknowledge the support of the microarray
team of the MRC Toxicology Unit, particularly Reginald Davies,
JinLi Luo and Joan Riley.
We also thank two anonymous reviewers
for their helpful and constructive comments.


\section{Introduction}

Mode-coupling theory (MCT) is considered by many the most
comprehensive first-principle approach to the dynamics of supercooled
liquids~\cite{Gotze}. Nevertheless, its status is rather problematic
from a fundamental point of view, as the physical nature of the glass
state and the microscopic interpretation of structural arrest are not
yet fully elucidated. This is all the more so when we look at the
higher-order glass singularities in structured and complex liquids.
In this Rapid Communication, I show that multiple glassy states and
glass-glass transitions in MCT can be understood in terms of a
generalisation of the notion of dynamic facilitation~\cite{FrAn,RiSo}
and bootstrap percolation~\cite{ChLeRe,BP_rev}. The latter is known to
emerge in a variety of contexts including jamming of granular
materials~\cite{Liu}, NP-hard combinatorial optimization
problems~\cite{MoZe}, neural and immune networks~\cite{Tlusty,
Stauffer}, and evolutionary modeling~\cite{Klimek}.

The formal structure of glass singularities predicted by MCT is
encoded in the self-consistent equation
\begin{equation}
\label{eq.Phi}
\Phi = (1-\Phi) \ {\mathsf M} (\Phi),
\end{equation}
where $\Phi$ is the asymptotic value of the correlator, and ${\mathsf
M}$ is the memory kernel describing the retarded friction effect
caused by particle caging, a physical feature associated with the de
Gennes narrowing.
We shall be concerned in the following with
one-component schematic models in which the wavevector dependence of
$\Phi$ is disregarded and ${\mathsf M}$ is a low-order polynomial.
Equation~(\ref{eq.Phi}), derived by taking the long-time limit of the
integro-differential equation describing the evolution of the
correlator of particle density fluctuations, generates a hierarchy of
topologically stable glass singularities, which can be classified in
terms of bifurcations exhibited by the roots of the real polynomial
\begin{equation}
{\mathcal Q}(\Phi) = \Phi - (1-\Phi) \ {\mathsf M} (\Phi) .
\end{equation}
Following the Arnol'd notation adopted in~\cite{Gotze}, an ${\mathsf
A}_{\ell}$ glass singularity occurs when the corresponding maximum
root of ${\mathcal Q}$ has degeneracy $\ell$, $\ell \ge 2$, and is
defined by
\begin{equation}
\frac{d^n {\mathcal Q}}{d\Phi^n} = 0 \,, \qquad n=0, \cdots, \ell-1,
\label{eq.dQ}
\end{equation}
while the $\ell$th derivative is nonzero. The polynomial ${\mathcal
Q}$ always has the trivial root $\Phi=0$, corresponding to an ergodic
liquid state, whereas nonzero values of $\Phi$ correspond to a system
that is unable to fully relax and hence can be identified with a
nonergodic glass state.


For two-parameter systems there are two basic singularities, ${\mathsf
A}_2$ and ${\mathsf A}_3$, also known as {\em fold} and {\em cusp}
bifurcations. They have been extensively studied by using memory
kernels given by a superposition of linear and nonlinear terms. In
the ${\mathsf F}_{12}$ schematic model the memory kernel is ${\mathsf
M}(\Phi) = v_1 \Phi + v_2 \Phi^2$, while the ${\mathsf F}_{13}$ model
is defined by ${\mathsf M}(\Phi) = v_1 \Phi + v_3 \Phi^3$.
The
competition between the two terms produces a variety of nonergodic
behaviors: the linear term gives rise to a continuous liquid-glass
transition at which $\Phi \sim \epsilon$, where $\epsilon$ is the
distance from the critical point (e.g., $\epsilon=T-T_{\scriptstyle \rm c}$), while the
nonlinear term induces a discontinuous liquid-glass transition, with
the well-known square-root anomaly $\Phi-\Phi_{\scriptstyle \rm c} \sim
\epsilon^{1/2}$. In the ${\mathsf F}_{12}$ scenario the discontinuous
line smoothly joins the continuous one at a tricritical point. In the
${\mathsf F}_{13}$ scenario, the discontinuous transition line
terminates at an ${\mathsf A}_3$ singularity inside the glass phase
generated by the continuous liquid-glass transition, thereby
inducing a glass-glass transition (see Fig.~1 for a representative
phase diagram). The scaling form of the order parameter near the
${\mathsf A}_3$ endpoint is $\Phi-\Phi_{\scriptstyle \rm c} \sim \epsilon^{1/3}$, and more
generally $\Phi-\Phi_{\scriptstyle \rm c} \sim \epsilon^{1/\ell}$ for an ${\mathsf
A}_{\ell}$ singularity, as implied by the Taylor expansion of
${\mathcal Q}$ near the critical surface and Eqs.~(\ref{eq.dQ}). Thus
one can observe a rich variety of nonergodic behaviors whose
complexity is comparable to that of multicritical points in phase
equilibria~\cite{Gilmore}.
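The largest root of Eq.~(\ref{eq.Phi}) can be found numerically by the monotone iteration $\Phi_{n+1}={\mathsf M}(\Phi_n)/[1+{\mathsf M}(\Phi_n)]$ started from the fully arrested state $\Phi_0=1$. The sketch below (parameter values are illustrative) uses the ${\mathsf F}_{12}$ kernel with $v_1=0$, where the glass solution at $v_2=4.5$ can be checked by hand: $4.5\,\Phi^2-4.5\,\Phi+1=0$ gives $\Phi=2/3$ as the largest root.

```python
def phi_infty(M, tol=1e-12, max_iter=100_000):
    """Largest solution of Phi = (1 - Phi) * M(Phi), obtained by the
    monotone iteration Phi_{n+1} = M(Phi_n) / (1 + M(Phi_n)) started
    from the fully arrested state Phi_0 = 1."""
    phi = 1.0
    for _ in range(max_iter):
        m = M(phi)
        new = m / (1.0 + m)
        if abs(new - phi) < tol:
            return new
        phi = new
    return phi

# F12 kernel with v1 = 0, v2 = 4.5: glass solution Phi = 2/3.
print(phi_infty(lambda p: 4.5 * p * p))   # -> 0.6666...
# Below the discontinuous transition (v2 < 4) only the liquid Phi = 0 remains:
print(phi_infty(lambda p: 3.9 * p * p))
```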
It is a nontrivial result that only
bifurcation singularities of type ${\mathsf A}_{\ell}$ can occur in
MCT~\cite{Gotze}.


The ${\mathsf F}_{12}$ and ${\mathsf F}_{13}$ scenarios were first
introduced with the mere intention of demonstrating the existence of
higher-order singularities and glass-glass transitions, and were
subsequently observed in a number of experiments and numerical
simulations of realistic model
systems~\cite{Dawson,Pham,Eckert,Chong,Krako,Kurz,Kim,Sperl,Voigtmann}.
It is important to emphasize that the parameters $v_i$ entering the
memory kernel are smooth functions of the thermodynamic variables,
e.g., temperature and density; therefore the nature of the nonergodic
behaviors predicted by MCT is purely dynamic. This is rather puzzling
from the statistical mechanics perspective of critical phenomena, where
diverging relaxation time-scales are closely tied to thermodynamic
singularities. It has been argued that this unusual situation stems
from uncontrolled approximations. For example, the intimate
connection of some spin-glass models with MCT has brought to the fore
the existence of a genuine {\em thermodynamic} glass phase at a
Kauzmann temperature $T_{\scriptscriptstyle \rm K}$ below the putative dynamic glass transition
predicted by MCT~\cite{KiWo,KiTh}. A non-trivial Gibbs measure,
induced by replica-symmetry breaking, would therefore actually be
responsible for the observed glassy behavior~\cite{MePa}.
For this
reason, the nature of MCT has been much debated since its first
appearance, and several approaches have been attempted to clarify its
status~\cite{KiWo,KiTh,MePa,BoCuKuMe,Dave,Andrea,Ikeda,Schmid,Szamel,Silvio}.
I will show here that the idea of dynamic facilitation~\cite{RiSo},
first introduced by Fredrickson and Andersen~\cite{FrAn}, offers some
clues in this direction, since its relation with bootstrap percolation
provides a transparent microscopic mechanism of structural
arrest~\cite{ChLeRe,Branco}. In the dynamic facilitation approach the
coarse-grained structure of a supercooled liquid is represented by an
assembly of higher/lower-density mesoscopic \emph{cells}. In the
simplest version a binary spin variable, $s_i=\pm 1$, is assigned to
every cell $i$ depending on its solid- or liquid-like structure, and no
energetic interaction among cells is assumed, ${\mathcal H} = -h
\sum_i s_i$. The crucial assumption is that the supercooled liquid
dynamics is essentially dominated by the cage effect: fluctuations in
a cell's structure occur if and only if there is a certain number,
say $f$, of nearby liquid-like cells. $f$ is called the facilitation
parameter and can take values in the range $0 \le f \le z$, where $z$
is the lattice coordination: cooperative facilitation imposes $f \ge
2$, while non-cooperative dynamics only requires $f=1$. This very
schematic representation of the cage effect gives rise to a large
variety of remarkable glassy behaviors, and it has long been noticed
that they are surprisingly similar to those found in the dynamics of
mean-field disordered systems~\cite{KuPeSe,Se,SeBiTo}. It has
recently been observed that in a special case an exact mapping between
facilitated and disordered models with $T_{\scriptscriptstyle \rm K}=0$
exists~\cite{FoKrZa}.
Since such models are so utterly different in
their premises, it is by no means obvious that such a correspondence
is not accidental and can be extended to systems with higher-order
glass singularities. To clarify this issue, I will consider a
generalization of the facilitation approach~\cite{SeDeCaAr} in which
every cell $i$ is allowed to have its own facilitation parameter $f_i$
(or, equivalently, an inhomogeneous local lattice connectivity).
Physically, this situation may arise from the coexistence of different
lengthscales in the system, e.g., mixtures of more and less mobile
molecules or of polymers with small and large sizes, or from a
geometrically disordered environment, e.g., a porous matrix. In such
facilitated spin mixtures the facilitation strength can be tuned
smoothly and is generally described by the probability distribution
\begin{eqnarray}
 \pi(f_i) & = & \sum_{\zeta=0}^z \ w_{\zeta} \ \delta_{f_i,\zeta} ,
\label{eq.distf.general}
\end{eqnarray}
where the weights $\{w_{\zeta}\}$ controlling the facilitation strength
satisfy the conditions
\begin{eqnarray}
\sum_{\zeta=0}^z w_{\zeta} = 1 ,\qquad 0 \le w_{\zeta} \le 1.
\end{eqnarray}
By tuning the weights one can thus explore a variety of different
situations. Generally, one observes that when the fraction of spins
with facilitation $f=z-1,z$ is larger than that with $2 \le f \le
z-2$, the glass transition is continuous, while in the opposite case it
is discontinuous. One advantage of the facilitation approach is that,
when the lattice topology has a local tree-like structure, one can
compute exactly some key quantities, such as the critical temperature
and the arrested part of the correlation and its scaling properties
near criticality. This can be done by exploiting the analogy with
bootstrap percolation.
Let $p$ be the density of up spins in thermal\nequilibrium,\n\\begin{eqnarray}\n p &=& \\frac{1}{ 1 + {\\rm e}^{-h\/k_{\\scriptscriptstyle \\rm B} T} },\n\\end{eqnarray}\nfor a generic spin mixture on a Bethe lattice with branching ratio\n$k=z-1$. As usual, one arranges the lattice as a tree with $k$ branches\ngoing up from each node and one going down, and then proceeds\ndownwards. In analogy with the heterogeneous bootstrap percolation\nproblem, the probability $B$ that a cell is in, or can be brought into, the\nliquid-like state by only rearranging the state of the $k$ cells above\nit~\\cite{ChLeRe,Branco,SeBiTo,SeDeCaAr} can be cast in the form\n\\begin{eqnarray}\n 1-B &=& B \\ p \\ \\left\\langle \\sum_{i=k-f+1}^k {k \\choose i}\n B^{k-i-1} (1-B)^i \\right\\rangle_f,\n\\label{eq.B}\n\\end{eqnarray}\nwhere $\\left\\langle \\cdots \\right\\rangle_f$ represents the average\nover the probability distribution Eq.~(\\ref{eq.distf.general}). The\nright-hand side of Eq.~(\\ref{eq.B}) is a polynomial in $1-B$, and\nhence the formal structure of Eq.~(\\ref{eq.B}) is similar to that of\nschematic MCT (once $1-B$ is formally identified with $\\Phi$).\nSingularities can therefore be classified according to the criteria\nalready mentioned in the introduction. Nevertheless, it should be\nnoted that what would be the analog of the MCT kernel in\nEq.~(\\ref{eq.B}) can also have negative coefficients (besides\ncontaining an extra term of the form $(1-B)^k\/B$), while the\npolynomial coefficients of the MCT memory kernel are restricted to\nnon-negative ones. In fact, the sets of critical states which specify\nsome ${\\mathsf A}_{\\ell}$ glass-transition singularity are not\nidentical to those describing the full bifurcation scenario of real\npolynomials of degree $\\ell$, because the coefficients of the\nadmissible polynomials ${\\mathcal Q}$ form only a subset of all real\ncoefficients. 
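Eq.~(\ref{eq.B}) can be checked numerically by fixed-point iteration. The Python sketch below codes its right-hand side for an arbitrary weight distribution and iterates $B \to 1 - \mathrm{rhs}(B)$ from just below the trivial root $B=1$; the starting point and the convergence scheme are assumptions of this sketch, not taken from the text.

```python
from math import comb

def bootstrap_rhs(B, p, k, weights):
    # Right-hand side of  1 - B = B * p * < sum_{i=k-f+1}^{k} C(k,i)
    # B^(k-i-1) (1-B)^i >_f ;  weights[f] is the probability w_f of
    # drawing facilitation parameter f.
    total = 0.0
    for f, w in weights.items():
        if w == 0.0:
            continue
        total += w * sum(comb(k, i) * B ** (k - i - 1) * (1.0 - B) ** i
                         for i in range(k - f + 1, k + 1))
    return B * p * total

def solve_B(p, k, weights, tol=1e-12, itmax=100_000):
    # Iterate B -> 1 - rhs(B), starting just below the trivial root B = 1;
    # when the trivial root is unstable the iteration flows to the
    # nearest non-trivial solution with 1 - B > 0.
    B = 1.0 - 1e-9
    for _ in range(itmax):
        Bnew = min(max(1.0 - bootstrap_rhs(B, p, k, weights), 1e-12), 1.0)
        if abs(Bnew - B) < tol:
            return Bnew
        B = Bnew
    return B
```

For example, for a mixture with $w_2=0.7$, $w_4=0.3$ on a $z=5$ lattice ($k=4$), the iteration returns a non-trivial root $B<1$ at large $p$ and flows back to the trivial root $B=1$ at small $p$.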
This observation means that the correspondence between\nMCT and the heterogeneous facilitation approach is not an identity,\nbut it still leaves enough room for building up models with MCT\nfeatures, although some ingenuity may be required. It has already been\nshown, for example, that the ${\\mathsf F}_{12}$ scenario is faithfully\nreproduced in this framework~\\cite{SeDeCaAr,ArSe}. To substantiate the\nabove observation, I will now focus on the next higher-order glass\nsingularity, which is the ${\\mathsf F}_{13}$ scenario.\n\\begin{figure}\n\\includegraphics[width=8.5cm]{phasediagram}\n\\caption{Phase diagram for a Bethe lattice with $z=5$ and facilitation\n as in Eq.~(\\ref{eq.distf}). The liquid-glass 1 transition is\n continuous, while the liquid-glass 2 and the glass 1-glass 2\n transitions are discontinuous. The light dashed line is the unstable branch of the\n phase diagram and shows the cuspoid structure of the terminal\n endpoint.}\n\\label{fig.diag}\n\\end{figure}\nLet us consider, for simplicity, a binary mixture on a Bethe lattice with\n$z=5$ and\n\\begin{equation}\n\\pi(f_i) = (1-q) \\delta_{f_i,2} + q \\delta_{f_i,4}.\n\\label{eq.distf}\n\\end{equation}\nFor such a mixture, denoted here as (2,4), the probability $B$ obeys\nthe fixed-point equation:\n\\begin{equation}\n 1-B = p \\left[ q (1-B^4) + (1-q) (1-B)^3 (1+3B)\\right].\n\\label{eq.P245}\n\\end{equation}\nThis equation is always satisfied by $1-B=0$, while an additional\nsolution with $1-B>0$ is found by solving\n\\begin{equation}\n p^{-1} = 1+B-5B^2+3B^3 + 2 q B^2 (3-B).\n\\label{eq.P245_1}\n\\end{equation}\nA continuous glass transition is obtained by setting $B=1$ in the\nprevious equation, giving $p_{\\scriptstyle \\rm c} = 1\/(4q)$. Using the relation between $T$\nand $p$ (and setting $h\/k_{\\scriptscriptstyle \\rm B}=1$), one gets $T_{\\scriptstyle \\rm c}(q) = -1\/\\ln(4q-1)$,\nimplying that the continuous transition exists in the range $1\/4 \\le q\n\\le 1\/2$. 
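The threshold $p_{\rm c}=1/(4q)$ and the resulting $T_{\rm c}(q)$ are easy to verify numerically. In the Python sketch below the right-hand side of Eq.~(\ref{eq.P245_1}) is coded directly; the value $q=0.3$ used in the check is an arbitrary illustrative choice.

```python
from math import exp, log

def inverse_p(B, q):
    # Right-hand side of Eq. (eq.P245_1): p^{-1} as a function of B
    # for the (2,4) mixture on a z = 5 Bethe lattice.
    return 1.0 + B - 5.0 * B**2 + 3.0 * B**3 + 2.0 * q * B**2 * (3.0 - B)

def Tc_continuous(q):
    # Continuous-transition temperature T_c(q) = -1/ln(4q - 1),
    # from p_c = 1/(4q) with p = 1/(1 + exp(-1/T)) and h/k_B = 1.
    if not 0.25 < q < 0.5:
        raise ValueError("continuous transition exists only for 1/4 < q < 1/2")
    return -1.0 / log(4.0 * q - 1.0)
```

Evaluating inverse_p at $B=1$ returns exactly $4q$, i.e. $p_{\rm c}=1/(4q)$, and at $T=T_{\rm c}(q)$ the equilibrium density $p=1/(1+{\rm e}^{-1/T})$ coincides with that threshold.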
The discontinuous transition instead occurs when\nEq.~(\\ref{eq.P245_1}) is satisfied and its first derivative with\nrespect to $B$ vanishes. The latter condition implies\n\\begin{equation}\n q = \\frac{ (9B-1)(1-B)}{6B(2-B)} ,\n\\label{eq.P245_2}\n\\end{equation}\nand naturally leads to the square-root scaling near the discontinuous\ntransition line. Thus the discontinuous transition can be graphically\nrepresented by expressing Eqs.~(\\ref{eq.P245_1}) and (\\ref{eq.P245_2})\nin parametric form in terms of $B$. The phase diagram in the plane\n$(T,q)$ is shown in Fig.~\\ref{fig.diag}.\nIt exhibits two crossing glass transition lines, of continuous and\ndiscontinuous nature, corresponding to degenerate and generic\n${\\mathsf A}_2$ singularities, respectively. The discontinuous branch extends into\nthe glass region below the continuous line up to a terminal endpoint\nwhich corresponds to an ${\\mathsf A}_3$ singularity. The location of\nthe endpoint is found by simultaneously solving the equation\n\\begin{equation}\n B = \\frac{5-6q}{9-6q} ,\n\\label{eq.P245_3}\n\\end{equation}\n(which is obtained by setting the second derivative of the right-hand side of\nEq.~(\\ref{eq.P245_1}) with respect to $B$ to zero), along with Eqs.~(\\ref{eq.P245_1}) and\n(\\ref{eq.P245_2}). The discontinuous branch located between the\ncrossing point and the endpoint corresponds to a transition between\ntwo distinct glass states, called here glass 1 and glass 2. They are\nrespectively characterized by a fractal and a compact structure of the\nspanning cluster of frozen spins. The passage from one glass to\nthe other can take place either discontinuously or without meeting any\nsingularity, i.e., by circling around the endpoint (much as in the\nliquid-gas transformation). 
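The location of the ${\mathsf A}_3$ endpoint can be computed by eliminating $q$ through Eq.~(\ref{eq.P245_2}) and bisecting on the remaining condition. In the Python sketch below the bracket $[0.3,0.5]$ was chosen by inspecting the two curves and is an assumption of this example.

```python
def q_spinodal(B):
    # Eq. (eq.P245_2): q along the discontinuous (spinodal) branch.
    return (9.0 * B - 1.0) * (1.0 - B) / (6.0 * B * (2.0 - B))

def B_endpoint(q):
    # Eq. (eq.P245_3): B at which the second derivative of the
    # right-hand side of Eq. (eq.P245_1) with respect to B vanishes.
    return (5.0 - 6.0 * q) / (9.0 - 6.0 * q)

def find_A3_endpoint(lo=0.3, hi=0.5, tol=1e-13):
    # Bisection on g(B) = B - B_endpoint(q_spinodal(B)); the bracket
    # [0.3, 0.5] encloses a single sign change for the (2,4) mixture.
    g = lambda B: B - B_endpoint(q_spinodal(B))
    assert g(lo) < 0.0 < g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    B = 0.5 * (lo + hi)
    return B, q_spinodal(B)
```

This gives $B \approx 0.390$ and $q \approx 0.406$ at the endpoint; inserting these values into Eq.~(\ref{eq.P245_1}) then yields the endpoint temperature through $p$.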
The existence of two\ntransitions in bootstrap percolation was first discovered by Fontes\nand Schonmann~\\cite{FoSc} in homogeneous trees and then found in\nErd\\H{o}s-R\\'enyi graphs and complex networks in~\\cite{Porto,Cellai}.\nHowever, its relation with the glass-glass transition and MCT went\nunnoticed. In fact, the correspondence between Eqs.~(\\ref{eq.Phi}) and\n(\\ref{eq.B}) naturally suggests the existence of further singularities\nin bootstrap percolation and cooperative facilitated models.\n\n\\begin{figure}\n\\includegraphics[width=8.5cm]{orderparameter}\n\\caption{ Fraction of permanently frozen spins, $\\Phi$, vs temperature\n ratio, $T\/T_{\\scriptstyle \\rm c}(q)$, for values of $q$ larger than the crossing\n point.}\n\\label{fig.Phi}\n\\end{figure}\n\nFig.~\\ref{fig.Phi} reports the behavior of the fraction of frozen\nspins, which is the analog of the nonergodicity parameter in the\nfacilitation approach, when the temperature crosses the liquid-glass\ncontinuous transition and the glass-glass transition. This quantity\ncan be exactly computed from $B$~\\cite{SeDeCaAr,ArSe}; its\nexpression is not reported here, and we only note that its general\nfeatures, and in particular the scaling properties near the critical\nstates, are similar to those of $B$. We observe that the fraction of\nfrozen spins first increases smoothly at the liquid-glass continuous\ntransition and then suddenly jumps at the glass-glass transition. The\njump decreases when $q$ approaches the endpoint and eventually\ndisappears. At this special point, the additional condition that the\nsecond derivative of Eq.~(\\ref{eq.P245_1}) with respect to $B$\nvanishes implies a cube-root scaling near the endpoint. 
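The jump in Fig.~\ref{fig.Phi} can also be reproduced directly from Eq.~(\ref{eq.P245}), using $B$ itself as a proxy for the order parameter, since its scaling features are similar to those of $\Phi$. The Python sketch below scans $p$ upward (i.e. temperature downward) at fixed $q=0.39$, warm-starting the fixed-point iteration at each step; the value of $q$ and the $p$ grid are illustrative choices of this sketch, not taken from the figure.

```python
def iterate_B(p, q, B0, tol=1e-10, itmax=200_000):
    # Fixed-point iteration of Eq. (eq.P245) for the (2,4) mixture:
    # B -> 1 - p * [ q (1 - B^4) + (1 - q) (1 - B)^3 (1 + 3B) ].
    B = B0
    for _ in range(itmax):
        Bnew = 1.0 - p * (q * (1.0 - B**4)
                          + (1.0 - q) * (1.0 - B)**3 * (1.0 + 3.0 * B))
        if abs(Bnew - B) < tol:
            return Bnew
        B = Bnew
    return B

def scan_B(q=0.39, p_grid=None):
    # Sweep p upward, warm-starting each solve from the previous
    # solution, as in a slow quench toward low temperature.
    if p_grid is None:
        p_grid = [0.86 + 0.01 * n for n in range(8)]   # 0.86 ... 0.93
    Bs, B = [], 1.0 - 1e-6
    for p in p_grid:
        B = iterate_B(p, q, B)
        Bs.append(B)
    return p_grid, Bs
```

On this grid $B$ drops abruptly between two neighbouring values of $p$ (from roughly $0.6$ to roughly $0.2$), the analog of the glass 1-glass 2 jump, while moving $q$ toward the endpoint value shrinks the jump, in line with the behavior described above.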
These scaling\nfeatures are exactly those expected from the ${\\mathsf F}_{13}$\nscenario, and we obtain similar results for the mixture (3,5) on a\nBethe lattice with $z=6$.\n\nTo summarise, a close relationship exists between the structure\nof glass singularities in MCT and that of heterogeneous bootstrap\npercolation. This allows the construction of microscopic realizations\nof MCT scenarios based on the heterogeneous cooperative facilitation\napproach and provides further insights into the degree of universality\nof MCT. The role of the linear and nonlinear terms in the MCT memory\nkernel is played in facilitated spin mixtures by the fraction of spins\nwith facilitation $f=k, k+1$ and $k-1 \\ge f \\ge 2$, respectively. Their\ncompetition generates continuous and discontinuous liquid-glass\ntransitions, while the order of the singularity is primarily controlled by\nthe lattice connectivity. This leads to multiple glassy states,\nglass-glass transitions and more complex glassy behaviors. In this\nframework, the mechanism of structural arrest can be geometrically\ninterpreted in terms of the formation of a spanning cluster of frozen\nspins having a fractal or compact structure, depending on the continuous\nor discontinuous nature of the glass transition. Finally, from the\nrelation between MCT and mean-field disordered\nsystems~\\cite{KiWo,KiTh} it follows that quenched disorder and\ncooperative facilitation are two complementary, rather than\nalternative, descriptions of glassy matter, and this contributes to\nthe long-sought unifying approach to glass physics.\n\n\n\\bibliographystyle{apsrev}\n