diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkudm" "b/data_all_eng_slimpj/shuffled/split2/finalzzkudm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkudm" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\label{sec:intro}\n\\setcounter{equation}{0}\n\nRegge calculus is a coordinate free geometric formalism of gravitation \non triangulated piecewise linear manifolds \\cite{Regge:1961aa, MTW}. \nIt is envisaged from classical to quantum as an approach of \nEinstein gravity to problems where analytic methods cannot be reachable. \nThough Regge theory or its evolved ones have brought considerable \nprogress in our understanding of quantum gravity, in particular in \ntwo and three dimensions, efforts to develop the formalism are \nvigorously continued to overcome the conceptual and technical \ndifficulties \\cite{BOW:2018aa}. \n\nAs in continuum gravity, Regge calculus allows exact solutions \nfor systems, where the numbers of variables are largely reduced \nby some symmetry. They are expected not only to play a role of \na test tube to examine the validity of Regge calculus but to \nexpose origins of intriguing geometrical properties of gravitation \nsuch as dynamical behaviors of space-time and black hole singularities. \nAlong this line of thought Regge calculus has been applied \nto spherically symmetric static geometries such as the Schwarzschild \nspace-time \\cite{Wong:1971} and the \nFriedmann--Lema\\^itre--Robertson--Walker (FLRW) \nuniverse \\cite{CW:1973aa,Brewin:1987aa,LW:2015aa, LW:2015ab, LW:2015ac}. \nMost researches assume realistic four dimensions and\napplication of Regge calculus to higher dimensions have not been \ntargeted so far. \n\nIn this paper we investigate vacuum solution of a discretized \nclosed FLRW universe with a positive cosmological constant in \nan arbitrary dimensions via\nRegge calculus. In the previous \npapers \\cite{TF:2016aa, TF:2020aa} we have analyzed \nthe FLRW universe in three and four dimensions within the \nframework of Collins--Williams (CW) formalism\\cite{CW:1973aa}. \nIt is base on $3+1$ decomposition of space-time similar to \nArnowitt--Deser--Misner (ADM) formalism in General \nRelativity \\cite{AD:1959aa, ADM:1959aa}. Three-dimensional \nspherical Cauchy surfaces are replaced by regular polytopes \nand truncated world-tubes are \ntaken as the fundamental building blocks of the discretized \nFLRW universe. \nRegge calculus describes qualitative properties \nof the continuum solution during the period small enough \ncompared with the characteristic time scale $\\sim1\/\\sqrt{\\Lambda}$, \nthe inverse square root of the cosmological constant. \nThe deviation from the continuum theory becomes apparent as time \npasses. In three dimensions the universe expands to infinity \nin a finite time, whereas it repeats expansions and contractions\nperiodically in four dimensions. \n\nIn order for Regge calculus to approximate continuum theory \nquantitatively edge lengths must be sufficiently small \ncompared both with the curvature radius and\n$1\/\\sqrt{\\Lambda}$. \nThis cannot be satisfied for regular \npolytopes since the edge lengths and their circumradii \nare of same order,\nand the minimum edge lengths are of order \n$1\/\\sqrt{\\Lambda}$. To improve the approximation we must \nintroduce nonregular polytopes with shorter edge lengths. \nA natural construction of such polytopes is geodesic dome. \nRegge calculus for them, however, becomes impractical as \nthe number of cells increases. 
This can be bypassed by \nworking with the pseudo-regular polytopes introduced \nin \\cite{TF:2016aa, TF:2020aa}. \nThey can be simply defined by extending the Schl\\\"afli \nsymbol of the original regular polytope to fractional or \nnoninteger one corresponding to the geodesic dome. \nWe will extend the results obtained in three and four \ndimensions to arbitrary dimensions. \n\nThis paper is organized as follows; in the next section we set up the \nregular polytopal universe by the CW formalism in arbitrary dimensions\nand formulate the Regge action in the continuum time limit. In \nSect. \\ref{sec:req} we give gauge fixed Regge equations in Lorentzian \nsignature. We describe the evolution of the polytopal universe in \ndetail. Comparison with the continuum solutions is made. \nIn Sect. \\ref{sec:prpt} we consider the pseudo-regular polytope \nhaving a $D$-cube as the parent regular polytope and define the \nfractional Schl\\\"afli symbol. Taking the infinite frequency limit, \nwe argue that the pseudo-regular polytope model can reproduce the \ncontinuum FLRW universe. Sect. \\ref{sec:sum} is devoted to summary \nand discussions. In Appendix \\ref{sec:cada}, we describe \ncircumradii and dihedral angles of regular polytopes in \narbitrary dimensions. Appendices \\ref{sec:dadpf} and \\ref{sec:pnu} \nare to explain some technicalities. \n\n\n\n\n\n\n\\section{Regge action for a regular $D$-polytopal universe}\n\n\\label{sec:ra}\n\\setcounter{equation}{0}\n\nIn the beginning we would like to briefly summarize the FLRW universe in General Relativity.\nThe continuum gravitational action with a cosmological constant in $D$ dimensions is given by \n\\begin{align}\n \\label{eq:cEHa}\n S=\\frac{1}{16\\pi}\\int d^Dx\\sqrt{-g}(R-2\\Lambda).\n\\end{align}\nThe FLRW metric\n\\begin{align}\n \\label{eq:FLRWm}\n ds^2=-dt^2+a(t)^2\\left[\\frac{dr^2}{1-kr^2}+r^2 \\sigma_{AB} dx^A dx^B \\right]\n\\end{align}\nis an exact solution of Einstein's field equations, where $\\sigma_{AB}$ is \nthe metric tensor on $(D-2)$-dimensional unit sphere. It describes \nan expanding or contracting universe of homogeneous and isotropic \nspace. All the time dependence of the metric is included in \n$a\\left(t\\right)$, known as scale factor in cosmology.\nEinstein equations for the metric (\\ref{eq:FLRWm}) derive the \nFriedmann equations as differential equations of scale factor\n\\begin{align}\n\\label{eq:Feq}\n&\\ddot{a}= \\Lambda_D a , \\quad \\dot{a}^2 = \\Lambda_D a^2 - k ,\n\\end{align}\nwhere we have introduced $\\Lambda_D$ by\n\\begin{align}\n \\label{eq:LamD}\n \\Lambda_D = \\frac{2\\Lambda}{ \\left( D-1 \\right) \\left( D-2 \\right) }.\n\\end{align}\nThe curvature parameter $k = 1, 0, -1$ corresponds to space being \nspherical, Euclidean, or hyperbolic, respectively. The relations \nbetween the solutions and curvature parameter are summarized in \nTable \\ref{tab:flrw} with the proviso that the behaviors of the \nuniverses are restricted to expanding at the beginning for the \ninitial condition $a\\left(0\\right)=\\min a\\left(t\\right)$. 
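For instance, the $k=1$, $\\Lambda>0$ entry of Table \\ref{tab:flrw} can be checked directly: \nwith $a=\\cosh\\left(\\sqrt{\\Lambda_D}t\\right)\/\\sqrt{\\Lambda_D}$ one has \n$\\dot{a}=\\sinh\\left(\\sqrt{\\Lambda_D}t\\right)$, so that $\\ddot{a}=\\Lambda_D a$ and \n$\\dot{a}^2=\\cosh^2\\left(\\sqrt{\\Lambda_D}t\\right)-1=\\Lambda_D a^2-1$, \nin accordance with (\\ref{eq:Feq}) for $k=1$. 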
\nNote that we have assumed $a (0) = \\frac{1}{ \\sqrt{\\Lambda_D} }$ \nfor the case of $k=0$ and $ \\Lambda > 0 $.\n\n\n\n\\begin{table}[t]\n\\begin{align*}\n \\begin{array}{cccc} \\hline\n & k=1 & k=0 & k=-1 \\\\ \\hline\n \\Lambda>0 & a=\\frac{1}{\\sqrt{\\Lambda_D}}\n \\cosh\\left(\\sqrt{\\Lambda_D}t\\right) \n & a=\\frac{1}{\\sqrt{\\Lambda_D}}\\exp\\left(\\sqrt{\\Lambda_D}t\\right) \n & a=\\frac{1}{\\sqrt{\\Lambda_D}}\\sinh\\left(\\sqrt{\\Lambda_D}t\\right) \\\\\n \\Lambda=0 & \\mbox{no solution} & a = \\mbox{const.} & a=t \\\\\n \\Lambda<0 & \\mbox{no solution} & \\mbox{no solution} \n & a=\\frac{1}{\\sqrt{-\\Lambda_D}}\\sin\\left(\\sqrt{-\\Lambda_D}t\\right) \\\\ \\hline\n \\end{array}\n\\end{align*}\n\\caption{Solutions of the Friedmann equations.}\n\\label{tab:flrw}\n\\end{table}\n\n\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{clll}\\hline \n & Name & $\\left\\{p_1,p_2,p_3,\\cdots,p_D,p_0\\right\\}$ & \n $[D,\\kappa_D,\\lambda_D,\\mu_D,\\zeta_D]$\\\\ \\hline\n \n 0-polytope & Point & $\\left\\{2\\right\\}$ & $[0,3,3,3,3]$\\\\\n \n 1-polytope & Line segment & $\\left\\{2,2\\right\\}$ & $[1,3,3,3,3]$ \\\\\n \n 2-polytope & $n$-sided polygon & $\\left\\{2,n,2\\right\\}$ & $[2,n,3,3,3]$ \\\\\n \\\\\n \n \\multirow{5}{*}{3-polytope} & Tetrahedron & $ \\left\\{2,3,3,2\\right\\}$\n & $[3,3,3,3,3]$ \\\\ \n & Cube & $ \\left\\{2,4,3,2\\right\\}$ & $[3,4,3,3,3]$ \\\\ \n & Octahedron & $ \\left\\{2,3,4,2\\right\\}$ & $[3,3,4,3,3]$ \\\\ \n & Dodecahedron & $ \\left\\{2,5,3,2\\right\\}$ & $[3,5,3,3,3]$ \\\\ \n & Icosahedron & $ \\left\\{2,3,5,2\\right\\}$ & $[3,3,5,3,3]$ \\\\\n \\\\\n \n \\multirow{6}{*}{4-polytope} & 5-cell & $ \\left\\{2,3,3,3,2\\right\\}$ \n & $[4,3,3,3,3]$\\\\\n & 8-cell & $ \\left\\{2,4,3,3,2\\right\\}$ & $[4,4,3,3,3]$ \\\\\n & 16-cell & $ \\left\\{2,3,3,4,2\\right\\}$ & $[4,3,3,4,3]$ \\\\\n & 24-cell & $ \\left\\{2,3,4,3,2\\right\\}$ & $[4,3,4,3,3]$ \\\\\n & 120-cell & $ \\left\\{2,5,3,3,2\\right\\}$ & $[4,5,3,3,3]$ \\\\\n & 600-cell & $ \\left\\{2,3,3,5,2\\right\\}$ & $[4,3,3,5,3]$ \\\\ \n \\\\\n \n \\multirow{3}{*}{\\begin{tabular}{l} $n$-polytope \\\\ \n $\\left(n\\geq 5\\right)$ \\end{tabular}} & $n$-simplex $\\alpha_n$ & \n $\\left\\{2,3^{n-1},2\\right\\}$ & $[n,3,3,3,3]$ \\\\\n & $n$-orthoplex $\\beta_n$ & $\\left\\{2,3^{n-2},4,2\\right\\}$ & $[n,3,3,3,4]$ \\\\ \n & $n$-cube $\\gamma_n$ & $\\left\\{2,4,3^{n-2},2\\right\\}$ & $[n,4,3,3,3]$ \\\\\n \\hline\n \\end{tabular}\n \\caption{Extended Schl\\\"afli symbols for regular polytopes. \n The symbol $\\{2,3^4,2\\}$ is an abbreviation of \n $\\{2,3,3,3,3,2\\}$.\n By H. M. S. Coxeter the $n$-simplex, $n$-orthoplex, and $n$-cube are labeled as $\\alpha_n$, $\\beta_n$, and $\\gamma_n$, respectively \\cite{Coxeter}.\n The parameter set $ \\left[ D, \\kappa_D, \\lambda_D, \\mu_D, \\zeta_D \\right] $ is another way to specify a regular $D$-polytope introduced in Sect. 
\\ref{sec:req}.\n }\n \\label{tab:ssfrpt}\n\\end{table}\n\n\n\nAs preparation for the investigation of polytopal universes, \nwe work in Euclidean space-time for the time being and explain an \nepitome of Regge calculus; in Regge calculus, the discrete gravitational \naction is given by the Regge action\\cite{Miller:1997aa}\n\\begin{align}\n \\label{eq:ract}\n S_{\\rm Regge}=\\frac{1}{8\\pi}\\left(\\sum_{i\\in\\rm \\{hinges\\}}\n \\varepsilon_iA_i-\\Lambda\\sum_{i\\in\\rm \\{blocks\\}} V_i\\right),\n\\end{align}\nwhere $A_i$ is the volume of a hinge, $\\varepsilon_i$ the deficit \nangle around the hinge of volume $A_i$, and $V_i$ the volume of a building \nblock of the piecewise linear manifold.\nThe fundamental variables in Regge calculus are the edge lengths $l_i$. \nVarying the Regge action with respect to $l_i$, we obtain the Regge \nequations\n\\begin{align}\n \\label{eq:regeq}\n \\sum_{i\\in \\rm \\{hinges\\}}\\varepsilon_i\\frac{\\partial A_i}{\\partial l_j}\n -\\Lambda\\sum_{i\\in \\rm \\{ blocks \\}}\\frac{\\partial V_i}{\\partial l_j}=0.\n\\end{align}\nNote that there is no need to carry out the variation of the deficit \nangle owing to the Schl\\\"afli \nidentity\\cite{Schlafli:1858aa,HHKL:2015aa}\n\\begin{align}\n\\sum_{i\\in\\rm\\{hinges\\}}A_i\\frac{\\partial\\varepsilon_i}{\\partial l_j}=0.\n\\end{align}\n\nWe now turn to polytopal universe. According to CW formalism we \nreplace $(D-1)$-dimensional hyperspherical Cauchy surface in FLRW \nuniverse by a fixed type of regular $D$-polytope. In general a \nregular $D$-polytope for $D \\geq 2$ is characterized by a set of \n$D-1$ integer parameters $ \\left\\{p_2,p_3,\\cdots,p_D\\right\\}$, known \nas Schl\\\"afli symbol\\cite{Coxeter,Hitotsumatsu}. In this paper we \nintroduce $p_0=p_1=2$ to include the cases of $D=0,~1$ and write \nthe Schl\\\"afli symbol as $\\left\\{p_1,p_2 , p_3\\cdots,p_D,p_0\\right\\}$, which \nwill be referred to as extended Schl\\\"afli symbol. \nEach regular $D$-polytope has a corresponding dual polytope represented by the extended Schl\\\"afli symbol in reverse order $\\left\\{ p_0 , p_n , p_{n-1} , \\cdots , p_1 \\right\\}$.\nNote that there are only three types of regular polytopes in dimensions larger than \nfour: the $n$-simplex, $n$-orthoplex, and $n$-cube being, \nrespectively, higher dimensional analogs of the tetrahedron, octahedron, and \ncube in three dimensions. In \nTable \\ref{tab:ssfrpt}\\cite{TF:2020aa} we summarize all possible \nregular polytopes in arbitrary dimensions.\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=1]{fig_frustum.eps}\n \\caption{The $i$-th frustum as the fundamental building block \n of the 5-polytopal universe for $\\left\\{2,3^4,2\\right\\}$.\n A lower cell like ABCDE for $ \\left\\{2,3,3,3,2\\right\\} $ with \n edge length $l_i$ at the time $t_i$ evolves into an upper one \n A$^\\uparrow$B$^\\uparrow$C$^\\uparrow$D$^\\uparrow$E$^\\uparrow$ \n with $l_{i+1}$ at $t_{i+1}$. The 3-frustum \n ABC-A$^\\uparrow$B$^\\uparrow$C$^\\uparrow$ having 2-simplices \n $\\left\\{2,3,2\\right\\}$ as base faces is a temporal hinge, and \n the 3-simplex ABCD for $\\left\\{2,3,3,2\\right\\}$ a spatial hinge.}\n \\label{fig:fbb}\n\\end{figure}\n\n\n\nIn the present polytopal universe the fundamental building blocks \nof space-time are world-tubes of $D$-dimensional frustums with the \nregular $(D-1)$-polytopes $\\left\\{p_1,\\cdots,p_{D-1},p_0\\right\\} $ as the \nupper and lower cells. We will refer to them as $D$-frustums. 
\nIn Figure \\ref{fig:fbb} we give, as an illustration, a depiction of \na 5-frustum with 4-simplices as base cells. \nWe assume that the \nupper and lower cells of a block lie in two consecutive time-slices \nseparately and every strut between them has equal length. \nWe denote the volume of the $i$-th $D$-frustum by $V_i$. It \ncontains two types of the fundamental variables: the edge lengths \n$l_i$ and $l_{i+1}$ of the lower and upper $(D-1)$-polytopes, and the \nlengths of the struts $m_i$. In a $D$-dimensional piecewise linear \nmanifold, hinges are ($D-2$)-dimensional objects, where curvature \nis concentrated. \nThere are two types of hinges. One is temporally \nextended $(D-2)$-frustums with regular $(D-3)$-polytopes \n$\\left\\{p_1,\\cdots,p_{D-3},p_0\\right\\}$ as the base cells, like the frustum \n$\\mathrm{ABC\\hbox{-}A}^\\uparrow\\mathrm{B}^\\uparrow\\mathrm{C}^\\uparrow$ \nin Figure \\ref{fig:fbb}. We call them ``temporal hinges'' and denote by \n$A_i^{({\\rm t})}$ the volume of a temporal hinge between the $i$-th \nand $(i+1)$-th Cauchy surfaces. The other is spatially traversed\nregular $(D-2)$-polytopes $\\left\\{p_1,\\cdots,p_{D-2},p_0\\right\\}$ as \nfacets of a Cauchy cell, or equivalently ridges of Cauchy surface, \nsuch as $\\mathrm{ABCD}$. \nNote that in geometry a $(D-1)$-, $(D-2)$-, and $(D-3)$-dimensional face of \n$D$-polytope are also called a facet, ridge, and peak, respectively. \nWe call the codimension two polytopes ``spatial hinges'' and denote by \n$A_i^{({\\rm s})}$ the volume of the hinge lying in the $i$-th time-slice. \n\nWe are able to write the Regge action for the polytopal universe\nby counting the numbers of temporal hinges lying \nbetween two consecutive time-slices, spatial hinges \nin a time-slice, and $D$-frustums. They are just the numbers of peaks, ridges, and facets of the $D$-polytope, respectively. \nLet $N^{(D)}_n$ be the number of $n$-dimensional faces of \na regular $D$-polytope, then the Regge action (\\ref{eq:ract}) \ncan be written as\n\\begin{align}\n \\label{eq:regact}\n S_\\mathrm{Regge}=\\frac{1}{8\\pi}\\sum_i\\left(N^{(D)}_{D-3}A^{({\\rm t})}_i\n \\varepsilon_i^{\\rm (t)}+N^{(D)}_{D-2} A^{({\\rm s})}_i\\varepsilon_i^{\\rm (s)}\n -N^{(D)}_{D-1}\\Lambda V_i\\right),\n\\end{align}\nwhere $\\varepsilon^{\\rm ({\\rm t})}_i$ and $\\varepsilon_i^{\\rm ({\\rm s})}$ are \nthe deficit angles around a temporal hinge of volume $A_i^{(\\rm t)}$ and \na spatial hinge of volume $A_i^{(\\rm s)}$, respectively. The summation is \ntaken over the time-slices. The volume of the frustum, those of hinges, \nand deficit angles can be expressed in terms of the fundamental \nvariables $l$'s and $m$'s. \n\nFor the purpose it is convenient to introduce the circumradius $\\hat R_n$ and \nvolume $\\hat{\\cal V}^{(n)}$ of a regular $n$-polytope \n$\\Pi_n= \\{p_1,p_2,\\cdots,p_n,p_0\\} $ with unit edge length. In \nAppendix \\ref{sec:cada} we give a general formula for $\\hat R_n$. \nSee (\\ref{eq:hatRn}) and (\\ref{eq:cfsinphi}). 
\nThe normalized \nvolume $\\hat{\\cal V}^{(n)}$ can be obtained from the recurrence \nrelation\n\\begin{align}\n \\label{eq:rr_rpv}\n \\hat{\\cal{V}}^{(n)}=\\frac{N^{(n)}_{n-1}\\sqrt{\\hat{R}_n^2-\\hat{R}_{n-1}^2}}{n} \n \\hat{\\cal{V}}^{(n-1)}, \\quad \\hat{\\cal{V}}^{(0)}=1,\n\\end{align}\nwhere $\\hat R_0=0$ is assumed.\nIt is now straightforward to write the volumes $V_i$ and \n$A_i^{(\\mathrm{s,t})}$ as \n\\begin{align}\n \\label{eq:VAA}\n V_i &=\\frac{1}{D} \\hat{\\cal{V}}^{(D-1)} \n \\sqrt{m_i^2-\\hat{R}_{D-1}^2 \\delta l_i^2}\n \\frac{l^D_{i+1}-l^D_i}{l_{i+1}-l_i}, \\\\\n A_i^{({\\rm s})}&=\\hat{\\cal{V}}^{(D-2)}l_i^{D-2}, \\\\\n A_i^{({\\rm t})}&=\\frac{1}{D-2}\\hat{\\cal{V}}^{(D-3)} \n \\sqrt{m_i^2-\\hat{R}_{D-3}^2\\delta l_i^2}\n \\frac{l^{D-2}_{i+1}-l^{D-2}_i}{l_{i+1}-l_i},\n\\end{align}\nwhere we have introduced the difference of edge \nlength $ \\delta l_i = l_{i+1} - l_i $.\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=.9]{fig_temporal.eps}\n \\caption{(a) Two lateral cells $c^{(\\rm l)}_{{\\rm D}i}$ and \n $c^{(\\rm l)}_{{\\rm E}i}$ are meeting at a temporal hinge \n $h^{(\\rm t)}_i$, (b) $\\theta^{(4)}_i$ is the dihedral \n angle between these cells, and (c) $\\varepsilon^{(\\rm t)}_i$ \n the deficit angle around the hinge $h^{(\\rm t)}_i$ made by \n $p_5$ frustums $\\left(V_i\\right)_1,\\cdots,\\left(V_i\\right)_{p_5}$ \n having $ h^{(\\rm t)}_i $ as a lateral cell in common.}\n \\label{fig:tda}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=.9]{fig_spatial.eps}\n \\caption{ (a) Two spatial hinges $h^{(\\rm s)}_i$ and $h^{(\\rm s)}_{i+1}$ in the $i$-th frustum,\n (b) dihedral angles $\\phi^{(4)\\uparrow}_i$ and $\\phi^{(4)\\downarrow}_{i+1}$,\n and (c) deficit angle $ \\varepsilon^{(\\rm s)}_i $.\n }\n \\label{fig:sda}\n\\end{figure}\n\n\n\nTo find the deficit angle around a hinge we need a dihedral \nangle between two adjacent cells jointed at the hinge. As an example \nconsider the hinges of a 5-frustum with regular 4-polytopal bases as \nlaid out in Figure \\ref{fig:fbb}. 
At the temporal hinge \n$h^{(\\rm t)}_i=\\mathrm{ABC\\hbox{-}A^\\uparrow B^\\uparrow C^\\uparrow}$ in \nFigure \\ref{fig:tda}, the dihedral angle $\\theta^{(4)}_i$ is made by two \nlateral cells \n$c^{(\\rm l)}_{{\\rm D}i}=\\mathrm{ABCD\\hbox{-}A^\\uparrow B^\\uparrow C^\\uparrow D^\\uparrow}$ \nand $c^{(\\rm l)}_{{\\rm E}i} =\\mathrm{ABCE\\hbox{-}A^\\uparrow B^\\uparrow C^\\uparrow E^\\uparrow}$.\nOn the other hand, $\\phi^{(4)\\uparrow}_i$ is the dihedral angle at the hinge \n$h^{(\\rm s)}_i =\\mathrm{ABCD}$ between the lateral cell $c^{(\\rm l)}_{{\\rm D}i}$ \nand the lower base cell $c^{(\\rm b)}_{{\\rm E}i}=\\mathrm{ABCDE}$ as illustrated \nin Figure \\ref{fig:sda}, and similarly $\\phi^{(4)\\downarrow}_{i+1}$ the \none between $c^{(\\rm l)}_{{\\rm D}i}$ and \n$c^{(\\rm b)}_{{\\rm E}i+1}=\\mathrm{A^\\uparrow B^\\uparrow C^\\uparrow D^\\uparrow E^\\uparrow}$ \nat $h^{(\\rm s)}_{i+1} =\\mathrm{A^\\uparrow B^\\uparrow C^\\uparrow D^\\uparrow }$.\nFor a $D$-frustum with $(D-1)$-polytopal bases, the dihedral angles \n$\\theta^{(D-1)}_i$ and $\\phi^{(D-1)\\downarrow}_{i+1}$ can be written as\n\\begin{align}\n \\label{eq:theta_ptu}\n \\theta_i^{(D-1)} &=2\\arccos\\left[\\sqrt{\\frac{m_i^2-\\hat R^2_{D-1}\\delta l_i^2}{%\n m_i^2-\\hat R^2_{D-2}\\delta l_i^2}}\\cos\\frac{\\vartheta_{D-1}}{2}\\right], \\\\\n \\label{eq:phi_down_ptu}\n \\phi_{i+1}^{(D-1)\\downarrow} &=\\arccos\\left[\\sqrt{\\frac{\\hat R^2_{D-1}-\\hat R_{D-2}^2}{%\n m_i^2-\\hat R^2_{D-2}\\delta l_i^2}}\\:\\delta l_i\\right],\n\\end{align}\nwhere $\\vartheta_n$ is the dihedral angle of a regular $n$-polytope $\\Pi_n$. \nSince the upper cell of the $D$-frustum is parallel to the lower, \n$\\phi^{(D-1)\\uparrow}_i$ and $\\phi^{(D-1)\\downarrow}_{i+1}$ satisfy\n\\begin{align}\n\\phi^{(D-1)\\uparrow}_i + \\phi^{(D-1)\\downarrow}_{i+1} = \\pi.\n\\end{align}\nIn Appendix \\ref{sec:cada} we give a short account of dihedral angles of \nregular polytopes. Derivations of (\\ref{eq:theta_ptu}) and (\\ref{eq:phi_down_ptu}) \nare given in Appendix \\ref{sec:dadpf}. \n\nTaking it into the consideration that $p_D$ \nfrustums have a temporal hinge in common as in Figure \\ref{fig:tda}(c), \nthe deficit angle $\\varepsilon_i^{({\\rm t})}$ is given by\n\\begin{align}\n \\label{eq:dati}\n \\varepsilon_i^{({\\rm t})}=2\\pi- p_D \\theta_i^{(D-1)}.\n\\end{align}\nOn the other hand the spatial hinge $h^{(\\rm s)}_i$ is always \nshared by four frustums as illustrated in Figure \\ref{fig:sda}(c): \ntwo adjacent blocks of volume $V_i$ in the future side and two $V_{i-1}$ in the \npast side. Thus the deficit angle $\\varepsilon_i^{({\\rm s})}$ is \nexpressed as\n\\begin{align}\n \\label{eq:dasi}\n \\varepsilon_i^{({\\rm s})}=2 \\pi-2\\left(\\phi_i^{(D-1)\\uparrow}\n +\\phi_i^{(D-1)\\downarrow}\\right)\n =2\\delta\\phi_i^{(D-1)\\downarrow},\n\\end{align}\nwhere $\\delta\\phi_i^{(D-1)\\downarrow}\n=\\phi_{i+1}^{(D-1)\\downarrow}-\\phi_i^{(D-1)\\downarrow}$. \n\nA facet of regular $D$-polytope is a $(D-1)$-polytope having \n$N^{(D-1)}_{D-2}$ ridges of the $D$-polytope and a ridge is shared \nby two facets, so that $N^{(D)}_{D-1}$, $N^{(D-1)}_{D-2}$, and \n$N^{(D)}_{D-2}$ satisfy \n$N^{(D-1)}_{D-2}N^{(D)}_{D-1}=2N^{(D)}_{D-2}$. \nLikewise, a ridge has $N^{(D-2)}_{D-3}$ peaks of the $D$-polytope and a peak joints two \nridges in a facet, so a facet has $\\dfrac{N^{(D-1)}_{D-2}N^{(D-2)}_{D-3}}{2}$ \npeaks. 
Taking it into account of the fact that a peak connects $p_D$ \nfacets, we find a relation\n$\\dfrac{N^{(D-1)}_{D-2} N^{(D-2)}_{D-3} }{2}N^{(D)}_{D-1}\n=p_DN^{(D)}_{D-3}$. These constraints together with (\\ref{eq:rr_rpv})\nlead to \n\\begin{align}\n \\label{eq:RatioAs}\n \\frac{N^{(D)}_{D-2}\\hat{\\cal V}^{(D-2)}}{N^{(D)}_{D-3}\\hat{\\cal V}^{(D-3)}}\n =&\\frac{p_D}{D-2}\n \\sqrt{\\hat R_{D-2}^2-\\hat R_{D-3}^2}, \\\\\n \\label{eq:RatioV}\n \\frac{N^{(D)}_{D-1}\\hat{\\cal V}^{(D-1)}}{N^{(D)}_{D-3}\\hat{\\cal V}^{(D-3)}} \n =&\\frac{2p_D}{(D-1)(D-2)}\n \\sqrt{(\\hat R_{D-1}^2-\\hat R_{D-2}^2)(\\hat R_{D-2}^2-\\hat R_{D-3}^2)} \\nonumber \\\\\n =&\\frac{2p_D}{(D-1)(D-2)}\n (\\hat R_{D-2}^2-\\hat R_{D-3}^2)\\tan\\frac{\\vartheta_{D-1}}{2},\n\\end{align}\nwhich can be used to factor out the three couplings \nappearing \nin the action (\\ref{eq:regact}). As for the second equality in \n(\\ref{eq:RatioV}), use has been made of (\\ref{eq:Rnrecr}). We thus obtain \n\\begin{align}\n \\label{eq:regactd}\n S_\\mathrm{Regge}=&\\frac{N^{(D)}_{D-3}\\hat{\\cal{V}}^{(D-3)}}{8\\pi}\\sum_i\n \\Biggl(\\frac{1}{D-2} \n \\sqrt{m_i^2-\\hat{R}_{D-3}^2\\delta l_i^2}\n \\frac{l^{D-2}_{i+1}-l^{D-2}_i}{l_{i+1}-l_i}\n \\varepsilon_i^{\\rm (t)} \\nonumber\\\\\n &+\\frac{2p_D}{D-2}\n \\sqrt{\\hat R_{D-2}^2-\\hat R_{D-3}^2}l_i^{D-2}\n \\delta\\phi_i^{(D-1)\\downarrow} \\nonumber\\\\\n &-\\frac{p_D\\Lambda_D}{D}(\\hat R_{D-2}^2-\\hat R_{D-3}^2)\n \\sqrt{m_i^2-\\hat{R}_{D-1}^2 \\delta l_i^2}\n \\frac{l^D_{i+1}-l^D_i}{l_{i+1}-l_i}\\tan\\frac{\\vartheta_{D-1}}{2}\\Biggr).\n\\end{align}\nIn later sections we are interested in the continuum time limit. \nWe replace $l_i$ and $m_i$ by $l(\\tau)$ and $n(\\tau)\\delta \\tau$, where \n$\\tau$ is an arbitrary parameter and $n(\\tau)$ can be regarded as lapse \nfunction in ADM formalism. The continuum limit $\\delta\\tau\\rightarrow d \\tau$ \nof the action can easily be obtained from (\\ref{eq:regactd}) as \n\\begin{align}\n \\label{eq:ctregact}\n S_\\mathrm{Regge}=&\\frac{N^{(D)}_{D-3}\\hat{\\cal{V}}^{(D-3)}}{8\\pi}\n \\int d\\tau\n \\Biggl( \n \\sqrt{n^2-\\hat{R}_{D-3}^2\\dot l^2}\\:l^{D-3}\n \\varepsilon^{\\rm (t)}\n -2p_D\n \\sqrt{\\hat R_{D-2}^2-\\hat R_{D-3}^2}\\:l^{D-3}\\dot l\n \\phi^{(D-1)\\downarrow} \\nonumber\\\\\n &-p_D\\Lambda_D(\\hat R_{D-2}^2-\\hat R_{D-3}^2)\n \\sqrt{n^2-\\hat{R}_{D-1}^2\\dot l^2}\\:l^{D-1}\\tan\\frac{\\vartheta_{D-1}}{2}\\Biggr),\n\\end{align}\nwhere $\\dot l=\\dfrac{dl}{d\\tau}$ and total $\\tau$ derivative \nterms are suppressed. We have also introduced continuum limits \nof (\\ref{eq:theta_ptu}), (\\ref{eq:phi_down_ptu}), and (\\ref{eq:dati})\nby \n\\begin{align}\n \\label{eq:ctheta_ptu}\n \\varepsilon^{(\\mathrm{t})}&=2\\pi-p_D\\theta^{(D-1)} \\quad\n \\hbox{with} \\quad\n \\theta^{(D-1)}=2\\arccos\\left[\\sqrt{\\frac{n^2-\\hat R^2_{D-1}\\dot l^2}{%\n n^2-\\hat R^2_{D-2}\\dot l^2}}\\cos\\frac{\\vartheta_{D-1}}{2}\\right], \\\\\n \\label{eq:cphi_down_ptu}\n \\phi^{(D-1)\\downarrow} &=\\arccos\n \\left[\\sqrt{\\frac{\\hat R^2_{D-1}-\\hat R_{D-2}^2}{%\n n^2-\\hat R^2_{D-2}\\dot l_i^2}}\\:\\dot l\\right].\n\\end{align}\nThe Regge action (\\ref{eq:ctregact}) is invariant under an\narbitrary reparameterization\n\\begin{align}\n \\label{eq:rep}\n \\tau \\to \\tau'=f(\\tau), \\quad\n n (\\tau) \\to n'(\\tau')=\\frac{n(\\tau)}{\\dot f(\\tau)}, \\quad\n l (\\tau) \\to l'(\\tau')=l(\\tau).\n\\end{align}\nThis can be used to fix the lapse function. 
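The invariance can be checked explicitly: under (\\ref{eq:rep}) with $\\dot f>0$ one has \n$d\\tau'=\\dot f\\,d\\tau$ and $dl'\/d\\tau'=\\dot l\/\\dot f$, so the ratio $\\dot l\/n$ is unchanged and \nhence so are the angles (\\ref{eq:ctheta_ptu}) and (\\ref{eq:cphi_down_ptu}), while, writing \n$\\hat{R}$ for any of the circumradii appearing in (\\ref{eq:ctregact}), \n\\begin{align*}\n \\sqrt{n'^2-\\hat{R}^2\\left(\\frac{dl'}{d\\tau'}\\right)^2}\\,d\\tau'\n =\\sqrt{n^2-\\hat{R}^2\\dot l^2}\\,d\\tau, \\qquad\n \\frac{dl'}{d\\tau'}\\,d\\tau'=\\dot l\\,d\\tau,\n\\end{align*}\nso that each term of the integrand in (\\ref{eq:ctregact}) is separately invariant. 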
\n\n\n\n\n\n\n\\section{Regge equations}\n\\label{sec:req}\n\\setcounter{equation}{0}\n\nThe Regge equations can be obtained by taking variations of \nthe Regge action with respect to $n$ and $l$. The equations \nof motion possess the local symmetry (\\ref{eq:rep}). \nWe must fix it by imposing some condition on the dynamical \nvariables. Furthermore, the action is based on the piecewise \nlinear manifold with Euclidean signature. We must carry out \ninverse Wick rotation to recover Lorentzian signature. As for \nfixing the local invariance we impose the following gauge \ncondition on the lapse function \n\\begin{align}\n \\label{eq:gconn}\n n(\\tau)=1.\n\\end{align}\nWe then carry out inverse Wick rotation by $\\tau=it$, where $t$ \ncan be regarded as the time of a clock fixed \nat a vertex of the polytopal universe. The time axis is taken \nto be parallel to a strut. It is not orthogonal to Cauchy cells. \nIf we consider nonregular polytopes with shorter edge lengths \nand more cells such as geodesic domes \\cite{TF:2016aa}, we \nwould have a better approximation of a smooth hypersphere. \nThe orthogonality of the time axis with the spatial ones as \nin the FLRW universe can be restored in the limit of smooth \nhypersphere. We thus obtain the Regge equations \n\\begin{align}\n &2\\pi-p_D\\theta^{(D-1)}\n =p_D\\Lambda_D(\\hat{R}_{D-2}^2-\\hat{R}_{D-3}^2)\n \\sqrt{\\frac{1+\\hat{R}_{D-3}^2\\dot{l}^2}{1+\\hat{R}_{D-1}^2\\dot{l}^2}}\n \\:l^2\\tan\\frac{\\vartheta_{D-1}}{2}, \n \\label{eq:chc_ptu} \\\\\n \n &\\frac{\\ddot l}{1+\\hat R_{D-2}^2\\dot l^2}\n =\\Lambda_D l\\left[1+\\hat R_{D-3}^2\\dot l^2\n -\\frac{(\\hat R_{D-1}^2-\\hat R_{D-3}^2)l\\ddot l}{%\n 2(1+\\hat R_{D-1}^2\\dot l^2)}\\right],\n\\label{eq:cev_ptu}\n\\end{align}\nwhere the dots on $l$ stand for $t$ derivatives and \n$\\theta^{(D-1)}$ in lorentzian signature is given by\n\\begin{align}\n \\label{eq:lstheta}\n \\theta^{(D-1)}=2\\arccos\\left[\\sqrt{\\frac{1+\\hat R^2_{D-1}\\dot l^2}{%\n 1+\\hat R^2_{D-2}\\dot l^2}}\\cos\\frac{\\vartheta_{D-1}}{2}\\right].\n\\end{align}\nEq. (\\ref{eq:chc_ptu}) is known as the Hamiltonian constraint in \nADM formalism of canonical General Relativity. The equation of \nmotion for $l$ is referred to as the evolution equation. \nWe have simplified the evolution equation by using the Hamiltonian \nconstraint. It is straightforward to show that the evolution \nequation can be obtained as the consistency of the Hamiltonian \nconstraint with the time-development. We also mention that \n(\\ref{eq:chc_ptu}) and (\\ref{eq:cev_ptu}) reproduce the results \nof Refs. \\cite{TF:2016aa,TF:2020aa} in three and four dimensions.\n\nIt is convenient to express the solution to the Regge equations \nin terms of the dihedral angle $\\theta=\\theta^{(D-1)}$. 
Solving \n(\\ref{eq:chc_ptu}) and (\\ref{eq:lstheta}) with respect to \n$l^2$ and $\\dot l^2$, we obtain \n\\begin{align}\n \\label{eq:lsqr}\n l=&\\sqrt{\\frac{(2\\pi-p_D\\theta)\\cot\\frac{\\theta}{2}}{%\n p_D\\Lambda_D(\\hat R_{D-2}^2-\\hat R_{D-3}^2)}}, \\\\\n \\label{eq:dsqr}\n \\dot l=&\\pm\\frac{1}{\\hat R_{D-2}}\\sqrt{\\frac{\\cos\\theta-\\cos\\theta_0}{%\n \\cos\\theta_\\mathrm{c}-\\cos\\theta}},\n\\end{align}\nwhere $\\theta_0=\\vartheta_{D-1}$ stands for the dihedral angle of a Cauchy cell $ \\left\\{ p_1 , p_2 , \\cdots , p_{D-1} , p_0 \\right\\} $ and determines the minimum size of \nthe universe.\n$\\theta_\\mathrm{c}$ is defined by\n\\begin{align}\n \\label{eq:jc}\n \\theta_{\\mathrm{c}}=2\\arcsin\\left[\\frac{\\hat R_{D-3}}{\\hat R_{D-2}}\n \\sin\\frac{\\vartheta_{D-1}}{2}\\right]. \n\\end{align}\nThe velocity $\\dot l$ diverges for $\\theta=\\theta_\\mathrm{c}$, where \nthe edge length becomes maximum. In three dimensions $\\theta_\\mathrm{c}=0$ \nsince $\\hat R_0=0$. \nIt matches $\\vartheta_1$ the dihedral angle of a 1-polytope $ \\left\\{ p_1 , p_0 \\right\\} $.\nSee (\\ref{eq:vth1}).\nIn dimensions larger than three $\\theta_\\mathrm{c}$ \nequals a dihedral angle of a regular polytope corresponding to extended \nSchl\\\"afli symbol $\\{p_1,p_3,\\cdots,p_{D-1},p_0\\}$, \nwhich is a vertex figure of a Cauchy cell. \nFor the vertex figure, see Appendix \\ref{sec:cada}.\n\n\n\nEliminating $l$ from (\\ref{eq:lsqr}) and (\\ref{eq:dsqr}),\nwe can derive the differential equation for $\\theta$\n\\begin{align}\n \\dot\\theta&=\\mp\\frac{2\\sqrt{p_D\\Lambda_D(2\\pi-p_D\\theta)\\sin\\theta}}{%\n 2\\pi-p_D(\\theta-\\sin \\theta)}\n \\frac{\\sin\\frac{\\theta}{2}}{\\sin\\frac{\\theta_0}{2}}\n \\sqrt{\\frac{(\\cos\\theta_{\\mathrm{c}}-\\cos\\theta_0) \n (\\cos\\theta-\\cos\\theta_0)}\n {\\cos\\theta_{\\mathrm{c}}-\\cos\\theta}}.\n \\label{eq:tde_ptu}\n\\end{align}\nThe upper sign corresponds to expanding universe and the lower \nto shrinking one. This leads to an integral \nrepresentation\n\\begin{align}\n \\label{eq:time}\n t \\left( \\theta \\right) =\\pm\\frac{1}{2\\sqrt{p_D\\Lambda_D}}\n \\int_\\theta^{\\theta_0} du\\frac{2\\pi-p_D(u-\\sin u)}{%\n \\sqrt{(2\\pi-p_Du)\\sin u}}\\frac{\\sin\\frac{\\theta_0}{2}}{%\n \\sin\\frac{u}{2}}\n \\sqrt{\\frac{\\cos\\theta_{\\mathrm{c}}-\\cos u}{%\n (\\cos\\theta_{\\mathrm{c}}-\\cos\\theta_0)(\\cos u-\\cos\\theta_0)}},\n\\end{align}\nwhere $\\theta_{\\mathrm c} \\leq \\theta\\leq\\theta_0$. We have assumed the initial \ncondition\n\\begin{align}\n \\label{eq:initc}\n \\theta(0)=\\theta_0.\n\\end{align}\nAs a function of $t$, the dihedral angle $\\theta$ \nis even \nand monotonically decreasing from $\\theta_0$ to \n$\\theta_\\mathrm{c}$ for $0\\leq t\\leq\\tau_\\mathrm{p}\/2$, where $\\tau_\\mathrm{p}$ is given by \n$\\tau_\\mathrm{p}=2t(\\theta_\\mathrm{c})$. We can extend $\\theta(t)$ as a \ncontinuous periodic function for arbitrary $t$ by\n\\begin{align}\n \\label{eq:tau}\n \\theta(t+\\tau_\\mathrm{p})=\\theta(t). \n\\end{align}\nThe edge length (\\ref{eq:lsqr}) is also a periodic function of $t$. \nIt is continuous for $D\\geq4$, while $l$ diverges for \n$\\theta \\left( \\tau_\\mathrm{p}\/2 \\right) =\\theta_{\\mathrm{c}}=0$ \nin three dimensions. \nNote that $\\dot l\/l$ not only diverges \nbut also has a discontinuity at $t=\\pm\\tau_\\mathrm{p}\/2,\n~\\pm3\\tau_\\mathrm{p}\/2,~\\cdots$. 
At present it is only an \nassumption that the polytopal universe in four or more dimensions \njumps from expansion to contraction when it reaches the maximum size. \n\n\n\nIn dimensions larger than four there are only three types of regular \npolytopes. As can easily be seen from Table \\ref{tab:ssfrpt} any regular \npolytope can be characterized by $p_2$, $p_3$, $p_D$, and $D$. It is \npossible to write the circumradii $\\hat R_{D-k}$ ($k=1,2,3$) and \ndihedral angles $\\vartheta_{D-1}$ appearing \nin (\\ref{eq:chc_ptu})--(\\ref{eq:lstheta}) in more tractable forms by noting (\\ref{eq:hatRD}) \nand (\\ref{eq:vTD}). To this end we define a set of parameters \n$\\kappa_n$, $\\lambda_n$, $\\mu_n$, and $\\zeta_n$ by\n\\begin{align}\n \\kappa_n &= 3 \\sum_{j=0}^1\\delta_{j,n}+p_2\\sum_{j=2}^\\infty\\delta_{j,n}, \\\\\n \\lambda_n&=3\\sum_{j=0}^2\\delta_{j,n}+p_3\\sum_{j=3}^\\infty\\delta_{j,n},\\\\\n \\mu_n&=3\\sum_{j=0}^3\\delta_{j,n}+p_4\\sum_{j=4}^\\infty\\delta_{j,n},\\\\\n \\zeta_n&=3\\sum_{j=0}^4\\delta_{j,n}+p_n\\sum_{j=5}^\\infty\\delta_{j,n},\n\\end{align}\nwhere $\\delta_{j,k}$ is the Kronecker delta. \nObviously, $\\kappa_n=p_2$, $\\lambda_n=p_3$, $\\mu_n=p_4$, and $\\zeta_n=p_n$ for $n\\geq5$. We assign a regular \n$D$-polytope to a set of five parameters $\\left[D,\\kappa_D,\n\\lambda_D,\\mu_D,\\zeta_D \\right]$. In Table \\ref{tab:ssfrpt} we summarize the correspondence \nbetween regular polytopes and the symbol $\\left[D,\\kappa_D,\n\\lambda_D,\\mu_D,\\zeta_D\\right]$. \nThis allows us to express the normalized circumradius $\\hat{R}_D$ and the dihedral angle $\\vartheta_D$ in the closed forms as\n\\begin{align}\n \\label{eq:hRD}\n\\hat{R}_D &= \\frac{1}{2} \\sqrt{ \\frac{ \\left[ 1 - \\left( D-4 \\right) \\cos \\frac{ 2 \\pi }{ \\zeta_D } \\right] \\sin^2 \\frac{\\pi}{\\lambda_D} - 2 \\left[ 1 - \\left( D - 5 \\right) \\cos \\frac{ 2 \\pi }{ \\zeta_D } \\right] \\cos^2 \\frac{ \\pi }{ \\mu_D } }{%\n\\left[ 1 - \\left( D-4 \\right) \\cos \\frac{2 \\pi}{\\zeta_D} \\right] \\left( \\sin^2 \\frac{\\pi}{\\lambda_D} - \\cos^2 \\frac{\\pi}{\\kappa_D} \\right) \n- \n2 \\left[ 1 - \\left( D-5 \\right) \\cos \\frac{2 \\pi}{\\zeta_D} \\right] \\sin^2 \\frac{\\pi}{\\kappa_D} \\cos^2 \\frac{ \\pi}{\\mu_D} \n} }, \\\\\n\\label{eq:vthD}\n\\vartheta_D &= 2 \\arcsin \\left( \\sqrt{ 2 \\frac{ \\sin^2 \\frac{ \\pi }{ \\kappa_D } \\left[ 1 - \\left( D - 5 \\right) \\cos \\frac{ 2 \\pi }{ \\mu_D } \\right] - \\left( D - 4 \\right) \\cos^2 \\frac{\\pi}{ \\lambda_D } }{%\n\\sin^2 \\frac{ \\pi }{ \\kappa_D } \\left[ 1 - \\left( D - 4 \\right) \\cos \\frac{ 2 \\pi }{ \\mu_D } \\right] - \\left( D - 3 \\right) \\cos^2 \\frac{\\pi}{ \\lambda_D }\n} } \\cos \\frac{\\pi}{ \\zeta_D } \\right).\n\\end{align}\nThe circumradius (\\ref{eq:hRD}) is applicable in $D \\geq 0$,\nwhereas the dihedral angle (\\ref{eq:vthD}) is valid in the dimensions larger than zero.\nNote that $\\vartheta_0$ is undetermined.\nIn particular the fact that $\\mu_{D-k}=\\zeta_{D-k}=3$ with $ 1 \\leq k \\leq D $ for any regular polytope \nenables us to write the following equalities \n\\begin{align}\n \\label{eq:RDk}\n \\hat{R}_{D-k}\n &=\\displaystyle \\frac{1}{2} \\sqrt\n \\frac{\\left(D-1- k\\right)-2\\left(D-2-k\\right)\n \\cos^2\\frac{\\pi}{\\lambda_{D-1}}}\n {%\n \\left(D-1-k\\right)\\sin^2\\frac{\\pi}{p_2}-2\\left(D-2-k\\right)\n \\cos^2\\frac{\\pi}{\\lambda_{D-1}}}} \\quad \\left(k=1,2,3\\right), \\\\\n \\label{eq:cosvthD}\n \\cos\\vartheta_{D-1}\n &=\\frac{\\sin^2 \\frac{\\pi}{p_2}-2\\cos^2\\frac{\\pi}{\\lambda_{D-1}}}{%\n 
\\left(D-3\\right)\\sin^2\\frac{\\pi}{ p_2}-2\\left(D-4\\right)\n \\cos^2\\frac{\\pi}{\\lambda_{D-1}}}.\n\\end{align}\nThe Regge equations (\\ref{eq:chc_ptu}) and \n(\\ref{eq:cev_ptu}) give descriptions of the time-development of \nthe universe with a regular polytopal Cauchy surface for the parameter \nset $\\left[D,\\kappa_D,\\lambda_D,\\mu_D,\\zeta_D\\right]$. \n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=1]{fig_da_simp.eps}\n \\caption{Plots of the dihedral angles of the simplicial polytope \n models for $3\\leq D\\leq7$.}\n \\label{fig:da_simp}\n\\end{figure}\n\n\n\nTime-development of the dihedral angle $\\theta$ can be obtained \nby integrating (\\ref{eq:tde_ptu}) numerically for the initial \ncondition (\\ref{eq:initc}). We give plots of the dihedral angles \nof simplicial polytope models for $D=3,4,\\cdots,7$ and \n$0\\leq t\\leq \\tau_\\mathrm{p}\/2$ in Figure \\ref{fig:da_simp}.\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=1]{fig_sf_simp.eps}\n \\caption{Plots of the scale factors of the simplicial polytope \n models for $3\\leq D\\leq7$.\n The broken curve corresponds to the $D$-dimensional FLRW universe.\n }\n \\label{fig:sf_simp}\n\\end{figure}\n\n\n\nTo compare the polytopal universe with the continuum, we must \nintroduce a Regge calculus analog of the scale factor. \nThere are, however, ambiguities in defining a radius of a\nregular polytope. Here we simply introduce it as the radius of \nthe circumsphere of the regular polytope\n\\begin{align} \n a_{\\mathrm{R}}(t)\n &=\\hat{R}_Dl(t).\n \\label{eq:sf_ptu}\n\\end{align}\n\nInserting the solutions of (\\ref{eq:tde_ptu}) into (\\ref{eq:sf_ptu}), \nwe obtain the time-developments of the scale factors of polytopal \nuniverses. Figure \\ref{fig:sf_simp} shows the behaviors of the simplicial \nuniverses. The broken curve corresponds to the $D$-dimensional FLRW \nsolution. The 3-simplicial model expands faster than the continuum \none and diverges at $t=\\tau_\\mathrm{p} \/ 2$. For $D \\geq 4$, after \narriving at the maximum scale $a(\\tau_\\mathrm{p}\/2)$ the universe begins to \ncontract to the initial minimum size $a(0)=a(\\tau_\\mathrm{p})$. Then the \nuniverse repeats expanding and contracting with a period $\\tau_\\mathrm{p}$. \nOne easily sees that the $D$-simplices are too crude to approximate the \ncontinuum solution. The larger the space-time dimensions, the bigger \ndifference we have. The situation is somewhat improved by considering \n$D$-orthoplices or $D$-cubes in this order. For fixed space-time \ndimensions the deviation from the continuum FLRW universe become \nsmaller as the number of vertices increases. \n\nIn closing this section we comment on the case of $D$-polytopal universe \nwithout cosmological constant. In this case the Hamiltonian constraint \n(\\ref{eq:chc_ptu}) yields $\\theta^{(D-1)}=\\dfrac{2\\pi}{p_D}$. We obtain \nfrom (\\ref{eq:lstheta}) \n\\begin{align}\n \\label{eq:hcwc_ptu}\n \\dot{l}^2=-\\frac{\\cos^2\\frac{\\vartheta_{D-1}}{2}-\\cos^2\\frac{\\pi}{p_D}}{%\n \\hat R_{D-1}^2\\cos^2\\frac{\\vartheta_{D-1}}{2}-\\hat R_{D-2}^2\n \\cos^2\\frac{\\pi}{p_D}}\n =-\\frac{1}{\\hat R_D^2}.\n\\end{align}\nThere is no convex regular polytope satisfying this. The Hamiltonian \nconstraint, however, admits infinite honeycomb lattices in flat Euclidean \nspace. 
For any space-filling honeycomb the circumradius $\\hat R_D$ \ndiverges and the dihedral angle is given by \n$\\vartheta_D=\\pi$, which immediately yields \n\\begin{align}\n \\label{eq:condsfhl}\n \\cos\\frac{\\vartheta_{D-1}}{2}=\\cos\\frac{\\pi}{p_D}. \n\\end{align}\nSee (\\ref{eq:pnvth}). In Table \\ref{tab:ssfm} we summarize \nspace-filling honeycomb lattices in arbitrary dimensions. \nIt is straightforward to verify (\\ref{eq:condsfhl}).\nWe thus obtain static solutions \n$l=\\mathrm{const}$. They correspond to the Minkowski \nspace-time. In addition, in the case of $\\dot{l}^2 > 0$, \nSchl\\\"afli symbol \nsatisfying this inequality stands for a regular lattice of open \nCauchy surface of constant negative curvature. These results are \nconsistent with solutions of the Friedmann equations (\\ref{eq:Feq}).\nSee Table \\ref{tab:flrw}. \n\n\n\n\\begin{table}[t]\n\\centering\n \\begin{tabular}{llll}\\hline \n Dimensions $D$ & Name & Extended Schl\\\"afli symbol & $[D,\\kappa_D,\\lambda_D,\\mu_D,\\zeta_D]$ \\\\ \\hline\n %\n 2 & Apeirogon & $\\left\\{2,\\infty , 2 \\right\\}$ & $[2,\\infty,3,3,3]$ \\\\\n \\\\\n \n & Triangular tiling & $\\left\\{2,3,6,2\\right\\}$ & $[3,3,6,3,3]$ \\\\ \n 3 & Square tiling & $\\left\\{2,4,4,2\\right\\}$ & $[3,4,4,3,3]$ \\\\ \n & Hexagonal tiling & $\\left\\{2,6,3,2\\right\\}$ & $[3,6,3,3,3]$ \\\\ \n \\\\\n \n 4 & Cubic honeycomb & $\\left\\{2,4,3,4,2\\right\\}$ & $[4,4,3,4,3]$ \\\\ \n \\\\\n \n & 8-cell honeycomb & $\\left\\{2,4,3,3,4,2\\right\\}$ & $[5,4,3,3,4]$ \\\\\n 5 & 16-cell honeycomb & $\\left\\{2,3,3,4,3,2\\right\\}$ & $[5,3,3,4,3]$ \\\\\n & 24-cell honeycomb & $\\left\\{2,3,4,3,3,2\\right\\}$ & $[5,3,4,3,3]$ \\\\ \n \\\\\n %\n $n+1\\geq6$ & $n$-cubic honeycomb $ \\delta_{n+1} $ & $\\left\\{2,4,3^{n-2},4,2\\right\\}$\n & $[n+1,4,3,3,4]$ \\\\ \\hline\n \\end{tabular}\n\\caption{Space-filling lattices in Euclidean $\\left(D-1\\right)$-space.\n The lattices for $ D\\geq3 $ are \n corresponding to Minkowski space-time. \n The $n$-cubic honeycomb is named by Coxeter as $ \\delta_{n+1} $ \\cite{Coxeter},\n which has the extended Schl\\\"afli symbol $ \\left\\{ 2,4,3^{n-2},4,2 \\right\\} $.\n The only misfit is $ \\delta_2 = \\left\\{ 2,\\infty,2 \\right\\} $.\n }\n \\label{tab:ssfm}\n\\end{table}\n\n\n\n\n\n\n\\section{Fractional Schl\\\"afli symbol and pseudo-regular \\\\ $D$-polytopal universes}\n\n\\label{sec:prpt}\n\\setcounter{equation}{0}\n\nSo far we have investigated evolution of regular polytopes \nas a discretized FLRW universe. To go beyond the approximation \nby regular polytopes, we must introduce polytopes with more cells. \nOne way to implement this is to employ geodesic domes \\cite{TF:2016aa}. \nHypercube is the only type of regular polytope having subdivisions \nof facets in arbitrary dimensions by the same type of polytopes with\nthe parent facets. In this section we consider hypercube-based \ngeodesic domes as Cauchy surfaces of the universe. \n\n\n\n\\begin{figure}[t]\n \\centering\n\\includegraphics[scale=1.0]{fig_cube_decomp.eps}\n \\caption{Subdivision of a 3-cube as a cell of a 4-cube for \n (a) $\\nu=2$, (b) $\\nu=3$, and (c) $\\nu=4$.\n In four dimensions the peaks are the edges.\n Solid lines are the three-way connectors and broken lines \n the four-way connectors.\n }\n \\label{fig:cube_decomp}\n\\end{figure}\n\n\n\nA hypercube in $D$ dimensions has $(D-1)$-cubes as its facets. 
\nTo define a geodesic dome for the hypercube we first divide \neach facet into $\\nu^{D-1}$ pieces of $(D-1)$-cubes of edge \nlength $l\/\\nu$ as depicted in Figure \\ref{fig:cube_decomp}, \nwhere $\\nu$ is the level of the division, called frequency. \nWe then radially project the tessellated hypercube on the \ncircumsphere of the original hypercube. This results in a \ntessellation of the circumsphere. The geodesic dome $\\Gamma_\\nu$ \ncan be obtained by replacing each circular arc of the tessellated \ncircumsphere with a line segment jointing its end points. \nIn general each facet of $\\Gamma_\\nu$ thus constructed \nis not a flat $(D-1)$-space. We can always decompose these \nfacets into flat $(D-1)$-polytopes by adding extra edges. \nThe deviations from the flat $(D-1)$-spaces, however, become\nnegligible as $\\nu$ increases. We can effectively regard the \nfacets of $\\Gamma_\\nu$ as flat $(D-1)$-cubes and see any \npolytopal data of the geodesic dome such as the numbers of \nfacets, ridges, etc. from the tessellated $D$-cube. \n\nWe can apply Regge calculus \nto $\\Gamma_\\nu$ as the polyhedral model in Ref. \\cite{TF:2016aa}. \nIn the infinite frequency limit $ \\nu \\to \\infty $, geodesic \ndome reproduces a smooth sphere. So the model universe approaches \nthe FLRW universe in the limit $\\nu\\rightarrow\\infty$.\nIn practice the larger the frequency, the more cumbersome the Regge \ncalculus for geodesic domes becomes. We avoid this complexity by \nintroducing pseudo-regular polytopes as in Refs. \n\\cite{TF:2016aa,TF:2020aa}.\n\nLet us denote the pseudo-regular polytope corresponding to $\\Gamma_\\nu$ \nby $\\tilde\\Gamma_\\nu$. We assign it a fractional Schl\\\"afli symbol \n\\begin{align}\n \\label{eq:SSprP}\n \\{2,4,3^{D-3},p(\\nu),2\\},\n\\end{align}\nwhere \n$p(\\nu)$ is the averaged number of facets sharing a peak of $\\Gamma_\\nu$\nand the other $D$ integers are the Schl\\\"afli symbol of the facets of $\\Gamma_\\nu$.\nThere are two types of peaks of $\\Gamma_\\nu$ as illustrated in \nFigure \\ref{fig:cube_decomp} for a cell of 4-cube. One is \nshared by three facets. These come from the peaks of the \noriginal $D$-cube. The other connects four facets.\nThey are generated in subdividing the facets of the original \n$D$-cube. We refer to the former type as ``three-way connector''\nand the later ``four-way connector''. Counting the numbers of \neach type of connectors and averaging the number \nof facets around a peak in $\\Gamma_\\nu$, we find\n\\begin{align}\n \\label{eq:pnu}\n p(\\nu)=\\frac{12\\nu^2}{3\\nu^2+1}.\n\\end{align}\nSee Appendix \\ref{sec:pnu} for details. The result is independent \nof $D$. Furthermore, the fractional Schl\\\"afli symbol approaches the \none of $(D-1)$-cubic honeycomb in the limit $\\nu\\rightarrow\\infty$. \n\n\n\nThe basic approach of pseudo-regular polytope is to regard \n$\\tilde \\Gamma_\\nu$ as a regular polytope of edge length $l$ \nwith the fractional Schl\\\"afli symbol (\\ref{eq:SSprP}) and \nto assume that the model universe is described by the Regge \nequations (\\ref{eq:chc_ptu}) and (\\ref{eq:cev_ptu}). \nThe symbol (\\ref{eq:SSprP}) corresponds to the assignment \n\\begin{align}\n p_2=4, \\quad \\lambda_{D-1}=3, \\quad p_D=p(\\nu).\n\\end{align}\nIn particular the normalized circumradii (\\ref{eq:RDk}) and \nthe dihedral angle (\\ref{eq:cosvthD}) coincide with those of \nthe regular $D$-cube. They are independent of the frequency $\\nu$. 
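As a quick consistency check of this assignment, (\\ref{eq:pnu}) gives $p(1)=3$, so that for $\\nu=1$ \nthe symbol (\\ref{eq:SSprP}) reduces to $\\{2,4,3^{D-2},2\\}$, the regular $D$-cube of Table \\ref{tab:ssfrpt}, \nwhereas $p(\\nu)\\to4$ for $\\nu\\to\\infty$ and (\\ref{eq:SSprP}) approaches $\\{2,4,3^{D-3},4,2\\}$, \nthe $(D-1)$-cubic honeycomb of Table \\ref{tab:ssfm}, as noted above. 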
\nThe differential equation for the dihedral angle $\\theta(t)$ can be \nwritten explicitly as\n\\begin{align}\n \\label{eq:tde_prpt}\n \\dot{\\theta}(t)&=\\mp\\frac{2}{%\n 2\\pi-p(\\nu)\\left( \\theta(t)-\\sin\\theta(t)\\right)}\n \\sqrt{\\frac{p(\\nu)\\Lambda_D\\left( 2\\pi-p(\\nu)\\theta(t)\\right) \\sin2\\theta(t)}\n {1-\\left(D-2\\right)\\cos\\theta(t)}}\\sin\\frac{\\theta(t)}{2}.\n\\end{align}\nNote that the initial dihedral angle is $\\theta(0)=\\theta_0\n=\\vartheta_{D-1}=\\pi\/2$. Both $\\theta_0$ and $\\theta_{\\mathrm{c}}\n=\\arccos\\dfrac{1}{D-2}$ do not depend on $\\nu$. \n\nThe scale factor $a_{\\mathrm{R}}$ for the pseudo-regular \n$D$-polytopal universe can be defined similarly as the regular \npolytopal models as \n\\begin{align}\n a_{\\rm R}\\left(t\\right)\n \\label{eq:sf_prpt_theta}\n &=\\hat R_D(\\nu)l(t),\n\\end{align}\nwhere the edge length $l(t)$ for $\\tilde\\Gamma_\\nu$ can be \nfound from (\\ref{eq:lsqr}) as \n\\begin{align}\n \\label{eq:elprp}\n l(t)=\\frac{2}{\\sqrt{\\Lambda_D}}\n \\sqrt{ \\left(\\frac{2\\pi}{p(\\nu)}-\\theta(t)\\right)\\cot\\frac{\\theta(t)}{2}}.\n\\end{align}\nThe normalized circumradius $\\hat R_D(\\nu)$ also depends on $p_D=p(\\nu)$ \nand can be obtained from (\\ref{eq:hRD}) as \n\\begin{align}\n\\label{eq:Rhat_dcube}\n \\hat R_D(\\nu)=\\frac{1}{2}\n \\sqrt{D-2-\\sec\\frac{2\\pi}{p(\\nu)}}.\n\\end{align}\nFor $\\nu=1$ this coincides with the circumradius of a regular $D$-cube \nof unit edge length. It grows with the frequency $\\nu$ and diverges linearly\nfor $\\nu\\rightarrow\\infty$. \nIn fact Eq. (\\ref{eq:Rhat_dcube}) can be approximated for large frequency by\n\\begin{align}\n\\hat{R}_D (\\nu) \\approx \\sqrt{ \\frac{3}{2 \\pi} } \\nu.\n\\end{align}\nOn the other hand the edge length \n(\\ref{eq:elprp}) decreases roughly inversely with $\\nu$ and approaches \nzero as $\\nu\\rightarrow\\infty$. This can be seen explicitly for the \ninitial edge length\n\\begin{align}\n \\label{eq:initelprp}\n l(0)=\\sqrt{\\frac{2\\pi}{3\\Lambda_D}}\\frac{1}{\\nu}.\n\\end{align}\nThe scale factor (\\ref{eq:sf_prpt_theta}), however, remains finite for \n$\\nu\\rightarrow\\infty$. \nNoting that $\\hat R_{D-k}$ ($k=1,2,3$) \nare independent of $\\nu$ as given by (\\ref{eq:RDk}),\nit is \nstraightforward to verify that the Regge equations (\\ref{eq:chc_ptu}) \nand (\\ref{eq:cev_ptu}) for $\\tilde\\Gamma_\\nu$ reduce to the Friedmann \nequations (\\ref{eq:Feq}) in the limit $\\nu\\rightarrow\\infty$. \n\nTo see the dependences on $\\nu$ we give plots of the dihedral angles\nin Figure \\ref{fig:da_prpt} and those of the scale factors in \nFigure \\ref{fig:sf_prpt} for $D=5$, $\\nu\\leq 5$, and \n$0\\leq t\\leq\\tau_\\mathrm{p}(\\nu)\/2$, where $\\tau_\\mathrm{p}(\\nu)$ is \nthe period of the \noscillation of $\\tilde\\Gamma_\\nu$. One might think that $D$-cube-based \npseudo-regular polytopes are too crude to approximate $D$-spheres. \nAs can be seen from Figure \\ref{fig:sf_prpt}, the scale factor \napproaches rapidly the continuum one as $\\nu$ increases. \nAs mentioned above, the geodesic dome $\\Gamma_\\nu$ becomes impractical to \ncarry out Regge calculus for large $\\nu$. The advantage of the approach \nof pseudo-regular polytopes is its applicability to arbitrarily large \nfrequency without effort. The scale factor for $\\nu = 100$ is shown in \nFigure \\ref{fig:sf_prpt_nu100}. Coincidence with the continuum theory \nis excellent for $\\sqrt{\\Lambda_5}t\\sim4$. 
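The agreement near $t=0$ can also be seen analytically: combining (\\ref{eq:initelprp}) with the \nlarge-$\\nu$ behavior $\\hat{R}_D(\\nu)\\approx\\sqrt{3\/(2\\pi)}\\,\\nu$ of (\\ref{eq:Rhat_dcube}) gives \n$a_{\\rm R}(0)=\\hat{R}_D(\\nu)\\,l(0)\\to1\/\\sqrt{\\Lambda_D}$ as $\\nu\\to\\infty$, which is precisely the \ninitial value of the continuum scale factor for $k=1$ in Table \\ref{tab:flrw}. 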
The edge length becomes \ncomparable with $1\/\\sqrt{\\Lambda_5}$ at around $\\sqrt{\\Lambda_5}t\\sim4$, \nwhich marks the onset of the deviation from the continuum solution. \n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=1]{fig_da_prpt.eps}\n \\caption{Plots of the dihedral angles of the pseudo-regular \n 5-polytopal universes for $\\nu\\leq5$.}\n \\label{fig:da_prpt}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n \\centering\n\\includegraphics[scale=1]{fig_sf_prpt.eps}\n \\caption{Plots of the scale factors of the pseudo-regular 5-polytopal universes for $\\nu \\leq 5$.\n The broken curve corresponds to the five-dimensional FLRW universe.\n }\n \\label{fig:sf_prpt}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n \\centering\n\\includegraphics[scale=1]{fig_sf_prpt_nu100.eps}\n \\caption{Plot of the scale factor of the pseudo-regular 5-polytopal universe for $\\nu=100$.\n The broken curve stands for the exact solution of the continuum theory.\n }\n \\label{fig:sf_prpt_nu100}\n\\end{figure}\n\n\n\n\n\n\n\\section{Summary and discussions}\n\n\\label{sec:sum}\n\\setcounter{equation}{0}\n\nFollowing the CW formalism, we have carried out Regge calculus \nfor the closed FLRW universe with a positive cosmological constant \nin arbitrary dimensions. The geometrical characterization of \nregular polytopes by the Schl\\\"afli symbol has turned out to be \nvery efficient in describing systematically the discrete FLRW \nuniverse in spite of there being only three types of regular \npolytopes in dimensions more than four. We have given the Regge \naction in closed form in the continuum time limit. It possesses \na reparameterization invariance of the time variable to ensure \ncoordinate independence of the formalism. The Regge equations \nare the Hamiltonian constraint and the evolution equation, as \nin the continuum theory, describing the time development of the \ndiscrete FLRW universe. They coincide with the previous results \nin three and four dimensions \\cite{TF:2016aa,TF:2020aa}. In \nparticular, under the gauge choice (\\ref{eq:gconn}) the \ncircumsphere of the regular polytope repeats expansion and \ncontraction periodically in any dimension larger than four, \nas in the four-dimensional case. The Regge equations have more or \nless the same structures in dimensions greater than three. \nIt is only in three dimensions that the edge length diverges \nin a finite time. \n\nAs we have shown in Sect. \\ref{sec:req}, the approximation by \nregular polytopes is not so accurate even for $\\sqrt{\\Lambda_D}t\\ll 1$. \nThe situation gets worse as the dimensions increase. \nThis is contrasted with the cases of the dodecahedron in three \ndimensions and the 120-cell in four dimensions, which describe \nthe continuum FLRW universe rather well until $t$ becomes \ncomparable with $1\/\\sqrt{\\Lambda_D}$. The difference basically \ncomes from the number of vertices of the polytope. A \n120-cell has six hundred vertices, whereas a 4-cube has only sixteen. \nIn five or more dimensions there are no such special \npolytopes. \nOne must refine the tessellation of the Cauchy \nsurface by \nnonregular polytopes with smaller cells \nto have better \napproximations. Though this can be done by extending the \ngeodesic domes used in three dimensions, we have analyzed \npseudo-regular polytopes with the expectation that the Regge \nequations for the pseudo-regular polytopes approximate well \nthe Regge calculus of the corresponding geodesic domes. 
\nWe stress that the pseudo-regular polytope is a substitute \nfor the corresponding geodesic dome characterized by \nthe frequency $\\nu$, not for the continuum hypersphere. \nThe Regge equations (\\ref{eq:chc_ptu}) and (\\ref{eq:cev_ptu}) \ntherefore should be considered as an effective description \nof the Regge equations for the geodesic dome, not of the \ncontinuum Friedmann equations. The approach of pseudo-regular \npolytopes can be applied to an arbitrary $\\nu$. \nIn particular, \nwe can infer from it the validity of Regge calculus for geodesic \ndomes. Because of this, the pseudo-regular polytope universe \nbegins to deviate from the continuum solution when the edge \nlength becomes larger than $1\/\\sqrt{\\Lambda_D}$. \n\nIn this paper we have considered vacuum universes without \nmatter. Incorporating gravitating matter sources is worth \ninvestigating. In General Relativity, the Friedmann equations \nhave a solution for a negative cosmological constant. It \ndescribes hyperbolic Cauchy surfaces expanding or contracting \nwith time. Applying the method of pseudo-regular polytopes to \nsuch a non-compact universe would be interesting. We will address \nthese issues elsewhere. \n\n\n\n\\vskip .5cm\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nAfter a successful experimental detection of Bose-Einstein condensates (BEC) of\ndilute trapped bosonic alkali-metal atoms $^7$Li, $^{23}$Na, and $^{87}$Rb\n\\cite{review,books} at ultra-low temperatures, there have been intense\ntheoretical activities in studying properties of the condensate using the\ntime-dependent mean-field Gross-Pitaevskii (GP) equation under different trap\nsymmetries. Among many possibilities, the following traps have been used in\nvarious studies: three-dimensional (3D) spherically-symmetric, axially-symmetric\nand anisotropic harmonic traps, two-dimensional (2D) circularly-symmetric and\nanisotropic harmonic traps, and the one-dimensional (1D) harmonic trap. The\ninter-atomic interaction leads to a nonlinear term in the GP equation, which\ncomplicates its accurate numerical solution, especially for a large nonlinearity.\nThe nonlinearity is large for a fixed harmonic trap when either the number of\natoms in the condensate or the atomic scattering length is large, and this is\nindeed so under many experimental conditions. Special care is needed \nfor the\nsolution of the time-dependent GP equation with large nonlinearity and there is\nan extensive literature on this topic\n\\cite{Tiwari_Shukla,Bao_Tang,Schneider_Feder,chio1,chio2,chang,num1,num2,num3,num4,num5,num6,num7,num8,num9,num10,num11,num12,num13,num14,num15,num16,num17,num18,num19,num20,num21,num22,num23,num24,num25,num26,num27,num28,num29,num30,num31,num32,num33,num34,num35,aq,xyz1,burnett,holland,baer}.\n\n\nThe time-dependent GP equation is a partial differential equation in space and\ntime variables involving first-order time and second-order space derivatives\ntogether with a harmonic and a nonlinear potential term, and has the structure\nof a nonlinear Schr\\\"odinger equation with a harmonic trap. { One\ncommonly used} procedure for the solution of the time-dependent GP equation makes\nuse of a discretization of this equation in space and time and subsequent\nintegration and time propagation of the discretized equation. 
From a knowledge\nof the solution of this equation at a specific time, this procedure finds the\nsolution after a small time step by solving the discretized equation.\n{A commonly used} discretization scheme for the GP equation is the\nsemi-implicit Crank-Nicolson discretization scheme \\cite{koonin,ames,dtray}\nwhich has certain advantages and will be used in this work.\n\n\nIn the simplest one-space-variable form of the GP equation, the solution\nalgorithm is executed in two steps. In the first step, using a known initial\nsolution, an intermediate solution after a small interval of time $\\Delta$ is\nfound neglecting the harmonic and nonlinear potential terms. The effect of\nthe potential terms is then included by a first-order time integration to obtain\nthe final solution after time $\\Delta$. In case of two or three spatial\nvariables, the space derivatives are dealt with in two or three steps and the\neffect of the potential terms are included next. As the time evolution is\nexecuted in different steps it is called a split-step real-time propagation\nmethod. This method is equally applicable to stationary ground and \nexcited states as well as non-stationary states, although in this paper \nwe do not consider stationary excited states. \nThe virtues of the semi-implicit Crank-Nicolson scheme\n\\cite{koonin,ames,dtray} are that it is unconditionally stable and preserves the\nnormalization of the solution under real-time propagation. A simpler \nand\nefficient variant of the {scheme} called the split-step\nimaginary-time propagation method obtained by replacing the time variable by an\nimaginary time is also considered. (The GP equation involves complex\n{variables}. However, after replacing the time variable by an\nimaginary time the resultant partial differential equation is real, and hence\nthe imaginary-time propagation method involves real {variables}\nonly. {This trick\nleads to an \nimaginary-time operator which results in exponential decay of all states \nrelative to the ground state and can then be applied to any \ninitial \ntrial wave function to compute an approximation to the actual ground \nstate rather accurately. We shall use imaginary-time propagation to \ncompute the ground state in this paper.)} The split-step \nimaginary-time propagation method involving real variables\n yields very precise result at low computational cost (CPU time) and is very\nappropriate for the solution of stationary problems involving the \nground state. The split-step \nreal-time propagation method uses complex {quantities} and yields\nless precise results for stationary problems; however, they are appropriate for\nthe study of non-equilibrium dynamics in addition to stationary problems \ninvolving excited states also.\n\n\nMost of the previous studies\n\\cite{Tiwari_Shukla,Bao_Tang,Schneider_Feder,chang,num1,num4,num10,num12,num13,num16,num17,num25,num26,num27,num31,xyz1,burnett}\non the numerical solution of the GP equation are confined to a consideration of\nstationary states only. Some used specifically the imaginary-time propagation\nmethod \\cite{chio1,num23,aq,xyz1}. There are few studies\n\\cite{num22,num29,num30,num33} for the numerical solution of the time-dependent\nGP equation using the Crank-Nicolson method \\cite{koonin,ames,dtray}. 
Other\nmethods for numerical solution of the time-dependent GP equation have also\nappeared in the literature\n\\cite{chio2,num6,num8,num15,num18,num19,num20,num21,num24,num34,num35,holland,baer}.\nThese time-dependent methods can be used for studying non-equilibrium dynamics\nof the condensate involving non-stationary states.\n\n\nThe purpose of the present paper is to develop {a simple and\nefficient} algorithm for the numerical solution of the GP equation using time\npropagation together with the semi-implicit Crank-Nicolson discretization\nscheme \\cite{koonin,ames,dtray} specially useful to newcomers in this field\ninterested in obtaining a numerical solution of the time-dependent GP equation.\nEasy-to-use Fortran 77 programs for different trap symmetries with adequate\nexplanation are also included. In case of two and three space variables, Fortran\n90\/95 programs are more compact in nature and these programs are also included.\nWe include programs using both real- and imaginary-time propagation. For\nstationary ground states the imaginary-time method has a much quicker \nconvergence rate\ncompared to the real-time method and should be used for the calculation of\nchemical potential, energy and root-mean-square (rms) sizes. We calculate the chemical\npotential and rms sizes of the condensate for stationary problems and compare\nthese results with those previously obtained by other workers for different trap\nsymmetries. These results can be easily calculated in a decent PC using the\nFortran programs provided. In addition to the results for the stationary states,\nthe real-time propagation routines can also be used to study the \nnon-stationary\ntransitions, as in collapse dynamics \\cite{ska5} and non-equilibrium\noscillation \\cite{num22}.\n\n\n{This paper is organized as follows.} In Sec. \\ref{gpe} we present\nthe GP equations with different traps that we consider in this paper, e.g., the\n3D spherically-symmetric, 2D circularly-symmetric and 1D harmonic traps\ninvolving one space variable, the anisotropic 2D and axially-symmetric 3D\nharmonic traps in two space variables and the fully anisotropic 3D harmonic trap\nin three space variables. In Sec. \\ref{CN1D} we elaborate the numerical\nalgorithm for solving the GP equation in one space variable (the 1D, \ncircularly-symmetric 2D, and spherically-symmetric 3D cases) \nand for calculating\nthe chemical potential, energy and rms sizes employing both the real- and\nimaginary-time propagation methods. \nIn Sec. \\ref{CN23D} we present the same for\nsolving the GP equation in two and three space variables (the anisotropic 2D\nand axially-symmetric and anisotropic 3D cases). In Sec. \\ref{FOR} we present a\ndescription of the Fortran programs, an explanation about how to use them, and\nsome sample outputs. In Sec. \\ref{NUM} we present the numerical results for\nchemical potential, rms size, value of the wave function at the center, for the\nground-state\nproblem using the imaginary-time propagation routines and compare our\nfinding with previous results for different trap symmetries in 1D, 2D, and 3D. \nWe also present a study of non-stationary oscillation in some of these cases\nusing the real-time propagation routines when the nonlinear coefficient in the\nGP equation with a stationary solution was suddenly reduced to half its value.\nFinally, in Sec. 
\\ref{SUM} we present a brief summary of our study.\n\n\n\\section{Nonlinear Gross-Pitaevskii Equation}\n\\label{gpe}\n\nAt zero temperature, the time-dependent Bose-Einstein condensate wave function\n$\\Psi \\equiv \\Psi({\\bf r};\\tau)$ at position ${\\bf r}$ and time $\\tau $ may be\ndescribed by the following mean-field nonlinear GP equation \\cite{review}\n\\begin{eqnarray}\n\\mbox{i}\\hbar\\frac{\\partial \\Psi({\\bf r};\\tau) }{\\partial \\tau} =\n\\left[-\\frac{\\hbar^2\\nabla\n^2}{2m} + V({\\bf r}) + gN\\vert\\Psi({\\bf r};\\tau) \\vert^2\n\\right]\\Psi({\\bf r};\\tau),\n\\label{eqn:gp0}\n\\end{eqnarray}\nwith $\\mbox{i}=\\sqrt{-1}$. Here $m$ is the mass of an atom and $N$ the number\nof atoms in the condensate, $g=4\\pi\\hbar^2 a\/m $ the strength of inter-atomic\ninteraction, with $a$ the atomic scattering length. The normalization condition\nof the wave function is $\\int d{\\bf r} \\vert \\Psi({\\bf r};\\tau)\\vert^2 = 1$.\n\n\\subsection{Spherically-symmetric GP equation in 3D}\n\nIn this case the trap potential is given by $V({\\bf r}) =\\frac{1}{2}m \\omega ^2\n\\tilde r^2$, where $\\omega$ is the angular frequency and $\\tilde r$ the radial\ndistance. After a partial-wave projection the radial part $\\psi$ of \nthe wave function $\\Psi$ can be written as $\\Psi({\\bf \nr};\\tau) =\\psi(\\tilde\nr,\\tau)$. After a transformation of variables to dimensionless quantities\ndefined by $r =\\sqrt 2 \\tilde r\/l$, $t=\\tau \\omega$, $l\\equiv \\sqrt\n{(\\hbar\/m\\omega)} $ and $\\phi(r;t) \\equiv \\varphi(r;t)\/r =\\psi(\\tilde r,\\tau)[\nl^3\/(2\\sqrt 2)]^{1\/2}$, the GP equation (\\ref{eqn:gp0}) in this case becomes\n\\begin{eqnarray}\n\\left[-\\frac{\\partial^2}\n{\\partial r^2}+\\frac{r^2}{4}+\\aleph \n\\left| \\frac{\\varphi(r;t)}{r}\n\\right| ^2 -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0,\n\\label{sph}\n\\end{eqnarray}\nwhere $\\aleph\n=8\\sqrt 2\\pi N a\/l$. {The purpose of \nchanging the wave function from $\\psi$ to $\\varphi=r\\psi$ is a matter of \ntaste and it has certain advantages.\n First, this transformation removes the first derivative \n$\\partial\/\\partial r$ from the differential equation (\\ref{sph}) \nand thus results in a simpler equation \\cite{num26}. \nSecondly, at the origin $r=0$, $\\psi$ is a constant, or \n$\\partial \\psi\/\\partial r=0$. But the new variable satisfies \n$\\varphi(0,t)=0$. Hence, while solving the\ndifferential equation (\\ref{sph}),\nwe can implement the \nsimple boundary condition \nthat as $r \\to 0 $ or $\\infty$, $\\varphi$ vanishes. \nThe boundary condition for the \ndifferential \nequation in $\\psi$ will be a mixed one, e.g., the function $\\psi$ \nshould \nvanish at infinity and its first space derivative should vanish at \nthe origin. } \nThe normalization condition for the\nwave function is\n\\begin{align}\n\\label{n1}\n4\\pi \\int_0^\\infty dr \\vert\\varphi(r;t)\\vert^2=1.\n\\end{align}\n\nHowever, Eq. (\\ref{sph}) is not the unique form of dimensionless GP equation in\nthis case. Other forms of dimensionless equations have been obtained and used by\ndifferent workers. 
For example, using the transformations\n$r =\\tilde r\/l$,\n$t=\\tau \\omega$, $l\\equiv \\sqrt {(\\hbar\/m\\omega)} $ and $\\phi(r;t)\n\\equiv \\varphi(r;t)\/r =\\psi(\\tilde r,\\tau)l^{3\/2}$, the GP equation\n(\\ref{eqn:gp0}) becomes\n\\begin{align}\n\\left[-\\frac{1}{2}\\frac{\\partial^2}\n{\\partial r^2}+\\frac{1}{2}{r^2}+\\aleph \n\\left| \\frac{\\varphi(r;t)}{r}\n\\right| ^2 -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0,\n\\label{sph2}\n\\end{align}\nwhere $\\aleph\n=4\\pi N a\/l$ with normalization (\\ref{n1}).\nFinally, using\nthe transformations\n$r =\\tilde r\/l$,\n$t=\\tau \\omega\/2$, $l\\equiv \\sqrt {(\\hbar\/m\\omega)} $ and $\\phi(r;t)\n\\equiv \\varphi(r;t)\/r =\\psi(\\tilde r,\\tau)l^{3\/2}$, the GP equation\n(\\ref{eqn:gp0}) becomes\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial r^2}+{r^2}+\\aleph \n\\left| \\frac{\\varphi(r;t)}{r}\n\\right| ^2 -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0,\n\\label{sph3}\n\\end{align}\nwhere $ \\aleph\n=8\\pi N a\/l$ with normalization (\\ref{n1}). These three sets of\ndimensionless GP equations have been {widely} used in the\nliterature and will be considered here. Equations (\\ref{sph}), (\\ref{sph2}), and\n(\\ref{sph3}) allow stationary solutions $\\varphi(r;t)\\equiv\n\\varphi(r)\\exp(-\\mbox{i}\\mu t)$ where $\\mu$ is the chemical potential. The\nboundary conditions for the solution of these equations are $\\varphi(0,t)=0$ and\n$\\lim_{r\\to \\infty}\\varphi(r,t)=0 $ \\cite{koonin}.\n\n\n\n\\subsection{Anisotropic GP equation in 3D}\n\nThe three-dimensional trap potential is given by $V({\\bf r})\n=\\frac{1}{2}m \\omega^2(\\nu^2\\bar x^2+\\kappa^2\\bar\ny^2+\\lambda^2\\bar\nz^2)$,\nwhere $\\omega_x \\equiv \\nu \\omega$, $\\omega_y\\equiv \\omega\\kappa$, and\n$\\omega_z\\equiv \\omega\\lambda$ are the angular frequencies in the\n$x$,\n$y$ and $z$ directions, respectively, and ${\\bf r}\\equiv (\\bar x,\\bar\ny,\\bar z)$ is the radial vector. In terms of dimensionless variables\n$x=\\sqrt 2 \\bar x\/l, y=\\sqrt 2 \\bar y\/l,z=\\sqrt 2 \\bar z\/l, t=\\tau\n\\omega, l=\\sqrt{\\hbar\/(m\\omega))}$, and $\\varphi(x,y,z;t)=\\sqrt{\nl^3\/(2\\sqrt 2)}\\Psi({\\bf r};\\tau)$, the GP equation (\\ref{eqn:gp0})\nbecomes\n\\begin{align}\n\\left[\n-\\frac{\\partial^2}{\\partial x^2}\n-\\frac{\\partial^2}{\\partial y^2}\n-\\frac{\\partial^2}{\\partial z^2}\n+\\frac{1}{4} \\biggr(\\nu^2x^2+\\kappa^2 y^2+\\lambda^2 z^2 \\biggr)\n+ \\aleph\n\\left\\vert\\varphi(x,y,z;t)\\right\\vert^2 - \\mbox{i}\n\\frac{\\partial}{\\partial t} \\right]\n\\varphi(x,y,z;t)= 0,\\label{ani}\n\\end{align}\nwith $\\aleph\n=8\\sqrt 2 \\pi aN\/l$ and normalization\n\\begin{align}\\label{n3}\n\\int_{-\\infty}^{\\infty}dx\n\\int_{-\\infty}^{\\infty}dy\n\\int_{-\\infty}^{\\infty}dz\n|\\varphi(x,y,z;t)|^2 =1.\n\\end{align}\n\nSimilarly, using $x=\\bar x\/l, y=\\bar y\/l,z= \\bar\nz\/l, t=\\tau\n\\omega, l=\\sqrt{\\hbar\/(m\\omega)}$, and $\\varphi(x,y,z;t)=\\sqrt{\nl^3}\\Psi({\\bf r};\\tau)$, the GP equation (\\ref{eqn:gp0}) becomes\n\\begin{eqnarray}\n\\left[\n-\\frac{1}{2}\\frac{\\partial^2}{\\partial x^2}\n-\\frac{1}{2}\\frac{\\partial^2}{\\partial y^2}\n-\\frac{1}{2}\\frac{\\partial^2}{\\partial z^2}\n+\\frac{1}{2} \\biggr(\\nu^2x^2+\\kappa^2 y^2+\\lambda^2 z^2 \\biggr)\n+\\aleph \n\\left\\vert\\varphi(x,y,z;t)\\right\\vert^2 - \\mbox{i}\n\\frac{\\partial}{\\partial\nt} \\right]\n\\varphi(x,y,z;t)= 0,\\label{ani2}\n\\end{eqnarray}\nwith $\\aleph\n=4 \\pi aN\/l$ and normalization (\\ref{n3}). 
Now \nwith\nscaling $t\\to 2t$, Eq. (\\ref{ani2}) can be rewritten as\n\\begin{eqnarray}\n\\left[\n-\\frac{\\partial^2}{\\partial x^2}\n-\\frac{\\partial^2}{\\partial y^2}\n-\\frac{\\partial^2}{\\partial z^2}\n+ \\biggr(\\nu^2x^2+\\kappa^2 y^2+\\lambda^2 z^2 \\biggr)\n+\\aleph \n\\left\\vert\\varphi(x,y,z;t)\\right\\vert^2 - \\mbox{i}\n\\frac{\\partial}{\\partial\nt} \\right]\n\\varphi(x,y,z;t)= 0,\\label{ani3}\n\\end{eqnarray}\nwith $\\aleph \n=8\\pi aN\/l$.\nThe boundary conditions for solution are $\\lim_{x\\to \\pm \\infty}\n\\varphi(x,y,z;t)=0, \\lim_{y\\to \\pm \\infty}\n\\varphi(x,y,z;t)=0,\\lim_{z\\to \\pm \\infty}\n\\varphi(x,y,z;t)=0$ \\cite{koonin}.\n\n\\subsection{Axially-symmetric GP equation in 3D}\n\nIn the special case of axial symmetry ($\\nu = \\kappa$) Eqs.\n(\\ref{ani}), (\\ref{ani2}) and (\\ref{ani3}) can be simplified considering\n${\\bf r}\\equiv (\\rho,z)$ where $\\rho=\\sqrt{x^2+y^2}$ is the radial\ncoordinate and $z$\nis the axial coordinate. Then Eq. (\\ref{ani}) becomes\n\\begin{align}\n\\left[\n-\\frac{\\partial^2}{\\partial \\rho^2}\n-\\frac{1}{\\rho}\\frac{\\partial}{\\partial \\rho}\n-\\frac{\\partial^2}{\\partial z^2}\n+\\frac{1}{4} \\biggr(\\kappa^2 \\rho^2+\\lambda^2 z^2 \\biggr)\n+\\aleph \n\\left\\vert\\varphi(\\rho,z;t)\\right\\vert^2 - \\mbox{i}\n\\frac{\\partial}{\\partial t} \\right]\n\\varphi(\\rho,z;t)= 0,\\label{axi}\n \\end{align}\nwith $\\aleph \n=8\\sqrt 2 \\pi aN\/l$ and normalization\n$2\\pi \\int_{0}^{\\infty}\\rho d\\rho\n\\int_{-\\infty}^{\\infty}dz\n|\\varphi(\\rho,z;t)|^2 =1.$\nSimilarly, Eqs. (\\ref{ani2}) and (\\ref{ani3}) can be written as\n\\begin{eqnarray}\n\\left[\n-\\frac{1}{2}\\frac{\\partial^2}{\\partial \\rho^2}\n-\\frac{1}{2\\rho}\\frac{\\partial}{\\partial \\rho}\n-\\frac{1}{2}\\frac{\\partial^2}{\\partial z^2}\n+\\frac{1}{2} \\biggr(\\kappa^2\\rho^2+\\lambda^2 z^2 \\biggr)\n+\\aleph\n\\left\\vert\\varphi(\\rho,z;t)\\right\\vert^2 - \\mbox{i}\n\\frac{\\partial}{\\partial\nt} \\right]\n\\varphi(\\rho,z;t)= 0,\\label{axi2}\n\\end{eqnarray}\nwith $\\aleph \n=4 \\pi aN\/l$ and\n\\begin{eqnarray}\n\\left[\n-\\frac{\\partial^2}{\\partial \\rho^2}\n-\\frac{1}{\\rho}\\frac{\\partial}{\\partial \\rho}\n-\\frac{\\partial^2}{\\partial z^2}\n+ \\biggr(\\kappa^2 \\rho^2+\\lambda^2 z^2 \\biggr)\n+\\aleph \n\\left\\vert\\varphi(\\rho,z;t)\\right\\vert^2 - \\mbox{i}\n\\frac{\\partial}{\\partial\nt} \\right]\n\\varphi(\\rho,z;t)= 0,\\label{axi3}\n\\end{eqnarray}\nwith $\\aleph \n=8\\pi aN\/l$. In this case $\\varphi(\\rho=0,z;t)$ \nis not\nzero but a constant.\nConvenient boundary conditions for solution\nin this case are $\\lim_{z\\to \\pm \\infty}\\varphi(\\rho,z;t)=0, \\lim_{\\rho\\to\n\\infty}\\varphi(\\rho,z;t)=0,$ and \n$\\partial \\varphi(\\rho,z;t)\/\\partial \\rho|_{\\rho=0}=0$\n\\cite{num4}.\n\n\n\n\\subsection{One-dimensional GP equation}\n\nIn case of an elongated cigar-shaped trap, which is essentially an\naxially-symmetric trap with strong transverse confinement, Eq.\n(\\ref{ani}) reduces to a quasi one-dimensional form. This is achieved by\nassuming that the system remains confined to the ground state in the\ntransverse direction. In this case the wave function of Eq.\n(\\ref{ani}) can be written as $\\varphi(x,y,z;t) =\\tilde \\varphi(x;t)\n\\phi_0(y)\n\\phi_0(z)\\exp[-i(\\lambda+\\kappa)t\/2]$ with $\\phi_0(y)=\n[\\kappa\/(2\\pi)]^{1\/4} \\exp(-\\kappa y^2\/4)$ and $\\phi_0(z)=\n[\\lambda\/(2\\pi)]^{1\/4} \\exp(-\\lambda z^2\/4)$ the respective ground state\nwave functions in $y$ and $z$ directions. 
Using this ansatz in Eq.\n(\\ref{ani}), multiplying by $\\phi_0(y)\\phi_0(z)$, integrating over\n$y$ and $z$, dropping the tilde over $\\varphi$, and setting $\\nu=1$ we\nobtain\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial x^2}+\\frac{x^2}{4}+\\aleph\n\\left| {\\varphi(x;t)}\n\\right| ^2\n-\\mbox{i}\\frac{\\partial }{\\partial t}\n\\right] \\varphi (x;t)=0 ,\n\\label{1d}\n\\end{align}\nwith $\\aleph\n= 2a N \\sqrt {2\n\\lambda \\kappa} \/l$ and\n normalization\n\\begin{align}\\label{n5}\n\\int_{-\\infty}^\\infty dx |\\varphi(x;t)|^2 =1.\n\\end{align}\nInstead if we employ\n $\\varphi(x,y,z;t) =\\tilde \\varphi(x;t) \\phi_0(y)\n\\phi_0(z)\\exp[-i(\\lambda+\\kappa)t\/2]$ with $\\phi_0(y)=\n(\\kappa\/\\pi)^{1\/4} \\exp(-\\kappa y^2\/2)$ and $\\phi_0(z)=\n(\\lambda\/\\pi)^{1\/4} \\exp(-\\lambda z^2\/2)$\n in Eq.\n(\\ref{ani2}), in a similar fashion\n we obtain\n\\begin{align}\n\\left[-\\frac{1}{2}\\frac{\\partial^2}\n{\\partial x^2}+\\frac{x^2}{2}+\\aleph \n\\left| {\\varphi(x;t)}\n\\right| ^2\n-\\mbox{i}\\frac{\\partial }{\\partial t}\n\\right] \\varphi (x;t)=0 ,\n\\label{1d2}\n\\end{align}\nwith $\\aleph \n= 2a N \\sqrt {\n\\lambda \\kappa} \/l$ and normalization (\\ref{n5}). Now with scaling\n$t\\to 2t$ Eq. (\\ref{1d2}) can be rewritten as\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial x^2}+{x^2}+\\aleph \n\\left| {\\varphi(x;t)}\n\\right| ^2\n-\\mbox{i}\\frac{\\partial }{\\partial t}\n\\right] \\varphi (x;t)=0 ,\n\\label{1d3}\n\\end{align}\nwith $\\aleph \n= 4a N \\sqrt {\n\\lambda \\kappa} \/l$ and normalization (\\ref{n5}).\nFor numerical solution we take $\\lim_{x\\to \\pm \\infty}\\varphi(x,t)=0$.\n\n\n\n\\label{1D}\n\n\n\\subsection{Anisotropic GP equation in 2D}\n\n\\label{2.5}\n\nIn case of a disk-shaped trap, which is essentially an\nanisotropic\ntrap in two dimensions\nwith strong axial binding Eq.\n(\\ref{ani}) reduces to a two-dimensional form. This is achieved by\nassuming that the system remains confined to the ground state in the\naxial direction. In this case the wave function of Eq.\n(\\ref{ani}) can be written as $\\varphi(x,y,z;t) = \\tilde \\varphi(x,y;t)\n\\phi_0(z)\\exp[-i\\lambda t\/2]$ with $\\phi_0(z)=\n[\\lambda\/(2\\pi)]^{1\/4} \\exp(-\\lambda z^2\/4)$ the ground state\nwave function in $z$ direction. Using this ansatz in Eq.\n(\\ref{ani}), multiplying by $\\phi_0(z)$, integrating over\n$z$, dropping the tilde over $\\varphi$ and setting $\\nu =1$ we obtain\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial x^2}-\\frac{\\partial^2}\n{\\partial y^2}\n+\\frac{x^2+\\kappa y^2}{4}+\\aleph\n\\left| {\\varphi(x,y;t)}\n\\right| ^2\n-\\mbox{i}\\frac{\\partial }{\\partial t}\n\\right] \\varphi (x,y;t)=0 ,\n\\label{2d}\n\\end{align}\nnow with $\\aleph\n= 4a N \\sqrt {2\n\\pi \\lambda} \/l$ and\n normalization\n\\begin{align}\\label{n6}\n\\int_{-\\infty}^\\infty dx\n\\int_{-\\infty}^\\infty dy\n|\\varphi(x,y;t)|^2 =1.\n\\end{align}\n\nInstead if we use in Eq. (\\ref{ani2})\n$\\phi(x,y,z;t) =\\tilde \\varphi(x,y;t)\n\\phi_0(z)\\exp[-i\\lambda t\/2]$ with $\\phi_0(z)=\n[\\lambda\/\\pi]^{1\/4} \\exp(-\\lambda z^2\/2)$, then\nin a similar fashion we obtain\n\\begin{align}\n\\left[-\\frac{1}{2}\\frac{\\partial^2}\n{\\partial x^2}-\\frac{1}{2}\\frac{\\partial^2}\n{\\partial y^2}\n+\\frac{x^2+\\kappa y^2}{2}+\\aleph \n\\left| {\\varphi(x,y;t)}\n\\right| ^2\n-\\mbox{i}\\frac{\\partial }{\\partial t}\n\\right] \\varphi (x,y;t)=0 ,\n\\label{2d2}\n\\end{align}\nnow with $\\aleph \n= 2a N \\sqrt {2\n\\pi \\lambda} \/l$ and normalization (\\ref{n6}). 
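\n(As a guide to the reader, we note the intermediate step behind these reduced nonlinearities, which is not written out above: in each reduction the effective coefficient is the parent three-dimensional one multiplied by the integral of $\\phi_0^4$ over every frozen direction. For Eq. (\\ref{1d}), for instance,\n\\begin{align*}\n\\aleph = \\frac{8\\sqrt 2 \\pi a N}{l}\n\\int_{-\\infty}^{\\infty} dy\\, \\phi_0^4(y)\n\\int_{-\\infty}^{\\infty} dz\\, \\phi_0^4(z)\n= \\frac{8\\sqrt 2 \\pi a N}{l}\n\\sqrt{\\frac{\\kappa}{4\\pi}}\\sqrt{\\frac{\\lambda}{4\\pi}}\n= \\frac{2 a N \\sqrt{2\\lambda\\kappa}}{l},\n\\end{align*}\nand the coefficients of the other reduced equations follow in the same way.)\n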
Finally, with scaling\n$t\\to 2t$, Eq. (\\ref{2d2}) can be written as\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial x^2}-\\frac{\\partial^2}\n{\\partial y^2}\n+{x^2+\\kappa y^2}+\\aleph\n\\left| {\\varphi(x,y;t)}\n\\right| ^2\n-\\mbox{i}\\frac{\\partial }{\\partial t}\n\\right] \\varphi (x,y;t)=0 ,\n\\label{2d3}\n\\end{align}\nwith $\\aleph\n= 4a N \\sqrt {2\n\\pi \\lambda} \/l$ and normalization (\\ref{n6}).\nFor numerical solution we take $\\lim_{x\\to \\pm \\infty}\\varphi(x,y,t)=0$\nand $\\lim_{y\\to \\pm \\infty}\\varphi(x,y,t)=0$.\n\n\n\n\\subsection{Circularly-symmetric GP equation in 2D}\n\n\n\nIn the special case of circular symmetry the equations of Sec. \\ref{2.5}\ncan be written in one-dimensional form. In this case $\\kappa =1$, and\nwe introduce the radial variable ${\\bf r}\\equiv (x,y)$, and rewrite the\nwave function as $\\varphi(r)$. Then the GP equation (\\ref{2d}) become\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial r^2}-\\frac{1}{r}\\frac{\\partial}\n{\\partial r} +\\frac{r^2}{4}+\\aleph \n\\left| {\\varphi(r;t)}\n\\right| ^2 -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0.\n\\label{cir}\n\\end{align}\n The normalization of the\nwave function is\n$2\\pi \\int_0^\\infty dr r\\vert\\varphi(r;t)\\vert^2=1.$\n\nIn the circularly-symmetric case Eq. (\\ref{2d2}) becomes\n\\begin{align}\n\\left[-\\frac{1}{2}\\frac{\\partial^2}\n{\\partial r^2}-\\frac{1}{2r}\\frac{\\partial}\n{\\partial r}+\n\\frac{1}{2}{r^2}+\\aleph \n\\left| {\\varphi(r;t)}\n\\right| ^2 -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0.\n\\label{cir2}\n\\end{align}\nFinally, Eq. (\\ref{2d3}) can be written as\n\\begin{align}\n\\left[-\\frac{\\partial^2}\n{\\partial r^2}-\\frac{1}{r}\\frac{\\partial}\n{\\partial r}\n+{r^2}+\\aleph \n\\left| {\\varphi(r;t)}\n\\right| ^2 -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0.\n\\label{cir3}\n\\end{align}\nThe convenient boundary condition in this case is $\\lim_{r\\to \\infty }\n\\varphi\n(r;t)=0$ and $d\\varphi\n(r;t)\/dr|_{r=0}=0$ \\cite{num4}.\n\n\nIn this section we have exhibited GP equations for different\ntrap symmetries. In the next section we illustrate the Crank-Nicolson\nmethod for the GP equation in one space variable, which is then\nextended to\nother types of equations in Sec. \\ref{CN23D}.\n\n\n\n\\section{Split-Step Crank-Nicolson Method for the GP\nEquation in one Space Variable}\n\\label{CN1D}\n\n\n\\subsection{The GP Equation in the 1D and radially-symmetric 3D\ncases}\n\nTo introduce the Crank-Nicolson Method \\cite{koonin,ames,dtray}\nfor GP equation we consider\nfirst the one-dimensional case of Sec. \\ref{1D}. The\n nonlinear GP equation (\\ref{1d})\nin this\ncase can be expressed in the following form:\n\\begin{align}\n\\mbox{i}\\frac{\\partial }{\\partial t} \\varphi (x;t) & =\n\\left[-\\frac{\\partial^2}\n{\\partial x^2}+\\frac{x^2}{4}+ \\aleph\n\\left| {\\varphi(x;t)}\n\\right| ^2\\right] \\varphi (x;t) , \\notag \\\\\n & \\equiv H \\varphi (x;t)\n\\label{e1d}\n\\end{align}\nwhere the Hamiltonian $H$ contains the different linear and nonlinear terms\nincluding the spatial derivative. (The spherically-symmetric GP equation in 3D\nhas a similar structure and can be treated similarly.) We solve this equation by\ntime iteration \\cite{koonin,ames,dtray}. A given trial input solution is\npropagated in time over small time steps until a stable final solution is\nreached. The GP equation is discretized in space and time using the finite\ndifference scheme. 
This procedure results in a set of algebraic equations which\ncan be solved by time iteration using an input solution consistent with the\nknown boundary condition. In the present split-step method \\cite{ames} this\niteration is conveniently done in few steps by breaking up the full Hamiltonian\ninto different derivative and non-derivative parts.\n\n\n\n\\subsubsection{Real-time propagation} \\label{RT}\n\nThe time iteration is performed by splitting $H$ into two parts:\n$H=H_1+H_2$, with\n\\begin{align}\\label{h1}\nH_1 & = \\left[ \\frac{x^2}{4}+\\aleph \n\\left|\n{\\varphi(x;t)}\n\\right| ^2 \\right], \\;\\; \\\\ \\label{h2}\nH_2 & = -\\frac {\\partial ^2}{\\partial x^2}.\n\\end{align}\n{\nEssentially we split Eq.~(\\ref{e1d}) into\n\\begin{align}\n\\mbox{i}\\frac{\\partial }{\\partial t} \\varphi (x;t) & =\n\\left[ \\frac{x^2}{4}+\\aleph \n\\left\\vert {\\varphi(x;t)}\n\\right\\vert ^2\\right] \\varphi (x;t) \\equiv H_1 \\varphi (x;t)\n\\label{e1d_1} \\\\\n\\mbox{i}\\frac{\\partial }{\\partial t} \\varphi (x;t) & =\n-\\frac{\\partial^2}\n{\\partial x^2}\\varphi (x;t) \\equiv H_2 \\varphi (x;t)\n\\label{e1d_2} \n\\end{align}\nWe first solve Eq.~(\\ref{e1d_1}) with a initial value $\\varphi (x;t_0)$ \nat $t = t_0$ to obtain an intermediate solution at $t = t_0 + \\Delta$, where\n$\\Delta$ is the time step. \nThen this intermediate solution is used as initial value to solve \nEq.~(\\ref{e1d_2}) yielding the final solution at $t = t_0 + \\Delta$ as \n$\\varphi (x;t_0 + \\Delta)$. This procedure is repeated $n$ times to get \nthe final\nsolution at a given time $t_{\\text{final}}=t_0+n\\Delta $}. \n\nThe time variable is discretized as $t_n=n\\Delta$ where $\\Delta$ is\nthe time step. The solution is advanced first over the time step\n$\\Delta$ at time $t_n$ by solving the GP equation (\\ref{e1d}) with\n$H=H_1$ to produce an intermediate solution $\\varphi^{n+1\/2}$ from\n$\\varphi^n$, where $\\varphi^n$ is the discretized wave function at\ntime $t_n$. As there is no derivative in $H_1$, this propagation is\nperformed essentially exactly for small $\\Delta$ through the\noperation\n\\begin{align}\\label{al1}\n\\varphi^{n+1\/2}\n& = {\\bigcirc}_{\\mathrm{nd}}(H_1) \\varphi^n \\equiv e^{-\n\\mbox{i}\\Delta H_1}\n\\varphi^n,\n\\end{align}\nwhere ${\\bigcirc}_{\\mathrm{nd}} (H_1)$ denotes time-evolution operation\nwith $H_1$ and the suffix `nd' denotes non-derivative. Next we perform\nthe time propagation corresponding to the operator $H_2$ numerically\nby the semi-implicit Crank-Nicolson scheme (described below)\n\\cite{koonin}:\n\\begin{align}\\label{gp2}\n\\frac{ \\varphi^{n+1}- \\varphi^{n+1\/2}}{-\\mbox{i}\\Delta } =\n\\frac{1}{2}H_2(\n\\varphi^{n+1} + \\varphi^{n+1\/2}).\n\\end{align}\nThe formal solution to (\\ref{gp2}) is\n\\begin{align}\\label{gp3}\n \\varphi^{n+1}= {\\bigcirc}_{\\mathrm{CN}}(H_2) \\varphi^{n+1\/2}\n\\equiv\n\\frac{1-\\mbox{i}\\Delta H_2\/2 }{ 1+\\mbox{i}\\Delta\nH_2\/2 }\n\\varphi^{n+1\/2},\n\\end{align}\nwhich combined with Eq. (\\ref{al1}) yields\n\\begin{align}\\label{gp4}\n \\varphi^{n+1}={\\bigcirc}_{\\mathrm{CN}}(H_2) \n{\\bigcirc}_{\\mathrm{nd}}(H_1)\n\\varphi^n,\n\\end{align}\nwhere ${\\bigcirc}_{\\mathrm{CN}} $ denotes time-evolution operation with \n$H_2$\nand the suffix `CN' refers to the Crank-Nicolson algorithm. 
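\nThe non-derivative propagation (\\ref{al1}) amounts to a pointwise multiplication of the wave function by a phase factor. A minimal Fortran 90 sketch of this step, with illustrative variable names in the spirit of the subroutine NU of the programs described in Sec. \\ref{FOR} (the actual routines differ in detail), is\n\\begin{verbatim}\n! One application of exp(-i*DT*H1) on the grid: CP is the complex wave\n! function, V the trap potential and G the nonlinear coefficient.\n      SUBROUTINE PHASE_STEP(CP, V, G, DT, N)\n      IMPLICIT NONE\n      INTEGER :: N, I\n      DOUBLE PRECISION :: V(0:N), G, DT, P\n      DOUBLE COMPLEX :: CP(0:N), CI\n      CI = (0.0D0, 1.0D0)\n      DO I = 0, N\n         P = V(I) + G*ABS(CP(I))**2\n         CP(I) = CP(I)*EXP(-CI*DT*P)\n      END DO\n      END SUBROUTINE PHASE_STEP\n\\end{verbatim}\nIn the imaginary-time programs the same step is carried out with real arithmetic, the factor $\\exp(-\\mbox{i}\\Delta H_1)$ being replaced by $\\exp(-\\Delta H_1)$.\n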
Operation \n${\\bigcirc}_{\\mathrm{CN}} $ is used to propagate the intermediate \nsolution $ \n\\varphi^{n+1\/2} $ by time step $\\Delta$ to generate the solution $ \n\\varphi^{n+1}$ at the next time step $t_{n+1}=(n+1)\\Delta$.\n\nThe advantage of the above split-step method with small time step $\\Delta$ is\ndue to the following three factors \\cite{ames,dtray}. First, all iterations\nconserve normalization of the wave function. Second, the error involved in\nsplitting the Hamiltonian is proportional to $\\Delta^2$ and can be neglected and\nthe method preserves the {symplectic} structure of the Hamiltonian formulation. \nFinally, as a major part of the Hamiltonian including the nonlinear term is\ntreated fairly accurately without mixing with the delicate Crank-Nicolson\npropagation, the method can deal with an arbitrarily large nonlinear term and\nlead to stable and accurate converged result.\n\nNow we describe explicitly the semi-implicit\nCrank-Nicolson algorithm.\nThe GP\nequation is mapped onto $N_x$ one-dimensional spatial grid points in\n$x$.\nEquation (\\ref{e1d}) is discretized with $H=H_2$ of (\\ref{h2}) by the\nfollowing Crank-Nicolson scheme \\cite{koonin,ames,dtray}:\n\\begin{align}\\label{kn1}\n\\frac{\\mbox{i}(\\varphi_{i}^{n+1}-\n\\varphi_{i}^{n+1\/2})}{\\Delta}=-\\frac{1}{2h^2}\\biggr[(\\varphi^{n+1}_{i+1}-\n2\\varphi^{n+1}_i\n+\\varphi^{n+1}_{i-1})\n+(\\varphi^{n+1\/2}_{i+1}-2\\varphi_{i}^{n+1\/2}\n+\\varphi^{n+1\/2}_{i-1})\\biggr],\n\\end{align}\nwhere for the spherically-symmetric Eq. (\\ref{sph})\n$\\varphi_i^n=\\varphi(x_i;t_n)$ refers to $x\\equiv x_i=ih,$\n$i=0,1,2,...,N_x$ and $h$ is the space \nstep.\nIn the case of\nthe \n1D Eq. (\\ref{1d}), we choose $x\\equiv x_i=-N_xh\/2+ ih,$\n$i=0,1,2,...,N_x$.\n {(The choice $-$ even $N_x$ $-$ has the\nadvantage\nof taking an equal number of space points on both sides of $x=0$ in the\n1D case setting the point $N_x\/2$ at $x=0$.)} \nEquation (\\ref{kn1}) is the\nexplicit form of the formal Eq. (\\ref{gp2}). This scheme is\nconstructed by approximating $\\partial \/\\partial t$ by a two-point\nformula connecting the present ($n+1\/2$) to future ($n+1$). The\nspatial partial derivative\n$\\partial^2 \/\\partial x^2$ is\napproximated by a three-point formula averaged over the present and\nthe future time grid points.\nThis\nprocedure results in a series of tridiagonal sets of equations\n(\\ref{kn1}) in $\\varphi^{n+1}_{i+1}$, $\\varphi^{n+1}_{i}$, and\n$\\varphi^{n+1}_{i-1}$ at time $t_{n+1}$, which are solved using the\nproper boundary conditions.\n\n\nThe Crank-Nicolson scheme (\\ref{kn1}) possesses certain properties\nworth mentioning \\cite{ames,dtray}. The error in this scheme is both\nsecond order in space and time steps so that for small $\\Delta$ and\n$h$ the error is negligible. This scheme is also unconditionally\nstable \\cite{dtray}. The boundary condition at infinity is preserved for\nsmall\nvalues of $\\Delta\/h^2$\\cite{dtray}.\n\nThe tridiagonal equations emerging from Eq. (\\ref{kn1}) are written\nexplicitly as \\cite{koonin}\n\\begin{align}\\label{eq1}\nA_i^-\\varphi^{n+1}_{i-1}+A_i^0\\varphi^{n+1}_{i}+\nA_i^+\\varphi^{n+1}_{i+1}= b_i,\n\\end{align}\nwhere\n\\begin{align}\nb_i=\\frac{\\mbox{i}\\Delta}{2h^2}(\\varphi^{n+1\/2}_{i+1}-2\\varphi_{i}^{n+1\/2}\n+\\varphi^{n+1\/2}_{i-1})+\\varphi_i^{n+1\/2},\n\\end{align}\nand $A_i^-=A_i^+= -\\mbox{i}\\Delta\/(2h^2), A_i^0 = 1+\\mbox{i}\n\\Delta\/h^2$. All quantities in $b_i$ refer to time step $t_{n+1\/2}$\nand are considered known. The only unknowns in Eq. 
(\\ref{eq1}) are the\nwave forms $\\varphi^{n+1}_{i\\pm 1}$ and $\\varphi^{n+1}_{i}$ at time\nstep $t_{n+1}$. To solve Eq. (\\ref{eq1}), we assume the one-term\nforward recursion relation\n\\begin{align}\\label{eq2}\n\\varphi^{n+1}_{i+1}=\\alpha_i\\varphi^{n+1}_{i}+\\beta_i,\n\\end{align}\nwhere $\\alpha_i$ and $\\beta_i$ are coefficients to be determined.\nSubstituting Eq. (\\ref{eq2}) in Eq. (\\ref{eq1}) we obtain\n\\begin{align}\\label{eq3}\nA_i^-\\varphi^{n+1}_{i-1}+A_i^0\\varphi^{n+1}_{i}+\nA_i^+(\\alpha_i \\varphi^{n+1}_{i}+\\beta_i)= b_i,\n\\end{align}\nwhich leads to the solution\n\\begin{align}\\label{eq4}\n\\varphi_i^{n+1}=\\gamma_i(A_i^- \\varphi_{i-1}^{n+1}+A_i^+\\beta_i-b_i),\n\\end{align}\nwith\n\\begin{align}\\label{eq41}\n\\gamma_i=-1\/(A_i^0+A_i^+\\alpha_i).\n\\end{align}\nFrom Eqs. (\\ref{eq2}) and (\\ref{eq4}) we obtain the following backward\nrecursion relations for the coefficients $\\alpha_i$ and $\\beta_i$\n\\begin{align}\\label{eq5}\n\\alpha_{i-1}=\\gamma_iA_i^-, \\;\\;\\; \\beta_{i-1}=\n\\gamma_i(A_i^+\\beta_i-b_i).\n\\end{align}\nWe shall use the recursion relations (\\ref{eq4}), (\\ref{eq41}) and (\\ref{eq5})\nin a backward sweep of the lattice to determine $\\alpha_i$ and $\\beta_i$ for $i$\nrunning from $N_x-2$ down to 0. The initial values chosen are $\\alpha_{N_x-1}=0,\n\\beta_{N_x-1} = \\varphi^{n+1}_{N_x}.$ This ensures the correct value of\n$\\varphi$ at the last lattice point. After determining the coefficients\n$\\alpha_i, \\beta_i$ and $\\gamma_i$, we can use the recursion relation\n(\\ref{eq2}) from $i=0$ to $N_x-1$ to determine the solution for the entire space\nrange using the starting value $\\varphi^{n+1}_0$ (=0) known from the boundary\nconditions. The value at the last lattice point is also taken to be known (= 0).\nThus we have determined the solution by using two sets of recursion relations\nacross the lattice each involving about $N_x$ operations.\n\nIn the numerical implementation of CN real-time propagation the\ninitial state at $t=0$ is usually chosen to be the analytically known\nsolution of the harmonic potential with zero nonlinearity: \n$\n\\aleph \n=0$. In the course of time iteration the nonlinearity is slowly\nintroduced until the desired final nonlinearity is attained. This\nprocedure will lead us to the final solution of the problem.\n\n\n\\subsubsection{Imaginary-time propagation}\n\\label{IT}\n\nAlthough, real-time propagation as described above has many\nadvantages, in this approach one has to deal with complex variables\nfor a complex wave function for non-stationary states. For stationary\nground\nstate the wave function is essentially real- and the imaginary-time\npropagation method dealing with real variables seems to be\nconvenient. In this approach time $t$ is replaced by an imaginary\nquantity $t=- \\mbox{i}\\bar t$ and Eq. (\\ref{e1d}) now becomes\n\\begin{align}\n-\\frac{\\partial }{\\partial \\bar t} \\varphi (x;\\bar t) & =\n\\left[-\\frac{\\partial^2}\n{\\partial x^2}+\\frac{x^2}{4}+\\aleph \n\\left|\n{\\varphi(x;\\bar t)}\n\\right| ^2\\right] \\varphi (x;\\bar t), \\notag \\\\\n& \\equiv H \\varphi (x;\\bar t)\n\\label{imt}\n\\end{align}\nIn this equation $\\bar t$ is just a mathematical parameter. \n\n{From Eq. 
(\\ref{imt}) we see that an eigenstate \n$\\varphi_i$ of eigenvalue $E_i$ \nof $H$ satisfying $H\\varphi_i=E_i\\varphi_i$ behaves under imaginary-time \npropagation as $\\partial \\varphi_i(\\bar t) \/\\partial \\bar t= -E_i \n\\varphi_i(\\bar t)$, so \nthat $\\varphi_i(\\bar t)= \\exp(-E_i\\bar t)\\varphi_i( 0)$. Hence, if we \nstart \nwith as arbitrary initial $\\varphi (x;\\bar t)$ which can be taken as a \nlinear combination of all eigenfunctions of $H$, then upon \nimaginary-time propagation all the eigenfunctions will decay \nexponentially with \ntime. However, all the excited states with larger $E_i$ will decay \nexponentially faster compared to the ground state with the smallest \neigenvalue. Consequently, after some time only the ground state \nsurvives. As all the states are decaying with time during \nimaginary-time propagation, we need to multiply the wave function by a \nnumber \ngreater than unity to preserve its normalization so that the solution \ndoes not go to zero.}\n\nThe imaginary-time iteration is performed by splitting $H$ into two parts as\nbefore: $H=H_1+H_2$, with $H_1$ and $H_2$ given by Eqs. (\\ref{h1}) and\n(\\ref{h2}). It is realized that the entire analysis of Sec. \\ref{RT} remains\nvalid provided we replace $\\mbox{i}$ by 1 in Eq. (\\ref{al1}) and by $-1$ in the\nremaining equations. However, there appears one trouble. The CN real-time\npropagation preserves the normalization of the wave function, whereas the CN\nimaginary-time propagation does not preserve the normalization. This problem can\nbe circumvented by restoring the normalization of the wave function after each\noperation of Crank-Nicolson propagation. Once this is done the imaginary-time\npropagation method for stationary ground state problems yields very \naccurate result at\nlow computational cost.\n\nCompared to the real-time propagation method, the imaginary-time\npropagation method is very robust. The initial solution in the \nimaginary-time method \ncould be any reasonable solution and not the analytically known solution\nof a related problem as in the real-time method. { Also, the\nfull nonlinearity can be added in a small number of time steps or even\nin a single step and not in a large number of steps as in the real-time \nmethod.} In the programs using the imaginary-time method we include the \n nonlinearity in a single step. These two \nadded features together with\nthe use of real algorithm make the imaginary-time propagation method\nvery accurate with quick convergence for stationary ground states as we \nshall\nsee below.\n\n\n\\subsection{The GP Equation in the circularly-symmetric 2D\ncase}\n\\label{gp2dc}\n\n\nThe Crank-Nicolson discretization for real- and imaginary-time propagation for\nthe circularly-symmetric GP equation (\\ref{cir}) is performed in a similar\nfashion as for Eq. (\\ref{e1d}), apart from the difference that here we also have\na first derivative in space variable in addition to the second derivative.\nAnother difference is that the wave function is not zero at $r=0$. In the case\nof Eq. (\\ref{e1d}) we took the boundary condition as $\\varphi(x;t)=0$ at the\nboundaries. 
For the circularly-symmetric case, the convenient boundary\nconditions are $\\lim_{r\\to \\infty}\\varphi(r;t)=0$ and $d\\varphi\n(r;t)\/dr|_{r=0}=0$.\n\nWe describe below the Crank-Nicolson discretization and the solution\nalgorithm in this case for the following equation\n\\begin{equation}\n\\left[-\\frac{\\partial^2}\n{\\partial r^2}-\\frac{1}{r}\\frac{\\partial}\n{\\partial r} -\n\\mbox{i}\\frac{\\partial }{\\partial t}\\right] \\varphi\n(r;t)=0,\n\\label{CNCIR}\n\\end{equation}\nrequired for the solution of Eq. (\\ref{cir}). The remaining procedure is\nsimilar to that described in detail above in the one-dimensional case.\n\nEquation (\\ref{CNCIR}) is discretized by the\nfollowing Crank-Nicolson scheme as in Eq.\n(\\ref{kn1}) \\cite{koonin,ames,dtray}:\n\\begin{align}\\label{knx}\n\\frac{\\mbox{i}(\\varphi_{i}^{n+1}-\n\\varphi_{i}^{n+1\/2})}{\\Delta}=-\\frac{1}{2h^2}\\biggr[(\\varphi^{n+1}_{i+1}-\n2\\varphi^{n+1}_i\n+\\varphi^{n+1}_{i-1})\n+(\\varphi^{n+1\/2}_{i+1}-2\\varphi_{i}^{n+1\/2}\n+\\varphi^{n+1\/2}_{i-1})\\biggr]\\nonumber \\\\ -\\frac{1}{4r_ih}\n\\left[(\\varphi^{n+1}_{i+1}-\\varphi^{n+1}_{i-1})+(\\varphi^{n+1\/2}_{i+1}-\n\\varphi^{n+1\/2}_{i-1})\n\\right],\n\\end{align}\nwhere again\n$\\varphi_i^n=\\varphi(r_i;t_n)$, $r\\equiv r_i=ih,$\n$i=0,1,2,...,N_r$ and $h$ is the space step.\nThis scheme is\nconstructed by approximating $\\partial \/\\partial x$ by a two-point\nformula averaged over present and future time grid points.\nThe discretization of the first-order time and second-order space\nderivatives is done as in Eq. (\\ref{kn1}).\nThis\nprocedure results in the tridiagonal sets of equations\n(\\ref{knx}) in $\\varphi^{n+1}_{i+1}$, $\\varphi^{n+1}_{i}$, and\n$\\varphi^{n+1}_{i-1}$ at time $t_{n+1}$, which are solved using the\nproper boundary conditions.\n\n\n\nThe tridiagonal equations emerging from Eq. (\\ref{knx}) are written\nexplicitly as\n\\begin{align}\\label{ep1}\nA_i^-\\varphi^{n+1}_{i-1}+A_i^0\\varphi^{n+1}_{i}+\nA_i^+\\varphi^{n+1}_{i+1}= b_i,\n\\end{align}\nwhere\n\\begin{align}\nb_i=\\frac{\\mbox{i}\\Delta}{2h^2}(\\varphi^{n+1\/2}_{i+1}-2\\varphi_{i}^{n+1\/2}\n+\\varphi^{n+1\/2}_{i-1})+\\varphi_i^{n+1\/2}+\\frac{\\mbox{i}\\Delta}{4r_ih}\n(\\varphi^{n+1\/2}_{i+1}-\\varphi^{n+1\/2}_{i-1}),\n\\end{align}\nand $A_i^-= \\mbox{i}\\Delta[1\/(4hr_i) -1\/(2h^2)],\nA_i^+= -\\mbox{i}\\Delta[1\/(4hr_i) +1\/(2h^2)],\nA_i^0 =\n1+\\mbox{i}\n\\Delta\/h^2$. All quantities in $b_i$ refer to time step $t_{n+1\/2}$\nand are considered known. The only unknowns in Eq. (\\ref{ep1}) are the\nwave forms $\\varphi^{n+1}_{i\\pm 1}$ and $\\varphi^{n+1}_{i}$ at time\nstep $t_{n+1}$. To solve Eq. (\\ref{ep1}), we assume the one-term\nbackward recursion relation\n\\begin{align}\\label{ep2}\n\\varphi^{n+1}_{i-1}=\\alpha_i\\varphi^{n+1}_{i}+\\beta_i,\n\\end{align}\nwhere $\\alpha_i$ and $\\beta_i$ are coefficients to be determined.\nSubstituting Eq. (\\ref{ep2}) in Eq. (\\ref{ep1}) we obtain\n\\begin{align}\\label{ep3}\nA_i^+\\varphi^{n+1}_{i+1}+A_i^0\\varphi^{n+1}_{i}+\nA_i^-(\\alpha_i \\varphi^{n+1}_{i}+\\beta_i)= b_i,\n\\end{align}\nwhich leads to the solution\n\\begin{align}\\label{ep4}\n\\varphi_i^{n+1}=\\gamma_i(A_i^+ \\varphi_{i+1}^{n+1}+A_i^-\\beta_i-b_i),\n\\end{align}\nwith\n\\begin{align}\\label{ep41}\n\\gamma_i=-1\/(A_i^0+A_i^-\\alpha_i).\n\\end{align}\nFrom Eqs. 
(\\ref{ep2}) and (\\ref{ep4}) we obtain the following\nforward\nrecursion relations for the coefficients $\\alpha_i$ and $\\beta_i$\n\\begin{align}\\label{ep5}\n\\alpha_{i+1}=\\gamma_iA_i^+, \\;\\;\\; \\beta_{i+1}=\n\\gamma_i(A_i^-\\beta_i-b_i).\n\\end{align}\nWe shall use the recursion relations (\\ref{ep4}), (\\ref{ep41}) and (\\ref{ep5})\nin a forward sweep of the lattice to determine $\\alpha_i$ and $\\beta_i$ for $i$\nrunning from $1$ to $N_r-1$. The initial values chosen are $\\alpha_{1}=1,\n\\beta_{1} =0.$ This ensures the correct value of the space derivative of\n$\\varphi(r;t)=0$ at $r=0$. After determining the coefficients $\\alpha_i,\n\\beta_i$ and $\\gamma_i$, we can use the recursion relation (\\ref{ep2}) from\n$i=N_r$ to $1$ to determine the solution for the entire space range using the\nstarting value $\\varphi^{n+1}(N_r)$ (=0) known from the boundary condition.\nThus we have determined the solution by using two sets of recursion relations\nacross the lattice each involving about $N_r$ operations.\n\n\n\n\\subsection{Chemical potential}\n\n\\label{CH}\n\n\nFor stationary states the wave functions for the 1D case\nhave the trivial time\ndependence $\\varphi(x;t) \\equiv \\hat \\varphi(x) \\exp(-\\mbox{i}\\mu\nt)$,\nwhere\n$\\mu $ is the chemical potential. Substituting this condition in Eq.\n(\\ref{1d}) we obtain\n\\begin{align} \\label{mueq}\n\\left[-\\frac{d^2}\n{d x^2}+\\frac{x^2}{4}+\\aleph \n{\\hat \\varphi^2(x)}\n\\right] \\hat \\varphi (x)\n=\n \\mu \\hat \\varphi (x).\n\\end{align}\nAssuming that the wave form is normalized to unity\n$\\int_{-\\infty}^\\infty \\hat \\varphi^2(x) dx =1$, the chemical\npotential can\nbe calculated from the following expression obtained by multiplying\nEq. (\\ref{mueq}) by $\\hat \\varphi (x)$ and integrating over all space\n\\begin{align}\\label{muz}\n\\mu = \\int_{-\\infty}^\\infty \\left[\\biggr(\\frac{d\\hat\n\\varphi(x)}{dx}\n\\biggr)^2 +\\hat \\varphi^2(x)\n\\left(\n\\frac{x^2}{4}+\\aleph\n{\\hat \\varphi^2(x)}\n\\right) \\right] dx,\n\\end{align}\nwhere the second derivative has been simplified by an integration by\nparts.\n\nAll the programs also calculate the many-body energy, which is of\ninterest. The analytical expression for energy is the\nsame as that of the chemical potential but with the nonlinear term\nmultiplied by 1\/2, e.g., \\cite{review}\n\\begin{align}\\label{energy}\nE = \\int_{-\\infty}^\\infty \\left[\\biggr(\\frac{d\\hat\n\\varphi(x)}{dx}\n\\biggr)^2 +\\hat \\varphi^2(x)\n\\left(\n\\frac{x^2}{4}+\\frac{\\aleph} \n{2} {\\hat \\varphi^2(x)}\n\\right) \\right] dx.\n\\end{align}\nThe programs will write the value of energy as output. However, we shall\nnot study or tabulate the results for energy and we shall not write the\nexplicit algebraic expression of energy in the case of other trap\nsymmetries.\n\n\n\nThe GP equation (\\ref{sph}) with spherically-symmetric potential is\nalso an one-variable equation quite similar in structure to the\none-dimensional equation (\\ref{1d}) considered above. Hence the entire\nanalysis of Secs. \\ref{RT}, \\ref{IT}, and \\ref{CH} will be applicable\nin this case\nwith $\\varphi(x;t)$ replaced by $\\varphi(r;t)\/r$ in the\nnonlinear term. 
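\nIn the programs the integrals (\\ref{muz}) and (\\ref{energy}), and the analogous ones below, are evaluated directly on the space grid. A hedged sketch for the 1D case is given here for orientation; it assumes a real, normalized ground-state wave function CP on a uniform grid of spacing DX, uses a simple central difference instead of the Richardson formula employed by the distributed codes, and, for brevity, a trapezoidal quadrature where the codes use Simpson's rule (all names are illustrative):\n\\begin{verbatim}\n! Chemical potential: integral of (dCP\/dx)**2 + CP**2*(V + G*CP**2),\n! assuming CP is negligible at both ends of the grid.\n      DOUBLE PRECISION FUNCTION CHEMPOT(CP, V, G, DX, N)\n      IMPLICIT NONE\n      INTEGER :: N, I\n      DOUBLE PRECISION :: CP(0:N), V(0:N), G, DX, DCP, F(0:N)\n      F(0) = 0.0D0\n      F(N) = 0.0D0\n      DO I = 1, N-1\n         DCP = (CP(I+1) - CP(I-1))\/(2.0D0*DX)\n         F(I) = DCP**2 + CP(I)**2*(V(I) + G*CP(I)**2)\n      END DO\n      CHEMPOT = 0.0D0\n      DO I = 0, N-1\n         CHEMPOT = CHEMPOT + 0.5D0*DX*(F(I) + F(I+1))\n      END DO\n      END FUNCTION CHEMPOT\n\\end{verbatim}\nThe energy (\\ref{energy}) follows from the same sketch with G replaced by G\/2, and the radial expression (\\ref{mueq2}) below can be evaluated in the same manner.\n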
Now if we consider stationary states of the form\n$\\varphi(r,t)\\equiv \\hat \\varphi(r)\\exp (-\\mbox{i}\\mu t\/\\hbar)$,\n the\nexpression for the chemical potential\nbecomes\n\\begin{align}\\label{mueq2}\n\\mu =4\\pi \\int_{0}^\\infty \\left[\\biggr(\\frac{d\\hat \\varphi(r)}{dr}\n\\biggr)^2 +\\hat \\varphi^2(r)\n\\left(\n\\frac{r^2}{4}+\\aleph\n\\frac{\\hat \\varphi^2(r)}{r^2}\n\\right) \\right] dr.\n\\end{align}\nWe shall use Eqs. (\\ref{muz}) and (\\ref{mueq2}) for the calculation of\nchemical potential from Eqs. (\\ref{1d}) and (\\ref{sph}). The energy will\nbe calculated from Eq. (\\ref{energy}).\nThe expressions\nfor chemical potential in other cases can be written down in a\nstraight-forward fashion.\n\n\n\\section{Split-Step Crank-Nicolson method in two and three space\nvariables}\n\\label{CN23D}\n\n\\subsection{Anisotropic GP equation in 2D}\n\n\\label{anp}\n\nIn this case the GP equation (\\ref{2d}) can be written as\n\\begin{eqnarray}\n\\mbox{i}\n\\frac{\\partial}{\\partial t}\\varphi(x,y;t) & =&\n\\biggr[\n-\\frac{\\partial^2}{\\partial x^2}\n-\\frac{\\partial^2}{\\partial y^2}\n+ \\frac{1}{4} \\biggr( x^2 + \\kappa^2 y^2 \\biggr)\n+\\aleph \n|\\varphi(x,y;t)|^2 \\biggr]\n\\varphi(x,y;t) \\nonumber \\\\ & \\equiv & H \\varphi(x,y;t)\n,\\label{a2}\n\\end{eqnarray}\nwith $\\aleph \n= 4\\sqrt {2 \\pi\\lambda} {Na}\/{l}$.\nThe Hamiltonian $H$ can be conveniently broken\ninto three pieces $H=H_1+H_2+H_3$, where\n\\begin{align}\n& H_1= \\frac{1}{4} \\biggr(x^2+\\kappa^2 y^2 \\biggr)\n+\\aleph \n|\\varphi(x,y;t)|^2, \\\\\n& H_2=-\\frac{\\partial^2}{\\partial x^2}, \\;\\;\nH_3=-\\frac{\\partial^2}{\\partial y^2}. \\;\\;\n\\end{align}\nNow we adopt a policy quite similar to that elaborated in Sec.\n\\ref{RT} where the Hamiltonian was broken into two parts and where the\ntime propagation over time step $\\Delta$ using the two parts were\ncarried out alternatively. The same procedure will be adopted in the\npresent case where we perform the time propagation using the pieces\n$H_1$, $H_2$, and $H_3$ of the Hamiltonian successively in\nthree\nindependent time sub-steps $\\Delta$ to complete a single time\nevolution over time step $\\Delta$ of the entire GP Hamiltonian $H$.\nThe time propagation over $H_1$ is performed as in Eq. (\\ref{al1}) and\nthose over $H_2$ and $H_3$ as in Eqs. 
(\\ref{gp3}) and\n(\\ref{kn1}).\nThe chemical potential in this case for the stationary state\n$\\varphi(x,y;t)\\equiv \\hat \\varphi(x,y)\\exp(-\\mbox{i}\\mu t)$\ncan be\nwritten as\n\\begin{align}\n\\mu =\n\\int_{-\\infty}^\\infty dx \\int_{-\\infty}^\\infty dy\n \\left[ \\biggr(\\frac{{\\partial}\\hat \\varphi}{\n{\\partial}x} \n\\biggr)^2 +\n\\biggr(\\frac{{\\partial}\\hat \n\\varphi}{{\\partial}y}\n\\biggr)^2\n + \\hat \\varphi^2 \\left(\n\\frac{x^2+\\kappa^2 y^2}{4}+\\aleph \n{\\hat \\varphi^2} \\right)\n\\right] ,\n\\end{align}\n{\nwhere we have performed integrations by parts \nto obtain this form for the chemical potential in terms of first \nderivatives only.}\n\n\n\\subsection{Axially-symmetric GP equation in 3D}\n\nIn this case the GP equation (\\ref{axi}) can be written as\n\\begin{eqnarray}\n\\mbox{i}\n\\frac{\\partial}{\\partial t}\\varphi(\\rho,z;t) & =&\n\\biggr[\n-\\frac{\\partial^2}{\\partial \\rho^2}\n-\\frac{1}{\\rho}\\frac{\\partial}{\\partial \\rho}\n-\\frac{\\partial^2}{\\partial z^2}\n+ \\frac{1}{4} \\biggr( \\kappa^2\\rho^2 + \\lambda^2 z^2 \\biggr)\n+\\aleph \n|\\varphi(\\rho,z;t)|^2 \\biggr]\n\\varphi(\\rho,z;t) \\nonumber \\\\ & \\equiv & H \\varphi(\\rho,z;t).\n\\label{ax2}\n\\end{eqnarray}\nThe Hamiltonian $H$ can be conveniently broken\ninto three pieces $H=H_1+H_2+H_3$, where\n\\begin{align}\n& H_1= \\frac{1}{4} \\biggr( \\kappa^2 \\rho^2 + \\lambda^2 z^2 \\biggr)\n+\\aleph \n|\\varphi(\\rho,z;t)|^2, \\\\\n& H_2=-\\frac{\\partial^2}{\\partial\n\\rho^2}-\\frac{1}{\\rho}\\frac{\\partial}{\\partial \\rho}, \\;\\;\nH_3=-\\frac{\\partial^2}{\\partial z^2}. \\;\\;\n\\end{align}\nNow we adopt a policy quite similar to that elaborated in Sec.\n\\ref{anp} where the Hamiltonian was broken into three parts and where\nthe\ntime propagation over time step $\\Delta$ using the three parts was\ncarried out alternately.\nThe time propagation over $H_1$ is performed as in Eq. (\\ref{al1}) and\nthose over $H_2$ and $H_3$ as in Eqs. 
(\\ref{knx}) and\n(\\ref{kn1}).\nThe chemical potential in this case for the stationary state\n$\\varphi(\\rho,z;t)\\equiv \\hat \\varphi(\\rho,z)\\exp(-\\mbox{i}\\mu t)$\ncan be\nwritten as\n\\begin{align}\n\\mu = 2\\pi\n\\int_{0}^\\infty \\rho d\\rho \\int_{-\\infty}^\\infty dz\n \\left[ \\biggr(\\frac{{\\partial}\n\\hat \\varphi}{{\\partial}\\rho} \n\\biggr)^2 +\n\\biggr(\\frac{{\\partial}\\hat \n\\varphi}{{\\partial}z}\n\\biggr)^2\n + \\hat \\varphi^2 \\left(\n\\frac{\\kappa^2 \\rho^2+\\lambda^2 z^2}{4}+ \\aleph\n{\\hat \n\\varphi^2}\n\\right)\n\\right] ,\n\\end{align}{\nwhere we have again used integrations by parts to simplify the\nfinal expression.}\n\n\n\n\\subsection{Anisotropic GP equation in 3D}\n\nIn this case the GP equation (\\ref{ani}) can be written as\n\\begin{eqnarray}\n\\mbox{i}\n\\frac{\\partial}{\\partial t}\\varphi(x,y,z;t) & =&\n\\biggr[\n-\\frac{\\partial^2}{\\partial x^2}\n-\\frac{\\partial^2}{\\partial y^2}\n-\\frac{\\partial^2}{\\partial z^2}\n+ \\frac{1}{4} \\biggr(\\nu^2 x^2 + \\kappa^2 y^2 + \\lambda^2 z^2 \\biggr)\n+\\aleph \n|\\varphi(x,y,z;t)|^2 \\biggr]\n\\varphi(x,y,z;t) \\nonumber \\\\ & \\equiv & H \\varphi(x,y,z;t)\n,\\label{a3}\n\\end{eqnarray}\nwith $\\aleph\n= 8\\sqrt 2 \\pi {aN}\/{l}$.\nThe Hamiltonian $H$ can be conveniently broken\ninto four pieces $H=H_1+H_2+H_3+H_4$, where\n\\begin{align}\n& H_1= \\frac{1}{4} \\biggr(\\nu^2 x^2 + \\kappa^2 y^2 + \\lambda^2 z^2\n\\biggr)\n+\\aleph \n|\\varphi(x,y,z;t)|^2, \\\\\n& H_2=-\\frac{\\partial^2}{\\partial x^2}, \\;\\;\nH_3=-\\frac{\\partial^2}{\\partial y^2}, \\;\\;\nH_4=-\\frac{\\partial^2}{\\partial z^2}.\n\\end{align}\nNow we adopt a policy quite similar to that elaborated in Sec.\n\\ref{RT} where the Hamiltonian was broken into two parts and where the\ntime propagation over time step $\\Delta$ using the two parts were\ncarried out alternatively. The same procedure will be adopted in the\npresent case where we perform the time propagation using the pieces\n$H_1$, $H_2$, $H_3$ and $H_4$ of the Hamiltonian successively in four\nindependent time sub-steps $\\Delta$ to complete a single time\nevolution over time step $\\Delta$ of the entire GP Hamiltonian $H$.\nThe time propagation over $H_1$ is performed as in Eq. (\\ref{al1}) and\nthose over $H_2$, $H_3$ and $H_4$ as in Eqs. (\\ref{gp3}) and (\\ref{kn1}).\nThe chemical potential in this case for the stationary state\n$ \\varphi(x,y,z;t)\\equiv \\hat \\varphi(x,y,z)\\exp(-\\mbox{i}\\mu t)$\ncan\nbe\nwritten as\n\\begin{align}\n\\mu =\n\\int_{-\\infty}^\\infty dx \\int_{-\\infty}^\\infty dy \\int_{-\\infty}^\\infty\ndz \\left[ \\biggr(\\frac{{\\partial}\n\\hat \\varphi}{{\\partial}x} \n\\biggr)^2 +\n\\biggr(\\frac{{\\partial}\\hat \n\\varphi}{{\\partial}y}\n\\biggr)^2\n+ \\biggr(\\frac{{\\partial}\\hat \\varphi}\n{{\\partial}z} \\biggr)^2 + \n\\hat \\varphi^2 \\left(\n\\frac{\\nu^2x^2+\\kappa^2 y^2+\\lambda^2 z^2}{4}+\\aleph\n{\\hat \n\\varphi^2}\n\\right)\n\\right] ,\n\\end{align}{\nwhere we have again used integrations by parts to simplify the\nfinal expression.}\n\n\n\n\\section{Description of Numerical Programs}\n\\label{FOR}\n\n\n\\subsection{GP equation in one space variable}\n\nIn this subsection we describe six Fortran codes involving GP equation in one\nspace variable. These are programs for solving the 1D GP equation (program {\\bf\nimagtime1d.F}), the circularly-symmetric 2D GP equation (program {\\bf\nimagtimecir.F}) and the radially-symmetric 3D GP equation (program {\\bf\nimagtimesph.F}) using imaginary-time propagation. 
The similar routines using\nreal-time propagation are {\\bf realtime1d.F}, {\\bf realtimecir.F} and {\\bf\nrealtimesph.F}, respectively. These real- and imaginary-time routines in \none\nspace variable have similar structures. However, the wave function is real in\nimaginary-time propagation, whereas it is complex in real-time propagation. To\naccommodate this fact many variables in the real-time propagation routines are\ncomplex.\n\n\nThe principal variables employed in the MAIN program are: N = number of space\nmesh points, NSTP = number of time iterations during which the nonlinearity is\nintroduced in real-time propagation, (in imaginary-time propagation the\nnonlinearity is introduced in one step,) NPAS = number of subsequent time\niterations with fixed nonlinearity, NRUN = number of final time iterations with\n(a) fixed nonlinearity in imaginary-time propagation and (b) modified\nnonlinearity to study dynamics in real-time propagation, X(I) = space mesh,\nX2(I) = X(I)*X(I), V(I) = potential, CP(I) = wave function at space point X(I),\nG, G0 = coefficient of nonlinear term, MU = chemical potential, EN = energy,\nZNORM = normalization of wave function, RMS = rms size or radius, DX = space\nstep, DT = time step, OPTION and XOP decide which equation to solve. Also\nimportant is the subroutine INITIALIZE, where the space mesh X(I), potential \nV(I), and the initial wave function CP(I), are calculated. (An advanced user\nmay need to change the variables V(I) and CP(I) so as to adopt the program to\nsolve a different equation with different nonlinearity.) The functions and\nvariables not listed above are auxiliary variables, that the user should not\nneed to modify.\n\n\nNow we describe the function of the subroutines, which the user should not need\nto change. The subroutine NORM calculates by Simpson's rule the normalization \nof the wave function and sets the normalization to unity, the preassigned value.\nThe real-time propagation preserves the normalization of the wave function and\nhence the subroutine NORM is not used during time propagation. The subroutines\nRAD and CHEM calculate the rms size (length or radius), and chemical \npotential (and energy) of\nthe system. The subroutines COEF and LU together implement the time propagation\nwith the spatial and temporal time derivative terms. The subroutine NU performs\nthe time propagation with the nonlinear term and the potential. In the\nimaginary-time program the action of the subroutine LU does not preserve\nnormalization and hence each time the subroutine LU is called, the subroutine\nNORM has to be called to set the normalization of the wave function back to\nunity. (This is not necessary in the real-time programs which preserve the\nnorm.) The subroutine NONLIN calculates the nonlinear term. The subroutine DIFF\ncalculates the space derivative of the wave function by Richardson's\nextrapolation formula needed for the computation of the chemical potential and\nenergy. The function SIMP does the numerical space integrations with the\nSimpson's rule.\n\n\n\nThe programs implement the splitting method described in Sec.~\\ref{RT} and\n\\ref{gp2dc} and calculate the wave function, chemical potential, size,\nnormalization, etc. 
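\nAll of these quantities involve space integrals, which the function SIMP evaluates with Simpson's rule. For orientation, a minimal composite Simpson integrator is sketched below; it is a simplified, hypothetical version, and the interface of the SIMP function in the distributed codes may differ:\n\\begin{verbatim}\n! Composite Simpson rule for F(0:N) tabulated on a uniform grid of\n! spacing H; the number of intervals N is assumed to be even.\n      DOUBLE PRECISION FUNCTION SIMPSON(F, H, N)\n      IMPLICIT NONE\n      INTEGER :: N, I\n      DOUBLE PRECISION :: F(0:N), H, S\n      S = F(0) + F(N)\n      DO I = 1, N-1, 2\n         S = S + 4.0D0*F(I)\n      END DO\n      DO I = 2, N-2, 2\n         S = S + 2.0D0*F(I)\n      END DO\n      SIMPSON = S*H\/3.0D0\n      END FUNCTION SIMPSON\n\\end{verbatim}\nApplied to $|\\varphi|^2$, such an integral gives the normalization that the subroutine NORM resets to unity.\n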
The number of points in the one-dimensional space grid\nrepresented by the integer variable N has to be chosen consistent with space\nstep DX such that the total space covered N$\\times$DX is significantly larger\nthan the size of the condensate so that at the boundaries the wave \nfunction \nattains the asymptotic limits (e.g., the absolute value of the wave \nfunction or its space derivative becomes \nless than 10$^{-10}$ or so \nfor imaginary-time\npropagation and less than 10$^{-7}$ for real-time propagation. Note \nthat in the 1D and spherically-symmetric 3D problems we are using the \nasymptotic condition that the wave function is zero at both \nboundaries. In case of the circularly-symmetric 2D problem we use a mixed \nboundary condition, e.g., at origin the \nderivative of the wave function is zero and at infinity the wave \nfunction is zero.) In the\nimaginary-time routine the total space covered should be about 1.5 times the\nextension of the condensate. In the real-time routines, to obtain good \nprecision the total space covered\nshould be at least 2 times the extension of the condensate. \nThis is because the\nimaginary-time routine is more precise and the solution attains its \nlimiting asymptotic value at the\nboundary very rapidly as the total space covered is increased. The real-time\nroutine is less accurate and one has to go to a larger distance before the\nsolution or its derivative drops to zero. A couple of runs with a \nsufficiently large N and large\nDX are recommended to have an idea of the size\/extension of the system. \nA\nsmaller value of DX leads to a more accurate result provided an appropriate DT\nis chosen.\n\n\nThe Crank-Nicolson method, described, for example, by Eq. (\\ref{kn1}), is\nunconditionally stable for all $\\Delta\/h^2$. Nevertheless, for a numerical\napplication to a specific problem one has to fix the time step DT ($=\\Delta$) and\nspace step DX ($=h$) for good convergence. Space and time steps are given in the\nMAIN program, as ``DATA DX\/0.0025D0\/, DT\/0.00002D0\/\" with correlated DX and DT\nvalues obtained by trial. (If the user wants to use other values of DX and DT, a\nset of correlated values obtained by trial is given in Table \\ref{table1}.) The\ntotal number of space points N in calculation has to be fixed in the line,\ne.g., ``PARAMETER (N = 6000, N2 = N\/2, NX = N-1)\", in the MAIN program, so\nthat the final wave function is within the space range and its value is\nnegligibly small at the boundaries. The total numbers of time iterations are\nfixed in the line, e.g., ``PARAMETER (NSTP = 500000, NPAS = 10000, NRUN =\n20000)\", in the MAIN program as described below in Sec. \\ref{howto}.\n\n\nThe integer parameter OPTION should be set 1, 2 or 3 in the MAIN routine for\nsolution of equations of type (\\ref{sph3}), (\\ref{sph2}), or (\\ref{sph}), [or\nfor solution of equations of type (\\ref{1d3}), (\\ref{1d2}), or (\\ref{1d}),]\nrespectively. The difference between these three types of equations is in the\nvalues of the coefficients in the first two terms. The nonlinear term calculated\nin the subroutine NONLIN is of the form $|\\varphi(r,t)|^2$ with coefficient G.\nIf a different type of nonlinear term (with different functional dependence on\nthe wave function) is to be introduced, it should be done in the subroutine\nNONLIN. Otherwise, this subroutine should not be changed. 
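\nTo indicate the kind of change involved, a hypothetical sketch of a modified nonlinear term is shown below, written for the real wave function of the imaginary-time programs. It computes the factor P(I) that multiplies the wave function in the nonlinear term; the power ALPHA is purely illustrative (ALPHA = 1 recovers the GP term with coefficient G) and does not correspond to any particular physical model, and the interface may differ from that of the NONLIN subroutine in the distributed codes:\n\\begin{verbatim}\n! Illustrative nonlinear term: P(I) multiplies the wave function.\n! ALPHA = 1 gives the GP form G*CP(I)**2 used in the supplied programs.\n      SUBROUTINE NONLIN_SKETCH(CP, P, G, ALPHA, N)\n      IMPLICIT NONE\n      INTEGER :: N, I\n      DOUBLE PRECISION :: CP(0:N), P(0:N), G, ALPHA\n      DO I = 0, N\n         P(I) = G*(CP(I)**2)**ALPHA\n      END DO\n      END SUBROUTINE NONLIN_SKETCH\n\\end{verbatim}\n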
A different type of\nnonlinear term is appropriate for a Fermi super-fluid \\cite{ska1,ska2} or a\nTonks-Girardeau gas \\cite{tg}.\n\n\\subsection{GP equation in two and three space variables}\n\n\nIn addition to the programs in one space variable we have six programs in two\nand three space variables. The programs {\\bf imagtime2d.F} and {\\bf\nrealtime2d.F} apply to 2D Cartesian space using imaginary- and real-time\npropagations, respectively. Similarly, {\\bf imagtime3d.F} and {\\bf realtime3d.F}\napply to 3D Cartesian space using imaginary- and real-time propagation,\nrespectively. Finally, programs {\\bf imagtimeaxial.F} and {\\bf realtimeaxial.F}\napply to an axially symmetric trap in 3D. The Fortran 90\/95 versions of these\nprograms are somewhat more condensed and provide some advantage. Hence we also\ninclude these programs as {\\bf imagtime2d.f90}, {\\bf realtime2d.f90}, \n{\\bf\nimagtime3d.f90}, {\\bf realtime3d.f90}, {\\bf imagtimeaxial.f90}, and \n{\\bf\nrealtimeaxial.f90}. The output of the Fortran 90\/95 programs are \nidentical with\ntheir corresponding Fortran 77 versions. All these programs are written using a\nvery similar logic used in the programs in one space variable. So most of the\nconsiderations described earlier also applies to these cases. We describe the\nprincipal differences below.\n\nIn case of total number of space points N, one now has the variables NX, NY, and\nNZ for total number of space points in X, Y and Z directions in case of three\nspace variables. In case of two space variables the Z component is absent. These\nvariables\nshould be chosen equal to each other for a nearly symmetric case. However, if\nthe problem is anisotropic in space, these variables can and should be chosen\ndifferently. Similarly, the space variable X(I) is now replaced by space\nvariables X(I), Y(J), and Z(K) in three directions. Now there are space steps\nDX, DY, and DZ in place of DX in one space variable. The variables DX, DY, and\nDZ can now be chosen differently in three directions in case of an anisotropic\nproblem. However, they should be chosen together with NX, NY, and NZ so that\nthe wave function lies entirely inside the chosen space and becomes negligibly\nsmall at the boundaries. If the spatial extension of the wave function is much\nsmaller in one space direction than another, one should take a smaller space\nstep in the direction in which the spatial extension of the wave function is\nsmall. The potential V and wave function CP are now functions of 2 or 3\nspace variables and are represented by matrices in place of a column in the case\nof one space variable. The variables AL (or KAP) and BL (or LAM) now define the\nanisotropy of the trap and are used in the subroutine INITIALIZE to define the\ntrap. The subroutine LU is now replaced by LUX, LUY, and LUZ to implement the\nsolution in different space directions. In the imaginary-time propagation each\ntime the subroutines LUX, LUY, and LUZ operate, one has to set the\nnormalization of the wave function to unity by calling the subroutine NORM.\n\n\n\\subsection{Instruction to use the programs}\n\\label{howto}\n\nThe programs, as supplied, solve the GP equations for a specific value of\nnonlinearity and write the wave function, chemical potential, energy, rms size\nor radius, wave function at the center, and nonlinearity, for specific values\nof space and time steps. 
The real- and imaginary-time programs for a specific equation, for example, the spherically-symmetric 3D equation, employ a similar set of parameters, such as the space and time steps. The real-time programs use a larger value of N, so that the discretization covers a larger region in space. In all cases the supplied programs solve an equation of type (\\ref{sph2}) with a factor of 1\/2 in front of the space derivative, selected by setting the integer parameter OPTION = 2 in the MAIN routine. Other types of equations can be obtained by setting OPTION = 1 for equations of type (\\ref{sph3}) or = 3 for equations of type (\\ref{sph}).\n\n
For solving a stationary ground state problem, the imaginary-time programs are far more accurate and should be used. The real-time programs should be used for studying non-equilibrium problems, often using an initial wave function calculated by the imaginary-time program. The non-equilibrium problems include the study of soliton dynamics \\cite{ska3}, expansion \\cite{ska4}, collapse dynamics \\cite{ska5}, and other types of problems.\n\n
Each program is preset at a fixed nonlinearity G0 (= G), correlated DX-DT values and NSTP, NPAS, and NRUN. The smaller the steps DX and DT, the more accurate the result. The correlated values of DX and DT on a data line should be found by trial to obtain good convergence. Each supplied program produces results up to a desired precision consistent with the parameters employed $-$ G0, DX, DT, N, NSTP, NPAS, and NRUN. If the nonlinearity G0 is increased, one might need to increase N to achieve similar precision. In many cases one may need an approximate solution (with lower accuracy) involving less CPU time, or one may need to solve the GP equation for a different value of nonlinearity. \n\n
(a) If G0 is reduced, just change the card defining G0. However, if G0 is increased, changing the value of G0 may not be enough. For an increased G0, the wave function extends to a larger region in space. One may need to increase the ``Number of space mesh points\". For a new G0, just plot the output file fort.3 for all programs except realtime3d.F, realtime3d.f90, imagtime3d.F and imagtime3d.f90, where one should plot output files fort.11, fort.12, and fort.13, and see that the wave function is fully and adequately accommodated in the space domain. If not, one needs to increase the number in the input cards ``Number of space mesh points\" until the wave function is fully and adequately accommodated in the space domain. \n\n
(b) The CPU time involved can be reduced by sacrificing precision. This can be done by increasing the space step(s) and time step and reducing the ``Number of space mesh points\". If the space step DX is increased by a factor of f = 2, the number of space mesh points should be reduced by the same factor. The time step DT should be increased by a larger factor, more like f**2. The optimum increase in time step should be determined by some experimentation. (A set of correlated DX-DT values is given in Tables \\ref{table1} and \\ref{table4} for the solution of the corresponding equations.) The CPU time is largest in the case of the programs realtime3d.F, realtime3d.f90, imagtime3d.F and imagtime3d.f90, and below we give an example of changes for these cases which reduce the CPU time at the cost of some precision. For example, for realtime3d.F, just change the lines 15, 16, 17, and 39. 
The new\nset of lines should be\n\n PARAMETER (NX=40, NXX = NX-1, NX2 = NX\/2)\n\n PARAMETER (NY=32, NYY = NY-1, NY2 = NY\/2)\n\n PARAMETER (NZ=24, NZZ = NZ-1, NZ2 = NZ\/2)\n\n DATA DX \/0.5D0\/, DY \/0.5D0\/, DZ \/0.5D0\/, DT\/0.05D0\/\n\nFor imagtime3d.F, just change the lines 15, 16, 17, and 37. The new\nset of lines should be\n\n PARAMETER (NX=24, NXX = NX-1, NX2 = NX\/2)\n\n PARAMETER (NY=20, NYY = NY-1, NY2 = NY\/2)\n\n PARAMETER (NZ=16, NZZ = NZ-1, NZ2 = NZ\/2)\n\n DATA DX \/0.5D0\/, DY \/0.5D0\/, DZ \/0.5D0\/, DT\/0.04D0\/\n\nPlease verify, by running the corresponding programs, that the new \nresults so obtained are quite similar to the \nexisting results. \n\n\nThe integer parameter NSTP refers to the number of time iterations during which\nthe nonlinear term is slowly introduced in the real-time propagation. This\nnumber should be large (typically more than 100,000 for small nonlinearity, for\nlarger nonlinearity could be 1000,000 ) for good convergence; this means that\nthe nonlinearity should be introduced in small amounts over a large number of\ntime iterations. In real-time propagation, NPAS refers to certain number of time\niterations with the constant nonlinear term already introduced in NSTP and\nshould be small (typically 1000). NRUN refers to time iterations with a \nmodified\nnonlinearity so as to generate a non-equilibrium dynamics. In the imaginary-time\npropagation the parameters NPAS and NRUN refer to certain number of time\niterations with the constant nonlinear term already introduced in one step and\nshould be large (typically NPAS = 200,000 or more) for good convergence.\n\n\n\n\n\\subsection{Output files}\n\n\nThe programs write via statements WRITE(1,*), WRITE(2,*) WRITE(3,*) in Files 1,\n2, and 3, respectively, the initial stationary wave function and that after\nNSTP, and NPAS time iterations for real-time propagation and after NPAS and NRUN\ntime iterations for imaginary-time propagation. File 3 gives the final\nstationary wave function of the calculation. However, in the case of the\nanisotropic 3D programs realtime3d.F and imagtime3d.F, sections of the wave\nfunction as plotted in Fig. \\ref{fig4} (b) are written in Files 1, 2, and 3\nbefore NSTP (realtime3d.F) and after NPAS (imagtime3d.F) iterations, \nand in Files 11,\n12, and 13 after NPAS (realtime3d.F) and NRUN (imagtime3d.F) iterations.\n\nIn the real-time program a non-stationary oscillation is initiated by suddenly\nmodifying the nonlinearity from G to G\/2 after NPAS time iterations. During NRUN\ntime iterations the non-stationary dynamics is studied. The real-time programs\nwrite on File 8 the running time and rms size during non-stationary\noscillation.\n\n\n\nIn addition, these programs write in File 7 the values of nonlinearity G, space\nsteps DX, DY, DZ, time step DT, number of space mesh points N, number of time\niterations NSTP, NPAS, and NRUN together with the values of normalization of the\nwave function, chemical potential, energy, rms size, value of the wave function\nat center, nonlinearity coefficient G.\n\n\n\nBelow we provide some sample output listed on File 7 from the different programs\nusing OPTION = 2. 
File 7 (fort.7) represents the comprehensive result in each\ncase.\n\n\n(1) Program {\\bf imagtime1d.F}\n\n\\begin{verbatim}\n\n OPTION = 2\n\n# Space Stp N = 8000\n# Time Stp : NPAS = 200000, NRUN = 20000\n Nonlinearity G = 62.74200000\n Space Step DX = 0.002500, Time Step DT = 0.000020\n\n ----------------------------------------------------\n Norm Chem Ener Psi(0)\n ----------------------------------------------------\nInitial : 1.0000 0.500000 0.500000 0.70711 0.75113\nAfter NPAS iter.: 0.9996 10.369462 6.256976 2.04957 0.40606\nAfter NRUN iter.: 0.9996 10.369462 6.256976 2.04957 0.40606\n ----------------------------------------------------\n\n\\end{verbatim}\n(2) Program {\\bf imagtimesph.F}\n\n\\begin{verbatim}\n\n OPTION = 2\n\n# Space Stp N = 3000\n# Time Stp : , NPAS = 200000, NRUN = 20000\n Nonlinearity G = 125.48400000\n Space Step DX = 0.002500, Time Step DT = 0.000020\n\n ----------------------------------------------------\n Norm Chem Ener Psi(0)\n ----------------------------------------------------\nInitial : 1.0000 1.500000 1.500000 1.22474 0.42378\nAfter NPAS iter.: 0.9998 4.014113 3.070781 1.88214 0.17382\nAfter NRUN iter.: 0.9998 4.014113 3.070781 1.88214 0.17382\n ----------------------------------------------------\n\\end{verbatim}\n\n(3) Program {\\bf imagtimecir.F}\n\\begin{verbatim}\n\n OPTION = 2\n\n# Space Stp N = 2000\n# Time Stp : , NPAS = 200000, NRUN = 200000\n Nonlinearity G = -2.50970000\n Space Step DX = 0.002500, Time Step DT = 0.000020\n\n ----------------------------------------------------\n Norm Chem Ener Psi(0)\n ----------------------------------------------------\nInitial : 1.0000 1.000000 1.000000 1.00000 0.56419\nAfter NPAS iter.: 1.0000 0.499772 0.770107 0.87758 0.67535\nAfter NRUN iter.: 1.0000 0.499772 0.770107 0.87758 0.67535\n ----------------------------------------------------\n\n\\end{verbatim}\n\n\n(4) Program {\\bf imagtime2d.F and imagtime2d.f90}\n\\begin{verbatim}\n\n\n OPTION = 2\n Anisotropy AL = 2.000000\n\n# Space Stp NX = 800, NY = 800\n# Time Stp : NPAS = 30000, NRUN = 5000\n Nonlinearity G = 12.54840000\n Space Step DX = 0.020000, DY = 0.020000\n Time Step DT = 0.000100\n\n -----------------------------------------------------\n Norm Chem Ener Psi(0,0)\n -----------------------------------------------------\nInitial : 1.0000 1.500000 1.500000 0.86603 0.67094\nAfter NPAS iter.: 0.9999 3.254878 2.490493 1.17972 0.46325\nAfter NRUN iter.: 0.9999 3.254878 2.490493 1.17972 0.46325\n -----------------------------------------------------\n\n\\end{verbatim}\n\n\n(5) Program {\\bf imagtimeaxial.F and imagtimeaxial.f90}\n\\begin{verbatim}\n OPTION = 2\n Anisotropy KAP = 1.000000, LAM = 4.000000\n\n# Space Stp NX = 500, NY = 500\n# Time Stp : NPAS = 100000, NRUN = 20000\n Nonlinearity G = 18.81000000\n Space Step DX = 0.020000, DY = 0.020000\n Time Step DT = 0.000040\n\n ------------------------------------------------------------------\n Norm Chem Energy psi(0)\n ------------------------------------------------------------------\nInitial : 1.000 3.00000 3.00000 1.00000 0.35355 1.06066 0.59883\nNPAS iter : 1.000 4.36113 3.78228 1.32490 0.38049 1.37846 0.38129\nNRUN iter : 1.000 4.36113 3.78228 1.32490 0.38049 1.37846 0.38129\n ------------------------------------------------------------------\n\n\\end{verbatim}\n\n\n(6) Program {\\bf imagtime3d.F and imagtime3d.f90}\n\\begin{verbatim}\n OPTION = 2\n Anisotropy AL = 1.414214, BL = 2.000000\n\n# Space Stp NX = 240, NY = 200, NZ = 160\n# Time Stp : NPAS = 5000, NRUN = 500\n Nonlinearity G = 
44.90700000\n Space Step DX = 0.050000, DY = 0.050000, DZ = 0.050000\n Time Step DT = 0.000400\n\n ------------------------------------------------------\n Norm Chem Ener Psi(0,0,0)\n ------------------------------------------------------\nInitial : 1.0000 2.2071 2.2071 1.0505 0.5496\nAfter NPAS iter.: 0.9997 4.3446 3.4862 1.4583 0.2888\nAfter NRUN iter.: 0.9997 4.3446 3.4862 1.4583 0.2888\n ------------------------------------------------------\n\n\n\\end{verbatim}\n\n\n(7) Program {\\bf realtime1d.F}\n\\begin{verbatim}\n OPTION = 2\n\n# Space Stp N = 5000\n# Time Stp : NSTP 1000000 , NPAS = 1000, NRUN = 40000\n Nonlinearity G = 62.74200000\n Space Step DX = 0.010000, Time Step DT = 0.000100\n\n ----------------------------------------------------\n Norm Chem Ener Psi(0)\n ----------------------------------------------------\nInitial : 1.0000 0.500 0.500 0.707 0.751\nAfter NSTP iter.: 1.0000 10.368 6.257 2.050 0.406\nAfter NPAS iter.: 1.0000 10.375 6.257 2.047 0.406\n ----------------------------------------------------\n\n\n\n\\end{verbatim}\n\n(8) Program {\\bf realtimesph.F}\n\\begin{verbatim}\n OPTION = 2\n\n# Space Stp N = 2000\n# Time Stp : NSTP = 1000000, NPAS = 1000, NRUN = 40000\n Nonlinearity G = 125.48400000\n Space Step DX = 0.010000, Time Step DT = 0.000100\n\n ----------------------------------------------------\n Norm Chem Ener Psi(0)\n ----------------------------------------------------\nInitial : 1.0000 1.500 1.500 1.225 0.424\nAfter NSTP iter.: 1.0000 4.015 3.071 1.881 0.174\nAfter NPAS iter.: 1.0000 4.011 3.071 1.884 0.174\n ----------------------------------------------------\n\n\\end{verbatim}\n\n(9) Program {\\bf realtimecir.F}\n\\begin{verbatim}\n OPTION = 2\n\n# Space Stp N = 2000\n# Time Stp : NSTP = 1000000, NPAS = 1000, NRUN = 40000\n Nonlinearity G = 12.54840000\n Space Step DX = 0.010000, Time Step DT = 0.000100\n\n ----------------------------------------------------\n Norm Chem Ener Psi(0)\n ----------------------------------------------------\nInitial : 1.0000 1.000 1.000 1.000 0.564\nAfter NSTP iter.: 1.0000 2.255 1.708 1.308 0.391\nAfter NPAS iter.: 1.0000 2.255 1.708 1.308 0.391\n ----------------------------------------------------\n\\end{verbatim}\n\n\n\n(10) Program {\\bf realtime2d.F and realtime2d.f90}\n\\begin{verbatim}\n\n OPTION = 2\n Anisotropy AL = 1.000000\n\n# Space Stp NX = 200, NY = 200\n# Time Stp : NSTP = 100000, NPAS = 1000, NRUN = 5000\n Nonlinearity G = 12.54840000\n Space Step DX = 0.100000, DY = 0.100000\n Time Step DT = 0.001000\n\n -----------------------------------------------------\n Norm Chem Ener Psi(0,0)\n -----------------------------------------------------\nInitial : 1.000 1.000 1.000 1.000 0.564\nAfter NSTP iter.: 1.000 2.256 1.708 1.307 0.392\nAfter NPAS iter.: 1.000 2.257 1.708 1.305 0.392\n -----------------------------------------------------\n\n\n\n\n\n\\end{verbatim}\n\n\n\n(11) Program {\\bf realtimeaxial.F and realtimeaxial.f90}\n\\begin{verbatim}\n\n OPTION = 2\n Anisotropy KAP = 1.000000, LAM = 4.000000\n\n# Space Stp NX = 130, NY = 130\n# Time Stp : NSTP = 100000, NPAS = 1000, NRUN = 20000\n Nonlinearity G = 18.81000000\n Space Step DX = 0.100000, DY = 0.100000\n Time Step DT = 0.001000\n\n -------------------------------------------------------\n Norm Chem Energy psi(0)\n -------------------------------------------------------\nInitial : 1.000 3.000 3.000 1.000 0.354 0.587\nNSTP iter : 0.999 4.362 3.782 1.323 0.381 0.376\nNPAS iter : 1.000 4.362 3.782 1.327 0.379 0.376\n 
-------------------------------------------------------\n\n\n\\end{verbatim}\n\n\n(12) Program {\\bf realtime3d.F and realtime3d.f90}\n\\begin{verbatim}\n OPTION = 2\n Anisotropy AL = 1.414214, BL = 2.000000\n\n# Space Stp NX = 200, NY = 160, NZ = 120\n# Time Stp : NSTP = 60000, NPAS = 1000, NRUN = 4000\n Nonlinearity G = 22.45400000\n Space Step DX = 0.100000, DY = 0.100000, DZ = 0.100000\n Time Step DT = 0.002000\n\n ------------------------------------------------------\n Norm Chem Ener Psi(0,0,0)\n ------------------------------------------------------\nInitial : 1.000 2.207 2.207 1.051 0.550\nAfter NSTP iter.: 1.000 3.572 2.992 1.321 0.347\nAfter NPAS iter.: 1.000 3.572 2.992 1.320 0.347\n ------------------------------------------------------\n\n\n\\end{verbatim}\n\n\n\\label{5.3}\n\n\n\n\\section{Numerical Results}\n\\label{NUM}\n\\subsection{Stationary Problem}\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{Convergence of result in the 1D (X) and\nradially-symmetric 3D ($r$) cases for nonlinearities $\\aleph \n=627.42$ and\n627.4, calculated\nusing the imaginary-time propagation programs imagtime1d.F [Eq.\n(\\ref{1d2}), OPTION 2]\nand imagtimesph.F [Eq. (\\ref{sph2}), OPTION 2]\nrespectively, for various space step DX and time step DT.\n}\n\\label{table1}\n\\begin{tabular}{|r|r|r|r|r|r|}\n\\hline\n$\\aleph $ \n& DX & DT &\n$\\varphi(0)\/\\phi(0)$ & $x_{\\mathrm{rms}}\/r_{\\mathrm{rms}}$ &\n{$\\mu$ } \\\\\n\\hline\n 627.42(X) & 0.08 & 0.005 & 0.276647 & 4.384825 & 48.024062 \\\\\n 627.42(X) & 0.04 & 0.001 &0.276648 &4.384744 & 48.024389 \\\\\n 627.42(X) & 0.02 &0.0005 &0.276649& 4.384734 & 48.024429 \\\\\n 627.42(X) & 0.01 & 0.0001 &0.276649& 4.384726 & 48.024462 \\\\\n 627.42(X) & 0.005 & 0.00005 &0.276649 & 4.384725 &\n48.024466 \\\\\n 627.42(X) & 0.0025 & 0.00002 &0.276649 & 4.384724 &\n48.024468 \\\\\n 627.42(X) & 0.001 & 0.00001 & 0.276649 & 4.384724 &\n48.024468\n\\\\\n\\hline\n 627.4($r$) &0.08 & 0.005 & 0.106655 & 2.506348 &\n7.247479\n\\\\\n 627.4($r$) &0.04 & 0.001 & 0.106679 & 2.505886 &\n7.248206\n\\\\\n 627.4($r$) & 0.02 & 0.0005 & 0.106684& 2.505833 & 7.248292\n\\\\\n 627.4($r$) & 0.01& 0.0001 &0.106686& 2.505785 & 7.248365\n\\\\\n 627.4($r$) & 0.005 & 0.00005 &0.106686 & 2.505780 & 7.248374\n\\\\\n 627.4($r$) & 0.0025 &0.00002 &0.106686 & 2.505776 &\n7.248380 \\\\\n 627.4($r$) & 0.001 & 0.00001 & 0.106686 & 2.505776 &\n7.248380 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[tbp] \\begin{center}\n{\\includegraphics[width=.8\\linewidth]{Fig1.eps}}\n\\end{center}\n\\caption{(Color online) Relative percentage error\nas a function of\nspace step\nfor one-dimensional (1D) and spherically-symmetric 3D (radial)\nmodels with nonlinearity $\\aleph\n=627.42$ and 627.4, respectively,\ncalculated using the imaginary-time propagation programs imagtime1d.F\n[Eq. (\\ref{1d2}), OPTION 2]\nand imagtimesph.F [Eq. (\\ref{sph2}), OPTION 2]\nwith the\ndata of\nTable \\ref{table1}.\n}\n\\label{fg1}\n\\end{figure}\n\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{The chemical potential $\\mu$, rms radius {$r_{\\mathrm{rms}}$}, and the wave function\n$\\phi(0)$ at the center for various\nnonlinearities in the radially symmetric 3D case calculated using the\nprogram imagtimesph.F [Eq. (\\ref{sph2}), OPTION 2]. Table completed with space step\nDR $\n\\le 0.0025$\nand DT $=0.00002$. 
}\n\\label{table2}\n\\begin{tabular}{|r|r|r|r|r|r|r|}\n\\hline\n{$ \\aleph $\n} & {$\\phi(0)$} &\n$\\phi(0)$ \\cite{Bao_Tang}&\n{$r_{\\mathrm{rms}}$} &\n$r_{\\mathrm{rms}}$ \\cite{Bao_Tang}&\n{$\\mu$} &{$\\mu$ \\cite{Tiwari_Shukla,Bao_Tang}} \\\\\n\\hline\n-3.1371 & 0.48792(1) & 0.4881 & 1.51213(1)& 1.1521& 1.265184(2) &\n$1.2652$ \\\\\n 0 & 0.42378 & 0.4238& 1.22474& 1.2248 & 1.500000 &1.5000\n\\\\\n 3.1371 &0.38425(1)& 0.3843 &1.27857(1) &1.2785 & 1.677451(1) &\n1.6774\\\\\n 12.5484 & 0.31800(1) & 0.3180 & 1.39211(1) & 1.3921 &\n2.065018(1) &\n2.0650 \\\\\n 31.371 & 0.25810(1)& 0.2581 & 1.53561(1)& 1.5356 & 2.586116(1) &\n2.5861 \\\\\n125.484 & 0.17382(1) & 0.1738 & 1.88215(1) &1.8821 & 4.014113(2)\n&4.0141 \\\\\n 627.4 & 0.10669(1) &0.1066 & 2.50578(1) &2.5057 & 7.248380(3) &\n7.2484\n\\\\\n 3137.1 &0.06559(1) &0.0655 & 3.41450(1) &3.4145 & 13.553403(4) &\n13.553\n\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn this subsection we present results for the stationary ground \nstate problem\ncalculated with the imaginary-time programs.\nAll numerical results presented in this paper are for the\nnonlinear equations with a factor of 1\/2 in front of the gradient term\nobtained by choosing OPTION = 2 in the MAIN.\nFirst we consider the numerical result for the simplest cases $-$ the\n1D and the radially-symmetric 3D problems by\nimaginary-time propagation with nonlinearities $\\aleph\n=$ 627.42 \nand\n627.4, respectively. The calculations were performed with different\nspace and time steps DX and DT, respectively. (As DX is reduced, DT\nshould be reduced also to have good convergence. The correlated DX-DT\nvalues were obtained by trial to achieve good convergence.)\nIn each case a\nsufficiently large number of space points N is to be taken, so that the\nspace domain of integration covers the extension of the wave function adequately.\nWe exhibit in Table \\ref{table1}, the results for the wave function at\ncenter, rms\nsize, and chemical potential in these two cases for a fixed\nnonlinearity for different DX and DT.\nWe find that convergence is achieved up to six significant digits after the\ndecimal point\nwith space step DX = 0.0025 and time step DT = 0.00002.\nIn Fig. \\ref{fg1} we plot the relative percentage error in chemical\npotential $\\mu$ for various space steps. The percentage error rapidly\nreduces as space step is reduced.\n\nIn Table \\ref{table2} we exhibit the chemical potential, rms radius, and\nthe wave function at the center for various nonlinearities in the\nspherically symmetric 3D case using space step\nDX = 0.0025 and time step DT = 0.00002 in the imaginary-time\npropagation program. From the\ntable we find that the results are in agreement with those calculated\nin Refs. \\cite{Tiwari_Shukla,Bao_Tang}.\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{The chemical potential $\\mu$,\nrms size {$x_{\\mathrm{rms}}$}, and the wave function\n$\\varphi(0)$ at the center for various\nnonlinearities\n in the 1D case calculated using the program imagtime1d.F\n[Eq. (\\ref{1d2}), OPTION 2]. 
Table\ncompleted with DX $\\le 0.0025$ and\nDT $=0.00002$.\n}\n\\label{table3}\n\\begin{tabular}{|r|r|r|r|r|r|r|}\n\\hline\n{$\\aleph$}\n& {$\\varphi(0)$} &\n$\\varphi(0)$ \\cite{Bao_Tang}&\n{$x_{\\mathrm{rms}}$} &\n$x_{\\mathrm{rms}}$ \\cite{Bao_Tang}&\n{$\\mu$} &{$\\mu$ \\cite{Bao_Tang}} \\\\\n\\hline\n -2.5097 & 0.91317(1) & 0.9132 & 0.51334(1)& 0.5133& $-0.80623(3)$ &\n$-0.8061$ \\\\\n 0 & 0.75112 & 0.7511& 0.70711& 0.7071 & 0.500000 &0.5000 \\\\\n 3.1371 &0.64596(1)& 0.6459 &0.89602(1)&0.8960 &$1.526593(3)$ &\n1.5265\\\\\n 12.5484 & 0.52975(1)& 0.5297 & 1.24549(1)& 1.2454 & 3.596560(2) &\n3.5965 \\\\\n 31.371 & 0.45567(1)&0.4556 & 1.64170(1)&1.6416 & $6.552682(2)$ &\n6.5526 \\\\\n62.742& 0.40606(1)& 0.4060 & 2.04957(1) & 2.0495 & $10.369462(2)$ &\n10.369\n\\\\\n156.855 & 0.34856(1) & 0.3485 & 2.76794(1) &2.7679 &$19.070457(2) $\n&19.0704 \\\\\n 313.71 & 0.31053(1) & 0.3105 &3.48237(1) &3.4823 & 30.259178(3) &\n30.259\n\\\\\n 627.42 & 0.27665(1) &0.2766 & 4.38472(1) &4.3847 & 48.024468(3) &\n48.024\n\\\\\n 1254.8 & 0.24647(1) &0.2464 & 5.52282(1) &5.5228 & 76.226427(3)&\n76.226\n\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\n\n\nIn Table \\ref{table3} we exhibit the chemical potential, rms size, and\nthe wave function at the center for various nonlinearities in the\n 1D case using space step\nDX = 0.0025 and time step DT = 0.00002. From the\ntable we find that the results are in agreement with those calculated\nin Ref. \\cite{Bao_Tang}.\n\n\n\n\n\\begin{figure}[tbp] \\begin{center}\n{\\includegraphics[width=.49\\linewidth]{fig2a.ps}}\n{\\includegraphics[width=.49\\linewidth]{fig2b.ps}}\n\\end{center}\n\\caption{(Color online) Plot of wave function profile for the (a)\nradially-symmetric 3D case [Eq. (\\ref{sph2}), using imagtimesph.F,\nOPTION 2] and (b) 1D\ncase [Eq. (\\ref{1d2}), using imagtime1d.F, OPTION 2]. The\ncurves are\nlabeled by their respective nonlinearities as tabulated in Tables\n\\ref{table2} and \\ref{table3}, respectively.}\n\\label{fig2}\n\\end{figure}\n\n\n\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{Convergence of results for chemical potential $\\mu$, rms size $r_{\\mathrm{rms}}$ and\nthe wave function $\\phi(0)$ at the center\nin the Cartesian 2D\n case for a nonlinearity $\\aleph\n=12.5484$ and anisotropy\n$\\kappa =1$ for different space steps $h\\equiv$ DX = DY obtained from\nthe program imagtime2d.F\n[using Eq. (\\ref{2d2}), OPTION 2].}\n\\label{table4}\n\\begin{tabular}{|r|r|r|r|r|r|}\n\\hline\n$\\aleph\n$ & DX=DY & DT &\n$\\varphi(0)$ &\n$r_{\\mathrm{rms}}$ &\n{$\\mu$ } \\\\\n\\hline\n 12.5484 & 0.10 & 0.01 &0.39189(2) & 1.30678(3) & 2.2559 \\\\\n 12.5484 & 0.08 & 0.005 &0.39190(2) & 1.30685(3) & 2.2559 \\\\\n 12.5484 & 0.06 & 0.003 &0.39190(2) & 1.30690(3) & 2.25582(3)\n\\\\\n 12.5484 & 0.04 & 0.001 & 0.39190(2) & 1.30693(2) &\n2.25579(2)\n\\\\\n 12.5484 & 0.02 & 0.0005 &0.39190(2) & 1.30687(2) &\n2.25583(2)\n\\\\\n 12.5484 & 0.015 & 0.0003 &0.39190(2) & 1.30687(2) &\n2.25583(2)\n\\\\\n 12.5484 & 0.01 & 0.0001 &0.39190(2) & 1.30687(2) &\n2.25583(2)\n\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\nIn Figs. \\ref{fig2} (a) and (b) we plot the wave function profiles for\nthe radially-symmetric 3D and 1D cases calculated using the\nprograms imagtimesph.F and imagtime1d.F, respectively, for\ndifferent nonlinearities presented in Tables \\ref{table2} and\n\\ref{table3}. 
As the nonlinearity is increased the system becomes more\nrepulsive and the wave function extends to a larger domain in space.\n\n\n\nIn Table \\ref{table4}\nwe present the results for the wave function at\ncenter, rms\nradius, and chemical potential for the Cartesian 2D case with\nnonlinearity $\\aleph \n$ = 12.5484 using the imaginary-time \npropagation\nprogram imagtime2d.F.\nWe find that desired convergence is achieved with space step DX =\n0.02. (Note that the converged result in this case is less accurate\nthan those in Tables \\ref{table1}, \\ref{table2} and \\ref{table3} as we\nhave used a larger space step in order to keep the CPU time small. A\nfiner mesh will increase the accuracy requiring a larger CPU time.)\n\n\n\n\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{The chemical potential $\\mu$, rms size $r_{\\mathrm{rms}}$,\nand the wave function\n$\\varphi(0)$ at the center for various\nnonlinearities in the anisotropic 2D case obtained using the programs\nimagtime2d.F [Eq. (\\ref{2d2}), OPTION 2] and imagtimecir.F [Eq.\n(\\ref{cir2}), OPTION 2]. The\ncase $\\kappa=1$\nrepresents\ncircular symmetry and $\\kappa\\ne 1$ corresponds to anisotropy.\nTable completed with\nDX = DY $\n\\le 0.02$\nand DT $=0.0001$ for imagtime2d.F and space step DR $\\le 0.0025$ and DT $=0.00002$ for\nimagtimecir.F. }\n\\label{table5}\n\\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|r|}\n\\hline\n$\\kappa$ &\n{$\\aleph$ \n} &\n\\multicolumn{1}{c}{} &\n\\multicolumn{1}{c}{$\\varphi(0)$} &\n &\n\\multicolumn{1}{c}{} &\n\\multicolumn{1}{c}{$r_{\\mathrm{rms}}$} &\n&\n\\multicolumn{1}{c}{} &\n\\multicolumn{1}{c}{$\\mu $} &\n \\\\\n\\hline\n& & anisotropic &circular &\\cite{Bao_Tang} &anisotropic &circular\n&\\cite{Bao_Tang}\n&anisotropic &circular & \\cite{Bao_Tang}\\\\\n\\hline\n1&$-2.5097$ & 0.6754&0.67532(3) & 0.6754 & 0.87759(2)& 0.87758(1) &\n0.8775 &\n0.49978(3) &0.49978(1)\n&\n0.4997\\\\\n1& 0 & 0.5642& 0.56419(1) & 0.5642& 1.00000 & 1.00000\n&1.0000 & 1.00000 & 1.000000 &\n1.0000\n\\\\\n1& 3.1371 &0.4913& 0.49128(1) & 0.4913 &1.10513(2) & 1.10515(1)\n&1.1051 & 1.42005(1)\n&1.420054(3) &\n1.4200\\\\\n1& 12.5484 & 0.3919& 0.39190(2) & 0.3919 & 1.30687(2)& 1.30686(1)\n& 1.3068 &\n2.25583(1) &\n 2.255840(3)&\n2.2558 \\\\\n$\\sqrt 2$& 12.5484 & 0.4267& & & 1.22054(2) & & &\n2.69607(1) & & \\\\\n$ 2$& 12.5484 & 0.4633(1)& & & 1.17972(2) & & &\n3.25488(1) & & \\\\\n1& 62.742 & 0.2676& 0.26760(3) &0.2676 & 1.78817(2)&1.78816(1)\n&1.7881 & 4.60982(1)\n& 4.609831(3) &\n4.6098 \\\\\n$1\/\\sqrt 2$& 62.742 & 0.2453& & & 1.99987(2)& & & 3.88210(2) &\n&\n\\\\\n1\/ 2& 62.742 & 0.2249& & & 2.34157(2)& & & 3.27923(2) & & \\\\\n1&313.71 & 0.1787 & 0.17872(3) &0.1787 & 2.60441(2) & 2.60441(1)\n&2.6044 &\n10.06825(3)&10.068262(5)\n&10.068 \\\\\n1& 627.42 & 0.1502 &0.15024(3) &0.1502 & 3.08453(2) &3.08453(2)\n&3.0845 &\n14.18922(3)\n& 14.189228(5)&\n14.1892\n\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n\\begin{figure}[tbp] \\begin{center}\n{\\includegraphics[width=.49\\linewidth]{fig3a.ps}}\n{\\includegraphics[width=.49\\linewidth]{fig3b.ps}}\n\\end{center}\n\\caption{(Color online) Plot of wave function profile for the (a)\ncircularly-symmetric [Eq. (\\ref{cir2}), OPTION 2]\nand\n(b)\nanisotropic 2D cases [Eq. 
(\\ref{2d2}), OPTION 2]\nwith nonlinearity $\\aleph\n=12.5484$ and anisotropy\n$\\kappa=2$ using programs imagtimecir.F and imagtime2d.F, respectively.\nCurves in (a) are labeled by the respective nonlinearities\nas in Table \\ref{table5}.\n}\n\\label{fig3}\n\\end{figure}\n\n\n\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{The chemical potential $\\mu$, rms sizes,\nand the wave function\n$\\varphi(0)$ at the center for various\nnonlinearities in the axially-symmetric 3D case\nfor $^{87}$Rb atoms in an axial trap\nwith $\\lambda=4,\\kappa=1$\nobtained using the\nprogram\nimagtimeaxial.F [Eq. (\\ref{axi2}), OPTION 2]. The axial frequency is\ntaken as\n$\\omega_z=80\\pi $ Hz,\n$m(^{87}$Rb$)=1.44\\times 10^{-25}$ kg,\n$l=\\sqrt{\\hbar\/m\\omega_x}=0.3407\\times 10^{-5}$ m, $a=5.1$ nm,\nand the ratio between scattering and oscillator length\n$4\\pi a\/l=0.01881$.\nTable completed with DZ\n$ \\le 0.02$, D$\\rho \\le 0.02$ and\nand DT $=0.00004$.}\n\\label{table6}\n\\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}\n\\hline\n$N$ &\n${\\aleph}$ &\n$\\varphi(0)$ &\n$\\varphi(0)$\\cite{Bao_Tang} &\n$ \\rho _{\\mathrm{rms}}$ &\n$\\rho_{\\mathrm{rms}}$\\cite{Bao_Tang} &\n$ z_{\\mathrm{rms}}$ &\n$ z_{\\mathrm{rms}}$\\cite{Bao_Tang} &\n$\\mu $ &$\\mu $ \\cite{Bao_Tang}\\\\\n\\hline\n0& 0 & 0.5993(1) & 0.602 & 1.0000 & 1.000 & 0.3536 & 0.3539 &\n3.0000 & 3.0000 \\\\\n1000& 18.81 & 0.3813(2) &0.3824 &1.3249 &1.325 &0.3805 & 0.3807\n& 4.3611 & 4.362\\\\\n5000& 94.05 & 0.2474 &0.2477 & 1.7742 &1.7742 &0.4212 &0.4214\n& 6.6797 & 6.680 \\\\\n10000& 188.1 & 0.2021 &0.2023 &2.0411 &2.041 & 0.4496 &0.4497\n& 8.3671 & 8.367 \\\\\n50000& 940.5 & 0.1247 &0.1248 &2.8424 & 2.842 &0.5531 & 0.5532\n& 14.9487 &14.95 \\\\\n100000& 1881 & 0.1011 &0.1012 &3.2758 & 3.276 &0.6173 & 0.6174\n& 19.4751 & 19.47 \\\\\n400000& 7524 & 0.0666 & 0.0666 &4.3408 &4.341 &0.7881 & 0.7881\n& 33.4677 & 33.47 \\\\\n800000& 15048 & 0.0540 & 0.0540 &4.9922 &4.992 & 0.8976 &\n0.8976\n& 44.0234 & 43.80 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\\begin{figure}[tbp] \\begin{center}\n{\\includegraphics[width=.49\\linewidth]{fig4a.ps}}\n{\\includegraphics[width=.49\\linewidth]{fig4b.ps}}\n\\end{center}\n\\caption{(Color online) Plot of wave function profile for the (a)\naxially-symmetric 3D case with nonlinearity $\\aleph\n=1881$\nanisotropy\n$\\lambda=4, \\kappa=1$ using imagtimeaxial.F [Eq. (\\ref{axi2}), OPTION\n2]\nand (b) anisotropic 3D case with nonlinearity $\\aleph\n=22.454,\n359.26,$ and 11496.3 and anisotropy $\\nu=1, \\lambda =\\sqrt 2$ and $\\kappa =2$\n using imagtime3d.F [Eq. (\\ref{ani2}), OPTION 2].\nIn the 3D case only the sections $\\varphi(x,0,0), \\varphi(0,y,0)$ and\n$\\varphi(0,0,z)$ of the wave functions are plotted.}\n\\label{fig4}\n\\end{figure}\n\n\n\n\n\n\n\\begin{table}[!ht]\n\\begin{center}\n\\caption{The chemical potential $\\mu,$ rms sizes, and wave function \n$\\varphi(0)$ at the center for various number $N$ of\ncondensate of Na\natoms. The constants used are $m$(Na) $= 38.175\\times 10^{-27}$ kg,\n$a$(Na)\n = 2.75 nm. 
In all cases the input to numerical calculation was the\nnonlinearity\ncoefficient $\\aleph\n=4\\pi Na\/l$ shown below.\nFor the\nspherically-symmetric\ncase, we solved the radially symmetric program imagtimesph.F [Eq.\n(\\ref{sph2}), OPTION 2],\n(in\naddition to the 3D\nanisotropic program imagtime3d.F\nsetting equal frequencies in all three directions\nwith DX = DY =DZ = 0.05 and DT = 0.0004)\nusing \\cite{Schneider_Feder,hau}\n$\\omega_0^S=87$ rad\/s,\n$\nl=\\sqrt{\\hbar\/m\\omega_0^S}= 5.635$ $\\mu$m, DR $\\le 0.0025$ and\nDT = 0.00002.\nFor the fully\nanisotropic case we used the program imagtime3d.F [Eq.\n(\\ref{ani2}), OPTION 2] with \\cite{Schneider_Feder,kozuma}\n$\\omega_x\\equiv \\omega_0 ^A=354 \\pi$ rad\/s,\n$\\omega_y=\\sqrt 2\n\\omega_x,\n\\omega_z=2\\omega_x$, $l=\\sqrt{\\hbar\/m\\omega_0^A}=1.576$ $\\mu$m,\nDX = DY =DZ = 0.05 and DT = 0.0004.}\n\\label{table7}\n\\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}\n\\hline\n & \\multicolumn{1}{c}{} &\n\\multicolumn{1}{c}{Spherical} & \\multicolumn{1}{c}{}&{}\n& \\multicolumn{1}{c}{}\n& \\multicolumn{1}{c}{}\n&\\multicolumn{1}{c}{anisotropic}\n& \\multicolumn{1}{c}{}\n& {}\n\\\\\n\\hline\n$N $\n& {$\\aleph $} \n& $\\mu$(sph)&$\\mu$(ani) &$\\mu $ \n\\cite{Schneider_Feder}\n& {$\\aleph$}&$r_{\\mathrm{rms}}$ &$\\varphi(0)$& $\\mu$(ani) & $\\mu $\n\\cite{Schneider_Feder} \\\\\n\\hline\n0 & 0 &1.500000 & 1.5000 &1.500\n& 0&1.0505 &0.5496 &\n2.2071 &\n2.207\n\\\\\n 1024 & 6.2798 &1.824546(1)& 1.8245 & 1.825 &\n22.454&1.3211\n&\n0.3471& 3.5718&\n3.572\\\\\n 2048 & 12.5597 & 2.065406(1) & 2.0654 &2.065 &\n44.907&\n 1.4584\n& 0.2888&\n4.3446\n&4.345 \\\\\n 4096 & 25.1194 & 2.434526(1) & 2.4345 &2.435 &\n89.81\n&1.6328\n&0.2363 &5.4253\n&5.425 \\\\\n 8192 & 50.239 & 2.970180(1) & 2.9702 &2.970 &\n179.63&1.8460\n&0.1919\n&6.9042\n&6.904 \\\\\n 16384 & 100.477 & 3.719211(1) & 3.7192 &3.719 &\n359.26\n&2.0999\n&0.1555 &8.9003\n&8.900\\\\\n 32768 & 200.955 & 4.743445(2) & 4.7434 &4.743 &\n718.52&\n2.3979\n&0.1260 &11.5718\n&11.572\\\\\n 65536 & 401.91 &6.123751(2) & 6.1238 &6.124 &\n1437.03&2.7447\n&0.1022\n& 15.1284\n&15.128\n\\\\\n 131072 & 803.82 & 7.970154(2) & 7.9702 &7.970 &\n2874.06 & 3.1460\n&0.0829 & 19.8475& 19.847\n\\\\\n 262144 & 1607.64 & 10.426912(3) & 10.4269 & 10.427 &\n5748.13 &\n3.6092&\n0.0673&\n26.0961&26.096\n\\\\\n 524288 & 3215.28 & 13.685486(3) &13.6855 &13.685 &\n11496.3\n&4.1426\n&0.0546 & 34.3590 &\n34.358\n\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\nIn Table \\ref{table5} we present results for the wave function at the\ncenter, rms size, and chemical potential for different nonlinearities\nin the anisotropic and circularly-symmetric 2D case calculated using\nthe imaginary-time routine\nimagtime2d.F and imagtimecir.F, respectively. In the anisotropic\ncase we used\n space step 0.02 and time step 0.0001 and in the circularly-symmetric\ncase we used space step 0.0025 and time step 0.00002. We also compare\nthese\nresults with those of Ref. \\cite{Bao_Tang} and establish more accurate\nresults in the present numerical calculation.\n\n\nIn Figs. \\ref{fig3} (a) and (b) we plot the wave function profiles for\nthe circularly-symmetric and anisotropic 2D cases using the programs\nimagtimecir.F\n and\nimagtime2D.F, respectively.\nIn the anisotropic 2D case the anisotropy $\\kappa =2$ and nonlinearity\n$\\aleph\n=12.4584$. Because of anisotropy the wave function in \nFig. 
\\ref{fig3}\n(b)\nis compressed in\nthe $y$ direction (note the different scales in $x$ and $y$ directions\nof the plot.)\n\n\n\n\nIn Table \\ref{table6} we present results for the wave function at the\ncenter, rms sizes, and chemical potential for different nonlinearities\nin the axially-symmetric 3D case with $\\kappa=1$ and $\\lambda =4 $ calculated using\nthe imaginary-time routine\nimagtimeaxial.F and space step $D\\rho=DZ=0.02$ and time step $DT=$ 0.0004. We\ncompare with the results of Ref. \\cite{Bao_Tang} and establish more\naccurate\nresults.\n\n\nIn Figs. \\ref{fig4} (a) and (b) we plot the wave function profiles for\nthe axially-symmetric and fully anisotropic 3D cases using the programs\nimagtimeaxial.F and imagtime3d.F, respectively.\nIn the axially-symmetric 3D case the anisotropy $\\kappa=1,$ $\\lambda =4$ and\nnonlinearity $\\aleph\n=1881$ were employed.\nIn the fully anisotropic 3D case the anisotropies $\\nu=1$, $\\kappa\n=\\sqrt 2$, and\n$\\lambda =2$ and nonlinearities $ \\aleph\n=22.454, 359.26$ and \n11496.3 are used. The effect of the anisotropy\nis\nexplicit in the 3D case in generating different profiles of the\nwave function along $x$, $y$, and $z$ directions for a fixed\nnonlinearity.\n\n\n\nNext we consider the fully anisotropic case\nin three dimensions and compare our results with those of Ref.\n\\cite{Schneider_Feder} calculated using the program imagtime3d.F. This\ncase mimics a realistic case of\nexperimental\ninterest with Na atoms. The completely anisotropic trap considered here\nis time-orbiting potential (TOP) trap with angular frequencies in the\nnatural ratio ($\\omega_x,\\omega_y,\\omega_z)=\\omega_0^A(1,\\sqrt\n2,2)$, with $\\omega_0^A=354\\pi $ rad\/s. BEC in such a system has been\nobserved by Kozuma {\\it et al.} \\cite{kozuma}. We also consider\nthe\nspherical potential with $\\omega_0^S=87$ rad\/s \\cite{hau}. The s-wave\nscattering\nlength of Na is taken as $a=52a_0\\approx 2.75$ nm, with $a_0= 0.5292$\n\\AA \\- the Bohr\nradius\n\\cite{Schneider_Feder}.\n\nIn Table \\ref{table7} we exhibit the results for our calculations with\nthe fully anisotropic potential together with those for the\nspherically-symmetric potential in three dimensions. The results for the\nspherically-symmetric potential are also calculated using the\none-dimensional\nradially symmetric imaginary-time program imagtimesph.F\nin addition to the fully\nanisotropic program imagtime3d.F. In the anisotropic case the\npresent results are consistent with those of Ref.\n\\cite{Schneider_Feder}.\nIn the spherical case the two sets of the present results (calculated\nwith\nthe spherically-symmetric and anisotropic programs) as well as those of\nRef. \\cite{Schneider_Feder} are consistent with each other.\n\n\n\\begin{figure}[tbp] \\begin{center}\n{\\includegraphics[width=.45\\linewidth]{fig5a.ps}}\n{\\includegraphics[width=.45\\linewidth]{fig5b.ps}}\n{\\includegraphics[width=.45\\linewidth]{fig5c.ps}}\n{\\includegraphics[width=.45\\linewidth]{fig5d.ps}}\n\\end{center}\n\\caption{(Color online) Plot of rms size vs. 
time for non-stationary\noscillation of the system obtained by running the real-time programs for\n(a) 1D case (using program realtime1d.F with\n$\\aleph\n= 62.742$), (b) spherically-symmetric case (using\nprogram realtimesph.F with\n$\\aleph \n= 125.484$), (c) 2D circularly-symmetric case (using\nprogram realtime2d.F\nwith $\\aleph\n= 12.5484$ and $\\kappa=1$), and\n(d) 3D axially-symmetric case (using program realtimeaxial.F with\n $\\aleph\n= 18.81$ and $\\kappa=1, \\lambda=4$).\n The oscillation is started during time evolution\nby suddenly reducing the\nnonlinearity $\\aleph$ \nto half after the formation of the \nstationary\ncondensate.}\n\\label{fig5}\n\\end{figure}\n\n\n\n\n\nThe input to our calculation is the nonlinearity coefficient $ \\aleph$, \nwhich is related to the scattering length $a$, number of atoms $N$ and\nharmonic oscillator length $l$ via $\\aleph\n=4\\pi a N\/l$ in Eq.\n(\\ref{ani2}). We provide the nonlinearity values of our calculations.\nAlthough the present results are in agreement with those of Ref.\n\\cite{Schneider_Feder}, a\nvery precise comparison of the two calculations\n is not to the point as Schneider and Feder did not provide the\nnonlinearity coefficient $\\aleph\n$ used in their calculation.\n\n\n\n\n\n\n\\subsection{Non-stationary Oscillation}\n\n\nThe real-time propagation programs calculate the stationary states under\ndifferent trap symmetries. However, they are less efficient than the\nimaginary-time propagation programs in this task requiring more CPU\ntime and producing less accurate results. However, unlike the\nimaginary-time propagation programs, the real-time programs can produce\ntime\nevolution of non-stationary states also and next we present results of\nsuch time evolution using the real-time propagation programs under\ndifferent trap symmetries.\n\nIn this subsection we present results for non-stationary oscillation\nobtained with the use of the real-time programs. After the calculation\nof the stationary profile, the nonlinearity is suddenly reduced to half.\nThe wave function is no longer an eigenstate of the new nonlinear\nequation. This sets the system into non-stationary oscillation which\ncontinues for ever. In Fig. \\ref{fig5} we plot the rms size of the wave\nfunction vs. time $t$ showing this oscillation using the output from\nFile 8 for (a) 1D case (using program realtime1d.F), (b)\nradially-symmetric 3D case (using program realtimesph.F), (c)\nCartesian 2D case with anisotropy $\\kappa=1$ (using program\nrealtime2d.F), and (d) 3D axially-symmetric case (using program \nrealtimeaxial.F) with respective\nnonlinearities $\\aleph=$\n62.742, 125.484, 12.5484, and 18.81.\nThe rms size at $t=0$\nis the rms size of the stationary wave function obtained after NPAS time\niterations.\n\nBecause of transverse instability, the real-time program \nrealtime3d.F in 3D does not lead to stable sinusoidal oscillation as in \nother\ncases for a large change in nonlinearity (nonlinearity reduced to half \nof \nits initial value) as shown in Fig. \\ref{fig5}.\nOnly for small perturbation a sinusoidal oscillation is observed.\nHowever, we do not present a systematic study of such oscillation.\n\n\n\n\n\n\\section{Summary and Conclusion}\n\\label{SUM}\n\nIn this paper we describe a split-step method for the numerical solution\nof the time-dependent nonlinear GP equation under the action of a\ngeneral anisotropic 3D trap using real- and imaginary-time propagation.\nSimilar methods for 1D and anisotropic 2D traps are also described. 
The time propagation is carried out with an initial input. The full Hamiltonian is split into several spatial-derivative parts and a non-derivative part. The spatial-derivative parts are treated by the Crank-Nicolson method. The different spatial-derivative and non-derivative parts are dealt with in independent steps. This so-called split-step method leads to highly stable and accurate results.\n\n
We consider two types of time iteration $-$ real-time propagation and imaginary-time propagation. In the real-time propagation, the time evolution is performed with the original complex equation. The numerical algorithm in this case requires the use of complex variables but produces the solution of non-stationary problems. In the imaginary-time propagation, the time variable is replaced by i ($=\\sqrt{-1}$) times a new time variable, and consequently the GP equation becomes real. The numerical solution of this equation can no longer yield the solution of non-stationary problems; it yields very accurate solutions of stationary ground-state problems only, requiring much smaller CPU time.\n\n
We provide the numerical algorithm in detail in 1D, 2D, and 3D for real- and imaginary-time propagation. We consider six different harmonic oscillator trap symmetries, namely, a 1D trap, a circularly-symmetric 2D trap, a radially-symmetric 3D trap, an anisotropic trap in 2D, an axially-symmetric 3D trap, and an anisotropic trap in 3D. Each of these cases is treated with real- and imaginary-time propagation algorithms, resulting in the twelve different Fortran 77 programs supplied. We use the imaginary-time propagation programs to provide results for different stationary properties of the condensate (chemical potential, rms size, etc.) in 1D, 2D, and 3D for different nonlinearities $\\aleph $ and compare with previously obtained results \\cite{Bao_Tang,Schneider_Feder}. In addition we study a non-stationary oscillation initiated by suddenly altering the nonlinearity to half its initial value on these preformed condensates using the real-time propagation programs. Furthermore, six Fortran 90\/95 programs are supplied in the case of two and three space variables.\n\n
Although the present programs are valid for the standard GP equation with cubic nonlinearity in a harmonic potential, they can be easily adapted to other types of bosonic \\cite{tg} or fermionic equations \\cite{ska1,ska2} with different nonlinearities and under different types of potentials. To change the potential one should change the variable V in the subroutine INITIALIZE, and the change in the nonlinearity can be performed in the subroutine NONLIN.\n\n\n\n\n
\\ack\n\nWe thank Dr. A. Gammal for helpful comments regarding the solution of the GP equation in the circularly-symmetric and axially-symmetric cases. We thank Prof. W. Bao for the hospitality at the National University of Singapore when this project was started. 
The research was partially\nsupported by the CNPq and FAPESP of Brazil, and the Institute for\nMathematical Sciences of the National University of Singapore.\nPM thanks the Third World Academy of Sciences (TWAS-UNESCO\nAssociateship at the Center of Excellence in the South), and Department\nof Science and Technology, Government of India for partial\nsupport.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Intro}\nIn his seminal 1948 paper~\\cite{shawea49}, Claude Shannon gave a formula for the increase in differential entropy per degree of freedom that a continuous-time, band-limited random process $\\rvau(t)$ experiences after passing through a linear time-invariant (LTI) continuous-time filter.\nIn this formula, if the input process is band-limited to a frequency range $[0,B]$, has differential entropy rate (per degree of freedom) $\\bar{h}(\\rvau)$, and the LTI filter has frequency response $G\\jw$, then the resulting differential entropy rate of the output process $\\rvay(t)$ is given by%\n\\cite[Theorem~14]{shawea49}\n\\begin{align}\\label{eq:hgain_Shannon}\n \\bar{h}(\\rvay) = \\bar{h}(\\rvau) + \\frac{2}{B}\\Intfromto{0}{B} \\log \\abs{G\\jw}d\\w.\n\\end{align}\nThe last term on the right-hand side (RHS) of~\\eqref{eq:hgain_Shannon} \ncan be understood as the \\textit{entropy gain} (entropy amplification or entropy boost)\nintroduced by the filter~$G\\jw$.\nShannon proved this result by arguing that an LTI filter can be seen as a linear operator that selectively scales its input signal \nalong infinitely many frequencies, each of them representing an orthogonal component of the source.\nThe result is then obtained by writing down the determinant of the Jacobian of this operator as the product of the frequency response of the filter over $n$ frequency bands, applying logarithm and then taking the limit as the number of frequency components tends to infinity.\n\n\n\nAn analogous result can be obtained for discrete-time input $\\procu$ and output $\\procy$ processes, and an LTI discrete-time filter $G(z)$ by relating them to their continuous-time counterparts, which yields\n\\begin{align}\\label{eq:hgain_discrete}\n \\bar{h}(\\procy) = \\bar{h}(\\procu) + \\frac{1}{2\\pi}\\intfromto{-\\pi}{\\pi}\\log\\abs{G\\ejw}d\\w,\n\\end{align}\nwhere \n$$\n\\bar{h}(\\procu)\\eq \\lim_{n\\to\\infty} \\tfrac{1}{n} h(\\rvau(1),\\rvau(2),\\ldots,\\rvau(n))\n$$\nis the differential entropy rate of the process $\\procu$.\nOf course the same formula can also be obtained by applying the frequency-domain proof technique that Shannon followed in his derivation of~\\eqref{eq:hgain_Shannon}.\n\n\nThe rightmost term in~\\eqref{eq:hgain_discrete}, which corresponds to the entropy gain of $G(z)$, can be related to the structure of this filter. 
\nIt is well known that if $G$ is causal and stable with a rational transfer function $G(z)$ such that $\\lim_{z\\to\\infty}|G(z)|=1$ (i.e., such that the first sample of its impulse response has unit magnitude), then \n
\\begin{align}\\label{eq:Jensen}\n\\frac{1}{2\\pi}\\intfromto{-\\pi}{\\pi}\\log\\abs{G\\ejw}d\\w \n = \\Sumover{\\rho_{i}\\notin\\mathbb{D}}\\log\\abs{\\rho_{i}}, \n\\end{align}\n
where $\\set{\\rho_{i}}$ are the zeros of $G(z)$ and ${\\mathbb{D}}\\eq \\set{z\\in\\mathbb{C}: \\abs{z} <1}$ is the open unit disk in the complex plane.\nThis provides a straightforward way to evaluate the entropy gain of a given LTI filter with rational transfer function $G(z)$.\nIn addition,~\\eqref{eq:Jensen} shows that, if $\\lim_{z\\to\\infty}|G(z)|=1$, then such gain is greater than zero if and only if $G(z)$ has zeros outside $\\mathbb{D}$.\nA filter with the latter property is said to be \\textit{non-minimum phase} (NMP); conversely, a filter with all its zeros inside $\\mathbb{D}$ is said to be \\textit{minimum phase} (MP)~\\cite{serbra97}.\n\n
NMP filters appear naturally in various applications. \nFor instance, any unstable LTI system stabilized via linear feedback control will yield transfer functions which are NMP~\\cite{serbra97,googra00}.\nAdditionally, NMP zeros also appear in the discrete-time ZOH (\\emph{zero-order hold}) equivalent system of a continuous-time plant whose number of poles exceeds its number of zeros by at least 2, as the sampling rate increases~\\cite[Lemma 5.2]{yuzgoo14}. \nOn the other hand, all linear-phase filters, which are especially suited for audio and image-processing applications, are NMP~\\cite{hayes-96,vaidya93}. \nThe same is true for any all-pass filter, which is an important building block in signal processing applications~\\cite{smith-07,hayes-96}.\n\n\n\n
An alternative approach for obtaining the entropy gain of LTI filters is to work in the time domain: obtain $\\rvay_{1}^{n}\\eq \\set{\\rvay_{1},\\rvay_{2},\\ldots,\\rvay_{n}}$ as a function of $\\rvau_{1}^{n}$, for every $n\\in\\Nl$, and evaluate the limit \n$\n \\lim_{n\\to\\infty}\\frac{1}{n}\\left(h(\\rvay_{1}^{n}) - h(\\rvau_{1}^{n}) \\right)\n$.\nMore precisely, for a filter $G$ with impulse response $g_{0}^{\\infty}$, we can write\n
\\begin{align}\\label{eq:y_of_u_matrix}\n \\rvey^{1}_{n} = \n\\underbrace{\\begin{pmatrix}\n g_{0} & 0 &\\cdots& 0\\\\\n g_{1} & g_{0}&\\cdots &0\\\\\n \\vdots & &\\ddots &\\vdots\\\\\n g_{n-1}& g_{n-2}&\\cdots &g_{0}\n\\end{pmatrix}}_{\\bG_{n}}\n\\rveu^{1}_{n},\n\\end{align}\n
where $\\rvey^{1}_{n} \\eq [\\rvay_{1}\\ \\rvay_{2}\\,\\cdots \\ \\rvay_{n}]^{T}$ and the random vector $\\rveu^{1}_{n}$ is defined likewise. \nFrom this, it is clear that \n\\begin{align}\\label{eq:hy=hu+logdet}\n h(\\rvey^{1}_{n}) = h(\\rveu^{1}_{n}) + \\log|\\det(\\bG_{n})|,\n\\end{align}\n
where $\\det(\\bG_n)$ (or simply $\\det \\bG_n$) stands for the determinant of $\\bG_n$. Thus, \n\\begin{align}\\label{eq:f0=1_andtherest}\n|g_{0}|=1\\Longrightarrow |\\det(\\bG_{n})|=1, \\;\\forall n\\in\\Nl \\Longleftrightarrow\nh(\\rvey^{1}_{n}) = h(\\rveu^{1}_{n}),\\;\\forall n\\in\\Nl\n\\Longrightarrow\n \\lim_{n\\to\\infty}\\frac{1}{n}\\left[h(\\rvay_{1}^{n}) - h(\\rvau_{1}^{n})\\right] =0,\n\\end{align}\n
regardless of whether $G(z)$ (i.e., the polynomial $g_{0} + g_{1}z^{-1}+\\cdots $) has zeros with magnitude greater than one, \\textbf{which clearly contradicts~\\eqref{eq:hgain_discrete} and~\\eqref{eq:Jensen}}. 
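\nAs a simple illustration of this discrepancy, consider the FIR filter $G(z)=1-2z^{-1}$, which satisfies $g_{0}=1$ and has a single NMP zero at $\\rho_{1}=2$. In this case,~\\eqref{eq:Jensen} gives\n\\begin{align*}\n\\frac{1}{2\\pi}\\intfromto{-\\pi}{\\pi}\\log\\abs{G\\ejw}d\\w = \\log\\abs{\\rho_{1}} = \\log 2,\n\\end{align*}\nso that~\\eqref{eq:hgain_discrete} predicts an entropy gain of $\\log 2$. However, the impulse response of this filter is $\\set{1,-2,0,0,\\ldots}$, and thus the matrix $\\bG_{n}$ in~\\eqref{eq:y_of_u_matrix} is lower triangular with unit diagonal, so that $\\det(\\bG_{n})=1$ and, by~\\eqref{eq:hy=hu+logdet}, $h(\\rvey^{1}_{n})=h(\\rveu^{1}_{n})$ for every $n\\in\\Nl$, i.e., the time-domain argument yields zero entropy gain.\n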
\nPerhaps surprisingly, the above contradiction not only has been overlooked in previous works (such as~\\cite{aarmcd67,zanigl03}), but the time-domain formulation in the form of~\\eqref{eq:y_of_u_matrix} has been utilized as a means to prove or disprove~\\eqref{eq:hgain_discrete} (see, for example, the reasoning in~\\cite[p.~568]{papou91}).\n\n\n\n\n\n\n\n\nA reason for why the contradiction between~\\eqref{eq:hgain_discrete},~\\eqref{eq:Jensen} and~\\eqref{eq:f0=1_andtherest} arises \ncan be obtained from the analysis developed in~\\cite{mardah08} for an LTI system $P$ within a noisy feedback loop, as the one depicted in Fig.~\\ref{fig:fbksystem}.\nIn this scheme, $C$ represents a causal feedback channel which combines the output of $P$ with an exogenous (noise) random process $\\rvac_{1}^{\\infty}$ to generate its output.\nThe process $\\rvac_{1}^{\\infty}$ is assumed independent of the initial state of $P$, represented by the random vector $\\rvex_{0}$, which has finite differential entropy.\n\\begin{figure}[t\n\\centering\n\\input{fbksystem.pstex_t}\n\\caption{Left: LTI system $P$ within a noisy feedback loop. Right: equivalent system when the feedback channel is noiseless and has unit gain.}\n\\label{fig:fbksystem}\n\\end{figure}\nFor this system, it is shown in~\\cite[Theorem 4.2]{mardah08} that \n\\begin{subequations}\\label{eq:martins_both}\n \\begin{align}\\label{eq:martins}\n \\bar{h}(\\rvay_{1}^{\\infty}) \\geq \\bar{h}(\\rvau_{1}^{\\infty}) + \\lim_{n\\to\\infty}\\frac{1}{n}I(\\rvex_{0}; \\rvay_{1}^{n}),\n\\end{align}\nwith equality if $\\rvaw$ is a deterministic function of $\\rvav$.\nFurthermore, it is shown in~\\cite[Lemma 3.2]{mardah05} that if $|h(\\rvex_{0})|<\\infty$ and the steady state variance of system $P$ remains asymptotically bounded as $k\\to\\infty$, then \n\\begin{align}\\label{eq:martins_I_bound}\n\\lim_{n\\to\\infty}\\frac{1}{n}I(\\rvex_{0}; \\rvay_{1}^{n})\n\\geq \\sumover{p_{i}\\notin\\mathbb{D}}\\log\\abs{p_{i}},\n\\end{align}\n\\end{subequations}\nwhere $\\set{p_{i}}$ are the poles of $P$.\nThus, for the (simplest) case in which $\\rvaw=\\rvav$, the output $\\rvay_{1}^{\\infty}$ is the result of filtering $\\rvau_{1}^{\\infty}$ by a filter $G=\\frac{1}{1+P}$ (as shown in Fig.~\\ref{fig:fbksystem}-right), and the resulting entropy rate of $\\procy$ will exceed that of $\\procu$ only if there is a random initial state with bounded differential entropy (see~\\eqref{eq:martins}). \nMoreover, under the latter conditions,~\\cite[Lemma 4.3]{mardah08} implies that if $G(z)$ is stable and $|h(\\rvex_{0})|<\\infty$, then this entropy gain will be lower bounded by the \\textit{right-hand side} (RHS) of~\\eqref{eq:Jensen}, which is greater than zero if and only if $G$ is NMP. \nHowever, the result obtained in~\\eqref{eq:martins_I_bound} does not provide conditions under which the equality in the latter equation holds.\n\nAdditional results and intuition related to this problem can be obtained from in~\\cite{kim-yh10}.\nThere it is shown that if $\\procy$ is a two-sided Gaussian stationary random process generated by a state-space recursion of the form \n\\begin{subequations}\\label{subeq:State_Space_YHKIM}\n\\begin{align}\n \\rves_{k+1}& = (\\bA - \\bg\\bh^{H}) \\rves_{k} - \\bg \\rvau_{k},\\\\\n \\rvay_{k} & = \\bh^{H} \\rves_{k} + \\rvau_{n},\n\\end{align}\n\\end{subequations}\nfor some $\\bA\\in\\mathbb{C}^{M\\times M}$, $\\bg\\in\\mathbb{C}^{M\\times 1}$, $\\bh\\in\\mathbb{C}^{M\\times 1}$,\nwith unit-variance Gaussian i.i.d. 
innovations $\\rvau_{-\\infty}^{\\infty}$, then its entropy rate will be exactly $\\frac{1}{2}\\log(2\\pi\\expo{})$ (i.e., the differential entropy rate of $\\rvau_{-\\infty}^{\\infty}$) plus the RHS of~\\eqref{eq:Jensen} (with $\\set{\\rho_{i}}$ now being the eigenvalues of $\\bA$ outside the unit circle).\nHowever, as noted in~\\cite{kim-yh10}, if the same system with zero (or deterministic) initial state is excited by a one-sided infinite Gaussian i.i.d. process $\\rvau_{1}^{\\infty}$ with unit sample variance, then the (asymptotic) entropy rate of the output process $\\rvay_{1}^{\\infty}$ is just~$\\frac{1}{2}\\log(2\\pi\\expo{})$ (i.e., there is no entropy gain).\nMoreover, it is also shown that if $\\rvav_{1}^{\\ell}$ is a Gaussian random sequence with positive-definite covariance matrix and $\\ell\\geq M$, then the entropy rate of $\\rvay_{1}^{\\infty}+\\rvav_{1}^{\\ell}$ also exceeds that of $\\rvau_{1}^{\\infty}$ by the RHS of~\\eqref{eq:Jensen}.\nThis suggests that for an LTI system which admits a state-space representation of the form~\\eqref{subeq:State_Space_YHKIM}, the entropy gain for a single-sided Gaussian i.i.d. input is zero, and that the entropy gain from the input to the output-plus-disturbance equals the RHS of~\\eqref{eq:Jensen} for any Gaussian disturbance of length at least $M$ with positive-definite covariance matrix (no matter how small this covariance matrix may be).\n\n\n
The previous analysis suggests that it is the absence of a random initial state or a random additive output disturbance that makes the time-domain formulation~\\eqref{eq:y_of_u_matrix} yield a zero entropy gain.\nBut how would the addition of such finite-energy exogenous random variables to~\\eqref{eq:y_of_u_matrix} actually produce an increase in the differential entropy rate which asymptotically equals the RHS of~\\eqref{eq:Jensen}? \nIn a broader sense, it is not clear from the results mentioned above what the necessary and sufficient conditions are under which an entropy gain equal to the RHS of~\\eqref{eq:Jensen} arises (the analysis in~\\cite{kim-yh10} provides only a set of sufficient conditions and relies on second-order statistics and Gaussian innovations to derive the results previously described). \nAnother important observation is the following: it is well known that the entropy gain introduced by a linear mapping is independent of the input statistics~\\cite{shawea49}. \nHowever, there is no reason to assume such independence when this entropy gain arises as the result of adding a random signal to the input of the mapping, i.e., when the mapping by itself does not produce the entropy gain. \nHence, it remains to characterize the largest set of input statistics which yield an entropy gain, and the magnitude of this gain.\n\n
The first part of this paper provides answers to these questions. \nIn particular, in Section~\\ref{sec:geometric_interpretation} we explain how and when the entropy gain arises (in the situations described above), starting with input and output sequences of finite length, in a time-domain analysis similar to~\\eqref{eq:y_of_u_matrix}, and then taking the limit as the length tends to infinity.\nIn Section~\\ref{sec:entropy_gain_output_disturb} it is shown that, in the output-plus-disturbance scenario, the entropy gain is \\emph{at most} the RHS of~\\eqref{eq:Jensen}. 
\nWe show that, for a broad class of input processes (not necessarily Gaussian or stationary), this maximum entropy gain is reached only when the disturbance has bounded differential entropy and its length is at least equal to the number of non-minimum phase zeros of the filter.\nWe provide upper and lower bounds on the entropy gain if the latter condition is not met.\nA similar result is shown to hold when there is a random initial state in the system (with finite differential entropy).\nIn addition, in Section~\\ref{sec:entropy_gain_output_disturb} we study the entropy gain between a shorter input sequence and the \\emph{entire output sequence} that the filter yields in response to it (in Section~\\ref{sec:effective_entropy}).\nIn this case, however, it is necessary to consider a new definition for differential entropy, named \\emph{effective differential entropy}.\nHere we show that an effective entropy gain equal to the RHS of~\\eqref{eq:Jensen} is obtained provided the input has finite differential entropy rate, even when there is no random initial state or output disturbance.\n\n\n
In the second part of this paper (Section~\\ref{sec:implications}) we apply the conclusions obtained in the first part to three problems, namely, networked control, the rate-distortion function for non-stationary Gaussian sources, and the Gaussian channel capacity with feedback.\nIn particular, we show that equality holds in~\\eqref{eq:martins_I_bound} for the feedback system in Fig.~\\ref{fig:fbksystem}-left under very general conditions (even when the channel $C$ is noisy).\nFor the problem of finding the quadratic rate-distortion function for non-stationary auto-regressive Gaussian sources, previously solved in~\\cite{gray--70,hasari80,grahas08}, we provide a simpler proof based upon the results we derive in the first part.\nThis proof extends the result stated in~\\cite{hasari80,grahas08} to a broader class of non-stationary sources. 
\nFor the feedback Gaussian capacity problem, we show that capacity results based on \nusing a short random sequence as channel input and relying on a feedback filter which boosts the entropy rate of the end-to-end channel noise (such as the one proposed in~\\cite{kim-yh10}), crucially depend upon the complete absence of any additional disturbance anywhere in the system.\nSpecifically, we show that the information rate of such capacity-achieving schemes drops to zero in the presence of any such additional disturbance.\nAs a consequence, the relevance of characterizing the robust (i.e., in the presence of disturbances) feedback capacity of Gaussian channels, which appears to be a fairly unexplored problem, becomes evident.\n\nFinally, the main conclusions of this work are summarized in Section~\\ref{sec:conclusions}.\n\nExcept where presented in the main text, all proofs are given in the appendix.\n\n\\subsection{Notation}\nFor any LTI system $G$, the transfer function $G(z)$ corresponds to the $z$-transform of the impulse response $g_{0},\\, g_{1}, \\ldots$, i.e., $G(z) = \\sumfromto{i=0}{\\infty} g_{i}z^{-i}$.\nFor a transfer function $G(z)$, we denote by $\\bG_{n}\\in\\Rl^{n\\times n}$ the lower triangular Toeplitz matrix having $[g_{0}\\ \\cdots \\ g_{n-1}]^{T}$ as its first column.\nWe write $x_{1}^{n}$ as a shorthand for the sequence $\\set{x_{1},\\ldots, x_{n}}$ and, when convenient, we write $x_{1}^{n}$ in vector form as $\\bx^{1}_{n}\\eq [x_{1}\\ x_{2}\\ \\cdots \\ x_{n}]^{T}$, where $()^{T}$ denotes transposition.\nRandom scalars (vectors) are denoted using non-italic characters, such as $\\rvax$ (non-italic and boldface characters, such as $\\rvex$).\nFor matrices we use upper-case boldface symbols, such as $\\bA$.\nWe write $\\lambda_{i}(\\bA)$\nto denote the $i$-th smallest-magnitude eigenvalue of $\\bA$. 
\nIf $\\bA_{n}\\in\\mathbb{C}^{n\\times n}$, then $\\bA_{i,j}$ denotes the entry at the intersection of the $i$-th row and the $j$-th column.\nWe write $[\\bA_{n}]^{i_{1}}_{i_{2}}$, with $i_{1}\\leq i_{2}\\leq n$, to refer to the matrix formed by selecting the rows $i_{1}$ to $i_{2}$ of $\\bA$.\nThe expression ${^{m_{1}}}\\![\\bA]_{m_{2}}$ corresponds to the square sub-matrix along the main diagonal of $\\bA$, with its top-left and bottom-right corners on $\\bA_{m_1,m_1}$ and $\\bA_{m_{2},m_{2}}$, respectively.\nA diagonal matrix whose entries are the elements in $\\Dsp$ is denoted as $\\diag\\Dsp$.\n\n\\section{Problem Definition and Assumptions}\nConsider the discrete-time system depicted in Fig.~\\ref{fig:general}.\nIn this setup, the input $\\rvau_{1}^{\\infty}$ is a random process\nand the block $G$ is a causal, linear and time-invariant system with random initial state vector $\\rvex_{0}$ and random output disturbance $\\rvaz_{1}^{\\infty}$.\n\\begin{figure}[t]\n \\centering\n\\input{system.pstex_t}\n\\caption{Linear, causal, stable and time-invariant system $G$ with input and output processes, initial state and output disturbance.}\n\\label{fig:general}\n\\end{figure}\nIn vector notation,\n\\begin{align}\\label{eq:system_equation}\n \\rvey^{1}_{n}\\eq \\bG_{n}\\rveu^{1}_{n} + \\bar{\\rvey}^1_n + \\rvez^1_n,\\fspace n\\in\\Nl,\n\\end{align}\nwhere \n$\\bar{\\rvey}^1_n$\nis the natural response of $G$ to the initial state $\\rvex_{0}$.\nWe make the following further assumptions about $G$ and the signals around it:\n\\begin{assu}\\label{assu:G_Factorized}\n$G(z)$ is a causal, stable and rational transfer function of finite order, whose impulse response $g_{0},g_{1},\\ldots$ satisfies\n$ g_{0}=1$.\n\\finenunciado\n %\n\\end{assu}\nIt is worth noting that there is no loss of generality in considering $g_{0}=1$, since otherwise one can write $G(z)=g_{0}\\cdot\\left(G(z)\/g_{0}\\right)$,\n and thus the entropy gain introduced by $G(z)$ would be $\\log g_{0}$ plus the entropy gain due to $G(z)\/g_{0}$, whose impulse response has its first sample equal to $1$.\n\n\\begin{assu}\nThe random initial state $\\rvex_{0}\n$ is independent of $\\rvau_{1}^{\\infty}$.\n\\end{assu}\n\\begin{assu}\\label{assu:z}\nThe disturbance $\\rvaz_{1}^{\\infty}$ is independent of $\\rvau_{1}^{\\infty}$ and belongs to a $\\kappa$-dimensional linear subspace, for some finite $\\kappa\\in\\Nl$.\nThis subspace is spanned by the $\\kappa$ orthonormal columns of a matrix $\\boldsymbol{\\Phi}\\in\\Rl^{|\\Nl|\\times \\kappa}$ (where $|\\Nl|$ stands for the countably infinite size of $\\Nl$), such that \n$|h(\\boldsymbol{\\Phi}^{T}\\rvez^{1}_{\\infty})|<\\infty$. 
\nEquivalently, $\\rvez^{1}_{\\infty} = \\boldsymbol{\\Phi} \\rves^{1}_{\\kappa}$, where the random vector \n$\\rves^{1}_{\\kappa}\\eq \\boldsymbol{\\Phi}^{T}\\rvez^{1}_{\\infty}$ has finite differential entropy and is independent of $\\rveu^{1}_{\\infty}$.\n\\end{assu}\n\nAs anticipated in the Introduction, we are interested in characterizing the entropy gain $\\Gsp$ of $G$ in the presence (or absence) of the random inputs\n$\\rvau_{1}^{\\infty},\\rvex_{0},\\rvaz_{1}^{\\infty}$, denoted by \n\\begin{align}\\label{eq:Gsp_def}\n\\Gsp(G,\\rvex_{0},\\rvau_{1}^{\\infty},\\rvaz_{1}^{\\infty})\\eq \\lim_{n\\to\\infty}\n\\frac{1}{n}\n\\left(h(\\rvay_{1}^{n})- h(\\rvau_{1}^{n})\\right).\n\\end{align}\nIn the next section we provide geometrical insight into the behaviour of $\\Gsp(G,\\rvex_{0},\\rvau_{1}^{\\infty},\\rvaz_{1}^{\\infty})$ for the situation where there is a random output disturbance and no random initial state. \nA formal and precise treatment of this scenario is then presented in Section~\\ref{sec:entropy_gain_output_disturb}. \nThe other scenarios are considered in the subsequent sections.\n\n\n\\section{Geometric Interpretation}\\label{sec:geometric_interpretation}\nIn this section we provide an intuitive geometric interpretation of how and when the entropy gain defined in~\\eqref{eq:Gsp_def} arises. \nThis understanding will justify the introduction of the notion of an entropy-balanced random process (in Definition~\\ref{def:entropy_balanced} below), which will be shown to play a key role in this and in related problems. \n\n\\subsection{An Illustrative Example}\nSuppose for the moment that $G$ in Fig.~\\ref{fig:general} is an FIR filter with impulse response $g_{0}=1,\\ g_{1}=2,\\ g_{i}=0,\\, \\forall i\\geq 2$.\nNotice that this choice yields $G(z)= (z+2)\/z$, and thus $G(z)$ has one non-minimum phase zero, at $z=-2$.\nThe associated matrix $\\bG_{n}$ for $n=3$ is\n$$\n\\bG_{3}=\\begin{pmatrix}\n 1&0&0\\\\\n 2&1&0\\\\\n0&2&1\n\\end{pmatrix},\n$$ \nwhose determinant is clearly one (indeed, all its eigenvalues are $1$).\nHence, as discussed in the introduction, \n$h(\\bG_{3}\\rveu^{1}_{3})=h(\\rveu^{1}_{3})$, and thus $\\bG_{3}$ (and $\\bG_{n}$ in general) does not introduce an entropy gain by itself.\nHowever, an interesting phenomenon becomes evident by looking at the singular-value decomposition (SVD) of $\\bG_{3}$, given by \n$\\bG_{3}=\\bQ_{3}^{T}\\bD_{3}\\bR_{3}$, \nwhere $\\bQ_{3}$ and $\\bR_{3}$ are unitary matrices and $\\bD_{3}\\eq \\diag\\set{d_{1},d_{2},d_{3}}$. 
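\nAs a quick numerical sanity check of this example (a minimal Python\/NumPy sketch of our own; it is not part of the formal development and the variable names are ours), one can build $\\bG_{3}$ from the impulse response and inspect its singular values:\n\\begin{verbatim}\nimport numpy as np\n\n# Impulse response of the example FIR filter: g0 = 1, g1 = 2.\ng = [1.0, 2.0]\nn = 3\n\n# Lower-triangular Toeplitz (convolution) matrix G_n.\nG = sum(gk * np.eye(n, k=-k) for k, gk in enumerate(g))\n\nd = np.linalg.svd(G, compute_uv=False)  # singular values, in descending order\nprint(np.sort(d))                       # approx. [0.19394, 1.90321, 2.70928]\nprint(d.prod(), np.linalg.det(G))       # both approx. 1: G_3 alone gives no entropy gain\n\\end{verbatim}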
\nIn this case, $\\bD_3 = \\diag\\set{ 0.19394,\\, 1.90321, \\, 2.70928}$, and thus one of the singular values of $\\bG_{3}$ is much smaller than the others (although the product of all singular values yields $1$, as expected).\nAs will be shown in Section~\\ref{sec:entropy_gain_output_disturb}, for a stable $G(z)$ such uneven distribution of singular values arises only when $G(z)$ has non-minimum phase zeros.\nThe effect of this can be visualized by looking at the image of the cube $[0,1]^{3}$ through $\\bG_{3}$ shown in Fig.~\\ref{fig:cube}.\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width = 8 cm, trim= 0 70 0 70, clip]{cube_shear.eps}\n\\caption{Image of the cube $[0,1]^{3}$ through the square matrix with columns \n$[1\\; 2 \\; 0]^{T}$,\n$[0\\; 1 \\; 2]^{T}$ and\n$[0\\; 0 \\; 1]^{T}$.\n}\n\\label{fig:cube}\n\\end{figure}\nIf the input $\\rveu^{1}_{3}$ were uniformly distributed over this cube (of unit volume), then $\\bG_{3}\\rveu^{1}_{3}$ would distribute uniformly over the unit-volume parallelepiped depicted in Fig.~\\ref{fig:cube}, and hence\n$h(\\bG_{3}\\rveu^{1}_{3})=h(\\rveu^{1}_{3})$. \n\nNow, if we add to $\\bG_{3}\\rveu^{1}_{3}$ a disturbance $\\rvez^{1}_{3}=\\boldsymbol{\\Phi} \\rvas$, with scalar $\\rvas$ uniformly distributed over $[-0.5,\\ 0.5]$ independent of $\\rveu^{1}_{3}$, and with $\\boldsymbol{\\Phi}\\in\\Rl^{3\\times 1}$, the effect would be to ``thicken'' the support over which the resulting random vector \n$\\rvey^{1}_{3}=\\bG_{3}\\rveu^{1}_{3}+\\rvez^{1}_{3}$ is distributed, along the direction pointed by $\\boldsymbol{\\Phi}$.\nIf $\\boldsymbol{\\Phi}$ is aligned with the direction along which the support of $\\bG_{3}\\rveu^{1}_{3}$ is thinnest\n(given by $\\bq_{3,1}$, the first row of $\\bQ_{3}$), then the resulting support would have its volume significantly increased, which can be associated with a large increase in the differential entropy of $\\rvey^{1}_{3}$ with respect to $\\rveu^{1}_{3}$.\nIndeed, a relatively small variance of $\\rvas$ and an approximately aligned $\\boldsymbol{\\Phi}$ would still produce a significant entropy gain.\n\nThe above example suggests that the entropy gain from $\\rveu^{1}_{n}$ to $\\rvey^{1}_{n}$ appears as a combination of two factors.\nThe first of these is the uneven way in which the random vector $\\bG_{n}\\rveu^{1}_{n}$ is distributed over $\\Rl^{n}$.\nThe second factor is the alignment of the disturbance vector $\\rvez^{1}_{n}$ with respect to the \nspan of the subset $\\set{\\bq_{n,i}}_{i\\in\\Omega_{n}}$ of columns of $\\bQ_{n}$, associated with smallest singular values of $\\bG_{n}$, indexed by the elements in the set $\\W_n$.\nAs we shall discuss in the next section, if $G$ has $m$ non-minimum phase zeros, then, as $n$ increases, there will be $m$ singular values of $\\bG_{n}$ going to zero exponentially.\nSince the product of the singular values of $\\bG_{n}$ equals $1$ for all $n$, it follows that $\\prod_{i\\notin \\W_{n}}d_{n,i}$ must grow exponentially with $n$, where $d_{n,i}$ is the $i$-th diagonal entry of $\\bD_n$.\nThis implies that $\\bG_{n}\\rveu^{1}_{n}$ expands with $n$ along the span of $\\set{\\bq_{n,i}}_{i\\notin\\W_{n}}$, compensating its shrinkage along the span of $\\set{\\bq_{n,i}}_{i\\in\\W_{n}}$, thus keeping $h(\\bG_{n}\\rveu^{1}_{n})=h(\\rveu^{1}_{n})$ for all $n$.\nThus, as $n$ grows, any small disturbance distributed over the span of $\\set{\\bq_{n,i}}_{i\\in\\W_{n}}$, added to $\\bG_{n}\\rveu^{1}_{n}$, will keep the support of the resulting distribution from shrinking along 
this subspace.\nConsequently, the expansion of \n$\\bG_{n}\\rveu^{1}_{n}$ with $n$ along the span of $\\set{\\bq_{n,i}}_{i\\notin\\W_{n}}$ is no longer compensated, yielding an entropy increase proportional to $\\log(\\prod_{i\\notin\\W_{n}} d_{n,i})$.\n\nThe above analysis allows one to anticipate a situation in which no entropy gain would take place even when some singular values of $\\bG_{n}$ tend to zero as $n\\to\\infty$. \nSince the increase in entropy is made possible by the fact that, as $n$ grows, the support of the distribution of $\\bG_{n}\\rveu^{1}_{n}$ shrinks along the span of $\\set{\\bq_{n,i}}_{i\\in\\W_{n}}$, no such entropy gain should arise if the support of the distribution of the input $\\rveu^{1}_{n}$ expands accordingly along the directions pointed by the rows $\\set{\\br_{n,i}}_{i\\in\\W_{n}}$ of $\\bR_{n}$. \n\nAn example of such situation can be easily constructed as follows: Let $G(z)$ in Fig.~\\ref{fig:general} have non-minimum phase zeros and \nsuppose that $\\rvau_{1}^{\\infty}$ is generated as $G^{-1}\\tilde{\\rvau}_{1}^{\\infty}$, where $\\tilde{\\rvau}_{1}^{\\infty}$ is an i.i.d. random process with bounded entropy rate.\nSince the determinant of $\\bG_{n}^{-1}$ equals $1$ for all $n$, we have that $h(\\rveu^{1}_{n})=h(\\tilde{\\rveu}^{1}_{n})$, for all $n$.\nOn the other hand,\n$\\rvey^{1}_{n}\n=\n\\bG_{n}\\bG_{n}^{-1}\\tilde{\\rveu}^{1}_{n} + \\rvez^{1}_{n}\n=\n\\tilde{\\rveu}^{1}_{n} + \\rvez^{1}_{n}\n$.\nSince $\\rvez^{1}_{n}=[\\boldsymbol{\\Phi}]^{1}_{n}\\rves^{1}_{\\kappa}$ for some finite $\\kappa$ (recall Assumption~\\ref{assu:z}), it is easy to show that \n$\n\\lim_{n\\to\\infty}\\frac{1}{n} h(\\rvey^{1}_{n})\n= \n\\lim_{n\\to\\infty}\\frac{1}{n} h(\\tilde{\\rveu}^{1}_{n})\n= \n\\lim_{n\\to\\infty}\\frac{1}{n} h({\\rveu}^{1}_{n})\n$,\nand thus no entropy gain appears.\n\nThe preceding discussion reveals that the entropy gain produced by $G$ in the situation shown in Fig.~\\ref{fig:general} \\textbf{depends on the distribution of the input and on the support and distribution of the disturbance}.\nThis stands in stark contrast with the well known fact that the increase in differential entropy produced by an invertible linear operator depends only on its Jacobian, and not on the statistics of the input~\\cite{shawea49}. 
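\nBefore moving on, we illustrate this dependence numerically (a sketch of our own, using Gaussian vectors only, so that every differential entropy reduces to a log-determinant; all names below are ours). For the example filter $g_{0}=1$, $g_{1}=2$, an i.i.d. Gaussian input, and a scalar Gaussian disturbance of variance $10^{-6}$ aligned with the weakest left singular direction of $\\bG_{n}$, the per-sample entropy gain approaches $\\log 2$, the logarithm of the magnitude of the NMP zero:\n\\begin{verbatim}\nimport numpy as np\n\ndef conv_matrix(g, n):\n    # Lower-triangular Toeplitz (convolution) matrix of the FIR response g.\n    return sum(gk * np.eye(n, k=-k) for k, gk in enumerate(g))\n\ng = [1.0, 2.0]            # one NMP zero, of magnitude 2\nsigma2 = 1e-6             # variance of the scalar disturbance\nfor n in [25, 50, 100, 200]:\n    G = conv_matrix(g, n)\n    U, d, Vt = np.linalg.svd(G)   # columns of U: left singular vectors\n    phi = U[:, -1]                # direction of the smallest singular value\n    # For u ~ N(0, I) and y = G u + phi * s, with s ~ N(0, sigma2):\n    #   h(y) - h(u) = 0.5 * logdet(G G^T + sigma2 * phi phi^T)\n    Sy = G @ G.T + sigma2 * np.outer(phi, phi)\n    print(n, 0.5 * np.linalg.slogdet(Sy)[1] / n)   # approaches log(2) = 0.693...\n\\end{verbatim}\nReplacing the vector \\texttt{phi} above by a direction orthogonal to the weakest one makes the printed per-sample gain collapse to essentially zero, in line with the discussion above.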
\nWe have also seen that the distribution of a random process along the different directions within the Euclidean space which contains it plays a key role as well.\nThis motivates the need to specify a class of random processes which distribute more or less evenly over all directions.\nThe following section introduces a rigorous definition of this class and characterizes a large family of processes belonging to it.\n\n\\subsection{Entropy-Balanced Processes}\\label{subsec:entropy_balanced}\nWe begin by formally introducing the notion of an ``entropy-balanced'' process\n $\\rvau_{1}^{\\infty}$, being one in which, for every finite $\\nu\\in\\Nl$, the differential entropy rate of the orthogonal projection of $\\rvau_{1}^{n}$ onto any subspace of dimension $n-\\nu$ equals the entropy rate of $\\rvau_{1}^{n}$ as $n\\to\\infty$.\nThis idea is made precise in the following definition.\n\\begin{defn}\\label{def:entropy_balanced}\n A random process $\\set{\\rvav(k)}_{k=1}^{\\infty}$ is said to be entropy balanced if, for every $\\nu\\in\\Nl$, \n\\begin{subequations}\\label{eq:the_painful_assumption}\n\\begin{align}\n\\lim_{n\\to\\infty}\\frac{1}{n}& \n\\left( \nh(\\boldsymbol\\Phi_{n}\\rvev^{1}_{n} )- h(\\rvev^{1}_{n})\n\\right) =0, \n\\end{align} \n\\end{subequations} \nfor every sequence of matrices $\\set{\\boldsymbol{\\Phi}_{n}}_{n=\\nu+1}^{\\infty}$, $\\boldsymbol{\\Phi}_n\\in\\Rl^{(n-\\nu)\\times n}$, with orthonormal rows. \n\\finenunciado\n\\end{defn}\nEquivalently, a random process $\\set{\\rvav(k)}$ is entropy balanced if every unitary transformation on $\\rvav_{1}^{n}$ yields a random sequence $\\rvay_{1}^{n}$ such that\n$\n \\lim_{n\\to \\infty}\\frac{1}{n}|h(\\rvay_{n-\\nu+1}^{n}|\\rvay_{1}^{n-\\nu})| =0\n$.\nThis property of the resulting random sequence $\\rvay_{1}^{n}$ means that one cannot predict its last $\\nu$ samples with arbitrary accuracy by using its previous $n-\\nu$ samples, even if $n$ goes to infinity.\n\nWe now characterize a large family of entropy-balanced random processes and establish some of their properties.\nAlthough intuition may suggest that most random processes (such as i.i.d. or stationary processes) should be entropy balanced, that statement seems rather difficult to prove.\nIn the following, we show that the entropy-balanced condition is met by i.i.d. processes with per-sample \\textit{probability density function} (PDF) being uniform, piece-wise constant or Gaussian.\nIt is also shown that adding to an entropy-balanced process another random process independent of the former yields an entropy-balanced process, and that filtering an entropy-balanced process by a stable and minimum phase filter yields an entropy-balanced process as well.\n\n\\begin{prop}\\label{prop:gaussian_is_entropy_balanced}\nLet $\\rvau_{1}^{\\infty}$ be a Gaussian i.i.d. random process with positive and bounded per-sample variance.\n Then $\\rvau_{1}^{\\infty}$ is entropy balanced.\\finenunciado\n\\end{prop}\n\\begin{lem}\\label{lem:piecewiseconstant}\n Let $\\rvau_{1}^{\\infty}$ be an i.i.d. process with finite differential entropy rate, in which each $\\rvau_i$ is distributed according to a piece-wise constant PDF in which each interval where this PDF is constant has measure greater than $\\epsilon$, for some bounded-away-from-zero constant $\\epsilon$. 
\n Then $\\rvau_{1}^{\\infty}$ is entropy balanced.\\finenunciado\n\\end{lem}\n\n\n\\begin{lem}\\label{lem:sum_yields_entropy_balanced}\n Let $\\rvau_{1}^{\\infty}$ and $\\rvav_{1}^{\\infty}$ be mutually independent random processes.\n If $\\rvau_{1}^{\\infty}$ is entropy balanced, then $\\rvaw_{1}^{\\infty}\\eq \\rvau_{1}^{\\infty} + \\rvav_{1}^{\\infty}$ is also entropy balanced.\\finenunciado\n\\end{lem}\nThe working behind this lemma can be interpreted intuitively by noting that adding to a random process another independent random process can only increase the ``spread'' of the distribution of the former, which tends to balance the entropy of the resulting process along all dimensions in Euclidean space.\nIn addition, it follows from Lemma~\\ref{lem:sum_yields_entropy_balanced} that all i.i.d. processes having a per-sample PDF which can be constructed by convolving uniform, piece-wise constant or Gaussian PDFs as many times as required are entropy balanced.\nIt also implies that one can have non-stationary processes which are entropy balanced, since Lemma~\\ref{lem:sum_yields_entropy_balanced} imposes no requirements for the process $\\rvav_{1}^{\\infty}$.\n\nOur last lemma related to the properties of entropy-balanced processes shows that filtering by a stable and minimum phase LTI filter preserves the entropy balanced condition of its input.\n\\begin{lem}\\label{lem:filtering_preserves_entropy_balance}\n Let $\\rvau_{1}^{\\infty}$ be an entropy-balanced process and $G$ an LTI stable and minimum-phase filter.\n Then the output $\\rvaw_{1}^{\\infty}\\eq G\\rvau_{1}^{\\infty}$ is also an entropy-balanced process.\\finenunciado\n\\end{lem}\nThis result implies that any stable moving-average auto-regressive process constructed from entropy-balanced innovations is also entropy balanced, provided the coefficients of the averaging and regression correspond to a stable MP filter.\n\nWe finish this section by pointing out two examples of processes which are non-entropy-balanced, namely, \nthe output of a NMP-filter to an entropy-balanced input and the output of an unstable filter to an entropy-balanced input. 
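\nFor the first of these cases, a small numerical sketch (our own) makes the failure of the entropy-balanced property visible: for a Gaussian i.i.d. input, the projection of $\\bG_{n}\\rveu^{1}_{n}$ onto a left singular direction of $\\bG_{n}$ is Gaussian with variance equal to the corresponding squared singular value, and for an NMP filter one of these variances decays exponentially with $n$, so the differential entropy of that one-dimensional projection diverges to $-\\infty$; for a stable minimum phase filter, in contrast, it remains bounded away from zero:\n\\begin{verbatim}\nimport numpy as np\n\ndef conv_matrix(g, n):\n    # Lower-triangular Toeplitz (convolution) matrix of the FIR response g.\n    return sum(gk * np.eye(n, k=-k) for k, gk in enumerate(g))\n\nfor g, label in [([1.0, 2.0], "NMP, zero at -2"), ([1.0, 0.5], "MP, zero at -0.5")]:\n    for n in [5, 10, 20, 40]:\n        d_min = np.linalg.svd(conv_matrix(g, n), compute_uv=False).min()\n        # d_min**2 is the smallest output variance over all unit-norm\n        # projections of G_n u when u is i.i.d. with unit variance.\n        print(label, n, d_min**2)\n\\end{verbatim}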
\nThe first of these cases plays a central role in the next section.\n\n\n\\section{Entropy Gain due to External Disturbances}\\label{sec:entropy_gain_output_disturb}\nIn this section we formalize the ideas which were qualitatively outlined in the previous section.\nSpecifically, \nfor the system shown in Fig.~\\ref{fig:general}\nwe will characterize the entropy gain $\\Gsp(G,\\rvex_{0},\\rvau_{1}^{\\infty},\\rvaz_{1}^{\\infty})$ defined in~\\eqref{eq:Gsp_def} for the case in which the initial state $\\rvex_{0}$ is zero (or deterministic) and there exists a random output disturbance $\\rvaz_{1}^{\\infty}$ (of possibly infinite length) which satisfies Assumption~\\ref{assu:z}.\nThe following lemmas will be instrumental for that purpose.\n\n\n\\begin{lem}\\label{lem:singular_values_bounded}\n Let $A(z)$ be a causal, finite-order, stable and minimum-phase rational transfer function with impulse response $a_{0},a_{1},\\ldots$ such that $a_{0}=1$.\n Then \n $\\lim_{n\\to\\infty}\\lambda_{1}(\\bA_{n}\\bA^{T}_{n})>0$\n and \n $\\lim_{n\\to\\infty}\\lambda_{n}(\\bA_{n}\\bA^{T}_{n})<\\infty$.\n \\finenunciado\n\\end{lem}\n\n\n\\begin{lem}\\label{lem:gap_with_two_terms}\nConsider the system in Fig.~\\ref{fig:general}, and suppose $\\rvaz_{1}^{\\infty}$ satisfies Assumption~\\ref{assu:z}, and that the input process $\\rvau_{1}^{\\infty}$ is entropy balanced.\n Let $\\bG_{n}=\\bQ_{n}^{T}\\bD_{n}\\bR_{n}$ be the SVD of $\\bG_{n}$, where $\\bD_{n}=\\diag\\set{d_{n,1},\\ldots,d_{n,n}}$ contains the singular values of $\\bG_{n}$, with $d_{n,1}\\leq d_{n,2}\\leq\\cdots \\leq d_{n,n}$, such that $|\\det\\bG_{n}| = 1$ $\\forall n$. \nLet $m$ be the number of these singular values which tend to zero exponentially as $n\\to\\infty$.\nThen\n \\begin{align}\\label{eq:gap_with_two_terms}\n \\lim_{n\\to\\infty}\\frac{1}{n}\\left(h(\\rvay_{1}^{n}) -h(\\rvau_{1}^{n})\\right) \n =\n \\lim_{n\\to\\infty}\\frac{1}{n}\\left( -\\Sumfromto{i=1}{m} \\log d_{n,i} + h\\left([\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\rvez^{1}_{n} \\right) \\right).\n \\end{align}\n \\finenunciado\n\\end{lem}\n(The proof of this Lemma can be found in the Appendix, page~\\pageref{proof:lem_gap_with_two_terms}).\n\nThe previous lemma precisely formulates the geometric idea outlined in Section~\\ref{sec:geometric_interpretation}.\nTo see this, notice that no entropy gain is obtained if the output disturbance vector $\\rvez^{1}_{n}$ is orthogonal to the space spanned by the first $m$ rows of $\\bQ_{n}$.\nIf this were the case, then the disturbance would not be able to fill the subspace along which $\\bG_{n}\\rveu^{1}_{n}$ is shrinking exponentially.\nIndeed, if $[\\bQ_{n}]^{1}_{m}\\rvez^{1}_{n}=0$ for all $n$, then\n$\nh([\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\rvez^{1}_{n} )\n=\nh({^{1}}\\![\\bD_{n}]_{m}[\\bR_{n}]^{1}_{m}\\rveu^{1}_{n})\n=\n\\sum_{i=1}^{m} \\log d_{n,i}+h([\\bR_{n}]^{1}_{m}\\rveu^{1}_{n})\n$,\nand the latter sum cancels out the one on the RHS of~\\eqref{eq:gap_with_two_terms}, while \n$\\lim_{n\\to\\infty}\\frac{1}{n}h([\\bR_{n}]^{1}_{m}\\rveu^{1}_{n})=0$ since $\\rvau_{1}^{\\infty}$ is entropy balanced.\nOn the contrary (and loosely speaking), if the projection of the\nsupport of $\\rvez^{1}_{n}$ onto the subspace spanned by the first $m$ rows of $\\bQ_{n}$ is of dimension $m$, then\n$h([\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\rvez^{1}_{n} )$ remains bounded for all $n$, and the limit of the sum \n$\\lim_{n\\to\\infty}\\frac{1}{n}( -\\sumfromto{i=1}{m} 
\\log d_{n,i})$ on the RHS of~\\eqref{eq:gap_with_two_terms} yields the largest possible entropy gain.\nNotice that \n$\n-\\sumfromto{i=1}{m} \\log d_{n,i} \n=\n\\sumfromto{i=m+1}{n} \\log d_{n,i}\n$ (because $\\det(\\bG_{n})=1$), \nand thus this entropy gain stems from the uncompensated expansion of $\\bG_{n}\\rveu^{1}_{n}$ along the space spanned by the rows of $[\\bQ_{n}]^{m+1}_{n}$.\n\nLemma~\\ref{lem:gap_with_two_terms} also yields the following corollary, which states that \nonly a filter $G(z)$ with zeros outside the unit circle (i.e., an NMP transfer function) can introduce entropy gain.\n\\begin{coro}[Minimum Phase Filters do not Introduce Entropy Gain]\\label{coro:MP_filters_no_EG}\nConsider the system shown in Fig.~\\ref{fig:general} and\nlet $\\rvau_{1}^{\\infty}$ be an entropy-balanced random process with bounded entropy rate.\nBesides Assumption~\\ref{assu:G_Factorized}, suppose that $G(z)$ is minimum phase.\n Then \n \\begin{align}\n\\lim_{n\\to\\infty}\\frac{1}{n}\\left(h(\\rvay_{1}^{n}) -h(\\rvau_{1}^{n})\\right) =0. \n \\end{align}\n \\finenunciado\n\\end{coro}\n\\begin{proof}\nSince $G(z)$ is minimum phase and stable, it follows from Lemma~\\ref{lem:singular_values_bounded} that the number of singular values of $\\bG_{n}$ which go to zero exponentially, as $n\\to\\infty$, is zero.\nIndeed, all the singular values vary polynomially with $n$.\nThus $m=0$ and Lemma~\\ref{lem:gap_with_two_terms} yields directly that the entropy gain is zero (since the RHS of~\\eqref{eq:gap_with_two_terms} is zero).\n\\end{proof}\n\n\n\n\n\\subsection{Input Disturbances Do Not Produce Entropy Gain}\nIn this section we show that random disturbances satisfying Assumption~\\ref{assu:z},\nwhen added to the \\textit{input} $\\rvau_{1}^{\\infty}$ (i.e., before $G$), do not introduce entropy gain.\nThis result can be obtained from Lemma~\\ref{lem:gap_with_two_terms}, as stated in the following theorem:\n\\begin{thm}[Input Disturbances do not Introduce Entropy Gain]\nLet $G$ satisfy Assumption~\\ref{assu:G_Factorized}. \nSuppose that $\\rvau_{1}^{\\infty}$ is entropy balanced and consider the output \n \\begin{align}\n \\rvay_{1}^{\\infty} = G\\ ( \\rvau_{1}^{\\infty} + \\rvab_{1}^{\\infty}).\n \\end{align}\n where \n$\n \\rveb^{1}_{\\infty} = \\boldsymbol{\\Psi} \\rvea^{1}_{\\nu},\n$\nwith $\\rvea^{1}_{\\nu}$ being a random vector satisfying $h(\\rvea^{1}_{\\nu})<\\infty$, and where $\\boldsymbol{\\Psi}\\in\\Rl^{|\\Nl|\\times \\nu}$ has orthonormal columns.\nThen,\n\\begin{align}\n \\lim_{n\\to\\infty}\\frac{1}{n}\\left(h(\\rvay_{1}^{n}) -h(\\rvau_{1}^{n})\\right) =0\n\\end{align}\n\\end{thm}\n\n\\begin{proof}\n In this case, the effect of the input disturbance in the output is the forced response of $G$ to it.\n This response can be regarded as an output disturbance $\\rvaz_{1}^{\\infty} = G \\rvab_{1}^{\\infty}$. 
\n Thus, the argument of the differential entropy on the RHS of~\\eqref{eq:gap_with_two_terms} is \n %\n \\begin{align}\n [\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\rvez^{1}_{n}\n &\n =\n [\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m} \\bQ_{n}^{T}\\bD_{n}\\bR_{n}\\rveb^{1}_{n}\n\\\\&\n=\n [\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bD_{n}]^{1}_{m}\\bR_{n}\\rveb^{1}_{n}\n\\\\&\n=\n \\block[\\bD_{n}]{1}{m} \\rows[\\bR_{n}]{1}{m}\\left( \\rveu^{1}_{n} + \\rveb^{1}_{n}\\right).\n \\end{align}\n %\nTherefore,\n\\begin{align}\n h([\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\rvez^{1}_{n})\n &\n =\n h(\\block[\\bD_{n}]{1}{m} \\rows[\\bR_{n}]{1}{m}\\left( \\rveu^{1}_{n} + \\rveb^{1}_{n}\\right))\n \\\\&\n =\n \\sumfromto{i=1}{m}\\log d_{n,i} + h(\\rows[\\bR_{n}]{1}{m}\\left( \\rveu^{1}_{n} + [\\boldsymbol{\\Psi}]^{1}_{n}\\rvea^{1}_{\\nu}\\right) ).\n\\end{align}\nThe proof is completed by substituting this result into the RHS of~\\eqref{eq:gap_with_two_terms} and noticing that $$\\lim_{n\\to\\infty}\\frac{1}{n}h\\left([\\bR_{n}]^{1}_{m}(\\rveu^{1}_{n} +[\\boldsymbol{\\Psi}]^{1}_{n}\\rvea^{1}_{\\nu})\\right)=0.$$\n\\end{proof}\n\n\\begin{rem}\n An alternative proof for this result can be given based upon the properties of an entropy-balanced sequence, as follows.\n Since $\\det(\\bG_{n})=1,\\ \\forall n$, we have that \n $\n h(\\bG_{n}(\\rveu^{1}_{n}+\\rveb^{1}_{n}))\n= h(\\rveu^{1}_{n}+\\rveb^{1}_{n})$.\nLet $\\boldsymbol{\\Theta}_{n}\\in\\Rl^{\\nu\\times n}$ and $\\overline{\\boldsymbol{\\Theta}}_{n}\\in\\Rl^{(n-\\nu)\\times n}$ be a matrices with orthonormal rows which satisfy \n$\\overline{\\boldsymbol{\\Theta}}_{n}[\\boldsymbol{\\Psi}]^{1}_{n}=\\bzero$ and such that\n $[\\boldsymbol{\\Theta}_{n}^{T} \\,|\\, \\overline{\\boldsymbol{\\Theta}}_{n}^{T}]^{T}$ is a unitary matrix.\n Then\n %\n\\begin{align}\n h([\\boldsymbol{\\Theta}_{n}^{T} \\,|\\, \\overline{\\boldsymbol{\\Theta}}_{n}^{T}]^{T}\\left(\\rveu^{1}_{n}+\\rveb^{1}_{n}\\right))\n =\n h(\\boldsymbol{\\Theta}_{n} \\rveu^{1}_{n}+ \\boldsymbol{\\Theta}_{n}[\\boldsymbol{\\Psi}]^{1}_{n}\\rvea^{1}_{\\nu} \\,|\\, \\overline{\\boldsymbol{\\Theta}}_{n}\\rveu^{1}_{n})\n +\n h(\\overline{\\boldsymbol{\\Theta}}_{n}\\rveu^{1}_{n}),\n\\end{align}\nwhere we have applied the chain rule of differential entropy.\nBut\n\\begin{align}\n h(\\boldsymbol{\\Theta}_{n} \\rveu^{1}_{n}+ \\boldsymbol{\\Theta}_{n}[\\boldsymbol{\\Psi}]^{1}_{n}\\rvea^{1}_{\\nu} | \\overline{\\boldsymbol{\\Theta}}_{n}\\rveu^{1}_{n})\n \\leq \n h(\\boldsymbol{\\Theta}_{n} \\rveu^{1}_{n}+ \\boldsymbol{\\Theta}_{n}[\\boldsymbol{\\Psi}]^{1}_{n}\\rvea^{1}_{\\nu} )\n\\end{align}\nwhich is upper bounded for all $n$ because $h(\\rvea^{1}_{n})<\\infty$ and $h(\\boldsymbol{\\Theta}_{n} \\rveu^{1}_{n})<\\infty$, the latter due to $\\rvau_{1}^{\\infty}$ being entropy balanced.\nOn the other hand, since $\\rveb^{1}_{n}$ is independent of $\\rveu^{1}_{n}$, it follows that $h(\\rveu^{1}_{n}+\\rveb^{1}_{n})\\geq h(\\rveu^{1}_{n})$, for all $n$.\nThus \n$\n\\lim_{n\\to\\infty}\\frac{1}{n}(h(\\rvey^{1}_{n}) -h(\\rveu^{1}_{n})) \n=\n\\lim_{n\\to\\infty}\\frac{1}{n}(h(\\overline{\\boldsymbol{\\Theta}}_{n}\\rveu^{1}_{n}) -h(\\rveu^{1}_{n}))\n=0$,\nwhere the last equality stems from the fact that $\\rvau_{1}^{\\infty}$ is entropy balanced.\n\\finenunciado\n\\end{rem}\n\n\n\\subsection{The Entropy Gain Introduced by Output Disturbances when $G(z)$ has NMP Zeros}\nWe show here that the entropy gain of a transfer function with zeros outside the unit circle is at 
most the sum of the logarithm of the magnitude of these zeros.\nTo be more precise, the following assumption is required. \n\n\\begin{assu}\\label{assu:zeros_of_G}\nThe filter $G$ satisfies Assumption~\\ref{assu:G_Factorized} and its transfer function $G(z)$\nhas $p$ poles and $p$ zeros, $m$ of which are NMP-zeros.\nLet $M$ be the number of distinct NMP zeros, given by\n$\\{\\rho_i\\}_{i=1}^M$, i.e., such that $|\\rho_1|>|\\rho_2|>\\dots>|\\rho_{M}|>1$, with \n$\\ell_i$ being the multiplicity of the $i$-th distinct zero.\nWe denote by $\\iota(i)$, where $\\iota:\\set{1,\\ldots,m}\\to\\set{1,\\ldots,M}$, the distinct zero of $G(z)$ associated with the $i$-th non-distinct zero of $G(z)$, i.e.,\n\\begin{align}\\label{eq:iota_def}\n \\iota(k) &\\eq \n \\min\\set{\\iota :\\sumfromto{i=1}{\\iota}\\ell_i\\geq k}.\n\\end{align}\n\\finenunciado \n\\end{assu}\n\n\nAs can be anticipated from the previous results in this section, we will need to characterize the asymptotic behaviour of the singular values of $\\bG_{n}$.\nThis is accomplished in the following lemma, which relates these singular values to the zeros of $G(z)$.\nThis result is a generalization of the unnumbered lemma in the proof of~{\\cite[Theorem~1]{hasari80}} (restated in the appendix as Lemma~\\ref{lem:hashimoto}), which holds for FIR transfer functions, to the case of \\emph{infinite-impulse response} (IIR) transfer functions (i.e., transfer functions having poles).\n\\begin{lem}\\label{lem:hashimoto_IIR}\nFor a transfer function $G$ satisfying Assumption~\\ref{assu:zeros_of_G}, it holds that\n\\begin{align}\n \\lambda_{l}(\\bG_n\\bG_n^T) \n = \n \\begin{cases} \n \\alpha_{n,l}^{2}(\\rho_{ \\iota(l)})^{-2n}\t&, \\text{if } l\\leq m,\\\\\n \\alpha_{n,l}^{2}\t\t\t\t&, \\text{otherwise },\n \\end{cases}\n\\end{align}\nwhere the elements in the sequence $\\set{\\alpha_{n,l} }$ are positive and increase or decrease at most polynomially with $n$. \\finenunciado\n\\end{lem}\n(The proof of this lemma can be found in the appendix, page~\\pageref{proof:lem_hashimoto_IIR}).\n\n\n\n\nWe can now state the first main result of this section.\n\\begin{thm}\\label{thm:eg_n_instate_w_disturb_ineq}\nIn the system of Fig.~\\ref{fig:general}, suppose that $\\rvau_{1}^{\\infty}$ is entropy balanced and that\n$G(z)$ and $\\rvaz_{1}^{\\infty}$ satisfy assumptions~\\ref{assu:zeros_of_G} and~\\ref{assu:z}, respectively.\nThen\n\\begin{align}~\\label{eq:eg_n_instate_w_disturb_ineq}\n0\\leq \\lim_{n\\to\\infty}\\frac{1}{n}\\left(h(\\rvay_{1}^{n}) -h(\\rvau_{1}^{n})\\right)\n \\leq \n \\Sumover{i=1}^{\\bar{\\kappa}}\\log |\\rho_{\\iota(i)}|,\n\\end{align}\nwhere\n$\\bar{\\kappa}\\eq \\min\\set{\\kappa,m}$ and\n$\\kappa$ is as defined in Assumption~\\ref{assu:z}.\nBoth bounds are tight.\nThe upper bound is achieved if \n$\\lim_{n\\to\\infty}\\det(\n\\rows[\\bQ_{n}]{1}{\\bar{\\kappa}} \\rows[\\boldsymbol{\\Phi}]{1}{n}(\\rows[\\bQ_{n}]{1}{\\bar{\\kappa}} \\rows[\\boldsymbol{\\Phi}]{1}{n})^{T})>0$,\nwhere the unitary matrices $\\bQ_{n}^{T}\\in\\Rl^{n\\times n}$ hold the left singular vectors of $\\bG_{n}$.\n\\finenunciado\n\\end{thm}\n\\begin{proof}\nSee Appendix, page~\\pageref{proof:thm_eg_n_instate_w_disturb_ineq}. 
\n\\end{proof}\n\nThe second main theorem of this section is the following:\n\\begin{thm}\\label{thm:eg_n_instate_w_disturb}\nIn the system of Fig.~\\ref{fig:general}, suppose that $\\rvau_{1}^{\\infty}$ is entropy balanced and that\n$G(z)$ satisfies Assumption~\\ref{assu:zeros_of_G}.\nLet $\\rvaz_{1}^{\\infty}$ be a random output disturbance, such that $\\rvaz(i)=0,\\, \\forall i > m$, and that $|h(\\rvaz_{1}^{m})|<\\infty$.\nThen\n\\begin{align}\n \\lim_{n\\to\\infty}\n \\frac{1}{n}\n \\left(h(\\rvay_{1}^{n}) - h(\\rvau_{1}^{n}) \\right) = \\Sumfromto{i=1}{m}\\log |\\rho_{\\iota(i)}|.\n\\end{align}\n\\finenunciado\n\\end{thm}\n\n\\begin{proof} \nSee Appendix, page~\\pageref{proof:thm:eg_n_instate_w_disturb}.\n\\end{proof}\n\n\n\\section{Entropy Gain due to a Random Initial State}\\label{sec:entropy_gain_initial_stat}\n\nHere we analyze the case in which there exists a random initial state $\\rvex_{0}$ independent of the input $\\rvau_{1}^{\\infty}$, \nand zero (or deterministic) output disturbance.\n\nThe effect of a random initial state appears in the output as the natural response of $G$ to it, namely the sequence \n$\\bar{\\rvay}_{1}^{n}$.\nThus, $\\rvay_{1}^{n}$ can be written in vector form as\n\\begin{align}\n \\rvey^{1}_{n} = \\bG_{n}\\rveu^{1}_{n} + \\bar{\\rvey}^{1}_{n}.\n\\end{align}\nThis reveals that the effect of a random initial state can be treated as a random output disturbance, which allows us to apply the results from the previous sections.\n\n\nRecall from Assumption~\\ref{assu:zeros_of_G} that $G(z)$ is a stable and biproper rational transfer function with $m$ NMP zeros.\nAs such, it can be factored as \n \\begin{align}\\label{eq:G_Factorized_as_tilde_G_F}\n G(z) = P(z)N(z),\n \\end{align}\nwhere $P(z)$ is a biproper filter containing all the poles of $G(z)$, and $N(z)$ is an FIR biproper filter, containing all the zeros of $G(z)$.\n\nWe have already established (recall Corollary~\\ref{coro:MP_filters_no_EG}) that the entropy gain introduced by the minimum phase system $P(z)$ is zero.\nIt then follows that the entropy gain can be introduced only by the NMP-zeros of $N(z)$ and an appropriate output disturbance $\\bar{\\rvay}_1^\\infty$.\nNotice that, in this case, the input process $\\rvaw_1^\\infty$ to $N$ (i.e., the output sequence of $P$ due to a random input $\\rvau_1^\\infty$) is independent of $\\bar{\\rvay}_1^\\infty$ (since we have placed the natural response $\\bar{\\rvay}_{1}^{\\infty}$ after the filters $P$ and $N$, whose initial state is now zero).\nThis condition allows us to directly use Lemma~\\ref{lem:gap_with_two_terms} in order to analyze the entropy gain that $\\rvau_1^\\infty$ experiences after being filtered by $G$, which coincides with $\\bar{h}(\\rvay_1^\\infty)-\\bar{h}(\\rvaw_1^\\infty)$.\nThis is achieved by the next theorem.\n\n\n\\begin{thm}\\label{thm:eg-due-to-random-xo}\n Consider a stable $p$-th order biproper filter $G(z)$ having $m$ NMP-zeros, and with a random initial state $\\rvex_0$, such that $|h(\\rvex_0)|<\\infty$. 
\n Then, the entropy gain due to the existence of a random initial state is \n %\n \\begin{align}\n \\lim_{n\\to\\infty} \\frac{1}{n}( h(\\rvay_1^n) - h(\\rvau_1^n) ) = \\sumfromto{i=1}{m} \\log\\abs{\\rho_{\\iota(i)}}.\n \\end{align}\n\\end{thm}\n\n\\begin{proof}\\label{proof:thm_eg-due-to-random-xo}\nBeing a biproper and stable rational transfer function, $G(z)$ can be factorized as\n\\begin{align}\n G(z) = P(z)N(z),\n\\end{align}\nwhere $P(z)$ is a stable biproper transfer function containing only all the poles of $G(z)$ and with all its zeros at the origin,\nwhile $N(z)$ is stable and biproper FIR filter, having all the zeros of $G(z)$.\nLet $\\tilde{\\bC}_n\\rvex_0$ and $\\bC_{n}\\rvex_0$ be the natural responses of the systems $P$ and $N$ to their common random initial state $\\rvex_{0}$, respectively, where $\\tilde{\\bC}_n, \\bC_n \\in \\Rl^{n\\times p}$.\nThen we can write \n\\begin{align}\n \\rvey^{1}_{n} \n = \n \\bG_{n}\\rveu^{1}_{n} + \\bar{\\rvey}^{1}_{n}\n =\n \\bN_{n}\\underbrace{\\bP_{n}\\rveu^{1}_{n}}_{\\eq \\rvew^{1}_{n}} + \\bar{\\rvey}^{1}_{n}\n =\n \\bN_{n}\\rvew^{1}_{n} + \\bar{\\rvey}^{1}_{n}.\n\\end{align}\nSince $P(z)$ is stable and MP, it follows from Corollary~\\ref{coro:MP_filters_no_EG} that \n$h(\\rvew^{1}_{n})=h(\\rveu^{1}_{n})$ for all $n$, and therefore \n\\begin{align}\n h(\\rvey^{1}_{n}) - h(\\rveu^{1}_{n}) \n = \n h(\\rvey^{1}_{n}) - h(\\rvew^{1}_{n}).\n\\end{align}\nTherefore, we only need to consider the entropy gain introduced by the (possibly) non-minimum filter $N$ due to a random output disturbance \n$\\rvez^{1}_{n} \n=\n\\bar{\\rvey}^{1}_{n}\n= \n\\bN_{n}\\tilde{\\bC}_{n}\\rvex_{0}\n+\n\\bC_{n}\\rvex_{0}\n$, \nwhich is independent of the input $\\rvew^{1}_{n}$.\nThus, the conditions of Lemma~\\ref{lem:gap_with_two_terms} are met considering $\\bG_{n}=\\bN_{n}$,\nwhere now\n$\\bN_{n} = \\bQ_{n}^{T}\\bD_{n}\\bR_{n}$ is the SVD for $\\bN_{n}$, and $d_{n,1}\\leq d_{n,2}\\leq \\cdots \\leq d_{n,n}$.\nConsequently, it suffices to consider the differential entropy on the RHS of~\\eqref{eq:gap_with_two_terms}, whose argument is\n\\begin{align}\n [\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\bar{\\rvey}^{1}_{n} \n &\n =\n [\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m} \n \\left( \\bN_{n}\\tilde{\\bC}_{n}\\rvex_{0} + \\bC_{n}\\rvex_{0}\\right)\n \\\\&\n =\n[\\bD_{n}]^{1}_{m}\\bR_{n} \\left(\\rveu^{1}_{n} + \\tilde{\\bC}_{n}\\rvex_{0}\\right)\n+[\\bQ_{n}]^{1}_{m} \\bC_{n}\\rvex_{0}\n \\\\&\n =\n[\\bD_{n}]^{1}_{m}\\bR_{n} \\rvev^{1}_{n} \n+[\\bQ_{n}]^{1}_{m} \\bC_{n}\\rvex_{0},\\label{eq:the_term_of_v_and_x0}\n\\end{align}\nwhere $\\rvev^{1}_{n}\\eq \\rveu^{1}_{n} + \\tilde{\\bC}_{n}\\rvex_{0}$ has bounded entropy rate and is entropy balanced (since $\\tilde{\\bC}_{n}\\rvex_{0}$ is the natural response of a stable LTI system and because of Lemma~\\ref{lem:sum_yields_entropy_balanced}).\nWe remark that, in~\\eqref{eq:the_term_of_v_and_x0}, $\\rvev^{1}_{n}$ is not independent of $\\rvex_{0}$, which precludes one from using the proof of Theorem~\\ref{thm:eg_n_instate_w_disturb_ineq} directly.\n\n\nOn the other hand, since $N(z)$ is FIR of order (at most) $p$, we have that\n$\\bC_{n}= [ \\bE_{p}^T \\,|\\, \\bzero^T\\,]^T$, \nwhere $\\bE_p\\in\\Rl^{p\\times p}$ is a non-singular upper-triangular matrix independent of $n$.\nHence, $\\bC_{n}\\rvex_{0}$ can be written as $[\\boldsymbol{\\Phi}]^{1}_{n}\\rves^{1}_{p}$, where \n$[\\Phi]^{1}_{n} = [\\bI_p^T \\,|\\,\\bzero^T]^T$ \nand $\\rves^1_p \\eq \\bE_p\\rvex_0$.\nAccording 
to~\\eqref{eq:the_term_of_v_and_x0}, the entropy gain in~\\eqref{eq:eg_n_instate_w_disturb_ineq} arises as long as $h([\\bQ_n]^1_m\\bC_n\\rvex_0)$ is lower bounded by a finite constant (or if it decreases sub-linearly as $n$ grows).\nThen, we need $[\\bQ_n]^1_m[\\boldsymbol{\\Phi}]^1_n$ to be a full row-ranked matrix in the limit as $n\\to\\infty$.\nHowever,\n\\begin{align}\n \\det \\left([\\bQ_{n}]^{1}_{m}[\\boldsymbol{\\Phi}_{n}]^{1}_{n} ([\\bQ_{n}]^{1}_{m}[\\boldsymbol{\\Phi}_{n}]^{1}_{n})^{T}\\right)\n &=\n \\det \\left([\\bQ_{n}^{(p)}]^{1}_{m}([\\bQ_{n}^{(p)}]^{1}_{m})^{T}\\right),\n\\end{align}\nwhere $[\\bQ_n^{(p)}]^1_m$ denotes the first $p$ columns of the first $m$ rows in $\\bQ_n$.\nWe will now show that these determinants do not go to zero as $n\\to\\infty$.\nDefine the matrix $\\overline{\\bQ}_n\\in\\Rl^{m\\times (p-m)}$ such that $[\\bQ_n^{(p)}]^1_m = [{}^1[\\bQ_n]_m \\,|\\,\\, \\overline{\\bQ}_n]$.\nThen, it holds that $\\forall\\bx\\in\\Rl^n$,\n\\begin{align}\\label{eq:lowerbound_Qp}\n \\norm{([\\bQ_n^{(p)}]^1_m)^T \\bx}^2 &= \\norm{({}^1[\\bQ_n]_m)^T\\bx}^2 + \\norm{(\\overline{\\bQ}_n)^T \\bx}^2 \\\\\n &\\geq \\norm{({}^1[\\bQ_n]_m)^T\\bx}^2 \\\\\n &\\geq \\left(\\lambda_{\\text{min}}({}^1[\\bQ_n]_m({}^1[\\bQ_n]_m)^T)\\right)^{2}.\n\\end{align}\nHence, the minimum singular value of $[\\bQ_n^{(p)}]^1_m$ is lower bounded by the smallest singular value of ${}^1[\\bQ_n]_m$, for all $n\\geq m$.\nBut it was shown in the proof of Theorem~\\ref{thm:eg_n_instate_w_disturb} (see page~\\pageref{proof:thm:eg_n_instate_w_disturb}) that\n$\\lim_{n\\to\\infty}\\lambda_{\\text{min}}({}^1[\\bQ_n]_m({}^1[\\bQ_n]_m)^T)>0$.\nUsing this result in~\\eqref{eq:lowerbound_Qp} and taking the limit, we arrive to \n\\begin{align}\n\\lim_{n\\to\\infty}\\det \\left([\\bQ_{n}^{(p)}]^{1}_{m}([\\bQ_{n}^{(p)}]^{1}_{m})^{T}\\right)\n >\n 0.\n\\end{align}\nThus\n\\begin{align\n h\\left( [\\bD_{n}]^{1}_{m}\\bR_{n}\\rveu^{1}_{n} +[\\bQ_{n}]^{1}_{m}\\bar{\\rvey}^{1}_{n} \\right) \n &\n =\n h\\left([\\bD_{n}]^{1}_{m}\\bR_{n}\\rvev^{1}_{n} + [\\bQ_{n}]^{1}_{m} [\\boldsymbol{\\Phi}]^{1}_{n}\\rves^{1}_{p}\\right)\n \\end{align}\n %\nis upper and lower bounded by a constant independent of $n$ because $\\rvav_{1}^{\\infty}$ is entropy balanced, $[\\bD_{n}]^{1}_{m}$ has decaying entries, and $h(\\rvas_1^{p})<\\infty$, which means that the entropy rate in the RHS of~\\eqref{eq:gap_with_two_terms} decays to zero.\nThe proof is finished by invoking Lemma~\\ref{lem:hashimoto_IIR}.\n\\end{proof}\n\nTheorem~\\ref{thm:eg-due-to-random-xo} allows us to formalize the effect that the presence or absence of a random initial state has on the entropy gain using arguments similar to those utilized in Section~\\ref{sec:entropy_gain_output_disturb}.\nIndeed, if the random initial state $\\rvex_0\\in\\Rl^p$ has finite differential entropy, then the entropy gain achieves~\\eqref{eq:Jensen}, since the alignment between $\\rvex_0$ and the first $m$ rows of $\\bQ_n$ is guaranteed.\nThis motivates us to characterize the behavior of the entropy gain (due only to a random initial state), when the initial state $\\rvex_0$ can be written as $[\\boldsymbol{\\Phi}]^1_p\\rves^1_\\tau$, with $\\tau\\leq p$, which means that $\\rvex_0$ has an undefined (or $-\\infty$) differential entropy.\n\n\\begin{coro}\\label{coro:eg_due_to_xo_ineq}\n Consider an FIR, $p$-order filter $F(z)$ having $m$ NMP-zeros, such that its random initial state can be written as $\\rvex_0 = \\boldsymbol{\\Phi} \\rves^1_\\tau$, \n where $|h(\\rvas_1^\\tau)|<\\infty$ and 
$\\boldsymbol{\\Phi}\\in\\Rl^{p\\times \\tau}$ has orthonormal columns.\n Then,\n \\begin{align}\n \\lim_{n\\to\\infty} \\frac{1}{n}( h(\\rvay_1^n) - h(\\rvau_1^n) ) \\leq \\Sumfromto{i=1}{\\bar{\\tau}} \\log\\abs{\\rho_{\\iota(i)}}, \\label{eq:eg_due_to_xo_ineq}\n \\end{align}\n where $\\bar{\\tau} \\eq \\min\\set{m,\\tau}$.\n The upper bound in~\\eqref{eq:eg_due_to_xo_ineq} is achieved when $[\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi}([\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi})^T$ is a non-singular matrix, with $\\bC_n$ defined by $\\bar{\\rvey}^1_n = \\bC_n\\rvex_0$ (as in Theorem~\\ref{thm:eg-due-to-random-xo}).\n\\end{coro}\n\n\\begin{proof}\nThe effect of the random initial state on the output sequence $\\rvay_1^\\infty$ can be written as $\\bar{\\rvey}^1_n = \\bC_n\\rvex_0$, where $\\bC_n = [\\bE_p^T \\,|\\, \\bzero^T]^T \\in \\Rl^{n\\times p}$.\nTherefore, if $\\bQ^T_n\\bD_n\\bR_n$ is an SVD for $\\bF_n$, it holds that \n\\begin{align}\nh([\\bD_n]^1_m\\bR_n\\rveu^1_n + [\\bQ_n]^1_m \\bC_n\\boldsymbol{\\Phi}\\rves^1_\\tau) \\label{eq:car}\n\\end{align}\nremains bounded as $n\\to\\infty$ if and only if $\\lim_{n\\to\\infty}\\det([\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi}([\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi})^T)>0$.\n\nDefine the rank of $[\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi}$ as $\\tau_n\\in\\set{1,\\ldots,\\bar{\\tau}}$.\nIf $\\det([\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi}([\\bQ_n]^1_m\\bC_n\\boldsymbol{\\Phi})^T)=0$, then the lower bound is reached by inserting~\\eqref{eq:car} in~\\eqref{eq:gap_with_two_terms}.\nOtherwise, there exists $L$ large enough such that $\\tau_n \\geq 1$, $\\forall n\\geq L$.\n\nWe then proceed as in the proof of Theorem~\\ref{thm:eg_n_instate_w_disturb_ineq}, by considering a unitary $(m\\times m)$-matrix $\\bH_n$, and a $(\\tau_n\\times m)$-matrix $\\bA_n$ such that\n\\begin{align}\n \\bH_{n}[\\bQ_{n}]^{1}_{m} \\bC_n\\boldsymbol{\\Phi}\n =\n \\begin{pmatrix}\n \\bA_{n}[\\bQ_{n}]^{1}_{m} \\bC_n\\boldsymbol{\\Phi}\n \\\\\n \\bzero\n \\end{pmatrix}, \\fspace n\\geq L.\n\\end{align}\n\nThis procedure allows us to conclude that $h([\\bD_n]^1_m\\bR_n\\rveu^1_n + [\\bQ_n]^1_m \\bC_n\\boldsymbol{\\Phi}\\rves^1_\\tau) \\leq \\sumfromto{i=\\tau_n + 1}{m}\\log d_{n,i}$, and that the lower limit in the latter sum equals $\\bar{\\tau}+1$ when $[\\bQ_n]^1_m \\bC_n\\boldsymbol{\\Phi}$ is a full row-rank matrix. 
\nReplacing the latter into~\\eqref{eq:gap_with_two_terms} finishes the proof.\n\\end{proof}\n\n\\begin{rem}\n If the random initial state $\\rvex_0 = \\boldsymbol{\\Phi}\\rves^1_\\tau$ is generated with $\\tau\\geq p-m$, then the entropy gain introduced by an FIR non-minimum phase filter $F$ is at least $\\log \\rho_1$.\n Otherwise, the entropy gain could be identically zero, as long as the columns of $\\bE_n\\boldsymbol{\\Phi}(\\bE_n\\boldsymbol{\\Phi})^T$ fill only the orthogonal space to the span of the row vectors in $[\\bQ_n^{(p)}]^1_m$, where $\\bE_n$, $\\boldsymbol{\\Phi}$ and $[\\bQ_n^{(p)}]^1_m$ are defined as in the proof of Theorem~\\ref{thm:eg-due-to-random-xo}.\n\\end{rem}\n\nBoth results, Theorem~\\ref{thm:eg-due-to-random-xo} and Corollary~\\ref{coro:eg_due_to_xo_ineq}, reveal that the entropy gain arises as long as the effect of the random initial state aligns with the first rows of $\\bQ_n$, just as in the results of the previous section.\n\n\n\\section{Effective Entropy Gain due to the Intrinsic Properties of the Filter}\\label{sec:effective_entropy}\nIf there are no disturbances and the initial state is zero, then the first $n$ output samples in response to an input $\\rvau_{1}^{n}$ are given by~\\eqref{eq:y_of_u_matrix}.\nTherefore, the entropy gain in this case, as defined in~\\eqref{eq:Gsp_def}, is zero, regardless of whether or not $G$ is NMP.\n\nDespite the above, there is an interesting question which, to the best of the authors' knowledge, has not been addressed before:\nSince in any LTI filter the entire output is longer than the input, what would happen if one compared the differential entropies of the complete output sequence to that of the (shorter) input sequence?\nAs we show next, a proper formulation of this question requires recasting the problem in terms of a new definition of differential entropy.\nAfter providing a geometrical interpretation of this problem, we prove that the (new) entropy gain in this case is exactly~\\eqref{eq:Jensen}.\n\n\\subsection{Geometrical Interpretation} \nConsider the random vectors \n$\\rveu\\eq [\\rvau_1\\ \\rvau_2]^{T}$ \nand \n$\\rvey\\eq [\\rvay_1\\ \\rvay_2 \\ \\rvay_3]^{T}$ \nrelated via\n\\begin{align}\\label{eq:little_example}\n \\begin{bmatrix}\n \\rvay_1\\\\\n \\rvay_2\\\\\n \\rvay_3\n \\end{bmatrix}\n=\n\\underbrace{\\begin{pmatrix}\n 1 & 0\\\\\n 2 & 1\\\\\n 0 & 2\n\\end{pmatrix}}\n_{\\eq\\breve{\\bG}_{2}}\n \\begin{bmatrix}\n \\rvau_1\\\\\n \\rvau_2\n \\end{bmatrix}.\n\\end{align}\nSuppose $\\rveu$ is uniformly distributed over $[0,1]\\times[0,1]$.\nApplying the conventional definition of differential entropy of a random sequence, we would have that\n\\begin{align}\n h(\\rvay_1,\\rvay_2,\\rvay_3)\n &\n =\n h(\\rvay_1,\\rvay_2) + h(\\rvay_3|\\rvay_1,\\rvay_2)\n =\n -\\infty,\n\\end{align}\nbecause $\\rvay_3$ is a deterministic function of $\\rvay_1$ and $\\rvay_2$:\n$$\n\\rvay_3 = [0\\;\\; 2][\\rvau_1 \\; \\rvau_2]^{T}=\n[0\\;\\; 2]\n\\begin{pmatrix}\n 1 & 0\\\\\n 2 & 1\\\\\n\\end{pmatrix}^{-1} \n \\begin{bmatrix}\n \\rvay_1\\\\\n \\rvay_2\n \\end{bmatrix}.\n$$\nIn other words, the problem lies in the fact that, although the output is a three-dimensional vector, it only has two degrees of freedom, i.e., it is restricted to a 2-dimensional subspace of $\\Rl^{3}$.\nThis is illustrated in Fig.~\\ref{fig:square_shear}, where the set $[0,1]\\times[0,1]$ is shown (coinciding with the \\texttt{u}-\\texttt{v} plane), together with its image through $\\breve{\\bG}_{2}$ (as defined in~\\eqref{eq:little_example}).\n\\begin{figure}[htbp]\n 
\\centering\n \\includegraphics[width = 8 cm]{square_shear.eps}\n \\caption{Support of $\\rveu$ (laying in the \\texttt{u}-\\texttt{v} plane) compared to that of $\\rvey=\\breve{\\bG}\\rveu$ (the rhombus in $\\Rl^3$).}\\label{fig:square_shear}\n\\end{figure}\n\nAs can be seen in this figure, the image of the square $[0,1]^{2}$ through $\\breve{\\bG}_{2}$ is a $2$-dimensional rhombus over which \n$\\set{\\rvay_1,\\rvay_2,\\rvay_3}$ distributes uniformly.\nSince the intuitive notion of differential entropy of an ensemble of random variables (such as how difficult it is to compress it in a lossy fashion) relates to the size of the region spanned by the associated random vector, one could argue that the differential entropy of $\\set{\\rvay_1,\\rvay_2,\\rvay_3}$, far from being $-\\infty$, should be somewhat larger than that of $\\set{\\rvau_1,\\rvau_2}$ (since the rhombus $\\breve{\\bG}_{2}[0,1]^{2}$ has a larger area than $[0,1]^{2}$). \nSo, what does it mean that (and why should) $h(\\rvay_1,\\rvay_2,\\rvay_3)=-\\infty$?\nSimply put, the differential entropy relates to the volume spanned by the support of the probability density function.\nFor $\\rvey$ in our example, the latter (three-dimensional) volume is clearly zero.\n\nFrom the above discussion, the comparison between the differential entropies of $\\rvey\\in\\Rl^{3}$ and $\\rveu\\in\\Rl^{2}$ of our previous example should take into account that $\\rvey$ actually lives in a two-dimensional subspace of $\\Rl^{3}$.\nIndeed, since the multiplication by a unitary matrix does not alter differential entropies, we could consider the differential entropy of \n\\begin{align}\n\\begin{bmatrix}\n\\tilde{\\rvey}\n\\\\\n0\n\\end{bmatrix}\n \\eq \n \\begin{pmatrix}\n \\breve{\\bQ} \n \\\\\n \\bar{\\bq}^{T}\n \\end{pmatrix}\n \\rvey, \n\\end{align}\nwhere $\\breve{\\bQ}^T$ is the $3\\times2$ matrix with orthonormal rows in the singular-value decomposition of $\\breve{\\bG}_{2}$\n\\begin{align}\n \\breve{\\bG}_{2} = \n \\breve{\\bQ}^T \\breve{\\bD}\\, \\breve{\\bR}.\n\\end{align}\nand $\\bar{\\bq}$ is a unit-norm vector orthogonal to the rows of $\\breve{\\bQ}$ (and thus orthogonal to $\\rvey$ as well).\nWe are now able to compute the differential entropy in $\\Rl^2$ for $\\tilde{\\rvey}$, corresponding to the rotated version of $\\rvey$ such that its support is now aligned with $\\Rl^2$.\n\nThe preceding discussion motivates the use of a modified version of the notion of differential entropy for a random vector $\\rvey\\in\\Rl^{n}$ which considers the number of dimensions actually spanned by $\\rvey$ instead of its length.\n\n\\begin{defn}[The Effective Differential Entropy]\\label{def:BMD_entropy}\nLet $\\rvey\\in\\Rl^\\ell$ be a random vector. 
\nIf $\\rvey$ can be written as a linear transformation $\\rvey = \\bS \\rveu$, for some $\\rveu\\in\\Rl^n$ ($ n\\leq \\ell$), $\\bS \\in \\Rl^{\\ell\\times n}$, then the effective differential entropy of $\\rvey$ is defined as\n\\begin{align}\n \\breve{h}(\\rvey) \\eq h(\\bA\\rvey),\n\\end{align}\nwhere $\\bS = \\bA^{T}\\bT\\bC$ is an SVD for $\\bS$, with $\\bT\\in\\Rl^{n\\times n}$.\n\\finenunciado\n\\end{defn}\nIt is worth mentioning that Shannon's differential entropy\nof a vector $\\rvey\\in\\Rl^{\\ell}$, whose support's $\\ell$-volume is greater than zero, arises from considering it as the difference between its (absolute) entropy and that of a random variable uniformly distributed over an $\\ell$-dimensional, unit-volume region of $\\Rl^{\\ell}$.\nMore precisely, if in this case the \\textit{probability density function} (PDF) of $\\rvey = [\\rvay_{1}\\;\\rvay_{2}\\; \\cdots \\; \\rvay_{\\ell}]^{T}$ is Riemann integrable, then~\\cite[Thm.~9.3.1]{covtho91},\n\\begin{align}\\label{eq:h_as_limit_covtho06}\n h(\\rvey) = \\lim_{\\Delta\\to 0} \\left[H(\\rvey^{\\Delta}) + \\ell\\log\\Delta\\right], \n\\end{align}\nwhere $\\rvey^{\\Delta}$ is the discrete-valued random vector resulting when $\\rvey$ is quantized using an $\\ell$-dimensional uniform quantizer with $\\ell$-cubic quantization cells with volume $\\Delta^{\\ell}$.\nHowever, if we consider a variable $\\rvey$ whose support belongs to an $n$-dimensional subspace of \n$\\Rl^\\ell$, $n < \\ell$ (i.e., $\\rvey=\\bS\\rveu=\\bA^{T}\\bT\\bC\\rveu$, as in Definition~\\ref{def:BMD_entropy}), \nthen the entropy of its quantized version in $\\Rl^\\ell$, say \n$H_\\ell(\\rvey^{\\Delta})$, is distinct from \n$H_n((\\bA\\rvey)^{\\Delta})$,\nthe entropy of \n$\\bA\\rvey$ in \n$\\Rl^n$.\nMoreover, it turns out that, in general, \n\\begin{align}\\label{eq:unequal}\n \\lim_{\\Delta\\to 0}\\left(H_\\ell(\\rvey^{\\Delta}) -H_n( (\\bA\\rvey)^{\\Delta}) \\right) \\neq 0,\n\\end{align}\ndespite the fact that $\\bA$ has orthonormal rows.\nThus, the definition given by~\\eqref{eq:h_as_limit_covtho06} does not yield consistent results for the case wherein a random vector has a support's dimension (i.e., its number of degrees of freedom) smaller that its length\\footnote{The mentioned inconsistency refers to~\\eqref{eq:unequal}, which reveals that the asymptotic behavior $H_\\ell(\\rvey^{\\Delta})$ changes if $\\rvey$ is rotated.} \n(If this were not the case, then we could redefine~\\eqref{eq:h_as_limit_covtho06} replacing $\\ell$ by $n$, in a spirit similar to the one behind Renyi's $d$-dimensional entropy~\\cite{renyi-59}.)\nTo see this, consider the case in which $\\rveu\\in\\Rl$ distributes uniformly over $[0,1]$ and \n$\\rvey = [1 \\quad 1]^{T}\\rveu\/\\hsqrt{2}$.\nClearly, $\\rvey$ distributes uniformly over the unit-length segment connecting the origin with the point $(1,1)\/\\hsqrt{2}$.\nThen \n\\begin{align}\n H_{2}(\\rvey^{\\Delta}) \n = \n -\n \\left\\lfloor \\tfrac{1}{\\Delta \\hsqrt{2}} \\right\\rfloor \n \\Delta\\hsqrt{2}\n \\log \\left( \\Delta\\hsqrt{2} \\right)\n - \n \\left(1- \\left\\lfloor \\tfrac{1}{\\Delta \\hsqrt{2}} \\right \\rfloor \\hsqrt{2}\\Delta \\right)\n \\log \\left(1- \\left\\lfloor \\tfrac{1}{\\Delta \\hsqrt{2}} \\right \\rfloor \\hsqrt{2}\\Delta \\right).\n\\end{align}\nOn the other hand, since in this case $\\bA\\rvey=\\rveu$, we have that\n\\begin{align}\n H_{1}((\\bA\\rvey)^{\\Delta})\n =\n H_{1}(\\rveu^{\\Delta})\n =\n -\n \\left\\lfloor \\tfrac{1}{\\Delta } \\right\\rfloor \n \\Delta \\log\\Delta \n -\n 
(1-\\left\\lfloor \\tfrac{1}{\\Delta } \\right\\rfloor \\Delta)\n \\log (1-\\left\\lfloor \\tfrac{1}{\\Delta } \\right\\rfloor \\Delta).\n\\end{align}\nThus \n\\begin{align}\n \\lim_{\\Delta\\to 0}\n &\n \\left(\n H_{1}((\\bA\\rvey)^{\\Delta})\n -\n H_{2}(\\rvey^{\\Delta}) \n \\right)\n =\n \\lim_{\\Delta \\to 0}\n \\left( \n \\left\\lfloor \\tfrac{1}{\\Delta \\hsqrt{2}} \\right\\rfloor \n \\Delta\\hsqrt{2}\n \\log \\left( \\Delta\\hsqrt{2} \\right)\n -\n \\left\\lfloor \\tfrac{1}{\\Delta } \\right\\rfloor \n \\Delta \\log\\Delta \n \\right)\n =\\log\\hsqrt{2}.\n\\end{align}\n\n\nThe latter example further illustrates why the notion of effective entropy is appropriate in the setup considered in this section, where\nthe effective dimension of the random sequences does not coincide with their length \n(it is easy to verify that the effective entropy of $\\rvey$ does not change if one rotates $\\rvey$ in $\\Rl^{\\ell}$).\nIndeed, we will need to consider only sequences which can be constructed by multiplying some random vector $\\rveu\\in\\Rl^{n}$, with bounded differential entropy, by a tall matrix $\\breve{\\bG}_n\\in\\Rl^{(n+\\eta)\\times n}$, with $\\eta>0$ (as in~\\eqref{eq:little_example}), which are precisely the conditions required by Definition~\\ref{def:BMD_entropy}. \n\n\\subsection{Effective Entropy Gain}\nWe can now state the main result of this section:\n\\begin{thm}\\label{thm:BMD_entropy_gain}\n Let the entropy-balanced random sequence $\\rvau_{1}^{\\infty}$ be the input of an LTI filter $G$, and let $\\rvay_{1}^{\\infty}$ be its output.\n Assume that $G(z)$ is the $z$-transform of the $(\\eta+1)$-length sequence $\\set{g_k}_{k=0}^{\\eta}$.\n Then \n \\begin{align}\n \\lim_{n\\to \\infty} \\frac{1}{n}\\left(\\breve{h}(\\rvay_{1}^{n+\\eta}) -\\breve{h}(\\rvau_{1}^{n}) \\right)\n =\\intpipi{\\log\\abs{G\\ejw} }.\n \\end{align}\n \\finenunciado\n\\end{thm}\n\nTheorem~\\ref{thm:BMD_entropy_gain} states that, when considering the full-length output of a filter, the effective entropy gain is introduced by the filter itself, without requiring the presence of external random disturbances or initial states.\nThis may seem a surprising result, in view of the findings made in the previous sections, where the entropy gain appeared only when such random exogenous signals were present. \nIn other words, when observing the full-length output and the input, the (maximum) entropy gain of a filter can be recast in terms of the ``volume'' expansion yielded by the filter as a linear operator, provided we measure effective differential entropies instead of Shannon's differential entropy.\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:BMD_entropy_gain}]\nThe total length $\\ell$ of the output will grow with the length $n$ of the input if $G$ is FIR, and will be infinite if $G$ is IIR.\nThus, we define the \\textit{output-length function}\n\\begin{align}\n \\ell(n) \\eq \\text{length of $\\rvey$ when input is $\\rveu^{1}_{n}$}\n =\n \\begin{cases}\n n+\\eta & , \\text{ if $G$ is FIR with i.r. 
length $\\eta+1$,}\\\\\n \\infty& , \\text{ if $G$ is IIR.}\n \\end{cases}\n\\end{align}\nIt is also convenient to define the sequence of matrices $\\set{\\breve{\\bG}_{n}}_{n=1}^{\\infty}$, where\n$\\breve{\\bG}_{n}\\in\\Rl^{\\ell(n)\\times n}$ is Toeplitz with \n$\\left[\\breve{\\bG}_{n}\\right]_{i,j}=0,\\forall i1$, $\\forall i=1,\\ldots, M$.\nFrom these definitions it is clear that $A(z)$ is unstable, $\\tilde{A}(z)$ is stable, and\n\\begin{align}\n |A\\ejw| = |\\tilde{A}\\ejw|, \\fspace \\forallwinpipi.\n\\end{align}\nNotice also that \n$\\lim_{\\abs{z}\\to\\infty}A(z)=1$\nand\n$\\lim_{\\abs{z}\\to\\infty}\\tilde{A}(z)=1\/\\prod_{i=1}^{M}|p_{i}|$, and thus \n\\begin{align}\n a_{0}=1, && \\tilde{a}_{0}= \\prod_{i=1}^{M}|p_{i}|^{-1}.\n\\end{align}\n\nConsider the non-stationary random sequences (source) \n$\\rvax_{1}^{\\infty}$ and\nthe asymptotically stationary source\n$\\tilde{\\rvax}_{1}^{\\infty}$\ngenerated by passing a stationary Gaussian process $\\rvaw_{1}^{\\infty}$ through $A(z)$ and $\\tilde{A}(z)$, respectively, which can be written as\n\\begin{align}\n \\rvex^{1}_{n} &= \\bA_{n}\\rvew^{n}_{1}, \\fspace n=1,\\ldots,\\label{eq:x_def}\\\\\n \\tilde{\\rvex}^{1}_{n} &= \\tilde{\\bA}_{n}\\rvew^{n}_{1}, \\fspace n=1,\\ldots.\\label{eq:tildex_def}\n\\end{align}\n(A block-diagram associated with the construction of $\\rvax$ is presented in Fig.~\\ref{fig:rdfns}.)\n\\begin{figure}[t]\n\\centering\n\\input{rdfns.pstex_t}\n\\caption{Block diagram representation of how the non-stationary source $\\rvax_{1}^{\\infty}$ is built and then reconstructed as $\\rvay=\\rvax+\\rvau$.}\n\\label{fig:rdfns}\n\\end{figure}\nDefine the rate-distortion functions for these two sources as \n\\begin{align}\nR_{\\rvax}(D) &\\eq \\lim_{n\\to\\infty} R_{\\rvax,n}(D),\n&\nR_{\\rvax,n}(D)&\\eq \\min \\frac{1}{n}I(\\rvax_{1}^{n};\\rvax_{1}^{n}+\\rvau_{1}^{n}),\n\\\\\nR_{\\tilde{\\rvax}}(D) &\\eq \\lim_{n\\to\\infty} R_{\\tilde{\\rvax},n}(D),\n&\nR_{\\tilde{\\rvax},n}(D)&\\eq \\min \\frac{1}{n}I(\\tilde{\\rvax}_{1}^{n};\\tilde{\\rvax}_{1}^{n}+\\tilde{\\rvau}_{1}^{n}),\n\\end{align}\nwhere, for each $n$, the minimums are taken over all the conditional probability density functions \n$f_{\\rvau_{1}^{n}|\\rvax_{1}^{n}}$ \nand\n$f_{\\tilde{\\rvau}_{1}^{n}|\\tilde{\\rvax}_{1}^{n}}$ \nyielding \n $\\Expe{\\norm{\\rveu^{1}_{n}}^{2}}\/n\\leq D$ and\n $\\Expe{\\norm{\\tilde{\\rveu}^{1}_{n}}^{2}}\/n\\leq D$,\nrespectively.\n\nThe above rate-distortion functions have been characterized in~\\cite{gray--70,hasari80,grahas08} for the case in which $\\rvaw_{1}^{\\infty}$ is an i.i.d. Gaussian process.\nIn particular, it is explicitly stated in~\\cite{hasari80,grahas08} that, for that case, \n\\begin{align}\n R_{\\rvax}(D)- \n R_{\\tilde{\\rvax}}(D)\n =\n \\intpipi{\\log|A^{-1}\\ejw|}\n =\n \\sumfromto{i=1}{M}\\log|p_{i}|.\\label{eq:gap}\n\\end{align}\nWe will next provide an alternative and simpler proof of this result, and extend its validity for general (not-necessarily stationary) Gaussian $\\rvaw_{1}^{\\infty}$, using the entropy gain properties of non-minimum phase filters established in Section~\\ref{sec:entropy_gain_output_disturb}.\nIndeed, the approach in~\\cite{gray--70,hasari80,grahas08} is based upon asymptotically-equivalent Toeplitz matrices in terms of the signals' covariance matrices.\nThis restricts $\\rvaw_{1}^{\\infty}$ to be Gaussian and i.i.d. 
and $A(z)$ to be an all-pole unstable transfer function, and then, the only non-stationary allowed is that arising from unstable poles.\nFor instance, a cyclo-stationarity innovation followed by an unstable filter $A(z)$ would yield a source which cannot be treated using Gray and Hashimoto's approach.\nBy contrast, the reasoning behind our proof lets $\\rvaw_1^\\infty$ be any Gaussian process, and then let the source be $A\\rvaw$, with $A(z)$ having unstable poles (and possibly zeros and stable poles as well).\n\nThe statement is as follows:\n\\begin{thm}\\label{thm:RDF_non_stat}\n Let $\\rvaw_{1}^{\\infty}$ be any Gaussian stationary process with bounded differential entropy rate, and let \n $\\rvax_{1}^{\\infty}$ and $\\tilde{\\rvax}_{1}^{\\infty}$ be as defined in~\\eqref{eq:x_def} and~\\eqref{eq:tildex_def}, respectively.\n Then~\\eqref{eq:gap} holds.\n \\finenunciado\n\\end{thm}\n\nThanks to the ideas developed in the previous sections, it is possible to give an intuitive outline of the proof of this theorem (given in the appendix, page~\\pageref{proof:RDF_non_stat}) by using a sequence of block diagrams.\nMore precisely, consider the diagrams shown in Fig.~\\ref{fig:rdfnsproof}.\n\\begin{figure}[t]\n\\centering\n\\input{rdfns2.pstex_t}\n\\caption{Block-diagram representation of the changes of variables in the proof of Theorem~\\ref{thm:RDF_non_stat}.}\n\\label{fig:rdfnsproof}\n\\end{figure}\nIn the top diagram in this figure, suppose that $\\rvay=C\\rvax+\\rvau$ realizes the RDF for the non-stationary \nsource $\\rvax$.\nThe sequence $\\rvau$ is independent of $\\rvex$, and the linear filter $C(z)$ is such that the error $(\\rvay-\\rvax)\\Perp \\rvay$ \n(a necessary condition for minimum MSE optimality).\nThe filter $B(z)$ is the Blaschke product of $A(z)$ (see~\\eqref{eq:B_def} in the appendix) (a stable, NMP filter with unit frequency response magnitude such that \n$\\tilde{\\rvax} = B \\rvax$).\n\nIf one now moves the filter $B(z)$ towards the source, then the middle diagram in Fig.~\\ref{fig:rdfnsproof} is obtained.\nBy doing this, the stationary source $\\tilde{\\rvax}$ appears with an additive error signal \n$\\tilde{\\rvau}$ that has the same asymptotic variance as $\\rvau$, reconstructed as \n$\\tilde{\\rvay}=C\\tilde{\\rvax} + \\tilde{\\rvau}$.\nFrom the invertibility of $B(z)$, it also follows that the mutual information rate between \n$\\tilde{\\rvax}$ and $\\tilde{\\rvay}$\nequals that between \n$\\rvax$ and $\\rvay$.\nThus, the channel \n$\\tilde{\\rvay}=C\\tilde{\\rvax}+\\tilde{\\rvau}$ \nhas the same rate and distortion as the channel\n$\\rvay =C\\rvax+\\rvau$. 
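\nOne way to make the mutual-information part of this step explicit (under the natural reading of the middle diagram; the matrix notation below is ours) is to write $\\bB_{n}$ for the lower-triangular Toeplitz matrix formed by the first $n$ samples of the impulse response of $B(z)$, which is invertible because its diagonal entries equal $\\prod_{i=1}^{M}|p_{i}|^{-1}\\neq 0$. Then $\\tilde{\\rvex}^{1}_{n}=\\bB_{n}\\rvex^{1}_{n}$ and $\\tilde{\\rvey}^{1}_{n}=\\bB_{n}\\rvey^{1}_{n}$, so that\n\\begin{align}\n
 I(\\tilde{\\rvex}^{1}_{n};\\tilde{\\rvey}^{1}_{n})=I(\\bB_{n}\\rvex^{1}_{n};\\bB_{n}\\rvey^{1}_{n})=I(\\rvex^{1}_{n};\\rvey^{1}_{n}),\n\\end{align}\n
since mutual information is invariant under invertible (deterministic) transformations of each of its arguments.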
\n\n\nHowever, if one now adds a short disturbance $\\rvad$ to the error signal $\\tilde{\\rvau}$ \n(as depicted in the bottom diagram of Fig.~\\ref{fig:rdfnsproof}),\nthen the resulting additive error term $\\bar{\\rvau}=\\tilde{\\rvau}+\\rvad$ will be independent of $\\tilde{\\rvax}$ and\nwill have the same asymptotic variance as $\\tilde{\\rvau}$.\nHowever, the differential entropy rate of $\\bar{\\rvau}$ will exceed that of $\\tilde{\\rvau}$ by the RHS of~\\eqref{eq:gap}.\nThis will make the mutual information rate between \n$\\tilde{\\rvax}$ and $\\bar{\\rvay}$ to be less than that between \n$\\tilde{\\rvax}$ and $\\tilde{\\rvay}$ by the same amount.\nHence, \n$R_{\\tilde{\\rvex}}(D)$ be at most \n$R_{\\rvex}(D) - \\sumfromto{i=1}{M}\\log\\abs{p_{i}}$.\nA similar reasoning can be followed to prove that \n$R_{\\rvex}(D)-R_{\\tilde{\\rvex}}(D)\\leq \\sumfromto{i=1}{M}\\log\\abs{p_{i}}$.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Networked Control}\nHere we revisit the setup shown in Fig.~\\ref{fig:fbksystem} and discussed in Section~\\ref{sec:Intro}.\nRecall from~\\eqref{eq:martins_I_bound} that, for this general class of networked control systems, it was shown in~\\cite[Lemma 3.2]{mardah05} that\n\\begin{align}\n \\lim_{n\\to\\infty}\\frac{1}{n}I(\\rvex_{0}; \\rvay_{1}^{n})\n \\geq \\Sumover{\\abs{p_{i}}>1}\\log\\abs{p_{i}},\n\\end{align}\nwhere $\\set{p_{i}}_{i=1}^{M}$ are the poles of $P(z)$ (the plant in Fig.~\\ref{fig:fbksystem}).\n\nBy using the results obtained in Section~\\ref{sec:entropy_gain_initial_stat} we show next that equality holds in~\\eqref{eq:martins_I_bound} provided the feedback channel \nsatisfies the following assumption:\n\n\\begin{assu}\\label{assu:fbck_channel}\nThe feedback channel in Fig.~\\ref{fig:fbksystem} can be written as\n\\begin{align}\n \\rvaw= AB \\rvav + BF(\\rvac), \\label{eq:channel}\n\\end{align}\nwhere\n\\begin{enumerate}\n \\item $A$ and $B$ are stable rational transfer functions such that $AB$ is biproper, $ABP$ has the same unstable poles as $P$, and the feedback $AB$ stabilizes the plant $P$.\n \\item $F$ is any (possibly non-linear) operator such that $\\tilde{\\rvac}\\eq F(\\rvac)$ satisfies \n $\\frac{1}{n}h(\\tilde{\\rvac}_{1}^{n})1}\\log\\abs{p_{i}}.\n \\end{align}\n\\finenunciado\n \\end{thm}\n %\n %\n\\begin{proof\n Let \n $P(z)=N(z)\/\\Lambda (z)$ and \n $T(z)\\eq A(z)B(z)=\\Gamma(z)\/\\Theta(z)$.\n Then, from Lemma~\\ref{lem:initial_states} (in the appendix), \nthe output $\\rvey^1_n$ can be written as\n \\begin{align}\n \\rvay = \\underbrace{\\Lambda}_{\\text{init. state $\\rvex_{0}$}} \\cdot \n\t \\underbrace{\\frac{\\Theta }{\\Theta \\Lambda + \\Gamma N}}_{\\eq \\tilde{G}\\text{, init. 
state $\\set{\\rvex_{0},\\rves_{0}}$}} \\tilde{\\rvau},\n\t \\label{eq:y_as_Gu}\n \\end{align}\nwhere $\\rves_{0}$ is the initial state of $T(z)$ and \n\\begin{align}\n \\tilde{\\rvau} \\eq u + B \\tilde{\\rvac}.\n\\end{align}\n(see Fig.~\\ref{fig:fbkplant} Bottom).\nThen \n\\begin{align}\n I(\\rvex_{0};\\rvey^{1}_{n}) \n &\n =\n h(\\rvey^{1}_{n}) - h(\\rvey^{1}_{n}|\\rvex_{0})\n \\\\&\n =\n h(\\rvey^{1}_{n}) - h (\\boldsymbol{\\Lambda}_{n} [\\tilde{\\bG}_{n}\\tilde{\\rveu}^{1}_{n} + \\tilde{\\bC}_{n}\\rves_{0} ] )\n \\\\&\n =\n h(\\boldsymbol{\\Lambda}_{n} [\\tilde{\\bG}_{n}\\tilde{\\rveu}^{1}_{n} + \\tilde{\\bC}_{n}\\rves_{0} +\\bar{\\bC}_{n}\\rvex_{0}] + \\bC_{n}\\rvex_{0}) - h(\\boldsymbol{\\Lambda}_{n} [\\tilde{\\bG}_{n}\\tilde{\\rveu}^{1}_{n} + \\tilde{\\bC}_{n}\\rves_{0} ] )\n \\\\&\n =\n h(\\boldsymbol{\\Lambda}_{n} [\\tilde{\\bG}_{n}\\tilde{\\rveu}^{1}_{n} + \\tilde{\\bC}_{n}\\rves_{0} +\\bar{\\bC}_{n}\\rvex_{0}] + \\bC_{n}\\rvex_{0}) - \n h(\\tilde{\\bG}_{n}\\tilde{\\rveu}^{1}_{n} + \\tilde{\\bC}_{n}\\rves_{0} ),\n \\label{eq:I_as_h}\n\\end{align}\n %\n where $\\tilde{\\bC}_{0}$ maps the initial state $\\rves_{0}$ to $\\rvey^{1}_{n}$,\n $\\bar{\\bC}_{n}$ maps the initial state $\\rvex_{0}$ to the output of $\\tilde{G}(z)$,\n and $\\bC_{n}$ maps the initial state $\\rvex_{0}$ (of $\\Lambda(z)$) to $\\rvey^{1}_{n}$.\nSince $\\rvau_{1}^{\\infty}$ is entropy balanced and $\\tilde{\\rvac}_{1}^{\\infty}$ has finite entropy rate, it follows from Lemma~\\ref{lem:sum_yields_entropy_balanced} that $\\tilde{\\rvau}_{1}^{\\infty}$ is entropy balanced as well.\nThus, we can proceed as in the proof of Theorem~\\ref{thm:eg-due-to-random-xo} to conclude that\n %\n \\begin{align}\n\\lim_{n\\to\\infty}\\frac{1}{n}I(\\rvex_{0}; \\rvay_{1}^{n})\n=\n\\sumover{\\abs{p_{i}}>1}\\log\\abs{p_{i}}.\n\\label{eq:I_as_sum_log}\n \\end{align}\n %\n This completes the proof.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\\subsection{The Feedback Channel Capacity of (non-white) Gaussian Channels}\nConsider a non-white additive Gaussian channel of the form\n\\begin{align}\n \\rvay_{k}= \\rvax_{k} + \\rvaz_{k},\n\\end{align}\nwhere the input $\\rvax$ is subject to the power constraint\n\\begin{align}\n \\lim_{n\\to\\infty}\\frac{1}{n}\\expe{\\norm{\\rvex^{1}_{n}}^{2}}\\leq P,\n\\end{align}\nand\n$\\rvaz_{1}^{\\infty}$ is a stationary Gaussian process. 
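\n\nBefore turning to the feedback capacity of this channel, it may be worth noting that the quantity $\\sumover{\\abs{p_{i}}>1}\\log\\abs{p_{i}}$ appearing in the networked-control bound above (and the analogous sum over the zeros of $1+B(z)$ appearing below) is straightforward to evaluate numerically once the relevant polynomial is known. The following minimal sketch is purely illustrative: the helper name and the example coefficients are ours and are not taken from any of the cited works, and the logarithm base only fixes the information unit.\n\\begin{verbatim}\n
import numpy as np\n\n
def log_sum_outside_unit_circle(coeffs, base=2.0):\n
    # Sum of log|r| over the roots r of the polynomial (coefficients listed\n
    # from highest to lowest degree) that lie strictly outside the unit circle.\n
    roots = np.roots(coeffs)\n
    return sum(np.log(abs(r)) / np.log(base) for r in roots if abs(r) > 1)\n\n
# Illustrative denominator Lambda(z) = z^2 - 2.5 z + 1, with roots 2 and 0.5;\n
# only the root at 2 contributes, giving log2(2) = 1 bit per sample.\n
print(log_sum_outside_unit_circle([1.0, -2.5, 1.0]))  # approximately 1.0\n\\end{verbatim}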
\n\nThe feedback information capacity of this channel is realized by a Gaussian input $\\rvax$, and is given by \n\\begin{align}\n C_{\\text{FB}} = \\lim_{n\\to\\infty} \n \\max_{\\bK_{\\rvex^{1}_{n}}: \\frac{1}{n}\\tr{\\bK_{\\rvex^{1}_{n}}}\\leq P}\n I(\\rvex^{1}_{n};\\rvey^{1}_{n}),\n\\end{align}\nwhere $\\bK_{\\rvex^{1}_{n}}$ is the covariance matrix of $\\rvex^{1}_{n}$ and, for every $k\\in\\Nl$, the input $\\rvex_{k}$ is allowed to depend upon the channel outputs $\\rvay_{1}^{k-1}$ (since there exists a causal, noise-less feedback channel with one-step delay).\n\nIn~\\cite{kim-yh10}, it was shown that \nif $\\rvaz$ is an auto-regressive moving-average process of $M$-th order, then\n$C_{\\text{FB}}$ can be achieved by the scheme shown in Fig.~\\ref{fig:Kim_FBK_Cap_system}.\nIn this system, $B$ is a strictly causal and stable finite-order filter and $\\rvav_{1}^{\\infty}$ is Gaussian with $\\rvav_{k}=0$ for all $k>M$ and such that\n$\\rvev^{1}_{n}$ is Gaussian with a positive-definite covariance matrix $\\bK_{\\rvev^{1}_{M}}$.\n\\begin{figure}[t]\n\\centering\n \\input{v2Kim_FBK_Cap_system.pstex_t}\n \\caption{Block diagram representation a non-white Gaussian channel $\\rvay=\\rvax+\\rvaz$ and the coding scheme considered in~\\cite{kim-yh10}.}\n \\label{fig:Kim_FBK_Cap_system}\n\\end{figure}\n\nHere we use the ideas developed in Section~\\ref{sec:entropy_gain_output_disturb} to show that \\textbf{the information rate achieved by the capacity-achieving scheme proposed in~\\cite{kim-yh10} drops to zero if there exists any additive disturbance of length at least $M$ and finite differential entropy affecting the output, no matter how small}.\n\nTo see this, notice that, in this case, and for all $n>M$, \n\\begin{align}\n I(\\rvax_{1}^{n};\\rvay_{1}^{n}) \n &= I(\\rvav_{1}^{M};\\rvay_{1}^{n})\n =\n h(\\rvey^{1}_{n}) \n - \n h(\\rvey^{1}_{n}|\\rvev^{1}_{n}) \n \\\\&\n =\n h(\\rvey^{1}_{n}) \n - \n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n} + \\rvev^{1}_{n}|\\rvev^{1}_{M}) \n \\\\&\n =\n h(\\rvey^{1}_{n}) \n - \n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n} |\\rvev^{1}_{M})\n \\\\&\n =\n h(\\rvey^{1}_{n}) \n - \n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n}) \n =\n h(\\rvey^{1}_{n}) \n - \n h(\\rvez^{1}_{n})\n \\\\&\n =\n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n} + \\rvev^{1}_{n})\n -\n h(\\rvez^{1}_{n}),\n\\end{align}\nsince $\\det(\\bI_{n}+\\bB_{n})=1$.\nFrom Theorem~\\ref{thm:eg_n_instate_w_disturb}, this gap between differential entropies is precisely the entropy gain introduced by $\\bI_{n}+\\bB_{n}$ to an input $\\rvez^{1}_{n}$ when the output is affected by the disturbance $\\rvev^{1}_{M}$.\nThus, from Theorem~\\ref{thm:eg_n_instate_w_disturb}, the capacity of this scheme will correspond to\n$\\intpipi{\\log\\abs{1+B\\ejw}}=\\sumover{\\abs{\\rho_{i}}>1}\\log\\abs{\\rho_{i}}$, where $\\set{\\rho_{i}}_{i=1}^{M}$ are the zeros of $1+B(z)$, which is precisely the result stated in~\\cite[Theorem 4.1]{kim-yh10}.\n\nHowever, if the output is now affected by an additive disturbance $\\rvad_{1}^{\\infty}$ not passing through $B(z)$ such that $\\rvad_{k}=0$, $\\forall k>M$ and $|h(\\rved^{1}_{M})|<\\infty$, with $\\rvad_{1}^{\\infty}\\Perp (\\rvav_{1}^{M},\\rvaz_{1}^{\\infty})$, then we will have \n\\begin{align}\n \\rvey^{1}_{n} = \\rvev^{1}_{n} + (\\bI_{n} +\\bB_{n})\\rvez^{1}_{n} + \\rved^{1}_{n}.\n\\end{align}\nIn this case, \n\\begin{align}\n I(\\rvax_{1}^{n};\\rvay_{1}^{n}) \n &= I(\\rvav_{1}^{M};\\rvay_{1}^{n})\n =\n h(\\rvey^{1}_{n}) \n - \n h(\\rvey^{1}_{n}|\\rvev^{1}_{n}) \n \\\\&\n =\n h(\\rvey^{1}_{n}) \n - \n 
h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n} + \\rvev^{1}_{n} + \\rved^{1}_{n}|\\rvev^{1}_{M}) \n \\\\&\n =\n h(\\rvey^{1}_{n}) \n - \n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n} + \\rved^{1}_{n}|\\rvev^{1}_{M})\n \\\\&\n =\n h(\\rvey^{1}_{n}) \n - \n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n}+ \\rved^{1}_{n}) .\n\\end{align}\nBut $\\lim_{n\\to\\infty}\\frac{1}{n}( h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n} + \\rvev^{1}_{n}+ \\rved^{1}_{n})\n -\n h(( \\bI_{n}+\\bB_{n})\\rvez^{1}_{n}+ \\rved^{1}_{n}))=0,$\n which follows directly from applying Theorem~\\ref{thm:eg_n_instate_w_disturb} to each of the differential entropies.\nNotice that this result holds irrespective of how small the power of the disturbance may be.\n\nThus, the capacity-achieving scheme proposed in~\\cite{kim-yh10} (and further studied in~\\cite{ardfra12}), although of groundbreaking theoretical importance, would yield zero rate in any practical situation, since every real signal is unavoidably affected by some amount of noise. \n\n\n\n\n\n\\section{Conclusions}\\label{sec:conclusions}\nThis paper has provided a geometrical insight and rigorous results for characterizing the increase in differential entropy rate (referred to as entropy gain) introduced by passing an input random sequence through a discrete-time linear time-invariant (LTI) filter $G(z)$ such that the first sample of its impulse response has unit magnitude.\nOur time-domain analysis allowed us to explain and establish under what conditions the entropy gain coincides with what was predicted by Shannon, who followed a frequency-domain approach to a related problem in his seminal 1948 paper.\nIn particular, we demonstrated that the entropy gain arises only if $G(z)$ has zeros outside the unit circle (i.e., it is non-minimum phase, (NMP)).\nThis is not sufficient, nonetheless, since letting the input and output be $\\rvau$ and $\\rvay=G\\rvau$, the difference $h(\\rvay_{1}^{n})-h(\\rvau_{1}^{n})$ is zero for all $n$, yielding no entropy gain.\nHowever, if \nthe distribution of the input process $\\rvau$ satisfies a certain regularity condition (defined as being ``entropy balanced'') and the output \nhas the form $\\rvay=G\\rvau + \\rvaz$, with $\\rvaz$ being an output disturbance with bounded differential entropy, we have shown that the entropy gain can range from zero to the sum of the logarithm of the magnitudes of the NMP zeros of $G(z)$, depending on how $\\rvaz$ is distributed.\nA similar result is obtained if, instead of an output disturbance, we let $G(z)$ have a random initial state.\nWe also considered the difference between the differential entropy rate of the \\textit{entire} (and longer) output of $G(z)$ and that of its input, i.e., $h(\\rvay_{1}^{n+\\eta}) -h(\\rvau_{1}^{n})$, where $\\eta+1$ is the length of the impulse response of $G(z)$.\nFor this purpose, we introduced the notion of ``effective differential entropy'', which can be applied to a random sequence whose support has dimensionality smaller than its dimension.\nInterestingly, the effective differential entropy gain in this case, which is intrinsic to $G(z)$, is also the sum of the logarithm of the magnitudes of the NMP zeros of $G(z)$, without the need to add disturbances or a random initial state.\nWe have illustrated some of the implications of these ideas in three problems.\nSpecifically, we used the fundamental results here obtained to provide a simpler and more general proof to characterize the rate-distortion function for Gaussian non-stationary sources and MSE distortion.\nThen, we applied our results to 
provide sufficient conditions for equality in an information inequality of significant importance in networked control problems.\nFinally, \nwe showed that the information rate of the capacity-achieving scheme proposed in~\\cite{kim-yh10} for the autoregressive Gaussian channel with feedback drops to zero in the presence of any additive disturbance in the channel input or output of sufficient (finite) length, no matter how small it may be.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\@startsection{section}{1\n \\z@{1.1\\linespacing\\@plus\\linespacing}{.8\\linespacing\n {\\normalfont\\Large\\scshape\\centering}}\n\\makeatother\n\n\n\\theoremstyle{plain}\n\\newtheorem*{hypab}{Hypothesis Ab}\n\\newtheorem*{hyp3}{Hypothesis (I)}\n\\newtheorem*{hyp4}{Hypothesis D5}\n\\newtheorem*{target}{Target Theorem}\n\\newtheorem*{thmA}{Theorem A}\n\\newtheorem*{thmB}{Theorem B}\n\\newtheorem*{thmC}{Theorem C}\n\\newtheorem*{T1}{Theorem 1}\n\\newtheorem*{T2}{Theorem 2}\n\\newtheorem*{T3}{Theorem 3}\n\\newtheorem*{MT}{Main Theorem}\n\\newtheorem*{MH}{Main Hypothesis}\n\\newtheorem*{nonexistence}{Nonexistence Theorem}\n\\newtheorem*{conj*}{Root Groups Conjecture}\n\\newtheorem*{thm1.2}{(1.2) Theorem}\n\\newtheorem*{thm1.3}{(1.3) Theorem}\n\\newtheorem*{thm1.4}{(1.4) Theorem}\n\\newtheorem*{prop*}{Proposition}\n\\newtheorem{Thm}{Theorem}\n\n\n \n\\newtheorem{prop}{Proposition}[section]\n\\newtheorem{thm}[prop]{Theorem}\n\\newtheorem{conj}[prop]{Conjecture}\n\\newtheorem{cor}[prop]{Corollary}\n\\newtheorem{lemma}[prop]{Lemma}\n\\newtheorem{hyp1}[prop]{Hypothesis}\n\n\\theoremstyle{definition}\n\\newtheorem{Def}[prop]{Definition}\n\\newtheorem{hypothesis}[prop]{Hypothesis}\n\\newtheorem*{Def*}{Definition}\n\\newtheorem{Defs}[prop]{Definitions}\n\\newtheorem{Defsnot}[prop]{Definitions and notation}\n\\newtheorem{example}[prop]{Example}\n\\newtheorem{notrem}[prop]{Notation and a remark}\n\\newtheorem{notdef}[prop]{Notation and 
definitions}\n\\newtheorem{notation}[prop]{Notation}\n\\newtheorem*{notation*}{Notation}\n\\newtheorem{remark}[prop]{Remark}\n\\newtheorem{remarks}[prop]{Remarks}\n\\newtheorem*{rem}{Remark}\n\n\\DeclareMathOperator{\\tld}{\\sim\\!}\n\\DeclareMathOperator{\\inv}{inv}\n\n\\newcommand{\\mouf}{\\mathbb{M}}\n\n\\newcommand{\\cala}{\\mathcal{A}}\n\\newcommand{\\calb}{\\mathcal{B}}\n\\newcommand{\\calc}{\\mathcal{C}}\n\\newcommand{\\cald}{\\mathcal{D}}\n\\newcommand{\\cale}{\\mathcal{E}}\n\\newcommand{\\calf}{\\mathcal{F}}\n\\newcommand{\\calg}{\\mathcal{G}}\n\\newcommand{\\calh}{\\mathcal{H}}\n\\newcommand{\\calj}{\\mathcal{J}}\n\\newcommand{\\calk}{\\mathcal{K}}\n\\newcommand{\\call}{\\mathcal{L}}\n\\newcommand{\\calm}{\\mathcal{M}}\n\\newcommand{\\caln}{\\mathcal{N}}\n\\newcommand{\\calo}{\\mathcal{O}}\n\\newcommand{\\calot}{\\tilde{\\mathcal{O}}}\n\\newcommand{\\calp}{\\mathcal{P}}\n\\newcommand{\\calq}{\\mathcal{Q}}\n\\newcommand{\\calr}{\\mathcal{R}}\n\\newcommand{\\cals}{\\mathcal{S}}\n\\newcommand{\\calt}{\\mathcal{T}}\n\\newcommand{\\calu}{\\mathcal{U}}\n\\newcommand{\\calv}{\\mathcal{V}}\n\\newcommand{\\calw}{\\mathcal{W}}\n\\newcommand{\\calx}{\\mathcal{X}}\n\\newcommand{\\calz}{\\mathcal{Z}}\n\\newcommand{\\solv}{\\mathcal{SLV}}\n\n\\newcommand{\\cc}{\\mathbb{C}}\n\\newcommand{\\ff}{\\mathbb{F}}\n\\newcommand{\\bbg}{\\mathbb{G}}\n\\newcommand{\\LL}{\\mathbb{L}}\n\\newcommand{\\kk}{\\mathbb{K}}\n\\newcommand{\\mm}{\\mathbb{M}}\n\\newcommand{\\bmm}{\\bar{\\mathbb{M}}}\n\\newcommand{\\nn}{\\mathbb{N}}\n\\newcommand{\\oo}{\\mathbb{O}}\n\\newcommand{\\pp}{\\mathbb{P}}\n\\newcommand{\\qq}{\\mathbb{Q}}\n\\newcommand{\\rr}{\\mathbb{R}}\n\\newcommand{\\vv}{\\mathbb{V}}\n\\newcommand{\\zz}{\\mathbb{Z}}\n\n\\newcommand{\\fraka}{\\mathfrak{a}}\n\\newcommand{\\frakm}{\\mathfrak{m}}\n\\newcommand{\\frakB}{\\mathfrak{B}}\n\\newcommand{\\frakC}{\\mathfrak{C}}\n\\newcommand{\\frakh}{\\mathfrak{h}}\n\\newcommand{\\frakI}{\\mathfrak{I}}\n\\newcommand{\\frakM}{\\mathfrak{M}}\n\\newcommand{\\N}{\\mathfrak{N}}\n\\newcommand{\\frakO}{\\mathfrak{O}}\n\\newcommand{\\frakP}{\\mathfrak{P}}\n\\newcommand{\\T}{\\mathfrak{T}}\n\\newcommand{\\frakU}{\\mathfrak{U}}\n\\newcommand{\\frakV}{\\mathfrak{V}}\n\n\\newcommand{\\ga}{\\alpha}\n\\newcommand{\\gb}{\\beta}\n\\newcommand{\\gc}{\\gamma}\n\\newcommand{\\gC}{\\Gamma}\n\\newcommand{\\gCt}{\\tilde{\\Gamma}}\n\\newcommand{\\gd}{\\delta}\n\\newcommand{\\gD}{\\Delta}\n\\newcommand{\\gre}{\\epsilon}\n\\newcommand{\\gl}{\\lambda}\n\\newcommand{\\gL}{\\Lambda}\n\\newcommand{\\gLt}{\\tilde{\\Lambda}}\n\\newcommand{\\gm}{\\mu}\n\\newcommand{\\gn}{\\nu}\n\\newcommand{\\gro}{\\omega}\n\\newcommand{\\gO}{\\Omega}\n\\newcommand{\\gvp}{\\varphi}\n\\newcommand{\\gr}{\\rho}\n\\newcommand{\\gs}{\\sigma}\n\\newcommand{\\gS}{\\Sigma}\n\\newcommand{\\gt}{\\tau}\n\\newcommand{\\gth}{\\theta}\n\\newcommand{\\gTH}{\\Theta}\n\n\\newcommand{\\rmA}{\\mathrm{A}}\n\\newcommand{\\rmD}{\\mathrm{D}}\n\\newcommand{\\rmI}{\\mathrm{I}}\n\\newcommand{\\rmi}{\\mathrm{i}}\n\\newcommand{\\rmL}{\\mathrm{L}}\n\\newcommand{\\rmO}{\\mathrm{O}}\n\\newcommand{\\rmS}{\\mathrm{S}}\n\\newcommand{\\rmU}{\\mathrm{U}}\n\\newcommand{\\rmV}{\\mathrm{V}}\n\n\\newcommand{\\nsg}{\\trianglelefteq}\n\\newcommand{\\rnsg}{\\trianglerighteq}\n\\newcommand{\\At}{A^{\\times}}\n\\newcommand{\\Dt}{D^{\\times}}\n\\newcommand{\\Ft}{F^{\\times}}\n\\newcommand{\\Kt}{K^{\\times}}\n\\newcommand{\\kt}{k^{\\times}}\n\\newcommand{\\ktv}{k_v^{\\times}}\n\\newcommand{\\Pt}{P^{\\times}}\n\\newcommand{\\Qt}{Q^{\\times}}\n\\newcommand{\\calqt}
{\\calq^{\\times}}\n\n\\newcommand{\\Rt}{R^{\\times}}\n\\newcommand{\\Lt}{L^{\\times}}\n\n\\newcommand{\\charc}{{\\rm char}}\n\n\\newcommand{\\df}{\\stackrel{\\text{def}} {=}}\n\\newcommand{\\lr}{\\longrightarrow}\n\\newcommand{\\ra}{\\rightarrow}\n\\newcommand{\\hr}{\\hookrightarrow}\n\\newcommand{\\lra}{\\longrightarrow}\n\\newcommand{\\sminus}{\\smallsetminus}\n\\newcommand{\\lan}{\\langle}\n\\newcommand{\\ran}{\\rangle}\n\n\\newcommand{\\Ab}{{\\rm Ab}}\n\\newcommand{\\Aut}{{\\rm Aut}}\n\\newcommand{\\End}{{\\rm End}}\n\\newcommand{\\Ker}{{\\rm Ker\\,}}\n\\newcommand{\\im}{{\\rm Im\\,}}\n\\newcommand{\\Hom}{{\\rm Hom}}\n\\newcommand{\\FRAC}{\\rm FRAC}\n\\newcommand{\\Nrd}{\\rm Nrd}\n\\newcommand{\\PSL}{\\rm PSL}\n\\newcommand{\\SL}{{\\rm SL}}\n\\newcommand{\\GL}{{\\rm GL}}\n\\newcommand{\\GF}{{\\rm GF}}\n\\newcommand{\\R}{{\\rm R}}\n\\newcommand{\\Sz}{\\rm Sz}\n\\newcommand{\\St}{\\rm St}\n\\newcommand{\\Sym}{{\\rm Sym}}\n\\newcommand{\\tr}{{\\rm tr}}\n\\newcommand{\\tor}{{\\rm tor}}\n\\newcommand{\\PGL}{{\\rm PGL}}\n\n\\newcommand{\\ch}{\\check}\n\\newcommand{\\s}{\\star}\n\\newcommand{\\bu}{\\bullet}\n\\newcommand{\\dS}{\\dot{S}}\n\\newcommand{\\onto}{\\twoheadrightarrow}\n\\newcommand{\\HH}{\\widebar{H}}\n\\newcommand{\\NN}{\\widebar{N}}\n\\newcommand{\\GG}{\\widebar{G}}\n\\newcommand{\\gCgC}{\\widebar{\\gC}}\n\\newcommand{\\gLgL}{\\widebar{\\gL}}\n\\newcommand{\\hG}{\\widehat{G}}\n\\newcommand{\\hN}{\\widehat{N}}\n\\newcommand{\\hgC}{\\widehat{\\gC}}\n\\newcommand{\\linv}{\\varprojlim}\n\\newcommand{\\In}{{\\rm In}_K}\n\\newcommand{\\Inc}{{\\rm Inc}_K}\n\\newcommand{\\Inv}{{\\rm Inv}}\n\\newcommand{\\tit}{\\textit}\n\\newcommand{\\tbf}{\\textbf}\n\\newcommand{\\tsc}{\\textsc}\n\\newcommand{\\hal}{\\frac{1}{2}}\n\\newcommand{\\half}{\\textstyle{\\frac{1}{2}}}\n\\newcommand{\\restr}{\\upharpoonright}\n\\newcommand{\\rmk}[1]{\\noindent\\tbf{#1}}\n\\newcommand{\\widebar}[1]{\\overset{\\mskip1mu\\hrulefill\\mskip1mu}{#1}\n \\vphantom{#1}}\n\\newcommand{\\widedots}[1]{\\overset{\\mskip1mu\\dotfill\\mskip1mu}{#1}\n \\vphantom{#1}}\n\\newcommand{\\llr}{\\Longleftrightarrow}\n\n\\newcommand{\\tem}{{\\bf S}}\n\\newcommand{\\tnem}{{\\bf NS}}\n\n\\numberwithin{equation}{section}\n\n\\hyphenation{Tim-mes-feld}\n\n\\begin{document}\n\\title[A non-split sharply $2$-transitive group]{A sharply $2$-transitive group without a non-trivial abelian normal subgroup}\n\\author[Eliyahu Rips, Yoav Segev, Katrin Tent]{Eliyahu Rips$^1$\\qquad Yoav Segev\\qquad Katrin Tent}\n\n\\address{Eliyahu Rips\\\\\n Einstein Institute of Mathematics\\\\\n Hebrew University \\\\\n Jerusalem 91904\\\\\n Israel}\n\\email{eliyahu.rips@mail.huji.ac.il}\n\\thanks{$^1$This research was partially supported by the Israel Science Foundation}\n\n\\address{Yoav Segev \\\\\n Department of Mathematics \\\\\n Ben-Gurion University \\\\\n Beer-Sheva 84105 \\\\\n Israel}\n\\email{yoavs@math.bgu.ac.il}\n\n\\address{Katrin Tent \\\\\n Mathematisches Institut \\\\\n Universit\\\"at M\\\"unster \\\\\n\t Einsteinstrasse 62\\\\\n 48149 M\\\"unster \\\\\n Germany}\n\\email{tent@wwu.de}\n\n\n\\keywords{sharply $2$-transitive, free product, HNN extension, malnormal}\n\\subjclass[2010]{Primary: 20B22}\n\n\\begin{abstract} \nWe show that any group $G$ is contained in some sharply 2-transitive \ngroup $\\calg$ without a non-trivial\nabelian normal subgroup.\nThis answers a long-standing open question. 
The involutions\nin the groups $\\calg$ that we construct have no fixed points.\n\\end{abstract}\n\n\\date{\\today}\n\\maketitle\n \n\\section{Introduction}\n\n \nThe \\emph{finite} sharply $2$-transitive groups were classified by Zassenhaus\nin 1936 \\cite{Z} and it is known that any finite\nsharply $2$-transitive group contains a non-trivial abelian normal subgroup.\n \nIn the infinite situation no classification is known (see \\cite[Problem 11.52, p.~52]{MK}).\nIt was a long standing open problem\nwhether every infinite sharply $2$-transitive group contains a non-trivial abelian normal subgroup.\nIn \\cite{Ti} \nTits proved that this holds for locally compact connected sharply $2$-transitive groups.\nSeveral other papers showed that under certain special conditions the assertion holds\n(\\cite{BN, GMS, GlGu, M, T2, Tu, W}). \nThe reader may wish to consult Appendix \\ref{app A} for more detail,\nand for a description of our main results using permutation group theoretic language.\n\nAn equivalent formulation to the above problem is \nwhether every near-domain is a near-field (see \\cite{Hall, K, SSS} and \nAppendix \\ref{app A} below). \n\nWe here show that this is not the case. We construct a sharply $2$-transitive infinite group\nwithout a non-trivial abelian normal subgroup.\nIn fact, the construction is similar in flavor to the free completion of partial generalized polgyons \\cite{T1}.\n\nWe are grateful to Joshua Wiscons for pointing out an instructive counterexample\nto a first version of this paper, and for greatly simplifying parts of the proof \nin a later version. We are also grateful to Avinoam Mann for greatly simplifying \nthe proof of Proposition \\ref{prop A1 fp} and for drawing our attention to a point in the\nproof that needed correction. We thank all the referees \nof this paper for carefully reading the manuscript\nand making very useful remarks that helped to improve the exposition.\n\nRecall that a proper subgroup $A$ of a group\n$G$ is {\\it malnormal} in $G$ if\\linebreak\n$A\\cap g^{-1}Ag=1,$ for all $g\\in G\\sminus A$.\n \n\n\\begin{thm}\\label{thm main}\nLet $G$ be a group with a malnormal subgroup $A$ and an involution $t\\in G\\sminus A$ such that\n$A$ contains no involutions.\nThen for any two elements $u, v\\in G$ with $Au\\ne Av$ there exist\n\\begin{itemize}\n\\item[(a)]\nan extension $G\\le G_1$;\n\n\\item[(b)]\na malnormal subgroup $A_1$ of $G_1$ such that $A_1$ does not contain involutions and satisfies $A_1\\cap G=A$;\n\n\\item[(c)] an element $f\\in G_1$ such that $A_1f=A_1u$ and $A_1tf=A_1v.$\n\\end{itemize}\n\\end{thm}\n\n\\begin{remark}\nIt is easy to see (see \\S2) that in Theorem \\ref{thm main}\nwe may assume that $u=1, v\\notin AtA$ and that either: \n(1)\\ $v^{-1}\\notin AvA$ or: (2)\\ $v$ is an involution.\nIf case (1) holds we take $G_1=G*\\lan f\\ran$\nto be the free product of $G$ with an infinite cyclic group generated by $f,$\nand $A_1=\\lan A, f, tfv^{-1}\\ran$. 
If case (2) holds we take\n$G_1=\\lan G, f\\mid f^{-1}tf=s\\ran$ and HNN extension and $A_1=\\lan A, f\\ran$.\n\\end{remark}\n\nAs a corollary to Theorem \\ref{thm main} we get the following.\n\n\\begin{thm}\\label{thm s2t in char 2}\nLet $G$ be a group with a malnormal subgroup $A$ such that\n$A$ contains no involutions.\nAssume further that $G$ is \\texttt{not} sharply $2$-transitive on the set of right cosets $A\\backslash G$.\nThen $G$ is contained in a group $\\calg$ having a malnormal subgroup $\\cala$ such that\n\\begin{enumerate}\n\\item\n$\\cala\\cap G=A;$\n\n\\item\n$\\calg$ is sharply $2$-transitive on the set of right cosets $X:=\\cala\\backslash\\calg;$\n\n\\item\n$\\cala$ contains no involutions (i.e.~$\\calg$ is of permutational characteristic $2$);\n\n\\item\n$\\calg$ does not contain a non-trivial abelian normal subgroup;\n\n\\item\nif $G$ is infinite then $G$ and $\\calg$ have the same cardinality (similarly \nfor $X$ and $A\\backslash G$).\n\\end{enumerate}\n\\end{thm}\n\n\nAs an immediate consequence of Theorem \\ref{thm s2t in char 2} we have\n\n\\begin{thm}\\label{thm eg}\nAny group $G$ is contained in a group $\\calg$ acting sharply $2$-transitively\non a set $X$ such that each involution in $\\calg$ has no fixed point in $X,$ and such that\n$\\calg$ does not contain a non-trivial abelian normal subgroup.\n\\end{thm}\n\\begin{proof}\nFor $|G|=1,2$ this is obvious. Otherwise take $A=1$ in Theorem \\ref{thm s2t in char 2}. \n\\end{proof}\n\\noindent\nIn fact there are many other ways to obtain a group $\\calg$ having a malnormal subgroup \n$\\cala$ and satisfying (2)--(4) of Theorem \\ref{thm s2t in char 2},\ne.g., take $G=\\lan t\\ran* A,$ where $t$ is an involution,\nand $A$ a non-trivial group without involutions, and apply Theorem \\ref{thm s2t in char 2}.\n(Here the free product guarantees that $A$ is malnormal in $G$.)\n\n\nTheorem \\ref{thm eg} shows that there exists a sharply $2$-transitive\ngroup $\\calg$ of {\\it characteristic $2$} (see Definition \\ref{def char} in Appendix \\ref{app A})\nsuch that $\\calg$ does not contain a non-trivial abelian normal subgroup. Further\nas noted in Appendix \\ref{app A}, if $G$ is sharply $2$-transitive of characteristic $3,$\nthen $G$ contains a non-trivial abelian normal subgroup. The cases where $\\charc(G)$ is distinct from\n$2$ and $3$ remain open.\n\nFinally we mention that the hypothesis that $A$ does not contain involutions\nin Theorem \\ref{thm main} is used only in the case where we take $G_1$ to be\nan HNN extension of $G$, and then, it is used only\nin the proof of the malnormality of $A_1$ in $G_1$.\n\n \n\\section{Some preliminaries regarding Theorem \\ref{thm main}}\\label{sect explanation}\n\nThe following observations and remarks are here in order to explain \nto the reader the way we intend to prove Theorem \\ref{thm main}, and\nto explain the main division between the two cases we deal with in \\S\\ref{sect nonhnn} and \\S\\ref{sect hnn}.\n\nIn fact Lemma \\ref{lem hyp}(3) and Lemma \\ref{lem fuv} below, together with Remark \\ref{rem pf thm 1.1}, show that we may\nassume throughout this paper that hypothesis \\ref{hyp main} holds; and that hypothesis naturally\nleads to the division of the two cases dealt with in \\S\\ref{sect nonhnn} and \\S\\ref{sect hnn}. \n\n\n\\begin{lemma}\\label{lem hyp}\nLet $A$ be a malnormal subgroup of a group $G$ and let $g\\in G\\sminus A$. 
Then\n\\begin{enumerate}\n\\item\n$C_G(a)\\le A,$ for all $a\\in A,\\ a\\ne 1;$ \n\n\\item\n$\\lan g\\ran\\cap A=1;$\n\n\\item\n$AgA$ contains an involution iff $g^{-1}\\in AgA.$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n(1): Let $a\\in A$ with $a\\ne 1,$ and let $h\\in C_G(a)$. Then $a\\in A\\cap A^h$. \nSo $h\\in A,$ since $A$ is malnormal in $G$.\n\\medskip\n\n\\noindent\n(2): Since $g\\in C_G(g^k)$ for all integers $k,$ part (2) follows from (1).\n\\medskip\n\n\\noindent\n(3):\\quad\nIf $g^{-1}\\notin AgA,$ then clearly $AgA$ does not contain an involution.\nConversely, assume that $g^{-1}\\in AgA$. Then $g^{-1}=agb,$ for some $a, b\\in A,$ \nso $(ag)^2=ab^{-1}\\in A$. Then, by (2), either $(ag)^2=1$ or $ag\\in A$. But $g\\notin A,$\nso $ag\\notin A,$ and we have $(ag)^2=1$. Hence $AgA$ contains the involution $ag$.\n\\end{proof}\n \nWe now make the following observation (and introduce the following notation):\n \n\\begin{lemma}\\label{lem fuv}\nLet $G$ be a group with a malnormal subgroup $A$ and an involution $t\\in G\\sminus A$.\nLet $G_1$ be an extension of $G,$ such that\n$G_1$ contains a malnormal subgroup $A_1$ with $A_1\\cap G=A$. Let $r, s\\in G$ be such that $Ar\\ne As$. Then\n\\begin{itemize}\n\\item[(1)]\nthere is at most one element $f'\\in G_1$ with $A_1f'=A_1r$ and $A_1tf'=A_1s$, which\nwe denote by $f'=f_{r,s}$ (if it exists).\n\\end{itemize}\nThe convention in $(2)$--$(4)$ below is that the left side exists if and only if the right side does\nand then they are equal:\n\\begin{itemize}\n\\item[(2)]\n$f_{r,s}g=f_{rg, sg}$ for any $g\\in G.$\n\n\\item[(3)]\n$tf_{r,s}=f_{s,r}.$\n\n\\item[(4)]\n$f_{a_1r, a_2s}=f_{r,s}$ for all $a_1, a_2\\in A$.\n\\end{itemize} \n\\end{lemma}\n\\begin{proof}\n(1):\\quad\nLet $f_1, f_2\\in G_1$ such that $A_1f_1=A_1f_2=A_1r$ and $A_1tf_1=A_1tf_2=A_1s$. Then $f_1f_2^{-1}\\in A_1$ \nand $tf_1f_2^{-1}t\\in A_1$. Since $t\\in G_1\\sminus A_1,$ and since\n$A_1$ is malnormal in $G_1,$ we obtain that $f_1f_2^{-1}=1,$ so $f_1=f_2$.\n\\medskip\n\n\\noindent\n(2):\\quad \n$A_1f_{r,s}g=A_1rg=A_1f_{rg, sg}$ and $A_1tf_{r,s}g=A_1sg=A_1f_{rg,sg}$.\nSo, by (1), $f_{rg, sg}=f_{r,s}g$.\n\\medskip\n\n\\noindent\n(3):\\quad\n$A_1tf_{r,s}=A_1s,$ and $A_1ttf_{r,s}=A_1f_{r,s}=A_1r$. So, by (1), $tf_{r,s}=f_{s,r}$.\n\\medskip\n\n\\noindent\n(4):\\quad\n$A_1f_{a_1r, a_2s}=A_1a_1r=A_1r,$ and $A_1tf_{a_1r,a_2s}= A_1a_2s=A_1s$. So, by (1), $f_{a_1r, a_2s}=f_{r,s}$. \n\\end{proof}\n\n\n\\begin{remark}\\label{rem pf thm 1.1}\nLet the notation be as in Theorem \\ref{thm main}. Notice that if there is an element \n$f\\in G$ such that $Af=Au$ and $Atf=Av,$ we can just take $G_1=G$\nand $A_1=A$ and there is nothing to prove in Theorem \\ref{thm main}. \n\nHence we may assume throughout this paper that this is not the case. \nIn view of (2) and (4)\nof Lemma \\ref{lem fuv}, $f_{u,v}=f_{1,vu^{-1}}u,$ and $f_{1,a'va}=f_{a^{-1},a'v}a=f_{1,v}a,$ for $a, a'\\in A$.\nHence we may assume that $u=1$ (and hence $v\\notin A$) and replace $v$ by any element of\nthe double coset $AvA$. By Lemma \\ref{lem hyp}(3), we may assume that either $v^{-1}\\notin AvA,$\nor $v$ is an involution. 
Further, since $f_{1,t}=1$ and since $t$ is an involution, we may assume\nthat $v\\notin AtA$ and $v^{-1}\\notin AtA$.\n\\end{remark}\n\nHence it suffices to prove Theorem \\ref{thm main} under the following\nhypothesis which we assume for the rest of the paper.\n\n\\begin{hypothesis}\\label{hyp main}\nIn the setting of Theorem \\ref{thm main}, assume $u=1, v, v^{-1}\\notin AtA$ and either $v^{-1}\\notin AvA$ or $v$ is an involution.\n\\end{hypothesis} \n\n\n\n\n\\section{The case $v^{-1}\\notin AvA$}\\label{sect nonhnn}\n\nThe purpose of this section is to prove Theorem \\ref{thm main} of\nthe introduction in the case where $v^{-1}\\notin AvA$. We refer\nthe reader to Hypothesis \\ref{hyp main} and to its explanation in \\S\\ref{sect explanation}.\nThus, throughout this section we assume that $v^{-1}\\notin AvA$. \nAlso, throughout\nthis section we use the notation and hypotheses of Theorem \\ref{thm main}.\n\nLet $\\lan f_1\\ran$ be an infinite cyclic group.\nWe let\n\\[\nG_1=G*\\lan f_1\\ran,\\quad f_2=tf_1v^{-1},\\quad A_1=\\lan A, f_1, f_2\\ran.\n\\]\n\nIn this section we will prove the following theorem.\n\n\\begin{thm}\\label{thm main nonhnn}\nWe have\n\\begin{enumerate}\n\\item\n$A_1=A*\\lan f_1\\ran*\\lan f_2\\ran,$ with $f_1, f_2$ of infinite order;\n\n\\item\n$A_1$ is malnormal in $G_1$.\n\\end{enumerate}\n\\end{thm}\n\nSuppose Theorem \\ref{thm main nonhnn} is proved. We now prove Theorem \\ref{thm main}\nin the case where $v^{-1}\\notin AvA$.\n\\medskip\n\n\\begin{proof}[Proof of Theorem \\ref{thm main} in the case where $v^{-1}\\notin AvA$]\\hfill\n\nLet $f:=f_1$. Then $A_1f=A_1f_1=A_1,$ and \n\\[\nA_1tf=A_1tf_1=A_1tf_1v^{-1}v=A_1f_2v=A_1v.\n\\]\nBy Theorem \\ref{thm main nonhnn}(2), $A_1$ is malnormal in $G_1$. \nBy Theorem \\ref{thm main nonhnn}(1), \n$A_1\\cap G=A,$ and $f_2$ is of infinite order. \nSince $A_1=A*\\lan f_1\\ran*\\lan f_2\\ran,$ and $A$ does not contain involutions, $A_1$\ndoes not contain involutions. \n\\end{proof}\n \n\n\\begin{prop}\\label{prop A1 fp}\n$f_2$ is of infinite order in $G_1,$ and $A_1=A* \\lan f_1\\ran*\\lan f_2\\ran.$\n\\end{prop}\n\\begin{proof}\nWe first show that $f_2$ is of infinite order. Indeed let $h:=f_2^n,$ for some\n$n\\in\\zz,$ and write $h$ in terms of $f_1$ and elements of $G$. If $n>0,$\nthen $h$ starts with $t$ and ends with $v^{-1},$ while if $n<0,$ then\n$h$ starts with $v$ and ends with $t$. In particular $f_2$ has infinite order.\n\nNext let $F:=\\lan f_1, f_2\\ran$. Then any element of $F$ is a product of\nalternating powers of $f_1$ and $f_2$. As we saw in the previous paragraph of the proof,\nany non-zero power of $f_2$ starts with $t$ or $v$ and ends with $t$ or $v^{-1}$.\nSince $G_1=G*\\lan f_1\\ran$ there will be no cancellation between powers of $f_1$ and powers of $f_2$.\nIt follows that $F$ is a free group.\n\nNow consider an element in $A_1=\\lan A, F\\ran$. It is an alternating product\nof elements of $A$ and elements of $F$. \nWhen we express it as an element of $G_1=G*\\lan f_1\\ran,$ $f_2$ is written as $tf_1v^{-1}$\nand $f_2^{-1}$ is written as $vf_1^{-1}t$. Accordingly, an element $1\\ne a\\in A$ in this alternating\nproduct is multiplied with $1, v^{-1}$ or $t$ on the left, and with $1, t$ or $v$ on the right. 
\nThe possibilities\nare: \n\\begin{itemize}\n\\item\n$v^{-1}a,\\ ta,\\ at,\\ av:$ all are distinct from $1$ since $t$ and $v$ are not in $A$.\n\n\\item\n$tat,\\ v^{-1}av:$ all are distinct from $1$ since they are conjugate to $a$.\n\n\\item\n$tav,\\ v^{-1}at:$ all are distinct from $1$ since $v\\notin AtA$.\\qedhere\n\\end{itemize}\n\\end{proof}\n\n\\begin{prop}\\label{prop malnor nonhnn}\n$A_1$ is a malnormal subgroup of $G_1$.\n\\end{prop}\n\\begin{proof}\nWe will show that the existence of elements $a, b\\in A_1,$ and $g\\in G_1\\sminus A_1,$\nsuch that $a\\ne 1$ and $g^{-1}ag=b$ leads to a contradiction.\n\nLet \n\\[\na=a_1f_{\\gd_1}^{\\gre_1}a_2f_{\\gd_2}^{\\gre_2}\\cdots a_nf_{\\gd_n}^{\\gre_n}a_{n+1},\\ a\\ne 1,\\qquad\\text{and}\n\\]\n\\[\nb=b_1f_{\\gc_1}^{\\mu_1}b_2f_{\\gc_2}^{\\mu_2}\\cdots b_{\\ell}f_{\\gc_{\\ell}}^{\\mu_{\\ell}}b_{\\ell+1},\n\\]\nwhere $a_i, b_j\\in A,\\ \\gre_i, \\mu_j=\\pm 1,\\ \\gd_i, \\gc_j\\in\\{1,2\\},$\n and if $\\gd_i=\\gd_{i-1}$ and $\\gre_i=-\\gre_{i-1}$ then\n$a_i\\ne 1$ (i.e.~there are no $f_i$-cancellations in $a$),\nand similarly there are no $f_i$-cancellations in $b$.\nWrite\n\\[\ng=g_1f_1^{\\gl_1}g_2f_1^{\\gl_2}\\cdots g_mf_1^{\\gl_m}g_{m+1}\\in G_1\\sminus A_1,\n\\]\nwhere $g_i\\in G,\\ \\gl_i=\\pm 1,$ and there are no $f_1$-cancellations in $g$.\n\nAssume that $m$ is the least possible.\nWe have the picture as in Figure \\ref{fig9} below.\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild9.pspdftex}}\n \\caption{}\n \\label{fig9}\n\\end{figure} \n\n\\noindent\n{\\bf Case 1.}\\ $m=n=0$.\n\nIn this case $b=g^{-1}a_1g \\in A_1\\cap G$. By Proposition \\ref{prop A1 fp}, $A_1\\cap G=A,$ so $b\\in A,$\nand we get a contradiction to the malnormality of $A$ in $G$.\n \n\\medskip\n\n\\noindent \n\\smallskip\n\nThe next case to consider is: \n\\smallskip\n\n\\noindent \n{\\bf Case 2.}\\ $m=0,$ and $n>0$.\n\nSince $G_1=G*\\lan f_1\\ran,$ we must have $n=\\ell$.\nConsider Figure \\ref{fig9}. \nBy an analysis of the normal form in the free\nproduct $G*\\lan f_1\\ran$ we see that\nthe only way we can get the equality $g_1^{-1}ag_1=b$ is \nif both $\\gre_1=\\mu_1$ and $\\gre_n=\\mu_n$. \nWe distinguish a number of cases as follows.\n\\begin{itemize}\n\\item[(i)]\n$\\gd_1=\\gc_1$ or $\\gd_n=\\gc_n$.\n\n\\item[(ii)]\n$\\gd_1\\ne\\gc_1$ and $\\gd_n\\ne \\gc_n$.\n\\begin{itemize}\n\\item[(a)] \n$n=1$.\n\\item[(b)]\n$n>1$.\n\\end{itemize}\n\\end{itemize}\n{\\bf Case (i).}\\ \nBy symmetry we may consider only the case where $\\gd_1=\\gc_1$. \nIn this case, regardless of whether $\\gre_1=1$ or $-1$ and whether $\\gd_1=1$ or $2,$\nwe get that $g_1=a_1b_1^{-1}\\in A,$ a contradiction. \n\\medskip\n\n\\noindent\n{\\bf Case (iia).}\\\nBy symmetry we may assume that $\\gd_1=1$ and $\\gc_1=2$.\n\nSuppose first that $\\gre_1=\\mu_1=1$.\nThen from the left side of Figure \\ref{fig9} we get\n$a_1^{-1}g_1b_1t=1,$ and from the right side we get $a_2g_1b_2^{-1}v=1$.\nThis implies that $t\\in Ag_1A$ and $v^{-1}\\in Ag_1A$. But then $v^{-1}\\in AtA,$\na contradiction.\n\nSuppose next that $\\gre_1=\\mu_1=-1$. 
Then, from the left side of Figure \\ref{fig9} we get\n$a_1^{-1}g_1b_1v=1,$ and from the right side we get $a_2g_1b_2^{-1}t=1$.\nAgain this implies that $v^{-1}\\in AtA,$\na contradiction.\n\\medskip\n\n\\noindent\n{\\bf Case (iib).}\\ \nBy symmetry, we may assume without loss of generality that \n\\[\n\\gd_1=1\\text{ and }\\gc_1=2.\n\\]\nSuppose first that \n\\[\n\\gre_1=\\mu_1=1.\n\\]\nWe may further assume that \n\\[\na_1^{-1}g_1b_1t=1\\quad\\text{and}\\quad\\gre_2=\\mu_2.\n\\]\nWe now separate the discussion according to the following cases: \n\\begin{itemize}\n\\item\n$\\gd_2=\\gc_2$. In this case, regardless of\nthe sign of $\\gre_2=\\mu_2$ and whether $\\gd_2=\\gc_2=1$ or $2,$ we get that $a_2^{-1}v^{-1}b_2=1,$\nwhich is false since $v\\notin A$.\n\n\\item\n$\\gre_2=\\mu_2=1,\\ \\gd_2=1,\\ \\gc_2=2$. We get $a_2^{-1}v^{-1}b_2t=1,$ contradicting $v\\notin AtA$.\n\n\\item\n$\\gre_2=\\mu_2=-1,\\ \\gd_2=1,\\ \\gc_2=2$. We get $a_2^{-1}v^{-1}b_2v=1$ with $b_2\\ne 1$.\nBut this contradicts the malnormality of $A$ in $G$.\n\n\n\\item\n$\\gre_2=\\mu_2=1,\\gd_2=2,\\ \\gc_2=1$. We get $ta_2^{-1}v^{-1}b_2=1,$ contrary to $v^{-1}\\notin AtA$.\n\n\\item\n$\\gre_2=\\mu_2=-1,\\ \\gd_2=2,\\ \\gc_2=1$. We get $v^{-1}a_2^{-1}v^{-1}b_2=1$. This implies that $v^{-1}\\in AvA,$\ncontrary to our hypotheses.\n\\end{itemize}\n\nSuppose next that\n\\[\n\\gre_1=\\mu_1=-1.\n\\]\nWe may further assume that \n\\[\na_1^{-1}g_1b_1v=1\\quad\\text{and}\\quad \\gre_2=\\mu_2.\n\\]\nAgain we separate the discussion according to the following cases: \n\\begin{itemize}\n\\item\n$\\gd_2=\\gc_2$. In this case, regardless of\nthe sign of $\\gre_2=\\mu_2$ and whether $\\gd_2=\\gc_2=1$ or $2,$ we get that $a_2^{-1}tb_2=1,$\nwhich is false since $t\\notin A$.\n\n\\item\n$\\gre_2=\\mu_2=1,\\ \\gd_2=1,\\ \\gc_2=2$. We get $a_2^{-1}tb_2t=1,$ and $b_2\\ne 1$. This contradicts \nthe malnormality of $A$ in $G$.\n\n\\item\n$\\gre_2=\\mu_2=-1,\\ \\gd_2=1,\\ \\gc_2=2$. We get $a_2^{-1}tb_2v=1,$ impossible, as above.\n\n\n\\item\n$\\gre_2=\\mu_2=1,\\gd_2=2,\\ \\gc_2=1$. We get $ta_2^{-1}tb_2=1$.\nThis case forces $a_2=b_2=1$ (because $A$ is malnormal in $G$) . If $n=2$ we get \n$v^{-1}a_3g_1b_3^{-1}=1$. But this together with $a_1^{-1}g_1b_1v=1$\nimplies that $v^{-1}\\in AvA,$ contrary to our hypotheses. Thus $n\\ge 3$. But now,\nwe must have $\\gre_3=\\mu_3,$ and arguing exactly as in the previous cases, \nfor all choices of $\\gre_3=\\mu_3, \\gd_3$ and $\\gc_3,$ we get a contradiction as in one of the cases above.\n\n\\item\n$\\gre_2=\\mu_2=-1,\\ \\gd_2=2,\\ \\gc_2=1$. We get $v^{-1}a_2^{-1}tb_2=1,$ impossible, as above.\n\\end{itemize}\n\\smallskip\n\n\nNext we consider:\n\\smallskip\n\n\\noindent\n{\\bf Case 3.}\\ $n=0=\\ell$ and $m > 0$.\n\n\\noindent\nNotice that in this case there will be no cancellations in Figure \\ref{fig9},\nsince otherwise we must either have $g_1^{-1}a_1g_1=1,$ or $g_{m+1}^{-1}b_1^{-1}g_{m+1}=1,$\nwhich is false.\n\nHence we may assume that either $n>0$ or $\\ell>0$ or both.\nBy symmetry we may consider the following case:\n\\smallskip\n\n\\noindent\n{\\bf Case 4.}\\ $m > 0$ and $n >0$.\n\nNotice that $f_i$-cancellations have to occur in the product $g^{-1}agb^{-1},$ since\nit is equal to $1$. 
Now $f_i$-cancellations can occur\nonly if one of the following cases occurs:\n\\begin{itemize}\n\\item[(i)]\nThe product $f_1^{-\\gl_1}g_1^{-1}a_1f_{\\gd_1}^{\\gre_1}$ equals $1,\\ v^{-1},$ or $t$.\n\\item[(ii)]\nThe product $f_{\\gd_n}^{\\gre_n}a_{n+1}g_1f_1^{\\gl_1}$ equals $1,\\ t$ or $v$.\n\\item[(iii)]\nThe product $f_1^{\\gl_m}g_{m+1}b_1f_{\\gc_1}^{\\mu_1}$ equals $1,\\ v^{-1}$ or $t$.\n\\item[(iv)]\nThe product $f_{\\gc_{\\ell}}^{\\mu_{\\ell}}b_{\\ell+1}g_{m+1}^{-1}f_1^{-\\gl_m}$\nequals $1,\\ t$ or $v$.\n\\end{itemize}\n\\medskip\n\nBy symmetry, we may consider only case (i).\nIf $f_1^{-\\gl_1}g_1^{-1}a_1f_{\\gd_1}^{\\gre_1}=1,$\nthen $g_1f_1^{\\gl_1}=a_1f_{\\gd_1}^{\\gre_1}$. Let \n$h:=g_2f_1^{\\gl_2}\\cdots f_1^{\\gl_m}g_{m+1},$ and\n\\[\na'=a_2f_{\\gd_2}^{\\gre_2}\\cdots f_{\\gd_n}^{\\gre_n}a_{n+1}g_1f_1^{\\gl_1}=a_2f_{\\gd_2}^{\\gre_2}\\cdots f_{\\gd_n}^{\\gre_n}a_{n+1}a_1f_{\\gd_1}^{\\gre_1}\\in A_1.\n\\]\nNotice that $a'$ is conjugate to $a,$ so $a'\\ne 1$.\nAlso $h=f_1^{-\\gl_1}g_1^{-1}g,$ and $h\\notin A_1,$ since $f_1^{-\\gl_1}g_1^{-1}\\in A_1,$ while $g\\notin A_1$.\nWe get (see Figure \\ref{fig9}) $g^{-1}ag=h^{-1}a'h\\in A_1,$ contradicting the minimality of $m$.\n\\smallskip\n\nIf $f_1^{-\\gl_1}g_1^{-1}a_1f_{\\gd_1}^{\\gre_1}=v^{-1},$ then $g_1f_1^{\\gl_1}=a_1f_{\\gd_1}^{\\gre_1}v$.\nLet $h:=vg_2f_1^{\\gl_2}\\cdots f_1^{\\gl_m}g_{m+1},$ and \n\\[\na'=a_2f_{\\gd_2}^{\\gre_2}\\cdots f_{\\gd_n}^{\\gre_n}a_{n+1}g_1f_1^{\\gl_1}v^{-1}=a_2f_{\\gd_2}^{\\gre_2}\\cdots f_{\\gd_n}^{\\gre_n}a_{n+1}a_1f_{\\gd_1}^{\\gre_1}\\in A_1.\n\\]\nAs above, $1\\ne a'\\in A_1,$ and if $h\\in A_1,$ then \n$g=g_1f_1^{\\gl_1}v^{-1}h=a_1f_{\\gd_1}^{\\gre_1}h\\in A_1,$\nwhich is false.\nWe again get $g^{-1}ag=h^{-1}a'h\\in A_1,$ which contradicts the minimality of $m$.\n\nFinally if $f_1^{-\\gl_1}g_1^{-1}a_1f_{\\gd_1}^{\\gre_1}=t,$ then $g_1f_1^{\\gl_1}=a_1f_{\\gd_1}^{\\gre_1}t$.\nLet $h:=tg_2f_1^{\\gl_2}\\cdots f_1^{\\gl_m}g_{m+1},$ and \n\\[\na'=a_2f_{\\gd_2}^{\\gre_2}\\cdots f_{\\gd_n}^{\\gre_n}a_{n+1}g_1f_1^{\\gl_1}t=a_2f_{\\gd_2}^{\\gre_2}\\cdots f_{\\gd_n}^{\\gre_n}a_{n+1}a_1f_{\\gd_1}^{\\gre_1}\\in A_1.\n\\]\nAs above we get $1\\ne a'\\in A_1,$ and $h\\notin A_1,$\nand again we get the same contradiction.\n\nNote that if $\\ell=0,$ then no cancellation of the type (iii) or (iv) above can occur.\n\\end{proof}\n\\begin{proof}[Proof of Theorem \\ref{thm main nonhnn}]\\hfill\n\\medskip\n\n\\noindent\nBy Proposition \\ref{prop A1 fp}, part (1) holds, and by Proposition \\ref{prop malnor nonhnn} part (2) holds.\n\\end{proof}\n\\section{The case $v$ is an involution and $v\\notin AtA$}\\label{sect hnn}\nThe purpose of this section is to prove Theorem \\ref{thm main} of\nthe introduction in the case where $v$ is an involution. We refer\nthe reader to Hypothesis \\ref{hyp main} and to its explanation in \\S\\ref{sect explanation}.\nThus, throughout this section we assume that $v$ is an involution and that $v\\notin AtA$. Further, throughout\nthis section we use the notation and hypotheses of Theorem \\ref{thm main}.\n\nLet $\\lan f\\ran$ be an infinite cyclic group.\nWe define an HNN extension\n\\[\nG_1=\\lan G, f\\mid f^{-1}tf=v\\ran,\\qquad A_1=\\lan A, f\\ran.\n\\]\n\nIn this section we will prove the following theorem.\n \n\\begin{thm}\\label{thm main hnn}\nWe have\n\\begin{enumerate}\n\\item\n$A_1=A*\\lan f\\ran$;\n\n\\item\n$A_1$ is malnormal in $G_1.$\n\\end{enumerate}\n\\end{thm}\nSuppose Theorem \\ref{thm main hnn} is proved. 
We now use it to prove Theorem \\ref{thm main}\nin the case where $v$ is an involution. \n\\smallskip\n\n\\noindent\n\\begin{proof}[Proof of Theorem \\ref{thm main} in the case where $v$ is an involution]\\hfill\n\\medskip\n\n\\noindent\nWe have $A_1f=A_1$ and $A_1tf=A_1fv=A_1v$. By Theorem \\ref{thm main hnn}(2), $A_1$\nis malnormal in $G_1$. By Theorem \\ref{thm main hnn}(1), $A_1\\cap G=A$. \nAlso $A_1$ does not\ncontain involutions since $A_1=A*\\lan f\\ran,$ and $A$ does not contain involutions.\n\\end{proof}\n\n\\begin{remark}\\label{rem hnn}\n\\begin{comment}\nAny element of $G_1$ has the form \n\\[\ng=g_0f^{\\gd_1}g_1\\cdots g_{m-1}f^{\\gd_m}g_m,\n\\]\nwhere $g_i\\in G,\\ i=0,\\dots m,\\ \\gd_i=\\pm 1,\\ i=1,\\dots m$.\nAccording to Britton's lemma we say that there are {\\it no $f$-cancellations in $g$} if the equality\n$\\gd_i=-\\gd_{i-1}$ implies that if $\\gd_i=1,$ then $g_i\\ne 1, t,$ while if $\\gd_i=-1,$ then\n$g_i\\ne 1,v$.\n \nFurther let $g$ be as above, let $h\\in G_1,$ and write:\n\\[\nh=h_0f^{\\eta_1}h_1\\cdots h_{k-1}f^{\\eta_k}h_k,\n\\]\nwhere $h_j\\in G,\\ j=0,\\dots m,\\ \\eta_j=\\pm 1,\\ j=1,\\dots k,$ \nand there are no $f$-cancellations in $g$ and $h$.\n\nThen $g=h$ \nif and only if $m=k,\\ \\gd_i=\\eta_i,\\ i=1,\\dots, m,$\nand there are elements $w_0,z_1,w_1,z_2,w_2,\\dots,z_m,w_m,z_{m+1}$ such that\nfor every oriented loop in Figure \\ref{fig1} the product of edges is $1,$ that is:\n \n\n\\begin{itemize}\n\\item[(a)]\n$h_i=w_ig_iz_{i+1},\\ i=0,\\dots, m;$\n\n\\item[(b)] $w_0=1,\\ z_{m+1}=1;$\n\n\\item[(c)] If $\\gd_i=1,$ then either $z_i=1,\\ w_i=1,$ or $z_i=t,\\ w_i=v;$\n\n\\item[(d)] If $\\gd_i=-1,$ then either $z_i=1,\\ w_i=1,$ or $z_i=v,\\ w_i=t.$\n\\end{itemize}\n\\bigskip\n\n\\noindent\n\\end{comment}\nAny element of $G_1$ has the form \n\\[\ng=g_1f^{\\gd_1}g_2\\cdots g_mf^{\\gd_m}g_{m+1},\n\\]\nwhere $g_i\\in G,\\ i=1,\\dots m+1,\\ \\gd_i=\\pm 1,\\ i=1,\\dots m$.\nAccording to Britton's lemma we say that there are {\\it no $f$-cancellations in $g$} if the equality\n$\\gd_i=-\\gd_{i-1}$ implies that if $\\gd_i=1,$ then $g_i\\ne 1, t,$ while if $\\gd_i=-1,$ then\n$g_i\\ne 1,v$.\n \nFurther let $g$ be as above, let $h\\in G_1,$ and write:\n\\[\nh=h_1f^{\\eta_1}h_2\\cdots h_kf^{\\eta_k}h_{k+1},\n\\]\nwhere $h_j\\in G,\\ j=1,\\dots k+1,\\ \\eta_j=\\pm 1,\\ j=1,\\dots k,$ \nand there are no $f$-cancellations in $g$ and $h$.\n\nThen $g=h$ \nif and only if $m=k,\\ \\gd_i=\\eta_i,\\ i=1,\\dots, m,$\nand there are elements $w_0,z_1,w_1,z_2,w_2,\\dots,z_m,w_m,z_{m+1}$ such that\nfor every oriented loop in Figure \\ref{fig1} the product of edges is $1,$ that is:\n \n\n\\begin{itemize}\n\\item[(a)]\n$h_i=w_{i-1}g_iz_i,\\ i=1,\\dots, m+1;$\n\n\\item[(b)] $w_0=1,\\ z_{m+1}=1;$\n\n\\item[(c)] if $\\gd_i=1,$ then either $z_i=1,\\ w_i=1,$ or $z_i=t,\\ w_i=v;$\n\n\\item[(d)] if $\\gd_i=-1,$ then either $z_i=1,\\ w_i=1,$ or $z_i=v,\\ w_i=t.$\n\\end{itemize}\n\\clearpage\n\n\n\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild8.pspdftex}}\n \\caption{}\n \\label{fig1}\n\\end{figure} \n\\end{remark}\n\\begin{lemma}\\label{A1 is a free product}\n$A_1=A*\\lan f\\ran$.\n\\end{lemma}\n\\begin{comment}\n\\begin{proof}\nSuppose that\n\\[\ng_1f^{\\gd_1}g_2\\cdots g_mf^{\\gd_m}g_{m+1}=h_1f^{\\gd_1}h_2\\cdots h_mf^{\\gd_m}h_{m+1},\n\\]\n\\[\ng_0f^{\\gd_1}g_1\\cdots g_{m-1}f^{\\gd_m}g_m=h_0f^{\\gd_1}h_1\\cdots h_{m-1}f^{\\gd_m}h_m,\n\\]\nand $h_i, g_i\\in A,\\ i=0,\\dots, m$. 
By Remark \\ref{rem hnn}, $h_0=g_0z_1,$\nhence, by (a)--(d) of Remark \\ref{rem hnn}, since $t, v\\notin A,$ we have $z_1=1,$ \nso $h_0=g_0,$ and then, by Remark \\ref{rem hnn} (c) and (d), $w_1=1$.\n\nAssume $w_i=1$. Then $h_i=w_ig_iz_{i+1}=g_iz_{i+1}$. Since $t, v\\notin A,$ this implies $z_{i+1}=1,$ and\nthen $w_{i+1}=1$. So $g_i=h_i$ for $i=0,\\dots, m$. Hence $A_1=A*\\lan f\\ran$.\n\\end{proof}\n\\bigskip\n\n\\noindent\n\\end{comment}\n\\begin{proof}\nSuppose that\n\\[\ng_1f^{\\gd_1}g_2\\cdots g_mf^{\\gd_m}g_{m+1}=h_1f^{\\gd_1}h_2\\cdots h_mf^{\\gd_m}h_{m+1},\n\\]\nand $h_i, g_i\\in A,\\ i=1,\\dots, m+1$. By Remark \\ref{rem hnn}, $h_1=g_1z_1,$\nhence, by (a)--(d) of Remark \\ref{rem hnn}, since $t, v\\notin A,$ we have $z_1=1,$ \nso $h_1=g_1,$ and then, by Remark \\ref{rem hnn} (c) and (d), $w_1=1$.\n\nAssume $w_i=1$. Then $h_{i+1}=w_ig_{i+1}z_{i+1}=g_{i+1}z_{i+1}$. \nSince $t, v\\notin A,$ this implies $z_{i+1}=1,$ and\nthen $w_{i+1}=1$. So $g_i=h_i$ for $i=1,\\dots, m+1$. Hence $A_1=A*\\lan f\\ran$.\n\\end{proof}\n \n\n\\begin{prop}\\label{prop malnormal hnn}\n$A_1$ is malnormal in $G_1$. \n\\end{prop}\n\\begin{proof}\nWe will show that the existence of elements $a, b\\in A_1,\\ g\\in G_1\\sminus A_1$\nsuch that $a\\ne 1$ and $g^{-1}ag=b$ leads to a contradiction. Let\n\\[\na=a_1f^{\\ga_1}a_2\\cdots a_mf^{\\ga_m}a_{m+1},\\qquad b=b_1f^{\\gb_1}b_2\\cdots b_nf^{\\gb_n}b_{n+1},\n\\]\nwhere $a_i, b_i\\in A,\\ \\ga_i, \\gb_i=\\pm 1,$ and if $\\ga_i=-\\ga_{i-1},$ then $a_i\\ne 1,$\nand if $\\gb_i=-\\gb_{i-1},$ then $b_i\\ne 1$. Recall that by Lemma \\ref{A1 is a free product}, $A_1=A*\\lan f\\ran,$\nand therefore in the above expressions for $a$ and $b$ there are no $f$-cancellations. We also have\n\\[\ng=g_1f^{\\gd_1}g_2\\cdots g_kf^{\\gd_k}g_{k+1},\n\\]\nwhere $g_i\\in G,\\ \\gd_i=\\pm 1,$ and $\\gd_i=-\\gd_{i-1}$ implies that if $\\gd_i=1,$ then $g_i\\ne 1, t,$\nand if $\\gd_i=-1,$ then $g_i\\ne 1, v$.\n\nWe assume that $k$ is the least possible.\n\\medskip\n\n\\noindent\n{\\bf Case 1.}\\ $k=0$.\n\nThen $g=g_1,$ so we have \n\\[\ng_1^{-1}a_1f^{\\ga_1}a_2\\cdots a_mf^{\\ga_m}a_{m+1}g_1=b_1f^{\\gb_1}b_2\\cdots b_nf^{\\gb_n}b_{n+1}.\n\\] \nWe conclude that $n=m,\\ \\ga_i=\\gb_i,$ for $i=1,2,\\dots, m$. If $m=n=0,$ then $a=a_1\\ne 1,\\ b=b_1,$ so \n$g_1^{-1}a_1g_1=b_1$ which is impossible because $A$ is malnormal in $G$.\n\nLet $m=n>0$. We obtain Figure \\ref{fig2} below, where\n\\begin{gather}\\label{eq prop 4.4}\n\\text{if }\\ga_i=1,\\text{ then either }p_i=q_i=1,\\text{ or }p_i=t,\\ q_i=v,\\\\\\notag\n\\text{and if }\\ga_i=-1,\\text{ then either }p_i=q_i=1,\\text{ or }p_i=v,\\ q_i=t.\n\\end{gather}\n\n\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild1.pspdftex}}\n \\caption{}\n \\label{fig2}\n\\end{figure} \n\n\\noindent\nWe have $p_1=a_1^{-1}g_1b_1\\notin A$ since $g_1\\notin A$. Now \nassume $p_i\\notin A$. Then $q_i\\notin A$ by \\eqref{eq prop 4.4} \nand by Britton's Lemma $p_{i+1}=a_{i+1}^{-1}q_ib_{i+1}$ is not in $A$ either.\nIn Particular $p_i, q_i\\ne 1$ for all $i\\le m$.\n\nIf $m=n\\ge 2,$ consider Figure \\ref{fig3}:\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild2.pspdftex}}\n \\caption{}\n \\label{fig3}\n\\end{figure} \n\n\\noindent\nWe now use equation \\eqref{eq prop 4.4}.\nIf $\\ga_1=1,\\ \\ga_2=1,$ then $q_1=v,\\ p_2=t,$ so $v=a_2tb_2^{-1}\\in AtA,$ a contradiction.\n\nIf $\\ga_1=1,\\ \\ga_2=-1,$ then $a_2\\ne 1,\\ q_1=v,\\ p_2=v$. 
Then $va_2v=b_2,$\ncontradicting the malnormality of $A$ in $G$.\n\nIf $\\ga_1=-1,\\ \\ga_2=1,$ then $a_2\\ne 1,\\ q_1=t, p_2=t,$ and $ta_2t=b_2,$ again contradicting the malnormality of $A$ in $G$.\n\nIf $\\ga_1=-1,\\ \\ga_2=-1,$ then $q_1=t,\\ p_2=v,$ and $v=a_2^{-1}tb_2\\in AtA,$\na contradiction.\n\nSo we are left with the possibility $m=n=1$. In Figure \\ref{fig2} above, after cutting and pasting\nwe obtain the following figure \\ref{fig4}:\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild3.pspdftex}}\n \\caption{}\n \\label{fig4}\n\\end{figure} \n \n\n\\noindent\nIf $\\ga_1=1,$ then $p_1=t,\\ q_1=v,$ and if $\\ga_1=-1,$ then $p_1=v,\\ q_1=t$. In\nboth cases $v\\in AtA,$ contrary to the choice of $v$.\n\\medskip\n\n\\noindent\n{\\bf Case 2.} $k>0$.\n\nConsider Figure \\ref{fig5} below.\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild4.pspdftex}}\n \\caption{}\n \\label{fig5}\n\\end{figure} \n \nNotice that $f$-cancellations have to occur in the product $g^{-1}agb^{-1},$ since\nit is equal to $1$. Therefore, at least one of the following cases must happen:\n\\begin{enumerate}\n\\item\n$m=0,\\ a=a_1,$ and $f^{-\\gd_1}$ cancels with $f^{\\gd_1}$ in the product $f^{-\\gd_1}g_1^{-1}a_1g_1f^{\\gd_1};$\n\n\\item\n$n=0,\\ b=b_1$ and $f^{\\gd_k}$ cancels with $f^{-\\gd_k}$ in the product $f^{\\gd_k}g_{k+1}b_1g_{k+1}^{-1}f^{-\\gd_k};$\n\n\\item\n$m>0,$ and $f^{-\\gd_1}$ cancels with $f^{\\ga_1}$ in the product $f^{-\\gd_1}g_1^{-1}a_1f^{\\ga_1};$\n\n\\item\n$m>0,$ and $f^{\\ga_m}$ cancels with $f^{\\gd_1}$ in the product $f^{\\ga_m}a_{m+1}g_1f^{\\gd_1};$\n\n\\item\n$n>0,$ and $f^{\\gd_k}$ cancels with $f^{\\gb_1}$ in the product $f^{\\gd_k}g_{k+1}b_1f^{\\gb_1};$\n\n\\item\n$n>0,$ and $f^{\\gb_n}$ cancels with $f^{-\\gd_k}$ in the product $f^{\\gb_n}b_{n+1}g_{k+1}^{-1}f^{-\\gd_k}.$ \n\\end{enumerate} \nIn case (1), $a=a_1\\ne 1,$ so $g_1^{-1}a_1g_1=t\\text{ or }v$. Hence $a_1$\nis conjugate to an involution, which is impossible, as $A$ does not contain involutions.\n\nSimilarly, in case (2) $b=b_1\\ne 1,$ so $g_{k+1}b_1g_{k+1}^{-1}=t\\text{ or }v,$ again a contradiction.\n\nIn case (3) we have Figure \\ref{fig6} below,\n\\begin{figure}[h]\n \\centering\n\\scalebox{0.70}{\\input{bild5a.pspdftex}}\n \\caption{}\n \\label{fig6}\n\\end{figure} \n\n\\noindent\nwhere $p,q\\in\\{1, t, v\\}$ by Britton's Lemma. We define \n\\[\na'=a_2\\cdots a_mf^{\\ga_m}a_{m+1}a_1f^{\\ga_1}\\quad\\text{ and }\\quad h=qg_2\\cdots g_kf^{\\gd_k}g_{k+1}.\n\\]\nWe have $h^{-1}a'h=b,\\ a'$ is conjugate to $a$. So $a\\ne 1$ implies $a'\\ne 1$. Also the $f$-length\nof $h$ is $k-1$. \nNotice that\n$h=f^{-\\ga_1}a_1^{-1}g,$ and $h\\notin A_1$ since $f^{-\\ga_1}a_1^{-1}\\in A_1,$ and $g\\notin A_1$.\nWe obtained a contradiction to the minimality of $k$.\n\nThe remaining cases are handled in entirely the same way. \n\\end{proof}\n\\smallskip\n\n\\begin{proof}[Proof of Theorem \\ref{thm main hnn}]\\hfill\n\\medskip\n\n\\noindent\nBy Lemma \\ref{A1 is a free product}, part (1) holds, and by Proposition \\ref{prop malnormal hnn},\npart (2) holds.\n\\end{proof}\n\n \n \n\\section{The proof of Theorem \\ref{thm s2t in char 2}}\n\nIn this section we show how Theorem \\ref{thm s2t in char 2} of the\nintroduction follows from Theorem \\ref{thm main}.\n\nLet $G$ be a group with a malnormal subgroup $A$ such that $A$ contains no involutions. 
\nAssume that $G$ is {\\it not}\\, $2$-transitive on the set of right cosets $A\\backslash G$.\nIf there exists an involution $t\\in G\\sminus A,$ set $G_0:=G,\\ A_0:=A$.\nOtherwise, let $G_0:=G*\\lan t\\ran,$ where $t$ is an involution, and let $A_0=A$.\nThen, by \\cite[Corollary 4.1.5]{MaKS}, $G$ is malnormal in $G_0,$ and\nthen since $A$ is malnormal in $G,$ it is malnormal in $G_0$.\n\nWe now construct a sequence of groups $G_i$ and of subgroups\n$A_i\\le G_i,\\ i=0,1,2\\dots,$ having the following properties for all $i\\ge 0$:\n\\begin{enumerate}\n\\item\n$G_i\\le G_{i+1},$ and $A_i\\le A_{i+1};$\n\n\\item\n$A_i$ is malnormal in $G_i$ and $t\\in G_i\\sminus A_i;$\n\n\\item\n$A_i$ does not contain involutions;\n\n\\item\n$A_{i+1}\\cap G_i=A_i$;\n\n\\item\nfor each $v\\in G_i\\sminus A_i$ there\nexists an element $f_v\\in A_{i+1}$ such that $A_{i+1}tf_v=A_{i+1}v.$\n\\end{enumerate}\n\nIn order to construct $G_{i+1}, A_{i+1}$ from $G_i, A_i$\nwe enumerate the set\\linebreak $G_i\\sminus A_i=\\{v_\\alpha:\\alpha< \\rho\\}$\nfor some ordinal $\\rho$. For each ordinal $\\ga<\\gr$ we construct the pair \n$G_i^{\\ga},\\ A_i^{\\ga}$ and the element $f_{v_{\\ga}}\\in A_i^{\\ga}$ having\nthe following properties:\n\\begin{itemize}\n\\item[(i)] $G_i^{\\gb}\\le G_i^{\\ga},$ for all ordinals $\\gb<\\ga;$\n\n\\item[(ii)] $A_i^{\\ga}$ is malnormal in $G_i^{\\ga}$ and $t\\in G_i^{\\ga}\\sminus A_i^{\\ga};$\n\n\\item[(iii)] $A_i^{\\ga}$ contains no involutions;\n\n\\item[(iv)] $A_i^{\\ga}\\cap G_i^{\\gb}=A_i^{\\gb}$ for all $\\gb<\\ga;$\n\n\\item[(v)] $f_{v_{\\ga}}\\in A_i^{\\ga}$ and $A_i^{\\ga}tf_{v_{\\ga}}=A_i^{\\ga}v_{\\ga}$.\n\\end{itemize}\nWe let $G_i^0=G_i$ and $A_i^0=A_i$. If $\\ga=\\gb+1,$\nwe construct $(G_i^{\\alpha},\\ A_i^{\\alpha}, f_{v_{\\ga}})$ from $(G_i^{\\gb},\\ A_i^{\\gb})$\nas follows:\nIf there is some $f\\in A_i^{\\gb}$ with\n$A_i^\\gb tf = A_i^\\gb v_\\alpha$ we let $G_i^{\\alpha}=G_i^{\\gb},\\ $\n$A_i^{\\alpha}= A_i^{\\gb}$ and $f_{v_{\\ga}}=f$.\nOtherwise apply Theorem \\ref{thm main} to $G_i^\\gb,\\ A_i^\\gb$ with $u=1$ and $v=v_\\alpha$\nto obtain the groups $G_i^{\\alpha},\\ A_i^{\\alpha}$ and the element $f_{v_{\\ga}}\\in A_i^{\\ga}$.\nOf course, by construction, $A_i^{\\ga}$ contains no involutions\nand $A_i^{\\ga}\\cap G_i^{\\gb}=A_i^{\\gb}$. So (i)--(v) hold.\n\nFor a limit ordinal $\\alpha$ we put $G_i^{(\\ga,1)}=\\bigcup_{\\beta<\\alpha}\nG_i^\\beta,\\ A_i^{(\\ga,1)}=\\bigcup_{\\beta<\\alpha} A_i^\\beta$.\nWe now show that when $\\ga$ is a limit ordinal $A_i^{(\\ga,1)}$ is malnormal in $G_i^{(\\ga,1)}$.\nNotice that for each ordinal $\\gb<\\ga$ and each $g\\in G_i^{\\gb}\\sminus A_i^{\\gb},$ \nwe have $g\\in G_i^{(\\ga,1)}\\sminus A_i^{(\\ga,1)}$.\nIndeed else take the minimal $\\gc<\\ga$ such that $g\\in A_i^{\\gc}$. \nThen, by definition, $\\gc$ is not a limit ordinal,\nand $g\\in G_i^{\\gc-1}\\sminus A_i^{\\gc-1}$.\nSo $g\\in A_i^{\\gc}\\cap G_i^{\\gc-1}=A_i^{\\gc-1},$ \na contradiction. This means that $A_i^{(\\ga,1)}\\cap G_i^{\\gb}=A_i^{\\gb},$\nfor all ordinals $\\gb<\\ga$.\n \n\nSuppose now that $g^{-1}ag=b$ with $g\\in G_i^{(\\ga,1)}\\sminus A_i^{(\\ga,1)}$ and\n$a, b\\in A_i^{(\\ga,1)}$. Then, by the previous paragraph, there exists $\\gb<\\ga$ \nso that $a,b\\in A_i^{\\gb}$\nand $g\\in G_i^{\\gb}\\sminus A_i^{\\gb}$ and then we get a contradiction\nto the malnormality of $A_i^{\\gb}$ in $G_i^{\\gb}$. Clearly\n$A_i^{(\\ga,1)}$ contains no involutions. 
Next if there exists $f\\in A_i^{(\\ga,1)}$\nsuch that $A_i^{(\\ga,1)}tf=A_i^{(\\ga,1)}u_{\\ga}$ then we let \n$G_i^{\\ga}=G_i^{(\\ga,1)},\\ A_i^{\\ga}=A_i^{(\\ga,1)}$ and $f_{v_{\\ga}}=f$.\nElse we construct $G_i^{\\ga},\\ A_i^{\\ga}$ and $f_{v_{\\ga}}$ from \n$G_i^{(\\ga,1)},\\ A_i^{(\\ga,1)}$ using Theorem \\ref{thm main} with\n$u=1$ and $v=v_{\\ga}$ (just as in the construction above in the case\nof a non-limit ordinal).\nAgain we see that (i)--(v) hold.\n\nFinally put \n\\[\nG_{i+1}=\\bigcup_{\\alpha<\\rho} G_i^\\alpha,\\quad A_{i+1}=\\bigcup_{\\alpha<\\rho} A_i^\\alpha,\n\\]\n\\begin{center}\n\n\\end{center}\nand set \n\\[\n\\calg=\\bigcup_{i<\\omega}G_i,\\quad \\cala=\\bigcup_{i<\\omega}A_i\\quad\\text{and}\\quad X= \\cala\\backslash \\calg.\n\\]\nAs in the construction of $G_i^{(\\ga,1)},\\ A_i^{(\\ga,1)}$\nin the case where $\\ga$ is a limit ordinal, we see that $\\cala$ is malnormal in $\\calg$\nand that $\\cala\\cap G_i=A_i,$ for each $i<\\omega$.\nTo see that the action of $G$ on $X$ is $2$-transitive\njust note that any $v\\in \\calg\\sminus \\cala$ is contained in some $G_i$\nso that there is some $f_v\\in A_{i+1}\\subseteq\\cala$ with $A_{i+1} tf_v = A_{i+1}v$.\nSince $A_{i+1}\\le \\cala$ we see that $\\cala tf_v=\\cala v$ as required.\nSince $\\cala$ is malnormal in $\\calg$ the action of $\\calg$ on $X$\nis sharply $2$-transitive. By construction, $\\cala$ contains no involutions.\n\nFinally, as is well known, if $\\calg$ contains a non-trivial abelian\nnormal subgroup, then necessarily all involutions in $\\calg$\ncommute with each other (see, e.g., \\cite[Remark 4.4]{GMS}).\nBut, by our construction, this is not the case in $\\calg$.\nIndeed, if $G_1=G_0*\\lan f_1\\ran$ is a free product, then $t$ does not\ncommute with $f_1^{-1}tf_1$. \nSuppose that $G_1=\\lan G, f\\mid f^{-1}tf=v\\ran$ is\nan HNN extension. Let $s\\in G$ be an involution distinct from $t$\n(notice that $t$ is not in the center of $G$ since $A$ is malnormal in $G,$\nso such $s$ exists).\nThen $sf^{-1}sf$ and $f^{-1}sfs$ are in canonical form, so they are distinct,\nand the involutions $s$ and $f^{-1}sf$ do not commute\\footnotemark.\n\\footnotetext{Note that we could start with a group $G_0$ \nwhich already contains an involution that does not commute with $t$. Then\nit would immediately follow that $\\calg$ does not split. We thank Uri Bader\nfor pointing this out.}\nThis completes the proof of Theorem \\ref{thm s2t in char 2}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the standard cosmological paradigm, the formation of large-scale\nstructure is driven by the gravitational amplification of small\ninitial density fluctuations (e.g. Peebles 1980). In addition to\ngravity, hydrodynamical processes can influence the formation and\nevolution of galaxies, groups and clusters of galaxies. But since\nhydrodynamical effects play a minor role on scales larger than the\nsize of galaxy clusters, gravitational instability theory alone can\ndirectly relate the present day large-scale structure to the initial\ndensity field and provide the framework within which the observations\ncan be analyzed and interpreted. Gravitational instability is a\nnonlinear process, making numerical methods an essential tool for\nunderstanding the observed large-scale structure.~\\footnote{\nPresent address~: Racah Institute of Physics, The Hebrew University, \nJerusalem Israel}\n\nThere are two complementary numerical approaches to studying\ncosmological structure. 
The first relies on $N$-body techniques\ndesigned to solve an initial value problem in which the evolution of a\nself-gravitating system of massive particles is determined by forward\nnumerical integration of the Newtonian differential equations.\nBecause the exact initial conditions are unknown, comparisons between\nthese simulations and observations are mainly concerned with general\nstatistical properties.\n\nThe second approach works in the opposite direction, deriving from the\nobserved present-day distribution and peculiar motions of galaxies,\nand independently of the nature of the dark matter, certain features\nof the dynamics at earlier times. The numerical action method (NAM)\nbelongs to this second category of approaches. It arises from the\nobservation that the present-day distribution of galaxies, combined\nwith the reasonable assumption that their peculiar velocities vanish\nat early times, presents a boundary value problem that naturally lends\nitself to an application of Hamilton's principle in which stationary\nvariations of the action are found subject to the boundary conditions.\nThe result is a prediction of the full orbit histories of individual\ngalaxies, either with real space boundary conditions (Peebles 1989,\n1990, 1994, 1995) or, after a coordinate transformation, in redshift\nspace (Peebles {\\it et al.\\ } 2001, Phelps 2002).\n\nThe potential of NAM as a probe of galaxy dynamics and of cosmological\nparameters has been explored in a number of studies following the\nintroduction of the method in Peebles 1989. Possible applications\ninclude the full nonlinear analysis of orbit histories of nearby\ngalaxies (Peebles 1990, 1994, 1995; Sharpe {\\it et al.\\ } 2001), recovering the\ninitial power spectrum of density fluctuations (Peebles 1996),\npredicting the values of cosmological parameters (Shaya {\\it et al.\\ } 1995),\nand estimating the proper motions of nearby galaxies (Peebles {\\it et al.\\ }\n2000). Concerning the latter application, ground and space-based\nobservations will soon make possible the measurement of the full\nthree-dimensional velocities of many nearby galaxies and promise both\na rigourous test of NAM predictions and, given the additional\ndynamical constraints on galaxy motions, the possiblity of using NAM\nas a probe of individual masses of nearby galaxies.\n\nSince a central result of NAM, the past orbit histories of galaxies,\ncannot be confirmed by direct observations, $N$-body simulations\nprovide an important test of NAM and its key assumption that\ngalaxies can be approximated as discrete, non-merging objects\nthroughout their history. It is desirable then to test NAM in a\nscenario which approximates the complexity of the observational\nsituation but where all of the relevant physical quantities are known.\nPrevious tests of NAM using $N$-body simulations have either been\nconfined to a few dark matter haloes at the scale of the Local Group\n(Branchini \\& Carlberg 1994, Dunn \\& Laflamme 1995), traced the paths\nof individual dark matter particles rather than extended haloes\n(Nusser \\& Branchini 2000), or used simulations which demonstrate in\nprincipal the ability of NAM to recover particle orbits to a high\ndegree of accuracy but which do not reproduce the full complexity of\nextended mass distributions (Phelps 2002).\n\nIn this paper we extend the tests of NAM to simulations at a scale\napproaching that of the local supercluster with a catalogue containing\nseveral hundred extended objects modelled as particles. 
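To make the variational setup described above more concrete, the following one-dimensional toy problem (a schematic Python sketch, not the NAM implementation, with all variable names ours) shows how fixing the final position while leaving the initial point free in the variation automatically selects the path with vanishing initial velocity, the natural boundary condition that plays the role of NAM's quiescent early-time condition.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Toy mixed boundary conditions: x(T) is fixed, x(0) is left free,
# so the stationary path obeys xdot(0) = 0 (natural boundary condition).
# Constant-force potential V(x) = -f*x, so the exact path has xddot = f.
f, T, N, x_final = 2.0, 1.0, 200, 1.0
dt = T / N

def action(x_free):
    x = np.append(x_free, x_final)          # x_0 .. x_{N-1} free, x_N fixed
    kinetic = 0.5 * np.sum((np.diff(x) / dt) ** 2) * dt
    potential = np.sum(-f * x[:-1]) * dt    # V evaluated at the left points
    return kinetic - potential

res = minimize(action, np.full(N, x_final), method="L-BFGS-B")
x0_exact = x_final - 0.5 * f * T ** 2       # from xdot(0) = 0 and x(T) = x_final
print(res.x[0], x0_exact)                   # agree up to O(dt) discretisation error
\end{verbatim}

The same idea, generalised to comoving coordinates, an expanding background and many mutually gravitating mass tracers, underlies the reconstructions discussed below (with the caveat that in the full problem the relevant orbits are general stationary points of the action, not necessarily minima).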
We begin with\nan overview of the relevant properties of the $N$-body simulation and\nthe halo catalogue we derived from it, and follow with details of the\nversion of NAM used here, which includes a novel approach to the\nassignment of halo masses. We will then test the sensitivity of NAM\nboth as a probe of the total mass as well as of the linear bias, and\nexamine in some detail a representative solution, focusing on the\ncomparison between the NAM predictions and the actual halo orbits.\n\n\n\n\\section{The simulation}\n\n\\begin{figure}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics{omega.eps}}\n\\caption{The average matter density $\\Omega( 1$ (radial distance error greater than 10 per\ncent). For haloes located towards the edge of the catalogue ($> 24\nMpc\/h$) these are indicated by filled squares, and towards the center\nof the catalogue ($< 24 Mpc\/h$), by filled circles.}\n\\label{fig:viewfromhome}\n\\end{figure}\n\nThe top panel of Fig.~\\ref{fig:scatter} shows the error in the\npredicted distances as a function of the distance from the reference\nhalo. Here the excellent overall prediction of halo distances, with\ntypical errors of less than 3 per cent, can be seen most clearly.\nAny mismatch in the halo mass density relative to $\\om{m}$\nwould be revealed here by a tilt in the distance errors as a function\nof distance from the reference halo (with a positive slope indicating\nan overdensity and a negative slope an underdensity). As the distance\nerrors trace a line with vanishing slope, this confirms that the total\nmass for a choice of ${\\mathcal{M}}=2.7$ is well matched to $\\om{m}$.\nThe bottom panel of Fig.~\\ref{fig:scatter} compares the distance\nerrors with those obtained assuming zero peculiar velocities and\nHubble-flow distances ($d = cz_i H_0$). In the latter case the average\ndistance errors are 5 per cent. The difference in the\nsharpness of the peaks in the two histograms gives an indication of\nthe ability of NAM to correctly model the interparticle dynamics.\n\\begin{figure}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics{ddscatter.eps}}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics{histogram.eps}}\n\\caption{\n{\\it Top panel}~: Scatterplot of the error in radial distance predictions \nfor the best NAM solution, showing good overall recovery of the distances.\nPoint size is proportional to halo mass.\n{\\it Bottom panel}~: Histogram in $\\chi \\equiv (\\mu_i^{mod} - \\mu_i^{cat})\/\\sigma_i$\nof the same solution. For comparison the dotted line shows the errors \nobtained when Hubble-flow distances are used in place of $\\mu_i^{cat}$.}\n\\label{fig:scatter}\n\\end{figure}\n\n\nFig.~\\ref{fig:orbitcomp2} compares the NAM-reconstructed halo orbits\nwith the actual halo orbits, the latter being defined previously as\nthe CM motion of the dark matter particles comprising the halo at\n$z=0$. The reconstruction is more accurate for the heaviest haloes\n(top panel) than for the rest (bottom panel): for the former the\ndirectional error $\\overline {\\Delta \\theta} = 41^o$, while for the latter\n$\\overline {\\Delta \\theta} = 48^o$. 
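For reference, a minimal sketch of how the directional error just quoted can be evaluated is given below; the array names are ours and the snippet only illustrates the definition, it is not the code used for the reconstruction.

\begin{verbatim}
import numpy as np

def mean_directional_error(v_pred, v_true):
    """Mean angle, in degrees, between predicted and actual final-time
    velocity vectors; v_pred and v_true are (n_haloes, 3) arrays."""
    cosang = np.sum(v_pred * v_true, axis=1) / (
        np.linalg.norm(v_pred, axis=1) * np.linalg.norm(v_true, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))).mean()

# The text quotes roughly 41 deg for the heaviest haloes and 48 deg for
# the remainder when v_pred comes from NAM and v_true from the simulation.
\end{verbatim}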
By comparison, the chance\nthat a random orbit will have a directional error of $48^o$ or less,\nthat is, that a point placed at random within a sphere will fall within the\nvolume of the cone swept out by an opening angle of $2 * 48^o$, is about 17 per cent.\nAccording to the directional error the quality\nof the orbit reconstructions, like $\\chi^2$, is not a particularly\nsensitive function of ${\\mathcal{M}}$: far from the $\\chi^2$ minimum\nat the same $\\om{m}$ and ${\\mathcal{M}} = 1$, $\\overline {\\Delta \\theta_i}\n\\simeq 50^o$ for the entire catalogue. Similarly, the average initial\ndisplacement error $\\overline {\\Delta d_i}$, was 2.5 Mpc\/h for the best\nsolution at ${\\mathcal{M}} = 2.7$ , while far from the minimum at\n${\\mathcal{M}} = 1$ it was 2.9 Mpc\/h. A further indication of the\napproximate character of the predicted orbits is that the magnitude of\nthe initial displacement error is comparable to the total distance\ntravelled by the typical halo orbit: The centre of mass of the average\nhalo in the simulation travelled 3.2 Mpc\/h, while the reconstructed\nhaloes travelled an average of 2.6 Mpc\/h, the shorter path lengths in\nthe reconstruction being a feature of our halo mass assignment scheme\nas discussed in section 3.1. While this error may seem large, the\nchance that a random orbit with a total displacement of 2.6 Mpc\/h\nwill end up within 2.5 Mpc\/h of the actual initial position is only\n16 per cent (this is the volume overlap of two spheres of equal radii \nwhose centers are separated by a distance equal to their radii).\nA trial NAM\nreconstruction without the linear growth factor, assigning the full\nhalo mass at early times, gave similar values for the late-time\nmeasures $\\chi^2$ and $\\overline {\\Delta \\theta_i}$, while the early-time\nmeasure $\\overline{\\Delta d_i} \\sim 4.9 Mpc\/h$, or about twice the error.\nThe average total distance travelled by the haloes in these solutions\nwas 6.3 Mpc\/h, nearly twice as long as the actual halo paths and\nillustrative of the instability of the solutions when the haloes are\nallowed to keep their full masses in the initial time steps.\n\\begin{figure}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics{orbitcomphi-lpt.eps}}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics{orbitcomplo-lpt.eps}}\n\\caption{x-y projection of actual (solid lines) and reconstructed\norbits (dotted lines) for the heaviest haloes (top panel) and for all\nother haloes (bottom panel). Dotted lines indicate the actual\nhalo paths to $z=20$, while solid lines mark the paths\npredicted by NAM. Actual halo positions are indicated by circles of\nradii proportional to the halo mass. Straight radial dotted line\nsegments connect these positions to those predicted by NAM, as\nin Fig.~\\ref{fig:radialerror}. Heavier haloes tend to have more\naccurately reconstructed orbits: For the heaviest haloes $\\overline {\\Delta\n\\theta_i} = 41^o$ and for the remaining haloes $\\overline {\\Delta \\theta_i}\n= 48^o$. }\n\\label{fig:orbitcomp2}\n\\end{figure}\n\nFinally, Fig.~\\ref{fig:all_errors} compares three measures of errors\nin NAM reconstructions: $\\overline{\\Delta d_i}$ and $\\overline {\\Delta\n\\theta_i}$ are plotted on the x and y axes, respectively, while the\npoint size is proportional to $\\chi^2$. Haloes with $\\chi^2 > 1$ are\nshown as filled circles. 
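The remaining two diagnostics entering this scatterplot, and the isotropic baseline used above for the directional error, can be written down in a few lines; again this is an illustrative sketch with hypothetical variable names of our own rather than the analysis code.

\begin{verbatim}
import numpy as np

def chi_values(mu_mod, mu_cat, sigma):
    # normalised error in the predicted distance observable mu_i at z = 0;
    # the per-halo chi^2 used in the figure is the square of this quantity
    return (np.asarray(mu_mod) - np.asarray(mu_cat)) / np.asarray(sigma)

def initial_displacement_errors(x0_pred, x0_true):
    # distance in Mpc/h between the predicted and the actual positions
    # of each halo at the first timestep
    return np.linalg.norm(np.asarray(x0_pred) - np.asarray(x0_true), axis=1)

# Isotropic baseline for the directional error: a random direction lies
# within an angle theta of the true one with probability (1 - cos(theta))/2,
# about 0.17 for theta = 48 degrees, as quoted above.
print((1.0 - np.cos(np.radians(48.0))) / 2.0)   # prints ~0.165
\end{verbatim}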
$\\chi^2$ is fairly well correlated to\n$\\overline{\\Delta d_i}$, indicating that reconstructed orbits which begin\nat positions well removed from their actual starting positions in the\nsimulation are likely to end with inaccurately modeled distances.\nSignificantly, $\\chi^2$ is poorly correlated to $\\overline {\\Delta\n\\theta_i}$. This may have been expected since, in the absence of\nnearby haloes constraining the dynamics, the motion of a given halo in\nthe plane of the sky relative to the reference halo should be fully\ndegenerate. The extent to which this degeneracy is broken and the\ntransverse orbital motions at the present epoch are correctly\nrecovered is a measure of the sensitivity of NAM to the dynamics\nbetween haloes. The weak correlation between $\\chi^2$ and $\\overline\n{\\Delta \\theta_i}$ is the clearest evidence that the predicted halo\ndistances alone are not a sufficient discriminator of the quality of\nreconstructed orbits. \n\n\n\n\\begin{figure}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics{errors.eps}}\n\\caption{Scatterplot showing errors in the direction of the\nreconstructed final-time velocities, as well as errors in the early\nand late time predicted positions, for each of the 533 particles in\nthe best NAM solution. The x-axis shows $\\overline{\\Delta d_i}$, the\nerror in the initial placement of the haloes at the first timestep.\nThe y-axis shows $\\overline {\\Delta \\theta_i}$, the error in the\ndirection of the velocity vector at the final timestep. Halo orbits\nwhich most accurately predict the direction of the halo orbit at the\npresent epoch are thus found at the top of the graph. Point size is\nproportional $\\chi^2$.}\n\\label{fig:all_errors}\n\\end{figure}\n\n\n\\section{Discussion and conclusions}\n\nWe have shown, using a catalogue of over 500 dark matter haloes\nderived from a large $N$-body simulation at the scale of the local\nsupercluster, that it is possible with the numerical action method to\nreconstruct the full dynamical histories of dark matter haloes given\nthe masses and the redshift space coordinates at the present epoch.\nThe reconstruction is most successful in recovering the halo distances\nat the present epoch, with typical errors of less than 3 per cent.\nIndividual orbits paths, including the initial positions as well as\nthe direction of motion of the haloes at the present epoch, are\npredicted with less accuracy. By varying the relative contributions\nto the total mass from the haloes and the background, we have also\nfound a way to use NAM to directly measure the linear bias when the\ntotal mass density is known, although with an uncertainty of about 50\nper cent.\n\nGiven the dynamical complexity of millions of interacting particles,\nand the sweeping nature of NAM's central simplifying assumption that\ngalaxy haloes can be approximated as discrete, non-merging point\nmasses throughout their evolution, it is remarkable how successfully\nthe dynamics of a many-body system can be reconstructed on the basis\nof an incomplete catalogue of facts. The successes of NAM as it has\nbeen implemented here are of course partially offset by their\nweaknesses. Among these is the relatively poor quality of the\nreconstruction in the vicinity of massive haloes, clearly seen in\nFig.~5, which shows a breakdown in the non-linear regime where NAM,\nwhich is itself a fully non-linear method, might have potentially\noffered the most insight. 
In these regions $\\chi^2$ is a good\nindicator of poorly reconstructed orbits, but preliminary attempts to\nuse this information to nudge haloes into the correct orbits while not\nimposing any additional formal constraints have so far been\nunsuccessful. A second concern is the inability of NAM in many cases\nto isolate, on the basis of $\\chi^2$ alone, predicted halo orbits which are\nmoving in the wrong direction at the present epoch. Fig.~9 shows, for\nexample, 34 haloes moving more than $90^o$ in the wrong direction but\nwith good distances at the present epoch and thus low $\\chi^2$.\n\nThe above innacuracies may in part arise from\nthe details of our implementation, such as our ad hoc procedure of\nscaling of halo masses according to linear theory, and there is\ncertainly room here for improvement. The scale of the catalog is also\na factor to consider, and in particlar the density of mass tracers\nwithin it. This analysis should be repeated at the scale of the Local\nGroup, where a larger number of mass tracers acting within a smaller\nvolume may better constrain the dynamics and permit more accurate\norbit reconstructions. It is also possible that, in dynamical systems\nof this complexity, the angular positions, redshifts and masses are by\nthemselves insufficient to lift the degeneracies in the halo orbits,\nand that the full three-dimensional velocities at the present epoch\nwill be needed to accomplish this. This again is work to be\nundertaken at Local Group scales, where next-generation observations\nfrom SIM and GAIA hold out the promise of multiple galaxy proper\nmotion measurements with which to test the NAM predictions. One\nrelated concern is that part of the proper motion data may be needed\nto recover accurate orbits, leaving fewer remaining free parameters to\nassist with the more weighty problem of constraining individual galaxy\nhalo masses, although it is possible that only one of the two\ncomponents of the tangential velocity will be sufficient to break the\norbtial degeneracy. Finally, inaccuracies in the NAM predictions are\ndoubtless due at least in part to intrinsic limitations of the method\nand its assumptions, although we do not wish to suggest at this stage,\ngiven the work that remains to be done, that that an upper\nlimit on NAM accuracy in orbit reconstruction has yet been reached.\n\nWe anticipate that work on NAM in the near term will lie principally\nin two directions. The first is an extension of the above analysis, with\nfurther improvements in the implementation, to a high-resolution\nsimulation at the scale of the Local Group, where present-day\nthree-dimensional velocities can provide significant additional\ndynamical constraints. The second is a direct comparison of\nNAM with other reconstruction methods, both in real\nspace (e.g, Nusser \\& Dekel 1992, Gramman 1993, Croft \\& Gazta\\~{n}aga\n1998, Frisch {\\it et al.\\ } 2002, Mohayaee {\\it et al.\\ } 2005) and redshift space\n(e.g., Narayanan \\& Weinberg 1998, Monaco \\& Efstathiou 1999, Mohayaee\n\\& Tully 2005), that help to bridge the present-day observations of\nlarge-scale structure with the initial conditions prevailing in the\nearly universe.\n\nWe acknowledge the support of the Asher Space Research Institute. We\nwould like to thank Felix Stoehr for providing us with the snapshots\nfrom his simulation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}