{"text":"\\begin{abstract} We study non-linear differential equations on the punctured formal disc by considering the natural derived enhancements of their spaces of solutions. In particular, by appealing to results of the inverse theory in the calculus of variations, we show that a variational formulation of a differential equation is \\emph{equivalent} to the residue pairing inducing a (-1)-symplectic form on the derived space of solutions equipped with a certain decoration of its tangent complex. \\end{abstract}\n\n\n\n\\section{Introduction}\n\nWe work mainly over the formal punctured disc $\\Delta^{*}$. This is defined as the spectrum of $\\mathcal{K}:=k((z))$, where $k$ is our base field of characteristic $0$. On $\\Delta^{*}$ we will consider differential equations (possibly non-linear, of arbitrary order) $$D(z,y(z),y'(z),y''(z),...)=0,$$ and their spaces of solutions. In fact, we will consider their \\emph{derived} spaces of solutions, $\\operatorname{dSol}(D)$, which contain more information. This is a derived ind-scheme cut out by the equations on the Laurent coefficients $y_{j}$ implied by substituting $$y(z)=\\sum_{i\\in\\mathbb{Z}}y_{i}z^{i}$$ into the equation $D=0$. Note that the Laurent series in question is genuinely infinite in both directions, as the coefficients are naturally valued in the ring of functions on the ind-scheme of loops into $\\mathbb{A}^{1}$. This is responsible for the ind-structure on $\\operatorname{dSol}$. We will also consider equations on $\\Delta$, the unpunctured disc, with ring of functions $\\mathcal{O}:=k[[z]]$. In this case we do not have any ind-structure.\n\nOur main goal is to give a characterisation of \\emph{variational} differential equations in terms of the interaction of the residue form on $\\mathcal{K}$ and the cotangent complex of the derived space of solutions, $\\mathbb{L}(\\operatorname{dSol}(D))$. 
In fact, we will need to decorate $\\operatorname{dSol}(D)$ a little, by including as data a lift, along the natural map, of the tangent complex to the loop space of $\\mathbb{A}^{1}$. More specifically, we will see that demanding that the residue form induce (in a sense that we will make precise) a \\emph{Tate (-1)-symplectic} structure on $\\operatorname{dSol}(D)$ is equivalent to $D=0$ having a variational formulation. Of course, one direction is at least morally obvious, given the Euler-Lagrange equations. We remark that there are evident generalisations to collections of differential equations $D_{j}=0$ in functions $y_{i}(z)$, although we will stick to the case of one equation in one dependent variable for ease of exposition. \n\n\n\n\\section{Basic Properties}\n\\subsection{Loop Spaces and Tate Derived Ind-Schemes}We will make use of certain quite large algebro-geometrical objects, which unfortunately requires a little technology. The initiated reader should skip this section. We stress that whilst the objects are at first glance somewhat formidable (derived ind-schemes and Tate sheaves on them), the results and computations are simple, and can be understood without a full knowledge of the technology. We will only give a very brief overview of the relevant notions; the reader is referred to \\cite{Dr}, \\cite{GaiRoz}, \\cite{He} and \\cite{Pr} for much better accounts of the theory. \n\nWe will often say \\emph{category} when we mean $\\infty$-category. The category of \\emph{derived algebras} is by definition the $\\infty$-categorical localisation of commutative differential graded algebras, $\\mathbf{CDGA}_{k}$, at quasi-isomorphisms. We will consider all our derived spaces as living in the category $\\mathbf{Pst}_{k}$ of \\emph{pre-stacks} over $k$. This is the category of functors from derived algebras to spaces, see \\cite{GaiRoz}. 
Inside $\\mathbf{Pst}_{k}$ we have the category of \\emph{derived affine schemes}, $\\mathbf{dAff}_{k}$, opposite to the category of derived algebras. Further we have the category of \\emph{derived schemes}, $\\mathbf{dSch}_{k}$, and its category of ind-objects (along ind-systems of closed embeddings) $\\mathbf{IndSch}_{k}$, living inside $\\mathbf{Pst}_{k}$.\n\nFor a derived ind-scheme $X$ we have the category of sheaves, $QC(X)$, and its subcategory of perfect objects, $Perf(X)$. Inside $\\mathbf{Pro}(QC(X))$ we have the cotangent complex $\\mathbb{L}_{X}$, cf. \\cite{GaiRoz}. In order to treat the cotangent and tangent complexes on the same footing we will work with the category of \\emph{Tate sheaves} on $X$. This category, $\\mathbf{Tate}(X)$, is contained in $\\mathbf{Pro}(QC(X))$ and contains the images of both $QC(X)$ and $\\mathbf{Pro}(Perf(X))$. It is further endowed with a natural duality interchanging these images, cf. \\cite{He2}. \n\n\\begin{remark} We will only deal with ind-\\emph{affine} derived schemes, for which we will have honest presentations as topological $\\mathbf{CDGA}_{k}$, where by \\emph{topological} we will always mean admitting a neighbourhood basis at $0$ consisting of open ideals. Similarly, we will treat pro-modules for our algebras as linearly topologized modules and leave implicit that the continuity of the maps constructed ensures a map of pro-systems. \\end{remark}\n\n\\begin{remark} In the case of $X$ a point, $\\mathbf{Tate}(X)$ is the category of \\emph{locally linearly compact topological vector spaces}, cf. \\cite{Dr}. The prototypical example of such is the $k$-vector space $\\mathcal{K}$, endowed with the natural topology on Laurent series.\\end{remark}\n\n\\begin{definition} We say that an ind-scheme $X$ is \\emph{Tate} if the cotangent complex $\\mathbb{L}_{X}\\in\\mathbf{Pro}(QC(X))$ lies in the subcategory $\\mathbf{Tate}(X)$ of $\\mathbf{Pro}(QC(X))$. 
\\end{definition}\n\nCrucially, if $X$ is a Tate derived ind-scheme then one can speak of \\emph{shifted symplectic forms} on $X$. \n\nThe main example of a Tate space for us will be the loop space of the affine line, $\\mathcal{L}\\mathbb{A}^{1}$. This is represented by the ind-affine scheme $$\\operatorname{colim}_{n}\\operatorname{spec}k\\left[\\,y_{i}\\,|\\,i\\geq -n\\,\\right].$$ The $A$-points of $\\mathcal{L}\\mathbb{A}^{1}$ are $A((z))$. The cotangent complex, in $\\mathbf{Pro}(QC(\\mathcal{L}\\mathbb{A}^{1}))$, is globally trivial with fibre $\\mathcal{K}$, whence it is a Tate sheaf. The topological algebra of functions on $\\mathcal{L}\\mathbb{A}^{1}$ is by definition $$\\mathcal{O}(\\mathcal{L}\\mathbb{A}^{1}):=\\lim_{n}k\\left[\\,y_{i}\\,|\\,i\\geq -n\\,\\right].$$\n\n \\begin{definition} We define $$\\mathcal{A}:=\\lim_{n}k\\left[\\,y_{i}\\,|\\,i\\geq -n\\,\\right]((z)),$$ noting that we naturally have $y(z),y'(z),...\\in\\mathcal{A}$. \\end{definition}\n\nThe reader should bear in mind the following examples, which hopefully help give a feel for Tate symplectic forms on derived ind-schemes. \\begin{example} \\begin{itemize}\\item Consider $\\mathbb{A}^{2}$ with the standard symplectic form $dxdy$. Then the loop space $\\mathcal{L}\\mathbb{A}^{2}$ has Tate symplectic form $\\sum_{i}dx_{i}dy_{-i}$. \\item We could also consider the $(-1)$-shifted cotangent bundle of $\\mathbb{A}^{1}$, with coordinates $x$ and $\\xi$ say. Then on the loop space of this we obtain a $(-1)$-shifted symplectic form $\\sum_{i}dx_{i}d\\xi_{-i}$. Note that these are not finite sums of basic two-forms; they converge topologically since $dx_{i}\\rightarrow 0$ as $i\\rightarrow -\\infty$.\\item Let $f$ be a function on the loop space of $\\mathbb{A}^{1}$, for example we could take $f$ to be the constant term of $y(z)^{2}$. Then on the derived critical locus of this function we have a Tate $(-1)$-symplectic form. 
In fact in this case we obtain the loop space of the derived critical locus of the function $y^{2}$.\n\\item The loop space of a shifted symplectic derived scheme is Tate shifted symplectic: indeed we can pull back the two-form via the universal map to one on the product of the loop space with $\\Delta^{*}$ and then integrate against $\\frac{dz}{z}$, in accordance with the well-known construction of symplectic forms on mapping spaces in field theory.\\end{itemize}\\end{example}\n\n\n\\subsection{Solution Spaces}\nBy a \\emph{differential algebra} we will mean a (non-derived) $k$-algebra equipped with a distinguished derivation $\\partial$. Maps of differential algebras are defined in the evident way. We will encode differential equations on $\\Delta^{*}$ as elements of a certain differential algebra, which we now introduce. \\begin{definition} Let $\\mathcal{J}$ be the differential algebra defined by $$\\mathcal{J}:=\\mathcal{K}\\left[x_{0},x_{1},x_{2},...\\right],$$ equipped with the derivation $\\partial_{z}$ which is defined to act on $x_{i}$ by sending it to $x_{i+1}$, and to act on $\\mathcal{K}$ as differentiation with respect to $z$.\\end{definition}\n\n\\begin{remark} \\begin{itemize} \\item If $D\\in\\mathcal{J}$, we write $(D)_{\\partial}$ for the differential ideal generated by $D$. \\item If $A$ is a $k$-algebra, denote by $\\mathcal{J}_{A}$ the differential $A$-algebra $A((z))\\left[x_{0},x_{1},x_{2},...\\right]$.\\item We will consider $A((z))$ as a differential algebra with differential $\\partial_{z}$. \\item We can speak of differential algebras over a base differential algebra, and maps between these.\\end{itemize} \\end{remark}\n\nNow there is an obvious way to encode a differential equation $D=0$ as an element of $\\mathcal{J}$: indeed we let $x_{i}$ correspond to $y^{(i)}(z)$. We can then define the space of solutions rather cleanly. First we introduce some notation which we will use throughout the sequel. 
\\begin{definition}If $D\\in\\mathcal{J}$ we will denote by $D_{i}$ the function on $\\mathcal{L}\\mathbb{A}^{1}$ defined as the $z^{i}$-coefficient of $D(z,y(z),y'(z),y''(z),...)$. We will sometimes suggestively write $D_{i}$ as $$\\int D(z,y,y',y'',...)z^{-1-i}dz$$ when we want to stress the analogy with integration. \\end{definition}\n\n\\begin{remark} Notice that the sum $y(z)$ is not bounded in the Laurent direction; nonetheless the functions $D_{i}$ make sense in the topological algebra of functions on $\\mathcal{L}\\mathbb{A}^{1}$. \\end{remark}\n\n\\begin{definition} The functor, $\\operatorname{Sol}(D)$, which sends an algebra $A$ to the set of maps of differential $A((z))$-algebras, $\\mathcal{J}_{A}\/(D)_{\\partial}\\rightarrow A((z))$, is referred to as the \\emph{space of solutions} of $D$. \\end{definition} \\begin{remark} Let us note that a $k$-point of $\\operatorname{Sol}(D)$, which is by definition a morphism of differential algebras, $\\mathcal{J}\/(D)_{\\partial}\\rightarrow \\mathcal{K}$, really corresponds to a solution of $D$, in the natural sense. Indeed, compatibility with the differential structures on both sides means that the morphism is determined by the image of $x_{0}$. This image, $\\gamma(z)$ say, is now easily seen to satisfy the equation $D=0$.\\end{remark}\n\n\\begin{lemma} The functor $\\operatorname{Sol}(D)$ is representable by an ind-affine scheme. \\end{lemma} \\begin{proof} We can easily check that the closed subspace of $\\mathcal{L}\\mathbb{A}^{1}$ cut out by the equations $D_{i}=0$ represents the desired functor.\\end{proof}\n\nAs mentioned before, we are interested in the natural derived enhancement of the space $\\operatorname{Sol}(D)$, which we will denote $\\operatorname{dSol}(D)$. We first note that there is a notion of differential derived algebra, namely a derived algebra equipped with a derivation of cohomological degree $0$. 
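The coefficient functions $D_{i}$ just defined can be computed mechanically once the Laurent series is truncated. The following sympy sketch (the truncation order, the sample equation $D=y'-y^{2}$ and all names are ours, purely for illustration) extracts them as polynomials in the coefficients $y_{i}$.

```python
import sympy as sp

z = sp.symbols('z')
N = 3  # truncation order; the honest object is an infinite Laurent series

# truncated Laurent expansion y(z) = sum_{i=-N}^{N} y_i z^i
ys = {i: sp.Symbol(f'y_{i}') for i in range(-N, N + 1)}
y = sum(c * z**i for i, c in ys.items())

# the sample equation D = y' - y^2, expanded in the coefficients y_i
D = sp.expand(sp.diff(y, z) - y**2)

# clear the principal part so sympy can treat D as a polynomial in z
poly = sp.Poly(sp.expand(D * z**(2 * N)), z)

def D_coeff(m):
    """The function D_m: the z^m-coefficient of D(z, y(z), y'(z), ...)."""
    return poly.coeff_monomial(z**(m + 2 * N))

# e.g. D_coeff(0) == y_1 - y_0**2 - 2*(y_{-1}y_1 + y_{-2}y_2 + y_{-3}y_3)
```

For the untruncated loop space each $D_{m}$ is a (topologically convergent) series in the $y_{i}$; the truncation merely cuts this series off.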
\\begin{lemma} The pre-stack $\\operatorname{dSol}(D)$, sending a derived algebra $A$ to the space of maps of differential derived algebras over $A((z))$, $\\mathcal{J}_{A}\/(D)_{\\partial}\\rightarrow A((z))$, is representable by a derived ind-scheme. \\end{lemma}\\begin{proof} We now take the homotopy fibre of the functions $D_{i}$.\\end{proof}\n\n\\begin{remark} Let us write down explicitly the pro-derived algebra of functions on $\\operatorname{dSol}(D)$. For each $n$, we let $\\mathcal{O}^{\\geq -n}(\\operatorname{dSol}(D))$ be the derived algebra freely generated by elements of degree $0$, $y_{i}, i\\geq -n$, and elements $\\xi_{i},i\\geq -n$ of degree $-1$, with differential determined by $\\partial(\\xi_{j})=D_{j}$. We then take the limit of this family to obtain a topological $\\mathbf{CDGA}_{k}$, which models the algebra of functions on $\\operatorname{dSol}(D)$. \\end{remark} \n\n\n\\begin{remark} We can also consider the subspace of solutions which extend to the disc $\\Delta$. In this case we no longer have ind-objects, and simply have a scheme (resp. derived scheme), denoted $\\operatorname{Sol}^{+}(D)$ (resp. $\\operatorname{dSol}^{+}(D)$). \\end{remark}\n\n\\subsection{Tangent Complex and Linearisation} We want to understand the local structure of $\\operatorname{dSol}(D)$ near a solution $\\gamma(z)\\in\\mathcal{K}$. To do so let us recall the \\emph{linearisation} of $D$ at $\\gamma$. This is a linear differential operator on $\\mathcal{K}$.\\begin{definition} We denote by $\\mathcal{L}_{\\gamma}(D)$ the linear differential equation on $\\Delta^{*}$ defined by $$D(z,\\gamma(z)+\\epsilon y(z),\\gamma'(z)+\\epsilon y'(z),...)=0\\, \\operatorname{mod}\\, \\epsilon^{2}.$$ We will abusively also write $\\mathcal{L}_{\\gamma}(D)$ for the corresponding linear differential operator $\\mathcal{L}_{\\gamma}(D):\\mathcal{K}\\rightarrow\\mathcal{K}$. 
\\end{definition}\n\nWe then have the following easy lemma: \\begin{lemma} The tangent complex to $\\operatorname{dSol}(D)$ at $\\gamma$ is equivalent to the length one complex (concentrated in degrees 0 and 1) corresponding to $\\mathcal{L}_{\\gamma}(D)$, i.e. we have $$\\mathbb{T}_{\\gamma}(\\operatorname{dSol}(D))\\cong\\left(\\mathcal{K}\\xrightarrow{\\mathcal{L}_{\\gamma}(D)}\\mathcal{K}\\right).$$ A similar result holds for $\\operatorname{dSol}^{+}(D)$, with $\\mathcal{O}$ in place of $\\mathcal{K}$.\\end{lemma} \\begin{proof}This can be deduced functorially; however, we choose to prove it directly by constructing an isomorphism. Recalling the explicit description of functions on $\\operatorname{dSol}(D)$, in terms of the variables $y_{i}$ and $\\xi_{j}$, the tangent complex is generated by degree $0$ elements $\\partial_{y_{i}}$ and degree $1$ elements $\\partial_{\\xi_{i}}$, with the condition that these are topologically negligible as $i\\rightarrow \\infty$. We map these to $z^{-i}\\in\\mathcal{K}$ in their respective cohomological degrees, and it is easy to compute that this intertwines the respective differentials. A similar argument works for $\\mathcal{O}$. \\end{proof}\n\nIn fact we can work globally. Given $D$ we now consider the expression $$\\mathcal{L}(D):=D(z,y(z)+\\epsilon w(z),y'(z)+\\epsilon w'(z),...)=0\\, \\operatorname{mod}\\, \\epsilon^{2}.$$ We consider this as a linear differential operator (in the $z$-direction) in the dependent variable $w$, $$\\mathcal{L}(D):\\mathcal{A}\\rightarrow\\mathcal{A}.$$ We may also consider this as a length one complex of sheaves on $\\mathcal{L}\\mathbb{A}^{1}$, noting that the differential operator is linear over $\\mathcal{L}\\mathbb{A}^{1}$. 
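Passing from $D$ to its linearisation is a mechanical first-order expansion in $\epsilon$, which can be automated. A small sympy sketch (the sample equation $D=y''-y^{2}$ and all names are ours):

```python
import sympy as sp

z, eps = sp.symbols('z epsilon')
w = sp.Function('w')(z)          # the variation
gamma = sp.Function('gamma')(z)  # the point (e.g. a solution) at which we linearise

def linearise(D, point):
    """Coefficient of eps in D[point + eps*w]: the linearised operator
    applied to the variation w."""
    return sp.expand(sp.diff(D(point + eps * w), eps).subs(eps, 0))

# sample equation D[y] = y'' - y^2
D = lambda y: sp.diff(y, z, 2) - y**2
L = linearise(D, gamma)  # w'' - 2*gamma*w
```

At a solution $\gamma$ the operator $w\mapsto w''-2\gamma w$ computed here is exactly the degree $0\to 1$ map of the tangent complex in the lemma above.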
The generalisation of the above lemma is as follows: \\begin{lemma} We have an isomorphism $$\\mathbb{T}(\\operatorname{dSol}(D))\\cong \\iota^{*}\\left(\\mathcal{A}\\xrightarrow{\\mathcal{L}(D)}\\mathcal{A}\\right),$$ with $\\iota$ the natural map $\\operatorname{dSol}(D)\\rightarrow\\mathcal{L}\\mathbb{A}^{1}$. \\end{lemma} \\begin{proof} The isomorphism is given by the same formula as the lemma above. \\end{proof}\n\n\\begin{remark} \\begin{itemize} \\item This allows a computation of the cotangent complex to $\\operatorname{dSol}(D)$ as well: indeed it is equivalent to the length one complex (concentrated in degrees -1 and 0 now) corresponding to the formal adjoint to $\\mathcal{L}_{\\gamma}(D)$, with respect to the residue form. \\item Note that this proves that $\\operatorname{dSol}(D)$ is Tate. \\item We will later be interested in symplectic forms; the above computation makes it clear that it is unnatural to expect one on $\\operatorname{dSol}^{+}$, as $\\mathcal{O}$ is not self-dual. \\end{itemize}\\end{remark}\n\n\n\\begin{example}\\begin{itemize}\\item We work here on the disc $\\Delta$ and consider the \\emph{Clairaut equation}, which depends on a polynomial $F$. We let $D_{F}$ be the equation $$y=zy'+F(y').$$ We find an $\\mathbb{A}^{1}$ of solutions, which is to say a morphism $\\mathbb{A}^{1}\\rightarrow\\operatorname{dSol}^{+}(D_{F})$, mapping $$t\\mapsto \\gamma_{t}(z):=tz+F(t).$$ We compute the linearised equation at $\\gamma_{t}$ to be given by $$\\mathcal{L}_{\\gamma_{t}}(D_{F}): y=(z+F'(t))y',$$ so that the corresponding differential operator on $\\mathcal{O}$ gives the tangent complex. It is easy to see then that there is higher tangent cohomology at the solution $\\gamma_{t}$ of $D_{F}$ if and only if $t$ is a root of $F'$. 
We are not aware of an interpretation of these higher tangent classes in terms of the classical geometry of the equation $D_{F}$.\n\\item Again working over $\\Delta$ let us consider $$D: (y')^{2}=4y,$$ with the $\\mathbb{A}^{1}$ of solutions $$\\gamma_{\\lambda}(z):=(z+\\lambda)^{2}.$$ The linearised equation at the solution $\\gamma_{\\lambda}$ is $$(z+\\lambda)y'=y$$ and an easy computation again shows that we have isolated points at which there is higher tangent cohomology, this time precisely at $\\lambda=0$.\n\\item We work now on $\\Delta^{*}$. Fix elements $a,b\\in \\mathcal{K}$ and consider the non-linear equation $$D: \\,y^{2}=ay''+by',$$ and let $\\gamma$ be a solution. We compute the linearised equation as $$\\mathcal{L}_{\\gamma}(D):\\,2\\gamma y=ay''+by',$$ and a simple computation tells us that the cotangent complex to this equation is shifted self-dual via the residue form if and only if $b=a'$, so that the equation is equivalently $$y^{2}=(ay')'.$$ \\end{itemize}\\end{example}\n\n\n\\subsection{Linear Equations and (-1)-Symplectic Forms}\nLet us now deal with the case of a linear differential equation $D=0$, with the corresponding linear endomorphism of $\\mathcal{K}$ denoted $\\mathcal{L}_{D}$ to avoid confusion. A consequence of the above argument is that the tangent complex $\\mathbb{T}(\\operatorname{dSol}(D))$ is free with fibre the complex $\\mathcal{L}_{D}:\\mathcal{K}\\rightarrow\\mathcal{K}$. 
Now let us assume that $D$ is self-adjoint with respect to the residue pairing $$(f,g):=\\int(fg)dz:=\\operatorname{res}_{0}(fgdz).$$ In this case we see that the pairing $\\int$ gives an isomorphism $$\\mathbb{T}(\\operatorname{dSol}(D))\\rightarrow\\mathbb{L}(\\operatorname{dSol}(D))[-1],$$ which in fact we can see comes from a (-1)-symplectic structure $\\omega_{\\int}$, which is written with respect to the $y$ and $\\xi$ variables as $$\\omega_{\\int}=\\sum_{i}dy_{i}d\\xi_{-1-i}.$$ We state this now as a lemma, which we note gives a purely geometric characterisation of self-adjointness, one which could for example be generalised immediately to non-linear equations. \\begin{lemma} If $D$ is a linear differential equation on the disc, then it is self-adjoint if and only if the residue form induces a (-1)-symplectic structure on the derived space of solutions $\\operatorname{dSol}(D)$. \\end{lemma}\n\n Now we can in fact say more: indeed we can realise $\\operatorname{dSol}(D)$ as a derived critical locus of a function on the loop space $\\mathcal{L}\\mathbb{A}^{1}$. It can in fact be proven that in the case of a self-adjoint linear differential equation $$D:=\\sum_{i}a_{i}(z)y^{(i)}(z)=0,$$ $\\operatorname{dSol}(D)$ is the derived critical locus of the function $$\\frac{1}{2}\\int \\sum_{i}a_{i}(z)y(z)y^{(i)}(z)dz$$ on $\\mathcal{L}\\mathbb{A}^{1}$. It is the goal of the next section to generalise this result to non-linear equations. \n\n\\section{Variational Calculus}\n\n\\subsection{Euler-Lagrange Equations} In this subsection we adapt some standard notions of variational calculus to our purely algebraic setting. \nWe begin with an element $D$ of $\\mathcal{J}$, and form the function on $\\mathcal{L}\\mathbb{A}^{1}$, $$\\alpha_{D}:=\\int D(z,y,y',y'',...)dz.$$ We note that if $D$ is a \\emph{total derivative}, which is to say lies in the image of the derivation $\\partial_{z}$ of $\\mathcal{J}$, then $\\alpha_{D}=0$. 
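The vanishing of $\alpha_{D}$ on total derivatives is just the statement that a derivative of a Laurent series has no residue, which is easy to check symbolically on truncated Laurent series; a sympy sketch (truncation and names ours):

```python
import sympy as sp

z = sp.symbols('z')
N = 4  # truncation order for the Laurent expansion of y(z)
ys = {i: sp.Symbol(f'y_{i}') for i in range(-N, N + 1)}
y = sum(c * z**i for i, c in ys.items())

def residue_integral(expr):
    """int expr dz := res_0(expr dz), i.e. the z^{-1}-coefficient of expr."""
    return sp.expand(expr).coeff(z, -1)

# a total derivative, e.g. D = d/dz(z*y^2), has vanishing integral alpha_D
assert residue_integral(sp.diff(z * y**2, z)) == 0

# likewise y*y' = (y^2)'/2 integrates to zero, whereas y^2 itself need not:
assert residue_integral(y * sp.diff(y, z)) == 0
print(residue_integral(y**2))  # a nonzero quadratic in the y_i
```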
We expect then that the critical locus of $\\alpha_{D}$ should be described by the Euler-Lagrange equations. To formulate these let us recall the variational derivative, acting on $\\mathcal{J}$.\n\n\\begin{definition} The variational derivative, $\\delta:\\mathcal{J}\\rightarrow\\mathcal{J}$ is defined as follows: writing $\\partial_{x_{i}}=:\\partial_{i}$, we set $$\\delta:=\\sum_{i}(-1)^{i}\\partial_{z}^{i}\\partial_{i}.$$ \\end{definition}\n\nWe now have a purely algebraic version of the Euler-Lagrange equations:\n\n\\begin{lemma} (\\emph{Euler-Lagrange}.) For an element $D\\in\\mathcal{J}$, we have an equality of functions on $\\mathcal{L}\\mathbb{A}^{1}$, $$\\frac{\\partial}{\\partial y_{i}}\\int D(z,y,y',y'',...)dz=\\int (\\delta D)(z,y,y',y'',...)z^{i}dz.$$ \\end{lemma} \\begin{proof}Let us introduce the expression $w=w(z)=\\sum_{i}w_{i}z^{i}$ in indeterminates $w_{j}$; we write $w',w'',...$ in the evident manner. Now a little thought shows that $\\frac{\\partial\\alpha_{D}}{\\partial y_{i}}$ is the coefficient of $\\epsilon w_{i}$ in the expression $$\\int D(z,y+\\epsilon w,y'+\\epsilon w', y''+\\epsilon w'',...)dz.$$ Now we recall the elementary fact that $ab^{(n)}$ is equivalent modulo total derivatives to $(-1)^{n}a^{(n)}b$, and we integrate by parts to remove any derivatives of $w$, obtaining $$\\int (\\delta D)(z,y,y',y'',...)w(z)dz$$ as the term of order $\\epsilon$. Recalling that $w(z)=\\sum w_{j}z^{j}$, we see that the coefficient of $w_{j}$ is thus $(\\delta D)_{-1-j}$ as required. \\end{proof}\n\n\\begin{corollary} The residue pairing endows the space $\\operatorname{dSol}(\\delta D)$ with a Tate (-1)-symplectic form. \\end{corollary}\\begin{proof} The Euler-Lagrange equations produce an isomorphism $\\operatorname{dSol}(\\delta D)\\cong \\operatorname{dcrit}(\\alpha_{D})$, where $\\operatorname{dcrit}$ denotes the derived critical locus. 
It is a standard result that a derived critical locus of a function on a smooth space is endowed with a (-1)-symplectic form and this generalises readily to a function on a formally smooth Tate space such as $\\mathcal{L}\\mathbb{A}^{1}$. One then checks that the standard symplectic form on $\\operatorname{dcrit}(\\alpha_{D})$ corresponds to the form $\\omega_{\\int}:=\\sum dy_{i}d\\xi_{-1-i}$ on $\\operatorname{dSol}(\\delta D)$. \\end{proof}\n\n\n\nThe main result of this note is a sort of converse to the above. First, a couple of definitions: \\begin{definition} By a \\emph{framing} of a derived ind-scheme equipped with a map to $\\mathcal{L}\\mathbb{A}^{1}$ we mean a lift of its tangent complex to a complex of sheaves on $\\mathcal{L}\\mathbb{A}^{1}$; we consider $\\operatorname{dSol}(D)$ as framed by the length one complex of sheaves $\\mathcal{A}\\xrightarrow{\\mathcal{L}(D)}\\mathcal{A}$.\\end{definition} \\begin{definition} We will say that the residue form induces a (-1)-symplectic form on $\\operatorname{dSol}(D)$ if $\\operatorname{dSol}(D)$ admits a (-1)-symplectic form which lifts to the framing, on which it acts as the residue form.\\end{definition} \\begin{theorem} Let $D=0$ be a differential equation on $\\Delta^{*}$ such that the residue pairing endows $\\operatorname{dSol}(D)$ with a Tate (-1)-symplectic form. Then $D=0$ admits a variational formulation, namely there is an $A\\in\\mathcal{J}$ with $\\delta A=D$. \\end{theorem}\n\n\n\n\\begin{proof} With respect to our model with variables $y_{i}$ and $\\xi_{j}$, we see that we are assuming $\\sum dy_{i}d\\xi_{-1-i}$ is a Tate symplectic form. This will hold precisely if it is closed for the internal differential on $\\mathcal{O}(\\operatorname{dSol}(D))$, with respect to our usual model. 
Such is true precisely if the following conditions, which we think of as a sort of \\emph{integrability}, hold: for all $i,j$, $$\\frac{\\partial D_{-1-i}}{\\partial y_{j}}=\\frac{\\partial D_{-1-j}}{\\partial y_{i}}.$$ We note now that this easily implies that there is some $A\\in \\mathcal{O}(\\mathcal{L}\\mathbb{A}^{1})$ such that $\\partial_{y_{i}}A=D_{i}$, simply because the de Rham cohomology of $\\mathcal{L}\\mathbb{A}^{1}$ vanishes. This is not good enough however, as it does not produce a variational formulation.\n\nTo produce such, we must appeal to the inverse theory in the calculus of variations. In particular it is known (cf. \\cite{Kru}) that we must check that the following \\emph{Helmholtz integrability conditions} hold: for all $l\\geq 1$ we have the equality, which we denote $H_{l}(D)$, $$(1+(-1)^{l+1})\\partial_{l}D=\\sum_{k>l}(-1)^{k}\\binom{k}{l}\\partial_{z}^{k-l}\\partial_{k}D.$$\n\nNow we rewrite the equations $$\\frac{\\partial D_{-1-i}}{\\partial y_{j}}=\\frac{\\partial D_{-1-j}}{\\partial y_{i}},$$ using the Euler-Lagrange equations, as $$\\int (z^{j}\\delta(z^{i}D)-z^{i}\\delta(z^{j}D))dz=0,$$ for all $i,j$.\n\n For brevity assume that $D$ is of order $2$. We compute that the above integrand is given by $$(j-i)z^{i+j-1}\\partial_{y'}D+2(i-j)z^{i+j-1}(\\partial_{y''}D)'+(i(i-1)-j(j-1))z^{i+j-2}\\partial_{y''}D.$$ We now integrate the middle term by parts so that we obtain a common factor of $z^{i+j-1}$. We deduce that for all $i,j$ we have $$\\int z^{i+j-1}(\\partial_{y'}D-(\\partial_{y''}D)')dz=0,$$ whence we see that the Helmholtz condition $\\partial_{y'}D=(\\partial_{y''}D)'$ holds. \n\n In general we argue as follows: we first integrate by parts so that we have a common factor of $z^{i+j-1}$. 
This gives the first Helmholtz equation $H_{1}$, which we note implies that $\\partial_{y'}D$ is a total derivative in $z$; this then allows us to substitute for $\\partial_{y'}D$ and integrate by parts until we obtain a common factor of $z^{i+j-2}$, from which we obtain the second Helmholtz condition, and so on. Note that there is a subtlety, namely that we can only perform the above integration by parts for generic values of $i,j$. Indeed, we cannot integrate $z^{-1}$. Nonetheless it is easy to see that generic vanishing of $D_{n}$, which is to say vanishing for all but finitely many $n$, implies vanishing of $D$, assuming $D$ is non-constant in the dependent variable $y$.\n\nNow according to a standard result in the inverse theory of the calculus of variations, we can construct a Lagrangian, cf. the results of Chapter 4 of \\cite{Kru}. In fact we can construct one explicitly according to the recipe of Vainberg-Tonti (again cf. Chapter 4 of \\cite{Kru}). Such is given by $$\\mathcal{L}:=y\\int_{0}^{1}D(z,ty,ty',ty'',...)dt,$$ where we interpret the definite integration as a linear form in the evident manner. \n\\end{proof}\n\n \\begin{remark} There are a couple of notable aspects of this result. \\begin{itemize} \\item It is crucial that we work on a punctured disc $\\Delta^{*}$. The corresponding theorem is not true on $\\Delta$. \\item It is not the case that $\\operatorname{dSol}(D)$ admitting a (-1)-symplectic form implies that $D$ admits a variational formulation. Indeed there are linear differential operators $D$ whose adjoint operator $D^{*}$ is simply $-D$, for example $D=ay'+\\frac{a'}{2}y$. In this case $\\operatorname{dSol}$ is certainly (-1)-symplectic although $D$ does not admit a variational formulation. \\end{itemize}\\end{remark}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section*{Acknowledgements}\nS.C. is supported by a JSPS KAKENHI grant.\nY.N. would like to thank Kohei Fujikura, Motoo Suzuki and Tsutomu T. 
Yanagida for discussions.\nY.N. is grateful to Kavli IPMU for their hospitality during the COVID-19 pandemic.\nJ.Z. is supported in part by the National NSF of China under grants 11675086 and 11835005.\n\n\\vspace{0.5cm}\n\n{\\bf Appendix: one intermediate scale.--}\nThe simplest possibility of the multi-step $SO(10)$ breaking is the case with one intermediate scale,\n$SO(10) \\to H \\to G_{\\mathrm{SM}} \\times M$.\nWe focus on models with $H=G_{3221}$ or $G_{421}$\nwhere cosmic strings are formed at the intermediate scale $M_R$ as shown in Table~\\ref{tab:vacuum}.\n\nThe hierarchical VEVs, $|s|\\sim |a|\\sim M_X \\gg |\\sigma| \\sim M_R \\gg |b| \\sim M_2 = M_R^2\/M_X$,\nlead to the breaking pattern of $SO(10) \\to G_{3221} \\to G_{\\mathrm{SM}} \\times M$.\nThe mass spectrum of this model is obtained by taking the limit of $M_C \\rightarrow M_X$\nin Table~\\ref{tab:mass-spectrum}.\nAs in the case of two intermediate scales,\nthe gauge couplings of $SU(2)_R$ and $U(1)_{B-L}$ are matched to that of the SM hypercharge\nat $M_R$ through Eq.~\\eqref{eq:matching-3221}.\nAll the gauge couplings of $G_{3221}$ run further to the unification scale $M_X$\nand unify into a unique value $\\alpha_U$.\nConsequently, we obtain the one-loop level relationship between\nthe gauge coupling constants $\\alpha_i$ at $M_Z$ and $\\alpha_U$ as\n\\begin{align}\n \\frac{2\\pi}{\\alpha_i (M_Z)} &= \\frac{2\\pi}{\\alpha_U} + \\left[\n b_i^{(1)} \\ln\\frac{M_S}{M_Z}\n + b_i^{(2)} \\ln\\frac{M_2}{M_S} \\right. 
\\notag \\\\\n &\\left.\n + b_i^{(a)} \\ln\\frac{M_R}{M_2}\n + b_i^{(b)} \\ln\\frac{M_X}{M_R}\n \\right],\n \\label{eq:gcu-2}\n\\end{align}\nwhere $b_i^{(1)}$ and $b_i^{(2)}$ are given in Table~\\ref{tab:RGE}, while $b_i^{(a)}=(57\/5,1,-3)$ and $b_i^{(b)}=(9,1,-3)$.\nWe solve the three equations \\eqref{eq:gcu-2} in terms of the three parameters $M_R$, $M_X$, and $\\alpha_U$, and obtain a set of solutions as a function of $M_S$.\nIt is found that the correct hierarchy $M_Z < M_S < M_R < M_X$ requires\n$M_S \\lesssim 1\\,\\mathrm{TeV}$, which is already excluded by collider searches.\n\nThe breaking pattern of $SO(10) \\to G_{421} \\to G_{\\mathrm{SM}} \\times M$\nis realized by the hierarchical VEVs, $|s| \\sim |b| \\sim M_X \\gg |\\sigma| \\sim M_R \\gg |a| \\sim M_2$.\nIn this setup, there are light degrees of freedom described as $(6,1)_{\\pm 4\/3}$ and $(1,1)_{\\pm 0}$ under $G_{\\mathrm{SM}}$, all of which have masses of $\\mathcal{O}(M_2)$, in addition to the would-be Nambu-Goldstone bosons with masses of $\\mathcal{O}(M_R)$.\nAt the intermediate scale $M_R$, the matching between the gauge coupling constants is given by\n\\begin{align}\n &\\alpha_4 (M_R) = \\alpha_3 (M_R), \\\\\n &\\frac{2}{5}\\alpha_4^{-1} (M_R) + \\frac{3}{5}\\alpha_{1R}^{-1} (M_R) = \\alpha_1^{-1} (M_R),\n\\end{align}\nwhere $\\alpha_{1R}$ denotes the $U(1)_R$ gauge coupling.\nThe RG evolution of the gauge couplings is again governed by Eq.~\\eqref{eq:gcu-2}, though in this case $b_i^{(a)}=(97\/5,1,2)$ and $b_i^{(b)}=(81\/5,1,0)$.\nAgain we find that the hierarchy $M_Z < M_S < M_R < M_X$ requires $M_S < 1\\,\\mathrm{TeV}$, which is already excluded.\n\n\n{\\bf Appendix: threshold corrections.--}\nWe here estimate threshold corrections to the couplings at the GUT scale $M_X$\nfrom the spectrum of $S$, $A$, $\\Sigma$ and $\\bar\\Sigma$.\nIf we ignore $\\bf 10$ and $\\bf 120$, the theory is defined by the terms in Eq.~\\eqref{eq:SO10_lagrangian}. 
Applying the vacuum conditions, the mass parameters $m_S$, $m_A$, and $m_\\Sigma$ can be traded for the VEVs $s$, $a$ and $\\sigma$. The remaining free parameters are the couplings $\\lambda$, $\\lambda_S$, $\\eta_S$, $\\bar\\eta_S$ and $\\eta_A$. The threshold corrections can then be parametrized by these dimensionless couplings and the VEVs. The one-loop contribution to the running coupling $\\alpha_i^{-1}(Q)$ at $Q\\gtrsim M_X$ from all chiral superfields with the same $G_{\\rm SM}$ representation $R$ is $2\\pi\\Delta\\alpha_i^{-1}(Q) = \\sum_{j} b_i^R \\ln\\frac{m_j}{Q}$. \nHere $b_i^R = l_i^R$ is the Dynkin index of the representation $R$. Since these superfields have the same SM quantum numbers and mix with each other, their mass terms are generally described by a non-diagonal mass matrix $M(R)$ after intermediate symmetry breakings, as given in the appendix of \\cite{Melfo:2010gf}. Neglecting Nambu-Goldstone bosons\\footnote{The symmetry breaking scale\nis defined as the mass of the gauge bosons, so Nambu-Goldstone bosons do not contribute to the threshold correction.},\nthe contribution can be evaluated as\n\\begin{equation}\n2\\pi\\Delta\\alpha_i^{-1}(Q)\n=\nb_i^R \\ln \\left|\\frac{a_k(M(R))}{Q^{n-k}} \\right|\\,,\n\\end{equation}\nwhere $n$ is the dimension of the mass matrix $M(R)$, $k$ is the number of zero eigenvalues that correspond to the Nambu-Goldstone bosons, and $a_k(M(R))$ is the coefficient of the $x^k$ term of the characteristic polynomial $|{\\rm Det}(M(R)-x{\\bf 1})|$. For comparison, the 1-loop step-wise contribution to the running coupling is \n$2\\pi\\Delta_0 {\\alpha}_i^{-1}(Q) = \\sum_{j} b_i^R \\ln\\frac{\\tilde Q_j}{Q}$, where $\\tilde Q_j\\in \\{M_1,\\,M_2,\\,M_R,\\,M_C,\\,M_X\\}$ is the mass scale of the particle $j$. 
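The quantity $a_{k}(M(R))$ is easy to extract with a computer algebra system. In the sketch below the $3\times 3$ matrix is a toy stand-in (not one of the matrices of \cite{Melfo:2010gf}) with a single zero eigenvalue, for which $|a_{k}|$ is the product of the non-zero eigenvalues, i.e. the Nambu-Goldstone-stripped determinant.

```python
import sympy as sp

def a_k(M, k):
    """Coefficient of x^k in Det(M - x*1), as in the threshold formula."""
    x = sp.symbols('x')
    charpoly = (M - x * sp.eye(M.shape[0])).det()
    return sp.Poly(charpoly, x).coeff_monomial(x**k)

# toy mass matrix with eigenvalues 2, 6 and one zero mode
M = sp.Matrix([[2, 0, 0],
               [0, 3, 3],
               [0, 3, 3]])

# one zero eigenvalue, so k = 1, and |a_1| = 2*6
print(abs(a_k(M, 1)))  # 12
```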
The threshold correction $\\lambda_i\\equiv 2\\pi\\sum_R\\left(\\Delta\\alpha_i^{-1}(Q) - \\Delta_0\\alpha_i^{-1}(Q)\\right)$ is then\n\\begin{equation}\n\\lambda_i\n=\n\\sum_R\nb_i^R \\ln \\left|\\frac{a_k(M(R))}{\\prod_{j\\neq {\\rm NG}}\\, \\tilde Q_{j} } \\right|\\,,\n\\label{eq:thres_corr}\n\\end{equation} \nwhere NG stands for the Nambu-Goldstone bosons. For gauge coupling unification, it is more convenient to calculate\n\\begin{equation}\n\\Delta\\lambda_{ij}\\equiv \\lambda_i - \\lambda_j\\,.\n\\end{equation}\nGiven the mass matrices in \\cite{Melfo:2010gf}, the calculation of $\\Delta\\lambda_{ij}$ is straightforward with Eq.~\\eqref{eq:thres_corr}.\nFor our model A, identifying $M_X=s$, $M_C=a$, $M_R=\\sigma$, and in the limit of $s\\gg a \\gg \\sigma$, we obtain\n\\begin{equation}\n\\Delta\\lambda_{12}\n\\simeq -7.4\n-\\frac{2}{5}\\left(\n16\\ln \\lambda -5\\ln\\eta_A -2\\ln\\xi^3_S\n\\right),\n\\end{equation}\nand\n\\begin{align}\n\\Delta\\lambda_{13}\n\\simeq \n\\begin{cases}\n-9.0\n-\\frac{3}{5}\\left(\n4\\ln \\lambda -5\\ln\\eta_A +2\\ln\\xi^3_S\n\\right), \\\\[1ex]\n\\qquad \\qquad \\qquad \\qquad \\qquad \\text{for $\\sigma^2/a > a^2/s$,}\n \\\\[1ex]\n-9.0\n+\\ln\\frac{a^3}{s\\sigma^2}\n-\\frac{3}{5}\\left(\n4\\ln \\lambda -5\\ln\\eta_A +2\\ln\\xi^3_S\n\\right), \\\\[1ex]\n\\qquad \\qquad \\qquad \\qquad \\qquad \\text{for $\\sigma^2/a < a^2/s$,}\n\\end{cases}\n\\end{align}\nwhere we have defined\n$\\xi^3_S\\equiv\\eta_S \\bar\\eta_S \\lambda_S$.\nIn the middle panel of Fig.~\\ref{fig:modelA},\nwe define the gauge coupling at the GUT scale as\n\\begin{align}\n \\alpha_1 (M_X) &= \\alpha_U,\\\\\n \\alpha_i (M_X) &= \\alpha_U \\left( 1+\\frac{\\alpha_U}{2\\pi} \\Delta \\lambda_{1i} \\right)~~(i=2,3),\n\\end{align}\nwith $\\Delta\\lambda_{12} = -7.5$ and $\\Delta\\lambda_{13}=-9.0$, which are typical values observed when all the dimensionless couplings are equal to unity.
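For completeness, the linear-algebra step behind solving Eq.~\eqref{eq:gcu-2} is elementary: at one loop the matching conditions are linear in $\ln M_R$, $\ln M_X$ and $\alpha_U^{-1}$. The following sketch uses the $b_i^{(a)}$ and $b_i^{(b)}$ quoted above for model A, but a schematic single threshold and placeholder inputs (the running below $M_S$ and the values of Table~\ref{tab:RGE} are not reproduced here):

```python
import numpy as np

# Schematic one-loop conditions, linear in x = (ln M_R, ln M_X, 1/alpha_U):
#   alpha_i^{-1}(M_S) = alpha_U^{-1}
#       + (1/2pi) [ b_i^(a) ln(M_R/M_S) + b_i^(b) ln(M_X/M_R) ]
b_a = np.array([57 / 5, 1, -3])   # b_i^(a), from the text
b_b = np.array([9, 1, -3])        # b_i^(b), from the text

def solve_unification(alpha_inv_MS, ln_MS):
    """Return (ln M_R, ln M_X, alpha_U^{-1}) solving the three conditions."""
    two_pi = 2 * np.pi
    # coefficients of (ln M_R, ln M_X, alpha_U^{-1}) in each equation
    A = np.column_stack([(b_a - b_b) / two_pi, b_b / two_pi, np.ones(3)])
    rhs = alpha_inv_MS + b_a * ln_MS / two_pi
    return np.linalg.solve(A, rhs)

# placeholder inputs: alpha_i^{-1} at M_S = 1 TeV (illustrative numbers only)
alpha_inv = np.array([59.0, 29.6, 8.5])
ln_MR, ln_MX, alpha_U_inv = solve_unification(alpha_inv, np.log(1.0e3))
```

Scanning $M_S$ and checking whether the resulting scales respect $M_Z < M_S < M_R < M_X$ is then the consistency test referred to in the text.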
\\section{Introduction}\nFor a prime~$l$, the classical modular polynomial $\\Phi_l$ is the minimal \npolynomial of the function $j(lz)$ over the field $\\mathbf{C}(j)$, where $j(z)$ is \nthe modular $j$-function. The polynomial~$\\Phi_l$ parametrizes elliptic \ncurves $E$ together with an isogeny $E \\rightarrow E'$ of degree~$l$. From\nclassical results, we know that $\\Phi_l$ lies in the ring $\\mathbf{Z}[X,Y]$ and satisfies\n$\\Phi_l(X,Y) = \\Phi_l(Y,X)$, with degree $l+1$ in both variables \\cite[\\S69]{Weber:Algebra}.\n\nThe fact that the moduli interpretation of $\\Phi_l$ remains valid modulo\nprimes $p \\not = l$ was crucial to the improvements made by Atkin and Elkies\nto Schoof's point-counting algorithm \\cite{Elkies:AtkinBirthday,Schoof:ECPointCounting2}. More\nrecently, the polynomials $\\Phi_l\\bmod p$ have been used to compute Hilbert class polynomials~\\cite{Belding:HilbertClassPolynomial,Sutherland:HilbertClassPolynomials}, and to determine the endomorphism ring of an elliptic curve over a finite field~\\cite{BissonSutherland:Endomorphism}. Explicitly computing $\\Phi_l$ is notoriously difficult, primarily due to its large size. As shown in~\\cite{CohenPaula:ModularPolynomials}, the logarithmic height of its largest coefficient is $6l\\log l+O(l)$, thus its total size is \n\\begin{equation}\\label{spacebound}\nO(l^3 \\log l).\n\\end{equation}\nAs this bound suggests, the size of $\\Phi_l$ grows quite rapidly; the binary representation of $\\Phi_{79}$ already exceeds one megabyte, and $\\Phi_{659}$ is larger than a gigabyte.\n\nThe polynomial $\\Phi_l$ can be computed by comparing coefficients in the\nFourier expansions of $j(z)$ and $j(lz)$, an approach considered by several\nauthors~\\cite{Blake:ModularPolynomials, Elkies:AtkinBirthday,\nHerrmann:FourierCoefficients, Ito:ModularEquation, Kaltofen:ModularEquation,\nLMMS:PointCounting, Morain:PointCounting}. 
As detailed in\n\\cite{Blake:ModularPolynomials}, this only requires integer arithmetic and may\nbe performed modulo~$p$ for any prime $p > 2l+2$. The time to compute $\\Phi_l\n\\bmod p$ is then $O(l^{3+\\varepsilon}(\\log p)^{1+\\varepsilon})$, and for a sufficiently large\n$p$ this yields an $O(l^{4+\\varepsilon})$ time algorithm to compute $\\Phi_l$ over\n$\\mathbf{Z}$. Alternatively (and preferably), one computes $\\Phi_l$ modulo several\nsmaller primes and applies the Chinese Remainder Theorem, as suggested in\n\\cite{Blake:ModularPolynomials,Herrmann:FourierCoefficients,\nLMMS:PointCounting, Morain:PointCounting}.\n\nAn alternative CRT-based approach appears in~\\cite{CharlesLauter:ModPoly}.\nThis algorithm uses isogenies between supersingular elliptic curves defined\nover a finite field, and computes $\\Phi_l\\bmod p$ in time $O(l^{4+\\varepsilon}(\\log\np)^{2+\\varepsilon} + (\\log p)^{4+\\varepsilon}$), under the GRH. Although slower than using\nFourier expansions, this approach relies solely on the fact that $\\Phi_l$\nparametrizes isogenies.\n\nIn~\\cite{Enge:ModularPolynomials}, Enge uses interpolation and fast\nfloating-point evaluations to compute $\\Phi_l\\in\\mathbf{Z}[X,Y]$ in time $O(l^3(\\log\nl)^{4+\\varepsilon})$, under reasonable heuristic assumptions. The complexity of this\nmethod is nearly optimal, quasi-linear in the size of $\\Phi_l$. However, most\napplications actually use $\\Phi_l$ in a finite field $\\mathbf{F}_{p^n}$, and\n$\\Phi_l\\bmod p$ may be much smaller than $\\Phi_l$. In general, Enge's\nalgorithm can compute $\\Phi_l$ and reduce it modulo~$p$ much faster than either\nof the methods above can compute $\\Phi_l\\bmod p$, but this may use an excessive\namount of space. For large $l$ this approach becomes impractical, even when\n$\\Phi_l\\bmod p$ is reasonably small.\n\nHere we present a new method to compute $\\Phi_l$, either over the integers or\nmodulo an arbitrary positive integer~$m$, including $m \\le l$. 
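The smallest case gives a feel for the objects involved: for $l=2$ the modular polynomial is classical and compact enough to write out, and the properties recalled above (integer coefficients, symmetry, degree $l+1=3$ in each variable) can be verified directly. A quick sketch (the polynomial itself is standard; the CM checkpoint $j=0 \mapsto j=54000$ is a well-known $2$-isogeny):

```python
# Phi_2(X, Y), the classical modular polynomial for l = 2
def phi2(x, y):
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

# symmetry: Phi_2(X, Y) = Phi_2(Y, X)
assert phi2(5, 11) == phi2(11, 5)
# j = 0 (CM discriminant -3) is 2-isogenous to j = 54000 (discriminant -12)
assert phi2(0, 54000) == 0
```

Already here the coefficients hint at the growth quantified in (\ref{spacebound}); for general $l$ they are far too large to tabulate directly.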
Our algorithm is both\nasymptotically and practically faster than alternative methods, and achieves\nessentially optimal space complexity. More precisely, we prove the following\nresult.\n\n\\begin{thm1}\\label{main-thm}\nLet $l$ denote an odd prime and $m$ a positive integer.\nAlgorithm~\\ref{alg2} correctly computes $\\Phi_l\\in(\\Z/m\\Z)[X,Y]$.\nUnder the GRH, it runs in expected time\n$$\nO(l^3 \\log^3 l\\log\\log l),\n$$\nusing $O(l^2\\log (lm))$ expected space.\n\\end{thm1}\n\nTo compute $\\Phi_l$ over $\\mathbf{Z}$, the modulus $m$ is made large enough to uniquely\ndetermine the coefficients, via an explicit height bound proven in\n\\cite{BrokerSutherland:PhiHeightBound}. Our algorithm is of the \\emph{Las\nVegas} type, a probabilistic algorithm whose output is unconditionally correct;\nthe GRH is only used to analyze its running time. We have used it to compute\n$\\Phi_l$ for all $l<3600$, and many larger $l$ up to $5003$. The largest\nprevious computation of which we are aware has $l$ equal to $1009$. Working\nmodulo $m$ we can go further; we have computed $\\Phi_l$ modulo a 256-bit\ninteger $m$ with $l=20011$.\n\nApplications that rely on $\\Phi_l$ can often improve their running times by\nusing alternative modular polynomials that have smaller coefficients. Our\nalgorithm can be adapted to compute polynomials $\\Phi_l^g$ relating $g(z)$ and\n$g(lz)$, for modular functions~$g$ that share certain properties with~$j$. This\nincludes the cube root $\\gamma_2$ of $j$, and we are then able to compute\n$\\Phi_l\\bmod m$ more quickly by reconstructing it from $\\Phi_l^{\\gamma_2}\\bmod m$,\ncapitalizing on a suggestion in \\cite{Elkies:AtkinBirthday}. Other examples\ninclude simple and double eta-quotients, the Atkin functions, and the\nWeber $\\mathfrak{f}$-function. The last is especially\nattractive, since the modular polynomials for $\\mathfrak{f}$ are approximately 1728 times\nsmaller than those for $j$. 
This has allowed us to compute modular polynomials\n$\\Phi_l^{\\mathfrak{f}}$ with $l$ as large as 60013. \n\nThe outline of this article is as follows. In Section~2 we give a rough \noverview of our new algorithm. The theory behind the algorithm is presented\nin Sections~3--5. We present the algorithm, prove its correctness and \nanalyze its runtime in Section~6. Section~7 deals with modular polynomials\nfor modular functions other than $j$, and a final Section~8 contains computational\nresults.\n\n\\section{Overview}\n\\noindent\nOur basic strategy is a standard CRT approach: we compute $\\Phi_l\\bmod p$ for various primes~$p$ and use the Chinese Remainder Theorem to \nrecover $\\Phi_l\\in\\mathbf{Z}[X,Y]$.\nAlternatively, the explicit CRT (mod $m$) allows us to directly compute $\\Phi_l\\in(\\Z\/m\\Z)[X,Y]$, via~\\cite[Thm.~3.1]{Bernstein:ModularExponentiation}.\nBy applying the algorithm of~\\cite[\\S6]{Sutherland:HilbertClassPolynomials}, this can be accomplished in $O(l^2\\log lm)$ space, even though the total size of all the $\\Phi_l\\bmod p$ is $O(l^3\\log l)$.\n\nOur method for computing $\\Phi_l\\bmod p$ is new, and applies only to certain primes~$p$. 
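The CRT reconstruction itself is elementary; a minimal sketch (with artificial small primes, not the special primes selected by our algorithm), including the symmetric lift needed because $\Phi_l$ has negative coefficients:

```python
from math import prod

def crt_lift(residues, primes):
    """Recover the integer c with |c| <= P/2 from c mod p for each p."""
    P = prod(primes)
    c = 0
    for r, p in zip(residues, primes):
        q = P // p
        c += r * q * pow(q, -1, p)   # q * q^{-1} = 1 mod p, and = 0 mod the others
    c %= P
    return c - P if c > P // 2 else c

primes = [101, 103, 107, 109]
coeff = -162000                      # e.g. a coefficient of Phi_2
assert crt_lift([coeff % p for p in primes], primes) == coeff
```

The explicit CRT of \cite{Bernstein:ModularExponentiation} performs the analogous reduction directly modulo~$m$, which is what keeps the space bound independent of the total size of all the $\Phi_l\bmod p$.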
\nStrategic prime selection has been used effectively in other CRT-based \nalgorithms, such as~\\cite{Sutherland:HilbertClassPolynomials}, and it is especially helpful here.\nWorking in the finite field~$\\mathbf{F}_p$, we select $l+2$ distinct values $j_i$, compute $\\Phi_l(X,j_i)\\in\\mathbf{F}_p[X]$ for each, and then interpolate the coefficients of $\\Phi_l\\in (\\mathbf{F}_p[Y])[X]$ as polynomials in $\\mathbf{F}_p[Y]$.\nThe key lies in our choice of $p$, which allows us to select particular interpolation points that greatly facilitate the computation.\nWe are then able to compute $\\Phi_l\\bmod p$ in expected time\n\\begin{equation}\\label{modpbound}\nO(l^2(\\log p)^3\\log\\log p).\n\\end{equation}\nIn contrast to the methods above, this is quasi-linear in the size of $\\Phi_l\\bmod p$.\n\nOur algorithm exploits the structure of the $l$-isogeny graph $G_l$ defined on the set of $j$-invariants of elliptic curves over $\\mathbf{F}_p$.\nEach edge in this graph corresponds to an $l$-isogeny; the edge $(j_1,j_2)$ is present if and only if $\\Phi_l(j_1,j_2)=0$.\nAs described in~\\cite{Fouquet:IsogenyVolcanoes,Kohel:thesis}, the ordinary components of this graph have a particular structure known as an $l$-\\emph{volcano}.\nDepicted in Figure 1 are a set of four $l$-volcanoes, each with two levels: the \\emph{surface} (at the top), and the \\emph{floor} (on the bottom).\nNote that each vertex $j_i$ on the surface has $l+1$ neighbors; these are the roots of $\\Phi_l(X,j_i)\\in\\mathbf{F}_p[X]$, and there are at least $l+2$ such $j_i$.\n\\smallskip\n\n\\begin{figure}[htp]\n\\begin{tikzpicture}\n\\draw (-4.5,0) ellipse (1 and 0.1);\n\\draw (-1.5,0) ellipse (1 and 0.1);\n\\draw (1.5,0) ellipse (1 and 0.1);\n\\draw (4.5,0) ellipse (1 and 0.1);\n\\draw (-4.5,0.1) -- (-4.6,-0.06);\n\\draw (-4.5,0.1) -- (-4.56,-0.06);\n\\draw (-4.5,0.1) -- (-4.52,-0.06);\n\\draw (-4.5,0.1) -- (-4.48,-0.06);\n\\draw (-4.5,0.1) -- (-4.44,-0.06);\n\\draw (-4.5,0.1) -- 
(-4.4,-0.06);\n\\draw[fill=red] (-4.5,0.1) circle (0.04);\n\\draw (-1.5,0.1) -- (-1.6,-0.05);\n\\draw (-1.5,0.1) -- (-1.56,-0.05);\n\\draw (-1.5,0.1) -- (-1.52,-0.05);\n\\draw (-1.5,0.1) -- (-1.48,-0.05);\n\\draw (-1.5,0.1) -- (-1.44,-0.05);\n\\draw (-1.5,0.1) -- (-1.4,-0.05);\n\\draw[fill=red] (-1.5,0.1) circle (0.04);\n\\draw (1.5,0.1) -- (1.6,-0.05);\n\\draw (1.5,0.1) -- (1.56,-0.05);\n\\draw (1.5,0.1) -- (1.52,-0.05);\n\\draw (1.5,0.1) -- (1.48,-0.05);\n\\draw (1.5,0.1) -- (1.44,-0.05);\n\\draw (1.5,0.1) -- (1.4,-0.05);\n\\draw[fill=red] (1.5,0.1) circle (0.04);\n\\draw (4.5,0.1) -- (4.60,-0.06);\n\\draw (4.5,0.1) -- (4.56,-0.06);\n\\draw (4.5,0.1) -- (4.52,-0.06);\n\\draw (4.5,0.1) -- (4.48,-0.06);\n\\draw (4.5,0.1) -- (4.44,-0.06);\n\\draw (4.5,0.1) -- (4.4,-0.06);\n\\draw[fill=red] (4.5,0.1) circle (0.04);\n\\draw (-5,-0.1) -- (-5.35,-0.7);\n\\draw[fill=red] (-5.35,-0.7) circle (0.04);\n\\draw (-5,-0.1) -- (-5.21,-0.7);\n\\draw[fill=red] (-5.21,-0.7) circle (0.04);\n\\draw (-5,-0.1) -- (-5.07,-0.7);\n\\draw[fill=red] (-5.07,-.7) circle (0.04);\n\\draw (-5,-0.1) -- (-4.93,-0.7);\n\\draw[fill=red] (-4.93,-0.7) circle (0.04);\n\\draw (-5,-0.1) -- (-4.79,-0.7);\n\\draw[fill=red] (-4.79,-0.7) circle (0.04);\n\\draw (-5,-0.1) -- (-4.65,-0.7);\n\\draw[fill=red] (-4.65,-0.7) circle (0.04);\n\\draw[fill=red] (-5,-0.1) circle (0.04);\n\n\\draw (-4,-0.1) -- (-4.35,-0.7);\n\\draw[fill=red] (-4.35,-0.7) circle (0.04);\n\\draw (-4,-0.1) -- (-4.21,-0.7);\n\\draw[fill=red] (-4.21,-0.7) circle (0.04);\n\\draw (-4,-0.1) -- (-4.07,-0.7);\n\\draw[fill=red] (-4.07,-0.7) circle (0.04);\n\\draw (-4,-0.1) -- (-3.93,-0.7);\n\\draw[fill=red] (-3.93,-0.7) circle (0.04);\n\\draw (-4,-0.1) -- (-3.79,-0.7);\n\\draw[fill=red] (-3.79,-0.7) circle (0.04);\n\\draw (-4,-0.1) -- (-3.65,-0.7);\n\\draw[fill=red] (-3.65,-0.7) circle (0.04);\n\\draw[fill=red] (-4,-0.1) circle (0.04);\n\n\\draw (-2,-0.1) -- (-2.35,-0.7);\n\\draw[fill=red] (-2.35,-0.7) circle (0.04);\n\\draw (-2,-0.1) -- 
(-2.21,-0.7);\n\\draw[fill=red] (-2.21,-0.7) circle (0.04);\n\\draw (-2,-0.1) -- (-2.07,-0.7);\n\\draw[fill=red] (-2.07,-0.7) circle (0.04);\n\\draw (-2,-0.1) -- (-1.93,-0.7);\n\\draw[fill=red] (-1.93,-0.7) circle (0.04);\n\\draw (-2,-0.1) -- (-1.79,-0.7);\n\\draw[fill=red] (-1.79,-0.7) circle (0.04);\n\\draw (-2,-0.1) -- (-1.65,-0.7);\n\\draw[fill=red] (-1.65,-0.7) circle (0.04);\n\\draw[fill=red] (-2,-0.1) circle (0.04);\n\n\\draw (-1,-0.1) -- (-1.35,-0.7);\n\\draw[fill=red] (-1.35,-0.7) circle (0.04);\n\\draw (-1,-0.1) -- (-1.21,-0.7);\n\\draw[fill=red] (-1.21,-0.7) circle (0.04);\n\\draw (-1,-0.1) -- (-1.07,-0.7);\n\\draw[fill=red] (-1.07,-0.7) circle (0.04);\n\\draw (-1,-0.1) -- (-0.93,-0.7);\n\\draw[fill=red] (-0.93,-0.7) circle (0.04);\n\\draw (-1,-0.1) -- (-0.79,-0.7);\n\\draw[fill=red] (-0.79,-0.7) circle (0.04);\n\\draw (-1,-0.1) -- (-0.65,-0.7);\n\\draw[fill=red] (-0.65,-0.7) circle (0.04);\n\\draw[fill=red] (-1,-0.1) circle (0.04);\n\n\\draw (1,-0.1) -- (1.35,-0.7);\n\\draw[fill=red] (1.35,-0.7) circle (0.04);\n\\draw (1,-0.1) -- (1.21,-0.7);\n\\draw[fill=red] (1.21,-0.7) circle (0.04);\n\\draw (1,-0.1) -- (1.07,-0.7);\n\\draw[fill=red] (1.07,-0.7) circle (0.04);\n\\draw (1,-0.1) -- (0.93,-0.7);\n\\draw[fill=red] (0.93,-0.7) circle (0.04);\n\\draw (1,-0.1) -- (0.79,-0.7);\n\\draw[fill=red] (0.79,-0.7) circle (0.04);\n\\draw (1,-0.1) -- (0.65,-0.7);\n\\draw[fill=red] (0.65,-0.7) circle (0.04);\n\\draw[fill=red] (1,-0.1) circle (0.04);\n\n\\draw (2,-0.1) -- (2.35,-0.7);\n\\draw[fill=red] (2.35,-0.7) circle (0.04);\n\\draw (2,-0.1) -- (2.21,-0.7);\n\\draw[fill=red] (2.21,-0.7) circle (0.04);\n\\draw (2,-0.1) -- (2.07,-0.7);\n\\draw[fill=red] (2.07,-0.7) circle (0.04);\n\\draw (2,-0.1) -- (1.93,-0.7);\n\\draw[fill=red] (1.93,-0.7) circle (0.04);\n\\draw (2,-0.1) -- (1.79,-0.7);\n\\draw[fill=red] (1.79,-0.7) circle (0.04);\n\\draw (2,-0.1) -- (1.65,-0.7);\n\\draw[fill=red] (1.65,-0.7) circle (0.04);\n\\draw[fill=red] (2,-0.1) circle (0.04);\n\n\\draw 
(4,-0.1) -- (4.35,-0.7);\n\\draw[fill=red] (4.35,-0.7) circle (0.04);\n\\draw (4,-0.1) -- (4.21,-0.7);\n\\draw[fill=red] (4.21,-0.7) circle (0.04);\n\\draw (4,-0.1) -- (4.07,-0.7);\n\\draw[fill=red] (4.07,-0.7) circle (0.04);\n\\draw (4,-0.1) -- (3.93,-0.7);\n\\draw[fill=red] (3.93,-0.7) circle (0.04);\n\\draw (4,-0.1) -- (3.79,-0.7);\n\\draw[fill=red] (3.79,-0.7) circle (0.04);\n\\draw (4,-0.1) -- (3.65,-0.7);\n\\draw[fill=red] (3.65,-0.7) circle (0.04);\n\\draw[fill=red] (4,-0.1) circle (0.04);\n\n\\draw (5,-0.1) -- (5.35,-0.7);\n\\draw[fill=red] (5.35,-0.7) circle (0.04);\n\\draw (5,-0.1) -- (5.21,-0.7);\n\\draw[fill=red] (5.21,-0.7) circle (0.04);\n\\draw (5,-0.1) -- (5.07,-0.7);\n\\draw[fill=red] (5.07,-0.7) circle (0.04);\n\\draw (5,-0.1) -- (4.93,-0.7);\n\\draw[fill=red] (4.93,-0.7) circle (0.04);\n\\draw (5,-0.1) -- (4.79,-0.7);\n\\draw[fill=red] (4.79,-0.7) circle (0.04);\n\\draw (5,-0.1) -- (4.65,-0.7);\n\\draw[fill=red] (4.65,-0.7) circle (0.04);\n\\draw[fill=red] (5,-0.1) circle (0.04);\n\\end{tikzpicture}\n\\vspace{12pt}\n\n\\begin{minipage}{0.75\\linewidth}\n\\textsc{figure} 1. A set of $l$-volcanoes arising from Theorem~\\ref{theprimes}. In this example $l=7$ splits into ideals of order 3 in $\\operatorname{cl}(\\O)$ and we have $h(\\O)=12$ surface curves and $h(R)=72$ floor curves.\n\\end{minipage}\n\\end{figure}\n\\smallskip\n\nThis configuration contains enough information to compute the $l+2$ polynomials $\\Phi_l(X,j_i)$ that we need to interpolate $\\Phi_l(X,Y)\\bmod p$. 
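The final step is ordinary Lagrange interpolation over $\mathbf{F}_p$, applied to each $X$-coefficient in turn. A toy sketch of that step (our illustration, on a univariate example rather than a genuine $\Phi_l$):

```python
def lagrange_interp(points, values, p):
    """Coefficients (constant term first) of the polynomial over F_p
    taking the given values at the given (distinct) points."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(zip(points, values)):
        num = [1]                      # running product of (Y - xj), j != i
        denom = 1
        for j, xj in enumerate(points):
            if j == i:
                continue
            # multiply num by (Y - xj), coefficient-wise, low degree first
            num = [(a - xj * b) % p for a, b in zip([0] + num, num + [0])]
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p
        coeffs = [(c + scale * a) % p for c, a in zip(coeffs, num)]
    return coeffs

# recover Y^2 + 3Y + 5 over F_101 from its values at Y = 1, 2, 3
assert lagrange_interp([1, 2, 3], [9, 15, 23], 101) == [5, 3, 1]
```

In the actual computation the interpolation points are the $j_i$ on the surfaces of the volcanoes in Figure 1, and the values interpolated are the coefficients of the polynomials $\Phi_l(X,j_i)$.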
It is not an arrangement that is likely to arise by chance; it is achieved by our choice of the order $\\O$ and the primes~$p$ that we use.\nTo further simplify our task, we choose $p$ so that vertices on the surface correspond to curves with $\\mathbf{F}_p$-rational $l$-torsion.\nOur ability to obtain such primes is guaranteed by Theorems~\\ref{theprimes} and~\\ref{smallprimes}, proven in Section~\\ref{ExplicitCMTheory}.\n\nThe curves on the surface all have the same endomorphism ring type, isomorphic to an imaginary quadratic order $\\O$.\nTheir $j$-invariants are precisely the roots of the Hilbert class polynomial $H_\\O\\in\\mathbf{Z}[X]$.\nAs described in~\\cite{Belding:HilbertClassPolynomial}, the roots of $H_\\O$ may be enumerated via the action of the ideal class group $\\operatorname{cl}(\\O)$. To do so efficiently, we use an algorithm of~\\cite{Sutherland:HilbertClassPolynomials} to compute a \\emph{polycyclic presentation} for $\\operatorname{cl}(\\O)$ that allows us to enumerate the roots of $H_\\O$ via isogenies of low degree, typically much smaller than~$l$.\nWe may use this presentation to determine the action of any element of $\\operatorname{cl}(\\O)$, including those that act via $l$-isogenies.\nThis allows us to identify the $l$-isogeny cycles that form the surfaces of the volcanoes in Figure 1.\n\nSimilarly, the vertices on the floor are the roots of $H_R$, where $R$ is the order of index $l$ in $\\O$, and we use a polycyclic presentation of $\\operatorname{cl}(R)$ to enumerate them.\nTo identify children of a common parent (siblings), we exploit the fact that siblings lie in a cycle of $l^2$-isogenies, which we identify using our presentation of $\\operatorname{cl}(R)$.\nIt remains only to connect each parent to one of its children. 
This may be achieved by using V\\'{e}lu's formula~\\cite{Velu:Isogenies} to compute an $l$-isogeny from the surface to the floor.\nBy matching each parent to a group of siblings, we avoid the need to compute an $l$-isogeny to every child, which is critical to obtaining the complexity bound in (\\ref{modpbound}).\n\nBelow is a simplified version of the algorithm to compute $\\Phi_l\\bmod p$.\n\\begin{algorithm}\\label{alg1}\n{Given $l$, $p$, and $\\O$, compute $\\Phi_l\\bmod p$ as follows:}\n\\begin{enumerate}\n\\vspace{2pt}\\item\nFind a root of $H_\\O$ over $\\mathbf{F}_p$.\n\\vspace{2pt}\\item\nEnumerate the roots $j_i$ of $H_\\O$ and identify the $l$-isogeny cycles.\n\\vspace{2pt}\\item\nFor each $j_i$ find an $l$-isogenous $j$ on the floor.\n\\vspace{2pt}\\item\nEnumerate the roots of $H_R$ and identify the $l^2$-isogeny cycles.\n\\vspace{2pt}\\item\nFor each $j_i$ compute $\\Phi_l(X,j_i)=\\prod_{(j_i,j_k)\\in G_l}(X-j_k)$.\n\\vspace{2pt}\\item\nInterpolate $\\Phi_l\\in (\\mathbf{F}_p[Y])[X]$ using the $j_i$ and the polynomials $\\Phi_l(X,j_i)$.\n\\end{enumerate}\n\\end{algorithm}\nAlgorithm~\\ref{alg1} assumes that $l$, $p$, and $\\O$ satisfy the conditions of Theorem~\\ref{theprimes}, and that $h(\\O)\\ge l+2$.\nWe use the same $\\O$ for each $p$, so $H_\\O$ may be precomputed.\nNote that we do not compute $H_R$, we enumerate its roots by applying the Galois action of $\\operatorname{cl}(R)$ to a root obtained in Step 3.\n\nA more detailed version of Algorithm~\\ref{alg1} appears in Section~6 together with Algorithm~\\ref{alg2}, which selects the order $\\O$ and the primes $p$, and performs the CRT computations needed to determine $\\Phi_l$ over $\\mathbf{Z}$, or modulo~$m$.\n\n\\section{Orders in imaginary quadratic fields}\nIt is a classical fact that the endomorphism ring of an ordinary elliptic\ncurve over a finite field is isomorphic to an imaginary quadratic order~$\\O$.\nThe order $\\O$ is necessarily contained in the maximal order $\\O_K$ of 
its\nfraction field~$K$, but we quite often have $\\O\\subsetneq\\O_K$. As \nmost textbooks on algebraic number theory focus on\nmaximal orders, we first develop some useful tools for working with non-maximal\norders. To simplify the presentation, we work throughout with fields of\ndiscriminant $d_K<-4$, ensuring that we always have the unit groups\n$\\O^*=\\O_K^*=\\{\\pm1\\}$. We use $\\inkron{d_K}{p}$ to denote the Kronecker symbol,\nwhich is~$1$, $0$, or $-1$ as the prime $p$ splits, ramifies, or remains inert in $K$ (respectively).\n\\smallskip\n\nLet $\\O$ be a (not necessarily maximal) order in a quadratic field $K$ of discriminant $d_K<-4$.\nLet $N$ be a positive integer prime to the conductor $u=\\idx{\\O_K}{\\O}$.\nThe order $R = \\mathbf{Z} + N\\O$ has index~$N$ in $\\O$, and its ideal class group $\\operatorname{cl}(R)$ is an extension of $\\operatorname{cl}(\\O)$.\nMore precisely, as in~\\cite[Thm.~6.7]{Stevenhagen:NumberRings}, there is an exact sequence\n\\begin{equation}\\label{exactsequence}\n1 \\mapright{} (\\O/N\\O)^* / (\\mathbf{Z}/N\\mathbf{Z})^* \\mapright{} \\operatorname{cl}(R) \n\\mapright{\\varphi} \\operatorname{cl}(\\O) \\mapright{} 1,\n\\end{equation}\nwhere $\\varphi$ maps the class $[I]$ to the class $[I\\O]$.\nFor $R$-ideals prime to~$uN$, the underlying map $I\\mapsto I\\O$ preserves \n`norms', that is, $\\idx{R}{I}=\\idx{\\O}{I\\O}$, as in~\\cite[Prop.~7.20]{Cox:ComplexMultiplication}.\nWe have a particular interest in the kernel of the map $\\varphi$.\n\n\\begin{lemma}\\label{cyclic} In the exact sequence above, if $N=p^n$ is a power of an unramified odd prime $p$, then $\\ker\\varphi$ is cyclic of order $p^{n-1}\\bigl(p-\\inkron{d_K}{p}\\bigr)$.\n\\end{lemma}\n\\begin{proof}\nWe compute the structure of $\\ker\\varphi\\cong (\\O/p^n\\O)^*/(\\mathbf{Z}/p^n\\mathbf{Z})^*$.\nThe group $(\\mathbf{Z}/p^n\\mathbf{Z})^*$ is cyclic, isomorphic to the \\emph{additive} group 
$(\\mathbf{Z}\/(p-1)\\mathbf{Z})\\times(\\mathbf{Z}\/p^{n-1}\\mathbf{Z})$. We now apply~\\cite[Cor.~4.2.11]{Cohen:CANT2} to compute the structure of $(\\O\/p^n\\O)^*$:\n\n\\begin{equation}\\label{unitgroup}\n(\\O\/p^n\\O)^*\\cong\n\\begin{cases}\n(\\mathbf{Z}\/(p-1)\\mathbf{Z})^2\\times(\\mathbf{Z}\/p^{n-1}\\mathbf{Z})^2, &\\text{if $p$ splits in $K$;}\\\\\n(\\mathbf{Z}\/(p^2-1)\\mathbf{Z})\\times(\\mathbf{Z}\/p^{n-1}\\mathbf{Z})^2, &\\text{if $p$ is inert in $K$.}\n\\end{cases}\n\\end{equation}\nIn both cases, the factor $(\\mathbf{Z}\/p^{n-1}\\mathbf{Z})$ of $(\\mathbf{Z}\/p^n\\mathbf{Z})^*$ is a maximal cyclic subgroup of the Sylow $p$-subgroup of $(\\O\/p^n\\O)^*$, and must correspond to a direct summand.\nThus the $p$-rank of the quotient $(\\O\/p^n\\O)^*\/(\\mathbf{Z}\/p^n\\mathbf{Z})^*$ is 1.\nThe order of the factor $(\\mathbf{Z}\/(p-1)\\mathbf{Z})$ of $(\\mathbf{Z}\/p^n\\mathbf{Z})^*$ is not divisible by $p$, and must correspond to a subgroup of a cyclic factor of $(\\O\/p^n\\O)^*$ in both cases. It follows that the quotient is cyclic.\nThe calculations above also show that $\\#(\\O\/p^n\\O)^*\/(\\mathbf{Z}\/p^n\\mathbf{Z})^* = p^{n-1}\\bigl(p-\\inkron{d_K}{p}\\bigr)$.\n\\end{proof}\n\\noindent\nEven when $\\ker\\varphi$ is not necessarily cyclic, the size of $\\ker \\varphi$ is as\nin Lemma~\\ref{cyclic}. More generally, the exact sequence (\\ref{exactsequence})\ncan be used to derive the formula\n\\begin{equation}\\label{classnumber}\nh(\\O)=h(\\O_K)u\\prod_{p|u}\\left(1-\\kron{d_K}{p}p^{-1}\\right),\n\\end{equation}\nas in~\\cite[Thm.~7.24]{Cox:ComplexMultiplication}.\n\nWe now describe a particular representation of $\\ker \\varphi$ when $N=l$ is \nprime. 
In this case $\\ker\\varphi$ is cyclic, of order $l-\\inkron{d_K}{l}$;\nthis follows from Lemma~\\ref{cyclic} for $l>2$, and from (\\ref{classnumber}) for $l=2$.\nLet $\\O=\\mathbf{Z}[\\tau]$ for some $\\tau\\in K$ that is coprime to~$l$.\nThere are exactly $l+1$ index~$l$ subrings of~$\\O$: the order $R$, and rings $S_i=l\\mathbf{Z}+(\\tau+i)\\mathbf{Z}$, for $i$ from 0 to $l-1$.\nEach $\\O$-ideal of norm $l$ corresponds to one of the $S_i$.\nThe remaining $S_i$ are fractional invertible $R$-ideals corresponding to proper $R$-ideals\n\\begin{equation}\\label{Ji}\nJ_i=lS_i=l^2\\mathbf{Z}+l(\\tau+i)\\mathbf{Z},\n\\end{equation}\nfor which $R=\\{\\beta\\in K:\\beta J_i\\subset J_i\\}$.\nExactly $1+\\inkron{d_K}{l}$ of the $S_i$ are $\\O$-ideals, leaving $l-1-\\inkron{d_K}{l}$ proper $R$-ideals~$J_i$.\nThese are all non-principal and inequivalent in $\\operatorname{cl}(R)$, and each lies in $\\ker \\varphi$, since we have $J_i\\O=l\\O$. The invertible $J_i$ are exactly the non-trivial elements of $\\ker \\varphi$.\nWe summarize with the following lemma.\n\n\\begin{lemma}\\label{generators}\nIf $N=l$ is prime in the exact sequence $(\\ref{exactsequence})$, then the $R$-ideal $lR$ and the invertible $R$-ideals $J_i$ defined in $(\\ref{Ji})$ are representatives for $\\ker \\varphi$.\nIn particular, $\\ker \\varphi$ is generated by the class of an invertible $R$-ideal with norm $l^2$.\n\\end{lemma}\n\\noindent\nThis representation of $\\ker\\varphi$ has proven useful in other \nsettings~\\cite{Castagnos:NICECryptanalysis}. We use it to obtain \nthe $l^2$-isogeny cycles we need in Step 4 of Algorithm~\\ref{alg1}.\n\nWe conclude this section with a theorem that allows us to construct arbitrarily large class groups that are generated by elements of bounded norm.\n\n\\begin{theorem}\\label{theorders}\nLet $\\O$ be an order in a quadratic field of discriminant $d_K<-4$, and \nlet $p\\nmid\\operatorname{disc}(\\O)$ be an odd prime. 
Let $\\mathcal{P}$ be a set of primes \nthat do not divide $p\\idx{\\O_K}{\\O}$.\nFor $n\\in\\mathbf{Z}_{\\ge 0}$, let $R_n$ denote the order $\\mathbf{Z}+p^n\\O$, and let $G_n$ be the subgroup of $\\operatorname{cl}(R_n)$ generated by the set $S_n$ of classes of $R_n$-ideals with norms in $\\mathcal{P}$.\n\nThen if $G_2=\\operatorname{cl}(R_2)$, we have $G_n=\\operatorname{cl}(R_n)$ for every $n\\in\\mathbf{Z}_{\\ge 0}$.\n\\end{theorem}\n\\begin{proof}\nFor each $R_n$, let $\\varphi_n:\\operatorname{cl}(R_n)\\to\\operatorname{cl}(\\O)$ denote the corresponding \nmap in the exact sequence~$(\\ref{exactsequence})$, and let \n$\\phi_{n+1}\\colon \\operatorname{cl}(R_{n+1})\\to\\operatorname{cl}(R_n)$ send $[I]$ to $[IR_n]$, so that \n$\\varphi_{n+1}=\\varphi_n\\circ\\phi_{n+1}$.\nThese are all surjective group homomorphisms, and the underlying ideal maps preserve the norms of ideals prime to $p\\idx{\\O_K}{\\O}$.\nWe assume $G_2=\\operatorname{cl}(R_2)$, which implies $G_n=\\operatorname{cl}(R_n)$ for $n\\le 2$, and proceed by induction on $n$.\n\nFor each prime $q\\in\\mathcal{P}$ and every $n$, there are \nexactly $1+\\inkron{d_K}{q}$ ideals in $R_n$ of norm~$q$, and $\\phi_{n+1}$ \nmaps $S_{n+1}$ onto $S_n$ and $G_{n+1}$ onto $G_n$.\nBy the inductive hypothesis, $G_n=\\operatorname{cl}(R_n)$, therefore $G_{n+1}$ intersects \nevery coset of $\\ker \\phi_{n+1}\\subset \\ker \\varphi_{n+1}$.\nTo prove $G_{n+1}=\\operatorname{cl}(R_{n+1})$, it suffices to show $\\ker \\varphi_{n+1}\\subset G_{n+1}$.\n\nThe groups $\\ker \\varphi_n$ and $\\ker \\varphi_{n+1}$ are cyclic, by \nLemma~\\ref{cyclic}, since $p$ is odd and unramified.\nLet $\\alpha_n$ be a generator for $\\ker \\varphi_n$. 
Since $\\#\\ker \\varphi_n$ is divisible by $p$, $\\alpha_n$ cannot be a $p$th power in $\\ker\\varphi_n$.\nExpressing $\\alpha_n$ in terms of $S_n$, we see that \n$\\phi_{n+1}^{-1}(\\alpha_n)$ must intersect $G_{n+1}$.\nLet $\\alpha_{n+1}$ lie in this intersection, and note that \n$\\alpha_{n+1}\\in\\ker \\varphi_{n+1}$.\nThe order of $\\alpha_{n+1}$ must be a multiple of $|\\alpha_n|=\\#\\ker\\varphi_n$, and $\\alpha_{n+1}$ cannot be a $p$th power in $\\ker\\varphi_{n+1}$. It follows that $\\alpha_{n+1}$ has order $\\#\\ker\\varphi_{n+1}$, hence it generates $\\ker \\varphi_{n+1}$, proving $\\ker\\varphi_{n+1}\\subset G_{n+1}$ as desired.\n\\end{proof}\n\nTo see Theorem~\\ref{theorders} in action, let $\\O$ be the order of discriminant $D=-7$, let $p=3$, and let $\\mathcal{P}=\\{2\\}$.\nThe class group of the order $R_n$ of discriminant $3^{2n}D$ happens to be generated by an ideal of norm 2 when $n=2$,\nand the theorem then implies that this holds for all $n$. This allows us to construct arbitrarily large cyclic class groups, each generated by an ideal of norm 2.\n\nWe remark that Theorem \\ref{theorders} may be extended to handle $p=2$ if the condition $G_2=\\operatorname{cl}(R_2)$ is replaced by $G_3=\\operatorname{cl}(R_3)$, and easily generalizes to treat families of orders lying in $\\O$ that have $b$-smooth conductors, for any constant $b$.\n\n\\section{Explicit CM theory}\\label{ExplicitCMTheory}\n\n\\subsection{The theory of complex multiplication (CM)}\\label{CMTheory}\nAs in Section~3, let $\\O$ be an order in a quadratic field $K$ of discriminant \n$d_K<-4$. We fix an algebraic closure of~$K$. 
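As an aside, the class numbers appearing in the examples of the previous section are small enough to verify by brute force, by counting reduced primitive binary quadratic forms of the given discriminant (a standard elementary method; the sketch below is ours). For the discriminants $3^{2n}\cdot(-7)$ it reproduces $h=1,4,12$ for $n=0,1,2$, in agreement with the class number formula~(\ref{classnumber}):

```python
from math import gcd

def class_number(D):
    """h(D) for D < 0: count reduced primitive forms a x^2 + b x y + c y^2."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    b = D % 2                       # b and D have the same parity
    while 3 * b * b <= -D:          # reduced means |b| <= a <= c, so 3b^2 <= |D|
        q = (b * b - D) // 4        # q = a * c
        a = max(b, 1)
        while a * a <= q:           # enforce a <= c
            if q % a == 0:
                c = q // a
                if gcd(gcd(a, b), c) == 1:
                    h += 1
                    if 0 < b < a < c:   # (a, -b, c) is a distinct reduced form
                        h += 1
            a += 1
        b += 2
    return h

# orders of discriminant 3^(2n) * (-7): h = 1, 4, 12 for n = 0, 1, 2
assert [class_number(-7 * 9**n) for n in (0, 1, 2)] == [1, 4, 12]
```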
It follows from class field \ntheory that there is a unique field $K_\\O$ with the property that the \nArtin map induces an isomorphism\n$$\n\\operatorname{Gal}(K_\\O\/K) \\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ \\operatorname{cl}(\\O)\n$$\nbetween the Galois group of $K_\\O\/K$ and the ideal class group of $\\O$.\nThe field $K_\\O$ is called the {\\it ring class field\\\/} for the order~$\\O$.\nIf $\\O$ is the maximal order of $K$, then $K_\\O$ is the Hilbert class field \nof~$K$, the maximal totally unramified abelian extension of~$K$.\nIn general, primes dividing $\\idx{\\O_K}{\\O}$ ramify in the ring \nclass field. \n\nThe first main theorem of complex \nmultiplication~\\cite[Thm.~11.1]{Cox:ComplexMultiplication} states that\n$$\nK_\\O = K(j(E)),\n$$\nfor any complex elliptic curve $E$ with endomorphism ring~$\\O$.\nFurthermore, the minimal polynomial $H_\\O$ of $j(E)$ over $K$ actually has\ncoefficients in $\\mathbf{Z}$, and its degree is~$h(\\O) = |\\operatorname{cl}(\\O)|$.\nThe polynomial $H_\\O$ is known as the {\\it Hilbert class polynomial\\\/}.\nIf $p$ is a prime that splits completely in the extension $K_\\O\/\\mathbf{Q}$, \nthen $H_\\O$ splits into distinct linear factors in $\\mathbf{F}_p[X]$.\nIts roots are the $j$-invariants of the elliptic curves $E\/\\mathbf{F}_p$ with \n$\\operatorname{End}(E)\\cong\\O$, a set we denote $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$. Let $D=\\operatorname{disc}(\\O)$.\nThe primes that split completely in $K_\\O$ are precisely the\nprimes $p \\nmid D$ that are the norm\n$$\np = N_{K\/\\mathbf{Q}}\\left(\\frac{t+v\\sqrt{D}}{2}\\right) = \\frac{t^2 - v^2D}{4}\n$$\nof an element of~$\\O$. 
The equation $4p = t^2 - v^2D$ is\noften called the \\emph{norm equation}.\n\nFor a positive integer~$N$, there is a unique extension $K_{N,\\O}$ of the ring \nclass field $K_\\O$ such that the Artin map induces an isomorphism\n$$\n\\operatorname{Gal}(K_{N,\\O}/K_\\O) \\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ (\\O/N\\O)^* / \\{ \\pm 1 \\}.\n$$\nThe field $K_{N,\\O}$ is the {\\it ray class field of conductor~$N$ for~$\\O$\\\/}.\nWhen $\\O=\\O_K$, this is simply the ray class field of conductor~$N$, and for\n$N=1$ we recover the ring class field $K_\\O = K_{1,\\O}$.\nThe ring class field $K_{R}$ of the order $R = \\mathbf{Z} + N\\O$ is a subfield of the \nray class field~$K_{N,\\O}$. The Galois group of $K_R/K_\\O$ is isomorphic to\n$$\n(\\O/N\\O)^* / (\\mathbf{Z}/N\\mathbf{Z})^*,\n$$\nthe kernel of the map $\\varphi$ in~(\\ref{exactsequence}).\n\nThe second main theorem of complex multiplication~\\cite[Thm.~11.39]{Cox:ComplexMultiplication} states that\n$$\nK_{N,\\O} = K_\\O(x(E[N])),\n$$\nwhere $x(E[N])$ denotes the set of $x$-coordinates of the $N$-torsion points of an \nelliptic curve~$E$ with endomorphism ring~$\\O$. \nThe Galois invariance of the {\\it Weil pairing\\\/} \n$E[N] \\times E[N] \\rightarrow \\mu_N$ implies that the cyclotomic field \n$\\mathbf{Q}(\\zeta_N)$ is contained in the ray class field~$K_{N,\\O}$ (a fact\nthat also follows directly from class field theory).\nIn particular, a prime $p$ that splits completely in $K_{N,\\O}$ also splits \ncompletely in $\\mathbf{Q}(\\zeta_N)$, and is therefore congruent to $1$ modulo~$N$.\n\n\\subsection{Primes that split completely in the ray class field}\\label{suitableprimes}\nWe are specifically interested in primes $p$ that split completely in the\nray class field $K_{l,\\O}$, where $l$ is an odd prime. 
For such $p$\nwe can achieve the desired setting for Algorithm~\\ref{alg1}, as depicted in\nFigure~1.\n\\renewcommand\\labelenumi{(\\theenumi)}\n\\begin{theorem}\\label{theprimes}\nLet $l> 2$ be prime, and let $\\O \\not = \\mathbf{Z}[i], \\mathbf{Z}[\\zeta_3]$ be an imaginary quadratic order that is maximal at~$l$.\nLet $R = \\mathbf{Z}+l\\O$ be the order of index~$l$ in~$\\O$.\nLet $p$ be a prime that splits completely in the ray class field $K_{l,\\O}$, but does not split completely in the ring class field for the order of index $l^2$ in~$\\O$.\n\\begin{enumerate}\n\\item There are exactly $h(\\O)$ different $\\mathbf{F}_p$-isomorphism classes of elliptic curves $E\/\\mathbf{F}_p$ with endomorphism ring $\\O$ that have $E[l] \\subset E(\\mathbf{F}_p)$.\n\\item There are exactly $h(R)$ different $\\mathbf{F}_p$-isomorphism classes of elliptic curves $E\/\\mathbf{F}_p$ with endomorphism ring $R$ that have an $\\mathbf{F}_p$-rational $l^2$-torsion point.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nThe inclusions $K_\\O \\subseteq K_R \\subseteq K_{l,\\O}$ imply that both $H_\\O$ \nand $H_R$ split into linear factors in~$\\mathbf{F}_p$.\nEach $j$-invariant in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, resp.~$\\text{\\rm Ell}_R(\\mathbf{F}_p)$, corresponds to two \ndistinct isomorphism classes over $\\mathbf{F}_p$, since these curves are ordinary\nand $\\O \\not = \\mathbf{Z}[i], \\mathbf{Z}[\\zeta_3]$.\nWe will show that exactly one of these satisfies (1), resp.~(2).
\n\nSince $p$ splits completely in the ray class field $K_{l,\\O}$, we can \nfactor $p$ as $\\pi_p \\overline\\pi_p$ in $\\O$, with $\\pi_p \\equiv 1 \\bmod l\\O$.\nLet $E\/\\mathbf{F}_p$ be an elliptic curve with endomorphism ring~$\\O$ whose Frobenius \nendomorphism corresponds to $\\pi_p$ under one of the two isomorphisms \n$\\operatorname{End}(E) \\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ \\O$.\nSince $\\pi_p \\equiv 1 \\bmod l\\O$, we have $E[l] \\subset E(\\mathbf{F}_p)$.\nThe Frobenius endomorphism of the non-isomorphic quadratic twist \n$\\tilde{E}\/\\mathbf{F}_p$ corresponds to $-\\pi_p$, and we then have \n$\\#\\tilde{E}(\\mathbf{F}_p)=p+1-\\operatorname{tr}(-\\pi_p)\\equiv 4\\bmod l$.\nFor $l\\not = 2$ this implies that~$\\tilde{E}$ has trivial $l$-torsion \nover $\\mathbf{F}_p$, proving (1).\n\nTo prove (2), let $E'\/\\mathbf{F}_p$ be a curve with endomorphism ring $R$ that is \n$l$-isogenous to~$E$. The Frobenius endomorphism of $E'$ also corresponds \nto $\\pi_p$ under an isomorphism $\\operatorname{End}(E') \\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ R$.\nThe cardinality of $E'(\\mathbf{F}_p)$ is thus equal to the cardinality of $E(\\mathbf{F}_p)$ and \ntherefore divisible by~$l^2$. However, since $p$ does {\\it not\\\/} split \ncompletely in the ring class field for the order of index $l^2$ in $\\O$, we cannot have \n$\\pi_p \\equiv 1 \\bmod lR$. It follows that $E'[l] \\not \\subset E'(\\mathbf{F}_p)$ and \n$E'(\\mathbf{F}_p)$ must contain a point of order $l^2$.
As above, the quadratic twist \nof $E'$ must have trivial $l$-torsion over $\\mathbf{F}_p$, proving (2).\n\\end{proof}\n\nProvided the order $\\O$ in Theorem~\\ref{theprimes} also satisfies \n$h(\\O)\\ge l+2$, we can achieve the desired setting for Algorithm~\\ref{alg1}.\nWe say such an order is \\emph{suitable} for $l$.\n\nTo determine the coefficients of $\\Phi_l$ via the Chinese Remainder Theorem, we \nneed to compute $\\Phi_l\\bmod p$ for many primes $p$ satisfying \nTheorem~\\ref{theprimes}.\nWe necessarily have $p>l$, since $p\\equiv 1\\bmod l$, and the height \nbound~\\cite{CohenPaula:ModularPolynomials} on the coefficients of $\\Phi_l$ \nimplies that $6l+O(1)$ primes suffice.\nWe now show these primes exist and bound their size, assuming the GRH.\nFor this purpose we define a \\emph{suitable family of orders}.\n\n\n\\begin{definition}\\label{suitableorders}\n{Let $S$ be the set of odd primes and let $T$ be the set of all imaginary quadratic orders.\nA suitable family of orders is a function $\\mathcal{F}:S\\to T$ such that:\n\\begin{enumerate}\n\\item\nfor all $l\\in S$ the order $\\mathcal{F}(l)$ is suitable for $l$;\n\\item\nthere exist effective constants $c_1,c_2\\in\\mathbf{R}_{>0}$ such that for all $l\\in S$ the bounds $l+2\\le h(\\mathcal{F}(l))\\le c_1l$ and $l^2\\le|\\operatorname{disc}(\\mathcal{F}(l))|\\le c_2l^2$ hold.\n\\end{enumerate}\n}\n\\end{definition}\n\n\\begin{example}\\label{exam}\n{Let $\\mathcal{F}(3)=\\mathbf{Z}[\\sqrt{-47}]$, and for $l > 3$ \nlet $\\mathcal{F}(l)$ be the order $\\O$ of discriminant $-7\\cdot3^{2n}$, \nwhere $n$ is the least integer for which $h(\\O)=4\\cdot3^{n-1}\\ge l+2$.\nLetting $c_1=4$ and $c_2=205$, we see that $\\mathcal{F}$ is a suitable family \nof orders.}\n\\end{example}\n\n\\begin{theorem}\\label{smallprimes}\nLet $\\mathcal{F}$ be a suitable family of orders and let $c_0\\in \\mathbf{R}_{>0}$ be an arbitrary constant.\nThen for each prime $l>2$ the set of primes $p$ for which $l$, $\\O=\\mathcal{F}(l)$
and~$p$ satisfy the conditions of Theorem~\\ref{theprimes} has positive density.\n\nAssuming the GRH, there is an effective constant $c\\in\\mathbf{R}_{>0}$ such that at \nleast $c_0l^3(\\log l)^3$ of these primes are bounded by $B=cl^6(\\log l)^4$, \nfor all primes $l>7$.\n\\end{theorem}\n\\begin{proof}\nFor a prime $l>2$, let $\\O=\\mathcal{F}(l)$ have fraction field $K$, and let $u=[\\O_K:\\O]$.\nThe ray class field $K_{l,\\O}$ and the ring class field $K_S$ for the order $S=\\mathbf{Z}+l^2\\O$ are both invariant under the action of\ncomplex conjugation, hence both are Galois extensions of $\\mathbf{Q}$.\nOne finds that\n\\begin{center}\n$\\#\\operatorname{Gal}(K_{l,\\O}\/\\mathbf{Q})=\\bigl(l-1\\bigr)\\bigl(l-\\inkron{d_K}{l}\\bigr)h(\\O)\\medspace < \\medspace 2l\\bigl(l-\\inkron{d_K}{l}\\bigr)h(\\O) = \\#\\operatorname{Gal}(K_S\/\\mathbf{Q}),$\n\\end{center}\nand the Chebotar\\\"ev density theorem~\\cite[Thm.~13.4]{Neukirch:AlgebraicNumberTheory} yields the unconditional claim.\n\nTo prove the conditional claim, we apply an effective Chebotar\\\"ev bound to the extension $K_{l,\\O}\/\\mathbf{Q}$, assuming the GRH for the Dedekind zeta function of $K_{l,\\O}$.\n\nThe extension $K_{l,\\O}\/K$ is abelian of conductor dividing $lu$, with \ndegree $nh(\\O)$, where $n\\leq \\#(\\O\/l\\O)^*\\le l^2$.\nThe $\\O_K$-ideal $\\operatorname{disc}(K_{l,\\O}\/K)$ is a divisor of $(lu)^{nh(\\O)}$, by \nHasse's {\\it F\\\"uhrerdiskriminantenproduktformel\\\/}~\\cite[Thm.~VII.11.9]{Neukirch:AlgebraicNumberTheory}.\nWe then have\n\\begin{align*}\n|\\operatorname{disc}(K_{l,\\O}\/\\mathbf{Q})| &= |N_{K\/\\mathbf{Q}}(\\operatorname{disc}(K_{l,\\O}\/K))\\cdot \n \\operatorname{disc}(K\/\\mathbf{Q})^{[K_{l,\\O}:K]} |\\\\\n &\\le (lu)^{2 nh(\\O)} |\\operatorname{disc}(K\/\\mathbf{Q})|^{nh(\\O)} \\le (c_2l^4)^{nh(\\O)},\n\\end{align*}\nwhere we have used $|\\operatorname{disc}(\\O)| = u^2\\,|\\operatorname{disc}(K\/\\mathbf{Q})|\\le c_2l^2$.
Using the bound $h(\\O)\\le c_1l$, Theorem~1.1 of~\\cite{Lagarias:Chebotarev} then yields\n\\begin{equation}\\label{LCbound}\n\\left|\\pi(x,K_{l,\\O}\/\\mathbf{Q})-\\frac{\\operatorname{Li}(x)}{2nh(\\O)}\\right|\\le c_3\\left(x^{1\/2}\\log(lx)+l^3\\log l\\right),\n\\end{equation}\nwhere $\\pi(x,K_{l,\\O}\/\\mathbf{Q})$ counts the primes up to $x\\in\\mathbf{R}_{>0}$ that split \ncompletely in $K_{l,\\O}$, and $c_3\\in \\mathbf{R}_{>0}$ is an effectively computable \nconstant, independent of $l$.\n\nIf we now take $x=cl^6(\\log l)^4$, and apply $\\operatorname{Li}(x)\\sim x\/\\log x$ and \n$nh(\\O)\\le c_1l^3$, we may choose $c\\in \\mathbf{R}_{>0}$ so that $\\operatorname{Li}(x)\/(2nh(\\O))$ \nis greater than the RHS of~(\\ref{LCbound}) by an arbitrarily large constant \nfactor. In particular, for any $c_4\\in \\mathbf{R}_{>0}$ there is an effectively \ncomputable choice of $c$ that ensures $\\pi(x,K_{l,\\O}\/\\mathbf{Q}) \\ge c_4l^3(\\log l)^3$, independent of $l$.\nMoreover, for the least such $c$ we have $c\/c_4\\to 1$ as $c_4\\to\\infty$.\n\nWe now show that most of these primes do not split completely in $K_S$.\nAny prime $p$ that splits completely in $K_{l,\\O}$ must split completely in the ring class field for $R=\\mathbf{Z}+l\\O$.\nPutting $D=\\operatorname{disc}(\\O)$, we then have\n\\begin{equation}\\label{normeq1}\n4p=t^2-v^2l^2D,\n\\end{equation}\nwith $t,v\\in\\mathbf{Z}_{>0}$ and $t\\equiv 2\\bmod l$.\nIf $v\\not\\equiv 0\\bmod l$, then $p$ cannot split completely in $K_S$.\nFor $p\\le cl^6(\\log l)^4$, we have $v\\le 2c^{1\/2}l(\\log l)^2$ and \n$t\\le 2c^{1\/2}l^3(\\log l)^2$, since $|D|\\ge l^2$, hence there are at most \n$2c^{1\/2}(\\log l)^2$ positive $v\\equiv 0\\bmod l$, and at most \n$2c^{1\/2}l^2(\\log l)^2+1$ positive $t\\equiv 2\\bmod l$, that \nsatisfy~(\\ref{normeq1}).\n\nIt follows that no more than $4cl^2(\\log l)^4+2c^{1\/2}(\\log l)^2$ primes \n$p\\le cl^6(\\log l)^4$ split completely in $K_S$.
For a sufficiently large \nchoice of $c_4$, we can choose $c$ so that \n$\\pi(x,K_{l,\\O}\/\\mathbf{Q}) \\ge c_4l^3(\\log l)^3$ and at the same time ensure that\n$$\nc_4l^3(\\log l)^3-4cl^2(\\log l)^4-2c^{1\/2}(\\log l)^2>c_0l^3(\\log l)^3,\n$$\nfor all primes $l > 7$, since we then have $l\/\\log l$ bounded below by~$4$.\n\\end{proof}\n\nTheorem~\\ref{smallprimes} guarantees that we can obtain a sufficient number of primes $p$ for use with Algorithm~\\ref{alg1}.\nIn fact, as is typical for such bounds, it provides far more than we need. The task of finding these primes is addressed in Section~\\ref{selectingprimes}.\n\n\\subsection{Computing the CM action}\\label{CMaction1}\\par\nThe Galois action of $\\operatorname{Gal}(K_\\O\/K)\\cong\\operatorname{cl}(\\O)$ on the set\n$\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ may be explicitly computed using isogenies,\nas described in \\cite{Belding:HilbertClassPolynomial}.\nLet the prime $p$ split completely in the ring class field $K_\\O$, \nand let $E\/\\mathbf{F}_p$ be an elliptic curve with $\\operatorname{End}(E)\\cong\\O$.\nFixing an isomorphism $\\operatorname{End}(E)\\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ \\O$, for each invertible $\\O$-ideal $\\mathfrak{a}$ we define\n$$\nE[\\mathfrak{a}] = \\{ P \\in E(\\overline \\mathbf{F}_p) \\mid \\forall \\tau \\in \\mathfrak{a} : \\tau(P) = 0 \\},\n$$\nthe `$\\mathfrak{a}$-torsion' subgroup of $E$.\nThe subgroup $E[\\mathfrak{a}]$ is the kernel of a separable isogeny $E\\rightarrow E\/E[\\mathfrak{a}]$ of degree $\\idx{\\O}{\\mathfrak{a}}$, with $\\operatorname{End}(E\/E[\\mathfrak{a}])\\cong \\O$. This yields a group action\n$$\nj(E)^\\mathfrak{a} = j(E\/E[\\mathfrak{a}]),\n$$\nin which the ideal group of $\\O$ acts on the set $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$.\nThis action factors through the class group, and the $\\operatorname{cl}(\\O)$-action is transitive and free.
Equivalently,\n$\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ is a \\emph{torsor} for $\\operatorname{cl}(\\O)$; for each pair $(j_1,j_2)$ of elements in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ there is a unique\nelement of $\\operatorname{cl}(\\O)$ whose action sends $j_1$ to $j_2$.\n\nNow let ${\\mathfrak{l}}_0$ be an invertible $\\O$-ideal of prime norm $l_0\\not= p$.\nThe curves $E$ and $E\/E[{\\mathfrak{l}}_0]$ are $l_0$-isogenous, hence\n$$\n\\Phi_{l_0}(j_0,j_0^{{\\mathfrak{l}}_0})= 0,\n$$\nwhere $j_0=j(E)$.\nTo compute the action of ${\\mathfrak{l}}_0$, we need to find the corresponding root of $\\Phi_{l_0}(X,j_0)\\in\\mathbf{F}_p[X]$.\nWe assume that $\\Phi_{l_0}(X,Y)$ is known, either via one of the algorithms \nfrom the introduction, or by a previous application of Algorithm~\\ref{alg2}.\nThe polynomial $\\Phi_{l_0}(X,j_0)\\in\\mathbf{F}_p[X]$ has either 1 or 2 roots that lie \nin $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, depending on whether $l_0$ ramifies or splits in $K$ (it cannot be inert, since $\\O$ contains an ideal of norm~$l_0$).\nThese roots correspond to the actions of ${\\mathfrak{l}}_0$ and its inverse ${\\mathfrak{l}}_0^{-1}$, which coincide when $l_0$ ramifies.
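Computationally, the roots of $\\Phi_{l_0}(X,j_0)$ in $\\mathbf{F}_p$ are extracted from $\\gcd(X^p-X,\\Phi_{l_0}(X,j_0))$, as in the proof of Lemma~\\ref{CMactioncost} below. The Python sketch below (ours) shows the kernel of that computation with schoolbook arithmetic; a real implementation would use the fast multiplication and fast Euclidean algorithms cited there. Polynomials are coefficient lists in ascending degree, and $f$ is assumed monic.

```python
def polmod(f, g, p):
    """Remainder of f modulo a monic polynomial g, coefficients in F_p."""
    f = [c % p for c in f]
    while len(f) >= len(g):
        c = f[-1]                        # cancel the leading term of f
        for i in range(len(g) - 1):
            f[len(f) - len(g) + i] = (f[len(f) - len(g) + i] - c * g[i]) % p
        f.pop()
    while f and f[-1] == 0:
        f.pop()
    return f

def polmulmod(a, b, g, p):
    """a*b mod (g, p), by schoolbook multiplication."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return polmod(r, g, p)

def polgcd(a, b, p):
    """Monic gcd over F_p via the Euclidean algorithm."""
    while b:
        inv = pow(b[-1], -1, p)          # make b monic
        b = [c * inv % p for c in b]
        a, b = b, polmod(a, b, p)
    return a

def distinct_roots_degree(f, p):
    """deg gcd(X^p - X, f): the number of distinct roots of f in F_p."""
    xp, sq, e = [1], [0, 1], p           # X^p mod f by square-and-multiply
    while e:
        if e & 1:
            xp = polmulmod(xp, sq, f, p)
        sq = polmulmod(sq, sq, f, p)
        e >>= 1
    h = xp + [0] * (2 - len(xp))         # pad so we can subtract X
    h[1] = (h[1] - 1) % p
    while h and h[-1] == 0:
        h.pop()
    return len(polgcd(f, h, p)) - 1

# For f = (X-1)(X-2)(X-5) over F_7, i.e. [4, 3, 6, 1], the result is 3.
```

In the setting above one takes $f=\\Phi_{l_0}(X,j_0)$; the gcd then has degree 1 or 2, and its roots are found by the randomized root-finding step of the lemma.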
\n\nOur fixed isomorphism $\\operatorname{End}(E)\\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ \\O$ maps the Frobenius endomorphism of $E$ to an element $\\pi_p\\in\\O\\subset\\O_K$ with norm $p$.\nWe then have the norm equation\n\\begin{equation}\\label{norm-equation}\n4p=t^2-v^2d_K,\n\\end{equation}\nwhere $t=\\operatorname{tr}(\\pi_p)$, and $v$ is the index of $\\mathbf{Z}[\\pi_p]$ in $\\O_K$.\nWhen $l_0$ does not divide $v$, the order $\\mathbf{Z}[\\pi_p]$ is \nmaximal at $l_0$ and the only roots of $\\Phi_{l_0}(X,j_0)$ over $\\mathbf{F}_p$ are those \nin $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$.\nOtherwise $\\Phi_{l_0}(X,j_0)$ has $l_0+1$ roots in $\\mathbf{F}_p$, and those in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ lie on the surface of the $l_0$-volcano containing~$j_0$, as described in \\cite{Fouquet:IsogenyVolcanoes}.\nThe roots on the surface can be readily distinguished, as in~\\cite[\\S4]{Sutherland:HilbertClassPolynomials}, for example, but typically we choose $p$ with $l_0\\nmid v$ so that every root of $\\Phi_{l_0}(X,j_0)$ in $\\mathbf{F}_p$ is on the surface.\n\nWhen $l_0$ splits and does not divide $v$, the actions of ${\\mathfrak{l}}_0$ \nand ${\\mathfrak{l}}_0^{-1}$ may be distinguished as described in~\\cite[\\S5]{Broker:pAdicClassPolynomial} and~\\cite[\\S3]{Galbraith:GHSattack}.\nThe kernels of the two $l_0$-isogenies are subgroups of $E[l_0]$.\nA standard component of the SEA algorithm computes a polynomial $F_{l_0}(X)$ whose roots are the abscissae of the points in one of these kernels~\\cite{Elkies:AtkinBirthday,Schoof:ECPointCounting2}.\nIn our setting $l_0$ splits in $\\mathbf{Z}[\\pi_p]$, and provided $l_0\\nmid v$, the action of $\\pi_p$ on $E[l_0]$ has two distinct eigenvalues corresponding to the two kernels.\nExpressing the ideal ${\\mathfrak{l}}_0$ in the form $(l_0,c+d\\pi_p)$ yields the eigenvalue $\\lambda=-c\/d \\bmod l_0$.\nWe may then use $F_{l_0}(X)$ to test whether $\\pi_p$ acts as
multiplication by $\\lambda$ on the corresponding kernel. See~\\cite{Broker:pAdicClassPolynomial} for an example and further details.\n\nAs a practical optimization (see Section~\\ref{selectorder}), we avoid the need to ever make this distinction.\nThe asymptotic complexity of computing the action of ${\\mathfrak{l}}_0$ is the same in any case.\n\n\\begin{lemma}\\label{CMactioncost}\nLet $l_0$ and $p$ be distinct odd primes, and let $\\O\\ne\\mathbf{Z}[i],\\mathbf{Z}[\\zeta_3]$ be an imaginary quadratic order.\nLet $j_0=j(E)\\in\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, fix an isomorphism $\\operatorname{End}(E)\\ \\smash{\\mathop{\\longrightarrow}\\limits^{\\thicksim}}\\ \\O$, and let $\\pi_p\\in\\O$ denote the image of the Frobenius endomorphism.\nLet ${\\mathfrak{l}}_0$ be an invertible $\\O$-ideal of norm $l_0$, and assume $\\mathbf{Z}[\\pi_p]$ is maximal at $l_0$.\n\nGiven $\\Phi_{l_0}\\in\\mathbf{F}_p[X,Y]$, the $j$-invariant $j_0^{{\\mathfrak{l}}_0}$ may be computed using an expected\n$O(l_0^2+\\textsf{M}(l_0)\\log p)$ operations in $\\mathbf{F}_p$.\n\\end{lemma}\nHere $\\textsf{M}(n)$ denotes the complexity of multiplying two polynomials of degree less than $n$, as in~\\cite[Def.~8.26]{Gathen:ComputerAlgebra}.\nNa\\\"ively, $\\textsf{M}(n)=O(n^2)$, Karatsuba's algorithm yields $\\textsf{M}(n)=O(n^{\\log_2 3})$, and FFT-based methods achieve $\\textsf{M}(n)=O(n\\log n\\log\\log n)$.\n\\begin{proof}\nWe first compute $\\gcd(X^p-X,\\Phi_{l_0}(X,j_0))$, the product of the\ndistinct linear factors of $\\Phi_{l_0}(X,j_0)$ over $\\mathbf{F}_p$.
Instantiating\n$f(X)=\\Phi_{l_0}(X,j_0)$ uses $O(l_0^2)$ operations in $\\mathbf{F}_p$, exponentiating\n$X^p\\bmod f$ uses $O(\\textsf{M}(l_0)\\log p)$ operations in $\\mathbf{F}_p$, and the fast\nEuclidean algorithm~\\cite[\\S11.1]{Gathen:ComputerAlgebra} obtains\n$\\gcd(X^p-X,f)$ using $O(\\textsf{M}(l_0)\\log l_0)=O(l_0^2)$ operations in $\\mathbf{F}_p$.\nThis gcd has degree at most 2, since $\\mathbf{Z}[\\pi_p]$ is maximal at $l_0$,\nand we may find its roots using an expected $O(\\log p)$ operations in $\\mathbf{F}_p$\n\\cite[Cor.~14.16]{Gathen:ComputerAlgebra}.\n\nThe desired root $j_0^{{\\mathfrak{l}}_0}$ is then distinguished as outlined above.\nWe first compute the eigenvalue $\\lambda=-c\/d\\bmod l_0$, where ${\\mathfrak{l}}_0=(l_0,c+d\\pi_p)$, using $O(l_0^2)$ bit operations.\nApplying~\\cite[Thm.~2.1]{Bostan:FastIsogenies}, the kernel polynomial $F_{l_0}(X)$ can be computed using $O(\\textsf{M}(l_0))$ operations in $\\mathbf{F}_p$.\nTo compare $(X^p,Y^p)$ to the scalar multiple $\\lambda\\cdot(X,Y)$, we compute $X^p$, $Y^p$, and the required division polynomials $\\psi_n(X,Y)$, modulo $F_{l_0}(X)$ and the curve equation for $E$, as in the SEA algorithm~\\cite[Ch.~VII]{Blake:EllipticCurves}.\nThis uses $O((\\log l_0 + \\log p)\\textsf{M}(l_0))=O(l_0^2+\\textsf{M}(l_0)\\log p)$ operations in $\\mathbf{F}_p$.\n\\end{proof}\n\n\\section{Mapping the CM torsor}\nThe previous section made explicit the Galois action corresponding to an\nelement of $\\operatorname{cl}(\\O)\\cong\\operatorname{Gal}(K_\\O\/K)$ represented by an ideal $\\mathfrak{l}_0$ of prime norm $l_0$.\nWe now use this to enumerate the set $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, and at the same time compute a map that\nexplicitly identifies the action of each element of $\\operatorname{cl}(\\O)$.\nTo do this efficiently it is critical to work with generators whose norms are small, since\nthe cost of computing the action of $\\mathfrak{l}_0$ increases quadratically with its
norm.\n\n\\subsection{Polycyclic presentations}\\label{polycyclic}\nSince $\\operatorname{cl}(\\O)$ is a finite abelian group, each of its elements can be uniquely represented in terms of a basis.\nHowever, as noted in~\\cite[\\S5.3]{Sutherland:HilbertClassPolynomials}, the norms arising in\na basis may need to be much larger than those in a set of generators.\nThus we are led to consider polycyclic presentations.\n\nLet $G$ be a finite abelian group, let $\\vec{\\alpha}=(\\alpha_1,\\ldots,\\alpha_k)$ be a sequence of generators for $G$, and let $G_i=\\langle\\alpha_1,\\ldots,\\alpha_i\\rangle$ denote the subgroup generated by $\\alpha_1,\\ldots,\\alpha_i$.\nThe series of subgroups\n$$\n1=G_0\\le G_1 \\le \\cdots\\le G_{k-1} \\le G_{k} = G,\n$$\nis then polycyclic, meaning that each quotient $G_{i+1}\/G_i$ is a cyclic group. The sequence $r(\\vec{\\alpha})=(r_1,\\ldots,r_k)$ of \\emph{relative orders} for $\\vec{\\alpha}$ is defined by\n$$\nr_i=|G_i:G_{i-1}|.\n$$\nEach $r_i$ necessarily divides $|\\alpha_i|$, and for $i>1$ we typically have $r_i < |\\alpha_i|$.\nThe sequences $\\vec{\\alpha}$ and $r(\\vec{\\alpha})$ allow us to uniquely represent each $\\beta\\in G$ in the form\n\\begin{equation}\\label{pcprep}\n\\beta=\\vec{\\alpha}^\\vec{x}=\\alpha_1^{x_1}\\cdots\\alpha_k^{x_k},\n\\end{equation}\nwhere $\\vec{x}=(x_1,\\ldots,x_k)$ with $0\\le x_i < r_i$; we let $X(\\vec{\\alpha})$ denote the set of such vectors $\\vec{x}$.\n\nTo obtain a polycyclic presentation of $G=\\operatorname{cl}(\\O)$, we fix a sequence $\\mathcal{P}$ of primes and take the $\\alpha_i$ to be the classes of invertible $\\O$-ideals ${\\mathfrak{l}}_i$ of prime norm $l_i\\in\\mathcal{P}$ whose relative orders satisfy $r_i>1$.\nWhen $l_i$ splits there are two possibilities for $\\alpha_i$.\nTo fix a choice, let $\\alpha_i$ be the ideal class represented by the unique binary quadratic form $ax^2+bxy+cy^2$ of discriminant $D=\\operatorname{disc}(\\O)$ with $a=l_i$ and $b$ nonnegative~\\cite[\\S3.4]{Buchmann:BinaryQuadraticForms}, corresponding to the ideal ${\\mathfrak{l}}_i=(l_i,(-b+\\sqrt{D})\/2)$.\n\nWe call $\\vec{\\alpha}$ the \\emph{polycyclic presentation} of $\\operatorname{cl}(\\O)$ determined by $\\mathcal{P}$.\nWe use $l(\\vec{\\alpha})$ to denote the sequence of norms $(l_1,\\ldots,l_k)$, but note that this also depends on $\\mathcal{P}$; each $l_i$ is the least prime in
$\\mathcal{P}$ that is the norm of an ideal in $\\alpha_i$.\n\nWe may compute $\\vec{\\alpha}$ by applying~\\cite[Alg.~2.1]{Sutherland:HilbertClassPolynomials} to an implicit sequence of generators $\\vec{\\gamma}=(\\gamma_1,\\gamma_2,\\gamma_3,\\ldots)$ corresponding to the subsequence of $\\mathcal{P}$ for which there exists an invertible $\\O$-ideal of norm $p_i$.\nThe algorithm computes $r_i$ for each $\\gamma_i$ in turn, and if we find that $r_i>1$, we append $\\gamma_i$ to an initially empty vector~$\\vec{\\alpha}$. We terminate when $\\prod r_i=h(\\O)$, a value which we assume has been precomputed.\n\nThe computation of $\\vec{\\alpha}$ uses $|G|=h(\\O)$ group operations in $G=\\operatorname{cl}(\\O)$, and creates a table $T:X(\\vec{\\alpha})\\to G$ that stores $|G|=h(\\O)$ group elements~\\cite[Prop.~6]{Sutherland:HilbertClassPolynomials}.\nUsing binary quadratic forms to represent $\\operatorname{cl}(\\O)$, the group operation has bit-complexity $O(\\log^2|\\operatorname{disc}(\\O)|)$, as shown in~\\cite{Biehl:FormReductionComplexity}, and each element may be stored in $O(\\log|\\operatorname{disc}(\\O)|)$ space.\nEvaluating $T(\\vec{x})$, or $T^{-1}(\\beta)$, has bit-complexity $O(\\log|G|)$.\n\n\\subsection{Suitable presentations}\nWhen $\\mathcal{P}$ is the sequence of all primes, the norms $l(\\vec{\\alpha})$ for the polycyclic presentation $\\vec{\\alpha}$ of $\\operatorname{cl}(\\O)$ determined by $\\mathcal{P}$ are as small as possible.\nHowever, when working in a finite field $\\mathbf{F}_p$, we may wish to ensure that each norm $l_i$ does not divide $v=[\\O_K:\\mathbf{Z}[\\pi_p]]$, as noted in Section~\\ref{CMaction1}.\nThis is achieved by excluding from $\\mathcal{P}$ the primes that divide $v$, and we call the corresponding $\\vec{\\alpha}$ the presentation of $\\operatorname{cl}(\\O)$ \\emph{suitable} for $p$. This may cause us to use norms that are slightly larger than optimal.
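The computation of $\\vec{\\alpha}$, $r(\\vec{\\alpha})$ and the table $T$ just described can be exercised on any finite abelian group. In the Python sketch below (ours; the multiplicative group $(\\mathbf{Z}\/m\\mathbf{Z})^*$ stands in for $\\operatorname{cl}(\\O)$), each relative order is computed in turn, generators with $r_i=1$ are dropped, and $T$ records the exponent vector of every element:

```python
def polycyclic_presentation(gens, m):
    """Presentation of the subgroup of (Z/mZ)^* generated by gens.

    Returns (alphas, rels, T): the retained generators, their relative
    orders, and a table mapping each element to its exponent vector.
    """
    T = {1: ()}                     # the identity has the empty vector
    alphas, rels = [], []
    for g in gens:
        g %= m
        x, r = g, 1                 # r = least power of g lying in the
        while x not in T:           # subgroup generated so far
            x = x * g % m
            r += 1
        if r == 1:
            continue                # g generates nothing new; drop it
        alphas.append(g)
        rels.append(r)
        T = {elt * pow(g, e, m) % m: vec + (e,)
             for elt, vec in T.items() for e in range(r)}
    return alphas, rels, T

# For m = 35 and generators (2, 4, 34): the generator 4 = 2^2 is dropped,
# leaving relative orders (12, 2) and a table of all 24 units mod 35.
```

Here the group operation is multiplication mod $m$; for $\\operatorname{cl}(\\O)$ one would instead compose binary quadratic forms, as in the complexity discussion above.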
We now show that, provided we work\nwith a family of orders that satisfies certain (easily met) constraints, the norms in every suitable presentation are quite small, assuming the GRH.\n\n\\begin{theorem}\\label{suitablepresentation}\nLet $c_3\\in\\mathbf{R}_{>0}$ be a fixed constant, and let $\\mathcal{F}$ be a suitable family of orders with the following additional property: if $\\O$ is an order in $\\mathcal{F}$ whose fraction field $K$ has discriminant $d_K$, and $s_\\O$ denotes the square-free part of $[\\O_K:\\O]$, then $s_\\O$ is coprime to $2d_K$ and both $s_\\O$ and $|d_K|$ are bounded by $c_3$.\n\nThen under the GRH, for every $\\O$ in $\\mathcal{F}$ and every prime $p$ that splits completely in $K_\\O$, the presentation $\\vec{\\alpha}$ of $\\operatorname{cl}(\\O)$ suitable for $p$ has norms $l(\\vec{\\alpha})$ for which\n$$\\max l(\\vec{\\alpha})\\le c\\omega(v)\\log (\\omega(v)+1),$$\nwhere $v$ is defined by $4p=t^2-v^2\\operatorname{disc}(\\O)$, the function $\\omega(v)$ counts the distinct prime factors of $v$, and $c$ is an effective constant that depends only on $c_3$.\n\\end{theorem}\n\\begin{proof}\nLet $\\O$, $p$, $v$, and $\\vec{\\alpha}$ be as above.\nLet $R$ be the order of index $s_\\O^2$ in $\\O_K$, and let $\\vec{\\gamma}$ be the presentation of $\\operatorname{cl}(R)$ determined by the increasing sequence of primes that do not divide $v$.\nIt follows from Theorem~\\ref{theorders} that $\\max l(\\vec{\\alpha}) \\le \\max l(\\vec{\\gamma})$.\n\nLet $K_R$ be the ring class field for $R$ and let $\\pi_C(x,K_R\/\\mathbf{Q})$ count the primes bounded by $x\\in\\mathbf{R}_{>0}$ whose Frobenius symbol (under the Artin map) lies in the conjugacy class $C$ of $\\operatorname{Gal}(K_R\/\\mathbf{Q})$.\nWe may bound $\\idx{K_R}{\\mathbf{Q}}$, $\\#\\operatorname{Gal}(K_R\/\\mathbf{Q})$, and $\\operatorname{disc}(K_R)$ by constants that depend only on $c_3$, independent of $\\O$.\nUnder the GRH, the Chebotar\\\"ev bound of~\\cite[Thm.~1.1]{Lagarias:Chebotarev}
then yields\n$$\n\\pi_C(x,K_R\/\\mathbf{Q}) \\ge c_4x\/\\log x,\n$$\nfor some effective constant $c_4\\in\\mathbf{R}_{>0}$ and all $x > 2$, where $c_4$ depends only on~$c_3$.\nFor an effective constant $c$ depending only on $c_3$, setting $x=c\\omega(v)\\log(\\omega(v)+1)$ yields $\\pi_C(x,K_R\/\\mathbf{Q}) > \\omega(v)$. In this case the Frobenius symbol of at least one prime $p\\le x$ not dividing $v$ lies in $C$, and this holds for every conjugacy class $C$. It follows that every class in $\\operatorname{cl}(R)$ contains an element whose norm is a prime bounded by $x$ that does not divide $v$.\nWe then have $\\max l(\\vec{\\alpha}) \\le \\max l(\\vec{\\gamma}) \\le x=c\\omega(v)\\log(\\omega(v)+1)$, as desired.\n\\end{proof}\n\nThe family of orders in Example~\\ref{exam} satisfies the requirements of Theorem~\\ref{suitablepresentation}.\nWe note that provided $\\log p = O(\\log l)$, we have $\\omega(v)=O(\\log l\/\\log\\log l)$, and the theorem then yields an $O(\\log l)$ bound on the norms $l(\\vec{\\alpha})$.\nThis is sharper than the more general $O(\\log^2 l)$ bound implied by~\\cite{Bach:ERHbounds}.\nIn fact, by~\\cite[Thm.~431]{Hardy:NumberTheory}, one expects $\\omega(v)=O(\\log\\log p)$, which yields a bound of $O(\\log\\log l \\log\\log\\log l)$.\n\n\\subsection{Realizing the CM torsor}\\label{CMaction2}\nWe now consider how to explicitly map $\\operatorname{cl}(\\O)$ to the torsor $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, so that we may then compute the action of any element (or subgroup) of $\\operatorname{cl}(\\O)$ on any element of $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, \\emph{without needing to compute any further isogenies}.\nWe use the presentation $\\vec{\\alpha}$ of $\\operatorname{cl}(\\O)$ suitable for~$p$, and the table $T:X(\\vec{\\alpha})\\to\\operatorname{cl}(\\O)$ described in Section~\\ref{polycyclic}.\nAs above, we have $\\vec{\\alpha}=([{\\mathfrak{l}}_1],\\ldots,[{\\mathfrak{l}}_k])$, with norms $l(\\vec{\\alpha})=(l_1,\\ldots,l_k)$ and relative orders
$r(\\vec{\\alpha})=(r_1,\\ldots,r_k)$. We assume the modular polynomials $\\Phi_{l_1},\\ldots,\\Phi_{l_k}$ are known, since the $l_i$ are small.\n\nTo enumerate $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ we use~\\cite[Alg.~1.3]{Sutherland:HilbertClassPolynomials}, but we augment this algorithm to also compute an explicit bijection $\\phi\\text{\\hspace{2pt}:\\hspace{1pt}}\\operatorname{cl}(\\O)\\to\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ in which $[\\mathfrak{a}]\\in\\operatorname{cl}(\\O)$ corresponds to $j_0^{\\mathfrak{a}}$.\nGiven $j_0\\in\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, we compute a path of $l_k$-isogenies\n\\begin{equation}\\label{isogenypath}\nj_0\\mapright{{\\mathfrak{l}}_k}j_1\\mapright{{\\mathfrak{l}}_k}j_2\\mapright{{\\mathfrak{l}}_k}\\cdots\\mapright{{\\mathfrak{l}}_k}j_{r_k-1},\n\\end{equation}\nwhere the $j_i$ are distinct elements of $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$.\nAs explained in Section~\\ref{CMaction1}, each step in this path is computed by finding a root $j_i$ of $\\Phi_{l_k}(X,j_{i-1})$ that lies in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$.\nWhen $l_k$ splits in $K$ we have two choices for $j_1$, and the correct choice may be determined using a kernel polynomial as outlined in Section~\\ref{CMaction1}.
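The path (\\ref{isogenypath}) is traversed without backtracking: after the first step, each new vertex is the neighbour we did not come from. The toy Python sketch below (ours; a cycle graph stands in for the $l_k$-isogeny cycle, and `neighbours` stands in for the roots of $\\Phi_{l_k}(X,j_{i-1})$ lying in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$) illustrates the traversal:

```python
def walk_path(neighbours, j0, steps):
    """Walk `steps` edges from j0, never returning to the previous vertex.

    neighbours(j) plays the role of the roots of Phi(X, j) lying in
    Ell_O(F_p); on the surface of the volcano there are at most two.
    """
    path, prev, cur = [j0], None, j0
    for _ in range(steps):
        nxt = [x for x in neighbours(cur) if x != prev]
        prev, cur = cur, nxt[0]   # nxt[0]: an arbitrary direction at the first step
        path.append(cur)
    return path

# On a 7-cycle, walk_path visits all seven vertices exactly once:
# walk_path(lambda j: [(j + 1) % 7, (j - 1) % 7], 0, 6) == [0, 1, 2, 3, 4, 5, 6]
```

In the actual algorithm the two choices at the first step correspond to ${\\mathfrak{l}}_k$ and ${\\mathfrak{l}}_k^{-1}$, and the direction is fixed with a kernel polynomial rather than arbitrarily.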
For $i>1$ we use the polynomial $\\Phi_{l_k}(X,j_{i-1})\/(X-j_{i-2})$, which has exactly one root $j_i\\in\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$.\n\nWhen $k>1$, the enumeration of $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ proceeds recursively.\nFor each $j_i$ in (\\ref{isogenypath}) we compute a path of $l_{k-1}$-isogenies containing $r_{k-1}$ distinct $j$-invariants.\nEventually, every element of $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ is enumerated exactly once~\\cite[Prop.~5]{Sutherland:HilbertClassPolynomials}.\nFor each $j_n\\in\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ we also compute a vector $\\vec{x}\\in X(\\vec{\\alpha})$ that describes the path used to reach $j_n$ from $j_0$, where $x_i$ indicates the number of steps taken on an $l_i$-isogeny path.\nBy correctly choosing the direction of each path, we ensure that each $j_n$ is the image of $j_0$ under the $\\operatorname{cl}(\\O)$-action of $\\vec{\\alpha}^{\\vec{x}}=\\alpha_1^{x_1}\\cdots\\alpha_k^{x_k}$.\nThis yields the desired bijection $\\phi$; since $\\vec{\\alpha}^{\\vec{x}}$ uniquely represents some $\\beta\\in\\operatorname{cl}(\\O)$, we may set $\\phi(\\beta)=j_n$, a process facilitated by the map $T:X(\\vec{\\alpha})\\to\\operatorname{cl}(\\O)$.\n\nThe bijection $\\phi$ allows us to translate any computation in the group $\\operatorname{cl}(\\O)$ to the torsor $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$. In particular, by enumerating the cyclic subgroup $H\\subseteq\\operatorname{cl}(\\O)$ generated by $[{\\mathfrak{l}}]$, where ${\\mathfrak{l}}$ is an ideal of norm $l$, we obtain the $l$-isogeny cycle containing~$j_0$, corresponding to the surface of one of the $l$-volcanoes in Figure~1. Doing the same for each coset of $H$ partitions $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ into $l$-isogeny cycles.\n\nThis may also be applied to the order $R=\\mathbf{Z}+l\\O$.
After obtaining a bijection from $\\operatorname{cl}(R)$ to $\\text{\\rm Ell}_R(\\mathbf{F}_p)$, we enumerate the kernel of the map $\\varphi\\text{\\hspace{2pt}:\\hspace{1pt}}\\operatorname{cl}(R)\\to\\operatorname{cl}(\\O)$ from the exact sequence~(\\ref{exactsequence}). Here we use one of the generators of norm $l^2$ guaranteed by Lemma~\\ref{generators}. Enumerating the cosets of $\\ker \\varphi$ then partitions $\\text{\\rm Ell}_R(\\mathbf{F}_p)$ into $l^2$-isogeny cycles of siblings with a common $l$-isogenous parent in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$.\n\n\\section{The algorithm}\nWe now present our algorithm to compute the modular polynomial $\\Phi_l$ using the Chinese Remainder Theorem (CRT).\nAlgorithm~\\ref{alg2} follows the standard pattern of a CRT-based algorithm; the details lie in Algorithm~\\ref{alg21}, which selects a set of primes $S$, and in Algorithm~\\ref{alg1}, which computes $\\Phi_l$ modulo each prime $p\\in S$.\n\nThe computation of $\\Phi_l\\in\\mathbf{Z}[X,Y]$ may be viewed as a special case of computing $\\Phi_l\\in(\\mathbf{Z}\/m\\mathbf{Z})[X,Y]$, where $m$ is the product of the primes in $S$.
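For a single integer coefficient, the CRT reconstruction amounts to the following standard computation (a Python sketch, ours): given the residues of $c$ modulo distinct primes whose product $m$ satisfies $2|c|<m$, combine them modulo $m$ and lift to the symmetric interval to recover the sign.

```python
def crt_lift(residues, primes):
    """Recover the signed integer c, 2|c| < m = prod(primes), from c mod p."""
    m = 1
    for p in primes:
        m *= p
    c = 0
    for r, p in zip(residues, primes):
        mp = m // p                      # mp = m/p is invertible mod p
        c = (c + r * mp * pow(mp, -1, p)) % m
    return c - m if 2 * c > m else c

# Example: recover a negative coefficient from its residues mod four primes.
primes = [101, 103, 107, 109]
assert crt_lift([-162000 % p for p in primes], primes) == -162000
```

In a CRT-based algorithm the residues arrive one prime at a time, so the sum is accumulated incrementally; the final reduction and symmetric lift form the postcomputation.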
The choice of $S$ ensures that this $m$ is large enough to uniquely determine $\\Phi_l\\in\\mathbf{Z}[X,Y]$.\n\n\\renewcommand\\labelenumi{\\theenumi.}\n\\renewcommand\\labelenumii{\\theenumii.}\n\\begin{algorithm}\\label{alg2}\n{Let $l$ be an odd prime, let $m$ be a positive integer, and let $\\O=\\mathcal{F}(l)$ lie in a suitable family of orders $\\mathcal{F}$.\nCompute $\\Phi_l\\in(\\mathbf{Z}\/m\\mathbf{Z})[X,Y]$ as follows:}\n\\begin{enumerate}\n\\item\nCompute the Hilbert class polynomial $H_\\O\\in\\mathbf{Z}[X]$.\n\\item\nSelect a set of primes $S$ with Algorithm~\\ref{alg21}, using $l$ and $\\O$.\n\\item\nPerform CRT precomputation using $S$.\n\\item\nFor each prime $p\\in S$:\n\\begin{enumerate}\n\\item\nCompute $\\Phi_l\\bmod p$ with Algorithm~\\ref{alg1}, using $\\O$ and $H_\\O$.\n\\item\nUpdate CRT data using $\\Phi_l\\bmod p$.\n\\end{enumerate}\n\\item\nPerform CRT postcomputation.\n\\item\nOutput $\\Phi_l\\in(\\mathbf{Z}\/m\\mathbf{Z})[X,Y]$.\n\\end{enumerate}\n\\end{algorithm}\nThe suitable family of orders $\\mathcal{F}$ is as defined in Section~\\ref{suitableprimes}, see Definition~\\ref{suitableorders}.\nThe polynomial $H_\\O$ computed in Step~1 may be obtained using any of several algorithms whose running time is quasi-linear in $|\\operatorname{disc}(\\O)|$, including~\\cite{Broker:pAdicClassPolynomial,Enge:FloatingPoint,Sutherland:HilbertClassPolynomials}.\n\n\\subsection{Selecting primes}\\label{selectingprimes}\nThe primes $p$ in the set $S$ selected by Algorithm~\\ref{alg2} must satisfy the conditions of Theorem~\\ref{theprimes} in order to be used in Algorithm~\\ref{alg1}.\nWe require $p$ to split completely in the ray class field $K_{l,\\O}$, but not to split completely in the ring class field for the order $\\mathbf{Z}+l^2\\O$. Equivalently, we need $p\\nmid D$ to satisfy\n\\begin{equation}\\label{normeq2}\n4p=t^2-v^2l^2D,\n\\end{equation}\nwith $t\\equiv 2\\bmod l$ and $l\\nmid v$, where $D=\\operatorname{disc}(\\O)$.
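Primes satisfying (\\ref{normeq2}) are easy to generate in practice by fixing $v$ (as discussed below) and walking $t\\equiv 2\\bmod l$ upward. A Python sketch of this heuristic (ours; trial-division primality testing stands in for a proper test):

```python
from math import log

def is_prime(n):
    """Trial division; adequate for the small p arising here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def heuristic_primes(l, D, bound):
    """Primes p = (t^2 - v^2*l^2*D)/4 with t = 2 mod l, until sum(log p) > bound."""
    v = 2 if D % 8 == 1 else 1           # note: in Python, -7 % 8 == 1
    S, b, t = [], 0.0, 2
    while b <= bound:
        if (t - v * D) % 2 == 0:         # parity condition t = v*D (mod 2)
            num = t * t - v * v * l * l * D
            if num % 4 == 0 and is_prime(num // 4):
                S.append(num // 4)
                b += log(num // 4)
        t += l
    return S

# For l = 5 and D = -7 (so v = 2), the first primes found are 211 and 431,
# both congruent to 1 mod 5, as required.
```

Here $l\\nmid v$ holds automatically since $v\\le 2<l$ for the odd primes $l>2$ of interest.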
To apply the CRT, we also require\n\\begin{equation}\\label{primeset}\n\\prod_{p\\in S}p\\ge 4|c|,\n\\end{equation}\nfor every coefficient $c$ of $\\Phi_l\\in\\mathbf{Z}[X,Y]$. From~\\cite{BrokerSutherland:PhiHeightBound}, we use the explicit bound\n\\begin{equation}\\label{heightbound}\nB_l = 6l\\log l + 17l,\n\\end{equation}\non the logarithmic height of $\\Phi_l$ to achieve this. We then have $\\#S=O(l)$, by (\\ref{normeq2}).\n\nHeuristically, it is easy to find primes that satisfy (\\ref{normeq2}). If $D\\equiv 1\\bmod 8$, fix $v=2$, otherwise fix $v=1$.\nThen, for increasing $t\\equiv 2\\bmod l$ with the correct parity, test whether $p=(t^2-v^2l^2D)\/4$ is prime.\nWe expect to need $O(l\\log l)$ primality tests, and each can be accomplished in time polynomial in $\\log l$,\nalthough typically $p$ is small enough to make an attempted factorization more efficient.\nWe could obtain slightly smaller $p$'s by letting $v$ vary, but it is more convenient to fix $v$\nso that we can use the same presentation of $\\operatorname{cl}(\\O)$ and $\\operatorname{cl}(R)$ for every $p$.\nThis approach is easy to implement and very fast in practice.\n\\medskip\n\nHowever, in order to prove Theorem~\\ref{main-thm} we must take a more cautious approach.\nEven assuming the GRH, we cannot guarantee we will find \\emph{any} primes with a fixed value of $v$.\nOn the other hand, Theorem~\\ref{smallprimes} implies that if we construct random integers $p\\le x$ satisfying (\\ref{normeq2}), for sufficiently large $x$ we have $p$ prime with probability $\\Omega(1\/\\log x)$, and under the GRH, $x=O(l^6(\\log l)^4)$ is large enough.\nAdditionally, we would like to avoid $v$'s with many prime factors, so that we may more profitably apply Theorem~\\ref{suitablepresentation}.\nBy Lemma~\\ref{HWbound} of the appendix, the restriction $\\omega(v)\\le (\\log(\\log v+3))^2$ eliminates a negligible proportion of the integers $v\\in[1,x]$.\n\\smallbreak\n\nWe now present
Algorithm~\ref{alg21}, emphasizing that its purpose is to facilitate the proof of Theorem~\ref{main-thm}.
In practice we use the heuristic procedure described above.

\renewcommand\labelenumi{\theenumi.}
\renewcommand\labelenumii{\theenumii.}
\begin{algorithm}\label{alg21}
{Let $l$ be an odd prime, let $D<-4$ be a discriminant, and let $B_l$ be as in (\ref{heightbound}).
Construct the set $S$ as follows:}
\begin{enumerate}
\vspace{2pt}\item
Set $n\leftarrow (B_l+2\log 2)/\log(l^2|D|/4)$ and then $x\leftarrow 4l^2|D|n\log n$.\\
Set $b\leftarrow 0$ and $S\leftarrow \emptyset$.
\vspace{2pt}\item
Set $T\leftarrow 2x^{1/2}$ and $V\leftarrow 2x^{1/2}l^{-1}|D|^{-1/2}$.
\vspace{2pt}\item
Repeat $\lceil 2n\log x\rceil$ times:
\begin{enumerate}
\item
Construct an integer $p=(t^2-v^2l^2D)/4$ using uniformly random integers $v\in[1,V]$ and $t\in[1,T]$, subject to $l\nmid v$, $t\equiv 2\bmod l$, and $t\equiv vD\bmod 2$.
\item
If $\omega(v)> (\log(\log v+3))^2$ then go to Step 3d.
\item
If $p\notin S$ and $p$ is prime then set $S\leftarrow S\cup \{p\}$ and $b\leftarrow b+\log p$.
\item
If $b>B_l+2\log 2$ then output $S$ and terminate.
\end{enumerate}
\vspace{2pt}\item
Set $x\leftarrow 2x$ and go to Step 2.
\end{enumerate}
\end{algorithm}

In Step~3a, the integer $t$ is generated as $t=al+2$ using a uniformly random integer $a\in[0,(T-2)/l]$.
The computation of $\omega(v)$ in Step 3b is performed by factoring~$v$.
Note that $\# S\le n$, and $p \le x$ for all $p\in S$.

\renewcommand\labelenumi{(\theenumi)}
\begin{lemma}\label{selectprimes}
Let $\mathcal{F}$ be a suitable family of orders, let $l$ be an odd prime and let $D=\operatorname{disc}(\mathcal{F}(l))$.
Given inputs $l$ and $D$, the expected running time of Algorithm~\ref{alg21} is finite.
Under the GRH we also have the following:
\begin{enumerate}
\item
The expected running time is
$O(l^{1+\varepsilon})$, for any $\varepsilon\in\mathbf{R}_{>0}$.
\item
There is a constant $c<1$ such that for all $l>7$ and $k\in\mathbf{Z}_{>0}$, the algorithm terminates with $\log x \le (6+k)\log l$ with probability at least $1-c^{k\log l}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To analyze Algorithm~\ref{alg21}, we count the number of times Step 4 is executed, referring to the period between each execution as an iteration.
By Theorem~\ref{smallprimes}, the set of primes that satisfy Theorem~\ref{theprimes}, equivalently, those that satisfy (\ref{normeq2}), has positive density. Here we may use the natural density, via~\cite[Thm.~4.3.e]{Jarden:Density}.
For every fixed odd prime $l$, this implies a lower bound of $\Omega(1/\log x)$ on the probability that a random integer in $[1,x]$ is a prime that satisfies (\ref{normeq2}).
Each integer $p$ tested by Algorithm~\ref{alg21} necessarily satisfies (\ref{normeq2}), hence such a $p$ is prime with probability $\Omega(1/\log x)$.
By Lemma~\ref{HWbound} in the appendix, the probability that a candidate $p$ is skipped due to the test in Step 3b is $o(1/\log x)$.
This implies that for all sufficiently large $x$, the probability that
Algorithm~\ref{alg21} terminates in a given iteration is bounded away from zero, and the expected running time is finite.

Now assume the GRH and let $l>7$.
Applying Theorem~\ref{smallprimes} with $c_0=1$, there are at least $l^3(\log l)^3$ primes $p\le c_1l^6(\log l)^4$ satisfying (\ref{normeq2}), for some constant $c_1\in\mathbf{R}_{>0}$ that does not depend on $l$.
Let $x_0$ be the least value of $x$ with $x\ge c_1l^6(\log l)^4$.
When $x=x_0$ we have $VT/l \le 8c_1l^3(\log l)^4$, since $|D|\ge l^2$, and the probability that a given primality test succeeds is at least $1/(8c_1\log l) \ge c_2/\log x$, for some constant $c_2\in\mathbf{R}_{>0}$.
From the inequality (\ref{LCbound}) in the proof of Theorem~\ref{smallprimes}, one finds that this holds for all $x \ge x_0$, with the same constants.
As above, Step 3b has
negligible impact, and for $x\\ge x_0$ the probability that the algorithm terminates in a given iteration is at least $c$, for some constant $c\\in\\mathbf{R}_{>0}$ independent of $l$ and $x$.\n\nWe now consider the running time as a function of $l$, fixing an arbitrary $\\varepsilon\\in\\mathbf{R}_{>0}$.\nIt takes $O(\\log l)$ iterations to achieve $x=x_0$, assuming that we don't terminate earlier, and we execute Steps 3b and 3c a total of $O(l(\\log l)^2)$ times during this process.\nThe computation of $\\omega(v)$ and the primality test of $p$ can both be achieved in expected time subexponential in $\\log x$, by~\\cite{Lenstra:RigorousFactoring}, yielding an $O(l^{1+\\varepsilon})$ bound on the time to reach $x=x_0$, since $\\log x_0 = O(\\log(l))$.\n\nFor $x\\ge x_0$, the probability of reaching each subsequent iteration declines exponentially, while the cost of Steps 3b and 3c grows subexponentially, implying that the total expected running time is also $O(l^{1+\\varepsilon})$, proving (1).\n\nClaim (2) follows from the same analysis. We have $\\log x_0 = (6+o(1))\\log l$ and add $\\log 2$ to $\\log x$ in each iteration. Once $x=x_0$, it takes more than $k\\log l$ iterations to reach $\\log x > (6+k)\\log l$. There is a probability of at least $c$ that the algorithm terminates in each subsequent iteration, yielding the bound in (2).\n\\end{proof}\n\n\\subsection{CRT computations}\\label{CRT}\nThe computations involved in Steps~3, 4b, and 5 of Algorithm~\\ref{alg2} are described in detail in~\\cite[\\S6]{Sutherland:HilbertClassPolynomials}.\nWe summarize briefly here.\n\nGiven $S=\\{p_i\\}$, let $M=\\prod p_i$, $M_i=M\/p_i$, and $a_i\\equiv M_i^{-1}\\bmod p_i$.\nLet $c$ denote a coefficient of $\\Phi_l\\in\\mathbf{Z}[X,Y]$, and let $c_i\\equiv c\\bmod p_i$ denote the corresponding coefficient of $\\Phi_l\\in\\mathbf{F}_{p_i}[X,Y]$. 
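The reconstruction of a coefficient $c$ from its residues $c_i$ can be sketched as follows. This is an illustrative Python fragment using the quantities $M$, $M_i$, and $a_i$ just defined (the signed lift assumes $M>2|c|$); it is not the optimized implementation of~\cite[\S6]{Sutherland:HilbertClassPolynomials}.

```python
from math import prod

def crt(residues, primes):
    """Recover c mod M = prod(primes) from residues c_i = c mod p_i,
    as c = sum_i c_i * a_i * M_i mod M, where M_i = M/p_i and a_i = M_i^{-1} mod p_i."""
    M = prod(primes)
    total = 0
    for c_i, p_i in zip(residues, primes):
        M_i = M // p_i
        a_i = pow(M_i, -1, p_i)  # inverse of M_i modulo p_i
        total += c_i * a_i * M_i
    return total % M

def crt_signed(residues, primes):
    """Lift to the integer c itself, assuming M > 2|c|,
    by mapping the residue into the symmetric interval around 0."""
    M = prod(primes)
    c = crt(residues, primes)
    return c - M if c > M // 2 else c
```

For instance, $c=-1234$ is recovered exactly from its residues modulo the primes $101$, $103$, $107$, since $M=101\cdot103\cdot107$ exceeds $2|c|$.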
As in~\cite[\S10.3]{Gathen:ComputerAlgebra}, we can use fast Chinese remaindering to efficiently compute
\begin{equation}\label{standardCRTsum}
c\equiv \sum c_ia_iM_i\bmod M.
\end{equation}
Provided that $M>2|c|$, we can then lift the result from $\mathbf{Z}/M\mathbf{Z}$ to $\mathbf{Z}$.

When $m$ is ``large,'' by which we mean $\log m=\Omega(\log M)$, we compute $c\bmod M$, lift to $\mathbf{Z}$, and then reduce to $\mathbf{Z}/m\mathbf{Z}$. In this scenario, Step 4b simply stores the coefficients $c_i$ and Step 3 can be deferred to Step 5.

When $m$ is ``small,'' by which we mean $\textsf{M}(\log m) = O((\log l)^3\log\log l)$, we instead use the explicit CRT modulo $m$.
Assuming $M>4|c|$, we may apply
\begin{equation}\label{explicitCRTsum}
c\equiv \sum c_ia_iM_i - rM \bmod m,
\end{equation}
where $r$ is the closest integer to $s=\sum c_ia_i/p_i$, by~\cite[Thm.~3.1]{Bernstein:ModularExponentiation}.
In this scenario, we update the sum $C=\sum c_ia_iM_i\bmod m$ and an approximation to $s$ in Step 4b as each $c_i$ is computed.
This uses $O(\log m + \log l)$ space per coefficient, rather than the $O(l\log l)$ space used to compute $c\bmod M$.
The postcomputation in Step 5 determines $r$ from the approximation to $s$ and computes $c\bmod m$ via (\ref{explicitCRTsum}).

When $m$ is neither small nor large, a hybrid approach is used, see~\cite[\S6.3]{Sutherland:HilbertClassPolynomials}.

\subsection{Computing $\Phi_l(X,Y)\bmod p$}\label{Alg1}
An overview of Algorithm~\ref{alg1} was given in the introduction; we now fill in the details.

\renewcommand\labelenumi{\theenumi.}
\renewcommand\labelenumii{\theenumii.}
\algstart{{\bf \ref{alg1}}} {Let $l$, $p$, and $\O$ be as in Theorem~\ref{theprimes}, with $h(\O)\ge l+2$, and let $R=\mathbf{Z}+l\O$.
Given $H_\O\in\mathbf{Z}[X]$, compute $\Phi_l\in\mathbf{F}_p[X,Y]$ as follows:}
\vspace{2pt}\item
Compute the presentations $\vec{\alpha}$ of
$\operatorname{cl}(\O)$ and $\vec{\alpha}'$ of $\operatorname{cl}(R)$ suitable for $p$.
\vspace{2pt}\item
Find a root $j_0$ of $H_\O(X)$ over $\mathbf{F}_p$.
\vspace{2pt}\item
Use $\vec{\alpha}$ to enumerate $\text{\rm Ell}_\O(\mathbf{F}_p)$ from $j_0$, and identify the $l$-isogeny cycles.
\vspace{2pt}\item
For distinct $j_0,\ldots,j_{l+1}\in\text{\rm Ell}_\O(\mathbf{F}_p)$:
\begin{enumerate}
\item
Construct a curve $E_i$ with $j(E_i)=j_i$ such that $l$ divides $\#E_i(\mathbf{F}_p)$.
\item
Generate a random point $P\in E_i(\mathbf{F}_p)$ of order $l$.
\item
Use $E_i$ and $P$ to compute an $l$-isogenous curve $E_i'/\mathbf{F}_p$ via Algorithm~\ref{alg11}.
\item
If $j(E_i')\not\in\text{\rm Ell}_\O(\mathbf{F}_p)$ set $j_i'\leftarrow j(E_i')$, otherwise return to Step 4b.
\end{enumerate}
\vspace{2pt}\item
Use $\vec{\alpha}'$ to enumerate $\text{\rm Ell}_R(\mathbf{F}_p)$ from $j_0'$ and identify the $l^2$-isogeny cycles.
\vspace{2pt}\item
For $i$ from 0 to $l+1$:
\begin{enumerate}
\item
Let $j_{i0},\ldots,j_{il}$ consist of the neighbors of $j_i$ in its $l$-isogeny cycle in $\text{\rm Ell}_\O(\mathbf{F}_p)$ together with the $l^2$-isogeny cycle of $\text{\rm Ell}_R(\mathbf{F}_p)$ containing $j_i'$.
\item
Compute $\Phi_l(X,j_i)=\sum_k a_{ik}X^k$ as the product $\prod_k(X-j_{ik})$.
\end{enumerate}
\item
For $k$ from 0 to $l+1$:
\begin{enumerate}
\item
Interpolate $\phi_k\in\mathbf{F}_p[Y]$ with $\deg\phi_k\le l+1$ satisfying $\phi_k(j_i)=a_{ik}$.
\end{enumerate}
\vspace{2pt}\item
Output $\Phi_l(X,Y)=\sum_k\phi_k(Y)X^k$.
\end{enumerate}\vspace{4pt}
\noindent
Steps 2, 6, and 7 involve standard computations with polynomials over finite fields, as described in~\cite{Gathen:ComputerAlgebra}, for example.
Step 1 is addressed in Section~\ref{polycyclic}, and Steps 3 and 5 are the topic of Section~\ref{CMaction2}.
Only Step 4 merits further discussion
here.

The existence of the curve $E_i$ constructed in Step 4a is guaranteed by Theorem~\ref{theprimes}.
The trace of Frobenius $t$ of the desired curve is uniquely determined by the norm equation for $p$ and the constraint $t\equiv 2\bmod l$, as in (\ref{normeq2}).
With $k=j_i/(1728-j_i)$, the curve $E/\mathbf{F}_p$ defined by $y^2=x^3+3kx+2k$ has $j(E)=j_i$, and we may determine whether it is $E$ or its quadratic twist that has trace $t$ by attempting to generate a point of order $l$ on both curves in Step 4b.

To obtain a point $P$ of order $l$, we generate a random point $Q$ uniformly distributed over $E_i(\mathbf{F}_p)$,
compute the scalar multiple $P=nQ$, where $n=(p+1-t)/l^2$, and then check that $P\ne 0$.
This will be true with probability $1-1/l^2$, since Theorem~\ref{theprimes} implies that the Sylow $l$-subgroup of $E_i(\mathbf{F}_p)$ is $E_i[l]\cong\mathbf{Z}/l\mathbf{Z}\times\mathbf{Z}/l\mathbf{Z}$. Thus we expect to succeed within $1+O(1/l^2)$ attempts, and the expected cost of generating $P$ is $O(\log p)$ operations in $\mathbf{F}_p$.

Note that $E_i(\mathbf{F}_p)$ contains $l+1$ distinct subgroups of order $l$, each corresponding to a distinct $l$-isogenous $j$-invariant.
At most 2 of these lie in $\text{\rm Ell}_\O(\mathbf{F}_p)$.
Thus in Step 4d we have $j(E_i')\not\in\text{\rm Ell}_\O(\mathbf{F}_p)$ with probability at least $1-2/(l+1)$ and expect to need $1+O(1/l)$ random points $P$ to obtain such an $E_i'$.
The curve $E_i'$ is the image of an $l$-isogeny whose kernel is generated by $P$, obtained via Algorithm~\ref{alg11}.

\subsection{Isogenies from subgroups}
Let $E/\mathbf{F}_p$ be an elliptic curve.
Given a cyclic subgroup $H\subseteq E(\overline{\F}_p)$, V\'{e}lu's formulas construct an isogeny $E\to E'$ with $H$ as its kernel~\cite{Velu:Isogenies}.
Typically $H$ is specified by a polynomial whose roots in the algebraic closure $\overline{\F}_p$ are the $x$-coordinates of
the points in $H$.
In our setting we may specify $H$ as the subgroup generated by a point $P\in E(\mathbf{F}_p)$, and work entirely in $\mathbf{F}_p$ rather than an extension field.
Additionally, the order $l$ of $H$ is odd and $p>3$, allowing us to simplify the formulas.

\renewcommand\labelenumi{\theenumi.}
\renewcommand\labelenumii{\theenumii.}
\begin{algorithm}\label{alg11}
{Let $l>2$ and $p>3$ be primes, let $E/\mathbf{F}_p$ be an elliptic curve defined by $y^2=x^3+Ax+B$, and let $P=(P_x,P_y)$ be a point on $E(\mathbf{F}_p)$ of order $l$. Compute the image $E'/\mathbf{F}_p$ of the $l$-isogeny with kernel $H=\langle P\rangle$ as follows:}
\begin{enumerate}
\vspace{2pt}\item
Set $t\leftarrow 0$, $w\leftarrow 0$, and $Q\leftarrow P$.
\vspace{2pt}\item
Repeat $(l-1)/2$ times:
\begin{enumerate}
\item
Set $s\leftarrow 6Q_x^2+2A$ and then set $u\leftarrow 4Q_y^2 + sQ_x$.
\item
Set $t\leftarrow t + s$, $w\leftarrow w + u$, and $Q\leftarrow Q + P$.
\end{enumerate}
\vspace{2pt}\item
Set $A'\leftarrow A-5t$ and $B'\leftarrow B-7w$.
\vspace{2pt}\item
Output the curve $E'/\mathbf{F}_p$ defined by $y^2=x^3+A'x+B'$.
\end{enumerate}
\end{algorithm}

The addition $Q+P$ in Step 2b is performed using the group operation in $E(\mathbf{F}_p)$.
The complexity of Algorithm~\ref{alg11} is $O(l)$ operations in $\mathbf{F}_p$.

\subsection{Complexity analysis}
We first bound the complexity of Algorithm~\ref{alg1}, as used by Algorithm~\ref{alg2}.

\begin{lemma}\label{Alg1bound}
Let $\mathcal{F}$ be a suitable family of orders that satisfies the conditions of Theorem~\ref{suitablepresentation}.
For an odd prime $l$, let $\O=\mathcal{F}(l)$ and let $D=\operatorname{disc}(\O)$.
Let $p$ be a prime in the set $S$ selected by Algorithm~\ref{alg21} on input $l$ and $D$.
Assuming the GRH, the expected running time of Algorithm~\ref{alg1} is $O(l^2(\log p)^3\log\log p)$.
\end{lemma}
\begin{proof}
We note that $4p=t^2-v^2l^2D$ implies $p > l^4/4$, thus $\log l < \log p$,
and recall that the bit-complexity of multiplying two polynomials of degree $O(l)$ in $\mathbf{F}_p[X]$ may be bounded by $O(\textsf{M}(l\log p))$, using Kronecker substitution, see Corollaries 8.28 and 9.8 of~\cite{Gathen:ComputerAlgebra}.

In the analysis below we use $O(\textsf{M}(l\log p))=O(l(\log p)^2\log\log p)$, via the bound $\textsf{M}(n)=O(n\log n\log\log n)$ for fast multiplication~\cite{Schonhage:Multiplication}.
To bound the cost of operations in $\mathbf{F}_p$, we use $\textsf{M}(n)=O(n^c)$ for some $c<2$, see Remark~\ref{multremark}, and bound the cost of inversions by $O(\textsf{M}(\log p)\log\log p)=O((\log p)^2)$, via~\cite[Cor.~11.10]{Gathen:ComputerAlgebra}.
\smallbreak

We now bound the (expected) cost of each step in Algorithm~\ref{alg1}:
\noindent
\begin{enumerate}
\item
We have $h(R)=O(l^2)$.
All but $O(l)$ of the $O(l^2)$ steps taken when enumerating $\text{\rm Ell}_R(\mathbf{F}_p)$ involve elements of $\vec{\alpha}'$ with $O(1)$ norms, and, as in Step~3, we obtain a cost of $O(l^2(\log p)^3)$ for these steps.
Assuming the GRH, the remaining elements of $\vec{\alpha}'$ all have norm $O((\log |D|)^2)=O((\log l)^2)$, via~\cite{Bach:ERHbounds}, yielding a cost of $O(l(\log l)^4(\log p)^3)$ for these steps.
Thus the expected time to enumerate $\text{\rm Ell}_R(\mathbf{F}_p)$ is $O(l^2(\log p)^3)$, which dominates the $O(l^2\log l)$ time to identify the $l^2$-isogeny cycles.
\item
Using a product tree we may compute $\prod_k(X-j_{ik})$ in time $O(\textsf{M}(l\log p)\log l)$, yielding a total cost of $O(l^2(\log p)^3\log\log p)$ for Step~6.
\item
Using a product tree and fast interpolation~\cite[Alg.~10.11]{Gathen:ComputerAlgebra}, we also obtain a cost of $O(l^2(\log p)^3\log\log p)$ for Step~7.
Here we use the $O(\textsf{M}(l\log p))$ bit-complexity of polynomial multiplication in $\mathbf{F}_p[X]$ to bound the cost at each level, rather than using the bound in~\cite[Cor.~10.12]{Gathen:ComputerAlgebra}.
\end{enumerate}
The bound $O(l^2(\log p)^3\log\log p)$ applies to every step, completing the proof.
\end{proof}
\noindent
We are now ready to prove our main theorem.

\begin{thm1}
Let $\mathcal{F}$ be the suitable family of orders in Example~\ref{exam}.
Let $l$ be an odd prime and let $m\in\mathbf{Z}_{>0}$.
Given inputs $l$, $m$, and $\O=\mathcal{F}(l)$, Algorithm~\ref{alg2} correctly computes $\Phi_l\in(\mathbf{Z}/m\mathbf{Z})[X,Y]$.
Under the GRH, its expected running time is
$$
O(l^3(\log l)^3\log\log l),
$$
using $O(l^2\log(lm))$ expected space.
\end{thm1}
\begin{proof}
We first argue correctness.
By Lemma~\ref{selectprimes}, Algorithm~\ref{alg21} obtains a set of primes~$S$ that satisfy Theorem~\ref{theprimes}, with $\prod_{p\in S} p > 4|c|$ for every coefficient $c$ of~$\Phi_l$.
We now claim that for $p\in S$, Algorithm~\ref{alg1} obtains, for each of $l+2$ distinct $j$-invariants $j_i$, a list of $l+1$ distinct $j$-invariants $j_{ik}$ of $l$-isogenous curves.
Granting the claim, we may invoke standard properties of $\Phi_l$ to show that Algorithm~\ref{alg1} correctly interpolates $\Phi_l\in\mathbf{F}_p[X,Y]$,
see~\cite[Thm.~12.19]{Washington:EllipticCurves} and~\cite[Thm.~5.3]{Lang:EllipticFunctions}, for example.
The correctness of Algorithm~\ref{alg2} then follows from the CRT and/or the explicit CRT mod $m$, via~\cite[Thm.~3.1]{Bernstein:ModularExponentiation}, as described in Section~\ref{CRT}.

As usual, let $\O=\mathcal{F}(l)$ have fraction field $K$, and let $R=\mathbf{Z}+l\O$.
The claim above rests on three facts: (1) the explicit CM-action described in Section~\ref{CMaction1} is correct, (2) any two $l^2$-isogenous elements of $\text{\rm Ell}_R(\mathbf{F}_p)$ must be $l$-isogenous to
exactly one and the same element of $\text{\rm Ell}_\O(\mathbf{F}_p)$, and (3) each $l^2$-isogeny cycle in $\text{\rm Ell}_R(\mathbf{F}_p)$ contains exactly $l-\inkron{d_K}{l}$ elements.
We note that (1) follows from the theory of complex multiplication and the properties of $\Phi_{l_0}$ guaranteed by~\cite[Thm.~12.19]{Washington:EllipticCurves}, (2) follows from the $l$-volcano structure, as shown by~\cite[\S2.2]{Fouquet:IsogenyVolcanoes} and~\cite[Prop.~23]{Kohel:thesis}, and (3) is explicitly proven in Lemma~\ref{cyclic}.
\smallbreak
We now assume the GRH and bound the complexity of Algorithm~\ref{alg2}.
Lemma~\ref{selectprimes} shows that the expected value of $\log p$ for the largest $p\in S$ is $O(\log l)$, and we have $\#S=O(l)$.
Applying~\cite[\S6]{Sutherland:HilbertClassPolynomials} yields an $O(l^2\log(lm))$ space bound for $m\in\mathbf{Z}_{>0}$.

By~\cite[Thm.~1]{Sutherland:HilbertClassPolynomials}, the expected time to compute $H_\O$ in Step 1 is $O(l^{2+\varepsilon})$,
and Lemma~\ref{selectprimes} gives an expected time of $O(l^{1+\varepsilon})$ for Step 2, for any $\varepsilon\in\mathbf{R}_{>0}$.
Additionally, we have $\log p \le (6+k)\log l$ for all $p\in S$ with probability approaching 1 exponentially as $k$ increases.
The time complexity of all remaining steps in Algorithm~\ref{alg2}, including calls to Algorithm~\ref{alg1}, depends polynomially on $\log p$, hence we may bound the expected running time assuming $\log p=O(\log l)$.

Regardless of the exact cutoff used, if $\textsf{M}(\log m)=O((\log l)^3\log\log l)$ whenever we consider $m$ ``small'', we may apply the results of~\cite[\S6]{Sutherland:HilbertClassPolynomials} to obtain a bound of $O(l^3(\log l)^3\log\log l)$ on the expected time for all CRT computations, for every $m\in\mathbf{Z}_{>0}$.
Since $\mathcal{F}$ satisfies the conditions of Theorem~\ref{suitablepresentation}, we may apply Lemma~\ref{Alg1bound} with $\log p=O(\log l)$ to obtain an $O(l^2(\log
l)^3\log\log l)$ bound on the expected time of each call to Algorithm~\ref{alg1}. Applying $\#S=O(l)$ completes the proof.
\end{proof}

\subsection{Remark on \textsf{M}(n)}\label{multremark}
To bound the complexity of multiplication in $\mathbf{F}_p$, and of polynomials of low degree, we assume only that $\textsf{M}(n)=O(n^c)$ for some $c<2$, already achieved by Karatsuba's algorithm~\cite[Alg.~8.1]{Gathen:ComputerAlgebra}.
In the practical range of~$l$, the elements of $\mathbf{F}_p$ actually fit in a single 64-bit machine word and can be multiplied very quickly. We do assume FFT-based multiplication is used for multiplying polynomials of degree $l$ in $\mathbf{F}_p[X]$, and this assumption is met in practice; most of the computations described in Section~\ref{results} make heavy use of the FFT.

\subsection{Selecting a suitable order}\label{selectorder}
The family of orders used in Theorem~\ref{main-thm} suffices to prove the complexity bound, but we can simplify the implementation and improve performance with some additional constraints on the order~$\O$. Let us fix a bound $b < l$ (say $b=256$, for large $l$), and a small prime $l_0 < l$ (typically $l_0=2$).
As above, $R$ is the order of index $l$ in $\O$, and $O_K$ is the maximal order.
We seek an order $\O$ for which the following hold:
\renewcommand\labelenumi{(\theenumi)}
\begin{enumerate}
\item
The conductor of $\O$ is $b$-smooth, $h(O_K) \le b$, and $h(\O) \ge l+2$.
\item
The groups $\operatorname{cl}(\O)$ and $\operatorname{cl}(R)$ are either generated by a single ideal with norm $l_0$, or by two ideals with norms $l_0$ and $l_1$, where $l_1\le b$ is ramified.
\end{enumerate}
The first condition ensures that $\O$ is suitable for $l$ and allows us to obtain a root of $H_\O(X)$ using only polynomials of degree at most~$b$.
This is accomplished by finding a root of $H_{O_K}(X)$ and descending to the proper level of the $l'$-isogeny volcano for each prime $l'\\le b$ dividing the conductor of $\\O$, as in~\\cite[\\S4.1]{Sutherland:HilbertClassPolynomials}.\nThe second condition allows us to realize the torsors for $\\operatorname{cl}(\\O)$ and $\\operatorname{cl}(R)$ either by walking a single $l_0$-isogeny cycle, or by walking two $l_0$-isogeny cycles connected by a single $l_1$-isogeny. In the latter case we orient the two cycles by computing one extra $l_1$-isogeny. In both cases we avoid the need to ever distinguish the action of an ideal and its inverse, simplifying the computation described in Section~\\ref{CMaction2}.\n\nSubject to these conditions, we also wish to minimize $h(\\O)\\ge l+2$.\nTo find such orders we enumerate fundamental discriminants $d_K<-4$ with $\\inkron{d_K}{l_1}=1$ and $h(d_K)\\le b$, and for each $d_K$ we select $b$-smooth integers $u$ for which $h(u^2d_K)$ is slightly greater than $l+2$ and test whether condition (2) holds.\nIn practice we are almost always able to obtain $\\O$ with $h(\\O)$ within a few percent of $l+2$.\n\n\\section{Modular functions other than $j$}\\label{OtherFunctions}\nLet $g$ be a modular function of level $N$, and let $l\\nmid N$ be a prime. \nWe define the {\\it modular polynomial \n$\\Phi_l^g$ of level~$l$ for $g$\\\/} as the minimal polynomial of the function\n$g(lz)$ over the field $\\mathbf{C}(g)$. Much of the theory for the classical modular \npolynomial of the $j$-function generalizes to $g$. In particular, if the\nFourier expansion of~$g$ has {\\it integer\\\/} coefficients then we have\n$\\Phi_l^g \\in \\mathbf{Z}(g)[X]$. The following lemma gives us further information\nin this case.\n\n\\begin{lemma} Let $g$ be a modular function and let $l$ be a prime not \ndividing the level of~$g$. 
Suppose that $\\Phi_l^g$ has integer coefficients.\n If $g$ is invariant under the action of either\n$S = \\bigl(\\afrac{0}{1}\\thinspace\\afrac{-1}{0}\\bigr)\\in\\operatorname{SL}_2(\\mathbf{Z})$ or\n$M = \\bigl(\\afrac{0}{1} \\thinspace\\afrac{-l}{0}\\bigr)\\in\\operatorname{GL}_2(\\mathbf{Q})$, then\nwe have\n$$\n\\Phi_l^g(X,Y) = \\Phi_l^g(Y,X).\n$$\n\\end{lemma}\n\\begin{proof}\nThe proof follows the symmetry proof for $\\Phi_l(X,Y)$, see ~\\cite[Thm.~5.3]{Lang:EllipticFunctions}.\n\\end{proof}\nTo apply our method we additionally require that $\\Phi_l^g$ have degree $l+1$. This is not\ntrue in general, but it does hold in many cases of interest.\n\nThe polynomial $\\Phi_l^g$ should not be confused with the minimal polynomial of $g$ as an element of $\\mathbf{C}(j)$, which we\ndenote $\\Psi^g(X,J)$. The polynomial $\\Psi^g$ depends only on $g$, not $l$,\nand we assume it is known (for our purposes, it effectively defines~$g$). Given $\\Psi^g$, our goal is to efficiently compute $\\Phi_l^g$ for a prime $l\\nmid N$. \n\nWe wish to adapt Algorithm~\\ref{alg1} to compute $\\Phi_l^g \\in \\mathbf{F}_p[X]$. We may then apply Algorithm~\\ref{alg2}\nto recover $\\Phi_l^g$ over the integers or modulo some integer~$m$ via the Chinese Remainder Theorem.\nTo simplify matters, we place some additional restrictions on the order~$\\O$ that we\nuse in Algorithm~\\ref{alg2}.\nSpecifically, we require that there is a generator $\\tau\\in\\mathbf{H}$ of $\\O=\\mathbf{Z}[\\tau]$ with the property that \n$$\ng(\\tau) \\in K_\\O,\n$$\nwhere $K_\\O$ is the ring class field for the order~$\\O$. We say that $g$\nis a {\\it class invariant\\\/} for~$\\O$ in this case. If we now take a prime $p$ that splits\ncompletely in $K_\\O$ and $E\/\\mathbf{F}_p$ an elliptic curve with endomorphism ring~$\\O$, then the polynomial\n$$\n\\Psi^g(X, j(E)) \\in \\mathbf{F}_p[X]\n$$\nhas at least one root in~$\\mathbf{F}_p$. 
Indeed, the value $h=g(\tau) \bmod \mathfrak p$ for
a prime $\mathfrak p|p$ of $K_\O$ satisfies $\Psi^g(h,j(E)) = 0$.

We can analyze {\it how many roots\/} the polynomial $\Psi^g(X,j(E))\in\mathbf{F}_p[X]$
has using a combination of Deuring lifting and Shimura reciprocity. We refer
to~\cite[\S6.7]{Broker:Thesis} for a detailed description of the
techniques involved and only state the result here.
Let $g_i: \mathbf{H} \rightarrow \mathbf{C}$ be the roots of $\Psi^g(X,j)$. The
functions $g_i$ are modular of level~$N$, and they are permuted by the
Galois group $\operatorname{GL}_2(\mathbf{Z}/N\mathbf{Z})$ of the field of all modular functions of
level~$N$. To state our result, we will associate a matrix $A \in
\operatorname{GL}_2(\mathbf{Z}/N\mathbf{Z})$ to the Frobenius morphism of~$E$ as follows.
Fix an isomorphism $\operatorname{End}(E) \ \smash{\mathop{\longrightarrow}\limits^{\thicksim}}\ \O$ and let $\pi_p\in\O$ be
the image of the Frobenius morphism. If $\pi_p$ has minimal
polynomial $X^2-tX+p$ of discriminant $\Delta = t^2-4p$, then we put
\begin{equation}\label{Amatrix}
A = \left( \begin{matrix}
\frac{t-2}{2} & \frac{\Delta - 1}{2} \\
2 & \frac{t+2}{2}
\end{matrix}
\right) \in \operatorname{GL}_2(\mathbf{Z}/N\mathbf{Z}).
\end{equation}

\begin{theorem} \label{Shimura}
{Let $g$ be a modular function with the property that
$\Psi^g$ has integer coefficients and is separable modulo a prime~$p$ that
does not divide the level~$N$. If $E/\mathbf{F}_p$ is an elliptic curve with
endomorphism ring~$\O$ and $j_0=j(E)$, then we have
$$
\# \{ x \in\mathbf{F}_p : \Psi^g(x,j_0) = 0 \} = \# \{ g_i : \Psi^g(g_i,j) =0
\hbox{\ and \ } g_i^A = g_i \},
$$
where $A$ is the matrix in $(\ref{Amatrix})$.}
\end{theorem}
\proof See~\cite[\S6.7]{Broker:Thesis}.
\endproof

Checking whether $g_i^A = g_i$ holds is a standard computation, see~\cite{Gee:GeneratingClassFields} for example.
Although the matrix $A$ has norm~$p$
and trace~$t$ and therefore depends on~$p$, we can often derive a result
that merely depends on a congruence condition on $p \bmod N$. Lemma~\ref{WeberTwoRoots}
in Section~\ref{Weberf} gives an example.


\subsection{Computing modular polynomials for $\boldsymbol{\gamma_2}$}\label{ComputingGamma2}
Let $\gamma_2(z)$ denote the unique cube root of $j(z)$ that has integral
Fourier expansion.
It was already known to Weber that $\gamma_2$ is a modular function of
level~3, see~\cite[\S125]{Weber:Algebra}, and $\gamma_2$ is a class
invariant for~$\O$ whenever $3\nmid\operatorname{disc}(\O)$. We have
$\Psi^{\gamma_2}(X,j) = X^3-j$, and in this simple case there is no need to
apply Theorem~\ref{Shimura}; if we restrict to $p \equiv 2 \bmod 3$ then every element of $\mathbf{F}_p$
has a unique cube root.
Thus we can compute $\Phi_l^{\gamma_2}$ with only minor modifications to Algorithm~\ref{alg2}:
\renewcommand\labelenumi{\theenumi.}
\begin{itemize}
\item
Use a suitable order $\O$ with $3\nmid \operatorname{disc}(\O)$ and select only primes $p\equiv 2\bmod 3$.
\item
After Step 5 of Algorithm~\ref{alg1}, replace each element of $\text{\rm Ell}_\O(\mathbf{F}_p)$
and $\text{\rm Ell}_R(\mathbf{F}_p)$ with its unique cube root in $\mathbf{F}_p$.
\end{itemize}
\noindent
These changes suffice, but we can also improve the algorithm's performance.

First, Lemmas 2--3 and Corollary 9 of~\cite{BrokerSutherland:PhiHeightBound} yield the bound
\begin{equation}\label{gamma2height}
B_l^{\gamma_2} = 2l\log l + 8l
\end{equation}
on the logarithmic height of $\Phi_l^{\gamma_2}$ (conjecturally, $B_l^{\gamma_2}=2l\log l +4l$ for all $l>60$, but we do not use this).
Thus we can reduce the height bound $B_l$ in (\ref{heightbound}) by a factor of
approximately 3 when computing $\Phi_l^{\gamma_2}$.
This reduces the number of primes $p\in S$, and the corresponding number of calls Algorithm~\ref{alg2} makes to Algorithm~\ref{alg1}.

Second, we may take advantage of the fact that $\Phi_l^{\gamma_2}$ is {\it
sparser\/} than $\Phi_l$.
As noted in~\cite[p.~37]{Elkies:AtkinBirthday}, the coefficient of $X^aY^b$ in $\Phi_l^{\gamma_2}$ is zero unless
\begin{equation}\label{gamma2sparse}
a + lb \equiv l+1 \mod 3.
\end{equation}
The proof of this relation goes back to Weber: the argument given
in~\cite[p.\ 266]{Weber:Algebra} generalizes immediately to~$\gamma_2$.
It allows us to reduce the number
of points we use to interpolate $\Phi_l^{\gamma_2}$ by a factor of
approximately 3.

Let $n=\lceil(l+1)/3\rceil+1$. When selecting a suitable order $\O$ as in Section~\ref{selectorder}, we now only require that $h(\O)\ge n$, and further modify Algorithm~\ref{alg1} as follows:
\begin{itemize}
\item
In Steps 4--6 we construct just $n$ polynomials $\Phi_l^{\gamma_2}(X,\sqrt[3]{j_i})$ of degree $l+1$.
\item
In Step 7 we interpolate $l+2$ polynomials $\phi^*_k$ of degree less than $n$ by writing $\phi_k=Y^c\phi^*_k(Y^3)$, with $c\in\{0,1,2\}$ satisfying $c+lk\equiv l+1\bmod 3$.
\end{itemize}
This reduces the cost of all the significant components of Algorithm~\ref{alg1} by a factor of approximately 3.
The reduction in the cost of the interpolations in Step~7 is actually greater than this, since its complexity is superlinear in the degree.

The total size of $\Phi_l^{\gamma_2}$ is approximately 9 times smaller than $\Phi_l$, and with the optimizations above, the time to compute it is effectively reduced by the same factor.
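The cube roots required by the modification above are inexpensive to compute: for $p\equiv 2\bmod 3$ the map $x\mapsto x^3$ is a bijection on $\mathbf{F}_p$, inverted by $x\mapsto x^{(2p-1)/3}$, since $3\cdot(2p-1)/3=2(p-1)+1\equiv 1\bmod{p-1}$. An illustrative Python sketch (ours, not part of the algorithms above):

```python
def unique_cube_root(a, p):
    """For a prime p = 2 mod 3, return the unique cube root of a in F_p.

    Cubing is a bijection on F_p in this case; the inverse map is
    x -> x^((2p-1)/3), because 3*(2p-1)/3 = 2(p-1)+1 = 1 mod (p-1)."""
    assert p % 3 == 2
    return pow(a, (2 * p - 1) // 3, p)
```

In the setting above, each $j$-invariant in $\text{\rm Ell}_\O(\mathbf{F}_p)$ and $\text{\rm Ell}_R(\mathbf{F}_p)$ would be replaced by such a cube root, at a cost of one exponentiation per element.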
A small amount of additional time is required to compute the cube roots of the elements in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ and $\\text{\\rm Ell}_R(\\mathbf{F}_p)$, but even this can be avoided.\n\nProvided we have already computed $\\Phi_{l'}^{\\gamma_2}$ for some small values of $l'$ (specifically, for the primes $l_0$ and $l_1$ of Section~\\ref{selectorder}), we may use these polynomials to directly enumerate sets $\\text{\\rm Ell}_\\O^{\\gamma_2}(\\mathbf{F}_p)$ and $\\text{\\rm Ell}_R^{\\gamma_2}(\\mathbf{F}_p)$ containing the cube roots of the elements in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ and $\\text{\\rm Ell}_R(\\mathbf{F}_p)$ respectively. We need only compute the cube roots of $j_0$ and $j_0'$ as starting points. This third optimization yields a small but useful improvement in the case of $\\gamma_2$, and plays a critical role in the examples that follow.\n\n\\subsection{Recovering $\\Phi_l$ from $\\Phi_l^{\\gamma_2}$}\n\nHaving computed $\\Phi_l^{\\gamma_2}$, we note that $\\Phi_l$ may be computed via~\\cite[Eq. 23]{Elkies:AtkinBirthday}:\n\\begin{equation}\\label{gammaToj}\n\\Phi_l(X^3,Y^3) = \\Phi_l^{\\gamma_2}(X,Y)\\Phi_l^{\\gamma_2}(X,\\omega Y)\\Phi_l^{\\gamma_2}(X,\\omega^2 Y),\n\\end{equation}\nwhere $\\omega=e^{2\\pi i\/3}$. For computation in $\\mathbf{Z}$, or modulo $m$, it is more convenient to express $\\Phi_l^{\\gamma_2}$ in terms of polynomials $P_0,P_1,P_2\\in\\mathbf{Z}[X,Y]$ satisfying\n\\begin{equation}\\label{gammaSplit}\n\\Phi_l^{\\gamma_2}(X,Y) = P_0(X^3,Y^3)Y^b + P_1(X^3,Y^3)XY + P_2(X^3,Y^3)X^2Y^{2-b},\n\\end{equation}\nwhere $b=2$ when $l\\equiv 1\\bmod 3$, and $b=0$ when $l\\equiv 2\\bmod 3$. 
We then have\n\\begin{equation}\\label{GammaToj}\n\\Phi_l = P_0^3 Y^b + (P_1^3 - 3P_0P_1P_2) XY + P_2^3 X^2 Y^{2-b}.\n\\end{equation}\nUsing Kronecker substitution and fast multiplication, it is possible to evaluate (\\ref{GammaToj}) in time $O(l^3(\\log l)^{2+\\varepsilon})$, which is asymptotically faster than Algorithm~\\ref{alg2}. This suggests that we might more efficiently compute $\\Phi_l$ by recovering it from $\\Phi_l^{\\gamma_2}$, but we do not find this to be true in practice: it actually takes longer to evaluate (\\ref{GammaToj}) than it does to compute $\\Phi_l$ directly. This can be explained by two factors. First, the $\\mathbf{F}_p$-operations used in Algorithm~\\ref{alg1} effectively have unit cost for word-size primes, making it faster than Theorem~\\ref{main-thm} would suggest for all but very large $l$. Second, the evaluation of (\\ref{GammaToj}) becomes extremely memory intensive when $l$ is large. However, if we are computing $\\Phi_l \\bmod m$ with $\\log m \\ll l\\log l$, then the time to apply (\\ref{GammaToj}) modulo $m$ is negligible. In this situation it is quite advantageous to derive $\\Phi_l\\bmod m$ from $\\Phi_l^{\\gamma_2}\\bmod m$, as may be seen in Table~3 of Section~\\ref{results}.\n\n\\subsection{Computing modular polynomials for the Weber $\\mathfrak{f}$ function}\\label{Weberf}\nWe now consider the classical Weber function~\\cite[p.~114]{Weber:Algebra} defined by\n$$\n\\mathfrak{f}(z)=\\zeta_{48}^{-1}\\frac{\\eta((z+1)\/2)}{\\eta(z)},\n$$\nwhere $\\zeta_{48}=e^{\\frac{\\pi i}{24}}$ and $\\eta(z)$ is the Dedekind eta \nfunction. 
This is a modular function of level 48 that satisfies \n$\\gamma_2 = (\\mathfrak{f}^{24}-16)\/\\mathfrak{f}^8$, see~\\cite[p.~179]{Weber:Algebra}, thus we have\n$$\n\\Psi^{\\mathfrak{f}}(X,j) = (X^{24}-16)^3 - X^{24}j.\n$$\nAsymptotically, we expect to be able to reduce the height bound $B_l$ by a \nfactor of $\\deg_X\\Psi^{\\mathfrak{f}}\/\\deg_j\\Psi^{\\mathfrak{f}}=72$ when \ncomputing $\\Phi_l^\\mathfrak{f}$. One can derive an explicit bound along the \nlines of~(\\ref{gamma2height}), but this tends to overestimate the $O(l)$ term \nquite significantly, so in practice for large $l$ we use the heuristic bound\n\\begin{equation}\\label{Weberheight}\nB_l^\\mathfrak{f} = \\frac{1}{12}l\\log l + \\frac{1}{5}l\\qquad\\qquad(l>2400),\n\\end{equation}\nwhich has been verified for every prime $l$ between 2400 and 10000. The \nmodular polynomial $\\Phi_l^{\\mathfrak{f}}$ is also sparse: the coefficient \nof $X^aY^b$ can be nonzero only when\n\\begin{equation}\\label{Webersparse}\nla+b\\equiv l+1\\bmod 24,\n\\end{equation}\nas shown in~\\cite[p.~266]{Weber:Algebra}. Thus $\\Phi_l^\\mathfrak{f}$ is\nroughly $72\\cdot24=1728$ times smaller than $\\Phi_l$. By applying the\ntechnique described above for $\\gamma_2$, \\emph{mutatis mutandis}, we can\nactually compute $\\Phi_l^\\mathfrak{f}$ more than 1728 times faster than\n$\\Phi_l$ for large values of $l$, as may be seen in Table~2 of\nSection~\\ref{results}. When applying Algorithm~\\ref{alg2}, we now insist that\n$\\O$ have discriminant $D\\equiv 1\\bmod 8$ and $3\\nmid D$, since\nthe Weber function yields class invariants in (at least) this case, see~\\cite{Gee:GeneratingClassFields}, for example.\n\nSince $-\\mathfrak{f}$ will also yield class invariants for~$\\O$, the \npolynomial $\\Psi^\\mathfrak{f}(X,j_0)$ will always have at least {\\it two\\\/}\nroots. 
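Both (\ref{Weberheight}) and (\ref{Webersparse}) are simple enough to render in a few lines of Python. The sketch below is our own illustration; the function names are ours, and the logarithm in the height bound is taken natural here, so the numeric output should be treated as schematic rather than as reproducing the tabulated bit counts.

```python
import math

# Our schematic rendering of the heuristic height bound (Weberheight) and the
# sparsity test (Webersparse); names and the log convention are assumptions.

def weber_height_bound(l):
    # Stated in the text for primes l > 2400.
    return l * math.log(l) / 12 + l / 5

def weber_coeff_may_be_nonzero(l, a, b):
    # Coefficient of X^a Y^b can be nonzero only if l*a + b = l + 1 (mod 24).
    return (l * a + b) % 24 == (l + 1) % 24

# As with gamma_2, only about 1 coefficient in 24 can be nonzero.
```
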
The following lemma tells us that we can impose a congruence condition\non~$p$ to ensure that we have exactly two roots.\n\n\\begin{lemma}\\label{WeberTwoRoots}\nLet $p\\equiv 11\\bmod 12$ be prime and let $j_0$ be the $j$-invariant of an \nelliptic curve $E\/\\mathbf{F}_p$ with $\\operatorname{End}(E)$ isomorphic to an imaginary quadratic \norder $\\O$ with discriminant $D\\equiv 1\\bmod 8$ and $3\\nmid D$. \nThen $\\Psi^\\mathfrak{f}(X,j_0)\\in\\mathbf{F}_p[X]$ has exactly two roots in $\\mathbf{F}_p$, and \nthese are of the form $x_0$ and $-x_0$.\n\\end{lemma}\n\\noindent\nNote that if the lemma applies to $\\O$, it also applies to the order $R$ of index $l$ in $\\O$.\n\n\\begin{proof}\nWe only have to apply Theorem~\\ref{Shimura}. The action of $A$ on the roots of \n$\\Psi^\\mathfrak{f}(X,j_0)$ is computed in~\\cite[\\S6.7]{Broker:Thesis}, and \nthis yields the lemma.\n\\end{proof}\n\nGiven a $j$-invariant $j_0\\in\\mathbf{F}_p$ that corresponds to $j(\\tau_0)\\in K_\\O$, we cannot readily determine which of the roots $x_0$ and $-x_0$ of $\\Psi^\\mathfrak{f}(X,j_0)$ actually corresponds to $\\mathfrak{f}(\\tau_0)$. The functions $\\mathfrak{f}$ and $-\\mathfrak{f}$ yield distinct class invariants, but they share the same modular polynomials, since $\\Phi_l^\\mathfrak{f}(X,Y)=\\Phi_l^\\mathfrak{f}(-X,-Y) = \\Phi_l^{-\\mathfrak{f}}(X,Y)$, by (\\ref{Webersparse}).\n\nThus for the initial $j_0$ obtained in Step 2 of Algorithm~\\ref{alg1}, it does not matter whether we pick $x_0$ or $-x_0$ as a root of $\\Psi^\\mathfrak{f}(X,j_0)$, and we need not be concerned with making a consistent choice for each prime $p$. However, it is critical that while computing $\\Phi_l^\\mathfrak{f}\\bmod p$ we make a consistent choice of sign for each $j$-invariant we convert to an ``$\\mathfrak{f}$-invariant'' (a root of $\\Psi^\\mathfrak{f}(X,j_i)\\bmod p$). This makes it impractical to enumerate $j$-invariants and convert them \\emph{en masse}. 
Instead, as described for $\\gamma_2$ above, we use modular polynomials $\\Phi_{l'}^\\mathfrak{f}$ for small $l'$ to enumerate sets $\\text{\\rm Ell}_\\O^{\\mathfrak{f}}(\\mathbf{F}_p)$ and $\\text{\\rm Ell}_R^{\\mathfrak{f}}(\\mathbf{F}_p)$ from starting points $x_0$ and $x_0'$ satisfying $\\Psi^\\mathfrak{f}(x_0,j_0)=\\Psi^\\mathfrak{f}(x_0',j_0')=0$. This ensures that signs are chosen consistently within each of these sets; we then only need to check that the sign choices for the two sets are consistent with each other.\n\nTo do so, we use the fact that the coefficient of $X^lY^l$ in $\\Phi_l^\\mathfrak{f}(X,Y)$ is $-1$. This is shown for $\\Phi_l$ in~\\cite[\\S69]{Weber:Algebra}, and the same argument applies to $\\Phi_l^\\mathfrak{f}$. We modify Algorithm~\\ref{alg1} to compute the coefficient of $X^lY^l$ in $\\Phi_l^\\mathfrak{f}\\bmod p$ in between Steps 5 and 6. We do this twice, switching the signs in $\\text{\\rm Ell}_\\O^{\\mathfrak{f}}(\\mathbf{F}_p)$ the second time, and expect exactly one of these computations to yield $-1$, thereby determining a consistent choice of signs. This test should be regarded as a heuristic, since we do not rule out the possibility that both choices produce $-1$. However, in the course of extensive testing this has never happened, and we suspect that it cannot. If it does occur, the algorithm will detect this, and can then simply choose a different prime $p$.\n\n\\subsection{Eta quotients and Atkin modular functions} For a prime $N$, let\n$$\nf_N(z) = N^{s\/2}\\left(\\frac{\\eta(Nz)}{\\eta(z)}\\right)^{s},\n$$\nwhere $s=24\/\\gcd(12,N-1)$. These are modular functions of level $N$, and the polynomials $\\Psi_N = \\Psi^{f_N}$ that relate $f_N$ to $j$ are sometimes called canonical modular polynomials~\\cite[p.~418]{Cohen:HECHECC}. 
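The small amount of bookkeeping attached to $f_N$, namely the exponent $s=24/\gcd(12,N-1)$ and the Atkin--Lehner transformation $x\mapsto N^s/x$ that appears later in this subsection, can be sketched as follows. This is our own illustration: the function names are ours, $p$ is assumed prime, and $x$ is assumed nonzero modulo $p$.

```python
# Our illustrative sketch (not the paper's code): the exponent s attached to
# f_N, and the Atkin--Lehner transformation x -> N^s / x realized over F_p.
from math import gcd

def eta_exponent(N):
    return 24 // gcd(12, N - 1)

def atkin_lehner(x, N, p):
    s = eta_exponent(N)
    return (pow(N, s, p) * pow(x, p - 2, p)) % p  # x^(p-2) = x^(-1) in F_p

# The map is an involution: applying it twice returns x (for p not dividing N).
```

For example, $s=12$ for $N=3$ and $s=2$ for $N=13$, matching $s=24/\gcd(12,N-1)$.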
The functions $f_N$ are closely related to the functions $\\mathfrak{w}_N^s$ considered in~\\cite{Enge:GeneralizedWeberI}, and in fact $\\Psi^{f_N}=\\Psi^{\\mathfrak{w}_N^s}$, so what follows applies to both. When $N$ is 2, 3, 5, 7, or 13, we have $\\deg_j \\Psi_N = 1$ and can adapt Algorithm~\\ref{alg2} to compute polynomials $\\Phi_l^{f_N}$ for odd primes $l\\nmid N$. We assume here that $N$ is also odd.\n\nWe have $\\deg_X \\Psi_N=N+1$, hence we can reduce the height bound $B_l$ by a\nfactor of approximately $N+1$. When selecting a suitable order $\\O$, we\nrequire that $N$ is prime to the conductor and splits into prime ideals that\nare distinct in $\\operatorname{cl}(\\O)$. This assumption is stronger than we need, but it simplifies the implementation.\nFor the primes $p\\in S$ we require that $N$ is prime to $v$, where $4p=t^2-v^2\\operatorname{disc}(\\O)$.\n\nAs shown in \\cite{Muller:thesis}, the polynomial $\\Psi_N(X,j_0)\\bmod p$ has the same splitting type\nas $\\Phi_N(X,j_0)\\bmod p$. In particular, for $j_0\\in\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ (or $j_0\\in \\text{\\rm Ell}_R(\\mathbf{F}_p)$) it has exactly two roots, say $x_1$ and $x_2$. These correspond to $N$-isogenies as follows:\nthe $j$-invariants $j_1$ and $j_2$ of the two elliptic curves that are $N$-isogenous to $j_0$ are uniquely determined by the relations $\\Psi_N(N^s\/x_1,j_1)=0$ and $\\Psi_N(N^s\/x_2,j_2)=0$. Here the transformation $x\\mapsto N^s\/x$ realizes the Atkin-Lehner involution on $f_N(z)$.\n\nStarting points $x_0$ and $x_0'$ corresponding to $j_0$ and $j_0'$ are chosen as follows. Let $x_0'$ be a root of $\\Psi_N(X,j_0')$, chosen arbitrarily, and let $j_1'$ be determined by $\\Psi_N(N^s\/x_0',j_1')=0$. 
We then use V\\'elu's formulas to obtain the $j$-invariant $j_1$ of the elliptic curve that is $l$-isogenous to $j_1'$ (there is exactly one and it lies in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$, since $j_1'\\in\\text{\\rm Ell}_R(\\mathbf{F}_p)$ is on the floor of its $l$-volcano). Finally, $x_0$ is uniquely determined by the constraints $\\Psi_N(x_0,j_0)=0$ and $\\Psi_N(N^s\/x_0,j_1)=0$.\n\nWe next consider double eta-quotients~\\cite{Enge:DoubleEtaQuotient,EngeSchertz:CompositeLevel} of\ncomposite level $N=p_1p_2$:\n$$\n\\mathfrak{w}_{p_1,p_2}^s(z) = \\left(\\frac{\\eta(\\frac{z}{p_1})\\eta(\\frac{z}{p_2})}{\\eta(\\frac{z}{p_1p_2})\\eta(z)}\\right)^s,\n$$\nwhere $p_1\\ne p_2$ are primes and $s=24\/\\gcd(24,(p_1-1)(p_2-1))$. For $(p_1,p_2)$ in\n$$\n\\bigl\\{(2,3),(2,5),(2,7),(2,13),(3,5),(3,7),(3,13),(5,7)\\bigr\\},\n$$\nthe polynomial $\\Psi_{p_1,p_2}=\\Psi^{\\mathfrak{w}_{p_1,p_2}^s}$ has degree 2 in\n$j$ and we can compute $\\Phi_l^{\\mathfrak{w}_{p_1,p_2}^s}$ for odd primes\n$l\\nmid N$. Our restrictions on $\\O$ are analogous to those for $f_N$ or\n$\\mathfrak{w}_N^s$: we require that $N$ is prime to the conductor and that both\n$p_1$ and $p_2$ split into distinct prime ideals in $\\operatorname{cl}(\\O)$. Our\nrequirements for $p\\in S$ are as above. We can reduce the height bound $B_l$ by a\nfactor of approximately $(p_1+1)(p_2+1)\/2$.\n\nWith the double eta-quotients, the polynomial $\\Psi_{p_1,p_2}(X,j_0)$ has four\nroots, corresponding to four distinct isogenies of (composite) degree $N$.\nEach root $x_i$ uniquely determines the $j$-invariant of a curve $N$-isogenous\nto $E\/\\mathbf{F}_p$ as the unique root of $\\Psi_{p_1,p_2}(x_i,J)\/(J-j_0)\\in\\mathbf{F}_p[J]$. The\ndouble eta-quotients are invariant under the Atkin-Lehner involution, so we\nneed not transform $x_i$. 
With this understanding, the procedure for selecting\n$x_0$ and $x_0'$ is as above.\n\nIn some cases one can obtain smaller modular polynomials by considering\nsuitable roots of the functions defined above. For example, a sixth root of\n$f_3$ also yields class invariants (this is shown for $\\mathfrak{w}_3^2$\nin~\\cite{Enge:GeneralizedWeberI}), and the corresponding modular polynomials\nare sparser and of lower height (by a factor of 6).\n\nOur algorithm also applies to the Atkin modular functions, which we denote\n$A_N$. These are (optimal) modular functions for $X_0^+(N)$ invariant under\nthe Atkin-Lehner involution, see\n\\cite{Elkies:AtkinBirthday,Morain:PointCounting} for further details. The\npolynomials $\\Psi^{A_N}$ are known as Atkin modular polynomials, and are\navailable in computer algebra systems such as Magma~\\cite{Magma} and Sage\n\\cite{SAGE}. For primes $N<32$, and also $N$ in the set $\\{41,47,59,71\\}$, we\nhave $\\deg_j\\Psi^{A_N} = 2$ and can compute polynomials $\\Phi_l^{A_N}$ for odd\nprimes $l\\ne N$. This is done in essentially the same way as with the double\neta-quotients, except that now $N$ is prime and $\\Psi^{A_N}(X,j_0)$ has just\ntwo roots, rather than four. For these $A_N$, the height bound can be reduced\nby a factor of approximately $(N+1)\/2$.\n\nFinally, we note an alternative approach applicable to both eta-quotients and\nthe Atkin modular functions. If we choose $\\O$ so that the prime factors of\n$N$ are all ramified, then there is actually a unique $x_0\\in\\mathbf{F}_p$ corresponding\nto each $j_0$ in $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ and $\\text{\\rm Ell}_R(\\mathbf{F}_p)$, that is, $\\Psi^g(X,j_0)$ has exactly one root\nin $\\mathbf{F}_p$. In this scenario we can\nsimply enumerate $j$-invariants as usual and then replace each $j_i$ with a\ncorresponding $x_i$. 
This is not as\nefficient and places stricter requirements on $\\O$, but it allows us to compute\n$\\Phi_l^g$ without needing to know $\\Phi_{l'}^g$ for any $l'$. This provides a\nconvenient way to ``bootstrap'' the process. In fact, all of the modular\npolynomials $\\Phi_l^g$ we have considered can eventually be obtained via\nAlgorithm~\\ref{alg2}, starting from the polynomials $\\Psi^g$ and $\\Phi_2$.\n\n\\section{Computational results}\\label{results}\nWe have applied our algorithm to compute polynomials $\\Phi_l^g$ for all the modular functions discussed in Section~\\ref{OtherFunctions} and every applicable $l$ up to 1000. For the functions $j$, $\\gamma_2$, and $\\mathfrak{f}$ we have gone further, and present details of these computations here.\n\n\\subsection{Implementation} The algorithms described in this paper were implemented using the GNU C\/C++ compiler~\\cite{GNU} and the GMP library~\\cite{GMP} on a 64-bit Linux platform. Multiplication of large polynomials is handled by the \n{\\tt zn\\_poly} library developed by Harvey~\\cite{Harvey:KroneckerSubstitution,Harvey:zn_poly}.\n\nThe hardware platform included four 3.0 GHz AMD Phenom II processors, each with four cores and 8GB of memory. Up to 16 cores were used in the larger tests, with essentially linear speedup. For consistency we report total CPU times, noting that in a multi-threaded implementation, disk and network I\/O can be overlapped with CPU activity so that all computations are CPU bound.\n\nAs a practical optimization, we do not use the Hilbert class polynomial $H_\\O$ in Step~1 of Algorithm~\\ref{alg2}. Instead, we compute the minimal polynomial of some more favorable class invariant, as described in~\\cite{EngeSutherland:CRTClassInvariants}, which is then used to obtain a $j$-invariant. Additionally, as noted in Section~\\ref{selectorder}, it suffices to compute a class polynomial for the maximal order containing $\\O$. 
With these optimizations the time spent computing class polynomials is completely negligible (well under one second).\n\nAnother important optimization is the use of polynomial gcds to accelerate root-finding when walking paths in the isogeny graph, a technique developed in~\\cite[\\S 2]{EngeSutherland:CRTClassInvariants}. This greatly accelerates the enumeration of the sets $\\text{\\rm Ell}_\\O(\\mathbf{F}_p)$ and $\\text{\\rm Ell}_R(\\mathbf{F}_p)$ in Steps 3 and 5 of Algorithm~\\ref{alg2}. As a result, most of the computation (typically over 75\\%) is spent interpolating polynomials in Steps 6 and 7.\n\n\\subsection{Computations over $\\mathbf{Z}$}\nTables 1 and 2 provide performance data for computations of $\\Phi_l$ and $\\Phi_l^\\mathfrak{f}$ using Algorithm~\\ref{alg2}. For each $l$ we list:\n\\begin{itemize}\n\\item The discriminant $D$ of the suitable order $\\O$.\n\\item The number of CRT primes $n=\\#S$ used.\n\\item The height bound $B_l$ in bits and the actual bit-size $b_l$ of the largest coefficient.\n\\item The total size of $\\Phi_l$ (resp. $\\Phi_l^\\mathfrak{f}$) in megabytes (1MB = $10^6$ bytes), computed as the sum of the coefficient sizes, with symmetric terms counted only once.\n\\item The total CPU time, in seconds. 
This includes the time to select $\\O$.\n\\item The throughput, defined as the total size divided by the total CPU time.\n\\end{itemize}\n\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{@{}rrrrrrrr@{}}\n$l$ &$|D|$&$n$&$B_l$&$b_l$&size (MB)&time (s)&MB\/s\\\\\n\\midrule\n 101 & 216407 & 184 & 6511 & 5751 & 2.65 & 2.25 & 1.18\\\\\n 211 & 393047 & 369 & 14949 & 13359 & 27.6 & 14.4 & 1.92\\\\\n 307 & 837407 & 531 & 22748 & 20483 & 90.5 & 51.0 & 1.78\\\\\n 401 & 626431 & 725 & 30640 & 27642 & 211 & 130 & 1.62\\\\\n 503 & 3076175 & 870 & 39421 & 35686 & 431 & 264 & 1.63\\\\\n 601 & 461351 & 1011 & 48027 & 43542 & 755 & 485 & 1.56\\\\\n 701 & 1254871 & 1229 & 56953 & 51731 & 1227 & 863 & 1.42\\\\\n 809 & 916599 & 1376 & 66731 & 60743 & 1926 & 1410 & 1.37\\\\\n 907 & 986855 & 1517 & 75712 & 69017 & 2759 & 2010 & 1.37\\\\\n1009 & 2871983 & 1728 & 85157 & 77653 & 3857 & 2910 & 1.32\\\\\n2003 & 91696103 & 3410 & 180941 & 166095 & 33120 & 31800 & 1.04\\\\\n3001 & 248329639 & 5122 & 281635 & 259272 & 117256 & 143000 & 0.82\\\\\n4001 & 72135279 & 6939 & 385300 & 355707 & 287783 & 363000 & 0.79 \\\\\n5003 & 67243191 & 8373 & 491355 & 454429 & 577740 & 749000 & 0.77 \\\\\n\\bottomrule\n\\end{tabular}\n\\\\\n\\vspace{8pt}\n\\textsc{Table} 1. Computations of $\\Phi_l$ over $\\mathbf{Z}$.\\\\\n\\end{center}\n\\end{table}\n\nIn the last column of Table 1 one can see the quasilinear performance of Algorithm~\\ref{alg2} as a function of the size of $\\Phi_l$, and the constant factors appear to be advantageous relative to other algorithms. For example, computing $\\Phi_{1009}$ with the evaluation\/interpolation algorithm of~\\cite{Enge:ModularPolynomials} uses approximately 100000 CPU seconds (scaled to our hardware platform), while Algorithm~\\ref{alg2} needs less than 3000. 
\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{@{}rrrrrrrr@{}}\n$l$ &$|D|$&$n$&$B_l$&$b_l$&size (MB)&time (s)&MB\/s\\\\\n\\midrule\n 1009 & 1391 & 33 & 1275 & 1099 & 2.34 & 1.59 & 1.47\\\\\n 2003 & 37231 & 58 & 2542 & 2271 & 19.5 & 10.7 & 1.81\\\\\n 3001 & 88879 & 88 & 3822 & 3611 & 69.6 & 47.7 & 1.46\\\\\n 4001 & 53191 & 112 & 5201 & 4801 & 167 & 116 & 1.45\\\\\n 5003 & 30959 & 136 & 6613 & 6228 & 339 & 241 & 1.41\\\\\n 6007 & 463039 & 170 & 8052 & 7530 & 595 & 493 & 1.21\\\\\n 7001 & 150631 & 192 & 9496 & 8876 & 957 & 701 & 1.37\\\\\n 8009 & 315031 & 220 & 10979 & 10292 & 1453 & 1200 & 1.21\\\\\n 9001 & 179159 & 240 & 12453 & 11974 & 2123 & 1790 & 1.18\\\\\n10009 & 207919 & 265 & 13964 & 13453 & 2953 & 2630 & 1.12\\\\\n20011 & 1114879 & 537 & 29485 & 27860 & 24942 & 27600 & 0.90\\\\\n30011 & 2890639 & 795 & 45649 & 43304 & 87660 & 123000 & 0.71\\\\\n40009 & 22309439 & 1032 & 62210 & 59439 & 214273 & 335000 & 0.64\\\\\n50021 & 37016119 & 1316 & 79116 & 78077 & 508571 & 677000 & 0.75\\\\\n60013 & 27334823 & 1594 & 96165 & 91733 & 747563 &1150000 & 0.65\\\\\n\\bottomrule\n\\end{tabular}\n\\\\\n\\vspace{8pt}\n\\textsc{Table} 2. Computations of $\\Phi_l^{\\mathfrak{f}}$ over $\\mathbf{Z}$.\\\\\n\\vspace{2pt}\n\\end{center}\n\\end{table}\n\nThe first five rows of Table 2 may be compared to the corresponding rows of Table 1 to see the performance advantage gained when computing modular polynomials for the Weber $\\mathfrak{f}$ function rather than $j$. As expected, these polynomials are approximately 1728 times smaller, and the speedup achieved by Algorithm~\\ref{alg2} is even better; we already achieve a speedup of around 1800 when $l=1009$, and this increases to over 3000 when $l=5003$. 
This can be explained by the superlinear complexity of interpolation, as well as the superior cache utilization achieved by condensing the sparse coefficients of $\\Phi_l^\\mathfrak{f}$, as described in Section~\\ref{ComputingGamma2}.\n\nAs noted in Section~\\ref{Weberf}, we used a heuristic height bound for the computations in Table~2. The gap between the values of $b_l$ and $B_l$ in each case gives us high confidence in the results (the probability of this occurring by chance is negligible).\n\n\\subsection{Computations modulo $\\boldsymbol{m}$}\n\nTable~3 gives timings for computations of $\\Phi_l$ modulo 256-bit and 1024-bit primes $m$. The values of $m$ are arbitrary, and, in particular, they are not of a form suitable for direct computation with Algorithm~\\ref{alg1}. Instead, Algorithm~\\ref{alg2} derives $\\Phi_l\\bmod m$ from the computations of $\\Phi_l\\bmod p$, for $p\\in S$, using the explicit CRT. The same set $S$ is used as when computing $\\Phi_l$ over $\\mathbf{Z}$, so the running time is largely independent of $m$, but using the explicit CRT (mod $m$) yields a noticeable speedup when $\\log m$ is significantly smaller than $6l\\log l$. 
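The reconstruction step just described can be sketched as follows. This is our own illustration: a plain CRT over the primes in $S$ followed by reduction modulo $m$, whereas the explicit CRT used in Algorithm~2 avoids ever forming the full integer; the function and variable names are ours.

```python
# Sketch (ours): recover a coefficient c modulo m from its images modulo the
# primes in S, assuming |c| < 2^height_bound and prod(S) > 2^(height_bound+1).
from functools import reduce

def crt_mod_m(residues, primes, m, height_bound):
    M = reduce(lambda a, b: a * b, primes)
    assert M > 2 ** (height_bound + 1)  # the primes determine c uniquely
    c = 0
    for r, p in zip(residues, primes):
        Mp = M // p
        c += r * Mp * pow(Mp, -1, p)    # modular inverse (Python >= 3.8)
    c %= M
    if c > M // 2:                      # lift to the signed representative
        c -= M
    return c % m
```
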
For example, when $l=1009$ it takes approximately 2300 seconds to compute $\\Phi_l\\bmod m$, for the $m$ listed in Table~3, versus about 2900 seconds to compute $\\Phi_l$ over $\\mathbf{Z}$.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{@{}rrrrrrrrrr@{}}\n&&\\multicolumn{3}{c}{$m = 2^{256}-189$}&&\\multicolumn{3}{c}{$m = 2^{1024}-105$}\\\\\n\\cmidrule(r){3-5}\\cmidrule(r){7-9}\n$l$&&$\\Phi_l$&$\\Phi_l^{\\gamma_2}$&$\\Phi_l^*$&\\hspace{12pt}&$\\Phi_l$&$\\Phi_l^{\\gamma_2}$&$\\Phi_l^*$\\\\\n\\midrule\n 101 && 2.12 & 0.16 & 0.47 && 2.16 & 0.17 & 1.53 \\\\\n 211 && 12.4 & 1.64 & 3.26 && 12.7 & 1.68 & 7.95 \\\\\n 307 && 43.3 & 4.82 & 8.34 && 44.0 & 4.93 & 19.3 \\\\\n 401 && 109 & 10.9 & 17.9 && 111 & 11.1 & 38.0 \\\\\n 503 && 215 & 23.3 & 34.0 && 219 & 23.8 & 66.4 \\\\\n 601 && 390 & 40.5 & 55.8 && 395 & 41.4 & 110 \\\\\n 701 && 695 & 69.1 & 90.2 && 703 & 70.3 & 158 \\\\\n 809 && 1130 & 105 & 134 && 1150 & 107 & 222 \\\\\n 907 && 1590 & 158 & 194 && 1600 & 160 & 306 \\\\\n 1009 && 2300 & 223 & 267 && 2320 & 225 & 403 \\\\\n 2003 && 23900 & 2400 & 2590 && 24100 & 2440 & 3210 \\\\\n 3001 && 106000 & 9250 & 9650 && 107000 & 9360 & 11200 \\\\\n 4001 && 283000 & 25100 & 25900 && 287000 & 25400 & 28600 \\\\\n 5003 && 647000 & 57000 & 58300 && 653000 & 60200 & 65700 \\\\\n10009 && 7180000 & 681000 & 687000 && 7320000 & 688000 & 713000\\\\\n\\bottomrule\n\\end{tabular}\n\\\\\n\\vspace{8pt}\n\\textsc{Table} 3. Computations of $\\Phi_l$ and $\\Phi_l^{\\gamma_2}$ modulo $m$.\\\\\n\\vspace{2pt}\n\\footnotesize\nColumns $\\Phi_l^*$ list the total time to obtain $\\Phi_l$ by computing $\\Phi_l^{\\gamma_2}$ and applying (\\ref{GammaToj}).\n\\normalsize\n\\end{center}\n\\end{table}\n\nIn addition to computing $\\Phi_l\\bmod m$ directly, we may also obtain $\\Phi_l\\bmod m$ by first computing $\\Phi_l^{\\gamma_2}\\bmod m$ and then applying (\\ref{GammaToj}), as discussed in Section~\\ref{ComputingGamma2}. 
The time to compute $\\Phi_l^{\\gamma_2}\\bmod m$ is essentially independent of $m$, but the time to apply (\\ref{GammaToj}) is not. Even so, for the 256-bit and 1024-bit $m$ that we used, computing $\\Phi_l\\bmod m$ in this fashion is much faster than computing $\\Phi_l\\bmod m$ directly; for $l=1009$ we achieve times of 223 and 403 seconds, respectively. As when computing $\\Phi_l^{\\mathfrak{f}}$, this speedup improves superlinearly, and for large~$l$ is noticeably greater than the expected factor of 9.\n\nWhen computing $\\Phi_l^{\\gamma_2}\\bmod m$ we used the height bound $B_l^{\\gamma_2}=2l\\log l + 8l$ given by (\\ref{gamma2height}). The timings in Table~3 would be further improved if the heuristic bound $B_l^{\\gamma_2}=2l\\log l + 4l$ were used instead. While we conjecture that this bound holds for all $l>60$, caution is warranted when applying heuristic bounds to computations performed modulo $m$: the impact of an incorrect height bound may not be immediately apparent, as it is when computing $\\Phi_l$ over $\\mathbf{Z}$.\n\nThe computations listed in Tables 1 and 2 were practically limited by space, not time. The largest computations took only a day or two when run on 16 cores, but required nearly a terabyte of disk storage. However when computing $\\Phi_l\\bmod m$, we can handle larger values of $l$ without using an excessive amount of space. 
When $l=20011$, for example, the total size of $\\Phi_l$ is over 30 terabytes, but we are able to compute $\\Phi_l$ modulo a 256-bit integer $m$ using less than 10 gigabytes.\n\n\\section*{Acknowledgments}\nWe thank David Harvey for his {\\tt zn\\_poly} library, and also Andreas Enge for providing timings for his evaluation\/interpolation algorithm.\n\n\\section*{Appendix}\n\\begin{lemma}\\label{HWbound}\nLet $\\varepsilon$ and $c$ be any positive real constants.\nLet $\\pi_\\varepsilon(x)$ count the integers $n\\in[3,x]$ with more than $(\\log\\log n)^{1+\\varepsilon}$ prime factors.\nThen $\\pi_\\varepsilon(x)$ is $o(x\/(\\log x)^c)$.\n\\end{lemma}\n\\begin{proof}\nLet $\\tau_k(x)$ count the integers in $[1,x]$ that factor into exactly $k$ primes, which need not be distinct.\nFor $x>e^2$ let $I$ denote the interval $[(\\log\\log x^{1\/2})^{1+\\varepsilon},\\log_2 x]$.\nFor all sufficiently large $x$ we have\n\\begin{equation}\n\\pi_\\varepsilon(x)\\le x^{1\/2} + \\sum_{k\\in I}\\tau_k(x).\n\\end{equation}\nClearly $x^{1\/2}=o(x\/(\\log x)^c)$, so we only need to bound the sum.\nBy~\\cite[Thm.~437]{Hardy:NumberTheory},\n$$\n\\tau_k(x)\\thickspace\\sim\\thickspace\\frac{kx(\\log\\log x)^{k-1}}{k!\\log x}\\qquad\\qquad(k\\ge 2).\n$$\nFor all sufficiently large $x$, one finds, via Stirling's approximation, that $\\tau_k(x)$ is a monotonically decreasing function of $k$ on the interval $I$ and therefore\n$$\n\\sum_{k\\in I}\\tau_k(x) \\le(\\log_2 x)\\tau_{\\lceil(\\log\\log x^{1\/2})^{1+\\varepsilon}\\rceil}(x).\n$$\nTaking logarithms and applying $\\log(n!)=n\\log n-n+O(\\log n)$, one obtains\n\\begin{equation}\\label{HWsum}\n\\log\\Bigl(\\sum_{k\\in I}\\tau_k(x)\\Bigr) \\le \\log x - \\varepsilon\\bigl(\\log\\log x^{1\/2}\\bigr)^{1+\\varepsilon}\\log\\log\\log x + O\\bigl((\\log\\log x)^{1+\\varepsilon}\\bigr).\n\\end{equation}\nFor any fixed $c_0\\in\\mathbf{R}_{>0}$ and all sufficiently large $x$ the RHS of (\\ref{HWsum}) is smaller than $\\log(c_0x\/\\log^c 
x)$.\nThus $\\sum_{k\\in I}\\tau_k(x)$ is $o(x\/(\\log x)^c)$, as desired.\n\\end{proof}\n\\bibliographystyle{amsplain}\n\\input{modpoly.bbl}\n\\end{document}\n\n\\section{Introduction}\nAutomated learning via deep neural networks is gaining increasing popularity, as a ductile procedure to address a widespread plethora of interdisciplinary applications \\cite{he2018amc, sutton2018reinforcement, grigorescu2020survey}. In standard neural network training one seeks to optimise the weights that link pairs of neurons belonging to adjacent layers of the selected architecture \\cite{Goodfellow-et-al-2016}. This is achieved by computing the gradient of the loss with respect to the sought weights, a procedure which amounts to operating in the so-called direct space of the network \\cite{spec_learn}. Alternatively, the learning can be carried out in reciprocal space: the spectral attributes (eigenvalues and eigenvectors) of the transfer operators that underlie information handling across layers define the actual target of the optimisation. This procedure, first introduced in \\cite{spec_learn} and further refined in \\cite{chicchi2021training}, enables a substantial compression of the space of trainable parameters. The spectral method leverages on a limited subset of key parameters which impact on the whole set of weights in direct space. Particularly relevant, in this respect, is the setting where the eigenmodes of the inter-layer transfer operators align along random directions. In this case, the associated eigenvalues constitute the sole trainable parameters. When employed for classification tasks, the accuracy displayed by the spectral scheme restricted to operate with eigenvalues is slightly worse than that reported when the learning is carried in direct space, for an identical architecture and by employing the full set of trainable parameters. 
To bridge the gap between conventional and spectral methods in terms of measured performances, one can also train the elements that populate the non trivial block of the eigenvectors matrix \\cite{spec_learn}. By resorting to apt decomposition schemes, it is still possible to contain the total number of trainable parameters, while reaching stunning performances in terms of classification outcomes \\cite{chicchi2021training}. \n\nIn this paper we will discuss a relevant byproduct of the spectral learning scheme. More specifically, we will argue that the eigenvalues do provide a reliable ranking of the nodes, in terms of their associated contribution to the overall performance of the trained network. Working along these lines, we will empirically prove that the absolute value of the eigenvalues is an excellent marker of the node's significance in carrying out the assigned discrimination task. This observation can be effectively exploited, downstream of training, to filter the nodes in terms of their relative importance and prune the unessential units so as to yield a more compact model, with almost identical classification abilities. The effectiveness of the proposed method has been tested for different feed-forward architectures, with just a single or multiple hidden layers, by invoking several activation functions, and against distinct datasets for image recognition, with various levels of inherent complexity. Building on these findings, we will also propose a two stages training protocol to generate minimal networks (in terms of allowed computing neurons) which outperform those obtained by hacking off dispensable units from a large, fully trained, apparatus. 
This is a viable strategy to discover a ``winning ticket'' \\cite{frankle2018lottery}: dense (randomly-initialized) feed-forward networks contain sub-networks (aka winning tickets) with recorded performance comparable to those displayed by their unaltered homologues, after a proper round of training.\n\nThe paper is organized as follows. In the next section we will discuss the mathematical foundation and set the notation of the spectral learning scheme. We will then move on to illustrating the results of the proposed spectral pruning strategy, after a short account of the alternative methods available in the literature. Finally, we will sum up and draw our conclusions. \nThe details about the proposed schemes are discussed in the Methods Section.\n\n\\section{Spectral approach to learning}\nThis Section is devoted to reviewing the spectral approach to the training of deep neural networks. The discussion will follow mainly \\cite{chicchi2021training}, where an extension of the method originally introduced in \\cite{spec_learn} is handed over.\n\nConsider a deep feed-forward network made of $\\ell$ distinct layers. Each layer is labelled with a discrete index $i$ $(=1,...,\\ell)$. Denote by $N_i$ the number of the neurons, the individual computing units, that pertain to layer $i$. Then,\nwe posit $N=\\sum_{i=1}^{\\ell} N_i$ and introduce a column vector $\\vec{x}^{(1)}$, of size $N$, the first $N_1$ entries referring to the supplied input signal. As anticipated, we will be mainly concerned with datasets for image recognition, so we will use this specific case to illustrate the more general approach of spectral learning. This means that, the first $N_1$ elements of $\\vec{x}^{(1)}$ are the intensities (from the top-left to the bottom-right, moving horizontally) as displayed on the pixels of the image presented as an input. All other entries of $\\vec{x}^{(1)}$ are identically equal to zero. 
\n\nThe aim of the procedure is to map $\\vec{x}^{(1)}$ into an output vector $\\vec{x}^{(\\ell)}$, still of size $N$: the last $N_{\\ell}$ elements are the intensities displayed at the output nodes, where reading is eventually performed. The applied transformation is composed of a suite of linear operations, interleaved with non-linear filters. To exemplify the overall strategy, consider the generic vector $\\vec{x}^{(k)}$, with $k=1,..., \\ell-1$, as obtained after $k$ executions of the above procedure. At the successive iteration, one gets $\\vec{x}^{(k+1)}= {\\mathbf A}^{(k)} \\vec{x}^{(k)}$, where ${\\mathbf A}^{(k)}$ is an $N \\times N$ matrix with a rather specific structure, as elucidated in the following and schematically depicted in Fig. \\ref{f:Lin_transf}. Further, a suitably defined non-linear function $f(\\cdot, \\beta_k)$ is applied to $\\vec{x}^{(k+1)}$, where $\\beta_k$ identifies an optional bias. To proceed in the analysis, we cast ${\\mathbf A}^{(k)}={\\mathbf \\Phi}^{(k)} {\\mathbf \\Lambda}^{(k)} \\left({\\mathbf \\Phi}^{(k)}\\right)^{-1}$ by invoking spectral decomposition. Here, ${\\mathbf \\Lambda}^{(k)}$ denotes the diagonal matrix of the eigenvalues of ${\\mathbf A}^{(k)}$. Following \\cite{chicchi2021training}, we set $\\left({\\mathbf \\Lambda}^{(k)} \\right)_{jj} = 1$ for $j< \\sum_{i=1}^{k-1} N_i$ and $j> \\sum_{i=1}^{k+1} N_i$. The remaining $N_k+N_{k+1}$ elements are initially assigned random entries, as e.g. extracted from a uniform distribution, and define a first basin of target variables for the spectral learning scheme. Then, ${\\mathbf \\Phi}^{(k)}$ is the identity matrix $\\id{}_{N \\times N}$, with the inclusion of a sub-diagonal $N_{k+1} \\times N_{k}$ block, denoted by {\\boldmath$\\phi$}$^{(k)}$, see Fig. \\ref{f:base}. This choice amounts to assuming a feed-forward architecture. 
It can be easily shown that $\left({\mathbf \Phi}^{(k)}\right)^{-1}=2 \id{}_{N \times N}- {\mathbf \Phi}^{(k)}$, which readily yields ${\mathbf A}^{(k)}={\mathbf \Phi}^{(k)} {\mathbf \Lambda}^{(k)} \left(2 \id{}_{N \times N}- {\mathbf \Phi}^{(k)} \right)$. The off-diagonal elements of ${\mathbf \Phi}^{(k)}$ define a second set of adjustable parameters to be self-consistently modulated during active training. To implement the learning scheme on this basis, we consider $\vec{x}^{(\ell)}$, the image on the output layer of the input vector $\vec{x}^{(1)}$:\n\n\begin{equation} \label{image}\n\t\vec{x}^{(\ell)} = f\left(\mathbf{A}^{(\ell-1)}... f\left (\mathbf{A}^{(1)} \vec{x}^{(1)},\beta_1 \right),\beta_{\ell-1} \right)\t \n\end{equation}\n\nSince we are dealing with image classification, we can calculate $\vec{z} = \operatorname{softmax}(\vec{x}^{(\ell)})$. We will then use $\vec{z}$ to compute the categorical cross-entropy loss function $\text{CCE}(l(\vec{x}^{(1)}), \vec{z})$, where $l(\vec{x}^{(1)})$ is the label which identifies the category to which $\vec{x}^{(1)}$ belongs, via one-hot encoding \cite{aggarwal2018neural}. \n\n\begin{figure}[!ht]\n\t\centering\n\t\includegraphics[width = 0.4\textwidth]{Figure\/Lin_transf.png}\n\t\caption{A schematic outline of the structure of the transfer matrix ${\mathbf A}^{(k)}$, bridging layer $k$ to layer $k+1$. The action of ${\mathbf A}^{(k)}$ on $\vec{x}^{(k)}$ is also graphically illustrated.}\n\t\label{f:Lin_transf}\n\end{figure}\n\begin{figure}[!ht]\n\t\centering\n\t\includegraphics[width = 0.4\textwidth]{Figure\/base.png}\n\t\caption{The structure of the matrix ${\mathbf \Phi}^{(k)}$ is schematically displayed.}\n\t\label{f:base}\n\end{figure}\n\nThe loss function can thus be minimized by acting on the spectral parameters, i.e. the ensemble made of non-trivial eigenvalues and\/or the associated eigendirections.
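The structure of ${\mathbf \Phi}^{(k)}$ and the quoted inverse identity can be checked numerically. The sketch below, with small illustrative layer widths (our assumption), builds ${\mathbf \Phi}^{(k)}$ and ${\mathbf \Lambda}^{(k)}$ for $k=1$ and verifies that $\left({\mathbf \Phi}^{(k)}\right)^{-1}=2\,\id{}- {\mathbf \Phi}^{(k)}$, a consequence of the sub-diagonal block being nilpotent:

```python
import numpy as np

# Illustrative layer widths (assumptions, not the paper's experiments).
N_sizes = [4, 3, 2]                  # N_1, N_2, N_3
N, k = sum(N_sizes), 1               # build A^{(k)} for k = 1 (layers 1 -> 2)
rng = np.random.default_rng(0)

# Phi^{(k)}: the identity plus a sub-diagonal N_{k+1} x N_k block phi^{(k)}.
m0 = sum(N_sizes[:k-1])              # first index of layer k in the N-vector
l0 = m0 + N_sizes[k-1]               # first index of layer k+1
phi = rng.standard_normal((N_sizes[k], N_sizes[k-1]))
Phi = np.eye(N)
Phi[l0:l0+N_sizes[k], m0:m0+N_sizes[k-1]] = phi

# The phi-block squares to zero (the two index ranges are disjoint),
# hence Phi^{-1} = 2*Id - Phi as claimed in the text.
assert np.allclose(np.linalg.inv(Phi), 2*np.eye(N) - Phi)

# Lambda^{(k)}: ones outside the two active layers, trainable values inside.
lam = np.ones(N)
lam[m0:l0+N_sizes[k]] = rng.standard_normal(N_sizes[k-1] + N_sizes[k])
A = Phi @ np.diag(lam) @ (2*np.eye(N) - Phi)
```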
A straightforward calculation, carried out in the annexed supplementary information, allows one to derive a closed analytical expression for $w^{(k)}_{ij}$, the weights of the edges linking nodes $i$ (belonging to layer $k+1$) and $j$ (sitting on layer $k$) in direct space, as a function of the underlying spectral quantities. In formulae, one gets: \n\begin{equation} \label{w}\n\tw^{(k)}_{ij} = \left( \lambda^{(k)}_{m(j)}-\lambda^{(k)}_{l(i)} \right) {\Phi}^{(k)}_{l(i), m(j)}\n\end{equation}\nwhere $l(i)=\sum_{s=1}^{k} N_s +i$ and $m(j)=\sum_{s=1}^{k-1} N_s +j$, with $i \in \left(1, ..., N_{k+1} \right)$ and $j \in \left(1, ..., N_k \right)$. In the above expression, the $\lambda^{(k)}_{m(j)}$ stand for the first $N_k$ eigenvalues of ${\mathbf \Lambda}^{(k)}$. The remaining $N_{k+1}$ eigenvalues are labelled $\lambda^{(k)}_{l(i)}$.\n\nTo aid comprehension, denote by $x^{(k)}_j$ the activity on node $j$ of layer $k$. Then, the activity $x^{(k+1)}_i$ on node $i$ of layer $k+1$ reads:\n\tiny\n\begin{equation}\n\tx^{(k+1)}_i = \sum_{j=1}^{N_k} \left( \lambda^{(k)}_{m(j)} {\Phi}^{(k)}_{l(i), m(j)} x_j^{(k)} \right) - \lambda^{(k)}_{l(i)} \sum_{j=1}^{N_k} \left( {\Phi}^{(k)}_{l(i), m(j)} x_j^{(k)} \right)\n\end{equation} \n\normalsize\n\nThe eigenvalues $\lambda^{(k)}_{m(j)}$ modulate the activity at the origin nodes, while the $\lambda^{(k)}_{l(i)}$ set the excitability of the receiver nodes, weighting the network activity in their immediate neighbourhood. As remarked in \cite{chicchi2021training}, this can be rationalized as the artificial analogue of {\it homeostatic plasticity}, the strategy used by living neurons to maintain the synaptic basis for learning, respiration, and locomotion \cite{surmeier2004mechanism}. \n\nStarting from this background, we shall hereafter operate within a simplified setting obtained by imposing $\lambda^{(k)}_{m(j)} = 0$. This implies that the $\lambda^{(k)}_{l(i)}$ are the sole eigenvalues to be actively involved in the training.
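Expression (\ref{w}) can be cross-checked numerically against a direct construction of ${\mathbf A}^{(k)}$. The sketch below (illustrative layer widths, our assumption) verifies that the sub-diagonal block of ${\mathbf A}^{(k)}={\mathbf \Phi}^{(k)} {\mathbf \Lambda}^{(k)} \left(2\,\id{}- {\mathbf \Phi}^{(k)}\right)$ reproduces the closed-form weights:

```python
import numpy as np

# Small illustrative layer widths (assumptions): N_1, N_2, N_3.
N_sizes = [4, 3, 2]
N, k = sum(N_sizes), 1               # transfer operator A^{(1)}
m0 = sum(N_sizes[:k-1])              # first index m(j) of layer k
l0 = m0 + N_sizes[k-1]               # first index l(i) of layer k+1
rng = np.random.default_rng(1)

phi = rng.standard_normal((N_sizes[k], N_sizes[k-1]))
Phi = np.eye(N)
Phi[l0:l0+N_sizes[k], m0:m0+N_sizes[k-1]] = phi

lam = np.ones(N)
lam[m0:l0+N_sizes[k]] = rng.standard_normal(N_sizes[k-1] + N_sizes[k])

A = Phi @ np.diag(lam) @ (2*np.eye(N) - Phi)

# w_ij = (lambda_{m(j)} - lambda_{l(i)}) * Phi_{l(i), m(j)}, formula (2).
w = (lam[m0:l0][None, :] - lam[l0:l0+N_sizes[k]][:, None]) * phi
assert np.allclose(A[l0:l0+N_sizes[k], m0:m0+N_sizes[k-1]], w)
```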
As we shall prove, these latter eigenvalues provide an effective criterion to rank a posteriori, i.e. upon training being completed, the relative importance of the nodes belonging to the examined network. Stated differently, nodes can be sorted according to their relevance in carrying out the assigned task. This motivates us to introduce, and thoroughly test, an effective spectral pruning strategy which seeks to remove the nodes deemed unessential, while preserving the overall network classification score. The Methods Section is entirely devoted to explaining in detail the proposed strategy, which we shall contextualize with reference to other existing methodologies. \n\n\section{Conventional Pruning Techniques}\nGenerally speaking, it is possible to group the various approaches to network compression into five different categories: Weights Sharing, Network Pruning, Knowledge Distillation, Matrix Decomposition and Quantization \cite{neill2020overview, cheng2017survey}.\n\nWeights Sharing defines one of the simplest strategies to reduce the number of parameters, while allowing for robust feature detection. The key idea is to have a shared set of model parameters between layers, a choice which results in effective model compression. An immediate example of this methodology is provided by convolutional neural networks \cite{lecun1989backpropagation}. A refined approach is proposed by Bai et al. \cite{bai2019deep}, where a virtual infinitely deep neural network is considered. Further, in Zhang et al. \cite{zhang2018learning} an $\ell_{1}$ group regularizer is exploited to induce sparsity and, simultaneously, identify the subset of weights which can share the same features.\n\n\nNetwork Pruning is arguably one of the most common techniques to compress neural networks: in a nutshell, it aims at removing a set of weights according to a certain criterion (magnitude, importance, etc.). Chang et al.
\cite{chang2018prune} proposed an iterative pruning algorithm that exploits a continuously differentiable version of the $\ell_{\frac{1}{2}}$ norm as a penalty term. Molchanov et al. \cite{molchanov2016pruning} focused on pruning convolutional filters, so as to achieve better inference performance (with a modest impact on the recorded accuracy) in a transfer learning scenario. Starting from a network fine-tuned on the target task, they proposed an iterative algorithm made up of three main parts: (i) assessing the importance of each convolutional filter on the final performance via a Taylor expansion, (ii) removing the less informative filters and (iii) re-training the remaining filters on the target task. Inspired by the pioneering work in \cite{frankle2018lottery}, Pau de Jorge et al. \cite{de2020progressive} showed that pruning at initialization leads to significant performance degradation beyond a certain pruning threshold. In order to overcome this limitation, they proposed two different methods that enable an initially trimmed weight to be reconsidered during the subsequent training stages.\n\nKnowledge Distillation is yet another technique, first proposed by Hinton et al. \cite{hinton2015distilling}. In its simplest version, Knowledge Distillation is implemented by combining two objective functions. The first accounts for the discrepancy between the predicted and true labels. The second is the cross-entropy between the output produced by the examined network and that obtained by running a (generally more powerful) trained model. In \cite{polino2018model}, Polino et al. proposed two approaches to combine distillation and quantization (see below): the first method uses distillation during the training of the so-called student network under a fixed quantization scheme, while the second exploits a network (termed the teacher network) to directly optimize the quantization. Mirzadeh et al.
\cite{mirzadeh2020improved} analyzed the regime in which knowledge distillation can be properly leveraged. They discovered that the representation power gap between the two networks (teacher and student) should be bounded for the method to yield beneficial effects. To resolve this problem, they inserted an intermediate network (the assistant), which sits in between the teacher and the student, when their associated gap is too large. \n\nMatrix Decomposition is a technique that removes redundancies in the parameters by means of a tensor\/matrix decomposition. Masana et al. \cite{masana2017domain} proposed a matrix decomposition method for the transfer learning scenario. They showed that decomposing a matrix while taking into account the activations outperforms approaches that solely rely on the weights. In \cite{novikov2015tensorizing}, Novikov et al. proposed to replace the dense layers with their Tensor-Train representation \cite{oseledets2011tensor}. Yu et al. \cite{yu2017compressing} introduced a unified framework, integrating the low-rank and sparse decomposition of weight matrices with the feature map reconstructions.\n\nQuantization, as also mentioned above, aims at lowering the number of bits used to represent any given parameter of the network. Stock et al. \cite{stock2019and} defined an algorithm that quantizes the model by minimizing the reconstruction error for inputs sampled from the training set distribution. The same authors also claimed that their proposed method is particularly suited for compressing residual network architectures and that the compressed model proves very efficient when run on a CPU. In Banner et al. \cite{banner2018post}, a practical 4-bit post-training quantization approach was introduced and tested.\nMoreover, a method to reduce network complexity based on node pruning was presented by He et al. in \cite{He2014}.
Once the network has been trained, nodes are classified by means of a node importance function and then removed or retained depending on their score. The authors proposed three different node ranking functions: entropy, output-weights norm (onorm) and input-weights norm (inorm). In particular, the input-weights norm function is defined as the sum of the absolute values of the incoming connection weights. As we will see, the latter defines the benchmark model that we shall employ to challenge the performance of the trimming strategy proposed here. Finally, it is worth mentioning the Conditional Computation methods \cite{wang2020deep, Wang_2018_ECCV, bengio2015conditional}: the aim is to dynamically skip part of the network, according to the provided input, so as to reduce the computational burden.\n\nSumming up, the pruning techniques reviewed above primarily pursue the goal of enforcing sparsification by cutting links from the trained neural network.\nIn contrast, the idea of our method is to identify a posteriori the nodes of the trained network which prove unessential for a proper functioning of the device, and to cut them out of the ensemble of active units. This yields a more compact neural network, in terms of composing neurons, with unaltered classification performance. The method relies on spectral learning \cite{spec_learn,chicchi2021training} and exploits the fact that the eigenvalues are credible parameters for gauging the importance of a given node among those composing the destination layer. In short, our aim is to make the network more compact by removing nodes classified as unimportant, according to a suitable spectral rating.\n\n\n\section{Results}\nIn order to assess the effectiveness of the eigenvalues as a marker of a node's importance (and hence as a potential target for a cogent pruning procedure), we will consider a fully connected feed-forward architecture.
Applications of the explored methods will be reported for $\ell=3$ and $\ell>3$ configurations. The nodes that compose the hidden layers are the target of the implemented pruning strategies. As we shall prove, it is possible to get rid of the vast majority of nodes without a sizeable decrease in the test accuracy, provided the filter, either in its pre- or post-training version, relies on the eigenvalue ranking.\n\nFor our tests, we used three different datasets of images. The first is the renowned MNIST database of handwritten digits \cite{lecun1998mnist}, the second is Fashion-MNIST (F-MNIST) \cite{xiao2017fashion} (an image dataset of Zalando's items) and the last one is CIFAR-10 \cite{krizhevsky2009learning}. In the main text we report our findings for Fashion-MNIST. Analogous investigations carried out for MNIST and CIFAR-10 are reported as supplementary information. Further, different activation functions have been employed to evaluate the performance of the methods. In the main body of the paper, we show the results obtained with the ELU. The conclusions obtained when operating with the ReLU and $\tanh$ are discussed in the annexed supplementary material. In the following we report, in two separate sub-sections, the results pertaining to either the single or multiple hidden layer settings.\n\n\subsection{Single hidden layer ($\ell=3$)}\nIn Figure \ref*{f:f-mnist ELU} the performance of the inspected methods is compared for the minimal case study of a three-layer network. The intermediate layer, the sole hidden layer in this configuration, is set to $N_2=500$ neurons. The accuracies of the different methods are compared upon cutting at different percentiles, following the strategies discussed in the Methods. The orange profile is the benchmark model: the neural network is trained in direct space, by adjusting the weights of each individual inter-node connection.
Then, the absolute value of the incoming connectivity is computed and used as an importance rank of a node's influence on the test accuracy. Such a model was presented and discussed by He et al. in \cite{He2014}. Following this assessment, nodes are progressively removed from the trained network, depending on the imposed percentile, and the ability of the trimmed network to perform the sought classification (with no further training) is tested. The same procedure is repeated $5$ times and the mean value of the accuracy is plotted. The shaded region stands for the semi-dispersion of the measurements. A significant drop of the network performance is found when removing a fraction of nodes larger than 60 \% from the second layer. \n\nThe blue curve in Figure \ref*{f:f-mnist ELU} refers instead to the post-processing spectral pruning based on the eigenvalues, identified as method (ii) in the Methods Section. More precisely, the three-layer network is trained by simultaneously acting on the eigenvectors and the eigenvalues of the associated transfer operators, as illustrated above. The accuracy displayed by the network trained according to this procedure is virtually identical to that reported when the learning is carried out in direct space, as one can clearly appreciate by eye inspection of Figure \ref*{f:f-mnist ELU}. Removing the nodes based on the magnitude of their associated eigenvalues allows one to keep stable (practically unchanged) classification performance for an intermediate layer compressed by about 70\% of its original size. In this case the spectral pruning is operated as a post-processing filter, meaning that the neural network is only trained once, before the nodes' removal eventually takes place.\n\nAt variance, the green curve in Figure \ref*{f:f-mnist ELU} is obtained following method (i) from the Methods Section, which can be conceptualized as a pre-training manipulation.
Based on this strategy, we first train the network on the set of tunable eigenvalues, then reduce its size by performing a compression that reflects the ranking of the optimized eigenvalues, and finally train the obtained network again by acting solely on the ensemble of residual eigenvectors. The results reported in Figure \ref*{f:f-mnist ELU} indicate that, following this procedure, it is indeed possible to attain remarkably compact networks with unaltered classification abilities. Moreover, the total number of parameters that need to be tuned following this latter procedure is considerably smaller than that on which the other methods rely. This is due to the fact that only the random directions (the eigenvectors) that prove relevant for discrimination purposes (as signalled by the magnitude of their associated eigenvalues) undergo the second step of the optimization. This method can also be seen as akin in spirit to \cite{frankle2018lottery}. As a matter of fact, the initial training of the eigenvalues uncovers a sub-network that, once trained, achieves performance comparable to the original model. More specifically, the uncovered network can be seen as a \textit{winning ticket} \cite{frankle2018lottery}, that is, a sub-network with an initialization particularly suitable for carrying out a successful training.\n\nNext, we shall generalize the analysis to a multi-layer setting ($\ell>3$), reaching analogous conclusions. \n\n\begin{figure}\n\t\centering\n\t\includegraphics[width = 0.45\textwidth]{Figure\/Fashion-MNIST\/elu.png}\n\t\caption{Accuracy on the Fashion-MNIST database with respect to the percentage of trimmed nodes (from the hidden layer), in a three-layer feedforward architecture. Here, $N_2=500$, while $N_1=784$ and $N_3=10$, as reflecting the structural characteristics of the data. In orange, the results obtained by pruning the network trained in direct space, based on the absolute value of the incoming connectivity (see main text).
In blue, the results obtained when filtering the nodes after a full spectral training (post-training). The curve in green reports the accuracy of the trimmed networks generated upon application of the pre-training filter. Symbols stand for the averaged accuracy computed over 5 independent realizations. The shaded region reflects the associated semi-dispersion.}\n\t\label{f:f-mnist ELU}\n\end{figure}\n\n\subsection{Multiple hidden layers ($\ell>3$)}\nQuite remarkably, the results achieved in the simplified context of a single hidden layer network \nalso apply within the framework of a multi-layer setting.\\\nTo prove this statement we consider an $\ell=5$ feedforward neural network with ELU activation. Here, $N_1=784$ and $N_5=10$, as reflecting the specificity of the employed dataset. \nThe performed tests follow closely those reported above, with the notable difference that now the ranking of the eigenvalues is operated on the pool of $N_2 + N_3 + N_4$ neurons that compose the hidden bulk of the trained network. In other words, the selection of the neurons to be removed follows a global assessment, i.e. a scan across the full set of nodes, without any specific reference to an a priori chosen layer. \n\nIn Figure \ref{f:multi f-mnist ELU} the results of the analysis are reported, assuming $N_2=N_3=N_4=500$. The conclusions are perfectly in line with those reported above for the single-layer setting, except that now the improvement of the spectral pruning over the benchmark reference is even more substantial. The orange curve drops at the 20th percentile, while the blue one begins its descent at about 60 \%. The green curve, relative to the sequential two-step training, stays stably horizontal up to about 90 \%.
\n\n\\begin{figure}[!ht]\n\t\\centering\n\t\\includegraphics[width = 0.45\\textwidth]{Figure\/Multilayer\/Fashion-MNIST\/elu.png}\n\t\\caption{Accuracy on the Fashion-MNIST database with respect to the percentage of pruned nodes (from the hidden layers), in a five layers feedforward architecture. Here, $N_2=N_3=N_4=500$, while $N_1=784$ and $N_5=10$, as reflecting the structural characteristics of the data. Symbols and colors are chosen as in Figure \\ref*{f:f-mnist ELU}.}\n\t\\label{f:multi f-mnist ELU}\n\\end{figure}\n\n\\section{Conclusions}\nIn this paper we have discussed a relevant byproduct of a spectral approach to the learning of deep neural networks. The eigenvalues of the transfer operator that connects adjacent stacks in a multi-layered architecture provide an effective measure of the nodes importance in handling the information processing. By exploiting this fact we have introduced and successfully tested two distinct procedures to yield compact networks --in terms of number of computing neurons-- which perform equally well than their untrimmed original homologous. \nOne procedure (referred as (ii) in the description) is acknowledged as a post processing method, in that it acts on a multi-layered network downstream of training. The other (referred as (i)) is based on a sequence of two nested operations. First the eigenvalues are solely trained. After the spectral pruning took place, a second step in the optimization path seeks to adjust the entries of the eigenvectors that populate a trimmed space of reduced dimensionality. The total number of trained parameters is small as compared to that involved when the pruning acts as a post processing filter. Despite that, the two steps pre-processing protocol yields compact devices which outperform those obtained with a single post-processing removal of the unessential nodes. 
\n\nAs a benchmark model, for a neural network trained in direct space, we decided to rank the nodes' importance based on the absolute value of the incoming connectivity. The latter appeared to be the obvious choice when aiming to gauge the local information flow in the space of the nodes, see also \cite{He2014}. In principle, one could consider diagonalizing the transfer operators obtained after a standard training in direct space and make use of the computed eigenvalues to sort the nodes' relevance a posteriori. This is however not possible, as the transfer operator that links a generic layer $k$ to its adjacent counterpart $k+1$, as obtained from training performed in direct space, is populated only below the diagonal, with all diagonal entries identically equal to zero. All associated eigenvalues are hence zero, and they provide no information on the relative importance of the nodes of layer $k+1$, at variance with what happens when the learning is carried out in the reciprocal domain.\n\nSumming up, by reformulating the training of neural networks in spectral space, we have identified a set of sensible scalars, the eigenvalues of suitable operators, that unequivocally correlate with the influence of the nodes within the collection. This observation translates into straightforward procedures for generating efficient networks that exploit a reduced number of computing units. Tests performed on different settings corroborate these conclusions. As an interesting extension, we show in the supplementary information that a suitable regularization of the eigenvalues yields a general improvement of the proposed method.\n\n\section{Methods}\label{method}\nWe detail here the spectral procedure to make a trained network smaller, while preserving its ability to perform classification. \n\nTo introduce the main idea of the proposed method, we refer to formula (\ref{w}) and assume the setting where $\lambda^{(k)}_{m(j)}=0$.
The information travelling from layer $k$ to layer $k+1$ is hence processed as follows: first, the activity on the departure node $j$ is modulated by a multiplicative scaling factor ${\Phi}^{(k)}_{l(i), m(j)}$, specifically linked to the selected $(i,j)$ pair. Then, all incoming (and rescaled) activities reaching the destination node $i$ are summed together and further weighted via the scalar quantity $\lambda^{(k)}_{l(i)}$. This latter eigenvalue, downstream of the training, can hence be conceived as a distinguishing feature of node $i$ of layer $k+1$. Assume for the moment that the ${\Phi}^{(k)}_{l(i), m(j)}$ are drawn from a given distribution and stay put during optimization. Then, every individual neuron bound to layer $k+1$ is statistically equivalent (in terms of incoming weights) to all other nodes belonging to the very same layer. The eigenvalues $\lambda^{(k)}_{l(i)}$ therefore gauge the relative importance of the nodes within a given stack, as reflected through the (randomly generated, though statistically comparable) web of local inter-layer connections. Large values of $|\lambda^{(k)}_{l(i)}|$ suggest that node $i$ on layer $k+1$ plays a central role in the functioning of the neural network, as opposed to the setting where $|\lambda^{(k)}_{l(i)}|$ is found to be small. Stated differently, the subset of trained eigenvalues provides a viable tool for ranking the nodes according to their degree of importance. As such, they can be used as reference labels to make decisions on the nodes that should be retained in a compressed analogue of the trained neural network, with unaltered classification performance. As empirically shown in the Results section with reference to a variegated set of applications, the sorting of the nodes based on the optimized eigenvalues turns out to be effective also when the eigenvectors are trained simultaneously, thus breaking, at least in principle, the statistical invariance across nodes.
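The core of the spectral filter, ranking nodes by $|\lambda^{(k)}_{l(i)}|$ and trimming a chosen percentile, reduces to a few lines. This is a minimal sketch under our own naming conventions (the function `spectral_prune` and the layer width are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_prune(lam_l, percentile):
    """Rank the nodes of a destination layer by |lambda^{(k)}_{l(i)}| and
    return the indices of the nodes retained after trimming the given
    percentile of least important (smallest |lambda|) nodes."""
    threshold = np.percentile(np.abs(lam_l), percentile)
    return np.where(np.abs(lam_l) >= threshold)[0]

lam_l = rng.standard_normal(500)   # e.g. trained eigenvalues of a 500-node layer
keep = spectral_prune(lam_l, 70)   # trim ~70% of the layer
```

In the multi-layer setting the same ranking would simply be applied to the concatenated eigenvalues of all hidden layers, so that the cut operates globally rather than layer by layer.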
\n\nAs we will clarify, the latter setting translates into a post-training spectral pruning strategy, whereas the former materializes into a rather efficient pre-training procedure. The non-linear activation function employed in the training scheme leaves a non-trivial imprint, which has to be critically assessed. \n\nMore specifically, in carrying out the numerical experiments reported here we considered two distinct settings, as listed below:\n\n\begin{itemize}\n\t\item{(i)} As a first step, we will begin by considering a deep neural network made of $N$ neurons organized in $\ell$ layers. The network will be initially trained by solely leveraging the set of tunable eigenvalues. Then, we will proceed by progressively removing the neurons depending on their associated eigenvalues (in the spirit discussed above). The trimmed network, composed of a total of $M