diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzznrum" "b/data_all_eng_slimpj/shuffled/split2/finalzznrum" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzznrum" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{introduction}\n\n\nIn papers of Allcock, Carlson, Toledo \\cite{ACT1, ACT2}, resp. of Looijenga, Swierstra \\cite{LS1, LS2}, resp. of \nKondo \\cite{K1, K2}, \"hidden\" period maps are constructed in certain cases. The target spaces of these maps \nare certain {\\it arithmetic quotients of complex unit balls}. The basic \nobservation, which is the starting point of this paper, is that these arithmetic quotients can be interpreted as the complex points of certain {\\it moduli spaces of abelian varieties \nof Picard type}, of the kind considered in our paper \\cite{KR2}. Consequently, the purpose in this paper is to interpret these hidden period maps in moduli-theoretic terms. \nThe pay-off of this exercise is that we can raise and partially answer some {\\it descent problems} which seem natural from our view point, and which are related to a similar \ndescent problem addressed by Deligne in \\cite{De1} in his theory of {\\it complete intersections of Hodge level one}.\n\nWhy do we speak of \"hidden\", or \"occult\" period maps in this context? This is done in order to make the distinction \nwith the usual period maps which associate to a family of smooth projective complex varieties (over some \nbase scheme $S$) the (polarized) Hodge structures of its fibers, which \nthen induces a map from $S$ to a quotient by a discrete group of a period domain. Let us recall three examples of classical period maps:\n\n(1) {\\bf Case of quartic surfaces.} In this case the period map is a holomorphic map of orbifolds \n\\begin{equation*}\n\\varphi: {\\it Quartics}_{2, {\\mathbb C}}^\\circ\\to \\big[\\Gamma\\backslash V(2, 19)\\big].\n\\end{equation*}\n\\noindent{\\rm Here ${\\it Quartics}_{2, {\\mathbb C}}^\\circ$ denotes the stack parametrizing smooth quartic surfaces up to projective \nequivalence,\n\\begin{equation*}\n\\text{\\it Quartics}_{2, {\\mathbb C}}^\\circ = \\big[{\\rm PGL}_4\\backslash \\mathbb P{\\text{\\rm Sym}}^4 ({\\mathbb C}^4)^\\circ \\big]\n\\end{equation*}\n{\\rm (stack quotient in the orbifold sense).} The target space is the orbifold quotient of the space of \noriented positive $2$-planes in a quadratic space $V$ of signature $(2, 19)$ by the automorphism group $\\Gamma$ of a lattice in $V$. \n\n(2) {\\bf Case of cubic threefolds.} In this case the period map is a holomorphic map of orbifolds \n\\begin{equation*}\n\\varphi: {\\it Cubics}_{3, {\\mathbb C}}^\\circ\\to \\big[\\Gamma\\backslash\\mathfrak H_5\\big].\n\\end{equation*}\n\\noindent{\\rm Here ${\\it Cubics}_{3, {\\mathbb C}}^\\circ$ denotes the stack parametrizing smooth cubic threefolds up to projective \nequivalence.\nThe target space is the orbifold quotient of the Siegel upper half space of genus $5$ by the Siegel group $\\Gamma={\\rm Sp}_5(\\mathbb Z)$. 
\n\n(3) {\\bf Case of cubic fourfolds.} In this case the period map is a holomorphic map of orbifolds \n\\begin{equation*}\n\\varphi: {\\it Cubics}_{4, {\\mathbb C}}^\\circ\\to \\big[\\Gamma\\backslash V(2, 20)\\big].\n\\end{equation*}\n\\noindent{\\rm Here ${\\it Cubics}_{4, {\\mathbb C}}^\\circ$ denotes the stack parametrizing smooth cubic fourfolds up to projective \nequivalence.\nThe target space is the orbifold quotient of the space of oriented positive $2$-planes in a quadratic space \n$V$ of signature $(2, 20)$ by the automorphism group $\\Gamma$ of a lattice in $V$. \n\nIn the first case, by the Torelli theorem of Piatetskii-Shapiro\/Shafarevich, the induced map $|\\varphi|$ on coarse moduli spaces is an open embedding. In the second case, \nby the Torelli theorem of Clemens\/Griffiths, the map $|\\varphi|$ is a locally closed embedding (it is not an open embedding since the \nsource of $\\varphi$ has dimension $10$, and the target has dimension $15$). In the third case, by the Torelli \ntheorem of Voisin, the map $|\\varphi|$ is an open embedding.\n\n\n\n\nThe construction of the occult period maps is quite different, although it does use the classical period maps indirectly. \nFor instance, the construction of Allcock, Carlson, Toledo attaches a certain Hodge structure to any smooth cubic \nsurface which allows one to distinguish between non-isomorphic ones, even though the natural Hodge structures on the \ncohomology in the middle dimension of all cubic surfaces are isomorphic. \nAlso, in one dimension higher, their construction allows them to define an {\\it open embedding} of the space of cubic threefolds into \nan arithmetic quotient of the complex unit ball of dimension $10$. \n\n\nOur second aim in this paper is to identify the complements of the images of occult period maps \nwith {\\it special divisors} considered in \\cite{KR2}. \n\nThe lay-out of the paper is as follows. In sections \\ref{subsectionpic}, \\ref{cu} and \\ref{sc} we recall \nsome of the theory and notation of \\cite{KR2}. In sections \\ref{cubicsurfaces}, \\ref{cubicthreefolds}, \n \\ref{curvesgenus3}, and \\ref{curvesgenus4}, respectively, \n we explain in turn the case of cubic surfaces, cubic threefolds, curves of genus $3$, and curves of genus $4$. \n In section \\ref{descent}, we explain the descent problem, and solve it in zero characteristic. In the final section, \nwe make a few supplementary remarks.\n\nWe stress that the proofs of our statements are all contained in the papers mentioned above, and that our work only consists in interpreting these results.\n\nWe thank B.~van Geemen, D.~Huybrechts and E.~Looijenga for very helpful discussions. \nWe also thank J.~Achter for keeping us informed about his progress in proving our conjecture in section \\ref{descent} in some cases. Finally, we thank the referee who alerted us to a mistake concerning the stacks aspect of period maps. \n\n\n\n\n\n\n\n\\section{Moduli spaces of Picard type}\\label{subsectionpic} \nLet ${\\text{\\cute k}}={\\mathbb Q}(\\sqrt{\\Delta})$ be an imaginary-quadratic field with discriminant $\\Delta$, ring of integers $O_{\\smallkay}$, and a \nfixed complex embedding. 
We write $\\aa\\mapsto{\\aa^\\sigma}$ for the\nnon-trivial automorphism of $O_{\\smallkay}$.\n\nFor integers $n\\geq 1$ and $r$, $0\\leq r\\leq n$, we consider the groupoid $\\Cal M = \\Cal M (n-r, r) = \\Cal M ({\\text{\\cute k}}; n-r, r)$ fibered over $({\\rm Sch} \/ O_{\\smallkay})$ which associates to an \n$O_{\\smallkay}$-scheme $S$ the groupoid of triples $(A, \\iota, \\lambda)$. Here $A$ is an abelian scheme over $S$, \n$\\lambda$ is a principal polarization, and $\\iota : O_{\\smallkay}\\to\\text{\\rm End} (A)$ is a homomorphism such that\n\\begin{equation*}\n\\iota (\\aa)^\\ast = \\iota ({\\aa^\\sigma})\\ ,\n\\end{equation*}\nfor the Rosati involution $\\ast$ corresponding to $\\lambda$. In addition, the following signature condition is imposed\n\\begin{equation}\n{\\rm char} (T, \\iota (\\aa)\\mid\\text{\\rm Lie}\\, A) = (T-i(\\aa))^{n-r}\\cdot (T-i({\\aa^\\sigma}))^r\\ ,\\quad\\forall\\,\\aa\\inO_{\\smallkay}\\ ,\n\\end{equation}\nwhere $i:O_{\\smallkay}\\rightarrow \\mathcal O_S$ is the structure map. \n\nWe will mostly consider the complex fiber $\\Cal M_{\\mathbb C} = \\Cal M\\times_{\\text{\\rm Spec}\\, O_{\\smallkay}}\\text{\\rm Spec}\\, {\\mathbb C}$ of $\\Cal M$. In any\ncase, $\\Cal M$ is a Deligne-Mumford stack and $\\Cal M_{\\mathbb C}$ is smooth. We denote by $|\\Cal M_{\\mathbb C}|$ the coarse moduli scheme. \n\nWe will also have to consider the following variant, defined by modifying the requirement above that \nthe polarization $\\lambda$ be principal. Let $d>1$ be a square free divisor of $\\vert \\Delta\\vert$. \nThen $\\Cal M ({\\text{\\cute k}}, d; n-r, r)^*=\\Cal M ({\\text{\\cute k}}; n-r, r)^*$ parametrizes triples $(A, \\iota, \\lambda)$ as in the case \nof $\\Cal M ({\\text{\\cute k}}; n-r, r)$, except that we impose the following condition on $\\lambda$. We require first of all \nthat $ \\ker\\lambda\\subset A[d]$, so that $O_{\\smallkay}\/(d)$ acts on $\\ker\\lambda$. In addition, we require \nthat this action factor through the quotient ring $\\prod_{p\\mid d}{\\mathbb F}_p$ of $O_{\\smallkay}\/(d)$, and that $\\lambda$ \nbe of degree $d^{n-1}$, if $n$ is odd, resp.\\ $d^{n-2}$, if $n$ is even. In the notation introduced in section 13 of \\cite{KR2}, we have \n$\\Cal M ({\\text{\\cute k}}, d; n-r, r)^*=\\Cal M ({\\text{\\cute k}}, {\\bf t}; n-r, r)^{*, {\\rm naive}}$, where the function ${\\bf t}$ on the set of \nprimes $p$ with $p\\mid \\Delta$ assigns to $p$ the integer $2[(n-1)\/2]$ if $p\\mid d$, and $0$ \nif $p\\nmid d$. Note that if ${\\text{\\cute k}}$ is the Gaussian field ${\\text{\\cute k}}={\\mathbb Q}(\\sqrt{-1})$, then necessarily $d=2$; if ${\\text{\\cute k}}$ is the Eisenstein field ${\\text{\\cute k}}={\\mathbb Q}(\\sqrt{-3})$, then $d=3$. We denote by $|\\Cal M_{\\mathbb C}^*|$ the corresponding coarse moduli scheme. \n\n\\section{Complex uniformization}\\label{cu} \nLet us recall from \\cite{KR2} the complex uniformization of $\\Cal M ({\\text{\\cute k}}; n-1, 1)({\\mathbb C})$ in the special case that ${\\text{\\cute k}}$ has class number one. \n For $n>2$, let $(V, (\\ ,\\ ))$ be a hermitian vector space over ${\\text{\\cute k}}$ of signature\n$(n-1, 1)$ which contains a self-dual $O_{\\smallkay}$-lattice $L$. By the class number hypothesis, $V$ is unique up to isomorphism. \nWhen $n$ is odd, or when $n$ is even and $\\Delta$ is odd, the lattice $L$ is also unique up to isomorphism. We assume \nthat one of these conditions is satisfied. 
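To fix ideas (this explicit model is not needed in what follows), under these hypotheses one may take for $(V, (\ ,\ ))$ the standard hermitian space
\begin{equation*}
V = {\text{\cute k}}^n\ , \qquad (x, y) = x_1 y_1^\sigma + \dots + x_{n-1} y_{n-1}^\sigma - x_n y_n^\sigma\ ,
\end{equation*}
which has signature $(n-1, 1)$ and contains the self-dual $O_{\smallkay}$-lattice $L = O_{\smallkay}^n$.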
\n Let $\\mathcal D$ be the space of negative lines in the\n${\\mathbb C}$-vector space $(V_{\\mathbb R}, \\mathbb I_0)$, where the complex structure $\\mathbb I_0$ is defined in terms of the discriminant \nof ${\\text{\\cute k}}$, as $\\mathbb I_0 = \\sqrt{\\Delta}\/{|\\sqrt{\\Delta}|}$.\nLet $\\Gamma$ be the isometry group of $L$. \nThen the complex uniformization is the isomorphism of orbifolds, \n\\begin{equation*}\n\\Cal M ({\\text{\\cute k}}; n-1, 1)({\\mathbb C})\\simeq [\\Gamma\\backslash \\mathcal D] .\n\\end{equation*} \nThere is an obvious $*$-variant of this uniformization, which gives \n\\begin{equation*}\n\\Cal M ({\\text{\\cute k}}; n-1, 1)^*({\\mathbb C})\\simeq [\\Gamma^*\\backslash \\mathcal D] ,\n\\end{equation*} \nwhere $\\Gamma^*$ is the automorphism group of the ({\\it parahoric}) lattice $L^*$ corresponding to the $*$-moduli problem. \nThe lattice $L^*$ is uniquely determined up to isomorphism by the condition \nthat there is a chain of inclusions of $O_{\\smallkay}$-lattices \n$L^*\\subset (L^*)^\\vee\\subset (\\sqrt{d})^{-1}L^*$, with quotient $(L^*)^\\vee \/L^*$ of dimension $n-1$ if $n$ is odd and $n-2$ if $n$ is even, when localized \nat any prime ideal $\\mathfrak p$ dividing $d$. Here, for an $O_{\\smallkay}$-lattice $M$ in $V$, \nwe write \n$$M^\\vee = \\{ \\ x\\in V\\ \\mid \\ h(x,L) \\subset O_{\\smallkay}\\ \\}$$\nfor the dual lattice. \n\n\n\n\\section{ Special cycles (KM-cycles)}\\label{sc}\n We continue to assume that the class number of ${\\text{\\cute k}}$ is one, and recall from \\cite{KR2} the definition of special cycles over \n ${\\mathbb C}$. Let $(E, \\iota_0)$ be an elliptic curve with $CM$ by $O_{\\smallkay}$ over ${\\mathbb C}$, which we fix in what follows. \n Note that, due to our class number hypothesis, $(E, \\iota_0)$\nis unique up to isomorphism. We denote its canonical principal polarization by $\\lambda_0$. \nFor any ${\\mathbb C}$-scheme $S$, and $(A, \\iota, \\lambda)\\in\\Cal M ({\\text{\\cute k}}; n-1, 1) (S)$, let\n\\begin{equation*}\nV' (A, E) = \\text{\\rm Hom}_{O_{\\smallkay}} (E_S, A)\\ ,\n\\end{equation*}\nwhere $E_S = E\\times_{\\mathbb C} S$ is the constant elliptic scheme over $S$ defined by $E$. \nThen $V' (A, E)$ is a projective $O_{\\smallkay}$-module of finite rank with a positive definite $O_{\\smallkay}$-valued hermitian form given by\n\\begin{equation*}\nh' (x, y) = \\lambda_0^{-1}\\circ y^\\vee\\circ\\lambda\\circ x\\in\\text{\\rm End}_{O_{\\smallkay}} (E_S) = O_{\\smallkay}\\ .\n\\end{equation*}\nFor a positive integer $t$, we define the DM-stack\\footnote{This notation \ndiffers from that in \\cite{KR2}, in that here the special cycles are defined over ${\\mathbb C}$, and are considered as lying over $\\Cal M({\\text{\\smallcute k}}; n-1, 1)_{\\mathbb C}$.}\n$\\Cal Z(t)$ by \n\\begin{equation*}\n\\Cal Z(t)(S) = \\{(A, \\iota, \\lambda; x)\\mid (A, \\iota, \\lambda) \\in \\Cal M ({\\text{\\cute k}}; n-1, 1)(S), \\ x\\in V' (A, E), \\ \\ h' (x, x) = t\\}\\ .\n\\end{equation*}\nThen $\\Cal Z(t)$ maps by a finite unramified morphism to $\\Cal M ({\\text{\\cute k}}; n-1, 1)_{\\mathbb C}$, and its image is a divisor in the sense that, locally \nfor the \\'etale topology, it is defined by a non-zero equation. \n\nThe cycles $\\Cal Z(t)$ also admit a complex uniformization. 
More precisely, under the assumption of the triviality of the class group of ${\\text{\\cute k}}$, we have\n\\begin{equation*}\n\\Cal Z(t)({\\mathbb C}) \\simeq \\Big[\\Gamma\\backslash\\Big(\\coprod_{\\substack{x\\in L\\\\ h(x, x)=t}}\\mathcal D_x\\Big)\\Big] ,\n\\end{equation*} \nwhere $\\mathcal D_x$ is the set of lines in $\\mathcal D$ which are perpendicular to $x$. \n\nAgain, there is a $*$-variant of these definitions and a corresponding DM-stack $\\Cal Z(t)^*$ above $\\Cal M({\\text{\\cute k}}; n-1, 1)^*$. \n\n\n\\section{ Cubic surfaces} \\label{cubicsurfaces}\nIn this paper we consider four occult period mappings. We start with the case of cubic surfaces, following Allcock, Carlson, Toledo \\cite{ACT1}, comp.\\ also \\cite{B}. \nAs explained in the introduction, in these sources, the results are formulated in terms of arithmetic ball quotients;\nhere we use the complex uniformization of the previous two sections to\nexpress these results in terms of moduli spaces of Picard type.\n\nLet $S\\subset\\mathbb P^3$ be a smooth cubic surface. Let $V$ be a cyclic covering of degree $3$ of $\\mathbb P^3$, \nramified along $S$. Explicitly, if $S$ is defined by the homogeneous equation of degree $3$ in $4$\nvariables\n\\begin{equation*}\nF (X_0, \\ldots , X_3) = 0\\ ,\n\\end{equation*}\nthen $V$ is defined by the homogeneous equation of degree $3$\nin $5$ variables,\n\\begin{equation*}\nX_4^3 - F(X_0, \\ldots , X_3) = 0\\ .\n\\end{equation*}\n\nLet ${\\text{\\cute k}} = {\\mathbb Q} (\\o)$, $\\o = e^{2\\pi i \/3}$. Then the obvious $\\mu_3$-action on $V$ determines an action of $O_{\\smallkay}={\\mathbb Z}[\\o]$ on $H^3 (V, {\\mathbb Z})$. \nFor the (alternating) cup product pairing $\\langle \\ , \\ \\rangle$,\n\\begin{equation*}\n\\langle \\o x, \\o y \\rangle = \\langle x, y \\rangle ,\n\\end{equation*}\nwhich implies that\n\\begin{equation*}\n\\langle \\aa x, y \\rangle = \\langle x, {\\aa^\\sigma} y \\rangle ,\\quad \\forall\\,\\aa \\in O_{\\smallkay} . \n\\end{equation*}\nHence there is a unique $O_{\\smallkay}$-valued hermitian form $h$ on $H^3 (V, {\\mathbb Z})$ such that \n\\begin{equation}\\label{altherm}\n\\langle x, y \\rangle = {\\rm tr} \\big(\\frac{1}{\\sqrt \\Delta} h( x, y)\\big) ,\n\\end{equation}\nwhere the discriminant $\\Delta$ of ${\\text{\\cute k}}$ is equal to $-3$ in the case at hand. Explicitly,\n\\begin{equation}\nh(x, y)=\\frac{1}{2}\\big(\\langle \\sqrt{\\Delta} x, y \\rangle +\\langle x, y \\rangle \\sqrt{\\Delta}\\big) . \n\\end{equation} \n\nFurthermore, an $O_{\\smallkay}$-lattice is self-dual wrt $\\langle\\ ,\\ \\rangle$ if and only if it is self-dual wrt $h(\\ ,\\ )$. \n\\smallskip\n\n{\\bf Fact:} {\\it $H^3 (V, {\\mathbb Z})$ is a self-dual hermitian $O_{\\smallkay}$-module of signature $(4, 1)$.}\n\\smallskip\n\nAs noted above, such a lattice is unique up to isomorphism.\n\nLet\n\\begin{equation*}\nA = A(V) = H^3 (V, {\\mathbb Z})\\backslash H^3 (V, {\\mathbb C}) \/ H^{2, 1} (V)\n\\end{equation*}\nbe the intermediate Jacobian of $V$. Then $A$ is an abelian variety of dimension $5$ which is principally\npolarized by the intersection form. 
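Recall that for a smooth cubic threefold $V \subset \mathbb P^4$ one has
\begin{equation*}
h^{3,0}(V) = 0\ , \qquad h^{2,1}(V) = 5\ , \qquad b_3(V) = 10\ ,
\end{equation*}
so that the Hodge structure on $H^3(V, {\mathbb Z})$ is of Hodge level one, of type $(2,1)+(1,2)$; this is the reason why the complex torus $A$ is in fact an abelian variety of dimension $5$, and the unimodularity of the cup product pairing on $H^3(V, {\mathbb Z})$ makes the polarization principal.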
Since the association $V\\mapsto (A(V), \\lambda)$ is functorial, we obtain\nan action $\\iota$ of $O_{\\smallkay}$ on $A(V)$.\n\\begin{theo}\\label{cubicsurf}\n(i) The object $(A, \\iota, \\lambda )$ lies in $\\Cal M ({\\text{\\cute k}}; 4,1) ({\\mathbb C})$.\n\\smallskip\n\n\\noindent (ii)This construction is functorial and compatible with families, \nand defines a morphism of\nDM-stacks,\n\\begin{equation*}\n\\varphi: {\\it Cubics}_{2, {\\mathbb C}}^\\circ\\to\\Cal M ({\\text{\\cute k}}; 4,1)_{\\mathbb C}\\ .\n\\end{equation*}\n{Here ${\\it Cubics}_{2, {\\mathbb C}}^\\circ$ denotes the stack parametrizing smooth cubic surfaces up to projective \nequivalence,\n\\begin{equation*}\n\\text{\\it Cubics}_{2, {\\mathbb C}}^\\circ = [ {\\rm PGL}_4\\backslash \\mathbb P{\\text{\\rm Sym}}^3 ({\\mathbb C}^4)^\\circ ]\n\\end{equation*}\n{\\rm [stack quotient in the orbifold sense].}}\n\\smallskip\n\n\\noindent (iii) The induced morphism on coarse moduli spaces $|\\varphi|: |{\\it Cubics}_{2, {\\mathbb C}}^\\circ|\\to|\\Cal M ({\\text{\\cute k}}; 4,1)_{\\mathbb C}|$ is an open embedding. Its image is the complement of the image of the \nKM-cycle $\\Cal Z(1)$ in $|\\Cal M ({\\text{\\cute k}}; 4,1)_{\\mathbb C}|$.\n\\end{theo}\n\\begin{proof}\nWe only comment on the assertions in (ii) and (iii). In (ii), the compatibility with families is always true of Griffiths' \nintermediate jacobians (which however are abelian varieties only when the Hodge structure is of type \n$(m+1, m)+(m, m+1)$). This constructs $\\varphi$ as a complex-analytic morphism. \nThe algebraicity of $\\varphi$ then follows from Borel's theorem that any analytic family of abelian varieties over a ${\\mathbb C}$-scheme is automatically algebraic \\cite{Bo}. \nThe fact that the image is contained in the complement of $\\Cal Z(1)$ is true because, by the Clemens-Griffiths \ntheory, intermediate Jacobians of cubic threefolds are simple as polarized abelian varieties, whereas, over $\\Cal Z(1)$ the \npolarized abelian varieties split off an elliptic curve. However, the fact that $\\Cal Z(1)$ makes up the whole complement \nis surprising and results from the fact that the morphism $\\varphi$ extends to an isomorphism from a \npartial compactification $|\\text{\\it Cubics}_{2, {\\mathbb C}}^{\\rm s}|$ of $|\\text{\\it Cubics}_{2, {\\mathbb C}}^\\circ|$ (obtained by \nadding {\\it stable} cubics) to $|\\Cal M ({\\text{\\cute k}}; 4,1)_{\\mathbb C}|$, such that the complement of \n$|\\text{\\it Cubics}_{2, {\\mathbb C}}^\\circ|$ in $|\\text{\\it Cubics}_{2, {\\mathbb C}}^{\\rm s}|$ is an irreducible divisor, cf.\\ \\cite{B}, Prop.\\ 6.7, Prop.\\ 8.2. \n\\end{proof}\n\\begin{rem} {\\rm Let us comment on the stacks aspect of Theorem \\ref{cubicsurf}. Any automorphism of $S$ is induced by an automorphism of $\\mathbb P^3$, which in turn induces an automorphism of $V$. We therefore obtain a homomorphism\n$ {\\rm Aut}(S)\\to {\\rm Aut}(A(V), \\iota, \\lambda).$\nThe statement of \n\\cite{ACT1}, Thm.~2.20. implies that this homomorphism induces an isomorphism\n \\begin{equation}\\label{stackiso}\n {\\rm Aut}(S)\\overset{\\sim}{\\longrightarrow} {\\rm Aut}(A(V), \\iota, \\lambda)\/O_{\\smallkay}^\\times\\, ,\n\\end{equation} \nwhere the units $O_{\\smallkay}^\\times\\simeq\\mu_6$ act via $\\iota$ on $A(V)$. Indeed, in loc. cit. 
it is asserted that $\\varphi$ is an open immersion of orbifolds ${\\it Cubics}_{2, {\\mathbb C}}^\\circ\\to [P\\Gamma\\backslash \\mathcal D]$, where $P\\Gamma=\\Gamma\/O_{\\smallkay}^\\times$; however, we were not able to follow the argument. Note that the orbifold $[P\\Gamma\\backslash \\mathcal D]$ is different from $[\\Gamma\\backslash \\mathcal D]$, which occurs in \\S 3. \n}\n\\end{rem}\n\\section{ Cubic threefolds}\\label{cubicthreefolds} \nOur next example concerns cubic threefolds, following Allcock, Carlson, Toledo \\cite{ACT2} and Looijenga, Swierstra \\cite{LS1}.\n\nLet $T\\subset\\mathbb P^4$ be a cubic threefold. Let $V$ be the cyclic covering of degree $3$ of $\\mathbb P^4$, \nramified in $T$. Then $V$ is a cubic hypersurface in $\\mathbb P^5$ and we define the primitive cohomology as \n\\begin{equation}\nL=H_0^4 (V, {\\mathbb Z}) = \\{x\\in H^4 (V, {\\mathbb Z})\\mid ( x, \\rho) = 0\\}\\ ,\n\\end{equation}\nwhere $\\rho$ is the square of the hyperplane section class. Note that ${\\rm rk}_{{\\mathbb Z}} L = 22$. \nAgain, let ${\\text{\\cute k}} ={\\mathbb Q} (\\o)$, with $\\o= e^{2\\pi i\/3}$, so that $L$\nbecomes an $O_{\\smallkay}$-module. Now the cup-product $(\\ , \\ )$ on $H^4 (V, {\\mathbb Z})$ is a perfect {\\it symmetric} \npairing satisfying $(a x, y)= (x, {a^\\sigma} y)$ for $a \\inO_{\\smallkay}$. It induces on\n$L$ a symmetric bilinear form $(\\ ,\\ )$ of discriminant $3$. We wish to define an {\\it alternating} pairing \n$\\langle \\ , \\ \\rangle$ on $L$ satisfying $\\langle \\aa x, y\\rangle=\\langle x, {\\aa^\\sigma} y\\rangle$ for $\\aa\\in O_{\\smallkay}$. \nWe do this by giving the associated $O_{\\smallkay}$-valued hermitian pairing $h(\\ , \\ )$, \nin the sense of (\\ref{altherm}) defined by \n\\begin{equation}\\label{hermit1}\nh(x, y) = \\frac{3}{2} \\big(( x, y) + ( x, \\sqrt{\\Delta} y) \\frac{1}{\\sqrt{\\Delta}}\\big). \n\\end{equation}\nHere the factor $3\/2$ is used instead of $1\/2$ to have better integrality properties. Set $\\pi=\\sqrt{\\Delta}$. \n\n\n\n{\\bf Fact:} {\\it For the pairing \\eqref{hermit1}, $L^{\\vee}$ contains $\\pi^{-1}L$ with \n$L^\\vee\/\\pi^{-1}L\\simeq {\\mathbb Z}\/3{\\mathbb Z}$.} \\hfill\\break \nFor this result, see \\cite{ACT2}, Theorem\\ 2.6 and its proof, as well as \\cite{LS1}, the passage below (2.1). \n\n\nNow consider the eigenspace decomposition of $H_0^4 (V, {\\mathbb C})$ under ${\\text{\\cute k}}\\otimes{\\mathbb C} = {\\mathbb C}\\oplus{\\mathbb C}$.\n\n\n{\\bf Fact:} {\\it The Hodge structure of $H_0^4 (V, {\\mathbb R})$ is of type\n\\begin{equation*}\nH_0^4 (V, {\\mathbb C}) = H^{3, 1}\\oplus H_0^{2, 2}\\oplus H^{1, 3}\\ ,\n\\end{equation*}\nwith $\\dim H^{3, 1} = \\dim H^{1, 3} = 1$. Furthermore, the only nontrivial eigenspaces of the generator $\\o$ of $\\mu_3$ are\n\\begin{equation*}\n\\begin{aligned}\nH_0^4 (V, {\\mathbb C})_\\o & = & H^{3, 1}\\oplus (H_0^{2, 2})_\\o\\ , & \\text{ with}\\ \\dim (H_0^{2, 2})_\\o = 10\\\\\nH_0^4 (V, {\\mathbb C})_{\\overline{\\o}} & = & (H_0^{2, 2})_{\\overline{\\o}}\\oplus H^{1, 3}\\ , & \\text{ with}\\ \\dim (H_0^{2, 2})_{\\overline{\\o}} = 10\\ ,\n\\end{aligned}\n\\end{equation*}}\nsee \\cite{ACT2}, \\S 2, resp.\\ \\cite{LS1} \\S 4.\n\n\\smallskip\n\nNow set $\\Lambda=\\pi L^\\vee$. 
Then we have the chain of inclusions of $O_{\\smallkay}$-lattices\n\\begin{equation*}\n\\Lambda\\subset \\Lambda^\\vee\\subset \\pi^{-1}\\Lambda\\ ,\n\\end{equation*}\nwhere the quotient $\\Lambda^\\vee\/\\Lambda$ is isomorphic to $({\\mathbb Z}\/3{\\mathbb Z})^{10}$, and where $\\pi^{-1}\\Lambda\/\\Lambda^\\vee$ is isomorphic to ${\\mathbb Z}\/3{\\mathbb Z}$. \nLet\n\\begin{equation*}\nA = \\Lambda\\backslash H_0^4 (V, {\\mathbb C}) \/ H^-\\ ,\n\\end{equation*}\nwhere\n\\begin{equation*}\nH^- = H^{3, 1}\\oplus (H_0^{2, 2})_{\\overline{\\o}}\\ .\n\\end{equation*}\nNote that the map $\\Lambda\\to H_0^4 (V, {\\mathbb C}) \/ H^-$ is an $O_{\\smallkay}$-linear injection, hence $A$ is \na complex torus. In fact, the hermitian form $h$ and its associated alternating form $\\langle \\ , \\ \\rangle$ define a polarization $\\lambda$ on $A$. Hence $A$ is an abelian variety of dimension $11$, with an action of $O_{\\smallkay}$ and a polarization of degree\n$3^{10}$. In fact, we obtain in this way an object $(A, \\iota, \\lambda )$ of $\\Cal M ({\\text{\\cute k}}; 10, 1)^\\ast ({\\mathbb C})$ (see section \\ref{subsectionpic} for the definition of the $*$-variants of our moduli stacks).\n\\begin{theo}\\label{thmcubic3}\n\n\\noindent (i) The construction which associates to a smooth cubic $T$ in $\\mathbb P^4$ the object $(A, \\iota, \\lambda )$\nof $\\Cal M ({\\text{\\cute k}}; 10, 1)^\\ast({\\mathbb C})$ is functorial and compatible with families, and defines a morphism of DM-stacks,\n\\begin{equation*}\n\\varphi : \\text{Cubics}_{3, {\\mathbb C}}^\\circ\\to\\Cal M ({\\text{\\cute k}}; 10,1)_{\\mathbb C}^\\ast\\ .\n\\end{equation*}\n\n\\noindent (ii) The induced morphism on coarse moduli spaces $|\\varphi| : |\\text{Cubics}_{3, {\\mathbb C}}^\\circ|\\to|\\Cal M ({\\text{\\cute k}}; 10,1)_{\\mathbb C}^\\ast| $ is an open embedding. Its image is the complement of the image of the KM-cycle $\\Cal Z (3)^*$ in $|\\Cal M ({\\text{\\cute k}}; 10,1)_{\\mathbb C}^\\ast|$.\n\\end{theo}\n\\begin{proof}\n The compatibility with families is due to the fact \nthat the eigenspaces for the $\\mu_3$-action and the Hodge filtration both vary in a \nholomorphic way. The point (ii) is \\cite{ACT2}, Thm.\\ 1.1, resp.\\ \\cite{LS1}, Thm.\\ 3.1. \n\\end{proof}\n\\begin{rem}{\\rm The stack aspect is not treated in these sources. However, it seems reasonable to conjecture that the analogue of \\eqref{stackiso} is also true in this case, i.e., that there is an isomorphism \n \\begin{equation}\\label{stackiso2}\n {\\rm Aut}(T)\\overset{\\sim}{\\longrightarrow} {\\rm Aut}(A, \\iota, \\lambda)\/O_{\\smallkay}^\\times\\, ,\n\\end{equation} \nwhere $(A, \\iota, \\lambda)$ is the object of $\\Cal M ({\\text{\\cute k}}; 10,1)_{\\mathbb C}^\\ast$ attached to $T$. }\\end{rem}\n\\begin{rem}\n{\\rm The construction of the rational Hodge structure $H^1 (A, {\\mathbb Q})$ from $H_0^4 (V, {\\mathbb Q})$ is a very special case\nof a general construction due to van Geemen \\cite{G}. More precisely, it arises (up to Tate twist) as the {\\it inverse\nhalf-twist} in the sense of loc.\\ cit.\\ of the Hodge structure $H_0^4 (V, {\\mathbb Q})$ with complex \nmultiplication by ${\\text{\\cute k}}$. The {\\it half twist} construction attaches to a rational Hodge structure \n$V$ of weight $w$ with complex multiplication by a CM-field ${\\text{\\cute k}}$ a rational Hodge \nstructure of weight $w+1$. 
More precisely, if $\\Sigma$ is a fixed half system of complex \nembeddings of ${\\text{\\cute k}}$, then van Geemen defines a new Hodge structure on $V$ by setting\n\\begin{equation*}\nV_{\\rm new}^{r, s}=V_{\\Sigma}^{r-1, s}\\oplus V_{\\overline{\\Sigma}}^{r, s-1} , \n\\end{equation*} \nwhere $V_{\\Sigma}$, resp.\\ $V_{\\overline{\\Sigma}}$ denotes the sum of the \neigenspaces for the ${\\text{\\cute k}}$-action corresponding to the complex embeddings in ${\\Sigma}$, resp. in ${\\overline{\\Sigma}}$. }\n\\end{rem}\n\n\n\n\n\\section{Curves of genus three} \\label{curvesgenus3}\n\nOur third example concerns the moduli space of curves of genus $3$ following Kondo \\cite{K1}. \n\nLet $C$ be a non-hyperelliptic smooth projective curve of genus 3. The canonical system embeds $C$ as a\nquartic curve in $\\mathbb P^2$. Let $X(C)$ be the $\\mu_4$-covering of $\\mathbb P^2$ ramified in $C$. Then the quartic $X(C)\\subset \\mathbb P^3$ is a\n$K3$-surface with an automorphism $\\tau$ of order $4$ and hence an action of $\\mu_4$. Let\n\\begin{equation*}\nL= \\{x\\in H^2 (X (C), {\\mathbb Z})\\mid\\tau^2 (x) = -x\\}\\ .\n\\end{equation*}\nLet ${\\text{\\cute k}} = {\\mathbb Q} (i)$ be the Gaussian field.\n\n\\smallskip\n\n{\\bf Fact:} {\\it $L$ is a free ${\\mathbb Z}$-module of rank $14$. The restriction $(\\ , \\ )$ of the symmetric\ncup product pairing to $L$ has discriminant $2^8$; more precisely, for the dual lattice $L^*$ for the symmetric pairing, \n\\begin{equation*}\nL^* \/ L\\cong ({\\mathbb Z}\/2)^8\\ ,\n\\end{equation*}}\nsee \\cite{K1}, top of p.\\ 222.\n\n\\smallskip\n\nNow consider the eigenspace decomposition of $L_{ {\\mathbb C}}=L\\otimes{\\mathbb C}$ under ${\\text{\\cute k}}\\otimes{\\mathbb C} = {\\mathbb C}\\oplus{\\mathbb C}$, where $i\\otimes 1$ acts via $\\tau$.\n\n\\smallskip\n\n{\\bf Fact:} {\\it The induced Hodge structure on $L_{{\\mathbb C}}$ is of type\n\\begin{equation*}\nL_{{\\mathbb C}} = L^{2,0}\\oplus L^{1,1}\\oplus L^{0,2}\\ ,\n\\end{equation*}\nwith $\\dim L^{2,0} = \\dim L^{0,2} = 1$. Furthermore the only nontrivial eigenspaces of $\\tau$ are\n\\begin{equation*}\n\\begin{aligned}\n(L_{{\\mathbb C}})_i & = & L^{2,0}\\oplus (L^{1,1})_i\\ , & \\text{ with}\\ \\dim (L^{1,1})_i = 6\\\\\n(L_{{\\mathbb C}})_{-i} & = & (L^{1,1})_{-i}\\oplus L^{0,2}\\ , & \\text{ with}\\ \\dim (L^{1,1})_{-i} = 6\\ .\n\\end{aligned}\n\\end{equation*}}\n\n\\smallskip\n\nWe define an $O_{\\smallkay}$-valued hermitian pairing $h$ on $L_{\\mathbb Q}$ by setting \n\\begin{equation}\nh(x, y)= (x, y) +(x, \\tau y)\\ i\\ .\n\\end{equation}\nThen it is easy to see that the dual lattice $L^\\vee$ of $L$ for the hermitian form $h$ is the same as the dual lattice $L^*$ for the symmetric form. \n\nNow set $\\Lambda=\\pi L^\\vee$, where $\\pi=1+ i$. Then we obtain a chain of inclusions of $O_{\\smallkay}$-lattices\n\\begin{equation*}\n\\Lambda\\subset \\Lambda^\\vee\\subset \\pi^{-1}\\Lambda\\ ,\n\\end{equation*}\nwhere the quotient $\\Lambda^\\vee\/\\Lambda$ is isomorphic to $({\\mathbb Z}\/2{\\mathbb Z})^{6}$, and where $\\pi^{-1}\\Lambda\/\\Lambda^\\vee$ is isomorphic to ${\\mathbb Z}\/2{\\mathbb Z}$. \n\n\nLet\n\\begin{equation*}\nA = \\Lambda\\backslash L_{{\\mathbb C}} \/ L^-\\ ,\n\\end{equation*}\nwhere\n\\begin{equation*}\nL^-= L^{2,0}\\oplus (L^{1,1})_{-i}\\ .\n\\end{equation*}\nNote that the map $\\Lambda\\to L_{{\\mathbb C}} \/ L^-$ is a $O_{\\smallkay}$-linear injection, hence\n$A$ is a complex torus. 
In fact, the hermitian form $h$ and its associated \nalternating form $\langle \ , \ \rangle$ define a polarization $\lambda$ \non $A$. Hence $A$ is an abelian variety of dimension $7$, with an action of $O_{\smallkay}$ and a polarization of degree\n$2^{6}$. In fact, we obtain in this way an object $(A, \iota, \lambda )$ \nof $\Cal M ({\text{\cute k}}; 6, 1)^\ast ({\mathbb C})$.\nNow \cite{K1}, Thm.\ 2.5 implies the following theorem. \n\n\begin{theo}\n\n\noindent (i) The construction which associates to a non-hyperelliptic curve of genus $3$ the object $(A, \iota, \lambda )$\nof $\Cal M ({\text{\cute k}}; 6,1)^\ast({\mathbb C})$ is functorial and compatible with families, and defines a morphism of DM-stacks,\n\begin{equation*}\n\varphi : \Cal N_{3, {\mathbb C}}^\circ\to\Cal M ({\text{\cute k}}; 6,1)^\ast_{\mathbb C}\ .\n\end{equation*}\n{ Here $\Cal N_{3,{\mathbb C}}^\circ$ denotes the stack of smooth non-hyperelliptic curves of genus $3$, i.e., of smooth non-hyperelliptic quartics in $\mathbb P^2$ up to projective equivalence.}\n\smallskip\n\n\noindent (ii) The induced morphism on coarse moduli schemes $|\varphi| : |\Cal N_{3, {\mathbb C}}^\circ|\to|\Cal M ({\text{\cute k}}; 6,1)^\ast_{\mathbb C}| $ is an open embedding. Its image is the complement of the image of the KM-cycle $\Cal Z (2)^*$ in $|\Cal M ({\text{\cute k}}; 6,1)^\ast_{\mathbb C}|$.\qed\n\end{theo}\n\begin{rem}{\rm Again, the stack aspect is not treated in \cite{K1}. It seems reasonable to conjecture that the analogue of \eqref{stackiso} is also true in this case, i.e., that there is an isomorphism \n \begin{equation}\label{stackiso2}\n {\rm Aut}(C)\overset{\sim}{\longrightarrow} {\rm Aut}(A, \iota, \lambda)\/O_{\smallkay}^\times\, ,\n\end{equation} \nwhere $(A, \iota, \lambda)$ is the object of $\Cal M ({\text{\cute k}}; 6,1)_{\mathbb C}^\ast$ attached to $C$, and where $O_{\smallkay}^\times=\mu_4$. }\end{rem}\n\n\section{Curves of genus four} \label{curvesgenus4}\nOur final example concerns the moduli space of curves of genus four and is also due to Kondo \cite{K2}. \n\nLet $C$ be a non-hyperelliptic curve of genus $4$. The canonical system embeds $C$ into\n$\mathbb P^3$. More precisely, $C$ is the intersection of a smooth cubic surface $S$ and a\nquadric $Q$ which is either smooth or a quadric cone. Furthermore, $Q$ is uniquely determined by $C$. Let $X$ be a cyclic cover of\ndegree $3$ over $Q$ branched along $C$ (in case $Q$ is singular, we take the minimal\nresolution of the singularities, cf.\ loc.cit.). Then $X$ is a $K3$-surface with an action of $\mu_3$. Let\n\begin{equation*}\nL = (H^2 (X, {\mathbb Z})^{\mu_3})^\perp\n\end{equation*}\nbe the orthogonal complement of the invariants of this action in $H^2 (X, {\mathbb Z})$, equipped with the symmetric form $(\ , \ )$ obtained by restriction. \n\n\smallskip\n\n{\bf Fact:} {\it $L$ is a free ${\mathbb Z}$-module of rank $20$, with dual $L^*$ for the symmetric form satisfying \n\begin{equation*}\nL^* \/ L \simeq ({\mathbb Z} \/3{\mathbb Z})^2\ ,\n\end{equation*}}\ncf. \cite{K2}, top of p.\ 386. \n\n\smallskip\n\nFor ${\text{\cute k}}={\mathbb Q}(\o)$, $\o= e^{2\pi i\/3}$, we again define an alternating form $\langle \ , \ \rangle$ through its associated \n$O_{\smallkay}$-valued\nhermitian form $h$. 
Using the action of $O_{\\smallkay}$ on $L$, we set\n\\begin{equation}\\label{hermit2}\nh(x, y) = \\frac{3}{2} \\big(( x, y) + ( x, \\sqrt{\\Delta} y) \\frac{1}{\\sqrt{\\Delta}}\\big)\\ .\n\\end{equation}\nSet $\\pi=\\sqrt{\\Delta}$. \n\n\\smallskip\n\n{\\bf Fact:} {\\it For the hermitian pairing \\eqref{hermit2}, $L^{\\vee}$ is an over-lattice of $\\pi^{-1}L$ with $L^\\vee\/\\pi^{-1}L\\simeq ({\\mathbb Z}\/3{\\mathbb Z})^2$.}\n\n\\smallskip\n\nNow consider the eigenspace decomposition of $L\\otimes{\\mathbb C}$ under ${\\text{\\cute k}}\\otimes{\\mathbb C} = {\\mathbb C}\\oplus{\\mathbb C}$.\n\n\\smallskip\n\n{\\bf Fact:} {\\it The induced Hodge structure on $L_{\\mathbb C}$ is of type\n\\begin{equation*}\nL_{\\mathbb C} = L^{2,0}\\oplus L^{1,1}\\oplus L^{0,2}\\ ,\n\\end{equation*}\nwith $\\dim L^{2,0} = \\dim L^{0,2} = 1$. Furthermore the only nontrivial eigenspaces of $\\mu_3$ are\n\\begin{equation*}\n\\begin{aligned}\n(L_{\\mathbb C})_\\omega = L^{2,0}\\oplus (L^{1,1})_\\o\\ ,\\ \\text{with}\\ \\dim (L^{1,1})_\\o &= 9\\\\\n(L_{\\mathbb C})_{\\overline{\\omega}} = (L^{1,1})_{\\overline{\\o}}\\oplus L^{0,2}\\ ,\\ \\text{with}\\ \\dim (L^{1,1})_{\\overline{\\o}} &= 9\\ .\n\\end{aligned}\n\\end{equation*}}\n\n\\smallskip\n\nNow set $\\Lambda=\\pi L^\\vee$. Then we have the chain of inclusions of $O_{\\smallkay}$-lattices\n\\begin{equation*}\n\\Lambda\\subset \\Lambda^\\vee\\subset \\pi^{-1}\\Lambda\\ ,\n\\end{equation*}\nwhere the quotient $\\Lambda^\\vee\/\\Lambda$ is isomorphic to $({\\mathbb Z}\/3{\\mathbb Z})^{8}$, and where $\\pi^{-1}\\Lambda\/\\Lambda^\\vee$ is isomorphic to $({\\mathbb Z}\/3{\\mathbb Z})^2$. \n\n\nLet\n\\begin{equation*}\nA = \\Lambda\\backslash L_{\\mathbb C} \/ L^-\\ ,\n\\end{equation*}\nwhere \n\\begin{equation*}\nL^- = L^{2,0}\\oplus (L^{1,1})_{\\overline{\\o}}\\ .\n\\end{equation*}\n\n Then the map\n$\\Lambda\\to L_{\\mathbb C} \/ L^-$ is a $O_{\\smallkay}$-linear injection, hence $A$ is a complex torus. \nIn fact, the hermitian form $h$ and its associated alternating form $\\langle \\ , \\ \\rangle$ \ndefine a polarization $\\lambda$ on $A$. Hence $A$ is an abelian variety of dimension $10$, with an action of $O_{\\smallkay}$ and a polarization of degree\n$3^{8}$. In fact, we obtain in this way an object $(A, \\iota, \\lambda )$ of $\\Cal M ({\\text{\\cute k}}; 9, 1)^\\ast ({\\mathbb C})$, \n\\begin{theo}\n\n\\noindent (i) The construction which associates to a non-hyperelliptic curve of genus $4$ the object\n$(A,\\iota, \\lambda )$ of $\\Cal M ({\\text{\\cute k}}; 9,1)^\\ast({\\mathbb C})$ is functorial and compatible with families, and defines a morphism of\nDM-stacks,\n\\begin{equation*}\n\\varphi :\\Cal N^\\circ_{4, {\\mathbb C}}\\to\\Cal M ({\\text{\\cute k}}; 9,1)^\\ast_{\\mathbb C}\\ .\n\\end{equation*}\n{\\rm Here $\\Cal N^\\circ_{4, {\\mathbb C}}$ denotes the stack of smooth non-hyperelliptic curves of genus $4$.}\n\n\\smallskip\n\n\\noindent (ii) The induced morphism on coarse moduli schemes $|\\varphi| :|\\Cal N^\\circ_{4, {\\mathbb C}}|\\to|\\Cal M ({\\text{\\cute k}}; 9,1)^\\ast_{\\mathbb C}| $ is an open embedding. Its image is the complement of the image of the KM-cycle\n$\\Cal Z (2)^*$ in $|\\Cal M ({\\text{\\cute k}}; 9,1)^\\ast_{\\mathbb C}|$.\\qed\n\\end{theo}\n\\begin{rem}{\\rm Again, the stack aspect is not treated in \\cite{K2}. 
It seems reasonable to conjecture that the analogue of \\eqref{stackiso} is also true in this case, i.e., that there is an isomorphism \n \\begin{equation}\\label{stackiso2}\n {\\rm Aut}(C)\\overset{\\sim}{\\longrightarrow} {\\rm Aut}(A, \\iota, \\lambda)\/O_{\\smallkay}^\\times\\, ,\n\\end{equation} \nwhere $(A, \\iota, \\lambda)$ is the object of $\\Cal M ({\\text{\\cute k}}; 9,1)_{\\mathbb C}^\\ast$ attached to $C$, and where $O_{\\smallkay}^\\times=\\mu_6$. }\\end{rem}\n\n\\section{Descent} \\label{descent}\nIn all four cases discussed above, we obtain morphisms over ${\\mathbb C}$ between \nDM-stacks defined over ${\\text{\\cute k}}$. These \nmorphisms are constructed using transcendental methods. In this section \nwe will show that these morphisms are in fact defined over ${\\text{\\cute k}}$. The argument is \nmodelled on Deligne's solution of the analogous problem for complete intersections of \nHodge level one \\cite{De1}, where he shows that the corresponding family of intermediate \njacobians is an abelian scheme over the moduli scheme over ${\\mathbb Q}$ of complete intersections of given multi-degree. \n\nIn our discussion below, to simplify notations, we will deal with the case of cubic threefolds, as explained in \nsection \\ref{cubicthreefolds}; the other cases are completely analogous. Below we will shorten the \nnotation $Cubics^\\circ_3$ to $\\mathcal C$, and consider this as a DM-stack over $\\text{\\rm Spec}\\, {\\text{\\cute k}}$. \nLet $v:V\\to \\mathcal C$ be the universal family of cubic threefolds, and let $a: A\\to \\mathcal C_{\\mathbb C}$ \nbe the polarized family of abelian varieties constructed from $V$ in section \\ref{cubicthreefolds}. \nHence $A$ is the pullback of the universal abelian scheme over $\\mathcal M({\\text{\\cute k}}; 10, 1)^*_{\\mathbb C}$ under the \nmorphism $\\varphi: \\mathcal C_{\\mathbb C}\\to \\mathcal M({\\text{\\cute k}}; 10, 1)^*_{\\mathbb C}$. \n\\begin{lem}\\label{elladic}\n Let $b: B\\to \\mathcal C_{\\mathbb C}$ be a polarized abelian scheme with $O_{\\smallkay}$-action, which is the pullback \n under a morphism $\\psi: \\mathcal C_{\\mathbb C}\\to \\Cal M({\\text{\\cute k}}; 10, 1)_{\\mathbb C}^*$ of the universal abelian scheme, \n and such that there exists $\\ell$ and an $O_{\\smallkay}$-linear isomorphism of lisse $\\ell$-adic sheaves on $\\mathcal C_\\mathbb C$, \n$$\n\\alpha_{\\ell}: R^1a_*{\\mathbb Z}_{\\ell}\\simeq R^1b_*{\\mathbb Z}_{\\ell}\n$$\ncompatible with the Riemann forms on source and target. Then there exists a unique \nisomorphism $\\alpha: A\\to B$ that induces $\\alpha_{\\ell}$. This isomorphism is compatible with polarizations. \n\\end{lem}\n\nTo prove this, we are going to use the following lemma. In it, we denote by $\\Lambda$ the \nhermitian $O_{\\smallkay}$-module $H^1(A_s, {\\mathbb Z})$, for $s\\in\\mathcal C_{\\mathbb C}$ a fixed base point. \nRecall from section \\ref{cubicthreefolds} that there is a chain \nof inclusions $\\Lambda\\subset \\Lambda^\\vee\\subset \\pi^{-1}\\Lambda$, where $\\pi=\\sqrt {-3}$ is a generator of the unique prime ideal \nof $O_{\\smallkay}$ dividing $3$. 
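For later use we record the standard comparison linking the $\ell$-adic hypothesis of Lemma \ref{elladic} with monodromy: for the chosen base point $s$ there are canonical $\pi_1(\mathcal C_{\mathbb C}, s)$-equivariant identifications
\begin{equation*}
(R^1 a_* {\mathbb Z}_{\ell})_{s} \simeq H^1(A_s, {\mathbb Z}_{\ell}) \simeq \Lambda \otimes_{\mathbb Z} {\mathbb Z}_{\ell}\ ,
\end{equation*}
and similarly for $b: B\to \mathcal C_{\mathbb C}$, so that an $O_{\smallkay}$-linear isomorphism of lisse sheaves as in Lemma \ref{elladic} amounts to an isomorphism of the corresponding monodromy representations after tensoring with ${\mathbb Z}_{\ell}$.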
\n\\begin{lem}\\label{monodr}\nLet $s\\in\\mathcal C_{\\mathbb C}$ be the chosen base point.\n\n\\noindent (i) The monodromy representation $\\rho_A:\\pi_1(\\mathcal C_{\\mathbb C}, s)\\to {\\rm GL}_{{\\text{\\smallcute k}}}\\big(\\Lambda\\otimes_{O_{\\smallkay}}{\\text{\\cute k}}\\big)$ is absolutely irreducible.\n\n\\noindent (ii) For every prime ideal $\\mathfrak p$ prime to $3$, the monodromy \nrepresentation $\\pi_1(\\mathcal C_{\\mathbb C}, s)\\to {\\rm GL}_{\\kappa(\\mathfrak p)}\\big(\n\\Lambda\/\\mathfrak p \\Lambda\\big)$ is absolutely irreducible. \n\n\\noindent (iii) For the unique prime ideal $\\mathfrak p=(\\pi)$ lying over $3$, the \nmonodromy representation $\\pi_1(\\mathcal C_{\\mathbb C}, s)\\to {\\rm GL}_{\\kappa(\\mathfrak p)}\\big(\n\\Lambda\/\\mathfrak p \\Lambda\\big)$ is not absolutely irreducible, but there is a \nunique non-trivial stable subspace, namely, the $10$-dimensional image of $\\pi \\Lambda^\\vee$ in $\\Lambda\/\\pi \\Lambda$. \n\\end{lem}\n\\begin{proof} The monodromy representations in question are induced by the composition of homomorphisms\n\\begin{equation}\\label{monodromy}\n\\pi_1(\\mathcal C_{\\mathbb C}, s)\\longrightarrow \\pi_1(\\mathcal M({\\text{\\cute k}}; 10, 1)^*_{\\mathbb C}, \\varphi(s))\\longrightarrow \\text{\\rm GL}_{O_{\\smallkay}}\\big(H^1(A_s, {\\mathbb Z})\\big) .\n\\end{equation}\nHere by Theorem \\ref{thmcubic3}, and using complex uniformization (cf. section \\ref{cu}), the first \nhomomorphism is induced by the inclusion of connected spaces \n$$\\iota: \\mathcal D\\setminus \\Big(\\bigcup_{\\substack{x\\in L\\\\ h(x, x)=3}}\\mathcal D_x\\Big) \\hookrightarrow \\mathcal D\\ , \n$$ followed by quotienting out by the free action of $\\Gamma^*$. Since $\\mathcal D$ is simply-connected, it follows \nthat $\\pi_1(\\mathcal M({\\text{\\cute k}}; 10, 1)^*_{\\mathbb C}, \\varphi(s))=\\Gamma^*$ and that the first homomorphism \nin (\\ref{monodromy}) is surjective. Now, $\\Gamma^*$ can \nbe identified with the group of unitary automorphisms of the {\\it parahoric} lattice $\\Lambda$, \nand it is elementary that the representations of $\\Gamma^*$ on $\\Lambda\\otimes_{O_{\\smallkay}}{\\text{\\cute k}}$ \nand on $\\Lambda\/\\mathfrak p \\Lambda$ for $\\mathfrak p$ prime to $ 3$ \nare absolutely irreducible (the latter since $\\Lambda^\\vee\\otimes {{\\mathbb Z}}_\\ell=\\Lambda\\otimes {{\\mathbb Z}}_\\ell$ for $\\ell\\neq 3$). \nThe statement (iii) is proved in the same way. \n\\end{proof}\n\\begin{proof}(of Lemma \\ref{elladic}) \nLet us compare the monodromy representations, \n\\begin{equation}\n\\begin{aligned}\n\\rho_A:& \\pi_1(\\mathcal C_{\\mathbb C}, s)\\to \\text{\\rm GL}_{O_{\\smallkay}}\\big(H^1(A_s, {\\mathbb Z})\\big)\\\\\n\\rho_B:& \\pi_1(\\mathcal C_{\\mathbb C}, s)\\to \\text{\\rm GL}_{O_{\\smallkay}}\\big(H^1(B_s, {\\mathbb Z})\\big) .\n\\end{aligned}\n\\end{equation}\nBy hypothesis, these representations are isomorphic after tensoring with ${\\mathbb Z}_{\\ell}$. \nHence, they are also isomorphic after tensoring with ${\\text{\\cute k}}$. Hence there \nexists a $\\pi_1(\\mathcal C_{\\mathbb C}, s)$-equivariant ${\\text{\\cute k}}$-linear isomorphism \n\\begin{equation*}\n\\beta: H^1(A_s, {\\mathbb Q})\\simeq H^1(B_s, {\\mathbb Q})\\ .\n\\end{equation*}\nBy the irreducibility of the representation of $\\pi_1(\\mathcal C_{\\mathbb C}, s)$ in $H^1(A_s, {\\mathbb Q})$, $\\beta$ is unique \nup to a scalar in ${\\text{\\cute k}}^\\times$. 
Let us compare the $O_{\\smallkay}$-lattices \n$\\beta^{-1}\\big(H^1(B_s, {\\mathbb Z})\\big)$ and $H^1(A_s, {\\mathbb Z})$. Since we are assuming that $O_{\\smallkay}$ is a PID, after replacing $\\beta$ by a multiple $\\beta_{\\text{\\rm O}}=c\\beta$, \nwe may assume that $L_B= \\beta_{\\text{\\rm O}}^{-1}\\big(H^1(B_s, {\\mathbb Z})\\big)$ \nis a primitive $O_{\\smallkay}$-sublattice in $\\Lambda=H^1(A_s, {\\mathbb Z})$. Let $\\mathfrak p$ be a prime ideal in $O_{\\smallkay}$, \nand let us consider the image of $L_B$ in $\\Lambda\/\\mathfrak p \\Lambda$. Since $L_B$ is \nprimitive in $\\Lambda$, this image is non-zero. If $\\mathfrak p$ is prime to $3$, the irreducibility \nstatement in (ii) of Lemma \\ref{monodr} implies that this image is everything, and hence\n$L_B\\otimes O_{{\\text{\\smallcute k}}, \\mathfrak p}=\\Lambda\\otimes O_{{\\text{\\smallcute k}}, \\mathfrak p}$ in this case. \n\nTo handle the prime ideal $\\mathfrak p=(\\pi)$ over $3$, we use the polarizations. By the irreducibility \nstatement in (i) of Lemma \\ref{monodr}, the polarization forms on $H^1(A_s, {\\mathbb Q})$ and on $H^1(B_s, {\\mathbb Q})$ \ndiffer by a scalar in ${\\mathbb Q}^\\times$ under the isomorphism $\\beta_{\\text{\\rm O}}$. Now, by hypothesis on $B$, with respect to the polarization form \non $H^1(B_s, {\\mathbb Q})$, we have a chain of inclusions $L_B\\subset L_B^\\vee\\subset \\pi^{-1}L_B$ \nwith respective quotients of dimension $10$ and $1$ over ${\\mathbb F}_p$, just as for $\\Lambda$. Since the \ntwo polarization forms differ by a scalar, this excludes the possibility that the image \nof $L_B$ in $\\Lambda\/\\pi \\Lambda$ be non-trivial. It follows that $L_B=\\Lambda$. \n\nFurthermore, the isomorphism $\\beta_{\\text{\\rm O}}$ is unique up to a unit in $O_{\\smallkay}^\\times$, and it is \nan isometry with respect to both polarization forms. Now, by \\cite{De2}, 4.4.11 and 4.4.12, $\\beta_\\text{\\rm O}$ is induced \nby an isomorphism of polarized abelian schemes. Finally, $\\beta_\\text{\\rm O}\\otimes_{\\mathbb Z}\\Z_\\ell=\\alpha_\\ell$ up to a unit, \nsince these homomorphisms differ by a scalar and both preserve the Riemann forms.\\hfill\\break \nThe uniqueness of $\\alpha$ follows from Serre's Lemma. \n\\end{proof}\nNow Lemma \\ref{elladic} implies that over any field extension $k'$ of ${\\text{\\cute k}}$ inside ${\\mathbb C}$, there exists \nat most one polarized abelian variety $b: B\\to \\mathcal C_{k'}$ obtained by pull-back from the universal \nabelian variety over $\\Cal M({\\text{\\cute k}}; 10, 1)^*$, equipped with an $O_{\\smallkay}$-linear isomorphism of lisse $\\ell$-adic sheaves over $\\mathcal C_{\\mathbb C}$\n$$ R^1a_{*}{\\mathbb Z}_{\\ell}\\simeq R^1b_{{\\mathbb C} *}{\\mathbb Z}_{\\ell}\\ ,$$\npreserving the Riemann forms. By the argument in \\cite{De1}, 2.2 this implies that, in fact, $B$ \nexists (since it does for $k'={\\mathbb C}$). Hence the morphism $\\varphi$ is defined over ${\\text{\\cute k}}$. Put otherwise, \nfor any ${\\text{\\cute k}}$-automorphism $\\tau$ of ${\\mathbb C}$, the conjugate embedding $\\varphi^\\tau$, which corresponds \nto the conjugate $(A, \\iota, \\l)^\\tau$, is equal to $\\varphi$; hence $\\varphi$ is defined over ${\\text{\\cute k}}$. \n\n\\begin{comment}\\begin{rem}{\\rm In the case of cubic surfaces, the original argument of Deligne \\cite{De1} applies directly. 
Indeed, \nconsider the following commutative diagram, in which the lower row is defined by associating to \na cubic threefold its intermediate jacobian, a principally polarized abelian variety of dimension $5$, \n$$\\xymatrix{& Cubics^\\circ_{2 {\\mathbb C}}\\ar[d]\\ar[r] & \\Cal M({\\text{\\cute k}}; 4, 1)_{\\mathbb C}\\ar[d]\\\\\n& Cubics^\\circ_{3 {\\mathbb C}}\\ar[r]\\ar[r]&\\mathcal A_{5 {\\mathbb C} } \\ .}$$\nBy the Torelli theorem for cubic threefolds, this diagram is cartesian. The horizontal morphisms \nare open embeddings and the vertical morphisms are unramified. By \\cite{De1}, the lower horizontal morphism \nis defined over ${\\text{\\cute k}}$ (even over ${\\mathbb Q}$). Hence also the upper horizontal morphism is defined over ${\\text{\\cute k}}$. }\n\\end{rem}\n\\end{comment}\n \n\\begin{conj}\nIn all four cases above, the morphisms $\\varphi$ can be extended over $O_{\\smallkay}[\\Delta^{-1}]$.\n\\end{conj}\n\nSince we circulated a first version of our paper, this has been proved by J.~Achter \\cite{Ach} in the case of cubic surfaces. \n\n\\section{Concluding remarks} \n\nWe end this paper with a few remarks. \n\\begin{rem}\n{\\rm In all four cases, the complement of ${\\rm Im}(|\\varphi|)$ is identified with a certain KM-divisor. In fact,\n for other KM-divisors, the intersection with ${\\rm Im}(|\\varphi|)$ sometimes has a geometric interpretation. For example,\n in the case of cubic surfaces, the intersection of ${\\rm Im}(|\\varphi|)$ with the image of the KM-divisor $\\Cal Z(2)$ in $|\\Cal M({\\text{\\cute k}}; 4, 1)_{{\\mathbb C}}|$\n can be identified with the locus of cubic surfaces admitting Eckardt points, cf. \\cite{DGK}, Thm.~8.10. Similarly, in the case of curves of genus $3$,\n the intersection of ${\\rm Im}(|\\varphi|)$ with the image of $\\Cal Z(t)^*$ in $|\\Cal M({\\text{\\cute k}}; 6, 1)^*_{{\\mathbb C}}|$ can be identified with the locus of curves $C$ where the K3-surface $X(C)$ admits a ``splitting curve\" of a certain degree depending on $t$, cf. \\cite{Art}, Thm.~4.6. }\n\\end{rem}\n\n\\begin{rem}\n{\\rm In \\cite{DK, DGK, MSY}, occult period morphisms are often set in comparison with \nthe Deligne-Mostow theory which establishes a relation between configuration spaces \n(e.g., of points in the projective line) and quotients of the complex unit ball by complex \nreflection groups, via monodromy groups of hypergeometric equations. This aspect of \nthese examples has been suppressed entirely here. Also, it should be mentioned that there \nare other ways of constructing the period map for cubic surfaces, e.g., \\cite{DK, DGK}. }\n\\end{rem}\n\\begin{rem}\\label{invar}\n{\\rm Let us return to the section \\ref{cu}. There we had fixed an hermitian vector space $(V, (\\ , \\ ))$ \nover ${\\text{\\cute k}}$ of signature $(n-1, 1)$. Let $V_0$ be the underlying ${\\mathbb Q}$-vector space, with the symmetric pairing defined by\n\\begin{equation*}\ns( x, y)= {\\rm tr}(h( x, y)) .\n\\end{equation*}\nThen $s$ has signature $(2(n-1), 2)$, and we obtain an embedding of ${\\rm U}(V)$ into ${\\rm O}(V_0)$. This \nalso induces an embedding of symmetric spaces,\n\\begin{equation}\\label{embed}\n\\mathcal D\\hookrightarrow \\mathcal D_{\\rm O} ,\n\\end{equation}\nwhere, as before, $\\mathcal D$ is the space of negative (complex) lines in $(V_{{\\mathbb R}}, \\mathbb I_0)$, \nand where $\\mathcal D_{\\rm O}$ is the space of oriented negative $2$-planes in $V_{{\\mathbb R}}$. 
The \nimage of \\eqref{embed} is precisely the set of negative $2$-planes that are stable by $\\mathbb I_0$. In the \ncases of the Gauss field resp.\\ the Eisenstein field, this invariance is equivalent to being stable \nunder the action of $\\mu_4$, resp.\\ $\\mu_6$. Hence in these two cases, the image of \\eqref{embed} \ncan also be identified with the fixed point locus of $\\mu_4$ resp.\\ $\\mu_6$ in $\\mathcal D_{\\rm O}$. }\n\\end{rem}\n\\begin{rem}{\\rm By going through the tables in \\cite{rapo'}, \\S 2, one sees that there is no further example of \nan occult period map of the type above which embeds the moduli stack of {\\it hypersurfaces} of suitable degree and \ndimension into a Picard type moduli stack of abelian varieties. Note, however, that, in the case of curves of genus 4, \nthe source of the hidden period morphism is a moduli stack of {\\it complete intersections} of a certain multi-degree of dimension one, and there may be more examples of this type. }\n\\end{rem}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nThe Cox proportional hazards model (hereafter, the Cox model) introduced by \\citet{cox72}\nis one of the most popular regression models for survival analysis. \nIt is a {\\em semiparametric} model with two sets of unknown parameters: the regression coefficients, which measure the correlation between the covariates and the\nexplanatory variables, and the baseline hazard, a nonparametric quantity, which describes the risk of\nevents happening within given time intervals at baseline levels conditional on the covariates.\nA commonly used approach to estimate the two parameters takes two steps: first estimate the regression coefficients from the Cox partial likelihood \\citep{cox72} and then derive \nthe estimated cumulative hazard function (known as the Breslow estimator \\citep{breslow72}) through maximizing the full likelihood via plugging-in the estimated value of the regression coefficients. \n\nIn the past few decades, Bayesian methods for the Cox model have been widely applied for analyzing datasets in, e.g., astronomy \\citep{isobe86}, medical and genetics studies \\citep{jialiang13}, and engineering \\citep{equeter20}.\nAn advantage of the Bayesian approach is that uncertainty quantification for the parameters of interest is in principle straightforward to obtain once posterior samples are available. \nContrary to the standard two-step procedure mentioned above, the Bayesian approach provides estimates for the joint distribution of all parameters, which enables to capture dependencies: in particular, as one of the practical applications considered below, one can derive meaningful uncertainty quantification simultaneously for the Cox model parameter and functionals of the hazard rate (e.g. its mean, or the value of the survival function at a point) from corresponding credible sets, in particular automatically capturing the (optimal `efficient') dependence structure.\n\nThe prior for the hazard function needs to be chosen carefully, as it is a nonparametric quantity. \nTwo main common approaches to place a prior on hazards have been considered in the literature.\nA first approach puts a prior on the cumulative hazard function, modeling this quantity rather than the hazard itself. 
\nA prominent example is the family of {\it neutral to the right process} priors, which includes the Beta process prior \n\citep{hjort90, damien96},\nthe Gamma process prior \citep{kalbfleisch78, burridge81}, \nand the Dirichlet process prior \citep{florens99} as special cases.\nA second approach, which is the one we follow here, is to put a prior on the baseline hazard function. A commonly used family is that of piecewise constant priors \citep{ibrahim01}. \nThe latter approach is particularly attractive, first because it allows for inference on the hazard rate (assuming it exists), and second because in practice the follow-up period is often split into several intervals, with \nthe hazard rate taking a distinct constant value on each sub-interval, making the output of the method easy to interpret for practitioners.\n\nOne primary goal of the paper is to validate the practical use of Bayesian credible sets for inference on the unknown parameters of the Cox model, for instance credible bands for the conditional survival function. Indeed, practitioners often treat Bayesian credible sets as confidence sets. Before discussing the possible mathematical validity of this practice, let us conduct a simple illustrative simulation study (see Section \ref{sec:sim} for a detailed description of the simulation setting). \n\nFrom data simulated from the Cox model, suppose we want to make inference on the conditional survival function, which gives the probability that a patient survives past a certain time $t$ given a covariate $z \in \mathbb{R}^p$, a useful quantity for practitioners. Let us compare a simple $95\%$ credible band of the posterior distribution (with the piecewise constant prior; see Section \ref{sec:sim} for its construction) induced on the conditional survival function and a certain $95\%$ confidence band---which requires estimation of the covariance structure---of the same function obtained by a commonly used frequentist approach (see Section \ref{sec:sim} for a precise description of how the band is obtained). \nIn Figure \ref{fig:intro}, we plot the credible band (blue) and the confidence band (orange). \nThe sample size is $n = 200$, and we let $p=1$ and $z = 1$.\nOne first notes that the true function (black) is contained in both bands, which suggests that both provide a reasonable uncertainty assessment for the conditional survival function. Second, we compared the total area of the two bands; interestingly, we found that the area of the credible band is smaller than that of the second band: the area of the credible band is 0.163 and that of the confidence band is 0.183 ($12\%$ larger). \nA thorough Monte Carlo study, carried out in Section \ref{sec:sim}, confirms that the area of the Bayesian credible band is indeed consistently smaller on average than that of the frequentist confidence band when the sample size is 200. \nIt may be noted that these results hold for what is, in a sense, the `simplest possible' Bayesian credible band: as can be seen in Figure \ref{fig:intro}, its width is fixed through the time interval, and even better results are expected for bands that become thinner close to $t=0$; see Section \ref{sec:diss} for more discussion on this. \nThis simulation study suggests that, aside from not having to estimate covariances, \nusing the Bayesian credible set can be particularly advantageous, especially for \nsmall sample size datasets. 
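For orientation, we sketch one simple way in which a fixed-width credible band of this type can be obtained from posterior output; the construction actually used in our experiments is detailed in Section \ref{sec:sim}, and the notation $S^{(b)}$, $\hat S_B$, $R_\alpha$ below is for illustration only. Given draws $S^{(1)}, \dots, S^{(B)}$ from the posterior distribution of the conditional survival function and their pointwise posterior mean $\hat S_B$, one may set
\begin{equation*}
R_\alpha = \min\Big\{ r \ge 0 : \ \frac1B \sum_{b=1}^B \mathbf 1\Big\{ \sup_{t} \big| S^{(b)}(t) - \hat S_B(t) \big| \le r \Big\} \ge 1-\alpha \Big\}
\end{equation*}
and report the band $\{(t,s) : |s - \hat S_B(t)| \le R_\alpha\}$. By construction this band has posterior probability approximately $1-\alpha$ and constant width $2R_\alpha$ over the time interval, in line with the band displayed in Figure \ref{fig:intro}.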
\n \n\n\begin{figure}\n\centering\n\includegraphics[width=0.5\textwidth]{plots\/credibleband-200.pdf}\n\caption{\nThe 95\% Bayesian credible band (blue) using the random histogram prior and the 95\% frequentist confidence band (orange).\nThe bold black line is the true conditional survival function. \nSample size is $n = 200$. \n}\n\label{fig:intro}\n\end{figure}\n\nThe observations from Figure \ref{fig:intro} raise some interesting questions: can one validate and generalize our findings in the figure, that is, can one provide theory explaining why the credible band is a confidence band, and will the two bands become more similar as sample size increases? \nWhat can be said in terms of the hazard function: does the Bayesian procedure estimate it in a possibly `optimal' way? We now discuss the existing literature on these questions and the main contributions of the paper. \n\nIn smooth parametric models, taking certain quantile credible sets as confidence sets is justified mathematically by the celebrated Bernstein--von Mises theorem (henceforth BvM, see e.g. \cite{vdvAS}, Chapter 10): a direct consequence thereof is that taking, in dimension $1$ say, the $\alpha\/2$ and $1-\alpha\/2$ posterior quantiles provides a credible set (by definition) whose frequentist coverage asymptotically goes to $1-\alpha$. Its diameter also asymptotically matches the information bound, so it is optimal in the frequentist sense from the efficiency perspective. For more complex models, such as the\nCox model, obtaining a semiparametric BvM theorem for the regression parameter is possible, as we see below, but requires non-trivial work. Obtaining analogous results at the level of the {\em survival} function itself is even more challenging. We now review recent advances in the area for such semi- and non-parametric models.\n \nSemiparametric BvM theorems were obtained in \citet{cast12} under general conditions on the statistical model using Gaussian process priors on the nuisance parameter. \n\citet{cast15BvM} considered an even more general framework, allowing for BvMs for linear and non-linear functionals (also generalizing some early results of \cite{RR12} for density estimation). A multiscale approach was introduced in \citet{cast13, cast14a} in order to derive nonparametric BvM theorems for families of possibly non-conjugate priors, as well as Donsker-type theorems. Yet, the first applicative examples of these works were mostly confined to relatively simple models and\/or priors.\n\nTheory for convergence of Bayesian posterior distributions in survival models has mostly followed two directions, which we briefly review now (see also \citet{ghosal17}, Section 12.3.3 and Chapter 13). A first series of influential results has been concerned with classes of neutral to the right priors: in the standard nonparametric survival analysis model, \citet{hjort90} showed in particular conjugacy for Beta process priors in the context of cumulative hazard estimation; \citet{kimlee01} obtained posterior consistency, while \citet{Kim2004} derived Donsker theorems for the posterior survival function. In \citet{kim06}, the Cox model was considered and the joint posterior distribution of parameter and survival function was shown to satisfy the Bernstein--von Mises theorem. 
These results share two common features: they model the cumulative baseline hazard (equivalently, the survival function), not the baseline hazard itself (which practitioners may prefer to model directly), and they rely on conjugacy of the class of neutral to the right priors, which provides fairly explicit characterisations of the posterior distributions.\n\nA second series of results, closer in spirit to ours, considers priors on the baseline hazard function. \\citet{DeBlasi2009a} used a kernel mixture with respect to a completely random measure as a prior, and obtained both posterior consistency for the hazard and limit results for linear and nonlinear functionals thereof. The work \\cite{DeBlasi2009b} derived a semiparametric BvM theorem in competing risk models. The present work can be seen as following in the footsteps of \\cite{cast20survival}, where the simple nonparametric model with right-censoring is treated, and for which results for the hazard and cumulative hazard are derived. However, the latter model is much simpler than the Cox model, which features both regression coefficients and random covariates: this requires considerably more delicate bounds on both (semiparametric) bias terms and remainder terms; see Section \\ref{sec:support-lemma-thm1} in \\citet{ning22supp} for more details. \nIn \\citet{cast12}, the Cox model is treated as an application of the general results, which yield the semiparametric BvM theorem for the Cox model parameter for (transformed) Gaussian process priors on the hazard. Although this result has a general flavor, and can be adapted to handle other prior families, it requires a fast enough posterior contraction rate for the baseline hazard, and thus cannot be applied to histogram priors on the hazard (see Section \\ref{sec:hellinger} for more on this). Perhaps more importantly, the latter result is confined to the Cox regression parameter, and says nothing about uncertainty quantification for the cumulative hazard, for the survival function, or even simply for linear functionals of the hazard.\n\nThe present work obtains the first results for Bayesian uncertainty quantification jointly on the regression parameters and the survival function in the Cox model using non-conjugate priors. In particular, our results demonstrate that the popular and broadly used histogram priors on the baseline hazard provide not only contraction of the posterior distribution around the true unknown parameters, but also optimal and efficient uncertainty quantification for them. We adopt the multiscale analysis approach, motivated by the study of the nonparametric survival model in \\citet{cast20survival}.\nMore precisely, we derive \n\\begin{itemize}\n\\item[(a)] a joint Bernstein--von Mises (BvM) theorem for linear functionals of the regression coefficients and of the baseline hazard function;\n\\item[(b)] a Bayesian Donsker theorem for the conditional cumulative hazard and survival functions;\n\\item[(c)] a minimax optimal contraction rate for the conditional hazard function in supremum-norm distance.\n\\end{itemize}\nThe Bayesian Donsker theorem, in particular, implies that certain $(1-\\alpha)\\%$ credible bands for the conditional survival function are asymptotically $(1-\\alpha)\\%$ confidence bands.\nIn addition to these results, a nonparametric BvM theorem for the conditional baseline hazard function is obtained in the Supplemental Material \\citep[see][]{ning22supp}. 
All these results are completely new for non-conjugate (in particular, histogram) priors; in addition, we derive the first supremum-norm posterior contraction rates for the hazard in the Cox model, together with the corresponding matching minimax lower bound, which to the best of our knowledge was not yet available in the literature. \n\nFinally, the techniques we introduce have a general flavor. First, they do not rely on conjugacy of the priors considered, so that they can be applied to a wide variety of prior families, as long as a certain change-of-measure condition is met. Second, the specific form of the model (here the Cox model) comes in through its local asymptotic normality (LAN) expansion, so similar techniques can be used in more complex settings, as long as a form of local asymptotic normality of the model holds. In particular, the techniques developed herein can serve as a useful tool for future studies of other semiparametric and nonparametric models, in survival analysis and beyond, as further discussed in Section \\ref{sec:diss}. \n\n\nThe paper is organized as follows. \nSection \\ref{sec:setup} presents the model, the prior families, and the key assumptions. \nMain results are presented in Section \\ref{sec:main-results}.\nSimulation studies are conducted in Section \\ref{sec:sim}. \nSection \\ref{sec:diss} concludes the paper and discusses a variety of extensions for future studies. \nAll the relevant proofs for the main results and auxiliary lemmas are deferred to the Supplementary Material. \n\n{\\em Notation.} For any two real numbers $a$ and $b$, let $a \\vee b = \\max(a, b)$ and $a \\wedge b = \\min(a, b)$;\nalso, we write $a \\lesssim b$ if $a \\leq C b$ for some constant $C$. \nWe denote by $o(1)$ a deterministic sequence going to $0$ with $n$ and by $o_P(1)$ a sequence of random variables going to $0$ in probability under the distribution $P$. \nFor a vector $x$, we denote by $\\|x\\|_q$ the $\\ell_q$-norm of $x$ ($q \\geq 1$), i.e., $\\|x\\|_q = (\\sum_i |x_i|^q)^{1\/q}$. When $q = \\infty$, $\\|x\\|_\\infty = \\max_i |x_i|$ is the infinity norm of $x$. \nFor a matrix $A$, we denote $\\|A\\|_{(\\infty, \\infty)} = \\max_{ij} |a_{ij}|$.\n \nFor a function $f \\in L^{p}[a,b]$ ($p \\geq 1$), where $L^p[a,b]$ is the space of functions whose $p$-th power is Lebesgue integrable on $[a,b]$, we define the $L^{p}$-norm of $f$ as $\\|f\\|_{p} = (\\int_a^b |f|^{p})^{1\/p}$. \nIf $p = 2$, $\\|f\\|_{{2}} = \\sqrt{\\langle f, f \\rangle}$ is the $L^2$-norm, and if $p = \\infty$, $\\|f \\|_{{\\infty}}= \\sup_t |f(t)|$ is the supremum norm.\nThe associated inner product between two functions $f, g \\in L^p[0, 1]$ is denoted by $\\langle f, g\\rangle = \\int_0^1 fg$ and the space of continuous functions on $[a,b]$ is denoted by $\\mathcal{C}[a, b]$ (resp. 
$L^{\\infty}[a, b]$), which is equipped with the supremum norm $\\|\\cdot\\|_\\infty$.\n\n\nFor $\\beta, D > 0$, let $l = \\lfloor \\beta \\rfloor$ be the largest integer smaller than $\\beta$; a standard H\\\"older-ball on $[a,b]$ can then be defined as \n$$\n\\mathcal{H}(\\beta, D) = \\{f: |f^{(l)}(x) - f^{(l)}(y)| \\leq D|x-y|^{\\beta - l}, \\ \\|f\\|_\\infty \\leq D, \\ x, y\\in [a, b]\\}.\n$$\nLet $(\\mathcal{S}, d)$ be a metric space and $\\mu, \\nu$ be probability measures on $\\mathcal{S}$.\nFor $F: \\mathcal{S} \\to \\mathbb{R}$, set\n$$\n\\|F\\|_{BL} = \\sup_{x \\in \\mathcal{S}} |F(x)| + \\sup_{x \\neq y} \\frac{|F(x) - F(y)|}{d(x,y)},\n$$\nand define the bounded Lipschitz metric $\\mathcal{B}_{\\mathcal{S}}$ as\n$$\n\\mathcal{B}_{\\mathcal{S}}(\\mu, \\nu) = \\sup_{F: \\|F\\|_{BL} \\leq 1} \n\\left|\n\\int_{\\mathcal{S}} F(x) (d \\mu - d \\nu ) (x)\n\\right|.\n$$\nFor two densities $f$ and $g$, we denote by $h^2(f, g) = \\int (\\sqrt{f} - \\sqrt{g})^2 d\\mu$ their squared Hellinger distance.\n\n\\section{Model, prior families and structural assumptions}\n\\label{sec:setup}\n\n\\subsection{The Cox model with random right censoring}\n\nThe observations $X=X^n$ are $n$ independent identically distributed (i.i.d.) triplets given by \n$X = ((Y_1, \\delta_1, Z_1), \\dots, (Y_n, \\delta_n, Z_n))$. The observed $Y_i\\in\\mathbb{R}^+$ are censored versions of (unobserved) survival times $T_i\\in\\mathbb{R}^+$,\nwith $\\delta_i$ an indicator variable informing on whether $T_i$ has been observed or not: that is, $Y_i = T_i \\wedge C_i$ and $\\delta_i = \\mathbbm{1}(T_i \\leq C_i)$, where the $C_i$'s are i.i.d. censoring times. The variables $Z_1, \\dots, Z_n \\in \\mathbb{R}^p$, $p$ fixed, are called covariates. \n\nFor a fixed covariate vector $z\\in\\mathbb{R}^p$ and $t>0$, define the {\\it conditional hazard rate} $\\lambda(t\\,|\\, z) = \\lim_{h\\to0} h^{-1} P(t \\leq T \\leq t+h \\,|\\, T \\geq t, Z = z)$. \nThe Cox model assumes, for some $\\theta\\in\\mathbb{R}^p$ and denoting by $\\theta'z$ the standard inner product in $\\mathbb{R}^p$, \n\\[ \\lambda(t \\,|\\, z) = e^{\\theta'z} \\lambda(t),\\]\nwhere $\\lambda(t)$ is the {\\it baseline hazard function}. \nThe {\\it conditional cumulative hazard function} is defined as $\\Lambda(\\cdot \\,|\\, z) = \\int_0^\\cdot \\lambda(u \\,|\\, z) du = e^{\\theta'z} \\int_0^\\cdot \\lambda(u) du = e^{\\theta'z} \\Lambda(\\cdot)$ and the {\\it conditional survival function} is denoted by \n$S(\\cdot \\,|\\, z) = \n\\exp(-\\Lambda(\\cdot \\,|\\, z)) = \n\\exp(- e^{\\theta'z} \\Lambda(\\cdot) )$. \n\nAssuming the baseline hazard rate is positive, one can alternatively make inference on the log-hazard. The unknown parameters of the Cox model are then \n\\[ \\eta = (\\theta, r),\\qquad \\text{where } r = \\log \\lambda.\\] \nThe goal is to estimate the pair $\\eta=(\\theta,r)$. We denote by $\\eta_0=(\\theta_0,r_0)$ the true values of the parameters (and similarly for the related quantities $\\lambda_0, \\Lambda_0$).\n\nWe now give a set of standard assumptions used in this paper. \nFirst, we assume both $T$ and $Z$ admit a continuous density function, $f_T(\\cdot)$ and $f_Z(\\cdot)$ respectively. Given $Z$, the survival time $T$ and the censoring time $C$ are independent. \nAt the end of the follow-up, at some fixed time $\\varrho$, some individuals are still event free, so that \n$P(T > \\varrho \\,|\\, Z = z) > 0$, while the censoring time satisfies $P(C > \\varrho \\,|\\, Z = z) = 0$ and $P(C = \\varrho \\,|\\, Z = z) > 0$. 
\nMore precisely, the censoring time $C$ is assumed to follow, given $Z = z$, a distribution $G_z$ admitting the density\n \\begin{align*}\n p_C(u) = g_z(u) \\mathbbm{1}\\{0\\leq u < \\varrho\\} + \\bar G_z(\\varrho) \\mathbbm{1}\\{u = \\varrho\\}\n \\end{align*}\n with respect to $\\text{Leb}([0, \\varrho]) + \\delta_\\varrho$, where $\\text{Leb}(I)$ is the Lebesgue measure on $I$ and $\\delta_\\varrho$ is a Dirac mass at $\\varrho$.\nWithout loss of generality, we assume $\\varrho = 1$ throughout the paper.\n\n\nBased on this set-up, the joint density function of the triple $(y, \\delta, z)$ is given by\n\\begin{equation}\n\\begin{split}\n\\label{eqn:density-function}\n f_\\eta (y, \\delta, z) = \n & \\left(\n g_z(y) e^{- \\Lambda(y) e^{\\theta'z}}\n \\right)^{1-\\delta} \n \\left(\n \\bar{G}_z (y) \\lambda(y) e^{\\theta' z - \\Lambda(y) e^{\\theta'z}}\n \\right)^\\delta \n f_Z(z)\n \\mathbbm{1}(y < 1)\\\\\n & + \n \\left(\n \\bar{G}_z(1) e^{-\\Lambda(1) e^{\\theta'z}}\n \\right) \n f_Z(z)\n \\mathbbm{1}(\\delta = 0, y = 1),\n\\end{split}\n\\end{equation}\nwhere $g_z(\\cdot)$ is a continuous density of $C$ given $Z = z$, $\\bar{G}_z(\\cdot) = 1 - G_z(\\cdot -)$, and $G_z(\\cdot)$ is the corresponding cumulative distribution function. \n\n\nLet $\\ell_n(\\eta) = \\sum_{i=1}^n \\log f_\\eta (X_i)$ be the log-likelihood function; the log-likelihood ratio is then given by \n\\begin{align}\n\\label{log-likelihood}\n\\ell_n(\\eta) - \\ell_n(\\eta_0)\n= \n\\sum_{i=1}^n \\left(\\delta_i \\{(\\theta-\\theta_0)'Z_i + (r-r_0)(Y_i) \\} - \\Lambda(Y_i) e^{\\theta'Z_i} + \\Lambda_0(Y_i) e^{\\theta_0'Z_i}\\right).\n\\end{align}\nFrom (\\ref{log-likelihood}), one sees that the log-likelihood ratio does not depend on $g_z$ and $\\bar G_z(\\cdot)$, so one does not need to model $g_z$ in order to make inference on $\\eta$. \n\n\\subsection{Information--related quantities} \\label{sec:infonot}\n\nWe now introduce some of the key quantities arising in the study of the Cox model: these are all related to the information operator (extending the usual Fisher information in parametric models) arising from the LAN--expansion in the model (see also Section \\ref{sec:bg}). \nFor a bounded function $b(\\cdot)$ on $[0, 1]$ and a cumulative hazard function $\\Lambda(\\cdot)$, \nwe denote \n$\\Lambda\\{b\\}(\\cdot) = \\int_0^\\cdot b(u) d\\Lambda(u)$ and, in a slight abuse of notation, we set $\\Lambda \\{b\\} = \\Lambda \\{b\\}(1)$. \nThe following notation is commonly used in the literature on the Cox model (see, e.g., Section VIII.4.3 of \\citet{andersen93} and Section 12.3.3 of \\citet{ghosal17}):\n\\begin{align}\n\\label{M0}\n M_0(u) &= \\mathbb{E}_{\\eta_0} \\left(e^{\\theta_0'Z} \\mathbbm{1}_{u \\leq T}\\right) = \n \\int \\bar{G}_z(u) e^{\\theta_0'z - \\Lambda_0(u) e^{\\theta_0'z}} f_Z(z) dz,\\\\\n \\label{M1}\n M_1(u) & = \\mathbb{E}_{\\eta_0} \\left(Z e^{\\theta_0'Z} \\mathbbm{1}_{u \\leq T}\\right) \n = \\int z \\bar{G}_z(u) e^{\\theta_0'z - \\Lambda_0(u) e^{\\theta_0'z}} f_Z(z) dz,\\\\\n \\label{M2}\n\tM_2(u) & = \\mathbb{E}_{\\eta_0} \\left(ZZ' e^{\\theta_0'Z} \\mathbbm{1}_{u \\leq T}\\right) \n = \\int zz' \\bar{G}_z(u) e^{\\theta_0'z - \\Lambda_0(u) e^{\\theta_0'z}} f_Z(z) dz.\n\\end{align}\nThe least favorable direction is defined as $\\gamma_{M_1}(\\cdot) = (M_1\/M_0)(\\cdot)$. 
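\n\nAs a small illustration of the likelihood structure, the log-likelihood ratio (\\ref{log-likelihood}) can be computed from the data $(Y_i, \\delta_i, Z_i)$ and the parameters alone. The following $\\mathsf{R}$ sketch does this for a baseline hazard that is piecewise constant on equal bins of $[0,1]$; it is purely illustrative (all names are ours) and, as noted above, the censoring density $g_z$ does not enter.\n\\begin{verbatim}\n## Sketch (illustrative): the Cox log-likelihood ratio for a baseline hazard\n## given by K constant heights on equal bins of [0, 1].\nLambda_pc <- function(t, lambda) {              # cumulative hazard at t\n  K <- length(lambda); breaks <- seq(0, 1, length.out = K + 1)\n  sum(lambda * pmax(pmin(t, breaks[-1]) - breaks[-(K + 1)], 0))\n}\nr_pc <- function(t, lambda) {                   # r(t) = log lambda(t)\n  K <- length(lambda)\n  log(lambda[pmin(K, pmax(1, ceiling(t * K)))])\n}\nloglik_ratio <- function(Y, delta, Z, theta, lambda, theta0, lambda0) {\n  lin  <- as.vector(Z %*% theta); lin0 <- as.vector(Z %*% theta0)\n  Lam  <- sapply(Y, Lambda_pc, lambda = lambda)\n  Lam0 <- sapply(Y, Lambda_pc, lambda = lambda0)\n  sum(delta * ((lin - lin0) + r_pc(Y, lambda) - r_pc(Y, lambda0))\n      - Lam * exp(lin) + Lam0 * exp(lin0))\n}\n\\end{verbatim}\n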
\nFor a bounded function $b(\\cdot)$ on $[0,1]$, we let $\\gamma_b(\\cdot) = (b\/M_0)(\\cdot)$.\nThe efficient information matrix of the Cox model is denoted by $\\tilde I_{\\eta_0}$ and is given by\n\\begin{align}\n\\tilde I_{\\eta_0} = & \\Lambda_0\\{M_2(\\cdot) - \\gamm(\\cdot) \\gamm'(\\cdot) M_0(\\cdot)\\}.\n\\label{eff-info-mtx}\n\\end{align}\nFor any $\\vartheta \\in \\mathbb{R}^p$ and $g \\in L^2\\{\\Lambda_0\\}$ (i.e., $\\int g^2d\\Lambda_0<\\infty$), we define\n\\begin{align}\n\\label{Wn}\nW_n(\\vartheta, g)\n =\n \\frac{1}{\\sqrt{n}}\n \\sum_{i=1}^n \\left\\{\n \\delta_i \\left(\n \\vartheta'Z_i + g(Y_i)\n \\right)\n - e^{\\theta_0'Z_i}\n \\left(\n \\vartheta'Z_i \\Lambda_0(Y_i) + (\\Lambda_0 g)(Y_i)\n \\right)\n \\right\\}.\n\\end{align}\nThis quantity corresponds to the empirical process part of the LAN expansion of the Cox model (see (\\ref{lan}) in Section \\ref{sec:bg} of \\citet{ning22supp}).\n\n\\subsection{Structural assumptions}\n\\label{sec:assumption}\n\nThe following fairly mild conditions are assumed on the unknown quantities of the model. For some positive constants $c_1, \\dots, c_8$,\n\\begin{enumerate}[label=\\textbf{(i)}]\n \\item \\label{asp:i} \\text{the random variable }$Z\\in\\mathbb{R}^p$ is bounded (i.e. $\\|Z\\|_\\infty\\le c_1$ a.s.);\n\n\\end{enumerate} \n\\begin{enumerate}[label=\\textbf{(ii)}] \n\t\\item \\label{asp:ii} \n $\\|\\theta_0\\|_\\infty \\le c_2$;\n\\end{enumerate}\t\nNote that from \\ref{asp:i} and \\ref{asp:ii}, one can bound $e^{\\theta_0'z} \\le e^{pc_1c_2}$. Also, suppose\n\\begin{enumerate}[label=\\textbf{(iii)}] \n \\item \\label{asp:iii} $c_3 \\leq \\inf_{t \\in [0, \\varrho]} \\lambda_0(t) \\leq \\sup_{t \\in [0, \\varrho]} \\lambda_0(t) \\leq c_4$ and $r_0 = \\log \\lambda_0 \\in \\mathcal{H}(\\beta, D)$, where $\\beta, D > 0$;\n\\end{enumerate} \n\\begin{enumerate}[label=\\textbf{(iv)}]\n \\item \\label{asp:iv} $g_z$ is a continuous density and\n $c_5 \\leq \\inf_{t \\in [0, \\varrho]} g_z(t) \\leq \\sup_{t \\in [0, \\varrho]} g_z(t) \\leq c_6$;\n\\end{enumerate}\n\nAssumptions \\ref{asp:i}--\\ref{asp:iv} are common in the related literature; e.g., see \\citet{cast12} (p. 17). \nAs $\\theta_0, \\lambda_0, g_z(u)$ are all assumed to be bounded from above and below, \\ref{asp:i}--\\ref{asp:iv} imply\n \\begin{enumerate}[label=\\textbf{(v)}] \n\t\\item \\label{asp:v} For $M_2(u)$ and $\\tilde I_{\\eta_0}^{-1}$ in (\\ref{M2}) and (\\ref{eff-info-mtx}) respectively,\n\t$\\Lambda_0\\{M_2(\\cdot)\\} \\geq c_7 I_p$ and $\\|\\tilde I_{\\eta_0}^{-1}\\|_{(\\infty,\\infty)} \\leq c_8$.\n\\end{enumerate} \n\nThe above conditions are mostly assumed for technical simplicity: boundedness of the true vector $\\theta$ enables one to take a prior with bounded support, which is particularly helpful in order to carry out likelihood expansions. Attempting to remove this condition would lead to delicate questions on likelihood remainder terms and is beyond the scope of this work. The smoothness condition on the (log--) hazard is quite mild for smooth hazards: it includes for instance the case of Lipschitz hazards. Another interesting setting would be the one of piecewise constant hazards. 
It could be treated with the techniques developed in this paper (a given histogram can always be approximated arbitrarily well in the $L^2$--sense by a Haar-histogram, and then the problem becomes nearly parametric), although it would require a somewhat separate treatment: we refrain from providing theory for this case here, but we do consider it as one of the examples in the simulation study in Section \\ref{sec:sim}, where simulations show very good behavior in this setting as well. Henceforth we assume that \\ref{asp:i}--\\ref{asp:iv} hold without explicit reference. \n\n\\subsection{Prior distributions}\n\\label{priors}\n\nPriors for $\\theta$ and $\\lambda$ are chosen independently. \nThe prior for $\\theta$ is chosen as follows: \n\\begin{enumerate}[label=\\textbf{(T)}]\n\\item \\label{prior:T} \nLet $\\theta_j$ be the $j$-th coordinate of $\\theta$ and set \n$\\pi(\\theta) = \\bigotimes_{j=1}^p \\pi(\\theta_j) = \\bigotimes_{j=1}^p f(\\theta_j)\\mathbbm{1}_{[-C, C]}$ for some constant $C > 0$.\nExamples for $f(\\theta_j)$ include the uniform density, i.e., $f(\\theta_j) = 1$, and the {\\em truncated} (to $[-C,C]$) $\\tau$--Subbotin density \\citep{subb23}, which includes the {\\em truncated} Laplace ($\\tau = 1$) and {\\em truncated} normal ($\\tau = 2$) densities as special cases. \n\\end{enumerate}\n\nImposing the truncation does not seem necessary in practice, as we found in the simulation study. However, for deriving our theoretical results, and similarly to \\citet{cast12} and \\citet{ghosal17}, we assume that at least some upper bound on $\\|\\theta\\|_\\infty$ is known, as discussed in the previous subsection (so that one can take e.g. $C=c_2$ in \\ref{asp:ii}). \n\nFor the prior on $\\lambda$, two classes of piecewise constant priors are considered throughout the paper:\n\\begin{enumerate}[label=\\textbf{(H)}]\n\\item \\label{prior:H} {\\it Random histogram prior.} \nFor $L' = L+1$, with $L$ an $n$-dependent deterministic value to be specified below,\nlet\n\\begin{align}\n\\label{histogram-prior}\n\\lambda_H = \\sum_{k=0}^{2^{L+1} - 1} \\lambda_k \\mathbbm{1}_{I_k^{L+1}}, \n\\end{align}\nwhere \n$I_0^{L'} = [0, 2^{-L'}]$ and $I_k^{L'} = (k 2^{-L'}, (k+1)2^{-L'}]$ for $k \\geq 1$,\nand $(\\lambda_k)$ are nonnegative random variables. We consider putting the following two types of priors on the $k$-th histogram height, $\\lambda_k$:\n\\end{enumerate}\n\n\\begin{itemize}\n\\item[(i)] An independent Gamma prior: the $\\lambda_k \\sim \\text{Gamma}(\\alpha_0, \\beta_0)$ are i.i.d. variables, for some fixed positive $\\alpha_0,\\beta_0$.\n\\item[(ii)] A dependent Gamma prior: let $\\lambda_0 \\sim \\text{Gamma}(\\alpha_0, \\beta_0)$ and $\\lambda_k| \\lambda_{k-1} \\sim \\text{Gamma}(\\alpha, \\alpha\/\\lambda_{k-1})$ for some positive constant $\\alpha$. \nThen for $k \\geq 1$, $E(\\lambda_k \\,|\\, \\lambda_{k-1}) = \\lambda_{k-1}$ and $\\text{Var}(\\lambda_k \\,|\\, \\lambda_{k-1}) = \\lambda_{k-1}^2\/\\alpha$. \n\\end{itemize}\n\n\\begin{enumerate}[label=\\textbf{(W)}]\n\\item \\label{prior:W} {\\it Haar wavelet prior.} \nLet $r_S = \\log \\lambda_S$ and, again for $L$ to be specified below, set\n\\begin{align} \n\\label{wavelet-prior}\nr_S = \\sum_{l = -1}^{L} \\sum_{k=0}^{2^l - 1} \\sigma_l Z_{lk} \\psi_{lk},\n\\end{align}\nwhere the $Z_{lk}$ are random variables, $\\sigma_l = 1$ for $0 \\leq l \\leq L$, and $(\\psi_{lk})$ is the Haar wavelet basis. 
\n\\end{enumerate}\nAlthough a variety of densities can be considered for $Z_{lk}$, we specifically consider for simplicity the standard normal density and the standard Laplace density (other choices of $\\sigma_l$ e.g. $2^{-l\/2}$ are possible). Note that both densities give a non-conjugate prior for $r_S$. \n \nAlso note that the random histogram prior can be viewed as a special case of the Haar wavelet prior if one allows for possibly dependent variables $Z_{lk}$ (and possibly different values for $\\sigma_l$). \n\n{\\em Choice of the parameter $L$}. For results on the specific priors as above, we consider the choice of cut-off $L=L_n$ defined as, for $\\beta>0$ the assumed regularity of $r_0=\\log\\lambda_0$ (see \\ref{asp:iii}),\n\\begin{align} \\label{choicel}\n2^{L_n}=2^{L_n(\\beta)} & \\circeq \\left(\\frac{n}{\\log{n}}\\right)^{\\frac{1}{2\\beta+1}},\n\\end{align}\nwhere $\\circeq$ means that one picks a closest integer solution in $L_n$ of the equation. If the regularity of the true $r_0$ is not known in advance, as is usually the case in practice, then all the limiting shape (Bernstein--von Mises) results below go through if one replaces $\\beta$ by $1\/2$ in \\eqref{choicel} (note also that all the main results, except the Hellinger rate which requires no minimal smoothness, require a regularity $\\beta>1\/2$). In other words, for the semiparametric results, it is enough to `undersmooth'. The strict knowledge of $\\beta$ is only required if one wishes to obtain an optimal minimax supremum-norm contraction rate (see Section \\ref{sec:sup-norm} for more on this).\n\n\\section{Main results}\n\\label{sec:main-results}\n\nLet us give a brief outline of our results.\nSection \\ref{sec:hellinger} provides a preliminary contraction result in Hellinger distance.\nSection \\ref{sec:jointBvM} presents a joint Bernstein--von Mises (BvM) theorem for linear functionals of $\\theta$ and $\\lambda$. \nSection \\ref{sec:npBvM} derives a Donsker theorem for the joint posterior distribution of $\\theta$ and the cumulative hazard function $\\Lambda$. \nThis result leads to the Donsker theorem for the posterior of the conditional cumulative hazard and survival functions.\nA supremum-norm convergence rate for the conditional hazard function is obtained in Section \\ref{sec:sup-norm}.\nIn Sections \\ref{sec:jointBvM}-\\ref{sec:sup-norm}, we provide generic conditions that are suitable for a wide range of priors. In Section \\ref{sec:application}, we verify those conditions for those specific choices of priors listed in Section \\ref{priors}. \n\n\n\\subsection{A key preliminary contraction result}\n\\label{sec:hellinger}\n\nWe start by obtaining a preliminary Hellinger contraction rate for the posterior distribution for the priors considered above. \n\nDefine the rate, for $\\beta\\in(0,1]$,\n\\begin{equation} \\label{rateveps}\n\\varepsilon_n=\\varepsilon_{n,\\beta}=\\left(\\frac{\\log{n}}{n}\\right)^{\\frac{\\beta}{2\\beta+1}}.\n\\end{equation}\n\nLet $\\epsilon_n=o(1)$ be a sequence such that $n\\epsilon_n^2\\to\\infty$ as $n \\to \\infty$. 
Define $\\zeta_n=\\zeta_n(\\epsilon_n)= 2^{L_n\/2} \\epsilon_n + 2^{-\\beta L_n}$ and \n\\begin{equation}\\label{defan}\nA_n = \\{\\eta:\\ \\|\\theta - \\theta_0\\| \\leq \\epsilon_n, \\|\\lambda - \\lambda_0\\|_{1} \\leq \\epsilon_n, \\|\\lambda - \\lambda_0\\|_\\infty \\leq \\zeta_n\\}\n\\end{equation}\nLet us consider the following condition:\n\\begin{enumerate}[label=\\textbf{(P)}]\n\\item \\label{cond:P} \nThe sequences $L_n,\\epsilon_n$ verify $L_n = o(\\sqrt{n} \\epsilon_n)$, $L_n^2 = o(1\/\\epsilon_n)$, and $\\sqrt{n} \\epsilon_n^2 L_n = o(1)$ and, for $A_n$ as in \\eqref{defan},\n\\[ \\Pi(A_n^c \\,|\\, X) = o_{P_{\\eta_0}}(1).\\]\n\\end{enumerate}\nCondition \\ref{cond:P} requires the posterior distribution to contract in a certain sense around the true pair $(\\theta_0,r_0=\\log{\\lambda_0})$. In order to derive such a result, one may first apply the general contraction rate theorem of \\cite{ghosal00}. This, however, entails a rate for the overall density $f_\\eta$ in the Cox model only, not the parameters themselves. The main difficulty here is then to derive results on $\\theta$ and $\\lambda$ separately. The rate $\\epsilon_n$ can be thought of as a typical (possibly optimal) nonparametric rate. We call condition \\ref{cond:P} a preliminary contraction result, because faster rates both for $\\theta$ and for $\\lambda$ in the supremum norm can be derived, as will be seen below. In fact, for $\\theta$, it is expected that the posterior contracts at parametric, near $1\/\\sqrt{n}$ rate; a much more precise result is obtained in Section \\ref{sec:jointBvM} below in the form of a BvM theorem. \n\nThe next lemma shows that condition \\ref{cond:P} is indeed satisfied for the examples of priors introduced in Section \\ref{priors}. \n\\begin{lemma}\n\\label{lemma1}\nConsider the Cox model \nwith priors as specified in {\\rm \\ref{prior:T}}, and {\\rm \\ref{prior:H}} or {\\rm \\ref{prior:W}} with $L=L_n$ as in \\eqref{choicel}. \nThen for any $\\beta\\in(0,1]$ and $\\varepsilon_n$ as in \\eqref{rateveps},\n\\begin{align*}\n& \\Pi[\\{\\eta:\\ h^2(f_{\\eta_0}, f_\\eta) \\gtrsim \\varepsilon_{n}^2\\} \\,|\\, X]=o_{P_{\\eta_0}}(1).\n\\end{align*}\nFurther, for any $\\beta\\in(1\/2,1]$, condition {\\rm \\ref{cond:P}} is satisfied for these priors for $\\epsilon_n=\\varepsilon_n$ as in \\eqref{rateveps}.\n\\end{lemma}\n\nLet us now briefly comment on the preliminary supremum-norm rate $\\zeta_n$ for $\\lambda$ entailed by \\ref{cond:P}. For some cut--offs $L_n$, the rate can be slow. However, it is a $o(1)$ as soon as $\\epsilon_n=o(2^{-L_n\/2})$. For the typical choice of $L_n$ in \\eqref{choicel}, this only requires that $\\beta>1\/2$, which corresponds to a preliminary rate faster than $n^{-1\/4}$. This is much less than what is required for Theorem 5 in \\cite{cast12} or Theorem 12.12 in \\cite{ghosal17}, where a preliminary rate faster than $n^{-3\/8}$ is needed (note that the latter rate rules out the use of regular histograms as priors, since these can get only a rate $n^{-1\/3}$ at best). In Section \\ref{sec:sup-norm}, we show the rate $\\zeta_n$ can be improved by adopting a multiscale analysis approach. For this, a BvM theorem for linear functionals of $\\lambda$ will be needed: it is a consequence of the joint BvM derived in the next section. 
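\n\nFor concreteness, the priors covered by Lemma \\ref{lemma1} are straightforward to simulate from. The following $\\mathsf{R}$ sketch (purely illustrative; the function name and hyperparameter values are ours) produces one draw of the histogram heights under prior {\\rm \\ref{prior:H}}, with the cut-off $L_n$ chosen as in \\eqref{choicel}.\n\\begin{verbatim}\n## Sketch (illustrative): one draw of the histogram heights under prior (H),\n## with 2^{L_n} of the order (n / log n)^{1 / (2 beta + 1)}.\ndraw_histogram_prior <- function(n, beta = 1/2, alpha0 = 1, beta0 = 1,\n                                 alpha = 1, dependent = FALSE) {\n  Ln <- round(log2((n / log(n))^(1 / (2 * beta + 1))))\n  K  <- 2^(Ln + 1)                       # number of dyadic bins of [0, 1]\n  lambda <- numeric(K)\n  lambda[1] <- rgamma(1, shape = alpha0, rate = beta0)\n  for (k in 2:K) {\n    lambda[k] <- if (dependent) {\n      rgamma(1, shape = alpha, rate = alpha / lambda[k - 1])  # mean lambda_{k-1}\n    } else {\n      rgamma(1, shape = alpha0, rate = beta0)                 # i.i.d. heights\n    }\n  }\n  lambda\n}\n\\end{verbatim}\n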
\n\nFrom Section \\ref{sec:jointBvM} to Section \\ref{sec:sup-norm}, we work with a generic histogram prior of the form \\eqref{wavelet-prior} with cut--off $L=L_n$ (which includes both {\\rm \\ref{prior:H}} and {\\rm \\ref{prior:W}} as special cases), under the above condition \\ref{cond:P}. This way, the reader can directly see what generic conditions underpin our results, and adapt these to other relevant families of priors not considered here for the sake of brevity. For instance, smoother wavelet bases $(\\psi_{lk})$ can be used in the prior definition and require only minor adaptations of the proofs (in a similar way as in \\cite{cast20survival} for the simple nonparametric survival model); although we do not prove this here for brevity, using these priors would enable one to derive optimal contraction rates in the supremum norm for arbitrary regularities $\\beta>1\/2$. We come back to the specific examples of priors {\\rm \\ref{prior:H}} and {\\rm \\ref{prior:W}} in Section \\ref{sec:application}. \n\n\\subsection{The joint BvM theorem for the linear functionals of $\\theta$ and $\\lambda$} \n\\label{sec:jointBvM}\n\nLet us consider the joint estimation of the two linear functionals defined by\n\\begin{align}\n\\label{linear-fns}\n\\varphi_a(\\theta) = \\theta'a, \\quad \n\\varphi_b(\\lambda) = \\langle b, \\lambda \\rangle = \\int_0^1 b(u) \\lambda(u) du = \\Lambda\\{b\\},\n\\end{align}\nfor fixed $a \\in \\mathbb{R}^p$ and $b \\in L^2(\\Lambda)$.\n Let us recall that we work under the generic form of prior \\eqref{wavelet-prior} with cut--off $L=L_n$ and generic condition \\ref{cond:P}. Let us also recall the notation from Section \\ref{sec:infonot}: $M_0, M_1, M_2$, $\\gamma_b(\\cdot) = (b\/M_0)(\\cdot)$ for a bounded function $b$, and $\\tilde I_{\\eta_0}$. \n\nConsider the following conditions:\n\\begin{enumerate}[label=\\textbf{(B)}]\n\\item \\label{cond:B} \nLet $P_{L_n}(\\cdot)$ be the orthogonal projection onto $\\mathcal{V}_{L_n} := \\text{Vect}\\{\\psi_{lk}, \\ l \\leq L_n, \\ 0 \\leq k < 2^l\\}$, the subspace of $L^2[0,1]$ spanned by the first $L_n$ wavelet levels, and denote $\\gamma_{b,L_n} = P_{L_n}(\\gamma_b)$ and $\\gamma_{M_1, L_n} = P_{L_n}(\\gamm)$.\nFor any fixed $b \\in L^\\infty[0,1]$ and $\\epsilon_n$ in {\\ref{cond:P}}, \n$$\n\\sqrt{n} \\epsilon_n \\|\\gamma_b - \\gamma_{b,L_n}\\|_{\\infty} = o(1).\n$$\n\\end{enumerate}\n\nCondition \\ref{cond:B} is sometimes called {\\it no-bias} condition, and holds true if $b$ is sufficiently smooth and (or) the preliminary contraction rate $\\epsilon_n$ is fast enough. 
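\n\nThe quantity appearing in {\\rm \\ref{cond:B}} can also be examined numerically for a given $b$ (taking $f = \\gamma_b$ below): the projection onto the span of the first $L$ Haar levels is simply the piecewise-constant function obtained by averaging over the $2^{L+1}$ dyadic intervals. A minimal $\\mathsf{R}$ sketch (illustrative only; names are ours) computing an approximation of $\\|f - P_{L}(f)\\|_\\infty$ on a fine grid is as follows.\n\\begin{verbatim}\n## Sketch (illustrative): sup-norm distance between a function f on [0, 1]\n## and its projection onto the first L Haar levels (= dyadic bin averages).\nhaar_proj_sup_err <- function(f, L, ngrid = 2^12) {\n  t    <- (seq_len(ngrid) - 0.5) / ngrid   # fine grid on (0, 1)\n  bins <- ceiling(t * 2^(L + 1))           # dyadic interval index\n  fbar <- ave(f(t), bins)                  # bin averages, i.e. P_L f on the grid\n  max(abs(f(t) - fbar))                    # approximates the sup-norm error\n}\n## example: haar_proj_sup_err(function(u) sin(2 * pi * u), L = 5)\n\\end{verbatim}\n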
\nNext, let $h = (t, s) \\in \\mathbb{R}^2$ and, for fixed $a \\in \\mathbb{R}^p$ and $b \\in L^2(\\Lambda)$, consider the two local paths\n\\begin{align}\n&\\theta_h = \\theta - \\frac{t \\tilde I_{\\eta_0}^{-1} a}{\\sqrt{n}} + \\frac{s \\tilde I_{\\eta_0}^{-1} \\Lambda_0\\{b \\gamm\\}}{\\sqrt{n}},\n\\label{path-theta}\\\\\nr_h = r +& \\frac{t \\gamma_{M_1, L_n}' \\tilde I_{\\eta_0}^{-1} a}{\\sqrt{n}} - \\frac{s\\gamma_{b,L_n}}{\\sqrt{n}} - \\frac{s\\gamma_{M_1, L_n}' \\tilde I_{\\eta_0}^{-1} \\Lambda_0\\{b\\gamm\\}}{\\sqrt{n}}.\n\\label{path-r}\n\\end{align}\n\\begin{enumerate}[label=\\textbf{(C1)}]\n\\item \\label{cond:C1} ({\\it Change of variables condition}) With $r = \\log \\lambda$ and $r_0 = \\log \\lambda_0$, let $\\eta_0 = (\\theta_0, r_0)$ and $\\eta_h = (\\theta_h, r_h)$, \nwhere $\\theta_h$ is as in (\\ref{path-theta}) and $r_h$ as in (\\ref{path-r}), with $a, b$ to be specified below. For $A_n$ as in \\ref{cond:P}, suppose \n$$\n\\frac{\n\\int_{A_n} e^{\\ell_n(\\eta_h) - \\ell_n(\\eta_0)} d\\Pi(\\eta)\n}{\n\\int e^{\\ell_n(\\eta) - \\ell_n(\\eta_0)} d\\Pi(\\eta)\n} \n= 1 + o_{P_{\\eta_0}}(1).\n$$\n\\end{enumerate}\n\nCondition \\ref{cond:C1} is often called a {\\em change of variables condition}: indeed, one natural way to check it is via controlling the change in distribution from $\\eta\\sim \\Pi$ to the distribution induced on $\\eta_h$. For priors such as \\ref{prior:H} and \\ref{prior:W}, this can be checked by performing a change of variables with respect to the Lebesgue measure on $\\mathbb{R}^{L_n}$. \nThe verification of these conditions for these priors is given in Section \\ref{verify-cov}. \n\nFor $\\eta \\sim \\Pi(\\cdot \\,|\\, X)$, $\\mu = (\\mu_1, \\mu_2) \\in \\mathbb{R}^2$, $a \\in \\mathbb{R}^p$, and $b \\in L^2(\\Lambda)$,\nlet us define the map\n$$\n\\tau_\\mu: \\eta \\to \\sqrt{n} \n(\\theta'a - \\mu_1,\n\\langle \\lambda, b \\rangle - \\mu_2),\n$$\nand let $\\Pi(\\cdot \\,|\\, X) \\circ \\tau_\\mu^{-1}$ be the distribution induced on $\\sqrt{n} (\\theta'a - \\mu_1,\n\\langle \\lambda, b \\rangle - \\mu_2)$.\nWe are now ready to present the joint BvM theorem for the bivariate functionals $\\varphi_a(\\theta)$ and $\\varphi_b(\\lambda)$. \n\n\\begin{theorem}\n\\label{thm:bvm-linear-fns}\nLet $a$ and $b$ be fixed elements of $\\mathbb{R}^p$ and $L^2(\\Lambda_0)$ respectively, with $b$ satisfying {\\rm \\ref{cond:B}}, and\nsuppose the prior for $\\eta = (\\theta, \\lambda)$ is chosen such that {\\rm \\ref{cond:P}} and {\\rm \\ref{cond:C1}} hold. 
Then\n\\begin{align}\n\\label{thm1-1}\n\\mathcal{B}_{\\mathbb{R}^2} \\left( \\Pi(\\cdot \\,|\\, X) \\circ \\tau_{\\hat \\varphi}^{-1}, \n\\mathcal{L}(a'\\mathbb{V}, \\Upsilon_b - \\mathbb{V}\\Lambda_0\\{b\\gamm\\} )\n\\right) \n\\stackrel{P_{\\eta_0}}{\\to} 0,\n\\end{align}\nwhere $\\mathbb{V}$ and $\\Upsilon_b$ are independent, \n$\\mathbb{V} \\sim N\\left(0, \\tilde I_{\\eta_0}^{-1}\\right)$ and $\\Upsilon_b \\sim N(0, \\Lambda_0\\{b \\gamma_b\\})$, \n$\\mathcal{B}_{\\mathbb{R}^2}$ is the bounded Lipschitz metric between distributions on $\\mathbb{R}^2$, and\n$\\hat\\varphi = (\\hat\\varphi_a(\\theta), \\hat\\varphi_b(\\lambda))$ is given by \n\\begin{align*}\n\\hat \\varphi_a(\\theta) & := \\varphi_a(\\hat \\theta) = \\varphi_a(\\theta_0) \n+ \\frac{1}{\\sqrt{n}}W_n\\left(\\tilde I_{\\eta_0}^{-1} a, \\ - \\gamm' \\tilde I_{\\eta_0}^{-1} a\\right),\\\\\n\\hat \\varphi_b(\\lambda) := \\varphi_b(\\hat \\lambda) & = \\varphi_b(\\lambda_0) + \n\\frac{1}{\\sqrt{n}} W_n \\left(- \\tilde I_{\\eta_0}^{-1} \\Lambda_0\\{b \\gamm\\}, \\gamma_b + \\gamm' \\tilde I_{\\eta_0}^{-1} \\Lambda_0 \\{b\\gamm\\}\\right).\n\\end{align*}\n\\end{theorem}\n\nThe centering sequences in the last display of the statement can be seen to be `efficient' ones from the semiparametric perspective (see e.g. \\cite{vdvAS}, Chapter 25). An important added value of the {\\em joint} BvM (in contrast to individual limiting statements for the marginal coordinates) is that it captures the dependence between $\\theta$ and $\\lambda$: a practical application is given in Figure \\ref{fig:joint-density}. \nThe result makes it possible to consider many combinations of functionals by choosing specific $a \\in \\mathbb{R}^p$ and $b \\in L^2(\\Lambda_0)$. For example, let $a = (1, 0, \\dots, 0)$ and $b = \\mathbbm{1}_{[0,1]}$: Theorem \\ref{thm:bvm-linear-fns} then implies a joint BvM for $(\\theta_1, \\Lambda(1))$, where $\\theta_1$ is the first coordinate of $\\theta$ and $\\Lambda(1) = \\int_0^1 \\lambda$ is the cumulative hazard function at time one.\nThe limiting distribution is given in the next corollary, where, in addition, we center the joint posterior at efficient frequentist estimators for $\\theta$ and $\\Lambda(1)$. \n\n\n\\begin{corollary}[Joint BvM for $\\theta_1$ and $\\Lambda(1)$]\n\\label{cor-1}\nConsider the Cox model with the density function in (\\ref{eqn:density-function}). \nLet $a = (1, 0, \\dots, 0)$ and $b = 1$, and let $\\tau_{(\\hat \\theta_1, \\hat \\Lambda(1))}$ be the map $$\n\\tau_{(\\hat \\theta_1, \\hat \\Lambda(1))}: \\eta \\rightarrow \\sqrt{n} \\left(\\theta_1 - \\hat \\theta_1, \\Lambda(1) - \\hat \\Lambda(1)\\right),\n$$\nwhere $\\hat \\theta$ is the maximum Cox partial likelihood estimator and $\\hat \\Lambda(1)$ is the Breslow estimator. \nDenote by $\\Pi(\\cdot \\,|\\, X) \\circ \\tau_{(\\hat\\theta_1, \\hat \\Lambda(1))}^{-1}$ the distribution induced on \n$\\sqrt{n} (\\theta_1 - \\hat \\theta_1, \\Lambda(1) - \\hat \\Lambda(1))$. Then, \nunder the same conditions as in Theorem \\ref{thm:bvm-linear-fns}, \n\\begin{align}\n\\label{BvM-2}\n\\mathcal{B}_{\\mathbb{R}^2}\n\\left(\n\\Pi(\\cdot \\,|\\, X) \\circ \\tau_{(\\hat \\theta_1, \\hat \\Lambda(1))}^{-1}, \\\n\\mathcal{L}(a' \\mathbb{V}, \\Upsilon_1 - \\mathbb{V}\\Lambda_0\\{\\gamm\\})\n\\right) \\stackrel{P_{\\eta_0}}{\\to} 0,\n\\end{align}\nwhere \n$\\Upsilon_1 \\sim N(0, \\Lambda_0\\{M_0^{-1}\\})$ and $\\mathbb{V} \\sim N(0, \\tilde I_{\\eta_0}^{-1})$ are independent. 
\n\\end{corollary}\n\nAn immediate practical implication of the BvM theorem in Corollary \\ref{cor-1} is that two-sided quantile credible sets for $\\theta_1$ (or more generally for any given coordinate $\\theta_j$, $j = 1, \\dots, p$) are asymptotically optimal confidence sets from the frequentist efficiency perspective. Results in this vein can also be derived for the survival function in the functional sense: this is the object of the next section.\n\n\\subsection{Joint Bayesian Donsker theorems}\n\\label{sec:npBvM}\n\nWe now present the second main result of this paper, the Bayesian Donsker theorem for the joint posterior distribution of $\\theta$ and the cumulative hazard function $\\Lambda(\\cdot)$. \n\nLet us denote\n\\begin{align} \nW_n^{(1)} = W_n^\\star \\left(\\tilde I_{\\eta_0}^{-1}, \\ -\\gamm' \\tilde I_{\\eta_0}^{-1}\\right),\n\\label{Wn-1}\n\\end{align}\nwhere $W_n^{(1)}$ is a $p$-dimensional vector and\n\\begin{equation*}\n\\begin{split}\nW_n^\\star \\left(\\tilde I_{\\eta_0}^{-1},\\ -\\gamm' \\tilde I_{\\eta_0}^{-1}\\right) \n= \n\\frac{1}{\\sqrt{n}} \\sum_{i=1}^n\n\\left\\{\n\\delta_i \\tilde I_{\\eta_0}^{-1} \\left(Z_i - \\gamm \\right)\n- e^{\\theta_0'Z_i} \\tilde I_{\\eta_0}^{-1} \\left(Z_i \\Lambda_0(Y_i) - (\\Lambda_0{\\gamm})(Y_i) \n\\right)\n\\right\\},\n\\end{split}\n\\end{equation*}\nand given $b\\in L^2(\\Lambda_0)$,\n\\begin{align} \nW_n^{(2)}(b) = W_n \\left(-\\tilde I_{\\eta_0}^{-1} \\Lambda_0\\{b \\gamm\\}, \\gamma_b + \\gamm' \\tilde I_{\\eta_0}^{-1} \\Lambda_0 \\{b \\gamm\\} \\right).\n\\label{Wn-2}\n\\end{align}\n\nDefine the centering sequences for $\\theta$ and $\\lambda$ as follows:\n$$\nT_n^\\theta = \\theta_0 + W_n^{(1)}\/\\sqrt{n},\n$$\nand, for a given sequence $L_n$, \n\\begin{align*}\n\\langle T_{n}^\\lambda, \\psi_{lk} \\rangle \n= \n\\begin{cases}\n\\langle \\lambda_0, \\psi_{lk} \\rangle \n+\nW_n^{(2)}(\\psi_{lk})\/\\sqrt{n} & \\quad \\text{if} \\ l\\leq L_n,\\\\\n\\ 0 & \\quad \\text{if} \\ l > L_n.\n\\end{cases}\n\\end{align*}\n\nThe Donsker theorem requires, in addition to \\ref{cond:C1}, a similar condition in which $t$ and $s$ are allowed to increase with $n$. This condition is stated as follows:\n\n\\begin{enumerate}[label=\\textbf{(C2)}]\n\\item \\label{cond:C2} ({\\it Change of variables condition, version 2}) With the same notation as in \\ref{cond:C1}, let $\\eta_h = (\\theta_h, r_h)$, with $\\theta_h$ and $r_h$ the local paths in \\eqref{path-theta} and \\eqref{path-r} respectively and $a, b$ to be specified below.\nFor $n$ large enough and any $|t|, |s| \\leq \\log n$, one assumes, for $A_n$ as in \\ref{cond:P} and some constant $C_1 > 0$,\n\n$$\n\\frac{\n\\int_{A_n} e^{\\ell_n(\\eta_h) - \\ell_n(\\eta_0)} d\\Pi(\\eta)\n}\n{\n\\int e^{\\ell_n(\\eta) - \\ell_n(\\eta_0)} d\\Pi(\\eta)\n}\n\\leq e^{C_1(1 +t^2 + s^2)}.\n$$\n\\end{enumerate}\n\nCondition \\ref{cond:C2} is similar to \\ref{cond:C1}. The major difference between the two conditions is that in \\ref{cond:C2} $t$ and $s$ are allowed to increase with $n$, whereas in \\ref{cond:C1} they are fixed. \n\nWe further require the rates $\\epsilon_n$ and $\\zeta_n$ and the cut-off $L_n$ in \\ref{cond:P} to satisfy\n\\begin{align}\n\\label{ez-rate-condition}\n\\sqrt{n} \\epsilon_n 2^{-L_n} = o(L_n^{-5\/2}), \\quad\n\\zeta_n L_n^2 = o(1).\n\\end{align}\n\n\\begin{theorem}[Joint Bayesian Donsker theorem]\n\\label{thm:donsker-joint}\n Suppose the prior for $\\eta = (\\theta, \\lambda)$ is chosen such that both\n{\\rm \\ref{cond:P}} and (\\ref{ez-rate-condition}) hold. 
Suppose that {\\rm \\ref{cond:C1}} holds for $a = z$, any fixed $z \\in \\mathbb{R}^p$, and any $b \\in \\mathcal{V}_{\\mathcal{L}} = \\text{Vect}\\{\\psi_{lk}, \\ l \\leq \\mathcal{L}, \\ 0 \\leq k < 2^l \\}$ \nfor a fixed $\\mathcal{L}$, and that {\\rm \\ref{cond:C2}} holds uniformly for $a = z$, any fixed $z \\in \\mathbb{R}^p$, and any $b = \\psi_{LK}$ with $0 \\leq L \\leq L_n$ and $0 \\leq K < 2^L$.\n\nLet $\\mathcal{L}((\\theta, \\Lambda(\\cdot)) \\in \\cdot \\,|\\, X)$ be the distribution induced on $\\theta$ and $\\Lambda(\\cdot) = \\int_0^\\cdot \\lambda$ and let $\\mathbb{T}_n^\\lambda(\\cdot) = \\int_0^\\cdot T_n^\\lambda$ be the centering for $\\Lambda(\\cdot)$. \nDenote by $\\mathbb{B}(\\cdot)$ a standard Brownian motion and set $U_0(\\cdot) = \\int_0^\\cdot (\\lambda_0\/M_0)(u) du$.\nLet $\\mathbb{V} \\sim N(0, \\tilde I_{\\eta_0}^{-1})$ be independent of $\\mathbb{B}(\\cdot)$.\nThen, as $n \\to \\infty$,\n\\begin{align}\n\\label{eqn:donsker-joint}\n\\mathcal{B}_{\\mathbb{R}^p \\times \\mathcal{C}([0,1])} \\left(\n\\mathcal{L} \\left(\\sqrt{n} (\\theta - T_n^\\theta, \\Lambda(\\cdot) - \\mathbb{T}_n^\\lambda(\\cdot) ) \\,|\\, X \\right), \\\n\\mathcal{L} \\left( \\mathbb{V}, \\mathbb{B}(U_0(\\cdot)) - \\mathbb{V}' \\Lambda_0\\{\\gamm\\}(\\cdot) \\right)\n\\right) \\stackrel{P_{\\eta_0}}{\\to} 0,\n\\end{align}\nwhere $\\mathcal{B}_{\\mathbb{R}^p \\times \\mathcal{C}([0,1])}$ is the bounded-Lipschitz metric on $\\mathbb{R}^p \\times \\mathcal{C}([0,1])$.\n\\end{theorem}\n\n\\begin{remark}\nWhile the proof is left to the supplemental material \\citep{ning22supp}, a key step in obtaining the joint Bayesian Donsker theorem, following ideas from \\cite{cast14a}, is first to establish a BvM for $(\\theta,\\lambda)$ in an appropriate space. However, unlike in \\cite{cast14a}, where one can work directly on the nonparametric quantity of interest, here, due to the semiparametric structure of the model, one needs to prove a {\\em joint} nonparametric BvM for the pair $(\\theta,\\lambda)$; see Proposition \\ref{prop-tightness}.\nThis result is new in this context and is of independent interest for proving similar results in other semiparametric models. \n\\end{remark}\n\nThe centerings $T_n^\\theta$ and $\\mathbb{T}_n^\\lambda$ in Theorem \\ref{thm:donsker-joint} can be replaced with any efficient estimators for $\\theta$ and $\\Lambda$: the next corollary formalizes this with centering at standard frequentist estimators.\n\n\\begin{corollary}\n\\label{cor-donsker}\nLet $\\hat \\theta$ be the maximum Cox partial likelihood estimator and $\\hat \\Lambda(\\cdot)$ the Breslow estimator. Then,\nunder the same conditions as in Theorem \\ref{thm:donsker-joint}, \nas $n \\to \\infty$,\n\\begin{align}\n\\label{cor-donsker-1}\n& \\mathcal{B}_{\\mathbb{R}^p \\times \\mathcal{D}([0,1])} \\left(\n\\mathcal{L}\\left(\\sqrt{n} (\\theta - \\hat \\theta, \\Lambda(\\cdot) - \\hat \\Lambda(\\cdot)) \\,|\\, X \\right), \\\n\\mathcal{L} \\left( \\mathbb{V}, \\mathbb{B}(U_0(\\cdot)) - \\mathbb{V}' \\Lambda_0\\{\\gamm\\}(\\cdot) \\right)\n\\right) \\stackrel{P_{\\eta_0}}{\\to} 0,\n\\end{align}\nwhere $\\mathcal{D}([0,1])$ is the Skorokhod space on $[0,1]$. \n\\end{corollary}\n\nAs an application of Corollary \\ref{cor-donsker}, one obtains the Bayesian Donsker theorem for the conditional cumulative hazard and survival functions by simply applying the functional delta method \\citep[Chapter 20 in][]{vdvAS}. 
\nLet $\\tz$ be a fixed element in $\\mathbb{R}^p$, and recall that we define $S(\\cdot \\,|\\, \\tz) = \\exp(-\\Lambda(\\cdot) e^{\\theta' \\tz})$, the survival function conditional on $\\tz$. \nDenote $\\hat S(\\cdot \\,|\\, \\tz) = \\exp(-\\hat \\Lambda(\\cdot) e^{\\hat \\theta' \\tz})$, with $\\hat\\theta$ and $\\hat \\Lambda(\\cdot)$ the frequentist estimators as above. Then, as $n \\to \\infty$,\n\\begin{align}\n& \\mathcal{B}_{\\mathcal{D}([0,1])} \n\\left(\n\t \\mathcal{L}\\left(\\sqrt{n} (\\Lambda(\\cdot) e^{\\theta'\\tz} - \\hat \\Lambda(\\cdot) e^{\\hat \\theta'\\tz}) \\,|\\, X \\right), \n\t\\ \\mathcal{L} (\\mathbb{H}_1)\n\\right) \n\\stackrel{P_{\\eta_0}}{\\to} 0,\n\\label{donsker-cond-hazard}\n\\\\\n& \\mathcal{B}_{\\mathcal{D}([0,1])} \n\\left(\n\t \\mathcal{L}\\left(\\sqrt{n} (S(\\cdot \\,|\\, \\tz) - \\hat S(\\cdot \\,|\\, \\tz)) \\,|\\, X \\right), \\\n\t\\mathcal{L}(\\mathbb{H}_2)\n\\right) \n\\stackrel{P_{\\eta_0}}{\\to} 0,\n\\label{donsker-cond-survival}\n\\end{align}\nwhere $\\mathbb{H}_1$ and $\\mathbb{H}_2$ are the transformed processes obtained after applying the functional delta method from (\\ref{cor-donsker-1}).\nMoreover, by applying the continuous mapping theorem and noting that the map $f \\mapsto \\|f\\|_\\infty$ is continuous from $\\mathcal{D}([0,1])$, equipped with the supremum norm, to $\\mathbb{R}^+$, (\\ref{donsker-cond-hazard}) and (\\ref{donsker-cond-survival}) imply \n\\begin{align*}\n&\t\\mathcal{B}_{\\mathbb{R}} \n\\left(\n\t \\mathcal{L} \\left(\\sqrt{n} \\|\\Lambda(\\cdot) e^{\\theta'\\tz} - \\hat \\Lambda(\\cdot) e^{\\hat \\theta'\\tz}\\|_{\\infty} \\,|\\, X \\right), \n\t\\ \\mathcal{L} \\left(\\|\\mathbb{H}_1\\|_{\\infty}\\right)\n\\right) \n\\stackrel{P_{\\eta_0}}{\\to} 0, \\\\\n&\t\\mathcal{B}_{\\mathbb{R}} \n\\left(\n\t \\mathcal{L} \\left(\\sqrt{n} \\|S(\\cdot \\,|\\, \\tz) - \\hat S(\\cdot \\,|\\, \\tz) \\|_{\\infty} \\,|\\, X \\right), \\\n\t\\mathcal{L} \\left(\\|\\mathbb{H}_2\\|_{\\infty}\\right)\n\\right) \n\\stackrel{P_{\\eta_0}}{\\to} 0.\n\\end{align*}\nA simple consequence of the last display is that the two-sided $(1-\\alpha)\\%$ quantile credible band for the conditional cumulative hazard (resp. the conditional survival) function is asymptotically a two-sided $(1-\\alpha)\\%$ confidence band (see \\cite{cast14a}, Corollary 2).\n\n\\subsection{The supremum-norm convergence rate for the conditional hazard function}\n\\label{sec:sup-norm}\n\nIn this section, we present the third main result: a faster supremum-norm posterior contraction rate for the conditional hazard function than the rate $\\zeta_n$ in \\ref{cond:P}. We denote this rate by \n$\\xi_n$; it depends on a diverging sequence $L_n$ such that $L_n 2^{L_n} \\lesssim \\sqrt{n}$ and is given by\n$$\n\\xi_n := \\xi_n(\\beta, L_n, \\epsilon_n) = \\sqrt{\\frac{L_n 2^{L_n}}{n}} + 2^{-\\beta L_n} + \\epsilon_n.\n$$\n\n\\begin{theorem}\n\\label{thm:supnorm-rate}\nSuppose $r_0 \\in \\mathcal{H}(\\beta, D)$ with $1\/2 < \\beta \\leq 1$. 
\nLet $L_n$ be a diverging sequence such that $L_n 2^{L_n} \\lesssim \\sqrt{n}$ and let $\\tz \\in \\mathbb{R}^p$ be fixed.\nSuppose the prior for $\\eta = (\\theta, \\lambda)$ is chosen such that both {\\rm \\ref{cond:P}} and (\\ref{ez-rate-condition}) hold, and that {\\rm \\ref{cond:C2}} also holds uniformly for $a = z$ and any $b = \\psi_{LK}$, with $(\\psi_{LK})$ the Haar wavelet basis, $0 \\leq L \\leq L_n$ and $0 \\leq K < 2^L$. \nThen, provided $\\xi_n = o(1)$ and $n \\xi_n^2 \\to \\infty$, for an arbitrary sequence $M_n \\to \\infty$, \n\\begin{align}\n\\label{supnorm-rate}\n\\Pi \\left(\\eta: \\|\\lambda e^{\\theta'\\tz} - \\lambda_0 e^{\\theta_0'\\tz} \\|_{\\infty} > M_n \\xi_n \\,|\\, X \\right) = o_{P_{\\eta_0}}(1).\n\\end{align}\n\\end{theorem}\n\nWe will show in the next section that, with a specific choice of $L_n$, the rate $\\xi_n$ is of the same order as the Hellinger rate $\\varepsilon_n$ in \\eqref{rateveps}.\n \n\\subsection{Results for specific priors}\n\\label{sec:application}\n\nIn this section, we apply the generic results of Sections \\ref{sec:jointBvM}, \\ref{sec:npBvM}, and \\ref{sec:sup-norm} to study the specific priors considered in Section \\ref{priors}.\nThe result is stated in the following theorem.\n\\begin{theorem}\n\\label{thm:specific-sup-rate}\nConsider the Cox model with priors as specified in {\\rm \\ref{prior:T}}, and {\\rm \\ref{prior:H}} or {\\rm \\ref{prior:W}}, with $L = L_n$ in (\\ref{choicel}) and $\\varepsilon_n$ given in \\eqref{rateveps}. For any $\\beta \\in (1\/2, 1]$,\n\\begin{enumerate}\n\\item conditions {\\rm \\ref{cond:P}} and {\\rm \\ref{cond:C1}} hold, and {\\rm \\ref{cond:B}} holds for any $b \\in \\mathcal{H}(\\mu, D)$ with $\\mu > 1\/2$ and $D > 0$; hence (\\ref{thm1-1}) in Theorem \\ref{thm:bvm-linear-fns} holds;\n\\item condition {\\rm \\ref{cond:C2}} also holds; thus (\\ref{eqn:donsker-joint}) in Theorem \\ref{thm:donsker-joint} holds;\n\\item the supremum-norm rate $\\xi_n$ in \\eqref{supnorm-rate} can be taken to be $\\xi_n = \\varepsilon_n$ as in \\eqref{rateveps}. \n\\end{enumerate}\n\\end{theorem}\n\nThe proof of this result, which consists in verifying that conditions \\ref{cond:P}, \\ref{cond:B}, \\ref{cond:C1}, and \\ref{cond:C2} hold for the priors of Section \\ref{priors}, can be found in \\citet{ning22supp}.\n\n\\begin{remark}\nIf $\\beta > 1$, the first and second points in Theorem \\ref{thm:specific-sup-rate} still hold. The third point also holds, but the supremum-norm rate becomes $(\\log n\/n)^{1\/3}$.\n\\end{remark}\n\n\nLet us compare the supremum-norm rate $\\varepsilon_n$ in the third point of the theorem with the rate $\\zeta_n$ obtained in Lemma \\ref{lemma1}. Obviously, $\\varepsilon_n < \\zeta_n$, as $\\zeta_n \\geq 2^{L_n\/2} \\varepsilon_n$ and $L_n \\to \\infty$ as $n\\to \\infty$. In fact, by plugging in the value of $L_n$ in (\\ref{choicel}), \none obtains $\\zeta_n = (\\log n\/n)^{\\frac{2\\beta - 1}{2(2\\beta+1)}}$, which can become extremely slow when $\\beta$ is close to $1\/2$. In Lemma \\ref{lower-bound} in \\citet{ning22supp}, we derive a lower bound for the minimax rate in the supremum norm for the hazard, which shows that the rate $\\varepsilon_n$ is sharp. \nTo the best of our knowledge, this is the first sharp supremum-norm result for the hazard obtained in the Cox model. \n\n\\section{Simulation studies}\n\\label{sec:sim}\n\nTwo simulation studies are conducted in this section. 
\nThe first study, described in Section \\ref{study-I}, compares the limiting distribution given in Corollary \\ref{cor-1} with the empirical distributions obtained from the MCMC algorithm, which is given in Section \\ref{generate-data}.\nThe second study compares the coverage and the area of the 95\\% credible bands obtained from the MCMC algorithm to those of the 95\\% confidence bands of a commonly used frequentist method, varying the sample size and the censoring distribution.\nWe choose the standard normal prior for $\\theta$ and the random histogram prior for $\\lambda$, with the independent and the dependent gamma heights of Section \\ref{priors}, as these are often used in practice. \nIn Section \\ref{generate-data}, we describe how we generate the simulated data and the MCMC sampler. Section \\ref{study-I} presents results for the first study, and Section \\ref{study-II} summarizes results for the second study.\n\n\\subsection{Generating the data and the MCMC sampler}\n\\label{generate-data}\n\nThe data are generated from the ``true'' conditional hazard function $\\lambda_0(t) e^{\\theta_0'z}$, which is chosen as follows:\nwe let $\\theta_0 = -0.5$, a scalar, and generate the covariate $z$ from the standard normal distribution.\nTwo different baseline hazard functions are used to generate the data:\n\\begin{enumerate}[label=\\textbf{(1)}]\n\\item \\label{chzf} $\\lambda_0(t) = 6 \\times ((t + 0.05)^3 - 2(t + 0.05)^2 + t + 0.05) + 0.7, \\ t \\in [0,1]$,\n\\end{enumerate}\n\\begin{enumerate}[label=\\textbf{(2)}]\n\\item \\label{pwhzf} $\\lambda_0(t) = 3 \\times \\mathbbm{1}_{[0,0.4)}(t) + 1.5 \\times \\mathbbm{1}_{[0.4, 0.6)}(t) + 2 \\times \\mathbbm{1}_{[0.6, 1]}(t)$.\n\\end{enumerate}\n\n\n\n\\begin{figure}[!h]\n\\centering\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width=1\\textwidth]{plots\/chzf.pdf}\n\\caption{Plot of the continuous baseline hazard function in {\\rm \\ref{chzf}}.}\n\\label{fig:chzf}\n\\end{minipage}%\n\\hspace{0.04\\textwidth}\n\\begin{minipage}{0.45\\textwidth}\n\\includegraphics[width = 1\\textwidth]{plots\/pwhzf.pdf}\n\\caption{Plot of the piecewise constant baseline hazard function in {\\rm \\ref{pwhzf}}.}\n\\label{fig:pwhzf}\n\\end{minipage}\n\\end{figure}\n\nThe first one is a smooth function and the second one is piecewise constant. \nPlots of the two functions are given in Figures \\ref{fig:chzf} and \\ref{fig:pwhzf} respectively. These numerical choices are for illustration purposes and otherwise fairly arbitrary. Similar simulation results would hold for other smoothly varying or piecewise constant functions (we note once again that, although our theoretical results assume H\\\"older smoothness of the true log-hazard, the techniques go through for histogram true hazards as well).\n\n\nThe ``observations'' $X^n = (Y^n, \\delta^n)$ are generated using the ``{\\it simsurv}'' function in $\\mathsf{R}$. \nWe consider the following two types of censoring: \n\\begin{enumerate}\n\\item {\\it Administrative censoring only}. Observations are censored at the fixed time $t = 1$;\n\\item {\\it Administrative censoring $+$ uniform censoring}. The censoring time is generated from the uniform distribution on $[0,1]$. Any time point beyond $t = 1$ is also censored. 
\n\\end{enumerate}\n\nAlthough the first type of censoring violates our assumption in Section \\ref{sec:assumption}, where the censoring time was assumed to follow a random distribution, it is interesting to see in the next two sections that the empirical results still match our theoretical results quite well.\n\nPosterior draws are obtained using the MCMC algorithm given as follows:\n\\begin{enumerate}\n\\item For the {\\it independent gamma prior}, since it is conditionally conjugate given $\\theta$, we sample each $\\lambda_k \\sim \\text{Gamma}(d_k + \\alpha, T_k(\\theta) + \\beta)$,\nwhere $d_k = \\sum_{i=1}^n \\delta_{ki}$ is the number of events in the $k$-th interval, \n$T_k(\\theta) = \\sum_{i=1}^n Y_{ik} e^{\\theta Z_i}$, and $\\alpha$ and $\\beta$ are hyperparameters, both set to 1.\nAfter obtaining samples for $(\\lambda_{k})$, we draw $\\theta$. Since $\\pi(\\theta)$ is not conjugate, we first draw a candidate from the proposal density, i.e., $\\theta^{\\text{prop}} \\sim N(\\theta^{\\text{prev}}, 1)$, where $\\theta^{\\text{prev}}$ stands for the draw from the previous iteration, and then use the Metropolis algorithm to accept or reject this candidate.\n\n\\item For the {\\it dependent gamma prior}, which is non-conjugate, we draw each $\\lambda_k$ from a proposal density as follows: $\\lambda_1^{\\text{prop}} \\sim \\text{Gamma}(d_1 + \\alpha_0 - \\alpha, \\ T_1(\\theta) + \\beta_0)$ and\n$\\lambda_k^{\\text{prop}} \\sim \\text{Gamma}(d_k + \\varepsilon, \\ \\alpha\/\\lambda_{k-1} + T_k(\\theta))$ for $k = 2, \\dots, L_n-1$; for the last interval, $K=L_n$, we use $\\lambda_K^{\\text{prop}} \\sim \\text{Gamma}(d_K + \\alpha,\\ \\alpha\/\\lambda_{K-1} + T_K(\\theta))$. In practice, we choose $\\varepsilon = 10^{-6}$, $\\alpha_0 = 1.5$ and $\\alpha = \\beta_0 = 1$. \nThe proposal density for $\\theta$ is the same as above.\n\\end{enumerate}\n\nTo initialize the MCMC algorithm, we set the initial values of $\\theta$ and $\\lambda_k$ to their frequentist estimators (the same as in Corollary \\ref{cor-donsker}). We choose $L_n$ as in (\\ref{choicel}) with $\\beta = 1\/2$.\nFor each simulation, we run 10,000 iterations and discard the first 2,000 draws as burn-in. \n\nLet us now discuss in more detail the simulation in Figure \\ref{fig:intro}. \nThe data are generated by choosing \\ref{chzf}.\nOnly administrative censoring is considered. \nThe prior is chosen to be the independent gamma prior (choosing the dependent gamma prior does not change the results dramatically, as can be seen in Table \\ref{tab:coverage-1} below).\nThe 95\\% credible band is a fixed-width band: its width is constant over time and is determined such that the band has posterior probability 95\\%.\nThe 95\\% confidence band, on the other hand, is obtained using the $\\mathsf{predictCox}$ function of the `riskRegression' package in $\\mathsf{R}$. Its width varies with time. \nHere we briefly describe the approach used in that package; \nwe refer interested readers to \\citet{lin94} and \\citet{scheike08} for more details. 
\nUsing the fact that the frequentist estimator $(\\hat \\theta, \\hat \\Lambda)$ has the same limiting process as $(W_n^{(1)}, \\int_0^t W_n^{(2)}(\\mathbbm{1}_{u \\leq t}) du)$, where \n$W_n^{(1)}$ and $W_n^{(2)}$ are given in \\eqref{Wn-1} and \\eqref{Wn-2}, their approach first defines another process, depending on $W_n^{(1)}$ and $W_n^{(2)}$, that is asymptotically equivalent to $\\sqrt{n} (\\hat \\Lambda(t)e^{\\hat\\theta'z} - \\Lambda(t) e^{\\theta' z})$; see equation 2.1 in \\citet{lin94} for the exact expression of that process.\nThat process is then further approximated by a sum of independent normal variables, whose distribution can easily be generated through Monte Carlo simulation, with the remaining unknown quantities replaced by their sample estimators.\nOnce a large enough number of samples has been generated, the last step is to obtain the width of the 95\\% confidence band for the conditional cumulative hazard function as the 95th percentile of those samples. \nThe confidence band for the conditional survival function can be obtained similarly, except that one first needs to apply the functional delta method to obtain the limiting process for the conditional survival function. The remaining steps are the same. \n \n\\subsection{Study I: Comparing the empirical posterior distributions and the limiting distributions of $\\theta$ and $\\Lambda(1)$}\n\\label{study-I}\n\nWe compare the limiting distribution given in Corollary \\ref{cor-1} with the empirical distribution obtained using the MCMC sampler in Section \\ref{generate-data}.\nWe generate the data with a sample size of 1,000 and choose $\\lambda_0$ as \\ref{chzf}.\nThe prior is chosen as the independent gamma prior; results for the dependent gamma prior are similar. To obtain draws, we run the MCMC algorithm 1,000 times in parallel. \nEach time we only record the last draw of the pair $(\\theta, \\Lambda(1))$, where $\\Lambda(1) = \\sum_{k=1}^{L_n} \\lambda_k$. Therefore, the 1,000 draws we obtain are independent.\n\nWe first study the marginal posterior distributions of $\\theta$ and $\\Lambda(1)$. In \nFigures \\ref{fig:marginal-theta} and \\ref{fig:marginal-Lambda}, \nwe first plot their empirical histograms based on the 1,000 independent draws. \nWe then overlay a normal density (blue) centered at the posterior mean, with variance estimated from those draws. \nLast, we overlay another normal density (red), with the same centering as the blue one but with variance given by the theoretical value from the limiting distribution in Corollary \\ref{cor-1}. \nWe observe that in both the left plot (i.e., for $\\theta$) and the right plot (i.e., for $\\Lambda(1)$), \nthe blue density is well aligned with the red one. This suggests that the empirical variances are close to the theoretical variances obtained from the corollary.\nWe also find that both empirical histograms have shapes similar to the corresponding normal densities, in agreement with normal limiting distributions. 
\nLast, both the true values of $\\theta_0$ and $\\Lambda_0(1)$, $-0.5$ and $1.2$ respectively, are contained inside the corresponding 95\\% credible intervals.\nFor $\\theta$, the interval is $[-0.54, -0.39]$, and for $\\Lambda(1)$, it is $[1.14, 1.34]$.\n\n\\begin{figure}\n\\centering\n\\begin{minipage}{0.45\\textwidth}\n\\centering\n\\includegraphics[width = 1\\textwidth]{plots\/theta.pdf}\n\\caption{Plot of the empirical histogram, the empirical distribution (blue), and the limiting distribution (red) for the marginal posterior distribution of $\\theta$. The true value of $\\theta$ is $-0.5$.}\n\\label{fig:marginal-theta}\n\\end{minipage}%\n\\hspace{0.04\\textwidth}\n\\begin{minipage}{0.45\\textwidth}\n\\includegraphics[width=1\\textwidth]{plots\/Lambda1.pdf}\n\\caption{Plot of the empirical histogram, the empirical distribution (blue), and the limiting distribution (red) for the marginal posterior distribution of $\\Lambda(1)$. The true value of $\\Lambda(1)$ is $1.2$.}\n\\label{fig:marginal-Lambda}\n\\end{minipage}\n\\end{figure}\n\n\nNext, we study the joint posterior distribution of $\\theta$ and $\\Lambda(1)$, which involves the correlation between the two quantities.\nIn Figure \\ref{fig:joint-density}, we give three plots. \nIn (a), we plot the 68\\%, 90\\%, 95\\%, and 99\\% contours of the limiting joint distribution in Corollary \\ref{cor-1}. In (b), we plot the contours at the same four levels for a bivariate normal distribution whose mean, variances, and correlation are estimated from the 1,000 draws. \nIn (c), we overlay the two sets of contours from (a) and (b) and find that they indeed align quite well, \nwhich suggests that the empirical distribution matches the theoretical limiting distribution in the corollary. \nOur calculation reveals that the correlation between $\\theta$ and $\\Lambda(1)$ in (a) is 0.15 and that in (b) is 0.10. \nA benefit of studying the joint posterior distribution, including the correlation, is that one can obtain elliptical credible sets instead of rectangular ones. The length and the width of a rectangular credible set are given by the 97.5\\% credible intervals of $\\theta$ and $\\Lambda(1)$, respectively. \nTherefore, the area of a rectangular credible set is typically larger than that of an elliptical credible set. For example, in (b), the area of the 95\\% elliptical credible set is 1.07 and that of the 95\\% rectangular credible set is 1.76, which is 64\\% bigger.\n\n\\begin{figure}[!]\n\\centering\n\\subfigure[ ]{\\includegraphics[width=0.34\\textwidth]{plots\/jointBvM-th.pdf}}%\n\\subfigure[ ]{\\includegraphics[width=0.34\\textwidth]{plots\/jointBvM-ep.pdf}}%\n\\subfigure[ ]{\\includegraphics[width=0.34\\textwidth]{plots\/jointBvM-overlay.pdf}}\n\\caption{Contour plots of the elliptical credible sets at the 68\\%, 90\\%, 95\\%, and 99\\% levels. (a) is obtained using the joint limiting distribution given in Corollary \\ref{cor-1}, (b) is obtained using the 1,000 independent draws of the pair $(\\theta, \\Lambda(1))$ from the MCMC output. In (c), we overlay the credible sets in (a) and (b). The 1,000 draws are plotted in gray.}\n\\label{fig:joint-density}\n\\end{figure}\n\n\\subsection{Study II: Comparing the coverage and the area between the credible bands and the confidence bands}\n\\label{study-II}\n\nThe study in the last section is based on a single simulated dataset. We now provide a more thorough study to compare the coverage and the area of the credible (or confidence) bands under various settings. 
Specifically, we want to compare: 1) the two Bayesian methods using the independent gamma and the dependent gamma priors; 2) datasets with two different sample sizes, $n = 200$ and $n = 1,000$; 3) data with two different types of censoring: administrative censoring only and administrative censoring with additional uniform censoring; 4) coverages of the baseline survival and conditional survival functions; and 5) data generated from the continuous function in \\ref{chzf} versus data generated from the piecewise constant function in \\ref{pwhzf}.\n\nFor each setting, we generate 1,000 datasets. For each dataset, we run the MCMC sampler to obtain the 95\\% credible (or confidence) band. \nThe coverage is the percentage of the credible (or confidence) bands encompassing the true function. The area is estimated by taking the average of the 1,000 areas of the credible (or confidence) bands. Results using the continuous baseline hazard function are given in Table \\ref{tab:coverage-1} and those using the piecewise constant baseline hazard function are given in Table \\ref{tab:coverage-2}.\n\n\n\\begin{table}[ht]\n\\caption{Coverages and areas of the 95\\% Bayesian credible bands and of the 95\\% confidence bands with $\\lambda_0$ chosen as the continuous function in {\\rm \\ref{chzf}}.}\n\\centering \n\\begin{tabular}{c|cccc||cccc} \n\\hline\\hline \n & \\multicolumn{4}{c||}{\\bf Adm. censoring only} & \\multicolumn{4}{c}{\\bf Adm. $+$ Unif. censoring}\\\\\n\\hline\n& \\multicolumn{2}{c}{baseline survival} & \\multicolumn{2}{c||}{cond. survival} \n& \\multicolumn{2}{c}{baseline survival} & \\multicolumn{2}{c}{cond. survival}\\\\\n & coverage & area & coverage & area & coverage & area & coverage & area\\\\[2pt]\n\\hline\nind. 200 & 0.96 & 0.16 & 0.94 & 0.16 & 0.95 & 0.21 & 0.94 & 0.21\\\\\ndep. 200 & 0.94 & 0.16 & 0.93 & 0.16 & 0.93 & 0.22 & 0.93 & 0.22 \\\\\nfreq. 200 & 0.82 & 0.18 & 0.83 & 0.18 & 0.81 & 0.23 & 0.80 & 0.23 \\\\[2pt]\n\\hline \nind. 1000 & 0.95 & 0.08 & 0.94 & 0.08 & 0.95 & 0.11 & 0.94 & 0.11 \\\\\ndep. 1000 & 0.95 & 0.08 & 0.94 & 0.08 & 0.94 & 0.11 & 0.94 & 0.11 \\\\\nfreq. 1000 & 0.95 & 0.08 & 0.95 & 0.08 & 0.94 & 0.11 & 0.94 & 0.11 \\\\[2pt]\n\\hline\\hline\n\\end{tabular}\n\\label{tab:coverage-1}\n\\end{table}\n\n\nFrom Table \\ref{tab:coverage-1}, in which $\\lambda_0$ is chosen to be a continuous function, we first find that the two Bayesian methods, using either the independent or the dependent gamma prior, produce similar coverage results and areas for the credible bands. \nSecond, when $n = 200$, not only is the coverage of the two Bayesian methods better than that of the frequentist method, but the area is also smaller. The coverages and the areas of the three methods become similar when $n = 1,000$: as the asymptotic approximation becomes more accurate with increasing sample size, the frequentist confidence band attains higher coverage and a smaller area. \nThird, we find that when data are administratively and uniformly censored, the area of the credible bands is larger than for data that are only administratively censored. \nSuch a result is expected, as in a typical simulated dataset the former has $\\sim$45\\% of the data censored, while the latter has only $\\sim$10\\% censored. \nLast, there is no significant difference in coverage and area between the baseline survival function and the conditional survival function, even though the latter accounts for the uncertainty in estimating the regression coefficients. 
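For concreteness, the following minimal sketch (in Python\/NumPy) illustrates how a fixed-width 95\\% credible band, its empirical coverage, and its area can be computed from posterior draws of the cumulative hazard on a time grid. The function and variable names (\\texttt{draws}, \\texttt{center}, \\texttt{grid}) are hypothetical, and the snippet only assumes that posterior draws of $\\Lambda_0(t)$ (or of the corresponding survival function) are available from the sampler described above; it is a schematic illustration rather than the code used to produce the tables.
\\begin{verbatim}
import numpy as np

def fixed_width_band(draws, center, level=0.95):
    # draws:  (n_draws, n_grid) posterior draws of Lambda_0(t) on a grid
    # center: (n_grid,) centering curve, e.g. the posterior mean
    # returns the smallest constant half-width c such that the band
    # center(t) +/- c contains a posterior fraction `level` of the curves
    sup_dev = np.max(np.abs(draws - center), axis=1)
    return np.quantile(sup_dev, level)

def coverage_and_area(draws, center, truth, grid, level=0.95):
    c = fixed_width_band(draws, center, level)
    covered = bool(np.all(np.abs(truth - center) <= c))  # band contains true curve?
    area = 2.0 * c * (grid[-1] - grid[0])                 # constant width over the grid
    return covered, area

# toy illustration with synthetic posterior draws
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 100)
truth = 1.2 * grid
draws = truth + 0.05 * rng.standard_normal((4000, grid.size))
center = draws.mean(axis=0)
covered, area = coverage_and_area(draws, center, truth, grid)
\\end{verbatim}
Repeating such a computation over the 1,000 simulated datasets and averaging the coverage indicators and band areas yields empirical coverages and areas of the kind reported in the tables.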
\n\n\\begin{table}[ht]\n\\caption{Coverages and areas of the 95\\% Bayesian credible bands and of the 95\\% confidence bands with $\\lambda_0$ chosen as the piecewise constant function in {\\rm \\ref{pwhzf}}.}\n\n\\centering \n\\begin{tabular}{c|cccc||cccc} \n\\hline\\hline \n & \\multicolumn{4}{c||}{\\bf Adm. censoring only} & \\multicolumn{4}{c}{\\bf Adm. $+$ Unif. censoring}\\\\\n\\hline\n& \\multicolumn{2}{c}{baseline survival} & \\multicolumn{2}{c||}{cond. survival} \n& \\multicolumn{2}{c}{baseline survival} & \\multicolumn{2}{c}{cond. survival}\\\\\n & coverage & area & coverage & area & coverage & area & coverage & area\\\\[2pt]\n\\hline\nind. 200 & 0.95 & 0.15 & 0.93 & 0.15 & 0.98 & 0.17 & 0.96 & 0.17\\\\\ndep. 200 & 0.93 & 0.15 & 0.92 & 0.15 & 0.94 & 0.18 & 0.94 & 0.18 \\\\\nfreq. 200 & 0.94 & 0.17 & 0.94 & 0.17 & 0.90 & 0.21 & 0.90 & 0.21 \\\\[2pt]\n\\hline \nind. 1000 & 0.93 & 0.07 & 0.92 & 0.07 & 0.94 & 0.09 & 0.93 & 0.09 \\\\\ndep. 1000 & 0.92 & 0.07 & 0.91 & 0.07 & 0.91 & 0.09 & 0.91 & 0.09 \\\\\nfreq. 1000 & 0.93 & 0.07 & 0.93 & 0.08 & 0.92 & 0.10 & 0.92 & 0.10 \\\\[2pt]\n\\hline\\hline\n\\end{tabular}\n\\label{tab:coverage-2} \n\\end{table}\n\n\nTable \\ref{tab:coverage-2} gives the results for data generated from the piecewise constant baseline hazard in \\ref{pwhzf}. \nWe notice that the coverage of the frequentist confidence bands improves significantly when $n = 200$ compared to that in Table \\ref{tab:coverage-1}. This suggests that the frequentist approach is more accurate for estimating a piecewise constant function. Indeed, the Breslow estimator treats $\\lambda_0$ as piecewise constant between uncensored failure times \\citep[see Page 473 of][]{lin07}.\nHowever, we again observe that the area of the credible bands provided by the two Bayesian methods is smaller than that of the confidence bands. \nWe also find that the two Bayesian methods provide similar coverage results whether $\\lambda_0$ is chosen to be the continuous function or the piecewise constant function. \n\n\n\nIn summary, the Bayesian method can be attractive for datasets with a relatively small sample size, especially when the true baseline hazard function is not piecewise constant. \nThe frequentist method needs to apply an asymptotic approximation to obtain the confidence band, and this approximation can perform poorly when the sample size is relatively small. On the other hand, the proposed Bayesian method provides a credible band without using any asymptotic approximation. Notice that the width of the credible band is constant over time. It should be possible to build a varying-width credible band whose overall area is smaller, both in small samples and asymptotically, but \nthe construction and analysis of such a band are outside the scope of the paper. 
\nYet, the fixed--width credible band considered here already performs remarkably well, in particular in finite samples, and achieves the asymptotic limits expected from the Donsker theorem for large sample sizes.\n\n\n\n\\section{Discussion}\n\\label{sec:diss}\n\nWe provide three new exciting results for the study of the Bayesian Cox model: 1) a joint Bernstein--von Mises theorem for the linear functionals of $\\theta$ and $\\lambda$; in particular, the correlation between the two functionals is captured by the results; 2) a Bayesian Donsker theorem for the conditional cumulative hazard and conditional survival functions; 3) a supremum-norm posterior contraction rate for the conditional hazard function. \n\nThe paper makes major advances on two fronts: on the one hand, it provides new results on optimal posterior convergence rates both in the $L^1$-- and $L^\\infty$--sense for the hazard; uncertainty quantification is considered for finite-dimensional functionals as well as for the posterior cumulative hazard process: those are the first results of this kind for non--conjugate priors (in particular priors for which explicit posterior expressions are not accessible) in this model. On the other hand, the paper provides validation for several classes of practically used histogram priors (see e.g. \\cite{ibrahim01}), both for dependent and independent histogram heights.\n\nAs a comparison, the results from \\cite{cast12} (Theorem 5) and \\cite{ghosal17} (Theorem 12.12) require a fast enough preliminary posterior contraction rate of $n^{-3\/8}$ in terms of the Hellinger distance. This effectively rules out the use of regular histogram priors, which are limited in terms of rate by $n^{-1\/3}$ (corresponding to the optimal minimax rate for Lipschitz functions). Two key novelties here are that {\\em a)} we only require a preliminary rate of an order faster than $n^{-1\/4}$ (corresponding to $\\beta=1\/2$ in \\eqref{rateveps}) and {\\em b)} the use of the multiscale approach introduced in \\cite{cast14a} enables one to provide both optimal supremum-norm contraction rates for the conditional hazard, practically justifying the visual closeness of estimated hazard curves to the true curve, and uncertainty quantification for the conditional cumulative hazard, which follows a BvM for $\\Lambda(\\cdot) e^{\\theta'z}$.\n \nWe also underline that, although not investigated here in detail for reasons of space, the results extend to smoother dictionaries than histograms: for instance, if the true hazard is very smooth, one can derive correspondingly very fast posterior rates (obtaining optimal rates $n^{-\\beta\/(2\\beta+1)}$ for any $\\beta>1\/2$, up to log factors) if one chooses the basis $(\\psi_{lk})$ to be a suitably smooth wavelet basis. We refer the interested reader to \\cite{cast20survival} for more on how to effectively obtain this. \n\nThe present work studies the classical Cox model.\nMany extensions of the model have been proposed, such as the Cox model with time-varying covariates \\citep{fisher99}, the nonproportional hazards model \\citep{schemper02}, and the Cox-Aalen model \\citep{scheike02}, to name a few. 
The Bayesian nonparametric perspective is particularly appealing in these more complex settings; let us cite two recent practical success stories of the approach in settings going beyond the Cox model (in particular enabling more complex dependencies in terms of covariates and hazard, and\/or time dependence): one is the use of BART ({Bayesian} additive regression trees) priors in \\citet{sparapani16}, another is the use of dependent Dirichlet process priors in \\citet{xu19}. \nIt would be very desirable to obtain theory and validation for these more complex settings: the present work can be seen as a first step towards this aim. \n\n\\section*{Acknowledgement}\n\nThe authors would like to thank St\\'ephanie van der Pas for helpful discussions on the $\\mathsf{R}$ code for the simulation studies. \n\n\\begin{supplement}\nThe supplement \\citet{ning22supp} includes the proofs of the results stated in this paper. \n\\end{supplement}\n\n\\bibliographystyle{chicago}\n\n\\section{Introduction}\n\\label{sec:1}\n\\IEEEPARstart{N}{etwork} mining is the basis for many network analysis tasks, such as classification and link prediction. The dimensionality of traditional node representations is proportional to the network scale, which requires a large amount of storage and computation resources for network analysis tasks. Thus, it is necessary to learn low-dimensional representations of nodes to capture and preserve the network features. Network embedding, also known as network representation learning, is a way of learning low-dimensional representations and preserving useful features to support subsequent network analysis tasks.\n\nExisting network embedding methods \\cite{perozzi2014deepwalk,grover2016node2vec,cao2015grarep} that emphasize preserving network structural features have achieved promising performance in several network analysis tasks. However, nodes in real-world networks have rich attribute information beyond the structural details, such as text information in citation networks and user profiles in social networks. Attribute features are essential to network analysis applications \\cite{hu2013exploiting,tang2013exploiting} and it is insufficient to learn network representations based only on preserving structural features. Node attributes carry semantic information that largely alleviates the link sparsity problem and supplements the incomplete structure information. The strong correlations between structures and attributes enable them to be integrated to learn network representations according to the principles of homophily \\cite{mcpherson2001birds} and social influence theory \\cite{marsden1988homogeneity}. Therefore, we integrate the topological structures and node attributes to perform network embedding and preserve both structural and attribute features of the network. Differing from task-oriented network embedding methods that learn network representations for a specific task, we aim to learn network representations that apply generally to various advanced network analysis tasks. We face three challenges: (1) Both the underlying network structures \\cite{luo2011cauchy} and the complex interactions between attributes and structures \\cite{cui2017survey} are highly non-linear. Thus, designing a model to capture these non-linear relationships is difficult. 
(2) The structures and attributes are information from different sources, which makes it difficult to find direct correlations between the originally observed data due to sparsity and noise. Modeling the correlations between network structures and attribute information is a tough problem. (3) Nodes with coherent links and similar attributes in the original network have strong proximity. They are supposed to be close to each other in the embedding space as well. Thus, mapping the proximity of nodes from both structure and attribute perspectives to the embedding space is critically important.\n\nTo address the above challenges, a Multimodal Deep Network Embedding method named MDNE is proposed in this paper. Most existing shallow models have limited ability to represent complex non-linear relationships \\cite{bengio2009learning}. A deep model comprising multiple layers of non-linear functions, using each layer to capture the non-linear relationships of units in the lower layer, is able to extract the non-linear relationships of data progressively during training \\cite{hinton2006reducing}. Moreover, deep learning has been demonstrated to have powerful non-linear representation and generalization ability \\cite{bengio2009learning}. In order to capture the highly non-linear network structures and the complex interactions between structures and attributes, a deep model comprising multiple layers of non-linear functions is proposed to learn compact representations of nodes. The original structure and attribute information, which are represented by an adjacency matrix and an attribute matrix, respectively, are usually sparse and noisy, making it difficult for the deep model to extract the correlations between them directly. In this paper, a multimodal learning method \\cite{ngiam2011multimodal} is adopted to pre-process the structure and attribute information to obtain their high-order features. High-order features are condensed and less noisy, so concatenating the two high-order features makes it easier for the deep model to extract the high-order correlations between the network structures and node attributes. To ensure that the obtained representations preserve both structural and attribute features of the original network, we use the structural proximity and attribute proximity to define the loss function for the new model. We preserve the structural features by taking advantage of the first-order proximity and second-order proximity, which capture the local and global network structures \\cite{wang2016structural}. The attribute proximity, which indicates the similarity of node attributes, is also utilized in the learning process to preserve the attribute features of the network. Thus, the learned representations preserve both the structural and attribute features of nodes in the embedding space.\n\nTo evaluate the effectiveness and generality of the proposed method in a variety of scenarios, we conduct experiments analyzing the network representations obtained by different network embedding methods on four real-world network datasets in three analysis tasks, including link prediction, attribute prediction, and classification. The results show that the network representations obtained by MDNE offer better performance on different tasks compared to other methods. 
This demonstrates that the proposed method effectively preserves the topological structure and attribute features of nodes in the embedding space, which improves the performance on diverse network analysis tasks.\n\nThe rest of the paper is organized as follows. Section 2 discusses the related works. The proposed method MDNE is described in details in Section 3. Experimental results of different network analysis tasks on various real-world datasets are presented in Section 4. Finally, Section 5 concludes the paper.\n\n\\section{Related Works}\n\\label{sec:2}\nThe early works of network embedding are related to graph embedding \\cite{roweis2000nonlinear,belkin2002laplacian}, which aims to embed an affinity graph into a low-dimensional vector space. The affinity graph is obtained by calculating the proximity between feature vectors of nodes. Recent network embedding aims to embed naturally formed networks into a low-dimensional space, such as social networks, citation networks, etc. Most of the existing works \\cite{jacob2014learning,hofmann2001unsupervised,blei2003latent} focused on reducing the dimensions of structure information while preserving the structural features of nodes. GraRep \\cite{cao2015grarep} defined different loss functions of models to preserve high-order proximity among nodes and optimized each model by matrix factorization techniques. The final representations of nodes combined representations learned from different models. M-NMF \\cite{wang2017community} proposed a novel modularized nonnegative matrix factorization model to incorporate the community structure into network embedding. The above shallow models have applied in various network analysis tasks, but have limited ability to represent the highly non-linear structure of networks. Thus, techniques with deep models were introduced to deal with the problem. LINE \\cite{tang2015line} designed the objective function based on the first-order proximity and second-order proximity and adopted negative sampling approach to minimize the objective function to get low-dimensional representations which preserve the local and global structure of the network. DeepWalk \\cite{perozzi2014deepwalk} utilized random walks in the network to sample the neighbors of nodes. By regarding the path generated as sentences, it adopted Skip-Gram, a general word representation learning model, to learn the node representations. Node2vec \\cite{grover2016node2vec} modified the way of generating node sequences and proposed a flexible notion of node's neighborhood. A biased random walk procedure was designed, which explored diverse neighborhood. SDNE \\cite{wang2016structural} designed a clear objective function to preserve the first-order proximity and second-order proximity of nodes and mapped the network into a highly non-linear latent space through an autoencoder-based model.\n\nBesides structure information, most of the recently obtained network datasets often carry a large amount of attribute information. However, it is difficult for pure structure-based methods to compress attribute information and obtain the representations combining the structure and attribute information. Therefore, efforts have been done to jointly exploit structure and attribute information in network embedding, and the representations integrating structure and attribute information have been demonstrated to improve the performance in network analysis tasks \\cite{hu2013exploiting,tang2013exploiting,li2016robust}. 
TADW \\cite{yang2015network} proved DeepWalk to be equivalent to matrix factorization and incorporated text features into network representation learning under the framework of matrix factorization. It can only handle the text attributes. AANE \\cite{huang2017accelerated} modeled and incorporated node attribute proximity into network embedding in a distributed way. The above matrix factorization methods did not preserve the attribute features directly, but performed the learning based on the attribute affinity matrix calculated by a specific affinity metric, which limited the attribute feature preservation ability of the obtained representations. UPP-SNE \\cite{zhang2017user} learned joint embedding representations by performing a non-linear mapping on user profiles guided by network structure. It mainly dealt with user profile information. TriDNR \\cite{pan2016tri} separately learned embedding from a coupled neural network architecture and linearly combined them in an iterative way. It lacked sufficient knowledge interactions between the two separate models. ASNE \\cite{Liao2017Attributed} proposed a multilayer perceptron framework to integrate the structural and attribute features of nodes. It preserved the structural proximity and attribute proximity by maximizing the likelihood function defined based on random walks. Its model lacked a non-linear pre-processing of the structure and attribute information, which could facilitate to extract the high-order correlations between attribute and structural features in the later learning. In this paper, a multimodal learning method is adopted to pre-process the original data.\n\nMultimodal learning methods which have aroused considerable research interests aim to project data from multiple modalities into a latent space. The classical methods CCA, PLS, BLM and their variants \\cite{hardoon2004canonical,rosipal2005overview,tenenbaum2000separating} were widely applied in previous time. Recent decades have seen great power of deep learning method to generate integrated representations for multimodal data. \\cite{ngiam2011multimodal} proposed an autoencoder-based method to learn features over multiple modalities (video and audio) and achieved in speech recognition. \\cite{srivastava2012learning} proposed a Deep Belief Network (DBN) architecture for learning a joint representation of multimodal data, which made it possible to create representations when some data modalities are missing. The multimodal Deep Boltzmann Machine (DBM) model proposed in \\cite{srivastava2014multimodal} fused modalities (image and tag) together and extracted unified representations which were useful for classification and information retrieval tasks. \\cite{kang2015learning} learned consistent representations for two modalities and facilitated the cross-matching problem. \\cite{xu2017learning} proposed a cross-modal hashing method to learn unified binary representations for multimodal data. Following these successful works, we introduced multimodal learning method into network embedding. The structure and attribute information of the network are regarded as different modalities. An autoencoder-based multimodal model \\cite{ngiam2011multimodal} is adopted to pre-process the bimodal data and forms high-order features, which facilitate the fused representations to be learnt.\n\nThere are also methods learning network representations for specific applications. 
PinSage \\cite{ying2018graph} combined the recent Graph Convolutional Network (GCN) algorithm \\cite{defferrard2016convolutional,kipf2016semi} with efficient random walks to generate representations applying in web-scale recommender systems. However, our MDNE learns integrated representations generally applying for various network analysis tasks.\n\n\\section{Multimodal Deep Network Embedding Method}\n\\label{sec:3}\n\\subsection{Problem Definition}\nAn attributed network is defined as $G=(U,E,A)$, where $U=\\{u_1,\\dots,u_n\\}$ represents a set of $n$ nodes, $E=\\{e_{i,j}\\}$ represents a set of $l$ edges, and $A=\\{\\bf{a}_i\\}_{i=1}^n$ represents the attribute matrix. Edge information is represented by the adjacency matrix $S=\\{\\bf{s}_i\\}_{i=1}^n$.\n\nThe adjacency vector $\\bf{s}_i$ and attribute vector $\\bf{a}_i$ of node $i$ represent the structure and attribute information, respectively. Thus, the goal of network embedding is to compress the two vectors into a low-dimensional vector, and preserving the structure and attribute features in the low-dimensional space (embedding space).\n\nThe first-order proximity and second-order proximity capture the local and global network structural features, respectively \\cite{wang2016structural}.\n\\newtheorem{definition}{Definition}\n\\begin{definition}[First-Order Structural Proximity]\n The first-order proximity describes the local pairwise proximity between two nodes. For each pair of nodes, the edge weight, $s_{i,j}$ indicates the first-order proximity between $u_i$ and $u_j$.\n\\end{definition}\n\\begin{definition}[Second-Order Structural Proximity]\n The second-order proximity between a pair of nodes $(u_i,u_j)$ in a network describes the similarity between their neighborhood structures which are represented by the adjacency vectors.\n\\end{definition}\n The first-order proximity and second-order proximity jointly compose the structural proximity between nodes. The attribute proximity captures the attribute feature of nodes.\n\\begin{definition}[Attribute Proximity]\n The attribute proximity between a pair of nodes $(u_i,u_j)$ describes the proximity of their attributes information. It is determined by the similarity between their attribute vectors, i.e., $a_i$ and $a_j$.\n\\end{definition}\nThe attribute proximity and structural proximity between nodes are the basis of many network analysis tasks. For example, community detection on social networks clusters nodes based on the structural proximity and attribute proximity \\cite{wu2018mining}. In recommendation on citation networks, papers having strong structural and attribute proximity are most likely to be reference papers of the given manuscript \\cite{cai2018three}. In user alignment across social networks, users are aligned based on their structure and attribute proximity on each network \\cite{zhao2018learning}. These applications benefit from utilizing both structural proximity and attribute proximity, which lead us to vestigate the problem of learning the low-dimensional representations of the network in the condition of preserving the two proximities. The problem is defined as follows.\n\\begin{definition}[Attributed Network Embedding]\n Given an attributed network denoted as $G=(U,E,A)$ with $n$ nodes and $m$ attributes, attributed network embedding aims to learn a mapping function $f:(\\bf{s}_i,\\bf{a}_i)\\mapsto \\bf{y}_i\\in \\mathbb{R}^d$, where $d \\ll \\min (n,m)$. 
The objective of the function is to make the similarity between ${{\\bf{y}}_{i}}$ and ${{\\bf{y}}_{j}}$ explicitly preserve the attribute proximity and structural proximity of ${u_i}$ and ${u_j}$.\n\\end{definition}\n\n\\subsection{Framework}\n\\label{sec:3.2}\nIn order to address the attributed network embedding problem, a Multimodal Deep Network Embedding (MDNE) method is proposed. Figure 1 shows the MDNE framework. The parameters marked with $\\hat{}$ are parameters of the reconstruction component. Table 1 lists the terms and notations. Note that the attributes pre-processing layer and structures pre-processing layer have different weight matrices ${W^{{a^{(1)}}}}$ and ${W^{{s^{(1)}}}}$, respectively. For simplicity, we denote ${W^{{a^{(1)}}}}$ and ${W^{{s^{(1)}}}}$ as ${W^{(1)}}$.\n\nThe strong interactions and complex dependencies between nodes in real-world networks result in the high non-linearity of the network structures. The interactions between structure and attribute features are non-linear as well. Deep neural networks have demonstrably strong representation and generalization abilities for such non-linear relationships \\cite{he2016deep}. Therefore, the proposed model is established based on a deep autoencoder, one of the most common deep neural network architectures. Autoencoder is an unsupervised learning model that performs well in data dimensionality reduction and feature extraction \\cite{hinton2006reducing}. An autoencoder consists of two parts, the encoder and decoder. The encoder consists of one or multiple layers of non-linear functions that map the input data into the representation space and obtain its feature vector; the decoder reconstructs the data in the representation space to obtain its original input form by an inverse process. A shallow autoencoder has three layers (input, encoding and output), where the encoder has only one layer of non-linear functions. The deep autoencoder of our implementation has more hidden layer and is able to learn higher-order features of data. Given the input data vector $\\bf{x}_i$, the output feature vectors for each layer are\n\\begin{equation*}\n\\begin{array}{l}\n{\\bf{y}_i}^{(1)} = \\sigma \\left( {{W^{(1)}}{\\bf{x}_i} + {\\bf{b}^{(1)}}} \\right)\\\\\n{\\bf{y}_i}^{\\left( k \\right)} = \\sigma \\left( {{W^{(k)}}{\\bf{y}_i}^{(k - 1)} + {\\bf{b}^{(k)}}} \\right),k = 2,\\dots,K\n\\end{array}\n\\end{equation*}\n, where $\\sigma$ denotes the non-linear activation function for bringing the non-linearity into the models. The activation functions must be chosen according to the loss function \\cite{charte2018practical}, the requirements of the applied representations, and the datasets. In practice, we can choose them based on their test performance. In this work, the sigmoid function $\\sigma (x) = \\frac{1}{{1 + \\exp ( - x)}}$ is adopted as it provided the best performance in the experiments\\footnote{Regarding the choice of activation function, we have tried sigmoid, Rectified Linear Unit (ReLU), Scaled Exponential Linear Unit (SELU), and hyperbolic tangent function (tanh). Empirically, the sigmoid function leads to the best performance in general.}. After obtaining the mid-layer representation, i.e., the encoding result ${\\bf{y}_i}^{(K)}$, we can obtain the decoding result through an inverse calculation process. The autoencoder optimizes the parameters by minimizing the reconstruction error between the input data and the reconstructed data. 
A typical loss function is the mean squared error (MSE)\n\\begin{equation*}\n{\\cal L} = \\sum\\limits_{i = 1}^n {\\left\\| {{{\\bf{\\hat x}}_i} - {\\bf{x}_i}} \\right\\|_2^2}\n\\end{equation*}\n. To alleviate the noise and redundant information in the input feature vectors, an undercomplete autoencoder is adopted to learn compact low-dimensional representations. The undercomplete autoencoder has a tower structure, with each upper layer having a smaller number of neurons than the layer below it. A smaller number of neurons restricts the dimensionality of the learned features, so that the autoencoder is forced to learn more abstract features of data during training \\cite{charte2018practical}. A layer-by-layer pre-training algorithm, such as Restricted Boltzmann Machine (RBM) enables each upper layer of the encoder to capture the high-order correlations between the feature units in the lower layer, which is an efficient way to extract non-linear structures progressively \\cite{hinton2006reducing}. Thus, the tower structure with stacked multiple layers of non-linear functions is able to map the data into a compressive latent space, and capture the highly non-linear structures of the network, as along with the complex interactions between the structures and attributes during training. The basic undercomplete autoencoder is chosen in our framework because of its generality and simplicity. Variants of the autoencoder can replace the basic autoencoder with slight modifications to accommodate specific scenarios, such as denoising autoencoder, contractive autoencoder, etc. \\cite{charte2018practical}.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig1}\n\\caption{The framework of MDNE}\n\\label{F1}\n\\end{figure}\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Terms and notations}\n\\label{T1}\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\nSymbol & Definition \\\\\n\\hline\n$K$ & Number of layers of the encoder\/decoder \\\\\n\\hline\n${W^{(k)}},{\\hat W^{(k)}}$ & Weight matrix of the ${k^{th}}$ layer \\\\\n\\hline\n${{\\bf{b}}^{(k)}},{{\\bf{\\hat b}}^{(k)}}$ & Biases of the ${k^{th}}$ layer \\\\\n\\hline\n${Y^{(k)}} = \\{ {{\\bf{y}}_i}^{(k)}\\} _{i = 1}^n$ & Representations of the ${k^{th}}$ layer \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nAn intuitive way to integrate both structure and attribute information in the representations is to concatenate the two feature vectors separately learned from both modalities. The way of learning individual modalities separately is limited in its ability to extract the correlations between structures and attributes. Alternatively, two kinds of information can be concatenated first at the input and the integrated representations are learned by a unified model. The inputs of the unified model are the adjacency vectors describing network structure and the attribute vectors describing node attributes. Since the adjacency vectors and attribute vectors of nodes are sparse and noisy, inputting the concatenated adjacency vector and attribute vector to the deep autoencoder directly, as shown in Figure 2(a), increases the difficulty in training the model to capture the correlations between structure and attribute information. 
We have also found that, in practice, learning in this way results in hidden units have strong connections of either structure or attribute variables, but few units connect across the two modalities \\cite{ngiam2011multimodal}.\n\nTo enable the deep model to better capture the correlations between structure and attribute information, multimodal learning method is introduced into the proposed model. The autoencoder-based multimodal learning model \\cite{ngiam2011multimodal} is adopted to pre-process the original structure and attribute data. The pre-processing reduces the dimensionality of data from different modalities, specifically removing noise and redundant information to obtain compact high-order features. The correlations across modalities are strengthened between their high-order features. As shown in Figure 2(b), the structure information (adjacency vector) and attribute information (attribute vector) are input separately to a one-layer neural network serving as a pre-processing layer. The use of a pre-training algorithm such as a single-layer RBM enables the pre-processing layer to extract high-order features of each modality. Then, the structure and attribute feature vectors are concatenated and input to the deep autoencoder for further learning. The high-order correlations between structure and attribute will be more facilely learned by deep autoencoder using high-order features obtained by the pre-processing layer. With the subsequent fine-tuning algorithm, the deep autoencoder provides a unified framework to integrate structure and attribute information.\n\nThe training goal of the model is preserving the structural and attribute features in the embedding space. The structural features and attribute features are captured by the structural and attribute proximities, respectively. Thus, the model loss function is defined based on the two proximities, as detailed in the next subsection. By fine-tuning the model based on the optimization of the loss function, the obtained representations preserve both the structure and attribute features of the original network.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig2}\n\\caption{Concatenated model and Pre-processing model}\n\\label{F2}\n\\end{figure}\n\nIn comparison with SDNE \\cite{wang2016structural}, which adopts a basic autoencoder to directly reconstruct the input structure information data, the proposed MDNE pre-processes the original adjacency matrix and attribute matrix using a multimodal learning method respectively, and concatenates the resulting high-order structure and attribute features for input to the deep model. The loss function is defined based on the structural and attribute proximities to preserve the structural and attribute features of nodes in the embedding space.\n\n\\subsection{Loss Functions}\n\\label{sec:3.3}\nThe structural proximity includes the first-order proximity describing the local network structure and the second-order proximity describing the global network structure \\cite{wang2016structural}. They are preserved in the loss function to preserve the local and global structural features in low-dimensional embedding space. 
With the first-order proximity indicating the proximity between directly connected nodes, a corresponding loss function is defined to guarantee that connected nodes with larger weight have a shorter distance in the embedding space, i.e., \n\\begin{equation}\\label{E1}\n{{\\cal L}_{1st}} = \\sum\\limits_{i,j = 1}^n {{s_{ij}}\\left\\| {{\\bf{y}}_i^{(K)} - {\\bf{y}}_j^{(K)}} \\right\\|_2^2}\n\\end{equation}\n. Minimizing ${{\\cal L}_{1st}}$ forces the model to preserve the first-order proximity in the embedding space. The second-order proximity represents the similarity of the neighborhood structure between nodes. The neighborhood structure of each node can be described by its adjacency vector. Thus the second-order proximity between two nodes is determined by the similarity of their adjacency vectors, and the goal of the corresponding loss function is to guarantee that nodes with similar adjacency vectors have a short distance in the embedding space. Minimizing the reconstruction error of the input data amounts to maximizing the mutual information between input data and learnt representations \\cite{vincent2010stacked}. Intuitively, if the representation allows a good reconstruction of the input data, it means that it has retained much of the information that was present in the input. That is, the MSE-based loss function prompts the basic autoencoder to latently preserve the similarity between input vectors in the embedding space during training. Since the adjacency vector describes the neighborhood structure of each node, minimizing of the reconstruction error of the adjacency vectors preserves the similarity of neighborhood structure (i.e., the second-order proximity) between nodes in the embedding space. Thus, the loss function based on the second-order proximity is as follows:\n\\begin{equation}\\label{E2}\n{{\\cal L}_{2nd}} = \\sum\\limits_{i = 1}^n {\\left\\| {\\left( {{{\\bf{\\hat s}}_i} - {\\bf{s}_i}} \\right) \\odot \\bf{r}_i^s} \\right\\|_2^2 = \\left\\| {\\left( {\\hat S - S} \\right) \\odot {R^s}} \\right\\|_F^2} \n\\end{equation}\n, where $ \\odot $ means the Hadamard product, and ${R^s} = \\left\\{ {\\bf{r}_i^s} \\right\\}_{i = 1}^n$ are the penalty parameters for non-zero adjacency elements. If ${s_{ij}} = 0$, $r_{ij}^s = 1$, else $r_{ij}^s = {\\gamma _1} > 1$. Increasing the penalty for the reconstruction error of non-zero elements avoids the reconstruction process's tendency to reconstruct zero elements, making the model robust to sparse networks. Minimizing ${{\\cal L}_{1st}}$ and ${{\\cal L}_{2nd}}$ imposes a restriction to force the model to preserve the first-order and second-order proximities between nodes.\n\nThe attribute proximity of nodes is determined by the similarity of their attribute vectors. The similarity metric of attribute vectors depends on whether the attributes are symmetric or asymmetric. In real-world networks, most of the attributes are highly asymmetric, such as word-counts on citation networks. Moreover, symmetric attributes can also be transformed into asymmetric ones by regarding each ${a_{ij}}$ in node $i$'s attribute vector ${{\\bf{a}}_{\\bf{i}}}$ as an asymmetric attribute indicating whether node $i$ has attribute value $j$. Therefore, the attribute vectors are treated as highly asymmetric to match real-world circumstances. The asymmetry of both attribute vectors and adjacency vectors results in the same similarity metric of the two data forms. 
Training the autoencoder to minimize reconstruction error enables the model to preserve the similarity between input vectors in the embedding space \\cite{vincent2010stacked}. Meanwhile, experiments in \\cite{belkin2003laplacian} shows that minimizing the reconstruction error of the word-count vectors, a kind of highly asymmetric attribute vectors, with a deep autoencoder makes the similar input word-count vectors close to each other in the embedding space. Thus, to preserve the attribute proximity between nodes in the embedding space, the autoencoder is trained to minimize the reconstruction error of the attribute vectors. The corresponding loss function is\n\\begin{equation}\\label{E3}\n{{\\cal L}_{att}} = \\sum\\limits_{i = 1}^n {\\left\\| {\\left( {{{\\bf{\\hat a}}_i} - {\\bf{a}_i}} \\right) \\odot \\bf{r}_i^a} \\right\\|_2^2 = \\left\\| {\\left( {\\hat A - A} \\right) \\odot {R^a}} \\right\\|_F^2} \n\\end{equation}\n, where ${R^a} = \\left\\{ {\\bf{r}_i^a} \\right\\}_{i = 1}^n$ are the penalty parameters for non-zero attribute elements. If ${a_{ij}} = 0$, $r_{ij}^a = 1$, and $r_{ij}^a = {\\gamma _2} > 1$ otherwise. The penalty for the reconstruction error of non-zero attribute values reflects that the reconstruction of non-zero elements is more meaningful than the reconstruction of zero ones. This is because there are significantly fewer non-zero elements than zero ones in highly asymmetrical attribute vectors, with non-zero elements much more important in determining the similarity.\n\nThe final loss function combines the above structural and attribute proximity loss functions and preserves the structural and attribute proximities between nodes in the embedding space:\n\\begin{equation}\\label{E4}\n\\begin{array}{rl}\n{{\\cal L}_{mix}} = &\\lambda {{\\cal L}_{att}} + \\alpha {{\\cal L}_{2nd}} + {{\\cal L}_{1st}} + \\upsilon {{\\cal L}_{reg}}\\\\\n = &\\!\\lambda \\left\\| {(\\hat A - A) \\odot {R^a}} \\right\\|_F^2 + \\alpha \\left\\| {(\\hat S - S) \\odot {R^s}} \\right\\|_F^2\\\\\n &\\!+ \\sum\\limits_{i,j = 1}^n {{s_{ij}}\\left\\| {{\\bf{y}}_i^{(K)} - {\\bf{y}}_j^{(K)}} \\right\\|_2^2} + \\upsilon {{\\cal L}_{reg}}\n\\end{array}\n\\end{equation}\n, where ${{\\cal L}_{reg}}$ is an ${\\cal L}2$-norm regularization term to prevent overfitting, and $\\lambda $, $\\alpha $, and $\\upsilon $ are the weight of the attribute proximity loss, second-order proximity loss and regularization term in the loss function. ${{\\cal L}_{reg}}$ is defined as:\n\\begin{equation*}\n{{\\cal L}_{reg}} = \\frac{1}{2}\\sum\\limits_{k = 1}^K {(\\left\\| {{W^{(k)}}} \\right\\|_F^2 + \\left\\| {{{\\hat W}^{(k)}}} \\right\\|_F^2)}\n\\end{equation*}\n, where ${W^{(k)}},{\\hat W^{(k)}},k = 1,\\dots,K$ are the weight matrices of the ${k^{th}}$ layer of the encoder and decoder, respectively.\n\n\\subsection{Optimization}\n\\label{sec:3.4}\nAs presented so far, we seek to minimize the loss function to preserve the structural proximity and attribute proximity in the embedding space. Stochastic gradient descent is a general way to optimize the deep model. However, it is difficult to obtain the optimal result of the model when using stochastic gradient descent directly over randomized weights due to the existence of many local optima \\cite{hinton2006reducing}. Otherwise, the gradient descent works well when the initial weights are close to a good solution. 
Therefore, Deep Belief Network \\cite{hinton2006fast} is adopted to pre-train the model and obtain the initial weights, which have been proved to be close to the optimal weights \\cite{erhan2010does}. Then, the model is optimized using stochastic gradient descent and the initial weights.\n\nBy iterating and updating the parameters until model converges, we obtain the optimal model. Experimental results show that the model optimization converges quickly after the first 10 iterations, and slowly approaches the optimum in the later iterations. Approximately 400 iterations produce the satisfactory results. After proper optimization, informative representations are learned based on the trained model. Algorithm 1 presents the pseudo-code of the proposed method. All the parameters ${W^{(k)}},{\\hat W^{(k)}},{\\bf{b}^{(k)}},{\\bf{\\hat b}^{(k)}}$ are signed as $\\theta $.\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\\begin{algorithm}\n\\caption{MDNE}\n\\label{A1}\n\\begin{algorithmic}[1]\n\\REQUIRE the adjacency matrix $S$, the attribute matrix $A$.\n\\ENSURE network representation ${Y^{(K)}}$, updated parameters $\\theta $.\n \\STATE Build pre-processing layer($PPL$), encoder($EC$) and decoder($DC$), pre-train them through Deep Belief Network to obtain the initialized parameters $\\theta $;\n \\REPEAT\n \\STATE ${Y^{(K)}} = EC(PPL(\\left[ {S\\left. A \\right]} \\right.),\\theta )$, $\\left[ {\\hat S\\left. {\\hat A} \\right]} \\right. = DC({Y^{(K)}},\\theta )$;\n \\STATE Obtain ${{\\cal L}_{mix}}$ based on Eq. (4);\n \\STATE Updated parameters $\\theta $ through back-propagate algorithm;\n \\UNTIL {converge}\n \\STATE Obtain the network representations ${Y^{(K)}}$ based on the optimal parameters $\\theta $.\n \\end{algorithmic}\n\\end{algorithm}\n\n\\subsection{Analysis and Discussions}\n\\label{sec:3.5}\nIn this section, we discuss and analyze the proposed model of MDNE.\n\n\\textbf{Time Complexity:} \nThe time complexity of MDNE is $O((l + f)hr)$, where $l$ is the number of edges, $f$ is the total number of the attributes carried by all the nodes, $h$ is the maximum number of dimensions of the hidden layer, and $r$ is the number of iterations.\nSince $h$ and $r$ are independent of the other parameters, the overall training complexity of the model is linear to the sum of the number of edges and attributes carried by all the nodes.\n\n\\textbf{New nodes:} A practical issue for network embedding is how to capture evolving networks. Many researches \\cite{huang2017accelerated,Liao2017Attributed} have shown interest in dealing with dynamic topological structures and node attributes. Since newly arriving nodes are an important factor for evolving networks, the proposed method provides a possible way to represent them. If new nodes have observable links connecting to existing nodes and bringing attribute information as well, their representations can be obtained by feeding their adjacency vectors and attribute vectors into the finely trained model. If the new nodes lack structure or attribute information, most existing methods cannot handle them \\cite{wang2016structural}. 
However, MDNE can learn the representations of new nodes lacking one modality of information by replacing the missing vectors with zero vectors and inputting the existing vectors together with the zero vectors to the trained model.\n\n\\section{Experimental Results}\n\\label{sec:4}\nIn this section, we empirically evaluate the effectiveness and generality of the proposed algorithm. First, the experimental setup is introduced, including datasets, baseline methods and parameter settings. We also investigate the convergence of MDNE, and verify the ability of all methods to reconstruct the network structure. Then, comparisons between the proposed method and the baselines are conducted on three real-world network analysis tasks, i.e., link prediction, attribute prediction and classification, to verify the ability of the obtained representations. Finally, the parameter sensitivity and the impact of pre-processing are discussed. Experiments run on a Dell Precision Tower 5810 with an Intel Xeon CPU E5-1620 v3 at 3.50 GHz and 16 GB of RAM.\n\n\\subsection{Experiment Setup}\n\\label{sec:4.1}\n\\subsubsection{Datasets}\nFour real-world network datasets are used in this work, including two citation networks and two social networks. Considering the characteristics of these datasets, one or more datasets are chosen to evaluate the performance on each network analysis task. The four datasets are described as follows.\n\n\\textbf{cora:} cora\\footnote{http:\/\/linqs.cs.umd.edu\/projects\/\/projects\/lbc\/index.html} is a citation network which contains 2,708 nodes and 5,278 edges. Each node indicates a machine learning paper, and each edge indicates a citation relation between papers. After stemming and removing stop-words, a vocabulary of 1,433 unique words is regarded as the attribute information of the papers. Each attribute indicates the absence\/presence of the corresponding word in a paper. These papers are classified into one of the following seven classes: Case Based, Genetic Algorithms, Neural Networks, Probabilistic Methods, Reinforcement Learning, Rule Learning, and Theory.\n\n\\textbf{citeseer:} citeseer is a citation network which consists of 3,312 nodes and 4,551 edges. Similarly, nodes and edges represent scientific publications and their citations, respectively. A vocabulary of 3,703 words is extracted and set as the attributes. These papers are classified into one of the following six classes: Agents, AI, DB, IR, ML, HCI.\n\n\\textbf{UNC, Oklahoma:} These are two Facebook sub-networks, which respectively contain 18,163 students from the University of North Carolina and 17,425 students from the University of Oklahoma, together with seven anonymized attributes: status, gender, major, second major, dorm\/house, high school, class year. Note that not all of the students have all seven attributes available.\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Dataset statistics}\n\\label{T2}\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nDataset & \\# nodes & \\# edges & \\# attributes \\\\\n\\hline\nUNC &\t 18163 &\t766800 &\t2788 \\\\\n\\hline\nOklahoma &\t17425 &\t892528 &\t2305 \\\\\n\\hline\nciteseer &\t3312 &\t4551 &\t3703 \\\\\n\\hline\ncora &\t 2708 &\t5278 &\t 1433 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nThe statistics of the four datasets are summarized in Table 2. Experiments are conducted on both weighted and unweighted, small and large networks. 
Diverse datasets allow us to evaluate whether the proposed network embedding method has a better performance on networks with different characteristics.\n\n\\subsubsection{Baseline Methods}\nFive typical methods are chosen to be baselines.\n\n\\textbf{LE \\cite{belkin2002laplacian}:} It provides Laplacian Eigenmaps and spectral techniques to embed the data into a latent low-dimensional space. The solution reflects the features of the network structure.\n\n\\textbf{node2vec \\cite{grover2016node2vec}:} It samples the network structure by the biased random walk. By regarding the paths as sentences, it adopts the natural language processing model to generate network embedding. The hyper-parameters $p$ and $q$ introduce breadth-first sampling and depth-first sampling in the random walk. It can recover DeepWalk when $p$ and $q$ are set to 1.\n\n\\textbf{SDNE \\cite{wang2016structural}:} It exploits the first-order proximity and second-order proximity to preserve the local and global network structure. A deep model is adopted to address the highly non-linear structure and sparsity problem of networks.\n\n\\textbf{AANE \\cite{huang2017accelerated}:} It proposes a scalable and efficient framework which incorporates node attribute proximity into network embedding. It processes each node efficiently by decomposing the complex modeling and optimization into many sub-problems.\n\n\\textbf{ASNE \\cite{Liao2017Attributed}:} It adopts a multilayer neural network to capture the complex interactions between features which denote the ID and attributes of nodes, and the proposed framework performs network embedding by preserving the structural proximity and attribute proximity of nodes in the paths generated by the random walk.\n\nThe first three methods are pure structure-based methods, and the others integrate attribute and structure information into network embedding.\n\n\\subsubsection{Parameter Settings}\nThe depth of neural networks and the number of neurons are essential factors in learning effect. Recent evidences \\cite{wang2016structural,simonyan2014very,szegedy2015going} reveal that the number of stacked layers (depth) and neurons should be neither too large nor too small. Large numbers of layers and neurons increase the difficulty of training the model, and bring over-fitting problem. However, too few layers and neurons fail to extract effective low-dimensional representations \\cite{zeiler2014visualizing}, especially for large-scale datasets. Therefore, we vary MDNE's neural network structure according to different datasets, as shown in Table 3. Two numbers in the first layer and second layer indicate the dimensions of the vectors related to the structure and attribute data, respectively.\n\nWe implemented of MDNE using TensorFlow\\footnote{https:\/\/www.tensorflow.org\/}. We fine-tuned the loss function hyper-parameters $\\lambda ,\\alpha ,\\upsilon ,{\\gamma _1},{\\gamma _2}$ using grid search based on the performance of the network reconstruction \\cite{wang2016structural}, which is introduced as a basic quality criterion of the proposed method in Section 4.3. We first perform a parameter sweep setting $\\lambda ,\\alpha ,\\upsilon ,{\\gamma _1},{\\gamma _2} = \\{0, 0.01, 0.1, 1, 10, 100, 1000\\}$ on each dataset. They are tuned one by one iteratively until all of them are converged. 
Then every hyper-parameter is further fine-tuned by grid search on a smaller space around optimal value got in previous search for each dataset.\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Neural Network Structures}\n\\label{T3}\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\nDataset & \\# nodes in each layer \\\\\n\\hline\ncora &\t (2708,1433)-(300,200)-128 \\\\\n\\hline\nciteseer &\t(3312,3703)-(250,250)-128 \\\\\n\\hline\nUNC &\t(18163,2788)-(3000,500)-128 \\\\\n\\hline\nOklahoma &\t(17425,2305)-(3600,650)-128 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nThe parameters of the baseline methods are adjusted to the optimal values as given in their researches. For the sake of fairness, we set the embedding dimensions of all the methods $d=128$ on different tasks.\n\n\\subsection{Convergence}\nExperiments are conducted to investigate the convergence property of MDNE. We vary the number of iterations from 0 to 800 and plot the corresponding value of loss function on a citation network cora and a social network UNC. The learning curves are shown in Figure 3. The result indicates that MDNE convergences at about 400 iterations on different datasets. Although the performance may be better with more iterations, 400 iterations have achieved the best result among baselines. To balance the effectiveness and efficiency of MDNE, the model is trained about 400 iterations in experiments.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig3}\n\\caption{Convergence of MDNE on cora and UNC datasets.}\n\\label{F3}\n\\end{figure}\n\n\\subsection{Network Reconstruction}\nNetwork reconstruction verifies the ability of the method to reconstruct the network structure, which is also a basic requirement for network embedding methods. Given the learned network representations, all links in the original network need to be predicted. The way to predict the links is ranking all node pairs based on their similarity and predicting that a certain number of top pairs are linked by edges. The cosine distance of learned vectors measures the similarities between nodes. The higher-ranking node pairs are more likely to have links in the original network. The evaluation indicator is $precision@k$ \\cite{wang2016structural}, referring to the ratio of the top $k$ node pairs to be connected in the original network. A larger $precision@k$ indicates the better performance of the reconstruction.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig4}\n\\caption{Network Reconstruction performance on different datasets. Node2vec can't obtain results on UNC and Oklahoma which have more than 10,000 nodes, due to an out of memory problem. 
$k$ is set based on the network scale.}\n\\label{F4}\n\\end{figure}\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{$precision@k$ of Network Reconstruction on cora and citeseer datasets}\n\\label{T4}\n\\centering\n\\tiny\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\n &Algorithm & $P$@1000 & $P$@3000 & $P$@5000 & $P$@7000 & $P$@9000 & $P$@10000 \\\\\n\\hline\n & LE & 0.661 & 0.481 & 0.408 & 0.353 & 0.316 & 0.300 \\\\\n & node2vec & \\textbf{1.000} & \\textbf{0.903}& 0.542 & 0.388 & 0.302 & 0.272 \\\\\ncora & SDNE & 0.924 & 0.703 & 0.543 & 0.432 & 0.353 & 0.323 \\\\\n & AANE & 0.792 & 0.465 & 0.318 & 0.239 & 0.194 & 0.179 \\\\\n & ASNE & 0.954 & 0.796 & 0.514 & 0.383 & 0.307 & 0.281 \\\\\n & MDNE & 0.996 & 0.871 & \\textbf{0.701} & \\textbf{0.581} & \\textbf{0.491} & \\textbf{0.455} \\\\\n\\hline\n & LE & 0.480 & 0.376 & 0.334 & 0.307 & 0.280 & 0.269 \\\\\n & node2vec & \\textbf{1.000}& \\textbf{1.000} & 0.654 & 0.467 & 0.364 & 0.327 \\\\\nciteseer & SDNE & 0.869 & 0.787 & 0.658 & 0.530 & 0.430 & 0.390 \\\\\n & AANE & 0.774 & 0.586 & 0.424 & 0.323 & 0.262 & 0.239 \\\\\n & ASNE & 0.962 & 0.908 & 0.713 & 0.543 & 0.438 & 0.400 \\\\\n & MDNE & 0.994 & 0.951 & \\textbf{0.798} & \\textbf{0.637} & \\textbf{0.530} & \\textbf{0.488} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{$precision@k$ of Network Reconstruction on UNC and Oklahoma datasets}\n\\label{T5}\n\\centering\n\\tiny\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n &Algorithm & $P$@5000 & $P$@10000 & $P$@15000 & $P$@20000 & $P$@25000 \\\\\n\\hline\n & LE & 0.942 & 0.915 & 0.901 & 0.894 & 0.885 \\\\\nUNC & SDNE & 0.997 & 0.988 & 0.968 & 0.943 & 0.915 \\\\\n & AANE & 0.012 & 0.010 & 0.008 & 0.008 & 0.008 \\\\\n & ASNE & \\textbf{0.999} & 0.989 & 0.922 & 0.765 & 0.635 \\\\\n & MDNE & 0.998 & \\textbf{0.998} & \\textbf{0.997} & \\textbf{0.982} & \\textbf{0.963} \\\\\n\\hline\n & LE & 0.952 & 0.938 & 0.925 & 0.916 & 0.907 \\\\\nOklahoma & SDNE & 0.998 & 0.986 & 0.981 & 0.978 & 0.976 \\\\\n & AANE & 0.022 & 0.018 & 0.015 & 0.014 & 0.013 \\\\\n & ASNE & 0.995 & 0.974 & 0.943 & 0.914 & 0.888 \\\\\n & MDNE & \\textbf{0.999}& \\textbf{0.996} & \\textbf{0.993} & \\textbf{0.983} & \\textbf{0.969} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nNetwork reconstruction is performed on the four datasets, and the results are shown in Figure 4. Table 4 and Table 5 additionally provide the numerical results, which help to distinguish curves that lie close together. Numbers in bold represent the best result in each column. Compared to UNC and Oklahoma, the performance of all methods decreases visibly on cora and citeseer. This is because cora and citeseer suffer from sparsity, as their average degree is much smaller than that of UNC and Oklahoma.\n\nLE, which is a shallow model-based method, has poor performance. This indicates that going deep enhances the model's generalization ability and helps to capture the high non-linearity of network structures. SDNE adopts a deep autoencoder model but only uses structure information. Its inferior performance demonstrates the usefulness of attribute information in learning better node representations. Node2vec is slightly better than MDNE on the cora and citeseer networks for $k=1000$ to $k=3000$. The reason might be that node2vec can capture the higher-order proximity between nodes by random walks in the network.\n\nAANE has relatively poor performance, especially on UNC and Oklahoma. 
This is because AANE only considers the first-order proximity, and its performance largely depends on the computation of attribute similarity over the full attribute space; on some networks, the attribute similarity of nodes computed explicitly in such a high-dimensional attribute space has little discriminative power. ASNE performs slightly worse because it pre-processes the structure and attribute data linearly before concatenating them, which makes it hard to capture the non-linear correlations between structure and attribute information. MDNE has the best performance on the four datasets in most cases. Its good performance comes from three ingredients: it adopts a deep model to learn non-linear features, it uses a multimodal learning method to better capture the correlations between attributes and structure, and it preserves the attribute proximity by minimizing the reconstruction error instead of computing the attribute similarity explicitly.\n\n\\subsection{Link Prediction and Attribute Prediction}\nIn this section, we evaluate the ability of the learned representations to predict missing links and attributes in the network, which is a practical task in real-world applications.\n\n\\subsubsection{Link Prediction}\nLink prediction is the prediction of missing links based on the existing information. After randomly hiding $5\\% \\sim 45\\%$ of the links, the remaining network is used as a sub-dataset to perform network embedding. The test set consists of positive and negative instances: the hidden links are taken as positive instances, and the same number of unconnected node pairs in the original network are randomly selected as negative instances. Similarities between the learned representations of the nodes in the test set are calculated and sorted in descending order. A higher ranking of a node pair corresponds to a greater probability that the two nodes are connected. The Area Under the ROC Curve (AUC) is adopted as the evaluation metric, since it is commonly used to measure the quality of ranking-based classification. A larger AUC indicates better performance; if an algorithm ranks all positive instances higher than all negative instances, the AUC is 1. The above steps are repeated 10 times and the average AUC is taken as the final result. All methods have extremely poor performance on the cora and citeseer networks, since the low average degrees of these two networks make link prediction very hard. Thus we only show the results on the UNC and Oklahoma networks in Figure 5 and Table 6. 
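A minimal sketch of this evaluation protocol is given below; it assumes cosine similarity between the embedding vectors, as in the reconstruction task, and uses scikit-learn's \\texttt{roc\\_auc\\_score}. The helper names are ours and are not part of the MDNE implementation.\n\\begin{verbatim}\n# Link-prediction evaluation (sketch): score each test pair by the cosine\n# similarity of its embeddings and measure how well the hidden (positive)\n# pairs are ranked above the sampled unconnected (negative) pairs.\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score\n\ndef cosine(u, v):\n    denom = np.linalg.norm(u) * np.linalg.norm(v) + 1e-12\n    return float(np.dot(u, v) \/ denom)\n\ndef link_auc(emb, hidden_edges, negative_pairs):\n    pairs = list(hidden_edges) + list(negative_pairs)\n    labels = [1] * len(hidden_edges) + [0] * len(negative_pairs)\n    scores = [cosine(emb[i], emb[j]) for i, j in pairs]\n    return roc_auc_score(labels, scores)   # 1.0 means a perfect ranking\n\\end{verbatim}\nThe whole procedure (hiding links, re-embedding, scoring) is repeated 10 times and the returned AUC values are averaged.\n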
Numbers in bold represent the highest performance in each column.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig5}\n\\caption{Link prediction performance on UNC and Oklahoma datasets.}\n\\label{F5}\n\\end{figure}\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{AUC of Link Prediction on UNC and Oklahoma datasets}\n\\label{T6}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n &Test Ratio & 0.05 & 0.15 & 0.25 & 0.35 & 0.45 \\\\\n\\hline\n & LE & 0.670 & 0.668 & 0.644 & 0.616 & 0.588 \\\\\nUNC & SDNE & \\textbf{0.915} & 0.902 & 0.896 & 0.880 & 0.873 \\\\\n & AANE & 0.501 & 0.500 & 0.500 & 0.500 & 0.499 \\\\\n & ASNE & 0.711 & 0.683 & 0.653 & 0.629 & 0.605 \\\\\n & MDNE & 0.912 & \\textbf{0.907} & \\textbf{0.901} & \\textbf{0.900} & \\textbf{0.906} \\\\\n\\hline\n & LE & 0.682 & 0.685 & 0.685 & 0.670 & 0.637 \\\\\nOklahoma & SDNE &0.891 & 0.885 & 0.882 & 0.873 & 0.852 \\\\\n & AANE & 0.498 & 0.499 & 0.500 & 0.501 & 0.500 \\\\\n & ASNE & 0.899 & 0.889 & 0.879 & 0.868 & 0.855 \\\\\n & MDNE & 0.937 & \\textbf{0.935} & \\textbf{0.933} & \\textbf{0.931} & \\textbf{0.924} \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{$p$-value of Friedman Test on MDNE with baselines for Link Prediction}\n\\label{T7}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\nBaseline & UNC & Oklahoma \\\\\n\\hline\n LE & 1.125$e-5$ & 1.125$e-5$ \\\\\n\\hline\nSDNE & 0.0084 & 1.125$e-5$ \\\\\n\\hline\nAANE & 1.125$e-5$ & 1.125$e-5$ \\\\\n\\hline\nASNE & 1.125$e-5$ & 1.125$e-5$ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nCompared with the shallow model-based methods LE and AANE, the deep model-based methods MDNE, SDNE and ASNE perform significantly better. This is because the deep model can better capture highly non-linear network structures. The reason for the extremely poor performance of AANE is similar to that on the network reconstruction task. SDNE and MDNE perform well since they preserve both the first-order and second-order proximities between nodes in the embedding space. MDNE is slightly better than SDNE, which does not preserve attribute features in the learned representations. This result justifies the usefulness of attribute information in link prediction.\n\nA Friedman test is conducted to further support the superiority of MDNE over the other methods. The $p$-values are computed from the ranking of the AUC values of MDNE and each baseline over the sub-datasets with different test ratios. In Table 7, all the $p$-values are less than 0.05. The results show that the performance of MDNE is significantly different from the compared methods on the link prediction task. The $p$-value for MDNE versus SDNE on UNC is slightly higher than the others because SDNE is slightly better than MDNE when the ratio of links held out for testing is 5\\%.\n\nThe network becomes sparser as the ratio of links held out for testing increases, and the AUC of MDNE remains stable while that of the other methods drops. This indicates that the penalty for non-zero elements in the loss function improves MDNE's performance in dealing with sparse networks. Such an advantage is pivotal for downstream applications, since links are often sparse, especially in large-scale real-world networks. Although the link prediction task favors pure structure-based methods, MDNE outperforms the others. 
This demonstrates the effectiveness of the learned representations in predicting missing links.\n\n\\subsubsection{Attribute Prediction} \nAttribute prediction refers to predicting unknown attribute values of nodes based on the obtained information. It has attracted increasing interest in network analysis. For example, in social network recommendation, predicting attribute features is essential to help users locate the information they are interested in \\cite{cai2018three}.\n\nIn the attribute prediction experiments, $5\\%\\sim45\\%$ of the attribute values (including value 1 and value 0) in the original network are hidden randomly, i.e., $5\\%\\sim45\\%$ of ${a_{jk}}$ in the attribute matrix $A$ are hidden. These hidden entries form the test set. The remaining attribute and structure information is used to learn the representations of the nodes. The obtained representations are used to predict the attributes in the test set. Assume that attribute $k$ of node $j$ is hidden; the prediction proceeds as follows. Similarities between node $j$ and all other nodes in the embedding space are calculated, denoted as $sim_{ij}, i = 1,\\dots,n$. We denote the top 10 nodes with the highest similarity to $j$ as the set $N_S$. ${N_p} = \\left\\{ {i|{a_{ik}} = 1,i \\in {N_S}} \\right\\}$ is the subset of these nodes whose attribute $k$ has value 1. Similarly, ${N_n} = \\left\\{ {i|{a_{ik}} = 0,i \\in {N_S}} \\right\\}$ is the subset of nodes in $N_S$ whose attribute $k$ has value 0. The ratio $p = {{\\sum\\limits_{i \\in {N_p}} {si{m_{ij}}} } \/ {\\sum\\limits_{i \\in {N_n}} {si{m_{ij}}} }}$ is then calculated, which indicates how likely it is that ${a_{jk}}$, i.e., attribute $k$ of node $j$, equals 1. All ${a_{jk}}$ in the test set are sorted in descending order of $p$. The AUC is used to evaluate this ranking list; a high AUC value indicates accurate prediction. The results are shown in Figure 6.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig6}\n\\caption{Attribute prediction performance on different datasets.}\n\\label{F6}\n\\end{figure}\n\n\\begin{table}[!t]\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{$p$-value of Friedman Test on MDNE with baselines for Attribute Prediction}\n\\label{T8}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nBaseline & cora & citeseer & UNC & Oklahoma \\\\\n\\hline\n LE & 1.125$e-5$ & 1.125$e-5$ & 1.125$e-5$ & 1.125$e-5$ \\\\\n\\hline\nSDNE & 1.125$e-5$ & 1.125$e-5$ & 1.125$e-5$ & 1.125$e-5$\\\\\n\\hline\nAANE & 1.125$e-5$ & 1.125$e-5$ & 1.125$e-5$ & 1.125$e-5$ \\\\\n\\hline\nASNE & 1.125$e-5$ & 1.125$e-5$ & \/ & \/ \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nThe performance of LE, node2vec and SDNE is worse than that of MDNE. The reason is that they do not preserve attribute features in the embedding space, which is important for predicting missing attributes of nodes. AANE still has poor performance, because the attribute affinity matrix it adopts is calculated over the full attribute space, which decreases the discriminability of the representations. Compared with ASNE, the superior performance of MDNE is credited to the multimodal pre-processing of the original attribute and structure information. 
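Before turning to the role of the pre-processing layer in more detail, the scoring rule described above can be made concrete with a short sketch. The cosine similarity and the array layout are assumptions of ours; only the ratio-of-similarities rule itself is taken from the procedure described in this subsection.\n\\begin{verbatim}\n# Attribute scoring rule (sketch): for a hidden entry a_jk, compare the\n# similarity mass of the 10 most similar nodes that have attribute k with\n# that of the most similar nodes that do not.\nimport numpy as np\n\ndef attribute_score(emb, attr, j, k, top=10):\n    norms = np.linalg.norm(emb, axis=1) * np.linalg.norm(emb[j]) + 1e-12\n    sims = emb @ emb[j] \/ norms             # assumed cosine similarity\n    sims[j] = -np.inf                       # exclude node j itself\n    ns = np.argsort(-sims)[:top]            # the set N_S\n    pos = sims[ns][attr[ns, k] == 1].sum()  # mass of N_p\n    neg = sims[ns][attr[ns, k] == 0].sum()  # mass of N_n\n    return pos \/ (neg + 1e-12)              # large value: a_jk is likely 1\n\\end{verbatim}\nAll hidden entries are ranked by this score, and the AUC of the resulting ranking is reported.\n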
The high-order features of the attribute vectors and adjacency vectors obtained by the pre-processing layer help the successive layers better extract the high-order correlations between the structure and attribute features of nodes.\n\nThe Friedman test is also conducted for MDNE against the other methods. The $p$-values are listed in Table 8, all of which are less than 0.05. The results show that the performance of MDNE is significantly different from the baselines on the attribute prediction task.\n\nThe attribute sparsity differs considerably across datasets, as the average number of attributes per node is 34.3, 31.7, 5.4 and 5.3 on cora, citeseer, UNC and Oklahoma, respectively. Moreover, the attributes of each network become sparser as the ratio of the test set increases. The proposed method has good performance in all cases. This demonstrates that MDNE is effective in attribute prediction tasks and is robust to networks with different degrees of attribute sparsity.\n\n\\subsection{Classification}\n\nClassification is one of the important tasks in network analysis: nodes are classified based on their features, and the learned representations are used as these features. The widely used classifier LIBLINEAR \\cite{fan2008liblinear} is adopted. A portion of the node representations and their labels is taken as the training set, and the rest forms the test set. For a fair comparison, the test ratio varies over $10\\%\\sim90\\%$ in increments of $10\\%$. The F-measure is a commonly adopted metric for binary classification, and Micro-F1 \\& Macro-F1 are employed to judge the classification quality. The macro-average is defined as the arithmetic mean of the F-measure over all label categories, and the micro-average is the harmonic mean of the average precision and average recall. For both metrics, higher values indicate better performance. For each training ratio, we randomly split the data into training and test sets 10 times and report the average result in Figure 7. The experiment is conducted on citeseer and cora, since they are the only datasets containing class labels for nodes.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig7}\n\\caption{Classification performance on cora and citeseer datasets.}\n\\label{F7}\n\\end{figure}\n\nMDNE has the best performance in all cases. Although node2vec performs satisfactorily on the network reconstruction task, it returns disappointing results on the classification task. This shows that the representations learned by node2vec have a task preference. AANE still has poor performance. We repeat the classification experiments with a non-linear kernel SVM classifier and 5-fold cross-validation, and the performance of AANE improves. This is because it is hard for AANE to capture the non-linear correlations between structure and attribute features, so the classes are not linearly separable in the representations it learns. An SVM with a non-linear kernel can handle such representations, which are difficult for the linear classifier LIBLINEAR to deal with. MDNE performs well with both LIBLINEAR and the SVM. Since LIBLINEAR has advantages in time complexity, it is beneficial that the learned representations are suitable for linear classifiers. The poor performance of ASNE is due to its lack of non-linear pre-processing of the original structure and attribute information. Non-linear pre-processing of the adjacency vector and attribute vector helps the model capture the high-order correlations between the two kinds of information in the subsequent learning. 
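For reference, the classification protocol used here can be sketched as follows; we assume scikit-learn's LIBLINEAR-backed logistic regression as the linear classifier, which is a stand-in for the LIBLINEAR package rather than the authors' actual script.\n\\begin{verbatim}\n# Node classification (sketch): train a linear classifier on part of the\n# learned embeddings and report Micro-\/Macro-F1 on the held-out nodes.\nfrom sklearn.linear_model import LogisticRegression   # LIBLINEAR solver\nfrom sklearn.metrics import f1_score\nfrom sklearn.model_selection import train_test_split\n\ndef classify(emb, labels, test_ratio, seed=0):\n    x_tr, x_te, y_tr, y_te = train_test_split(\n        emb, labels, test_size=test_ratio, random_state=seed)\n    clf = LogisticRegression(solver='liblinear').fit(x_tr, y_tr)\n    pred = clf.predict(x_te)\n    return (f1_score(y_te, pred, average='micro'),\n            f1_score(y_te, pred, average='macro'))\n\\end{verbatim}\nThe reported numbers are averages over 10 such random splits for each test ratio.\n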
SDNE and LE are worse than MDNE, as they do not consider attribute information when embedding networks. The significant improvement of MDNE over the baselines shows that adopting a multimodal deep model and optimizing a loss function defined from the structural and attribute proximities leads to effective representations for classification tasks.\n\n\\subsection{Parameters Sensitivity and the Impact of Pre-Processing}\nIn this section, we investigate how different choices of $\\lambda$ and of the embedding dimension, as well as the use of pre-processing, affect the performance of MDNE on the cora dataset. The results of classification tasks with different test ratios are reported. The results from other tasks and other datasets are omitted as they are similar.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig8}\n\\caption{Classification performance of MDNE on cora dataset with different weights of the attribute proximity loss $\\lambda$.}\n\\end{figure}\n\n\\subsubsection{The weight of the attribute proximity loss $\\lambda$}\nThe hyper-parameter $\\lambda$ adjusts the importance of the attribute proximity loss in the loss function. The weight of the structural proximity loss is fixed at $\\alpha = 0.5$, so $\\lambda$ determines the relative importance of the attribute proximity loss with respect to the structural proximity loss.\n\nFigure 8 shows the impact of $\\lambda$ over the range $[0, 0.04]$ at intervals of 0.005. The performance improves slightly for $\\lambda \\in [0, 0.02]$, which shows that the attribute proximity loss plays an important role in learning network representations. The performance is relatively stable for $\\lambda \\in [0.02, 0.04]$, which indicates that MDNE is not sensitive to values in this range and that a wide range of $\\lambda$ values is suitable for the proposed model in real-world applications. The large difference between $\\lambda$ and the weight of the structural proximity loss is due to the inherent characteristics of the cora dataset. The total number of edges is 5278, while the total number of attribute values is 49216. Hence the attribute proximity loss ${{\\cal L}_{att}}$ is much larger than the structural proximity loss, and a smaller weight for ${{\\cal L}_{att}}$ is necessary to balance their effects. Besides, a comparison with Figure 7 shows that even for $\\lambda = 0$, i.e., when the attribute proximity loss is ignored in the loss function, MDNE still outperforms the baselines. This observation indicates that the proposed multimodal deep autoencoder with the pre-training algorithm is able to capture the highly non-linear relationship between structure and attribute features even without the attribute proximity loss in the loss function.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig9}\n\\caption{Classification performance of MDNE on cora dataset with different embedding dimensions.}\n\\label{F9}\n\\end{figure}\n\n\\subsubsection{Embedding dimensions}\nThe effect of the embedding dimension on classification performance is shown in Figure 9. The performance initially improves as the number of dimensions increases. When the number of dimensions exceeds a threshold, the performance becomes stable. The reason is twofold. When the number of dimensions is small, increasing it incorporates more useful information into the representations, and the performance increases accordingly. 
However, too many dimensions also bring in noise and redundant information, which weakens the classification ability of the representations. Thus, it is important to select a reasonable embedding dimension. It is observed from Figure 9 that the proposed method is not very sensitive to the embedding dimension once the number of dimensions is larger than 60. Balancing accuracy against model complexity, the embedding dimension of MDNE is set to 128 in our experiments.\n\\begin{figure}[!t]\n\\centering\n\\includegraphics{Fig10}\n\\caption{Classification performance of MDNE on cora dataset with and without preprocessing.}\n\\label{F10}\n\\end{figure}\n\n\\subsubsection{Pre-processing}\nFigure 10 shows the results of MDNE with and without the pre-processing procedure. The model with the pre-processing procedure, which corresponds to Figure 2(b), has the structure \\{(2708,1433)-(300,200)-128\\}. Apart from the pre-processing layer, the subsequent deep model has an input layer taking the concatenated high-order features and an output layer. The model without the pre-processing procedure, which corresponds to Figure 2(a), has the structure \\{(2708,1433)-500-128\\}. The corresponding deep model has an input layer with the concatenated original vectors, a hidden layer, and an output layer. The total number of weight parameters in the model without the pre-processing procedure is larger than that in the model with the pre-processing procedure. As Figure 10 shows, although the model with pre-processing has lower computational complexity, its results are slightly better. Moreover, a comparison with Figure 7 shows that the proposed method without the pre-processing procedure is still better than the baselines. This demonstrates that, besides the pre-processing procedure, both the deep model and the loss function of the proposed method contribute to the good performance of MDNE.\n\n\\section{Conclusion}\nIn this paper, a Multimodal Deep Network Embedding method is proposed for learning informative network representations by integrating the structure and attribute information of nodes. Specifically, a deep model comprising multiple layers of non-linear functions is adopted to capture the non-linear network structure and the complex interactions with node attributes. In order to better extract the high-order correlations between the topological structures and attributes of nodes, a multimodal learning method is adopted to pre-process the original structure and attribute data. The structural proximity and attribute proximity are utilized to describe the structure and attribute features of the network, respectively. The loss function of the model is defined based on the two proximities, and minimizing it preserves both proximities in the embedding space. Experiments are conducted on four real-world networks to evaluate the performance of the representations obtained. Compared with the baselines, the results demonstrate that MDNE offers superior performance on various real-world applications. 
In the future, we will consider improving the efficiency of MDNE through a parallel processing framework and expanding the model to learn task-oriented representations combined with the requirements from specific applications.\n\n\n\n\n\n\n\n\n\\section*{Acknowledgments}\nThe authors would like to thank the anonymous referees for their critical appraisals and useful suggestions.\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nInclusive and semi-inclusive deep inelastic scatterings (SIDIS) are important tools to understand\nthe structure of nucleon and nucleus governed by the\nQuantum Chromodynamics (QCD) for the strong interaction.\nThe azimuthal asymmetries and their spin and\/or nuclear dependences\nof the SIDIS cross sections are directly related to the parton distribution and polarization inside nucleon\nor nuclei and therefore are the subjects of intense studies both \ntheoretically\\cite{Georgi:1977tv,Cahn:1978se,Berger:1979kz,Liang:1993re,\nMulders:1995dh,Oganesian:1997jq,Chay:1997qy,Nadolsky:2000kz,\nAnselmino:2006rv,Anselmino:2005nn,Liang:2006wp,Gao:2010mj}\nand experimentally\\cite{Aubert:1983cz,Arneodo:1986cf,Adams:1993hs,\nBreitweg:2000qh,Chekanov:2002sz,Airapetian:2001eg,Airapetian:2002mf,Airapetian:2004tw,\nAlexakhin:2005iw,Webb:2005cd,:2008rv,Alekseev:2010dm}.\nThey provide us with a glimpse into the dynamics of strong interaction within nucleons or nuclei \nand a baseline for the study of parton dynamics in other extreme conditions at high temperature and\nbaryon density.\n\n\nIn the unpolarized SIDIS experiments, the azimuthal angle $\\phi$ of the final hadrons is defined with respect to \nthe leptonic plane and is directly related to the transverse momentum of the hadron from either parton \nfragmentation or the initial and final state interaction of the parton before hadronization.\nIn this paper we will restrict our study to SIDIS $e^-+N (A) \\to e^-+q+X$ of quark jet production so that we\ndon't need to deal with the azimuthal asymmetry resulting from parton fragmentation \nand have no need to consider Boer-Mulders effect \\cite{Boer:1997nt}. 
\nWe instead focus primarily on the effect of initial and final state interaction.\nIn the large transverse momentum region, the azimuthal asymmetries arise predominately\nfrom hard gluon bremsstrahlung that can be calculated using perturbative QCD (pQCD) \\cite{Georgi:1977tv},\nand are clearly observed in experiments\\cite{Aubert:1983cz,Arneodo:1986cf,Adams:1993hs,Breitweg:2000qh,Chekanov:2002sz}.\nOn the other hand, in the small transverse momentum region $p_{h\\perp}\\sim k_\\perp \\le 1$GeV\/c,\nthe asymmetry was shown\\cite{Cahn:1978se} to arise mainly from the intrinsic\ntransverse momentum of quarks in nucleon and is\na higher twist effect proportional to $k_\\perp\/Q$ for $\\langle\\cos\\phi\\rangle$\nand to $k_\\perp^2\/Q^2$ for $\\langle\\cos 2\\phi\\rangle$.\n(Here, $p_{h\\perp}$ denotes the transverse momentum of the hadron produced,\n$k_\\perp$ is the intrinsic transverse momentum of quark in nucleon,\n$Q^2=-q^2$ and $q$ is the four-momentum transfer from the lepton).\nThe calculations in \\cite{Cahn:1978se} are based on a generalization\nof the naive parton model to include intrinsic transverse momentum.\nTo go beyond the naive parton model, one has to consider multiple soft\ngluon interaction between the struck quark and the remanent of the target nucleon or nucleus.\nInclusion of such soft gluon interaction ensures the gauge invariance of the final\nresults and relate the azimuthal asymmetry to the transverse momentum\ndependent (TMD) parton matrix elements of the nucleon or nucleus.\n\nWithin the framework of TMD parton distributions and correlations, the intrinsic\ntransverse momentum of partons arises naturally from multiple soft\ngluon interaction inside the nucleon or nucleus. The TMD parton distributions and correlations\ncan be in fact expressed in terms of the expectation values of matrix elements\nrelated to the accumulated total transverse momentum as a result of the\ncolor Lorentz force enforced upon the parton through soft gluon exchange \\cite{Liang:2008vz}.\nThese soft gluon interactions are responsible for the single-spin asymmetries\nobserved in SIDIS, $pp$ and $\\bar pp$ collisions. They also lead to the \ntransverse momentum broadening \\cite{Liang:2008vz} of hadron production in \ndeep-inelastic lepton-nucleus scattering\\cite{VanHaarlem:2007kj,VanHaarlem:2009zz,Domdey:2008aq}\nas well as the jet quenching observed at the Relativistic Heavy Ion Collider (RHIC) \\cite{Gyulassy:1993hr,Baier:1996sk,\nWiedemann:2000za,Gyulassy:2000er,Guo:2000nz,Wang:2001ifa}. 
Such transverse\nmomentum broadening inside nucleus is directly related to the gluon \nsaturation scale \\cite{Liang:2008vz,McLerran:1993ka} and can be studied directly through the nuclear \ndependence of the azimuthal asymmetry in SIDIS.\n\n\nHigher twist contributions in inclusive DIS have been studied systematically using the\ncollinear expansion technique \\cite{Ellis:1982wd,Qiu:1988dn,Qiu:1990xxa} which not only provides \na useful tool to study the higher twist contributions but also is a necessary procedure to ensure\ngauge invariance of the parton distribution and\/or correlation functions.\nIn Ref.\\cite{Liang:2006wp}, such collinear expansion is extended to the SIDIS process $e^-+N\\to e^-+q+X$\nand calculation of the TMD differential cross section and the azimuthal asymmetries up to twist-3.\nTaking into account of multiple gluon scattering, the study found the azimuthal asymmetry $\\langle \\cos\\phi\\rangle$\nproportional to a twist-3 TMD parton correlation \nfunction $f_{q\\perp}(x,k_\\perp)$ defined as,\n\\begin{eqnarray}\nf_{q\\perp}^N(x,k_\\perp) &&\n=\\int \\frac{p^+dy^- d^2y_{\\perp}}{(2\\pi)^3}\ne^{ix p^+ y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}} \\nonumber \\\\\n&& \\times\\langle N|\\bar{\\psi}(0)\\frac{\\slash{\\hspace{-5pt}k_\\perp}}{2k_\\perp^2}{\\cal{L}}(0;y)\\psi(y)|N\\rangle,\n\\label{eq:fqperp}\n\\end{eqnarray}\nwhere ${\\cal{L}}(0;y)$ is the gauge link,\n\\begin{eqnarray}\n\\label{CollinearGL}\n&&{\\cal{L}}(0;y)=\\mathcal{L}^\\dag_{\\parallel}(\\infty,\\vec{0}_\\perp; 0,\\vec{0})\n\\mathcal{L}_\\perp(\\infty,\\vec 0_\\perp;\\infty,\\vec{y}_{\\perp}) \\nonumber\\\\\n&&\\phantom{{\\cal{L}}(0;y)=}\n\\mathcal{L}_{\\parallel}(\\infty,\\vec{y}_{\\perp}; y^-,\\vec{y}_{\\perp},\\vec{y}_\\perp),\\nonumber\\\\\n\\label{TMDGL}\n&&{\\cal{L}}_{\\parallel}(\\infty,\\vec{y}_{\\perp}; y^-,\\vec y_\\perp)\\equiv \nP e^{- i g \\int_{y^-}^\\infty d \\xi^{-} A^+ ( \\xi^-, \\vec{y}_{\\perp})} , \\nonumber \\\\\n&&\\mathcal{L}_\\perp(\\infty,\\vec 0_\\perp;\\infty,\\vec{y}_{\\perp}) \\equiv \nPe^{-ig\\int_{\\vec 0_\\perp}^{\\vec y_\\perp} d\\vec\\xi_\\perp\\cdot\n\\vec A_\\perp(\\infty,\\vec\\xi_\\perp)},\n\\end{eqnarray}\nfrom the resummation of multiple soft gluon interaction that ensures the gauge invariance of the\ntwist-3 parton correlation function in Eq.~(\\ref{eq:fqperp}) under any gauge transformation.\nThe asymmetry obtained within this generalized collinear expansion method reduces to that in the\nnaive parton model \\cite{Cahn:1978se} if and only if one neglects the soft gluon interaction as contained\nin the gauge link or equivalently by setting the strong coupling constant $g=0$ in the final result.\nMeasurements of $\\langle \\cos\\phi\\rangle$ in $e^-+N\\to e^-+q+X$ and its $k_\\perp$-dependence\ntherefore provide an unique determination of this new parton correlation function in Eq.~(\\ref{eq:fqperp}).\nFurthermore, the nuclear dependence of the asymmetry \\cite{Liang:2008vz} from multiple soft gluon\ninteraction within the target nucleus can probe the transverse momentum broadening or the jet\nquenching parameter in cold nuclear matter \\cite{Gao:2010mj} which also determines the gluon\nsaturation scale in cold nuclei.\n\nIn this paper, we present a complete calculation of the hadronic tensor and the differential cross section\nfor $e^-+N\\to e^-+q+X$ up to twist-4. We study in particular the azimuthal asymmetry $\\langle\\cos2\\phi\\rangle$\nin terms of the corresponding TMD quark correlation functions and its nuclear dependence.\nFor completeness, in Sec. 
II, we present the formulae for calculating the hadronic tensor and\ndifferential cross sections within the framework of generalized collinear expansion.\nIn Sec. III, we present the cross section and discuss azimuthal asymmetry $\\langle\\cos2\\phi\\rangle$\nincluding its nuclear dependence with a Gaussian ansatz for the TMD correlation functions. \nA summary is given in Sec. IV.\n\n\\section{Hadronic tensor $W_{\\mu\\nu}$ in $e^-+N\\to e^-+q+X$ up to twist-4}\n\nWe consider the SIDIS process $e^-+N\\to e^-+q+X$ with unpolarized beam and target.\nThe differential cross section is given by,\n\\begin{equation}\nd\\sigma=\\frac{\\alpha_{em}^2e_q^2}{sQ^4}L^{\\mu\\nu}(l,l')\\frac{d^2W_{\\mu\\nu}}{d^2k'_\\perp}\n\\frac{d^3l'd^2k'_\\perp}{ 2E_{l'}},\n\\end{equation}\nwhere $l$ and $l'$ are respectively the four-momenta of the incoming and outgoing leptons,\n$p$ is the four-momentum of the incoming nucleon $N$,\n$k'$ is the four-momentum of the outgoing quark.\nWe neglect the masses and use the light-cone coordinates.\nThe unit vectors are taken as,\n$\\bar n^\\mu=(1,0,0,0)$, $n^\\mu=(0,1,0,0)$, $n_{\\perp 1}^\\mu=(0,0,1,0)$,\n$n_{\\perp 2}^\\mu=(0,0,0,1)$.\nWe chose the coordinate system in the way so that,\n$p=p^+\\bar n$, $q=-x_Bp+nQ^2\/(2x_Bp^+)$, $l_\\perp=|\\vec l_\\perp|n_{\\perp 1}$,\nand $k_\\perp=(0,0,\\vec k_\\perp)$;\nwhere $x_B=Q^2\/2p\\cdot q$ is the Bjorken-$x$ and $y=p\\cdot q\/p\\cdot l$.\nThe leptonic tensor $L^{\\mu\\nu}$ is defined as usual ,\n\\begin{equation}\nL^{\\mu\\nu}(l,l')=4[l^\\mu{l'}^\\nu+l^\\nu{l'}^\\mu-(l\\cdot l')g^{\\mu\\nu}],\n\\end{equation}\nand the differential hadronic tensor is,\n\\begin{equation}\n\\frac{d^2W_{\\mu\\nu}}{d^2k'_\\perp}=\\int \\frac{dk'_z}{(2\\pi)^3 2E_{k'}}W_{\\mu\\nu}^{(si)}(q,p,k'),\n\\end{equation}\n\\begin{eqnarray}\nW_{\\mu\\nu}^{(si)}(q,p,k')&=&\n\\frac{1}{2\\pi}\\sum_X \\langle N| J_\\mu(0)|k',X\\rangle \\langle k',X| J_\\nu(0)|N\\rangle \\nonumber\\\\\n&&\\times (2\\pi)^4\\delta^4(p+q-k'-p_X),\n\\end{eqnarray}\nwhere the superscript $(si)$ denotes SIDIS.\nIt has been shown\\cite{Liang:2006wp} that, after collinear expansion,\nthe hadronic tensor can be expressed in an expansion series characterized by the number of\ncovariant derivatives in the parton matrix elements in each term,\n\\begin{equation}\n\\frac{d^2W_{\\mu \\nu}}{d^2{k}_\\perp}=\\sum_{j=0}^\\infty\n\\frac{d^2\\tilde W^{(j)}_{\\mu \\nu}}{d^2{k}_\\perp},\n\\end{equation}\n\\begin{widetext}\n\\begin{eqnarray}\n\\label{eq:Wsi0}\n&&\\frac{d\\tilde W_{\\mu\\nu}^{(0)}}{d^2k'_\\perp}\n=\\frac{1}{2\\pi} \\int dx d^2k_\\perp\n{\\rm Tr}[\\hat H_{\\mu\\nu}^{(0)}(x)\\ \\hat \\Phi^{(0)N}(x,k_\\perp)] \\delta^{(2)}(\\vec k_\\perp-\\vec{k'}_\\perp);\\\\\n\\label{eq:Wsi1}\n&&\\frac{d\\tilde W_{\\mu\\nu}^{(1)}}{d^2k'_\\perp}\n=\\frac{1}{2\\pi}\n\\int dx_1d^2k_{1\\perp} dx_2d^2k_{2\\perp} \\sum_{c=L,R}\n{\\rm Tr}\\bigl[\\hat H_{\\mu\\nu}^{(1,c)\\rho}(x_1,x_2) \\omega_\\rho^{\\ \\rho'}\n\\hat \\Phi^{(1)N}_{\\rho'}(x_1,k_{1\\perp},x_2,k_{2\\perp})\\bigr] \\delta^{(2)}(\\vec k_{c\\perp}-\\vec{k'}_\\perp); \\phantom{XX}\\\\\n\\label{eq:Wsi2}\n&&\\frac{d\\tilde W_{\\mu\\nu}^{(2)}}{d^2k'_\\perp}=\\frac{1}{2\\pi}\n\\int dx_1d^2k_{1\\perp}dx_2d^2k_{2\\perp}dxd^2k_{\\perp}\\nonumber\\\\\n&&\\phantom{\\frac{d\\tilde W_{\\mu\\nu}^{(2)}}{d^2k'_\\perp}=\\frac{1}{2\\pi}}\n\\sum_{c=L,R,M}{\\rm Tr}\\bigl[\\hat H_{\\mu\\nu}^{(2,c)\\rho\\sigma}(x_1,x_2,x)\n\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'} 
\\hat\\Phi^{(2)N}_{\\rho'\\sigma'}(x_1,k_{1\\perp},x_2,k_{2\\perp},x,k_\\perp)\\bigr]\n\\delta^{(2)}(\\vec k_{c\\perp}-\\vec{k'}_\\perp).\\\n\\end{eqnarray}\nwhere, for different cuts $c=L$, $R$ or $M$, $\\vec k_{c\\perp}$ denotes\n$\\vec k_{L\\perp}=\\vec k_{1\\perp}$, $\\vec k_{R\\perp}=\\vec k_{2\\perp}$, and\n$\\vec k_{M\\perp}=\\vec k_{\\perp}$;\n$\\omega_\\rho^{\\ \\rho'}=g_\\rho^{\\ \\rho'}-\\bar n_\\rho n^{\\rho'}$ is a projection operator.\nThe matrix elements are defined as,\n\\begin{eqnarray}\n\\hat\\Phi^{(0)N}(x,k_\\perp)&=&\\int \\frac{p^+dy^- d^2y_\\perp}{(2\\pi)^3}\ne^{ix p^+ y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar{\\psi}(0){\\cal{L}}(0;y)\\psi(y)|N\\rangle\n\\label{eq:Phi0def}\\\\\n\\hat\\Phi^{(1)N}_\\rho(x_1,k_{1\\perp},x_2,k_{2\\perp})\n&=&\\int \\frac{p^+dy^- d^2y_\\perp}{(2\\pi)^3}\\frac{p^+dz^-d^2z_\\perp}{(2\\pi)^3}\ne^{ix_2p^+z^- -i\\vec{k}_{2\\perp}\\cdot \\vec{z}_{\\perp}+\nix_1p^+(y^--z^-)-i\\vec{k}_{1\\perp}\\cdot (\\vec y_\\perp-\\vec z_\\perp)} \\nonumber\\\\\n&&\\langle N|\\bar\\psi(0) {\\cal L}(0;z)D_\\rho(z){\\cal L}(z;y)\\psi(y)|N\\rangle,\n\\label{eq:Phi1def}\\\\\n\\hat\\Phi^{(2)N}_{\\rho\\sigma}(x_1,k_{1\\perp},x_2,k_{2\\perp},x,k_\\perp)&=&\n\\int \\frac{p^+dz^-d^2z_\\perp}{(2\\pi)^3} \\frac{p^+dy^- d^2y_\\perp}{(2\\pi)^3} \\frac{p^+d{y'}^- d^2{y'}_\\perp}{(2\\pi)^3}\\nonumber\\\\\n&&e^{ix_2p^+z^- -i\\vec{k}_{2\\perp}\\cdot \\vec{z}_{\\perp}+\nixp^+({z'}^--z^-)-i\\vec{k}_{\\perp}\\cdot (\\vec {z'}_\\perp-\\vec {z}_\\perp)+\nix_1p^+({y}^--{z'}^-)-i\\vec{k}_{1\\perp}\\cdot (\\vec {y}_\\perp-\\vec {z'}_\\perp)} \\nonumber\\\\\n&&\\langle N|\\bar\\psi(0){\\cal L}(0;z) D_\\rho(z) {\\cal L}(z;z')D_\\sigma(z'){\\cal L}(z';y)\\psi(y)|N\\rangle,\n\\label{eq:Phi2def}\n\\end{eqnarray}\nwhere ${\\cal{L}}(0;y)$ is the gauge link as defined in Eq.~(\\ref{eq:fqperp}), and also in the remainder of this paper, for brevity, \nunless explicitly specified, the coordinate $y$ in the field operator denotes $(0,y^-,\\vec{y}_{\\perp})$.\n\n\n\n\nThe hard parts after the collinear expansion are given as \\cite{Liang:2006wp},\n\\begin{eqnarray}\n\\hat H_{\\mu\\nu}^{(0)}(x)&=&\\frac{2\\pi}{2q\\cdot p}\n\\gamma_\\mu(\\slash{\\hspace{-5pt}q}+x\\slash{\\hspace{-5pt}p})\\gamma_\\nu\\delta(x-x_B),\\\\\n\\hat H_{\\mu\\nu}^{(1,L)\\rho}(x_1,x_2)&=&\\frac{2\\pi}{(2q\\cdot p)^2}\n\\frac{\\gamma_\\mu(\\slash{\\hspace{-5pt}q}+x_2\\slash{\\hspace{-5pt}p})\\gamma^\\rho(\\slash{\\hspace{-5pt}q}+x_1\\slash{\\hspace{-5pt}p})\n\\gamma_\\nu}{x_2-x_B-i\\varepsilon}\\delta(x_1-x_B),\\\\\n\\hat H_{\\mu\\nu}^{(2,L)\\rho\\sigma}(x_1,x_2,x)&=&\\frac{2\\pi}{(2q\\cdot p)^3}\n\\frac{\\gamma_\\mu(\\slash{\\hspace{-5pt}q}+x_2\\slash{\\hspace{-5pt}p})\\gamma^\\rho(\\slash{\\hspace{-5pt}q}+x\\slash{\\hspace{-5pt}p})\n\\gamma^\\sigma(\\slash{\\hspace{-5pt}q}+x_1\\slash{\\hspace{-5pt}p})\\gamma_\\nu}\n{(x-x_B-i\\varepsilon)(x_2-x_B-i\\varepsilon)}\\delta(x_1-x_B).\n\\end{eqnarray}\nThese equations form the basis for calculating the hadronic tensor in $e^-+N\\to e^-+q+X$.\nDue to the existence of the projection operators $\\omega_\\rho^{\\ \\rho'}$ and $\\omega_\\sigma^{\\ \\sigma'}$,\nthe hard parts can be simplified to,\n\\begin{eqnarray}\n&&\\hat H^{(0)}_{\\mu\\nu}(x)=\\pi\\hat h^{(0)}_{\\mu\\nu}\\delta(x-x_B),\n\\label{eq:H0simple}\\\\\n&&\\hat H^{(1,L)\\rho}_{\\mu\\nu}(x_1,x_2)\\omega_\\rho^{\\ \\rho'}\n=\\frac{\\pi}{2q\\cdot p}\\hat h^{(1)\\rho}_{\\mu\\nu}\\omega_\\rho^{\\ \\rho'}\\delta(x_1-x_B),\n\\label{eq:H1Lsimple}\\\\\n&&\\hat 
H^{(2,L)\\rho\\sigma}_{\\mu\\nu}(x_1,x_2,x)\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'}=\n\\frac{2\\pi}{(2q\\cdot p)^2}\\bigl[\\bar n^\\rho\\hat h^{(1)\\sigma}_{\\mu\\nu}+\\frac{\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}}{x_2-x_B-i\\varepsilon}\\bigr]\n\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'}\\delta(x_1-x_B),\n\\label{eq:H2Lsimple}\\\\\n&& \\hat H^{(2,M)\\rho\\sigma}_{\\mu\\nu}(x_1,x_2,x)\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'}=\n\\frac{2\\pi }{(2q\\cdot p)^2}\\hat h^{(2)\\rho\\sigma}_{\\mu\\nu}\n\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'}\\delta(x-x_B),\n\\label{eq:H2Msimple}\n\\end{eqnarray}\nwhere $\\hat h^{(0)}_{\\mu\\nu}=\\gamma_\\mu\\slash{\\hspace{-5pt}n}\\gamma_\\nu\/p^+$,\n$\\hat h^{(1)\\rho}_{\\mu\\nu}=\\gamma_\\mu\\slash{\\hspace{-5pt}\\bar n}\\gamma^\\rho\\slash{\\hspace{-5pt}n}\\gamma_\\nu$,\n$\\hat h^{(2)\\rho\\sigma}_{\\mu\\nu}=p^+\\gamma_\\mu\\slash{\\hspace{-5pt}\\bar n}\\gamma^\\rho\n\\slash{\\hspace{-5pt} n}\\gamma^\\sigma\\slash{\\hspace{-5pt}\\bar n}\\gamma_\\nu\/2$ and\n$\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}=q^-\\gamma_\\mu\\gamma^\\rho\\slash{\\hspace{-5pt}n}\\gamma^\\sigma\\gamma_\\nu$.\nWe insert them into Eqs.(\\ref{eq:Wsi0}-\\ref{eq:Wsi2}) and obtain,\n\\begin{eqnarray}\n\\frac{d^2\\tilde W^{(0)}_{\\mu\\nu}}{d^2k_\\perp} &=&\n\\frac{1}{2}{\\rm Tr}\\bigl[\\hat h^{(0)}_{\\mu\\nu}\\hat\\Phi^{(0)N}(x_B,k_\\perp)\\bigr],\n\\label{eq:W0} \\\\\n\\frac{d^2\\tilde W^{(1,L)}_{\\mu\\nu}}{d^2k_\\perp} &=&\n\\frac{1}{4q\\cdot p}{\\rm Tr}\\bigl[\\hat h^{(1)\\rho}_{\\mu\\nu}\\omega_\\rho^{\\ \\rho'}\\hat\\varphi^{(1,L)N}_{\\rho'}(x_B,k_\\perp)\\bigr],\n\\label{eq:W1L} \\\\\n\\frac{d^2\\tilde W^{(2,L)}_{\\mu\\nu}}{d^2k_\\perp} &=&\n\\frac{1}{(2q\\cdot p)^2}\\left\\{{\\rm Tr}\\bigl[\\hat h^{(1)\\rho}_{\\mu\\nu}\n\\omega_\\rho^{\\ \\rho'}\\hat\\phi^{(2,L)N}_{\\rho'}(x_B,k_\\perp)\\bigr]\n+\n{\\rm Tr}\\bigl[\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'}\n\\hat\\varphi^{(2,L)N}_{\\rho'\\sigma'}(x_B,k_\\perp)\\bigr]\\right\\},\n\\label{eq:W2L} \\\\\n\\frac{d^2\\tilde W^{(2,M)}_{\\mu\\nu}}{d^2k_\\perp} &=&\n\\frac{1}{(2q\\cdot p)^2}\n{\\rm Tr}\\bigl[\\hat h^{(2)\\rho\\sigma}_{\\mu\\nu}\\omega_\\rho^{\\ \\rho'}\\omega_\\sigma^{\\ \\sigma'}\n\\hat\\varphi^{(2,M)N}_{\\rho'\\sigma'}(x_B,k_\\perp)\\bigr].\n\\label{eq:W2M}\n\\end{eqnarray}\nThe correlation matrices are defined as,\n\\begin{eqnarray}\n\\hat\\varphi^{(1,L)N}_\\rho(x_1,k_{1\\perp})&\\equiv&\\int dx_2d^2k_{2\\perp}\\hat\\Phi^{(1)N}_\\rho(x_1,k_{1\\perp},x_2,k_{2\\perp}),\\\\\n\\hat\\varphi^{(2,L)N}_{\\rho\\sigma}(x_1,k_{1\\perp})&\\equiv&\\int dxd^2k_{\\perp}\n\\frac{dx_2d^2k_{2\\perp}}{x_2-x_1-i\\varepsilon}\n\\hat\\Phi^{(2)N}_{\\rho\\sigma}(x_1,k_{1\\perp},x_2,k_{2\\perp},x,k_\\perp),\\\\\n\\hat\\varphi^{(2,M)N}_{\\rho\\sigma}(x,k_{\\perp})&\\equiv&\\int dx_1d^2k_{1\\perp}dx_2d^2k_{2\\perp}\n\\hat\\Phi^{(2)N}_{\\rho\\sigma}(x_1,k_{1\\perp},x_2,k_{2\\perp},x,k_\\perp),\\\\\n\\hat\\phi^{(2,L)N}_\\sigma(x_1,k_{1\\perp})&\\equiv&\\int dxd^2k_\\perp dx_2d^2k_{2\\perp}\n\\hat\\Phi^{(2)N}_{\\rho\\sigma}(x_1,k_{1\\perp},x_2,k_{2\\perp},x,k_\\perp).\n\\end{eqnarray}\nThey are given by,\n\\begin{eqnarray}\n&& \\hat\\varphi^{(1,L)N}_\\rho(x,k_\\perp)\n=\\int \\frac{p^+dy^- d^2y_\\perp}{(2\\pi)^3}e^{ixp^+y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar\\psi(0) D_\\rho(0){\\cal L}(0;y)\\psi(y)|N\\rangle,\\\\\n&&\\hat\\varphi^{(2,L)N}_{\\rho\\sigma}(x,k_{\\perp})\n=\\int \\frac{dx_2}{x_2-x-i\\varepsilon}\\frac{p^+dy^- 
d^2y_\\perp}{(2\\pi)^3}\\frac{p^+dz^-}{2\\pi}\ne^{ix_2p^+z^-+ixp^+(y^--z^-)-i\\vec{k}_{\\perp}\\cdot \\vec{y}_\\perp} \\nonumber\\\\\n&&\\phantom{XXXXXXXXXXXXX}\n\\langle N|\\bar\\psi(0){\\cal L}(0;z^-,y_\\perp) D_\\rho(z^-,y_\\perp)D_\\sigma(z^-,y_\\perp){\\cal L}(z^-,\\vec y_\\perp;y)\\psi(y)|N\\rangle,\\\\\n&&\\hat\\varphi^{(2,M)N}_{\\rho\\sigma}(x,k_{\\perp})\n=\\int \\frac{p^+dy^- d^2y_\\perp}{(2\\pi)^3}e^{ixp^+y^- -i\\vec{k}_\\perp\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar\\psi(0) D_\\rho(0){\\cal L}(0;y)D_\\sigma(y)\\psi(y)|N\\rangle,\\\\\n&&\\hat\\phi^{(2,L)N}_\\sigma(x,k_{\\perp})\n=\\int \\frac{p^+dy^- d^2y_\\perp}{(2\\pi)^3}e^{ixp^+y^- -i\\vec{k}_\\perp\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar\\psi(0) D^-(0)D_\\sigma(0){\\cal L}(0;y)\\psi(y)|N\\rangle.\n\\end{eqnarray}\n\nWe note that,\n$\\tilde W^{(0)*}_{\\mu\\nu}=\\tilde W^{(0)}_{\\nu\\mu}$,\n$\\tilde W^{(2,M)*}_{\\mu\\nu}=\\tilde W^{(2,M)}_{\\nu\\mu}$,\n$\\tilde W^{(1,R)}_{\\mu\\nu}=\\tilde W^{(1,L)*}_{\\nu\\mu}$,\nand $\\tilde W^{(2,R)}_{\\mu\\nu}=\\tilde W^{(2,L)*}_{\\nu\\mu}$.\nHence, if we divide $W_{\\mu\\nu}$ into a $\\mu\\leftrightarrow\\nu$ symmetric part and an anti-symmetric part,\nand denote $W_{\\mu\\nu}=W_{S,\\mu\\nu}+iW_{A,\\mu\\nu}$, we obtain,\n\\begin{equation}\n\\frac{d^2W_{S,\\mu\\nu}}{d^2k_\\perp}=\n\\frac{d^2\\tilde W^{(0)}_{S,\\mu\\nu}}{d^2k_\\perp}+\n2{\\rm Re}\\frac{d^2\\tilde W^{(1,L)}_{S,\\mu\\nu}}{d^2k_\\perp}+\n2{\\rm Re}\\frac{d^2\\tilde W^{(2,L)}_{S,\\mu\\nu}}{d^2k_\\perp}+\\frac{d^2\\tilde W^{(2,M)}_{S,\\mu\\nu}}{d^2k_\\perp},\n\\end{equation}\n\\begin{equation}\n\\frac{d^2W_{A,\\mu\\nu}}{d^2k_\\perp}=\n\\frac{d^2\\tilde W^{(0)}_{A,\\mu\\nu}}{d^2k_\\perp}+\n2{\\rm Im}\\frac{d^2\\tilde W^{(1,L)}_{S,\\mu\\nu}}{d^2k_\\perp}+\n2{\\rm Im}\\frac{d^2\\tilde W^{(2,L)}_{S,\\mu\\nu}}{d^2k_\\perp}+\\frac{d^2\\tilde W^{(2,M)}_{A,\\mu\\nu}}{d^2k_\\perp}.\n\\end{equation}\n\nThe anti-symmetric part contributes only in reactions with polarized lepton.\nIn this paper, we concentrate on the unpolarized reactions and calculate the symmetric part in the following.\n\n\nNow, we continue with a complete calculation of the hadronic tensor $d^2 W_{\\mu \\nu}\/d^2k_\\perp$\nin the unpolarized $e^-+N\\to e^-+q+X$ up to twist-4 level.\nFor this purpose, we need to calculate $d^2 W_{\\mu \\nu}\/d^2k_\\perp$ up to\n$d^2\\tilde W_{\\mu \\nu}^{(2)}\/d^2k_\\perp$ and we now present\nthe calculations of each term in the following.\n\nThe contribution from $d^2\\tilde W_{\\mu \\nu}^{(0)}\/d^2k_\\perp$ is the easiest one to calculate.\nBecause $\\hat H^{(0)}_{\\mu\\nu}(x)$ contains 3 $\\gamma$-matrices, only $\\gamma^\\alpha$\nterm of $\\hat\\Phi^{(0)}(x,k_\\perp)$ contributes in the unpolarized case so we need only to consider\n$\\hat\\Phi^{(0)N}(x,k_\\perp)=\\gamma^\\alpha\\Phi^{(0)N}_\\alpha(x,k_\\perp)\/2$,\n\\begin{equation}\n\\Phi^{(0)N}_{\\alpha}(x,k_{\\perp})\n=\\int \\frac{p^+dy^- d^{2}y_{\\perp}}{(2\\pi)^3}\ne^{ix p^+ y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar{\\psi}(0)\\frac{\\gamma_\\alpha}{2}{\\cal{L}}(0;y)\\psi(y)|N\\rangle\n=p_\\alpha f_q^N+k_{\\perp\\alpha} f_{q\\perp}^N+\\frac{M^2}{p^+}n_\\alpha f_{q(-)}^N.\n\\label{eq:Phi0alpha}\n\\end{equation}\nand obtain the result for $d^2\\tilde W_{\\mu \\nu}^{(0)}\/d^2k_\\perp$ as,\n\\begin{equation}\n\\frac{d^2\\tilde W_{\\mu \\nu}^{(0)}}{d^2k_\\perp}=-d_{\\mu\\nu}f_q^N(x_B,k_\\perp)+\n\\frac{1}{q\\cdot p}k_{\\perp\\{\\mu}(q+x_Bp)_{\\nu\\}}f_{q\\perp}^N(x_B,k_\\perp)+\n2(\\frac{M}{q\\cdot p})^2(q+x_Bp)_\\mu (q+x_Bp)_\\nu 
f_{q(-)}^N(x_B,k_\\perp),\n\\end{equation}\nwhere $d^{\\mu\\nu}=g^{\\mu\\nu}-\\bar{n}^{\\mu}n^{\\nu}-\\bar{n}^{\\nu}n^{\\mu}$ and\n$A_{\\{\\mu}B_{\\nu\\}}\\equiv A_\\mu B_\\nu+A_\\nu B_\\mu$,\n$A_{[\\mu}B_{\\nu]}\\equiv A_\\mu B_\\nu-A_\\nu B_\\mu$.\nThe TMD quark distribution\/correlation functions are given by,\n\\begin{eqnarray}\n&&f_q^N(x,k_\\perp)=\\frac{n^\\alpha}{p^+}\\Phi^{(0)N}_\\alpha(x,k_\\perp)=\\int \\frac{dy^- d^2y_{\\perp}}{(2\\pi)^3}\ne^{ix p^+ y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar{\\psi}(0)\\frac{\\gamma^+}{2}{\\cal{L}}(0;y)\\psi(y)|N\\rangle,\n\\label{eq:fqn} \\\\\n&& k_\\perp^\\alpha f_{q\\perp}^N(x_B,k_\\perp)=d^{\\alpha\\beta}\\Phi^{(0)N}_\\beta(x,k_\\perp)\n=\\int \\frac{p^+dy^- d^2y_{\\perp}}{(2\\pi)^3}\ne^{ix p^+ y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar{\\psi}(0)\\frac{\\gamma^\\alpha_\\perp}{2}{\\cal{L}}(0;y)\\psi(y)|N\\rangle,\n\\label{eq:fqnperp} \\\\\n&&f_{q(-)}^N(x_B,k_\\perp)=\\frac{p^+}{M^2}\\bar n^\\alpha\\Phi^{(0)N}_\\alpha(x,k_\\perp)\n=\\frac{p^+}{M^2}\\int \\frac{p^+dy^- d^2y_{\\perp}}{(2\\pi)^3}\ne^{ix p^+ y^- -i\\vec{k}_{\\perp}\\cdot \\vec{y}_{\\perp}}\n\\langle N|\\bar{\\psi}(0)\\frac{\\gamma^-}{2}{\\cal{L}}(0;y)\\psi(y)|N\\rangle.\n\\label{eq:fqnminus}\n\\end{eqnarray}\n\nBecause $\\hat h^{(1)\\rho}_{\\mu\\nu}$ contains 5 $\\gamma$-matrices,\nwe have contributions from $\\gamma_\\alpha$ and $\\gamma_5\\gamma_\\alpha$ terms of $\\varphi^{(1,L)N}_\\rho$,\ni.e., we need to consider\n$\\hat\\varphi^{(1,L)N}_\\rho(x,k_\\perp)=[\\gamma^\\alpha\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp)-\n\\gamma_5\\gamma^\\alpha\\tilde\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp)]\/2\n$ \nand obtain,\n\\begin{equation}\n\\frac{d^2\\tilde W_{\\mu\\nu}^{(1,L)}}{d^2k_\\perp}\n=\\frac{1}{2p\\cdot q} \\Bigl[h^{(1)\\rho\\alpha}_{\\mu\\nu}\\omega_\\rho^{\\ \\rho'}\\varphi^{(1)N}_{\\rho'\\alpha}(x_B,k_\\perp)\n-\\tilde h^{(1)\\rho\\alpha}_{\\mu\\nu}\\omega_\\rho^{\\ \\rho'}\\tilde\\varphi^{(1)N}_{\\rho'\\alpha}(x_B,k_\\perp)\\Bigr],\n\\end{equation}\nwhere $h^{(1)\\rho\\alpha}_{\\mu\\nu}\\equiv {\\rm Tr}[\\gamma^\\alpha \\hat h^{(1)\\rho}_{\\mu\\nu}]\/4$,\n$\\tilde h^{(1)\\rho\\alpha}_{\\mu\\nu}\\equiv {\\rm Tr}[\\gamma_5\\gamma^\\alpha \\hat h^{(1)\\rho}_{\\mu\\nu}]\/4$ and,\n\\begin{eqnarray}\n&&\\varphi^{(1)N}_{\\rho\\alpha}(x, k_\\perp)\n=\\int \\frac{p^+dy^-d^2y_\\perp}{(2\\pi)^3}e^{ixp^+y^--i\\vec y_\\perp\\cdot\\vec k_\\perp}\n\\langle N|\\bar\\psi(0)\\frac{\\gamma_\\alpha}{2}{\\cal L}(0;y)D_\\rho(y)\\psi(y)|N\\rangle,\n\\label{eq:varphi1}\\\\\n&&\\tilde\\varphi^{(1)N}_{\\rho\\alpha}(x, k_\\perp)\n=\\int \\frac{p^+dy^-d^2y_\\perp}{(2\\pi)^3}e^{ixp^+y^--i\\vec y_\\perp\\cdot\\vec k_\\perp}\n\\langle N|\\bar\\psi(0)\\frac{\\gamma_5\\gamma_\\alpha}{2}{\\cal L}(0;y)D_\\rho(y)\\psi(y)|N\\rangle.\\\n\\label{eq:tildevarphi1}\n\\end{eqnarray}\nAfter evaluating the two traces in $h^{(1)\\rho\\alpha}_{\\mu\\nu}$ and $\\tilde h^{(1)\\rho\\alpha}_{\\mu\\nu}$, we\nobtain the symmetric parts as,\n\\begin{eqnarray}\n&&h^{(1)\\rho\\alpha}_{S,\\mu\\nu}=-g^\\alpha_{\\ \\mu}d^\\rho_{\\ \\nu}-g^\\alpha_{\\ \\nu}d^\\rho_{\\ \\mu}+g_{\\mu\\nu}d^{\\rho\\alpha},\\\\\n&& \\tilde h^{(1)\\rho\\alpha}_{S,\\mu\\nu}=ig^\\alpha_{\\ \\mu}\\varepsilon_{\\perp\\nu}^\\rho\n+ig^\\alpha_{\\ \\nu}\\varepsilon_{\\perp\\mu}^\\rho-ig_{\\mu\\nu}\\varepsilon_{\\perp}^{\\rho\\alpha},\n\\end{eqnarray}\nwhere $\\epsilon_{\\perp\\rho\\gamma}\\equiv\\epsilon_{\\alpha\\beta\\rho\\gamma}\\bar n^\\alpha n^\\beta$.\nUp to twist-4, the contributing terms of 
$\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp)$\nand $\\tilde\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp)$\nare respectively,\n\\begin{eqnarray}\n\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp)&=&\np_\\alpha k_{\\perp\\rho}\\varphi^{(1)N}_\\perp(x,k_\\perp)+\n(k_{\\perp\\alpha}k_{\\perp\\rho}-\\frac{k_\\perp^2}{2}d_{\\rho\\alpha})\\varphi^{(1)N}_{\\perp 2}(x,k_\\perp)+\n\\frac{k_\\perp^2}{2}(\\bar n_{\\{\\alpha}n_{\\rho\\}}-d_{\\rho\\alpha})\\varphi^{(1)N}_{\\perp 3}(x,k_\\perp),\\\\\n\\tilde\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp)&=&\nip_\\alpha\\varepsilon_{\\perp\\rho\\gamma}k_\\perp^\\gamma \\tilde\\varphi^{(1)N}_\\perp(x,k_\\perp)+\n\\frac{i}{2}k_{\\perp\\{\\alpha}\\varepsilon_{\\perp\\rho\\}\\gamma}k_\\perp^\\gamma \\tilde\\varphi^{(1)N}_{\\perp2}(x,k_\\perp)+\n\\frac{i}{2}k_{\\perp[\\alpha}\\varepsilon_{\\perp\\rho]\\gamma}k_\\perp^\\gamma\\tilde\\varphi^{(1)N}_{\\perp3}(x,k_\\perp).\n\\end{eqnarray}\nThe result for $d^2\\tilde W_{S,\\mu\\nu}^{(1,L)}\/d^2k_\\perp$ is,\n\\begin{eqnarray}\n&&\\frac{d^2\\tilde W_{S,\\mu\\nu}^{(1,L)}}{d^2k_\\perp}\n=-\\frac{1}{2q\\cdot p}\\bigl\\{ (p_\\mu k_{\\perp\\nu}+p_\\nu k_{\\perp\\mu})\n[\\varphi^{(1)N}_{\\perp}(x_B,k_\\perp)-\\tilde\\varphi^{(1)N}_{\\perp}(x_B,k_\\perp)]\\nonumber\\\\\n&&\\phantom{XXXXXXXX}\n+(2k_{\\perp\\mu}k_{\\perp\\nu}-k_\\perp^2d_{\\mu\\nu})\n [\\varphi^{(1)N}_{\\perp2}(x_B,k_\\perp)-\\tilde\\varphi^{(1)N}_{\\perp2}(x_B,k_\\perp)]\\nonumber\\\\\n&&\\phantom{XXXXXXXX}\n+k_\\perp^2(g_{\\mu\\nu}-d_{\\mu\\nu})\n[\\varphi^{(1)N}_{\\perp3}(x_B,k_\\perp)-\\tilde\\varphi^{(1)N}_{\\perp3}(x_B,k_\\perp)]\\bigr\\}.\n\\label{eq:W1Lres}\n\\end{eqnarray}\n\nUp to twist-4 level, we need only to consider $\\slash{\\hspace{-5pt}p}$ and the $\\gamma_5\\slash{\\hspace{-5pt}p}$-term\nin the calculations of $d\\tilde W^{(2)}_{\\mu \\nu}\/d^2k_\\perp$.\nFor the first term in Eq.(\\ref{eq:W2L}), because of $\\omega_\\rho^{\\ \\rho'}$ and\n$n_\\rho\\hat h^{(1)\\rho}_{\\mu\\nu}=0$, we need only to consider the $k_{\\perp\\rho}$ terms and\nwe found out that they contribute only at twist-5 or higher level.\nFor the second term, because $n_\\rho\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}=n_\\sigma\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}=0$ and\n$\\hat\\varphi^{(2,L)N}_{\\rho\\sigma}=\\hat\\varphi^{(2,L)N}_{\\sigma\\rho}$,\nwe need to consider only $k_{\\perp\\rho}k_{\\perp\\sigma}$ and $k_\\perp^2d_{\\rho\\alpha}$\nfor the tensor term and $k_{\\perp\\{\\rho}\\varepsilon_{\\perp\\sigma\\}\\gamma}k_\\perp^\\gamma$\nfor the pseudo-tensor term.\nFurthermore,\n\\begin{eqnarray}\n&&k^2_\\perp\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}d_{\\rho\\sigma}=2\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}k_{\\perp\\rho}k_{\\perp\\sigma}\n=-2k_\\perp^2\\gamma_\\mu\\slash{\\hspace{-5pt}n}\\gamma_\\nu,\\\\\n&&\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}k_{\\perp\\rho}\\varepsilon_{\\perp\\sigma\\gamma}k_\\perp^\\gamma=\n-\\hat N^{(2)\\rho\\sigma}_{\\mu\\nu}k_{\\perp\\sigma}\\varepsilon_{\\perp\\rho\\gamma}k_\\perp^\\gamma\n=k_\\perp^2\\gamma_\\mu\\slash{\\hspace{-5pt}n}\\slash{\\hspace{-5pt}n_{\\perp1}}\\slash{\\hspace{-5pt}n_{\\perp2}}\\gamma_\\nu,\n\\end{eqnarray}\nwe need only to consider,\n\\begin{equation}\n\\hat\\varphi^{(2,L)N}_{\\rho\\sigma}(x,k_\\perp)=\\frac{\\slash{\\hspace{-5pt}p}}{2}\n\\left(-\\frac{1}{2}k^2_\\perp d_{\\rho\\sigma}\\right)\\varphi^{(2,L)N}_\\perp(x,k_\\perp)+...\n\\end{equation}\nand obtain the results for $d^2\\tilde W^{(2,L)}_{\\mu\\nu}\/d^2k_\\perp$ up to $1\/Q^2$ as,\n\\begin{equation}\n\\frac{d^2\\tilde W^{(2,L)}_{\\mu\\nu}}{d^2k_\\perp}=\n-\\frac{1}{2q\\cdot 
p}k_\\perp^2d_{\\mu\\nu}\\varphi^{(2,L)N}_\\perp(x_B,k_\\perp)+... .\n\\end{equation}\n\nSimilarly, to calculate $d^2\\tilde W^{(2,M)}_{\\mu\\nu}\/d^2k_\\perp$ up to $1\/Q^2$ level, we need to consider\n\\begin{equation}\n\\hat\\varphi^{(2,M)N}_{\\rho\\sigma}(x,k_\\perp)=\n\\frac{\\slash{\\hspace{-5pt}p}}{2}\\left(-\\frac{1}{2}k^2_\\perp d_{\\rho\\sigma}\\right)\\varphi^{(2,M)N}_\\perp(x,k_\\perp)-\n\\frac{i}{4}\\gamma_5\\slash{\\hspace{-5pt}p}\nk_{\\perp[\\rho}\\varepsilon_{\\perp\\sigma]\\gamma}k_\\perp^\\gamma\\tilde\\varphi^{(2,M)N}_\\perp(x,k_\\perp),\n\\end{equation}\nand the results for $d^2\\tilde W^{(2,M)}_{\\mu\\nu}\/d^2k_\\perp$ are given by,\n\\begin{equation}\n\\frac{d^2\\tilde W^{(2,M)}_{\\mu\\nu}}{d^2k_\\perp}=\n\\frac{k_\\perp^2}{(q\\cdot p)^2}p_\\mu p_\\nu[\\varphi^{(2,M)N}_\\perp(x_B,k_\\perp)-\\tilde\\varphi^{(2,M)N}_\\perp(x_B,k_\\perp)].\n\\end{equation}\n\nQCD equation of motion relates matrix elements with different number of $D_\\rho$ and gives\n\\begin{eqnarray}\n&&xf_{q\\perp}^N(x,k_\\perp)=-[\\varphi_\\perp^{(1)N}(x,k_\\perp)-\\tilde\\varphi_\\perp^{(1)N}(x,k_\\perp)],\\\\\n&&2(xM)^2f_{q(-)}^N(x,k_\\perp)=k_\\perp^2[\\varphi_\\perp^{(2,M)N}(x,k_\\perp)-\\tilde\\varphi_\\perp^{(2,M)N}(x,k_\\perp)],\\\\\n&&x[\\varphi^{(1)N}_{\\perp 3}(x,k_\\perp)-\\tilde \\varphi^{(1)N}_{\\perp 3}(x,k_\\perp)]\n=-[\\varphi^{(2,M)N}_{\\perp }(x,k_\\perp)-\\tilde \\varphi^{(2,M)N}_{\\perp }(x,k_\\perp)],\n\\end{eqnarray}\nwhere, as well as in the following of this paper,\nall the correlation functions in the results of the hadronic tensors and\/or cross section\nstand for their real parts. %\nThe final results for $d^2W_{\\mu\\nu}\/d^2k_\\perp$ up to twist-4 level are given by,\n \\begin{eqnarray}\n\\frac{d^2W_{\\mu \\nu}}{d^2k_\\perp}&=&\n-\\frac{1}{q\\cdot p}\\Bigl\\{ (q\\cdot p)d_{\\mu\\nu}f_q^N(x_B,k_\\perp)\n+\\frac{2M^2}{q\\cdot p}(q+2x_Bp)_\\mu(q+2x_Bp)_\\nu f_{q(-)}^N(x_B,k_\\perp)\\nonumber\\\\\n&&-(q+2x_Bp)_{\\{\\mu} k_{\\perp\\nu\\}} f_{q\\perp}^N(x_B,k_\\perp) +(2k_{\\perp\\mu}k_{\\perp\\nu}-k_\\perp^2d_{\\mu\\nu})\n [\\varphi^{(1)N}_{\\perp2}(x_B,k_\\perp)-\\tilde\\varphi^{(1)N}_{\\perp2}(x_B,k_\\perp)]\\nonumber\\\\\n&&+k_\\perp^2d_{\\mu\\nu} \\varphi^{(2,L)N}_{\\perp2}(x_B,k_\\perp)\\Bigr\\}.\n\\end{eqnarray}\n\n\n\\section{Differential cross section and $\\langle \\cos 2\\phi\\rangle$ up to the $1\/Q^2$}\n\n\nMaking the Lorentz contraction of the result for $d^2W_{\\mu\\nu}\/d^2k_\\perp$\nwith the leptonic tensor $L_{\\mu\\nu}$ given in Eq.(3),\nwe obtain the differential cross section as,\n\\begin{eqnarray}\n\\frac{d\\sigma}{dx_Bdyd^2k_\\perp}&=&\\frac{2\\pi\\alpha_{em}^2e_q^2}{Q^2y}\n\\bigg\\{ [1+(1-y)^2] f_q^N(x_B,k_\\perp)\n-4(2-y)\\sqrt{1-y} \\frac{|\\vec k_\\perp|}{Q} x_Bf_{q\\perp}^{(1)N}(x_B,k_\\perp)\\cos\\phi \\nonumber\\\\\n\\nonumber &&-4(1-y)\\frac{|\\vec k_\\perp|^2}{Q^2}x_B\n[\\varphi^{(1)N}_{\\perp 2}(x_B,k_\\perp)-\\tilde\\varphi^{(1)N}_{\\perp 2} (x_B,k_\\perp)]\\cos 2\\phi\\\\\n\\nonumber &&+8(1-y)\\left(\\frac{|\\vec k_\\perp|^2}{Q^2}\nx_B[\\varphi^{(1)N}_{\\perp 2}(x_B,k_\\perp)-\\tilde\\varphi^{(1)N}_{\\perp 2} (x_B,k_\\perp)]\n+\\frac{2x_B^2M^2}{Q^2}f_{q(-)}^N (x_B,k_\\perp)\\right)\\\\\n&&-2\\left[1+(1-y)^2\\right]\\frac{|\\vec k_\\perp|^2}{Q^2}x_B\\varphi_{\\perp 2}^{(2,L)N} (x_B,k_\\perp)\\bigg\\}.\n\\label{eq:CSres}\n\\end{eqnarray}\n\n\\end{widetext}\n\nFrom Eq.(\\ref{eq:CSres}), we can calculate the azimuthal asymmetries\n$\\langle\\cos\\phi\\rangle$ and $\\langle\\cos 2\\phi\\rangle$.\nThe result for $\\langle\\cos\\phi\\rangle$ and its 
nuclear dependence are\ndiscussed in \\cite{Gao:2010mj}.\nWe now discuss the result for $\\langle\\cos 2\\phi\\rangle$.\nAt fixed $k_\\perp$, it is given by,\n\\begin{eqnarray}\n&&\\langle \\cos2\\phi\\rangle_{eN}=-\\frac{2(1-y)}{1+(1-y)^2}\\frac{|\\vec k_\\perp|^2}{Q^2}\\times\\phantom{XXXXXXXX}\\nonumber\\\\\n&&\\phantom{XXXX}\\frac{x_B[\\varphi_{\\perp 2}^{(1)N}(x_B,k_\\perp)-\\tilde\\varphi_{\\perp 2}^{(1)N}(x_B,k_\\perp)]}{f_q^N(x_B,k_\\perp)}.\\ \\\n\\label{eq:cos2phires}\n\\end{eqnarray}\nIntegrating over the magnitude of $\\vec k_\\perp$, we obtain,\n\\begin{eqnarray}\n&&\\langle\\langle \\cos 2\\phi\\rangle\\rangle_{eN}=\n-\\frac{2(1-y)}{1+(1-y)^2} \\times\\phantom{XXXX}\\nonumber\\\\\n&&\\phantom{XX}\\frac{\\int |\\vec k_\\perp|^2d^2k_\\perp\nx_B[\\varphi_{\\perp 2}^{(1)N}(x_B,k_\\perp)-\\tilde\\varphi_{\\perp 2}^{(1)N}(x_B,k_\\perp)]}{Q^2f_q^N(x_B)},\\nonumber\n\\end{eqnarray}\nwhere $f_q^N(x)=\\int d^2k_\\perp f_q^N(x,k_\\perp)$ is the usual quark distribution in nucleon.\nThe new quark correlation functions involved are given by,\n\\begin{eqnarray}\n|\\vec k_\\perp|^2\\varphi_{\\perp 2}^{(1)N}(x,k_\\perp)&=(2\\hat k_\\perp^\\alpha\\hat k_\\perp^\\rho+d^{\\alpha\\rho})\n\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp), \\\\\n|\\vec k_\\perp|^2\\tilde\\varphi_{\\perp 2}^{(1)N}(x,k_\\perp)&=\n-i\\hat k_\\perp^{\\{\\alpha}\\varepsilon_\\perp^{\\rho\\}\\sigma}\\hat k_{\\perp\\sigma} \\tilde\\varphi^{(1)N}_{\\rho\\alpha}(x,k_\\perp),\n\\end{eqnarray}\nwhere $\\hat k_\\perp=k_\\perp\/|\\vec k_\\perp|$ denotes the unit vector.\nIf we consider only ``free parton with intrinsic transverse momentum\",\ni.e., the same case as considered in \\cite{Cahn:1978se},\nwe need to just set $g=0$ in the results mentioned above.\nIn this case, ${\\cal L}=1$ and\n$x[\\varphi_{\\perp 2}^{(1)N}(x,k_\\perp)-\\tilde\\varphi_{\\perp 2}^{(1)N}(x,k_\\perp)]=f_q^N(x,k_\\perp)$,\nso that,\n\\begin{equation}\n\\langle \\cos 2\\phi\\rangle_{eN}|_{g=0}=-\\frac{2(1-y)}{1+(1-y)^2}\\frac{|\\vec k_\\perp|^2}{Q^2},\n\\label{eq:cos2phig=0}\n\\end{equation}\nwhich is just the result obtained in \\cite{Cahn:1978se}.\n\nIn general, we need to take QCD multiple parton scattering into account thus\n$\\langle \\cos 2\\phi\\rangle_{eN}$ is given by Eq.~(\\ref{eq:cos2phires}) where new\nquark correlation functions are involved.\nMeasurements of $\\langle \\cos 2\\phi\\rangle_{eN}$, in particular whether the\nresults deviate from Eq.~(\\ref{eq:cos2phig=0}), can provide useful information\non the new parton correlation functions and on multiple parton scattering as well.\n\n\n\nIf we consider $e^-+A\\to e^-+q+X$, i.e. 
a nuclear target instead of a nucleon,\nall the calculations given above apply and we obtain similar results\nwith only the replacement of the state $|N\\rangle$\nby $|A\\rangle$ in the definitions of the matrix elements and\/or parton distribution\/correlation functions.\nThe multiple gluon scattering can now connect to different nucleons in the nucleus $A$\nand thus gives rise to nuclear dependence.\nIt has been shown that, under the ``maximal two-gluon approximation'',\na TMD quark distribution $\\Phi^A_\\alpha(x,k_\\perp)$ in a nucleus, defined in the form,\n\\begin{eqnarray}\n\\Phi^A_\\alpha(x,k_\\perp)&\\equiv &\\int \\frac{p^+dy^-d^2y_\\perp}{(2\\pi)^3}\ne^{ixp^+y^- -i\\vec k_\\perp\\cdot \\vec y_\\perp}\\times\\nonumber\\\\\n&&\\langle A \\mid \\bar\\psi(0)\\Gamma_\\alpha{\\cal L}(0;y)\\Psi(y)\\mid A \\rangle,\n\\label{form}\n\\end{eqnarray}\nis given by a convolution of the corresponding distribution $\\Phi^N_\\alpha(x,k_\\perp)$ in the nucleon\nwith a Gaussian broadening,\n\\begin{equation}\n\\Phi^A_\\alpha(x,k_\\perp)\\approx\\frac{A}{\\pi \\Delta_{2F}}\n\\int d^2\\ell_\\perp e^{-(\\vec k_\\perp -\\vec\\ell_\\perp)^2\/\\Delta_{2F}}\\Phi^N_\\alpha(x,\\ell_\\perp),\n\\label{tmdgeneral}\n\\end{equation}\nwhere $\\Gamma_\\alpha$ is any gamma matrix, $\\Psi(y)$ is a field operator, and\n$\\Delta_{2F}$ is the broadening width given by,\n\\begin{equation}\n\\Delta_{2F}=\\int d\\xi^-_N \\hat q_F(\\xi_N)=\\frac{2\\pi^2\\alpha_s}{N_c}\\int d\\xi^-_N\\rho_N^A(\\xi_N)[xf^N_g(x)]_{x=0},\n\\label{eq:Delta2F}\n\\end{equation}\nwhere $\\rho_N^A(\\xi_N)$ is the spatial nucleon number density inside the nucleus\nand $f^N_g(x)$ is the gluon distribution function in the nucleon.\n\nWe note that both $\\varphi_{\\rho\\alpha}(x,k_\\perp)$ and $\\tilde\\varphi_{\\rho\\alpha}(x,k_\\perp)$\nhave the form of $\\Phi^A_\\alpha(x,k_\\perp)$.\nHence,\n\\begin{equation}\n\\varphi^{(1)A}_{\\rho\\alpha}(x,k_\\perp)\n\\approx\\frac{A}{\\pi \\Delta_{2F}}\n\\int d^2\\ell_\\perp e^{-(\\vec k_\\perp -\\vec\\ell_\\perp)^2\/\\Delta_{2F}}\\varphi_{\\rho\\alpha}^{(1)N}(x,\\ell_\\perp),\n\\label{tmdphi}\n\\end{equation}\n\\begin{equation}\n\\tilde\\varphi^{(1)A}_{\\rho\\alpha}(x,k_\\perp)\n\\approx\\frac{A}{\\pi \\Delta_{2F}}\n\\int d^2\\ell_\\perp e^{-(\\vec k_\\perp -\\vec\\ell_\\perp)^2\/\\Delta_{2F}}\\tilde\\varphi_{\\rho\\alpha}^{(1)N}(x,\\ell_\\perp).\n\\label{tmdtildephi}\n\\end{equation}\nContracting both sides of these two equations\nwith $2\\hat k_\\perp^\\rho \\hat k_\\perp^\\alpha+d^{\\rho\\alpha}$ and $\\hat k_\\perp^{\\{\\alpha}\\varepsilon_\\perp^{\\rho\\}\\sigma}\\hat k_{\\perp\\sigma}$,\nrespectively, we obtain,\n\\begin{eqnarray}\n&&|\\vec k_\\perp|^2\\varphi^{(1)A}_{\\perp2}(x,k_\\perp)\n\\approx \\frac{A}{\\pi \\Delta_{2F}}\n\\int d^2\\ell_\\perp e^{-(\\vec k_\\perp -\\vec\\ell_\\perp)^2\/\\Delta_{2F}}\\times\\nonumber\\\\\n&&\\phantom{XXXXX}\n\\left[2(\\ell_\\perp\\cdot\\hat k_\\perp)^2+\\ell_\\perp^2\\right]\\varphi_{\\perp2}^{(1)N}(x,\\ell_\\perp),\n\\label{eq:phi1NA}\\\\\n&&|\\vec k_\\perp|^2\\tilde\\varphi^{(1)A}_{\\perp2}(x,k_\\perp)\n\\approx \\frac{A}{\\pi \\Delta_{2F}}\n\\int d^2\\ell_\\perp e^{-(\\vec k_\\perp -\\vec\\ell_\\perp)^2\/\\Delta_{2F}}\\times\\nonumber\\\\\n&&\\phantom{XXXXX}\n\\left[2(\\ell_\\perp\\cdot\\hat k_\\perp)^2+\\ell_\\perp^2\\right]\\tilde\\varphi_{\\perp2}^{(1)N}(x,\\ell_\\perp).\n\\label{eq:tildephi1NA}\n\\end{eqnarray}\nAdopting a Gaussian ansatz, i.e.,\n\\begin{eqnarray}\nf_q^N(x,k_\\perp)&=&\\frac{1}{\\pi\\alpha}f_q^N(x)e^{-\\vec 
k_\\perp^2\/\\alpha},\\\\\n\\varphi_{\\perp2}^{(1)N}(x,k_\\perp)&=&\\frac{1}{\\pi\\beta}\\varphi_{\\perp2}^{(1)N}(x)e^{-\\vec k_\\perp^2\/\\beta},\\\\\n\\tilde\\varphi_{\\perp2}^{(1)N}(x,k_\\perp)&=&\\frac{1}{\\pi\\tilde\\beta}\\tilde\\varphi_{\\perp2}^{(1)N}(x)e^{-\\vec k_\\perp^2\/\\tilde\\beta},\n\\end{eqnarray}\nwe obtain, for the corresponding functions in a nucleus,\n\\begin{equation}\nf_q^A(x,k_\\perp)\\approx\\frac{A}{\\pi\\alpha_A}f_q^N(x)e^{-\\vec k_\\perp^2\/\\alpha_A},\n\\end{equation}\n\\begin{equation}\n\\varphi_{\\perp2}^{(1)A}(x,k_\\perp)\\approx\n\\frac{A}{\\pi\\beta_A}\\Bigl(\\frac{\\beta}{\\beta_A}\\Bigr)^2\n\\varphi_{\\perp2}^{(1)N}(x)e^{-\\vec k_\\perp^2\/\\beta_A},\n\\end{equation}\n\\begin{equation}\n\\tilde\\varphi_{\\perp2}^{(1)A}(x,k_\\perp)\\approx\n\\frac{A}{\\pi\\tilde\\beta_A}\\Bigl(\\frac{\\tilde\\beta}{\\tilde\\beta_A}\\Bigr)^2\n\\tilde\\varphi_{\\perp2}^{(1)N}(x)e^{-\\vec k_\\perp^2\/\\tilde\\beta_A},\n\\end{equation}\nwhere $\\alpha_A=\\alpha+\\Delta_{2F}$, $\\beta_A=\\beta+\\Delta_{2F}$ and $\\tilde\\beta_A=\\tilde\\beta+\\Delta_{2F}$.\nThe azimuthal asymmetry is given by,\n\\begin{eqnarray}\n&&\\frac{\\langle \\cos 2\\phi\\rangle_{eA}}{\\langle \\cos 2\\phi\\rangle_{eN}}\\approx\n\\frac{\\alpha_A}{\\alpha}e^{-{\\vec k_\\perp}^2\/\\alpha+\\vec k_\\perp^2\/\\alpha_A}\\times \\nonumber\\\\\n&&\\phantom{XX}\\frac{\n\\frac{\\beta^2}{\\beta_A^3}\\varphi_{\\perp2}^{(1)N}(x_B)e^{-\\vec k_\\perp^2\/\\beta_A}\n-\\frac{\\tilde\\beta^2}{\\tilde\\beta_A^3}\n\\tilde\\varphi_{\\perp2}^{(1)N}(x_B)e^{-\\vec k_\\perp^2\/\\tilde\\beta_A} }\n{\\Bigl[\\frac{1}{\\beta}\\varphi_{\\perp2}^{(1)N}(x_B)e^{-\\vec k_\\perp^2\/\\beta}\n-\\frac{1}{\\tilde\\beta}\\tilde\\varphi_{\\perp2}^{(1)N}(x_B)e^{-\\vec k_\\perp^2\/\\tilde\\beta}\\Bigr]}, \\nonumber\n\\end{eqnarray}\nwhich reduces to\n\\begin{equation}\n\\frac{\\langle \\cos 2\\phi\\rangle_{eA}}{\\langle \\cos 2\\phi\\rangle_{eN}}\n=\\Bigl(\\frac{\\beta}{\\beta+\\Delta_{2F}}\\Bigr)^2,\n\\end{equation}\nin the case that $\\alpha=\\beta=\\tilde\\beta$.\nWe see that, in this case, for given $x_B$, $Q^2$ and $|\\vec k_\\perp|$,\n$\\langle\\cos2\\phi\\rangle_{eA}$ in deep inelastic $eA$ scattering\nis suppressed compared to that in $eN$ scattering by a\nsuppression factor $\\beta^2\/(\\beta+\\Delta_{2F})^2$.\nComparing with the result of~\\cite{Gao:2010mj}, we see that $\\langle\\cos2\\phi\\rangle_{eA}$ is more suppressed\nthan $\\langle\\cos\\phi\\rangle_{eA}$.\nIn general, $\\beta$ and $\\tilde\\beta$ can be different from $\\alpha$, and the ratio can also be different\nat different $k_\\perp$ and $\\Delta_{2F}$.\nAs examples, we show the results for a few cases in Figs.~1a and 1b with $\\beta=\\tilde\\beta$.\n \\begin{figure}[htb]\n \\resizebox{0.45\\textwidth}{!}{\\includegraphics{ratio2.eps}}\n \\resizebox{0.45\\textwidth}{!}{\\includegraphics{ratio5.eps}}\n \\caption{(color online) Ratio $\\langle \\cos2\\phi\\rangle_{eA}\/\\langle \\cos2\\phi\\rangle_{eN}$ as a function of $\\Delta_{2F}$ for different $k_\\perp$ and $\\beta$. 
}\n \\label{fig:AfH} \n \\end{figure}\n\nWe see that the asymmetry can be suppressed or enhanced depending on the values of $k_\\perp$ and $\\Delta_{2F}$,\nand the magnitude of the effect is smaller than in the $\\langle\\cos\\phi\\rangle$ case.\n\nIf we integrate over the magnitude of $\\vec k_\\perp$, we obtain,\n\\begin{equation}\n\\frac{\\langle\\langle \\cos 2\\phi\\rangle\\rangle_{eA}}{\\langle\\langle \\cos 2\\phi\\rangle\\rangle_{eN}}\n\\approx\\frac{\\Bigl(\\frac{\\beta}{\\beta_A}\\Bigr)^2\\beta_A\\varphi_{\\perp2}^{(1)N}(x_B)\n-\\Bigl(\\frac{\\tilde\\beta}{\\tilde\\beta_A}\\Bigr)^2\\tilde\\beta_A\\tilde\\varphi_{\\perp2}^{(1)N}(x_B)}\n{\\beta\\varphi_{\\perp2}^{(1)N}(x_B)-\\tilde\\beta\\tilde\\varphi_{\\perp2}^{(1)N}(x_B)},\n\\end{equation}\nwhich reduces to $\\beta\/(\\beta+\\Delta_{2F})$ for the special case $\\beta=\\tilde\\beta$.\n\n\n\n\n\\section{Summary and discussions}\n\nWe calculated the hadronic tensor and the differential cross section for the unpolarized SIDIS process\n$e^-+N\\to e^-+q+X$ in LO pQCD and up to twist-4 contributions.\nThe results depend on a number of new TMD parton correlation functions.\nWe showed that measurements of the azimuthal asymmetry $\\langle \\cos2\\phi\\rangle$\nand its $k_\\perp$-dependence provide information on these TMD\ncorrelation functions, which in turn can shed light on the properties of multiple gluon interactions\nin hadronic processes. Under the two-gluon correlation approximation, we also showed the relationship between these TMD\ncorrelation functions inside large nuclei and those of a nucleon. One can therefore study the\nnuclear dependence of the azimuthal asymmetry $\\langle \\cos2\\phi\\rangle$, which\nis determined by the jet transport parameter $\\hat q$ inside nuclei. With \na Gaussian ansatz for the TMD parton correlation functions inside the nucleon, we also illustrated \nnumerically that the asymmetry $\\langle \\cos2\\phi\\rangle$ is suppressed \nin the corresponding SIDIS with a nuclear target.\n\nThere exist experimental measurements of the azimuthal asymmetries in both unpolarized and polarized \nDIS \\cite{Aubert:1983cz,Arneodo:1986cf,Adams:1993hs,Breitweg:2000qh,Chekanov:2002sz,\nAirapetian:2001eg,Airapetian:2002mf,Airapetian:2004tw,Alexakhin:2005iw,Webb:2005cd,:2008rv,Alekseev:2010dm}.\nMore results are expected from CLAS at JLab and COMPASS at CERN.\nThe available data seem to be consistent with the Gaussian ansatz for the \ntransverse momentum dependence of the TMD matrix elements~\\cite{Schweitzer:2010tt}.\nHowever, these data are not yet adequate to provide precise constraints on the form of \nthe higher twist matrix elements. Our calculations of the azimuthal asymmetries are\nvalid mainly in the small transverse momentum region, where NLO pQCD corrections are\nnot dominant. The high twist effects are also most accessible in the intermediate region\nof $Q^{2}$. One expects that future experiments, such as those at the proposed Electron Ion \nCollider (EIC) \\cite{EIC}, will be better equipped to study these high twist effects in detail.\n\n\nThis work was supported in part by the National Natural Science Foundation of China\nunder approval Nos.~10975092 and 11035003,\nthe China Postdoctoral Science Foundation funded project under\nContract No.~20090460736,\nand the Office of Energy\nResearch, Office of High Energy and Nuclear Physics, Division of\nNuclear Physics, of the U.S. 
Department of Energy under Contract No.\nDE-AC02-05CH11231.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\nWe prove a conjecture of M\\'esz\\'aros and Morales \\cite{MM} on the\nvolume of a flow polytope, which is a type $D$ analog of the\nChan-Robbins-Yuen polytope \\cite{CRY2000}. Independently from our\nwork, Zeilberger also proved the conjecture and sketched his proof in\n\\cite{Z}. In fact, our proof is the same as Zeilberger's proof. The\npurpose of this note is to give a more detailed proof of the\nconjecture.\n\nFor a function $f(z)$ with a Laurent series expansion at $z$, we\ndenote by $\\CT_z f(z)$ the constant term of the Laurent expansion of\n$f(z)$ at $0$. In other words, if $f(z)=\\sum_{n=-\\infty}^\\infty a_n\nz^n$, then $\\CT_z f(z) = a_0$.\n\nThe conjecture of M\\'esz\\'aros and Morales can be stated as the\nfollowing constant term identity.\n\n\\begin{conj} \\cite[Conjecture~7.6]{MM} \\label{MMconj}\nFor an integer $n\\ge2$, we have\n\\[\n \\CT_{x_n}\\CT_{x_{n-1}}\\cdots \\CT_{x_1}\n\\prod_{j=1}^n x_j^{-1}(1-x_j)^{-2}\n\\prod_{1\\le j0$ such that $f(z)$ is\nholomorphic inside $C$ except $0$. Thus \\eqref{eq:zeilberger} can be rewritten\nas\n\\begin{multline}\n \\label{eq:zeilberger2}\n\\frac{1}{(2\\pi i)^n}\\oint_{C_n}\\dots\\oint_{C_1} \\prod_{j=1}^n (1-x_j)^{-a} x_j^{-b-1} \n\\prod_{1\\le j0$ is a very small number.\n\nBy \\eqref{eq:CT} we have\n\\begin{align*}\n& \\CT_{x_n}\\CT_{x_{n-1}}\\cdots \\CT_{x_1} \\prod_{j=1}^n x_j^{-a+1}(1-x_j)^{-a}\n\\prod_{1\\le j