{"text":"\\section{Introduction}\n\nIt is now experimentally well established that neutrinos have mass and they mix with each other (see \\cite{Tortola:2012te} for the best \nfit values of the parameters). \nBeing electrically neutral allows the possibility of them to be Majorana particles \\cite{Majorana:1937vz}. The observation of neutrinoless \ndouble beta ({$0\\nu 2\\beta\\,$}) decay, $(A,Z)\\rightarrow (A,Z+2) + 2e^-$, will establish the Majorana nature and lepton number violation beyond any doubt\n\\cite{Furry:1939qr}.\nTherefore, the search for neutrinoless double beta decay continues to be an important area. \nTheoretically as well, {$0\\nu 2\\beta\\,$} decay is heralded as a useful probe of physics beyond the standard model (SM).\n {$0\\nu 2\\beta\\,$} can potentially discriminate between the two hierarchies of the neutrino masses, and this, in turn can be used \n to rule out specific models of neutrino mass generation. In the context of models which involve TeV scale particles, \n like low scale seesaw models or low energy supersymmetric models including models with R-parity violation, {$0\\nu 2\\beta\\,$} \n imposes stringent constraints on the model parameters. The same set of diagrams, with appropriate changes in the momentum flow,\n can lead to interesting signatures at LHC. Constraints from {$0\\nu 2\\beta\\,$} thus can prove rather useful for phenomenological studies \n (see e.g. \\cite{Keung:1983uu} for an incomplete list discussing various aspects).\n\n The {$0\\nu 2\\beta\\,$} decay amplitude can be split into the so called long-range and short-range parts \n (for a review of theoretical and experimental issues and the sources of uncertainties and errors, \n see \\cite{Rodejohann:2011mu} and references therein ). \n Here, the long-range refers to the fact that there is an intermediate light neutrino involved. \n This should be contrasted with the short-range part of the amplitude in which the intermediate particles are all much \n much heavier that the relevant scale of the process $\\sim {\\mathcal{O}}$(GeV). In such a case, the heavier degrees \n of freedom can be systematically integrated out leaving behind a series of operators built out of low energy fields \n weighted by coefficients, called Wilson coefficients (denoted by $C_i$ below), which are functions of the parameters of \n the large mass degrees of freedom that have been integrated out (see e.g \\cite{Georgi:1994qn}). This provides a very convenient framework to evaluate the \n decay amplitude in terms of short distance coefficients which encode all the information about the high energy physics one \n may be trying to probe via a low energy process. This also neatly separates the particle physics input from the nuclear\n physics part which enters via the nuclear matrix elements (NMEs) of the quark level operators sandwiched between the nucleon states. \n In what follows, the discussion will be centered around the short range part though we believe that many of the arguments and \n results may also apply to the long range part. More care may be needed in the latter case though.\n\nGiven a specific model it is straightforward to write down the amplitude for the quark level \n{$0\\nu 2\\beta\\,$} process and compute the short distance coefficient. The complete amplitude then involves NMEs. 
\nAt present, the biggest source of uncertainty stems from the NMEs, and theoretical predictions show a marked sensitivity on the\nNMEs used (see \\cite{Simkovic:2007vu} for some of the recent NME calculations and predictions for {$0\\nu 2\\beta\\,$} rates).\nOn the experimental side, \nstudies have been carried out on several nuclei. \nOnly one of the experiments, the Heidelberg-Moskow collaboration (HM collab.) \\cite{KlapdorKleingrothaus:2006ff} has claimed observation of {$0\\nu 2\\beta\\,$} signal in $^{76}{\\mathrm Ge}$. \nThe half-life at $68\\%$ confidence level is: $T^{0\\nu}_{1\/2}(^{76}{\\mathrm Ge}) = 2.23^{+0.44}_{-0.31}\\times 10^{25}\\,\n{\\mathrm yr}$. A combination of the Kamland-Zen \\cite{Gando:2012zm} and EXO-200 \\cite{Auger:2012ar} results, both using $^{136}{\\mathrm Xe}$, \nyields a lower limit on the half-life $T^{0\\nu}_{1\/2}(^{136}{\\mathrm Xe}) > 3.4 \\times 10^{25}\\, {\\mathrm yr}$ \nwhich is at variance with the HM claim. Very recently GERDA experiment reported the lower limit on the half-life based \non the first phase of the experiment \\cite{Agostini:2013mzu}: $T^{0\\nu}_{1\/2}(^{76}{\\mathrm Ge}) > 2.1 \\times 10^{25}\\, {\\mathrm yr}$. A combination\nof all the previous limits results in a lower limit $T^{0\\nu}_{1\/2}(^{76}{\\mathrm Ge}) > 3.0 \\times 10^{25}\\, {\\mathrm yr}$\nat $90\\%$ confidence level. The new GERDA result (and the combination) is (are) again at odds with the positive claim of HM collab.\nThe GERDA results have been challenged \\cite{Klapdor-Kleingrothaus:2013cja} on account of low statistics and poorer resolution. \nVery clearly, there is some tension among the\nexperimental results and higher statistics in future will shed more light.\nTo reduce the dependence (or sensitivity) on NMEs, predictions for {$0\\nu 2\\beta\\,$} for various nuclei can be compared. Further, it is necessary\nto establish if the long-range contribution, coming from the light neutrino exchange, can saturate the experimental limits (or \npositive claims). This is investigated in \\cite{Dev:2013vxa}, and the conclusion drawn is that the light neutrino exchange falls short of\nsaturating the current limits. Also, for some choices of NMEs, the $^{76}{\\mathrm Ge}$ positive result can be consistent with \n$^{136}{\\mathrm Xe}$ limits when considered individually but not when combined.\n\nIn view of the immense importance of {$0\\nu 2\\beta\\,$}, both experimentally and theoretically, it is important to ensure \nthat theoretical calculations are very precise. In the present article, we consider dominant one loop QCD \ncorrections and renormalization group effects to the {$0\\nu 2\\beta\\,$} amplitude. To the best of our knowledge, this has not \nbeen studied before and as we show below, QCD corrections can have a significant impact on the {$0\\nu 2\\beta\\,$} rate, thereby \nimpacting the constraints on the model parameters.\n\nWe begin by recapitulating the essential steps in arriving at the final amplitude for {$0\\nu 2\\beta\\,$}. \nUsing the Feynman rules for a given model, all possible terms can be easily written. \nSince the momentum flowing through any of the internal lines is far smaller than the masses of the \nrespective particles and can be neglected, this leads to the low energy amplitude at the quark level. Parts of the amplitude may require Fierz rearrangement (for example in supersymmetric theories) to express it in colour singlet form which can then be sandwiched between the nucleon states after taking the non-relativistic limit. 
This last step results in NMEs. \nWe shall not be concerned with the issue of uncertainties creeping in due to NME calculations here.\nWe shall, rather, choose to work with a particular set of NMEs and focus on the impact of perturbative \nQCD corrections. As an example, consider a heavy right handed neutrino and SM gauge group. The resulting amplitude is of the form\n\\begin{eqnarray}\n{\\mathcal{A}} &\\sim& \\frac{1}{M_W^4M_N}\\bar{u}\\gamma_{\\mu}(1-\\gamma_5)d\\,\\bar{e}\\gamma^{\\mu}\\gamma^{\\nu}(1+\\gamma_5)e^c\\,\n\\bar{u}\\gamma_{\\nu}(1-\\gamma_5)d \\nonumber \\\\\n&=& \\underbrace{\\frac{1}{M_W^4M_N}}_{G}\\underbrace{\\bar{u}\\gamma_{\\mu}(1-\\gamma_5)d\\,\n\\bar{u}\\gamma^{\\mu}(1-\\gamma_5)d}_{{\\mathcal{J}}_{q,\\mu}{\\mathcal{J}}_q^{\\mu}}\\, \\underbrace{\\bar{e}(1+\\gamma_5)e^c}_{j_l}\n\\end{eqnarray}\nwhere we used $\\gamma_{\\mu}\\gamma_{\\nu} = g_{\\mu\\nu}-2i\\sigma_{\\mu\\nu}$ and the fact that $\\bar{e}\\sigma_{\\mu\\nu}(1+\\gamma_5)e^c $ \nvanishes identically. So does $\\bar{e}\\gamma_{\\mu}e^c$. This was noted in \\cite{Prezeau:2003xn}.\nThese, thus, restrict the form of the leptonic current. $G$ denotes the analogue of Fermi constant. \nThe exact form of $G$ will be\nmodel dependent. The physical {$0\\nu 2\\beta\\,$} amplitude is written as\n\\begin{equation}\n{\\mathcal{A}}_{0\\nu 2\\beta} = \\langle f\\vert i{\\mathcal{H}}_{\\mathrm eff}\\vert i\\rangle \\sim G\\,\n\\underbrace{\\langle f\\vert {\\mathcal{J}}_{q,\\mu}{\\mathcal{J}}_q^{\\mu}\\vert i\\rangle}_{\\boldmath NME}\\,j_l\n\\end{equation}\nThis clearly illustrates how the short distance or high energy physics separates from the low energy matrix elements. \nThe effective Hamiltonian for a given model is expressed as a sum of operators, $O_i$ weighted by the Wilson coefficents $C_i$: \n${\\mathcal{H}}_{\\mathrm eff} = G_i C_i O_i$, \nwhere we have allowed for more than one $G$ for more complicated theories. In the above case, there is only one operator \n$O_1 = {\\mathcal{J}}_{q,\\mu}{\\mathcal{J}}_q^{\\mu}\\,j_l = \\bar{u_i}\\gamma_{\\mu}(1-\\gamma_5)d_i\\,\\bar{u_j}\\gamma^{\\mu}(1-\\gamma_5)d_j\\,\n\\bar{e}(1+\\gamma_5)e^c$ ($i,j$ denoting the colour indices) and the corresponding Wilson coefficient $C_1=1$. \nIn other models like SUSY with R-parity violation \\cite{Mohapatra:1986su} or leptoquarks \\cite{Hirsch:1996qy}, Fierz transformations have to be employed to bring the \noperators in the colour matched form. The specific NME that finally enters the {$0\\nu 2\\beta\\,$} rate depends on the Lorentz and Dirac structure \nof the quark level operator involved.\n\nThis is not the entire story. From the effective field theory point of view, the integrating out of the heavier \ndegrees of freedom happens at the respective thresholds and then the obtained effective Lagrangian\/Hamiltonian has to be properly \nevolved down to the relevant physical scale of the problem ($\\sim {\\mathcal{O}}$(GeV) in the present case). \nThis is similar to what happens in non-leptonic meson decays (see for example \\cite{Buchalla:1995vs}). \nFor simplicity, we assume that the heavy \nparticles are all around the electroweak (EW) scale and in obtaining the numerical values, we shall put $M_W$ as the scale for all.\nThis facilitates one step integrating out of all the heavy degrees of freedom. Therefore, the above statement about $C_1$ being\nunity should now be written as $C_1(M_W) = 1$. Next consider one loop QCD corrections. 
The full amplitude is evaluated with one \ngluon exchange ($O(\\alpha_s) $) and matched with the amplitude at the same order in $\\alpha_s$ in the effective theory. Fig.\\ref{fig1}\nshows representative diagrams in the full and effective theory. \nThis has two effects: (i) $C_1$ gets corrected and reads $C_1(M_W) = 1 + \\frac{\\alpha_s}{4\\pi}{\\mathcal{N}} \\ln\\left(\\frac{M_W^2}{\\mu_W^2}\\right)$, \nwhere $\\mu_W$ is the renormalization scale and ${\\mathcal{N}}$ is a calculable quantity. This coefficient is then evolved down to the \n${\\mathcal{O}}$(GeV) using the renormalization group (RG) equations; (ii) QCD corrections induce the colour mismatched \noperator $O_2 = \\bar{u_i}\\gamma_{\\mu}(1-\\gamma_5)d_j\\,\\bar{u_j}\\gamma^{\\mu}(1-\\gamma_5)d_i\\,\\bar{e}(1+\\gamma_5)e^c$ with \ncoefficient $C_2 = \\frac{\\alpha_s}{4\\pi}{\\mathcal{N}}' \\ln\\left(\\frac{M_W^2}{\\mu_W^2}\\right)$. When evaluating the quark level matrix\nelement in the effective theory, both the operators contribute and in fact lead to mixing. This approach is a consistent one and also reduces the scale dependence of\nthe physical matrix elements. Without following the above steps, the short distance coefficient would have been evaluated\nat the high scale while the physical matrix elements at a low scale, leading to large scale dependence, which is not a\nphysical effect but rather an artifact of the calculation.\n\nArmed with this machinery, we now consider specific examples to bring out the impact of QCD corrections. \nAs mentioned above, to simplify the discussion, we assume all the heavy particles beyond the SM to be around the EW scale. \nThe technical details and explicit expressions for some of the models leading to neutrinoless double beta decay and \nrelated phenomenology will be presented elsewhere. Here we provide approximate numerical values of the Wilson coefficients \nof the operators considered. For the time being, we neglect the mixing of operators under renormalization. 
This can have a large\nimpact on some of the coefficients but their inclusion is beyond the scope of the present work.\n\n\\begin{figure}[ht!]\n\\vskip 0.32cm\n\\hskip 1.35cm\n\\hbox{\\hspace{0.03cm}\n\\hbox{\\includegraphics[scale=0.85]{fig1.eps}}\n}\n\\caption{Representative Feynman diagrams (drawn using the package JaxoDraw \\cite{Binosi:2003yf}) showing one loop QCD corrections.\nLeft: Full theory and Right: Effective theory\n }\n \\label{fig1}\n\\end{figure}\n\nFirst we consider left-right symmetric model and focus our attention on operators generated due to $W_L$ and $W_R$ exchange:\n\\begin{eqnarray}\nO^{LL}_1 &=& \\bar{u_i}\\gamma_{\\mu}(1-\\gamma_5)d_i\\,\\bar{u_j}\\gamma^{\\mu}(1-\\gamma_5)d_j\\,\\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{LL}_2 &=& \\bar{u_i}\\gamma_{\\mu}(1-\\gamma_5)d_j\\,\\bar{u_j}\\gamma^{\\mu}(1-\\gamma_5)d_i\\,\\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{RR}_1 &=& \\bar{u_i}\\gamma_{\\mu}(1+\\gamma_5)d_i\\,\\bar{u_j}\\gamma^{\\mu}(1+\\gamma_5)d_j\\,\\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{RR}_2 &=& \\bar{u_i}\\gamma_{\\mu}(1+\\gamma_5)d_j\\,\\bar{u_j}\\gamma^{\\mu}(1+\\gamma_5)d_i\\,\\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{LR}_1 &=& \\bar{u_i}\\gamma_{\\mu}(1-\\gamma_5)d_i\\,\\bar{u_j}\\gamma^{\\mu}(1+\\gamma_5)d_j\\,\\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{LR}_2 &=& \\bar{u_i}\\gamma_{\\mu}(1-\\gamma_5)d_j\\,\\bar{u_j}\\gamma^{\\mu}(1+\\gamma_5)d_i\\,\\bar{e}(1+\\gamma_5)e^c\n\\end{eqnarray}\nFollowing the general steps outlined above, the Wilson coefficents can be evaluated at the high scale and run down to\n$\\mu \\sim {\\mathcal{O}}$(GeV) (see also \\cite{Cho:1993zb}). Their approximate values read:\n\\begin{eqnarray}\nC^{LL,RR}_1 \\sim 1.3 &,& C^{LL,RR}_2 \\sim -0.6 \\nonumber\\\\\nC^{LR,RL}_1 \\sim 1.1 &,& C^{LR,RL}_2 \\sim 0.7\n\\end{eqnarray}\nTo evaluate the physical matrix elements, the colour mismatched operators $O^{AB}_2$ have to be Fierz transformed. \nUnder Fierz rearrangement, $(V-A)\\otimes (V-A)$ and $(V+A)\\otimes (V+A)$ retain their form while\n$(V-A)\\otimes (V+A) \\rightarrow -2 (S-P)\\otimes (S+P)$. With this rearrangement, $LL,\\,RR$ operators effectively \nyield $C^{LL,RR}_1+C^{LL,RR}_2$ as the effective couplings with the same NMEs involved, implying substantial cancellation \n(by about a factor of two). The $LR$ operator Fierz transformed brings in a different combination of NMEs. \nExplicitly, following for example the last reference in \\cite{Rodejohann:2011mu}, \nwe have the following (not showing the lepton current explicitly):\n\\begin{equation}\n\\langle {\\mathcal{J}}^{(V\\pm A)}{\\mathcal{J}}_{(V\\pm A)}\\rangle \\propto \\frac{m_A}{m_Pm_e} \n({\\mathcal{M}}_{GT,N}\\, \\mp \\alpha^{SR}_3 {\\mathcal{M}}_{F,N})\n\\end{equation}\nwhere $\\vert{\\mathcal{M}}_{GT,N}\\vert \\sim (2-4)\\vert{\\mathcal{M}}_{F,N}\\vert$\nfor all the nuclei considered, and $\\alpha^{SR}_3 \\sim 0.63$. Thus, to a good accuracy the above matrix element is \nessentially governed by ${\\mathcal{M}}_{GT,N}$. 
In the above equation, the relative negative sign between the two terms\non the right hand side corresponds to $(V+A)\\otimes (V+A)$ and $(V-A)\\otimes (V-A)$ structures on the left hand side, \nwhile for the $(V+A)\\otimes (V-A)$ structure, the relative sign is positive.\n\nOn the other hand, \n\\begin{equation}\n\\langle {\\mathcal{J}}^{(S\\pm P)}{\\mathcal{J}}_{(S\\pm P)}\\rangle \\propto - \\alpha^{SR}_1 {\\mathcal{M}}_{F,N}\n\\end{equation}\nwith $\\alpha^{SR}_1 \\sim 0.145\\frac{m_A}{m_Pm_e}$.\nClearly, the Fierz transformed operator in this case turns out to be subdominant. This simple exercise illustrates \nthe large impact and importance of QCD corrections in the context of {$0\\nu 2\\beta\\,$}. As obtained above, QCD corrections can lead to \nsubstantial shift in the {$0\\nu 2\\beta\\,$} rate for specific models, thereby changing the limits on the model parameters significantly.\n\n\nAs our next example, consider theories where the interactions are $S\\pm P$ form, like SUSY with R-parity violation or\nleptoquarks etc. In such cases, the operators have the structure:\n\\begin{eqnarray}\nO^{SP\\pm\\pm}_1 &=& \\bar{u_i}(1\\pm\\gamma_5)d_i\\, \\bar{u_j}(1\\pm\\gamma_5)d_j\\, \\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{SP\\pm\\pm}_2 &=& \\bar{u_i}(1\\pm\\gamma_5)d_j\\, \\bar{u_j}(1\\pm\\gamma_5)d_i\\, \\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{SP+-}_1 &=& \\bar{u_i}(1+\\gamma_5)d_i\\, \\bar{u_j}(1-\\gamma_5)d_j\\, \\bar{e}(1+\\gamma_5)e^c \\nonumber \\\\\nO^{SP+-}_2 &=& \\bar{u_i}(1+\\gamma_5)d_j\\, \\bar{u_j}(1-\\gamma_5)d_i\\, \\bar{e}(1+\\gamma_5)e^c \n\\end{eqnarray}\nThe Wilson coefficients of the colour mismatched operators are about 0.1-0.5 times those of the colour allowed operators in magnitude.\nThis could be argued from the $1\/N_c(\\,\\sim 0.3\\,\\,\\textrm{for} N_c=3)$ counting rules for the colour mismatched structures, up to factors of order unity. \nFollowing the same chain of arguments, the colour mismatched operators need to be Fierz transformed before computing the physical\nmatrix elements. Under Fierz transformations we have: $(S+P)\\otimes (S-P) \\rightarrow \\frac{1}{2}(V+A)\\otimes (V-A)$ implying that\nthe colour mismatched operator, after Fierz transformation, may provide the dominant contribution (see Eq.(5) and Eq.(6)). \nConsequently the amplitudes, and therefore the limits on parameters may change by a factor of five or so. That the colour mismatched\noperator can provide a large contribution is again something we are familiar with from $K\\to\\pi\\pi$ decays where the QCD (and\nelectroweak) penguin operator after Fierz transformation gives the dominant contribution, though QCD and electroweak penguin contributions\ntend to cancel each other in this case.\n\nThe most interesting and the largest effect in the examples considered above comes about when considering the $O^{SP++,--}$ operators. \n$(S\\pm P)\\otimes (S\\pm P) \\rightarrow \\frac{1}{4}[2(S\\pm P)\\otimes (S\\pm P) - (S\\pm P)\\sigma_{\\mu\\nu}\\otimes\\sigma^{\\mu\\nu}]$ \nunder Fierz rearrangement. The tensor-pseudotensor structure yields the following NME:\n\\begin{equation}\n\\langle {\\mathcal{J}}^{\\mu\\nu}{\\mathcal{J}}_{\\mu\\nu}\\rangle \\propto -\\alpha^{SR}_2 {\\mathcal{M}}_{GT,N}\n\\end{equation}\nwith $\\alpha^{SR}_2 \\sim 9.6\\frac{m_A}{m_Pm_e}$ which is about $200$ times larger than \n$\\langle {\\mathcal{J}}^{(S\\pm P)}{\\mathcal{J}}_{(S\\pm P)}\\rangle$. 
Conservatively taking the corresponding \nWilson coefficient to be $0.1$ of the colour allowed operator, the relative contributions are:\n\\begin{equation}\n\\vert\\frac{O^{SP++}_2}{O^{SP++}_1}\\vert \\geq 10\n\\end{equation}\nThe above discussion makes it very clear that the QCD corrections to {$0\\nu 2\\beta\\,$} are rather important and should be included \nsystematically. \nThese corrections can be as large as or in fact larger than in most cases than the uncertainty due to NMEs \nand are independent of the particular set of NMEs considered.\nAs eluded to above, we have considered only pairs of operators $O^{AB}_1,\\, O^{AB}_2$ while obtaining the approximate values \nof $C's$ at the low scale. The effect of mixing with other operators has been ignored at this stage. This could further lead\nto significant corrections for some of the operators. We plan to systematically investigate these issues elsewhere.\nThis (and the shift above) is rather large and can completely change the phenomenological constraints. \nIn theories with many contributions to {$0\\nu 2\\beta\\,$}, it is essential to understand the interplay between different competing\namplitudes to set limits on the couplings and masses of the particles. In such cases, the discussion above becomes even more\nimportant.\nLow (TeV) scale models appear to be attractive due to plausible signatures at LHC, where QCD corrections will be\ninevitable. It is therefore important to include the dominant QCD corrections at the very least in order to set meaningful\nlimits on model parameters. \\\\\n\nIn this article we have investigated the impact of one loop QCD corrections to the {$0\\nu 2\\beta\\,$} amplitude. \nThis, to the best of our knowledge, is the first time this issue has been discussed. We found that QCD corrections can\nhave a large impact, ranging from near cancellation to huge enhancement of the {$0\\nu 2\\beta\\,$} rate. Since {$0\\nu 2\\beta\\,$} is an important \nprocess to search experimentally and has the potential to link seemingly unrelated processes, particularly in the context\nof TeV scale models, it is rather important to ensure that theoretical predictions are precise enough to be compared to\nthe experimental results. As such, the calculations suffer from large uncertainties due to the choice of NMEs, \nwhich are non-perturbative in nature. What we have found is that even perturbative corrections have the potential to \nshift the predictions by a large amount. This by itself is a rather important aspect and such corrections need to \nbe systematically computed for various models of interest. The shift in the limits on model parameters also implies \nthat the related phenomenology at say the LHC (in specific models) will also get modified. There are other issues related \nto operator mixing which have not been incorporated here. These may also become important in the context of specific theories\nand should be consistently included. Furthermore, QCD corrections need to be evaluated for the light neutrino exchange contribution\nas well.\nAs mentioned in the beginning, the light neutrino contribution is unable to saturate the present experimental limits. It remains to be seen if \nincluding the radiative corrections eases out this tension, and to what extent \\cite{nm}.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nRecent advances in large-scale pre-training language models (PrLMs) have achieved remarkable successes in a variety of natural language processing (NLP) tasks \\cite{Peters2018ELMO,radford2018improving,devlin-etal-2019-bert,yang2019xlnet,clark2020electra}. Providing fine-grained contextualized embedding, these pre-trained models are widely employed as encoders for various downstream NLP tasks. Although the PrLMs demonstrate superior performance due to their strong representation ability from self-supervised pre-training, it is still challenging to effectively adapt task-related knowledge during the detailed task-specific training which is usually in a way of fine-tuning \\cite{gururangan-etal-2020-dont}. Generally, those PrLMs handle the whole input text as a linear sequence of successive tokens and implicitly capture the contextualized representations of those tokens through self-attention. Such fine-tuning paradigm of exploiting PrLMs would be suboptimal to model dialogue task which holds exclusive text features that plain text for PrLM training may hardly embody. Therefore, we explore a fundamental way to alleviate this difficulty by improving the training of PrLM. This work devotes itself to designing the natural way of adapting the language modeling to the dialogue scenario motivated by the natural characteristics of dialogue contexts.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{dialogue_exp.pdf}\n\\caption{A multi-turn dialogue example. Different colors indicate the utterances from different speakers.}\n\\label{dialogue_exp}\n\\end{figure}\n\nAs an active research topic in the NLP field, multi-turn dialogue modeling has attracted great interest. The typical task is response selection \\cite{lowe2015ubuntu,wu2016sequential,zhang2018modeling} that aims to select the appropriate response according to a given dialogue context containing a number of utterances, which is the focus in this work. However, selecting a coherent and informative response for a given dialogue context remains a challenge. The multi-turn dialogue typically involves two or more speakers that engage in various conversation topics, intentions, thus the utterances are rich in interactions, e.g., with criss-cross discourse structures \\cite{li-etal-2020-molweni}. A critical challenge is the learning of rich and robust context representations and interactive relationships of dialogue utterances, so that the resulting model is capable of adequately capturing the semantics of each utterance, and the relationships among all the utterances inside the dialogue.\n\n\n\nInspired by the effectiveness for learning universal language representations of PrLMs, there are increasing studies that employ PrLMs for conversation modeling \\cite{mehri2019pretraining,zhang2019dialogpt,rothe2020leveraging}. These studies typically model the response selection with only the context-response matching task and overlook many potential training signals contained in dialogue data. \nAlthough the PrLMs have learned contextualized semantic representation from token-level or\nsentence-level pre-training tasks like MLM, NSP,\nthey all do not consider dialogue related features like speaker role, continuity and consistency.\nOne obvious issue of these approaches is that the relationships between utterances are harder to capture using word-level semantics. Besides, some latent features, such as user intent and conversation topic, are under-discovered in existing works \\cite{xu2021topic}. 
\nTherefore, the response retrieved by existing dialogue systems supervised by the conventional way still faces critical challenges, including incoherence and inconsistency.\n\n\n\n\nIn this work, we present SPIDER (Structural Pre-traIned DialoguE Reader), a structural language modeling method to capture dialogue exclusive features. Motivated to efficiently and explicitly model the coherence among utterances and the key facts in each utterance, we propose two training objectives in analogy to the original BERT-like language model (LM) training: 1) utterance order restoration (UOR), which predicts the order of the permuted utterances in dialogue context; 2) sentence backbone regularization (SBR), which regularizes the model to improve the factual correctness of summarized subject-verb-object (SVO) triplets. Experimental results on widely used benchmarks show that SPDER boosts the model performance for various multi-turn dialogue comprehension tasks including response selection and dialogue reasoning.\n\n\\section{Background and Related Work}\n\\subsection{Pre-trained Language Models}\nRecent works have explored various architecture choices and training objectives for large-scale LM pre-training. Most of the PrLMs are based on the encoder in Transformer, among which Bidirectional Encoder Representations from Transformers (BERT) \\cite{devlin2018bert} is one of the most representative work. BERT uses multiple layers of stacked Transformer Encoder to obtain contextualized representations of the language at different levels. BERT has helped achieve great performance improvement in a broad range of NLP tasks. Several subsequent variants have been proposed to further enhance the capacity of PrLMs, such as XLNet \\cite{yang2019xlnet}, RoBERTa \\cite{liu2019roberta}, ALBERT \\cite{lan2019albert}, ELECTRA \\cite{clark2020electra}. For simplicity and convenient comparison with public studies, we select the most widely used BERT as the backbone in this work.\n\nThere are two ways of training PrLMs on dialogue scenarios, including open-domain pre-training and domain-adaptive post-training. Some studies perform training on open-domain conversational data\nlike Reddit for response selection or generation\ntasks \\cite{wolf2019transfertransfo,zhang2020dialogpt,henderson-etal-2020-convert,bao-etal-2020-plato}, but they are limited to the original pre-training\ntasks and ignore the dialogue related features. For domain-adaptive post-training, prior works have indicated that the order information would be important in the text representation, and the well-known next-sentence-prediction \\cite{devlin2018bert} and sentence-order-prediction \\cite{lan2019albert} can be viewed as special cases of order prediction. Especially in the dialogue scenario, predicting the word order of utterance, as well as the utterance order in the context, has shown effectiveness in the dialogue generation task \\cite{kumar2020deep,gu2020dialogbert}, where the order information is well recognized \\cite{chen-etal-2019-neural}. However, there is little attention paid to dialogue comprehension tasks such as response selection \\cite{lowe2015ubuntu,wu2016sequential,zhang2018modeling}. 
The potential difficulty is that utterance order restoration involves much more ordering possibilities for utterances that may have a quite flexible order inside dialogue text than NSP and SOP which only handle the predication of two-class ordering.\n\nOur work is also profoundly related to auxiliary multi-task learning, whose common theme is to guide the language modeling Transformers with explicit knowledge and complementing objectives \\cite{zhang-etal-2019-ernie,sun2019ernie,xu2020learning}. A most related work is \\citet{xu2020learning}, which introduces four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination. Our work differs from \\citet{xu2020learning} by three sides. 1) Motivation: our method is designed for a general-purpose in broad dialogue comprehension tasks whose goals may be either utterance-level discourse coherence or inner-utterance factual correctness, instead of only motivated for downstream context-response matching, whose goal is to measure if two sequences are related or not. 2) Technique: we propose both sides of intra- and inter- utterance objectives. In contrast, the four objectives proposed in \\citet{xu2020learning} are natural variants of NSP in BERT, which are all utterance-level. 3)\tTraining: we empirically evaluate domain-adaptive training and multi-task learning, instead of only employing multi-task learning, which requires many efforts of optimizing coefficients in the loss functions, which would be time-consuming. \n\nIn terms of factual backbone modeling, compared with the existing studies that enhance the PrLMs by annotating named entities or incorporating external knowledge graphs \\citep{eric2017key,liu2018knowledge}, the SVO triplets extracted in our sentence backbone predication objective (SBP) method, appear more widely in the text itself. Such triplets ensure the correctness of SVO and enable our model to discover the salient facts from the lengthy texts, sensing the intuition of ``who did what\". \n\n\\subsection{Multi-turn Dialogue Comprehension}\n\\label{sec:relatedwork}\nMulti-turn dialogue comprehension aims to teach machines to read dialogue contexts and solve tasks such as response selection \\cite{lowe2015ubuntu,wu2016sequential,zhang2018modeling} and answering questions \\cite{sun2019dream,cui2020mutual}, whose common application is building intelligent human-computer interactive systems \\cite{Chen2017survey,Shum2018,AliMe,zhu2018lingke}.\nEarly studies mainly focus on the matching between the dialogue context and question \\cite{huang2018flowqa,zhu2018sdnet}. Recently, inspired by the impressive performance of PrLMs, the mainstream is employing PrLMs to handle the whole input texts of context and question, as a linear sequence of successive tokens and implicitly capture the contextualized representations of those tokens through self-attention \\cite{qu2019bert,liu2020hisbert}. Such a way of modeling would be suboptimal to capture the high-level relationships between utterances in the dialogue history. 
In this work, we are motivated to model the structural relationships between utterances from utterance order restoration and the factual correctness inside each utterance in the perspective of language modeling pre-training instead of heuristically stacking deeper model architectures.\n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1.0\\textwidth]{objectives.pdf}\n\\caption{Structural language modeling manipulations.}\n\\label{struct}\n\\end{figure*}\n\n\\section{Approach}\nThis section presents our proposed method SPIDER (Structural Pre-traIned DialoguE Reader). First, we will present the standard dialogue comprehension model as the backbone. Then, we will introduce our designed language modeling objectives for dialogue scenarios, including utterance order restoration (UOR) and sentence backbone regularization (SBR). In terms of model training, we employ two strategies, i.e., 1) domain adaptive post-training that first trains a language model based on newly proposed objectives and then fine-tunes the response selection task; 2) multi-task fine-tuning that trains the model for downstream tasks, along with LM objectives.\n\\subsection{Transformer Encoder}\\label{sec:response_selection}\nWe first employ a pre-trained language model such as BERT \\cite{devlin-etal-2019-bert} to obtain the initial word representations. The utterances and response are concatenated and then fed into the encoder. Given the context $C$ and response ${R}$, we concatenate all utterances in the context and the response candidate as a single consecutive token sequence with special tokens separating them: \n${X} = \\{\\texttt{[CLS]} {R} \\texttt{[SEP]} {U}_1 \\texttt{[EOU]} \\dots \\texttt{[EOU]} {U}_n \\texttt{[SEP]}\\}$,\nwhere \\texttt{[CLS]} and \\texttt{[SEP]} are special tokens. \\texttt{[EOU]} is the ``End Of Utterance\" tag designed for multiturn context. ${X}$ is then fed into the BERT encoder, which is a deep multi-layer bidirectional Transformer, to obtain a contextualized representation ${H}$. \n\n\nIn detail, let ${X} = \\{x_1, \\dots, x_n\\}$ be the embedding of the sequence, which are features of encoding sentence words of length $n$. The input embeddings are then fed into the multi-head attention layer to obtain the contextual representations.\n\t\n\nThe embedding sequence $X$ is processed to a multi-layer bidirectional Transformer for learning contextualized representations, which is defined as \n\\begin{align}\\label{eq:mutihead}\n{H} = \\textup{FFN}(\\textup{MultiHead}(K,Q,V)),\n\\end{align}\nwhere K,Q,V are packed from the input sequence representation $X$. As the common practice, we set $K=Q=V$ in the implementation.\n\nFor the following part, we use ${H} = \\{h_1, \\dots, h_n\\}$ to denote the last-layer hidden states of the input sequence.\n\n\n\n\\subsection{SPIDER Training Objectives}\nTo simulate the dialogue-like features, we propose two pre-training objectives in addition to the original LM objectives: 1) utterance order restoration, which predicts the order of the permuted utterances in dialogue context; 2) sentence backbone regularization, which regularizes the model to improve the factual correctness of summarized subject-verb-object triplets. The utterance manipulations are shown in Figure \\ref{struct}. The following subsections describe the objectives in turn.\n\n\\subsubsection{Utterance Order Restoration}\\label{sec:uor}\nCoherence is an essential aspect of conversation modeling. In a coherent discourse, utterances should respect specific orders of relations and logic. 
The ordering of utterances in a dialogue context determines the semantic of the conversation. Therefore, learning to order a set of disordered utterances in such a way that maximizes the discourse coherence will have a critical impact in learning the representation of dialogue contexts.\n\nHowever, most previous studies focused on modeling the semantic relevance between the context and the response candidate. Here we introduce utterance-level position modeling, i.e., utterance order restoration to encourage the model to be aware of the semantic connections among utterances in the context. The idea is similar to autoencoding (AE) which aims to reconstruct the original data from corrupted input \\cite{yang2019xlnet}. Given permuted dialogue contexts that comprise utterances in random orders, we maximize the expected log-likelihood of a sequence of the original ground-truth order. \n\nThe goal of the utterance order restoration\nis to organize randomly shuffled utterances of a conversation into a coherent dialogue context. We extract the hidden states of \\texttt{[EOU]} from $H$ as the representation of each utterance. Formally, given an utterance sequence denoted as $C' = [H_{u_1}; H_{u_2}; \\dots; H_{u_{K}}]$\nwith order $o = [o_1; o_2; \\dots ; o_{K}]$, where $K$ means the number of maximum positions to be predicted. \nWe expect an ordered context $C^{*} = [u_{o^{*}_1} ; u_{o^{*}_2} ; \\dots; u_{o^{*}_{K}}]$ is the most coherent permutation of utterances.\n\n\nAs predicting the permuted orders is a more challenging optimization problem than NSP and SOP tasks due to the large searching space of permutations and causes slow convergence in preliminary experiments, we choose to only predict the order of the last few permuted utterances by a permutation ratio $\\delta$ to control the maximum number of permutations: $K' = K*\\delta$. The UOR training objective is then formed as:\n\\begin{equation}\n\\begin{split}\n\\mathbb{L}_{uor} &= -\\sum_{k=1}^{K'}\\left [ {o}_{k}\\log\\hat{o}_{k} \\right ],\n\\end{split}\n\\end{equation}\nwhere $\\hat{o}_{k}$ denotes the predicted order.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1\\textwidth]{model_training.pdf}\n\\caption{Model training flow of domain-adaptive post-training (a) and multi-task fine-tuning (b).}\n\\label{model_training}\n\\end{figure*}\n\n\\subsubsection{Sentence Backbone Regularization}\nThe sentence backbone regularization objective is motivated to guide the model to distinguish the internal relation of the fact triplets that are extracted from each utterance, which would be helpful to improve the ability to capture the key facts of the utterance as well as the correctness. First, we apply a fact extractor to conduct the dependency parsing of each sentence. After that, we extract the subject, the root verb, and the object tokens as an SVO\ntriplet corresponding to each utterance. 
Inspired by \\citet{bordes2013translating} where the embedding of the tail entity should be close to the embedding of the head entity plus some vector that depends on the relationship, we assume that given the dialogue input, in the hidden representation space, the summation of the subject\nand the verb should be close to the object as\nmuch as possible, i.e., \n\\begin{equation}\n h_{subject} + h_{verb} \\rightarrow h_{object}.\\label{eq.svo}\n\\end{equation}\n\nConsequently, based on the sequence hidden states\n$h_i$ where $i = 1, ...,L_y$, we introduce a regularization\nfor the extracted facts:\n\\begin{equation}\n \\mathbb{L}_{sbr} = \\sum^{m}_{k=1}(1-\\cos(h_{subj_k} + h_{verb_k}, h_{obj_k})),\n\\end{equation}\nwhere $m$ is the total number of fact tuples extracted from the summary and $k$ indicates the $k$-th triplet. ``$subj_k$\", ``$verb_k$\", and ``$obj_k$\" are indexes of the $k$-th fact tuple's subject, verb, and object.\n\nIn our implementation, since PrLMs take subwords as input while the SVO extraction performs in word-level, we use the first-token hidden state as the representation of the original word following the way in \\citet{devlin-etal-2019-bert} for named entity recognition.\n\n\\begin{table*}\n\t\\centering\\small\n\t \\resizebox{\\linewidth}{!}\n {\\setlength{\\tabcolsep}{7.5pt}\n\t\t\\begin{tabular}{cccccccccc}\n\t\t\t\\toprule\n\t\t\t\n\n\t\t\t& \\multicolumn{3}{c}{\\textbf{Ubuntu}} & \\multicolumn{3}{c}{\\textbf{Douban}} & \\multicolumn{3}{c}{\\textbf{ECD}} \\\\\n\t\t\t&\\textbf{Train} & \\textbf{Valid} & \\textbf{Test} & \\textbf{Train} & \\textbf{Valid} & \\textbf{Test} & \\textbf{Train} & \\textbf{Valid} & \\textbf{Test} \\\\\n\t\t\t\t\t\t\\midrule\n\t\t\t\\midrule\n\t\t\t\\# context-response pairs & 1M & 500K & 500K &1M & 50K & 10K &1M & 10K & 10K\\\\\n\t\t\t\\# candidates per context & 2 & 10 & 10 & 2 & 2 & 10& 2 & 2 & 10\\\\\n\t\t\tAvg \\# turns per context & 10.13 & 10.11 & 10.11 & 6.69 & 6.75 & 6.45 & 5.51 & 5.48 & 5.64\\\\\n\t\t\tAvg \\# words per utterance & 11.35 & 11.34 & 11.37 & 18.56 & 18.50 & 20.74 & 7.02 & 6.99 & 7.11\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t}\n\t\\caption{\\label{tab:dataset} Data statistics of Ubuntu, Douban, and ECD datasets.}\n\\end{table*}\n\n\\section{Use of SPIDER Objectives}\\label{sec:training}\nIn this section, we introduce two training methods to take the newly proposed language modeling objectives into account, namely domain-adaptive post-training and multi-task fine-tuning, as illustrated in Figure \\ref{model_training}.\n\\subsection{Domain Adaptive Post-training}\nSimilar to BERT, we also adopt the masked language model (MLM) and the next sentence prediction (NSP) as LM-training tasks to enable our model to capture lexical and syntactic information from tokens in text. More details of the LM training tasks can be found from \\citet{devlin-etal-2019-bert}. 
The overall post-training loss is the sum of the MLM, NSP, UOR, and SBR loss.\n\nOur full model is trained by a joint loss by combining both of the objectives above: \\begin{equation}\n \\mathbb{L} = \\lambda_1(\\mathbb{L}_{mlm}+\\mathbb{L}_{nsp}) + \\lambda_2\\mathbb{L}_{uor}+\\lambda_3\\mathbb{L}_{sbr},\n\\end{equation}\nwhere $\\lambda_1, \\lambda_2, \\lambda_3$ are hyper-parameters.\n\nAfter post-training the language model on the dialogue corpus, we load the pre-trained weights as the same way of using BERT \\cite{devlin-etal-2019-bert}, to fine-tune the downstream tasks such as response selection and dialogue reasoning as focused in this work (details in Section \\ref{sec:tasks}).\n\n\\subsection{Multi-task Fine-tuning}\nSince our objectives can well share the same input as the downstream tasks, there is an efficient way of using multi-task fine-tuning (MTF) to directly train the task-specific models along with our SPIDER objectives. Therefore, we feed the permuted context to the dialogue comprehension model and combine the three losses for training: \n\\begin{equation}\n \\mathbb{L} = \\beta_1\\mathbb{L}_{dm} + \\beta_2\\mathbb{L}_{uor}+\\beta_3\\mathbb{L}_{sbr},\n\\end{equation}\nwhere $\\beta_1, \\beta_2, \\beta_3$ are hyper-parameters.\n\nIn order to train a task-specific model for dialogue comprehension, the hidden states $H$ will be fed into a classifier with a fully connected and softmax layer. We learn model $g(\\cdot, \\cdot)$ by minimizing cross entropy loss with dataset $\\mathcal{D}$. Let $\\Theta$ denote the parameters, for binary classification like the response selection task, the objective function $\\mathcal{L(D}, \\Theta)$ can be formulated as:\n\\begin{equation*}\n\\begin{split}\n \\mathbb{L}_{dm} = -\\sum_{i=1}^N [y_i\\log(g(c_i,r_i)) + \\\\ (1-y_i)\\log(1-g(c_i,r_i))].\n\\end{split}\n\\end{equation*}\nwhere $N$ denotes the number of examples. For multiple choice task like MuTual, the loss function is:\n\\begin{equation*}\n \\mathbb{L}_{dm} = -\\sum_{i=1}^N\\sum_{k=1}^C y_{i,c}\\log(g(c_i,r_{i,k})).\n\\end{equation*}\nwhere $C$ is the number of choice.\n\n\n\\section{Experiments}\n\\subsection{Datasets}\\label{sec:tasks}\nWe evaluated our model on two English datasets: Ubuntu Dialogue Corpus (Ubuntu) \\cite{lowe2015ubuntu} and Multi-Turn Dialogue Reasoning (MuTual) \\cite{cui2020mutual},\\footnote{Actually, MuTual is a retrieval-based dialogue corpus in form, but the theme is English listening comprehension exams, thus we regard as a reading comprehension corpus in this work. Because the test set of MuTual is not publicly available, we conducted the comparison with our baselines on the Dev set for convenience.} and two Chinese datasets: Douban Conversation Corpus (Douban) \\cite{wu2016sequential} and E-commerce Dialogue Corpus (ECD) \\cite{zhang2018modeling}. \n\\subsubsection{Ubuntu Dialogue Corpus} Ubuntu \\cite{lowe2015ubuntu} consists of English multi-turn conversations about technical support collected from chat logs of the Ubuntu forum. The dataset contains 1 million context-response pairs, 0.5 million for validation and 0.5 million for testing. In training set, each context has one positive response generated by human and one negative response sampled randomly. In validation and test sets, for each context, there are 9 negative responses and 1 positive response. \n\\subsubsection{Douban Conversation Corpus} Douban \\cite{wu2016sequential} is different from Ubuntu in the following ways. 
First, it is an open domain where dialogues are extracted from Douban Group. Second, response candidates on the test set are collected by using the last turn as the query to retrieve 10 response candidates and labeled by humans. Third, there could be more than one correct response for a context.\n\\subsubsection{E-commerce Dialogue Corpus} ECD \\cite{zhang2018modeling} dataset is extracted from conversations between customer and service staff on Taobao. It contains over 5 types of conversations based on over 20 commodities. There are also 1 million context-response pairs in the training set, 0.5 million in the validation set, and 0.5 million in the test set.\n\n\n\n\\begin{table*}[t]\n{\n \\centering\\small\n\t\n \n \\resizebox{\\linewidth}{!}\n {\\setlength{\\tabcolsep}{3pt}\n \\renewcommand\\arraystretch{1.1}\n \\begin{tabular}{lcccccccccccc}\n \\toprule \\textbf{Model} & \\multicolumn{3}{c}{\\textbf{Ubuntu Corpus}} & \\multicolumn{6}{c}{\\textbf{Douban Conversation Corpus}} & \\multicolumn{3}{c}{\\textbf{E-commerce Corpus}} \\\\\n \\cmidrule(r){2-4} \\cmidrule(r){5-10} \\cmidrule(r){11-13}\n & $\\textbf{R}_{10}$@1 & $\\textbf{R}_{10}$@2 & $\\textbf{R}_{10}$@5 & \\textbf{MAP} & \\textbf{MRR} & \\textbf{P}@1 & $\\textbf{R}_{10}$@1 & $\\textbf{R}_{10}$@2 & $\\textbf{R}_{10}$@5 & $\\textbf{R}_{10}$@1 & $\\textbf{R}_{10}$@2 & $\\textbf{R}_{10}$@5 \\\\\n \\midrule\n \\midrule SMN & 72.6 & 84.7 & 96.1 & 52.9 & 56.9 & 39.7 & 23.3 & 39.6 & 72.4 & 45.3 & 65.4 & 88.6 \\\\\n DUA & 75.2 & 86.8 & 96.2 & 55.1 & 59.9 & 42.1 & 24.3 & 42.1 & 78.0 & 50.1 & 70.0 & 92.1 \\\\\n DAM & 76.7 & 87.4 & 96.9 & 55.0 & 60.1 & 42.7 & 25.4 & 41.0 & 75.7 & - & - & - \\\\\n IoI & 79.6 & 89.4 & 97.4 & 57.3 & 62.1 & 44.4 & 26.9 & 45.1 & 78.6 & - & - & - \\\\\n MSN & 80.0 & 89.9 & 97.8 & 58.7 & 63.2 & 47.0 & 29.5 & 45.2 & 78.8 & 60.6 & 77.0 & 93.7 \\\\\n MRFN & 78.6 & 88.6 & 97.6 & 57.1 & 61.7 & 44.8 & 27.6 & 43.5 & 78.3 & - & - & - \\\\\n SA-BERT & 85.5 & 92.8 & 98.3 & \\textbf{61.9} & \\textbf{65.9} & \\textbf{49.6} & \\textbf{31.3} & 48.1 & \\textbf{84.7} & 70.4 & \\textbf{87.9} & 98.5 \\\\\n \\midrule\n \\multicolumn{13}{l}{\\textit{Multi-task Fine-tuning}} \\\\\n BERT &81.7 &90.4 &97.7 &58.8 &63.1 & 45.3 &27.7 &46.4 &81.8 & 61.7 & 81.1 & 97.0 \\\\\n \n \n \\quad + SPIDER & 83.1 & 91.3 & 98.0 & 59.8 & 63.8 & 45.9 &28.5 & 48.7 & 82.6 & 62.6 & 82.7 & 97.1 \\\\\n \\midrule\n \\multicolumn{13}{l}{\\textit{Domain Adaptive Post-training}} \\\\\n BERT & 85.7 & 93.0 & 98.5 & 60.5 & 64.7 & 47.4 & 29.1 & 47.8 & 84.9 & 66.4 & 84.8 & 97.6 \\\\\n \n \n \\quad + SPIDER & \\textbf{86.9} & \\textbf{93.8} & \\textbf{98.7} & 60.9 & 65.0 & 47.5 & 29.6 & \\textbf{48.8} & 83.6 & \\textbf{70.8} & 85.3 & \\textbf{98.6} \\\\\n \n\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\\label{tab:exp} Performance comparison on Ubuntu, Douban and E-Commerce datasets.\n\t}\n}\n\\end{table*}\n\n\n\\subsubsection{Multi-Turn Dialogue Reasoning} MuTual \\cite{cui2020mutual} consists of 8860 manually annotated dialogues based on Chinese student English listening comprehension exams. For each context, there is one positive response and three negative responses. The difference compared to the above three datasets is that only MuTual is reasoning-based. There are more than 6 types of reasoning abilities reflected in MuTual.\n\\subsection{Implementation Details}\nFor the sake of computational efficiency, the maximum number of utterances is specialized as 20. 
The concatenated context, response, \\texttt{[CLS]} and \\texttt{[SEP]} in one sample is truncated according to the ``longest first\" rule or padded to a certain length, which is 256 for MuTual and 384 for the other three datasets. For the hyper-parameters, we\nempirically set $\\lambda_1 = \\lambda_2 =\\lambda_3 = \\beta_1 = \\beta_2 =1$ in our experiments.\n\nOur model is implemented using Pytorch and based on the Transformer Library. We use BERT \\cite{devlin-etal-2019-bert} as our backbone model. AdamW \\cite{loshchilov2017decoupled} is used as our optimizer. The batch size is 24 for MuTual, and 64 for others. The initial learning rate is $4\\times 10^{-6}$ for MuTual and $3\\times 10^{-5}$ for others. The ratio is set to 0.4 in our implementation by default. We run 3 epochs for MuTual and 2 epochs for others and select the model that achieves the best result in validation. The training epochs are 3 for DAP. \n\nOur domain adaptive post-training for the corresponding response selection tasks is based on the three large-scale dialogue corpus including Ubuntu, Douban, and ECD, respectively.\\footnote{Since phrases are quite common in Chinese, making it inaccurate to calculate the SVO relations according to Eq. \\ref{eq.svo}, thus we did not use the SBR objective for the two Chinese tasks in this work.} The data statistics are in Table \\ref{tab:dataset}. Since domain adaptive post-training is time-consuming, following previous studies \\cite{gu2020speaker}, we use \\textit{bert-base-uncased}, and \\textit{bert-base-chinese} for the English and Chinese datasets, respectively. Because there is no appropriate domain data for the small-scale Mutual dataset, we only report the multi-task fine-tuning results with our SPIDER objectives, and also present the results with other PrLMs such as ELECTRA \\cite{clark2020electra} for general comparison.\n\n\\subsection{Baseline Models}\nWe include the following models for comparison:\n\n$\\bullet$ \\textbf{Multi-turn matching models}: Sequential Matching Network (SMN) \\cite{wu2016sequential}, Deep Attention Matching Network (DAM) \\cite{zhou2018multi}, Deep Utterance Aggregation (DUA) \\cite{zhang2018modeling}, Interaction-over-Interaction (IoI) \\cite{tao2019ioi} have been stated in Section \\ref{sec:relatedwork}. Besides, Multi-Representation Fusion Network (MRFN) \\cite{tao2019multi} matches context and response with multiple types of representations. Multi-hop Selector Network (MSN) \\cite{yuan2019multi} utilizes a multi-hop selector to filter necessary utterances and matches among them.\n\n$\\bullet$ \\textbf{PrLMs-based models}: BERT \\cite{devlin2018bert}, SA-BERT \\cite{gu2020speaker}, and ELECTRA \\cite{clark2020electra}.\n\n\n\n\\subsection{Evaluation Metrics}\nFollowing \\cite{lowe2015ubuntu, wu2016sequential}, we calculate the proportion of true positive response among the top-$k$ selected responses from the list of $n$ available candidates for one context, denoted as $\\textbf{R}_n$@$k$. Besides, additional conventional metrics of information retrieval are employed on Douban: Mean Average Precision (MAP) \\cite{baeza1999modern}, Mean Reciprocal Rank (MRR) \\cite{voorhees1999trec}, and precision at position 1 (P@1). \n\n\\subsection{Results}\nTables \\ref{tab:exp}-\\ref{tab:mutual_result} show the results on the four benchmark datasets. 
We have the following observations:\n\n1) Generally, the previous models based on multi-turn matching networks perform worse than simple PrLMs-based ones, illustrating the power of contextualized representations in context-sensitive dialogue modeling. PrLM can perform even better when equipped with our SPIDER objectives, verifying the effectiveness of dialogue-aware language modeling, where inter-utterance position information and inner-utterance key facts are better exploited. Compared with SA-BERT that involves more complex architecture and more parameters by injecting extra speaker-aware embeddings, SPIDER keeps the same model size as the backbone BERT, and even surpasses SA-BERT on most of the metrics.\n\n2) In terms of the training methods, DAP generally works better than MTF, with the merits of two-step procedures including the pure LM-based post-training. According to the ablation study in Table \\ref{tab:ablation}, we see that both of the dialogue-aware LM objectives are essentially effective and combining them (SPIDER) gives the best performance, which verifies the necessity of modeling the utterance order and factual correctness. We also notice that UOR shows better performance than SBR in DAP, while gives relative descent in MFT. The most plausible reason would be that UOR would permute the utterances in the dialogue context which helps the language model learn the utterance in UOR. However, in MFT, the major objective is the downstream dialogue comprehension task. The permutation of the context would possibly bring some negative effects to the downstream task training.\n\n\\begin{table}\n{\n \\centering\\small\n \\resizebox{\\linewidth}{!}\n { \\setlength{\\tabcolsep}{8pt}\n \\begin{tabular}{l l l l}\n \\toprule \n \\textbf{Model}& MRR & $\\textbf{R}_{4}$@1 & $\\textbf{R}_{4}$@2 \\\\\n \\midrule \\midrule\n BERT$_{base}$ & 80.0 & 65.3 & 86.0 \\\\\n \\quad + UOR & 80.7 & 66.1 & 86.7 \\\\\n \\quad + SBR & 81.3 & 67.4 & 87.1 \\\\\n \\quad + SPIDER & 81.6 & 67.6 & 87.3 \\\\\n \\midrule\n BERT$_{large}$& 82.2& 69.1 & 87.9 \\\\\n \\quad + UOR & 82.8& 69.8 & 88.6 \\\\\n \\quad + SBR &83.4 & 71.0 & 89.4\\\\\n \\quad + SPIDER &83.9 & 71.8 & 89.2 \\\\\n \\midrule\n ELECTRA$_{base}$ &86.5 & 76.2 & 91.6 \\\\\n \\quad + UOR & 86.9 & 76.6 & 91.8 \\\\\n \\quad + SBR & 87.6 & 77.1 & 92.0 \\\\\n \\quad + SPIDER &88.2 & 79.2 & 92.3 \\\\\n \\midrule\n ELECTRA$_{large}$ & 94.9 & 90.6 & 97.7 \\\\\n \\quad + UOR & 95.3 & 91.3 & 97.8 \\\\\n \\quad + SBR & 95.5 & 91.6 & 97.8 \\\\\n \\quad + SPIDER & \\textbf{95.6} & \\textbf{92.0} & \\textbf{97.9} \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\\label{tab:mutual_result} Results on MuTual dataset.\n\t}\n}\n\\end{table}\n\n\n\n\\begin{table}\n\t\t\\centering\\small\n \\resizebox{\\linewidth}{!}\n { \\setlength{\\tabcolsep}{8pt}\n\t\t{\n\t\t\t\\begin{tabular}{l l l l}\n\t\t\t\t\\toprule\n\t\t\t\t\\textbf{Model} & $\\textbf{R}_{10}$@1 & $\\textbf{R}_{10}$@2 & $\\textbf{R}_{10}$@5 \\\\\n\t\t\t\t\\midrule\n\t\t\t\t\\midrule\n\t\t\t\tSPIDER$_\\textup{DAP}$ & \\textbf{86.9} & \\textbf{93.8} & \\textbf{98.7} \\\\\n\t\t\t\t\\quad w\/o UOR & 86.2 & 93.3 & 98.6 \\\\ \n\t\t\t\t\\quad w\/o SBR & 86.4 & 93.5 & 98.6 \\\\ \n\t\t\t\t\\quad w\/o Both & 85.7 & 93.0 & 98.5 \\\\\n\t\t\t\n\t\t\t\t\\midrule\n\t\t\t\tSPIDER$_\\textup{MTF}$ & 83.1 & 91.3 & 98.0 \\\\ \n\t\t\t\t\\quad w\/o UOR & 82.6 & 91.0 & 97.9 \\\\ \n\t\t\t\t\\quad w\/o SBR & 82.3 & 90.8 & 97.8 \\\\ \n\t\t\t\t\\quad w\/o Both & 81.7 &90.4 &97.7 
\\\\\n\t\t\t\n\t\t\t\t\\bottomrule\n\t\t\t\\end{tabular}\n\t\t}\n }\n\t\t\\caption{\\label{tab:ablation} Ablation study on the Ubuntu dataset.}\n\t\\end{table}\n\n\n\\subsection{Influence of Permutation Ratio}\nFor the UOR objective, a hyper-parameter $\\delta$ is set to control the maximum number of permutations (as described in Section \\ref{sec:uor}), which would possibly influence the overall model performance. To investigate the effect, we set the permutation ratio from [0, 20\\%, 40\\%, 60\\%, 80\\%, 100\\%]. The result is depicted in Figure \\ref{numImg}, in which our model outperforms the baseline in general, showing that the permutation indeed strengthens the baseline.\n\n\n\\subsection{Comparison with Different Context Length}\nContext length can be measured by the number of turns and average utterance length in a conversation respectively. We split test instances from the Ubuntu dataset into several buckets and compare SPIDER with UOR with the BERT baseline. According to the results depicted in Figure \\ref{fig:num_utterance}, we observe that SPIDER performs much better on contexts with long utterances, and it also performs robustly and is significantly and consistently superior to the baseline. The results indicate the benefits of modeling the utterance order for dialogue comprehension.\n\n\\subsection{Human Evaluation about Factual Correctness}\nTo compare the improvements of SPIDER over the baseline on factual correctness, we extract the error cases of the BERT baseline on MuTual (102 in total) and 42 (41.2\\%) are correctly answered by SPIDER. Among the 42 solved cases, 33\/42 (78.6\\%) are entailed with SVO facts in contexts, indicating the benefits of factual correctness. \n\n\\begin{figure}\n \n\\setlength{\\abovecaptionskip}{0pt}\n\\pgfplotsset{height=5.3cm,width=8cm,compat=1.15,every axis\/.append style={thick},every axis legend\/.append style={at={(0.95,0.95)}},legend columns=3 row=1} \\begin{tikzpicture} \\tikzset{every node}=[font=\\small]\n\\begin{axis} [width=8cm,enlargelimits=0.13,legend pos=north west,xticklabels={0.0,0.2,0.4,0.6,0.8,1.0}, axis y line*=left, axis x line*=left, xtick={0,1,2,3,4,5}, x tick label style={rotate=0},\n ylabel={$\\textbf{R}_{10}$@1}, ymin=85.4,ymax=87.8,\n ylabel style={align=left},font=\\small]\n+\\addplot+ [smooth, mark=*,mark size=1.2pt,mark options={mark color=red}, color=red] coordinates { (0,85.7) (1,86.21) (2,86.9) (3,86.7) (4,86.4) (5,86.38)};\n\\addlegendentry{\\small Our method}\n\\addplot+[densely dotted, mark=none, color=cyan] coordinates {(0, 85.7)(1, 85.7)(2, 85.7)(3, 85.7)(4, 85.7)(5, 85.7)}\n\\addlegendentry{\\small Baseline}\n\\end{axis}\n\\end{tikzpicture}\n \\caption{\\label{numImg}Influence of the permutation ratio $\\delta$.}\n %\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\t\t\\setlength{\\abovecaptionskip}{0pt}\n\t\t\t\\begin{center}\n\t\t\t\\pgfplotsset{height=5.3cm,width=6cm,compat=1.14,every axis\/.append style={thick},every axis legend\/.append style={at={(0.95,0.95)}},legend columns=3 row=1} \\begin{tikzpicture} \\tikzset{every node}=[font=\\small] \\begin{axis} [width=8cm,enlargelimits=0.13, legend pos=north west, xticklabels={0-4, 4-8,8-12,12-16,16-20}, axis y line*=left, axis x line*=left, xtick={1,2,3,4,5}, x tick label style={rotate=0},\n\t\t\tylabel={${\\rm R_{10} @1} $},\n\t\t\tymin=83.5,ymax=90,\n\t\t\tylabel style={align=left},xlabel={number of utterances},font=\\small]\n\t\t\t\\addplot+ [smooth, mark=*,mark size=1.2pt,mark options={mark color=cyan}, color=red] coordinates\n\t\t\t{(1, 
85.2938347403428) (2, 86.27162652589905) (3,86.9946175991989) (4,87.61989860583016) (5,87.94123362906633)};\n\t\t\t\\addlegendentry{\\small SPIDER}\n\t\t\t\\addplot+[smooth, mark=diamond*, mark size=1.2pt, mark options={mark color=cyan}, color=cyan] coordinates {(1, 84.3903300076746) (2, 85.59551303200263) (3,86.45568907247465) (4,86.86797634136038) (5,86.53523489932886)};\n\t\t\t\\addlegendentry{\\small Baseline}\n\t\t\n\t\t\t\\end{axis}\n\t\t\t\\end{tikzpicture}\n\t\t\\end{center}\n \t\\caption{${\\rm R_{10} @1} $ of SPIDER and the baseline BERT on different numbers of utterances.}\n\t\t\\label{fig:num_utterance}\n\\end{figure}\n\n\n\\section{Conclusion}\nIn this paper, we focus on the task-related adaptation of pre-trained language models and propose SPIDER (Structural Pre-traIned DialoguE Reader), a structural language modeling method to capture dialogue-exclusive features. To explicitly model the coherence among utterances and the key facts in each utterance, we introduce two novel dialogue-aware language modeling tasks including utterance order restoration and sentence backbone regularization objectives. Experiments on widely-used multi-turn dialogue comprehension benchmark datasets show its superiority over baseline methods. Our work reveals a way to make better use of structure learning in the contextualized representations from pre-trained language models and gives insights into how to adapt language modeling training objectives to downstream tasks. \n\n\t\n\\bibliographystyle{acl_natbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nCoherent elastic neutrino-nucleus scattering is a well-predicted Standard\nModel interaction that is of wide-spread interest. Characterizing the process can provide a probe of beyond the Standard Model physics, such as through\nnon-standard neutrino interactions~\\cite{kate}, and can contribute to\nour understanding of supernova dynamics. Although\nexperiments have been proposed to search for this channel~\\cite{CLEAR, CLEAR2, Cogent, texono}, neutrino-nucleus coherent elastic\nscattering has never been observed.\nThis paper shows that, for existing proposed layouts of neutrino sources\nand ton-scale dark matter detectors, discovery of coherent neutrino\nscattering can occur with a 2~ton$\\cdot$year exposure under an optimistic detection scenario. Along with coherent neutrino physics, the observation of rare, WIMP-like events with a well predicted flux (shape and absolute normalization) and cross section in a well-known time window can provide a cross-check of the sensitivity and efficiency of dark matter detectors. Furthermore, in the case that a substantial event sample is collected with a dedicated experiment close to the neutrino source, sensitivity to physics beyond the Standard Model and a unique probe of $\\sin^{2}\\theta_{W}$ can be obtained through a coherent neutrino scattering cross section measurement.\n\nWe discuss the physics importance of the coherent neutrino interaction in the next section and detection methods in Section~\\ref{sec:detection}. Section~\\ref{sec:discovery} provides our assumptions for a few example experiments and the raw rates in these detectors with various exposures and as a function\nof envisioned energy thresholds and baseline lengths. The effect of coherent events as a background for WIMP interactions is also considered. Then, we consider the scenario in which a detector module is brought within tens of meters of the neutrino source and used to obtain a high statistics sample of events for\ncoherent neutrino physics (Section~\\ref{sec:closedetector2}).\n\n\\section{Coherent neutrino scattering}\nThe coherent scattering cross section, $\\sigma$, depends on the number of neutrons, $N$,\nand protons, $Z$, of the target material with mass $M$. If $T$ is the\nrecoil energy of the interaction and the incoming neutrino has energy $E_\\nu$,\n\\begin{equation}\n{{d\\sigma}\\over{dT}} = {{G_F^2}\\over {4\\pi}} Q_W^2 M \\left(1-\n{{MT}\\over{2E_\\nu^2}}\\right) F(Q^2)^2. \\label{coherent}\n\\end{equation}\n\n\nIn this equation, $G_F$ and $Q_W$ are the precisely known Fermi constant and weak\ncharge [$Q_W=N-(1-4~\\mathrm{sin}^{2}\\theta_{W})Z$], respectively. The form factor, $F(Q^2)$, dominates the $\\sim$5\\% cross section uncertainty~\\cite{Horowitz:2003cz}. \n\nAs the cross section is well predicted, coherent elastic neutrino-nucleus scattering is an ideal source\nto search for new physics in the neutrino sector. A cross section\nmeasurement with $\\sim$10\\% uncertainty will result in an uncertainty on\n$\\sin^{2}\\theta_{W}$ of $\\sim$5\\%~\\cite{kate}. While this uncertainty\nis large compared to existing and planned precision atomic parity violation and M\\o ller scattering measurements, a discrepancy from the Standard Model prediction already\nobserved in the neutrino sector by the NuTeV experiment~\\cite{NuTeV} motivates more neutrino-based measurements. 
Notably, a $\\sin^{2}\\theta_{W}$ measurement with coherent neutrino-nucleus scattering would be at $Q\\sim\\mathrm{0.04}$~GeV\/c, well away from all previous neutrino scattering measurements (including NuTeV's at $Q\\sim\\mathrm{4}$~GeV\/c).\n\nA coherent neutrino-nucleus scattering cross section measurement agreement\nwithin 10\\% uncertainty of the Standard Model prediction will\nresult in limits on non-standard neutrino interactions (NSI) which improve on the present\nones by more than an order of magnitude~\\cite{Barranco:2005yy,Barranco:2007tz,kate}. The low-$Q$ existing and planned precision measurements mentioned above are not sensitive to new physics unique to neutrino interactions. NSI terms can enter the Standard Model Lagrangian through an extra term, \n\\begin{equation}\n{\\cal L}_\\mathit{eff}^\\mathit{NSI} =- \\varepsilon^{fP}_{\\alpha \\beta} 2 \\sqrt{2} G_F\n (\\bar{\\nu}_\\alpha \\gamma_{\\rho} L \\nu_\\beta) (\\bar{f} \\gamma^{\\rho}P\n f)\n\\end{equation}\nwhere $f$ is a first generation Standard Model fermion, $e$, $u$, or $d$ and $P = L~\\mathrm{or}~ R$~\\cite{davidson}. These $\\varepsilon^{fP}_{\\alpha \\beta}$ terms can appear due to a range of sources, including incorporating neutrino mass into the Standard Model~\\cite{Schecter} and supersymmetry~\\cite{Barger}. The $\\varepsilon_{ee}$ and $\\varepsilon_{e\\tau}$ terms, in particular, are poorly constrained by existing measurements. Measuring a coherent scattering cross section in disagreement with the Standard Model expectation could be an indication of NSI. In the case that a cross section discrepancy is observed, multiple nuclear targets could be employed in order to disentangle effects from NSI, a $\\sin^{2}\\theta_{W}$ anomaly (e.g. consistent with NuTeV), and\/or nuclear physics. \n\nCharacterizing neutrino coherent scattering \nis also essential to the understanding of supernova evolution as the energy carried away by neutrinos comprises $\\sim$99\\% of the supernova's total energy and the coherent channel's cross section exceeds all others by at least an order of magnitude in the relevant energy region. \nIn a stellar core collapse, the density of the electron\/nucleus plasma at the core can reach $>$$10^{12}$~$\\mathrm{g\/cm^{3}}$. At these densities, a 20~MeV neutrino's mean free path is on the order of 0.5~km~\\cite{Horowitz:1996ci} with the opacity in nucleus-rich regions dominated by coherent scattering.\nAlong with being relevant for supernova evolution in general, the coherent cross section may also affect the supernova neutrino signals expected on Earth. \n\nThe coherent process is important for supernova burst neutrino detection as well, providing information about all flavors of neutrinos--not just $\\nu_{e}$\/$\\overline{\\nu}_{e}$. For coherent neutrino-nucleus scattering specifically, unlike other channels of flavor-blind neutral current interactions, the nuclear recoil energy is proportional to neutrino energy due to the elastic nature of the interaction. Such information could be combined with charged current $\\nu_{e}$\/$\\overline{\\nu}_{e}$ interaction information from other sources to develop a complete picture of oscillations with supernova neutrinos. Note that approximately seven\nneutrino-nucleus coherent events in one ton of Ar during a ten second\nwindow for a galactic core-collapse supernova at 10~kpc are expected with a recoil energy threshold of 5~keV~\\cite{Horowitz:2003cz}. 
Although the detectors discussed below are probably too small to provide a sizable sample of supernova burst neutrino-nucleus coherent scatters (unless the supernova is very close), an accelerator-based measurement to confirm the predicted interaction cross section would prove valuable to a next-generation coherent neutrino scattering experiment's supernova burst neutrino measurement.\n\n\n\\section{Detection}\n\\label{sec:detection}\nThe coherent neutrino-nucleus cross section favors very low recoil energies, in\nthe few-to-tens of keV range. This is well below the threshold of the most\nsensitive recent and existing large-scale low energy neutrino detectors, like SNO~\\cite{SNO}, Borexino~\\cite{borexino}, and KamLAND~\\cite{Kamland}, which explains why this relatively high cross section process has not yet\nbeen observed. \nDark matter detectors, on the other-hand, have energy thresholds in the $\\sim$10~keV range and lower. As such, these detectors are potentially ideal coherent neutrino scattering detectors if given\na sufficiently large target mass and neutrino flux. A ton-scale dark matter detector at its nominal depth underground of ~1-2 km, in combination with an intense decay-at-rest (DAR) neutrino source, could discover the coherent neutrino scattering process. \n\nIt is worth noting that dark matter\ndetectors could observe $^8$B solar\nneutrino coherent events, as has been pointed out in Ref.~\\cite{jocelyn}. The $^8$B rate depends on the detector threshold and material, with more events for lighter target nuclei and lower thresholds.\nThis solar neutrino signal becomes negligible for high-$A$ targets when the low-energy recoil threshold of the experiment is between 5 and 10~keV or more. There are zero solar coherent events expected with a 5\/10~keV threshold for the targets (Xe, Ge\/Ar) and exposures considered here. In contrast, the DAR accelerator source described in Section~\\ref{sec:discovery} will produce a significant number of events within a well-defined time window when the accelerator is on.\nThese events can therefore only be considered a WIMP background during those times (see Sec.~\\ref{sec:WIMPsearch}). In terms of a coherent neutrino physics measurement, the accelerator-based case has the luxury of an \\textit{in-situ} background measurement (when the source is off) in addition to the higher-energy, above-threshold nuclear recoils. \n\n\n\\section{Rates and Time to Discovery}\n\\label{sec:discovery}\nIn order to be reasonably concrete, we study a set of experimental\ndesigns inspired by proposals for the Deep Underground Science and\nEngineering Laboratory (DUSEL). We note that the detector designs are\nnot very different from those under consideration at other underground\nlaboratories and that the results can be easily scaled. For the\nneutrino source, we assume a DAR configuration produced by high\nintensity cyclotrons which are now under development~\\cite{Luciano}\\cite{Jose} and proposed for DUSEL~\\cite{EOI}.\n\nDAR neutrinos are known to be an excellent source for neutrino-nucleus coherent scattering experiments~\\cite{CLEAR, CLEAR2}.\nThe neutrinos are produced with relatively low energies ($<$52.8 MeV), a range \nwhere coherent neutrino scattering dominates all\nother cross sections by about an order of magnitude. 
Ref.~\\cite{EOI} calls for a design with 800~MeV protons, accelerated via high intensity\ncyclotron(s), which impinge on a carbon target and facilitate the production and eventual decay of pions: $\\pi^{+}\\rightarrow\\mu^{+}\\nu_{\\mu}$, followed by $\\mu^+\n\\rightarrow e^{+}\\bar{\\nu}_{\\mu}\\nu_{e}$. A DAR source flux profile is shown in Fig.~\\ref{flux}. \n\n\\begin{figure}[b]\\begin{center}\n{\\includegraphics[width=3.5in]{flux.pdf}\n} \\end{center}\n\\vspace{-.8cm}\n\\caption{Energy distribution of neutrinos in a DAR source, from Ref~\\cite{multi}.\n\\label{flux} }\n\\end{figure}\n\n\nHigh intensity DAR sources are being proposed for $CP$ violation\nsearches involving Gd-doped ultra-large water Cerenkov detectors at\nunderground science laboratories~\\cite{multi, beamconfig, Argawalla1}. The design for this search~\\cite{EOI} calls for multiple accelerator sites at varying distances from ultra-large water Cerenkov detector(s). The DAE$\\delta$ALUS cyclotron-based near accelerator is proposed to run with a duty factor between 13\\%~\\cite{Luciano} and 20\\%~\\cite{EOI}, with an average of 1 MW of power in either case. Each 1 MW accelerator will provide $4 \\times 10^{22}$ neutrinos of each flavor per year\nproduced as an isotropic flux within the time window, expected to be about 67~ms out of 500~ms for the near accelerator with the current design~\\cite{janet}. The absolute normalization of the neutrino flux, determined with electron-neutrino elastic scattering ($\\nu_{e}e^{-} \\rightarrow \\nu_{e}e^{-}$) as measured by ultra-large water Cerenkov detector(s), will have a systematic uncertainty of 1\\% with dominant contributions from the cross section and energy scale uncertainties~\\cite{multi}. The statistical uncertainty on the flux depends on the run period, but is expected to be on the order of 1\\% as well. The near accelerator site is envisioned at or near the surface of the laboratory with the other cyclotrons\nlocated many kilometers away. Note that the far sites\nwill produce insignificant coherent rates due to the 1\/$\\mathrm{r}^{2}$ dependence of the flux. However, the near accelerator can provide a\nsignificant event rate, during the 13\\% beam-on time, for detectors\nwhich are sufficiently close and large. Examples of other\nphysics opportunities with this near accelerator are discussed in~Refs.~\\cite{EOI, Argawalla2, Argawalla3}.\n\nIn order to provide realistic calculations, we examine three dark\nmatter experiments which are drawn from the designs of GEODM~\\cite{GEODM}, \nLZ~\\cite{LZ}, and MAX~\\cite{MAX}. These experiments use\ngermanium, xenon, and argon as their targets, respectively. Note that neon is also commonly considered as an alternative target medium in the noble liquid detectors mentioned~\\cite{CLEAN}. We assume that the accelerator and beam dump are located at or near the surface. As GEODM is proposed for the DUSEL 7400~ft level and LZ\/MAX are proposed for the 4800~ft level, we simply consider baseline lengths of 2.3~km and 1.5~km, respectively. The rates for each target are calculated for a ton$\\cdot$year fiducial exposure since the design of each detector is still under consideration.\n\nAs discussed above, the coherent neutrino-nucleus\ninteraction takes place at very low recoil energies. Fig.~\\ref{recoils}\nshows the distribution of recoil energies for a DAR source with \n$^{20}$Ne, $^{40}$Ar, $^{76}$Ge, and $^{132}$Xe. \nThe experimental rates will strongly depend upon the recoil\nenergy threshold for reconstructed events, $T_{min}$. 
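As a rough illustration (not part of the original analysis), the threshold dependence of the coherent yield can be sketched numerically from Eq.~\\ref{coherent}; the snippet below assumes a unit form factor, a single fixed neutrino energy and an illustrative low-$Q$ value of $\\sin^{2}\\theta_{W}$, so it reproduces only the qualitative behaviour rather than the numbers quoted in Table~I.\n\\begin{verbatim}\nimport numpy as np\n\nG_F    = 1.166e-5    # Fermi constant [GeV^-2]\nS2TW   = 0.238       # sin^2(theta_W) at low Q (assumed value)\nHBARC2 = 0.389e-27   # (hbar c)^2 [cm^2 GeV^2]\n\ndef dsigma_dT(T, E_nu, Z, N, M, F2=1.0):\n    # coherent differential cross section; T, E_nu, M in GeV; returns cm^2\/GeV\n    Qw  = N - (1.0 - 4.0 * S2TW) * Z\n    val = G_F**2 \/ (4.0 * np.pi) * Qw**2 * M * (1.0 - M * T \/ (2.0 * E_nu**2)) * F2\n    return np.where(val > 0.0, val, 0.0) * HBARC2\n\ndef sigma_above(E_nu, Z, N, M, T_min=0.0, n=2000):\n    # cross section [cm^2] for recoils above the threshold T_min, fixed E_nu\n    T_max = 2.0 * E_nu**2 \/ M   # kinematic endpoint (valid for M >> E_nu)\n    if T_min >= T_max:\n        return 0.0\n    T = np.linspace(T_min, T_max, n)\n    return np.trapz(dsigma_dT(T, E_nu, Z, N, M), T)\n\n# example: 30 MeV neutrino on 76Ge with a 10 keV recoil threshold\nprint(sigma_above(0.030, Z=32, N=44, M=70.7, T_min=10e-6))\n\\end{verbatim}\nA realistic rate estimate would additionally fold in the full DAR spectrum of Fig.~\\ref{flux}, the nuclear form factor, the detection efficiency and the exposure.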
As the exact values\nof this cut for the various detectors are unknown, we\nconsider five possible values of $T_{min}$. The rates with each cut are\nobtained by summing Eq.~\\ref{coherent} in bins of recoil energy $T$,\nstarting at $T_{min}$. For the aforementioned targets, we find the rates given in\nTable~I, where we assume 100\\% efficiency for detecting events in\nthe time-window above the threshold $T_{min}$. \n\n\n\\begin{table}[t]\n\\label{ratesa}\n \\begin{center}\n {\\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|} \\hline \n Events\/ton\/year & For & \\multicolumn{5}{c|}{$T_{min}$} \\\\ \n at distance & target & 0 keV & $5$ keV & $10$ keV & $20$ keV & $30$ keV \\\\ \\hline\n 1.5 km & $^{40}$Ar & 11.1 & 9.1 & 7.5 & 4.9 & 3.1 \\\\\n &$^{132}$Xe & 36.4 & 16.3 & 6.6 & 1.1 & 0.1 \\\\ \n &$^{76}$Ge$^{\\dagger}$ & 21.9 & 14.6 & 9.4 & 3.5 & 1.4 \\\\ \\hline\n 2.3 km & $^{76}$Ge & 9.3 & 6.2 & 4.0 & 1.5 & 0.6 \\\\ \\hline\n \\end{tabular} \n \\caption{Coherent neutrino scattering events\/ton\/year (with the accelerator running at 1~MW with a 13\\% duty factor) for various detector layouts and thresholds. The rates reported assume 100\\% detection efficiency. $^{\\dagger}$The present plan is for the GEODM ($^{76}$Ge-based) baseline to be 2.3~km--although 1.5~km is included for completeness.}\n}\n\n\\end{center}\n\n\\end{table}\n\n\\begin{figure}[t]\\begin{center}\n\\vspace{-.5cm}\n\\includegraphics[width=2.9in,angle=90]{diffspec.pdf}\n\\end{center}\n\\vspace{-.5cm}\n\\caption{Recoil energy distributions for coherent scattering 1.5~km from a DAR neutrino source for Ne, Ar, Ge, and Xe. The rates reported assume 100\\% detection efficiency.}\\label{recoils}\n\\end{figure}\n\nWe note that the coherent event rates for dark matter detectors at their nominal depths underground are in the 0-35~events\/ton\/year range depending on target, baseline, energy threshold, and unrealistically assuming 100\\% detection efficiency. This is too low to be competitive with presently used neutron sources for detector calibration. Neutrons are adequate for energy calibration, despite their propensity to multiple scatter and activate the detector. Observing excess nuclear recoil events between beam-on and beam-off times would allow a dark matter search to cross-check its sensitivity. Unlike neutrons, neutrinos have a negligible probability of multiply scattering and neutrino interactions would be uniformly distributed throughout the detector; furthermore, unlike a neutron source, the measurement is noninvasive. A coherent neutrino signal would enable the study of the systematics in the expected scaling of the event rate across multiple dark matter experimental targets at the site.\n\n\\subsection{Detection at a Ton-Scale Dark Matter Experiment}\nNext, we consider one of the nuclear targets mentioned above ($^{76}$Ge) in more detail. GEODM is a proposed ton-scale dark matter detector~\\cite{GEODM} based on the cryogenic Ge crystal technology used in the CDMS experiment~\\cite{CDMS}. The target design for GEODM is an array of 300 $\\sim$5~kg Ge crystals operated at 40~mK with a total target mass of $\\sim$1500~kg. Interaction events in an individual crystal produce populations of athermal phonons and electron-hole pairs which are measured by various phonon and ionization sensors lithographically patterned on the crystal surfaces. The ratio of ionization to phonon signals for an event is a powerful discriminator between electron and nuclear recoils. 
The signals also enable precise determination of the position and energy of each event, which allow volume and energy cuts. This information is used to set the number of electron recoils that can pass the cuts and pose as nuclear recoils. This electron recoil ``leakage'' into the nuclear recoil band constitutes one source of background events. There is also a background from muon- and radiogenic-induced neutrons, which is controlled through the use of radio-pure materials and passive and active shields to be $< 0.15$~events\/ton\/year. \n\nThe efficiency of GEODM depends critically on the cuts used to achieve a target leakage background. This is largely a function of the future detector's performance, which we need to estimate. In light of this, we consider a ``baseline\" scenario and an ``optimistic\" scenario for the detector parameters, summarized in Table~II. Fig.~\\ref{efficiency} shows the recoil energy distribution in a detector with two hypothetical efficiency curves as a function of recoil energy. The baseline scenario has a 10~keV nuclear recoil energy threshold with the efficiency rising linearly to 0.3 at 20~keV. We additionally assume an energy resolution near threshold of 300~eV and a fiducial mass uncertainty of 5\\%. These parameters are consistent with the performance of the Ge detectors used in the CDMS II experiment~\\cite{CDMSIIEff}. The optimistic scenario assumes that refinement of the detectors improves all of these parameters. In the optimistic scenario, we assume a 5~keV nuclear recoil energy threshold and an efficiency that rises linearly to 0.6 at 20~keV. We also assume an energy resolution near threshold of 50~eV and a fiducial mass uncertainty of 1\\%. The optimistic scenario parameters are anticipated to be achieved by detectors in the SuperCDMS experiment. In all cases, unless otherwise specified, we assume a raw exposure of 4.5 ton$\\cdot$year before the efficiency curve is applied. We also assume a leakage background of 1 event per 4.5 ton$\\cdot$year raw exposure with a spectrum of $e^{-E_\\textrm{recoil} \/ 10\\textrm{ keV}}$, and a neutron background of 0.15~events per ton$\\cdot$year of exposure with a spectrum of $e^{-E_\\textrm{recoil} \/ 50\\textrm{ keV}}$ before the efficiency curve is applied. Events are required to have recoil energies less than 100~keV to be considered nuclear recoils, so we do not use any background events with energies exceeding 100~keV. This neutron spectrum is convolved with each efficiency curve and scaled by the exposure to obtain the expected distribution of neutron events for each scenario. For the baseline scenario we expect 0.13 total neutron events, and for the optimistic scenario we expect 0.29 total neutron events with a 4.5 ton$\\cdot$year exposure. \n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[width=3.2in]{efficiency.pdf}\n\\end{center}\n\\vspace{-.7cm}\n\\caption{Recoil energy distribution for coherent neutrino scattering on Ge at 2.3~km from a DAR neutrino source, before and after the effect of two hypothetical efficiency curves.}\\label{efficiency}\n\\end{figure}\n\n\n\\begin{table}[t]\n\\label{base_opti}\n \\begin{center}\n {\\footnotesize\n \\begin{tabular}{|c|c|c|} \\hline \n &Baseline & Optimistic \\\\ \\hline\n Threshold & 10~keV & 5~keV \\\\\n Efficiency & See Fig. \\ref{efficiency} & See Fig. 
\\ref{efficiency} \\\\ \n Energy resolution near threshold&300~eV& 50~eV\\\\\n Neutron background events\/(4.5~ton$\\cdot$year)&0.13&0.29 \\\\\n Surface background events\/(4.5~ton$\\cdot$year)&1.0 &1.0 \\\\\n Fiducial mass uncertainty&5\\%&1\\% \\\\ \\hline\n\n \\end{tabular} \n \\caption{The GEODM detector scenarios considered. The neutron background expectation is after efficiency corrections.}\\label{GEODMscenarios}\n}\n\\end{center}\n\n\\end{table}\n\nUsing the baseline efficiency and energy threshold, we find that a coherent neutrino rate of 0.8~events\/ton\/year is expected in a $^{76}$Ge-based detector at a baseline of 2.3~km. GEODM expects an overall leakage rate of about 1~background surface leakage event with a 4.5~ton$\\cdot$year exposure, obtained by adjusting efficiency\/fiducial volume cuts to get to that number. In addition, there is a neutron-induced nuclear recoil background (absolute rate convolved with detection efficiency) of 0.13 events\/(4.5~ton$\\cdot$year) and 0.29 events\/(4.5~ton$\\cdot$year) for the baseline and optimistic scenarios, respectively. The total expected background rate during the 13\\% of beam-on time would therefore be 0.15~events\/(4.5~ton$\\cdot$year) for the baseline and 0.17~events\/(4.5~ton$\\cdot$year) for the optimistic scenario. The assumptions for GEODM as a coherent neutrino detector are shown in Table~III. Based on these considerations and assuming no WIMP ``background\", one can see that an experiment like GEODM could find evidence for coherent scattering in a $\\sim$4.5~ton$\\cdot$year exposure with 3-4~signal events above a background expectation of 0.15~events. The probability for 4~observed events to be completely due to background, with a background expectation of 0.15 events, is $\\sim$0.002\\%. The signal rate and evidence\/discovery timeline would be quickly improved in the case that the baseline efficiency estimate, especially in the low energy region (and possibly below 10~keV), is too conservative. With the optimistic energy threshold and efficiency scenario, we expect a coherent rate of 2.0~events\/ton\/year. Under the same background assumption as above, we find that GEODM will discover coherent scattering in a $\\sim$2~ton$\\cdot$year exposure with 4~signal events above a background expectation of 0.07 events. The probability for 4~observed events to be completely due to background, with a background expectation of 0.07 events, is $\\sim$0.00009\\%. \n\nIt is worth noting that the absolute (100\\% efficiency and 100\\% on-time) solar coherent neutrino interaction rate on a $^{76}$Ge target above 5~keV is expected to be 0.079~events\/ton\/year~\\cite{gutlein}. 
In either efficiency scenario and with 13\\% beam-on time, the solar coherent ``background\" rate is negligible.\n\n\n\n\\begin{table}[t]\n\\label{deep_params}\n \\begin{center}\n {\\footnotesize\n \\begin{tabular}{|c|c|}\n \\hline\n \\multicolumn{2}{|c|}{GEODM Assumptions} \\\\\n \\hline \n Scenarios considered& ``Baseline\" and ``Optimistic\" \\\\\n $\\nu$ source& $4\\times10^{22}$ $\\nu$\/flavor\/year w\/ 13\\% duty factor \\\\ \n $\\nu$ flux uncertainty & 2\\% \\\\ \n Distance from $\\nu$ source&2.3~km\\\\ \n Exposure&4.5~ton$\\cdot$year \\\\ \\hline\n\n \\end{tabular} \n \\caption{The assumptions used in the text for coherent neutrino detection with GEODM deep underground.}\n}\n\\end{center}\n\n\\end{table}\n\n\n\n\\subsection{The effect of the coherent background on WIMP sensitivity}\n\\label{sec:WIMPsearch}\n\nSince coherent neutrino scattering is an irreducible background for WIMP searches, the presence of a neutrino source near a dark matter experiment will reduce the sensitivity of the experiment. In calculating a WIMP-nucleon cross section limit, one can either use data from only the 87\\% of the time when the neutrino source is off or data from the 87\\% beam-off time and the 13\\% beam-on time. Since background events reduce the sensitivity of a limit in the optimum interval method~\\cite{OptimumInterval}, using the combined exposure may result in a worse limit than using only the beam-off exposure. Fig. \\ref{Fig:WIMPLimitsBaseline} and Fig. \\ref{Fig:WIMPLimitsOptimistic} present a comparison of limits for the baseline and optimistic GEODM detector scenarios. The limits assume a GEODM-style detector with a raw exposure of 4.5 ton$\\cdot$year before efficiency cuts and the same assumptions as in the previous section. In particular, we use the hypothetical efficiency curves shown in Fig.~\\ref{efficiency}, a neutron rate of $0.15$~events\/ton\/year before efficiency convolution, and a total surface-event leakage of 1 event per 4.5 ton$\\cdot$year raw exposure. We use the recoil energy distribution from Fig.~\\ref{efficiency} for neutrino events in the 13\\% beam-on time and a detector at 2.3 km from a DAR neutrino source.\n\nUsing these parameters, we randomly generate 100 realizations of the background events and compute the average of the upper limits on the WIMP-nucleon cross section for each. In this study, the beam-off data actually has greater sensitivity than the combined beam-on and beam-off data. Using the beam-off data with 87\\% exposure only, the mean upper limit at maximum sensitivity is $4.0 \\cdot 10^{-47}$cm$^2$ (baseline) or $1.9 \\cdot 10^{-47}$cm$^2$ (optimistic). The combined data give a limit of $5.5 \\cdot 10^{-47}$cm$^2$ (baseline) or $3.6 \\cdot 10^{-47}$cm$^2$ (optimistic). The results for the 87\\% exposure obviously also have worse sensitivity than 100\\% exposure without any beam-on time. 
If there were no beam-on time, the upper limit at maximal sensitivity would be $3.3 \\cdot 10^{-47}$cm$^2$ (baseline) or $1.7 \\cdot 10^{-47}$cm$^2$ (optimistic).\n\n\\begin{figure}[htp!]\\begin{center}\n\\subfigure[ Cross section limit assuming ``baseline\" GEODM efficiency.]{\\label{Fig:WIMPLimitsBaseline}\\includegraphics[width=3.4in]{LimitPlotBaseline.pdf}}\n\\subfigure[ Cross section limit assuming ``optimistic\" efficiency.]{\\label{Fig:WIMPLimitsOptimistic}\\includegraphics[width=3.4in]{LimitPlotOptimistic.pdf}}\n\\end{center}\n\\caption{Expected average limits for the WIMP-nucleon cross section for a 4.5~ton$\\cdot$year exposure, assuming no WIMP signal and calculated using the optimum interval method. All limits include neutron and surface event leakage background events. The `13\\% exposure w\/ $\\nu$' limit also includes background events from coherent neutrino scattering, while `13\\% exposure w\/ no $\\nu$', `87\\% exposure w\/ no $\\nu$', and `100\\% exposure w\/ no $\\nu$' do not. The `Combined 87\\% w\/ no $\\nu$ + 13\\% w\/ $\\nu$' limit is obtained by combining the events and exposures from the `13\\% exposure w\/ $\\nu$' limit and the `87\\% exposure w\/no $\\nu$' limit, and treating it as a single experiment. Each limit is calculated 100 times with background events randomly drawn from their distributions. The resulting limits are averaged, and the averages are shown in this figure.}\n\\end{figure}\n\n\n\nNote that when the number of background events is zero, the sensitivity to WIMPs scales as the reciprocal of the exposure. This occurs because the expected number of events in an experiment is proportional to the product of the cross section $\\sigma$ and the exposure $E$,\n\\begin{equation}\\label{eqn:CrossSectionScaling}\n\\mu \\propto \\sigma E.\n\\end{equation}\nA limit at confidence level $C$ is obtained by determining the expected number of events $\\mu$ such that there is a probability $1-C$ of observing zero events. Assuming Poisson statistics, the expected number of events is $\\mu = -\\log (1-C)$. It is thus apparent from Eq.~(\\ref{eqn:CrossSectionScaling}) that the cross section limit $\\sigma$ corresponding to confidence level $C$ scales as the reciprocal of the exposure $E$. In the presence of a nonzero background that is proportional to exposure, the limit will scale more slowly than the reciprocal of the exposure. Fixed backgrounds set by fiducial cuts may produce more complicated scaling behavior.\n\n\n\\section{Physics with a Detector Close to the Neutrino Source}\n\\label{sec:closedetector2}\nA suitable detector within tens of meters of the stopped pion source could gather a rather large sample of elastic neutrino coherent scatters for physics studies. The 300~ft adit at DUSEL could provide direct tunnel access to an envisioned experimental site for such a detector. The cyclotron would then be located just outside the tunnel in a building against the cliff face. As discussed earlier, a coherent cross section measurement is sensitive to a number of physics possibilities. For simplicity, we consider a flux-integrated total cross section measurement as our figure of merit. 
However, the shape of the cross section as a function of energy is also interesting, especially in the case that a measured total cross section is inconsistent with expectation.\n\nAlthough many low-threshold nuclear-recoil-sensitive detector technologies\ncould work (including a noble liquid detector~\\cite{CLEAR,CLEAR2} or other dark matter detector technology), for concreteness we consider the specific example of a set of GEODM-derived detectors with 16.7~kg raw mass, 20~m away from the stopped-pion neutrino source. With the optimistic efficiency estimate (Figure~\\ref{efficiency}) and a fiducial mass of 10~kg, we expect a detected coherent rate of 0.74~events\/(10~kg$\\cdot$day) within the 13\\% beam-on time window. The background rate design goal for such an experiment would be $<$0.1~events\/(10~kg$\\cdot$day) within the same window. This rate seems reasonable with $\\sim$300~ft of rock shielding along with modest passive\/active shielding immediately surrounding the detector for prompt cosmic-ray-induced background attenuation\/tagging. The radiogenic background is assumed negligible, at a rate consistent with GEODM deep underground. The uncertainty on the non-beam related background estimate will easily be statistics-dominated with an $in-situ$ background measurement during beam-off. We also assume that there are no background events from neutrons produced by the DAR neutrino source at a 20~m baseline with 17~m rock and 3~m Fe shielding. This assumption was justified by performing a Geant4~\\cite{geant} simulation of an isotropic neutron source along a 20~m baseline, fitting the flux at various distances from the neutron source to the functional form\n\\begin{equation*}\nF(z) = \\frac{A e^{-z\/\\lambda}}{z^2},\n\\end{equation*}\nwhere $A$ and $\\lambda$ are fit parameters and $z$ is the distance along the baseline. The rate from this fit was extrapolated to 20~m distance and a 50~kg$\\cdot$year exposure, and the number of neutron events was found to be negligible. The neutron flux and spectrum used were taken from the SNS source, which is similar to the DAR source discussed here \\cite{NuSNSproposal}.\n\n\\begin{table}[t]\n\\label{close_params}\n \\begin{center}\n {\\footnotesize\n \\begin{tabular}{|c|c|} \n \\hline \n \\multicolumn{2}{|c|}{GEODM Module Close to the $\\nu$ Source Assumptions} \\\\\n \\hline \n Scenario considered& ``Optimistic\" \\\\ \n Source& $4\\times10^{22}$ $\\nu$\/flavor\/year w\/ 13\\% duty factor \\\\ \n $\\nu$ flux uncertainty & 2\\% \\\\ \n Distance from $\\nu$ source&20~m\\\\ \n Exposure&50~kg$\\cdot$year \\\\ \n Background rate & 0.1 events\/(10~kg$\\cdot$day) in beam window \\\\\\hline\n\n \\end{tabular} \n \\caption{The assumptions used in the text for coherent neutrino detection with a GEODM module close to the $\\nu$ source.}\n}\n\\end{center}\n\n\\end{table}\n\n\nA 50~kg$\\cdot$year exposure with the previously described experimental design would yield about 1350 coherent events, assuming a coherent cross section consistent with the Standard Model. With an optimistic 1\\% uncertainty on the target mass, 0.1 background events\/(10~kg$\\cdot$day) with statistical-only error, 2\\% absolute flux normalization uncertainty, and a 0.5\\% uncertainty on the energy resolution near threshold, a flux-averaged total cross section measurement with $<$5\\% (statistical and systematic) uncertainty would be achieved. 
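As an aside (again not part of the original analysis), the extrapolation of the simulated neutron flux to the 20~m baseline described above can be sketched with a least-squares fit of the quoted functional form; the distances and flux values below are placeholders rather than the actual Geant4 output.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef attenuated_flux(z, A, lam):\n    # point source with exponential attenuation: A * exp(-z\/lam) \/ z^2\n    return A * np.exp(-z \/ lam) \/ z**2\n\nz_sim    = np.array([2.0, 4.0, 6.0, 8.0, 10.0])                # distance [m], placeholder\nflux_sim = np.array([9.2e-2, 8.5e-3, 1.4e-3, 2.9e-4, 6.7e-5])  # arbitrary units, placeholder\n\npopt, pcov = curve_fit(attenuated_flux, z_sim, flux_sim, p0=(1.0, 2.0))\nprint(attenuated_flux(20.0, *popt))  # extrapolated flux at the 20 m baseline\n\\end{verbatim}\nScaling the extrapolated flux by the detector exposure then gives the expected number of beam-related neutron events.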
The assumptions that went into the event rate and cross section measurement uncertainty estimates are summarized in Table~IV.\n\n\n\\section{Conclusions}\n\\label{sec:conclusion}\nCoherent elastic neutrino-nucleus scattering has never been observed. Relevant for supernova evolution, supernova-burst neutrino detection, probing non-standard neutrino interactions, and measuring $\\sin^{2}\\theta_{W}$ with neutrinos at low-$Q$, among other topics~\\footnote{Coherent scattering has even been envisioned as a nuclear reactor monitoring tool~\\cite{reactor1}\\cite{reactor2}.}, the process is very well predicted by the Standard Model and confirmation of the $\\sim$5\\% precision theoretical cross section prediction is needed.\n\nDark matter detectors can double as coherent neutrino scattering experiments as the products of WIMP and coherent scattering interactions are predicted to be nearly identical. In the case of a decay-at-rest neutrino source and a suite of dark matter experiments at the same site, the deep underground detectors there would merely need to receive a beam timing signal in order to participate in the coherent search. Furthermore, these detectors would receive a free dark matter detection consistency check in the form of non-WIMP rare events in a well-known time window--all with a modest cost to the WIMP analysis\/exposure. The power of this consistency check is strongly dependent on the number of coherent neutrino events collected, a value which is expected to be fairly low in most configurations. In both optimistic and baseline detection scenarios, the best limit on the WIMP-nucleon cross section uses only data from the period when the DAR source is off. In the optimistic scenario, the cross section limit is only about 12\\% weaker than if no neutrino source were present. \n\nA coherent neutrino interaction discovery in GEODM could be achieved with a 2~ton$\\cdot$year exposure. About 2.0 detected coherent neutrino events\/ton\/year over a background of 0.03~events\/ton\/year are expected in a GEODM-style detector at a 2.3~km baseline, given optimistic assumptions for energy threshold and detection efficiency. Even in a conservative (baseline) scenario, with energy threshold and efficiency reasonably consistent with CDMS~II, evidence for coherent neutrino scattering could be obtained with a 4.5~ton$\\cdot$year exposure. In addition, a 10~kg fiducial mass GEODM-derived detector brought within 20~m of the neutrino source could collect about 1350 events with a 50~kg$\\cdot$year exposure. Such a sample would be good for a $<$5\\% flux-averaged total cross section measurement uncertainty and significant tests of the Standard Model.\n\n\n\n\\begin{center}\n{ {\\bf Acknowledgments}}\n\\end{center}\n\nThe authors thank Jocelyn Monroe for discussions, and Chuck Horowitz\n for providing the form factors for use in our calculations. J.~S. thanks Bonnie Fleming for support. We thank the National Science\nFoundation for support. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{(di)leptons and heavy flavors: what is different at the LHC}\nOne of the most exciting aspects of heavy ion collisions at the LHC is \nthe abundant production rate of heavy flavors, which can be used, for the \nfirst time, as high statistics probes of the \nmedium~\\cite{Carminati:2004fp,Bedjidian:2003gd}.\nThis allows the use of a large variety of new observables.\nThe magnitude of most of the in-medium effects is expected to be dramatically \nenhanced. \nSome of these aspects are discussed hereafter.\n\n\\begin{itemize}\n\\item{{\\bf Large primary production cross-sections:}\nThe number of $c\\bar{c}$ ($b\\bar{b}$) pairs produced in central $AA$ \ncollisions at the LHC is expected to be 10 (100) times larger than at RHIC.\nTherefore, at the LHC both charmonia and bottomonia can be used, thus \nproviding powerful probes for Quark Gluon Plasma (QGP) studies.\nIn fact, since the $\\Upsilon(1S)$ state is expected to dissolve only\nat temperatures significantly above the \ncritical temperature~\\cite{Digal:2001ue,Wong:2004zr},\nwhich might only be reachable at the LHC, the spectroscopy of the \n$\\Upsilon$ family should reveal unique characteristics of the \nQGP~\\cite{Gunion:1996qc};}\n\\item{{\\bf Large resonance dissociation rate:}\nIn addition to nuclear absorption, comoving hadrons and color screening, \nquarkonia can be significantly destroyed by gluon \nbombardment~\\cite{Xu:1995eb}.\nThis mechanism, which results from the presence of quasifree gluons, starts \nbeing effective for temperatures above the critical temperature but \nnot necessarily above the resonance dissociation temperature\nby color screening.\nIt is expected to be relatively important at the LHC.\nIndeed, recent estimates~\\cite{Bedjidian:2003gd} \nof the dissociation cross-sections show that none of \nthe prompt $J\/\\psi$ would survive the deconfined phase at the LHC\nand that about 80\\% of the $\\Upsilon$ would be destroyed;}\n\\item{{\\bf Large charmonium secondary production:}\nBesides indirect charmonia production from $b$-hadron \ndecay~\\cite{Eidelman:2004wy} (see below), an important yield of \nsecondary charmonia is expected from $D\\bar{D}$ annihilation~\\cite{Ko:1998fs},\nstatistical hadronization~\\cite{Braun-Munzinger:2000px}\nand kinetic recombination~\\cite{Thews:2000rj}.\nThe last two processes explicitly assume the formation of a deconfined medium.\nThe underlying picture is that charmonium resonances form by coalescence of \nfree $c$ and $\\bar{c}$ in the QGP~\\cite{Thews:2000rj} or at the \nhadronization stage~\\cite{Braun-Munzinger:2000px}.\nAccording to these models, the signature of the QGP should lead to an \nincrease of the $J\/\\psi$ yield versus centrality, proportional to \n${\\rm N}^2_{c\\bar{c}}$, instead of a suppression\\footnote{Note that, due\nto the large number of $c\\bar{c}$ pairs produced in central heavy ion \ncollisions at LHC, these models predict a spectacular enhancement of \nthe $J\/\\psi$ yield, up to a factor 100 in central collisions, relative to the \nprimary production yield~\\cite{Bedjidian:2003gd,Andronic:2003zv}.};}\n\\item{{\\bf Complex structure of (di)lepton spectra:}\nWith a low $p_{\\rm t}$ threshold of about 2~GeV\/c on the decay leptons, \nunlike-sign dileptons from bottom decay dominate the correlated dilepton \ncomponent over the whole mass range.\nWhereas in the high invariant mass region each lepton comes from the direct \ndecay of a $B$ meson, in the low invariant mass region both leptons \ncome from the decay of a single $B$ meson via a $D$ 
meson.\nNext-to-leading order processes such as gluon splitting also populate \nsignificantly the low mass dilepton spectrum due to their particular \nkinematics~\\cite{Norrbin:2000zc}.\nThen, a sizeable yield of like-sign correlated dileptons from bottom \ndecay is present. \nThis contribution arises from the peculiar decay chain of $b$ hadrons and \nfrom $B$-meson oscillations.\nThe single lepton spectra are also subject to significant novelties. \nThe most striking one is the emergence of the $W^\\pm$ bosons as a bump \nlocated at around $30~{\\rm GeV\/c}$ in the single lepton $p_{\\rm t}$ \ndistributions~\\cite{zaida}.}\n\\end{itemize}\n\n\n\\section{Selected physics channels}\nALICE (A Large Ion Collider Experiment) is the LHC experiment\ndedicated to the study of nucleus-nucleus collisions.\nThe detector consists of a central barrel ($|\\eta|<0.9$), a forward muon\nspectrometer $(2.5<\\eta<4$) and several forward\/backward and central small \nacceptance detectors~\\cite{Carminati:2004fp,Hans-Ake}.\n(di)leptons will be measured in ALICE through the electron channel in the\ncentral region and through the muon channel in the forward region.\nSelected physics channels are presented below.\n\n\\begin{itemize}\n\\item{{\\bf $\\Upsilon^\\prime\/\\Upsilon$ ratio versus $p_{\\rm t}$:} \nThe $p_{\\rm t}$ suppression pattern of a resonance is a consequence of\nthe competition between the resonance formation time and the QGP \ntemperature, lifetime and spatial extent~\\cite{Blaizot:1987ha}.\nQuarkonium suppression is expected also as the result of nuclear effects \nlike shadowing and absorption.\nIn order to isolate pure QGP effects, it has been proposed to study the \n$p_{\\rm t}$ dependence of quarkonium ratios instead of single quarkonium \n$p_{\\rm t}$ distributions.\nBy doing so, nuclear effects cancel out, at least in the $p_{\\rm t}$ \nvariation of the ratio.\nFollowing the arguments of~\\cite{Gunion:1996qc}, \nthe capabilities of the ALICE muon spectrometer to measure the \n$p_{\\rm t}$ dependence of the $\\Upsilon^\\prime\/\\Upsilon$ ratio in central \n(10\\%) Pb-Pb collisions have been investigated~\\cite{ericTHESIS}.\nTwo different QGP models with different system sizes were considered.\nThe results of the simulations show that, with the \nstatistics collected in one month of data taking, the measured \n$\\Upsilon^\\prime\/\\Upsilon$ ratio \nexhibits a strong sensitivity to the characteristics of the QGP;}\n\\item{{\\bf Secondary $J\/\\psi$ from $b$-hadron decay:} \nAs stated above, a large fraction of $J\/\\psi$ arises from $b$-hadron \ndecay~\\footnote{In central (5\\%) Pb+Pb collisions at \n$\\sqrt{s} = 5.5~{\\rm TeV}$, \n${\\rm N}(b\\bar{b}\\rightarrow J\/\\psi)\/{\\rm N}({\\rm direct}~J\/\\psi) \\sim 20\\%$\nin $4\\pi$ (with shadowing and feed-down but without nuclear \nabsorption)~\\cite{Bedjidian:2003gd}.}. 
\nThese secondary $J\/\\psi$, which are not QGP suppressed, must be subtracted \nfrom the measured $J\/\\psi$ yield prior to $J\/\\psi$ suppression studies.\nThey can be identified by exploiting the large lifetime of $b$ hadrons \nwhich results in a finite impact parameter for the decay leptons of secondary \n$J\/\\psi$.\nSimulations have shown that such measurements can successfully be performed\nwith dielectrons measured in the central part of ALICE \nthanks to the excellent spatial resolution of the Inner Tracking \nSystem~\\cite{TRDTP};}\n\n\\item{{\\bf Open heavy flavors:} \nThe open heavy flavor cross-section can be measured by means of several \nchannels: low-mass and high-mass unlike-sign dileptons~\\cite{Rachid}, \nsingle lepton $p_{\\rm t}$ distributions~\\cite{Rachid,Padova}, \nlike-sign dileptons~\\cite{Crochet:2001qd}, single leptons with displaced \nvertices~\\cite{TRDTP,Padova}, \nsecondary $J\/\\psi$ from $b$-hadron decay~\\cite{TRDTP} and electron-muon\ncoincidences~\\cite{TRDTP}.\nRecently, a measurement of the differential inclusive $b$-hadron cross-section \nhas been investigated in the electron channel~\\cite{Padova} with a technique \ndeveloped in $p\\bar{p}$ collisions~\\cite{Albajar:1988th} \nand adapted to heavy ion collisions in the ALICE-muon channel~\\cite{Rachid}.\nThe results presented in Figure~\\ref{fig} (left) show that the $b$-hadron \ncross-section\ncan be reconstructed up to $p_{\\rm t}^{b\\;{\\rm hadron}} = 30~{\\rm GeV\/c}$.\nSensitivity to the $b$-quark energy loss is evidenced such that\nthe nuclear modification factors, which can be simultaneously measured for\nlight hadrons, for $D^0$~\\cite{Dainese:2003wq} and for $b$ hadrons should \nprovide a set of powerful tools to investigate the mass dependence of \nthe energy loss;}\n\\vspace*{-0.4cm}\n\\begin{figure}[hbt]\n \\centering{\\epsfig{file=fig.epsi,width=0.84\\linewidth}}\n\\vspace*{-0.8cm}\n \\caption{Left: differential inclusive $b$-hadron cross-section reconstructed \n from single electrons with displaced vertices\n in central (5\\%) Pb+Pb collisions~\\cite{Padova,PPRV2}.\n The results are shown without and with $b$-quark energy loss according \n to~\\cite{Armesto:2005iq}.\n Right: centrality dependence of the $\\Upsilon\/b\\bar{b}$ ratio in Pb+Pb \n collisions in the muon channel. The ratio is shown without (dots) \n and with (squares) $\\Upsilon$ \n nuclear absorption as well as with nuclear absorption and \n melting by color screening (triangles and circles) with dissociation \n temperatures taken from~\\cite{Wong:2004zr}.\n Taken from~\\cite{PPRV2,SmbatPriv}.}\n \\label{fig}\n\\end{figure}\n\\vspace*{-0.9cm}\n\\item{{\\bf Centrality dependence of the $\\Upsilon\/b\\bar{b}$ ratio:} \nIf the $b$-quark energy loss turns out to be negligible, the $b$-hadron \ncross-section can be used as a normalisation for $\\Upsilon$ suppression\nstudies.\nThis normalisation is the most natural normalisation because of \nthe similar production processes for open and hidden heavy flavors.\nFigure~\\ref{fig} (right) shows the measurement that can be achieved \nin one month of Pb beams.\nThe triangles and circles illustrate the typical sensitivity\nof the ratio to $\\Upsilon$ melting by color screening.\nNote that the statistical uncertainty of the ratio is dominated by the \nstatistics of the probe (i.e. 
the number of $\\Upsilon$s) and not \nby the statistics of the reference.\nIndeed, the number of correlated unlike-sign muon pairs from bottom decay\nin the mass range ${\\rm M}_{\\mu\\mu} > 5~{\\rm GeV\/c}^2$ is larger than that \nof the $\\Upsilon$ by a factor 5.}\n\\end{itemize}\n\\section{Summary}\n(di)lepton measurements with the ALICE detector at the LHC will bring an \nunprecedentedly rich physics program in the heavy flavor sector of heavy ion \ncollisions.\nIn addition to the channels discussed here, further exciting possibilities \nshould be opened with, for example, quarkonia polarization and dilepton \ncorrelations.\n\\section*{Acknowledgments}\nPart of this work was supported by the EU Integrated Infrastructure\nInitiative HadronPhysics Project under contract number\nRII3-CT-2004-506078. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nConsiderable observational evidence has built up over the past few years that\na substantial\nfraction of the massive galaxies around us today were already massive at very early epochs.\nThis evidence comes primarily from three sources:\n\\begin{itemize}\n\\item Studies of local massive elliptical galaxies indicate that the stars in the most\nmassive galaxies generally formed very early and over very short time intervals\n\\citep{pee02,tho05,nel05,ren06}. Stars\nin less massive spheroids formed, on average, later and over longer time spans.\n\\item Massive galaxies in clusters show little evidence for significant evolution up to at\nleast redshift $\\sim1$ \\citep[{\\it e.g.,}][]{deP07,sca07}.\n\\item Direct observations of massive galaxies at redshifts $\\gtrsim1.5$ that are dominated\nby already old stellar populations show that significant numbers of massive galaxies\nwere in place at even earlier epochs \\citep*[{\\it e.g.,}][]{sto04,mcC04,vanD04,lab05,dad05,\nred06,pap06,kri06,abr07}.\n\\end{itemize}\n\nAlthough the existence of massive galaxies at high redshifts is now well documented, there have\nbeen only a few high-resolution studies of their morphologies (e.g., \\citealt{yan03,sto04,zir07,tof07}). \nMorphologies are important, because they may well \nretain signs of the formation history of the galaxies. This is particularly true for galaxies\nthat show little or no recent star formation, so that we are able to observe relatively clean\nexamples of the stellar population that formed earliest and that comprises the bulk of the \nmass of the galaxy.\nIn this paper, we present deep {\\it Hubble Space Telescope} ({\\it HST}) NICMOS imaging \nof two galaxies with virtually\npure old stellar populations at $z\\sim2.5$. In \\S~2, we briefly recount how these galaxies were\nselected. In \\S~3, we describe the observations and reduction procedures. In \\S~4 and\n\\S~5, we\nanalyze model fits to the images to determine morphologies, and in \\S~6 we discuss the implications\nof our conclusions. We assume a flat cosmology with $H_0 = 73$ km~s$^{-1}$~\\ Mpc$^{-1}$ and\n$\\Omega_M = 0.28$.\n\n\\section{Identifying Galaxies with Old Stellar Populations at High Redshifts}\n\nOur procedure for selecting galaxies with old stellar populations is described in some detail\nin \\citet{sto04}; here we give a brief synopsis. We observe fields of radio sources in certain\nspecific redshift ranges, selecting galaxies with photometric redshifts consistent with that\nof the radio source. Radio sources generally serve as beacons for some of the more\noverdense regions in the early universe. Furthermore, the specific redshift ranges \nselected are chosen to optimize discrimination with standard filter passbands \nbetween old stellar populations and highly reddened star-forming galaxies. One of these redshift\nranges is $2.3<z<2.7$, for which the 4000 \\AA\\ break, strong in old stellar populations,\nfalls between the $J$ and $H$ bands. We have used the \\citet{bru03} (BC03) spectral synthesis \nmodels, and, more recently, preliminary versions of the \\citet{cha07} (CB07) models, to\nevaluate and optimize our photometric selection of old stellar populations at various\nredshifts. The preliminary CB07 models include more realistic prescriptions for thermally pulsing \nasymptotic-giant-branch stars (\\citealt{mar07}; see also \\citealt{mar05}). 
Although at low redshifts\n(and for some SEDs at high redshifts) the new models can significantly lower the masses estimated \nfrom $K$-band photometry, at the redshifts we are considering here for nearly pure old stellar\npopulations, the masses (and ages) change hardly at all. The main effect of using the newer\nmodels is to reduce the amount of reddening required to obtain a good fit.\n\nIf a stellar population were to have an age of\n2 Gyr at $z=2.5$ (corresponding to all of the stars forming at $z=9$), its observed colors \nwould be $J\\!-\\!K\\approx3.0$ and $J\\!-\\!H\\approx2.1$. We use a photometric sieve procedure to optimize\nthe selection with respect to available observing time, first obtaining relatively short $J$ and\n$K'$ integrations (typically 5 $\\sigma$ at $J=23$ and $>10$ $\\sigma$ at $K'=20$). If any objects\nwith $J\\!-\\!K'\\sim3$ are found, we then obtain $H$ and deeper $J$ imaging. Finally, for fields\nwith objects matching the expected spectral-energy distributions of an old stellar population\nat the redshift of the radio source, we attempt to obtain deep imaging at shorter wavelengths\n(usually either $R$ or $I$) to set constraints on any residual star formation.\n\nAmong the galaxies found by this technique are one each in the fields of the radio galaxy\n4C\\,23.56 \\citep{sto04} and the quasar 4C\\,05.84. \nWe refer to these galaxies as 4C\\,23.56\\,ER1\\ and 4C\\,05.84\\,ER1; they are both luminous objects,\nand they have stellar populations that appear to be overwhelmingly dominated by old stars.\n\n\\section{Observations and Data Reduction}\n\n\\subsection{Ground-Based Optical and Near-IR Observations}\n\nWe obtained most of the near-IR observations ($J$, $H$, and $K'$) with the CISCO IR camera\n\\citep{mot02} on the 8.2 m Subaru Telescope \\citep{iye04} in observing runs on \n2000 November 8 (UT),\n2001 August 5 and 6, and 2002 May 30--June 1. The images have a scale of\n0\\farcs105 pixel$^{-1}$ and a field of $\\sim1\\farcm8$. In addition, we carried out deep $R$-band\nimaging of both fields with the Echelle Spectrograph and Imager (ESI;\n\\citealt{she02}) on the Keck II Telescope on 2002 August 7. Both the IR and optical\nimaging were reduced according to standard procedures using our own IRAF scripts.\nThe calibrations used observations of UKIRT Faint Standards \\citep{haw01,leg06}\nfor the IR photometry and Landolt fields \\citep{lan92} for the $R$-band imaging. \n\nWe also observed 4C\\,23.56\\,ER1\\ at $K'$ with the Subaru 36-element\ncurvature-sensing adaptive optics (AO) system \\citep{tak04} \nand the Infrared Camera and Spectrograph\n(IRCS; \\citealt{kob00}) on 2002 August 17. These results were reported by\n\\citet{sto04}, but we will refer to them again in this paper. We used IRCS without\nthe AO system, but with excellent natural seeing (final images have FWHM of 0\\farcs35)\nto obtain a very deep image of the 4C\\,05.84 field in the $K$ filter on 2004 August 1.\nFinally, we obtained $J$-band imaging of the 4C\\,05.84 field with NIRC2 and the Keck II\nlaser-guide-star adaptive-optics system on 2007 August 21.\n\n\\subsection{{\\itshape Hubble Space Telescope} NICMOS Observations}\n\nThe NICMOS observations used the NIC2 camera (0\\farcs075 pixel$^{-1}$) and the\nF110W and F160W filters. 
They were obtained on UT 2005 January 3 (4C\\,05.84\\,ER1, F160W, \ntotal exposure 5376 s), 2005 January 4 (4C\\,23.56\\,ER1, F160W, total exposure 8192 s), \n2005 January 8 (4C\\,05.84\\,ER1, F110W, total exposure 8448 s), and 2005 May 16\n(4C\\,23.56\\,ER1, F110W, total exposure 11264 s) as part of {\\it HST} program 10418. \nAfter doing a first-pass combination\nof the images to get a rough idea of the quality of the data, we went back to the\n{\\it calnica} processed images and corrected these for bias offsets and inverse\nflatfield effects using the STSDAS {\\it pedsky} task. Most of the F110W images\nwere obtained in orbits impacted by passages through the South Atlantic Anomaly\n(SAA) and needed special processing. We used the IDL routine saa\\_clean.pro\n\\citep{ber03} to generate an image of the persistence from the routinely taken\npost-SAA dark images and subtract it from the science images. Finally,\nfor these images, {\\it pedsky} was run again to remove any residual bias\npedestals.\n\nWe then generated a bad-pixel\nmask from the data quality file, adding an additional mask for the coronagraphic\nocculter, which produces background that is detected in both filters. Most of\nthe cosmic rays were removed with the contributed IRAF procedure {\\it lacos\\_im}\n\\citep{vanD01}. At this point the images from the individual dither positions were\ncombined onto a subsampled grid with the STSDAS {\\it drizzle} task \nto produce the final image. To choose the optimum {\\it drizzle} parameters for\nour purposes, we performed a series of tests with artificial PSFs generated by\nTiny Tim\\footnotemark[3]. \n\\footnotetext[3]{Tiny Tim was written by J. Krist and can be\nfound at http:\/\/www.stsci.edu\/software\/tinytim\/tinytim.html}\nWe ended up choosing a drop size of 0.7 and a\nsubsampling factor of 2. The final combined images had FWHM of 0\\farcs133 \nfor the F160W images and 0\\farcs115 for the 4C\\,23.56\\,ER1\\ F110W image.\nWe were unfortunately unable to produce a useful image\nof 4C\\,05.84\\,ER1\\ in the F110W band because of a combination of the object's low surface\nbrightness at that wavelength and residual effects on the detector of the \nprevious SAA passage. The final $3 \\sigma$ surface brightness limits \n(in the Vega system) were $\\mu_{160} \\approx 22.3$ and $\\mu_{110} \\approx 23.3$\nfor 4C\\,23.56\\,ER1, and $\\mu_{160} \\approx 22.0$ for 4C\\,05.84\\,ER1.\n\nThe drizzling process inevitably introduces some level of correlation between adjacent\npixels. Where we needed to estimate absolute errors (such as for our radial-surface-brightness\nplots), we made a statistical correction to the error determinations, following the prescriptions\nof \\citet{fru02}.\n\nAlthough we obtained images of stars for point-spread-function (PSF) determination\nat the ends of some of the orbits, PSFs modeled from TinyTim were\nquite consistent with the stellar profiles. Subtracting TinyTim models\nfrom the observed stars gave residuals that were less than 1.5\\%\\ (rms) of the peak over\nthe central FWHM region (and much lower outside this region), with maximum pixel \ndeviations of 3\\%. The TinyTim profiles also have the advantage that they can be \ngenerated on a subsampled grid to minimize interpolation errors in matching the\nprofiles to the undersampled NICMOS2 images. 
We accordingly used subsampled \nTinyTim model profiles in our analysis.\n\n\\section{4C\\,23.56\\,ER1}\n\nIn the field of the $z=2.483$ radio galaxy 4C\\,23.56, our photometric selection procedure picked\nout a galaxy that had previously been noted as a very red object by \\citet{kno97}.\nAs mentioned above, we have already reported on our Subaru AO\/IRCS imaging of 4C\\,23.56\\,ER1\\\n(\\citealt{sto04}; there, the galaxy is referred to as 4C\\,23.56\\,KC68). The main conclusions \nof that paper were that (1) the galaxy indeed has a redshift close to that of the radio galaxy,\n(2) the best fit to the photometry is an old (2--3 Gyr) stellar population with little reddening\n($\\lesssim0.2 A_V$ of \\citealt{cal00} extinction), and that (3) the morphology of this massive,\nold galaxy looked surprisingly disklike, with a projected axial ratio of 0.3 and a S\\'{e}rsic \nindex of 1.5. Because the details of the previous observations are available in that paper,\nand in a brief follow-up report \\citep{sto07},\nwe restrict our discussion here to a comparison of the NICMOS imaging with the \nprevious AO imaging.\n\nThe NICMOS F110W and F160W images are shown in Fig.~\\ref{4c23mos}, along with\nthe best-fit S\\'{e}rsic models (convolved with the PSF), the residuals from the subtraction \nof the models from the data, and, again, the best-fit models (but {\\em without} convolution\nwith the PSF). We determine total magnitudes for 4C\\,23.56\\,ER1\\ from the S\\'{e}rsic models, finding\n(on the Vega system) $m_{F110W} = 23.39 \\pm 0.17$ and $m_{F160W} = 20.82 \\pm 0.04$\nafter correction for Galactic reddening \\citep{sch98}. The quoted uncertainties include only\nsky noise and uncertainty in the sky level; they do not include any deviations between the\nmodels and the data (which are in any case quite small over the region of good S\/N for the\ndata) or other potential systematic effects.\n\\begin{figure*}[!bt]\n\\epsscale{1.0}\n\\plotone{f1.eps}\n\\caption{NICMOS2 images of 4C\\,23.56\\,ER1 in the F110W and F160W filters.\nThe best-fit {\\sc galfit} S\\'{e}rsic models, convolved with the PSF, are shown in \nthe second panel of each \nrow, the difference between the observed images and the models in the third\npanel, and the models without convolution with the PSF in the last panel.\nInsets show lower-contrast versions of the images. North is up and East to the\nleft for this and all following images.}\\label{4c23mos}\n\\end{figure*}\n\nWe discuss the F160W image first, since it has a much higher S\/N ratio than\ndoes the F110W image.\nFigure \\ref{4c23hrsb} shows the radial-surface-brightness profile for \nthe F160W image of 4C\\,23.56\\,ER1, along\nwith the best-fit $r^{1\/4}$-law, exponential, and S\\'{e}rsic profiles, determined\nusing {\\sc galfit} \\citep{pen02}.\nAmong these, the S\\'{e}rsic profile clearly gives the best fit, as expected,\nbecause of the extra degree of freedom in the model.\nThe S\\'{e}rsic profile has an index $n = 1.52\\pm0.06$, \nan effective radius $r_e = 0\\farcs24\\pm0.01$, and axial ratio $b\/a = 0.32$.\nThe uncertainties in $n$ and $r_e$ have been estimated by re-running the\nmodels with the sky level set $1\\sigma$ above and below its median value. From\nour Subaru AO imaging in the $K'$ band \\citep{sto04}, we had obtained\n$n = 1.49$, $r_e = 0\\farcs22$, $b\/a = 0.33$, so the two independent profiles\nin different bands are in remarkably good agreement. 
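For reference, the S\'{e}rsic models quoted here and elsewhere in this paper have the standard form $\mu(r) = \mu_e + \frac{2.5\,b_n}{\ln 10}\left[(r/r_e)^{1/n} - 1\right]$, where $b_n$ is fixed by requiring that half of the total light fall within $r_e$. The following sketch (illustrative only, not part of our fitting pipeline; it assumes {\tt numpy} and {\tt scipy} are available, and the zero point $\mu_e$ is arbitrary) evaluates this profile for the F160W best-fit parameters and for exponential and $r^{1/4}$-law profiles of the same effective radius, which makes it easy to see how the fitted index $n\simeq1.5$ differs from the two classical forms.
\begin{verbatim}
import numpy as np
from scipy.special import gammaincinv

def sersic_mu(r, mu_e, r_e, n):
    """Sersic profile in mag arcsec^-2; b_n makes r_e the half-light radius."""
    b_n = gammaincinv(2.0 * n, 0.5)
    return mu_e + 2.5 * b_n / np.log(10.0) * ((r / r_e) ** (1.0 / n) - 1.0)

# F160W fit to 4C 23.56 ER1 (n = 1.52, r_e = 0.24 arcsec), compared with an
# exponential (n = 1) and an r^1/4-law (n = 4) profile of the same r_e.
# mu_e = 20.0 is an arbitrary normalization used only for illustration.
for n in (1.0, 1.52, 4.0):
    drop = sersic_mu(1.0, 20.0, 0.24, n) - sersic_mu(0.1, 20.0, 0.24, n)
    print("n = %.2f: mu(1.0) - mu(0.1) = %.2f mag" % (n, drop))
\end{verbatim}
Only the shape of the profile enters the comparison; the normalization cancels in the differential plots shown in the figures.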
Although the AO imaging had slightly \nbetter FWHM, and the two datasets had similar S\/N near the center of the \ngalaxy, the NICMOS2 data extends farther in semi-major axis because\nof its lower sky background. We show a comparison of the two profiles in the\nregion of overlap in Fig.~\\ref{4c23drsbp}.\nBoth the $r^{1\/4}$-law and exponential profiles fit the observed profile poorly. Adding\na small ($r_e=0\\farcs1$), weak (14\\%\\ of total light) bulge component to an exponential \nprofile with an $r_e = 0\\farcs26$ gives a fit that is as good\nas that of the S\\'{e}rsic profile within a semi-major axis of 0\\farcs6 but somewhat\nworse beyond this radius.\n\\begin{figure}[!bt]\n\\epsscale{1.0}\n\\plotone{f2.eps}\n\\caption{Radial-surface-brightness profile of the NICMOS2 F160W image of\n4C\\,23.56\\,ER1, with best-fit $r^{1\/4}$-law, exponential, and S\\'{e}rsic profiles shown.\nThe upper panel shows the profiles, and the lower panel shows the\ndeviations of the observed profile and the two other models from the best-fit\nS\\'{e}rsic profile. Sample points in this and subsequent plots are at intervals\nof 1 subsampled pixel in the drizzled images (0\\farcs038) along the major\naxis, so data values\nand errors for adjacent points are fairly strongly correlated because of \ndrizzling, PSF smearing, and compression of the scale along the minor\naxis.}\\label{4c23hrsb}\n\\end{figure}\n\\begin{figure}[!tb]\n\\epsscale{1.0}\n\\plotone{f3.eps}\n\\caption{Comparison of normalized differential surface-brightness profiles for 4C\\,23.56\\,ER1\\\nin the region of overlap. \nAll profiles are shown relative to the NICMOS2 F160W S\\'{e}rsic fit, and the difference\nbetween F160W and $K'$ magnitudes has been removed via a simple slide fit.\nThe red points and line show the F160W profile and S\\'{e}rsic fit, respectively, \nand the blue points and line show the Subaru AO $K'$ profile and fit.}\\label{4c23drsbp}\n\\end{figure}\n\nThe F110W image shown in Fig.~\\ref{4c23mos}, which samples the morphology\nshortward of the 4000 \\AA\\ break (assuming that 4C\\,23.56\\,ER1\\ has the same redshift as\n4C\\,23.56 itself), superficially has an even more ``disky'' appearance\nthan does the F160W image. This is partly due to the sharper PSF at this\nwavelength: notice that the best-fit S\\'{e}rsic models without PSF convolution\nlook much more similar than do the models with PSF convolution. Nevertheless,\nthere may be a detectable difference in morphology in the two bands.\nThe F110W S\\'{e}rsic model has an index $n = 1.03\\pm0.10$ ({\\it i.e.,}\\ essentially a pure\nexponential), an effective radius of $0\\farcs28\\pm0\\farcs02$, and $b\/a = 0.31$. The \nradial-surface-brightness profile is shown in Fig.~\\ref{4c23jrsb}, along with\nthe best-fit S\\'{e}rsic model and the F160W S\\'{e}rsic model (adjusted by a constant\nmagnitude offset to approximately match the F110W points).\nThe observed differences are barely significant, given the uncertainties, but they seem \nto indicate a small color gradient, such that the outer parts of\nthe galaxy are slightly bluer (at least out to a semi-major axis of 0\\farcs7, at which point\nthe uncertainty in the sky background level becomes dominant). \nThis cannot be a large effect because of the tight upper limits\non the $R$ and $I$-band magnitudes (see \\citealt{sto04}). 
Nevertheless, it does suggest a possible slight decrease in mean age and/or mean metallicity of the stellar population as one
progresses from the center to the outskirts of the galaxy.
\begin{figure}[!tb]
\epsscale{1.0}
\plotone{f4.eps}
\caption{Radial-surface-brightness profile of the NICMOS2 F110W image of
4C\,23.56\,ER1, with best-fit S\'{e}rsic profile shown (red trace). Also shown is the
best-fit F160W S\'{e}rsic profile from Fig.~\ref{4c23hrsb}, shifted by a
constant magnitude offset to match the F110W points (blue trace).
The upper panel shows the profiles, and the lower panel shows the
deviations of the observed F110W profile and the F160W model from the best-fit
S\'{e}rsic profile to the F110W data.}\label{4c23jrsb}
\end{figure}

\section{4C\,05.84\,ER1\label{4c05}}

4C\,05.84\,ER1\ was found in the field of the $z=2.323$ quasar 4C\,05.84 (Fig.~\ref{4c05field}).
The SED of 4C\,05.84\,ER1\ is shown in Fig.~\ref{4c05sed}, including photometry from
our {\it Spitzer} IRAC images, which will be discussed in more detail elsewhere in
the context of a larger sample of objects. While only upper limits at $R$ and $I$ have
been obtained for 4C\,23.56\,ER1, for 4C\,05.84\,ER1\ we have detections at $R=24.6$
and $I=23.4$, indicating the presence of some younger stars.
In attempting to fit the observed SED, we have explored a range of exponentially declining
star-formation models as well as instantaneous burst models; we have also considered models
with metallicities of solar, 0.4 solar, and 2.5 solar.
\begin{figure}[!tb]
\epsscale{1.0}
\plotone{f5.eps}
\caption{The field of 4C\,05.84\,ER1. The image is a deep (6750 s) $K$-band integration with
the Subaru Infrared Camera and Spectrograph in 0\farcs35 seeing. North is up
and East to the left.}\label{4c05field}
\end{figure}
\begin{figure}[!bt]
\epsscale{1.0}
\plotone{f6.eps}
\caption{Spectral-energy distribution for 4C\,05.84\,ER1. The blue curve shows the best-fit CB07
stellar population model to the photometry, at a redshift $z=2.93$. The red curve shows
the best-fit model with a redshift close to that of 4C\,05.84 itself ($z_Q$). The redshifts
$z$, Calzetti-law extinctions $A_V$, and ages of the stellar populations are indicated for
each model. See the text for details.}\label{4c05sed}
\end{figure}

The formal best-fit model SED is at a redshift of 2.93, substantially higher than that of 4C\,05.84
itself. This model has a 0.4-solar-metallicity population with an age of 900 Myr, an
exponential time constant of 100 Myr, and a reddening $A_V = 0.06$ mag. If we restrict
ourselves to models with redshifts close to that of the quasar, we get a reasonable fit with
a solar-metallicity model at a redshift of 2.40, an age of 1.02 Gyr, an exponential time
constant of 200 Myr, and a reddening $A_V = 0.58$ mag. This model fits the $I$-band and
IRAC 5.8 $\mu$m photometry less well but the IRAC 7.9 $\mu$m photometry slightly better.
Both of these models are shown in Fig.~\ref{4c05sed}. We have no firm grounds
for choosing one of these SEDs over the other, but, given the uncertainties in the models
and in the possible star-formation histories, we will assume for the remainder of this paper
that the redshift closer to that of 4C\,05.84 itself is the correct one.
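These single-population fits amount to a $\chi^2$ minimization over a grid of template SEDs, with the template normalization (and hence, through the mass-to-light ratio of the template, the stellar mass) solved for analytically at each grid point. The sketch below is a minimal illustration of that procedure rather than our actual fitting code: {\tt model\_flux} is a hypothetical stand-in for broad-band fluxes synthesized from the CB07 models (redshifted and attenuated with a Calzetti law), and the observed flux and error arrays are placeholders.
\begin{verbatim}
import numpy as np

def fit_sed_grid(fluxes, errors, grid):
    """Chi-square fit of template SEDs to broad-band photometry.

    fluxes, errors : observed fluxes and 1-sigma errors (linear units)
    grid           : iterable of (params, template) pairs, where template
                     holds model fluxes in the same bands, arbitrary norm
    Returns (best params, best scale factor, best chi-square).
    """
    w = 1.0 / errors**2
    best = (None, None, np.inf)
    for params, template in grid:
        # Normalization that minimizes chi^2 for this template
        scale = np.sum(w * fluxes * template) / np.sum(w * template**2)
        chi2 = np.sum(w * (fluxes - scale * template)**2)
        if chi2 < best[2]:
            best = (params, scale, chi2)
    return best

# Hypothetical usage; model_flux() would synthesize template fluxes.
# grid = [((z, age, tau, Z, A_V), model_flux(z, age, tau, Z, A_V))
#         for z in zs for age in ages for tau in taus
#         for Z in Zs for A_V in A_Vs]
# params, scale, chi2 = fit_sed_grid(obs_flux, obs_err, grid)
\end{verbatim}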
In either case, we are dealing with a massive
galaxy comprising stars that mostly formed $\sim1$ Gyr before the observed epoch.

We show our NICMOS2 F160W image of 4C\,05.84\,ER1\ in Fig.~\ref{4c05mos}, along with our best-fit
{\sc galfit} model. We have tried a series of models, including, again, $r^{1/4}$-law,
exponential, and S\'{e}rsic. Radial-surface-brightness profiles of these are shown in
the left panel of Fig.~\ref{4c05hrsb}. In this case, even the S\'{e}rsic profile is not a
particularly good fit. We get a significantly better fit with a two-component model
incorporating a small $r^{1/4}$-law bulge comprising $31 \pm 15$\%\ of the light and an
exponential disk accounting for the rest. This model is compared with the best
S\'{e}rsic profile fit in the right panel of Fig.~\ref{4c05hrsb}. For the two-component
model, the disk component has an effective radius $r_e = 0\farcs89 \pm 0\farcs09$,
and the bulge component has $r_e = 0\farcs37 \pm 0\farcs20$. This best-fit model gives
a total magnitude (on the Vega system) $m_{F160W} = 20.28$.
\begin{figure*}[!bt]
\epsscale{1.0}
\plotone{f7.eps}
\caption{The NICMOS2 image of 4C\,05.84\,ER1 in the F160W filter (left panels).
The best-fit {\sc galfit} composite $r^{1/4}$-law + exponential model,
convolved with the PSF, is shown in the second panel of each
row, the difference between the observed images and the model in the third
panel, and the model without convolution with the PSF in the last panel. This last
panel is divided into 3 sub-panels: the middle one shows the composite
model, the left one shows the exponential component alone, and the right one
shows the $r^{1/4}$-law subcomponent alone.
The lower panels show lower-contrast images, except for the residual image,
which is shown as a slightly smoothed, high-contrast version.}\label{4c05mos}
\end{figure*}
\begin{figure*}[!tb]
\epsscale{1.0}
\plottwo{f8a.eps}{f8b.eps}
\caption{Radial-surface-brightness profile of the NICMOS2 F160W image of
4C\,05.84\,ER1. In the left panel, the best-fit $r^{1/4}$-law, exponential, and S\'{e}rsic profiles
are shown in the same way as in Fig.~\ref{4c23hrsb}. In the right panel, a composite
model consisting of a small $r^{1/4}$-law bulge, accounting for 31\%\ of the
light, and an exponential disk comprising the remainder, is compared with the
best-fit S\'{e}rsic profile. In this case, the differentials in the bottom part of the
panel are made with respect to the composite model.}\label{4c05hrsb}
\end{figure*}

There is some evidence from our $R$ and $I$-band imaging and our recent
Keck AO imaging in the $J$ band that the bulge component virtually disappears
at these shorter wavelengths, indicating that the two morphological components
have different stellar populations. Such a result would not be surprising. The
SED shown in Fig.~\ref{4c05sed} would then be the linear combination of the
two SEDs, with the bulge likely being a nearly pure old population and with
younger stars being confined to the disk component. We stress, however,
that even the disk component must be dominated by old ($\sim$ a few hundred
Myr) stars, with little very recent star formation.
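If the bulge and disk do have distinct stellar populations, the observed photometry is a linear combination of two template SEDs, and for a fixed pair of templates the two normalizations follow from a (non-negative) linear least-squares solve. The sketch below illustrates only that step; it is not the machinery actually used here, the template arrays are placeholders, and in practice the solve would be repeated over a grid of old and young template pairs.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

def fit_two_component_sed(fluxes, errors, old_template, young_template):
    """Non-negative least-squares amplitudes for an old + young SED mixture.

    All inputs are arrays over the observed bands, in the same linear flux
    units; the templates may carry arbitrary normalizations.
    Returns (amplitudes, chi-square).
    """
    A = np.column_stack([old_template, young_template]) / errors[:, None]
    b = fluxes / errors
    amps, _ = nnls(A, b)              # enforce non-negative component fluxes
    chi2 = np.sum((b - A @ amps)**2)
    return amps, chi2
\end{verbatim}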
We have experimented with\na range of combinations of SEDs at the quasar redshift, but none with simple\nstar-formation histories (instantaneous bursts or exponentially decaying bursts)\ngave a significantly better fit than did the single-population SEDs shown in Fig.~\\ref{4c05sed}.\nWe will explore this possibility in more detail elsewhere.\n\n\\section{Discussion\\label{discuss}}\nTable~\\ref{tab1} summarizes the parameters for the two galaxies.\nThe morphologies of both 4C\\,23.56\\,ER1\\ and 4C\\,05.84\\,ER1\\ appear to be dominated by disks of \nold stars. However, the disks are quite different in scale. \n 4C\\,05.84\\,ER1, at least, also appears to have a small bulge\ncomprising about 1\/3 of the total light in the F160W filter ($\\sim4800$ \\AA,\nrest frame). We cannot exclude the possibility that 4C\\,23.56\\,ER1\\ also has a weak bulge,\nwith up to $\\sim15$\\% of the total light in the F160W filter; indeed, if the slight\napparent difference in morphology between the F160W and F110W images is\nreal, such a difference would seem to favor this possibility.\n\nBut it is the presence of massive, old disks that continues to give the strongest\nconstraint on formation mechanisms. Such disks also have been seen at \nredshifts $\\sim1.5$, where normal ellipticals with\n$r^{1\/4}$-law profiles are also found\n\\citep*{iye03, cim04, yan04, fu05, sto06, mcG07}. It is difficult to \nimagine that these massive disks\ncould have formed via any process other than the dissipative collapse of a large\ncloud of gas. Such disks are also unlikely to have survived major merging events,\nalthough the bulge component in 4C\\,05.84\\,ER1\\ may testify to either some level of minor\nmerging activity or bulge building via disk instabilities. \n\nFor galaxies at $z\\sim2.5$, the evidence for a dominant old stellar population\ndepends on the inflection in the SED shortward of the $H$ band, and establishing\nthis inflection with optical\/near-IR photometry depends on the relatively short baseline \nfrom the $H$ to the $K$\nband. Furthermore, at the present epoch, essentially all strongly disk-dominated\ngalaxies show evidence for continued star formation. It is therefore not too\nsurprising that claims of passive disks at high redshift should be doubted \n\\citep[{\\it e.g.,}][]{pie05}. However, as Fig.~\\ref{4c05sed} shows, {\\it Spitzer} IRAC data\nis entirely consistent with the SED of a moderately old stellar population, and\nno plausible SED incorporating very recent star formation combined with dust would fit\nthe observed photometry. We have recently also obtained IRAC\nimaging of the field of 4C\\,23.56, and our analysis of these data\nshows that the IRAC photometry falls squarely on our best-fit \nsolar-metallicity BC03 model determined from\nthe optical\/near-IR photometry alone: an instantaneous burst with an age of\n2.6 Gyr and an extinction $A_V=0.16$ mag \\citep{sto07}. Using the more recent\npreliminary CB07 models, with their improved treatment of AGB stars,\nwe obtain a stellar population age of 2.8 Gyr with $A_V=0$. 
Again, no plausible model\nwith significant star formation and reddening would fit these data.\n\\begin{deluxetable}{l c c c c}\n\\tablewidth{0pt}\n\\tablecaption{Model Parameters for 4C\\,23.56\\,ER1\\ and 4C\\,05.84\\,ER1}\n\\tablehead{\n\\colhead{Galaxy} & \\colhead{Filter} & \\colhead{S\\'{e}rsic $n$} & \\colhead{$r_e$} & \\colhead{$r_e$}\\\\\n & & & (\\arcsec) & (kpc)\n}\n\\startdata\n4C\\,23.56\\,ER1 & F110W & $1.03\\pm0.10$ & $0\\farcs28\\pm0\\farcs02$ & $2.2\\pm0.2$ \\\\\n4C\\,23.56\\,ER1 & F160W & $1.52\\pm0.06$ & $0\\farcs24\\pm0\\farcs01$ & $1.9\\pm0.1$ \\\\\n & & 1.00\\tablenotemark{a} & $0\\farcs89\\pm0\\farcs09$ & $7.1\\pm0.8$ \\\\\n\\raisebox{1.5ex}[0pt]{4C\\,05.84\\,ER1} & \\raisebox{1.5ex}[0pt]{F160W} & 4.00\\tablenotemark{a} & $0\\farcs37\\pm0.20$ & $3.0\\pm1.6$ \\\\\n\\enddata\n\\tablenotetext{a}{The S\\'{e}rsic indices for the two model components for 4C\\,05.84\\,ER1 have been fixed at these values, \nwhich correspond to exponential and $r^{1\/4}$-law profiles, respectively.}\n\\label{tab1}\n\\end{deluxetable}\n\n\nMasses for these galaxies can be estimated from the model fits. Assuming\nsolar metallicities and a \\citet{cha03} initial mass function, we obtain a mass\nof $3.9\\times10^{11} M_{\\odot}$ for 4C\\,23.56\\,ER1\\ and $3.3\\times10^{11} M_{\\odot}$\nfor 4C\\,05.84\\,ER1 (assuming the model at $z=2.4$ with $A_V=0.58$).\n\nWhile the stellar-population age of 4C\\,05.84\\,ER1\\ indicates that the last\nmajor star-formation episode occurred at $z\\sim3.7$, when the universe was\n$\\sim1.8$ Gyr old, 4C\\,23.56\\,ER1\\ has a stellar-population age that is formally slightly greater\nthan the age of the universe at $z=2.483$. Clearly, the likely errors in the age\ndetermination and the usual caveats regarding the age-metallicity degeneracy\nmitigate any implied paradox. Nevertheless, this massive galaxy must have formed\nat a very high redshift. Models with [Fe\/H] $= +0.4$ give an age of 1.9 Gyr,\nbut with a significantly worse fit.\n\nIt therefore seems likely that galaxy formation models will have to allow for the\npresence of early-forming massive disks. This means that, at least in some\ndense regions, it has been possible to form $\\sim3\\times10^{11}$ $M_{\\odot}$\nof stars within a relatively short time via dissipative collapse and without the aid of\nmajor mergers. While our selection criteria have ensured that the galaxies we\nhave discussed here comprise essentially pure old stellar populations, they\nmay well be representative of many massive galaxies at high redshift, most\nof which would not be in our sample if they retained even tiny amounts of residual star\nformation or if they had had any significant star formation within a few hundred Myr prior to\nthe epoch at which we observe them.\n\n4C\\,05.84\\,ER1\\ has a luminosity and an effective radius that are similar to those of many\nlocal galaxies. Our best-fitting S\\'{e}rsic model has $r_e=6.3$ kpc. For comparison,\nfor galaxies of similar mass from the Sloan survey with S\\'{e}rsic $n<2.5$, \\citet{she03} find\n$r_e=7.2^{+2.9}_{-2.1}$ kpc. This galaxy could become, with passive evolution and perhaps \na few minor mergers to increase the bulge-to-disk ratio somewhat, a typical S0 galaxy at\nthe present epoch. On the other hand, we do not see galaxies like 4C\\,23.56\\,ER1\\ at the present\nepoch. By the prescription of \\citet{she03}, a low-S\\'{e}rsic-index galaxy with the mass\nof 4C\\,23.56\\,ER1\\ would have $r_e=7.6^{+3.1}_{-2.2}$ kpc, but 4C\\,23.56\\,ER1\\ actually has $r_e=1.9\\pm0.1$\nkpc. 
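The contrast with the local size--mass relation can be made quantitative by converting the angular effective radii to physical sizes and forming the mean stellar surface density within $r_e$, $\Sigma_e = (M/2)/(\pi r_e^2)$, where the factor of one half reflects the fact that $r_e$ is the projected half-light radius. The sketch below does this with {\tt astropy}; the cosmological parameters ($H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$, flat) are assumed here purely for illustration.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # assumed cosmology (illustrative)

def sigma_e(mass_msun, r_e_kpc):
    """Mean stellar surface density within r_e (half the mass), Msun kpc^-2."""
    return 0.5 * mass_msun / (np.pi * r_e_kpc**2)

z = 2.483
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec).value
r_e_obs = 0.24 * kpc_per_arcsec           # ~1.9 kpc, as in Table 1

print(sigma_e(3.9e11, r_e_obs))           # observed surface density
print(sigma_e(3.9e11, 7.6))               # implied by the local relation
\end{verbatim}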
This means that the stellar mass surface density is much higher than for
local galaxies, a result that has also been found for other samples of distant red
galaxies \citep[{\it e.g.,}][]{tru06,tof07}. It would seem that the only likely path for such galaxies
to evolve to objects consistent with the local population of galaxies is through
dissipationless mergers.

There is recent evidence that the most massive galaxies in the local
universe are likely the result of dry mergers of galaxies with stars that are already
old and with very little gas \citep[e.g.,][]{ber07}. With the constraint that these
merging components must themselves mostly be fairly massive (to avoid a large
dispersion and flattening in the observed color--magnitude relation for
present-day massive galaxies, {\it e.g.,}\ \citealt*{bow98}), it seems possible that
these early massive disks may well be among the sources for the old stars that
today are found in the most massive elliptical galaxies.

\acknowledgments
We thank S. Charlot and G. Bruzual for providing us with preliminary versions of
their new spectral synthesis models prior to publication. We also thank the anonymous
referee for a detailed reading of the paper and a number of specific suggestions that helped
us improve it.
Support for {\it HST} program no.~10418 was provided by NASA through a grant from
the Space Telescope Science Institute, which is operated by the Association
of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
This research has also been partially supported by NSF grant AST03-07335.
It made use of the NASA/IPAC Extragalactic Database (NED),
which is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration.