\section*{Introduction}
Since the introduction of Hawking radiation \cite{Hawking:1974sw}, many semiclassical, non-classical \cite{Birrell:1982ix,DeWitt:2003pm,Fulling:1989nb,Wald:1995yp,Ford:1997hb,Bar:2009zzb,Parker:2009uva,Donoghue:2017pgk} and \emph{geometrical} \cite{Sardanashvily:1992nr,Prugovecki:1995tj,Blagojevic:2013xpa} endeavours\textemdash seeking ``the'' unified theory\textemdash have been striving to ``shoehorn'' general relativity into the framework of quantum field theories (QFT). Irrespective of the back-and-forth objections \cite{Ohanian:1995uu} and rebuttals \cite{Hehl:1997bz} on the torsion-based and Poincar\'{e} gauge geometrical approaches, what captures attention is the capability of these approaches to maintain \emph{Cosmic Censorship} \cite{Penrose:1999vj}, i.e., they can avoid the nonphysical singularities resulting from black hole evaporation, and remove the ultraviolet divergences in QFT by treating fermions as spatially extended objects rather than perceiving them as ``point-like'' particles \cite{Poplawski:2009su,Poplawski:2010kb,Poplawski:2011jz}, which is the mainstream QFT perspective on those particles. Part of this letter sheds some light on an alternative to this mainstream perspective, as the approach we follow concurs with those geometrical approaches in viewing particles as spatially extended objects.\\

In contrast, all non-geometrical attempts have shown drawbacks when it comes to dealing with divergences and renormalization problems \cite{Eppley:1977fp,Shomer:2007vq,Albers:2008as}, especially when the black hole mass $M_{BH}$ approaches the Planck mass $M_{P}$, with the temperature $T_{BH}$ relating the mass of the radiating black hole to its entropy.
Such a black hole is known as a \textit{microscopic black hole}. If $M_{BH}$ is close to $M_{P}$, black holes in models with large extra dimensions ($n>3$ spatial dimensions) can be placed in a (1, $n$)-dimensional isotropic spacetime, i.e., they are close to being microscopic black holes. Then the higher-dimensional Schwarzschild solution appears in the scenario of black hole evaporation. So if high energy particle collisions were to create microscopic black holes, they might evoke higher values of the associated cross section in the presence of such large extra dimensions \cite{Bleicher:2007hw}. It is worth pointing out that ADD-based string theory models suggest a quantum description only for extremal and super-extremal charged black hole models \cite{Strominger:1996sh}. However, this raises questions about the early discharge phase before reaching Schwarzschild geometry. Besides that, and still within the realm of string theory, there is no program that describes all evaporation phases of (super)-extremal black holes, especially the phase when the black hole mass becomes $\sim M_P$ \cite{Nicolini:2008aj}.\\

Even before the introduction of \textit{large} extra dimensions in the ADD model, it was proposed \cite{Madore:1989ma, Chamseddine:1992yx, Madore:1993fn} that the introduction of noncommutative geometry (NC) in quantum gravity theories should imply extra dimensions. The proposal extends to substantially relate NC to the essential quantum fluctuations of the gravitational field \cite{Madore:1995cg} by showing that classical gravity is indeed a unique ``shadow'' in the commutative limit of the noncommutative ``Fuzzy Spacetime'' \cite{Madore:1996bb, Madore:1996gr, Madore:1996sk, Madore:1997ta, Violette:1997ag}.
Almost at the same time, Virtual Black Holes (VBH) were introduced, appearing and disappearing due to quantum fluctuations of spacetime as well \cite{Hawking:1995ag, Wald:1998de, Crowell:2005ax}, as a consequence of relating the uncertainty principle to the Einstein equations of gravity such that a VBH would \textit{gravitationally} resemble particle-antiparticle pairing in the vacuum state of QFT \cite{Huggett:1998sz}. In light of this proposal, a VBH is expected to carry a mass $\sim M_P$ and to share features of Wheeler's quantum foam \cite{Wheeler:1955zz, Rickles:2018aoo}. So, taking into account that the uncertainty principle is a noncommutative relation that presumably forestalls measuring physical lengths more accurately than the Planck length, and considering all the drawbacks of QFT in curved spacetime together with the question marks over string theory predictions for extremal black holes, Nicolini black holes \cite{Nicolini:2005de, Nicolini:2005zi, Nicolini:2005vd, Rizzo:2006zb, Spallucci:2006zj, Ansoldi:2006vg, Spallucci:2009zz, Nicolini:2008aj, Casadio:2008qy, Arraut:2009an, Nicolini:2009gw, Gingrich:2010ed, Arraut:2010qx, Nicolini:2010nb} were introduced as a potential alternative to describe the end stage of primordial black holes with mass $\sim M_P$ in an \emph{NC background}. Further details on noncommutative black holes and fuzzy geometry are in Ref. \cite{Nicolini:2008aj}. The rotating case \cite{Modesto:2010rv}, the charged case \cite{Romero-Ayala:2015fba}, and a potential connection to primordial black holes \cite{Mann:2011mm} have also been considered. The case of a Nicolini black hole with Schwarzschild geometry and VBH features is what we focus on throughout this letter.\\

One of the major issues with many current approaches to quantum gravity research is the need for phenomenological features of a given quantum gravity model.
It is the issue of having a set of observational/experimental constraints that would allow eliminating some of the many competing quantum gravity models.
Any quantum gravity phenomenology ought to be connected to the micro-structure of spacetime, such as spin foam models of canonical quantum gravity~\cite{Hawking:1979zw,Perez:2003vx,Garattini:2001yb,Baez:1997zt} or the compactified extra dimensions from brane world models and string theory~\cite{Randall:1999ee,Randall:2005xy,Antoniadis:1998ig}.
Major quantum gravity models phenomenologically result in noncommutative geometry, where, in a given coordinate system~\cite{Seiberg:1999vs,Ardalan:1998ce,Aastrup:2012jj}, the coordinates of the conventional spacetime points form an algebra satisfying the Lie bracket
\begin{seqn}
\left[x^\mu, x^\nu\right] = i \theta g^{\mu \nu}
\end{seqn}
for~$\theta$ the noncommutativity parameter, with some matrix element~$g^{\mu\nu}$. The implication of the above relation for point-like objects is that such objects are smeared into Gaussians of width~$\sqrt{\theta}$.\\


This \textit{natural} assumption about the micro-structure of spacetime rests on two main reasons.
The first is to avoid the controversy of having undefined point-like particles, e.g., electrons. The Boscovichian-like point model characterizes those particles by an infinite electromagnetic mass density. The only way to clear out such divergences is to use renormalization techniques, which essentially impose an effective cut-off scale for quantum electrodynamics, a QFT, thereby indirectly avoiding the notion of point-like particles~\cite{Schwinger:1948iu}. It is also worth mentioning that, for the same reason, a new line of research has been launched \cite{Hooft:2016cpw} to describe black holes in the same way physics describes particles with spatial volume. Part of it refers indirectly to the relationship that might exist between VBH and NC.
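As a toy illustration of this smearing (a sketch of our own, not from the letter): a point charge smeared into a Gaussian of width $\sqrt{\theta}$ produces a potential $\phi(r) \propto \mathrm{erf}\!\left(r/2\sqrt{\theta}\right)/r$ that stays finite at the origin, removing the $1/r$ divergence of the Boscovichian point picture.

```python
import numpy as np
from scipy.special import erf

# Potential of a Gaussian-smeared unit charge of width sqrt(theta):
# phi(r) = erf(r / (2*sqrt(theta))) / r, in arbitrary units.
# As r -> 0 it tends to the finite value 1/sqrt(pi*theta) instead of
# diverging like the point-charge 1/r. The value of theta is illustrative.
theta = 0.1

def phi(r):
    return erf(r / (2.0 * np.sqrt(theta))) / r

limit_at_origin = 1.0 / np.sqrt(np.pi * theta)
print(phi(1e-8), limit_at_origin)   # both finite and nearly equal
```

The same Gaussian profile, with mass in place of charge, is exactly the droplet density used later in the letter.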
Bekenstein made a similar argument \cite{Bekenstein:1997bt}, although it targets a different problem. The torsion-based gauge theories of gravitation mentioned earlier are also endowed with a noncommutative geometrical variety, albeit of a different kind: gravitational gauge theories are based on diffeomorphisms rather than on the Lie group structure of the noncommutative coherent state formalism of spacetime, which is what makes these theories see elementary particles with a non-Boscovichian signature. This suggests that noncommutativity, in general, may be essential to the existence of elementary particles as spatially extended objects. The second reason is that the existence of a causal metric theory of gravity governed by the Einstein equations, along with localized spacetime events\textemdash determined by quantum radiation/matter interactions\textemdash would strongly recommend considering spacetime as a foam-like structure. This recommendation comes from two known facts: spacetime obeys the uncertainty principle between position and momentum, and the Einstein equations imply the~\emph{ultra-relativistic} dispersion relation between energy and momentum,~$E\sim p$. Moreover, the existence of highly-localized energy would cause the spacetime structure to break down beyond the Planck scale~\cite{Frohlich:1996zc}.\\

Another motivation for noncommutative geometry~\cite{PMIHES_1985__62__41_0} is the discovery of the area law for entropy~\cite{Bandyopadhyay:2003nu}, setting a bound on the maximum number of particles/events in a given region of spacetime bounded by an area~$A$,
\begin{seqn}
N\lesssim \frac{A}{\ell_p^2}~.
\end{seqn}
Hence, from the previous discussion, we would expect the spacetime at the micro-scale to consist of a sea of VBH~\cite{Faizal:2006cm,Hawking:1995ag}.
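For scale, a rough evaluation of this bound (an illustration; the choice of a $\sim\!1$ fm$^2$ proton-sized area is ours, not the letter's):

```python
import math

# Holographic-style bound N <~ A / l_p^2 evaluated for a proton-sized region.
L_PLANCK_M = 1.616e-35          # Planck length in meters
A_M2 = (1.0e-15) ** 2           # ~1 fm^2, an illustrative choice of area
N_max = A_M2 / L_PLANCK_M ** 2

print(f"N <~ 10^{round(math.log10(N_max))}")
```

Even a proton-sized patch can therefore host an enormous number of Planck-scale events, consistent with picturing the micro-scale as a dense sea of VBH.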
However, in all the models studying VBH, the noncommutative structure of spacetime has not been taken into account a priori, although the motivation is the same for both phenomena. VBH could have measurable effects in particle physics, permitting events/decays that are forbidden within the realm of the standard model. The most important decay that could be caused by VBH is the proton decay~\cite{Adams:2000za}. Noncommutative spacetime models also predict phenomenological consequences for particle physics, but these seem rather ill-defined or even unjustified.
A widely known prediction of noncommutative spacetime geometry is the mass of the Higgs particle~$m_H$. It is predicted to be about~$\sqrt{2}m_t \sim 246$ GeV, where $m_t$ is the mass of the top quark~\cite{Connes:1990qp}. This has a considerable error compared to the measured mass~$m_H=125$ GeV but remains of the same order of magnitude. Given the time these calculations were made, the predictions from noncommutative geometry seemed within the experimental range.
\begin{figure}[h!]
	\centering
	\includegraphics[width=\linewidth]{tl.eps}
	\caption{\label{timeline}Theoretical predictions for the proton lifetime $\tau_p$ in the most prominent models vs. the experimental searches for proton decay. Observations show that most of these models have been ruled out, leaving a tight window for the $SO(10)$ and flipped $SU(5)$ models.
Also shown are the VBH models, which can be tuned over a large range~\cite{Adams:2000za,Nishino:2009aa,Gajewski:1981kv,Dimopoulos:1981dw,Sakai:1981pk,Bajc:2002bv,Frampton:1990hz,Langacker:1980js}.}
\end{figure}


Considering the experimental/observational bound on the lifetime of the proton, $\tau_p > 10^{34}$ years~\cite{Nishino:2009aa}, many models such as grand unification theories (GUT)\footnote{In GUT, magnetic monopoles are an interesting example of processes that catalyze proton decay, particularly the monopoles of SU(N) with the Rubakov-Callan effect \cite{Rubakov:1981rg, Callan:1982ac}. Those monopoles should be differentiated from those of SO(N) dual gravity \cite{Danehkar:2019qmw, Curtright:2019yur, Curtright:2019wxg, Alshal:2019hpk}, even if the SU(5) of the Georgi-Glashow model can be embedded within SO(10) \cite{Dawson:1982sc}.}, supersymmetric models (MSSM's in particular)~\cite{Georgi:1974sy,Dimopoulos:1981dw,Sakai:1981pk,Bajc:2002bv} or the sphaleron model~\cite{Arnold:1987mh} would be eliminated from consideration. However, such a consideration would leave a tight window for other GUT models, in particular those involving strings and branes~\cite{Nath:2006ut}, variations of the SO(10) group~\cite{Baez:2009dj}, or leptoquark models~\cite{Dorsner:2012nq}, which show increasing interest due to recent findings related to the anomalies in $B$-meson decays at the LHCb experiment~\cite{Aaij:2014ora}. Moreover, the proton could decay via VBH, and the lifetime of this decay can be estimated from the relation in $D$ dimensions~\cite{Alsaleh:2017ttv,Adams:2000za}
\begin{seqn}
\tau_p \sim M_{proton}^{-1} \left(\frac{M_{qg}}{M_{proton}}\right)^D,
\label{lifetime}
\end{seqn}
where, for the VBH mass~$M_{qg} = \sqrt{1 / 8 \pi G} = M_p$, the Planck mass, and $D=4$, the proton lifetime~$\tau_p$ is $\sim 10^{45}$ years.
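A quick order-of-magnitude check of Eq.~\eqref{lifetime} (a sketch; we use the non-reduced Planck mass $M_P \approx 1.22\times 10^{19}$ GeV and standard unit conversions, which are our assumptions):

```python
import math

# tau_p ~ M_proton^-1 * (M_qg / M_proton)^D, evaluated in natural units
# (GeV^-1) and converted to years via hbar = 6.582e-25 GeV*s.
HBAR_GEV_S = 6.582e-25
SEC_PER_YEAR = 3.156e7
M_PROTON = 0.938            # GeV
M_PLANCK = 1.221e19         # GeV; non-reduced Planck mass (an assumption here)

def proton_lifetime_years(m_qg_gev, dim):
    """VBH-mediated proton lifetime in years for a quantum gravity mass scale."""
    tau_natural = (1.0 / M_PROTON) * (m_qg_gev / M_PROTON) ** dim   # GeV^-1
    return tau_natural * HBAR_GEV_S / SEC_PER_YEAR

tau = proton_lifetime_years(M_PLANCK, 4)
print(f"tau_p ~ 10^{round(math.log10(tau))} years")
```

With these inputs the estimate lands at the quoted $\sim 10^{45}$ years for $D=4$.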
Not to mention that the proton decay process, if it exists, is a very rare event; therefore, it also gives a branching ratio~$\mathcal{B}$ between the generic GUT decay channel and the QG one of order $\mathcal{B} = 10^{-19}$, which is extremely small. However, for different models of quantum gravity and extra dimensions, the VBH channel would have a significant contribution to the proton decay. In fact, for phenomenological quantum gravity models such as the generalized uncertainty principle~(GUP)~\cite{Amati:1988tn, Garay:1994en, Kempf:1994su, Adler:2001vs, Ali:2009zq, Vagenas:2018zoz, Vagenas:2018pez, Vagenas:2019wzd, Vagenas:2019rai}, the VBH decay channel could have effects comparable to GUP/SUSY or other models for a reasonable deformation parameter $\beta$~\cite{Alsaleh:2017ttv}. Since experimental searches have excluded many non-quantum gravity models, the possibility that proton decay is a signature of quantum gravity is increasing; see FIG.~\ref{timeline}.

\section*{Gaussian distribution for the mass of virtual black holes in noncommutative geometrical background}
It would be interesting to investigate the hypothesis of noncommutative spacetime as a phenomenological quantum gravity model of the proton decay via noncommutative VBH~(NCVBH) and to examine the experimental limits on the noncommutativity parameter~$\theta$.
The mass density distribution for a droplet of matter/equivalent energy in $D$ spacetime dimensions is given by~\cite{Nicolini:2005vd,Tejeiro:2010gu}
\begin{seqn}
\rho(r) = \frac{M}{\left( 4 \pi \theta \right) ^{(D-1)/2}} \, e^{-\frac{r^2}{4 \theta }}.
\end{seqn}

Assuming a Gaussian mass density distribution of width $\sqrt{\theta}$, we begin studying the geometric properties of noncommutative VBH by solving the Einstein field equations, which yield the metric of a microscopic black hole \cite{Nicolini:2005vd}
\begin{seqn}
-ds^2 = & \left( 1-\frac{4M}{r \sqrt{\pi}} \gamma(3/2,r^2/4\theta)\right) dt^2\\
& -\left( 1-\frac{4M}{r \sqrt{\pi}}\, \gamma(3/2,r^2/4\theta)\right)^{-1}dr^2-r^2 d\Omega^2_2~,
\label{NCVBH}
\end{seqn}
where $M$ is the black hole mass, an unknown parameter in this case, and~$\gamma(3/2,r^2/4\theta)$ is the lower incomplete gamma function
\begin{seqn}
\gamma(3/2,r^2/4\theta) = \int_{0}^{r^2/4\theta}t^{1/2}e^{-t} dt.
\end{seqn}

Following the analysis in~\cite{Arraut:2009an,Nicolini:2005vd}, we find the spacetime metric in $4$ dimensions, and we can generalize this analysis to arbitrary $D$ dimensions.


This metric gives a noncommutative gravitational radius $r_\Delta$ that will be of concern when we examine the proton lifetime. Nevertheless, the mass of the (virtual) black hole needs further study in order to be identified. \\ We want the above metric~\eqref{NCVBH} to become the conventional Schwarzschild metric when $\theta \to 0$, with the Planck mass~$M_p$ being its mass. This ansatz is attainable since $\gamma(a,z) \to \Gamma(a)$ as $z\to \infty$~\cite{Abramowitz:1974:HMF:1098650}.
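Both ingredients used above can be checked numerically (a sketch using SciPy; the sample values of $M$ and $\theta$ are illustrative): the Gaussian droplet integrates to the total mass $M$ for $D=4$ (three spatial dimensions), and $\gamma(a,z)\to\Gamma(a)$ as $z\to\infty$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma, gammainc

# (1) The D = 4 droplet rho(r) = M/(4 pi theta)^{3/2} exp(-r^2/(4 theta))
#     integrates to M over three spatial dimensions.
M, theta = 1.0, 0.5                       # illustrative values, Planck units
rho = lambda r: M / (4.0 * np.pi * theta) ** 1.5 * np.exp(-r**2 / (4.0 * theta))
total_mass, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho(r), 0.0, np.inf)

# (2) gamma(a, z) -> Gamma(a) as z -> infinity; scipy's gammainc is the
#     regularized form P(a, z) = gamma(a, z) / Gamma(a), so P -> 1.
lower_gamma = lambda a, z: Gamma(a) * gammainc(a, z)
print(total_mass, lower_gamma(1.5, 50.0), Gamma(1.5))
```

The second check is the numerical content of the Schwarzschild limit invoked in the text.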
So $M$ is, indeed, the Planck mass.\\ We also identify the effective gravitational radius $r_s$ (or equivalently the effective quantum gravity mass~$M_{qg}$) as the solution to the equation
\begin{seqn}
h(r)|_{r_s}:=\frac{4M}{r_s \sqrt{\pi}}\, \gamma(3/2,r_s^2/4\theta)-1=0.
\label{hor}
\end{seqn}
In its present form, this equation cannot be solved analytically for $r_s$. Therefore, we expand the incomplete gamma function in the classical spacetime limit~$\theta \to 0$ and take the leading and sub-leading terms. The incomplete gamma function is then expanded as~\cite{Abramowitz:1974:HMF:1098650}
\begin{seqn}
\gamma(3/2,r^2/4\theta) \simeq \Gamma(3/2) -\frac{r}{2\sqrt{\theta}}\, e^{-r^2/4\theta} + \dots,
\end{seqn}
which is rather an expected result. The sub-leading term is Gaussian, superimposing Gaussian \textit{noise} on the standard gravitational radius~$r_s = 2M$; see the plot in FIG.~\ref{plt_rand}.
\begin{figure}[h!]
	\centering
	\includegraphics[width=0.5\textwidth]{plt_rand.eps}
	\caption{\label{plt_rand}The standard gravitational radius~$r_s=2M$ with added Gaussian noise of standard deviation $\sqrt{\theta}=1$. The linear relationship is described by $r_s=2M+\delta r$, where $\delta r$ is a normally distributed random error with mean $0$, shown for 100 random trials. Notice that the negative radius region is nonphysical, hence~$M$ must satisfy~$M> \sqrt{\theta}$.}
\end{figure}

\noindent Therefore, the horizon equation can be written, up to sub-leading order, as
\begin{seqn}
h(r)|_{r_s}=r_s - r_s\,\mathcal N (r; r_s,\sqrt{\theta}),
\end{seqn}
with~$\mathcal N (r; r_s,\sqrt{\theta})$ being the normal distribution.
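The full horizon condition~\eqref{hor} can also be solved numerically, confirming that $r_s \to 2M$ as $\theta \to 0$ (a sketch; the sample $\theta$ values and Planck units are illustrative, and we use $\gamma(3/2,x)=\Gamma(3/2)\,P(3/2,x)$ with $\Gamma(3/2)=\sqrt{\pi}/2$):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

# h(r) = (4M / (r sqrt(pi))) * gamma(3/2, r^2/(4 theta)) - 1
#      = (2M / r) * P(3/2, r^2/(4 theta)) - 1,  using Gamma(3/2) = sqrt(pi)/2.
def h(r, M, theta):
    return (2.0 * M / r) * gammainc(1.5, r**2 / (4.0 * theta)) - 1.0

M = 1.0                                   # illustrative mass in Planck units
roots = {th: brentq(h, 1.0, 4.0, args=(M, th)) for th in (0.1, 0.01, 0.001)}
for th, r_s in roots.items():
    print(th, r_s)                        # r_s approaches 2M as theta shrinks
```

The bracket $[M, 4M]$ captures the outer horizon, which sits just below $2M$ and converges to it in the commutative limit.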
Then, from \cite{Nicolini:2005vd}, with $r_s=2M$ and standard deviation $\sigma = \sqrt{\theta}$, we can directly define the minimal effective gravitational mass to be $M_{qg}\sim \sqrt{\theta}$, since the noncommutative black hole cannot be defined with a radius less than $\sqrt{\theta}$, i.e., at short distances we consider the quantum geometry effects made by spacetime fuzziness, where $r_s \sim \sqrt{\theta}$. This suggests a consistent picture in which to present the basic phenomenology of quantum gravity, particularly the description of VBH, without setting artificial bounds on the gravitational mass/radius.\\ Moreover, this result can be realized within the stochastic interpretation of quantization~\cite{Damgaard:1988nq,Bandyopadhyay:2003nu}, which takes the gravitational degree of freedom to be the black hole's gravitational radius $r_s$, with its rate of change as the conjugate momentum. Therefore, we add to them a stochastic extension with the parameter $\sqrt{\theta}$ such that
\begin{seqn}
r_s +i \sqrt{\theta}\hat{Q},
\end{seqn}
which is similar to what was obtained in the formal stochastic quantization of black holes by Moffat~\cite{Moffat:1996fu,Moffat:2014eua}.

\section*{Numerical analysis in $D$ dimensions}
From this analysis we can rewrite $\theta$ in terms of the effective scale of quantum gravity, $\Lambda_{qg}= \frac{1}{\sqrt{\theta}}$. And since the virtual black hole mass is bounded by the noncommutativity parameter, we recover the result of the virtual black hole mass corresponding to the effective scale of quantum gravity,~$\Lambda_{qg}\sim \frac{1}{M_{VBH}}$. Therefore, if observations were made with careful analysis, a crucial observation of black hole decay would reveal the micro-structure of spacetime.
The experimental and observational bounds on the minimal mass of black holes can be found in~\cite{Abazov:2008kp,Gingrich:2006hm,Aaltonen:2008hh,Khachatryan:2010wx}, where the mass is bounded to be $> 4.5$ TeV. According to our analysis, that corresponds to a quantum gravity scale bound of $\sim 4.39 \times 10^{-20}$ m, which is clearly much larger than what we expect, as this scale is comparable to the electroweak scale, which shows no spacetime noncommutativity. Other models excluded the possibility of detecting microscopic black holes at the LHC even at run II with $\sqrt{s}= 14$ TeV due to the phenomenology of quantum gravity, such as dispersion relations modified by rainbow functions, or the existence of a maximum momentum from a generalization of the Heisenberg algebra (GUP)~\cite{Ali:2012mt,Cavaglia:2003qk}. These bounds were set using particle collisions. However, the proton lifetime could set a much better bound if the relation~\eqref{lifetime} is used and $M_{qg}$ is substituted with the quantum gravity scale inverse, $\Lambda_{qg}^{-1}$. This leads to an order-of-unity estimation. Using the relation~\eqref{hor} in the proton lifetime formula, we can numerically find the bound on the noncommutativity parameter~$\theta$. Alternatively, the bound on the noncommutativity scale~(quantum gravity scale)~$\Lambda_{qg}=1/\sqrt{\theta}$ can be estimated from the experimental bound on the proton lifetime, $> 10^{34}$ years. The relation between $\theta$ and the proton mass to the $(D+1)$ power, multiplied by $\tau_p$, is shown in FIG.~\ref{plt_gamma}. The numerical results are summarized in Table~\ref{result} and visualized in FIG.~\ref{plt_dim}.


\begin{figure}[h!]
	\centering
	\includegraphics[width=0.9\linewidth]{tp.eps}
	\caption{\label{plt_gamma}The proton lifetime as a function of the noncommutativity parameter~$\theta$ in Planck units for $4,6,8,9,11$ and $26$ spacetime dimensions $D$.
}
\end{figure}
\vspace{1cm}
\begin{table}[h!]
\scalebox{1.5}{
	\begin{tabular}{|c|c|}
		\hline
	$D$ &$\Lambda_{qg}/\ell_p$ \\
		\hline
		4 & $2.269$ \\
		6 & $171.940$ \\
		8 & $1.489 \times 10^{3}$ \\
		9 & $3.059 \times 10^{3}$\\
		11 &$8.714\times 10^{3}$ \\
		26 & $1.320 \times 10^{5}$ \\
		\hline
	\end{tabular}
	}
\caption{\label{result}The bound on the quantum gravity noncommutativity scale $\Lambda_{qg}$, in Planck length units, from the observational bound on the proton lifetime, for different spacetime dimensions.}
\end{table}

\begin{figure}[t!]
	\centering
	\includegraphics[width=0.9\linewidth]{dim.eps}
	\caption{\label{plt_dim}Visualization of the relation between the spacetime dimensions $D$ and the minimum noncommutativity scale $\Lambda_{qg}/\ell_p$.}
\end{figure}


\section*{Conclusions}
We investigated the proton lifetime and how experimental results have shown the non-validity of many quantum gravity models. We suggested perceiving the decay of the proton as the thermal evaporation of virtual black holes within the context of noncommutative geometry. We used the lower incomplete gamma function to relate the Gaussian distribution of the mass density to the mass of Schwarzschild-like virtual black holes, and we calculated both the corresponding mass and the gravitational radius of the horizon of such a black hole in terms of the noncommutativity parameter $\theta$. This introduced an experimentally verifiable way to check the validity of seeing the micro-structure of spacetime in the context of noncommutative geometry. Finally, we numerically analyzed the process of decay in different $D$ dimensions, and we showed the possible bounds on the noncommutativity parameter~$\theta$. The study can be extended to investigate the implications of noncommutative geometry in cosmology, to be compared with the recent Planck data.
We hope to report on this in the future.

\section*{Acknowledgment}

The authors would like to thank the anonymous reviewers of the manuscript for their constructive suggestions to amend the presentation of the letter.\\
A.A. and S.A. were supported by a grant from the ``Research Center of the Female Scientific and Medical Colleges'', Deanship of Scientific Research, King Saud University.

\section{Scintillation Light}
\indent
Scintillation in liquid argon results from the radiative decays of excited molecular dimers, Ar$_2^*$, formed after the passage of charged particles. An excited dimer may be formed in either a singlet or a triplet state, which have very different radiative decay lifetimes, $\sim\!\! 7$ ns for the singlet and $\sim\!\! 1.6$ $\mu$s for the triplet~\cite{Kubota1978561}. The relative population of the fast (singlet) and slow (triplet) components is strongly correlated with the ionization density and hence with the nature of the primary ionizing particle and the deposited energy~\cite{PhysRevB.27.5279}. The typical fraction of the scintillation light in the fast component is $\sim\!\! 0.7$ for nuclear recoil events, which are heavily-ionizing, and $\sim\!\! 0.3$ for electron-mediated events -- this is the basis of pulse shape discrimination.

The rejection power achievable by PSD is strongly affected by the amount of light detected. This is due to the growing statistical precision with which the fast and slow component populations can be determined for any event type as the total number of detected photons increases. Consequently, the overall light collection and detection efficiency becomes a crucial figure of merit for the performance of these detectors. With the common use of photomultiplier tubes (PMTs) as light sensors, the overall light yield is often expressed in terms of the number of detected photoelectrons (p.e.)
per keV of energy deposited in the argon. This number, which in general depends on the nature of the ionizing particle and on the applied electric field, is usually quoted for electron recoil events at null field, and expressed in units of p.e./keV$_{ee}$ (for ``electron equivalent'' energy). In absolute terms, the detector light yield can be compared to the raw photon yield in liquid argon, $\sim\!\! 40$ scintillation photons per keV$_{ee}$ deposited~\cite{Doke1988291}. This scintillation light is peaked in the UV at 128 nm.

We measure the light yield of the DS-10 detector by studying the scintillation spectra from radioactive $\gamma$ sources deployed outside the cryostat. By normalizing the integrated signal on each PMT to the average integral corresponding to a single photoelectron, the light yield of the detector can be estimated from the spectral features of the $\gamma$ sources.

\section{The DS-10 Detector}
\indent
The DarkSide-10 detector, shown in Fig.~\ref{fig:ds-10}, is a two-phase (liquid and gas) TPC~\cite{Loer:2011bl}, incorporating several innovative design features intended to improve stability, simplicity, and performance. The device consists of an acrylic/fused silica vessel which contains the two-phase sensitive region. This ``inner vessel'' is completely immersed in a liquid argon (LAr) bath contained in a vacuum-insulated stainless steel Dewar. The PMTs are in the outer LAr bath, viewing the active volume of LAr through the inner vessel.

\begin{figure}[t]
\begin{center}
\includegraphics[width = \linewidth]{drawing.pdf}
\caption{Vertically-sectioned drawing of the DS-10 detector.}
\label{fig:ds-10}
\end{center}
\end{figure}

The inner vessel consists of an open-ended acrylic cylinder of 23.5~cm height, 24.1~cm inner diameter, and 1.9~cm wall thickness, sealed by PTFE-encapsulated steel-spring Creavey o-rings at the top and bottom to fused silica windows, 1.3~cm thick~\cite{creavey}.
The cylinder and windows are clamped together by a cage of spring-loaded, 0.95-cm-diameter stainless steel rods. The resulting seal is sufficiently ``bubble tight'' to contain the argon gas pocket required for two-phase operation.

The gas for the pocket is produced in a tube running vertically alongside the acrylic cylinder. LAr purified in the recirculation loop (see below) enters the tube and is boiled by a resistor operating at about one watt. A connecting pipe delivers the gas to the top of the inner vessel. The gas-liquid interface level is passively maintained 2.0 cm below the top fused silica window by a bubbler tube that vents gas from the pocket and ends in the LAr bath at the desired height. The liquid level is continuously measured by a set of discrete Pt-1000 thermistors and a capacitive level sensor in the boiling tube. Under normal operating conditions, including gas recirculation to the inner and outer vessels, the fluctuations in inner vessel liquid level are $<1$ mm.

The active volume of the detector, 21 cm in diameter, is defined by a reflector lining the acrylic cylinder. The reflector is made of overlapping sheets of 3M Vikuiti ESR~\cite{3m:2012vk}, a multilayer plastic foil, mounted inside a PTFE frame.

To detect the 128 nm argon scintillation light, we use the wavelength shifter tetraphenyl butadiene (TPB), with a peak emission wavelength of 420 nm~\cite{Burton:73}. The TPB fluorescence decay time is $\sim\!\! 1.8$~ns, short compared to the 7 ns fast component of the LAr scintillation~\cite{1970RSPSA.315..163P}. TPB is deposited by vacuum evaporation onto the reflector lining the acrylic cylinder and the inner surfaces of the fused silica windows.
The stainless steel mesh separating the electron drift and extraction regions of the TPC (described below), a few cm$^2$ of the reflector covered by $\alpha$ sources, and small gaps at the edges of the reflectors caused by differential thermal contraction are the only non-TPB-coated surfaces seen by the UV scintillation light from the active argon volume.

Measurements in a vacuum-UV spectrophotometer suggested an optimum TPB thickness of about 200\,$\mu$g/cm$^2$, a tradeoff between high UV-to-visible conversion efficiency and low absorption of the visible light. The reflector and windows were coated with 230-260 and 215-230 $\mu$g/cm$^2$ of TPB, respectively. The evaporations were performed in a large high-vacuum chamber using a Knudsen effusion cell. The typical vacuum level reached prior to the evaporation was (2-7)$\times 10^{-8}$ torr. After the evaporation the parts were kept in sealed bags filled with dry argon. During the detector assembly we took care to minimize exposure of TPB-coated surfaces to air, since degradation was noted during optical-bench testing. We accomplished this by flushing the inner vessel with argon gas throughout the assembly procedure. The fused silica windows were coated about three months before the measurements reported here began. The reflecting foils were coated 9 months before the run and were used in the preceding 3-month run in Princeton.

Wavelength-shifted scintillation light is collected by two arrays of seven Hamamatsu high-quantum-efficiency R11065 3'' PMTs~\cite{2012JInst...7.1016A}, viewing the active volume through the top and bottom fused silica windows. A $\sim$1 mm layer of LAr optically couples the PMTs to the windows. These fused-silica-window, metal-bulb tubes are operated at negative HV and are electrically insulated from the surrounding materials with PTFE spacers.
The PMTs have Hamamatsu-reported room-temperature quantum efficiencies at 420 nm ranging from 30.4 to 35.7\%, with an average of 33.9\%. They are run at a typical gain of $4\times 10^6$ with lines terminated with 50~$\Omega$ at both ends. To enhance the light collection, the spaces between the phototubes are filled with 1.3-cm-thick PTFE reflectors, and the small exposed areas of the stainless steel endplates are covered with 3M Vikuiti foil.

To allow the device to be operated as a time projection chamber, the inner vessel is equipped with a set of high voltage electrodes and a distribution system to provide the necessary electric fields in the sensitive volume to drift electrons to the surface of the liquid, to extract them into the gas, and to accelerate them through the gas, producing a secondary scintillation signal proportional to the collected ionization. The electrodes fixing the potential at the boundaries of the chamber are: a transparent Indium Tin Oxide (ITO) cathode on the inner surface of the bottom window, a Kapton flexible printed circuit board with overlapping etched copper strips, alternating between the two sides, wrapped around the acrylic cylinder, an etched stainless steel grid 5 mm below the liquid-gas surface, and an ITO anode on the bottom surface of the top window. For TPC operation, the anode is grounded and independently-controllable voltages on the cathode and grid set the drift and extraction fields, while a chain of resistors between the copper strips creates the graded potential that keeps the drift field uniform. To shield the negatively-biased PMT photocathodes from the voltages applied to the anode and cathode, each fused silica window carries a second ITO layer on its external face. These are maintained at approximately the average of the PMT photocathode voltages.
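For reference, the single-photoelectron (SPE) charge expected at this gain can be estimated directly (a back-of-envelope sketch; the factor-of-two split from terminating the line at both ends is our assumption about the readout):

```python
# Anode charge of a single photoelectron for a PMT gain of 4e6.
E_CHARGE_C = 1.602e-19            # electron charge in coulombs
GAIN = 4.0e6

spe_charge_pC = GAIN * E_CHARGE_C * 1e12       # anode charge in pC (~0.64 pC)
# With 50-Ohm termination at both ends, roughly half the signal current
# reaches the far end (an assumption about this readout chain).
spe_at_far_end_pC = spe_charge_pC / 2.0
print(spe_charge_pC, spe_at_far_end_pC)
```

This sets the scale for the single-photoelectron normalization used in the light yield measurement.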
\n\nTo obtain the light yield measurements presented here, the TPC anode, grid, and cathode, as well as the field-shaping copper strips, have all been kept at ground potential, giving null drift, extraction, and multiplication fields. In the rest of the paper this will be referred to as the \\emph{null field} configuration. In this configuration the device operates as a pure LAr scintillation detector. The TPC system nonetheless affects the scintillation optics. The measurements presented here were made with the gas pocket present, and the gas pocket affects the light propagation, primarily through total internal reflection in the LAr. The grid is a 100-$\\mu$m-thick stainless steel membrane etched with a hexagonal pattern of through holes 0.5 cm on a side, with an optical transparency for normally incident light of 89\\%. The ITO layers are 15 nm thick, the thinnest thought feasible by the vendor. Since ITO conducts, it has a complex index of refraction that results in absorption. At 420 nm, calculations give an absorption of 2\\% per layer at normal incidence, and all light must pass through at least two layers to reach the PMTs. \n\nA 90-W Cryomech PT90 cryocooler is connected to a cold head inside the Dewar but outside the inner vessel. The cold head provides the cooling power needed both to cool and condense argon gas in the detector during filling and to control the liquid argon temperature during normal operation. The cold head is instrumented with a temperature sensor and a 100-W heater, which are part of a feedback loop controlled by a Lakeshore 430 temperature controller. This allows the temperature of the system to be maintained at the boiling point of liquid argon (87.8 K) with typical fluctuations of less than 0.1~K. \n\nDissolved impurities such as nitrogen, oxygen, and water are known to strongly affect the scintillation properties of liquid argon~\\cite{2010JInst...5.5003A, 2010JInst...5.6003A}. 
The purity of the active argon in DS-10 is established and maintained by a number of measures. Before the detector is cooled, the Dewar is repeatedly flushed and pumped over several days at room temperature using research grade (99.999\\%) argon gas and a dry turbopump, achieving a final pressure of about $6\\times 10^{-5}$ mbar. This removes adsorbed impurities from metal surfaces and reduces subsequent outgassing from the internal plastic parts. Research grade atmospheric\\footnote{Atmospheric argon is used in this prototype, as opposed to the $^{39}$Ar-depleted underground argon being extracted for the DarkSide dark matter detectors~\\cite{AcostaKane200846}.} argon gas is also used for the fill. The gas is further purified by a single pass through a SAES MonoTorr PS4-MT3-R1 getter, which is sized and configured to reduce O$_2$, N$_2$, and H$_2$O impurities to sub-ppb levels~\\cite{Saes:2012sa}. During operation, argon purity is maintained by continuous gas recirculation: a metal bellows pump forces the boil-off argon from the Dewar through the MonoTorr getter, at about 15~slpm, and sends it back to the cold head for recondensation.\n\n\\section{Data Acquisition}\n\\indent\nThe data acquisition system consists of a set of 12-bit, 250-MS\/s digitizers (CAEN 1720) which record the signals from each of the 14 photomultiplier tubes and store them for offline analysis. To trigger the system, the anode signal from each PMT is first amplified tenfold by a LeCroy 612A fast amplifier with two parallel outputs. One output goes directly into the digitizer channel, which runs continuously, filling a circular memory buffer. In the digitizer, one sample at one count represents 0.0078 pC from the PMT. The other output is used to form a majority trigger. This requires a coincidence, within 100 ns, of at least 5 PMTs with signals above a threshold; the latter is set independently on each channel to about 1\/3 of the mean single-photoelectron amplitude. 
When an event satisfies the majority trigger condition, the data in the 14 circular buffers, representing a 35~$\\mu$s time window (5~$\\mu$s before the trigger and 30~$\\mu$s after), are downloaded to a PC and stored on a local hard disk. The acquired window length for the null field configuration has been selected to fully contain the slow component of the scintillation light, while also including relatively large pre- and post-trigger regions to allow for baseline evaluation. \n\n\\section{Single-Photoelectron Calibration}\n\\label{sec:spe}\nThe charge response of each PMT to a single photoelectron is evaluated using a laser calibration procedure, which was repeated frequently throughout the data runs analyzed here. Light pulses of $\\sim$70 ps duration at 440 nm wavelength from a diode laser are injected into the detector through an optical fiber that terminates on the bottom window of the inner vessel. Diffuse reflection from the TPB leads to a roughly uniform illumination of the 14 PMTs. The laser controller pulses the laser at a rate of 1000 Hz and simultaneously triggers the data acquisition system. Optical filters are placed between the laser and the fiber to adjust the intensity until the average number of photoelectrons generated on each tube in any given trigger, referred to as the average occupancy, is roughly 0.1. Unlike regular data runs, the digitization window for laser runs is only 1.5 $\\mu$s long. Within this record, a 0.8 $\\mu$s period before the pulse arrival time is used to define the baseline. After subtraction of this baseline, the integral of the recorded waveform is evaluated within a fixed 92-ns window around the arrival time of the laser pulse. 
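The choice of a low target occupancy can be motivated with a quick Poisson calculation (an illustrative sketch, not part of the calibration code):

```python
import math

lam = 0.1  # target average occupancy per PMT per laser flash

p0 = math.exp(-lam)        # probability of no photoelectron (~90.5%)
p1 = lam * math.exp(-lam)  # probability of exactly one photoelectron (~9.05%)
p_multi = 1.0 - p0 - p1    # probability of two or more photoelectrons (~0.47%)

# Fraction of non-empty flashes contaminated by multi-p.e. pulses: ~4.9%.
contamination = p_multi / (1.0 - p0)
print(f"P(0)={p0:.4f}, P(1)={p1:.4f}, multi-p.e. contamination={contamination:.1%}")
```

At $\\lambda \\approx 0.1$, roughly 95\\% of the non-empty flashes contain exactly one photoelectron, so the single-p.e. peak dominates the charge spectrum while the multi-p.e. terms remain a small correction in the fit.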
The resulting charge spectrum for each PMT is then fitted to a model function, allowing the mean of the single-photoelectron charge response to be determined.\n\nThe fitting function used is\n\\begin {align}\nF(x) = \\sum_{n=0}^7 P(n;\\lambda)f_n(x)\n\\end {align}\nwhere $P(n;\\lambda)$ is a Poisson distribution with mean $\\lambda$, representing the average occupancy, and $f_n(x)$ the $n$-photoelectron charge ($x$) response of the system. We have modeled the $n$-photoelectron response of the system as \n\\begin {align}\nf_n(x) = \\rho(x) \\ast \\psi_1^{n\\ast}(x)\n\\label{eq:npe}\n\\end {align}\nwhere $\\rho$ denotes the zero photoelectron response (pedestal), $\\ast$ is a convolution, and $\\psi_1^{n\\ast}$ is the $n$-fold convolution of the PMT single-photoelectron response function, $\\psi_1$, with itself. The function representing the pedestal, $\\rho$, the integral in the absence of any photoelectrons and thus the entire $n=0$ term, is described by a Gaussian, while the PMT single-photoelectron response, $\\psi_1$, is modeled by the weighted sum of a decaying exponential and a Gaussian, truncated at zero,\n\\begin {align}\n\\psi_1(x) = \\begin{cases}\np_E\\left(\\frac{1}{x_0}e^{-x\/x_0}\\right)+(1-p_E)\\mathrm{G}(x;x_m,\\sigma) & x>0;\\\\\n0 & x\\leq 0.\n\\end{cases}\n\\label{eq:pdf}\n\\end {align}\nThe Gaussian term $\\mathrm{G}(x;x_m,\\sigma)$ represents the single-photoelectron response from the full dynode chain, while the exponential term accounts for incomplete dynode multiplication~\\cite{Dossi2000623, 2011ITNS...58.1290D}. \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width = \\linewidth]{spe.pdf}\n\\caption{Example of the charge response spectrum of a single PMT exposed to low-occupancy laser flashes. The horizontal axis measures charge in integrated digitizer counts (counts $\\cdot$ samples), where 1~count~$\\cdot$~sample corresponds to a PMT output charge of 0.0078 pC. 
The colored curves represent components in the fit function used in the calibration. Green: pedestal. Dashed magenta: Gaussian and exponential terms of the single-p.e.~model convolved with the pedestal. Solid magenta: full single-p.e.~response convolved with the pedestal. Solid blue: 2-p.e.~response. Dotted blue: $\\geq 3$-p.e.~response. Solid red: sum of all components. }\n\\label{fig:laser_fit}\n\\end{center}\n\\end{figure}\n\nThe fit is performed with seven free parameters: the average occupancy $\\lambda$, the mean and standard deviation of the pedestal Gaussian, the mean $x_m$ and standard deviation $\\sigma$ of the single-photoelectron Gaussian, the decay constant $x_0$ of the single-photoelectron exponential component, and the relative weight $p_E$ between the single-photoelectron Gaussian and exponential terms. In order to simplify the computation, for $n \\ge 3$ the function $\\psi_1^{n\\ast}$ is approximated by a Gaussian whose mean and variance are $n$ times those of the single-photoelectron response $\\psi_1$. Figure~\\ref{fig:laser_fit} shows a sample spectrum and fit for a laser run on a single channel. Due to the presence of the exponential term, the mean of the single-photoelectron response is, on average, 13\\% lower than that of the Gaussian single-photoelectron component alone.\n\n\\section{Event Analysis}\n\\label{sec:source}\nFor each individual channel, we determine a baseline and subtract it from the digitized waveform. Because argon scintillation pulses extend over several microseconds and eventually taper into sparse, individual photoelectrons, this requires careful treatment. The baseline is calculated by two different methods. 
The first, referred to as the \\emph{linear-baseline} method, calculates the average of the digitized samples in a gate at least 1.0~$\\mu$s wide before the trigger in the acquisition window (where no signal is expected) and, when possible, at the end of it, using a linear interpolation between these two values as the baseline in the region in between. If the two values differ by more than 1 ADC count, the event is rejected. The second algorithm uses a moving average, over a window of length 80~ns, to calculate a local baseline in regions where the waveform fluctuations are consistent with electronic noise. In regions where sharp excursions are found (such as under scintillation pulses, including single photoelectrons), the baseline is linearly interpolated between the most recent quiet intervals. This \\emph{moving-average baseline} is intended to remove slowly varying fluctuations, such as possible low-frequency interference. \n\nOnce the baseline has been subtracted, the waveform for each channel is divided by the corresponding single-photoelectron mean, as obtained from the nearest laser calibration run. The scaled waveforms from all 14 channels are then added together to form a summed waveform, which is further analyzed to identify scintillation pulses. The pulse-finding algorithm identifies a start time and an end time for the scintillation signal; the integral in that interval is then evaluated independently for each of the 14 scaled channels, and these integrals are summed to give an estimate of the total number of photoelectrons in the scintillation event. \n\n\\section{Response to Cobalt, Cesium, and Sodium Sources}\n\\indent\nEvents with known energy depositions are obtained by exposing the detector to a series of external radioactive $\\gamma$ sources. The gamma sources are collimated by means of a 45.5-mm-thick lead collimator with a 10-mm-diameter hole. 
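The per-channel scaling and summing described above can be sketched as follows (toy waveforms and hypothetical single-p.e. means stand in for real baseline-subtracted data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_samples = 14, 250  # 14 PMTs; toy record length

# Toy baseline-subtracted waveforms in ADC counts (stand-ins for real traces).
waveforms = rng.poisson(0.05, size=(n_channels, n_samples)).astype(float)
# Hypothetical per-channel single-p.e. means from the nearest laser calibration.
spe_means = rng.uniform(70.0, 90.0, size=n_channels)

# Convert each channel to photoelectron units, then sum across channels.
scaled = waveforms / spe_means[:, None]
summed = scaled.sum(axis=0)

# Integrating each scaled channel over the found pulse window and summing
# the integrals estimates the total photoelectron count of the event.
start, end = 50, 200  # hypothetical pulse start/end samples
total_pe = scaled[:, start:end].sum()
print(f"estimated photoelectrons in pulse window: {total_pe:.3f}")
```

Summing the per-channel integrals over the same window is equivalent to integrating the summed waveform, so either order gives the same photoelectron estimate.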
Three collimator positions along the vertical direction are used: \\emph{bottom}, \\emph{central}, and \\emph{top}, corresponding to 20~mm, 105~mm, and 158~mm from the TPC cathode. Due to the large amount of material between the source collimator (located outside the LAr Dewar) and the active volume, as well as the relatively large size of the sensitive region, the spectra are degraded, leaving full-absorption peaks as the most visible and reliable features for light yield estimates.\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{|c|c|c|c|}\n \\hline \\hline\n Source & E$_{\\gamma}$ [keV] & I$_{\\gamma}$ [\\%] & Activity [$\\mu$Ci] \\\\ \\hline \\hline\n $^{57}$Co & 14.41 & 9.16 & \\multirow{3}{*}{0.96}\\\\\n $^{57}$Co & 122.06 & 85.60 &\\\\\n $^{57}$Co & 136.47 & 10.68 & \\\\\n \\hline\n $^{22}$Na & 510.99 & 180.76 & \\multirow{2}{*}{1.08}\\\\\n $^{22}$Na & 1274.53 & 99.94 &\\\\\n \\hline\n $^{137}$Cs & 661.66 & 85.10 & 0.94\\\\\n \\hline \n \\end{tabular}\n \\caption{$\\gamma$ energies and intensities~\\cite{Lbnl:2012:ti}, and activities of the sources used. The 511-keV $^{22}$Na line results from positron annihilation, which produces two back-to-back 511-keV $\\gamma$ rays; this is why its intensity exceeds 100\\%. }\n \\label{tab:isotope}\n\\end{table}\n\\indent\nLight yield measurements have been performed with $^{57}$Co, $^{22}$Na, and $^{137}$Cs, whose main gamma energies and intensities are summarized in Table~\\ref{tab:isotope}. The data for a single spectrum contain about 1,000,000 events taken over a few hours. Gamma rays interact in the active volume through Compton scattering and the photoelectric effect, with events in the full-energy peak typically resulting from multiple interactions. 
Figures~\\ref{fig:spectrum_co},~\\ref{fig:spectrum_cs}, and~\\ref{fig:spectrum_na} show the gamma-induced scintillation spectra obtained with the three sources collimated at the central position, after subtraction of a background spectrum (Fig.~\\ref{fig:spectrum_bkgd}) acquired with no source present and scaled by the ratio of the livetimes. The events in the plots were analyzed using the linear-baseline algorithm. \n\nA minimal set of cuts is applied in order to remove from the spectra:\n\\begin{itemize} \n \\setlength{\\itemsep}{0pt}\n \\setlength{\\parskip}{0pt}\n \\setlength{\\parsep}{0pt}\n\\item events saturating the digitizer ADC of any channel;\n\\item events with a rejected baseline (described above);\n\\item events in which the first found pulse in the acquisition window is not within 100 ns of the trigger time.\n\\end{itemize}\nThese cuts typically retain $>\\!\\!98\\%$ of the triggered events.\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{|c|c|c|c|}\n \\hline \\hline\n E$_{\\gamma}$ & $\\mu_p$ & $\\sigma_p$ & LY$_{\\gamma}$ \\\\ \n {[keV$_{ee}$]} & [p.e.] & [p.e.] & [p.e.\/keV$_{ee}$]\\\\ \\hline \\hline\n 122.06 & 1082.0$\\pm$2.3 & 56.80 & 8.865$\\pm$0.019 \\\\\n 510.99 & 4486.4$\\pm$2.5 & 152.86 & 8.780$\\pm$0.007\\\\\n 661.657 & 6009.6$\\pm$1.8 & 186.19 & 9.083$\\pm$0.005\\\\\n 1274.53 & 10961.9$\\pm$6.7 & 318.07 & 8.601$\\pm$0.007\\\\ \\hline\n \\end{tabular}\n \\caption{Fitted gamma full-absorption peak mean, width, and light yield. The error on $\\mu_p$ is the statistical error from the fit. The error on LY$_{\\gamma}$ is the fit error combined with the statistical error on the mean single-p.e.~response.}\n \\label{tab:fits}\n\\end{table}\n\nLooking at the scintillation spectra in Figs.~\\ref{fig:spectrum_co}-\\ref{fig:spectrum_na} it is evident that, while the Compton edges are degraded, the full-absorption peaks are very clear. 
This is consistent with expectations from a GEANT4-based Monte Carlo simulation of the experimental setup, including the material between the source and the active volume. The experimental spectra in the region of the full absorption peaks are fitted with a Gaussian function with mean value $\\mu_p$ and standard deviation $\\sigma_p$. For $^{22}$Na and $^{137}$Cs, where the Compton edge and degraded gamma tails are more significant, a falling exponential term is added to the fit function. For $^{57}$Co the full absorption peaks of the 122 and 136 keV lines are not individually resolved and hence the spectrum was fit with the sum of two Gaussians. The ratios of both the means and the variances of the two Gaussians were fixed to the ratio of the energies, and the ratio of the integrals was fixed to the relative intensity of the two $\\gamma$ rays (see Table \\ref{tab:isotope}). Fitting these functions to simulated spectra reproduces the true peak positions to better than 1\\%. The best-fit functions are also shown in Figs.~\\ref{fig:spectrum_co}-\\ref{fig:spectrum_na}. The best-fit values of the parameters of interest are summarized in Table~\\ref{tab:fits} together with the light yield (LY$_{\\gamma}$), defined for each fit as $\\mu_p$\/E$_{\\gamma}$.\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width = \\linewidth]{57Co_spectrum.pdf}\n\\caption{Scintillation spectrum of $^{57}$Co collimated at the central position after subtraction of a background spectrum. The full absorption peak has been fit with two Gaussians (see text). The best-fit function is superimposed on the histogram in the energy range over which the fit was performed.}\n\\label{fig:spectrum_co}\n\\end{center}\n\\end{figure}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width = \\linewidth]{137Cs_spectrum.pdf}\n\\caption{Scintillation spectrum of $^{137}$Cs collimated at the central position after subtraction of a background spectrum. 
The full absorption peak has been fit with the sum of a Gaussian and a falling exponential. The best-fit function is superimposed on the histogram in the energy range over which the fit was performed.}\n\\label{fig:spectrum_cs}\n\\end{center}\n\\end{figure}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width = \\linewidth]{22Na_spectrum.pdf}\n\\caption{Scintillation spectrum of $^{22}$Na collimated at the central position after subtraction of a background spectrum. The full absorption peaks at 511~keV and 1274~keV have been fitted with the sum of a Gaussian and a falling exponential. The best-fit functions are superimposed on the histogram in the energy ranges over which the fits were performed.}\n\\label{fig:spectrum_na}\n\\end{center}\n\\end{figure}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width = \\linewidth]{background_spectrum.pdf}\n\\caption{Background spectrum acquired with no calibration sources present.}\n\\label{fig:spectrum_bkgd}\n\\end{center}\n\\end{figure}\n\\indent\nThe measured widths of the full absorption peaks deserve a separate discussion. As reference we obtain an energy resolution of 3.1\\% ($\\sigma_p\/\\mu_p$) for 662 keV $\\gamma$ rays. The variance of the detector response, in photoelectrons, to a mono-energetic energy release can be described as~\\cite{Saldanha:2012rs}\n\\begin{equation}\n \\sigma_p^2 = \\sigma^2_\\text{baseline} + \\sigma^2_\\text{pe} + \\sigma^2_\\text{PMT} + \\sigma^2_\\text{geom}\n \\label{eq:sigma}\n\\end{equation}\nwhere\n\\begin{itemize}\n \\setlength{\\itemsep}{0pt}\n \\setlength{\\parskip}{0pt}\n \\setlength{\\parsep}{0pt}\n\\item $\\sigma_\\text{baseline}^2$ is the variance of the baseline noise, integrated over the length of the pulse. It increases with the amplitude of the scintillation signals and, for 662 keV $\\gamma$ rays, has a typical value of 2800 p.e.$^2$;\n\\item $\\sigma_\\text{pe}^2$ is the variance in the number of photoelectrons produced. 
We assume the scintillation process to be Poissonian, with the variance equal to the mean number of photoelectrons, $\\mu_p$; \n\\item $\\sigma^2_\\text{PMT}$ is the variance of the photomultiplier response. For PMTs with a similar response, it is approximated by $\\sigma^2_{\\psi_1}\\cdot \\mu_p$, where $\\sigma^2_{\\psi_1}$ is the relative variance of the single-photoelectron response (see Eq.~\\ref{eq:pdf}) averaged over all channels, with a typical value of 0.2;\n\\item $\\sigma^2_\\text{geom}$ is the geometrical variance, associated with spatial non-uniformities in the light collection of the detector. Due to their different mean interaction lengths in liquid argon, gammas of different energies can probe different regions of the active volume. Thus, the geometrical variance is expected to be different for different sources. The total variance due to geometrical effects can be written as $\\sigma^2_\\text{LY}\\cdot{\\mu_p}^2$, where $\\sigma^2_\\text{LY}$ is the relative variance of the light yield over the interaction region. The order of magnitude of this term can be qualitatively compared to the observed variation of LY$_\\gamma$ with the vertical position of the source. $^{22}$Na source runs performed with the collimator located in the top and bottom positions show, respectively, a decrease of 4.4\\% and an increase of 5.6\\% in the light yield with respect to the central position. This asymmetry agrees with expectations, as the vertical symmetry of the system is broken by the presence of the liquid-gas interface and the grid near the top, both of which favor light collection by the bottom PMTs. We note that in full TPC mode, three-dimensional event reconstruction allows these non-uniformities in the detector response to be measured and corrections applied. 
\n\\end{itemize}\nFrom the estimates for the individual terms above, at 662 keV we obtain an energy resolution of $\\sim$ 0.9\\% from the baseline, $\\sim 1.3\\%$ from photoelectron statistics, and $\\sim$ 0.6\\% from the PMT response. The residual resolution in the observed response ($\\sim2.6$\\%) is of the same order of magnitude as the estimated contribution for the geometrical variance. It should be noted that Eq.~\\ref{eq:sigma} does not account for any additional variance from multiple Compton scattering (such as non-linear quenching) or possible non-Poissonian fluctuations in the distribution of scintillation photons \\cite{Doke1976353, 2003PhRvB..68e4201C, PhysRevB.76.014115}. \n \nLight collection performance has shown good stability with time. A $^{22}$Na calibration run (collimated at the central position) performed 53~days after the one shown in Table~\\ref{tab:fits} gives LY$_\\gamma=9.142\\pm$0.006~p.e.\/keV$_{ee}$ for the 511~keV line. The observed light yield increase of about 4\\% is likely associated with an improvement in the liquid argon purity due to the running of the purification system between the two measurements. Argon contaminants such as N$_{2}$ and O$_{2}$ are known to quench the argon scintillation light via non-radiative collisional de-excitation~\\cite{2010JInst...5.5003A, 2010JInst...5.6003A}. This process also reduces the observed slow-component lifetime.\nFigure~\\ref{fig:lifetime} shows average scintillation waveforms from the two runs. Independent of any particular model, the slow-component lifetime has clearly improved from the first to the second run, suggesting the elimination of de-exciting contaminants. The fit to an exponential in the range 1.0-5.0~$\\mu$s provides lifetimes of (1.4601$\\pm$0.0007)~$\\mu$s for the first run and (1.5349$\\pm$0.0008)~$\\mu$s for the second,\nwhere the errors are statistical only. 
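These fitted lifetimes can be related to the total light yield through a simple collisional-quenching estimate, in which the fast component is unaffected and the slow-component light is suppressed by the factor $\\tau_\\text{obs}\/\\tau_0$. In the sketch below, the impurity-free lifetime of 1.6~$\\mu$s and the slow-component fraction of 0.75 are assumed illustrative inputs:

```python
# Collisional-quenching sketch: slow-component light scales as tau_obs / tau0.
tau0 = 1.6      # assumed impurity-free slow-component lifetime (us)
tau1 = 1.4601   # fitted slow-component lifetime, first run (us)
tau2 = 1.5349   # fitted slow-component lifetime, second run (us)
f_slow = 0.75   # assumed slow-component fraction of the unquenched light

def rel_yield(tau):
    # Fast component unaffected; slow component suppressed by tau / tau0.
    return (1.0 - f_slow) + f_slow * tau / tau0

increase = rel_yield(tau2) / rel_yield(tau1) - 1.0
print(f"predicted total light-yield increase: {increase:.1%}")  # ~3.8%
```

Varying the assumed slow-component fraction between 0.7 and 0.8 moves the prediction only between roughly 3.5\\% and 4.0\\%, so the estimate is not very sensitive to this input.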
A simple model with an absolute-purity slow-component lifetime of 1.6~$\\mu$s predicts that this increase in lifetime would correspond to an increase in total light yield of 3.8\\%, in good agreement with that observed.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width = \\linewidth]{lifetime.pdf}\n\\caption{Average scintillation waveforms from a single PMT in 0.2~$\\mu$s bins. The waveforms are from two $^{22}$Na runs collimated at the central position, one (red) taken 53~days after the other (black). \nThere has been a clear increase in the slow-component lifetime between the runs.\nThe change in the leading edge at the left of the plot is thought to be due to a small change in the trigger timing.}\n\\label{fig:lifetime}\n\\end{center}\n\\end{figure}\n\nSeveral sources of systematic uncertainty have been considered and are summarized in Table~\\ref{tab:fit_syserr}. As discussed in Sec.~\\ref{sec:source}, the algorithm used to evaluate the baseline affects the integral of the digitized signals. A study of the effect of the baseline algorithm on simulated data has shown that the moving-baseline algorithm tends to underestimate the true integral for events with a large number of photoelectrons. Nonetheless, we include the difference in $^{137}$Cs light yields between the two baseline algorithms as a systematic\nuncertainty in Table~\\ref{tab:fit_syserr}.\n\nA second source of systematic uncertainty is the function modeling the spectrum. One component of this uncertainty is the use of an exponential to model the spectrum under the Gaussian in the full-absorption-peak fits. We conservatively estimate this uncertainty by re-fitting the $^{137}$Cs peak with a Gaussian only. The observed variation in the fit result is 0.07\\%. A contribution of the same order is attributed to the background subtraction, estimated by re-fitting the $^{137}$Cs spectrum without subtracting the background. 
Fitting simulated $^{137}$Cs and $^{22}$Na spectra with the same Gaussian+exponential used on data shows systematic displacement of the fitted peak from the true value, typically 0.7\\%. We combine these three components into the ``Fit function\" entry in Table~\\ref{tab:fit_syserr}.\n\nIn the fit of the single-photoelectron spectrum, the parameters of the exponential term have shown some instability when noise increases the pedestal width. This can result in sizable excursions in individual channels. To explore this, we measured the values of the exponential parameters for each PMT using a single laser run, chosen to be relatively clean. The full laser calibration was redone with these parameters fixed and the calibration was used to reanalyze the source spectra. Shifts of up to 0.5\\% are observed in the resulting light yields and we assign this as a systematic error. We vary the spectrum binning and the integration region to estimate systematic uncertainties associated with the mechanics of the single-photoelectron fit. These variations have $\\sim\\!\\!1\\%$ effects on the light yield estimate and, combined with the systematic uncertainty from the exponential term, are included in Table~\\ref{tab:fit_syserr}.\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{|c|c|}\n \\hline \\hline\n Source & [\\%] \\\\ \\hline \\hline\n Baseline algorithm & 4.9 \\\\\n Fit function & 0.7\\\\\n Single photoelectron & 1.0\\\\ \\hline\n Total & 5.0\\\\\n \\hline \\hline\n \\end{tabular}\n \\caption{Systematic uncertainties in light yield measurement [\\%]}\n \\label{tab:fit_syserr}\n\\end{table}\n\n\\section{Conclusions}\n\\indent\nThe light yield reported here greatly exceeds that measured in the previous run of DS-10, about 4.5~p.e.\/keV$_{ee}$. Since the previous run, a number of modifications were made to the detector. The most relevant of these were the replacement of the bottom PMT array and the replacement of the top and bottom windows. 
In the previous run, the bottom PMT array consisted of a single 8\" Hamamatsu R5912-02 PMT. This was replaced with an array of seven 3\" R11065s, matching the top array. The new array had less photocathode coverage (partly compensated by filling the gaps between the 3\" PMTs with PTFE reflectors), but much higher quantum efficiency (averaging 33.9\\% vs.~18\\%). In the previous run, the windows were acrylic, with 100-nm-thick ITO on both sides. A 100 nm ITO layer is calculated to absorb 10\\% of 420 nm light at normal incidence, with a large effect on the light yield when multiple passes and non-normal incidence are considered. However, thinner coatings were not recommended on acrylic. The replacement windows, fused silica with 15-nm ITO, were expected to provide considerably better light yield. \n\nAs described in Sec.~\\ref{sec:spe}, the light yields reported here depend directly on the single-photoelectron calibration, in which the response was fitted to the single-p.e. model of Eq.~\\ref{eq:pdf}. The first, exponential, term lowers the single-p.e.~mean, and thus raises the inferred number of p.e. in a signal of a given integrated charge. The presence of such a term in the single-p.e. response of the PMTs is motivated by structure below the single-p.e. peak observed to be correlated with laser activity. However, its inclusion in the light yield may not be appropriate for all applications, notably those that count single photoelectrons above some threshold. As discussed in Sec.~\\ref{sec:spe}, including less of the exponential term changes the result by at most 13\\%.\n\nThe light yield achieved in DarkSide-10, 9.142$\\pm$0.006(stat)$\\pm$0.457(sys)~p.e.\/keV$_{ee}$ after the purification campaign, is well in excess of the light yields proposed in Reference~\\cite{Boulay2006179} and assumed in the background calculations for the 50-kg DarkSide-50~\\cite{darkside}. 
It demonstrates that excellent light yield can be achieved in the elaborate structure of a TPC.\n\\\\ \\\\\nWe acknowledge support from the NSF (U.S., Grants PHY-0919363, PHY-1004072, and associated collaborative Grants), DOE (U.S., Contract Nos. DE-FG02-91ER40671 and DE-AC02-07CH11359), and the Istituto Nazionale di Fisica Nucleare (Italy).\n\n\\bibliographystyle{model1-num-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nImages (\\emph{a.k.a.} graphicons) are another important means of expressing feelings and emotions in communication, in addition to text.\nIn mobile messaging apps, these images can generally be classified into emojis and stickers.\nAn emoji is a small picture that is already built into the keyboards of most mobile operating systems, \\emph{e.g.,} iOS or Android.\nEmojis are pre-designed by the phone vendor (the emoji set is now managed by a standards organization), their number is limited, and users cannot design emojis by themselves.\nUnlike the inflexible emojis, a sticker is essentially an image or graphicon~\\cite{Seta2018BiaoqingTC,Herring2017NicePC,Ge2018CommunicativeFO}: users can draw or modify images themselves and upload them to the chat app as stickers.\nThe use of stickers in online chat brings diversity to the expression of emotion.\nEmojis, due to their small size and limited variety, are mostly used to reinforce simple emotions in a text message.\nStickers, on the other hand, can be regarded as an alternative to text messages; they usually include cartoon characters and are of high definition.\nThey can express much more complex and vivid emotions than emojis.\nMost messaging apps, such as WeChat, Telegram, WhatsApp, and Slack, provide convenient ways for users to download stickers for free, or even share self-designed ones.\nWe show a chat window including stickers in Figure~\\ref{fig:example}. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.15]{figs\/case.png}\n \\caption{\n An example of stickers in a multi-turn dialog. The sticker response selector automatically selects the proper sticker based on the multi-turn dialog history. \n %\n }\n \\label{fig:example}\n\\end{figure}\n\nStickers are becoming more and more popular in online chat.\nFirst, sending a sticker with a single click is much more convenient than typing text on the 26-letter keyboard of a small mobile phone screen.\nSecond, there are many implicit or strong emotions that are difficult to express in words but can be captured by stickers with vivid facial expressions and body language.\nHowever, the large-scale use of stickers means that it is not always straightforward to think of the sticker that best expresses one's feeling in the current chatting context.\nUsers need to recall all the stickers they have collected and select the appropriate one, which is both difficult and time-consuming.\n\nConsequently, much research has focused on recommending appropriate emojis to users according to the chatting context.\nExisting works, such as~\\cite{xie2016neural}, mostly focus on emoji recommendation, predicting the probable emoji given the contextual information from multi-turn dialog systems.\nIn contrast, other works~\\cite{barbieri2017emojis,barbieri2018multimodal} recommend emojis based on the text and images posted by a user.\nAs for sticker recommendation, existing works such as~\\cite{laddha2019understanding} and apps like Hike or QQ directly match the text typed by the user to the short text tag assigned to each sticker.\nHowever, since there are many ways of expressing the same emotion, it is very hard to capture all variants of an utterance as tags.\n\nTo overcome these drawbacks, we proposed a sticker response selector (SRS) in our earlier work~\\cite{gao2020sticker}, where we address the task of sticker response selection in multi-turn 
dialog.\nIn this work, we focus on two main challenges:\n(1) Existing image recognition methods are mostly built for real-world photos, so capturing the semantic meaning of a sticker is challenging.\n(2) Understanding the multi-turn dialog history is crucial for sticker recommendation, and jointly modeling the candidate sticker with the multi-turn dialog is challenging.\nHerein, we propose a novel sticker recommendation model, namely the \\emph{sticker response selector} (SRS), for sticker response selection in multi-turn dialog.\nSpecifically, SRS first learns representations of the dialog context history using a self-attention mechanism and learns the sticker representation by a convolutional neural network (CNN). \nNext, SRS conducts deep matching between the sticker and each utterance and produces the interaction results for every utterance.\nFinally, SRS employs a fusion network, consisting of a fusion RNN and a fusion Transformer, to learn the short- and long-term dependencies of the utterance interaction results.\nThe final matching score is calculated by an interaction function.\nTo evaluate the performance of our model, we construct a large-scale multi-turn dialog dataset associated with stickers, collected from one of the popular messaging apps. \nExtensive experiments conducted on this dataset show that SRS significantly outperforms the state-of-the-art baseline methods in commonly-used metrics.\n\nHowever, the user's sticker selection depends not only on the matching degree between the dialog context and the candidate sticker image, but also on the user's preference for stickers.\nWhen users decide to use a sticker as their response in a multi-turn dialog, they may choose their favorite one from all appropriate stickers as the final response. 
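Abstracting away the learned components, this scoring pipeline can be sketched as follows. This is a minimal NumPy toy with hypothetical helper names; in the actual model the cosine interaction, the mean fusion, and the sigmoid are replaced by the deep interaction network, the fusion RNN and fusion Transformer, and a learned interaction function, respectively.

```python
import numpy as np

def cosine(a, b):
    # similarity between one utterance encoding and the sticker encoding
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def srs_score(utterance_reps, sticker_rep):
    """Toy stand-in for the SRS scoring pipeline.

    utterance_reps: list of d-dim vectors (self-attention outputs in the paper)
    sticker_rep:    d-dim vector (CNN output in the paper)
    """
    # (1) deep interaction: one matching result per utterance
    interactions = np.array([cosine(u, sticker_rep) for u in utterance_reps])
    # (2) fusion: the paper uses a fusion RNN plus a fusion Transformer;
    #     a plain mean is the simplest aggregation over turns
    fused = interactions.mean()
    # (3) interaction function -> matching score in (0, 1)
    return 1.0 / (1.0 + np.exp(-fused))
```

Candidate stickers would then be ranked by this score, and the highest-scoring one selected as the response.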
\nWe assume that a user tends to reuse recently used stickers in their dialogs, and that these recently used stickers can represent the user's sticker selection preference.\nAn example is shown in Figure~\\ref{fig:user-preference-case}.\nTo verify this assumption, we retrieve the 10 most recently used stickers of each user and calculate the proportion of cases in which the currently used sticker appears among these 10 stickers.\nThe result shows that 54.09\\% of the currently used stickers exist in the set of 10 recently used stickers.\nHence, we conclude that users have a strong personal preference when selecting a sticker as their response for the current dialog context.\nAdmittedly, in some cases this may also indicate a tendency to re-use stickers rather than a genuine preference.\n\nMotivated by this observation, in this work we take one step further and improve our previously proposed SRS framework with user preference modeling.\nOverall, we propose a novel sticker recommendation model that considers the user preference, namely the \\emph{Preference Enhanced Sticker Response Selector} (PESRS).\nSpecifically, PESRS first employs a convolutional network to extract features from the candidate stickers.\nThen, we retrieve the user's recent sticker selections, and a user preference modeling module is employed to obtain a user preference representation.\nNext, we conduct deep matching between the candidate sticker and each utterance in the same way as SRS.\nFinally, we use a gated fusion method to combine the deep matching result and the user preference into the final sticker prediction.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.15]{figs\/tois-case.png}\n \\caption{\n A user's history dialog contexts and the selected stickers. The four panels on the left show history dialog contexts with the selected stickers, and the panel on the right shows the current dialog context with the sticker selected by the user. Users tend to use the same sticker when the dialog context is semantically similar. 
\n }\n \\label{fig:user-preference-case}\n\\end{figure}\n\nThe key to the success of PESRS lies in how to design the user preference modeling module, which should not only identify the user's favorite stickers but also consider the current dialog context.\nMotivated by this, we first propose a recurrent neural network (RNN) based position-aware sticker modeling module that encodes the recently used stickers in chronological order.\nThen, we employ a key-value memory network to store these sticker representations as values and the corresponding dialog contexts as keys.\nFinally, we use the current dialog context to query the key-value memory and obtain the dynamic user preference for the current dialog context.\n\nWe empirically compare PESRS and SRS on the public dataset\\footnote{https:\/\/github.com\/gsh199449\/stickerchat} proposed by our early work~\\cite{gao2020sticker}.\nThis is a large-scale real-world Chinese multi-turn dialog dataset, in which the dialog context consists of multiple text utterances and the response is a sticker image.\nExperimental results show that on this dataset, our newly proposed PESRS model significantly outperforms the existing methods. \nParticularly, PESRS yields 4.8 and 7.1 percentage-point improvements in terms of $MAP$ and $R_{10}@1$, respectively, compared with our early work SRS.\nIn addition to the comprehensive evaluation, we also evaluate our proposed user preference memory by a fine-grained analysis.\nThe analysis reveals how the model leverages the user's recent sticker selection history and provides insights into why it achieves a large improvement over state-of-the-art methods.\n\nThis work is a substantial extension of our previous work reported at WWW 2020. 
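The query step just described can be sketched as a standard key-value memory read. This is a minimal NumPy sketch under the assumption of fixed $d$-dimensional encodings; in the actual model the keys, values, and query are all learned representations.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def read_preference(query, keys, values):
    """Toy key-value memory read for user preference modeling.

    query:  d-dim encoding of the current dialog context
    keys:   (T_h, d) encodings of the T_h history dialog contexts
    values: (T_h, d) position-aware encodings of the selected stickers
    """
    # addressing: similarity of the current context to each stored context
    weights = softmax(keys @ query)
    # reading: preference representation as a weighted sum of sticker slots
    return weights @ values
```

When the current context closely matches one stored history context, the read output is dominated by the sticker representation held in that slot.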
\nThe extension in this article includes a user preference modeling framework on top of the existing method, resulting in a new framework for sticker selection in multi-turn dialog.\nSpecifically, the contributions of this work include the following:\n\n\\begin{itemize}\n \\item We propose a position-aware sticker modeling module that can model the user's sticker selection history.\n \\item We propose a key-value memory network to store the user's recently used stickers and their corresponding dialog contexts.\n \\item Finally, we use the current dialog context to query the key-value memory and obtain a user preference representation, and then fuse the user preference representation into the final sticker prediction dynamically. \n \\item Experiments conducted on a large-scale real-world dataset show that our model outperforms all baselines, including state-of-the-art models. Experiments also verify the effectiveness of each module in PESRS as well as its interpretability.\n\\end{itemize}\n\nThe rest of the paper is organized as follows:\nWe summarize related work in \\S\\ref{sec:related}. \n\\S\\ref{sec:dataset} introduces the data collection method and some statistics of our proposed multi-turn dialog sticker selection dataset.\nWe then formulate our research problem in \\S\\ref{sec:formulation} and elaborate on our approach in \\S\\ref{sec:model}. \n\\S\\ref{sec:exp-setup} gives the details of our experimental setup and \\S\\ref{sec:exp-result} presents the experimental results. 
\nFinally, \\S\\ref{sec:conclusion} concludes the paper.\n\n %\n\\section{Related Work}\\label{sec:related}\n\nWe outline related work on sticker recommendation, user modeling, memory networks, visual question answering, visual dialog, and multi-turn response selection.\n\n\\subsection{Sticker and Emoji Recommendation}\n\nMost of the previous works emphasize the use of emojis instead of stickers.\nFor example, \\cite{barbieri2017emojis,barbieri2018multimodal} use a multimodal approach to recommend emojis based on the text and images in an Instagram post.\n\\cite{Guibon2018EmojiRI} propose a MultiLabel-RandomForest algorithm to predict emojis based on private instant messages.\n\\cite{Zhao2020CAPERCP} conduct emoji prediction on social media text (\\emph{e.g.,} Sina Weibo and Twitter), and they tackle this task as ranking among all emojis.\nThe total number of unique emojis in their dataset is 50, which is much smaller than the number of stickers.\nWhat is more, emojis are limited in variety, while there exists an abundance of different stickers.\n\\cite{Zhou2018MojiTalk} incorporates the emoji information into the dialog generation task, using emoji classification as an auxiliary task to help the dialog generation model produce utterances with proper emotion.\nThe most similar work to ours is \\cite{laddha2019understanding}, where they generate recommended stickers by first predicting the next message the user is likely to send in the chat, and then substituting it with an appropriate sticker.\n\nHowever, more often than not, the implication of a sticker cannot be fully conveyed by text; in this paper, we focus on directly generating sticker recommendations from the dialog history.\n\n\\subsection{User Modeling}\n\nUser modeling~\\cite{Ren2019Lifelong,zolna2017User,Huang2019Explainable,Zhu2017What,Yang2017Multi} is a hot research topic, especially in recommendation tasks, where it models the preference of a user based on the user's historical interaction data.\nSpecifically, 
in the e-commerce recommendation task~\\cite{Lei2019TiSSA,Ren2019RepeatNet,Huang2019TaxonomyAware}, user modeling systems use the purchase history or click records to model the user's intrinsic interest and temporal interest~\\cite{Yu2019Adaptive,Pi2019Practice}.\nMost of these studies utilize binary user-item relations and assume a flat preference distribution over items for each user.\nThey neglect the hierarchical discrimination between user intentions and user preferences.\n\\citet{Zhu2020Sequential} propose a novel key-array memory network with user-intention-item triadic relations, which takes both user intentions and preferences into account for next-item recommendation.\nAs for user modeling in the news recommendation task, there is much side information that can be used to obtain a better user preference representation.\n\\citet{Wu2019Neural} propose a neural news recommendation approach that can exploit heterogeneous user behaviors, including the search queries and the browsed webpages of the user.\n\nHowever, to model the user's preference in sticker selection, we should model not only the sticker selection history but also the dialog context of each selected sticker.\n\n\\subsection{Memory Networks}\n\nThe memory network proposed by \\citet{Sukhbaatar2015EndToEndMN} generally consists of two components.\nThe first is a memory matrix to save information (\\emph{i.e.,} memory slots) and the second is a neural network to read\/write the memory slots.\nThe memory network has shown better performance than the traditional long short-term memory (LSTM) network in several tasks, such as question answering~\\cite{Sukhbaatar2015EndToEndMN,Pavez2018Working,Ma2018Visual,Gao2018MotionAppearance}, machine translation~\\cite{Maruf2018Document}, text summarization~\\cite{Kim2019Abstractive,Chen2019Learning,Gao2020From}, dialog systems~\\cite{Chu2018Learning,Wu2019Globaltolocal} and 
recommendation~\\cite{Ebesu2018Collaborative,Wang2018Neural,Zhou2019TopicEnhanced}.\nThe reason is that the memory network can store information over a long time range and has more memory storage units than an LSTM, which has a single hidden state.\nFollowing the memory network, many variants have been proposed, \\emph{e.g.,} the key-value memory network~\\cite{Miller2016KeyValueMN} and the dynamic memory network~\\cite{Xiong2016DynamicMN,Kumar2016AskMA}.\nOur method is mainly based on the key-value memory network~\\cite{Miller2016KeyValueMN}, which employs the user's history dialog contexts as the memory keys and the corresponding selected stickers as the memory values.\nHowever, there are two main differences between our PESRS model and the previous key-value memory network.\nFirst, the user history data is in chronological order, so we should consider the time information when storing it in the memory.\nSecond, to recommend more accurate stickers, the model should consider not only the user preference information stored in the memory, but also the matching result between the current dialog context and the candidate stickers; hence, we propose a dynamic fusion layer that considers both the memory read output and the matching result of the current context.\nCompared with these methods, we not only implement a key-value memory network, but also provide a sticker selection framework that incorporates the user's preference.\n\n\\subsection{Visual Question Answering}\nSticker recommendation involves the representation of and interaction between images and text, which is related to the Visual Question Answering (VQA) task~\\cite{Goyal2018Think,Gao2019Multi,Chao2018Cross,Wang2017Explicit,Noh2019Transfer,Li2018Visual,Su2018Learning}.\nSpecifically, VQA takes an image and a corresponding natural language question as input and outputs the answer.\nIt is a classification problem in which candidate answers are restricted to the most 
common answers appearing in the dataset and requires deep analysis and understanding of images and questions, such as image recognition and object localization~\\cite{malinowski2015ask,xiong2016dynamic,wu2016ask,goyal2017making}.\nCurrent models can be classified into three main categories: early fusion models, later fusion models, and external knowledge-based models.\nOne state-of-the-art VQA model is \\cite{li2019beyond}, which proposes an architecture, positional self-attention with co-attention, that does not require a recurrent neural network (RNN) for video question answering.\n\\cite{guo2019image} proposes an image-question-answer synergistic network, where candidate answers are coarsely scored according to their relevance to the image and question pair in the first stage. \nThen, answers with a high probability of being correct are re-ranked by synergizing with images and questions.\n\nThe difference between sticker selection and the VQA task is that the sticker selection task focuses more on the multi-turn multimodal interaction between stickers and utterances.\n\n\\subsection{Visual Dialog}\nVisual dialog extends the single-turn dialog task~\\cite{Tao2018Get,Guo2019Dual,Murahari2019Improving} in VQA to a multi-turn one, where later questions may be related to former question-answer pairs. 
%\nTo solve this task, \\cite{lu2017best} transfers knowledge from a pre-trained discriminative network to a generative network with an RNN encoder, using a perceptual loss.\n\\cite{wu2018you} combines reinforcement learning and generative adversarial networks (GANs) to generate more human-like responses to questions, where the GAN helps overcome the relative paucity of training data and the tendency of the typical maximum-likelihood-estimation-based approach to generate overly terse answers.\n\\cite{jain2018two} demonstrates a simple symmetric discriminative baseline that can be applied both to predicting an answer and to predicting a question in visual dialog.\n\nUnlike visual dialog tasks, in a sticker recommendation system, the candidates are stickers rather than text.\n\n\\subsection{Multi-turn Response Selection}\nMulti-turn response selection~\\cite{Tao2019One,Feng2019Learning,Yan2018Coupled,Yan2017Joint,Yan2016LearningTR,Li2019Insufficient,Chan2019Modeling} takes a message and utterances in its previous turns as input and selects a response that is natural and relevant to the whole context.\nIn our task, we also need to take the previous multi-turn dialog into consideration.\nPrevious works include \\cite{zhou2016multi}, which uses an RNN to represent the context and response, and measures their relevance.\nMore recently, \\cite{Wu2017SequentialMN} matches a response with each utterance in the context on multiple levels of granularity, and the vectors are then combined through an RNN.\nThe final matching score is calculated by the hidden states of the RNN.\n\\cite{zhou2018multi} extends this work by considering the matching with dependency information.\nFurthermore, \\cite{tao2019multi} proposes a multi-representation fusion network where the representations can be fused into matching at an early stage, an intermediate stage, or the last stage.\n\nTraditional multi-turn response selection deals with pure natural language processing, while in our 
task, we also need to obtain a deep understanding of images. %\n\\section{Dataset}\n\\label{sec:dataset}\n\nIn this section, we introduce our multi-turn dialog dataset with stickers as responses in detail.\n\n\\subsection{Data Collection}\n\nWe collect the large-scale multi-turn dialog dataset with stickers from one of the most popular messaging apps, Telegram\\footnote{https:\/\/telegram.org\/}.\nIn this app, a large number of sticker sets are published, and everyone can use the stickers when chatting with a friend or in a chat group.\nSpecifically, we select 20 public chat groups consisting of active members, all of which are open groups that anyone can join without any permission.\nThe chat history of these groups is collected along with the complete sticker sets.\nEach sticker set contains stickers with a similar style.\nAll stickers are resized to a uniform size of $128 \\times 128$ pixels.\nWe use the 20 utterances before the sticker response as the dialog context, and then we filter out irrelevant utterances, such as URL links and attached files.\nDue to privacy concerns, we also filter out user information and anonymize user IDs.\nTo construct negative samples, 9 stickers other than the ground truth sticker are randomly sampled from the sticker set.\nAfter pre-processing, there are 320,168 context-sticker pairs in the training dataset, 10,000 pairs in the validation dataset, and 10,000 pairs in the test dataset.\nWe make sure that there is no overlap between these three datasets, \\emph{i.e.,} no dialog context appears in more than one dataset.\nTwo examples are shown in Figure~\\ref{fig:dataset-case}.\nWe release this dataset to the community to facilitate further research on the dialog response selection task.\n\n\\subsection{Statistics and Analysis}\n\n\\begin{table}[t]\n\\centering\n\\caption{Statistics of Response Selection Dataset.}\n\\label{tab:stat-dataset}\n\\begin{tabular}{llll}\n\\toprule\n & Train & Valid & Test \\\\\n\\midrule\n\\# context-stickers pairs 
& 320,168 & 10,000 & 10,000 \\\\\nAvg. words of context utterance & 7.54 & 7.50 & 7.42 \\\\\nAvg. users participate & 5.81 & 5.81 & 5.79 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nIn total, there are 3,516 sticker sets, which contain 174,695 stickers.\nThe average number of stickers in a sticker set is 49.64.\nEach context includes 15.5 utterances on average.\nThe average number of users who participate in the dialog context of each dataset is shown in the third row of Table~\\ref{tab:stat-dataset}.\n\nSince not all users have history dialog data, we calculate the percentage of data samples in our dataset that have history data.\nThere are 290,939 data samples in our training dataset that have at least one sticker selection in their history, which accounts for 88.12\\% of the training data.\nWe set the maximum number of retrieved history pairs (each consisting of a dialog context and the selected sticker) per data sample to 10, and the average history length in our training dataset is 6.82.\nWe also plot the distribution of history lengths in Figure~\\ref{fig:distribution-history-len}.\n\n\\begin{figure*} \n \\centering \n \\subfigure[The distribution of history length in the training dataset.]{ \n \\label{fig:distribution-history-len}\n \\includegraphics[scale=0.38]{figs\/his-lens.pdf}\n } \n \\subfigure[Similarity distribution among all stickers in the test dataset.]{ \n \\label{fig:similarity-distribution}\n \\includegraphics[scale=0.40]{figs\/sticker-simi-count.pdf}\n } \n \\caption{Statistics of dataset.}\n\\end{figure*}\n\n\\subsection{Sticker Similarity}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[scale=0.50]{figs\/sticker-dataset-case.pdf}\n \\caption{\n Example cases in the dataset with different similarity scores.\n }\n \\label{fig:dataset-case}\n\\end{figure*}\n\nStickers in the same set always share the same style or contain the same cartoon characters.\nIntuitively, the more similar the candidate stickers are, the more difficult it is to choose the 
correct sticker from candidates.\nIn other words, the similarity between candidate stickers determines the difficulty of the sticker selection task.\nTo investigate the difficulty of this task, we calculate the average similarity of all the stickers in a specific sticker set by the Structural Similarity Index (SSIM) metric~\\cite{wang2004image,avanaki2008exact}.\nWe first calculate the similarity between the ground truth sticker and each negative sample, then average the similarity scores.\nThe similarity distribution among test data is shown in Figure~\\ref{fig:similarity-distribution}, where the average similarity is 0.258.\nThe examples in Figure~\\ref{fig:dataset-case} are also used to illustrate the similarity of stickers more intuitively, where the left one has a relatively low similarity score, and the right one has a high similarity score.\n %\n\\section{Problem formulation}\n\\label{sec:formulation}\n\nBefore presenting our approach for sticker response selection in multi-turn dialog, we first introduce our notations and key concepts. 
\nTable~\\ref{tbl:notations} lists the main notations we use.\n\n\\begin{table}[!t]\n \\caption{Glossary.}\n \\label{tbl:notations}\n \\centering\n \\begin{tabular}{ll}\n \\toprule\n Symbol & Description \\\\\n \\midrule \n $s$ & multi-turn dialog context \\\\\n $u_{i}$ & $i$-th utterance in $s$ \\\\\n $T_u$ & number of utterances in dialog context \\\\\n $x^i_j$ & $j$-th word in $i$-th utterance $u_{i}$ \\\\\n $T_x^i$ & number of words in the $i$-th utterance \\\\\n $C$ & candidate sticker set \\\\\n $c_{i}$ & $i$-th candidate sticker in $C$ \\\\\n $T_c$ & number of stickers in candidate sticker set $C$ \\\\\n $y_i$ & the selection label of $i$-th sticker $c_i$ \\\\\n $\\hat{s}^k$ & $k$-th multi-turn dialog context in history \\\\\n $\\hat{u}^k_{i}$ & $i$-th utterance in $k$-th history context $\\hat{s}^k$ \\\\\n $\\hat{x}^{k,i}_j$ & $j$-th word in $i$-th utterance $\\hat{u}^k_{i}$ of $k$-th history context \\\\\n $\\hat{c}_{k}$ & user selected sticker of $k$-th history \\\\\n $T_h$ & number of history dialog contexts with selected stickers \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nSimilar to multi-turn dialog response selection~\\cite{Wu2017SequentialMN,zhou2018multi}, we assume that there is a multi-turn dialog context $s=\\{u_{1},\\dots,u_{T_u}\\}$ and a candidate sticker set $C=\\{c_{1},\\dots,c_{T_c}\\}$, where $u_{i}$ represents the $i$-th utterance in the multi-turn dialog.\nIn the $i$-th utterance $u_i=\\{x^i_1,\\dots,x^{i}_{T_x^i}\\}$, $x^i_j$ represents the $j$-th word in $u_i$, and $T_x^i$ represents the total number of words in utterance $u_i$.\nEach candidate $c_{i}$ is a sticker image with a binary label $y_i$, indicating whether $c_i$ is an appropriate response for $s$.\n$T_u$ is the number of utterances in the dialog context and $T_c$ is the number of candidate stickers.\nFor each candidate set, there is only one ground truth sticker, and the remaining ones are negative samples.\n\nTo model the user preference, we use $T_h$ 
history dialog contexts with user-selected stickers $\\{(\\hat{s}^1, \\hat{c}_1), \\dots, (\\hat{s}^{T_h}, \\hat{c}_{T_h})\\}$, where $\\hat{s}^i$ denotes the $i$-th history dialog context and $\\hat{c}_i$ denotes the sticker selected by the user in the $i$-th history dialog context.\nIn the remainder of the paper, we use the word \\textbf{current} to denote the dialog context $s$ and sticker $c_i$ for which the model needs to predict the sticker selection, and we use the word \\textbf{history} to denote the dialog contexts and stickers that the user has generated before.\nIn the $k$-th history, there is a dialog context $\\hat{s}^k=\\{\\hat{u}^k_{1},\\dots,\\hat{u}^k_{T_u}\\}$ which contains up to $T_u$ utterances, the same as the current dialog context $s$, and a user-selected sticker $\\hat{c}_{k}$.\nFor each dialog history, we pad dialog contexts with fewer than $T_u$ utterances up to $T_u$.\nOur goal is to learn a ranking model that can produce the correct ranking for each candidate sticker $c_i$; that is, it can select the correct sticker among all the other candidates.\nFor the rest of the paper, we take the $i$-th candidate sticker $c_{i}$ as an example to illustrate the details of our model and omit the candidate index $i$ for brevity.\nIn some sticker selection scenarios, the stickers in the preceding dialog context may affect the current decision of sticker selection. \nIn most cases, however, the sticker selection is influenced by the few utterances immediately before it. \nThus, in this paper, we focus on modeling the text utterances in the dialog context, \nand we will consider the information provided by the stickers in the preceding context in future work. \n\n\\section{PESRS model}\n\\label{sec:model}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.7]{figs\/sticker-pesrs-model.pdf}\n \\caption{\n Overview of PESRS. 
\n We divide our model into five components: \n (1) \\textit{Sticker encoder} learns the sticker representation by a convolutional neural network; \n (2) \\textit{Utterance encoder} learns the representation of each utterance by a self-attention based Transformer; \n (3) \\textit{User preference modeling module} obtains the position-aware history representations and stores them in a key-value memory network;\n (4) \\textit{Deep interaction network} conducts deep matching interaction between the sticker representation and each utterance representation at different levels of granularity;\n (5) \\textit{Fusion network} combines the short- and long-term dependency features of the interaction results produced by (4) with the user preference representation produced by (3) in the final sticker prediction layer.\n }\n \\label{fig:model}\n\\end{figure*}\n\n\\subsection{Overview}\n\nIn this section, we present our \\emph{preference enhanced sticker response selector}, abbreviated as PESRS. \nAn overview of PESRS is shown in Figure~\\ref{fig:model}, which can be split into five main parts:\n\n\\begin{itemize}\n \\item \\textit{Sticker encoder} is a convolutional neural network (CNN) based image encoding module that learns a sticker representation. 
%\n \\item \\textit{Utterance encoder} is a self-attention based module that encodes each utterance $u_{i}$ in the multi-turn dialog context $s$.\n \\item \\textit{User preference modeling module} is a key-value memory network that stores the representations of history dialog contexts and the corresponding selected stickers.\n \\item \\textit{Deep interaction network} conducts deep matching between the sticker representation and each utterance, and outputs the interaction results.\n \\item \\textit{Fusion network} learns the short-term dependency by the fusion RNN and the long-term dependency by the fusion Transformer, and finally outputs the matching score by combining the current interaction results with the user preference representation using a gated fusion layer.\n\\end{itemize}\n\n\\subsection{Sticker Encoder}\n\\label{subsec:sticker_encoder}\n\nMuch research has been conducted to alleviate gradient vanishing~\\cite{he2016deep} and reduce computational costs~\\cite{he2015delving} in image modeling tasks.\nWe utilize one of these models, \\emph{i.e.,} the Inception-v3~\\cite{szegedy2016rethinking} model, rather than a plain CNN to encode the sticker image:\n\\begin{align}\n O, O_{\\text{flat}} &= \\text{Inception-v3}(c) , \\label{eq:inceptionv3}\n\\end{align}\nwhere $c$ is the sticker image.\nThe sticker representation $O \\in \\mathbb{R}^{p \\times p \\times d}$ preserves the two-dimensional information of the sticker, and will be used when associating stickers and utterances in \\S\\ref{deep_int}.\nWe use the original image representation output of Inception-v3, $O_{\\text{flat}} \\in \\mathbb{R}^{d}$, as another sticker representation.\nMost image-grounded tasks~\\cite{Jing2018OnTA,Wu2018ChainOR,Wu2018ObjectDifferenceAA} employ a pre-trained image encoding model to produce the image representation.\nHowever, existing pre-trained CNN networks, including Inception-v3, are mostly built on real-world photos.\nThus, directly applying the pre-trained 
networks on stickers cannot speed up the training process.\nIn our dataset, the sticker author gives each sticker $c$ an emoji tag that denotes the general emotion of the sticker.\nHence, we propose an auxiliary sticker classification task to help the model converge quickly, which uses $O_{\\text{flat}}$ to predict which emoji is attached to the corresponding sticker.\nMore specifically, we feed $O_{\\text{flat}}$ into a linear classification layer and then use the cross-entropy loss $\\mathcal{L}_s$ as the loss function of this classification task.\n\n\\subsection{Utterance Encoder}\n\nTo model the semantic meaning of the dialog context, we learn the representation of each utterance $u_i$.\nFirst, we use an embedding matrix $e$ to map a one-hot representation of each word in each utterance $u_i$ to a high-dimensional vector space.\nWe also add the positional embedding to the original word embedding, and we use $e(x^i_j)$ to denote the embedding representation of word $x^i_j$.\nThe positional embedding is the same as in the Transformer~\\cite{vaswani2017attention}.\nFrom these embedding representations, we use the attentive module with positional encoding from the Transformer~\\cite{vaswani2017attention} to model the interactions between the words in an utterance.\nAttention mechanisms have become an integral part of compelling sequence modeling in various tasks~\\cite{bahdanau2014neural,fan2018hierarchical,Gao2019Abstractive,li2019beyond}.\nIn our sticker selection task, we also need to let each word fully interact with the other words to model the dependencies between words in the input sentence.\nThe self-attentive module in the Transformer requires three inputs: the query $Q$, the key $K$ and the value $V$.\nTo obtain these three inputs, we use three linear layers with different parameters to project the embedding of the dialog context $e(x^i_j)$ into three spaces:\n\\begin{align}\n Q^i_j &= FC(e(x^i_j)), \\label{equ:transformer-q-linear} \\\\\n K^i_j &= FC(e(x^i_j)) , \\\\\n V^i_j 
&= FC(e(x^i_j)).\n\\end{align}\nThe self-attentive module then takes each $Q^i_j$ to attend to $K^i_\\cdot$, and uses the attention distribution $\\alpha^{i}_{j, \\cdot} \\in \\mathbb{R}^{T_x^i}$ as weights to obtain the weighted sum of $V^i_\\cdot$, as shown in Equation~\\ref{equ:transformer-sum}:\n\\begin{align}\n \\alpha^i_{j,k} &= \\frac{\\exp\\left( Q^i_j \\cdot K^i_k \\right)}{\\sum_{n=1}^{T_x^i} \\exp\\left(Q^i_j \\cdot K^i_n\\right)}, \\label{equ:attention}\\\\\n \\beta^i_{j} &= \\sum_{k=1}^{T_x^i} \\alpha^i_{j,k} \\cdot V^i_{k}, \\label{equ:transformer-sum}\n\\end{align}\nwhere $\\alpha^i_{j,k}$ denotes the attention weight from the $j$-th word to the $k$-th word in the $i$-th utterance.\nNext, we add the original word embedding $e(x^i_j)$ to $\\beta^i_{j}$ as a residual connection, shown in Equation~\\ref{equ:drop-add}:\n\\begin{align}\n \\hat{h}^i_j = \\text{Dropout} \\left( e(x^i_j) + \\beta^i_j \\right). \\label{equ:drop-add}\n\\end{align}\nTo prevent vanishing or exploding gradients, a layer normalization operation~\\cite{lei2016layer} is also applied to the output of the feed-forward layer with ReLU activation, as shown in Equation~\\ref{equ:ffn}: \n\\begin{align}\n h^i_j = \\text{norm} \\left( max(0, \\hat{h}^i_j \\cdot W_1 + b_1) \\cdot W_2 + b_2 + \\hat{h}^i_j \\right), \\label{equ:ffn}\n\\end{align}\nwhere $W_1, W_2, b_1, b_2$ are all trainable parameters of the feed-forward layer, and $h^{i}_j$ denotes the hidden state of the $j$-th word of the $i$-th utterance in the Transformer.\nWe also employ multi-head attention in our model, which conducts these operations multiple times in parallel and then concatenates the outputs as the final representation.\nFor brevity, we omit the multi-head operations in our equations.\n\n\n\\subsection{Deep Interaction Network}\n\\label{deep_int}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.75]{figs\/sticker-interaction.pdf}\n \\caption{Framework of deep interaction network.}\n 
\\label{fig:interaction}\n\\end{figure*}\n\nNow that we obtain the representation of the sticker and each utterance, we can conduct a deep matching between these components to model the bi-directional relationship between the words in dialog context and the sticker patches.\nOn one hand, there are some emotional words in dialog context history that match the expression of the stickers such as ``happy'' or ``sad''.\nOn the other hand, specific parts of the sticker can also match these corresponding words such as dancing limbs or streaming eyes.\nHence, we employ a bi-directional attention mechanism between a sticker and each utterance, that is, from utterance to sticker and from sticker to utterance, to analyze the cross-dependency between the two components.\nThe interaction is illustrated in Figure~\\ref{fig:interaction}.\n\nWe take the $i$-th utterance as an example and omit the index $i$ for brevity.\nThe two directed attentions are derived from a shared relation matrix, $M \\in \\mathbb{R}^{(p^2) \\times T_{u}}$, calculated by sticker representation $O \\in \\mathbb{R}^{p \\times p \\times d}$ and utterance representation $h \\in \\mathbb{R}^{T_{u} \\times d}$.\nThe score $M_{kj} \\in \\mathbb{R}$ in the relation matrix $M$ indicates the relation between the $k$-th sticker representation unit $O_k$, $k \\in [1,p^2]$ and the $j$-th word $h_j$, $j \\in [1, T_{u}]$ and is computed as:\n\\begin{align}\n M_{kj} &= \\sigma(O_k, h_j) , \\\\\n \\sigma(x, y) &= w^\\intercal [x \\oplus y \\oplus (x \\otimes y)] , \\label{eq:alpha}\n\\end{align}\nwhere $\\sigma$ is a trainable scalar function that encodes the relation between two input vectors. 
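To make this bi-directional interaction concrete, here is a minimal numpy sketch of the relation matrix, the two-way max pooling described next, and the two resulting representations. All tensors are random stand-ins and the sizes ($p=4$, $T_u=6$, $d=8$) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, T_u = 8, 4, 6                      # hidden size, sticker grid side, utterance length

O = rng.normal(size=(p * p, d))          # flattened sticker units O_k
h = rng.normal(size=(T_u, d))            # utterance word states h_j
w = rng.normal(size=(3 * d,))            # parameters of the scoring function sigma

def sigma(x, y, w):
    # sigma(x, y) = w^T [x ; y ; x * y]: concatenation plus element-wise product
    return w @ np.concatenate([x, y, x * y])

# relation matrix M in R^{(p^2) x T_u}
M = np.array([[sigma(O[k], h[j], w) for j in range(T_u)] for k in range(p * p)])

# two-way max pooling yields the two attention weight vectors
tau_u = M.max(axis=0)                    # utterance-wise attention over words, (T_u,)
tau_s = M.max(axis=1)                    # sticker-wise attention over units, (p^2,)

l = tau_u @ h                            # sticker-aware utterance representation, (d,)
r = tau_s @ O                            # utterance-aware sticker representation, (d,)
```

Note that, following the equations in this subsection, the pooled weights are used directly as attention weights without an extra normalization step.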
\n$\\oplus$ denotes a concatenation operation and $\\otimes$ is the element-wise multiplication.\n\nNext, a two-way max pooling operation is conducted on $M$, \\emph{i.e.,} let $\\tau_j^u = \\max(M_{:j}) \\in \\mathbb{R}$ represent the attention weight on the $j$-th utterance word by the sticker representation, corresponding to the ``utterance-wise attention''.\nThis attention learns to assign high weights to the important words that are closely related to sticker.\nWe then obtain the weighted sum of hidden states as ``\\textbf{sticker-aware utterance representation}'' $l$:\n\\begin{equation}\\label{equ:sa-utterance}\nl = \\sum^{T_{u}}_j {\\tau_j^u h_j} .\n\\end{equation}\n\nSimilarly, sticker-wise attention learns which part of a sticker is most relevant to the utterance.\nLet $\\tau_k^s = \\max(M_{k:}) \\in \\mathbb{R}$ represent the attention weight on the $k$-th unit of the sticker representation. We use this to obtain the weighted sum of $O_{k}$, \\emph{i.e.,} the ``\\textbf{utterance-aware sticker representation}'' $r$:\n\\begin{equation}\\label{equ:ua-sticker}\nr = \\sum^{p^2}_k {\\tau_k^s O_{k}} .\n\\end{equation}\n\nAfter obtaining the two outputs from the co-attention module, we combine the sticker and utterance representations and finally get the ranking result.\nWe first integrate the utterance-aware sticker representation $r$ with the original sticker representation $O_{\\text{flat}}$ using an \\textbf{integrate function}, named $IF$:\n\\begin{align}\n Q_1 &= \\text{IF} \\left( O_{\\text{flat}}, r \\right) , \\\\\n \\text{IF}(x, y) &= \\text{FC} \\left(x \\oplus y \\oplus \\left( x \\otimes y \\right) \\oplus (x+y) \\right) , \\label{triliner}\n\\end{align}\nwhere $\\text{FC}$ denotes the fully-connected (FC) layer and we use the ReLU~\\cite{Nair2010RectifiedLU} as the activation function, $\\oplus$ represents the vector concatenation along the final dimension of the vector and $\\otimes$ denotes the elementwise product operation.\nWe add the 
sticker-aware utterance representation $l$ and $Q_1$ together and then apply a fully-connected layer with ReLU activation:\n\\begin{align}\\label{equ:q2}\n Q_2 = \\text{FC} (Q_1 \\oplus l) .\n\\end{align}\n\n\\subsection{User Preference Modeling Module}\n\\label{subsec:preference}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.6]{figs\/user-prefer-mem.pdf}\n \\caption{Framework of the user preference modeling module, which consists of a position-aware RNN and a key-value memory network. We use the history dialog contexts as the keys and the history selected stickers as the values. Finally, we use the representation of the current dialog context to query the user preference memory.}\n \\label{fig:preference-memory}\n\\end{figure*}\n\nUsers have their own preferences when selecting a sticker as the response to a multi-turn dialog context.\nHence, to recommend stickers more accurately, our model should consider the user's preference when giving the final sticker recommendation.\nIntuitively, the stickers recently selected by the user reflect the user's preference, and these history data can help our model build the preference representation.\nAs for constructing the preference modeling module, our motivation is to find the semantically similar dialog contexts in the history data, and then use the corresponding selected stickers of these dialog contexts to facilitate the final sticker prediction of the current dialog context.\nHence, we propose the user preference memory; the architecture of this module is shown in Figure~\\ref{fig:preference-memory}.\nThe proposed user preference memory unit inherits from memory networks~\\cite{Gao2019Product,Tao2019Log2Intent,Wang2018Neural,Chen2018Sequential}, and generally has two steps: (1) memory addressing and (2) memory reading. \nThe user preference memory consists of a set of history multi-turn dialog contexts and selected stickers. 
\nThough each selection action corresponds to one dialog context, the model should attend to different history contexts (\\emph{i.e.,} memory slots) depending on the current dialog context.\nThus, we address and read the memory unit as follows.\n\n\\subsubsection{History Encoding} \\label{subsubsec:history-encoding}\n\nTo store the history dialog contexts and selected stickers, we encode them into vector spaces using the same method as used when encoding the current dialog context and candidate stickers.\nConcretely, the attentive module is first employed to encode all the dialog contexts $\\{\\hat{s}^1, \\dots, \\hat{s}^{T_h}\\}$:\n\\begin{align}\n \\overline{h}^k_{i} = \\text{mean-pooling} (\\text{Transformer} (\\hat{u}^k_{i})), \\label{equ:transformer-history}\n\\end{align}\nwhere $\\overline{h}^k_{i}$ is the vector representation of the $i$-th utterance in the $k$-th history, and $\\text{Transformer}$ is the same operation as shown in Equation~\\ref{equ:transformer-q-linear}-Equation~\\ref{equ:ffn}.\nDifferent from the Transformer used in Equation~\\ref{equ:transformer-q-linear}, the query, key and value in Equation~\\ref{equ:transformer-history} are all $\\hat{u}^k_{i}$, where we conduct self-attention over all history dialog contexts.\nThen, we use a max-pooling layer to obtain the vector representation $\\overline{h}^k$ of the $k$-th dialog context in history:\n\\begin{align}\n \\overline{h}^k = \\max (\\{\\overline{h}^k_{1}, \\dots, \\overline{h}^k_{T_h}\\}).\n\\end{align}\n\nNext, we use the same image encoder $\\text{Inception-v3}$ as in \\S~\\ref{subsec:sticker_encoder} to encode all the stickers $\\{\\hat{c}_{1}, \\dots, \\hat{c}_{T_h}\\}$ of each history dialog context into vector representations $\\{\\overline{O}_{1}, \\dots, \\overline{O}_{T_h}\\}$:\n\\begin{align}\n \\overline{O}_k &= \\text{Inception-v3}(\\hat{c}_{k}) , \\label{eq:inceptionv3}\n\\end{align}\nwhere the sticker representation $\\overline{O}_{k} \\in \\mathbb{R}^{d}$ is a one-dimensional vector; here we drop the output 
$O$ and use the output $O_{flat}$ as the $\\overline{O}_{k}$.\n\nIntuitively, it is much easier for the user to recall their recently used stickers than the stickers they used a long time ago.\nThus, we propose a recurrent neural network (RNN) based position-aware user history modeling layer which incorporates the position feature into the history data representation, \\emph{e.g.,} history dialog context representation $\\overline{h}^k$ and history selected sticker representation $\\overline{O}_{k}$.\nWe first concatenate the position of history dialog context as an additional feature to the vector representation of dialog representation $\\overline{h}^k$ and sticker representation $\\overline{O}_{k}$.\nThen, we employ an RNN to encode these representations in chronological order:\n\\begin{align}\n \\hat{h}^k &= \\text{RNN} (t_k \\oplus \\overline{h}^k, \\hat{h}^{k-1}) ,\\\\\n \\hat{O}_{k} &= \\text{RNN} (t_k \\oplus \\overline{O}_{k}, \\hat{O}_{k-1}) .\n\\end{align}\nFinally, we obtain the position-aware history data representations $\\{\\hat{h}^1, \\dots, \\hat{h}^{T_h}\\}$ and $\\{\\hat{O}_{1}, \\dots, \\hat{O}_{T_h}\\}$ and we will introduce how to store them into user preference memory.\n\n\\subsubsection{Memory Addressing}\n\nAfter obtaining all the vector representations of history sticker and dialog context, we employ a key-value memory network and store them into each key-value slot, as shown in Figure~\\ref{fig:preference-memory}.\nIn this memory network, we use the dialog contexts as the keys and use the corresponding selected stickers as the values.\n\nFirst, we construct the query from the current dialog context, which will be used to retrieve the user preference representation from the memory network.\nWe apply a max-pooling layer on the representations of each utterance in the current dialog context:\n\\begin{align}\n h^i_m &= \\text{mean-pooling}(\\{h^i_1, \\dots, h^i_{T_x}\\}), \\\\\n h &= \\max \\{h^1_m, \\dots, h^{T_u}_m\\} , 
\\label{eq:mem-query}\n\\end{align}\nwhere $h^i_m$ is the representation of $i$-th utterance in the current dialog context, $h \\in \\mathbb{R}^{d}$ is used as the query and it represents the overall information of the current dialog context.\nNext, we use $h$ to calculate the read weights over each memory slot:\n\\begin{align}\n \\delta_k = \\text{softmax} ( h W_\\delta \\hat{h}^k ) ,\n\\end{align}\nwhere $\\delta_k \\in [0, 1]$ is the read weight for the $k$-th memory slot and $W_\\delta$ is a trainable parameter.\n\n\\subsubsection{Memory Reading}\n\nAfter obtaining the read weights $\\{\\delta_1, \\dots, \\delta_{T_h}\\}$ for all the memory slots, we can write the semantic output for preference memory by:\n\\begin{align}\n r = \\sum^{T_h}_k { \\delta_k \\hat{O}_{k} },\n\\end{align}\nwhere $r$ in essence represents a semantic preference representation and will be used when predicting the sticker in current dialog context.\n\n\\subsection{Fusion Network}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.7]{figs\/sticker-fusion.pdf}\n \\caption{Framework of fusion network. The black circle in the right of the figure indicates the interaction combination and gated fusion.} \n \\label{fig:fusion}\n\\end{figure}\n\nUp till now, we have obtained the user preference representation and interaction result between each utterance and the candidate sticker.\nHere we again include the utterance index $i$ which has been omitted in previous subsections, and $Q_2$ now becomes $Q_2^i$. 
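Before fusing the interaction results, it is worth noting that the addressing and reading steps of the user preference memory (\S~\ref{subsec:preference}) reduce to a softmax-weighted lookup. A minimal numpy sketch, with random stand-in tensors and illustrative sizes ($d=8$, $T_h=5$) that are assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T_h = 8, 5                          # hidden size, number of history memory slots

h = rng.normal(size=(d,))              # query: current dialog context representation
keys = rng.normal(size=(T_h, d))       # keys: history context representations
values = rng.normal(size=(T_h, d))     # values: history sticker representations
W_delta = rng.normal(size=(d, d))      # trainable addressing matrix W_delta

# memory addressing: delta_k = softmax_k(h W_delta khat_k)
scores = keys @ (W_delta.T @ h)        # one bilinear score per memory slot, (T_h,)
delta = np.exp(scores - scores.max())  # numerically stable softmax over the slots
delta /= delta.sum()

# memory reading: softmax-weighted sum of the value slots
r = delta @ values                     # user preference representation, (d,)
```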
\nSince the utterances in a multi-turn dialog context are in chronological order, we employ a \\textbf{Fusion RNN} and a \\textbf{Fusion Transformer} to model the short-term and long-term interactions between the utterance interaction results $\\{Q_2^1, \\dots, Q_2^{T_u}\\}$.\nFusion RNN (shown in the top part of Figure~\\ref{fig:fusion}) is based on the recurrent network which can capture the short-term dependency over each utterance interaction result.\nFusion Transformer (shown in the bottom part of Figure~\\ref{fig:fusion}) is based on the self-attention mechanism which is designed for capturing the important elements and the long-term dependency among all the interaction results.\n\n\\subsubsection{Fusion RNN} \\label{subsubsec:fusion-rnn}\n\nFusion RNN first reads the interaction results for each utterance $\\{Q_2^1, \\dots, Q_2^{T_u}\\}$ and then transforms them into a sequence of hidden states.\nIn this paper, we employ the gated recurrent unit (GRU)~\\cite{Chung2014EmpiricalEO} as the cell of fusion RNN, which is popular in sequential modeling~\\cite{Gao2019How, Wu2017SequentialMN,tao2019multi}:\n\\begin{align}\ng_i = \\text{RNN} \\left( Q_2^i, g_{i-1} \\right) , \\label{equ:fusion-rnn}\n\\end{align}\nwhere $g_i$ is the hidden state of the fusion RNN and $g_{0}$ is the initial state of the RNN, which is initialized randomly.\nFinally, we obtain the sequence of hidden states $\\{g_1, \\dots, g_{T_u}\\}$.\nOne can replace GRU with similar algorithms such as the Long Short-Term Memory network (LSTM)~\\cite{hochreiter1997long}.\nWe leave this study for future work.\n\n\\subsubsection{Fusion Transformer}\n\nTo model the long-term dependency and capture the salient utterances in the context, we employ the self-attention mechanism introduced in Equation~\\ref{equ:attention}-\\ref{equ:ffn}.\nConcretely, given $\\{Q_2^1, \\dots, Q_2^{T_u}\\}$, we first employ three linear projection layers with different parameters to project the input sequence into three different spaces:\n\\begin{align}\n\\mathcal{Q}^i &= 
\\text{FC} ( Q_2^i ), \\\\\n\\mathcal{K}^i &= \\text{FC} ( Q_2^i ), \\\\\n\\mathcal{V}^i &= \\text{FC} ( Q_2^i ).\n\\end{align}\nThen we feed these three matrices into the self-attention algorithm illustrated in Equation~\\ref{equ:attention}-\\ref{equ:ffn}.\nFinally, we obtain the long-term interaction result $\\{\\hat{g}_1, \\dots, \\hat{g}_{T_u}\\}$.\n\n\\subsubsection{Long Short Interaction Combination}\n\nTo combine the interaction representation generated by fusion RNN and fusion Transformer, we employ the SUMULTI function proposed by \\cite{Wang2016ACM} to combine these representations, which has been proven effective in various tasks:\n\\begin{align}\n \\overline{g}_i = \\text{ReLU}(\\mathcal{W}^s \n \\begin{bmatrix}\n (\\hat{g}_i - g_i) \\otimes (\\hat{g}_i - g_i) \\\\\n \\hat{g}_i \\otimes g_i\n \\end{bmatrix}\n + \\mathbf{b}^s),\n\\end{align}\nwhere $\\otimes$ is the element-wise product.\nThe new interaction sequence $\\{\\overline{g}_1, \\dots, \\overline{g}_{T_u}\\}$ is then boiled down to a matching vector $\\Tilde{g}_{T_u}$ by another GRU-based RNN:\n\\begin{align}\n\\Tilde{g}_i = \\text{RNN}(\\Tilde{g}_{i-1}, \\overline{g}_i) .\n\\end{align}\nWe use the final hidden state $\\Tilde{g}_{T_u}$ as the representation of the overall interaction result between the whole utterance context and the candidate sticker.\n\n\\subsubsection{Gated Fusion}\n\nIn the final prediction, our model combines the current dialog context interaction result and user preference representation to predict the final result.\nHowever, in each case, the information required for current dialog context interaction and user preference representation is not necessarily the same.\nIf the current dialog context is very similar to the history dialog context, the historical information should play a greater role in prediction.\nTo incorporate the user preference information into final sticker prediction, we employ a gated fusion which dynamically fuses the current context interaction 
result and user preference representation together by using a gate $f_g$.\nTo dynamically fuse these two information sources, we calculate a gate $f_g \\in [0, 1]$ which decides which part the model should concentrate on when making the final sticker selection decision:\n\\begin{equation}\n f_g = \\sigma (\\text{FC}([r \\oplus \\Tilde{g}_{T_u}])) , \\label{eq:dynamic-fusion}\n\\end{equation}\nwhere $\\sigma$ is the sigmoid function and $\\oplus$ denotes the vector concatenation operation.\nNext, we apply a weighted sum operation using the gate $f_g$ on the current context interaction result $\\Tilde{g}_{T_u}$ and the user preference representation $r$, as shown in Equation~\\ref{eq:prediction-layer}.\nFinally, we apply a fully-connected layer to produce the matching score $\\hat{y}$ of the candidate sticker:\n\\begin{equation}\n \\hat{y} = \\sigma(\\text{FC} (f_g * \\Tilde{g}_{T_u} + (1 - f_g) * r)) , \\label{eq:prediction-layer}\n\\end{equation}\nwhere $\\hat{y} \\in (0,1)$ is the matching score of the candidate sticker.\n\n\n\\subsection{Learning}\n\nRecall that we have a candidate sticker set $C=\\{c_{1},...c_{T_c}\\}$ which contains multiple negative samples and one ground truth sticker.\nWe use the hinge loss as our objective function:\n\\begin{align}\n\\mathcal{L} = \\sum^{N} \\max \\left( 0 , \\hat{y}_{\\text{negative}}- \\hat{y}_{\\text{positive}} +\\text{margin} \\right),\n \\label{eq:loss-generator}\n\\end{align}\nwhere $\\hat{y}_{\\text{negative}}$ and $\\hat{y}_{\\text{positive}}$ correspond to the predicted scores of the negative sample and the ground truth sticker, respectively, and margin is the rescaling margin of the hinge loss.\nThe gradient descent method is employed to update all the parameters in our model to minimize this loss function. 
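The gated fusion of Equations~\ref{eq:dynamic-fusion}-\ref{eq:prediction-layer} and the hinge objective above can be sketched in a few lines of numpy. This is a toy sketch with random stand-in vectors; each FC layer is reduced to a single linear projection for brevity, which is an assumption, not the exact parameterization of the model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 8
g = rng.normal(size=(d,))           # interaction result of context and candidate sticker
r = rng.normal(size=(d,))           # user preference representation from the memory
w_gate = rng.normal(size=(2 * d,))  # FC reduced to one linear projection for the gate
w_out = rng.normal(size=(d,))       # FC reduced to one linear projection for the score

# gated fusion: f_g = sigmoid(FC([r ; g]))
f_g = sigmoid(w_gate @ np.concatenate([r, g]))

# matching score: y_hat = sigmoid(FC(f_g * g + (1 - f_g) * r))
y_hat = sigmoid(w_out @ (f_g * g + (1 - f_g) * r))

def hinge(y_pos, y_neg, margin=0.2):
    # hinge loss with margin rescaling for one (positive, negative) score pair
    return max(0.0, y_neg - y_pos + margin)
```

The loss over the candidate set is then just this pairwise term summed over every negative sample.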
%\n\\newcommand{\\cellcolor{blue!15}}{\\cellcolor{blue!15}}\n\\section{Experimental Setup}\n\\label{sec:exp-setup}\n\n\\subsection{Research Questions}\nWe list nine research questions that guide the experiments: \n\n\\begin{itemize}\n \\item \\textbf{RQ1} (See \\S~\\ref{subsec:Overall}): What is the overall performance of PESRS compared with all baselines?\n \\item \\textbf{RQ2} (See \\S~\\ref{subsec:ablation}): What is the effect of each module in PESRS? \n \\item \\textbf{RQ3} (See \\S~\\ref{subsec:number}): How does the performance change when the number of utterances changes?\n \\item \\textbf{RQ4} (See \\S~\\ref{subsec:attention}): \n %\n Can co-attention mechanism successfully capture the salient part on the sticker image and the important words in dialog context? \n \\item \\textbf{RQ5} (See \\S~\\ref{subsec:features}): What is the influence of the similarity between candidate stickers?\n \\item \\textbf{RQ6} (See \\S~\\ref{subsec:hidden}): What is the influence of the parameter settings?\n \\item \\textbf{RQ7} (See \\S~\\ref{subsec:history-len}): What is the influence of the user history length?\n \\item \\textbf{RQ8} (See \\S~\\ref{subsec:most-select}): What is the performance of using the user's most selected sticker as the response?\n \\item \\textbf{RQ9} (See \\S~\\ref{subsec:emoji-analysis}): Can sticker encoder capture the semantic meaning of sticker?\n\\end{itemize}\n\n\\subsection{Comparison Methods}\n\nWe first conduct an ablation study to prove the effectiveness of each component in PESRS as shown in Table~\\ref{tab:ablations}.\nSpecifically, we remove each key part of our PESRS to create ablation models and then evaluate the performance of these models.\n\n\\begin{table}[t]\n\\centering\n\\caption{Ablation models for comparison.}\n\\label{tab:ablations}\n\\begin{tabular}{ll}\n\\toprule\nAcronym & Gloss \\\\\n\\midrule\nPESRS w\/o Classify & \\multicolumn{1}{p{8cm}}{\\small PESRS w\/o emoji classification task}\\\\\nPESRS w\/o DIN & 
\\multicolumn{1}{p{8cm}}{\\small PESRS w\/o \\textbf{D}eep \\textbf{I}nteraction \\textbf{N}etwork}\\\\\nPESRS w\/o FR & \\multicolumn{1}{p{8cm}}{\\small PESRS w\/o \\textbf{F}usion \\textbf{R}NN}\\\\\nPESRS FR2T & \\multicolumn{1}{p{8cm}}{\\small Change the \\textbf{F}usion \\textbf{R}NN in PESRS to Transformer with positional encoding}\\\\\nPESRS w\/o UPM & \\multicolumn{1}{p{8cm}}{\\small PESRS w\/o \\textbf{U}ser \\textbf{P}reference \\textbf{M}emory } \\\\\nPESRS w\/o TAR & \\multicolumn{1}{p{8cm}}{\\small PESRS w\/o \\textbf{T}ime-\\textbf{A}ware \\textbf{R}NN } \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\nNext, to evaluate the performance of our model, we compare it with the following baselines.\nNote that, we adapt VQA and multi-turn response selection models to the sticker response selection task by changing their input text encoder to image encoder.\nSince we incorporate the user history data into our model, we also compare with the user modeling method which has been widely used in the recommendation tasks.\n\n\\noindent (1) \\textbf{SMN}: \n\\cite{Wu2017SequentialMN} proposes a sequential matching network to address response selection for the multi-turn conversation problem.\nSMN first matches a response with each utterance in the context.\nThen vectors are accumulated in chronological order through an RNN.\nThe final matching score is calculated with RNN.\n\n\\noindent (2) \\textbf{DAM}: \n\\cite{zhou2018multi} extends the transformer model~\\cite{vaswani2017attention} to the multi-turn response selection task, where representations of text segments are constructed using stacked self-attention.\nThen, truly matched segment pairs are extracted across context and response. 
\n\n\\noindent (3) \\textbf{MRFN}: \n\\cite{tao2019multi} proposes a multi-representation fusion network which consists of multiple dialog utterance representation methods and generates multiple fine-grained utterance representations.\nNext, they argue that these representations can be fused into final response candidate matching at an early stage, at the intermediate stage or the last stage.\nThey evaluate all stages and find fusion at the last stage yields the best performance.\nThis is the state-of-the-art model on the multi-turn response selection task.\n\n\\noindent (4) \\textbf{Synergistic}:\n\\cite{guo2019image} devises a novel synergistic network on VQA task.\nFirst, candidate answers are coarsely scored according to their relevance to the image-question pair. \nAfterward, answers with high probabilities of being correct are re-ranked by synergizing with image and question.\nThis model achieves the state-of-the-art performance on the Visual Dialog v1.0 dataset~\\cite{das2017visual}.\n\n\\noindent (5) \\textbf{PSAC}: \n\\cite{li2019beyond} proposes the positional self-attention with co-attention architecture on VQA task, which does not require RNNs for video question answering. 
\nWe replace the output probability on the vocabulary size with the probability on the candidate sticker set.\n\n\\noindent (6) \\textbf{SRS}:\nIn our previous work~\\cite{gao2020sticker}, we proposed the first sticker selection method, which consists of a sticker and dialog context encoding module, a deep matching network and an information fusion layer.\nThis method achieves the state-of-the-art performance on the multi-turn dialog-based sticker selection dataset.\n\n\\noindent (7) \\textbf{LSTUR}:\n\\cite{An2019Neural} proposes a long- and short-term user modeling method to represent the long- and short-term user preference and then applies this method to the news recommendation task.\nExperiments on a real-world dataset demonstrate their approach can effectively improve the performance of neural news recommendation methods. \nTo adapt this method to our sticker selection task, we replace their news encoding network with the sticker image encoding network, Inception-v3, the same as used in our model.\nSince there are countless users in our task, we cannot obtain a static user embedding as used in their model.\nFor a fair comparison, we replace the user embedding in their model with the current dialog context.\n\nFor the first three multi-turn response selection baselines, we replace the candidate utterance embedding RNN or Transformer network with the image encoding CNN network Inception-v3, which is the same as used in our proposed model.\nThis Inception-v3 network is initialized using a pre-trained model\\footnote{\\url{https:\/\/github.com\/tensorflow\/models\/tree\/master\/research\/slim}} for all baselines and PESRS.\n\n\n\\subsection{Evaluation Metrics}\n\nFollowing~\\cite{tao2019multi,zhou2018multi}, we employ recall at position $k$ in $n$ candidates $R_n@k$ as an evaluation metric, which measures if the positive response is ranked in the top $k$ positions of $n$ candidates.\nFollowing~\\cite{zhou2018multi}, we also employ mean average precision 
(MAP)~\\cite{baeza2011modern} as an evaluation metric.\nThe statistical significance of differences observed between the performance of two runs is tested using a two-tailed paired t-test and is denoted using $^{\\blacktriangle}$\\ (or $^{\\blacktriangledown}$) for strong significance at $\\alpha=0.01$.\n\n\\subsection{Implementation Details}\n\nWe implement our experiments using TensorFlow~\\cite{abadi2016tensorflow} on an NVIDIA GTX 2080Ti GPU. \nIf the number of words in an utterance is less than 30, we pad zeros, otherwise, the first 30 words are kept.\nThe word embedding dimension is set to 100 and the number of hidden units is 100.\nThe batch size is set to 32.\n9 negative samples are randomly sampled from the sticker set containing the ground truth sticker, and we finally obtain 10 candidate stickers for the model to select.\nWe initialize all the parameters randomly using a Gaussian distribution in [-0.02, 0.02].\nWe use Adam optimizer~\\cite{Kingma2015AdamAM} as our optimizing algorithm, and the learning rate is $1 \\times 10^{-4}$.\n\n\\section{Experimental result}\n\\label{sec:exp-result}\n\n\\subsection{Overall Performance}\n\\label{subsec:Overall}\n\n\\begin{table}[t]\n\\centering\n\\caption{RQ1: Automatic evaluation comparison. 
Significant differences are with respect to MRFN.}\n\\begin{tabular}{@{}l cc cc @{}}\n\\toprule\n& MAP & $R_{10}@1$ & $R_{10}@2$ & $R_{10}@5$ \\\\\n\\midrule\n\\multicolumn{5}{@{}l}{\\emph{Visual Q\\&A methods}}\\\\\nSynergistic & 0.593 \\phantom{0} & 0.438\\phantom{0} & 0.569\\phantom{0} & 0.798\\phantom{0} \\\\\nPSAC & 0.662\\phantom{0} & 0.533\\phantom{0} & 0.641\\phantom{0} & 0.836\\phantom{0} \\\\\n\\midrule\n\\multicolumn{5}{@{}l}{\\emph{Multi-turn response selection methods}}\\\\\nSMN & 0.524\\phantom{0} & 0.357\\phantom{0} & 0.488\\phantom{0} & 0.737\\phantom{0} \\\\\nDAM & 0.620\\phantom{0} & 0.474\\phantom{0} & 0.601\\phantom{0} & 0.813\\phantom{0} \\\\\nMRFN & 0.684\\phantom{0} & 0.557\\phantom{0} & 0.672\\phantom{0} & 0.853\\phantom{0}\\\\\nLSTUR & 0.689 & 0.558 & 0.68 & 0.874 \\\\\n\\midrule\nSRS & 0.709 & 0.590 & 0.703 & 0.872 \\\\\nPESRS & \\textbf{0.743} & \\textbf{0.632}$^{\\blacktriangle}$ & \\textbf{0.740}$^{\\blacktriangle}$ & \\textbf{0.897} \\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab:comp_auto_baselines}\n\\end{table}\n\n\\begin{figure*} \n \\centering \n \\subfigure[$MAP$ score]{ \n \\label{figs:MAP.png} %\n \\includegraphics[scale=0.40]{figs\/ctx_len_MAP.pdf}\n } \n %\n \\subfigure[$R_{10}@1$ score]{ \n \\label{figs:r1.png} %\n \\includegraphics[scale=0.40]{figs\/ctx_len_R_101.pdf}\n } \n %\n \\subfigure[$R_{10}@2$ score]{ \n \\label{figs:r2.png} %\n \\includegraphics[scale=0.40]{figs\/ctx_len_R_102.pdf}\n } \n %\n \\subfigure[$R_{10}@5$ score]{ \n \\label{figs:r5.png} %\n \\includegraphics[scale=0.40]{figs\/ctx_len_R_105.pdf}\n } \n \\caption{\n RQ3: Performance of PRSRS on all metrics when reading different number of utterances.\n }\n \\label{fig:turns}\n\\end{figure*}\n\nFor research question \\textbf{RQ1}, we examine the performance of our model and baselines in terms of each evaluation metric, as shown in Table~\\ref{tab:comp_auto_baselines}.\nFirst, the performance of the multi-turn response selection models is generally consistent 
with their performances on text response selection datasets.\nSMN~\\cite{Wu2017SequentialMN}, an earlier work on multi-turn response selection task with a simple structure, obtains the worst performance on both sticker response and text response selection.\nDAM~\\cite{zhou2018multi} improves the SMN model and gets better performance.\nMRFN~\\cite{tao2019multi} is the state-of-the-art text response selection model and achieves the best performance among baselines in our task as well.\nSecond, VQA models perform generally worse than multi-turn response selection models, since the interaction between multi-turn utterances and sticker is important, which is not taken into account by VQA models. %\nThird, our previously proposed SRS achieves better performance with 3.36\\%, 5.92\\% and 3.72\\% improvements in MAP, $R_{10}@1$ and $R_{10}@2$ respectively, over the state-of-the-art multi-turn selection model, \\emph{i.e.,} MRFN, and with 6.80\\%, 10.69\\% and 8.74\\% significant increases (with p-value < 0.05) over the state-of-the-art visual dialog model, PSAC. 
\nFinally, comparing with our previously proposed sticker selection method SRS, our newly proposed model PESRS which incorporates the user preference information achieves the state-of-the-art performance with 4.8\\%, 7.1\\% and 5.3\\% improvements in $MAP$, $R_{10}@1$ and $R_{10}@2$ respectively, over our previous method SRS which is just based on the multi-modal matching between utterance and sticker image.\nThat demonstrates the superiority of incorporating the user preference information into sticker selection model.\n\n\n\\subsection{Ablation Study}\n\\label{subsec:ablation}\n\n\\begin{table}[t]\n \\centering\n \\caption{RQ2: Evaluation of different ablation models.}\n \\begin{tabular}{@{}lcc cc@{}}\n \\toprule\n & MAP & $R_{10}@1$ & $R_{10}@2$ & $R_{10}@5$ \\\\\n \\midrule\n %\n %\n %\n PESRS w\/o Classify & 0.714 & 0.598 & 0.707 & 0.866 \\\\\n %\n PESRS w\/o DIN & 0.728 & 0.612 & 0.725 & 0.888 \\\\\n %\n PESRS w\/o FR & 0.727 & 0.609 & 0.725 & 0.886 \\\\\n %\n PESRS FR2T & 0.725 & 0.610 & 0.719 & 0.881 \\\\\n %\n PESRS w\/o UPM & 0.709 & 0.590 & 0.703 & 0.872 \\\\\n %\n PESRS w\/o TAR & 0.710 & 0.589 & 0.706 & 0.873 \\\\\n PESRS & \\textbf{0.743} & \\textbf{0.632}$^{\\blacktriangle}$ & \\textbf{0.740}$^{\\blacktriangle}$ & \\textbf{0.897} \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:comp_rouge_ablation}\n\\end{table}\n\nFor research question \\textbf{RQ2}, we conduct ablation tests on the use of the sticker classification loss (introduced in \\S~\\ref{subsec:sticker_encoder}), the deep interaction network (introduced in \\S~\\ref{deep_int}), the fusion RNN (introduced in \\S~\\ref{subsubsec:fusion-rnn}), the user preference memory without position aware RNN (introduced in \\S~\\ref{subsec:preference}) and the full user preference memory (introduced in \\S~\\ref{subsubsec:history-encoding}) respectively. 
\nThe evaluation results are shown in Table~\\ref{tab:comp_rouge_ablation}.\nThe performances of all ablation models are worse than that of PESRS under all metrics, which demonstrates the necessity of each component in PESRS.\nWe also find that the sticker classification task contributes to the overall performance.\nThis additional task can also speed up the training process and help our model converge quickly: training PESRS until convergence takes 21 hours, while training PESRS w\/o Classify takes 35 hours.\nThe fusion RNN brings a significant contribution (with p-value < 0.05), improving the $MAP$ and $R_{10}@1$ scores by 2.2\\% and 3.8\\%, respectively.\nWe also change the fusion RNN to a Transformer with positional encoding, which leads to a decrease in performance and verifies the effectiveness of the fusion RNN.\nThe deep interaction network also plays an important part. \nWithout this module, the interaction between the sticker and utterance is hindered, leading to a 3.3\\% drop in $R_{10}@1$.\nParticularly, since the user preference memory captures the user's sticker selection preference, we can see that when the user preference memory is removed from the model, the model suffers a dramatic performance drop in terms of all metrics.\nThe position-aware user history encoding RNN also contributes to the PESRS model, improving the $MAP$ and $R_{10}@1$ scores by 4.6\\% and 7.3\\%, respectively.\n\n\\subsection{Analysis of Number of Utterances} \\label{subsec:number}\n\nFor research question \\textbf{RQ3}, in addition to comparing with various baselines, we also evaluate our model when reading different numbers of utterances to study how the performance relates to the number of context turns.\n\nFigure~\\ref{fig:turns} shows how the performance of PESRS changes with respect to different numbers of utterance turns.\nIn this experiment, we change the number of utterance turns in both the current dialog context and history dialog 
contexts.\nWe observe a similar trend for PESRS on the first three evaluation metrics $MAP$, $R_{10}@1$ and $R_{10}@2$: they first increase until the utterance number reaches 15, and then fluctuate as the utterance number continues to increase.\nThere are two possible reasons for this phenomenon.\nThe first reason might be that, when the information in the utterances is limited, the model can capture the features well, and thus when the amount of information increases, the performance gets better.\nHowever, the capacity of the model is limited, and when the amount of information reaches its upper bound, the model gets confused by this overwhelming information.\nThe second reason might be the usefulness of the utterance context.\nUtterances that occur too early before the sticker response may be irrelevant to the sticker and bring unnecessary noise.\nAs for the last metric, the above observations do not hold.\nThe $R_{10}@5$ scores fluctuate when the utterance number is below 15, and drop when the utterance number increases.\nThe reason might be that $R_{10}@5$ is not a strict metric, and it is easy for the right sticker to be ranked within the top half of the candidate set.\nThus, the growth of the information given to PESRS does not help it perform better, while the noise it brings harms the performance.\nOn the other hand, though the number of utterances changes from 3 to 18, the overall performance of PESRS generally remains at a high level, which proves the robustness of our model.\n\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[scale=0.50]{figs\/sticker-predict-case.pdf}\n \\caption{\n RQ4: Examples of sticker selection results produced by SRS. We show the selected sticker and three randomly selected candidate stickers with the attention heat map. The lighter the area on the image is, the higher attention weight it gets. 
The first two cases are collected from a chitchat group, and the third one is collected from a VPN customer service group.\n }\n \\label{fig:predict-case}\n\\end{figure*}\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.6]{figs\/sticker-text-attention-case.pdf}\n \\caption{\n RQ4: Examples of the attention weights of the dialog utterance. We translate Chinese to English word by word. The darker the area is, the higher the weight the word gets.\n }\n \\label{fig:text-attention-case}\n\\end{figure}\n\n\\subsection{Analysis of Attention Distribution in Interaction Process}\n\\label{subsec:attention}\n\nNext, we turn to address \\textbf{RQ4}.\nWe also show three cases with the dialog context in Figure~\\ref{fig:predict-case}.\nThere are four stickers under each dialog context: one is the sticker selected by our model and the other three are randomly selected candidate stickers.\nAs a main component, the deep interaction network comprises a bi-directional attention mechanism between the utterance and the sticker, where each word in the utterance and each unit in the sticker representation have a similarity score in the co-attention matrix.\nTo visualize the sticker selection process and to demonstrate the interpretability of the deep interaction network, we visualize the sticker-wise attention $\\tau^s$ (Equation~\\ref{equ:ua-sticker}) on the original sticker image and show some examples in Figure~\\ref{fig:predict-case}.\nThe lighter the area is, the higher attention it gets.\n\nFacial expressions are an important part of sticker images.\nHence, we select several stickers with vivid facial expressions in Figure~\\ref{fig:predict-case}.\nTake the fourth sticker in Case 1 for example, where the character has a winking eye and a smiling mouth.\nThe highlights are accurately placed on the character's eye, indicating that the representation of this sticker is highly dependent on this part.\nAnother example is the last sticker of Case 3: there are two question marks on the 
top right corner of the sticker image, which indicates that the girl is very suspicious of this.\nIn addition to facial expressions, the character's gestures can also represent emotions.\nTake the third sticker in Case 2 for example: the character in this sticker gives a thumbs up representing support, and we can find that the attention lies on his hand, indicating that the model learns the key point of his body language.\n\nFurthermore, we randomly select three utterances from the test dataset, and we also visualize the attention distribution over the words in an utterance, as shown in Figure~\\ref{fig:text-attention-case}.\nWe use the weight $\\tau_j^u$ for the $j$-th word (calculated in Equation~\\ref{equ:sa-utterance}) as the attention weight.\nWe can find that the attention module always gives a higher attention weight to the salient words, such as ``easy method'', ``make a lot of money'' and ``use China Mobile''.\n\n\n\\subsection{Influence of Similarity between Candidates}\n\\label{subsec:features}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.60]{figs\/similarity-recall.pdf}\n \\caption{\n RQ5: Performance of SRS on groups of different candidate similarity.\n }\n \\label{fig:similarity-recall}\n\\end{figure}\n\nIn this section, we turn to \\textbf{RQ5} to investigate the influence of the similarities between candidates.\nThe candidate stickers are sampled from the same set, and stickers in a set usually have a similar style.\nThus, it is natural to ask: Can our model identify the correct sticker from a set of similar candidates?\nWhat is the influence of the similarity between candidate stickers?\nHence, we use the Structural Similarity Index (SSIM) metric~\\cite{wang2004image,avanaki2008exact} to calculate the average similarity among all candidates in a test sample and then aggregate all test samples into five groups according to their average similarities.\nWe calculate the $R_{10}@1$ of each group of samples, as shown in 
Figure~\\ref{fig:similarity-recall}.\nThe x-axis is the average similarity between candidate stickers and the y-axis is the $R_{10}@1$ score.\n\nNot surprisingly, our model achieves the best performance when the average similarity of the candidate group is low, and its performance drops as the similarity increases.\nHowever, we can also see that, though the similarity varies from minimum to maximum, the overall performance stays at a high level. \n$R_{10}@1$ scores of all five groups are above 0.42, and the highest score reaches 0.59.\nThat is, our model is highly robust and can keep giving reasonable sticker responses.\n\n\\subsection{Robustness of Parameter Setting}\\label{subsec:hidden}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.6]{figs\/hidden.pdf}\n \\caption{\n RQ6: Performance of PESRS with different parameter settings.\n }\n \\label{fig:hidden}\n\\end{figure}\n\nIn this section, we turn to address \\textbf{RQ6} to investigate the robustness of the parameter setting.\nWe train the PESRS model with different parameter settings, as shown in Figure~\\ref{fig:hidden}.\nThe hidden size of the RNN, CNN and the dense layer in our model is tuned from 50 to 200, and we use the MAP and $R_n@k$ to evaluate each model.\nAs the hidden size grows from 50 to 100, the performance rises as well: the increment of hidden size improves the MAP and $R_{10}@1$ scores by 1.1\\% and 1.9\\%.\nWhen the hidden size grows further from 100 to 200, the performance declines slightly: this increment leads to a 3.9\\% and 5.3\\% drop in terms of MAP and $R_{10}@1$, respectively.\nNonetheless, we find that each metric remains within a stable interval, which demonstrates that PESRS is robust in terms of the parameter size.\n\n\\subsection{Influence of User History Length} \\label{subsec:history-len}\n\n\\begin{figure*} \n \\centering \n \\subfigure[$MAP$ score]{ \n \\label{figs:MAP.png} %\n \\includegraphics[scale=0.40]{figs\/his_len_MAP.pdf}\n } 
\n %\n \\subfigure[$R_{10}@1$ score]{ \n \\label{figs:r1.png} %\n \\includegraphics[scale=0.40]{figs\/his_len_R_101.pdf}\n } \n %\n \\subfigure[$R_{10}@2$ score]{ \n \\label{figs:r2.png} %\n \\includegraphics[scale=0.40]{figs\/his_len_R_102.pdf}\n } \n %\n \\subfigure[$R_{10}@5$ score]{ \n \\label{figs:r5.png} %\n \\includegraphics[scale=0.40]{figs\/his_len_R_105.pdf}\n } \n \\caption{\n RQ7: Performance of PESRS with different user history length.\n }\n \\label{fig:history-len}\n\\end{figure*}\n\nNext, we address \\textbf{RQ7}, which focuses on the influence of using different lengths of user history.\nWe feed different lengths of user sticker selection history to the model, and we show the model performance for each length in Figure~\\ref{fig:history-len}.\nFrom this figure, we can find that the model performs worse when we feed only $2$ user sticker selection records.\nThe sticker selection prediction performance of the model rises sharply as the history length increases. \nThis indicates that modeling the preference of the user requires a large amount of user behavior patterns. 
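The varying-history experiment above implies that selection histories of different lengths must be brought to a common fixed length before being fed to the memory module. A minimal sketch of such preprocessing is given below; the function name, the zero-padding choice, and the fixed length `T_h` are our illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def pad_history(history, T_h, dim):
    # Keep only the T_h most recent sticker representations and
    # left-pad with zeros when the user has fewer than T_h selections.
    h = np.asarray(history, dtype=float).reshape(-1, dim)[-T_h:]
    out = np.zeros((T_h, dim))
    if len(h) > 0:
        out[T_h - len(h):] = h
    return out

# A user with only 2 past selections, fed to a model expecting T_h = 5:
short = pad_history(np.ones((2, 4)), T_h=5, dim=4)
# A user with 8 past selections is truncated to the most recent 5:
long = pad_history(np.arange(32).reshape(8, 4), T_h=5, dim=4)
```

Keeping the most recent selections (rather than the earliest) matches the intuition that recent behavior is most indicative of current preference.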
\nThe growth of the user behavior sequence helps PESRS better capture sticker selection patterns according to the dialog context.\n\n\n\\subsection{Analysis of User Preference Memory} \\label{subsec:most-select}\n\nNext, we turn to \\textbf{RQ8} to investigate the effectiveness of the user preference modeling module.\nWe propose a simple heuristic method and two variations of our user preference memory module.\n\nTo verify the necessity of using the user preference modeling network, we use a simple heuristic method (\\textit{MostSelected}) which simply uses the sticker most frequently selected by the user as the prediction for the current dialog context.\nThis method does not consider the semantic matching degree between previous dialog contexts and the current dialog context.\nConsequently, the predicted sticker of this heuristic method is not flexible.\n\nThe first variation (\\textit{AverageMem}) is to simply apply an average-pooling layer on all the sticker representations previously selected by the corresponding user:\n\\begin{align}\n r = \\frac{1}{T_h} \\sum^{T_h}_{k=1} { \\hat{O}_{k} }.\n\\end{align}\nThen we use this as the user preference representation and feed $r$ to the final gated fusion layer, as shown in Equation~\\ref{eq:dynamic-fusion}.\n\nThe second variation (\\textit{WeightedMem}) is to remove the key addressing process, and directly apply an attention-then-weighted method on all the user's previously selected stickers.\nThis variation can be split into two steps: (1) calculating attention weights and (2) computing the weighted sum of the stickers.\nWe use the query vector $h$ (shown in Equation~\\ref{eq:mem-query}) to calculate the attention weight for each of the user's previously selected stickers $\\{\\hat{O}_{1}, \\dots, \\hat{O}_{T_h}\\}$, where the query vector $h$ is the same as used in our proposed user preference memory module:\n\\begin{align}\n \\delta_k = \\text{softmax} ( h W_\\delta \\hat{O}_{k} ) ,\n\\end{align}\nwhere $\\delta_k \\in [0, 1]$ is the attention weight for the $k$-th selected sticker.\nThen we apply the 
weighted sum on all the user's previously selected sticker representations:\n\\begin{align}\n r = \\sum^{T_h}_{k=1} { \\delta_k \\hat{O}_{k} } .\n\\end{align}\nFinally, we feed this preference representation $r$ into the final gated fusion layer (Equation~\\ref{eq:dynamic-fusion}).\nNote that the above two variations exclude the histories of dialogue contexts, and we employ these experiments to verify the effectiveness of incorporating such histories.\n\n\\begin{table}[t]\n \\centering\n \\caption{RQ8: Performance of the two variations of the user preference memory module.}\n \\begin{tabular}{@{}lcc cc@{}}\n \\toprule\n & MAP & $R_{10}@1$ & $R_{10}@2$ & $R_{10}@5$ \\\\\n \\midrule\n SRS & 0.709 & 0.590 & 0.703 & 0.872 \\\\\n %\n MostSelected & 0.545 & 0.419 & 0.490 & 0.679 \\\\\n %\n AverageMem & 0.701 & 0.573 & 0.701 & 0.870 \\\\\n %\n WeightedMem & 0.694 & 0.565 & 0.689 & 0.866 \\\\\n PESRS & \\textbf{0.743} & \\textbf{0.632}$^{\\blacktriangle}$ & \\textbf{0.740}$^{\\blacktriangle}$ & \\textbf{0.897} \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:mem-analysis}\n\\end{table}\n\nWe conduct the experiments on these variations and compare with our proposed PESRS and SRS, as shown in Table~\\ref{tab:mem-analysis}.\nFrom this table, we can find that \\textit{MostSelected} performs the worst among all the methods.\nThis demonstrates the necessity of exploring a learning-based method to leverage the user history data to recommend the proper sticker.\nBy comparing \\textit{AverageMem} and \\textit{WeightedMem} with SRS, which does not incorporate the user's history, we find that although these variations leverage the user's history information, they cannot take advantage of these data to boost the performance of sticker selection.\nThe reason is that these methods cannot model the relationship between the current dialog context and the previous history data, and thus cannot determine which history data may be helpful for the current context.\n\n\\subsection{Sticker 
Classification and Emotion Diversity}\n\\label{subsec:emoji-analysis}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.45]{figs\/emoji-dist.pdf}\n \\caption{\n RQ9: Number of the used emoji labels over stickers in the training dataset (top 50 emojis of 893 unique emojis in total). \n %\n }\n \\label{fig:emoji-dist}\n\\end{figure}\n\nFinally, we turn to \\textbf{RQ9}.\nIn this dataset, the sticker authors give each sticker an emoji label that indicates the approximate emotion of the sticker.\nHowever, this label is not a mandatory field when creating a sticker set on this online chatting platform.\nSome authors use a random emoji or one emoji label for all the stickers in the sticker set.\nThus, we cannot incorporate the emoji label and tackle the sticker selection task as an emoji classification task.\nWe randomly sample 20 sticker sets and employ human annotators to check whether the emoji labels in each sticker set are correct, and we find that 2 of these sticker sets have wrong emoji labels for their stickers.\nSince we introduce the auxiliary sticker classification task (introduced in \\S~\\ref{subsec:sticker_encoder}) to accelerate convergence of the model training, we also report the sticker classification performance in this paper.\nNote that, since the emoji label of a sticker may not be correct, the classification performance is \\emph{not accurate}; the results are for reference only.\nThe results of the sticker classification are 65.74\\%, 50.75\\%, 47.02\\%, 61.20\\% for accuracy, F1, recall and precision, respectively.\nThese results indicate that the sticker encoder can capture the semantic meaning of the sticker image.\n\nTo illustrate the diversity of the emotion expressed by the stickers, we use the emoji label as the indicator of the emotion and plot the distribution of the emoji labels of stickers.\nIn Figure~\\ref{fig:emoji-dist}, we only show the top 50 emoji labels used in all the sticker sets in our training 
dataset, and the total number of unique emoji labels is 893.\nFrom Figure~\\ref{fig:emoji-dist}, we can find that there are many stickers with the emoji labels \\includegraphics[scale=0.08]{face-with-tears-of-joy_1f602.png} and \\includegraphics[scale=0.08]{smirking-face_1f60f.png}.\nThe reason is that some sticker authors assign \\includegraphics[scale=0.08]{face-with-tears-of-joy_1f602.png} or \\includegraphics[scale=0.08]{smirking-face_1f60f.png} as the emoji label to all the stickers in their sticker set, as mentioned before (some authors use a random emoji or one emoji label for all the stickers in the sticker set).\n %\n\\section{Conclusion}\\label{sec:conclusion}\n\nIn our previous work, we proposed the task of multi-turn sticker response selection, which recommends an appropriate sticker based on the multi-turn dialog context history without relying on external knowledge.\nHowever, that method only focuses on measuring the matching degree between the dialog context and the sticker image, ignoring the user's preference in using stickers.\nHence, in this paper, we propose the \\emph{Preference Enhanced Sticker Response Selector} (PESRS) to recommend an appropriate sticker to the user based on the multi-turn dialog context and the user's sticker selection history.\nSpecifically, PESRS first learns the representation of each utterance using a self-attention mechanism, and learns the sticker representation with a CNN.\nSecond, a deep interaction network is employed to fully model the dependency between the sticker and the utterances.\nThe deep interaction network consists of a co-attention matrix that calculates the attention between each word in an utterance and each unit in a sticker representation.\nThird, a bi-directional attention is used to obtain the utterance-aware sticker representation and the sticker-aware utterance representations.\nNext, we retrieve the recent user sticker selections, and then propose a user preference modeling module which consists of a position-aware history encoding 
network and a key-value based memory network to generate the user preference representation dynamically according to the current dialog context.\nThen, a fusion network models the short-term and long-term relationships among the interaction results, and a gated fusion layer is applied to dynamically fuse the current dialog interaction results and the user preference representation.\nFinally, a fully-connected layer is applied to obtain the final sticker prediction using the output of the gated fusion layer.\nOur model outperforms state-of-the-art methods, including our previous method SRS, in all metrics, and the experimental results also demonstrate the effectiveness of each module in our model.\nIn the near future, we aim to propose a personalized sticker response selection system. \n\\section*{Acknowledgments}\nWe would like to thank the anonymous reviewers for their constructive comments. \nWe would also like to thank Anna Hennig at the Inception Institute of Artificial Intelligence for her help on this paper. \nThis work was supported by the National Science Foundation of China (NSFC No. 61876196) and the National Key R\\&D Program of China (2020AAA0105200).\nRui Yan is supported as a Young Fellow of Beijing Institute of Artificial Intelligence (BAAI).\n\n\\clearpage\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nAmbitious observational programs are underway to make very precise measurements of the evolution of large scale\nstructures (LSS), e.g. \\cite{Abbott:2005bi,Ivezic:2008fe,Laureijs:2011gra,Levi:2013gra}. One of the main goals is to constrain the nature of dark energy, but at the same time future surveys will also probe the origin of the initial seed for structure formation, which --- very plausibly --- emanated from an early phase of accelerated expansion. 
After the outstanding results from the Planck collaboration \\cite{Aghanim:2018eyx,Akrami:2019izv}, current bounds on primordial non-Gaussianity are still far from reaching a well-motivated physical threshold \\cite{threshold1,threshold2}, which may be difficult to achieve if restricted to observations of the cosmic microwave background (CMB) alone. The study of LSS will then provide not only powerful constraints on the properties of dark energy, but also the next leading probe for early universe cosmology. Regardless of one's motivation, an accurate analytic understanding will indisputably maximize the discovery potential for these remarkable experiments. As a consequence, the voluminous amount of new data has reinvigorated the constant effort to make accurate theoretical predictions in cosmology. This includes the fully non-linear region of structure formation, where simulations have reached an exquisite level, as well as the weakly non-linear regime, where the Effective Field Theory of LSS (EFT of LSS) --- both in Euler \\cite{Baumann:2010tm,Carrasco:2012cv,Carrasco:2013mua, Carrasco:2013sva,Angulo:2014tfa,Foreman:2015lca,Baldauf:2015aha,Baldauf:2015zga,Baldauf:2015tla,error} and Lagrangian space \\cite{left,left2,left3,Zaldarriaga:2015jrj} (see \\cite{review} for a review) --- has pushed forward the frontiers of `precision cosmology'. \\vskip 4pt\n\nIn the EFTofLSS, as in any other EFT \\cite{review}, the imprint of the short-distance physics on long-distance dynamics is encapsulated in a series of (symmetry-motivated) `Wilson coefficients'. In practice, these extra parameters play two roles. On the one hand, they can be chosen to remove the cutoff dependence introduced in an effective approach, simultaneously fixing the errors incurred by pushing the perturbative description beyond its realm of validity. 
On the other hand, the remaining finite part of the Wilson coefficients can incorporate the true knowledge from the non-perturbative regime into the long-distance dynamics. This information can be obtained either from observation or by comparison with a description which is assumed to be valid at short(er) scales. Hence, given a sought-after level of precision, the EFT provides an accurate analytic description of the dynamics up to a finite number of matching coefficients. The EFT formalism has several advantages over numerical methods attempting to cover the entirety of the parameter space over all scales. Not only does the EFT approach provide an analytic description of the problem in a `universal' framework with systematic power-counting, but the EFT formalism is also naturally suited to scanning over different cosmologies and initial conditions to a high level of accuracy. Therefore, unlike standard perturbation theory (SPT) beyond leading order \\cite{Bernardeau:2001qr} (where the perturbative (or loop) expansion is ill-defined), the EFT approach makes cosmological perturbative expansions a controlled theoretical framework.\\vskip 4pt \n\nIn a body of recent work, initiated in \\cite{Carrasco:2013mua,Carrasco:2013sva}, the EFTofLSS was studied up to two loops, and shown to describe the evolution of dark matter density perturbations in the mildly non-linear regime with great success \\cite{Angulo:2014tfa,Foreman:2015lca,error,Leofin}. In this paper, we continue the quest for accuracy by studying the power spectrum within the EFT approach to three loop orders. The SPT calculation at three loops was carried out in \\cite{Blas:2013aba}, yet without cutting off the region of integration where the perturbative expansion is not expected to be valid. We point the reader to \\cite{Blas:2013aba} for details on the diagrammatic and computational tools needed to achieve the third order in perturbation theory. 
In this paper we implement instead the EFTofLSS, introducing a series of counter-terms to remove the unwanted contributions from modes outside the realm of validity of SPT, and at the same time to properly incorporate the true non-linear information. As expected, precision increases with respect to the two loop results. As we shall see, the agreement holds with high precision up to $k \\simeq 0.4\\,h$\\,Mpc$^{-1}$ at redshift $z=0$.\\vskip 4pt\n\nWhile this is a remarkable achievement of the EFTofLSS, we argue here that --- even corrected by the EFT methodology --- there is strong indication that the perturbative series is an asymptotic expansion reaching its maximum predictive power at weakly non-linear scales. (Similar speculations have been made in \\cite{Sahni:1995rr, Pajer:2017ulp}.) As a consequence, we do not expect higher loop orders to have a positive impact, but rather to produce a departure from the true answer at moderate values of $k$. As circumstantial evidence for this behavior, we find that the results up to two loop orders can match the numerical data with great precision up to high scales, $k \\simeq 0.7\\,h$ Mpc$^{-1}$, while at three loop orders --- and after including the necessary counter-terms --- the $\\chi^2$ of the best fit presents a sharp increase for $k \\gtrsim 0.55\\, h$\\, Mpc$^{-1}$. We argue the reason for the mismatch is not due to short-distance contributions, which we may be overlooking, but rather to large effects from mildly non-linear scales. \nHence, our findings suggest that the EFTofLSS to three loop order provides the most accurate analytic model for the (deterministic part of the) dark matter power spectrum in the weakly non-linear regime at $z=0$. The study of higher $n$-point functions would be required to analyze the behavior of the perturbative expansion more thoroughly.\\vskip 4pt\n\nThis paper is organized as follows. In sec. 
2 we discuss the general renormalization scheme and matching procedure we will utilize to extract the value of the extra parameters of the effective theory up to a given loop order. In sec. 3 we move on to the development of the EFTofLSS to three loops, and in particular the counter-terms that will be needed to properly remove the unwanted UV part of the SPT computation and incorporate the true information from non-linear scales. In that respect, we perform an `ultraviolet (UV) test' similar to what was proposed in \\cite{Foreman:2015lca} to ensure we incorporate all of the necessary counter-terms that handle the UV-sensitivity of the SPT results. We provide further details in appendix A. In sec. 4 we match the EFTofLSS up to two and three loop orders to numerical simulations up to a given fitting scale $k_{\\rm fit}$. We include the IR-resummation of \\cite{Senatore:2014via,Cataneo:2016suz, Senatore:2017pbn}, which removes large oscillations in the power spectrum. We find a high level of accuracy up to scales of order $k \\simeq 0.5\\, h\\,$\\,Mpc$^{-1}$. We discuss the input of non-linear information as well as the possibility of overfitting the data. A comparison between the two and three loop results is performed, and we show that adding the three loop effects improves the results for scales up to $k \\sim 0.4\\, h\\,$\\,Mpc$^{-1}$, although the increase in accuracy is somewhat marginal. At the same time we show there is circumstantial evidence of the asymptotic nature of the perturbative expansion in the EFTofLSS. We elaborate on this point in sec. 5, where we discuss in more detail the properties of the perturbative series up to three loop orders. In contrast to the `on-shell' prescription of \\cite{Foreman:2015lca}, throughout this paper we use a generalized renormalization scheme which incorporates the {\\it running} (or scale dependence) of the individual counter-terms, thus allowing us to extend the reach of the perturbative expansion. 
In appendix B we analyze the power spectrum using the on-shell scheme. We reproduce their results up to two loop orders and show that our results to three loop orders are essentially unmodified using the on-shell prescription. \n\\section{Matching procedure \\label{sec:fit}} \n\nIn order to match the analytic computations within the EFT approach to numerical data\\footnote{We use the (new) `Horizon' Simulation data given in \\cite{horizon}.} we introduce the $\\chi^2$ of the fit, given by\n\\begin{equation}\n\\label{eq:def_chi_data}\n\\chi^2(c_i(\\Lambda),k_{\\rm fit}) \\equiv \\sum^{N}_{n=1} \\left(\\frac{P_{\\rm data}(k_n) - P^{\\rm EFT}_{\\ell{\\text -}\\rm loop}\n\\left(k_n,c_i(\\Lambda)\\right)}{P_{\\rm data}(k_n)} \\right)^2 \\, ,\n\\end{equation}\nwhich we minimize to a given fitting value, $k_{\\rm fit}\\equiv k_N$, with $N \\gg 1$. $P^{\\rm EFT}_{\\ell{\\text -}\\rm loop}$ is the power spectrum computed in the EFTofLSS up to $\\ell$-loop orders with a cutoff scale $\\Lambda$, whose dependence is absorbed into a series of $i$-independent coefficients, $c_{i}(\\Lambda)$, each one with up to $\\ell$ contributions, depending on the loop order at which each counter-term first enters.\\vskip 4pt\n\nBefore we proceed, let us stress an important point. Our procedure to fit the EFTofLSS to numerical data will be somewhat different from that of previous studies at two loops, e.g. \\cite{Foreman:2015lca}. Unlike what has been done in the literature, we will not fix the higher loop order counter-terms, $c_{i (2)},\\cdots, c_{i (\\ell)}$, as a function of $c_{i (1)}$ through a renormalization condition. Instead, we allow for independent contributions from each one of them. The reason is twofold. First of all, because we use numerical results for the power spectrum, it is difficult to cleanly disentangle the (cutoff dependent) counter-terms which cure the mistakes of SPT from the (finite) renormalized part which accounts for the true non-linear information. 
Therefore the size of the effect from the counter-terms does not respect the naive (loop) power-counting, while the renormalized parameters do. This is related to the second reason, which is the fact that we use a cutoff to define the EFT, and therefore the size of the effect of the counter-terms at high loop orders may be as large as lower loop contributions, simply from the existence of `power-law' cutoff dependence.\\footnote{This could be resolved by working with an approximate analytic expression for the power spectrum, as in \\cite{Carrasco:2013mua}, such that we can explicitly remove the polynomial $\\Lambda$ dependence in the counter-terms.} As a consequence, in what follows we do not distinguish between the counter-terms and the renormalized contributions and treat the extra parameters in the EFT as matching coefficients (although we will loosely refer to them as counter-terms in general). Since in our approach all counter-terms will be fixed by matching, each one of the $c_{i(\\ell)}$'s loses its individual physical significance, and only the sum\n\\begin{equation}\nc_i = c_{i(1)}+ c_{i(2)} + \\cdots + c_{i(\\ell)}\\,,\n\\end{equation}\nwill be physically relevant.\\footnote{The reader should keep in mind that different Wilson coefficients at leading order (i.e. {\\it tree-level}), and beyond, contribute at various loop orders in the SPT counting.} In particular, we will see that each one of them is typically very sensitive to small changes in the (simulated) power spectrum, especially for wave numbers close to the upper limit of the fitting range, while the sum remains much more stable.\\vskip 4pt\n\nFixing the lowest order counter-term by matching, while removing higher order contributions by the renormalization prescription, is akin to an on-shell scheme in quantum field theory. However, in principle any other prescription is equally viable, as long as there is convergence. 
(In general, we expect the difference between various schemes to be a higher order effect in the systematic expansion of the EFT, much like different renormalization schemes.) On the other hand, since one focus of our work is to understand precisely the properties of the perturbative expansion within the EFT framework to high loop orders, addressing whether we have convergence or not should not hinge upon a particular choice of renormalization. In order to bypass this issue, we will not commit to a specific choice and instead allow for running of the individual $\\ell$-loop order counter-terms when fitting to the numerical data.\\footnote{For example, for the sound speed, the two loop parameter $c_{s(2)}$ may be fixed in terms of the one loop counter-term, $c_{s(1)}$, by the requirement that the power spectrum remains unchanged at two loop orders in the limit of small wave number \\cite{Carrasco:2013mua}. In our case we will instead keep both $c_{s(1)}$ and $c_{s(2)}$ as independent parameters. While this increases our freedom, we will also show that the sum, which is the relevant quantity, remains remarkably stable to variations of $k_{\\rm fit}$.} In this sense our choice is conservative, yet we also suffer from potential `over-fitting', something which was already emphasized in \\cite{Foreman:2015lca}. We will attempt to address this issue throughout the paper. In any case, following our motivation to assess the best results the EFTofLSS can possibly achieve, we will not refrain from allowing the most freedom the effective theory can offer. For completeness, we analyze in appendix~\\ref{appB} the three loop result using the renormalization prescription of \\cite{Foreman:2015lca}.\\vskip 4pt\n\nAnother important point for our matching procedure is to determine the uncertainty bands for a given counter-term, and linear combinations thereof. 
In order to do so, we will minimize\n\\begin{equation}\n\\label{eq:def_delta_chi}\n\\Delta \\chi^2 \\equiv c_{i(\\ell)} \\frac{\\partial^2\\, \\chi^2}{\\partial c_{i(\\ell)} \\partial c_{i(\\ell')}} c_{i(\\ell')} \\, ,\n\\end{equation}\nwhile varying the linear combination of counter-terms we are interested in. We will display error bands which correspond to variations in $\\chi^2$ of order $\\Delta\\chi^2 \\simeq 10^{-3}$. While realistically, when fitting data, we are subject to larger errors, we are guided by the self-consistency of the EFT approach to three loop orders for a given choice of matching coefficients.\\footnote{For other (perhaps more realistic) choices, e.g. $\\Delta\\chi_0^2 = 0.1$, the error bands would be a factor of $10$ thicker.} \n\n\\vskip 4pt\nNot all of the $c_i$ coefficients are equally well determined by this procedure. The reason is twofold. First of all, at low values of $k$ their contribution is highly suppressed. Secondly, when they start to become relevant at higher values of $k$, the effect of higher order coefficients which we do not include (see below), together with the failure of the loop expansion, significantly impairs their determination. This leaves some of our Wilson coefficients poorly constrained. We will return to this point in sec.~\\ref{sec:disc}.\n\n\n\\section{Power spectrum to three loops \\label{sec:setup}} \nIn this section we discuss the construction of the EFTofLSS in Euler space, and in particular the number of relevant \ncounter-terms which will be needed to renormalize the SPT computations up to three loop order. 
We will also implement the infrared (IR) resummation inherited from Lagrangian space \\cite{left,Senatore:2014via,Cataneo:2016suz,Leofin}, which improves the predictability of the theory, most notably at three loops.\\vskip 4pt\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.3\\textwidth]{figs\/spt_compare_1l_lambdas.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/spt_compare_2l_lambdas.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/spt_compare_3l_lambdas.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/spt_compare_low_lambdas.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/spt_compare_mid_lambdas.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/spt_compare_high_lambdas.pdf}\n\\caption{\\label{fig:Ps}\n\\small The SPT power spectrum at a given loop order ($P_{\\ell}$) is plotted in the first row, calculated with high ($\\Lambda = 60 \\,h\\, \\rm{ Mpc}^{-1}$), moderate ($\\Lambda = 0.7 \\, h\\,\\rm{ Mpc}^{-1}$) and low ($\\Lambda = 0.3 \\,h\\, \\rm{ Mpc}^{-1}$) cutoffs. The second row compares the size of each loop term using different cutoffs. The kink in the plot is due to a change in sign in the $P_{\\ell}$'s, since we plot the absolute value. The reader will notice that $P_3$ dominates over the other contributions at moderate values of $k$, and only becomes (somewhat) smaller in the low cutoff case. This feature, as we shall see, anticipates many of the conclusions of this paper.}\n\\end{figure}\n\n\\subsection{Cutoff our ignorance}\n\nThe basic idea of the EFT approach is to introduce a cutoff scale, $\\Lambda$, below which the perturbative expansion is under control, and above which some UV information is needed. The cutoff dependence is absorbed into Wilson coefficients, $c_i(\\Lambda)$, up to a given loop order. The theory becomes predictive once a finite set of coefficients, which also incorporate the true non-linear information, is read off either from data or a realization of the full theory.
For the SPT computations at $\\ell$-loop order we have\n\\begin{equation}\n\\label{eq:pspt}\nP^{\\rm SPT}_{\\ell{\\text -}\\rm loop}(k,\\Lambda) = P_0 + P_1(k,\\Lambda) + \\cdots + P_{\\ell}(k,\\Lambda)\\,,\n\\end{equation}\nwhere $P_{\\ell}$ denotes the $\\ell$-loop SPT contribution to the power spectrum, with the integrals appearing in the perturbative expansion cut off at $k< \\Lambda$.\\footnote{As it has been repeatedly emphasized, the finiteness of SPT integrals for a given cosmology does not imply the errors are necessarily small, let alone under theoretical control \\cite{review}.} We will use both a high ($\\Lambda = 60 \\,h\\, \\rm{ Mpc}^{-1}$) and a moderate ($\\Lambda = 0.7 \\, h\\, \\rm{ Mpc}^{-1}$) cutoff, the latter used as a proxy for the non-linear scale. Figure~\\ref{fig:Ps} shows the SPT results at one $(P_1)$, two ($P_2$) and three ($P_3$) loop orders. The reader will immediately notice that $P_3$ is somewhat larger than $P_2$ even for a moderate cutoff, and it becomes smaller only when $\\Lambda \\lesssim 0.3\\, h\\, \\rm{ Mpc}^{-1}$, which we also display for comparison in Fig.~\\ref{fig:Ps}. As we shall see, this fact turns out to have important implications for the convergence of the EFTofLSS, even after all the counter-terms are included.\n\n\n\\subsection{Counter-terms and UV test}\\label{sec:UV}\nOnce the counter-terms, $c_i(\\Lambda)$, are added the power spectrum in the EFTofLSS takes the form, schematically:\n\\begin{equation}\nP^{\\rm EFT}_{\\ell{\\text -}\\rm loop}(k) = P^{\\rm SPT}_{\\ell{\\text -}\\rm loop}(k,\\Lambda) + {\\rm counter}\\text{-}{\\rm terms}(\\Lambda)\\,, \n\\end{equation}\nwhere $P^{\\rm SPT}_{\\ell{\\text -}\\rm loop}(k,\\Lambda)$ is given in \\eqref{eq:pspt}. There are several counter-terms which contribute at three loop orders. However, one can show that many of them are less important for reproducing the data.
For concreteness, the following set can be singled out:\\footnote{There are various notations in the EFT literature for the Wilson coefficients. We will keep using $c_s$ for the sound speed. For terms involving $n$-derivatives we introduce a $c_n$ parameter. For terms quadratic in the perturbations we will also use a `quad' label, as in \\cite{Foreman:2015lca}.}\n\\begin{eqnarray}\n\\label{eq:eft_basis}\n&\\Big\\{ \n4 \\pi c_s^2 k^2 P_0(k), \\quad\n8 \\pi^2 \\bar c_4 k^4 P_0(k), \\quad\n4 \\pi c_{2, \\rm quad} k^2\\, P_{\\rm quad}(k),\\quad & \\\\\n& 4 \\pi^2 c_{\\rm stoch} k^4, \\quad\n16 \\pi^3 \\bar c_6 k^6 P_0(k), \\quad\n8 \\pi^2 c_{4,\\rm quad} k^4 P_{\\rm quad}(k) \n\\Big\\} & , \\nonumber\n\\end{eqnarray}\nwhere $P_0(k)$ is the linear power spectrum, and\n\\begin{eqnarray}\n\\bar c_4 &=& -\\frac12 c_s^4 + c_4 \\, , \\nonumber \\\\\n\\bar c_6 &=& - c_s^2\\, c_4 + c_6 \\, ,\n\\end{eqnarray}\n(see appendix~\\ref{appA} for more details). As we discuss momentarily, these coefficients are sufficient to deal with the UV-sensitivity of the SPT computations. All the above expressions must be expanded up to the relevant loop order. For instance, the sound speed $c_s$ is inherently of one loop order, but it also enters at higher orders and therefore \n\\begin{equation} \n\\label{csum} c^2_s = c^2_{s (1)} + \\cdots + c^2_{s (\\ell)}\\,, \n\\end{equation}\nto $\\ell$-loop order. On the other hand, the coefficients $c_4$, $c_{2,\\rm quad}$ and $c_{\\rm stoch}$ are naturally of two loop order, hence $(\\ell-1)$ coefficients would be needed, and so on. Notice that, at two loops and beyond, we encounter counter-terms which descend from loop corrections to higher $n$-point functions, such as $c_{2,\\rm quad}$ and $c_{4,\\rm quad}$. The leading contribution comes from a contraction with the three-point function, see appendix~\\ref{appA}. Higher order corrections may be obtained from the expansion of the stress tensor at higher orders in the density field, see e.g. 
\\cite{Foreman:2015lca}.\\footnote{In principle, given our set in \\eqref{eq:eft_basis}, $P_{\\rm quad}$ should be computed to next-to-leading order, giving rise to another counter-term, i.e. $c_{2,{\\rm quad} (2)}$. However, we will show that the leading order $P_{\\rm quad}$ is sufficient to renormalize the theory. The same applies to other counter-terms to three loop orders. In fact, one can show that $c_{4,\\rm quad}$ has little impact at three loops, and therefore it will be omitted when we fit to the numerical data. This will also reduce the number of free parameters in the EFT approach.} \\vskip 4pt\n\n\nIn the EFTofLSS we then have one new term at one loop, $c^2_{s (1)}$, four extra coefficients at two loops, $\\{ c^2_{s (2)}, c_{4(1)}, c_{2,{\\rm quad}}, c_{\\rm stoch} \\}$, and three more at three loops, $\\{c^2_{s (3)}, c_{4 (2)}, c_6\\}$. A first check of this choice consists in verifying explicitly whether the UV sensitivity of the SPT results can be absorbed into the counter-terms. For this `UV test' we use the high cutoff SPT result, renormalize it using the EFTofLSS, and then compare against the moderate cutoff answer, as a proxy for the non-linear scale, see Fig.~\\ref{fig:Ps}. As a measure for the fit we use the residuals\n\\begin{equation}\n\\label{eq:def_chi}\n\\chi^2_{\\rm UV\\text{-}test} = \\sum_n \\left(\\frac{P^{\\rm SPT}_{\\ell \\text{-}\\rm loop}(k_n,\\Lambda=0.7 \\, h\\,{\\rm Mpc}^{-1}) - P^{\\rm EFT}_{\\ell \\text{-}\\rm loop}(k_n,\\Lambda=60 \\, h\\,{\\rm Mpc}^{-1})}{P^{\\rm SPT}_{\\ell \\text{-}\\rm loop}(k_n,\\Lambda=0.7 \\, h\\,{\\rm Mpc}^{-1})}\\right)^2 \\, , \n\\end{equation}\nto a given loop order. This is introduced in analogy to the measure in \\eqref{eq:def_chi_data} we use to test the quality of the EFT fit to data. 
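To make the measure in \eqref{eq:def_chi} concrete, a minimal numerical sketch of the sum of squared relative residuals is given below; the function name and the toy arrays are ours, standing in for the actual binned spectra:

```python
import numpy as np

def chi2_uv_test(p_spt_moderate, p_eft_high):
    """Sum of squared relative residuals between the moderate-cutoff SPT
    spectrum and the renormalized high-cutoff EFT spectrum, evaluated on
    a common set of k-bins (arrays of the same length)."""
    residuals = (p_spt_moderate - p_eft_high) / p_spt_moderate
    return np.sum(residuals ** 2)

# Toy numbers only (not the actual spectra): a uniform 0.5% offset per bin
p_mod = np.array([100.0, 80.0, 60.0])
p_eft = p_mod * 1.005
print(chi2_uv_test(p_mod, p_eft))  # approximately 7.5e-05, i.e. 3 bins x (5e-3)^2
```

Note that a single bin where the denominator crosses zero dominates the sum, which is the origin of the spike discussed around Fig.~\ref{fig:UVresiduals}.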
\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=0.3\\textwidth]{figs\/UV_1loop_02.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/UV_2loop_02.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/UV_3loop_02.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/UV_1loop_05.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/UV_2loop_05.pdf}\n \\includegraphics[width=0.3\\textwidth]{figs\/UV_3loop_05.pdf}\n\\caption{\\label{fig:UVresiduals}%\n\\small The residuals from fitting the EFTofLSS with the appropriate counter-terms to the SPT spectra with a moderate cutoff. The top row uses a fit in the momentum range $k \\in [0.02,0.2]\\, h\\,$Mpc$^{-1}$, while for the bottom row\nwe use $k \\in [0.02,0.5]\\, h\\,$Mpc$^{-1}$. The three columns show the one, two and three loop results, respectively.\n}\n\\end{figure}\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=0.45\\textwidth]{figs\/chi2_UV_2L.pdf}\n \\includegraphics[width=0.45\\textwidth]{figs\/chi2_UV_3L.pdf}\n\\caption{\\label{fig:UVmodels}%\n\\small The $\\chi^2$ for the UV test (EFT with high cutoff vs. SPT with moderate cutoff) for different sets of Wilson coefficients. The left and right plots are for two and three loops respectively.\n}\n\\end{figure}\n\n\\vskip 4pt Figure~\\ref{fig:UVresiduals} shows the results for the one, two and three loops for two different fitting regions, $k \\in [0.02,0.2]\\, h\\,$Mpc$^{-1}$ (top) and $k \\in [0.02,0.5]\\, h\\,$Mpc$^{-1}$ (bottom). In all cases, the residuals are below $1\\%$. (This is particularly surprising in the bottom row, since the fitting range comes rather close to the cutoff scale, $\\Lambda = 0.7 \\, h\\,\\rm{ Mpc}^{-1}$.) Notice the spike in the residuals in the two loop results. This is due to a zero in the denominator of the measure introduced in \\eqref{eq:def_chi}, which can be easily removed from the fit to get a better agreement. 
Figure~\\ref{fig:UVmodels} displays the results of the UV test, for models where different counter-terms are removed from the fit.\\footnote{Notice that the step in the accumulated $\\chi^2$ results from the spike in Fig.~\\ref{fig:UVresiduals}, due to the normalization, as mentioned before.} At two loops the UV dependence can be absorbed into $c_s, c_4$ and $c_{\\rm stoch}$. For this UV test, the counter-term associated with $c_{2,\\rm quad}$ has no noticeable effect at two loops. As we will see later, this changes when we perform the matching to the numerical simulations. This means that a renormalized finite contribution will be required to fit the data. At the three loop level, the UV dependence can be removed by adding the sound speed as well as $c_4$ and $c_6$. \nThe remaining counter-terms have little impact on the UV dependence in the momentum range we are interested in. Yet, similarly to the two loop case, a renormalized contribution may be needed.\\vskip 4pt \n\nAs we mentioned, because we cannot separate the counter-term from the renormalized (cutoff independent) contributions, we will match for all of the $c_i$'s without distinction. The UV test, however, tells us that we have included all the counter-terms needed in \\eqref{eq:eft_basis} to account for the UV dependence in the SPT computations. We will show shortly that we have also included those which capture (most of) the true non-linear information in the numerical data. In principle, one could include additional counter-terms (from extra contributions to the stress-energy tensor in the Euler equation). This can potentially improve the fit to the data. However, while the perturbative expansion is under control, we do not expect their impact to be large. The reason is the following. 
Any new term in the EFT expansion with a parameter without a significant cutoff dependence (such as renormalized contributions carrying true short-distance information, as opposed to pure counter-terms) will be suppressed by powers of the expansion parameter (in our case the density perturbation) relative to those with counter-terms (or lower orders). If, upon adding these new parameters, we were to find large contributions without a counter-term part (needed to fix the SPT UV behavior), this would be an indication of the failure of the perturbative series. Therefore, we will fit the numerical data assuming this is not the case, and resort only to the counter-terms needed to remove the cutoff dependence of the perturbative expansion, together with their corresponding finite (renormalized) parts.\n\n\n\\section{Comparison with simulations\\label{sec:results}} \n\nWe present now the results of the fit to numerical data at redshift $z=0$, and the subsequent determination of the counter-terms of the EFTofLSS using the SPT results with the large cutoff.\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.6\\textwidth]{figs\/all_loops.pdf}\n\\caption{\\label{fig:chi2summary}%\n\\small The cumulative $\\chi^2$ as a function of the upper bound of the fitting range for the $\\ell$-loop EFT results, with $\\ell = 1,2,3$.}\n\\end{figure}\nIn Figure~\\ref{fig:chi2summary} we display the resulting $\\chi^2$ including all our available EFT parameters at one, two and three loop orders, respectively. The range of $\\chi^2$ chosen is such that the prominent increase shown in the plot is indicative of residuals moving above the $1\\%$ level, which is the accuracy we are aiming at in this paper. We immediately see that the EFTofLSS to two loops provides an excellent fit to the data up to wavenumbers of order $k\\simeq 0.9\\, h\\,$Mpc$^{-1}$. (There is, however, some reliance on an unphysical (negative) value for the stochastic term for $k \\gtrsim 0.7\\, h$ Mpc$^{-1}$.) 
From the plot, we also conclude that the EFTofLSS to three loop orders shows some improvement with respect to the two loop results in the regime $0.2 \\lesssim k \\lesssim 0.4\\,h\\,$Mpc$^{-1}$. Nevertheless, the increase in accuracy is somewhat marginal. At the same time, around $k \\simeq 0.55\\, h\\,$Mpc$^{-1}$, the power spectrum to three loops starts to be in tension with the numerical data. We will return to the reasons for the discrepancy in sec.~\\ref{sec:disc}.\\vskip 4pt\n\n\\subsection{Non-linear information}\n\nIn Fig.~\\ref{fig:chi2models} we show several EFT models at two and three loop orders with only a subset of Wilson coefficients. For comparison, we also provide the $\\chi^2$ associated with the UV test, which gave us the necessary counter-terms to remove the short-distance sensitivity of SPT. The difference between the UV test and the fit to the simulations also accounts for the non-linear information in the numerical data. As we see, the relative difference is small, which means the extra matching coefficients to two loop orders not only correct the SPT results, but also incorporate all of the relevant non-linear information to a very good level of accuracy.\\vskip 4pt\n\nFor the two loop results, in the left panel of Fig.~\\ref{fig:chi2models}, we immediately realize that the sound speed is no longer sufficient, and the fit to the power spectrum beyond one loop order can be significantly improved, in principle up to $k \\simeq 0.9 h\\,$Mpc$^{-1}$, by including more EFT coefficients. We notice, however, that at this high $k$ the value of the stochastic term starts to influence the results (see e.g. \\cite{error}). We comment on this below. To three loops, shown in the right panel of Fig.~\\ref{fig:chi2models}, amusingly, adding just the sound speed provides a relatively good fit to the data up to $k \\simeq 0.55 h\\,$Mpc$^{-1}$. 
As we noticed earlier, when we performed the UV test, $c_s$~alone is not sufficient to remove the (incorrect) UV portion of the SPT calculation. Therefore, the reason for the apparent success of the fit is twofold. Firstly, there is non-linear information in the numerical data which resembles the $k^2$ behavior of the $c_s$ counter-term and, secondly, there are partial cancellations between the part of the counter-terms which corrects the SPT result with high cutoff, and the true non-linear information which is also encoded in the matching coefficients. Notice that, after all, the fit does improve as more parameters are added. Nonetheless, as anticipated in Fig.~\\ref{fig:chi2summary}, and starting around the same value, $k \\simeq 0.55 h$ Mpc$^{-1}$, the result to three loops starts to deviate from the data, even when all the counter-terms from the UV test are included. As we argue here, the reason for the mismatch is not the UV sensitivity, or a lack of counter-terms, but rather SPT contributions with loop momenta of the order of the external momentum of the power spectrum.\n\n\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=0.45\\textwidth]{figs\/2loops_results.pdf}\n \\includegraphics[width=0.45\\textwidth]{figs\/3loops_results.pdf}\n\\caption{\\label{fig:chi2models}%\n\\small Same as Figure~\\ref{fig:chi2summary}, but for different combinations of counter-terms. For comparison, we also include the result for the UV test (see text).}\n\\end{figure}\n\n\n\\subsection{(Over)fitting the data}\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L\/cs_2L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L\/c4_2L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L\/cquad_2L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L\/cstoch_2L.pdf}\n\\caption{\\label{fig:cs2L}%\n\\small The fit for the relevant EFT coefficients up to two loops in the range $k_{\\rm fit} \\in [0.2,1]\\, h\/$Mpc. 
The error bands are determined as explained in the text. }\n\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L_nonstoch\/cs_2L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L_nonstoch\/c4_2L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_2L_nonstoch\/cquad_2L.pdf}\n\\caption{\\label{fig:cs2L_wostoch}%\n\\small Same as in Fig.~\\ref{fig:cs2L}, but without the stochastic term.}\n\n\\end{figure}\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_3L\/cs_3L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_3L\/c4_3L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_3L\/cquad_3L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_3L\/cstoch_3L.pdf}\n\\includegraphics[width=0.39\\textwidth]{figs\/rg_3L\/c6_3L.pdf}\n\\caption{\\label{fig:cs3L}%\n\\small Same as Fig.~\\ref{fig:cs2L} for the three loop counter-terms. Due to large errors and degeneracies we display their values in the range $k_{\\rm fit} \\in [0.3,1]\\, h\/$Mpc.}\n\\end{figure}\n\nAs we mentioned earlier, our approach to fitting the numerical data gives us more leeway for the value of the matching coefficients at a given loop order. Therefore, in principle, we are subject to some degree of overfitting. In order to explore whether that is the case, we studied the behavior of each of the $\\ell$-loop order contributions to the EFT coefficients as a function of the fitting range. For comparison, we also provide the value of the total sum, see \\eqref{csum}. The results at two loops are shown in Fig.~\\ref{fig:cs2L}, and in Fig.~\\ref{fig:cs2L_wostoch} without the stochastic term, and in Fig.~\\ref{fig:cs3L} at three loops. 
The errors are computed as explained in sec.~\\ref{sec:fit}.\\vskip 4pt Unfortunately, since we are unable to disentangle the counter-term from the true non-linear information, the contributions from the Wilson coefficients become important at values of $k$ lower than expected by power-counting, for the sole purpose of removing the SPT (mis-)behavior. Hence, our ability to determine each coefficient from the power spectrum alone deteriorates.\\footnote{In cases where the linear theory can be solved exactly, without resorting to numerical input, the perturbative expansion is written in terms of analytic functions. The counter-term and renormalized coefficients can then be properly disentangled. Once the counter-terms cancel the unwanted contribution from the loop integrals, the perturbative expansion obeys a well-defined power-counting in the coupling constant(s) of the theory, as long as they remain small.} This is exacerbated by the many new degeneracies between counter-terms at high loop~orders. In~spite of these caveats, up to two loops the situation is relatively under control. This is partially due to the remarkable determination of the (total) sound speed at lower values~of~$k$. As we see in the plots, the other coefficients start to become relevant at $k \\simeq 0.3-0.4 \\, h$ Mpc$^{-1}$, and their values remain relatively stable until $k \\simeq 0.9 h\\,$Mpc$^{-1}$, where there is a noticeable shift. This is consistent with the behavior of the $\\chi^2$ seen in Fig.~\\ref{fig:chi2summary}. We also notice the reduction of the error bars, which is due to the scaling with $k$ of each of the (derivatively coupled) counter-terms. In the spirit of \\cite{Foreman:2015lca}, we interpret the sudden change in the counter-terms as an indication of overfitting.\\footnote{Notice that, in contrast, the kink in the matching coefficients occurs at a value of $k$ higher than the one reported in \\cite{Foreman:2015lca}. 
We attribute the different limiting values to the different choice of renormalization scheme. We have also studied the renormalization scheme introduced in \\cite{Foreman:2015lca}, and reproduce their results at two loops. We also analyzed the power spectrum to three loop orders using their approach to fix the counter-terms, see appendix~\\ref{appB}.}\nWe should emphasize, however, that there is a non-trivial sensitivity to the value of the stochastic term for \n$k \\gtrsim 0.6\\, h$ Mpc$^{-1}$. Moreover, the best-fit value for $c_{\\rm stoch}$ becomes negative at high values of $k$, even after taking into account the part due to the UV counter-term (see Fig.~\\ref{fig:UVmodels}). This goes against the expectation of a positive (renormalized) stochastic coefficient \\cite{Foreman:2015lca}. Nonetheless, we have checked that omitting the stochastic term produces similar results for the other coefficients and overall fit up to $k \\simeq 0.7\\, h$ Mpc$^{-1}$, where potential overfitting starts to show up, see Fig.~\\ref{fig:cs2L_wostoch}. This is consistent with Fig.~\\ref{fig:chi2models}, which suggests that while the UV test requires a stochastic term to correct SPT in the hunt for high accuracy, the EFT matching to the power spectrum alone does not entirely capture the correct renormalized contribution. In view of the lack of additional information, our results provide a very good fit to the data up to $k \\simeq 0.7\\, h$ Mpc$^{-1}$ to two loops.\\vskip 4pt\n\nFor the three loop results, on the other hand, the failure of a systematic loop expansion in SPT, already at relatively low values of $k$, complicates the independent determination of the counter-terms and extraction of the true non-linear information. While the (total) sound speed remains very well determined, the other coefficients are less constrained by the data from the power spectrum, leading to large uncertainties at low values of $k$ (not shown). 
It is only in a somewhat small window, around $k \\in [0.3,1]\\, h$ Mpc$^{-1}$, that the Wilson coefficients (other than the sound speed) can be determined more accurately, which is displayed in Fig.~\\ref{fig:cs3L}. In particular, we notice that the terms which first appear at two loops, $c_4$ and $c_{2,\\rm quad}$, are much better extracted than $c_6$.\\footnote{We have included the loop correction, $c_{4(2)}$, but we found that adding $c_{4,{\\rm quad}}$ did not change our results significantly.} This is not surprising, given the high power of $k$ involved for the terms associated with the latter coefficient, which only become relevant for $k \\gtrsim 0.5\\, h $Mpc$^{-1}$. This is consistent with what we observed in Fig.~\\ref{fig:UVmodels}, where the additional term is required for the UV test, but only at higher values of $k$. Similarly to the results at two loops, but somewhat earlier, there is a clear shift in the value of the Wilson coefficients, more prominently for $c_4$ and $c_{2,\\rm quad}$, around $k \\simeq 0.55 h\\,$Mpc$^{-1}$. There is also another large variation near $k \\simeq 0.85 h\\,$Mpc$^{-1}$, more prominently for $c_6$. These results are consistent with our findings for the overall $\\chi^2$ in Fig.~\\ref{fig:chi2summary}. We have also checked that omitting the stochastic term leaves the plots for the Wilson coefficients essentially unaltered up to $k \\simeq 0.55 h\\,$Mpc$^{-1}$. 
We will return to this point in sec.~\\ref{sec:disc}.\n\n\n\\subsection{IR-resummation}\n\nAn important aspect in implementing the EFT approach in Euler space is the so-called IR-resummation, introduced in \\cite{Senatore:2014via,Cataneo:2016suz, Senatore:2017pbn} following the construction of the EFT in Lagrangian space in \\cite{left} (where the resummation is manifest).\n\\begin{figure}[ht]\n\\centering\n \\includegraphics[width=0.6\\textwidth]{figs\/residues_max70_fullct_IR.pdf}\n\\caption{\\label{fig:IR_resum}%\n\\small The result of the best fit with (continuous line) and without (dashed line) IR-resummation at different loop orders.\n}\n\\end{figure}\nIn Figure~\\ref{fig:IR_resum}, we show the impact of the IR-resummation procedure on the EFT approach. The continuous lines indicate the result after fitting to the simulation {\\it with} the IR-resummation, while the dashed lines display the best fit {\\it without} performing the resummation. The values for the counter-terms do not vary significantly, with or without resummation. However, the oscillatory behavior is nicely removed by the procedure. The reader will note that the amplitude of the oscillations increases with the loop order in the expansion. At one loop, they are of the order of $1\\%$, while they reach $2\\%$ at two loops and as high as $10\\%$ for the three loop results, mainly at high values of $k$. Notice that some of these features are due to integration noise in the three loop results. Even though the accuracy of the three loop results is by itself at the level of $10^{-3}$, a large portion of the three loop result is absorbed into the counter-terms, which increases the noise to a few percent (c.f.~Fig.~\\ref{fig:Ps}). \nIn short, the IR-resummation and IR-safe integrals were vital at three loops to improve the quality of the matching. 
This provides yet another clue that the long-distance behavior, as opposed to the non-linear dynamics, is failing in SPT (without resummation) at high loop orders.\n\n\n\n\\section{Discussion \\label{sec:disc}} \n\nWe have computed the power spectrum in the EFTofLSS to three loop orders. We show the residuals in Fig.~\\ref{fig:residuals} for the best fits to the data using our renormalization procedure, both for the two and three loop cases, with $k_{\\rm fit} = 0.35\\, h$~Mpc$^{-1}$ at redshift $z=0$. The values for the counter-terms are given by:\n\\begin{eqnarray}\nc_{s(1)}^2 &=&1.60 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2, \\quad c_{s(2)}^2 = -0.27 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2, \\quad c_{s(3)}^2 = -3.54 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2,\\nonumber\\\\\n\\quad c_{4(1)} &=& -1.82\\, \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^4, \\quad c_{4(2)} = 2.67 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^4\\,,\\nonumber \\\\ \\quad c_{2,\\rm quad} &=& 1.66 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2, \\quad c_{6} = 1.94\\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^6\\,,\n\\end{eqnarray}\nfor the three loop results.\\footnote{For the best fit to two loops the counter-terms are given by: \\begin{eqnarray}\nc_{s(1)}^2 &=&0.97 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2, \\quad c_{s(2)}^2 = -1.11 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2\\,, \\nonumber\\\\ \n\\quad c_{4} &=& 0.34\\, \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^4, \\quad c_{2,\\rm quad} = 1.0 \\left(\\frac{1}{h \\textrm{Mpc}^{-1}}\\right)^2\\,.\\nonumber\n\\end{eqnarray} } \nWe notice that the level of accuracy at three loop orders increases with respect to two loops at redshift $z=0$, although the improvement is somewhat marginal. We also notice that adding the three loop order makes the best fit rapidly deviate from the numerical data, unlike the two loop result which is much better behaved, see also Fig.~\\ref{fig:chi2summary}. 
We interpret this as evidence of the asymptotic nature of the loop expansion in the EFTofLSS, as we discuss here in more detail.\\vskip 4pt \n\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.6\\textwidth]{figs\/residues_max35_allct_wostochc4quad.pdf}\n\\caption{\\label{fig:residuals}%\n\\small The best fit to the data with $k_{\\rm fit} = 0.35 \\, h$ Mpc$^{-1}$ using our renormalization scheme up to two (green) and three (blue) loop orders, respectively. The one loop result (in red), in contrast, is fitted in the range $k \\in [0.1, 0.3]\\, h$ Mpc$^{-1}$. The results are essentially unaltered by the presence of a stochastic term, which is omitted here. Notice that while the three loop results provide a somewhat better fit to the data in the region $k< k_{\\rm fit}$, the two loop power spectrum (with our renormalization scheme) is better behaved beyond that point (see text).}\n\\end{figure}\n\n\\subsection{Perturbative expansion}\n\nThe parameters that control the EFTofLSS are given by \\cite{left}\n\\begin{eqnarray}\n\\epsilon_{s>}(k) &=& k^2\\int_{k}^{\\infty} \\frac{d^3p}{(2\\pi)^3} \\frac{P_0(p)}{p^2}\\,, \\nonumber \\\\\n\\epsilon_{s<}(k) &=& k^2\\int_{0}^k \\frac{d^3p}{(2\\pi)^3} \\frac{P_0(p)}{p^2} \\,,\\\\\n\\epsilon_{\\delta<}(k) &=& \\int_{0}^k \\frac{d^3p}{(2\\pi)^3}P_0(p)\\,,\n\\end{eqnarray}\nwhere $P_0$ is the leading order (tree-level) power spectrum. For instance, let us take the power spectrum at one loop, $P_1$, which includes the sum of two diagrams, $P_{13}$ and $P_{22}$. 
In the limit where the loop momentum, $p$, is much larger than the external momentum, $k$, we find \n\\begin{equation}\nP_1(k) = P_{13}(k) + P_{22}(k) \\xrightarrow{k \\ll p}\n P_0(k) \\, \\epsilon_{s>}(k),\n\\end{equation}\nwhile in the opposite limit,\n\\begin{equation}\nP_{1}(k) \\xrightarrow{k \\gg p} P_0(k)\\epsilon_{\\delta<}(k)\\,.\n\\end{equation}\nIn terms of the EFTofLSS, the behavior due to short-distance modes, encapsulated in $\\epsilon_{s>}$, is absorbed into a series of {\\it local} (derivatively-coupled) terms and Wilson coefficients. The latter each include a {\\it finite} piece responsible for the true non-linear information, but also a counter-term to correct the SPT contribution beyond the non-linear scale. Notice that $\\epsilon_{s<}$ does not appear in the final expression. This is due to a partial cancellation between the two contributions at equal time. In our universe, however, the parameter $\\epsilon_{s<}$ is indeed the largest (see Fig.~\\ref{fig:eps}). This means that a proper treatment of the perturbative expansion is mandatory, in order to correctly account for all physical effects. In Lagrangian space, the perturbative expansion does not rely on $\\epsilon_{s<}$ being small and it is automatically resummed. The IR-resummation of \\cite{Senatore:2014via} translates this feature into Euler space, removing the oscillatory behavior due to the BAO scale (see Fig.~\\ref{fig:IR_resum}).\\vskip 4pt\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figs\/expansion_parameters.pdf}\n\\caption{\\label{fig:eps}%\n\\small Parameters measuring the amplitude of non-linear corrections on a mode of wavenumber\n$k$, computed for our universe at $z=0$. They quantify the motions created by modes shorter, $\\epsilon_{s >}$, and longer, $\\epsilon_{s <}$, than $1\/k$, and the tides, $\\epsilon_{\\delta <}$, from larger scales. 
See \\cite{left,review} for more details.}\n\\end{figure}\nFinally, $\\epsilon_{\\delta <}$ is the long-distance expansion parameter of the effective theory. For example, at $\\ell$-loop order the renormalized result scales as:\\footnote{The ellipses include logarithmic corrections which may also play an important role in determining the precise region of analytic control.}\n\\begin{equation}\n\\label{eq:al}\nP_{\\ell}(k) \\xrightarrow{k \\gg p} a_{\\ell}\\, P_0(k)\\, (\\epsilon_{\\delta<}(k))^{\\ell}+\\cdots\\,,\n\\end{equation}\nand therefore $\\epsilon_{\\delta<}(k)$ must remain somewhat small in order to have a well-defined perturbative series. We see in Fig.~\\ref{fig:eps} that around $k \\simeq\\, 0.4\\, \\text{-}\\, 0.6 \\,h\\, \\rm{ Mpc}^{-1}$ we have $\\epsilon_{\\delta<} \\simeq 1$. However, at these values of $k$ the non-linear behavior plays an important role in determining the precise location of the scale at which perturbation theory breaks down, depending on the convergence properties of the series expansion. This is, after all, one of the main motivations behind the EFTofLSS. Yet, we observe that the perturbative series in the EFTofLSS to three loop orders already starts to deteriorate significantly for $k \\gtrsim 0.55\\, h$ Mpc$^{-1}$ at $z=0$, which also translates into the much more rapid deviation from the data seen in Fig.~\\ref{fig:residuals} at lower values of $k$, with $k_{\\rm fit} \\simeq 0.35\\, h$ Mpc$^{-1}$. The sharp increase in the $\\chi^2$ also occurs in the region where $\\epsilon_{\\delta <}$ is order one (see Fig.~\\ref{fig:eps}). However, at the same time, the computation up to two loops can be extended, somewhat surprisingly, to higher values of $k$ with our (more generic) renormalization scheme. 
Since, as we see in Fig.~\\ref{fig:eps}, the dependence on $k$ of $\\epsilon_{\\delta <}$ is rather mild for $k \\gtrsim 0.1\\, h$Mpc$^{-1}$, the precise region of analytic control could in principle be at higher values of $k$, depending on the exact location of the (non-linear) scale at which the perturbative expansion ought to break down. In general, the behavior of the series expansion as a whole, even after renormalization, depends on the form of the $a_{\\ell}$'s in \\eqref{eq:al}, e.g.~\\cite{Blas:2013aba,Sahni:1995rr,Pajer:2017ulp, Pietroni:2018ebj}. In an EFT approach these coefficients commonly grow at each loop order, leading to an asymptotic behavior for the series expansion. We confirm this expectation in the present~paper.\\vskip 4pt\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{figs\/cs_test.pdf}\n\\caption{\\label{cstest}%\n\\small We plot the power spectrum in SPT at $\\ell$-loop order computed with a high cutoff after removing the associated (leading) $c^2_{s(\\ell)}(\\Lambda) k^2 P_0$ counter-term found by fitting in the low $k$ region, which we denote as $\\bar P_{\\ell}$. As expected, the naive power counting is restored, yet there is a significant increase in $\\bar P_3$ at $k \\gtrsim 0.1\\, h\\,$Mpc$^{-1}$. As we see in Fig.~\\ref{fig:Ps} with a low cutoff, this behavior is due to large IR contributions rather than the need for additional UV counter-terms.}\n\\end{figure}\n\nThe reason for the (putative) asymptotic behavior of the renormalized perturbative series is {\\it not} due to an incorrect treatment of short-distance modes --- which are naturally incorporated in the EFTofLSS --- but instead due to the nature of the perturbative expansion itself.\\footnote{As an example, let us consider a scaling universe: $P^{\\rm s}_0(k) = A k^n$, with $k^3 P^{\\rm s}_0(k) \\simeq \\left(k\/k_{\\rm NL}\\right)^{n+3}$, such that $\\epsilon_{\\delta<}(k)\\simeq \\left(k\/k_{\\rm NL}\\right)^{3+n}$. 
The power spectrum at $\\ell$-loops, after renormalization, scales like\n\\begin{equation}\nP^{\\rm s}_{\\ell}(k) \\simeq a^{\\rm s}_\\ell \\left(k\/k_{\\rm NL}\\right)^{(n+3)\\ell} P^{\\rm s}_0(k)\\,.\n\\end{equation}\nA proxy for the value of the $a_\\ell$'s for a scaling universe can be obtained from the one dimensional case, where $a^{\\rm s}_\\ell \\simeq \\left(a_n\\ell\\right)^{- n\\ell}$, with $a_n$ a numerical factor depending on $n$ \\cite{Foreman} (see Eq. 3.24 in \\cite{Pajer:2017ulp}). A good approximation of the power spectrum near the non-linear scale is given by $n\\simeq -2$ \\cite{Carrasco:2013mua}, and therefore $a^{\\rm s}_\\ell \\simeq \\tilde a_{-2}^\\ell \\sqrt{\\ell} (2\\ell)!$, after using Stirling's approximation and absorbing numerical factors into $\\tilde a_n$. This scaling with $\\ell$ is the hallmark of an asymptotic series, with an {\\it optimal} number of loops given by $\\ell_{\\rm opt} (k) \\simeq 1\/|\\tilde a_{-2}\\epsilon_{\\delta<}(k)|$. Since at the scales of interest we have $\\epsilon_{\\delta<} \\simeq 1$, it is then not surprising to find $\\ell_{\\rm opt} \\simeq 3$ for our universe.} Physically, the renormalized power spectrum is dominated by loop momenta in the range between the peak of the linear spectrum and a mildly non-linear scale, where lower loop orders provide the best approximation to the data. We notice that, after removing the leading $c_s$ counter-term from each one of the $P_{\\ell}$'s (see Fig.~\\ref{cstest}), the naive power counting is restored at low $k$, yet there is a rapid increase in the contribution at three loops as we move to higher wavenumbers. 
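The optimal-truncation estimate quoted in the footnote is easy to illustrate numerically. A minimal sketch (the prefactor `a` below is an illustrative choice, roughly tuned so that $\ell_{\rm opt}\simeq 3$ at $\epsilon_{\delta<}=1$ as stated in the text; it is not a value extracted from the data):

```python
def loop_term(ell, eps, a=0.12):
    # |a_ell * eps^ell| for a scaling universe with n = -2, modeling
    # a_ell ~ (a * ell)^(2*ell); the factor a = 0.12 is illustrative only.
    return (a * ell) ** (2 * ell) * eps ** ell

def optimal_loops(eps, max_ell=10):
    # Order at which the term of the asymptotic series is smallest;
    # adding loops beyond this point makes the approximation worse.
    return min(range(1, max_ell + 1), key=lambda ell: loop_term(ell, eps))

print(optimal_loops(1.0))  # -> 3: near eps ~ 1 the series is best truncated early
```

For smaller expansion parameters the optimal order moves to higher loops, consistent with $\ell_{\rm opt}\simeq 1/|\tilde a_{-2}\epsilon_{\delta<}|$.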
Some of this behavior is cured by extra counter-terms; however, an important contribution comes from modes near the non-linear scale.\\footnote{This is consistent with the expectation in the soft limit discussed in \\cite{Ben-Dayan:2014hsa,Garny:2015oya}.} This is also evidenced in Fig.~\\ref{fig:Ps}, where the relative size between $P_{\\ell}$ and $P_{\\ell+1}$ is reduced at higher loop orders even after implementing a low cutoff ($\\Lambda \\simeq 0.3\\, h$ Mpc$^{-1}$).\\vskip 4pt\n\nWe should emphasize, nonetheless, that for wavenumbers below $k \\simeq 0.4\\, h$ Mpc$^{-1}$ the EFT results to three loop orders (slightly) outperform the two loop results, see Fig.~\\ref{fig:chi2summary}. At the same time, it is not too surprising that an EFT approach with more Wilson coefficients can improve the situation in this regime.\\footnote{\\label{footv} Other counter-terms, which we have not included in the matching, can in principle impact the resulting $\\chi^2$. For instance, we have checked that adding the leading $c_{4,\\rm quad}$, or velocity-dependent counter-terms (see appendix~\\ref{appA}), both improve the fit with respect to the two loop results, up to $k \\simeq 0.5\\, h$ Mpc$^{-1}$, but only by a small amount.} Yet, this conclusion turns out to be independent of the renormalization scheme, and therefore quite robust. In fact, as we show in appendix~\\ref{appB}, similar wavenumbers can also be reached to three loop orders with the on-shell prescription of \\cite{Foreman:2015lca}, and with no additional counter-term other than~$c^2_{s(3)}$. This is an indication that long-distance effects are becoming important before higher order Wilson coefficients have a chance to kick in, which we would have expected to play an important role at high values of $k$ according to the UV test (see Fig.~\\ref{fig:UVmodels}).\\footnote{We already commented on the fact that $c_s$ alone provides a relatively good fit to data to three loops, see Fig.~\\ref{fig:chi2models}. 
This feature appears to translate to the on-shell scheme, where adding $P_3$ and only $c^2_{s(3)}(c^2_{s(1)},k_{\\rm ren})$ outperforms the two loop result with the same number of independent parameters. However, once again, we observe similar behavior as in our analysis with a more generic scheme, and around the same scale (see appendix~\\ref{appB}).} Hence, while we find that we can extend the reach of perturbation theory previously found in \\cite{Foreman:2015lca}, our analysis using a more generic renormalization procedure demonstrates not only that the accuracy of the two loop results was somewhat underestimated in \\cite{Foreman:2015lca}, but also that the value $k \\simeq 0.4\\, h$ Mpc$^{-1}$ is plausibly the best we can achieve in terms of accuracy of the EFTofLSS~at~$z=0$.\\vskip 4pt In conclusion, we do not expect higher orders to change our main results, but rather to further deteriorate the matching to the data, such that the perturbative EFT derivation will start to depart from the true answer at even lower values of~$k$. This is made worse by our inability to disentangle the finite part of the Wilson coefficient (carrying the $\\epsilon_{s>}$-dependent renormalized information), together with the failure of SPT manifesting itself at lower values~of~$k$ than expected by power-counting. As a result, the explicit counter-term part of the Wilson coefficient that removes the unwanted contributions from SPT will be needed earlier than expected (except perhaps for those involving higher powers of $k$). This introduces large degeneracies and errors which will hinder the quality of the fit. 
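Given the asymptotic growth of the $a_\ell$'s discussed above, one may wonder how much a resummation of the truncated series could recover. As a toy illustration (Euler's classic divergent series for $\int_0^\infty e^{-t}/(1+xt)\,dt$, not the EFTofLSS series itself), a Padé approximant built from the same few coefficients can beat optimal truncation by orders of magnitude:

```python
import numpy as np
from math import exp, factorial
from scipy.interpolate import pade
from scipy.special import exp1

x = 0.5
# Exact value of the integral: (1/x) e^{1/x} E_1(1/x)
exact = (1 / x) * exp(1 / x) * exp1(1 / x)

# Coefficients of the divergent asymptotic series sum_n (-1)^n n! x^n
coeffs = [(-1) ** n * factorial(n) for n in range(7)]

# Best partial sum (optimal truncation of the asymptotic series)
partials = np.cumsum([c * x ** n for n, c in enumerate(coeffs)])
err_trunc = np.min(np.abs(partials - exact))

# [3/3] Padé approximant built from the same seven coefficients
p, q = pade(coeffs, 3)
err_pade = abs(p(x) / q(x) - exact)

print(err_trunc, err_pade)  # the Padé error is substantially smaller
```

This toy model is a Stieltjes series, for which diagonal Padé approximants are known to converge; whether a similar gain is achievable for the EFTofLSS series is left open in the conclusion below.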
Hence, as a consequence of all of the above, we believe that the EFTofLSS to three loop orders already provides the best approximation to the dark matter power spectrum in the weakly non-linear~regime.\\vskip 4pt It is conceivable that a (Borel or Pad\\'e) resummation in $\\epsilon_{\\delta <}$ (similar to the attempt in \\cite{Blas:2013aba} for SPT) could in principle improve the level of precision in the region of analytic control of the EFTofLSS. We leave this possibility open for exploration.\\footnote{While convergence beyond the non-linear scale is doomed to fail, see e.g. \\cite{review}, a Pad\\'e-type resummation --- which in practice amounts to an ansatz of the form $P(x)\/Q(x)$ (with two polynomials) to replace the (asymptotic) series $\\sum_n a_n x^n$ --- could in principle provide a better approximation in the mildly non-linear regime. Nevertheless, most likely a Pad\\'e approximation will not be able to reproduce the true answer at fully non-linear~scales.} The reader must keep in mind, however, that our computation only includes the deterministic part of the evolution equations, and a careful treatment of the stochastic term is required to properly address the ultimate reach of perturbation theory, see e.g. \\cite{Baldauf:2015aha,Baldauf:2015zga,Baldauf:2015tla}. We also postpone for future work the study of the redshift dependence of the EFT results, which was largely ignored here. This will have important implications for future surveys, which can in principle reach up to high redshifts (e.g. $z \\gtrsim 2$ \\cite{Laureijs:2011gra}). In that case, it is (very) possible that higher loop orders can still play an important role.\n\n\\section*{Acknowledgments}\nWe would like to thank Matias Zaldarriaga for helpful comments and discussions. The authors acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (EXC 2121) `Quantum Universe' (390833306). R.A.P. 
acknowledges financial support from the ERC Consolidator Grant ``Precision Gravity: From the LHC to LISA\" provided by the European Research Council (ERC) under the European Union's H2020 research and innovation programme (grant agreement No. 817791). R.A.P. would like to thank the Simons Foundation and FAPESP for support through Young Investigator Awards during the early stages of this work. R.A.P. also thanks the participants of the `Simons Foundation Symposium: Amplitudes meet Cosmology'\\footnote{\\url{https:\/\/www.simonsfoundation.org\/event\/amplitudes-meet-cosmology-2019\/}} for helpful discussions. H.R. acknowledges support from the FAPESP grants 2016\/08700-0 and 2017\/09951-9. H.R. also acknowledges the Universidade de S\\~ao Paulo, for all the support during this work.\n\n\\section{Introduction}\n\nThe precise form of the neutron star (NS) equation of state is still unknown. Despite much theoretical work aimed at determining this fundamental aspect of astrophysics, we must turn to observational data to eliminate some of the contending theories. Presently the only means of determining the mass of neutron stars in accretion driven systems is by observing eclipsing X-ray binary pulsars. Unfortunately only 10 such systems are currently known, and only 6 of these have previously had mass measurements made (e.g. \\cite{ash99}, \\cite{quaintrell03}, \\cite{valbaker05}). In this paper we present the preliminary results from our on-going work on the mass of the High Mass X-ray Binary (HMXB) accretion driven pulsar {EXO~1722--363}. 
The counterpart star within this HMXB is heavily obscured and reddened, necessitating, for the first time, the use of near-infrared spectroscopy to construct the Radial Velocity (RV) curve needed to obtain an accurate mass solution.\n\nObservations of {EXO~1722--363} (alternatively designated IGR~J17252--3616) made in 1987 by the \\emph{Ginga} X-ray satellite were the first to detect pulsations. These pulsations were found to have a $413.9 \\pm 0.2$~s period \\citep{tawara89}. Over an 8 hour period the source appeared to vary substantially in flux, decreasing from 2~mcrab to 0.2 -- 0.3~mcrab within the 6--21~keV band. The X-ray flux was subsequently found to remain persistent over 20 -- 60~keV, with a cutoff above this energy range beyond which the source was undetectable. At maximum flux, and assuming a distance of 10~kpc, {EXO~1722--363} had a calculated luminosity of $5 \\times 10^{36}$~erg~s$^{-1}$ \\citep{tawara89}.\nLater observations by the \\emph{Rossi X-ray Timing Explorer} (RXTE) revealed the eclipsing nature of this system, with the eclipse duration determined as $1.7 \\pm 0.1$~days \\citep{corbet05}. Subsequent observations by \\emph{INTEGRAL} followed up by \\emph{XMM-Newton} in 2004 led to a further refinement of the spin and orbital periods to $413.851\\pm0.004$~s and $9.7403\\pm0.0004$~days respectively \\citep{thompson07}.\n\n\\emph{XMM-Newton} observations allowed the source position to be determined more precisely (with an uncertainty of $4^{\\prime\\prime}$) at RA(2000.0) = $17^h 25^m 11.4^s$ and Dec = $-36^\\circ 16 ^\\prime 58.6 ^{\\prime\\prime}$. {EXO~1722--363} lies within the Galactic plane and, as it is heavily reddened, the counterpart star unsurprisingly could not be detected optically. 
An infrared counterpart was found lying $1^{\\prime\\prime}$ from the X-ray source position \\citep{zurita06}, with a corresponding entry in the 2MASS catalogue, 2MASS~J17251139--3616575 (with JHK magnitudes J = 14.2, H = 11.8 and K$_s$ = 10.7). Examination of near infrared K-band spectra obtained with the ESO \\emph{ISAAC} instrument led to our determination of the spectral classification of the mass donor as B0 -- B1 Ia \\citep{mason09}.\n\n\\section{Observations and Data Reduction}\n\nIn our previous work, we only had single epoch K$_s$-band spectra of the mass donor in {EXO~1722--363}, but in order to determine a dynamical mass solution, radial velocities at a range of orbital phases are required. Fortunately we were able to locate a series of K$_{s}$-band spectra held within the ESO Archive\\footnote{\\url {http:\/\/archive.eso.org\/eso\/eso\\_archive_main.html}} which were obtained over 26 nights between 24th May, 2006 and 4th August, 2006. 26 pairs of spectra were centred on 2.1~$\\mu$m and 26 pairs centred on 2.2~$\\mu$m. No observations of radial velocity standards appear to have been taken with any of the science spectra; however, telluric standards are available to enable removal of atmospheric features from the target spectra.\n\nThe data were reduced using the ISAAC pipeline \\footnote{\\url {http:\/\/www.eso.org\/sci\/data-processing\/software\/pipelines\/isaac\/isaac-pipe-recipes.html}} in conjunction with the data browsing tool GASGANO \\footnote{\\url {http:\/\/www.eso.org\/sci\/data-processing\/software\/gasgano\/}}. Unfortunately, within the 2.2~$\\mu$m dataset the counterpart to {EXO~1722--363} had been incorrectly identified and the telescope mis-pointed, so these spectra were unusable. 
Additionally, only 11 of the 26 spectra from the dataset centred on 2.1~$\\mu$m turned out to be of sufficient quality to derive a radial velocity measurement (see Table~\\ref{usable_spectra}).\nThe usable spectra were taken in the SW MRes mode with a 0.6$^{\\prime\\prime}$ slit, resulting in spectra with a high S\/N and resolution (R $\\approx$ 4200). The integration time for each pair of spectra was 700~s, with the resulting data having a count rate below 10\\,000 ADU; therefore no correction for non-linearity was necessary.\n\nFlatfields were reduced using the pipeline recipe isaac\\_spc\\_flat and combined to produce a master flatfield. The wavelength calibration and ISAAC slit curvature distortion were computed from OH skylines using the pipeline recipe isaac\\_spc\\_arc. Spectra produced by ISAAC have a high degree of curvature; to remove this, the pipeline recipe isaac\\_spc\\_startrace computes the spectral curvature using both images and spectra of a star moving across the slit. Science spectra were obtained using the nodding technique; unfortunately only one nod was performed during the original observation, which we believe to be less than optimal. The two nodded science frames were then reduced using the products of the pipeline calibration recipes to produce a final reduced science spectrum. This process was then repeated for each telluric standard. 
Telluric correction was then made using the standards shown in Table~\\ref{usable_spectra}.\nAll spectra were reduced using standard IRAF\\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} routines; Fig~\\ref{stacked_spectra} shows the stacked continuum normalised spectra ordered by date of observation.\n\n\\begin{figure}\n \\includegraphics[width=9cm]{fig1.eps}\n \\begin{center}\n \\caption{Continuum normalised K$_{s}$-band spectra centred on 2.1~$\\mu$m of {EXO~1722--363} in order of date from top to bottom.}\n \\label{stacked_spectra}\n \\end{center}\n\\end{figure}\n\n\\section{Data Analysis}\n\nRadial velocities were determined by cross-correlating the region around the HeI 2.112\/3~$\\mu$m absorption line in each of the 11 archive spectra against the high signal-to-noise K$_s$-band spectrum of EXO 1722-363, which we had previously obtained for spectral classification purposes \\citep{mason09}. The resulting velocities were then corrected to the solar system barycentre and are reported in Table~\\ref{usable_spectra}. The spectrum highlighted in bold is that used as the reference spectrum (our previously obtained high signal-to-noise K$_s$-band spectrum of EXO 1722-363 \\citep{mason09}). In obtaining these final velocities, we determined the absolute velocity of the reference spectrum by fitting the positions of its absorption lines.\n\n\n\n\\begin{table*}\n\\caption{The phase, radial velocity and telluric standard for each {EXO~1722--363} archive spectrum.}\n\\label{usable_spectra}\n\\centering\n\\begin{tabular} {cccccc}\n\\hline\nMid-point of Observations (UT) & HJD & Phase & Radial velocity \/ km s$^{-1}$ & Telluric Std & Telluric Spec. 
Type \\\\\n\\hline\n 2006 May 24.340 & 245 3879.83 & 0.130 & --18.61 $\\pm$ 11.8 & Hip 093225 & B4V \\\\\n 2006 Jun 22.025 & 245 3908.53 & 0.077 & --11.74 $\\pm$ 11.8 & Hip 088109 & B5II\\\\\n 2006 Jun 29.233 & 245 3915.74 & 0.817 & --21.39 $\\pm$ 11.8 & Hip 070148 & B8III\\\\\n 2006 Jul 14.167 & 245 3930.67 & 0.350 & 14.92 $\\pm$ 11.8 & Hip 089960 & B6V\\\\\n 2006 Jul 17.248 & 245 3933.75 & 0.666 & --29.22 $\\pm$ 11.8 & Hip 087616 & B9IV\/V\\\\\n 2006 Jul 20.122 & 245 3936.63 & 0.961 & --14.95 $\\pm$ 11.8 & Hip 090336 & B7III\\\\\n 2006 Jul 22.112 & 245 3938.62 & 0.165 & --10.09 $\\pm$ 11.8 & Hip 094859 & B5V\\\\\n 2006 Jul 24.115 & 245 3940.62 & 0.371 & 21.21 $\\pm$ 11.8 & Hip 090336 & B7III\\\\\n 2006 Jul 25.105 & 245 3941.61 & 0.473 & 6.89 $\\pm$ 11.8 & Hip 089960 & B6V\\\\\n 2006 Aug 01.111 & 245 3948.57 & 0.188 & --3.83 $\\pm$ 11.8 & Hip 085548 & B9II\\\\\n 2006 Aug 02.113 & 245 3949.62 & 0.295 & 28.06 $\\pm$ 11.8 & Hip 085548 & B9II\\\\\n\\bf {2008 May 17.115} & \\bf {245 4603.62} & \\bf{0.439} & \\bf{30.95 $\\pm$ 11.8} & \\bf{Hip 085008} & \\bf{B5V} \\\\\n\\end{tabular}\n\\end{table*}\n\n\nFrom X-ray data there is no evidence that {EXO~1722--363} has anything other than a circular orbit \\citep{thompson07}, so we fitted the radial velocities of the supergiant star with a simple sinusoidal solution. The ephemeris of \\citet{thompson07} specifies the epoch of mid-eclipse as\n\\begin{equation}\n T({\\rm HJD}) = 53761.68(4) + 9.7403(4)N\n\\end{equation}\nwhere $N$ is the cycle number and uncertainties in brackets refer to the last decimal place quoted. 
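The orbital phases quoted in Table~\ref{usable_spectra} follow directly from this ephemeris; a minimal sketch (HJD values reduced by 2\,400\,000, as in the table):

```python
T0 = 53761.68    # epoch of mid-eclipse, HJD - 2400000 (Thompson et al. 2007)
P_ORB = 9.7403   # orbital period in days

def orbital_phase(hjd):
    # Orbital phase relative to mid-eclipse (phase 0 = mid-eclipse).
    return ((hjd - T0) / P_ORB) % 1.0

# First archive spectrum, 2006 May 24 (HJD 2453879.83):
print(round(orbital_phase(53879.83), 3))  # -> 0.13, matching the table
```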
At the epoch of our observations, the accumulated uncertainty in phase is formally only $\\sim 0.005$, but nonetheless we fitted our data with two models, in one of which the zero phase was a free parameter and in the other of which it was not.\n\nFitting our data with a sinusoid with just two free parameters (RV amplitude and systemic velocity) yielded an amplitude of 17.6 $\\pm$ 7.7~km~s$^{-1}$ and a systemic velocity of -6.5 $\\pm$ 5.6~km~s$^{-1}$. In order to achieve a reduced chi-squared of unity, the uncertainties on each RV data point had to be scaled to $\\pm$ 17.5~km~s$^{-1}$. In comparison, fitting our data with a sinusoid with three free parameters (i.e. with the addition of zero phase as a free parameter) gave an amplitude of 24.6 $\\pm$ 5.0~km~s$^{-1}$, a systemic velocity of $-6.5 \\pm 3.8$~km~s$^{-1}$ and a phase shift of -0.13 $\\pm$ 0.03. In this case, a reduced chi-squared of unity was achieved by scaling the uncertainties on each point to $\\pm$ 11.8~km~s$^{-1}$. Although the best-fit phase offset is discrepant with the accumulated phase uncertainty of the ephemeris, we prefer this fit and use the data from it subsequently; both fits are shown in Figure~\\ref{rvcurve} in which the value for $K_{\\rm O}$ is that resulting from fitting the radial velocities including a phase shift.\n\n\\begin{figure}[h] \n \\includegraphics[scale=0.33, angle=-90]{fig2.eps}\n \\begin{center}\n \\caption{Radial velocity data for the supergiant star in {EXO~1722--363}. The solid line is the best fitting sinusoid with three free parameters, the dashed line is that with a fixed zero phase in line with the published ephemeris. The orbital phase is based upon the ephemeris of \\citet{thompson07}.}\n \\label{rvcurve}\n \\end{center}\n\\end{figure}\n\nThe masses of the system components may be determined as follows. 
The mass ratio of the system $q$ is equal to the ratio of the semi-amplitudes of the radial velocities for each star\n\\begin{equation}\n q = \\frac{M_{\\rm X}}{M_{\\rm O}} = \\frac{K_{\\rm O}}{K_{\\rm X}}\n\\end{equation}\nwhere $M_{\\rm X}$ and $M_{\\rm O}$ are the masses of the neutron star and supergiant star respectively, and\n$K_{\\rm X}$ and $K_{\\rm O}$ are the corresponding semi-amplitudes of their radial velocities. In addition, for circular orbits,\n\\begin{equation}\nM_{\\rm O} = \\frac{{K_{\\rm X}}^3 P}{2\\pi G \\sin^3 i}\\left(1+q\\right)^2\n\\end{equation}\nand similarly\n\\begin{equation}\nM_{\\rm X} = \\frac{{K_{\\rm O}}^3 P}{2 \\pi G \\sin^3 i}\\left(1+\\frac{1}{q} \\right)^2\n\\end{equation}\nwhere $i$ is the inclination to the plane of the sky and $P$ is the orbital period. For {EXO~1722--363}, X-ray pulse timing delays yield the value of $K_{\\rm X}$ as $226.1\\pm6.7$~km~s$^{-1}$ \\citep{thompson07}. A value for the system inclination can be found from the geometric relation\n\\begin{equation}\n \\sin i \\approx \\frac{\\left[1 - \\beta^2 \\left(\\frac{R_L}{a} \\right)^2\\right]^{1\/2}}{\\cos~\\theta_{\\rm e}}\n\\end{equation}\nwhere $\\theta_{\\rm e}$ is the eclipse half-angle, $R_{\\rm L}$ is the Roche lobe radius of the supergiant, $\\beta$ is the ratio of the supergiant's radius to that of its Roche lobe and $a$ is the separation between the centres of mass of the two stars. The Roche lobe radius may be approximated by\n\\begin{equation}\n \\frac{R_{\\rm L}}{a} \\approx A + B \\log q + C \\log^2 q\n\\end{equation}\nwhere the constants have been determined by \\citet{rappaport84} as\n\\begin{equation}\nA \\approx 0.398 - 0.026\\Omega^2 + 0.004\\Omega^3\n\\end{equation}\n\\begin{equation}\nB \\approx - 0.264 + 0.052\\Omega^2 - 0.015\\Omega^3\n\\end{equation}\n\\begin{equation}\nC \\approx - 0.023 - 0.005\\Omega^2\n\\end{equation}\n$\\Omega$ is the ratio of the spin period of the supergiant to its orbital period. 
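For central values these relations can be evaluated directly. A deterministic sketch in the Roche-lobe-filling limit ($\beta = 1$, with $\Omega = 1$ assumed), using the $K_{\rm O}$ from our three-parameter fit, reproduces the tabulated results to within the quoted uncertainties (the full analysis instead propagates the errors with a Monte Carlo, so the means differ slightly):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]

K_X = 226.1e3        # NS RV semi-amplitude [m/s] (Thompson et al. 2007)
K_O = 24.6e3         # supergiant RV semi-amplitude [m/s] (this work)
P = 9.7403 * 86400   # orbital period [s]
theta_e = math.radians(31.8)  # eclipse half-angle (Corbet et al. 2005)
Omega, beta = 1.0, 1.0        # synchronous rotation; Roche-lobe-filling limit

q = K_O / K_X        # mass ratio M_X / M_O

# Roche lobe radius fit of Rappaport & Joss (1984), evaluated at Omega = 1
A = 0.398 - 0.026 * Omega**2 + 0.004 * Omega**3
B = -0.264 + 0.052 * Omega**2 - 0.015 * Omega**3
C = -0.023 - 0.005 * Omega**2
lq = math.log10(q)
RL_over_a = A + B * lq + C * lq**2

# Inclination from the eclipse geometry (beta = 1 gives the lower limit on i)
sin_i = math.sqrt(1 - (beta * RL_over_a) ** 2) / math.cos(theta_e)
i_deg = math.degrees(math.asin(sin_i))

# Component masses for a circular orbit
M_O = K_X**3 * P * (1 + q) ** 2 / (2 * math.pi * G * sin_i**3) / M_SUN
M_X = K_O**3 * P * (1 + 1 / q) ** 2 / (2 * math.pi * G * sin_i**3) / M_SUN

print(i_deg, M_O, M_X)  # roughly 75 deg, 16 M_sun and 1.7 M_sun for these inputs
```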
For {EXO~1722--363} we have assumed that the supergiant is close to Roche lobe-filling and is rotating synchronously with the orbit, so $\\Omega =1$ (although we note this assumption may not be entirely correct in this case), and the eclipse half angle is measured using RXTE data to be $\\theta_{\\rm e} = 31.8^{\\circ} \\pm 1.8^{\\circ}$ \\citep{corbet05}.\n\nHence the above set of equations allows the masses of the two stars to be determined in two limits. First, assuming that the supergiant fills its Roche lobe (in which case $\\beta=1$) we can find a lower limit to the system inclination $i$ and upper limits to the stellar masses. Secondly, assuming that the system is viewed edge-on (in which case $i=90^{\\circ}$) we can find a lower limit to the Roche lobe filling factor $\\beta$ and lower limits to the stellar masses. Unfortunately, the spectra we have obtained from the ESO archive are not of sufficient quality to conduct a non-LTE model atmosphere analysis and hence make an accurate determination of the stellar radius which would break this degeneracy.\n\nIn order to propagate the uncertainties in each parameter, we performed a Monte Carlo analysis of the above equations to determine the system masses. The results in each limit are shown in Table~\\ref{results}, and of course masses lying between the extremes are also valid, corresponding to values of $i$ and $\\beta$ between their extremes.\n\n\n\\begin{table}\n\\caption{System parameters for {EXO~1722--363}.} \n \\label{results}\n \\begin{tabular}{llll} \\hline\nParameter & \\multicolumn{2}{c}{Value} & Ref. 
\\\\ \\hline\n{\\it Observed} \t\t&\t&\t& \\\\\n$a_{\\rm X} \\sin i$ \/ lt sec\t& \\multicolumn{2}{c}{$101 \\pm 1$}\t\t& [1]\\\\\n$P$ \/ d\t\t\t\t& \\multicolumn{2}{c}{$9.7403 \\pm 0.0004$}\t& [1]\\\\\n$T_{90}$ \/ HJD\t\t& \\multicolumn{2}{c}{$53761.68 \\pm 0.04$}\t & [1]\\\\\t\n$e$\t\t\t\t& \\multicolumn{2}{c}{$<0.19$}\t& [1]\\\\\t\n$\\theta_{\\rm e}$ \/ deg\t\t& \\multicolumn{2}{c}{$31.8 \\pm 1.8$}\t\t& [2]\\\\\t\n\n$K_{\\rm O}$ \/ km s$^{-1}$\t& \\multicolumn{2}{c}{$24.5 \\pm 5.0$} \t\t& [3]\\\\\n\n{\\it Assumed} \t\t&\t&\t& \\\\\n\\bf{$\\Omega$}\t\t\t& \\multicolumn{2}{c}{= 1}\t\t \\\\\n\n{\\it Inferred} & & & \\\\\n\n$K_{\\rm X}$ \/ km s$^{-1}$\t& \\multicolumn{2}{c}{$226.1 \\pm 6.7$} \t\t\\\\\n$q$\t\t\t\t& \\multicolumn{2}{c}{$0.107 \\pm 0.022$} \\\\\n$\\beta$\t\t\t\t& $1.000$ \t& $0.916 \\pm 0.047$\t \\\\\n$i$ \/ deg\t\t\t& 75.2 $\\pm ~4.6$& $90.0$ \\\\\n$M_{\\rm X}$ \/ M$_{\\odot}$ \t& 1.63 $\\pm ~0.38$ & $1.46 ~\\pm$ 0.38 \t \\\\\n$M_{\\rm O}$ \/ M$_{\\odot}$ \t& 15.2 $\\pm $ 1.9 & $13.6 \\pm 1.6$ \t \\\\\n$a$ \/ R$_{\\odot}$ & $49.1 ~\\pm$ 9.1 & $47.3 \\pm 8.8$ \\\\\n$R_{\\rm L}$ \/ R$_{\\odot}$\t& 28.0 $\\pm$ 5.3 & $27.0 ~\\pm$ 5.0 \\\\\n$R_{\\rm O}$ \/ R$_{\\odot}$\t& 28.0 $\\pm$ 5.3 & $24.7 ~\\pm$ 4.7 \\\\ \\hline\n\\end{tabular}\\\\\n$[1]$ Thompson et al. 2007; $[2]$ Corbet et al. 2005\\\\\n$[3]$ this paper\n\\end{table}\n\n\n\\section{Discussion}\n\nAlthough the 11 spectra reported here are of relatively low quality, and few in number, they still allow us to make a\n preliminary determination of the orbit of the supergiant in {EXO~1722--363} and make a first measurement of the \ndynamical masses of the stellar components. The results are encouraging for a number of reasons. First, the resulting\n neutron star mass is consistent with the canonical mass of 1.4~M$_{\\odot}$ measured in most other eclipsing HMXBs, \nexcept for that in Vela X-1, \\citep{quaintrell03}. 
Second, the measured mass and radius of the supergiant, $M \\sim 13$ -- $15$~M$_{\\odot}$ and $R \\sim 25$ -- $28$~R$_{\\odot}$, support the B0-1 Ia spectral classification that we have previously determined \\citep{mason09}. This is illustrated by the Hertzsprung-Russell diagram plotted in Fig. \\ref{evo_track}, which shows a close correspondence between the system primary and the properties of other galactic field BSGs \\citep{searle08}. While the similarity in temperature is to be expected (the value for the primary was adopted on the basis of its spectral type, which in turn has been calibrated by the analysis of \\citealt{searle08}), the radii for the field BSGs were determined via non-LTE model atmosphere analysis, whereas that for the primary is instead determined dynamically.\n\nThe measurement of the stellar radius, and hence bolometric luminosity, has in turn allowed a more precise determination of the distance to the system by comparison to its observed photometric magnitude and reddening. The refined distance to {EXO~1722--363} of 7.1 -- 7.9 kpc results in an X-ray luminosity ranging from L$_{\\rm X, min} = 0.47 \\times 10^{36}$ to L$_{\\rm X, max} = 9.2 \\times 10^{36}$ erg s$^{-1}$. However, due to the nature of the archive observations used for this work, the uncertainties on the mass and radius parameters are still rather large; it is our intention in the near future to propose and obtain more accurate VLT\/ISAAC observations to further constrain the orbital solution parameters for this HMXB system. \n\nFinally, comparison to evolutionary tracks in Fig. \\ref{evo_track} might suggest the primary had an initial progenitor mass of $\\sim$ 35 -- 40~M$_{\\odot}$ and hence that the neutron star originated in a more massive star. However we caution that the binary is highly likely to have undergone at least one episode of mass transfer in the past, rendering such a conclusion highly uncertain. 
As an exemplar we cite {GX 301-2}, a HMXB composed of a NS and a B hypergiant with a spectroscopic mass of 43$\\pm$10~M$_{\\odot}$ \\citep{kaper06}. However, \\citet{wellstein99} propose a formation scenario in which two stars of comparable initial masses evolved via quasi-conservative mass transfer into the current configuration post supernova; hence determining progenitor masses for both the primary and the neutron star based on the current system parameters is non-trivial.\n\n\n\\begin{figure}\n \\includegraphics[width=9cm]{fig3_res.eps}\n \\begin{center}\n \t\\caption{Position of {EXO~1722--363} on the Hertzsprung-Russell diagram \\citep{searle08} alongside a sample of O and B supergiants from differing locations: Galactic B supergiants \\citep{crowther06}, SMC B supergiants (\\citealt{trundle04}; \\citealt{trundle05}) and Galactic O stars \\citep{repolust04}. These are overplotted together with solar metallicity evolutionary tracks from \\citet{meynet00}. Also shown are the lower and upper limits on the luminosity of {EXO~1722--363}.} \n \\label{evo_track}\n \\end{center}\n\\end{figure}\n\n\n\n\n\n\\begin{acknowledgements}\nABM acknowledges support from an STFC studentship. JSC acknowledges support from an RCUK fellowship.\nThis research is partially supported by grants AYA2008-06166-C03-03 and\nConsolider-GTC CSD-2006-00070 from the Spanish Ministerio de Ciencia e\nInnovaci\\'on (MICINN).\nBased on observations carried out at the European Southern Observatory, Chile through programme ID 077.B-0872(A).\n\\end{acknowledgements}\n\n\n\\bibliographystyle{aa}\n