diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmxxv" "b/data_all_eng_slimpj/shuffled/split2/finalzzmxxv" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmxxv" @@ -0,0 +1,5 @@ +{"text":"\\section{Intuitive meaning of parton distribution functions}\n\nLet $d\\sigma$ be a cross section involving short distances. For\ninstance, we may consider the process hadron $A$ + hadron $B$ $\\to $\njet + $X$ at the Fermilab collider. Let the jet have a high transverse\nmomentum $P_T$. Intuitively, the observed jet begins as a single quark\nor gluon that emerges from a parton-parton scattering event with large\n$P_T$, as illustrated in Fig.~\\ref{fig:jetgraph}. (Typically, this\nparton recoils against a single parton that carries the opposite\n$P_T$.) The large $P_T$ parton fragments into the observed jet of\nhadrons.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(jetgraph.eps width 5 cm)}\n\\caption{Hadron $A$ + hadron $B$ $\\to $ 2 partons.}\n\\label{fig:jetgraph}\n\\end{figure}\n\nThe physical picture illustrated in Fig.~\\ref{fig:jetgraph} suggests\nhow we may write the cross section to produce the jet as a product of\nthree factors. A parton of type $a$ comes from from a hadron of type\n$A$. It carries a fraction $x_A$ of the hadron's momentum. The\nprobability to find it is given by $\\blue{f_{a\/A}(x_A)}\\, dx_A$. A\nsecond parton of type $b$ comes from a hadron of type B. It carries a\nfraction $x_B$ of the hadron's momentum. The probability to find it is\n$\\blue{f_{b\/B}(x_B)}\\, dx_B$. The functions $f_{a\/A}(x)$ are the\nparton distribution functions that are the subject of this talk.\nThe third factor is the cross section for the partons to make the\nobserved jet, $\\red{d\\hat\\sigma}$. This parton level cross section\nis calculated using perturbative QCD.\n\n\n\\subsection{Factorization}\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(factorization.eps width 5 cm)}\n\\caption{Factorization for hadron collisions.}\n\\label{fig:factorization}\n\\end{figure}\n\nWe have been led by the intuitive parton picture of\nFig.~\\ref{fig:jetgraph} to write the cross section for jet production\nin the following form\n\\begin{eqnarray}\n\\lefteqn{{d\\sigma \\over d P_T} \\sim}\n\\label{factor}\\\\\n&&{\\hskip - 0.7 cm}\n\\sum_{a,b}\\int\\! dx_A\\,\n\\blue{f_{a\/A}(x_A,\\mu)}\\\n\\int\\! 
dx_B\\, \\blue{f_{b\/B}(x_B,\\mu)}\\\n\\red{{d\\hat\\sigma \\over d P_T}}.\n\\nonumber\n\\end{eqnarray}\nHere the parton level cross section has a well behaved expansion\nin powers of $\\alpha_s$,\n\\begin{equation}\n\\red{{d\\hat\\sigma \\over d P_T}} \\sim \n\\sum_N \\left(\\!\\alpha_s(\\mu)\\over\\pi\\right)^{\\!N}\\!\nH_N(x_A,x_B,P_T;a,b;\\mu).\n\\end{equation}\nThe coefficients $H_N$ are calculable in perturbative QCD.\n\nThe principle of {\\it factorization} asserts that\nEq.~(\\ref{factor}) holds up to corrections of order\n\n\\smallskip\n$\\bullet$ \n$(m\/P_T)^n$ where $m$ is a typical hadronic mass\n\n\\hskip 0.4 cm scale and the power $n$ depends on the \n\n\\hskip 0.4 cm process, and\n\n$\\bullet$\n$\\left(\\alpha_s(\\mu)\\right)^{\\!L}$ from truncating \nthe expansion of \n\n\\hskip 0.4 cm $d\\hat\\sigma\/d P_T$.\n\n\\smallskip\n\\noindent\nFor our purposes, we can regard factorization as an established\ntheorem of QCD, although this subject is not without its loose ends.\nA review may be found in Ref.~\\cite{theorems}.\n\nAs we have seen, Eq.~(\\ref{factor}) has a simple intuitive meaning.\nHowever, the appearance of a parameter $\\mu$ in Eq.~(\\ref{factor})\nhints that there is more to the equation than just a model. The\nparameter $\\mu$, which has dimensions of mass, is related to the\nrenormalization of the strong coupling $\\alpha_s(\\mu)$ and of the\noperators in the definition of the parton distribution functions\n$f_{a\/A}(x_A,\\mu)$. (Often, one uses two separate parameters in these\ntwo places.)\n\nAt the Born level, the parton level cross section $\\red{d\\hat\\sigma\/d\nP_T}$ is calculated in a straightforward manner. At the\nnext-to-leading order and beyond, the calculation is not so\nstraightforward. Various divergences appear in a naive calculation.\nThe divergences are removed and the dependence on the scale $\\mu$\nappears in their place. The precise rules for calculating\n$\\red{d\\hat\\sigma\/d P_T}$ follow once you have set the definition of\nthe parton distribution functions $\\blue{f_{a\/A}(x,\\mu)}$. These rules\nenable one to do practical calculations. For example, for jet\nproduction, the first two terms in $\\red{d\\hat\\sigma\/d P_T}$ are known\nin the form of computer code that puts together the pieces of\nEq.~(\\ref{factor}) and produces a numerical cross section\n\\cite{jetxsect}.\n\n\n\\subsection{Reality Check}\n\nSets of parton distribution functions, one function for each\nkind of parton in a proton, are produced to fit experiments. We will\nexamine the fitting process in a later section.\nFig.~\\ref{fig:cteq3m} is a graph of the gluon distribution and the\nup-quark distribution in a proton, according to a parton\ndistribution set designated CTEQ3M \\cite{cteq3m}. The figure\n\\cite{potpourri} shows $x^2 f_{a\/A}(x,\\mu)$ for $a = g$ and $a = u$,\n$A = p$. Note that\n\\begin{equation}\n\\int_0^1 dx\\ x\\, f_{a\/A}(x,\\mu) = \n\\int d\\log x\\ x^2f_{a\/A}(x,\\mu),\n\\end{equation}\nso the area under the curve is the momentum fraction carried by\npartons of species $a$.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(cteq3m.eps width 7cm)}\n\\caption{Gluon and up quark distributions in the proton according to\nthe CTEQ3M parton distribution set.}\n\\label{fig:cteq3m}\n\\end{figure}\n\n\n\\subsection{Significance}\n\nKnowledge of parton distribution functions is necessary for the\ndescription of hard processes with one or two hadrons in the initial\nstate. 
With two hadrons in the initial state, as at Fermilab or the\nfuture Large Hadron Collider, observed short distance cross sections\ntake the form\n\\begin{eqnarray}\n\\lefteqn{{d\\sigma} \\sim}\n\\label{onehadron}\\\\\n&&{\\hskip - 0.4 cm}\\sum_{a,b}\\int\\! dx_A\\, \\blue{f_{a\/A}(x_A,\\mu)}\\\n\\int\\! dx_B\\, \\blue{f_{b\/B}(x_B,\\mu)}\\\n\\red{{d\\hat\\sigma}}.\n\\nonumber\n\\end{eqnarray}\nWith one hadron in initial state, as in deeply inelastic lepton\nscattering at HERA (Fig.~\\ref{fig:dis}), the cross section has\nthe form\n\\begin{equation}\n{d\\sigma} \\sim\n\\sum_{a}\\int dx_A\\, \\blue{f_{a\/A}(x_A,\\mu)}\\\n\\red{{d\\hat\\sigma}}.\n\\label{twohadrons}\n\\end{equation}\nIn either case, one has no predictions without knowledge of the\nparton distribution functions.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(dis.eps width 5cm)}\n\\caption{Deeply inelastic scattering.}\n\\label{fig:dis}\n\\end{figure}\n\nThe essence of Eqs.~(\\ref{onehadron}) and (\\ref{twohadrons}) above is\nthat in high energy, short distance collisions a hard scattering\nprobes the system quickly, while the strong binding forces act slowly.\nThus one needs to know probabilities to find partons in a fast moving\nhadron as seen by an approximately instantaneous probe. This is the\ninformation encoded in the parton distribution functions. It should be\nevident that this information is not only useful, but also valuable \nin understanding hadron structure. Parton distribution functions are\nnot everything, however. They provide a relativistic view only, a\nview quite different from the view that might be most economical for\nthe description of a hadron at rest. Furthermore, they provide no\ninformation on correlations among the partons.\n\n\n\\section{Translation to operators}\n\nWe are now ready to examine the technical definition of parton\ndistribution functions. There are, in fact, two definitions in current\nuse. I will describe the \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ definition, which is the most\ncommonly used. There is also a DIS definition, in which deeply\ninelastic scattering plays a privileged role. The interested reader\nmay consult the CTEQ Collaboration's {\\it Handbook of Perturbative\nQCD} \\cite{handbook} for information on the DIS definition. There are\nalso different ways to think about the \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ definition. In this\ntalk, I define the parton distributions directly in terms of field\noperators along a light-like line, as in \\cite{partondef}. An\nequivalent construction from a different point of view may be found\nin \\cite{altpartondef}. As we will see, moments of the parton\ndistribution functions are related to matrix elements of certain\nlocal operators, which appear in the operator product expansion for\ndeeply inelastic scattering. This relation could be also used as the\ndefinition.\n\nI distinguish between the {\\it parton distribution functions}\n$f_{a\/A}(x,\\mu)$ and the {\\it structure functions} $F_1(x,Q^2)$,\n$F_2(x,Q^2)$, and $F_3(x,Q^2)$ that are measured in deeply inelastic\nlepton scattering.\n\n\\subsection{Null coordinates}\n\nI will use null plane coordinates and momenta defined by\n\\begin{equation}\nx^\\pm = (x^0 \\pm x^3)\/\\sqrt 2\\,,\n\\hskip 0.3 cm\nP^\\pm = (P^0 \\pm P^3)\/\\sqrt 2\\,.\n\\end{equation}\n\nImagine a proton with a big $P^+$, a small $P^-$, and $\\vec P_T = 0$.\nThe partons in such a proton move roughly parallel to the $x^+$ axis,\nas illustrated in Fig.~\\ref{fig:lightcone}. 
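A quick kinematic check, using only the definitions just given: since\n$2P^+P^- = (P^0)^2-(P^3)^2$, the mass-shell condition reads\n$m^2 = 2P^+P^- - \\vec P_T^{\\,2}$, so a proton with $\\vec P_T = 0$ and large $P^+$\nautomatically has the small component $P^-=m^2\/(2P^+)$.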
One can treat $x^+$ as\n``time,'' so that the system propagates from one plane of equal $x^+$\nto another. For our fast moving proton, the interval in $x^+$ between\nsuccessive interactions among the partons is typically large.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(lightcone.eps width 5cm)}\n\\caption{World lines of the partons in a fast-moving proton.}\n\\label{fig:lightcone}\n\\end{figure}\n\nNotice that the invariant dot product between $P^\\mu$ and $x^\\mu$ is\n\\begin{equation}\nP\\cdot x = P^+ x^- + P^- x^+ - \\vec P_T\\cdot \\vec x_T\\,.\n\\end{equation}\nThus the generator of ``time'' translations is $P^-$.\n\n\n\\subsection{Null plane field theory}\n\nWe will define parton distribution functions by taking a snapshot of\nthe proton on a plane of equal $x^+$ in Fig.~\\ref{fig:lightcone}.\nTo motivate the definition, we use field theory quantized on planes of\nequal $x^+$ \\cite{KS}. This quantization uses the gauge $A^+ = 0$.\nThen the unrenormalized quark field operator $\\psi_0$ is expanded in\nterms of \n\n\\smallskip\n$\\bullet$ quark destruction operators $b$ and\n\n$\\bullet$ antiquark creation operators $d^\\dagger$\n\n\\smallskip\\noindent\nusing simple spinors $w(s)$ normalized to $w^\\dagger w = 1$:\n\n\\begin{eqnarray}\n\\lefteqn{{\\scriptstyle{1\\over 2}}\\gamma^-\\gamma^+\n\\sienna{\\psi_0}(x^+,x^-,\\vec x_T) =} \n\\\\\n&&\\hskip - 0.3 cm\n{ 1 \\over (2\\pi)^3}\\int_0^\\infty {d k^+ \\over 2 k^+}\\, \n\\int d \\vec k_T\n\\sum_s (\\sqrt 2 k^+)^{1\/2}\n\\nonumber\\\\\n&&\\hskip - 0.3 cm \\times\\biggl\\{\ne^{-i\\,(k^+ x^- - \\vec k_T\\cdot \\vec x_T)}\\,\nw(s)\\, \\green{b}(k^+,\\vec k_T;s;x^+)\n\\nonumber\\\\\n&& {}+\ne^{+i\\,(k^+ x^- - \\vec k_T\\cdot \\vec x_T)}\\,\nw(-s)\\, \\green{d^\\dagger}(k^+,\\vec k_T;s;x^+)\n\\biggr\\}.\n\\nonumber\n\\end{eqnarray}\nThe factor ${\\scriptstyle{1\\over 2}}\\gamma^-\\gamma^+$ here serves to\nproject out the components of the quark field $\\psi_0$ that are the\nindependent dynamical operators in null plane field theory.\n\n\n\\subsection{The quark distribution function}\n\nWe are now ready to define the (unrenormalized) quark distribution\nfunction. Let $|P\\rangle$ be the state vector for a hadron of type $A$\ncarrying momentum $P^\\mu$. Take the hadron to be spinless in order\nto simplify the notation. 
Construct the unrenormalized distribution\nfunction for finding quarks of flavor $j$ in hadron $A$ as\n\\begin{eqnarray}\n\\lefteqn{\\blue{f^{(0)}_{j\/A}(x)} \\times\n\\langle P^{+\\prime},\\vec P_T^\\prime | P^{+},\\vec P_T \\rangle =}\n\\nonumber\\\\\n&&\\hskip 0.5 cm{ 1 \\over 2 x (2\\pi)^3}\\int d\\vec k_T \\sum_s\\,\n\\langle P^{+\\prime},\\vec P_T^\\prime |\n\\nonumber\\\\\n&&\\hskip 0.5 cm\n\\times\n\\green{b^\\dagger_j}(xP^+,\\vec k_T;s;x^+)\\,\n\\green{b_j}(xP^+,\\vec k_T;s;x^+)\n\\nonumber\\\\\n&&\\hskip 0.5 cm\n\\times\n| P^{+},\\vec P_T \\rangle\\ .\n\\label{quark0def}\n\\end{eqnarray}\nIn Eq.~(\\ref{quark0def}) there are factors relating to the\nnormalization of the states and the creation\/destruction operators.\nThere is a quark number operator $\\green{b^\\dagger b}$ for flavor $j$.\nWe integrate over the quark transverse momentum $\\vec k_T$ and sum\nover the quark spin $s$.\n\n\n\\subsection{Translation to coordinate space}\n\nWith a little algebra, we find\n\\begin{eqnarray}\n\\lefteqn{\\blue{f^{(0)}_{j\/A}(x)} \n={ 1 \\over 4\\pi}\\int dy^-\ne^{-i xP^+y^-}}\n\\\\\n&&\\hskip - 0.6cm \\times\\langle P^{+}\\!,\\vec 0_T |\n\\overline{\\sienna{\\psi}}_{0,j}(0,y^-,\\vec 0_T)\n\\gamma^+\n{\\sienna{\\psi}}_{0,j}(0,0,\\vec 0_T)\n| P^{+}\\!,\\vec 0_T \\rangle.\n\\nonumber\n\\end{eqnarray}\nNotice that the (still unrenormalized) quark distribution function is\nan expectation value in the hadron state of a certain operator. The\noperator is not local but ``bilocal.'' The two points, $(0,y^-,\\vec\n0_T)$ and $(0,0,\\vec 0_T)$, at which the field operators are evaluated\nare light-like separated. The formula directs us to integrate over\n$y^-$ with the right factor so that we annihilate a quark with plus\nmomentum $xP^+$.\n\n\n\\subsection{Gauge invariance}\n\nBefore turning to the renormalization of the operator that occurs in\nthe quark distribution function, we note that the definition as it\nstands relies on the gluon potential $A^\\mu(x)$ being in the\ngauge $A^+ = 0$. Let us modify the formula so that\n\n\\smallskip\n$\\bullet$ the operator is gauge invariant and\n\n$\\bullet$ we match the previous definition in \n\n\\hskip 0.3 cm $A^+=0$ gauge.\n\n\\smallskip\\noindent\nThe gauge invariant definition is\n\\begin{eqnarray}\n\\lefteqn{\\blue{f^{(0)}_{j\/A}(x)} \n={ 1 \\over 4\\pi}\\int\\! dy^-\ne^{-i xP^+y^-}\\ \n\\langle P^{+}\\!,\\vec 0_T |}\n\\nonumber\\\\\n&&\\hskip 1.0cm\n\\times\n\\overline{\\sienna{\\psi}}_{0,j}(0,y^-,\\vec 0_T)\n\\gamma^+ \\magenta{{\\cal O}_0}\\\n{\\sienna{\\psi}}_{0,j}(0,0,\\vec 0_T)\n\\nonumber\\\\\n&&\\hskip 1.0cm \\times\n| P^{+}\\!,\\vec 0_T \\rangle\\ ,\n\\label{quarkdef2}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\magenta{{\\cal O}_0} ={\\cal P}\n\\exp\\left(\nig_0 \\int_0^{y^-}\\!\\!\\! dz^-\\, {\\sienna{A}}_{0,a}^+(0,z^-,\\vec 0_T)\\,\nt_a\n\\right).\n\\label{eikonal}\n\\end{equation}\nHere ${\\cal P}$ denotes a path-ordered product, while the $t_a$ are\nthe generators for the {\\bf 3} representation of SU(3). There is an\nimplied sum over the color index $a$.\n\n\n\\subsection{Interpretation of the eikonal gauge operator}\n\nThe appearance of the operator $\\cal O$, Eq.~(\\ref{eikonal}), in \nthe definition (\\ref{quarkdef2}) seems to be just a technicality.\nHowever, this operator has a physical interpretation that is of some\nimportance. Let us write this operator in the form\n\\begin{eqnarray}\n\\lefteqn{\n\\magenta{{\\cal O}_0} = \\overline{\\cal P}\n\\exp\\left(\n-ig_0 \\int_{y^-}^\\infty\\!\\!\\! 
dz^-\\, {\\sienna{A}}_{0,a}^+(0,z^-,\\vec\n0_T)\\, t_a\n\\right)}\n\\nonumber\\\\\n&&\\times \n{\\cal P}\n\\exp\\left(\nig_0 \\int_0^\\infty\\!\\!\\! dz^-\\, {\\sienna{A}}_{0,a}^+(0,z^-,\\vec 0_T)\\,\nt_a\n\\right).\n\\label{eikonalmod}\n\\end{eqnarray}\nInserting this form in the definition (\\ref{quarkdef2}), we can\nintroduce a sum over states $|N\\rangle\\langle N|$ between the\ntwo exponentials in Eq.~(\\ref{eikonalmod}). We take these states to\nrepresent the final states after the quark has been ``measured.''\n\nConsider now a deeply inelastic scattering experiment that is used to\ndetermine the quark distribution. The experiment doesn't just\nannihilate the quark's color. In a suitable coordinate system, a quark\nmoving in the plus direction is struck and exits to infinity with\nalmost the speed of light in the minus direction, as illustrated in\nFig.~\\ref{fig:lightcone3}. As it goes, the struck quark interacts with\nthe gluon field of the hadron.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(lightcone3.eps width 5cm)}\n\\caption{Effect of the eikonal gauge operator.}\n\\label{fig:lightcone3}\n\\end{figure}\n\nWe can now see that the role of the operator $\\cal O$ is to replace\nthe struck quark with a fixed color charge that moves along a\nlight-like line in the minus-direction, mimicking the motion of the\nactual struck quark in a real experiment.\n\n\n\n\\subsection{Renormalization}\n\nWe now discuss the renormalization of the operator products in\nthe definition (\\ref{quarkdef2}). We use \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ renormalized fields\n${\\sienna{\\psi}}(x)$ and ${\\sienna{A}}^\\mu(x)$ and we use the \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\\nrenormalized coupling $g$. The field operators are evaluated at\npoints separated by $\\Delta x$ with $ \\Delta x^\\mu \\Delta x_\\mu = 0$.\nFor this reason, there will be ultraviolet divergences from the\noperator products. We elect to renormalize the operator products with\nthe \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ scheme.\n\nFor instance, Fig.~\\ref{fig:renormalize} illustrates one of the\ndiagrams for the distribution of quarks in a proton. Before it is\nmeasured, the quark emits a gluon into the final state. There is a\nloop integration over the minus and transverse components of the\nmeasured quark's momentum. This loop integration is ultraviolet\ndivergent. To apply \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ renormalization, we perform the\nintegration in $4-2\\epsilon$ dimensions, including a factor $(\\mu^2\ne^{\\gamma}\/4\\pi)^\\epsilon$ that keeps the dimension constant while\nsupplying some conventional factors. The integral will consist of a\npole term proportional to $1\/\\epsilon$ plus terms that are finite as\n$\\epsilon \\to 0$. We simply subtract the pole term. 
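To illustrate the subtraction schematically, consider a toy integral with an\ninfrared scale $\\Delta$ (an illustration only, not one of the actual diagrams):\n$$\n\\Big({\\mu^2 e^{\\gamma}\\over 4\\pi}\\Big)^{\\!\\epsilon}\n\\int {d^{2-2\\epsilon}k_T\\over (2\\pi)^{2-2\\epsilon}}\\,\n{1\\over \\vec k_T^{\\,2}+\\Delta}\n={1\\over 4\\pi}\\left({1\\over \\epsilon}+\\log{\\mu^2\\over \\Delta}\\right)+{\\cal O}(\\epsilon)\\,.\n$$\nDiscarding the $1\/\\epsilon$ pole leaves a finite remainder containing\n$\\log(\\mu^2\/\\Delta)$.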
Notice that\n\\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ renormalization introduces a scale $\\mu$.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(renormalize.eps width 5cm)}\n\\caption{Renormalization of an ultraviolet divergent\nloop integration.}\n\\label{fig:renormalize}\n\\end{figure}\n\nThe definition of the renormalized quark distribution function is thus\n\\begin{eqnarray}\n\\lefteqn{\\blue{f_{j\/A}(x,\\mu)} \n={ 1 \\over 4\\pi}\\int dy^-\ne^{-i xP^+y^-}\\\n\\langle P^{+},\\vec 0_T |}\n\\nonumber\\\\\n&&\\hskip 1.0cm \\times\n\\overline{\\sienna{\\psi}}_{j}(0,y^-,\\vec 0_T)\n\\gamma^+ {\\magenta{\\cal O}}\\\n{\\sienna{\\psi}}_{j}(0,0,\\vec 0_T)\n\\nonumber\\\\\n&&\\hskip 1.0cm \\times\n| P^{+},\\vec 0_T \\rangle _{{\\red{\\vbox{\\hrule\\kern 1pt\\hbox{\\smallrm MS}}}}}\\,,\n\\label{quarkdef}\n\\end{eqnarray}\nwhere the \\vbox{\\hrule\\kern 1pt\\hbox{\\rm MS}}\\ denotes the renormalization prescription and\nwhere\n\\begin{equation}\n\\magenta{{\\cal O}}={\\cal P}\n\\exp\\left(\nig \\int_0^{y^-}\\!\\!\\! dz^-\\, {\\sienna{A}}_{a}^+(0,z^-,\\vec 0_T)\\, t_a\n\\right).\n\\label{eikonaldef}\n\\end{equation}\n\n\n\\subsection{Antiquarks and gluons}\n\nWe now have a definition of parton distribution functions for quarks.\nFor antiquarks, we use charge conjugation to define\n\\begin{eqnarray}\n\\lefteqn{\n{\\blue{f_{\\bar j\/A}(x,\\mu)}} \n={ 1 \\over 4\\pi}\\int dy^-\ne^{-i xP^+y^-}\\\n\\langle P^{+}\\!\\!,\\vec 0_T |\n}\n\\nonumber\\\\\n&&\\hskip 0.6 cm\\times \n{\\rm Tr}\\!\\left\\{\\!\\gamma^+ {\\sienna{\\psi}}_{j}(0,y^-,\\vec 0_T)\n{\\magenta{{\\cal O}}}\\\n\\overline{\\sienna{\\psi}}_{j}(0,0,\\vec 0_T)\\right\\}\\!\n\\nonumber\\\\\n&&\\hskip 0.6 cm\\times\n| P^{+}\\!\\!,\\vec 0_T \\rangle _{\\vbox{\\hrule\\kern 1pt\\hbox{\\smallrm MS}}}\\ ,\n\\label{antiquarkdef}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n{\\magenta{{\\cal O}}}={\\cal P}\n\\exp\\left(\n-ig \\int_0^{y^-}\\!\\!\\! dz^-\\, {\\sienna{A}}_{a}^+(0,z^-,\\vec 0_T)\\, t^T_a\n\\right).\n\\end{equation}\n\nFor gluons we begin with the number operator in $A^+ = 0$ gauge.\nProceeding analogously to the quark case, we obtain an expression\ninvolving the field strength tensor $F_a^{\\mu\\nu}$ with color index\n$a$: \n\\begin{eqnarray}\n\\lefteqn{{\\blue{f_{g\/A}(x,\\mu)}} \n={ 1 \\over 2\\pi\\,xP^+}\\int dy^-\ne^{-i xP^+y^-}\\\n\\langle P^{+}\\!\\!,\\vec 0_T |}\n\\nonumber\\\\\n&&\\hskip 1.2cm \\times\n {\\sienna{F}}_{\\!a}(0,y^-,\\vec 0_T)\n^{+\\nu}{\\magenta{{\\cal O}}}_{ab}\\\n{\\sienna{F}}_{\\!b}(0,0,\\vec 0_T)_\\nu^{\\ +}\n\\nonumber\\\\\n&&\\hskip 1.2cm \\times\n| P^{+}\\!\\!,\\vec 0_T \\rangle _{\\vbox{\\hrule\\kern 1pt\\hbox{\\smallrm MS}}}\\ ,\n\\label{gluondef}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n{\\magenta{{\\cal O}}}={\\cal P}\n\\exp\\left(\nig \\int_0^{y^-}\\!\\!\\! dz^-\\, {\\sienna{A}}_{c}^+(0,z^-,\\vec 0_T)\\, t_c\n\\right).\n\\end{equation}\nHere the $t_c$ generate the {\\bf 8} representation of SU(3).\n\n\n\\section{Renormalization group}\n\nA change in the scale $\\mu$ induces a change in the parton\ndistribution functions $f_{a\/A}(x,\\mu)$. The change comes from the\nchange in the amount of ultraviolet divergence that renormalization\nis removing. Since the operators are non-local in $y^-$, the\nultraviolet counterterms are integral operators in $k^+$ or\nequivalently in momentum fraction $x$. 
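Schematically, the bare and renormalized distributions are related by\n$$\nf^{(0)}_{a\/A}(x)=f_{a\/A}(x,\\mu)+{1\\over \\epsilon}\\sum_b\\int_x^1{d\\xi\\over \\xi}\\,\nC_{ab}\\Big({x\\over \\xi},\\alpha_s(\\mu)\\Big)\\,f_{b\/A}(\\xi,\\mu)+\\cdots\\,,\n$$\nwhere the kernel $C_{ab}$, whose precise normalization we do not need here,\nstarts at order $\\alpha_s$; the counterterm is a convolution because a parton\nobserved with momentum fraction $x$ may have been radiated from a parent\ncarrying a larger fraction $\\xi$.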
Since the ultraviolet\ndivergences mix quarks and gluons, so do the counterterms.\n\nOne finds\n\\begin{eqnarray}\n\\lefteqn{\\mu^2{ d \\over d \\mu^2}{\\blue{f_{a\/A}(x,\\mu)}} =}\n\\nonumber\\\\\n&&\n\\int_x^1 {d\\xi\\over \\xi}\\sum_b\\ {\\red{P_{a\/b}(x\/\\xi,\\alpha_s(\\mu))}}\\\n{\\blue{f_{b\/A}(\\xi,\\mu)}}.\n\\label{APeqn}\n\\end{eqnarray}\nThe Altarelli-Parisi (= GLAP = DGLAP) kernel $P_{a\/b}$ is expanded in\npowers of $\\alpha_s$. The $\\alpha_s^1$ and $\\alpha_s^2$ terms are\nknown and used.\n\n\n\\subsection{Renormalization group interpretation}\n\nThe derivation of the renormalization group equation (\\ref{APeqn}) is\nrather technical. One should not lose sight of its intuitive meaning.\nParton splitting is always going on as illustrated in\nFig.~\\ref{fig:scales}. A probe with low resolving power doesn't see\nthis splitting. The renormalization parameter $\\mu$ corresponds to the\nphysical resolving power of the probe. At higher $\\mu$, field\noperators representing an idealized experiment can resolve the\nmother parton into its daughters.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\centerline{\\DESepsf(scales.eps width 5cm)}\n\\caption{A quark can fluctuate into a quark plus a gluon in a\nsmall space-time volume.}\n\\label{fig:scales}\n\\end{figure}\n\n\n\\subsection{Renormalization group result}\n\nOne can use the renormalization group equation (\\ref{APeqn}) to \nfind the parton distributions at a scale $\\mu$ if they are known at a\nlower scale $\\mu_0$. Fig.~\\ref{fig:evolve} shows an example, the gluon\ndistribution at $\\mu = 10 {\\ \\rm GeV} $ and at $\\mu = 100 {\\ \\rm GeV} $ (using the\nCTEQ3M parton distribution set). Notice that with greater resolution,\na gluon typically carries a smaller momentum fraction $x$ because of\nsplitting.\n\n\\begin{figure}[htb]\n\\vspace{9pt}\n\\leftline{\\DESepsf(evolve.eps width 7cm)}\n\\caption{Evolution of the gluon distribution between \n$\\mu = 10 {\\ \\rm GeV} $ and $\\mu = 100 {\\ \\rm GeV} $.}\n\\label{fig:evolve}\n\\end{figure}\n\n\n\n\\section{Translation to local operators}\n\nWe have defined the parton distributions as hadron matrix elements of\ncertain operator products, where the operators are evaluated along a\nlight-like line. Now we relate the parton distributions to products of\noperators all at the same point. It is these local operator products\nthat were originally used in the interpretation of deeply inelastic\nscattering experiments. For lattice QCD, evaluation of operator\nproducts at light-like separations would be, at best, very difficult,\nso the translation to local operator products seems essential. \n\n\\subsection{Quarks}\n\nNote that, according to the definitions (\\ref{quarkdef}) and\n(\\ref{antiquarkdef}), \n\\begin{eqnarray}\nf_{j\/A}(x,\\mu) &=& 0\\,, \\hskip 2 cm {\\magenta{x>1}}\\,,\n\\nonumber\\\\\nf_{\\bar j\/A}(x,\\mu) &=& 0\\,, \\hskip 2 cm {\\magenta{x>1}}\\,,\n\\nonumber\\\\\nf_{j\/A}({\\blue{-x}},\\mu) &=& {\\blue{-}} f_{{\\blue{\\bar j}}\/A}(x,\\mu).\n\\end{eqnarray}\nConsider the moments of the quark\/antiquark distributions defined by\n\\begin{eqnarray}\n\\lefteqn{M_j^{(J)}(\\mu) =}\n\\\\\n&& \\int_0^1 { dx \\over x}\\, {\\red{x^J}}\n\\left\\{\nf_{j\/A}(x;\\mu)\n+{\\blue{(-1)^J}} f_{{\\blue{\\bar j}}\/A}(x;\\mu)\n\\right\\}\n\\nonumber\n\\end{eqnarray}\nfor $J = 1,2,\\dots$. 
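To see explicitly how the quark and antiquark terms combine, substitute\n$x\\to -x$ in the antiquark term and use $f_{j\/A}(-x,\\mu)=-f_{\\bar j\/A}(x,\\mu)$:\n$$\n(-1)^J\\int_0^1{dx\\over x}\\,x^J f_{\\bar j\/A}(x;\\mu)\n=\\int_{-1}^{0}{dx\\over x}\\,x^J f_{j\/A}(x;\\mu)\\,,\n$$\nso the two pieces assemble into a single integral over $-1\\le x\\le 1$, and hence\nover the whole real line, since the distributions vanish for $|x|>1$.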
Given the properties above, this is\n\\begin{equation}\nM_j^{(J)}(\\mu) = \\int_{{\\magenta{-\\infty}}}^{{\\magenta{\\infty}}}\n{ dx \\over x}\\, {\\red{x^J}}\nf_{j\/A}(x;\\mu)\\,.\n\\end{equation}\nObtaining $\\int_{-\\infty}^{\\infty}dx$ is the essential step.\n\nFrom the operator definitions of the $f$s, this is\n\\begin{eqnarray}\n\\lefteqn{\nM_j^{(J)}(\\mu) \n=}\n\\nonumber\\\\&& \n{ 1 \\over 4\\pi}\\int dy^- \\int_{-\\infty}^{\\infty}dx\\ \ne^{-i xP^+{\\red{y^-}}}\\ \n\\left(\n{ -i \\over P^+}\n{\\red{{ \\partial \\over \\partial y^-}}}\n\\right)^{J-1}\n\\nonumber\\\\\n&&\\ \\ \\times\\langle P|\n\\overline\\psi_{j}({\\red{y^-}})\n\\gamma^+ {\\cal O}({\\red{y^-}},0)\\\n\\psi_{j}(0)\n| P \\rangle _{\\vbox{\\hrule\\kern 1pt\\hbox{\\smallrm MS}}}\\ .\n\\end{eqnarray}\nPerforming the $x$-integration gives a $\\delta(y^-)$.\nThus we get a local operator. The ${\\red{\\partial\/\\partial y^-}}$\ndifferentiates either the quark field or the exponential of gluon\nfields in $\\cal O$, Eq.~(\\ref{eikonaldef}). We find\n\\begin{eqnarray}\n\\lefteqn{M_j^{(J)}(\\mu) =}\n\\\\\n&&\\hskip - 0.3cm\n{ 1 \\over 2}(P^+)^{-J}\\,\n\\langle P|\n\\overline\\psi_{j}(0)\n\\gamma^+ \n\\left(\ni{\\red{D^+}}\n\\right)^{J-1}\n\\psi_{j}(0)\n| P \\rangle _{\\vbox{\\hrule\\kern 1pt\\hbox{\\smallrm MS}}}\\,,\n\\nonumber\n\\end{eqnarray}\nwhere\n\\begin{equation}\n{\\red{D^\\mu}} = { \\partial \\over \\partial y_\\mu}\n- i g\\, A^\\mu_a(y)\\, t_a\\,.\n\\end{equation}\n\nWe have now related the moments of the quark distribution to products\nof operators evaluated at the same point. However, this is not yet\nready for the lattice because it would not be easy to differentiate\nthe operators with respect to $y^-$. To obtain a more useful\nexpression, consider ${\\blue{\\langle {\\cal O}_j^{(J)}\\rangle}}$\ndefined by\n\\begin{eqnarray}\n\\lefteqn{\\{P^{\\mu_1}P^{\\mu_1}\\cdots P^{\\mu_J}\\}_{\\rm TS}\\\n{\\blue{\\langle {\\cal O}_j^{(J)}\\rangle}}\n=}\n\\\\\n&&\\hskip -0.5cm\n{1 \\over 2}\n\\langle P|\n\\overline\\psi_{j}(0)\\,\n\\{\\gamma^{\\mu_1} \niD^{\\mu_2}\n\\cdots\niD^{\\mu_J}\n\\}_{\\rm TS}\\,\n\\psi_{j}(0)\n| P \\rangle _{\\vbox{\\hrule\\kern 1pt\\hbox{\\smallrm MS}}}\\ .\n\\nonumber\n\\end{eqnarray}\nwhere TS denotes taking the traceless symmetric part of the tensor\nenclosed.\n\nThen\n\\begin{eqnarray}\n\\lefteqn{{\\blue{\\langle {\\cal O}_j^{(J)}\\rangle}}\n= M_j^{(J)}(\\mu) =}\n\\\\\n&&\\int_0^1 { dx \\over x}\\, {\\red{x^J}}\n\\left\\{\n{\\blue{f_{j\/A}(x;\\mu)}}\n+(-1)^J {\\blue{f_{\\bar j\/A}(x;\\mu)}}\n\\right\\}.\n\\nonumber\n\\end{eqnarray}\nThis is our final result.\n\nWe can now imagine the following program for a lattice calculation.\nOne could measure ${\\blue{\\langle {\\cal O}_j^{(J)}\\rangle}}$ on the\nlattice for $J=1,2,\\dots$. Of course, this would not be so easy, but\nit would gives moments of the quark\/antiquark distributions that\ncould be compared to the corresponding moments of the distributions\ndetermined from experiments.\n\n\n\\subsection{Gluons}\n\nWe follow a similar analysis to relate the gluon distribution\nfunction to local operator products. 
From the definition\n(\\ref{gluondef}), it follows that\n\\begin{eqnarray}\nf_{g\/A}(x,\\mu) &=& 0\\,, \\hskip 2 cm {\\magenta{x>1}}\\,,\n\\nonumber\\\\\nf_{g\/A}({\\blue{-x}},\\mu) &=& {\\blue{-}} f_{g\/A}(x,\\mu)\\,.\n\\end{eqnarray}\nFor $J= 2,4,6,\\dots$, we relate moment integrals over the interval\n$00$ is the Ginzburg-Landau parameter, a material characteristic\nof the sample\\,;\n\\item $h_{\\rm ex}>0$ measures the intensity of the applied magnetic\nfield\\,;\n\\item $B_0$ is a smooth function defined in $\\overline\\Omega$. The\napplied magnetic field is $h_{\\rm ex}B_0 \\vec{e}$, where $\\vec{\ne}=(0,0,1)$.\n\\end{itemize}\nWe introduce the ground state energy of the functional in\n\\eqref{eq-3D-GLf} as follows,\n\\begin{equation}\\label{eq-gse}\n\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)=\\inf\\{\\mathcal E(\\psi,\\mathbf A)~:~(\\psi,\\mathbf A)\\in H^1(\\Omega;\\mathbb\nC)\\times H^1(\\Omega;\\mathbb R^2)\\}\\,.\n\\end{equation}\nIn physical terms, \\eqref{eq-gse} describes the energy of a type~II\nsuperconductor submitted to a possibly non-constant magnetic field of intensity\n$h_{\\rm ex}|B_0|$.\n\n\nThe behavior of the ground state energy in \\eqref{eq-gse} strongly\ndepends on the values of $\\kappa$ and $h_{\\rm ex}$. This is the\nsubject of a vast mathematical literature. In the two monographs\n\\cite{FH-b, SaSe}, a survey of many important results regarding the\nbehavior of $\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)$ is given. The results are valid when $h_{\\rm\nex}=h_{\\rm ex}(\\kappa)$ is a function of $\\kappa$ and\n$\\kappa\\to +\\infty$.\n\nLet us recall two important results regarding the ground state\nenergy in \\eqref{eq-gse}. The first result is obtained in\n\\cite{SS02} and says, if $b\\in(0,1]$ is a constant, $h_{\\rm\nex}=b\\kappa^2$ and $B_0=1$, then\n\\begin{equation}\\label{eq:gse-SS}\n\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)=g(b)|\\Omega|\\kappa^2+o(\\kappa^2)\\quad(\\kappa\\to +\\infty)\\,,\n\\end{equation}\nwhere $g(b)$ is a constant that will be defined in \\eqref{eq:g}\nbelow.\n\n\nThe second result is given in \\cite{HK} and valid under the\nfollowing assumption on the function $B_0$.\n\\begin{ass}\\label{ass-B0}\nSuppose that $B_0:\\overline{\\Omega}\\to\\mathbb R$ is a smooth function\nsatisfying\n\\begin{itemize}\n\\item $|B_0|+|\\nabla B_0|\\geq c$ in $\\overline{\\Omega}$, where $c>0$\nis a constant\\,;\n\\item $\\Gamma=\\{x\\in\\overline{\\Omega}~:~B_0(x)=0\\}$ is the union of\na finite number of smooth curves\\,;\n\\item $\\Gamma\\cap\\partial\\Omega$ is a finite set\\,.\n\\end{itemize}\n\\end{ass}\nUnder these assumptions on $B_0$, if $b>0$ is a constant and $h_{\\rm\nex}=b\\kappa^3$, then,\n\\begin{equation}\\label{eq:gse-HK}\n\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)=\\kappa\\left(\\int_{\\Gamma}\\Big(b|\\nabla\nB_0(x)|\\Big)^{1\/3}\\,E\\Big(b|\\nabla\nB_0(x)|\\Big)\\,ds(x)\\right)+o(\\kappa)\\,,\n\\end{equation}\nwhere $E(\\cdot)$ is a {\\it continuous} function that will be defined\nin \\eqref{eq:E} below, and $ds$ is the arc-length measure in\n$\\Gamma$.\n\n\n\nIn physical terms, \\eqref{eq:gse-HK} describes the energy of a\ntype~II superconductor subjected to a {\\it variable} magnetic field\nthat {\\it vanishes} along a {\\it smooth curve}. Such magnetic\nfields are of special importance in the analysis of the\nGinzburg-Landau model in surfaces (see \\cite{CL}).\n\nMagnetic fields satisfying Assumption~\\ref{ass-B0} have an early\nappearance in the literature, for instance in a paper by Montgomery\n\\cite{M}. 
Pan and Kwek \\cite{PK} study the breakdown of\nsuperconductivity under the Assumption~\\ref{ass-B0}. They find a\nconstant $c_0>0$ such that, if $h_{\\rm ex}= b\\kappa^3$, $b> c_0$ and\n$\\kappa$ is sufficiently large, then $\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)=0$. Recently, the results\nof Pan-Kwek have been improved in \\cite{Att3, Miq}. The discussion\nin \\cite{HK} proves that the formula in \\eqref{eq:gse-HK} is\nconsistent with the conclusion in \\cite{PK} and with Theorem 1.7 in\n\\cite{Att3}.\n\n\n\nAs proven in \\cite{HK}, the formula in \\eqref{eq:gse-HK} continues\nto hold when $h_{\\rm ex}=b\\kappa^3$ and $b=b(\\kappa)$\nsatisfies\\,\\footnote{ The notation $a(\\kappa)\\ll b(\\kappa)$ means\nthat $a(\\kappa)=\\delta(\\kappa)b(\\kappa)$ and\n$\\displaystyle\\lim_{\\kappa\\to +\\infty}\\delta(\\kappa)=0\\,$.},\n\\begin{equation}\\label{eq:cond-b}\n\\kappa^{-1\/2}\\ll b(\\kappa)\\ll\n1\\quad(\\kappa\\to +\\infty)\\,.\\end{equation}\nWhen the condition in \\eqref{eq:cond-b} is violated by\nallowing\\,\\footnote{The notation $a(\\kappa)\\lesssim b(\\kappa)$\nmeans that there exists a constant $c >0$ and $\\kappa_0>0$ such\nthat, for all $\\kappa\\geq \\kappa_0$, $a(\\kappa)\\leq c b(\\kappa)\\,$.}\n$$ \\kappa^{-1} \\ll b(\\kappa)\\lesssim \\kappa^{-1\/2} \\quad(\\kappa\\to +\\infty)\\,$$ then the formula in\n\\eqref{eq:gse-HK} is replaced with (see \\cite{HK}),\n\\begin{equation}\\label{eq:gse-HK'}\n\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)=\\kappa^2\\int_\\Omega g\\big(b (\\kappa) \\, \\kappa\\, |B_0(x)|\\big)\\,dx+o\\left(b(\\kappa) ^{-1}\\kappa\\right)\\,.\n\\end{equation}\n Note that \\eqref{eq:gse-HK'} is still true for lower values\nof the external field but with a different expression for the\nremainder term (see \\cite{Att,Att2}).\n\n\nThe comparison of the formulas in \\eqref{eq:gse-HK} and\n\\eqref{eq:gse-HK'} at the border regime\\footnote{The notation\n$a(\\kappa)\\approx b(\\kappa)$ means that $a(\\kappa)\\lesssim\nb(\\kappa)$ and $b(\\kappa)\\lesssim a(\\kappa)$.}\n$$\nb(\\kappa)\\approx \\kappa^{-1\/2}$$ suggests that there might\nexist a relation between the two reference functions $g(\\cdot)$ and\n$E(\\cdot)$. This paper confirms the existence of such a\nrelationship.\n\nThe two functions $g(\\cdot)$ and $E(\\cdot)$ are defined via\nsimplified versions of the functional in \\eqref{eq-3D-GLf}. As we\nshall see, $g(\\cdot)$ will be defined via a constant magnetic field,\nwhile, for $E(\\cdot)$, this will be via a magnetic field that\nvanishes along a line.\n\nLet us recall the definition of the function $g(\\cdot)$.\nConsider $b\\in\\,(0,+\\infty)$, $r>0\\,$, and\n$Q_r=\\,(-r\/2,r\/2)\\,\\times\\,(-r\/2,r\/2)$\\,. 
Define the functional,\n\\begin{equation}\\label{eq:rGL}\nF_{b,Q_r}(u)=\\int_{Q_r}\\left(b|(\\nabla-i\\mathbf A_0)u|^2-|u|^2+\\frac{1}2|u|^4\\right)\\,dx\\,,\n\\quad \\mbox{ for } u\\in H^1(Q_r)\\,.\n\\end{equation}\nHere, $\\mathbf A_0$ is the magnetic potential,\n\\begin{equation}\\label{eq:A0}\n\\mathbf A_0(x)=\\frac12(-x_2,x_1)\\,,\\quad \\mbox{ for } x=(x_1,x_2)\\in \\mathbb R^2\\,.\n\\end{equation}\nDefine the two Dirichlet and Neumann ground state energies,\n\\begin{align}\n&e_D(b,r)=\\inf\\{F_{b,Q_r}(u)~:~u\\in H^1_0(Q_r)\\}\\,,\\label{eq:eD}\\\\\n&e_N(b,r)=\\inf\\{F_{b,Q_r}(u)~:~u\\in H^1(Q_r)\\}\\,.\\label{eq:eN}\n\\end{align}\nThanks to \\cite{Att, FK-cpde, SS02}, $g(\\cdot)$ may be defined as\nfollows,\n\\begin{equation}\\label{eq:g}\n\\forall~b>0\\,,\\quad g(b)=\\lim_{r\\to\\infty}\\frac{e_D(b,r)}{|Q_r|}=\\lim_{r\\to\\infty}\\frac{e_N(b,r)}{|Q_r|}\\,,\n\\end{equation}\nwhere $|Q_r|$ denotes the area of $Q_r$ ($|Q_r|=r^2$).\\\\\n Moreover the\nfunction $g(\\cdot)$ is a non decreasing continuous function such\nthat\n\\begin{equation} \\label{propg}\ng(0)=-\\frac12 \\mbox{ and } g(b)=0 \\mbox{ when } b\\geq 1\\,.\n\\end{equation}\n\nNow we introduce the function $E(\\cdot)$.\n\nLet $L>0$, $R>0$, $\\mathcal S_R=(-R\/2,R\/2)\\times \\mathbb R$ and\n\\begin{equation}\\label{eq-A-app}\n\\Ab_{\\rm van}(x)=\\Big(-\\frac{x_2^2}2,0\\Big)\\,,\\quad\n\\mbox{ for } x=(x_1,x_2)\\in\\mathbb R^2\\,.\\end{equation} Notice that $\\Ab_{\\rm van}$\nis a magnetic potential generating the magnetic field\n\\begin{equation}\\label{eq-B-app}\nB_{\\rm van}(x)=\\curl\\Ab_{\\rm van}=x_2\\,,\n\\end{equation}\nwhich vanishes along the $x_2$-axis.\n\nConsider the functional\n\\begin{equation}\\label{eq-gs-er''}\n\\mathcal E_{L,R}(u)=\\int_{\\mathcal S_R}\\left(|(\\nabla-i\\Ab_{\\rm van})u|^2-L^{-2\/3}|u|^2+\\frac{L^{-2\/3}}{2}|u|^4\\right)\\,dx\\,,\n\\end{equation}\nand the ground state energy\n\\begin{equation}\\label{eq-gs-er'}\n\\mathfrak{e}_{\\rm gs}(L;R)=\\inf\\{\\mathcal E_{L,R}(u)~:~u\\in H^1_{{\\rm mag},0}(\\mathcal S_R)\\}\\,,\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:space-mag}\nH^1_{{\\rm mag},0}(\\mathcal S_R)=\\{u\\in L^2(\\mathcal S_R)~:~(\\nabla-i\\Ab_{\\rm van})u\\in L^2(\\mathcal S_R)\\quad{\\rm and}\\quad u=0~{\\rm on~}\\partial\\mathcal S_R\\}\\,.\n\\end{equation}\nThanks to \\cite{HK}, we may define $E(\\cdot)$ as follows,\n\\begin{equation}\\label{eq:E}\nE(L)=\\lim_{R\\to\\infty}\\frac{\\mathfrak{e}_{\\rm gs}(L;R)}{R}\\,.\n\\end{equation}\n\nIn this paper, we obtain a relationship between the functions\n$E(\\cdot)$ and $g(\\cdot)$:\n\n\\begin{thm}\\label{thm:HK}\nLet $g(\\cdot)$ and $E(\\cdot)$ be as in \\eqref{eq:g} and \\eqref{eq:E}\nrespectively. It holds,\n$$E(L)=2L^{-4\/3}\\int_0^1g(b)\\,db+o\\big(L^{-4\/3}\\big)\\quad{\\rm as\n~}L\\to0_+\\,.$$\n\\end{thm}\n\nAs a consequence of Theorem~\\ref{thm:HK} and the co-area formula, we\nobtain:\n\n\\begin{thm}\\label{prop:HK+}\nSuppose that the function $B_0$ satisfies Assumption~\\ref{ass-B0}\nand\n$$\\kappa^{-1}\\ll b(\\kappa)\\ll 1\\,.$$\nLet $g(\\cdot)$ and $E(\\cdot)$ be the energies introduced in\n\\eqref{eq:gse-SS} and \\eqref{eq:gse-HK'} respectively. 
It holds,\n\\begin{multline*}\n\\int_\\Omega g\\big(b (\\kappa) \\, \\kappa\\, |B_0(x)|\\big)\\,dx\n\\\\=\\kappa^{-1}\\int_{\\Gamma}\\Big(b(\\kappa)|\\nabla\nB_0(x)|\\Big)^{1\/3}\\,E\\Big(b(\\kappa)|\\nabla B_0(x)|\\Big)\\,ds(x)+\no\\big(b(\\kappa)^{-1}\\kappa^{-1}\\big)\\big)\\,,\\quad(\\kappa\\to\n+\\infty)\\,.\n\\end{multline*}\n\\end{thm}\n\nThis yields the following improvement of the main result in\n\\cite{HK}:\n\n\\begin{theorem}\\label{thm:HK+}\nSuppose that Assumption~\\ref{ass-B0} holds and\n$$h_{\\rm ex}=b(\\kappa)\\kappa^3\\,,\\quad \\kappa^{-1}\\ll\nb(\\kappa)\\lesssim 1\\quad(\\kappa\\to +\\infty)\\,.$$ The ground state energy in \\eqref{eq:gse-HK} satisfies,\n$$\\mathrm{E}_{\\rm gs}(\\kappa, h_{\\rm ex};B_0)=\\kappa\\int_\\Gamma\\Big(b(\\kappa)|\\nabla\nB_0(x)|\\Big)^{1\/3}\\,E\\Big(b(\\kappa)|\\nabla\nB_0(x)|\\Big)\\,ds(x)+o\\big(b(\\kappa)^{-1}\\kappa\\big)\\,.\\quad(\\kappa\\to +\\infty)\\,.$$\n\\end{theorem}\n\n\n\nThe rest of the paper is devoted to the proof of\nTheorems~\\ref{thm:HK} and \\ref{prop:HK+}. Note that, along the\nproof of Theorem~\\ref{thm:HK}, we provide explicit estimates of the\nremainder terms (see Theorems~\\ref{thm:lb} and \\ref{thm:ub}).\n\n\n\n\\section{Preliminaries}\n\nIn this section, we collect useful results regarding the two\nfunctionals in \\eqref{eq:gse-SS} and \\eqref{eq:gse-HK}.\n\n\n\nFor the functional in \\eqref{eq:gse-SS} and the corresponding ground\nstate energies in \\eqref{eq:eD} and \\eqref{eq:eN}, the following\nresults are given in \\cite{Att2,FK-cpde}:\n\n\\begin{prop}\\label{prop:FK}~\n\\begin{enumerate}\n\\item There exist minimizers of the ground state energies in\n\\eqref{eq:eD} and \\eqref{eq:eN}.\n\\item For all $r>0$ and $b>0$, a minimizer $u_{b,r}$ of\n\\eqref{eq:eD} or \\eqref{eq:eN} satisfies\n$$|u_{b,r}|\\leq 1\\quad{~\\rm in~}Q_r\\,.$$\n\\item For all $r>0$ and $b>0$, $e_D(b,R)\\geq e_N(b,R)$.\n\\item For all $r>0$ and $b\\geq 1$, $e_D(b,r)=0$.\n\\item There exists a constant $C>0$ such that, for all\n$ b>0$ and $ r\\geq 1$, then\n\\begin{equation}\\label{eq:2.1}\ne_N(b,R)\\geq e_D(b,r)-Cr\\sqrt{b}\\,.\n\\end{equation}\n\\item There exists a constant $C$ such that, for all\n$r\\geq 1$ and $b\\in(0,1)$,\n\\begin{equation}\\label{eq:g'}\n g(b)\\leq\\frac{e_D(b,r)}{|Q_r|}\\leq g(b)+C\\frac{\\sqrt{b}}{r}\\,.\n\\end{equation}\n\\end{enumerate}\n\\end{prop}\n\n\\begin{rem}\\label{rem:ext(6)}\nThe estimate in \\eqref{eq:g'} continues to hold when $b\\geq\n1$, since in this case $g(b)=0$ and $e_D(b,r)=0$.\n\\end{rem}\n\n\\begin{rem}\\label{rem:proof(5)}\nLet us mention that Inequality \\eqref{eq:2.1} is proved in\n\\cite[Prop.~2.2]{Att2} for $00$ such that,\nfor all $b>0$ and $r\\geq 1$,\n$$\\frac{e_N(b,r)}{|Q_r|}\\geq g(b)-C\\frac{\\sqrt{b}}r\\,.$$\n\\end{rem}\n\nThe next lemma indicates a regime where the Neumann energy in\n\\eqref{eq:eN} vanishes.\n\n\\begin{lem}\\label{lem:eN=0}\nThere exists a constant $r_0>0$ such that, for all $r\\geq r_0$ and\n$b\\geq r_0$,\n$$e_N(b,r)=0\\,.$$\n\\end{lem}\n\\begin{proof}\nWe have the trivial upper bound, valid for all $b>0$ and $r>0$,\n$$e_N(b,r)\\leq F_{b,Q_r}(0)=0\\,.$$\nNow we will prove that $e_N(b,r)\\geq 0$ for sufficiently large\nvalues of $b$ and $r$. 
Let $u$ be an arbitrary function in\n$H^1(Q_r)$.\\\\\n We apply a rescaling to obtain,\n\\begin{equation}\\label{eq:eigenvalue}\n\\int_{Q_r}|(\\nabla-i\\mathbf A_0)u|^2\\,dx=r^4\\int_{Q_1}|(r^{-2}\\nabla-i\\mathbf A_0)v|^2\\,dy\\,,\n\\end{equation}\nwhere $$v(y)=u(ry)\\,.$$\n For every $h>0$, we introduce the following\nground state eigenvalue,\n$$ \\mu_1(h)=\\inf_{\\substack{v\\in\nH^1(Q_1)\\\\v\\not=0}}\\frac{\\displaystyle\\int_{Q_1}|(h\\nabla-i\\mathbf A_0)v|^2\\,dy}{\\displaystyle\\int_{Q_1}|v|^2\\,dy}\\,.$$\nIt is a known fact that (see \\cite{Bon, Pan, FH-b}),\n$$\\lim_{h\\to0_+}\\frac{\\mu_1(h)}{h}=\\Theta_1\\,,$$\nwhere $\\Theta_1\\in(0,1)$ is a universal constant.\n\nIn that way, we get a constant $r_1>0$ such that, for all $r\\geq\nr_1$, we infer from \\eqref{eq:eigenvalue},\n$$\n \\int_{Q_r}|(\\nabla-i\\mathbf A_0)u|^2\\,dx\\geq\\frac{\\Theta_1}2\\int_{Q_1}|v (y)|^2\\,r^2dy=\\frac{\\Theta_1}2\\int_{Q_r}|u (x)|^2\\,dx \\,.\n$$\n We insert this into the expression of $F_{b,Q_r}(u)$ to get, for\nall $r\\geq r_1$ and $b>0$,\n$$F_{b,Q_{r}}\\geq \\int_{Q_{r}}\n\\left(b\\frac{\\Theta_1}2-1\\right)|u|^2\\,dx\\,.$$ Let\n$r_0=\\max(r_1,2\\Theta_1^{-1})$. Clearly, for all $r\\geq r_0$, $b\\geq\nr_0$ and $u\\in H^1(Q_r)$, $F_{b,Q_r}(u)\\geq 0$. Consequently,\n$e_N(b,r)\\geq 0$.\n\\end{proof}\n\n\nThe functional in \\eqref{eq:gse-HK} is studied in \\cite{HK}. In\nparticular, the following results were obtained:\n\n\\begin{prop}\\label{prop:HK}~\n\\begin{enumerate}\n\\item For all $L>0$ and $R>0$, there exists a minimizer\n$\\varphi_{L,R}$ of \\eqref{eq-gs-er'}.\n\\item The function $\\varphi_{L,R}$ satisfies\n$$|\\varphi_{L,R}|\\leq 1\\quad{\\rm in~}\\mathcal S_R\\,.$$\n\\item There exists a constant $C>0$ such that, for all $L>0$ and\n$R>0$,\n\\begin{equation}\\label{eq:ub-u}\n\\int_{\\mathcal S_R}|\\varphi_{L,R}(x)|^2\\,dx\\leq CL^{-2\/3}R\\,.\n\\end{equation}\n\\item For all $ L>0$ and $R>0$,\n\\begin{equation}\\label{eq:lb-er}\nE(L)\\leq \\frac{\\mathfrak{e}_{\\rm gs}(L;R)}{R} \\,.\n\\end{equation}\n\\item There exists a constant $C>0$ such that, for all $L>0$ and $\\ R\\geq 4$,\n\\begin{equation}\\label{eq:ub-er}\n \\frac{\\mathfrak{e}_{\\rm gs}(L;R)}{R}\\leq E(L)+C\\left(1+L^{-2\/3}\\right)R^{-2\/3}\\,.\n\\end{equation}\n\\end{enumerate}\n\\end{prop}\n\n\\section{Proof of Theorem~\\ref{thm:HK}: Lower bound}\n\nThe aim of this section is to prove the lower bound in Theorem\n\\ref{thm:HK}. Note that the lower bound below is with a\nbetter remainder term.\n\n\\begin{thm}\\label{thm:lb}\nThere exist two constants $L_0>0$ and $C>0$ such that, for all\n$L\\in(0,L_0)$,\n$$E(L)\\geq 2L^{-4\/3}\\int_0^1g(b)\\,db-CL^{-1}\\,,$$\nwhere $E(\\cdot)$ and $g(\\cdot)$ are the energies introduced in\n\\eqref{eq:E} and \\eqref{eq:g} respectively.\n\\end{thm}\n\nThe proof of Theorem~\\ref{thm:lb} relies on the\nfollowing lemma:\n\n\n\\begin{lem}\\label{lem:lb}\nLet $M>0$. There exist two constants $C>0$ and $A_0\\geq 4$ such\nthat, if\n$$A\\geq A_0\\,,\\quad R\\geq 1,\\quad 00$ and $R$ and $u$ satisfy the assumptions in\nLemma~\\ref{lem:lb}. 
If $\\mathcal D\\subset\\mathcal S_R$, then we use\nthe notation\n\\begin{equation}\\label{eq-D}\n\\mathcal E(u;\\mathcal D)=\\int_{ \\mathcal\nD}\\left(|(\\nabla-i\\Ab_{\\rm van})u|^2-L^{-2\/3}|u|^2+ \\frac 12\\, L^{-2\/3}\\,\n|u|^4\\right)\\,dx\\,.\n\\end{equation}\nWe will prove that,\n\\begin{equation}\\label{eq:x2>A}\n\\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\geq A\\})\\geq RL^{-4\/3}\\int_0^1g(b)\\,d b-CRL^{-1}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:x2<-A}\n\\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\leq -A\\})\\geq RL^{-4\/3}\\int_0^1g(b)\\,d b-CRL^{-1}\\,,\n\\end{equation}\nfor some constant $C$ independent of $L$, $R$, $A$, $L$ and $u$.\n\nWe will write the detailed proof of \\eqref{eq:x2>A}. The proof of\n\\eqref{eq:x2<-A} is identical.\n\nLet $r_0$ be the {\\it universal} constant introduced in\nLemma~\\ref{lem:eN=0}. We define $b_0=2\\max(1,r_0^2)$. Thanks to\nLemma~\\ref{lem:eN=0}, we have,\n\\begin{equation}\\label{eq:eN-rem}\n\\forall~b\\geq \\frac{b_0}2\\,,\\quad\\forall~r\\geq \\sqrt{b_0}\\,,\\quad e_N(b,r)=0\\,,\n\\end{equation}\nwhere $e_N$ is the Neumann ground state energy introduced in\n\\eqref{eq:eN}.\n\nWe define the constant $A_0=4\\sqrt{b_0}$. We introduce $n\\in\\mathbb\nN$ and\n$$\\ell=n^{-1}R\\,.\n$$\nWe will fix a choice of $n$ later at the end of this proof such\nthat (for all $A\\geq A_0$),\n\\begin{equation}\\label{eq:cond-A}\n R 1\\,.$$\n Now, if $j\\in\\widetilde{\\mathcal J}$, then\n we can\n use the lower bound in \\eqref{rem:eN} with $b=\n(1-\\eta)|a_{j,2}|L^{2\/3} $ and $r= \\sqrt{|a_{j,2}|}\\,\\ell $ to\nwrite, for a different constant $C>0$,\n$$\n\\mathcal E(u; Q_{\\ell,j})\n\\geq L^{-2\/3} \\ell^2 \\Big(g\\big((1-\\eta)|a_{j,2}|L^{2\/3}\\big)- \\frac{C}{\\ell} \\sqrt{1-\\eta}\\,L^{1\/3}\\Big)-C\\eta^{-1}\\ell^4\\int_{Q_{\\ell,j}}|u|^2\\,dx\\,.\n$$\nIf $j\\in\\mathcal J_\\infty$, then $(1-\\eta)|a_{j,2}|L^{2\/3}\\geq\n\\frac12b_0$ and we can use the identity in \\eqref{eq:eN-rem} to\nwrite\n$$e_N\\Big((1-\\eta)|a_{j,2}|L^{2\/3}\\,,\\,\\sqrt{|a_{j,2}|}\\,\\ell\\Big)=\n0\\,.$$\nNow we can infer from \\eqref{eq:decomp} the following estimate,\n$$\n \\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\geq A\\})\\geq L^{-2\/3}\\sum_{j\\in\\widetilde{\\mathcal J}}\n\\Big(g\\big((1-\\eta)|a_{j,2}|L^{2\/3}\\big)- \\frac{C}{\\ell}\\,\n\\sqrt{1-\\eta}\\,L^{1\/3}\\Big)\\ell^2 -C\\eta^{-1}\\ell^4\\int_{\\mathcal\nS_R}|u|^2\\,dx\\,.\n$$\nUsing the assumption on the $L^2$-norm of $u$ (see\nLemma~\\ref{lem:lb}), we get further,\n$$\n \\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\geq A\\})\\geq L^{-2\/3}\\sum_{j\\in\\widetilde{\\mathcal J}}\n\\Big(g\\big((1-\\eta)|a_{j,2}|L^{2\/3}\\big)-\\frac{C}{\\ell}\\, \\sqrt{1-\\eta}\\,L^{1\/3}\\Big)\\ell^2\\\\\n-C\\eta^{-1}\\ell^4RL^{-2\/3}\\,.\n$$\nFor any $j \\in \\mathcal J$, we choose in $\\overline{Q_{j,\\ell}}$ the previously free point $a_j$ as $a_j :=\n\\big(c_{j,1}, c_{j,2} + \\frac \\ell 2\\big).$\\\\\nSince\n$g(\\cdot)$ is a non decreasing function, this choice yields that,\n$$\ng\\big( (1-\\eta) a_{j,2} L^{\\frac 23}\\big)= \\sup_{ t\\in (-\\frac \\ell 2 + c_{j,2}, c_{j,2} + \\frac \\ell 2) } g\\big( (1-\\eta) t L^{\\frac 23}\\big)\\,.\n$$\nIn that way, the sum\n$$\\ell^2 \\sum_{j\\in\\widetilde{\\mathcal J}}\ng\\big((1-\\eta)|a_{j,2}|L^{2\/3}\\big)$$ is an upper Riemann sum of\nthe function $(x_1,x_2) \\mapsto g( (1-\\eta) |x_2| L^{\\frac 23})$\non $\\mathcal D_{L,R}:=\\bigcup_{j\\in \\widetilde{ \\mathcal J}}\nQ_{\\ell,j}$ and\n\\begin{multline*}\n \\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\geq A\\})\\geq 
L^{-2\/3}\\int_{\\mathcal D_{L,R}}\ng\\big((1-\\eta)|x_2|L^{2\/3}\\big)\\,dx_1dx_2-C(1-\\eta)^{-1\/2}\\,L^{-1}R\\\\\n-C\\eta^{-1}\\ell^4RL^{-2\/3}\\,.\n\\end{multline*}\nWe now observe that, by definition of $\\widetilde{\\mathcal J}$ and\n$\\mathcal J$,\n$$ \\mathcal D_{L,R}=\\bigcup_{j\\in\\widetilde{\\mathcal J}}Q_{\\ell,j}\\subset\\{(x_1,x_2)\\in\\mathbb R^2~:~|x_1|\\leq R\/2\\quad{\\rm\nand}\\quad A< x_2\\leq b_0(1-\\eta)^{-1}L^{-2\/3}+\\ell\\}\\,.$$\n Since $g(\\cdot)$ is valued in\n$]-\\infty,0]$ and $g(b)=0$ for all $b\\geq 1$,\n then\n$$\n\\int_{\\mathcal D_{L,R}}\ng\\big((1-\\eta)|x_2|L^{2\/3}\\big)\\,dx_1dx_2\\geq\n\\int_{0\\leq x_2\\leq b_0(1-\\eta)^{-1}L^{-2\/3}+\\ell}\\int_{|x_1|\\leq R\/2}\ng\\big((1-\\eta)|x_2|L^{2\/3}\\big)\\,dx_1dx_2\\,,$$ and a simple change of variable yields,\n$$\n\\int_{\\mathcal D_{L,R}}\ng\\big((1-\\eta)|x_2|L^{2\/3}\\big)\\,dx_1dx_2\\geq\nR(1-\\eta)^{-1}L^{-2\/3}\\int_0^1 g(t)\\,dt\\,.\n$$\n Therefore, we have\nproved the following lower bound,\n$$\n \\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\geq A\\})\\geq L^{-4\/3}R(1-\\eta)^{-1}\\int_0^1 g(t)\\,dt\n-C(1-\\eta)^{-1\/2}\\,L^{-1}R -C\\eta^{-1}\\ell^4RL^{-2\/3}\\,.\n$$\nNow, we choose $n=[R+1]$ where $[\\,\\cdot\\,]$ denotes the integer part.\n In that way, the condition in \\eqref{eq:cond-A} is\nsatisfied for all $R\\geq 1$ and $A\\geq A_0=4\\sqrt{b_0}$. Moreover,\nwe have the lower bound,\n$$\n \\mathcal E(u;\\mathcal S_R\\cap\\{x_2\\geq A\\})\\geq 2L^{-4\/3}R(1-\\eta)^{-1}\\int_0^1 g(t)\\,dt\n-C(1-\\eta)^{-1\/2}\\,L^{-1}R -C\\eta^{-1}RL^{-2\/3}\\,.$$\nNow, we choose $\\eta=\\frac12L^{1\/3}$ so that, for all $L\\in(0,1)$,\n$\\eta\\in(0,\\frac12)$, $\\eta^{-1}L^{-2\/3}=2L^{-1}$, $\\eta\nL^{-4\/3}=\\frac12L^{-1}$ and the lower bound in \\eqref{eq:x2>A} is\nsatisfied.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:lb}]\nWe use the conclusion in Lemma~\\ref{lem:lb} with the following\nchoices,\n$$\nR=4\\,,\\quad A=A_0\\,,\\quad 00$ and $C>0$ such that, for all\n$L\\in(0,L_0)$,\n$$E(L)\\leq 2L^{-4\/3}\\int_0^1g(b)\\,db+CL^{-1\/3}\\,,$$\nwhere $E(\\cdot)$ and $g(\\cdot)$ are the energies introduced in\n\\eqref{eq:E} and \\eqref{eq:g} respectively.\n\\end{thm}\n\nThe proof of Theorem~\\ref{thm:ub} relies on the following lemma:\n\n\\begin{lem}\\label{lem:ub}\nLet $R\\geq 1$, $L>0$, $\\ell\\in(0,1)$, $\\eta\\in(0,1)$,\n$c=(c_1,c_2)\\in \\mathbb R^2$ and\n$$\n Q_\\ell=(-\\ell\/2+c_1,c_1+\\ell\/2)\\times\n(-\\ell\/2+c_2,c_2+\\ell\/2)\\,.$$\nSuppose that\n$$Q_\\ell\\subset\\{(x_1,x_2)\\in\\mathbb R^2~:~|x_1|\\leq R\/2\\quad{\\rm and}\\quad\n|x_2|\\geq \\frac1{\\ell^2}\\}\\,.$$\nFor all $R\\geq 1$, it holds,\n\\begin{multline*}\n\\inf\\{\\mathcal E_{L,R}(w)~:~w\\in H^1_0(Q_\\ell)\\}\n\\\\\n\\leq\nL^{-2\/3}\\int_{Q_\\ell}g\\Big((1+\\eta)L^{2\/3}|x_2|\\Big)\\,dx_1dx_2+CL^{-2\/3}\\Big(\\ell^{-1}L^{1\/3}+\\eta^{-1}\\ell^4\\Big)\\ell^2\\,,\\end{multline*}\nwhere, for all $w\\in H^1_0(Q_\\ell)$, $\\mathcal E_{L,R}(w)$ is\nintroduced in \\eqref{eq-gs-er''} by setting $w=0$ outside $Q_\\ell$,\nand $C>0$ is a constant independent of $\\ell$, $\\eta$, $c$, $L$ and\n$R$.\n\\end{lem}\n\\begin{proof}\nWe write the details of the proof when $Q_\\ell\\subset\\{x_2\\geq\n\\ell^{-2}\\}$. The case $Q_\\ell\\subset\\{x_2\\leq -\\ell^{-2}\\}$ can be\nhandled similarly. Let $a=(a_1,a_2)\\in \\overline{Q_\\ell}$. 
As we did\nin the derivation of \\eqref{eq:gauge1}, we may define a smooth\nfunction $\\phi$ in $Q_\\ell$ such that,\n\\begin{equation}\\label{eq:gauge3}\n\\Ab_{\\rm van}(x)= a_{2}\\mathbf A_0(x-c)+\\mathbf F(x)-\\nabla\\phi(x)\\quad {\\rm in}\\quad\nQ_{\\ell}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{eq:gauge4}\n|\\mathbf F(x)|\\leq C\\ell^2\\quad{\\rm in}~Q_\\ell\\,,\n\\end{equation}\nwhere $C>0$ is a universal constant. \\\\\nWe introduce the following\nthree parameters,\n\\begin{equation}\\label{eq:b,r}\n\\eta\\in(0,1)\\,,\\quad b=a_2(1+\\eta)L^{2\/3}\\,,\\quad r=\\sqrt{a_2}\\,\\ell\\,.\n\\end{equation}\nDefine the following function,\n$$u(x)=e^{i\\phi(x)}u_{b,r}\\big(\\sqrt{a_2}\\,(x-c)\\big)\\,,\\quad x\\in\nQ_\\ell\\,,$$ where $u_{b,r}\\in H^1_0(Q_r)$ is a minimizer of the energy $e_D(b,r)$ in \\eqref{eq:eD}.\n\nClearly, $u\\in H^1_0(Q_\\ell)$. Hence,\n$$\n\\inf\\{\\mathcal E_{L,R}(w)~:~w\\in H^1_0(Q_\\ell)\\}\\leq\\mathcal E_{L,R}(u)\\,.\n$$\nUsing \\eqref{eq:gauge3} and the Cauchy-Schwarz inequality, we\ncompute the energy of $u$ as follows,\n\\begin{multline*}\n\\mathcal E_{L,R}(u)\\leq\n\\int_{Q_\\ell}\\left((1+\\eta)|(\\nabla-ia_2\\mathbf A_0(x-c))e^{-i\\phi}u|^2-L^{-2\/3}|u|^2+\\frac{L^{-2\/3}}2|u|^4\\right)\\,dx\\\\\n+4\\eta^{-1}\\int_{Q_\\ell}|\\mathbf F(x)|^2|u|^2\\,dx\\,.\n\\end{multline*}\nUsing \\eqref{eq:gauge4}, the bound $|u_{b,r}|\\leq 1$, a change of\nvariable and \\eqref{eq:b,r}, we get,\n$$\\mathcal E_{L,R}(u)\\leq\n\\frac{L^{-2\/3}}{a_2}F_{b,r}(u_{b,r})+C\\eta^{-1}\\ell^6\\,,$$ where\n$F_{b,r}$ is the functional in \\eqref{eq:rGL}. \\\\\nOur choice of\n$u_{b,r}$ ensures that,\n$$F_{b,r}(u_{b,r})=e_D(b,r).$$\nAgain, thanks to the choice of $b$ and $r$ in \\eqref{eq:b,r}, we\nget,\n$$\\mathcal E_{L,R}(u)\\leq\n\\frac{L^{-2\/3}}{a_2}e_D\\Big((1+\\eta)a_2L^{2\/3},\\sqrt{a_2}\\,\\ell\\Big)+C\\eta^{-1}\\ell^6\\,.$$\nNow, by the assumption $ Q_\\ell\\subset\\{x_2\\geq \\ell^{-2}\\}$, we\nknow that $\\sqrt{a_2}\\,\\ell\\geq 1$. 
Thus we may use \\eqref{eq:g'} to\nwrite,\n\\begin{align*}\n\\mathcal E_{L,R}(u)&\\leq\n\\frac{L^{-2\/3}}{a_2}\\left(g\\big((1+\\eta)a_2L^{2\/3}\\big)+\\frac{C\\sqrt{1+\\eta}\\,L^{1\/3}}{\\ell}\\right)(\\sqrt{a_2}\\ell)^2\n+C\\eta^{-1}\\ell^6\\\\\n&=L^{-2\/3}\\left(g\\big((1+\\eta)a_2L^{2\/3}\\big)+\\frac{C\\sqrt{1+\\eta}\\,L^{1\/3}}{\\ell}\\right)\\ell^2+C\\eta^{-1}\\ell^6\\,,\\end{align*}\nwhich is uniformly true for $ a \\in \\overline{Q_\\ell}\\,$.\\\\\nWe now select\n$a=\\big(c_1,c_{2} -\\frac \\ell 2\\big)\\,$.\nSince $g(\\cdot)$ is a non-decreasing function, then\n$$\ng\\big((1+\\eta)a_2L^{2\/3}\\big)=\\inf_{x_2 \\in (-\\frac \\ell 2+c_2, c_2 +\\frac \\ell 2)} g\\big((1+\\eta)x_2L^{2\/3}\\big)\\,.$$\nThis yields,\n$$\n\\ell^2 \\, g\\big((1+\\eta)a_2L^{2\/3}\\big )\\leq\n\\int_{Q_\\ell}g\\big((1+\\eta)x_2L^{2\/3}\\big)\\,dx_1dx_2\\,,\n$$\nand finishes the proof of Lemma~\\ref{lem:ub}.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:ub}]~\\\\\nLet $R=4$, $L\\in(0,1)$, $\\eta=L$ and $\\ell=\\frac14$.\nLet $(Q_{\\ell,j})_{j}$ be the lattice of squares generated by the\nsquare\n$$Q=(-R\/2,-R\/2+\\ell)\\times(\\ell^{-2}, \\ell^{-2}+\\ell)\\,.$$\nDefine the set of indices\n$$\\mathcal J=\\big\\{j~:~Q_{\\ell,j}\\subset\\mathcal S_R\\cap\\{ x_2\\geq\\ell^{-2}\\}\\quad{\\rm and}~Q_{\\ell,j}\\cap\\{x_2\\leq (1+\\eta)^{-1}L^{-2\/3}\\}\\not=\\emptyset\\big\\}\\,.$$\nFor all $x=(x_1,x_2)\\in\\mathbb R^2$ with $x_2\\geq0$, define $u(x)$ as follows,\n$$u(x)=\\left\\{\n\\begin{array}{ll}\nu_{\\ell,j}(x)&{\\rm if~}j\\in\\mathcal J\\,,\\\\\n0&{\\rm if~}j\\not\\in\\mathcal J\\,,\n\\end{array}\n\\right.\n$$\nwhere $u_{\\ell,j}\\in H^1_0(Q_{\\ell,j})$ is a minimizer of the\nfollowing ground state energy\n$$\\inf\\{\\mathcal E_{L,R}(w)~:~w\\in H^1_0(Q_{\\ell,j})\\}\\,.$$\nWe extend $u(x)$ in $\\{x_2\\leq 0\\}$ as follows,\n$$ u(x)= \\bar u(x_1,-x_2)\\,,\\quad x=(x_1,x_2)\\quad {\\rm and ~}x_2\\leq 0\\,.$$\nClearly, $u\\in H^1_{\\rm mag,0}(\\mathcal S_R)$. Notice that,\n$$\\mathcal E_{L,R}(u)=2\\sum_{j\\in\\mathcal J}\\mathcal\nE_{L,R}(u_{\\ell,j})\\,,$$ and for $j\\in\\mathcal J$, the square\n$Q_{\\ell,j}$ satisfies the assumption in Lemma~\\ref{lem:ub}. 
We use\nLemma~\\ref{lem:ub} to write, \\begin{equation}\\label{eq:ub} \\mathcal\nE_{L,R}(u)\\leq 2L^{-2\/3}\\int_{\\mathcal\nD_\\ell}g\\Big((1+\\eta)L^{2\/3}x_2\\Big)\\,dx_1dx_2+CL^{-1\/3}|\\mathcal\nD_\\ell|\\,,\\end{equation} where the domain $\\mathcal D_\\ell$ is given\nas follows,\n$$\\mathcal D_\\ell=\\bigcup_{j\\in\\mathcal J}\\overline{Q_{\\ell,j}}\\,.$$\nThanks to the definition of the set $\\mathcal J$, it is clear that,\n$$ \\mathcal S_R\\cap\\{\\ell^{-2}\\leq x_2\\leq\n(1+\\eta)^{-1}L^{-2\/3}\\}\\subset\\mathcal D_\\ell \\subset \\mathcal\nS_R\\cap\\{0\\leq x_2\\leq (1+\\eta)^{-1}L^{-2\/3}+\\ell\\}\\,.$$\nThis yields:\n$$\n|\\mathcal D_\\ell|=\\mathcal O(RL^{-1\/3})\\,,$$ and (since the function\n$g(\\cdot)$ is valued in $ [-\\frac 12,0]$ and $g(b)=0$ for all $b\\geq\n1$),\n\\begin{align*}\n\\int_{\\mathcal\nD_\\ell}g\\Big((1+\\eta)L^{2\/3}x_2\\Big)\\,dx_1dx_2& \\leq\n\\int_{\\ell^{-2}}^{(1+\\eta)^{-1}L^{-2\/3}} \\int_{-R\/2}^{R\/2}g\\Big((1+\\eta)L^{2\/3}x_2\\Big)\\,dx_1dx_2\\\\\n& =(1+\\eta)^{-1}L^{-2\/3}R\\int_{\\ell^{-2}(1+\\eta)L^{2\/3}}^1g(t)\\,dt\\\\\n&\\leq (1+\\eta)^{-1}L^{-2\/3}R\\int_{0}^1g(t)\\,dt +\\ell^{-2}R\\,.\n\\end{align*}\nSubstitution into \\eqref{eq:ub} yields (recall that $\\eta=L\\in(0,1)$\nand $\\ell=\\frac14$),\n$$\\mathcal E_{L,R}(u)\\leq 2L^{-4\/3}R\\int_0^1g(t)\\,dt+CRL^{-1\/3}\\,.$$\nSince $u\\in H^1_{\\rm mag,0}(\\mathcal S_R)$, then\n$$\\mathfrak{e}_{\\rm gs}\\leq\\mathcal E_{L,R}(u)\\leq\n2L^{-4\/3}R\\int_0^1g(t)\\,dt+CRL^{-1\/3}\\,.$$ We divide by $R$ and use \\eqref{eq:lb-er} to deduce that\n$$E(L)\\leq 2L^{-4\/3}R\\int_0^1g(t)\\,dt+CL^{-1\/3}\\,.$$\n\\end{proof}\n\n\\section{Proof of Theorem~\\ref{prop:HK+}}\n\nLet $\\ell\\in(0,1)$ be a parameter {\\bf independent} of $\\kappa$.\nDefine the two sets,\n$$\\Omega_{\\kappa,\\ell}=\\{\\,x\\in\\Omega~:~|B_0(x)|<\\frac1{b(\\kappa)\\kappa}\\quad{\\rm and}~{\\rm dist}(x,\\partial\\Omega)>\\ell\\,\\}\\,,\\quad\n\\Gamma_{\\kappa,\\ell}=\\{x\\in\\Gamma~:~{\\rm\ndist}(x,\\partial\\Omega)>\\ell\\}\\,.$$ Recall that $\\Gamma=\\{B_0=0\\}$\nand by Assumption~\\ref{ass-B0}, $\\Gamma\\cap\\partial\\Omega$ is a\nfinite set. Thus, the area of $\\Omega_{\\kappa,\\ell}$ and the length\nof $\\Gamma_{\\kappa,\\ell}$ satisfy, for $\\kappa$ sufficiently large\nand some constant $C>0$ (independent of $\\kappa$ and $\\ell$),\n\\begin{equation}\\label{eq:appendix}\n|\\Omega_{\\kappa,\\ell}|\\leq\n\\frac{C\\varepsilon(\\ell)}{b(\\kappa)\\kappa}\\,,\\quad\n|\\Gamma_{\\kappa,\\ell}|\\leq C\\varepsilon(\\ell)\\,,\n\\end{equation}\nwhere $\\varepsilon(\\cdot)$ is a function independent of $\\kappa$ and\nsatisfying\n$\\lim_{\\ell\\to0_+}\\varepsilon(\\ell)=0\\,.$\\\\\nThe standard proof of \\eqref{eq:appendix} is left to the\nreader. The estimate in \\eqref{eq:appendix} is easier to\nverify under the additional assumption that $\\Gamma$ and $\\partial\n\\Omega$ intersect transversally, and in this case $\\varepsilon(\\ell)\n= \\ell$.\nNote that $g(\\cdot)$ vanishes in\n$[1,\\infty)$. 
Thus,\n\\begin{equation}\\label{g=E} \\int_\\Omega g\\big(b (\\kappa) \\, \\kappa\\,\n|B_0(x)|\\big)\\,dx =\\int_{\\Omega_{\\kappa,\\ell}} g\\big(b (\\kappa) \\,\n\\kappa\\, |B_0(x)|\\big)\\,dx+\\mathcal\nO\\left(\\frac{\\varepsilon(\\ell)}{b(\\kappa)\\kappa}\\right)\\,.\\end{equation}\nSince $b(\\kappa)\\kappa\\to +\\infty$, then Assumption~\\ref{ass-B0}\nyields, for $\\kappa$ sufficiently large,\n\\begin{equation}\\label{eq:nablaB0}\n\\exists~C>0\\,,\\quad \\forall~x\\in\\Omega_\\kappa\\,,\\quad\\Big|\\,|\\nabla B_0(x)|^{-1}-|\\nabla\nB_0(p( x))|^{-1}\\,\\Big|\\leq \\frac{C}{b(\\kappa)\\kappa}\\,.\n\\end{equation}\nHere, for $\\kappa$ sufficiently large and for all\n$x\\in\\Omega_{\\kappa,\\ell}$, the point $p(x)\\in\\Gamma$ is uniquely\ndefined by the relation\n$${\\rm dist}(x,\\Gamma)={\\rm dist}(x,p(x))\\,.$$\nThe co-area formula yields,\n$$\\int_{\\Omega_{\\kappa,\\ell}} g\\big(b (\\kappa) \\, \\kappa\\, |B_0(x)|\\big)\\,dx=\n\\int_0^{\\frac1{b(\\kappa)\\kappa}}\\left(\\int_{\\{|B_0|=r\\}\\cap\\Omega_{\\kappa,\\ell}}|\\nabla\nB_0(x)|^{-1}\\,g\\big(b(\\kappa)\\kappa \\,r\\big)\\,ds\\right)\\,dr\\,.$$\nThanks to \\eqref{eq:nablaB0}, we get further,\n\\begin{multline*}\n\\int_{\\Omega_{\\kappa,\\ell}} g\\big(b (\\kappa) \\, \\kappa\\,\n|B_0(x)|\\big)\\,dx\\\\=\n\\int_0^{\\frac1{b(\\kappa)\\kappa}}\\left(\\int_{\\{|B_0|=r\\}\\cap\\Omega_{\\kappa,\\ell}}|\\nabla\nB_0(p(x))|^{-1}\\,g\\big(b(\\kappa)\\kappa\n\\,r\\big)\\,ds\\right)\\,dr+\\mathcal\nO\\left(\\frac1{\\big(b(\\kappa)\\kappa\\big)^2}\\right)\\,.\\end{multline*}\nNow, a simple calculation yields,\n\\begin{multline*}\n\\int_0^{\\frac1{b(\\kappa)\\kappa}}\\left(\\int_{\\{|B_0|=r\\}\\cap\\Omega_{\\kappa,\\ell}}|\\nabla\nB_0(p(x))|^{-1}\\,g\\big(b(\\kappa)\\kappa\n\\,r\\big)\\,ds\\right)\\,dr\\\\\n=\\int_0^{\\frac1{b(\\kappa)\\kappa}}\\left(\\int_{\\{|B_0|=r\\}\\cap\\Omega_{\\kappa,\\ell}}|\\nabla\nB_0(p(x))|^{-1}\\,ds\\right)g\\big(b(\\kappa)\\kappa\n\\,r\\big)\\,dr\\,,\\end{multline*} and (using a simple analysis of the\narc-length measure in the curve $\\{|B_0|=r\\}$ and the assumption\nthat $\\Gamma\\cap\\partial\\Omega$ is a finite set),\n\\begin{multline*}\n\\forall~r\\in\\left(0,\\frac1{b(\\kappa)\\kappa}\\right)\\,,\\quad\n\\int_{\\{|B_0|=r\\}\\cap\\Omega_{\\kappa,\\ell}}|\\nabla\nB_0(p(x))|^{-1}\\,ds\\\\\n=\\int_{\\{|B_0|=0\\}}|\\nabla B_0(p(x))|^{-1}\\,ds+\\mathcal\nO\\big(\\eta(\\kappa)+\\varepsilon(\\ell)\\big)\\,,\\quad(\\kappa\\to\n\\infty)\\,,\n\\end{multline*}\nwhere $\\eta(\\cdot)$ satisfies\n$$\\lim_{\\kappa\\to\\infty}\\eta(\\kappa)=0\\,.$$\nAs a consequence, we get the following formula,\n$$\n\\int_{\\Omega_{\\kappa,\\ell}} g\\big(b (\\kappa) \\, \\kappa\\, |B_0(x)|\\big)\\,dx=\n\\int_{\\Gamma}\\left(\\int_0^{\\frac1{b(\\kappa)\\kappa}}\\,g\\big(b(\\kappa)\\kappa\n\\,r\\big)\\,dr\\right)|\\nabla\nB_0(x)|^{-1}ds(x)+\\mathcal O\\left(\\frac{\\eta(\\kappa)+\\varepsilon(\\ell)}{b(\\kappa)\\kappa}\\right)\n$$\nA change of variable and Theorem~\\ref{thm:HK} yield,\n\\begin{align*}\n\\int_0^{\\frac1{b(\\kappa)\\kappa}}\\,g\\big(b(\\kappa)\\kappa\n\\,r\\big)\\,dr&=\\frac1{b(\\kappa)\\kappa}\\int_0^1g(t)\\,dt\\\\\n&=\\frac1{2b(\\kappa)\\kappa}\\left(L^{4\/3}E(L)+\\varepsilon_1(L)\\right)\\,,\n\\end{align*}\nwhere $\\lim_{L\\rightarrow 0} \\varepsilon_1(L) =0\\,.$ \\\\\nFor $\\kappa$ sufficiently large, we take\n$$L=b(\\kappa)|\\nabla B_0(x)|\\,,$$\nand get,\n$$\n\\int_{\\Omega_{\\kappa,\\ell}} g\\big(b (\\kappa) \\, \\kappa\\, 
|B_0(x)|\\big)\\,dx=\n\\frac1{2\\kappa}\\int_{\\Gamma}|\\nabla\nB_0(x)|^{1\/3}E\\Big(b(\\kappa)|\\nabla B_0(x)|\\Big)ds(x)+\\mathcal O\\left(\\frac{ \\lambda(\\kappa)+\\eta(\\kappa)+\\varepsilon(\\ell)}{b(\\kappa)\\kappa}\\right)\\,,\n$$\nwhere $\\lambda(\\cdot)$ satisfies\n$\\displaystyle\\lim_{\\kappa\\to\\infty}\\lambda(\\kappa)=0$.\n\n Inserting\nthis into \\eqref{g=E} and noticing that $\\eta(\\kappa)\\to0$ as\n$\\kappa\\to\\infty$ and $\\ell$ was arbitrary in $(0,1)$, then we get\nthe conclusion in Theorem~\\ref{prop:HK+}.\n\\\\~\\\\\n{\\bf Acknowledgements}\\\\\nThis work was done when the first author was Simons foundation\nVisiting Fellow at the Isaac Newton Institute in Cambridge. The\nsupport of the ANR project Nosevol is also acknowledged. The second\nauthor acknowledges financial support through a fund from Lebanese\nUniversity.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAt present the top quark has not yet been conclusively\ndiscovered at the Fermilab Tevatron, even though there are events\nwhich look similar to those expected from top quark\ndecays. The experimental situation is presently\nconfusing because both the CDF \\cite{CDF} and D0 \\cite{D0}\ncollaborations only have limited statistics. As more events are collected\none expects that the situation will be clarified.\n\nAt the Tevatron, the top quark should be mainly\nproduced through $t\\bar{t}$ pair production from the light mass\nquarks and gluons in the colliding proton and\nantiproton. Both the top quark and the top antiquark\nthen decay to $(W,b)$ pairs, and\neach $W$ boson can decay either hadronically or leptonically.\nThe $b$-quark becomes an on-mass-shell $B$-hadron which\nsubsequently decays into leptons and (charmed) hadrons.\nA large effort is being made to reconstruct the top quark mass from the\nmeasured particles in the decay, which is complicated by the fact\nthat the neutrinos are never detected. Also there are\nadditional jets so it is not clear which ones to choose to\nrecombine \\cite{{OS},{OTS}}. The best channel for this mass\nreconstruction is where both $W$ bosons decay\nleptonically, one to a $(e,\\nu_e)$ pair, the other to a\n$(\\mu,\\nu_{\\mu})$ pair (a dilepton event) because the backgrounds\nin this channel are small.\nWhen only a single lepton is detected then it is necessary\nto identify the $b$ quark in the decay to remove large backgrounds\nfrom the production of $W +$ jets \\cite{Berends}.\nIn all cases the reconstruction of the particles in the final state involves\nboth the details of the production of the top quark-antiquark\npair as well as the knowledge of their fragmentation\nand decay products. The CDF collaboration \\cite{CDF}\nhave reported two events with dilepton final states, six events\nwith a single lepton and a $b$-quark identified\nby a secondary vertex, and seven single lepton events\nwith the $b$-quark identified by a semileptonic decay.\nThe CDF collaboration then constructed a likelihood function for the invariant\nmass \\cite{Dalitz} and quoted the value\n$m_{\\rm top} = 174 \\pm 10^{+13}_{-12}$ GeV$^\/c^2$.\nThe top quark cross section quoted by the CDF collaboration is\n$13.9^{+6.1}_{-4.8}$ pb.\nThe D0 collaboration \\cite{D0} have reported on nine events with\nan expected background of\n$3.8 \\pm 0.9$. 
If the excess is due to $t\\bar t$ production\nand if the top quark mass is 180 GeV$\/c^2$, then the top quark\ncross section is $8.2 \\pm 5.1$ pb.\n\nThe values for the top quark production cross section\nas a function of the top quark mass used by the CDF\nand D0 collaborations contain\nboth the NLO QCD corrections \\cite{{nde1},{betal}}\nand an extension to include the resummation of initial state soft partons to\nall orders in perturbation theory \\cite {lsn}. A recent\nsummary of the theoretical predictions has been presented in \\cite{ke}.\nIn the Laenen et al. paper\n\\cite{lsn} the DIS factorization scheme was used with the\nMRSD$\\_$ parton distributions \\cite{mrs},\nthe two-loop running coupling constant with five active flavors, and\n$\\Lambda_{\\rm QCD} = 0.152$ GeV.\n\nIn the analysis of the decay distributions one needs\nknowledge of the inclusive differential distributions\nof the heavy quarks in transverse momentum $p_t$ and rapidity $y$.\nThese distributions are known in NLO \\cite{{nde2},{bnmss}}.\nWhat we would like to discuss in this paper is\nan updating of the resummation effects on the\ninclusive transverse momentum distribution of the top quark.\nIn the original paper \\cite{LSN} it was not known\nwhich mass to choose whereas now\nwe can assume that the mass is $175$ GeV$\/c^2$.\nAlso we discuss here how\nthe resummation effects modify the rapidity distribution\nof the top quark. Since there have been suggestions of using\nthe mass and angular distributions in top quark production to\nlook for physics beyond the standard model \\cite{Lane} it is very\nimportant to know the normal QCD predictions for these quantities.\n\nWe first summarize what is known on the top quark cross section. If the\ntop quark mass is 175 GeV$\/c^2$ then the dominant production\nchannel is $q + \\bar q \\rightarrow t + \\bar t$. In lowest order\nQCD perturbation theory it contributes\nabout 90 \\% of the total cross section with the reaction\n$g + g \\rightarrow t + \\bar t$ making up the remaining 10\\%.\nOne notes that the NLO corrections\nin the $q\\bar{q}$ channel are small, whereas those in\nthe $gg$ channel are more than 80\\% .\nAt this large top quark mass the $qg$ and $\\bar{q}g$ channels\ngive negligible contributions so we do not consider them.\nEven though the $gg$ channel contribution is small in Born approximation\nit can be significant in NLO due to multiple soft parton radiation.\nThese large corrections are predominantly from the threshold\nregion for heavy quark production. It was shown\npreviously \\cite{MSSN} that initial state gluon bremsstrahlung (ISGB)\nis responsible for the large corrections at NLO\nnear threshold.\n\nIn \\cite{LSN} the dominant logarithms\nfrom ISGB, which are the cause of the large corrections\nnear threshold, were carefully examined.\nSuch logarithms have been studied previously\nin Drell-Yan (DY) \\cite{DY} production at fixed target energies\n(again near threshold) where they are responsible for\ncorrespondingly large corrections. The\nanalogy between DY and heavy quark production cross sections was\nexploited in \\cite{LSN} and\na formula to resum the leading and next-to-leading logarithms\nin pQCD to all orders was proposed. 
Since the contributions due to these\nlogarithms are positive (when all scales $\\mu$ are set equal\nto the heavy quark mass $m$),\nthe effect of summing the higher order corrections\nincreases the top quark production cross section over that\npredicted in $O(\\alpha_s^3)$.\nThis sum, which will be identified as $\\sigma_{\\rm res}$, depends\non a nonperturbative parameter $\\mu_0$. The reason that\na new parameter has to be introduced is that\nthe resummation is sensitive to the scale at which\npQCD breaks down. As we approach the threshold region\nother, nonperturbative, physics plays a role (higher twist,\nbound states, etc.) indicated by a dramatic increase in\n$\\alpha_s$ and in the resummed cross section.\nThis is commonly called the effect of the infrared renormalon\nor Landau pole \\cite{Mueller}.\nWe chose to simply cut off the resummation\nat a specific scale $\\mu_0$ where\n$\\Lambda_{\\rm QCD} << \\mu_0 << m$ since it is not\nobvious how to incorporate the nonperturbative effects.\nNote that our resummed corrections diverge for small $\\mu_0$\nbut this is {\\em not} physical since they should be\njoined smoothly onto some nonperturbative prescription\nand the total cross section will be finite.\nAnother way to make it finite would be to avoid the infrared\nrenormalon by a specific continuation around it, i.e. the\nprincipal value resummation method \\cite{{Contop},{Alvero}}.\nHowever, at the moment\nour total resummed corrections depend on the parameter\n$\\mu_0$ for which we can only make a rough estimate.\nSee \\cite{LSN} for more details.\n\n\n\\section{Soft Gluon Approximation to the\ninclusive distributions}\nTo make this paper self-contained we list some\nrelevant formulae.\nThe partonic processes under discussion will be denoted by\n\\begin{equation}\ni(k_1) + j(k_2) \\rightarrow Q(p_1) + \\bar Q(p_2),\n\\end{equation}\nwhere $i,j = g, q, \\bar q$. The kinematical variables\n\\begin{equation}\ns = ( k_1+k_2)^2 \\quad , \\quad t_1 = (k_2-p_2)^2 - m^2 \\quad , \\quad\nu_1 = (k_1- p_2)^2 - m^2\\quad ,\n\\end{equation}\nare introduced in the calculation of the corrections\nto the single particle inclusive differential distributions\nof the heavy (anti)quark. We do not distinguish in the text\nbetween the heavy quark and heavy antiquark since the distributions\nare essentially identical in our calculations.\nHere $s$ is the square of the\nparton-parton c.m. 
energy and the heavy quark transverse\nmomentum is given by $p_t= (t_1u_1\/s-m^2)^{1\/2}$.\nThe rapidity variable is defined by\n$ \\exp (2y) = u_1\/t_1$.\nThe Born approximation differential cross sections can be expressed by\n\\begin{equation}\ns^2\\frac{d^2\\sigma^{(0)}_{ij}(s,t_1,u_1)}{dt_1 \\: du_1} = \\delta\n(s+t_1+u_1) \\sigma^B_{ij}(s,t_1,u_1)\\,,\n\\end{equation}\nwith\n\\begin{equation}\n\\sigma^B_{q\\bar q}(s,t_1,u_1) = \\pi \\alpha_s^2(\\mu^2) K_{q\\bar{q}}\nNC_F \\Big[ \\frac{t_1^2 + u_1^2}{s^2} + \\frac{2m^2}{s}\\Big]\\,,\n\\end{equation}\nand\n\\begin{eqnarray}\n\\sigma^B_{gg}(s,t_1,u_1)& = & 2\\pi \\alpha_s^2(\\mu^2) K_{gg}\nNC_F \\Big[C_F - C_A \\frac{t_1u_1}{s^2}\\Big] \\nonumber \\\\ &&\n\\times\\Big[ \\frac{t_1}{u_1} + \\frac{u_1}{t_1} + \\frac{4m^2s}{t_1u_1}\n\\Big(1 - \\frac{m^2s}{t_1u_1}\\Big) \\Big] \\,.\n\\end{eqnarray}\nHere the color factors are\n$C_A=N$ and $C_F=(N^2-1)\/(2N)$.\nThe color average factors are\n$K_{q\\bar{q}}=N^{-2}$ and $K_{gg}=(N^2-1)^{-2}$.\nThe parameter $\\mu$ denotes the renormalization scale.\nIn \\cite{LSN} the inclusive cross section\nwas examined near threshold ($s \\approx 4m^2$)\nwhere the contributions from the radiation of\nsoft and collinear gluons are large.\nA variable\n$s_4 = s+t_1+u_1$ was defined, where $t_1=(k_2-p_2)^2-m^2$\nand $u_1=(k_1-p_2)^2-m^2$ are inelastic variables\nin the channel $i(k_1)+j(k_2) \\rightarrow Q(p_1) + \\bar Q(p_2)\n+g(k_3)$.\nThe variable $s_4>0$ now depends on the\nfour momentum of the extra parton(s) emitted in the reaction.\nIn the Born approximation there are no additional partons\nso $s_4=0$. In \\cite{LSN} the NLO contributions were examined\nin the soft region (where $s_4 \\rightarrow 0$) and it was found\nthat the dominant contribution to the NLO cross section in this region\nhad a similar form to the NLO correction in the Drell-Yan\nprocess. As the latter correction is known exactly\nin NNLO \\cite{Neerven} this correspondence was used to write the differential\ncross section in order $\\alpha_s^k(\\mu^2)$ as follows\n\\begin{eqnarray}\ns^2\\frac{d^2\\sigma_{ij}^{(k)}(s,t_1,u_1)}{dt_1 \\: du_1} &=&\n\\alpha_s^k(\\mu^2) \\sum_{l=0}^{2k-1} \\Big[\\frac{1}{s_4}a_l(\\mu^2)\n\\ln^l\\Big(\\frac{s_4}{m^2}\\Big)\\theta(s_4 - \\Delta)\n \\nonumber \\\\ &&\n+ \\frac{1}{l+1} a_l(\\mu^2) \\ln^{l+1}\\Big(\\frac{\\Delta}{m^2}\\Big) \\delta(s_4)\n\\Big] \\sigma^B_{ij}(s,t_1,u_1) \\,.\n\\end{eqnarray}\nHere a small parameter $\\Delta$ has been introduced\nto allow us to distinguish between the\nsoft ($s_4 < \\Delta$) and the hard ($s_4> \\Delta$)\nregions in phase space. The quantities $a_l(\\mu^2)$ contain\nterms involving the QCD $\\beta$-functions and color factors.\nThe variables $t_1$ and $u_1$ were then mapped onto the variables $s_4$ and\n$\\cos\\theta$, where $\\theta$ is the parton-parton\nc.m. scattering angle. 
After explicit integration over the angle\n$\\theta$, the resulting series was\nexponentiated by the introduction of the $s_4$ variable\ninto the argument of the running coupling constant.\n\nAs noted in the previous paper \\cite{LSN} in\naddition to the total cross section we can also\nderive the resummed heavy (anti)quark inclusive $p_t$ (and below the\n$y$) distributions.\nThe transverse momentum $p_t$ of the heavy quark is related\nto our previous variables by\n\\begin{equation}\nt_1 = - \\frac{1}{2}\\Big\\{ s - s_4 -[(s - s_4)^2 - 4s m_t^2]^{1\/2}\\Big\\}\\,,\n\\end{equation}\n\\begin{equation}\nu_1 = - \\frac{1}{2}\\Big\\{ s - s_4 +[(s - s_4)^2 - 4s m_t^2]^{1\/2}\\Big\\}\\,,\n\\end{equation}\nwith $m_t^2 = m^2 + p_t^2.$ The double differential cross section is\ntherefore\n\\begin{equation}\ns^2 \\frac{d^2\\sigma_{ij}(s, t_1, u_1)}{dt_1 \\: du_1} =\ns[(s - s_4)^2 - 4s m_t^2]^{1\/2}\n\\frac{d^2\\sigma_{ij}(s, s_4, p_t^2)}{dp_t^2ds_4} \\, ,\n\\end{equation}\nwith the boundaries\n\\begin{equation}\n0 < p_t^2 < \\frac{s}{4} - m^2\\quad , \\quad 0 < s_4 < s-2m_t \\sqrt{s}\\,.\n\\end{equation}\nThe $O(\\alpha_s^k)$ contribution to the inclusive transverse momentum\ndistribution $d\\sigma_{ij}\/dp_t^2$ is given by\n\\begin{eqnarray}\n\\frac{d\\sigma_{ij}^{(k)}(s,p_t^2)}{dp_t^2} &=&\n\\frac{2}{s} \\alpha_s^k(\\mu^2) \\sum_{l=0}^{2k-1} a_l(\\mu^2)\n\\int_0^{s-2m_ts^{1\/2}}\\, ds_4\n \\nonumber \\\\&&\n\\times\\Big\\{ \\frac{1}{s_4} \\ln^l\\Big(\\frac{s_4}{m^2}\\Big) \\theta(s_4 - \\Delta)\n+ \\frac{1}{l+1} \\ln^{l+1} \\Big(\\frac{\\Delta}{m^2}\\Big) \\delta(s_4)\\Big\\}\n \\nonumber \\\\ &&\n \\times \\frac{1}{[(s-s_4)^2 - 4sm_t^2]^{1\/2}} \\sigma^B_{ij}(s,s_4,p_t^2)\n \\, ,\n\\end{eqnarray}\nwhere we have inserted an extra factor of two so\nthat $\\int dp_t^2 \\: d\\sigma\/dp_t^2 = \\sigma_{\\rm tot}$. After some algebra\nwe can rewrite this result as\n\\begin{eqnarray}\n\\frac{d\\sigma_{ij}^{(k)}(s,p_t^2)}{dp_t^2} &=&\n\\alpha_s^k(\\mu^2) \\sum_{l=0}^{2k-1} a_l(\\mu^2)\n\\Big[\\int_0^{s-2m_ts^{1\/2}}\\, ds_4 \\frac{1}{s_4} \\ln^l\\frac{s_4}{m^2}\n \\nonumber \\\\ &&\n\\times\\Big\\{ \\frac{d\\bar\\sigma_{ij}^{(0)}(s,s_4,p_t^2)}{dp_t^2}\n - \\frac{d\\bar\\sigma_{ij}^{(0)}(s,0,p_t^2)}{dp_t^2} \\Big\\}\n \\nonumber \\\\ &&\n+ \\frac{1}{l+1} \\ln^{l+1}\\Big(\\frac{s - 2m_t s^{1\/2}}{m^2}\\Big)\n\\frac{d\\bar\\sigma_{ij}^{(0)}(s,0,p_t^2)}{dp_t^2} \\Big]\\,,\n\\end{eqnarray}\nwith the definition\n\\begin{equation}\n\\frac{d\\bar\\sigma_{ij}^{(0)}(s,s_4,p_t^2)}{dp_t^2} =\n\\frac{2}{s[(s-s_4)^2 - 4s m_t^2]^{1\/2}} \\sigma_{ij}^B(s,s_4,p_t^2)\\,,\n\\end{equation}\nwhere $d\\bar\\sigma^{(0)}_{ij}(s,0,p_t^2)\/dp_t^2 \\equiv\nd\\sigma^{(0)}_{ij}(s,p_t^2)\/dp_t^2 $ again represents the\nBorn differential $p_t$ distribution. 
For the $q\\bar q$ and $gg$\nsubprocesses we have the explicit results\n\\begin{eqnarray}\n\\frac{d\\bar\\sigma_{q\\bar q}^{(0)}(s,s_4,p_t^2)}{dp_t^2} &=&\n2\\pi \\alpha_s^2(\\mu^2) K_{q\\bar q} N C_F \\frac{1}{s}\n\\frac{1}{[(s-s_4)^2 -4sm_t^2]^{1\/2}}\n\\nonumber \\\\ &&\n\\times \\Big[\\frac{(s-s_4)^2 - 2sp_t^2}{s^2}\\Big] \\,,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\frac{d\\bar\\sigma_{gg}^{(0)}(s,s_4,p_t^2)}{dp_t^2} &=&\n4\\pi \\alpha_s^2(\\mu^2) K_{gg} N C_F \\frac{1}{s}\n\\frac{1}{[(s-s_4)^2 -4sm_t^2]^{1\/2}}\n\\nonumber \\\\ &&\n\\times\\Big[C_F - C_A \\frac{m_t^2}{s}\\Big]\n\\nonumber \\\\ &&\n\\times \\Big[\\frac{(s-s_4)^2 - 2sm_t^2}{sm_t^2} +\n\\frac{4m^2}{m_t^2} \\Big( 1 -\\frac{m^2}{m_t^2}\\Big)\\Big] \\,.\n\\end{eqnarray}\nSince the above formulae are symmetric\nunder the interchange $t_1 \\leftrightarrow u_1$\nthe heavy quark and heavy antiquark inclusive $p_t$ distributions are\nidentical. Note that (2.12) is basically the integral of a plus\ndistribution together with a surface term.\n\nThe corresponding formula to (2.12) for the rapidity $y$\nof the heavy quark is obtained by using\n\\begin{equation}\nt_1 = - \\frac{(s-s_4)}{2}(1 - \\tanh y)\\,,\n\\end{equation}\n\\begin{equation}\nu_1 = - \\frac{(s-s_4)}{2}(1 + \\tanh y)\\,.\n\\end{equation}\nThe double differential cross section is\ntherefore\n\\begin{equation}\ns^2 \\frac{d^2\\sigma_{ij}(s, t_1, u_1)}{dt_1 \\: du_1}\n=\n2 s^2 \\frac{\\cosh^2y}{s-s_4}\n\\frac{d^2\\sigma_{ij}(s, s_4, y)}{dy \\: ds_4}\\,,\n\\end{equation}\nwith the boundaries\n\\begin{equation}\n- \\frac{1}{2}\\ln \\Big( \\frac{1+\\beta}{1-\\beta}\\Big) < y <\n \\frac{1}{2}\\ln \\Big( \\frac{1+\\beta}{1-\\beta}\\Big)\n\\quad , \\quad 0 < s_4 < s-2\\sqrt{s}m\\cosh y\\,,\n\\end{equation}\nwhere $\\beta^2 = 1 -4m^2\/s$.\nThe $O(\\alpha_s^k)$ contribution to the inclusive rapidity\ndistribution $d\\sigma_{ij}\/dy$ is given by\n\\begin{eqnarray}\n\\frac{d\\sigma_{ij}^{(k)}(s,y)}{dy} &=&\n\\alpha_s^k(\\mu^2) \\sum_{l=0}^{2k-1} a_l(\\mu^2)\n\\int_0^{s-2ms^{1\/2}\\cosh y}\\, ds_4\n \\nonumber \\\\&&\n\\times \\Big\\{ \\frac{1}{s_4} \\ln^l\\Big(\\frac{s_4}{m^2}\\Big) \\theta(s_4 - \\Delta)\n+ \\frac{1}{l+1} \\ln^{l+1}\\Big( \\frac{\\Delta}{m^2}\\Big) \\delta(s_4)\\Big\\}\n \\nonumber \\\\ &&\n \\times \\Big(\\frac{s-s_4}{2s^2\\cosh^2 y}\\Big) \\sigma^B_{ij}(s,s_4,y)\n \\,.\n\\end{eqnarray}\nAfter some algebra we can rewrite this result as\n\\begin{eqnarray}\n\\frac{d\\sigma_{ij}^{(k)}(s,y)}{dy} &=&\n\\alpha_s^k(\\mu^2) \\sum_{l=0}^{2k-1} a_l(\\mu^2)\n\\Big[\\int_0^{s-2ms^{1\/2}\\cosh y}\\, ds_4 \\frac{1}{s_4}\n\\ln^l\\Big(\\frac{s_4}{m^2}\\Big)\n \\nonumber \\\\ &&\n\\times\\Big\\{ \\frac{d\\bar\\sigma_{ij}^{(0)}(s,s_4,y)}{dy}\n - \\frac{d\\bar\\sigma_{ij}^{(0)}(s,0,y)}{dy} \\Big\\}\n \\nonumber \\\\ &&\n+ \\frac{1}{l+1} \\ln^{l+1}\\Big(\\frac{s - 2ms^{1\/2}\\cosh y}{m^2}\\Big)\n\\frac{d\\bar\\sigma_{ij}^{(0)}(s,0,y)}{dy} \\Big]\\,,\n\\end{eqnarray}\nwith the definition\n\\begin{equation}\n\\frac{d\\bar\\sigma_{ij}^{(0)}(s,s_4,y)}{dy} =\n\\frac{s-s_4}{2s^2 \\cosh^2 y} \\, \\sigma_{ij}^B(s,s_4,y)\\,,\n\\end{equation}\nwhere $d\\bar\\sigma^{(0)}_{ij}(s,0,y)\/dy \\equiv\nd\\sigma^{(0)}_{ij}(s,y)\/dy $ again represents the\nBorn differential $y$ distribution. 
For the $q\\bar q$ and $gg$\nsubprocesses we have the explicit formulae\n\\begin{eqnarray}\n\\frac{d\\bar \\sigma_{q\\bar q}^{(0)}(s,s_4,y)}{dy} &=&\n\\pi\\alpha_s^2(\\mu^2) K_{q\\bar q} N C_F\n\\frac{s-s_4}{2s^2\\cosh^2 y}\n\\nonumber \\\\ &&\n\\times \\Big[\\frac{(s-s_4)^2}{2s^2\\cosh^2 y}\\Big(\n\\cosh^2 y + \\sinh^2 y\\Big) + \\frac{2m^2}{s}\\Big] \\,,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\frac{d\\bar \\sigma_{gg}^{(0)}(s,s_4,y)}{dy} &=&\n4\\pi \\alpha_s^2(\\mu^2) K_{gg} N C_F\n\\frac{s-s_4}{2s^2 \\cosh^2 y }\n\\nonumber \\\\ &&\n\\times\\Big[C_F - C_A \\frac{(s-s_4)^2}{4s^2 \\cosh^2 y }\\Big]\n\\times \\Big[\\cosh^2 y + \\sinh^2 y\n\\nonumber \\\\ &&\n+ \\frac{8m^2s \\cosh^2 y}{(s-s_4)^2} \\Big( 1 -\\frac{4m^2s\\cosh^2 y}{(s-s_4)^2}\n\\Big)\\Big] \\,.\n\\end{eqnarray}\n Since the above formulae are symmetric under\nthe interchange $t_1 \\leftrightarrow u_1$\nthe heavy quark and heavy antiquark inclusive $y$-distributions are\nidentical. Also (2.21) is again of the form of a plus distribution\ntogether with a surface term. Finally, we note that the terms in\n(2.12) and (2.21) are all finite.\n\n\n\\section{Resummation procedure in parton-parton collisions}\nThe resummed contribution to the top quark cross\nsection can be written as \\cite{LSN}\n\\begin{equation}\ns^2\\frac{d^2\\sigma_{ij}(s,t_1,u_1)}{dt_1 \\: du_1}\n=\\left[\\frac{df(s_4\/m^2,m^2\/\\mu^2)}{ds_4}\\theta(s_4-\\Delta)\n+f(\\frac{\\Delta}{m^2},\\frac{m^2}{\\mu^2})\\delta(s_4) \\right]\n\\sigma_{ij}^B(s,t_1,u_1),\n\\end{equation}\nwhere\n\\begin{eqnarray}\nf\\left(\\frac{s_4}{m^2},\\frac{m^2}{\\mu^2}\\right)=\n\\exp\\left\\{A\\frac{C_{ij}}{\\pi}\\bar\\alpha_s\\left(\\frac{s_4}{m^2},m^2\\right)\n\\ln^2\\frac{s_4}{m^2}\\right\\}\\frac{[s_4\/m^2]^{\\eta}}{\\Gamma(1+\\eta)}\n\\exp(-\\eta\\gamma_E).\n\\end{eqnarray}\nThe straightforward expansion of the exponential plus the change\nof the argument in $\\bar\\alpha_s$ via the renormalization group equations\ngenerates the corresponding leading logarithmic terms written\nexpicitly in \\cite{LSN}.\nThe scheme dependent $A$ and $\\bar\\alpha_s$ in the above expression\nare given by\n\\begin{equation}\nA=2; \\; \\; \\; \\; \\; \\bar\\alpha_s(y,\\mu^2)=\\alpha_s(y^{2\/3}\\mu^2)\n=\\frac{4\\pi}{\\beta_0 \\ln (y^{2\/3}\\mu^2\/\\Lambda^2)}\\,,\n\\end{equation}\nin the $\\overline{\\rm MS}$ scheme, and\n\\begin{equation}\n\\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! \\! A=1; \\; \\; \\; \\; \\;\n\\bar\\alpha_s(y,\\mu^2)=\\alpha_s(y\\mu^2)\n=\\frac{4\\pi}{\\beta_0 \\ln (y\\mu^2\/\\Lambda^2)}\\,,\n\\end{equation}\nin the DIS scheme,\nwhere $\\beta_0=11\/3 \\; C_A-2\/3 \\; n_f$\nis the lowest order coefficient of the QCD $\\beta$-function.\nThe color factors $C_{ij}$ are defined by $C_{q\\bar{q}}=C_F$ and\n$C_{gg}=C_A$, and $\\gamma_E$ is the Euler constant.\nThe quantity $\\eta$ is given by\n\\begin{equation}\n\\eta=\\frac{8C_{ij}}{\\beta_0}\\ln\\left(1+\\beta_0\n\\frac{\\alpha_s(\\mu^2)}{4\\pi}\\ln\\frac{m^2}{\\mu^2}\\right)\\,.\n\\end{equation}\n\nFollowing the procedure in \\cite{LSN} for the resummation of the\norder $\\alpha_s^k$ contributions to the $p_t$ distribution we have\n\\begin{eqnarray}\n\\frac{d\\sigma_{ij}(s,p_t^2)}{dp_t^2} &=&\n\\sum_{k=0}^{\\infty}\n\\frac{d\\sigma_{ij}^{(k)}(s,p_t^2)}{dp_t^2}\n \\nonumber \\\\ &&\n\\! \\! \\! \\! \\! \\! 
=\\int_{s_0}^{s-2m_ts^{1\/2}}\\, ds_4\n\\frac{df(s_4\/m^2, m^2\/\\mu^2)}{ds_4}\n \\nonumber \\\\ &&\n\\times\\Big\\{ \\frac{d\\bar\\sigma_{ij}^{(0)}(s,s_4,p_t^2)}{dp_t^2}\n - \\frac{d\\bar\\sigma_{ij}^{(0)}(s,0,p_t^2)}{dp_t^2} \\Big\\}\n \\nonumber \\\\ &&\n+ f\\Big( \\frac{s-2m_ts^{1\/2}}{m^2}, \\frac{m^2}{\\mu^2}\\Big)\n\\frac{d\\sigma_{ij}^{(0)}(s,p_t^2)}{dp_t^2} \\,.\n\\end{eqnarray}\nNote that we now have cut off the lower limit of the $s_4$ integration\nat $s_4=s_0$ because $\\bar\\alpha_s$ in (3.2) diverges as $s_4 \\rightarrow 0$.\nThis parameter $s_0$ must satisfy\nthe conditions $0s_0$ while no such cut was imposed\non the phase space for the individual terms in the perturbation series.\nThe improved differential distributions are uniformly above the exact\n$O(\\alpha_s^3)$ result. It is also evident from fig. 2 that the\nresummation of the soft gluon contributions to the $p_t$ distribution\nmodifies the exact $O(\\alpha_s^3)$ result only slightly for the values\nof $\\mu_0$ that have been chosen.\n\nWe continue with the results for the $gg$ channel in\nthe $\\overline{\\rm MS}$ scheme.\nThe corresponding plots are given in figures 3 and 4.\nIn this case the values of $\\mu_0$ have been chosen to be\n$\\mu_0=0.2\\:m$ and $\\mu_0=0.25\\:m$ to correspond to those in \\cite{lsn}.\nNote that $\\mu_0$ need not be the same in the $q\\bar{q}$ and $gg$ reactions\nbecause the convergence properties of the QCD perturbation series could be\ndifferent in these channels and moreover depend on the factorization scheme.\nThe first and second order corrections in the $gg$ channel in the\n$\\overline{\\rm MS}$ scheme are larger than the respective ones in the\n$q\\bar{q}$ channel in the DIS scheme. In fact, for the range of $p_t$\nvalues shown the second-order approximate correction is larger than the\nfirst-order approximation. Hence, the relative difference in magnitude\n between the improved $d\\sigma_H^{\\rm imp}\/dp_t$ and\nthe exact $O(\\alpha_s^3)$ results\nis significantly larger than that in the $q\\bar{q}$ channel\nin the DIS scheme.\n\nWe finish our discussion of the differential $p_t$ distributions with the\nresults of adding the $q\\bar{q}$ and $gg$ channels. The plots appear\nin figures 5 and 6. It is evident that resummation produces an\nenhancement of the exact $O(\\alpha_s^3)$ result, with very little change\nin shape.\n\nNow we turn to a discussion of the differential $Y$\ndistributions at $\\sqrt{S}=1.8$ TeV\nfor a top quark mass $m = 175$ GeV$\/c^2$.\nIn this case we set the factorization mass scale equal to $m$ everywhere.\nWe begin with the results for the $q\\bar{q}$ channel in the DIS scheme.\nIn fig. 7 we show\nthe Born term $d\\sigma_H^{(0)}\/dY$, the first order exact result\n$d\\sigma_H^{(1)}\/dY\\mid _{\\rm exact}$, the first order approximation\n$d\\sigma_H^{(1)}\/dY\\mid _{\\rm app}$, the second\norder approximation $d\\sigma_H^{(2)}\/dY\\mid _{\\rm app}$,\nand the resummed result $d\\sigma_H^{\\rm res}\/dY$ for $\\mu_0=0.05\\:m$\nand $\\mu_0=0.1\\:m$. In fig. 8 we show the exact $O(\\alpha_s^3)$ result\n$d\\sigma_H^{(0)}\/dY+d\\sigma_H^{(1)}\/dY\\mid _{\\rm exact}$, and,\nfor comparison, $d\\sigma_H^{\\rm imp}\/dY$\nfor $\\mu_0=0.05\\:m$ and $\\mu_0=0.1\\:m$. Again, the resummed differential cross\nsections were calculated with the cut $s_4>s_0$ while no such cut was imposed\non the phase space for the individual terms in the perturbation series.\nIt is also evident from fig. 
8 that the\nresummation of the soft gluon contributions to the $Y$ distribution\nmodifies the exact $O(\\alpha_s^3)$ result only slightly for the\nvalues of $\\mu_0$\nthat have been chosen.\n\nWe continue with the results for the $gg$ channel\nin the $\\overline{\\rm MS}$ scheme.\nThe corresponding plots are given in figures 9 and 10. Here, the values\nof $\\mu_0$ are $\\mu_0=0.2\\:m$ and $\\mu_0=0.25\\:m$. As in the case of\nthe $p_t$ distributions, the first and second order\ncorrections in this channel are larger than the respective ones in the\n$q\\bar{q}$ channel in the DIS scheme. For the range of $Y$\nvalues shown the second-order approximate correction is larger than the\nfirst-order approximation. Again, as in the $p_t$ distributions,\nthe relative difference in magnitude between the improved\n$d\\sigma_H^{\\rm imp}\/dY$ and the exact $O(\\alpha_s^3)$ results\nis significantly larger than that in the $q\\bar{q}$ channel\nin the DIS scheme.\n\nFinally, we conclude our discussion of the differential\n$Y$ distributions by showing the\nresults of adding the $q\\bar{q}$ and $gg$ channels. The plots appear\nin figures 11 and 12. Again, it is evident that resummation produces\na non-negligible modification of the exact $O(\\alpha_s^3)$ result.\nHowever, the shape of the distribution is unchanged.\n\nWe have shown that the resummation of soft gluon radiation\nproduces a small difference between the perturbation improved distributions\nin $p_t$ and $Y$ and the exact $O(\\alpha_s^3)$ distributions\nin $p_t$ and $Y$ for\nthe $q\\bar{q}$ reaction in the DIS scheme for the values of $\\mu_0$ chosen.\nHowever, for the $gg$ channel in the $\\overline{\\rm MS}$ scheme the\nresummation produces a large difference. The difference between the\n perturbation improved and the exact $O(\\alpha_s^3)$ distributions depends\non the mass factorization\nscheme (DIS or $\\overline{\\rm MS}$), the factorization scale $\\mu$,\nas well as the\nspecific reaction under consideration ($q\\bar{q}$ or $gg$).\nFor a mass $m=175$ GeV$\/c^2$ the $gg$ channel is not\nas important numerically as\nthe $q\\bar{q}$ channel. However, since the corrections for the $gg$ channel\nare quite large, resummation produces a non-negligible difference between\nthe perturbation improved and the exact\n$O(\\alpha_s^3)$ distributions when adding the\ntwo channels. However, the shapes of the distributions are essentially\nunchanged.\n\n\n\n{\\bf ACKNOWLEDGEMENTS}\n\nWe thank E. Laenen and W. L. van Neerven for helpful discussions.\n\nThe work in this paper was supported in part under the\ncontract NSF 93-09888.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection*{Acknowledgements} We are grateful to M. Kapranov for informing\nus of Sigg's work. This research is partially supported by the NSF.\n\n\\section{Lie $\\SS$-algebras}\n\nIn this section, we recall parts of the formalism of operads, referring to\nGetzler-Jones \\cite{GJ} for further details. 
This formalism is closely\nrelated to Joyal's theory of species and analytic functors (Joyal\n\\cite{Joyal}).\n\n\\begin{definition}\nAn $\\SS$-module is a functor from $\\SS$, the groupoid formed by taking the\nunion of the symmetric groups $\\SS_n$, $n\\ge0$, to the category of vector\nspaces.\n\\end{definition}\n\nAssociated to an $\\SS$-module $\\mathsf{A}$ is the functor from the category of\nvector spaces to itself,\n$$\nV \\mapsto \\mathsf{A}(V) = \\sum_{k=0}^\\infty \\bigl( \\mathsf{A}(k) \\o V^{\\o k}\n\\bigr)_{\\SS_k} .\n$$\nThis is a generalization of the notion of a Schur functor, which is the\nspecial case where $\\mathsf{A}$ is an irreducible representation of $\\SS_n$.\n\n\\begin{definition}\nA polynomial functor $\\Phi$ is a functor from the category of vector spaces\nto itself such that the map $\\Phi:\\Hom(V,W)\\to\\Hom(\\Phi(V),\\Phi(W))$ is\npolynomial for all vector spaces $V$ and $W$. An analytic functor $\\Phi$ is\na direct image of polynomial maps.\n\\end{definition}\n\nTo an analytic functor $\\Phi$, we may associate the $\\SS$-module\n$$\n\\mathsf{A}(n) = \\Phi(\\mathbb{F} x_1\\oplus\\dots\\oplus\\mathbb{F} x_n)_{(1,\\dots,1)} \\subset \\Phi(\\mathbb{F}\nx_1\\oplus\\dots\\oplus\\mathbb{F} x_n) ,\n$$\nthe summand of $\\Phi(\\mathbb{F} x_1\\oplus\\dots\\oplus\\mathbb{F} x_n)$ homogeneous of degree\n$1$ in each of the generators $x_i$. We call $\\mathsf{A}$ the $\\SS$-module of\nTaylor coefficients of $\\Phi$. The following theorem is proved in\nAppendix~A of Macdonald \\cite{Macdonald}.\n\\begin{theorem}\nThere is an equivalence of categories between the category of $\\SS$-modules\nand the category of analytic functors: to an $\\SS$-module, we associate the\nfunctor $V\\mapsto\\mathsf{A}(V)$, while to an analytic functor $\\Phi$, we\nassociate its $\\SS$-module of Taylor coefficients.\n\\end{theorem}\n\nAny $\\SS$-module $\\mathsf{A}$ extends to a functor on the category of finite sets\nand bijections: if $S$ is a finite set of cardinality $n$, we have\n$$\n\\mathsf{A}(S) = \\biggl( \\sum_{\\substack{f:[n]\\to S \\\\ \\text{bijective}}} \\mathsf{A}(n)\n\\biggr)_{\\SS_n} ,\n$$\nwhere $[n]=\\{1,\\dots,n\\}$. 
The category of $\\SS$-modules has a monoidal\nstructure, defined by the formula\n$$\n(\\mathsf{A}\\circ\\mathsf{B})(n) = \\sum_{k=0}^\\infty \\biggl( \\mathsf{A}(k) \\o \\sum_{f:[n]\\to[k]}\n\\mathsf{B}(f^{-1}(1)) \\o \\dots \\o \\mathsf{B}(f^{-1}(k)) \\biggr)_{\\SS_k} .\n$$\nThis definition is motivated by the composition formula\n$(\\mathsf{A}\\circ\\mathsf{B})(V)\\cong\\mathsf{A}(\\mathsf{B}(V))$.\n\n\\begin{definition}\nAn \\textbf{operad} is a monoid in the category of $\\SS$-modules, with\nrespect to the above monoidal structure.\n\\end{definition}\n\nWe see that the structure of an operad on an $\\SS$-module $\\mathsf{A}$ is the same\nas the structure of a triple on the associated analytic functor\n$V\\mapsto\\mathsf{A}(V)$.\n\nThe Lie operad $\\mathsf{Lie}$ is the operad whose associated analytic functor is\nthe functor taking a vector space to its free Lie algebra.\n\\begin{definition}\nA Lie $\\SS$-algebra $\\L$ is a left $\\mathsf{Lie}$-module in the category of\n$\\SS$-modules.\n\\end{definition}\n\nLie $\\SS$-algebras are essentially the same things as analytic functors\nfrom the category of vector spaces to the category of Lie algebras; more\nprecisely, they are the collections of Taylor coefficients of such\nfunctors.\n\nIf we unravel the definition of a Lie $\\SS$-algebra, we see that it is an\n$\\SS$-module $\\L$ with $\\SS_n$-equivariant brackets\n$$\n[-,-] : \\Ind^{\\SS_n}_{\\SS_k\\times\\SS_{n-k}} \\L(k) \\o \\L(n-k) \\to \\L(n)\n$$\nfor $0\\le k\\le n$, such that if $a_i\\in\\L(n_i)$, $i=1,2,3$, the following\nexpressions vanish:\n\\begin{gather*}\n[a_1,a_2] - [a_2,a_1] \\in \\Ind^{\\SS_n}_{\\SS_{n_1}\\times\\SS_{n_2}} \\bigl(\n\\L(n_1) \\o \\L(n_2) \\bigr) \\quad \\text{and} \\\\\n[a_1,[a_2,a_3]] + [a_2,[a_3,a_1]] + [a_3,[a_1,a_2]] \\in\n\\Ind^{\\SS_n}_{\\SS_{n_1}\\times\\SS_{n_2}\\times\\SS_{n_3}} \\bigl( \\L(n_1) \\o\n\\L(n_2) \\o \\L(n_3) \\bigr) .\n\\end{gather*}\n\nIf $L$ is a Lie algebra, let $\\mathsf{K}_\\bullet(L)$ be the Chevalley-Eilenberg\ncomplex of $L$. Recall that $\\mathsf{K}_k(L)=\\Wedge^kL$ is the $k$th exterior power\nof $L$, and the differential $\\partial:\\mathsf{K}_k(L)\\to\\mathsf{K}_{k-1}(L)$ is given by the\nformula\n$$\n\\partial (a_1\\.\\dots\\.a_k) = \\sum_{1\\le id . \\end{cases}\n$$\nEach of these is a Lie $\\SS$-algebra; the brackets\n$\\mathsf{Lie}_d(k)\\o\\mathsf{Lie}_d(n-k)\\to\\mathsf{Lie}_d(n)$ are defined as for $\\mathsf{Lie}$ if $n\\le d$,\nand of course vanish if $n>d$. The analytic functor associated to the Lie\n$\\SS$-module $\\mathsf{Lie}_d$ is known as the free $d$-step nilpotent Lie algebra.\nWe may view Sigg's theorem \\cite{Sigg} as the calculation of the homology\nof the Lie $\\SS$-algebra $\\mathsf{Lie}_2$:\n$$\nH_k(\\mathsf{Lie}_2)(n) \\cong \\sum_{\\{\\lambda\\in\\mathcal{O}_k\\mid|\\lambda|=n\\}}\n\\Schur^\\lambda .\n$$\nHere, we use the same notation for the representation of the symmetric\ngroup $\\SS_n$ with the Young diagram $\\lambda$ as for the associated Schur\nfunctor $\\Schur^\\lambda$.\n\nThe tensor product $R\\o\\L$ of a Lie $\\SS$-algebra $\\L$ with a commutative\nalgebra $R$ is again a Lie $\\SS$-algebra. For example, let $M$ be a\ndifferentiable manifold and let $\\Omega^\\bullet(M)$ be the differential\ngraded algebra of complex differential forms. 
The homology of differential\ngraded Lie $\\SS$-algebras is defined in a manner analogous to the\ndefinition of the homology of Lie $\\SS$-algebras, except that we must add\nto the Chevalley-Eilenberg differential $\\partial$ the internal differential $d$\nin defining the homology groups. Let $\\mathsf{F}(M,n)$ be the $n$th configuration\nspace of $M$, defined by\n$$\n\\mathsf{F}(M,n) = \\{ i : [n] \\to M \\mid \\text{$i$ is an embedding} \\} .\n$$\nLet $j(n):\\mathsf{F}(M,n)\\to M^n$ be the open embedding of the configuration\nspace. The resolution of the sheaf $j(n)_!j(n)^*\\mathbb{C}$ on $M^n$ constructed\nin \\cite{config2} may be identified with the twist of the\nChevalley-Eilenberg complex $\\mathsf{K}_\\bullet(\\Omega^\\bullet(M)\\o\\mathsf{Lie})(n)$ by the\nalternating character $\\varepsilon(n)$ of $\\SS_n$. This yields natural\nisomorphisms\n$$\nH^\\bullet(\\mathsf{F}(M,n),\\mathbb{C})[n] \\cong H_\\bullet(\\Omega^\\bullet(M)\\o\\mathsf{Lie})(n) \\o \\varepsilon(n)\n.\n$$\nIn particular, if $M$ is a compact manifold whose cohomology over $\\mathbb{C}$ is\nformal (such as a compact K\\\"ahler manifold), we see that\n$$\nH^\\bullet(\\mathsf{F}(M,n),\\mathbb{C})[n] \\cong H_\\bullet(H^\\bullet(M,\\mathbb{C})\\o\\mathsf{Lie})(n) \\o \\varepsilon(n) .\n$$\nThis reformulates a theorem of Totaro \\cite{Totaro}.\n\nAnother example of a Lie $\\SS$-algebra is associated to a symplectic vector\nspace $\\mathsf{H}$ with symplectic form $\\<-,-\\>$: set $\\L_\\mathsf{H}(1)=\\mathsf{H}$, and let\n$\\L_\\mathsf{H}(2)$ be the trivial representation $\\Schur^{(2)}$ of $\\SS_2$. The\nChevalley-Eilenberg complex of $\\L_\\mathsf{H}$ is familiar from Weyl's\nconstruction of the irreducible representations of the symplectic group\n$\\SP(\\mathsf{H})$: we have\n$$\n\\mathsf{K}_n(\\L_\\mathsf{H})(n+\\ell) = \\begin{cases}\n\\Ind_{\\SS_\\ell\\wr\\SS_2\\times\\SS_{n-\\ell}}^{\\SS_{n+\\ell}} \\Bigl( \\bigl(\n\\Schur^{(2)} \\bigr)^{\\o\\ell} \\o \\Schur^{(1^{n-\\ell})} \\Bigr) \\o\n\\mathsf{H}^{\\o(n-\\ell)} , & \\ell\\ge0 , \\\\ 0 , & \\ell<0 .\n\\end{cases}\n$$\nIn particular, $\\mathsf{K}_n(\\L_\\mathsf{H})(n)\\cong\\Schur^{(1^n)}\\o\\mathsf{H}^{\\o n}$, and\n$$\n\\mathsf{K}_n(\\L_\\mathsf{H})(n+1) \\cong \\sum_{1\\le i\n\\, e_1 \\o \\dots \\o \\widehat{e_i} \\o \\dots \\o \\widehat{e_j} \\o \\dots \\o e_n\n\\o x_{ij} .\n$$\nIf $\\Schur^{\\<\\lambda\\>}(\\mathsf{H})$ is the irreducible representation of\n$\\SP(\\mathsf{H})$ associated to the Young diagram $\\lambda$, it follows that\n$$\nH_n(\\L_\\mathsf{H})(n) \\cong \\sum_{|\\lambda|=n} \\Schur^{\\<\\lambda\\>}(\\mathsf{H}) \\o\n\\Schur^{\\lambda^*} .\n$$\nFor example, if $\\dim(\\mathsf{H})=2$, denoting the $k$th symmetric power\n$\\Schur^{\\}(\\mathsf{H})$ of $\\mathsf{H}$ by $\\mathsf{H}_k$, we have\n$$\n\\mathsf{K}_n(\\L_\\mathsf{H})(n) \\cong \\sum_{j=0}^{[\\frac{n}{2}]} \\mathsf{H}_{n-2j} \\o\n\\Schur^{(2^j,1^{n-2j})} ,\n$$\nand $H_n(\\L_\\mathsf{H})(n) \\cong \\mathsf{H}_n \\o \\Schur^{(1^n)}$.\n\n\\section{The Chevalley-Eilenberg complex of $\\L_\\mathsf{H}$}\n\nWe now turn to the closer study of the Chevalley-Eilenberg complex of the\nLie $\\SS$-algebra $\\L_\\mathsf{H}$. 
To this end, choose a basis $\\{e_a \\mid 1\\le\na\\le 2g \\}$ for $\\mathsf{H}$, with symplectic form\n$$\n\\=\\eta_{ab} .\n$$\nLet $\\eta^{ab}$ be the inverse matrix to $\\eta_{ab}$:\n$$\n\\sum_{b=1}^{2g} \\eta^{ab} \\eta_{bc} = \\delta^a_c .\n$$\n\nLet $V$ be a vector space with basis $\\{E_i\\mid 1\\le i\\le r\\}$; the\nsymmetric square $\\Schur^2(V)$ has basis $\\{E_{ij}=E_iE_j\\mid 1\\le i\\le j\n\\le r\\}$.\n\nThe nilpotent Lie algebra $\\L_\\mathsf{H}(V)=(\\mathsf{H}\\o V)\\oplus\\Schur^2(V)$ has centre\n$\\Schur^2(V)$, and the restriction of its Lie bracket to $\\mathsf{H}\\o V$ is\n$$\n[ e_a\\o E_i , e_b\\o E_j ] = \\eta_{ab} E_{ij} .\n$$\nThe Chevalley-Eilenberg complex of $\\L_\\mathsf{H}(V)$ is the graded vector space\n$\\Wedge^\\bullet(\\mathsf{H}\\o V) \\o \\Wedge^\\bullet(\\Schur^2(V))$. Denote by $\\varepsilon_i^a$\nthe operation of exterior multiplication by $e_a\\o E_i$ on this complex,\nand let $\\iota_a^i$ be its adjoint, characterized by the (graded)\ncommutation relations\n$$\n[\\iota_a^i,\\varepsilon_j^b] = \\delta_j^i \\delta^b_a .\n$$\nLet $\\varepsilon_{ij}=\\varepsilon_{ji}$ be the operation of exterior multiplication by\n$E_{ij}$ on the Chevalley-Eilenberg complex, and let $\\iota^{ij}$ be its\nadjoint, characterized by the commutation relations\n\\begin{equation} \\label{commute.ij}\n[\\iota^{ij},\\varepsilon_{kl}] = \\delta_k^i \\delta_l^j + \\delta_l^i \\delta_k^j .\n\\end{equation}\n\nThe differential $\\partial$ of the Chevalley-Eilenberg complex and its adjoint\n$\\partial^*$ are given by the formulas\n$$\n\\partial = \\tfrac12 \\sum_{i,j,a,b} \\eta^{ab} \\varepsilon_{ij} \\iota^i_a \\iota^j_b , \\quad\n\\partial^* = - \\tfrac12 \\sum_{i,j,a,b} \\eta_{ab} \\varepsilon^a_i \\varepsilon^b_j \\iota^{ij} .\n$$\nThe following theorem is the most powerful idea in the calculation of the\ncohomology of nilpotent Lie algebras.\n\\begin{theorem}[Kostant \\cite{Kostant}]\nThe kernel of the Laplacian $\\Delta=[\\partial^*,\\partial]$ on the Chevalley-Eilenberg\ncomplex is isomorphic to the homology of the Lie algebra $\\L_\\mathsf{H}(V)$.\n\\end{theorem}\n\nSigg \\cite{Sigg} has calculated the Laplacian $\\Delta$ for the free\n$2$-step nilpotent Lie algebra $\\mathsf{Lie}_2(V)=V\\oplus\\Wedge^2V$. 
Our\ncalculation is modelled on his, with some modifications brought on by the\nintroduction of the symplectic vector space $\\mathsf{H}$.\n\nThe complexity of our notation is reduced by adopting the Einstein\nsummation convention: indices $i,j,\\dots$ lie in the set $\\{1,\\dots,r\\}$,\nindices $a,b,\\dots$ in the set $\\{1,\\dots,2g\\}$, and we sum over repeated\npairs of indices if one is a subscript and one is a superscript.\n\\begin{lemma} \\label{Laplacian1}\n$\\Delta = \\varepsilon_{ij}\\varepsilon_k^a\\iota^i_a\\iota^{jk} - \\tfrac12 \\eta_{ab} \\eta^{cd}\n\\varepsilon_i^a \\varepsilon_j^b \\iota^i_c \\iota^j_d - g \\, \\varepsilon_{ij}\\iota^{ij}$\n\\end{lemma}\n\\begin{proof}\nWe have\n$$\n4 [\\partial^*,\\partial] = - [ \\eta_{ab} \\varepsilon^a_i \\varepsilon^b_j \\iota^{ij} , \\eta^{cd}\n\\varepsilon_{kl} \\iota^k_c \\iota^l_d] = - \\eta_{ab} \\eta^{cd} \\varepsilon_{kl} [ \\varepsilon^a_i\n\\varepsilon^b_j , \\iota^k_c \\iota^l_d ] \\iota^{ij} - \\eta_{ab} \\eta^{cd} \\varepsilon^a_i\n\\varepsilon^b_j [ \\iota^{ij} , \\varepsilon_{kl} ] \\iota^k_c \\iota^l_d .\n$$\nThe first term of the right-hand side is calculated as follows,\n\\begin{align*}\n- \\eta_{ab} \\eta^{cd} [ \\varepsilon^a_i \\varepsilon^b_j , \\iota^k_c \\iota^l_d ] &=\n- \\eta_{ab} \\eta^{cd} \\varepsilon^a_i [ \\varepsilon^b_j , \\iota^k_c \\iota^l_d ]\n- \\eta_{ab} \\eta^{cd} [ \\varepsilon^a_i , \\iota^k_c \\iota^l_d ] \\varepsilon^b_j \\\\\n&= \\delta^k_j \\varepsilon^a_i \\iota^l_a + \\delta^l_j \\varepsilon^a_i \\iota^k_a\n- \\delta^k_i \\iota^l_a \\varepsilon^a_j - \\delta^l_i \\iota^k_a \\varepsilon^a_j \\\\\n&= \\delta^k_j \\varepsilon^a_i \\iota^l_a + \\delta^l_j \\varepsilon^a_i \\iota^k_a\n+ \\delta^k_i \\varepsilon^a_j \\iota^l_a + \\delta^l_i \\varepsilon^a_j \\iota^k_a - 2g \\,\n\\delta^k_i \\delta^l_j - 2g \\, \\delta^l_i \\delta^k_j ,\n\\end{align*}\nwhile the second term is calculated by \\eqref{commute.ij}.\n\\end{proof}\n\n\\section{The Casimir operator of $\\GL(V)$}\n\nIf $V$ is a vector space with basis $\\{E_i\\mid 1\\le i\\le n\\}$, the Lie\nalgebra of $\\GL(V)$ has basis $\\{E_i^j\\mid 1\\le i,j\\le n\\}$, with\ncommutation relations\n$$\n[E_i^j,E_k^l] = \\delta^j_k E_i^l - \\delta_i^l E^j_k .\n$$\nThe centre of $\\GL(V)$ is spanned by $\\mathcal{D} = E_i^i$, and the Casimir\noperator is the element of the centre of $U(\\gl(V))$ given by the formula\n$$\n\\Delta_{\\GL(V)} = E_i^jE_j^i .\n$$\nLet $c_\\lambda$ be the eigenvalue of the Casimir operator $\\Delta_{\\GL(V)}$\non the representation $\\Schur^\\lambda(V)$ of $\\GL(V)$ with highest weight\nvector $\\lambda=(\\lambda_1,\\dots,\\lambda_r)$. 
Since the sum of the positive\nroots of $\\GL(V)$ equals $2\\rho = (2r-1,2r-3,\\dots,3-2r,1-2r)$, the theory\nof semisimple Lie algebras shows that, up to an overall factor,\n\\begin{equation} \\label{casimir}\nc_\\lambda = \\|\\lambda\\|^2 + 2(\\rho,\\lambda) = \\sum_{i=1}^r\n\\lambda_i(\\lambda_i+r-2i+1) .\n\\end{equation}\nTo see that this factor equals $1$, observe that on the fundamental\nrepresentation $V$, with highest weight $(1,0,\\dots,0)$, the Casimir has\neigenvalue $r$.\n\nGiven a Young diagram $\\lambda$, let\n$$\nn(\\lambda) = \\sum_{i\\ge1} (i-1)\\lambda_i = \\sum_{i\\ge1}\n\\binom{\\lambda_i^*}{2} .\n$$\n\\begin{lemma} \\label{Casimir}\n$c_\\lambda = r|\\lambda| + 2n(\\lambda^*) - 2n(\\lambda)\n= \\sum_{i=1}^\\infty \\lambda^*_i(r-\\lambda^*_i+2i-1)$\n\\end{lemma}\n\\begin{proof}\nThe proof follows from rearranging \\eqref{casimir}:\n$$\nc_\\lambda = r|\\lambda| + 2 \\sum_{i=1}^r \\binom{\\lambda_i}{2} - 2\n\\sum_{i=1}^r (i-1) \\lambda_i .\n$$\\def{}\n\\end{proof}\n\nRecall the dominance order on Young diagrams:\n$$\n\\text{$\\lambda\\ge\\mu$ if $|\\lambda|=|\\mu|$ and\n$\\lambda_1+\\dots+\\lambda_i\\ge\\mu_1+\\dots+\\mu_i$ for all $i\\ge1$.}\n$$\nIf $\\lambda\\ge\\mu$, then $\\mu^*\\ge\\lambda^*$ (Macdonald, I.1.11\n\\cite{Macdonald}).\n\\begin{corollary} \\label{dominant}\nIf $\\lambda\\ge\\mu$, then $c_\\lambda \\ge c_\\mu$, with equality only if\n$\\lambda=\\mu$.\n\\end{corollary}\n\\begin{proof}\nIf $\\lambda\\ge\\mu$, we have\n$$\nn(\\lambda) = \\sum_{i\\ge1} (i-1)\\lambda_i = \\sum_{i\\ge1} \\sum_{j>i}\n\\lambda_i = \\sum_{i\\ge1} \\Bigl( |\\lambda| - \\sum_{j=1}^i \\lambda_i \\Bigr)\n\\le \\sum_{i\\ge1} \\Bigl( |\\mu| - \\sum_{j=1}^i \\mu_i \\Bigr) = n(\\mu) .\n$$\nLikewise, $n(\\lambda^*)\\ge n(\\mu^*)$. In both cases, equality holds only if\n$\\lambda=\\mu$. The corollary now follows from Lemma~\\ref{Casimir}.\n\\end{proof}\n\n\\begin{corollary} \\label{Dominant}\nOn the tensor product $\\Schur^\\lambda(V)\\o\\Schur^\\mu(V)$, the Casimir\noperator $\\Delta_{\\GL(V)}$ is bounded above by\n$c_\\lambda+c_\\mu+2(\\lambda,\\mu)$, with equality only on\n$\\Schur^{\\lambda+\\mu}(V)\\hookrightarrow\\Schur^\\lambda(V)\\o\\Schur^\\mu(V)$.\n\\end{corollary}\n\\begin{proof}\nThere can only be a nonzero morphism\n$\\Schur^\\nu(V)\\hookrightarrow\\Schur^\\lambda(V)\\o\\Schur^\\mu(V)$ if\n$\\nu\\le\\lambda+\\mu$. It follows from Corollary \\ref{dominant} that\n$$\nc_\\nu \\le c_{\\lambda+\\mu} = \\|\\lambda+\\mu\\|^2 + 2 (\\rho,\\lambda+\\mu) =\n\\|\\lambda\\|^2 + 2 (\\rho,\\lambda) + \\|\\mu\\|^2 + 2 (\\rho,\\mu) +\n2(\\lambda,\\mu) .\n$$\\def{}\n\\end{proof}\n\n\\section{A formula for the Laplacian}\n\nIn this section, we prove the following explicit formula for the Laplacian\n$\\Delta$ on the Chevalley-Eilenberg complex $\\mathsf{K}_\\bullet(\\L_\\mathsf{H}(V))$.\n\\begin{theorem} \\label{main}\n$\\Delta = \\tfrac12 \\bigl( \\Delta_{\\SP(\\mathsf{H})} + \\Delta_{\\GL(V)} - (r+2g+1) \\mathcal{D}\n\\bigr)$\n\\end{theorem}\n\nTheorem \\ref{main} will follow by combining the results of\nLemmas~\\ref{Laplacian1}, \\ref{Laplacian2} and \\ref{Laplacian3}. 
The Lie\nalgebra of $\\GL(V)$ acts on $\\mathsf{K}_\\bullet(\\L_\\mathsf{H}(V))$ via the operations\n$$\nE_i^j = \\varepsilon_i^a \\iota^j_a + \\varepsilon_{ik} \\iota^{jk} .\n$$\nIt follows that $\\mathcal{D} = \\varepsilon_i^a \\iota^i_a + \\varepsilon_{ij} \\iota^{ij}$, while\nthe Casimir operator for $\\GL(V)$ acts on $\\mathsf{K}_\\bullet(\\L_\\mathsf{H}(V))$ as follows.\n\\begin{lemma} \\label{Laplacian2}\n$\\Delta_{\\GL(V)} = \\varepsilon_i^a\\varepsilon_j^b \\iota^i_b\\iota^j_a + 2 \\,\n\\varepsilon_{ij}\\varepsilon_k^a\\iota^i_a\\iota^{jk} + r \\varepsilon_i^a \\iota^i_a + (r+1)\n\\varepsilon_{ij} \\iota^{ij}$\n\\end{lemma}\n\\begin{proof}\nWe have\n\\begin{align*}\nE_i^jE_j^i & = ( \\varepsilon_i^a \\iota^j_a + \\varepsilon_{ik} \\iota^{jk} ) ( \\varepsilon_j^b\n\\iota^i_b + \\varepsilon_{jl} \\iota^{il} ) \\\\ &= \\varepsilon_i^a \\iota^j_a \\varepsilon_j^b\n\\iota^i_b + \\varepsilon_i^a \\iota^j_a \\varepsilon_{jl} \\iota^{il} + \\varepsilon_{ik} \\iota^{jk}\n\\varepsilon_j^b \\iota^i_b + \\varepsilon_{ik} \\iota^{jk} \\varepsilon_{jl} \\iota^{il} \\\\ &= -\n\\varepsilon_i^a \\varepsilon_j^b \\iota^j_a \\iota^i_b + r \\varepsilon_i^a \\iota^i_a + \\varepsilon_{jl}\n\\varepsilon_i^a \\iota^j_a \\iota^{il} + \\varepsilon_{ik} \\varepsilon_j^b \\iota^i_b \\iota^{jk} -\n\\varepsilon_{ik} \\varepsilon_{jl} \\iota^{jk} \\iota^{il} + (r+1) \\varepsilon_{ij} \\iota^{ij} .\n\\end{align*}\nThe (a)symmetries of $\\varepsilon_{ik} \\varepsilon_{jl} \\iota^{jk} \\iota^{il}$ force it\nto vanish, and the result follows.\n\\end{proof}\n\nThe Lie algebra of $\\GL(\\mathsf{H})$ acts on the Chevalley-Eilenberg complex of\n$\\L_\\mathsf{H}(V)$ by the operators\n$$\n\\{ e^a_b = \\varepsilon_i^a \\iota^i_b \\mid 1\\le a,b\\le 2g \\} ,\n$$\nand the Lie subalgebra $\\SP(\\mathsf{H})\\subset\\GL(\\mathsf{H})$ is spanned by the\noperators\n$$\n\\{ e_{ab}+e_{ba} \\mid 1\\le a\\le b\\le 2g \\} ,\n$$\nwhere $e_{ab}=\\eta_{ac}e^c_b$. The Casimir operator of $\\SP(\\mathsf{H})$ is given\nby the formula\n$$\n\\Delta_{\\SP(\\mathsf{H})} =- \\tfrac12 \\eta^{ac} \\eta^{bd} \\bigl( e_{ab} + e_{ba}\n\\bigr) \\bigl( e_{cd} + e_{dc} \\bigr) = - \\eta^{ac} \\eta^{bd} e_{ab} e_{cd}\n- \\eta^{ac} \\eta^{bd} e_{ab} e_{dc} .\n$$\n\\begin{lemma} \\label{Laplacian3}\n$\\Delta_{\\SP(\\mathsf{H})} = -\\varepsilon_i^a\\varepsilon_j^b\\iota^i_b\\iota^j_a\n-\\eta_{ab}\\eta^{cd}\\varepsilon_i^a\\varepsilon_j^b\\iota^i_c\\iota^j_d +\n(2g+1)\\varepsilon_i^a\\iota^i_a$\n\\end{lemma}\n\\begin{proof}\nWe have\n\\begin{align*}\n\\eta^{ac} \\eta^{bd} e_{ab} e_{cd} &= \\eta^{ac} \\eta^{bd} \\eta_{aa'}\n\\eta_{cc'} \\varepsilon_i^{a'} \\iota^i_b \\varepsilon_j^{c'} \\iota^j_d = - \\eta_{ac}\n\\eta^{bd} \\varepsilon_i^a \\iota^i_b \\varepsilon_j^c \\iota^j_d = \\eta_{ac} \\eta^{bd}\n\\varepsilon_i^a \\varepsilon_j^c \\iota^i_b \\iota^j_d - \\varepsilon_i^a \\iota^i_a \\\\\n\\eta^{ac} \\eta^{bd} e_{ab} e_{dc} &= \\eta^{ac} \\eta^{bd} \\eta_{aa'}\n\\eta_{dd'} \\varepsilon_i^{a'} \\iota^i_b \\varepsilon_j^{d'} \\iota^j_c = - \\varepsilon_i^a\n\\iota^i_b \\varepsilon_j^b \\iota^j_a = \\varepsilon_i^a \\varepsilon_j^b \\iota^i_b \\iota^j_a - 2g\n\\varepsilon_i^a \\iota^i_a .\n\\end{align*}\n\\def{}\n\\end{proof}\n\n\\section{The case $g=1$}\n\nIn this section, we apply our results in the special case $g=1$, in which\nthe symplectic vector space $\\mathsf{H}$ is two-dimensional. 
Recall Frobenius's\nnotation for partitions: if $\\alpha_1>\\dots>\\alpha_d\\ge0$ and\n$\\beta_1>\\dots>\\beta_d\\ge0$,\n$$\n(\\alpha_1,\\dots,\\alpha_d|\\beta_1,\\dots,\\beta_d)\n$$\nis the partition of $\\alpha_1+\\dots+\\alpha_d+\\beta_1+\\dots+\\beta_d+d$ whose\n$i$th part equals $\\alpha_i+i$ for $i\\le d$, and $\\sup\\{j \\mid \\beta_j+j\n\\ge i\\}$ for $i>d$. For example, $(\\alpha|\\beta)$ corresponds to the hook\n$(\\alpha+1,1^\\beta)$, while $(d-1,d-2,\\dots,1,0|d-1,d-2,\\dots,1,0)$ is the\npartition $(d^d)$.\n\\begin{definition}\nLet $\\mathcal{P}_\\ell$ be the set of partitions of $2\\ell$ of the form\n$(\\alpha_1+1,\\dots,\\alpha_d+1|\\alpha_1,\\dots,\\alpha_d)$; thus\n$\\alpha_1+\\dots+\\alpha_d+d = \\ell$ and $\\alpha_1>\\dots>\\alpha_d\\ge0$.\n\\end{definition}\n\nThe following plethysm is Ex.\\ I.5.10 of Macdonald \\cite{Macdonald}:\n\\begin{equation} \\label{plethysm}\n\\Schur^{(1^\\ell)} \\circ \\Schur^{(2)} = \\sum_{\\lambda\\in\\mathcal{P}_\\ell}\n\\Schur^\\lambda .\n\\end{equation}\n\n\\begin{theorem} \\label{limit}\nThe cohomology group $H_n(\\L_\\mathsf{H})(n+\\ell)$ is zero except in the following\ncases:\n\\begin{enumerate}\n\\item $\\ell=0$ and $n\\ge0$, in which case $H_n(\\L_\\mathsf{H})(n) \\cong \\mathsf{H}_n \\o\n\\Schur^{(1^n)}$;\n\\item $\\ell>0$ and $n\\ge\\ell+2$.\n\\end{enumerate}\nIf $\\ell>0$ and $n\\ge2\\ell+2$, we have\n$\\displaystyle\nH_n(\\L_\\mathsf{H})(n+\\ell) \\cong\n\\sum_{\\substack{\\lambda\\in\\mathcal{P}_\\ell\\\\n\\ge\\ell+\\alpha_1+1}} \\mathsf{H}_{n-\\ell} \\o\n\\Schur^{(1^{n-\\ell})+\\lambda}$.\n\\end{theorem}\n\\begin{proof}\nThe Chevalley-Eilenberg complex of $\\L_\\mathsf{H}(V)$ is bigraded, $\\mathsf{K}_{k,\\ell} =\n\\Wedge^k(\\mathsf{H}\\o V)\\o\\Wedge^\\ell\\bigl(\\Schur^2(V)\\bigr)$, and since the\ndifferential $\\partial$ is homogeneous of bidegree $(-2,1)$, the homology is also\nbigraded. In terms of this bigrading, we wish to calculate\n$H_{n-\\ell,\\ell}(\\L_\\mathsf{H})$; evidently, this vanishes unless $n\\ge\\ell$.\n\nThe plethysm \\eqref{plethysm} implies that\n$$\n\\mathsf{K}_{k,\\ell}(\\L_\\mathsf{H})(n) = \\sum_{j=0}^{[\\frac{k}{2}]}\n\\sum_{\\lambda\\in\\mathcal{P}_\\ell} \\mathsf{H}_{k-2j} \\o \\Schur^{(2^j1^{k-2j})} \\o\n\\Schur^\\lambda .\n$$\nWe will derive a lower bound for the Laplacian $\\Delta$ on each summand.\n\nGiven a partition $\\lambda\\in\\mathcal{P}_\\ell$, we calculate that $c_\\lambda =\n2\\ell r + 2\\sum_{i=1}^d (\\alpha_i + 1) = 2(r+1)\\ell$ and\n$$\n(2^j1^{k-j},\\lambda) \\le \\sum_{i=1}^j (\\alpha_i+i+1) + 2\\ell \\le 3\\ell +\n\\tbinom{j+1}{2} .\n$$\nOn the summand $\\mathsf{H}_{k-2j} \\o \\Schur^{(2^j1^{k-2j})} \\o \\Schur^\\lambda$, we\nhave $(r+3)\\mathcal{D}=(r+3)(k+2\\ell)$,\n\\begin{gather*}\n\\tfrac12 \\Delta_{\\GL(V)} \\le \\tfrac12 c_{(2^j,1^{k-2j})} + \\tfrac12 c_\\lambda +\n(2^j1^{k-2j},\\lambda) \\le \\tfrac12 c_{(2^j,1^{k-2j})} + (r+3)\\ell + \\ell +\n\\tbinom{j+1}{2} , \\quad \\text{and} \\\\\n\\begin{aligned}\n\\Delta_{\\SP(\\mathsf{H})} + c_{(2^j1^{k-2j})} &= \\bigl\\{ (k-2j)^2 + 2(k-2j) \\bigr\\}\n+ \\bigl\\{ (k-j)(r-(k-j)+1) + j(r-j+3) \\bigr\\} \\\\ &= \\tfrac12 (r+3)k - j(k-j+1)\n.\n\\end{aligned}\n\\end{gather*}\nCombining all of these ingredients, we see that $\\Delta \\ge\nj\\bigl(k-\\tfrac{3}{2}j+\\tfrac{1}{2}\\bigr) - \\ell$. If $j>0$, the right-hand\nside is bounded below by $k-\\ell-1$; unless $k\\ge2$ and $k\\le\\ell+1$, our\nsummand does not contribute to $H_n(\\L_\\mathsf{H})(n+\\ell)$. 
Equivalently,\n$n=k+\\ell$ must lie in the interval $[\\ell+2,2\\ell+2]$.\n\nIt remains to consider the summands of $\\mathsf{K}_{k,\\ell}$ with $j=0$; these have\nthe form\n$$\n\\mathsf{H}_k \\o \\sum_{\\lambda\\in\\mathcal{P}_\\ell} \\Schur^{(1^k)} \\o \\Schur^\\lambda .\n$$\nOn the summand $\\mathsf{H}_k\\o\\Schur^{(1^k)+\\lambda}$ of\n$\\mathsf{H}_k\\o\\Schur^{(1^k)}\\o\\Schur^\\lambda$, the operator\n$\\Delta_{\\SP(\\mathsf{H})}+\\Delta_{\\GL(V)}$ equals\n\\begin{align*}\nk(k+2) + c_{(1^k)} + c_\\lambda + 2(1^k,\\lambda) &= k(k+2) + k(r-k+1) +\n2\\ell(r+1) + 2\\sum_{i=1}^k \\lambda_i \\\\ &= (k+2\\ell)(r+3) -\n\\sum_{i=k+1}^{\\alpha_1+1} \\lambda_i ,\n\\end{align*}\nwhile on all other irreducible components of\n$\\mathsf{H}_k\\o\\Schur^{(1^k)}\\o\\Schur^\\lambda$, it is strictly less. It follows\nthat the Laplacian can only vanish on the summand\n$\\mathsf{H}_k\\o\\Schur^{(1^k)+\\lambda}$, and only at that when $k\\ge\\alpha_1+1$.\n\\end{proof}\n\nThe following formula illustrates the behaviour of $H_n(\\L_\\mathsf{H})(n+\\ell)$\nwhen $n\\in[\\ell+2,2\\ell+1]$\n\\begin{proposition}\n$$\nH_n(\\L_\\mathsf{H})(n+1) \\cong \\Bigl( \\mathsf{H}_{n-1} \\o \\Schur^{(3,1^{n-2})} \\Bigr)\n\\oplus \\begin{cases} \\mathsf{H}_0 \\o \\Schur^{(4)} , & n=3 , \\\\ 0 , & n\\ne3 .\n\\end{cases}\n$$\n\\end{proposition}\n\\begin{proof}\nPieri's formula shows that\n\\begin{align*}\n\\mathsf{K}_{n-1,1} &\\cong \\sum_{j=1}^{[\\frac{n+1}{2}]} \\mathsf{H}_{n-2j+1} \\o\n\\Schur^{(2^{j-1},1^{n-2j+1})} \\o \\Schur^{(2)} \\\\ &\\cong\n\\sum_{j=1}^{[\\frac{n+1}{2}]} \\mathsf{H}_{n-2j+1} \\o \\Schur^{(2^j,1^{n-2j+1})}\n\\oplus \\sum_{j=1}^{[\\frac{n}{2}]} \\mathsf{H}_{n-2j+1} \\o\n\\Schur^{(3,2^{j-1},1^{n-2j})} \\\\ & \\quad \\oplus\n\\sum_{j=2}^{[\\frac{n+1}{2}]} \\mathsf{H}_{n-2j+1} \\o\n\\Schur^{(3,2^{j-2},1^{n-2j+2})} \\oplus \\sum_{j=2}^{[\\frac{n+1}{2}]}\n\\mathsf{H}_{n-2j+1} \\o \\Schur^{(4,2^{j-2},1^{n-2j+1})} .\n\\end{align*}\nOn these four summands, the operator $\\Delta$ equals $j(n-j+2)$,\n$j(n-j+3)-n-2$, $j(n-j+1)$ and $j(n-j+2)-n-3$, respectively. Thus, the only\nsummands on which $\\Delta$ vanishes are $\\mathsf{H}_{n-1}\\o\\Schur^{(3,1^{n-2})}$,\nand $\\mathsf{H}_0\\o\\Schur^{(4)}$.\n\\end{proof}\n\nThe same method may be used in the case $\\ell=2$: we obtain\n$$\nH_n(\\L_\\mathsf{H})(n+2) \\cong \\Bigl( \\mathsf{H}_{n-2} \\o \\Schur^{(4,2,1^{n-4})} \\Bigr)\n\\oplus \\begin{cases} \\mathsf{H}_1 \\o \\Schur^{(5,2)} , & n=5 , \\\\ 0 , & n\\ne5\n. \\end{cases}\n$$\nOur search for a formula for $H_n(\\L_\\mathsf{H})(n+\\ell)$ for all $\\ell$ has been\nfruitless; nevertheless, it might be of interest to find one.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSoftware Requirements Specifications (SRS) provide detailed information on the functionality and features of a software project. \nThey also help evaluate compliance with the standards and requirements laid down at the project initiation. \nSRS data is usually created in natural language and serves the purpose of enabling communication between the teams collaborating on the project. \nSRS language is subject to a certain level of subjectivity that might lead to misinterpretation, contradiction, ambiguity and redundancy. 
Automation brought by Natural Language Processing (NLP) methods can help avoid these issues in an effective manner.\n\nNLP methods can be used to facilitate different tasks in requirement engineering including classification and extraction of requirements, which can lead to substantial savings in terms of time and effort spent on these tasks. \nThe research in this field has benefited from recent developments in NLP for a variety of tasks such as extractive summarization, tagging, and entity recognition~\\citep{kicibert}. \nThese recent developments in NLP methodology can be attributed to transfer learning that has become the state-of-the-art approach for generic language modeling~\\citep{ruder2019transfer}. Most transfer learning approaches rely on sequential learning, which has been a common practice for NLP due to its simplicity. \nMany NLP tasks are gradually achieved in sequential order as the generalization capacity of a language model increases with sequential learning that enables leveraging the information obtained from the previous task to achieve better performance on further tasks. \n\nMost studies in the literature rely on a two-stage transfer learning regimen: pre-training and fine-tuning. \nOne main problem with the two-stage regimen is that the distribution of source data may differ from the target data. \nAs such, it can be highly beneficial to adapt a model from a training distribution to the different distribution of the target task. \nAdaptive fine-tuning can provide a remedy for this problem by utilizing the unlabeled textual data in the target domain.\nAccordingly, the two-stage regimen can be extended to three or more stages where additional training can be performed after the pre-training step. \nSuch additional training is also known as adaptive pre-training (APT), pre-finetuning (PFT), and continual pre-training (CPT). \n\n\\paragraph{Research goals and scope} \nIn this study, we focus on leveraging transfer learning methods to improve text classification performance over SRS data. \nSpecifically, we design an adaptive fine-tuning framework that makes use of abundantly available unlabeled data in the target domain.\nWe conduct our analysis with the SRS data obtained from IBM Rational DOORS Next Generation product~\\citep{doors}.\nThis product serves as a requirement management tool for optimizing collaboration and communication between software development teams to improve the information flow within a project. \nDOORS dataset is used for three different classification tasks, namely, predicting \\textit{Priority}, \\textit{Severity}, and \\textit{Type} of a requirement.\n\n\\paragraph{Contribution}\nStandard transfer learning is typically performed in two stages. \nThe first stage includes learning from source data, while the second stage is fine-tuning source knowledge for the solution of the main target task. \nHowever, the assumption that source and target share a similar distribution may not hold in real-world problems since the distribution of the target can differ from that of source data over time or across domains, which is called Out-of-Distribution (OOD) problem~\\citep{hendrycks2020pretrained}. \nThe solution is to adapt a model from the source distribution to the distribution of the target task. 
\nIn this study, we show how to apply adaptation and leverage the three-stage domain-adaptive fine-tuning for three multi-class classification tasks over a software requirement dataset, DOORS.\nThe main contributions of our study are summarized as follows:\n\n\\begin{itemize}\\setlength\\itemsep{0.3em}\n \\item We create a strong baseline for our analysis by utilizing word embeddings and Sentence BERT as feature extraction models where the model parameters are frozen and given as input to the linear classifier.\n \n \n \n Additionally, we utilize two- and three-stage fine-tuning mechanisms for the classification task where the base (e.g., BERT-base and RoBERTa-base), large (e.g., BERT-large and RoBERTa-large) and distilled versions of the Transformer models are leveraged.\n \n \n \\item We make use of the additional unlabeled in-domain data to further pre-train Transformer model checkpoints on in-domain data. \n It is an additional \\textit{adaptive fine-tuning} step for improving text classification performance. \n These adaptively trained models are then fine-tuned again for three downstream tasks.\n Designing an adaptive fine-tuning mechanism that provides a performance boost constitutes the main novelty of this work along with the novel application problem.\n \n \\item We conduct an extensive empirical study, and provide a detailed analysis of the performances of various models for the SRS datasets.\n \n These analyses contribute to the understanding of the strengths and limitations of state-of-the-art text classification methods for important practical problems in software engineering.\n\\end{itemize}\n\n\\paragraph{Organization of the paper}\nThe rest of the paper is organized as follows.\nIn Section~\\ref{sec:litreview}, we briefly review the most relevant studies to our work, with a special focus on NLP-based automation in software engineering and methodological developments in transfer learning.\nWe provide a discussion on the employed methods in Section~\\ref{sec:methodology}, along with a discussion on our dataset and a review of Transformer models and domain adaptation.\nIn Section~\\ref{sec:results}, we discuss the results from our numerical study, and we conclude the paper in Section~\\ref{sec:conclusion} with a summary of our findings, and a discussion on study limitations and future research directions.\n\n\\section{Literature review}\\label{sec:litreview}\n\nIn the literature, SRS data have been analyzed using various text classification techniques such as document classification or sentiment analysis, to enhance the software development process via high-quality predictions.\nIn earlier studies, traditional machine learning models were successfully used for such tasks. \nHussain et al.~\\citep{hussain2007using} applied document classification in a traditional machine learning pipeline to detect ambiguities in the SRS documents. \nThey trained decision trees to map the SRS document passages to ambiguous or unambiguous labels, which is a binary classification task. \nZhang et al.~\\citep{zhang2013extracting} focused on extracting problematic API designs using sentiment analysis.\nThey extracted useful information from online resources that contain huge amounts of unstructured data, such as bug reports and some online discussions. \nHou et al.~\\citep{hou2013content} used Naive Bayes models to categorize API discussions based on their content. 
\nThere have been various machine learning applications over the software requirement datasets.\nFor instance, Asabadi et al.~\\citep{asadabadi2020ambiguous} employed fuzzy set theory to identify ambiguous SRS statements by using an ambiguous terms list. Apart from machine learning methods, rule-based approaches have also been employed for the classification tasks in the software engineering domain such as requirement classification~\\citep{singh2016rule}.\nIn recent years, deep learning architectures have been frequently employed for text analytics tasks in requirement engineering. Navarro et al.~\\citep{navarro2017towards} applied Convolutional Neural Network (CNN) models to classify software requirements without using handcrafted features.\nOnyeka et al.~\\citep{onyeka2019identifying} utilized commonsense knowledge ontology for implicit requirements framework where a CNN-based auto-encoder identifies implicit requirements from tables and images in large SRS documents.\n\nThe most recent NLP-related works in the field of software engineering and requirement engineering have been shaped by transfer learning. \nThe pre-trained language models (PLM), specifically the encoder part of the Transformers, have been the best models to take advantage of transfer learning. \nIn this regard, Bidirectional Encoder Representations from Transformers (BERT)~\\citep{bert} has proven to be useful, especially for text and token classification problems such as sentiment analysis and named entity recognition. \nMany variants of BERT have recently emerged and they are used in software analytics including the classification tasks involving SRS data~\\citep{kicibert, hey2020norbert,sainani2020extracting}. \nHey et al.~\\citep{hey2020norbert} fine-tuned BERT on specific tasks where the model predicts if a requirement is non-functional or functional. \nIn another study, Sainani et al.~\\citep{sainani2020extracting} employed BERT in a different way such that they first extract requirements from large software engineering contracts and then classify those. \nTheir approach is based on the idea that business contracts could help in the identification of high-level requirements in order to improve the performance of software engineering projects.\nKici et al.~\\citep{kicibert} fine-tuned various PLMs for different SRS tasks. \nThey used different BERT models and the variants for three different datasets to test out the generalizability of their results.\n\nAll these aforementioned Transformer models have been implemented using a two-stage straightforward fine-tuning process. Specifically, they solved downstream tasks without domain-specific adaptation just by training the pre-trained model checkpoints with labeled data. \nHowever, it is possible to increase the transfer learning capacity by adaptive fine-tuning using unlabeled data that is easy to obtain for most problems. Adaptive fine-tuning is a domain adaptation technique to find a way of applying a pre-trained model trained on general-purpose source data to a different target domain~\\citep{ramponi2020neural}. However, the source and target domain can have different distributions such as vocabulary distribution. Some studies~\\citep{Gururangan2020, diao2021taming} showed that adaptation takes care of the effects of the mismatch between the previous source distribution and the target distribution. 
\nThe domain-specific models such as BioBERT~\\citep{Lee2019}, SciBERT~\\citep{beltagy2019scibert}, HateBERT~\\citep{Caselli2020}, ClinicalBERT~\\citep{Alsentzer2019}, NetBERT~\\citep{Louis2020}, MathBERT~\\citep{Peng2021}, News~\\citep{Gururangan2020} and GraphCodeBERT~\\citep{Guo2020} have started to appear frequently in the literature. \nAll these models or checkpoints aim to build a better model for cases where the distribution of the target domain differs from that of the source domain. \nSome studies investigated the effect of the masking rate in the Transformer architecture. \nA masking rate of 15\\% has mostly been regarded as an efficient default during pre-training of MLMs~\\citep{Clark2020}. \nOn the other hand, \\citet{Wettig2022} suggested that masking around 40\\% of input tokens instead of 15\\% can yield better downstream task performance.\n\nTo the best of our knowledge, there is no other study that focuses on domain adaptation for software requirements. \nIn our study, we address this research gap in the software engineering area and apply adaptive fine-tuning over software requirement data to improve text classification performance for specific tasks.\n\n\\section{Methodology}\\label{sec:methodology}\nIn deep learning models, the trained parameters of the neural network architecture can be stored for future use based on two types of transfer: feature extraction and fine-tuning. \nThe parameters obtained in the former are kept fixed, as in word embeddings, and are not subject to gradient descent, whereas the parameters obtained in the latter are subject to change, as in the BERT model.\nIn this section, we explain how we employed feature extraction, briefly discuss the standard two-stage training process, and elaborate on the adaptive fine-tuning framework.\n\n\\subsection{DOORS dataset}\nOur dataset consists of software requirement specifications collected using IBM's Dynamic Object-Oriented Requirements System Next Generation (DOORS) product. \nThe original dataset contains 83,837 documents.\nWe consider the \\textit{Summary} of the requirements specifications and classify it with respect to three separate categories: \\textit{Priority}, \\textit{Severity}, and \\textit{Type}.
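\n\nFor illustration only, assembling the per-task classification inputs from such an export could be sketched as follows; the file name and column names are hypothetical placeholders, since the proprietary export format is not reproduced here.\n\\begin{verbatim}
import pandas as pd

# Hypothetical export of the DOORS records; column names are placeholders.
df = pd.read_csv("doors_export.csv")

# One text classification dataset per task: the Summary field is the input,
# and Priority, Severity, or Type is the target label.
tasks = {}
for label in ["Priority", "Severity", "Type"]:
    subset = df[["Summary", label]].dropna()   # drop records without a label
    tasks[label] = list(zip(subset["Summary"], subset[label]))

print({name: len(pairs) for name, pairs in tasks.items()})
\\end{verbatim}\n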
\nIn this regard, the DOORS dataset can be considered as a task management dataset, and differs from other SRS datasets in the literature (e.g., see NFR-PROMISE~\\citep{hey2020norbert}), which are typically used for requirement type classification (e.g., functional vs nonfunctional).\nTable~\\ref{tab:dataSample} provides data instances from the DOORS dataset.\n\n\\setlength{\\tabcolsep}{6pt}\n\\renewcommand{\\arraystretch}{1.35}\n\\begin{table}[!ht]\n\\centering\n\\caption{DOORS data samples}\\label{tab:dataSample}\n\\resizebox{1.01\\linewidth}{!}{\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\textbf{Summary} & \\textbf{Type} & \\textbf{Priority} & \\textbf{Severity} \\\\\n\\hline\nDefault selection by creating a new link must be MODULE and NOT FOLDER & Enhancement & Medium & Major \\\\\n\\hline\nProvide easy mechanism to update server and test server rename & Task & Medium & Normal\\\\\n\\hline\nAs a developer, I should be able to implement support for large lists of Longs in view queries & Story & Unassigned & Normal\\\\\n\\hline\nExport artifact to a PDF file is not working & Defect & High & Major\\\\\n\\hline\nAll Projects label should be updated in the component selection page & Enhancement & Unassigned & Minor\\\\\n\\hline\nInvestigate potential corrupt data involving wrapper resources on self host & Issue & Low & Normal\\\\\n\\hline\n\\end{tabular}\n}\n\\end{table}\n\nTable~\\ref{tab:eda1} shows the distribution of the classes for each category label.\nThere exist four classes in the \\textit{Priority} (\\textit{unassigned, high, medium, and low}), and six classes in \\textit{Severity} (\\textit{normal, major, minor, blocker, critical, and undecided}). \n\\textit{Type} category originally consisted of 20 different classes. \nWith some merging and pre-processing operations, we obtained seven classes for the \\textit{Type} category. \nIn addition, we eliminated `nan' values, which correspond to missing values both for \\textit{Severity} and \\textit{Priority} categories.\nWe note that while the Type category's classes are somewhat fairly distributed, the others show severe data imbalance. \n\n\\begin{table}[!ht]\n\\centering\n\\caption{Class distribution for each category label}\n\\label{tab:eda1}\n \\subfloat[Priority \\label{tab:eda1_priority}]{\n \\resizebox{0.325\\textwidth}{!}{\n \\begin{tabular}{|l|r|r|}\n \\hline\n \\textbf{Class} & \\textbf{Count} & \\textbf{Perc. (\\%)} \\\\ \\hline\n Unassigned & 11,443 & 68.97 \\\\ \\hline\n High & 2,677 & 16.13 \\\\ \\hline\n Medium & 1,990 & 11.99 \\\\ \\hline\n Low & 480 & 2.89 \\\\ \\hline\n \\end{tabular}\n }\n \n \\subfloat[Severity \\label{tab:eda1_severity}] {\n \\resizebox{0.325\\textwidth}{!}{\n \\begin{tabular}{|l|r|r|}\n \\hline\n \\textbf{Class} & \\textbf{Count} & \\textbf{Perc. (\\%)} \\\\\n \\hline\n Normal & 14,348 & 86.48 \\\\\n \\hline\n Major & 1,127 & 6.79 \\\\\n \\hline\n Undecided & 478 & 2.88 \\\\\n \\hline\n Minor & 284 & 1.71 \\\\\n \\hline\n Critical & 245 & 1.47 \\\\\n \\hline\n Blocker & 108 & 0.65 \\\\\n \\hline\n \\end{tabular}\n }\n \n \\subfloat[Type \\label{tab:eda1_type}] {\n \\resizebox{0.325\\textwidth}{!}{\n \\begin{tabular}{|l|r|r|}\n \\hline\n \\textbf{Class} & \\textbf{Count} & \\textbf{Perc. 
(\\%)} \\\\\n \\hline\n Enhancement & 5,026 & 30.29 \\\\\n \\hline\n Story & 4,565 & 27.50 \\\\\n \\hline\n Maintenance & 2,180 & 13.14 \\\\\n \\hline\n Other & 1,774 & 10.69 \\\\\n \\hline\n Test Task & 1,615 & 9.73 \\\\\n \\hline\n Plan Item & 885 & 5.33 \\\\\n \\hline\n JUnit & 545 & 3.28 \\\\\n \\hline\n \\end{tabular}\n }\n }\n\\end{table}\n\nHaving applied the pre-processing steps, the total number of remaining instances is 16,590, where the text length is on average 13.1 words with a median value of 12.\nWe provide the box plot of text lengths by class for the Type label as a representative case in Figure~\\ref{fig:eda_boxplot_type}.\nWe find that text length distribution over the classes provides some information for the Type label (e.g., Story class has the longest text and JUnit has the shortest).\nHowever, text length distribution across the classes is more uniform for Priority and Severity labels.\n\n\\begin{figure}[!ht]\n \\centering\n\\includegraphics[width=0.925\\textwidth]{BoxPlotType.png}\n \\caption{Text length box plot across Type label classes in the DOORS dataset}\n \\label{fig:eda_boxplot_type}\n\\end{figure}\n\n\n\\subsection{Feature extraction methods}\nLearning text representations through averaging fixed vectors of the words has been found to be a simple but effective method in previous works. \nThe word embedding techniques such as FastText~\\citep{fasttext}, Glove~\\citep{glove} and Word2vec~\\citep{word2vec} can provide strong static word vectors that are kept for a different purpose in a generic machine learning pipeline, called \\textit{feature extraction}. \nThat is, word vectors (or document vectors) are kept frozen, and provided as input to a linear classifier or a clustering algorithm. \n\nSuch document representations based on averaging static word vectors have become surprisingly strong baselines. \nIn some cases, these approaches can achieve higher performance than more complex models~\\citep{wieting2015towards}. \nMoreover, they provide a basis for whether there is a pattern in the data. \nFor instance, Cer et al.~\\citep{use} proposed an averaging-based model, called the DAN network, and showed that their model can achieve similar performance to those of more complex architectures.\nOn the other hand, the averaging method has certain limitations as well, since it is based on frozen word embeddings. \nOther than averaging, different sentence-level approaches have been proposed, which are based on feature extraction for text representation such as Doc2vec \\citep{doc2vec}, Sentence BERT \\citep{sbert}, and Skip Thought \\citep{kiros2015skip}. \n\n\nIn our experiments, we employ two feature extraction methods as strong baselines: Word Embedding Pooling and Sentence BERT. \nWe describe the settings for these two approaches as follows.\n\n\\begin{itemize}\\setlength\\itemsep{0.3em} \n\\item \\textit{Word Embedding Pooling}: In Word Embedding Pooling, we simply take an average of word embeddings in a sentence where FastText and Glove pre-trained vectors are obtained and concatenated. \nThe length of the final vector is 400. \nA linear layer is added on top of it, and the model is trained in a supervised pipeline. \n\n\\item \\textit{SBERT}: Sentence BERT embeddings have been found to be a highly efficient way of text representation, especially for semantic problems such as sentence similarity or semantic search. 
\nThe main motivation behind SBERT approach is that a BERT model is not suitable for the standalone usage of word or sentence vectors since it requires end-to-end fine-tuning for a downstream task. Based on Siamese network structures, SBERT can produce semantically meaningful and independent embeddings of the sentences, making it a suitable candidate for a baseline method. \n\\end{itemize}\n\n\\subsection{Two-stage training: Pre-training and fine-tuning}\\label{sec:twostageTraining}\nTo fine-tune a pre-trained source model with the parameters $\\Theta_{S}$ to a different target task, task-specific parameters $\\Theta_{T}$ (or layers) are designed and added to $\\Theta_{S}$.\nFor feature extraction, as applied mostly with word embeddings, $\\Theta_{S}$ is frozen, and only $\\Theta_{T}$ parameters are updated for a given task. \nFor fine-tuning, as applied in the ELMo~\\citep{elmo} and the BERT models~\\citep{bert}, the entire set of parameters, $\\Theta_{S} \\cup \\Theta_{T}$, is updated, mostly in an end-to-end fashion. \nDuring fine-tuning, the learning rate is typically set to a smaller value than the one in pre-training settings in order not to update the main model too frequently. \nThis is due to the observation that most syntactic and semantic information has been already encoded in the pre-trained models. \nOn the other hand, an important issue is the difference in the distribution of the data between the target and the source. \nFor instance, a word that does not appear in the pre-training phase but appears in the fine-tuning phase leads to an out-of-vocabulary problem~\\citep{hendrycks2020pretrained}. \n\nELMo~\\citep{elmo} and Transformer models~\\citep{attention} have made great contributions to natural language understanding and generation problems. \nTheir success can be attributed to the fact that these architectures encode information by distributing it into layers in deep networks, rather than encoding it in a simple one-dimensional vector. \nThey have been successfully utilized in the fine-tuning phase, and are capable of transferring the knowledge obtained at previous training phases.\nRecently, Transformer architectures~\\citep{attention} have shown promising results for many tasks due to utilizing the self-attention mechanism. \nOne of the most important advantages of Transformer architectures is that they can create reliable pre-trained models in a parallelizable architecture, making transfer learning faster and more effective. \nIn addition to being very suitable for domain adaptation, the Transformer models are also highly robust to OOD samples~\\citep{hendrycks2020pretrained}.\n\nContrary to word embedding-based averaging models, a Transformer model is trained in an end-to-end fashion. \nA thin layer, $\\Theta_{T}$, is added on top of a pre-trained source model, $\\Theta_{S}$, and the entire architecture, $\\Theta_{S} \\cup \\Theta_{T}$, is trained as a whole, called fine-tuning. \nIn some cases, it would be computationally expensive to fine-tune the entire architecture. \nSome approaches in this line have been developed by utilizing adapters~\\citep{adapter,rebuffi2017learning}, which is based on the idea that training the entire architecture is avoided by adding simple trainable adapters between layers instead. 
Another reason for using the adapters is that the models can suffer from catastrophic forgetting, which means information learned during previous stages can be lost when adding new tasks.\nIn our analysis, we took this negative transfer notion into account by observing the losses and the performance, which is important for not disrupting our pre-trained models and also not suffering from catastrophic forgetting and negative transfer.\n\nThe fine-tuning strategy has been shown to perform better than its static feature extraction counterpart for various problem instances. \nIn the numerical study, we consider widely used models and their checkpoints for fine-tuning, which are briefly summarized below. \n\\begin{itemize}\\setlength\\itemsep{0.3em}\n \\item \\textbf{BERT checkpoints:} BERT is the most well-known encoder using both Masked Language Model (MLM) and Next Sentence Prediction (NSP) objectives. The base model, \\textit{BERT-base-uncased}, has a hidden size of 768, which corresponds to 12 heads and a head embedding size of 64. \n The number of layers in the model is 12. \n The large counterpart model, \\textit{BERT-large-uncased}, has a hidden size of 1024 with 24 layers and 16 attention heads. \n The original learning rate used for BERT checkpoints is 1e-4. \n When it is fine-tuned, a smaller learning rate (e.g., 1e-5 or 2e-5) is selected in order not to disrupt the learning during pre-training. \n \n \\item \\textbf{DistilBERT checkpoints:} We use \\textit{distilbert-base-uncased} checkpoint of the DistilBERT~\\citep{distilbert}, which is a small and light model trained by distilling BERT-base checkpoint. It is pre-trained on the same data used to pre-train BERT-base model. The corpus consists of the Toronto Book Corpus and full English Wikipedia using distillation with the supervision of the \\textit{BERT-base-uncased} version. The model has 6 layers, a hidden size of 768 with 12 heads, and 66M parameters.\n\n \\item \\textbf{RoBERTa checkpoints}: RoBERTa (Robustly Optimized BERT pre-training Approach)~\\citep{roberta} is a well-known BERT re-implementation. \n RoBERTa training provided many more improvements in terms of training strategies than modifying the architectural design. \n For instance, NSP training objective is removed, and the static masking is replaced by the strategy of dynamically changing the masking patterns.\n In our analysis, Adam optimizer is used during pre-training as in other BERT checkpoints with a learning rate of 6e-4.\n\\end{itemize}\n\n\\subsection{Adaptive fine-tuning}\nEven though fine-tuning approach used for the Transformer architectures typically performs well, the differences in the distribution of source and target data typically have a significant impact on the effectiveness of fine-tuning~\\citep{ruder2019transfer}. \nIf the source and target datasets are substantially different from each other, fine-tuning may face difficulty in learning. \nTypically, NLP model evaluations rely on the assumption that the source (train) and target (test) instances are independent and identically distributed, which does not hold in most cases in real-world applications. \nSuch discrepancy may appear since the target dataset hardly characterizes the entire distribution, and the target distribution usually changes over time~\\citep{torralba2011unbiased,quinonero2009dataset}. 
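\n\nFor reference, the vanilla two-stage pipeline that the adaptive stage described in this subsection extends can be sketched as follows. The snippet is only a minimal illustration based on the Hugging Face \\textit{transformers} library: the checkpoint name, file names, label count, and hyperparameter values are placeholders rather than the exact configuration used in our experiments.\n\\begin{verbatim}
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

# Stage 1: start from a generically pre-trained checkpoint (placeholder name).
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=7)   # e.g., seven classes for the Type task

# Labeled requirement summaries; "label" is assumed to be an integer class index.
data = load_dataset("csv", data_files={"train": "train.csv",
                                       "validation": "dev.csv"})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

# Stage 2: fine-tune the whole network end-to-end with a small learning rate.
args = TrainingArguments(output_dir="doors-type-clf", learning_rate=2e-5,
                         num_train_epochs=3, per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=data["train"],
        eval_dataset=data["validation"], tokenizer=tokenizer).train()
\\end{verbatim}\nThe adaptive variant discussed below inserts an additional unsupervised training step between these two stages.\n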
\nPrevious studies showed that pre-trained Transformers are the most robust architecture on real-world distribution shift and have more generalization capacity than other architectures, however, there still remains room for improvement~\\citep{hendrycks2020pretrained}. \n\nIn the literature, several strategies were proposed to adapt a pre-trained model to a target domain~\\citep{ruder2019transfer}. \nIn our implementations, we keep training the pre-trained models using some additional in-domain data, with the expectation that specializing the model to target data would improve the downstream task performance.\nThat is, an already pre-trained model is continually trained with the pre-training objective on target data (i.e., $J_{S}= J_{A}$) that is expected to be closer to the target distribution. \nFinally, we end up with another version of the pre-trained model, $f(\\hat{\\theta}_{S})$, that still requires fine-tuning to a downstream task. \nFigure~\\ref{fig:adaptive_fine_tuning} summarizes our adaptive fine-tuning framework.\n\n\\begin{figure}[!ht]\n \\centering\n\\includegraphics[width=0.81\\textwidth]{arch.png}\n \\caption{Adaptive fine-tuning framework}\n \\label{fig:adaptive_fine_tuning}\n\\end{figure}\n\nAdaptive fine-tuning consistently helps with distribution shifts. \nTherefore, just before the fine-tuning phase, the model can be further trained with the target dataset. \nIn other words, we hold three phases as depicted in Figure~\\ref{fig:adaptive_fine_tuning}. \nThe first phase includes a pre-training process to learn the parameters $\\theta_{S}$ with source objective $J_{S}$ where we do not need any labeled data. \nThe objective function is typically the masked language model. \nIn fact, there are already many checkpoints trained for this purpose in the literature. Therefore we did not have to pre-train a model from scratch. \nThese pre-trained models have been trained on a large variety of data and have a sufficient depth of architecture and a number of parameters as discussed in Section~\\ref{sec:twostageTraining}. \n\nIn the second phase, the pre-trained model is kept training, again with the same source objective $J_{S}= J_{A}$ but with target datasets for adaptation. \nIt is important to note that other kinds of auxiliary objective functions can be utilized to improve the adaptation. \nAt this phase, we do not change the Transformer model architecture and do not add any additional parameters. \nFurthermore, in the third phase, new parameters $\\theta_{T}$ are added to the architecture. \nThey are usually found in the new layer, which is placed on top of the last layer.\nThe new architecture is then trained with a target objective $J_{T}$ and target labeled data in an end-to-end fashion. \nDuring this phase, we set the learning rate to a smaller value than one in the pre-training and adaptive fine-tuning phases in order not to disrupt the main model significantly, as the model learns the most important syntactic and semantic information by this stage.\n\n\\subsection{Experimental settings}\nTable~\\ref{tab:hyperparameters} lists the hyperparameters used for different Transformer models. \nSpecifically, we experiment with different BERT checkpoints with varying sizes. 
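\n\nConcretely, the adaptive (second) stage in our setting amounts to continued masked language model training of such a checkpoint on the unlabeled DOORS summaries. The following is a minimal sketch using the Hugging Face \\textit{transformers} library; the corpus path, checkpoint name, and batch size are illustrative placeholders, while the masking probability, learning rate, and epoch budget follow the settings reported later in this section.\n\\begin{verbatim}
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

checkpoint = "distilbert-base-uncased"   # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Unlabeled requirement summaries, one per line (illustrative path).
corpus = load_dataset("text", data_files={"train": "doors_summaries.txt"})
corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True),
                    batched=True, remove_columns=["text"])

# Masked language model objective: 15% of the input tokens are masked.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

args = TrainingArguments(output_dir="doors-adapted-mlm", learning_rate=1e-4,
                         num_train_epochs=30, per_device_train_batch_size=32)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=corpus["train"]).train()
# The adapted model is then fine-tuned separately on each labeled task.
\\end{verbatim}\n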
\nThe learning rate (\\textit{lr}) and activation function parameters are determined via hyperparameter tuning experiments.\n\n\\begin{table}[!ht]\n\\centering\n\\caption{Transformer model hyperparameters}\\label{tab:hyperparameters}\n\\resizebox{0.99\\textwidth}{!}{\n\\begin{tabular}{|l|r|r|r|r|r|c|}\n\\hline\n\\textbf{Checkpoint} & \\textit{\\textbf{Layers}} & \\textit{\\textbf{Hidden units}} & \\textit{\\textbf{Heads}} & \\textit{\\textbf{Total Params}} & \\textit{\\textbf{lr}} & \\textit{\\textbf{activation}} \\\\ \n\\hline\nDistilBERT & 6 & 768 & 12 & 66M & 5e-4 & gelu \\\\ \n\\hline\nBERT-base & 12 & 768 & 12 & 110M & 1e-4 & gelu \\\\ \n\\hline\nBERT-large & 24 & 1024 & 16 & 340M & 1e-4 & gelu \\\\ \n\\hline\nRoBERTa-base & 12 & 768 & 12 & 125M & 6e-4 & gelu \\\\ \n\\hline\nRoBERTa-large & 24 & 1024 & 16 & 355M & 4e-4 & gelu \\\\ \n\\hline\n\\end{tabular}\n}\n\\end{table}\n\nIn a typical supervised training pipeline, unlabeled data would be eliminated and the fine-tuning phase would be implemented on a labeled dataset. \nIn the DOORS dataset, there are more than 80,000 requirement texts; however, the majority of these are unlabeled. \nAccordingly, we employ all the requirement texts from the entire dataset for the adaptive fine-tuning process by using MLM as the unsupervised learning objective. \nThe tokens are randomly masked in the input with a probability of 0.15 to implement MLM. \nThis process yields a new adapted pre-trained model with the updated parameters, $\\hat{\\theta}_{S}$.\n\nFor the rest of the training procedure, we follow common practice for hyperparameter selection. \nThe dataset is divided into three sets: training, validation, and test. The learning rate is kept around 1e-4, as in other pre-training hyperparameter settings such as the BERT checkpoints. \nThe model is trained for up to 30 epochs, using the \\textit{AdamW} optimizer~\\citep{adamW}. \nThe best model found during the training is loaded at the end. \nThe fine-tuning phase is then applied as the final stage. \n\nWe also follow common practice in the fine-tuning phase that comes after the adaptive phase. \nSpecifically, we have three classification targets, \\textit{Priority}, \\textit{Severity}, and \\textit{Type}, as downstream tasks. \nFor each task, we fine-tune the adapted pre-trained model obtained at stage two for up to 3 epochs. \nWe observe that keeping the learning rate very small at this stage is highly important so as not to spoil the adapted language model. \n\n\\section{Numerical Study}\\label{sec:results}\nIn this section, we provide the results from our detailed numerical study. We first show representative results from the hyperparameter tuning experiments.\nThen, we present the results from our comparative analysis with different baselines and Transformers, along with their adaptively fine-tuned variants.\nWe next examine the performance of the best-performing model using the class-specific outcomes.\nLastly, we provide a discussion on model predictions and elaborate on potential causes for misclassifications.\n\n\\subsection{Hyperparameter tuning results}\n\nWe performed hyperparameter tuning for various model parameters listed in Table~\\ref{tab:hyperparameters}.\nFigure~\\ref{fig:fine_tuning_loss} illustrates the fine-tuning loss for the downstream task of Type prediction across different learning rates when fine-tuning a selected Transformer model (DistilBERT). \nLearning rate values between 1e-05 and 1e-04 have been evaluated.
\nWe observe that, for lower rates such as 1e-05, even though training eventually converges to the ideal level, this can take a long time, e.g., up to 1500 steps in this case. \nWhen the learning rate is too high, training makes very quick progress at first, but it never converges and settles down, which leads to divergence. \nWhen the learning rate is selected between 2e-05 and 3e-05 (illustrated as thick straight lines in the figure), we obtain satisfactory results and fine-tuning achieves convergence.\n\nFigure~\\ref{fig:adaptive_fine_tuning_loss} illustrates the training and validation loss for adaptive fine-tuning of a DistilBERT checkpoint.\nWe see that after 15 epochs, the reduction in validation loss almost stops and the model converges. \nWe also observe no bias or variance problems in the training process. \n\n\\begin{figure}[!ht]\n \\centering\n \\subfloat[Fine-tuning loss ($x$-axis: training steps) \\label{fig:fine_tuning_loss}]{\\includegraphics[width=0.685\\textwidth]{FineTuningEvalLoss_new.pdf}}\\\\\n \\subfloat[Adaptive fine-tuning loss \\label{fig:adaptive_fine_tuning_loss}]{\\includegraphics[width=0.705\\textwidth]{loss.png}}\n \\caption{Sample loss values observed during hyperparameter tuning for the selected pre-trained model, DistilBERT ($y$-axis: loss values).}\n \\label{fig:lossValues}\n\\end{figure}\n\n\\subsection{Comparative performance analysis results}\nWe report the model performances in terms of Accuracy (ACC), F1, and weighted F1 (w-F1) in Table~\\ref{tab:perf_comparison_table}.\nThree baseline models are employed, namely the majority class classifier, Word Embeddings (WE), and SBERT.\nThe rest of the table contains the performance of the Transformer models. \nFor easier comparison, we list the results of two-stage (pre-training and fine-tuning) learning and three-stage adaptive learning back-to-back, as shown in the table. \nThe three-stage process is indicated with the term ``+adapted''.\n\n\\begin{table}[!ht]\n\\centering\n\\caption{Performance comparison (ACC \/ F1 \/ w-F1) for the classification models (``+adapted'': adaptively fine-tuned version).
}\n\\label{tab:perf_comparison_table}\n\\resizebox{0.999\\textwidth}{!}{\n\\begin{tabular}{|l|l|l|l|} \n\\hline\nACC \/ F1 \/w-F1~ & \\multicolumn{1}{l}{Priority} & \\multicolumn{1}{l}{Severity} & Type \\\\ \n\\hline\nMajority Classifier & 68.39 \/ 20.31 \/ 55.56 & 86.11 \/ 15.42 \/ 76.69 & 30.16 \/ 6.62 \/ 13.98 \\\\ \n\\hline\n\\multicolumn{4}{|l|}{\\textbf{Feature Extraction Baselines }} \\\\ \n\\hline\nWE(Glove + FastText) & 71.52 \/ 31.82 \/ 66.18 & 86.60 \/ \\textbf{27.76} \/ 81.74 & 75.92 \/ 73.81 \/ 75.10~ \\\\ \n\\hline\nSBERT & 70.85 \/ 32.79 \/ 67.88 & 86.62 \/\\textbf{ 25.59} \/ 82.52~ ~ & 77.22 \/ 74.78 \/ 76.33 \\\\ \n\\hline\n\\multicolumn{4}{|l|}{\\textbf{Transformer Models~}} \\\\ \n\\hline\nDistilBERT & 72.09 \/ 33.20 \/ 65.34 & 86.43~\/ 15.76 \/ 79.84 & 78.80 \/ 73.88 \/ 76.72 \\\\ \n\\hline\n\\textit{+ adapted} & \\textit{\\textbf{72.54}~\/ 33.21~\/ 65.45} & \\textit{\\textbf{86.61} \/ 16.9 \/ 80.89} & \\textit{\\textbf{81.15}~\/~ 77.57 \\textbf{\/ 79.24}} \\\\ \n\\hline\nBert-base & 71.89 \/ 32.51 \/ 65.0 & 86.49 \/ 16.3 \/ 80.48 & 77.24 \/ 70.87 \/ 74.62~ \\\\ \n\\hline\n\\textit{+ adapted} & \\textbf{72.63} \/ \\textbf{33.8 }\/ \\textbf{65.87~} & 86.56 \/ \\textbf{22.3}~\/ \\textbf{81.64} & 80.06 \/ 75.6~~\/ 77.76 \\\\ \n\\hline\nBert-large & 68.39 \/ 20.31 \/ 55.56~ & 86.12 \/ 15.42 \/ 79.68 & 79.48 \/ 75.91 \/ 78.13~ \\\\ \n\\hline\n\\textit{+adapted} & 71.94 \/ 30.64 \/ 65.29 & 86.11 \/ 15.4 \/ 79.70 & 79.52 \/ 76.06 \/ 78.19 \\\\ \n\\hline\nRoberta-base & 71.33 \/ 31.04 \/ 62.11 & 86.32 \/ 15.11 \/ 79.29 & 80.12 \/ 77.11 \/ 78.29~ \\\\ \n\\hline\n\\textit{+adapted} & 71.78 \/ 32.12 \/ 65.12~ & 86.21 \/ 29.43 \/ 81.49 & 80.45 \/ \\textbf{78.71} \/ 79.15~ \\\\ \n\\hline\nRoberta-large & 68.81 \/ 30.12 \/ 56.17~ & 86.03 \/ 15.71 \/ 79.98 & 79.59 \/ 75.94 \/ 78.17~ \\\\ \n\\hline\n\\textit{+adapted} & 72.02 \/ 32.77 \/ 65.11 & 86.23 \/ 15.67 \/ 80.40 & 79.94 \/ 76.16 \/ 78.18 \\\\\n\\hline\n\\end{tabular}\n}\n\\end{table}\n\nThese results show that the Transformer models, with or without adaptation, outperform the majority classifier baseline model. \nThey also outperform word embedding pooling and SBERT baselines for \\textit{priority} and \\textit{type} tasks. \nThis can be attributed to the generalization capacity of the Transformers and the benefit of adaptation. However, we observe contradictory results for the \\textit{Severity} category. \n\nAnother observation is that the adaptation brought an improvement for all model settings. \nWe could not see any negative transfer or performance loss. In addition, we examine which of these differences are statistically significant with a detailed analysis as some improvements have very low degree. \nWe also note that all the adapted models achieve similar performance as shown in Figure~\\ref{fig:perf_comparison_adapted}. \n\n\\begin{figure}[!ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{adapted3.pdf}\n \\caption{Performance comparison across adapted models based on accuracy values}\n \\label{fig:perf_comparison_adapted}\n\\end{figure}\n\nWe report the improvement obtained by adaptation as shown in Table~\\ref{tab:p_value}. \nWe list the improvement obtained by adaptation against its vanilla counterpart and majority classifier baselines. \nThe $p$-value shows whether the difference is statistically significant. \nWe employ the 5x2 cv F-test for this purpose, which is able to determine whether there is a significant difference between the performance of two classifiers~\\citep{alpaydm1999combined}. 
This method employs 2-fold cross-validation 5 times for two classifiers to be compared. \nThus, the values of the two models under the same conditions and on the same fold are comparable. \nIf the $p$-value is less than $\\alpha$ (0.05), the null hypothesis is rejected, which means that the two models are significantly different.\n\n\\begin{table}[!ht]\n\\centering\n\\caption{Improvement made by adaptation and its $p$-value as compared to vanilla Transformer model variants and the majority classifier baseline (reported as ``+Improvement(p-value)'')}\n\\label{tab:p_value}\n\\resizebox{0.99950\\linewidth}{!}{\n\\begin{tabular}{|l|l|l|l|l|l|l|} \n\\cline{1-7}\n\\cline{1-7}\n\\multicolumn{1}{l}{} & \\multicolumn{3}{c|}{Vanilla Counterpart} & \\multicolumn{3}{c}{Majority Classifier} \\\\ \n\\cline{2-7}\n\\multicolumn{1}{l}{\\textbf{Priority}} & \\multicolumn{1}{|l}{Acc.} & \\multicolumn{1}{|l}{F1} & \\multicolumn{1}{|l}{w-F1} & \\multicolumn{1}{|l|}{Acc.} & \\multicolumn{1}{|l|}{F1} & \\multicolumn{1}{|l|}{w-F1} \\\\ \n\\hline\nBert+adapted &+0.73(0.15) &+1.29(0.047) &+0.86(0.53) &+3.56(0.002) &+13.37(5.2e-6) &+9.44(1.1e-5) \\\\ \n\\hline\nDistilbert+adapted &+0.45(0.04) &+0.012(0.67) &+0.11(0.68) &+3.4(1.5e-7) &+12.78(2.2e-8) &+9.02(2.8e-6) \\\\ \n\\hline\n\\multicolumn{1}{l}{\\textbf{\\textbf{Severity}}} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\ \n\\hline\nBert+adapted &+0.07(0.48) &+5.96(0.0015) &+1.16(0.0052) &+0.14(0.42) &+6.83(0.0015) & 1.5(0.008) \\\\ \n\\hline\nDistilbert+adapted &+0.17(0.1) &+1.44(0.0029) &+1.04(0.08) &+0.183(0.065) &+1.45(0.0024) &+0.74(0.007) \\\\ \n\\hline\n\\multicolumn{1}{l}{\\textbf{\\textbf{Type}}} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\ \n\\hline\nBert+adapted &+2.82(0.0041) &+4.72(0.0043) &+3.14(0.0097) &+49.48(2.8e-6) &+68.91(3.5e-10) & 63.44(2.8e-8) \\\\ \n\\hline\nDistlbert+adapted &+2.35( 0.0023) &+3.69(0.0005) &+2.52(0.0033) &+50.57(2.5e-9) &+70.88(3.5e-10) &+64.92(2.8 e-8) \\\\ \n\\hline\n\\end{tabular}\n}\n\\end{table}\n\nThe performance improvement table suggests that adapted models consistently have positive contributions. \nThat is, the overall performance of the adapted models is promising. \nHowever, positive differences for certain cases are not found to be statistically significant as seen in the table since the corresponding $p$-values are not smaller than 0.05.\n\n\nWe can clearly see that \\textit{Priority} and \\text{Type} tasks have a strong pattern between requirement and the target class. Therefore, we observe better improvement for these tasks. However, \\textit{Severity} class is the most difficult one to be accurately classified among these tasks,\nmostly because we are unable to observe a strong relationship between the requirement text and the target. \nIn many models and many different experimental setups that we have designed, no significant pattern have been observed for this class. Therefore, the Transformer models are not able to achieve any meaningful improvement against the baseline methods, nor do we see an improvement during adaptive fine-tuning. 
The feature extraction-based models (Word embeddings and SBERT) show better performance especially in terms of F1-score for this task.\n\n\\subsection{Detailed performance evaluation results}\n\nThe confusion matrix and certain other metrics such as precision and recall can provide better insights to understand the prediction errors. \nWe check this through the DistilBERT model, which is selected as the representative Transformer model. \nIt is important to note that we observe similar behavior in other models as well, and DistilBERT is simply chosen as a representative model.\nTable~\\ref{tab:detailed_perf} shows detailed performance values for the adaptively fine-tuned version of DistilBERT. \nBased on these results, we identify one of the main reasons for the errors in Priority classification as the model being too sensitive to high-frequency labels such as ``Unassigned''. \nThat is, the performance for low-frequency labels ``Medium'' and ``Low'' suffer from the imbalanced class distribution. \n\n\\begin{table}[!ht]\n\\centering\n\\caption{Detailed performance results for the adaptively fine-tuned DistilBERT model for the Priority, Severity, and Type prediction tasks (diagonals bolded in confusion matrices).}\n\\label{tab:detailed_perf}\n \\-\\hspace{-2.2cm}\\subfloat[Priority performance values by class \\label{tab:priority_perf}]\n {\n \\resizebox{0.395\\linewidth}{!}{\n \\begin{tabular}[c]{|l|rrrr|}\n \\hline\n & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F1-score} & \\textbf{Support} \\\\\n \\hline\n \\textbf{High} & 0.582 & 0.507 & 0.542 & 682 \\\\\n \\textbf{Low} & 1.000 & 0.022 & 0.044 & 132 \\\\\n \\textbf{Medium} & 0.466 & 0.191 & 0.271 & 497 \\\\\n \\textbf{Unassigned} & 0.785 & 0.926 & 0.849 & 2,837 \\\\\n \\hline\n \\end{tabular}\n }\n }\n \\subfloat[Priority confusion matrix \\label{tab:priority_cm}] \n {\n \\resizebox{0.395\\linewidth}{!}{\n \\begin{tabular}[c]{|ll|rrrr|}\n \\hline\n & & \\multicolumn{4}{c|}{Prediction} \\\\\n \n & & \\textbf{High} & \\textbf{Low} & \\textbf{Medium} & \\textbf{Unassigned} \\\\\n \\hline\n \\multirow{4}{*}{\\STAB{\\rotatebox[origin=c]{90}{{Ground Truth}}}} & \\textbf{High} & \\textbf{346} & 0 & 37 & 299 \\\\\n & \\textbf{Low} & 13 & \\textbf{3} & 12 & 104 \\\\\n & \\textbf{Medium} & 86 & 0 & \\textbf{95} & 316 \\\\\n & \\textbf{Unassigned} & 149 & 0 & 60 & \\textbf{2,628}\\\\\n \\hline\n \\end{tabular}\n }\n }\\\\\n \\-\\hspace{-0.79cm}\\subfloat[Severity performance values by class \\label{tab:severity_perf}]\n {\n \\resizebox{0.395\\linewidth}{!}{\n \\begin{tabular}[c]{|l|rrrr|}\n \\hline\n & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F1-score} & \\textbf{Support} \\\\\n \\hline\n \\textbf{Blocker} & 0.800 & 0.150 & 0.250 & 26 \\\\ \n \\textbf{Critical} & 1.000 & 0.066 & 0.125 & 60 \\\\ \n \\textbf{Major} & 0.390 & 0.195 & 0.262 & 291 \\\\ \n \\textbf{Minor} & 1.000 & 0.134 & 0.237 & 67 \\\\ \n \\textbf{Normal} & 0.890 & 0.977 & 0.933 & 3,572 \\\\ \n \\textbf{Undecided} & 0.695 & 0.431 & 0.532 & 132 \\\\ \n \\hline\n \\end{tabular}\n }\n }\\hspace{0.01cm}\n \\subfloat[Severity confusion matrix\\label{tab:severity_cm}] \n {\n \\resizebox{0.515\\linewidth}{!}{\n \\begin{tabular}[c]{|ll|rrrrrr|}\n \\hline\n & & \\multicolumn{6}{c|}{Prediction} \\\\\n & & \\textbf{Blocker} & \\textbf{Critical} & \\textbf{Major} & \\textbf{Minor} & \\textbf{Normal} & \\textbf{Undecided} \\\\\n \\hline\n \\multirow{7}{*}{\\STAB{\\rotatebox[origin=c]{90}{{Ground Truth}}}} & \\textbf{Blocker} & \\textbf{4} & 0 & 7 & 0 & 14 & 1 \\\\\n & 
\\textbf{Critical} & 0 & \\textbf{4} & 16 & 0 & 39 & 1 \\\\\n & \\textbf{Major} & 1 & 0 & \\textbf{57} & 0 & 232 & 1 \\\\\n & \\textbf{Minor} & 0 & 0 & 2 & \\textbf{9} & 56 & 0 \\\\\n & \\textbf{Normal} & 0 & 0 & 59 & 0 & \\textbf{3,491} & 22 \\\\\n & \\textbf{Undecided} & 0 & 0 & 2 & 0 & 73 & \\textbf{57}\\\\\n \\hline\n \\end{tabular}\n }\n }\\\\\n \\subfloat[Type performance values by class \\label{tab:type_perf}]\n {\n \\resizebox{0.395\\linewidth}{!}{\n \\begin{tabular}[c]{|l|rrrr|}\n \\hline\n & \\textbf{Precision} & \\textbf{Recall} & \\textbf{F1-score} & \\textbf{Support} \\\\\n \\hline\n \\textbf{Enhancement} & 0.763 & 0.823 & 0.792 & 1,251 \\\\\n \\textbf{JUnit} & 0.922 & 0.902 & 0.912 & 132 \\\\\n \\textbf{Maintenance} & 0.825 & 0.895 & 0.858 & 562 \\\\\n \\textbf{Other} & 0.589 & 0.354 & 0.442 & 466 \\\\\n \\textbf{Plan Item} & 0.749 & 0.809 & 0.778 & 199 \\\\\n \\textbf{Story} & 0.916 & 0.932 & 0.924 & 1,131 \\\\\n \\textbf{Test Task} & 0.920 & 0.936 & 0.928 & 407 \\\\\n \\hline\n \\end{tabular}\n }\n }\n \\subfloat[Type confusion matrix \\label{tab:type_cm}] \n {\n \\resizebox{0.605\\linewidth}{!}{\n \\begin{tabular}[c]{|ll|rrrrrrr|}\n \\hline\n & & \\multicolumn{7}{c|}{Prediction} \\\\\n & & \\textbf{Enhancement} & \\textbf{JUnit} & \\textbf{Maintenance} & \\textbf{Other} & \\textbf{Plan Item} & \\textbf{Story} & \\textbf{Test Task} \\\\\n \\hline\n \\multirow{7}{*}{\\STAB{\\rotatebox[origin=c]{90}{{Ground Truth}}}} & \\textbf{Enhancement} & \\textbf{1,029} & 1 & 70 & 80 & 21 & 48 & 2 \\\\\n & \\textbf{JUnit} & 1 & \\textbf{119} & 1 & 2 & 1 & 6 & 2 \\\\\n & \\textbf{Maintenance} & 44 & 0 & \\textbf{503} & 10 & 1 & 4 & 0 \\\\\n & \\textbf{Other} & 212 & 3 & 33 & \\textbf{165} & 12 & 17 & 24 \\\\\n & \\textbf{Plan Item} & 18 & 0 & 0 & 7 & \\textbf{161} & 12 & 1 \\\\\n & \\textbf{Story} & 41 & 1 & 2 & 11 & 18 & \\textbf{1,054} & 4 \\\\\n & \\textbf{Test Task} & 4 & 5 & 1 & 5 & 1 & 10 & \\textbf{381} \\\\\n \\hline\n \\end{tabular}\n }\n }\n\\end{table}\n\nThe severity classification task shows similar behavior to the Priority category. \nThis can be similarly attributed to severe data imbalance for this label, with the ``Normal'' class having a significant majority over the others.\nWe find that the predictions for the ``Major'' class can be quite poor, with many ``Major'' severity requirements predicted as ``Normal'' severity. \nOn the other hand, the Type label has relatively better class distributions than others.\nWe observe that fine-tuning performs fairly well on this task. \nThere is only one class label, ``Other'', that performs significantly poorly than the others.\n\n\\subsection{Discussion on model predictions}\nWe next examine the sample requirements where the model fails to predict the actual class correctly in our three classification tasks with the DOORS dataset (see Table~\\ref{tab:sample_results}).\nBelow, we summarize the general observations regarding these misclassifications.\n\\begin{itemize}\\setlength\\itemsep{0.3em}\n \\item In general, the model predictions are highly impacted by class distributions and input length. The majority of the test instances are classified into ``Unassigned'', ``Normal'' and ``Enhancement'' classes for priority, severity, and type classification tasks, respectively. 
\n \n \\item In the Type classification task, the ``Story'' class is usually assigned to the instance that mentions a requirement for a particular software whereas the ``Enhancement'' class is usually assigned to an already existing task which mentions an improvement over the existing requirement. \n Certain words such as ``improve'', ``ensure'', and ``report'' might overlap in multiple classes which may lead to misclassification.\n \n \\item Severity labels generally point to the extent of a particular defect or a task can impact the software product.\n However, the level of severity designation can be subjective.\n As such, the requirement text with the ``Critical'' or ``Major'' label can be classified as ``Normal'' because the model is not able to differentiate whether that particular requirement belongs to the important feature and use-case in the system.\n \n \\item Priority classification task can be considered as ordering the requirements\/defects as ``High'', ``Low'', and ``Medium'' based on business needs and urgency of solving a particular defect. \n Similar to the Severity label, these priority values can also be highly subjective.\n Furthermore, since the ``Unassigned'' class is the significant majority, and may contain instances from the other three classes, Priority classification can be considered a more challenging task than others.\n \n\\end{itemize}\n\nTable~\\ref{tab:sample_results} provides specific examples of misclassification cases.\nIn priority classification, $R_1$ is classified into ``Unassigned'' (indicating appropriate priority level could not be determined), however, the requirement text indicates that it is a high priority task as the rename commands are not able to refresh the expected feature. \nSimilarly, $R_2$ is also classified as ``Unassigned'' despite having lower priority. \nIn Severity classification, $R_3$ is predicted as ``Normal'' although it is evident from the requirement text that the label should be ``Critical'', as the user cannot access graphical editor plug-ins on Linux and Mac workstations which might cause a complete halt in the application for Mac and Linux users.\n$R_4$ also suggests an important task as the user is not able to lock the changeset deliveries at the appropriate time and this task needs urgent attention, which explains the `Major' severity level. \nHowever, the model predicts the ``Normal'' level of severity. \nIn Type classification, the $R_5$ task talks about the improvement in performance for some baseline query which can be easily identified as the maintenance task in the existing system. \nWords like ``improve'' might have confused the model, and it classifies $R_5$ as ``Story''. \nThe presence of the word ``Assessment'' in $R_6$ gives a clear indication that it is a ``Test Task'', whereas the word ``web client'' also shows the presence of the user. \nAccordingly, the model is unable to differentiate between a ``Story'' and ``Test Task'' class in this case. \n\n\\begin{table}[!ht]\n \\centering\n \\caption{Examples of misclassified instances}\n \\label{tab:sample_results}\n \\resizebox{1.0\\linewidth}{!}{\n \\begin{tabular}{P{0.15\\textwidth} P{0.57\\textwidth} P{0.12\\textwidth} P{0.15\\textwidth}}\n \\toprule\n \\textbf{Classification task} & \\textbf{Requirement text} & \\textbf{Actual class} & \\textbf{Predicted class} \\\\ \n \\midrule\n Priority & $R_1$: Type system rename commands do not refresh the affected shapes in the TRS feed. 
& High & Unassigned \\\\ \n \\cmidrule(l){2-4} \n & $R_2$: Review email can be directed to user who does not have access to the RRC project. & Low & Unassigned \\\\ \n \\midrule\n Severity & $R_3$: Support browser graphical editor plug-ins on Linux and Mac workstations & Critical & Normal \\\\ \n \\cmidrule(l){2-4} \n & $R_4$: Changeset deliveries do not lock at appropriate time & Major & Normal \\\\ \n \\midrule\n Type & $R_5$: Improve performance for large baseline compare query & Maintenance Item & Story \\\\\n \\cmidrule(l){2-4} \n & $R_6$: Assessment of existing web client TCs & Test Task & Story \\\\ \n \\bottomrule\n \\end{tabular}\n }\n\\end{table}\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nModern language models may still struggle to transfer knowledge when the distribution of the target domain differs significantly from that of the source domain. \nEven though it is well known that Transformers are more robust to the OOD problem than other deep learning architectures, there is still room for improvement in this regard. \nIn this paper, we show that adaptive fine-tuning can help various text classification tasks in the SRS domain. \nIn a three-stage training pipeline, we fine-tune the pre-trained models on data that is closer to the target distribution, which helps reduce the discrepancy between the source and target distributions.\nFor this method, it is sufficient to have unlabeled data from the target distribution, as the model is fine-tuned with the pre-training objective.\nWe leverage domain-adaptive fine-tuning for three SRS document classification problems (Priority, Severity, and Type classification), and improve the model performance under a real distribution shift. \nWe find that Severity classification is the most challenging task, as this category does not exhibit salient patterns between the requirement text and the target label. \nWe perform comparisons against strong baselines and show that the model performance can be improved consistently by the additional adaptive phase. \n\nOur work can be extended in multiple directions. First, due to a lack of publicly available data, we only experimented with a single data source. \nThe effectiveness of adaptive fine-tuning for SRS classification tasks can be further investigated by using diverse datasets. Second, we note that all three classification tasks considered in this study suffer from data imbalance issues.\nAccordingly, data augmentation strategies can be employed alongside adaptive fine-tuning for improved classification performance.\nLastly, the baseline models are all pre-trained on a general corpus, whereas a Transformer model pre-trained on a software engineering-specific corpus could perform better on our classification tasks.\n\n\\section*{Statements and Declarations}\nNo potential conflict of interest was reported by the authors.\n\n\\section*{Data Availability Statement}\nThe DOORS dataset is proprietary and is not publicly available.\n\n\\bibliographystyle{elsarticle-num-names}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}