\section{Introduction}
\label{sec:introduction}

A key element in the development of machine learning methods is the exploitation of the underlying structure of the data through appropriate architectures.
For example, convolutional neural networks make use of local, translation-invariant correlations in image-like data and compile them into characteristic features \cite{Hinton, Ciresan, ILSVRC2012, He2015, deOliveira:2015xxd, Aurisano:2016jvx, Komiske:2016rsd, Kasieczka:2017nvn, Erdmann:2017str, Shilon:2018xlp, Delaquis:2018zqi, Adams:2018bvi}.

In particle physics, characteristic features are used to identify particles, to separate signal from background processes, or to perform calibrations.
These features are usually expressed in manually engineered variables based on physics reasoning.
As input for machine learning methods in particle physics, these so-called high-level variables were in many cases superior to the direct use of particle four-momenta, often referred to as low-level variables.
Recently, several working groups have shown that deep neural networks can perform better if they are trained with both specifically constructed variables and low-level variables \cite{Baldi2014:exotics, Baldi2015:higgs, Adam2015:kaggle, Guest2016:jetflavor, Baldi:2016fql, Shimmin:2017PhRvD, Louppe:2017ipp, Datta:2017rhs, Erdmann:ttH2017, Butter:2017cot, deOliveira:2017pjk, Stoye:2017, Sirunyan:2018mvw}.
This observation suggests that the networks extract additional useful information from the training data.

In this paper, we attempt to autonomize the process of finding suitable variables that describe the main characteristics of a particle physics task.
Through a training process, a neural network should learn to construct these 
variables from raw particle information.\nThis requires an architecture tailored towards the structure of particle collision events.\nWe will present such an architecture here and investigate its impact on the separation power for signal and background processes in comparison to domain-unspecific deep neural networks.\nFurthermore, we will uncover characteristic features which are identified by the network as particularly suitable.\n\nParticle collisions at high energies produce many particles which are often short-lived.\nThese short-lived particles decay into particles of the final state, sometimes through a cascade of multiple decays.\nIntermediate particles can be reconstructed by using energy-momentum conservation when a parent particle decays into its daughter particles.\nBy assigning to each particle a four-vector defined by the particle energy and its momentum vector, the sum of the four-vectors of the daughter particles gives the four-vector of the parent particle.\nFor low-energy particle collisions, bubble chamber images of parents and their daughter particles were recorded in textbook quality.\nEvidently, here the decay angular distributions of the daughter particles are distorted by the movement of the parent particle, but can be recovered in the rest frame of the parent particle after Lorentz transformation.\n\nThe same principles of particle cascades apply to high-energy particle collisions at colliders.\nHere, the particles of interest are, for example, top quarks or Higgs bosons, which remain invisible in the detector due to their short lifetimes, but can be reconstructed from their decay products.\nExploiting the properties of such parent particles and their daughter particles in appropriate rest frames is a key to the search for high-level variables characterizing a physics process.\n\nFor the autonomous search for sensitive variables, we propose a two-stage network, composed of a so-called Lorentz Boost Network (LBN) followed by an 
application-specific deep neural network (NN).\nThe LBN takes only the four-vectors of the final-state particles as input.\nIn the LBN there are two ways of combining particles, one to create composite particles, the other to form appropriate rest frames.\nUsing Lorentz transformations, the composite particles are then boosted into the rest frames.\nThus, the decay characteristics of a parent particle can be exploited directly.\n\nFinally, characteristic features are derived from the boosted composite particles: masses, angles, etc.\nThe second network stage (NN) then takes these variables as input to solve a specific problem, e.g. the separation of signal and background processes.\nWhile the first stage constitutes a novel network architecture, the latter network is interchangeable and can be adapted depending on the analysis task.\n\nThis paper is structured as follows.\nFirst, we explain the network architecture in detail.\nSecond, we present the simulated dataset we use to investigate the performance of our architecture in comparison to typical deep neural networks.\nThereafter, we review the particles, rest frames, and characteristic variables created by the network to gain insight into what is learned in the training process, before finally presenting our conclusions.\n\n\n\\section{Network architecture}\n\\label{sec:architecture}\n\nIn this section we explain the structural concept on which the Lorentz Boost Network (LBN) is based and introduce the network architecture.\n\nThe measured final state of a collision event is typically rather complex owing to the high energies of the particles involved.\nIn principle, all available information is encoded in the particles' four-vectors, but the comprehensive extraction of relevant properties poses a significant challenge.\nTo this end, physicists engineer high-level variables to decipher and factorize the probability distributions inherent to the underlying physics processes.\nThese high-level variables are often 
fed into machine learning algorithms to efficiently combine their descriptive power in the context of a specific research question.\nHowever, the consistent result of \\cite{Baldi2014:exotics, Baldi2015:higgs, Adam2015:kaggle, Guest2016:jetflavor, Baldi:2016fql, Shimmin:2017PhRvD, Louppe:2017ipp, Datta:2017rhs, Erdmann:ttH2017, Butter:2017cot, deOliveira:2017pjk, Stoye:2017, Sirunyan:2018mvw} is that the combination of both low- and high-level variables tends to provide superior performance.\nThis observation suggests that low-level variables potentially contain additional, useful information that is absent in hand-crafted high-level variables.\n\nThe aim of the LBN is, given only low-level variables as input, to autonomously determine a comprehensive set of physics-motivated variables that maximizes the relevant information for solving the physics task in the subsequent neural network application.\n\\Fig{fig:lbn_arch} shows the proposed two-stage network architecture (LBN+NN) in detail.\nThe first stage is the LBN and constitutes the novel contribution.\nIt consists of several parts, namely the combination of input four-vectors to particles and rest frames, subsequent Lorentz transformations, and the extraction of suitable high-level variables.\nThe second stage can be some form of deep neural network (NN) with an objective function depending on the specific research question.\n\\begin{figure}[h!tbp]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{lbn_architecture.pdf}\n \\caption{\n The two-stage deep neural network architecture consists of the Lorentz Boost Network (LBN) and a subsequent deep neural network (NN).\n In the LBN, the input four-vectors ($E, p_x, p_y, p_z$) are combined in two independent ways before each of the combined particles is boosted into its particular rest frame, which is formed from a different particle combination.\n The boosted particles are characterized by variables which can be, e.g., invariant masses, transverse momenta, 
pseudorapidities, and angular distances between them.\n These features serve as input to the second network designed to accomplish a particular analysis task.\n }\n \\label{fig:lbn_arch}\n\\end{figure}\n\nThe LBN combines $N$ input four-vectors, consisting of energies $E$ and momentum components $p_x, p_y, p_z$, to create $M$ particles and $M$ corresponding rest frames according to weighted sums using trainable weights.\nThrough Lorentz transformation, each combined particle is boosted into its dedicated rest frame.\nAfter that, a generic set of features is extracted from the properties of these $M$ boosted particles.\n\nExamples of variables that can be reconstructed with this structure include spin-dependent angular distributions, such as those observed during the decay of a top quark with subsequent leptonic decay of the W boson.\nBy boosting the charged lepton into the rest frame of the W boson, its decay angular distribution can be investigated.\nIn more sophisticated scenarios, the LBN is also capable of accessing further properties that rely on the characteristics of two different rest frames.\nAn example is a variable usually referred to as $\\cos(\\theta^*)$, which is defined by the angular difference between the directions of the charged lepton in the W boson's rest frame, and the W boson in the rest frame of the top quark.\nThe procedure of how this variable can be reconstructed in the LBN is depicted in \\Fig{fig:lbn_example}.\n\\begin{figure}[h!tbp]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{lbn_theta_star.pdf}\n \\caption{\n Example of a possible feature engineering in top quark decays addressing the angular distance of the direction of the W boson in the top rest system and the direction of the lepton in the W boson rest system, commonly referred to as $\\cos(\\theta^*)$.\n }\n \\label{fig:lbn_example}\n\\end{figure}\n\nThe number $N$ of incoming particles is to be chosen according to the research question.\nThe number of matching 
combinations $M$ is a hyperparameter to be adjusted.\nIn this paper we introduce a specific version of the LBN which constructs $M$ particle combinations, and each combination will have its own suitable rest system.\nOther variants are conceivable and will be mentioned below.\n\nIn the following paragraphs we describe the network architecture in detail.\n\n\\subsection*{Combinations}\n\\label{sec:architecture:combinations}\n\nThe purpose of the first LBN layer is to construct arbitrary particles and suitable rest frames for subsequent boosting.\nThe construction is realized via linear combinations of $N$ input four-vectors,\n\\begin{equation}\n X =\n \\begin{bmatrix}\n E_1 & p_{x,1} & p_{y,1} & p_{z,1}\\\\\n E_2 & p_{x,2} & p_{y,2} & p_{z,2}\\\\\n \\vdots & \\vdots & \\vdots & \\vdots\\\\\n E_N & p_{x,N} & p_{y,N} & p_{z,N}\n \\end{bmatrix},\n\\end{equation}\nto a number of $M$ particles and rest frames, which are represented by four-vectors accordingly.\nHere, $M$ is a hyperparameter of the LBN and its choice is related to the respective physics application.\nThe coefficients $W$ of all linear combinations $C$,\n\\begin{equation}\n C_{m} = \\sum_{n=1}^N W_{mn} \\cdot X_{n}\n\\end{equation}\nwith $m \\in \\left[1, M\\right]$, are free parameters and subject to optimization within the scope of the training process.\nIn the following, combined particles and rest frames are referred to as $C^P$ and $C^R$, respectively.\nTaking both into consideration, this amounts to a total of $2 \\cdot N \\cdot M$ degrees of freedom in the LBN.\nIn order to prevent the construction of objects which would lead to unphysical implications when applying Lorentz transformations, i.e., four-vectors not fulfilling $E > m > 0$, all parameters $W_{mn}$ are restricted to positive values.\nWe initialize the weights randomly according to a half-normal distribution with mean $0$ and standard deviation $1 \/ M$.\nIt should be noted that, in order to maintain essential physical properties and 
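As an illustration of the combination layer described above, the following NumPy sketch (not the published implementation; the values of $N$, $M$, and the placeholder inputs are assumptions for the example) builds $M$ combined particles and $M$ rest frames from $N$ input four-vectors using positive, half-normally initialized weights:

```python
import numpy as np

N, M = 8, 13                    # number of input four-vectors and of combinations (example values)
rng = np.random.default_rng(0)

# Positive weights: absolute value of a normal with scale 1/M,
# i.e. the half-normal initialization described in the text.
W_particles = np.abs(rng.normal(0.0, 1.0 / M, size=(M, N)))
W_restframes = np.abs(rng.normal(0.0, 1.0 / M, size=(M, N)))

# Placeholder input matrix X with rows (E, px, py, pz).
X = rng.uniform(1.0, 10.0, size=(N, 4))

# C_m = sum_n W_mn * X_n: a single matrix product yields all M combinations.
C_P = W_particles @ X           # combined particles, shape (M, 4)
C_R = W_restframes @ X          # combined rest frames, shape (M, 4)
```

Because the combination is a plain linear map, all $2 \cdot N \cdot M$ weights remain directly interpretable as the contribution of each input four-vector to each combination.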
relations between input four-vectors, feature normalization is not applied at this point.\n\n\n\\subsection*{Lorentz transformation}\n\\label{sec:architecture:boost}\n\nThe boosting layer performs a Lorentz transformation of the combined particles into their associated rest frames.\nThe generic transformation of a four-vector $q$ is defined as $q^* = \\Lambda \\cdot q$ with the boost matrix\n\\begin{align}\n \\Lambda &=\n \\begin{bmatrix}\n \\gamma & -\\gamma\\beta n_x & -\\gamma\\beta n_y & -\\gamma\\beta n_z\\\\\n -\\gamma\\beta n_x & 1 + (\\gamma - 1) n_x^2 & (\\gamma - 1) n_x n_y & (\\gamma - 1) n_x n_z\\\\\n -\\gamma\\beta n_y & (\\gamma - 1) n_y n_x & 1 + (\\gamma - 1) n_y^2 & (\\gamma - 1) n_y n_z\\\\\n -\\gamma\\beta n_z & (\\gamma - 1) n_z n_x & (\\gamma - 1) n_z n_y & 1 + (\\gamma - 1) n_z^2\n \\end{bmatrix}.\n \\label{eqn:lambda}\n\\end{align}\nThe relativistic parameters $\\gamma = E \/ m$, $\\vec{\\beta} = \\vec{p} \/ E$, and $\\vec{n} = \\vec{\\beta} \/ \\beta$ are to be derived per rest-frame four-vector $C^R_m$.\n\nThe technical implementation within deep learning algorithms requires a vectorized representation of the Lorentz transformation.\nTo this end, we rewrite the boost matrix in \\Eqn{eqn:lambda} as\n\\begin{align}\n \\Lambda &= I \\, + \\, (U \\oplus \\gamma) \\, \\odot \\, ((U \\oplus 1) \\cdot \\beta \\, - \\, U) \\, \\odot \\, (e \\cdot e^T)\\\\[2mm]\n \\text{with} \\quad U &=\n \\begin{bmatrix}\n -1^{1 \\times 1} & 0^{1 \\times 3}\\\\\n 0^{3 \\times 1} & -1^{3 \\times 3}\n \\end{bmatrix},\n \\quad e =\n \\begin{bmatrix}\n 1^{1 \\times 1}\\\\\n -\\vec{n}^{3 \\times 1}\n \\end{bmatrix},\n\\end{align}\nand the $4 \\times 4$ unit matrix $I$.\nThe operators $\\oplus$ and $\\odot$ denote elementwise addition and multiplication, respectively.\nThis notation also allows for the extension by additional dimensions to account for the number of combinations $M$ and an arbitrary batch size.\nAs a result, all boosted four-vectors $B$ are efficiently 
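As a numerical cross-check of the boost matrix defined above, a minimal NumPy sketch (a standalone illustration, not the authors' vectorized TensorFlow code) can build $\Lambda$ from a rest-frame four-vector; boosting a four-vector into its own rest frame must then yield $(m, 0, 0, 0)$:

```python
import numpy as np

def boost_matrix(rest):
    """Build the 4x4 boost matrix Lambda from a rest-frame four-vector
    (E, px, py, pz); assumes E > m > 0 and a non-zero momentum."""
    E, p = rest[0], rest[1:]
    m = np.sqrt(E**2 - p @ p)           # invariant mass
    gamma = E / m
    beta = np.linalg.norm(p) / E
    n = p / np.linalg.norm(p)           # unit vector along the boost axis
    L = np.zeros((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta * n
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(n, n)
    return L

# Boosting a toy four-vector into its own rest frame: the result should
# be approximately (m, 0, 0, 0).
q = np.array([50.0, 10.0, -20.0, 5.0])
q_star = boost_matrix(q) @ q
```

The vectorized form in the text performs exactly this construction for all $M$ rest frames and an arbitrary batch size at once.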
computed by a single, broadcasted matrix multiplication, $B = \\Lambda^R \\cdot C^P$.\n\nWhile the description above focuses on a pairwise mapping approach, i.e., particle $C^P_m$ is boosted into rest frame $C^R_m$, other configurations are conceivable:\n\\begin{itemize}\n \\item\n The input four-vectors are combined to build $M$ particles and $K$ rest frames.\n Each combined particle is transformed into all rest frames, resulting in $K \\cdot M$ boosted four-vectors.\n\n \\item\n The input four-vectors are combined only to build $M$ particles, which simultaneously serve as rest frames.\n Each combined particle is transformed into all rest frames derived from the other particles, resulting in $M^2 - M$ boosted four-vectors.\n\\end{itemize}\nThe specific advantages of these configurations can, in general, depend on aspects of the respective physics application.\nResults of these variants will be the target of future investigations.\n\n\n\\subsection*{Feature extraction}\n\\label{sec:architecture:features}\n\nFollowing the boosting layer, features are extracted from the Lorentz transformed four-vectors, which can then be utilized in a subsequent neural network.\nFor the projection of $M \\times 4$ particle properties into $F \\times 1$ features, we employ a distinct yet customizable set of generic mappings.\nThe autonomy of the network is not about finding entirely new features, but rather about factorizing probability densities in the most effective way possible to answer a scientific question.\nTherefore, we let the LBN network work autonomously to find suitable particle combinations and rest frames which enable this factorization, but then use established particle characterizations.\n\nWe differentiate between two types of generic mappings:\n\\begin{enumerate}\n \\item\n Single features are extracted per boosted four-vector.\n Besides the vector elements ($E$, $p_x$, $p_y$, $p_z$) themselves, derived features such as mass, transverse and longitudinal momentum, 
pseudorapidity, and azimuth can be derived.\n\n \\item\n Pairwise features are extracted for all pairs of boosted four-vectors.\n Examples are the cosine of their spatial angular difference, their distance in the $\\eta-\\phi$ plane, or their distance in Minkowski space.\n In contrast to single features, pairwise features introduce close connections among boosted four-vectors and, by means of backpropagation, between trainable combinations of particles and rest frames.\n\\end{enumerate}\nThe set of extracted features is concatenated to a single output vector.\nProvided that the employed batch size is sufficiently large, batch normalization with floating averages adjusted during training can be performed after this layer \\cite{batch_norm}.\n\n\n\\subsection*{Subsequent problem-specific application}\n\\label{sec:architecture:application}\n\nIn the previous sections, we described the LBN as shown in the left box in \\Fig{fig:lbn_arch}, namely how input vectors are combined and boosted into dedicated rest frames, followed by how features of these transformed four-vectors are compiled.\nThese features are intended to maximize the information content to be exploited in a subsequent, problem-specific deep neural network NN as shown in the right box in \\Fig{fig:lbn_arch}.\n\nThe objective function of the NN defines the type of learning process.\nWeight updates are performed as usual through backpropagation.\nThese updates apply to the weights of the subsequent network as well as to the trainable weights of the combination layer in the LBN.\n\nIn the following, we evaluate how well the autonomous feature engineering of the compound LBN+NN network operates.\nWe compare its performance with only low-level information to that of typical, fully-connected deep neural networks (DNNs).\nWe alternatively supply the DNNs with low-level information, sophisticated high-level variables (see \\Apx{sec:appendix}), and the combination of both.\n\n\n\\section{Simulated 
datasets}\n\\label{sec:simulations}\n\nThe Pythia $8.2.26$ program package \\cite{Pythia} was used to simulate $\\ttH$ and $\\ttbb$ events.\nExamples of corresponding Feynman diagrams are shown in \\Fig{fig:feynman}.\nThe matrix elements contain angular correlations of the decay products from heavy resonances.\nBeam conditions correspond to LHC proton-proton collisions at $\\sqrt{s} = 13$\\,TeV.\nOnly the dominant gluon-gluon process is enabled and Higgs boson decays into bottom quark pairs are favored.\nHadronization is performed with the Lund string fragmentation model.\n\\begin{figure}[h!tbp]\n \\centering\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.7\\textwidth]{feynman_ttH.pdf}\n \\caption{}\n \\label{fig:feynman_ttH}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.7\\textwidth]{feynman_ttbb.pdf}\n \\caption{}\n \\label{fig:feynman_ttbb}\n \\end{subfigure}\n \\caption{Example Feynman diagrams of a) $\\ttH$ and b) $\\ttbb$ processes.}\n \\label{fig:feynman}\n\\end{figure}\n\nTo analyze a typical final state as observed in an LHC detector, we use the DELPHES package \\cite{deFavereau:2013fsa}.\nDELPHES provides a modular framework designed to parameterize the simulation of a multi-purpose detector.\nIn our study, we chose to simulate the CMS detector response.\nAll important effects such as pile-up, deflection of charged particles in magnetic fields, electromagnetic and hadronic calorimetry, and muon detection systems are covered.\nThe simulated output consists of muons, electrons, tau leptons, photons, jets, and missing transverse momentum from a particle flow algorithm.\nAs the neutrino leaves the detector without interacting, we identify its transverse momentum components from the measurement of missing transverse energy, and we reconstruct its longitudinal momentum by constraining the mass of the leptonically decaying W boson to $80.4$\\,GeV.\n\nFollowing the detector 
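The W-mass constraint used above for the neutrino reconstruction leads to a quadratic equation in the longitudinal momentum. A possible implementation is sketched below (the root handling for complex solutions is one common convention and an assumption here, as the text does not specify it):

```python
import math

def neutrino_pz(lep, met_x, met_y, mw=80.4):
    """Longitudinal neutrino momentum from the constraint
    (p_lep + p_nu)^2 = mw^2, with lepton and neutrino treated as massless.
    `lep` is the charged-lepton four-vector (E, px, py, pz); returns both
    roots of the quadratic, or the real part twice if the discriminant
    is negative (an assumed convention)."""
    E, px, py, pz = lep
    pt2 = px**2 + py**2
    k = mw**2 / 2.0 + px * met_x + py * met_y
    disc = k**2 - pt2 * (met_x**2 + met_y**2)
    if disc < 0.0:
        return (k * pz / pt2, k * pz / pt2)
    root = E * math.sqrt(disc)
    return ((k * pz - root) / pt2, (k * pz + root) / pt2)

# Toy check: a W boson of mass 80 GeV whose true neutrino p_z is -40 GeV;
# one of the two roots recovers this value.
sols = neutrino_pz((50.0, 30.0, 0.0, 40.0), met_x=30.0, met_y=0.0, mw=80.0)
```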
simulation, an event selection with the following criteria was carried out:\nEvents must have at least six jets clustered with the anti-$k_T$ algorithm, implemented in the FastJet package \\cite{Cacciari:2011ma}, with a radius parameter of $\\Delta R = 0.4$.\nA jet is accepted if its transverse momentum fulfills $p_t > 25$\\,GeV and its absolute pseudorapidity is within $|\\eta| < 2.4$.\nFurthermore, exactly one electron or muon with $p_t > 20$\\,GeV and $|\\eta| < 2.1$ is required.\nEvents with further leptons are rejected to focus solely on semi-leptonic decays of the $\\ttbar$ system.\nFinally, a matching is performed between the final-state partons of the generator and the jets of the detector simulation with the maximum distance $\\Delta R = 0.3$, whereby the matching must be unambiguously successful for all partons.\nFor $\\ttH$ events, the combined selection efficiency was measured as $2.3$\\,\\%.\n\nFor $\\ttbb$ processes, further measures are taken to identify the two additional bottom quark jets.\nThe definition is an adaption of \\cite{Sirunyan:2018mvw}.\nAt generator level, the anti-$k_T$ algorithm with a radius parameter of $\\Delta R = 0.4$ is used to cluster all stable final-state particles.\nIf a generator jet is found to originate from one of the top quarks or W boson decays, or if its transverse momentum is below a threshold of $20$\\,GeV, it is excluded from further considerations.\nFor each remaining jet, we then count the number of contained bottom hadrons using the available hadronization and decay history.\nAn event is a $\\ttbb$ candidate if at least two generator jets contain one or more distinct bottom hadrons.\nFinally, a matching is performed between the four quarks resulting from the $\\ttbar$ decay and the two identified generator jets on the one hand, and the selected, reconstructed jets on the other hand.\nSimilar to $\\ttH$, a $\\ttbb$ event is accepted if all six generator-level objects are unambiguously matched to a selected 
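The jet and lepton requirements listed above can be summarized in a schematic filter (a simplified sketch: b-tagging, the veto on additional leptons, and the parton matching are omitted):

```python
def select_event(jets, leptons):
    """Schematic version of the event selection described in the text.
    `jets` and `leptons` are lists of (pt, eta) pairs -- a simplified
    stand-in for fully reconstructed objects."""
    good_jets = [j for j in jets if j[0] > 25.0 and abs(j[1]) < 2.4]
    good_leps = [l for l in leptons if l[0] > 20.0 and abs(l[1]) < 2.1]
    # At least six selected jets and exactly one charged lepton.
    return len(good_jets) >= 6 and len(good_leps) == 1
```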
jet.\nThis leaves a fraction of $0.02$\\,\\% of the generated $\\ttbb$ events.\n\nA total of $10^6$ events remain, consisting of an evenly distributed number of $\\ttH$ and $\\ttbb$ events, which are then divided into training and validation datasets at a ratio of $80:20$.\n\nFor further analysis, a distinct naming scheme for reconstructed jets is introduced that is inspired by the semi-leptonic decay characteristics of the $\\ttbar$ system.\nThe two light quark jets of the hadronically decaying W boson are named $\\qi$ and $\\qii$, where the index $1$ refers to the jet with the greater transverse momentum.\nThe bottom jet that is associated to this W boson within a top quark decay is referred to as $\\bhad$.\nAccordingly, the bottom quark jet related to the leptonically decaying W boson is referred to as $\\blep$.\nThe remaining jets are named $\\bi$ and $\\bii$, and transparently refer to the decay products of the Higgs boson for $\\ttH$, or to the additional b quark jets for $\\ttbb$ events.\n\nIn \\Fig{fig:dataset} we show, by way of example, distributions of the generated $\\ttH$ and $\\ttbb$ events.\nIn \\Fig{fig:dataset-a} we compare the transverse momentum of the jet $\\bi$, and in \\Fig{fig:dataset-b} the largest difference in pseudorapidity between a jet and the charged lepton.\nIn \\Fig{fig:dataset-c} we show the invariant mass of the closest jet pair, and in \\Fig{fig:dataset-d} the event sphericity.\n\\begin{figure}[h!tbp]\n \\centering\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{dataset_bj1_pt.pdf}\n \\caption{}\n \\label{fig:dataset-a}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{dataset_jet_lep_max_abs_deta.pdf}\n \\caption{}\n \\label{fig:dataset-b}\n \\end{subfigure}\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{dataset_jet_closest_pair_mass.pdf}\n \\caption{}\n \\label{fig:dataset-c}\n 
\\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{dataset_sphericity.pdf}\n \\caption{}\n \\label{fig:dataset-d}\n \\end{subfigure}\n \\caption{\n Exemplary comparisons of kinematic distributions of $\\ttH$ and $\\ttbb$ events; a)~transverse momentum of the jet $\\bi$, b)~largest difference in pseudorapidity of a jet to the charged lepton, c)~invariant mass of the closest jet pair, d)~event sphericity.\n }\n \\label{fig:dataset}\n\\end{figure}\n\n\n\\section{Benchmarks of network performance}\n\\label{sec:performance}\n\nAs a benchmark, the LBN is utilized in the classification task of distinguishing a signal process from a background process.\nIn this example, we use the production of a top quark pair in association with a Higgs boson decaying into bottom quarks ($\\ttH$) as a signal process, and top quark pairs produced with $2$ additional bottom jets ($\\ttbb$) as a background process.\nIn both processes, the final state consists of eight four-vectors, four of which represent bottom quark jets, two describe light quark jets, one denotes a charged lepton, and one a neutrino.\n\nWithin the benchmark, the LBN competes against other frequently used deep neural network setups, which we refer to as DNN here.\nFor a meaningful comparison, we perform extensive searches for the optimal hyperparameters in each setup.\nAs explained above, the LBN consists of only very few parameters to be trained in addition to those of the following neural network (LBN+NN).\nStill, by varying the number $M$ of particle combinations to be constructed from the eight input four-vectors, and by varying the number $F$ and types of generated features, we performed a total of $346$ training runs of the LBN+NN setup, and a similar amount for the competing DNN.\nThe best performing architectures are listed in \\Apx{sec:appendix}.\n\nThe LBN network exclusively receives the four-vectors of the eight particles introduced above, which shall 
be denoted by 'LBN low' in the following.
For the networks marked with DNN we use three variants of inputs.
In the first variant, the DNN receives only the four-vectors of the eight particles ('DNN low').
For the second variant, physics knowledge is applied in the form of $26$ sophisticated high-level variables which are inspired by \cite{Sirunyan:2018mvw} and incorporate comprehensive event information ('DNN high').
A list of these variables, such as the event sphericity \cite{event_shape_variables} and Fox-Wolfram moments \cite{fox_wolfram_moments}, can be found in the \Apx{sec:appendix}.
In the third variant, we provide the DNNs with both low-level and high-level variables as input ('DNN combined').

In the first set of our benchmark tests, the task of input particle identification is bypassed by using the generator information in order to exclusively measure the performance of the LBN.
Jets associated to certain quarks through matching are consistently placed at the same position of the $N$ input four-vectors.
We quantify how well the event classification of the networks performs with the integral of the receiver operating characteristic curve (ROC AUC).

\Fig{fig:perf_comparison_a} shows the performance of all training runs involved in the hyperparameter search.
The best training is marked with a horizontal line, while the distribution of results is denoted by the orange and blue shapes, which manifestly depend on the structure of the scanned hyperparameter space.
For the LBN, the training runs achieve stable results since there are only minor variations in the resulting ROC AUC.
\begin{figure}[h!tbp]
  \centering
  \begin{subfigure}{0.5\textwidth}
    \centering
    \includegraphics[width=0.9\textwidth]{training_distributions_best.pdf}
    \caption{}
    \label{fig:perf_comparison_a}
  \end{subfigure}%
  \begin{subfigure}{0.5\textwidth}
    \centering
    \includegraphics[width=0.9\textwidth]{training_distributions_reco.pdf}
    
\caption{}
    \label{fig:perf_comparison_b}
  \end{subfigure}
  \caption{
    Benchmarks of the LBN performance in comparison to typical deep neural network architectures (DNNs) with three variants of input variables;
    low=four-vectors, high=physics-motivated variables, combined=low+high.
    Ordering of the input four-vectors is done by a) generator information and b) jet transverse momenta.
  }
  \label{fig:perf_comparison}
\end{figure}

Also shown are the training runs of the three input variants for the DNNs.
If the eight input four-vectors are ordered according to the generator information, the 'DNN low' setup already achieves results equivalent to the combination of low-level and high-level variables ('DNN combined').
However, both setups are unable to match the 'LBN low' result.
Note that high-level variables are mostly independent of the input ordering by construction, and hence contain reduced information compared to the ordered four-vectors.
Consequently, the weaker performance of the 'DNN high' is well understood.

In the second set of our benchmark tests, we disregard generator information and instead sort the six jets according to their transverse momenta, followed by the charged lepton and the neutrino.
This order can be easily established in measured data.
\Fig{fig:perf_comparison_b} shows that this alternative ordering consistently reduces the ROC AUC score of all networks.

The training runs of the LBN again show stable results with few exceptions and achieve the best performance in separating events from signal and background processes.
For the DNNs, the above-mentioned hierarchy emerges, i.e., the best results are achieved using the combination of high- and low-level variables, followed by only high-level variables with reasonably good results, whereas the DNN with only low-level information exhibits the weakest performance.

Overall, the performance of the compound LBN+NN structure based on the four-vector inputs is quite 
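The ROC AUC used as the figure of merit in this comparison has a simple probabilistic reading: it is the probability that a randomly chosen signal event receives a higher network score than a randomly chosen background event (ties counted as one half). A minimal reference implementation, for illustration only:

```python
def roc_auc(signal_scores, background_scores):
    """Probability that a random signal score exceeds a random background
    score, with ties counted as 1/2 -- equal to the area under the ROC curve."""
    wins = 0.0
    for s in signal_scores:
        for b in background_scores:
            if s > b:
                wins += 1.0
            elif s == b:
                wins += 0.5
    return wins / (len(signal_scores) * len(background_scores))
```

Production evaluations would use an optimized library routine; this quadratic version merely makes the definition explicit.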
convincing in this demanding benchmark test.\n\n\n\\section{Visualizations of network predictions}\n\\label{sec:insights}\n\nSeveral components of the LBN architecture have a direct physics interpretation.\nTheir investigation can provide insights into what the network is learning.\nIn the following, we investigate only the best network trained on the generator-ordered input in order to establish which of the quark and lepton four-vectors are combined.\n\nThe number of particle combinations to create is a hyperparameter of the architecture.\nFor the generator ordering, we obtain the best results for $13$ particles.\nThis matches well with our intuition since the Feynman diagram also contains $13$ particles in the cascade.\nIn total we extract $F = 143$ features in the LBN that serve as input to the subsequent deep neural network NN (\\Fig{fig:lbn_arch}).\nFor details refer to the \\Apx{sec:appendix}.\n\nIn our particular setup, each combined particle has a separate corresponding rest frame.\nAs a consequence of demanding the second network to solve a research question, the decisions about which four-vectors are combined for the composite particle and which for its rest frame are strongly correlated.\nA correlation also exists between all $13$ systems of combined particles and their rest frames from the pairwise angular distances exploited by the LBN as additional properties.\n\nWe start by looking at the weights used to combine the input four-vectors to form the particle combinations and rest frames.\n\n\n\\subsection*{Weight matrices of particle and rest-frame combinations}\n\n\\Fig{fig:weight_gen_particles_normed} shows the weight matrices of the LBN.\nEach column of the two matrices for combined particles (red) and rest frames (green) is normalized so that the weight of each input four-vector (bottom quark jet $\\bi$ of the Higgs, etc.) 
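The column-wise normalization used for this visualization can be written compactly (a sketch; `W` stands for either of the two trained weight matrices, arranged here with the $N$ inputs as rows and the $M$ combinations as columns):

```python
import numpy as np

def to_percentages(W):
    """Scale each column (one combination) of a non-negative weight matrix
    of shape (N, M) so that its entries sum to 100."""
    return 100.0 * W / W.sum(axis=0, keepdims=True)

# Toy 2-input, 2-combination example.
P = to_percentages(np.array([[1.0, 3.0],
                             [1.0, 1.0]]))
```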
is shown as a percentage.
The color code reflects the sum of the particle weights for the Higgs boson (or $b\bar{b}$ system for $\ttbb$) and the two top quarks, respectively.
\begin{figure}[h!tbp]
  \centering
  \begin{subfigure}{1\textwidth}
    \centering
    \includegraphics[width=0.65\textwidth]{weight_particles.pdf}
    \caption{}
    \label{fig:weight_gen_particles_normed-a}
  \end{subfigure}
  \begin{subfigure}{1\textwidth}
    \centering
    \includegraphics[width=0.65\textwidth]{weight_restframes.pdf}
    \caption{}
    \label{fig:weight_gen_particles_normed-b}
  \end{subfigure}
  \caption{
    Particle combinations (red) and their rest frames (green).
    The numbers represent the relative weights of the input particles for each combination $i$.
    The color code reflects the sum of the particle weights for the Higgs boson ($\ttH$), the $b\bar{b}$ system ($\ttbb$), or the two top quarks.
    For better clarity of the presentation, the zero weights are kept white.
  }
  \label{fig:weight_gen_particles_normed}
\end{figure}

Note that combined particles and rest frames are complementary.
Extreme examples of this can be seen in columns $2$ and $6$, in which one of the two bottom quark jets from the Higgs boson decay was selected as the rest frame, and a broad combination of all other four-vectors was formed, whose properties are exploited for the event classification.

Conversely, in column $0$ a Higgs-like combination is transformed into the rest frame of all particles, to which the hadronic top quark makes the main contribution.
Similarly, in columns $1$ and $7$, top quark-like combinations are formed and also transformed into rest frames formed by many four-vectors.
These types of combinations allow the Lorentz boost of the center-of-mass system of the scattering process to be nearly compensated.

It is striking that four-vector combinations forming a top quark are often selected.
In \Fig{fig:correlations}, we quantitatively 
assess the $13$ possible combinations of the combined particles (red) and the $13$ rest frames (green) in order to determine which combinations are typically formed between the eight input four-vectors.\n\\begin{figure}[htbp]\n \\centering\n \\begin{subfigure}{.49\\textwidth}\n \\centering\n \\includegraphics[width=1\\textwidth]{correlations_particles.pdf}\n \\caption{}\n \\label{fig:correlations-a}\n \\end{subfigure}\n \\begin{subfigure}{.49\\textwidth}\n \\centering\n \\includegraphics[width=1\\textwidth]{correlations_restframes.pdf}\n \\caption{}\n \\label{fig:correlations-b}\n \\end{subfigure}\n \\caption{\n Correlations of quarks and leptons for particle combinations (red) and for rest frames (green).\n Both the numbers and the color code represent the correlation strength.\n }\n \\label{fig:correlations}\n\\end{figure}\n\nIn order to build rest frames (green), the LBN network recognizes combinations leading to the top quarks and treats the bottom quark jets from the Higgs individually.\nThis appears to be an appropriate choice as the top quark pair is the same in $\\ttbb$ and $\\ttH$ events while the two bottom quarks typically originate from gluon splitting or the Higgs decay, respectively.\nFor the hadronic top quark, the W boson is often included as well ($75$\\,\\%).\nFor the leptonic top quark, the LBN network combines the bottom quark jet and the lepton ($91$\\,\\%).\nThe lepton and the neutrino are combined to form the W boson ($68$\\,\\%).\n\nTo form the particle combinations (red), the LBN network combines the light quark jets into the W boson ($78$\\,\\%) and the hadronic top quark.\nFor the leptonic top quark, the LBN combines the bottom quark jet and the lepton ($61$\\,\\%).\nSometimes the lepton and the neutrino are also correlated to build the W boson ($22$\\,\\%).\nThe LBN rarely forms the Higgs boson combination ($-4$\\,\\%).\n\nThe positive and negative correlations show that the LBN forms physics-motivated particle combinations from the training data.\nIn the next step, we 
investigate the usefulness of these combinations for the separation of $\\ttH$ and $\\ttbb$ events.\n\n\\subsection*{Distributions resulting from combined particle properties}\n\nThe performance of particle combinations and their transformations into reference frames in the LBN is evaluated through the extracted features which support the classification of signal and background events.\nHere, we present the distributions of different variables calculated from the combined particles.\n\n\\Fig{fig:m_gen_ttH_a} shows the invariant mass $m$ distributions of all $13$ combined particles for the $\\ttH$ signal dataset.\n\\Fig{fig:m_gen_ttH_b} shows the mass distributions for the $\\ttbb$ background dataset.\nThe difference between the two histograms is shown in Figure~\\ref{fig:m_gen_ttH_c}, where combined particle $0$ of the first column shows the largest difference between signal and background.\n\\Fig{fig:weight_gen_particles_normed} shows that this combination $0$ approximates a Higgs-like configuration (red).\n\\begin{figure}[h!tbp]\n \\centering\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{feature_m_gen_ttH.pdf}\n \\caption{}\n \\label{fig:m_gen_ttH_a}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{feature_m_gen_ttbb.pdf}\n \\caption{}\n \\label{fig:m_gen_ttH_b}\n \\end{subfigure}\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{feature_m_gen_diff.pdf}\n \\caption{}\n \\label{fig:m_gen_ttH_c}\n \\end{subfigure}%\n \n \n \n \\caption{\n Masses of combined particles for a) $\\ttH$, b) $\\ttbb$, and c) the differences between $\\ttH$ and $\\ttbb$.\n }\n \\label{fig:m_gen_ttH}\n\\end{figure}\n\nFor this Higgs-like configuration (combined particle $0$), we show mass distributions in \\Fig{fig:slices_a}.\nFor $\\ttH$, the invariant mass exhibits a narrow distribution around $m = 22$ a.u. 
reflecting the Higgs-like combined particle, while the $\\ttbb$ background is widely distributed.\nNote that because of the weighted four-vector combinations, arbitrary units are used, so that the Higgs boson mass is not stated in GeV.\n\\begin{figure}[h!tbp]\n \\centering\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_m_gen_0.pdf}\n \\caption{}\n \\label{fig:slices_a}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_m_gen_5.pdf}\n \\caption{}\n \\label{fig:slices_b}\n \\end{subfigure}\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_pt_gen_0.pdf}\n \\caption{}\n \\label{fig:slices_c}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_pt_gen_5.pdf}\n \\caption{}\n \\label{fig:slices_d}\n \\end{subfigure}\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_eta_gen_0.pdf}\n \\caption{}\n \\label{fig:slices_e}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_eta_gen_5.pdf}\n \\caption{}\n \\label{fig:slices_f}\n \\end{subfigure}\n \\caption{\n Example comparisons of kinematic distributions for $\\ttH$ and $\\ttbb$ processes.\n Figures a) and b) show the masses of the combined particles $0$ and $5$.\n Figures c)-f) also present their transverse momenta and pseudorapidities in the rest frames.\n }\n \\label{fig:slices}\n\\end{figure}\n\nAnalogously, we determine the transverse momentum $p_t$ and pseudorapidity $\\eta$ distributions of the $13$ particles for signal and background events, and then compute their difference to visualize distinctions between $\\ttH$ and $\\ttbb$.\nWhile the invariant mass distribution can also be formed without Lorentz transformation, both the $p_t$ and $\\eta$ distributions are 
determined in the respective rest frames.\nClear differences in the distributions are apparent in \\Fig{fig:gen_ttH_ttbb_a} for the transverse momenta of the combined particles $11, 8, 5, 10$ (in descending order of importance), while the combined particles $5, 9, 12, 10$ are most prominent for the pseudorapidities (\\Fig{fig:gen_ttH_ttbb_b}).\n\\begin{figure}[h!tbp]\n \\centering\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{feature_pt_gen_diff.pdf}\n \\caption{}\n \\label{fig:gen_ttH_ttbb_a}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{feature_eta_gen_diff.pdf}\n \\caption{}\n \\label{fig:gen_ttH_ttbb_b}\n \\end{subfigure}\n \\caption{\n Differences between $\\ttH$ and $\\ttbb$ processes for combined particles in the rest frames of particle combinations, a) transverse momenta and b) pseudorapidities.\n }\n \\label{fig:gen_ttH_ttbb}\n\\end{figure}\n\nAs a selection of the many possible distributions, in \\Fig{fig:slices} we show the invariant mass $m$, the transverse momentum $p_t$, and the pseudorapidity $\\eta$ for two combined particles.\n\\Figs{fig:slices_a}, \\ref{fig:slices_c}, and \\ref{fig:slices_e} present the distributions of the combined Higgs-like particle $0$.\nIn addition to the prominent difference in the mass distributions, for $\\ttH$ events its $p_t$ is smaller while its direction is more central in $\\eta$ compared to $\\ttbb$ events.\n\nFurthermore, in \\Figs{fig:slices_b}, \\ref{fig:slices_d}, and \\ref{fig:slices_f} we show combined particle $5$ because of its separation power in pseudorapidity and in transverse momentum (\\Fig{fig:gen_ttH_ttbb}).\nCombination $5$ is essentially determined by the bottom quark jet $\\bi$ boosted into a rest frame of all other four-vectors (\\Fig{fig:weight_gen_particles_normed}).\nHere, the distribution of mass $m$ for $\\ttH$ events is smaller, $p_t$ is larger and more defined, and $\\eta$ is 
more central than for $\\ttbb$ events.\n\nIn the example of the characteristic angular distribution of the charged lepton in top quark decays (\\Fig{fig:lbn_example}, $\\cos{\\theta^*}$), two different reference systems are combined to determine the opening angle $\\theta^*$ between the W boson in the top quark rest frame and the lepton in the W boson rest frame.\nAs mentioned above, the LBN is capable of calculating these complex angular correlations involving four-vectors evaluated in different rest frames.\n\nAs an example, we select distributions for angular distances of combined particles $2$ and $5$ (\\Fig{fig:weight_gen_particles_normed}).\nParticle $2$ is a combination of all four-vectors boosted into the rest frame of the bottom quark jet $\\bii$, whereas particle $5$ corresponds to the bottom quark jet $\\bi$ boosted into a combination of all four-vectors.\n\\Fig{fig:cosangle_a} shows that, for $\\ttbb$ background events, combined particles $2$ and $5$ predominantly propagate back-to-back, while the $\\ttH$ signal distribution scatters broadly around $90$ deg.\n\\begin{figure}[h!tbp]\n \\centering\n \\begin{subfigure}{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_pair_cos_gen_2_5.pdf}\n \\caption{}\n \\label{fig:cosangle_a}\n \\end{subfigure}%\n \\begin{subfigure}{.5\\textwidth}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{sliced_pair_cos_gen_2_6.pdf}\n \\caption{}\n \\label{fig:cosangle_b}\n \\end{subfigure}\n \\caption{\n Cosine of the opening angles between combined particle $2$ ($\\ttbar + \\bi$) in its rest frame $\\bii$ and a) combined particle $5$ ($\\bi$) in its rest frame ($\\ttbar + \\bi + \\bii$), and b) combined particle $6$ ($\\ttbar + \\bii$) in its rest frame ($\\bi$).\n }\n \\label{fig:cosangle}\n\\end{figure}\n\nThe opposite holds true for the angles between combined particles $2$ and $6$ (\\Fig{fig:weight_gen_particles_normed}).\nHere, the bottom quark jet $\\bi$, or alternatively $\\bii$, 
serves as a rest frame for characterizing combinations of both top quarks together with the other bottom quark jet.\n\\Fig{fig:cosangle_b} shows that, for $\\ttbb$, combined particles $2$ and $6$ preferentially propagate in the same direction, whereas for $\\ttH$ signal events this is less pronounced.\n\nOverall, it can be concluded that the LBN network, when given only low-level variables as input, seems to autonomously build and transform particle combinations leading to extracted features that are suitable for the task of separating signal and background events.\n\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nThe various physics processes from which the events of high-energy particle collisions originate lead to complex particle final states.\nOne reason for the complexity is the high energy that leads to Lorentz boosts of the particles and their decay products.\nFor an analysis task such as the identification of processes, we reconstruct aspects of the probability distributions from which the particles were generated.\nHow well the probability distributions of the individual physics processes are reflected is encoded in a series of characteristic variables.\nTo accomplish this, both the Minkowski metric for the correct calculation of energy-momentum conservation or invariant masses and the Lorentz transformation into the rest frames of parent particles are necessary.\nSo far, such characteristic variables have been engineered by physicists.\n\nIn this paper, we presented a general two-stage neural network architecture.\nThe first stage, the novel Lorentz Boost Network, contains at its core an efficient and fully vectorized implementation for performing Lorentz transformations.\nThe aim of this stage is to autonomously generate variables suitable for the characterization of collision events when using only particle four-vectors.\nFor this purpose, the input four-vectors are combined separately into composite particles and rest frames.\nThe composite particles are 
boosted into corresponding rest frames and a generic set of variables is extracted from the boosted particles.\nThe second stage of the network then uses these autonomously generated variables to solve a particular physics question.\nBoth the weights used to create the combinations of input four-vectors and the parameters of the second stage are learned together in a supervised training process.\n\nTo assess the performance of the LBN, we investigated a benchmark task of distinguishing $\\ttH$ and $\\ttbb$ processes.\nWe demonstrated the improved separation power of our model compared to domain-unspecific deep neural networks, where the latter even used sophisticated high-level variables as input in addition to the four-vectors.\nFurthermore, we developed visualizations to gain insight into the training process and discovered that the LBN learns to identify physics-motivated particle combinations from the data used for training.\nExamples are top-quark-like combinations or the approximate compensation of the Lorentz boost of the center-of-mass system of the scattering process.\n\nThe LBN is a multipurpose method that uses Lorentz transformations to exploit and uncover structures in particle collision events.\nIt is part of an ongoing comparison of methods to identify boosted top quark jets and has already been successfully applied at the IML workshop challenge at CERN \\cite{iml2018}.\nThe source code of our implementation in TensorFlow \\cite{tensorflow} is publicly available under the BSD license \\cite{lbncode}.\n\n\\begin{appendix}\n\n\\section*{Appendix}\n\\label{sec:appendix}\n\n\\subsection*{Network parameters}\n\nThe network parameters of the best-performing networks are listed in Table~\\ref{tab:network-parameters}.\n\\begin{table}[h!tbp]\n \\centering\n \\caption{\n All deep neural networks use a fully connected architecture with $n_\\text{layers}$ and $n_\\text{nodes}$.\n ELU is used as the activation function.\n The Adam optimizer \\cite{kingma2014} 
is employed with decay parameters $\\beta_1 = 0.9$, $\\beta_2 = 0.999$, and a learning rate of $10^{-4}$.\n $L_2$ regularization is applied with a factor of $10^{-4}$.\n Batch normalization between layers is utilized in every configuration.\n The generic mappings in the LBN feature extraction layer create $E, p_T, \\eta, \\phi, m$, and pairwise $\\cos(\\varphi)$.\n }\n \\label{tab:network-parameters}\n \\begin{tabular}{c|cc|cc|cc|cc}\n \\toprule\n Network & \\multicolumn{2}{c|}{\\textbf{LBN+NN}} & \\multicolumn{6}{c}{\\textbf{DNN}}\\\\\n Variables & \\multicolumn{2}{c|}{\\textbf{low-level}} & \\multicolumn{2}{c|}{\\textbf{low-level}} & \\multicolumn{2}{c|}{\\textbf{high-level}} & \\multicolumn{2}{c}{\\textbf{combined}} \\\\\n Input Ordering & \\textbf{gen.} & $\\mathbf{p_T}$ & \\textbf{gen.} & $\\mathbf{p_T}$ & \\textbf{gen.} & $\\mathbf{p_T}$ & \\textbf{gen.} & $\\mathbf{p_T}$\\\\\n \\midrule\n $M_\\text{part.,rest fr.}$ & $13$ & $16$ & \\verb|-| & \\verb|-| & \\verb|-| & \\verb|-| & \\verb|-| & \\verb|-|\\\\\n $n_\\text{layers}$ & $8$ & $8$ & $8$ & $8$ & $4$ & $4$ & $8$ & $6$\\\\\n $n_\\text{nodes}$ & $1024$ & $1024$ & $1024$ & $1024$ & $512$ & $512$ & $1024$ & $1024$\\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\n\\subsection*{High-level variables}\n\nThe high-level variables employed in the training of the DNN benchmark comparison are inspired by \\cite{Sirunyan:2018mvw}:\n\\begin{itemize}\n \\item\n The event shape variables sphericity, transverse sphericity, aplanarity, and centrality \\cite{event_shape_variables}.\n\n \\item\n The first five Fox-Wolfram moments \\cite{fox_wolfram_moments}.\n\n \\item\n The cosine of the spatial angular difference $\\theta^*$ between the charged lepton in the W boson rest frame and the W boson direction when boosted into the rest frame of its corresponding top quark.\n In the hadronic branch, the down-type quark is used owing to its increased spin analyzing power \\cite{spin_analyzing_power}.\n\n \\item\n The minimum, maximum and 
average of the distance in pseudorapidity $\\eta$ and $\\phi$ phase space $\\Delta R$ of jet pairs.\n\n \\item\n The minimum, maximum and average $|\\Delta\\eta|$ of jet pairs.\n\n \\item\n The minimum and maximum of the distance in $\\Delta R$ of jet-lepton pairs.\n\n \\item\n The minimum, maximum and average $|\\Delta\\eta|$ of jet-lepton pairs.\n\n \\item\n The sum of the transverse momenta of all jets.\n\n \\item\n The transverse momentum and the mass of the jet pair with the smallest $\\Delta R$.\n\n \\item\n The transverse momentum and the mass of the jet pair whose combined mass is closest to the Higgs boson mass m$_H = 125$\\,GeV \\cite{CMS2012:higgs}.\n\\end{itemize}\n\n\\end{appendix}\n\n\n\\acknowledgments\nWe wish to thank Jonas Glombitza for his valuable comments on the manuscript.\nWe would also like to thank Jean-Roch Vlimant, Benjamin Fischer, Dennis Noll, and David Schmidt for a variety of fruitful discussions.\nThis work is supported by the Ministry of Innovation, Science and Research of the State of North Rhine-Westphalia, and by the Federal Ministry of Education and Research (BMBF).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nScattering amplitudes and cross sections simplify in infrared kinematic limits enabling insight into their all orders perturbative structure.\nOne of the most powerful approaches to studying the all orders structure is the use of renormalization group (RG) techniques. Depending on the particular nature of the kinematic limit considered, these RG equations often describe evolution not just in the standard virtuality scale, $\\mu$, but also in other additional physical scales. A common example in gauge theories are rapidity evolution equations, which allow for the resummation of infrared logarithms associated with hierarchical scales in rapidity. 
Classic examples include the Sudakov form factor \\cite{Collins:1989bt}, the Collins-Soper equation \\cite{Collins:1981uk,Collins:1981va,Collins:1984kg} describing the $p_T$ spectrum for color singlet boson production in hadron colliders, the BFKL evolution equations describing the Regge limit \\cite{Kuraev:1977fs,Balitsky:1978ic,Lipatov:1985uk}, and the rapidity renormalization group \\cite{Chiu:2011qc,Chiu:2012ir} describing more general event shape observables.\n\nThere has recently been significant effort towards understanding subleading power corrections to infrared limits \\cite{Manohar:2002fd,Beneke:2002ph,Pirjol:2002km,Beneke:2002ni,Bauer:2003mga,Hill:2004if,Lee:2004ja,Dokshitzer:2005bf,Trott:2005vw,Laenen:2008ux,Laenen:2008gt,Paz:2009ut,Benzke:2010js,Laenen:2010uz,Freedman:2013vya,Freedman:2014uta,Bonocore:2014wua,Larkoski:2014bxa,Bonocore:2015esa,Bonocore:2016awd,Kolodrubetz:2016uim,Moult:2016fqy,Boughezal:2016zws,DelDuca:2017twk,Balitsky:2017flc,Moult:2017jsg,Goerke:2017lei,Balitsky:2017gis,Beneke:2017ztn,Feige:2017zci,Moult:2017rpl,Chang:2017atu,Beneke:2018gvs,Beneke:2018rbh,Moult:2018jjd,Ebert:2018lzn,Ebert:2018gsn,Bhattacharya:2018vph,Boughezal:2018mvf,vanBeekveld:2019prq,vanBeekveld:2019cks,Bahjat-Abbas:2019fqa,Beneke:2019kgv,Boughezal:2019ggi,Moult:2019mog,Beneke:2019mua,Cieri:2019tfv,Moult:2019uhz,Beneke:2019oqx} with the ultimate goal of achieving a systematic expansion, much like for problems where there exists a local operator product expansion (OPE). Using Soft Collinear Effective Theory (SCET) \\cite{Bauer:2000ew, Bauer:2000yr, Bauer:2001ct, Bauer:2001yt}, subleading power infrared logarithms were resummed to all orders using RG techniques for a particular class of event shape observables where only virtuality evolution is required. 
This was achieved both in pure Yang-Mills theory \\cite{Moult:2018jjd}, and including quarks in $\\mathcal{N}=1$ QCD \\cite{Moult:2019uhz}, and a conjecture for the result including quarks in QCD was presented in \\cite{Moult:2019uhz}. Subleading power infrared logarithms have also been resummed for color singlet production at kinematic threshold, when only soft real radiation is present \\cite{Beneke:2018gvs,Beneke:2019mua,Bahjat-Abbas:2019fqa}.\n\nIn this paper we build on the recent advances in understanding the structure of subleading power renormalization group equations in SCET, and consider for the first time the resummation of subleading power rapidity logarithms. Using renormalization group consistency arguments, we derive a class of subleading power rapidity evolution equations. These equations involve mixing into a new class of operators, which play a crucial role in the renormalization group equations. We call these operators ``rapidity identity operators\\", derive their renormalization group properties, and solve the associated RG equations. \n\nWe apply our evolution equations to derive the all orders structure of the power suppressed leading logarithms for the energy-energy correlator (EEC) event shape in $\\mathcal{N}=4$ super-Yang-Mills (SYM) theory in the back-to-back (double light cone) limit. Denoting these power suppressed contributions by $\\text{EEC}^{(2)}$, we find the remarkably simple formula\n\\begin{align}\n\\boxed{\\text{EEC}^{(2)}=-\\sqrt{2a_s}~D\\left[ \\sqrt{\\frac{\\Gamma^{\\mathrm{cusp}}}{2}} \\log(1-z) \\right]}\\,,\n\\end{align}\nwhere $D(x)=\\tfrac{\\sqrt{\\pi}}{2}e^{-x^2}\\mathrm{erfi}(x)$ is Dawson's integral, $a_s=\\alpha_s\\/(4\\pi)C_A$, and $\\Gamma^{\\mathrm{cusp}}$ is the cusp anomalous dimension \\cite{Korchemsky:1987wg}. This result provides insight into new all orders structures appearing in subleading power infrared limits. 
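The closed form above is straightforward to evaluate numerically. The sketch below (Python, using \texttt{scipy.special.dawsn}; the one-loop approximation $\\Gamma^{\\mathrm{cusp}} \\approx 4 a_s$ is an assumption made here purely for illustration) checks that the small-coupling expansion of Dawson's Sudakov starts with the fixed-order term $-2 a_s \\log(1-z)$, as follows from $D(x) \\approx x$ at small $x$.

```python
import numpy as np
from scipy.special import dawsn  # Dawson's integral D(x)

def eec_nlp_ll(z, a_s, gamma_cusp):
    """Leading-log NLP EEC in the back-to-back limit:
    EEC^(2) = -sqrt(2 a_s) * D[ sqrt(gamma_cusp/2) * log(1-z) ]."""
    x = np.sqrt(gamma_cusp / 2.0) * np.log(1.0 - z)
    return -np.sqrt(2.0 * a_s) * dawsn(x)

# Cross-check: with the illustrative one-loop value gamma_cusp = 4 a_s,
# D(x) ~ x at small x, so the series must start at -2 a_s log(1-z).
a_s, z = 1.0e-4, 0.9
ratio = eec_nlp_ll(z, a_s, 4.0 * a_s) / (-2.0 * a_s * np.log(1.0 - z))
```

At this tiny coupling the ratio is close to unity, confirming the leading term of the expansion.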
Since this extends the classic Sudakov exponential \\cite{Sudakov:1954sw}, we will refer to this functional form as ``Dawson's Sudakov\". The particular example of the EEC observable was chosen in this paper, since its exact structure for generic angles is known to $\\mathcal{O}(\\alpha_s^3)$ due to the remarkable calculation of \\cite{Henn:2019gkr}, and we find perfect agreement with the expansion of their results in the back-to-back limit to this order, providing a strong check of our techniques. While we focus on the EEC in $\\mathcal{N}=4$, this observable has an identical resummation structure to $p_T$ resummation, and therefore the techniques we have developed apply more generally, both to the EEC in QCD, and to the $p_T$ distribution of color singlet bosons at hadron colliders.\n\nAn outline of this paper is as follows. In \\Sec{sec:EEC_b2b} we review the known structure of the EEC observable in the back-to-back limit, and relate it to the case of the $p_T$ spectrum of color singlet bosons, which is perhaps more familiar to the resummation community. In \\Sec{sec:FO} we perform a fixed order calculation of the EEC at subleading power in SCET, which allows us to understand the structure of the subleading power rapidity divergences, and provides the boundary data for our RG approach. In \\Sec{sec:RRG_NLP} we study the structure of subleading power rapidity evolution equations, introduce the rapidity identity operators, and analytically solve their associated evolution equations. In \\Sec{sec:N4} we apply these evolution equations to the particular case of the EEC in $\\mathcal{N}=4$ SYM to derive the subleading power leading logarithmic series, and we comment on some of the interesting features of the result. We also compare our result with the fixed order calculation of \\cite{Henn:2019gkr} expanded in the back-to-back limit, finding perfect agreement. 
We conclude in \\Sec{sec:conc}, and discuss many directions for improving our understanding of the infrared properties of gauge theories at subleading powers. \n\n\\section{The Energy-Energy Correlator in the Back-to-Back Limit}\\label{sec:EEC_b2b}\n\nIn this section we introduce the EEC observable, and review its structure in the back-to-back limit at leading power. We then discuss the resummation of the associated infrared logarithms using the rapidity renormalization group approach. This will allow us to introduce our notation, before proceeding to subleading power. \n\nAn additional goal of this section is to make clear the relation between the EEC in the back-to-back limit and more standard $p_T$ resummation, which may be more familiar to the resummation community. This should also make clear that the techniques we develop are directly applicable to the case of $p_T$ resummation, although we leave a complete treatment of $p_T$ resummation to a future publication due to complexities related to the treatment of the initial state hadrons. Some other recent work towards understanding subleading power factorization for $p_T$ can be found in \\cite{Balitsky:2017flc,Balitsky:2017gis}.\n\n\nFor a color singlet source, the EEC is defined as \\cite{Basham:1978bw}\n\\begin{align}\\label{eq:EEC_intro}\n\\text{EEC}(\\chi)=\\sum\\limits_{a,b} \\int d\\sigma_{V\\to a+b+X} \\frac{2 E_a E_b}{Q^2 \\sigma_{\\text{tot}}} \\delta(\\cos(\\theta_{ab}) - \\cos(\\chi))\\,,\n\\end{align}\nwhere the sum is over all pairs of final state particles, $E_a$ are the energies of the particles, and $\\theta_{ab}$ are the angles between pairs of particles. 
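Operationally, the definition above is a weighted histogram over particle pairs. A minimal sketch (Python; the event format and the inclusion of the $a=b$ self-pairs, which land at $\\cos\\theta = 1$, are our own conventions for illustration):

```python
import numpy as np

def eec_weights(E, p, Q):
    """All-pairs EEC weights 2 E_a E_b / Q^2 and pair angles cos(theta_ab),
    following the definition above (self-pairs a = b sit at cos(theta) = 1)."""
    phat = p / np.linalg.norm(p, axis=1, keepdims=True)  # unit momentum directions
    cos_th = np.clip(phat @ phat.T, -1.0, 1.0)           # cos(theta_ab) for all pairs
    w = 2.0 * np.outer(E, E) / Q**2                      # energy-energy weights
    return cos_th.ravel(), w.ravel()

# Toy event: two back-to-back massless particles, each with E = Q/2.
Q = 1.0
E = np.array([0.5, 0.5])
p = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, -0.5]])
cos_th, w = eec_weights(E, p, Q)
# Total weight is 2 (sum_a E_a)^2 / Q^2 = 2; the a != b pairs sit at cos(chi) = -1.
```

Binning the weights `w` in `cos_th` then yields EEC$(\\chi)$ for an ensemble of events.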
Energy correlators are a theoretically nice class of event shape observable, since they can be directly expressed in terms of energy flow operators \\cite{Hofman:2008ar,Belitsky:2013xxa,Belitsky:2013bja,Belitsky:2013ofa}\n\\begin{align}\n\\mathcal{E}(\\vec n) =\\int\\limits_0^\\infty dt \\lim_{r\\to \\infty} r^2 n^i T_{0i}(t,r \\vec n)\\,.\n\\end{align}\nIn particular, the EEC is given by the four-point Wightman correlator\n\\begin{align}\n\\frac{1}{\\sigma_{\\rm tot}} \\frac{d\\sigma}{dz}=\\frac{\\langle \\mathcal{O} \\mathcal{E}(\\vec n_1) \\mathcal{E}(\\vec n_2) \\mathcal{O}^\\dagger \\rangle }{\\langle \\mathcal{O} \\mathcal{O}^\\dagger \\rangle}\\,,\n\\end{align}\nwhere we have introduced the convenient variable\n\\begin{align}\nz=\\frac{1-\\cos {\\chi}}{2}\\,,\n\\end{align}\nand $\\mathcal{O}$ is a source operator that creates the excitation.\n\nThere has been significant recent progress in understanding the EEC, both in QCD, as well as in conformal field theories. In QCD, the EEC has been computed for arbitrary angles at next-to-leading order (NLO) analytically \\cite{Dixon:2018qgp,Luo:2019nig} and at NNLO numerically \\cite{DelDuca:2016csb,DelDuca:2016ily}. In $\\mathcal{N}=4$ it has been computed for arbitrary angles to NNLO \\cite{Belitsky:2013ofa,Henn:2019gkr}. There has also been significant progress in understanding the limits of the EEC, namely the collinear ($z\\to 0$) limit \\cite{Dixon:2019uzg,Kravchuk:2018htv,Kologlu:2019bco,Kologlu:2019mfz,Korchemsky:2019nzm}, and the back-to-back ($z\\to 1$) limit \\cite{deFlorian:2004mp,Moult:2018jzp,Korchemsky:2019nzm,Gao:2019ojf}. Here we will focus on the EEC in the back-to-back limit, where it exhibits Sudakov double logarithms. As we will explain shortly, these double logarithms are directly related to those appearing for transverse momentum resummation. 
In this section we follow closely the factorization derived in \\cite{Moult:2018jzp} using the rapidity renormalization group \\cite{Chiu:2012ir,Chiu:2011qc}. An alternative approach to studying this limit directly from the four point correlator was given in \\cite{Korchemsky:2019nzm}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.75\\columnwidth]{figures\/back_to_back}\n\\end{center}\n\\caption{The kinematics of the EEC in the back-to-back limit. Wide angle soft radiation recoils the two jets in a manner identical to the case of $p_T$ for color singlet boson production in hadronic collisions. This provides a precise relation between the factorization formulas in the two cases. (Figure from \\cite{Moult:2018jzp}.) }\n\\label{fig:schematic_factorization}\n\\end{figure}\n\nThe back-to-back limit corresponds to the region of phase space where there are two collimated jets accompanied by low energy soft radiation, which recoils the directions of the jets so that they are not exactly back-to-back. This configuration is illustrated in \\Fig{fig:schematic_factorization}. 
A simple exercise shows that the angle between two partons correlated by the EEC is related to the transverse momentum of these particles within the jets and the transverse momentum of the soft radiation that recoils these jets, by\n\\begin{align}\n\\label{eq:z_in_back_to_back}\n1-z=\\frac{1}{Q^2} \\left| \\frac{\\vec k_{\\perp,i}^h}{x_i}+\\frac{\\vec k_{\\perp,j}^h}{x_j}-\\vec k_{\\perp,s}^h \\right|^2 +\\mathcal{O}(1-z)\\,.\n\\end{align}\nHere $x_i=2E_i\/Q$, where $Q$ is the center of mass energy.\nThis relation makes clear the connection between the EEC in the back-to-back limit and transverse momentum resummation, which we will shortly extend to subleading powers.\n\nTo describe the EEC in this limit, we use an effective field theory description of the soft and collinear dynamics, where the relevant modes have the scalings\n\\begin{align}\\label{eq:pc_modes}\np_s\\sim Q(\\lambda, \\lambda, \\lambda)\\,, \\qquad p_c \\sim Q(\\lambda^2, 1, \\lambda) \\,, \\qquad p_{\\bar c} \\sim Q(1,\\lambda^2, \\lambda)\\,.\n\\end{align}\nHere $\\lambda$ is a power counting parameter and is defined as\n\\begin{align}\n\\lambda\\sim \\sqrt{1-z}\\,.\n\\end{align}\nThis scaling defines what is referred to as an SCET$_{\\rm II}$~ theory \\cite{Bauer:2002aj}. Crucially, unlike in SCET$_{\\rm I}$, the soft and collinear modes in SCET$_{\\rm II}$~ have the same virtualities, but different rapidities. This factorization into soft and collinear modes therefore introduces divergences as $k^+\/k^-\\to \\infty$ or $k^+\/k^-\\to 0$ \\cite{Collins:1992tv,Manohar:2006nz,Collins:2008ht,Chiu:2012ir,Vladimirov:2017ksc}, which are referred to as rapidity divergences. To regulate these divergences, one must introduce a regulator that breaks boost invariance, allowing the soft and collinear modes to be distinguished. 
Once such a regulator is introduced, renormalization group evolution equations can be derived to resum the associated rapidity logarithms \\cite{Chiu:2012ir,Chiu:2011qc}.\n\nUsing the effective field theory, one can systematically expand the cross section for either transverse momentum, or for the EEC in powers of the observable. For the EEC, we write\n\\begin{align}\\label{eq:xsec_expand}\n\\frac{\\mathrm{d} \\sigma}{\\mathrm{d} z}\n&= \\frac{\\mathrm{d}\\sigma^{(0)}}{\\mathrm{d} z} + \\frac{\\mathrm{d}\\sigma^{(2)}}{\\mathrm{d} z}+ \\frac{\\mathrm{d}\\sigma^{(4)}}{\\mathrm{d} z} + \\dotsb{\\nonumber} \\\\\n&=\\text{EEC}^{(0)}+\\text{EEC}^{(2)}+\\text{EEC}^{(4)}\\,,\n\\end{align}\nwhere we will occasionally use the second notation, since it is more compact.\nHere\n\\begin{align}\n\\frac{\\mathrm{d}\\sigma^{(0)}}{\\mathrm{d} z}\n&\\sim \\delta(1-z)+ \\biggl[\\frac{ \\ord{1} }{1-z}\\biggr]_+\n\\,, \n\\end{align}\nis referred to as the leading power cross section, and\ndescribes all terms scaling like $\\mathcal{O}((1-z)^{-1})$ modulo logarithms. All the other terms in the expansion of the cross section are suppressed by explicit powers of $(1-z)$\n\\begin{align}\\label{eq:scaling_lam2}\n\\frac{\\mathrm{d}\\sigma^{(2k)}}{\\mathrm{d} z} &\\sim \\ord{(1-z)^{k-1}}\n\\,.\\end{align}\nThe focus of this paper will be on deriving the structure of the leading logarithms in $\\mathrm{d}\\sigma^{(2)}\/\\mathrm{d} z$, which is also referred to as the next-to-leading power (NLP) cross section. 
\n\n\nFor the leading power cross section, $d\\sigma^{(0)}\/dz$, one can derive a factorization formula describing in a factorized manner the contributions of the soft and collinear modes to the EEC in the $z\\to 1$ limit \\cite{Moult:2018jzp}\n\\begin{equation}\\label{eq:fact_final}\n\\hspace{-0.35cm}\\frac{d\\sigma^{(0)}}{dz}= \\frac{1}{2} \\int d^2 \\vec k_\\perp \\int \\frac{d^2 \\vec b_\\perp}{(2 \\pi)^2} e^{-i \\vec b_\\perp \\cdot \\vec k_\\perp} H(Q,\\mu) J^q_{\\text{EEC}} (\\vec b_\\perp,\\mu,\\nu) J^{\\bar q}_{\\text{EEC}} (\\vec b_\\perp,\\mu,\\nu) S_{\\text{EEC}}(\\vec b_\\perp,\\mu,\\nu) \\delta \\left( 1-z- \\frac{\\vec k_\\perp^2}{Q^2} \\right )\\,,\n\\end{equation}\nin terms of a hard function, $H$, jet functions, $J$, and a soft function, $S$. This factorization is nearly equivalent to the factorization for the $p_T$ for color singlet boson production (This factorization formula was originally derived in \\cite{Collins:1981uk,Collins:1981va,Collins:1984kg}, and was derived in terms of the rapidity renormalization group used here in \\cite{Chiu:2012ir,Chiu:2011qc}), \n\\begin{align}\\label{eq:pt_fact}\n\\frac{1}{\\sigma} \\frac{d^3 \\sigma^{(0)}}{d^2 \\vec p_T dY dQ^2} = H(Q,\\mu) \\int \\frac{d^2 \\vec b_\\perp}{(2\\pi)^2} e^{i\\vec b_\\perp \\cdot \\vec p_T} \\left[ B \\times B \\right] (\\vec b_\\perp, \\mu, \\nu) S_\\perp(\\vec b_\\perp, \\mu, \\nu)\\,,\n\\end{align}\nup to the fact that the jet functions are moved to the initial state, where they are referred to as beam functions \\cite{Stewart:2009yx}. Apart from our intrinsic interest in understanding the kinematic limits of the EEC, this relation between the EEC and $p_T$ is one of our primary motivations for studying the EEC. Lessons derived from the EEC can be directly applied to understanding the structure of subleading power logarithms for $p_T$, which is a phenomenologically important observable at the LHC, for example, for precision studies of the Higgs boson. 
Here we briefly discuss the objects appearing in the factorization formula, both to emphasize the close connections between the EEC and $p_T$, as well as to introduce the general structure of the $\\mu$ and $\\nu$ rapidity evolution equations. \n\n\nThe hard functions, $H(Q,\\mu)$, appearing in the factorization formulas for the EEC and $p_T$ are identical. They describe hard virtual corrections, and satisfy a multiplicative renormalization group equation (RGE) in $\\mu$\n\\begin{align}\n\\mu \\frac{d}{d\\mu} H(Q,\\mu) =2 \\left[\\Gamma^{\\mathrm{cusp}}(\\alpha_s) \\ln\\frac{Q^2}{\\mu^2} + \\gamma^H(\\alpha_s) \\right] H(Q,\\mu)\\,.\n\\end{align}\nThey are independent of rapidity. The soft functions appearing in both $p_T$ and the EEC can be proven to be identical \\cite{Moult:2018jzp}. They are matrix elements of Wilson lines, which for quarks and gluons are defined as\n\\begin{align}\n S_q(\\vec p_T) &= \\frac{1}{N_c} \\big\\langle 0 \\big| \\mathrm{Tr} \\bigl\\{\n \\mathrm{T} \\big[S^\\dagger_{\\bn} S_n\\big]\n \\delta^{(2)}(\\vec p_T-\\mathcal{P}_\\perp)\\overline{\\mathrm{T}} \\big[S^\\dagger_{n} S_{\\bn} \\big]\n \\bigr\\} \\big| 0 \\big\\rangle\n\\,,{\\nonumber}\\\\\n S(\\vec p_T) &= \\frac{1}{N_c^2 - 1} \\big\\langle 0 \\big| \\mathrm{Tr} \\bigl\\{\n \\mathrm{T} \\big[\\mathcal{S}^\\dagger_{\\bn} \\mathcal{S}_n\\big]\n \\delta^{(2)}(\\vec p_T-\\mathcal{P}_\\perp) \\overline{\\mathrm{T}} \\big[\\mathcal{S}^\\dagger_{n} \\mathcal{S}_{\\bn} \\big]\n \\bigr\\} \\big| 0 \\big\\rangle\n\\,.\\end{align}\nHere $\\mathrm{T}$ and $\\overline{\\mathrm{T}}$ denote time and anti-time ordering, and $S_n$ and $\\mathcal{S}_n$ denote Wilson lines in the fundamental and adjoint representations, respectively. Explicitly, \n\\begin{align}\n S_n(x) &= \\mathrm{P} \\exp\\biggl[ \\mathrm{i} g \\int_{-\\infty}^0 \\mathrm{d} s\\, n \\cdot A(x+sn)\\biggr]\\,,\n\\end{align}\nand similarly for the adjoint Wilson lines. 
These soft functions satisfy the $\\mu$ and $\\nu$ RGEs\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S(\\vec p_T)&=\\int d\\vec q_T \\gamma^S_\\nu(p_T-q_T) S(\\vec q_T)\\,, {\\nonumber} \\\\\n\\mu \\frac{d}{d\\mu}S(\\vec p_T)&=\\gamma_\\mu^S S(\\vec p_T)\\,,\n\\end{align}\nwith the anomalous dimensions\n\\begin{align}\n\\gamma_\\mu^S &=4\\Gamma^{\\mathrm{cusp}}(\\alpha_s)\\log \\left( \\frac{\\mu}{\\nu}\\right)\\,,{\\nonumber} \\\\\n\\gamma_\\nu^S &=2\\Gamma^{\\mathrm{cusp}} (\\alpha_s)\\mathcal{L}_0\\left( \\vec p_T,\\mu \\right)\\,.\n\\end{align}\nHere the color representation is implicit in the cusp anomalous dimension, and $\\mathcal{L}_0$ is a standard plus function (see e.g. \\cite{Ebert:2016gcn} for a detailed discussion of two-dimensional plus distributions, and for a definition of the conventions that are used here). Since we will ultimately be interested in $\\mathcal{N}=4$ where all particles are in the same representation, we will drop the quark and gluon labels on the soft functions.\n\nThe only difference between $p_T$ and the EEC lies in the collinear sector, namely whether one uses beam functions or jet functions. The jet functions for the EEC were recently computed to NNLO \\cite{Luo:2019bmw,Luo:2019hmp}. Since in this paper we will be focused on resummation at LL, we can always choose to run all functions to the jet scale, and thereby avoid a discussion of the collinear sector for simplicity. The structure of the power corrections to the beam (jet) functions, and their matching to the parton distributions (fragmentation functions) is interesting, and will be presented in future work, since it is important for a complete understanding of $p_T$ at subleading powers.\n\nThese renormalization group evolution equations in both $\\mu$ and $\\nu$ allow for a derivation of the all orders structure of logarithms in the $z\\to 1$ limit, at leading order in the $(1-z)$ expansion. 
Performing the renormalization group evolution, one finds for a non-conformal field theory \\cite{Moult:2018jzp} (i.e. allowing for a running coupling)\n\\begin{align}\n \\label{eq:resformula}\n\\frac{d\\sigma^{(0)}}{dz} = &\\; \\frac{1}{4} \\int\\limits_0^\\infty db\\, b\n J_0(bQ\\sqrt{1-z})H(Q,\\mu_h) j^q_{\\text{EEC}}(b,b_0\/b,Q) j^{\\bar\n q}_{\\text{EEC}}(b,b_0\/b,Q) S_{\\text{EEC}}( b,\\mu_s, \\nu_s) \n{\\nonumber}\\\\\n&\\; \\cdot\n \\left(\\frac{Q^2}{\\nu_s^2}\\right)^{\\gamma^r_{\\text{EEC}}(\\alpha_s(b_0\/b))}\n \\exp \\left[ \\int\\limits_{\\mu_s^2}^{\\mu_h^2}\n \\frac{d\\bar{\\mu}^2}{\\bar{\\mu}^2} \\Gamma^{\\mathrm{cusp}}(\\alpha_s(\\bar \\mu)) \\ln\n \\frac{b^2\\bar{\\mu}^2}{b_0^2} \\right.\n{\\nonumber}\\\\\n&\\;\n\\left. +\n \\int\\limits_{\\mu_h^2}^{b_0^2\/b^2}\\frac{d\\bar{\\mu}^2}{\\bar{\\mu}^2}\n \\left(\\Gamma^{\\mathrm{cusp}}(\\alpha_s(\\bar \\mu)) \\ln\\frac{b^2 Q^2}{b_0^2} +\n \\gamma^H (\\alpha_s(\\bar \\mu)) \\right) -\n \\int\\limits_{\\mu_s^2}^{b_0^2\/b^2}\\frac{d\\bar{\\mu}^2}{\\bar{\\mu}^2}\n \\gamma^s_{\\text{EEC}} (\\alpha_s(\\bar \\mu)) \\right]\\,.\n\\end{align}\nFor the particular case of a conformal theory, this expression simplifies considerably, both due to the fact that the coupling doesn't run, and also since in a conformal field theory there is an equivalence between the rapidity anomalous dimension and the soft anomalous dimension \\cite{Vladimirov:2016dll,Vladimirov:2017ksc}. Combining this equivalence with the relations for the soft anomalous dimension derived in \\cite{Dixon:2008gr} (see also \\cite{Falcioni:2019nxk} for recent work on relations between different soft functions), we have\n\\begin{align}\n\\gamma^r= -\\mathcal{G}_0+2B\\,,\n\\end{align}\nwhere $B$ is the virtual anomalous dimension (the coefficient of $\\delta(1-x)$ in the DGLAP kernel), and $\\mathcal{G}_0$ is the collinear anomalous dimension. 
We then find\n\\begin{align}\n \\label{eq:resformula_N4}\n\\frac{d\\sigma^{(0)}}{dz} = &\\; \\frac{1}{4} \\int\\limits_0^\\infty db\\, b\n J_0(bQ\\sqrt{1-z})H(Q,\\mu_h) j^q_{\\text{EEC}}(b,b_0\/b,Q) j^{\\bar\n q}_{\\text{EEC}}(b,b_0\/b,Q) S_{\\text{EEC}}( b,\\mu_s, \\nu_s) \n{\\nonumber}\\\\\n&\\exp \\left[ \\Gamma^{\\mathrm{cusp}} \\log^2\\left( \\frac{b^2 Q^2}{b_0^2} \\right) +2B\\log\\left( \\frac{b^2 Q^2}{b_0^2} \\right) \\right]\\,.\n\\end{align}\nBoth the cusp anomalous dimension and the $B$ anomalous dimension are known from integrability \\cite{Eden:2006rx,Beisert:2006ez,Freyhult:2007pz,Freyhult:2009my,Fioravanti:2009xt}. It is interesting that only the two anomalous dimensions that are known from integrability appear in the final result. The collinear anomalous dimension, which drops out of the final result in a conformal theory, is known to four loops in $\\mathcal{N}=4$ \\cite{Dixon:2017nat}.\n\n\n\nThis formula describes the leading power asymptotics of the EEC in the $z\\to 1$ limit to all orders in the coupling (indeed, in $\\mathcal{N}=4$ it should also apply at finite coupling). The goal of this paper will be to start to understand the all orders structure of the subleading power corrections to this formula in $(1-z)$. While we do not have a complete factorization formula or an all orders understanding, we will be able to deduce much of the structure from general consistency and symmetry arguments. Ultimately, we would like to be able to classify the operators that appear in the description of the subleading powers in this limit, and understand their renormalization group evolution. This paper represents a first step in this direction. 
\n\n\n\nWe conclude this section by noting that the result of \\Eq{eq:resformula_N4} can also be derived in a conformal field theory by directly considering the structure of the four point correlator in the double light cone limit \\cite{Korchemsky:2019nzm} (see also \\cite{Alday:2013cwa}), and using the duality between the correlator and a Wilson loop \\cite{Alday:2010zy}, as well as known results for the structure constants \\cite{Eden:2012rr}. It would be interesting to understand systematically the OPE of the correlator in this limit and the operators that appear from this perspective. There has been some study of the double light cone limit in \\cite{Alday:2015ota}. It would be interesting to understand it in more detail and develop a systematic OPE, much like exists in the collinear limit \\cite{Alday:2010ku,Basso:2014jfa,Basso:2007wd,Basso:2010in,Basso:2015uxa,Basso:2013vsa,Basso:2013aha,Basso:2014koa,Basso:2014nra}. It would also be interesting if the recently introduced light ray OPE \\cite{Kravchuk:2018htv,Kologlu:2019bco,Kologlu:2019mfz} can provide insight into this limit. However, we leave these directions to future work.\n\n\n\\section{Fixed Order Calculation of the EEC at Subleading Power}\\label{sec:FO}\n\nHaving discussed the structure of the EEC in the back-to-back limit, as well as the factorization theorem describing its leading power asymptotics, in this section we begin our study of the subleading corrections in powers of $(1-z)$ by performing a fixed order calculation. This is important both for understanding the structure of the subleading power rapidity divergences for the EEC, and for providing the boundary conditions for the renormalization group studied later in the paper. We will perform this calculation both in QCD, as well as in $\\mathcal{N}=4$. We follow closely the calculation for the power corrections for $p_T$ presented in \\cite{Ebert:2018gsn}. 
In \\Sec{sec:intuition} we summarize some of the intuition derived from this calculation, which provides significant insight into the structure of the subleading power renormalization group evolution, which we will then study in more detail in \\Sec{sec:RRG_NLP}.\n\n\n\\subsection{Leading Order Calculation in $\\mathcal{N}=4$ SYM and QCD}\\label{sec:FO2}\n\n\nIn this section, we perform the leading order (LO) calculation of the EEC at NLP in both QCD and $\\mathcal{N}=4$. This section is rather technical, and assumes some familiarity with fixed order calculations at subleading power in SCET (see e.g. \\cite{Moult:2016fqy,Moult:2017jsg,Ebert:2018lzn,Ebert:2018gsn} for more detailed discussions).\n\nThe EEC observable can be written as\n\\begin{align}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &= \\sum_{a,b} \\int \\mathrm{d} \\Phi_{V \\to a+b+X} \\left|\\mathcal{M}_{V \\to a+b+X}\\right|^2 \\frac{E_a E_b}{q_V^2}\\delta\\left(y-\\cos^2 \\frac{\\theta_{ab}}{2}\\right) \\,,\n\\end{align}\nwhere we have used $y \\equiv 1-z$, so that $y \\to 0$ in the back-to-back limit. To perform the calculation it is convenient to write the observable definition in a boost invariant manner\n\\begin{align}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &= \\frac{1}{2 (1-y) q_V^2}\\sum_{a,b} \\int \\mathrm{d} \\Phi_{V \\to a+b+X} \\left|\\mathcal{M}_{V \\to a+b+X}\\right|^2\\, p_a\\cdot p_b\\,\\delta\\left[y - \\left(1 -\\frac{q_V^2p_a\\cdot p_b}{2 p_a\\cdot q_V p_b \\cdot q_V}\\right)\\right] \\,,\n\\end{align}\nwhere $q_V^\\mu$ is the momentum of the vector boson. 
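\n\nAs a quick cross-check of this boost invariant form (an illustrative verification only, with variable names of our choosing), one can confirm symbolically that in the rest frame of the vector boson the combination $1-q_V^2\\, p_a\\cdot p_b\/(2\\, p_a\\cdot q_V\\, p_b\\cdot q_V)$ reduces to $\\cos^2(\\theta_{ab}\/2)$, the argument of the measurement function in the first form of the observable:

```python
import sympy as sp

# Rest frame of the vector boson: q_V = (Q, 0, 0, 0). Two massless final-state
# momenta, p_a along z and p_b at angle theta to it. Symbol names are ours.
Q, Ea, Eb, theta = sp.symbols('Q E_a E_b theta', positive=True)

pa = sp.Matrix([Ea, 0, 0, Ea])
pb = sp.Matrix([Eb, Eb*sp.sin(theta), 0, Eb*sp.cos(theta)])
qV = sp.Matrix([Q, 0, 0, 0])

def mdot(p, q):
    # Minkowski product with mostly-minus signature
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

# Boost-invariant combination appearing in the delta function
y_invariant = 1 - mdot(qV, qV)*mdot(pa, pb)/(2*mdot(pa, qV)*mdot(pb, qV))

# Must reduce to cos^2(theta_ab/2) in this frame
residual = sp.simplify(y_invariant - sp.cos(theta/2)**2)
print(residual)  # prints 0 if the reduction holds
```

The energies $E_a$, $E_b$ and the boson mass $Q$ drop out of the ratio, as they must for a boost invariant quantity.\n\n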
This definition is convenient because, if we are correlating a particular pair of particles $\\{a,b\\}$, we can boost the system such that the particles being correlated are back-to-back, while the vector boson (or source) recoils against the unmeasured radiation in the perpendicular direction.\n\n\nGiven this setup, we begin by considering a single perturbative emission, which is sufficient for the LO calculation. We will denote by $p_a^\\mu$ and $p^\\mu_b$ the momenta of the particles we are correlating, and $k^\\mu$ the momentum of the unmeasured radiation. This translates to the following choice of kinematics\\footnote{Note that here $p_{a,b}^\\mu$ and $k^\\mu$ are outgoing while $q_V^\\mu$ is incoming.}\n\\begin{align}\\label{eq:kinematic}\n\tp_a^\\mu &= (q^-_V - k^-) \\frac{n^\\mu}{2}\\,,&& k^\\mu = k^- \\frac{n^\\mu}{2} + k^+ \\frac{\\bn^\\mu}{2} + k_\\perp^\\mu\\,,{\\nonumber}\\\\\n\tp_b^\\mu &= (q^+_V - k^+) \\frac{\\bn^\\mu}{2}\\,,&& q_V^\\mu = q_V^- \\frac{n^\\mu}{2} + q_V^+ \\frac{\\bn^\\mu}{2} + k_\\perp^\\mu\\,.\n\\end{align}\nThe measurement of the EEC observable then takes the form of the following constraint\n\\begin{align}\n\ty = 1 - \\frac{(q^-_V - k^-)(q^+_V - k^+) q_V^2}{q_V^+ (q^-_V - k^-) q_V^- (q^+_V - k^+)} = \\frac{q^+_V q^-_V - q_V^2}{q^+_V q^-_V} = \\frac{{\\vec{k}_\\perp^{\\,2}}}{q_V^2 + {\\vec{k}_\\perp^{\\,2}}}\\,,\n\\end{align}\nwhich we can rewrite as a constraint on ${\\vec{k}_\\perp^{\\,2}}$ \n\\begin{equation}\n\t\\delta\\left(y - \\frac{{\\vec{k}_\\perp^{\\,2}}}{q_V^2 + {\\vec{k}_\\perp^{\\,2}}} \\right) = \\delta\\left({\\vec{k}_\\perp^{\\,2}} - q_V^2\\frac{y}{1-y}\\right) \\frac{q_V^2}{(1-y)^2}\\,.\n\\end{equation}\nThis extends the relation between the EEC and $p_T$ to subleading powers.\n\n\\subsection*{Master Formula}\n\nUsing this result for the measurement function, the cross section for the EEC with one emission reads\n\\begin{align}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &= \\frac{1}{2 
(1-y)^3}\\sum_{a,b}\\int \\mathrm{d} \\Phi |\\mathcal{M}_{V \\to q\\bar{q} g}|^2\\, p_a \\cdot p_b\\,\\delta\\left({\\vec{k}_\\perp^{\\,2}} - q_V^2\\frac{y}{1-y}\\right){\\nonumber}\\,.\n\\end{align}\nFor our particular choice of frame where $p_a^\\mu$ has no $\\perp$ component, we have \\cite{Fleming:2007qr}\n\\begin{align}\n\t \\int \\frac{\\mathrm{d}^{d-2} p_{a\\perp}}{(2\\pi)^{d-2}} &= \\int \\mathrm{d} \\Omega_{d-2}\\, |\\vec{p}_a|^{d-2} \\int \\frac{\\mathrm{d}^{d-2} p_{a\\perp}}{(2\\pi)^{d-2}}\\,\\delta^{(d-2)}(p^\\mu_{a\\perp}) = \\frac{|\\vec{p}_a|^{d-2}}{(2\\pi)^{d-2}} \\int \\mathrm{d} \\Omega_{d-2}\\,.\n\\end{align}\nUsing\n\\begin{align}\\label{eq:paPS}\n\t\\int {\\mathrm{d}\\!\\!\\!{}^-}^d p_a \\delta_+(p_a^2) &= \\frac{1}{2} \\int \\frac{\\mathrm{d} p_a^-}{(2\\pi)} \\frac{\\mathrm{d} p_a^+}{(2\\pi)} (2\\pi) \\theta(p_a^+ + p_a^-)\\delta(p_a^+ p_a^- ) |\\vec{p}_a|^2 \\int \\frac{ \\mathrm{d}\\Omega_{d-2}}{(2\\pi)^{d-2}} {\\nonumber}\\\\\n\t&= \\frac{1}{2(2\\pi)^{d-1}} \\int \\frac{\\mathrm{d} p_a^-}{p_a^-} \\theta(p_a^-) |p_a^-\/2|^2\\int \\mathrm{d} \\Omega_{d-2} = C \\times \\int \\mathrm{d} p_a^- \\,p_a^-\\,, \n\\end{align}\nwe can write the cross section with a single emission as\n\\begin{align}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &\\sim \\frac{1}{2 (1-y)^3}\\sum_{a,b} \\int {\\mathrm{d}\\!\\!\\!{}^-}^d k \\delta_+(k^2) |\\mathcal{M}_{V \\to q\\bar{q} g}|^2\\, (p_a \\cdot p_b)^2\\,\\delta\\left({\\vec{k}_\\perp^{\\,2}} - q_V^2\\frac{y}{1-y}\\right){\\nonumber}\\,\\\\\n\t&\\sim \\frac{1}{2 (1-y)^3}\\sum_{a,b} \\int_0^{q^-} \\frac{\\mathrm{d} k^-}{k^-} \\int \\mathrm{d} {\\vec{k}_\\perp^{\\,2}} \\frac{\\mu^{2\\epsilon}}{{\\vec{k}_\\perp^{\\,2\\epsilon}}} |\\mathcal{M}_{V \\to q\\bar{q} g}|^2\\, (p_a \\cdot p_b)^2\\,\\delta\\left({\\vec{k}_\\perp^{\\,2}} - q_V^2\\frac{y}{1-y}\\right){\\nonumber}\\,\\\\\n\t&\\sim \\frac{1}{2 (1-y)^3}\\left(\\frac{\\mu^{2}}{q_V^2}\\frac{1-y}{y}\\right)^\\epsilon \\sum_{a,b} \\int_0^{q^-} \\frac{\\mathrm{d} k^-}{k^-} 
|\\mathcal{M}_{V \\to q\\bar{q} g}|^2(k^-,y)\\, (p_a \\cdot p_b)^2 (k^-,y)\\,,\n\\end{align}\nwhere everything is a function of $k^-$, $y$ and $q_V^2$. Up to this point, we have not expanded anything, but we have enforced the measurement, momentum conservation and the choice of frame. We can now express $k^-$ as a dimensionless fraction $x$ via\n\\begin{equation}\\label{eq:xdefinition}\n\tx = \\frac{k^-}{q^-}\\,,\n\\end{equation} \nand we can write all Mandelstam invariants in terms of $x$ and $y$\n\\begin{align}\n\tk^-&= x q^- \\,,\\qquad {\\vec{k}_\\perp^{\\,2}} = q_V^2\\frac{y}{1-y}\\,,\\qquad k^+ = \\frac{{\\vec{k}_\\perp^{\\,2}}}{k^-} = \\frac{q_V^2}{q^-}\\frac{y}{1-y} \\frac{1}{x}\\,,{\\nonumber} \\\\\n\tq^2_V &\\equiv q^+ q^- - {\\vec{k}_\\perp^{\\,2}} = q_V^+ q_V^- - q_V^2\\frac{y}{1-y} \\quad\\implies\\quad q_V^2 = q^+_V q^-_V(1-y)\\,, {\\nonumber} \\\\\n\ts_{ab}(x,y) &= 2p_a \\cdot p_b = (q_V^- - k^-)(q_V^+ - k^+) = q_V^+q_V^- (1-x)\\left(1 - \\frac{y}{x}\\right) {\\nonumber} \\\\\n\t&= q_V^2 \\frac{(1-x)}{(1-y)}\\left(1 - \\frac{y}{x}\\right) \\,,{\\nonumber}\\\\\n\ts_{ak}(x,y) &= 2p_a \\cdot k = (q_V^- - k^-)k^+ = q_V^2\\frac{y}{(1-y)}\\frac{(1-x)}{x}\\,,{\\nonumber}\\\\\n\ts_{bk}(x,y) &= 2p_b \\cdot k = (q_V^+ - k^+)k^- = q_V^2 \\frac{x}{(1-y)}\\left(1 - \\frac{y}{x}\\right)\\,.\n\\end{align}\nWith these expressions one can easily check that\n\\begin{equation}\n\ts_{ab}+ s_{ak}+ s_{kb} = q_V^2\\,,\n\\end{equation}\nwith no power corrections.\nWe can now use the expressions for $s_{ab}$ to arrive at\n\\begin{align}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &= \\frac{1}{(1-y)^5}\\left(\\frac{\\mu^{2}}{q_V^2}\\frac{1-y}{y}\\right)^\\epsilon \\sum_{a,b} \\int_0^{1} \\frac{\\mathrm{d} x}{x} (1-x)^2 \\left(1 - \\frac{y}{x}\\right)^2|\\mathcal{M}_{V \\to q\\bar{q} g}|^2(x,y)\\,.\n\\end{align}\nExpanding in the collinear limit is now the same as expanding in $y$, and we find up to 
NLP\n\\begin{align}\\label{eq:collinearMasterUnreg}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &=\\left(\\frac{\\mu^{2}}{q_V^2 y}\\right)^\\epsilon \\sum_{a,b} \\int_0^{1} \\frac{\\mathrm{d} x}{x} (1-x)^2 \\left[A^{(0)}(x)+A^{(2)}(x) + y A^{(0)}(x)\\left( 5-\\frac{2}{x} -\\epsilon \\right) \\right]\\,.\n\\end{align}\nHere $A^{(0)}(x)$ and $A^{(2)}(x)$ are the first two terms in the expansion of the squared matrix element in the collinear limit,\n\\begin{align}\n|\\mathcal{M}_{V \\to q\\bar{q} g}|^2(x,y)=A^{(0)}(x)+A^{(2)}(x)+\\cdots\\,,\n\\end{align}\nwhere $A^{(2)}(x)$ is suppressed by a single power of $y$ relative to $A^{(0)}(x)$.\n\n\\subsection*{Rapidity Regulator}\n\nThe expression for the EEC in \\eq{collinearMasterUnreg} is divergent as $x \\to 0$. This is a rapidity divergence and must be regulated with a rapidity regulator. Here we present the result using pure rapidity regularization \\cite{Ebert:2018gsn}, which greatly simplifies the calculation, particularly for the constant (the non-logarithmically enhanced term). We have also computed the logarithm using the more standard $\\eta$-regulator \\cite{Chiu:2012ir,Chiu:2011qc}, and find an identical result.\n\n\nWe take as a regulator the rapidity in the $n$-collinear sector, normalized by the rapidity of the color singlet\\footnote{In the rest frame of the decaying boson this would be 1. 
Here we use a slightly boosted frame, so adding this factor is necessary to guarantee that the result is independent of the frame.}\n\\begin{equation}\n\te^{-Y_V}e^{Y_n} = \\frac{q_V^+}{q_V^-}\\frac{p_a^- + k^-}{p_a^+ + k^+} = \\frac{q_V^+}{k^+} = \\frac{q_V^+ q_V^- x}{{\\vec{k}_\\perp^{\\,2}}} = \\frac{q_V^+ q_V^- x (1-y)}{q_V^2 y } = \\frac{x}{y}\\,.\n\\end{equation}\nThe rapidity regulated result is then given by\n\\begin{align}\\label{eq:collinearMasterReg}\n\t\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}}{\\mathrm{d} y} &=\\left(\\frac{\\mu^{2}}{q_V^2 y}\\right)^\\epsilon \\frac{y^\\eta}{\\upsilon^\\eta} \\sum_{a,b} \\int_0^{1} \\frac{\\mathrm{d} x}{x^{1+\\eta}} (1-x)^2 \\left[A^{(2)}(x) + y A^{(0)}(x)\\left( 5-\\frac{2}{x} -\\epsilon \\right) \\right]\\,,\n\\end{align}\nwhich can be straightforwardly integrated.\n\n\\subsection*{Results}\n\nPlugging in the expressions for $A^{(0)}$ and $A^{(2)}$ in QCD and $\\mathcal{N}=4$ gives the result for the EEC up to NLP\n\\begin{align}\\label{eq:results}\n\t\\frac{1}{\\sigma_0}\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}^\\text{QCD}}{\\mathrm{d} y} &= -\\frac{1}{y}\\left(\\log y + \\frac{3}{4}\\right) - 5\\log y-\\frac{9}{4} +\\mathcal{O}(y)\\,, {\\nonumber} \\\\\n\t\\frac{1}{\\sigma_0}\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}^{\\mathcal{N}=4}}{\\mathrm{d} y} &=- \\frac{\\log y}{y}- 2\\log y +\\mathcal{O}(y)\\,.\n\\end{align}\nThis agrees with the expansion of the full angle result for $\\mathcal{N}=4$ in \\cite{Belitsky:2013ofa,Belitsky:2013xxa,Belitsky:2013bja}, as well as the classic QCD result \\cite{Basham:1978bw} for both the logarithm and the constant. This agreement (in particular for the constant) illustrates that we have the correct effective field theory setup, and that we understand the regularization of rapidity divergences at subleading power. This is further supported by our calculation of the subleading power corrections for the case of $p_T$ in \\cite{Ebert:2018gsn}, which was performed using the same formalism. 
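\n\nThe way the regulated integral produces its logarithm can be made explicit in a toy version of \\eq{collinearMasterReg} in which the bracketed matrix-element factor is replaced by 1 (a simplification made purely for illustration) and the $\\epsilon$-dependence is dropped: then $y^\\eta \\int_0^1 \\mathrm{d} x\\, x^{-1-\\eta}(1-x)^2 = y^\\eta B(-\\eta,3)$, and expanding about $\\eta=0$ the $1\/\\eta$ rapidity pole, which must cancel in the complete result, is traded for a logarithm of the observable:

```python
import sympy as sp

# Toy version of the regulated x-integral: matrix-element factor set to 1
# (illustration only), so that int_0^1 dx x^(-1-eta) (1-x)^2 = B(-eta, 3),
# defined by analytic continuation in the rapidity regulator eta.
eta, y = sp.symbols('eta y', positive=True)
regulated = sp.gamma(-eta)*sp.gamma(3)/sp.gamma(3 - eta)

# Include the y^eta factor from the regulator and expand about eta = 0
expansion = sp.expand(sp.series(y**eta * regulated, eta, 0, 1).removeO())
print(expansion)  # equals -1/eta - log(y) - 3/2: the pole trades for log(y)
```

\n\n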
While this expansion is an inefficient way to compute these subleading terms, which can much more easily be obtained by performing the full calculation and expanding the result, the ability to systematically compute the terms at each order in the power expansion will allow us to perform an all orders resummation by deriving renormalization group evolution equations in rapidity.\n\n\n\\subsection{Physical Intuition for Subleading Power Rapidity Divergences}\\label{sec:intuition}\n\nIn this section we wish to summarize some of the general lessons learned from the above fixed order calculation (as well as from the calculation of the fixed order subleading rapidity logarithms for $p_T$ in \\cite{Ebert:2018gsn}), which provide significant insight into the structure of the subleading power rapidity renormalization group. Since we have shown above that the description of the EEC in the back-to-back limit is ultimately formulated in the EFT in terms of transverse momentum, here we will phrase all the functions in terms of transverse momentum; these can straightforwardly be converted back to derive results for the EEC.\nWe will also phrase this discussion in terms of the $\\eta$ regulator \\cite{Chiu:2012ir,Chiu:2011qc}, since it is more familiar to most readers.\n\nThe first important observation from the fixed order calculation is that at NLP there are no purely virtual corrections at lowest order in $\\alpha_s$ (such a correction would appear as $y\\delta(y)=0$). The lowest order NLP result for the EEC in $\\mathcal{N}=4$, which we use as an example due to its simpler structure, is given by\n\\begin{align}\\label{eq:N4_NLP_fixedorder}\n\t\\frac{1}{\\sigma_0}\\frac{\\mathrm{d} \\sigma_{\\text{EEC}}^{\\mathcal{N}=4}}{\\mathrm{d} (1-z)} &= - 2\\log(1-z) +\\mathcal{O}(1-z)\\,.\n\\end{align}\nHere the single logarithm comes from real soft or collinear emissions. 
Since the soft and collinear sectors lie at the same virtuality, this guarantees that at subleading power the lowest order logarithm must be a rapidity logarithm. This can be made explicit by writing down a general ansatz for the one-loop result. This approach was first introduced in the SCET$_{\\rm I}$~ case in \\cite{Moult:2016fqy}. Here we will phrase it in terms of the underlying $p_T$ dependence, since this is perhaps more familiar to the resummation community. The general form of the one-loop result for the NLP corrections to $p_T$ can be written as\n\\begin{align}\n\\frac{d\\sigma^{(2)}}{dp_T^2}=\\left( \\frac{\\mu^2}{p_T^2} \\right)^\\epsilon \\left( \\frac{\\nu}{p_T} \\right)^\\eta \\left[ \\frac{s_\\epsilon}{\\epsilon}+\\frac{s_\\eta}{\\eta} \\right] +\\left( \\frac{\\mu^2}{p_T^2} \\right)^\\epsilon \\left( \\frac{\\nu}{Q} \\right)^\\eta \\left[ \\frac{c_\\epsilon}{\\epsilon}+\\frac{c_\\eta}{\\eta} \\right] \\,.\n\\end{align}\nExpanding this, we find\n\\begin{align}\n\\frac{d\\sigma^{(2)}}{dp_T^2}=\\left( \\frac{c_\\epsilon+s_\\epsilon}{\\epsilon} \\right) + \\left( \\frac{c_\\eta+s_\\eta}{\\eta} \\right) +2(c_\\epsilon+s_\\epsilon) \\log\\frac{\\mu}{p_T} +s_\\eta \\log \\frac{\\nu}{p_T} +c_\\eta \\log\\frac{\\nu}{Q}\\,.\n\\end{align}\nDemanding that there are no $1\/\\epsilon$ or $1\/\\eta$ poles in the final answer imposes the conditions \n\\begin{align}\nc_\\epsilon=-s_\\epsilon\\,, \\qquad c_\\eta=-s_\\eta\\,,\n\\end{align}\nand shows that the lowest order logarithm appearing in the cross section is a rapidity logarithm.\n\nThis simple observation provides considerable insight into the structure of the subleading power renormalization group evolution equations. In \\cite{Moult:2018jjd} it was shown that the only way a single logarithm can be generated at the first non-vanishing order is through renormalization group mixing. 
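\n\nThese conditions are simple to verify symbolically. The following script (variable names ours) imposes them on the ansatz, removes the regulators in the order $\\eta\\to 0$ followed by $\\epsilon \\to 0$, and confirms that the surviving contribution is the single rapidity logarithm $s_\\eta \\log(Q\/p_T)=s_\\eta\\log(\\nu\/p_T)-s_\\eta\\log(\\nu\/Q)$:

```python
import sympy as sp

# One-loop NLP ansatz from the text; eps is the dim reg parameter and eta the
# rapidity regulator, with all scales taken positive.
eps, eta = sp.symbols('epsilon eta')
mu, nu, pT, Q = sp.symbols('mu nu p_T Q', positive=True)
s_eps, s_eta, c_eps, c_eta = sp.symbols('s_epsilon s_eta c_epsilon c_eta')

ansatz = ((mu**2/pT**2)**eps*(nu/pT)**eta*(s_eps/eps + s_eta/eta)
          + (mu**2/pT**2)**eps*(nu/Q)**eta*(c_eps/eps + c_eta/eta))

# Impose the pole-cancellation conditions c_eps = -s_eps, c_eta = -s_eta
finite = ansatz.subs({c_eps: -s_eps, c_eta: -s_eta})

# Rapidity regulator removed first (eta -> 0), then eps -> 0
no_eta = sp.series(finite, eta, 0, 1).removeO()
result = sp.simplify(sp.limit(no_eta, eps, 0))
print(result)  # a pure rapidity logarithm, s_eta*log(Q/p_T), survives
```

\n\n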
In the particular case studied in \\cite{Moult:2018jjd} there was only a virtuality renormalization group, and therefore the single logarithm was generated by the $\\mu$-RGE. However, here we see that for observables that have both a $\\mu$ and $\\nu$ RGE, this mixing will always occur in the $\\nu$ RGE, since the lowest order logarithm is always a rapidity logarithm. \n\nOur fixed order calculation also provides insight into the structure of the cancellation of rapidity divergences at subleading power. Although we have not written down a complete set of SCET$_{\\rm II}$~operators, we briefly comment on the physical intuition for the cancellation of rapidity anomalous dimensions, which will then determine how renormalization group consistency appears in the effective theory. We can consider the case of the NLP correction from a soft quark, since this will provide the clearest picture of the differences between rapidity divergences and virtuality divergences. At lowest order, the soft function for the emission of a soft quark is given by\n\\begin{align}\\label{eq:quark_soft}\nS^{(2)}_q&= \\fd{3cm}{figures\/soft_quark_scetii_low}{\\nonumber} \\\\\n&\\propto \\mu^{2\\epsilon} \\nu^\\eta \\int d\\hspace*{-0.08em}\\bar{}\\hspace*{0.1em}^d k |k^+-k^-|^{-\\eta} \\delta^+ (k^2) \\frac{(p_\\perp^2)^{-\\epsilon}}{\\Gamma(1-\\epsilon) \\pi^\\epsilon} \\delta^{(d-2)} (p_\\perp-\\hat{\\mathcal{P}}_\\perp) \\\\\n&=\\frac{1}{\\eta} + \\log{\\frac{\\nu}{p_\\perp}} +\\cdots\\,.\n\\end{align}\nThis soft function is $\\epsilon$-finite, but exhibits the expected $\\eta$ divergence. This divergence cannot be absorbed into the renormalization of this soft function, since the soft function itself starts at $\\mathcal{O}(\\alpha_s)$, and therefore we must find the class of operators that this soft function mixes into. 
We will derive the structure of these operators in \\Sec{sec:RRG_NLP}.\n\nWhile the fact that this operator must mix into a new operator is familiar from other studies at NLP, what is different is that the integrand for the soft function is ``1\". This implies that an equal part of the divergence comes from the region where the quark goes to infinite rapidity in either direction. This has an interesting interpretation, which can guide the physical intuition for how the cancellation of rapidity divergences occurs. As the soft quark goes collinear in the direction of the collinear quark, the rapidity divergence must be cancelled by a collinear rapidity divergence from a subleading power jet function describing two collinear quarks. On the other hand, as the soft quark goes collinear with the gluon, it must be cancelled by a subleading power jet function involving a collinear quark and gluon field. In pictures, the two limits are shown as\n\\begin{align}\n\\fd{3cm}{figures\/qq_op_low.pdf}\\xleftarrow[]{\\eta\\to -\\infty} \\fd{3cm}{figures\/soft_quark_low.pdf} \\xrightarrow[]{\\eta\\to \\infty} \\fd{3cm}{figures\/qg_collinear_low.pdf}\\,.\n\\end{align}\nThis is of course exactly what happens at leading power; however, there the jet functions are identical in all limits, giving a much simpler structure. The structure for the cancellation of the rapidity divergences at subleading power implies that renormalization group consistent subsets of operators will appear in triplets, as opposed to doublets, as was observed in \\cite{Moult:2018jjd} for the $\\mu$-RGE. The factorization formula for a single triplet will take the (extremely) schematic form\n\\begin{align}\n\\frac{d\\sigma^{(2)}}{dz}&=H_1 J_{\\bar n}^{(2)} J_n^{(0)} S^{(0)} \n+H_2 J_{\\bar n}^{(0)} J_n^{(2)} S^{(0)}\n+H_3 J_{\\bar n}^{(0)} J_n^{(0)} S^{(2)}\\,,\n\\end{align}\nwhere the jet and soft functions with superscript $(2)$ denote power suppressed functions. 
In terms of pictures, we have\n\\begin{align}\\label{eq:schematic_factorization}\n\\frac{d\\sigma^{(2)}}{dz}&={\\nonumber} \\\\\n&\\underbrace{ \\left| \\fd{1.5cm}{figures\/LP_hardfunc_low.pdf}~ \\right|^2 \\cdot \\int dr_2^+ dr_3^+ \\fd{3cm}{figures\/soft_quark_collinear_piece_purjet_low.pdf}\\otimes \\fd{3cm}{figures\/1quark_jetfunction_low} \\otimes \\fd{3cm}{figures\/soft_quark_diagram_wilsonframe_low.pdf} }_{\\text{Soft Quark Correction}}{\\nonumber} \\\\\n +&\\underbrace{ \\int d \\omega_1 d \\omega_2 \\left| \\fd{1.5cm}{figures\/NLP_hard_2quark_low}~ \\right|^2\\otimes\\fd{3cm}{figures\/2quark_jetfunc_low}\\otimes \\fd{3cm}{figures\/1gluon_jetfunc_low}\\otimes \\fd{3cm}{figures\/eikonal_factor_gluon_low.pdf} }_{\\text{Collinear Quark Correction 1}} {\\nonumber} \\\\\n +&\\underbrace{ \\int d \\omega_1 d \\omega_2 \\left| \\fd{1.5cm}{figures\/NLP_1quark1gluon_hard_low}~\\right|^2 \\otimes~\\fd{3cm}{figures\/1quark_jetfunction_low}\\otimes \\fd{3cm}{figures\/1quark1gluon_jetfunc_low}\\otimes \\fd{3cm}{figures\/eikonal_factor_gluon_low.pdf}}_{\\text{Collinear Quark Correction 2}}\\,,\n\\end{align}\nwhich perhaps makes clearer, schematically, how the cancellation of $\\nu$ anomalous dimensions occurs between the soft and collinear sectors of the theory. A similar triplet arises from the corrections associated with soft gluon emission. Each of the power suppressed functions in \\Eq{eq:schematic_factorization} will mix in rapidity, as was illustrated for the soft function in \\Eq{eq:quark_soft}. To perform the renormalization, we must therefore identify which operators are being mixed into in each case.\n\nIt is interesting to compare this to the structure of the subleading power factorization for SCET$_{\\rm I}$~ event shapes, which is described in \\cite{Moult:2018jjd,Moult:2019uhz}. There, subleading power jet functions involving two quark fields and those involving a quark and a gluon field appear in separately renormalization group consistent pairings. 
Therefore, we find that the SCET$_{\\rm II}$~ case gives rise to a much tighter structure in the EFT, since it links multiple hard scattering operators. Note that this also suggests that the issue of endpoint divergences is harder to avoid, although this is a topic that we leave for future study.\n\nIn conclusion, we have learned a number of general lessons about the subleading power $\\nu$-RG from our perturbative calculation. In particular, we have seen that subleading power corrections to the EEC (and more generally any observable that involves both rapidity and virtuality renormalization group flows) involve a mixing in rapidity at the first non-trivial order into a new class of operators. In \\Sec{sec:RRG_NLP} we will derive the structure of these new operators by studying the consistency of the renormalization group structure.\n\n\\section{Rapidity Renormalization Group at Subleading Power}\\label{sec:RRG_NLP}\n\nIn the previous section we showed that for the EEC, and more generally for subleading corrections to observables with both rapidity and virtuality scales, the subleading power rapidity RGE will always involve mixing into additional operators. The goal of this section will be to derive the structure of the operators that arise from this mixing, as well as their renormalization group properties. As in our study of thrust at subleading powers \\cite{Moult:2018jjd}, our approach to gaining a handle on the structure of the RG equations at subleading power will be to use an illustrative example of subleading power jet and soft functions whose renormalization group structure can be obtained from the known leading power RG equations. Once the particular form of the operators is derived, they can then be applied in other situations. \n\nWe will find that for the case of rapidity divergences considered here, the use of an illustrative example is more subtle than for the $\\mu$ RGE, due to additional divergences that appear. 
We will also find that, as a consequence, there are two distinct operators into which mixing can occur. Therefore, our analysis in \\Sec{sec:example} should be viewed as providing motivation for the type of operators that appear, although in the form in which they first arise they involve unregularized integrals. With this motivation for their structure, we are then able to use the commutativity of the $\\mu$ and $\\nu$ RGEs in \\Sec{sec:consistency} to fix the RG properties of these operators and provide regularized definitions. We then solve the associated evolution equations for the newly introduced operators in \\Sec{sec:solution}.\n\n\\subsection{An Illustrative Example}\\label{sec:example}\n\nWe will begin by considering the LP soft function for $p_T$, defined as\n\\begin{align}\n S(\\vec p_T) &= \\frac{1}{N_c^2 - 1} \\big\\langle 0 \\big| \\mathrm{Tr} \\bigl\\{\n \\mathrm{T} \\big[\\mathcal{S}^\\dagger_{\\bn} \\mathcal{S}_n\\big]\n \\delta^{(2)}(\\vec p_T-\\mathcal{P}_\\perp) \\overline{\\mathrm{T}} \\big[\\mathcal{S}^\\dagger_{n} \\mathcal{S}_{\\bn} \\big]\n \\bigr\\} \\big| 0 \\big\\rangle\n\\,.\\end{align}\nThis soft function satisfies the $\\mu$ and $\\nu$ RGEs\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S(\\vec p_T)&=\\int d\\vec q_T \\gamma^S_\\nu(p_T-q_T) S(\\vec q_T)\\,, {\\nonumber} \\\\\n\\mu \\frac{d}{d\\mu}S(\\vec p_T)&=\\gamma_\\mu^S S(\\vec p_T)\\,,\n\\end{align}\nwith the anomalous dimensions\n\\begin{align}\n\\gamma_\\mu^S &=4\\Gamma^{\\mathrm{cusp}}(\\alpha_s)\\log \\left( \\frac{\\mu}{\\nu}\\right)\\,,{\\nonumber} \\\\\n\\gamma_\\nu^S &=2\\Gamma^{\\mathrm{cusp}} (\\alpha_s)\\mathcal{L}_0\\left( \\vec p_T,\\mu \\right)\\,.\n\\end{align}\nCrucially, the $\\mu$ anomalous dimension is multiplicative, while the $\\nu$ anomalous dimension is a convolution in $p_T$ space. It is this fact that will ultimately lead to the subleading power rapidity renormalization group having a more interesting structure. 
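\n\nAs a simple consistency check on these anomalous dimensions, note that the $\\mu$ and $\\nu$ evolutions must commute, $[\\mu \\frac{d}{d\\mu}, \\nu \\frac{d}{d\\nu}]S(\\vec p_T)=0$, which requires\n\\begin{align}\n\\mu \\frac{d}{d\\mu} \\gamma_\\nu^S(\\vec p_T)=\\left( \\nu \\frac{d}{d\\nu} \\gamma_\\mu^S \\right) \\delta^{(2)}(\\vec p_T)\\,.\n\\end{align}\nAt fixed coupling, using the distributional identity $\\mu \\frac{d}{d\\mu}\\mathcal{L}_0(\\vec p_T,\\mu)=-2\\delta^{(2)}(\\vec p_T)$ (which holds in the conventions of \\cite{Ebert:2016gcn}), both sides indeed reduce to $-4\\Gamma^{\\mathrm{cusp}}(\\alpha_s)\\delta^{(2)}(\\vec p_T)$. It is this same commutativity, applied at subleading power, that we will use in \\Sec{sec:consistency} to fix the RG properties of the new operators.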
\n\nA simple trick to understand the structure of subleading power RG equations that was first used in \\cite{Moult:2018jjd} is to consider jet and soft functions obtained by multiplying the leading power jet and soft functions by a kinematic invariant. In the present case, we \ncan consider the subleading power soft function defined by\n\\begin{align}\nS_{p_T^2}^{(2)}(\\vec p_T)=\\vec p_T^2 S(\\vec p_T)\\,.\n\\end{align}\nThe superscript $(2)$ indicates that this function is power suppressed due to the explicit factor of $p_T^2$, and the subscript $p_T^2$ is meant to identify the nature of this power suppression. The structure of this function is known to all orders, since it is inherited from the known structure of the leading power soft function. However, understanding how this structure is manifested in the renormalization group structure of $S_{p_T^2}^{(2)}$ is non-trivial, and will reveal new operators.\n\n\n\nFirst, we note that the $\\mu$ RGE for this subleading power soft function is identical to the RGE of the leading power soft function, since the $\\mu$ RGE is multiplicative. We therefore have\n\\begin{align}\n\\mu \\frac{d}{d\\mu}S_{p_T^2}^{(2)}(\\vec p_T)=\\gamma^\\mu_S S_{p_T^2}^{(2)}(\\vec p_T)\\,.\n\\end{align}\nHowever, we find a more interesting behavior for the $\\nu$ RGE, due to the fact that it is a convolution in $p_T$. Multiplying both sides of the LP RGE by $\\vec p_T^2$, and using the identity\n\\begin{align}\\label{eq:pt_expand}\n\\vec p_T^2 =(\\vec p_T-\\vec q_T)^2 +\\vec q_T^2 +2(\\vec p_T- \\vec q_T) \\cdot \\vec q_T\\,,\n\\end{align}\nwe arrive at the equation\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S_{p_T^2}^{(2)}(\\vec p_T)&=\\int d\\vec q_T (\\vec p_T-\\vec q_T)^2 \\gamma_S(p_T-q_T) S(q_T) \\\\\n&+\\int d\\vec q_T \\gamma_S(p_T-q_T) \\left[ 2(\\vec p_T- \\vec q_T) \\cdot \\vec q_T S(q_T)\\right] + \\int d\\vec q_T \\gamma_S(p_T-q_T) \\left[ \\vec q_T^2 S(\\vec q_T)\\right]\\,. 
{\\nonumber}\n\\end{align}\nNote that we must arrange \\Eq{eq:pt_expand} in this form so that it is a kernel in $\\vec p_T- \\vec q_T$ multiplying a function of $q_T$.\nSimplifying this result, we find that we can write it as\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S_{p_T^2}^{(2)}(\\vec p_T)&=2\\Gamma^{\\mathrm{cusp}} {\\mathbb{I}}_S \\\\\n&+\\int d\\vec q_T \\gamma_S(p_T-q_T) 2(\\vec p_T- \\vec q_T) \\cdot \\vec S^{(1)}(q_T) +\\int d\\vec q_T \\gamma_S(p_T-q_T) S_{p_T^2}^{(2)}(\\vec q_T)\\,.{\\nonumber}\n\\end{align}\nHere we see a renormalization group mixing with two power suppressed functions. The first is \n\\begin{align}\n{\\mathbb{I}}_S\\propto\\int d^2\\vec q_T~ S(\\vec q_T)\\,,\n\\end{align}\nwhich we will refer to as the ``rapidity identity operator'', and which will play a crucial role in our subsequent analysis. As written, the integral over $q_T$ extends to infinity, so this expression is ill-defined and will require regularization. For this reason we have also been glib about what this operator depends on. In the next section we will present a way of deriving its renormalization group properties, as well as a regularized definition. The goal of this section is merely to illustrate that the subleading power soft function mixes into a new operator which is loosely related to a moment of the leading power soft function. The second operator arising in the mixing is \n\\begin{align}\n\\vec S^{(1)}(\\vec p_T)=\\vec p_T S^{(0)}(\\vec p_T)\\,,\n\\end{align}\nwhich is a vector soft function that scales as $\\mathcal{O}(\\lambda)$. This function can only appear in a factorization formula if it is dotted into a vector jet function, or some other vector quantity. \n\n\nWhile it would be extremely interesting to study the complete structure of this illustrative example in more detail, and we will return to this in future work, here we focus only on the elements of these equations that are required at leading logarithmic accuracy.
We note that\n\\begin{align}\n{\\mathbb{I}}_S=1+\\mathcal{O}(\\alpha_s)\\,, \\qquad S^{(1)}(\\vec p_T)=0+\\mathcal{O}(\\alpha_s)\\,.\n\\end{align}\nTherefore, in the leading logarithmic series, $S_{p_T^2}^{(2)}$ mixes into ${\\mathbb{I}}_S$, and we can ignore $S^{(1)}$. We can therefore simplify our equation to\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S_{p_T^2}^{(2)}(\\vec p_T,\\mu,\\nu)&=2\\Gamma^{\\mathrm{cusp}} {\\mathbb{I}}_S +\\int d\\vec q_T \\gamma_S(p_T-q_T) S_{p_T^2}^{(2)}(\\vec q_T,\\mu,\\nu)\\,.\n\\end{align}\nWhat is occurring is that the power suppressed soft function mixes with the rapidity identity operator. This provides a renormalization group derivation of the perturbative calculations in \\Sec{sec:FO}, where one generates a rapidity divergence and associated logarithm at the lowest non-trivial order. This is simply a perturbative description of the mixing into ${\\mathbb{I}}_S$. Now, with this general structure in mind, we would like to understand the properties of this rapidity identity operator.\n\n\n\nWe emphasize again that we have performed only a cursory study of this illustrative example, so as to be able to illustrate the renormalization group mixing in rapidity and to identify the structure of the rapidity identity operator. It would be particularly interesting to study the complete structure of this illustrative example to all logarithmic orders, and in particular to better understand the structure of the convolutions in $p_T$.
However, since the focus of this paper is on deriving the leading logarithmic series for the EEC, we will leave this to future work.\n\n\n\n\n\\subsection{Rapidity Identity Operators}\\label{sec:consistency}\n\n\n\nIn the previous section, we found that the subleading power rapidity renormalization group involves a mixing into the rapidity identity operator, which is loosely related to the first moment of the leading power soft function\n\\begin{align}\\label{eq:identity_unregularized}\n{\\mathbb{I}}_S\\propto\\int d^2\\vec q_T~ S(\\vec q_T)\\,.\n\\end{align}\nThe goal of this section will be to understand how to make sense of this operator, since it is ill-defined as currently written. This is a crucial difference as compared with the case of thrust considered in \\cite{Moult:2018jjd}. There a similar first moment operator appears, defined as \\cite{Moult:2018jjd}\n\\begin{align}\\label{eq:theta_soft_first}\nS^{(2)}_{g,\\theta}(k,\\mu)&= \\frac{1}{(N_c^2-1)} {\\rm tr} \\langle 0 | \\mathcal{Y}^T_{\\bar n} (0)\\mathcal{Y}_n(0) \\theta(k-\\hat{\\mathcal{T}}) \\mathcal{Y}_n^T(0) \\mathcal{Y}_{\\bar n}(0) |0\\rangle\\,.\n\\end{align}\nHowever, there the first moment is a finite integral, and does not introduce new divergences, allowing all the properties of this operator to be immediately deduced from those of the leading power operator. For the case of $p_T$ considered here, additional arguments must be used to fully fix the structure of the rapidity identity operator.\n\n\nThe $\\mu$ dependence of the rapidity identity operator can be derived using the commutativity of the RG \\cite{Chiu:2012ir,Chiu:2011qc}, namely that\n\\begin{align}\n\\left[ \\frac{d}{d\\mu}, \\frac{d}{d\\nu} \\right]=0\\,,\n\\end{align}\nwhich ensures the path independence of the combined $\\mu$ and $\\nu$ evolution. Here we consider a general subleading power soft operator $S^{(2)}$ (not necessarily $S_{p_T^2}^{(2)}$).
If we assume that this operator mixes into a single identity-type operator, then we obtain\n\\begin{align}\n\\mu \\frac{d}{d\\mu} \\left[ \\nu \\frac{d}{d\\nu} S^{(2)} \\right]&=\\mu \\frac{d}{d\\mu} \\left[ \\gamma_{\\delta{\\mathbb{I}} } {\\mathbb{I}}_S+\\gamma_\\nu S^{(2)} \\right]\\,,{\\nonumber} \\\\\n\\nu \\frac{d}{d\\nu} \\left[ \\mu \\frac{d}{d\\mu} S^{(2)} \\right]&= \\nu\\frac{d}{d\\nu} \\left[ \\gamma_\\mu^S S^{(2)} \\right]\\,.\n\\end{align}\nHere, we have used $\\gamma_{\\delta{\\mathbb{I}} }$ to denote the mixing anomalous dimension.\nPerforming the next differentiation, we then obtain the equality\n\\begin{align}\n\\gamma_{\\delta{\\mathbb{I}} } \\mu \\frac{d}{d\\mu} {\\mathbb{I}}_S +\\left[ \\mu \\frac{d}{d\\mu} \\gamma_\\nu \\right]S^{(2)} +\\gamma_\\nu \\gamma_\\mu^S S^{(2)} =\\gamma_\\mu^S \\gamma_{\\delta{\\mathbb{I}} } {\\mathbb{I}}_S +\\left[ \\nu \\frac{d}{d\\nu} \\gamma_\\mu^S \\right]S^{(2)} +\\gamma_\\nu \\gamma_\\mu^S S^{(2)}\\,.\n\\end{align}\nUsing the fact that commutativity is satisfied for the leading power anomalous dimensions, we arrive at\n\\begin{align}\n\\mu \\frac{d}{d\\mu}{\\mathbb{I}}_S = \\gamma_\\mu^S {\\mathbb{I}}_S\\,.\n\\end{align}\nThis fixes the $\\mu$ anomalous dimension of the rapidity identity operator. \n\n\nTo fix the $\\nu$ anomalous dimension, we now apply commutativity to ${\\mathbb{I}}_S$ itself, and use the fact that we know the $\\mu$ anomalous dimension.
We then have\n\\begin{align}\n\\left[ \\frac{d}{d\\mu}, \\frac{d}{d\\nu} \\right] {\\mathbb{I}}_S =0\\,,\n\\end{align}\nwhich gives the equation (at lowest order in $\\alpha_s$, which is sufficient for LL)\n\\begin{align}\n\\mu \\frac{d}{d\\mu} \\left( \\nu \\frac{d}{d\\nu} \\right) {\\mathbb{I}}_S=-4 \\Gamma^{\\mathrm{cusp}}\\,,\n\\end{align}\nwhich can be solved to give\n\\begin{align}\n\\nu \\frac{d}{d\\nu} {\\mathbb{I}}_S= -2 \\Gamma^{\\mathrm{cusp}} \\log \\left( \\frac{\\mu^2}{\\Lambda^2} \\right)\\,,\n\\end{align}\nwhere $\\Lambda^2$ is an as yet to be determined scale. The only available scales at leading logarithmic accuracy are $\\Lambda^2=p_T^2, \\mu^2, \\nu^2$. The cases $\\Lambda^2=p_T^2, \\mu^2$ both give the same behavior for the $\\nu$ RGE on the hyperbola $\\mu=p_T$, and therefore we will not treat them separately. It would be interesting to explore in more detail the differences between these RGEs.\n\n\nWe therefore find two distinct identity operators with two different rapidity anomalous dimensions\n\\begin{align}\n\\nu \\frac{d}{d\\nu}{\\mathbb{I}}^\\nu_S(\\mu, \\nu) &= -\\gamma_\\mu^S {\\mathbb{I}}^\\nu_S(\\mu, \\nu)\\,, {\\nonumber} \\\\\n\\nu \\frac{d}{d\\nu}{\\mathbb{I}}^{p_T^2}_S(p_T^2,\\mu, \\nu) &= 2 \\Gamma^{\\mathrm{cusp}} \\log \\left( \\frac{p_T^2}{\\mu^2} \\right) {\\mathbb{I}}^{p_T^2}_S(p_T^2,\\mu, \\nu)\\,.\n\\end{align}\nHere we have again used the superscript $p_T^2$ to indicate the identity function that the $S_{p_T^2}^{(2)}$ function mixes into, as we will argue shortly, and the superscript $\\nu$ to indicate the identity function that has a non-trivial $\\nu$ RGE on the $\\mu=p_T$ hyperbola.\n\nWhile this argument allows us to derive the renormalization group properties of these operators, which is sufficient for the purposes of this paper, it is also interesting to give explicit examples of functions that realize this behavior. This is easy to do by defining regularizations of the integral in \\Eq{eq:identity_unregularized}.
\nFunctions which give the behavior of the different identity operators at LL are\n\\begin{align}\n{\\mathbb{I}}^\\nu_S(\\mu, \\nu)&=\\int_0^{\\nu^2} d^2\\vec q_T~ S(\\vec q_T)\\,, \\\\\n{\\mathbb{I}}_S^{p_T^2}(p_T^2,\\mu, \\nu)&=\\int_0^{p_T^2} d^2\\vec q_T~ S(\\vec q_T)\\,.\n\\end{align}\nThe first function is a function of only $\\mu^2\/\\nu^2$, while the second also depends on $p_T^2$. These two functions have different properties under boosts, and therefore do not themselves mix. This provides leading logarithmic definitions of the rapidity identity operators. It would be extremely interesting to understand how to extend these definitions beyond leading logarithm; however, we leave this to future work.\n\n\nFor the particular soft function considered in our illustrative example, $S_{p_T^2}^{(2)}=p_T^2 S^{(0)}$, one can use the knowledge of the two-loop soft function to show that it is the operator ${\\mathbb{I}}^{p_T^2}_S(p_T^2,\\mu, \\nu)$ that is being mixed into. This can also be argued directly on symmetry grounds: at the scale $\\mu=p_T$, the leading power soft function does not flow in $\\nu$ at leading logarithmic accuracy. This is due to its boost invariance. This property is not broken by multiplying by $p_T^2$, and therefore it must also be a property of the counterterm operator that is being mixed with. This identifies the operator ${\\mathbb{I}}^{p_T^2}_S(p_T^2,\\mu, \\nu)$.
We can therefore make the earlier RG equation for this function more precise:\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S_{p_T^2}^{(2)}(\\vec p_T,\\mu,\\nu)&=2\\Gamma^{\\mathrm{cusp}} {\\mathbb{I}}^{p_T^2}_S(p_T^2,\\mu,\\nu) +\\int d\\vec q_T \\gamma_S(p_T-q_T) S_{p_T^2}^{(2)}(\\vec q_T,\\mu,\\nu)\\,.\n\\end{align}\nFor subleading power soft functions with explicit fields inserted, this argument no longer holds, and one can mix into the other operator.\n\nIt is also interesting to arrive at these conclusions for the structure of the renormalization group by direct manipulation of the renormalization group equations. This approach is ultimately ill-defined due to the lack of convergence of the integrals, but it gives the same results as derived from the commutativity of the RG, and provides additional insight into the origin of this behavior.\nRecall that the leading power renormalization group evolution equations are\n\\begin{align}\n\\nu \\frac{d}{d\\nu}S(\\vec p_T)&=\\int d\\vec q_T \\gamma^S_\\nu(p_T-q_T) S(q_T)\\,, {\\nonumber} \\\\\n\\mu \\frac{d}{d\\mu}S(\\vec p_T)&=\\gamma_\\mu^S S(\\vec p_T)\\,,\n\\end{align}\nwith the anomalous dimensions\n\\begin{align}\n\\gamma_\\mu^S &=4\\Gamma^{\\mathrm{cusp}}(\\alpha_s)\\log \\left( \\frac{\\mu}{\\nu}\\right)\\,,{\\nonumber} \\\\\n\\gamma_\\nu^S &=2\\Gamma^{\\mathrm{cusp}} (\\alpha_s)\\mathcal{L}_0\\left( \\vec p_T,\\mu \\right)\\,.\n\\end{align}\nSince the $\\mu$ anomalous dimension is multiplicative, it should not be changed if we integrate over $\\vec p_T$. In other words, we have\n\\begin{align}\n\\int d^2 p_T \\left[ \\mu \\frac{d}{d\\mu}S(\\vec p_T)=\\gamma_\\mu^S S(\\vec p_T) \\right] \\implies \\mu \\frac{d}{d\\mu} {\\mathbb{I}}_S= \\gamma_\\mu^S {\\mathbb{I}}_S\\,,\n\\end{align}\nwhich immediately leads to the fact that ${\\mathbb{I}}_S$ is multiplicatively renormalized in $\\mu$ with the same anomalous dimension as the leading power soft function.
For the $\\nu$ renormalization group equation, we have\n\\begin{align}\n\\nu \\frac{d}{d\\nu} {\\mathbb{I}}_S &=\\int d^2 \\vec q_T \\nu \\frac{d}{d\\nu} S(\\vec q_T){\\nonumber} \\\\\n&=\\int d^2 \\vec q_T \\left[ \\int d^2 \\vec p_T \\gamma_S(\\vec q_T-\\vec p_T) S(\\vec p_T) \\right]\\,.\n\\end{align}\nNow, performing the shift $\\vec q_T\\to \\vec q_T +\\vec p_T$, we obtain\n\\begin{align}\n\\nu \\frac{d}{d\\nu} {\\mathbb{I}}_S &=\\left[ \\int d^2 \\vec q_T \\gamma_S(\\vec q_T) \\right]{\\mathbb{I}}_S\\,,\n\\end{align}\nwhich is again multiplicative. The expression in square brackets is not well defined, and must be fixed by some regularization, as was shown above. Once a regulator is in place, this shift argument may no longer be valid. However, this exercise is merely meant to illustrate another perspective on why the rapidity identity operator should satisfy a multiplicative $\\nu$ RGE, and the argument presented earlier in this section should be taken as primary.\n\n\nIn summary, we have shown that there are non-trivial rapidity identity operators that arise at subleading power, and we have identified the renormalization group properties of these operators at leading logarithmic accuracy.
The first rapidity identity operator does not depend on the observable, and its anomalous dimensions are given by\n\\begin{align}\\label{eq:rap_identity_summary}\n\\mu \\frac{d}{d\\mu} {\\mathbb{I}}^\\nu_S \\left( \\frac{\\mu^2}{\\nu^2} \\right) &= -2\\Gamma^{\\mathrm{cusp}}\\log\\left( \\frac{\\nu^2}{\\mu^2} \\right)~ {\\mathbb{I}}^\\nu_S\\left( \\frac{\\mu^2}{\\nu^2} \\right)\\,, {\\nonumber} \\\\\n\\nu \\frac{d}{d\\nu} {\\mathbb{I}}^\\nu_S\\left( \\frac{\\mu^2}{\\nu^2} \\right) &= 2\\Gamma^{\\mathrm{cusp}} \\log\\left( \\frac{\\nu^2}{\\mu^2} \\right)~ {\\mathbb{I}}^\\nu_S\\left( \\frac{\\mu^2}{\\nu^2} \\right)\\,.\n\\end{align}\nThe second rapidity identity operator depends on the observable, and its anomalous dimensions are given by\n\\begin{align}\\label{eq:rap_identity2_summary}\n\\mu \\frac{d}{d\\mu} {\\mathbb{I}}^{p_T^2}_S \\left( p_T^2,\\mu, \\nu \\right) &= -2\\Gamma^{\\mathrm{cusp}}\\log\\left( \\frac{\\nu^2}{\\mu^2} \\right)~ {\\mathbb{I}}^{p_T^2}_S\\left( p_T^2,\\mu, \\nu \\right)\\,, {\\nonumber} \\\\\n\\nu \\frac{d}{d\\nu} {\\mathbb{I}}^{p_T^2}_S\\left( p_T^2,\\mu, \\nu \\right) &= 2\\Gamma^{\\mathrm{cusp}}\\log\\left( \\frac{p_T^2}{\\mu^2} \\right)~ {\\mathbb{I}}^{p_T^2}_S\\left( p_T^2,\\mu, \\nu \\right)\\,.\n\\end{align}\nHere we have written the anomalous dimensions in terms of the cusp anomalous dimension \\cite{Korchemsky:1987wg}, which is given by $\\Gamma^{\\mathrm{cusp}}=4(\\alpha_s\/4\\pi)C_A+\\mathcal{O}(\\alpha_s^2)$.\nWe expect these functions to appear ubiquitously in subleading power rapidity renormalization, and we therefore believe their identification is an important first step towards an understanding of the structure of subleading power rapidity logarithms.\n\nOne also has rapidity identity operators in the jet\/beam sectors, which are defined analogously to the soft operators. Their anomalous dimensions are fixed to be identical to those of the soft rapidity identity operators (up to a sign) by RG consistency.
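As a cross-check of these consistency arguments, the integrability condition underlying them (equality of the mixed $\mu$ and $\nu$ derivatives) can be verified symbolically for the fixed-coupling anomalous dimensions quoted in \Eq{eq:rap_identity_summary} and \Eq{eq:rap_identity2_summary}. The following sympy sketch is our own illustration, not part of the derivation:

```python
# Verify (symbolically, at fixed coupling) that the anomalous dimensions of the
# two rapidity identity operators satisfy the integrability/commutativity
# condition d(gamma_mu)/dlog(nu^2) = d(gamma_nu)/dlog(mu^2).
import sympy as sp

Lmu, Lnu, LpT, Gamma = sp.symbols('L_mu L_nu L_pT Gamma')  # logs of mu^2, nu^2, p_T^2

# I^nu_S: gamma_mu = -2 Gamma log(nu^2/mu^2), gamma_nu = +2 Gamma log(nu^2/mu^2)
gamma_mu_1 = -2*Gamma*(Lnu - Lmu)
gamma_nu_1 = 2*Gamma*(Lnu - Lmu)

# I^{p_T^2}_S: gamma_mu = -2 Gamma log(nu^2/mu^2), gamma_nu = 2 Gamma log(p_T^2/mu^2)
gamma_mu_2 = -2*Gamma*(Lnu - Lmu)
gamma_nu_2 = 2*Gamma*(LpT - Lmu)

# mu d/dmu = 2 d/dLmu etc.; the overall factors of 2 cancel in the condition,
# so we compare plain partial derivatives.
assert sp.diff(gamma_mu_1, Lnu) == sp.diff(gamma_nu_1, Lmu)
assert sp.diff(gamma_mu_2, Lnu) == sp.diff(gamma_nu_2, Lmu)
print("both identity operators satisfy the commutativity condition")
```

Both conditions reduce to the same constant, $-2\Gamma^{\mathrm{cusp}}$, which is the path-independence statement used above.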
For the particular case of the EEC, we can avoid them by always running to the jet scale. They are relevant for the case of $p_T$ resummation, where one must consider their matching onto the parton distribution functions (PDFs). We will present a more detailed discussion of these structures in future work.\n\nIt is also interesting to note that the subleading power Regge limit for massive scattering amplitudes has recently been studied in $\\mathcal{N}=4$ SYM in \\cite{Bruser:2018jnc}. Their solution also involves an interesting operator mixing. Since there are connections between the Regge limit and the EEC at leading power due to conformal transformations, it would be interesting to understand whether these persist at subleading power.\n\n\\subsection{Analytic Solution of Renormalization Group Evolution Equations}\\label{sec:solution}\n\nIn this section we provide an analytic solution to the renormalization group evolution equations of the subleading power soft functions when mixing with either type of identity operator. Here we will consider only the case of fixed coupling, since our current application will be to $\\mathcal{N}=4$ (which is conformal), and the reader who is interested in extending these results to running coupling can consult \\cite{Moult:2018jjd}. We will also only study the renormalization group flow in $\\nu$ at the scale $\\mu=p_T$ to simplify the analysis. This will be sufficient for our applications, since we can always run the hard function down to the scale $\\mu=p_T$.
We will consider separately the two different types of identity operators, since we know that, due to their different properties under boosts, they cannot mix with each other to generate a more complicated RG structure.\n\n\\subsection*{Identity Operator ${\\mathbb{I}}^{p_T^2}_S \\left( p_T^2,\\mu, \\nu \\right)$:}\n\nWe first consider the case of mixing with the operator ${\\mathbb{I}}^{p_T^2}_S \\left( p_T^2,\\mu, \\nu \\right)$, which gives rise to a simple $\\nu$ RGE at the scale $\\mu=p_T$. In this case, we get the RGE\n\\begin{align}\n\\nu \\frac{d}{d\\nu}\\left(\\begin{array}{c} S^{(2)} \\\\ {\\mathbb{I}}^{p_T^2}_S \\end{array} \\right) &= \\left( \\begin{array}{cc}0&\\gamma_{\\delta{\\mathbb{I}} }\\, \\\\ 0 & 0 \\end{array} \\right) \\left(\\begin{array}{c} S^{(2)} \\\\ {\\mathbb{I}}^{p_T^2}_S \\end{array} \\right) \\,, \n\\end{align}\nwith the boundary conditions\n\\begin{align}\n{\\mathbb{I}}^{p_T^2}_S(\\mu=p_T, \\nu=p_T)=1\\,, \\qquad S^{(2)}(\\mu=p_T,\\nu=p_T)=0\\,.\n\\end{align}\nNote that the specific choice of $\\mu=p_T$ is important for achieving this simple form of the RG, since it eliminates the need to consider the diagonal terms in the mixing matrix. More generally, these would be required, but for LL resummation as considered in this paper, the particular path considered here suffices, as explained in more detail in \\Sec{sec:N4_LL}.\n\nThis RGE is trivial to solve, and generates a single logarithm from the mixing\n\\begin{align}\nS^{(2)}(\\mu=p_T, \\nu=Q)=\n \\gamma_{\\delta{\\mathbb{I}} } \\log(p_T\/Q) {\\mathbb{I}}^{p_T^2}_S(\\mu=p_T, \\nu=p_T) \\,.\n\\end{align}\nNo additional logarithms are generated. Since this is the case that applies to the soft function $S_{p_T^2}^{(2)}=p_T^2 S^{(0)}$, one can easily check using the known form of the two-loop soft function that there is indeed only a single logarithm at any $\\nu$ scale for $\\mu=p_T$.
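The single-logarithm statement can be checked directly by solving the triangular system above. The following sympy sketch is our own illustration (with ${\mathbb{I}}^{p_T^2}_S=1$ at LL, and the overall sign and normalization of $\gamma_{\delta{\mathbb{I}}}$ treated as a bookkeeping convention):

```python
# Solve the triangular nu-RGE with vanishing diagonal entries: the mixing
# generates exactly one power of the logarithm and nothing more.
# t = log(nu/p_T), gd = gamma_{delta I}, and I^{p_T^2}_S = 1 at LL.
import sympy as sp

t = sp.symbols('t')
gd = sp.symbols('gamma_dI')
S = sp.Function('S')

# nu dS/dnu = dS/dt = gd * 1, with boundary condition S(t=0) = 0
sol = sp.dsolve(sp.Eq(S(t).diff(t), gd), S(t), ics={S(0): 0})
S_sol = sol.rhs

assert S_sol == gd*t                 # a single logarithm ...
assert sp.diff(S_sol, t, 2) == 0     # ... and no higher powers
print("S^(2)(nu) =", S_sol, "with t = log(nu/p_T)")
```

Whether the result is written with $\log(\nu/p_T)$ or $\log(p_T/Q)$ at $\nu=Q$ is a sign convention absorbed into $\gamma_{\delta{\mathbb{I}}}$.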
\n\nIn an actual factorization formula, this single rapidity logarithm is then dressed by a Sudakov coming from running the hard function to the scale $\\mu=p_T$, giving rise to a result of the form\n\\begin{align}\n\\gamma_{\\delta{\\mathbb{I}} } \\log(p_T\/Q) \\exp \\left[ -2 a_s \\log^2\\left( \\frac{ p_T^2}{Q^2} \\right) \\right] H(Q) {\\mathbb{I}}^{p_T^2}_S(\\mu=p_T, \\nu=p_T)\\,.\n\\end{align}\n This provides quite an interesting structure, namely a single logarithm arising from $\\nu$ evolution, which is then dressed by double logarithms from $\\mu$ evolution. The $\\nu$ and $\\mu$ RGEs therefore completely factorize at leading logarithmic order. An identical structure was observed for the case of thrust in \\cite{Moult:2018jjd}; however, there both the single logarithm and the tower of double logarithms arise from the $\\mu$ RGE. \n\n\\subsection*{Identity Operator ${\\mathbb{I}}^\\nu_S \\left(\\mu, \\nu \\right)$:}\n\nMixing with the rapidity identity operator ${\\mathbb{I}}^\\nu_S \\left(\\mu, \\nu \\right)$ gives rise to a more non-trivial rapidity flow at the scale $\\mu=p_T$.\nIn this case, we get the RGE\n\\begin{align}\n\\nu \\frac{d}{d\\nu}\\left(\\begin{array}{c} S^{(2)} \\\\ {\\mathbb{I}}^\\nu_S \\end{array} \\right) &= \\left( \\begin{array}{cc}0&\\gamma_{\\delta{\\mathbb{I}} }\\, \\\\ 0 & -\\gamma_\\mu^S \\end{array} \\right) \\left(\\begin{array}{c} S^{(2)} \\\\ {\\mathbb{I}}^\\nu_S \\end{array} \\right) \\,, \n\\end{align}\nwith the boundary conditions\n\\begin{align}\n{\\mathbb{I}}^\\nu_S(\\mu=p_T, \\nu=p_T)=1\\,, \\qquad S^{(2)}(\\mu=p_T,\\nu=p_T)=0\\,.\n\\end{align}\nThis RGE is a specific case of the general RGE solved in \\cite{Moult:2018jjd}. However, here we can solve it in two steps by first solving for the identity operator, and then plugging it into the solution for the soft function.
This will provide some insight into the structure of the final solution.\n\nThe solution of\n\\begin{align}\n\\nu \\frac{d}{d\\nu} {\\mathbb{I}}^\\nu_S =-\\gamma_\\mu^S {\\mathbb{I}}^\\nu_S \\equiv \\tilde \\gamma \\log\\left( \\frac{\\nu^2}{p_T^2} \\right) {\\mathbb{I}}^\\nu_S\\,,\n\\end{align}\nis easily found to be\n\\begin{align}\n {\\mathbb{I}}^\\nu_S=\\exp\\left( \\frac{\\tilde \\gamma}{4} \\log^2\\left( \\frac{\\nu^2}{p_T^2} \\right) \\right){\\mathbb{I}}^\\nu_S(\\mu=p_T, \\nu=p_T)\\,.\n\\end{align}\nThe original soft function then satisfies the inhomogeneous equation\n\\begin{align}\n\\nu \\frac{d}{d\\nu} S^{(2)}=\\gamma_{\\delta{\\mathbb{I}} } \\exp\\left( \\frac{\\tilde \\gamma}{4} \\log^2\\left( \\frac{\\nu^2}{p_T^2} \\right) \\right){\\mathbb{I}}_S^\\nu(\\mu=p_T, \\nu=p_T)\\,.\n\\end{align}\nWe can integrate this equation up to the scale $\\nu=Q$ to find\n\\begin{align}\nS^{(2)}(\\mu=p_T, \\nu=Q)=\n - \\frac{\\sqrt{\\pi} \\gamma_{\\delta{\\mathbb{I}} }}{ \\sqrt{\\tilde \\gamma}} \\mathrm{erfi}\\left[ \\sqrt{\\tilde \\gamma} \\log(p_T\/Q) \\right] {\\mathbb{I}}^\\nu_S(\\mu=p_T, \\nu=p_T) \\,, \n\\end{align}\nwhere $\\mathrm{erfi}$ is the imaginary error function, defined as \n\\begin{align}\n\\mathrm{erfi}(z)=-i \\mathrm{erf}(iz)\\,.\n\\end{align}\nThe fact that the solution is not simply a Sudakov is quite striking, and shows that the subleading power logarithms have a richer structure than at leading power. We expect that this structure will be quite common in the study of subleading power corrections to observables with rapidity evolution.\nThe $\\mathrm{erfi}$ function satisfies the identity\n\\begin{align}\n\\frac{d}{dz}\\mathrm{erfi}(z) = \\frac{2}{\\sqrt{\\pi}} e^{z^2}\\,.\n\\end{align}\nIt is perhaps quite intuitive that an integral of a Sudakov appears, since the rapidity identity operator is the integral of the leading power soft function.
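As a numerical sanity check of the two identities used in this solution (our own check, with illustrative rather than physical parameter values), one can confirm with mpmath that the derivative identity holds and that integrating the Gaussian-type exponential appearing in the inhomogeneous equation indeed produces an $\mathrm{erfi}$:

```python
# Check: d/dz erfi(z) = (2/sqrt(pi)) exp(z^2), and the integral of a
# "Sudakov-like" exponential exp(g t^2) is an erfi (up to the normalization
# conventions absorbed into gamma_{delta I} and tilde gamma).
from mpmath import mp, erfi, exp, sqrt, pi, quad, diff

mp.dps = 30
g = mp.mpf('0.96')   # stand-in for tilde gamma = 8 a_s (illustrative value)
T = mp.mpf('1.7')    # stand-in for log(Q/p_T) (illustrative value)

# derivative identity
z0 = mp.mpf('0.83')
assert abs(diff(erfi, z0) - 2/sqrt(pi)*exp(z0**2)) < mp.mpf('1e-20')

# integrating the exponential of a double logarithm gives an erfi
lhs = quad(lambda t: exp(g*t**2), [0, T])
rhs = sqrt(pi)/(2*sqrt(g))*erfi(sqrt(g)*T)
assert abs(lhs - rhs) < mp.mpf('1e-20')
print("erfi identities check out")
```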
However, this is a new structure that has not previously appeared in subleading power calculations. It is amusing to note that a similar structure also appears in the calculation of Sudakov safe observables \\cite{Larkoski:2015lea} due to the integration over a resummed result.\n\n\\section{The Energy-Energy Correlator in $\\mathcal{N}=4$ SYM}\\label{sec:N4}\n\nIn this section we use our subleading power rapidity renormalization group to derive the leading logarithmic series at NLP for a physical observable, namely the EEC in $\\mathcal{N}=4$ SYM. We then compare our predictions with the recent calculation of \\cite{Henn:2019gkr} to $\\mathcal{O}(\\alpha_s^3)$, finding perfect agreement.\n\n\\subsection{Leading Logarithmic Resummation at Subleading Power}\\label{sec:N4_LL}\n\nIn performing the resummation of subleading power logarithms for the EEC, we must clearly state several assumptions that are made, which we hope can be better understood in future work. Nevertheless, we believe that the fact that our result agrees with the calculation of \\cite{Henn:2019gkr} provides strong support for these assumptions. The goal of this paper has been to understand how far one can get in understanding the subleading power rapidity renormalization group using only symmetries and consistency arguments. Using this approach, we found that at leading logarithmic order, there are two distinct identity operators with different renormalization group properties. To derive the structure of the resummed result for the EEC in the back-to-back limit, our approach will therefore be to match a linear combination of these two solutions to the known expansion of the EEC. To fix both coefficients requires two inputs, which we take to be the $\\alpha_s$ and $\\alpha_s^2$ leading logarithms. This completely fixes our result, which can then be used to predict the coefficient of the $\\alpha_s^3$ leading logarithm, for which we will find agreement with the calculation of \\cite{Henn:2019gkr}.
This approach should simply be viewed as a shortcut to a complete operator based analysis, which enables us to explore the general structure of the rapidity evolution equations at NLP, and to show that they predict non-trivial behavior of the NLP series for a physical observable. A more complete operator based analysis will be presented in a future paper.\n\nSecondly, we must also assume that there exists a consistent factorization at subleading powers that does not have endpoint divergences. The presence of endpoint divergences in the factorization formula would violate our derivations based on the consistency of the RG. At this stage, both for the standard $\\mu$ renormalization group at subleading power, and for the subleading power rapidity renormalization group considered here, this is still ultimately an assumption. Endpoint divergences are known to appear generically at next-to-leading logarithm, but have also been shown to appear even at LL in certain cases when fields with different color representations are involved (e.g. both quarks and gluons) \\cite{Moult:2019uhz}. However, since in $\\mathcal{N}=4$ all fields are in the same representation, we work under the assumption that there are no endpoint divergences at leading logarithmic accuracy. We will see that this assumption is strongly supported by the fact that we are able to exactly reproduce to $\\mathcal{O}(\\alpha_s^3)$ the highly non-trivial series that arises from the exact calculation of \\cite{Henn:2019gkr}. However, we acknowledge that before our techniques can be more widely applied, it will be important to understand when endpoint divergences do occur in the rapidity renormalization group, and how they can be resolved.
\n\nTherefore, working under the assumption of convergent convolutions for the subleading power factorization, all anomalous dimensions are fixed by symmetries, as described above, and the resummation of the subleading power logarithms is now a simple application of the renormalization group evolution equations derived in \\Sec{sec:RRG_NLP}. To perform the resummation, one must resum logarithms in both $\\mu$ and $\\nu$. We choose to perform this resummation using the following evolution path\n\\begin{itemize}\n\\item Run the hard functions from $\\mu=Q$ to $\\mu=p_T$.\n\\item Run the soft functions in rapidity from $\\nu=p_T$ to $\\nu=Q$.\n\\end{itemize}\nThis path is the most convenient, since it avoids the need to perform any resummation of the rapidity anomalous dimensions \\cite{Chiu:2012ir,Chiu:2011qc}. To run the hard function from $\\mu=Q$ down to the soft scale, we use the evolution equation\n\\begin{align}\n\\mu \\frac{d}{d\\mu} H = \\gamma_H H\\,, \\qquad \\gamma_H =-8 a_s \\log \\left( \\frac{ \\mu^2}{Q^2} \\right)\\,.\n\\end{align}\nHere and throughout this section, we will use $a_s=\\alpha_s C_A\/(4\\pi)$ to simplify the notation.\nThe renormalization group equation for the hard function has the simple solution\n\\begin{align}\nH(p_T)=H(Q) \\exp \\left[ -2 a_s \\log^2\\left( \\frac{ p_T^2}{Q^2} \\right) \\right]\\,.\n\\end{align}\nFor the soft function evolution, we use the results derived in \\Sec{sec:RRG_NLP} for the evolution of the two different types of rapidity identity operators. Since these rapidity identity functions cannot themselves mix due to their different boost properties, the result at LL order is necessarily a linear combination of the two.
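The quoted solution of the hard-function RGE is easily verified symbolically; the following sympy sketch (our own cross-check, not part of the original analysis) confirms that the exponential solves the evolution equation with boundary condition $H(\mu=Q)=H(Q)$:

```python
# Verify that H(mu) = H(Q) exp(-2 a_s log^2(mu^2/Q^2)) obeys
# mu dH/dmu = -8 a_s log(mu^2/Q^2) H, with H(mu=Q) = H(Q).
import sympy as sp

mu, Q, a_s, HQ = sp.symbols('mu Q a_s H_Q', positive=True)
H = HQ*sp.exp(-2*a_s*sp.log(mu**2/Q**2)**2)

lhs = mu*sp.diff(H, mu)                    # mu dH/dmu
rhs = -8*a_s*sp.log(mu**2/Q**2)*H          # gamma_H * H
assert sp.simplify(lhs - rhs) == 0
assert H.subs(mu, Q) == HQ                 # boundary condition at mu = Q
print("hard-function Sudakov solution verified")
```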
For the first type of mixing, we have\n\\begin{align}\nS_1^{(2)}(\\mu=p_T, \\nu=Q)=\n \\gamma_{\\delta{\\mathbb{I}},1 } \\log(p_T\/Q) {\\mathbb{I}}^{p_T^2}_S(\\mu=p_T, \\nu=p_T) \\,,\n\\end{align}\nwhere $ \\gamma_{\\delta{\\mathbb{I}},1 }$ is the anomalous dimension for mixing into the $ {\\mathbb{I}}^{p_T^2}_S$ soft function, and for the second type, we have\n\\begin{align}\nS_2^{(2)}(\\mu=p_T, \\nu=Q)=\n - \\frac{\\sqrt{\\pi} \\gamma_{\\delta{\\mathbb{I}},2 }}{ \\sqrt{\\tilde \\gamma}} \\mathrm{erfi}\\left[ \\sqrt{\\tilde \\gamma} \\log(p_T\/Q) \\right] {\\mathbb{I}}^\\nu_S(\\mu=p_T, \\nu=p_T)\\,,\n\\end{align}\nwith $\\tilde \\gamma =8 a_s$, and where $ \\gamma_{\\delta{\\mathbb{I}},2 }$ is the anomalous dimension for mixing into the $ {\\mathbb{I}}^{\\nu}_S$ soft function.\nOur general prediction is then a linear combination of these two solutions, dressed by the hard Sudakov,\n\\begin{align}\n\\text{EEC}^{(2)}=\\left\\{ \\gamma_{\\delta{\\mathbb{I}},1 } \\log(p_T\/Q) - \\frac{\\sqrt{\\pi} \\gamma_{\\delta{\\mathbb{I}},2 }}{ \\sqrt{\\tilde \\gamma}} \\mathrm{erfi}\\left[ \\sqrt{\\tilde \\gamma} \\log(p_T\/Q) \\right] \\right\\} \\exp \\left[ -2 a_s \\log^2\\left( \\frac{ p_T^2}{Q^2} \\right) \\right]\\,.\n\\end{align}\nMatching to the $a_s$ and $a_s^2$ coefficients from expanding the result of \\cite{Belitsky:2013ofa,Belitsky:2013xxa,Belitsky:2013bja} (these coefficients are given in \\Sec{sec:N4_expand}), we find that $\\gamma_{\\delta{\\mathbb{I}},1 }=0$.
We therefore find the simple result for the NLP leading logarithmic series to all orders in $a_s$ in $\\mathcal{N}=4$ SYM theory\n\\begin{align}\n\\text{EEC}^{(2)}=-\\frac{\\sqrt{\\pi}a_s}{ \\sqrt{2a_s}} \\mathrm{erfi}\\left[ \\sqrt{2 a_s} \\log(1-z) \\right] \\exp\\left[ -2 a_s \\log^2(1-z) \\right]\\,.\n\\end{align}\nInterestingly, for the particular case of $\\mathcal{N}=4$, we find that the result only involves the operator ${\\mathbb{I}}^\\nu_S \\left(\\mu, \\nu \\right)$.\nThis result takes an interesting form, going beyond the simple Sudakov exponential \\cite{Sudakov:1954sw} for the leading logarithms at leading power. We believe that this structure will appear somewhat generically in subleading power rapidity resummation. It is interesting to note that up to the prefactor this particular structure is in fact a well-known special function, called Dawson's integral, which is defined as\n\\begin{align}\nD(x)=\\frac{1}{2}\\sqrt{\\pi}e^{-x^2}\\mathrm{erfi}(x)\\,.\n\\end{align}\nWe can therefore write our answer for the NLP leading logarithmic series in the simple form\n\\begin{align}\n\\text{EEC}^{(2)}=-\\sqrt{2a_s}~D\\left[ \\sqrt{2 a_s} \\log(1-z) \\right]\\,.\n\\end{align}\nSince the anomalous dimension is fixed by renormalization group consistency to be the cusp anomalous dimension (see \\Eq{eq:rap_identity_summary}), we can rewrite this as\n\\begin{align}\n\\boxed{\\text{EEC}^{(2)}=-\\sqrt{2a_s}~D\\left[ \\sqrt{\\frac{\\Gamma^{\\mathrm{cusp}}}{2}} \\log(1-z) \\right]\\,.}\n\\end{align}\nThis expression is a primary result of this paper. We will refer to this functional form as ``Dawson's Sudakov''. We find it pleasing that despite the somewhat non-trivial functional structure, the double logarithmic asymptotics of the EEC in the back-to-back limit are still driven by the cusp anomalous dimension \\cite{Korchemsky:1987wg}, much like at leading power (see \\Eq{eq:resformula_N4}), and as is expected physically.
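The equivalence of the $\mathrm{erfi}$ form and the Dawson form can be confirmed numerically; in the mpmath sketch below (our own check; the values of $a_s$ and $z$ are purely illustrative), $D(x)$ is implemented directly from its definition:

```python
# Check numerically that the erfi-times-Gaussian form of EEC^(2) equals the
# "Dawson's Sudakov" form, using D(x) = (sqrt(pi)/2) e^{-x^2} erfi(x).
from mpmath import mp, erfi, exp, sqrt, pi, log

mp.dps = 30
a = mp.mpf('0.01')           # illustrative value of a_s = alpha_s C_A/(4 pi)
L = log(mp.mpf('0.2'))       # illustrative log(1-z)

def dawson(x):
    # Dawson's integral, defined via the imaginary error function
    return sqrt(pi)/2*exp(-x**2)*erfi(x)

erfi_form = -sqrt(pi)*a/sqrt(2*a)*erfi(sqrt(2*a)*L)*exp(-2*a*L**2)
dawson_form = -sqrt(2*a)*dawson(sqrt(2*a)*L)
assert abs(erfi_form - dawson_form) < mp.mpf('1e-25')
print("EEC^(2) at LL: erfi form == Dawson form")
```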
It would be extremely interesting to understand whether the next-to-leading logarithms at next-to-leading power are driven by the collinear anomalous dimension, as is the case at leading power. We note that from \\Eq{eq:N4_NLP_fixedorder} there is no constant term at NLP (it can easily be checked that this is true at any power), and therefore it is plausible that there is a simple generalization of this formula that also incorporates the next-to-leading logarithms.\n\nIn \\Fig{fig:plot}, we compare the standard Sudakov with Dawson's Sudakov. Note that while $\\mathrm{erfi}\\left[ \\sqrt{2 a_s} \\log(1-z) \\right]$ diverges as $z\\to 1$, this is overcome by the suppression from the Sudakov exponential. However, the growth of the $\\mathrm{erfi}$ leads to an enhancement of the distribution as $z\\to 1$, as compared to a standard Sudakov.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.75\\columnwidth]{figures\/sudakovs}\n\\end{center}\n\\caption{A comparison of the standard Sudakov exponential, which describes the all-orders exponentiation of the leading power logarithms for the EEC, with the subleading power Sudakov involving the imaginary error function ($\\mathrm{erfi}$) derived in this section, which we refer to as Dawson's Sudakov. Dawson's Sudakov exhibits a more peaked structure as $z\\to 1$. A value of $\\alpha_s=0.12$ was chosen to plot the numerical results.}\n\\label{fig:plot}\n\\end{figure}\n\nIt is also interesting to consider the Taylor expansion of our result in $a_s$. 
We find that it can be written as a remarkably simple series in terms of the double factorial\n\\begin{align}\n\\text{EEC}^{(2)}\n&=\\sum\\limits_{n=0}^{\\infty} \\frac{(-1)^{n+1}}{(2n+1)!!} a_s^{n+1} \\log((1-z)^2)^{2n+1}\\,,\n\\end{align}\n(here we have chosen to move factors of $2$ into the definition of the logarithm, but they can equally well be moved into the definition of $a_s$)\nwhere we recall that the double factorial is defined as\n\\begin{align}\nn!!=n(n-2)(n-4)\\cdots .\n\\end{align}\nExplicitly, the first few terms in the expansion are\n\\begin{align}\\label{eq:EEC_RG_expand}\n\\text{EEC}^{(2)}&=-2 a_s \\log(1-z) +\\frac{8}{3}a_s^2 \\log^3(1-z)-\\frac{32}{15}a_s^3 \\log^5(1-z) +\\frac{128}{105}a_s^4 \\log^7(1-z) {\\nonumber} \\\\\n&-\\frac{512}{945}a_s^5 \\log^9(1-z)+\\frac{2048}{10395}a_s^6 \\log^{11}(1-z)-\\frac{8192}{135135}a_s^7 \\log^{13}(1-z)+\\cdots\\,.\n\\end{align}\nThe presence of the double factorial, as compared with the single factorial at leading power, generates a more interesting series of rational coefficients. In \\cite{Dixon:2019uzg} the logarithms in the collinear limit of the EEC at each logarithmic order were written as simple infinite sums of factorials multiplied by polynomials in logarithms. It would be very interesting to understand if the subleading logarithms at NLP in the back-to-back limit can also be written as generalized double factorial sums.\n\nIt will be important to understand if this structure persists in QCD. The results in QCD are known only to $a_s^2$ \\cite{Dixon:2018qgp,Luo:2019nig}; therefore, without performing a more complete operator analysis, these results can be used to fix our prediction as a linear combination of our two RG solutions, but they do not enable a non-trivial check, which is particularly important due to the possible presence of endpoint divergences. While we will perform an operator-based analysis in a future publication, we find it interesting to conjecture a result. 
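As a cross-check on the algebra (a symbolic sketch, not part of the original derivation), the rational coefficients above can be reproduced by expanding Dawson's integral with sympy; here the auxiliary symbol $y$, with $y^2$ standing in for $a_s$, is introduced purely so that the expansion involves only integer powers.

```python
import sympy as sp

# y**2 plays the role of a_s, so the expansion is a polynomial in integer powers of y
x, y, L = sp.symbols('x y L', positive=True)

# Dawson's integral D(x) = (sqrt(pi)/2) exp(-x^2) erfi(x), as defined in the text
dawson = sp.sqrt(sp.pi) / 2 * sp.exp(-x**2) * sp.erfi(x)
dawson_series = sp.series(dawson, x, 0, 8).removeO()

# EEC^(2) = -sqrt(2 a_s) D(sqrt(2 a_s) log(1-z)), with L = log(1-z) and a_s = y**2
eec2 = sp.expand(-sp.sqrt(2) * y * dawson_series.subs(x, sp.sqrt(2) * y * L))
print(eec2)
```

The coefficients of $y^2, y^4, y^6, y^8$ reproduce $-2a_s\log(1-z)$, $\tfrac{8}{3}a_s^2\log^3(1-z)$, $-\tfrac{32}{15}a_s^3\log^5(1-z)$ and $\tfrac{128}{105}a_s^4\log^7(1-z)$ of the expansion above.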
Based on our intuition from the case of the thrust observable \\cite{Moult:2019uhz}, we expect that we are most likely to avoid endpoint divergences for the case of the EEC in Higgs decays to gluons in pure Yang-Mills. Under the assumption that there are no endpoint divergences, we can fix an ansatz in terms of our two RG solutions to be\n\\begin{align}\n\\left.\\text{EEC}^{(2)}\\right|_{\\text{Yang-Mills}}=2 a_s \\log(1-z) \\exp \\left[ -2 a_s \\log^2\\left(1-z \\right) \\right] -4\\sqrt{2a_s}~D\\left[ \\sqrt{\\frac{\\Gamma^{\\mathrm{cusp}}}{2}} \\log(1-z) \\right]\\,,\n\\end{align}\nwhich takes a slightly more complicated form than its $\\mathcal{N}=4$ counterpart, since it involves both types of rapidity identity operators. Unfortunately, unlike for the case of the EEC in $\\mathcal{N}=4$, there is no $a_s^3$ result to which we are able to compare this result. \nIt will be interesting to verify or disprove this conjecture using a complete operator analysis, which we leave to future work. This will provide insight into the presence (or absence) of endpoint divergences in this particular case.\n\nIt would also be extremely interesting to understand how to derive this result directly from the four point correlator, or from the light ray OPE \\cite{Kravchuk:2018htv,Kologlu:2019bco,Kologlu:2019mfz}. From the point of view of the four point correlator, the back-to-back limit corresponds to the so-called double light cone limit. This limit has been used to study the EEC in \\cite{Korchemsky:2019nzm}, and has been studied in gauge theories in \\cite{Alday:2010zy,Alday:2013cwa} and in more general conformal field theories in \\cite{Alday:2015ota}. 
\n\n\\subsection{Comparison with Direct Fixed Order Calculation to $\\mathcal{O}(\\alpha_s^3)$}\\label{sec:N4_expand}\n\nAs mentioned earlier, we have chosen to perform the resummation for the EEC in $\\mathcal{N}=4$ since we can directly compare with the remarkable calculation of the EEC for arbitrary angles using the triple discontinuity of the four point correlator \\cite{Henn:2019gkr} (the result to $\\mathcal{O}(\\alpha_s^2)$ in $\\mathcal{N}=4$ was calculated in \\cite{Belitsky:2013ofa,Belitsky:2013xxa,Belitsky:2013bja}), and exploiting the large amount of information known about its structure, see e.g. \\cite{Eden:2011we,Eden:2012tu,Drummond:2013nda,Korchemsky:2015ssa,Bourjaily:2016evz}. This calculation of the EEC provides extremely valuable data for understanding the structure of kinematic limits at subleading power, both for the particular case considered here and beyond.\n\nAlthough we will be interested in the expansion in the $z\\to 1$ limit, it is interesting to understand what parts of the full angle result the back-to-back limit is sensitive to at subleading powers, so we briefly review the structure of the result of \\cite{Henn:2019gkr}.\nThe result of \\cite{Henn:2019gkr} is written as\n \\begin{align}\\label{defF}\n F(\\zeta) \\equiv 4 \\zeta^2 (1-\\zeta) {\\text{EEC}} (\\zeta) \\,,\n \\end{align}\nwhere at NNLO, \n \\begin{align} \\label{resultFatNNLO}\nF_{\\rm NNLO} (\\zeta) &= f_{\\rm HPL} (\\zeta) + \\int_{0}^1 d \\bar z \\int_{0}^{\\bar z } dt \\, \\frac{\\zeta-1}{t(\\zeta-\\bar z)+(1-\\zeta)\\bar z} \\nt \n& \\times \\left[ R_1 ( z, \\bar z ) P_1 (z, \\bar z ) + \n R_2 ( z, \\bar z ) P_2 (z, \\bar z ) \\right] \\,,\n\\end{align} \nwith\n \\begin{align} \n R_1 = \\frac{z \\bar z }{ 1 - z - \\bar z}\\, , \\quad R_2 = \\frac{z^2 \\bar z }{ (1-z)^2 ( 1 - z \\bar z)}\\,.\n \\end{align}\n Here $P_1$ and $P_2$ are weight three HPLs in $z$ and $\\bar z$. 
These two-fold integrals are believed to be elliptic, and so we will refer to them as elliptic contributions. The term $f_{\\rm HPL} (\\zeta)$ is expressed in terms of harmonic polylogarithms. The leading power asymptotics in the back-to-back limit are described entirely by $ f_{\\rm HPL}$. At NLP, we also require the elliptic contributions, showing that subleading powers probe more of the structure of the result. This in turn makes the agreement with our result derived from the RG more non-trivial.\n \n\nThe expansion of $f_{\\rm HPL} (\\zeta)$ can be performed straightforwardly, and produces only $\\zeta$ values, $\\log^n(2)$, and $\\text{Li}_n(1\/2)$. On the other hand, the expansion of the elliptic contribution is more non-trivial, and leads to a more complicated set of constants. To compute the result, we expanded under the integral sign, and integrated using HyperInt \\cite{Panzer:2014caa}. This produced polylogarithms evaluated at sixth roots of unity up to weight 5. These were reduced to a basis of constants using results from \\cite{Henn:2015sem}. The final result involves several non-zeta valued constants, which were guessed using hints from \\cite{Broadhurst:1998rz,Fleischer:1999mp,Davydychev:2000na} for the classes of numbers that should appear in the answer, and reconstructed using the PSLQ algorithm. 
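To give a flavor of this last step (a toy sketch only; the target value and basis below are invented for illustration and are not the constants of this calculation), the PSLQ algorithm takes a high-precision numerical value together with a basis of candidate constants and returns an integer relation among them, e.g. via mpmath:

```python
from mpmath import mp, mpf, zeta, log, pslq

mp.dps = 40  # PSLQ needs substantially more digits than the relation it recovers

# Hypothetical target: pretend this value came out of a numerical integration
# and is conjectured to be a rational combination of zeta(3), zeta(4), log(2)^2.
target = 3*zeta(3) - mpf(7)/4*zeta(4) + 2*log(2)**2

basis = [target, zeta(3), zeta(4), log(2)**2]
rel = pslq(basis, tol=mpf(10)**-30)
print(rel)  # integer vector c with sum(c[i]*basis[i]) = 0, recovering the combination
```

In practice one works at far higher precision and with a larger basis (including the sixth-root-of-unity constants mentioned above), and a found relation is then confirmed at still higher precision.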
We found that the elliptic piece could be expressed as \n\\begin{align}\n\\frac{\\text{Elliptic}_{\\text{NLP}}}{2}&=2\\frac{L^5}{5!}+\\frac{1}{2}\\zeta_2 \\frac{L^3}{3!} +\\frac{3}{4}\\zeta_3 \\frac{L^2}{2!}+ \\left(-\\frac{67}{32}+\\frac{3}{4}\\zeta_2+\\frac{9}{4}\\zeta_3 -\\frac{7}{4} \\zeta_4\\right) L{\\nonumber} \\\\\n&+\\frac{85}{32}-\\frac{49}{16}\\zeta_2+\\frac{87}{8}\\zeta_3 +\\frac{37}{4}\\zeta_4 +\\frac{3}{4}\\zeta_2\\zeta_3+\\frac{5}{2}\\zeta_5-\\frac{611}{108}\\zeta_4 \\sqrt{3} \\pi+6\\sqrt{3}I_{2,3}\\,.\n\\end{align}\nHere $I_{2,3}$ is a higher weight Clausen-type constant \\cite{Broadhurst:1998rz}, the case $a=2$, $b=3$ of\n\\begin{align}\nI_{a,b}=\\sum\\limits_{m>n>0}\\frac{\\sin\\left(\\frac{\\pi(m-n)}{3}\\right)}{m^{b-a}n^{2a}}\\,.\n\\end{align}\nWhile the leading power result has uniform transcendental weight when expanded in the back-to-back region, this is no longer true at subleading power. \n\n\nCollecting all the terms from both the elliptic and HPL contributions, we find (in our normalization) the following expression for the leading power suppressed logarithms up to $\\mathcal{O}(a_s^3)$\n\\begin{align}\n &\\text{EEC}^{(2)}=-2a_s \\log(1-z) {\\nonumber} \\\\\n &+a_s^2 \\left[\\frac{8}{3} \\log^3(1-z) +3 \\log^2(1-z) +(4+16\\zeta_2)\\log(1-z)+(-12-2\\zeta_2+36 \\zeta_2\\log(2)+5\\zeta_3) \\right]{\\nonumber} \\\\\n &+a_s^3 \\left[-\\frac{32}{15} \\log^5(1-z)-\\frac{16}{3}\\log^4(1-z)-\\left( \\frac{8}{3}+24 \\zeta_2 \\right)\\log^3(1-z) +(4-36\\zeta_2-50\\zeta_3)\\log^2(1-z) \\right. {\\nonumber} \\\\\n &\\left. 
-\\left(\\frac{131}{2}+4\\zeta_2+372 \\zeta_4+12\\zeta_3 \\right) \\log(1-z) + \\text{const} \\right]\\,,\n\\end{align}\nwhere\n\\begin{align}\n\\text{const}&=-\\frac{3061}{2}\\zeta_5 -96\\zeta_2 \\zeta_3 -\\frac{4888}{27}\\zeta_4 \\pi \\sqrt{3} +192 \\sqrt{3} I_{2,3}+1482 \\zeta_4 \\log(2) -256 \\zeta_2 \\log^3(2) {\\nonumber} \\\\\n&-\\frac{64}{5}\\log^5(2)\n+1536 \\text{Li}_5\\left( \\frac{1}{2} \\right) -544 \\zeta_4 +192 \\zeta_2 \\log^2(2)+16 \\log^4(2)+384 \\text{Li}_4\\left( \\frac{1}{2} \\right) {\\nonumber} \\\\\n& -288 \\zeta_2 \\log(2)+158 \\zeta_3 +55 \\zeta_2 +\\frac{533}{2}\\,.\n\\end{align}\nExtracting out the leading logarithmic series\n\\begin{align}\n\\left. \\text{EEC}^{(2)}\\right|_{\\text{LL}}=-2a_s \\log(1-z) +\\frac{8}{3}a_s^2 \\log^3(1-z) -\\frac{32}{15}a_s^3 \\log^5(1-z)\\,,\n\\end{align}\nwe find that this result agrees exactly with the result derived from the subleading power renormalization group given in \\Eq{eq:EEC_RG_expand}! Note that due to the matching of our RG predictions to the fixed order results, as was explained above, it is really only the $a_s^3$ coefficient that is a prediction. However, this agreement is highly non-trivial, since it probes both the elliptic and polylogarithmic sectors of the full result, and therefore we believe that it provides strong support that our subleading power renormalization group evolution equation is correct. The next-to-leading logarithms also have relatively simple rational coefficients, \n\\begin{align}\n\\left. \\text{EEC}^{(2)}\\right|_{\\text{NLL}}=3a_s^2 \\log^2(1-z) -\\frac{16}{3}a_s^3 \\log^4(1-z)\\,,\n\\end{align}\nand provide data for understanding the structure of subleading power resummation beyond the leading logarithm. 
It would be interesting to derive them directly from the renormalization group approach.\n\nIt would also be interesting to better understand the structure of the numbers appearing in the expansion of the EEC both in the collinear and back-to-back limits, and the functions appearing in the full angle result. This could ultimately allow the result to be bootstrapped from an understanding of these limits, in a similar manner to the hexagon bootstrap for $\\mathcal{N}=4$ SYM amplitudes \\cite{Dixon:2011pw,Dixon:2014iba,Caron-Huot:2016owq,Dixon:2016nkn,Caron-Huot:2019vjl}. However, the presence of elliptic functions makes this seem like a daunting task, unless more information is known about their structure.\n\n\n\\section{Conclusions}\\label{sec:conc}\n\nIn this paper we have shown how to resum subleading power rapidity logarithms using the rapidity renormalization group, and have taken a first step towards a systematic understanding of subleading power corrections to observables exhibiting hierarchies in rapidity scales. Much like for the virtuality renormalization group at subleading power, the rapidity renormalization group at subleading power involves a non-trivial mixing structure. Using the consistency of the RG equations combined with symmetry arguments, we were able to identify the operators that arise in the mixing, which we termed ``rapidity identity operators\", and we derived their anomalous dimensions. We believe that these operators will play an important role in any future studies of the rapidity renormalization group at subleading power, and are the key to understanding its structure. \n\nTo illustrate our formalism, we resummed the subleading power logarithms appearing in the back-to-back limit of the EEC in $\\mathcal{N}=4$ SYM. This particular observable was chosen since the full analytic result is known to $\\mathcal{O}(\\alpha_s^3)$ from the remarkable calculation of \\cite{Henn:2019gkr}. 
We found perfect agreement between our result derived using the renormalization group and the expansion in the back-to-back limit of the calculation of \\cite{Henn:2019gkr}, which provides an extremely strong check on our results. The resummed subleading power logarithms take an interesting but remarkably simple analytic form, being expressed in terms of Dawson's integral with argument related to the cusp anomalous dimension. We called this structure ``Dawson's Sudakov\". We expect this structure to be generic at subleading power for rapidity-dependent observables, much like the Sudakov exponential is at leading power. \n\nSince this represents the first resummation of subleading power rapidity logarithms, there are many directions in which to extend our results, as well as to better understand the structures that we have introduced in this paper. First, although we have arrived at the structure of the leading logarithms using symmetry and consistency arguments, it would be interesting to use a complete basis of SCET$_{\\rm II}$ operators to derive the operator structure of all the subleading power jet and soft functions in SCET$_{\\rm II}$, and perturbatively compute their anomalous dimensions. Second, to go beyond LL, it will be necessary to better understand the structure of the momentum convolutions appearing in the subleading power factorization. We expect that at subleading power this is best done in momentum space using the formalism of \\cite{Ebert:2016gcn}. 
Finally, we also expect that away from $\\mathcal{N}=4$ SYM theory, divergent convolutions will appear even at LL order, as occurs in SCET$_{\\rm I}$, and so it will be important to understand when these occur, and how they can be overcome.\n\nOn the more formal side, it will be interesting to understand how to extract the subleading power logarithms directly from the four point correlation function, following the approach of \\cite{Korchemsky:2019nzm}, or using the light ray OPE \\cite{Kravchuk:2018htv,Kologlu:2019bco,Kologlu:2019mfz}. While there have been some studies of the double light cone limit in conformal field theories \\cite{Alday:2010zy,Alday:2015ota}, further studies of this limit could provide insight into the behavior of phenomenological interest in QCD, and the EEC provides an example that is of both formal and phenomenological interest.\n\nIt will also be important to apply our formalism to observables of direct phenomenological interest, such as the EEC in QCD and the color singlet $p_T$ distribution at hadron colliders, or to the study of power-suppressed logarithms appearing in the Regge limit, which can also be formulated in terms of the rapidity renormalization group \\cite{Rothstein:2016bsq,Moult:2017xpp}. Our work represents the first step in extending recent successes in understanding the structure of subleading power infrared logarithms to subleading power rapidity logarithms, and we hope that this will allow for a much wider set of phenomenologically important applications.\n\n\\begin{acknowledgments}\nWe thank Martin Beneke, Sebastian Jaskiewicz, Robert Szafron, Iain Stewart, Frank Tackmann, Johannes Michel, Markus Ebert, David Simmons-Duffin, Cyuan-Han Chang, Lance Dixon, Johannes Henn, and HuaXing Zhu for useful discussions. We thank Vladimir Smirnov for providing us with reduction tables for polylogarithms of roots of unity. 
This work was supported in part by the Office of Nuclear Physics of the U.S.\nDepartment of Energy under Contract No. DE-SC0011090, and by the Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-76SF00515. This research received\nfunding from the European Research Council (ERC) under the European\nUnion's Horizon 2020 research and innovation programme\n(grant agreement No. 725110), ``Novel structures in scattering amplitudes\".\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction.}\n\nIn \\cite{C}, Cohn introduced the notion of Schreier domain. A domain $D$ is said to be a {\\em Schreier domain} if $(1)$ $D$ is integrally closed and $(2)$ whenever $I, J_1, J_2$ are principal ideals of $D$ and $I\\supseteq J_1J_2$, then $I = I_1I_2$ for some principal ideals $I_1,I_2$ of $D$ with $I_i \\supseteq J_i$ for $i = 1,2$. The study of Schreier domains was continued in \\cite{MR} and \\cite{Zps}. In \\cite{Zps}, a domain was called a {\\em pre-Schreier domain} if it satisfies condition $(2)$ above.\nSubsequently, extensions of the ``(pre)-Schreier domain'' concept were studied in \\cite{DM}, \\cite{ADZ}, \\cite{DK}, \\cite{AD} and \\cite{ADE}.\n\nIn \\cite{ADE}, we studied a class of domains that satisfies a Schreier-like condition for all ideals. More precisely, a domain $D$ was called a {\\em sharp domain } if whenever $I \\supseteq AB$ with $I$, $A$, $B$ nonzero ideals of $D$, there exist ideals $A'\\supseteq A$ and $B'\\supseteq B$ such that $I=A'B'$. We recall several results from \\cite{ADE}.\nIf the domain $D$ is Noetherian or Krull, then $D$ is sharp if and only if $D$ is a Dedekind domain \\cite[Corollaries 2 and 12]{ADE}. A sharp domain is pseudo-Dedekind; in particular, a sharp domain is a completely integrally closed GGCD domain \\cite[Proposition 4]{ADE}.\nRecall (cf. 
\\cite{Z} and \\cite{AK}) that a domain $D$ is called a {\\em pseudo-Dedekind domain} (the name used in \\cite{Z} was {\\em generalized Dedekind domain}) if the $v$-closure of each nonzero ideal of $D$ is invertible. Also, recall from \\cite{AA} that a domain $D$ is called a {\\em generalized GCD domain (GGCD domain)} if the $v$-closure of each nonzero finitely generated ideal of $D$ is invertible. The definition of the $v$-closure is recalled below.\nA valuation domain $D$ is sharp if and only if its value group is a complete subgroup of the reals \\cite[Proposition 6]{ADE}.\nThe localizations of a sharp domain at the maximal ideals are valuation domains with value group a complete subgroup of the reals; in par\\-ti\\-cu\\-lar, a sharp domain is a Pr\\\"ufer domain of dimension at most one \\cite[Theorem 11]{ADE}.\nThe converse is true for domains of finite character \\cite[Theorem 15]{ADE}, but not true in general \\cite[Example 13]{ADE} (recall that a {\\em domain of finite character} is a domain whose nonzero elements are contained in only finitely many maximal ideals). A countable sharp domain is a Dedekind domain \\cite[Corollary 17]{ADE}.\n\nThe purpose of this paper is to study the ``sharp domain'' concept in the star operation setting. To facilitate the reading of the paper, we first review some basic facts about $*$-operations. 
Let $D$ be a domain with quotient field $K$ and let $F(D)$ denote the set of nonzero fractional ideals of $D$.\nA function $A\\mapsto A^*: F(D) \\rightarrow F(D)$ is called a {\\em star operation} on\n$D$ if $*$ satisfies the following three conditions for all $0\n\\neq a \\in K$ and all $I,J \\in F(D)$:\n\n$(1)$ $D^{*} = D$ and\n$(aI)^{*}=aI^{*}$,\n\n$(2)$ $I \\subseteq I^{*}$ and if $ I\n\\subseteq J$, then $I^{*} \\subseteq J^{*}$,\n\n$(3)$\n$(I^{*})^{*} = I^{*}.$\n\n\\noindent An ideal $I \\in F(D)$ is called a $*$-ideal if $I = I^{*}.$\nFor all $I,J\\in F(D)$, we have\n$(IJ)^{*}=(I^{*}J)^{*}=(I^{*}J^{*})^{*}$.\nThese equations define the so-called {\\em $*$-multiplication.}\nIf $\\{I_\\alpha\\}$ is a subset of $F(D)$ such that $\\cap I_\\alpha\\neq 0$, then\n$\\cap I_\\alpha^*$ is a $*$-ideal.\nAlso, if $\\{I_\\alpha\\}$ is a subset of $F(D)$ such that $\\sum I_\\alpha$ is a fractional ideal, then\n$(\\sum I_\\alpha)^*=(\\sum I_\\alpha^*)^*$.\nThe star operation $*$ is said to be {\\em stable} if $(I \\cap J)^{*} = I^{*}\\cap J^{*}$ for all $I,J\\in F(D)$.\nIf $*$ is a star operation, the function $*_f: F(D) \\rightarrow F(D)$ given by $I^{*_f} = \\cup_H H^{*}$, where $H$ ranges over all\nnonzero finitely generated subideals of $I$, is also a star operation. The star operation $*$ is said to be of\n{\\em finite character} if $*=*_f$. Clearly $(*_f)_f=*_f$.\nDenote by $Max_*(D)$ the set of maximal $*$-ideals, that is, ideals maximal among proper integral\n$*$-ideals of $D$. Every maximal $*$-ideal is a prime ideal.\nThe {\\em $*$-dimension} of $D$ is\n$sup \\{ n\\mid 0\\subset P_1\\subset \\cdots \\subset P_n,$ $P_i$ prime $*$-ideal of $D\\}$.\nAssume that $*$ is a star operation of finite character. Then every proper $*$-ideal is contained in some maximal $*$-ideal, and\nthe map $I\\mapsto I^{\\tilde{*}} = \\cap _{P \\in Max_*(D)}ID_P$ for all $I\\in F(D)$ is a stable star operation of finite character, cf. 
\\cite[Theorems 2.4, 2.5 and 2.9]{AC}.\nMoreover, $*$ is stable if and only if $*=\\tilde{*}$, cf. \\cite[Corollary 4.2]{A}.\nA $*$-ideal $I$ is of {\\em finite type} if\n$I=(a_1,...,a_n)^{*}$ for some $a_1,...,a_n\\in I.$\nA {\\em Mori domain} is a domain whose $t$-ideals are of finite type (see \\cite{B}). By \\cite{HZ}, an integral domain is said to be a {\\em TV domain} if every $t$-ideal is a $v$-ideal. A Mori domain is a TV domain.\n\nA fractional ideal $I \\in F(D)$ is said to be {\\em $*$-invertible} if\n$(II^{-1})^{*} = D$, where $I^{-1}=(D:I)=\\{ x \\in K \\mid xI \\subseteq D\\}$.\nIf $*$ is of finite character, then $I$ is $*$-invertible if and only if\n$II^{-1}$ is not contained in any maximal $*$-ideal of $D$; in this case $I^*=(a_1,...,a_n)^*$ for some $a_1,...,a_n\\in I$.\nLet $*_1,*_2$ be star operations on $D$. We write $*_1\\leq *_2$, if $I^{*_1}\\subseteq I^{*_2}$ for all\n$I\\in F(D)$. In this case we get $(I^{*_1})^{*_2}=I^{*_2}=(I^{*_2})^{*_1}$ and every\n$*_1$-invertible ideal is $*_2$-invertible.\nSome well-known star operations are: the {\\em $d$-operation} (given by $I\\mapsto I$),\nthe {\\em $v$-operation} (given by $I\\mapsto I_v = (I^{-1})^{-1}$) and the {\\em $t$-operation} (defined by $t=v_f$).\nThe {\\em $w$-operation} is the star operation given by $I \\mapsto I_w= \\{x\\in K \\mid xH \\subseteq\nI$ for some finitely\ngenerated ideal $H$ of $D$ with $H^{-1} =D\\}$. The $w$-operation is a stable star operation of finite character.\nFor an integrally closed domain $D$, the {\\em $b$-operation} on $D$ is the star operation defined by $I\\mapsto I_b=\\cap _{V} IV$ where $V$ runs in the set of all valuation overrings of $D$ (see \\cite[Page 398]{G}). For every $I\\in F(D)$, we have\n$I\\subseteq I_w \\subseteq I_t \\subseteq I_v$. It is known that $Max_w(D)=Max_t(D)$, cf.\n\\cite[Corollaries 2.13 and 2.17]{AC} and $I_w = \\cap _{M \\in Max_t(D)}ID_M$, cf. 
\\cite[Corollary 2.10]{AC}.\nConsequently, a nonzero fractional ideal is $w$-invertible if and only if it is $t$-invertible. Recall \\cite{EFP} that an integral domain $D$ is said to be {\\em $*$-Dedekind} if every nonzero fractional ideal of $D$ is $*$-invertible. A domain $D$ is called a Pr\\\"ufer {\\em $*$-multiplication domain (P$*$MD)} if every nonzero finitely generated ideal of $D$ is $*_f$-invertible (see \\cite{FJS}).\nFor the general theory of star operations we refer the reader to \\cite[Sections 32 and 34]{G}.\\\\\n\n\nWe introduce the key concept of this paper.\n\n\\begin{definition}\\label{1}\nLet $*$ be a star operation on $D$. We say that a domain $D$ is a {\\em $*$-sharp domain} if whenever $I$, $A$, $B$ are nonzero ideals of $D$ with $I \\supseteq AB$, there exist nonzero ideals $H$ and $J$ such that $I^{*}=(HJ)^{*}$, $H^{*}\\supseteq A$ and $J^{*}\\supseteq B$.\n\\end{definition}\n\n\nThe $d$-sharp domains are just the sharp domains studied in \\cite{ADE}. If $*_1 \\leq *_2$ are star operations and $D$ is $*_1$-sharp, then $D$ is $*_2$-sharp (Proposition \\ref{81}). In particular, if $*$ is a star operation, then every sharp domain is $*$-sharp and every $*$-sharp domain is $v$-sharp. A $t$-sharp domain is $v$-sharp but the converse is not true in general (Remark \\ref{121}).\n\nIn Section 2, we study the $*$-sharp domains in general.\nIn this new context, we generalize most of the results obtained in \\cite{ADE}.\nFor $*\\in \\{d,b,w,t\\}$, every fraction ring of a $*$-sharp domain is $*$-sharp (Proposition \\ref{81}).\nEvery $*$-Dedekind domain is $*$-sharp. In particular, every Krull domain is $t$-sharp (Proposition \\ref{3}).\nLet $D$ be a domain and $*$ a finite character stable star operation such that $D$ is $*$-sharp. 
Then $D$ is a P$*$MD of $*$-dimension $\\leq 1$; moreover $D_M$ is a valuation domain with value group a complete subgroup of the reals, for each $M\\in Max_*(D)$ (Proposition \\ref{77}).\nThe converse is true for domains whose nonzero elements are contained in only finitely many $*$-maximal ideals (Proposition \\ref{200}).\nIf $*$ is a star operation on $D$ such that $D$ is a $*$-sharp domain, then $I_v$ is $*$-invertible for each nonzero ideal $I$\n(Proposition \\ref{3a}).\nIf $*$ is a finite character stable star operation on $D$ such that $D$ is a $*$-sharp TV domain, then $D$ is $*$-Dedekind (Corollary \\ref{111}).\nA domain $D$ is $v$-sharp if and only if $D$ is completely integrally closed (Corollary \\ref{100}). In particular, every $*$-sharp domain is completely integrally closed.\nIf $*$ is a stable star operation on $D$ such that $D$ is a $*$-sharp domain, then every finitely generated nonzero ideal of $D$ is $*$-invertible (Proposition \\ref{102}).\nIf $D$ is a countable domain and $*$ a finite character stable star operation on $D$ such that $D$ is $*$-sharp, then $D$ is a $*$-Dedekind domain (Corollary \\ref{845}).\n\nIn Section 3, we study the $t$-sharp domains. We obtain the following results.\nEvery $t$-sharp domain $D$ is a PVMD with $t$-dimension $\\leq 1$ and $D_M$ is a valuation domain with value group a complete subgroup of the reals, for each maximal $t$-ideal $M$ of $D$ (Proposition \\ref{11}). A domain is $t$-sharp if and only if it is $w$-sharp (Proposition \\ref{103}).\nA domain $D$ is a Krull domain if and only if $D$ is a $t$-sharp TV domain (Corollary \\ref{400}).\nIf $D$ is a countable $t$-sharp domain, then $D$ is a Krull domain\n(Corollary \\ref{300}).\nA domain $D$ is $t$-sharp if and only if $D[X]$ is $t$-sharp\n(Proposition \\ref{331}) if and only if $D[X]_{N_v}$ is sharp (Proposition \\ref{133}). 
Here $N_v$ denotes the multiplicative subset of $D[X]$ consisting of all $f\\in D[X]-\\{0\\}$ with $c(f)_v=D$, where $c(f)$ is the ideal generated by the coefficients of $f$.\nLet $D$ be a $t$-sharp domain. Then $N'_v=\\{ f\\in D[[X]]-\\{0\\}\\mid c(f)_v=D\\}$ is a multiplicative set,\n$D[[X]]_{N'_v}$ is a sharp domain and every ideal of $D[[X]]_{N'_v}$ is extended from $D$ (Proposition \\ref{1024}). Moreover, $D[[X]]_{N'_v}$ is a faithfully flat $D[X]_{N_v}$-module\nand there is a one-to-one correspondence between the ideals of $D[X]_{N_v}$ and the ideals of $D[[X]]_{N'_v}$\n(Corollary \\ref{307}).\n\n\nThroughout this paper all rings are (commutative unitary) integral domains. Any unexplained material is standard, as in \\cite{G}, \\cite{H}.\n\n\\section{$*$-sharp domains.}\n\n\n\n\nIn this section we study the $*$-sharp domains for an arbitrary star operation $*$ (see Definition \\ref{1}). We obtain $*$-operation analogues for most of the results in \\cite{ADE}.\n\n\n\\begin{proposition} \\label{4}\nLet $D$ be a domain, $S\\subseteq D$ a multiplicative set and $*$ (resp. $\\sharp$) star operations on $D$ (resp. $D_S$) such that\n$I^*\\subseteq (ID_S)^\\sharp$ for each nonzero ideal $I$ of $D$.\nIf $D$ is $*$-sharp, then the fraction ring $D_S$ is $\\sharp$-sharp.\n\\end{proposition}\n\\begin{proof}\nNote that the condition $I^*\\subseteq (ID_S)^\\sharp$ in the hypothesis is equivalent to $(I^*D_S)^\\sharp=(ID_S)^\\sharp$.\nLet $I,A,B$ be nonzero ideals of $D$ such that $ID_S\\supseteq ABD_S$. Then $C=ID_S \\cap D \\supseteq AB$. As $D$ is $*$-sharp, we have $C^*=(HJ)^*$ with $H,J$ ideals of $D$ such that $H^*\\supseteq A$ and $J^*\\supseteq B$. 
Since $(WD_S)^\\sharp=(W^*D_S)^\\sharp$ for every nonzero ideal $W$,\nwe get $(ID_S)^\\sharp=(C^*D_S)^\\sharp=((HJ)^*D_S)^\\sharp=(HJD_S)^\\sharp$, $(HD_S)^\\sharp=(H^*D_S)^\\sharp\\supseteq AD_S$ and $(JD_S)^\\sharp\\supseteq BD_S$.\n\\end{proof}\n\n\n\n\n\\begin{proposition} \\label{81}\nLet $D$ be a domain, $*_1 \\leq *_2$ star operations on $D$ and $S\\subseteq D$ a multiplicative set.\n\n$(a)$ If $D$ is $*_1$-sharp, then $D$ is $*_2$-sharp.\n\n$(b)$ If $*\\in \\{d,t,w,b\\}$ and $D$ is $*$-sharp (with $D$ integrally closed if $*=b$), then $D_S$ is $*$-sharp.\n\\end{proposition}\n\\begin{proof}\n$(a)$. Apply Proposition \\ref{4} for $S=\\{1\\}$, $*=*_1$ and $\\sharp=*_2$.\n$(b)$. By Proposition \\ref{4}, it suffices to show that $I^*\\subseteq (ID_S)^*$ for each nonzero ideal $I$ of $D$. This is clear for $*=d$ and true for $*=t$, cf. \\cite[Lemma 3.4]{Kg}. Assume that $x\\in I_w$. Then $xH\\subseteq I$ for some finitely generated nonzero ideal $H$ of $D$ such that $H_v=D$. Hence $(HD_S)_v=D_S$ (cf. \\cite[Lemma 3.4]{Kg}) and $xHD_S\\subseteq ID_S$, thus $x\\in (ID_S)_w$. Assume that $D$ is integrally closed. If $V$ is a valuation overring of $D_S$, then $V$ is an overring of $D$, so $I_b\\subseteq IV$. Thus $I_b\\subseteq (ID_S)_b$.\n\\end{proof}\n\nIn \\cite[Theorem 11]{ADE}, it was shown that a sharp domain is a Pr\\\"ufer domain of dimension at most $1$. We extend this result.\n\n\\begin{proposition} \\label{77}\nLet $D$ be a domain and $*$ a finite character stable star operation on $D$ such that $D$ is $*$-sharp. Then $D_M$ is a valuation domain with value group a complete subgroup of the reals, for each $M\\in Max_*(D)$. In particular, $D$ is a P$*$MD of $*$-dimension $\\leq 1$.\n\\end{proposition}\n\\begin{proof}\nLet $M$ be a maximal $*$-ideal. If $I$ is a nonzero ideal of $D$, then $I^*D_M=ID_M$, cf. \\cite[Corollary 4.2]{A}. By Proposition \\ref{4}, applied for $S=D-M$, $*=*$ and $\\sharp=d$, we get that $D_M$ is a sharp domain. Apply \\cite[Theorem 11]{ADE}. 
The ``in particular'' assertion is clear.\n\\end{proof}\n\n\n\n\\begin{proposition}\\label{3}\nLet $D$ be a domain and $*$ a star operation on $D$. If $D$ is $*$-Dedekind, then $D$ is $*$-sharp. In particular, every Krull domain is $t$-sharp.\n\\end{proposition}\n\\begin{proof}\nLet $I,A,B$ be nonzero ideals of $D$ such that $I \\supseteq AB$. Set $H=I+A$ and $J=IH^{-1}$. Note that $J\\subseteq D$ and $A\\subseteq H$. Since $(HH^{-1})^*=D$, we get $I^*=(HJ)^*$. From $BH=B(A+I) \\subseteq I$, we get $B \\subseteq (BHH^{-1})^* \\subseteq (IH^{-1})^*=J^*$. For the ``in particular'' statement, recall that the $t$-Dedekind domains are the Krull domains, cf. \\cite[Theorem 3.6]{Kg1}.\n\\end{proof}\n\n\n\n\n\\begin{proposition}\\label{3a}\nLet $D$ be a domain and $*$ a star operation on $D$ such that $D$ is $*$-sharp. Then $I_v$ is $*$-invertible for each nonzero ideal $I$.\n\\end{proposition}\n\\begin{proof}\nLet $I$ be a nonzero ideal of $D$ and $x \\in I-\\{0\\}$. Then $I(xI^{-1}) \\subseteq xD$. Since $D$ is $*$-sharp, there exist ideals $H,J$ of $D$ such that $H^* \\supseteq I$, $J^* \\supseteq xI^{-1}$ and $xD=(HJ)^*$. Hence $H$ is $*$-invertible and we get\n$H^{-1}=(x^{-1}J)^*\\supseteq (xx^{-1}I^{-1})^*=I^{-1}$, so $H_v\\subseteq I_v$. The opposite inclusion follows from $H^* \\supseteq I$. Thus $I_v=H_v$ is $*$-invertible, because $H^*=H_v$ since $H$ is $*$-invertible.\n\\end{proof}\n\nNext, we extend \\cite[Corollary 12]{ADE} to the star operation setting.\n\n\n\\begin{corollary}\\label{111}\nLet $D$ be a domain and $*$ a finite character stable star operation on $D$ such that $D$ is $*$-sharp. If $D$ is a TV domain (e.g. a Mori domain), then $D$ is $*$-Dedekind.\n\\end{corollary}\n\\begin{proof}\nBy Proposition \\ref{77}, $D$ is a P$*$MD, so $*=t$, cf. \\cite[Proposition 3.15]{FJS}. As $D$ is a TV domain, we get $*=t=v$. 
By Proposition \\ref{3a}, $D$ is $*$-Dedekind.\n\\end{proof}\n\n\n\n\\begin{corollary}\\label{100}\nA domain $D$ is $v$-sharp if and only if $D$ is completely integrally closed. In particular, any $*$-sharp domain is completely integrally closed.\n\\end{corollary}\n\\begin{proof}\nBy Propositions \\ref{3} and \\ref{3a} (for $*=v$), $D$ is $v$-sharp if and only if $D$ is $v$-Dedekind. By \\cite[Theorem 34.3]{G} or \\cite[Proposition 3.4]{F}, a domain is $v$-Dedekind if and only if it is completely integrally closed. For the ``in particular'' assertion, apply Proposition \\ref{81} taking into account that $*\\leq v$ for each star operation $*$.\n\\end{proof}\n\n\\begin{remark}\\label{121}\n$(a)$ There exists a completely integrally closed domain $A$ having some fraction ring which is not completely integrally closed (for instance the ring of entire functions, cf. \\cite[Exercises 16 and 21, page 147]{G}). Thus the $v$-sharp property does not localize, cf. Corollary \\ref{100}. Note that $A$ cannot be $t$-sharp because the $t$-sharp property localizes, cf.\nProposition \\ref{81}.\n$(b)$ Let $D$ be a completely integrally closed domain which is not a PVMD (such a domain is constructed in \\cite{D}). By Corollary \\ref{100} and Proposition \\ref{11}, such a domain is $v$-sharp but not $t$-sharp.\n$(c)$ Let $D$ be a Krull domain of dimension $\\geq 2$ (e.g. $\\mathbb{Z}[X]$). By Proposition \\ref{3} and \\cite[Theorem 11]{ADE}, $D$ is $t$-sharp but not sharp.\n\n\n\\end{remark}\n\nIn the next lemma we recall two well-known facts.\n\n\n\n\\begin{lemma} \\label{71}\nLet $D$ be a domain, $*$ a star operation on $D$ and $I,J,H\\in F(D)$.\n\n$(a)$ If $(I+J)^*=D$, then $(I\\cap J)^*=(IJ)^*$.\n\n$(b)$ If $I$ is $*$-invertible, then $(I(J\\cap H))^*=(IJ\\cap IH)^*$.\n\\end{lemma}\n\\begin{proof}\n$(a)$ Clearly, $(IJ)^*\\subseteq (I\\cap J)^*$. 
Conversely, since $(I+J)^*=D$, we have $(I\\cap J)^*=((I\\cap J)(I+J))^*\\subseteq (IJ)^*$, thus $(I\\cap J)^*=(IJ)^*$.\n$(b)$ Clearly, $(I(J\\cap H))^*\\subseteq (IJ\\cap IH)^*$. Conversely, because $I$ is $*$-invertible, we have $(IJ\\cap IH)^*=(II^{-1}(IJ\\cap IH))^*\\subseteq (I(I^{-1}IJ\\cap I^{-1}IH))^*\\subseteq (I(J\\cap H))^*$.\n\\end{proof}\n\nThe next result generalizes \\cite[Proposition 10]{ADE}.\n\n\\begin{proposition} \\label{211p}\nLet $D$ be a domain and $*$ a stable star operation on $D$ such that $D$ is $*$-sharp. If $I,J$ are nonzero ideals of $D$ such that $(I+J)_v=D$, then $(I_v+J_v)^*=D$.\n\\end{proposition}\n\\begin{proof}\nLet $K$ be the quotient field of $D$. Changing $I$ by $I_v$ and $J$ by $J_v$, we may assume that $I,J$ are $*$-invertible $v$-ideals, cf. Proposition \\ref{3a}.\nSince $(I+J)^2 \\subseteq I^2+J$ and $D$ is $*$-sharp, there exist two nonzero ideals $A$, $B$ such that $(I^2+J)^*=(AB)^*$ and $I+J\\subseteq A^* \\cap B^*$. We {\\em claim} that $(I^2+J)^*:I=(I+J)^*$. To prove the claim, we perform the following step-by-step computation. First,\n$(I^2+J)^*:I=((I^2+J)^*:_KI)\\cap D=((I^2+J)I^{-1})^*\\cap D=(I+JI^{-1})^*\\cap D$ because $I$ is $*$-invertible. As $*$ is stable, we get $(I+JI^{-1})^*\\cap D=((I+JI^{-1})\\cap D)^*=(I+(JI^{-1}\\cap D))^*$ by modular distributivity.\nSince $I$ is $*$-invertible, we get\n$(I+(JI^{-1}\\cap D))^*=(I+I^{-1}(J\\cap I))^*$, cf. Lemma \\ref{71}.\nUsing the fact that $I$ is $*$-invertible (hence $v$-invertible) and Lemma \\ref{71}, we derive that $(I+I^{-1}(J\\cap I))^* \\subseteq (I+I^{-1}(IJ)_v)^*\\subseteq (I+(II^{-1}J)_v)^*=(I+J_v)^*=(I+J)^*$. Putting all these facts together, we get $(I^2+J)^*:I\\subseteq (I+J)^*$ and the other inclusion is clear. So the claim is proved.\nFrom $(I^2+J)^*=(AB)^*$, we get $A^*\\subseteq (I^2+J)^*:B^*\\subseteq (I^2+J)^*:I=(I+J)^*$, so $A^*=(I+J)^*$. 
Similarly, we get $B^*=(I+J)^*$, hence $(I^2+J)^*=((I+J)^2)^*$.\nIt follows that $J^*\\subseteq (I^2+J)^*=((I+J)^2)^*\\subseteq (J^2+I)^*$. So $J^*=J^*\\cap (J^2+I)^*=(J\\cap (J^2+I))^*=(J^2+(J\\cap I))^*$ where we have used the fact that $*$ is stable and the modular distributivity. By Lemma \\ref{71}, we have $I\\cap J\\subseteq (IJ)_v$, so we get $J^*=(J^2+(J\\cap I))^*\\subseteq (J^2+(IJ)_v)^*$. Since $J$ is $*$-invertible, we have $D=(JJ^{-1})^*\\subseteq ((J^2+(IJ)_v)J^{-1})^* \\subseteq (J+I_v)^*=(J+I)^*$.\nThus $(I+J)^*=D$.\n\\end{proof}\n\nNote that from Proposition \\ref{211p} we can recover easily \\cite[Proposition 10]{ADE}. Next, we give another extension of \\cite[Theorem 11]{ADE} (besides Proposition \\ref{77}).\n\n\n\\begin{proposition} \\label{102}\nLet $D$ be a domain and $*$ a stable star operation on $D$ such that $D$ is $*$-sharp. Then every finitely generated nonzero ideal of $D$ is $*$-invertible.\n\\end{proposition}\n\\begin{proof}\nLet $x,y\\in D-\\{0\\}$. By Proposition \\ref{3a}, the ideal $I=(xD+yD)_v$ is $*$-invertible (hence $v$-invertible), so $(xI^{-1}+yI^{-1})_v=D$. By Proposition \\ref{211p} we get $(xI^{-1}+yI^{-1})^*=D$, hence $I=((xI^{-1}+yI^{-1})I)^*=(xD+yD)^*$ because $I$ is $*$-invertible. Thus every two-generated nonzero ideal of $D$ is $*$-invertible. Now the proof of \\cite[Proposition 22.2]{G} can be easily adapted to show that every finitely generated nonzero ideal of $D$ is $*$-invertible.\n\\end{proof}\n\n\\begin{remark}\nUnder the assumptions of Proposition \\ref{102}, it does not follow that $D$ is a P$*$MD. Indeed, let $D$ be a completely integrally closed domain which is not a PVMD (such a domain is constructed in \\cite{D}). The $v$-operation on $D$ is stable (cf. \\cite[Theorem 2.8]{ACl}) and $D$ is $v$-sharp (cf. 
Corollary \\ref{100}).\n\\end{remark}\n\n\nLet $D$ be a domain with quotient field $K$.\nAccording to \\cite{AZ}, a family $\\mathcal{F}$ of nonzero prime ideals of $D$ is called an {\\em independent family of finite character (IFC family)},\nif $(1)$ $D=\\cap_{P\\in \\mathcal{F}}D_P$, $(2)$ every nonzero $x\\in D$ belongs to only finitely many members of\n$\\mathcal{F}$ and $(3)$ every nonzero prime ideal of $D$ is contained in at most one member of $\\mathcal{F}$. The following result extends \\cite[Theorem 15]{ADE}.\n\n\n\n\n\n\\begin{proposition}\\label{200}\nLet $D$ be a domain and $*$ a finite character star operation on $D$. Assume that\n\n$(a)$ every $x\\in D-\\{0\\}$ is contained in only finitely many maximal $*$-ideals, and\n\n$(b)$ for every $M\\in Max_*(D)$, $D_M$ is a valuation domain with value group a complete subgroup of the reals.\n\\\\ Then $D$ is a $\\tilde{*}$-sharp domain and hence $*$-sharp.\n\\end{proposition}\n\\begin{proof}\nBy \\cite{Gr}, $D=\\cap_M D_M$ where $M$ runs over the set of maximal $*$-ideals. Since each $D_M$ is a valuation domain with value group a complete subgroup of the reals, every $M$ has height one.\nIt follows that $Max_*(D)$ is an IFC family. Consider the $\\tilde{*}$ operation, i.e. $I\\mapsto I^{\\tilde{*}}=\\cap_M ID_M$. We show that $D$ is $\\tilde{*}$-sharp.\nLet $I,A,B$ be nonzero ideals of $D$ such that $I\\supseteq AB$.\nLet $P_1,\\ldots,P_n$ be the maximal $*$-ideals of $D$ containing $AB$.\nSince $D_{P_i}$ is sharp, there exist ideals $H_i$, $J_i$ of $D_{P_i}$ such that $ID_{P_i}=H_iJ_i$, $H_i\\supseteq AD_{P_i}$ and\n$J_i\\supseteq BD_{P_i}$ for all $i$ between $1$ and $n$.\nSet $H'_i=H_i\\cap D$, $J'_i=J_i\\cap D$, $i=1,...,n$,\n$H=H'_1\\cdots H'_n$ and $J=J'_1\\cdots J'_n$.\nBy \\cite[Lemma 2.3]{AZ}, $P_i$ is the only element of $Max_*(D)$ containing $H'_i$ (resp. $J'_i$), thus it can be checked that $ID_P=(HJ)D_P$, $HD_P\\supseteq AD_P$ and $JD_P\\supseteq BD_P$ for each $P\\in Max_*(D)$. 
So, we have $I^{\\tilde{*}}=(HJ)^{\\tilde{*}}$, $H^{\\tilde{*}}\\supseteq A$ and $J^{\\tilde{*}}\\supseteq B$. Consequently, $D$ is $\\tilde{*}$-sharp. By Proposition \\ref{81}, $D$ is $*$-sharp because $\\tilde{*}\\leq *$, cf. \\cite[Theorem 2.4]{AC}.\n\\end{proof}\n\n\\begin{proposition}\\label{822}\nLet $D$ be a countable domain and $*$ a finite character star operation on $D$ such that $D$ is a P$*$MD and $I_v$ is $*$-invertible for each nonzero ideal $I$ of $D$. Then every nonzero element of $D$ is contained in only finitely many maximal $*$-ideals.\n\\end{proposition}\n\\begin{proof}\nDeny. By \\cite[Corollary 5]{DZ}, there exists a nonzero element $z$ and an infinite family $(I_n)_{n\\geq 1}$ of $*$-invertible proper ideals containing $z$ which are mutually $*$-comaximal (that is, $(I_m+I_n)^*=D$ for every $m\\neq n$). For each nonempty set $\\Lambda$ of natural numbers, consider the $v$-ideal $I_\\Lambda=\\cap_{n\\in \\Lambda} I_n$ (note that $z\\in I_\\Lambda$). By hypothesis, $I_\\Lambda$ is $*$-invertible. We claim that $I_\\Lambda\\neq I_{\\Lambda'}$ whenever $\\Lambda$, $\\Lambda'$ are distinct nonempty sets of natural numbers.\nDeny. Then there exists a nonempty set of natural numbers $\\Gamma$ and some $k\\notin \\Gamma$ such that $I_k\\supseteq I_\\Gamma$. Consider the ideal $H=(I_k^{-1} I_\\Gamma)^*\\supseteq I_\\Gamma$. If $n\\in \\Gamma$, then $I_n \\supseteq I_\\Gamma =(I_kH)^*$, so $I_n \\supseteq H$, because $(I_n+I_k)^*=D$. It follows that $I_\\Gamma\\supseteq H$, so $I_\\Gamma = H=(I_k^{-1} I_\\Gamma)^*$. Since $I_\\Gamma$ is $*$-invertible, we get $I_k=D$, a contradiction. Thus the claim is proved. But then it follows that $\\{I_\\Lambda\\mid \\emptyset\\neq \\Lambda\\subseteq \\mathbb{N}\\}$ is an uncountable set of $*$-invertible ideals. 
This leads to a contradiction, because $D$, being countable, has only countably many $*$-ideals of finite type.\n\\end{proof}\n\n\n\\begin{corollary}\\label{845}\nLet $D$ be a countable domain and $*$ a finite character stable star operation on $D$ such that $D$ is $*$-sharp.\nThen $D$ is a $*$-Dedekind domain.\n\\end{corollary}\n\\begin{proof}\nWe may assume that $D$ is not a field.\nBy Proposition \\ref{77}, $D$ is a P$*$MD. Now Propositions \\ref{3a} and \\ref{822} show that every nonzero element of $D$ is contained in only finitely many maximal $*$-ideals. Let $M$ be a maximal $*$-ideal of $D$. By Proposition \\ref{77}, $D_M$ is a countable valuation domain with value group $\\mathbb{Z}$ or $\\mathbb{R}$, so $D_M$ is a DVR. Thus $D$ is a $*$-Dedekind domain, cf. \\cite[Theorem 4.11]{EFP}.\n\\end{proof}\n\n\n\\section{$t$-sharp domains.}\n\n\nThe $t$-operation is a very useful tool in multiplicative ideal theory. In this section we give some results which are specific to $t$-sharp domains.\n\n\n\n\\begin{proposition}\\label{11}\nLet $D$ be a $t$-sharp domain. Then $D$ is a PVMD of $t$-dimension $\\leq 1$ and $D_M$ is a valuation domain with value group a complete subgroup of the reals for each maximal $t$-ideal $M$ of $D$.\n\\end{proposition}\n\\begin{proof}\nLet $I$ be a finitely generated nonzero ideal of $D$. Then $I_v=I_t$, so Proposition \\ref{3a} shows that $I_t$ is $t$-invertible. Thus $D$ is a PVMD.\nLet $M$ be a maximal $t$-ideal of $D$.\nBy part $(b)$ of Proposition \\ref{81}, $D_M$ is a $t$-sharp valuation domain, so $D_M$ is sharp since for valuation domains $t=d$. Now apply \\cite[Proposition 6]{ADE}.\n\\end{proof}\n\n\n\n\n\n\\begin{proposition}\\label{103}\nLet $D$ be a domain. Then $D$ is $t$-sharp if and only if $D$ is $w$-sharp.\n\\end{proposition}\n\\begin{proof}\nIf $D$ is $w$-sharp, then $D$ is $t$-sharp (cf. Proposition \\ref{81}) because $w\\leq t$. Conversely, assume that $D$ is $t$-sharp. By Proposition \\ref{11}, $D$ is a PVMD. 
But in a PVMD the $w$-operation coincides with the $t$-operation (cf. \\cite[Theorem 3.5]{Kg}), so $D$ is also $w$-sharp.\n\\end{proof}\n\nCombining Corollary \\ref{111} and Proposition \\ref{103}, we get\n\n\\begin{corollary}\\label{400}\nA domain $D$ is a Krull domain if and only if $D$ is a $t$-sharp TV domain.\n\\end{corollary}\n\n\n\n\\begin{corollary}\\label{300}\nIf $D$ is a countable $t$-sharp domain, then $D$ is a Krull domain.\n\\end{corollary}\n\\begin{proof}\nAssume that $D$ is a countable $t$-sharp domain.\nBy Proposition \\ref{103}, $D$ is $w$-sharp. Moreover the $w$-operation is stable and of finite character, cf. \\cite[Corollary 2.11]{AC}. By Corollary \\ref{845}, $D$ is a $t$-Dedekind domain, that is, a Krull domain.\n\\end{proof}\n\n\n\n\n\n\nBy \\cite{Zac}, a domain $D$ is called a {\\em pre-Krull domain} if $I_v$ is $t$-invertible for each nonzero ideal $I$ of $D$ (see also \\cite{L} where a pre-Krull domain is called a $(t,v)$-Dedekind domain).\n\n\n\n\n\n\\begin{proposition} \\label{3x}\nA domain $D$ is $t$-sharp if and only if\n\n$(a)$ $D$ is pre-Krull, and\n\n$(b)$ for all nonzero ideals $I$,$A$,$B$ of $D$ such that $I_v=D$ and $I\\supseteq AB$, there exist nonzero ideals $H$ and $J$ such that $I_t=(HJ)_t$, $H_t\\supseteq A$ and $J_t\\supseteq B$.\n\\end{proposition}\n\\begin{proof}\nThe implication $(\\Rightarrow)$ follows from Proposition \\ref{3a}. Conversely, assume that $(a)$ and $(b)$ hold.\nLet $I$, $A$ and $B$ be nonzero ideals of $D$ such that $I\\supseteq AB$.\nThen $I_v \\supseteq A_vB_v$ and $I_v$, $A_v$, $B_v$ are $t$-invertible ideals, cf. $(a)$.\nSince $D$ is a pre-Krull domain, $D$ is a PVMD and hence a $t$-Schreier domain, cf. \\cite[Corollary 6]{DZ1}.\nSo there exist $t$-invertible ideals $H$ and $J$ such that $I_v=(HJ)_t$, $H_t\\supseteq A_v$ and $J_t\\supseteq B_v$.\nWe have $(II^{-1})_t = (IH^{-1}J^{-1})_t \\supseteq (AH^{-1})(BJ^{-1})$. 
Set $M=(II^{-1})_t$ and note that $AH^{-1}$ and $BJ^{-1}$ are integral ideals.\nSince $I_v$ is $t$-invertible, $M_v=(I_vI^{-1})_v=D$.\nBy $(b)$, there exist nonzero ideals $N$ and $P$ such that $M_t=(NP)_t$, $N_t\\supseteq AH^{-1}$ and $P_t\\supseteq BJ^{-1}$.\nSumming up, we get\n$I_t=(I_vM)_t=(HJNP)_t=((HN)(JP))_t$, $(HN)_t\\supseteq (AHH^{-1})_t=A_t$ and $(JP)_t\\supseteq (BJJ^{-1})_t=B_t$.\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{332} If $D$ is an integrally closed domain and $I$ a nonzero ideal of $D[X]$ such that $I_v=D[X]$, then $I\\cap D \\neq 0$.\n\\end{lemma}\n\\begin{proof} Assume that $I \\cap D = 0$. By \\cite[Theorem 2.1]{AKZ}, there exist $f\\in D[X]-\\{0\\}$ and $a\\in D-\\{0\\}$ such that $J:=(a\/f)I \\subseteq D[X]$ and $J\\cap D \\neq 0$. We get $(a\/f)D[X]=(a\/f)I_v \\subseteq D[X]$, hence $a\/f \\in D[X]$ and thus $a\/f\\in D$, because $a\\in D-\\{0\\}$. We get $J\\subseteq I$ which is a contradiction because $J\\cap D \\neq 0$ and $I \\cap D = 0$.\n\\end{proof}\n\n\\begin{proposition} \\label{331}\nA domain $D$ is $t$-sharp if and only if $D[X]$ is $t$-sharp.\n\\end{proposition}\n\\begin{proof} $(\\Rightarrow).$ Set $\\Omega=D[X]$. Let $I$,$A$,$B$ be nonzero ideals of $\\Omega$ such that $I\\supseteq AB$.\nBy Proposition \\ref{3x}, $D$ is pre-Krull, so $D[X]$ is pre-Krull, cf. \\cite[Theorem 3.3]{L}. Applying Proposition \\ref{3x} for $\\Omega$, we can assume that $I_v=\\Omega$. Changing $A$ by $I+A$ and $B$ by $I+B$, we may assume that $A,B \\supseteq I$. By Lemma \\ref{332} we get $I\\cap D\\neq 0$, so $A\\cap D\\neq 0$ and $B\\cap D\\neq 0$. By \\cite[Theorem 3.2]{AKZ}, we have $I_t=I'_t\\Omega$, $A_t=A'_t\\Omega$ and $B_t=B'_t\\Omega$ for some nonzero ideals $I',A'$ and $B'$ of $D$. 
From $I\\supseteq AB$, we get $I'_t\\Omega=I_t\\supseteq A_tB_t=(A'_tB'_t)\\Omega$, hence $I'_t\\supseteq A'B'$.\nAs $D$ is $t$-sharp, there exist nonzero ideals $H$ and $J$ of $D$ such that $I'_t=(HJ)_t$, $H_t\\supseteq A'$ and $J_t\\supseteq B'$.\nHence $I_t=(HJ\\Omega)_t$, $(H\\Omega)_t\\supseteq A$ and $(J\\Omega)_t\\supseteq B$. $(\\Leftarrow).$ Let $I$,$A$,$B$ be nonzero ideals of $D$ such that $I\\supseteq AB$. As $D[X]$ is $t$-sharp, there exist nonzero ideals $H$ and $J$ of $\\Omega$ such that $(I\\Omega)_t=(HJ)_t$, $H_t\\supseteq A$ and $J_t\\supseteq B$.\nSince $H_t\\supseteq A$ and $A\\neq 0$, we derive that $H_t\\cap D\\neq 0$, hence $H_t=(M\\Omega)_t$ for some nonzero ideal $M$ of $D$, cf. \\cite[Theorem 3.2]{AKZ}. Similarly, $J_t=(N\\Omega)_t$ for some nonzero ideal $N$ of $D$. Combining the relations above, we get $I_t=(MN)_t$, $M_t\\supseteq A$ and $N_t\\supseteq B$.\n\\end{proof}\n\n\\begin{remark}\nNotice that we do not have a ``$d$-analogue'' of Proposition \\ref{331} because a sharp domain has dimension $\\leq 1$ (see \\cite[Theorem 11]{ADE}). But note that we do have a ``$v$-analogue'' of Proposition \\ref{331}. Indeed, a domain $D$ is $v$-sharp if and only if $D$ is completely integrally closed\n(cf. Corollary \\ref{100}) and $D$ is completely integrally closed if and only if so is $D[X]$. Similarly, $D$ is $v$-sharp if and only if the power series ring $D[[X]]$ is $v$-sharp.\n\\end{remark}\n\nDenote by $N_v$ the multiplicative set of $D[X]$ consisting of all nonzero polynomials $a_0+a_1X+\\cdots +a_nX^n$ such that $(a_0,a_1,...,a_n)_v=D$. The ring $D[X]_{N_v}$ was studied in \\cite{Kg}.\n\n\n\n\n\n\\begin{proposition}\\label{133}\nA domain $D$ is $t$-sharp if and only if $D[X]_{N_v}$ is sharp.\n\\end{proposition}\n\\begin{proof}\nIf $D$ is $t$-sharp, then $D$ is a PVMD, cf. Proposition \\ref{11}.\nIf $D[X]_{N_v}$ is sharp, then $D[X]_{N_v}$ is a Prufer domain (cf. \\cite[Theorem 11]{ADE}), hence $D$ is a PVMD, cf. \\cite[Theorem 3.7]{Kg}. 
So we may assume from the beginning that $D$ is a PVMD. Note that the $t$-sharp property of $D$ is in fact a property of the ordered monoid of all integral $t$-ideals of $D$ under the $t$-multiplication. Similarly, the sharp property of $D[X]_{N_v}$ is a property of the ordered monoid of all integral ideals of $D[X]_{N_v}$ under the usual multiplication. Since $D$ is a PVMD, these two monoids are isomorphic (cf. \\cite[Theorem 3.14]{Kg}), so the proof is complete.\n\\end{proof}\n\nWe end our paper with a (partial) power series analogue of Proposition \\ref{133}. A lemma is in order.\n\n\n\\begin{lemma}\\label{101}\nLet $D\\subseteq E$ be a domain extension such that every ideal of $E$ is extended from $D$. If $D$ is sharp, then $E$ is also sharp.\n\\end{lemma}\n\\begin{proof}\nLet $I,A,B$ be nonzero ideals of $D$ such that $IE\\supseteq ABE$. Then $C=IE \\cap D \\supseteq AB$. As $D$ is sharp, we have $C=HJ$ with ideals $H,J$ of $D$ such that $H\\supseteq A$ and $J\\supseteq B$. We get $IE=CE=HJE$, $HE\\supseteq AE$ and $JE\\supseteq BE$.\n\\end{proof}\n\nLet $D$ be a $t$-sharp domain which is not a field. \nBy Proposition \\ref{11}, $D$ is a PVMD of $t$-dimension one. Hence \\cite[Proposition 3.3]{AK2} shows that $c(fg)_t=(c(f)c(g))_t$ (thus $c(fg)_v=(c(f)c(g))_v$)\nfor every $f,g\\in D[[X]]-\\{0\\}$, where $c(f)$ is the ideal generated by the coefficients of $f$.\nThen $N'_v=\\{ f\\in D[[X]]-\\{0\\}\\mid c(f)_v=D\\}$\nis a multiplicative subset of the power series ring $D[[X]]$.\n The fraction ring $D[[X]]_{N'_v}$ was studied in \\cite{AK2} and \\cite{L}. Note that $D\\subseteq D[X]_{N_v}\\subseteq D[[X]]_{N'_v}$, where $N_v=\\{ f\\in D[X]-\\{0\\}\\mid c(f)_v=D\\}$.\n\n\n\n\\begin{proposition}\\label{1024}\nIf $D$ is a $t$-sharp domain, then $D[[X]]_{N'_v}$ is sharp and every ideal of $D[[X]]_{N'_v}$ is extended from $D$.\n\\end{proposition}\n\\begin{proof} \nWe may assume that $D$ is not a field. 
\nBy Proposition \\ref{3x}, $D$ is a pre-Krull domain (alias $(t,v)$-Dedekind domain). As seen in the paragraph preceding this proposition, $c(fg)_v=(c(f)c(g))_v$\nfor every $f,g\\in D[[X]]-\\{0\\}$. By \\cite[Theorem 4.3]{L}\n it follows that every ideal of $D[[X]]_{N'_v}$ is extended from $D$, then, a fortiori, from $D[X]_{N_v}$.\n By Proposition \\ref{133} it follows that $D[X]_{N_v}$ is a sharp domain, hence so is $D[[X]]_{N'_v}$, cf. Lemma \\ref{101}. \n\\end{proof}\n\n\\begin{corollary}\\label{307}\nLet $D$ be a $t$-sharp domain. Then $D[[X]]_{N'_v}$ is a faithfully flat $D[X]_{N_v}$-module and the extension map \n$I\\mapsto ID[[X]]_{N'_v}$ \n is a bijection from the set of ideals of $D[X]_{N_v}$ to the set of ideals of $D[[X]]_{N'_v}$.\n\\end{corollary}\n\\begin{proof}\nSet $E=D[X]_{N_v}$ and $F=D[[X]]_{N'_v}$.\nBy the proof of Proposition \\ref{133} it follows that $E$ is a Prufer domain. Hence \n$F$ is a flat $E$-module because over a Prufer domain every torsion-free module is flat. \nWe show that every proper ideal of $E$ extends to a proper ideal of $F$. \nLet $J$ be a proper nonzero ideal of $E$. By \\cite[Theorem 3.14]{Kg}, $J=IE$ for some ideal $I$ of $D$ such that $I_t\\neq D$. Assume that $JF=F$. Then $IF=JF=F$, so $ID[[X]]$ contains some power series $f$ with $c(f)_v=D$. Write $f=a_1f_1+...+a_nf_n$ with $a_i \\in I$ and $f_i \\in D[[X]]$. Then $D=c(f)_v \\subseteq (a_1,...,a_n)_v\\subseteq I_t$, so $I_t=D$, a contradiction. As $F$ is a flat $E$-module and every proper ideal of $E$ extends to a proper ideal of $F$, it follows that $F$ is a faithfully flat $E$-module, cf. \\cite[Exercise 16, page 45]{AM}. In particular, $HF\\cap E=H$ for each ideal $H$ of $E$. 
\nBy Proposition \\ref{1024}, every ideal of $F$ is extended from $D$ and hence from $E$ because $D\\subseteq E\\subseteq F$.\nCombining these two facts, it follows that the extension map $I\\mapsto IF$ is a bijection from the set of ideals of $E$ to the set of ideals of $F$.\n\\end{proof}\n \n\n\n\n\n\n\n{\\bf Acknowledgements.} The first author was partially supported by an HEC (Higher Education Commission, Pakistan) grant. The second author gratefully acknowledges the warm\nhospitality of the Abdus Salam School of Mathematical Sciences GCU Lahore during his visits in 2006-2011. The third author was supported by UEFISCDI, project number 83\/2010, PNII-RU code TE\\_46\/2010, program Human Resources.\n\\\\[7mm]\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec:1}}\n\nIn this note, we describe a~generalization of the Carath\\'eodory\nform of the calculus of variations in \\emph{second-order }and, for\nspecific Lagrangians, in\\emph{ higher-order} field theory. Our approach\nis based on a geometric relationship between the Poincar\\'e\\textendash Cartan\nand Carath\\'eodory forms,\\emph{ }and analysis of the corresponding\nglobal properties. In \\cite{CramSaun2}, Crampin and Saunders obtained\nthe Carath\\'eodory form for second-order Lagrangians as a~certain\nprojection onto a~sphere bundle. Here, we confirm this result by\nmeans of a different, straightforward method which furthermore allows\nhigher-order generalization. 
It is a~standard fact in the global\nvariational field theory that the local expressions,\n\\begin{equation}\n\\Theta_{\\lambda}=\\mathscr{L}\\omega_{0}+\\sum_{k=0}^{r-1}\\left(\\sum_{l=0}^{r-1-k}(-1)^{l}d_{p_{1}}\\ldots d_{p_{l}}\\frac{\\partial\\mathscr{L}}{\\partial y_{j_{1}\\ldots j_{k}p_{1}\\ldots p_{l}i}^{\\sigma}}\\right)\\omega_{j_{1}\\ldots j_{k}}^{\\sigma}\\wedge\\omega_{i},\\label{eq:PoiCar}\n\\end{equation}\nwhich generalize the well-known Poincar\\'e\\textendash Cartan form\nof the calculus of variations, define, in general, differential form\n$\\Theta_{\\lambda}$ globally for Lagrangians $\\lambda=\\mathscr{L}\\omega_{0}$\nof order $r=1$ and $r=2$ only; see Krupka \\cite{Krupka-Lepage}\n($\\Theta_{\\lambda}$ is known as the principal component of a~Lepage\nequivalent of Lagrangian $\\lambda$), and Hor\\'ak and Kol\\'a\\v{r}\n\\cite{HorakKolar} (for higher-order Poincar\\'e\\textendash Cartan\nmorphisms). We show that if $\\Theta_{\\lambda}$ is globally defined\ndifferential form for a~\\emph{class} of Lagrangians of order $r\\geq3$,\nthen a~higher-order Carath\\'eodory equivalent for Lagrangians belonging\nto this class naturally arises by means of geometric operations acting\non $\\Theta_{\\lambda}$. To this purpose, for order $r=3$ we analyze\nconditions, which describe the obstructions for globally defined principal\ncomponents of Lepage equivalents \\eqref{eq:PoiCar} (or, higher\\textendash order\nPoincar\\'e\\textendash Cartan forms). \n\nThe above-mentioned differential forms are examples of \\emph{Lepage\nforms}; for a~comprehensive exposition and original references see\nKrupka \\cite{Handbook,Krupka-Book}\\emph{. }Similarly as the well-known\nCartan form describes analytical mechanics in a~coordinate-independent\nway, in variational field theory (or, calculus of variations for multiple-integral\nproblems) this role is played by Lepage forms, in general. 
These objects\ndefine the same variational functional as prescribed by a~given\nLagrangian and, moreover, variational properties (such as variations, extremals,\nor Noether-type invariance) of the corresponding functional are\nglobally characterized in terms of geometric operations (such as the\nexterior derivative and the Lie derivative) acting on the integrands,\nthe Lepage equivalents of a~Lagrangian.\n\nA concrete application of our result in second-order field theory\nincludes the Carath\\'eodory equivalent of the Hilbert Lagrangian\nin general relativity, which we determine and which will be further studied\nin future work.\n\nBasic underlying structures, well adapted to this paper, can be found\nin Voln\\'a and Urban \\cite{Volna}. If $(U,\\varphi)$, $\\varphi=(x^{i})$,\nis a chart on a smooth manifold $X$, we set\n\\begin{equation*}\n\\omega_{0}=dx^{1}\\wedge\\ldots\\wedge dx^{n},\\qquad\\omega_{j}=i_{\\partial\/\\partial x^{j}}\\omega_{0}=\\frac{1}{(n-1)!}\\varepsilon_{ji_{2}\\ldots i_{n}}dx^{i_{2}}\\wedge\\ldots\\wedge dx^{i_{n}},\n\\end{equation*}\nwhere $\\varepsilon_{i_{1}i_{2}\\ldots i_{n}}$ is the Levi-Civita permutation\nsymbol. 
If $\\pi:Y\\rightarrow X$ is a~fibered manifold and $W$ an\nopen subset of $Y$, then there exists a~unique morphism $h:\\Omega^{r}W\\rightarrow\\Omega^{r+1}W$\nof exterior algebras of differential forms such that for any fibered\nchart $(V,\\psi)$, $\\psi=(x^{i},y^{\\sigma})$, where $V\\subset W$,\nand any differentiable function $f:W^{r}\\rightarrow\\mathbb{R}$, where\n$W^{r}=(\\pi^{r,0})^{-1}(W)$ and $\\pi^{r,s}:J^{r}Y\\rightarrow J^{s}Y$\nis the jet bundle projection, \n\\[\nhf=f\\circ\\pi^{r+1,r},\\quad\\quad hdf=(d_{i}f)dx^{i},\n\\]\nwhere\n\\begin{equation}\nd_{i}=\\frac{\\partial}{\\partial x^{i}}+\\sum_{j_{1}\\leq\\ldots\\leq j_{k}}\\frac{\\partial}{\\partial y_{j_{1}\\ldots j_{k}}^{\\sigma}}y_{j_{1}\\ldots j_{k}i}^{\\sigma}\\label{eq:FormalDerivative}\n\\end{equation}\nis the $i$-th formal derivative operator associated with $(V,\\psi)$.\nA~differential $q$-form $\\rho\\in\\Omega_{q}^{r}W$ satisfying\n$h\\rho=0$ is called \\emph{contact}, and $\\rho$ is generated by contact\n$1$-forms\n\\begin{equation*}\n\\omega_{j_{1}\\ldots j_{k}}^{\\sigma}=dy_{j_{1}\\ldots j_{k}}^{\\sigma}-y_{j_{1}\\ldots j_{k}s}^{\\sigma}dx^{s},\\qquad0\\leq k\\leq r-1.\n\\end{equation*}\nThroughout, we use the standard geometric concepts: the exterior derivative\n$d$, the contraction $i_{\\Xi}\\rho$ and the Lie derivative $\\partial_{\\Xi}\\rho$\nof a differential form $\\rho$ with respect to a vector field $\\Xi$,\nand the pull-back operation $*$ acting on differential forms.\n\n\\section{Lepage equivalents in first- and second-order field theory}\n\nBy a~\\textit{Lagrangian} $\\lambda$ for a~fibered manifold $\\pi:Y\\rightarrow X$\nof order $r$ we mean an element of the submodule $\\Omega_{n,X}^{r}W$\nof $\\pi^{r}$-horizontal $n$-forms in the module of $n$-forms $\\Omega_{n}^{r}W$,\ndefined on an open subset $W^{r}$ of the $r$-th jet prolongation\n$J^{r}Y$. 
In a~fibered chart $(V,\\psi)$, $\\psi=(x^{i},y^{\\sigma})$,\nwhere $V\\subset W$, Lagrangian $\\lambda\\in\\Omega_{n,X}^{r}W$ has\nan expression\n\\begin{equation}\n\\lambda=\\mathscr{L}\\omega_{0},\\label{eq:Lagrangian}\n\\end{equation}\nwhere $\\omega_{0}=dx^{1}\\wedge dx^{2}\\wedge\\ldots\\wedge dx^{n}$ is\nthe (local) volume element, and $\\mathscr{L}:V^{r}\\rightarrow\\mathbb{R}$\nis the \\textit{Lagrange function} associated to $\\lambda$ and $(V,\\psi)$.\n\nAn $n$-form $\\rho\\in\\Omega_{n}^{s}W$ is called a\\textit{~Lepage\nequivalent} of $\\lambda\\in\\Omega_{n,X}^{r}W$, if the following two\nconditions are satisfied: \n\n(i) $(\\pi^{q,s+1})^{*}h\\rho=(\\pi^{q,r})^{*}\\lambda$ (i.e. $\\rho$\nis \\textit{equivalent} with $\\lambda$), and \n\n(ii) $hi_{\\xi}d\\rho=0$ for arbitrary $\\pi^{s,0}$-vertical vector\nfield $\\xi$ on $W^{s}$ (i.e. $\\rho$ is a\\textit{~Lepage form}). \n\nThe following theorem describes the structure of the Lepage equivalent\nof a~Lagrangian (see \\cite{Krupka-Lepage,Handbook}).\n\\begin{thm}\n\\label{Thm:LepageEquiv}Let $\\lambda\\in\\Omega_{n,X}^{r}W$ be a~Lagrangian\nof order $r$ for $Y$, locally expressed by \\eqref{eq:Lagrangian}\nwith respect to a~fibered chart $(V,\\psi)$, $\\psi=(x^{i},y^{\\sigma})$.\nAn $n$-form $\\rho\\in\\Omega_{n}^{s}W$ is a~Lepage equivalent of\n$\\lambda$ if and only if\n\\begin{equation}\n(\\pi^{s+1,s})^{*}\\rho=\\Theta_{\\lambda}+d\\mu+\\eta,\\label{eq:LepDecomp}\n\\end{equation}\nwhere the $n$-form $\\Theta_{\\lambda}$ is defined on $V^{2r-1}$ by \\emph{\\eqref{eq:PoiCar},}\n$\\mu$ is a~contact $(n-1)$-form, and the $n$-form $\\eta$ has the\norder of contactness $\\geq2$.\n\\end{thm}\n\n$\\Theta_{\\lambda}$ is called the \\emph{principal component} of the\nLepage form $\\rho$ with respect to fibered chart $(V,\\psi)$. 
In\ngeneral, decomposition \\eqref{eq:LepDecomp} is \\emph{not} uniquely\ndetermined with respect to contact forms $\\mu$, $\\eta$, and the\nprincipal component $\\Theta_{\\lambda}$ need \\emph{not} define a~global\nform on $W^{2r-1}$. Nevertheless, the Lepage equivalent $\\rho$ satisfying\n\\eqref{eq:LepDecomp} is globally defined on $W^{s}$; moreover $E_{\\lambda}=p_{1}d\\rho$\nis a~globally defined $(n+1)$-form on $W^{2r}$, called the \\emph{Euler\\textendash Lagrange\nform} associated to $\\lambda$.\n\nWe recall the known examples of Lepage equivalents of first- and second-order\nLagrangians, determined by means of additional requirements.\n\\begin{lem}\n\\textbf{\\textup{\\label{Lem:PrincipalLepEq}(Principal Lepage form)}}\n\\emph{(a)} For every Lagrangian $\\lambda$ of order $r=1$, there\nexists a unique Lepage equivalent $\\Theta_{\\lambda}$ of $\\lambda$\non $W^{1}$, which is $\\pi^{1,0}$-horizontal and has the order of\ncontactness $\\leq1$. In a fibered chart $(V,\\psi)$, $\\Theta{}_{\\lambda}$\nhas an expression \n\\begin{equation}\n\\Theta_{\\lambda}=\\mathscr{L}\\omega_{0}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega^{\\sigma}\\wedge\\omega_{j}.\\label{eq:Poincare-Cartan}\n\\end{equation}\n\n\\emph{(b)} For every Lagrangian $\\lambda$ of order $r=2$, there\nexists a unique Lepage equivalent $\\Theta_{\\lambda}$ of $\\lambda$\non $W^{3}$, which is $\\pi^{3,1}$-horizontal and has the order of\ncontactness $\\leq1$. 
In a fibered chart $(V,\\psi)$, $\\Theta{}_{\\lambda}$\nhas an expression \n\n\\begin{equation}\n\\Theta_{\\lambda}=\\mathscr{L}\\omega_{0}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pj}^{\\sigma}}\\right)\\omega^{\\sigma}\\wedge\\omega_{j}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\wedge\\omega_{j}.\\label{eq:Poincare-Cartan-2ndOrder}\n\\end{equation}\n\\end{lem}\n\nFor $r=1$ and $r=2$, the principal component $\\Theta_{\\lambda}$\n\\eqref{eq:PoiCar} is a~\\emph{globally defined} Lepage equivalent\nof $\\lambda$. We point out that for $r\\geq3$ this is \\emph{not}\ntrue (see \\cite{HorakKolar,Krupka-Lepage}). \\eqref{eq:Poincare-Cartan}\nis the well-known \\emph{Poincar\\'e-Cartan form} (cf. Garc\\'ia \\cite{Garcia}),\nand it is generalized for second-order Lagrangians by the globally defined\n\\emph{principal Lepage equivalent} \\eqref{eq:Poincare-Cartan-2ndOrder}\non $W^{3}\\subset J^{3}Y$.\n\\begin{lem}\n\\textbf{\\textup{\\label{Lem:Fundamental}(Fundamental Lepage form)}}\nLet $\\lambda\\in\\Omega_{n,X}^{1}W$ be a~Lagrangian of order 1 for\n$Y$, locally expressed by \\eqref{eq:Lagrangian}. There exists a~unique\nLepage equivalent $Z_{\\lambda}\\in\\Omega_{n}^{1}W$ of $\\lambda$,\nwhich satisfies $Z_{h\\rho}=(\\pi^{1,0})^{*}\\rho$ for any $n$-form\n$\\rho\\in\\Omega_{n}^{0}W$ on $W$ such that $h\\rho=\\lambda$. 
With\nrespect to a fibered chart $(V,\\psi)$, $Z_{\\lambda}$ has an expression\n\\begin{align}\nZ_{\\lambda} & =\\mathscr{L}\\omega_{0}+\\sum_{k=1}^{n}\\frac{1}{(n-k)!}\\frac{1}{(k!)^{2}}\\frac{\\partial^{k}\\mathscr{L}}{\\partial y_{j_{1}}^{\\sigma_{1}}\\ldots\\partial y_{j_{k}}^{\\sigma_{k}}}\\varepsilon_{j_{1}\\ldots j_{k}i_{k+1}\\ldots i_{n}}\\label{eq:Fundamental}\\\\\n & \\quad\\cdot\\omega^{\\sigma_{1}}\\wedge\\ldots\\wedge\\omega^{\\sigma_{k}}\\wedge dx^{i_{k+1}}\\wedge\\ldots\\wedge dx^{i_{n}}.\\nonumber \n\\end{align}\n\\end{lem}\n\n$Z_{\\lambda}$ \\eqref{eq:Fundamental} is known as the \\emph{fundamental\nLepage form} (cf. \\cite{Krupka-Fund.Lep.eq.}, \\cite{Betounes}), and it\nis characterized by the equivalence: $Z_{\\lambda}$ is closed if and\nonly if $\\lambda$ is trivial (i.e. the Euler\\textendash Lagrange\nexpressions associated with $\\lambda$ vanish identically). Recently,\nthe form \\eqref{eq:Fundamental} was studied for variational problems\nfor submanifolds in \\cite{UrbBra}, and applied for studying symmetries\nand conservation laws in \\cite{Javier}.\n\\begin{lem}\n\\textbf{\\textup{\\label{Lem:Caratheodory}(Carath\\'eodory form) }}Let\n$\\lambda\\in\\Omega_{n,X}^{1}W$ be a~non-vanishing Lagrangian of order\n1 for $Y$ \\eqref{eq:Lagrangian}. 
Then a~differential $n$-form\n$\\Lambda_{\\lambda}\\in\\Omega_{n}^{1}W$, locally expressed as\n\\begin{align}\n\\Lambda_{\\lambda} & =\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}\\left(\\mathscr{L}dx^{j}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega^{\\sigma}\\right)\\label{eq:CaratheodoryForm}\\\\\n & =\\frac{1}{\\mathscr{L}^{n-1}}\\left(\\mathscr{L}dx^{1}+\\frac{\\partial\\mathscr{L}}{\\partial y_{1}^{\\sigma_{1}}}\\omega^{\\sigma_{1}}\\right)\\wedge\\ldots\\wedge\\left(\\mathscr{L}dx^{n}+\\frac{\\partial\\mathscr{L}}{\\partial y_{n}^{\\sigma_{n}}}\\omega^{\\sigma_{n}}\\right),\\nonumber \n\\end{align}\nis a Lepage equivalent of $\\lambda$.\n\\end{lem}\n\n$\\Lambda_{\\lambda}$ \\eqref{eq:CaratheodoryForm} is the well-known \\emph{Carath\\'eodory\nform} (cf. \\cite{Caratheodory}), associated to a~nowhere-vanishing\nLagrangian $\\lambda\\in\\Omega_{n,X}^{1}W$. $\\Lambda_{\\lambda}$ is uniquely characterized\nby the following properties: $\\Lambda_{\\lambda}$ is (i) a~Lepage\nequivalent of $\\lambda$, (ii) decomposable, (iii) $\\pi^{1,0}$-horizontal\n(i.e. semi-basic with respect to the projection $\\pi^{1,0}$).\n\n\\section{The Carath\\'eodory form: second-order generalization}\n\nLet $\\lambda\\in\\Omega_{n,X}^{1}W$ be a~\\emph{non-vanishing,} \\emph{first-order}\nLagrangian on $W^{1}\\subset J^{1}Y$. 
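As a simple illustration (a routine expansion added here for orientation, not contained in the cited sources), the product \eqref{eq:CaratheodoryForm} can be multiplied out explicitly in the case $n=2$, where $\omega_{0}=dx^{1}\wedge dx^{2}$, $\omega_{1}=dx^{2}$ and $\omega_{2}=-dx^{1}$:

```latex
% Expanding the two-factor product (n = 2) of the Caratheodory form:
\begin{align*}
\Lambda_{\lambda}
 & =\frac{1}{\mathscr{L}}\left(\mathscr{L}dx^{1}
    +\frac{\partial\mathscr{L}}{\partial y_{1}^{\sigma}}\omega^{\sigma}\right)
   \wedge\left(\mathscr{L}dx^{2}
    +\frac{\partial\mathscr{L}}{\partial y_{2}^{\nu}}\omega^{\nu}\right)\\
 & =\mathscr{L}\omega_{0}
    +\frac{\partial\mathscr{L}}{\partial y_{j}^{\sigma}}\omega^{\sigma}\wedge\omega_{j}
    +\frac{1}{\mathscr{L}}\frac{\partial\mathscr{L}}{\partial y_{1}^{\sigma}}
     \frac{\partial\mathscr{L}}{\partial y_{2}^{\nu}}
     \omega^{\sigma}\wedge\omega^{\nu}.
\end{align*}
```

The first two terms reproduce the Poincar\'e-Cartan form \eqref{eq:Poincare-Cartan}; the Carath\'eodory form thus differs from $\Theta_{\lambda}$ only by a term of contact degree $2$.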
In the next lemma, we describe\na~new observation, showing that the Carath\\'eodory form $\\Lambda_{\\lambda}$\n\\eqref{eq:CaratheodoryForm} arises from the Poincar\\'e-Cartan form\n$\\Theta_{\\lambda}$ \\eqref{eq:Poincare-Cartan} by means of contraction\noperations on differential forms with respect to the formal derivative\nvector fields $d_{i}$ \\eqref{eq:FormalDerivative}.\n\\begin{lem}\n\\label{lem:Car-PC}The Carath\\'eodory form $\\Lambda_{\\lambda}$ \\eqref{eq:CaratheodoryForm}\nand the Poincar\\'e-Cartan form $\\Theta_{\\lambda}$ \\eqref{eq:Poincare-Cartan}\nsatisfy\n\\begin{align*}\n\\Lambda_{\\lambda} & =\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}.\n\\end{align*}\n\\end{lem}\n\n\\begin{proof}\nFrom the decomposable structure of $\\Lambda_{\\lambda}$, we see that\nit suffices to prove the formula\n\\begin{equation*}\ni_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}=(-1)^{n-j}\\left(\\mathscr{L}dx^{j}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega^{\\sigma}\\right)\n\\end{equation*}\nfor every $j$, $1\\leq j\\leq n$. 
Since $dx^{k}\\wedge\\omega_{j}=\\delta_{j}^{k}\\omega_{0}$,\nthe Poincar\\'e-Cartan form is expressible as\n\\begin{align*}\n\\Theta_{\\lambda} & =\\mathscr{L}\\omega_{0}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega^{\\sigma}\\wedge\\omega_{j}=\\mathscr{L}\\omega_{0}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}dy^{\\sigma}\\wedge\\omega_{j}-\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}y_{k}^{\\sigma}dx^{k}\\wedge\\omega_{j}\\\\\n & =\\left(\\mathscr{L}-\\frac{\\partial\\mathscr{L}}{\\partial y_{1}^{\\sigma}}y_{1}^{\\sigma}-\\frac{\\partial\\mathscr{L}}{\\partial y_{2}^{\\sigma}}y_{2}^{\\sigma}-\\ldots-\\frac{\\partial\\mathscr{L}}{\\partial y_{n}^{\\sigma}}y_{n}^{\\sigma}\\right)\\omega_{0}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}dy^{\\sigma}\\wedge\\omega_{j}.\n\\end{align*}\nApplying the contraction operations to $\\Theta_{\\lambda}$, we obtain\nby means of a straightforward computation for every $j$,\n\\begin{align*}\n & i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}=\\left(\\mathscr{L}-\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}y_{j}^{\\sigma}-\\ldots-\\frac{\\partial\\mathscr{L}}{\\partial y_{n}^{\\sigma}}y_{n}^{\\sigma}\\right)dx^{j}\\wedge\\ldots\\wedge dx^{n}\\\\\n & \\quad\\quad+(-1)^{j-1}\\sum_{k=j}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}dy^{\\sigma}\\wedge i_{d_{j-1}}\\ldots i_{d_{1}}\\omega_{k}\\\\\n & \\quad\\quad+\\sum_{l=1}^{j-1}(-1)^{l-1}y_{l}^{\\sigma}i_{d_{j-1}}\\ldots i_{d_{l+1}}i_{d_{l-1}}\\ldots i_{d_{1}}\\left(\\sum_{k=j}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}\\omega_{k}\\right),\n\\end{align*}\nand\n\\begin{align*}\n & i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}\\\\\n & \\quad=-\\left(\\mathscr{L}-\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}y_{j}^{\\sigma}-\\sum_{k=j+2}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}y_{k}^{\\sigma}\\right)dx^{j}\\wedge dx^{j+2}\\wedge\\ldots\\wedge dx^{n}\\\\\n 
& \\quad+(-1)^{j}\\sum_{k=j}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}dy^{\\sigma}\\wedge i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\omega_{k}\\\\\n & \\quad+\\sum_{l=1}^{j-1}(-1)^{l-1}y_{l}^{\\sigma}i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{l+1}}i_{d_{l-1}}\\ldots i_{d_{1}}\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega_{j}+\\sum_{k=j+2}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}\\omega_{k}\\right)\\\\\n & \\quad+(-1)^{j-1}y_{j+1}^{\\sigma}i_{d_{j-1}}\\ldots i_{d_{1}}\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega_{j}+\\sum_{k=j+2}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}\\omega_{k}\\right).\n\\end{align*}\nFollowing the inductive structure of the preceding expressions, we\nget after the next $n-j-1$ steps,\n\\begin{align*}\n & i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}\\\\\n & \\quad=(-1)^{n-j}\\left(\\mathscr{L}-\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}y_{j}^{\\sigma}\\right)dx^{j}+(-1)^{n-j}\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}dy^{\\sigma}-(-1)^{n-j}\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\sum_{k\\neq j}y_{k}^{\\sigma}dx^{k}\\\\\n & \\quad=(-1)^{n-j}\\left(\\mathscr{L}dx^{j}+\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}\\omega^{\\sigma}\\right),\n\\end{align*}\nas required.\n\\end{proof}\nAn intrinsic nature of Lemma \\ref{lem:Car-PC} indicates a~possible\nextension of the Carath\\'eodory form \\eqref{eq:CaratheodoryForm}\nfor higher-order variational problems. 
We put\n\\begin{equation}\n\\Lambda_{\\lambda}=\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda},\\label{eq:2nd-Caratheodory}\n\\end{equation}\nwhere $\\Theta_{\\lambda}$ in \\eqref{eq:2nd-Caratheodory} denotes\nthe principal Lepage equivalent \\eqref{eq:Poincare-Cartan-2ndOrder}\nof a~\\emph{second-order} Lagrangian $\\lambda$, and verify that formula\n\\eqref{eq:2nd-Caratheodory} defines a~global form.\n\\begin{thm}\n\\label{thm:Main}Let $\\lambda\\in\\Omega_{n,X}^{2}W$ be a~non-vanishing\nsecond-order Lagrangian on $W^{2}\\subset J^{2}Y$. Then $\\Lambda_{\\lambda}$\nsatisfies:\n\n\\emph{(a)} Formula \\eqref{eq:2nd-Caratheodory} defines an $n$-form\non $W^{3}\\subset J^{3}Y$. \n\n\\emph{(b)} If $\\lambda\\in\\Omega_{n,X}^{2}W$ has an expression \\eqref{eq:Lagrangian}\nwith respect to a~fibered chart $(V,\\psi)$, $\\psi=(x^{i},y^{\\sigma})$,\nsuch that $V\\subset W$, then $\\Lambda_{\\lambda}$ \\eqref{eq:2nd-Caratheodory}\nis expressed by\n\\begin{align}\n\\Lambda_{\\lambda} & =\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right).\\label{eq:2ndCaratheodoryExpression}\n\\end{align}\n\n\\emph{(c)} $\\Lambda_{\\lambda}\\in\\Omega_{n}^{3}W$ \\eqref{eq:2nd-Caratheodory}\nassociated to a~second-order Lagrangian $\\lambda\\in\\Omega_{n,X}^{2}W$\nis a~Lepage equivalent of $\\lambda$, which is decomposable and $\\pi^{3,1}$-horizontal.\n\\end{thm}\n\n\\begin{proof}\n1. Suppose $(V,\\psi)$, $\\psi=(x^{i},y^{\\sigma})$, and $(\\bar{V},\\bar{\\psi})$,\n$\\bar{\\psi}=(\\bar{x}^{i},\\bar{y}^{\\sigma})$, are two overlapping\nfibered charts on $W$. 
For $\\lambda\\in\\Omega_{n,X}^{2}W$, the corresponding\nchart expressions $\\lambda=\\mathscr{L}\\omega_{0}$ and $\\lambda=\\bar{\\mathscr{L}}\\bar{\\omega}_{0}$\nsatisfy\n\\begin{equation}\n\\mathscr{L}=\\left(\\bar{\\mathscr{L}}\\circ\\bar{\\psi}^{-1}\\circ\\psi\\right)\\det\\frac{\\partial\\bar{x}^{i}}{\\partial x^{j}}.\\label{eq:LagrangianTransform}\n\\end{equation}\nSince the push-forward vector field $\\bar{d}_{k}$,\n\\[\n\\bar{d}_{k}=\\frac{\\partial}{\\partial\\bar{x}^{k}}+\\bar{y}_{k}^{\\sigma}\\frac{\\partial}{\\partial\\bar{y}^{\\sigma}}+\\bar{y}_{kl}^{\\sigma}\\frac{\\partial}{\\partial\\bar{y}_{l}^{\\sigma}},\n\\]\nof vector field $(\\partial x^{i}\/\\partial\\bar{x}^{k})d_{i}$ with\nrespect to the chart transformation $\\bar{\\psi}^{-1}\\circ\\psi$ satisfies\n\\[\n(\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\left(i_{\\bar{d}_{k}}\\Theta_{\\lambda}\\right)=i_{\\frac{\\partial x^{i}}{\\partial\\bar{x}^{k}}d_{i}}\\left((\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\Theta_{\\lambda}\\right)=\\frac{\\partial x^{i}}{\\partial\\bar{x}^{k}}i_{d_{i}}\\left((\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\Theta_{\\lambda}\\right),\n\\]\nand $\\Theta_{\\lambda}$ \\eqref{eq:Poincare-Cartan-2ndOrder} is globally\ndefined, we get\n\\begin{align*}\n & (\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\frac{1}{\\mathscr{\\bar{L}}^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{\\bar{d}_{n}}\\ldots i_{\\bar{d}_{j+1}}i_{\\bar{d}_{j-1}}\\ldots i_{\\bar{d}_{1}}\\Theta_{\\lambda}\\\\\n & \\quad=\\frac{1}{\\left(\\bar{\\mathscr{L}}\\circ\\bar{\\psi}^{-1}\\circ\\psi\\right)^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}\\frac{\\partial x^{i_{1}}}{\\partial\\bar{x}^{1}}\\ldots\\frac{\\partial x^{i_{j-1}}}{\\partial\\bar{x}^{j-1}}\\frac{\\partial x^{i_{j+1}}}{\\partial\\bar{x}^{j+1}}\\ldots\\frac{\\partial x^{i_{n}}}{\\partial\\bar{x}^{n}}\\\\\n & \\quad\\quad\\cdot i_{d_{i_{n}}}\\ldots i_{d_{i_{j+1}}}i_{d_{i_{j-1}}}\\ldots i_{d_{i_{1}}}\\left((\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\Theta_{\\lambda}\\right)\\\\\n & 
\\quad=\\frac{1}{\\left(\\bar{\\mathscr{L}}\\circ\\bar{\\psi}^{-1}\\circ\\psi\\right)^{n-1}}\\left(\\frac{\\partial x^{i_{1}}}{\\partial\\bar{x}^{1}}\\ldots\\frac{\\partial x^{i_{n}}}{\\partial\\bar{x}^{n}}\\varepsilon_{i_{1}\\ldots i_{n}}\\right)^{n}\\\\\n & \\quad\\quad\\cdot\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}(\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\Theta_{\\lambda}\\\\\n & \\quad=\\frac{1}{\\left((\\bar{\\mathscr{L}}\\circ\\bar{\\psi}^{-1}\\circ\\psi)\\det\\frac{\\partial\\bar{x}^{i}}{\\partial x^{j}}\\right)^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}(\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\Theta_{\\lambda}\\\\\n & \\quad=\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}(\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\Theta_{\\lambda}\\\\\n & \\quad=\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda},\n\\end{align*}\nas required.\n\n2. 
Analogously to the proof of Lemma \\ref{lem:Car-PC}, we find a~chart\nexpression of $1$-form\n\\[\ni_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda},\n\\]\nwhere $\\Theta_{\\lambda}$ is the principal Lepage equivalent \\eqref{eq:Poincare-Cartan-2ndOrder}.\nUsing $dx^{k}\\wedge\\omega_{j}=\\delta_{j}^{k}\\omega_{0}$, we have\n\\begin{align*}\n\\Theta_{\\lambda} & =\\left(\\mathscr{L}-\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pj}^{\\sigma}}\\right)y_{j}^{\\sigma}-\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}y_{ij}^{\\sigma}\\right)\\omega_{0}\\\\\n & +\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pj}^{\\sigma}}\\right)dy^{\\sigma}\\wedge\\omega_{j}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}dy_{i}^{\\sigma}\\wedge\\omega_{j}.\n\\end{align*}\nThen\n\\begin{align*}\n & i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}\\\\\n & \\quad=\\left(\\mathscr{L}-\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{k}^{\\sigma}-\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{ik}^{\\sigma}\\right)dx^{j}\\wedge\\ldots\\wedge dx^{n}\\\\\n & \\quad+\\sum_{l=1}^{j-1}(-1)^{l-1}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{l}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{il}^{\\sigma}\\right)i_{d_{j-1}}\\ldots i_{d_{l+1}}i_{d_{l-1}}\\ldots i_{d_{1}}\\omega_{k}\\\\\n & \\quad+(-1)^{j-1}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)dy^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}dy_{i}^{\\sigma}\\right)\\wedge\\left(i_{d_{j-1}}\\ldots i_{d_{1}}\\omega_{k}\\right)\\\\\n & 
\\quad=\\left(\\mathscr{L}-\\sum_{k=j}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{k}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{ik}^{\\sigma}\\right)\\right)dx^{j}\\wedge\\ldots\\wedge dx^{n}\\\\\n & \\quad+\\sum_{l=1}^{j-1}(-1)^{l-1}\\sum_{k=j}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{l}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{il}^{\\sigma}\\right)i_{d_{j-1}}\\ldots i_{d_{l+1}}i_{d_{l-1}}\\ldots i_{d_{1}}\\omega_{k}\\\\\n & \\quad+(-1)^{j-1}\\sum_{k=j}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)dy^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}dy_{i}^{\\sigma}\\right)\\wedge\\left(i_{d_{j-1}}\\ldots i_{d_{1}}\\omega_{k}\\right),\n\\end{align*}\nand\n\\begin{align*}\n & i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}\\\\\n & \\quad=\\Biggl(-\\mathscr{L}+\\sum_{\\begin{array}{c}\nk=j\\\\\nk\\neq j+1\n\\end{array}}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{k}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{ik}^{\\sigma}\\right)\\Biggr)dx^{j}\\wedge\\bigwedge_{l=j+2}^{n}dx^{l}\\\\\n & \\quad+\\sum_{l=1}^{j-1}(-1)^{l-1}\\sum_{\\begin{array}{c}\nk=j\\\\\nk\\neq j+1\n\\end{array}}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{l}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{il}^{\\sigma}\\right)\\\\\n & \\quad\\quad i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{l+1}}i_{d_{l-1}}\\ldots i_{d_{1}}\\omega_{k}\\\\\n & 
\\quad+(-1)^{j-1}\\sum_{\\begin{array}{c}\nk=j\\\\\nk\\neq j+1\n\\end{array}}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)y_{j+1}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}y_{i,j+1}^{\\sigma}\\right)i_{d_{j-1}}\\ldots i_{d_{1}}\\omega_{k}\\\\\n & \\quad+(-1)^{j}\\sum_{k=j}^{n}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pk}^{\\sigma}}\\right)dy^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}dy_{i}^{\\sigma}\\right)\\wedge\\left(i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\omega_{k}\\right).\n\\end{align*}\nAfter another $n-j-1$ steps we obtain\n\\begin{align*}\n & i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda}\\\\\n & =(-1)^{n-j}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pj}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right).\n\\end{align*}\n\n3. From \\eqref{eq:2ndCaratheodoryExpression} it is evident that $\\Lambda_{\\lambda}$\n\\eqref{eq:2nd-Caratheodory} is decomposable, $\\pi^{3,1}$-horizontal,\nand obeys $h\\Lambda_{\\lambda}=\\lambda$. It is sufficient to verify\nthat $\\Lambda_{\\lambda}$ is a~Lepage form, that is $hi_{\\xi}d\\Lambda_{\\lambda}=0$\nfor arbitrary $\\pi^{3,0}$-vertical vector field $\\xi$ on $W^{3}\\subset J^{3}Y$.\nThis follows, however, by means of a~straightforward computation\nusing chart expression \\eqref{eq:2ndCaratheodoryExpression}. 
Indeed,\nwe have\n\\begin{align*}\n & d\\Lambda_{\\lambda}=(1-n)\\frac{1}{\\mathscr{L}^{n}}d\\mathscr{L}\\wedge\\bigwedge_{j=1}^{n}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\\\\n & \\quad+\\frac{1}{\\mathscr{L}^{n-1}}\\sum_{k=1}^{n}(-1)^{k-1}d\\left(\\mathscr{L}dx^{k}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\\\\n & \\quad\\wedge\\bigwedge_{j\\neq k}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right),\n\\end{align*}\nand the contraction of $d\\Lambda_{\\lambda}$ with respect to $\\pi^{3,0}$-vertical\nvector field $\\xi$ reads\n\\begin{align*}\n & i_{\\xi}d\\Lambda_{\\lambda}=(1-n)\\frac{1}{\\mathscr{L}^{n}}i_{\\xi}d\\mathscr{L}\\bigwedge_{j=1}^{n}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\\\\n & \\quad-(1-n)\\frac{1}{\\mathscr{L}^{n}}d\\mathscr{L}\\wedge\\sum_{l=1}^{n}(-1)^{l-1}\\frac{\\partial\\mathscr{L}}{\\partial y_{jl}^{\\sigma}}\\xi_{j}^{\\sigma}\\\\\n & \\quad\\quad\\bigwedge_{j\\neq l}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial 
y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\\\\n & \\quad+\\frac{1}{\\mathscr{L}^{n-1}}\\sum_{k=1}^{n}(-1)^{k-1}\\left(i_{\\xi}d\\mathscr{L}dx^{k}+i_{\\xi}d\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\right)\\omega^{\\sigma}\\right.\\\\\n & \\quad\\left.-\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\right)\\xi_{j}^{\\sigma}dx^{j}+i_{\\xi}d\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\omega_{i}^{\\sigma}-\\xi_{i}^{\\sigma}d\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}-\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\xi_{is}^{\\sigma}dx^{s}\\right)\\\\\n & \\quad\\quad\\wedge\\bigwedge_{j\\neq k}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\\\\n & \\quad+\\frac{1}{\\mathscr{L}^{n-1}}\\sum_{k=1}^{n}(-1)^{k-1}d\\left(\\mathscr{L}dx^{k}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\\\\n & \\quad\\quad\\wedge\\sum_{lk}(-1)^{l}\\frac{\\partial\\mathscr{L}}{\\partial y_{il}^{\\sigma}}\\xi_{i}^{\\sigma}\\bigwedge_{j\\neq k,l}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right).\n\\end{align*}\nHence the horizontal part of $i_{\\xi}d\\Lambda_{\\lambda}$ satisfies\n\\begin{align*}\nhi_{\\xi}d\\Lambda_{\\lambda} & 
=(1-n)i_{\\xi}d\\mathscr{L}\\omega_{0}-(1-n)\\frac{1}{\\mathscr{L}}\\sum_{l=1}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{jl}^{\\sigma}}\\xi_{j}^{\\sigma}d_{l}\\mathscr{L}\\omega_{0}+ni_{\\xi}d\\mathscr{L}\\omega_{0}\\\\\n & -\\sum_{k=1}^{n}\\left(\\xi_{i}^{\\sigma}d_{k}\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\xi_{ik}^{\\sigma}\\right)\\omega_{0}-\\sum_{k=1}^{n}\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{k}^{\\sigma}}-d_{i}\\frac{\\partial\\mathscr{L}}{\\partial y_{ik}^{\\sigma}}\\right)\\xi_{k}^{\\sigma}\\omega_{0}\\\\\n & +\\frac{1}{\\mathscr{L}}\\sum_{k=1}^{n}(-1)^{k-1}d_{s}\\mathscr{L}dx^{s}\\wedge dx^{k}\\wedge\\sum_{lk}(-1)^{l}\\frac{\\partial\\mathscr{L}}{\\partial y_{il}^{\\sigma}}\\xi_{i}^{\\sigma}\\bigwedge_{j\\neq k,l}dx^{j}\\\\\n & =-(1-n)\\frac{1}{\\mathscr{L}}\\sum_{l=1}^{n}\\frac{\\partial\\mathscr{L}}{\\partial y_{jl}^{\\sigma}}\\xi_{j}^{\\sigma}d_{l}\\mathscr{L}\\omega_{0}-\\frac{1}{\\mathscr{L}}\\sum_{k=1}^{n}\\sum_{l\\neq k}\\frac{\\partial\\mathscr{L}}{\\partial y_{il}^{\\sigma}}\\xi_{i}^{\\sigma}d_{s}\\mathscr{L}dx^{s}\\wedge\\omega_{l}\\\\\n & =0,\n\\end{align*}\nwhere the identity $dx^{k}\\wedge\\omega_{l}=\\delta_{l}^{k}\\omega_{0}$\nis applied.\n\\end{proof}\nLepage equivalent $\\Lambda_{\\lambda}$ \\eqref{eq:2nd-Caratheodory}\nis said to be the \\emph{Carath\\'eodory form} associated to $\\lambda\\in\\Omega_{n,X}^{2}W$.\n\n\\section{The Carath\\'{e}odory form and principal Lepage equivalents in higher-order\ntheory}\n\nWe point out that in the proof of Theorem \\ref{thm:Main}, (a), the\nchart independence of formula \\eqref{eq:2nd-Caratheodory} is based\non principal Lepage equivalent $\\Theta_{\\lambda}$ \\eqref{eq:Poincare-Cartan-2ndOrder}\nof a~second-order Lagrangian, which is defined \\emph{globally}. 
Since\nfor a Lagrangian of order $r\\geq3$ the principal components of Lepage\nequivalents are, in general, only \\emph{local} expressions (see the Introduction),\nthe definition \\eqref{eq:2nd-Caratheodory} can be applied only to those\nclasses of Lagrangians of order $r$ over a~fibered manifold\nwhich ensure the invariance of the local expressions $\\Theta_{\\lambda}$ \\eqref{eq:PoiCar}.\n\nConsider now a~\\emph{third-order} Lagrangian $\\lambda\\in\\Omega_{n,X}^{3}W$.\nThen the principal component $\\Theta_{\\lambda}$ of a~Lepage equivalent\nof $\\lambda$ reads\n\\begin{align}\n\\Theta_{\\lambda} & =\\mathscr{L}\\omega_{0}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pj}^{\\sigma}}+d_{p}d_{q}\\frac{\\partial\\mathscr{L}}{\\partial y_{pqj}^{\\sigma}}\\right)\\omega^{\\sigma}\\wedge\\omega_{j}\\label{eq:PrinComp3}\\\\\n & +\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{kj}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{kpj}^{\\sigma}}\\right)\\omega_{k}^{\\sigma}\\wedge\\omega_{j}+\\frac{\\partial\\mathscr{L}}{\\partial y_{klj}^{\\sigma}}\\omega_{kl}^{\\sigma}\\wedge\\omega_{j}.\\nonumber \n\\end{align}\nIn the following lemma we describe conditions for the invariance of \\eqref{eq:PrinComp3}.\n\\begin{lem}\n\\label{lem:3rdCond}The following two conditions are equivalent:\n\n\\emph{(a)} $\\Theta_{\\lambda}$ satisfies\n\\[\n(\\bar{\\psi}^{-1}\\circ\\psi)^{*}\\bar{\\Theta}_{\\lambda}=\\Theta_{\\lambda}.\n\\]\n\\emph{(b)} For any two overlapping fibered charts on $Y$, $(V,\\psi)$,\n$\\psi=(x^{i},y^{\\sigma})$, and $(\\bar{V},\\bar{\\psi})$, $\\bar{\\psi}=(\\bar{x}^{i},\\bar{y}^{\\sigma})$,\n\\begin{equation}\nd_{k}\\left(\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{l_{1}l_{2}k}^{\\tau}}\\frac{\\partial x^{s}}{\\partial\\bar{x}^{p}}-\\frac{\\partial\\mathscr{L}}{\\partial y_{l_{1}l_{2}s}^{\\tau}}\\frac{\\partial x^{k}}{\\partial\\bar{x}^{p}}\\right)\\frac{\\partial^{2}\\bar{x}^{p}}{\\partial 
x^{l_{1}}\\partial x^{l_{2}}}\\frac{\\partial y^{\\tau}}{\\partial\\bar{y}^{\\sigma}}\\right)=0.\\label{eq:Obstruction-3rd}\n\\end{equation}\n\\end{lem}\n\n\\begin{proof}\nEquivalence conditions (a) and (b) follows from the chart transformation\n\\begin{align*}\n & \\bar{x}^{k}=\\bar{x}^{k}(x^{s}),\\quad\\bar{y}^{\\sigma}=\\bar{y}^{\\sigma}(x^{s},y^{\\nu}),\\\\\n & \\bar{y}_{k}^{\\sigma}=\\frac{d}{d\\bar{x}^{k}}(\\bar{y}^{\\sigma})=\\frac{\\partial x^{s}}{\\partial\\bar{x}^{k}}\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial x^{s}}+\\frac{\\partial x^{s}}{\\partial\\bar{x}^{k}}\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial y^{\\nu}}y_{s}^{\\nu},\\\\\n & \\bar{y}_{ij}^{\\sigma}=\\frac{d}{d\\bar{x}^{j}}(\\bar{y}_{i}^{\\sigma})=\\frac{\\partial x^{t}}{\\partial\\bar{x}^{j}}\\frac{d}{dx^{t}}\\left(\\frac{\\partial x^{s}}{\\partial\\bar{x}^{i}}\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial x^{s}}+\\frac{\\partial x^{s}}{\\partial\\bar{x}^{i}}\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial y^{\\nu}}y_{s}^{\\nu}\\right),\\\\\n & \\bar{y}_{ijk}^{\\sigma}=\\frac{d}{d\\bar{x}^{k}}(\\bar{y}_{ij}^{\\sigma})=\\frac{\\partial x^{r}}{\\partial\\bar{x}^{k}}\\frac{d}{dx^{r}}\\left(\\frac{\\partial x^{t}}{\\partial\\bar{x}^{j}}\\frac{d}{dx^{t}}\\left(\\frac{\\partial x^{s}}{\\partial\\bar{x}^{i}}\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial x^{s}}+\\frac{\\partial x^{s}}{\\partial\\bar{x}^{i}}\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial y^{\\nu}}y_{s}^{\\nu}\\right)\\right),\n\\end{align*}\napplied to $\\Theta_{\\lambda}$, where Lagrange function $\\mathscr{L}$\nis transformed by \\eqref{eq:LagrangianTransform} and the following\nidentities are employed,\n\\begin{align*}\n & \\bar{\\omega}_{j}=\\frac{\\partial x^{s}}{\\partial\\bar{x}^{j}}\\det\\left(\\frac{\\partial\\bar{x}^{p}}{\\partial x^{q}}\\right)\\omega_{s},\\\\\n & \\bar{\\omega}^{\\sigma}=\\frac{\\partial\\bar{y}^{\\sigma}}{\\partial 
y^{\\nu}}\\omega^{\\nu},\\quad\\bar{\\omega}_{i}^{\\sigma}=\\frac{\\partial\\bar{y}_{i}^{\\sigma}}{\\partial y^{\\nu}}\\omega^{\\nu}+\\frac{\\partial\\bar{y}_{i}^{\\sigma}}{\\partial y_{p}^{\\nu}}\\omega_{p}^{\\nu},\\quad\\bar{\\omega}_{ij}^{\\sigma}=\\frac{\\partial\\bar{y}_{ij}^{\\sigma}}{\\partial y^{\\nu}}\\omega^{\\nu}+\\frac{\\partial\\bar{y}_{ij}^{\\sigma}}{\\partial y_{p}^{\\nu}}\\omega_{p}^{\\nu}+\\frac{\\partial\\bar{y}_{ij}^{\\sigma}}{\\partial y_{pq}^{\\nu}}\\omega_{pq}^{\\nu}.\n\\end{align*}\n\\end{proof}\n\\begin{thm}\n\\label{thm:Main-3rd}Suppose that a~fibered manifold $\\pi:Y\\rightarrow X$\nand a~non-vanishing third-order Lagrangian $\\lambda\\in\\Omega_{n,X}^{3}W$\nsatisfy condition \\emph{\\eqref{eq:Obstruction-3rd}}. Then $\\Lambda_{\\lambda}$\n\\emph{\\eqref{eq:2nd-Caratheodory}}, where $\\Theta_{\\lambda}$ is\ngiven by \\emph{\\eqref{eq:PrinComp3},} defines a~differential $n$-form\non $W^{5}\\subset J^{5}Y$, which is a~Lepage equivalent of $\\lambda$,\ndecomposable and $\\pi^{5,2}$-horizontal. 
In a~fibered chart $(V,\\psi)$,\n$\\psi=(x^{i},y^{\\sigma})$, on $W$, $\\Lambda_{\\lambda}$ has the expression\n\n\\begin{align}\n\\Lambda_{\\lambda} & =\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}\\left(\\mathscr{L}dx^{j}+\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{pj}^{\\sigma}}+d_{p}d_{q}\\frac{\\partial\\mathscr{L}}{\\partial y_{pqj}^{\\sigma}}\\right)\\omega^{\\sigma}\\right.\\label{eq:3rdCaratheodoryExpression}\\\\\n & \\qquad\\qquad\\qquad+\\left.\\left(\\frac{\\partial\\mathscr{L}}{\\partial y_{ij}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{L}}{\\partial y_{ipj}^{\\sigma}}\\right)\\omega_{i}^{\\sigma}+\\frac{\\partial\\mathscr{L}}{\\partial y_{ikj}^{\\sigma}}\\omega_{ik}^{\\sigma}\\right).\\nonumber \n\\end{align}\n\\end{thm}\n\n\\begin{proof}\nThis is an immediate consequence of Lemma \\ref{lem:3rdCond} and the\nprocedure given by Lemma \\ref{lem:Car-PC} and Theorem \\ref{thm:Main}.\n\\end{proof}\n\\begin{rem}\nNote that according to Lemma \\ref{lem:3rdCond}, which characterizes the obstructions\nfor the principal component $\\Theta_{\\lambda}$ \\eqref{eq:PrinComp3}\nto be a~global form, the Carath\\'eodory form \\eqref{eq:3rdCaratheodoryExpression}\nis well-defined for third-order Lagrangians on fibered manifolds\nwhich satisfy condition \\eqref{eq:Obstruction-3rd}. Trivial cases\nwhen \\eqref{eq:Obstruction-3rd} holds identically include\n(i) Lagrangians \\emph{independent} of the variables $y_{ijk}^{\\sigma}$,\nand (ii) fibered manifolds whose bases are endowed with a~smooth structure\nadmitting \\emph{linear} chart transformations. 
An example of (ii) is given by fibered\nmanifolds over the two-dimensional open \\emph{M\\\"{o}bius strip} (for\ndetails see \\cite{UV}).\n\\end{rem}\n\nSuppose that a~pair $(\\lambda,\\pi)$, where $\\pi:Y\\rightarrow X$\nis a~fibered manifold and $\\lambda\\in\\Omega_{n,X}^{r}W$ is a Lagrangian\non $W^{r}\\subset J^{r}Y$, induces an \\emph{invariant} principal component\n$\\Theta_{\\lambda}$ \\eqref{eq:PoiCar} of a~Lepage equivalent of\n$\\lambda$, with respect to fibered chart transformations on $W$.\nWe call the $n$-form $\\Lambda_{\\lambda}$ on $W^{2r-1}$,\n\\begin{equation}\n\\Lambda_{\\lambda}=\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}(-1)^{n-j}i_{d_{n}}\\ldots i_{d_{j+1}}i_{d_{j-1}}\\ldots i_{d_{1}}\\Theta_{\\lambda},\\label{eq:Caratheodory-rth}\n\\end{equation}\nwhere $\\Theta_{\\lambda}$ is given by \\eqref{eq:PoiCar}, the \\emph{Carath\\'eodory\nform} associated to the Lagrangian $\\lambda\\in\\Omega_{n,X}^{r}W$.\n\nIn a~fibered chart $(V,\\psi)$, $\\psi=(x^{i},y^{\\sigma})$, on $W$,\n$\\Lambda_{\\lambda}$ has the expression\n\n\\begin{align}\n\\Lambda_{\\lambda} & =\\frac{1}{\\mathscr{L}^{n-1}}\\bigwedge_{j=1}^{n}\\left(\\mathscr{L}dx^{j}+\\sum_{s=0}^{r-1}\\left(\\sum_{k=0}^{r-1-s}(-1)^{k}d_{p_{1}}\\ldots d_{p_{k}}\\frac{\\partial\\mathscr{L}}{\\partial y_{i_{1}\\ldots i_{s}p_{1}\\ldots p_{k}j}^{\\sigma}}\\right)\\omega_{i_{1}\\ldots i_{s}}^{\\sigma}\\right).\\label{eq:HOCaratheodoryExpression}\n\\end{align}\n\n\n\\section{Example: The Carath\\'eodory equivalent of the Hilbert Lagrangian}\n\nConsider a~fibered manifold $\\mathrm{Met}X$ of metric fields over\nan $n$-dimensional manifold $X$ (see \\cite{Volna} for the geometry of\n$\\mathrm{Met}X$). In a~chart $(U,\\varphi)$, $\\varphi=(x^{i})$,\non $X$, a~section $g:X\\supset U\\rightarrow\\mathrm{Met}X$ is expressed\nby $g=g_{ij}dx^{i}\\otimes dx^{j}$, where $g_{ij}$ is symmetric and\nregular at every point $x\\in U$. 
The induced fibered chart on the second\njet prolongation $J^{2}\\mathrm{Met}X$ reads $(V,\\psi)$, $\\psi=(x^{i},g_{jk},g_{jk,l},g_{jk,lm})$.\n\nThe \\emph{Hilbert Lagrangian} is an odd-base $n$-form defined on\n$J^{2}\\mathrm{Met}X$ by \n\\begin{equation}\n\\lambda=\\mathscr{R}\\omega_{0},\\label{eq:Hilbert}\n\\end{equation}\nwhere $\\mathscr{R}=R\\sqrt{|\\det(g_{ij})|}$, $R=R(g_{ij},g_{ij,k},g_{ij,kl})$\nis the \\emph{scalar curvature} on $J^{2}\\mathrm{Met}X$, and $\\mu=\\sqrt{|\\det(g_{ij})|}\\omega_{0}$\nis the \\emph{Riemann volume element}.\n\nThe principal Lepage equivalent of $\\lambda$ \\eqref{eq:Hilbert}\n(cf. formula \\eqref{eq:Poincare-Cartan-2ndOrder}) reads\n\\begin{equation}\n\\Theta_{\\lambda}=\\mathscr{R}\\omega_{0}+\\left(\\left(\\frac{\\partial\\mathscr{R}}{\\partial y_{j}^{\\sigma}}-d_{p}\\frac{\\partial\\mathscr{R}}{\\partial y_{pj}^{\\sigma}}\\right)\\omega^{\\sigma}+\\frac{\\partial\\mathscr{R}}{\\partial y_{ij}^{\\sigma}}\\omega_{i}^{\\sigma}\\right)\\wedge\\omega_{j},\\label{eq:FLE-Hilbert}\n\\end{equation}\nand it is a~globally defined $n$-form on $J^{1}\\mathrm{Met}X$.\nFormula \\eqref{eq:FLE-Hilbert} was used for the analysis of the structure of the Einstein\nequations as a~system of \\emph{first-order} partial differential\nequations (see \\cite{KruStep}). 
Another Lepage equivalent of a~second-order\nLagrangian in field theory, which could be studied in this context,\nis given by \\eqref{eq:2ndCaratheodoryExpression},\n\\begin{align*}\n & \\Lambda_{\\lambda}=\\frac{1}{\\mathscr{R}^{n-1}}\\bigwedge_{k=1}^{n}\\left(\\mathscr{R}dx^{k}+\\left(\\frac{\\partial\\mathscr{R}}{\\partial g_{ij,k}}-d_{l}\\frac{\\partial\\mathscr{R}}{\\partial g_{ij,kl}}\\right)\\omega_{ij}+\\frac{\\partial\\mathscr{R}}{\\partial g_{ij,kl}}\\omega_{ij,l}\\right),\n\\end{align*}\nwhere $\\omega_{ij}=dg_{ij}-g_{ij,s}dx^{s}$, $\\omega_{ij,l}=dg_{ij,l}-g_{ij,ls}dx^{s}$.\nUsing the chart expression of the scalar curvature and abbreviating $\\sqrt{g}=\\sqrt{|\\det(g_{ij})|}$, we obtain\n\\begin{align*}\n & \\Lambda_{\\lambda}=\\frac{1}{\\mathscr{R}^{n-1}}\\bigwedge_{k=1}^{n}\\biggl(\\mathscr{R}dx^{k}+\\frac{1}{2}\\sqrt{g}\\left(g^{qp}g^{si}g^{jk}-2g^{sq}g^{pi}g^{jk}+g^{pi}g^{qj}g^{sk}\\right)g_{pq,s}\\omega_{ij}\\\\\n & \\qquad\\qquad\\qquad\\qquad+\\sqrt{g}\\left(g^{il}g^{kj}-g^{kl}g^{ji}\\right)\\omega_{ij,l}\\biggr).\n\\end{align*}\nThis is the Carath\\'eodory equivalent of the Hilbert Lagrangian.\n\n\\section*{Acknowledgements}\n\nThis work has been completed thanks to the financial support provided\nto the VSB-Technical University of Ostrava by the Czech Ministry of\nEducation, Youth and Sports from the budget for the conceptual development\nof science, research, and innovations for the year 2020, Project No.\nIP2300031.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n
\\section{Introduction}\n With the availability of large collections of unlabeled data, recent work has led to significant advances in self-supervised learning. In particular, contrastive methods have been tremendously successful in learning representations for visual and sequential data \\citep{logeswaran2018efficient,wu2018unsupervised,oord2018representation,henaff2020data,tian2019contrastive,hjelm2018learning,bachman2019learning,he2019momentum,chen2020simple,schneider2019wav2vec,Baevski2020vqwav2vec,baevski2020wav2vec,ravanelli2020multi}. %\n While a number of explanations have been provided as to why contrastive learning leads to such informative representations, existing theoretical predictions and empirical observations appear to be at odds with each other~\\citep{tian2019contrastive,bachman2019learning,wu2020importance,saunshi2019theoretical}. \n \n In a nutshell, contrastive methods aim to learn representations where related samples are aligned (positive pairs, e.g.\n
augmentations of the same image), while unrelated samples are separated (negative pairs)~\\citep{chen2020simple}.\n Intuitively, this leads to invariance to irrelevant details or transformations (by decreasing the distance between positive pairs), while preserving a sufficient amount of information about the input for solving downstream tasks (by increasing the distance between negative pairs)~\\citep{tian2020makes}.\n This intuition has recently been made more precise by \\citet{wang2020understanding}, who show that a commonly used contrastive loss from the InfoNCE family~\\citep{Gutmann12JMLR, oord2018representation, chen2020simple} asymptotically converges to a sum of two losses: an \\emph{alignment} loss that pulls together the representations of positive pairs, and a \\emph{uniformity} loss that maximizes the entropy of the learned latent distribution.\n \n \\begin{figure*}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figures\/figure_1_reduced.pdf}\n \\caption{We analyze the setup of contrastive learning, in which a feature encoder $f$ is trained with the InfoNCE objective \\citep{Gutmann12JMLR, oord2018representation, chen2020simple} using positive samples (green) and negative samples (orange). We assume the observations are generated by an (unknown) injective generative model $g$ that maps unobservable latent variables from a hypersphere to observations in another manifold. Under these assumptions, the feature encoder $f$ implicitly learns to invert the ground-truth generative process $g$ up to linear transformations, i.e., $f = \\mathbf{A} g^{-1}$ with an orthogonal matrix $\\mathbf{A}$, if $f$ minimizes the InfoNCE objective.}\n \\label{fig:header_figure}\n \\end{figure*}\n \n We show that an encoder learned with a contrastive loss from the InfoNCE family can recover the true generative factors of variation (up to rotations) if the process that generated the data fulfills a few weak statistical assumptions.\n
This theory bridges the gap between contrastive learning, nonlinear independent component analysis (ICA) and generative modeling (see Fig.~\\ref{fig:header_figure}).\n Our theory reveals implicit assumptions encoded in the InfoNCE objective about the generative process underlying the data. If these assumptions are violated, we show a principled way of deriving alternative contrastive objectives based on assumptions regarding the positive pair distribution.\n We verify our theoretical findings with controlled experiments, providing evidence that our theory holds true in practice, even if the assumptions on the ground-truth generative model are partially violated. %\n \n To the best of our knowledge, our work is the first to analyze under what circumstances representation learning methods used in practice provably represent the data in terms of its underlying factors of variation. Our theoretical and empirical results suggest that the success of contrastive learning in many practical applications is due to an implicit and approximate inversion of the data generating process, which explains why the learned representations are useful in a wide range of downstream tasks.\n \n In summary, our contributions are:\n \\begin{itemize}\n \\item We establish a theoretical connection between the InfoNCE family of objectives, which is commonly used in self-supervised learning, and nonlinear ICA. We show that training with InfoNCE inverts the data generating process if certain statistical assumptions on the data generating process hold.\n \\item We empirically verify our predictions when the assumed theoretical conditions are fulfilled. In addition, we show successful inversion of the data generating process even if these theoretical assumptions are partially violated. 
%\n \\item We build on top of the CLEVR rendering pipeline~\\citep{johnson2017clevr} to generate a more visually complex disentanglement benchmark, called \\emph{3DIdent}, that contains hallmarks of natural environments (shadows, different lighting conditions, a 3D object, etc.). We demonstrate that a contrastive loss derived from our theoretical framework can identify the ground-truth factors of such complex, high-resolution images.\n \\end{itemize}\n \n\\section{Related Work}\n\\paragraph{Contrastive Learning}\n Despite the success of contrastive learning (CL), our understanding of the learned representations remains limited, as existing theoretical explanations yield partially contradictory predictions. One way to theoretically motivate CL is to refer to the InfoMax principle \\citep{linsker1988self}, which corresponds to maximizing the mutual information (MI) between different views \\citep{oord2018representation, bachman2019learning, hjelm2018learning, chen2020simple, tian2020makes}. However, as optimizing a tighter bound on the MI can produce worse representations \\citep{tschannen2019mutual}, it is not clear how accurately this motivation describes the behavior of CL.\n \n Another approach aims to explain the success of CL by introducing latent classes \\citep{saunshi2019theoretical}. While this theory has some appeal, there is a gap between its predictions and empirical observations: e.g., its prediction that an excessive number of negative samples decreases performance is not corroborated by empirical results~\\citep{wu2018unsupervised,tian2019contrastive,he2019momentum,chen2020simple}. However, recent work has provided some empirical evidence for this prediction, identifying issues with the commonly used sampling strategy for negative samples and proposing ways to mitigate them~\\citep{robinson2020contrastive, chuang2020debiased}.\n
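The negative-sampling issue mentioned above (negatives drawn from the marginal can collide with the anchor's latent class, producing false negatives) is easy to quantify in a toy model. The sketch below is illustrative only; the class count $C$ and negative count $M$ are arbitrary choices, not values from the cited works.

```python
import random

def false_negative_rate(num_classes: int, num_negatives: int, trials: int = 20000) -> float:
    """Empirical probability that at least one of M uniformly drawn
    negatives shares the anchor's latent class (a 'false negative')."""
    random.seed(0)  # deterministic for reproducibility
    hits = 0
    for _ in range(trials):
        anchor = random.randrange(num_classes)
        if any(random.randrange(num_classes) == anchor for _ in range(num_negatives)):
            hits += 1
    return hits / trials

# Closed form: P(collision) = 1 - (1 - 1/C)^M.
C, M = 100, 256
analytic = 1.0 - (1.0 - 1.0 / C) ** M
print(f"analytic : {analytic:.3f}")            # ~0.92 for C=100, M=256
print(f"empirical: {false_negative_rate(C, M):.3f}")
```

In this regime the closed form $1-(1-1/C)^{M}$ already exceeds $0.9$, i.e., almost every batch contains at least one false negative.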
\n \n \n More recently, the behavior of CL has been analyzed from the perspective of \\emph{alignment} and \\emph{uniformity} properties of representations, demonstrating that these two properties are correlated with downstream performance~\\citep{wang2020understanding}.\n We build on these results to make a connection to cross-entropy minimization from which we can derive identifiability results.%\n\n\\paragraph{Nonlinear ICA}\n Independent Components Analysis (ICA) attempts to find the underlying sources for multidimensional data. In the nonlinear case, said sources correspond to a well-defined nonlinear generative model $g$, which is assumed to be invertible (i.e., injective)~\\citep{Hyvabook,Jutten10}. In other words, nonlinear ICA solves a demixing problem:\n Given observed data $\\mathbf{x} = g(\\mathbf{z})$, it aims to find a model $f$ that equals the inverse generative model $g^{-1}$, which allows for the original sources $\\mathbf{z}$ to be recovered.\n \n \\citet{hyvarinen2018nonlinear} show that the nonlinear demixing problem can be solved as long as the independent components are conditionally mutually independent with respect to some auxiliary variable. The authors further provide practical estimation methods for solving the nonlinear ICA problem~\\citep{hyvarinen2016unsupervised,hyvarinen2017nonlinear}, similar in spirit to noise contrastive estimation (NCE; \\citealp{Gutmann12JMLR}). Recent work has generalized this contribution to VAEs~\\citep{khemakhem2020variational,locatello2020weakly,klindt2020slowvae}, as well as (invertible-by-construction) energy-based models~\\citep{khemakhem2020ice}. We here extend this line of work to more general feed-forward networks trained using InfoNCE~\\citep{oord2018representation}.\n \n In a similar vein, \\citet{roeder2020linear} build on the work of \\citet{hyvarinen2018nonlinear} to show that for a model family which includes InfoNCE, distribution matching implies parameter matching. 
In contrast, we associate the learned latent representation with the ground-truth generative factors, showing under what conditions the data generating process is inverted, and thus, the true latent factors are recovered.\n \n\\section{Theory}\n \n We will show a connection between contrastive learning and identifiability in the form of nonlinear ICA. For this, we introduce a feature encoder $f$ that maps observations $\\mathbf{x}$ to representations.\n We consider the widely used \\emph{InfoNCE} loss, which often assumes $L^2$ normalized representations \\citep{wu2018unsupervised, he2020momentum, tian2019contrastive, bachman2019learning,chen2020simple},\n \\begin{align} \\label{eq:contrastive_loss}\n &\\mathcal{L}_\\mathsf{contr}(f; \\tau, M) \\quad := \\\\ \n &\\underset{\\substack{\n ({\\bf{x}}, {\\bf{\\tilde x}}) \\sim p_\\mathsf{pos} \\\\\n \\{\\mathbf{x}^-_i\\}_{i=1}^M \\overset{\\text{i.i.d.}}{\\sim} p_\\mathsf{data}\n }}{\\mathbb{E}} \\left[\\, {- \\log \\frac{e^{f(\\mathbf{x})^{\\mathsf{T}} f({\\bf{\\tilde x}}) \/ \\tau }}{e^{f(\\mathbf{x})^{\\mathsf{T}} f({\\bf{\\tilde x}}) \/ \\tau } + \\sum\\limits_{i=1}^M e^{f(\\mathbf{x}^-_i)^{\\mathsf{T}} f({\\bf{\\tilde x}}) \/ \\tau }}}\\,\\right]. \\nonumber\n \\end{align}\n Here $M\\in\\mathbb{Z}_+$ is a fixed number of negative samples, $p_{\\rm{data}}$ is the distribution of all observations and $p_{\\rm{pos}}$ is the distribution of positive pairs.\n This loss was motivated by the InfoMax principle \\citep{linsker1988self}, and has been shown\n to be effective by many recent representation learning methods \\citep{logeswaran2018efficient,wu2018unsupervised,tian2019contrastive,he2019momentum,hjelm2018learning,bachman2019learning,chen2020simple,baevski2020wav2vec}. Our theoretical results also hold for a loss function whose denominator only consists of the second summand across the negative samples (e.g., the SimCLR loss \\citep{chen2020simple}). 
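To make Eq.~\eqref{eq:contrastive_loss} concrete, the sketch below is a minimal NumPy rendering of $\mathcal{L}_\mathsf{contr}$ for already-encoded, $L^2$-normalized features; the feature dimension, temperature, and batch sizes are illustrative choices, not values from this paper.

```python
import numpy as np

def info_nce(f_x: np.ndarray, f_pos: np.ndarray, f_neg: np.ndarray, tau: float = 0.1) -> float:
    """InfoNCE loss for one batch of already-encoded features.

    f_x:   (B, D) encoded anchors f(x), assumed L2-normalized
    f_pos: (B, D) encoded positives f(x~), one per anchor
    f_neg: (M, D) encoded negatives f(x_i^-), shared across the batch
    """
    pos_logits = np.sum(f_x * f_pos, axis=1) / tau             # (B,)   f(x)^T f(x~)/tau
    neg_logits = f_neg @ f_pos.T / tau                         # (M, B) f(x_i^-)^T f(x~)/tau
    # -log( e^{pos} / (e^{pos} + sum_i e^{neg_i}) ), computed stably via logsumexp
    all_logits = np.vstack([pos_logits[None, :], neg_logits])  # (M+1, B)
    m = all_logits.max(axis=0)
    log_denom = m + np.log(np.exp(all_logits - m).sum(axis=0))
    return float(np.mean(log_denom - pos_logits))

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
anchors = normalize(rng.normal(size=(8, 16)))
negatives = normalize(rng.normal(size=(32, 16)))
loss_aligned = info_nce(anchors, anchors.copy(), negatives)               # perfect positives
loss_random = info_nce(anchors, normalize(rng.normal(size=(8, 16))), negatives)
print(loss_aligned, loss_random)  # aligned positives give the smaller loss
```

Dropping the first summand of the denominator yields the SimCLR-style variant mentioned above; the loss is nonnegative by construction, since the numerator is one term of the denominator.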
%\n \n In the spirit of existing literature on nonlinear ICA \\cite{hyvarinen1999nonlinear, harmeling2003kernel,sprekeler2014extension,hyvarinen2016unsupervised,hyvarinen2017nonlinear, Gutmann12JMLR, hyvarinen2018nonlinear, khemakhem2020variational}, we assume that the observations $\\mathbf{x} \\in \\mathcal{X}$ are generated by an invertible (i.e., injective) generative process $g: \\mathcal{Z} \\to \\mathcal{X}$, where $\\mathcal{X} \\subseteq \\mathbb{R}^K$ is the space of observations and $\\mathcal{Z} \\subseteq \\mathbb{R}^N$ with $N\\leq K$ denotes the space of latent factors. Influenced by the commonly used feature normalization in InfoNCE, we further assume that $\\mathcal{Z}$ is the unit hypersphere $\\mathbb{S}^{N-1}$ (see Appx.~\\ref{apx:gt_assumptions}).\n Additionally, we assume that the ground-truth marginal distribution of the latents of the generative process is uniform and that the conditional distribution (under which positive pairs have high density) is a von Mises-Fisher (vMF) distribution:\n \\begin{align} \\label{eq:vmf_conditional}\n p({\\bf{z}}) &= |\\mathcal{Z}|^{-1}, \\quad\\quad p({\\bf{z}}|{\\tilde{\\mathbf{z}}}) = C_p^{-1} e^{\\kappa {\\bf{z}}^\\top {\\tilde{\\mathbf{z}}}} \\quad \\text{with} \\\\ C_p :&= \\int e^{\\kappa {\\bf{z}}^\\top {\\tilde{\\mathbf{z}}}} \\,\\d{\\tilde{\\mathbf{z}}} = \\text{const.}, \\quad \\mathbf{x} = g({\\bf{z}}), \\quad {\\tilde{\\mathbf{x}}} = g({\\tilde{\\mathbf{z}}}). \\nonumber\n \\end{align}\n \n Given these assumptions, we will show that if $f$ minimizes the contrastive loss $\\mathcal{L}_\\mathsf{contr}$, then $f$ solves the demixing problem, i.e., inverts $g$ up to orthogonal linear transformations. %\n \n Our theoretical approach consists of three steps:\n (1) We demonstrate that $\\mathcal{L}_\\mathsf{contr}$ can be interpreted as the cross-entropy between the (conditional) ground-truth and inferred latent distribution. 
%\n (2) Next, we show that encoders minimizing $\\mathcal{L}_\\mathsf{contr}$ maintain distance, i.e., two latent vectors with distance $\\alpha$ in the ground-truth generative model are mapped to points with the same distance $\\alpha$ in the inferred representation. %\n (3) Finally, we leverage distance preservation to show that minimizers of $\\mathcal{L}_\\mathsf{contr}$ invert the generative process up to orthogonal transformations.\n Detailed proofs are given in Appx.~\\ref{apx:proofs}.\n \n Additionally, we will present similar results for general convex bodies in $\\mathbb{R}^N$ and more general similarity measures; see Sec.~\\ref{sec:extension_rn}. The corresponding detailed proofs are given in Appx.~\\ref{apx:rn_extension}. %\n \n\\subsection{Contrastive learning is related to cross-entropy minimization}\n From the perspective of nonlinear ICA, we are interested in understanding how the representations $f({\\bf{x}})$ which minimize the contrastive loss $\\mathcal{L}_\\mathsf{contr}$ (defined in Eq.~\\eqref{eq:contrastive_loss}) are related to the ground-truth source signals ${\\bf{z}}$. To study this relationship, we focus on the map $h = f\\circ g$ between the recovered source signals $h({\\bf{z}})$ and the true source signals ${\\bf{z}}$. Note that this is merely for mathematical convenience; it does not require knowledge of either $g$ or the ground-truth factors during learning (beyond the assumptions stated in the theorems).\n \n A core insight is a connection between the contrastive loss and the cross-entropy between the ground-truth latent distribution and a certain model distribution. 
For this, we expand the theoretical results obtained by \\citet{wang2020understanding}:\n \\vspace{\\topsep}\n \\begin{customtheorem}{\\ref*{thm:extended_asym_inf_negatives_CE}}[$\\mathcal{L}_\\mathsf{contr}$ converges to the cross-entropy between latent distributions] \\label{thm:asym_inf_negatives_CE}\n If the ground-truth marginal distribution $p$ is uniform, then for fixed $\\tau > 0$, as the number of negative samples $M \\rightarrow \\infty$, the (normalized) contrastive loss converges to\n \\begin{equations}\n \\lim_{M \\rightarrow \\infty} \\mathcal{L}_\\mathsf{contr}(f; \\tau, M) - \\log M + \\log |\\mathcal{Z}| = \\\\ \\expectunder{{\\bf{z}} \\sim p({\\bf{z}})}{H(p(\\cdot | {\\bf{z}}), q_h(\\cdot | {\\bf{z}}))}\n \\label{eq:contrastive_loss_CE_limit}\n \\end{equations}\n where $H$ is the cross-entropy between the ground-truth conditional distribution $p$ over positive pairs and a conditional distribution $q_{\\rm{h}}$ parameterized by the model $f$,\n \\begin{equations} \\label{eq:qhjoint}\n q_{\\rm{h}}({\\tilde{\\mathbf{z}}}|{\\bf{z}}) &= C_h(\\mathbf{z})^{-1} e^{h({\\tilde{\\mathbf{z}}})\\T h(\\mathbf{z}) \/\\tau} \\\\ \\text{with} \\quad C_h(\\mathbf{z}) :&= \\int e^{h({\\tilde{\\mathbf{z}}})\\T h(\\mathbf{z}) \/\\tau} \\,\\d{\\tilde{\\mathbf{z}}},\n \\end{equations}\n where $C_h({\\bf{z}})\\in\\mathbb{R}^{+}$ is the partition function of $q_{\\rm{h}}$ (see Appx.~\\ref{apx:model_assumptions}).\n \\end{customtheorem}\n \n Next, we show that the minimizers $h^{*}$ of the cross-entropy~(\\ref{eq:qhjoint}) are isometries in the sense that $\\kappa {\\bf{z}}^\\top{\\tilde{\\mathbf{z}}} = h^{*}({\\bf{z}})^\\top h^{*}({\\tilde{\\mathbf{z}}})$ for all ${\\bf{z}}$ and ${\\tilde{\\mathbf{z}}}$. In other words, they preserve the dot product between ${\\bf{z}}$ and ${\\tilde{\\mathbf{z}}}$. 
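Before stating this formally, the property is easy to verify numerically in one direction: any map of the form $h(\mathbf{z}) = \sqrt{\kappa}\,\mathbf{Q}\mathbf{z}$ with orthogonal $\mathbf{Q}$ (taking $\tau = 1$, so that $h$ maps onto a sphere of radius $\sqrt{\kappa}$) preserves the dot product up to the factor $\kappa$. The snippet below is purely illustrative; the random orthogonal matrix is an assumption of this toy example, not part of any estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, kappa = 10, 5.0

# Random orthogonal matrix Q via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))

def h(z):
    # Maps the unit sphere onto a sphere of radius sqrt(kappa).
    return np.sqrt(kappa) * (Q @ z)

# Two random points on the unit sphere S^{N-1}.
z = rng.normal(size=N)
z /= np.linalg.norm(z)
zt = rng.normal(size=N)
zt /= np.linalg.norm(zt)

# h preserves the dot product up to the factor kappa:
lhs, rhs = h(z) @ h(zt), kappa * (z @ zt)
```

The converse direction, that every minimizer of the cross-entropy necessarily has this form, is the content of the propositions below.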
%\n \\vspace{\\topsep}\n \\begin{customproposition}{\\ref*{prop:extended_correct_model_ce_isometry}}[Minimizers of the cross-entropy maintain the dot product] \\label{prop:correct_model_ce_isometry}\n Let $\\mathcal{Z} = \\mathbb{S}^{N-1}$, $\\tau > 0$ and consider the ground-truth conditional distribution of the form $p({\\tilde{\\mathbf{z}}} | {\\bf{z}}) = C_p^{-1} \\exp(\\kappa {\\tilde{\\mathbf{z}}}^\\top \\mathbf{z})$. Let $h$ map onto a hypersphere with radius $\\sqrt{\\tau \\kappa}$.\\footnote{Note that in practice this can be implemented as a learnable rescaling operation as the last operation of the network $f$.} Consider the conditional distribution $q_h$ parameterized by the model, as defined above in Theorem~\\ref{thm:asym_inf_negatives_CE}, where the hypothesis class for $h$ (and thus $f$) is assumed to be sufficiently flexible such that $p({\\tilde{\\mathbf{z}}} | \\mathbf{z})$ and $q_{\\rm{h}}({\\tilde{\\mathbf{z}}}|\\mathbf{z})$ can match.\n If $h$ is a minimizer of the cross-entropy $\\mathbb{E}_{p({\\tilde{\\mathbf{z}}} | \\mathbf{z})}[- \\log q_{\\rm{h}}({\\tilde{\\mathbf{z}}} | \\mathbf{z})]$, then $p({\\tilde{\\mathbf{z}}}|\\mathbf{z}) = q_{\\rm{h}}({\\tilde{\\mathbf{z}}} | \\mathbf{z})$ and $\\forall {\\bf{z}}, {\\tilde{\\mathbf{z}}}: \\kappa {\\bf{z}}^\\top{\\tilde{\\mathbf{z}}} = h({\\bf{z}})^\\top h({\\tilde{\\mathbf{z}}})$.\n \\end{customproposition}\n\n\n\\subsection{Contrastive learning identifies ground-truth factors on the hypersphere}\n From the strong geometric property of isometry, we can now deduce a key property of the minimizers $h^*$: %\n \\vspace{\\topsep}\n \\begin{customproposition}{\\ref*{prop:extended_mazurulamspheres}}[Extension of the Mazur-Ulam theorem to hyperspheres and the dot product]\n \\label{prop:mazurulamspheres}\n Let $\\mathcal{Z} = \\mathbb{S}^{N-1}$. 
If $h: \\mathcal{Z} \\to \\mathcal{Z}$ maintains the dot product up to a constant factor, i.e., $\\forall {\\bf{z}}, {\\tilde{\\mathbf{z}}} \\in \\mathcal{Z}: \\kappa {\\bf{z}}^\\top {\\tilde{\\mathbf{z}}} = h({\\bf{z}})^\\top h({\\tilde{\\mathbf{z}}})$, then $h$ is an orthogonal linear transformation.\n \\end{customproposition}\n\n In the last step, we combine the previous propositions to derive our main result: the minimizers of the contrastive loss $\\mathcal{L}_\\mathsf{contr}$ solve the demixing problem of nonlinear ICA up to linear transformations, i.e., they identify the original sources ${\\bf{z}}$ for observations $g({\\bf{z}})$ up to orthogonal linear transformations. For a hyperspherical space $\\mathcal{Z}$ these correspond to combinations of permutations, rotations and sign flips.\n \\vspace{\\topsep}\n \\begin{customtheorem}{\\ref*{thm:extended_ident_matching}}\\label{thm:ident_matching}\n Let $\\mathcal{Z} = \\mathbb{S}^{N-1}$, the ground-truth marginal be uniform, and the conditional a vMF distribution (cf. Eq.~\\ref{eq:vmf_conditional}). Let the mixing function $g$ be differentiable and injective. If the assumed form of $q_{\\rm{h}}$, as defined above, matches that of $p$, i.e., both are based on the same metric, and if $f$ is differentiable and minimizes the CL loss as defined in Eq.~\\eqref{eq:contrastive_loss}, then for fixed $\\tau > 0$ and $M\\to\\infty$, $h = f \\circ g$ is linear, i.e., $f$ recovers the latent sources up to orthogonal linear transformations.\n \\end{customtheorem}\n Note that we do not assume knowledge of the ground-truth generative model $g$; we only make assumptions about the conditional and marginal distribution of the latents.\n On real data, it is unlikely that the assumed model distribution $q_{\\rm{h}}$ can exactly match the ground-truth conditional. 
We do, however, \n provide empirical evidence that $h$ is still an affine transformation even if there is a severe mismatch; see Sec.~\\ref{sec:experiments}.\n \n \n\\subsection{Contrastive learning identifies ground-truth factors on convex bodies in \\texorpdfstring{$\\mathbb{R}^N$}{RN}} \\label{sec:extension_rn}\n While the previous theoretical results require $\\mathcal{Z}$ to be a hypersphere, we will now show a similar theorem for the more general case of $\\mathcal{Z}$ being a convex body in $\\mathbb{R}^N$. Note that the hyperrectangle $[a_1, b_1] \\times \\ldots \\times [a_N, b_N]$ is an example of such a convex body.\n \n We follow a three-step proof strategy similar to that for the hyperspherical case above:\n (1) We begin again by showing that a properly chosen contrastive loss on convex bodies corresponds to the cross-entropy between the ground-truth conditional and a distribution parameterized by the encoder. For this step, we additionally extend the results of \\citet{wang2020understanding} to this latent space and loss function.\n (2) Next, we derive that minimizers of the loss function are isometries of the latent space. Importantly, we do not limit ourselves to a specific metric; thus, the result is applicable to a family of contrastive objectives.\n (3) Finally, we show that these minimizers must be affine transformations.\n For a special family of conditional distributions (rotationally asymmetric generalized normal distributions~\\citep{subbotin1923law}), we can further narrow the class of solutions to permutations and sign-flips. %\n For the detailed proofs, see Appx.~\\ref{apx:rn_extension}. \n \n As earlier, we assume that the ground-truth marginal distribution of the latents is uniform. 
However, we now assume that the conditional distribution is exponential:\n \\begin{equations} \\label{eq:rn_conditional}\n p({\\bf{z}}) &= |\\mathcal{Z}|^{-1}, \\quad\\quad p({\\tilde{\\mathbf{z}}}|{\\bf{z}}) = C_p({\\bf{z}})^{-1} e^{- \\delta({\\bf{z}}, {\\tilde{\\mathbf{z}}})} \\quad \\text{with} \\\\ C_p({\\bf{z}}) :&= \\int e^{-\\delta({\\bf{z}}, {\\tilde{\\mathbf{z}}})} \\,\\d{\\tilde{\\mathbf{z}}}, \\quad \\mathbf{x} = g({\\bf{z}}), \\quad {\\tilde{\\mathbf{x}}} = g({\\tilde{\\mathbf{z}}}),\n \\end{equations}\n where $\\delta$ is a metric induced by a norm (see Appx.~\\ref{apx:rn_gt_assumptions}).\n \n To reflect the differences between this conditional distribution and the one assumed for the hyperspherical case, we need to introduce an adjusted version of the contrastive loss in \\eqref{eq:contrastive_loss}: \n \\begin{definition}[$\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}$ objective] \\label{def:delta_contrastive_loss}\n Let $\\delta: \\mathcal{Z} \\times \\mathcal{Z} \\to \\mathbb{R}$ be a metric on $\\mathcal{Z}$. We define the general InfoNCE loss, which uses $\\delta$ as a similarity measure, as\n \n \\begin{align} \\label{eq:delta_contrastive_loss}\n &\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}(f; \\tau, M) \\quad :=\\\\\n &\\underset{\\substack{\n ({\\bf{x}}, {\\bf{\\tilde x}}) \\sim p_\\mathsf{pos} \\\\\n \\{\\mathbf{x}^-_i\\}_{i=1}^M \\overset{\\text{i.i.d.}}{\\sim} p_\\mathsf{data}\n }}{\\mathbb{E}} \\hspace{-1em}\\Bigg[\n {- \\log \\frac{e^{-\\delta(f(\\mathbf{x}), f({\\bf{\\tilde x}})) \/ \\tau }}{e^{-\\delta(f(\\mathbf{x}), f({\\bf{\\tilde x}})) \/ \\tau } \\hspace{-.3em}+\\hspace{-.3em} \\sum\\limits_{i=1}^M e^{-\\delta(f(\\mathbf{x}^-_i), f({\\bf{\\tilde x}})) \/ \\tau }}}\\,\\Bigg]. \\nonumber\n \\end{align}\n \\end{definition}\n Note that this is a generalization of the InfoNCE criterion in Eq.~(\\ref{eq:contrastive_loss}). 
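A minimal NumPy sketch of this objective, instantiating $\delta$ with the $L^1$ metric (one member of the family covered by the definition; the choice of $L^1$ here is ours, purely for illustration):

```python
import numpy as np

def l1_metric(a, b):
    # delta(a, b): L^1 distance, broadcasting over leading axes.
    return np.sum(np.abs(a - b), axis=-1)

def delta_info_nce(f_x, f_pos, f_neg, delta=l1_metric, tau=1.0):
    """L_{delta-contr}: negative distances -delta(., .)/tau replace the
    dot-product similarities of the standard InfoNCE objective."""
    pos_logit = -delta(f_x, f_pos) / tau
    neg_logits = -delta(f_neg, f_pos) / tau  # shape (M,)
    logits = np.concatenate(([pos_logit], neg_logits))
    m = logits.max()
    return -pos_logit + m + np.log(np.sum(np.exp(logits - m)))

# Example: identical positive pair in the box [0, 1]^5 with M = 4 negatives.
rng = np.random.default_rng(1)
f_x = rng.uniform(size=5)
f_neg = rng.uniform(size=(4, 5))
loss = delta_info_nce(f_x, f_x, f_neg)
```

Because the positive pair has distance zero, its logit is maximal and the loss is bounded by $\log(M+1)$ in this example.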
In contrast to the objective above, the representations are no longer assumed to be $L^2$ normalized, and the dot product is replaced with a more general similarity measure $\\delta$.\n \n Analogous to the previously demonstrated case for the hypersphere, for convex bodies $\\mathcal{Z}$, minimizers of the adjusted $\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}$ objective solve the demixing problem of nonlinear ICA up to invertible linear transformations:\n \\begin{customtheorem}{\\ref*{thm:extended_rn_linear_identifiable}} \\label{thm:rn_linear_identifiable}\n Let $\\mathcal{Z}$ be a convex body in $\\mathbb{R}^N$, $h = f\\circ g:\\mathcal{Z}\\to\\mathcal{Z}$, and $\\delta$ be a metric or a semi-metric (cf. Lemma~\\ref{lem:semimetric} in Appx.~\\ref{apx:rn_ce_min_identifiability}), induced by a norm. Further, let the ground-truth marginal distribution be uniform and the conditional distribution be as in Eq.~\\eqref{eq:rn_conditional}. Let the mixing function $g$ be differentiable and injective. If the assumed form of $q_{\\rm{h}}$ matches that of $p$, i.e., \n \\begin{equations}\n q_{\\rm{h}}({\\tilde{\\mathbf{z}}}|{\\bf{z}}) &= C_q(\\mathbf{z})^{-1}e^{-\\delta(h({\\tilde{\\mathbf{z}}}), h(\\mathbf{z}))\/\\tau}\\quad \\\\ \\text{with} \\quad C_q(\\mathbf{z}) :&= \\int e^{-\\delta(h({\\tilde{\\mathbf{z}}}), h(\\mathbf{z}))\/\\tau} \\,\\d{\\tilde{\\mathbf{z}}},\n \\end{equations}\n and if $f$ is differentiable and minimizes the $\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}$ objective in Eq.~\\eqref{eq:delta_contrastive_loss} for $M \\to \\infty$, we find that $h = f \\circ g$ is invertible and affine, i.e., we recover the latent sources up to affine transformations.\n \\end{customtheorem}\n Note that the model distribution $q_{\\rm{h}}$, which is implicitly described by the choice of the objective, must be of the same form as the ground-truth distribution $p$, i.e., both must be based on the same metric. 
Thus, identifying different ground-truth conditional distributions requires different contrastive $\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}$ objectives.\n This result can be seen as a generalized version of Theorem~\\ref{thm:ident_matching}, as it is valid for any convex body $\\mathcal{Z} \\subseteq \\mathbb{R}^N$, allowing for a larger variety of conditional distributions.\n \n Finally, under the mild restriction that the ground-truth conditional distribution is based on an $L^\\alpha$ similarity measure for $\\alpha \\geq 1$, $\\alpha \\neq 2$, $h$ identifies the ground-truth generative factors up to generalized permutations. A generalized permutation matrix $\\bf{A}$ is a combination of a permutation and element-wise sign-flips, i.e., $\\forall {\\bf{z}}: (\\bf{A}{\\bf{z}})_i = s_i {\\bf{z}}_{\\sigma(i)}$ with $s_i = \\pm 1$ and $\\sigma$ being a permutation.\n \\begin{customtheorem}{\\ref*{thm:extended_rn_permutation_identifiable}} \\label{thm:rn_permutation_identifiable}\n Let $\\mathcal{Z}$ be a convex body in $\\mathbb{R}^N$, $h: \\mathcal{Z} \\to \\mathcal{Z}$, and $\\delta$ be an $L^\\alpha$ metric or semi-metric (cf. Lemma~\\ref{lem:semimetric} in Appx.~\\ref{apx:rn_ce_min_identifiability}) for $\\alpha \\geq 1, \\alpha \\neq 2$. Further, let the ground-truth marginal distribution be uniform and the conditional distribution be as in Eq.~\\eqref{eq:rn_conditional}, and let the mixing function $g$ be differentiable and invertible. 
If the assumed form of $q_{\\rm{h}}(\\cdot|{\\bf{z}})$ matches that of $p(\\cdot|{\\bf{z}})$, i.e., both use the same metric $\\delta$ up to a constant scaling factor, and if $f$ is differentiable and minimizes the $\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}$ objective in Eq.~\\eqref{eq:delta_contrastive_loss} for $M \\to \\infty$, we find that $h = f \\circ g$ is a composition of input-independent permutations, sign flips, and rescalings.\n \\end{customtheorem}\n\n\n\\section{Experiments} \\label{sec:experiments}\n\n\\subsection{Validation of theoretical claim} \\label{sec:toy_experiments}\n We validate our theoretical claims under both perfectly matching and violated conditions regarding the ground-truth marginal and conditional distributions. We consider source signals of dimensionality $N=10$, and sample pairs of source signals in two steps: First, we sample from the marginal $p({\\bf{z}})$. For this, we consider both uniform distributions which match our assumptions and non-uniform distributions (e.g., a normal distribution) which violate them. Second, we generate the positive pair by sampling from a conditional distribution $p({\\tilde{\\mathbf{z}}} | {\\bf{z}})$.\n Here, we consider matches with our assumptions on the conditional distribution (von Mises-Fisher for $\\mathcal{Z} = \\mathbb{S}^{N-1}$) as well as violations (e.g., normal, Laplace, or generalized normal distribution for $\\mathcal{Z} = \\mathbb{S}^{N-1}$). 
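The first of these sampling steps can be sketched as follows for the hyperspherical case. Normalizing an isotropic Gaussian gives an exactly uniform marginal on the sphere; for the positive sample we use a simple perturb-and-project step, which only qualitatively mimics a vMF conditional (an exact vMF sampler, e.g. via rejection sampling, would replace it in the actual setup — this shortcut is our simplification for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10

def sample_marginal():
    # Uniform on the unit sphere S^{N-1}: normalize an isotropic Gaussian.
    z = rng.normal(size=N)
    return z / np.linalg.norm(z)

def sample_positive(z, concentration=20.0):
    # Illustrative stand-in for a vMF conditional: perturb z with isotropic
    # noise and project back onto the sphere. Larger `concentration` yields
    # samples closer to z; this mimics, but does not equal, vMF sampling.
    z_tilde = z + rng.normal(size=N) / np.sqrt(concentration)
    return z_tilde / np.linalg.norm(z_tilde)

z = sample_marginal()
z_tilde = sample_positive(z)  # positive partner concentrated around z
```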
Further, we consider spaces beyond the hypersphere, such as the bounded box (which is a convex body) and the unbounded $\\mathbb{R}^N$.\n \n We generate the observations with a multi-layer perceptron (MLP), following previous work~\\citep{hyvarinen2016unsupervised,hyvarinen2017nonlinear}.\n Specifically, we use three hidden layers with leaky ReLU units and random weights; to ensure that the MLP $g$ is invertible, we control the condition number of the weight matrices.\n For our feature encoder $f$, we also use an MLP with leaky ReLU units, where the assumed latent space is determined by the normalization, or lack thereof, applied to the encoding. Namely, for the hypersphere (denoted as \\emph{Sphere}) and the hyperrectangle (denoted as \\emph{Box}) we apply an $L^2$ and $L^\\infty$ normalization, respectively. For flexibility in practice, we parameterize the normalization magnitude of the \\emph{Box}, including it as part of the encoder's learnable parameters. On the hypersphere, we optimize $\\mathcal{L}_\\mathsf{contr}$; on the hyperrectangle and the unbounded space, we optimize $\\mathcal{L}_{\\delta\\text{-}\\mathsf{contr}}$. For further details, see Appx.~\\ref{apx:experiment_details}. %\n \n To test for identifiability up to affine transformations, we fit a linear regression between the ground-truth and recovered sources and report the coefficient of determination ($R^2$). To test for identifiability up to generalized permutations, we leverage the mean correlation coefficient (MCC), as used in previous work~\\citep{hyvarinen2016unsupervised,hyvarinen2017nonlinear}. For further details, see Appx.~\\ref{apx:experiment_details}.\n \n \\begin{table*}[t]\n \\centering\n \\caption{Identifiability up to affine transformations. Mean $\\pm$ standard deviation over $5$ random seeds. Note that only the first row corresponds to a setting that matches (\\cmark) our theoretical assumptions, while the others show results for violated assumptions (\\xmark; see column \\emph{M.}). 
Note that the identity score only depends on the ground-truth space and the marginal distribution defined for the generative process, while the supervised score additionally depends on the space assumed by the model. \n }\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{ccccccccc}\n \\toprule\n \\multicolumn{3}{c}{Generative process $g$} & \\multicolumn{3}{c}{Model $f$} & \\multicolumn{3}{c}{$R^2$ Score [\\%]} \\\\\n Space & $p(\\cdot)$ & $p(\\cdot|\\cdot)$ & Space & $q_{\\rm{h}}(\\cdot|\\cdot)$ & M. & Identity & Supervised & Unsupervised \\\\\n \\midrule\n Sphere & Uniform & vMF($\\kappa{=}1$) & Sphere & vMF($\\kappa{=}1$) & \\cmark & $66.98 \\pm 2.79$ & $99.71 \\pm 0.05$ & $99.42 \\pm 0.05$ \\\\\n Sphere & Uniform & vMF($\\kappa{=}10$) & Sphere & vMF($\\kappa{=}1$) & \\xmark & \\dittotikz & \\dittotikz & $99.86 \\pm 0.01$ \\\\\n Sphere & Uniform & Laplace($\\lambda{=}0.05$) & Sphere & vMF($\\kappa{=}1$) & \\xmark & \\dittotikz & \\dittotikz & $99.91 \\pm 0.01$ \\\\\n Sphere & Uniform & Normal($\\sigma{=}0.05$) & Sphere & vMF($\\kappa{=}1$) & \\xmark & \\dittotikz & \\dittotikz & $99.86 \\pm 0.00$\\\\\n \\midrule\n Box & Uniform & Normal($\\sigma{=}0.05$) & Unbounded & Normal & \\xmark & $67.93 \\pm 7.40$ & $99.78 \\pm 0.06$ & $99.60 \\pm 0.02$ \\\\\n Box & Uniform & Laplace($\\lambda{=}0.05$) & Unbounded & Normal & \\xmark & \\dittotikz & \\dittotikz & $99.64 \\pm 0.02$ \\\\\n Box & Uniform & Laplace($\\lambda{=}0.05$) & Unbounded & GenNorm($\\beta{=}3$) & \\xmark & \\dittotikz & \\dittotikz & $99.70 \\pm 0.02$\\\\\n Box & Uniform & Normal($\\sigma{=}0.05$) & Unbounded & GenNorm($\\beta{=}3$) & \\xmark & \\dittotikz & \\dittotikz & $99.69 \\pm 0.02$\\\\\n \\midrule\n Sphere & Normal($\\sigma{=}1$) & Laplace($\\lambda{=}0.05$) & Sphere & vMF($\\kappa{=}1$) & \\xmark & $63.37 \\pm 2.41$ & $99.70 \\pm 0.07$ & $99.02 \\pm 0.01$ \\\\\n Sphere & Normal($\\sigma{=}1$) & Normal($\\sigma{=}0.05$) & Sphere & vMF($\\kappa{=}1$) & \\xmark & \\dittotikz & \\dittotikz & 
$99.02 \\pm 0.02$ \\\\\n \\midrule Unbounded & Laplace($\\lambda{=}1$) & Normal($\\sigma{=}1$) & Unbounded & Normal & \\xmark & $62.49 \\pm 1.65$ & $99.65 \\pm 0.04$ & $98.13 \\pm 0.14$ \\\\ Unbounded & Normal($\\sigma{=}1$) & Normal($\\sigma{=}1$) & Unbounded & Normal & \\xmark & $63.57 \\pm 2.30$ & $99.61 \\pm 0.17$ & $98.76 \\pm 0.03$ \\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab:results_linear}\n \\end{table*}\n \n \\begin{table*}[t]\n \\centering\n \\caption{Identifiability up to generalized permutations, averaged over $5$ runs. \n Note that while Theorem~\\ref{thm:extended_rn_permutation_identifiable} requires the model latent space to be a convex body and $p(\\cdot|\\cdot)=q_{\\rm{h}}(\\cdot|\\cdot)$, we find that empirically either is sufficient.\n The results are grouped in four blocks corresponding to different types and degrees of violation of assumptions of our theory showing identifiability up to permutations: (1) no violation, violation of the assumptions on either the (2) space or (3) the conditional distribution, or (4) both.\n }\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{ccccccccc}\n \\toprule \\multicolumn{3}{c}{Generative process $g$} & \\multicolumn{3}{c}{Model $f$} & \\multicolumn{3}{c}{MCC Score [\\%]} \\\\\n Space & $p(\\cdot)$ & $p(\\cdot|\\cdot)$ & Space & $q_{\\rm{h}}(\\cdot|\\cdot)$ & M. 
& Identity & Supervised & Unsupervised \\\\\n \\midrule\n Box & Uniform & Laplace($\\lambda{=}0.05$) & Box & Laplace & \\cmark & $46.55 \\pm 1.34$ & $99.93 \\pm 0.03$ & $98.62 \\pm 0.05$ \\\\\n Box & Uniform & GenNorm($\\beta{=}3$; $\\lambda{=}0.05$) & Box & GenNorm($\\beta{=}3$) & \\cmark & \\dittotikz & \\dittotikz & $99.90 \\pm 0.06$ \\\\\n \\midrule\n Box & Uniform & Normal($\\sigma{=}0.05$) & Box & Normal & \\xmark & \\dittotikz & \\dittotikz & $99.77 \\pm 0.01$ \\\\\n Box & Uniform & Laplace($\\lambda{=}0.05$) & Box & Normal & \\xmark & \\dittotikz & \\dittotikz & $99.76 \\pm 0.02$ \\\\\n Box & Uniform & GenNorm($\\beta{=}3$; $\\lambda{=}0.05$) & Box & Laplace & \\xmark & \\dittotikz & \\dittotikz & $98.80 \\pm 0.02$ \\\\\n \\midrule\n Box & Uniform & Laplace($\\lambda{=}0.05$) & Unbounded & Laplace & \\xmark & \\dittotikz & $99.97 \\pm 0.03$ & $98.57 \\pm 0.02$ \\\\\n Box & Uniform & GenNorm($\\beta{=}3$; $\\lambda{=}0.05$) & Unbounded & GenNorm($\\beta{=}3$) & \\xmark & \\dittotikz & \\dittotikz & $99.85 \\pm 0.01$ \\\\\n \\midrule\n Box & Uniform & Normal($\\sigma{=}0.05$) & Unbounded & Normal & \\xmark & \\dittotikz & \\dittotikz & $58.26 \\pm 3.00$ \\\\\n Box & Uniform & Laplace($\\lambda{=}0.05$) & Unbounded & Normal & \\xmark & \\dittotikz & \\dittotikz & $59.67 \\pm 2.33$ \\\\\n Box & Uniform & Normal($\\sigma{=}0.05$) & Unbounded & GenNorm($\\beta{=}3$) & \\xmark & \\dittotikz & \\dittotikz & $43.80 \\pm 2.15$ \\\\\n \\bottomrule\n \\end{tabular}}\n \\label{tab:perm_results}\n \\end{table*}\n \n We evaluate both identifiability metrics for three different model types.\n First, we ensure that the problem requires nonlinear demixing by considering the identity function for model $f$, which amounts to scoring the observations against the sources (\\textbf{Identity Model}).\n Second, we ensure that the problem is solvable within our model class by training our model $f$ with supervision, minimizing the mean-squared error between $f(g({\\bf{z}}))$ and 
${\\bf{z}}$ (\\textbf{Supervised Model}). Third, we fit our model without supervision using a contrastive loss (\\textbf{Unsupervised Model}).\n \n Tables~\\ref{tab:results_linear} and~\\ref{tab:perm_results} show results evaluating identifiability up to affine transformations and generalized permutations, respectively. When assumptions match (see column M.), CL recovers a score close to the empirical upper bound.\n Mismatches in assumptions on the marginal and conditional do not lead to a significant drop in performance with respect to affine identifiability, but they do lead to a noticeable drop in permutation identifiability relative to the empirical upper bound.\n In many practical scenarios, we use the learned representations to solve a downstream task; thus, identifiability up to affine transformations is often sufficient.\n However, for applications where identification of the individual generative factors is desirable, some knowledge of the underlying generative process is required to choose an appropriate loss function and feature normalization.\n Interestingly, we find that for convex bodies, we obtain identifiability up to permutation even in the case of a normal conditional, which is likely due to the axis-aligned box geometry of the latent domain.\n Finally, note that the drop in performance for identifiability up to permutations in the last group of Tab.~\\ref{tab:perm_results} is a natural consequence of either the ground-truth or the assumed conditional being rotationally symmetric, e.g., a normal distribution, in an unbounded space. 
Here, rotated versions of the latent space are indistinguishable and, thus, the model cannot align the axes of the reconstruction with those of the ground-truth latent space, resulting in a lower score.\n \n To zoom in on how violations of the uniform marginal assumption influence the identifiability achieved by a model in practice, we perform an ablation on the marginal distribution by interpolating between the theoretically assumed uniform distribution and highly locally concentrated distributions.\n In particular, we consider two cases: (1) a sphere ($\\mathbb{S}^9$) with a vMF marginal around its north pole for different concentration parameters $\\kappa$; (2) a box ($[0,1]^{10}$) with a normal marginal around the box's center for different standard deviations $\\sigma$.\n For both cases, Fig.~\\ref{fig:uniformity_violation} shows the $R^2$ score as a function of the concentration $\\kappa$ and $1\/\\sigma^2$, respectively (black). As a reference, the concentration of the used conditional distribution is highlighted as a dashed line.\n In addition, we also display the probability mass (0--100\\%) that needs to be moved to convert the used marginal distribution (i.e., vMF or normal) into the assumed uniform marginal distribution (blue) as an intuitive measure of the mismatch (i.e., $\\frac{1}{2}\\int |p(\\mathbf{z}) - p_{\\mathrm{uni}}(\\mathbf{z})|\\, \\mathrm{d}\\mathbf{z}$).\n While we observe significant robustness to mismatch, in both cases performance drops drastically once the marginal distribution is more concentrated than the conditional distribution of positive pairs. In such scenarios, positive pairs are indistinguishable from negative pairs. \n\n \\begin{figure}\n \\centering\n \\includegraphics[width=.9\\linewidth]{figures\/uniformity_violation_sigma_kappa_ablation.pdf}\n \\vspace*{-0.5cm}\n \\caption{Varying degrees of violation of the uniformity assumption for the marginal distribution. 
The figure shows the $R^2$ score measuring identifiability up to linear transformations (black) as well as the difference between the used marginal and assumed uniform distribution in terms of probability mass (blue) as a function of the marginal's concentration. The black dotted line indicates the concentration of the used conditional distribution.}\n \\label{fig:uniformity_violation}\n \\end{figure}\n \n\\subsection{Extensions to image data}\n Previous studies have demonstrated that contrastive representation learning scales well to complex natural image data \\citep{chen2020simple, chen2020big, henaff2020data}.\n Unfortunately, the true generative factors of natural images are inaccessible; thus, we cannot evaluate identifiability scores.\n\n We consider two alternatives.\n First, we evaluate on the recently proposed benchmark \\textit{KITTI Masks}~\\citep{klindt2020slowvae}, which is composed of segmentation masks of natural videos.\n Second, we contribute a novel benchmark (\\textit{3DIdent}; cf. Fig.~\\ref{fig:3dident_examples}) which features aspects of natural scenes, e.g., a complex 3D object and different lighting conditions, while still providing access to the continuous ground-truth factors. For further details, see Appx.~\\ref{apx:3dident_comparison}. \\textit{3DIdent} is available at \\href{https:\/\/zenodo.org\/record\/4502485\/}{zenodo.org\/record\/4502485}.\n\n\\subsubsection{KITTI Masks} \\label{sec:kitti_experiments}\n KITTI Masks~\\citep{klindt2020slowvae} is composed of pedestrian segmentation masks extracted from the autonomous driving vision benchmark KITTI-MOTS~\\citep{geiger2012are}, with natural shapes and continuous natural transitions. We compare to SlowVAE~\\citep{klindt2020slowvae}, the state of the art on the considered dataset. In our experiments, we use the same training hyperparameters (for details, see Appx.~\\ref{apx:experiment_details}) and (encoder) architecture as \\citet{klindt2020slowvae}. 
The positive pairs consist of nearby frames with an average time separation $\\overline{\\Delta t}$.\n \n \\begin{table}\n \\centering\n \\caption{\\textbf{KITTI Masks}. Mean $\\pm$ standard deviation over 10 random seeds. $\\overline{\\Delta t}$ indicates the average temporal distance of frames used.}\n \\label{table:MOTSComp}\n \\small\n \\begin{tabular}{clll}\n \\toprule\n & Model & Model Space & MCC [\\%] \\\\\n \\midrule\n \\parbox[t]{16mm}{\\multirow{5}{*}{$\\overline{\\Delta t}=0.05s$}} & SlowVAE & Unbounded & 66.1 $\\pm$ 4.5 \\\\\n & Laplace & Unbounded & 77.1 $\\pm$ 1.0 \\\\\n & Laplace & Box & 74.1 $\\pm$ 4.4 \\\\\n & Normal & Unbounded & 58.3 $\\pm$ 5.4 \\\\\n & Normal & Box & 59.9 $\\pm$ 5.5 \\\\\n \\midrule\n \\parbox[t]{16mm}{\\multirow{5}{*}{$\\overline{\\Delta t}=0.15s$}} & SlowVAE & Unbounded & 79.6 $\\pm$ 5.8 \\\\\n & Laplace & Unbounded & 79.4 $\\pm$ 1.9 \\\\\n & Laplace & Box & 80.9 $\\pm$ 3.8 \\\\\n & Normal & Unbounded & 60.2 $\\pm$ 8.7 \\\\\n & Normal & Box & 68.4 $\\pm$ 6.7 \\\\\n \\bottomrule\n \\end{tabular}\n \\end{table}\n \n As argued and shown in \\citet{klindt2020slowvae}, the transitions in the ground-truth latents between nearby frames are sparse. Unsurprisingly, then, Table~\\ref{table:MOTSComp} shows that assuming a Laplace conditional as opposed to a normal conditional in the contrastive loss leads to better identification of the underlying factors of variation. %\n SlowVAE also assumes a Laplace conditional~\\citep{klindt2020slowvae} but appears to struggle if the frames of a positive pair are too similar ($\\overline{\\Delta t}=0.05s$).\n This degradation in performance is likely due to the limited expressiveness of the decoder deployed in SlowVAE.\n \n \n \n \n\\subsubsection{3DIdent} \\label{sec:3dident_experiments}\n \n\\paragraph{Dataset description}\n \\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/3ddis_samples_reduced.pdf}\n \\caption{\\textbf{3DIdent}. 
Influence of the latent factors ${\\bf{z}}$ on the renderings ${\\bf{x}}$. Each column corresponds to a traversal in one of the ten latent dimensions while the other dimensions are kept fixed. %\n }\n \\label{fig:3dident_examples}\n \\end{figure*}\n \n We build on \\citet{johnson2017clevr} and use the Blender rendering engine \\citep{blender} to create visually complex 3D images (see Fig.~\\ref{fig:3dident_examples}). Each image in the dataset shows a colored 3D object located and rotated above a colored ground in a 3D space. Additionally, each scene contains a colored spotlight focused on the object and located on a half-circle around the scene. The observations are encoded in an RGB color space with a spatial resolution of $224\\times224$ pixels.\n\n The images are rendered based on a $10$-dimensional latent, where: (1) three dimensions describe the XYZ position, (2) three dimensions describe the rotation of the object in Euler angles, (3) two dimensions describe the color of the object and the ground of the scene, respectively, and (4) two dimensions describe the position and color of the spotlight. We use the HSV color space to describe the color of the object and the ground with only one latent each by having the latent factor control the hue value. For more details on the dataset, see Appx.~\\ref{apx:3dident_details}.\n \n The dataset contains $250\\,000$ observation-latent pairs where the latents are uniformly sampled from the hyperrectangle $\\mathcal{Z}$. 
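Because a positive partner must itself correspond to a rendered image, positive pairs in 3DIdent are built by perturbing a latent and snapping to the nearest latent for which a rendering exists. A brute-force NumPy stand-in for this lookup (the paper uses a FAISS index for speed; all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the dataset's latent table: each row is a
# latent vector for which an image rendering exists.
latents = rng.uniform(-1.0, 1.0, size=(1000, 10))

def nearest_rendered_latent(z_prime, table):
    """Return the row of `table` closest to z_prime in L2 distance.

    Brute-force replacement for the approximate FAISS index; fine for
    illustration, too slow for 250k latents in practice.
    """
    dists = np.linalg.norm(table - z_prime, axis=1)
    return table[np.argmin(dists)]

z = latents[rng.integers(len(latents))]            # anchor with a rendering
z_prime = z + rng.normal(scale=0.05, size=10)      # perturbed latent
z_pos = nearest_rendered_latent(z_prime, latents)  # snap to rendered latent
```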
To sample positive pairs $({\\bf{z}}, {\\tilde{\\mathbf{z}}})$ we first sample a value ${\\tilde{\\mathbf{z}}}'$ from the data conditional $p({\\tilde{\\mathbf{z}}}'|{\\bf{z}})$, and then use nearest-neighbor matching\\footnote{We used an Inverted File Index (IVF) with Hierarchical Navigable Small World (HNSW) graph exploration for fast indexing.} implemented by FAISS \\citep{JDH17} to find the latent ${\\tilde{\\mathbf{z}}}$ closest to ${\\tilde{\\mathbf{z}}}'$ (in $L^2$ distance) for which there exists an image rendering. In addition, unlike previous work~\\citep{locatello2018challenging}, we create a hold-out test set with $25\\,000$ distinct observation-latent pairs. \n\n\\paragraph{Experiments and Results}\n \\begin{table*}[htb]\n \\vspace{-0.05cm}\n \\centering\n \\caption{Identifiability up to affine transformations on the test set of 3DIdent. Mean $\\pm$ standard deviation over 3 random seeds. As earlier, only the first row corresponds to a setting that matches the theoretical assumptions for linear identifiability; the others show distinct violations. Supervised training with unbounded space achieves scores of $R^2=(98.67 \\pm 0.03)$\\% and $\\text{MCC}=(99.33 \\pm 0.01)$\\%. The last row refers to using the image augmentations suggested by \\citet{chen2020simple} to generate positive image pairs.\n For performance on the training set, see Appx.~Table~\\ref{tab:3dident_results_train}.\n }\n \\small\n \\begin{tabular}{ccccccc}\n \\toprule\n Dataset & \\multicolumn{3}{c}{Model $f$} & Identity [\\%] & \\multicolumn{2}{c}{Unsupervised [\\%]} \\\\\n $p(\\cdot|\\cdot)$ & Space & $q_{\\rm{h}}(\\cdot|\\cdot)$ & M. 
& $R^2$ & $R^2$ & MCC \\\\\n \\midrule\n \n Normal & Box & Normal & \\cmark & $5.25 \\pm 1.20$ & $96.73 \\pm 0.10$ & $98.31 \\pm 0.04$\\\\ \n\n Normal & Unbounded & Normal & \\xmark & \\dittotikz & $96.43 \\pm 0.03$ & $54.94 \\pm 0.02$\\\\\n\n Laplace & Box & Normal & \\xmark & \\dittotikz & $96.87 \\pm 0.08$ & $98.38 \\pm 0.03$\\\\\n \n Normal & Sphere & vMF & \\xmark & \\dittotikz & $65.74 \\pm 0.01$ & $42.44 \\pm 3.27$\\\\\n \n Augm. & Sphere & vMF & \\xmark & \\dittotikz & $45.51 \\pm 1.43$ & $46.34 \\pm 1.59$\\\\\n\n \\bottomrule\n \\end{tabular}\n \\label{tab:3dident_results_test}\n \\end{table*}\n \n We train a convolutional feature encoder $f$ composed of a ResNet18 architecture~\\citep{he2015deep} and an additional fully-connected layer, with a LeakyReLU nonlinearity as the hidden activation. For more details, see Appx.~\\ref{apx:experiment_details}. Following the same methodology as in Sec.~\\ref{sec:toy_experiments}, i) depending on the assumed space, the output of the feature encoder is normalized accordingly, and ii) in addition to the CL models, we also train a supervised model to serve as an upper bound on performance. We consider normal and Laplace distributions for positive pairs. Note that, due to the finite dataset size, we can only sample from an approximation of these distributions.\n \n As in Tables~\\ref{tab:results_linear} and~\\ref{tab:perm_results}, the results in Table~\\ref{tab:3dident_results_test} demonstrate that CL reaches scores close to the topline (supervised) performance, and that mismatches between the assumed and the ground-truth conditional distribution do not harm performance significantly. However, if the hypothesis class of the encoder is too restrictive to model the ground-truth conditional distribution (here, mapping a box onto a sphere), we observe a clear drop in performance. 
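The sphere/vMF setting uses $L^2$-normalized encoder outputs. For concreteness, a minimal NumPy sketch of an InfoNCE-style loss on normalized representations with in-batch negatives (illustrative names, not the training code):

```python
import numpy as np

def info_nce(h, h_pos, tau=0.1):
    """InfoNCE loss with L2-normalized representations.

    h, h_pos: (batch, dim) encoder outputs for anchors and positives.
    Each anchor treats the other anchors' positives as negatives; the
    matching pair sits on the diagonal of the similarity matrix.
    """
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    h_pos = h_pos / np.linalg.norm(h_pos, axis=1, keepdims=True)
    logits = h @ h_pos.T / tau                    # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 16))
loss_matched = info_nce(h, h + 0.01 * rng.normal(size=(8, 16)))  # true positives
loss_random = info_nce(h, rng.normal(size=(8, 16)))              # unrelated pairs
```

As expected, the loss is much smaller when the paired representations are genuinely close than when they are unrelated.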
Note that this corresponds to the InfoNCE objective for $L^2$-normalized representations, commonly used for self-supervised representation learning~\\citep{wu2018unsupervised, he2020momentum, tian2019contrastive, bachman2019learning, chen2020simple}.\n Finally, the last result shows that leveraging image augmentations~\\citep{chen2020simple}, as opposed to sampling from a specified conditional distribution of positive pairs $p(\\cdot|\\cdot)$, results in a performance drop. For details on the experiment, see Appx.~\\ref{apx:experiment_details}. We attribute this to the greater mismatch between the conditional distribution assumed by the model and the conditional distribution induced by the augmentations.\n In sum, our theoretical claims hold empirically even for generative processes with higher visual complexity than those considered in Sec.~\\ref{sec:toy_experiments}.\n \n \n\\section{Conclusion}\n We showed that objectives belonging to the InfoNCE family, the basis for a number of state-of-the-art techniques in self-supervised representation learning, can uncover the true generative factors of variation underlying the observational data. To succeed, these objectives implicitly encode a few weak assumptions about the statistical nature of the underlying generative factors. While these assumptions will likely not be matched exactly in practice, we showed empirically that the underlying factors of variation are identified even if the theoretical assumptions are severely violated.\n \n Our theoretical and empirical results suggest that the representations found with contrastive learning implicitly (and approximately) invert the generative process of the data. This could explain why the learned representations are so useful in many downstream tasks. It is known that a decisive aspect of contrastive learning is the right choice of augmentations that form a positive pair. 
We hope that our framework might prove useful for clarifying the ways in which certain augmentations affect the learned representations, and for finding improved augmentation schemes.\n \n Furthermore, our work opens avenues for constructing more effective contrastive losses. As we demonstrate, imposing a contrastive loss informed by characteristics of the latent space can considerably facilitate inferring the correct semantic descriptors, and thus boost performance in downstream tasks. \n While our framework already allows for a variety of \\emph{conditional} distributions, it is an interesting open question how to adapt it to \\emph{marginal} distributions beyond the uniform distribution implicitly encoded in InfoNCE. Also, future work may extend our theoretical framework by incorporating additional assumptions about our visual world, such as compositionality, hierarchy, or objectness. \n Accounting for such inductive biases holds enormous promise in forming the basis for the next generation of self-supervised learning algorithms. \n \n Taken together, we lay a strong theoretical foundation for not only understanding but also extending the success of state-of-the-art self-supervised learning techniques.\n\n\\subsection*{Author contributions}\n The project was initiated by WB.\n RSZ, SS and WB jointly derived the theory. RSZ and YS implemented and executed the experiments. 
The 3DIdent dataset was created by RSZ with feedback from SS, YS, WB and MB.\n RSZ, YS, SS and WB contributed to the final version of the manuscript.\n\n\\subsection*{Acknowledgements}\n We thank\n Muhammad Waleed Gondal,\n Ivan Ustyuzhaninov,\n David Klindt,\n Lukas Schott\n and Luisa Eck\n for helpful discussions.\n We thank Bozidar Antic, Shubham Krishna and Jugoslav Stojcheski for ideas regarding the design of 3DIdent.\n We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting RSZ, YS and StS.\n StS acknowledges his membership in the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program.\n We acknowledge support from the German Federal Ministry of Education and Research (BMBF) through the Competence Center for Machine Learning (TUE.AI, FKZ 01IS18039A) and the Bernstein Computational Neuroscience Program T\\\"ubingen (FKZ: 01GQ1002). WB acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under grant no. BR 6382\/1-1 as well as support by Open Philanthropy and the Good Ventures Foundation. MB and WB acknowledge funding from the MICrONS program of the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior\/Interior Business Center (DoI\/IBC) contract number D16PC00003.\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}