\\section{Introduction}\n\\label{Introduction}\nThe top quark is the heaviest elementary\nparticle in the standard model (SM)~\\cite{Abazov:2014dpa,Abazov:2015spa,Tevatron:2014cka,ATLAS:2014wva}.\nDespite the fact that the top quark decays weakly, its large mass leads to a very short lifetime\nof $\\approx 5\\cdot10^{-25}$~s~\\cite{Jezabek:1987nf, Jezabek:1988iv,Abazov2012n}.\nIt decays to a $W$ boson and a $b$ quark before hadronizing, \na process that has a characteristic time of\n$1\/\\Lambda_{\\rm{QCD}} \\approx (200~{\\rm MeV})^{-1}$\nequivalent to $\\tau_{\\rm{had}} \\approx 3.3\\cdot10^{-24}$~s,\nwhere $\\Lambda_{\\rm{QCD}}$ is the fundamental scale of\nquantum chromodynamics (QCD). The top quark lifetime is also smaller\nthan the spin-decorrelation time from spin-spin interactions with the\nlight quarks generated in the fragmentation process~\\cite{PhysRevD.49.3320},\n$\\tau_{\\rm{spin}} \\approx m_t\/\\Lambda_{\\rm{QCD}}^2 \\approx (0.2~{\\rm MeV})^{-1} \\approx 3\\cdot 10^{-21}$~s~\\cite{Willenbrock:2002ta}.\nThe top quark thus provides a unique opportunity\nto measure spin-related phenomena in the quark sector \nby exploiting kinematic properties of its decay products.\n\n\nIn proton-antiproton (\\ppbar) collisions,\nthe dominant process for producing top quarks is through\ntop-antitop (\\,$t\\bar{t}$\\,) quark pairs.\nThis QCD process yields unpolarized $t$ and $\\bar{t}$ quarks,\nbut leaves the spins of $t$ and $\\bar{t}$ correlated.\nA spin correlation observable can be defined as~\\cite{Bernreuther:2010ny}\n\\[\n\\begin{split}\nO_{ab} & = \\langle 4(S_t\\cdot \\hat{a}) (S_{\\bar{t}} \\cdot \\hat{b}) \\rangle = \\\\\n & 
\\frac{\\sigma(\\uparrow\\uparrow)+\\sigma(\\downarrow\\downarrow)-\\sigma(\\uparrow\\downarrow)-\\sigma(\\downarrow\\uparrow)}\n {\\sigma(\\uparrow\\uparrow)+\\sigma(\\downarrow\\downarrow)+\\sigma(\\uparrow\\downarrow)+\\sigma(\\downarrow\\uparrow)} ,\n\\end{split}\n\\]\nwhere $S$ is a spin operator, $\\hat{a}, \\hat{b}$ are the spin quantization axes for the top\nquark ($\\hat{a}$) and the antitop quark\n($\\hat{b}$), $ \\langle \\rangle$ refers to an expectation value,\n$\\sigma$ is the \\ttbar\\ production cross section,\nand the arrows refer to the spin states of the $t$ and $\\bar{t}$ quarks relative to the \n$\\hat{a}$ and $\\hat{b}$ axes.\nThe strength of the correlation depends \non the \\ttbar\\ production mechanism~\\cite{Mahlon:1995zn,Mahlon:1997uc,Bernreuther:2001rq}.\nIn \\ppbar\\ collisions at a center-of-mass energy of 1.96~TeV, the correlation of spins is predicted \nto be $O_{\\rm off} = 0.80^{+0.01}_{-0.02}$~\\cite{Bernreuther:2010ny} in the off-diagonal spin basis,\nthe basis in which the strength of the spin correlation is maximal at the Tevatron~\\cite{Mahlon:1997uc}.\nThe most significant contribution is from the\nquark-antiquark annihilation process ($q\\bar{q} \\to t\\bar{t}$) with a spin correlation\nstrength of $\\approx 0.99$, while the gluon-gluon ($gg$) fusion process ($gg \\to t\\bar{t}$)\nhas anticorrelated spins with a typical strength of $ \\approx -0.36$ at \nnext-to-leading order (NLO) in QCD \\cite{Bernreuther:2004jv,Bernreuther:2010ny,Bernreuther_private}.\nContributions to \\ttbar\\ production from beyond the SM can have different dynamics that\naffect the strength of the \\ttbar\\ spin correlation.\n\nEvidence for \\ttbar\\ spin correlations, based on a matrix element technique, was presented by the \\dzero\\ collaboration~\\cite{Abazov2012f}.\nEarlier, lower-precision measurements used a template method~\\cite{Aaltonen:2010nz,Abazov2011aj}.\nSpin correlation effects have also been measured in proton-proton ($pp$) collisions 
by two\nLHC collaborations, ATLAS and CMS, at a center-of-mass energy of 7~TeV\n\\cite{ATLAS:2012ao,Aad:2014pwa, Aad:2015bfa, Chatrchyan:2013wua}\nand at 8~TeV \\cite{Aad:2014mfk, Khachatryan:2015tzo}.\nThe main mechanism for \\ttbar\\ production at the LHC is the \n$gg$ fusion process. The spin correlation at the LHC \narises mainly from the fusion of like-helicity gluons~\\cite{Mahlon:2010gw}.\nThe differences between $pp$ and \\ppbar\\ incident channels,\nthe different sources of spin correlation\n(quark-antiquark annihilation versus like-helicity $gg$ fusion), \nand their different collision energies, make the measurements of the \nstrength of the spin correlation at both the Tevatron and LHC interesting and complementary.\n\nIn this Letter, we present an updated measurement of the \\ttbar\\ spin correlation strength in\n\\ppbar\\ collisions at $\\sqrt s = 1.96$~TeV. \nThe measurement uses the data accumulated during the 2001--2011 running period of \nthe Fermilab Tevatron Collider, corresponding to an integrated luminosity of \n\\lumi, almost twice that of our previous publication~\\cite{Abazov2012f}.\n\n\n\\section{Detector, Event Selection and Simulation, Background}\n\\label{sec:detector}\n\nThe \\dzero\\ detector is described in Refs.~\\cite{Abazov2006l,Abazov:2005uk,Abolins2008,Angstadt2010,Ahmed:2010fx,Casey:2012rr,Bezzubov:2014jka}. \nIt has a central tracking system consisting of a\nsilicon microstrip tracker and a central fiber tracker,\nboth located within a $\\sim 2$~T superconducting solenoidal\nmagnet. 
The central tracking system is designed to optimize tracking and\nvertexing at detector pseudorapidities of \n$|\\etadet|<2.5$\\footnote{The pseudorapidity is \ndefined as $\\eta = - \\ln [\\tan(\\theta\/2)]$, where $\\theta$ is the polar \nangle of the reconstructed particle originating from the $p\\bar{p}$ collision vertex,\nrelative to the proton beam direction.\nDetector pseudorapidity $\\eta_{\\rm det}$ is defined relative to\nthe center of the detector.}.\nThe liquid-argon sampling calorimeter has a\ncentral section covering pseudorapidities $|\\etadet|$ up to\n$\\approx 1.1$, and two end calorimeters that extend coverage\nto $|\\etadet|\\approx 4.2$, with all three housed in separate\ncryostats. An outer muon system, with pseudorapidity coverage of $|\\etadet|<2$,\nconsists of a layer of tracking detectors and scintillation trigger\ncounters in front of $1.8$~T iron toroids, followed by two similar layers\nafter the toroids.\n\nWithin the SM, the top quark decays with almost 100\\% probability into a $W$ boson and a \n$b$ quark. We consider two final states: the dilepton final state (\\dilepton), where\nboth $W$ bosons decay to leptons,\nand the lepton+jets final state (\\ljets), where one of the $W$ bosons\ndecays into a pair of quarks and one decays to a lepton and a neutrino.\nThe \\ljets\\ and \\dilepton\\ final states contain, respectively, one or two isolated charged leptons.\nIn both final states we consider only electrons and muons,\nincluding those from $\\tau$-lepton decay, $W \\to \\tau\\nu_\\tau \\to \\ell \\nu_\\ell \\nu_\\tau$.\nWe also require the presence of two $b$ quark jets, two light-quark jets from $W$ decay (in \\ljets),\nand significant missing transverse momentum (\\met) due to the escaping neutrinos. 
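The relative abundance of these final states follows from a simple counting of $W$ branching fractions. The sketch below is illustrative only (not part of the analysis): it uses the LO lepton-universality values, counting all three lepton flavors before the restriction to electrons and muons described above, and ignores phase-space and QCD corrections.

```python
from fractions import Fraction

# LO counting with lepton universality: each W decays to one of three
# lepton flavors or to six effective hadronic channels (two quark
# doublets times three colors), i.e. nine equally probable channels.
BR_LEP = Fraction(3, 9)   # B(W -> l nu), summed over e, mu, tau
BR_HAD = Fraction(6, 9)   # B(W -> q qbar')

f_dilepton = BR_LEP * BR_LEP      # both W bosons decay leptonically
f_ljets    = 2 * BR_LEP * BR_HAD  # exactly one W decays leptonically
f_allhad   = BR_HAD * BR_HAD      # both W bosons decay hadronically

assert f_dilepton + f_ljets + f_allhad == 1
print(f_dilepton, f_ljets, f_allhad)  # 1/9 4/9 4/9
```

The lepton+jets channel is thus the largest of the two analyzed final states, at the price of a larger multijet background.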
\n\nWe use the following selection criteria.\nIn the \\dilepton\\ channels, we require two isolated leptons \nwith \\mbox{$\\pt>15$ GeV},\nboth originating from the same \\ppbar\\ interaction vertex.\nThe \\ljets\\ channels require one isolated lepton with \\mbox{$\\pt>20$ GeV}.\nWe consider electrons and muons identified using the standard \\dzero\\ criteria \n\\cite{Abazov:2013tha,Abazov:2013xpp}, in the pseudorapidity range of \n $|\\etadet|<2.0$ for muons, and $|\\etadet|<1.1$ for electrons. In the \\dilepton\\ channels,\nwe consider in addition forward electrons in the range of $1.5<|\\etadet|<2.5$.\nJets are reconstructed and identified from energy deposition in the calorimeter using\nan iterative midpoint cone algorithm~\\cite{Blazey:2000qt} of radius~$\\sqrt{(\\Delta\\phi)^2+(\\Delta\\eta)^2}= 0.5$.\nTheir energies are corrected using the jet energy scale (JES) algorithm~\\cite{Abazov:2013hda}.\nAll \\dilepton\\ channels also require the presence of at least two jets with $\\pt>20$~GeV and $|\\etadet|<2.5$.\nFor the \\ljets\\ final state, at least four jets must be identified with the same \\pt\\ and \\etadet\\\ncutoffs, but with the leading jet required to have $\\pt>40$~GeV.\nWhen a muon track is found within a jet cone, the JES calculation takes that muon momentum into account, \nassuming that the muon originates from the semileptonic decay of a heavy-flavor hadron belonging to the jet.\nTo identify $b$ quark jets,\nwe use a multivariate $b$ quark jet identification discriminant that\ncombines information from the impact parameters of the tracks and variables that characterize\nthe presence and properties of secondary vertices within the jet~\\cite{Abazov:2013gaa}.\nWe require that at least one jet is identified as a $b$ quark jet in the \\dilepton\\ channels, and\nat least two such jets in the \\ljets\\ channels.\nTo improve signal purity, additional selections based on the global event topology \nare applied~\\cite{Abazov:2013wxa,Abazov:2014vga} in 
each final state. \nA detailed description of event selection can be found in Ref.~\\cite{Abazov:2013wxa} for the \\dilepton\\, and\nin Ref.~\\cite{Abazov:2014vga} for the \\ljets\\ final states. \n\n\nTo simulate \\ttbar\\ events we use the\nnext-to-leading-order (NLO) Monte Carlo (MC)\nQCD generator \\mcatnlo~(version~3.4)~\\cite{Frixione:2002ik,Frixione:2008ym},\ninterfaced to \\herwig~(version~6.510)~\\cite{Corcella:2000bw} for parton\nshowering and hadronization. The\nCTEQ6M parton distribution functions (PDF)~\\cite{Pumplin2002,Nadolsky:2008zw} \nare used to generate events at a top quark mass of $m_t=172.5$~GeV.\nWe use two samples, one including spin correlation effects, and \nthe other without correlation.\nThe generated events are processed through a \\geant-based~\\cite{geant3}\nsimulation of the \\dzero\\ detector.\nTo simulate effects from additional overlapping \\ppbar\\ interactions,\n``zero bias'' events taken from collider data with an unbiased trigger\nbased solely on beam bunch crossings are overlaid on the simulated events.\nSimulated events are then processed with the same reconstruction program as data.\n\nIn the \\dilepton\\ channels, the main sources of background are\nDrell-Yan production, $q\\bar{q}\\to\\Z\\to\\ell\\ell$, diboson\n$WW,\\ WZ,\\ ZZ$ production, and instrumental background. The\ninstrumental background arises mainly from multijet and $(W \\to \\ell\n\\nu)$+jets events, in which one jet in \\wjets\\ or two jets in multijet events are\nmisidentified as electrons, or where muons or electrons originating\nfrom semileptonic decay of heavy-flavor hadrons appear to be isolated.\nThe instrumental background is determined from data, while the other\nbackgrounds are estimated using MC simulations. For the\n\\ljets\\ channel, in addition to the Drell-Yan and diboson production,\nthe contribution from \\wjets\\ production is estimated from MC\nsimulation, but normalized to data. 
Electroweak single top quark production\nand \\ttbar\\ dilepton final states are also considered as background.\nThe Drell-Yan and $(W \\to \\ell \\nu)$+jets samples are generated with\nthe leading order (LO) matrix element generator\n\\alpgen\\ (version~v2.11)~\\cite{Mangano2003}, interfaced to\n\\pythia~\\cite{Sjostrand2006} (version~6.409, \\dzero\\ modified\ntune~A~\\cite{Affolder2002}) for parton showering and hadronization.\nDiboson events are generated with \\pythia. More details about\nbackground estimation can be found in\nRefs.~\\cite{Abazov:2013wxa,Abazov:2014vga}.\nTable~\\ref{tab:events} shows the number of expected events for each background source and for the signal,\nand the number of selected events in\ndata. The number of expected \\ttbar\\ events is normalized to the NLO cross\nsection of $7.45^{+0.48}_{-0.67}$~pb~\\cite{Moch:2008qy}.\nThe observed number of events in the \\ljets\\ channel is\nhigher than expected, mainly due to an excess in the $\\mu$+jets channel.\nThe expected and observed numbers of events are consistent when the systematic uncertainties,\npartially correlated between the \\ljets\\ and \\dilepton\\ channels, are taken into account.\nThese uncertainties are of the order of 10\\%.\nThe most important contributions are from the integrated luminosity, the $b$-quark jet modeling,\nthe \\ttbar\\ modeling, and the heavy-flavor NLO\n$K$-factors of the $W$+jets background in the \\ljets\\ channel.\n\n\\begin{table}\n \\begin{center}\n \\begin{tabular}[t]{lcccc c|c}\n \\hline\\hline\n & $Z\/\\gamma^\\star$ & Instrumental & Diboson &\\ttbar&\\ Total\\ \\ &Data \\\\ \\hline\n $e\\mu$ & 13.2 & 16.4 & 3.7 &\\ 303.4\\ \\ & 336.7 & 347 \\\\ \n $ee$ & 12.2 & 1.8 & 1.9 & 102.4 & 118.3 & 105 \\\\ \n $\\mu\\mu$ & 9.8 & 0.0 & 1.7 & 85.0 & 96.5 & 93 \\\\ \\hline\n \n & $W$+jets & Multijet & Other & & & \\\\\n $e$+jets & 22.7 & 23.1 & 15.3 & 427.4 & 488.6 & 534 \\\\\n $\\mu$+jets & 24.1 & 3.5 & 11.6 & 341.4 & 380.6 & 440 
\\\\ \n \\hline\\hline\n \\end{tabular}\n \\caption{Numbers of expected events, and numbers of events found in data.\n \\label{tab:events}}\n \\end{center}\n\\end{table}\n\n\n\\section{Measurement Technique and Results}\n\nOur measurement uses the same matrix element (ME) approach \nas Refs.~\\cite{Abazov2011b,Abazov2012f}, adapted to the spin correlation measurement.\nThis method consists of calculating the spin correlation discriminant~\\cite{Melnikov2011a}\n\\begin{equation}\n\\label{eq:R}\n R (x)=\\frac{P_{t\\bar{t}}(x, {\\rm SM})}{P_{t\\bar{t}}(x, {\\rm SM})+P_{t\\bar{t}}(x, {\\rm null})}\\ , \n\\end{equation}\nwhere $P_{t\\bar{t}}(x, \\mathscr{H})$ is a per-event probability for hypothesis $\\mathscr{H}$ for\nthe vector of the reconstructed object parameters $x$. \nHypothesis $\\mathscr{H}={\\rm SM}$ assumes the \\ttbar\\ spin correlation strength\npredicted by the SM,\nand $\\mathscr{H}={\\rm null}$ assumes uncorrelated spins.\nThese probabilities are calculated from the integral\n\\begin{eqnarray}\n P_{t\\bar{t}}(x,\\mathscr{H}) = \\frac{1}{\\sigma_{\\rm obs}} \\int f_{{\\rm PDF}}(q_1) f_{{\\rm PDF}}(q_2) \\times \n \\nonumber \\\\\n\\frac{(2\\pi)^4 |\\mathscr{M}(y,\\mathscr{H})|^2}{q_1 q_2 s}W(x,y) d\\Phi^6 dq_1 dq_2 .\n\\end{eqnarray}\nHere, $q_1$ and $q_2$ represent the respective fractions of proton and antiproton momentum carried by the initial state partons,\n$f_{{\\rm PDF}}$ represents the parton distribution functions, \n$s$ is the square of the \\ppbar\\ center-of-mass energy,\nand $y$ refers to partonic final state four-momenta of the particles.\nThe detector transfer functions, $W(x,y)$, correspond to the probability to reconstruct\nfour-momenta $y$ as $x$,\n$d\\Phi^6$ represents the six-body phase space, and \n$\\sigma_{{\\rm obs}}$ is the observed \\ttbar\\ production cross section, \ncalculated using $\\mathscr{M}(\\mathscr{H}={\\rm null})$,\ntaking into account the efficiency of the selection.\nThe same $\\sigma_{{\\rm obs}}$ is used for 
$\\mathscr{H}={\\rm null}$ and $\\mathscr{H}={\\rm SM}$ hypotheses,\nbecause the difference in the observed cross sections is small, of the order of a percent,\nand affects only the separation power of the discriminant $R$.\nThis calculation uses the LO matrix element \n$\\mathscr{M}(y,\\mathscr{H})$ for the processes \n$q\\bar{q} \\to t\\bar{t} \\to W^+W^- b\\bar{b} \\to \\ell^{\\pm}\\nu_\\ell qq^\\prime b\\bar{b}$ or\n$\\ell^+\\ell^-\\nu_\\ell \\bar{\\nu_\\ell} b\\bar{b}$,\ncalculated according to the spin correlation hypothesis $\\mathscr{H}$.\nThe matrix element $\\mathscr{M}$ is averaged over the colors and spins of the initial partons,\nand summed over the final colors and spins.\nFor the hypothesis $\\mathscr{H}={\\rm null}$,\nwe set the spin correlation part to zero~\\cite{Mahlon:1995zn,Mahlon:1997uc}.\nIn the calculation, we assume perfect measurements of the lepton and jet directions, and \nperfect measurement of the electron energy, which reduces the number of dimensions that require integration.\nThe probability is obtained by integrating over the remaining kinematic variables.\nIn the \\dilepton\\ final state, we use the \ntop and antitop quark masses, the $W^+$ and $W^-$ boson masses, the \\pt\\ of the two jets,\n$1\/p_T$ for any muons, and the \\pt\\ and $\\phi$ of the \\ttbar\\ system as integration variables.\nIn the \\ljets\\ final state, the variables are\nthe top and antitop quark masses, the mass of the $W$ boson decaying to $q\\bar{q}^\\prime$, the \\pt\\ of the $d$-type quark jet,\nthe $p_z$ of the leptonically decaying top quark, and $1\/p_T$ of a muon.\nSince neither the flavor of the two quarks from the $W$ boson decay\nnor the assignment of the $b$-tagged jets to the top and antitop quark decays is known,\nall possible jet-parton assignments are considered and $P_{t\\bar{t}}$ is calculated as the\nsum over all the probabilities.\n\n\nThe distributions in the discriminant $R$ of Eq.~(\\ref{eq:R})\nare calculated for simulated \\ttbar\\ events with SM spin correlation 
and\nwith uncorrelated spins. These and the expected contributions from\nthe background events are used as templates to fit \nthe $R$ distribution in data through a binned maximum-likelihood fit with two free parameters:\nthe \\ttbar\\ production cross section $\\sigmattbar$, and the measured fraction of\nevents with the SM spin correlation strength, $f$.\n\nThis fit of the distributions in the \\dilepton\\ and \\ljets\\ channels is performed simultaneously,\nwith the expected number of events $n_i$ in each bin $i$ given by\n\\begin{equation}\n \\label{eq:nevt}\n n_i = \\frac{\\sigmattbar}{7.45 {\\rm pb}} \\left(f n_{\\rm SM}^i + (1-f) n_{\\rm null}^i\\right) + n_{\\rm bckg}^i,\n\\end{equation}\nwhere $n_{\\rm SM}^i$ and $n_{\\rm null}^i$ are the number of events in bin $i$\nbased on the \\mcatnlo\\ prediction, with and without spin correlations, and\n$n_{\\rm bckg}^i$ is the expected number of background events in the same bin.\nWe use a non-uniform bin width and require a sufficiently large\nnumber of events for each bin in order to avoid bins with zero events, as they\ncould bias the fit result.\nThe exact number of bins and their size were optimized to give the smallest\nexpected statistical uncertainty in the case of the SM spin correlation.\nWe use the same number and widths of the bins for the \\ljets\\\nand \\dilepton\\ channels so as to keep the bin optimization procedure relatively simple.\nThe fit yields $f = \\fmeas \\pm \\ferr\\ (\\rm{stat})$.\nThe $R$ distribution for the combined \\dilepton\\ and \\ljets\\ channels is shown in Fig.~\\ref{fig:result}.\nWe estimate the significance of the non-zero spin correlation hypothesis \nusing the Feldman and Cousins frequentist\nprocedure~\\cite{Feldman:1997qc}, assuming that the parameter $f$ is in the range $[0, 1]$, even though the measured\nvalue obtained in the fit is outside of the range $[0, 1]$.\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{datafit_comb_final.eps}\n\\caption{Distribution of 
the spin correlation discriminant $R$ in data and for the \\mcatnlo\\ \\ttbar\\ prediction with background,\nshowing the merged results from \\dilepton\\ and \\ljets\\ events.\nThe lower panel shows the difference between the data and the simulation, with and without SM spin correlation.\nThe error bars correspond to statistical uncertainties.\n\\label{fig:result}}\n\\end{figure}\n\n\nTo translate the $f$ value to the \nspin correlation strength in the off-diagonal basis $O_{\\rm{off}}$,\nwe must consider the value of the spin correlation strength in the simulation, $O^{\\rm{MC}}_{\\rm off}$.\nWe choose to obtain this value in the simulated \\dilepton\\ samples from\nthe expected value of \n$k_1k_2O^{\\rm{MC}}_{\\rm{off}} = -9 \\langle\\cos\\theta_1 \\cdot\\cos \\theta_2\\rangle$~\\cite{Bernreuther:2004jv}, \nwhere $\\theta_1$ and $\\theta_2$ represent the angles between the respective directions of the positively and negatively\ncharged leptons and the spin quantization axes in the $t$ and $\\bar{t}$ rest frames.\nThe parameters $k_1$ and $k_2$ are the spin analyzing-power coefficients of the top quark\n(equal to 1 for leptons at LO in QCD)~\\cite{Brandenburg:2002xr}. \nWith \\mcatnlo, the value calculated for the parton-level \ndistributions before any selections is found to be\n$O^{\\rm{\\mcatnlo}}_{\\rm{off}}=0.766$ in the off-diagonal basis. 
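The estimator $-9\langle\cos\theta_1\cos\theta_2\rangle$ can be checked with a toy simulation of the LO dilepton double-angular distribution, $(1/\sigma)\,d^2\sigma/d\cos\theta_1\,d\cos\theta_2 = (1/4)(1 - C\cos\theta_1\cos\theta_2)$ with $C = k_1k_2O_{\rm off}$. The sketch below is illustrative only (it is not the analysis code) and assumes $k_1 = k_2 = 1$.

```python
import random

def sample_pair(C, rng):
    """Draw (cos t1, cos t2) from (1/4)(1 - C c1 c2) by accept-reject."""
    fmax = 0.25 * (1.0 + abs(C))  # maximum of the density on [-1, 1]^2
    while True:
        c1 = rng.uniform(-1.0, 1.0)
        c2 = rng.uniform(-1.0, 1.0)
        if rng.uniform(0.0, fmax) < 0.25 * (1.0 - C * c1 * c2):
            return c1, c2

def estimate_C(C_true, n=200_000, seed=1):
    """Recover C = k1 k2 O_off as -9 <cos t1 cos t2> from a toy sample."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n):
        c1, c2 = sample_pair(C_true, rng)
        s += c1 * c2
    return -9.0 * s / n

print(estimate_C(0.766))  # close to the input strength of 0.766
```

The factor $-9$ follows from $\int_{-1}^{1} c^2\,dc = 2/3$ for each lepton, so that $\langle\cos\theta_1\cos\theta_2\rangle = -C/9$; the statistical precision of the toy estimate scales as $3/\sqrt{n}$.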
\nThe measured spin correlation strength for \\ljets\\ and \\dilepton\\ channels is therefore\n\\[O^{\\rm meas}_{\\rm{off}} = O^{\\rm{\\mcatnlo}}_{\\rm{off}} \\cdot f = \\result \\pm 0.16\\ (\\rm{stat})\\ , \\]\nin agreement with the NLO QCD calculation $O_{\\rm off} = 0.80^{+0.01}_{-0.02}$~\\cite{Bernreuther:2010ny}.\nFor events in the \\ljets\\ channel, the result is\n\\[O^{\\rm{\\ell+jets}}_{\\rm{off}} = \\resultlj\\pm0.24\\ (\\rm{stat})\\ ,\\]\nand for \\dilepton\\ channel the result is \n\\[O^{\\ell\\ell}_{\\rm{off}} = \\resultll\\pm0.22\\ (\\rm{stat})\\ .\\]\n\nWe can reinterpret the measured fraction $f$ as the related measurement of the\nspin correlation observable\n$O_{\\rm{spin}} = \\langle \\frac{4}{3}(S_t\\cdot S_{\\bar{t}} ) \\rangle$~\\cite{Bernreuther:2010ny}.\nThis observable characterizes the distribution in the opening angle,\n$\\varphi$, between the directions of the two leptons in dilepton events or\nbetween the lepton and the up-type quark from the $W$ decay in \\ljets\\ events,\nwhere the directions are\ndefined in the $t$ and $\\bar{t}$ rest frame:\n\\begin{equation}\n \\frac{1}{\\sigma}\\frac{d\\sigma}{d\\cos\\varphi}=\\frac{1}{2}(1-k_1k_2 O_{\\rm{spin}}\\cos\\varphi).\n \\end{equation}\nThe prediction from the \\mcatnlo\\ simulation is given by the expectation\nvalue $k_1k_2 O^{\\rm{\\mcatnlo}}_{\\rm{spin}} = -3 \\langle\\cos \\varphi\\rangle$ at the parton level,\nwithout any selections,\nand found to be $O^{\\rm{\\mcatnlo}}_{\\rm{spin}} = 0.20$.\nThe value measured from data is therefore \n\\[O^{\\rm meas}_{\\rm{spin}} = O^{\\rm{\\mcatnlo}}_{\\rm{spin}} \\cdot f = 0.23 \\pm 0.04 (\\rm{stat}) ,\\]\nconsistent with the NLO QCD calculation of $O_{\\rm{spin}} = 0.218\\pm0.002$~\\cite{Bernreuther:2010ny}.\n\n\n\\section{Systematic Uncertainties}\n\nThe estimated systematic uncertainties are summarized in Table~\\ref{tab:syst}.\nThese are obtained by replacing the nominal \\ttbar\\ and background results with modified templates, \nrefitting the 
data and determining the new fraction $f_{\\Delta}$. \n\nWe consider several sources of uncertainties in the modeling of the signal. These include\ninitial-state and final-state radiation, the simulation of hadronization and underlying events, \nthe effects of higher-order corrections,\ncolor-reconnection and uncertainty on the top quark mass.\nThe details of the corresponding samples and parameters \nare discussed in Refs.~\\cite{Abazov:2014dpa,Abazov:2015spa}.\n\nFor the PDF uncertainty, we change the 20 CTEQ6 eigenvectors independently and\nadd the resulting uncertainties in quadrature.\nIn modeling both the estimated signal and PDF uncertainties, the event samples have different fractional\ncontributions from $gg$ fusion and \\qqbar\\ annihilation, and therefore different\nspin-correlation strengths.\nWe take this into account by normalizing the measured fraction to the\nspin-correlation strength of the sample $O^{\\rm{MC}}_{\\rm{off}}$,\nin a way similar to that used for the\nnominal measurement $O^{\\Delta}_{\\rm{off}} = f_{\\rm{\\Delta}} \\cdot O^{\\rm{MC}}_{\\rm{off}}$.\n\nThe statistical uncertainty in MC templates is estimated using the ensemble testing technique.\nThe new ensembles are created through a random generation of a new number of events\nin each bin of the MC template assuming a Gaussian distribution in the number of events in the bin.\nThe same distribution in data is fitted with the modified templates and the dispersion\nin the fit results over 1000 ensembles is used as an estimation of the statistical uncertainty in the MC templates.\n\nThe uncertainty on identification and reconstruction effects includes\nuncertainties on lepton, jet and $b$ tagging identification efficiencies, jet energy resolution and scale corrections, trigger efficiencies, and the luminosity.\nThe uncertainty in the background contributions includes all uncertainties\nthat affect the signal-to-background ratio that are not contained in the\nprevious categories. 
These uncertainties include uncertainties\nin theoretical cross sections for backgrounds,\nuncertainty in $Z$ boson $p_T$ distribution,\nand uncertainties in instrumental background contributions.\n\nThe total absolute systematic uncertainty on the spin correlation observable $O^{meas}_{\\rm{off}}$,\ncalculated as a quadratic sum over all individual sources, is $0.15$, as shown in Table~\\ref{tab:syst}.\n\\begin{table}[!htb]\n\\begin{center}\n\\begin{tabular}[t]{l c}\n\\hline\\hline\nSource & Uncertainty in $O^{meas}_{\\rm off}$ \\\\\n\\hline\nModeling of signal & $\\pm 0.135$ \\\\ \nPDF & $\\pm 0.027$ \\\\ \nStatistical fluctuations in MC & $\\pm 0.026 $ \\\\ \nIdentification and reconstruction & $\\pm 0.032 $ \\\\ \nBackground contribution & $\\pm 0.019 $ \\\\ \n\\hline \nTotal & $\\pm \\errsys$ \\\\ \n\\hline\\hline\n\\end{tabular}\n\\caption{Systematic uncertainties (absolute values) on the spin correlation strength $O^{meas}_{\\rm off}$.\n\\label{tab:syst}}\n\\end{center}\n\\end{table}\n\n\\section{Spin correlation and the~$t\\bar{t}$~production mechanism}\nThe strength of the \\ttbar\\ spin correlation in the SM is strongly dependent on the \\ttbar\\ production mechanism.\nThe spin correlation measurement thus provides a way of measuring\nthe fraction of events produced via $gg$ fusion, $f_{gg}$~\\cite{Bernreuther:2001rq}.\nThe $f_{gg}$ fraction is not well defined\nat orders higher than LO QCD.\nThe difficulty arises from the fact that the cross sections for the\n$gq\\to t\\bar{t}q$ and $g\\bar{q}\\to t\\bar{t}\\bar{q}$ processes \nat LO, as well as $gg$ and \\qqbar\\ production at NLO,\ncontain a singularity when the final state quark is collinear with the quark in the initial state.\nThis makes the integration over the phase space divergent~\\cite{Nason:1987xz,Bernreuther_private, Mangano_private}. \nIn practice, this singularity is absorbed into the definition of the PDF, but the final results \ndepend on the scheme used for regularization. 
\nFor the NLO PDF, the \\msbar\\ scheme is usually preferred. \nThe \\gq\\ contribution at NLO is \nof the order of a few percent~\\cite{Bernreuther_private, Bernreuther:2010ny,Bernreuther:2004jv},\nand considering that the overall spin correlation strength is $\\approx 80\\% $,\nwe neglect these smaller contributions, and determine $f_{gg}$ from the relation\n\\[ O = (1-f_{gg}) O_{q\\bar{q}} + f_{gg} O_{gg}\\ .\\]\nAssuming $O_{q\\bar{q}} \\approx 1$, the gluon fraction becomes\n\\[ f_{gg} \\approx \\frac{1-O}{1-O_{gg}}\\ , \\]\nwhere $O$ is the measured value of the total spin correlation strength,\nand $O_{gg}$ is the SM value of the spin correlation strength for $gg$ events.\n\n\nThe NLO calculation in the off-diagonal basis using the\nCT10 PDF yields $O_{gg}= -0.36\\pm 0.02 $~\\cite{Bernreuther_private, Bernreuther:2010ny,Bernreuther:2004jv}.\nThe systematic uncertainty on the observable $O$ can be translated to the uncertainty on the gluon fraction\nthat includes an additional contribution from the theoretical uncertainty on $O_{gg}$. 
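Numerically, the extraction of $f_{gg}$ and a simple linear propagation of the uncertainties can be sketched as follows. The input $O=0.89$ is a stand-in for the measured strength and is illustrative only.

```python
import math

# Illustrative inputs: O stands in for the measured O_off with its
# statistical and systematic uncertainties; O_gg is the NLO prediction
# for gg-initiated events quoted in the text.
O, dO_stat, dO_syst = 0.89, 0.16, 0.15
O_gg, dO_gg = -0.36, 0.02

f_gg = (1.0 - O) / (1.0 - O_gg)

# Linear propagation: |df/dO| = 1/(1-O_gg), |df/dO_gg| = f_gg/(1-O_gg).
df_stat = dO_stat / (1.0 - O_gg)
df_syst = math.hypot(dO_syst / (1.0 - O_gg),
                     f_gg * dO_gg / (1.0 - O_gg))

print(round(f_gg, 2), round(df_stat, 2), round(df_syst, 2))  # 0.08 0.12 0.11
```

With these illustrative inputs the propagation reproduces the pattern of the quoted result: the theoretical uncertainty on $O_{gg}$ contributes only at the per-mille level compared with the experimental uncertainties.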
\nIn the absence of non-SM contributions, the fraction of \\ttbar\\ events produced through gluon fusion becomes\n\\[\n f_{gg} = 0.08 \\pm 0.12 (\\rm{stat}) \\pm 0.11 (\\rm{syst}) = 0.08 \\pm 0.16 (\\rm{stat} + \\rm{syst})\\ ,\n\\]\nin agreement with the NLO prediction \nof $ f_{gg} = 0.135$~\\cite{Bernreuther_private, Bernreuther:2010ny,Bernreuther:2004jv}.\n\n\n\\section{Summary}\n\nWe have presented an updated measurement of \\ttbar\\ spin correlations with the \\dzero\\ detector\nfor an integrated luminosity of \\lumi.\nThe result of the measurement of the strength of the \\ttbar\\ spin correlation in the off-diagonal basis is \n\\begin{eqnarray*}\nO_{\\rm{off}} & = & \\result\\pm 0.16\\ (\\rm{stat}) \\pm \\errsys\\ (\\rm{syst}) \\\\\n& = & \\result\\pm 0.22\\ (\\rm{stat} + \\rm{syst}).\n\\end{eqnarray*}\nThis result is in agreement with the NLO QCD calculation $O_{\\rm off} = 0.80^{+0.01}_{-0.02}$~\\cite{Bernreuther:2010ny}\nand supersedes that reported in Ref.~\\cite{Abazov2012f}.\nUsing the Feldman and Cousins approach for interval setting~\\cite{Feldman:1997qc},\nand assuming uncorrelated \\ttbar\\ spins, we estimate a probability ($p$-value)\nof $2.5\\times 10^{-5}$ for obtaining a spin correlation at least as large as the observed value.\nThis corresponds to evidence for spin correlation in \\ttbar\\ \nevents at a significance of $4.2$~standard deviations.\n\nIn the absence of non-SM contributions, we use the spin correlation strength measurement to constrain \nthe fraction of events produced through gluon fusion at NLO QCD and obtain \n\\begin{eqnarray*}\n f_{gg} = 0.08 \\pm 0.16 (\\rm{stat} + \\rm{syst})\\ ,\n\\end{eqnarray*}\nin good agreement with the SM prediction.\n\n\n\\section*{Acknowledgment}\nWe thank the staffs at Fermilab and collaborating institutions,\nand acknowledge support from the\nDepartment of Energy and National Science Foundation (United States of America);\nAlternative Energies and Atomic Energy Commission and\nNational Center for Scientific 
Research\/National Institute of Nuclear and Particle Physics (France);\nMinistry of Education and Science of the Russian Federation, \nNational Research Center ``Kurchatov Institute\" of the Russian Federation, and \nRussian Foundation for Basic Research (Russia);\nNational Council for the Development of Science and Technology and\nCarlos Chagas Filho Foundation for the Support of Research in the State of Rio de Janeiro (Brazil);\nDepartment of Atomic Energy and Department of Science and Technology (India);\nAdministrative Department of Science, Technology and Innovation (Colombia);\nNational Council of Science and Technology (Mexico);\nNational Research Foundation of Korea (Korea);\nFoundation for Fundamental Research on Matter (The Netherlands);\nScience and Technology Facilities Council and The Royal Society (United Kingdom);\nMinistry of Education, Youth and Sports (Czech Republic);\nBundesministerium f\\\"{u}r Bildung und Forschung (Federal Ministry of Education and Research) and \nDeutsche Forschungsgemeinschaft (German Research Foundation) (Germany);\nScience Foundation Ireland (Ireland);\nSwedish Research Council (Sweden);\nChina Academy of Sciences and National Natural Science Foundation of China (China);\nand\nMinistry of Education and Science of Ukraine (Ukraine).\n\n\n\\bibliographystyle{apsrev_custom2}\n\n\\section{Introduction}\n\\label{int}\n\n\nOne of the simplest observables, which contains information about\nthe dynamics of multiparticle production beyond single-particle\ndensities, is the multiplicity distribution. 
\nWhile the study of the multiplicity distribution $P_N$ in \nfull phase space deals with limited dynamical information influenced by \ncharge- and energy-momentum conservation, \nthe investigation of the evolution of the probabilities \n$P_n(\\delta )$ of detecting $n$ particles in ever smaller \nsizes $\\delta$ of phase-space windows (bins) \ncan provide detailed information on QCD multihadron \nproduction without these trivial constraints. \nA deviation of this\ndistribution from that expected for purely independent particle\nproduction can be attributed to \ndynamical local multiplicity fluctuations. \n\nThe important quest behind such a study is \nthe understanding of the origin of short-range correlations between \nfinal-state particles, leading to the appearance of\ndynamical multiparticle spikes in individual events. \nAs a consequence of these correlations, \nthe normalized factorial moments (NFMs) \n\\begin{equation}\nF_q(\\delta )\\equiv\\frac{\\langle n^{[q]}\\rangle}{\\langle n \\rangle ^q},\n\\label{lf1}\n\\end{equation} \n\\begin{equation}\n\\langle n^{[q]}\\rangle =\\sum_{n=q}^{\\infty}n^{[q]}P_n(\\delta ),\n\\qquad n^{[q]}=n(n-1)\\ldots (n-q+1),\n\\label{lf2}\n\\end{equation} \nof the local multiplicity distribution $P_n(\\delta )$ exhibit \na power-like increase with decreasing $\\delta$, namely \n$F_q(\\delta )\\propto\\delta ^{-\\phi_q}$ \\cite{bp}. The \nconstants $\\phi_q$ are called intermittency indices. \nThis phenomenon reflects the fact that \n$P_n(\\delta )$ becomes broader with\ndecreasing $\\delta$. 
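In practice, the NFMs defined above are estimated by averaging the unnormalized factorial moments over events. A minimal sketch (not the code of any of the cited analyses) is:

```python
import random
from math import prod

def nfm(counts, q):
    """Normalized factorial moment F_q = <n(n-1)...(n-q+1)> / <n>^q,
    estimated from a list of per-event bin multiplicities."""
    fact = [prod(range(n - q + 1, n + 1)) for n in counts]  # n^[q], 0 if n < q
    mean_n = sum(counts) / len(counts)
    return (sum(fact) / len(fact)) / mean_n ** q

# Independent (near-Poissonian) particle production gives F_q close to 1;
# dynamical fluctuations would push F_q above 1 as the bin shrinks.
rng = random.Random(0)
sample = [sum(rng.random() < 0.01 for _ in range(500)) for _ in range(20_000)]
print(nfm(sample, 2))  # close to (500-1)/500, i.e. ~1, for a binomial sample
```

Note that events with $n < q$ contribute zero to $\langle n^{[q]}\rangle$, which is why only the first few terms of Eq.~(2) matter in small bins.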
\nSince NFMs satisfy the scaling property \n$F_q(\lambda\delta )=\lambda^{-\phi_q}F_q(\delta )$,\nthis is widely regarded as evidence that \nthe correlations exhibit a self-similar \nunderlying dynamics.\n\nExperimentally, local fluctuations in $\mbox{e}^+\mbox{e}^-$-processes \nhave already been studied by the TASSO, \nHRS, CELLO, OPAL, ALEPH, DELPHI \nand L3 Collaborations \cite{cllr}.\nThe data do exhibit an approximate \npower-like rise of the NFMs with a saturation at small $\delta$.\nThe conclusion has been reached that such a phenomenon is \na consequence of the multi-jet structure of events,\ni.e., groups of particles \nwith similar angles resulting\nin spikes of particles as seen in selected phase-space projections.\nThe hard gluon radiation significantly affects the NFMs,\nso that they show a stronger increase in\n3-jet events than in 2-jet events.\nIt has been found that, with the statistics available at that time,\ncurrent Monte-Carlo models can, in general, describe the data,\neven without additional tuning. \n\nRecently, it has been realized that the factorial-moment\nmethod poorly reflects the information content of local fluctuations,\nsince the NFM of order $q$ contains a trivial contamination from\nlower-order correlation functions \n(see reviews \cite{rev}). As a result, \nrather different event samples can exhibit \na very similar behavior of the NFMs.\nThe fact that subtle details in the behavior of $P_n(\delta )$ are missing,\ntogether with the small statistics used, may be the reason \nwhy different Monte-Carlo models can reasonably describe\nthe local fluctuations measured in $\mbox{e}^+\mbox{e}^-$ annihilation so far. 
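As a numerical illustration of eqs. (\ref{lf1})-(\ref{lf2}) (this sketch is ours, not part of the original analysis): for a Poisson distribution every NFM equals unity, which a direct truncated sum over the probabilities reproduces to high accuracy.

```python
import math

def poisson_pmf(n, lam):
    return math.exp(-lam) * lam**n / math.factorial(n)

def nfm(pmf, q, nmax=100):
    """Normalized factorial moment F_q = <n^[q]> / <n>^q, eqs. (lf1)-(lf2);
    the sum is truncated at nmax, which suffices for small means."""
    mean = sum(n * pmf(n) for n in range(nmax))
    fact_mom = sum(math.prod(range(n - q + 1, n + 1)) * pmf(n)
                   for n in range(q, nmax))
    return fact_mom / mean**q

# for a Poisson distribution every NFM equals unity (cf. Table 1)
for q in (2, 3, 4):
    assert abs(nfm(lambda n: poisson_pmf(n, 1.3), q) - 1.0) < 1e-9
```

The truncation at `nmax=100` keeps `math.factorial(n)` within float range while leaving a negligible tail for the means used here.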
\n\n\n\nAnother shortcoming of the factorial-moment \nmeasurement is that in moving to ever smaller phase-space bins,\nthe statistical bias due to a finite event \nsample ($N_{\mathrm{ev}}\ne\infty$)\nbecomes significant, especially for high-order moments $q$.\nThis is because in actual measurements\nthe NFMs at small bin size are \ndetermined by the first few terms in \n(\ref{lf2}).\nIn most cases this leads to a\nsignificant underestimate of\nthe measured NFMs with respect to their true values.\n\n\nCumulants are a more sensitive statistical tool (see \cite{rev}\nand references therein).\nHowever, their measurement is rather difficult and was rarely attempted. \nBesides, the cumulants are \nexpected to be influenced by the \nstatistical bias to an even larger degree,\nsince they are constructed from the factorial moments \nof different orders $q$. \n\n\section{Local Properties}\n\nAn important step towards an improvement of experimental measurements of\nthe local multiplicity distribution was made in \cite{bp1,bp2}, where\nit was shown that any complex distribution can be represented as\n$$\nP_n(\delta )=P_0(\delta )\frac{\lambda^n}{n!}\, L_n, \qquad\nL_n=\prod^{n}_{i=2}\eta_i^{n-i+1}(\delta ),\n$$\nwhere $\lambda =P_1(\delta)\/P_0(\delta )$. \nThe factor $L_n$ measures the deviation of the distribution\nfrom a Poisson, for which $L_n=1$. Non-Poissonian fluctuations\nexhibit themselves as a deviation of $L_n$ from unity. The $L_n$ \nis constructed from the bunching parameters (BPs) \n\begin{equation}\n\eta_q(\delta )=\frac{q}{q-1}\frac{P_q(\delta )P_{q-2}(\delta )}\n{P_{q-1}^2(\delta )}, \qquad q>1. \n\label{l3}\n\end{equation} \nThe values of the BPs and NFMs for the\nmost popular distributions are\nshown in Table~1. 
The most\ninteresting observation is that while\nthe NFM is an ``integral'' characteristic of the $P_n(\delta )$\nand the BP is a ``differential'' one, both tools\nhave values larger than unity if the distribution\nis broader than a Poisson. \nGenerally, however, one should not expect that all\nBPs are larger than unity for a broad distribution;\nBPs probe the distribution locally, i.e. they are simply\ndetermined by the second-order derivative of\nthe logarithm of $P_n(\delta )$ with respect to $n$. \nNote that, in the case of local distributions, \nthe width of the distribution is mainly determined by \n$\eta_2(\delta )$. This observation is based on the\nsimple fact that $P_n(\delta )$ ceases to be bell-shaped at\nsufficiently small $\delta$. \n\begin{center}\n$$\begin{array}{|l|c|c|c|}\hline\n$Distribution$ & P_n & $NFMs$ & $BPs$ \\ \hline\hline\n & & & \\\n$Pos. Binomial$ & C_n^Np^n(1-p)^{N-n}\n& \prod_{i=1}^{q-1}(1-\frac{i}{N})<1 & \frac{q-1-N}{q-2-N}<1 \\ [0.6cm]\n$Poisson$ & p^n\exp(-p)\/n!\n& 1 & 1 \\ [0.6cm]\n$Neg. Binomial$ & \frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}\np^n(1+p)^{-(k+n)}\n& \prod_{i=1}^{q-1}(1+\frac{i}{k})>1 & \frac{q-1+k}{q-2+k}>1 \\ [0.6cm]\n$Geometric$ & p^n(p+1)^{-n-1} & \prod_{i=1}^{q-1}(1+i)>1 & \frac{q}{q-1}>1 \\\n[0.2cm]\n\hline\n\end{array}$$\n\end{center}\n\n\vspace{0.5cm}\nTable 1. {\it NFMs and BPs for positive-binomial,\nPoisson, negative-binomial and\ngeometric distributions.}\n\label{dis}\n\vspace{1.0cm}\n\nBPs are more sensitive to the variation \nin the shape of $P_n(\delta )$ \nwith decreasing $\delta$ than are the NFMs \cite{che}. 
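The closed forms in the BP column of Table~1 follow directly from the definition (\ref{l3}) and can be checked numerically; the short sketch below (our own illustration, with arbitrarily chosen parameter values) evaluates $\eta_q$ straight from the probabilities of the negative-binomial and geometric distributions.

```python
import math

def bp(pmf, q):
    """Bunching parameter eta_q = q/(q-1) * P_q P_{q-2} / P_{q-1}^2, eq. (l3)."""
    return q / (q - 1) * pmf(q) * pmf(q - 2) / pmf(q - 1) ** 2

def nbd_pmf(n, k, p):
    # negative-binomial probabilities, as given in Table 1
    return (math.gamma(n + k) / (math.gamma(n + 1) * math.gamma(k))
            * p**n * (1 + p) ** (-(k + n)))

def geometric_pmf(n, p):
    # geometric probabilities, as given in Table 1
    return p**n / (1 + p) ** (n + 1)

k, p = 2.5, 0.7  # illustrative parameter values
for q in (2, 3, 4):
    # Table 1: eta_q = (q-1+k)/(q-2+k) for the NBD, q/(q-1) for the geometric
    assert math.isclose(bp(lambda n: nbd_pmf(n, k, p), q), (q - 1 + k) / (q - 2 + k))
    assert math.isclose(bp(lambda n: geometric_pmf(n, p), q), q / (q - 1))
```

Note that, as the table states, both results are independent of $p$, i.e. of the mean of the distribution.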
\nIn the case of intermittent fluctuations,\none should expect $\\eta_2(\\delta )\\propto\\delta^{-d_2}$.\nFor multifractal local fluctuations, the $\\eta_q(\\delta )$ are \n$\\delta$-dependent functions for all $q\\ge 3$,\nwhile for monofractal behavior \n$\\eta_q(\\delta ) =const$ for $q\\ge3$ \\cite{bp1}.\n\n\nFrom an experimental point of view,\nthe BPs have the following important advantages \\cite{bp2}:\n\n1) They are less severely affected by\nthe bias from finite statistics\nthan the NFMs, since the $q$th-order BP\nresolves only the behavior\nof the multiplicity distribution near multiplicity $n=q-1$;\n\n2) For the calculation of the BP of order $q$,\none needs to know only the $q$-particle\nresolution of the detector,\nnot any higher-order resolution.\n\nThe problem we are dealing with in this paper is to investigate\nwhether different Monte-Carlo (MC) models, which were tuned \nto reproduce the global-shape variables and single-particle\ninclusive densities, can \nlead to the same structure of the local multiplicity\nfluctuations which are determined by {\\em many}-particle inclusive\ndensities. \nWe study JETSET 7.4 PS \\cite{eer9}, ARIADNE 4.08 \n\\cite{ard1} and HERWIG 5.9 \\cite{h56}\nmodels. The models have been tuned as described above by the\nL3 Collaboration \\cite{eer10}.\n\n\\section{Monte-Carlo Analysis}\n\\label{sec:ex}\n\n1) {\\it Horizontal BPs}:\n\nIn order to reduce\nthe statistical error on the observed local quantities\nwhen analyzing experimental data,\nwe use the bin-averaged BPs \\cite{bp1,bp2}:\n\\begin{equation}\n\\eta_q(M)=\n\\frac{q}{q-1} \\frac{\\bar N_q(M) \\bar N_{q-2}(M)}\n{\\bar N_{q-1}^2(M)}, \n\\label{l14}\n\\end{equation}\n\n\\begin{equation}\n\\bar N_{q}(M)=\\frac{1}{M}\\sum_{m=1}^{M}N_q(m,\\delta ),\n\\end{equation}\nwhere $N_q(m,\\delta )$ is the number of events having $q$ particles\nin bin $m$ and $M=\\Delta \/\\delta$ is the total number of bins\n($\\Delta$ represents the size of full phase-space volume). 
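The bin-averaged estimator (\ref{l14}) is straightforward to implement. The following sketch (ours, with a hypothetical three-event toy sample rather than generator data) computes $\eta_2(M)$ from per-bin particle counts:

```python
def horizontal_bp(events, q):
    """Bin-averaged BP eta_q(M) of eq. (l14): `events` is a list of
    per-event lists giving the particle count in each of the M bins."""
    M = len(events[0])

    def nbar(r):
        # \bar N_r(M): number of events with exactly r particles in bin m,
        # averaged over the M bins
        return sum(sum(1 for ev in events if ev[m] == r) for m in range(M)) / M

    return q / (q - 1) * nbar(q) * nbar(q - 2) / nbar(q - 1) ** 2

# hypothetical toy sample: 3 events, M = 2 bins
events = [[0, 1], [2, 1], [1, 1]]
print(horizontal_bp(events, 2))  # -> 0.125
```

Here $\bar N_0 = 0.5$, $\bar N_1 = 2$, $\bar N_2 = 0.5$, so $\eta_2 = 2\cdot 0.25/4 = 0.125$, which the code reproduces.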
\nTo be able to study non-flat distributions, such as that in rapidity, \nwe have to carry out a transformation from the original\nphase-space variable to one \nin which the underlying\ndensity is approximately uniform, as suggested by Bia\l as, \nGazdzicki and Ochs \cite{tr}. \n\n\medskip\n2) {\it Generalized integral BPs:}\n\nTo study the distribution for spikes,\nwe will consider the generalized integral BPs \cite{bp2} \nusing the squared pairwise\nfour-momentum difference $Q^2_{12}=-(p_1-p_2)^2$.\nIn this variable, the definition of the BPs is given by\n\begin{equation}\n\chi_{q}(Q^2)=\n\frac{q}{q-1} \frac{\Pi_q(Q^2) \Pi_{q-2}(Q^2)}\n{\Pi^2_{q-1}(Q^2)},\n\label{l15}\n\end{equation} \nwhere $\Pi_q(Q^2)$ represents the number\nof events having $q$ spikes of size \n$Q^2$ in the phase space of the variable $Q^2_{12}$, \nirrespective of\nhow many particles are inside each spike.\nTo define the spike size, we shall use\nthe so-called Grassberger-Hentschel-Procaccia\ncounting topology,\nfor which a many-particle hyper-tube is assigned\na size $Q^2$ corresponding to the\nmaximum of all pairwise distances (see \cite{bp2} for details).\nFor purely independent particle production, with the\nmultiplicity distribution characterized by a Poissonian\nlaw, the BPs (\ref{l15})\nare equal to unity for all $q$.\n\n\subsection{In the rapidity variable}\n\nIn order to study fluctuations inside jets,\nin most investigations the \nfluctuations have been measured in the rapidity\n$y$ defined with respect to the thrust \nor sphericity axis \cite{cllr}.\nThe Monte Carlo analysis for this variable is performed in the full\nrapidity range $\mid Y \mid \leq 5$.\nFig.~1 shows the results for the \nBPs (\ref{l14}) for rapidity after\nthe Bia\l as-Gazdzicki-Ochs transformation.\nThe second-order BP for the JETSET model \ndecreases with increasing $M$ up to \n$M\simeq 20$, which is found to\ncorrespond to the value of $M$ at which the maximum of\nthe multiplicity distribution 
$P_n(\\delta)$\nfirst occurs at $n=0$.\nAt large $M$, all BPs show a\npower-law increase \nwith increasing $M$, $\\eta_q\\sim M^{\\alpha_q}$.\nThis indicates that the fluctuations in $y$ defined\nwith respect to the thrust axis\nare multifractal scale invariant. \n\n\\begin{figure}[htbp]\n\\begin{center}\n\\mbox{\\epsfig{file=bp.eps, width=0.6\\linewidth}}\n\\end{center}\nFigure 1.\n{\\it BPs as a function of the number $M$ of bins in\nrapidity defined with respect to the thrust axis.\nThe shaded areas represent the statistical and systematical errors\non the JETSET predictions.}\n\\label{pda5a}\n\\end{figure}\n\nNote that the conclusion that \nfluctuations have a multifractal structure is possible without the\nnecessity of calculating the intermittency indices $\\phi_q$.\nIn contrast, to reveal multifractality with the help of the NFMs,\none first needs to carry out fits of the NFMs by a power law.\n\n\n\nHERWIG predictions (dashed lines) significantly overestimate the\nsecond-order BP obtained from LUND MCs. Since, for small\nphase-space cells, the second-order BP is determined by the dispersion\nof the distribution \\cite{bp1,bp2},\nthis means that the HERWIG produces too broad\nlocal multiplicity distributions.\nSuch a result confirms that obtained by\nthe ALEPH Collaboration \\cite{AL2}.\n\nTo study the disagreement between Monte-Carlo models in\nmore detail, one can split $\\eta_2$ into two BPs:\n\\begin{equation}\n\\eta_2 = \\eta_2^{(\\pm\\pm)} + \\eta_2^{(+-)}.\n\\label{l3aa}\n\\end{equation} \nHere $\\eta_2^{(\\pm\\pm)}$ is defined by (\\ref{l14}) with\n$N_2(m, \\delta)=N_2^{(\\pm\\pm)}(m, \\delta y)$, $N_2^{(\\pm\\pm)}(m, \\delta y)$ being\nthe number of events having like-charged two-particle\ncombinations inside bin $m$ of size $\\delta y$.\nAnalogously, $\\eta_2^{(+-)}$ is constructed from\nthe number of events $N_2^{(+-)}(m, \\delta y)$ having unlike-charged\ntwo-particle combinations. 
Note that, due to\na combinatorial reason,\n$\eta_2^{(\pm\pm)}<\eta_2^{(+-)}$. \n\n\n\begin{figure}[htbp]\n\begin{center}\n\mbox{\epsfig{file=bpp.eps, width=0.7\linewidth}}\n\end{center}\nFigure 2. \n{\it The second-order BP as a function of the number $M$\nof bins in rapidity defined with respect\nto the thrust axis for\nlike-charged and unlike-charged particle combinations.}\n\label{pda55}\n\end{figure}\n\nFig. 2 shows that $\eta_2^{(\pm\pm)}$ and\n$\eta_2^{(+-)}$ indeed behave completely differently.\nWhile $\eta_2^{(\pm\pm)}$ shows the expected rise,\n$\eta_2^{(+-)}$ shows a strong decrease at low $M$ and\nthe onset of an increase at large $M$. The structure of\n$\eta_2$ observed in Fig. 2 is\na combination of these two effects.\n\n\n\nNote that $\eta_2$ is \nstrongly affected by the Bose-Einstein (BE) interference \nincorporated into the JETSET generator\footnote{Here and below,\nwe show JETSET predictions with the BE interference disabled after\nthe retuning of this model to describe global-shape variables.}.\nThis is not unexpected since \n$\eta_2\sim P_2\/ P^2_1$, which is very similar to the \ncorrelation functions used for Bose-Einstein studies. \n\nLet us recall that, in order to model the BE interference in JETSET, \nthe momenta of identical final-state\nparticles are shifted to reproduce the expected two-particle correlation\nfunction. The main disadvantage of \nsuch an {\em ad hoc} method is that it spoils overall energy-momentum conservation, \nand it is necessary also to modify the momenta of non-identical\nparticles to compensate for this. This effect in the JETSET model \ncan be seen in Fig. 2. 
\n\nThe strong anti-bunching tendency seen for\nunlike-charged particles at $M <30$ can\nbe attributed to resonance decays and\nto chain-like particle production along the thrust axis,\nas expected from the QCD-string model \\cite{oder1}.\nThe latter effect leads to local charge conservation with\nan alternating charge structure.\nEvidence for this effect was recently observed\nby DELPHI \\cite{oder2}.\nAs a result, there is a smaller\nrapidity separation between\nunlike-charged particles than between like-charged and\n$\\eta_2^{(+-)}$ is much larger than $\\eta_2^{(\\pm\\pm)}$\nat small $M$.\nHaving correlation lengths $\\delta y\\sim 0.5-1.0$ in rapidity,\nthe resonance and the charge-ordering\neffects, however,\nbecome smaller with increasing $M$. \n\nNote that to distinguish the NFMs\ncalculated for different charge combinations in a bin-splitting technique\nis difficult due to\ninsufficient sensitivity of this tool and a purely combinatorial\nreason.\n\n\\subsection{In the four-momentum difference}\n\nThe study of BPs described above can help us to understand \na tendency of the particles to be grouped into spikes inside small\nphase-space intervals. Another question is\nhow the multiplicity of these spikes fluctuates from\nevent to event when the spike size goes to zero. \nTo study this, we will use the BPs defined in (\\ref{l15}). 
\n\n\\begin{figure}[htbp]\n\\begin{center}\n\\mbox{\\epsfig{file=in1.eps, width=0.65\\linewidth}}\n\\end{center}\nFigure 3.\n{\\it Generalized integral BPs\nas a function of\nthe squared four-momentum difference $Q^2$\nbetween two charged particles.}\n\n\n\\begin{center}\n\\mbox{\\epsfig{file=in2.eps, width=0.65\\linewidth}}\n\\end{center}\nFigure 4.\n{\\it Generalized\nsecond-order BP\nas a function of\nthe squared four-momentum difference $Q^2$\nbetween two charged particles.\n}\n\\end{figure}\n\nFig.~3 shows the behavior of $\\chi_q$\nas a function of $-\\ln Q^2$.\nThe full lines represent the behavior of the BPs\nin the Poissonian case.\nIn contrast, all BPs obtained from the Monte Carlo models\nrise with increasing $-\\ln Q^2$ \n(decreasing $Q^2$).\nThis corresponds to a\nstrong bunching effect of all orders, as expected\nfor multifractal fluctuations.\nThe anti-bunching effect ($\\chi_q<1$) for small $-\\ln Q^2$ is\ncaused by the energy-momentum conservation constraint \\cite{bp2}.\n\n\nTo learn more about the mechanism of multiparticle fluctuations\nin $Q^2_{12}$ variable, we present in Fig.~4\nthe behavior of the second-order\nBP as a function of $-\\ln Q^2$ for\nmultiparticle hyper-tubes (spikes) made of\nlike-charged and those of unlike-charged particles, separately.\nA significant difference is observed for like-charged combinations \nbetween HERWIG and LUND MCs.\n\n\n\\section{Discussion}\n\\label{disc}\n\nLocal multiplicity\nfluctuations in Monte Carlo models have been\nstudied by means of bunching parameters.\nSince all high-order BPs show a power-like \nrise with decreasing the size of phase-space interval, \nnone of the conventional multiplicity distributions\ngiven in Table~1 can describe the local \nfluctuations observed in the MC models.\n\nFor $\\mbox{e}^+\\mbox{e}^-$ interactions, one can be confident\nthat, at least on the parton level of this reaction,\nperturbative QCD can give a\nhint for the understanding of the problem.\nAnalytical 
calculations based on the double leading-log approximation (DLLA) \nof perturbative QCD show that the multiplicity distribution\nof partons in ever smaller opening angles is\ninherently multifractal \cite{qcdd1}.\nQualitatively, this is consistent with our results on the BPs \nfor rapidity. Quantitatively, however, the QCD predictions\ndisagree with the $\mbox{e}^+\mbox{e}^-$ data and MC models \cite{qcd1}. \n\n\nIn this paper we show that the power-law behavior of the BPs\nis mainly due to like-charged particles.\nJETSET gives the same power-law trend even without\nthe BE effect. This means that the intermittency observed for\nlike-charged particles appears to be largely a consequence of \nQCD parton showers and hadronization. \n\nThe predictions of the ARIADNE 4.08 model are comparable with those\nof the JETSET 7.4 PS model. This is essentially\ndue to the same implementation\nof hadronization, which is based for both models\non string fragmentation.\n\nA noticeable disagreement, however, is\nfound between the LUND and HERWIG models.\nThe conversion of the partons into hadrons\nin the former models is based on the Lund String Model \cite{oder1}.\nHowever, the hadronization in HERWIG is \nmodelled with a cluster mechanism \cite{h56}.\nThis can be a rather natural candidate to explain the observed\ndifference between local fluctuations in these models. \nA particular concern is the large difference between the MCs \nfor $\eta_2$. The behavior of $\eta_2$ \nfor not very small intervals is sensitive to low-multiplicity events,\nfor which hadronization details could play a significant role.\n\n\n\section*{Acknowledgments}\n\nI wish to express my gratitude to \nW. Kittel and W.J. Metzger for many useful discussions. 
\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nRecently, essential progress has been made in understanding the\nnature of the mysterious kappa symmetry, which is the basic\ningredient of any self-consistent covariant super $p$-brane theory [1\n- 10].\n\nThe first decisive step in this direction has been made by Sorokin,\nTkach, Volkov and Zheltukhin (STVZ) \\cite{{a1},{a00}} in the case of\nsuperparticle ($p=0$). Replacing the superparticle momentum by a new\ntwistor variable (a commuting spinor), these authors reformulated the\nmassless $D=3,\\; 4$ superparticle actions in a manifestly world-line\nand target-space supersymmetric way. In this new formulation, the\nlocal kappa symmetry inherent to the original form of the action\nbecomes part of the diffeomorphism group of the world-line $N=1$ and\n$N=2$ superspace, respectively. More precisely, it is identified\nwith the odd sector of the group of local world-line superconformal\ntransformations and so it is realized as an off-shell symmetry of\nthe action. The latter is naturally written down in terms of the\nworld-line superfields.\n\nThese results have been further extended in [3 - 8]. Delduc and\nSokatchev \\cite{a2} have put the $D=3,\\;4$ STVZ actions in an\narbitrary curved background and have given a similar description for\nthe massless $D=6$ superparticle. In the case $D=4$ the key role of\nthe combined complex Grassmann analyticity (chirality) in the\nworld-line and target superspaces has been revealed. 
A new crucial\nfeature of the $D=6$ action is that it relies on the concept of\ndouble harmonic analyticity which generalizes the complex analyticity\nof the $D=4$ case: The target superspace is the analytic harmonic\n$D=6$ superspace \\cite{a9} and its coordinates are superfields living\nin an analytic subspace of the world-line harmonic $N=4$ superspace.\nIvanov and Kapustnikov \\cite{a3} have developed a twistor-type\nformulation for the massive $D=2$ superparticle, where the\ncorresponding kappa symmetry could once again be identified with\nlocal superconformal symmetry in the $N=1$ world-line superspace.\nBased on this formulation, they have also found a very simple and\nefficient algorithm for constructing higher-order supersymmetric and\nkappa invariant corrections to the minimal superparticle action. In\n\\cite{a4} this result has been extended to the case of the massive\n$D=3$ superparticle. Also, it has been observed [5, 6] that the\ntwistor-like superfield actions of the massive $D=2$, $D=3$\nsuperparticles follow via a dimensional reduction from those of the\nmassless $D=3$, $D=4$ superparticles and the harmonic superspace\naction for the massive $D=5$ superparticle has been derived in this\nway from the $D=6$ action of ref. \\cite{a2}. A group-theoretical\nanalysis of the STVZ construction has been undertaken by Galperin,\nHowe and Stelle \\cite{a5}. Howe and Townsend \\cite{a6} interpreted\nthe world-line superfield STVZ actions as those of supersymmetric\nChern-Simons mechanics.\n\n\nThough being very interesting and suggestive in their own right, the\ntwistor formulation of the superparticle and the new interpretation\nof the relevant kappa symmetry should be considered as preparatory\nstages before approaching the cases of super $p$-branes with $p \\geq\n1$ and, first of all, superstrings ($p=1$) \\cite{a10}. The first\nsteps in generalizing this approach to superstrings have been made in\nref. \\cite{{a00},{a7}}. 
Berkovits \\cite{a7} proposed a twistor-like\naction for the $N=1$ Green-Schwarz (GS) superstring which has the\nsame form for all classically allowed superstring dimensions.\nHowever, this action possesses only one manifest off-shell world-line\nsupersymmetry, while the kappa symmetry of the GS action is known to\ninvolve 1, 2, 4 and 8 Grassmann parameters for target spaces of\ndimension 3, 4, 6 and 10, respectively. Thus, the formulation of\n\\cite{a7} could only fully explain kappa symmetry for three\ntarget-space dimensions. The complete twistor action for the $D=4$\nGS superstring, with manifest $N=(2,0)$ world-sheet supersymmetry,\nhas been found in \\cite{a3} \\footnote {An $N=(2,0)$ world-sheet\nsupersymmetric twistor-like formulation of the $D=10$ superstring,\nwith two of the eight kappa supersymmetries traded for world-sheet\nsuperconformal ones, has been independently and simultaneously\nproposed by Tonin \\cite{a8}.}. Like Berkovits' action, it is written\nas a pure Wess-Zumino term in the world-sheet superspace, however the\nlatter is now identified with the complex chiral subspace of the\nworld-sheet $N=(2,0)$ superspace. Kappa symmetry turns out to be\ndirectly related to a restricted class of diffeomorphisms of the\nworld-sheet superspace, namely to $N=(2,0)$ superconformal\ntransformations with the parameters having an\n arbitrary dependence on the remaining (inert under supersymmetry)\nworld-line bosonic coordinate. The guiding principle in constructing\nthe twistor description of the $D=4$ superstring, just as in the case\nof the $D=4$ superparticle \\cite{a2}, was that of double Grassmann\nanalyticity. The coordinates of the target {\\it chiral} $D=4\\;\\; N=1$\nsuperspace were defined as {\\it chiral} superfields with respect to\nworld-sheet $N=(2,0)$ supersymmetry. 
In \\cite{a3} it has been\nsuggested that this principle could be applied in twistor-like\nformulations of other super $p$-branes with kappa invariance.\n\nThe main result of the present article (Section 4) is a twistor\nformulation of the $D=6\\;\\;N=1$ superstring based on a combination of\nthe ideas underlying the twistor descriptions of the $D=6\\;\\;N=1$\nsuperparticle and the $D=4\\;\\;N=1$ superstring. It is given in terms\nof the coordinates of the analytic harmonic $D=6$ target superspace,\nwhich are in turn world-sheet $N=(4,0)$ harmonic analytic\nsuperfields. The action consists of a Wess-Zumino term and two\nLagrange multiplier terms. The latter imply on-shell constraints on\nthe superfields, which restrict their harmonic dependence. We analyse\nthe component content of the theory and show that it is equivalent to\nthe GS superstring in six dimensions. The superstring kappa symmetry\nis identified with the odd sector of the $N=(4,0)$ superconformal\ngroup (gauged, as in the case of $D=4$ superstring, by the\nsupersymmtery-inert world-sheet coordinate). We show that by\nintroducing a gauge $N=(4,0)$ Beltrami supermultiplet, the action can\nbe made invariant under an extended world-sheet superdiffeomorphism\ngroup including {\\it arbitrary} bosonic world-sheet\nreparametrizations.\n\nFor convenience of the reader in Sections 2, 3 we recall the already\nknown examples. In Section 2 we review the simplest case of a\ntwistor-like superstring, the one with a $D=3$ target superspace and\n$N=(1,0)$ world-sheet supersymmetry. There one can already see the\nr\\^ole played by the twistor variables and the relationship between\nkappa symmetry and world-sheet supersymmetry. We study the\nsymmetries of the action and we show how to promote the right\nconformal invariance to full right diffeomorphisms by introducing a\nBeltrami gauge field. In Section 3 we present a modification of the\n$D=4\\;\\; N=(2,0)$ superstring of ref. 
\cite{a3}, in which the\nWess-Zumino term is given as a full superspace integral and the\non-shell condition on the superfields is included in the action with\na Lagrange multiplier.\n\n\n\section{D=3 superstring with N=(1,0) world-sheet supersymmetry }\n\nIn this section we review the formulation of twistor-like superstring\ntheories with $N=(1,0)$ world-sheet supersymmetry. Although this can\nbe done for the target-space dimensions $D=3,\;4,\;6,\;10$ \cite{a7},\nwe shall restrict ourselves to the case $D=3$. The reason is that\nwith just one world-sheet supersymmetry one can fully explain the\nkappa symmetry of a three-dimensional superstring only.\n\n\subsection{Superspace action}\n\nThe action we shall consider has $N=(1,0)$ supersymmetry, so we\nintroduce a Grassmann coordinate $\eta$ besides the light-cone\nworld-sheet coordinates $\xi^{(+)}, \xi^{(-)}$. The heterotic nature\nof $N=(1,0)$ supersymmetry means that we supersymmetrize only one of\nthe world-sheet directions, e.g., $\xi^{(-)}$. The corresponding\ncovariant spinor derivative on the world sheet is $D=\partial_\eta\n+i\eta\partial_{(-)}$, $D^2=i\partial_{(-)}$. The target superspace\ncoordinates $X^\mu (x,\eta)$, $\Theta^\alpha (x,\eta)$ are defined as\nworld-sheet superfields. $\Theta^\alpha$ is a $D=3$ Majorana spinor.\nIn the following, heavy use will be made of the\nrelation\footnote{This identity is valid in $D=3,\;4,\;6,\;10$. It\nis crucial for the consistency of the Green-Schwarz superstring\ntheories. At the same time, it lies at the basis of the twistor-like\napproach.} \begin{equation} (\gamma^\mu)_{\alpha\n(\beta}(\gamma_\mu)_{\delta\epsilon)} =0\;. 
\label{relg}\n\end{equation} The action describing the $D=3$ superstring is\n\begin{equation} S=\int d^2\xi d\eta\left[ \Pi_{(+)\mu}\nD\Theta\gamma^\mu\Theta -iP_\mu (DX^\mu-iD\Theta\gamma^\mu\Theta)\n\right],\label{action0}\end{equation}\n where \begin{equation} \Pi_{(+)}^\mu=\partial_{(+)}X^\mu\n-i\partial_{(+)}\Theta\gamma^\mu\Theta \; , \end{equation} and\n$P_\mu$ is a Lagrange multiplier superfield. This action is invariant\nunder global space-time supersymmetry transformations,\n$$\delta\Theta^\alpha=\epsilon^\alpha\;,\ \ \delta\nX^\mu=i\Theta\gamma^\mu\epsilon \;. $$ Indeed, up to total\nderivatives and making use of the identity (\ref{relg}), one can\nwrite down the variation of the first term $S_{1}$ of this action as\nfollows $$\delta S_{1} =\int d^2\xi d\eta \;\n(DX^\mu-iD\Theta\gamma^\mu\Theta)\;\partial_{(+)}\n\Theta\gamma_\mu\epsilon\; . $$ Clearly, this can be absorbed into a\nvariation of the Lagrange multiplier $P_\mu$ (the constraint\nintroduced by $P_\mu$ is invariant in its own right).\n\nThe action (\ref{action0}) is also invariant under a restricted class\nof left superdiffeomorphisms: \begin{equation} \delta\xi^{(-)} = \Lambda^{(-)} -\n{1\over 2}\eta D\Lambda^{(-)}, \ \ \ \delta\eta = -{i\over 2}\nD\Lambda^{(-)}. \label{left1}\end{equation} (the world-sheet coordinate\n$\xi^{(+)}$ is not affected by (\ref{left1})). They do not change the\nform of the spinor covariant derivative: $$ \delta D = -{1\over 2}\n\partial_{(-)}\Lambda^{(-)}\; D\;. $$ This transformation law is\nreminiscent of a superconformal transformation. However, the\n$\xi^{(+)}$-dependence of the superfield parameter\n$\Lambda^{(-)}(\xi^{(+)},\xi^{(-)},\eta)$ is not restricted and so\nthese transformations actually constitute a Kac-Moody extension of\nthe $N=(1,0)$ superconformal group. 
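The identity (\ref{relg}) can be checked numerically in $D=3$. In a Majorana-type basis where the matrices $(\gamma^\mu)_{\alpha\beta}$ (spinor indices lowered with the charge conjugation matrix) are real and symmetric, total symmetrization over the three spinor indices $\beta,\delta,\epsilon$ gives zero. The particular representation below is our own choice for illustration, not taken from the paper:

```python
import itertools
import numpy as np

# real symmetric (gamma^mu)_{alpha beta} in a D=3 Majorana-type basis,
# metric eta = diag(+,-,-); this representation is an assumption of ours
g_up = np.array([[[1., 0.], [0., 1.]],      # mu = 0
                 [[1., 0.], [0., -1.]],     # mu = 1
                 [[0., -1.], [-1., 0.]]])   # mu = 2
eta = np.diag([1., -1., -1.])
g_dn = np.einsum('mn,nab->mab', eta, g_up)  # lower the vector index

# T_{alpha beta delta eps} = (gamma^mu)_{alpha beta} (gamma_mu)_{delta eps}
T = np.einsum('mab,mcd->abcd', g_up, g_dn)

# symmetrize over the last three spinor indices; the identity says this vanishes
S = sum(np.transpose(T, (0,) + tuple(1 + i for i in p))
        for p in itertools.permutations(range(3)))
print(np.abs(S).max())  # -> 0.0
```

The vanishing is exact here, since all entries are small integers; the same check in $D=4,6,10$ requires the correspondingly larger spinor representations.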
Their active form on any scalar\nsuperfield $\phi$ is \begin{equation} \delta\phi\n=\Lambda^{(-)}\partial_{(-)}\phi - {i\over 2}D\Lambda^{(-)}D\phi\; .\n\label{ldiff0} \end{equation}\n\nTaking for the time being all fields to transform as scalars, one\nfinds, up to total derivatives and using again eq. (\ref{relg}):\n$$\delta S =-i\int d^2\xi d\eta\;\n(DX^\mu-iD\Theta\gamma^\mu\Theta)\left[ D \;(\n\partial_{(+)}\Lambda^{(-)}\; D\Theta\gamma_\mu\Theta) -{1\over\n2}\;\partial_{(+)}D\Lambda^{(-)}\; D\Theta\gamma_\mu\Theta\n\right]\;. $$ It is thus clear that, as in the previous case, one\ncan choose an additional variation of the Lagrange multiplier so that\nthe action (\ref{action0}) will be invariant.\n\nFinally, the action (\ref{action0}) is invariant under right\nconformal transformations: $$ \delta\xi^{(+)} =\n\Lambda^{(+)}(\xi^{(+)})\;, $$ the superfields $X^\mu$ and $\Theta$\nbeing scalars, and the Lagrange multiplier $P_\mu$ a density. The\nfact that this is not a full diffeomorphism invariance is an\nindication that one of the two Virasoro constraints does not follow\nfrom the action. This conformal invariance can be promoted to a full\nright diffeomorphism invariance, $\Lambda^{(+)} =\n\Lambda^{(+)}(\xi^{(+)},\xi^{(-)},\eta)$, by introducing a new field\nwhich will at the same time generate the missing constraint. To\nclarify this procedure, one could consider a simplified example. Take\na bosonic string with the following action: \begin{equation} S=\int d^2\xi\n\left(\partial_{(+)}X^{\mu}\partial_{(-)}X_{\mu}\n+\nu\partial_{(-)}X^{\mu}\partial_{(-)}X_{\mu}\right). \label{mu}\end{equation}\nThis action has the same type of invariances as (\ref{action0}),\nnamely\n full left diffeomorphisms and right conformal invariance. Note that\n the Lagrange multiplier $\nu$ is the gauge field for left\n diffeomorphisms. 
At the same time, it generates the Virasoro\n constraint $\partial_{(-)}X^{\mu}\partial_{(-)}X_{\mu}=0$. To\nrestore the right diffeomorphisms, one makes an arbitrary change of\ncoordinate from $\tilde\xi^{(+)}$ to a new coordinate\n$\xi^{(+)}=\xi^{(+)}(\tilde\xi^{(+)},\xi^{(-)})$. The Jacobian of\nthis transformation may be reabsorbed into a rescaling of the\nLagrange multiplier $\nu$. The only change is thus in the derivative\n$$\tilde\partial_{(-)}={\cal D}_{(-)}=\partial_{(-)}+\n\mu(\xi)\partial_{(+)}, \ \ \ \mu(\xi)=\n\tilde\partial_{(-)}\xi^{(+)}.$$ The new field $\mu$ is the gauge\nfield for the right diffeomorphisms. Further, the variation with\nrespect to $\mu$ of the action (\ref{mu}), with $\partial_{(-)}$\nreplaced by ${\cal D}_{(-)}$, produces the second Virasoro\nconstraint \begin{equation} \label{v2}\n\partial_{(+)}X^{\mu}\partial_{(+)}X_{\mu}+2\nu\partial_{(+)}X^{\mu}\n{\cal D}_{(-)}X_{\mu}=0.\end{equation} Effectively, the fields $\mu$ and $\nu$\nare two of the components of the world-sheet metric. The third one is\nnot present because it corresponds to a Weyl rescaling. In an\nanalogous way, we can covariantize the derivatives $D$,\n$\partial_{(-)}$ in the superstring action (\ref{action0}) as\nfollows: \begin{eqnarray} D\rightarrow {\cal D}=D+\chi\partial_{(+)}\n&&\nonumber\\ \partial_{(-)}\rightarrow {\cal D}_{(-)}= -i{\cal D}^2\n\;,&&\end{eqnarray} where the superfield $\chi$ transforms under\nright diffeomorphisms: \begin{equation}\n\delta_{right}\chi=-\Lambda^{(+)}\partial_{(+)} \chi+{\cal\nD}\Lambda^{(+)}\;,\ \ \ \Lambda^{(+)} =\n\Lambda^{(+)}(\xi^{(+)},\xi^{(-)},\eta)\;. 
\\label{rdiff}\n\\end{equation}\n\nThe action (\\ref{action0}) with these replacements made is still\ninvariant under global space-time supersymmetry, as well as under\nleft superdiffeomorphisms (\\ref{left1}) provided the derivatives in\nthe transformation law (\\ref{ldiff0}) are replaced by covariant\nderivatives, the transformation law for the Lagrange multiplier\n$P_\\mu$ is suitably modified and $\\chi$ is inert under (\\ref{left1})\n \\footnote{These transformation laws of $\\chi$ are compatible with\nthe Lie bracket structure: $ \\left[ \\delta_{left},\\;\\delta_{right}\n\\right] \\sim \\delta_{right}\\;.$} \\begin{equation}\\label{csc3} \\delta_{left} \\chi =\n0\\;. \\end{equation}\n\nNote that the so-covariantized left superdiffeomorphisms, being\nrewritten in a passive form, involve an induced field-dependent\ntransformation of the coordinate $\\xi^{(+)}$. In components this\nleads to a non-standard form of the world-sheet reparametrizations.\nBesides, these transformations close with a field-dependent bracket\nparameter. By making use of the invariance under $\\xi^{(+)}$\ndiffeomorphisms one may ``subtract'' this unwanted induced shift of\n$\\xi^{(+)}$ from the left diffeomorphisms and restore the original\nLie bracket structure. As a result of this redefinition, the\nvariation $\\delta_{left}\\chi$ becomes \\begin{equation}\n\\tilde{\\delta}_{left}\\chi=\\tilde{\\Lambda}^{(+)}\\partial_{(+)}\n\\chi-{\\cal D} \\tilde{\\Lambda}^{(+)}\\;, \\;\\;\\;\\;\\;\\;\\;\n\\tilde{\\Lambda}^{(+)} = i ( \\Lambda^{(-)}{\\cal D}\\chi + \\frac{1}{2}\n{\\cal D} \\Lambda^{(-)} \\chi )\\;. \\label{rconf} \\end{equation} We\npoint out that $\\delta_{left}$ and $\\tilde{\\delta}_{left}$ coincide\nmodulo a $\\xi^{(+)}$ diffeomorphism transformation.\n\nIn fact, the superfield $\\chi$ is a pure gauge.
In particular, using\nthe freedom in the parameter $\\Lambda^{(+)}$, one can fix a\nWess-Zumino gauge (and we shall assume it in what follows), where the\nonly surviving component of $\\chi$ is the Beltrami parameter\n$\\mu(\\xi)=-iD\\chi\\vert_{\\eta=0}$. Note that this does not restrict\nthe bosonic part of the world-sheet reparametrization.\n\nWe would like to point out that there is an alternative approach, in\nwhich one could have started with the complete formalism of $N=(2,0)$\nsupergravity on the world sheet (see, for example, \\cite{a8}). There\none introduces the full set of zweibeins and connections. However,\nmost of them simply drop out from the twistor-like action\n\\p{action0}. As we have just seen, the only gauge superfield really\nneeded for maintaining the gauge symmetries (local supersymmetry and\ndiffeomorphisms) is the Beltrami superfield.\n\n\n\\subsection{Component action. World-sheet conformal supersymmetry\nversus kappa symmetry}\n\n We shall denote the physical components of the superfields $X$,\n$\\Theta$ and $P$ by: $$x^\\mu(\\xi)=X^\\mu\\vert_{\\eta=0}\\;,\\ \\\n\\theta^\\alpha(\\xi)=\\Theta^\\alpha\\vert_{\\eta=0}\\;,\\ \\\n\\lambda^\\alpha(\\xi)=D\\Theta^\\alpha\\vert_{\\eta=0}\\;,\\ \\\np_\\mu(\\xi)=P_\\mu\\vert_{\\eta=0}$$ (we discard some purely auxiliary\nfields which are expressed in terms of the physical ones on shell).\nFurther, we introduce the notation:\n$$\\pi^\\mu_{(+)}=\\partial_{(+)}x^\\mu-i\n\\partial_{(+)}\\theta\\gamma^\\mu\\theta ,$$ $$\\hat{\\pi}^\\mu_{(-)}={\\cal\nD}_{(-)}x^\\mu-i {\\cal D}_{(-)}\\theta\\gamma^\\mu\\theta \\equiv\n\\pi^{\\mu}_{(-)} + \\mu \\pi^\\mu_{(+)}\\; ,$$ with ${\\cal\nD}_{(-)}=\\partial_{(-)}+ \\mu\\partial_{(+)}$.
Then it is not hard to\nobtain the component form of the action (\\ref{action0}): \\begin{equation} S=\\int\nd^2\\xi\\;\\left(\\pi_{(+)\\mu }\\;\\lambda\\gamma^\\mu\\lambda +i\\;\\pi_{(+)\\mu\n}\\; {\\cal D}_{(-)}\\theta\\gamma^\\mu\\theta\n-i\\;\\lambda\\gamma_\\mu\\lambda\\;\\partial_{(+)}\\theta \\gamma^\\mu\\theta\n +p_\\mu\\; (\\hat{\\pi}^\\mu_{(-)}-\\lambda\\gamma^\\mu\\lambda) \\right)\\;.\n\\label{local}\\end{equation} The left superdiffeomorphisms \\p{ldiff0}\n(covariantized with respect to $\\xi^{(+)}$ diffeomorphisms) contain,\nin particular, local left supersymmetry with parameter $\\rho=-{i\\over\n2}D\\Lambda^{(-)}\\vert_{\\eta=0}$. It acts on the above fields in the\nfollowing way: $$ \\delta x^\\mu =i\\rho\\;\\lambda\\gamma^\\mu\\theta\\;,\\ \\\n\\delta \\theta^\\alpha =\\rho\\;\\lambda^\\alpha \\;, $$ \\begin{equation}\n\\delta\\lambda^\\alpha =i\\rho\\;{\\cal D}_{(-)}\\theta^\\alpha\\;,\\ \\ \\delta p_\\mu\n=i\\partial_{(+)}\\left( \\rho\\;\\lambda\\gamma_\\mu\\theta \\right)\\;.\n\\label{loccon}\\end{equation}\n\n\nThe field $p_\\mu$ is a Lagrange multiplier for the constraint\n\\begin{equation} \\hat{\\pi}^\\mu_{(-)} =\\lambda\\gamma^\\mu\\lambda\\;.\n\\label{stv0} \\end{equation} With the help of the identity \\p{relg}\nthis implies that $\\hat{\\pi}^\\mu_{(-)}$ is a light-like vector:\n\\begin{equation} \\hat{\\pi}^\\mu_{(-)}\\hat{\\pi}_{\\mu(-)}=0\\;.\n\\label{light0} \\end{equation} In fact, eq. \\p{light0} is one of the\ntwo Virasoro constraints for the superstring (the second Virasoro\nconstraint can be obtained by varying the Beltrami parameter $\\mu$ in\n\\p{local}, see below). Here we see the main idea of the twistor\napproach in action: A light-like vector is represented as a bilinear\nof commuting spinors (twistor variables).
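For completeness, here is a sketch of the algebra behind the step from \\p{stv0} to \\p{light0} (the identity \\p{relg} is of this standard $D=3$ type; conventions here are ours). The symmetrized gamma-matrix identity\n$$ (\\gamma^\\mu)_{(\\alpha\\beta}(\\gamma_\\mu)_{\\gamma\\delta)}=0 $$\nimplies, upon contraction with four copies of the commuting spinor $\\lambda^\\alpha$ (which projects onto the totally symmetric part),\n$$ (\\lambda\\gamma^\\mu\\lambda)(\\lambda\\gamma_\\mu\\lambda)=0\\;, $$\nso that on the constraint surface \\p{stv0} one indeed finds $\\hat{\\pi}^\\mu_{(-)}\\hat{\\pi}_{\\mu(-)}=(\\lambda\\gamma^\\mu\\lambda)(\\lambda\\gamma_\\mu\\lambda)=0$.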
Further, we note that the\ntwistor variable $\\lambda^\\alpha$ appears in \\p{local} only in the\ncombination $\\lambda\\gamma^\\mu\\lambda$, so (\\ref{stv0}) may be used\nto eliminate it from the action, which then becomes:\n\\begin{equation} S=\\int d^2\\xi\\;\\left(\\pi_{\\mu\n(+)}\\hat{\\pi}^\\mu_{(-)} +i\\;\\pi_{\\mu (+)}\\;\n\\partial_{(-)}\\theta\\gamma^\\mu\\theta -i\\;\\hat\\pi_{\\mu (-)}\\;\n\\partial_{(+)}\\theta \\gamma^\\mu\\theta\\right)\\;.\n\\label{GS}\\end{equation} This action, accompanied by the constraint\n(\\ref{light0}), is just the action of the GS superstring \\cite{a10}.\nAs a consequence of the elimination of the twistor-like variable\n$\\lambda^\\alpha$, the action \\p{GS} has lost the local left\nsupersymmetry \\p{loccon} of the action \\p{local}. Instead, it has a\nnew local symmetry, \\begin{equation}\\label{kappa} \\delta x^\\mu =\ni\\kappa\\gamma^\\nu\\gamma^\\mu\\theta\\; \\hat{\\pi}_{\\nu(-)}\\;, \\ \\ \\\n\\delta\\theta^\\alpha = (\\kappa\\gamma^\\mu)^\\alpha \\hat{\\pi}_{\\mu(-)}\\;,\n\\end{equation} which is just the familiar kappa symmetry of the GS superstring.\nActually, the transformations \\p{kappa} can be recast in the form of\nlocal supersymmetry \\p{loccon}, if one defines $\\rho =\n\\lambda^\\alpha\\kappa_\\alpha$, and then uses the {\\it on-shell\ncondition} \\p{stv0}, as well as the Fierz identity for the $D=3$\ngamma matrices. This procedure shows that kappa symmetry is\nequivalent to world-sheet supersymmetry only on shell (hence also the\non-shell closure of the kappa-symmetry algebra). Further, we\nrecall the well-known fact that because of the presence of the\nlight-like vector $\\hat{\\pi}_{(-)}$ in \\p{kappa} only half of\n$\\kappa^\\alpha$ are true gauge parameters (they are used to gauge\naway half of $\\Theta^\\alpha$).
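The halving of $\\kappa^\\alpha$ can be made explicit by a standard argument, sketched here in our notation. On the constraint surface \\p{light0} the matrix $\\Pi\\equiv\\hat{\\pi}_{\\mu(-)}\\gamma^\\mu$ is nilpotent,\n$$ \\Pi^2 = \\hat{\\pi}_{\\mu(-)}\\hat{\\pi}_{\\nu(-)}\\,\\gamma^\\mu\\gamma^\\nu = \\hat{\\pi}^\\mu_{(-)}\\hat{\\pi}_{\\mu(-)}\\;{\\bf 1} = 0\\;, $$\nby the Clifford algebra $\\gamma^\\mu\\gamma^\\nu+\\gamma^\\nu\\gamma^\\mu=2\\eta^{\\mu\\nu}$, so $\\Pi$ has half-maximal rank. Since $\\delta\\theta^\\alpha=(\\kappa\\Pi)^\\alpha$, any parameter of the form $\\kappa=\\omega\\Pi$ drops out of \\p{kappa}, and only half of the components of $\\kappa^\\alpha$ act effectively.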
In the allowed dimensions\n$D=3,\\;4,\\;6,\\;10$ that half of $\\kappa^{\\alpha}$ can be matched by\n$N=(1,0),\\;(2,0),\\;(4,0),\\;(8,0)$ world-sheet supersymmetries. In\nthis section we consider $N=(1,0)$, so it was natural to associate it\nwith the $D=3$ superstring. As a matter of fact, the whole discussion\nabove applies to any of the dimensions $D=3,\\;4,\\;6,\\;10$, except for\nthe relationship between local world-sheet supersymmetry and kappa\nsymmetry.\n\nLet us explain in more detail how the second Virasoro constraint\nfollows from the action \\p{local}. After a redefinition of the\nLagrange multiplier as $$ p^{\\mu} = \\tilde{p}^{\\mu} -\ni\\;\\partial_{(+)}\\theta\\gamma^{\\mu}\\theta + \\pi_{(+)}^{\\mu}\\;, $$ the\naction takes the form \\begin{equation}\\label{local1} S=\\int\nd^2\\xi\\;\\left(\\pi_{(+)\\mu}\\hat{\\pi}_{(-)}^{\\mu} +i\\;\\pi_{(+)\\mu}\\;\n{\\cal D}_{(-)}\\theta\\gamma^\\mu \\theta\n-i\\;\\hat\\pi_{(-)\\mu}\\;\\partial_{(+)}\\theta\\gamma^\\mu \\theta\n +\\tilde{p}_\\mu\\; (\\hat{\\pi}^\\mu_{(-)}-\\lambda\\gamma^\\mu\\lambda)\n\\right)\\;. \\end{equation} Varying \\p{local1} with respect to $\\tilde{p}_{\\mu}$\nleads to the already known twistor constraint \\p{stv0}, whose\ncorollary is eq. \\p{light0}, while varying with respect to $\\mu$\ngives \\begin{equation}\\label{rvc1} (\\tilde{p}_{\\mu} +\n\\pi_{(+)\\mu})\\;\\pi_{(+)}^{\\mu} = 0 \\;. \\end{equation} Let us now look at the\nequation of motion for $\\lambda^\\alpha$: $$ \\tilde{p}_{\\mu}\n(\\gamma^{\\mu} \\lambda)_\\alpha = 0\\;. $$ It is well known [1, 2, 6,\n7] that for $D=3$ (as well as for $D=4,\\;6,\\;10$) this equation has\nthe general solution \\begin{equation}\\label{tws} \\tilde{p}^{\\mu} = c\n\\lambda\\gamma^{\\mu}\\lambda\\;, \\end{equation} where $c$ is an arbitrary scalar\nfield on the world sheet.
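A sketch of why \\p{tws} is the general solution (a standard twistor argument; conventions are ours). Contracting the equation of motion with $\\tilde{p}_\\nu\\gamma^\\nu$ and using the Clifford algebra gives\n$$ \\tilde{p}_\\nu\\gamma^\\nu\\;\\tilde{p}_\\mu\\gamma^\\mu\\,\\lambda = \\tilde{p}^\\mu\\tilde{p}_\\mu\\,\\lambda = 0\\;, $$\nso $\\tilde{p}^\\mu$ is light-like (for $\\lambda\\neq 0$). In $D=3$ every light-like vector is a bilinear $\\lambda'\\gamma^\\mu\\lambda'$ of some commuting spinor, and substituting this back into $\\tilde{p}_{\\mu}(\\gamma^{\\mu}\\lambda)_\\alpha=0$ and Fierzing forces $\\lambda'\\propto\\lambda$, which is precisely \\p{tws}.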
Comparing \\p{tws} with \\p{stv0} we find $$\n\\tilde{p}^{\\mu} = c \\hat{\\pi}_{(-)}^{\\mu}\\;.$$ Further, substituting\nthis into eq. \\p{rvc1} we see that \\p{rvc1} is just the second\nVirasoro constraint \\p{v2}, provided one identifies $$ c = 2\\nu\\;. $$\n\n\nWe conclude this section with the remark that the first term in the\naction (\\ref{action0}) is in fact a typical Wess-Zumino\nterm\\footnote{The presence of a WZ term is a characteristic feature\nof the GS superstring \\cite{mez}.}. This becomes clear after\nrewriting the action in the form: \\begin{equation} S=\\int d^2\\xi\nd\\eta\\left[ ({\\cal D}\\Theta^\\alpha\\partial_{(+)}X^\\mu -{\\cal\nD}X^\\mu\\partial_{(+)}\\Theta^\\alpha) (\\gamma_\\mu\\Theta)_\\alpha -iP_\\mu\n({\\cal D}X^\\mu-i{\\cal D}\\Theta\\gamma^\\mu\\Theta) \\right]\\;,\n\\end{equation}\n obtained by a redefinition of the Lagrange multiplier $P_\\mu$.\nIndeed, the first term now looks like a Wess-Zumino term $\\int\n\\partial Z^M \\partial Z^N B_{NM}(Z)$, where the only non-vanishing\ncomponent of the two-form is $B_{\\mu\\alpha} =\n(\\gamma_\\mu\\Theta)_\\alpha$.\n\n\n\\section{D=4 superstring with N=(2,0) world-sheet supersymmetry}\n\nThe formulation of the $D=4\\;\\; N=(2,0)$ superstring presented in\nthis section is equivalent to the one of ref. \\cite{a3}, the main\ndifference is that the WZ term is given as a full world-sheet\nsuperspace integral, and not as a chiral one, as in \\cite{a3}. This\nallows us to introduce the on-shell constraints on the superfields\n$X$ and $\\Theta$ in the action via a Lagrange multiplier, so the new\naction involves only unconstrained objects. At the end of this\nsection we show how the formulation of \\cite{a3} can be obtained from\nthe present one.\n\nWe begin by introducing an $N=(2,0)$ world-sheet superspace with\ncoordinates $\\xi^{(+)}, \\xi^{(-)}, \\eta, \\bar\\eta$. As before,\nsupersymmetry acts only in the direction of the coordinate\n$\\xi^{(-)}$.
Further, following the principle of double Grassmann\nanalyticity formulated in the Introduction, we introduce the\ncoordinates of the left and right chiral bases in $D=4$ superspace,\n$X^\\mu_L$, $\\Theta^\\alpha$ and $X^\\mu_R = \\overline{X^\\mu_L}$,\n$\\bar\\Theta^{\\dot\\alpha} = \\overline{\\Theta^\\alpha}$, as left and\nright-handed chiral world-sheet superfields, respectively:\n\\begin{equation}\\label{chirality} \\bar D X^\\mu_L = \\bar D \\Theta^\\alpha = 0\\;,\n\\ \\ \\ D X^\\mu_R = D \\bar\\Theta^{\\dot\\alpha} = 0\\;, \\end{equation} where $$ D =\n\\partial\/\\partial\\eta + i\\bar\\eta\\partial\/\\partial\\xi^{(-)}\\;, \\ \\\n\\bar D = -\\partial\/\\partial\\bar\\eta -\ni\\eta\\partial\/\\partial\\xi^{(-)}\\;. $$ The usual, real coordinate\n$X^\\mu$ is identified with the real part of the chiral ones, $$ X^\\mu\n= {1\\over 2}(X^\\mu_L + X^\\mu_R), $$ after restricting the imaginary\npart to be \\begin{equation}\\label{onshell} {i\\over 2}(X^\\mu_L - X^\\mu_R) =\n-\\Theta\\sigma^\\mu\\bar\\Theta \\;. \\end{equation} The constraint \\p{onshell} is in\nfact an equation of motion (see \\cite{a2} for the analogous case of\nthe $D=4\\;\\; N=2$ superparticle), so we are going to introduce it in\nthe action with a Lagrange multiplier (cf. \\p{action0}).\n\nWe propose the following action for the $D=4\\;\\; N=(2,0)$\nsuperstring: $$ S ={1\\over 2} \\int d^2\\xi d\\eta d\\bar\\eta\n\\Big[(\\partial_{(+)} X^\\mu_L + \\partial_{(+)} X^\\mu_R\n-i\\partial_{(+)}\\Theta\\sigma^\\mu\\bar\\Theta +\ni\\Theta\\sigma^\\mu\\partial_{(+)}\\bar\\Theta) \\Theta\\sigma_\\mu\\bar\\Theta\n$$ \\begin{equation} + P_\\mu({i\\over 2}(X^\\mu_L - X^\\mu_R) +\n\\Theta\\sigma_\\mu\\bar\\Theta) \\Big] \\;. \\label{action4} \\end{equation} Comparing\nthis action with that from \\cite{a3} (see \\p{chiralWZ}), we see that\nthe main difference is in the first (WZ) term in \\p{action4}.
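As a quick consistency check (a sketch, in our conventions), both sides of \\p{onshell} are real. Since $X^\\mu_R=\\overline{X^\\mu_L}$,\n$$ \\overline{{i\\over 2}(X^\\mu_L - X^\\mu_R)} = -{i\\over 2}(X^\\mu_R - X^\\mu_L) = {i\\over 2}(X^\\mu_L - X^\\mu_R)\\;, $$\nwhile $\\Theta\\sigma^\\mu\\bar\\Theta$ is real by the standard property of the $\\sigma$-matrix bilinear, so the constraint \\p{onshell} is compatible with complex conjugation.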
Note\nalso that the Lagrange multiplier term in \\p{action4} has exactly the\nsame form as the action for the $D=4\\;\\; N=2$ superparticle of ref.\n\\cite{a2}.\n\nThe action \\p{action4} has several symmetries.\n\nFirstly, it has global target-space supersymmetry. In the left-handed\nchiral basis of $D=4$ superspace it is realized as follows:\n\\begin{equation}\\label{susy4} \\delta X^\\mu_L = 2i\\Theta\\sigma^\\mu\\bar\\epsilon\\;,\n\\ \\ \\ \\delta\\Theta^\\alpha = \\epsilon^\\alpha\\;, \\end{equation} and similarly for\nthe right-handed basis. The invariance of the Lagrange multiplier\nterm in \\p{action4} is obvious (for the moment, we do not vary\n$P^{\\mu}$). To check the invariance of the WZ term one has to use the\nchirality conditions \\p{chirality}, the Fierz identity for the\nmatrices $\\sigma^\\mu$ and ascribe the following transformation law\nto the Lagrange multiplier: $$ \\delta P^\\mu = 2i (\\partial_{(+)}\n\\Theta\\sigma^\\mu\\bar\\epsilon - \\epsilon \\sigma^\\mu \\partial_{(+)}\n\\bar\\Theta) \\;. $$\n\nSecondly, the action \\p{action4} is invariant under restricted left\nsuperdiffeomorphisms: \\begin{equation}\\label{left4} \\delta\\xi^{(-)}\n=\\Lambda^{(-)} +{1\\over 2} \\bar\\eta\\bar D \\Lambda^{(-)} + {1\\over\n2}D\\Lambda^{(-)}\\eta\\;, \\ \\ \\delta\\eta = {i\\over 2} \\bar D\n\\Lambda^{(-)}\\;, \\ \\ \\delta\\bar\\eta = -{i\\over 2} D \\Lambda^{(-)}\\;,\n\\end{equation} where $\\Lambda^{(-)}\\;(\\xi^{(+)},\\xi^{(-)},\\eta,\\bar\\eta) =\n\\overline{\\Lambda^{(-)}}$. These transformations leave the volume of\nthe real world-sheet superspace invariant, $$ \\delta (d^2\\xi d\\eta\nd\\bar\\eta) = 0\\;, $$ and transform the left and right-handed\ncoordinates of the target superspace as scalars. Consequently, one\nfinds $$ \\delta (\\partial_{(+)} X^\\mu_L) = -{i\\over 2} (\\bar D\n\\partial_{(+)} \\Lambda^{(-)}) DX^\\mu_L - (\\partial_{(+)}\\Lambda^{(-)})\n\\partial_{(-)}X^\\mu_L\\;, $$ etc.
Using all this, as well as the\non-shell relation \\p{onshell} (in other words, this means finding\nappropriate compensating transformations of the Lagrange multiplier),\none can show that the WZ term in \\p{action4} is invariant up to total\nderivatives. The Lagrange multiplier term is manifestly invariant\ntoo. Like in the case $D=3$, the transformations \\p{left4} constitute\nan $N=(2,0)$ world-sheet superconformal group with the parameters\nlocal in the supersymmetry-inert coordinate $\\xi^{(+)}$. Their relation\nto kappa supersymmetry is precisely the same as in the case of the $D=3$\nsuperparticle (details can be found in \\cite{a3}). Note that\n\\p{left4} leave invariant the chiral subspace $(\\xi^{(-)}_{L} =\n\\xi^{(-)} - i\\bar\\eta \\eta\\;, \\eta)$ and can be regarded as a\nparticular class of general diffeomorphisms of the latter,\n\\begin{equation}\\label{left5} \\delta \\xi^{(-)}_{L} = \\tilde {\\Lambda}^{(-)}\n(\\xi^{(-)}_{L}, \\eta, \\xi^{(+)}) \\;, \\;\\;\\;\\;\\;\\delta \\eta = \\omega\n(\\xi^{(-)}_{L}, \\eta, \\xi^{(+)})\\;, \\end{equation} which preserve the flat\ndefinition of the world-sheet chiral subspace \\begin{equation}\\label{chir} {\\rm Im}\n\\;\\xi^{(-)}_{L} = -i\\bar{\\eta}\\eta\\;. \\end{equation}\n\nFinally, the action \\p{action4} is obviously invariant under right\nconformal reparametrizations of $\\xi^{(+)}$. At the end of this\nsection we shall sketch how these can be promoted to general ones.\n\nHere we shall not investigate the component content of the action\n\\p{action4}. Instead, we shall show how it can be reduced to the\nchiral action of ref. \\cite{a3}, where the issue of components has\nbeen discussed. Firstly, we impose the equation of motion\n\\p{onshell}. Thus the Lagrange multiplier term in \\p{action4} drops\nout, but the fields become constrained. Secondly, we convert one of\nthe Grassmann integrations, e.g., $d\\bar\\eta$ into a spinor\nderivative, $\\bar D$.
Using the relation $\\bar D X^\\mu_R = 2i\n\\Theta\\sigma^\\mu\\bar D \\bar \\Theta$ following from \\p{onshell} and\nintegrating $\\partial_{(+)}$ by parts, we obtain the chiral form of\nthe WZ term from \\cite{a3}: \\begin{equation}\\label{chiralWZ} S_{WZ} = - \\int\nd\\xi^{(+)}d\\xi^{(-)}_L d\\eta (\\partial_{(+)} X^\\mu +\ni\\Theta\\sigma^\\mu \\partial_{(+)}\\bar \\Theta - i \\partial_{(+)}\n\\Theta\\sigma^\\mu \\bar\\Theta) \\Theta\\sigma_\\mu\\bar D\\bar\\Theta \\;.\n\\end{equation} Note that the chirality of the integrand in \\p{chiralWZ} and the\nreality of the action are not manifest, but they follow from the\non-shell relation \\p{onshell}.\n\nThe WZ term \\p{chiralWZ} has the standard geometric form $\\int\n\\partial Z^M \\partial Z^N B_{NM}$, where the two-form is represented\nby $B_{\\mu\\dot\\alpha} = (\\Theta\\sigma_\\mu)_{\\dot\\alpha}$. In\ncontrast, what we find in the real form \\p{action4} of the WZ term is\nnot the two-form itself, but its chiral-basis prepotential\n$\\Theta\\sigma_\\mu \\bar\\Theta$, $B_{\\mu\\dot\\alpha} =-\\bar\nD_{\\dot\\alpha} (\\Theta\\sigma_\\mu\\bar\\Theta)$.\n\nFinally, we outline very briefly how the $\\xi^{(+)}$ conformal\ninvariance of the $D=4$ action \\p{action4} can be extended to full\ndiffeomorphisms. Like in the case $D=3$, this can be done by\nintroducing into the action \\p{action4} an $N=(2,0)$ Beltrami\nsuperfield. A new point compared to $D=3$ is that the right\nsuperdiffeomorphism group should preserve the notion of world-sheet\nchirality. The natural way to satisfy this requirement is to apply\nthe approach of ref. \\cite{a11} to $N=1\\;\\; D=4$ supergravity in\nsuperspace (for applications to $N=2$ world-sheet geometry in the\ncontext of $N=2$ chiral bosons see also ref. \\cite{a12}).
One\nchanges the coordinate $\\xi^{(+)}$ to a complex one $\\xi^{(+)}_{L} =\n\\xi^{(+)} + i \\chi^{(+)}(\\xi^{(-)}, \\xi^{(+)}, \\bar{\\eta}, \\eta)$ and\nreplaces the conformal shifts of $\\xi^{(+)}$ by {\\it chiral}\ndiffeomorphisms of $\\xi^{(+)}_{L}$, $$ \\delta \\xi^{(+)}_{L} =\n\\Lambda^{(+)} (\\xi^{(+)}_{L}, \\xi^{(-)}_{L}, \\eta)\\;. $$ Here\n$\\Lambda^{(+)}$ is an arbitrary complex function of its arguments\nand, as before, $\\xi^{(-)}_{L} = \\xi^{(-)} -i \\bar{\\eta}\\eta$. The\nreal superfield $\\chi^{(+)}$ accommodates the $N=(2,0)$ Beltrami gauge\nmultiplet. In the Wess-Zumino gauge it reduces to $\\chi^{(+)} =\n-\\bar{\\eta}\\eta\\; \\mu (\\xi)$, where $\\mu$ is the Beltrami parameter,\nthe same as in the case $D=3$. The covariantized form of the left\ndiffeomorphisms can be obtained by replacing $\\xi^{(+)}$ by\n$\\xi^{(+)}_{L}$ in the transformation laws \\p{left5} and requiring\nthat the modified transformations still preserve the flat relation\n\\p{chir}. The resulting transformation laws are nonlinear and\nnonpolynomial in $\\chi^{(+)}$ but they radically simplify in the\nWess-Zumino gauge. The only place in the action \\p{action4} where\nthe new Beltrami superfield $\\chi^{(+)}$ appears is in the\ncoordinates of the chiral superfields $X_L$, $X_R$, $\\Theta$ and\n$\\bar\\Theta$. The details of the construction are beyond the scope\nof our presentation here. We only note that the action considered in\n\\cite{a3} can be viewed as a particular gauge-fixed form of the\nBeltrami-covariantized action, with the whole of $\\chi^{(+)}$ gauged\nto zero.\n\n\\section{D=6 superstring with N=(4,0) world-sheet supersymmetry}\n\n\\subsection{Superspace action and symmetries}\n\nThe world-sheet super-coordinates of the $N=(4,0)$ superstring will\nbe denoted by $\\xi^{(+)}$, $\\xi^{(-)}$, $\\eta^i$, $\\bar\\eta_i$ ($i$\nis an $SU(2)$ doublet index).
To those we add the harmonic\ncoordinates\\footnote{The Lorentz weights $(\\pm)$ should not be\nconfused with the $U(1)$ charges $\\pm$.} $u^\\pm_i$ of the sphere\n$S^2$ (see \\cite{a9}). They are used to project the supersymmetric\ncovariant derivatives: $$\\{D_i\\;,\\bar\nD^j\\}=i\\delta^j_i\\partial_{(-)}\\ \\ \\rightarrow \\ \\ D^\\pm = u^\\pm_i\nD^i\\;,\\ \\ \\bar D^\\pm = u^\\pm_i \\bar D^i\\;.\n $$ The usual target superspace of the $D=6$ superstring has\ncoordinates $X^{\\alpha\\beta} = -X^{\\beta\\alpha}\\equiv\n(\\gamma_\\mu)^{\\alpha\\beta} X^\\mu$ and $\\Theta^{i\\alpha}$. Here\n$\\alpha, \\beta$ are $SU(4)^*$ spinor indices and $\\Theta$ satisfies\nthe pseudo-Majorana condition $\\overline{\\Theta^{i\\alpha}} =\n\\epsilon_{ij}C_{\\beta}^{\\dot\\alpha} \\Theta^{j\\beta}$.\n\nAs we saw in the preceding section, the $D=4$ superstring is\nformulated in terms of the coordinates of the chiral subspaces of\nthe target superspace. At the same time, they are taken as chiral\nsuperfields with respect to world-sheet supersymmetry. This is\nactually the principle of double Grassmann analyticity mentioned in\nthe Introduction. In the case $D=6$ the notion corresponding to $D=4$\nchirality is that of $SU(2)$ harmonic Grassmann analyticity\n\\cite{a9}.
Following this idea,\\footnote{The same approach proved\nsuccessful in the case of the $D=6$ superparticle \\cite{a2}.} we\nchoose to formulate the $D=6$ superstring in terms of the coordinates\n$X^{\\alpha\\beta}(\\xi,\\eta^+,u)$, $\\Theta^{+\\alpha}(\\xi,\\eta^+,u)$\n(where $\\widetilde{\\Theta^{+\\alpha}}=C_{\\beta}^{\\dot\\alpha}\n\\Theta^{+\\beta}$), defined as analytic harmonic superfields: $$D^+\nX^{\\alpha\\beta}=\\bar D^+ X^{\\alpha\\beta}=0\\;,\\ \\ D^+\n\\Theta^{+\\alpha}=\\bar D^+ \\Theta^{+\\alpha}=0\\; .$$ Such superfields\ncan be considered as unconstrained objects in the analytic subspace\n$\\xi^{(+)}\\;,$ $\\xi^{(-)}_A = \\xi^{(-)} + i\\eta^i\\bar\\eta^j\nu^+_{(i}u^-_{j)}\\;,$ $\\eta^+ = u^+_i\\eta^i\\;,$ $\\bar\\eta^+ =\nu^+_i\\bar \\eta^i$. Later on, after imposing the harmonic equations\nof motion, the usual coordinate $\\theta^{i\\alpha}$ will reappear as\nthe first term of the harmonic expansion of $\\Theta^{+\\alpha}$.\n\nWe shall consider the following action for the $D=6\\;\\; N=(4,0)$\nsuperstring: $$ S=\\int d^2\\xi [du]d^2\\eta^+\\left[ \\partial_{(+)}\nX^{\\alpha\\beta} \\Theta^{+\\gamma}\\Theta^{+\\delta}\n\\epsilon_{\\alpha\\beta\\gamma\\delta}\\right. $$ \\begin{equation} \\left.\n+P_{\\alpha\\beta}(D^{++}X^{\\alpha\\beta}-i\n\\Theta^{+\\alpha}\\Theta^{+\\beta}) +Q^-_\\alpha\nD^{++}\\Theta^{+\\alpha}\\right]\\;. \\label{action} \\end{equation} Here $D^{++} =\nu^{+i}\\partial\/\\partial u^{-i} + i\\eta^+\\bar\\eta^+\\partial_{(-)}$ is\nthe analytic basis form of the harmonic covariant derivative. The\nsuperfields $P_{\\alpha\\beta}$ and $Q^-_\\alpha$ are analytic Lagrange\nmultipliers restricting the $u$-dependence of the on-shell fields.\nThe last two terms in \\p{action} are exactly the same as in the case\nof the $D=6\\;\\; N=4$ superparticle (see \\cite{a2}).\n\nThis action has the three symmetries we already encountered in the\npreceding Sections.
These are:\n\n1) Global space-time supersymmetry: $$\\delta\nX^{\\alpha\\beta}=i(\\epsilon^{-\\alpha}\n\\Theta^{+\\beta}-\\epsilon^{-\\beta}\n\\Theta^{+\\alpha})\\;,\\ \\ \\delta\\Theta^{+\\alpha}= \\epsilon^{+\\alpha}$$\nwith $\\epsilon^{\\pm\\alpha}=u^\\pm_i\\epsilon^{i\\alpha}$, and\n$\\epsilon^{i\\alpha}$ is an $SU(2)$-Majorana spinor parameter. Up to\ntotal derivatives, the variation of the action (\\ref{action}) is (for\nthe moment, $P_{\\alpha\\beta}$ and $Q^{-}_{\\alpha}$ are not varied):\n$$ \\delta S = \\int d^2\\xi [du]d^2\\eta^+\\left\\{\n\\left[2(D^{++}X^{\\alpha\\beta}-i\n\\Theta^{+\\alpha}\\Theta^{+\\beta})\\epsilon^{-\\gamma}\n\\partial_{(+)}\\Theta^{+\\delta}\\right.\\right. $$ \\begin{equation} \\left.\\left.\n-2\\partial_{(+)} X^{\\alpha\\beta}\\epsilon^{-\\gamma}\nD^{++}\\Theta^{+\\delta}\\right] \\epsilon_{\\alpha\\beta\\gamma\\delta}\n+2iP_{\\alpha\\beta}\\epsilon^{-\\alpha} D^{++}\\Theta^{+\\beta}\\right\\}\\;,\n\\end{equation} which may be compensated by the following variations of the\nLagrange multipliers: $$\\delta\nP_{\\alpha\\beta}=-2\\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\n\\epsilon^{-\\gamma} \\partial_{(+)}\\Theta^{+\\delta}\\;,\\ \\ \\delta\nQ^-_\\alpha =-2 \\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\n\\partial_{(+)}X^{\\beta\\gamma}\\epsilon^{-\\delta}\n+2iP_{\\alpha\\beta}\\epsilon^{-\\beta}\\;.$$\n\n2) Left superdiffeomorphisms. These may be written as\nactive\\footnote{Their passive form acting on the coordinates of the\nworld-sheet superspace can be found in \\cite{a2}.
There a detailed\ndiscussion of the $N=4$ world-sheet superconformal group is given\n(see also \\cite{a5}).} transformations on the fields as follows:\n\\begin{equation} \\delta X^{\\alpha\\beta}=iD^+\\bar D^+ (\\Lambda^{(-)}\nD^{--} X^{\\alpha\\beta})\\;,\\ \\ \\delta\\Theta^{+\\alpha}=iD^+\\bar D^+\n(\\Lambda^{(-)} D^{--} \\Theta^{+\\alpha})\\;, \\label{diff}\n\\end{equation} where $\\Lambda^{(-)}$ is a $u$-independent\n($D^{++}\\Lambda^{(-)} = 0$) superfield satisfying the constraint\n\\begin{equation} D^+\\bar D^+\\Lambda^{(-)}=0\\;. \\label{cons}\n\\end{equation}\n Notice that the $\\xi^{(+)}$-dependence of $\\Lambda^{(-)}$ is not\nrestricted, just as in the cases considered previously. The\ncomponents of $\\Lambda^{(-)}$ are easily shown to be the parameters\nof left diffeomorphisms, local $N=4$ left supersymmetry and local\n$SU(2)$ transformations (see \\cite{a2}, \\cite{a5}). Assuming for the\ntime being that the Lagrange multipliers transform according to the\nstandard law \\p{diff}, the variation of the action (\\ref{action})\nunder these transformations is given, up to total harmonic\nderivatives, by \\begin{equation} \\delta S = i\\int d^2\\xi\n[du]d^4\\eta\\; \\partial_{(+)}\\Lambda^{(-)} D^{--}X^{\\alpha\\beta}\n\\Theta^{+\\gamma}\\Theta^{+\\delta}\n\\epsilon_{\\alpha\\beta\\gamma\\delta}\\;.
\\label{transf} \\end{equation}\nOne has to work a little in order to show that this expression can be\ncompensated by choosing the full transformations of the Lagrange\nmultipliers to be as follows: \\begin{eqnarray} &&\\delta\nP_{\\alpha\\beta}=iD^+\\bar D^+ \\left[ \\Lambda^{(-)} D^{--}\nP_{\\alpha\\beta}-\\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\n\\partial_{(+)}\\Lambda^{(-)} D^{--}\n(\\Theta^{+\\gamma}D^{--}\\Theta^{+\\delta})\\right] \\nonumber\\\\ && \\delta\nQ^-_\\alpha =iD^+\\bar D^+ [ \\Lambda^{(-)} D^{--}\nQ^-_\\alpha+\\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\n\\partial_{(+)}\\Lambda^{(-)}((D^{--})^2X^{\\beta\\gamma}\n\\Theta^{+\\delta} \\\\ &&\n -{2\\over 3} \\Theta^{+\\beta}\\Theta^{+\\gamma}\n(D^{--})^3\\Theta^{+\\delta} - {2\\over 9}(D^{--})^3\n(\\Theta^{+\\beta}\\Theta^{+\\gamma}\\Theta^{+\\delta})) ]\\;.\n\\label{diffp}\\nonumber \\end{eqnarray}\n\n\n 3) Right conformal invariance. The action is invariant under right\nconformal transformations with parameter $\\Lambda^{(+)}(\\xi^{(+)})$\nprovided the fields $X$ and $\\Theta$ transform as scalars and the\nLagrange multipliers transform as densities. Like in the cases $D=3$\nand $D=4$, this invariance can be promoted to a right\nsuperdiffeomorphism one, this time by changing\n$\\Lambda^{(+)}(\\xi^{(+)})$ to a general analytic superfield, if one\nintroduces an analytic einbein $\\chi^{++(+)}$ and replaces the\nharmonic derivative $D^{++}$ by a covariant one:\n$$D^{++}\\rightarrow{\\cal D}^{++}=D^{++}+\\chi^{++(+)} \\partial_{(+)}$$\nwith $\\delta \\chi^{++(+)}=-{\\cal D}^{++}\\Lambda^{(+)}+ \\Lambda^{(+)}\n\\partial_{(+)}\\chi^{++(+)}$. 
The action (\\ref{action}) with this\nreplacement made is still globally space-time supersymmetric, and\ninvariant under left superdiffeomorphisms, provided the harmonic\nderivative $D^{--}$ in (\\ref{diff}) is replaced by a covariant one:\n$$D^{--}\\rightarrow{\\cal D}^{--}=D^{--}+\\chi^{--(+)} \\partial_{(+)}$$\nwith: $$[{\\cal D}^{++},{\\cal D}^{--}] =D^0 \\Rightarrow {\\cal\nD}^{++}\\chi^{--(+)}-{\\cal D}^{--}\\chi^{++(+)}=0.$$ This equation\ndetermines $\\chi^{--(+)}$ as a function of $\\chi^{++(+)}$. The\nparameter $\\Lambda^{(-)}$ in (\\ref{diff}), (\\ref{diffp}) is now\ncovariantly $u$-independent: $${\\cal D}^{++}\\Lambda^{(-)}={\\cal\nD}^{--}\\Lambda^{(-)}=0.$$ In the Wess-Zumino gauge, the only\nsurviving component of $\\chi^{++(+)}$ is $$\\mu(\\xi)=i\\int [du]D^-\\bar\nD^-\\chi^{++(+)} \\vert_{\\eta = 0} ,$$ which transforms under right\ndiffeomorphisms as $$\\delta \\mu(\\xi)=-\\partial_{(-)}\\Lambda^{(+)}(\\xi)-\n\\mu(\\xi) \\partial_{(+)}\\Lambda^{(+)}(\\xi)-\\Lambda^{(+)}(\\xi)\\partial_{(+)}\n\\mu(\\xi).$$\n\n\n\\subsection{Component action}\n\nIn order to find out the component content of the superfield action\n\\p{action} one has first to get rid of the harmonic dependence of the\nsuperfields in it. This is achieved using the harmonic constraints\nintroduced in the action with the Lagrange multipliers $P$ and $Q$\n(see \\cite{a2} for the details).
Eliminating those infinite sets of\nauxiliary fields and using the Wess-Zumino gauge for $\\chi^{++(+)}$\njust discussed, we are left with the following component fields:\n$$x^{\\alpha\\beta}(\\xi)=\\int [du]X^{\\alpha\\beta}\\vert_{\\eta=0}\\;,\n\\ \\ \\ \\theta^{i\\alpha} (\\xi)=2\\int [du]u^{-i}\n\\Theta^{+\\alpha}\\vert_{\\eta=0}\\;,$$ $$\\lambda^\\alpha =\\int\n[du]D^-\\Theta^{+\\alpha}\\vert_{\\eta=0}\\;,\\ \\ \\ \\bar\\lambda^\\alpha =\\int\n[du]\\bar D^-\\Theta^{+\\alpha}\\vert_{\\eta=0}\\;,$$ $$\\sigma_{\\alpha\\beta}=\\int\n[du]P_{\\alpha\\beta} \\vert_{\\eta=0}\\;.$$\n Let us introduce a notation similar to the one used in section 2:\n$$\\pi^{\\alpha\\beta}_{(\\pm)}= \\partial_{(\\pm)} x^{\\alpha\\beta}\n+{i\\over 2}\\partial_{(\\pm)}\\theta^{i\\alpha}\\;\\theta^{\\beta}_i\n-{i\\over 2}\\partial_{(\\pm)}\\theta^{i\\beta}\\;\\theta^{\\alpha}_i $$\n$$\\hat{\\pi}^{\\alpha\\beta}_{(-)}= {\\cal D}_{(-)} x^{\\alpha\\beta}\n+{i\\over2} {\\cal D}_{(-)}\\theta^{i\\alpha}\\;\\theta^{\\beta}_i\n-{i\\over2} {\\cal D}_{(-)}\\theta^{i\\beta}\\;\\theta^{\\alpha}_i \\equiv\n\\pi^{\\alpha\\beta}_{(-)} + \\mu\\pi^{\\alpha\\beta}_{(+)}\\;.$$ Here ${\\cal\nD}_{(-)} = -i \\{[{\\cal D}^{--}, D^+],\\bar D^+\\}$. Then the component\nform of the action (\\ref{action}) is given by: \\begin{eqnarray} S\n&=& \\int d^2\\xi \\left[\\epsilon_{\\alpha\\beta\\gamma\\delta}\n\\;(2\\pi^{\\alpha\\beta}_{(+)}\\bar\\lambda^\\gamma\\lambda^\\delta\n+2i\\;\\partial_{(+)}\\theta^{i\\alpha}\\;\\theta^{\\beta}_i\n\\bar\\lambda^\\gamma \\lambda^\\delta \\right. \\nonumber \\\\\n &+& \\left.
i\\;{\\cal D}_{(-)} \\theta^{i\\alpha}\\;\n\\theta^{\\beta}_i\\pi^{\\gamma\\delta}_{(+)})\n-i\\;\\sigma_{\\alpha\\beta}\\;(\\hat{\\pi}^{\\alpha\\beta}_{(-)}+2\n\\bar\\lambda^\\alpha\\lambda^\\beta)\\right]\\;.\n\\label{action6}\\end{eqnarray} In components, the left local\nsupersymmetry transformations contained in \\p{diff} read:\n\\begin{eqnarray} &&\\delta\\theta^{\\alpha i}=\\rho^i\\bar\\lambda^\\alpha\n+\\bar\\rho^i\\lambda^\\alpha \\nonumber\\\\ &&\\delta\nx^{\\alpha\\beta}=-{i\\over 2}\\rho^i (\\bar\\lambda^\\alpha\\theta^\\beta_i\n-\\bar\\lambda^\\beta\\theta^\\alpha_i) -{i\\over 2}\\bar\\rho^i\n(\\lambda^\\alpha\\theta^\\beta_i -\\lambda^\\beta\\theta^\\alpha_i)\n\\label{cotr}\\\\&& \\delta\\lambda^\\alpha = i\\rho^i\\;{\\cal\nD}_{(-)}\\theta^\\alpha_i\\;,\\ \\ \\delta\\bar\\lambda^\\alpha =\n-i\\bar\\rho^i\\;{\\cal D}_{(-)}\\theta^\\alpha_i\\nonumber\\\\&&\n\\delta\\sigma_{\\alpha\\beta}= \\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\n\\partial_{(+)}(\\rho^i\\bar\\lambda^\\gamma\\theta^\\delta_i\n+\\bar\\rho^i\\lambda^\\gamma\\theta^\\delta_i)\\;. \\nonumber \\end{eqnarray}\nIn the action (\\ref{action6}) the field $\\sigma_{\\alpha\\beta}$ is a\nLagrange multiplier for the constraint: \\begin{equation}\\label{const}\n\\bar\\lambda^\\alpha\\lambda^\\beta -\\bar\\lambda^\\beta\\lambda^\\alpha\n=-\\hat{\\pi}^{\\alpha\\beta}_{(-)}\\;, \\end{equation} which has as a consequence the\nVirasoro constraint \\begin{equation}\n(\\hat{\\pi}_{(-)})^2=\\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\n\\hat{\\pi}^{\\alpha\\beta}_{(-)}\\hat{\\pi}^{\\gamma\\delta}_{(-)} =0\n\\label{light}\\end{equation} (the second Virasoro constraint can be\nobtained in precisely the same manner as in the case $D=3$, see\nsection 2).
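The step from \\p{const} to \\p{light} is purely algebraic; here is a sketch. Substituting \\p{const} into $(\\hat{\\pi}_{(-)})^2$ produces terms of the form\n$$ \\epsilon_{\\alpha\\beta\\gamma\\delta}\\;\\bar\\lambda^\\alpha\\lambda^\\beta\\,\\bar\\lambda^\\gamma\\lambda^\\delta\\;, $$\nand each such term vanishes: the commuting spinors give a factor $\\bar\\lambda^\\alpha\\bar\\lambda^\\gamma$ (or $\\lambda^\\beta\\lambda^\\delta$) symmetric in the contracted pair of indices, while $\\epsilon_{\\alpha\\beta\\gamma\\delta}$ is antisymmetric in it. Hence $(\\hat{\\pi}_{(-)})^2=0$.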
Further, only the product $\\bar\\lambda\\lambda$ appears in\nthe action, so we can use the constraint \\p{const} to rewrite the\naction as follows: \\begin{equation} S=\\int d^2\\xi\n\\epsilon_{\\alpha\\beta\\gamma\\delta}\\left[\n-\\pi^{\\alpha\\beta}_{(+)}\\hat{\\pi}^{\\gamma\\delta}_{(-)}\n-i\\;\\partial_{(+)}\\theta^{i\\alpha}\\theta^{\\beta}_i\n\\hat{\\pi}^{\\gamma\\delta}_{(-)} +i\\;{\\cal D}_{(-)}\\theta^{i\\alpha}\n\\theta^{\\beta}_i\\pi^{\\gamma\\delta}_{(+)} \\right]\\; . \\end{equation} This,\ntogether with the constraint (\\ref{light}), is just the action for\nthe GS superstring in six dimensions.\n\nOnce again, the procedure of elimination of the twistor variables\n$\\bar\\lambda\\;, \\lambda$ breaks the left local supersymmetry, but\nsome memory of it is kept, which is just kappa symmetry. It emerges\nafter the replacement\n $\\rho^i =\\lambda^\\alpha \\kappa^i_\\alpha $, $\\bar\\rho^i\n=-\\bar\\lambda^\\alpha \\kappa^i_\\alpha$ in eqs. (\\ref{cotr}) and the\nelimination of the product $\\bar\\lambda\\lambda$ from the resulting\ntransformations with the help of the on-shell constraint \\p{const}.\nIn this way one obtains $$\\delta\\theta^{\\alpha i} ={1\\over\n2}\\kappa^i_\\beta \\pi_{(-)}^{\\alpha\\beta}\\;,\\ \\ \\delta\nx^{\\alpha\\beta}=-{i\\over 2}(\\delta\\theta^{\\alpha i} \\theta_i^\\beta\n-\\delta\\theta^{\\beta i} \\theta_i^\\alpha )\\;. $$ These can be\nrecognized as the kappa symmetry transformations of the GS\nsuperstring. We stress again that this identification is only\npossible on shell, where the constraint \\p{const} is valid.\n\n\n\\section{Conclusions}\n\nThe obvious question which did not find its answer in the present\npaper is how to approach the most interesting case of the $D=10$\nsuperstring with $N=(8,0)$ world-sheet supersymmetry. 
The problem is\nthat the notion of complex structure and the associated Grassmann\nanalyticity, heavily used in this paper, do not have a natural\nextension to the case $D=10, N=(8,0)$. However, there exists an\nalternative approach, which is based on the properties of the\neight-sphere realized as a coset space of the $D=10$ Lorentz group,\nand does not make use of any complex structures (see \\cite{100} for\nthe case of the superparticle and a forthcoming publication for the\nsuperstring).\n\nAmongst possible immediate developments of the results presented here\nlet us mention coupling the above superstring actions to target-space\nbackground supergravity and super-Yang-Mills, as well as introducing\nadditional world-sheet superfields in order to describe the internal\ndegrees of freedom of the heterotic superstrings.\n\n\n\nFinally, let us point out that in the present paper we dealt with an\n$n=1$ target superspace. In the context of a twistor-like formulation\nthis naturally goes together with a world-sheet supersymmetry of the\nheterotic $(N,0)$ type. Analogous formulations for the $n=2$ GS\nsuperstrings are expected to be combined with world-sheet\nsupersymmetries of the $(N,N)$ type and thus to involve two\nindependent sets of twistor variables \\cite{a00}. The latter may be\nused to replace both vectors $\\pi^{\\mu}_{(+)}\\;,$ $\\pi^{\\mu}_{(-)}$\nand thus to simultaneously solve both Virasoro constraints of the\nsuperstring, along the lines of refs. 
\\cite{a00,a0,a13}.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\paragraph{}\nThis paper is dedicated to the numerical resolution in moderate dimension of the following McKean-Vlasov Forward Backward Stochastic Differential Equations (MKV FBSDEs)\n\\begin{equation}\\label{eq: MKV FBSDE}\n\\begin{cases}\nX_t & = \\xi + \\int_0^t b(s,X_s,Y_s,Z_s,\\mathcal{L}(X_s),\\mathcal{L}(Y_s),\\mathcal{L}(Z_s))\\ \\mathrm{d} s + \\int_0^t \\sigma(s,X_s,\\mathcal{L}(X_s))\\ \\mathrm{d} W_s \\\\\nY_t & = g(X_T,\\mathcal{L}(X_T)) + \\int_t^T f(s,X_s,Y_s,Z_s,\\mathcal{L}(X_s),\\mathcal{L}(Y_s),\\mathcal{L}(Z_s))\\ \\mathrm{d} s - \\int_t^T Z_s\\ \\mathrm{d} W_s\\\\\n\\end{cases}\n\\end{equation}\nwith $b: \\mathbb{R}\\times\\mathbb{R}^d\\times\\mathbb{R}^k\\times\\mathbb{R}^{k\\times d}\\times\\mathcal{P}_2(\\mathbb{R}^d)\\times\\mathcal{P}_2(\\mathbb{R}^k)\\times\\mathcal{P}_2(\\mathbb{R}^{k\\times d}) \\mapsto \\mathbb{R}^d$, $\\sigma: \\mathbb{R}\\times\\mathbb{R}^d \\times \\mathcal{P}_2(\\mathbb{R}^d) \\mapsto \\mathbb{R}^{d\\times d}$, $g: \\mathbb{R}^d\\times\\mathcal{P}_2(\\mathbb{R}^d) \\mapsto \\mathbb{R}^k$, and $f: \\mathbb{R}\\times\\mathbb{R}^d\\times\\mathbb{R}^k\\times\\mathbb{R}^{k\\times d}\\times\\mathcal{P}_2(\\mathbb{R}^d)\\times\\mathcal{P}_2(\\mathbb{R}^k)\\times\\mathcal{P}_2(\\mathbb{R}^{k\\times d}) \\mapsto \\mathbb{R}^k$. 
\\\\\n$W_t$ is a $d$-dimensional $\\mathcal{F}_t$-Brownian motion, where $(\\Omega,\\mathcal{A},\\mathcal{F}_t,\\mathbb{P})$ is a given filtered probability space and $T>0$.\\\\\n$\\xi$ is a given random variable in $L^2(\\Omega,\\mathcal{F},\\mathbb{P};\\mathbb{R}^d)$ and\n$\\mathcal{P}_2(\\mathbb{R}^n)$ stands for the space of square integrable probability measures over $\\mathbb{R}^n$ endowed with the $2$-Wasserstein distance \n\\begin{equation}\n\\mathcal{W}_2(\\mu,\\nu) = \\inf\\left\\{\\sqrt{\\mathbb{E}[(X-X')^2]}\\ |\\ X,X'\\in L^2(\\Omega,\\mathcal{F},\\mathbb{P};\\mathbb{R}^n),\\ \\mathcal{L}(X) = \\mu,\\ \\mathcal{L}(X') = \\nu \\right\\}.\n\\end{equation}\nFinally, $\\mathcal{L}(.)$ is a generic notation for the law of a random variable.\n\\paragraph{}\nThis kind of equation is linked to nonlocal PDEs known as master equations. We refer to \\cite{carmona2018probabilistic}, chapters 4 and 5 of volume 2, for an introduction to the subject.\nIn \\cite{chassagneux2014probabilistic}, it is shown for example that, under regularity conditions, when the drift $b$ and the driver $f$ are independent of time and of the law $\\mathcal{L}(Z_t)$, and $\\sigma$ does not depend on time, the resolution of equation \\eqref{eq: MKV FBSDE} provides a way to estimate the solution of the equation\n\\begin{align}\\label{eq: master}\n\\partial_t \\mathcal{U}(t,x, \\mu) + & b(x, \\mathcal{U}(t,x, \\mu),\\partial_x \\mathcal{U}(t,x, \\mu) \\sigma(x, \\mu),\\mu, \\eta).\\partial_x \\mathcal{U}(t,x, \\mu) \\nonumber \\\\\n& + \\frac{1}{2} \\Tr[ \\partial^2_{xx} \\mathcal{U}(t,x, \\mu) \\sigma \\sigma^\\top(x, \\mu)] + f(x, \\mathcal{U}(t,x, \\mu), \\partial_x \\mathcal{U}(t,x, \\mu) \\sigma(x, \\mu),\\mu, \\eta) \\nonumber\\\\\n& + \\int_{\\mathbb{R}^d} \\partial_\\mu \\mathcal{U}(t,x, \\mu)(y). 
b(y,\\mathcal{U}(t,y, \\mu),\\partial_x \\mathcal{U}(t,y, \\mu) \\sigma(y, \\mu),\\mu, \\eta) \\ \\mathrm{d}\\mu(y) \\nonumber\\\\\n&\n + \\int_{\\mathbb{R}^d} \\frac{1}{2} \\Tr[ \\partial_x \\partial_\\mu \\mathcal{U}(t,x, \\mu)(y) \n\\sigma \\sigma^\\top(y, \\mu)]\\ \\mathrm{d}\\mu(y) =0\\ \\mathrm{on}\\ [0,T]\\times\\mathbb{R}^d\\times{\\cal P}_2(\\mathbb{R}^d)\n\\end{align} with initial condition $\\mathcal{U}(0,x, \\mu) = g(x,\\mu)$ on $\\mathbb{R}^d\\times{\\cal P}_2(\\mathbb{R}^d)$, and\nwhere $\\eta$ is a notation for the image of the probability measure $\\mu$ by the mapping $x\\in\\mathbb{R}^d \\longrightarrow \\mathcal{U}(t,x,\\mu) \\in \\mathbb{R}^{k}$. \nUnder suitable assumptions, \\cite{chassagneux2014probabilistic} proves that the solution to \\eqref{eq: MKV FBSDE} admits the so-called decoupling field representation $Y_s = U(s,X_s,{\\cal L}(X_s)) $ where $(t,x,\\mu)\\in[0,T]\\times\\mathbb{R}^d\\times{\\cal P}_2(\\mathbb{R}^d)\\mapsto U(T-t,x,\\mu) $ is a classical solution to \\eqref{eq: master}. See Theorem 2.9 and equation (2.12) in \\cite{chassagneux2014probabilistic}. Their results are stated in the more general setting of dynamics depending on the joint law of $(X_t,Y_t)$, but we state them here in the particular case of dependence on the marginal laws ${\\cal L}(X_t),{\\cal L}(Y_t)$ to be consistent with \\eqref{eq: MKV FBSDE}. This MKV FBSDE representation is used in \\cite{CCD17} to build a numerical scheme for the approximation of \\eqref{eq: master} when the drift $b$ is independent of $Z$.\nThe equation above is nonlocal due to the integral terms,\n and the term $\\partial_\\mu \\mathcal{U}(t,x, \\mu)(y)$ stands\n for the Wasserstein derivative of $\\mathcal U$ in the direction of the measure at point $(t,x, \\mu)$, evaluated at the continuous coordinate $y$. 
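The interacting-particle approximation underlying all the schemes below can be illustrated on the forward component alone. The following minimal sketch (our own illustration, not the solver of this paper: the function name `simulate_mkv_forward` and the toy coefficients are assumptions) runs an Euler-Maruyama particle system in which the mean-field term E[X_t] is replaced by the empirical mean over B particles.

```python
import numpy as np

def simulate_mkv_forward(b, sigma, xi, T, N, B, rng):
    """Euler-Maruyama particle simulation of a forward McKean-Vlasov SDE
    dX_t = b(t, X_t, E[X_t]) dt + sigma(t, X_t, E[X_t]) dW_t,
    where E[X_t] is approximated by the empirical mean of B particles."""
    dt = T / N
    X = xi(B)                       # initial particles, shape (B, d)
    for i in range(N):
        t = i * dt
        m = X.mean(axis=0)          # empirical approximation of E[X_t]
        dW = rng.normal(scale=np.sqrt(dt), size=X.shape)
        X = X + b(t, X, m) * dt + sigma(t, X, m) * dW
    return X

# Toy example: d = 1, mean-reverting interaction b = -(x - E[X]), sigma = 0.1
rng = np.random.default_rng(0)
X_T = simulate_mkv_forward(
    b=lambda t, x, m: -(x - m),
    sigma=lambda t, x, m: 0.1,
    xi=lambda B: rng.normal(size=(B, 1)),
    T=1.0, N=50, B=10_000, rng=rng,
)
```

With the toy mean-reverting drift the empirical mean is approximately preserved while the spread of the particle cloud contracts over time.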
\n\n\\paragraph{}\nEquations \\eqref{eq: MKV FBSDE} also appear as probabilistic formulations of mean-field games or mean-field control problems characterizing the value function $V$ of the game.\nMean-field games were introduced by \\cite{LL06} and \\cite{LL06P2} to model games with interactions between many similar players. In this theory, each player's \ndynamics and cost take into account the empirical distribution of all agents. In the limit of an infinite number of players, the search for a Nash equilibrium with closed-loop controls boils down to a control problem for a representative player whose law enters the cost and dynamics.\\\\\nTwo probabilistic approaches based on Forward Backward Stochastic Differential Equations can be used to solve these problems:\n\\begin{itemize}\n \\item A first approach called the {\\bf Pontryagin approach} consists, as shown in \\cite{CD13}, in applying the strong Pontryagin principle to these control problems. Under regularity and convexity conditions, $Y_t$ appears to be a stochastic representation of the gradient of the value function $V$. In this case the coefficients $b,f$ of the related MKV FBSDE \\eqref{eq: MKV FBSDE} do not depend on $Z_t, \\mathcal{L}(Y_t),\\mathcal{L}(Z_t) $ and $k = d.$ \n \\item Another approach called the {\\bf Weak approach} solves the optimization problem by directly estimating $Y_t$ as the value function $V$ of the problem, as shown in \\cite{CL15}. In this case the coefficients $b,f$ of the related MKV FBSDE \\eqref{eq: MKV FBSDE} do not depend on $Y_t, \\mathcal{L}(Y_t),\\mathcal{L}(Z_t) $ and $k = 1.$ \n\\end{itemize}\n\\paragraph{}\nThe numerical resolution of equations \\eqref{eq: MKV FBSDE} is rather difficult since:\n\\begin{itemize}\n \\item The dynamics are coupled through both the drift and the driver of the BSDE.\n \\item The McKean-Vlasov structure of the problem requires solving a fixed point problem in probability spaces. 
\n\\end{itemize}\nIn the linear-quadratic setting (with quadratic cost to minimize but linear dynamics) the weak approach applied to mean-field problems gives a problem in low dimension but with a quadratic coupling in $Z_t$ appearing in the backward dynamics. In contrast, the Pontryagin approach exhibits a problem in potentially high dimension (the $Z$ component is a $d\\times d$ matrix in this case) but with a linear coupling in $Y_t$, which is easier to solve numerically.\\\\\nIn the case of mean-field games, only the law of $X_t$ is present in the dynamics of \\eqref{eq: MKV FBSDE}.\nIn other applications, individuals interact through their controls instead of their states, as in the application to trade crowding in \\cite{cardaliaguet2018mean}.\nThe law of the control thus appears in the dynamics of \\eqref{eq: MKV FBSDE} and may give rise to an FBSDE depending on the law of $Z_t$ in the weak approach or the law of $Y_t$ in the Pontryagin approach. \n\\paragraph{}\nExistence and uniqueness of a solution to the fully coupled system \\eqref{eq: MKV FBSDE} are studied by \\cite{carmona2018probabilistic} when the drift does not depend on the law of $Z$. Their Theorem 4.29 gives existence of a solution under a non-degeneracy condition. However, uniqueness is a priori only expected to hold in small time, as stated in Theorem 4.24 from \\cite{carmona2018probabilistic}.\nIn what follows we will assume that existence and uniqueness hold for the MKV FBSDE we aim to solve.\n\n\\paragraph{}\nIn \\cite{CCD17} and \\cite{AVLCDC19}, tree and grid algorithms are proposed and tested in dimension 1. It is worth mentioning that these techniques suffer from the so-called curse of dimensionality and cannot be applied when the dimension describing a player's state is too high (typically greater than 3 or 4). 
This is due to the discretization of the state space.\n\\paragraph{}\nHowever, new approaches using machine learning have been developed since 2017 to solve nonlinear parabolic PDEs through a BSDE representation.\nTwo kinds of methods have emerged:\n\\begin{itemize}\n \\item The first to appear are {\\bf global methods}, first proposed in \\cite{HJE17} to solve semi-linear PDEs. They rely on a single high-dimensional optimization problem whose resolution is difficult. It consists in the training of as many neural networks as time steps by solving in a forward way the backward representation of the PDE solution. The $Z_t$ process is represented by a different neural network $Z^\\theta_i$ with parameters $\\theta$ at each date $t_i$. Instead of solving the BSDE starting from the terminal condition, the method writes it down as a forward equation and an optimization problem aiming to reach the terminal condition $g(X_T)$ by minimizing a mean squared error $\\mathbb{E}|Y_T - g(X_T)|^2$. The approach is extended to fully nonlinear equations (nonlinear in the solution, its gradient and Hessian) in \\cite{beck2017machine} and the authors show that the methodology can solve some equations in high dimension.\n \\cite{CWNMW19} showed that it is more effective to use a single network for all dates and also proposed an original fixed point algorithm to solve semilinear PDEs.\n \\item A second kind of {\\bf local methods}, first proposed in \\cite{HPW19}, is based on local optimization problems solved at each time step in a backward way. Contrary to the global method, the successive optimization problems are here in moderate dimension. Each optimization step at date $t_i$ consists in the training of only two local neural networks $Y^\\theta_i, Z^\\theta_i$ with parameters $\\theta$. For instance, instead of solving 1 optimization problem with $N$ neural networks in the global method, \\cite{HPW19} solves $N$ learning problems with 2 neural networks. 
Moreover the resolution is simplified by the initialization of the neural networks at time $t_i$ to their previously computed values at time $t_{i+1}$, which provides a good approximation for the current value. This strategy is inspired by the standard backward resolution of BSDEs with conditional expectations from \\cite{BT04} and \\cite{GLW05}. \n The methodology is extended to the much more challenging case of fully nonlinear PDEs in \\cite{phawarger20} by combining it with some ideas proposed in \\cite{beck2019deep}. Extensive tests performed in \\cite{HPW19} show that the local method gives better results than the global one, as does \\cite{phawarger20} in the case of fully nonlinear dynamics. In particular, these papers show that local methods can be used with a larger time horizon $T$ than the global method.\n\\end{itemize}\n\\paragraph{}\nMachine learning techniques to solve coupled FBSDEs are investigated by several authors in \\cite{HL18} and \\cite{JPPZ19}, and a first method for McKean-Vlasov FBSDEs with delay is studied by \\cite{FZ19} for a linear-quadratic equation in dimension one. Similar and more general ideas are presented and tested in dimension one in \\cite{CL19b} alongside convergence results, together with an additional method directly solving mean-field control problems by minimizing the cost without writing down optimality conditions.\nAll the resulting algorithms rely on the global approach initiated in \\cite{HJE17}.\n\\paragraph{}\nOur paper aims to extend these methods and to propose new ones for the resolution of McKean-Vlasov FBSDEs in moderate dimension, and to go beyond one-dimensional examples for which standard methods are already available (see \\cite{AC10,L21}). In fact, one major advantage of the use of neural networks for solving control problems is their ability to efficiently represent high-dimensional functions without using space grids. 
We also study the influence of the maturity $T$ on the algorithms.\nWe first propose to modify the previously proposed algorithm to\nstabilize its convergence. \nOur modification allows us to reduce the variance of the estimators used in the dynamics of $X_t$ and $Y_t$.\nThen we propose a second algorithm relaxing the fixed point iteration by adding a neural network which learns the distribution of the solution thanks to a penalization in the loss function.\nFinally we propose a resolution scheme based on a local resolution as in \\cite{HPW19}.\n\\paragraph{}\nTo simplify the presentation, we consider first-order interaction models, that is to say models in which the dependence of the drift and cost function on the laws $\\mathcal{L}(X_t),\\mathcal{L}(Y_t),\\mathcal{L}(Z_t)$ only involves expectations of the form\n\\begin{equation}\n u_t = (u_t^X,u_t^Y,u_t^Z) := (\\mathbb{E}[\\varphi_1(X_t)], \\mathbb{E}[\\varphi_2(Y_t)], \\mathbb{E}[\\varphi_3(Z_t)]),\\label{eq: def m}\n\\end{equation} for some continuous functions $\\varphi_1,\\varphi_2,\\varphi_3$ with adequate domains and codomains. 
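In the numerical schemes each expectation in \eqref{eq: def m} is replaced by an empirical mean over a batch of $B$ samples. A minimal sketch of this estimation (the helper name `first_order_interaction` is our own, not from the paper):

```python
import numpy as np

def first_order_interaction(X, Y, Z, phi1, phi2, phi3):
    """Empirical estimate of u_t = (E[phi1(X_t)], E[phi2(Y_t)], E[phi3(Z_t)])
    from a batch of B i.i.d. samples stored along the first axis."""
    uX = phi1(X).mean(axis=0)
    uY = phi2(Y).mean(axis=0)
    uZ = phi3(Z).mean(axis=0)
    return uX, uY, uZ

# With power functions phi(v) = v**2 we recover second-order moments.
phi = lambda v: v ** 2
rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 2))      # B samples of a 2-dimensional X_t
Y = rng.normal(size=(100_000, 1))      # k = 1
Z = rng.normal(size=(100_000, 1, 2))   # k x d matrices
uX, uY, uZ = first_order_interaction(X, Y, Z, phi, phi, phi)
# uX is close to (1, 1), the second moment of a standard Gaussian
```

The Monte-Carlo error of each component decays like $1/\sqrt{B}$, which is why the Direct method below needs very large batches.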
In this framework we can rewrite, by abuse of notation, \n\\begin{align*}\n b(s,X_s,Y_s,Z_s,\\mathcal{L}(X_s),\\mathcal{L}(Y_s),\\mathcal{L}(Z_s)) & = b(s,X_s,Y_s,Z_s,u_s)\\\\\n \\sigma(s,X_s,\\mathcal{L}(X_s)) & = \\sigma(s,X_s,u_s^X) \\\\\ng(X_T,\\mathcal{L}(X_T)) & = g(X_T,u_T^X)\\\\\nf(s,X_s,Y_s,Z_s,\\mathcal{L}(X_s),\\mathcal{L}(Y_s),\\mathcal{L}(Z_s)) & = f(s,X_s,Y_s,Z_s,u_s).\n\\end{align*}\nFor instance, when $\\varphi_1,\\varphi_2,\\varphi_3$ are power functions with positive integer exponents we recover the moments of the probability distributions.\nSee Remark \\ref{rem: more general law} for more general cases beyond first-order interaction.\n\\paragraph{}\nWe provide multidimensional tests to show how these machine learning approaches can overcome the curse of dimensionality on some test cases first coming from a mean-field game of controls: we solve the FBSDEs derived from the weak approach and the Pontryagin approach. We also consider an example arising from a non-linear-quadratic mean-field game.\nThen we compare all the methods on some general test cases of FBSDEs involving linear or quadratic dependence on the processes $X_t$, $Y_t$, $Z_t$ and on their distributions.\n\\paragraph{}\nThe structure of the paper is the following: in sections \\ref{schemes} and \\ref{local} we describe the proposed schemes, and in section \\ref{results} we provide a numerical study of our methods in dimension 10 (except for the one-dimensional example of Section \\ref{pop}). We show that our algorithms can solve non-linear-quadratic models with small maturities.\n\\section{Machine learning global solvers}\\label{schemes}\nIn this section we propose three {\\bf global algorithms} based on the approach of \\cite{HJE17}. \n\n\\subsection{Algorithm principle}\n\n\\paragraph{}\nWe propose a generalized and refined version of Algorithm 2 from \\cite{CL19b}. 
We recall that a similar technique with additional networks is used in \\cite{FZ19} for delayed McKean-Vlasov equations but is tested only on a one-dimensional linear-quadratic example. Our methods also take advantage of different expectation computation methods, introduced in section \\ref{sec: expectation}. \nWe present in section \\ref{results} several tests in dimension 10 where the laws of $X, Y, Z$ are involved. \n\\paragraph{}\nWe consider the Euler-Maruyama discretized FBSDE system \\eqref{eq: MKV FBSDE} on a regular time grid $t_k = \\frac{kT}{N}$ for $k\\in\\llbracket0,N\\rrbracket$:\n\\begin{equation}\\label{eq: approximated FBSDE}\n\\begin{cases}\nX_{t_{i+1}} & = X_{t_{i}} + b\\left(t_i,X_{t_{i}},Y_{t_{i}},Z_{t_{i}},u_{t_{i}}\\right)\\ \\Delta t+ \\sigma\\left(t_{i},X_{t_{i}},u_{t_{i}}^X \\right) \\Delta W_i\\\\\nY_{t_{i+1}} & = Y_{t_{i}} - f\\left({t_{i}},X_{t_{i}},Y_{t_{i}},Z_{t_{i}},u_{t_{i}}\\right)\\ \\Delta t + Z_{t_{i}} \\Delta W_i\n\\end{cases}\n\\end{equation}\nwith terminal condition $Y_{t_N} = g(X_{t_N},u_{t_N}^X)$ and initial condition $X_0 = \\xi$. We recall that $u_t$ is defined in \\eqref{eq: def m}. We denote $\\Delta t := t_{i+1} - t_i = \\frac{T}{N}$ and $(\\Delta W_i)_{i=0,\\cdots,N-1} := (W_{t_{i+1}} - W_{t_{i}})_{i=0,\\cdots,N-1}$ the Brownian increments.\nIn the FBSDE theory, one requires the processes $(X_{t_{i}},Y_{t_{i}},Z_{t_{i}})$ to be $\\mathcal{F}_{t_{i}}$-adapted. Therefore the backward part of the system can also be written in the conditional expectation form\n\\begin{equation}\n \\begin{cases}\n Y_{t_i} = \\mathbb{E}[Y_{t_{i+1}} + f\\left({t_{i}},X_{t_{i}},Y_{t_{i}},Z_{t_{i}},u_{t_{i}}\\right)\\ \\Delta t | \\mathcal{F}_{t_i}]\\\\\n Z_{t_{i}} = \\mathbb{E}[Y_{t_{i+1}}\\frac{\\Delta W_i}{\\Delta t} | \\mathcal{F}_{t_i}],\n \\end{cases}\n\\end{equation} where we see how the process $Z$ is defined. 
This process, specific to the stochastic case, allows the $Y$ component to be $\\mathcal{F}_{t}$-adapted, even though we fix its terminal condition. It is a major difference between backward ordinary differential equations and backward stochastic differential equations.\\\\ We see that the whole system is coupled; we therefore need to design a method solving both equations of \\eqref{eq: approximated FBSDE} simultaneously.\n\\paragraph{}\nWe solve the system by the Merged Deep BSDE method introduced in \\cite{CWNMW19}. $Z_{t_{i}}$ is approximated by a single feedforward neural network ${\\cal Z}_{\\theta^z}(t_{i}, X_{t_{i}})$ and $Y_0$ by a neural network ${\\cal Y}_{\\theta^y}(X_{0})$ with parameters $\\theta = ({\\theta^y},{\\theta^z})$. \nFrom this point of view, the discretized Brownian motion $W_t$ acts as training data in the language of machine learning, so that we can generate a training set as large as desired. \nExtensive tests conducted in \\cite{CWNMW19} show that the use of a Merged network improves the training in comparison with the Deep BSDE method of \\cite{HJE17}. Indeed it lowers the number of parameters to learn and hence reduces the complexity of the problem. It empirically improves the accuracy of the method but also makes the training faster. That is why we focus on this architecture. It is also used by \\cite{CL19b}, which considers controls in the form $\\big(\\zeta(t_i,X_i)\\big)_{i=1,\\dots,N}$ for a neural network $\\zeta$. The use of recurrent networks such as Long Short Term Memory networks as in \\cite{FZ19} is possible, but tests carried out in \\cite{CWNMW19} seem to show that it does not bring more accuracy on Markovian problems. 
Other alternatives may include the GroupSort network \\cite{ALG19} for a better control of the Lipschitz constant of the approximation, or some special networks preserving some properties of the solution, but these have not been tested.\n\n The motivation for such an approximation comes from the notion of decoupling field, also used for numerical purposes in \\cite{AVLCDC19} or \\cite{CCD17}, which gives the existence of functions $u,v$ (see the paragraph below \\eqref{eq: master}) such that\n\\begin{equation}\nY_t = u(t,X_t,\\mathcal{L}(X_t)),\\ Z_t = v(t,X_t,\\mathcal{L}(X_t)).\n\\end{equation}\nNumerically, it is enough to consider $Y$ and $Z$ as functions of the couple $(t,X_t)$. In fact, the law of the solution (and therefore its moments) can be seen as a function of $t$. That is why we search for a representation \n\\begin{equation}\nY_t = \\widetilde{u}(t,X_t),\\ Z_t = \\widetilde{v}(t,X_t).\n\\end{equation} The forward-backward system is transformed into a forward system and an optimization problem aiming to satisfy the terminal condition of the BSDE through the loss function $\\mathbb{E}\\big[\\big(Y_T - g\\big(X_T,u^X_{t_N}\\big)\\big)^2\\big]$. To simplify notations, $X_i := X_{t_i}$ and similarly for $Y$ and $Z$.\n\n\\paragraph{}\nIn practice the loss function is minimized with the Adam gradient descent method \\cite{KB14}. \nIn any case, the goal of our scheme is to learn both the optimal control and the distribution of $X_t, Y_t, Z_t$. In the following, $B$ is the batch size, $N$ is the number of time steps and $M$ is the number of previous batch expectations to keep in memory. \n\\paragraph{}\nWe use for ${\\cal Z}$ and ${\\cal Y}$ feedforward neural networks with 3 hidden layers ($d + 10$ neurons in each) with the hyperbolic tangent as activation function and an output layer with the identity as activation. 
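The architecture just described can be sketched in PyTorch as follows (an illustrative reconstruction under our own naming, not the authors' code): ${\cal Y}_{\theta^y}$ maps $\mathbb{R}^d$ to $\mathbb{R}^k$, the merged network ${\cal Z}_{\theta^z}$ maps the couple $(t,x)\in\mathbb{R}^{1+d}$ to $\mathbb{R}^{k\times d}$, and both use 3 hidden $\tanh$ layers of $d+10$ neurons with a linear output layer.

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out, n_hidden, n_layers=3):
    """Feedforward network: n_layers hidden tanh layers, linear output."""
    layers, width = [], n_in
    for _ in range(n_layers):
        layers += [nn.Linear(width, n_hidden), nn.Tanh()]
        width = n_hidden
    layers.append(nn.Linear(width, n_out))
    return nn.Sequential(*layers)

d, k = 10, 1
Y0_net = mlp(d, k, d + 10)           # x -> Y_0, the network cal Y
Z_net = mlp(1 + d, k * d, d + 10)    # (t, x) -> Z_t, the merged network cal Z

# One forward pass on a batch of B particles at time t_i
B, t_i = 256, 0.5
x = torch.randn(B, d)
y0 = Y0_net(x)                                          # shape (B, k)
t_col = torch.full((B, 1), t_i)
z = Z_net(torch.cat([t_col, x], dim=1)).view(B, k, d)   # shape (B, k, d)
```

Feeding the time as an extra input coordinate is what makes a single network usable at all dates, instead of one network per time step as in the original Deep BSDE method.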
It is worth noticing that because the merged neural network takes the couple $(t,X)$ as inputs, we cannot use batch normalization since the distribution of $X_i$ is not stationary over time.\n\n\\subsection{Estimation of the expectation}\\label{sec: expectation}\n\\paragraph{}\nA key step for the methods is to estimate the mean-field parameter $u$. It has a significant effect on the algorithms' performance. We denote by $\\theta_m=({\\theta^y_m},{\\theta^z_m})$ the neural network parameters at optimization iteration $m$ and by $u_i = (u_i^X,u_i^Y,u_i^Z)$ the estimation of \n$u_{t_i}$. \nIn the algorithms described below, the approximated processes are considered as functions of the parameters $\\theta$ of the neural network.\\\\\nSeveral methods can be used to approximate the moments of the solution involved in the stochastic McKean-Vlasov dynamics:\n\\begin{itemize}\n \\item \\textbf{Direct}: use the empirical mean of the current batch of particles\n \\begin{equation} \\label{eq: Monte-Carlo}\n u_i = \\frac{1}{B} \\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_3(Z^j_{i}(\\theta_m))\\right),\\ i=0, \\cdots , N-1.\n \\end{equation}\n \n Alternatively, one could instead use the particles of the last batch\n to estimate the law \n \\begin{equation} \n u_i = \\frac{1}{B} \\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_{m-1})), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_{m-1})), \\sum_{j=1}^{B} \\varphi_3(Z^j_{i}(\\theta_{m-1}))\\right),\\ i=0, \\cdots , N-1.\n \\end{equation} The difference lies in the fact that in one case the optimization of the parameters $\\theta_m$ at iteration $m$ modifies the current estimation of the law, whereas using the previously computed parameters $\\theta_{m-1}$ fixes the law and simplifies the optimization problem.\n In practice, for the numerical tests of Section \\ref{results} we use the formula \\eqref{eq: Monte-Carlo}. 
This approach requires handling very large batches, typically of the order of $B=10,000$ sample paths, to get a reasonable approximation of the laws. This is the approach used by \\cite{FZ19} and \\cite{CL19b}. \n \n We solve the following optimization problem\n \\begin{align*}\n \\min_{\\theta = (\\theta^y,\\theta^z)}\\ & \\frac{1}{B} \\sum_{k=1}^B \\Big|Y_N^k(\\theta) - g\\Big(X_N^k(\\theta),\\frac{1}{B} \\sum_{j=1}^B \\varphi_1(X_N^j(\\theta))\\Big) \\Big|^2\\\\\n X_{i+1}^j(\\theta) =\\ & X_i^j(\\theta) + b\\left(t_i,X_i^j(\\theta),Y_i^j(\\theta),{\\cal Z}_{\\theta^z}\\left(t_i,X_i^j(\\theta)\\right) ,u_{i} \\right) \\Delta t\n + \\sigma\\left(t_i,X_i^j(\\theta) ,u_{i}^{X} \\right) \\Delta W_i^j\\\\\n Y_{i+1}^j(\\theta) =\\ & Y_i^j(\\theta) - f\\left({t_{i}},X_{i}^j(\\theta),Y_i^j(\\theta),{\\cal Z}_{\\theta^z}\\left(t_i,X_{i}^j (\\theta)\\right) ,u_{i} \\right)\\ \\Delta t\n + {\\cal Z}_{\\theta^z}\\left(t_i,X_{i}^j(\\theta)\\right) \\Delta W_i^j \\\\\n u_i =\\ & \\frac{1}{B} \\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta)), \\sum_{j=1}^{B} \\varphi_3\\left({\\cal Z}_{\\theta^z}\\left(t_i,X_{i}^j(\\theta)\\right)\\right)\\right)\\\\\n X_0^j =\\ & \\xi^j \\sim \\xi,\\ j=1,\\cdots,B\\\\\n Y_0^j(\\theta) =\\ & {\\cal Y}_{\\theta^y}(X_0^j),\\ %\n \\\\ i = \\ & 0,\\cdots,N-1.\n \\end{align*}\n \n The {\\bf direct solver} leads to algorithm \\ref{algo: direct}. 
\n \n\\begin{algorithm}[H]\n\\caption{Direct solver}\\label{algo: direct}\n\\begin{algorithmic}[1]\n\\State Let ${\\cal Y}_{\\theta^y}(\\cdot)$ be a neural network with parameter ${\\theta^y}$, defined on $\\mathbb{R}^d$ and valued in $\\mathbb{R}^{k}$\n, ${\\cal Z}_{\\theta^z}(\\cdot,\\cdot)$ be a neural network with parameter ${\\theta^z}$, defined on $\\mathbb{R}^{+} \\times \\mathbb{R}^d$ and valued in $\\mathbb{R}^{k\\times d}$, so that $ \\theta = ({\\theta^y}, {\\theta^z})$ is initialized with value $\\theta_0 = (\\theta^y_0, \\theta^z_0)$.\n\\For{$m$ from 0 to $K$} \\Comment{Stochastic gradient iterations}\n\\State Sample $(\\xi^j)_{j=1,\\cdots,B}$ from $B$ independent copies of the initial condition $\\xi$.\n\\State Set $\\forall j\\in\\llbracket1,B\\rrbracket,\\ X_0^j(\\theta_m) = \\xi^j \\in\\mathbb{R}^d , Y_0^j(\\theta_m) = {\\cal Y}_{\\theta^y_m}(\\xi^j) \\in\\mathbb{R}^k$.\n\\For{$i$ from 0 to $N-1$}\n\\State $u_{i} = (u_{i}^X,u_{i}^Y,u_{i}^Z) = \\frac{1}{B}\\sum_{j=1}^{B} \\left( \\varphi_1(X^j_{i}(\\theta_m)), \\varphi_2(Y^j_{i}(\\theta_m)), \\varphi_3\\Big({\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j(\\theta_{m})\\right)\\Big)\\right)$\n\\For{$j$ from 1 to $B$}\n\\State Sample $\\delta_i^j$ from a $d$-dimensional standard Gaussian vector.\n\\State $X_{i+1}^j(\\theta_m) = X_i^j(\\theta_m) + b\\left(t_i,X_i^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{\\theta^z_m}\\left(t_i,X_i^j(\\theta_m)\\right) ,u_{i}\\right) \\Delta t + \\sqrt{\\Delta t}\\ \\sigma\\left(t_i,X_i^j(\\theta_m) ,u_{i}^X\\right) \\delta_i^j$\n\\State $ Y_{i+1}^j(\\theta_m) = Y_i^j(\\theta_m) - f\\left({t_{i}},X_{i}^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j (\\theta_m)\\right) ,u_{i}\\right)\\ \\Delta t +\\sqrt{\\Delta t}\\ {\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j(\\theta_m)\\right) \\delta_i^j$\n\\EndFor\n\\EndFor\n\\State $\\overline{X_{N}}(\\theta_m) = \\frac{1}{B} \\sum_{j=1}^{B} \\varphi_1(X^j_{N}(\\theta_m))$,\n\\State $J(\\theta_m) = 
\\frac{1}{B}\\sum_{j=1}^{B} \\left(Y_N^j(\\theta_m) - g\\left(X_N^j(\\theta_m),\\overline{X_{N}}(\\theta_m)\\right) \\right)^2$\n\\State Calculate $\\nabla J(\\theta_m)$ by back-propagation.\n\\State Update $\\theta_{m+1} = \\theta_m - \\rho_m \\nabla J(\\theta_m)$.\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n \\item \\textbf{Dynamic}: a method which dynamically updates the estimation on $(M+1) B$ samples. The expectations from the last $M$ batches are kept in memory in an array $$(\\zeta_{i,r})_{\\begin{array}{c}\n i=0, \\dots, N-1, \\\\\n r= 0, \\dots , M-1\n \\end{array} }\n $$ \n initialized with values $(\\mathbb{E}[\\varphi_1(\\xi)], \\varphi_2(0),\\varphi_3(0))^{N\\times M}$.\\\\\n At iteration $m - 1$, $\\nu_{{i}}^{(m-1)}$ is defined as the empirical mean over these previous sample paths. On a new batch, the expectation is computed by averaging the previous estimation $\\nu_{{i}}^{(m-1)}$ and the current batch empirical mean by the following algorithm, used for $i=0, \\cdots , N-1$:\n \\begin{align} \\label{eq: dynamic Monte-Carlo}\n \\nu_{i}^{(m-1)} &= \\frac{1}{M} \\sum_{r=0}^{M-1} \\zeta_{i,r}, \\nonumber\\\\\n u_i &= \\frac{M \\nu_{{i}}^{(m-1)} + \\frac{1}{B}\\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_m)),\\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_3(Z^j_{i}(\\theta_m))\\right)}{M + 1}, \\nonumber \\\\\n \\zeta_{i,m\\% M} & = \\frac{1}{B}\\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_m)),\\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_3(Z^j_{i}(\\theta_m))\\right).\n \\end{align} The notation $m\\%M$ refers to the remainder of the Euclidean division of $m$ by $M$. This technique makes it possible to use smaller batches of size 100 or 1000. Thus it is more efficient in terms of convergence speed in comparison with the direct approach. This method can be seen as a dynamic fixed point approach. \n \\paragraph{}\n The idea behind this update rule comes from online learning in machine learning. 
$1\/M$ can be interpreted as a learning rate quantifying the updating speed. From the current estimation of the particle law, we introduce a small correction related to the newly observed samples. Therefore the estimation is much more stable through iterations compared to the instantaneous update of the law used by the Direct method. After $M$ batches, the older samples are forgotten, since they no longer represent the current law. We thus expect convergence for a good choice of $M$. If this parameter is too small the stabilization is inefficient, while, on the contrary, too large an $M$ slows down the learning process by introducing a bias in the law.\n For instance in our numerical experiments of Section \\ref{results} we use $M = 100$ for a total of 2000 gradient descent iterations.\n \n \\begin{Rem}\\label{rem: more general law}\n \tIf the law dependence is more general than a first-order interaction\n \tand is given by a continuous function $F: \\mu \\in\\mathcal{P}_2(\\mathbb{R}^d) \\mapsto \\mathbb{R}^k$, then the Direct method can be straightforwardly applied to the equation by estimating $F(\\mathcal{L}(X_t))$ by the so-called empirical projection $ F(\\frac{1}{B} \\sum_{j=1}^B \\delta_{X^j_{t_i}})$ for identically distributed particles $(X^j_{t_i})_{j=1,\\dots,B}$ on a time grid $t_0,\\cdots,t_N$. Concerning the Dynamic approach, it would require keeping in memory the previously computed particles from the last $M$ batches, which is costly.\n \\end{Rem}\n \n \\begin{Rem}\n The fixed point approach is known to be convergent theoretically only for small maturities. In practice, the theoretical bound on the maturity obtained on the simple example given in paragraph 3.1 of \\cite{AVLCDC19} is far too pessimistic. 
We will see that this restriction is not binding in any of our test cases.\n \\end{Rem}\n \n \n For a given iteration $m$, given the estimations $(\\zeta_{i,r})_{\\begin{array}{c}\n i=0, \\dots, N-1, \\\\\n r= 0, \\dots , M-1\n \\end{array} }$ of $u_i$ on the last $M$ iterations, we perform one gradient descent step for the following optimization problem\n \\begin{align*}\n \\min_{\\theta_m=({\\theta^y_m},{\\theta^z_m})}\\ & \\frac{1}{B} \\sum_{k=1}^B \\Big|Y_N^k(\\theta_m) - g\\Big(X_N^k(\\theta_m),\\frac{1}{B} \\sum_{j=1}^B \\varphi_1(X_N^j(\\theta_m))\\Big) \\Big|^2\\\\\n X_{i+1}^j(\\theta_m) =\\ & X_i^j(\\theta_m) + b\\left(t_i,X_i^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{{\\theta^z_m}}\\left(t_i,X_i^j(\\theta_m)\\right),\\widetilde{u_{i}}\\right) \\Delta t \\\\ & + \\sigma\\left(t_i,X_i^j(\\theta_m),\\widetilde{u_{i}}^X\\right) \\Delta W_i^j\\\\\n Y_{i+1}^j(\\theta_m) =\\ & Y_i^j(\\theta_m) - f\\left({t_{i}},X_{i}^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{{\\theta^z_m}}\\left(t_i,X_{i}^j (\\theta_m)\\right),\\widetilde{u_{i}}\\right)\\ \\Delta t \\\\ & + {\\cal Z}_{{\\theta^z_m}}\\left(t_i,X_{i}^j(\\theta_m)\\right) \\Delta W_i^j \\\\\n u_i =\\ & \\frac{1}{B} \\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_3\\left({\\cal Z}_{{\\theta^z_m}}\\left(t_i,X_{i}^j(\\theta_m)\\right)\\right)\\right)\\\\\n \\widetilde{u_{i}} =\\ & \\frac{\\sum_{r=0}^{M-1} \\zeta_{i,r} + u_{i} }{M + 1}\\\\\n X_0^j =\\ & \\xi^j \\sim \\xi,\\ j=1,\\cdots,B\\\\\n Y_0^j(\\theta_m) =\\ & {\\cal Y}_{\\theta^y_m}(X_0^j)\\\\\n i = \\ & 1,\\cdots,N-1.\n \\end{align*} Then we update $(\\zeta_{i,r})_{i,r}$ by forgetting the oldest estimation and keeping in memory the new one, $(u_i)_i$ (see \\eqref{eq: dynamic Monte-Carlo}). 
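In code, the rolling update \\eqref{eq: dynamic Monte-Carlo} amounts to a small circular buffer over the last $M$ batch means. A minimal numpy sketch (the class name and the scalar-valued moments are our own illustration, standing in for the actual TensorFlow implementation):

```python
import numpy as np

class RollingLawEstimator:
    """Circular buffer of the last M batch empirical means (illustrative).

    zeta[i, r] stores the batch mean of phi(X_i) from batch slot r; the
    blended estimate averages the M stored means with the current batch,
    as in the Dynamic update rule.
    """

    def __init__(self, n_steps, M, init_value=0.0):
        self.M = M
        self.zeta = np.full((n_steps, M), init_value, dtype=float)

    def update(self, m, batch_means):
        # nu_i^{(m-1)}: mean over the last M stored batch means
        nu = self.zeta.mean(axis=1)
        # u_tilde_i = (M * nu_i + current batch mean) / (M + 1)
        u_tilde = (self.M * nu + batch_means) / (self.M + 1)
        # overwrite the oldest slot (Euclidean remainder m % M)
        self.zeta[:, m % self.M] = batch_means
        return u_tilde
```

After $M$ identical batches the blended estimate coincides with the batch mean, which is exactly the stationarity property the fixed point relies on.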
The {\\bf dynamic solver} is given more explicitly in algorithm \\ref{algo: dynamic}.\n\\begin{algorithm}[H]\n\\caption{Dynamic solver}\\label{algo: dynamic}\n\\begin{algorithmic}[1]\n\\State Let ${\\cal Y}_{\\theta^y}(\\cdot)$ be a neural network with parameter ${\\theta^y}$, defined on $\\mathbb{R}^d$ and valued in $\\mathbb{R}^{k}$, ${\\cal Z}_{\\theta^z}(\\cdot,\\cdot)$ be a neural network with parameter ${\\theta^z}$, defined on $\\mathbb{R}^{+} \\times \\mathbb{R}^d$ and valued in $\\mathbb{R}^{k\\times d}$, so that $ \\theta = ({\\theta^y}, {\\theta^z})$ is initialized with value $\\theta_0=(\\theta^y_0, \\theta^z_0)$.\n\\State Set $\\forall i\\in\\llbracket0,N-1\\rrbracket,\\ \\forall r\\in\\llbracket0,M-1\\rrbracket,\\ \\zeta_{i,r} = (\\mathbb{E}[\\varphi_1(\\xi)], \\varphi_2(0), \\varphi_3(0))$\n\\For{$m$ from 0 to $K$}\n\\State Sample $(\\xi^j)_{j=1,\\cdots,B}$ from $B$ independent copies of the initial condition $\\xi$.\n\\State Set $\\forall j\\in\\llbracket1,B\\rrbracket,\\ X_0^j(\\theta_m) = \\xi^j\\in\\mathbb{R}^d, Y_0^j(\\theta_m) = {\\cal Y}_{\\theta^y_m}(\\xi^j)\\in\\mathbb{R}^k$.\n\\For{$i$ from 0 to $N-1$}\n\\State $u_{i} = \\frac{1}{B} \\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_3\\Big({\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j(\\theta_{m})\\right)\\Big)\\right)$\n\\State $\\widetilde{u_{i}} = (\\widetilde{u_{i}}^X,\\widetilde{u_{i}}^Y,\\widetilde{u_{i}}^Z) =\\frac{\\sum_{r=0}^{M-1} \\zeta_{i,r} + u_{i}}{M + 1}$\n\\For{$j$ from 1 to $B$}\n\\State Sample $\\delta_i^j$ from a $d$-dimensional standard Gaussian vector.\n\\State $X_{i+1}^j(\\theta_m) = X_i^j(\\theta_m) + b\\left(t_i,X_i^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{\\theta^z_m}\\left(t_i,X_i^j(\\theta_m)\\right),\\widetilde{u_{i}} \\right) \\Delta t + \\sqrt{\\Delta t}\\ \\sigma\\left(t_i,X_i^j(\\theta_m),\\widetilde{u_{i}}^{X} \\right) \\delta_i^j$\n\\State $Y_{i+1}^j(\\theta_m) = Y_i^j(\\theta_m) - 
f\\left({t_{i}},X_{i}^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j (\\theta_m)\\right),\\widetilde{u_{i}} \\right)\\ \\Delta t + \\sqrt{\\Delta t}\\ {\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j(\\theta_m)\\right) \\delta_i^j$\n\\EndFor\n\\State $\\zeta_{i,m\\% M} = u_{i} $\n\\EndFor\n\\State $\\overline{X_{N}}(\\theta_m) = \\frac{1}{B} \\sum_{j=1}^{B} \\varphi_1(X^j_{N}(\\theta_m))$,\n\\State $J(\\theta_m) = \\frac{1}{B}\\sum_{j=1}^{B} \\left(Y_N^j(\\theta_m) - g\\left(X_N^j(\\theta_m),\\overline{X_{N}}(\\theta_m)\\right) \\right)^2$\n\\State Calculate $\\nabla J(\\theta_m)$ by back-propagation.\n\\State Update $\\theta_{m+1} = \\theta_m - \\rho_m \\nabla J(\\theta_m)$.\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n \n \\item \\textbf{Expectation}: estimate $u_t$ by a neural network $\\Psi_{\\theta^\\Psi}$ with input $t$ and parameters ${\\theta^\\Psi}$. \n \\begin{equation} \\label{eq: neural expectation}\n u_i (\\theta^\\Psi) = \\Psi_{\\theta^\\Psi}(t_i) = ( \\Psi_{\\theta^\\Psi}^X(t_i),\\Psi_{\\theta^\\Psi}^Y(t_i) ,\\Psi_{\\theta^\\Psi}^Z(t_i) ),\\ i=0, \\cdots , N.\n \\end{equation}\n A penalization term\n \\begin{equation*}\n \\mathbb{E}\\left[\\frac{\\lambda}{N}\\sum_{i=0}^{N-1} \\left\\lVert\\Psi_{\\theta^\\Psi}(t_i) - \\frac{1}{B}\\left( \\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta)), \\sum_{j=1}^{B} \\varphi_3(Z^j_{i}(\\theta))\\right)\\right\\rVert_2^2\\right],\n \\end{equation*}\n is added to the loss function. \n We will see that in practice this method is quite delicate to use because its performance heavily depends on the choice of the parameter $\\lambda$. 
This approach provides a relaxation of the fixed point method.\n \n We solve the following optimization problem\n \\begin{align*}\n \\min_{\\theta = (\\theta^y,\\theta^z,\\theta^\\Psi)}\\ & \\frac{1}{B} \\sum_{k=1}^B \\Big|Y_N^k(\\theta) - g\\Big(X_N^k(\\theta),\\frac{1}{B} \\sum_{j=1}^B \\varphi_1(X_N^j(\\theta))\\Big) \\Big|^2\\\\\n & + \\frac{\\lambda}{N}\\sum_{i=0}^{N-1}\\left\\lVert u_i - \\Psi_{{\\theta^\\Psi}}(t_i)\\right\\rVert^2\\\\\n X_{i+1}^j(\\theta) =\\ & X_i^j(\\theta) + b\\left(t_i,X_i^j(\\theta),Y_i^j(\\theta),{\\cal Z}_{\\theta^z}\\left(t_i,X_i^j(\\theta)\\right),\\Psi_{\\theta^\\Psi}(t_i)\\right) \\Delta t \\\\ & + \\sigma\\left(t_i,X_i^j(\\theta),\\Psi_{\\theta^\\Psi}^X(t_i)\\right) \\Delta W_i^j\\\\\n Y_{i+1}^j(\\theta) =\\ & Y_i^j(\\theta) - f\\left({t_{i}},X_{i}^j(\\theta),Y_i^j(\\theta),{\\cal Z}_{\\theta^z}\\left(t_i,X_{i}^j (\\theta)\\right),\\Psi_{\\theta^\\Psi}(t_i)\\right)\\ \\Delta t \\\\ & + {\\cal Z}_{\\theta^z}\\left(t_i,X_{i}^j(\\theta)\\right) \\Delta W_i^j \\\\\n u_i =\\ & \\frac{1}{B} \\left(\\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta)), \\sum_{j=1}^{B} \\varphi_3\\left({\\cal Z}_{\\theta^z}\\left(t_i,X_{i}^j(\\theta)\\right)\\right)\\right)\\\\\n X_0^j =\\ & \\xi^j \\sim \\xi,\\ j=1,\\cdots,B\\\\\n Y_0^j(\\theta) =\\ & {\\cal Y}_{\\theta^y}(X_0^j)\\\\\n i = \\ & 1,\\cdots,N-1.\n \\end{align*}\n The {\\bf expectation solver} is described in algorithm \\ref{algo: expectation}. 
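The resulting objective is simply the terminal loss plus the $\\lambda$-weighted penalty. A numpy stand-in for the TensorFlow loss (the function name and array layout are our own illustration):

```python
import numpy as np

def penalized_loss(y_terminal, g_terminal, u, psi, lam, n_steps):
    """Terminal L2 error plus the penalty tying Psi(t_i) to the batch
    empirical means u_i (illustrative numpy version of the objective).

    u, psi: arrays of shape (n_steps, k_total) holding the empirical means
    and the network outputs at each time step.
    """
    terminal = np.mean((y_terminal - g_terminal) ** 2)
    penalty = (lam / n_steps) * np.sum(np.sum((u - psi) ** 2, axis=-1))
    return terminal + penalty
```

Note that the penalty couples the two unknowns: the networks driving the dynamics see only $\\Psi_{\\theta^\\Psi}(t_i)$, while the penalty pulls $\\Psi_{\\theta^\\Psi}$ toward the empirical means, which is why the balance set by $\\lambda$ matters so much.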
\n The parameter $\\lambda$ is chosen by trial and error.\n \\begin{algorithm}[H]\n\\caption{Expectation solver}\\label{algo: expectation}\n\\begin{algorithmic}[1]\n\\State Let ${\\cal Y}_{\\theta^y}(\\cdot)$ be a neural network with parameter ${\\theta^y}$, defined on $\\mathbb{R}^d$ and valued in $\\mathbb{R}^{k}$, ${\\cal Z}_{\\theta^z}(\\cdot,\\cdot)$ defined on $\\mathbb{R}^{+} \\times \\mathbb{R}^d$, $\\Psi_{\\theta^\\Psi}(\\cdot)=( \\Psi_{\\theta^\\Psi}^X(\\cdot),\\Psi_{\\theta^\\Psi}^Y(\\cdot) ,\\Psi_{\\theta^\\Psi}^Z(\\cdot) )$ defined on $\\mathbb{R}^{+}$ be neural networks with parameters $\\theta^z$, ${\\theta^\\Psi}$, taking values respectively in $\\mathbb{R}^{k\\times d}$ and $\\mathbb{R}^{d}\\times\\mathbb{R}^{k}\\times\\mathbb{R}^{k\\times d}$, so that $\\theta = ({\\theta^y}, \\theta^z, \\theta^\\Psi)$ is initialized with value $\\theta_0 = (\\theta^y_0, \\theta^z_0, \\theta^\\Psi_0)$.\n\\For{$m$ from 0 to $K$}\n\\State Sample $(\\xi^j)_{j=1,\\cdots,B}$ from $B$ independent copies of the initial condition $\\xi$.\n\\State Set $\\forall j\\in\\llbracket1,B\\rrbracket,\\ X_0^j(\\theta_m) = \\xi^j\\in\\mathbb{R}^d, Y_0^j(\\theta_m) = {\\cal Y}_{\\theta^y_m}(\\xi^j)\\in\\mathbb{R}^k$.\n\\For{$i$ from 0 to $N-1$}\n\\State $u_{i} = \\frac{1}{B}\\left( \\sum_{j=1}^{B} \\varphi_1(X^j_{i}(\\theta_m)), \\sum_{j=1}^{B} \\varphi_2(Y^j_{i}(\\theta_m)),\\sum_{j=1}^{B} \\varphi_3\\Big({\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j(\\theta_m)\\right)\\Big)\\right) $\n\\For{$j$ from 1 to $B$}\n\\State Sample $\\delta_i^j$ from a $d$-dimensional Gaussian vector.\n\\State $X_{i+1}^j(\\theta_m) = X_i^j(\\theta_m) + b\\left(t_i,X_i^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{\\theta^z_m}\\left(t_i,X_i^j(\\theta_m)\\right),\\Psi_{\\theta^\\Psi_m}(t_i)\\right) \\Delta t + \\sqrt{\\Delta t}\\ \\sigma\\left(t_i,X_i^j(\\theta_m),\\Psi_{\\theta^\\Psi_m}^X(t_i)\\right) \\delta_i^j$\n\\State $Y_{i+1}^j(\\theta_m) = Y_i^j(\\theta_m) - 
f\\left({t_{i}},X_{i}^j(\\theta_m),Y_i^j(\\theta_m),{\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j (\\theta_m)\\right),\\Psi_{\\theta^\\Psi_m}(t_i)\\right)\\ \\Delta t + \\sqrt{\\Delta t}\\ {\\cal Z}_{\\theta^z_m}\\left(t_i,X_{i}^j(\\theta_m)\\right) \\delta_i^j$\n\\EndFor\n\\EndFor\n\\State $\\overline{X_{N}}(\\theta_m) = \\frac{1}{B} \\sum_{j=1}^{B} \\varphi_1(X^j_{N}(\\theta_m))$,\n\\State $J(\\theta_m) = \\frac{1}{B}\\sum_{j=1}^{B} \\left(Y_N^j(\\theta_m) - g\\left(X_N^j(\\theta_m), \\overline{X_{N}}(\\theta_m)\\right) \\right)^2 + \\frac{\\lambda}{N}\\sum_{i=0}^{N-1}\\left(u_{i} - \\Psi_{\\theta^\\Psi_m}(t_i)\\right)^2$\n\\State Calculate $\\nabla J(\\theta_m)$ by back-propagation.\n\\State Update $\\theta_{m+1} =\\theta_m - \\rho_m \\nabla J(\\theta_m)$.\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\\end{itemize}\nWe will compare the performances of these techniques on several examples in Section \\ref{results}.\n\n\\section{A local solver}\\label{local}\n\nWe also propose a local method inspired by the Deep Backward Dynamic Programming introduced by \\cite{HPW19} and \\cite{phawarger20}. It considers local minimization problems between contiguous time steps. In this case there are as many networks as time steps. We replace a global optimization setting by a set of smaller problems.\n\\paragraph{}\nIn this method, for $i\\in\\llbracket 0, N-1\\rrbracket$, $Y_{{i}}$ and $Z_{{i}}$ are approximated by a pair of neural networks $({\\cal Y}^i_{\\theta_{i}^y}(\\cdot), {\\cal Z}^i_{\\theta_{i}^z}(\\cdot))$ with parameters $\\theta = (\\theta_{0}^y,\\theta_{0}^z,\\cdots,\\theta_{N-1}^y, \\theta_{N-1}^z)$. At iteration $m$, with $\\theta_m = (\\theta_{m,0}^y , \\theta_{m,0}^z,\\cdots, \\theta_{m,N-1}^y,$ $\\theta_{m,N-1}^z)$, we simulate $X_i(\\theta_m)$ with the previously computed parameters $\\theta_m$. 
\n\\begin{align}\\label{eq: dynamics X local}\n X_{i+1}^j(\\theta_{m}) =\\ & X_i^j(\\theta_{m}) + b\\bigg(t_i,X_i^j(\\theta_{m}),{\\cal Y}^i_{\\theta_{m,i}^y}\\left(X_i^j(\\theta_{m})\\right),{\\cal Z}^i_{\\theta_{m,i}^z}\\left(X_i^j(\\theta_{m})\\right),\\widetilde{u_{i}}\\bigg)\\ \\Delta t \\\\ +\\ & \\sigma\\left(t_{i},X_i^j(\\theta_{m}),\\widetilde{u_{i}}^{X}\\right) \\Delta W_i^j.\\nonumber\n\\end{align}\nThis first step allows us to find the areas visited by the controlled process. Using $R$ samples, we compute the empirical mean $\\overline{m_i} = \\frac{1}{R} \\left( \\sum_{j=1}^{R} X^j_{i}(\\theta_{m})\\right)$ and variance $V_i = \\frac{1}{R} \\sum_{j=1}^{R} \\left(X^j_{i}(\\theta_{m})\\right)^2 - \\frac{1}{R^2} \\left(\\sum_{j=1}^{R} X^j_{i}(\\theta_{m})\\right)^2$ of $X_i(\\theta_m)$. We estimate $u_t$ as in the Dynamic method \\eqref{eq: dynamic Monte-Carlo}: \n\\begin{align*}\n u_i =\\ & \\frac{1}{R} \\left(\\sum_{j=1}^{R} \\varphi_1(X^j_{i}(\\theta_{m})), \\sum_{j=1}^{R} \\varphi_2\\left({\\cal Y}^i_{\\theta_{m,i}^y}\\left(X_{i}^j(\\theta_{m})\\right)\\right), \\sum_{j=1}^{R} \\varphi_3\\left({\\cal Z}^i_{\\theta_{m,i}^z}\\left(X_{i}^j(\\theta_{m})\\right)\\right)\\right)\\\\\n \\widetilde{u_{i}} =\\ & \\frac{\\sum_{r=0}^{M-1} \\zeta_{i,r} + u_{i} }{M + 1}.\n\\end{align*}\nThe number of samples $R$ used for this estimation of the law with the previously computed parameters can a priori be different from the batch size $B$. In the numerical tests of Section \\ref{results} we use $B=100$ or $B=300$ for the backward optimization and a larger value $R= 50000$ for the Monte-Carlo forward estimation of the law. 
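The law-freezing step above can be sketched as follows: fit a componentwise Gaussian to the $R$ forward samples, then resample training points from it. A minimal numpy sketch (the helper name is ours; in the paper this feeds the backward local problems):

```python
import numpy as np

def gaussian_law_proxy(x_samples, batch_size, rng):
    """Fit N(mean, var) componentwise to the R forward samples of X_i and
    resample batch_size points from it (the frozen-law proxy of the Local
    solver sketch)."""
    mean = x_samples.mean(axis=0)
    var = x_samples.var(axis=0)  # (1/R) sum x^2 - ((1/R) sum x)^2
    noise = rng.standard_normal((batch_size, x_samples.shape[1]))
    return mean + np.sqrt(var) * noise
```

Sampling from the fitted Gaussian rather than reusing the forward particles is what decouples the backward local problems from the forward simulation.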
Then we solve backward problems to find $\\theta_{m+1}$ by sampling $B$ independent copies of $X_{i}$ through a Gaussian distribution $\\mathcal{N}(\\overline{m_i^m},V_i^m)$ with frozen parameters $\\theta_m$:\n\\begin{itemize}\n \\item First sample $B$ independent copies $X^1_N,\\cdots,X^B_N$ of $X_{N}$ following a Gaussian distribution $\\mathcal{N}(\\overline{m_N^m},V_N^m)$. The terminal value $Y^j_{N}$ is set to the terminal condition $g\\left(X_N^j ,\\widetilde{u_{N}}^{X}\\right)$.\n\\item For $i$ from $N-1$ to $0$:\\\\\nSample $B$ independent copies $X^1_i,\\cdots,X^B_i$ of $X_{i}$ following a Gaussian distribution $\\mathcal{N}(\\overline{m_i^m},$ $V_i^m)$. Diffuse according to the dynamics \\eqref{eq: dynamics X local} of $X$, starting from $X^1_i,\\cdots,X^B_i$, to obtain $X^1_{i+1},\\cdots,X^B_{i+1}$.\\\\\nSolve the local optimization problem \n\\begin{align*}\n\\min_{\\theta=(\\theta^y,\\theta^z)} \\frac{1}{B} \\sum_{j=1}^B & \\left|{\\cal Y}^{i+1}_{\\theta_{m+1,i+1}^y}\\left(X_{i+1}^j\\right) - {{\\cal Y}^i_{\\theta^y}\\left(X_i^j\\right)}+ f\\left({t_{i}},X_i^j,{{\\cal Y}^i_{\\theta^y}\\left(X_i^j\\right)},{{\\cal Z}^i_{\\theta^z}\\left(X_i^j\\right)},\\widetilde{u_{i}}\\right) \\Delta t \\right. \\\\ & \\left. - {{\\cal Z}^i_{\\theta^z}\\left(X_i^j\\right)} \\Delta W_{i}^j \\right|^2,\n\\end{align*}initializing the parameters at the value $\\theta_{m,i}$. We then denote by $\\theta_{m+1,i}$ the argmin of this minimization problem.\n\\item Repeat the previous steps for the iteration $m+1$ until reaching $K$ iterations.\n\\end{itemize}\n\nIn the version of the {\\bf local solver} given in algorithm \\ref{algo: local}, we use the dynamic update of the expectations introduced previously in the dynamic solver of Section \\ref{schemes}. 
In this algorithm $H$ stands for the number of gradient steps to perform at each step of the algorithm and $R$ is the number of samples for the estimation of the laws.\n\n\\begin{Rem}\nBecause we have to learn the dynamics of the forward process, the use of a backward resolution is not as obvious as in \\cite{HPBL18,BHPL19}. We have to alternate between forward dynamics estimations and backward resolutions. \\\\\nMore precisely, here we solve fully coupled FBSDEs, whereas the works \\cite{HPW19,phawarger20} consider decoupled FBSDEs, where the forward process $X$ can be simulated independently of $Y,Z$. Estimating the law of $X$ and sampling from a normal distribution allows us to decouple and solve locally the FBSDEs in areas visited by $X$ and its law. However, because of this freezing of the forward dynamics, another fixed point problem has to be solved. Other approaches for fully coupled FBSDEs like \\cite{HL18} and \\cite{JPPZ19} rely on the global machine learning method initiated by \\cite{HJE17}.\n\\end{Rem}\n\n\\begin{algorithm}[H]\n\\caption{Local solver}\\label{algo: local}\n\\begin{algorithmic}[1]\n\\State Let $({\\cal Y}^i_{\\theta^y_i}(\\cdot), {\\cal Z}^i_{\\theta^z_i}(\\cdot))$ be some neural networks defined on $\\mathbb{R}^d$ with values in $\\mathbb{R}^k\\times\\mathbb{R}^{k\\times d}$ for $i=0, \\cdots, N-1$ and parameters $\\theta = (\\theta^y_0,\\theta^z_0,\\cdots, \\theta^y_{N-1},\\theta^z_{N-1})$ initialized with values $\\theta_0 =(\\theta^y_{0,0},\\theta^z_{0,0},\\cdots, \\theta^y_{0,N-1},\\theta^z_{0,N-1}).$\n\\State Set $\\forall i\\in\\llbracket0,N\\rrbracket,\\ \\forall r\\in\\llbracket0,M-1\\rrbracket,\\ \\zeta_{i,r} = (\\mathbb{E}[\\varphi_1(\\xi)], \\varphi_2(0), \\varphi_3(0))$\n\\For{$m$ from 0 to $K$}\n\\State Sample $\\delta_i^j$ from a $d$-dimensional standard Gaussian vector, $i =0, \\cdots, N$, $j=1, \\cdots, R$.\n\\State Sample $(\\xi^j)_{j=1,\\cdots,R}$ from $R$ independent copies of the initial condition $\\xi$.\n\\State Set $\\forall 
j\\in\\llbracket1,R\\rrbracket,\\ X_0^j(\\theta_m) = \\xi^j \\in\\mathbb{R}^d.$\n\\For{$i$ from 0 to $N$} \\Comment{Forward estimation of the laws}\n\\State $l_{i}^m = \\frac{1}{R} \\left( \\sum_{j=1}^{R} X^j_{i}(\\theta_{m})\\right)$\n\\State $u_{i} = \\frac{1}{R} \\left( \\sum_{j=1}^{R} \\varphi_1(X^j_{i}(\\theta_{m})), \\sum_{j=1}^{R} \\varphi_2({\\cal Y}^i_{\\theta_{m,i}^y}\\left(X_i^j(\\theta_{m})\\right)),\n\\sum_{j=1}^{R} \\varphi_3\\Big({\\cal Z}^i_{\\theta_{m,i}^z}\\left(X_i^j(\\theta_{m})\\right)\\Big) \\right)$\n\\State $V_{i}^m = \\frac{1}{R} \\sum_{j=1}^{R} \\left(X^j_{i}(\\theta_{m})\\right)^2 - \\frac{1}{R^2} \\left(\\sum_{j=1}^{R} X^j_{i}(\\theta_{m})\\right)^2$\n\\State $\\widetilde{u_{i}} =\\frac{\\sum_{r=0}^{M-1} \\zeta_{i,r} + u_{i} }{M + 1}$\n\\State $\\zeta_{i,m\\% M} = u_{i}$\n\\For{$j$ from 1 to $R$}\n\\State $X_{i+1}^j(\\theta_{m}) = X_i^j(\\theta_{m}) + b\\bigg(t_i,X_i^j(\\theta_{m}),{\\cal Y}^i_{\\theta_{m,i}^y}\\left(X_i^j(\\theta_{m})\\right),{\\cal Z}^i_{\\theta_{m,i}^z}\\left(X_i^j(\\theta_{m})\\right),\\widetilde{u_{i}}\\bigg)\\ \\Delta t + \\sqrt{\\Delta t}\\ \\sigma\\left(t_{i},X_i^j(\\theta_{m}),\\widetilde{u_{i}}^{X} \\right) \\delta_i^j$\n\\EndFor\n\\EndFor\n\\For{$i$ from $N-1$ to 0} \\Comment{Backward resolution}\n\\State $\\hat \\theta_0= \\theta_{m,i}$ \n\\For{$h$ from 0 to $H-1$} \\Comment{Gradient descent with simulated data for $X$}\n\\For{$j$ from 1 to $B$}\n\\State Sample $\\Xi_i^j, \\Theta_i^j$ from $d$-dimensional standard Gaussian vectors.\n\\State $x_i^j =l^m_{i} + \\sqrt{V^m_i }\\ \\Theta_i^j$\n\\State $x_{i+1}^j = x_i^j + b\\left(t_i,x_i^j,{\\cal Y}^i_{\\hat \\theta_{h}^y}\\left(x_i^j\\right),{\\cal Z}^i_{\\hat \\theta_{h}^z}\\left(x_i^j\\right),\\widetilde{u_{i}}\\right) \\Delta t + \\sqrt{\\Delta t}\\ \\sigma\\left(t_{i},x_i^j,\\widetilde{u_{i}}^{X}\\right) \\Xi_i^j$\n\\If{$i = N-1$}\n\\State $Y_{i+1}^j = g\\left(x_N^j,\\widetilde{u_{N}}^{X}\\right)$\n\\Else\\State $Y_{i+1}^j = {\\cal Y}^{i+1}_{ 
\\theta_{m+1,i+1}}\\left(x_{i+1}^j\\right)$\n\\EndIf\n\\EndFor\n\\State $J^i(\\hat \\theta_{h}) = \\frac{1}{B}\\sum_{j=1}^{B} \\bigg( f\\left({t_{i}},x_i^j,{\\cal Y}^i_{\\hat \\theta_{h}^y}\\left(x_i^j\\right),{\\cal Z}^i_{\\hat \\theta_{h}^z}\\left(x_i^j\\right),\\widetilde{u_{i}}\\right)\\ \\Delta t +\nY_{i+1}^j - {\\cal Y}^i_{\\hat \\theta_{h}^y}\\left(x_i^j\\right) - \\sqrt{\\Delta t}\\ {\\cal Z}^i_{\\hat \\theta_{h}^z}\\left(x_i^j\\right) \\Xi_i^j \\bigg)^2$\n\\State Calculate $\\nabla J^i(\\hat \\theta_{h})$ by back-propagation.\n\\State Update $\\hat \\theta_{h+1} = \\hat \\theta_{h} - \\rho_{h} \\nabla J^i(\\hat \\theta_{h})$.\n\\EndFor\n\\State $\\theta_{m+1,i}= \\hat \\theta_{H}$\n\\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Numerical results}\\label{results}\nThe algorithms are implemented in Python with the Tensorflow library \\cite{TF16}. Each numerical experiment is conducted using a node composed of 2 Intel\u00ae Xeon\u00ae Gold 5122 processors, 192 GB of RAM, and 2 nVidia\u00ae Tesla\u00ae V100 16 GB GPUs. The multi-GPU parallelization of the global solver is conducted using the Horovod library \\cite{horovod}.\nThe methods we test are:\n\n\\begin{itemize}\n \\item \\textbf{Direct}: algorithm \\ref{algo: direct} on page \\pageref{algo: direct}. Batch size $B = 10000$.\n \\item \\textbf{Dynamic}: algorithm \\ref{algo: dynamic} on page \\pageref{algo: dynamic}. Batch size $B = 200$ and $M = 100$.\n \\item \\textbf{Expectation}: algorithm \\ref{algo: expectation} on page \\pageref{algo: expectation}. Batch size $B = 2000 $.\n \\item \\textbf{Local}: algorithm \\ref{algo: local} on page \\pageref{algo: local}. Batch size $B = 300\\ (\\mathrm{Weak}), B = 100\\ (\\mathrm{Pontryagin})$ and $M = 20 $, $R=50000$.\n\\end{itemize}\nIf the algorithm is applied to equations coming from the Pontryagin (abbreviated as Pont.) or the Weak approach, it is specified in its name. 
\n\n\\subsection{Linear price impact model}\n\nWe use a linear-quadratic mean-field game of controls model studied in \\cite{AVLCDC19} and \\cite{carmona2018probabilistic} for comparison. This model is useful for numerical tests since the analytic solution is known. The MFG of controls model for the representative player is given by:\n\\begin{equation}\\label{eq: price impact}\n\\begin{aligned}\n& \\min_{\\alpha\\in\\mathbb{A}}& &\\mathbb{E}\\left[\\int_0^T \\left(\\frac{c_\\alpha}{2}\\norme{\\alpha_t}^2 + \\frac{c_X}{2} \\norme{X_t}^2 - \\gamma X_t \\cdot u_t \\right)\\ \\mathrm{d} t + \\frac{c_g}{2} \\norme{X_T}^2\\ \\right]\\\\\n& \\text{subject to} & &X_t = x_0 + \\int_0^t \\alpha_s\\ \\mathrm{d} s + \\sigma\\ W_t\n\\end{aligned}\n\\end{equation}\ntogether with the fixed point condition $\\mathbb{E}[\\alpha_t] = u_t$. In this case, the mean-field interaction is exerted through the law of the control process.\\\\\nThe Pontryagin optimality principle gives the system:\n\\begin{equation}\\label{eq: weak price impact FBSDE}\n\\begin{cases}\n\\mathrm{d} X_t &= - \\frac{1}{c_\\alpha}Y_t\\ \\mathrm{d} t + \\sigma\\ \\mathrm{d} W_t\\\\\nX_0 &= x_0\\\\\n\\mathrm{d} Y_t &= - (c_X X_t + \\frac{\\gamma}{c_\\alpha}\\mathbb{E}[Y_t])\\ \\mathrm{d} t + Z_t\\ \\mathrm{d} W_t\\\\\nY_T &= c_g X_T.\n\\end{cases}\n\\end{equation}\nIn this case, the output $Z$ of the neural network is a matrix of size $d\\times d$ and $Y$ is a vector of size $d$. 
\\\\\nThe weak representation of the value function gives:\n\\begin{equation}\\label{eq: price impact FBSDE}\n\\begin{cases}\n\\mathrm{d} X_t &= - \\frac{1}{c_\\alpha}\\sigma^{-1}Z_t\\ \\mathrm{d} t + \\sigma\\ \\mathrm{d} W_t\\\\\nX_0 &= x_0\\\\\n\\mathrm{d} Y_t &= - \\left(\\frac{c_X}{2} \\norme{X_t}^2 + \\frac{\\gamma}{c_\\alpha}X_t\\cdot\\sigma^{-1}\\mathbb{E}[Z_t] + \\frac{1}{2c_\\alpha} \\norme{\\sigma^{-1} Z_t}^2\\right)\\ \\mathrm{d} t + Z_t\\ \\mathrm{d} W_t\\\\\nY_T &= \\frac{c_g}{2} \\norme{X_T}^2.\n\\end{cases}\n\\end{equation}\n In this case, the output $Z$ of the neural network is a vector of size $d$ and $Y$ is a scalar. Therefore we may be able to work in higher dimensions.\n\\begin{Rem}\nWith LQ models, the dynamics of $Y$ are linear in the Pontryagin approach and quadratic in the Weak approach. Thus the potentially high dimension of one method is counterbalanced by the complex dynamics of the other technique.\n\\end{Rem}\n\nFor our numerical experiments we take $c_X = 2, x_0 = 1, \\sigma = 0.7, \\gamma =2, c_\\alpha = 2\/3, c_g = 0.3$. If not stated otherwise, the simulations are conducted with $T = 1, d = 10, \\Delta t = 0.01$. \n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|l||*{5}{c|}}\\hline\n \\backslashbox[25mm]{Method}{$T$}\n &\\makebox[3em]{0.25}&\\makebox[3em]{0.75}&\\makebox[3em]{1.0}&\\makebox[3em]{1.5}\\\\ \\hline\\hline\n \\textbf{Reference} & \\textbf{0.7709} & \\textbf{0.1978} & \\textbf{0.0811} & \\textbf{0.0125}\\\\\n \\hline\n Pontryagin & 0.763 (1.3e-03) &\n0.187 (2.5e-03) &\n\\cellcolor{green} 0.075 (2.7e-03) &\n\\cellcolor{green} 0.012 (5.0e-03)\\\\ \\hline\n Dyn. Pont. & 0.762 (2.3e-03) &\n\\cellcolor{green} 0.189 (4.0e-03) &\n\\cellcolor{green} 0.078 (5.5e-03) &\n\\cellcolor{green} 0.013 (6.7e-03) \\\\ \\hline\nExp. Pont. ($0.1$) & 0.763 (1.6e-03) & \\cellcolor{red}0.604 (1.1e-01) & \\cellcolor{red} 0.729 (1.1e-01) & \\cellcolor{red}0.803 (1.5e-01)\\\\ \\hline\nExp. Pont. 
($1.$) & 0.762 (1.4e-03) & \\cellcolor{red} 0.251 (2.7e-02) & \\cellcolor{red}0.467 (7.6e-02) & \\cellcolor{red}0.639 (1.1e-01) \\\\ \\hline\nExp. Pont. ($10.$) & 0.763 (1.5e-03) & 0.216 (1.7e-02) &\n\\cellcolor{red} 0.275 (3.7e-02) & \\cellcolor{red}\n0.574 (1.7e-01) \\\\ \\hline\nExp. Pont. ($100.$) & \\cellcolor{green}0.776 (8.4e-03) & \\cellcolor{red}0.797 (1.1e-01) & \\cellcolor{red}1.042 (1.3e-01)\n & \\cellcolor{red}1.613 (2.6e-01)\n \\\\\\hline\n Weak & \\cellcolor{green}0.778 (2.0e-03) &\n\\cellcolor{green}0.200 (1.4e-02) &\n0.092 (2.9e-02) &\n0.025 (2.0e-02)\n\\\\ \\hline\n Dyn. Weak & \\cellcolor{green}0.775 (4.4e-03) &\n0.212 (2.0e-02) &\n0.083 (4.1e-02) &\n0.016 (5.6e-02)\n\\\\ \\hline\nExp. Weak ($0.1$) & \\cellcolor{red}0.877 (1.9e-02) & \\cellcolor{red} 0.654 (9.9e-02) & \\cellcolor{red} 0.595 (2.3e-01) & \\cellcolor{red} 0.28 (6.0e-01) \\\\ \\hline\nExp. Weak ($1.$) & \\cellcolor{red} 0.901 (2.2e-03) & \\cellcolor{red} 0.664 (9.8e-02) & \\cellcolor{red} 0.617 (1.0e-01) & \\cellcolor{red} 0.507 (1.9e-01) \\\\ \\hline\nExp. Weak ($10.$) & \\cellcolor{red} 0.887 (1.1e-02) & \\cellcolor{red} 0.698 (7.3e-02) & \\cellcolor{red} 0.6541 (6.3e-02)\n & \\cellcolor{red} 0.49 (2.3e-01) \\\\ \\hline\nExp. Weak ($100.$) & \\cellcolor{red} 0.887 (2.0e-03) & \\cellcolor{red} 0.650 (9.4e-02) & \\cellcolor{red} 0.602 (9.1e-02)\n & \\cellcolor{red} 0.492 (2.4e-01)\n \\\\ \\hline\nPontryagin Loc. & 0.767 (3.5e-04) &\n\\cellcolor{green} 0.189 (6.3e-04) &\n\\cellcolor{green} 0.076 (7.6e-04) &\n\\cellcolor{green} 0.011 (7.5e-04)\n \\\\ \\hline\n Weak Loc. & \\cellcolor{red} 0.944 (8.7e-04) &\n\\cellcolor{red}0.740 (2.6e-02) &\n\\cellcolor{red}0.692 (1.6e-02) &\n\\cellcolor{red}0.625 (2.2e-02)\n \\\\ \\hline\n \\end{tabular}\n \\caption{Mean of $\\mathbb{E}[X_T]$ over the 10 dimensions (and standard deviation) for several maturities $T$ (2000 iterations for global methods, 20000 iterations for local methods) on the price impact model \\eqref{eq: price impact}. 
For the expectation method, the value of the $\\lambda$ penalization parameter is given in parentheses.}\n \\label{tab: expectations}\n\\end{table}\n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|l||*{2}{c|}}\\hline\n Pontryagin & \\textbf{1877 s.} \\\\\\hline\n Dyn. Pontryagin & \\textbf{1336 s.} \\\\\\hline\n Exp. Pontryagin & \\textbf{1562 s.}\n \\\\\\hline\n Weak & \\textbf{2205 s.} \\\\\\hline\n Dyn. Weak & \\textbf{1605 s.} \\\\\\hline\n Exp. Weak & \\textbf{1670 s.} \\\\\\hline\n Pontryagin Loc. & 11627 s.\n \\\\\\hline\n Weak Loc. & 12689 s.\\\\\\hline\n \\end{tabular}\n \\caption{Duration times of the methods (2000 iterations for global methods, 20000 iterations for local methods) on the price impact model \\eqref{eq: price impact} with $T=1$ on one run.}\n \\label{tab: duration trader}\n\\end{table}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{traderPontplot2907Direct-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{traderPontplot2907Dynamic-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{Learning curves for direct (left) and dynamic (right) Pontryagin method on the price impact model \\eqref{eq: price impact}. The loss is the $L^2$ error between $Y_T$ and the terminal condition of the backward equation.}\n \\label{fig: learning Pont}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{LocalFiguresDirect-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{LocalFiguresWeakDirect-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{Learning curves for Local Pontryagin (left) and Local Weak (right) methods on the price impact model \\eqref{eq: price impact}. 
The loss is the sum of the local $L^2$ errors between the neural network $Y$ and the Euler discretization for all time steps.}\n \\label{fig: learning curve local}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{traderWeakDirect-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{traderWeakDynamic-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{Learning curves for direct (left) and dynamic (right) Weak method on the price impact model \\eqref{eq: price impact}. The loss is the $L^2$ error between $Y_T$ and the terminal condition of the backward equation.}\n \\label{fig: learning curve weak}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{traderControlPontryaginDirect1-0100-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{traderControlPontryaginDynamic1-0100-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{First coordinate of the optimal control evaluated on a sample path for direct (left) and dynamic (right) Pontryagin method after 2000 iterations on the price impact model \\eqref{eq: price impact} with $T=1$.}\n \\label{fig: control Pont}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{ControlPontLocal1010010-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{ControlWeakLocal1010010-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{First coordinate of the optimal control evaluated on a sample path for Local Pontryagin (left) and Local Weak (right) methods after 20000 iterations on the price impact model \\eqref{eq: price impact} with $T=1$.}\n \\label{fig: control local}\n\\end{figure}\n\n\\paragraph{}\nAll 
methods except Expectation Weak and Local Weak converge to the exact solution for small maturities.\nThese two solvers do not converge to the right solution for any time horizon.\n\nThe choice of the parameter $\\lambda$ strongly influences the output of the Pontryagin Expectation scheme, as observed in Table \\ref{tab: expectations}. The best results are obtained when $\\lambda$ is of order $10$, but we notice that a large range of values seems to work well for very small time horizons $T$. However, the Weak Expectation scheme never converges.\nWe do not test the expectation methods on the other test cases since they are less efficient than the other methods.\n\\paragraph{}\nWe see in Figure \\ref{fig: learning curve local} that the Pontryagin Local method needs more iterations than the Global method for the loss to stabilize. We cannot hope that more iterations will help the convergence of the Weak method, since the loss in the learning curves of Figure \\ref{fig: learning curve weak} reaches a plateau. The algorithms solving the system coming from the Pontryagin principle perform better than the others. The dynamic estimation of the expectation speeds up training and smooths the loss, as seen in Figure \\ref{fig: learning Pont} and Table \\ref{tab: duration trader}. As another accuracy test, we can also plot the optimal control, for which we have an analytical expression. We see in Figures \\ref{fig: control Pont} and \\ref{fig: control local} that the Global and Local Pontryagin methods perform well but that the Local Weak method does not seem to converge, which confirms what is observed in Table \\ref{tab: expectations}. 
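As an independent sanity check on the reference values of Table \\ref{tab: expectations}: taking expectations in the Pontryagin system turns the fixed point into a linear two-point boundary value ODE for the per-coordinate means, $\\bar{x}'' + \\frac{\\gamma}{c_\\alpha}\\bar{x}' - \\frac{c_X}{c_\\alpha}\\bar{x} = 0$ with $\\bar{x}(0) = x_0$ and $\\bar{x}'(T) = -\\frac{c_g}{c_\\alpha}\\bar{x}(T)$ (using $\\bar{y} = -c_\\alpha \\bar{x}'$), which is solvable in closed form. A minimal sketch (function name and keyword defaults are ours):

```python
import math

def mean_xT(T, x0=1.0, c_x=2.0, c_alpha=2.0 / 3.0, gamma=2.0, c_g=0.3):
    """Closed-form E[X_T] (per coordinate) for the price impact model, from
    the mean ODE x'' + (gamma/c_alpha) x' - (c_x/c_alpha) x = 0 with
    x(0) = x0 and the terminal condition x'(T) = -(c_g/c_alpha) x(T)."""
    p, q = gamma / c_alpha, c_x / c_alpha
    lam1 = (-p + math.sqrt(p * p + 4.0 * q)) / 2.0
    lam2 = (-p - math.sqrt(p * p + 4.0 * q)) / 2.0
    e1, e2 = math.exp(lam1 * T), math.exp(lam2 * T)
    r = c_g / c_alpha
    # Solve A + B = x0 and A e1 (lam1 + r) + B e2 (lam2 + r) = 0.
    ratio = -(e1 * (lam1 + r)) / (e2 * (lam2 + r))  # B / A
    A = x0 / (1.0 + ratio)
    return A * e1 + A * ratio * e2
```

With the parameters of this section, the function reproduces the reference row of Table \\ref{tab: expectations} for all four maturities.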
\n\n\\subsection{A one-dimensional mixed model}\\label{pop}\nWe consider the following one-dimensional example from \\cite{CL19b,AVLCDC19}:\n\\begin{align}\n\\begin{cases}\n \\mathrm{d} X_t & = - \\rho Y_t\\ \\mathrm{d} t + \\sigma\\ \\mathrm{d} W_t,\\ X_0 = x_0\\\\\n \\mathrm{d} Y_t & = \\arctan(\\mathbb{E}[X_t])\\ \\mathrm{d} t + Z_t\\ \\mathrm{d} W_t,\\ Y_T = \\arctan(X_T).\\label{eq: population}\n\\end{cases}\n\\end{align} This model comes from the Pontryagin principle applied to the mean-field game problem\n\\begin{align*}\n \\min_\\alpha\\ &\\mathbb{E}\\Big[\\int_0^T \\big(\\frac{1}{2\\rho} \\alpha_s^2 - X_s \\arctan( u_s)\\big)\\ \\mathrm{d} s + g(X_T)\\Big]\\\\\n & \\mathrm{d} X_t = \\alpha_t\\ \\mathrm{d} t + \\mathrm{d} W_t,\\ X_0 = x_0,\n\\end{align*} with the fixed point $u_s = \\mathbb{E}[X_s]$, and where $g$ is an antiderivative of $\\arctan$. \nWe take the same model parameters as in \\cite{CL19b} ($T=1$ and $x_0=1$) and obtain with all our methods, in Figure \\ref{fig: pop}, the same results as in their Figure 4. For the numerical resolution we choose $100$ time steps. Notice that we use 3 hidden layers with 11 neurons each, whereas \\cite{CL19b} uses 100 neurons per layer. 
We see that our smaller number of neurons is enough to solve this example.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=10cm]{Population.pdf}\n \\caption{Value of $Y_0$ as a function of parameter $\\rho$ for the model \\eqref{eq: population}}\n \\label{fig: pop}\n\\end{figure}\nThe durations of the methods are given in Table \\ref{tab: pop}, which illustrates again the speed gain from using the dynamic method.\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|l||c|}\n \\hline\n Direct & \\textbf{535 s.}\\\\\n \\hline\n Dynamic & \\textbf{425 s.}\\\\\n \\hline\n Local & \\textbf{10900 s.}\\\\\n \\hline\n \\end{tabular}\n \\caption{Duration times of the algorithms for model \\eqref{eq: population} on one run (2000 iterations for global methods, 20000 iterations for the local method)}\n \\label{tab: pop}\n\\end{table}\n\n\n\\subsection{Beyond the mean-field games case}\n\\paragraph{}\nIn this section we design non-Linear-Quadratic models in order to test the limitations of our methods. We construct general MKV FBSDEs with explicit solutions following a log-normal distribution. Let $X_t^i$ be defined by\n\n\\begin{align}\n\\label{forward}\n \\mathrm{d} X_t^i & = a^i X^i_t\\ \\mathrm{d} t + \\sigma^i_t X^i_t\\ \\mathrm{d} W^i_t, \\\\\n X^i_0 & = \\xi^i .\n\\end{align}\nWe obtain explicitly\n\\begin{align*}\n X^i_t & = \\xi^i e^{(a^i-\\frac{(\\sigma^i)^2}{2}) t +\\sigma^i W^i_t}, \\\\\n g^i_t & := \\mathbb{E}[ X^i_t] = \\xi^i e^{a^i t}, \\\\\n k^i_t & := \\mathbb{E}[ (X^i_t)^2] = (\\xi^i)^2 e^{(2a^i +(\\sigma^i)^2) t }. 
\n\\end{align*}\n\nWe choose $u: (t,x) \\mapsto e^{\\alpha t} \\log\\left( \\prod_{i=1}^d x^i \\right)$ and the following dynamics for $Y_t$\n\\begin{align*}\n Y_t & = e^{\\alpha t} \\log\\left(\\prod_{i} X^i_t\\right) = e^{\\alpha t} \\sum_i \\left[ \\log(\\xi^i)+ (a^i-\\frac{(\\sigma^i)^2}{2}) t + \\sigma^i W^i_t \\right],\n \\end{align*}\n such that\n \\begin{align*}\n c_t & :=\\mathbb{E}[ Y_t ] = e^{\\alpha t} \\sum_i \\left[ \\log(\\xi^i)+ \\left(a^i-\\frac{(\\sigma^i)^2}{2}\\right) t\\right],\\\\\n d_t & := \\mathbb{E}[Y_t^2] = e^{2 \\alpha t} \\left( \\left[\\sum_i \\left( \\log(\\xi^i)+ \\left(a^i-\\frac{(\\sigma^i)^2}{2}\\right) t\\right)\\right]^2 + \\sum_i (\\sigma^i)^2 t \\right).\n\\end{align*}\nAs we want $Y_t = u(t,X_t)$, we have $Z_t^i = \\sigma_t^i X^i_t\\, \\partial_{x_i} u(t,X_t)$, which gives:\n\\begin{align*}\n Z_t^i & = \\sigma^i_t e^{\\alpha t}, \\\\\n e_t^i & :=\\mathbb{E}[Z_t^i ] = \\sigma^i_t e^{\\alpha t},\\\\\n f_t^i & := \\mathbb{E}[(Z_t^i)^2 ] = (\\sigma^i_t)^2 e^{2\\alpha t}.\n\\end{align*}\nIntroducing \n\\begin{align*}\n \\phi(t,x) & := \\partial_t u + \\sum_{i} a^i x^i\\ \\partial_{x_i} u + \n \\sum_{i} \\frac{(\\sigma^i x^i)^2}{2}\\ \\partial^2_{x_i} u \\\\\n & = e^{\\alpha t} \\left( \\alpha \\log\\left(\\prod_{i} x^i\\right) + \\sum_i \\left(a^i - \\frac{(\\sigma^i)^2}{2}\\right)\\right),\n\\end{align*}\n$u(t,X_t)$ solves the PDE\n\\begin{align*}\n\\partial_t u + \\sum_{i} a^i x^i\\ \\partial_{x_i} u + \n\\sum_{i} \\frac{(\\sigma^i x^i)^2}{2}\\ \\partial^2_{x_i} u - \\phi(t,x)= 0.\n\\end{align*}\nThis semilinear PDE is related to the BSDE associated with the driver $f(t,x) = -\\phi(t,x)$ for the forward dynamics \\eqref{forward}.\\\\\nUsing some chosen $\\mathbb{R}^d$-valued functions $\\psi^i$ and $\\mathbb{R}^k$-valued functions $\\kappa$,\n we express all dynamics in a McKean-Vlasov setting:\n\\begin{equation}\n\\begin{cases}\n\\mathrm{d} X_t^i & = (a^i X^i_t + \\psi^i(Y_t, Z_t^i, \\mathbb{E}[X_t^i],\\mathbb{E}[(X_t^i)^2], \\mathbb{E}[Y_t], 
\\mathbb{E}[Y_t^2],\n\\mathbb{E}[Z_t^i], \\mathbb{E}[(Z^i_t)^2])\\\\ & - \\psi^i\\left(e^{\\alpha t} \\log\\left(\\prod_{i} X^i_t\\right), \\sigma^i_t e^{\\alpha t}, g_t^i,k_t^i,c_t,d_t,e_t^i,f_t^i\\right))\\ \\mathrm{d} t + \\sigma^i_t X^i_t\\ \\mathrm{d} W^i_t \\\\\n X^i_0 & = \\xi^i\\\\\n\\mathrm{d} Y_t & = - f(t,X_t,Y_t,Z_t,\\mathbb{E}[X_t],\\mathbb{E}[X_t^2], \\mathbb{E}[Y_t], \\mathbb{E}[Y_t^2],\n\\mathbb{E}[Z_t], \\mathbb{E}[Z^2_t])\\ \\mathrm{d} t + Z_t\\ \\mathrm{d} W_t\\\\\nY_T & = e^{\\alpha T} \\log\\left(\\prod_{i} X^i_T\\right)\n\\end{cases} \n\\end{equation}\nwith\n\\begin{align*}\n& f(t,X_t,Y_t,Z_t,x_1,x_2,y_1,y_2,z_1,z_2 ) \\\\& = -\\phi(t,X_t) + \\kappa(Y_t, Z_t,x_1,x_2,y_1,y_2,z_1,z_2 ) -\n\\kappa\\left(e^{\\alpha t} \\log\\left(\\prod_{i} X^i_t\\right), \\sigma^i_t e^{\\alpha t}, g_t^i,k_t^i,c_t,d_t,e_t^i,f_t^i\\right),\n\\end{align*}\nwhere $f: \\mathbb{R}\\times\\mathbb{R}^d\\times\\mathbb{R}\\times\\mathbb{R}^{1\\times d}\\times\\mathbb{R}^d\\times\\mathbb{R}^d\\times\\mathbb{R}\\times\\mathbb{R}\\times\\mathbb{R}^{1\\times d}\\times\\mathbb{R}^{1\\times d}\\to \\mathbb{R}$.\\\\\nWe consider two models of this kind for numerical tests.\n\n\\subsubsection{A fully coupled linear example}\nWe consider a McKean-Vlasov FBSDE whose $X_t$ and $Y_t$ dynamics are linear in $Y_t$, $Z_t$ and their laws:\n\\begin{equation}\n\\begin{cases}\n\\mathrm{d} X_t^i & = (a^i X^i_t + b(Y_t + Z_t^i + \\mathbb{E}[X_t^i] + \\mathbb{E}[Y_t] +\n\\mathbb{E}[Z_t^i]) \\\\ & - b\\left(e^{\\alpha t} \\log\\left(\\prod_{i=1}^d X^i_t\\right)+ \\sigma^i_t e^{\\alpha t} + g_t^i + c_t + e_t^i\\right))\\ \\mathrm{d} t + \\sigma^i_t X^i_t\\ \\mathrm{d} W^i_t \\\\\n X^i_0 & = \\xi^i\\\\\n\\mathrm{d} Y_t & = \\bigg(\\phi(t,X_t) + b(Y_t + \\frac{1}{d}\\sum_{i=1}^d Z_t^i + \\frac{1}{d}\\sum_{i=1}^d\\mathbb{E}[X_t^i] + \\mathbb{E}[Y_t] + \\frac{1}{d}\\sum_{i=1}^d \\mathbb{E}[Z_t^i]) \\\\ & - b\\left(e^{\\alpha t} \\log\\left(\\prod_{i=1}^d X^i_t\\right) + \\frac{1}{d}\\sum_{i=1}^d \\sigma^i_t 
e^{\\alpha t} + \\frac{1}{d}\\sum_{i=1}^d g_t^i + c_t + \\frac{1}{d}\\sum_{i=1}^d e_t^i\\right) \\bigg)\\ \\mathrm{d} t+ Z_t\\ \\mathrm{d} W_t\\\\\nY_T & = e^{\\alpha T} \\log\\left(\\prod_{i=1}^d X^i_T\\right).\\label{eq: linear model}\n\\end{cases} \n\\end{equation}\n\nWe take $a=b=0.1,\\ \\alpha = 0.5, \\sigma = 0.4, \\xi = 1$.\n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|l||*{4}{c|}}\\hline\n \\backslashbox[25mm]{Method}{$T$}\n &\\makebox[3em]{0.25}&\\makebox[3em]{0.75}&\\makebox[3em]{1.0}&\\makebox[3em]{1.5}\\\\ \\hline\\hline\n \\textbf{Reference} & \\textbf{1.0253} & \\textbf{1.0779} & \\textbf{1.1052} & \\textbf{1.1618}\\\\\n \\hline\n Global & \\cellcolor{green}1.025 (1.8e-03) &\n1.076 (3.3e-03) &\n1.095 (3.9e-03) &\n\\cellcolor{green}1.162 (7.2e-03)\n\\\\ \\hline\n Dyn. Global & 1.026 (2.0e-03) &\n\\cellcolor{green}1.077 (3.6e-03) &\n\\cellcolor{green}1.105 (2.9e-03) &\n1.163 (4.8e-03)\n \\\\ \\hline Local & 1.025 (2.3e-04)&\n1.092 (5.0e-04)&\n1.146 (7.9e-04) &\n\\cellcolor{red}1.28 (1.4e-03)\n \\\\ \\hline\n \\end{tabular}\n \\caption{Mean of $\\mathbb{E}[X_T]$ over the 10 dimensions (and standard deviation) for several maturities $T$ (2000 iterations for global methods, 20000 iterations for the local method) on the fully coupled linear model \\eqref{eq: linear model}.}\n \\label{tab: res linear }\n\\end{table}\n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|c|c|c|}\\hline\n Global & Dynamic Global & Local \\\\ \\hline\n \\textbf{2081 s.} &\n \\textbf{1308 s.} &\n 14811 s. \\\\ \\hline\n \\end{tabular}\n \\caption{Duration times of the methods (2000 iterations for global methods, 20000 iterations for the local method) on the fully coupled linear model \\eqref{eq: linear model} for $T=1$.}\n\\end{table}\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=.49\\linewidth]{linearDirect-eps-converted-to.pdf}\n \\caption{Learning curves for the Local method on the fully coupled linear model \\eqref{eq: linear model}. 
The loss is the sum of the local $L^2$ errors between the neural network $Y$ and the Euler discretization for all time steps.}\n \\label{fig: learning curve lin loc}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{linZcustomDynamic1010010-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{linearZcustomLocal1010010-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{First coordinate of $Z_t$ evaluated on a sample path for the\n Dynamic Global (left) and \n Local (right) methods after 2000 iterations (respectively 20000) on the fully coupled linear model \\eqref{eq: linear model}.}\n \\label{fig: z lin loc}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{linYcustomDirect1010010-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{linearYcustomLocal1010010-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{First coordinate of $Y_t$ evaluated on a sample path for the Direct Global (left) and Local (right) methods\n after 2000 iterations (respectively 20000) on the fully coupled linear model \\eqref{eq: linear model}. }\n \\label{fig: y lin loc}\n\\end{figure}\n\n\\paragraph{}\nThe three algorithms demonstrate good performance on this test case. Both processes $Y, Z$ are well represented by the neural network. However, the Local method is less precise than the Global methods as the maturity grows. We see in Table \\ref{tab: res linear }, Figure \\ref{fig: z lin loc}, Figure \\ref{fig: y lin loc} that the Local method is biased when $T=1$, while the Global methods achieve high accuracy. 
The results of the Local method seemingly cannot be improved, since the loss flattens in Figure \\ref{fig: learning curve lin loc}.\n\n\\subsubsection{A fully coupled quadratic example}\nWe consider a McKean-Vlasov FBSDE whose $X_t$ and $Y_t$ dynamics are quadratic in $Y_t$, $Z_t$ and their laws:\n\\begin{equation} \n\\begin{cases}\n\\mathrm{d} X_t^i & = \\Big(a^i X^i_t + b(Y_t + Z_t^i + \\mathbb{E}[X_t^i] + \\mathbb{E}[Y_t] +\n\\mathbb{E}[Z_t^i]) \\\\ & - b\\left(e^{\\alpha t} \\log\\left(\\prod_{i=1}^d X^i_t\\right)+ \\sigma^i_t e^{\\alpha t} + g_t^i + c_t + e_t^i\\right) \\\\ & + c\\big(Y_t^2 + (Z_t^i)^2 + \\mathbb{E}[(X_t^i)^2] + \\mathbb{E}[Y_t^2] +\n\\mathbb{E}[(Z_t^i)^2]\\big) \\\\ & - c\\left(e^{2\\alpha t} \\log\\left(\\prod_{i=1}^d X^i_t\\right)^2+ (\\sigma^i_t)^2 e^{2\\alpha t} + (g_t^i)^2 + c_t^2 + (e_t^i)^2\\right) \\Big)\\ \\mathrm{d} t + \\sigma^i_t X^i_t\\ \\mathrm{d} W^i_t \\\\\n X^i_0 & = \\xi^i\\\\\n\\mathrm{d} Y_t & = \\Bigg(\\phi(t,X_t) + b(Y_t + \\frac{1}{d}\\sum_{i=1}^d Z_t^i + \\frac{1}{d}\\sum_{i=1}^d\\mathbb{E}[X_t^i] + \\mathbb{E}[Y_t] + \\frac{1}{d}\\sum_{i=1}^d \\mathbb{E}[Z_t^i]) \\\\ & - b\\left(e^{\\alpha t} \\log\\left(\\prod_{i=1}^d X^i_t\\right) + \\frac{1}{d}\\sum_{i=1}^d \\sigma^i_t e^{\\alpha t} + \\frac{1}{d}\\sum_{i=1}^d g_t^i + c_t + \\frac{1}{d}\\sum_{i=1}^d e_t^i\\right) \\\\ & + c(Y_t^2 + \\frac{1}{d}\\sum_{i=1}^d (Z_t^i)^2 + \\frac{1}{d}\\sum_{i=1}^d\\mathbb{E}[(X_t^i)^2] + \\mathbb{E}[Y_t^2] + \\frac{1}{d}\\sum_{i=1}^d \\mathbb{E}[(Z_t^i)^2]) \\\\ & - c\\left(e^{2\\alpha t} \\log\\left(\\prod_{i=1}^d X^i_t\\right)^2 + \\frac{1}{d}\\sum_{i=1}^d (\\sigma^i_t)^2 e^{2\\alpha t} + \\frac{1}{d}\\sum_{i=1}^d (g_t^i)^2 + c_t^2 + \\frac{1}{d}\\sum_{i=1}^d (e_t^i)^2\\right)\\Bigg)\\ \\mathrm{d} t\\\\ &+ Z_t\\ \\mathrm{d} W_t\\\\\nY_T & = e^{\\alpha T} \\log\\left(\\prod_{i=1}^d X^i_T\\right).\\label{eq: quadratic model}\n\\end{cases} \n\\end{equation} \n\nWe take $a=b=c=0.1,\\ \\alpha = 0.5, \\sigma = 0.4, \\xi = 1$.\n\n\\begin{table}[H]\n 
\\centering\n \\begin{tabular}{|l||*{4}{c|}}\\hline\n \\backslashbox[25mm]{Method}{$T$}\n &\\makebox[3em]{0.25}&\\makebox[3em]{0.75}&\\makebox[3em]{1.0}&\\makebox[3em]{1.5}\\\\ \\hline\\hline\n \\textbf{Reference} & \\textbf{1.0253} & \\textbf{1.0779} & \\textbf{1.1052} & \\textbf{1.1618}\\\\\n \\hline\n Global & 1.024 (1.8e-03) &\n1.065 (4.3e-03) &\n\\cellcolor{red}12.776 (3.3e-02) & \\cellcolor{red}DV\n\\\\ \\hline\n Dyn. Global & \\cellcolor{green}1.025 (2.1e-03) &\n\\cellcolor{green}1.072 (3.1e-03) &\n0.961 (7.0e-03) & \\cellcolor{red}DV\n \\\\ \\hline\n Local & 1.024 (1.6e-04) &\n\\cellcolor{red}-7.180 (9.0e-04) &\n0.411 (1.1e-03) \n & \\cellcolor{red}DV\n \\\\ \\hline\n \\end{tabular}\n \\caption{Mean of $\\mathbb{E}[X_T]$ over the 10 dimensions (and standard deviation) for several maturities $T$ (2000 iterations for global methods, 20000 iterations for the local method) on the fully coupled quadratic model \\eqref{eq: quadratic model}. DV indicates divergence.}\n \\label{tab: quadratic}\n\\end{table}\n\n\\begin{table}[H]\n \\centering\n \\begin{tabular}{|c|c|c|}\\hline\n Global & Dynamic Global & Local \\\\ \\hline\n \\textbf{2072 s.} &\n \\textbf{1309 s.} &\n 14823 s. \\\\ \\hline\n \\end{tabular}\n \\caption{Duration times of the methods (2000 iterations for global methods, 20000 iterations for the local method) on the fully coupled quadratic model \\eqref{eq: quadratic model} for $T=1$.}\n\\end{table}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{quadquadDirect-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{quadquadDynamic-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{Learning curves for the Direct Global (left) and Dynamic Global (right) methods on the fully coupled quadratic model \\eqref{eq: quadratic model}. 
The loss is the $L^2$ error between $Y_T$ and the terminal condition of the backward equation.}\n \\label{fig: loss 2}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.49\\linewidth]{quadraticDirect-eps-converted-to.pdf}\n \\caption{Learning curves for the Local method on the fully coupled quadratic model \\eqref{eq: quadratic model}. The loss is the sum of the local $L^2$ errors between the neural network $Y$ and the Euler discretization for all time steps.}\n \\label{fig: learning local quadratic}\n\\end{figure}\n\n\n\n\\begin{figure}[H]\n \\centering \n \\includegraphics[width=0.49\\linewidth]{quadlocalnanExpectationsDirect-eps-converted-to.pdf}\n \\caption{$\\mathbb{E}[X_T]$ for the Local method on the fully coupled quadratic model \\eqref{eq: quadratic model}.}\n \\label{fig: negative exp}\n\\end{figure}\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{quadZcustomDirect0757510-eps-converted-to.pdf}\n \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{ZcustomLocal0252510-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{First coordinate of $Z_t$ evaluated on a sample path for the Direct Global (left, $T=0.75$) and\n Local (right, $T=0.25$) methods after 2000 iterations (respectively 20000) on the fully coupled quadratic model \\eqref{eq: quadratic model}.}\n \\label{fig: Z quadratic}\n\\end{figure}\n\n\n\\begin{figure}[H]\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{quadYcustomDynamic0757510-eps-converted-to.pdf} \\end{minipage} \\hfill\n \\begin{minipage}[c]{.49\\linewidth}\n \\includegraphics[width=0.9\\linewidth]{YcustomLocal0252510-eps-converted-to.pdf}\n \\end{minipage}\n \\caption{First coordinate of $Y_t$ evaluated on a sample path for the\n Dynamic Global (left, $T=0.75$) and Local (right, $T=0.25$) methods after 2000 iterations (respectively 20000 for the Local method) on the fully coupled quadratic model \\eqref{eq: quadratic model}.}\n \\label{fig: better Y}\n\\end{figure}\n\n\\paragraph{}\nWe observe in Table \\ref{tab: quadratic} convergence of the methods for small maturities and divergence beyond $T=1$. Note that the dynamic estimation of the expectation prevents the algorithm from exploding for $T=1$, contrary to the direct method. However, it does not converge to the true solution in this case. Indeed, the loss plateaus at the value 2 in Figure \\ref{fig: loss 2} (right), so the terminal condition of the BSDE is not properly satisfied. The dynamic method also produces better results than the Direct one (see Figure \\ref{fig: better Y} and Table \\ref{tab: quadratic}). We notice from Figure \\ref{fig: Z quadratic} and Figure \\ref{fig: better Y} that the estimated $Y,Z$ processes have the right shape, but some errors remain after convergence.\n\\paragraph{}\nConcerning the local method, we see in Figure \\ref{fig: negative exp} that the estimated expectations are stable around zero for a few iterations but then become negative. This may be due to the lack of a contraction property in the fixed-point problem. The loss explodes for $T=1.5$, as seen in the learning curve of Figure \\ref{fig: learning local quadratic}. For $T=1$, the loss stays above $10$.\n\n\\section{Conclusion}\nWe have shown that neural network methods can solve some moderate-dimensional FBSDEs of McKean-Vlasov type.\nComparing the different algorithms, we find that\n\\begin{itemize}\n \\item The dynamic update of the expectation is efficient in terms of computation speed (about 30\\% faster than the direct method) and seems to smooth the learning curve.\n \\item The Pontryagin approach performs better than the Weak one for large maturities. Conversely, the Weak approach is the best for small maturities.\n \\item For the linear model we observe no convergence problem, whereas for the quadratic one we can solve the equation only on a small time horizon. 
However, the Local method is not very accurate for larger maturities.\n \\item The Local method faces more difficulties for quadratic problems than the global methods do. It also requires more iterations, hence more time, to converge.\n \\item The Expectation methods do not work well and require an empirical choice of a proper penalization parameter, which is troublesome.\n \\item The methods can be used in dimension 10, and can thus be applied to more realistic problems than usual. For instance, in the price impact model, the dimension corresponds to the number of assets involved in the trading. Thus, developing methods able to deal with high-dimensional problems can help us handle large portfolios.\n\\end{itemize}\nWe recommend the use of the Dynamic method, which offers the best accuracy and training speed among all the tested methods. For linear quadratic mean-field games, it appears better to use the Weak approach for small maturities and the Pontryagin method for larger time horizons. 
The use of a local method is possible but requires too many iterations to converge hence it is not competitive in terms of computation time.\n\n\\printbibliography\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nIn recent years, the modeling of nanoscale physics has become important,\nboth because of industrial applications~\\cite{Denis2002,Veinot2002,Moller2001,Wijnhoven1998},\nand because of the development of experiments that probe these small scales~\\cite{WolfBook}.\nOne particular problem is the modeling of aggregation, in which microscopic\nparticles collapse under the potential they exert on each other, and form\nmesoscopic structures that in turn behave like particles.\n\nIn a series of papers, Holm, Putkaradze and Tronci~\\cite{Darryl_eqn1,Darryl_eqn2_0,Darryl_eqn2_1,Darryl_eqn3,Darryl_eqn5,Darryl_eqn6}\nhave focused on the derivation of aggregation equations that\npossess emergent singular solutions. Continuum aggregation equations have\nbeen used\nto model gravitational collapse and the subsequent emergence of stars~\\cite{ChandraStars},\nthe localization of biological populations~\\cite{KellerSegel1970,Segel1985,Topaz2006},\nand the self-assembly of nanoparticles~\\cite{Putkaradze2005}. These are\ncomplexes of atoms or molecules that form mesoscale structures with particle-like\nbehavior.\nThe utility of the Holm--Putkaradze model lies in its emphasis on non-local\nphysics, and the emergence of singular solutions from smooth initial data.\n Because of the singular (delta-function) behavior of the model, it is an\n appropriate way to describe the universal phenomena of \naggregation and the subsequent formation of particle-like structures. 
Indeed\nin this framework, it is possible to prescribe the dynamics of the particle-like\nstructures after collapse.\nThus, the model\nprovides a description of directed self-assembly in nanophysics~\\cite{Xia2004,Putkaradze2005},\nin which the detailed physics is less important than the effective medium\nproperties of the dynamics.\n\nIn this work we focus on equations introduced by Holm, Putkaradze and Tronci\nfor the aggregation of oriented particles~\\cite{Darryl_eqn1,Darryl_eqn3}.\n We treat the\ninitial state of the system as a continuum, a good approximation in nanophysics\napplications~\\cite{Forest2007}.\n One realization of this problem is in nanomagnetics, in which particles\n with a definite magnetic moment collapse and form mesoscale structures,\n that in turn have a definite magnetic moment. Thus, in this paper we refer\n to the orientation vector in our continuum picture as the \\emph{magnetization}.\n We investigate these equations numerically and study their evolution and\n aggregation properties. One aspect of non-local problems, already mentioned\n in~\\cite{Darryl_eqn6}, is the effect of competition between the length\n scales of non-locality\n on the system evolution. We shall highlight this effect with a linear\n stability analysis of the full density-magnetization equations.\n\nThis paper is organized as follows. In Sec.~\\ref{sec:gilbert} we introduce\na non-local Gilbert (NG) equation to describe non-local interactions in a\nmagnetic\nsystem. We investigate the competition between the system's two length scales\nof non-locality. In Sec.~\\ref{sec:mag_dens} we\nintroduce a coupled density-magnetization system that generates singular\nsolutions. We examine the competition of length scales through a linear\nstability analysis and through the study of the dynamical equations for a\nsimple singular solution that describes the interaction of two particle-like\nobjects (clumpons). 
We perform numerical simulations that\nhighlight the emergence of singular solutions from smooth initial data. \nWe draw our conclusions in Sec.~\\ref{sec:conclusions}.\n\n\n\\section{The Non-local Gilbert Equation}\n\\label{sec:gilbert}\n\nIn this section we study a magnetization equation that in form is similar\nto the Gilbert equation, that is, the Landau--Lifshitz--Gilbert equation\nin\nthe over-damped limit~\\cite{GilbertIEEE,Weinan2000}. The equation we focus\non incorporates\nnon-local effects, and was introduced in~\\cite{Darryl_eqn1}. We study the\nevolution and energetics of this equation, and examine the importance of\nthe problem length scales in determining the evolution.\n\nWe study the following non-local Gilbert (NG) equation,\n\\begin{equation}\n\\frac{\\partial\\bm{m}}{\\partial t} = \\bm{m}\\times\\left(\\bm{\\mu}_m\\times\\frac{\\delta{E}}{\\delta\\bm{m}}\\right),\n\\label{eq:mag_eqn}\n\\end{equation}\nwhere $\\bm{m}$ is the magnetization density, $\\bm{\\mu}_m$ is the mobility,\ndefined as\n\\[\n\\bm{\\mu}_m = \\left(1-\\beta^2\\partial_x^2\\right)^{-1}\\bm{m},\n\\]\nand $\\delta E\/\\delta\\bm{m}$ is the variational derivative of the energy,\n\\[\n\\frac{\\delta E}{\\delta\\bm{m}} = \\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}.\n\\]\nThe smoothened magnetization $\\bm{\\mu}_m$ and the force $\\delta{E}\/\\delta{\\bm{m}}$\ncan be computed using the theory of Green's functions. In particular,\n\\[\n\\bm{\\mu}_m\\left(x,t\\right)=\\int_\\Omega{dy} H_{\\beta}\\left(x-y\\right)\\bm{m}\\left(y,t\\right):=H_{\\beta}*\\bm{m}\\left(x,t\\right).\n\\]\nHere $*$ denotes the convolution of functions, and the kernel $H_{\\beta}\\left(x\\right)$\nsatisfies the equation\n\\begin{equation}\n\\left(1-\\beta^2\\frac{d^2}{dx^2}\\right)H_\\beta\\left(x\\right)=\\delta\\left(x\\right).\n\\label{eq:kernel}\n\\end{equation}\nThe function $\\delta\\left(x\\right)$ is the Dirac delta function. 
Equation~\\eqref{eq:kernel}\nis solved subject to conditions imposed on the boundary of the domain $\\Omega$.\n In this paper we shall work with a periodic domain $\\Omega=\\left[-L\/2,L\/2\\right]$\n or $\\Omega=\\left[0,L\\right]$,\n although other boundary conditions are possible. Note that Eq.~\\eqref{eq:mag_eqn}\n has a family of non-trivial equilibrium states given by\n\\[\n\\bm{m}_{\\mathrm{eq}}\\left(x\\right)=\\bm{m}_0\\sin\\left(kx+\\phi_0\\right),\n\\]\nwhere $\\bm{m}_0$ is a constant vector, $k$ is some wave number, and $\\phi_0$\nis a constant phase. The derivation of this solution is subject to the boundary\nconditions discussed in Sec.~\\ref{sec:mag_dens}.\n\nBy setting $\\beta=0$ and replacing $\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}$\nwith $-\\partial_x^2$, we recover the more familiar Landau--Lifshitz--Gilbert\nequation, in the overdamped limit~\\cite{GilbertIEEE},\n\\begin{equation}\n\\frac{\\partial\\bm{m}}{\\partial t} = -\\bm{m}\\times\\left(\\bm{m}\\times\\frac{\\partial^2\\bm{m}}{\\partial{x}^2}\\right).\n\\label{eq:gilbert}\n\\end{equation}\nEquation~\\eqref{eq:mag_eqn} possesses several features that will be useful\nin understanding the numerical simulations. 
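On a periodic domain the inverse Helmholtz operators $\left(1-\alpha^2\partial_x^2\right)^{-1}$ and $\left(1-\beta^2\partial_x^2\right)^{-1}$ appearing above can be applied spectrally, dividing each Fourier mode by $1+\alpha^2 k^2$; this is equivalent to the convolution with the kernel of Eq.~\eqref{eq:kernel}. A minimal Python sketch (the grid size and the value of $\alpha$ are illustrative choices, not values from our simulations):

```python
import numpy as np

# Spectral application of (1 - alpha^2 d^2/dx^2)^{-1} on a periodic
# domain: each Fourier mode is divided by 1 + alpha^2 k^2, which is
# equivalent to convolution with the Green's function of the Helmholtz
# problem.  Grid size and alpha are illustrative choices.
L, n, alpha = 2 * np.pi, 256, 0.3
x = np.linspace(0.0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers

def helmholtz_inverse(m, alpha):
    """Apply (1 - alpha^2 d_xx)^{-1} to samples of a periodic function."""
    return np.real(np.fft.ifft(np.fft.fft(m) / (1.0 + alpha**2 * k**2)))

# A single Fourier mode is an eigenfunction:
# (1 - alpha^2 d_xx)^{-1} cos(3x) = cos(3x) / (1 + 9 alpha^2).
m = np.cos(3 * x)
out = helmholtz_inverse(m, alpha)
print(np.allclose(out, m / (1 + 9 * alpha**2)))
```

The eigenfunction property visible here is also why the sinusoidal profiles above are equilibria: for such data $\bm{\mu}_m$ and $\delta E/\delta\bm{m}$ are both parallel to $\bm{m}$, so the double cross product in Eq.~\eqref{eq:mag_eqn} vanishes.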
There is an energy\nfunctional\n\\begin{equation}\nE\\left(t\\right)=\\tfrac{1}{2}\\int_\\Omega{dx}\\bm{m}\\cdot\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m},\n\\label{eq:energy}\n\\end{equation}\nwhich evolves in time according to the relation\n\\begin{eqnarray}\n\\frac{dE}{dt}&=&\\int_\\Omega{dx}\\left[\\bm{\\mu}_m\\cdot\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]\\left[\\bm{m}\\cdot\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]\\nonumber\\\\\n&\\phantom{a}&\\phantom{aaaaaaaaaaaaaaaaaaaaaadaaaa}\n-\\int_\\Omega{dx}\\left(\\bm{\\mu}_m\\cdot\\bm{m}\\right)\\left[\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]^2,\\nonumber\\\\\n&=&-\\int_\\Omega{dx}\\left[\\bm{m}\\times\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]\\cdot\\left[\\bm{\\mu}_m\\times\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right].\n\\label{eq:dt_energy}\n\\end{eqnarray}\nThis is not necessarily a non-increasing function of time, although setting $\\beta=0$\ngives\n\\begin{eqnarray}\n\\left(\\frac{dE}{dt}\\right)_{\\beta=0}&=&\\int_\\Omega{dx}\\left[\\bm{m}\\cdot\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]^2-\n\\int_\\Omega{dx}\\bm{m}^2\\left[\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]^2,\\nonumber\\\\\n&=&\\int_\\Omega{dx}\\bm{m}^2\\left[\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\\right]^2\\left(\\cos^2\\varphi-1\\right)\\leq0,\n\\label{eq:dt_energy_beta0}\n\\end{eqnarray}\nwhere $\\varphi$ is the angle between $\\bm{m}$ and $\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}$.\n In the special case when $\\beta\\rightarrow0$, we therefore expect $E\\left(t\\right)$\n to be a non-increasing function of time. On the other hand, inspection\n of Eq.~\\eqref{eq:dt_energy} shows that as $\\alpha\\rightarrow0$, the energy\n tends to a constant.\nAdditionally, the magnitude of the vector $\\bm{m}$ is conserved. 
This can\n be shown by multiplying Eq.~\\eqref{eq:mag_eqn} by $\\bm{m}$, and by exploiting\n the antisymmetry of the cross product. Thus, we are interested only in\n the orientation of the vector $\\bm{m}$; this can be parametrized by two\n angles on the sphere:\n\\begin{figure}\n \\scalebox{0.4}[0.4]{\\includegraphics*[viewport=0 0 420 330]{fig1.pdf}}\n\\caption{(Color online) The initial data for the magnetization equation~\\eqref{eq:mag_eqn}.\n This initialization is obtained by allowing the orientation angles\n of the magnetization vector to vary sinusoidally in space, as in Eq.~\\eqref{eq:initial}.\n Here the wave number of the variation is equal to the fundamental wave\n number\n $2\\pi\/L$.}\n\\label{fig:initial}\n\\end{figure}\nthe azimuthal angle $\\theta\\left(\\bm{x},t\\right)$, and the polar angle $\\phi\\left(\\bm{x},t\\right)$,\nwhere\n\\begin{equation}\nm_x=|\\bm{m}|\\cos\\phi\\sin\\theta,\\qquad\nm_y=|\\bm{m}|\\sin\\phi\\sin\\theta,\\qquad\nm_z=|\\bm{m}|\\cos\\theta,\n\\label{eq:spherical_polars}\n\\end{equation}\nand where $\\phi\\in\\left[0,2\\pi\\right)$, and $\\theta\\in\\left[0,\\pi\\right]$.\n\nWe carry out numerical simulations of Eqs.~\\eqref{eq:mag_eqn} and~\\eqref{eq:gilbert}\non a periodic domain $\\left[0,L\\right]$, and outline the findings in\nwhat\nfollows. Motivated by the change of coordinates~\\eqref{eq:spherical_polars},\nwe choose the initial data\n\\begin{equation}\n\\phi_0\\left(x\\right)=\\pi\\left(1+\\sin\\left(2r\\pi x\/L\\right)\\right),\\qquad\n\\theta_0\\left(x\\right)=\\tfrac{1}{2}\\pi\\left(1+\\sin\\left(2\\pi s x\/L\\right)\\right),\n\\label{eq:initial}\n\\end{equation}\nwhere $r$ and $s$ are integers. These data are shown in Fig.~\\ref{fig:initial}.\n\n\\emph{Case 1: Numerical simulations of Eq.~\\eqref{eq:gilbert}.}\nEquation~\\eqref{eq:gilbert} is usually solved by explicit or implicit finite\ndifferences~\\cite{Weinan2000}. 
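One explicit finite-difference sweep for Eq.~\eqref{eq:gilbert} can be sketched as follows; the grid, time step and iteration count are illustrative choices rather than the values used in our simulations, and the magnetization is renormalized after each step since the explicit update preserves $|\bm{m}|$ only up to discretization errors.

```python
import numpy as np

# Explicit finite-difference scheme for the overdamped LLG equation
# dm/dt = -m x (m x d^2m/dx^2) on a periodic grid.  Grid, time step and
# iteration count are illustrative; |m| is renormalized after each step.
n = 128
dx = 2 * np.pi / n
dt = 1e-4                        # satisfies dt/dx^2 << 1 for stability
x = np.arange(n) * dx

def llg_step(m, dt, dx):
    lap = (np.roll(m, -1, axis=0) - 2 * m + np.roll(m, 1, axis=0)) / dx**2
    m = m - dt * np.cross(m, np.cross(m, lap))
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def energy(m, dx):
    grad = (np.roll(m, -1, axis=0) - m) / dx
    return 0.5 * np.sum(grad * grad) * dx

# Initial data in the spirit of Eq. (initial), with r = s = 1 and L = 2 pi.
phi = np.pi * (1 + np.sin(x))
theta = 0.5 * np.pi * (1 + np.sin(x))
m = np.stack([np.cos(phi) * np.sin(theta),
              np.sin(phi) * np.sin(theta),
              np.cos(theta)], axis=1)

e0 = energy(m, dx)
for _ in range(2000):
    m = llg_step(m, dt, dx)
print(energy(m, dx) < e0)        # the Dirichlet energy decays
```

For $|\bm{m}|=1$, the right-hand side $-\bm{m}\times(\bm{m}\times\partial_x^2\bm{m})$ equals the tangential projection of the Laplacian, so this sketch is a discretization of the harmonic map heat flow and the discrete Dirichlet energy decreases, as in Case~(1) below.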
We solve the equation by these methods, and\nby the explicit spectral method~\\cite{Zhu_numerics}. The accuracy and computational\ncost\nare roughly the same\nin each case, and for simplicity, we therefore employ explicit finite differences;\n it is this method we use throughout the paper.\n Given the initial conditions~\\eqref{eq:initial}, each component of the magnetization\n $\\bm{m}=\\left(m_x,m_y,m_z\\right)$ tends to a constant, the energy\n\\[\nE = \\tfrac{1}{2}\\int_\\Omega{dx}\\left|\\frac{\\partial\\bm{m}}{\\partial{x}}\\right|^2\n\\]\ndecays with time, and $\\left|\\bm{m}\\right|^2$ retains its initial value $|\\bm{m}|^2=1$.\n After some transience, the decay of the energy functional becomes exponential\n in time. These results are shown in Fig.~\\ref{fig:alpha_LL}.\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig2a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig2b.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig2c.pdf}}\n}\n\\caption{(Color online) Numerical simulations of Case~(1), the Landau--Lifshitz--Gilbert\nequation in the over-damped limit. In this case, the magnetization decays\nto a constant state.\nSubfigures (a) and (b) show the magnetization at times $t=0.03$ and $t=0.15$\nrespectively;\n(c) is the energy functional, which exhibits exponential decay after some\ntransience. The final orientation is $\\left(\\phi,\\theta\\right)=\\left(\\pi,\\pi\/2\\right)$.}\n\\label{fig:alpha_LL} \n\\end{figure}\n\n\\emph{Case 2: Numerical simulations of Eq.~\\eqref{eq:mag_eqn} with\n$\\alpha<\\beta$}. Given the smooth initial data~\\eqref{eq:initial},\nin time each component of the magnetization $\\bm{m}=\\left(m_x,m_y,m_z\\right)$\ntends to a constant, while the energy\n\\[\nE=\\tfrac{1}{2}\\int_\\Omega{dx}\\bm{m}\\cdot\\left(1-\\alpha^2\\partial_x^2\\right)^{-1}\\bm{m}\n\\]\ntends to a constant value. 
Given our choice of initial conditions, the energy\nin fact \\emph{increases} to attain this constant value. Again the quantity\n$\\left|\\bm{m}\\right|^2$ stays constant. These results are shown in Fig.~\\ref{fig:alpha_small}.\nWe find similar results when we set $\\alpha=0$.\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig3a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig3b.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig3c.pdf}}\n}\n\\caption{(Color online) Numerical simulations of Case~(2), the non-local\nGilbert equation\nwith $\\alpha<\\beta$. In this case, the energy increases to a constant\nvalue, and the magnetization becomes constant.\nSubfigures (a) and (b) show the magnetization\nat times $t=8$ and $t=40$; (c) is the energy functional. The final orientation\nis $\\left(\\phi,\\theta\\right)=\\left(\\pi,\\pi\/2\\right)$.}\n\\label{fig:alpha_small} \\end{figure}\n\n\\emph{Case 3: Numerical simulations of Eq.~\\eqref{eq:mag_eqn} with\n$\\alpha>\\beta$}. Given the smooth initial data~\\eqref{eq:initial}, in time\neach component of the magnetization $\\bm{m}=\\left(m_x,m_y,m_z\\right)$ develops\nfiner and finer scales. \nThe development of small scales is driven by the decrease of the\nenergy functional, which follows a power law at late times, and is reflected\nin snapshots of the power spectrum of the magnetization vector, shown in\nFig.~\\ref{fig:alpha_big}. 
As the system evolves, there is a transfer\nof large amplitudes to higher wave numbers.\n\\begin{table}[h!b!p!]\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nCase&Length scales&Energy&Outcome as $t\\rightarrow\\infty$&Linear Stability\\\\\n\\hline\n(1)&$\\beta=0$, $\\delta{E}\/\\delta{\\bm{m}}=-\\partial_x^2\\bm{m}$&Decreasing&Constant\nstate&Stable\\\\\n(2)&$\\alpha<\\beta$&Increasing&Constant state&Stable\\\\\n(3)&$\\alpha>\\beta$&Decreasing&Development of finer and finer scales&Unstable\\\\\n\\hline\n\\end{tabular}\n\\caption{Summary of the forms of Eq.~\\eqref{eq:mag_eqn} studied.}\n\\label{tab:table_summary}\n\\end{table}\nThis transfer slows down at late\ntimes, suggesting that the rate at which the solution roughens tends to zero\nas $t\\rightarrow\\infty$.\nThe evolution preserves the symmetry of the magnetization\nvector $\\bm{m}\\left(x,t\\right)$ under parity transformations. This is\nseen by comparing Figs.~\\ref{fig:initial} and~\\ref{fig:alpha_big}.\nThe energy is a decaying function of time, while the quantity\n$\\left|\\bm{m}\\right|^2$ stays constant. We find similar results for the case\nwhen $\\beta=0$.\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig4a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig4b.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig4c.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig4d.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.3}[0.3]{\\includegraphics*[viewport=0 0 420 330]{fig4e.pdf}}\n}\n\\caption{(Color online) Numerical simulations of Case~(3), the non-local\nGilbert equation\nwith $\\alpha>\\beta$. 
In this case, the energy decreases indefinitely, and\nthe magnetization vector develops finer and finer scales.\nSubfigures (a), (b), and (c) show the magnetization at time $t=10000$; (d)\nis the energy functional, which decreases in time as a power law at late\ntimes. Subfigure (e) shows the power spectrum of $m_x$;\nthe integer index $n$ labels the spatial scales: if $k_n$ is a wavenumber,\nthen the corresponding integer label is $n=k_nL\/2\\pi$.\n}\n\\label{fig:alpha_big} \n\\end{figure}\n\nThese results can be explained qualitatively as follows. In Case~(1), the\nenergy functional exacts a penalty for the formation of gradients. The energy\ndecreases with time and the system evolves into a state in\nwhich no magnetization gradients are present, that is, a constant state.\nOn the other hand, we have demonstrated that in Case~(2), when $\\alpha<\\beta$,\nthe energy increases to a constant value. Since in the non-local model the\nenergy functional represents the cost of forming smooth spatial structures,\nan increase in energy produces a smoother magnetization field, a process\nthat continues until the magnetization reaches a constant value. \nFinally, in Case~(3), when $\\alpha>\\beta$, the energy functional\ndecreases, and this decrease corresponds to a roughening of the magnetization\nfield, as seen in Fig.~\\ref{fig:alpha_big}. \nIn Sec.~\\ref{sec:mag_dens} we\nshall show that Case~(2) is stable to small perturbations around a constant\nstate, while Case~(3) is unstable. Furthermore, we note that Case~(2) and\nCase~(3) differ only by a minus sign in Eq.~\\eqref{eq:mag_eqn}, and are therefore\nrelated by time reversal.\nThese results are summarized in Table~\\ref{tab:table_summary}.\n\n\nThe solutions of Eqs.~\\eqref{eq:mag_eqn} and~\\eqref{eq:gilbert} do not become\nsingular. 
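The pointwise conservation of $\left|\bm{m}\right|^2$ and the decay of the energy are easy to reproduce in a small computation. The sketch below is an illustrative stand-in for our production code, not a restatement of it: it assumes the damped form $\partial_t\bm{m}=-\bm{m}\times\left(\bm{m}\times\partial_x^2\bm{m}\right)$ (Case~(1) with $\delta E/\delta\bm{m}=-\partial_x^2\bm{m}$), explicit Euler time stepping, and periodic boundary conditions; the grid, time step, and initial data are arbitrary choices.

```python
import numpy as np

# Illustrative stand-in for the production code: explicit finite differences
# for the damped form  m_t = -m x (m x m_xx)  (Case (1), delta E/delta m =
# -m_xx) with periodic boundary conditions.  Because m x (...) is orthogonal
# to m, each Euler step changes |m|^2 only at O(dt^2), while the energy
# E = (1/2) int |m_x|^2 dx decays.
N, L, dt, steps = 64, 2 * np.pi, 1.0e-3, 2000
dx = L / N
x = np.arange(N) * dx

theta = 0.5 * np.sin(x)                  # smooth, unit-magnitude initial data
m = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(x)])  # shape (3, N)

def laplacian(f):
    return (np.roll(f, -1, axis=-1) - 2.0 * f + np.roll(f, 1, axis=-1)) / dx**2

def energy(f):
    grad = (np.roll(f, -1, axis=-1) - np.roll(f, 1, axis=-1)) / (2.0 * dx)
    return 0.5 * np.sum(grad**2) * dx

E0 = energy(m)
for _ in range(steps):
    m = m - dt * np.cross(m, np.cross(m, laplacian(m), axis=0), axis=0)

E1 = energy(m)
norm_drift = np.abs((m**2).sum(axis=0) - 1.0).max()
```

With these (arbitrary) parameters the energy drops by over an order of magnitude during the run, while $\max_x\bigl||\bm{m}|^2-1\bigr|$ remains small, consistent with the Case~(1) behavior described above.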
This is not surprising: the manifest\nconservation of $\\left|\\bm{m}\\right|^2$ in Eqs.~\\eqref{eq:mag_eqn}\nand~\\eqref{eq:gilbert} provides a pointwise bound on the magnitude of the\nsolution, preventing blow-up. Any addition\nto Eq.~\\eqref{eq:mag_eqn} that breaks this conservation law gives rise to\nthe possibility of singular solutions, and it is to this possibility that we\nnow turn.\n\n\n\\section{Coupled density-magnetization equations}\n\\label{sec:mag_dens}\nIn this section we study a coupled density-magnetization equation\npair that admits singular solutions.\n We investigate the linear stability\nof the equations and examine the conditions for instability. We find that\nthe stability or otherwise of a constant state is controlled by the magnetization\nand density values of that state, and by the relative magnitude of the problem\nlength scales. Using numerical and analytical techniques, we investigate\nthe emergence and self-interaction of singular solutions.\n\nThe equations we study are as follows,\n\\begin{subequations}\n\\begin{equation}\n\\frac{\\partial\\rho}{\\partial t}=\\frac{\\partial}{\\partial{x}}\\left[\\rho\\left(\\mu_\\rho\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\rho}+\\bm{\\mu}_{m}\\cdot\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\bm{m}}\\right)\\right],\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial\\bm{m}}{\\partial t} = \\frac{\\partial}{\\partial x}\\left[\\bm{m}\\left(\\mu_\\rho\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\rho}+\\bm{\\mu}_{m}\\cdot\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\bm{m}}\\right)\\right]+\\bm{m}\\times\\left(\\bm{\\mu}_{\\bm{m}}\\times\\frac{\\delta{E}}{\\delta\\bm{m}}\\right),\n\\end{equation}%\n\\label{eq:mag_dens}%\n\\end{subequations}%\nwhere we set\n\\[\n\\mu_\\rho=1,\\qquad\\frac{\\delta{E}}{\\delta\\rho}=-\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\rho,\n\\]\nand, as before,\n\\[\n\\bm{\\mu}_{m} = 
\\left(1-\\beta_m^2\\partial_x^2\\right)^{-1}\\bm{m},\\qquad\n\\frac{\\delta E}{\\delta\\bm{m}} = \\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\bm{m}.\n\\]\nThese equations have been introduced by Holm, Putkaradze and Tronci in~\\cite{Darryl_eqn1},\nusing a kinetic-theory description.\nThe density and the magnetization vector are driven by the velocity\n\\begin{equation}\nV=\\mu_\\rho\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\rho}+\\bm{\\mu}_{m}\\cdot\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\bm{m}}.\n\\label{eq:velocity}\n\\end{equation}\nThe velocity advects the ratio $|\\bm{m}|\/\\rho$ by\n\\[\n\\left(\\frac{\\partial}{\\partial{t}}-V\\frac{\\partial}{\\partial{x}}\\right)\\frac{|\\bm{m}|}{\\rho}=0.\n\\]\nWe have the system energy\n\\begin{equation}\nE = \\tfrac{1}{2}\\int_\\Omega{dx}\\bm{m}\\cdot\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\bm{m}\n- \\tfrac{1}{2}\\int_\\Omega{dx}\\rho\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\rho,\n\\label{eq:energy_mag_dens}\n\\end{equation}\nand, given a non-negative density, the second term is always non-positive.\n This represents an energy of attraction, and we therefore expect singularities\n in the magnetization vector to arise from a collapse of the particle density\n due to the ever-decreasing energy of attraction. 
There are three length\n scales in the problem that control the time evolution: the\n ranges $\\alpha_m$ and $\\alpha_\\rho$ of the potentials in Eq.~\\eqref{eq:energy_mag_dens},\n and the smoothing length $\\beta_m$.\n\\subsection*{Linear stability analysis}\nWe study the linear stability of the constant state $\\left(\\bm{m},\\rho\\right)=\\left(\\bm{m}_0,\\rho_0\\right)$.\nWe evaluate the smoothed values of this constant solution as follows,\n\\begin{eqnarray*}\n\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\rho_0&=&f\\left(x\\right),\\\\\n\\rho_0&=&f\\left(x\\right)-\\alpha_\\rho^2\\frac{d^2 f}{dx^2},\\\\\nf\\left(x\\right)&=&\\rho_0+A\\sinh\\left(x\/\\alpha_\\rho\\right)+B\\cosh\\left(x\/\\alpha_\\rho\\right).\n\\end{eqnarray*}\nFor periodic or infinite boundary conditions, the constants $A$ and\n$B$ are in fact zero and thus \n\\begin{equation}\n\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\rho_0=\\rho_0,\n\\label{eq:smooth_const}\n\\end{equation}\nand similarly $\\bm{\\mu}_0=\\left(\\delta E\/\\delta\\bm{m}\\right)_{\\bm{m}_0}=\\bm{m}_0$.\n The result~\\eqref{eq:smooth_const} guarantees that the constant state $\\left(\\bm{m}_0,\\rho_0\\right)$\n is indeed a solution of Eq.~\\eqref{eq:mag_dens}.\n \nWe study a solution $\\left(\\bm{m},\\rho\\right)=\\left(\\bm{m}_0+\\delta\\bm{m},\\rho_0+\\delta\\rho\\right)$,\nwhich represents a perturbation away from the constant state. 
By assuming\nthat $\\delta\\bm{m}$ and $\\delta\\rho$ are initially small in magnitude, we obtain\nthe following linearized equations for the perturbation density and magnetization,\n\\begin{subequations}\n\\begin{equation*}\n\\frac{\\partial}{\\partial t}\\delta\\rho=-\\rho_0\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\delta\\rho+\\rho_0\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\bm{m}_0\\cdot\\delta\\bm{m},\n\\end{equation*}\n\\begin{multline*}\n\\frac{\\partial}{\\partial t}\\delta\\bm{m} =\\bm{m}_0\\left[-\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\delta\\rho+\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\bm{m}_0\\cdot\\delta\\bm{m}\\right]\\\\\n+\\bm{m}_0\\times\\Big\\{\\bm{m}_0\\times\\left[\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\delta\\bm{m}-\\left(1-\\beta_m^2\\partial_x^2\\right)^{-1}\\delta\\bm{m}\\right]\\Big\\}.\n\\end{multline*}%\n\\label{eq:mag_dens_linear}%\n\\end{subequations}%\n For $\\bm{m}_0\\neq0$ we may choose two unit vectors $\\hat{\\bm{n}}_1$ and $\\hat{\\bm{n}}_2$\n such that $\\bm{m}_0\/|\\bm{m}_0|$, $\\hat{\\bm{n}}_1$ and $\\hat{\\bm{n}}_2$ form\n an orthonormal triad (that is, we have effected a change of basis). 
We\n then study the quantities $\\delta\\rho$, $\\delta\\chi$, $\\delta\\xi_1$ and\n $\\delta\\xi_2$, where\n\\[\n\\delta\\chi=\\bm{m}_0\\cdot\\delta\\bm{m},\\qquad\\delta\\xi_1=\\hat{\\bm{n}}_1\\cdot\\delta\\bm{m},\\qquad\\delta\\xi_2=\\hat{\\bm{n}}_2\\cdot\\delta\\bm{m}.\n\\]\nWe obtain the linear equations\n\\begin{equation*}\n\\frac{\\partial}{\\partial t}\\delta\\rho=-\\rho_0\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\delta\\rho+\\rho_0\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\delta\\chi,\n\\end{equation*}\n\\begin{equation*}\n\\frac{\\partial}{\\partial t}\\delta\\chi =-|\\bm{m}_0|^2\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_\\rho^2\\partial_x^2\\right)^{-1}\\delta\\rho+|\\bm{m}_0|^2\\frac{\\partial^2}{\\partial{x}^2}\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\delta\\chi,\n\\end{equation*}\n\\begin{equation*}\n\\frac{\\partial}{\\partial t}\\delta\\xi_i=\\left[\\left(1-\\beta_m^2\\partial_x^2\\right)^{-1}-\\left(1-\\alpha_m^2\\partial_x^2\\right)^{-1}\\right]\\delta\\xi_i,\\qquad\ni=1,2.\n\\end{equation*}%\nBy focusing on a single-mode disturbance with wave number $k$ we obtain the\nfollowing system of equations\n\\begin{equation*}\n\\frac{d}{dt}\\left(\\begin{array}{c}\\delta\\rho \\\\ \\delta\\chi \\\\ \\delta\\xi_1\n\\\\ \\delta\\xi_2\\end{array}\\right)=\n\\left(\\begin{array}{cccc}\n\\frac{\\rho_0k^2}{1+\\alpha_\\rho^2k^2}&-\\frac{\\rho_0k^2}{1+\\alpha_m^2k^2}&0&0\\\\\n\\frac{|\\bm{m}_0|^2k^2}{1+\\alpha_\\rho^2k^2}&-\\frac{|\\bm{m}_0|^2k^2}{1+\\alpha_m^2k^2}&0&0\\\\\n0&0&\\frac{1}{1+\\beta_m^2k^2}-\\frac{1}{1+\\alpha_m^2k^2}&0\\\\\n0&0&0&\\frac{1}{1+\\beta_m^2k^2}-\\frac{1}{1+\\alpha_m^2k^2}\n\\end{array}\\right)\n\\left(\\begin{array}{c}\\delta\\rho \\\\ \\delta\\chi \\\\ \\delta\\xi_1\n\\\\ \\delta\\xi_2\\end{array}\\right),\n\\end{equation*}\nwith eigenvalues\n\\begin{equation}\n\\sigma_0=0,\\qquad 
\n\\sigma_1=\\frac{\\rho_0k^2}{1+\\alpha_\\rho^2k^2}-\\frac{|\\bm{m}_0|^2k^2}{1+\\alpha_m^2k^2},\\qquad\n\\sigma_2=\\frac{1}{1+\\beta_m^2k^2}-\\frac{1}{1+\\alpha_m^2k^2}.\n\\end{equation}\nThe eigenvalues are the growth rate of the disturbance $\\left(\\delta\\rho,\\delta\\chi,\\delta\\xi_1,\\delta\\xi_2\\right)$~\\cite{ChandraFluids}.\n There are two routes to instability, when $\\sigma_1>0$, or when $\\sigma_2>0$.\n The first route leads to an instability when\n\\[\n\\sigma_1>0,\\qquad \\frac{\\rho_0}{|\\bm{m}_0|^2}>\\frac{1+\\alpha_\\rho^2k^2}{1+\\alpha_m^2k^2},\n\\]\nwhile the second route leads to instability when\n\\[\n\\sigma_2>0,\\qquad \\alpha_m>\\beta_m.\n\\] \n \nWe have plotted the growth rates for the case when $\\rho_0=|\\bm{m}_0|^2=1$,\nand compared the theory with numerical simulations. There is excellent agreement\nat low wave numbers, although the numerical simulations become less accurate\nat high wave numbers. This can be remedied by increasing the resolution\nof the simulations. These plots are shown in Figs.~\\ref{fig:growth_rates}\nand~\\ref{fig:growth_rates2}.\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.45}[0.45]{\\includegraphics*[viewport=0 0 420 330]{fig5a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.45}[0.45]{\\includegraphics*[viewport=0 0 420 330]{fig5b.pdf}}\n}\n\\caption{(Color online) The first route to instability. Subfigure (a) shows\nthe growth\nrate $\\sigma_1$ for $\\alpha_m<\\beta_m<\\alpha_\\rho$, with negativity\nindicating a stable equilibrium; (b) gives the growth rate $\\sigma_1$\nfor $\\alpha_\\rho<\\alpha_m<\\beta_m$, with positivity indicating an\nunstable equilibrium. We have set $|\\bm{m}_0|=\\rho_0=1$.\n}\n\\label{fig:growth_rates} \n\\end{figure} \nThe growth rates $\\sigma_{1,2}$ are parabolic in $k$ at small $k$; $\\sigma_1$\nsaturates at large $k$, while $\\sigma_2$ attains a maximum and decays at\nlarge $k$. 
The growth rates can be positive or negative, depending on the\ninitial configuration,\nand on the relationship between the problem length scales.\n In contrast to some standard instabilities of pattern formation (e.g. Cahn--Hilliard~\\cite{Argentina2005}\n or Swift--Hohenberg~\\cite{OjalvoBook}), the $\\sigma_1$-unstable state\n becomes more unstable at higher wave numbers (smaller scales), thus preventing\n the `freezing-out' of the instability by a reduction of the box size~\\cite{Argentina2005}.\n The growth at small scales is limited,\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.45}[0.45]{\\includegraphics*[viewport=0 0 420 330]{fig6a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.45}[0.45]{\\includegraphics*[viewport=0 0 420 330]{fig6b.pdf}}\n}\n\\caption{(Color online) The second route to instability. Subfigure (a)\nshows the growth\nrate $\\sigma_2$ for $\\alpha_m<\\beta_m$, with negativity\nindicating a stable equilibrium; (b) gives the growth rate $\\sigma_2$\nfor $\\alpha_m>\\beta_m$, with positivity indicating an unstable equilibrium.\n We have set $|\\bm{m}_0|=\\rho_0=1$. \n}\n\\label{fig:growth_rates2} \n\\end{figure} \nhowever, by the saturation in $\\sigma$ as $k\\rightarrow\\infty$. Heuristically,\n this can be explained as follows: at higher wave number, the disturbance\n $\\left(\\delta\\rho,\\delta\\chi,\\delta\\xi_1,\\delta\\xi_2\\right)$ gives rise\n to more and more peaks per unit length. This makes merging events increasingly\n likely, so that peaks combine to form larger peaks, enhancing the growth\n of the disturbance.\n\nRecall in Sec.~\\ref{sec:gilbert} that the different behaviors of the magnetization\nequation~\\eqref{eq:mag_eqn} are the result of a competition between the length\nscales\n$\\alpha_m$ and $\\beta_m$. For $\\alpha_m<\\beta_m$ the initial\n(large-amplitude) disturbance tends to a constant, while for $\\alpha_m>\\beta_m$\nthe initial disturbance develops finer and finer scales. 
In this section,\nwe have shown that the coupled density-magnetization equations are linearly\nstable when $\\alpha_m<\\beta_m$, while the reverse case is unstable.\n In contrast to the first route to instability, the growth rate $\\sigma_2$,\n if positive, admits a maximum. This is obtained by setting $\\sigma_2'\\left(k\\right)=0$.\n Then the maximum growth rate occurs at a scale\n\\begin{equation*}\n\\lambda_{\\mathrm{max}}:=2\\pi k_{\\mathrm{max}}^{-1}=2\\pi\\sqrt{\\alpha_m\\beta_m}.\n\\end{equation*}\nThus, the scale at which the disturbance is most unstable is determined\nby the geometric mean of $\\alpha_m$ and $\\beta_m$. Given a disturbance\n$\\left(\\delta\\rho,\\delta\\chi,\\delta\\xi_1,\\delta\\xi_2\\right)$ with a range\nof modes initially present, the instability selects the disturbance on the\nscale $\\lambda_{\\mathrm{max}}$. This disturbance develops a large amplitude\nand a singular solution subsequently emerges. It is to this aspect of the\nproblem that we now turn.\n\n\n\\subsection*{Singular solutions}\n In this section we show that a finite weighted sum of delta functions satisfies\n the partial differential equations~\\eqref{eq:mag_dens}. Each delta function\n has the\n interpretation of a particle or clumpon, whose weights and positions satisfy\n a finite set of ordinary differential equations. We investigate the two-clumpon\n case analytically and show that the clumpons tend to a state in which they\n merge, diverge, or are separated by a fixed distance. In each case, we\n determine the final state of the clumpon magnetization.\n \n\nTo verify that singular solutions are possible, let us substitute the ansatz\n\\begin{equation}\n\\rho\\left(x,t\\right)=\\sum_{i=1}^M a_i\\left(t\\right)\\delta\\left(x-x_i\\left(t\\right)\\right),\\qquad\n\\bm{m}\\left(x,t\\right)=\\sum_{i=1}^M \\bm{b}_i\\left(t\\right)\\delta\\left(x-x_i\\left(t\\right)\\right),\n\\label{eq:clumpon_sln}\n\\end{equation}\ninto the weak form of equations~\\eqref{eq:mag_dens}. 
Here we sum over the\ndifferent components of the singular solution (which we call clumpons).\nIn this section we work on the infinite domain $x\\in\\left(-\\infty,\\infty\\right)$.\n The weak form of the equations is obtained by testing Eqs.~\\eqref{eq:mag_dens}\nwith once-differentiable functions $\\phi\\left(x\\right)$ and $\\bm{\\psi}\\left(x\\right)$,\n\\begin{subequations}\n\\begin{equation}\n\\frac{d}{dt}\\int_{-\\infty}^{\\infty}{dx}\\rho\\left(x,t\\right)\\phi\\left(x\\right)=-\\int_{-\\infty}^{\\infty}{dx}\\phi'\\left(x\\right)\\rho\\left(x,t\\right)\\left(\\mu_\\rho\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\rho}+\\bm{\\mu}_{\\bm{m}}\\cdot\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\bm{m}}\\right),\n\\end{equation}\n\\begin{multline}\n\\frac{d}{dt}\\int_{-\\infty}^{\\infty}{dx}\\bm{m}\\left(x,t\\right)\\cdot\\bm{\\psi}\\left(x\\right) \n= -\\int_{-\\infty}^{\\infty}{dx}\\bm{\\psi}'\\left(x\\right)\\cdot\\bm{m}\\left(x,t\\right)\\left(\\mu_\\rho\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\rho}\n+\\bm{\\mu}_{\\bm{m}}\\cdot\\frac{\\partial}{\\partial{x}}\\frac{\\delta{E}}{\\delta\\bm{m}}\\right)\n\\\\\n+\\int_{-\\infty}^{\\infty}{dx}\\bm{\\psi}\\left(x\\right)\\cdot\\left[\\bm{m}\\times\\left(\\bm{\\mu}_{\\bm{m}}\\times\\frac{\\delta{E}}{\\delta\\bm{m}}\\right)\\right],\n\\end{multline}%\n\\label{eq:mag_dens_weak}%\n\\end{subequations}%\nSubstitution of the ansatz~\\eqref{eq:clumpon_sln} into the weak equations~\\eqref{eq:mag_dens_weak}\nyields the relations\n\\begin{equation}\n\\frac{da_i}{dt}=0,\\qquad \\frac{dx_i}{dt}=-V\\left(x_i\\right),\\qquad\\frac{d\\bm{b}_i}{dt}=\\bm{b}_i\\times\\left(\\bm{\\mu}\\times\\frac{\\delta{E}}{\\delta\\bm{m}}\\right)\\left(x_i\\right),\\qquad\ni\\in\\{1,\\dots,M\\},\n\\label{eq:clumpon_evolution}\n\\end{equation}\nwhere $V$ and $\\left(\\bm{\\mu}\\times\\left({\\delta{E}}\/{\\delta\\bm{m}}\\right)\\right)$\nare obtained from the ansatz~\\eqref{eq:clumpon_sln} and are evaluated at\n$x_i$. 
Note that the density weights $a_i$ and the magnitudes of the\nweights $\\bm{b}_i$ remain constant in time.\n\nWe develop further understanding of the clumpon dynamics by studying the\ntwo-clumpon version of Eqs.~\\eqref{eq:clumpon_evolution}. Since the weights\n$a_1$, $a_2$, $|\\bm{b}_1|$, and $|\\bm{b}_2|$ are constant, two variables\nsuffice to describe the interaction: the relative separation $x=x_1-x_2$\nof the clumpons,\nand the cosine of the angle between the clumpon magnetizations, $\\cos\\varphi=\\bm{b}_1\\cdot\\bm{b}_2\/|\\bm{b}_1||\\bm{b}_2|$.\nUsing the properties\nof the kernel $H\\left(0\\right)=1$, $H'\\left(0\\right)=0$, we derive the equations\n\\begin{subequations}\n\\begin{equation}\n\\frac{dx}{dt}=M H_{\\alpha_\\rho}'\\left(x\\right)- B_1 H'_{\\alpha_m}\\left(x\\right)H_{\\beta_m}\\left(x\\right)\n- B_2 H_{\\alpha_m}'\\left(x\\right)y,\\qquad y=\\cos\\varphi,\n\\end{equation}\n\\begin{equation}\n\\frac{dy}{dt}=B_2\\left(1-y^2\\right)\\left[H_{\\beta_m}\\left(x\\right)-H_{\\alpha_m}\\left(x\\right)\\right],\n\\label{eq:xtheta_b}\n\\end{equation}%\n\\label{eq:xtheta}%\n\\end{subequations}%\nwhere $M=a_1+a_2$, $B_1=|\\bm{b}_1|^2+|\\bm{b}_2|^2$, and $B_2=2|\\bm{b}_1||\\bm{b}_2|$\nare constants. \nEquations~\\eqref{eq:xtheta} form a dynamical system whose\nproperties we now investigate using phase-plane analysis~\\cite{StrogatzBook}.\n We note first\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.4}[0.4]{\\includegraphics*[viewport=0 0 400 320]{fig7a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.4}[0.4]{\\includegraphics*[viewport=0 0 400 320]{fig7b.pdf}}\n}\n\\caption{(Color online) The nullcline $dx\/dt=0$ of the two-clumpon dynamical\nsystem with $\\alpha_m<\\alpha_\\rho$. The\nregion contained inside the dotted lines $y=\\pm1$ gives the allowed values\nof the dynamical variables $\\left(x,y\\right)$.\n Subfigure~(a)\nshows the case when $\\beta_m<\\alpha_m$. 
The stable equilibria of the\nsystem are $\\left(x,y\\right)=\\left(\\pm{d},1\\right)$ and the line $x=0$.\n All initial conditions flow into one of these equilibrium states; subfigure\n (b) shows the case when $\\alpha_m<\\beta_m$. Initial conditions confined\n to the\n line $y=1$ flow into the fixed point $\\left(\\pm{d},1\\right)$,\n while all other initial conditions flow into the line $x=0$.\n}\n\\label{fig:nullclines} \n\\end{figure} \n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.4}[0.4]{\\includegraphics*[viewport=0 0 400 320]{fig8a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.4}[0.4]{\\includegraphics*[viewport=0 0 400 320]{fig8b.pdf}}\n}\n\\caption{(Color online) The nullcline $dx\/dt=0$ of the two-clumpon dynamical\nsystem with $\\alpha_\\rho<\\alpha_m$. The\nregion contained inside the dotted lines $y=\\pm1$ gives the allowed values\nof the dynamical variables $\\left(x,y\\right)$.\n Subfigure~(a)\nshows the case when $\\beta_m<\\alpha_m$. The lines $x=0$ and $x=\\pm\\infty$\nform the stable equilibria of the system. All initial conditions flow\n into one of these states; subfigure (b)\n shows the case when $\\alpha_m<\\beta_m$. \nInitial conditions confined to the line $y=1$ flow into the fixed points\n$\\left(0,1\\right)$ and $\\left(\\pm\\infty,1\\right)$, while all other initial\nconditions flow into the line $x=0$. \n}\n\\label{fig:nullclines2} \n\\end{figure}\nof all that the $|y|>1$ region of the phase plane is forbidden, since the\n$y$-component of the vector field $\\left(dx\/dt,dy\/dt\\right)$ vanishes at\n$|y|=1$. The vertical lines $x=0$ and $x=\\pm\\infty$ are equilibria, although\ntheir stability will depend on the value of the parameters $\\left(\\alpha_m,\\alpha_\\rho,\\beta_m,B_1,B_2,M\\right)$.\nThe curve across which $dx\/dt$ changes sign is called the nullcline. 
This\nis given by\n\\[\ny=\\frac{MH_{\\alpha_\\rho}'\\left(x\\right)-B_1H'_{\\alpha_m}\\left(x\\right)H_{\\beta_m}\\left(x\\right)}{B_2H'_{\\alpha_m}\\left(x\\right)},\n\\]\nwhich on the domain $x\\in\\left(-\\infty,\\infty\\right)$ takes the form\n\\[\ny=\\frac{\\alpha_m}{B_2}\\left[\\frac{M}{\\alpha_\\rho}e^{-|x|\\left(\\frac{1}{\\alpha_\\rho}-\\frac{1}{\\alpha_m}\\right)}-\\frac{B_1}{\\alpha_m}e^{-\\frac{1}{\\beta_m}|x|}\\right].\n\\]\nSeveral qualitatively different behaviors are possible, depending on the\nmagnitude of the values taken by the parameters $\\left(\\alpha_m,\\alpha_\\rho,\\beta_m,B_1,B_2,M\\right)$.\n Here we outline four of these behavior types.\n\\begin{itemize} \n\\item\\emph{Case 1:} The length scales are in the relation $\\alpha_m<\\alpha_\\rho$,\nand $\\beta_m<\\alpha_m$. The vector field $\\left(dx\/dt,dy\/dt\\right)$ and\nthe nullcline are shown in Fig.~\\ref{fig:nullclines}~(a).\nThere is flow into the fixed points $\\left(x,y\\right)=\\left(\\pm{d},1\\right)$,\nand into the line $x=0$,\nwhile $y$ is a non-decreasing function of time, which follows\nfrom Eq.~\\eqref{eq:xtheta_b}. The ultimate state of the system is thus $x=\\pm{d}$,\n$\\varphi=0$ (alignment), or $x=0$ (merging). In the latter case the final\norientation is given by the integral of Eq.~\\eqref{eq:xtheta_b},\n\\begin{equation}\n\\tan\\left(\\frac{\\varphi}{2}\\right)=\\tan\\left(\\frac{\\varphi_0}{2}\\right)\\exp\\left[-B_2\\int_0^\\infty{dt}\\left[H_{\\beta_m}\\left(x\\left(t\\right)\\right)-H_{\\alpha_m}\\left(x\\left(t\\right)\\right)\\right]\\right],\\qquad\\varphi_0=\\varphi\\left(t=0\\right).\n\\label{eq:theta_final}\n\\end{equation}\n\\item\\emph{Case 2:} The length scales are in the relation $\\alpha_m<\\alpha_\\rho$,\n$\\alpha_m<\\beta_m$. The vector field and the nullcline are shown in Fig.~\\ref{fig:nullclines}~(b).\n All flow not confined to the line $y=1$ is into the line $x=0$, since $y$\n is now a non-increasing function of time. 
The ultimate state of the system\n is thus $x=\\pm{d}$, $\\varphi=0$ (alignment), or $x=0$ (merging). In the\n latter case the final orientation is given by the formula~\\eqref{eq:theta_final}.\n\\item\\emph{Case 3:} The length scales are in the relation $\\alpha_\\rho<\\alpha_m$\nand $\\beta_m<\\alpha_m$. The vector field and the nullcline are shown in\nFig.~\\ref{fig:nullclines2}~(a).\n Inside the region bounded by the line $y=0$ and the nullcline, the flow\n is into the line $x=0$ (merging), and the fixed points\n $\\left(\\pm{d},1\\right)$ are unstable. The flow below the line $y=0$ is\n towards the line\n $x=0$. Outside of these regions, however, the flow is into the lines\n $x=\\pm\\infty$, which shows that for a suitable choice of parameters\n and initial conditions, the clumpons can be made to diverge.\n\\item\\emph{Case 4:} The length scales are in the relation $\\alpha_\\rho<\\alpha_m$\nand $\\alpha_m<\\beta_m$. The vector field and the nullcline are shown in\nFig.~\\ref{fig:nullclines2}~(b). The quantity $y$ is a non-increasing function\nof time. All flow along the line $y=1$ is directed away from the fixed points\n$\\left(\\pm{d},1\\right)$ and is into the fixed points $\\left(0,1\\right)$,\nor $\\left(\\pm\\infty,1\\right)$.\n All other initial conditions flow into $x=0$, although initial conditions\n that start above the curve formed by the nullcline flow in an arc and eventually\n reach a fixed point $\\left(x=0,y<0\\right)$.\n\\end{itemize}\nWe summarize the cases we have discussed in Table~\\ref{tab:table2_summary}.\nUsing numerical simulations of Eqs.~\\eqref{eq:xtheta}, we have verified that\nCases~(1)--(4) do indeed occur. The list of cases we have considered is\nnot exhaustive: depending on the parameters $B_1$, $B_2$, and $M$, other\nphase portraits may arise. 
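These regimes can be explored by integrating Eqs.~\eqref{eq:xtheta} directly. The sketch below is illustrative only: it assumes the exponential kernel $H_\alpha(x)=e^{-|x|/\alpha}$ (so that $H(0)=1$ and, with the symmetric convention $\mathrm{sign}(0)=0$, $H'(0)=0$) together with arbitrary parameter values; it is not the code used for our figures.

```python
import numpy as np

# Sketch of the two-clumpon system, Eqs. (xtheta), for the relative
# separation x and the orientation cosine y = cos(phi).  The exponential
# kernel H_a(x) = exp(-|x|/a) and all parameter values are illustrative
# assumptions.
M, B1, B2 = 1.0, 1.0, 0.5                  # a1+a2, |b1|^2+|b2|^2, 2|b1||b2|
a_rho, a_m, b_m = 1.0, 0.5, 0.25           # alpha_rho, alpha_m, beta_m

H = lambda x, a: np.exp(-abs(x) / a)
dH = lambda x, a: -np.sign(x) * np.exp(-abs(x) / a) / a

def rhs(x, y):
    dxdt = M * dH(x, a_rho) - B1 * dH(x, a_m) * H(x, b_m) - B2 * dH(x, a_m) * y
    dydt = B2 * (1.0 - y**2) * (H(x, b_m) - H(x, a_m))
    return dxdt, dydt

x, y, dt = 1.5, 0.2, 1.0e-3                # a generic initial condition
ys = [y]
for _ in range(20000):
    fx, fy = rhs(x, y)
    x, y = x + dt * fx, y + dt * fy
    ys.append(y)
ys = np.array(ys)
```

Two structural features of Eqs.~\eqref{eq:xtheta} are easy to confirm in this setting: the constraint $|y|\leq1$ is preserved, and $y$ evolves monotonically, the sign of $dy/dt$ being fixed through Eq.~\eqref{eq:xtheta_b} by the ordering of $\alpha_m$ and $\beta_m$.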
Indeed, it is clear from Fig.~\\ref{fig:nullclines}\nthat through saddle-node bifurcations, the fixed points $\\left(x,y\\right)=\\left(\\pm{d},1\\right)$\nmay disappear, or additional fixed points $\\left(x,y\\right)=\\left(\\pm{d'},-1\\right)$\nmay appear. Our\n\\begin{table}[h!b!p!]\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\nCase&$\\alpha_m$ vs. $\\alpha_\\rho$&$\\alpha_m$ vs. $\\beta_m$&Equilibria&Flow\\\\\n\\hline\n(1)&$\\alpha_m<\\alpha_\\rho$&$\\beta_m<\\alpha_m$&$\\left(x,y\\right)=\\left(\\pm{d},1\\right)$;\n$x=0$; $x=\\pm\\infty$&Flow into $x=0$ and $\\left(x,y\\right)=\\left(\\pm{d},1\\right)$\\\\\n(2)&$\\alpha_m<\\alpha_\\rho$&$\\alpha_m<\\beta_m$&$\\left(x,y\\right)=\\left(\\pm{d},1\\right)$;\n$x=0$; $x=\\pm\\infty$&Flow into $x=0$ and $\\left(x,y\\right)=\\left(\\pm{d},1\\right)$\\\\\n(3)&$\\alpha_\\rho<\\alpha_m$&$\\beta_m<\\alpha_m$&$\\left(x,y\\right)=\\left(\\pm{d},1\\right)$;\n$x=0$; $x=\\pm\\infty$&Flow into $x=0$ and $x=\\pm\\infty$\\\\\n(4)&$\\alpha_\\rho<\\alpha_m$&$\\alpha_m<\\beta_m$&$\\left(x,y\\right)=\\left(\\pm{d},1\\right)$;\n$x=0$; $x=\\pm\\infty$&Flow into $x=0$ and $x=\\pm\\infty$\\\\\n\\hline\n\\end{tabular}\n\\caption{Summary of the distinct phase portraits of Eq.~\\eqref{eq:xtheta}\nstudied.}\n\\label{tab:table2_summary}\n\\end{table}\n analysis shows, however, that it is possible to choose\n a set of parameters $\\left(\\alpha_\\rho,\\alpha_m,\\beta_m,B_1,B_2,M\\right)$\n such that two clumpons either merge, diverge, or are separated by a fixed\n distance.\n\n\\subsection*{Numerical Simulations}\nTo examine the emergence and subsequent interaction of the clumpons, we carry\nout numerical simulations of Eq.~\\eqref{eq:mag_dens} for a variety\nof initial conditions. 
We use\nan explicit finite-difference algorithm with a small amount of artificial\ndiffusion.\n We solve the following weak form of Eq.~\\eqref{eq:mag_dens}, obtained by\n choosing $H_{\\beta_m}$ as the test function in Eq.~\\eqref{eq:mag_dens_weak},\n\\[\n\\frac{\\partial\\overline{\\rho}}{\\partial{t}}=D_{\\mathrm{artif}}\\frac{\\partial^2\\overline{\\rho}}{\\partial{x^2}}+\\int_\\Omega{dy}H_{\\beta_m}'\\left(x-y\\right)\\rho\\left(y,t\\right)V\\left(y,t\\right),\n\\]\n\\begin{multline*}\n\\frac{\\partial\\mu_i}{\\partial{t}}=D_{\\mathrm{artif}}\\frac{\\partial^2\\mu_i}{\\partial{x^2}}+\\int_\\Omega{dy}H_{\\beta_m}'\\left(x-y\\right)m_i\\left(y,t\\right)V\\left(y,t\\right)\\\\\n+\\int_\\Omega{dy}H_{\\beta_m}\\left(x-y\\right)\\bm{e}_i\\cdot\\left[\\bm{m}\\times\\left(\\bm{\\mu}\\times\\frac{\\delta{E}}{\\delta\\bm{m}}\\right)\\right],\n\\end{multline*}\nwhere $\\overline{\\rho}=H_{\\beta_m}*\\rho$ and $\\bm{e}_i$ is the unit vector\nin the $i^{\\mathrm{th}}$ direction. \nWe work on a periodic domain $\\Omega=\\left[-L\/2,L\/2\\right]$, at a resolution of $250$\ngridpoints; going to higher resolution does not noticeably change\nthe results.\n\nThe first set of initial conditions we study is the following,\n\\begin{eqnarray}\n\\bm{m}\\left(x,0\\right)&=&\\left(\\sin\\left(4k_0x+\\phi_x\\right),\\sin\\left(4k_0x+\\phi_y\\right),\\sin\\left(4k_0x+\\phi_z\\right)\\right),\\nonumber\\\\\n\\rho\\left(x,0\\right)&=&0.5+0.35\\cos\\left(2k_0x\\right),\n\\label{eq:initial_conditions1}\n\\end{eqnarray}\nwhere $\\phi_x$, $\\phi_y$, and $\\phi_z$ are random phases in the interval\n$\\left[0,2\\pi\\right]$, and $k_0=2\\pi\/L$ is the fundamental wave number. 
The\ninitial conditions for the magnetization vector are chosen to represent the\nlack of a preferred direction in the problem.\nThe time evolution of\nequations~\\eqref{eq:mag_dens} for this set of initial conditions is shown\nin Fig.~\\ref{fig:evolution_ic2}.\nAfter a short time, the initial data become singular, and subsequently,\nthe solution $\\left(\\rho,\\bm{m}\\right)$ can be represented as a sum of clumpons,\n\\[\n\\rho\\left(x,t\\right)=\\sum_{i=1}^M a_i\\delta\\left(x-x_i\\left(t\\right)\\right),\\qquad\n\\bm{m}\\left(x,t\\right)=\\sum_{i=1}^M \\bm{b}_i\\left(t\\right)\\delta\\left(x-x_i\\left(t\\right)\\right),\\qquad\nM=2.\n\\]\nHere $M=2$ is the number of clumpons present at the singularity time. \nThis\nnumber corresponds to the number of maxima in the initial density profile.\n The forces exerted by each clumpon on the other balance\n because of the effect of the periodic boundary conditions. Indeed,\n any number of equally-spaced, identical, interacting particles arranged\n on a ring are in equilibrium, although this equilibrium is unstable for\n an attractive force. Thus, at late\n times, the clumpons are stationary, while the magnetization vector $\\bm{\\mu}$\n shows alignment of clumpon magnetizations.\n\\begin{figure}[htb]\n\\subfigure[]{\n \\scalebox{0.45}[0.45]{\\includegraphics*[viewport=0 0 420 330]{fig9a.pdf}}\n}\n\\subfigure[]{\n \\scalebox{0.45}[0.45]{\\includegraphics*[viewport=0 0 440 330]{fig9b.pdf}}\n}\n\\caption{(Color online) Evolution of sinusoidally-varying initial conditions\nfor the density\nand magnetization, as in Eq.~\\eqref{eq:initial_conditions1}.\n Subfigure~(a) shows the evolution of $H_\\rho*\\rho$ for $t\\in\\left[0,0.15\\right]$,\n by which time the initial data have formed two clumpons; (b) shows\n the evolution of $\\mu_x$. The profiles of $\\mu_y$ and $\\mu_z$ are similar.\n Note that\n the peaks in the density profile correspond to the troughs in the magnetization\n profile. 
This agrees with the linear stability analysis, wherein disturbances
in the density give rise to disturbances in the magnetization.
}
\label{fig:evolution_ic2}
\end{figure}
\begin{figure}[htb]
\subfigure[]{
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig10a.pdf}}
}
\subfigure[]{
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig10b.pdf}}
}
\caption{(Color online) Evolution of sinusoidally varying initial conditions
for the density
and magnetization, as in Eq.~\eqref{eq:initial_conditions1}.
Subfigure~(a) shows the system velocity $V$ given in Eq.~\eqref{eq:velocity}
just before the singularity time; (b) shows the magnetization $\bm{\mu}$
at the same time. The density maxima emerge at the locations where $-V$
converges (flow into $x=0$ and $x=\pm L/2$), and the magnetization develops
extrema there.}
\label{fig:snapshot_ic2}
\end{figure}

We gain further understanding of the formation of singular solutions by studying
the system velocity $V$ just before the onset of the singularity. This is
done in Fig.~\ref{fig:snapshot_ic2}.
Figure~\ref{fig:snapshot_ic2}~(a) shows the development of the two clumpons
from the initial data. Across each density maximum, the velocity has the
profile $V\approx\lambda\left(t\right)x$, where $\lambda\left(t\right)>0$
is an increasing function of time.
This calls to mind the advection problem
for the scalar $\theta\left(x,t\right)$ studied by Batchelor in the context
of passive-scalar mixing~\cite{Batchelor1959},
\[
\frac{\partial\theta}{\partial t}=\lambda_0x\frac{\partial\theta}{\partial{x}},\qquad
\lambda_0>0.
\]
Given initial data $\theta\left(x,0\right)=\theta_0 e^{-x^2/\ell_0^2}$, the
solution evolves in time as
\[
\theta\left(x,t\right)=\theta_0 e^{-x^2/\left(\ell_0^2e^{-2\lambda_0{t}}\right)},
\]
so that gradients are amplified exponentially in time,
\[
\frac{\partial\theta}{\partial{x}}=-\frac{2\theta_0}{\ell_0^2}xe^{2\lambda_0{t}}
e^{-x^2/\left(\ell_0^2e^{-2\lambda_0{t}}\right)},
\]
in a manner similar to the gradient growth in the problem studied here.

The evolution of the set of
initial conditions~\eqref{eq:initial_conditions1} has therefore demonstrated
the following: before the onset of the singularity, the local velocity $V$
compresses matter into the regions where $\rho\left(x,0\right)$ is large, to such
an extent that the matter eventually accumulates at isolated points, and
the singular solution emerges. Moreover, the density maxima, rather than
the magnetization
extrema, drive the formation of singularities. This is not surprising, given
that the attractive part of the system's energy comes from density variations.
\begin{figure}[htb]
\subfigure[]{
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig11a.pdf}}
}
\subfigure[]{
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig11b.pdf}}
}
\caption{(Color online) Evolution of a flat magnetization field and a sinusoidally
varying density, as in Eq.~\eqref{eq:initial_conditions2}.
Subfigure~(a) shows the evolution of $H_\rho*\rho$ for $t\in\left[0.5,1\right]$;
(b) shows the evolution of $\mu_x$. The profiles of $\mu_y$ and $\mu_z$
are similar.
At $t=0.5$, the initial data have formed eight equally spaced,
identical clumpons, corresponding to the eight density maxima in the initial
configuration. By impulsively shifting the clumpon
at $x=0$ by a small amount, the equilibrium is disrupted and the clumpons
merge repeatedly until only one clumpon remains.
}
\label{fig:evolution_ic1}
\end{figure}

To highlight the interaction between clumpons, we examine the following set
of initial conditions,
\begin{equation}
\bm{m}\left(x,0\right)=\bm{m}_0=\text{const.},\qquad
\rho\left(x,0\right)=0.5+0.35\cos\left(8k_0x\right),
\label{eq:initial_conditions2}
\end{equation}
where $k_0=2\pi/L$ is the fundamental wave number. Since this set of initial
conditions contains a large number of density maxima, we expect a large number
of closely spaced clumpons to emerge, illuminating the
clumpon interactions.
The time evolution of equations~\eqref{eq:mag_dens} for this set of initial
conditions is shown in Fig.~\ref{fig:evolution_ic1}.
As before, the solution becomes singular after a short time, and is subsequently
represented by a sum of clumpons,
\[
\rho\left(x,t\right)=\sum_{i=1}^M a_i\delta\left(x-x_i\left(t\right)\right),\qquad
\bm{m}\left(x,t\right)=\sum_{i=1}^M \bm{b}_i\left(t\right)\delta\left(x-x_i\left(t\right)\right),\qquad
M=8.
\]
Here $M=8$ is the number of clumpons at the singularity time, corresponding
to the number of maxima in the initial density profile.
As before, this configuration of equally spaced, identical clumpons is an
equilibrium state, due to the periodic boundary conditions.
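This equilibrium, and its instability under attraction, can be verified with a toy force balance. The smooth, odd, periodic force law below is an illustrative stand-in for the kernel-induced clumpon interaction, not the force derived in the text:

```python
import numpy as np

# Toy check: M identical, equally spaced particles on a periodic ring feel
# zero net force, but the equilibrium is unstable for an attractive force.
# F(d) = -sin(2*pi*d/L) is an illustrative periodic force law, attractive
# at short range (a stand-in for the kernel-induced interaction).
L, M = 2 * np.pi, 8
x = L * np.arange(M) / M                 # equally spaced positions on the ring

def net_forces(pos):
    d = pos[:, None] - pos[None, :]      # signed separations
    F = -np.sin(2 * np.pi * d / L)       # odd and periodic; diagonal terms vanish
    return F.sum(axis=1)                 # net force on each particle

f_eq = net_forces(x)                     # vanishes by symmetry: an equilibrium

x_shift = x.copy()
x_shift[0] += 0.01                       # impulsively shift one clumpon
f_shift = net_forces(x_shift)            # the net force on the shifted clumpon
                                         # is now nonzero, so the perturbation grows
```

In the simulations this instability is triggered deliberately by the impulsive shift, after which the clumpons merge pairwise until only one remains.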
Therefore, once
the particle-like state has formed, we impulsively shift the clumpon at $x=0$
by a small amount, precipitating the merging of clumpons.
The eight clumpons then merge repeatedly until only a single clumpon remains.
The tendency of the clumpons to merge is explained by the velocity $V$,
which changes sign across a clumpon. Thus, if a clumpon is within the range
of the force exerted by its neighbours, the local velocity, if unbalanced,
will advect the clumpon in the direction of one of its neighbours, and the
clumpons merge. This process is shown in Fig.~\ref{fig:V_time_ic1}.
\begin{figure}[htb]
\subfigure[]{
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig12a.pdf}}
}
\subfigure[]{
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig12b.pdf}}
}
\caption{(Color online) Evolution of a flat magnetization field and a sinusoidally
varying density, as in Eq.~\eqref{eq:initial_conditions2}.
Subfigure~(a) shows the system velocity $V$ given in Eq.~\eqref{eq:velocity}
just before the singularity time; (b) gives the magnetization $\bm{\mu}$
at the same time. The density maxima emerge at the locations where $-V$
converges, and the magnetization develops extrema there.}
\label{fig:snapshot_ic1}
\end{figure}
\begin{figure}
 \scalebox{0.45}[0.45]{\includegraphics*[viewport=0 0 420 330]{fig13.pdf}}
\caption{(Color online) Evolution of a flat magnetization field and a sinusoidally
varying density, as in Eq.~\eqref{eq:initial_conditions2}.
Shown is the velocity profile for $t\in\left[0.5,1\right]$; the system velocity
is given by Eq.~\eqref{eq:velocity}. The velocity $-V$ flows into each density
maximum, concentrating matter at isolated points and precipitating the
formation of eight equally spaced, identical clumpons. On a periodic domain,
such an arrangement is an equilibrium state, although it is unstable.
Thus,
by impulsively shifting the clumpon at $x=0$ by a small amount, we force
the clumpons to collapse into larger clumpons, until only a single clumpon
remains.
}
\label{fig:V_time_ic1}
\end{figure}

\section{Conclusions}
\label{sec:conclusions}
We have investigated the non-local Gilbert (NG) equation introduced by Holm,
Putkaradze, and Tronci in~\cite{Darryl_eqn1} using a combination of numerical
simulations and simple analytical arguments. The NG equation contains two
competing length scales of non-locality: a length scale $\alpha$
associated with the range of the interaction potential, and a length scale
$\beta$ that governs the smoothed magnetization vector that appears in
the equation. When $\alpha<\beta$, all initial configurations of the magnetization
tend to a constant value, while for $\beta<\alpha$ the initial configuration
of the magnetization field develops finer and finer scales. These two effects
balance when $\alpha=\beta$, and the system does not evolve away
from its initial state. Furthermore, the NG equation conserves the norm
of the magnetization vector $\bm{m}$, thus providing a pointwise bound on the
solution and preventing the formation of singular solutions.

To study the formation of singular solutions, we couple the NG equation to
a scalar density equation. Associated with the scalar density is a negative
energy of attraction that drives the formation of singular solutions and
breaks the pointwise bound on $\bm{m}$.
Three length scales of non-locality
now enter the problem: the range of the force associated with the scalar
density, the range of the force due to the magnetization,
and the smoothing length.
As before, the competition of length
scales is crucial to the evolution of the system; this is seen in the
linear stability analysis of the coupled equations, in which the relative
magnitude of the length scales determines the stability or otherwise of a
constant state.

Using numerical simulations, we
have demonstrated the emergence of singular solutions from smooth initial
data, and have explained this behavior by the negative energy of attraction
produced by the scalar density. The singular solution consists of a weighted
sum of delta functions, given in Eq.~\eqref{eq:clumpon_sln},
which we interpret
as interacting particles, or clumpons. The clumpons evolve under simple
finite-dimensional dynamics. We have shown that a system of two clumpons is
governed by a two-dimensional dynamical system with a multiplicity of steady
states. Depending on
the length scales of non-locality and the clumpon weights, the two clumpons
can merge, diverge, or align and remain separated by a fixed distance.

Our paper thus gives a qualitative description of the dynamics. Future work
will focus on the regularity of solutions
of the NG equation, and on the existence and regularity of solutions of the
coupled density-magnetization equations. Bertozzi and Laurent~\cite{Bertozzi2007}
have studied the simpler (uncoupled) non-local scalar density equation, proving
existence, uniqueness, and blowup results using techniques from functional
analysis; a similar analysis would illuminate the equations we have studied.
The behavior of singular solutions in higher dimensions is another topic
that deserves further study.

DDH was partially supported by the US Department of Energy, Office of Science,
Applied Mathematical Research, and by the Royal Society Wolfson Research Merit
Award. CT was also partially supported by the Royal Society Wolfson Research
Merit Award. L.\'O.N.
was supported by the Irish government and the UK Engineering
and Physical Sciences Research Council.