diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjqwn" "b/data_all_eng_slimpj/shuffled/split2/finalzzjqwn" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjqwn" @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\n\n\nOne of the foundations of Einstein's theory of General Relativity is that matter\n curves the surrounding space-time. For the rare cases of nearly perfect alignment between\n an astronomical source, an intervening massive object and the observer, \n multiple images of a single source can be detected, \n a phenomenon known as strong gravitational lensing.\n \n Although many strongly lensed galaxies and quasars have been detected\n to date, it has proved extremely difficult to find multiply-imaged lensed supernova (SN\\xspace) explosions. \n Type Ia supernovae (SNe~Ia\\xspace) \n are particularly interesting sources due to their\n ``standard candle'' nature. These explosions have nearly identical peak luminosity\n which makes them excellent distance indicators in\n cosmology \\cite{2011ARNPS..61..251G}. \n For lensed SNe~Ia\\xspace, the standard candle property allows the flux magnification to be estimated directly, independent\n of any model related to the lensing galaxy \\cite{Kolatt:1997zh,Oguri:2002ku}. This removes important \n degeneracies in gravitational lensing measurements, the mass-sheet degeneracy \\cite{1985ApJ...289L...1F} \n and the source-plane degeneracy \\cite{2013A&A...559A..37S}.\n \n \n A lensed SN~Ia\\xspace at redshift $z=1.388$ with a large amplification ($\\mu\\sim 30$) , PS1-10afx, where multiple images could have been expected, has been reported earlier \\cite{2013ApJ...768L..20Q}. A foreground lens was later identified at $z=1.117$ \\cite{2014Sci...344..396Q}. However, at the time of the discovery several interpretations were discussed,\n including a super-luminous supernova \\cite{2013ApJ...767..162C}. \nSince the lensed SN~Ia\\xspace hypothesis was only\n accepted long after the explosion had faded, no high spatial\n resolution imaging could be carried out in that case to verify the strong lensing nature of the system.\n Multiple-images of another supernova, SN\\xspace Refsdal \\cite{2015Sci...347.1123K}, were discovered in \n a Hubble Space Telescope (HST) survey of the massive galaxy cluster MACS J1149.6+2223. As the source was identified as a core-collapse supernova it could not be used to measure the\n lensing magnification directly.\n\n\n\nThanks to the well-known characteristics of their time-dependent brightness in optical and near-infrared filters (the SN\\xspace lightcurves), multiply-imaged SNe~Ia\\xspace\n are also ideally suited to measure time-delays in the arrival of the\n images. This provides a direct probe of the Hubble constant, the cosmological parameter measuring the expansion rate of the universe\\cite{1964MNRAS.128..307R}, as well as\n leverage for studies of dark energy \n \\cite{2002A&A...393...25G,2013ApJ...766...70S}, the cosmic constituent responsible for the accelerated expansion of the universe. \n\nThe intermediate Palomar Transient Factory (iPTF) searches the sky for new transient phenomena at optical\nwavelengths. It uses image differencing between repeated observations\n\\cite{cnk16} with a large field-of-view camera\n(7.3 sq.deg) at the 48-inch telescope (P48) at the Palomar\nObservatory \\cite{2009PASP..121.1395L}. \nThe first detection of iPTF16geu, with a statistical significance of five standard deviations (5$\\sigma$), is from 2016 September 5. 
The new\nsource was first recognized by a human scanner on September 11 \\cite{ATEL9603}. iPTF16geu (also known as SN 2016geu) was found near the center of the galaxy\nSDSS\\,J$210415.89$-$062024.7$, at\nright ascension $21^h$$4^m$$15.86^s$ and declination \\ang{-06;20;24.5} (J2000).\n\nSpectroscopic identification was carried out with the Spectral Energy Distribution (SED) Machine \n\\cite{2014CoSka..43..209R} at the Palomar 60-inch telescope (P60) on 2016 October 2, and iPTF16geu was found to be spectroscopically consistent with a normal\nSN~Ia\\xspace at $z\\approx0.4$ (see Fig.~\\ref{fig:spec}). Further spectroscopic observations from the\nPalomar~200-inch telescope (P200) and the 2.5-meter Nordic\nOptical Telescope (NOT) were used to confirm the SN~Ia\\xspace\nidentification and to establish the redshift of the host galaxy from\nnarrow sodium (Na~I~D) absorption lines, as $z=0.409$. The P200 and NOT spectra also\nshow absorption features from the foreground lensing galaxy at\n$z=0.216$. To estimate the velocity dispersion of the lensing galaxy, we fit two Gaussian functions with a common width to the H${\\alpha}$ and [N~{\\sc ii}] emission lines in the P200 spectrum in Fig.~\\ref{fig:spec}D. After taking the instrumental resolution into account, we measure $\\sigma = 3.6^{+0.9}_{-0.6}$ \\AA, corresponding to a velocity dispersion of $\\sigma_{v} = 163^{+41}_{-27}$ km s$^{-1}$.\n\nPhotometric observations of iPTF16geu collected at P48 and \nwith the SED Machine Rainbow Camera (RC) at P60, between 2016 September 5 and October 13 (see Fig.~\\ref{fig:lc}), were \nused to estimate the peak flux and lightcurve properties of the SN\\xspace with the SALT2 \nlightcurve fitting tool \\cite{Guy:2007js}.\nThe best-fit lightcurve template, also shown in Fig.~\\ref{fig:lc}, confirms that the observed \nlightcurve shapes are consistent with a SN~Ia\\xspace at $z=0.409$. These fits also indicate \nsome reddening of the supernova, suggesting that iPTF16geu suffers from moderate extinction \nby dust. This produces dimming at optical wavelengths of 20-40\\%, with the largest losses \nin the $g$-band observations. Thanks to the standard candle nature of SNe~Ia\\xspace, after correcting the peak magnitude for lightcurve properties \\cite{1993ApJ...413L.105P,1998A&A...331..815T},\nthe flux of the SN\\xspace was found to be $\\sim$30 standard deviations brighter than expected for the measured\nredshift.\nThis suggested that iPTF16geu was gravitationally lensed, and we estimated the lensing amplification to be $\\mu \\sim 52$. Expressed in astronomical magnitudes, \n$\\Delta m = -4.3\\pm {0.2}$~mag, where the uncertainty is dominated by the brightness dispersion of normal SNe~Ia\\xspace.\nSince the magnification is derived from comparing the observed brightness of iPTF16geu to other \nSNe~Ia\\xspace \\cite{Betoule:2014iz} within a narrow redshift range around $z=0.409$, the measurement of the lensing \nmagnification is independent of any assumptions on cosmology, e.g., the value of the Hubble constant or \nother cosmological parameters. The lensing magnification is also independent of a lens model, which for almost all other strong lensing systems is the only \nway to determine the magnification.\n\nThe optical observations from Palomar, with a typical angular resolution (atmospheric seeing) of \\ang{;;2}, were \ninsufficient to spatially resolve any multiple images that could result from the strong lensing nature of the system \n(Fig.~\\ref{fig:zoom}A). 
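\n\nFor concreteness, the arithmetic behind the two estimates above can be verified directly (this is our own back-of-the-envelope check, not an additional measurement). The quoted amplification corresponds to\n\\begin{equation*}\n\\Delta m=-2.5\\log_{10}\\mu \\approx -2.5\\log_{10}52\\approx -4.3~\\mathrm{mag}\\, ,\n\\end{equation*}\nand the velocity dispersion follows from the measured line width as\n\\begin{equation*}\n\\sigma_{v}=c\\,\\sigma\/\\lambda_{0}\\approx 3\\times 10^{5}~\\mathrm{km\\,s^{-1}}\\times 3.6\/6563\\approx 164~\\mathrm{km\\,s^{-1}}\\, ,\n\\end{equation*}\nassuming the width $\\sigma$ is referred to the H$\\alpha$ rest wavelength $\\lambda_{0}=6563$~\\AA; this is consistent with the $163^{+41}_{-27}$ km s$^{-1}$ quoted above.\n\n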
We therefore obtained $K_{\\mathrm{s}}$-band (2.2\\,$\\mu$m) observations from the European Southern Observatory (ESO) with the Nasmyth \nAdaptive Optics System Near-Infrared Imager and Spectrograph (NACO) at the Very Large Telescope (VLT).\nAn angular resolution of $\\sim$\\ang{;;0.3} (full-width half-max, FWHM) was obtained at the location of the target. Adaptive optics (AO) \ncorrections of the seeing were performed using a natural bright star, $\\sim$\\ang{;;30} south-east of the SN\\xspace location, \nindicated in Fig.~\\ref{fig:zoom} along with the SDSS pre-explosion image of the field \\cite{2015ApJS..219...12A}. \n\n\nThe near-IR image from VLT indicated the structure expected in a strongly lensed system, with higher flux in the northeastern and southwestern regions of the system, compared to the center (Fig.~\\ref{fig:zoom}B).\nMultiple images of the system were first resolved with observations from the Keck observatory at near-infrared wavelengths, using the Laser Guide Star aided Adaptive Optics (LGSAO) with the\nOH-Suppressing Infra-Red Imaging Spectrograph (OSIRIS) instrument, yielding an image quality of \\ang{;;0.07}\nFWHM in the $H$-band centered at 1.6 $\\mu$m (Fig.~\\ref{fig:zoom}C). \n\nLGSAO observations of iPTF16geu using the Near-InfraRed Camera 2 (NIRC2) at the Keck telescope on 2016 October 22 and November 5, in $K_{\\mathrm{s}}$-band and \n$J$-band (1.1\\,$\\mu$m), respectively, and optical images obtained with the Hubble Space Telescope (HST) on 2016 October 25, are shown in Fig.~\\ref{fig:combo}. The HST observations were carried out through the $F475W$, $F625W$ and\n$F814W$ filters, where the names correspond to the approximate location of the central wavelength in nanometers.\n\nThe observations exhibit four images of iPTF16geu, \\ang{;;0.26}--\\ang{;;0.31} from the\ncenter of the lensing galaxy, with nearly 90$^\\circ$ azimuthal\nseparations. The extended host galaxy, warped by the lens to form a partial\nEinstein ring, is brighter in the near-IR compared to the observations through optical filters. Thus, the fainter individual SN\\xspace images are poorly resolved\nfor the observations with the longest wavelengths in Fig.~\\ref{fig:combo}. Furthermore, the SN~Ia\\xspace \nspectral energy distribution (redshifted to $z=0.4$) peaks within the $F625W$ and $F814W$ filters, see e.g. \\cite{2014MNRAS.439.1959M}. \nDimming by interstellar dust in the line of sight is roughly inversely proportional to wavelength in the optical and near-IR \\cite{1989ApJ...345..245C}.\nThe biggest impact from extinction by dust is therefore expected for the shortest wavelength, in $F475W$ filter observations, where the two faintest SN\\xspace images cannot be detected above the\nbackground light.\nThe low spatial resolution lightcurves in Fig.~\\ref{fig:lc} are dominated by the two brightest SN\\xspace images, \nlabelled 1 and 2 in Fig.~\\ref{fig:combo}D. The $F625W$-$F814W$ magnitude difference (color) of the resolved images \nmeasured with HST indicates small differences in relative extinction between the SN\\xspace images, except for image 4, \nwhich appears to have about two magnitudes of additional dimming in $F814W$. \n\nUnaccounted-for dimming of light by scattering on dust grains in the line of sight would lead to an underestimation \nof the lensing amplification. 
Including corrections for differential extinction in the intervening lensing galaxy \nbetween the SN\\xspace images suggests a wider range for the lensing magnification of iPTF16geu, between $-4.1$ \nand $-4.8$ mag \\cite{sup}.\n\n\nThe SN\\xspace multiple-image positions in Fig.~\\ref{fig:combo} were used to construct a lensing model, with an isothermal ellipsoid galaxy\nlens \\cite{1993LIACo..31..571K,1994A&A...284..285K} with ellipticity $\\epsilon_e=0.15\\pm 0.07$ and mass\n$M=(1.70 \\pm 0.06)\\cdot 10^{10}\\,M_\\odot$ inside an ellipse with major axis $1.13$ kpc and minor axis $0.97$ kpc. Details of the lensing model are presented in the Supplementary Material \\cite{sup}. The lens model can be independently verified through comparisons between the model-predicted and observed velocity dispersion of the lensing galaxy. From the model \nwe derive an estimate, $\\sigma^{\\rm mod}_v=156\\pm 4$ km s$^{-1}$, in good agreement with the measured value of the velocity dispersion (Fig.~\\ref{fig:spec}D).\n\n\nHowever, the adopted smooth isothermal ellipsoid lens model predicts brightness differences\nbetween the multiple SN\\xspace images that are in disagreement with the observations. Including corrections for \nextinction in the resolved SN\\xspace images in the $F814W$ filter, we find large discrepancies between the model and \nmeasured magnitude differences for the multiple images of iPTF16geu:\n $\\Delta m^{obs}_{1j}-\\Delta m^{mod}_{1j}$ = ($-0.3, -1.6, -1.5$) mag for $j=2,3$ and $4$, where the indices follow the numbering scheme adopted in Fig.~\\ref{fig:combo}.\nThe observed discrepancy between the smooth model predictions for the SN\\xspace images 1 and 2 compared to 3 and 4 (brighter by factors of 4 and 3, respectively) cannot be accounted for by time-delays between the images, as they are \npredicted to be $<35$ hours \\cite{sup}. Graininess of the stellar distribution and dark matter sub-halos in the lens\ngalaxy, in addition to the smooth mass profile, can cause variations in magnification without altering image locations. \nThese milli- and micro-lensing effects \\cite{1989Natur.338..745K,1994ApJ...429...66W}, small enough not to cause additional resolved image separations, offer a plausible explanation for the deviation from the smooth lens model.\n\nAvailable forecasts for wide-field surveys \\cite{2010MNRAS.405.2579O} suggest that about one strongly lensed SN~Ia\\xspace could be expected in our survey, \nirrespective of redshift and magnification, with approximately a 30\\% chance to be in a quad configuration. For an average ellipticity of the lenses $e = 0.3$ \\cite{2010MNRAS.405.2579O}, only about 1\\% of the lensed SNe\\xspace are expected to \nhave $\\mu \\raise0.3ex\\hbox{$>$}\\kern-0.75em{\\lower0.65ex\\hbox{$\\sim$}} 50$ \\cite{Chae:2002uf}.\n\nWe have performed an independent rate estimate, with a somewhat simplified lensing simulation but including survey-specific parameters, and confirm that the probability of detecting and classifying a\nhighly magnified SN~Ia\\xspace like iPTF16geu does not exceed the few percent level \\cite{sup}.\n\n iPTF16geu appears to be a rather unlikely event, unless the actual rate of very magnified SNe\\xspace is higher\n than anticipated, e.g., if the contribution from lensing by any kind of sub-structures in galaxies is underestimated, or if\n we are otherwise lacking an adequate description of gravitational lensing at the $\\sim$ 1 kpc scale. 
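\n\nAs a rough consistency check of this scale (our own estimate, assuming a flat $\\Lambda$CDM cosmology with $H_{0}\\approx 70$ km s$^{-1}$ Mpc$^{-1}$ and $\\Omega_{m}\\approx 0.3$, which the analysis above does not rely on): the angular diameter distance to the lens at $z=0.216$ is $D_{A}\\approx 720$ Mpc, so an image radius of $\\sim$\\ang{;;0.3} corresponds to\n\\begin{equation*}\nR\\approx \\theta D_{A}\\approx \\frac{0.3}{206265}\\times 720~\\mathrm{Mpc}\\approx 1~\\mathrm{kpc}\\, ,\n\\end{equation*}\nin line with the $\\sim$1 kpc scale invoked here and with the kpc-sized lens-model axes quoted above.\n\n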
The physical scale probed by the resolved images of iPTF16geu is comparable to the smallest of the 299 multiply-imaged lensed systems in the Master \nLens Database \\cite{master}. Using the standard candle nature of SNe~Ia\\xspace we can more easily detect strongly lensed systems with sub-arcsecond angular separations,\nallowing exploration of the bending of light at scales $\\raise0.3ex\\hbox{$<$}\\kern-0.75em{\\lower0.65ex\\hbox{$\\sim$}}$ 1 kpc, an otherwise challengingly small distance in studies of gravitational lensing \\cite{2017ApJ...834L...5G}. \nAs demonstrated with iPTF16geu, discovered while still brightening with a modest-size telescope and under sub-optimal atmospheric conditions, the locations of these rare systems can be identified\nin advance of extensive follow-up imaging at high spatial resolution. \n\\bibliographystyle{Science}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe relativistic transformation of temperature is a problem which has\nbeen controversial for almost a century. There have been many proposals,\nranging from pure classical\nthermodynamics~\\cite{Einstein,planck,tolman,Ott,landsvarianza,newburgh} to\nclassical and quantum statistical mechanics~\\cite{bors,impos,kania1,kania2,cubero}. Starting from different\npostulates, each of these works has tried to establish how the \ndifferent thermodynamic quantities change under Lorentz\ntransformations,\nbut they have obtained incompatible results. For example, in\nRefs.~\\cite{kampen,yuen} a review of different formalisms is given. In\nparticular, in Ref.~\\cite{kampen} it is established that the different\nformalisms are mathematically equivalent to each other, because there\nis a one-to-one correspondence \nbetween the quantities defined in every formalism.\n\n\n\n\nThe idea of generalizing statistical mechanics to relativistic\nsystems dates back to J\\\"uttner~\\cite{juttner,juttner2}, who proposed a relativistic form of\nthe Maxwell-Boltzmann velocity distribution. Other attempts to obtain the\ncorrect relativistic distribution function that fits experimental\ndata have been made recently. For example, in\nRefs.~\\cite{kania1,kania2} a new mathematical formalism was created in\norder to develop a non-extensive relativistic statistical mechanics under a\ncanonical ensemble, which fits cosmic-ray data well. Recently, it has been shown through\nnumerical simulation that J\\\"uttner's distribution function is the\ndistribution in special relativity that produces the best fit for a dilute two-component gas mixture with collisions\nin one dimension~\\cite{cubero}. \n\nOn the other hand, other works lead to distribution functions\nother than J\\\"uttner's. For instance, in Ref.~\\cite{lapenta}, the authors perform\nnumerical simulations of electrons accelerated to relativistic\nenergies due to their interaction with waves generated by longitudinal\nstreaming plasma instabilities. They found an equilibrium\ndistribution which presents power-law tails at high energies. Although\nRefs.~\\cite{cubero,lapenta} consider different systems, both show that\nthe old problems of the transformation of temperature and pressure, and of\nthe form of the distribution function in theoretical relativistic\nstatistical mechanics, appear in numerical simulations. \n\nHowever, in many of these works the temperature transformation is\nassumed and not derived from the theory itself. 
This happens because in\ncanonical ensembles the inverse temperature $\\beta=T^{-1}$ is a fixed\nparameter of the system (we set the Boltzmann constant $k_B=1$). Therefore, it is not easy\nto find a temperature transformation between two inertial frames\nin relative motion by direct calculation. \n\nTo overcome this problem, in this article we calculate the temperature\nin the microcanonical ensemble of a relativistic ideal gas of\nbradyons, luxons or tachyons. In this ensemble the intensive quantities are not\nindependent variables and it is possible to find the temperature simply by taking\nderivatives. Thus, the calculations are\nsimpler than in a canonical ensemble because we only need to fix the energy of these\nparticles. In addition, according to Gibbs' postulate, the results\nshould be independent of the ensemble used to calculate them. This\npostulate allows us to obtain a result that is equivalent to the one\nobtained in any other ensemble~\\cite{gibbs}. \n\nWe are extending the old problem of how the temperature of\nbradyons transforms between frames to luxons and tachyons. The reason to\ninclude tachyons in this study is the wide range of relativistic\nsystems in which they can appear. They play an important role in\nrecent developments in inflationary cosmological\nmodels~\\cite{balart,frey,xiong}, string theory black hole\nmodels~\\cite{atish,rama}, and there are even proposed procedures to\nmeasure tachyonic states~\\cite{chiao}.\n\nTo find the temperature transformation we first derive the\nmicrocanonical entropy of the systems. Then we calculate the\ntemperature in a thermodynamic way, showing how it\ntransforms. Furthermore, we show that the thermodynamic entropy\nelement $dS$ is Lorentz invariant for each particle species.\n\n\n\\section{Entropy calculation}\n\nConsider an ideal gas (of bradyons, luxons or\ntachyons) which is at rest in an inertial\nframe $I$. Consider another inertial frame $I'$ moving with constant velocity\n$\\mathbf{w}=w\\hat x$ relative to $I$. \nSetting $c=1$, we choose the magnitude $w\\leq1$ if the particles of the\nsystem are bradyons or luxons, and $w>1$ if they are\ntachyons. \n\nA bradyon is a particle with rest mass $m$\nwhich moves slower than the speed of light. Its dispersion relation is\ngiven by \n\\begin{equation}\n \\label{erel}\n p_\\mu p^\\mu= m^2\\, ,\n\\end{equation}\nwhere $p_\\mu=(\\epsilon,\\mathbf{p})$ is the 4-momentum of the particle with energy $\\epsilon$\nand momentum $\\mathbf{p}$. We use the signature $(+,-,-,-)$ for our\ncalculations.\n\nA luxon is a particle with null mass which moves\nat the speed of light. Its dispersion relation has the form\n \\begin{equation}\n \\label{erell}\n p_\\mu p^\\mu=0\\, . \\end{equation}\n\nFinally, a tachyon is a particle with imaginary mass $M=im$ (with $m$\na real quantity) which moves faster than the speed of\nlight~\\cite{feinberg,eve,mariwalla,maccarrone,feinberg2,kowa,antippa}.\nIts dispersion relation is\n\\begin{equation}\n \\label{erel2}\n p_\\mu p^\\mu=-m^2\\, .\n\\end{equation} \n\n\nWe calculate the number of states $\\Omega$ using the microcanonical\nensemble. The three-vector phase-space element $d^3\\mathbf{q}d^3\\mathbf{p}$\nis Lorentz invariant for bradyons, luxons and tachyons~\\cite{kowa}.\n\nWe consider an ideal gas of bradyons, luxons or tachyons,\nconsisting of $N$ particles ($N\\gg1$) contained in a volume $V$. 
The Hamiltonian of $N$ bradyons is\n\\begin{equation}\nH(p_i)=\\sum_{i=1}^N\\sqrt{|\\mathbf{p}_i|^2+m^2}\\, ,\n\\end{equation}\nwhere $|\\mathbf{p}_i|=(p_{x,i}^2+p_{y,i}^2+p_{z,i}^2)^{1\/2}$. The Hamiltonian for $N$ luxons is\n\\begin{equation}\nH(p_i)=\\sum_{i=1}^N|\\mathbf{p}_i|\\, ,\n\\end{equation}\nand the Hamiltonian for $N$ tachyons is\n\\begin{equation}\nH(p_i)=\\sum_{i=1}^N\\sqrt{|\\mathbf{p}_i|^2-m^2}\\, .\n\\end{equation}\n\nSetting $h=1$, the microcanonical number of states for each species is given by \n\\begin{eqnarray}\n \\label{numestados}\n \\Omega&=&\\frac{1}{N!}\\int_{E\\leq H(p_i)\\leq E+\\Delta E} d^3\\mathbf{q}_1\\ldots d^3\\mathbf{q}_Nd^3\\mathbf{p}_1\\ldots d^3\\mathbf{p}_{N}\\nonumber\\\\\n&=&\\frac{V^N}{N!}\\int_{E\\leq H(p_i)\\leq E+\\Delta E} d^3\\mathbf{p}_1\\ldots d^3\\mathbf{p}_{N}\\, .\n\\end{eqnarray}\n\nFor simplicity, we first calculate $\\Sigma$ instead of $\\Omega$, where\n\\begin{equation}\n\\label{sigma}\n\\Sigma=\\frac{V^N}{N!}\\int_{H(p_i)\\leq E} d^3\\mathbf{p}_1\\ldots d^3\\mathbf{p}_{N}\\, .\n\\end{equation}\n\nThe number of states in an energy interval can be calculated from $\\Omega=(\\partial\n\\Sigma\/\\partial E)\\Delta E$. Thus, we must write the condition $H(p_i)\\leq E$ in a\n$3N$-dimensional momentum space. For luxons $m=0$, and then \n\\begin{equation}\nH=\\sum_i |{\\bf p}_i|\\leq E\\, .\n\\label{condmom0}\n\\end{equation}\n\nNow we seek the condition for bradyons. Since no direction in space is preferred,\nlet us start by supposing that $n$ particles have the same momentum ${\\bf p_0}$\nand $N-n$ particles have zero momentum, with $n\\leq N$. In this way, the\ncondition for the Hamiltonian is $H=n\\left(|{\\bf p_0}|^2+m^2\\right)^{1\/2}+(N-n)m\\leq E$. Using this, we can obtain\n$$\\sum_i|{\\bf p}_i|=n|{\\bf p_0}|\\leq\\left((E-(N-n)m)^2-n^2m^2\\right)^{1\/2}.$$\nHowever, the factor $(E-(N-n)m)^2-n^2m^2=(E-Nm)(E-Nm+2nm)\\leq E^2-N^2 m^2$ since $n\\leq N$, and therefore \n\\begin{equation}\n\\sum_i|{\\bf p}_i|\\leq \\left(E^2-N^2m^2\\right)^{1\/2}\n\\label{condmom}\n\\end{equation}\nis fulfilled even when $n=N$.\n\nNow, we should study what happens when the particles have different momenta.\nAn illustrative example is the following. If we have\n$N-1$ particles with the same momentum and one particle with a\ndifferent momentum, the sum of the norms of all momenta will always be\nless than the bound in Eq.~(\\ref{condmom}), because the\nparticles obey the condition $H\\leq E$. Following this example, any other\nconfiguration of the momenta produces a sum which\nis less than the bound in Eq.~(\\ref{condmom}). So, the condition Eq.~(\\ref{condmom})\nis always valid for bradyons.\n\nUsing an analogous argument we can obtain the condition on the momentum space for tachyons,\n\\begin{equation}\n\\sum_i|{\\bf p}_i|\\leq \\left(E^2+N^2m^2\\right)^{1\/2}\\, ,\n\\label{condmom2}\n\\end{equation}\nwhich is always fulfilled.\n\nAll these conditions can be easily written for the momentum\ncomponents. Thus, the sum will go from 1 to $3N$. Written in that\nform, they represent a regular geometric body in $3N$ dimensions,\nwhich would be a sphere in the case of a classical ideal gas. Then, the\nproblem of calculating the integral in Eq.~(\\ref{sigma}) reduces to finding the\nvolume of this regular geometric body. 
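\n\nTo make the geometry explicit, here is a sketch of how the factors below can arise (our reading, under the assumption that the enclosing component-wise region is the body whose volume is taken): since $|p_{x}|+|p_{y}|+|p_{z}|\\leq \\sqrt{3}\\,|\\mathbf{p}|$, a condition of the form $\\sum_{i}|\\mathbf{p}_{i}|\\leq R$ leads to the component-wise bound $\\sum_{j=1}^{3N}|p_{j}|\\leq \\sqrt{3}R$, which defines a cross-polytope in $3N$ dimensions with volume\n\\begin{equation*}\nV_{3N}=\\frac{\\left(2\\sqrt{3}R\\right)^{3N}}{(3N)!}=\\left(2\\sqrt{3}\\right)^{3N}\\frac{R^{3N}}{(3N)!}\\, .\n\\end{equation*}\nSubstituting $R=E$, $R=(E^2-N^2m^2)^{1\/2}$ or $R=(E^2+N^2m^2)^{1\/2}$ reproduces the factors in the expressions for $\\Omega$ given below.\n\n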
Following the procedure\ndescribed in Ref.~\\cite{greiner}, we obtain the number of states for\nbradyons as \n\\begin{equation}\n \\label{numestadosb}\n \\Omega=\\frac{V^N}{N!}\\left(2\\sqrt 3\\right)^{3N}\\frac{\\left(E^2- N^2m^2\\right)^{3N\/2}}{(3N)!}\\, ,\n\\end{equation}\nthe number of states for luxons as\n\\begin{equation}\n \\label{numestadosl}\n \\Omega=\\frac{V^N}{N!}\\left(2\\sqrt 3\\right)^{3N}\\frac{E^{3N}}{(3N)!}\\, ,\n\\end{equation}\nand the number of states for tachyons as\n\\begin{equation}\n \\label{numestadost}\n \\Omega=\\frac{V^N}{N!}\\left(2\\sqrt 3\\right)^{3N}\\frac{\\left(E^2+ N^2m^2\\right)^{3N\/2}}{(3N)!}\\, .\n\\end{equation}\n\nIt is straightforward to obtain the entropy as $S=\\ln \\Omega$ in a\nmicrocanonical ensemble. For bradyons the entropy is\n\\begin{equation}\n \\label{entropiab}\n S=N\\ln\\left[\\frac{V(E^2- N^2m^2)^{3\/2}}{27N^4}\\right]+3N\\ln \\left[2\\sqrt 3 e^{4\/3}\\right]\\, .\n\\end{equation}\n\nIn the same way, the entropy for luxons is\n\\begin{equation}\n \\label{entropial}\n S=N\\ln\\left[\\frac{VE^3}{27N^4}\\right]+3N\\ln \\left[2\\sqrt 3 e^{4\/3}\\right]\\, ,\n\\end{equation}\nand the entropy for tachyons is\n\\begin{equation}\n \\label{entropiat}\n S=N\\ln\\left[\\frac{V(E^2+ N^2m^2)^{3\/2}}{27N^4}\\right]+3N\\ln \\left[2\\sqrt 3 e^{4\/3}\\right]\\, .\n\\end{equation}\n\n\\section{Temperature transformation}\n\nTo find the relation between the\ntemperature of the system in the $I$ frame and the temperature in the\n$I'$ frame we need to find how to calculate the number of\nstates in $I'$. According to the Liouville theorem~\\cite{misner}, the \nphase-space element $d^3\\mathbf{p}'d^3\\mathbf{q}' = d^3\\mathbf{p}d^3\\mathbf{q}$ is Lorentz\ninvariant. Using this, the number of\nstates $\\Omega'$ in the $I'$ frame can be written using the phase space of the $I$ frame,\n\\begin{multline}\n\\label{volps}\nN!\\, \\Omega= \\int_Id^3\\mathbf{p}d^3\\mathbf{q} \\\\\\to N!\\, \\Omega'= \\int_{I'}d^3\\mathbf{p}'d^3\\mathbf{q}' = \\int_{I'}d^3\\mathbf{p}d^3\\mathbf{q}\\, ,\n\\end{multline}\nwhere the $I'$ subindex means that now the integration is over all\n$\\mathbf{p'}_j$ that satisfy $\\sum_{i=1}^N |\\mathbf{p'}_i|\\leq ({E'^2-N^2\n m'^2})^{1\/2}$ for bradyons, $\\sum_{i=1}^N|\\mathbf{p'}_i|\\leq E'$ for\nluxons and $\\sum_{i=1}^N |\\mathbf{p'}_i|\\leq ({E'^2+N^2 m'^2})^{1\/2}$ for\ntachyons in the $I'$ frame. \n\n\n\n\nDue to Eq.~(\\ref{volps}), the entropy $S'$ calculated in the $I'$\nframe has the same form as the entropy $S$ of Eq.~(\\ref{entropiab}), Eq.~(\\ref{entropial}), and\nEq.~(\\ref{entropiat}), but\nwith the energy $E$ replaced by the energy $E'$, and the volume $V$ by the volume $V'$.\n\nFor bradyons and luxons the energy transforms as $E'=\\gamma E$, the momentum\ntransforms as $p'=\\gamma p$ and the\nvolume transforms as $V=\\gamma V'$, since the relative\nmotion is in one dimension. The relativistic factor is $\\gamma=(1-w^2)^{-1\/2}$ with $w\\leq\n1$. For these particles we consider positive energies. \n\nIn the case of tachyons, the energy and momentum transformations are $E'=\\zeta E$ and\n$p'=\\zeta p$, respectively, where $\\zeta=(w^2-1)^{-1\/2}$ with\n$w>1$~\\cite{feinberg,maccarrone}. For simplicity, we consider\npositive-momentum tachyons. Similarly, the volume transformation for tachyons is $V=\\zeta V'$~\\cite{eve}. Note\nthat if in the $I$ frame the energy, the momentum and the volume of tachyons are real\nquantities, then in the $I'$ frame these quantities are still real. 
\n\nThe above energy, momentum and volume transformations are one of\nthe multiple sets of transformations that can be constructed for a Lorentz-invariant\ntachyon theory~\\cite{eve,maccarrone,mariwalla,feinberg2}. Although the present\nanalysis can be done with other transformations, the choice made above recovers\nthe usual and simplest energy and momentum relations for\ntachyons~\\cite{eve}. They ensure that the tachyon three-vector\nphase-space element $d^3{\\bf q}d^3{\\bf p}$ is invariant.\n\nIn order to obtain the temperature, we calculate the thermodynamic variation of the\nentropy. The variation is $dS=dE\/T+(P\/T)dV$, where the temperature $T$\nand the pressure $P$ are defined by~\\cite{greiner}\n\\begin{equation}\n \\label{entrp11}\n \\frac{1}{T}=\\left(\\frac{\\partial S}{\\partial E}\\right)_V\\, ,\\quad \\frac{P}{T}=\\left(\\frac{\\partial S}{\\partial V}\\right)_E\\, .\n\\end{equation}\n\nIn this way, the temperature for bradyons, calculated from\nEq.~(\\ref{entrp11}), is\n\\begin{equation}\n \\label{tempb}\n \\frac 1 T =\\frac{3NE}{E^2- N^2m^2}\\, .\n\\end{equation}\n\nThe temperature for luxons is\n\\begin{equation}\n \\label{templ}\n \\frac 1 T =\\frac{3N}{E}\\, ,\n\\end{equation}\nand the temperature for tachyons is\n\\begin{equation}\n \\label{temt}\n \\frac 1 T =\\frac{3NE}{E^2+ N^2m^2}\\, .\n\\end{equation}\n\nLikewise, we can calculate the pressure for bradyons, luxons and tachyons from\nEq.~(\\ref{entrp11}). This is\n\\begin{equation}\n \\label{pressure}\n \\frac P T =\\frac{N}{V}\\, ,\n\\end{equation}\nfor the three species. It corresponds to the equation of state of an ideal gas.\n\nThe calculation of the temperature $T'$ for bradyons, luxons or tachyons in the $I'$ frame\n can be done using Eq.~(\\ref{volps}) to\n evaluate the entropy $S'$. We can write Eq.~(\\ref{entrp11}) for the intensive quantities in the\n $I'$ frame. This allows us to express Eq.~(\\ref{tempb}) for bradyons,\n Eq.~(\\ref{templ}) for luxons, and Eq.~(\\ref{temt}) for tachyons in\n $I'$. Thus, we obtain how the temperature $T'$ in the $I'$ frame\n transforms to the temperature $T$ in the $I$ frame. For a bradyon or luxon ideal gas, under the transformations\n for energy and momentum previously established,\n\\begin{equation}\n \\label{transT}\n {T'}={\\gamma T}\\, ,\n\\end{equation}\nand for a tachyon gas\n\\begin{equation}\n \\label{transTt}\n {T'}={\\zeta T}\\, .\n\\end{equation}\n\nThe transformations in Eq.~(\\ref{transT}) and in Eq.~(\\ref{transTt}) imply that the\ntemperature is not a Lorentz invariant. The temperature transformation\nfor bradyons and luxons (\\ref{transT}) coincides with Ott's\ntemperature transformation~\\cite{Ott} and other previous\nworks~\\cite{newburgh,bors,sutcliffe,moya}, and it is in disagreement\nwith Planck's formalism~\\cite{Einstein,planck,tolman,impos,kania2,kowa}. This means\nthat a moving gas of bradyons or luxons appears hotter. The temperature transformation for\ntachyons~(\\ref{transTt}) is derived here for the first time, to the best of the authors'\nknowledge. \n\nThe difference between our approach and other approaches is the\ndefinition of temperature. We emphasize that temperature is defined in\na thermodynamic and statistical form by Eq.~(\\ref{entrp11}). Thus, the\ndefinition in Eq.~(\\ref{entrp11}) leads naturally to the correct\ntemperature transformations above. \n\nWe can do the same analysis for the pressure $P'$ in the $I'$ frame\nusing the transformation for the energy and the momentum. 
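\n\n(As a quick sanity check of Eq.~(\\ref{transT}), take the luxon case: since $S'$ has the same functional form as $S$, Eq.~(\\ref{templ}) written in the $I'$ frame gives\n\\begin{equation*}\n\\frac{1}{T'}=\\frac{3N}{E'}=\\frac{3N}{\\gamma E}=\\frac{1}{\\gamma T}\\, ,\n\\end{equation*}\nso that $T'=\\gamma T$ directly; the bradyon and tachyon cases follow in the same way from Eqs.~(\\ref{tempb}) and (\\ref{temt}).)\n\n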
In the same\nway, according to the Liouville theorem and using Eq.~(\\ref{entrp11}) and\nEq.~(\\ref{pressure}) for $I'$, we can obtain the transformation of the\npressure $P'$ in the $I'$ frame to the pressure $P$ in the $I$ frame for\nbradyons and luxons as\n\\begin{equation}\n \\label{transp}\n {P'}={\\gamma^2 P}\\, .\n\\end{equation}\n\nSimilarly, for tachyons, the pressure transformation is\n\\begin{equation}\n \\label{transpt}\n {P'}={\\zeta^2 P}\\, .\n\\end{equation}\n\nWe also see that pressure is not Lorentz invariant. The result for bradyons and luxons coincides with\nthat found previously in Refs.~\\cite{moya,sutcliffe}. This\ntransformation for pressure contradicts some previous\nworks~\\cite{Einstein,planck,tolman,impos,kowa}. The tachyon\npressure transformation is derived here for the first time. Our thermodynamic and\nstatistical definition yields pressure transformations~(\\ref{transp})\nand~(\\ref{transpt}) that preserve the properties of ideal gases. Thus, the\ntemperature and pressure transformations are necessary for a gas \nof any of these particles to be an ideal gas in both frames. After taking\ninto account Eq.~(\\ref{transT}) and Eq.~(\\ref{transp}), we can write\nEq.~(\\ref{pressure}) in the $I'$ frame as\n\\begin{equation}\n\\label{gasI'}\nP'\\, V' = N\\, T'\\,.\n\\end{equation}\n\nFrom this we can conclude that an ideal\ngas behaves like an ideal gas in every inertial frame under Lorentz transformations, as one would\nexpect from the principle of special relativity. \n\nBy the same token, the transformations of Eq.~(\\ref{transT}),\nEq.~(\\ref{transTt}), Eq.~(\\ref{transp}) and Eq.~(\\ref{transpt})\nfor the intensive quantities $T$ and $P$, for bradyons, luxons\nor tachyons, satisfy\n\\begin{equation}\n \\label{dS\/dE}\n \\left(\\frac{\\partial S'}{\\partial E'}\\right)_{V'} dE' = \\left(\\frac{\\partial S}{\\partial E}\\right)_{V} dE\\,,\n\\end{equation}\nand\n\\begin{equation}\n \\label{dS\/dV}\n \\left(\\frac{\\partial S'}{\\partial V'}\\right)_{E'} dV' = \\left(\\frac{\\partial S}{\\partial V}\\right)_{E} dV\\,.\n\\end{equation}\n\nTherefore, the variation of the entropy is the same in both frames, which means\n\\begin{equation}\n \\label{dS}\n dS = dS'\\,,\n\\end{equation}\nfor any of the three species, which is in agreement with all previous works. \n\n\nFinally, it is easy to obtain from Eq.~(\\ref{templ}) the correct energy for\nan ideal gas of luxons as $E=3NT$. From Eq.~(\\ref{tempb}) we can\nobtain the correct non-relativistic energy for an ideal gas of\nbradyons when $m\\gg T$,\n\\begin{equation}\n \\label{Ebnorel}\n E\\simeq\\frac{3}{2}NT+Nm\\, .\n\\end{equation}\n\nFor an ideal gas of tachyons, it is possible to obtain the energy in\nthe very-high-temperature limit, when $m\\ll T$. From Eq.~(\\ref{temt}) we obtain\n\\begin{equation}\n \\label{Etnorel}\n E\\simeq\\frac{Nm^2}{3T}\\, .\n\\end{equation}\n\nFrom~(\\ref{Etnorel}) we can see that the energy becomes null when the tachyon velocity and temperature go to\ninfinity, as expected~\\cite{feinberg}.\n\n\n\\section{Conclusions}\n\nWe have shown a new path to obtain some known results on the temperature\ntransformation for a gas of non-interacting particles. Our treatment is from statistical first principles, only\nassuming the known space-time and\nenergy-momentum Lorentz transformations along with the Liouville theorem and Gibbs' postulate.\n\nThe temperature transformation for a classical ideal gas composed of\nparticles which move slower than, at, or faster than the speed of light was shown\nexplicitly. 
These transformations are the correct ones, at least for the\nmicrocanonical ensemble, because they were derived using \nonly the known statistical properties of each particle. In addition,\nthe Liouville theorem allows us to work in the microcanonical ensemble in\nany inertial frame, so the transformations obtained preserve the form\nof the first and second laws of thermodynamics in all inertial\nframes. This differs from the usual relativistic\nthermodynamics treatment, where the forms of the first and second laws\nare chosen in a more arbitrary way. \n\n\nAn interesting consequence of the transformations that we\nfound for temperature and pressure is that the equation of\nstate of an ideal gas is a Lorentz invariant. This is in agreement\nwith the first postulate of special relativity, as one would expect.\n\nFor tachyons, Eq.~(\\ref{Etnorel}) is correct in the high-temperature\nlimit, when their velocity goes to infinity and their\nenergy $E\\to 0$. However, when the relative speed between frames\n$w$ goes to infinity, the temperature $T'$ goes to zero from\nEq.~(\\ref{transTt}). This is because a tachyon in the $I$ frame is a bradyon\nin an $I'$ frame which moves with speed greater than $1$ relative to the $I$\nframe~\\cite{antippa}. The behavior of the tachyon temperature transformation~(\\ref{transTt}) is\nequivalent to the temperature transformation of bradyons when $w\\to\n1$. This shows the bradyon-tachyon duality between frames moving at\nrelative speeds greater than that of light.\n\n\n\\acknowledgments\n\nWe thank Dr. Gonzalo Guti\\'errez, Dr. J. Alejandro Valdivia and\nMSc. Andr\\'es Gom\\'ez for useful discussions and their enlightening\ncomments. \n\nF. A. is grateful to Programa MECE Educaci\\'on Superior for a Doctoral\nFellowship, C. A. F. is grateful for a CONICyT Master Fellowship and\nP. S. M. is grateful for a CONICyT Doctoral Fellowship. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\setcounter{equation}{0}A major direction in differential geometry is the\nstudy of Riemannian manifolds with exceptional holonomy, i.e. $7$-dimensional $G_{2}$-manifolds and $8$-dimensional $\\func{Spin}\\left(\n7\\right) $-manifolds, as well as, more generally, $G_{2}$-structures and \n$\\func{Spin}\\left( 7\\right) $-structures. As it turns out, both of these\ngroups are closely related to the octonions \\cite{Harvey}, which is the \n$8$-dimensional nonassociative normed division algebra $\\mathbb{O}$ over \n$\\mathbb{R}.$ A number of properties of $G_{2}$-structures and $\\func{Spin}\\left( 7\\right) $-structures are hence artifacts of the octonionic origin of\nthese groups. In particular, in \\cite{GrigorianOctobundle}, the author has\nexplicitly used an octonion formalism to investigate properties of isometric \n$G_{2}$-structures. In that setting, it emerged that objects such as the\ntorsion of a $G_{2}$-structure are naturally expressed in terms of sections\nof a unit octonion bundle. The set of unit octonions $U\\mathbb{O}\\cong\nS^{7}$ has the algebraic structure of a \\emph{Moufang loop}. Indeed, a\ncloser look shows that in the context of $G_{2}$-structures, the algebra\nstructure of $\\mathbb{O}$ played a secondary role to the loop structure on $\n\\mathbb{O} $ and the corresponding cross-product structure on the tangent\nspace at the identity $T_{1}U\\mathbb{O}\\cong \\func{Im}\\mathbb{O}$, the pure\nimaginary octonions. This suggests that there is room for generalization by\nconsidering bundles of other smooth loops. 
As far as possible, we will\nminimize assumptions made on the loops. Generally, there is a large supply\nof smooth loops, because given a Lie group $G,$ a Lie subgroup $H$, and a\nsmooth section $\\sigma :G\/H\\longrightarrow G$ (i.e. a smooth collection of\ncoset representatives), we may define a loop structure on $G\/H$ if $\\sigma $\nsatisfies certain conditions, such as $\\sigma \\left( H\\right) =1$, and for\nany cosets $xH$ and $yH,$ there exists a unique element $z\\in \\sigma \\left(\nG\/H\\right) $ such that $zxH=yH$ \\cite{NagyStrambachBook}. Conversely, any\nsmooth loop can also be described in terms of a section of a quotient of Lie\ngroups. Special kinds of smooth loops, such as Moufang loops, have been\nclassified \\cite{NagyStrambachBook}, however for broader classes, such\nas Bol loops, there exists only a partial classification \\cite{FigulaBol}.\n\nIn \\cite{GrigorianOctobundle}, the octonion bundle is constructed out of the\ntangent bundle, and is hence very specific, one could say canonical. However,\nto understand the properties of the bundle, it is helpful to decouple the bundle\nstructure and the properties of the base manifold. Hence, another direction\nfor generalization is to consider loop bundles over arbitrary manifolds. In\nparticular, such an approach will also make it clearer which properties\nof the octonion bundle in the $G_{2}$ setting are generic and which are\nintrinsic to the $G_{2}$-structure.\n\nThe purpose of this paper is two-fold. One is to carefully build up the\ntheory of loop bundles, starting with all the necessary algebraic\npreliminaries and properties of smooth loops. The second is to define a\nunified framework through which geometric structures based on certain\nalgebraic structures may be studied. In this sense, this can be considered\nas an extension of the normed division algebra approach to various\nstructures in Riemannian geometry as developed by Leung \\cite{LeungDivision}. The long-term goal in $G_{2}$-geometry is to obtain some kind of analog of\nYau's celebrated theorem on existence of Calabi-Yau metrics \\cite{CalabiYau}, and thus a key theme in the study of $G_{2}$-manifolds is to try to\ncompare and contrast the corresponding theory of K\\\"{a}hler and Calabi-Yau\nmanifolds. This requires putting the complex and octonionic geometries into\nthe same framework. However, a certain amount of generalization allows one to\nsee some aspects of the theory more clearly.\n\nIn Section \\ref{sectLoop} we give an overview of the key algebraic\nproperties of loops. While many basic properties of loops may be known to\nalgebraists, they may be new to geometers. Moreover, we adopt a point of\nview where we emphasize the pseudoautomorphism group of a loop, which is a\ngeneralization of the automorphism group, and properties of modified\nproducts defined on loops. These are the key objects that are required to\ndefine loop bundles, however in the algebraic literature they typically stay\nin the background. In particular, we show how the pseudoautomorphism group, the\nautomorphism group, and the nucleus of a loop are related and how these\nrelationships manifest themselves in the octonion case as well-known\nrelationships between the groups $\\func{Spin}\\left( 7\\right) ,$ $SO\\left(\n7\\right) $, and $G_{2}$.\n\nIn Section \\ref{sectSmooth}, we then restrict attention to smooth loops,\nwhich are the not necessarily associative analogs of Lie groups. 
We also\nmake the assumption that the pseudoautomorphism group acts on the smooth\nloop via diffeomorphisms and is hence itself a Lie group. This is an\nimportant assumption and it is not known whether this is always true. The\nkey example of a non-associative smooth loop is precisely the loop of unit\noctonions. We first define the concept of an exponential function, which is\nsimilar to that on Lie groups. This is certainly not a new concept; it was\nfirst defined by Malcev in 1955 \\cite{Malcev1955}, but here we show that in\nfact, generally, there may be different exponential maps, based on the\ninitial conditions of the flow equation. This then relates to the concept of\nthe modified product as defined in Section \\ref{sectLoop}. Then, in Section\n\\ref{secTangent}, we define an algebra structure on tangent spaces of the\nloop. The key difference with Lie algebras is that in the non-associative\ncase, there is a bracket defined at each point of the loop. Indeed, as shown\nin Section \\ref{sectStruct}, the differential of the bracket depends on the\nassociator, which of course vanishes on Lie algebras, but is non-trivial on\ntangent algebras of non-associative loops. Moreover, in Section \\ref{sectStruct}, we prove a loop version of the Maurer-Cartan structural\nequation. Namely, for any point $p$ in the loop, the right Maurer-Cartan\nform satisfies the following equation:\n\\begin{equation}\n\\left( d\\theta \\right) _{p}-\\frac{1}{2}\\left[ \\theta ,\\theta \\right]\n^{\\left( p\\right) }=0,\n\\end{equation}\nwhere $\\left[ \\cdot ,\\cdot \\right] ^{\\left( p\\right) }$ is the bracket at\npoint $p$. In Lie theory, the Jacobi identity is the integrability condition\nfor the Maurer-Cartan equation; in the non-associative case, the\ncorresponding identity is known as the Akivis identity \\cite{HofmannStrambach}, and involves the associator.\n\nIn Section \\ref{sectStruct} we define another key component in the theory of\nsmooth loops. As discussed above, each element $s$ of the loop $\\mathbb{L}$\ndefines a bracket $b_{s}$ on the tangent algebra $\\mathfrak{l}$. Moreover,\nwe also define a map $\\varphi _{s}$ that maps the Lie algebra $\\mathfrak{p}$\nof the pseudoautomorphism group to the loop tangent algebra. The kernel of\nthis map is precisely the Lie algebra $\\mathfrak{h}_{s}$ of the stabilizer\nof $s$ in the pseudoautomorphism group. In the case of unit octonions, we\nknow $\\mathfrak{p}\\cong \\mathfrak{so}\\left( 7\\right) \\cong \\Lambda ^{2}\\left( \\mathbb{R}^{7}\\right) ^{\\ast }$ and $\\mathfrak{l}=\\func{Im}\\mathbb{O}\\cong \\mathbb{R}^{7}$, so $\\varphi _{s}$ can be regarded as an element of $\\mathbb{R}^{7}\\otimes \\Lambda ^{2}\\mathbb{R}^{7},$ and this is (up to a constant\nfactor) a dualized version of the $G_{2}$-invariant $3$-form $\\varphi $, as\nused to project from $\\Lambda ^{2}\\left( \\mathbb{R}^{7}\\right) ^{\\ast }$ to \n$\\mathbb{R}^{7}.$ The kernel of this map is then the Lie algebra $\\mathfrak{g}_{2}.$ The $3$-form $\\varphi $ also defines the bracket on $\\func{Im}\\mathbb{O}$, so in this case, both $b_{s}$ and $\\varphi _{s}$ are determined by the\nsame object, but in general they have different roles. By considering the\naction of $U\\left( n\\right) $ on $U\\left( 1\\right) $ (i.e. the unit complex\nnumbers) and $Sp\\left( n\\right) Sp\\left( 1\\right) $ on $Sp\\left( 1\\right) $\n(i.e. the unit quaternions), we find that Hermitian and hyperHermitian\nstructures fit into the same framework. 
Namely, a complex Hermitian form, a\nquaternionic triple of Hermitian forms, and the $G_{2}$-invariant $3$-form\nhave the same origin as $2$-forms with values in imaginary complex numbers,\nquaternions, and octonions, respectively.\n\nIn Section \\ref{sectKilling} we define an analog of the Killing form on \n$\\mathfrak{l}$ and give conditions for it to be invariant under both the\naction of $\\mathfrak{p}$ and the bracket on $\\mathfrak{l}.$ In particular,\nusing the Killing form, we define the adjoint $\\varphi _{s}^{t}$ of $\\varphi\n_{s}$. This allows us to use the Lie bracket on $\\mathfrak{p}$ to define\nanother bracket on $\\mathfrak{l}.$ In the case of octonions, it is\nproportional to the standard bracket on $\\mathfrak{l},$ but in general it could\nbe a distinct object.\n\nIn Section \\ref{sectDarboux}, we consider maps from some smooth manifold $M$\nto a smooth loop. Given a fixed map $s$, we can then define the\ncorresponding products of loop-valued maps and correspondingly a bracket of \n$\\mathfrak{l}$-valued maps. As for maps to Lie groups, we define\nthe Darboux derivative \\cite{SharpeBook} of $s$, which is just $s^{\\ast\n}\\theta $, the pullback of the Maurer-Cartan form on $\\mathbb{L}.$ This now\nsatisfies a structural equation, which is just the pullback of the loop\nMaurer-Cartan equation, as derived in Section \\ref{sectStruct}, with respect\nto the bracket defined by $s$. For maps to Lie groups, there holds a\nnon-abelian \\textquotedblleft Fundamental Theorem of\nCalculus\\textquotedblright\\ \\cite[Theorem 7.14]{SharpeBook}, namely that if\na Lie algebra-valued $1$-form on $M$ satisfies the structural equation, then\nit is the Darboux derivative of some Lie group-valued function. Here, we\nprove an analog for $\\mathfrak{l}$-valued $1$-forms (Theorem \\ref{thmLoopCartan}). However, since in the non-associative case, the bracket in\nthe structural equation depends on $s,$ Theorem \\ref{thmLoopCartan} requires\nthat such a map already exists and some additional conditions are also\nneeded, so, as expected, it is not as powerful as for Lie groups. However, in\nthe case that the loop is associative, it does reduce to the theorem for Lie\ngroups.\n\nFinally, in Section \\ref{sectBundle}, we turn our attention to loop bundles\nover a smooth manifold $M$. In fact, since it is not a single bundle, it is\nbest to refer to a \\emph{loop structure} over a manifold. The key component\nis a $\\Psi $-principal bundle $\\mathcal{P}$, where $\\Psi $ is a group that acts\nvia pseudoautomorphisms on the loop $\\mathbb{L}.$ Then, several bundles\nassociated to $\\mathcal{P}$ are defined: two bundles $\\mathcal{Q}$ and \n$\\mathcal{\\mathring{Q}}$ with fibers diffeomorphic to $\\mathbb{L}$, but with\nthe bundle structure with respect to different actions of $\\Psi $; the\nvector bundle $\\mathcal{A}$ with fibers isomorphic to $\\mathfrak{l},$ as\nwell as some others. Crucially, a section $s$ of the bundle $\\mathcal{\\mathring{Q}}$ then defines a fiberwise product structure on sections of \n$\\mathcal{Q}$, a fiberwise bracket structure, and a map $\\varphi _{s}$ from\nsections of the adjoint bundle $\\mathfrak{p}_{\\mathcal{P}}$ to sections of \n$\\mathcal{A}.$ In the key example of a $G_{2}$-structure on a $7$-manifold $M$, the bundle $\\mathcal{P}$ is then the $Spin\\left( 7\\right) $-bundle that is\nthe lifting of the orthonormal frame bundle. 
The bundles $\\mathcal{Q}$ and \n$\\mathcal{\\mathring{Q}}$ are unit octonion bundles, similar to those defined in \n\\cite{GrigorianOctobundle}, but $\\mathcal{Q}$ transforms under $SO\\left(\n7\\right) ,$ and hence corresponds to the unit subbundle of $\\mathbb{R}\\oplus TM,$ while $\\mathcal{\\mathring{Q}}$ transforms under $Spin\\left(\n7\\right) $, and hence corresponds to the unit subbundle of the spinor\nbundle. The section $s$ then defines a global unit spinor, and hence\na reduction of the $Spin\\left( 7\\right) $ structure group to $G_{2}$, and\nthus a $G_{2}$-structure. In the complex and quaternionic examples,\nthe corresponding bundle $\\mathcal{P}$ then has $U\\left( n\\right) $ and \n$Sp\\left( n\\right) Sp\\left( 1\\right) $ structure group, respectively, and the\nsection $s$ defines a reduction to $SU\\left( n\\right) $ and $Sp\\left(\nn\\right) ,$ respectively. Thus, as noted in \\cite{LeungDivision}, the\noctonionic analog of the reduction from a K\\\"{a}hler structure to a Calabi-Yau\nstructure, and from quaternionic K\\\"{a}hler to HyperK\\\"{a}hler, is indeed the\nreduction from $Spin\\left( 7\\right) $ to $G_{2}.$\n\nUsing the equivalence between sections of bundles associated to $\\mathcal{P}$\nand corresponding equivariant maps, we generally work with equivariant maps.\nIndeed, in that case, $s:\\mathcal{P}\\longrightarrow \\mathbb{L}$ is an\nequivariant map, and given a connection $\\omega $ on $\\mathcal{P}$, we find\nthat the Darboux derivative of $s$ decomposes as \n\\begin{equation}\ns^{\\ast }\\theta =T^{\\left( s,\\omega \\right) }-\\hat{\\omega}^{\\left( s\\right) }\\text{,} \\label{sTom}\n\\end{equation}\nwhere $\\hat{\\omega}^{\\left( s\\right) }=\\varphi _{s}\\left( \\omega \\right) $\nand $T^{\\left( s,\\omega \\right) }$ is the \\emph{torsion of }$s$ \\emph{with\nrespect to the connection }$\\omega $, which is defined as the horizontal\npart of $s^{\\ast }\\theta .$ The quantity $T^{\\left( s,\\omega \\right) }$ is\ncalled the torsion because in the case of $G_{2}$-structures on a $7$-manifold, if we take $\\mathcal{P}$ to be the spin bundle and $\\omega $ the\nLevi-Civita connection for a fixed metric, then $T^{\\left( s,\\omega \\right)\n} $ is precisely (up to the chosen sign convention) the torsion of the \n$G_{2}$-structure defined by the section $s$. Moreover, vanishing of \n$T^{\\left( s,\\omega \\right) }$ implies a reduction of the holonomy group of \n$\\omega $. As shown in \\cite{GrigorianOctobundle}, the torsion of a $G_{2}$-structure may be considered as a $1$-form with values in the bundle of\nimaginary octonions. Indeed, in general, $T^{\\left( s,\\omega \\right) }$ is a\nbasic (i.e. horizontal and equivariant) $\\mathfrak{l}$-valued $1$-form on \n$\\mathcal{P}$, so it corresponds to an $\\mathcal{A}$-valued $1$-form on $M$.\nIt also enters expressions for covariant derivatives of products of sections\nof $\\mathcal{Q}$ and the bracket on $\\mathcal{A}.$
In Theorem \\ref{thmTNucl}, we give a\npartial converse under certain assumptions on $\\mathbb{L}.$\n\nIn Section \\ref{sectCurv}, we then also consider the projection of the\ncurvature $F$ of $\\omega $ to $\\mathfrak{l}.$ We define $\\hat{F}=\\varphi\n_{s}\\left( F\\right) $, which is then equal to the horizontal part of $d\\hat{\\omega},$ and show in Theorem \\ref{thmFTstruct} that $\\hat{F}$ and $T$ are\nrelated via a structural equation\n\\begin{equation}\n\\hat{F}=d^{\\mathcal{H}}T-\\frac{1}{2}\\left[ T,T\\right] ^{\\left( s\\right) },\n\\end{equation}\nwhere $\\left[ \\cdot ,\\cdot \\right] ^{\\left( s\\right) }$ is the bracket\ndefined by $s$. Again, such a relationship is recognizable from $G_{2}$-geometry, where the projection $\\pi _{7}\\func{Riem}$ of the Riemann\ncurvature to the $7$-dimensional representation of $G_{2}$ satisfies the\n\\textquotedblleft $G_{2}$ Bianchi identity\\textquotedblright\\ \\cite{GrigorianOctobundle,karigiannis-2007}. We also consider gauge\ntransformations. In this setting, we have two quantities: the connection\nand the section $s.$ We show that under a simultaneous gauge transformation\nof the pair $\\left( s,\\omega \\right) ,$ $\\hat{F}$ and $T$ transform\nequivariantly.\n\nFinally, in Section \\ref{sectVar}, we consider several functionals and the\ncorresponding critical points, at least under some assumptions on the loop \n$\\mathbb{L}.$ Indeed, if we consider the loop bundle structure over a $3$-dimensional manifold, then we can write down an analog of the Chern-Simons\nfunctional. The critical points over the space of connections, but with a\nfixed section $s$, are connections for which $\\hat{F}=0$, i.e. the curvature\nlies in $\\mathfrak{h}_{s}$ everywhere. If we moreover consider the critical\npoints over pairs $\\left( s,\\omega \\right) $, then we get an additional\ncondition on the torsion, namely that $\\left[ T,T,T\\right] ^{\\left( s\\right)\n}=0$, where $\\left[ \\cdot ,\\cdot ,\\cdot \\right] ^{\\left( s\\right) }$ is the\nassociator defined by $s$, and wedge products of $1$-forms are implied.\n\nAnother functional that we consider is the $L^{2}$-norm squared of the\ntorsion $\\int_{M}\\left\\vert T\\right\\vert ^{2}$. In this case, we fix the\nconnection, and consider critical points over the space of sections $s$, or\nequivalently, equivariant loop-valued maps from $\\mathcal{P}.$ In the $G_{2}$
A less ambiguous term would be something like a \\emph\nunital quasigroup }or \\emph{quasigroup with identity}, however this would be\nnonstandard terminology and also much longer than a loop. In general,\nnon-associative algebra requires a large number of definitions and concepts\nthat become unnecessary in the more standard associative setting. In this\nsection we go over some of the terminology and notation that we will be\nusing. The reader can also refer to \\cit\n{HofmannStrambach,KiechleKloops,NagyStrambachBook,SabininBook,SmithJDHQuasiReps}\nfor the various concepts, although, as far as the author knows, much of the\nnotation in this setting is not standardized.\n\n\\begin{definition}\nA \\emph{quasigroup }$\\mathbb{L}$ is a set together with the following\noperations $\\mathbb{L}\\times \\mathbb{L}\\longrightarrow \\mathbb{L}$\n\n\\begin{enumerate}\n\\item Product $\\left( p,q\\right) \\mapsto pq$\n\n\\item Right quotient $\\left( p,q\\right) \\mapsto p\\backslash q$\n\n\\item Left quotient $\\left( p,q\\right) \\mapsto q\\backslash p$,\n\\end{enumerate}\n\nthat satisfy the following properties\n\n\\begin{enumerate}\n\\item $\\left( p\\backslash q\\right) q=p$\n\n\\item $q\\left( q\\backslash p\\right) =p$\n\n\\item $\\faktor{pq}{q}=p$\n\n\\item $\\scalebox{-1}[1]{\\nicefrac{\\scalebox{-1}[1]{$pq$}}\n\\scalebox{-1}[1]{$p$}}} =q.$\n\\end{enumerate}\n\\end{definition}\n\nWe will interchangeably denote the product operation by $p\\cdot q.$ To avoid\nmultiple parentheses, at times we will use the convention $a\\cdot bc=a\\left(\nbc\\right) $ and $ab\/c=\\left( ab\\right) \/c$. If the same underlying set \n\\mathbb{L}$ is equipped with a different product operation $\\circ _{r}$(to\nbe defined later), then the corresponding quasigroup will be denoted by \n\\left( \\mathbb{L},\\circ _{r}\\right) $ and the corresponding quotient\noperation by $\\backslash _{r}$.\n\n\\begin{definition}\nLet $\\mathbb{L}$ be a quasigroup. The \\emph{right nucleus }$\\mathcal{N\n^{R}\\left( \\mathbb{L}\\right) $ \\emph{of }$\\mathbb{L}$ is the set of all \nr\\in \\mathbb{L},$ such that for any $p,q\\in \\mathbb{L}$, \n\\begin{equation}\npq\\cdot r=p\\cdot qr. 
Elements of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ satisfy several other useful properties.

\\begin{lemma}
\\label{LemAssoc} If $r\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$, then for any $p,q\\in \\mathbb{L}$,

\\begin{enumerate}
\\item $\\faktor{pr}{qr}=p/q$

\\item $p\\cdot q/r=\\faktor{pq}{r}$

\\item $p\\backslash \\left( qr\\right) =p\\backslash q\\cdot r.$
\\end{enumerate}
\\end{lemma}

\\begin{proof}
The first property follows from (\\ref{assoc}) using
\\begin{equation*}
p/q\\cdot qr=\\left( p/q\\cdot q\\right) r.
\\end{equation*}
The second property follows similarly using
\\begin{equation*}
p\\left( q/r\\cdot r\\right) =\\left( p\\cdot q/r\\right) r.
\\end{equation*}
The third property follows using
\\begin{equation*}
\\left( p\\cdot p\\backslash q\\right) r=p\\left( p\\backslash q\\cdot r\\right) .
\\end{equation*}
\\end{proof}

In group theory the only reasonable morphism between groups is a group homomorphism, however for quasigroups there is significantly more flexibility.

\\begin{definition}
Suppose $\\mathbb{L}_{1},\\mathbb{L}_{2}$ are quasigroups. Then a triple $\\left( \\alpha ,\\beta ,\\gamma \\right)$ of maps from $\\mathbb{L}_{1}$ to $\\mathbb{L}_{2}$ is a \\emph{homotopy} from $\\mathbb{L}_{1}$ to $\\mathbb{L}_{2}$ if for any $p,q\\in \\mathbb{L}_{1}$,
\\begin{equation}
\\alpha \\left( p\\right) \\beta \\left( q\\right) =\\gamma \\left( pq\\right) .  \\label{Qhom}
\\end{equation}
If $\\left( \\alpha ,\\alpha ,\\alpha \\right)$ is a homotopy, then $\\alpha$ is a \\emph{quasigroup homomorphism}. If each of the maps $\\alpha ,\\beta ,\\gamma$ is a bijection, then $\\left( \\alpha ,\\beta ,\\gamma \\right)$ is an \\emph{isotopy}. An isotopy from a quasigroup to itself is an \\emph{autotopy}. The set of all autotopies of a quasigroup $\\mathbb{L}$ is clearly a group under composition. If $\\left( \\alpha ,\\alpha ,\\alpha \\right)$ is an autotopy, then $\\alpha$ is an automorphism of $\\mathbb{L}$, and the group of automorphisms is denoted by $\\func{Aut}\\left( \\mathbb{L}\\right)$.
\\end{definition}

We will only be concerned with quasigroups that have an identity element, i.e. loops.

\\begin{definition}
A \\emph{loop} $\\mathbb{L}$ is a quasigroup that has a unique identity element $1\\in \\mathbb{L}$ such that for any $q\\in \\mathbb{L}$,
\\begin{equation}
1\\cdot q=q\\cdot 1=q.  \\label{idelem}
\\end{equation}
\\end{definition}

\\begin{definition}
Let $\\mathbb{L}$ be a loop. Then, for any $q\\in \\mathbb{L}$ define

\\begin{enumerate}
\\item The \\emph{right inverse} $q^{\\rho }=q\\backslash 1.$

\\item The \\emph{left inverse} $q^{\\lambda }=1/q.$

In particular, they satisfy
\\begin{equation}
qq^{\\rho }=q^{\\lambda }q=1.
\\end{equation}
\\end{enumerate}
\\end{definition}

For a general quasigroup, the nuclei may be empty, however if $\\mathbb{L}$ is a loop, the identity element $1$ associates with any other element, so the nuclei are non-empty.
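As a quick illustration of the nuclei, the following sketch (again an informal computation of ours) takes the direct product of the order-$5$ loop from the previous sketch with $\\mathbb{Z}_{2}$. Since the defining condition (\\ref{assoc}) is checked componentwise, the right nucleus of a direct product is the product of the right nuclei, so here the computation should return $\\left\\{ 0\\right\\} \\times \\mathbb{Z}_{2}$, and one can check directly that it is closed under multiplication, in line with the theorem that follows.
\\begin{verbatim}
from itertools import product

T = [[0, 1, 2, 3, 4],
     [1, 0, 3, 4, 2],
     [2, 3, 4, 0, 1],
     [3, 4, 1, 2, 0],
     [4, 2, 0, 1, 3]]
# Direct product loop L = T x Z_2; identity element is (0, 0).
L = list(product(range(5), range(2)))

def mul(p, q):
    return (T[p[0]][q[0]], (p[1] + q[1]) % 2)

# Right nucleus: all r with (pq)r = p(qr) for all p, q.
NR = [r for r in L
      if all(mul(mul(p, q), r) == mul(p, mul(q, r))
             for p in L for q in L)]
print("right nucleus:", NR)    # expect {0} x Z_2

# N^R(L) is closed under the product: it is a group.
assert all(mul(a, b) in NR for a in NR for b in NR)
\\end{verbatim}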
Moreover, it is easy to show that $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ (and similarly, $\\mathcal{N}^{L}\\left( \\mathbb{L}\\right)$ and $\\mathcal{N}^{M}\\left( \\mathbb{L}\\right)$) is a group.

\\begin{theorem}
Let $\\mathbb{L}$ be a loop, then the right nucleus $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ is a group.
\\end{theorem}

\\begin{proof}
Clearly, $1\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$. Also, suppose $a,b\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$. Then, for any $p,q\\in \\mathbb{L}$,
\\begin{eqnarray*}
pq\\cdot ab &=&\\left( pq\\cdot a\\right) b=\\left( p\\cdot qa\\right) b \\\\
&=&p\\left( qa\\cdot b\\right) =p\\left( q\\cdot ab\\right)
\\end{eqnarray*}
and hence, $ab\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$. Moreover, it is clear that the product on $\\mathbb{L}$ restricted to $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ is associative.

If $a\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$, then
\\begin{equation*}
a=a\\cdot a^{\\lambda }a=aa^{\\lambda }\\cdot a,
\\end{equation*}
so that $aa^{\\lambda }=1$ by cancellation, and thus $a^{\\lambda }=a^{\\rho }$, so $a$ has a well-defined inverse $a^{-1}=a^{\\lambda }=a^{\\rho }$. Moreover, since for any $p\\in \\mathbb{L}$, $\\left( pa^{-1}\\right) a=p$, we see that $pa^{-1}=p/a$. Now, for $p,q\\in \\mathbb{L}$ we have
\\begin{equation*}
\\left( p\\cdot qa^{-1}\\right) a=p\\left( qa^{-1}\\cdot a\\right) =pq
\\end{equation*}
and hence
\\begin{equation*}
p\\cdot qa^{-1}=\\left( pq\\right) /a=pq\\cdot a^{-1}.
\\end{equation*}
Thus, $a^{-1}\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) .$
\\end{proof}

Loops may be endowed with additional properties that bestow various weaker forms of associativity and inverse properties.

\\begin{enumerate}
\\item \\emph{Two-sided inverse}: for any $p\\in \\mathbb{L}$, $p^{\\rho }=p^{\\lambda }$. Then we can define a unique two-sided inverse $p^{-1}$.

\\item \\emph{Right inverse property}: for any $p,q\\in \\mathbb{L}$, $pq\\cdot q^{\\rho }=p$. In particular, this implies that the inverses are two-sided, so we can set $p^{-1}=p^{\\rho }=p^{\\lambda }$, and moreover $p/q=pq^{-1}$. The \\emph{left} inverse property is defined similarly. A loop with both the left and right inverse properties is said to be an \\emph{inverse loop}.

\\item \\emph{Power-associativity} (or \\emph{monoassociativity}): any element $p\\in \\mathbb{L}$ generates a subgroup of $\\mathbb{L}$. In particular, this implies that $\\mathbb{L}$ has two-sided inverses. Power-associativity allows us to unambiguously define integer powers $p^{n}$ of elements. Note that some authors use monoassociativity as a more restrictive property, namely only that $pp\\cdot p=p\\cdot pp$.

\\item \\emph{(Left)-alternative}: for any $p,q\\in \\mathbb{L}$, $p\\cdot pq=pp\\cdot q$. Similarly we can define the right-alternative property (i.e. $q\\cdot pp=qp\\cdot p$). In each of these cases, $\\mathbb{L}$ has two-sided inverses. If $\\mathbb{L}$ is both left-alternative and right-alternative, then it is said to be \\emph{alternative}. A loop with a similar property that $p\\cdot qp=pq\\cdot p$ is known as a \\emph{flexible loop}.

\\item \\emph{Diassociative}: any two elements $p,q\\in \\mathbb{L}$ generate a subgroup of $\\mathbb{L}$.
Clearly, a diassociative loop has the inverse property, is power-associative, alternative, and flexible.

\\item \\emph{(Left) Bol loop}: for any $p,q,r\\in \\mathbb{L}$,
\\begin{equation}
p\\left( q\\cdot pr\\right) =\\left( p\\cdot qp\\right) r.  \\label{leftBol}
\\end{equation}
It is easy to see that a left Bol loop has the left inverse property and is left-alternative and flexible \\cite{RobinsonBol}. It is also power-associative. Similarly, define a right Bol loop: for any $p,q,r\\in \\mathbb{L}$,
\\begin{equation}
\\left( pq\\cdot r\\right) q=p\\left( qr\\cdot q\\right) .  \\label{rightBol}
\\end{equation}

\\item \\emph{Moufang loop}: a loop is a Moufang loop if it satisfies both the left and right Bol identities. In particular, Moufang loops are diassociative.

\\item \\emph{Group}: clearly any associative loop is a group.
\\end{enumerate}

\\begin{example}
The best-known example of a non-associative loop is the Moufang loop of unit octonions.
\\end{example}

\\subsection{Pseudoautomorphisms}

Suppose now $\\mathbb{L}$ is a loop and $\\left( \\alpha ,\\beta ,\\gamma \\right)$ is an autotopy of $\\mathbb{L}$. Let $B=\\alpha \\left( 1\\right)$, $A=\\beta \\left( 1\\right)$, $C=\\gamma \\left( 1\\right)$. It is clear that $BA=C$. Moreover, from (\\ref{Qhom}) we see that
\\begin{eqnarray*}
\\alpha \\left( p\\right) &=&\\gamma \\left( p\\right) /A \\\\
\\beta \\left( p\\right) &=&B\\backslash \\gamma \\left( p\\right) .
\\end{eqnarray*}
We can rewrite (\\ref{Qhom}) as
\\begin{equation*}
\\alpha \\left( p\\right) \\cdot \\left( B\\backslash \\left( \\alpha \\left( q\\right) A\\right) \\right) =\\alpha \\left( pq\\right) A.
\\end{equation*}
If $B=1$, then we obtain a \\emph{right pseudoautomorphism} $\\alpha$ \\emph{of} $\\mathbb{L}$ \\emph{with companion} $A$, which we'll denote by the pair $\\left( \\alpha ,A\\right)$, and which satisfies
\\begin{equation}
\\alpha \\left( p\\right) \\cdot \\alpha \\left( q\\right) A=\\alpha \\left( pq\\right) A.  \\label{PsAutoPair}
\\end{equation}
We have the following useful relations for quotients:
\\begin{subequations}
\\label{PsAutquot}
\\begin{eqnarray}
\\alpha \\left( q\\backslash p\\right) A &=&\\alpha \\left( q\\right) \\backslash \\left( \\alpha \\left( p\\right) A\\right) \\\\
\\alpha \\left( p/q\\right) \\cdot \\alpha \\left( q\\right) A &=&\\alpha \\left( p\\right) A.
\\end{eqnarray}
\\end{subequations}
There are several equivalent ways of characterizing right pseudoautomorphisms.

\\begin{theorem}
Let $\\mathbb{L}$ be a loop and suppose $\\alpha :\\mathbb{L}\\longrightarrow \\mathbb{L}$. Also, let $A\\in \\mathbb{L}$ and $\\gamma =R_{A}\\circ \\alpha$. Then the following are equivalent:

\\begin{enumerate}
\\item $\\left( \\alpha ,A\\right)$ is a \\emph{right pseudoautomorphism of} $\\mathbb{L}$ \\emph{with companion} $A$.

\\item $\\left( \\alpha ,\\beta ,\\gamma \\right)$ is an autotopy of $\\mathbb{L}$ with $\\alpha \\left( 1\\right) =1$ and $\\beta \\left( 1\\right) =\\gamma \\left( 1\\right) =A$.

\\item $\\gamma \\left( 1\\right) =A$ and $\\gamma$ satisfies
\\begin{equation}
\\gamma \\left( p\\gamma ^{-1}\\left( 1\\right) \\right) \\gamma \\left( q\\right) =\\gamma \\left( pq\\right) .  \\label{PsAutosingle}
\\end{equation}
\\end{enumerate}
\\end{theorem}
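The equivalence of the pair form (\\ref{PsAutoPair}) and the single-map form (\\ref{PsAutosingle}) can be tested by brute force on a finite loop. The sketch below (informal; the function names are ours, and the Cayley table is the order-$5$ example used earlier, whose identity element is $0$ rather than $1$) enumerates all right pseudoautomorphism pairs and cross-checks (\\ref{PsAutosingle}) for each of them.
\\begin{verbatim}
from itertools import permutations

T = [[0, 1, 2, 3, 4],
     [1, 0, 3, 4, 2],
     [2, 3, 4, 0, 1],
     [3, 4, 1, 2, 0],
     [4, 2, 0, 1, 3]]
n = len(T)
mul = lambda p, q: T[p][q]

def is_right_pseudo(alpha, A):
    # pair property: alpha(p)(alpha(q)A) = alpha(pq)A for all p, q
    return all(mul(alpha[p], mul(alpha[q], A)) == mul(alpha[mul(p, q)], A)
               for p in range(n) for q in range(n))

pairs = [(a, A) for a in permutations(range(n)) for A in range(n)
         if is_right_pseudo(a, A)]
print(len(pairs), "pairs found")   # contains at least (id, 0)

# single-map form: gamma(p gamma^{-1}(e)) gamma(q) = gamma(pq),
# where gamma = R_A . alpha and e = 0 is the identity here
for a, A in pairs:
    gamma = [mul(a[p], A) for p in range(n)]
    g1 = gamma.index(0)            # gamma^{-1}(identity)
    assert all(mul(gamma[mul(p, g1)], gamma[q]) == gamma[mul(p, q)]
               for p in range(n) for q in range(n))
\\end{verbatim}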
\\begin{remark}
Similarly, if $A=1$, then we can rewrite (\\ref{Qhom}) as
\\begin{equation*}
B\\beta \\left( p\\right) \\cdot \\beta \\left( q\\right) =B\\beta \\left( pq\\right)
\\end{equation*}
and in this case, $\\beta$ is a \\emph{left pseudoautomorphism} with companion $B$. Finally, suppose $C=1$, so that then $A=B^{\\rho }$, and we can rewrite (\\ref{Qhom}) as
\\begin{equation*}
\\gamma \\left( p\\right) /B^{\\rho }\\cdot B\\backslash \\gamma \\left( q\\right) =\\gamma \\left( pq\\right) ,
\\end{equation*}
so that in this case, $\\gamma$ is a \\emph{middle pseudoautomorphism} with companion $B$.
\\end{remark}

\\begin{example}
In a Moufang loop, consider the map $\\func{Ad}_{q}$, given by $p\\longmapsto qpq^{-1}$. Note that this can be written unambiguously due to diassociativity. Then, this is a right pseudoautomorphism with companion $q^{3}$ \\cite[Lemma 1.2]{NagyStrambachBook}. Indeed, using diassociativity for $\\left\\{ q,xy\\right\\}$, we have
\\begin{equation*}
q\\left( xy\\right) q^{-1}\\cdot q^{3}=q\\left( xy\\right) q^{2}.
\\end{equation*}
On the other hand,
\\begin{eqnarray*}
qxq^{-1}\\cdot qyq^{2} &=&q\\left( xq^{-1}\\right) \\cdot \\left( qyq\\right) q \\\\
&=&\\left( q\\left( xq^{-1}\\cdot qyq\\right) \\right) q \\\\
&=&\\left( q\\left( xy\\cdot q\\right) \\right) q \\\\
&=&q\\left( xy\\right) q^{2},
\\end{eqnarray*}
where we have used appropriate Moufang identities. Hence, indeed,
\\begin{equation*}
q\\left( xy\\right) q^{-1}\\cdot q^{3}=\\left( qxq^{-1}\\right) \\left( qyq^{-1}\\cdot q^{3}\\right) .
\\end{equation*}
In general, the adjoint map on a loop is \\emph{not} a pseudoautomorphism or a loop homomorphism. For each $q\\in \\mathbb{L}$, $\\func{Ad}_{q}$ is just a bijection that preserves $1\\in \\mathbb{L}$. However, as we see above, it is a pseudoautomorphism if the loop is Moufang. Keeping the same terminology as for groups, we'll say that $\\func{Ad}$ defines an adjoint action of $\\mathbb{L}$ on itself, although for a non-associative loop, this is not an action in the usual sense of a group action.
\\end{example}
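The octonion instance of this example is easy to probe numerically. Below is a minimal sketch (ours; it assumes one of the standard Cayley--Dickson sign conventions, and any equivalent convention works) that verifies $\\func{Ad}_{q}\\left( xy\\right) \\cdot q^{3}=\\func{Ad}_{q}\\left( x\\right) \\left( \\func{Ad}_{q}\\left( y\\right) \\cdot q^{3}\\right)$ for random unit octonions, and also shows that $\\func{Ad}_{q}$ alone generically fails to be an automorphism. The helper functions here are reused in later sketches.
\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def qmul(a, b):
    # quaternion product of a = [w, x, y, z] and b
    w, x, y, z = a; W, X, Y, Z = b
    return np.array([w*W - x*X - y*Y - z*Z, w*X + x*W + y*Z - z*Y,
                     w*Y - x*Z + y*W + z*X, w*Z + x*Y - y*X + z*W])

qconj = lambda a: a * np.array([1.0, -1.0, -1.0, -1.0])

def omul(p, q):
    # octonions as pairs of quaternions (Cayley-Dickson doubling):
    # (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))
    a, b, c, d = p[:4], p[4:], q[:4], q[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

oinv = lambda p: np.concatenate([qconj(p[:4]), -p[4:]]) / (p @ p)
runit = lambda: (lambda v: v / np.linalg.norm(v))(rng.normal(size=8))

q, x, y = runit(), runit(), runit()
Ad = lambda q, x: omul(omul(q, x), oinv(q))  # unambiguous: alternativity
q3 = omul(omul(q, q), q)

lhs = omul(Ad(q, omul(x, y)), q3)            # Ad_q(xy) q^3
rhs = omul(Ad(q, x), omul(Ad(q, y), q3))     # Ad_q(x) (Ad_q(y) q^3)
assert np.allclose(lhs, rhs)

# Ad_q alone is generally NOT an automorphism; the deviation is nonzero:
print(np.linalg.norm(Ad(q, omul(x, y)) - omul(Ad(q, x), Ad(q, y))))
\\end{verbatim}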
We can easily see that the right pseudoautomorphisms of $\\mathbb{L}$ form a group under composition. Denote this group by $\\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$. Clearly, $\\func{Aut}\\left( \\mathbb{L}\\right) \\subset \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$. Similarly for left and middle pseudoautomorphisms. More precisely, $\\alpha \\in \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$ if there exists $A\\in \\mathbb{L}$ such that (\\ref{PsAutoPair}) holds. Here we are not fixing the companion. On the other hand, consider the set $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ of all pairs $\\left( \\alpha ,A\\right)$ of \\emph{right pseudoautomorphisms with fixed companions}. This then also forms a group.

\\begin{lemma}
The set $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ of all pairs $\\left( \\alpha ,A\\right)$, where $\\alpha \\in \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$ and $A\\in \\mathbb{L}$ is its companion, is a group with identity element $\\left( \\func{id},1\\right)$ and the following group operations:
\\begin{subequations}
\\begin{eqnarray}
\\text{product:}\\  &&\\left( \\alpha _{1},A_{1}\\right) \\left( \\alpha _{2},A_{2}\\right) =\\left( \\alpha _{1}\\circ \\alpha _{2},\\alpha _{1}\\left( A_{2}\\right) A_{1}\\right)  \\label{PsAutprod} \\\\
\\text{inverse:}\\  &&\\left( \\alpha ,A\\right) ^{-1}=\\left( \\alpha ^{-1},\\alpha ^{-1}\\left( A^{\\lambda }\\right) \\right) =\\left( \\alpha ^{-1},\\left( \\alpha ^{-1}\\left( A\\right) \\right) ^{\\rho }\\right) .  \\label{PsAutInv}
\\end{eqnarray}
\\end{subequations}
\\end{lemma}

\\begin{proof}
Indeed, it is easy to see that $\\alpha _{1}\\left( A_{2}\\right) A_{1}$ is a companion of $\\alpha _{1}\\circ \\alpha _{2}$, that (\\ref{PsAutprod}) is associative, and that $\\left( \\func{id},1\\right)$ is the identity element with respect to it. Also, it is easy to see that
\\begin{equation*}
\\left( \\alpha ,A\\right) \\left( \\alpha ^{-1},\\alpha ^{-1}\\left( A^{\\lambda }\\right) \\right) =\\left( \\func{id},1\\right) .
\\end{equation*}
On the other hand, setting $B=\\alpha ^{-1}\\left( A^{\\lambda }\\right)$, we have
\\begin{eqnarray*}
B &=&\\alpha ^{-1}\\left( 1\\right) B=\\alpha ^{-1}\\left( A^{\\lambda }A\\right) B \\\\
&=&\\alpha ^{-1}\\left( A^{\\lambda }\\right) \\cdot \\alpha ^{-1}\\left( A\\right) B \\\\
&=&B\\cdot \\alpha ^{-1}\\left( A\\right) B.
\\end{eqnarray*}
Cancelling $B$ on the left, we see that $\\alpha ^{-1}\\left( A\\right) B=1$, and hence $B=\\left( \\alpha ^{-1}\\left( A\\right) \\right) ^{\\rho }.$
\\end{proof}

Let $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ be the set of elements of $\\mathbb{L}$ that are a companion for a right pseudoautomorphism. Then, (\\ref{PsAutprod}) shows that there is a left action of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ on $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ given by
\\begin{subequations}
\\label{PsiLeftaction}
\\begin{eqnarray}
\\Psi ^{R}\\left( \\mathbb{L}\\right) \\times \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) &\\longrightarrow &\\mathcal{C}^{R}\\left( \\mathbb{L}\\right) \\\\
\\left( \\left( \\alpha ,A\\right) ,B\\right) &\\mapsto &\\left( \\alpha ,A\\right) B=\\alpha \\left( B\\right) A.
\\end{eqnarray}
\\end{subequations}
This action is transitive, because if $A,B\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$, then there exist $\\alpha ,\\beta \\in \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$ such that $\\left( \\alpha ,A\\right) ,\\left( \\beta ,B\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$, and hence $\\left( \\left( \\beta ,B\\right) \\left( \\alpha ,A\\right) ^{-1}\\right) A=B$. Similarly, $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ also acts on all of $\\mathbb{L}$. Let $h=\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$, then for any $p\\in \\mathbb{L}$, $h\\left( p\\right) =\\alpha \\left( p\\right) A$. This is in general a non-transitive, but faithful, action (assuming $\\mathbb{L}$ is non-trivial).
Using this, the definition (\\ref{PsAutoPair}) can be rewritten as
\\begin{equation}
h\\left( pq\\right) =\\alpha \\left( p\\right) h\\left( q\\right)  \\label{PsAutProd2}
\\end{equation}
and hence the quotient relations (\\ref{PsAutquot}) may be rewritten as
\\begin{subequations}
\\label{PsAutquot2}
\\begin{eqnarray}
h\\left( q\\backslash p\\right) &=&\\alpha \\left( q\\right) \\backslash h\\left( p\\right)  \\label{PsAutquot2b} \\\\
\\alpha \\left( p/q\\right) &=&h\\left( p\\right) /h\\left( q\\right) .  \\label{PsAutquot2a}
\\end{eqnarray}
\\end{subequations}
If $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ acts transitively on $\\mathbb{L}$, then $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right) \\cong \\mathbb{L}$, since every element of $\\mathbb{L}$ will be a companion for some right pseudoautomorphism. In that case, $\\mathbb{L}$ is known as a (\\emph{right}) $G$\\emph{-loop}. Note that usually a loop is known as a $G$-loop if every element of $\\mathbb{L}$ is a companion for a right pseudoautomorphism and for a left pseudoautomorphism \\cite{KunenGloops}. However, in this paper we will only be concerned with right pseudoautomorphisms, so for brevity we will say $\\mathbb{L}$ is a $G$-loop if $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ acts transitively on it.

There is another action of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ on $\\mathbb{L}$ - the action by the pseudoautomorphism alone. This is a non-faithful action of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$, but corresponds to a faithful action of $\\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$. Namely, let $h=\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$, then $h$ acts on $p\\in \\mathbb{L}$ by $p\\mapsto \\alpha \\left( p\\right)$. To distinguish these two actions, we make the following definitions.

\\begin{definition}
Let $\\mathbb{L}$ be a loop and let $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ be the group of right pseudoautomorphism pairs. $\\mathbb{L}$ admits two left actions of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ on itself. Let $h=\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$ and $p\\in \\mathbb{L}$.

\\begin{enumerate}
\\item The \\emph{full} action is given by $\\left( h,p\\right) \\mapsto h\\left( p\\right) =\\alpha \\left( p\\right) A$. The set $\\mathbb{L}$ together with this action of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ will be denoted by $\\mathbb{\\mathring{L}}$.

\\item The \\emph{partial} action, given by $\\left( h,p\\right) \\mapsto h^{\\prime }\\left( p\\right) =\\alpha \\left( p\\right)$. The set $\\mathbb{L}$ together with this action of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ will be denoted by $\\mathbb{L}$ again.
\\end{enumerate}
\\end{definition}

\\begin{remark}
From (\\ref{PsAutProd2}), these definitions suggest that the loop product on $\\mathbb{L}$ can be regarded as a map $\\cdot :\\mathbb{L}\\times \\mathbb{\\mathring{L}}\\longrightarrow \\mathbb{\\mathring{L}}$. This bears some similarity to the Clifford product structure on spinors, however without the linear structure, but instead with the constraint that $\\mathbb{L}$ and $\\mathbb{\\mathring{L}}$ are identical as sets. This however allows us to define left and right division.
\\end{remark}
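The compatibility of the group law (\\ref{PsAutprod}) with the full action can again be spot-checked numerically on the unit octonions, using the pairs $\\left( \\func{Ad}_{p},p^{3}\\right)$ from the Moufang example above. The following sketch (informal; helper functions as in the earlier octonion sketch) verifies that $\\left( h_{1}h_{2}\\right) \\left( x\\right) =h_{1}\\left( h_{2}\\left( x\\right) \\right)$, i.e. that the full action is indeed a left action.
\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def qmul(a, b):
    w, x, y, z = a; W, X, Y, Z = b
    return np.array([w*W - x*X - y*Y - z*Z, w*X + x*W + y*Z - z*Y,
                     w*Y - x*Z + y*W + z*X, w*Z + x*Y - y*X + z*W])

qconj = lambda a: a * np.array([1.0, -1.0, -1.0, -1.0])

def omul(p, q):
    a, b, c, d = p[:4], p[4:], q[:4], q[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

oinv = lambda p: np.concatenate([qconj(p[:4]), -p[4:]]) / (p @ p)
runit = lambda: (lambda v: v / np.linalg.norm(v))(rng.normal(size=8))

def pair(p):
    # the pair h = (Ad_p, p^3) in Psi^R(UO)
    alpha = lambda x, p=p: omul(omul(p, x), oinv(p))
    return (alpha, omul(omul(p, p), p))

def psi_mul(h1, h2):
    # group law: (a1, A1)(a2, A2) = (a1 . a2, a1(A2) A1)
    a1, A1 = h1; a2, A2 = h2
    return (lambda x: a1(a2(x)), omul(a1(A2), A1))

def full(h, x):
    # full action: x -> alpha(x) A
    a, A = h
    return omul(a(x), A)

h1, h2, x = pair(runit()), pair(runit()), runit()
assert np.allclose(full(psi_mul(h1, h2), x), full(h1, full(h2, x)))
print("the full action is a left action of Psi^R")
\\end{verbatim}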
Now let us consider several relationships between the different groups associated to $\\mathbb{L}$. First of all, define the following maps:
\\begin{eqnarray}
\\iota _{1} &:&\\func{Aut}\\left( \\mathbb{L}\\right) \\hookrightarrow \\Psi ^{R}\\left( \\mathbb{L}\\right)  \\label{i1map} \\\\
\\gamma &\\mapsto &\\left( \\gamma ,1\\right)  \\notag
\\end{eqnarray}
and
\\begin{eqnarray}
\\iota _{2} &:&\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\hookrightarrow \\Psi ^{R}\\left( \\mathbb{L}\\right)  \\notag \\\\
C &\\mapsto &\\left( \\func{id},C\\right) .  \\label{i2map}
\\end{eqnarray}
The map $\\iota _{1}$ is clearly injective and is a group homomorphism, so $\\iota _{1}\\left( \\func{Aut}\\left( \\mathbb{L}\\right) \\right)$ is a subgroup of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$. On the other hand, if $A,B\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$, then in $\\Psi ^{R}\\left( \\mathbb{L}\\right)$, $\\left( \\func{id},A\\right) \\left( \\func{id},B\\right) =\\left( \\func{id},BA\\right)$, so $\\iota _{2}$ is an antihomomorphism from $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ to $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ and thus a homomorphism from the opposite group $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}$. So, $\\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)$ is a subgroup of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ that is isomorphic to $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}$.

Using (\\ref{i1map}), let us define a right action of $\\func{Aut}\\left( \\mathbb{L}\\right)$ on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$. Given $\\gamma \\in \\func{Aut}\\left( \\mathbb{L}\\right)$ and $\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$, we define
\\begin{equation}
\\left( \\alpha ,A\\right) \\cdot \\gamma =\\left( \\alpha ,A\\right) \\iota _{1}\\left( \\gamma \\right) =\\left( \\alpha \\circ \\gamma ,A\\right) .  \\label{AutRAct}
\\end{equation}
Similarly, (\\ref{i2map}) allows us to define a left action of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}$, and hence a right action of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$, on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$:
\\begin{equation}
C\\cdot \\left( \\alpha ,A\\right) =\\iota _{2}\\left( C\\right) \\left( \\alpha ,A\\right) =\\left( \\alpha ,AC\\right) .  \\label{NLAct}
\\end{equation}
The actions (\\ref{AutRAct}) and (\\ref{NLAct}) commute, so we can combine them into a right action of $\\func{Aut}\\left( \\mathbb{L}\\right) \\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$. Indeed, given $\\gamma \\in \\func{Aut}\\left( \\mathbb{L}\\right)$ and $C\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$,
\\begin{equation}
\\left( \\alpha ,A\\right) \\cdot \\left( \\gamma ,C\\right) =\\iota _{2}\\left( C\\right) \\left( \\alpha ,A\\right) \\iota _{1}\\left( \\gamma \\right) =\\left( \\alpha \\circ \\gamma ,AC\\right) .  \\label{AutNAct}
\\end{equation}
\\begin{remark}
Since any element of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ is a right companion for any automorphism, we can also define the semi-direct product subgroup $\\iota _{1}\\left( \\func{Aut}\\left( \\mathbb{L}\\right) \\right) \\ltimes \\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right) \\subset \\Psi ^{R}\\left( \\mathbb{L}\\right)$. Suppose $\\beta ,\\gamma \\in \\func{Aut}\\left( \\mathbb{L}\\right)$ and $B,C\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$, then in this semi-direct product,
\\begin{equation*}
\\left( \\beta ,B\\right) \\left( \\gamma ,C\\right) =\\left( \\beta \\circ \\gamma ,\\beta \\left( C\\right) B\\right) .
\\end{equation*}
\\end{remark}

\\begin{lemma}
Given the actions of $\\func{Aut}\\left( \\mathbb{L}\\right)$ and $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ as in (\\ref{AutRAct}) and (\\ref{NLAct}), respectively, we have the following properties.

\\begin{enumerate}
\\item $\\faktor{\\Psi ^{R}\\left( \\mathbb{L}\\right)}{\\func{Aut}\\left( \\mathbb{L}\\right)} \\cong \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ as $\\Psi ^{R}\\left( \\mathbb{L}\\right)$-sets.

\\item The image $\\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)$ is a normal subgroup of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$, and hence
\\begin{equation*}
\\faktor{\\Psi ^{R}\\left( \\mathbb{L}\\right)}{\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)} \\cong \\func{PsAut}^{R}\\left( \\mathbb{L}\\right) .
\\end{equation*}

\\item Moreover,
\\begin{equation*}
\\faktor{\\Psi ^{R}\\left( \\mathbb{L}\\right)}{\\func{Aut}\\left( \\mathbb{L}\\right) \\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)} \\cong \\faktor{\\func{PsAut}^{R}\\left( \\mathbb{L}\\right)}{\\func{Aut}\\left( \\mathbb{L}\\right)} \\cong \\faktor{\\mathcal{C}^{R}\\left( \\mathbb{L}\\right)}{\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)},
\\end{equation*}
where the equivalence is as $\\func{Aut}\\left( \\mathbb{L}\\right) \\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$-sets.
\\end{enumerate}
\\end{lemma}

\\begin{proof}
Suppose $\\mathbb{L}$ is a loop.

\\begin{enumerate}
\\item Consider the projection on the second component, $\\func{prj}_{2}:\\Psi ^{R}\\left( \\mathbb{L}\\right) \\longrightarrow \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$, under which $\\left( \\alpha ,A\\right) \\mapsto A$. Both $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ and $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ are left $\\Psi ^{R}\\left( \\mathbb{L}\\right)$-sets, since both admit a left $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ action - $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ acts on itself by left multiplication and acts on $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ via the action (\\ref{PsiLeftaction}). Hence, $\\func{prj}_{2}$ is a $\\Psi ^{R}\\left( \\mathbb{L}\\right)$-equivariant map (i.e. a $G$-set homomorphism). On the other hand, given the action (\\ref{AutRAct}) of $\\func{Aut}\\left( \\mathbb{L}\\right)$ on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$, we easily see that two pseudoautomorphisms have the same companion if and only if they lie in the same orbit of $\\func{Aut}\\left( \\mathbb{L}\\right)$.
Thus, $\\func{prj}_{2}$ descends to a $\\Psi ^{R}\\left( \\mathbb{L}\\right)$-equivariant bijection $\\Psi ^{R}\\left( \\mathbb{L}\\right) /\\func{Aut}\\left( \\mathbb{L}\\right) \\longrightarrow \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$, so that $\\Psi ^{R}\\left( \\mathbb{L}\\right) /\\func{Aut}\\left( \\mathbb{L}\\right) \\cong \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ as $\\Psi ^{R}\\left( \\mathbb{L}\\right)$-sets.

\\item It is clear that $C\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$ is a right companion of the identity map $\\func{id}$ if and only if $C\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$. Now, let $\\nu =\\left( \\func{id},C\\right) \\in \\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)$ and $g=\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$. Then,
\\begin{equation}
g\\nu g^{-1}=\\left( \\alpha ,A\\right) \\left( \\func{id},C\\right) \\left( \\alpha ^{-1},\\alpha ^{-1}\\left( A^{\\lambda }\\right) \\right) =\\left( \\func{id},A^{\\lambda }\\cdot \\alpha \\left( C\\right) A\\right) .  \\label{PsiAdjN}
\\end{equation}
In particular, this shows that $g\\nu g^{-1}\\in \\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)$, since $A^{\\lambda }\\cdot \\alpha \\left( C\\right) A$ is then a right companion of $\\func{id}$. Thus indeed, $\\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)$ is a normal subgroup of $\\Psi ^{R}\\left( \\mathbb{L}\\right)$. Now consider the projection on the first component, $\\func{prj}_{1}:\\Psi ^{R}\\left( \\mathbb{L}\\right) \\longrightarrow \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$, under which $\\left( \\alpha ,A\\right) \\mapsto \\alpha$. This is clearly a group homomorphism with kernel $\\iota _{2}\\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)$. Thus, $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}\\backslash \\Psi ^{R}\\left( \\mathbb{L}\\right) \\cong \\Psi ^{R}\\left( \\mathbb{L}\\right) /\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\cong \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$.

\\item Since the actions of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ and $\\func{Aut}\\left( \\mathbb{L}\\right)$ on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ commute, the action of $\\func{Aut}\\left( \\mathbb{L}\\right)$ descends to $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}\\backslash \\Psi ^{R}\\left( \\mathbb{L}\\right) \\cong \\func{PsAut}^{R}\\left( \\mathbb{L}\\right)$ and the action of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}$ descends to $\\Psi ^{R}\\left( \\mathbb{L}\\right) /\\func{Aut}\\left( \\mathbb{L}\\right) \\cong \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$. Since the left action of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}$ on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ corresponds to an action by right multiplication on $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$, we find that there is a bijection $\\func{PsAut}^{R}\\left( \\mathbb{L}\\right) /\\func{Aut}\\left( \\mathbb{L}\\right) \\longrightarrow \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) /\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$.

Suppose $\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$. Let $\\left[ \\alpha \\right] _{\\func{Aut}\\left( \\mathbb{L}\\right) }\\in \\faktor{\\func{PsAut}^{R}\\left( \\mathbb{L}\\right)}{\\func{Aut}\\left( \\mathbb{L}\\right)}$ be the orbit of $\\alpha$ under the action of
$\\func{Aut}\\left( \\mathbb{L}\\right)$, and let $\\left[ A\\right] _{\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) }\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) /\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ be the orbit of $A$ under the action of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$. Then the bijection is given by $\\left[ \\alpha \\right] _{\\func{Aut}\\left( \\mathbb{L}\\right) }\\mapsto \\left[ A\\right] _{\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) }$. Moreover, each of these orbits also corresponds to the orbit of $\\left( \\alpha ,A\\right)$ under the right action of $\\func{Aut}\\left( \\mathbb{L}\\right) \\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ on $\\Psi ^{R}\\left( \\mathbb{L}\\right)$. These quotients preserve the actions of $\\func{Aut}\\left( \\mathbb{L}\\right) \\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$ on the corresponding sets, and thus these coset spaces are equivalent as $\\func{Aut}\\left( \\mathbb{L}\\right) \\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$-sets.
\\end{enumerate}
\\end{proof}

The above relationships between the different groups are summarized in Figure \\ref{tikGroup}.

\\begin{figure}[tbp]
\\begin{tikzcd}
\\func{Aut}(\\mathbb{L}) \\arrow[r,hookrightarrow,bend left=20,"{(\\cdot,1)}"] \\arrow[d,hook] & \\Psi ^{R}(\\mathbb{L}) \\arrow[dl,swap,"\\func{prj}_1"] \\arrow[dr,"\\func{prj}_2"] & \\mathcal{N}^{R}(\\mathbb{L})^{\\func{op}} \\arrow[d,hook'] \\arrow[l,hook',bend right,swap,"{(\\func{id},\\cdot)}"] \\\\
\\func{PsAut}^{R}(\\mathbb{L}) \\cong \\mathcal{N}^{R}(\\mathbb{L})^{\\func{op}}\\backslash \\Psi ^{R}(\\mathbb{L}) \\arrow[d] & & \\faktor{\\Psi ^{R}(\\mathbb{L})}{\\func{Aut}(\\mathbb{L})} \\cong \\mathcal{C}^{R}(\\mathbb{L}) \\arrow[d] \\\\
\\faktor{\\func{PsAut}^{R}(\\mathbb{L})}{\\func{Aut}(\\mathbb{L})} \\arrow[rr,bend right=15,"\\cong"] & & \\faktor{\\mathcal{C}^{R}(\\mathbb{L})}{\\mathcal{N}^{R}(\\mathbb{L})}
\\end{tikzcd}
\\caption{Groups related to the loop $\\mathbb{L}$}
\\label{tikGroup}
\\end{figure}

\\begin{example}
\\label{ExPsQuat}Suppose $\\mathbb{L}=U\\mathbb{H}\\cong S^{3}$ - the group of unit quaternions. Then, since this is associative, $\\mathcal{N}^{R}\\left( U\\mathbb{H}\\right) =U\\mathbb{H}\\cong Sp\\left( 1\\right)$. We also know that $\\func{Aut}\\left( U\\mathbb{H}\\right) \\cong SO\\left( 3\\right)$. Now however, $\\Psi ^{R}\\left( U\\mathbb{H}\\right)$ consists of all pairs $\\left( \\alpha ,A\\right) \\in SO\\left( 3\\right) \\times U\\mathbb{H}$ with the group structure defined by (\\ref{PsAutprod}), which is the semi-direct product
\\begin{equation}
\\Psi ^{R}\\left( U\\mathbb{H}\\right) \\cong SO\\left( 3\\right) \\ltimes Sp\\left( 1\\right) \\cong Sp\\left( 1\\right) Sp\\left( 1\\right) \\cong SO\\left( 4\\right) .
\\end{equation}
In this case, $\\func{PsAut}^{R}\\left( U\\mathbb{H}\\right) \\cong \\func{Aut}\\left( U\\mathbb{H}\\right) \\cong SO\\left( 3\\right)$. Here $\\left( p,q\\right) \\sim \\left( -p,-q\\right)$ acts on $U\\mathbb{H}$ via $r\\mapsto prq^{-1}$.
\\end{example}
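In this associative example everything can be made completely explicit. The short sketch below (ours) decomposes the $SO\\left( 4\\right)$ action $r\\mapsto prq^{-1}$ on unit quaternions into an automorphism part $\\func{Ad}_{p}\\in SO\\left( 3\\right)$ and a companion $A=pq^{-1}$, so that the full action $r\\mapsto \\func{Ad}_{p}\\left( r\\right) A$ reproduces it.
\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def qmul(a, b):
    # quaternion product of a = [w, x, y, z] and b
    w, x, y, z = a; W, X, Y, Z = b
    return np.array([w*W - x*X - y*Y - z*Z, w*X + x*W + y*Z - z*Y,
                     w*Y - x*Z + y*W + z*X, w*Z + x*Y - y*X + z*W])

qinv = lambda a: a * np.array([1.0, -1.0, -1.0, -1.0]) / (a @ a)
runit = lambda: (lambda v: v / np.linalg.norm(v))(rng.normal(size=4))

p, q, r = runit(), runit(), runit()
Ad = lambda p, x: qmul(qmul(p, x), qinv(p))   # automorphism part, in SO(3)
A = qmul(p, qinv(q))                          # companion

# full action of (Ad_p, A) reproduces r -> p r q^{-1}:
assert np.allclose(qmul(Ad(p, r), A), qmul(qmul(p, r), qinv(q)))
\\end{verbatim}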
\\begin{example}
\\label{exGroup}More generally, suppose $\\mathbb{L}=G$ is a group. Then, $\\func{PsAut}^{R}\\left( G\\right) \\cong \\func{Aut}\\left( G\\right)$ and $\\Psi ^{R}\\left( G\\right) \\cong \\func{Aut}\\left( G\\right) \\ltimes G^{\\func{op}}$, with $h=\\left( \\alpha ,A\\right) \\in \\Psi ^{R}\\left( G\\right)$ acting on $G$ by
\\begin{equation}
h\\left( g\\right) =\\alpha \\left( g\\right) A.  \\label{hG}
\\end{equation}
Note that the group $\\func{Aut}\\left( G\\right) \\ltimes G$ is known as the \\emph{holomorph} of $G$.
\\end{example}

\\begin{example}
\\label{ExPsOcto}Suppose $\\mathbb{L}=U\\mathbb{O}$ - the Moufang loop of unit octonions, which is homeomorphic to the $7$-sphere $S^{7}$. From \\cite[Lemma 14.61]{Harvey} we know that $g\\in O\\left( \\mathbb{O}\\right)$ belongs to $Spin\\left( 7\\right)$ if and only if
\\begin{equation}
g\\left( uv\\right) =\\chi _{g}\\left( u\\right) g\\left( v\\right)  \\label{spin7}
\\end{equation}
for all $u,v\\in \\mathbb{O}$, where $\\chi _{g}\\left( u\\right) =g\\left( ug^{-1}\\left( 1\\right) \\right)$ gives the vector representation of $Spin\\left( 7\\right)$ on $\\func{Im}\\mathbb{O}$. We may as well restrict everything to the non-zero octonions $\\mathbb{O}^{\\ast }$ or the unit octonions $U\\mathbb{O}$, so that we have a loop. Now,
\\begin{eqnarray*}
g\\left( u\\right) &=&g\\left( u\\cdot 1\\right) =\\chi _{g}\\left( u\\right) g\\left( 1\\right) \\\\
g\\left( uv\\right) &=&g\\left( uv\\cdot 1\\right) =\\chi _{g}\\left( uv\\right) g\\left( 1\\right) .
\\end{eqnarray*}
Hence, we find that (\\ref{spin7}) implies
\\begin{equation*}
\\chi _{g}\\left( uv\\right) g\\left( 1\\right) =\\chi _{g}\\left( u\\right) \\cdot \\chi _{g}\\left( v\\right) g\\left( 1\\right) .
\\end{equation*}
Thus, $\\left( \\chi _{g},g\\left( 1\\right) \\right)$ is a right pseudoautomorphism of $U\\mathbb{O}$ with companion $g\\left( 1\\right)$. Thus, in this case we find that $\\Psi ^{R}\\left( U\\mathbb{O}\\right) \\cong Spin\\left( 7\\right)$. We also know that $\\mathcal{N}^{R}\\left( U\\mathbb{O}\\right) =\\left\\{ \\pm 1\\right\\} \\cong \\mathbb{Z}_{2}$, and thus the projection $\\left( \\chi ,A\\right) \\mapsto \\chi$ corresponds to the double cover $Spin\\left( 7\\right) \\longrightarrow SO\\left( 7\\right)$. Hence, $\\func{PsAut}^{R}\\left( U\\mathbb{O}\\right) \\cong SO\\left( 7\\right)$ and, as we know, $\\func{Aut}\\left( U\\mathbb{O}\\right) \\cong G_{2}$. Since $U\\mathbb{O}$ is a Moufang loop, so that for any $q$ the map $\\func{Ad}_{q}$ is a right pseudoautomorphism with companion $q^{3}$, and every unit octonion is a cube, we see that $\\mathcal{C}^{R}\\left( U\\mathbb{O}\\right) =U\\mathbb{O}$, and indeed, as we know, $Spin\\left( 7\\right) /G_{2}\\cong S^{7}$.
\\end{example}
\\begin{remark}
We have defined the group $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ as the set of \\emph{all} right pseudoautomorphism pairs $\\left( \\alpha ,A\\right)$, however we could consistently truncate $\\Psi ^{R}\\left( \\mathbb{L}\\right)$ to a subgroup, or, more generally, if $G$ is some group with a homomorphism $\\rho :G\\longrightarrow \\Psi ^{R}\\left( \\mathbb{L}\\right)$, we can use this homomorphism to define a \\emph{pseudoautomorphism action} of $G$ on $\\mathbb{L}$. For example, if $G=\\func{Aut}\\left( \\mathbb{L}\\right) \\ltimes \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ^{\\func{op}}$, then we know that $\\iota _{1}\\times \\iota _{2}:G\\longrightarrow \\Psi ^{R}\\left( \\mathbb{L}\\right)$ is a homomorphism. With respect to the action of $G$, the companions would be just the elements of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$.
\\end{remark}

\\begin{example}
\\label{ExNormedDiv}In \\cite{LeungDivision}, Leung developed a general framework for structures in Riemannian geometry based on division algebras - $\\mathbb{R},\\mathbb{C},\\mathbb{H},\\mathbb{O}$. As a first step, this involved representations of unitary groups with values in each of these algebras on the algebras themselves. The unitary groups $O\\left( n\\right)$, $U\\left( n\\right)$, $Sp\\left( n\\right) Sp\\left( 1\\right)$, and $Spin\\left( 7\\right)$, as well as the corresponding special unitary groups $SO\\left( n\\right)$, $SU\\left( n\\right)$, $Sp\\left( n\\right)$, and $G_{2}$, are precisely the possible Riemannian holonomy groups for irreducible, not locally symmetric smooth manifolds \\cite{Berger1955}. By considering the corresponding loops (groups for the associative cases) we can look at the pseudoautomorphism actions. The octonionic case is already covered in Example \\ref{ExPsOcto}.

\\begin{enumerate}
\\item In the case of $\\mathbb{R}$, consider instead the group of \\textquotedblleft unit reals\\textquotedblright\\ $U\\mathbb{R}=\\left\\{ \\pm 1\\right\\} \\cong \\mathbb{Z}_{2}$. Then, $\\Psi ^{R}\\left( U\\mathbb{R}\\right) =\\left\\{ 1\\right\\} \\ltimes \\left\\{ \\pm 1\\right\\} \\cong \\mathbb{Z}_{2}$; however, consider now, for some positive integer $n$, the homomorphism $\\det :O\\left( n\\right) \\longrightarrow \\mathbb{Z}_{2}$. Then $O\\left( n\\right)$ acts on $\\mathbb{Z}_{2}$ via this homomorphism: $\\left( g,x\\right) \\mapsto x\\det g$, where $x\\in \\mathbb{Z}_{2}$ and $g\\in O\\left( n\\right)$. The preimage of $\\func{Aut}\\left( \\mathbb{Z}_{2}\\right) =\\left\\{ 1\\right\\}$ is then just $\\ker \\det =SO\\left( n\\right)$. Thus, we can now define the group $\\Psi _{n}^{R}\\left( U\\mathbb{R}\\right) =O\\left( n\\right)$. The full action of $\\Psi _{n}^{R}\\left( U\\mathbb{R}\\right)$ on $U\\mathbb{R}$ is transitive, while the partial action is trivial. Similarly, we can also define $\\func{Aut}_{n}\\left( U\\mathbb{R}\\right) =SO\\left( n\\right)$.

\\item In the complex case, the group of unit complex numbers is $U\\mathbb{C}=U\\left( 1\\right) \\cong S^{1}$. Similarly, as above, $\\Psi ^{R}\\left( U\\mathbb{C}\\right) =\\left\\{ 1\\right\\} \\ltimes U\\left( 1\\right) \\cong U\\left( 1\\right)$. Now however, we also have the homomorphism $\\det_{\\mathbb{C}}:U\\left( n\\right) \\longrightarrow U\\left( 1\\right)$. Then, $U\\left( n\\right)$ acts on $U\\left( 1\\right)$ via $\\left( g,z\\right) \\mapsto z\\det g$, where $z\\in U\\left( 1\\right)$ and $g\\in U\\left( n\\right)$. The preimage of $\\func{Aut}\\left( U\\left( 1\\right) \\right) =\\left\\{ 1\\right\\}$ is then just $\\ker \\det_{\\mathbb{C}}=SU\\left( n\\right)$. Thus, similarly as above, we can now define the group $\\Psi _{n}^{R}\\left( U\\mathbb{C}\\right) =U\\left( n\\right)$. The full action of $\\Psi _{n}^{R}\\left( U\\mathbb{C}\\right)$ on $U\\mathbb{C}$ is transitive, while the partial action is trivial. Similarly, we can also define $\\func{Aut}_{n}\\left( U\\mathbb{C}\\right) =SU\\left( n\\right)$.

\\item In the quaternionic case, we have already seen the case $n=1$ in Example \\ref{ExPsQuat}.
The $n$-dimensional quaternionic unitary group is in general $Sp\\left( n\\right) Sp\\left( 1\\right)$, where $Sp\\left( n\\right)$ is the compact symplectic group or, equivalently, the quaternion special unitary group. The group $Sp\\left( n\\right) Sp\\left( 1\\right)$ acts on $\\mathbb{H}^{n}$ by $Sp\\left( n\\right)$ on the left and multiplication by a unit quaternion on the right, and hence can be represented by pairs $h=\\left( \\alpha ,q\\right) \\in Sp\\left( n\\right) \\times Sp\\left( 1\\right)$, with the identification $\\left( -\\alpha ,-q\\right) \\sim \\left( \\alpha ,q\\right)$. For $n\\geq 2$, define the homomorphism $\\rho _{\\mathbb{H}}:Sp\\left( n\\right) Sp\\left( 1\\right) \\longrightarrow Sp\\left( 1\\right) Sp\\left( 1\\right)$ given by $\\left[ \\alpha ,q\\right] \\mapsto \\left[ 1,q\\right]$. The image of this homomorphism simply corresponds to elements of $\\Psi ^{R}\\left( U\\mathbb{H}\\right)$ that are of the form $\\left( \\func{id},q\\right)$, i.e. act by right multiplication of $U\\mathbb{H}$ on itself. The preimage of $\\func{Aut}\\left( U\\mathbb{H}\\right) \\cong SO\\left( 3\\right)$ is then $\\ker \\rho _{\\mathbb{H}}\\cong Sp\\left( n\\right)$. Overall, we may define the group $\\Psi _{n}^{R}\\left( U\\mathbb{H}\\right) =Sp\\left( n\\right) Sp\\left( 1\\right)$ and $\\func{Aut}_{n}\\left( U\\mathbb{H}\\right) =Sp\\left( n\\right)$. As in the previous examples, the full action of $\\Psi _{n}^{R}\\left( U\\mathbb{H}\\right)$ on $U\\mathbb{H}$ is transitive, whereas the partial action is again trivial. We will refer to this example later on, with the assumption that $n\\geq 2$.
\\end{enumerate}

Thus, in each of the above cases, we may regard $\\Psi _{n}^{R}$ ($O\\left( n\\right)$, $U\\left( n\\right)$, or $Sp\\left( n\\right) Sp\\left( 1\\right)$) as \\emph{a} group of pseudoautomorphism pairs acting on the unit real numbers, unit complex numbers, or unit quaternions, with a trivial partial action and with the full action given just by right multiplication. The corresponding automorphism subgroups are then the \\textquotedblleft special\\textquotedblright\\ unitary subgroups $SO\\left( n\\right)$, $SU\\left( n\\right)$, $Sp\\left( n\\right)$.
\\end{example}

\\subsection{Modified product}

Let $r\\in \\mathbb{L}$, and define the modified product $\\circ _{r}$ on $\\mathbb{L}$ via
\\begin{equation}
p\\circ _{r}q=\\faktor{\\left( p\\cdot qr\\right)}{r}.  \\label{rprod}
\\end{equation}
Then, $p\\circ _{r}q=p\\cdot q$ if and only if $p\\cdot qr=pq\\cdot r$. This is true for all $p,q$ if and only if $r\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right)$; however, it will not hold for all $r$ unless $\\mathbb{L}$ is associative (and is thus a group). If $\\mathbb{L}$ is a right Bol loop and $a\\in \\mathbb{L}$, setting $r=q\\backslash a$ in the right Bol identity (\\ref{rightBol}) gives us
\\begin{equation}
pq\\cdot q\\backslash a=\\faktor{\\left( p\\cdot aq\\right)}{q}=p\\circ _{q}a.  \\label{midprod}
\\end{equation}
On octonions, the left-hand side of (\\ref{midprod}) is precisely the \\textquotedblleft modified octonion product\\textquotedblright\\ defined in \\cite{GrigorianOctobundle} and also used in \\cite{GrigorianOctoSUSY}. Since unit octonions are in particular a right Bol loop, the two products are equal on octonions.
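On the unit octonions, the modified product and the identity (\\ref{midprod}) are easy to probe numerically. The following sketch (informal; octonion helpers as in the earlier sketches, and note that $q\\backslash a=q^{-1}a$ and $p/q=pq^{-1}$ are valid here since octonions have the inverse property) implements $\\circ _{r}$ and checks (\\ref{midprod}) for random unit octonions.
\\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def qmul(a, b):
    w, x, y, z = a; W, X, Y, Z = b
    return np.array([w*W - x*X - y*Y - z*Z, w*X + x*W + y*Z - z*Y,
                     w*Y - x*Z + y*W + z*X, w*Z + x*Y - y*X + z*W])

qconj = lambda a: a * np.array([1.0, -1.0, -1.0, -1.0])

def omul(p, q):
    a, b, c, d = p[:4], p[4:], q[:4], q[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

oinv = lambda p: np.concatenate([qconj(p[:4]), -p[4:]]) / (p @ p)
rdiv = lambda p, q: omul(p, oinv(q))      # p/q, valid in an IP loop
ldiv = lambda q, p: omul(oinv(q), p)      # q\p
runit = lambda: (lambda v: v / np.linalg.norm(v))(rng.normal(size=8))

circ = lambda p, q, r: rdiv(omul(p, omul(q, r)), r)   # p o_r q
one = np.zeros(8); one[0] = 1.0
p, q, r, a = runit(), runit(), runit(), runit()

# 1 is still the identity element for every o_r:
assert np.allclose(circ(one, p, r), p) and np.allclose(circ(p, one, r), p)

# generically o_r differs from the octonion product:
print("|p o_r q - pq| =", np.linalg.norm(circ(p, q, r) - omul(p, q)))

# the right Bol identity gives (midprod): pq (q\a) = p o_q a
assert np.allclose(omul(omul(p, q), ldiv(q, a)), circ(p, a, q))
\\end{verbatim}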
The product (\\ref{rprod}) gives us a convenient definition of the \\emph{loop associator}.

\\begin{definition}
Given $p,q,r\\in \\mathbb{L}$, the \\emph{loop associator} of $p,q,r$ is given by
\\begin{equation}
\\left[ p,q,r\\right] =\\faktor{\\left( p\\circ _{r}q\\right)}{pq}.  \\label{loopassoc}
\\end{equation}
The \\emph{loop commutator} of $p$ and $q$ is given by
\\begin{equation}
\\left[ p,q\\right] =\\faktor{\\left( pq/p\\right)}{q}.  \\label{loopcomm}
\\end{equation}
\\end{definition}

From the definition (\\ref{loopassoc}), we see that $\\left[ p,q,r\\right] =1$ if and only if $p\\left( qr\\right) =\\left( pq\\right) r$. There are several possible equivalent definitions of the associator, but from our point of view, (\\ref{loopassoc}) will be the most convenient. Similarly, the loop commutator can be defined in different ways, however (\\ref{loopcomm}) has an advantage, because if we define $\\func{Ad}_{p}\\left( q\\right) =pq/p$, then $\\left[ p,q\\right] =\\left( \\func{Ad}_{p}\\left( q\\right) \\right) /q$, which is a similar relation as for the group commutator.
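Correspondingly, (\\ref{loopassoc}) and (\\ref{loopcomm}) can be computed explicitly on unit octonions. In the sketch below (ours, with the same helper functions as in the previous sketch, including \\texttt{circ} and \\texttt{rdiv}), the associator is trivial when $r=1$ and also when $r=p$ (by flexibility), but not for generic arguments.
\\begin{verbatim}
# assumes omul, rdiv, circ, runit, one defined as in the sketch above
assoc = lambda p, q, r: rdiv(circ(p, q, r), omul(p, q))   # [p,q,r]
comm = lambda p, q: rdiv(rdiv(omul(p, q), p), q)          # [p,q]
p, q, r = runit(), runit(), runit()

print("[p,q,r] =", np.round(assoc(p, q, r), 3))   # generically not 1
print("[p,q]   =", np.round(comm(p, q), 3))

# trivial when r = 1, and when r = p by flexibility p(qp) = (pq)p:
assert np.allclose(assoc(p, q, one), one)
assert np.allclose(assoc(p, q, p), one)
\\end{verbatim}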
We can easily see that $\\left( \\mathbb{L},\\circ _{r}\\right)$ is a loop.

\\begin{lemma}
Consider the pair $\\left( \\mathbb{L},\\circ _{r}\\right)$ of the set $\\mathbb{L}$ equipped with the binary operation $\\circ _{r}$.

\\begin{enumerate}
\\item The right quotient $/_{r}$ and the left quotient $\\backslash _{r}$ on $\\left( \\mathbb{L},\\circ _{r}\\right)$ are given by
\\begin{subequations}
\\label{rprodq}
\\begin{eqnarray}
p/_{r}q &=&\\faktor{pr}{qr}  \\label{rprodqright} \\\\
p\\backslash _{r}q &=&\\faktor{\\left( p\\backslash qr\\right)}{r},  \\label{rprodqleft}
\\end{eqnarray}
\\end{subequations}
and hence, $\\left( \\mathbb{L},\\circ _{r}\\right)$ is a quasigroup.

\\item $1\\in \\mathbb{L}$ is the identity element for $\\left( \\mathbb{L},\\circ _{r}\\right)$, and hence $\\left( \\mathbb{L},\\circ _{r}\\right)$ is a loop.

\\item Let $q\\in \\mathbb{L}$; the left and right inverses with respect to $\\circ _{r}$ are given by
\\begin{subequations}
\\begin{eqnarray}
q^{\\lambda _{\\left( r\\right) }} &=&\\faktor{r}{qr}  \\label{linvr} \\\\
q^{\\rho _{\\left( r\\right) }} &=&\\faktor{\\left( q\\backslash r\\right)}{r}.  \\label{rinvr}
\\end{eqnarray}
\\end{subequations}

\\item $\\left( \\mathbb{L},\\circ _{r}\\right)$ is isomorphic to $\\left( \\mathbb{L},\\cdot \\right)$ if and only if $r\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right)$. In particular, $\\alpha :\\left( \\mathbb{L},\\cdot \\right) \\longrightarrow \\left( \\mathbb{L},\\circ _{r}\\right)$ is an isomorphism, i.e. for any $p,q\\in \\mathbb{L}$,
\\begin{equation}
\\alpha \\left( pq\\right) =\\alpha \\left( p\\right) \\circ _{r}\\alpha \\left( q\\right) ,  \\label{alpharcirc}
\\end{equation}
if and only if $\\alpha$ is a right pseudoautomorphism on $\\left( \\mathbb{L},\\cdot \\right)$ with companion $r$.
\\end{enumerate}
\\end{lemma}

\\begin{proof}
Let $x,p,q,r\\in \\mathbb{L}$.

\\begin{enumerate}
\\item Suppose
\\begin{equation*}
x\\circ _{r}q=p.
\\end{equation*}
Using (\\ref{rprod}),
\\begin{equation*}
x\\cdot qr=pr,
\\end{equation*}
and thus
\\begin{equation*}
x=pr/qr:=p/_{r}q.
\\end{equation*}
Similarly, suppose
\\begin{equation*}
p\\circ _{r}x=q,
\\end{equation*}
so that
\\begin{equation*}
p\\cdot xr=qr,
\\end{equation*}
and thus
\\begin{equation*}
x=\\left( p\\backslash \\left( qr\\right) \\right) /r:=p\\backslash _{r}q.
\\end{equation*}
Since the left and right quotients are both defined, $\\left( \\mathbb{L},\\circ _{r}\\right)$ is a quasigroup.

\\item We have
\\begin{eqnarray*}
p\\circ _{r}1 &=&\\left( p\\cdot r\\right) /r=p \\\\
1\\circ _{r}p &=&\\left( 1\\cdot pr\\right) /r=p.
\\end{eqnarray*}
Hence, $1$ is indeed the identity element for $\\left( \\mathbb{L},\\circ _{r}\\right)$, and thus $\\left( \\mathbb{L},\\circ _{r}\\right)$ is a loop.

\\item Setting $p=1$ in (\\ref{rprodq}), we get the desired expressions.

\\item Suppose $\\left( \\alpha ,r\\right) \\in \\Psi ^{R}\\left( \\mathbb{L}\\right)$. Then, by definition, for any $p,q\\in \\mathbb{L}$,
\\begin{equation*}
\\alpha \\left( pq\\right) =\\faktor{\\left( \\alpha \\left( p\\right) \\cdot \\alpha \\left( q\\right) r\\right)}{r}.
\\end{equation*}
Hence, from (\\ref{rprod}),
\\begin{equation}
\\alpha \\left( pq\\right) =\\alpha \\left( p\\right) \\circ _{r}\\alpha \\left( q\\right) .
\\end{equation}
Thus, $\\alpha$ is an isomorphism from $\\left( \\mathbb{L},\\cdot \\right)$ to $\\left( \\mathbb{L},\\circ _{r}\\right)$. Clearly the converse is also true: if $\\alpha$ is an isomorphism from $\\left( \\mathbb{L},\\cdot \\right)$ to $\\left( \\mathbb{L},\\circ _{r}\\right)$, then $r$ is a companion for $\\alpha$.
Hence, $\\left( \\mathbb{L},\\cdot \\right)$ and $\\left( \\mathbb{L},\\circ _{r}\\right)$ are isomorphic if and only if $r$ is a companion for some right pseudoautomorphism.
\\end{enumerate}
\\end{proof}

Suppose $r,x\\in \\mathbb{L}$; then the next lemma shows the relationship between the products $\\circ _{x}$ and $\\circ _{rx}$.

\\begin{lemma}
\\label{lemxrprod}Let $r,x\\in \\mathbb{L}$, then
\\begin{equation}
p\\circ _{rx}q=\\left( p\\circ _{x}\\left( q\\circ _{x}r\\right) \\right) /_{x}r.  \\label{xrprod}
\\end{equation}
\\end{lemma}

\\begin{proof}
Let $r,x\\in \\mathbb{L}$, and suppose $y=rx$. Then, by (\\ref{rprod}),
\\begin{eqnarray*}
p\\cdot qy &=&\\left( p\\circ _{y}q\\right) \\cdot y \\\\
&=&\\left( p\\circ _{y}q\\right) \\cdot rx \\\\
&=&\\left( \\left( p\\circ _{y}q\\right) \\circ _{x}r\\right) \\cdot x.
\\end{eqnarray*}
On the other hand, using (\\ref{rprod}) in a different way, we get
\\begin{eqnarray*}
p\\cdot qy &=&p\\cdot q\\left( rx\\right) \\\\
&=&p\\cdot \\left( \\left( q\\circ _{x}r\\right) x\\right) \\\\
&=&\\left( p\\circ _{x}\\left( q\\circ _{x}r\\right) \\right) \\cdot x.
\\end{eqnarray*}
Hence,
\\begin{equation*}
\\left( p\\circ _{y}q\\right) \\circ _{x}r=p\\circ _{x}\\left( q\\circ _{x}r\\right) .
\\end{equation*}
Dividing by $r$ on the right using $/_{x}$ gives (\\ref{xrprod}).
\\end{proof}

\\begin{remark}
Lemma \\ref{lemxrprod} shows that the $rx$-product is equivalent to the $r$-product, \\emph{but defined on} $\\left( \\mathbb{L},\\circ _{x}\\right)$. That is, if we start with $\\circ _{x}$ and define the $r$-product using $\\circ _{x}$, then we obtain the $rx$-product \\emph{on} $\\left( \\mathbb{L},\\cdot \\right)$. If $x\\in \\mathcal{C}^{R}\\left( \\mathbb{L},\\cdot \\right)$, then $\\left( \\mathbb{L},\\circ _{x}\\right)$ is isomorphic to $\\left( \\mathbb{L},\\cdot \\right)$. Similarly, if $r\\in \\mathcal{C}^{R}\\left( \\mathbb{L},\\circ _{x}\\right)$, then $\\left( \\mathbb{L},\\circ _{rx}\\right)$ is isomorphic to $\\left( \\mathbb{L},\\circ _{x}\\right)$.
\\end{remark}

On $\\left( \\mathbb{L},\\circ _{x}\\right)$ we can define the associator and commutator. Given $p,q,r\\in \\mathbb{L}$, the \\emph{loop associator} on $\\left( \\mathbb{L},\\circ _{x}\\right)$ is given by
\\begin{equation}
\\left[ p,q,r\\right] ^{\\left( x\\right) }=\\left( p\\circ _{rx}q\\right) /_{x}\\left( p\\circ _{x}q\\right) .  \\label{loopassoc2}
\\end{equation}
The \\emph{loop commutator} on $\\left( \\mathbb{L},\\circ _{x}\\right)$ is given by
\\begin{equation}
\\left[ p,q\\right] ^{\\left( x\\right) }=\\left( \\left( p\\circ _{x}q\\right) /_{x}p\\right) /_{x}q.  \\label{loopcomm2}
\\end{equation}
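Lemma \\ref{lemxrprod} and the definition (\\ref{loopassoc2}) are also straightforward to verify numerically on unit octonions, using $p/_{x}q=\\left( px\\right) /\\left( qx\\right)$ from (\\ref{rprodqright}). A sketch (ours, with helpers as in the earlier octonion sketches):
\\begin{verbatim}
# assumes omul, rdiv, circ, runit defined as in the earlier sketches
crdiv = lambda p, q, x: rdiv(omul(p, x), omul(q, x))   # p /_x q
p, q, r, x = runit(), runit(), runit(), runit()

# Lemma (xrprod): p o_{rx} q = (p o_x (q o_x r)) /_x r
lhs = circ(p, q, omul(r, x))
rhs = crdiv(circ(p, circ(q, r, x), x), r, x)
assert np.allclose(lhs, rhs)

# the o_x-associator from (loopassoc2):
print("[p,q,r]^(x) =",
      np.round(crdiv(circ(p, q, omul(r, x)), circ(p, q, x), x), 3))
\\end{verbatim}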
For any $x\\in \\mathbb{L}$, the adjoint map $\\func{Ad}^{\\left( x\\right) }:\\mathbb{L}\\times \\mathbb{L}\\longrightarrow \\mathbb{L}$ with respect to $\\circ _{x}$ is given by
\\begin{equation}
\\func{Ad}_{p}^{\\left( x\\right) }\\left( q\\right) =\\left( \\left( R_{p}^{\\left( x\\right) }\\right) ^{-1}\\circ L_{p}^{\\left( x\\right) }\\right) q=\\left( p\\circ _{x}q\\right) /_{x}p  \\label{Adpx}
\\end{equation}
for any $p,q\\in \\mathbb{L}$, and its inverse for a fixed $p$ is
\\begin{equation}
\\left( \\func{Ad}_{p}^{\\left( x\\right) }\\right) ^{-1}\\left( q\\right) =\\left( \\left( L_{p}^{\\left( x\\right) }\\right) ^{-1}\\circ R_{p}^{\\left( x\\right) }\\right) q=p\\backslash _{x}\\left( q\\circ _{x}p\\right) .
\\end{equation}

Let us now consider how pseudoautomorphisms of $\\left( \\mathbb{L},\\cdot \\right)$ act on $\\left( \\mathbb{L},\\circ _{r}\\right)$.

\\begin{lemma}
\\label{lemPseudoHom}Let $h=\\left( \\beta ,B\\right) \\in \\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right)$. Then, for any $p,q,r\\in \\mathbb{L}$,
\\begin{equation}
\\beta \\left( p\\circ _{r}q\\right) =\\beta \\left( p\\right) \\circ _{h\\left( r\\right) }\\beta \\left( q\\right)  \\label{PsiActcircr}
\\end{equation}
and $\\beta$ is a right pseudoautomorphism of $\\left( \\mathbb{L},\\circ _{r}\\right)$ with companion $h\\left( r\\right) /r$. It also follows that
\\begin{equation}
\\beta \\left( p/_{r}q\\right) =\\beta \\left( p\\right) /_{h\\left( r\\right) }\\beta \\left( q\\right) .  \\label{PsiActQuot}
\\end{equation}
\\end{lemma}

\\begin{proof}
Consider $\\beta \\left( p\\circ _{r}q\\right)$. Then, using (\\ref{PsAutprod}) and (\\ref{PsAutquot2}),
\\begin{eqnarray*}
\\beta \\left( p\\circ _{r}q\\right) &=&\\beta \\left( \\left( p\\cdot qr\\right) /r\\right) \\\\
&=&h\\left( p\\cdot qr\\right) /h\\left( r\\right) \\\\
&=&\\left( \\beta \\left( p\\right) \\cdot h\\left( qr\\right) \\right) /h\\left( r\\right) \\\\
&=&\\left( \\beta \\left( p\\right) \\cdot \\beta \\left( q\\right) h\\left( r\\right) \\right) /h\\left( r\\right) \\\\
&=&\\beta \\left( p\\right) \\circ _{h\\left( r\\right) }\\beta \\left( q\\right) ,
\\end{eqnarray*}
and hence we get (\\ref{PsiActcircr}). Alternatively, using (\\ref{rprodqright}),
\\begin{eqnarray*}
\\beta \\left( p\\circ _{r}q\\right) &=&\\faktor{\\left( \\beta \\left( p\\right) \\cdot \\beta \\left( q\\right) h\\left( r\\right) \\right)}{h\\left( r\\right)} \\\\
&=&\\left(\\faktor{\\left( \\beta \\left( p\\right) \\cdot \\beta \\left( q\\right) h\\left( r\\right) \\right)}{r}\\right) /_{r}\\left(\\faktor{h\\left( r\\right)}{r}\\right) .
\\end{eqnarray*}
Now, let $C=h\\left( r\\right) /r$.
Thus,
\\begin{eqnarray*}
\\beta \\left( p\\circ _{r}q\\right) &=&\\left( \\faktor{\\left( \\beta \\left( p\\right) \\left( \\beta \\left( q\\right) \\cdot Cr\\right) \\right)}{r}\\right) /_{r}C \\\\
&=&\\left( \\beta \\left( p\\right) \\circ _{r}\\left( \\beta \\left( q\\right) \\circ _{r}C\\right) \\right) /_{r}C.
\\end{eqnarray*}
Thus, indeed, $\\beta$ is a right pseudoautomorphism of $\\left( \\mathbb{L},\\circ _{r}\\right)$ with companion $C=h\\left( r\\right) /r$.

Now using (\\ref{PsiActcircr}) with $p/_{r}q$ and $q$, we find
\\begin{equation*}
\\beta \\left( p\\right) =\\beta \\left( p/_{r}q\\circ _{r}q\\right) =\\beta \\left( p/_{r}q\\right) \\circ _{h\\left( r\\right) }\\beta \\left( q\\right)
\\end{equation*}
and hence we get (\\ref{PsiActQuot}).
\\end{proof}

\\begin{remark}
We will use the notation $\\left( \\beta ,C\\right) _{r}$ to denote that $\\left( \\beta ,C\\right) _{r}$ is considered as a pseudoautomorphism pair on $\\left( \\mathbb{L},\\circ _{r}\\right)$, i.e. $\\left( \\beta ,C\\right) _{r}\\in \\Psi ^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$. Of course, the product of $C$ with any element of $\\mathcal{N}^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$ on the right will also give a companion of $\\beta$ on $\\left( \\mathbb{L},\\circ _{r}\\right)$. Any right pseudoautomorphism of $\\left( \\mathbb{L},\\cdot \\right)$ is also a right pseudoautomorphism of $\\left( \\mathbb{L},\\circ _{r}\\right)$, however their companions may be different. In particular, $\\func{PsAut}^{R}\\left( \\mathbb{L},\\cdot \\right) =\\func{PsAut}^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$. For $\\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right)$ and $\\Psi ^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$ we have a group isomorphism
\\begin{eqnarray}
\\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right) &\\longrightarrow &\\Psi ^{R}\\left( \\mathbb{L},\\circ _{r}\\right)  \\notag \\\\
h &=&\\left( \\beta ,B\\right) \\mapsto h_{r}=\\left( \\beta ,\\faktor{h\\left( r\\right)}{r}\\right) _{r}.  \\label{PsAutoriso}
\\end{eqnarray}
Conversely, if we have $h_{r}=\\left( \\beta ,C\\right) _{r}\\in \\Psi ^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$, then this corresponds to $h=\\left( \\beta ,B\\right) \\in \\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right)$, where
\\begin{equation}
B=\\beta \\left( r\\right) \\backslash \\left( Cr\\right) .  \\label{PsAutorisorev}
\\end{equation}
\\end{remark}

The group isomorphism (\\ref{PsAutoriso}), together with $R_{r}^{-1}$ (right division by $r$), induces a $G$-set isomorphism between $\\left( \\mathbb{\\mathring{L}},\\cdot \\right)$ with the action of $\\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right)$ and $\\left( \\mathbb{\\mathring{L}},\\circ _{r}\\right)$ with the action of $\\Psi ^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$.

\\begin{lemma}
Let $r\\in \\mathbb{L}$; then the mapping (\\ref{PsAutoriso}), $h\\mapsto h_{r}$, from $\\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right)$ to $\\Psi ^{R}\\left( \\mathbb{L},\\circ _{r}\\right)$, together with the map $R_{r}^{-1}:\\left( \\mathbb{\\mathring{L}},\\cdot \\right) \\longrightarrow \\left( \\mathbb{\\mathring{L}},\\circ _{r}\\right)$, gives a $G$-set isomorphism. In particular, for any $A\\in \\mathbb{\\mathring{L}}$ and $h\\in \\Psi ^{R}\\left( \\mathbb{L},\\cdot \\right)$,
\\begin{equation}
h\\left( A\\right) /r=h_{r}\\left( A/r\\right) .
The group isomorphism (\ref{PsAutoriso}), together with $R_{r}^{-1}$ (right division by $r$), induces a $G$-set isomorphism between $\left( \mathbb{\mathring{L}},\cdot \right) $ with the action of $\Psi ^{R}\left( \mathbb{L},\cdot \right) $ and $\left( \mathbb{\mathring{L}},\circ _{r}\right) $ with the action of $\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $.

\begin{lemma}
Let $r\in \mathbb{L}$; then the mapping (\ref{PsAutoriso}) $h\mapsto h_{r}$ from $\Psi ^{R}\left( \mathbb{L},\cdot \right) $ to $\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $, together with the map $R_{r}^{-1}:\left( \mathbb{\mathring{L}},\cdot \right) \longrightarrow \left( \mathbb{\mathring{L}},\circ _{r}\right) $, gives a $G$-set isomorphism. In particular, for any $A\in \mathbb{\mathring{L}}$ and $h\in \Psi ^{R}\left( \mathbb{L},\cdot \right) $,
\begin{equation}
h\left( A\right) /r=h_{r}\left( A/r\right) .  \label{Gsetiso}
\end{equation}
\end{lemma}

\begin{proof}
Suppose $h=\left( \beta ,B\right) $ and, correspondingly, from (\ref{PsAutoriso}), $h_{r}=\left( \beta ,\faktor{h\left( r\right)}{r}\right) _{r}$. Then we have
\begin{eqnarray*}
h_{r}\left( A/r\right)  &=&\beta \left( A/r\right) \circ _{r}\faktor{h\left( r\right)}{r} \\
&=&\faktor{\left( h\left( A\right) /h\left( r\right) \cdot h\left( r\right) \right)}{r} \\
&=&h\left( A\right) /r,
\end{eqnarray*}
where we have also used (\ref{PsAutquot2a}).
\end{proof}

Using (\ref{PsAutoriso}), we now have the following characterizations of $\mathcal{C}^{R}\left( \mathbb{L},\circ _{r}\right) $, $\mathcal{N}^{R}\left( \mathbb{L},\circ _{r}\right) $, and $\func{Aut}\left( \mathbb{L},\circ _{r}\right) $.

\begin{lemma}
Let $r,C\in \mathbb{L}$; then
\begin{subequations}
\begin{eqnarray}
C &\in &\mathcal{C}^{R}\left( \mathbb{L},\circ _{r}\right) \iff C=A/r\ \text{for some }A\in \func{Orb}_{\Psi ^{R}\left( \mathbb{L},\cdot \right) }\left( r\right)   \label{CRrdef} \\
C &\in &\mathcal{N}^{R}\left( \mathbb{L},\circ _{r}\right) \iff C=\func{Ad}_{r}\left( A\right) \ \text{for some }A\in \mathcal{N}^{R}\left( \mathbb{L},\cdot \right)   \label{CRNucl}
\end{eqnarray}
\end{subequations}
and
\begin{equation}
\func{Aut}\left( \mathbb{L},\circ _{r}\right) \cong \func{Stab}_{\Psi ^{R}\left( \mathbb{L},\cdot \right) }\left( r\right) .  \label{AutLr}
\end{equation}
If $r\in \mathcal{C}^{R}\left( \mathbb{L},\cdot \right) $, so that there exists a right pseudoautomorphism pair $h=\left( \alpha ,r\right) \in \Psi ^{R}\left( \mathbb{L},\cdot \right) $, then $\func{Aut}\left( \mathbb{L},\circ _{r}\right) \cong h\func{Aut}\left( \mathbb{L},\cdot \right) h^{-1}.$
\end{lemma}
\begin{proof}
From (\ref{PsAutoriso}) we see that any companion in $\left( \mathbb{L},\circ _{r}\right) $ is of the form $h\left( r\right) /r$ for some $h\in \Psi ^{R}\left( \mathbb{L},\cdot \right) $. Therefore, $C\in \mathbb{L}$ is a companion in $\left( \mathbb{L},\circ _{r}\right) $ if and only if it is of the form $C=A/r$ for some $A\in \func{Orb}_{\Psi ^{R}\left( \mathbb{L},\cdot \right) }\left( r\right) $.

The right nucleus $\mathcal{N}^{R}\left( \mathbb{L},\circ _{r}\right) $ corresponds to the companions of the identity map $\func{id}$ on $\mathbb{L}$, hence taking $\beta =\func{id}$ in (\ref{PsAutoriso}), we find that companions of $\func{id}$ in $\left( \mathbb{L},\circ _{r}\right) $ must be of the form $C=\left( rA\right) /r=\func{Ad}_{r}\left( A\right) $ for some $A\in \mathcal{N}^{R}\left( \mathbb{L},\cdot \right) $. Conversely, suppose $C=\left( rA\right) /r$ for some $A\in \mathcal{N}^{R}\left( \mathbb{L},\cdot \right) $; then we can explicitly check that for any $p,q\in \mathbb{L}$, we have
\begin{eqnarray*}
\left( p\circ _{r}q\right) \circ _{r}C &=&\left( \left( p\cdot qr\right) /r\cdot rA\right) /r \\
&=&\left( \left( p\cdot qr\right) \cdot A\right) /r \\
&=&\left( p\cdot \left( qr\cdot A\right) \right) /r=\left( p\cdot \left( q\cdot rA\right) \right) /r \\
&=&\left( p\cdot \left( q\cdot Cr\right) \right) /r=\left( p\cdot \left( q\circ _{r}C\right) r\right) /r \\
&=&p\circ _{r}\left( q\circ _{r}C\right) ,
\end{eqnarray*}
and hence $C\in \mathcal{N}^{R}\left( \mathbb{L},\circ _{r}\right) $.

The group $\func{Aut}\left( \mathbb{L},\circ _{r}\right) $ is isomorphic to the preimage $\func{prj}_{2}^{-1}\left( 1\right) $ with respect to the projection map $\func{prj}_{2}:\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) \longrightarrow \mathcal{C}^{R}\left( \mathbb{L},\circ _{r}\right) $. From (\ref{PsAutoriso}), this corresponds precisely to the maps $h\in \Psi ^{R}\left( \mathbb{L},\cdot \right) $ for which $h\left( r\right) =r$. If $r$ is in the $\Psi ^{R}\left( \mathbb{L},\cdot \right) $-orbit of $1$, then clearly $\func{Aut}\left( \mathbb{L},\circ _{r}\right) $ is conjugate to $\func{Aut}\left( \mathbb{L},\cdot \right) $.
\end{proof}
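\begin{example}
For a concrete non-associative illustration, consider the Moufang loop $U\mathbb{O}$ of unit octonions. Assuming the description of $\Psi ^{R}\left( U\mathbb{O}\right) \cong Spin\left( 7\right) $ in Example \ref{ExPsOcto}, with its transitive action on $U\mathbb{O}\cong S^{7}$, (\ref{AutLr}) recovers the classical fact that the stabilizer of a point of $S^{7}$ in $Spin\left( 7\right) $ is isomorphic to $G_{2}$; in particular, $\func{Aut}\left( U\mathbb{O},\circ _{r}\right) \cong G_{2}$ for every $r$.
\end{example}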
\begin{remark}
Suppose $r\in \mathcal{C}^{R}\left( \mathbb{L}\right) $; then from (\ref{CRrdef}), we see that if $A\in \mathcal{C}^{R}\left( \mathbb{L},\circ _{r}\right) $, then $Ar\in \mathcal{C}^{R}\left( \mathbb{L}\right) $. Also, using the isomorphism (\ref{PsAutoriso}), we can define the left action of $\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $ on $\Psi ^{R}\left( \mathbb{L},\cdot \right) $ just by composition on the left by the corresponding element in $\Psi ^{R}\left( \mathbb{L},\cdot \right) $. Now recall that
\begin{equation*}
\mathcal{C}^{R}\left( \mathbb{L},\circ _{r}\right) \cong \faktor{\Psi ^{R}\left( \mathbb{L},\circ _{r}\right)}{\func{Aut}\left( \mathbb{L},\circ _{r}\right)}\ \text{ and }\mathcal{C}^{R}\left( \mathbb{L}\right) \cong \faktor{\Psi ^{R}\left( \mathbb{L},\cdot \right)}{\func{Aut}\left( \mathbb{L},\cdot \right)}.
\end{equation*}
Then, for any equivalence classes $\left\lfloor \alpha ,A\right\rfloor _{r}\in \faktor{\Psi ^{R}\left( \mathbb{L},\circ _{r}\right)}{\func{Aut}\left( \mathbb{L},\circ _{r}\right)}$ and $\left\lfloor \beta ,r\right\rfloor \in \faktor{\Psi ^{R}\left( \mathbb{L},\cdot \right)}{\func{Aut}\left( \mathbb{L},\cdot \right)}$, we find that
\begin{equation}
\left\lfloor \alpha ,A\right\rfloor _{r}\cdot \left\lfloor \beta ,r\right\rfloor =\left\lfloor \alpha \circ \beta ,Ar\right\rfloor .  \label{CCaction}
\end{equation}
Another way to see this is the following. From (\ref{PsAutorisorev}), the element in $\Psi ^{R}\left( \mathbb{L},\cdot \right) $ that corresponds to $\left( \alpha ,A\right) _{r}\in \Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $ is $\left( \alpha ,\alpha \left( r\right) \backslash \left( Ar\right) \right) $. The composition of this with $\left( \beta ,r\right) $ is then $\left( \alpha \circ \beta ,Ar\right) $, and it is easy to see that this descends to cosets.
\end{remark}

\begin{example}
\label{exMouf}Recall that in a Moufang loop $\mathbb{L}$, the map $\func{Ad}_{q}$ is a right pseudoautomorphism with companion $q^{3}$. The relation (\ref{CCaction}) then shows that for any $r\in \mathbb{L}$,
\begin{equation}
\func{Ad}_{q}^{\left( r^{3}\right) }\circ \func{Ad}_{r}=\func{Ad}_{\left( q^{3}r^{3}\right) ^{\frac{1}{3}}}\circ h,
\end{equation}
where $h\in \func{Aut}\left( \mathbb{L}\right) $. This follows because $\func{Ad}_{q}^{\left( r^{3}\right) }$ has companion $q^{3}$ in $\Psi ^{R}\left( \mathbb{L},\circ _{r^{3}}\right) $ and $\func{Ad}_{r}$ has companion $r^{3}$ in $\Psi ^{R}\left( \mathbb{L}\right) $, thus the composition has companion $q^{3}r^{3}$, so up to composition with $\func{Aut}\left( \mathbb{L}\right) $, it is given by $\func{Ad}_{\left( q^{3}r^{3}\right) ^{\frac{1}{3}}}$. A similar expression for octonions has been derived in \cite{GrigorianOctobundle}.
\end{example}

As we have seen, $\Psi ^{R}\left( \mathbb{L}\right) $ acts transitively on $\mathcal{C}^{R}\left( \mathbb{L}\right) $, and moreover, for each $r\in \mathcal{C}^{R}\left( \mathbb{L}\right) $, the loops $\left( \mathbb{L},\circ _{r}\right) $ are all isomorphic to one another, and related via elements of $\Psi ^{R}\left( \mathbb{L}\right) $. Concretely, consider $\left( \mathbb{L},\circ _{r}\right) $ and suppose $h=\left( \alpha ,A\right) \in \Psi ^{R}\left( \mathbb{L}\right) $. Then, define the map
\begin{equation*}
h:\left( \mathbb{L},\circ _{r}\right) \longrightarrow \left( \mathbb{L},\circ _{h\left( r\right) }\right) ,
\end{equation*}
where $h$ acts on $\mathbb{L}$ via the partial action (i.e. via $\alpha $). Indeed, from (\ref{alpharcirc}), we have for $p,q\in h\left( \mathbb{L}\right) $
\begin{equation}
\alpha \left( \alpha ^{-1}\left( p\right) \circ _{r}\alpha ^{-1}\left( q\right) \right) =p\circ _{h\left( r\right) }q.  \label{alphaprod}
\end{equation}
Moreover, if we instead consider the action of $\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $, then given $h_{r}=\left( \alpha ,x\right) _{r}\in \Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $, $h_{r}\left( \mathbb{L}\right) \cong \left( \mathbb{L},\circ _{xr}\right) $. This is summarized in the theorem below.

\begin{theorem}
\label{thmLeftProd}Let $\mathbb{L}$ be a loop with the set of right companions $\mathcal{C}^{R}\left( \mathbb{L}\right) $. For every $r\in \mathcal{C}^{R}\left( \mathbb{L}\right) $ and every $h\in \Psi ^{R}\left( \mathbb{L}\right) $, the loop $\left( \mathbb{L},\circ _{r}\right) $ is isomorphic to $\left( \mathbb{L},\circ _{h\left( r\right) }\right) $. Moreover, if instead the action of $\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $ is considered, then an element of $\Psi ^{R}\left( \mathbb{L},\circ _{r}\right) $ with companion $x$ induces a loop isomorphism from $\left( \mathbb{L},\circ _{r}\right) $ to $\left( \mathbb{L},\circ _{xr}\right) $.
\end{theorem}

Now again let $h=\left( \alpha ,A\right) \in \Psi ^{R}\left( \mathbb{L}\right) $, and consider the action of $h$ on the nucleus. It is easy to see how the loop associator transforms under this map. Using (\ref{loopassoc2}) and (\ref{PsiActQuot}), we have
\begin{eqnarray}
\alpha \left( \left[ p,q,r\right] ^{\left( x\right) }\right)  &=&\alpha \left( \left( p\circ _{rx}q\right) /_{x}\left( p\circ _{x}q\right) \right)   \notag \\
&=&\left( \alpha \left( p\right) \circ _{\alpha \left( r\right) h\left( x\right) }\alpha \left( q\right) \right) /_{h\left( x\right) }\left( \alpha \left( p\right) \circ _{h\left( x\right) }\alpha \left( q\right) \right)   \notag \\
&=&\left[ \alpha \left( p\right) ,\alpha \left( q\right) ,\alpha \left( r\right) \right] ^{\left( h\left( x\right) \right) }.  \label{alphaassoc}
\end{eqnarray}
So in particular, taking $x=1$, $C\in \mathcal{N}^{R}\left( \mathbb{L}\right) $ if and only if $\alpha \left( C\right) \in \mathcal{N}^{R}\left( \mathbb{L},\circ _{A}\right) $. However, from (\ref{CRNucl}), we know that $C\in \mathcal{N}^{R}\left( \mathbb{L}\right) $ if and only if $\func{Ad}_{A}\left( C\right) \in \mathcal{N}^{R}\left( \mathbb{L},\circ _{A}\right) $. In particular, this means that $C\in \mathcal{N}^{R}\left( \mathbb{L}\right) $ if and only if $\alpha ^{-1}\left( \func{Ad}_{A}C\right) \in \mathcal{N}^{R}\left( \mathbb{L}\right) $. This defines a left action of $\Psi ^{R}\left( \mathbb{L}\right) $ on $\mathcal{N}^{R}\left( \mathbb{L}\right) $:
\begin{equation}
h^{\prime \prime }\left( C\right) =\func{Ad}_{A}^{-1}\left( \alpha \left( C\right) \right) =A\backslash h\left( C\right)   \label{nuclearaction}
\end{equation}
for $h=\left( \alpha ,A\right) \in \Psi ^{R}\left( \mathbb{L}\right) $ and $C\in \mathcal{N}^{R}\left( \mathbb{L}\right) $. The action (\ref{nuclearaction}) can be seen from the following considerations.
Recall that $\mathcal{N}^{R}\left( \mathbb{L}\right) ^{\func{op}}$ embeds in $\Psi ^{R}\left( \mathbb{L}\right) $ via the map $C\mapsto \iota _{2}\left( C\right) =\left( \func{id},C\right) $. The group $\Psi ^{R}\left( \mathbb{L}\right) $ acts on itself via the adjoint action, so let $h=\left( \alpha ,A\right) \in \Psi ^{R}\left( \mathbb{L}\right) $; then from (\ref{PsiAdjN}) recall that
\begin{equation}
h\left( \iota _{2}\left( C\right) \right) h^{-1}=\left( \alpha ,h\left( C\right) \right) h^{-1}=\left( \func{id},A^{\lambda }\cdot h\left( C\right) \right) .
\end{equation}
On the other hand, suppose
\begin{equation*}
\left( \alpha ,h\left( C\right) \right) h^{-1}=\left( \func{id},x\right) ,
\end{equation*}
so that
\begin{equation*}
\left( \alpha ,h\left( C\right) \right) =\left( \func{id},x\right) \left( \alpha ,A\right) =\left( \alpha ,Ax\right) .
\end{equation*}
Therefore, $x=A\backslash h\left( C\right) $. In particular, $A\backslash h\left( C\right) \in \mathcal{N}^{R}\left( \mathbb{L}\right) $. Thus the induced action on $\mathcal{N}^{R}\left( \mathbb{L}\right) $ is precisely $C\mapsto A\backslash h\left( C\right) =\func{Ad}_{A}^{-1}\left( \alpha \left( C\right) \right) $. Moreover, right multiplication of elements in $\mathbb{\mathring{L}}$ by elements of $\mathcal{N}^{R}\left( \mathbb{L}\right) $ is compatible with the corresponding actions of $\Psi ^{R}\left( \mathbb{L}\right) $.

\begin{lemma}
For any $s\in \mathbb{\mathring{L}}$, $C\in \mathcal{N}^{R}\left( \mathbb{L}\right) $, and $h\in \Psi ^{R}\left( \mathbb{L}\right) $, we have
\begin{equation}
h\left( sC\right) =h\left( s\right) h^{\prime \prime }\left( C\right) ,  \label{nuclearaction1}
\end{equation}
where $h^{\prime \prime }$ is the action (\ref{nuclearaction}).
\end{lemma}

\begin{proof}
Indeed, to show (\ref{nuclearaction1}), we have
\begin{eqnarray*}
h\left( sC\right)  &=&\alpha \left( s\right) h\left( C\right)  \\
&=&h\left( s\right) /A\cdot Ah^{\prime \prime }\left( C\right)  \\
&=&\left( h\left( s\right) /A\cdot A\right) h^{\prime \prime }\left( C\right)  \\
&=&h\left( s\right) \cdot h^{\prime \prime }\left( C\right) ,
\end{eqnarray*}
since $h^{\prime \prime }\left( C\right) \in \mathcal{N}^{R}\left( \mathbb{L}\right) $.
\end{proof}
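\begin{example}
In the group case the action (\ref{nuclearaction}) is easy to make explicit: for $h=\left( \alpha ,A\right) $ the full action is $h\left( C\right) =\alpha \left( C\right) A$, so
\begin{equation*}
h^{\prime \prime }\left( C\right) =A\backslash \left( \alpha \left( C\right) A\right) =A^{-1}\alpha \left( C\right) A,
\end{equation*}
i.e. one first applies the automorphism $\alpha $ and then conjugates by the companion.
\end{example}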
\section{Smooth loops}

\setcounter{equation}{0}\label{sectSmooth}Suppose the loop $\mathbb{L}$ is a smooth finite-dimensional manifold such that the loop multiplication and division are smooth functions. Define the maps
\begin{equation}
\begin{array}{c}
L_{r}:\mathbb{L}\longrightarrow \mathbb{L} \\
q\longmapsto rq
\end{array}
\label{lprod}
\end{equation}
and
\begin{equation}
\begin{array}{c}
R_{r}:\mathbb{L}\longrightarrow \mathbb{L} \\
q\longmapsto qr.
\end{array}
\label{rprod0}
\end{equation}
These are diffeomorphisms of $\mathbb{L}$, with smooth inverses $L_{r}^{-1}$ and $R_{r}^{-1}$ that correspond to left division and right division by $r$, respectively. Also, assume that $\Psi ^{R}\left( \mathbb{L}\right) $ acts smoothly on $\mathbb{L}$ (as before, $\mathbb{L}$ together with the full action of $\Psi ^{R}\left( \mathbb{L}\right) $ will be denoted by $\mathbb{\mathring{L}}$). The action of $\Psi ^{R}\left( \mathbb{L}\right) $ is thus given by a group homomorphism from $\Psi ^{R}\left( \mathbb{L}\right) $ to $\func{Diff}\left( \mathbb{L}\right) $, which in particular allows us to induce a Lie group structure on $\Psi ^{R}\left( \mathbb{L}\right) $. Similarly, $\func{PsAut}^{R}\left( \mathbb{L}\right) $ is then also a Lie group, and for any $s\in \mathbb{\mathring{L}}$, $\func{Aut}\left( \mathbb{L},\circ _{s}\right) \cong \func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( s\right) $ is then a Lie subgroup of $\Psi ^{R}\left( \mathbb{L}\right) $, and indeed of $\func{PsAut}^{R}\left( \mathbb{L}\right) $ as well. The assumption that pseudoautomorphisms act smoothly on $\mathbb{L}$ may be nontrivial; to the best of the author's knowledge, it is an open question whether this is always true. However, for the loop $U\mathbb{O}$ of unit octonions it does hold, as can be seen from Example \ref{ExPsOcto}.

Define $X$ to be a \emph{right fundamental vector field} if for any $q\in \mathbb{L}$ it is determined by a tangent vector at $1$ via right translations. That is, given a tangent vector $\xi \in T_{1}\mathbb{L}$, we define a corresponding right fundamental vector field $\rho \left( \xi \right) $ given by
\begin{equation}
\rho \left( \xi \right) _{q}=\left( R_{q}\right) _{\ast }\xi
\end{equation}
at any $q\in \mathbb{L}$. If $\mathbb{L}$ is a Lie group, then this definition is equivalent to the standard definition of a right-invariant vector field $X$ such that $\left( R_{q}\right) _{\ast }X_{p}=X_{pq}$. However, in the non-associative case $R_{q}\circ R_{p}\neq R_{pq}$, so the standard definition would not work, and a right fundamental vector field is not actually right-invariant in the usual sense. We can still say that the vector space of right fundamental vector fields has dimension $\dim \mathbb{L}$, and at any point they still form a basis for the tangent space. In particular, any smooth loop is parallelizable. However, this vector space is now in general not a Lie algebra under the Lie bracket of vector fields, which is to be expected, since $T_{1}\mathbb{L}$ does not necessarily carry a Lie algebra structure either.

Instead of right invariance, we see that given a right fundamental vector field $X=\rho \left( \xi \right) $,
\begin{eqnarray}
\left( R_{p}^{-1}\right) _{\ast }X_{q} &=&\left( R_{p}^{-1}\circ R_{q}\right) _{\ast }\xi   \notag \\
&=&\left( R_{q/p}^{\left( p\right) }\right) _{\ast }\xi ,  \label{rightvect}
\end{eqnarray}
where $R^{\left( p\right) }$ is the right product with respect to the operation $\circ _{p}$. This is because
\begin{eqnarray}
\left( R_{p}^{-1}\circ R_{q}\right) r &=&\left( rq\right) /p  \notag \\
&=&\left( r\cdot \left( q/p\cdot p\right) \right) /p  \notag \\
&=&r\circ _{p}\left( q/p\right) =R_{q/p}^{\left( p\right) }r,  \label{RinvR}
\end{eqnarray}
where we have used (\ref{rprod}).
\subsection{Exponential map}

\label{secExpMap}For some $\xi \in T_{1}\mathbb{L}$, define a flow $p_{\xi }$ on $\mathbb{L}$ given by
\begin{equation}
\left\{
\begin{array}{c}
\frac{dp_{\xi }\left( t\right) }{dt}=\left( R_{p_{\xi }\left( t\right) }\right) _{\ast }\xi  \\
p_{\xi }\left( 0\right) =1
\end{array}
\right.   \label{floweq}
\end{equation}
This generally has a solution for some sufficiently small time interval $\left( -\varepsilon ,\varepsilon \right) $, and gives only a local $1$-parameter subgroup. However, it is shown in \cite{Kuzmin1971,Malcev1955} that if $\mathbb{L}$ is at least power-associative, then $p_{\xi }\left( t+s\right) =p_{\xi }\left( t\right) p_{\xi }\left( s\right) $ for all $t,s$, and hence the solution can be extended to all $t$. Power-associativity is the weakest assumption required in order to be able to define $p_{\xi }\left( nh\right) =p_{\xi }\left( h\right) ^{n}$ unambiguously.

The solutions of (\ref{floweq}) define the (local) exponential map: $\exp \left( t\xi \right) :=p_{\xi }\left( t\right) $. The corresponding diffeomorphisms are then the right translations $R_{\exp \left( t\xi \right) }$. We will generally only need this locally, so the power-associativity assumption will not be necessary. Now consider a similar flow but with a different initial condition:
\begin{equation}
\left\{
\begin{array}{c}
\frac{dp_{\xi ,q}\left( t\right) }{dt}=\left( R_{p_{\xi ,q}\left( t\right) }\right) _{\ast }\xi  \\
p_{\xi ,q}\left( 0\right) =q
\end{array}
\right.   \label{floweq2}
\end{equation}
where $q\in \mathbb{L}$. Applying $R_{q}^{-1}$ and setting $\tilde{p}\left( t\right) =\faktor{p_{\xi ,q}\left( t\right)}{q}$, we obtain
\begin{equation}
\left\{
\begin{array}{c}
\frac{d\tilde{p}\left( t\right) }{dt}=\left( R_{q}^{-1}\circ R_{p_{\xi ,q}\left( t\right) }\right) _{\ast }\xi  \\
\tilde{p}\left( 0\right) =1
\end{array}
\right. .  \label{floweq2a}
\end{equation}
If $\mathbb{L}$ is associative, then $R_{q}^{-1}\circ R_{p_{\xi ,q}\left( t\right) }=R_{\left( p_{\xi ,q}\left( t\right) \right) /q}$, and thus $\tilde{p}\left( t\right) $ would satisfy (\ref{floweq}), and we could conclude that $p_{\xi ,q}\left( t\right) =\exp \left( t\xi \right) q$. However, in the general case we have (\ref{RinvR}), and hence $\tilde{p}\left( t\right) $ satisfies the following equation:
\begin{equation}
\left\{
\begin{array}{c}
\frac{d\tilde{p}\left( t\right) }{dt}=\left( R_{\tilde{p}\left( t\right) }^{\left( q\right) }\right) _{\ast }\xi  \\
\tilde{p}\left( 0\right) =1
\end{array}
\right. .  \label{floweq3}
\end{equation}
This is now an integral curve equation for $\xi $ on $\left( \mathbb{L},\circ _{q}\right) $, and hence for sufficiently small $t$ we can define a local exponential map $\exp _{q}$ for $\left( \mathbb{L},\circ _{q}\right) $:
\begin{equation}
\tilde{p}\left( t\right) =\exp _{q}\left( t\xi \right) ,  \label{ptildesol}
\end{equation}
so that
\begin{equation}
p_{\xi ,q}\left( t\right) =\exp _{q}\left( t\xi \right) q.  \label{pxiqsol}
\end{equation}
If $q\in \mathcal{C}^{R}\left( \mathbb{L}\right) $, then $\left( \mathbb{L},\circ _{q}\right) $ is isomorphic to $\mathbb{L}$, so if $\mathbb{L}$ is power-associative, then so is $\left( \mathbb{L},\circ _{q}\right) $, and hence the solutions (\ref{ptildesol}) are defined for all $t$.
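\begin{example}
As a concrete illustration (a classical computation, not specific to the present framework), let $\mathbb{L}=U\mathbb{O}$ be the Moufang loop of unit octonions, so that $T_{1}\mathbb{L}$ consists of the imaginary octonions. The octonions are alternative, hence power-associative, so (\ref{floweq}) is solvable for all $t$, and for an imaginary octonion $\xi $ with $\left\vert \xi \right\vert =1$ (so that $\xi ^{2}=-1$) the solution is the familiar one:
\begin{equation*}
\exp \left( t\xi \right) =\cos t+\xi \sin t.
\end{equation*}
\end{example}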
Suppose $h=\left( \alpha ,q\right) \in \Psi ^{R}\left( \mathbb{L}\right) $, and let $\hat{p}\left( t\right) =\alpha ^{-1}\left( \tilde{p}\left( t\right) \right) $. This then satisfies $\hat{p}\left( 0\right) =1$ and
\begin{equation}
\frac{d\hat{p}\left( t\right) }{dt}=\left( \alpha ^{-1}\right) _{\ast }\left( R_{\tilde{p}\left( t\right) }^{\left( q\right) }\right) _{\ast }\xi .  \label{dphat1}
\end{equation}
However, let $r\in \mathbb{L}$ and consider $R_{p}^{\left( q\right) }$:
\begin{eqnarray*}
R_{p}^{\left( q\right) }r &=&r\circ _{q}p=\alpha \left( \alpha ^{-1}\left( r\right) \cdot \alpha ^{-1}\left( p\right) \right)  \\
&=&\left( \alpha \circ R_{\alpha ^{-1}\left( p\right) }\circ \alpha ^{-1}\right) \left( r\right) .
\end{eqnarray*}
Thus,
\begin{equation}
R_{p}^{\left( q\right) }=\alpha \circ R_{\alpha ^{-1}\left( p\right) }\circ \alpha ^{-1},  \label{Rpqalpha}
\end{equation}
and hence (\ref{dphat1}) becomes
\begin{equation}
\frac{d\hat{p}\left( t\right) }{dt}=\left( R_{\hat{p}\left( t\right) }\right) _{\ast }\left( \left( \alpha ^{-1}\right) _{\ast }\xi \right) .
\end{equation}
This shows that $\hat{p}$ is a solution of (\ref{floweq}) with initial velocity vector $\left( \alpha ^{-1}\right) _{\ast }\xi \in T_{1}\mathbb{L}$, and is hence given by $\hat{p}=\exp \left( t\left( \alpha ^{-1}\right) _{\ast }\xi \right) $. Comparing with (\ref{ptildesol}), we see that in this case
\begin{equation}
\exp _{q}\left( t\xi \right) =\alpha \left( \exp \left( t\left( \alpha ^{-1}\right) _{\ast }\xi \right) \right) ,  \label{expqtalpha}
\end{equation}
and hence the solution $p_{\xi ,q}\left( t\right) $ of (\ref{floweq2}) can be written as
\begin{equation}
p_{\xi ,q}\left( t\right) =h\left( \exp \left( t\left( \alpha ^{-1}\right) _{\ast }\xi \right) \right) .  \label{expqtalpha2}
\end{equation}
We can summarize these findings in the theorem below.

\begin{theorem}
\label{thmLoopflow}Suppose $\mathbb{L}$ is a smooth loop and suppose $q\in \mathcal{C}^{R}\left( \mathbb{L}\right) $. Then, given $\xi \in T_{1}\mathbb{L}$, the equation
\begin{equation}
\left\{
\begin{array}{c}
\frac{dp\left( t\right) }{dt}=\left( R_{p\left( t\right) }\right) _{\ast }\xi  \\
p\left( 0\right) =q
\end{array}
\right.   \label{floweq4}
\end{equation}
has the solution
\begin{equation}
p\left( t\right) =\exp _{q}\left( t\xi \right) q
\end{equation}
for sufficiently small $t$, where
\begin{equation*}
\exp _{q}\left( t\xi \right) =\alpha \left( \exp \left( t\left( \alpha ^{-1}\right) _{\ast }\xi \right) \right) ,
\end{equation*}
where $\alpha $ is a right pseudoautomorphism of $\mathbb{L}$ that has companion $q$, and $\exp \left( t\xi \right) $ is defined as the solution of (\ref{floweq4}) with initial condition $p\left( 0\right) =1$.
In particular, $\xi $ defines a flow $\Phi _{\xi ,t}$, given by
\begin{equation}
\Phi _{\xi ,t}\left( q\right) =\exp _{q}\left( t\xi \right) q.  \label{flowPhi}
\end{equation}
\end{theorem}

\begin{remark}
The expression (\ref{expqtalpha}) can be made a bit more general. Suppose $\mathbb{L}_{1}$ and $\mathbb{L}_{2}$ are two loops and $\alpha :\mathbb{L}_{1}\longrightarrow \mathbb{L}_{2}$ is a loop homomorphism. If $\exp _{\left( 1\right) }$ and $\exp _{\left( 2\right) }$ are the exponential maps on $\mathbb{L}_{1}$ and $\mathbb{L}_{2}$, respectively, then the diagram in Figure \ref{loopexp} commutes.
\end{remark}

\begin{center}
\begin{tikzcd}[sep=large] & T_{1}\mathbb{L}_{1} \arrow[r,"\alpha_{*}"]
\arrow[d,"\func{exp}_{(1)}"] & T_{1}\mathbb{L}_{2}
\arrow[d,"\func{exp}_{(2)}"] & \\ & \mathbb{L}_{1} \arrow[r,"\alpha"] &
\mathbb{L}_{2} &
\end{tikzcd}
\captionof{figure}{Loop exponential maps.} \label{loopexp}
\end{center}

\begin{remark}
The action of $\Phi _{\xi ,t}$ given by (\ref{flowPhi}) looks like it depends on $q$; however, we easily see that for sufficiently small $t$, $\exp _{q}\left( t\xi \right) =\exp _{r}\left( t\xi \right) $ whenever $q$ and $r$ are on the same integral curve generated by $\xi $ (equivalently, in the same orbit of $\Phi _{\xi }$). This is consistent with the $1$-parameter subgroup property $\Phi _{\xi ,t}\left( \Phi _{\xi ,s}\left( q\right) \right) =\Phi _{\xi ,t+s}\left( q\right) $.

Indeed, consider $r=\exp _{q}\left( s\xi \right) q$ and $\tilde{r}=\exp _{q}\left( \left( t+s\right) \xi \right) q$. These are points that lie along the solution curve of (\ref{floweq4}). On the other hand, consider the solution of (\ref{floweq4}) with $p\left( 0\right) =r$. This is then given by $\hat{r}=\exp _{r}\left( t\xi \right) r$. However, clearly by uniqueness of solutions of ODEs, $\hat{r}=\tilde{r}$. So now,
\begin{eqnarray*}
\hat{r} &=&\tilde{r} \\
&=&\exp _{q}\left( \left( t+s\right) \xi \right) q=\left( \exp _{q}\left( t\xi \right) \circ _{q}\exp _{q}\left( s\xi \right) \right) q \\
&=&\exp _{q}\left( t\xi \right) \left( \exp _{q}\left( s\xi \right) q\right)  \\
&=&\exp _{q}\left( t\xi \right) r.
\end{eqnarray*}
Hence, we conclude that $\exp _{q}\left( t\xi \right) =\exp _{r}\left( t\xi \right) $.
\end{remark}
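\begin{example}
If $\mathbb{L}$ is a Lie group, then every $q$ is a companion of the trivial pseudoautomorphism pair $\left( \func{id},q\right) $, so Theorem \ref{thmLoopflow} with $\alpha =\func{id}$ gives $\exp _{q}=\exp $, and (\ref{flowPhi}) reduces to $\Phi _{\xi ,t}\left( q\right) =\exp \left( t\xi \right) q$, the familiar flow of the right-invariant vector field generated by $\xi $.
\end{example}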
\begin{remark}
Suppose $\left( \mathbb{L},\cdot \right) $ is power left-alternative, i.e. $x^{k}\left( x^{l}q\right) =x^{k+l}q$ for all $x,q\in \mathbb{L}$ and any integers $k,l$. In particular, this also means that $\left( \mathbb{L},\cdot \right) $ is power-associative and has the left inverse property, and powers of $x\in \mathbb{L}$ with respect to $\circ _{q}$ are equal to powers of $x$ with respect to $\cdot $. For any $q\in \mathbb{L}$, $\left( \mathbb{L},\circ _{q}\right) $ is then also power left-alternative. Now the right-hand side of (\ref{floweq3}) can be written as
\begin{equation}
\left( R_{\tilde{p}\left( t\right) }^{\left( q\right) }\right) _{\ast }\xi =\left. \frac{d}{ds}\left( r\left( s\right) \circ _{q}\tilde{p}\left( t\right) \right) \right\vert _{s=0},  \label{floweq3a}
\end{equation}
where $r\left( s\right) $ is a curve with $r\left( 0\right) =1$ and $r^{\prime }\left( 0\right) =\xi $, so we may take $r\left( s\right) =\tilde{p}\left( s\right) $. Suppose there exist integers $n,k$ and a real number $h$ such that $t=nh$ and $s=kh$. Then
\begin{eqnarray*}
\tilde{p}\left( s\right) \circ _{q}\tilde{p}\left( t\right)  &=&\tilde{p}\left( kh\right) \circ _{q}\tilde{p}\left( nh\right)  \\
&=&\left( \tilde{p}\left( h\right) ^{k}\cdot \tilde{p}\left( h\right) ^{n}q\right) /q \\
&=&\tilde{p}\left( h\right) ^{k+n}=\tilde{p}\left( kh\right) \tilde{p}\left( nh\right)  \\
&=&\tilde{p}\left( s\right) \tilde{p}\left( t\right) .
\end{eqnarray*}
This is independent of $n$ and $k$, and is hence true for any $s,t$. Thus we find that (\ref{floweq3a}) is equal to the right-hand side of (\ref{floweq}), so $\tilde{p}$ actually satisfies the same equation as $p$, and by uniqueness of solutions $\tilde{p}=p$. Hence, in this case, $\exp _{q}=\exp $. In general, however, the exponential map will not be unique and will depend on the choice of $q$.
\end{remark}

\subsection{Tangent algebra}

\label{secTangent}Suppose $\xi ,\gamma \in T_{1}\mathbb{L}$ and let $X=\rho \left( \xi \right) $ and $Y=\rho \left( \gamma \right) $ be the corresponding right fundamental vector fields on $\mathbb{L}$. Then, recall that the vector field Lie bracket of $X$ and $Y$ is given by
\begin{equation}
\left[ X,Y\right] _{p}=\left. \frac{d}{dt}\left( \left( \Phi _{t}^{-1}\right) _{\ast }\left( Y_{\Phi _{t}\left( p\right) }\right) \right) \right\vert _{t=0},  \label{vecbracket}
\end{equation}
where $\Phi _{t}=\Phi \left( \xi ,t\right) $ is the flow generated by $X$. For sufficiently small $t$, we have $\Phi _{t}\left( p\right) =\exp _{p}\left( t\xi \right) p$, and thus
\begin{equation*}
Y_{\Phi _{t}\left( p\right) }=\left( R_{\exp _{p}\left( t\xi \right) p}\right) _{\ast }\gamma .
\end{equation*}
Hence
\begin{equation}
\left( \Phi _{t}^{-1}\right) _{\ast }\left( Y_{\Phi _{t}\left( p\right) }\right) =\left( L_{\exp _{p}\left( t\xi \right) }^{-1}\circ R_{\exp _{p}\left( t\xi \right) p}\right) _{\ast }\gamma .  \label{Phinegt}
\end{equation}
Now, right translating back to $T_{1}\mathbb{L}$, we obtain
\begin{equation}
\left( R_{p}^{-1}\right) _{\ast }\left[ X,Y\right] _{p}=\left. \frac{d}{dt}\left( \left( R_{p}^{-1}\circ L_{\exp _{p}\left( t\xi \right) }^{-1}\circ R_{\exp _{p}\left( t\xi \right) p}\right) _{\ast }\gamma \right) \right\vert _{t=0}.  \label{Rpbrack0}
\end{equation}
In general, let $q,x,y\in \mathbb{L}$; then
\begin{eqnarray*}
\left( R_{p}^{-1}\circ L_{x}^{-1}\circ R_{yp}\right) q &=&\faktor{\left( x\backslash \left( q\cdot yp\right) \right)}{p} \\
&=&\faktor{\left( x\backslash \left( \left( q\cdot yp\right) /p\cdot p\right) \right)}{p} \\
&=&x\backslash _{p}\left( q\circ _{p}y\right)  \\
&=&\left( \left( L_{x}^{\left( p\right) }\right) ^{-1}\circ R_{y}^{\left( p\right) }\right) q,
\end{eqnarray*}
where we have used (\ref{rprodqleft}). Hence (\ref{Rpbrack0}) becomes
\begin{eqnarray}
\left( R_{p}^{-1}\right) _{\ast }\left[ X,Y\right] _{p} &=&\left. \frac{d}{dt}\left( \left( \left( L_{\exp _{p}\left( t\xi \right) }^{\left( p\right) }\right) ^{-1}\circ R_{\exp _{p}\left( t\xi \right) }^{\left( p\right) }\right) _{\ast }\gamma \right) \right\vert _{t=0}  \notag \\
&=&\left. \frac{d}{dt}\left( \left( \func{Ad}_{\exp _{p}\left( t\xi \right) }^{\left( p\right) }\right) _{\ast }^{-1}\gamma \right) \right\vert _{t=0}  \notag \\
&=&-\left. \frac{d}{dt}\left( \left( \func{Ad}_{\exp _{p}\left( t\xi \right) }^{\left( p\right) }\right) _{\ast }\gamma \right) \right\vert _{t=0}  \notag \\
&=&-\left. d_{\xi }\left( \func{Ad}^{\left( p\right) }\right) _{\ast }\right\vert _{1}\left( \gamma \right) .  \label{brackdtAd}
\end{eqnarray}
Here, $\left( \func{Ad}_{x}^{\left( p\right) }\right) _{\ast }$ denotes the induced adjoint action of $\mathbb{L}$ on $T_{1}\mathbb{L}$. As remarked earlier, this is not an action in the sense of group actions. Similarly as for Lie groups and Lie algebras, we can also think of $\left( \func{Ad}^{\left( p\right) }\right) _{\ast }:\mathbb{L}\longrightarrow \func{End}\left( T_{1}\mathbb{L}\right) $, and then (\ref{brackdtAd}) is just the differential of this map at $1\in \mathbb{L}$ in the direction $\xi \in T_{1}\mathbb{L}$. The differential of $\left( \func{Ad}^{\left( p\right) }\right) _{\ast }$ at an arbitrary point in $\mathbb{L}$ is given in Lemma \ref{lemdtAd}. This now allows us to define the tangent adjoint map $\func{ad}^{\left( p\right) }$ on $T_{1}\mathbb{L}$.

\begin{definition}
For any $\xi ,\gamma \in T_{1}\mathbb{L}$, the tangent adjoint map $\func{ad}_{\xi }^{\left( p\right) }:T_{1}\mathbb{L}\longrightarrow T_{1}\mathbb{L}$ is defined as
\begin{equation}
\func{ad}_{\xi }^{\left( p\right) }\left( \gamma \right) =\left. d_{\xi }\left( \func{Ad}^{\left( p\right) }\right) _{\ast }\right\vert _{1}\left( \gamma \right) =-\left( R_{p}^{-1}\right) _{\ast }\left[ X,Y\right] _{p}.  \label{ladpx}
\end{equation}
\end{definition}

The negative sign in (\ref{ladpx}) is there to be consistent with the corresponding definitions for Lie groups with right-invariant vector fields. We then define the $p$-bracket $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ on $T_{1}\mathbb{L}$ as
\begin{equation}
\left[ \xi ,\gamma \right] ^{\left( p\right) }=\func{ad}_{\xi }^{\left( p\right) }\left( \gamma \right) .  \label{T1Lbrack}
\end{equation}
From (\ref{ladpx}) it is clear that it is skew-symmetric in $\xi $ and $\gamma $. Equivalently, we can say
\begin{equation}
\left[ \left( R_{p}^{-1}\right) _{\ast }X_{p},\left( R_{p}^{-1}\right) _{\ast }Y_{p}\right] ^{\left( p\right) }=-\left( R_{p}^{-1}\right) _{\ast }\left[ X,Y\right] _{p}.  \label{T1Lbrack2}
\end{equation}

\begin{definition}
The vector space $T_{1}\mathbb{L}$ together with the bracket $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ is the \emph{tangent algebra }or $\mathbb{L}$\emph{-algebra }$\mathfrak{l}^{\left( p\right) }$ of $\left( \mathbb{L},\circ _{p}\right) $.
\end{definition}

This is obviously a generalization of a Lie algebra.
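\begin{example}
For a Lie group, $\func{Ad}^{\left( p\right) }=\func{Ad}$ for every $p$, so (\ref{T1Lbrack}) is independent of $p$ and recovers the ordinary Lie bracket on the Lie algebra, with the sign in (\ref{ladpx}) matching the usual convention for right-invariant vector fields. The genuinely new feature in the non-associative setting is the dependence of the bracket on the base point $p$.
\end{example}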
However, since there is now a bracket $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ corresponding to each point $p\in \mathbb{L}$, it does not make sense to try and express $\left[ \left[ \cdot ,\cdot \right] ^{\left( p\right) },\cdot \right] ^{\left( p\right) }$ in terms of Lie brackets of corresponding vector fields. Hence, the Jacobi identity for $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ cannot be inferred, as expected. From (\ref{T1Lbrack2}), we cannot even infer that the bracket of two right fundamental vector fields is again a right fundamental vector field; in fact, at each point $p$ it will be the pushforward of the bracket on $T_{1}\mathbb{L}$ with respect to $p$. Overall, we can summarize the properties of the bracket in the theorem below.

\begin{theorem}
Let $\xi ,\gamma \in T_{1}\mathbb{L}$ and suppose $X=\rho \left( \xi \right) $ and $Y=\rho \left( \gamma \right) $ are the corresponding right fundamental vector fields on $\mathbb{L}$. Then, for any $p\in \mathbb{L}$,
\begin{equation}
\left[ \xi ,\gamma \right] ^{\left( p\right) }=\func{ad}_{\xi }^{\left( p\right) }\left( \gamma \right) =\left. \frac{d}{dt}\left( \left( \func{Ad}_{\exp \left( t\xi \right) }^{\left( p\right) }\right) _{\ast }\gamma \right) \right\vert _{t=0}=-\left( R_{p}^{-1}\right) _{\ast }\left[ X,Y\right] _{p},  \label{Rpbrack}
\end{equation}
and moreover,
\begin{eqnarray}
\left[ \xi ,\gamma \right] ^{\left( p\right) } &=&\left. \frac{d^{2}}{dtd\tau }\left[ \exp \left( t\xi \right) ,\exp \left( \tau \gamma \right) \right] ^{\left( \mathbb{L},\circ _{p}\right) }\right\vert _{t,\tau =0}  \notag \\
&=&\left. \frac{d^{2}}{dtd\tau }\exp \left( t\xi \right) \circ _{p}\exp \left( \tau \gamma \right) \right\vert _{t,\tau =0}  \label{brack2deriv} \\
&&-\left. \frac{d^{2}}{dtd\tau }\exp \left( \tau \gamma \right) \circ _{p}\exp \left( t\xi \right) \right\vert _{t,\tau =0}.  \notag
\end{eqnarray}
Here $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ is the $\mathbb{L}$-algebra bracket on $\mathfrak{l}^{\left( p\right) }$, $\left[ \cdot ,\cdot \right] _{p}$ refers to the value of the vector field Lie bracket at $p\in \mathbb{L}$, and $\left[ \cdot ,\cdot \right] ^{\left( \mathbb{L},\circ _{p}\right) }$ is the loop commutator (\ref{loopcomm2}) on $\left( \mathbb{L},\circ _{p}\right) $.
\end{theorem}
\begin{proof}
We have already shown (\ref{Rpbrack}), so let us prove (\ref{brack2deriv}). Recall from (\ref{loopcomm2}) that
\begin{equation}
\left[ \exp \left( t\xi \right) ,\exp \left( \tau \gamma \right) \right] ^{\left( \mathbb{L},\circ _{p}\right) }=\func{Ad}_{\exp \left( t\xi \right) }^{\left( p\right) }\left( \exp \left( \tau \gamma \right) \right) /_{p}\exp \left( \tau \gamma \right) .  \label{commexp}
\end{equation}
Differentiating (\ref{commexp}) with respect to $\tau $ and evaluating at $\tau =0$ using Lemma \ref{lemQuotient} gives
\begin{eqnarray}
\left. \frac{d}{d\tau }\left[ \exp \left( t\xi \right) ,\exp \left( \tau \gamma \right) \right] ^{\left( \mathbb{L},\circ _{p}\right) }\right\vert _{\tau =0} &=&\left. \frac{d}{d\tau }\func{Ad}_{\exp \left( t\xi \right) }^{\left( p\right) }\left( \exp \left( \tau \gamma \right) \right) \right\vert _{\tau =0}  \notag \\
&&-\left. \frac{d}{d\tau }\exp \left( \tau \gamma \right) \right\vert _{\tau =0}  \notag \\
&=&\left( \func{Ad}_{\exp \left( t\xi \right) }^{\left( p\right) }\right) _{\ast }\gamma -\gamma ,
\end{eqnarray}
where we have also used the definition of $\exp _{p}$ via (\ref{floweq3}). This gives us the first part of (\ref{brack2deriv}). Now, using Lemma \ref{lemQuotient} again, we can differentiate $\left( \func{Ad}_{\exp \left( t\xi \right) }^{\left( p\right) }\right) _{\ast }\gamma $ with respect to $t$ to get the second part:
\begin{eqnarray*}
\left. \frac{d}{dt}\left( \left( \func{Ad}_{\exp \left( t\xi \right) }^{\left( p\right) }\right) _{\ast }\gamma \right) \right\vert _{t=0} &=&\left. \frac{d^{2}}{dtd\tau }\left( \left( \exp \left( t\xi \right) \circ _{p}\exp \left( \tau \gamma \right) \right) /_{p}\exp \left( t\xi \right) \right) \right\vert _{t,\tau =0} \\
&=&\left. \frac{d^{2}}{dtd\tau }\left( \exp \left( t\xi \right) \circ _{p}\exp \left( \tau \gamma \right) \right) \right\vert _{t,\tau =0} \\
&&-\left. \frac{d^{2}}{dtd\tau }\exp \left( \tau \gamma \right) \circ _{p}\exp \left( t\xi \right) \right\vert _{t,\tau =0}.
\end{eqnarray*}
\end{proof}

\begin{remark}
Applying (\ref{brack2deriv}) to the Moufang loop of unit octonions and the corresponding $\mathbb{L}$-algebra of imaginary octonions shows that, as expected, the bracket on the $\mathbb{L}$-algebra coincides with the commutator of imaginary octonions in the algebra of octonions.
\end{remark}

Although $\mathbb{L}$ and $\mathfrak{l}$ are not in general a Lie group and a Lie algebra, there are analogs of actions of these spaces on one another, which we summarize below.

Let $s\in \mathbb{\mathring{L}}$, $A\in \mathbb{L}$, and $\xi ,\eta \in \mathfrak{l}$; then we have the following:

\begin{enumerate}
\item Action of $\mathbb{L}$ on $\mathbb{\mathring{L}}$: $A\cdot s=As.$

\item Adjoint action of $\left( \mathbb{L},\circ _{s}\right) $ on $\mathbb{L}$: $A\cdot B=\func{Ad}_{A}^{\left( s\right) }\left( B\right) =\left( A\circ _{s}B\right) /_{s}A.$

\item Action of $\left( \mathbb{L},\circ _{s}\right) $ on $\mathfrak{l}$: $A\cdot \xi =\left( \func{Ad}_{A}^{\left( s\right) }\right) _{\ast }\xi .$

\item Action of $\mathfrak{l}^{\left( s\right) }$ on itself: $\xi \cdot _{s}\eta =\left[ \xi ,\eta \right] ^{\left( s\right) }.$

\item Action of $\mathfrak{l}$ on $\mathbb{\mathring{L}}$: $\xi \cdot s=\left( R_{s}\right) _{\ast }\xi =\left. \frac{d}{dt}\exp _{s}\left( t\xi \right) s\right\vert _{t=0}.$
\end{enumerate}

\begin{remark}
There may be some confusion about notation, because we will sometimes consider the same objects but in different categories.
Generally, for the loop $\mathbb{L}$, the notation \textquotedblleft $\mathbb{L}$\textquotedblright\ will denote the underlying set, the underlying smooth manifold, the loop, and the $G$-set with the partial action of $\Psi ^{R}\left( \mathbb{L}\right) $. Similarly, $\mathbb{\mathring{L}}$ will denote the same underlying set and the same underlying smooth manifold, but will be different as a $G$-set: it has the full action of $\Psi ^{R}\left( \mathbb{L}\right) $. Since $\mathbb{L}$ and $\mathbb{\mathring{L}}$ are identical as smooth manifolds, they have the same tangent space at $1$. Generally, we will only refer to $\mathbb{\mathring{L}}$ if we need to emphasize the group action. For the $\mathbb{L}$-algebra, the notation \textquotedblleft $\mathfrak{l}$\textquotedblright\ will denote both the underlying vector space and the vector space with the algebra structure on $T_{1}\mathbb{L}$ induced from the loop $\mathbb{L}$. For different values of $p\in \mathbb{L}$, $\mathfrak{l}^{\left( p\right) }$ is identical to $\mathfrak{l}$ as a vector space, but has a different algebra structure. We will use the notation $\mathfrak{l}^{\left( p\right) }$ to emphasize the algebra structure.
\end{remark}

\subsection{Structural equation}

\label{sectStruct}Let us now define an analog of the Maurer-Cartan form on right fundamental vector fields. Given $p\in \mathbb{L}$ and $\xi \in \mathfrak{l}$, define $\theta _{p}$ by
\begin{equation}
\theta _{p}\left( \rho \left( \xi \right) _{p}\right) =\left( R_{p}^{-1}\right) _{\ast }\rho \left( \xi \right) _{p}=\xi .  \label{MCloop}
\end{equation}
Thus, this is an $\mathfrak{l}$-valued $1$-form. The right fundamental vector fields still form a global frame for $T\mathbb{L}$, so this is sufficient to define the $1$-form $\theta $. Just as the right fundamental vector field $\rho \left( \xi \right) $ is generally not right-invariant, neither is $\theta $. Indeed, let $q\in \mathbb{L}$ and consider $\left( R_{q}^{-1}\right) ^{\ast }\theta $. Then, given $X_{p}=\left( R_{p}\right) _{\ast }\xi \in T_{p}\mathbb{L}$,
\begin{eqnarray}
\left( \left( R_{q}^{-1}\right) ^{\ast }\theta \right) _{p}\left( X_{p}\right)  &=&\theta _{p/q}\left( \left( R_{q}^{-1}\circ R_{p}\right) _{\ast }\xi \right)   \notag \\
&=&\left( R_{p/q}^{-1}\circ R_{q}^{-1}\circ R_{p}\right) _{\ast }\xi   \notag \\
&=&\left( R_{p/q}^{-1}\circ R_{p/q}^{\left( q\right) }\right) _{\ast }\xi ,  \label{thetarighttr}
\end{eqnarray}
where the same idea as in (\ref{rightvect}) was used.

Now consider $d\theta $. Generally, for a $1$-form we have
\begin{equation}
d\theta \left( X,Y\right) =X\theta \left( Y\right) -Y\theta \left( X\right) -\theta \left( \left[ X,Y\right] \right) .
\end{equation}
Suppose $X$, $Y$ are right fundamental; then from (\ref{T1Lbrack2}), we get
\begin{equation}
\left( d\theta \right) _{p}\left( X,Y\right) -\left[ \theta \left( X\right) ,\theta \left( Y\right) \right] ^{\left( p\right) }=0.  \label{MCequation1}
\end{equation}
However, since right fundamental vector fields span the space of vector fields on $\mathbb{L}$, (\ref{MCequation1}) is true for any vector fields, and we obtain the following analog of the Maurer-Cartan equation.

\begin{theorem}
\label{thmMC}Let $p\in \mathbb{L}$ and let $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ be the bracket on $\mathfrak{l}^{\left( p\right) }$. Then $\theta $ satisfies the following equation at $p$:
\begin{equation}
\left( d\theta \right) _{p}-\frac{1}{2}\left[ \theta ,\theta \right] ^{\left( p\right) }=0,  \label{MCequation2}
\end{equation}
where $\left[ \theta ,\theta \right] ^{\left( p\right) }$ is the bracket of $\mathbb{L}$-algebra-valued $1$-forms, such that for any $X,Y\in T_{p}\mathbb{L}$, $\frac{1}{2}\left[ \theta ,\theta \right] ^{\left( p\right) }\left( X,Y\right) =\left[ \theta \left( X\right) ,\theta \left( Y\right) \right] ^{\left( p\right) }$.

Let $q\in \mathbb{L}$ and $\theta ^{\left( q\right) }=\left( R_{q}\right) ^{\ast }\theta $; then $\theta ^{\left( q\right) }$ satisfies
\begin{equation}
\left( d\theta ^{\left( q\right) }\right) _{p}-\frac{1}{2}\left[ \theta ^{\left( q\right) },\theta ^{\left( q\right) }\right] ^{\left( pq\right) }=0,  \label{MCequation3}
\end{equation}
where $\left[ \cdot ,\cdot \right] ^{\left( pq\right) }$ is the bracket on $\mathfrak{l}^{\left( pq\right) }$.
\end{theorem}

\begin{proof}
The first part already follows from (\ref{MCequation1}). For the second part, by applying $\left( R_{q}\right) ^{\ast }$ to (\ref{MCequation2}), we easily see that $\theta ^{\left( q\right) }$ satisfies (\ref{MCequation2}) with the translated bracket $\left[ \cdot ,\cdot \right] ^{\left( pq\right) }$, and hence we get (\ref{MCequation3}).
\end{proof}

\begin{remark}
The $1$-form $\theta ^{\left( q\right) }$ can be seen as translating a vector in $T_{p}\mathbb{L}$ by $R_{q}$ to $T_{pq}\mathbb{L}$, and then by $R_{pq}^{-1}$ back to $\mathfrak{l}$. However, given the identity $\left( xq\right) /\left( pq\right) =x/_{q}p$, we see that $\theta ^{\left( q\right) }$ is just the loop Maurer-Cartan form on $\left( \mathbb{L},\circ _{q}\right) $.
\end{remark}
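\begin{example}
As a consistency check, suppose $\mathbb{L}$ is a Lie group. Then the bracket is independent of $p$, and (\ref{MCequation2}) is precisely the Maurer-Cartan equation $d\theta -\frac{1}{2}\left[ \theta ,\theta \right] =0$ satisfied by the right-invariant Maurer-Cartan form. In this case $\theta ^{\left( q\right) }=\theta $ for all $q$, since $\theta $ is right-invariant, which is consistent with (\ref{MCequation3}).
\end{example}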
The obvious key difference from the Lie group picture here is that the bracket in (\ref{MCequation2}) is non-constant on $\mathbb{L}$, i.e. given a basis, the structure \textquotedblleft constants\textquotedblright\ would no longer be constants. In particular, the Jacobi identity is the integrability condition for the Maurer-Cartan equation on Lie groups; however, here we see that the right-hand side of the Jacobi identity is related to a ternary form given by the derivative of the bracket. For any $\xi ,\eta ,\gamma \in \mathfrak{l}^{\left( p\right) }$, define
\begin{equation}
\func{Jac}^{\left( p\right) }\left( \xi ,\eta ,\gamma \right) =\left[ \xi ,\left[ \eta ,\gamma \right] ^{\left( p\right) }\right] ^{\left( p\right) }+\left[ \eta ,\left[ \gamma ,\xi \right] ^{\left( p\right) }\right] ^{\left( p\right) }+\left[ \gamma ,\left[ \xi ,\eta \right] ^{\left( p\right) }\right] ^{\left( p\right) }.  \label{Jac}
\end{equation}
We also need the following definition.

\begin{definition}
Define the \emph{bracket function }$b:\mathbb{\mathring{L}}\longrightarrow \mathfrak{l}\otimes \Lambda ^{2}\mathfrak{l}^{\ast }$ to be the map that takes $p\mapsto \left[ \cdot ,\cdot \right] ^{\left( p\right) }\in \mathfrak{l}\otimes \Lambda ^{2}\mathfrak{l}^{\ast }$, so that $b\left( \theta ,\theta \right) $ is an $\mathfrak{l}$-valued $2$-form on $\mathbb{L}$, i.e. $b\left( \theta ,\theta \right) \in \Omega ^{2}\left( \mathfrak{l}\right) $.
\end{definition}

Lemma \ref{lemAssoc} below gives the differential of $b$. The proof is given in Appendix \ref{secAppendix}.

\begin{lemma}
\label{lemAssoc}For fixed $\eta ,\gamma \in \mathfrak{l}$,
\begin{equation}
\left. db\right\vert _{p}\left( \eta ,\gamma \right) =\left[ \eta ,\gamma ,\theta _{p}\right] ^{\left( p\right) }-\left[ \gamma ,\eta ,\theta _{p}\right] ^{\left( p\right) },  \label{db1}
\end{equation}
where $\left[ \cdot ,\cdot ,\cdot \right] ^{\left( p\right) }$ is the $\mathbb{L}$\emph{-algebra associator }on $\mathfrak{l}^{\left( p\right) }$, given by
\begin{eqnarray}
\left[ \eta ,\gamma ,\xi \right] ^{\left( p\right) } &=&\left. \frac{d^{3}}{dtd\tau d\tau ^{\prime }}\exp \left( \tau \eta \right) \circ _{p}\left( \exp \left( \tau ^{\prime }\gamma \right) \circ _{p}\exp \left( t\xi \right) \right) \right\vert _{t,\tau ,\tau ^{\prime }=0}  \label{Lalgassoc} \\
&&-\left. \frac{d^{3}}{dtd\tau d\tau ^{\prime }}\left( \exp \left( \tau \eta \right) \circ _{p}\exp \left( \tau ^{\prime }\gamma \right) \right) \circ _{p}\exp \left( t\xi \right) \right\vert _{t,\tau ,\tau ^{\prime }=0}.  \notag
\end{eqnarray}
Moreover,
\begin{equation}
\left[ \eta ,\gamma ,\xi \right] ^{\left( p\right) }=\left. \frac{d^{3}}{dtd\tau d\tau ^{\prime }}\left[ \exp \left( \tau \eta \right) ,\exp \left( \tau ^{\prime }\gamma \right) ,\exp \left( t\xi \right) \right] ^{\left( \mathbb{L},\circ _{p}\right) }\right\vert _{t,\tau ,\tau ^{\prime }=0},  \label{Lalgassoc2}
\end{equation}
where $\left[ \cdot ,\cdot ,\cdot \right] ^{\left( \mathbb{L},\circ _{p}\right) }$ is the loop associator on $\left( \mathbb{L},\circ _{p}\right) $, as defined by (\ref{loopassoc2}).
\end{lemma}

The skew-symmetric combination of associators, as in (\ref{db1}), will occur frequently later on, so for convenience let us define
\begin{equation}
a_{p}\left( \eta ,\gamma ,\xi \right) =\left[ \eta ,\gamma ,\xi \right] ^{\left( p\right) }-\left[ \gamma ,\eta ,\xi \right] ^{\left( p\right) },  \label{ap}
\end{equation}
which we can call the \emph{left-alternating associator}, so that, in particular, (\ref{db1}) becomes
\begin{equation}
\left. db\right\vert _{p}\left( \eta ,\gamma \right) =a_{p}\left( \eta ,\gamma ,\theta _{p}\right) .  \label{db2}
\end{equation}

The loop Maurer-Cartan equation can be rewritten as
\begin{equation}
d\theta =\frac{1}{2}b\left( \theta ,\theta \right) ,  \label{MC3}
\end{equation}
and hence we see that $b\left( \theta ,\theta \right) $ is an exact form, so in particular $d\left( b\left( \theta ,\theta \right) \right) =0$.
We will now use this to derive a generalization of the Jacobi identity.

\begin{theorem}
\label{thmJacobi}The maps $a$ and $b$ satisfy the relation
\begin{equation}
b\left( \theta ,b\left( \theta ,\theta \right) \right) =a\left( \theta ,\theta ,\theta \right) ,  \label{Jac3}
\end{equation}
where wedge products are assumed. Equivalently, if $\xi ,\eta ,\gamma \in \mathfrak{l}$ and $p\in \mathbb{L}$, then
\begin{equation}
\func{Jac}^{\left( p\right) }\left( \xi ,\eta ,\gamma \right) =a_{p}\left( \xi ,\eta ,\gamma \right) +a_{p}\left( \eta ,\gamma ,\xi \right) +a_{p}\left( \gamma ,\xi ,\eta \right) .  \label{Jac2}
\end{equation}
\end{theorem}

\begin{proof}
We know that $d\left( b\left( \theta ,\theta \right) \right) =0$, and thus, using (\ref{db1}) and (\ref{MC3}), we have
\begin{eqnarray*}
0 &=&d\left( b\left( \theta ,\theta \right) \right)  \\
&=&\left( db\right) \left( \theta ,\theta \right) +b\left( d\theta ,\theta \right) -b\left( \theta ,d\theta \right)  \\
&=&a\left( \theta ,\theta ,\theta \right) -b\left( \theta ,b\left( \theta ,\theta \right) \right) .
\end{eqnarray*}
So indeed, (\ref{Jac3}) holds. Now let $X,Y,Z$ be vector fields on $\mathbb{L}$ such that $X=\rho \left( \xi \right) $, $Y=\rho \left( \eta \right) $, $Z=\rho \left( \gamma \right) $. Then $a\left( \theta ,\theta ,\theta \right) _{p}\left( X,Y,Z\right) =2\func{Jac}^{\left( p\right) }\left( \xi ,\eta ,\gamma \right) $, and $\frac{1}{2}b\left( \theta ,b\left( \theta ,\theta \right) \right) _{p}\left( X,Y,Z\right) $ gives the right-hand side of (\ref{Jac2}).
\end{proof}
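\begin{example}
In the associative case all of the associators (\ref{Lalgassoc}) vanish, so $a_{p}=0$, and (\ref{Jac2}) reduces to the classical Jacobi identity $\func{Jac}\left( \xi ,\eta ,\gamma \right) =0$, as expected for a Lie algebra.
\end{example>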
\begin{remark}
An algebra $\left( A,\left[ \cdot ,\cdot \right] ,\left[ \cdot ,\cdot ,\cdot \right] \right) $ with a skew-symmetric bracket $\left[ \cdot ,\cdot \right] $ and a ternary multilinear bracket $\left[ \cdot ,\cdot ,\cdot \right] $ that satisfies (\ref{Jac2}) is known as an \emph{Akivis algebra} \cite{Akivis1,ShestakovAkivis1}. If $\left( \mathbb{L},\circ _{p}\right) $ is left-alternative, we find from (\ref{Lalgassoc}) that for any $\xi ,\eta \in \mathfrak{l}$, $\left[ \xi ,\xi ,\eta \right] ^{\left( p\right) }=0$, that is, the $\mathbb{L}$-algebra associator on $\mathfrak{l}^{\left( p\right) }$ is skew-symmetric in the first two entries, and thus $a_{p}=2\left[ \cdot ,\cdot ,\cdot \right] ^{\left( p\right) }$. If the algebra is alternative, then $\func{Jac}^{\left( p\right) }\left( \xi ,\eta ,\gamma \right) =6\left[ \xi ,\eta ,\gamma \right] ^{\left( p\right) }$. It is known \cite{HofmannStrambach} that, conversely, to an alternative Akivis algebra there corresponds a unique, up to local isomorphism, local analytic alternative loop. If $\left( \mathbb{L},\circ _{p}\right) $ is a left Bol loop (so that it is left-alternative), then the corresponding algebra on $\mathfrak{l}^{\left( p\right) }$ will be a \emph{Bol algebra}, where $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ and $\left[ \cdot ,\cdot ,\cdot \right] ^{\left( p\right) }$ satisfy additional identities \cite{Akivis1,OnishchikVinberg,SabininMikheev1985}. If $\left( \mathbb{L},\circ _{p}\right) $ is a Moufang loop (so in particular it is alternative), then the associator is totally skew-symmetric and the algebra on $\mathfrak{l}^{\left( p\right) }$ is then a \emph{Malcev algebra}. It then satisfies, in addition, the following identity \cite{Kuzmin1971,Malcev1955}:
\begin{equation}
\left[ \xi ,\eta ,\left[ \xi ,\gamma \right] ^{\left( p\right) }\right] ^{\left( p\right) }=\left[ \left[ \xi ,\eta ,\gamma \right] ^{\left( p\right) },\xi \right] ^{\left( p\right) }.  \label{MalcevId}
\end{equation}
Moreover, all non-Lie simple Malcev algebras have been classified \cite{Kuzmin1968b}: these are either the imaginary octonions over the real numbers, the imaginary octonions over the complex numbers, or the split octonions over the real numbers.
\end{remark}

We will generally not distinguish the notation between loop associators and $\mathbb{L}$-algebra associators; it should be clear from the context which is being used. Moreover, it will be convenient to define mixed associators between elements of $\mathbb{L}$ and $\mathfrak{l}$. For example, an $\left( \mathbb{L},\mathbb{L},\mathfrak{l}\right) $-associator is defined for any $p,q\in \mathbb{L}$ and $\xi \in \mathfrak{l}$ as
\begin{equation}
\left[ p,q,\xi \right] ^{\left( s\right) }=\left( L_{p}^{\left( s\right) }\circ L_{q}^{\left( s\right) }\right) _{\ast }\xi -\left( L_{p\circ _{s}q}^{\left( s\right) }\right) _{\ast }\xi \in T_{p\circ _{s}q}\mathbb{L},  \label{pqxiassoc}
\end{equation}
and an $\left( \mathbb{L},\mathfrak{l},\mathfrak{l}\right) $-associator is defined for any $p\in \mathbb{L}$ and $\eta ,\xi \in \mathfrak{l}$ as
\begin{eqnarray}
\left[ p,\eta ,\xi \right] ^{\left( s\right) } &=&\left. \frac{d^{2}}{dtd\tau }\left( p\circ _{s}\left( \exp \left( t\eta \right) \circ _{s}\exp \left( \tau \xi \right) \right) \right) \right\vert _{t,\tau =0}  \notag \\
&&-\left. \frac{d^{2}}{dtd\tau }\left( \left( p\circ _{s}\exp \left( t\eta \right) \right) \circ _{s}\exp \left( \tau \xi \right) \right) \right\vert _{t,\tau =0},  \label{etapxiassoc}
\end{eqnarray}
where we see that $\left[ p,\eta ,\xi \right] ^{\left( s\right) }\in T_{p}\mathbb{L}$. Similar definitions apply for other combinations.

Let us now consider the action of loop homomorphisms on $\mathbb{L}$-algebras.

\begin{lemma}
\label{lemAlgHom}Suppose $\mathbb{L}_{1}$ and $\mathbb{L}_{2}$ are two smooth loops with tangent algebras at the identity $\mathfrak{l}_{1}$ and $\mathfrak{l}_{2}$, respectively. Let $\alpha :\mathbb{L}_{1}\longrightarrow \mathbb{L}_{2}$ be a smooth loop homomorphism. Then $\alpha _{\ast }:\mathfrak{l}_{1}\longrightarrow \mathfrak{l}_{2}$ is an $\mathbb{L}$-algebra homomorphism, i.e., for any $\xi ,\gamma \in \mathfrak{l}_{1}$,
\begin{equation}
\alpha _{\ast }\left[ \xi ,\gamma \right] ^{\left( 1\right) }=\left[ \alpha _{\ast }\xi ,\alpha _{\ast }\gamma \right] ^{\left( 2\right) },  \label{algebrahom}
\end{equation}
where $\left[ \cdot ,\cdot \right] ^{\left( 1\right) }$ and $\left[ \cdot ,\cdot \right] ^{\left( 2\right) }$ are the corresponding brackets on $\mathfrak{l}_{1}$ and $\mathfrak{l}_{2}$, respectively.
Moreover, $\\alpha\n_{\\ast }$ is an associator homomorphism, i.e., for any $\\xi ,\\gamma ,\\eta\n\\in $ $\\mathfrak{l}_{1}$, \n\\begin{equation}\n\\alpha _{\\ast }\\left[ \\xi ,\\gamma ,\\eta \\right] ^{\\left( 1\\right) }=\\left[\n\\alpha _{\\ast }\\xi ,\\alpha _{\\ast }\\gamma ,\\alpha _{\\ast }\\eta \\right]\n^{\\left( 2\\right) } \\label{akivishom}\n\\end{equation\nwhere $\\left[ \\cdot ,\\cdot ,\\cdot \\right] ^{\\left( 1\\right) }$ and $\\left[\n\\cdot ,\\cdot ,\\cdot \\right] ^{\\left( 2\\right) }$ are the corresponding\nternary brackets on $\\mathfrak{l}_{1}\\ $and $\\mathfrak{l}_{2}$, respectively.\n\\end{lemma}\n\n\\begin{proof}\nSuppose $\\exp _{\\left( 1\\right) }:$ $\\mathfrak{l}_{1}\\longrightarrow \\mathbb\nL}_{1}$ and $\\exp _{\\left( 2\\right) }:\\mathfrak{l}_{2}\\longrightarrow \n\\mathbb{L}_{2}$ are the corresponding exponential maps. Let $\\xi ,\\gamma \\in \n$ $\\mathfrak{l}_{1}$. We know from (\\ref{loopexp}) that \n\\begin{equation}\n\\alpha \\left( \\exp _{\\left( 1\\right) }\\xi \\right) =\\exp _{\\left( 2\\right)\n}\\left( \\alpha _{\\ast }\\xi \\right) . \\label{homexp}\n\\end{equation\nFrom (\\ref{brack2deriv}), we have \n\\begin{equation*}\n\\left[ \\xi ,\\gamma \\right] ^{\\left( 1\\right) }=\\left. \\frac{d^{2}}{dtd\\tau \n\\exp _{\\left( 1\\right) }\\left( t\\xi \\right) \\exp _{\\left( 1\\right) }\\left(\n\\tau \\gamma \\right) \\right\\vert _{t,\\tau =0}-\\left. \\frac{d^{2}}{dtd\\tau \n\\exp _{\\left( 1\\right) }\\left( \\tau \\gamma \\right) \\exp _{\\left( 1\\right)\n}\\left( t\\xi \\right) \\right\\vert _{t,\\tau =0},\n\\end{equation*\nApplying $\\alpha _{\\ast }$ to $\\left[ \\xi ,\\gamma \\right] ^{\\left( 1\\right)\n} $, we find \n\\begin{eqnarray*}\n\\alpha _{\\ast }\\left[ \\xi ,\\gamma \\right] ^{\\left( 1\\right) } &=&\\left. \n\\frac{d^{2}}{dtd\\tau }\\alpha \\left( \\exp _{\\left( 1\\right) }\\left( t\\xi\n\\right) \\exp _{\\left( 1\\right) }\\left( \\tau \\gamma \\right) \\right)\n\\right\\vert _{t,\\tau =0} \\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\alpha \\left( \\exp _{\\left( 1\\right) }\\left(\n\\tau \\gamma \\right) \\exp _{\\left( 1\\right) }\\left( t\\xi \\right) \\right)\n\\right\\vert _{t,\\tau =0}.\n\\end{eqnarray*\nHowever, since $\\alpha $ is a loop homomorphism, and using (\\ref{homexp}),\nwe have, \n\\begin{eqnarray*}\n\\alpha _{\\ast }\\left[ \\xi ,\\gamma \\right] ^{\\left( 1\\right) } &=&\\left. \n\\frac{d^{2}}{dtd\\tau }\\exp _{\\left( 2\\right) }\\left( t\\alpha _{\\ast }\\xi\n\\right) \\exp _{\\left( 1\\right) }\\left( \\tau \\alpha _{\\ast }\\gamma \\right)\n\\right\\vert _{t,\\tau =0} \\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\exp _{\\left( 1\\right) }\\left( \\tau \\alpha\n_{\\ast }\\gamma \\right) \\exp _{\\left( 1\\right) }\\left( t\\alpha _{\\ast }\\xi\n\\right) \\right\\vert _{t,\\tau =0} \\\\\n&=&\\left[ \\alpha _{\\ast }\\xi ,\\alpha _{\\ast }\\gamma \\right] ^{\\left(\n2\\right) }.\n\\end{eqnarray*\nSimilarly, using the definition (\\ref{Lalgassoc}) for the $\\mathbb{L}\n-algebra associator, we obtain (\\ref{akivishom}).\n\\end{proof}\n\nIn particular, if $\\left( \\alpha ,p\\right) \\in \\Psi ^{R}\\left( \\mathbb{L\n\\right) $, then $\\alpha $ induces an $\\mathbb{L}$-algebra isomorphism \n\\alpha _{\\ast }:\\left( \\mathfrak{l,}\\left[ \\cdot ,\\cdot \\right] \\right)\n\\longrightarrow \\left( \\mathfrak{l,}\\left[ \\cdot ,\\cdot \\right] ^{\\left(\np\\right) }\\right) $. 
In particular, if $\left( \alpha ,p\right) \in \Psi ^{R}\left( \mathbb{L}\right) $, then $\alpha $ induces an $\mathbb{L}$-algebra isomorphism $\alpha _{\ast }:\left( \mathfrak{l},\left[ \cdot ,\cdot \right] \right) \longrightarrow \left( \mathfrak{l},\left[ \cdot ,\cdot \right] ^{\left( p\right) }\right) $. This shows that as long as $p$ is a companion of some smooth right pseudoautomorphism, the corresponding algebras are isomorphic. More generally, we have the following.

\begin{corollary}
\label{corLoppalghom}Suppose $h=\left( \alpha ,p\right) \in \Psi ^{R}\left( \mathbb{L}\right) $ and $q\in \mathbb{\mathring{L}}$. Then, for any $\xi ,\eta ,\gamma \in \mathfrak{l}$,
\begin{subequations}
\label{loopalghom}
\begin{eqnarray}
\alpha _{\ast }\left[ \xi ,\eta \right] ^{\left( q\right) } &=&\left[ \alpha _{\ast }\xi ,\alpha _{\ast }\eta \right] ^{h\left( q\right) }, \label{loopalghom1} \\
\alpha _{\ast }\left[ \xi ,\eta ,\gamma \right] ^{\left( q\right) } &=&\left[ \alpha _{\ast }\xi ,\alpha _{\ast }\eta ,\alpha _{\ast }\gamma \right] ^{h\left( q\right) }. \label{loopalghom2}
\end{eqnarray}
\end{subequations}
\end{corollary}

\begin{proof}
Since $h=\left( \alpha ,p\right) $ is a right pseudoautomorphism of $\mathbb{L},$ by Lemma \ref{lemPseudoHom} it induces a loop homomorphism $\alpha :\left( \mathbb{L},q\right) \longrightarrow \left( \mathbb{L},h\left( q\right) \right) ,$ and thus, by Lemma \ref{lemAlgHom}, $\alpha _{\ast }:\mathfrak{l}^{\left( q\right) }\longrightarrow \mathfrak{l}^{\left( h\left( q\right) \right) }$ is an $\mathbb{L}$-algebra homomorphism. Thus (\ref{loopalghom}) follows.
\end{proof}
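As a quick sanity check, setting $q=1$ in (\ref{loopalghom1}) and recalling that $h\left( 1\right) =p$ is the companion of $h$, we recover precisely the isomorphism noted before the corollary:
\begin{equation*}
\alpha _{\ast }\left[ \xi ,\eta \right] ^{\left( 1\right) }=\left[ \alpha _{\ast }\xi ,\alpha _{\ast }\eta \right] ^{\left( p\right) }.
\end{equation*}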
\begin{remark}
In general, a smooth loop is not fully determined by the binary and ternary brackets of its Akivis algebra, as shown in \cite{ShestakovUmirbaev}. Indeed, for a fuller picture, the more complicated structure of a \emph{Sabinin algebra} is needed \cite{SabininBook}.
\end{remark}

Generally, we see that $\Psi ^{R}\left( \mathbb{L}\right) $ acts on $\mathfrak{l}$ via pushforwards of the action on $\mathbb{L}$, i.e., for $h\in \Psi ^{R}\left( \mathbb{L}\right) $ and $\xi \in \mathfrak{l}$, we have $h\cdot \xi =\left( h^{\prime }\right) _{\ast }\xi $.

The expressions (\ref{loopalghom}) show that the maps $b\in C^{\infty }\left( \mathbb{\mathring{L}},\Lambda ^{2}\mathfrak{l}^{\ast }\otimes \mathfrak{l}\right) $ and $a\in C^{\infty }\left( \mathbb{\mathring{L}},\left( \otimes ^{3}\mathfrak{l}^{\ast }\right) \otimes \mathfrak{l}\right) $ that correspond to the brackets are equivariant maps with respect to the action of $\Psi ^{R}\left( \mathbb{L}\right) .$ Now suppose $s\in \mathbb{\mathring{L}},$ and denote $b_{s}=b\left( s\right) \in \Lambda ^{2}\mathfrak{l}^{\ast }\otimes \mathfrak{l}$. Then the equivariance of $b$ means that the stabilizer $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( b_{s}\right) $ of $b_{s}$ in $\Psi ^{R}\left( \mathbb{L}\right) $ is the set of all $h\in \Psi ^{R}\left( \mathbb{L}\right) $ for which $b_{h\left( s\right) }=b_{s}.$ In particular, $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( b_{s}\right) $ is a Lie subgroup of $\Psi ^{R}\left( \mathbb{L}\right) $, and clearly $\func{Aut}\left( \mathbb{L},\circ _{s}\right) =\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( s\right) \subset \func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( b_{s}\right) .$ Moreover, note that if $h=\left( \gamma ,C\right) \in \func{Aut}\left( \mathbb{L},\circ _{s}\right) \times \mathcal{N}^{R}\left( \mathbb{L},\circ _{s}\right) $, then we still have $b_{h\left( s\right) }=b_{s}.$ So we can say that the corresponding subgroup $\iota _{1}\left( \func{Aut}\left( \mathbb{L},\circ _{s}\right) \right) \ltimes \iota _{2}\left( \mathcal{N}^{R}\left( \mathbb{L},\circ _{s}\right) \right) \subset \Psi ^{R}\left( \mathbb{L}\right) $ is contained in $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( b_{s}\right) .$ Hence, as long as $\mathcal{N}^{R}\left( \mathbb{L},\circ _{s}\right) $ is non-trivial, $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( b_{s}\right) $ is strictly greater than $\func{Aut}\left( \mathbb{L},\circ _{s}\right) .$ Similarly for $a$.

Let us now also consider how the bracket $\left[ \cdot ,\cdot \right] $ is transformed by $\left( \func{Ad}_{p}^{\left( s\right) }\right) _{\ast }.$

\begin{theorem}
Suppose $s\in \mathbb{\mathring{L}}$, $p\in \mathbb{L}$, and $\xi ,\eta \in \mathfrak{l}.$ Then
\begin{eqnarray}
\left( \func{Ad}_{p}^{\left( s\right) }\right) _{\ast }\left[ \xi ,\eta \right] ^{\left( s\right) } &=&\left[ \left( \func{Ad}_{p}^{\left( s\right) }\right) _{\ast }\xi ,\left( \func{Ad}_{p}^{\left( s\right) }\right) _{\ast }\eta \right] ^{\left( ps\right) } \label{Adbrack1} \\
&&-\left( R_{p}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \left( \func{Ad}_{p}^{\left( s\right) }\right) _{\ast }\xi ,p,\eta \right] ^{\left( s\right) }+\left( R_{p}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \left( \func{Ad}_{p}^{\left( s\right) }\right) _{\ast }\eta ,p,\xi \right] ^{\left( s\right) } \notag \\
&&+\left( R_{p}^{\left( s\right) }\right) _{\ast }^{-1}\left[ p,\xi ,\eta \right] ^{\left( s\right) }-\left( R_{p}^{\left( s\right) }\right) _{\ast }^{-1}\left[ p,\eta ,\xi \right] ^{\left( s\right) }. \notag
\end{eqnarray}
The bracket $\left[ \cdot ,\cdot \right] ^{\left( ps\right) }$ is related to $\left[ \cdot ,\cdot \right] ^{\left( s\right) }$ via the expression
\begin{equation}
\left[ \xi ,\eta \right] ^{\left( ps\right) }=\left[ \xi ,\eta \right] ^{\left( s\right) }+\left( R_{p}^{\left( s\right) }\right) _{\ast }^{-1}a_{s}\left( \xi ,\eta ,p\right) . \label{Adbrack1a}
\end{equation}
\end{theorem}
\\frac{d}{dtd\\tau }\\left( p\\circ\n_{s}\\left( \\exp \\left( t\\xi \\right) \\circ _{s}\\exp \\left( \\tau \\eta \\right)\n\\right) \\right) \/_{s}p\\right\\vert _{t,\\tau =0} \\notag \\\\\n&&-\\left. \\frac{d}{dtd\\tau }\\left( p\\circ _{s}\\left( \\exp \\left( t\\eta\n\\right) \\circ _{s}\\exp \\left( \\tau \\xi \\right) \\right) \\right)\n\/_{s}p\\right\\vert _{t,\\tau =0}. \\label{Adbrack}\n\\end{eqnarray\nFor brevity and clarity, let us suppress the derivatives and exponentials,\nthen using mixed associators such as (\\ref{etapxiassoc}), we can write \n\\begin{eqnarray*}\n\\left( p\\circ _{s}\\left( \\xi \\circ _{s}\\eta \\right) \\right) \/_{s}p &=&\\left(\n\\left( p\\circ _{s}\\xi \\right) \\circ _{s}\\eta \\right) \/_{s}f+\\left[ p,\\xi\n,\\eta \\right] ^{\\left( s\\right) }\/_{s}p \\\\\n&=&\\left( \\left( \\left( p\\circ _{s}\\xi \\right) \/_{s}p\\circ _{s}p\\right)\n\\circ _{s}\\eta \\right) \/_{s}p+\\left[ p,\\xi ,\\eta \\right] ^{\\left( s\\right)\n}\/_{s}p \\\\\n&=&\\left( \\func{Ad}_{p}^{\\left( s\\right) }\\xi \\circ _{s}\\left( p\\circ\n_{s}\\eta \\right) \\right) \/_{s}p-\\left[ \\func{Ad}_{p}^{\\left( s\\right) }\\xi\n,p,\\eta \\right] ^{\\left( s\\right) }\/_{s}p \\\\\n&&+\\left[ p,\\xi ,\\eta \\right] ^{\\left( s\\right) }\/_{s}p.\n\\end{eqnarray*\nApplying (\\ref{xrprod}), we get \n\\begin{equation}\n\\left( p\\circ _{s}\\left( \\xi \\circ _{s}\\eta \\right) \\right) \/_{s}p=\\func{Ad\n_{p}^{\\left( s\\right) }\\xi \\circ _{ps}\\func{Ad}_{p}^{\\left( s\\right) }\\eta \n\\left[ \\func{Ad}_{p}^{\\left( s\\right) }\\xi ,p,\\eta \\right] ^{\\left( s\\right)\n}\/_{s}p+\\left[ p,\\xi ,\\eta \\right] ^{\\left( s\\right) }\/_{s}p.\n\\end{equation\nSubtracting the same expression with $\\xi $ and $\\eta $ reversed, (\\re\n{Adbrack}) becomes \n\\begin{eqnarray}\n\\left( \\func{Ad}_{p}^{\\left( s\\right) }\\right) _{\\ast }\\left[ \\xi ,\\eta\n\\right] ^{\\left( s\\right) } &=&\\left[ \\left( \\func{Ad}_{p}^{\\left( s\\right)\n}\\right) _{\\ast }\\xi ,\\left( \\func{Ad}_{p}^{\\left( s\\right) }\\right) _{\\ast\n}\\eta \\right] ^{\\left( ps\\right) } \\\\\n&&-\\left( R_{p}^{\\left( s\\right) }\\right) _{\\ast }^{-1}\\left[ \\left( \\func{A\n}_{p}^{\\left( s\\right) }\\right) _{\\ast }\\xi ,p,\\eta \\right] ^{\\left(\ns\\right) }+\\left( R_{p}^{\\left( s\\right) }\\right) _{\\ast }^{-1}\\left[ \\left( \n\\func{Ad}_{p}^{\\left( s\\right) }\\right) _{\\ast }\\eta ,p,\\xi \\right] ^{\\left(\ns\\right) } \\notag \\\\\n&&+\\left( R_{p}^{\\left( s\\right) }\\right) _{\\ast }^{-1}\\left[ p,\\xi ,\\eta\n\\right] ^{\\left( s\\right) }-\\left( R_{p}^{\\left( s\\right) }\\right) _{\\ast\n}^{-1}\\left[ p,\\eta ,\\xi \\right] ^{\\left( s\\right) }. \\notag\n\\end{eqnarray\nTo obtain (\\ref{Adbrack1a}), using (\\ref{brack2deriv}), we can write \n\\begin{eqnarray}\n\\left[ \\xi ,\\eta \\right] ^{\\left( ps\\right) } &=&\\left. \\frac{d^{2}}{dtd\\tau \n}\\exp \\left( t\\xi \\right) \\circ _{ps}\\exp \\left( \\tau \\eta \\right)\n\\right\\vert _{t,\\tau =0} \\label{brackps} \\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\exp \\left( \\tau \\xi \\right) \\circ _{ps}\\exp\n\\left( t\\eta \\right) \\right\\vert _{t,\\tau =0}. \\notag\n\\end{eqnarray\nHowever, from (\\ref{xrprod})\n\\begin{equation*}\n\\exp \\left( t\\xi \\right) \\circ _{ps}\\exp \\left( \\tau \\eta \\right) =\\left(\n\\exp \\left( t\\xi \\right) \\circ _{s}\\left( \\exp \\left( \\tau \\eta \\right)\n\\circ _{s}p\\right) \\right) \/_{s}p,\n\\end{equation*\nthus \n\\begin{eqnarray*}\n\\left. 
\\frac{d^{2}}{dtd\\tau }\\exp \\left( t\\xi \\right) \\circ _{ps}\\exp \\left(\n\\tau \\eta \\right) \\right\\vert _{t,\\tau =0} &=&\\left( R_{p}^{\\left( s\\right)\n}\\right) _{\\ast }^{-1}\\left. \\frac{d^{2}}{dtd\\tau }\\exp \\left( t\\xi \\right)\n\\circ _{s}\\left( \\exp \\left( \\tau \\eta \\right) \\circ _{s}p\\right)\n\\right\\vert _{t,\\tau =0} \\\\\n&=&\\left( R_{p}^{\\left( s\\right) }\\right) _{\\ast }^{-1}\\left[ \\xi ,\\eta ,\n\\right] ^{\\left( s\\right) } \\\\\n&&+\\left. \\frac{d^{2}}{dtd\\tau }\\exp \\left( t\\xi \\right) \\circ _{s}\\exp\n\\left( \\tau \\eta \\right) \\right\\vert _{t,\\tau =0}\n\\end{eqnarray*\nand similarly for the second term in (\\ref{brackps}). Hence, we obtain (\\re\n{Adbrack1a}).\n\\end{proof}\n\nFrom (\\ref{Adbrack1a}) and noting that for any $h\\in \\Psi ^{R}\\left( \\mathbb\nL}\\right) $, $h\\left( s\\right) =h\\left( s\\right) \/s\\cdot s,$ we find that \n\\left[ \\cdot ,\\cdot \\right] ^{\\left( s\\right) }=\\left[ \\cdot ,\\cdot \\right]\n^{\\left( h\\left( s\\right) \\right) }$ if and only if \n\\begin{equation}\na_{s}\\left( \\xi ,\\eta ,\\faktor{h\\left( s\\right)}{s}\\right) ^{\\left( s\\right)\n}=0 \\label{stabbrackcond}\n\\end{equation\nfor any $\\xi ,\\eta \\in \\mathfrak{l}.$ From (\\ref{PsAutoriso}) recall that \nh\\left( s\\right) \/s$ is the companion that corresponds to $h$ in $\\left( \n\\mathbb{L},\\circ _{s}\\right) .$\n\nAlso, note that from (\\ref{Adbrack1a}), we have \n\\begin{equation}\n\\left[ \\theta ,\\theta \\right] ^{\\left( p\\right) }=\\left[ \\theta ,\\theta\n\\right] ^{\\left( 1\\right) }+\\left( R_{p}\\right) _{\\ast }^{-1}a_{1}\\left(\n\\theta ,\\theta ,p\\right) , \\label{brackthetas}\n\\end{equation\nso the left-alternating associator with $p$ is the obstruction for the\nbrackets $\\left[ \\cdot ,\\cdot \\right] ^{\\left( p\\right) }$ and $\\left[ \\cdot\n,\\cdot \\right] ^{\\left( 1\\right) }$ to be equal. Moreover, the structural\nequation (\\ref{MCequation2}) can be rewritten as \n\\begin{equation}\nd\\theta -\\frac{1}{2}\\left[ \\theta ,\\theta \\right] ^{\\left( 1\\right) }=\\frac{\n}{2}\\left( R_{p}\\right) _{\\ast }^{-1}a_{1}\\left( \\theta ,\\theta ,p\\right) .\n\\end{equation\nThis makes the dependence on the associator more explicit.\n\nUsing the associator on $\\mathfrak{l}^{\\left( p\\right) }$ we can define the\nright nucleus $\\mathcal{N}^{R}\\left( \\mathfrak{l}^{\\left( p\\right) }\\right) $\nof $\\mathfrak{l}^{\\left( p\\right) }.$\n\n\\begin{definition}\nLet $p\\in \\mathbb{\\mathring{L}}$, then, the right nucleus $\\mathcal{N\n^{R}\\left( \\mathfrak{l}^{\\left( p\\right) }\\right) $ is defined as \n\\begin{equation}\n\\mathcal{N}^{R}\\left( \\mathfrak{l}^{\\left( p\\right) }\\right) =\\left\\{ \\xi\n\\in \\mathfrak{l}:a_{p}\\left( \\eta ,\\gamma ,\\xi \\right) =0\\ \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{for all \n\\eta ,\\gamma \\in \\mathfrak{l}\\right\\} . 
From (\ref{Adbrack1a}), and noting that for any $h\in \Psi ^{R}\left( \mathbb{L}\right) $ we have $h\left( s\right) =h\left( s\right) /s\cdot s,$ we find that $\left[ \cdot ,\cdot \right] ^{\left( s\right) }=\left[ \cdot ,\cdot \right] ^{\left( h\left( s\right) \right) }$ if and only if
\begin{equation}
a_{s}\left( \xi ,\eta ,\faktor{h\left( s\right)}{s}\right) =0 \label{stabbrackcond}
\end{equation}
for any $\xi ,\eta \in \mathfrak{l}.$ From (\ref{PsAutoriso}), recall that $h\left( s\right) /s$ is the companion that corresponds to $h$ in $\left( \mathbb{L},\circ _{s}\right) .$

Also, note that from (\ref{Adbrack1a}) we have
\begin{equation}
\left[ \theta ,\theta \right] ^{\left( p\right) }=\left[ \theta ,\theta \right] ^{\left( 1\right) }+\left( R_{p}\right) _{\ast }^{-1}a_{1}\left( \theta ,\theta ,p\right) , \label{brackthetas}
\end{equation}
so the left-alternating associator with $p$ is the obstruction for the brackets $\left[ \cdot ,\cdot \right] ^{\left( p\right) }$ and $\left[ \cdot ,\cdot \right] ^{\left( 1\right) }$ to be equal. Moreover, the structural equation (\ref{MCequation2}) can be rewritten as
\begin{equation}
d\theta -\frac{1}{2}\left[ \theta ,\theta \right] ^{\left( 1\right) }=\frac{1}{2}\left( R_{p}\right) _{\ast }^{-1}a_{1}\left( \theta ,\theta ,p\right) .
\end{equation}
This makes the dependence on the associator more explicit.

Using the associator on $\mathfrak{l}^{\left( p\right) }$, we can define the right nucleus $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $ of $\mathfrak{l}^{\left( p\right) }.$

\begin{definition}
Let $p\in \mathbb{\mathring{L}}$. Then the right nucleus $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $ is defined as
\begin{equation}
\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) =\left\{ \xi \in \mathfrak{l}:a_{p}\left( \eta ,\gamma ,\xi \right) =0\ \text{for all }\eta ,\gamma \in \mathfrak{l}\right\} . \label{NRl}
\end{equation}
\end{definition}

It may seem that a more natural definition of $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $ would be as the set of all $\xi \in \mathfrak{l}$ such that $\left[ \eta ,\gamma ,\xi \right] ^{\left( p\right) }=0$ for any $\eta ,\gamma \in \mathfrak{l}.$ However, the advantage of (\ref{NRl}) is that, as we will see, it is always a Lie subalgebra of $\mathfrak{l}^{\left( p\right) }.$ For a left-alternative algebra, the skew-symmetrization in (\ref{NRl}) would of course be unnecessary.
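To illustrate the two extremes (a brief sketch): if $\mathfrak{l}^{\left( p\right) }$ is a Lie algebra, then $a_{p}\equiv 0$ and (\ref{NRl}) gives $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) =\mathfrak{l}$. On the other hand, for the Malcev algebra $\func{Im}\mathbb{O}$, the associator is totally skew-symmetric and not identically zero; since $G_{2}$ acts transitively on the unit imaginary octonions and the associator is $G_{2}$-equivariant, $a\left( \cdot ,\cdot ,\xi \right) \neq 0$ for every $\xi \neq 0$, and hence
\begin{equation*}
\mathcal{N}^{R}\left( \func{Im}\mathbb{O}\right) =\left\{ 0\right\} ,
\end{equation*}
consistent with the example below.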
\begin{theorem}
The right nucleus $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $ is a Lie subalgebra of $\mathfrak{l}^{\left( p\right) }.$
\end{theorem}

\begin{proof}
We first need to show that $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $ is closed under $\left[ \cdot ,\cdot \right] ^{\left( p\right) }.$ Indeed, taking the exterior derivative of (\ref{db2}), for vector fields $X,Y$ on $\mathbb{L}$ we have
\begin{eqnarray*}
0 &=&\left( d^{2}b\left( \beta ,\gamma \right) \right) \left( X,Y\right) =X\left( d_{Y}b\left( \beta ,\gamma \right) \right) -Y\left( d_{X}b\left( \beta ,\gamma \right) \right) -d_{\left[ X,Y\right] }b\left( \beta ,\gamma \right) \\
&=&X\left( a\left( \beta ,\gamma ,\theta \left( Y\right) \right) \right) -Y\left( a\left( \beta ,\gamma ,\theta \left( X\right) \right) \right) -a\left( \beta ,\gamma ,\theta \left( \left[ X,Y\right] \right) \right) .
\end{eqnarray*}
Suppose now $\xi ,\eta \in \mathfrak{l}^{\left( p\right) }$ and let $X=\rho \left( \xi \right) ,$ $Y=\rho \left( \eta \right) $ be the corresponding right fundamental vector fields. Then, using (\ref{T1Lbrack}), we have
\begin{equation}
a\left( \beta ,\gamma ,b\left( \xi ,\eta \right) \right) =-X\left( a\left( \beta ,\gamma ,\eta \right) \right) +Y\left( a\left( \beta ,\gamma ,\xi \right) \right) . \label{d2b}
\end{equation}
Suppose now $\xi ,\eta \in \mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $. Then the right-hand side of (\ref{d2b}) vanishes, and at $p\in \mathbb{L}$,
\begin{equation}
a_{p}\left( \beta ,\gamma ,\left[ \xi ,\eta \right] ^{\left( p\right) }\right) =0, \label{d2b2}
\end{equation}
and thus $\left[ \xi ,\eta \right] ^{\left( p\right) }\in \mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) .$

To conclude that $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $ is a Lie subalgebra, we also need to verify that the Lie algebra Jacobi identity holds, that is, for any $\xi ,\eta ,\gamma \in \mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) $, $\func{Jac}^{\left( p\right) }\left( \xi ,\eta ,\gamma \right) =0$. Indeed, from the Akivis identity (\ref{Jac2}),
\begin{equation}
\func{Jac}^{\left( p\right) }\left( \xi ,\eta ,\gamma \right) =a_{p}\left( \xi ,\eta ,\gamma \right) +a_{p}\left( \eta ,\gamma ,\xi \right) +a_{p}\left( \gamma ,\xi ,\eta \right) =0,
\end{equation}
by the definition of $\mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) .$
\end{proof}

For any smooth loop, consider the loop right nucleus $\mathcal{N}^{R}\left( \mathbb{L},\circ _{p}\right) $ as a submanifold of $\mathbb{L}.$ Then,
\begin{equation}
T_{1}\mathcal{N}^{R}\left( \mathbb{L},\circ _{p}\right) =\left\{ \xi \in \mathfrak{l}:\left[ q,r,\xi \right] ^{\left( p\right) }=0\ \text{for all }q,r\in \mathbb{L}\right\} , \label{T1N}
\end{equation}
where here we are using the mixed associator as defined by (\ref{pqxiassoc}). Then (\ref{Lalgassoc2}) implies that $T_{1}\mathcal{N}^{R}\left( \mathbb{L},\circ _{p}\right) \subset \mathcal{N}^{R}\left( \mathfrak{l}^{\left( p\right) }\right) .$ It is unclear under what conditions the converse inclusion, and hence equality of the two spaces, holds.

Recall from (\ref{CRNucl}) that $A\in \mathcal{N}^{R}\left( \mathbb{L}\right) $ if and only if $\func{Ad}_{p}\left( A\right) \in \mathcal{N}^{R}\left( \mathbb{L},\circ _{p}\right) $, so in particular, $\eta \in T_{1}\mathcal{N}^{R}\left( \mathbb{L}\right) $ if and only if $\left( \func{Ad}_{p}\right) _{\ast }\eta \in T_{1}\mathcal{N}^{R}\left( \mathbb{L},\circ _{p}\right) .$ In (\ref{Adbrack1}) we then see that for $\eta ,\gamma \in T_{1}\mathcal{N}^{R}\left( \mathbb{L}\right) $, the associators vanish, and we get
\begin{equation}
\left( \func{Ad}_{p}\right) _{\ast }\left[ \eta ,\gamma \right] =\left[ \left( \func{Ad}_{p}\right) _{\ast }\eta ,\left( \func{Ad}_{p}\right) _{\ast }\gamma \right] ^{\left( p\right) }. \label{AdNucl}
\end{equation}
Hence, for each $p\in \mathbb{\mathring{L}},$ $T_{1}\mathcal{N}^{R}\left( \mathbb{L}\right) \cong T_{1}\mathcal{N}^{R}\left( \mathbb{L},\circ _{p}\right) $ as Lie algebras.

\begin{example}
Consider the Moufang loop of unit octonions $U\mathbb{O}.$ Then $T_{1}U\mathbb{O}\cong \func{Im}\mathbb{O}$, the space of imaginary octonions, with the bracket given by the commutator on $\func{Im}\mathbb{O}$: for any $\xi ,\eta \in \func{Im}\mathbb{O}$, $\left[ \xi ,\eta \right] =\xi \eta -\eta \xi .$ We also know that $\mathcal{N}\left( U\mathbb{O}\right) \cong \mathbb{Z}_{2}$ and $\mathcal{N}\left( \func{Im}\mathbb{O}\right) =\left\{ 0\right\} .$ On the other hand, taking a direct product $G\times U\mathbb{O}$ with any Lie group $G$ will give a non-trivial nucleus.
\end{example}
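To make the last statement concrete (a sketch, using that associators on a direct product are computed componentwise, so that the nucleus of a product is the product of the nuclei): for $\mathbb{L}=G\times U\mathbb{O}$ with $G$ a Lie group,
\begin{equation*}
\mathcal{N}\left( G\times U\mathbb{O}\right) =G\times \mathcal{N}\left( U\mathbb{O}\right) \cong G\times \mathbb{Z}_{2},\qquad T_{1}\mathcal{N}\left( G\times U\mathbb{O}\right) \cong \mathfrak{g},
\end{equation*}
where $\mathfrak{g}$ is the Lie algebra of $G$, which is indeed non-trivial whenever $G$ is.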
Let $s\in \mathbb{\mathring{L}}$, and suppose the Lie algebras of $\Psi ^{R}\left( \mathbb{L}\right) $ and $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ are $\mathfrak{p}$ and $\mathfrak{h}_{s}$, respectively. In particular, $\mathfrak{h}_{s}$ is a Lie subalgebra of $\mathfrak{p}$. Define $\mathfrak{q}_{s}=T_{1}\mathcal{C}^{R}\left( \mathbb{L},\circ _{s}\right) ;$ then, since $\mathcal{C}^{R}\left( \mathbb{L},\circ _{s}\right) \subset \mathbb{L}$, we have $\mathfrak{q}_{s}\mathfrak{\subset l}^{\left( s\right) }\mathfrak{\cong }T_{1}\mathbb{L}.$ On the other hand, $\mathcal{C}^{R}\left( \mathbb{L},\circ _{s}\right) \cong \faktor{\Psi ^{R}\left( \mathbb{L}\right)}{\func{Aut}\left( \mathbb{L},\circ _{s}\right)}$, and the tangent space at the coset $1=\left[ \func{Aut}\left( \mathbb{L},\circ _{s}\right) \right] $ is $\mathfrak{p/h}_{s}.$ Hence, we see that $\mathfrak{q}_{s}\mathfrak{\cong p/h}_{s}$, at least as vector spaces. The groups $\Psi ^{R}\left( \mathbb{L}\right) $ and $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ act on $\mathfrak{p}$ and $\mathfrak{h}_{s}$ via their respective adjoint actions, and hence $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ acts on $\mathfrak{q}_{s}$ via a restriction of the adjoint action of $\Psi ^{R}\left( \mathbb{L}\right) .$ Now note that given $h=\left( \alpha ,A\right) \in \Psi ^{R}\left( \mathbb{L}\right) $ and $\beta \in \func{Aut}\left( \mathbb{L},\circ _{s}\right) $, the conjugation of $h$ by $\beta $ is given by
\begin{equation*}
\left( \beta ,1\right) \left( \alpha ,A\right) \left( \beta ^{-1},1\right) =\left( \beta \circ \alpha \circ \beta ^{-1},\beta \left( A\right) \right) ,
\end{equation*}
and hence the corresponding action on the companion $A$ is via the standard action of $\beta $ on $\mathbb{L}.$ The differentials of these actions give the corresponding actions on the tangent spaces. We thus see that the adjoint action of $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ on $\mathfrak{p/h}_{s}$ is equivalent to the standard tangent action of $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ on $\mathfrak{q}_{s}.$ Hence, $\mathfrak{q}_{s}$ and $\mathfrak{p/h}_{s}$ are isomorphic as linear representations of $\func{Aut}\left( \mathbb{L},\circ _{s}\right) .$ We can make the isomorphism from $\mathfrak{p/h}_{s}$ to $\mathfrak{q}_{s}$ more explicit in the following way.

\begin{definition}
Define the map $\varphi :\mathbb{\mathring{L}}\longrightarrow \mathfrak{l}\otimes \mathfrak{p}^{\ast }$ such that for each $s\in \mathbb{\mathring{L}}$ and $\gamma \in \mathfrak{p}$,
\begin{equation}
\varphi _{s}\left( \gamma \right) =\left. \frac{d}{dt}\faktor{\left( \exp \left( t\gamma \right) \left( s\right) \right)}{s}\right\vert _{t=0}\in \mathfrak{l}. \label{phis}
\end{equation}
\end{definition}

Thus, for each $s\in \mathbb{\mathring{L}},$ $\varphi _{s}$ gives a map from $\mathfrak{p}$ to $\mathfrak{l}^{\left( s\right) }.$

\begin{theorem}
\label{lemGammahatsurj}The map $\varphi $ as in (\ref{phis}) is equivariant with respect to the corresponding actions of $\Psi ^{R}\left( \mathbb{L}\right) ;$ in particular, for $h\in \Psi ^{R}\left( \mathbb{L}\right) ,$ $s\in \mathbb{\mathring{L}}$, $\gamma \in \mathfrak{p},$ we have
\begin{equation}
\varphi _{h\left( s\right) }\left( \left( \func{Ad}_{h}\right) _{\ast }\gamma \right) =\left( h^{\prime }\right) _{\ast }\varphi _{s}\left( \gamma \right) . \label{phihs}
\end{equation}
Moreover, the image of $\varphi _{s}$ is $\mathfrak{q}_{s}$ and the kernel is $\mathfrak{h}_{s}$, and hence
\begin{equation}
\mathfrak{p\cong h}_{s}\oplus \mathfrak{q}_{s}. \label{pdecomp}
\end{equation}
\end{theorem}
\begin{proof}
Consider $h\in \Psi ^{R}\left( \mathbb{L}\right) $. Then, using (\ref{PsAutquot2a}), we have
\begin{eqnarray*}
\varphi _{h\left( s\right) }\left( \gamma \right) &=&\left. \frac{d}{dt}\faktor{\left[ \exp \left( t\gamma \right) \left( h\left( s\right) \right) \right]}{h\left( s\right)}\right\vert _{t=0} \\
&=&\left. \frac{d}{dt}h^{\prime }\left[ \faktor{\func{Ad}_{h^{-1}}\left( \exp \left( t\gamma \right) \right) \left( s\right)}{s}\right] \right\vert _{t=0} \\
&=&\left( h^{\prime }\right) _{\ast }\left. \frac{d}{dt}\faktor{\exp \left( t\left( \func{Ad}_{h^{-1}}\right) _{\ast }\gamma \right) \left( s\right)}{s}\right\vert _{t=0}.
\end{eqnarray*}
Since $\Psi ^{R}\left( \mathbb{L}\right) $ acts on $\mathfrak{l}$ via $\left( h^{\prime }\right) _{\ast }$ and on $\mathfrak{p}$ via $\left( \func{Ad}_{h}\right) _{\ast }$, we see that $\varphi $ is equivariant.

Since $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ is a Lie subgroup of $\Psi ^{R}\left( \mathbb{L}\right) ,$ the projection map $\pi :\Psi ^{R}\left( \mathbb{L}\right) \longrightarrow \faktor{\Psi ^{R}\left( \mathbb{L}\right)}{\func{Aut}\left( \mathbb{L},\circ _{s}\right)}\cong \mathcal{C}^{R}\left( \mathbb{L},\circ _{s}\right) $ is a smooth submersion, given by $\pi \left( h\right) =h\left( s\right) /s$ for each $h\in \Psi ^{R}\left( \mathbb{L}\right) .$ Thus, $\left. \pi _{\ast }\right\vert _{\func{id}}:\mathfrak{p}\longrightarrow \mathfrak{q}_{s}$ is surjective. However, since $\exp $ maps $\mathfrak{p}$ surjectively onto a neighborhood of $\func{id}\in \Psi ^{R}\left( \mathbb{L}\right) $, we find that $\left. \pi _{\ast }\right\vert _{\func{id}}\left( \gamma \right) =\varphi _{s}\left( \gamma \right) .$ So indeed, the image of the map $\varphi _{s}$ is $\mathfrak{q}_{s}.$ Clearly the kernel is $\mathfrak{h}_{s}.$ Then (\ref{pdecomp}) follows immediately.
\end{proof}
\\frac{d}{dt}\\left[ \\exp \\left( t\\gamma \\right) \\left( \\alpha\n\\left( s\\right) A\\right) \\right] \/\\left( \\alpha \\left( s\\right) A\\right)\n\\right\\vert _{t=0}\n\\end{equation*}\n\nWe can also see the effect on $\\varphi $ of left multiplication of $s$ by\nelements of $\\mathbb{L}$.\n\n\\begin{lemma}\nSuppose $A\\in \\mathbb{L}$ and $s\\in \\mathbb{\\mathring{L}}$, then for any \n\\gamma \\in \\mathfrak{p},\n\\begin{equation}\n\\varphi _{As}\\left( \\gamma \\right) =\\left( R_{A}^{\\left( s\\right) }\\right)\n_{\\ast }^{-1}\\left( \\gamma ^{\\prime }\\cdot A\\right) +\\left( \\func{Ad\n_{A}^{\\left( s\\right) }\\right) _{\\ast }\\varphi _{s}\\left( \\gamma \\right) ,\n\\label{phiAs}\n\\end{equation\nwhere $\\gamma ^{\\prime }\\cdot A=\\left. \\frac{d}{dt}\\left( \\exp t\\gamma\n\\right) ^{\\prime }\\left( A\\right) \\right\\vert _{t=0}$ represents the\ninfinitesimal action of $\\mathfrak{p}$ on $\\mathbb{L}.$\n\\end{lemma}\n\n\\begin{proof}\nThis follows from a direct computation\n\\begin{eqnarray*}\n\\varphi _{As}\\left( \\gamma \\right) &=&\\left. \\frac{d}{dt}\\exp \\left( t\\gamma\n\\right) \\left( As\\right) \/As\\right\\vert _{t=0} \\\\\n&=&\\left. \\frac{d}{dt}\\left[ \\exp \\left( t\\gamma \\right) ^{\\prime }\\left(\nA\\right) \\exp \\left( t\\gamma \\right) \\left( s\\right) \\right] \/As\\right\\vert\n_{t=0} \\\\\n&=&\\left. \\frac{d}{dt}\\left[ A\\exp \\left( t\\gamma \\right) \\left( s\\right)\n\\right] \/As\\right\\vert _{t=0}+\\left. \\frac{d}{dt}\\left( \\left[ \\exp \\left(\nt\\gamma \\right) ^{\\prime }\\left( A\\right) \\right] s\\right) \/As\\right\\vert\n_{t=0} \\\\\n&=&\\left( \\func{Ad}_{A}^{\\left( s\\right) }\\right) _{\\ast }\\varphi _{s}\\left(\n\\gamma \\right) +\\left( R_{A}^{\\left( s\\right) }\\right) _{\\ast }^{-1}\\left(\n\\gamma ^{\\prime }\\cdot A\\right) ,\n\\end{eqnarray*\nwhere we have used (\\ref{rprodqright}).\n\\end{proof}\n\n\\begin{example}\n\\label{exOct}If $\\mathbb{L}\\ $is the loop of unit octonions, then we know \n\\mathfrak{p\\cong so}\\left( 7\\right) \\cong \\Lambda ^{2}\\left( \\mathbb{R\n^{7}\\right) ^{\\ast }$ and $\\mathfrak{l\\cong }\\mathbb{R}^{7}$ , so $\\varphi\n_{s}$ can be regarded as an element of $\\mathbb{R}^{7}\\otimes $ $\\Lambda ^{2\n\\mathbb{R}^{7},$ and this is precisely a dualized version of the $G_{2}\n-invariant $3$-form $\\varphi .$ The kernel is isomorphic to $\\mathfrak{g\n_{2}.$\n\\end{example}\n\n\\begin{example}\n\\label{exCx2}Suppose $\\mathbb{L=}U\\mathbb{C\\cong }S^{1}$ - the unit complex\nnumbers, so that $\\mathfrak{l\\cong }\\mathbb{R}.$ From Example \\re\n{ExNormedDiv}, we may take $\\Psi _{n}^{R}\\left( U\\mathbb{C}\\right) =U\\left(\nn\\right) ,$ with a trivial partial action on $U\\mathbb{C}.$ The\ncorresponding Lie algebra is $\\mathfrak{p}_{n}\\cong \\mathfrak{u}\\left(\nn\\right) \\cong \\mathfrak{su}\\left( n\\right) \\oplus i\\mathbb{R}.$ The map \n\\varphi _{s}:\\mathfrak{p}_{n}\\longrightarrow i\\mathbb{R}\\ $is then just the\nprojection $\\mathfrak{su}\\left( n\\right) \\oplus i\\mathbb{R}\\longrightarrow \n\\mathbb{R}$ (i.e. trace). It is independent of $s$. The kernel is $\\mathfrak\nsu}\\left( n\\right) .$ Suppose $V$ is a $n$-dimensional real vector space,\nand $V\\otimes \\mathbb{C}=V^{1,0}\\oplus V^{0,1}$. Then, the group $U\\left(\nn\\right) $ acts via unitary transformations on the complex vector space \nV^{1,0},$ and correspondingly $\\mathfrak{u}\\left( n\\right) \\cong V^{1,1}$\n(i.e. the space of $\\left( 1,1\\right) $-forms). 
Then, we see that $\\varphi\n_{s}$ is just the dualized version of a Hermitian form on $V\\otimes \\mathbb{\n}.$\n\\end{example}\n\n\\begin{example}\n\\label{exQuat2}Suppose $\\mathbb{L=}U\\mathbb{H\\cong }S^{3}$ - the unit\nquaternions, so that $\\mathfrak{l\\cong }\\mathfrak{sp}\\left( 1\\right) .$ From\nExample \\ref{ExNormedDiv}, we may take $\\Psi _{n}^{R}\\left( U\\mathbb{H\n\\right) =Sp\\left( n\\right) Sp\\left( 1\\right) ,$ with $n\\geq 2$, with a\ntrivial partial action on $U\\mathbb{H}.$ The corresponding Lie algebra is \n\\mathfrak{p}_{n}\\cong \\mathfrak{sp}\\left( n\\right) \\oplus \\mathfrak{sp\n\\left( 1\\right) .$ The map $\\varphi _{s}:\\mathfrak{p}_{n}\\longrightarrow \n\\mathfrak{sp}\\left( 1\\right) \\ $is then given by $\\left( a,\\xi \\right)\n\\mapsto \\left( \\func{Ad}_{s}\\right) _{\\ast }\\xi .$ The kernel is then \n\\mathfrak{sp}\\left( n\\right) .$ Suppose $Sp\\left( n\\right) Sp\\left( 1\\right) \n$ acts on a $4n$-dimensional real vector space \\ $V$, $\\mathfrak{sp}\\left(\nn\\right) \\oplus \\mathfrak{sp}\\left( 1\\right) \\subset \\Lambda ^{2}V^{\\ast }$.\nGiven that $\\mathfrak{sp}\\left( 1\\right) \\cong \\func{Im}\\mathbb{H},$ we can\nthen write $\\varphi _{s}=i\\omega _{1}^{\\ast }+j\\omega _{2}^{\\ast }+k\\omega\n_{3}^{\\ast },$ where the $\\omega _{i}^{\\ast }$ are dualized versions of the\n3 linearly independent Hermitian forms that space the $\\mathfrak{sp}\\left(\n1\\right) $ subspace of $\\Lambda ^{2}V^{\\ast }$ \\cite{SalamonBook}.\n\\end{example}\n\n\\begin{remark}\nThe above examples clearly show that one interpretation of the $G_{2}$\nstructure $3$-form $\\varphi $ is as $\\func{Im}\\mathbb{O}$-valued $2$-form. A\ncomplex Hermitian form is then an $\\func{Im}\\mathbb{C}$-valued $2$-form, and\na quaternionic Hermitian form is an $\\func{Im}\\mathbb{H}$-valued $2$-form.\n\\end{remark}\n\nNow let us summarize the actions of different spaces on one another. For a\nfixed $\\gamma $, define the map $\\hat{\\gamma}:\\mathbb{\\mathring{L}\n\\longrightarrow \\mathfrak{l\\ }$given by $s\\mapsto \\hat{\\gamma}^{\\left(\ns\\right) }=\\varphi _{s}\\left( \\gamma \\right) .$\n\n\\begin{theorem}\nSuppose $\\mathbb{L}$ is a smooth loop with tangent algebra $\\mathfrak{l}$\nand suppose $\\Psi ^{R}\\left( \\mathbb{L}\\right) $ is a Lie group with Lie\nalgebra $\\mathfrak{p}.$ Let $A\\in \\mathbb{L},$ $s\\in \\mathbb{\\mathring{L}}$, \n$\\xi \\in \\mathfrak{l}$, and $\\gamma \\in \\mathfrak{p}.$ Then, denoting by \n\\cdot $ the relevant action, we have the following:\n\n\\begin{enumerate}\n\\item Infinitesimal action of $\\mathfrak{p}$ on $\\mathbb{\\mathring{L}}$: \n\\begin{equation}\n\\gamma \\cdot s=\\left. \\frac{d}{dt}\\exp \\left( t\\gamma \\right) \\left(\ns\\right) \\right\\vert _{t=0}=\\left( R_{s}\\right) _{\\ast }\\hat{\\gamma}^{\\left(\ns\\right) }\\in T_{s}\\mathbb{L} \\label{infplring}\n\\end{equation}\n\n\\item Infinitesimal action of $\\mathfrak{p}$ on $\\mathbb{L}$, for any $s\\in \n\\mathbb{\\mathring{L}}$: \n\\begin{equation}\n\\gamma \\cdot A=\\left. 
\\frac{d}{dt}\\exp \\left( t\\gamma \\right) ^{\\prime\n}\\left( A\\right) \\right\\vert _{t=0}=\\left( R_{A}^{\\left( s\\right) }\\right)\n_{\\ast }\\hat{\\gamma}^{\\left( As\\right) }-\\left( L_{A}^{\\left( s\\right)\n}\\right) _{\\ast }\\hat{\\gamma}^{\\left( s\\right) }\\in T_{A}\\mathbb{L}.\n\\label{infpl}\n\\end{equation\nIn particular, if $s=1$, \n\\begin{equation}\n\\gamma \\cdot A=\\left( R_{A}\\right) _{\\ast }\\hat{\\gamma}^{\\left( A\\right)\n}-\\left( L_{A}\\right) _{\\ast }\\hat{\\gamma}^{\\left( 1\\right) }.\n\\label{infpl2}\n\\end{equation}\n\n\\item Action of $\\mathfrak{p}$ on $\\mathfrak{l\\ }$for any $s\\in \\mathbb\n\\mathring{L}}$\n\\begin{eqnarray}\n\\gamma \\cdot \\xi &=&\\left. \\frac{d}{dt}\\left( \\exp \\left( t\\gamma \\right)\n^{\\prime }\\right) _{\\ast }\\left( \\xi \\right) \\right\\vert _{t=0} \\notag \\\\\n&=&\\left. d\\hat{\\gamma}\\right\\vert _{s}\\left( \\rho _{s}\\left( \\xi \\right)\n\\right) +\\left[ \\hat{\\gamma}^{\\left( s\\right) },\\xi \\right] ^{\\left(\ns\\right) }. \\label{actpl}\n\\end{eqnarray\nIn particular, for $s=1$, we have \n\\begin{equation}\n\\gamma \\cdot \\xi =\\left. d\\hat{\\gamma}\\right\\vert _{1}\\left( \\xi \\right) \n\\left[ \\hat{\\gamma}^{\\left( 1\\right) },\\xi \\right] . \\label{actpl2}\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nLet $A,B\\in \\mathbb{L},$ $s\\in \\mathbb{\\mathring{L}}$, $\\xi ,\\eta \\in \n\\mathfrak{l}$, $h\\in \\Psi ^{R}\\left( \\mathbb{L}\\right) $, and $\\gamma \\in \n\\mathfrak{p}.$ Then we have the following.\n\n\\begin{enumerate}\n\\item The infinitesimal action of a Lie algebra is a standard definition.\n\n\\item Consider now the action of $\\mathfrak{p}$ on $\\mathbb{L}.$ Suppose now \n$\\gamma \\in \\mathfrak{p}$ and $A\\in \\mathbb{L}$ \n\\begin{equation}\n\\gamma ^{\\prime }\\cdot A=\\left. \\frac{d}{dt}\\left( \\exp \\left( t\\gamma\n\\right) ^{\\prime }\\right) \\left( A\\right) \\right\\vert _{t=0}.\n\\label{gammaprime}\n\\end{equation\nSuppose $h\\in \\Psi ^{R}\\left( \\mathbb{L},\\circ _{s}\\right) $, then by (\\re\n{PsAutoriso}), the action of $h$ on $A\\in \\mathbb{L}$ i\n\\begin{equation*}\nh\\left( A\\right) =h^{\\prime }\\left( A\\right) \\circ _{s}\\left(\n\\faktor{h\\left( s\\right)}{s}\\right)\n\\end{equation*\nThus, the partial action $h^{\\prime }\\left( A\\right) $ is given by \n\\begin{equation}\nh^{\\prime }\\left( A\\right) =\\left( \\faktor{h\\left( As\\right)}{s}\\right)\n\/_{s}\\left(\\faktor{ h\\left( s\\right)} {s}\\right) . \\label{hprimes}\n\\end{equation\nMoreover, \n\\begin{equation}\n\\faktor{h\\left( As\\right)}{s}=\\left(\\faktor{h\\left( As\\right)}{As}\\right)\n\\circ _{s}A. \\label{hprimes2}\n\\end{equation\nHence, substituting into (\\ref{gammaprime}), we have \n\\begin{eqnarray}\n\\gamma ^{\\prime }\\cdot A &=&\\left. \\frac{d}{dt}\\left( \\faktor{\\exp \\left(\nt\\gamma \\left( As\\right) \\right)}{As}\\circ _{s}A\\right) \/_{s} \\left(\n\\faktor{\\exp \\left( t\\gamma \\right) \\left( s\\right)}{s}\\right) \\right\\vert\n_{t=0} \\notag \\\\\n&=&\\left. \\frac{d}{dt}\\left( \\faktor{\\exp \\left( t\\gamma \\left( As\\right)\n\\right)}{As}\\circ _{s}A\\right) \\right\\vert _{t=0}-\\left. 
\\frac{d}{dt}A\\circ\n_{s}\\left( \\faktor{\\exp \\left( t\\gamma \\right) \\left( s\\right)}{s}\\right)\n\\right\\vert _{t=0} \\notag \\\\\n&=&\\left( R_{A}^{\\left( s\\right) }\\right) _{\\ast }\\hat{\\gamma}^{\\left(\nAs\\right) }-\\left( L_{A}^{\\left( s\\right) }\\right) _{\\ast }\\hat{\\gamma\n^{\\left( s\\right) }.\n\\end{eqnarray\nSetting $s=1$ immediately gives (\\ref{infpl2}).\n\n\\item Suppose now $\\gamma \\in \\mathfrak{p}$ and $\\xi \\in \\mathfrak{l}$, then\nwe have \n\\begin{eqnarray}\n\\gamma \\cdot \\xi &=&\\left. \\frac{d}{dt}\\left( \\exp \\left( t\\gamma \\right)\n^{\\prime }\\right) _{\\ast }\\left( \\xi \\right) \\right\\vert _{t=0} \\notag \\\\\n&=&\\left. \\frac{d^{2}}{dtd\\tau }\\exp \\left( t\\gamma \\right) ^{\\prime }\\left(\n\\exp _{s}\\tau \\xi \\right) \\right\\vert _{t,\\tau =0}. \\label{pactl1}\n\\end{eqnarray\nLet $\\Xi =\\exp _{s}\\tau \\xi \\in \\mathbb{L}$, then using (\\ref{hprimes}) and \n\\ref{hprimes2}), we can write \n\\begin{eqnarray*}\n\\exp \\left( t\\gamma \\right) ^{\\prime }\\left( \\exp _{s}\\tau \\xi \\right)\n&=&\\exp \\left( t\\gamma \\right) ^{\\prime }\\left( \\Xi \\right) \\\\\n&&\\left( \\exp \\left( t\\gamma \\right) \\left( \\Xi s\/\\Xi s\\circ _{s}\\Xi \\right)\n\\right) \/_{s}\\left( \\faktor{\\exp \\left( t\\gamma \\right) \\left( s\\right)}{s\n\\right) .\n\\end{eqnarray*\nUsing this, (\\ref{pactl1}) becomes \n\\begin{eqnarray}\n\\gamma ^{\\prime }\\cdot \\xi &=&\\left. \\frac{d^{2}}{dtd\\tau }\\faktor{\\left(\n\\exp \\left( t\\gamma \\right) \\left( \\left( \\exp _{s}\\tau \\xi \\right) s\\right)\n\\right)} {\\left( \\left( \\exp _{s}\\tau \\xi \\right) s\\circ _{s}\\exp _{s}\\tau\n\\xi \\right)}\\right\\vert _{t,\\tau =0} \\notag \\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\exp _{s}\\tau \\xi \\circ _{s}\\left(\n\\faktor{\\exp \\left( t\\gamma \\right) \\left( s\\right)}{s}\\right) \\right\\vert\n_{t,\\tau =0} \\notag \\\\\n&=&\\left. \\frac{d^{2}}{dtd\\tau }\\exp \\left( t\\gamma \\right) \\left( \\left(\n\\exp _{s}\\tau \\xi \\right) s\\right) \/\\left( \\exp _{s}\\tau \\xi \\right)\ns\\right\\vert _{t,\\tau =0}+ \\notag \\\\\n&&+\\left. \\frac{d^{2}}{dtd\\tau }\\left( \\faktor{\\exp \\left( t\\gamma \\right)\n\\left( s\\right)}{s}\\right) \\circ _{s}\\exp _{s}\\tau \\xi \\right\\vert _{t,\\tau\n=0} \\notag \\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\exp _{s}\\tau \\xi \\circ _{s}\\left(\n\\faktor{\\exp \\left( t\\gamma \\right) \\left( s\\right)}{s}\\right) \\right\\vert\n_{t,\\tau =0}\n\\end{eqnarray\nHowever $\\hat{\\gamma}^{\\left( s\\right) }=\\left. \\frac{d}{dt}\\exp \\left(\nt\\gamma \\right) \\left( s\\right) \/s\\right\\vert _{t=0}\\in \\mathfrak{l}$, and\nthus \n\\begin{eqnarray*}\n\\left. \\frac{d}{d\\tau }\\left( L_{\\exp _{s}\\tau \\xi }^{\\left( s\\right)\n}\\right) _{\\ast }\\hat{\\gamma}^{\\left( s\\right) }\\right\\vert _{\\tau =0}\n&=&\\left. \\frac{d^{2}}{dtd\\tau }\\left( \\exp _{s}\\tau \\xi \\right) \\circ\n_{s}\\exp _{s}\\left( t\\hat{\\gamma}^{\\left( s\\right) }\\right) \\right\\vert\n_{t,\\tau =0} \\\\\n\\left. \\frac{d}{d\\tau }\\left( R_{\\exp _{s}\\tau \\xi }^{\\left( s\\right)\n}\\right) _{\\ast }\\hat{\\gamma}^{\\left( s\\right) }\\right\\vert _{\\tau =0}\n&=&\\left. 
\\frac{d^{2}}{dtd\\tau }\\exp _{s}\\left( t\\hat{\\gamma}^{\\left(\ns\\right) }\\right) \\circ _{s}\\exp _{s}\\tau \\xi \\right\\vert _{t,\\tau =0}.\n\\end{eqnarray*\nHence, using the expression (\\ref{brack2deriv}) for $\\left[ \\cdot ,\\cdot\n\\right] ^{\\left( s\\right) },$ we get \n\\begin{equation}\n\\gamma ^{\\prime }\\cdot \\xi =\\left. \\frac{d}{d\\tau }\\hat{\\gamma}^{\\left( \\exp\n_{s}\\tau \\xi \\right) s}\\right\\vert _{\\tau =0}+\\left[ \\hat{\\gamma}^{\\left(\ns\\right) },\\xi \\right] ^{\\left( s\\right) }. \\label{gampri1}\n\\end{equation\nThe first term in (\\ref{gampri1}) is then precisely the differential of \n\\hat{\\gamma}$ at $s\\in \\mathbb{L}$ in the direction $\\left( R_{s}\\right)\n_{\\ast }\\xi .$ Setting $s=1$ we get (\\ref{actpl2}).\n\\end{enumerate}\n\\end{proof}\n\n\\begin{remark}\nSince the full action of $\\Psi ^{R}\\left( \\mathbb{L}\\right) $ does not\npreserve $1,$ the pushforward of the action of some $h\\in \\Psi ^{R}\\left( \n\\mathbb{L}\\right) $ sends $T_{1}\\mathbb{L}$ to $T_{A}\\mathbb{L}$, where \nA=h\\left( 1\\right) $ is the companion of $\\mathbb{L}.$ To actually obtain an\naction on $T_{1}\\mathbb{L},$ translation back to $1$ is needed. This can be\nachieved either by right or left division by $A$. Dividing by $A$ on the\nright reduces to the partial action of $\\Psi ^{R}\\left( \\mathbb{L}\\right) ,$\ni.e. action by $h^{\\prime }$. This is how the action of $\\mathfrak{p}$ on \n\\mathfrak{l}$ in (\\ref{actpl}) is defined. Dividing by $A$ on the left,\ngives the map $h^{\\prime \\prime }=\\func{Ad}_{A^{-1}}\\circ h^{\\prime }$, as\ndefined in (\\ref{nuclearaction}). In that setting, it was defined on the\nnucleus, and hence gave an actual group action of $\\Psi ^{R}\\left( \\mathbb{L\n\\right) $, however in a non-associative setting, in general this will not be\na group action.\n\\end{remark}\n\nCombining some of the above results, we also have the following useful\nrelationship.\n\n\\begin{lemma}\nSuppose $\\xi \\in \\mathfrak{p}$ and $\\eta ,\\gamma \\in \\mathfrak{l},$ then \n\\begin{equation}\n\\xi \\cdot \\left[ \\eta ,\\gamma \\right] ^{\\left( s\\right) }=\\left[ \\xi \\cdot\n\\eta ,\\gamma \\right] ^{\\left( s\\right) }+\\left[ \\eta ,\\xi \\cdot \\gamma\n\\right] ^{\\left( s\\right) }+a_{s}\\left( \\eta ,\\gamma ,\\varphi _{s}\\left( \\xi\n\\right) \\right) . \\label{xilbrack}\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nUsing the definition (\\ref{actpl}) of the action of $\\mathfrak{p}$ on \n\\mathfrak{l},$ we have \n\\begin{eqnarray*}\n\\xi \\cdot \\left[ \\eta ,\\gamma \\right] ^{\\left( s\\right) } &=&\\left. \\frac{d}\ndt}\\left( \\exp \\left( t\\xi \\right) ^{\\prime }\\right) _{\\ast }\\left[ \\eta\n,\\gamma \\right] ^{\\left( s\\right) }\\right\\vert _{t=0} \\\\\n&=&\\left. \\frac{d}{dt}\\left[ \\left( \\exp \\left( t\\xi \\right) ^{\\prime\n}\\right) _{\\ast }\\eta ,\\left( \\exp \\left( t\\xi \\right) ^{\\prime }\\right)\n_{\\ast }\\gamma \\right] ^{\\exp \\left( t\\xi \\right) \\left( s\\right)\n}\\right\\vert _{t=0}\n\\end{eqnarray*\nwhere we have also used (\\ref{loopalghom1}). Hence, \n\\begin{equation}\n\\xi \\cdot \\left[ \\eta ,\\gamma \\right] ^{\\left( s\\right) }=\\left[ \\xi \\cdot\n\\eta ,\\gamma \\right] ^{\\left( s\\right) }+\\left[ \\eta ,\\xi \\cdot \\gamma\n\\right] ^{\\left( s\\right) }+\\left. 
\\frac{d}{dt}\\left[ \\eta ,\\gamma \\right]\n^{\\exp \\left( t\\xi \\right) \\left( s\\right) }\\right\\vert _{t=0}.\n\\label{xilbrack2}\n\\end{equation\nWe can rewrite the last term in (\\ref{xilbrack2}) as \n\\begin{equation*}\n\\left. \\frac{d}{dt}\\left[ \\eta ,\\gamma \\right] ^{\\exp \\left( t\\xi \\right)\n\\left( s\\right) }\\right\\vert _{t=0}=\\left. \\frac{d}{dt}\\left[ \\eta ,\\gamma\n\\right] ^{\\exp _{s}\\left( t\\varphi _{s}\\left( \\xi \\right) \\right)\ns}\\right\\vert _{t=0}=\\left. d_{\\rho \\left( \\hat{\\xi}\\right) }b\\right\\vert\n_{s}\\left( \\eta ,\\gamma \\right)\n\\end{equation*\nwhere $\\hat{\\xi}=\\varphi _{s}\\left( \\xi \\right) $. Then, from (\\ref{db1}),\nwe see that \n\\begin{equation}\n\\left. d_{\\rho \\left( \\hat{\\xi}\\right) }b\\right\\vert _{s}\\left( \\eta ,\\gamma\n\\right) =a_{s}\\left( \\eta ,\\gamma ,\\hat{\\xi}\\right)\n\\end{equation\nand overall, we obtain (\\ref{xilbrack}).\n\\end{proof}\n\nRecall that for each $s\\in \\mathbb{\\mathring{L}}$, the bracket function \nb_{s}\\ $is in$\\ \\Lambda ^{2}\\mathfrak{l}^{\\ast }\\otimes \\mathfrak{l}$, which\nis a tensor product of $\\mathfrak{p}$-modules, so (\\ref{xilbrack}) can be\nused to define the action of $\\xi \\in \\mathfrak{p}$ on $b_{s}$. Using the\nderivation property of Lie algebra representations on tensor products, we\nfind that for $\\eta ,\\gamma \\in \\mathfrak{l},$ \n\\begin{eqnarray}\n\\left( \\xi \\cdot b_{s}\\right) \\left( \\eta ,\\gamma \\right) &=&\\xi \\cdot\n\\left( b_{s}\\left( \\eta ,\\gamma \\right) \\right) -b_{s}\\left( \\xi \\cdot \\eta\n,\\gamma \\right) -b_{s}\\left( \\eta ,\\xi \\cdot \\gamma \\right) \\notag \\\\\n&=&a_{s}\\left( \\eta ,\\gamma ,\\varphi _{s}\\left( \\xi \\right) \\right) .\n\\label{bsact}\n\\end{eqnarray}\n\n\\begin{definition}\nSuppose $\\mathfrak{g}$ is a Lie algebra with a representation on a vector\nspace $M$, so that $\\left( M,\\mathfrak{g}\\right) $ is a $\\mathfrak{g}\n-module. Then if $x\\in M$, define the \\emph{annihilator subalgebra }$\\func\nAnn}_{\\mathfrak{g}}\\left( x\\right) $ in $\\mathfrak{g}$ of $x$ as \n\\begin{equation}\n\\func{Ann}_{\\mathfrak{g}}\\left( x\\right) =\\left\\{ \\xi \\in \\mathfrak{g}:\\xi\n\\cdot x=0\\right\\} . \\label{anng}\n\\end{equation}\n\\end{definition}\n\nFrom (\\ref{bsact}), we see that \n\\begin{equation}\n\\func{Ann}_{\\mathfrak{p}}\\left( b_{s}\\right) =\\left\\{ \\xi \\in \\mathfrak{p\n:a_{s}\\left( \\eta ,\\gamma ,\\varphi _{s}\\left( \\xi \\right) \\right) =0\\ \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi\nfor all }\\eta ,\\gamma \\in \\mathfrak{l}\\right\\} . \\label{annpbs}\n\\end{equation\nThe definition (\\ref{annpbs}) is simply that $\\xi \\in $ $\\func{Ann}_\n\\mathfrak{p}}\\left( b_{s}\\right) \\ $if and only if $\\varphi _{s}\\left( \\xi\n\\right) \\in \\mathcal{N}^{R}\\left( \\mathfrak{l}^{\\left( s\\right) }\\right) ,$\nso that $\\func{Ann}_{\\mathfrak{p}}\\left( b_{s}\\right) =\\varphi\n_{s}^{-1}\\left( \\mathcal{N}^{R}\\left( \\mathfrak{l}^{\\left( s\\right) }\\right)\n\\right) .$ This is the Lie algebra that corresponds to the Lie group $\\func\nStab}_{\\Psi ^{R}\\left( \\mathbb{L}\\right) }\\left( b_{s}\\right) .$ Indeed, the\ncondition (\\ref{annpbs}) is precisely the infinitesimal version of (\\re\n{stabbrackcond}). 
If $\\mathbb{L}$ is a $G$-loop, so that $\\varphi _{s}\\left( \n\\mathfrak{p}\\right) =\\mathfrak{l}^{\\left( s\\right) },$ then $\\varphi\n_{s}\\left( \\func{Ann}_{\\mathfrak{p}}\\left( b_{s}\\right) \\right) =\\mathcal{N\n^{R}\\left( \\mathfrak{l}^{\\left( s\\right) }\\right) .$ Hence, in this case, \n\\func{Ann}_{\\mathfrak{p}}\\left( b_{s}\\right) \\cong \\mathfrak{h}_{s}\\oplus \n\\mathcal{N}^{R}\\left( \\mathfrak{l}^{\\left( s\\right) }\\right) $.\n\nUsing the definition (\\ref{phis}) of $\\varphi _{s}$, let us consider the\naction of $\\mathfrak{p}$ on $\\varphi _{s}.$\n\n\\begin{lemma}\n\\label{lempactl}Suppose $\\xi ,\\eta \\in \\mathfrak{p}$, then for any $s\\in \n\\mathbb{L}$, we have \n\\begin{equation}\n\\xi \\cdot \\varphi _{s}\\left( \\eta \\right) -\\eta \\cdot \\varphi _{s}\\left( \\xi\n\\right) =\\varphi _{s}\\left( \\left[ \\xi ,\\eta \\right] _{\\mathfrak{p}}\\right) \n\\left[ \\varphi _{s}\\left( \\xi \\right) ,\\varphi _{s}\\left( \\eta \\right)\n\\right] ^{\\left( s\\right) }, \\label{xiphi}\n\\end{equation\nwhere $\\cdot $ means the action of $\\mathfrak{p}$ on $\\mathfrak{l}.$\n\\end{lemma}\n\n\\begin{proof}\nUsing (\\ref{actpl}) and the definition (\\ref{phis}) of $\\varphi _{s}$, we\nhave \n\\begin{eqnarray}\n\\xi \\cdot \\varphi _{s}\\left( \\eta \\right) &=&\\left. \\frac{d^{2}}{dtd\\tau \n\\exp \\left( t\\xi \\right) ^{\\prime }\\left( \\faktor{\\exp \\left( \\tau \\eta\n\\right) \\left( s\\right)}{s}\\right) \\right\\vert _{t,\\tau =0} \\notag \\\\\n&=&\\left. \\frac{d^{2}}{dtd\\tau }\\faktor{\\exp \\left( t\\xi \\right) \\left( \\exp\n\\left( \\tau \\eta \\right) \\left( s\\right) \\right)}{\\exp \\left( t\\xi \\right)\n\\left( s\\right)} \\right\\vert _{t,\\tau =0} \\notag \\\\\n&=&\\left. \\frac{d^{2}}{dtd\\tau }\\exp \\left( t\\xi \\right) \\left( \\exp \\left(\n\\tau \\eta \\right) \\left( s\\right) \\right) \/s\\right\\vert _{t,\\tau =0} \\notag\n\\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\left( \\faktor{\\exp \\left( \\tau \\eta \\right)\n\\left( s\\right)}{s}\\cdot \\exp \\left( t\\xi \\right) \\left( s\\right) \\right)\n\/s\\right\\vert _{t,\\tau =0} \\notag \\\\\n&=&\\left. \\frac{d^{2}}{dtd\\tau }\\left( \\exp \\left( t\\xi \\right) \\exp \\left(\n\\tau \\eta \\right) \\right) \\left( s\\right) \/s\\right\\vert _{t,\\tau =0}\n\\label{xiphi1} \\\\\n&&-\\left. \\frac{d^{2}}{dtd\\tau }\\faktor{\\exp \\left( \\tau \\eta \\right) \\left(\ns\\right)}{s}\\circ _{s}\\faktor{\\exp \\left( t\\xi \\right) \\left( s\\right)} {s\n\\right\\vert _{t,\\tau =0}, \\notag\n\\end{eqnarray\nwhere we have used (\\ref{PsAutquot2a}) and Lemma \\ref{lemQuotient}. Now\nsubtracting the same expression but with $\\xi $ and $\\eta $ switched around,\nwe obtain (\\ref{xiphi}).\n\\end{proof}\n\n\\begin{remark}\nIn terms of the Chevalley-Eilenberg complex of $\\mathfrak{p}$ with values in \n$\\mathfrak{l},$ the relation (\\ref{xiphi}) shows that if we regard $\\varphi\n_{s}\\in C^{1}\\left( \\mathfrak{p};\\mathfrak{l}\\right) $, i.e. 
\begin{remark}
In terms of the Chevalley--Eilenberg complex of $\mathfrak{p}$ with values in $\mathfrak{l},$ the relation (\ref{xiphi}) shows that if we regard $\varphi _{s}\in C^{1}\left( \mathfrak{p};\mathfrak{l}\right) $, i.e., as a $1$-form on $\mathfrak{p}$ with values in $\mathfrak{l}$, then the Chevalley--Eilenberg differential $d_{CE}$ of $\varphi _{s}$ is given by
\begin{equation}
\left( d_{CE}\varphi _{s}\right) \left( \xi ,\eta \right) =\left[ \varphi _{s}\left( \xi \right) ,\varphi _{s}\left( \eta \right) \right] ^{\left( s\right) } \label{dCEphis}
\end{equation}
for any $\xi ,\eta \in \mathfrak{p}.$ It is interesting that, at least on $\mathfrak{q}_{s},$ the bracket $\left[ \cdot ,\cdot \right] ^{\left( s\right) }$ corresponds to an exact $2$-cochain.
\end{remark}
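Explicitly, this is a routine check: by the standard formula for the Chevalley--Eilenberg differential of a $1$-cochain with values in the $\mathfrak{p}$-module $\mathfrak{l}$,
\begin{equation*}
\left( d_{CE}\varphi _{s}\right) \left( \xi ,\eta \right) =\xi \cdot \varphi _{s}\left( \eta \right) -\eta \cdot \varphi _{s}\left( \xi \right) -\varphi _{s}\left( \left[ \xi ,\eta \right] _{\mathfrak{p}}\right) ,
\end{equation*}
which by (\ref{xiphi}) equals $\left[ \varphi _{s}\left( \xi \right) ,\varphi _{s}\left( \eta \right) \right] ^{\left( s\right) }$, giving (\ref{dCEphis}).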
Similarly, from (\ref{xiphi}) we can compute the action of $\xi \in \mathfrak{p}$ on $\varphi _{s}$, regarded as a $\mathfrak{p}^{\ast }\otimes \mathfrak{l}$-valued map. Indeed, given $\xi ,\eta \in \mathfrak{p},$ we have
\begin{eqnarray}
\left( \xi \cdot \varphi _{s}\right) \left( \eta \right) &=&\xi \cdot \varphi _{s}\left( \eta \right) -\varphi _{s}\left( \left[ \xi ,\eta \right] _{\mathfrak{p}}\right) \notag \\
&=&\eta \cdot \varphi _{s}\left( \xi \right) -\left[ \varphi _{s}\left( \eta \right) ,\varphi _{s}\left( \xi \right) \right] ^{\left( s\right) },
\label{xiphi3}
\end{eqnarray}
where we have first used the fact that $\mathfrak{p}$ acts on itself via the adjoint representation, and then (\ref{xiphi}) in the second line.

Let us now consider $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) .$ From (\ref{xiphi3}), we see that we have two equivalent characterizations of $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) .$ In particular, $\xi \in \func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) $ if and only if
\begin{equation}
\xi \cdot \hat{\eta}=\varphi _{s}\left( \left[ \xi ,\eta \right] _{\mathfrak{p}}\right) , \label{xiphieta}
\end{equation}
or equivalently, for $\xi \not\in \mathfrak{h}_{s},$ if and only if
\begin{equation}
\eta \cdot \hat{\xi}=\left[ \hat{\eta},\hat{\xi}\right] ^{\left( s\right) },
\label{etaphixi}
\end{equation}
for any $\eta \in \mathfrak{p}.$ Here we are again setting $\hat{\xi}=\varphi _{s}\left( \xi \right) $ and $\hat{\eta}=\varphi _{s}\left( \eta \right) .$ In particular, (\ref{xiphieta}) shows that $\mathfrak{q}_{s}$ is a representation of $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) .$ Suppose now $\xi _{1},\xi _{2}\in \func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) ;$ then, using (\ref{xiphieta}) and (\ref{etaphixi}), we find that
\begin{equation}
\varphi _{s}\left( \left[ \xi _{1},\xi _{2}\right] _{\mathfrak{p}}\right) =\xi _{1}\cdot \hat{\xi}_{2}=\left[ \hat{\xi}_{1},\hat{\xi}_{2}\right] ^{\left( s\right) }. \label{annpphi}
\end{equation}
Therefore, $\varphi _{s}\left( \func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) \right) $ is a Lie subalgebra of $\mathfrak{l}^{\left( s\right) }$, with $\varphi _{s}$ being a Lie algebra homomorphism. The kernel $\mathfrak{h}_{s}=\ker \varphi _{s}$ is then of course an ideal of $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) .$ Thus, the quotient $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) /\mathfrak{h}_{s}$ is again a Lie algebra, and hence $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) $ is a trivial Lie algebra extension of $\mathfrak{h}_{s}$. Moreover, note that the Lie algebra $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) $ corresponds to the Lie group $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( \varphi _{s}\right) $, and thus, if $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ and $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( \varphi _{s}\right) $ are both connected, then we see that $\func{Aut}\left( \mathbb{L},\circ _{s}\right) $ is a normal subgroup of $\func{Stab}_{\Psi ^{R}\left( \mathbb{L}\right) }\left( \varphi _{s}\right) .$

In the special case when $\mathbb{L}$ is a $G$-loop, we get a nice property of $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) .$

\begin{theorem}
Suppose $\mathbb{L}$ is a $G$-loop. Then $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) \subset \func{Ann}_{\mathfrak{p}}\left( b_{s}\right) .$
\end{theorem}

\begin{proof}
Suppose $\xi \in \func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) $ and let $\eta ,\gamma \in \mathfrak{p}$. Consider
\begin{eqnarray*}
\left[ \gamma ,\eta \right] _{\mathfrak{p}}\cdot \hat{\xi} &=&\gamma \cdot \left( \eta \cdot \hat{\xi}\right) -\eta \cdot \left( \gamma \cdot \hat{\xi}\right) \\
&=&\gamma \cdot \left[ \hat{\eta},\hat{\xi}\right] ^{\left( s\right) }-\eta \cdot \left[ \hat{\gamma},\hat{\xi}\right] ^{\left( s\right) } \\
&=&\left[ \gamma \cdot \hat{\eta},\hat{\xi}\right] ^{\left( s\right) }+\left[ \hat{\eta},\gamma \cdot \hat{\xi}\right] ^{\left( s\right) }+a_{s}\left( \hat{\eta},\hat{\xi},\hat{\gamma}\right) \\
&&-\left[ \eta \cdot \hat{\gamma},\hat{\xi}\right] ^{\left( s\right) }-\left[ \hat{\gamma},\eta \cdot \hat{\xi}\right] ^{\left( s\right) }-a_{s}\left( \hat{\gamma},\hat{\xi},\hat{\eta}\right) \\
&=&\left[ \varphi _{s}\left( \left[ \gamma ,\eta \right] _{\mathfrak{p}}\right) ,\hat{\xi}\right] ^{\left( s\right) }+\left[ \left[ \hat{\gamma},\hat{\eta}\right] ^{\left( s\right) },\hat{\xi}\right] ^{\left( s\right) }+\left[ \hat{\eta},\left[ \hat{\gamma},\hat{\xi}\right] ^{\left( s\right) }\right] ^{\left( s\right) } \\
&&-\left[ \hat{\gamma},\left[ \hat{\eta},\hat{\xi}\right] ^{\left( s\right) }\right] ^{\left( s\right) } \\
&&+a_{s}\left( \hat{\eta},\hat{\xi},\hat{\gamma}\right) -a_{s}\left( \hat{\gamma},\hat{\xi},\hat{\eta}\right) \\
&=&\left[ \gamma ,\eta \right] _{\mathfrak{p}}\cdot \hat{\xi}-a_{s}\left( \hat{\gamma},\hat{\xi},\hat{\eta}\right) ,
\end{eqnarray*}
where we have used (\ref{etaphixi}), (\ref{xilbrack}), (\ref{xiphi}), and the Akivis identity (\ref{Jac2}). We hence find that
\begin{equation}
a_{s}\left( \hat{\gamma},\hat{\xi},\hat{\eta}\right) =0. \label{annphicond}
\end{equation}
We know that if $\mathbb{L}$ is a $G$-loop, then $\mathfrak{l}^{\left( s\right) }=\varphi _{s}\left( \mathfrak{p}\right) ,$ and thus the condition (\ref{annphicond}) is the same as (\ref{annpbs}), that is, $\xi \in \func{Ann}_{\mathfrak{p}}\left( b_{s}\right) .$
\end{proof}
\label{annphicond}
\end{equation}
We know that if $\mathbb{L}$ is a $G$-loop, then $\mathfrak{l}^{\left(
s\right) }=\varphi _{s}\left( \mathfrak{p}\right) ,$ and thus the condition
(\ref{annphicond}) is the same as (\ref{annpbs}), that is, $\xi \in \func{Ann}_{\mathfrak{p}}\left( b_{s}\right) .$
\end{proof}

\begin{remark}
Overall, if $\mathbb{L}$ is a $G$-loop, we have the following inclusions of
Lie algebras
\begin{equation}
\ker \varphi _{s}=\mathfrak{h}_{s}\underset{\text{ideal}}{\subset }\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) \subset \func{Ann}_{\mathfrak{p}}\left( b_{s}\right) \cong \mathfrak{h}_{s}\oplus \mathcal{N}^{R}\left(
\mathfrak{l}^{\left( s\right) }\right) \subset \mathfrak{p}.  \label{lieseq}
\end{equation}
If we look at the octonion case, with $\mathbb{L}=U\mathbb{O},$ then $\mathfrak{p}=\mathfrak{so}\left( 7\right) $ and $\mathfrak{h}_{s}\cong \mathfrak{g}_{2}.$
Moreover, in this case, $\mathcal{N}^{R}\left( \mathfrak{l}\right) =\left\{
0\right\} $, so we must have $\mathfrak{h}_{s}=\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) =\func{Ann}_{\mathfrak{p}}\left( b_{s}\right) .$
This also makes sense because in this case, $\varphi _{s}$ and $b_{s}$ are
essentially the same objects, and moreover, almost uniquely determine $s$
(up to $\pm 1$). At the other extreme, if $\mathbb{L}$ is associative, so
that $\mathcal{N}^{R}\left( \mathfrak{l}\right) =\mathfrak{l},$ then $\func{Ann}_{\mathfrak{p}}\left( b_{s}\right) =\mathfrak{p},$ but $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) $ does not have to equal $\func{Ann}_{\mathfrak{p}}\left( b_{s}\right) .$
\end{remark}

\begin{example}
\label{ExNormedDiv2}Using the setup from Examples \ref{ExNormedDiv}, \ref{exCx2}, and \ref{exQuat2}, suppose $\mathbb{L}=U\mathbb{C}$ with $\Psi
_{n}^{R}\left( U\mathbb{C}\right) =U\left( n\right) $ or $\mathbb{L}=U\mathbb{H}$ with $\Psi _{n}^{R}\left( U\mathbb{H}\right) =Sp\left( n\right)
Sp\left( 1\right) $. Then, since the partial action of $\Psi _{n}^{R}$ in each
case is trivial, from (\ref{pactl1}), we see that the action of each
Lie algebra $\mathfrak{p}_{n}$ on $\mathfrak{l}$ is trivial. In the complex
case, $\mathfrak{l}\cong \mathbb{R}$, and is thus abelian. Hence, from (\ref{xiphi3}), we see that in this case $\xi \cdot \varphi _{s}=0$ for each $\xi
\in \mathfrak{p}_{n}.$ This makes sense because in Example \ref{exCx2} we noted
that $\varphi _{s}$ does not depend on $s$ in the complex case. In the
quaternion case, (\ref{xiphi3}) shows that if $\xi ,\eta \in \mathfrak{sp}\left( n\right) \oplus \mathfrak{sp}\left( 1\right) =\mathfrak{p}_{n}$, then
\begin{eqnarray}
\left( \xi \cdot \varphi _{s}\right) \left( \eta \right) &=&-\varphi
_{s}\left( \left[ \xi ,\eta \right] _{\mathfrak{p}_{n}}\right)  \notag \\
&=&-\left[ \xi _{1},\eta _{1}\right] _{\func{Im}\mathbb{H}},
\label{quatbrack}
\end{eqnarray}
where $\xi _{1},\eta _{1}$ are the $\mathfrak{sp}\left( 1\right) $
components of $\xi $ and $\eta ,$ and $\left[ \cdot ,\cdot \right] _{\func{Im}\mathbb{H}}$ is the bracket on $\func{Im}\mathbb{H}$ (and equivalently on $\mathfrak{sp}\left( 1\right) $).
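Since the center of $\mathfrak{sp}\left( 1\right) $ is trivial, the
right-hand side of (\ref{quatbrack}) vanishes for every $\eta $ precisely
when $\xi _{1}=0.$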
In particular, $\\func{Ann}_{\\mathfrak{p\n_{n}}\\left( \\varphi _{s}\\right) =\\mathfrak{sp}\\left( n\\right) .$\n\\end{example}\n\nNote that, while it is known that any simple (i.e. has no nontrivial proper\nnormal subloops) Moufang loop is a $G$-loop, it is not known whether there\nare simple Bol loops that are not $G$-loops \\cite{NagyLoop}. On the other\nhand, there is an example of a Bol loop that is a $G$-loop but is not a\nMoufang loop \\cite{Robinson68}. That particular example is constructed from\nan alternative division ring, but if that is taken to be $\\mathbb{O},$ we\nobtain a smooth loop.\n\n\\subsection{Killing form}\n\n\\label{sectKilling}Similarly as for Lie groups, we may define a Killing form \n$K^{\\left( s\\right) }$ on $\\mathfrak{l}^{\\left( s\\right) }$. For $\\xi ,\\eta\n\\in \\mathfrak{l}$, we have \n\\begin{equation}\nK^{\\left( s\\right) }\\left( \\xi ,\\eta \\right) =\\func{Tr}\\left( \\func{ad}_{\\xi\n}^{\\left( s\\right) }\\circ \\func{ad}_{\\eta }^{\\left( s\\right) }\\right) ,\n\\label{Killing}\n\\end{equation\nwhere $\\circ $ is just composition of linear maps on $\\mathfrak{l}$ and \n\\func{ad}_{\\xi }^{\\left( s\\right) }\\left( \\cdot \\right) =\\left[ \\xi ,\\cdot\n\\right] ^{\\left( s\\right) },$ as in (\\ref{ladpx}). Clearly $K^{\\left(\ns\\right) }$ is a symmetric bilinear form on $\\mathfrak{l.}$ Given the form \nK^{\\left( s\\right) }$ on $\\mathfrak{l}$, we can extend it to a\n\\textquotedblleft right-invariant\\textquotedblright\\ form $\\left\\langle\n{}\\right\\rangle ^{\\left( s\\right) }$ on $\\mathbb{L}$ via right translation,\nso that for vector fields $X,Y$ on $\\mathbb{L}$, \n\\begin{equation}\n\\left\\langle X,Y\\right\\rangle _{\\mathbb{L}}^{\\left( s\\right) }=K^{\\left(\ns\\right) }\\left( \\theta \\left( X\\right) ,\\theta \\left( Y\\right) \\right) .\n\\label{Killing2}\n\\end{equation}\n\n\\begin{theorem}\n\\label{thmKillingprop}The bilinear form $K^{\\left( s\\right) }$ (\\ref{Killing\n) on $\\mathfrak{l}$ has the following properties.\n\n\\begin{enumerate}\n\\item Let $h\\in \\Psi ^{R}\\left( \\mathbb{L}\\right) $, then for any $\\xi ,\\eta\n\\in \\mathfrak{l},$ \n\\begin{equation}\nK^{\\left( h\\left( s\\right) \\right) }\\left( h_{\\ast }^{\\prime }\\xi ,h_{\\ast\n}^{\\prime }\\eta \\right) =K^{\\left( s\\right) }\\left( \\xi ,\\eta \\right) .\n\\label{Kpsi}\n\\end{equation}\n\n\\item Suppose also $\\gamma \\in \\mathfrak{l,}$ then \n\\begin{eqnarray}\nK^{\\left( s\\right) }\\left( \\func{ad}_{\\gamma }^{\\left( s\\right) }\\eta ,\\xi\n\\right) &=&-K^{\\left( s\\right) }\\left( \\eta ,\\func{ad}_{\\gamma }^{\\left(\ns\\right) }\\xi \\right) +\\func{Tr}\\left( \\func{Jac}_{\\xi ,\\gamma }^{\\left(\ns\\right) }\\circ \\func{ad}_{\\eta }^{\\left( s\\right) }\\right) \\notag \\\\\n&&+\\func{Tr}\\left( \\func{Jac}_{\\eta ,\\gamma }^{\\left( s\\right) }\\circ \\func\nad}_{\\xi }^{\\left( s\\right) }\\right) , \\label{Kad}\n\\end{eqnarray\nwhere $\\func{Jac}_{\\gamma ,\\xi }^{\\left( s\\right) }:\\mathfrak{l\n\\longrightarrow \\mathfrak{l}$ is given by $\\func{Jac}_{\\eta ,\\gamma\n}^{\\left( s\\right) }\\left( \\xi \\right) =\\func{Jac}^{\\left( s\\right) }\\left(\n\\xi ,\\eta ,\\gamma \\right) .$\n\n\\item Let $\\alpha \\in \\mathfrak{p},$ then \n\\begin{eqnarray}\nK^{\\left( s\\right) }\\left( \\alpha \\cdot \\xi ,\\eta \\right) &=&-K^{\\left(\ns\\right) }\\left( \\xi ,\\alpha \\cdot \\eta \\right) +\\func{Tr}\\left( a_{\\eta \n\\hat{\\alpha}}^{\\left( s\\right) }\\circ \\func{ad}_{\\xi }^{\\left( s\\right)\n}\\right) 
\label{Klie} \\
&&+\func{Tr}\left( a_{\xi ,\hat{\alpha}}^{\left( s\right) }\circ \func{ad}_{\eta }^{\left( s\right) }\right) ,  \notag
\end{eqnarray}
where $a_{\xi ,\eta }^{\left( s\right) }:\mathfrak{l}\longrightarrow
\mathfrak{l}$ is given by $a_{\xi ,\eta }^{\left( s\right) }\left( \gamma
\right) =\left[ \gamma ,\xi ,\eta \right] ^{\left( s\right) }-\left[ \xi
,\gamma ,\eta \right] ^{\left( s\right) }$ and $\hat{\alpha}=\varphi
_{s}\left( \alpha \right) $.
\end{enumerate}
\end{theorem}

The proof of Theorem \ref{thmKillingprop} is given in Appendix \ref{secAppendix}.

\begin{remark}
If $\left( \mathbb{L},\circ _{s}\right) $ is an alternative loop, we know
that $\func{Jac}_{\eta ,\gamma }^{\left( s\right) }=3a^{\left( s\right) },$
so in that case, $K^{\left( s\right) }$ is invariant with respect to both
$\func{ad}^{\left( s\right) }$ and the action of $\mathfrak{p}$ if and only
if
\begin{equation}
\func{Tr}\left( a_{\eta ,\hat{\alpha}}^{\left( s\right) }\circ \func{ad}_{\xi }^{\left( s\right) }\right) +\func{Tr}\left( a_{\xi ,\hat{\alpha}}^{\left( s\right) }\circ \func{ad}_{\eta }^{\left( s\right) }\right) =0.
\end{equation}
Indeed, in \cite{SagleMalcev}, it is shown that for a Malcev algebra, the
Killing form is $\func{ad}$-invariant. A Malcev algebra is alternative, and
hence the Killing form is also $\mathfrak{p}$-invariant in that case.
Moreover, it is shown in \cite{LoosMalcev} that for a \emph{semisimple} Malcev
algebra, the Killing form is non-degenerate. Here the definition of
\textquotedblleft semisimple\textquotedblright\ is the same as for Lie
algebras, namely that the maximal solvable ideal is zero. Indeed, given the
algebra of imaginary octonions on $\mathbb{R}^{7},$ it is known that the
corresponding Killing form is negative-definite \cite{BaezOcto}. Moreover,
since in this case the pseudoautomorphism group is $SO\left( 7\right) ,$
(\ref{Kpsi}) actually shows that $K^{\left( h\left( s\right) \right) }=K^{\left( s\right) }$ for every $h$, and thus $K^{\left( s\right) }$ is independent of $s$. General criteria for a loop algebra to
admit an invariant definite (or even just non-degenerate) Killing form do
not seem to appear in the literature, and could be the subject of further
study. At least for well-behaved loops, such as Malcev loops, it is likely
that there is significant similarity to Lie groups.
\end{remark}

Suppose now $K^{\left( s\right) }$ is nondegenerate and both $\func{ad}^{\left( s\right) }$- and $\mathfrak{p}$-invariant, and moreover suppose $\mathfrak{p}$ is itself semisimple, so that it has a nondegenerate,
invariant Killing form $K_{\mathfrak{p}}.$ We will use $\left\langle
{}\right\rangle ^{\left( s\right) }$ and $\left\langle {}\right\rangle _{\mathfrak{p}}$ to denote the inner products given by $K^{\left( s\right) }$ and
$K_{\mathfrak{p}},$ respectively.
Then, given the map $\varphi _{s}:\mathfrak{p}\longrightarrow \mathfrak{l}^{\left( s\right) }$, we can define
its adjoint with respect to these two bilinear forms.

\begin{definition}
Define the map $\varphi _{s}^{t}:\mathfrak{l}^{\left( s\right)
}\longrightarrow \mathfrak{p}$ such that for any $\xi \in \mathfrak{l}^{\left( s\right) }$ and $\eta \in \mathfrak{p}$,
\begin{equation}
\left\langle \varphi _{s}^{t}\left( \xi \right) ,\eta \right\rangle _{\mathfrak{p}}=\left\langle \xi ,\varphi _{s}\left( \eta \right)
\right\rangle ^{\left( s\right) }.  \label{phiadj}
\end{equation}
\end{definition}

Since $\mathfrak{h}_{s}\cong \ker \varphi _{s}$, we then clearly have $\mathfrak{p}\cong \mathfrak{h}_{s}\oplus \func{Im}\varphi _{s}^{t}$, so that $\mathfrak{h}_{s}^{\perp }=\func{Im}\varphi _{s}^{t}.$ On the other hand, we also have $\mathfrak{l}^{\left( s\right) }\cong \ker \varphi _{s}^{t}\oplus \mathfrak{q}_{s}$, since $\mathfrak{q}_{s}=\func{Im}\varphi _{s}.$ Define the
corresponding projections $\pi _{\mathfrak{h}_{s}},\pi _{\mathfrak{h}_{s}^{\perp }}$ and $\pi _{\mathfrak{q}_{s}},\pi _{\mathfrak{q}_{s}^{\perp }}.$ We then have the following properties.

\begin{lemma}
\label{lemphisphist}Suppose $\mathfrak{q}_{s}$ is an irreducible
representation of $\mathfrak{h}_{s}$ and suppose the base field of $\mathfrak{p}$ is $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}.$ Then, there exists a $\lambda _{s}\in \mathbb{F}$ such that
\begin{equation}
\varphi _{s}\varphi _{s}^{t}=\lambda _{s}\pi _{\mathfrak{q}_{s}}\ \ \text{and }\varphi _{s}^{t}\varphi _{s}=\lambda _{s}\pi _{\mathfrak{h}_{s}^{\perp }}.
\label{phistphis}
\end{equation}
Moreover, for any $h\in \Psi ^{R}\left( \mathbb{L}\right) $, $\lambda
_{s}=\lambda _{h\left( s\right) }$.
\end{lemma}

\begin{proof}
Let $\gamma ,\eta \in \mathfrak{p}$ and $\xi \in \mathfrak{l}^{\left(
s\right) }$; then using (\ref{xiphi3}),
\begin{eqnarray}
\left\langle \left( \gamma \cdot \varphi _{s}^{t}\right) \left( \xi \right)
,\eta \right\rangle _{\mathfrak{p}} &=&\left\langle \left[ \gamma ,\varphi
_{s}^{t}\left( \xi \right) \right] _{\mathfrak{p}},\eta \right\rangle _{\mathfrak{p}}-\left\langle \varphi _{s}^{t}\left( \gamma \cdot \xi \right)
,\eta \right\rangle _{\mathfrak{p}}  \notag \\
&=&-\left\langle \varphi _{s}^{t}\left( \xi \right) ,\left[ \gamma ,\eta
\right] _{\mathfrak{p}}\right\rangle _{\mathfrak{p}}-\left\langle \gamma \cdot \xi ,\varphi
_{s}\left( \eta \right) \right\rangle ^{\left( s\right) }  \notag \\
&=&\left\langle \xi ,\gamma \cdot \varphi _{s}\left( \eta \right) -\varphi
_{s}\left( \left[ \gamma ,\eta \right] _{\mathfrak{p}}\right) \right\rangle
^{\left( s\right) }  \notag \\
&=&\left\langle \xi ,\left( \gamma \cdot \varphi _{s}\right) \left( \eta
\right) \right\rangle ^{\left( s\right) },  \label{gammaphit}
\end{eqnarray}
so in particular, $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) \subset
\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}^{t}\right) .$ Thus, the map $\varphi _{s}\varphi _{s}^{t}:\mathfrak{l}^{\left( s\right) }\longrightarrow
\mathfrak{l}^{\left( s\right) }$ is an equivariant map of representations of
the Lie subalgebra $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right)
\subset \mathfrak{p}$ and is moreover self-adjoint with respect to $\left\langle {}\right\rangle ^{\left( s\right) }.$ We can also restrict this
map to $\mathfrak{q}_{s},$ which is also a representation of $\func{Ann}_{\mathfrak{p}}\left( \varphi _{s}\right) $, and in particular of $\mathfrak{h}_{s}.$ Hence, if $\mathfrak{q}_{s}$ is an irreducible representation of
$\mathfrak{h}_{s},$ since $\varphi _{s}\varphi _{s}^{t}$ is diagonalizable
(in general if $\mathbb{C}$ is the base field, or because it is symmetric if
the base field is $\mathbb{R}$), by Schur's Lemma, there exists some number $\lambda _{s}\neq 0$ such that
\begin{equation}
\left. \varphi _{s}\varphi _{s}^{t}\right\vert _{\mathfrak{q}_{s}}=\lambda _{s}\func{id}_{\mathfrak{q}_{s}}.
\label{phistid}
\end{equation}
Applying $\varphi _{s}^{t}$ to (\ref{phistid}), we also obtain
\begin{equation}
\left.
\\varphi _{s}^{t}\\varphi _{s}\\right\\vert _{\\mathfrak{h}_{s}^{\\perp\n}}=\\lambda _{s}\\func{id}_{\\mathfrak{h}_{s}^{\\perp }}.\n\\end{equation\nSince $\\varphi _{s}^{t}$ and $\\varphi _{s}$ vanish on $\\mathfrak{q\n_{s}^{\\perp }$ and $\\mathfrak{h}_{s}$, respectively, we obtain (\\re\n{phistphis}).\n\nLet $h\\in \\Psi ^{R}\\left( \\mathbb{L}\\right) $, then from (\\ref{phihs}),\nrecall that \n\\begin{equation}\n\\varphi _{h\\left( s\\right) }=\\left( h^{\\prime }\\right) _{\\ast }\\circ \\varphi\n_{s}\\circ \\left( \\func{Ad}_{h}^{-1}\\right) _{\\ast }.\n\\end{equation\nIt is then easy to see using (\\ref{Kpsi}) and the invariance of the Killing\nform on $\\mathfrak{p}$ that \n\\begin{equation}\n\\varphi _{h\\left( s\\right) }^{t}=\\left( \\func{Ad}_{h}\\right) _{\\ast }\\circ\n\\varphi _{s}^{t}\\circ \\left( h^{\\prime }\\right) _{\\ast }^{-1}.\n\\label{phiths}\n\\end{equation\nIn particular, we see that \n\\begin{equation*}\n\\left( h^{\\prime }\\right) _{\\ast }\\mathfrak{q}_{s}=\\mathfrak{q}_{h\\left(\ns\\right) }\\ \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{and }\\left( \\func{Ad}_{h}\\right) _{\\ast }\\mathfrak{h\n_{s}^{\\perp }=\\mathfrak{h}_{s}.\n\\end{equation*\nHence, \n\\begin{eqnarray*}\n\\left. \\varphi _{h\\left( s\\right) }\\varphi _{h\\left( s\\right)\n}^{t}\\right\\vert _{\\mathfrak{q}_{h\\left( s\\right) }} &=&\\left. \\left(\nh^{\\prime }\\right) _{\\ast }\\circ \\varphi _{s}\\varphi _{s}^{t}\\circ \\left(\nh^{\\prime }\\right) _{\\ast }^{-1}\\right\\vert _{\\mathfrak{q}_{h\\left( s\\right)\n}} \\\\\n&=&\\lambda _{s}\\func{id}_{\\mathfrak{q}_{h\\left( s\\right) }}\n\\end{eqnarray*\nand so indeed, $\\lambda _{s}=\\lambda _{h\\left( s\\right) }.$\n\\end{proof}\n\n\\begin{example}\nIn the case of octonions, suppose we set $\\varphi _{s}\\left( \\eta \\right)\n_{a}=k\\varphi _{abc}\\eta ^{bc}$ where $\\eta \\in \\mathfrak{so}\\left( 7\\right)\n\\cong \\Lambda ^{2}\\left( \\mathbb{R}^{7}\\right) ^{\\ast }$, $\\varphi $ is the\ndefining $3$-form on $\\mathbb{R}^{7},$ and $k\\in \\mathbb{R}$ is some\nconstant. Then, $\\varphi _{s}^{t}\\left( \\gamma \\right) _{ab}=k\\varphi\n_{abc}\\gamma ^{c}$ where $\\gamma \\in \\mathbb{R}^{7}\\cong \\func{Im}\\mathbb{O\n. $ Now, $\\mathbb{R}^{7}$ is an irreducible representation of $\\mathfrak{g\n_{2} $, so the hypothesis of Lemma \\ref{lemphisphist} is satisfied. 
so that in this case, $\lambda _{s}=6k^{2}.$
\end{example}

Consider the action of $\varphi _{s}^{t}\left( \mathfrak{l}^{\left( s\right)
}\right) \subset \mathfrak{p}$ on $\mathfrak{q}_{s}.$ Let $\xi ,\eta \in
\mathfrak{q}_{s}$; then from (\ref{xiphi}),
\begin{equation}
\varphi _{s}^{t}\left( \xi \right) \cdot \varphi _{s}\varphi _{s}^{t}\left(
\eta \right) -\varphi _{s}^{t}\left( \eta \right) \cdot \varphi _{s}\varphi
_{s}^{t}\left( \xi \right) =\varphi _{s}\left( \left[ \varphi _{s}^{t}\left(
\xi \right) ,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}\right) +\left[ \varphi _{s}\varphi _{s}^{t}\left( \xi \right) ,\varphi
_{s}\varphi _{s}^{t}\left( \eta \right) \right] ^{\left( s\right) },
\end{equation}
and thus,
\begin{equation}
\varphi _{s}^{t}\left( \xi \right) \cdot \eta -\varphi _{s}^{t}\left( \eta
\right) \cdot \xi =\frac{1}{\lambda _{s}}\varphi _{s}\left( \left[ \varphi
_{s}^{t}\left( \xi \right) ,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}\right) +\lambda _{s}\left[ \xi ,\eta \right] ^{\left( s\right)
}.  \label{phistxi}
\end{equation}
We now show that $\varphi _{s}^{t}\left( \xi \right) \cdot \eta $ is
skew-symmetric when restricted to $\mathfrak{q}_{s}$ and then projected back
to $\mathfrak{q}_{s}.$

\begin{lemma}
\label{lemPhibrack}Suppose $\mathbb{L}$ is a loop and $s\in \mathbb{L}$ is
such that the Killing form is non-degenerate and $\func{ad}^{\left( s\right)
}$- and $\mathfrak{p}$-invariant. Then, for any $\xi ,\eta \in \mathfrak{q}_{s}$,
\begin{equation}
\pi _{\mathfrak{q}_{s}}\left( \varphi _{s}^{t}\left( \xi \right) \cdot \eta
\right) =-\pi _{\mathfrak{q}_{s}}\left( \varphi _{s}^{t}\left( \eta \right)
\cdot \xi \right) .
\label{piqsphit}
\end{equation}
\end{lemma}

\begin{proof}
Suppose $\xi ,\eta \in \mathfrak{q}_{s}$; then using the $\func{ad}^{\left(
s\right) }$- and $\mathfrak{p}$-invariance of the Killing form on $\mathfrak{l}^{\left( s\right) }$ and (\ref{phistxi}), we have
\begin{eqnarray*}
\left\langle \varphi _{s}^{t}\left( \eta \right) \cdot \eta ,\xi
\right\rangle ^{\left( s\right) } &=&-\left\langle \eta ,\varphi
_{s}^{t}\left( \eta \right) \cdot \xi \right\rangle ^{\left( s\right) } \\
&=&-\left\langle \eta ,\varphi _{s}^{t}\left( \xi \right) \cdot \eta -\frac{1}{\lambda _{s}}\varphi _{s}\left( \left[ \varphi _{s}^{t}\left( \xi \right)
,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}\right) -\lambda
_{s}\left[ \xi ,\eta \right] ^{\left( s\right) }\right\rangle ^{\left(
s\right) } \\
&=&-\left\langle \eta ,\varphi _{s}^{t}\left( \xi \right) \cdot \eta
\right\rangle ^{\left( s\right) }+\frac{1}{\lambda _{s}}\left\langle \varphi
_{s}^{t}\left( \eta \right) ,\left[ \varphi _{s}^{t}\left( \xi \right)
,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}\right\rangle _{\mathfrak{p}} \\
&&-\lambda _{s}\left\langle \left[ \eta ,\eta \right] ^{\left( s\right)
},\xi \right\rangle ^{\left( s\right) } \\
&=&-\left\langle \eta ,\varphi _{s}^{t}\left( \xi \right) \cdot \eta
\right\rangle ^{\left( s\right) }=\left\langle \varphi _{s}^{t}\left( \xi
\right) \cdot \eta ,\eta \right\rangle ^{\left( s\right) } \\
&=&0.
\end{eqnarray*}
Thus, we see that $\pi _{\mathfrak{q}_{s}}\left( \varphi _{s}^{t}\left( \eta
\right) \cdot \eta \right) =0,$ and hence (\ref{piqsphit}) holds.
\end{proof}

Taking the $\pi _{\mathfrak{q}_{s}}$ projection of (\ref{phistxi}) gives
\begin{equation}
\pi _{\mathfrak{q}_{s}}\left( \varphi _{s}^{t}\left( \xi \right) \cdot \eta
\right) =\frac{1}{2\lambda _{s}}\varphi _{s}\left( \left[ \varphi
_{s}^{t}\left( \xi \right) ,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}+\lambda _{s}\varphi _{s}^{t}\left( \left[ \xi ,\eta \right]
^{\left( s\right) }\right) \right) .  \label{piqsact}
\end{equation}
The relation (\ref{piqsact}) suggests that we can define a new bracket $\left[ \cdot ,\cdot \right] _{\varphi _{s}}$ on $\mathfrak{l}^{\left(
s\right) }$ using $\varphi _{s}$.

\begin{definition}
Suppose $\mathbb{L}$ satisfies the assumptions of Lemma \ref{lemPhibrack}.
Then, for $\xi ,\eta \in \mathfrak{l}^{\left( s\right) }$, define
\begin{equation}
\left[ \xi ,\eta \right] _{\varphi _{s}}=\varphi _{s}\left( \left[ \varphi
_{s}^{t}\left( \xi \right) ,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}\right) .  \label{phisbrack}
\end{equation}
\end{definition}

This bracket restricts to $\mathfrak{q}_{s}$ and vanishes on $\mathfrak{q}_{s}^{\perp }$, so that $\mathfrak{q}_{s}^{\perp }$ is an abelian ideal with
respect to it. We can rewrite (\ref{piqsact}) as
\begin{equation}
\pi _{\mathfrak{q}_{s}}\left( \varphi _{s}^{t}\left( \xi \right) \cdot \eta
\right) =\frac{1}{2\lambda _{s}}\left[ \xi ,\eta \right] _{\varphi _{s}}+\frac{\lambda _{s}}{2}\pi _{\mathfrak{q}_{s}}\left( \left[ \xi ,\eta \right]
^{\left( s\right) }\right) .
\label{piqsact1}
\end{equation}

\begin{example}
In the case of octonions, if, as before, we set $\varphi _{s}\left( \eta
\right) _{a}=k\varphi _{abc}\eta ^{bc}$ and $\left( \left[ \xi ,\gamma
\right] ^{\left( s\right) }\right) _{a}=2\varphi _{abc}\xi ^{b}\gamma ^{c}$,
we find that $\left[ \cdot ,\cdot \right] _{\varphi _{s}}=3k^{3}\left[ \cdot
,\cdot \right] ^{\left( s\right) }.$ Then, recalling that $\lambda
_{s}=6k^{2}$, (\ref{piqsact1}) shows that in this case
\begin{equation*}
\varphi _{s}^{t}\left( \xi \right) \cdot \gamma =\left( \frac{k}{4}+3k^{2}\right) \left[ \xi ,\gamma \right] ^{\left( s\right) },
\end{equation*}
and to be consistent with the standard action of $\mathfrak{so}\left(
7\right) $ on $\mathbb{R}^{7}$, we must have
\begin{equation*}
k\varphi _{abc}\xi ^{c}\gamma ^{b}=\left( \frac{k}{2}+6k^{2}\right) \varphi
_{abc}\xi ^{b}\gamma ^{c},
\end{equation*}
which means that $6k^{2}+\frac{3}{2}k=0$, and therefore $k=-\frac{1}{4}.$
This also implies that $\lambda _{s}=\frac{3}{8}$ in this case.
\end{example}

\begin{example}
If $\mathbb{L}$ is a Lie group, and $\Psi ^{R}\left( \mathbb{L}\right) $ is
the full group of pseudoautomorphism pairs, then $\mathfrak{p}\cong \mathfrak{aut}\left( \mathbb{L}\right) \oplus \mathfrak{l}$, where $\mathfrak{aut}\left(
\mathbb{L}\right) $ is the Lie algebra of $\func{Aut}\left( \mathbb{L}\right) $ and $\mathfrak{l}$ is the Lie algebra of $\mathbb{L}.$ In this
case, $\varphi _{s}^{t}\varphi _{s}$ is just the projection to $\mathfrak{l}\subset \mathfrak{p},$ and thus $\lambda _{s}=1$ and $\left[ \cdot ,\cdot \right]
_{\varphi _{s}}=\left[ \cdot ,\cdot \right] ^{\left( s\right) }.$ Then (\ref{piqsact1}) just shows that $\mathfrak{l}$ acts on itself via the adjoint
representation.
\end{example}

\begin{remark}
Both of the above examples have the two brackets $\left[ \cdot ,\cdot \right]
_{\varphi _{s}}$ and $\left[ \cdot ,\cdot \right] ^{\left( s\right) }$
proportional to one another. This really means that $\mathfrak{l}^{\left(
s\right) }$ and $\mathfrak{h}_{s}^{\perp }$ have equivalent $\mathbb{L}$-algebra structures, with $\varphi _{s}$ and $\varphi _{s}^{t}$ (up to a
constant factor) being the corresponding isomorphisms. It is not clear if
this is always the case.
\end{remark}

The bracket $\left[ \cdot ,\cdot \right] _{\varphi _{s}}$ has some
natural properties.

\begin{lemma}
\label{lemPhibrack2}Under the assumptions of Lemma \ref{lemPhibrack}, the
bracket $\left[ \cdot ,\cdot \right] _{\varphi _{s}}$ satisfies the
following properties.
Let $\\xi ,\\eta ,\\gamma \\in \\mathfrak{l}$, then\n\n\\begin{enumerate}\n\\item $\\left\\langle \\left[ \\xi ,\\eta \\right] _{\\varphi _{s}},\\gamma\n\\right\\rangle ^{\\left( s\\right) }=-\\left\\langle \\eta ,\\left[ \\xi ,\\gamma\n\\right] _{\\varphi _{s}}\\right\\rangle ^{\\left( s\\right) }.$\n\n\\item For any $h\\in \\Psi ^{R}\\left( \\mathbb{L}\\right) $, $\\left[ \\xi ,\\eta\n\\right] _{\\varphi _{h\\left( s\\right) }}=\\left( h^{\\prime }\\right) _{\\ast \n\\left[ \\left( h^{\\prime }\\right) _{\\ast }^{-1}\\xi ,\\left( h^{\\prime }\\right)\n_{\\ast }^{-1}\\eta \\right] _{\\varphi _{s}}.$\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nThe first property follows directly from the definition (\\ref{phisbrack})\nand the $\\func{ad}$-invariance of the Killing form on $\\mathfrak{p}.$\nIndeed, \n\\begin{eqnarray*}\n\\left\\langle \\left[ \\xi ,\\eta \\right] _{\\varphi _{s}},\\gamma \\right\\rangle\n^{\\left( s\\right) } &=&\\left\\langle \\varphi _{s}\\left( \\left[ \\varphi\n_{s}^{t}\\left( \\xi \\right) ,\\varphi _{s}^{t}\\left( \\eta \\right) \\right] _\n\\mathfrak{p}}\\right) ,\\gamma \\right\\rangle ^{\\left( s\\right) } \\\\\n&=&\\left\\langle \\left[ \\varphi _{s}^{t}\\left( \\xi \\right) ,\\varphi\n_{s}^{t}\\left( \\eta \\right) \\right] _{\\mathfrak{p}},\\varphi _{s}^{t}\\left(\n\\gamma \\right) \\right\\rangle ^{\\left( s\\right) } \\\\\n&=&-\\left\\langle \\varphi _{s}^{t}\\left( \\eta \\right) ,\\left[ \\varphi\n_{s}^{t}\\left( \\xi \\right) ,\\varphi _{s}^{t}\\left( \\gamma \\right) \\right] _\n\\mathfrak{p}}\\right\\rangle ^{\\left( s\\right) } \\\\\n&=&-\\left\\langle \\eta ,\\left[ \\xi ,\\gamma \\right] _{\\varphi\n_{s}}\\right\\rangle ^{\\left( s\\right) }.\n\\end{eqnarray*\nNow let $h\\in \\Psi ^{R}\\left( \\mathbb{L}\\right) $, and then since $\\left( \n\\func{Ad}_{h}\\right) _{\\ast }$ is a Lie algebra automorphism of $\\mathfrak{p}\n$, we have \n\\begin{eqnarray}\n\\left[ \\xi ,\\eta \\right] _{\\varphi _{h\\left( s\\right) }} &=&\\varphi\n_{h\\left( s\\right) }\\left( \\left[ \\varphi _{h\\left( s\\right) }^{t}\\left( \\xi\n\\right) ,\\varphi _{h\\left( s\\right) }^{t}\\left( \\eta \\right) \\right] _\n\\mathfrak{p}}\\right) \\notag \\\\\n&=&\\left( h^{\\prime }\\right) _{\\ast }\\circ \\varphi _{s}\\circ \\left( \\func{Ad\n_{h}^{-1}\\right) _{\\ast }\\left( \\left[ \\left( \\func{Ad}_{h}\\right) _{\\ast\n}\\left( \\varphi _{s}^{t}\\left( \\left( h^{\\prime }\\right) _{\\ast }^{-1}\\left(\n\\xi \\right) \\right) \\right) ,\\left( \\func{Ad}_{h}\\right) _{\\ast }\\left(\n\\varphi _{s}^{t}\\left( \\left( h^{\\prime }\\right) _{\\ast }^{-1}\\left( \\eta\n\\right) \\right) \\right) \\right] _{\\mathfrak{p}}\\right) \\notag \\\\\n&=&\\left( h^{\\prime }\\right) _{\\ast }\\circ \\varphi _{s}\\left( \\left[ \\varphi\n_{s}^{t}\\left( \\left( h^{\\prime }\\right) _{\\ast }^{-1}\\left( \\xi \\right)\n\\right) ,\\varphi _{s}^{t}\\left( \\left( h^{\\prime }\\right) _{\\ast\n}^{-1}\\left( \\eta \\right) \\right) \\right] _{\\mathfrak{p}}\\right) \\notag \\\\\n&=&\\left( h^{\\prime }\\right) _{\\ast }\\left[ \\left( h^{\\prime }\\right) _{\\ast\n}^{-1}\\xi ,\\left( h^{\\prime }\\right) _{\\ast }^{-1}\\eta \\right] _{\\varphi\n_{s}}. \\label{phibrackequi}\n\\end{eqnarray\nTherefore, $\\left[ \\cdot ,\\cdot \\right] _{\\varphi _{s}}$ is equivariant with\nrespect to transformations of $s$.\n\\end{proof}\n\n\\subsection{Darboux derivative}\n\n\\label{sectDarboux}Let $M$ be a smooth manifold and suppose \ns:M\\longrightarrow \\mathbb{L}$ is a smooth map. 
The map $s$ can be used to
define a product on $\mathbb{L}$-valued maps from $M$ and a corresponding
bracket on $\mathfrak{l}$-valued maps. Indeed, let $A,B:M\longrightarrow
\mathbb{L}$ and $\xi ,\eta :M\longrightarrow \mathfrak{l}$ be smooth maps;
then at each $x\in M$, define
\begin{subequations}
\label{maniproducts}
\begin{eqnarray}
\left. A\circ _{s}B\right\vert _{x} &=&A_{x}\circ _{s_{x}}B_{x}\in \mathbb{L}
\\
\left. A/_{s}B\right\vert _{x} &=&A_{x}/_{s_{x}}B_{x}\in \mathbb{L} \\
\left. A\backslash _{s}B\right\vert _{x} &=&A_{x}\backslash _{s_{x}}B_{x}\in
\mathbb{L} \\
\left. \left[ \xi ,\eta \right] ^{\left( s\right) }\right\vert _{x} &=&\left[
\xi _{x},\eta _{x}\right] ^{\left( s_{x}\right) }\in \mathfrak{l}.
\end{eqnarray}
\end{subequations}
In particular, the bracket $\left[ \cdot ,\cdot \right] ^{\left( s\right) }$
defines the map $b_{s}:M\longrightarrow \Lambda ^{2}\mathfrak{l}^{\ast
}\otimes \mathfrak{l}.$ We also have the corresponding associator $\left[
\cdot ,\cdot ,\cdot \right] ^{\left( s\right) }$ and the left-alternating
associator map $a_{s}:M\longrightarrow \Lambda ^{2}\mathfrak{l}^{\ast
}\otimes \mathfrak{l}^{\ast }\otimes \mathfrak{l}.$ Similarly, define the
map $\varphi _{s}:M\longrightarrow \mathfrak{p}^{\ast }\otimes \mathfrak{l}.$

Then, as for maps to Lie groups, we may define the (right) \emph{Darboux derivative} $\theta _{s}$ of $s,$ which is the $\mathfrak{l}$-valued $1$-form on $M$ given by $s^{\ast }\theta $ \cite{SharpeBook}. In particular,
at every $x\in M$,
\begin{equation}
\left. \left( \theta _{s}\right) \right\vert _{x}=\left( R_{s\left( x\right)
}^{-1}\right) _{\ast }\left. ds\right\vert _{x}.  \label{Darbouxf}
\end{equation}
It is then clear that $\theta _{s}$, being a pullback of $\theta $,
satisfies the loop Maurer-Cartan structural equation (\ref{MCequation1}). In
particular, for any vectors $X,Y\in T_{x}M$,
\begin{equation}
d\theta _{s}\left( X,Y\right) -\left[ \theta _{s}\left( X\right) ,\theta
_{s}\left( Y\right) \right] ^{\left( s\right) }=0.  \label{DarbouxMC}
\end{equation}

We can then calculate the derivatives of these maps. For clarity, we will
somewhat abuse notation and suppress the pushforwards of right
multiplication and their inverses (i.e., quotients) on $T\mathbb{L}$, so that
if $X\in T_{q}\mathbb{L}$, then we will write $X\circ _{s}A$ for $\left(
R_{A}^{\left( s\right) }\right) _{\ast }X.$

\begin{theorem}
\label{thmmaniDeriv}Let $M$ be a smooth manifold and let $x\in M$. Suppose $A,B,s\in C^{\infty }\left( M,\mathbb{L}\right) $; then
\begin{equation}
d\left( A\circ _{s}B\right) =\left( dA\right) \circ _{s}B+A\circ _{s}\left(
dB\right) +\left[ A,B,\theta _{s}\right] ^{\left( s\right) }  \label{dAsB1}
\end{equation}
and
\begin{subequations}
\label{dquots}
\begin{eqnarray}
d\left( A/_{s}B\right) &=&dA/_{s}B-\left( A/_{s}B\circ _{s}dB\right) /_{s}B-\left[ A/_{s}B,B,\theta _{s}\right] ^{\left( s\right) }/_{s}B  \label{drquot}
\\
d\left( B\backslash _{s}A\right) &=&B\backslash _{s}dA-B\backslash
_{s}\left( dB\circ _{s}\left( B\backslash _{s}A\right) \right)
\label{dlquot} \\
&&-B\backslash _{s}\left[ B,B\backslash _{s}A,\theta _{s}\right] ^{\left(
s\right) }.
\notag
\end{eqnarray}
\end{subequations}
Suppose now $\xi ,\eta \in C^{\infty }\left( M,\mathfrak{l}\right) $; then
\begin{equation}
d\left[ \xi ,\eta \right] ^{\left( s\right) }=\left[ d\xi ,\eta \right]
^{\left( s\right) }+\left[ \xi ,d\eta \right] ^{\left( s\right)
}+a_{s}\left( \xi ,\eta ,\theta _{s}\right) .  \label{dbrack}
\end{equation}

The $\mathfrak{l}\otimes \mathfrak{p}^{\ast }$-valued map $\varphi
_{s}:M\longrightarrow \mathfrak{l}\otimes \mathfrak{p}^{\ast }$ satisfies
\begin{equation}
d\varphi _{s}=\func{id}_{\mathfrak{p}}\cdot \theta _{s}-\left[ \varphi
_{s},\theta _{s}\right] ^{\left( s\right) },  \label{dphis0}
\end{equation}
where $\func{id}_{\mathfrak{p}}$ is the identity map of $\mathfrak{p}$ and $\cdot $ denotes the action of the Lie algebra $\mathfrak{p}$ on $\mathfrak{l}$
given by (\ref{pactl1}).
\end{theorem}

\begin{proof}
Let $V\in T_{x}M$ and let $x\left( t\right) $ be a curve on $M$ with $x\left( 0\right) =x$ and $\dot{x}\left( 0\right) =V.$ To show (\ref{dAsB1}),
first note that
\begin{equation}
\left. d\left( A\circ _{s}B\right) \right\vert _{x}\left( V\right) =\left.
\frac{d}{dt}\left( A_{x\left( t\right) }\circ _{s_{x\left( t\right)
}}B_{x\left( t\right) }\right) \right\vert _{t=0}.
\end{equation}
However,
\begin{eqnarray}
\left. \frac{d}{dt}\left( A_{x\left( t\right) }\circ _{s_{x\left( t\right)
}}B_{x\left( t\right) }\right) \right\vert _{t=0} &=&\left. \frac{d}{dt}\left( A_{x\left( t\right) }\circ _{s_{x}}B_{x}\right) \right\vert
_{t=0}+\left. \frac{d}{dt}\left( A_{x}\circ _{s_{x}}B_{x\left( t\right)
}\right) \right\vert _{t=0}  \notag \\
&&+\left. \frac{d}{dt}\left( A_{x}\circ _{s_{x\left( t\right) }}B_{x}\right)
\right\vert _{t=0}  \notag \\
&=&\left( R_{B_{x}}^{\left( s_{x}\right) }\right) _{\ast }\left.
dA\right\vert _{x}\left( V\right) +\left( L_{A_{x}}^{\left( s_{x}\right)
}\right) _{\ast }\left. dB\right\vert _{x}\left( V\right)  \label{dABprod0}
\\
&&+\left. \frac{d}{dt}\left( A_{x}\circ _{s_{x\left( t\right) }}B_{x}\right)
\right\vert _{t=0},  \notag
\end{eqnarray}
and then, using Lemma \ref{lemQuotient},
\begin{eqnarray}
\left. \frac{d}{dt}\left( A_{x}\circ _{s_{x\left( t\right) }}B_{x}\right)
\right\vert _{t=0} &=&\left. \frac{d}{dt}\left( \left( A_{x}\cdot
B_{x}s_{x\left( t\right) }\right) /s_{x\left( t\right) }\right) \right\vert
_{t=0}  \notag \\
&=&\left. \frac{d}{dt}\left( \left( A_{x}\cdot B_{x}s_{x\left( t\right)
}\right) /s_{x}\right) \right\vert _{t=0}  \label{dABprod} \\
&&+\left. \frac{d}{dt}\left( \left( A_{x}\cdot B_{x}s_{x}\right) /s_{x}\cdot
s_{x\left( t\right) }\right) /s_{x}\right\vert _{t=0}.
\notag
\end{eqnarray}
Looking at each term in (\ref{dABprod}), we have
\begin{eqnarray*}
\left( A_{x}\cdot B_{x}s_{x\left( t\right) }\right) /s_{x} &=&\left(
A_{x}\cdot B_{x}\left( s_{x\left( t\right) }/s_{x}\cdot s_{x}\right) \right)
/s_{x} \\
&=&A_{x}\circ _{s_{x}}\left( B_{x}\circ _{s_{x}}\left( s_{x\left( t\right)
}/s_{x}\right) \right)
\end{eqnarray*}
and
\begin{equation*}
\left( \left( A_{x}\cdot B_{x}s_{x}\right) /s_{x}\cdot s_{x\left( t\right)
}\right) /s_{x}=\left( A_{x}\circ _{s_{x}}B_{x}\right) \circ _{s_{x}}\left(
s_{x\left( t\right) }/s_{x}\right) .
\end{equation*}
Overall, (\ref{dABprod0}) becomes
\begin{equation}
\left. \frac{d}{dt}\left( A_{x}\circ _{s_{x\left( t\right) }}B_{x}\right)
\right\vert _{t=0}=\left( \left( L_{A_{x}}^{\left( s_{x}\right) }\circ
L_{B_{x}}^{\left( s_{x}\right) }\right) _{\ast }-\left( L_{A_{x}\circ
_{s_{x}}B_{x}}^{\left( s_{x}\right) }\right) _{\ast }\right) \left(
R_{s_{x}}^{-1}\right) _{\ast }\left. ds\right\vert _{x}\left( V\right) ,
\end{equation}
and hence we get (\ref{dAsB1}) using the definitions of $\theta _{s}$ and
the mixed associator (\ref{pxiqsol}).

Let us now show (\ref{dquots}). From Lemma \ref{lemQuotient}, we find
\begin{subequations}
\label{dquots copy(1)}
\begin{eqnarray}
d\left( A/B\right) &=&\left( dA\right) /B-\left( A/B\cdot dB\right) /B
\label{dquots1} \\
d\left( B\backslash A\right) &=&B\backslash \left( dA\right) -B\backslash
\left( dB\cdot B\backslash A\right) .  \label{dquots2}
\end{eqnarray}
\end{subequations}
Now if we instead have the quotient defined by $s$, using (\ref{rprodqright}), we have a modification:
\begin{eqnarray}
d\left( A/_{s}B\right) &=&d\left( As/Bs\right) =d\left( As\right) /\left(
Bs\right) -\left( A/_{s}B\cdot d\left( Bs\right) \right) /\left( Bs\right)
\notag \\
&=&dA/_{s}B+A\left( ds\right) /\left( Bs\right) -\left( A/_{s}B\cdot \left(
dB\right) s\right) /\left( Bs\right)  \notag \\
&&-\left( A/_{s}B\cdot B\left( ds\right) \right) /\left( Bs\right)  \notag \\
&=&dA/_{s}B-\left( A/_{s}B\circ _{s}dB\right) /_{s}B+\left( A\circ
_{s}\theta _{s}\right) /_{s}B  \notag \\
&&-\left( A/_{s}B\circ _{s}\left( B\circ _{s}\theta _{s}\right) \right)
/_{s}B  \notag \\
&=&dA/_{s}B-\left( A/_{s}B\circ _{s}dB\right) /_{s}B-\left[ A/_{s}B,B,\theta
_{s}\right] ^{\left( s\right) }/_{s}B.
\end{eqnarray}
Similarly, for the left quotient, using (\ref{rprodqleft}), we have
\begin{eqnarray}
d\left( B\backslash _{s}A\right) &=&d\left( \left( B\backslash As\right)
/s\right)  \notag \\
&=&d\left( B\backslash As\right) /s-\left( \left( \left( B\backslash
As\right) /s\right) \cdot ds\right) /s  \notag \\
&=&\left( B\backslash d\left( As\right) \right) /s-\left( B\backslash \left(
dB\cdot B\backslash As\right) \right) /s-\left( B\backslash _{s}A\right)
\circ _{s}\theta _{s}  \notag \\
&=&B\backslash _{s}dA+\left( B\backslash \left( A\left( ds\right) \right)
\right) /s-B\backslash _{s}\left( \left( dB\cdot B\backslash As\right)
/s\right)  \notag \\
&&-\left( B\backslash _{s}A\right) \circ _{s}\theta _{s}  \notag \\
&=&B\backslash _{s}dA-B\backslash _{s}\left( dB\circ _{s}\left( B\backslash
_{s}A\right) \right) +B\backslash _{s}\left( A\circ _{s}\theta _{s}\right)
\n\\notag \\\\\n&&-\\left( B\\backslash _{s}A\\right) \\circ _{s}\\theta _{s}\n\\end{eqnarray\nHowever, using the mixed associator (\\ref{pxiqsol}), \n\\begin{eqnarray}\nA\\circ _{s}\\theta _{s} &=&\\left( B\\circ _{s}\\left( B\\backslash _{s}A\\right)\n\\right) \\circ _{s}\\theta _{s} \\notag \\\\\n&=&B\\circ _{s}\\left( \\left( B\\backslash _{s}A\\right) \\circ _{s}\\theta\n_{s}\\right) -\\left[ B,B\\backslash _{s}A,\\theta _{s}\\right] ^{\\left( s\\right)\n},\n\\end{eqnarray\nand thus, \n\\begin{equation*}\nd\\left( B\\backslash _{s}A\\right) =B\\backslash _{s}dA-B\\backslash _{s}\\left(\ndB\\circ _{s}\\left( B\\backslash _{s}A\\right) \\right) -B\\backslash _{s}\\left[\nB,B\\backslash _{s}A,\\theta _{s}\\right] ^{\\left( s\\right) }.\n\\end{equation*}\n\nTo show (\\ref{dbrack}), note that \n\\begin{eqnarray*}\n\\left. d\\left( \\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) }\\right)\n\\right\\vert _{x}\\left( V\\right) &=&\\left. \\frac{d}{dt}\\left[ \\xi _{x\\left(\nt\\right) },\\eta _{x\\left( t\\right) }\\right] ^{\\left( s_{x\\left( t\\right)\n}\\right) }\\right\\vert _{t=0} \\\\\n&=&\\left[ \\left. d\\xi \\right\\vert _{x}\\left( V\\right) ,\\eta _{x}\\right]\n^{\\left( s_{x}\\right) }+\\left[ \\xi _{x},\\left. d\\eta \\right\\vert _{x}\\right]\n^{\\left( s_{x}\\right) } \\\\\n&&+\\left. \\frac{d}{dt}\\left[ \\xi _{x},\\eta _{x}\\right] ^{\\left( s_{x\\left(\nt\\right) }\\right) }\\right\\vert _{t=0}\n\\end{eqnarray*\nHowever, using (\\ref{db1}), the last term becomes \n\\begin{equation*}\n\\left. \\frac{d}{dt}\\left[ \\xi _{x},\\eta _{x}\\right] ^{\\left( s_{x\\left(\nt\\right) }\\right) }\\right\\vert _{t=0}=a_{s_{x}}\\left( \\xi _{x},\\eta\n_{x},\\left. \\theta _{s}\\right\\vert _{x}\\right)\n\\end{equation*\nand hence we obtain (\\ref{dbrack}).\n\nLet us now show (\\ref{dphis0}). From (\\ref{actpl}), given $\\gamma \\in \n\\mathfrak{p}$, setting $\\hat{\\gamma}\\left( r\\right) =\\varphi _{r}\\left(\n\\gamma \\right) $ for each $r\\in \\mathbb{L}$, we have \n\\begin{equation}\n\\left. d\\hat{\\gamma}\\right\\vert _{r}\\left( \\rho _{r}\\left( \\xi \\right)\n\\right) =\\gamma \\cdot \\xi -\\left[ \\hat{\\gamma}\\left( r\\right) ,\\xi \\right]\n^{\\left( r\\right) }\n\\end{equation\nfor some $\\xi \\in \\mathfrak{l}.$ Now for at each $x\\in M$ we have \n\\begin{eqnarray}\n\\left. d\\left( \\varphi _{s}\\left( \\gamma \\right) \\right) \\right\\vert\n_{x}\\left( V\\right) &=&\\left. d\\hat{\\gamma}\\right\\vert _{s_{x}}\\circ \\left.\nds\\right\\vert _{x}\\left( V\\right) \\notag \\\\\n&=&\\left. d\\hat{\\gamma}\\right\\vert _{s_{x}}\\left( \\rho _{s_{x}}\\left( \\theta\n_{s}\\left( V\\right) \\right) \\right) \\notag \\\\\n&=&\\gamma \\cdot \\theta _{s}\\left( V\\right) -\\left[ \\varphi _{s_{x}}\\left(\n\\gamma \\right) ,\\theta _{s}\\left( V\\right) \\right] ^{\\left( s_{x}\\right) }.\n\\end{eqnarray\nTherefore, $d\\varphi _{s}$ is given by \n\\begin{equation}\nd\\varphi _{s}\\left( \\gamma \\right) =\\gamma \\cdot \\theta _{s}-\\left[ \\varphi\n_{s}\\left( \\gamma \\right) ,\\theta _{s}\\right] ^{\\left( s\\right) }.\n\\label{dphis0a}\n\\end{equation}\n\\end{proof}\n\n\\begin{remark}\nSuppose $A$ and $B$ are now smooth maps from $M$ to $\\mathbb{L}$. In the\ncase when $\\mathbb{L}$ has the right inverse property, i.e. 
$A/B=AB^{-1}$
for any $A,B\in \mathbb{L}$, (\ref{dquots1}) becomes
\begin{equation}
d\left( AB^{-1}\right) =\left( dA\right) B^{-1}-\left( AB^{-1}\cdot
dB\right) B^{-1}.  \label{drquot2}
\end{equation}
However, from $d\left( BB^{-1}\right) =0$, we find that $d\left(
B^{-1}\right) =-B^{-1}\left( dB\cdot B^{-1}\right) $; then expanding $d\left( AB^{-1}\right) $ using the product rule and comparing with (\ref{drquot2}), we find
\begin{equation}
\left( AB^{-1}\cdot dB\right) B^{-1}=A\left( B^{-1}\left( dB\cdot
B^{-1}\right) \right) ,  \label{drquot3}
\end{equation}
which is an infinitesimal version of the right Bol identity (\ref{rightBol}). In particular,
\begin{equation}
\left( B^{-1}\cdot dB\right) B^{-1}=B^{-1}\left( dB\cdot B^{-1}\right) .
\end{equation}
Similarly, using (\ref{dlquot}), the left inverse property then implies an
infinitesimal left Bol identity.
\end{remark}

At each point $x\in M$, the map $s$ defines a stabilizer subgroup $\func{Stab}\left( s_{x}\right) =\func{Aut}\left( \mathbb{L},\circ _{s_{x}}\right) \subset
\Psi ^{R}\left( \mathbb{L}\right) $ with the corresponding Lie algebra $\mathfrak{h}_{s_{x}}.$ Similarly, we also have the orbit of $s_{x}$ given by
$\mathcal{C}^{R}\left( \mathbb{L},\circ _{s_{x}}\right) \cong
\faktor{\Psi ^{R}\left( \mathbb{L}\right)} {\func{Aut}\left( \mathbb{L},\circ
_{s_{x}}\right)}$, and the corresponding tangent space $\mathfrak{q}_{s_{x}}\cong \mathfrak{p}/\mathfrak{h}_{s_{x}}.$ Suppose $\left. \theta
_{s}\right\vert _{x}\in \mathfrak{q}_{s_{x}}$ for each $x\in M$. This of
course always holds if $\mathbb{L}$ is a $G$-loop, in which case $\mathfrak{q}_{s_{x}}=\mathfrak{l}^{\left( s_{x}\right) }.$ In this case, there exists a
$\mathfrak{p}$-valued $1$-form $\Theta $ on $M$ such that $\theta
_{s}=\varphi _{s}\left( \Theta \right) .$ We can then characterize $\Theta $
in the following way.

\begin{theorem}
\label{thmThetaPhi}Suppose there exists $\Theta \in \Omega ^{1}\left( M,\mathfrak{p}\right) $ such that $\theta _{s}=\varphi _{s}\left( \Theta
\right) $. Then, for each $x\in M$, $\left. d\Theta -\frac{1}{2}\left[
\Theta ,\Theta \right] _{\mathfrak{p}}\right\vert _{x}\in \mathfrak{h}_{s_{x}}$, where $\left[ \cdot ,\cdot \right] _{\mathfrak{p}}$ is the Lie
bracket on $\mathfrak{p}.$
\end{theorem}

\begin{proof}
Consider $d\theta _{s}$ in this case. Using (\ref{dphis0a}), we have
\begin{eqnarray}
d\theta _{s} &=&d\left( \varphi _{s}\left( \Theta \right) \right) =\left(
d\varphi _{s}\right) \left( \Theta \right) +\varphi _{s}\left( d\Theta
\right)  \notag \\
&=&\varphi _{s}\left( d\Theta \right) -\Theta \cdot \theta _{s}+\left[
\varphi _{s}\left( \Theta \right) ,\theta _{s}\right] ^{\left( s\right) }.
\label{dthetasq}
\end{eqnarray}
Note that the signs are switched in the last two terms of (\ref{dthetasq}),
compared to (\ref{dphis0a}), because we also have an
implied wedge product of $1$-forms.
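Explicitly, for $\mathfrak{l}$-valued $1$-forms $\alpha ,\beta $, the
bracket is coupled with the wedge product, so that
\begin{equation*}
\left[ \alpha ,\beta \right] ^{\left( s\right) }\left( X,Y\right) =\left[
\alpha \left( X\right) ,\beta \left( Y\right) \right] ^{\left( s\right)
}-\left[ \alpha \left( Y\right) ,\beta \left( X\right) \right] ^{\left(
s\right) },
\end{equation*}
and in particular, $\frac{1}{2}\left[ \theta _{s},\theta _{s}\right]
^{\left( s\right) }\left( X,Y\right) =\left[ \theta _{s}\left( X\right)
,\theta _{s}\left( Y\right) \right] ^{\left( s\right) }$, consistently with
(\ref{DarbouxMC}).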
Overall, we have
\begin{equation}
d\left( \varphi _{s}\left( \Theta \right) \right) =\varphi _{s}\left(
d\Theta \right) -\Theta \cdot \varphi _{s}\left( \Theta \right) +\left[
\varphi _{s}\left( \Theta \right) ,\varphi _{s}\left( \Theta \right) \right]
^{\left( s\right) };  \label{dphistheta}
\end{equation}
however, since $\theta _{s}=\varphi _{s}\left( \Theta \right) $, it satisfies
the Maurer-Cartan structural equation (\ref{DarbouxMC}), so we also have
\begin{equation}
d\left( \varphi _{s}\left( \Theta \right) \right) =\frac{1}{2}\left[ \varphi
_{s}\left( \Theta \right) ,\varphi _{s}\left( \Theta \right) \right] ^{\left( s\right) }.
\label{dphistheta2}
\end{equation}
Equating (\ref{dphistheta}) and (\ref{dphistheta2}), we find
\begin{equation}
\varphi _{s}\left( d\Theta \right) =\Theta \cdot \varphi _{s}\left( \Theta
\right) -\frac{1}{2}\left[ \varphi _{s}\left( \Theta \right) ,\varphi
_{s}\left( \Theta \right) \right] ^{\left( s\right) }.  \label{dphistheta3}
\end{equation}
However, from (\ref{xiphi}), we find that
\begin{equation}
\Theta \cdot \varphi _{s}\left( \Theta \right) -\frac{1}{2}\left[ \varphi
_{s}\left( \Theta \right) ,\varphi _{s}\left( \Theta \right) \right] ^{\left( s\right) }=\frac{1}{2}\varphi _{s}\left( \left[ \Theta ,\Theta \right] _{\mathfrak{p}}\right) .
\end{equation}
Thus, we see that
\begin{equation}
\varphi _{s}\left( d\Theta -\frac{1}{2}\left[ \Theta ,\Theta \right] _{\mathfrak{p}}\right) =0,  \label{dphistheta4}
\end{equation}
and hence $d\Theta -\frac{1}{2}\left[ \Theta ,\Theta \right] _{\mathfrak{p}}$ takes values in $\ker \varphi _{s_{x}}=\mathfrak{h}_{s_{x}}$ at each $x\in M$.
\end{proof}

\begin{remark}
We can think of $d-\Theta $ as a connection on the trivial Lie
algebra bundle $M\times \mathfrak{p}$ with curvature contained in $\mathfrak{h}_{s\left( x\right) }$ for each $x\in M$. In general, the spaces $\mathfrak{h}_{s\left( x\right) }$ need not all be of the same dimension, and thus
they may not form a vector subbundle. On the other hand, if $\mathbb{L}$ is
a $G$-loop, then we do get a subbundle.
\end{remark}

Now consider how $\theta _{s}$ behaves under the action of $\Psi ^{R}\left(
\mathbb{L}\right) .$

\begin{lemma}
Suppose $h:M\longrightarrow \Psi ^{R}\left( \mathbb{L}\right) $ is a smooth
map; then
\begin{equation}
\theta _{h\left( s\right) }=\left( h^{\prime }\right) _{\ast }\left( \varphi
_{s}\left( \theta _{h}^{\left( \mathfrak{p}\right) }\right) +\theta
_{s}\right) ,  \label{thetahf}
\end{equation}
where $\theta _{h}^{\left( \mathfrak{p}\right) }=h^{\ast }\theta ^{\left(
\mathfrak{p}\right) }$ is the pullback of the left-invariant Maurer-Cartan
form $\theta ^{\left( \mathfrak{p}\right) }$ on $\Psi ^{R}\left( \mathbb{L}\right) .$
\end{lemma}

\begin{proof}
Suppose $h:M\longrightarrow \Psi ^{R}\left( \mathbb{L}\right) $ is a smooth
map, and consider $\theta _{h\left( s\right) }$. We then have
\begin{eqnarray*}
\left. \left( \theta _{h\left( s\right) }\right) \right\vert _{x} &=&\left(
R_{h\left( s\left( x\right) \right) }^{-1}\right) _{\ast }\left. d\left(
h\left( s\right) \right) \right\vert _{x} \\
&=&\left( R_{h\left( s\left( x\right) \right) }^{-1}\right) _{\ast }\left.
\left( \left( dh\right) \left( s\right) +h\left( ds\right) \right)
\right\vert _{x}.
\end{eqnarray*}
Consider each term.
Using simplified notation, we have
\begin{eqnarray*}
\left( dh\right) \left( s\right) /h\left( s\right) &=&\left( h^{\prime
}\right) _{\ast }\left( \left( h^{-1}dh\right) \left( s\right) /s\right) \\
\left( R_{h\left( s\left( x\right) \right) }^{-1}\right) _{\ast }\left.
\left( h\left( ds\right) \right) \right\vert _{x} &=&\left( h^{\prime
}\right) _{\ast }\left( \theta _{s}\right) .
\end{eqnarray*}
Thus,
\begin{equation*}
\left( R_{h\left( s\left( x\right) \right) }^{-1}\right) _{\ast }\left.
\left( dh\right) \left( s\right) \right\vert _{x}=\left( h\left( x\right)
^{\prime }\right) _{\ast }\varphi _{s\left( x\right) }\left( \left. \theta
_{h}^{\left( \mathfrak{p}\right) }\right\vert _{x}\right) ,
\end{equation*}
and hence we get (\ref{thetahf}).
\end{proof}

If we have another smooth map $f:M\longrightarrow \mathbb{L}$, using right
multiplication with respect to $\circ _{s\left( x\right) }$, we can define a
modified Darboux derivative $\theta _{f}^{\left( s\right) }$ with respect to
$s$:
\begin{equation}
\left. \left( \theta _{f}^{\left( s\right) }\right) \right\vert _{x}=\left(
R_{f\left( x\right) }^{\left( s\left( x\right) \right) }\right) _{\ast
}^{-1}\left. df\right\vert _{x}.
\end{equation}
Note that this is now no longer necessarily a pullback of $\theta $ and
hence may not satisfy the Maurer-Cartan equation. Adopting simplified
notation, we have
\begin{eqnarray}
d\left( fs\right) /fs &=&\left( df\cdot s+f\cdot ds\right) /fs  \notag \\
&=&df/_{s}f+\func{Ad}_{f}^{\left( s\right) }\theta _{s}.  \label{thetafs}
\end{eqnarray}
Hence,
\begin{equation}
\theta _{f}^{\left( s\right) }=\theta _{fs}-\left( \func{Ad}_{f}^{\left(
s\right) }\right) _{\ast }\theta _{s}.  \label{thetafs2}
\end{equation}

\begin{lemma}
Suppose $f,s\in C^{\infty }\left( M,\mathbb{L}\right) $; then
\begin{equation}
d\theta _{f}^{\left( s\right) }=\frac{1}{2}\left[ \theta _{f}^{\left(
s\right) },\theta _{f}^{\left( s\right) }\right] ^{\left( fs\right) }-\left(
R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \theta _{f}^{\left(
s\right) },f,\theta _{s}\right] ^{\left( s\right) }.  \label{dthetafs}
\end{equation}
\end{lemma}

\begin{proof}
Applying the exterior derivative to (\ref{thetafs2}) and then using the structural
equation for $\theta _{fs}$, we have
\begin{equation}
d\theta _{f}^{\left( s\right) }=\frac{1}{2}\left[ \theta _{fs},\theta _{fs}\right] ^{\left( fs\right) }-d\left( \left( \func{Ad}_{f}^{\left( s\right)
}\right) _{\ast }\theta _{s}\right) .
\label{dthetafs2}
\end{equation}
From Lemma \ref{lemdtAd}, we can see that for $\xi \in \mathfrak{l}$,
\begin{eqnarray}
d\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\xi &=&\left[ \theta
_{f}^{\left( s\right) },\left( \func{Ad}_{f}^{\left( s\right) }\right)
_{\ast }\xi \right] ^{\left( fs\right) }-\left( R_{f}^{\left( s\right)
}\right) _{\ast }^{-1}\left[ \theta _{f}^{\left( s\right) },f,\xi \right]
^{\left( s\right) }  \notag \\
&&+\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ f,\xi ,\theta
_{s}\right] ^{\left( s\right) }  \label{dAdfs1} \\
&&-\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\xi ,f,\theta _{s}\right] ^{\left(
s\right) },  \notag
\end{eqnarray}
and hence
\begin{eqnarray}
d\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\wedge \theta _{s}
&=&\left[ \theta _{f}^{\left( s\right) },\left( \func{Ad}_{f}^{\left(
s\right) }\right) _{\ast }\theta _{s}\right] ^{\left( fs\right) }-\left(
R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \theta _{f}^{\left(
s\right) },f,\theta _{s}\right] ^{\left( s\right) }  \notag \\
&&-\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ f,\theta
_{s},\theta _{s}\right] ^{\left( s\right) }  \label{dAdfs2} \\
&&+\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\theta _{s},f,\theta _{s}\right]
^{\left( s\right) },  \notag
\end{eqnarray}
where wedge products are implied. Now, using the structural equation and (\ref{Adbrack1}), we find
\begin{eqnarray}
\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }d\theta _{s} &=&\frac{1}{2}\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\left[ \theta
_{s},\theta _{s}\right] ^{\left( s\right) }  \notag \\
&=&\frac{1}{2}\left[ \left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast
}\theta _{s},\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\theta _{s}\right] ^{\left(
fs\right) }  \notag \\
&&-\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\theta _{s},f,\theta _{s}\right]
^{\left( s\right) }  \notag \\
&&+\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ f,\theta
_{s},\theta _{s}\right] ^{\left( s\right) }.
\label{Adbracktheta}
\end{eqnarray}
Combining (\ref{dAdfs2}) and (\ref{Adbracktheta}), we see that
\begin{eqnarray}
d\left( \left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\theta
_{s}\right) &=&d\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast
}\wedge \theta _{s}+\left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast
}d\theta _{s}  \notag \\
&=&\left[ \theta _{f}^{\left( s\right) },\left( \func{Ad}_{f}^{\left(
s\right) }\right) _{\ast }\theta _{s}\right] ^{\left( fs\right) }+\frac{1}{2}\left[ \left( \func{Ad}_{f}^{\left( s\right) }\right) _{\ast }\theta _{s},\left( \func{Ad}_{f}^{\left( s\right) }\right)
_{\ast }\theta _{s}\right] ^{\left( fs\right) }  \notag \\
&&-\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \theta
_{f}^{\left( s\right) },f,\theta _{s}\right] ^{\left( s\right) }  \notag \\
&=&\frac{1}{2}\left[ \theta _{fs},\theta _{fs}\right] ^{\left( fs\right) }-\frac{1}{2}\left[ \theta _{f}^{\left( s\right) },\theta _{f}^{\left(
s\right) }\right] ^{\left( fs\right) }  \label{dadtheta} \\
&&-\left( R_{f}^{\left( s\right) }\right) _{\ast }^{-1}\left[ \theta
_{f}^{\left( s\right) },f,\theta _{s}\right] ^{\left( s\right) }.  \notag
\end{eqnarray}
Thus, overall, substituting (\ref{dadtheta}) into (\ref{dthetafs2}), we
obtain (\ref{dthetafs}).
\end{proof}

For Lie groups, $\theta _{f}$ determines $f$ up to right translation by a
constant element; however, in the non-associative case this is not
necessarily true.

\begin{lemma}
\label{lemThetauniq}Let $M$ be a connected manifold and let $A,B:M\longrightarrow \mathbb{L}$ be smooth maps. Then, $A=BC$ for some
constant $C\in \mathbb{L}$ if and only if
\begin{equation}
\theta _{A}=\theta _{B}^{\left( B\backslash A\right) }.  \label{thetaAB}
\end{equation}
\end{lemma}

\begin{proof}
From (\ref{thetafs2}),
\begin{equation*}
\theta _{A}-\theta _{B}^{\left( B\backslash A\right) }=\left( \func{Ad}_{B}^{\left( B\backslash A\right) }\right) _{\ast }\theta _{B\backslash A},
\end{equation*}
and thus, $B\backslash A$ is constant if and only if (\ref{thetaAB}) holds.
\end{proof}

In particular, if $B\backslash A\in \mathcal{N}^{R}\left( \mathbb{L}\right) $, then $\theta _{B}^{\left( B\backslash A\right) }=\theta _{B}$, and hence $\theta _{A}=\theta _{B}$. If $\mathbb{L}$ is associative, then of course $\theta _{B}^{\left( A\right) }=\theta _{B}$ for any $A,B$, and we get the
standard result \cite{SharpeBook}.

We can also get a version of the structural equation integration theorem. In
particular, the question is whether an $\mathfrak{l}$-valued $1$-form that
satisfies the structural equation is the Darboux derivative of some $\mathbb{L}$-valued function.

\begin{lemma}
\label{lemAlphastruct}Suppose $M$ is a smooth manifold and $\mathbb{L}$ a
smooth loop.
Let $s\\in C^{\\infty }\\left( M,\\mathbb{L}\\right) $ and $\\alpha\n\\in \\Omega ^{1}\\left( M,\\mathfrak{l}\\right) $ satisfy the structural\nequation \n\\begin{equation}\nd\\alpha -\\frac{1}{2}\\left[ \\alpha ,\\alpha \\right] ^{\\left( s\\right) }=0,\n\\label{alphastructeq}\n\\end{equation\nthen \n\\begin{equation}\n\\left[ \\alpha ,\\alpha ,\\alpha -\\theta _{s}\\right] ^{\\left( s\\right) }=0,\n\\label{alphastruct2}\n\\end{equation\nwhere wedge products are implied.\n\\end{lemma}\n\n\\begin{proof}\nApplying $d$ to (\\ref{alphastructeq}) we have \n\\begin{eqnarray*}\n0 &=&d\\left[ \\alpha ,\\alpha \\right] ^{\\left( s\\right) } \\\\\n&=&\\left[ d\\alpha ,\\alpha \\right] ^{\\left( s\\right) }-\\left[ \\alpha ,d\\alpha\n\\right] ^{\\left( s\\right) }+\\left[ \\alpha ,\\alpha ,\\theta _{s}\\right]\n^{\\left( s\\right) } \\\\\n&=&\\left[ \\left[ \\alpha ,\\alpha \\right] ,\\alpha \\right] +\\left[ \\alpha\n,\\alpha ,\\theta _{s}\\right] ^{\\left( s\\right) } \\\\\n&=&-\\left[ \\alpha ,\\alpha ,\\alpha \\right] ^{\\left( s\\right) }+\\left[ \\alpha\n,\\alpha ,\\theta _{s}\\right] ^{\\left( s\\right) },\n\\end{eqnarray*\nwhere we have used (\\ref{db1}) and in the last line an analog of (\\ref{Jac3\n).\n\\end{proof}\n\n\\begin{theorem}\n\\label{thmLoopCartan}Suppose $M$ be a connected and simply-connected smooth\nmanifold and $\\mathbb{L}$ a smooth loop. Let $s\\in C^{\\infty }\\left( M\n\\mathbb{L}\\right) $ and $\\alpha \\in \\Omega ^{1}\\left( M,\\mathfrak{l}\\right) $\nis such that \n\\begin{equation}\nd\\alpha -\\frac{1}{2}\\left[ \\alpha ,\\alpha \\right] ^{\\left( s\\right) }=0,\n\\label{alphastruct}\n\\end{equation\nand \n\\begin{equation}\n\\left( \\func{Ad}_{s}^{-1}\\right) _{\\ast }\\left( \\alpha -\\theta _{s}\\right)\n\\in \\Omega ^{1}\\left( M,T_{1}\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right)\n. \\label{cartanhyp}\n\\end{equation\nThen, there exists a function $f\\in C^{\\infty }\\left( M,\\mathcal{N\n^{R}\\left( \\mathbb{L}\\right) \\right) $ such that $\\alpha =\\theta _{sf}.$\nMoreover, $f$ is unique up to right multiplication by a constant element of \n\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) .$\n\\end{theorem}\n\n\\begin{proof}\nModifying the standard technique \\cite{SharpeBook,WarnerBook}, let \nN=M\\times \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\subset M\\times \\mathbb{L\n. $ Define the projection map $\\pi _{M}:N\\longrightarrow M$ and the map \n\\begin{eqnarray*}\nL_{s} &:&N\\longrightarrow \\mathbb{L} \\\\\n\\left( x,p\\right) &\\mapsto &s\\left( x\\right) p\n\\end{eqnarray*\nGiven the Maurer-Cartan form $\\theta $ on $\\mathbb{L}$ and $\\alpha \\in\n\\Omega ^{1}\\left( M,\\mathfrak{l}\\right) $, define $\\beta \\in \\Omega\n^{1}\\left( N,\\mathfrak{l}\\right) $ b\n\\begin{equation}\n\\beta =\\pi _{M}^{\\ast }\\alpha -\\left( L_{s}\\right) ^{\\ast }\\theta .\n\\label{beta}\n\\end{equation\nThen, at each point $\\left( x,p\\right) \\in N$, define $\\mathcal{D}_{\\left(\nx,p\\right) }=\\left. \\ker \\beta \\right\\vert _{\\left( x,p\\right) }$. We can\nthen see that this is a distribution on $N$ of rank $\\dim M$. Let $\\left(\nv,w\\right) \\in T_{\\left( x,p\\right) }N$, where we consider $w\\in $ $T_{p\n\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\subset T_{p}\\mathbb{L}.$ Then, \n\\begin{equation}\n\\beta _{\\left( x,p\\right) }\\left( v,w\\right) =\\alpha _{x}\\left( v\\right)\n-\\theta _{s\\left( x\\right) p}\\left( \\left( L_{s}\\right) _{\\ast }\\left(\nv,w\\right) \\right) . 
\\label{betavw}\n\\end{equation}\nNow, let $x\\left( t\\right) $ be a curve on $M$ with $x\\left( 0\\right) =x$\nand $\\dot{x}\\left( 0\\right) =v,$ and $p\\left( t\\right) $ a curve in $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\subset \\mathbb{L}$ with $p\\left(\n0\\right) =p$ and $\\dot{p}\\left( 0\\right) =w$. Then, using the fact that $p$\nis in the right nucleus,\n\\begin{eqnarray*}\n\\theta _{s\\left( x\\right) p}\\left( \\left( L_{s}\\right) _{\\ast }\\left(\nv,w\\right) \\right) &=&\\left. \\frac{d}{dt}\\faktor{\\left( s\\left( x\\left(\nt\\right) \\right) p\\left( t\\right) \\right)} {\\left( s\\left( x\\right) p\\right)}\n\\right\\vert _{t=0} \\\\\n&=&\\left. \\frac{d}{dt}\\faktor{s\\left( x\\left( t\\right) \\right)} {s\\left(\nx\\right)} \\right\\vert _{t=0}+\\left. \\frac{d}{dt}\\faktor{\\left( s\\left(\nx\\right) \\left( \\faktor{p\\left( t\\right)}{p}\\right) \\right)} {s\\left(\nx\\right)} \\right\\vert _{t=0} \\\\\n&=&\\left. \\theta _{s}\\left( v\\right) \\right\\vert _{x}+\\left( \\func{Ad}\n_{s\\left( x\\right) }\\right) _{\\ast }w.\n\\end{eqnarray*}\nSo overall, \n\\begin{equation}\n\\beta _{\\left( x,p\\right) }\\left( v,w\\right) =\\left( \\alpha -\\theta\n_{s}\\right) _{x}\\left( v\\right) -\\left( \\func{Ad}_{s\\left( x\\right) }\\right)\n_{\\ast }w.\n\\end{equation}\nHence, $\\left( v,w\\right) \\in \\mathcal{D}_{\\left( x,p\\right) }$ if and only\nif $\\left( \\alpha -\\theta _{s}\\right) _{x}\\left( v\\right) =\\left( \\func{Ad}\n_{s\\left( x\\right) }\\right) _{\\ast }w.$ Now, consider $\\left. \\left( \\pi\n_{M}\\right) _{\\ast }\\right\\vert _{\\left( x,p\\right) }:\\mathcal{D}_{\\left(\nx,p\\right) }\\longrightarrow T_{x}M$. Suppose $\\left. \\left( \\pi _{M}\\right)\n_{\\ast }\\right\\vert _{\\left( x,p\\right) }\\left( v,w\\right) =0.$ Then, $v=0$,\nand since $\\left( \\alpha -\\theta _{s}\\right) _{x}\\left( v\\right) =\\left( \n\\func{Ad}_{s\\left( x\\right) }\\right) _{\\ast }w,$ we have $w=0$. Thus $\\left.\n\\left( \\pi _{M}\\right) _{\\ast }\\right\\vert _{\\left( x,p\\right) }$ is\ninjective on $\\mathcal{D}_{\\left( x,p\\right) }.$ On the other hand, it is\nalso clearly surjective, since given $v\\in T_{x}M$, we have $\\left( v,\\left( \n\\func{Ad}_{s\\left( x\\right) }^{-1}\\right) _{\\ast }\\left( \\left( \\alpha\n-\\theta _{s}\\right) _{x}\\left( v\\right) \\right) \\right) \\in \\mathcal{D}\n_{\\left( x,p\\right) }.$ Overall, $\\left. \\left( \\pi _{M}\\right) _{\\ast\n}\\right\\vert _{\\left( x,p\\right) }$ is a bijection from $\\mathcal{D}_{\\left(\nx,p\\right) }$ to $T_{x}M$, so in particular, $\\dim \\mathcal{D}_{\\left(\nx,p\\right) }=\\dim M$ and thus $\\mathcal{D}$ is a distribution of rank $\\dim\nM.$\n\nNow let us show that $\\mathcal{D}$ is involutive. We have \n\\begin{eqnarray}\n\\left. d\\beta \\right\\vert _{\\left( x,p\\right) } &=&\\left. \\pi _{M}^{\\ast\n}d\\alpha \\right\\vert _{\\left( x,p\\right) }-\\left. \\left( L_{s}\\right) ^{\\ast\n}d\\theta \\right\\vert _{\\left( x,p\\right) } \\notag \\\\\n&=&\\frac{1}{2}\\left. \\pi _{M}^{\\ast }\\left[ \\alpha ,\\alpha \\right] ^{\\left(\ns\\right) }\\right\\vert _{\\left( x,p\\right) }-\\frac{1}{2}\\left. \\left(\nL_{s}\\right) ^{\\ast }\\left[ \\theta ,\\theta \\right] \\right\\vert _{\\left(\nx,p\\right) } \\notag \\\\\n&=&\\frac{1}{2}\\left[ \\left. \\pi _{M}^{\\ast }\\alpha \\right\\vert _{\\left(\nx,p\\right) },\\left. \\pi _{M}^{\\ast }\\alpha \\right\\vert _{\\left( x,p\\right) }\n\\right] ^{s\\left( x\\right) } \\label{dbeta} \\\\\n&&-\\frac{1}{2}\\left[ \\left. 
\\left( L_{s}\\right) ^{\\ast }\\theta \\right\\vert\n_{\\left( x,p\\right) },\\left. \\left( L_{s}\\right) ^{\\ast }\\theta \\right\\vert\n_{\\left( x,p\\right) }\\right] ^{s\\left( x\\right) p}. \\notag\n\\end{eqnarray\nNote however that because $p\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) ,$\nwe have $\\left[ \\cdot ,\\cdot \\right] ^{s\\left( x\\right) }=\\left[ \\cdot\n,\\cdot \\right] ^{s\\left( x\\right) p}.$ So overall, using (\\ref{beta}), we\nget \n\\begin{equation*}\n\\left. d\\beta \\right\\vert _{\\left( x,p\\right) }=\\frac{1}{2}\\left[ \\left.\n\\beta \\right\\vert _{\\left( x,p\\right) },\\left. \\beta \\right\\vert _{\\left(\nx,p\\right) }\\right] ^{s\\left( x\\right) }+\\left[ \\left. \\beta \\right\\vert\n_{\\left( x,p\\right) },\\left. \\left( L_{s}\\right) ^{\\ast }\\theta \\right\\vert\n_{\\left( x,p\\right) }\\right] ^{s\\left( x\\right) }.\n\\end{equation*\nThus, $d\\beta =0$ whenever $\\beta =0$, and hence $\\mathcal{D=}\\ker \\beta $\nis involutive, and by the Frobenius Theorem, $\\mathcal{D}$ is integrable.\nLet $\\mathcal{L}$ be a leaf through the point $\\left( x,p\\right) \\in N.$\nThen, $\\pi _{M}$ induced a local diffeomorphism from a neighborhood to \n\\left( x,p\\right) $ to some neighborhood of $x\\in M.$ Then, let \nF:U\\longrightarrow \\mathcal{L}$ be the inverse map, such that $F\\left(\ny\\right) =\\left( y,f\\left( y\\right) \\right) $ for some $f:U\\longrightarrow \n\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) .$ By definition, $F^{\\ast }\\beta =0\n, so \n\\begin{eqnarray*}\n0 &=&F^{\\ast }\\beta \\\\\n&=&F^{\\ast }\\left( \\pi _{M}^{\\ast }\\alpha -\\left( L_{s}\\right) ^{\\ast\n}\\theta \\right) \\\\\n&=&\\alpha -\\left( L_{s}\\circ f\\right) ^{\\ast }\\theta\n\\end{eqnarray*\nHence, on $U$, $\\alpha =\\theta _{sf}$.\n\nIt is obvious that the distribution $\\mathcal{D}$ is right-invariant with\nrespect to $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) $, then proceeding in\nthe same way as for Lie groups, we find that in fact that when $M$ is\nconnected and simply-connected, the function $f$ extends to the whole\nmanifold.\n\nNow suppose $f,g\\in C^{\\infty }\\left( M,\\mathcal{N}^{R}\\left( \\mathbb{L\n\\right) \\right) $ such that $\\theta _{sf}=\\theta _{sg}.$ Then using (\\re\n{thetafs}), but with roles of $s$ and $f$ reversed, we find \n\\begin{equation*}\n\\theta _{sf}=\\theta _{s}+\\left( \\func{Ad}_{s}\\right) _{\\ast }\\theta _{f},\n\\end{equation*\nand similarly for $g$. Hence, we see that $\\theta _{f}=\\theta _{g}.$ Using\nLemma \\ref{lemThetauniq} for Lie groups, we find that $f=gC$ for some\nconstant $C\\in \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) .$\n\\end{proof}\n\n\\begin{remark}\nIn the case when $\\mathbb{L}$ is a group, Theorem \\ref{thmLoopCartan}\nreduces to the well-known analogous result for groups since the function $s$\ncan be taken to be arbitrary. In particular, the hypothesis (\\ref{cartanhyp\n) is automatically satisfied in that case. On the other hand, for the loop\nof unit octonions, this theorem becomes trivial. In this case, $\\mathcal{N\n^{R}\\left( \\mathbb{L}\\right) \\cong \\mathbb{Z}_{2},$ so the hypothesis (\\re\n{cartanhyp}) immediately implies that $\\alpha =\\theta _{s},$ even without\nusing the equation (\\ref{alphastruct}). However, under certain additional\nassumptions about $\\alpha $ and $s$, (\\ref{alphastruct}) may actually imply \n\\ref{cartanhyp}). 
Generally, (\\ref{cartanhyp}) is stronger than (\\re\n{alphastruct2}), which we know holds for any $\\alpha \\in \\Omega ^{1}\\left( M\n\\mathfrak{l}\\right) $ that satisfies (\\ref{alphastruct}). To bridge the gap\nbetween (\\ref{alphastruct2}) and (\\ref{cartanhyp}), additional properties of \n$\\mathbb{L}$ and $\\alpha $ are needed.\n\\end{remark}\n\n\\begin{corollary}\n\\label{corLoopCartan}Suppose $M$ be a connected and simply-connected smooth\nmanifold and $\\mathbb{L}$ a smooth loop such that $\\dim \\left( \\mathcal{N\n^{R}\\left( \\mathbb{L}\\right) \\right) =\\dim \\left( \\mathcal{N}^{R}\\left( \n\\mathfrak{l}\\right) \\right) .$ Also suppose that $s\\in C^{\\infty }\\left( M\n\\mathbb{L}\\right) $ and $\\alpha \\in \\Omega ^{1}\\left( M,\\mathfrak{l}\\right) $\nare such that\n\n\\begin{enumerate}\n\\item $d\\alpha -\\frac{1}{2}\\left[ \\alpha ,\\alpha \\right] ^{\\left( s\\right)\n}=0,$\n\n\\item $\\left. \\alpha \\right\\vert _{x}:T_{x}M\\longrightarrow \\mathfrak{l}$ is\nsurjective for every $x\\in M,$\n\n\\item $T_{x}M\\cong \\ker \\left. \\alpha \\right\\vert _{x}+\\ker \\left( \\left.\n\\theta _{s}\\right\\vert _{x}-\\left. \\alpha \\right\\vert _{x}\\right) $ for\nevery $x\\in M$,\n\n\\item $s_{x}\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) $ for every $x\\in M$.\n\\end{enumerate}\n\nThen, there exists a function $f\\in C^{\\infty }\\left( M,\\mathcal{N\n^{R}\\left( \\mathbb{L}\\right) \\right) $ such that $\\alpha =\\theta _{sf}$ with \n$f$ unique up to right multiplication by a constant element of $\\mathcal{N\n^{R}\\left( \\mathbb{L}\\right) .$\n\\end{corollary}\n\n\\begin{proof}\nSince $\\alpha $ satisfies (\\ref{alphastruct}), from Lemma \\re\n{lemAlphastruct} we know that it also satisfies (\\ref{alphastruct2}).\nSuppose $X,Y,Z\\in T_{x}M$, such that $Z\\in \\ker \\left. \\alpha \\right\\vert\n_{x}$. Then, from (\\ref{alphastruct2}) we obtain \n\\begin{equation}\n\\left[ \\alpha \\left( X\\right) ,\\alpha \\left( Y\\right) ,\\left( \\alpha -\\theta\n_{s_{x}}\\right) Z\\right] ^{\\left( s_{x}\\right) }-\\left[ \\alpha \\left(\nY\\right) ,\\alpha \\left( X\\right) ,\\left( \\alpha -\\theta _{s_{x}}\\right) \n\\right] ^{\\left( s_{x}\\right) }=0. \\label{alphastructassoc}\n\\end{equation\nHowever, since $T_{x}M\\cong \\ker \\left. \\alpha \\right\\vert _{x}+\\ker \\left(\n\\left. \\theta _{s}\\right\\vert _{x}-\\left. \\alpha \\right\\vert _{x}\\right) $,\nthis is true for any $Z\\in T_{x}M.$ Since$\\left. \\alpha \\right\\vert _{x}$ is\nsurjective, we hence find that for any $\\xi ,\\eta \\in \\mathfrak{l},$ \n\\begin{equation}\n\\left[ \\xi ,\\eta ,\\left( \\alpha -\\theta _{s_{x}}\\right) Z\\right] ^{\\left(\ns_{x}\\right) }-\\left[ \\eta ,\\xi ,\\left( \\alpha -\\theta _{s_{x}}\\right) \n\\right] ^{\\left( s_{x}\\right) }=0. 
\n\\begin{corollary}\n\\label{corLoopCartan}Suppose $M$ is a connected and simply-connected smooth\nmanifold and $\\mathbb{L}$ a smooth loop such that $\\dim \\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right) =\\dim \\left( \\mathcal{N}^{R}\\left( \n\\mathfrak{l}\\right) \\right) .$ Also suppose that $s\\in C^{\\infty }\\left( M,\\mathbb{L}\\right) $ and $\\alpha \\in \\Omega ^{1}\\left( M,\\mathfrak{l}\\right) $\nare such that\n\n\\begin{enumerate}\n\\item $d\\alpha -\\frac{1}{2}\\left[ \\alpha ,\\alpha \\right] ^{\\left( s\\right)\n}=0,$\n\n\\item $\\left. \\alpha \\right\\vert _{x}:T_{x}M\\longrightarrow \\mathfrak{l}$ is\nsurjective for every $x\\in M,$\n\n\\item $T_{x}M\\cong \\ker \\left. \\alpha \\right\\vert _{x}+\\ker \\left( \\left.\n\\theta _{s}\\right\\vert _{x}-\\left. \\alpha \\right\\vert _{x}\\right) $ for\nevery $x\\in M$,\n\n\\item $s_{x}\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) $ for every $x\\in M$.\n\\end{enumerate}\n\nThen, there exists a function $f\\in C^{\\infty }\\left( M,\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right) $ such that $\\alpha =\\theta _{sf}$ with \n$f$ unique up to right multiplication by a constant element of $\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) .$\n\\end{corollary}\n\n\\begin{proof}\nSince $\\alpha $ satisfies (\\ref{alphastruct}), from Lemma \\ref{lemAlphastruct} we know that it also satisfies (\\ref{alphastruct2}).\nSuppose $X,Y,Z\\in T_{x}M$, such that $Z\\in \\ker \\left. \\alpha \\right\\vert\n_{x}$. Then, from (\\ref{alphastruct2}) we obtain \n\\begin{equation}\n\\left[ \\alpha \\left( X\\right) ,\\alpha \\left( Y\\right) ,\\left( \\alpha -\\theta\n_{s_{x}}\\right) Z\\right] ^{\\left( s_{x}\\right) }-\\left[ \\alpha \\left(\nY\\right) ,\\alpha \\left( X\\right) ,\\left( \\alpha -\\theta _{s_{x}}\\right) Z\\right] ^{\\left( s_{x}\\right) }=0. \\label{alphastructassoc}\n\\end{equation}\nHowever, since $T_{x}M\\cong \\ker \\left. \\alpha \\right\\vert _{x}+\\ker \\left(\n\\left. \\theta _{s}\\right\\vert _{x}-\\left. \\alpha \\right\\vert _{x}\\right) $,\nthis is true for any $Z\\in T_{x}M.$ Since $\\left. \\alpha \\right\\vert _{x}$ is\nsurjective, we hence find that for any $\\xi ,\\eta \\in \\mathfrak{l},$ \n\\begin{equation}\n\\left[ \\xi ,\\eta ,\\left( \\alpha -\\theta _{s_{x}}\\right) Z\\right] ^{\\left(\ns_{x}\\right) }-\\left[ \\eta ,\\xi ,\\left( \\alpha -\\theta _{s_{x}}\\right) Z\\right] ^{\\left( s_{x}\\right) }=0. \\label{alphastructassoc2}\n\\end{equation}\nNow, since $s_{x}\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) ,$ it is the\nright companion of some $h\\in \\Psi ^{R}\\left( \\mathbb{L}\\right) $, thus\napplying $\\left( h^{\\prime }\\right) _{\\ast }^{-1}$ to (\\ref{alphastructassoc2}), and using (\\ref{loopalghom2}), we find that for any $\\xi ,\\eta \\in \\mathfrak{l},$ \n\\begin{equation*}\n\\left[ \\xi ,\\eta ,\\left( h^{\\prime }\\right) _{\\ast }^{-1}\\left( \\left(\n\\alpha -\\theta _{s_{x}}\\right) Z\\right) \\right] ^{\\left( 1\\right) }-\\left[\n\\eta ,\\xi ,\\left( h^{\\prime }\\right) _{\\ast }^{-1}\\left( \\left( \\alpha\n-\\theta _{s_{x}}\\right) Z\\right) \\right] ^{\\left( 1\\right) }=0.\n\\end{equation*}\nThus, we see that for any $Z\\in T_{x}M$, $\\left( h^{\\prime }\\right) _{\\ast\n}^{-1}\\left( \\left( \\alpha -\\theta _{s_{x}}\\right) Z\\right) \\in \\mathcal{N}^{R}\\left( \\mathfrak{l}\\right) .$ We know that $T_{1}\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\subset \\mathcal{N}^{R}\\left( \\mathfrak{l}\\right) $, however by hypothesis, their dimensions are equal, so in fact, $T_{1}\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) =\\mathcal{N}^{R}\\left( \\mathfrak{l}\\right) .$ Thus, $\\left( h^{\\prime }\\right) _{\\ast }^{-1}\\left(\n\\left( \\alpha -\\theta _{s_{x}}\\right) Z\\right) \\in T_{1}\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) $ and hence, from (\\ref{nuclearaction}), $\\left( \\func{Ad}_{s\\left( x\\right) }^{-1}\\right) _{\\ast }\\left( \\alpha\n-\\theta _{s_{x}}\\right) \\in \\Omega ^{1}\\left( M,T_{1}\\mathcal{N}^{R}\\left( \n\\mathbb{L}\\right) \\right) .$ This fulfils the hypothesis (\\ref{cartanhyp})\nfor Theorem \\ref{thmLoopCartan}, and thus there exists a function $f\\in\nC^{\\infty }\\left( M,\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right) $ such\nthat $\\alpha =\\theta _{sf}.$\n\\end{proof}\n\n\\begin{remark}\nSince $\\alpha $ is assumed to be surjective in Corollary \\ref{corLoopCartan}\nand $\\alpha =\\theta _{sf},$ we see that $sf:M\\longrightarrow \\mathbb{L}$ is\na smooth submersion.\n\\end{remark}\n\n\\section{Loop bundles}\n\n\\setcounter{equation}{0}\\label{sectBundle}Let $\\mathbb{L}$ be a smooth loop\nwith the $\\mathbb{L}$-algebra $\\mathfrak{l},$ and let us define for brevity $\\Psi ^{R}\\left( \\mathbb{L}\\right) =\\Psi $, $\\func{Aut}\\left( \\mathbb{L}\\right) =H$, $\\func{PsAut}^{R}\\left( \\mathbb{L}\\right) =G\\supset H$, and \n$\\mathcal{N}^{R}\\left( \\mathbb{L}\\right) =\\mathcal{N}$. Suppose $\\Psi ,H,G,\\mathcal{N}$ are Lie groups. Recall that we also have $\\Psi \\/\\mathcal{N}\\cong G.$\n\nLet $M$ be a smooth, finite-dimensional manifold with a $\\Psi $-principal\nbundle $\\mathcal{P}.$ Then we will define several associated bundles. In\ngeneral, recall that there is a one-to-one correspondence between\nequivariant maps from a principal bundle and sections of associated bundles.\nMore precisely, suppose we have a manifold $S$ with a left action $l:\\Psi\n\\times S\\longrightarrow S.$ Consider the associated bundle $E=\\mathcal{P}\\times _{\\Psi }S.$ Suppose we have a section $\\tilde{f}:M\\longrightarrow E$;\nthen this defines a unique equivariant map $f:\\mathcal{P}\\longrightarrow\nS,$ that is, such that for any $h\\in \\Psi $, \n\\begin{equation}\nf_{ph}=l_{h^{-1}}\\left( f_{p}\\right) . 
\\label{equimap}\n\\end{equation}\nConversely, any equivariant map $f:\\mathcal{P}\\longrightarrow S$ defines a\nsection $\\left( \\func{id},f\\right) :\\mathcal{P}\\longrightarrow \\mathcal{P}\\times S$, and then via the quotient map $q:\\mathcal{P}\\times\nS\\longrightarrow \\mathcal{P}\\times _{\\Psi }S=E$, it defines a section $\\tilde{f}:M\\longrightarrow E.$ In particular, for each $x\\in M$, $\\tilde{f}\\left( x\\right) =\\left\\lfloor p,f_{p}\\right\\rfloor _{\\Psi }$ where $p\\in\n\\pi ^{-1}\\left( x\\right) \\subset \\mathcal{P}$ and $\\left\\lfloor \\cdot ,\\cdot\n\\right\\rfloor _{\\Psi }$ is the equivalence class with respect to the action\nof $\\Psi $:\n\\begin{equation}\n\\left( p,f_{p}\\right) \\sim \\left( ph,l_{h^{-1}}\\left( f_{p}\\right) \\right)\n=\\left( ph,f_{ph}\\right) \\ \\ \\text{for any }h\\in \\Psi .\n\\end{equation}\nFor our purposes we will have the following associated bundles. Let $h\\in\n\\Psi $ and, as before, denote by $h^{\\prime }$ the partial action of $h$.\n\n\\begin{equation}\n\\begin{tabular}{l|l|l}\n\\textbf{Bundle} & \\textbf{Equivariant map} & \\textbf{Equivariance property}\n\\\\ \\hline\n$\\mathcal{P}$ & $k:\\mathcal{P}\\longrightarrow \\Psi $ & $k_{ph}=h^{-1}k_{p}$\n\\\\ \n$\\mathcal{Q}=\\mathcal{P}\\times _{\\Psi ^{\\prime }}\\mathbb{L}$ & $q:\\mathcal{P}\\longrightarrow \\mathbb{L}$ & $q_{ph}=\\left( h^{\\prime }\\right) ^{-1}q_{p}$\n\\\\ \n$\\mathcal{\\mathring{Q}}=\\mathcal{P}\\times _{\\Psi }\\mathbb{\\mathring{L}}$ & $r:\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ & $r_{ph}=h^{-1}\\left(\nr_{p}\\right) $ \\\\ \n$\\mathcal{N}\\cong \\mathcal{P}\\times _{\\Psi }\\left( \\Psi \\/H\\right) $ & $s:\\mathcal{P}\\longrightarrow \\Psi \\/H\\cong \\mathcal{C}\\subset \\mathbb{\\mathring{L}}$ & $s_{ph}=h^{-1}\\left( s_{p}\\right) $ \\\\ \n$\\mathcal{A}=\\mathcal{P}\\times _{\\Psi _{\\ast }^{\\prime }}\\mathfrak{l}$ & $\\eta :\\mathcal{P}\\longrightarrow \\mathfrak{l}$ & $\\eta _{ph}=\\left(\nh^{\\prime }\\right) _{\\ast }^{-1}\\eta _{p}$ \\\\ \n$\\mathfrak{p}_{\\mathcal{P}}=\\mathcal{P}\\times _{\\left( \\func{Ad}_{\\xi\n}\\right) _{\\ast }}\\mathfrak{p}$ & $\\xi :\\mathcal{P}\\longrightarrow \\mathfrak{p}$ & $\\xi _{ph}=\\left( \\func{Ad}_{h}^{-1}\\right) _{\\ast }\\xi _{p}$ \\\\ \n$\\mathcal{G}=\\mathcal{P}\\times _{\\Psi ^{\\prime }}G$ & $\\gamma :\\mathcal{P}\\longrightarrow G$ & $\\gamma _{ph}=\\left( h^{\\prime }\\right) ^{-1}\\gamma\n_{p} $ \\\\ \n$\\func{Ad}\\left( \\mathcal{P}\\right) =\\mathcal{P}\\times _{\\func{Ad}_{\\Psi\n}}\\Psi $ & $u:\\mathcal{P}\\longrightarrow \\Psi $ & $u_{ph}=h^{-1}u_{p}h$\n\\end{tabular}\n\\label{equimap2}\n\\end{equation}\n\nThe bundle $\\mathcal{Q}$ is the loop bundle with respect to the partial\naction $\\Psi ^{\\prime }$ and the bundle $\\mathcal{\\mathring{Q}}$ is the loop\nbundle with respect to the full action of $\\Psi $. The bundle $\\mathcal{N}$\nhas fibers isomorphic to $\\Psi \\/H\\cong \\mathcal{C},$ which is the set of\nright companions $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right) \\subset \\mathbb{\\mathring{L}}.$ Thus it is a subbundle of $\\mathcal{\\mathring{Q}}.$\nEquivalently, $\\mathcal{N}=\\mathcal{P}\\/H$ is the orbit space of the right $H$-action on $\\mathcal{P}$. Recall that the structure group of $\\mathcal{P}$\nreduces to $H$ if and only if the bundle $\\mathcal{N}$ has a global section.
If this\nis the case, then we can reduce the bundle $\\mathcal{P}$ to a principal $H$-bundle $\\mathcal{H}$ over $M$, and then since $H\\subset G$, lift to a\nprincipal $G$-bundle $\\mathcal{G}$. Also, let $\\mathcal{Q}=\\mathcal{P}\\times\n_{\\Psi ^{\\prime }}\\mathbb{L}$ be the bundle associated to $\\mathcal{P}$ with\nfiber $\\mathbb{L}$, where $\\Psi ^{\\prime }$ denotes that the left action on $\\mathbb{L}$ is via the partial action of $\\Psi $.\n\nWe also have some associated vector bundles, namely the vector bundle $\\mathcal{A}$ with fibers isomorphic to the $\\mathbb{L}$-algebra $\\mathfrak{l}$ with the tangent partial action of $\\Psi $, and the vector bundle $\\mathfrak{p}_{\\mathcal{P}}$ with fibers isomorphic to the Lie algebra $\\mathfrak{p}$, with the adjoint action of $\\Psi $.\n\n\\begin{example}\nSuppose $\\mathbb{L}=U\\mathbb{O}$ - the Moufang loop of unit octonions. In\nthis case, $\\Psi =Spin\\left( 7\\right) $, $H=G_{2}$, $G=SO\\left( 7\\right) $, $\\mathcal{N}=\\mathbb{Z}_{2}$, and then we have the well-known relations\n\\begin{eqnarray*}\nSO\\left( 7\\right) &\\cong &Spin\\left( 7\\right) \\/\\mathbb{Z}_{2} \\\\\nSpin\\left( 7\\right) \\/G_{2} &\\cong &U\\mathbb{O}\\cong S^{7} \\\\\nSO\\left( 7\\right) \\/G_{2} &\\cong &S^{7}\\/\\mathbb{Z}_{2}.\n\\end{eqnarray*}\nThen, if an orientable $7$-manifold $M$ has a spin structure, we have a principal $Spin\\left( 7\\right) $-bundle $\\mathcal{P}$ over $M$ and the corresponding $Spin\\left( 7\\right) \\/G_{2}$-bundle always has a smooth section, hence\nallowing the $Spin\\left( 7\\right) $-bundle to reduce to a $G_{2}$-principal\nbundle, which in turn lifts to an $SO\\left( 7\\right) $-bundle. The\nassociated bundle $\\mathcal{Q}$ in this case transforms under $SO\\left(\n7\\right) $, and is precisely the unit subbundle of the octonion bundle $\\mathbb{R}\\oplus TM$ defined in \\cite{GrigorianOctobundle}. The bundle $\\mathcal{\\mathring{Q}}$ then transforms under $Spin\\left( 7\\right) $ and\ncorresponds to the bundle of unit spinors. The associated vector bundle $\\mathcal{A}$ in this case has fibers isomorphic to $\\func{Im}\\mathbb{O}\\cong \n\\mathbb{R}^{7}$, and then the bundle itself is isomorphic to the tangent\nbundle $TM.$\n\\end{example}\n\nLet $s:\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ be an\nequivariant map. In particular, the equivalence class $\\left\\lfloor\np,s_{p}\\right\\rfloor _{\\Psi }$ defines a section of the bundle $\\mathcal{\\mathring{Q}}.$ We will refer to $s$ as \\emph{the defining map} (or \\emph{section}). It should be noted that such a map may not always exist globally.\nIf $\\mathbb{L}$ is a $G$-loop, then $\\mathcal{\\mathring{Q}}\\cong \\mathcal{N}$\nand hence existence of a global section of $\\mathcal{\\mathring{Q}}$ is\nequivalent to the reduction of the structure group of $\\mathcal{P}.$ There\nmay be topological obstructions to this.\n\n\\begin{example}\n\\label{exCx3}As in Example \\ref{ExNormedDiv}, let $\\mathbb{L}=U\\mathbb{C}\\cong U\\left( 1\\right) $ - the unit complex numbers, and $\\Psi =U\\left(\nn\\right) $, $H=G=SU\\left( n\\right) .$ Then in this setting, $\\mathcal{P}$ is\na principal $U\\left( n\\right) $-bundle over $M$ and $\\mathcal{Q}$ is a\ncircle bundle. Existence of a section of $\\mathcal{Q}$ is equivalent to the\nreduction of the structure group of $\\mathcal{P}$ to $SU\\left( n\\right) .$\nThe obstruction for this is the first Chern class of $\\mathcal{Q}$ \\cite{MilnorStasheff}.
In the quaternionic case, the structure group reduction\nfrom $Sp\\left( n\\right) Sp\\left( 1\\right) $ to $Sp\\left( n\\right) $ is less\nwell understood \\cite{BoyerGalicki}.\n\\end{example}\n\nGiven equivariant maps $q,r:\\mathcal{P}\\longrightarrow \\mathbb{L}$, we can\ndefine an equivariant product using $s$, such that for any $p\\in \\mathcal{P}$,\n\\begin{equation}\n\\left. q\\circ _{s}r\\right\\vert _{p}=q_{p}\\circ _{s_{p}}r_{p}.\n\\label{equiprod}\n\\end{equation}\nIndeed, using (\\ref{PsiActcircr}), \n\\begin{eqnarray}\n\\left. q\\circ _{s}r\\right\\vert _{ph} &=&q_{ph}\\circ _{s_{ph}}r_{ph} \\notag\n\\\\\n&=&\\left( h^{\\prime }\\right) ^{-1}q_{p}\\circ _{h^{-1}\\left( s_{p}\\right)\n}\\left( h^{\\prime }\\right) ^{-1}r_{p} \\notag \\\\\n&=&\\left( h^{\\prime }\\right) ^{-1}\\left( \\left. q\\circ _{s}r\\right\\vert\n_{p}\\right) . \\label{equiprod2}\n\\end{eqnarray}\nIn particular, this allows us to define a fiberwise product on sections of $\\mathcal{Q}$. Similarly, we define equivariant left and right quotients, and\nthus well-defined fiberwise quotients of sections of $\\mathcal{Q}.$\n\n\\begin{remark}\nThe map $s$ is required to define an equivariant product of two $\\mathbb{L}$-valued maps. In the $G_{2}$-structure case, as discussed above, sections of \n$\\mathcal{\\mathring{Q}}$ correspond to unit spinors, and each unit spinor\ndefines a $G_{2}$-structure, and hence a product on the corresponding\noctonion bundle \\cite{GrigorianOctobundle}. On the other hand, a product of\nan equivariant $\\mathbb{L}$-valued map and an equivariant $\\mathbb{\\mathring{L}}$-valued map will always be equivariant, using (\\ref{PsAutprod}). In the $G_{2}$-structure case, this corresponds to the Clifford product of a unit\noctonion, interpreted as an element of $\\mathbb{R}\\oplus T_{x}M$ at each\npoint, and a unit spinor. The result is then again a unit spinor. This does\nnot require any additional structure beyond the spinor bundle.\n\\end{remark}\n\nGiven equivariant maps $\\xi ,\\eta :\\mathcal{P}\\longrightarrow \\mathfrak{l}$,\nwe can define an equivariant bracket using $s$. For any $p\\in \\mathcal{P}$,\n\\begin{equation}\n\\left. \\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) }\\right\\vert _{p}=\\left[\n\\xi _{p},\\eta _{p}\\right] ^{\\left( s_{p}\\right) }. \\label{equibracket}\n\\end{equation}\nHere the equivariance follows from (\\ref{loopalghom}). Using (\\ref{phihs})\nwe then also have an equivariant map $\\varphi _{s}$ from equivariant $\\mathfrak{p}$-valued maps to equivariant $\\mathfrak{l}$-valued maps:\n\\begin{equation}\n\\left. \\varphi _{s}\\left( \\gamma \\right) \\right\\vert _{p}=\\varphi\n_{s_{p}}\\left( \\gamma _{p}\\right) . 
\\label{equiphis}\n\\end{equation}\nOther related objects, such as the Killing form $K^{\\left( s\\right) }$ and\nthe adjoint $\\varphi _{s}^{t}$ of $\\varphi _{s}$, are then similarly also\nequivariant.\n\nOverall, we can condense the above discussion into the following definition\nand theorem.\n\n\\begin{definition}\n\\label{defLoopStructure}A \\emph{loop bundle structure} over a smooth\nmanifold $M$ is a quadruple $\\left( \\mathbb{L},\\Psi ,\\mathcal{P},s\\right) $\nwhere\n\n\\begin{enumerate}\n\\item $\\mathbb{L}$ is a finite-dimensional smooth loop with a smoothly\nacting group of right pseudoautomorphism pairs $\\Psi $.\n\n\\item $\\mathcal{P}$ is a principal $\\Psi $-bundle over $M$.\n\n\\item $s:\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ is a smooth\nequivariant map.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{theorem}\nGiven a loop bundle structure $\\left( \\mathbb{L},\\Psi ,\\mathcal{P},s\\right) $\nover a manifold $M,$ and associated bundles $\\mathcal{Q}=\\mathcal{P}\\times\n_{\\Psi ^{\\prime }}\\mathbb{L},$ $\\mathcal{\\mathring{Q}}=\\mathcal{P}\\times _{\\Psi }\\mathbb{\\mathring{L}},$ $\\mathcal{A}=\\mathcal{P}\\times _{\\Psi _{\\ast\n}^{\\prime }}\\mathfrak{l},$ and $\\mathfrak{p}_{\\mathcal{P}}=\\mathcal{P}\\times _{\\left( \\func{Ad}_{\\xi }\\right) _{\\ast }}\\mathfrak{p},$ where $\\mathfrak{l}$\nis the $\\mathbb{L}$-algebra of $\\mathbb{L}$ and $\\mathfrak{p}$ the Lie\nalgebra of $\\Psi ,$\n\n\\begin{enumerate}\n\\item $s$ determines a smooth section $\\sigma \\in \\Gamma \\left( \\mathcal{\\mathring{Q}}\\right) .$\n\n\\item For any $A,B\\in \\Gamma \\left( \\mathcal{Q}\\right) $, $\\sigma $ defines\na fiberwise product $A\\circ _{\\sigma }B,$ via (\\ref{equiprod}).\n\n\\item For any $X,Y\\in \\Gamma \\left( \\mathcal{A}\\right) ,$ $\\sigma $ defines\na fiberwise bracket $\\left[ X,Y\\right] ^{\\left( \\sigma \\right) }$, via (\\ref{equibracket}).\n\n\\item $\\sigma $ defines a fiberwise map $\\varphi _{\\sigma }:\\Gamma \\left( \n\\mathfrak{p}_{\\mathcal{P}}\\right) \\longrightarrow \\Gamma \\left( \\mathcal{A}\\right) $, via (\\ref{equiphis}).\n\\end{enumerate}\n\\end{theorem}\n\n\\subsection{Connections and Torsion}\n\n\\label{sectTorsion}Suppose the principal $\\Psi $-bundle $\\mathcal{P}$ has a\nprincipal Ehresmann connection given by the decomposition \n\\begin{equation}\nT\\mathcal{P}=\\mathcal{HP}\\oplus \\mathcal{VP} \\label{HPVP}\n\\end{equation}\nwith $\\mathcal{H}_{ph}\\mathcal{P}=\\left( R_{h}\\right) _{\\ast }\\mathcal{H}_{p}\\mathcal{P}$ for any $p\\in \\mathcal{P}$ and $h\\in \\Psi $, and $\\mathcal{VP}=\\ker d\\pi $, where $\\pi :\\mathcal{P}\\longrightarrow M$ is the\nbundle projection map. Let the projection \n\\begin{equation*}\nv:T\\mathcal{P}\\longrightarrow \\mathcal{VP}\n\\end{equation*}\nbe the Ehresmann connection $1$-form. Similarly, define the projection $\\func{proj}_{\\mathcal{H}}:T\\mathcal{P}\\longrightarrow \\mathcal{HP}.$\n\nLet $\\mathfrak{p}$ be the Lie algebra of $\\Psi $. Then, as is well known,\nwe have an isomorphism \n\\begin{eqnarray}\n\\sigma &:&\\mathcal{P}\\times \\mathfrak{p}\\longrightarrow \\mathcal{VP} \\notag\n\\\\\n\\left( p,\\xi \\right) &\\mapsto &\\left. \\frac{d}{dt}\\left( p\\exp \\left( t\\xi\n\\right) \\right) \\right\\vert _{t=0}. \\label{mapsigma}\n\\end{eqnarray}\nFor any $\\xi \\in \\mathfrak{p},$ this defines a vertical vector field $\\sigma\n\\left( \\xi \\right) $ on $\\mathcal{P}$.
Given the Ehresmann connection 1-form \n$v$, define the $\\mathfrak{p}$-valued connection $1$-form $\\omega $ via \n\\begin{equation*}\n\\left( \\pi ,\\omega \\right) =\\sigma ^{-1}\\circ v:T\\mathcal{P}\\longrightarrow \n\\mathcal{P}\\times \\mathfrak{p}\n\\end{equation*}\nand recall that for any $h\\in \\Psi $, \n\\begin{equation*}\n\\left( R_{h}\\right) ^{\\ast }\\omega =\\func{Ad}_{h^{-1}}\\circ \\omega .\n\\end{equation*}
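\nFor orientation, in the familiar associative setting (a standard special case, recorded here only for comparison): if $\\Psi \\subset GL\\left( n,\\mathbb{R}\\right) $ is a matrix group and points of $\\mathcal{P}$ are realized as matrices, as for a frame bundle, then \n\\begin{equation*}\n\\sigma \\left( p,\\xi \\right) =\\left. \\frac{d}{dt}\\left( p\\exp \\left( t\\xi \\right) \\right) \\right\\vert _{t=0}=p\\xi ,\n\\end{equation*}\nand the equivariance property above reads $\\left( R_{h}\\right) ^{\\ast }\\omega =h^{-1}\\omega h$, the usual transformation rule for a principal connection $1$-form.\n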
Given $\\tilde{f}\\ $, the section of the associated\nbundle $\\mathcal{P}\\times _{\\Psi }S$ that corresponds to $f$, we can use $d^\n\\mathcal{H}}f$ to define the unique ma\n\\begin{equation}\nd^{\\mathcal{H}}\\tilde{f}:TM\\longrightarrow \\mathcal{P\\times }_{\\Psi }TS\n\\label{dHf}\n\\end{equation\nsuch that \n\\begin{equation*}\nd^{\\mathcal{H}}\\tilde{f}\\circ \\pi _{\\ast }=\\left( \\pi _{T\\mathcal{P}},d^\n\\mathcal{H}}f\\right) \\circ q^{\\prime }\n\\end{equation*\nwhere $\\pi _{T\\mathcal{P}}:T\\mathcal{P}\\longrightarrow \\mathcal{P}$ is the\nbundle projection for $T\\mathcal{P}.$ Moreover, $d^{\\mathcal{H}}\\tilde{f}$\ncovers the identity map on $M,$ and hence is a section of the fiber product \nTM\\times _{M}\\left( \\mathcal{P\\times }_{\\Psi }TS\\right) .$ This construction\nis summarized in the commutative diagram in Figure \\ref{tikCovDer}.\n\n\\begin{center}\n\\begin{tikzcd}\n\\mathcal{P} \\times TS \\arrow[rrrrr,\"q'\",bend left=10] \\arrow[ddd] & & & & & \\mathcal{P} \\times_{\\Psi} TS \\arrow[ddd]\\\\\n &T\\mathcal{P} \\arrow[ul,\"{(\\pi_{T\\mathcal{P}},d^{\\mathcal{H}} f)}\",swap] \\arrow[ddl,\"\\pi_{T\\mathcal{P}}\",swap] \\arrow[rrr,\"\\pi_{*}\"]& & &TM \\arrow[ddr,\"\\pi_{TM}\"] \\arrow[ur,\"d^{\\mathcal{H}} \\tilde{f}\"] & \\\\\n & &\\mathcal{P} \\times S \\arrow[r,\"q\"] \\arrow[dll,\"\\func{prj}_{1}\",bend right=15]& \\mathcal{P} \\times_{\\Psi} S \\arrow[drr,\"\\pi_{E}\",bend left=15,swap] & & \\\\\n\\mathcal{P} \\arrow[rrrrr,\"\\pi\",bend right=10] \\arrow[urr,\"({\\func{id},f)}\",bend right=15,swap]& & & & & M \\arrow[ull,\"\\tilde{f}\",bend left=15] \\\\\n\\end{tikzcd}\n\\captionof{figure}{Covariant derivative on maps and sections.} \\labe\n{tikCovDer}\n\\end{center}\n\nOf course, if $S$ is a vector space, then this reduces to the usual\ndefinition of the exterior covariant derivative of a vector bundle-valued\nfunction and $d^{\\mathcal{H}}f$ is a vector-bundle-valued $1$-form.\n\nGiven the above correspondence between equivariant maps from $\\mathcal{P}$\nand sections of associated bundles, for convenience, we will work with\nequivariant maps rather than sections. This will allow us to use the\nproperties of $\\mathbb{L}$ from the previous section more directly.\n\nGiven a $\\mathfrak{p}$-valued connection $1$-form $\\omega $ on \\QTR{cal}{P}\n, $ we can concretely work out $d^{\\mathcal{H}}f.$ Suppose $X\\in \\Gamma\n\\left( T\\mathcal{P}\\right) $ is a vector field on $\\mathcal{P}$, then using\nthe definition (\\ref{dHftilde}), we have \n\\begin{eqnarray*}\n\\left( d^{\\mathcal{H}}f\\right) \\left( X\\right) &=&df\\left( \\func{proj}_\n\\mathcal{H}}\\left( X\\right) \\right) \\\\\n&=&df\\left( X-v\\left( X\\right) \\right) \\\\\n&=&df\\left( X\\right) -df\\left( \\sigma \\left( \\pi _{TP}\\left( X\\right)\n,\\omega \\left( X\\right) \\right) \\right)\n\\end{eqnarray*\nwhere from (\\ref{mapsigma}), for $p\\in \\mathcal{P},$ \n\\begin{equation*}\n\\sigma \\left( \\pi _{TP}\\left( X\\right) ,\\omega \\left( X\\right) \\right)\n_{p}=\\left. \\frac{d}{dt}\\left( p\\exp \\left( t\\omega \\left( X_{p}\\right)\n\\right) \\right) \\right\\vert _{t=0}.\n\\end{equation*\nNow, let $\\gamma \\left( t\\right) =\\exp \\left( t\\omega \\left( X_{p}\\right)\n\\right) \\in \\Psi $, and note that $\\gamma \\left( t\\right) ^{-1}=\\gamma\n\\left( -t\\right) $, so that \n\\begin{eqnarray}\n\\left. df\\left( \\sigma \\left( \\pi _{TP}\\left( X\\right) ,\\omega \\left(\nX\\right) \\right) \\right) \\right\\vert _{p} &=&\\left. 
\\frac{d}{dt}\\left(\nf\\left( p\\gamma \\left( t\\right) \\right) \\right) \\right\\vert _{t=0} \\notag \\\\\n&=&-\\left. \\frac{d}{dt}\\left( \\exp \\left( t\\omega \\left( X_{p}\\right)\n\\right) f\\left( p\\right) \\right) \\right\\vert _{t=0} \\notag \\\\\n&=&-\\omega \\left( X_{p}\\right) \\cdot f\\left( p\\right) \\label{omegasharp}\n\\end{eqnarray\nwhere we have used the equivariance of $f$ and where, $\\omega \\left(\nX_{p}\\right) \\cdot f\\left( p\\right) \\in T_{f\\left( p\\right) }S$ denotes the\ninfinitesimal action of $\\omega \\left( X_{p}\\right) \\in \\mathfrak{p}$ on $S$.\n\n\\begin{lemma}\nLet $s$ be a $\\Psi $-equivariant $S$-valued function on $\\mathcal{P}\\ $and\nlet $\\omega $ be a $\\mathfrak{p}$-valued connection $1$-form on $\\mathcal{P\n, $ then the covariant differential $d^{\\mathcal{H}}s:T\\mathcal{P\n\\longrightarrow TS$ is given by \n\\begin{equation}\nd^{\\mathcal{H}}s=ds+\\omega \\cdot s \\label{dHftilde2}\n\\end{equation\nwhere $\\omega \\cdot s:T_{p}\\mathcal{P}\\longrightarrow T_{s\\left( p\\right) }S$\nfor each $p\\in \\mathcal{P}\\ $gives the infinitesimal action of $\\omega $ on \nS$.\n\\end{lemma}\n\nNow, more concretely, given a principal connection $\\omega $ on $\\mathcal{P\n, $ consider the induced covariant derivatives on equivariant $\\mathbb{L}$-\nand $\\mathbb{\\mathring{L}}$-valued maps. To avoid confusion, denote $d^\n\\mathcal{H}}$ acting on $\\mathbb{L}$-valued maps by $D$ and by $\\mathring{D}$\nwhen it is acting on $\\mathbb{\\mathring{L}}$-valued maps. Similarly,\nconsider equivariant $\\mathfrak{l}$-valued maps from $\\mathcal{P}.$ Given \n\\xi :\\mathcal{P}\\longrightarrow \\mathfrak{l}$ such that $\\xi _{ph}=\\left(\nh^{-1}\\right) _{\\ast }^{\\prime }\\left( \\xi \\right) ,$ define the covariant\nderivative $d^{\\mathcal{H}}\\xi $ via (\\ref{dHftilde2}), so overall, given \nX\\in \\Gamma \\left( T\\mathcal{P}\\right) ,$ \n\\begin{equation}\nd_{X}^{\\mathcal{H}}\\xi =d_{X}\\xi +\\omega \\left( X\\right) \\cdot \\xi\n\\label{dHxi}\n\\end{equation\nwhere $\\omega \\left( X\\right) \\cdot \\xi $ refers to the linear\nrepresentation of the Lie algebra $\\mathfrak{p}$ on $\\mathfrak{l}$ given by \n\\ref{pactl1}).\n\nWe have the following useful relation between $D$ and $\\mathring{D}.$\n\n\\begin{lemma}\n\\label{lemProdAs}Suppose $A:\\mathcal{P}\\longrightarrow \\mathbb{L}$ and $s\n\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ are equivariant, and let \np\\in \\mathcal{P}.$ Then, \n\\begin{equation}\n\\left. \\mathring{D}\\left( As\\right) \\right\\vert _{p}=\\left( R_{s_{p}}\\right)\n_{\\ast }\\left. DA\\right\\vert _{p}+\\left( L_{A_{p}}\\right) _{\\ast }\\left. \n\\mathring{D}s\\right\\vert _{p}. \\label{DAs}\n\\end{equation\nNote that $\\left. \\mathring{D}\\left( As\\right) \\right\\vert _{p}:T_{p\n\\mathcal{P}\\longrightarrow T_{As}\\mathbb{\\mathring{L}}.$\n\\end{lemma}\n\n\\begin{proof}\nLet $X_{p}\\in T_{p}\\mathcal{P}$ and let $p\\left( t\\right) $ be a curve on \n\\mathcal{P}$ with $p\\left( 0\\right) =p$ and $\\dot{p}\\left( 0\\right) =\\func\nproj}_{\\mathcal{H}}\\left( X_{p}\\right) \\in \\mathcal{H}_{p}\\mathcal{P}.$\nConsider \n\\begin{equation}\n\\left. \\mathring{D}\\left( As\\right) \\right\\vert _{p}\\left( X_{p}\\right)\n=\\left. \\frac{d}{dt}\\left( A_{p\\left( t\\right) }s_{p\\left( t\\right) }\\right)\n\\right\\vert _{t=0}\n\\end{equation\nHowever, \n\\begin{eqnarray}\n\\left. \\frac{d}{dt}\\left( A_{p\\left( t\\right) }s_{p\\left( t\\right) }\\right)\n\\right\\vert _{t=0} &=&\\left. 
\\frac{d}{dt}\\left( A_{p\\left( t\\right)\n}s_{p}\\right) \\right\\vert _{t=0}+\\left. \\frac{d}{dt}\\left( A_{p}s_{p\\left(\nt\\right) }\\right) \\right\\vert _{t=0} \\notag \\\\\n&=&\\left( R_{s_{p}}\\right) _{\\ast }\\left( DA\\right) _{p}\\left( X_{p}\\right)\n+\\left( L_{A_{p}}\\right) _{\\ast }\\left( \\mathring{D}s\\right) _{p}\\left(\nX_{p}\\right)\n\\end{eqnarray\nand thus (\\ref{DAs}) holds.\n\\end{proof}\n\nSuppose now $\\left( \\mathbb{L},\\Psi ,\\mathcal{P},s\\right) $ is a loop bundle\nstructure, as in Definition \\ref{defLoopStructure}, so that $s\\ $is an \n\\mathbb{\\mathring{L}}$-valued equivariant map. Then we have the following\nimportant definition.\n\n\\begin{definition}\n\\label{defTors}The $\\emph{torsion}$ $T^{\\left( s,\\omega \\right) }$ of \n\\left( \\mathbb{L},\\Psi ,\\mathcal{P},s\\right) $ with respect to $\\omega $ is\na horizontal $\\mathfrak{l}$-valued $1$-form on $\\mathcal{P}$ given by \n\\begin{equation}\nT^{\\left( s,\\omega \\right) }=\\theta _{s}\\circ \\func{proj}_{\\mathcal{H}}\n\\end{equation\nwhere $\\theta _{s}$ is the Darboux derivative of $s$. Equivalently, at $p\\in \n\\mathcal{P}$, we hav\n\\begin{equation}\n\\left. T^{\\left( s,\\omega \\right) }\\right\\vert _{p}=\\left(\nR_{s_{p}}^{-1}\\right) _{\\ast }\\left. \\mathring{D}s\\right\\vert _{p}.\n\\end{equation}\n\\end{definition}\n\nThus, $T^{\\left( s,\\omega \\right) }$ is the horizontal component of $\\theta\n_{s}.$ We also easily see that it is $\\Psi $-equivariant. Using the\nequivariance of $s$ and $\\mathring{D}s$, we have for $h\\in \\Psi $, \n\\begin{equation}\nT_{ph}^{\\left( s,\\omega \\right) }=\\left( h_{\\ast }^{\\prime }\\right)\n^{-1}T_{p}^{\\left( s,\\omega \\right) }. \\label{Tequi1}\n\\end{equation\nThus, $T^{\\left( s,\\omega \\right) }$ is a \\emph{basic} (i.e. horizontal and\nequivariant) $\\mathfrak{l}$-valued $1$-form on $\\mathcal{P}$, and thus\ndefines a $1$-form on $M$ with values in the associated vector bundle \n\\mathcal{A=P\\times }_{\\Psi _{\\ast }^{\\prime }}\\mathfrak{l}.$ We also have\nthe following key property of $T^{\\left( s,\\omega \\right) }.$\n\n\\begin{theorem}\n\\label{thmTprop}Suppose $T^{\\left( s,\\omega \\right) }$ is as given in\nDefinition \\ref{defTors} and also let $\\hat{\\omega}^{\\left( s\\right) }\\in\n\\Omega ^{1}\\left( \\mathcal{P},\\mathfrak{l}\\right) $ be given by \n\\begin{equation}\n\\hat{\\omega}^{\\left( s\\right) }=\\varphi _{s}\\left( \\omega \\right) .\n\\label{omegahat}\n\\end{equation\nThen, \n\\begin{equation}\n\\theta _{s}=T^{\\left( s,\\omega \\right) }-\\hat{\\omega}^{\\left( s\\right) }.\n\\label{stheta}\n\\end{equation\nIn particular, $T^{\\left( s,\\omega \\right) }$ and the quantity $-\\hat{\\omega\n^{\\left( s\\right) }$ are respectively the horizontal and vertical components\nof $\\theta _{s}$.\n\\end{theorem}\n\n\\begin{proof}\nLet $p\\in \\mathcal{P}.$ Then, from (\\ref{dHftilde2}) we have \n\\begin{eqnarray}\n\\left( R_{s_{p}}^{-1}\\right) _{\\ast }\\left. \\mathring{D}s\\right\\vert _{p}\n&=&\\left( R_{s_{p}}^{-1}\\right) _{\\ast }\\left. ds\\right\\vert _{p}+\\left(\nR_{s_{p}}^{-1}\\right) _{\\ast }\\left( \\omega \\cdot s_{p}\\right) \\notag \\\\\n&=&\\left. \\theta _{s}\\right\\vert _{p}+\\left. \\frac{d}{dt}\\faktor{\\left( \\exp\n\\left( t\\omega _{p}\\right) \\left( s_{p}\\right) \\right)}{ s_{p}}\\right\\vert\n_{t=0} \\notag \\\\\n&=&\\left. 
\\theta _{s}\\right\\vert _{p}+\\varphi _{s_{p}}\\left( \\omega\n_{p}\\right) \\label{Torsion1f2}\n\\end{eqnarray\nwhere we have used the definition (\\ref{phis}) of $\\varphi _{s}$. Hence we\nget (\\ref{stheta}).\n\\end{proof}\n\nSuppose $p\\left( t\\right) $ is a curve on $\\mathcal{P}$ with $p\\left(\n0\\right) =p$ and with a horizontal initial velocity vector $\\dot{p}\\left(\n0\\right) =X_{p}^{\\mathcal{H}}.$ Then, by definition, \n\\begin{equation}\n\\left. \\frac{d}{dt}s_{p\\left( t\\right) }\\right\\vert _{t=0}=\\left. \\mathring{\n}_{X}s\\right\\vert _{p}=\\left( R_{s_{p}}\\right) _{\\ast }\\left.\nT_{X_{p}}^{\\left( s,\\omega \\right) }\\right\\vert _{p}, \\label{dts}\n\\end{equation\nwhere $T_{X}^{\\left( s,\\omega \\right) }=T^{\\left( s,\\omega \\right) }\\left(\nX\\right) \\in \\mathfrak{l}.$ This observation will come in useful later on.\n\n\\begin{remark}\nIf $s_{p}\\in \\mathcal{C\\cong \\Psi }\/H$ for all $p\\in \\mathcal{P},$ then as\nwe know, the structure group of $\\mathcal{P}$ is reduced to $H.$ Moreover,\nthe reduced holonomy group of $\\omega $ is contained in $H$ if and only if\nthere exists such a map $s$ with $d^{\\mathcal{H}}s=0.$ This is equivalent to \n$T^{\\left( s,\\omega \\right) }=0$, so this is the motivation for calling this\nquantity the torsion. If $s$ is not necessarily in $\\mathcal{C},$ then we\ncan still say something about the holonomy of $\\omega $ in the case $d^\n\\mathcal{H}}s=0$. Let $p\\in \\mathcal{P}$ and suppose $\\Gamma \\left( t\\right) \n$ is the horizontal lift with respect to the connection $\\omega $ of some\nclosed curve based at $\\pi \\left( p\\right) .$ Then, the endpoint of $\\Gamma $\nis $\\Gamma \\left( 1\\right) =ph$ for some $h\\in \\Psi .$ The set of all such \nh\\in \\Psi $ form the holonomy group $\\func{Hol}_{p}\\left( \\omega \\right) $\nof $\\omega $ at $p$ \\cite{KobayashiNomizu1}. Now if we have an equivariant\nmap $s:\\mathcal{P}\\longrightarrow \\mathbb{L}$, then $s\\circ \\Gamma $ is a\ncurve on $\\mathbb{L}$ with $s\\left( \\Gamma \\left( 1\\right) \\right)\n=s_{ph}=h^{-1}s_{p}.$ However, $\\frac{d}{dt}\\left( s\\circ \\Gamma \\left(\nt\\right) \\right) =\\left( d^{\\mathcal{H}}s\\right) _{s\\circ \\Gamma \\left(\nt\\right) }\\dot{\\Gamma}\\left( t\\right) $ since the velocity vectors of \n\\Gamma \\left( t\\right) $ are horizontal. Thus, if $d^{\\mathcal{H}}s=0$\neverywhere, then the curve $s\\circ \\Gamma \\left( t\\right) $ is constant, and\nhence $h^{-1}s_{p}=s_{p}.$ By (\\ref{AutLr}), this means that $h\\in \\func{Aut\n\\left( \\mathbb{L},\\circ _{s_{p}}\\right) .$ This is true for any horizontal\nlift $\\Gamma $, hence we see that $\\func{Hol}_{p}\\left( \\omega \\right)\n\\subset \\func{Aut}\\left( \\mathbb{L},\\circ _{s_{p}}\\right) .$\n\\end{remark}\n\nThe torsion also enters expressions for covariant derivatives of the loop\nproduct, loop algebra bracket, as well as the map $\\varphi _{s}$.\n\n\\begin{theorem}\n\\label{lemProd}Suppose $A,B:\\mathcal{P}\\longrightarrow \\mathbb{L},$ and $s\n\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ are equivariant, and let \np\\in \\mathcal{P}.$ Then, \n\\begin{eqnarray}\n\\left. D\\left( A\\circ _{s}B\\right) \\right\\vert _{p} &=&\\left(\nR_{B_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\left. DA\\right\\vert\n_{p}+\\left( L_{A_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\left.\nDB\\right\\vert _{p} \\label{DAsB} \\\\\n&&+\\left[ A_{p},B_{p},\\left. T^{\\left( s,\\omega \\right) }\\right\\vert _{p\n\\right] ^{\\left( s_{p}\\right) }. 
\nThe torsion also enters expressions for covariant derivatives of the loop\nproduct, the loop algebra bracket, as well as the map $\\varphi _{s}$.\n\n\\begin{theorem}\n\\label{lemProd}Suppose $A,B:\\mathcal{P}\\longrightarrow \\mathbb{L},$ and $s:\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ are equivariant, and let $p\\in \\mathcal{P}.$ Then, \n\\begin{eqnarray}\n\\left. D\\left( A\\circ _{s}B\\right) \\right\\vert _{p} &=&\\left(\nR_{B_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\left. DA\\right\\vert\n_{p}+\\left( L_{A_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\left.\nDB\\right\\vert _{p} \\label{DAsB} \\\\\n&&+\\left[ A_{p},B_{p},\\left. T^{\\left( s,\\omega \\right) }\\right\\vert _{p}\\right] ^{\\left( s_{p}\\right) }. \\notag\n\\end{eqnarray}\nIf $\\xi ,\\eta :\\mathcal{P}\\longrightarrow \\mathfrak{l}$ are equivariant,\nthen \n\\begin{equation}\nd^{\\mathcal{H}}\\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) }=\\left[ d^{\\mathcal{H}}\\xi ,\\eta \\right] ^{\\left( s\\right) }+\\left[ \\xi ,d^{\\mathcal{H}}\\eta \\right] ^{\\left( s\\right) }+\\left[ \\xi ,\\eta ,T^{\\left( s,\\omega\n\\right) }\\right] ^{\\left( s\\right) }-\\left[ \\eta ,\\xi ,T^{\\left( s,\\omega\n\\right) }\\right] ^{\\left( s\\right) }. \\label{dHbrack}\n\\end{equation}\n\nThe $\\mathfrak{l}\\otimes \\mathfrak{p}^{\\ast }$-valued map $\\varphi _{s}:\\mathcal{P}\\longrightarrow \\mathfrak{l}\\otimes \\mathfrak{p}^{\\ast }$\nsatisfies \n\\begin{equation}\nd^{\\mathcal{H}}\\varphi _{s}=\\func{id}_{\\mathfrak{p}}\\cdot T^{\\left( s,\\omega\n\\right) }-\\left[ \\varphi _{s},T^{\\left( s,\\omega \\right) }\\right] ^{\\left(\ns\\right) }, \\label{dhphis}\n\\end{equation}\nwhere $\\func{id}_{\\mathfrak{p}}$ is the identity map of $\\mathfrak{p}$ and $\\cdot $ denotes the action of the Lie algebra $\\mathfrak{p}$ on $\\mathfrak{l}$\ngiven by (\\ref{pactl1}).\n\\end{theorem}\n\n\\begin{proof}\nLet $X_{p}\\in T_{p}\\mathcal{P}$ and let $p\\left( t\\right) $ be a curve on $\\mathcal{P}$ with $p\\left( 0\\right) =p$ and $\\dot{p}\\left( 0\\right) =\\func{proj}_{\\mathcal{H}}\\left( X_{p}\\right) \\in \\mathcal{H}_{p}\\mathcal{P}.$ To\nshow (\\ref{DAsB}), first note that \n\\begin{equation}\n\\left. D\\left( A\\circ _{s}B\\right) \\right\\vert _{p}\\left( X_{p}\\right)\n=\\left. \\frac{d}{dt}\\left( A_{p\\left( t\\right) }\\circ _{s_{p\\left( t\\right)\n}}B_{p\\left( t\\right) }\\right) \\right\\vert _{t=0}. \\label{dHprod}\n\\end{equation}\nHowever,\n\\begin{eqnarray}\n\\left. \\frac{d}{dt}\\left( A_{p\\left( t\\right) }\\circ _{s_{p\\left( t\\right)\n}}B_{p\\left( t\\right) }\\right) \\right\\vert _{t=0} &=&\\left. \\frac{d}{dt}\\left( A_{p\\left( t\\right) }\\circ _{s_{p}}B_{p}\\right) \\right\\vert\n_{t=0}+\\left. \\frac{d}{dt}\\left( A_{p}\\circ _{s_{p}}B_{p\\left( t\\right)\n}\\right) \\right\\vert _{t=0} \\notag \\\\\n&&+\\left. \\frac{d}{dt}\\left( A_{p}\\circ _{s_{p\\left( t\\right) }}B_{p}\\right)\n\\right\\vert _{t=0} \\notag \\\\\n&=&\\left( R_{B_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\left.\nDA\\right\\vert _{p}\\left( X_{p}\\right) +\\left( L_{A_{p}}^{\\left( s_{p}\\right)\n}\\right) _{\\ast }\\left. DB\\right\\vert _{p}\\left( X_{p}\\right) \\notag \\\\\n&&+\\left. \\frac{d}{dt}\\left( A_{p}\\circ _{s_{p\\left( t\\right) }}B_{p}\\right)\n\\right\\vert _{t=0} \\label{dHprod2}\n\\end{eqnarray}\nand then, using Lemma \\ref{lemQuotient}, \n\\begin{eqnarray}\n\\left. \\frac{d}{dt}\\left( A_{p}\\circ _{s_{p\\left( t\\right) }}B_{p}\\right)\n\\right\\vert _{t=0} &=&\\left. \\frac{d}{dt}\\left( \\left( A_{p}\\cdot\nB_{p}s_{p\\left( t\\right) }\\right) \\/s_{p\\left( t\\right) }\\right) \\right\\vert\n_{t=0} \\notag \\\\\n&=&\\left. \\frac{d}{dt}\\left( \\left( A_{p}\\cdot B_{p}s_{p\\left( t\\right)\n}\\right) \\/s_{p}\\right) \\right\\vert _{t=0} \\label{dHprod2a} \\\\\n&&+\\left. \\frac{d}{dt}\\left( \\left( A_{p}\\cdot B_{p}s_{p}\\right) \\/s_{p}\\cdot\ns_{p\\left( t\\right) }\\right) \\/s_{p}\\right\\vert _{t=0}. 
\\notag\n\\end{eqnarray}\nLooking at each term in (\\ref{dHprod2a}), we have \n\\begin{eqnarray*}\n\\left( A_{p}\\cdot B_{p}s_{p\\left( t\\right) }\\right) \\/s_{p} &=&\\left(\nA_{p}\\cdot B_{p}\\left( \\faktor{s_{p\\left( t\\right) }}{s_{p}}\\cdot\ns_{p}\\right) \\right) \\/s_{p} \\\\\n&=&A_{p}\\circ _{s_{p}}\\left( B_{p}\\circ _{s_{p}}\\left( \\faktor{s_{p\\left(\nt\\right) }}{s_{p}}\\right) \\right)\n\\end{eqnarray*}\nand\n\\begin{equation*}\n\\left( \\left( A_{p}\\cdot B_{p}s_{p}\\right) \\/s_{p}\\cdot s_{p\\left( t\\right)\n}\\right) \\/s_{p}=\\left( A_{p}\\circ _{s_{p}}B_{p}\\right) \\circ _{s_{p}}\\left( \\faktor{s_{p\\left( t\\right) }}{s_{p}}\\right) .\n\\end{equation*}\nOverall, (\\ref{dHprod2}) becomes \n\\begin{equation}\n\\left. \\frac{d}{dt}\\left( A_{p}\\circ _{s_{p\\left( t\\right) }}B_{p}\\right)\n\\right\\vert _{t=0}=\\left( \\left( L_{A_{p}}^{\\left( s_{p}\\right) }\\circ\nL_{B_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }-\\left( L_{A_{p}\\circ\n_{s_{p}}B_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\right) \\left(\nR_{s_{p}}^{-1}\\right) _{\\ast }\\left. \\mathring{D}s\\right\\vert _{p}\\left(\nX_{p}\\right) \\label{dHprod3}\n\\end{equation}\nand hence we get (\\ref{DAsB}) using the definitions of $T^{\\left( s,\\omega\n\\right) }$ and the mixed associator (\\ref{pxiqsol}).\n\nTo show (\\ref{dHbrack}), note that \n\\begin{eqnarray*}\n\\left. d_{X}^{\\mathcal{H}}\\left( \\left[ \\xi ,\\eta \\right] ^{\\left( s\\right)\n}\\right) \\right\\vert _{p} &=&\\left. \\frac{d}{dt}\\left[ \\xi _{p\\left(\nt\\right) },\\eta _{p\\left( t\\right) }\\right] ^{\\left( s_{p\\left( t\\right)\n}\\right) }\\right\\vert _{t=0} \\\\\n&=&\\left[ \\left. d_{X}^{\\mathcal{H}}\\xi \\right\\vert _{p},\\eta _{p}\\right]\n^{\\left( s_{p}\\right) }+\\left[ \\xi _{p},\\left. d_{X}^{\\mathcal{H}}\\eta\n\\right\\vert _{p}\\right] ^{\\left( s_{p}\\right) } \\\\\n&&+\\left. \\frac{d}{dt}\\left[ \\xi _{p},\\eta _{p}\\right] ^{\\left( s_{p\\left(\nt\\right) }\\right) }\\right\\vert _{t=0}.\n\\end{eqnarray*}\nHowever, using (\\ref{db1}) and (\\ref{dts}), the last term becomes \n\\begin{equation*}\n\\left. \\frac{d}{dt}\\left[ \\xi _{p},\\eta _{p}\\right] ^{\\left( s_{p\\left(\nt\\right) }\\right) }\\right\\vert _{t=0}=\\left[ \\xi _{p},\\eta _{p},\\left.\nT_{X}^{\\left( s,\\omega \\right) }\\right\\vert _{p}\\right] ^{\\left(\ns_{p}\\right) }-\\left[ \\eta _{p},\\xi _{p},\\left. T_{X}^{\\left( s,\\omega\n\\right) }\\right\\vert _{p}\\right] ^{\\left( s_{p}\\right) }\n\\end{equation*}\nand hence we obtain (\\ref{dHbrack}).\n\nLet us now show (\\ref{dhphis}). From (\\ref{actpl}), given $\\gamma \\in \n\\mathfrak{p}$, setting $\\hat{\\gamma}\\left( r\\right) =\\varphi _{r}\\left(\n\\gamma \\right) $ for each $r\\in \\mathbb{L}$, we have \n\\begin{equation}\n\\left. d\\hat{\\gamma}\\right\\vert _{r}\\left( \\rho _{r}\\left( \\xi \\right)\n\\right) =\\gamma \\cdot \\xi -\\left[ \\hat{\\gamma}\\left( r\\right) ,\\xi \\right]\n^{\\left( r\\right) }\n\\end{equation}\nfor any $\\xi \\in \\mathfrak{l}.$ Now for a map $s:\\mathcal{P}\\longrightarrow \n\\mathbb{L}$ and some vector field $X$ on $\\mathcal{P},$ we have at each $p\\in \\mathcal{P}$ \n\\begin{eqnarray}\n\\left. d\\left( \\varphi _{s}\\left( \\gamma \\right) \\right) \\right\\vert\n_{p}\\left( X\\right) &=&\\left. d\\hat{\\gamma}\\right\\vert _{s_{p}}\\circ \\left.\nds\\right\\vert _{p}\\left( X\\right) \\notag \\\\\n&=&\\left. 
d\\hat{\\gamma}\\right\\vert _{s_{p}}\\left( \\rho _{s_{p}}\\left( \\theta\n_{s}\\left( X_{p}\\right) \\right) \\right) \\notag \\\\\n&=&\\gamma \\cdot \\theta _{s}\\left( X_{p}\\right) -\\left[ \\varphi\n_{s_{p}}\\left( \\gamma \\right) ,\\theta _{s}\\left( X_{p}\\right) \\right]\n^{\\left( s_{p}\\right) }.\n\\end{eqnarray\nTherefore, $d\\varphi _{s}$ is given by \n\\begin{equation}\nd\\varphi _{s}\\left( \\gamma \\right) =\\gamma \\cdot \\theta _{s}-\\left[ \\varphi\n_{s}\\left( \\gamma \\right) ,\\theta _{s}\\right] ^{\\left( s\\right) }.\n\\label{dphis}\n\\end{equation\nTo obtain $d^{\\mathcal{H}}\\varphi _{s}$ we take the horizontal component,\nand hence using (\\ref{stheta}), we just replace $\\theta _{s}$ in (\\ref{dphis\n) by $T^{\\left( s,\\omega \\right) },$ which gives (\\ref{dhphis}).\n\\end{proof}\n\n\\begin{remark}\nIf $\\mathbb{L}$ is associative, i.e. is a group, then certainly $A\\circ\n_{s}B=AB$ and this is then an equivariant section, if $A$ and $B$ are such.\nIn (\\ref{DAsB}) the second term on the right vanishes, and thus $D$\nsatisfies the product rule with respect to multiplication on $\\mathbb{L}.$\n\\end{remark}\n\nWe can rewrite (\\ref{DAs}) as \n\\begin{eqnarray}\n\\mathring{D}\\left( As\\right) &=&\\left( DA\\right) s+A\\left( \\left( \\mathring{\n}s\\right) \/s\\cdot s\\right) \\notag \\\\\n&=&\\left( DA\\right) s+\\left( A\\circ _{s}T^{\\left( s,\\omega \\right) }\\right)\ns. \\label{DAs2}\n\\end{eqnarray\nUsing this, we can then define an adapted covariant derivative $D^{\\left(\ns\\right) }$ on equivariant $\\mathbb{L}$-valued maps, given by \n\\begin{equation}\n\\left. D^{\\left( s\\right) }A\\right\\vert _{p}=\\left( R_{s_{p}}^{-1}\\right)\n_{\\ast }\\left. \\mathring{D}\\left( As\\right) \\right\\vert _{p}=\\left.\nDA\\right\\vert _{p}+\\left( L_{A_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast\n}T_{p}^{\\left( s,\\omega \\right) } \\label{Dsderiv}\n\\end{equation\nwith respect to which, \n\\begin{equation}\n\\left. D^{\\left( s\\right) }\\left( A\\circ _{s}B\\right) \\right\\vert\n_{p}=\\left( R_{B_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast }\\left.\nDA\\right\\vert _{p}+\\left( L_{A_{p}}^{\\left( s_{p}\\right) }\\right) _{\\ast\n}\\left. D^{\\left( s\\right) }B\\right\\vert _{p}. \\label{Dsderivprod}\n\\end{equation\nThis is the precise analog of the octonion covariant derivative from \\cit\n{GrigorianOctobundle}. The derivative $D^{\\left( s\\right) }$ essentially\nconverts an $\\mathbb{L}$-valued map into an $\\mathbb{\\mathring{L}}$-valued\none using $s$ and then differentiates it using $\\mathring{D}$ before\nconverting back to $\\mathbb{L}.$ In particular, if we take $A=1$, \n\\begin{equation}\nD^{\\left( s\\right) }1=T^{\\left( s,\\omega \\right) }. \\label{Ds1}\n\\end{equation}\n\n\\begin{remark}\nUp to the sign of $T$, (\\ref{DAsB}) and (\\ref{Dsderiv}) are precisely the\nexpressions obtained in \\cite{GrigorianOctobundle} for the covariant\nderivative with respect to the Levi-Civita connection of the product on the\noctonion bundle over a $7$-manifold. In that case, $T$ is precisely the\ntorsion of the $G_{2}$-structure that defines the octonion bundle. 
This\nprovides additional motivation for calling this quantity the torsion of $s$\nand $\\omega .$ In the case of $G_{2}$-structures, one usually takes the\ntorsion with respect to the preferred Levi-Civita connection; however, in\nthis more general setting there is no preferred connection, so $T^{\\left( s,\\omega \\right) }$ should also be taken to depend on the\nconnection.\n\\end{remark}\n\n\\begin{corollary}\nSuppose $\\mathbb{L}$ is an alternative loop, so that the associator is\nskew-symmetric. Suppose $\\xi ,\\eta :\\mathcal{P}\\longrightarrow \\mathfrak{l}$ and $s:\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ are equivariant. Then,\ndefining a modified exterior derivative $d^{\\left( s\\right) }$ on\nequivariant maps from $\\mathcal{P}$ to $\\mathfrak{l}$ via\n\\begin{equation}\nd^{\\left( s\\right) }\\xi =d^{\\mathcal{H}}\\xi +\\frac{1}{3}\\left[ \\xi\n,T^{\\left( s\\right) }\\right] ^{\\left( s\\right) }, \\label{dsbrack}\n\\end{equation}\nit satisfies \n\\begin{equation}\nd^{\\left( s\\right) }\\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) }=\\left[\nd^{\\left( s\\right) }\\xi ,\\eta \\right] ^{\\left( s\\right) }+\\left[ \\xi\n,d^{\\left( s\\right) }\\eta \\right] ^{\\left( s\\right) }.\n\\end{equation}\n\\end{corollary}\n\n\\begin{proof}\nIf $\\mathbb{L}$ is alternative, then the loop Jacobi identity (\\ref{Jac2})\nbecomes \n\\begin{equation}\n\\left[ \\xi ,\\left[ \\eta ,\\gamma \\right] ^{\\left( s\\right) }\\right] ^{\\left(\ns\\right) }+\\left[ \\eta ,\\left[ \\gamma ,\\xi \\right] ^{\\left( s\\right) }\\right]\n^{\\left( s\\right) }+\\left[ \\gamma ,\\left[ \\xi ,\\eta \\right] ^{\\left(\ns\\right) }\\right] ^{\\left( s\\right) }=6\\left[ \\xi ,\\eta ,\\gamma \\right]\n^{\\left( s\\right) }. \\label{Jacalt}\n\\end{equation}\nOn the other hand, (\\ref{dHbrack}) becomes \n\\begin{equation}\nd^{\\mathcal{H}}\\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) }=\\left[ d^{\\mathcal{H}}\\xi ,\\eta \\right] ^{\\left( s\\right) }+\\left[ \\xi ,d^{\\mathcal{H}}\\eta \\right] ^{\\left( s\\right) }+2\\left[ \\xi ,\\eta ,T^{\\left( s\\right) }\\right] ^{\\left( s\\right) }. 
\\label{dHbrackalt}\n\\end{equation}\nThus, using both (\\ref{Jacalt}) and (\\ref{dHbrackalt}), we obtain \n\\begin{eqnarray*}\nd^{\\left( s\\right) }\\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) } &=&d^{\\mathcal{H}}\\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) }+\\frac{1}{3}\\left[\n\\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) },T^{\\left( s\\right) }\\right]\n^{\\left( s\\right) } \\\\\n&=&\\left[ d^{\\left( s\\right) }\\xi ,\\eta \\right] ^{\\left( s\\right) }+\\left[\n\\xi ,d^{\\left( s\\right) }\\eta \\right] ^{\\left( s\\right) } \\\\\n&&-\\frac{1}{3}\\left[ \\left[ \\xi ,T^{\\left( s\\right) }\\right] ^{\\left(\ns\\right) },\\eta \\right] ^{\\left( s\\right) }-\\frac{1}{3}\\left[ \\xi ,\\left[\n\\eta ,T^{\\left( s\\right) }\\right] ^{\\left( s\\right) }\\right] ^{\\left(\ns\\right) } \\\\\n&&+\\frac{1}{3}\\left[ \\left[ \\xi ,\\eta \\right] ^{\\left( s\\right) },T^{\\left(\ns\\right) }\\right] ^{\\left( s\\right) }+2\\left[ \\xi ,\\eta ,T^{\\left( s\\right) }\\right] ^{\\left( s\\right) } \\\\\n&=&\\left[ d^{\\left( s\\right) }\\xi ,\\eta \\right] ^{\\left( s\\right) }+\\left[\n\\xi ,d^{\\left( s\\right) }\\eta \\right] ^{\\left( s\\right) }.\n\\end{eqnarray*}\n\\end{proof}\n\n\\begin{remark}\nIn the case of $G_{2}$-structures and octonions, the derivative (\\ref{dsbrack}) exactly replicates the modified covariant derivative, introduced in \\cite{DGKisoflow}, that\npreserves the $G_{2}$-structure.\n\\end{remark}\n\n\\begin{example}\nThe map $\\varphi _{s}$ is equivariant on $\\mathcal{P}$ and hence defines a\nsection of the associated bundle $\\mathcal{A}\\otimes \\func{ad}\\left( \n\\mathcal{P}\\right) ^{\\ast }$ over $M.$ If $\\mathbb{L}$ is the loop of unit\noctonions and $\\mathfrak{l}\\cong \\func{Im}\\mathbb{O},$ and we have a $G_{2}$-structure on $M,$ then $\\varphi _{s}$ corresponds to a section of $TM\\otimes \\Lambda ^{2}TM,$ which, up to a constant factor, is the corresponding $G_{2}$-structure $3$-form $\\varphi $ with indices raised\nusing the associated metric. The torsion $T$ of $\\varphi $ with respect to\nthe Levi-Civita connection on $TM$ is then a section of $TM\\otimes T^{\\ast\n}M.$ Noting that $\\mathfrak{so}\\left( 7\\right) $ acts on $\\mathbb{R}^{7}$ by\nmatrix multiplication, if we set $\\left( \\varphi _{s}\\right) ^{abc}=-\\frac{1}{4}\\varphi ^{abc}$ in local coordinates, then (\\ref{dhphis})\nprecisely recovers the well-known formula for $\\nabla \\varphi $ in terms of $T.$ Indeed, suppose $\\xi \\in \\Gamma \\left( \\Lambda ^{2}T^{\\ast }M\\right) $,\nthen in a local basis $\\left\\{ e_{a}\\right\\} $, for some fixed vector field $X$, we have \n\\begin{eqnarray*}\n\\left( \\nabla _{X}\\varphi _{s}\\right) \\left( \\xi \\right) &=&\\xi \\cdot T_{X}- \n\\left[ \\varphi _{s}\\left( \\xi \\right) ,T_{X}\\right] ^{\\left( s\\right) } \\\\\n&=&\\left( \\xi _{\\ b}^{a}T_{X}^{b}+\\frac{1}{2}\\varphi _{\\ bc}^{a}\\varphi\n^{bde}\\xi _{de}T_{X}^{c}\\right) e_{a} \\\\\n&=&\\left( \\xi _{\\ b}^{a}T_{X}^{b}-\\frac{1}{2}\\left( \\psi _{\\ c}^{a\\ de}+g^{ad}g_{c}^{\\ e}-g^{ae}g_{c}^{\\ d}\\right) \\xi _{de}T_{X}^{c}\\right)\ne_{a} \\\\\n&=&\\frac{1}{2}T_{X}^{c}\\psi _{c\\ }^{\\ ade}\\xi _{de}e_{a},\n\\end{eqnarray*}\nwhere $\\psi =\\ast \\varphi $. 
Hence, indeed,
\begin{equation}
\nabla _{X}\varphi =-2T_{X}\lrcorner \psi ,  \label{nablaXphi}
\end{equation}
which is exactly as in \cite{GrigorianOctobundle}, taking into account that the torsion here differs by a sign from \cite{GrigorianOctobundle}. Here we also used the convention that $\left[ X,Y\right] =2X\lrcorner Y\lrcorner \varphi $ and also contraction identities for $\varphi $ \cite{GrigorianG2Torsion1,karigiannis-2005-57}. This is also consistent with the expression (\ref{dHbrack}) for the covariant derivative of the bracket. Indeed, in the case of an alternative loop, (\ref{dHbrackalt}) shows that the covariant derivative of the bracket function $b_{s}$ is given by
\begin{equation}
d^{\mathcal{H}}b_{s}=2\left[ \cdot ,\cdot ,T^{\left( s\,,\omega \right) }\right] ^{\left( s\right) }.  \label{dHbrack1a}
\end{equation}
Taking $b_{s}=2\varphi $ and $\left[ \cdot ,\cdot ,\cdot \right] ^{\left( s\right) }$ given by $\left( \left[ X,Y,Z\right] ^{\left( s\right) }\right) ^{a}=2\psi _{\ bcd}^{a}X^{b}Y^{c}Z^{d},$ as in \cite{GrigorianOctobundle}, we again recover (\ref{nablaXphi}).
\end{example}

\begin{example}
Suppose $\mathcal{P}$ is a principal $U\left( n\right) $-bundle and $\mathbb{L}\cong U\left( 1\right) $, the unit complex numbers, as in Example \ref{exCx2}. Then, (\ref{dhphis}) shows that $d^{\mathcal{H}}\varphi _{s}=0$. If $V$ is an $n$-dimensional complex vector space with the standard action of $U\left( n\right) $ on it and $\mathcal{V=P\times }_{U\left( n\right) }V$ is the associated vector bundle to $\mathcal{P}$ with fiber $V$, then $\varphi _{s}$ defines a K\"{a}hler form on $\mathcal{V}.$
\end{example}
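To make the last example more concrete, the following sketch computes $\varphi _{s}$ explicitly. The precise $U\left( n\right) $-action on $\mathbb{L}\cong U\left( 1\right) $ is the one fixed in Example \ref{exCx2}; the computation below assumes it is given by the determinant, $h\left( z\right) =\det \left( h\right) z$, and should be read as an illustration under that assumption. For $\xi \in \mathfrak{u}\left( n\right) $,
\begin{equation*}
\varphi _{s}\left( \xi \right) =\left. \frac{d}{dt}\faktor{\left( \exp \left( t\xi \right) \left( s\right) \right) }{s}\right\vert _{t=0}=\left. \frac{d}{dt}e^{t\func{tr}\xi }\right\vert _{t=0}=\func{tr}\xi \in \func{Im}\mathbb{C},
\end{equation*}
independently of $s$, which is consistent with $d^{\mathcal{H}}\varphi _{s}=0$. In particular, $\hat{F}^{\left( s,\omega \right) }=\func{tr}F^{\left( \omega \right) }$, which on the unitary frame bundle of a K\"{a}hler manifold is, up to sign and normalization conventions, the Ricci form; compare Example \ref{exCx4} below.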
\begin{example}
Suppose $\mathcal{P}$ is a principal $Sp\left( n\right) Sp\left( 1\right) $-bundle and $\mathbb{L}\cong Sp\left( 1\right) ,$ the unit quaternions, as in Example \ref{exQuat2}. Then, (\ref{dhphis}) shows that $d^{\mathcal{H}}\varphi _{s}=-\left[ \varphi _{s},T^{\left( s\,,\omega \right) }\right] _{\func{Im}\mathbb{H}}.$ If $V$ is an $n$-dimensional quaternionic vector space with the standard action of $Sp\left( n\right) Sp\left( 1\right) $ on it and $\mathcal{V=P\times }_{Sp\left( n\right) Sp\left( 1\right) }V$ is the associated vector bundle to $\mathcal{P}$ with fiber $V$, then $\varphi _{s}$ defines a 2-form on $\mathcal{V}$ with values in $\func{Im}\mathbb{H}$ (since the bundle $\mathcal{A}$ is trivial). So this gives rise to $3$ linearly independent $2$-forms $\omega _{1},\omega _{2},\omega _{3}.$ If $T^{\left( s,\omega \right) }=0$, then this reduces to a HyperK\"{a}hler structure on $\mathcal{V}.$ It is an interesting question whether the case $T^{\left( s,\omega \right) }\neq 0$ is related to \textquotedblleft HyperK\"{a}hler with torsion\textquotedblright\ geometry \cite{GrantcharovPoonHKT,VerbitskyHKT}.
\end{example}

\subsection{Curvature}

\label{sectCurv}Recall that the curvature $F\in \Omega ^{2}\left( \mathcal{P},\mathfrak{p}\right) $ of the connection $\omega $ on $\mathcal{P}$ is given by
\begin{equation}
F^{\left( \omega \right) }=d^{\mathcal{H}}\omega =d\omega \circ \func{proj}_{\mathcal{H}},  \label{curvom}
\end{equation}
so that, for $X,Y\in \Gamma \left( T\mathcal{P}\right) $,
\begin{equation}
F^{\left( \omega \right) }\left( X,Y\right) =d\omega \left( X^{\mathcal{H}},Y^{\mathcal{H}}\right) =-\omega \left( \left[ X^{\mathcal{H}},Y^{\mathcal{H}}\right] \right) ,  \label{curvom2}
\end{equation}
where $X^{\mathcal{H}},Y^{\mathcal{H}}$ are the projections of $X,Y$ to $\mathcal{HP}.$

Similarly to $\hat{\omega}$, define $\hat{F}^{\left( s,\omega \right) }\in \Omega ^{2}\left( \mathcal{P},\mathfrak{l}\right) $ to be the projection of the curvature $F^{\left( \omega \right) }$ to $\mathfrak{l}$ with respect to $s$, such that for any $X_{p},Y_{p}\in T_{p}\mathcal{P},$
\begin{eqnarray}
\hat{F}^{\left( s,\omega \right) }\left( X_{p},Y_{p}\right) &=&\varphi _{s}\left( F^{\left( \omega \right) }\right) \left( X_{p},Y_{p}\right)  \notag \\
&=&\left. \frac{d}{dt}\faktor{\left( \exp \left( tF^{\left( \omega \right) }\left( X_{p},Y_{p}\right) \right) \left( s_{p}\right) \right) }{s_{p}}\right\vert _{t=0}.  \label{Fhat}
\end{eqnarray}

We easily see that
\begin{equation}
d^{\mathcal{H}}\hat{\omega}^{\left( s\right) }=\hat{F}^{\left( s,\omega \right) }.  \label{dHom}
\end{equation}
Indeed,
\begin{equation*}
d^{\mathcal{H}}\hat{\omega}^{\left( s\right) }=d^{\mathcal{H}}\left( \varphi _{s}\left( \omega \right) \right) =d^{\mathcal{H}}\varphi _{s}\wedge \left( \omega \circ \func{proj}_{\mathcal{H}}\right) +\varphi _{s}\left( d^{\mathcal{H}}\omega \right) =\hat{F}^{\left( s,\omega \right) },
\end{equation*}
where we have used the fact that $\omega $ is vertical.

We then have the following structure equations.

\begin{theorem}
\label{thmFTstruct}$\hat{F}^{\left( s,\omega \right) }$ and $T^{\left( s,\omega \right) }$ satisfy the following structure equation
\begin{equation}
\hat{F}^{\left( s,\omega \right) }=d^{\mathcal{H}}T^{\left( s,\omega \right) }-\frac{1}{2}\left[ T^{\left( s,\omega \right) },T^{\left( s,\omega \right) }\right] ^{\left( s\right) },  \label{dHT}
\end{equation}
where a wedge product between the $1$-forms $T^{\left( s,\omega \right) }$ is implied.
Equivalently, (\\ref{dHT}) can be written as \n\\begin{equation}\nd\\hat{\\omega}^{\\left( s\\right) }+\\frac{1}{2}\\left[ \\hat{\\omega}^{\\left(\ns\\right) },\\hat{\\omega}^{\\left( s\\right) }\\right] ^{\\left( s\\right) }=\\hat{F\n^{\\left( s,\\omega \\right) }-d^{\\mathcal{H}}\\varphi _{s}\\wedge \\omega ,\n\\label{dwstruct}\n\\end{equation\nwhere $\\left( d^{\\mathcal{H}}\\varphi _{s}\\wedge \\omega \\right) \\left(\nX,Y\\right) =\\left( d_{X}^{\\mathcal{H}}\\varphi _{s}\\right) \\left( \\omega\n\\left( Y\\right) \\right) -\\left( d_{Y}^{\\mathcal{H}}\\varphi _{s}\\right)\n\\left( \\omega \\left( X\\right) \\right) $ for any vector fields $X$ and $Y$ on \n$\\mathcal{P}.$\n\\end{theorem}\n\n\\begin{proof}\nUsing (\\ref{stheta}), we have \n\\begin{eqnarray}\nd^{\\mathcal{H}}T^{\\left( s,\\omega \\right) } &=&dT^{\\left( s,\\omega \\right)\n}\\circ \\func{proj}_{\\mathcal{H}} \\notag \\\\\n&=&\\left( d\\theta _{s}+d\\hat{\\omega}^{\\left( s\\right) }\\right) \\circ \\func\nproj}_{\\mathcal{H}}. \\label{Torsion1f3}\n\\end{eqnarray\nNow consider the first term. Let $X_{p},Y_{p}\\in T_{p}\\mathcal{P}$, then \n\\begin{eqnarray}\nd\\theta _{s}\\left( X_{p}^{\\mathcal{H}},Y_{p}^{\\mathcal{H}}\\right) &=&\\left(\nd\\theta \\right) _{s_{p}}\\left( s_{\\ast }X_{p}^{\\mathcal{H}},s_{\\ast }Y_{p}^\n\\mathcal{H}}\\right) \\notag \\\\\n&=&\\left( d\\theta \\right) _{s_{p}}\\left( \\mathring{D}_{X_{p}}s,\\mathring{D\n_{Y_{P}}s\\right) \\\\\n&=&\\left[ \\theta \\left( \\mathring{D}_{X_{p}}s\\right) ,\\theta \\left( \n\\mathring{D}_{Y_{P}}s\\right) \\right] ^{\\left( s_{p}\\right) } \\notag \\\\\n&=&\\left[ T^{\\left( s,\\omega \\right) }\\left( X_{p}\\right) ,T^{\\left(\ns,\\omega \\right) }\\left( Y_{p}\\right) \\right] ^{\\left( s_{p}\\right) },\n\\label{Torsion1f3a}\n\\end{eqnarray\nwhere we have used the Maurer-Cartan structural equation for loops (\\re\n{MCequation1}). 
Using (\\ref{dHom}) for the second term, overall, we obtain \n\\ref{dHT}).\n\nFrom the Maurer-Cartan equation (\\ref{MCequation1}), \n\\begin{equation*}\nd\\theta _{s}-\\frac{1}{2}\\left[ \\theta _{s},\\theta _{s}\\right] ^{\\left(\ns\\right) }=0.\n\\end{equation*\nWe also have from (\\ref{stheta}) \n\\begin{equation*}\n\\left[ \\theta _{s},\\theta _{s}\\right] ^{\\left( s\\right) }=\\left[ T^{\\left(\ns,\\omega \\right) },T^{\\left( s,\\omega \\right) }\\right] ^{\\left( s\\right) }-\n\\left[ \\hat{\\omega}^{\\left( s\\right) },T^{\\left( s,\\omega \\right) }\\right]\n^{\\left( s\\right) }+\\left[ \\hat{\\omega}^{\\left( s\\right) },\\hat{\\omega\n^{\\left( s\\right) }\\right] ^{\\left( s\\right) }.\n\\end{equation*\nHenc\n\\begin{equation*}\nd\\theta _{s}=dT^{\\left( s,\\omega \\right) }-d\\hat{\\omega}^{\\left( s\\right) }\n\\frac{1}{2}\\left[ T^{\\left( s,\\omega \\right) },T^{\\left( s,\\omega \\right) \n\\right] ^{\\left( s\\right) }-\\left[ \\hat{\\omega}^{\\left( s\\right) },T^{\\left(\ns,\\omega \\right) }\\right] ^{\\left( s\\right) }+\\frac{1}{2}\\left[ \\hat{\\omega\n^{\\left( s\\right) },\\hat{\\omega}^{\\left( s\\right) }\\right] ^{\\left( s\\right)\n}.\n\\end{equation*\nNoting that \n\\begin{equation*}\ndT^{\\left( s,\\omega \\right) }=d^{\\mathcal{H}}T^{\\left( s,\\omega \\right)\n}-\\omega \\dot{\\wedge}T^{\\left( s,\\omega \\right) }\n\\end{equation*\nwe find \n\\begin{eqnarray*}\nd\\hat{\\omega}^{\\left( s\\right) }+\\frac{1}{2}\\left[ \\hat{\\omega}^{\\left(\ns\\right) },\\hat{\\omega}^{\\left( s\\right) }\\right] ^{\\left( s\\right) } &=&d^\n\\mathcal{H}}T^{\\left( s,\\omega \\right) }-\\omega \\dot{\\wedge}T^{\\left(\ns,\\omega \\right) } \\\\\n&&-\\frac{1}{2}\\left[ T^{\\left( s,\\omega \\right) },T^{\\left( s,\\omega \\right)\n}\\right] ^{\\left( s\\right) }+\\left[ \\hat{\\omega}^{\\left( s\\right)\n},T^{\\left( s,\\omega \\right) }\\right] ^{\\left( s\\right) }\n\\end{eqnarray*\nand then using (\\ref{dHT}) and (\\ref{dhphis}) we obtain (\\ref{dwstruct}).\n\\end{proof}\n\n\\begin{corollary}[Bianchi identity]\nThe quantity $\\hat{F}^{\\left( s,\\omega \\right) }$ satisfies the equation \n\\begin{eqnarray}\nd^{\\mathcal{H}}\\hat{F}^{\\left( s,\\omega \\right) } &=&d^{\\mathcal{H}}\\varphi\n_{s}\\wedge F \\notag \\\\\n&=&F\\dot{\\wedge}T^{\\left( s,\\omega \\right) }-\\left[ \\hat{F}^{\\left( s,\\omega\n\\right) },T^{\\left( s,\\omega \\right) }\\right] ^{\\left( s\\right) }\n\\label{Bianchi}\n\\end{eqnarray\nwhere $\\cdot $ denotes the representation of $\\mathfrak{p}$ on $\\mathfrak{l}$\n\\end{corollary}\n\n\\begin{proof}\nUsing the definition (\\ref{Fhat}) of $\\hat{F}^{\\left( s,\\omega \\right) }$,\nwe have \n\\begin{equation*}\nd^{\\mathcal{H}}\\hat{F}^{\\left( s,\\omega \\right) }=d^{\\mathcal{H}}\\left(\n\\varphi _{s}\\left( F\\right) \\right) =d^{\\mathcal{H}}\\varphi _{s}\\wedge\nF+\\varphi _{s}\\left( d^{\\mathcal{H}}F\\right) ,\n\\end{equation*\nhowever using the standard Bianchi identity, $d^{\\mathcal{H}}F=0,$ and (\\re\n{dhphis}), we obtain (\\ref{Bianchi}).\n\\end{proof}\n\n\\begin{example}\nThe equation (\\ref{dHT}) is the precise analog of what is known as the\n\\textquotedblleft $G_{2}$-structure Bianchi identity\\textquotedblright\\ \\cit\n{GrigorianOctobundle,karigiannis-2007} (not to be confused with the Bianchi\nidentity (\\ref{Bianchi})). 
In that case, $\\hat{F}$ corresponds precisely to\nthe quantity $\\frac{1}{4}\\pi _{7}\\func{Riem}$, which is the projection of\nthe endomorphism part of $\\func{Riem}$ to the $7$-dimensional representation\nof $G_{2}.$ In local coordinates, it is given by $\\frac{1}{4}\\func{Riem\n_{abcd}\\varphi ^{cde}$.\n\\end{example}\n\n\\begin{example}\n\\label{exCx4}In the complex case, with $\\mathbb{L=}U\\mathbb{C}$ and \n\\mathcal{P}$ a principal $U\\left( n\\right) $-bundle, (\\ref{dHT}) shows that \n\\hat{F}^{\\left( s,\\omega \\right) }=dT^{\\left( s,\\omega \\right) }.$ Here $d^\n\\mathcal{H}}=d$ on $\\mathfrak{l}$-valued forms because the action of \n\\mathfrak{p}_{n}$ on $\\mathfrak{l}$ is trivial (as in Example \\ref{exCx2}).\nIf $s$ is a global section, then this shows that $\\hat{F}$ is an exact $2\n-form - and so the class $\\left[ \\hat{F}\\right] =0$. This is consistent with\na vanishing first Chern class which is a necessary condition for existence\nof a global $s$. On the other hand, if we suppose that $s$ is only a local\nsection, so that $T^{\\left( s,\\omega \\right) }$ is a local $1$-form$,$ then\nwe only get that $\\hat{F}^{\\left( s,\\omega \\right) }$ is closed, so in this\ncase it may define a non-trivial first Chern class. If $\\mathcal{P}$ is the\nunitary frame bundle over a complex manifold, it defines a K\\\"{a}hler\nmetric, and then $\\hat{F}$ precisely corresponds to the Ricci curvature, so\nthat the Ricci-flat condition for reduction to a Calabi-Yau manifold is \n\\hat{F}=0.$\n\\end{example}\n\nThe equation (\\ref{dwstruct}) is interesting because this is an analog of\nthe structure equation for the connection $1$-form $\\omega $ on $\\mathcal{P\n. $ However, in the case of $\\omega $, the quantity $d\\omega -\\frac{1}{2\n\\left[ \\omega ,\\omega \\right] $ is horizontal. However, for $\\hat{\\omega\n^{\\left( s\\right) }$, $\\hat{F}^{\\left( s,\\omega \\right) }$ gives the\nhorizontal component, while the remaining terms give mixed vertical and\nhorizontal components. The fully vertical components vanish. 
We also see that $\hat{\omega}^{\left( s\right) }$ satisfies the loop Maurer-Cartan equation if and only if $\hat{F}^{\left( s,\omega \right) }=0$ and $d^{\mathcal{H}}\varphi _{s}=0.$ In the $G_{2}$ case, $\nabla \varphi =0$ of course is equivalent to $T=0$ and hence implies $\frac{1}{4}\pi _{7}\func{Riem}=0.$ More generally, however, this need not be the case.

\begin{lemma}
\label{lemTcond}Suppose $\mathbb{L}$ is a left-alternative loop and suppose $-\hat{\omega}^{\left( s\right) }$ satisfies the Maurer-Cartan equation
\begin{equation}
d\hat{\omega}^{\left( s\right) }+\frac{1}{2}\left[ \hat{\omega}^{\left( s\right) },\hat{\omega}^{\left( s\right) }\right] ^{\left( s\right) }=0,  \label{omegahatMC}
\end{equation}
then for any $\alpha ,\beta \in \mathfrak{q}^{\left( s_{p}\right) }\cong T_{1}\mathcal{C}^{R}\left( \mathbb{L},\circ _{s_{p}}\right) $,
\begin{equation}
\left[ \alpha ,\beta ,T_{p}^{\left( s,\omega \right) }\right] ^{\left( s_{p}\right) }=0.  \label{Trestrict}
\end{equation}
\end{lemma}

\begin{proof}
Taking the exterior derivative of (\ref{omegahatMC}) and applying (\ref{alphastructeq}), we find $\hat{\omega}^{\left( s\right) }$ satisfies
\begin{equation}
0=\left[ \hat{\omega}^{\left( s\right) },\hat{\omega}^{\left( s\right) },\theta _{s}+\hat{\omega}^{\left( s\right) }\right] ^{\left( s\right) }=\left[ \hat{\omega}^{\left( s\right) },\hat{\omega}^{\left( s\right) },T^{\left( s,\omega \right) }\right] ^{\left( s\right) }.  \label{omegahatassoc}
\end{equation}
Since $\mathbb{L}$ is left-alternative, we know that the $\mathbb{L}$-algebra associator is skew in the first two entries, so given vector fields $X,Y,Z$ on $\mathcal{P}$, we have
\begin{eqnarray}
0 &=&\left[ \hat{\omega}^{\left( s\right) }\left( X\right) ,\hat{\omega}^{\left( s\right) }\left( Y\right) ,T^{\left( s,\omega \right) }\left( Z\right) \right] ^{\left( s\right) }+\left[ \hat{\omega}^{\left( s\right) }\left( Y\right) ,\hat{\omega}^{\left( s\right) }\left( Z\right) ,T^{\left( s,\omega \right) }\left( X\right) \right] ^{\left( s\right) }  \notag \\
&&+\left[ \hat{\omega}^{\left( s\right) }\left( Z\right) ,\hat{\omega}^{\left( s\right) }\left( X\right) ,T^{\left( s,\omega \right) }\left( Y\right) \right] ^{\left( s\right) }.  \label{omhatassoc2}
\end{eqnarray}
Let $\xi \in \mathfrak{p}$ and let $X=\sigma \left( \xi \right) $ be a vertical vector field on $\mathcal{P}$, then
\begin{equation*}
\hat{\omega}^{\left( s\right) }\left( X\right) =\varphi _{s}\left( \omega \left( X\right) \right) =\varphi _{s}\left( \xi \right) .
\end{equation*}
In (\ref{omhatassoc2}), we take $X=\sigma \left( \xi \right) $ and $Y=\sigma \left( \eta \right) $ to be vertical vector fields and $Z=Z^{h}$ a horizontal vector field.
Then since $\\hat{\\omega}^{\\left( s\\right) }$ is\nvertical and $T^{\\left( s,\\omega \\right) }$ is horizontal, we find that for\nany $\\xi ,\\eta \\in \\mathfrak{p},$ \n\\begin{equation*}\n\\left[ \\varphi _{s}\\left( \\xi \\right) ,\\varphi _{s}\\left( \\eta \\right)\n,T^{\\left( s,\\omega \\right) }\\left( Z\\right) \\right] ^{\\left( s\\right) }=0.\n\\end{equation*\nWe know that for each $p\\in \\mathcal{P}$, the map $\\varphi _{s_{p}}$ is\nsurjective onto $\\mathfrak{q}^{\\left( s_{p}\\right) }\\subset \\mathfrak{l\n^{\\left( s_{p}\\right) }$ and thus (\\ref{Trestrict}) holds.\n\\end{proof}\n\n\\begin{theorem}\n\\label{thmTNucl}Suppose $\\mathcal{P}$ is connected and simply-connected and \n\\mathbb{L}$ a smooth loop such that\n\n\\begin{enumerate}\n\\item $\\mathfrak{l}$ is a left-alternative algebra (i.e. the associator on \n\\mathfrak{l}$ is skew-symmetric in the first two entries),\n\n\\item $\\dim \\left( \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) \\right) =\\dim\n\\left( \\mathcal{N}^{R}\\left( \\mathfrak{l}\\right) \\right) .$\n\\end{enumerate}\n\nMoreover, suppose $s_{p}\\in \\mathcal{C}^{R}\\left( \\mathbb{L}\\right) $ for\nevery $p\\in \\mathcal{P}$, then $\\hat{\\omega}^{\\left( s\\right) }$ satisfies\nthe Maurer-Cartan equation (\\ref{omegahatMC}) if and only if there exists a\nmap $f:\\mathcal{P}\\longrightarrow \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) $\nsuch that \n\\begin{equation}\nT^{\\left( s,\\omega \\right) }=-\\left( \\func{Ad}_{s}\\right) _{\\ast }\\theta\n_{f}. \\label{Tsthetaf}\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nSince $s$ has values in $\\mathcal{C}^{R}\\left( \\mathbb{L}\\right) ,$ using\nLemma \\ref{lemTcond}, we see that the conditions of Corollary \\re\n{corLoopCartan} are satisfied, and hence there exists a map $f:\\mathcal{P\n\\longrightarrow \\mathcal{N}^{R}\\left( \\mathbb{L}\\right) $ such that \n\\begin{eqnarray*}\n-\\hat{\\omega}^{\\left( s\\right) } &=&\\theta _{sf} \\\\\n&=&\\theta _{s}+\\left( \\func{Ad}_{s}\\right) _{\\ast }\\theta _{f}.\n\\end{eqnarray*\nFrom (\\ref{stheta}), \n\\begin{equation*}\nT^{\\left( s,\\omega \\right) }=\\theta _{s}+\\hat{\\omega}^{\\left( s\\right)\n}=-\\left( \\func{Ad}_{s}\\right) _{\\ast }\\theta _{f}.\n\\end{equation*\nConversely, suppose (\\ref{Tsthetaf}) holds for some right nucleus-valued map \n$f$. Then, clearly $\\hat{\\omega}^{\\left( s\\right) }=-\\theta _{sf}$, and thus \n$-\\hat{\\omega}^{\\left( s\\right) }$ satisfies (\\ref{omegahatMC}).\n\\end{proof}\n\n\\begin{remark}\nTheorem \\ref{thmTNucl} shows that if $\\mathbb{L}$ has a sufficiently large\\\nnucleus, then $\\hat{F}^{\\left( s,\\omega \\right) }=0$ and $d^{\\mathcal{H\n}\\varphi _{s}=0$ do not necessarily imply that $T^{\\left( s,\\omega \\right)\n}=0$. In the case of unit octonions, the nucleus is just $\\left\\{ \\pm\n1\\right\\} $, so any nucleus-valued map is constant on connected components,\nhence in this case if $\\hat{\\omega}^{\\left( s\\right) }$ satisfies (\\re\n{omegahatMC}), then $T^{\\left( s,\\omega \\right) }=0.$\n\\end{remark}\n\n\\subsection{Deformations}\n\n\\label{sectDeform}The torsion of a loop structure is determined by the\nequivariant $\\mathbb{\\mathring{L}}$-valued map $s$ and the connection \n\\omega $ on $\\mathcal{P}.$ There are several possible deformations of $s$\nand $\\omega $. 
\subsection{Deformations}

\label{sectDeform}The torsion of a loop structure is determined by the equivariant $\mathbb{\mathring{L}}$-valued map $s$ and the connection $\omega $ on $\mathcal{P}.$ There are several possible deformations of $s$ and $\omega $. In particular, $s$ may be deformed by the action of $\Psi $ or by the left multiplication action of $\mathbb{L}.$ The connection $\omega $ may be deformed by the affine action of $\Omega _{basic}^{1}\left( \mathcal{P},\mathfrak{p}\right) $ or by gauge transformations in $\Psi .$ Moreover, of course, these deformations may be combined or considered infinitesimally. Since $T^{\left( s,\omega \right) }$ is the horizontal part of $\theta _{s}$, when considering deformations of $s$ it is sufficient to consider what happens to $\theta _{s}$ and then take the horizontal component.

Recall that the space of connections on $\mathcal{P}$ is an affine space modelled on equivariant horizontal (i.e. basic) $\mathfrak{p}$-valued $1$-forms on $\mathcal{P}.$ Thus, any connection is of the form $\tilde{\omega}=\omega +A$ for some basic $\mathfrak{p}$-valued $1$-form $A$. Then,
\begin{equation}
T^{\left( s,\tilde{\omega}\right) }=\theta _{s}+\varphi _{s}\left( \tilde{\omega}\right) =T^{\left( s,\omega \right) }+\hat{A},  \label{wtild}
\end{equation}
where $\hat{A}=\varphi _{s}\left( A\right) $. Thus, we can set $T^{\left( s,\tilde{\omega}\right) }=0$ by choosing $A$ such that $\hat{A}=-T^{\left( s,\omega \right) }$ if and only if for each $p\in \mathcal{P}$, $T_{p}^{\left( s,\omega \right) }\in \mathfrak{q}^{\left( s_{p}\right) }=\varphi _{s_{p}}\left( \mathfrak{p}\right) $. Since $\hat{\omega}$ is always in the image of $\varphi _{s}$, we conclude there exists a connection $\tilde{\omega}$ for which $T^{\left( s,\tilde{\omega}\right) }=0$ if and only if $\left. \theta _{s}\right\vert _{p}\in \mathfrak{q}^{\left( s_{p}\right) }$ for each $p$. In that case, $\theta _{s}=-\varphi _{s}\left( \tilde{\omega}\right) .$ From Theorem \ref{thmThetaPhi}, we then see that $\tilde{\omega}$ has curvature with values in $\mathfrak{h}_{s}.$

Recall that if $\phi :\mathcal{P}\longrightarrow \mathcal{P}$ is a gauge transformation, then there exists an $\func{Ad}_{\Psi }$-equivariant map $u:\mathcal{P}\longrightarrow \Psi $ such that for each $p\in \mathcal{P}$, $\phi \left( p\right) =pu_{p}$. Each such map then corresponds to a section of the associated bundle $\func{Ad}\left( \mathcal{P}\right) .$ The gauge-transformed connection $1$-form is then $\omega ^{\phi }=u^{\ast }\omega $, where
\begin{equation}
u^{\ast }\omega =\left( \func{Ad}_{u^{-1}}\right) _{\ast }\omega +u^{\ast }\theta _{\Psi },  \label{omgauge}
\end{equation}
where $\theta _{\Psi }$ is the \emph{left}-invariant Maurer-Cartan form on $\Psi $. Then,
\begin{eqnarray}
d^{u^{\ast }\mathcal{H}}s &=&\left( l_{u}^{-1}\right) _{\ast }d^{\mathcal{H}}\left( l_{u}s\right)  \notag \\
&=&d^{\mathcal{H}}s+\left( u^{\ast }\theta _{\Psi }\right) ^{\mathcal{H}}\cdot s,  \label{dHphi}
\end{eqnarray}
where at each $p\in \mathcal{P}$,
\begin{equation*}
\left.
\\left( u^{\\ast }\\theta _{\\Psi }\\right) ^{\\mathcal{H}}\\right\\vert\n_{p}=\\left( l_{u_{p}}\\right) _{\\ast }^{-1}\\circ \\left( d^{\\mathcal{H\n}u\\right) _{p}\\mathfrak{.}\n\\end{equation*\nHence, \n\\begin{equation}\nT^{\\left( s,u^{\\ast }\\omega \\right) }=\\left( R_{s}^{-1}\\right) _{\\ast\n}d^{u^{\\ast }\\mathcal{H}}s=T^{\\left( s,\\omega \\right) }+\\varphi _{s}\\left(\n\\left( u^{\\ast }\\theta _{\\Psi }\\right) ^{\\mathcal{H}}\\right) .\n\\label{Tsgauge}\n\\end{equation\nConsider the curvature $F^{u^{\\ast }\\omega }$ of the connection $u^{\\ast\n}\\omega $. It is well-known that it is given b\n\\begin{equation}\nF^{u^{\\ast }\\omega }=\\left( \\func{Ad}_{u^{-1}}\\right) _{\\ast }F.\n\\end{equation\nFrom Theorem \\ref{lemGammahatsurj}, we then have \n\\begin{equation}\n\\hat{F}^{\\left( s,u^{\\ast }\\omega \\right) }=\\varphi _{s}\\left( \\left( \\func\nAd}_{u^{-1}}\\right) _{\\ast }F\\right) =\\left( u^{-1}\\right) _{\\ast }^{\\prime \n\\hat{F}^{\\left( u\\left( s\\right) ,\\omega \\right) }.\n\\end{equation\nOn the other hand, using (\\ref{dHphi}) and (\\ref{Dsderiv}) we have \n\\begin{eqnarray*}\nT^{\\left( s,u^{\\ast }\\omega \\right) } &=&\\left( R_{s}^{-1}\\right) _{\\ast\n}\\left( u^{\\ast }\\mathring{D}\\right) \\left( s\\right) \\\\\n&=&\\left( R_{s}^{-1}\\right) _{\\ast }\\left( u^{-1}\\right) _{\\ast }\\mathring{D\n\\left( u\\left( s\\right) \\right) \\\\\n&=&\\left( u^{-1}\\right) _{\\ast }^{\\prime }\\left( R_{u\\left( s\\right)\n}^{-1}\\right) _{\\ast }\\mathring{D}\\left( u\\left( s\\right) \\right) \\\\\n&=&\\left( u^{-1}\\right) _{\\ast }^{\\prime }T^{\\left( u\\left( s\\right) ,\\omega\n\\right) }.\n\\end{eqnarray*\nSummarizing, we have the following.\n\n\\begin{theorem}\nSuppose $s:\\mathcal{P}\\longrightarrow \\mathbb{\\mathring{L}}$ and $u:\\mathcal\nP}\\longrightarrow \\Psi $ are equivariant smooth maps. Then, \n\\begin{subequations}\n\\begin{eqnarray}\nT^{\\left( s,u^{\\ast }\\omega \\right) } &=&T^{\\left( s,\\omega \\right)\n}+\\varphi _{s}\\left( \\left( u^{\\ast }\\theta _{\\Psi }\\right) ^{\\mathcal{H\n}\\right) \\label{Tsustom} \\\\\n&=&\\left( u^{-1}\\right) _{\\ast }^{\\prime }T^{\\left( u\\left( s\\right) ,\\omega\n\\right) } \\notag \\\\\n\\hat{F}^{\\left( s,u^{\\ast }\\omega \\right) } &=&\\left( u^{-1}\\right) _{\\ast\n}^{\\prime }\\hat{F}^{\\left( u\\left( s\\right) ,\\omega \\right) }.\n\\end{eqnarray\n\\end{subequations\nIn particular, \n\\begin{equation}\nT^{\\left( u^{-1}\\left( s\\right) ,u^{\\ast }\\omega \\right) }=\\left( u^{\\prime\n}\\right) _{\\ast }^{-1}T^{\\left( s,\\omega \\right) }\\ \\ \\RIfM@\\expandafter\\text@\\else\\expandafter\\mbox\\fi{and }\\hat{F\n^{\\left( u^{-1}\\left( s\\right) ,u^{\\ast }\\omega \\right) }=\\left(\nu^{-1}\\right) _{\\ast }^{\\prime }\\hat{F}^{\\left( s,\\omega \\right) }.\n\\label{Tuom}\n\\end{equation}\n\\end{theorem}\n\nThis shows that both $T$ and $\\hat{F}$ transform equivariantly with respect\nto a simultaneous transformation of $s$ and $\\omega $. 
In particular, if we have a Riemannian metric on the base manifold $M$ and a $\Psi $-covariant metric on $\mathfrak{l},$ then with respect to the induced metric on $T^{\ast }\mathcal{P}\otimes \mathfrak{l}$, the quantities $\left\vert T\right\vert ^{2}$ and $\left\vert \hat{F}\right\vert ^{2}$ are invariant with respect to the transformation $\left( s,\omega \right) \mapsto \left( u^{-1}\left( s\right) ,u^{\ast }\omega \right) .$ In the case of $G_{2}$-structures, the key question regards the holonomy of the Levi-Civita connection, so in this general setting, if we are interested in the holonomy of $\omega $, it makes sense to consider individual transformations $s\mapsto As$ for some equivariant $A\in C^{\infty }\left( \mathcal{P},\mathbb{L}\right) $ and $\omega \mapsto u^{\ast }\omega $, because each of these transformations leaves the holonomy group unchanged. We also see that every transformation $s\mapsto u\left( s\right) $ for some equivariant $u\in C^{\infty }\left( \mathcal{P},\Psi \right) $ corresponds to a transformation $s\mapsto As,$ where $A=h\left( s\right) /s$. From (\ref{PsAutoriso}), this is precisely the companion of the corresponding map $u_{s}\in \Psi \left( \mathbb{L},\circ _{s}\right) .$ Moreover, this correspondence is one-to-one if and only if $\mathbb{L}$ is a $G$-loop. It is easy to see that $A$ is then an equivariant $\mathbb{L}$-valued map. Thus, considering transformations $s\mapsto As$ is more general in some situations.

\begin{theorem}
Suppose $A:\mathcal{P}\longrightarrow \mathbb{L}$ and $s:\mathcal{P}\longrightarrow \mathbb{\mathring{L}}$. Then,
\begin{subequations}
\begin{eqnarray}
T^{\left( As,\omega \right) } &=&\left( R_{A}^{\left( s\right) }\right) _{\ast }^{-1}DA+\left( \func{Ad}_{A}^{\left( s\right) }\right) _{\ast }T^{\left( s,\omega \right) }=\left( R_{A}^{\left( s\right) }\right) _{\ast }^{-1}D^{\left( s\right) }A  \label{Trom} \\
\hat{F}^{\left( As,\omega \right) } &=&\left( R_{A}^{\left( s\right) }\right) _{\ast }^{-1}\left( F^{\prime }\cdot A\right) +\left( \func{Ad}_{A}^{\left( s\right) }\right) _{\ast }\hat{F}^{\left( s,\omega \right) },  \label{From}
\end{eqnarray}
\end{subequations}
where $F^{\prime }\cdot A$ denotes the infinitesimal action of $\mathfrak{p}$ on $\mathbb{L}.$
\end{theorem}

\begin{proof}
Recall from (\ref{thetafs2}) that
\begin{equation}
\theta _{As}=\theta _{A}^{\left( s\right) }+\left( \func{Ad}_{A}^{\left( s\right) }\right) _{\ast }\theta _{s}.  \label{thetaAs}
\end{equation}
Now, $T^{\left( s,\omega \right) }$ is just the horizontal part of $\theta _{s}$, so taking the horizontal projection in (\ref{thetaAs}), we immediately get (\ref{Trom}).
To obtain (\\ref{From}), from (\\ref{phiAs}) we\nhav\n\\begin{equation}\n\\hat{F}^{\\left( As,\\omega \\right) }=\\varphi _{As}\\left( F\\right) =\\left(\nR_{A}^{\\left( s\\right) }\\right) _{\\ast }^{-1}\\left( F^{\\prime }\\cdot\nA\\right) +\\left( \\func{Ad}_{A}^{\\left( s\\right) }\\right) _{\\ast }\\varphi\n_{s}\\left( F\\right) ,\n\\end{equation\nand hence we obtain (\\ref{From}).\n\\end{proof}\n\n\\begin{remark}\nThe expression (\\ref{Trom}) precisely replicates the formula for the\ntransformation of torsion of a $G_{2}$-structure within a fixed metric\nclass, as derived in \\cite{GrigorianOctobundle}.\n\\end{remark}\n\nNow suppose $s_{t}$ is a $1$-parameter family of equivariant $\\mathbb\n\\mathring{L}}$-valued maps that satisfy \n\\begin{equation}\n\\frac{\\partial s_{t}}{\\partial t}=\\left( R_{s_{t}}\\right) _{\\ast }\\xi _{t}\n\\label{Aevol}\n\\end{equation\nwhere $\\xi _{t}$ is a $1$-parameter family of $\\mathfrak{l}$-valued maps. In\nparticular, if $\\xi \\left( t\\right) $ is independent of $t$, then $s\\left(\nt\\right) =\\exp _{s_{0}}\\left( t\\xi \\right) s_{0}.$ Then let us work out the\nevolution of $T^{\\left( s\\left( t\\right) ,\\omega \\right) }$ and $\\hat{F\n^{\\left( s\\left( t\\right) ,\\omega \\right) }.$ First consider the evolution\nof $\\theta _{s\\left( t\\right) }$ and $\\varphi _{s\\left( t\\right) }$.\n\n\\begin{lemma}\nSuppose $s\\left( t\\right) $ satisfies (\\ref{Aevol}), then \n\\begin{subequations}\n\\begin{eqnarray}\n\\frac{\\partial \\theta _{s\\left( t\\right) }}{\\partial t} &=&d\\xi \\left(\nt\\right) -\\left[ \\theta _{s\\left( t\\right) },\\xi \\left( t\\right) \\right]\n^{\\left( s\\left( t\\right) \\right) } \\label{dtthetas} \\\\\n\\frac{\\partial \\varphi _{s\\left( t\\right) }}{\\partial t} &=&\\func{id}_\n\\mathfrak{p}}\\cdot \\xi \\left( t\\right) -\\left[ \\varphi _{s\\left( t\\right)\n},\\xi \\left( t\\right) \\right] ^{\\left( s\\left( t\\right) \\right) }.\n\\label{dtphis}\n\\end{eqnarray\n\\end{subequations\n\\end{lemma}\n\n\\begin{proof}\nFor $\\theta _{s\\left( t\\right) }$, suppressing pushforwards, we hav\n\\begin{eqnarray}\n\\frac{\\partial \\theta _{s\\left( t\\right) }}{\\partial t} &=&\\frac{\\partial }\n\\partial t}\\left( \\left( ds\\left( t\\right) \\right) \/s\\left( t\\right) \\right)\n\\notag \\\\\n&=&\\left( d\\dot{s}\\right) \/s-\\left( \\left( ds\\right) \/s\\cdot \\dot{s}\\right)\n\/s \\notag \\\\\n&=&d\\left( \\xi s\\right) \/s-\\left( \\left( ds\\right) \/s\\cdot \\left( \\xi\ns\\right) \\right) \/s \\notag \\\\\n&=&d\\xi -\\left[ \\theta _{s\\left( t\\right) },\\xi \\right] ^{\\left( s\\left(\nt\\right) \\right) }.\n\\end{eqnarray\nSimilarly, for $\\varphi _{s\\left( t\\right) },$ let $\\eta \\in \\mathfrak{p}$,\nthen \n\\begin{eqnarray}\n\\frac{\\partial \\varphi _{s\\left( t\\right) }\\left( \\eta \\right) }{\\partial t}\n&=&\\frac{\\partial }{\\partial t}\\left( \\left. \\frac{d}{d\\tau }\\exp \\left(\n\\tau \\eta \\right) \\left( s\\right) \/s\\right\\vert _{\\tau =0}\\right) \\notag \\\\\n&=&\\left. \\frac{d}{d\\tau }\\exp \\left( \\tau \\eta \\right) \\left( \\left( \\xi\ns\\right) \/s\\right) \\right\\vert _{\\tau =0}-\\left. \\frac{d}{d\\tau }\\left( \\exp\n\\left( \\tau \\eta \\right) \\left( \\left( s\\right) \/s\\right) \\cdot \\left( \\xi\ns\\right) \\right) \/s\\right\\vert _{\\tau =0} \\notag \\\\\n&=&\\left. \\frac{d}{d\\tau }\\exp \\left( \\tau \\eta \\right) ^{\\prime }\\left( \\xi\n\\right) \\right\\vert _{\\tau =0}+\\left. 
\\frac{d}{d\\tau }\\left( \\xi \\exp \\left(\n\\tau \\eta \\right) \\left( s\\right) \\right) \/s\\right\\vert _{\\tau =0} \\notag \\\\\n&&-\\left. \\frac{d}{d\\tau }\\left( \\exp \\left( \\tau \\eta \\right) \\left( \\left(\ns\\right) \/s\\right) \\cdot \\left( \\xi s\\right) \\right) \/s\\right\\vert _{\\tau =0}\n\\notag \\\\\n&=&\\eta \\cdot \\xi \\left( t\\right) -\\left[ \\varphi _{s\\left( t\\right) }\\left(\n\\eta \\right) ,\\xi \\left( t\\right) \\right] ^{\\left( s\\left( t\\right) \\right)\n}.\n\\end{eqnarray}\n\\end{proof}\n\nTo obtain the evolution of $T^{\\left( s\\left( t\\right) ,\\omega \\right) }$\nand $\\hat{F}^{\\left( s\\left( t\\right) ,\\omega \\right) }$, we just take the\nhorizontal component of (\\ref{dtphis}) and substitute $F$ into (\\ref{dtphis\n).\n\n\\begin{corollary}\nSuppose $s\\left( t\\right) $ satisfies (\\ref{Aevol}), then \n\\begin{subequations\n\\label{dtTF} \n\\begin{eqnarray}\n\\frac{\\partial T^{\\left( s\\left( t\\right) ,\\omega \\right) }}{\\partial t}\n&=&d^{\\mathcal{H}}\\xi \\left( t\\right) -\\left[ T^{\\left( s\\left( t\\right)\n,\\omega \\right) },\\xi \\left( t\\right) \\right] ^{\\left( s\\left( t\\right)\n\\right) } \\label{dtTF1} \\\\\n\\frac{\\partial \\hat{F}^{\\left( s\\left( t\\right) ,\\omega \\right) }}{\\partial \n} &=&F\\cdot \\xi \\left( t\\right) -\\left[ \\hat{F}^{\\left( s\\left( t\\right)\n,\\omega \\right) },\\xi \\left( t\\right) \\right] ^{\\left( s\\left( t\\right)\n\\right) }. \\label{dtTF2}\n\\end{eqnarray\n\\end{subequations\n\\end{corollary}\n\nThe expression (\\ref{dtTF1}) is the analog of a similar expression for the\nevolution of the torsion of a $G_{2}$-structure, as given in \\cit\n{GrigorianIsoflow,karigiannis-2007}.\n\n\\begin{remark}\nSuppose $u_{t}$ is a $1$-parameter family of equivariant $\\Psi $-valued maps\nthat satisfy \n\\begin{equation}\n\\frac{\\partial u_{t}}{\\partial t}=\\left( l_{u_{t}}\\right) _{\\ast }\\gamma _{t}\n\\end{equation\nfor a $1$-parameter family $\\gamma _{t}$ of equivariant $\\mathfrak{p}\n-valued maps. Then, each $u_{t}$ defines a gauge transformation of the\nconnection $\\omega .$ Define \n\\begin{equation}\n\\omega _{t}=u_{t}^{\\ast }\\omega .\n\\end{equation\nThen, it is easy to see that \n\\begin{equation}\n\\frac{\\partial \\omega _{t}}{\\partial t}=d\\gamma _{t}+\\left[ \\omega\n_{t},\\gamma _{t}\\right] _{\\mathfrak{p}}=d^{\\mathcal{H}_{t}}\\gamma _{t},\n\\label{dtomt}\n\\end{equation\nwhere $d^{\\mathcal{H}_{t}}$ is the covariant derivative corresponding to \n\\omega _{t}.$ Similarly, the corresponding curvature $F_{t}$ evolves via the\nequatio\n\\begin{equation}\n\\frac{\\partial F_{t}}{\\partial t}=\\left[ F_{t},\\gamma _{t}\\right] _\n\\mathfrak{p}}. \\label{dtFt}\n\\end{equation\nUsing (\\ref{dtomt}) together with (\\ref{dtTF1}) gives \n\\begin{equation}\n\\frac{\\partial T^{\\left( s_{t},\\omega _{t}\\right) }}{\\partial t}=d^{\\mathcal\nH}_{t}}\\xi _{t}-\\left[ T^{\\left( s_{t},\\omega _{t}\\right) },\\xi _{t}\\right]\n^{\\left( s_{t}\\right) }+\\varphi _{s_{t}}\\left( d^{\\mathcal{H}_{t}}\\gamma\n_{t}\\right) . 
\label{dTstomt}
\end{equation}
However,
\begin{eqnarray*}
\varphi _{s_{t}}\left( d^{\mathcal{H}_{t}}\gamma _{t}\right) &=&d^{\mathcal{H}_{t}}\hat{\gamma}_{t}^{\left( s_{t}\right) }-\left( d^{\mathcal{H}_{t}}\varphi _{s_{t}}\right) \left( \gamma _{t}\right) \\
&=&d^{\mathcal{H}_{t}}\hat{\gamma}_{t}^{\left( s_{t}\right) }-\gamma _{t}\cdot T^{\left( s_{t},\omega _{t}\right) }-\left[ T^{\left( s_{t},\omega _{t}\right) },\hat{\gamma}_{t}^{\left( s_{t}\right) }\right] ^{\left( s_{t}\right) },
\end{eqnarray*}
and thus (\ref{dTstomt}) becomes
\begin{equation}
\frac{\partial T^{\left( s_{t},\omega _{t}\right) }}{\partial t}=-\gamma _{t}\cdot T^{\left( s_{t},\omega _{t}\right) }+d^{\mathcal{H}_{t}}\left( \xi _{t}+\hat{\gamma}_{t}^{\left( s_{t}\right) }\right) -\left[ T^{\left( s_{t},\omega _{t}\right) },\xi _{t}+\hat{\gamma}_{t}^{\left( s_{t}\right) }\right] ^{\left( s_{t}\right) }.  \label{dTstomt2}
\end{equation}
For the curvature, using (\ref{dtFt}) together with (\ref{dtTF2}) gives
\begin{equation}
\frac{\partial \hat{F}^{\left( s_{t},\omega _{t}\right) }}{\partial t}=F_{t}\cdot \xi _{t}-\left[ \hat{F}^{\left( s_{t},\omega _{t}\right) },\xi _{t}\right] ^{\left( s_{t}\right) }+\varphi _{s_{t}}\left( \left[ F_{t},\gamma _{t}\right] _{\mathfrak{p}}\right) .  \label{dFstomt}
\end{equation}
Using (\ref{xiphi}), we then get
\begin{equation}
\frac{\partial \hat{F}^{\left( s_{t},\omega _{t}\right) }}{\partial t}=-\gamma _{t}\cdot \hat{F}_{t}+F_{t}\cdot \left( \xi _{t}+\hat{\gamma}_{t}^{\left( s_{t}\right) }\right) -\left[ \hat{F}^{\left( s_{t},\omega _{t}\right) },\xi _{t}+\hat{\gamma}_{t}^{\left( s_{t}\right) }\right] ^{\left( s_{t}\right) }.  \label{dFstomt2}
\end{equation}
Taking $\xi _{t}=-\hat{\gamma}_{t}^{\left( s_{t}\right) }$ in (\ref{dTstomt2}) and (\ref{dFstomt2}), we obtain the infinitesimal versions of (\ref{Tuom}).
\end{remark}

\subsection{Variational principles}

\label{sectVar}In general, we have seen that the loop bundle structure is given by an $\mathbb{\mathring{L}}$-valued map $s$ as well as a connection $\omega $ on $\mathcal{P}.$ We call the pair $\left( s,\omega \right) $ the configuration of the loop bundle structure. Each point in the configuration space gives rise to the corresponding torsion $T^{\left( s,\omega \right) }$ and curvature $\hat{F}^{\left( s,\omega \right) }.$ Previously we considered $T$ and $\hat{F}$ as horizontal equivariant forms on $\mathcal{P}$, but of course we can equivalently consider them as bundle-valued differential forms on the base manifold $M$. To be able to define functionals on $M,$ let us suppose $M$ has a Riemannian metric and moreover that $\mathbb{L}$ has the following properties:

\begin{enumerate}
\item For each $s\in \mathbb{\mathring{L}}$, the Killing form $K^{\left( s\right) }$ is nondegenerate and invariant with respect to $\func{ad}^{\left( s\right) }$ and the action of $\mathfrak{p}.$

\item $\mathbb{L}$ is a $G$-loop, so that in particular, for each $s\in \mathbb{\mathring{L}},$ $\mathfrak{l}^{\left( s\right) }=\mathfrak{q}_{s}.$

\item For each $s\in \mathbb{\mathring{L}}$, the space $\mathfrak{q}_{s}$ is an irreducible representation of the Lie algebra $\mathfrak{h}_{s}$.
\end{enumerate}

These properties may not be strictly necessary, but they will simplify the arguments. Moreover, these are the properties satisfied by the loop of unit octonions, which is the key example.
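To keep a concrete picture in mind, the following sketch records how these properties manifest for the unit octonions, with the identifications as in \cite{GrigorianOctobundle} (in particular, that the kernel of $\varphi _{s}$ is $\mathfrak{h}_{s}\cong \mathfrak{g}_{2}$). Here $\mathfrak{l}\cong \func{Im}\mathbb{O}\cong \mathbb{R}^{7}$ and $\mathfrak{p}\cong \mathfrak{so}\left( 7\right) $, so
\begin{equation*}
\dim \mathfrak{q}_{s}=\dim \mathfrak{p}-\dim \mathfrak{h}_{s}=21-14=7=\dim \mathfrak{l},
\end{equation*}
consistently with the second property, while the third property holds because $\mathbb{R}^{7}$ is an irreducible representation of $\mathfrak{g}_{2}$.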
The first property means we can define the map $\varphi _{s}^{t},$ and then the second and third properties together ensure that there exists a constant $\lambda $ such that for any $s\in \mathbb{\mathring{L}},$ $\varphi _{s}\varphi _{s}^{t}=\lambda \func{id}_{\mathfrak{l}}$ and $\varphi _{s}^{t}\varphi _{s}=\lambda \func{id}_{\mathfrak{h}_{s}^{\perp }},$ as per Lemma \ref{lemphisphist}. If $\mathfrak{q}_{s}$ is a reducible representation, then each irreducible component may have its own constant. Moreover, the first and second properties together imply that $K^{\left( s\right) }$ is independent of the choice of $s$, and when extended as an inner product on sections, it is covariantly constant with respect to a principal connection on $\mathcal{P}.$

Let $s\in \mathbb{\mathring{L}}$ be fixed. Suppose we have a path of connections on $\mathcal{P}$ given by $\tilde{\omega}\left( t\right) =\omega +tA$ for some basic $\mathfrak{p}$-valued $1$-form $A$ and a fixed principal connection $\omega $. Then, define
\begin{subequations}
\begin{eqnarray}
T\left( t\right) &=&T^{\left( s,\tilde{\omega}\left( t\right) \right) }=\theta _{s}+\varphi _{s}\left( \tilde{\omega}\left( t\right) \right) =T^{\left( s,\omega \right) }+t\hat{A} \\
\hat{F}\left( t\right) &=&\hat{F}^{\left( s,\tilde{\omega}\left( t\right) \right) }=\varphi _{s}\left( F^{\tilde{\omega}\left( t\right) }\right) =\hat{F}^{\left( s,\omega \right) }+t\varphi _{s}\left( d^{\mathcal{H}}A\right) \\
&&+\frac{1}{2}t^{2}\varphi _{s}\left( \left[ A,A\right] _{\mathfrak{p}}\right) ,  \notag
\end{eqnarray}
\end{subequations}
where $\hat{A}=\varphi _{s}\left( A\right) $. Hence, using (\ref{dhphis}),
\begin{subequations}
\begin{eqnarray}
\left. \frac{d}{dt}T\left( t\right) \right\vert _{t=0} &=&\hat{A} \\
\left. \frac{d}{dt}\hat{F}\left( t\right) \right\vert _{t=0} &=&\varphi _{s}\left( d^{\mathcal{H}}A\right) =d^{\mathcal{H}}\hat{A}-\left( d^{\mathcal{H}}\varphi _{s}\right) \wedge A  \notag \\
&=&d^{\mathcal{H}}\hat{A}+A\cdot T-\left[ \hat{A},T\right] ^{\left( s\right) },
\end{eqnarray}
\end{subequations}
where for brevity, $T=T\left( 0\right) =T^{\left( s,\omega \right) }$. Note that if for each $p\in \mathcal{P}$, $A_{p}\in \mathfrak{h}_{s_{p}}$, then the torsion is unaffected, so these deformations are not relevant for the loop bundle structure.
Hence, let us assume that $A_{p}\in \mathfrak{h}_{s_{p}}^{\perp }$ for each $p\in \mathcal{P}.$ Equivalently, this means that $A\in \varphi _{s}^{t}\left( \mathfrak{l}\right) .$ So now suppose $\xi \in \Omega _{basic}^{1}\left( \mathcal{P},\mathfrak{l}\right) $ is a basic $\mathfrak{l}$-valued $1$-form on $\mathcal{P}$ such that $A=\frac{1}{\lambda }\varphi _{s}^{t}\left( \xi \right) $, and thus, $\hat{A}=\xi .$ Moreover, from (\ref{piqsact}), we see that
\begin{equation}
A\cdot T=\frac{1}{\lambda }\varphi _{s}^{t}\left( \xi \right) \cdot T=\frac{1}{2\lambda ^{2}}\left[ \xi ,T\right] _{\varphi _{s}}+\frac{1}{2}\left[ \xi ,T\right] ^{\left( s\right) },
\end{equation}
where the bracket $\left[ \cdot ,\cdot \right] _{\varphi _{s}}$ on $\mathfrak{l}$ is given by
\begin{equation}
\left[ \xi ,\eta \right] _{\varphi _{s}}=\varphi _{s}\left( \left[ \varphi _{s}^{t}\left( \xi \right) ,\varphi _{s}^{t}\left( \eta \right) \right] _{\mathfrak{p}}\right) ,
\end{equation}
as defined in (\ref{phisbrack}). Overall, the deformations are now given by
\begin{subequations}
\label{TFdeformxi}
\begin{eqnarray}
\left. \frac{d}{dt}T\left( t\right) \right\vert _{t=0} &=&\xi \\
\left. \frac{d}{dt}\hat{F}\left( t\right) \right\vert _{t=0} &=&d^{\mathcal{H}}\xi +\frac{1}{2\lambda ^{2}}\left[ \xi ,T\right] _{\varphi _{s}}-\frac{1}{2}\left[ \xi ,T\right] ^{\left( s\right) }.
\end{eqnarray}
\end{subequations}
Suppose now $M$ is a $3$-dimensional compact manifold. For a fixed section $s\in \mathcal{\mathring{Q}},$ consider a functional $\mathcal{F}^{\left( s\right) }$ on the space of connections on $\mathcal{P}$ modulo $\mathfrak{h}_{s},$ given by
\begin{equation}
\mathcal{F}^{\left( s\right) }\left( \omega \right) =\int_{M}\left\langle T,\hat{F}\right\rangle ^{\left( s\right) }-\frac{1}{6\lambda ^{2}}\left\langle T,\left[ T,T\right] _{\varphi _{s}}\right\rangle ^{\left( s\right) },  \label{Fsfunctional}
\end{equation}
where wedge products between forms are implicit. From the properties of $T,$ $\hat{F},$ $\left[ \cdot ,\cdot \right] _{\varphi _{s}}$, and $\left\langle {}\right\rangle ^{\left( s\right) }$, we see that this is invariant under the simultaneous gauge transformation $\left( s,\omega \right) \mapsto \left( u^{-1}\left( s\right) ,u^{\ast }\omega \right) .$

Now using (\ref{TFdeformxi}), consider deformations of each piece of (\ref{Fsfunctional}). For the first term, using (\ref{dHT}), we obtain
\begin{eqnarray}
\left.
\\frac{d}{dt}\\int_{M}\\left\\langle T\\left( t\\right) ,\\hat{F}\\left(\nt\\right) \\right\\rangle ^{\\left( s\\right) }\\right\\vert _{t=0}\n&=&\\int_{M}\\left\\langle \\xi ,\\hat{F}\\right\\rangle ^{\\left( s\\right) } \\notag\n\\\\\n&&+\\int_{M}\\left\\langle T,d^{\\mathcal{H}}\\xi +\\frac{1}{2\\lambda ^{2}}\\left[\n\\xi ,T\\right] _{\\varphi _{s}}-\\frac{1}{2}\\left[ \\xi ,T\\right] ^{\\left(\ns\\right) }\\right\\rangle ^{\\left( s\\right) } \\notag \\\\\n&=&\\int_{M}\\left\\langle \\xi ,\\hat{F}+d^{\\mathcal{H}}T+\\frac{1}{2\\lambda ^{2}\n\\left[ T,T\\right] _{\\varphi _{s}}-\\frac{1}{2}\\left[ T,T\\right] ^{\\left(\ns\\right) }\\right\\rangle ^{\\left( s\\right) } \\notag \\\\\n&=&\\int_{M}\\left\\langle \\xi ,2\\hat{F}+\\frac{1}{2\\lambda ^{2}}\\left[ T,\n\\right] _{\\varphi _{s}}\\right\\rangle ^{\\left( s\\right) }, \\label{dtFs1}\n\\end{eqnarray\nFor the second term in (\\ref{Fsfunctional}), using Lemma \\ref{lemPhibrack2},\nwe obtain \n\\begin{equation}\n-\\frac{1}{6\\lambda ^{2}}\\left. \\frac{d}{dt}\\int_{M}\\left\\langle T,\\left[ T,\n\\right] _{\\varphi _{s}}\\right\\rangle ^{\\left( s\\right) }\\right\\vert _{t=0}=\n\\frac{1}{2\\lambda ^{2}}\\int_{M}\\left\\langle \\xi ,\\left[ T,T\\right] _{\\varphi\n_{s}}\\right\\rangle ^{\\left( s\\right) }. \\label{dtFs2}\n\\end{equation\nCombining (\\ref{dtFs1}) and (\\ref{dtFs2}), we obtain \n\\begin{equation}\n\\left. \\frac{d}{dt}\\mathcal{F}^{\\left( s\\right) }\\left( \\tilde{\\omega}\\left(\nt\\right) \\right) \\right\\vert _{t=0}=2\\int_{M}\\left\\langle \\xi ,\\hat{F\n\\right\\rangle ^{\\left( s\\right) }. \\label{dtFs3}\n\\end{equation\nTherefore, we see that the critical points of $\\mathcal{F}^{\\left( s\\right)\n} $ are precisely the connections for which $\\hat{F}=0$. This gives a\ngeneralization of the standard Chern-Simons functional.\n\n\\begin{remark}\nThe condition $\\hat{F}=0$ means that each point, the curvature $F^{\\left(\n\\omega \\right) }$ lies in $\\mathfrak{h}_{s}.$ This is a restriction on the\nLie algebra part of the curvature. Usually instanton conditions on curvature\ngive conditions on the $2$-form part. So what we have here is a different\nkind of condition to an instanton, and there is term for this, coined by\nSpiro Karigiannis - an \\emph{extanton}. As we from Example \\ref{exCx4}, on a\nK\\\"{a}hler manifold, this just corresponds to the Ricci-flat condition.\n\\end{remark}\n\nThe above construction on $3$-manifolds can be extended to an $n\n-dimensional manifold $M$ if we have a closed $\\left( n-3\\right) \n-dimensional form. In that case, similarly as in \\cite{DonaldsonHigherDim},\nconsider the functional \n\\begin{equation}\n\\mathcal{F}^{\\left( s\\right) }\\left( \\omega \\right) =\\int_{M^{n}}\\left(\n\\left\\langle T,\\hat{F}\\right\\rangle ^{\\left( s\\right) }-\\frac{1}{6\\lambda\n^{2}}\\left\\langle T,\\left[ T,T\\right] _{\\varphi _{s}}\\right\\rangle ^{\\left(\ns\\right) }\\right) \\wedge \\psi .\n\\end{equation\nIn this case, the critical points then satisfy \n\\begin{equation}\n\\hat{F}\\wedge \\psi =0. \\label{Extantonndim}\n\\end{equation\nFor example if $M$ is a $7$-dimensional manifold with a \\emph{co-closed }\nG_{2}$-structure, i.e. $\\psi =\\ast \\varphi $ is closed, then (\\re\n{Extantonndim}) shows that as a $2$-form, $\\hat{F}$ has a vanishing\ncomponent in the $7$-dimensional representation of $G_{2}.$ In contrast,\nDonaldson-Thomas connections \\cite{DonaldsonHigherDim} satisfy $F\\wedge \\psi\n=0$. 
If $F=\\func{Riem}$, is the Riemann curvature on the frame bundle, then\nequation (\\ref{Extantonndim}) shows that, in local coordinates, \n\\begin{equation}\n\\func{Riem}_{ijkl}\\varphi _{\\ \\alpha }^{ij}\\varphi _{\\ \\ \\beta }^{kl}=0.\n\\label{Extanton7dim}\n\\end{equation\nThe quantity on the left-hand side of (\\ref{Extanton7dim}), is sometimes\ndenoted as $\\func{Ric}^{\\ast }$ \\cit\n{CleytonIvanovClosed,CleytonIvanovCurv,GrigorianFlowSurvey}. The traceless\npart of $\\func{Ric}^{\\ast }$ corresponds to a component of the Riemann\ncurvature that lies in a $27$-dimensional representation of $G_{2}$, with\nanother $27$-dimensional component given by the traceless Ricci tensor \n\\func{Ric}$.\n\nNow consider the functional (\\ref{Fsfunctional}), however now as functional\non sections of $\\mathcal{\\mathring{Q}}$ for a fixed connection $\\omega $, so\nthat now we vary $s$\n\\begin{equation}\n\\mathcal{F}^{\\left( \\omega \\right) }\\left( s\\right) =\\int_{M}\\left\\langle T\n\\hat{F}\\right\\rangle ^{\\left( s\\right) }-\\frac{1}{6\\lambda ^{2}}\\left\\langle\nT,\\left[ T,T\\right] _{\\varphi _{s}}\\right\\rangle ^{\\left( s\\right) },\n\\end{equation\nSuppose we have \n\\begin{subequations\n\\label{sdeforms\n\\begin{eqnarray}\n\\frac{\\partial T^{\\left( s\\left( t\\right) ,\\omega \\right) }}{\\partial t}\n&=&d^{\\mathcal{H}}\\eta \\left( t\\right) -\\left[ T^{\\left( s\\left( t\\right)\n,\\omega \\right) },\\eta \\left( t\\right) \\right] ^{\\left( s\\left( t\\right)\n\\right) } \\\\\n\\frac{\\partial \\hat{F}^{\\left( s\\left( t\\right) ,\\omega \\right) }}{\\partial \n} &=&F\\cdot \\eta \\left( t\\right) -\\left[ \\hat{F}^{\\left( s\\left( t\\right)\n,\\omega \\right) },\\eta \\left( t\\right) \\right] ^{\\left( s\\left( t\\right)\n\\right) }.\n\\end{eqnarray\n\\end{subequations\nfor some $\\eta \\in \\Gamma \\left( \\mathcal{A}\\right) .$ Let us now make\nadditional assumptions:\n\n\\begin{enumerate}\n\\item $\\left[ \\cdot ,\\cdot \\right] _{\\varphi _{s}}=k\\left[ \\cdot ,\\cdot\n\\right] ^{\\left( s\\right) }$\n\n\\item $\\mathbb{L}$ is alternative\n\\end{enumerate}\n\nThe last assumption implies in particular, that the associator is\nskew-symmetric, and moreover, for any $\\alpha ,\\beta ,\\xi ,\\eta \\in \n\\mathfrak{l}^{\\left( s\\right) }$, \n\\begin{equation}\n\\left\\langle a_{s}\\left( \\alpha ,\\beta ,\\xi \\right) ,\\eta \\right\\rangle\n^{\\left( s\\right) }=\\left\\langle \\xi ,a_{s}\\left( \\alpha ,\\beta ,\\eta\n\\right) \\right\\rangle ^{\\left( s\\right) }.\n\\end{equation\nNow, \n\\begin{equation}\n\\mathcal{F}^{\\left( \\omega \\right) }\\left( s\\right) =\\int_{M}\\left\\langle T\n\\hat{F}\\right\\rangle ^{\\left( s\\right) }-\\frac{k}{6\\lambda ^{2}}\\left\\langle\nT,\\left[ T,T\\right] ^{\\left( s\\right) }\\right\\rangle ^{\\left( s\\right) },\n\\end{equation\nand in this case the derivative of $\\mathcal{F}^{\\left( \\omega \\right)\n}\\left( s\\right) $ is \n\\begin{eqnarray}\n\\left. 
\\frac{d}{dt}\\mathcal{F}^{\\left( \\omega \\right) }\\left( s\\left(\nt\\right) \\right) \\right\\vert _{t=0} &=&\\int_{M}\\left\\langle d^{\\mathcal{H\n}\\eta -\\left[ T,\\eta \\right] ^{\\left( s\\right) },\\hat{F}\\right\\rangle\n^{\\left( s\\right) }+\\int_{M}\\left\\langle T,F\\cdot \\eta -\\left[ \\hat{F},\\eta\n\\right] ^{\\left( s\\right) }\\right\\rangle ^{\\left( s\\right) } \\notag \\\\\n&&-\\frac{k}{2\\lambda ^{2}}\\int_{M}\\left\\langle d^{\\mathcal{H}}\\eta -\\left[\nT,\\eta \\right] ^{\\left( s\\right) },\\left[ T,T\\right] ^{\\left( s\\right)\n}\\right\\rangle ^{\\left( s\\right) } \\notag \\\\\n&&-\\frac{k}{6\\lambda ^{2}}\\int_{M}\\left\\langle T,a_{s}\\left( T,T,\\eta\n\\right) \\right\\rangle ^{\\left( s\\right) }. \\label{dtFoms}\n\\end{eqnarray\nConsider the first two terms in (\\ref{dtFoms}). \n\\begin{subequations\n\\begin{eqnarray}\n\\int_{M}\\left\\langle d^{\\mathcal{H}}\\eta -\\left[ T,\\eta \\right] ^{\\left(\ns\\right) },\\hat{F}\\right\\rangle ^{\\left( s\\right) } &=&\\int_{M}\\left\\langle\n\\eta ,-d^{\\mathcal{H}}\\hat{F}-\\left[ \\hat{F},T\\right] ^{\\left( s\\right)\n}\\right\\rangle ^{\\left( s\\right) } \\\\\n\\int_{M}\\left\\langle T,F\\cdot \\eta -\\left[ \\hat{F},\\eta \\right] ^{\\left(\ns\\right) }\\right\\rangle ^{\\left( s\\right) } &=&\\int_{M}\\left\\langle \\eta \n\\left[ \\hat{F},T\\right] ^{\\left( s\\right) }-F\\cdot T\\right\\rangle ^{\\left(\ns\\right) }.\n\\end{eqnarray\n\\end{subequations\nThe third term in (\\ref{dtFoms}) becomes \n\\begin{eqnarray*}\n\\int_{M}\\left\\langle d^{\\mathcal{H}}\\eta -\\left[ T,\\eta \\right] ^{\\left(\ns\\right) },\\left[ T,T\\right] ^{\\left( s\\right) }\\right\\rangle ^{\\left(\ns\\right) } &=&\\int_{M}\\left\\langle \\eta ,-d^{\\mathcal{H}}\\left[ T,T\\right]\n^{\\left( s\\right) }+\\left[ T,\\left[ T,T\\right] ^{\\left( s\\right) }\\right]\n^{\\left( s\\right) }\\right\\rangle ^{\\left( s\\right) } \\\\\n&=&\\int_{M}\\left\\langle \\eta ,-2\\left[ \\hat{F},T\\right] ^{\\left( s\\right)\n}+a_{s}\\left( T,T,T\\right) \\right\\rangle ^{\\left( s\\right) }.\n\\end{eqnarray*\nThe last term in (\\ref{dtFoms}) is \n\\begin{equation*}\n\\int_{M}\\left\\langle T,a_{s}\\left( T,T,\\eta \\right) \\right\\rangle ^{\\left(\ns\\right) }=\\int_{M}\\left\\langle \\eta ,a_{s}\\left( T,T,T\\right) \\right\\rangle\n^{\\left( s\\right) }.\n\\end{equation*\nOverall, since $a_{s}\\left( T,T,T\\right) =\\left[ T,\\left[ T,T\\right]\n^{\\left( s\\right) }\\right] ^{\\left( s\\right) }$, \n\\begin{eqnarray}\n\\left. \\frac{d}{dt}\\mathcal{F}^{\\left( \\omega \\right) }\\left( s\\left(\nt\\right) \\right) \\right\\vert _{t=0} &=&-\\int_{M}\\left\\langle \\eta ,d^\n\\mathcal{H}}\\hat{F}+F\\cdot T-\\frac{k}{\\lambda ^{2}}\\left[ \\hat{F},T\\right]\n^{\\left( s\\right) }\\right\\rangle ^{\\left( s\\right) } \\\\\n&&-\\int_{M}\\,\\left\\langle \\eta ,\\frac{2k}{3\\lambda ^{2}}\\left[ T,\\left[ T,\n\\right] ^{\\left( s\\right) }\\right] ^{\\left( s\\right) }\\right\\rangle ^{\\left(\ns\\right) }. \\notag\n\\end{eqnarray}\n\nFrom the Bianchi identity (\\ref{Bianchi}), \n\\begin{equation*}\nF\\cdot T=d^{\\mathcal{H}}\\hat{F}+\\left[ \\hat{F},T\\right] ^{\\left( s\\right) },\n\\end{equation*\nand thus, \n\\begin{eqnarray}\n\\left. 
\\frac{d}{dt}\\mathcal{F}^{\\left( \\omega \\right) }\\left( s\\left(\nt\\right) \\right) \\right\\vert _{t=0} &=&-\\int_{M}\\left\\langle \\eta ,2d^\n\\mathcal{H}}\\hat{F}+\\left( 1-\\frac{k}{\\lambda ^{2}}\\right) \\left[ \\hat{F},\n\\right] ^{\\left( s\\right) }\\right\\rangle ^{\\left( s\\right) } \\\\\n&&-\\int_{M}\\,\\left\\langle \\eta ,\\frac{2k}{3\\lambda ^{2}}\\left[ T,\\left[ T,\n\\right] ^{\\left( s\\right) }\\right] ^{\\left( s\\right) }\\right\\rangle ^{\\left(\ns\\right) }.\n\\end{eqnarray\nThus, the critical points with respect to deformations of $s$ satisf\n\\begin{equation}\nd^{\\mathcal{H}}\\hat{F}+\\left( \\frac{1}{2}-\\frac{k}{2\\lambda ^{2}}\\right)\n\\left[ \\hat{F},T\\right] ^{\\left( s\\right) }+\\frac{k}{3\\lambda ^{2}}\\left[ T\n\\left[ T,T\\right] ^{\\left( s\\right) }\\right] ^{\\left( s\\right) }=0.\n\\label{SecondCrit}\n\\end{equation}\n\n\\begin{example}\nIn the case when $\\mathbb{L}$ is a Lie group, $a_{s}=0$ and $k=\\lambda =1$,\nso we just obtain $d^{\\mathcal{H}}\\hat{F}=0$, which is of course the Bianchi\nidentity. This shows that we just have a reduction from a $\\Psi ^{R}\\left( \n\\mathbb{L}\\right) $ connection to an $\\mathbb{L}$-connection. In the case of \n$\\mathbb{L}$ being the loop of unit octonions, we know $\\lambda =\\frac{3}{8}$\nand $k=3\\lambda ^{3}=\\frac{81}{512}$ so (\\ref{SecondCrit}) become\n\\begin{equation}\nd^{\\mathcal{H}}\\hat{F}-\\frac{1}{16}\\left[ \\hat{F},T\\right] ^{\\left( s\\right)\n}+\\frac{3}{8}\\left[ T,\\left[ T,T\\right] ^{\\left( s\\right) }\\right] ^{\\left(\ns\\right) }=0.\n\\end{equation\nThe significance of this condition is not immediately clear.\n\\end{example}\n\nHowever combining the two variations, we find that critical points over \n\\left( s,\\omega \\right) $ satisfy \n\\begin{equation*}\n\\left\\{ \n\\begin{array}{c}\n\\hat{F}=0 \\\\ \n\\left[ T,T,T\\right] ^{\\left( s\\right) }=\n\\end{array\n\\right. .\n\\end{equation*}\n\n\\begin{remark}\nIt will be the subject of further work to understand the significance of\nthis Chern-Simons type functional $\\mathcal{F}$. In particular, given the\nnon-trivial $3$-form $\\left[ T,\\left[ T,T\\right] ^{\\left( s\\right) }\\right]\n^{\\left( s\\right) }$, there may be additional possibilities for similar\nhigher-dimensional functionals. The functional $\\mathcal{F}$ is invariant\nunder simultaneous gauge transformations of $\\left( s,\\omega \\right) ,$ but\nnot the individual ones. For the standard Chern-Simons functional in 3\ndimensions, the lack of gauge invariance causes it to be multi-valued, with\nonly the exponentiated action functional becomes truly gauge-invariant. It\nwill be interesting to see if there are any analogous properties in this\ncase.\n\\end{remark}\n\nIn the context of $G_{2}$-structures, another functional has been considered\nin several papers \\cite{Bagaglini2,DGKisoflow,GrigorianOctobundle,\nGrigorianIsoflow,SaEarpLoubeau}, namely the $L_{2}$-norm of the torsion,\nconsidered as functional on the space of isometric $G_{2}$-structures, i.e. \nG_{2}$-structures that correspond to the same metric. In the context of loop\nstructures we may define a similar functional. 
Given a compact Riemannian manifold $\left( M,g\right) $ and a fixed connection $\omega $ on $\mathcal{P}$, for any section $s\in \Gamma \left( \mathcal{\mathring{Q}}\right) $ let $T^{\left( s\right) }$ be the torsion of $s$ with respect to $\omega .$ Then define the energy functional on $\Gamma \left( \mathcal{\mathring{Q}}\right) $ given by
\begin{equation}
\mathcal{E}\left( s\right) =\int_{M}\left\langle T^{\left( s\right) },\ast T^{\left( s\right) }\right\rangle ^{\left( s\right) },  \label{Efunc}
\end{equation}
where the wedge product is implied. With respect to deformations of $s$ given by (\ref{Aevol}) and the corresponding deformation of $T$ given by (\ref{sdeforms}), we have
\begin{eqnarray}
\left. \frac{d}{dt}\mathcal{E}\left( s_{t}\right) \right\vert _{t=0} &=&2\int_{M}\left\langle d^{\mathcal{H}}\eta -\left[ T^{\left( s\right) },\eta \right] ^{\left( s\right) },\ast T^{\left( s\right) }\right\rangle ^{\left( s\right) }  \notag \\
&=&-2\int_{M}\left\langle \eta ,d^{\mathcal{H}}\ast T^{\left( s\right) }+\left[ T^{\left( s\right) },\ast T^{\left( s\right) }\right] ^{\left( s\right) }\right\rangle ^{\left( s\right) }  \notag \\
&=&-2\int_{M}\left\langle \eta ,d^{\mathcal{H}}\ast T^{\left( s\right) }\right\rangle ^{\left( s\right) },  \label{Edeform}
\end{eqnarray}
where $\left[ T^{\left( s\right) },\ast T^{\left( s\right) }\right] ^{\left( s\right) }=0$ due to symmetry considerations (the bracket is antisymmetric on $\mathfrak{l}$, while the induced pairing of $T^{\left( s\right) }$ with $\ast T^{\left( s\right) }$ is symmetric). Thus the critical points of $\mathcal{E}$ satisfy
\begin{equation}
\left( d^{\mathcal{H}}\right) ^{\ast }T^{\left( s\right) }=0,  \label{divT0}
\end{equation}
which is precisely the analog of the \textquotedblleft divergence-free torsion\textquotedblright\ condition in \cite{Bagaglini2,DGKisoflow,GrigorianOctobundle,GrigorianIsoflow,SaEarpLoubeau}. Also, similarly as in \cite{SaEarpLoubeau}, if we assume $\mathcal{P}$ is compact, the functional $\mathcal{E}$ may be related to the equivariant Dirichlet energy functional for maps from $\mathcal{P}$ to $\mathbb{\mathring{L}}$. Given a metric $\left\langle \cdot ,\cdot \right\rangle ^{\left( s\right) }$ on $\mathfrak{l}$, we may extend it to a metric on all of $\mathbb{L}$ via right translations: $\left\langle \cdot ,\cdot \right\rangle _{p}^{\left( s\right) }=\left\langle \left( R_{p}\right) _{\ast }^{-1}\cdot ,\left( R_{p}\right) _{\ast }^{-1}\cdot \right\rangle ^{\left( s\right) }.$ Then, the Dirichlet energy functional on \emph{equivariant} maps from $\mathcal{P}$ to $\mathbb{\mathring{L}}$ is given by
\begin{equation}
\mathcal{D}\left( s\right) =\int_{\mathcal{P}}\left\vert ds\right\vert ^{2}=\int_{\mathcal{P}}\left\vert \theta _{s}\right\vert ^{2},  \label{dirichletE}
\end{equation}
where we endow $T\mathcal{P}$ with a metric such that the decomposition $T\mathcal{P}=\mathcal{HP}\oplus \mathcal{VP}$ is orthogonal with respect to it, and moreover such that it is compatible with the metrics on $M$ and $\Psi $.
Then, using (\\ref\n{stheta}) \n\\begin{equation}\n\\mathcal{D}\\left( s\\right) =\\int_{\\mathcal{P}}\\left\\vert T^{\\left( s\\right)\n}\\right\\vert ^{2}+\\int_{\\mathcal{P}}\\left\\vert \\hat{\\omega}^{\\left( s\\right)\n}\\right\\vert ^{2}.\n\\end{equation}\nNote that given an orthogonal basis $\\left\\{ X_{i}\\right\\} $ on $\\mathfrak{p}\n$, $\\left\\vert \\hat{\\omega}^{\\left( s\\right) }\\right\\vert ^{2}=\\left\\vert \n\\hat{\\omega}^{\\left( s\\right) }\\left( \\sigma \\left( X_{i}\\right) \\right)\n\\right\\vert ^{2}=\\left\\vert \\hat{X}_{i}\\right\\vert ^{2}=\\lambda _{s}\\dim \n\\mathfrak{l}.$ With our previous assumptions, $\\lambda _{s}=\\lambda $ does\nnot depend on $s$, so we have \n\\begin{equation*}\n\\mathcal{D}\\left( s\\right) =a\\mathcal{E}\\left( s\\right) +b\n\\end{equation*}\nwhere $a=\\func{Vol}\\left( \\Psi \\right) $ and $b=\\lambda \\left( \\dim \\mathbb{L\n}\\right) $ $\\func{Vol}\\left( \\mathcal{P}\\right) .$ Hence, the critical\npoints of $\\mathcal{E}\\left( s\\right) $ are precisely the critical points of \n$\\mathcal{D}\\left( s\\right) $ with respect to deformations through\nequivariant maps, i.e. equivariant harmonic maps. So indeed, to understand\nthe properties of these critical points, a rigorous equivariant harmonic map\ntheory is required, as initiated in \\cite{SaEarpLoubeau}.\n\n\\section{Concluding remarks}\n\n\\setcounter{equation}{0}\\label{sectConclusion}Given a smooth loop $\\mathbb{L}\n$ with tangent algebra $\\mathfrak{l}$ and a group $\\Psi $ that acts smoothly\non $\\mathbb{L}$ via pseudoautomorphism pairs, we have defined the concept of\na loop bundle structure $\\left( \\mathbb{L},\\Psi ,\\mathcal{P},s\\right) $ for\na principal $\\Psi $-bundle and a corresponding equivariant $\\mathbb{\n\\mathring{L}}$-valued map $s$, that also defines a section of the\ncorresponding associated bundle. If we moreover have a connection $\\omega $\non $\\mathcal{P}$, then the horizontal component of the Darboux derivative of $s$\ndefines an $\\mathfrak{l}$-valued $1$-form $T^{\\left( s,\\omega \\right) }$,\nwhich we called the torsion. This object $T^{\\left( s,\\omega \\right) }$ then\nsatisfies a structural equation based on the loop Maurer-Cartan equation and\ngives rise to an $\\mathfrak{l}$-valued component of the curvature $\\hat{F}\n^{\\left( s,\\omega \\right) }.$ Overall, there are several possible directions\nto further this non-associative theory.\n\n\\begin{enumerate}\n\\item From a more algebraic perspective it would be interesting to construct\nadditional examples of smooth loops, in particular those that are not\nMoufang and possibly are not even $G$-loops, in order to more concretely\nstudy the corresponding bundles in those situations. In fact, it may not\neven be necessary to have a full loop structure; it may be sufficient to\njust have a right loop structure, so that division is possible only on the\nright. Left division was used rarely, and it may be possible to build up a\nfull theory without needing it. New examples of loops may give rise to new\ngeometric structures.\n\n\\item In Lie theory, the Maurer-Cartan equation plays a central role. As\nwe have seen, there is an analog in smooth loop theory as well. A better\nunderstanding of this equation is needed. The standard Maurer-Cartan\nequation is closely related to the concept of integrability, but it is not\nclear how to interpret the non-associative version.\n\n\\item In defining the loop bundle structure, we generally have assumed that\nthe map $s$ is globally defined. 
However, this may place strict topological\nrestrictions. It may be reasonable to allow $s$ to be defined only locally.\nThis would give more flexibility, but it would need to be checked carefully\nwhether other related quantities are well-defined.\n\n\\item We have defined a functional of Chern-Simons type in Section \\ref\n{sectVar}. There are further properties that need to be investigated. For\nexample, is it possible to use the associator to define reasonable\nfunctionals on higher-dimensional manifolds? If the section $s$ is defined\nonly locally, are these functionals well-defined? Finally, do these\nfunctionals have any topological meaning?\n\n\\item In $G_{2}$-geometry, significant progress has been made in \\cite\n{Bagaglini2,DGKisoflow,GrigorianOctobundle, GrigorianIsoflow,SaEarpLoubeau}\nregarding the existence of critical points of the energy functional (\\ref\n{Efunc}) via a heat flow approach. However, it is likely that a more direct\napproach, similar to Uhlenbeck's existence result for the Coulomb gauge \\cite\n{UhlenbeckConnection}, could also be used. This would give existence of a\npreferred section $s$ for a given connection or, conversely, a preferred\nconnection in a gauge class for a fixed section $s$.\n\\end{enumerate}\n\nOverall, the framework presented in this paper may give an impetus to the\ndevelopment of a larger theory of \\textquotedblleft nonassociative\ngeometry\\textquotedblright .\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\n\t\t\\label{sec:Intro}}\n\nIt is somewhat misleading to use the terms 'New Physics' or 'Beyond Standard\nModel Physics' for results that explicitly search for signatures that would\nresult from extensions to the standard model. In almost any precision\nmeasurement of the production, properties or decays of already-known particles,\nthere is the possibility that an unusual result can be explained with an \nextension to the standard model. In a certain sense then, nearly every result\nbeing shown here this week is a search for new physics. In particular, \nLee Roberts will be giving a presentation on searches for 'New Physics' in\nlow energy experiments.\n\nHowever, even restricting the discussion to analyses that search directly for\nextensions to the standard model at high energy colliders would create a\ndiscussion that goes on for far too long. With great regret, I have had to\ntrim my selection of topics very sharply. There is just too much good work\ndone here to give each study adequate coverage. In particular, there has been\na great deal of excellent work preparing for the analysis of the imminent\nflood of data from the LHC. I can only refer the reader to the parallel\nsessions of this conference.\n\nThe bulk of the recent results are from the TeVatron, which has been performing\nquite well. 
Although most of the results described here are based on smaller\ndatasets, just under $7\\,\\mathrm {fb}^{-1}$ have been delivered to each experiment \nat this time. Results from the CDF and D0 collaborations are collected\nat~\\cite{TeV_web}.\n\nCertain characteristics of hadron collisions are common to all or nearly all\nsearches for exotic phenomena in them. The copious production of multijet\n``QCD\" events is suppressed with detectors designed to reject fake electrons\nand muons (hereafter referred to as leptons, $\\ell$), kinematic cuts and\nisolation cuts that require that the identified lepton not be surrounded by\nother activity in the detector. Multijet background is rarely, if ever,\nmodeled effectively with Monte Carlo simulation techniques. For the many\nsearches which select highly energetic leptons or momentum imbalance in the\nfinal state, the following known physics processes typically produce significant \nbackgrounds: the Drell-Yan process $p\\overline p$$\\rightarrow \\gamma^* \/ Z \\rightarrow$$\\ell^+ \\ell^- $,\n$\\gamma^* \/ Z \\rightarrow$$\\tau^+\\tau^-$, $W^\\pm \\rightarrow \\ell^\\pm\\nu$, and $t\\overline t$~production. The\ndiboson production processes $p\\overline p$$\\rightarrow VV$ with $V \\in \\{\\gamma, Z,W\\}$ have \nlower production cross-sections but also create unusual signatures which are of\ninterest in many searches.\n\nJust as limiting as backgrounds are the kinematic facts of life in hadron\ncolliders. In $p\\overline p$~(or $pp$) collisions, the component of the initial-state\nmomentum along the collision axis is not known and kinematic calculations can\nonly be done in the plane perpendicular to the collision. I will use \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$}\nto indicate the opposite of the observed sum of particle momenta in this\ntransverse plane, and \\mbox{$p_T$}~to indicate the momentum of an object projected onto\nthis transverse plane.\n\n\n\\section{About Supersymmetry\n\t\t\\label{sec:SUSY}}\n\nAs many of the results discussed here are based on supersymmetric (SUSY)\nextensions of the standard model, a short introduction is appropriate. In no\nway, however, can this replace the many excellent existing reviews and\nintroductions, some of which may be found in \nreference~\\cite{SUSY_reviews,Martins}.\n\nSUSY provides solutions to several existing dilemmas in the standard model.\nOne is the ``${\\mathrm M}_{H}$ problem\". The propagator for a Higgs scalar with fermionic\ncouplings $\\mathcal{L} = - \\lambda_f H f {\\bar f}$ has one-loop correction terms \nthat contribute to the mass an amount\n$\\Delta {\\mathrm M}_H^2 = -\\frac{|\\lambda_f|^2}{8\\pi^2} \\Lambda_{UV}^2$, where\n$\\Lambda_{UV}$ is a cutoff scale corresponding to the point where our\nexisting understanding of nature's particle content becomes inadequate.\nOur difficulty is that we have no clear value for $\\Lambda_{UV}$ short of the\nPlanck scale, resulting in large negative contributions to $m_H^2$. 
However,\nif for every fermion there is a corresponding scalar field $S$ with interaction\n$\\mathcal{L} = -\\lambda_S |h|^2 |S|^2$, then the corresponding scalar loop\ndiagram introduces canceling mass contributions\n$\\Delta {\\mathrm M}_H^2 = \\frac{\\lambda_S}{16\\pi^2} [ \\Lambda_{UV}^2 + \\ldots ]$.\n\nA second outstanding problem in the standard model is the dark matter problem.\nThe lightest neutral sparticle often makes a good dark matter candidate.\n\nFinally, the coupling constants for the strong, weak and electromagnetic forces \nvary with the energy scale of the interaction according to the renormalization\ngroup. In the minimal supersymmetric extension to the standard\nmodel (MSSM), the coupling constants evolve to reach similar values at the\nscale of $10^{16}\\,\\mathrm {GeV}$, which does not happen in the standard model.\nQuoting~\\cite{Martins}, ``While the apparent unification of gauge couplings at\n[this scale] might just be an accident, it may also be taken as a strong hint\nin favor of a grand unified theory or superstring models, both of which can\nnaturally accommodate gauge coupling unification below ${\\mathrm M}_P$.\"\n\nBecause the SUSY mass spectrum evidently differs from that of the standard\nmodel particle content, there must be SUSY-breaking terms in the Lagrangian.\nThe primary constraint on these terms is that they not reintroduce ultraviolet\ndivergences of the sort we were glad to be rid of earlier. This is not a\nvery tight constraint; there are at least 105 new free parameters in the most\ngeneral form of the symmetry breaking Lagrangian. What this does is provide\na flexible framework in which different models of symmetry breaking can be\ninserted and investigated. Of the many different physical concepts that can\nand have been inserted into the SUSY breaking Lagrangian, two of the most \nstudied ones are the mSUGRA and the GMSB models. We have recent results in\nboth of these SUSY breaking models.\n\nThe generality of the SUSY breaking Lagrangian is perhaps why the SUSY\nhypothesis has had such a long run. After all, SUSY was proposed in the early\n1970s, when the standard model was still a novel model, and much of its particle\ncontent was unknown. For nearly 4 decades, theorists have been able to write\nLagrangians of all sorts into this framework and work out their possible\nimplications.\n\n$R$-parity is a hypothesized quantum number which differentiates standard\nmodel particles from SUSY particles. All of the searches presented here\nassume the conservation of $R$-parity, so that each SUSY particle is produced\nin conjunction with the corresponding SUSY anti-particle.\n\nIn SUSY, 2 Higgs doublets\n\\begin{equation}\nH_d = \\left( \\begin{array}{c} H_d^0 \\\\\n\t\t\t\t\t\t\t H_d^- \\end{array} \\right)\n\\hspace{15mm}\nH_u = \\left( \\begin{array}{c} H_u^+ \\\\\n\t\t\t\t\t\t\t H_u^0 \\end{array} \\right)\n\\label{Eqn:Higgses}\n\\end{equation}\ncoupling respectively to down- and up-type fermions are required in order to\nprevent triangle anomalies. The ratio of the vacuum expectation values of\nthe two neutral fields,\n$\\tan \\beta = \\langle H_u^0 \\rangle \/ \\langle H_d^0 \\rangle$\nis one of the key parameters of supersymmetry, or indeed of any 2 doublet\nmodel. 
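The two vacuum expectation values are not themselves independent; only their ratio is a new parameter. As a brief orientation (a standard relation in any 2 doublet model, quoted here in the normalization where the standard model vacuum expectation value is $v \\simeq 174\\,\\mathrm {GeV}$),\n\\begin{equation*}\n\\langle H_u^0 \\rangle = v\\sin \\beta ,\\hspace{10mm}\n\\langle H_d^0 \\rangle = v\\cos \\beta ,\\hspace{10mm}\nv^2 = \\langle H_u^0 \\rangle ^2 + \\langle H_d^0 \\rangle ^2 ,\n\\end{equation*}\nwith $v$ fixed by the measured ${\\mathrm M}_W$ and ${\\mathrm M}_Z$, so that $\\tan \\beta$ remains free while the electroweak scale is unchanged.\n\n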
Analyses that apply Bayesian methods to random samplings of parameter\nspace~\\cite{Bayes} strongly favor larger values of $\\tan \\beta$ at least in the\ncontext of mSUGRA and similar SUSY-breaking models.\n\nThe SUSY partners to the Higgs fields are the spin 1\/2 Higgsinos:\n\\begin{equation}\n\\tilde{H}_d = \\left( \\begin{array}{c} \\tilde{H}_d^0 \\\\\n\t\t\t\t\t\t\t\t\t \\tilde{H}_d^- \\end{array} \\right)\n\\hspace{20mm}\n\\tilde{H}_u = \\left( \\begin{array}{c} \\tilde{H}_u^+ \\\\\n\t\t\t\t\t\t\t\t\t \\tilde{H}_u^0 \\end{array} \\right)\n\\label{Eqn:Higgsinos}\n\\end{equation}\nThe charged components of the Higgsino fields can form linear admixtures\nwith the wino to create 2 charginos, $\\tilde{\\chi}^\\pm$. The\nconvention is that $m(\\tilde{\\chi}^\\pm_1) < m(\\tilde{\\chi}^\\pm_2)$. \n$\\tilde{\\chi}$, without subscript, refers to the lightest of the mixtures.\nThe neutral components of the Higgsino fields form linear admixtures\nwith the zino and photino to create 4 neutralinos, $\\tilde{\\chi}^0_i$.\n\nReturning to the scalars, after electroweak symmetry breaking, two doublet\nmodels yield 5 Higgs bosons: two $CP$-even neutral scalars $h$ and $H$, a\n$CP$-odd neutral $A$ and a pair of charged scalars, $H^\\pm$.\n\nNo discussion at length about SUSY is complete without mentioning that the\nMSSM at least is under some pressure from experimental results from the\nelectroweak symmetry breaking sector. The lightest neutral MSSM Higgs boson\n$h$ must have a mass below 135 GeV~\\cite{Martins,Pesky} and experimental lower\nbounds~\\cite{LEP_Higgs} have come to approach this level.\n\n\n\\section{Searches for $\\tilde{t}$\n\t\t\\label{sec:stop}}\n\nWe have recent search results for the pair production of the SUSY partner to\nthe top quark in $p\\overline p$~ collisions. There are 3 decay channels under study.\nIn all 3 cases, limits are placed in a plane where the horizontal axis is the\nmass of the pair-produced $\\tilde{t}$ and the vertical axis is the mass of the\nfinal state SUSY particle.\n\n\nThe first channel is\n$\\tilde{t} \\rightarrow b \\tilde{\\chi}^+; \\tilde{\\chi}^+ \\rightarrow \\tilde{\\nu} \\ell^+$.\n$R$-parity conservation means that the charge conjugate process occurs on the\nother side of the event, where a $\\overline{\\tilde{t}}$ decays similarly. The\nsignature is an $\\ell^+ \\ell^- $~pair with \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$} from the escaping sneutrinos. There are 2\n$b$-jets in the final state, but both the D0 and CDF collaborations found\nkinematic selection sufficient. The recent D0 result~\\cite{D0_stop_emu} uses\n$3.1\\,\\mathrm {fb}^{-1}$ in the $e-\\mu$ channel in conjunction with earlier\n$1.1\\,\\mathrm {fb}^{-1}$ $e-\\mu$ and $e-e$ results. The CDF result~\\cite{CDF_stop_dilep} is\nbased on $1.0\\,\\mathrm {fb}^{-1}$ in all 3 dilepton channels. Limits are drawn in the\n$m(\\tilde{\\nu})$ \\hbox{\\it vs.}~$m(\\tilde{t})$ plane and extend up to\n$m(\\tilde{\\nu}) \\simeq 120\\,\\mathrm {GeV}$.\n\nThe second channel is\n$\\tilde{t} \\rightarrow b \\tilde{\\chi}^+; \\tilde{\\chi}^+ \\rightarrow \\tilde{\\chi}^0 (W^+\/H^+\/G^+)$\nwhere the remaining charged boson decays semileptonically. This channel\nwas originally of interest when measured values of $m(t)$ seemed to be a little\nlower in the dilepton channel. It is possible if $m(\\tilde{t}) < m(t)$ for the\nSUSY process to contaminate the $t\\overline t$~dilepton channel and pull down the \napparent $t$ quark mass. 
With 4 undetected particles in the final state\n(the $\\tilde{\\chi}^0$ is taken to be stable), the kinematics are very\nunderconstrained, even in the transverse plane. However, one may use a\nweighted sum of possible solutions to the kinematic problem to estimate\n$m(\\tilde{t})$. CDF has set limits~\\cite{CDF_Robin} on $m(\\tilde{\\chi}^0)$\nas a function of $m(\\tilde{t})$ (up to $197\\,\\mathrm {GeV}$) and the assumed\n\\BR{$\\tilde{\\chi}^\\pm$}{$\\tilde{\\chi}^0\\nu\\ell^\\pm$} ~using $2.7\\,\\mathrm {fb}^{-1}$.\n\nThe third channel to be studied is $\\tilde{t} \\rightarrow c \\tilde{\\chi}^0$. As the\nlifetime of charm hadrons typically is shorter than that of bottom hadrons,\nand as the transverse momentum of the charged products of charm decays\ntypically is less than that of bottom decays, obtaining a pure sample of charm\ndecays with impact parameter tagging is very difficult. The CDF collaboration\nhas developed a 2-output, 22-input neural network that distinguishes (at one\noutput) between charm and bottom jets. The other output distinguishes between\ncharm and light or $\\tau$ jets. Cutting on the sum of the two outputs, they\nset limits~\\cite{CDF_underdocumented} in the $m(\\tilde{\\chi}^0)$\n\\hbox{\\it vs.}~$m(\\tilde{t})$ plane extending up to $m(\\tilde{t}) = 180\\,\\mathrm {GeV}$.\n\n\n\\section{Trifermion SUSY Searches\n\t\t\\label{sec:threebees}}\n\nSUSY allows a number of channels leading to 3 leptons in the final state,\nas shown in Figure~\\ref{Fig:trileptons}. There are relatively few backgrounds,\nbut the cross-section for production times the branching ratio for decay into\nany particular combination of leptons is small. Depending on the particular\nvalues of the SUSY-breaking parameters (here, the mSUGRA breaking is used) it\nmay happen that the mass of the charginos or neutralinos produced at the\n$q \\overline q$ vertex is only a little larger than the mass of the escaping \n$\\tilde{\\chi}_1^0$, in which case a low-momentum lepton is produced. For the\nhigh values of $\\tan \\beta$ that are of particular interest, $\\tau^\\pm$ leptons\nare often produced, and are detected by their decays to electrons or muons that\nare also of lower momentum. To increase sensitivity, then, it is common not to\nattempt to identify the lepton of third lowest \\mbox{$p_T$}, but rather to just ask for\na charged particle that is isolated from any jets that appear in the event.\nRobert Forrest and Todd Adams have presented the CDF~\\cite{CDF_trileptons} and\nD0~\\cite{D0_trileptons} results at this conference.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=80mm]{Fig1.eps}\n\\caption{Some of the ways in which SUSY creates trilepton signatures in\n$p\\overline p$~collisions.}\n\\label{Fig:trileptons}\n\\end{figure}\n\nAnother final state with three fermions produced via SUSY diagrams has been\ninvestigated by CDF~\\cite{CDF_lljj}. Suppose that in the top diagram of\nFigure~\\ref{Fig:trileptons} the $W$ materializes as a $q\\overline q$ pair,\ncreating 2 jets. The resulting event then appears as a $WZ$ pair with \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$}.\nIn the standard model, hadronically decaying $W$s with $Z\\rightarrow$$\\ell^+ \\ell^- $ do not have\n\\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$} except as a result of mismeasurement, so this is a relatively clean final\nstate. 
While it has to be admitted that the existing sensitivity is not really\ncomparable to what might reasonably be expected in SUSY, there are several\nreasons why large improvements can be expected in the future. To date, only\n$Z\\rightarrow$$e^+e^-$ has been investigated, and $b$-jet identification has not been \nemployed although a large $t\\overline t$~background is present. Also, the present result\nis based on $2.7\\,\\mathrm {fb}^{-1}$ of data at one of the two TeVatron experiments; a final\nsample some 7 or 8 times larger than this can be expected.\n\n\n\\section{Gauge Mediated Supersymmetry Breaking\n\t\t\\label{sec:GMSB}}\n\nIn order to give different mass spectra to SUSY \\hbox{\\it vs.}~standard model particles\nusing gauge interactions, one can postulate the existence of new fields, called\nmessengers, that couple the standard model and SUSY particles to an ultimate\nsource of symmetry breaking. In these GMSB models, the lightest neutral SUSY\nparticle is nearly always the gravitino, which is an interesting dark matter\ncandidate for masses on the scale of a few $\\,\\mathrm {keV}$. For the collider\nexperimentalist, the way to think of various versions of this model is to\ncategorize them in terms of their next-to-lightest SUSY particle (NLSP).\nWhatever particular SUSY particles might be created at the hard scattering\nvertex, they will cascade down to the NLSP (assuming $R$-parity conservation)\nwhich will, after some lifetime, decay to an undetected gravitino. The nature of\nthe NLSP will then determine what type of events to look for in the dataset. \n\nWhen the NLSP is the lightest neutralino and $m(\\tilde{\\chi}^0) <$${\\mathrm M}_Z$, its\ndecay produces a photon in conjunction with the gravitino. If the \n$\\tilde{\\chi}_1^0$ lifetime is on the order of $10\\,\\mathrm {ns}$, the arrival of the\n$\\gamma$ will be delayed because of the flight path, as shown in\nFigure~\\ref{Fig:latelight}. The CDF detector has $\\sim 0.5\\,\\mathrm {ns}$ time resolution in\nits EM calorimeter, which makes this type of search feasible. In addition to\nthe delayed photon, the search requires a jet and \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$}. Limits up to\n$m(\\tilde {\\chi}^0) > 191\\,\\mathrm {GeV}$ for $\\tau(\\tilde {\\chi}^0) > 5\\,\\mathrm {ns}$ were obtained\nand a detailed description of the analysis was published in\n2008~\\cite{CDF_latelight}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=80mm]{Fig2.eps}\n\\caption{Why photons from $\\tilde{\\chi}^0 \\rightarrow \\gamma\\tilde{G}$ arrive late\nin the electromagnetic calorimeter of a large collider experiment.}\n\\label{Fig:latelight}\n\\end{figure}\n\nWhen the lifetime of the neutralino is on the order of a few $\\,\\mathrm {ns}$ or less,\nthe delayed photon technique will not work. However, as a consequence of\n$R$-parity, there should be 2 SUSY cascades in the event leading to 2\nNLSP $\\tilde{\\chi}^0$ decays to $\\gamma\\tilde{G}$. At this conference,\nEunsin Lee has reported on a search~\\cite{CDF_earlylight} for GMSB at CDF which\nrequires 2 photons with high \\mbox{$p_T$}~along with \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$} from the gravitinos. 
Limits\nup to $m(\\tilde {\\chi}^0) > 149\\,\\mathrm {GeV}$ for $\\tau(\\tilde {\\chi}^0) < 1\\,\\mathrm {ns}$ were\nobtained.\n\n\n\\section{MSSM Higgs\n\t\t\\label{sec:MSSM_Higgs}}\n\nIn the large $\\tan \\beta$ limit, the mass and couplings of the $A$ boson\napproach the mass and couplings of one of the two $CP$-even bosons $h$ or $H$.\nIf the $A$ approaches the $H$ and $m(H)$ is large, one has the ``decoupling\" limit, where $h$\nbecomes in many ways rather similar to the standard model Higgs. If the $A$ approaches the $h$,\n$m(A)$ is not too large and hadron colliders can search for the $A$\nin the modes $A \\rightarrow \\tau^+\\tau^-$, $bA \\rightarrow b\\tau^+\\tau^-$ and $bA \\rightarrow bbb$.\nThe $Abb$ and $A\\tau\\tau$ couplings are enhanced relative to the experimentally\ndifficult $Att$ and $A\\nu\\nu$ couplings by a factor of $\\tan^2 \\beta$ and so\nlimits on the maximum possible value of $\\tan \\beta$ can be set as a function\nof $m(A)$. John Conway and Flera Rizatdinova have\ndiscussed the recent TeVatron results at this conference. At this time, values of $\\tan \\beta$\nover $\\simeq 30$ are ruled out~\\cite{CDF_MSSM,D0_MSSM} at $m(A) \\simeq 130\\,\\mathrm {GeV}$;\nif these results are scaled by the expected final Run II luminosity and\n$\\tan^2 \\beta$, it is reasonable to guess that the TeVatron experiments will\nultimately be able to set limits as low as $\\tan \\beta \\simeq 20$. More\ndetailed studies of the potential reach of the TeVatron and the LHC have been\ndone recently~\\cite{MSSM_future}.\n\n\n\\section{NMSSM Higgs\n\t\t\\label{sec:NMSSM_Higgs}}\n\nGiven the increasing restrictions on the available parameter space of the\nminimal supersymmetric extension of the standard model, it is natural to\nconsider a nearly-minimal SUSY extension. In the NMSSM SUSY model, the\nsmallest possible combination of fields is added to the known standard model\nfields and their SUSY partners. Neutral weak isospin singlet fermion and\ncorresponding complex scalar fields are introduced. The resulting physical\ncontent of the theory includes a new light pseudoscalar, $a$, which (in the\nmanner characteristic of Higgs bosons) decays into the heaviest kinematically\navailable particles. For $m(a)$ above $2M_\\mu$, $a \\rightarrow \\mu\\mu$ is possible\nand has a nearly 100\\% branching ratio. If $m(a)$ is over $\\simeq 3$ times\nthe pion mass, hadronic decays become dominant; when $m(a) > 2 M_\\tau$, the\ndecay into $\\tau^+\\tau^-$~becomes the dominant mode. Interest in this model was\nincreased~\\cite{HK_excitement} by the unusual dimuon mass spectrum observed in\n$\\Sigma \\rightarrow p\\mu^+\\mu^-$ by the HyperCP~\\cite{HK_thought} experiment.\n\nIn $e^+e^-$~colliders, the $\\Upsilon$ may decay to $a\\gamma$ and there should be\na narrow peak in the $\\gamma$ energy spectrum for events where a $\\tau^+\\tau^-$~or\n$\\mu^+\\mu^-$~pair has been identified. A search using this method was performed\nearlier by the CLEO collaboration~\\cite{CLEO_no_a} which set limits on\n\\BR{$\\Upsilon(1S)$}{$a\\gamma$} ~$\\times$ \\BR{$a$}{$\\mu^+\\mu^-$} ~on the scale of\na few times $10^{-6}$ in the range of about $250\\,\\mathrm {MeV}$ to $3.5\\,\\mathrm {GeV}$, and also upon\n\\BR{$\\Upsilon(1S)$}{$a\\gamma$} ~$\\times$ \\BR{$a$}{$\\tau^+\\tau^-$} ~on the scale\nof a few times $10^{-5}$ in the range of about $5$ to $9\\,\\mathrm {GeV}$. 
More\nrecently, BaBar~\\cite{BaBar_no_a} examined their data for evidence of this\nprocess, using the case where one $\\tau$ decayed to $e \\nu \\overline\\nu$ and\nthe other decayed to $\\mu \\nu \\overline\\nu$. They set limits on\n\\BR{$\\Upsilon(3S)$}{$a\\gamma$} ~$\\times$ \\BR{$a$}{$\\tau^+\\tau^-$} ~on the scale\nof a few times $10^{-5}$ in the range of about $4\\,\\mathrm {GeV}$ to just under $10\\,\\mathrm {GeV}$.\n\nIn a hadron collider, a pair of $a$ bosons would be produced as the result of\nthe decay of an $h$. From LEP II, we have a very general\nlimit~\\cite{OPAL_recoil} that any new scalar coupling to the $Z$,\nincluding the $h$, must have a mass over $82\\,\\mathrm {GeV}$, and so the $a$ is produced\nin a hadron collider with a high boost. That in turn means that its decay\ninto, say, a $\\mu^+\\mu^-$~pair will produce particles with a small opening angle. For\n$m(a) < 2{\\mathrm M}_\\tau$ the two tracks can be difficult to resolve in the\n$r$-$\\phi$~plane. D0~\\cite{D0_NMSSM} has searched for the $a$ in the case\n$2{\\mathrm M}_\\tau < m(a)$ using the modes $aa \\rightarrow \\mu\\mu\\mu\\mu$ and\n$aa \\rightarrow \\mu\\mu\\tau\\tau$. The branching ratios are substantially lower than for\n$aa \\rightarrow \\tau\\tau\\tau\\tau$ but the signature is clearer. Andy Haas has\ndiscussed the special reconstruction criteria needed for these collinear\nleptons at this conference. Limits on \n$\\sigma (p{\\overline p} \\rightarrow h)~\\times$ \\BR{$h$}{$aa$}\n~of a few $\\mathrm {pb}$ are obtained.\n\n\n\\section{Leptoquarks\n\t\t\\label{sec:GUTschmutz}}\n\nBecause silicon vertex detectors can identify jets produced by fragmenting $b$\nquarks, it is possible to search for third-generation leptoquarks at hadron\ncolliders. An $LQ$-$\\overline {LQ}$ pair would produce events containing 2\n$b$ jets and a large \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$} from the 2 $\\nu_{\\tau}$. As Sergey Uzunan described\nat this conference, this is the same signature as that which one might expect \nfrom pair production of ${\\tilde b}$, with subsequent ${\\tilde b} \\rightarrow b \\chi^0$\ndecays. Limits can then be set~\\cite{TwoForOne} upon both models as a result\nof what is basically a single search method. As a search for $\\tilde b$,\nlimits up to $m(\\tilde b) > 250\\,\\mathrm {GeV}$ are obtained; as a search for leptoquarks,\n$m(\\mathrm {LQ}_3) > 252\\,\\mathrm {GeV}$ is obtained.\n\nThe best way to find a leptoquark, at least a first-generation one, is to take\na lepton and accelerate it to high energy and then arrange for it to collide\nwith a quark, similarly accelerated. This is exactly what HERA did, collecting just\nunder $0.8\\,\\mathrm {fb}^{-1}$ of $e^\\pm p$ data at $\\sqrt{s} = 300-319\\,\\mathrm {GeV}$, $0.3\\,\\mathrm {fb}^{-1}$ of which\nhad polarized $e^\\pm$. The ZEUS~\\cite{ZEUS_LQ} collaboration measured the\n$Q^2$ distribution in their data and compared it to the standard model\nprediction. The (very small) difference was then compared against deviations\nthat would be created by first-generation leptoquarks, resulting in limits on\n$m(LQ) \/ \\lambda(LQ)$ of $0.5 - 1.9 \\,\\mathrm {TeV}$, where $\\lambda(LQ)$ is the coupling\nof the leptoquark to the fermions. Using the same technique they were also\nable to set limits on large extra dimensions and contact interactions with the\nsame $Q^2$ distribution. 
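For orientation, the two conventions used by the HERA experiments can be connected with simple arithmetic (a rough translation, ignoring the dependence on leptoquark type): taking the electromagnetic-strength coupling\n\\begin{equation*}\n\\lambda(LQ) = \\sqrt{4\\pi\\alpha_{em}} \\approx 0.30,\n\\end{equation*}\nthe ZEUS limits of $m(LQ) \/ \\lambda(LQ) > 0.5 - 1.9\\,\\mathrm {TeV}$ correspond roughly to mass limits of $m(LQ) \\gtrsim 150 - 575\\,\\mathrm {GeV}$, in the same general range as the H1 numbers quoted next.\n\n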
The H1 collaboration worked with different\nkinematic variables, specifically, $M$ and $y$; their results~\\cite{H1_LQ} are\nnot straight lines on the $\\lambda(LQ)$ \\hbox{\\it vs.}~$m(LQ)$ plane. If the couplings\nare taken to be $\\lambda(LQ) = \\sqrt{4\\pi\\alpha_{em}}$, the H1 analysis rules\nout leptoquark masses below 275 to $325\\,\\mathrm {GeV}$, depending on the type of\nleptoquark.\n\n\n\\section{Hidden Valley Scenarios\n\t\t\\label{sec:DidYouFindItYet}}\n\nAs my long-time friend and one of our kind hosts here in Detroit Dave Cinabro\nonce accurately pointed out, ``When somebody writes a paper that says he looked\nfor something and he did not find it, well then, you have to believe him.\"\nAnother, more common, reaction to a null search is to imagine that the\nhypothesized new phenomenon does in fact exist, but at some higher energy\nscale which is at least for the moment experimentally inaccessible. Hidden\nValley scenarios are predicated on a third possible response: the new phenomenon\nexists at a relatively low mass scale, but is so weakly coupled to\nthe standard model phenomenology as to render it invisible, or at least, hard\nto see.\n\nOne can postulate a wide range of fields that could exist in such a hidden\nsector; ``hidden valleys\" is really a class of models rather than a\nspecific model. In the simplest example of such a model~\\cite{HV_theory} \nthe valley is populated with two electrically neutral quarks which are\nconfined into so-called ``v-hadrons\". Some of these particles may be stable,\nproviding dark matter candidates; big-bang nucleosynthesis considerations\nsuggest that at least one v-hadron has to have a lifetime much less than 1 sec.\nA $Z'$ that couples to both the hidden valley particles and the standard\nmodel ones is included in this model, with a mass in the $1 - 6\\,\\mathrm {TeV}$ range.\n\nAt this conference, Andy Haas has presented D0's search~\\cite{D0_HV} for\nv-hadrons that are produced by mixing with a Higgs boson and have a long\nlifetime; their decay is mediated by the $Z'$ and produces a pair of $b$ jets\nthat emanate from a vertex that is between 1.6 and $20\\,\\mathrm {cm}$ distant from the\n$p\\overline p$~interaction point. The large background from material interactions is\nsuppressed by comparing the locations of the jet vertices with the known\nmaterial distribution in the detector. Limits on\n$\\sigma (p{\\overline p} \\rightarrow HX)~\\times$ \\BR{$H$}{$HV {\\overline{HV}}$}\n\t\t\t\t\t\t\t ~$\\times~ \\mbox{Br}^2(HV \\rightarrow b{\\overline b})$\nas low as $1\\,\\mathrm {pb}$ are obtained.\n\n\n\\section{Supersymmetric Hidden Valley Dark Matter Model\n\t\t\\label{sec:TheWholeBallOfWax}}\n\nIn recent years, a number of experiments have reported results that could be\ninterpreted as dark matter annihilation to $e^+e^-$~pairs near the center of the\nMilky Way. Additionally, the DAMA experiment reports an annual modulation in\ntheir NaI(Tl) detector which may be interpreted as a signal from a dark\nmatter galactic halo. While there is no shortage of more mundane\nexplanations for these results, some authors~\\cite{AH_etal} have taken a more\nadventuresome approach. They begin with the assumption that all of these\nresults are in fact due to new physics and then ask what would that new physics\nlook like.\n\nThey come to the surprising conclusion that dark matter is on the\n$0.5 - 0.8 \\,\\mathrm {TeV}$ mass scale and that it annihilates to standard model\nparticles with ``sizeable\" cross-sections. 
With such a large mass, it is\nnatural to speculate that a new symmetry prevents the rapid decay of such\nstates. However, these states might couple to light (${\\mathcal O} (1\\,\\mathrm {GeV})$)\nparticles, known as ``dark photons\" $(\\gamma_D)$. They also have found that\nsuch a picture can be implemented in a SUSY framework with GMSB. In that case\na clear signature for $p\\overline p$~collider searches occurs through processes such as\nthat shown in Figure~\\ref{Fig:ballOwax}; a high energy $\\gamma$ would appear in\nconjunction with \\mbox{$E_T\\hspace{-1.2em}\\slash\\hspace{1.0em}$} and a collinear $\\mu^+\\mu^-$~pair from the $\\gamma_D$ decay.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=80mm]{Fig3.eps}\n\\caption{A dark photon production diagram in $p\\overline p$~collisions.}\n\\label{Fig:ballOwax}\n\\end{figure}\n\nThe low mass, high boost and decay into $\\mu^+\\mu^-$~pairs of the dark photon means\nthat one may use the same reconstruction techniques as were applied in searching\nfor the NMSSM $a$ in hadron colliders. The D0 collaboration has set\nlimits~\\cite{D0_ballOwax} on $m({\\tilde \\chi}^0)$ as a function of $m(\\gamma_D)$\nin the range $0.1 < m(\\gamma_D) < 2.5\\,\\mathrm {GeV}$\n\n\n\\section{Model Independent Searches\n\t\t\\label{sec:BruceGoneNow}}\n\nMuch of the motivation for searching for new physics beyond the standard model\nstems from our dissatisfaction with the many aspects of the standard model which\nwe find so surprising. Indeed, were it not for such astonishments as parity\nviolation, the $J\/\\psi$ observation, the large value of ${\\mathrm M}_{t}$~and many others, the\nstandard model would surely have been easier to figure out! While we do hope\nand expect that getting the correct extension to the standard model will somehow\nreduce our overall level of astonishment, history warns us that such an outcome\nis not at all certain. With this in mind, it behooves us to try to conduct\nsearches for new physics without the guidance of models that are at least in\npart constructed so as to reduce our astonishment.\n\nThe basic scheme for the modern model-independent search begins by defining a\nlarge number of final states. The definition is usually made in terms of the\nparticle content of the final state, where particles are defined by the\ndetection capabilities of the experiment's apparatus. So for example, final\nstates with low \\mbox{$p_T$}~electrons would typically evade detection in hadron\ncolliders, and such final states can not be included. Particles that require\nunusual reconstruction schemes are typically not included in the list of\npossible final states. Particles that are found by vertexing their decay\nproducts (such as $\\mathrm K^0_{\\mathrm S}$~or $\\mathrm D^{*+}$) have by and large not been included to date,\nalthough there is no specific reason why they could not be. One consequently\nshould not think of a model-independent search as being exactly the same as a\nsearch for ``everything\"; it is not quite that, at least to date.\n\nFor each entry on the list of possible final states, the standard model\nprocesses contributing to the final state are identified and modeled. The data\nare then compared against this predicted background, and cases where the data\nappear at a higher rate than the known physics rate are flagged. 
Cases where
the data appear at a lower rate are also interesting, both as a check on the
method and in case there might be new physics amplitudes that interfere
destructively with known amplitudes. In assessing the statistical significance
of any departure of reality from prediction, it is important to allow for the
fact that the more comparisons you make, the more likely it is that the most
discrepant result will be at or beyond any particular level of significance.

There are different ways to compare the data to the predicted rates of
known physics. The total number of events might differ.
Distributions of kinematic variables for the data and the expectation can be
compared with an overall quality-of-fit statistic, such as the Kolmogorov-Smirnov
statistic. The distribution of a kinematic variable, such as a reconstructed
mass, can be scanned for bumps. Or one might scan the distributions of
kinematic variables with dimensions of $\mathrm {GeV}$, such as \mbox{$p_T$}~or
reconstructed mass, from low to high values, and look for discrepancies in the
event counts above the scan point.

This type of analysis has been completed at the CDF~\cite{CDF_mis},
D0~\cite{D0_mis} and H1~\cite{H1_mis} experiments, although not all three have
utilized the full range of possible comparison methods. Jim Linnemann, in this
conference, has presented the D0 model independent search.
Table~\ref{Tab:MIScounts} shows the results of simple event-count comparisons
of data with expected background levels. The
H1 collaboration chose to express their results in terms of the number of observed
events \hbox{\it vs.}~the expected backgrounds; to facilitate comparison with the CDF
and D0 results I have calculated a corresponding number of standard deviations.

\begin{table}[h]
\begin{center}
\caption{Significance of event count discrepancies in 3 model-independent
searches.
See text regarding treatment of H1 results.\\}
\begin{tabular}{|c|c|c|} \hline
\textbf{CDF ($2.0\,\mathrm {fb}^{-1}$)} & \textbf{H1 ($0.5\,\mathrm {fb}^{-1}$)} & \textbf{D0 ($1.1\,\mathrm {fb}^{-1}$)} \\
\hline
$\gamma \tau$ \hspace{5.5mm} $2.2\sigma$ & $\nu 4j$ \hspace{4.0mm} $<3.0\sigma$ & $\mu jj$\mbox{$E_T\hspace{-1.2em}\slash\hspace{1.0em}$} \hspace{5.0mm} $9.3\sigma$ \\
\hline
$\mu \tau$ \hspace{5.5mm} $1.7\sigma$ & $e 4j$ \hspace{4.0mm} $<2.4\sigma$ & $\mu j\gamma$\mbox{$E_T\hspace{-1.2em}\slash\hspace{1.0em}$} \hspace{5.0mm} $6.6\sigma$ \\
\hline
$e \tau$\mbox{$E_T\hspace{-1.2em}\slash\hspace{1.0em}$} \hspace{2.5mm} $1.7\sigma$ & $eee$ \hspace{4.0mm} $\sim2.0\sigma$ & $\mu^+\mu^-$~\mbox{$E_T\hspace{-1.2em}\slash\hspace{1.0em}$} \hspace{1.5mm} $4.4\sigma$ \\
\hline
 & $\mu\nu$ \hspace{4.0mm} $\sim1.5\sigma$ & $\mu^+\mu^-$~$\gamma$ \hspace{3.0mm} $4.4\sigma$ \\
\hline
\end{tabular}
\label{Tab:MIScounts}
\end{center}
\end{table}

The statistically significant deviations in the channels flagged by the D0
analysis are attributed to defects in the modeling of the rate at which jets
fake photons, trigger simulation shortcomings, and \mbox{$p_T$}~resolution effects
in the D0 tracking system which affect muon measurements.

Notably, there is no overlap among the channels flagged by the 3 experiments.




\bigskip
\begin{acknowledgments}
I would like to thank a number of people who helped improve this presentation:
Todd Adams, Arnaud Duperrin, Andy Haas, Katja Kruger, Monica D'Onofrio,
Monica Turcato, Stefan Schmitt and Tom Wright. And I would also like very
much to thank our hard-working conference organizers for this very productive
meeting and for their gracious hospitality.
\end{acknowledgments}

\bigskip

","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Statistical models representations}
\label{app:chap2-graphs}
Graphical representations have been developed to represent and efficiently exploit (in)dependencies between random variables encoded in joint probability distributions. They are useful tools to concisely present the model under scrutiny, as well as direct supports for some derivations of inference procedures. Let us briefly present two types of graphical representations.

\paragraph{Probabilistic graphical models}
Formally, a probabilistic graphical model is a graph $G = (V, E)$ with nodes $V$ representing random variables and edges $E$ representing direct interactions between random variables. In many statistical models of interest, it is not necessary to keep track of all the possible combinations of realizations of the variables, as the joint probability distribution can be broken up into factors involving only subsets of the variables. The structure of the connections $E$ reflects this factorization.

There are two main types of probabilistic graphical models: \emph{directed graphical models} (or Bayesian networks) and \emph{undirected graphical models} (or Markov Random Fields). They allow one to represent different independencies and factorizations. In the next paragraphs we provide intuitions and recall some useful properties of graphical models; a good reference to understand all the facets of this powerful tool is \cite{Koller2009}.
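To fix ideas before the formal definitions, here is a small illustration of ours of such a factorization: potentials attached to subsets of variables replace the exponentially large joint table, and the unnormalized probability of a configuration is the product of the relevant factors. The subsets and weights below are arbitrary toy choices.
\begin{verbatim}
import itertools
import numpy as np

# Three binary variables; a potential is attached to each subset (clique).
psi = {
    (0, 1): lambda x0, x1: np.exp(0.8 * (2*x0 - 1) * (2*x1 - 1)),
    (1, 2): lambda x1, x2: np.exp(-0.5 * (2*x1 - 1) * (2*x2 - 1)),
    (0,):   lambda x0: np.exp(0.3 * (2*x0 - 1)),
}

def unnormalized_p(x):
    w = 1.0
    for subset, factor in psi.items():
        w *= factor(*[x[i] for i in subset])
    return w

# Brute-force normalization, feasible only because N is tiny here.
Z = sum(unnormalized_p(x) for x in itertools.product([0, 1], repeat=3))
print(unnormalized_p((1, 0, 1)) / Z)
\end{verbatim}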
\subparagraph{Undirected graphical models}
In undirected graphical models
the direct interaction between a subset of variables $C \subset V$ is represented by undirected edges interconnecting each pair in $C$. This fully connected subgraph is called a \emph{clique} and is associated with a real positive \emph{potential} function $\psi_C$ over the variables $\x_C=\{x_i\}_{i \in C}$ carried by $C$. The joint distribution over all the variables carried by $V$, $\x_V$, is the normalized product of all potentials
 \begin{gather}
 p(\x_V) = \frac 1 \cZ \prod_{C \in \mathcal{C}} \psi_C(\x_C).
 \end{gather}

\begin{flushright}
\begin{minipage}{0.95\linewidth}
 \hypertarget{item:chap2-rbm}{Example} \hyperlink{item:chap2-rbm}{(i)}: the Restricted Boltzmann Machine,
 \begin{gather}
 \label{eq:chap2-rbm-def}
 p(\x, \hidd) = \frac{1}{\cZ} e^{\x\T\W\hidd}p_x(\x)p_t(\hidd)
 \end{gather}
 with factorized $p_x$ and $p_t$ is handily represented using an undirected graphical model depicted in \citefig~\ref{fig:chap2-rbm-bis}. The corresponding set of cliques is the set of all the pairs with one input unit (indexed by $i = 1 \cdots N$) and one hidden unit (indexed by $\alpha = 1 \cdots M$), joined with the set of all single units. The potential functions are immediately recovered from \eqref{eq:chap2-rbm-def},
 \begin{gather}
 \mathcal{C} = \{\{i\}, \{\alpha\}, \{i,\alpha\}\,; \,i=1 \cdots N, \,\alpha = 1 \cdots M \} \, , \qquad
 \psi_{i\alpha}(x_i, t_\alpha) = e^{x_i W_{i\alpha}\hiddv_\alpha}\, ,\\
 p(\x, \hidd) = \frac{1}{\cZ}\prod_{\{i, \alpha\} \in \mathcal{C}}\psi_{i\alpha}(x_i, t_\alpha)\prod_{i=1}^Np_x(x_i)\prod_{\alpha=1}^Mp_t(t_\alpha).
 \end{gather}
 It belongs to the subclass of \emph{pairwise} undirected graphical models for which the size of the cliques is at most two.
\end{minipage}
\end{flushright}


Undirected graphical models handily encode \emph{conditional independencies}. Let $A, B, S \subset V$ be three disjoint subsets of nodes of $G$. $A$ and $B$ are said to be independent given $S$ if $p(A,B |S) = p(A|S)p(B|S)$. In the graph representation, this corresponds to cases where $S$ separates $A$ and $B$: there is no path between any node in $A$ and any node in $B$ that does not go through $S$.

\begin{flushright}
 \begin{minipage}{0.95\linewidth}
 Example \hyperlink{item:chap2-rbm}{(i)}: In the RBM, hidden units are independent given the inputs, and conversely:
 \begin{gather}
 p(\hidd | \x) = \prod_{\alpha =1}^M p(\hiddv_\alpha | \x), \qquad
 p(\x|\hidd) = \prod_{i=1}^N p(x_i | \hidd).
 \end{gather}
 This property is easily spotted by noticing that the graphical model (\citefig~\ref{fig:chap2-rbm-bis}) is \emph{bipartite}.
\end{minipage}
\end{flushright}

\begin{figure}[t]
 \centering
 \captionsetup{width=.4\linewidth}
 {\includegraphics[width=0.3\textwidth, valign=m]{chap2_glm.pdf}
 }
 \captionsetup{width=.9\linewidth}
 \caption{
 Left: Directed graphical model for $p(\x,\y,\W)$ without assumptions of factorizations for the channel and priors. Right: Directed graphical model reflecting factorization assumptions for $p(\x,\y|\W)$.\label{fig:chap2-glm}
 }
\end{figure}

\subparagraph{Directed graphical model } A directed graphical model uses a Directed Acyclic Graph (DAG), specifying directed edges $E$ between the random variables $V$.
It induces an ordering of the random variables and in particular the notion of parent nodes $\pi_i \subset V$ of any given vertex $i \in V$: the set of vertices $j$ such that $j \to i \in E$. The overall joint probability distribution factorizes as
\begin{gather}
 p(\x) = \prod_{i \in V} p(x_i | \x_{\pi_i}).
\end{gather}

\begin{flushright}
 \begin{minipage}{0.95\linewidth}
 \hypertarget{item:chap2-glm}{Example} \hyperlink{item:chap2-glm}{(ii)}: The stochastic single-layer feed-forward network $\y = g(\W\x ; \eps)$, where $g( \cdot; \eps)$ is a function applied component-wise including a stochastic noise $\eps$, equivalent to a conditional distribution $p_{\rm out}(\y | \W\x)$, and where inputs and weights are respectively drawn from distributions $p_x(\x)$ and $p_W(\W)$, has the joint probability distribution
 \begin{gather}
 \label{eq:chap2-glm-def}
 p(\y, \x, \W) = p_{\rm out}(\y | \W\x) p_x(\x) p_W(\W),
 \end{gather}
 precisely following such a factorization. It can be represented with a three-node DAG as in \citefig~\ref{fig:chap2-glm}. Here we applied the definition at the level of vector/matrix valued random variables. By further assuming that $\pout$, $p_W$ and $p_x$ factorize over their components, we keep a factorization compatible with a DAG representation
 \begin{gather}
 p(\y, \x, \W) = \prod_{i=1}^{N} p_x(x_i) \prod_{\mu = 1}^M \pout(y_\mu | \sum_{i=1}^N W_{\mu i}x_i) \prod_{\mu, i} p_W(W_{\mu i}).
 \end{gather}
 For the purpose of reasoning, it may sometimes be necessary to get to the finest level of decomposition, while sometimes the coarse-grained level is sufficient.
\end{minipage}
\end{flushright}

While a statistical physicist may have never been introduced to the formal definitions of graphical models, she has inevitably already drawn a few, for instance when considering the Ising model. She has also certainly found them useful to guide physical intuitions. The following second form of graphical representation is probably newer to her.

\paragraph{Factor graph representations}
Alternatively, high-dimensional joint distributions can be represented with factor graphs, which are undirected bipartite graphs $G = (V, F, E)$ with two subsets of nodes. The variable nodes $V$ represent the random variables as in the previous section (circles in the representation), while the factor nodes $F$ represent the interactions (squares in the representation) associated with potentials. The edge $(i \mu)$ between a variable node $i$ and a factor node $\mu$ exists if the variable $i$ participates in the interaction $\mu$. We note $\partial i$ the set of factor nodes in which variable $i$ is involved; they are the neighbors of $i$ in $G$. Equivalently, we note $\partial \mu$ the neighbors of factor $\mu$ in $G$; they carry the arguments $\{x_i\}_{i \in \partial \mu}$, shortened as $\x_{\partial \mu}$, of the potential $\psi_\mu$. The overall distribution is recovered in the factorized form:
\begin{align}
p(\x) = \frac 1 \cZ \prod_{\mu = 1}^M \psi_\mu(\x_{\partial \mu}).
\end{align}
Compared to an undirected graphical model, the cliques are represented by the introduction of factor nodes.


\begin{flushright}
 \begin{minipage}{0.95\linewidth}
 Examples: The factor-graph representation of the RBM \hyperlink{item:chap2-rbm}{(i)} is not much more informative than the pairwise undirected graph (see \citefig~\ref{fig:chap2-rbm-bis}).
 For the feed-forward neural network \hyperlink{item:chap2-glm}{(ii)} we draw the factor graph of $p(\y, \x| \W)$ (see \citefig~\ref{fig:chap2-glm-bis}).
 \end{minipage}
\end{flushright}

\section{Vector Approximate Message Passing for the GLM}
\label{sec:app-chap3-vamp}
\begin{figure}
 \centering
 \includegraphics[width=0.5\textwidth]{chap3_vamp_glm.pdf}
 \caption{Factor graph representation of the GLM for the derivation of VAMP (reproduction of \citefig~\ref{fig:chap3-vamp-glm} \label{fig:app-chap3-vamp-glm} to help follow the derivation described here in the \citeapp).}
\end{figure}

We recall here a possible derivation of G-VAMP discussed in \citechap~\ref{sec:chap3} (\citealg~\ref{alg:chap3-vamp}). We consider a projection of the BP equations for the factor graph \citefig~\ref{fig:app-chap3-vamp-glm}.

\subparagraph{Gaussian assumptions} We start by parametrizing marginals as well as messages coming out of the Dirac factors. For $a = 1, 2$:
\begin{gather}
 m_{x,{(a)}}(\x^{(a)}) = \cN(\x^{(a)}; \xh^{(a)}, \Cx^{(a)}) \, ,\qquad
 m_{z,{(a)}}(\z^{(a)}) = \cN(\z^{(a)}; \hat{\z}^{(a)}, \Cz^{(a)}) \, ,
\end{gather}
and
\begin{gather}
 \msgt{m}{\psi_x}{\x^{(a)}}(\x^{(a)}) \propto e^{-\frac 1 2 {\x^{(a)}}\T\A_x^{(a)} \x^{(a)} + {\B_x^{(a)}}\T {\x^{(a)}} } \, ,\\
 \msgt{m}{\psi_z}{\z^{(a)}}(\z^{(a)}) \propto e^{-\frac 1 2 {\z^{(a)}}\T\A_z^{(a)} \z^{(a)} + {\B_z^{(a)}}\T {\z^{(a)}} } \, .
\end{gather}

\subparagraph{Self consistency of the parametrizations at Dirac factor nodes}
Around $\psi_x$ the message passing equations are simply
\begin{gather}
 \label{eq:chap3-vamp-trick1}
 \msgt{m}{\psi_x}{\x^{(2)}}(\x^{(2)}) = \msg{m}{\x^{(1)}}{\psi_x}(\x^{(2)}), \qquad \msgt{m}{\psi_x}{\x^{(1)}}(\x^{(1)}) = \msg{m}{\x^{(2)}}{\psi_x}(\x^{(1)})
\end{gather}
and similarly around $\psi_z$. Moreover, considering that messages are marginals from which the contribution of the opposite incoming message is removed, we have
\begin{gather}
 \msg{m}{\x^{(1)}}{\psi_x}(\x^{(1)}) \propto m_{x,{(1)}}(\x^{(1)}) / \msgt{m}{\psi_x}{\x^{(1)}}(\x^{(1)}), \\
 \msg{m}{\x^{(2)}}{\psi_x}(\x^{(2)}) \propto m_{x,{(2)}}(\x^{(2)}) / \msgt{m}{\psi_x}{\x^{(2)}}(\x^{(2)}) \, .
\end{gather}
Combining this observation with \eqref{eq:chap3-vamp-trick1} leads to updates \eqref{alg:chap3-vamp-Ax1}
and \eqref{alg:chap3-vamp-Ax2}.
The same reasoning can be followed for the messages around $\psi_z$, leading to updates \eqref{alg:chap3-vamp-Az1}
and \eqref{alg:chap3-vamp-Az2}.

\subparagraph{Input and output update functions}
The update functions of means and variances of the marginals are deduced from the parametrized message passing.
For the variable $\x^{(1)}$, taking into account the prior $p_x$, the updates are very similar to GAMP input functions:
\begin{align}
 \xh^{(1)}
 & \propto \int \dd{\x^{(1)}} \x^{(1)} p_x(\x^{(1)}) \msgt{m}{\psi_x}{\x^{(1)}}(\x^{(1)}) \\
 & = \frac{1}{\cZ_x^{(1)}} \int \dd{\x^{(1)}} \x^{(1)} p_x(\x^{(1)}) e^{-\frac 1 2 {\x^{(1)}}\T\A_x^{(1)} \x^{(1)} + {\B_x^{(1)}}\T {\x^{(1)}}} = f_1^x( {\B_x^{(1)}}, \A_x^{(1)}) \, , \\
 {\Cx}^{(1)} &= \frac{1}{\cZ_x^{(1)}} \int \dd{\x^{(1)}} \x^{(1)}{\x^{(1)}}\T
 p_x(\x^{(1)}) e^{-\frac 1 2 {\x^{(1)}}\T\A_x^{(1)} \x^{(1)} + {\B_x^{(1)}}\T {\x^{(1)}}} \\
 &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad - f_1^x( {\B_x^{(1)}}, \A_x^{(1)}) {f_1^x( {\B_x^{(1)}}, \A_x^{(1)}) }\T \notag \\
 &= f_2^x( {\B_x^{(1)}}, \A_x^{(1)}) \, ,
\end{align}
where $\cZ_x^{(1)}$ is as usual the partition function ensuring normalization.

Similarly for the variable $\z^{(1)}$, the update functions are very similar to the GAMP output functions, including the information coming from the observations:
\begin{align}
 \hat{\z}^{(1)} & \propto \int \dd{\z^{(1)}} \z^{(1)} \pout(\y|\z^{(1)}) \msgt{m}{\psi_z}{\z^{(1)}}(\z^{(1)}) \\
 & = \frac{1}{\cZ_z^{(1)}} \int \dd{\z^{(1)}} \z^{(1)} \pout(\y|\z^{(1)}) e^{-\frac 1 2 {\z^{(1)}}\T\A_z^{(1)} \z^{(1)} + {\B_z^{(1)}}\T {\z^{(1)}}} = f_1^z( {\B_z^{(1)}}, \A_z^{(1)}) \, , \\
 {\Cz}^{(1)} &= \frac{1}{\cZ_z^{(1)}} \int \dd{\z^{(1)}} \z^{(1)}{\z^{(1)}}\T
 \pout(\y|\z^{(1)}) e^{-\frac 1 2 {\z^{(1)}}\T\A_z^{(1)} \z^{(1)} + {\B_z^{(1)}}\T {\z^{(1)}}} \\
 &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad - f_1^z( {\B_z^{(1)}}, \A_z^{(1)}) {f_1^z( {\B_z^{(1)}}, \A_z^{(1)}) }\T \notag \\
 &= f_2^z( {\B_z^{(1)}}, \A_z^{(1)}) \, .
\end{align}

\subparagraph{Linear transformation}
For the middle factor node we consider the vector variable concatenating $\bar{\x}=[\x^{(2)} \z^{(2)}] \in \R^{N + M}$. The computation of the corresponding marginal with the message passing then yields
\begin{gather}
 m_{\bar{\x}}(\bar{\x}) \propto
 \lim\limits_{\Delta \to 0}\cN(\z^{(2)}; \W\x^{(2)} , \Delta \mat{I}_M) e^{-\frac 1 2 {\x^{(2)}}\T\A_x^{(2)} \x^{(2)} + {\B_x^{(2)}}\T {\x^{(2)}} } e^{-\frac 1 2 {\z^{(2)}}\T\A_z^{(2)} \z^{(2)} + {\B_z^{(2)}}\T {\z^{(2)}}}.
\end{gather}
The means of $\x^{(2)}$ and $\z^{(2)}$ are then updated through
\begin{gather}
 \xh^{(2)}, \hat{\z}^{(2)} = \argminn{\x, \z}\left[
 \Vert\W\x - \z \Vert^2 /\Delta + {\x}\T\A_x^{(2)} \x - 2 {\B_x^{(2)}}\T {\x} + {\z}\T\A_z^{(2)} \z - 2{\B_z^{(2)}}\T {\z}
 \right],
\end{gather}
at $\Delta \to 0$.
At this point it is advantageous in terms of speed to consider the singular value decomposition $\W=\mat{U}\mat{S}\mat{V}\T$ and to simplify the form of the variance matrices by taking them proportional to the identity, i.e. $\A_z^{(2)} = A_z^{(2)} \mat{I}_{M}$ etc.
Under this assumption the solution of the minimization problem is
\begin{gather}
 \xh^{(2)} = g^x_1({\B_x^{(2)}}, A_x^{(2)}, {\B_z^{(2)}}, A_z^{(2)}) = \mat{V} \, \mat{D} \left( {A^{(2)}_z}^{-2} \mat{S} \mat{U}\T \B_z^{(2)} + {A^{(2)}_x}^{-2} \mat{V}\T \B_x^{(2)} \right) \, , \\
 \hat{\z}^{(2)} = g^z_1({\B_x^{(2)}}, A_x^{(2)}, {\B_z^{(2)}}, A_z^{(2)}) = \W g^x_1({\B_x^{(2)}}, A_x^{(2)}, {\B_z^{(2)}}, A_z^{(2)}) \, ,
\end{gather}
with $\mat{D}$ a diagonal matrix with entries $D_{ii} = ({A_z^{(2)}}^{-1} S_{ii}^2+ {A_x^{(2)}}^{-1})^{-1} $.
The scalar variances are then updated using the traces of the Jacobians of $g^x_1$ and $g^z_1$ with respect to the $\B^{(2)}$-s,
\begin{align}
 \Cx^{(2)} &= \frac{A_x^{(2)}}{N} \mathrm{tr}\left(\partial g^x_1 / \partial{\B_x^{(2)}}\right) \mat{I}_N = \frac{1}{N} \sum_{i=1}^N ({A_z^{(2)}}^{-1} S_{ii}^2+ {A_x^{(2)}}^{-1})^{-1} \mat{I}_N \\ &= g^x_2({\B_x^{(2)}}, A_x^{(2)}, {\B_z^{(2)}}, A_z^{(2)})\\
 \Cz^{(2)} &= \frac{A_z^{(2)}}{M} \mathrm{tr}\left(\partial g^z_1 / \partial{\B_z^{(2)}}\right) \mat{I}_M = \frac{1}{M} \sum_{i=1}^N S_{ii}^2 ({A_z^{(2)}}^{-1} S_{ii}^2+ {A_x^{(2)}}^{-1})^{-1} \mat{I}_M \\ &= g^z_2({\B_x^{(2)}}, A_x^{(2)}, {\B_z^{(2)}}, A_z^{(2)}).
\end{align}

\section{Multi-value AMP derivation for the GLM}
\label{app:chap6-vect-amp}
We here present the derivation of the multi-value AMP and its SE motivated in \citesec~\ref{sec:chap3-multivalue}, focusing on the multi-value GLM. These derivations also appear in \cite{Gabrie2019}.

\subsection{Approximate Message Passing}
The systematic procedure to write AMP for a given joint probability distribution consists in first writing BP on the factor graph, second, projecting the messages onto a parametrized family of functions to obtain the corresponding relaxed-BP, and third, closing the equations on a reduced set of parameters by keeping only leading terms in the thermodynamic limit.

\begin{figure}[t]
 \centering
 {\includegraphics[width=0.35\textwidth]{chap6_glm_vec.pdf}
 }
 \caption{Factor graph of the Generalized Linear Model (GLM) on vector variables corresponding to the joint distribution \eqref{eq:chap6-glm-vec-meas}. \label{fig:chap6-vect-amp}}
\end{figure}

For the generic multi-value GLM the posterior measure we are interested in is
\begin{gather}
 \label{eq:chap6-glm-vec-meas}
 p(\X | \Y, \W) = \frac{1}{\cZ(\Y, \W)} \prod_{i=1}^N p(\x_i)\prod_{\mu=1}^M \pout(\y_\mu | \vect{w}_\mu\T\X / \sqrt{N}), \quad \x_i \in \R^P, \quad \y_\mu \in \R^P,
\end{gather}
where the known entries of matrix $\W$ are drawn i.i.d. from a standard normal distribution (the scaling in $1/\sqrt{N}$ is here made explicit).
The corresponding factor graph is given on \citefig~\ref{fig:chap6-vect-amp}.
We are considering the simultaneous reconstruction of $P$ signals $\x_{0,(k)} \in \R^N$ and therefore write the message passing on the variables $\x_i \in \R^P$.
The major difference with the scalar version ($P=1$) of AMP is that we will consider covariance matrices between variables coming from the $P$ observations instead of scalar variances.

\paragraph{Belief propagation (BP)}
We start with BP on the factor graph of \citefig~\ref{fig:chap6-vect-amp}.
For all pairs of indices $i-\mu$, we define the update equations of the message functions
\begin{gather}
 \label{eq:chap6-bp-calamp-1}
 \msg{\tilde{m}^{(t)}}{\mu}{i} (\x_i) = \frac{1}{\msg{\cZ}{\mu}{i}}
 \int \prod_{i'\neq i} \dd{\x_{i'}}\pout \left(\y_\mu | \sum_j \frac{W_{\mu j}}{\sqrt{N}}\x_j\right) \prod_{i'\neq i} \msg{m^{(t)}}{i'}{\mu}(\x_{i'})\\
 \label{eq:chap6-bp-calamp-2}
 \msg{m^{(t+1)}}{i}{\mu} (\x_i) = \frac{1}{\msg{\cZ}{i}{\mu}}
 p_x(\x_i)\prod_{\mu' \neq \mu} \msg{\tilde{m}^{(t)}}{\mu'}{i}(\x_i),
\end{gather}
where $\msg{\cZ}{\mu}{i}$ and $\msg{\cZ}{i}{\mu}$ are normalization constants that allow the messages to be interpreted as probability distributions.
To improve readability, we drop the time indices in the following derivation, and only specify them in the final algorithm.

\paragraph{Relaxed BP (r-BP)} The second step of the derivation is to develop messages keeping only terms up to order $O(1/N)$ as we take the thermodynamic limit $N \to + \infty$ (at fixed $\alpha = M/N$). At this order, we will find that it is consistent to consider the messages to be approximately Gaussian, i.e. characterized by their means and co-variances. Thus we define
\begin{gather}
\label{eq:chap6-vect-amp-rbp-def-xh}
\msg{\xh}{i}{\mu} = \int \dd{\x} \x \; \msg{m}{i}{\mu}(\x) \\
\label{eq:chap6-vect-amp-rbp-def-Cx}
\msg{\Cx}{i}{\mu} = \int \dd{\x} \x \x^T \; \msg{m}{i}{\mu}(\x) - \msg{\xh}{i}{\mu}\msg{\xh\T}{i}{\mu}
\end{gather}
and
\begin{gather}
\label{eq:chap6-vect-amp-rbp-def-w}
\displaystyle
\msg{\w}{\mu}{i} = \sum_{i' \neq i} \frac{\Wuip}{\sqrt{N}}\msg{\xh}{i'}{\mu} \\
\label{eq:chap6-vect-amp-rbp-def-V}
\displaystyle
\msg{\V}{\mu}{i} = \sum_{i' \neq i}\frac{\Wuip^2}{N} \msg{\Cx}{i'}{\mu},
\end{gather}
where $\msg{\w}{\mu}{i}$ and $\msg{\V}{\mu}{i}$ are related to the intermediate variable $\z_\mu = \vect{w}_\mu\T \X / \sqrt{N}$.

\subparagraph{Expansion of $\msgt{m}{\mu}{i}$ -}
We define the Fourier transform $\hat{p}_{\rm out}$ of $\pout(\y_\mu|\z_\mu)$ with respect to its argument $\z_\mu = \vect{w}_\mu\T\X / \sqrt{N}$,
\begin{gather}
 \hat{p}_{\rm out}(\y_\mu|\vect{\xi}_\mu) = \int \dd{\z_\mu} p_{\rm out}(\y_\mu | \z_\mu) \, e^{- i \vect{\xi}_\mu\T \z_\mu}.
\end{gather}
Using reciprocally the Fourier representation of $\pout(\y_\mu|\z_\mu)$,
\begin{gather}
 \pout(\y_\mu|\z_\mu) = \frac{1}{(2\pi)^P} \int \dd{\vect{\xi}_\mu} \hat{p}_{\rm out}(\y_\mu | \vect{\xi}_\mu) \, e^{i \vect{\xi}_\mu\T \z_\mu},
\end{gather}
we decouple the integrals over the different $\x_{i'}$ in \eqref{eq:chap6-bp-calamp-1},
\begin{align}
 \label{eq:chap6-deric-calamp-1}
 \msgt{m}{\mu}{i} (\x_i) &
 \propto \int \dd{\vect{\xi}_\mu} \hat{p}_{\rm out} \left(\y_\mu | \vect{\xi}_\mu\right) e^{i \frac{\Wui}{\sqrt{N}}\vect{\xi}_\mu\T\x_i} \prod_{i'\neq i} \int \dd{\x_{i'}} \msg{m}{i'}{\mu}(\x_{i'})e^{i \frac{\Wuip}{\sqrt{N}} \vect{\xi}_\mu\T\x_{i'}} \\
 \label{eq:chap6-deric-calamp-2}
 & \propto \int \dd{\vect{\xi}_\mu} \hat{p}_{\rm out} \left(\y_\mu | \vect{\xi}_\mu\right) e^{i \vect{\xi}_\mu\T \left(\frac{\Wui}{\sqrt{N}}\x_i + \msg{\w}{\mu}{i}\right) - \frac 1 2 \vect{\xi}_\mu\T \msg{\V}{\mu}{i} \vect{\xi}_\mu}
\end{align}
where developing the exponentials of the product in
\eqref{eq:chap6-bp-calamp-1} allows us to express the integrals over the $\x_{i'}$ as a function of the definitions \eqref{eq:chap6-vect-amp-rbp-def-w}-\eqref{eq:chap6-vect-amp-rbp-def-V}, before re-exponentiating to obtain the final result \eqref{eq:chap6-deric-calamp-2}.
Now reversing the Fourier transform and performing the integral over $\vect{\xi}_\mu$ we can further rewrite
\begin{align}
 \msgt{m}{\mu}{i} (\x_i) &
 \propto \int \dd{\z_\mu} \pout \left(\y_\mu | \z_\mu\right) e^{- \frac{1}{2}
 \left( \z_\mu - \frac{\Wui}{\sqrt{N}}\x_i - \msg{\w}{\mu}{i}\right)\T
 \msg{{\V}^{-1}}{\mu}{i}
 \left( \z_\mu - \frac{\Wui}{\sqrt{N}}\x_i - \msg{\w}{\mu}{i}\right)
 } \\
 \label{eq:chap6-vect-amp-rbp-1}
 &
 \propto \int \dd{\z_\mu} \mathbb{P}_{\rm out}(\z_\mu; \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) e^{ \left( \z_\mu - \msg{\w}{\mu}{i} \right)\T
 \msg{{\V}^{-1}}{\mu}{i}
 \frac{\Wui}{\sqrt{N}}\x_i
 - \frac{\Wui^2}{2N}\x_i\T \msg{{\V}^{-1}}{\mu}{i} \x_i
 },
\end{align}
where we are led to introduce the \emph{output update functions},
\begin{gather}
 \label{eq:chap6-vect-amp-Pout}
 \mathbb{P}_{\rm out}(\z_\mu; \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) = \pout \left(\y_\mu | \z_\mu\right) \cN(\z_\mu; \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) \, ,\\
 \label{eq:chap6-vect-amp-Zout}
 \Zout(\y_\mu , \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) = \int \dd{\z_\mu} \pout \left(\y_\mu | \z_\mu\right) \cN(\z_\mu; \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) \, ,\\
 \label{eq:chap6-vect-amp-gout-dgout}
 \gout(\y_\mu , \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) = \frac{1}{\Zout} \frac{\partial \Zout}{\partial \w} \quad \text{ and } \quad
 \dgout = \frac{\partial \gout}{\partial \w},
\end{gather}
where $\cN(\z;\w,\V)$ is the multivariate Gaussian distribution of mean $\w$ and covariance $\V$.
Further expanding the exponential in \eqref{eq:chap6-vect-amp-rbp-1} up to order $O(1/N)$ leads to the Gaussian parametrization
\begin{align}
 \msgt{m}{\mu}{i} (\x_i) & \propto 1 + \frac{{\Wui}}{\sqrt{N}} {\gout}\T \x_i + \frac{{{\Wui}}^2}{2 N} {\x_i}^T (\gout\gout^T + \dgout) \x_i \\
 & \propto e^{{\msg{\B}{\mu}{i}}^T\x_i - \frac 1 2 {\x_i}^T\msg{\A}{\mu}{i}\x_i},
\end{align}
with
\begin{gather}
 \label{eq:chap6-vect-amp-rb-B}
 \msg{\B}{\mu}{i} = \frac{{\Wui}}{\sqrt{N}} \gout (\y_\mu , \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) \\
 \label{eq:chap6-vect-amp-rb-A}
 \msg{\A}{\mu}{i} = - \frac{{{\Wui}}^2}{ N} \dgout(\y_\mu , \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) .
\end{gather}

\subparagraph{Consistency with $\msg{m}{i}{\mu}$ -}
Inserting the Gaussian approximation of $\msgt{m}{\mu}{i}$ in the definition of $\msg{m}{i}{\mu}$, we get the parametrization
\begin{align}
 \msg{m}{i}{\mu}(\x_i) & \propto p_x(\x_i) \prod_{\mu' \neq \mu} e^{{\msg{\B}{\mu'}{i}}^T\x_i - \frac 1 2 {\x_i}^T\msg{\A}{\mu'}{i}\x_i} \propto p_x(\x_i) e^{-\frac{1}{2}(\x_i - \msg{\lbd}{i}{\mu})^T \msg{\sig}{i}{\mu}^{-1} (\x_i - \msg{\lbd}{i}{\mu})}
\end{align}
with
\begin{gather}
 \label{eq:chap6-vect-amp-rbp-lbd}
\msg{\lbd}{i}{\mu} = \msg{\sig}{i}{\mu}\left( \sum_{\mu' \neq \mu} \msg{\B}{\mu'}{i} \right) \\
\label{eq:chap6-vect-amp-rbp-sig}
\msg{\sig}{i}{\mu} = \left( \sum_{\mu' \neq \mu}
\msg{\A}{\mu'}{i} \right)^{-1} .
\end{gather}

\subparagraph{Closing the equations -}
Ensuring the consistency with the definitions \eqref{eq:chap6-vect-amp-rbp-def-xh}-\eqref{eq:chap6-vect-amp-rbp-def-Cx} of mean and covariance of $\msg{m}{i}{\mu}$ we finally close our set of equations by defining the \emph{input update functions},
\begin{gather}
 \label{eq:chap6-vect-amp-Zx}
 \cZ^x = \int \dd{\x} p_x(\x)e^{-\frac 1 2 (\x-\lbd)\T\sig^{-1}(\x-\lbd)} \\
 \label{eq:chap6-vect-amp-f1x}
 \vect{f}^x_1(\lbd, \sig) = \frac{1}{\cZ^x}\int \dd{\x} \x \, p_x(\x)e^{-\frac 1 2 (\x-\lbd)\T\sig^{-1}(\x-\lbd)} \\
 \label{eq:chap6-vect-amp-f2x}
 \mat{f}^x_2(\lbd, \sig) = \frac{1}{\cZ^x} \int \dd{\x} \x\x\T \, p_x(\x)e^{-\frac 1 2 (\x-\lbd)\T\sig^{-1}(\x-\lbd)} - \vect{f}^x_1(\lbd, \sig)\vect{f}^x_1(\lbd, \sig)\T,
\end{gather}
so that
\begin{gather}
 \label{eq:chap6-vect-amp-rb-xh}
 \msg{\xh}{i}{\mu} = \vect{f}^x_1(\msg{\lbd}{i}{\mu} , \msg{\sig}{i}{\mu}) \\
 \label{eq:chap6-vect-amp-rb-Cx}
 \msg{\Cx}{i}{\mu} = \mat{f}^x_2(\msg{\lbd}{i}{\mu} , \msg{\sig}{i}{\mu}) .
\end{gather}

The closed set of equations \eqref{eq:chap6-vect-amp-rbp-def-w}, \eqref{eq:chap6-vect-amp-rbp-def-V}, \eqref{eq:chap6-vect-amp-rb-B}, \eqref{eq:chap6-vect-amp-rb-A}, \eqref{eq:chap6-vect-amp-rbp-lbd}, \eqref{eq:chap6-vect-amp-rbp-sig}, \eqref{eq:chap6-vect-amp-rb-xh} and \eqref{eq:chap6-vect-amp-rb-Cx}, with restored time indices, defines the r-BP algorithm. At convergence of the iterations, we obtain the approximated marginals
\begin{align}
 \label{eq:chap6-vect-amp-marginal-def}
 m_i(\x_i) = \frac 1 {\cZ_i} p_x(\x_i) e^{-\frac 1 2 (\x_i-\lbd_i)\T\sig_i^{-1}(\x_i-\lbd_i)}
\end{align}
with
\begin{gather}
 \lbd_i = \sig_i\left( \sum\limits_{\mu=1}^M \msg{\B}{\mu}{i} \right) \\
 \sig_i = \left( \sum\limits_{\mu=1}^M \msg{\A}{\mu}{i} \right)^{-1} .
\end{gather}

As usual, while BP requires following iterations over $M \times N$ message distributions over vectors in $\R^P$, r-BP only requires tracking $O(M \times N \times P)$ variables, which is a great simplification. Nonetheless, r-BP can be further reduced to the more practical GAMP algorithm, given the scaling of the weights in $O(1/\sqrt{N})$.

\paragraph{Approximate message passing}
We define parameters $\w_\mu$, $\V_\mu$ and $\xh_i$, $\Cx_i$, analogous to $\lbd_i$ and $\sig_i$ defined above, and consider their relations to the original $\msg{\lbd}{i}{\mu}$, $\msg{\sig}{i}{\mu}$, $\msg{\w}{\mu}{i}$, $\msg{\V}{\mu}{i}$, $\msg{\xh}{i}{\mu}$ and $\msg{\Cx}{i}{\mu}$. As a result we obtain the vectorized AMP for the GLM presented in \citealg~\ref{alg:chap6-vect-amp}.
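To make the structure of the resulting algorithm concrete, here is a minimal NumPy sketch of one possible implementation in the scalar case $P=1$, for the particular choice of a Gaussian prior $p_x = \cN(0,1)$ and a Gaussian channel $\y = \z + \sqrt{\Delta}\,\eps$, for which $\vect{f}^x_1$, $\mat{f}^x_2$ and $\gout$ have closed forms. The variable names and the precise time indexing are ours; \citealg~\ref{alg:chap6-vect-amp} gives the actual updates.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, alpha, Delta = 500, 2.0, 0.01
M = int(alpha * N)
W = rng.standard_normal((M, N)) / np.sqrt(N)
x0 = rng.standard_normal(N)                  # ground truth, P = 1
y = W @ x0 + np.sqrt(Delta) * rng.standard_normal(M)

xhat, c, g = np.zeros(N), np.ones(N), np.zeros(M)
for t in range(50):
    V = (W**2) @ c                       # variance of the z estimate
    omega = W @ xhat - V * g             # mean with Onsager correction
    g = (y - omega) / (Delta + V)        # gout for the Gaussian channel
    A = (W**2).T @ (1.0 / (Delta + V))   # A_i = -sum_mu W^2 dgout
    B = W.T @ g + A * xhat
    xhat = B / (A + 1.0)                 # f_1^x for a N(0,1) prior
    c = 1.0 / (A + 1.0)                  # f_2^x
print(np.mean((xhat - x0)**2))
\end{verbatim}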
Note that, similarly to GAMP, relaxing the Gaussian assumption on the weight matrix entries to any distribution with finite second moment yields the same algorithm using the Central Limit Theorem.


\subsection{State Evolution}
We consider the limit $N \to + \infty$ at fixed $\alpha = M/N$ and a quenched average over the disorder (here the realizations of $\X_0$, $\Y$ and $\W$), to derive a State Evolution analysis of the previously derived AMP.
To this end, our starting point will be the r-BP equations.


\subsubsection{State Evolution derivation in mismatched prior and channel setting}
\label{sec:chap6-se-cal-amp}
\paragraph{Definition of the overlaps}
The important quantities that follow the dynamics of the iterations and the fixed points of AMP are the overlaps. Here, they are the $P \times P$ matrices
\begin{gather}
 \q = \frac{1}{N} \sum_{i=1}^N \xh_i \xh_i^T, \quad \mm = \frac{1}{N} \sum_{i=1}^N \xh_i {\x_{0,i}}^T, \quad \q_0 = \frac{1}{N} \sum_{i=1}^N \x_{0,i} {\x_{0,i}}^T.
\end{gather}

\paragraph{Output parameters}
Under independent statistics of the entries of $\W$ and under the assumption of independent incoming messages, the variable $\msg{\w}{\mu}{i}$ defined in \eqref{eq:chap6-vect-amp-rbp-def-w} is a sum of independent variables and follows a Gaussian distribution by the Central Limit Theorem. Its first and second moments are
\begin{align}
 \EE{\W}{\msg{\w}{\mu}{i}} & = \frac{1}{\sqrt{N}} \sum_{i'\neq i} \EE{\W}{W_{\mu i'}} \msg{\xh}{i'}{\mu} = 0 \, ,
\end{align}
\begin{align}
\EE{\W}{\msg{\w}{\mu}{i} \msg{\w}{\mu}{i} ^T}
 & = \frac{1}{N} \sum_{i'\neq i} \sum_{i''\neq i} \EE{\W}{W_{\mu i''}W_{\mu i'}}\msg{\xh}{i''}{\mu}\msg{\xh}{i'}{\mu}^T \\
	& = \frac{1}{N} \sum_{i'\neq i} \EE{\W}{W^2_{\mu i'}} \msg{\xh}{i'}{\mu}\msg{\xh}{i'}{\mu}^T = \frac{1}{N} \sum_{i'=1}^N \msg{\xh}{i'}{\mu}\msg{\xh}{i'}{\mu}^T + O\left({1}/{N}\right) \notag \\
	& = \frac{1}{N} \sum_{i'=1}^N \left[ \xh_{i'}\xh_{i'}^T - \partial_{\lbd} \vect{f}^x_1\sig_{i'}\msg{B}{\mu}{i'} \xh_{i'}^T - \left(\partial_{\lbd} \vect{f}_1^x\sig_{i'}\msg{B}{\mu}{i'} \xh_{i'}^T \right)^T \right] +O\left({1}/{N}\right) \notag\\
	& = \frac{1}{N} \sum_{i'=1}^N \xh_{i'}\xh_{i'}^T + O\left({1}/{\sqrt{N}}\right)\,
\end{align}
where we used the facts that the $W_{\mu i}$-s are independent with zero mean, and that $\msg{B}{\mu}{i}$, defined in \eqref{eq:chap6-vect-amp-rb-B}, is of order $O(1/\sqrt{N})$.
Similarly, the variable $\msg{\z}{\mu}{i} = \sum_{i'\neq i} \frac{W_{\mu i'}}{\sqrt{N}} \x_{0,i'}$ is Gaussian with first and second moments
\begin{gather}
 \EE{\W}{\msg{\z}{\mu}{i}}
 = \frac{1}{\sqrt{N}} \sum_{i'\neq i} \EE{\W}{W_{\mu i'}} \x_{0,i'} = 0 \, ,
 \\
 \EE{\W}{\msg{\z}{\mu}{i} \msg{\z}{\mu}{i} ^T}
 = \frac{1}{N} \sum_{i'=1}^N \x_{0,i'}{\x_{0,i'}}^T + O\left({1}/{\sqrt{N}}\right).
 \end{gather}
Furthermore, their covariance is
\begin{align}
 \EE{\W}{\msg{\z}{\mu}{i} \msg{\w}{\mu}{i}^T}
 & = \frac{1}{N} \sum_{i'\neq i} \EE{\W}{W^2_{\mu i'}} \x_{0,i'}\msg{\xh}{i'}{\mu}^T = \frac{1}{N} \sum_{i'=1}^N \x_{0,i'}\msg{\xh}{i'}{\mu}^T + O\left({1}/{N}\right) \\
 & = \frac{1}{N} \sum_{i'=1}^N \left[ \x_{0,i'}\xh_{i'}^T - \x_{0,i'}\left(\partial_{\lbd} \vect{f}^x_1\sig_{i'}\msg{\B}{\mu}{i'}\right)^T \right] +O\left({1}/{N}\right) \\
 & = \frac{1}{N} \sum_{i'=1}^N \x_{0,i'}\xh_{i'}^T + O\left({1}/{\sqrt{N}}\right).
 \end{align}
Hence we find that for all $\mu$-s and all $i$-s, $ \msg{\w}{\mu}{i}$ and $ \msg{\z}{\mu}{i}$ are approximately jointly Gaussian in the thermodynamic limit, following a unique distribution $\cN\left( \msg{\z}{\mu}{i}, \msg{\w}{\mu}{i}; \; \vect{0}, \, \Q \right) $ with the block covariance matrix
\begin{gather}
 \Q =
 \begin{bmatrix}
 \q_0 & \mm \\
 \\
 {\mm}\T & \q \\
 \end{bmatrix}.
\end{gather}
For the variance message $\msg{\V}{\mu}{i}$, defined in \eqref{eq:chap6-vect-amp-rbp-def-V}, we have
\begin{align}
 \EE{\W}{\msg{\V}{\mu}{i}} &= \sum_{i'\neq i} \EE{\W}{\frac{W^2_{\mu i'}}{N}} \msg{\Cx}{i'}{\mu} = \sum_{i'=1}^N \frac{1}{N} \msg{\Cx}{i'}{\mu} + O\left({1}/{N}\right) \\
 &= \sum_{i'=1}^N \frac{1}{N} \Cx_{i'} + O\left({1}/{\sqrt{N}}\right) ,
\end{align}
where using the developments of $\msg{\lbd}{i}{\mu}$ and $\msg{\sig}{i}{\mu}$ \eqref{eq:chap6-vect-amp-rbp-lbd}-\eqref{eq:chap6-vect-amp-rbp-sig}, along with the scaling of $\msg{\B}{\mu}{i} $ in $O({1}/{\sqrt{N}})$, we replaced
\begin{align}
 \msg{\Cx}{i}{\mu} = \mat{f}_2^x(\msg{\lbd}{i}{\mu}, \msg{\sig}{i}{\mu}) = \mat{f}_2^x(\lbd_i, \sig_i) - \partial_{\lbd}\mat{f}^x_2 \sig_i \msg{\B}{\mu}{i}^T = \mat{f}_2^x(\lbd_i, \sig_i) + O\left({1}/{\sqrt{N}}\right).
\end{align}
Furthermore, we can check that
\begin{gather}
 \lim_{N\to + \infty} \EE{\W}{\msg{\V}{\mu}{i}^2 - \EE{\W}{\msg{\V}{\mu}{i}}^2} = 0 ,
\end{gather}
meaning that all $\msg{\V}{\mu}{i}$ concentrate on their identical mean in the thermodynamic limit, which we note
\begin{gather}
 \V = \sum_{i=1}^N \frac{1}{N} \Cx_i .
\end{gather}

\paragraph{Input parameters} Here we use the re-parametrization trick to express $\y_\mu$ as a function $g_0(\cdot)$ taking
 a noise $\eps_\mu \sim p_\epsilon(\eps_\mu)$ as input:
 $\y_\mu = g_0(\vect{w}_\mu\T\X_0, \eps_\mu)$.
Following \eqref{eq:chap6-vect-amp-rb-A}-\eqref{eq:chap6-vect-amp-rb-B} and \eqref{eq:chap6-vect-amp-marginal-def},
\begin{align}
 \sig_i^{-1}\lbd_i
 & = \sum_{\mu=1}^M \frac{W_{\mu i}}{\sqrt{N}} \gout\left(\y_\mu, \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i}\right) \\
 & = \sum_{\mu=1}^M \frac{W_{\mu i}}{\sqrt{N}} \gout\left(g_0\left( \sum_{i'\neq i} \frac{W_{\mu i'}}{\sqrt{N}} \x_{0,i'} + \frac{W_{\mu i}}{\sqrt{N}} \x_{0,i}, \eps_\mu \right), \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i}\right) \\
 & = \sum_{\mu=1}^M \frac{W_{\mu i}}{\sqrt{N}} \gout\left(g_0\left( \sum_{i'\neq i} \frac{W_{\mu i'}}{\sqrt{N}} \x_{0,i'}, \eps_\mu \right), \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i}\right) \notag\\
 &\qquad \qquad + \sum_{\mu=1}^M \frac{W^2_{\mu i}}{N} \partial_{z} \mat{\gouts}\left(g_0\left( \msg{\z}{\mu}{i}, \eps_\mu\right), \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i}\right)
\x_{0,i}.
\end{align}
The first term is again a sum of independent random variables, given that the $W_{\mu i}$-s are i.i.d. with zero mean and independent of the incoming messages of type $\mu \to i$. The second term has non-zero mean and can be shown to concentrate. Finally, recalling that all $\msg{\V}{\mu}{i}$ also concentrate on $\V$, we obtain the distribution
\begin{gather}
 \sig_i^{-1}\lbd_i \sim \cN\left(\sig_i^{-1}\lbd_i;\; \alpha \mh \x_{0,i}, \, \alpha \qh \right)
\end{gather}
with
\begin{gather}
 \label{eq:chap6-se-non-nishi-qh}
 \qh = \int \dd{\eps} p_{\epsilon}(\eps) \int \dd{\w} \dd{\z} \cN(\z, \w ; \underline{0}, \Q)
 \gout(g_0\left( \z , \eps\right) ,\w, \V) \times \\
 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \gout(g_0\left( \z , \eps\right), \w, \V)^T \, ,\notag \\
	\label{eq:chap6-se-non-nishi-mh}
 \mh = \int \dd{\eps} p_{\epsilon}(\eps) \int \dd{\w} \dd{\z} \cN(\z, \w ; \underline{0}, \Q)
	\partial_{\z} \mat{\gouts}(g_0\left( \z , \eps\right), \w , \V) .
\end{gather}
For the inverse variance $\sig_i^{-1}$ one can check again that it concentrates on its mean
\begin{gather}
 \sig_i^{-1} = - \sum\limits_{\mu=1}^M \frac{{\Wui}^2}{ N} \dgout(\y_\mu , \msg{\w}{\mu}{i}, \msg{\V}{\mu}{i} ) \simeq \alpha \chih \, , \\
 \label{eq:chap6-se-non-nishi-chih}
 \chih = - \int \dd{\eps} p_\epsilon(\eps) \int \dd{\w} \dd{\z} \cN(\z, \w ; \underline{0}, \Q)
	 \partial_{\omega} \mat{\gouts}( g_0\left( \z , \eps\right), \w , \V) \, .
\end{gather}

\paragraph{Closing the equations} These statistics of the input parameters must consistently ensure that
\begin{gather}
 \V = \frac{1}{N}\sum\limits_{i=1}^N \Cx_i = \EE{\lbd, \sig}{\mat{f}^x_2(\lbd, \sig)} ,\\
 \q = \frac{1}{N} \sum\limits_{i=1}^N \xh_i \xh_i\T = \EE{\lbd, \sig}{\vect{f}^x_1(\lbd, \sig)\vect{f}^x_1(\lbd, \sig)\T},\\
 \mm = \frac{1}{N} \sum\limits_{i=1}^N \xh_i {\x_{0,i}}\T = \EE{\lbd, \sig}{ \vect{f}^x_1(\lbd, \sig){\x_0}\T } ,
\end{gather}
which gives, upon expressing the expectations explicitly,
\begin{gather}
 \label{eq:chap6-se-non-nishi-V}
 \V = \int \dd{\x_0}p_{x_0}(\x_0) \int \D{\vect{\xi}} \mat{f}^x_2 \left( (\alpha \chih)^{-1}\left({\sqrt{\alpha \qh} \vect{\xi} + \alpha \mh\x_0}\right); (\alpha \chih)^{-1} \right) \, , \\
 \label{eq:chap6-se-non-nishi-m}
 \mm = \int \dd{\x_0}p_{x_0}(\x_0) \int \D{\vect{\xi}} \vect{f}^x_1 \left( (\alpha \chih)^{-1}\left({\sqrt{\alpha \qh} \vect{\xi} + \alpha \mh\x_0}\right); (\alpha \chih)^{-1} \right){\x_0}\T \, , \\
 \q = \int \dd{\x_0}p_{x_0}(\x_0) \int \D{\vect{\xi}} \vect{f}^x_1 \left( (\alpha \chih)^{-1}\left({\sqrt{\alpha \qh} \vect{\xi} + \alpha \mh\x_0}\right); (\alpha \chih)^{-1} \right) \times \notag\\
 \label{eq:chap6-se-non-nishi-q}
 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \vect{f}^x_1 \left( (\alpha \chih)^{-1}\left({\sqrt{\alpha \qh} \vect{\xi} + \alpha \mh\x_0}\right); (\alpha \chih)^{-1} \right)\T .
\end{gather}
The State Evolution analysis of the GLM on the vector variables finally consists in alternately iterating the equations
\eqref{eq:chap6-se-non-nishi-qh}, \eqref{eq:chap6-se-non-nishi-mh}, \eqref{eq:chap6-se-non-nishi-chih}, and the equations \eqref{eq:chap6-se-non-nishi-V}, \eqref{eq:chap6-se-non-nishi-m}, \eqref{eq:chap6-se-non-nishi-q} until convergence.


\paragraph{Performance analysis}
The mean squared error (MSE) on the reconstruction of $\X$ by the AMP algorithm is then predicted by
\begin{gather}
\MSE(\X) = q - 2 m + q_0,
\end{gather}
where the scalar values used here correspond to the (unique) value of the diagonal elements of the corresponding overlap matrices. This MSE can be computed throughout the iterations of State Evolution.
Remarkably, the State Evolution MSEs follow precisely the MSE of the AMP predictors along the iterations of the algorithm, provided the procedures are initialized consistently. A random initialization of $\xh_i$ in AMP corresponds to an initialization of zero overlap $m = 0$, with variance of the priors $q = q_0$, in the State Evolution.

\subsubsection{Bayes optimal State Evolution}
The SE equations can be greatly simplified in the Bayes optimal setting, where the statistical model used by the student (prior $p_x$ and channel $\pout$) is known to match the teacher.
In this case, the true unknown signal $\X_0$ is in some sense statistically equivalent to the estimate $\mat{\hat{X}}$ coming from the posterior. More precisely, one can prove the Nishimori identities \cite{Opper1991, Iba1999, Nishimori2001} (or \cite{Kabashima2016} for a concise demonstration and discussion) implying that
\begin{gather}
 \q = \mm, \quad \V = \q_0 - \mm \quad \text{ and } \quad \qh = \mh = \chih.
\end{gather}
As a result the State Evolution reduces to a set of two equations
\begin{gather}
 \label{eq:chap6-se-vect-glm-bayes-opt-m}
 \mm = \int \dd{\x_0}p_{x_0}(\x_0) \int \D{\vect{\xi}} \vect{f}^x_1 \left( (\alpha \mh)^{-1}\left({\sqrt{\alpha \mh} \vect{\xi} + \alpha \mh\x_0}\right); (\alpha \mh)^{-1} \right){\x_0}\T \, \\
 \label{eq:chap6-se-vect-glm-bayes-opt-mh}
 \mh = \int \dd{\eps} p_{\epsilon}(\eps) \int \dd{\w} \dd{\z} \cN(\z, \w ; \underline{0}, \Q)
 \gout\left(g_0\left( \z , \eps\right), \w , \q_0 - \mm\right) \times \\
 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \gout\left(g_0\left( \z , \eps\right), \w , \q_0 - \mm\right)\T \notag,
\end{gather}
with the block covariance matrix
\begin{gather}
 \label{eq:chap6-Q-bayes-opt}
 \Q =
 \begin{bmatrix}
 \q_0 & \mm \\
 \\
 {\mm}\T & \mm \\
 \end{bmatrix}.
\end{gather}
\section{Mean-field identity}
We derive the exact identity \eqref{eq:chap3-mf-identity} for the fully connected Ising model with binary spins $\x \in \{0,1\}^N$,
\label{app:chap3-mf-identity}
\begin{align}
 \label{eq:app-chap3-1}
 \langle x_i \rangle_p &= \frac{1}{\cZ}
 \sum_{\x\in \{0,1\}^N} \, x_i \, \exp\left(\displaystyle\beta \sum_j b_j x_j + \frac \beta 2 \sum_{jk}W_{jk}x_j x_k\right) \\
 &= \frac{1}{\cZ}\sum_{\x_{\setminus i}\in \{0,1\}^{N-1}} \exp\left(\displaystyle\beta \sum_{j\neq i} b_j x_j + \frac \beta 2 \sum_{\underset{j\neq i}{k \neq i}}W_{jk}x_j x_k\right) \sum_{x_i \in \{0,1\}} x_i e^{\beta b_i x_i + \beta \sum_j W_{ij}x_i x_j} \notag
\end{align}
where $\x_{\setminus i}$ is the vector of $\x$ without its $i$-th
component.
Yet
\begin{gather}
 \sigm(\beta b_i + \sum_{j \in \partial i}\beta W_{ij} x_j) = \frac
 {\sum_{x_i \in \{0,1\}} \, x_i\, e^{\beta b_i x_i + \beta \sum_j W_{ij}x_i x_j}}
 {\sum_{x_i \in \{0,1\}} e^{\beta b_i x_i + \beta \sum_j W_{ij}x_i x_j}},
\end{gather}
so that multiplying and dividing \eqref{eq:app-chap3-1} by the denominator above, we obtain the identity \eqref{eq:chap3-mf-identity} in \citesec~\ref{sec:chap3-nmf}
\begin{align}
 \langle x_i \rangle_p = \langle \sigm(\beta b_i + \sum_{j \in \partial i}\beta W_{ij} x_j) \rangle_p\ .
\end{align}

\section{Georges-Yedidia expansion for generalized Boltzmann machines}
\label{app:chap3-real-GY}

We here present a derivation of the Georges-Yedidia expansion for real-valued degrees of freedom on the example of a Boltzmann machine, as in \cite{Tramel2018}.
Formally, we consider $\x \in \R^N$ governed by the energy function and parametrized distribution
\begin{gather}
 \label{eq:chap4-real-meas-fully}
 E(\x) = - \sum_{(ij)} W_{ij}x_i x_j - \frac{1}{\beta} \sum_{i=1}^N\log p_x(x_i; \theta_i) \,, \quad
 p(\x) = \frac{1}{\cZ} e^{\frac{\beta}{2} \x\T\W\x} \prod_{i=1}^N p_x(x_i ; \theta_i) ,
\end{gather}
where $p_x(x_i;\theta_i)$ is an arbitrary prior distribution with parameter $\theta_i$. For a Bernoulli prior with parameter $\sigm(\beta b_i)$ we recover the measure of binary Boltzmann machines. However, we choose here a prior that does not depend on the temperature a priori. We now derive the expansion for this general case following the outline discussed in \citesec~\ref{sec:chap3-GY}, highlighting the differences with the binary case.

Note that inference in the generalized fully connected Boltzmann machine is closely related to the symmetric rank-1 matrix factorization problem, which also features pairwise interactions. Similarly, inference for the bi-partite RBM maps to the asymmetric rank-1 matrix factorization. However, in contrast to Boltzmann machine inference, these factorizations are reconstruction problems. The mean-field techniques derived in \cite{Lesieur2016,Lesieur2017} there make it possible to compute the MMSE estimator of unknown signals from approximate marginals. Here we focus on the evaluation of the free energy.

\paragraph{Minimization for fixed marginals}
While fixing the value of the first moment is sufficient for binary variables, more than one constraint is now needed in order to minimize the Gibbs free energy at a given value of the marginals.
In the same spirit as the AMP algorithm, we assume a Gaussian parametrization of the marginals.
We note $\am$ the first moment of $\x$ and $\cm$ its variance.
We wish to compute the constrained minimum over the distributions $q$ on $\R^N$
\begin{gather}
 G(\am, \cm) = \min_{q} \left[ \langle E(\x) \rangle_{q} - H(q)/\beta \; | \; \langle \x\rangle_q = \am \,, \langle \x^2\rangle_q = \am^2 + \cm \right],
\end{gather}
where the notation of squared vectors corresponds here and below to the vectors of squared entries.
It is equivalent to an unconstrained problem with Lagrange multipliers $\lbd(\am,\cm, \beta)$ and $\vect{\xi}(\am,\cm, \beta)$
\begin{gather}
 \label{eq:chap4-real-GY-G01}
 G(\am, \cm) = \min_{q} \left[ \langle E(\x) \rangle_{q} - H(q)/\beta - \lbd\T(\langle \x\rangle_q - \am) / \beta - \vect{\xi}\T(\langle \x^2\rangle_q - \am^2 - \cm)/ \beta \right].
\end{gather}
 The terms depending on the distribution $q$ in the functional to minimize above can be interpreted as a Gibbs free energy for the effective energy functional
\begin{gather}
 \tilde{E}(\x) = E(\x) - \lbd\T\x /\beta - \vect{\xi}\T \x^2/\beta .
\end{gather}
The solution of the minimization problem \eqref{eq:chap4-real-GY-G01} is therefore the corresponding Boltzmann distribution
\begin{gather}
q_{\am, \cm}(\x) = \frac{e^{-\beta\tilde{E}(\x)}}{\tilde{\cZ}} = \frac{1}{\tilde{\cZ}} e^{-\beta E(\x) + \lbd(\am,\cm, \beta)\T\x + \vect{\xi}(\am,\cm, \beta)\T \x^2 } \,
\end{gather}
and the minimum $G(\am, \cm)$ is
\begin{align}
 \label{eq:chap4-real-GY-G1}
 -\beta G(\am, \cm)
 & = - \lbd\T \am - \vect{\xi}\T(\am^2+\cm) + \log \int \dd{\x} e^{-\beta E(\x) + \lbd\T\x + \vect{\xi}\T \x^2 } \notag\\
 & = \log \int \dd{\x} e^{-\beta E(\x) + \lbd\T(\x -\am) + \vect{\xi}\T (\x^2 - \am^2 -\cm)},
\end{align}
where the Lagrange multipliers $\lbd(\am,\cm, \beta)$ and $\vect{\xi}(\am,\cm, \beta)$ enforcing the constraints are still implicit.
Defining a functional $\tilde{G}$ for arbitrary vectors $\tilde{\lbd} \in \R^N$ and $\vect{\tilde \xi} \in \R^N$,
\begin{gather}
 -\beta \tilde{G}(\am,\cm, \tilde{\lbd}, \tilde{\vect{\xi}}) = \log \int \dd{\x} e^{-\beta E(\x) + \tilde{\lbd}\T(\x -\am) + \tilde{\vect{\xi}}\T (\x^2 - \am^2 -\cm)},
\end{gather}
we have
\begin{align}
 \label{eq:chap4-real-GY-stationary-lbd}
 & a_i = \langle x_i \rangle_{q_{\am,\cm}} \Rightarrow -\beta \left.\frac{\partial\tilde{G}}{\partial \tilde \lambda_i}\right|_{\lbd, \vect{\xi}} = 0, && -\beta \left.\frac{\partial^2\tilde{G}}{\partial \tilde\lambda_i^2}\right|_{\lbd, \vect{\xi}}= \langle x_i^2 \rangle_{q_{\am,\cm}} - a_i^2 > 0 ,\\
 \label{eq:chap4-real-GY-stationary-xi}
 &c_i + a_i^2 = \langle x_i^2 \rangle_{q_{\am,\cm}} \Rightarrow -\beta \left.
\frac{\partial \tilde{G}}{\partial \tilde \xi_i}\right|_{\lbd, \vect{\xi}} = 0, && -\beta \left.\frac{\partial ^2\tilde{G}}{\partial \tilde \xi_i^2} \right|_{\lbd, \vect{\xi}}= \langle (x^2_i)^2 \rangle_{q_{\am,\cm}} - (c_i + a_i^2)^2 > 0 .
\end{align}
Hence the Lagrange multipliers are identified as minimizers of $-\beta\tilde{G}$ and
\begin{gather}
 - \beta G(\am, \cm) = - \beta \tilde{G}(\am,\cm, \lbd(\am,\cm, \beta), \vect{\xi}(\am,\cm, \beta)) = \min_{\tilde{\lbd}, \tilde{\vect{\xi}}} - \beta \tilde{G}(\am,\cm, \tilde{\lbd}, \tilde{\vect{\xi}}).
\end{gather}
The true free energy $F = - \log \cZ / \beta$ would eventually be recovered by minimizing the constrained minimum $G(\am, \cm)$ with respect to its arguments.
Nevertheless, the computation of $G$ and $\tilde{G}$ involves an integration over $\x \in \R^N$ and remains intractable. The following step of the Georges-Yedidia derivation consists in approximating these functionals by a Taylor expansion at infinite temperature, where interactions are neutralized.

\paragraph{Expansion around $\beta=0$}
To perform the expansion we introduce the notation $A(\beta, \am, \cm) = - \beta G(\am, \cm) $.
We also define the auxiliary operator
\begin{gather}
 U(\x; \beta) = -\frac{1}{2} \x\T\W\x + \frac{1}{2}\langle \x\T\W\x \rangle_{q_{\am, \cm}} - \sum_{i=1}^N \frac{\partial \lambda_i}{\partial \beta} (x_i - a_i) - \sum_{i=1}^N \frac{\partial \xi_i}{\partial \beta} (x_i^2 - a_i^2 - c_i),
\end{gather}
which allows us to write concisely, for any observable $O$, the derivative of its average with respect to $\beta$,
\begin{gather}
 \frac{\partial \langle O(\x; \beta) \rangle_{q_{\am,\cm}}}{\partial \beta} = \left\langle \frac{\partial O(\x; \beta)}{\partial \beta} \right\rangle_{q_{\am,\cm}} - \langle U(\x;\beta)O(\x;\beta) \rangle_{q_{\am,\cm}}.
\end{gather}
To compute the derivatives of $\lbd$ and $\vect{\xi}$ with respect to $\beta$ we note that
\begin{gather}
 \label{eq:chap4-real-GY-derivative-A-a-c}
 \frac{\partial A}{\partial a_i} = -\beta \frac{\partial \tilde G}{\partial a_i} = - \lambda_i(\beta, \am, \cm)- 2 a_i \xi_i(\beta, \am, \cm) \, , \\
 \frac{\partial A}{\partial c_i} = -\beta \frac{\partial \tilde G}{\partial c_i} = - \xi_i(\beta, \am, \cm),
\end{gather}
where we used that $\partial \tilde G / \partial \tilde{\lambda_i} = 0 $ and $\partial \tilde G / \partial \tilde{\xi}_i = 0 $ when evaluated for $\lbd(\am,\cm, \beta)$ and $\vect{\xi}(\am,\cm, \beta)$. Consequently,
\begin{gather}
 \label{eq:chap4-real-GY-deriv-lbd}
 \frac{\partial \xi_i}{\partial \beta} = - \frac{\partial}{\partial c_i} \frac{\partial A }{\partial \beta} \, , \qquad
 \frac{\partial \lambda_i}{\partial \beta} = - \frac{\partial}{\partial a_i} \frac{\partial A }{\partial \beta} - 2 a_i \frac{\partial \xi_i}{\partial \beta}.
\end{gather}
We can now proceed to compute the first terms of the expansion that will be performed for the functional $A$.
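Note that the auxiliary operator is centered by construction: the two interaction terms cancel on average, while $\langle \x \rangle_{q_{\am,\cm}} = \am$ and $\langle \x^2 \rangle_{q_{\am,\cm}} = \am^2 + \cm$ make the constraint terms vanish, so that
\begin{gather}
 \langle U(\x;\beta) \rangle_{q_{\am, \cm}} = 0 \, ,
\end{gather}
as required for the derivative formula above to be consistent when applied to the constant observable $O = 1$.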
\subparagraph{Zeroth order}
Substituting $\beta=0$ in the definition of $A$ we have
\begin{gather}
 A(0,\am,\cm) = - \lbd(0,\am,\cm)\T \am - \vect{\xi}(0,\am,\cm)\T (\am^2 + \cm) + \log \tilde{\cZ}^0(\lbd(0,\am,\cm), \vect{\xi}(0,\am,\cm)),
\end{gather}
with
\begin{align}
 \tilde{\cZ}^0(\lbd(0,\am,\cm), \vect{\xi}(0,\am,\cm)) &= \int \dd{\x} e^{\lbd(0,\am,\cm)\T \x+ \vect{\xi}(0,\am,\cm)\T \x^2}\prod_{i=1}^N p_x(x_i ; \theta_i) \\
 & = \prod_{i=1}^N \int \dd{x_i} e^{\lambda_i(0,\am,\cm) x_i+ \xi_i(0,\am,\cm) x_i^2}p_x(x_i ; \theta_i).
\end{align}
At infinite temperature the interaction terms of the energy do not contribute, so that the integral in $\tilde{\cZ}^0$ factorizes and can be evaluated numerically in the event that it does not have a closed form.

\subparagraph{First order} We compute the derivative of $A$ with respect to $\beta$. We use again that $\lbd(\am,\cm,\beta)$ and $\vect{\xi}(\am,\cm,\beta)$ are stationary points of $\tilde{G}$ to write
\begin{align}
 \frac{\partial A}{\partial \beta} & = \frac{\partial (-\beta \tilde{G})}{\partial \beta} = \frac{\partial }{\partial \beta} \left[ \log \int \dd{\x} e^{-\beta E(\x) + \lbd(\am,\cm, \beta)\T(\x -\am) + \vect{\xi}(\am,\cm, \beta)\T (\x^2 - \am^2 -\cm)} \right] \\
 & = \left\langle
 \frac{\partial }{\partial \beta} (-\beta E(\x))
 + \frac{\partial \lbd }{\partial \beta}\T (\x -\am)
 + \frac{\partial \vect{\xi}}{\partial \beta}\T (\x^2 - \am^2 -\cm) \right\rangle_{q_{\am, \cm}} \\
 & = \frac{1}{2} \langle \x\T\W\x \rangle_{q_{\am, \cm}}.
\end{align}
At infinite temperature the average over the product of variables becomes a product of averages, so that we have
\begin{gather}
 \left. \frac{\partial A}{\partial \beta} \right|_{\beta = 0} = \frac{1}{2}\am\T\W\am = \sum_{(ij)} W_{ij} a_i a_j .
\end{gather}

\subparagraph{Second order} Using the first order derivative of $A$ we can compute the derivatives of the Lagrange parameters \eqref{eq:chap4-real-GY-deriv-lbd} and the auxiliary operator at infinite temperature,
\begin{gather}
 \left. \frac{\partial \xi_i}{\partial \beta} \right|_{\beta = 0} = 0 \, , \qquad \left. \frac{\partial \lambda_i}{\partial \beta} \right|_{\beta = 0} = - \sum_{j\in\partial i} W_{ij} a_j \,, \qquad U(\x;0) = - \sum_{(ij)} W_{ij} (x_i - a_i)(x_j - a_j). \notag
\end{gather}
The second order derivative is then easily computed at infinite temperature
\begin{align}
 \left. \frac{\partial^2 A}{\partial \beta^2} \right|_{\beta = 0} & = \frac{1}{2}\left. \frac{\partial}{\partial \beta} \Big(\langle \x\T\W\x \rangle_{q_{\am, \cm}}\Big) \right|_{\beta = 0}= - \frac{1}{2} \langle U(\x;0) (\x\T\W\x )\rangle^{\beta=0}_{q_{\am, \cm}} \\
 & = \sum_{(ij)} W_{ij}^2 \langle (x_i -a_i)x_i (x_j - a_j)x_j\rangle^{\beta=0}_{q_{\am, \cm}} = \sum_{(ij)} W_{ij}^2 c_i c_j.
\\end{align}

\\paragraph{TAP free energy for the generalized Boltzmann machine}
\\label{sec:chap4-tap-fe-grbm}
Stopping at the second order of the systematic expansion, and gathering the different terms derived above, we have
\\begin{align}
 -\\beta G(\\am, \\cm) = - \\lbd(0,\\am,\\cm)\\T \\am & - \\vect{\\xi}(0,\\am,\\cm)\\T (\\am^2 + \\cm) + \\log \\tilde{\\cZ}^0(\\lbd(0,\\am,\\cm), \\vect{\\xi}(0,\\am,\\cm)) \\\\
 &+ \\beta \\sum_{(ij)} W_{ij} a_i a_j + \\frac{\\beta^2}{2}\\sum_{(ij)} W_{ij}^2 c_i c_j, \\notag
\\end{align}
where the values of the parameters $\\lbd(0,\\am,\\cm)$ and $\\vect{\\xi}(0,\\am,\\cm)$ are implicitly defined through the stationary conditions \\eqref{eq:chap4-real-GY-stationary-lbd}-\\eqref{eq:chap4-real-GY-stationary-xi}.
The TAP approximation of the free energy also requires considering the stationary points of the expanded expression as a function of $\\am$ and $\\cm$.

This second condition yields the relations
\\begin{gather}
 \\label{eq:chap4-real-GY-A}
 -2 \\xi_i (0,\\am,\\cm) = -\\beta^2 \\sum_{j\\in \\partial i } W_{ij}^2 c_j = A_i \\\\
 \\label{eq:chap4-real-GY-B}
 \\lambda_i (0,\\am,\\cm) = A_i a_i + \\beta \\sum_{j\\in \\partial i } W_{ij} a_j = B_i \\,
\\end{gather}
where we define the new variables $A_i$ and $B_i$.
The extremization with respect to the Lagrange multipliers gives in turn
\\begin{gather}
 \\label{eq:chap4-real-GY-a}
 a_i = \\frac{1}{\\cZ^x_i} \\int \\dd{x_i} x_i p_x(x_i; \\theta_i) e^{-\\frac{A_i}{2}x_i^2 + B_i x_i} = f_1^x(B_i,A_i;\\theta_i), \\\\
 \\label{eq:chap4-real-GY-c}
 c_i = \\frac{1}{\\cZ^x_i} \\int \\dd{x_i} x_i^2 p_x(x_i; \\theta_i) e^{-\\frac{A_i}{2}x_i^2 + B_i x_i} - a_i^2 = f_2^x(B_i,A_i;\\theta_i) ,
\\end{gather}
where we introduce the update functions $f_1^x$ and $f_2^x$, defined with respect to the partition function
\\begin{gather}
 \\cZ^x_i(B_i,A_i;\\theta_i) = \\int \\dd{x_i} p_x(x_i; \\theta_i) e^{-\\frac{A_i}{2}x_i^2 + B_i x_i} .
\\end{gather}
Finally we can rewrite the TAP free energy as
\\begin{align}
 \\label{eq:chap4-real-GY-final-G}
 -\\beta G(\\am, \\cm) = - \\vect{B}\\T \\am + \\vect{A}\\T (\\am^2 + \\cm)/2 + \\sum_{i=1}^N &\\log \\cZ^x_i(B_i, A_i; \\theta_i) \\\\
 &+ \\beta \\sum_{(ij)} W_{ij} a_i a_j + \\frac{\\beta^2}{2}\\sum_{(ij)} W_{ij}^2 c_i c_j, \\notag
\\end{align}
with the values of the parameters set by the self-consistency conditions \\eqref{eq:chap4-real-GY-A}, \\eqref{eq:chap4-real-GY-B}, \\eqref{eq:chap4-real-GY-a} and \\eqref{eq:chap4-real-GY-c}, which are the TAP equations of the generalized Boltzmann machine at second order. Note that the naive mean-field equations are recovered by ignoring the second order terms in $\\beta^2$.

\\subparagraph{Relation to message passing}
The TAP equations obtained above should correspond to the fixed points of Approximate Message Passing (AMP), following the derivation from Belief Propagation (BP) presented in \\citesec~\\ref{sec:chap3-gamp}.
In Appendix B of \\cite{Tramel2018} the relaxed-BP equations are derived for the generalized Boltzmann machine:
\\begin{gather}
 \\msg{B}{i}{j}^{(t)} = \\sum_{k \\in \\partial i \\setminus j}\\beta W_{ik} \\msg{a^{(t)}}{k}{i} , \\quad
 \\msg{A}{i}{j}^{(t)} = - \\sum_{k \\in \\partial i \\setminus j} \\beta^2 W^2_{ik} \\msg{c^{(t)}}{k}{i} ,\\\\
 \\msg{a}{i}{j}^{(t)} = f_1^x(\\msg{B}{i}{j}^{(t-1)},\\msg{A}{i}{j}^{(t-1)};\\theta_i) , \\quad
 \\msg{c}{i}{j}^{(t)} = f_2^x(\\msg{B}{i}{j}^{(t-1)},\\msg{A}{i}{j}^{(t-1)};\\theta_i).
\\end{gather}
To recover the TAP equations from these messages, we define
\\begin{gather}
 B_i^{(t)} = \\sum_{k \\in \\partial i}\\beta W_{ik} \\msg{a^{(t)}}{k}{i} , \\quad
 A_i^{(t)} = - \\sum_{k \\in \\partial i} \\beta^2 W^2_{ik} \\msg{c^{(t)}}{k}{i} ,\\\\
 \\label{eq:chap4-real-GY-TAP-ac}
 a_i^{(t)} = f_1^x(B_i^{(t-1)},A_i^{(t-1)};\\theta_i) , \\quad
 c_i^{(t)} = f_2^x(B_i^{(t-1)},A_i^{(t-1)};\\theta_i).
\\end{gather}
As $B_i^{(t)} = \\msg{B}{i}{j}^{(t)} + \\beta W_{ij} \\msg{a^{(t)}}{j}{i}$ and $A_i^{(t)} = \\msg{A}{i}{j}^{(t)} - \\beta^2 W^2_{ij} \\msg{c^{(t)}}{j}{i}$, expanding $f_2^x$ shows that $c_i^{(t)} = \\msg{c}{i}{j}^{(t)} + O(\\beta)$, so that
\\begin{gather}
 \\label{eq:chap4-real-GY-TAP-A}
 A_i^{(t)} = - \\beta^2 \\sum_{j \\in \\partial i} W^2_{ij} c_j^{(t)} + o(\\beta^2).
\\end{gather}
Expanding $f_1^x$ we also have
\\begin{align}
 a_k^{(t)} &= f_1^x(\\msg{B}{k}{j}^{(t-1)} + \\beta W_{kj} \\msg{a^{(t-1)}}{j}{k}\\, , \\; \\msg{A}{k}{j}^{(t-1)}- \\beta^2 W^2_{kj} \\msg{c^{(t-1)}}{j}{k};\\theta_k) \\\\
 & = \\msg{a^{(t)}}{k}{j} + \\frac{\\partial f_1^x}{\\partial B_k} \\beta W_{kj} \\msg{a^{(t-1)}}{j}{k} + O(\\beta^2),
\\end{align}
with $\\displaystyle \\frac{\\partial f_1^x}{\\partial B_k}(B_k^{(t-1)},A_k^{(t-1)};\\theta_k) = c_k^{(t)}$. Finally, substituting the messages in the definition of $B_i$ we obtain
\\begin{gather}
 B_i^{(t)} = \\sum_{k \\in \\partial i}\\beta W_{ik} \\msg{a^{(t)}}{k}{i} = \\sum_{k \\in \\partial i}\\left(\\beta W_{ik} a_k^{(t)} - \\beta^2 W_{ik}^2 c_k^{(t)} \\msg{a^{(t-1)}}{i}{k}\\right) .
\\end{gather}
As $\\msg{a^{(t-1)}}{i}{k} = a^{(t-1)}_i + O(\\beta)$ and using the definition of $A_i^{(t)}$, we finally recover
\\begin{gather}
 \\label{eq:chap4-real-GY-TAP-B}
 B_i^{(t)} = \\sum_{k \\in \\partial i}\\beta W_{ik} a_k^{(t)} + A_i^{(t)}a^{(t-1)}_i.
\\end{gather}

Hence we indeed recover the TAP equations as the AMP fixed points in \\eqref{eq:chap4-real-GY-TAP-ac}, \\eqref{eq:chap4-real-GY-TAP-A} and \\eqref{eq:chap4-real-GY-TAP-B}. Beyond providing a cross-check of our results, the message passing derivation also specifies a scheme of updates to solve the self-consistency equations obtained by the Georges-Yedidia expansion. In the applications considered below, we resort to this time indexing, which has good convergence properties \\cite{bolthausen2014iterative}.

\\subparagraph{Solutions of the TAP equations}
As already discussed in \\citesec~\\ref{sec:chap3-tap}, the TAP equations do not necessarily admit a single solution. In practice, different fixed points are reached when iterating the self-consistency equations from different initializations. A minimal numerical sketch of these iterations, for a concrete choice of prior, is given below.
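To make the update scheme concrete, the following minimal sketch iterates the TAP equations \\eqref{eq:chap4-real-GY-TAP-ac}, \\eqref{eq:chap4-real-GY-TAP-A} and \\eqref{eq:chap4-real-GY-TAP-B}. It assumes, purely for illustration, binary variables $x_i \\in \\{0,1\\}$ with a Bernoulli prior of parameter $\\rho$, for which $f_1^x$ and $f_2^x$ have closed forms; the time indexing is slightly simplified and mild damping is added, so this is a sketch rather than a reference implementation.
\\begin{verbatim}
import numpy as np

# Minimal sketch of the TAP / AMP iteration for a generalized Boltzmann
# machine, assuming (illustration only) binary variables x_i in {0, 1}
# with a Bernoulli prior of parameter rho. The tilted measure puts weight
# rho * exp(B - A/2) on x_i = 1 and (1 - rho) on x_i = 0, and x_i^2 = x_i.

def f1_f2_bernoulli(B, A, rho):
    a = 1.0 / (1.0 + (1.0 - rho) / rho * np.exp(A / 2.0 - B))  # mean f_1^x
    return a, a * (1.0 - a)                                    # variance f_2^x

def tap_iterate(W, rho=0.5, beta=0.5, n_iter=200, damping=0.5, seed=0):
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    a = rng.uniform(0.1, 0.9, size=N)     # initial magnetizations a_i
    c = a * (1.0 - a)                     # initial variances c_i
    for _ in range(n_iter):
        A = -beta**2 * (W**2) @ c         # A_i with Onsager correction
        B = beta * W @ a + A * a          # B_i (time indexing simplified)
        a_new, c_new = f1_f2_bernoulli(B, A, rho)
        a = damping * a + (1.0 - damping) * a_new  # damping aids convergence
        c = damping * c + (1.0 - damping) * c_new
    return a, c

rng = np.random.default_rng(1)
N = 50
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W = (W + W.T) / 2.0                       # symmetric couplings
np.fill_diagonal(W, 0.0)
a, c = tap_iterate(W)
print("mean magnetization:", a.mean().round(3))
\\end{verbatim}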
\\section{Some applications}
\\label{sec:chapex-all}
\\subsection{A brief pre-deep learning history}
\\label{sec:chap1-nn-and-mf}

The application of mean-field methods of inference to machine learning, and in particular to neural networks, already has a long history and a significant record of contributions. Here we briefly review some historical connections anterior to the deep learning revival of neural networks in the 2010s.

\\paragraph{Statistical mechanics of learning} In the 80s and 90s, a series of works pioneered the analysis of learning with neural networks through the statistical physics lens.
By focusing on simple models with simple data distributions, and relying on the mean-field method of replicas, these papers managed to predict quantitatively important properties such as \\emph{capacities}: the number of training data points that can be memorized by a model, or \\emph{learning curves}: the generalization error (or population risk) of a model as a function of the size of the training set. This effort was initiated by the study of the Hopfield model \\cite{Amit1985}, an undirected neural network providing associative memory \\cite{Hopfield1982}. The analysis of feed forward networks with simple architectures followed (among which \\cite{Gardner1987, Gardner1988, Opper1991, Monasson1995, Opper1996, Monasson2004}, see also the reviews \\cite{Seung1992,Watkin1993, Opper1995, Saad1999a, Engel2001}). The dynamics of simple learning problems was also analyzed through a mean-field framework (not covered in the previous sections), initially in the simplifying case of online learning with an infinite training set \\cite{Saad1995, Saad1995a, Biehl1995, Saad1999a} but also with finite data \\cite{Sollich1997, Li1999}.

Physicists, accustomed to studying natural phenomena, fruitfully brought the tradition of modelling to their investigation of learning, which translated into assumptions of random data distributions or teacher-student scenarios. Their approach was in contrast to the focus of machine learning theorists on worst-case guarantees: bounds for a hypothesis class that hold for any data distribution (e.g. Vapnik-Chervonenkis dimension and Rademacher complexity).
The originality of the physicists' approach, along with the heuristic character of the derivations of mean-field approximations, may nonetheless explain the minor impact of their theoretical contributions in the machine learning community at the time.

\\paragraph{Mean-field algorithms for practitioners}
Along with these contributions to the statistical mechanics theory of learning, new practical training algorithms based on mean-field approximations were also proposed in the same period (see e.g. \\cite{Wong1995,Opper1996,Wong1997}).
Yet, before the deep learning era, mean-field methods probably had a greater influence on the practice of unsupervised learning through density estimation, where we saw that approximate inference is almost always necessary. In particular the simplest method of naive mean-field, our first example in \\citechap~\\ref{sec:chap3}, was easily adopted and even extended by statisticians (see e.g. \\cite{Wainwright2008} for a recent textbook and \\cite{Blei2017} for a recent review). The belief propagation algorithm is another example of a mean-field method well known to machine learners, as it was actually discovered in both communities.
Yet, for both methods, early applications rarely involved neural networks and rather relied on simple probabilistic models such as mixtures of elementary distributions.
They also did not take full advantage of the latest simultaneous developments in the mean-field theory of disordered systems in statistical physics.

\\paragraph{Transferring advanced mean-field methods}
In this context, the inverse Ising problem has been a notable exception.
The underlying question, rooted in theoretical statistical physics, is to infer the parameters of an Ising model given a set of equilibrium configurations. In machine learning jargon, this is related to the unsupervised learning of the parameters of a Boltzmann machine (without hidden units), although it does not necessarily rely on maximum likelihood estimation using gradients. The corresponding Boltzmann distribution, with pairwise interactions, is remarkable, and not only to physicists: it is the least biased model under the assumption of fixed first and second moments, in the sense that it maximizes the entropy. For this problem, physicists proposed dedicated developments of advanced mean-field methods for applications in other fields, in particular in biophysics (see \\cite{Nguyen2017} for a recent review). A few works even considered the case of Boltzmann machines with hidden units, more common in the machine learning community \\cite{Peterson1987,Galland1993}.

Beyond the specific case of Boltzmann machines, the language barrier between communities is undoubtedly a significant hurdle delaying the transfer of developments from one field to the other.
In machine learning, the potential of the most recent progress in mean-field approximations was advocated in a pioneering workshop mixing the communities in 1999 \\cite{opper2001advanced}. Yet the first widely-used application is possibly the Approximate Message Passing (AMP) algorithm for compressed sensing in 2009 \\cite{Donoho2009}.
Meanwhile, in the different field of Constraint Satisfaction Problems (CSPs), there have been much tighter connections between developments in statistical physics and algorithmic solutions. The very first popular application of advanced mean-field methods outside of physics, beyond naive mean-field and belief propagation, is probably the survey propagation algorithm \\cite{Mezard2002} in 2002. It borrows from the 1RSB cavity method (not treated in the present paper) to efficiently solve certain classes of CSPs.

\\section{Machine learning with neural networks}
\\label{sec:chap1}

Machine learning is traditionally divided into three classes of problems: supervised, unsupervised and reinforcement learning. For all of them, the advent of deep learning techniques, relying on deep neural networks, has brought great leaps forward in terms of performance and opened the way to new applications.
Nevertheless, the remarkably efficient machinery of these algorithms remains full of theoretical puzzles.
This \\citechap~provides fundamental concepts in machine learning for the unfamiliar reader willing to approach the literature at the crossroads of statistical physics and deep learning.
We also take this \\citechap~as an opportunity to introduce the current challenges in building a strong theoretical understanding of deep learning.
A comprehensive reference is \\cite{Goodfellow2016}, while \\cite{Mehta2018} offers a broad introduction to machine learning specifically addressed to physicists.

\\subsection{Supervised learning}
\\paragraph{Learning under supervision}
Supervised learning aims at discovering systematic input to output mappings from examples.
Classification is a typical supervised learning problem: for instance, from a set of pictures of cats and dogs labelled accordingly, the goal is to find a function able to predict the species of the pet displayed in any new picture.

In practice, the \\emph{training set} is a collection of $P$ example pairs $\\cD = \\{\\x\\kk, \\y\\kk\\}_{k=1}^P$ from an input data space $\\mathcal{X} \\subseteq \\R^N$ and an output data space $\\mathcal{Y} \\subseteq \\R^M$. Formally, they are assumed to be i.i.d. samples from a joint distribution $p(\\x,\\y)$. The predictor $h$ is chosen by a training algorithm from a \\emph{hypothesis class}, a set of functions from $\\mathcal{X}$ to $\\mathcal{Y}$, so as to minimize the error on the training set. This error is formalized as the \\emph{empirical risk}
\\begin{gather}
 \\hat\\cR(h, \\ell, \\cD) = \\frac{1}{P}\\sum_{k=1}^P \\ell(\\y\\kk, h(\\x\\kk)) ,
\\end{gather}
where the definition involves a loss function $\\ell: \\mathcal{Y} \\times \\mathcal{Y} \\rightarrow \\R$ measuring differences in the output space.
This learning objective nevertheless does not guarantee \\emph{generalization}, i.e. the ability of the predictor $h$ to be accurate on inputs $\\x$ that are not in the training set. It is a surrogate for the ideal, but unavailable, \\emph{population risk}
\\begin{gather}
 \\cR(h, \\ell) = \\E_{\\x, \\y} \\left[ \\ell(\\y, h(\\x))\\right] = \\int_{\\mathcal{X}, \\mathcal{Y}} \\dd \\x \\dd \\y p(\\x, \\y) \\ell(\\y, h(\\x)),
\\end{gather}
expressed as an expectation over the joint distribution $p(\\x,\\y)$.
The different choices of hypothesis classes and training algorithms yield the now crowded zoo of supervised learning algorithms.

\\paragraph{Representation ability of deep neural networks}
In the context of supervised learning, deep neural networks enter the picture in the role of a parametrized hypothesis class. Let us first quickly recall the simplest network, the \\emph{perceptron}, comprising only a single neuron. It is formalized as a function from $\\R^{N}$ to $\\mathcal{Y} \\subset \\R$ applying an activation function $f$ to a weighted sum of its inputs shifted by a bias $b \\in \\R$,
\\begin{gather}
 \\label{eq:chap1-perceptron}
 \\hat{y} = h_{\\vect{w}, b}(\\x) = f(\\vect{w}\\T \\x + b)
\\end{gather}
where the weights are collected in the vector $\\vect{w} \\in \\R^N$. From a practical standpoint, this very simple model can only solve the classification of linearly separable groups (see \\citefig~\\ref{fig:chap1-perceptron}). Yet from the point of view of learning theory, it has been the starting point of a rich statistical physics literature that will be discussed in \\citesec~\\ref{sec:chap1-nn-and-mf}. A minimal numerical sketch of this hypothesis class and of the empirical risk is given below.

\\begin{figure}
 \\centering
 \\includegraphics[width=0.9\\textwidth]{perceptron.pdf}
 \\caption{Let us assume we wish to classify data points $\\x \\in \\R^2$ with labels $y=\\pm 1$. We choose as a hypothesis class the perceptron sketched on the left with sign activation. For a given weight vector $\\vect{w}$ and bias $b$, the plane is divided by a decision boundary assigning labels. If the training data are linearly separable, then it is possible to perfectly predict all the labels with the perceptron; otherwise it is impossible. \\label{fig:chap1-perceptron}}
\\end{figure}
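The sketch below illustrates the perceptron \\eqref{eq:chap1-perceptron} with a sign activation and the evaluation of the empirical risk under the 0-1 loss; the synthetic data, the planted rule and all names are illustrative choices of ours.
\\begin{verbatim}
import numpy as np

# Minimal sketch: perceptron hypothesis class and empirical risk under the
# 0-1 loss, on synthetic linearly separable data (illustrative choices).

def perceptron(w, b, X):
    return np.sign(X @ w + b)            # h_{w,b}(x) = f(w^T x + b), f = sign

def empirical_risk_01(y, y_hat):
    return np.mean(y != y_hat)           # (1/P) sum_k loss(y_k, h(x_k))

rng = np.random.default_rng(0)
P, N = 200, 2
X = rng.normal(size=(P, N))
w_star, b_star = np.array([1.0, -2.0]), 0.3   # a planted separating rule
y = np.sign(X @ w_star + b_star)              # labels, linearly separable

w, b = rng.normal(size=N), 0.0                # a random candidate predictor
print("risk of random predictor:", empirical_risk_01(y, perceptron(w, b, X)))
print("risk of planted rule:    ", empirical_risk_01(y, perceptron(w_star, b_star, X)))
\\end{verbatim}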
Combining several neurons into networks defines more complex functions.
The universal approximation theorem \\cite{Cybenko1989, Hornik1991} proves that the following two-layer network architecture can approximate any well-behaved function with a finite number of neurons,
\\begin{gather}
 \\hat{y} = h_{\\theta}(\\x) = {\\vect{w}^{(2)}}\\T f(\\W^{(1)} \\x + \\vect{b}) = \\sum_{\\alpha = 1}^M w^{(2)}_\\alpha f({\\vect{w}_\\alpha^{(1)}}\\T\\x + b_\\alpha), \\qquad \\theta = \\{\\vect{w}^{(2)}, \\W^{(1)}, \\vect{b}\\}
\\end{gather}
for $f$ a bounded, non-constant, continuous scalar function, acting component-wise. In the language of deep learning this network has one hidden layer of $M$ units. Input weight vectors $\\vect{w}^{(1)}_\\alpha \\in \\R^N$ are collected in a weight matrix $\\W^{(1)} \\in \\R^{M \\times N}$. Here, and in the following, the notation $\\theta$ is used as short for the collection of adjustable parameters. The universal approximation theorem is a strong result in terms of the representative power of neural networks, but it is not constructive.
It does not quantify the size of the network, i.e. the number $M$ of hidden units, needed to approximate a given function, nor does it prescribe how to obtain the values of the parameters $\\vect{w}^{(2)}, \\W^{(1)}$ and $\\vect{b}$ for the optimal approximation. While the construction of an approximation theory is still ongoing (see e.g. \\cite{Grohs2019}), practice, led by empirical considerations, has demonstrated the efficiency of neural networks.

In applications, neural networks with multiple hidden layers, deep neural networks, are preferred. A generic neural network of \\emph{depth} $L$ is the function
\\begin{gather}
 \\label{eq:chap1-dnn}
 \\hat{\\y} = h_{\\theta}(\\x) = f(\\W^{(L)}f(\\W^{(L-1)} \\cdots f(\\W^{(1)} \\x + \\vect{b}^{(1)}) \\cdots +\\vect{b}^{(L-1)})+ \\vect{b}^{(L)}), \\\\
 \\label{eq:chap1-dnn-theta}
 \\theta = \\{\\W^{(l)} \\in \\R^{N_{l} \\times N_{l-1}} , \\, \\vect{b}^{(l)} \\in \\R^{N_l} \\;; \\; l=1 \\cdots L\\},
\\end{gather}
where $N_0 = N$ is the dimension of the input and $N_L = M$ is the dimension of the output. The architecture is fixed by specifying the number of neurons, or \\emph{width}, of the hidden layers $\\{N_l\\}_{l=1}^{L-1}$. The hidden layer activations can be denoted $\\hid^{(l)} \\in \\R^{N_l}$ and follow the recursion
\\begin{gather}
 \\label{eq:chap1-dnn-rec1}
 \\hid^{(1)} = f(\\W^{(1)} \\x + \\vect{b}^{(1)}) \\, , \\\\
 \\label{eq:chap1-dnn-rec2}
 \\hid^{(l)} = f(\\W^{(l)} \\hid^{(l-1)} + \\vect{b}^{(l)}) \\, , \\quad l = 2 \\cdots L-1 \\, ,\\\\
 \\label{eq:chap1-dnn-rec3}
 \\hat{\\y} = f(\\W^{(L)} \\hid^{(L-1)} + \\vect{b}^{(L)}) \\, .
\\end{gather}
Fixing the activation functions and the architecture of a neural network defines a hypothesis class. It is crucial that activations introduce non-linearities; the most common are the hyperbolic tangent tanh and the rectified linear unit defined as $\\mathrm{relu}(x)= \\max(0,x)$. Note that it is also possible to define stochastic neural networks by using noisy activation functions, uncommon in supervised learning applications except at training time so as to encourage generalization \\cite{Poole2014, Srivastava2014}. A direct implementation of the recursion \\eqref{eq:chap1-dnn-rec1}-\\eqref{eq:chap1-dnn-rec3} is sketched below.
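The following minimal sketch implements the forward pass of a generic depth-$L$ network with relu activations; widths, initialization and names are illustrative assumptions.
\\begin{verbatim}
import numpy as np

# Minimal sketch of the forward pass of a depth-L network, following the
# recursion (rec1)-(rec3) with relu activations (shapes are illustrative).

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases, f=relu):
    """weights[l] has shape (N_{l+1}, N_l); returns the output y_hat."""
    h = x
    for W, b in zip(weights, biases):
        h = f(W @ h + b)          # h^(l) = f(W^(l) h^(l-1) + b^(l))
    return h

rng = np.random.default_rng(0)
sizes = [4, 16, 16, 2]            # N_0 = 4 inputs, two hidden layers, N_L = 2
weights = [rng.normal(0, 1/np.sqrt(n), size=(m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.normal(size=4), weights, biases))
\\end{verbatim}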
An originally proposed intuition for the advantage of depth is that it enables treating information in a hierarchical manner, either looking at different scales in different layers, or learning more and more abstract representations \\cite{Bengio2013}. Nevertheless, a clear theoretical understanding of why, in practice, `the deeper the better' is still an ongoing direction of research (see e.g. \\cite{Telgarsky2016, Daniely2017, Safran2019}).

\\paragraph{Neural network training}
Given an architecture defining $h_\\theta$, the supervised learning objective is to minimize the empirical risk $\\hat \\cR$ with respect to the parameters $\\theta$. This optimization problem lives in a space whose dimension is the number of parameters, which can range from tens to millions. The idea underlying the majority of training algorithms is to perform a gradient descent (GD) starting at parameters drawn randomly from an initialization distribution:
\\begin{gather}
 \\theta_0 \\sim p_{\\theta_0}(\\theta_0) \\\\
 \\theta_{t+1} \\leftarrow \\theta_t - \\eta \\nabla_\\theta \\hat \\cR = \\theta_t - \\eta \\frac{1}{P}\\sum_{k=1}^P \\nabla_\\theta \\ell\\left(\\y\\kk, h_{\\theta_t}\\left(\\x\\kk\\right)\\right) \\,.
\\end{gather}
The parameter $\\eta$ is the learning rate, controlling the size of the step in the direction of decreasing gradient per iteration. The computation of the gradients can be performed in time scaling linearly with depth by applying the chain rule of derivatives, leading to the \\emph{back-propagation} algorithm \\cite{Goodfellow2016}. A popular alternative to gradient descent is stochastic gradient descent (SGD), where the sum over the gradients for the entire training set is replaced by the sum over a small number of samples, randomly selected at each step \\cite{RobbinsHMonro1951,Bottou2010}. A minimal sketch of such a training loop is given below.
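For illustration, the sketch below trains a single-neuron network with a smooth activation by stochastic gradient descent on a squared loss, for which the gradient is available in closed form; for deep networks the same loop would obtain gradients by back-propagation. The data, loss and hyperparameters are arbitrary choices of ours.
\\begin{verbatim}
import numpy as np

# Minimal sketch of an SGD training loop on a single neuron with tanh
# activation and squared loss (closed-form gradient; choices illustrative).

def predict(w, b, X):
    return np.tanh(X @ w + b)

def sgd_step(w, b, X, y, eta):
    """One step on the empirical risk with squared loss, batch (X, y)."""
    p = predict(w, b, X)
    err = (p - y) * (1.0 - p**2)          # chain rule through tanh
    return w - eta * X.T @ err / len(y), b - eta * err.mean()

rng = np.random.default_rng(0)
P, N, eta = 500, 10, 0.5
X = rng.normal(size=(P, N))
y = np.sign(X @ rng.normal(size=N))       # labels from a planted rule

w, b = 0.01 * rng.normal(size=N), 0.0     # theta_0 drawn at random
for t in range(200):                      # mini-batches of 50 samples
    idx = rng.choice(P, size=50, replace=False)
    w, b = sgd_step(w, b, X[idx], y[idx], eta)
print("training error:", np.mean(np.sign(predict(w, b, X)) != y))
\\end{verbatim}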
During the training iterations, one typically monitors the \\emph{training error} (another name for the empirical risk given a training data set) and the \\emph{validation error}. The latter corresponds to the empirical risk computed on a set of points held out from the training set, the validation set, to assess the generalization ability of the model, either during training or in order to select training hyperparameters such as the value of the learning rate. A posteriori, the performance of the model is judged from the \\emph{generalization error}, which is evaluated on the never-seen \\emph{test set}.
While two different training algorithms (e.g. GD vs SGD) may achieve zero training error, they may differ in the level of generalization they typically reach.

\\paragraph{Open questions and challenges}
Building on the fundamental concepts presented in the previous paragraphs, practitioners managed to bring deep learning to unanticipated performances in the automatic processing of images, speech and text (see \\cite{LeCun2015a} for a review).
Still, many of the greatest successes in the field of neural networks were obtained using ingenious tricks while many fundamental theoretical questions remain unresolved.

Regarding optimization first, (S)GD training generally discovers parameters close to zero risk. Yet gradient descent is guaranteed to converge to the neighborhood of a global minimum only for a convex function, and is otherwise expected to get stuck in a local minimum. Therefore, the efficiency of gradient-based optimization is a priori a paradox, given that the empirical risk $\\hat \\cR$ is non-convex in the parameters $\\theta$.
Second, the generalization ability of deep neural networks trained by (S)GD is still poorly understood.
The size of training data sets is limited by the cost of labelling by humans, experts or heavy computations.
Thus training a deep and wide network amounts in practice to fitting a model of millions of degrees of freedom against a comparatively small number of data points. Nevertheless it does not systematically lead to \\emph{overfitting}. The resulting neural networks can have surprisingly good predictions both on inputs seen during training and on new inputs \\cite{Zhang2017}.
Results in the literature that relate the size and architecture of a network to a measure of its ability to generalize are too far from realistic settings to guide the choices of practitioners.
On the one hand, traditional bounds in statistics, considering worst cases, appear overly pessimistic \\cite{Vapnik2000,Bartlett2002,Shalev-Shwartz2014,Abbara2019}. On the other hand, historical statistical physics analyses of learning, briefly reviewed in \\citesec~\\ref{sec:chap1-nn-and-mf}, only concern simple architectures and synthetic data. This lack of theory results in potentially important waste: in terms of time lost by engineers in trial and error to optimize their solution, and in terms of electrical resources used to train and re-train possibly oversized networks while storing potentially unnecessarily large training data sets.

The success of deep learning, beyond these apparent theoretical puzzles, certainly lies in the interplay of advantageous properties of training algorithms, the neural network hypothesis class and structures in typical data (e.g. real images, conversations). Disentangling the role of the different ingredients is a very active line of research (see \\cite{Giryes2016} for a review).

\\subsection{Unsupervised learning}
\\label{sec:chap1-unsupervised}
\\paragraph{Density estimation and generative modelling}
The goal of unsupervised learning is to directly extract structure from data.
Compared to the supervised learning setting, the training data set is made of a set of example inputs $\\cD = \\{\\vect{x}\\kk\\}_{k=1}^P$ without corresponding outputs. A simple example of unsupervised learning is clustering, consisting in the discovery of unlabelled subgroups in the training data.
Most unsupervised learning algorithms either implicitly or explicitly adopt a probabilistic viewpoint and implement \\emph{density estimation}. The idea is to approximate the true density $p(\\x)$ from which the training data was sampled by the closest (in various senses) element among a family of parametrized distributions over the input space $\\{ p_\\theta(.),\\; \\theta \\in \\R^{N_\\theta} \\}$. The selected $p_{\\theta}$ is then a model of the data.
If the model $p_{\\theta}$ is easy to sample, it can be used to generate new inputs comparable to the training data points, which leads to the terminology of \\emph{generative models}. In this context, \\emph{unsupervised deep learning} exploits the representational power of deep neural networks to create sophisticated candidate $p_\\theta$.

A common formalization of the learning objective is to maximize the \\emph{likelihood}, defined as the probability of i.i.d.
draws from the model $p_\\theta$ to have generated the training data $\\cD = \\{\\vect{x}\\kk\\}_{k=1}^P$, or equivalently its logarithm,
\\begin{gather}
 \\maxx{\\theta} \\prod_{k=1}^P p_\\theta(\\x\\kk) \\quad \\iff \\quad \\maxx{\\theta} \\sum_{k=1}^P \\log p_\\theta(\\x\\kk).
\\end{gather}
The second, additive, logarithmic formulation is generally preferred.
It can be interpreted as the minimization of the Kullback-Leibler divergence between the empirical distribution $p_\\cD(\\x) = \\sum_{k=1}^P \\delta(\\x - \\x\\kk) / P$ and the model $p_\\theta$:
\\begin{gather}
 \\minn{\\theta} \\KL(p_\\cD || p_\\theta) = \\minn{\\theta} \\int \\dd{\\x} p_\\cD(\\x) \\log \\frac{p_\\cD(\\x) }{p_\\theta(\\x)} \\quad \\iff \\quad \\maxx{\\theta} \\sum_{k=1}^P \\log p_\\theta(\\x\\kk) \\,,
\\end{gather}
although considering the divergence with the discrete empirical measure is slightly abusive.
The details of the optimization algorithm depend on the specification of $p_\\theta$. As we will see, the likelihood is itself often intractable and learning consists in a gradient ascent on, at best, a lower bound, otherwise an approximation, of the likelihood.

A few years ago, an alternative strategy called adversarial training was introduced by \\cite{Goodfellow2014}. Here an additional trainable model called the discriminator, for instance parametrized by $\\phi$ and denoted $d_\\phi(\\cdot)$, computes the probability for points in the input space $\\mathcal{X}$ of belonging to the training set $\\cD$ rather than being generated by the model $p_\\theta(\\cdot)$. The parameters $\\theta$ and $\\phi$ are trained simultaneously such that the generator learns to fool the discriminator and the discriminator learns not to be fooled by the generator. The optimization problem usually considered is
\\begin{gather}
 \\minn{\\theta}\\maxx{\\phi} \\EE{\\cD}{\\log(d_\\phi(\\x))} + \\EE{p_\\theta}{\\log(1-d_\\phi(\\x))} \\, ,
\\end{gather}
where the sum of the expected log-probabilities, according to the discriminator, for examples in $\\cD$ to be drawn from $\\cD$ and for examples generated by the model not to be drawn from $\\cD$ is maximized with respect to $\\phi$ and minimized with respect to $\\theta$.

In the following, we present two classes of generative models based on neural networks.

\\paragraph{Deep Generative Models}
\\label{sec:chap1-vae}
A deep generative model defines a density $p_\\theta$ obtained by propagating a simple distribution through a deep neural network. It can be formalized by introducing a latent variable $\\z \\in \\R^N$ and a deep neural network $h_\\theta$ similar to \\eqref{eq:chap1-dnn} of input dimension $N$. The generative process is then
\\begin{gather}
 \\label{eq:chap1-dgm-1}
 \\z \\sim p_z(\\z) \\\\
 \\label{eq:chap1-dgm-2}
 \\x \\sim p_\\theta(\\x |\\z) = p_{\\rm out}(\\x| h_\\theta(\\z)),
\\end{gather}
where $p_z$ is typically a factorized distribution on $\\R^N$ that is easy to sample (e.g. a standard normal distribution), and $p_{\\rm out}(.|h_\\theta(\\z))$ is for instance a multivariate Gaussian distribution with mean and covariance that are functions of $h_\\theta(\\z)$. A minimal sketch of this generative process is given below.
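The sketch assumes a standard normal $p_z$, a small relu network for $h_\\theta$ and an isotropic Gaussian $p_{\\rm out}$; all dimensions, the noise scale and names are illustrative.
\\begin{verbatim}
import numpy as np

# Minimal sketch of the generative process (dgm-1)-(dgm-2): a latent draw
# pushed through a network h_theta, then Gaussian observation noise.
# sigma_out is an assumed fixed output noise scale (illustrative).

def relu(x):
    return np.maximum(0.0, x)

def sample(weights, biases, latent_dim, sigma_out=0.1, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.normal(size=latent_dim)       # z ~ p_z = N(0, I)
    h = z
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)               # h_theta(z)
    return h + sigma_out * rng.normal(size=h.shape)  # x ~ N(h_theta(z), s^2 I)

rng = np.random.default_rng(0)
sizes = [2, 32, 8]                        # latent dim 2, output dim 8
weights = [rng.normal(0, 1/np.sqrt(n), size=(m, n))
           for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(sample(weights, biases, latent_dim=2, rng=rng))
\\end{verbatim}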
The motivation to consider this class of models for joint distributions is threefold. First, the class is highly expressive. Second, it follows from the intuition that data sets live on low-dimensional manifolds, which here can be spanned by varying the latent representation $\\z$, whose dimension is usually much smaller than that of the input space (for further intuition, see also the reconstruction objective of the first autoencoders, e.g. Chapter 14 of \\cite{Goodfellow2016}). Third, and perhaps more importantly, the class can easily be optimized over using back-propagation, unlike the Restricted Boltzmann Machines presented in the next paragraph, which deep generative models have largely replaced.
There are two main types of deep generative models: Generative Adversarial Networks (GANs) \\cite{Goodfellow2014}, trained following the adversarial objective mentioned above, and Variational AutoEncoders (VAEs) \\cite{Kingma2014, Rezende2014}, trained to maximize a likelihood lower bound.

\\subparagraph{Variational AutoEncoders}
The computation of the likelihood of one training sample $\\x\\kk$ for a deep generative model \\eqref{eq:chap1-dgm-1}-\\eqref{eq:chap1-dgm-2} then requires the marginalization over the latent variable $\\z$,
\\begin{gather}
 p_\\theta(\\x) = \\int \\dd{\\z} p_{\\rm out}(\\x | h_\\theta(\\z)) p_z(\\z).
\\end{gather}
This multidimensional integral cannot be performed analytically in the general case. It is also hard to evaluate numerically, as it does not factorize over the dimensions of $\\z$, which are mixed by the neural network $h_\\theta$.
Yet a lower bound on the log-likelihood can be defined by introducing a tractable conditional distribution $q(\\z |\\x)$ that will play the role of an approximation of the intractable \\emph{posterior} distribution $p_{\\theta}(\\z | \\x)$ implicitly defined by the model. By Jensen's inequality applied to the concave logarithm,
\\begin{align}
 \\log p_{\\theta}(\\x)
 & \\geq \\int \\dd{\\z} q(\\z | \\x) \\left[ - \\log q(\\z | \\x) + \\log p_{\\theta}(\\x, \\z) \\right] = \\mathrm{LB}(q, \\theta, \\x) \\label{eq:chap1-vae-lb}.
\\end{align}
Maximum likelihood learning is then approached by the maximization of the lower bound $\\mathrm{LB}(q, \\theta, \\x)$, which in practice requires parametrizing the tractable posterior $q = q_{\\phi}$, typically with a neural network.
Using the so-called re-parametrization trick \\cite{Kingma2014,Rezende2014}, the gradients of $\\mathrm{LB}(q_\\phi, \\theta, \\x)$ with respect to $\\theta$ and $\\phi$ can be approximated by Monte Carlo estimates, so that the likelihood lower bound can be optimized by gradient ascent.

\\subparagraph{Generative Adversarial Networks}
The principle of adversarial training was designed directly for a deep generative model \\cite{Goodfellow2014}. Using a deep neural network to parametrize the discriminator $d_\\phi(\\cdot)$ as well as the generator $p_\\theta(\\cdot)$, it leads to a remarkable quality of produced samples and is now one of the most studied generative models.

\\paragraph{Restricted Boltzmann Machines}
Models described in the preceding paragraphs comprised only \\emph{feed forward} neural networks. In feed forward neural networks, the state or value of successive layers is determined following the recursion \\eqref{eq:chap1-dnn-rec1}-\\eqref{eq:chap1-dnn-rec3}, in one pass from inputs to outputs.
Boltzmann machines instead involve \\emph{undirected} neural networks, which consist of stochastic neurons with symmetric interactions.
The probability law associated with a neuron state is a function of the neighboring neurons, themselves reciprocally functions of the first neuron.
Sampling a configuration therefore requires an equilibration in place of a simple forward pass.

A Restricted Boltzmann Machine (RBM) \\cite{Ackley1985, Smolensky186} with $M$ hidden neurons in practice defines a joint distribution over an input (or visible) layer $\\x \\in \\{0,1\\}^N$ and a hidden layer $\\hidd \\in \\{0,1\\}^M$,
\\begin{gather}
\\label{eq:chap1-rbm-meas}
p_\\theta(\\x, \\hidd) = \\frac{1}{\\cZ} e^{\\vect{a}\\T \\x + \\vect{b}\\T \\hidd + \\x\\T\\W\\hidd} \\, , \\qquad \\theta = \\{ \\W, \\vect{a}, \\vect{b} \\} \\, ,
\\end{gather}
where $\\cZ$ is the normalization factor, similar to the partition function of statistical physics. The parametric density model over inputs is then the marginal $p_\\theta(\\x) = \\sum_{\\hidd \\in \\{0,1\\}^M} p_\\theta(\\x, \\hidd)$. Although seemingly very similar to pairwise Ising models, the introduction of hidden units provides a greater representative power to RBMs, as hidden units can mediate interactions between arbitrary groups of input units. Furthermore, they can be generalized to Deep Boltzmann Machines (DBMs) \\cite{Salakhutdinov2009}, where several hidden layers are stacked on top of each other.

Like VAEs, RBMs can represent sophisticated distributions at the cost of an intractable likelihood. Indeed the summation over $2^{M+N}$ terms in the partition function cannot be simplified by an analytical trick and is only realistically doable for small models.
RBMs are commonly trained through a gradient ascent of the likelihood using approximated gradients. As an exact Monte Carlo evaluation is a costly operation that would need to be repeated at each parameter update in the gradient ascent, several more or less sophisticated approximations are preferred: contrastive divergence (CD) \\cite{Hinton2002}, its persistent variant (PCD) \\cite{Tieleman2008} or even parallel tempering \\cite{Desjardins2010,Cho2010}. A minimal sketch of a CD update is given below.
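For concreteness, the sketch below implements one CD-1 parameter update for the binary RBM \\eqref{eq:chap1-rbm-meas}, where a single Gibbs step replaces the intractable model average in the likelihood gradient; the batch size, learning rate and placeholder data are arbitrary choices.
\\begin{verbatim}
import numpy as np

# Minimal sketch of one contrastive divergence (CD-1) update for the RBM
# (rbm-meas) with binary units. A single Gibbs step approximates the
# intractable model average in the likelihood gradient. Names illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, eta, rng):
    """v0: batch of visible configurations, shape (P, N)."""
    ph0 = sigmoid(b + v0 @ W)                 # p(h_j = 1 | v0)
    h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample hidden layer
    pv1 = sigmoid(a + h0 @ W.T)               # reconstruction p(v | h0)
    v1 = (rng.random(pv1.shape) < pv1) * 1.0
    ph1 = sigmoid(b + v1 @ W)
    P = v0.shape[0]
    W += eta * (v0.T @ ph0 - v1.T @ ph1) / P  # positive minus negative phase
    a += eta * (v0 - v1).mean(axis=0)
    b += eta * (ph0 - ph1).mean(axis=0)
    return W, a, b

rng = np.random.default_rng(0)
N, M = 20, 10
W = rng.normal(0, 0.01, size=(N, M))
a, b = np.zeros(N), np.zeros(M)
v_data = (rng.random((100, N)) < 0.3) * 1.0   # placeholder training batch
W, a, b = cd1_update(v_data, W, a, b, eta=0.05, rng=rng)
\\end{verbatim}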
RBMs were the first effective generative models using neural networks. They found applications in various domains including dimensionality reduction \\cite{Hinton2006a}, classification \\cite{larochelle2008classification}, collaborative filtering \\cite{salakhutdinov2007restricted}, feature learning \\cite{coates2011analysis}, and topic modeling \\cite{hinton2009replicated}.
Used for an unsupervised pre-training of deep neural networks layer by layer \\cite{Hinton2006,Bengio2007}, they also played a crucial role in the take-off of supervised deep learning.

\\paragraph{Open questions and challenges}
Generative models involving neural networks, such as VAEs, GANs and RBMs, have great expressive power at the cost of not being amenable to exact treatment. Their training, and sometimes even their sampling, requires approximations. From a practical standpoint, whether these approximations can be made either more accurate or less costly is an open direction of research.
Another important related question is the evaluation of the performance of generative models \\cite{Sajjadi2018}. To start with, the objective function of training is very often itself intractable (e.g. the likelihood of a VAE or an RBM), and beyond this objective, the unsupervised setting does not define a priori a test task for the performance of the model.
Additionally, unsupervised deep learning inherits some of the theoretical puzzles already discussed in the supervised learning section. In particular, assessing the difficulty of representing a distribution and of selecting a sufficient minimal model and/or training data set is an ongoing research effort.

\\section{Statistical inference and the statistical physics approach}
\\label{sec:chap2}

To tackle the open questions and challenges surrounding neural networks mentioned in the previous \\citesec, we need to manipulate high-dimensional probability distributions. The generic concept of statistical inference refers to the extraction of useful information from these complicated objects. Statistical physics, with its probabilistic interpretation of natural systems composed of many elementary components, is naturally interested in similar questions.
We provide in this section a few concrete examples of inference questions arising in neural networks and make explicit how statistical physics enters the picture. In particular, the theory of disordered systems appears here especially relevant.

\\subsection{Statistical inference}
\\label{sec:chap2-stat-inf}

\\subsubsection{Some inference questions in neural networks for machine learning}
\\label{sec:chap2-teacher-student}

\\paragraph{Inference in generative models}
Generative models used for unsupervised learning are statistical models defining high-dimensional distributions with complex dependencies. As we have seen in \\citesec~\\ref{sec:chap1-unsupervised}, the most common training objective in unsupervised learning is the maximization of the log-likelihood, i.e. the log of the probability assigned by the generative model to the training set $\\{\\x\\kk\\}_{k=1}^{P}$. Computing the probability of observing a given sample $\\x\\kk$ is an inference question. It requires marginalizing over all the hidden representations of the problem. For instance in the RBM \\eqref{eq:chap1-rbm-meas},
\\begin{gather}
 p_\\theta(\\x\\kk) = \\frac{1}{\\cZ} \\sum_{\\hidd \\in \\{0,1\\}^M} e^{\\vect{a}\\T \\x\\kk + \\vect{b}\\T \\hidd + \\x\\kk\\T\\W\\hidd}.
\\end{gather}
While the numerator is easy to evaluate, the partition function has no analytical expression and its exact evaluation requires summing over all possible states of the network.

\\paragraph{Learning as statistical inference: Bayesian inference and the teacher-student scenario}

The practical problem of training neural networks from data as introduced in \\citechap~\\ref{sec:chap1} is not in general interpreted as inference. To do so, one needs to treat the learnable parameters as random variables, which is the case in Bayesian learning. For instance in supervised learning, an underlying prior distribution $p_\\theta(\\theta)$ for the weights and biases of a neural network \\eqref{eq:chap1-dnn}-\\eqref{eq:chap1-dnn-theta} is assumed, so that Bayes rule defines a posterior distribution given the training data $\\mathcal{D}$,
\\begin{align}
p(\\theta | \\mathcal{D}) & = \\frac{p(\\mathcal{D} | \\theta) p_\\theta(\\theta)}{p(\\mathcal{D})}.
\\end{align}
Compared to the single output of risk minimization, we obtain an entire distribution for the learned parameters $\\theta$, which takes into account not only the training data but also some knowledge on the structure of the parameters (e.g. sparsity) through the prior. In practice, Bayesian learning and traditional empirical risk minimization may not be so different. On the one hand, the Bayesian posterior distribution is often summarized by a point estimate such as its maximum. On the other hand, risk minimization is often biased towards desired properties of the weights through regularization techniques (e.g. promoting small norms), recalling the role of the Bayesian prior.

However, from a theoretical point of view, Bayesian learning is of particular interest in the \\emph{teacher-student} scenario. The idea here is to consider a toy model of the learning problem where parameters are effectively drawn from a prior distribution.
Let us use as an illustration the case of the supervised learning of the perceptron model \\eqref{eq:chap1-perceptron}. We draw a weight vector $\\vect{w}_0$ from a prior distribution $p_w(\\cdot)$, along with a set of $P$ inputs $\\{\\x\\kk\\}_{k=1}^{P}$ i.i.d. from a data distribution $p_x(\\cdot)$. Using this \\emph{teacher} perceptron model we also draw a set of possibly noisy corresponding outputs $y\\kk$ from a teacher conditional probability $p(.| \\vect{w}_0\\T\\x\\kk)$. From the training set of the $P$ pairs $\\mathcal{D} = \\{\\x\\kk, y\\kk\\}_{k=1}^{P}$, one can attempt to rediscover the teacher rule by training a \\emph{student} perceptron model.
The problem can equivalently be phrased as a reconstruction inference question: can we recover the value of $\\vect{w}_0$ from the observations in $\\mathcal{D}$? The Bayesian framework yields a posterior distribution of solutions,
\\begin{gather}
 p(\\vect{w}| \\mathcal{D}) = p_w(\\vect{w}) \\prod_{k=1}^P p(y\\kk| \\vect{w}\\T\\x\\kk) \\, / \\, p(\\y | \\X),
\\end{gather}
where the vector $\\y$ gathers the outputs $y\\kk$ and the matrix $\\X$ gathers the inputs $\\x\\kk$. A minimal sketch of this data-generating process is given below.
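The sketch assumes a Gaussian prior and input distribution and a simple label-flipping noise channel; these particular choices, and all names, are ours for illustration.
\\begin{verbatim}
import numpy as np

# Minimal sketch of the teacher half of the teacher-student scenario for
# the perceptron: teacher weights from the prior, i.i.d. inputs, labels
# through a (possibly noisy) teacher channel. Choices are illustrative.

rng = np.random.default_rng(0)
N, P, flip = 100, 400, 0.05

w0 = rng.normal(size=N)              # teacher weights, w0 ~ p_w = N(0, I)
X = rng.normal(size=(P, N))          # inputs, x ~ p_x = N(0, I)
y = np.sign(X @ w0 / np.sqrt(N))     # noiseless teacher outputs
noise = rng.random(P) < flip         # noisy channel: random label flips
y[noise] *= -1

# Student's task: reconstruct w0 from D = (X, y). For a matched student,
# p(w | D) is proportional to p_w(w) * prod_k p(y_k | w.x_k).
\\end{verbatim}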
Note that the terminology of teacher-student applies for a generic inference problem of reconstruction: the statistical model used to generate the data, along with the realization of the unknown signal, is called the \\emph{teacher}; the statistical model assumed in order to perform the reconstruction of the signal is called the \\emph{student}. When the two models are identical or matched, the inference is \\emph{Bayes optimal}. When the teacher model is not perfectly known, the statistical models can also be different (from slightly differing prior distributions to entirely different models), in which case they are said to be mismatched, and the reconstruction is suboptimal.

Of course, in practical machine learning applications of neural networks, one only has access to an empirical distribution of the data, and it is unclear whether there should exist a formal rule underlying the input-output mapping.
Yet the teacher-student setting is a modelling strategy of learning which offers interesting possibilities of analysis, and we shall refer to numerous works resorting to this setup in \\citesec~\\ref{sec:chapex-all}.

\\subsubsection{Answering inference questions}
\\label{sec:chap2-challenges-inf}

Many inference questions in the line of the ones mentioned in the previous \\citesec~have no tractable exact solution. When there exists no analytical closed form, computations of averages and marginals require summing over configurations. Their number typically scales exponentially with the size of the system, becoming astronomically large for high-dimensional models.
Hence it is necessary to design approximate inference strategies. They may require an algorithmic implementation but must run in finite (typically polynomial) time.
An important cross-fertilization between statistical physics and information sciences has taken place over the past decades around questions of inference. Two major classes of such algorithms are Markov Chain Monte Carlo (MCMC) methods and mean-field methods. The former are nicely reviewed in the context of statistical physics in \\cite{Krauth2006}. The latter will be the focus of this short review, in the context of deep learning.

Note that representations of joint probability distributions through probabilistic graphical models and factor graphs are crucial tools to design efficient inference strategies. In \\citeapp~\\ref{app:chap2-graphs}, we quickly introduce for the unfamiliar reader these two formalisms, which make it possible to encode and exploit independencies between random variables. As examples, \\citefig~\\ref{fig:chap2-graphs} presents graphical representations of the RBM measure \\eqref{eq:chap1-rbm-meas} and of the posterior distribution in the Bayesian learning of the perceptron as discussed in the previous \\citesec.

\\begin{figure}[t]
 \\centering
 \\captionsetup{width=.3\\linewidth}
 \\subfloat[Restricted Boltzmann Machine]{\\includegraphics[width=0.42\\textwidth, valign=m]{chap2_rbm_GM.pdf}
 \\label{fig:chap2-rbm-bis}}
 \\captionsetup{width=.4\\linewidth}
 \\hspace{0.1\\textwidth}
 \\subfloat[Perceptron teacher and student.]{\\includegraphics[width=0.4\\textwidth, valign=m]{chap2_perceptron.pdf}
 \\label{fig:chap2-glm-bis}}
 \\captionsetup{width=.9\\linewidth}
 \\caption{\\textbf{(a)} Undirected probabilistic graphical model (left) and factor graph representation (right). \\textbf{(b)} Left: Directed graphical model of the generative model for the training data knowing the teacher weight vector $\\vect{w}_0$. Right: Factor graph representation of the posterior distribution for the student $p(\\vect{w} | \\X, \\y )$, where the vector $\\y \\in \\R^P$ gathers the outputs $y_{(k)}$ and the matrix $\\X\\in\\R^{N \\times P}$ gathers the inputs $\\x_{(k)}$.\\label{fig:chap2-graphs}}
\\end{figure}

\\subsection{Statistical physics of disordered systems, first appearance on stage}
Here we briefly re-introduce fundamental concepts of statistical physics that will help in understanding the connections with inference and the origin of the methods presented in what follows.

\\paragraph{The thermodynamic limit}

The equilibrium statistics of classical physical systems are described by the Boltzmann distribution.
For a system with $N$ degrees of freedom denoted $\\x \\in \\cX^N$ and an energy functional $E(\\x)$, we have
\\begin{align}
 \\label{eq:chap2-boltzmann}
 p(\\x) = \\frac{e^{-\\beta E(\\x)}}{\\cZ_N}, \\quad \\cZ_N = \\sum_{\\x \\in \\mathcal{X}^N} e^{-\\beta E(\\x)}, \\quad \\beta = 1/k_B T ,
\\end{align}
where we defined the partition function $\\cZ_N$ and the inverse temperature $\\beta$.
To characterize the macroscopic state of the system, an important functional is the free energy
\\begin{align}
 F_N & = - \\log \\cZ_N / \\beta = - \\frac 1 \\beta \\log \\sum_{\\x \\in \\mathcal{X}^N} e^{-\\beta E(\\x)}.
\\end{align}
While the number of available configurations grows exponentially with $N$, considering the \\emph{thermodynamic} limit $N \\to \\infty$ typically simplifies computations due to concentration effects.
Let $e_N = E/N$ be the energy per degree of freedom; the partition function can then be rewritten as a sum over the configurations of a given energy $e_N$,
\\begin{align}
 \\cZ_N = \\sum_{e_N} e^{-N \\beta f_N(e_N)} ,
\\end{align}
where $f_N(e_N)$ is the free energy density of the states of energy $e_N$. This rewriting implies that at large $N$ the states of energy minimizing the free energy are exponentially more likely than any other states. Provided the following limit exists, the statistics of the system are dominated by the former states, and the free energy per degree of freedom is given by the thermodynamic quantity
\\begin{gather}
 f = \\lim_{N \\to \\infty} F_N / N , \\quad \\text{ with } \\cZ_N \\simeq e^{-N \\beta f} \\text{ at large } N.
\\end{gather}
The interested reader will also find a more detailed yet friendly presentation of the thermodynamic limit in \\citesec~2.4 of \\cite{Mezard2009}.

\\paragraph{Disordered systems}

Remarkably, the statistical physics framework can be applied to inhomogeneous systems with \\emph{quenched disorder}. In these systems, interactions are functions of the realization of some random variables. An iconic example is the Sherrington-Kirkpatrick (SK) model \\cite{Sherrington1975}, a fully connected Ising model with random Gaussian couplings $\\mat{J} = (J_{i j})$, that is, where the $J_{ij}$ are drawn independently from a Gaussian distribution. As a result, the energy functional of disordered systems is itself a function of the random variables. For instance here, the energy of a spin configuration $\\x$ is $E(\\x; \\mat{J}) = - \\frac 1 2 \\x\\T \\mat{J} \\x$.
In principle, system properties depend on a given realization of the disorder. In our example, the correlation between two spins $\\langle x_i x_j \\rangle_J$ certainly does. Yet some aggregated properties are expected to be \\emph{self-averaging} in the thermodynamic limit, meaning that they concentrate on their mean with respect to the disorder as the fluctuations are averaged out.
This is the case for the free energy, which formally verifies
\\begin{gather}
 \\lim_{N\\to\\infty} F_{N; \\mat{J}} / N = \\lim_{N\\to\\infty} \\E_{\\mat{J}}[F_{N; \\mat{J}} / N] = f
\\end{gather}
(see e.g. \\cite{Mezard1986, Castellani2005} for discussions of self-averaging in spin glasses).
Thus the typical behavior of complex systems is studied in the statistical physics framework by taking two important conceptual steps: averaging over the realizations of the disorder and considering the thermodynamic limit. These are the starting points for the design of approximate inference methods. A minimal numerical illustration of self-averaging in the SK model is given below.
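The sketch computes the free energy per spin of small SK systems by exhaustive enumeration for several realizations of the couplings; the system size is kept deliberately tiny, as the sum over $2^N$ configurations is exactly the intractability that mean-field methods will bypass. All parameter choices are illustrative.
\\begin{verbatim}
import numpy as np
from itertools import product

# Minimal illustration of self-averaging in the SK model
# E(x; J) = -1/2 x^T J x, x in {-1,+1}^N, couplings of variance ~ 1/N.
# F_N/N is computed by exhaustive enumeration (feasible only for small N).

def free_energy_per_spin(J, beta):
    N = J.shape[0]
    energies = np.array([-0.5 * np.array(x) @ J @ np.array(x)
                         for x in product([-1.0, 1.0], repeat=N)])
    m = energies.min()                       # shift for numerical stability
    logZ = -beta * m + np.log(np.exp(-beta * (energies - m)).sum())
    return -logZ / (beta * N)

rng = np.random.default_rng(0)
N, beta = 12, 1.0
samples = []
for _ in range(10):                          # a few realizations of J
    J = rng.normal(0, 1 / np.sqrt(N), size=(N, N))
    J = (J + J.T) / np.sqrt(2)               # symmetric, variance ~ 1/N
    np.fill_diagonal(J, 0.0)
    samples.append(free_energy_per_spin(J, beta))
print("F_N/N across disorder samples:", np.round(samples, 3))
\\end{verbatim}
The spread across samples already appears small at this modest size, and it shrinks further as $N$ grows. Before turning to an introduction to mean-field approximations, we stress the originality of the statistical physics approach to inference.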
\\paragraph{Statistical physics of inference problems}
Statistical inference questions are mapped to statistical physics systems by interpreting general joint probability distributions as Boltzmann distributions \\eqref{eq:chap2-boltzmann}.
Turning back to our simple examples of \\citesec~\\ref{sec:chap2-stat-inf}, the RBM is trivially mapped, as it directly borrows its definition from statistical physics. We have
\\begin{gather}
 E(\\x, \\hidd ; \\W) = - \\vect{a}\\T \\x - \\vect{b}\\T \\hidd - \\x\\T\\W\\hidd .
\\end{gather}
The inverse temperature parameter can either be considered equal to 1, or treated as a scaling factor of the weight matrix $\\W \\leftarrow \\beta \\W$ and bias vectors $\\vect{a} \\leftarrow \\beta \\vect{a}$ and $\\vect{b} \\leftarrow \\beta \\vect{b} $. The RBM parameters play the role of the disorder. Here the computational hardness in estimating the log-likelihood comes from the estimation of the log-partition function, which is precisely the free energy. In our second example, the estimation of the student perceptron weight vector, the posterior distribution is mapped to a Boltzmann distribution by setting
\\begin{gather}
 E(\\vect{w} ; \\y, \\X) = - \\log \\left[ p(\\y| \\vect{w}\\T\\X) p_w(\\vect{w}) \\right] .
\\end{gather}
The disorder is here materialized by the training data, and the difficulty is to compute $p(\\y | \\X)$, which is again the partition function in the Boltzmann distribution mapping.
Relying on the thermodynamic limit, mean-field methods will provide asymptotic results. Nevertheless, experience shows that the behavior of large finite-size systems is often well explained by their infinite-size limits.

Also, the application of mean-field inference requires assumptions about the distribution of the disorder, which is averaged over. Practical algorithms for arbitrary cases can be derived with ad hoc assumptions, but studying a precise toy statistical model can also bring interesting insights. The simplest choice in most cases is to consider uncorrelated disorder: in the example of the perceptron this corresponds to random input data points with arbitrary random labels. Yet the teacher-student scenario offers many advantages at little extra difficulty. It allows the creation of data sets with structure (the underlying teacher rule).
It also allows one to formalize an analysis of the difficulty of a learning problem and of the performance achieved in solving it. Intuitively, the definition of a ground-truth teacher rule with a fixed number of degrees of freedom sets the minimum information necessary to extract from the observations, or training data, in order to achieve perfect reconstruction. This is an \\emph{information-theoretic limit}.

Furthermore, the assumption of an underlying statistical model enables the measurement of the performance of different learning algorithms over the class of corresponding problems from an average viewpoint. This is in contrast with the traditional approach of computer science of studying the difficulty of a class of problems based on the \\emph{worst} case. This conservative strategy yields strong guarantees of success, yet it may be overly pessimistic compared to the experience of practitioners. Considering a distribution over the possible problems (a.k.a. different realizations of the disorder), average performances are sometimes more informative of \\emph{typical} instances than of worst ones.
For deep learning, this approach may prove particularly interesting, as the traditional bounds, based on the VC-dimension \\cite{Vapnik2000} and Rademacher complexity \\cite{Bartlett2002,Shalev-Shwartz2014}, appear extremely loose when compared to practical examples.

Finally, we must emphasize that the derivations presented here are not mathematically rigorous.
They are based on assumptions believed to be correct, which allow one to push further the understanding of the problems at hand, while formal proofs of these assumptions are possibly much harder to obtain.

\\section{Further extensions of interest for learning}
\\label{sec:chap3further}
In the previous \\citesec~we presented the classical mean-field approximations, focusing on the simple and original examples of the Boltzmann machine (a.k.a. SK model) and the GLM with Gaussian i.i.d. weight matrices. Along the way, we tried to emphasize how the procedures of approximation rely on structural (e.g. connectivity) and statistical properties of the model under scrutiny. In the present \\citesec, we will see that extensions of the message passing and replica methods have now broadened the span of applicability of mean-field approximations. We focus on a selection of recent developments of particular interest for the study of learning problems.

\\subsection{Streaming AMP for online learning}
\\label{sec:chap3-streaming-amp}
In learning applications, it is sometimes advantageous for speed or generalization concerns to only treat a subset of examples at a time, making for instance the SGD algorithm the most popular training algorithm in deep learning. Sometimes also, the size of current data sets may exceed the available memory. Methods implementing a step-by-step learning as the data arrives are referred to as \\emph{online}, \\emph{streaming} or \\emph{mini-batch} learning, as opposed to \\emph{offline} or \\emph{batch} learning.

In \\cite{Manoel2018}, a mini-batch version of the AMP algorithm is proposed. It is a generalization of Assumed Density Filtering \\cite{Opper1999, Rossi2016}, which is fully online, meaning that only a single example is received at once, i.e. mini-batches have size 1. The general derivation principle is the same. On the example of the GLM, one imagines receiving at each iteration a subset of the components of $\\y$ to reconstruct $\\x$. We denote by $\\y\\kk$ these successive mini-batches. Bayes formula gives the posterior distribution over $\\x$ after seeing $k$ mini-batches,
\\begin{align}
 p(\\x|\\y\\kk, \\{\\y_{(k-1)}, \\cdots \\y_{(1)}\\}) = \\frac{p(\\y\\kk|\\x)p(\\x|\\{\\y_{(k-1)}, \\cdots \\y_{(1)}\\})}{\\int \\dd{\\x}p(\\y\\kk|\\x)p(\\x|\\{\\y_{(k-1)}, \\cdots \\y_{(1)}\\})} .
\\end{align}
This formula suggests the iterative scheme of using as a prior on $\\x$ at iteration $k$ the posterior obtained at iteration $k-1$. This idea can be implemented in different approximate inference algorithms, as also noticed by \\cite{Broderick2013} using a variational method. In the regular version of AMP, an effective factorized posterior is given at convergence by the input update functions \\eqref{eq:chap3-Zx}-\\eqref{eq:chap3-f2x}:
\\begin{align}
p(\\x|\\y, \\W) \\simeq \\prod_{i=1}^N \\frac{1}{\\cZ_x(\\lambda_i, \\sigma_i)}p_x(x_i)e^{-\\frac{(\\lambda_i-x_i)^2}{2 \\sigma_i}}.
\n\n\\subsection{Algorithms and free energies beyond i.i.d. matrices} \n\\label{sec:chap3-ortho-invariant}\nThe derivations outlined in the previous \\citesecs~of the equivalent replica, TAP and AMP equations required the weight matrices to have Gaussian i.i.d. entries. In this case, rigorous proofs of asymptotic exactness of the mean-field solutions were found, for the SK model \\cite{Talagrand2006} and the GLM \\cite{Reeves2016, Barbier2017a}. Mean-field inference with different weight statistics is a priori feasible if one finds a way either to perform the corresponding disorder average in the replica computation, to evaluate the corresponding Onsager correction in the TAP equations, or to write a message passing where messages remain uncorrelated (even in the high-connectivity limit we may be interested in). \n\nEfforts to broaden in practice the class of matrices amenable to such mean-field treatments led to a series of works in statistical physics and signal processing with related propositions.\nParisi and Potters pioneered this direction by deriving mean-field equations for orthogonal weight matrices using a high-temperature expansion \\cite{Parisi1995}.\nThe adaptive TAP approach proposed by Opper and Winther \\cite{Opper2001, Opper2001prl} further allowed for inference in densely connected graphical models without prior knowledge on the weight statistics. The Onsager term of these TAP equations was evaluated using the cavity method for a given weight sample. The resulting equations were then understood to be a particular case of Expectation Propagation (EP) \\cite{Minka2001} - a member of the class of message passing algorithms for approximate inference - applied here to densely connected models \\cite{Opper2005}. An associated approximation of the free energy called Expectation Consistency (EC) was additionally derived from the EP messages. Subsequently, Kabashima and collaborators \\cite{Shinzato2008, Shinzato2009, Kabashima2008} focused on the perceptron and the GLM to propose TAP equations and a replica derivation of the free energy for the ensemble of orthogonally invariant random weight matrices. In the singular value decomposition of such weight matrices, $\\W=\\mat{U}\\,\\mat{S}\\,\\mat{V}\\T \\in \\R^{M\\times N}$, the orthogonal basis matrices $\\mat{U}$ and $\\mat{V}$ are drawn uniformly at random from respectively $\\mathrm{O}(M)$ and $\\mathrm{O}(N)$, while the diagonal matrix of singular values $\\mat{S}$ has an arbitrary spectrum. 
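\n\nConcretely, a matrix from this ensemble can be sampled by combining Haar-distributed orthogonal bases with any desired singular values, as in the following sketch (the uniform spectrum is an arbitrary illustrative choice):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nM, N = 300, 200\n\ndef haar_orthogonal(n, rng):\n    # QR of a Gaussian matrix; correcting by the signs of diag(R)\n    # makes Q exactly Haar-distributed on O(n).\n    Q, R = np.linalg.qr(rng.normal(size=(n, n)))\n    return Q * np.sign(np.diag(R))\n\nU, V = haar_orthogonal(M, rng), haar_orthogonal(N, rng)\ns = rng.uniform(0.5, 1.5, size=min(M, N))  # arbitrary spectrum\nS = np.zeros((M, N))\nnp.fill_diagonal(S, s)\nW = U @ S @ V.T   # orthogonally invariant weight matrix\n\\end{verbatim}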
\n\nThe consistency between the EC free energy and the replica derivation for orthogonally invariant matrices was verified by \\cite{Kabashima2014} for signal recovery from linear measurements (i.e. the GLM with a purely linear channel). From the algorithmic perspective, Fletcher, Rangan and Schniter \\cite{Rangan2016, Schniter2016} applied EP to the GLM to obtain the (Generalized) Vector-Approximate Message Passing (G-VAMP) algorithm. Remarkably, these authors proved that the behavior of the algorithm could be characterized in the thermodynamic limit, provided the weight matrix is drawn from the orthogonally invariant ensemble, by a set of scalar State Evolution equations, similarly to the AMP algorithm. These equations are again related to the saddle point equations of the replica free energy. Concurrently, Opper, Cakmak and Winther proposed an alternative procedure for solving TAP equations with orthogonally invariant weight matrices in Ising spin systems relying on an analysis of iterative algorithms \\cite{Opper2016, Cakmak2019}. Finally, \\cite{Maillard2019} revisits the above-cited contributions and provides detailed considerations on their connections. \n\nBelow we present the aforementioned free energy as proposed by \\cite{Shinzato2008, Shinzato2009, Kabashima2008}, and the G-VAMP algorithm of \\cite{Schniter2016}.\n\n\\subsubsection{Replica free energy for the GLM in the Bayes Optimal setting}\nConsider the ensemble of orthogonally invariant weight matrices $\\W$ with spectral density $\\sum_{i=1}^N\\dirac(\\lambda - \\lambda_i) / N$ of their `square' $\\W \\W\\T$ converging in the thermodynamic limit $ N \\to + \\infty$ to a given density $\\rho_\\lambda(\\lambda)$. The quenched free energy of the GLM in the Bayes optimal setting derived in \\cite{Kabashima2008, Shinzato2009} reads\n\\begin{gather}\n \\label{eq:chap3-kaba-fe} \n -f = \\mathrm{extr}_{q \\hat{q}} \\left[ - \\frac{1}{2} q \\hat{q} + \\mathcal{I}_x(\\hat{q}) + \\mathcal{J}_z(q_0, q, \\alpha, \\rho_\\lambda) \\right] \\, ,\\\\\n \\mathcal{J}_z(q_0, q, \\alpha, \\rho_\\lambda) = \\mathrm{extr}_{u \\hat{u}} \\left[ F_{\\rho_\\lambda, \\alpha}(q_0-q, \\hat{u}/\\lambda_0) +\\frac{\\hat{u}q_0}{2} - \\frac{\\alpha \\hat{u}u}{2 \\lambda_0} + \\alpha \\mathcal{I}_z(q_0\\lambda_0/\\alpha, u) \\right],\n\\end{gather}\nwhere $\\mathcal{I}_x$ and $\\mathcal{I}_z$ were defined as \\eqref{eq:chap3-replica-fe-glm_Ix}-\\eqref{eq:chap3-replica-fe-glm_Iz} and the spectral density $\\rho_\\lambda(\\lambda)$ appears via its mean $\\lambda_0=\\E_{\\lambda}[\\lambda]$ and in the definition of \n\\begin{gather}\n F_{\\rho_\\lambda, \\alpha}(q, u) = \\frac{1}{2} \\mathrm{extr}_{\\Lambda_q, \\Lambda_u} \\left[ -(\\alpha-1)\\log\\Lambda_u - \\E_{\\lambda}\\log(\\Lambda_u\\Lambda_q + \\lambda) + \\Lambda_q q + \\alpha\\Lambda_u u \\right] \\notag \\\\\n \\qquad \\qquad \\qquad - \\frac{1}{2} (\\log q + 1) + \\frac{\\alpha}{2} (\\log u + 1) .\n\\end{gather}\nGaussian random matrices are a particular case of the considered ensemble. Their singular values are characterized asymptotically by the Marcenko-Pastur distribution \\cite{Marcenko1967}. In this case, one can check that the above expression reduces to \\eqref{eq:chap3-replica-fe-glm}. 
More generally, note that $\\mathcal{J}_z$ generalizes $\\mathcal{I}_z$.\n\n\\subsubsection{Vector Approximate Message Passing for the GLM}\nThe VAMP algorithm consists in writing EP \\cite{Minka2001} with Gaussian messages on the factor graph representation of the GLM posterior distribution given in \\citefig~\\ref{fig:chap3-vamp-glm}. The estimation of the signal $\\x$ is decomposed over four variables: two duplicates of $\\x$ itself and two duplicates of the linear transformation $\\z = \\W\\x$. The potential functions $\\psi_x$ and $\\psi_z$ of factors connecting copies of the same variable are Dirac distributions enforcing their equality. The factor node linking $\\z^{(2)}$ and $\\x^{(2)}$ is assumed Gaussian with variance going to zero.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{chap3_vamp_glm.pdf}\n \\caption{Factor graph representation of the GLM for the derivation of VAMP \\label{fig:chap3-vamp-glm}}\n\\end{figure}\nThe procedure of derivation, equivalent to the projection of the BP algorithm on Gaussian messages, is recalled in \\citeapp~\\ref{sec:app-chap3-vamp} and leads to \\citealg~\\ref{alg:chap3-vamp}. As for AMP, the algorithm features some auxiliary variables introduced along the derivation. At convergence the duplicates $\\xh_1$, $\\xh_2$ (and $\\hat{\\z}_1$, $\\hat{\\z}_2$) are equal and either can be returned by the algorithm as an estimator.\nFor readability, we omit the time indices in the iterations, which simply follow the indicated order of updates.\n\n\\input{chap3_vamp.tex}\n\nFor a given instance of the GLM inference problem, i.e. a given weight matrix $\\W$, one can always launch either the AMP algorithm or the VAMP algorithm to attempt the reconstruction. If the weight matrix has i.i.d. zero mean Gaussian entries, the two strategies are conjectured to be equivalent, and GAMP is provably convergent in certain settings \\cite{Rangan2014}. If the weight matrix is not Gaussian but orthogonally invariant, then only VAMP is expected to always converge. More generally, even in cases where none of these assumptions is verified, VAMP has been observed to have fewer convergence issues than AMP. \n\nAs for AMP, a State Evolution can also be derived for VAMP (which was actually directly proposed for the multi-layer GLM \\cite{Fletcher2018a}). It rigorously characterizes the behavior of the algorithm when $\\W$ is orthogonally invariant. One can also verify that the SE fixed points can be mapped to the solutions of the saddle point equations of the replica free energy \\eqref{eq:chap3-kaba-fe} (see \\citesec~1 of the Supplementary Material of \\cite{Gabrie2018}), so that the robust algorithmic procedure can advantageously be used to compute the fixed points to be plugged into the replica potential to approximate the free energy.\n\n\\subsection{Multi-value AMP}\n\\label{sec:chap3-multivalue}\nA recent extension of AMP consists in treating the simultaneous reconstruction of multiple signals undergoing the same mixing step from the corresponding multiple observations. This situation is of particular interest for learning, appearing for instance in the teacher-student set-up of committee machines. The authors of \\cite{Aubin2018} showed that the different weight vectors of these neural networks can be inferred from the knowledge of training input-output pairs by introducing this extended version of AMP. Here the same matrix of training input data mixes the teacher weight vectors to produce the training output data. 
For consistency with the examples used in the previous sections, we here formalize the algorithm for the GLM. Nevertheless, it is merely a rewriting of the committee machine algorithm of \\cite{Aubin2018}.\n\nConcretely, let us consider a GLM with $P$ pairs of signals and observations $\\{\\x\\kk, \\y\\kk\\}_{k=1}^P$, gathered in matrices $\\X \\in \\R^{N\\times P}$ and $\\Y \\in \\R^{M\\times P}$. We are interested in the posterior distribution \n\\begin{gather}\n \\label{eq:chap6-glm-vec-meas}\n p(\\X | \\Y, \\W) = \\frac{1}{\\cZ(\\Y, \\W)} \\prod_{i=1}^N p(\\x_i)\\prod_{\\mu=1}^M \\pout(\\y_\\mu | \\vect{w}_\\mu\\T\\X), \\quad \\x_i \\in \\R^P, \\quad \\y_\\mu \\in \\R^P. \n\\end{gather}\nCompared to the traditional GLM measure \\eqref{eq:chap3-glm-meas}, scalar variables are here replaced by vectors in $\\R^P$. In \\citeapp~\\ref{app:chap6-vect-amp} we present a derivation, starting from BP, of the corresponding AMP given in \\citealg~\\ref{alg:chap6-vect-amp}. \nThe major difference with the scalar GLM is the necessity of tracking covariance matrices between the $P$ different variables instead of simple variances.\n\n\\input{chap6_vect_amp}\n\nThis AMP algorithm can also be analyzed by a State Evolution. \nIn \\cite{Aubin2018}, the teacher-student matched setting of the committee machine is examined through the replica approach and the Bayes optimal State Evolution equations are obtained as the saddle point equations of the replica free energy.\nIn \\citeapp~\\ref{app:chap6-vect-amp} we present the alternative derivation of the State Evolution equations from the message passing, without assuming a priori matched teacher and student, as done in \\cite{Gabrie2019}.\n\n\\begin{figure}[t]\n \\centering\n {\\includegraphics[width=0.5\\textwidth]{chap6_mlglm_scalar.pdf}\n }\n \\caption{Factor graph representation of a generic 2-layer GLM. \\label{fig:chap6-mlglm}}\n\\end{figure}\n\n\\subsection{Model composition and multi-layer inference}\nAnother recent and ongoing direction of extension of mean-field methods is the combination of solutions of elementary models to tackle more sophisticated inference questions. The graphical representations of probability distributions (reintroduced briefly in \\citeapp~\\ref{app:chap2-graphs}) are here of great help. In a complicated joint probability distribution, it is sometimes possible to identify well-known sub-models, such as the GLM or the RBM. Understanding how and when it is justified to plug in different solutions is of course non-trivial and a very promising direction of research. \n\nA particularly relevant extension in this direction is the treatment of multi-layer GLMs, or in other words multi-layer neural networks. With depth $L$, hidden layers noted $\\vect{u}^\\ell \\in \\R^{N_\\ell}$, and weight matrices $\\mat{\\Phi}^\\ell \\in \\R^{N_{\\ell +1} \\times N_\\ell}$, it formally corresponds to the statistical model\n\\begin{gather}\n \\vect{u}^0 = \\x_0 \\sim p_{x_0}(\\x_0) \\, ,\\\\\n \\vect{u}^\\ell \\sim \\pout^\\ell(\\vect{u}^\\ell | \\mat{\\Phi}^{\\ell-1} \\vect{u}^{\\ell -1}) \\quad \\forall \\ell = 1 \\cdots L -1 \\, , \\\\\n \\y \\sim \\pout^{L}(\\y | \\mat{\\Phi}^{L-1} \\vect{u}^{L -1} ).\n\\end{gather}
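\nAs a quick illustration of this composed generative model, a 2-layer instance can be sampled forward as in the sketch below (the stochastic ReLU hidden channel and Gaussian output channel are arbitrary illustrative choices):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN0, N1, N2 = 500, 250, 125   # layer widths\n\nx0 = rng.normal(size=N0)                          # input signal u^0\nPhi0 = rng.normal(size=(N1, N0)) / np.sqrt(N0)    # weights Phi^0\nPhi1 = rng.normal(size=(N2, N1)) / np.sqrt(N1)    # weights Phi^1\n\n# Hidden layer u^1 from a stochastic ReLU channel p_out^1.\nu1 = np.maximum(Phi0 @ x0 + 0.1 * rng.normal(size=N1), 0.0)\n\n# Observations y from a Gaussian output channel p_out^2.\ny = Phi1 @ u1 + 0.1 * rng.normal(size=N2)\n\\end{verbatim}\nMulti-layer AMP then attempts the inverse problem: reconstructing $\\x_0$ (and $\\vect{u}^1$) from $\\y$ and the weight matrices.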
\nIn \\cite{Manoel2017b} a multi-layer version of AMP is derived, assuming Gaussian i.i.d. weight matrices, along with a State Evolution and a replica free energy. Remarkably, the asymptotic replica prediction was mathematically proven correct in \\cite{Gabrie2018}. In \\cite{Fletcher2018a}, the multi-layer version of the VAMP algorithm is derived with the corresponding State Evolution for orthogonally invariant weight matrices. The matching free energies were obtained independently in \\cite{Gabrie2018} by the generalization of a replica result and by \\cite{Reeves2016} from a different argument. \n\nIn the next paragraph we sketch a derivation of the 2-layer AMP presented in \\citealg~\\ref{alg:chap6-amp-2layer}; it provides good intuition for the composition ability of mean-field inference methods.\n\n\\paragraph{Heuristic derivation of 2-layer AMP}\n\nThe derivation of the multi-layer AMP follows steps identical to the single-layer derivation presented in \\citesec~\\ref{sec:chap3-bp-to-amp}, yet for a more complicated factor graph and consequently a larger collection of messages. Without conducting the lengthy procedure, one can form an intuition for the resulting algorithm starting from the single-layer AMP. \nThe corresponding factor graph is given in \\citefig~\\ref{fig:chap6-mlglm}. Compared to the single-layer case (see \\citefig~\\ref{fig:chap3-glm}), an interface with a set of $M=N_1$ hidden variables $u_\\mu$ is inserted between the $N=N_0$ signals $x_i$ and the $Q=N_2$ observations $y_a$. In the neighborhood of the inputs $x_i$ the factor graph is identical to the single-layer case and the input update functions can be defined from a normalization partition function identical to \\eqref{eq:chap3-Zx}, \n\\begin{gather}\n \\cZ^x(\\lambda_i, \\sigma_i) = \\int \\dd{x_i} p_x(x_i)e^{-\\frac{ (x_i-\\lambda_i)^2}{2 \\sigma_i} },\n\\end{gather}\nyielding updates \\eqref{alg:chap6-vect-amp-2layer-xi}-\\eqref{alg:chap6-vect-amp-2layer-Cxi}. Similarly, the neighborhood of the observations $y_a$ is also unchanged and the updates \\eqref{alg:chap6-vect-amp-2layer-goutii} and \\eqref{alg:chap6-vect-amp-2layer-dgoutii} follow from the definition of\n\\begin{gather}\n \\Zout^y(\\omega^2_a, V^2_a) = \\int \\dd{z_a} \\; \\pout^2(y_a|z_a) \\; e^{- \\frac{\\left(z_a - \\omega^2_a\\right)^2}{2V^2_a }} ,\n\\end{gather}\nidentical to the single-layer \\eqref{eq:chap3-Zout}. At the interface, however, the variables $u_\\mu$ play the role of outputs for the first GLM and of inputs for the second GLM, which translates into a normalization partition function of mixed form\n\\begin{gather}\n \\label{eq:chap6-2layer-Zout-t}\n \\Zout^u(\\omega^1_\\mu, V^1_\\mu, \\lambda^1_\\mu, \\sigma^1_\\mu ) = \\int \\dd{z_\\mu} \\int \\dd{u_\\mu} \\pout^1(u_\\mu | z_\\mu) \\; \\times \n e^{- \\frac{\\left(u_\\mu - \\lambda^1_\\mu\\right)^2}{ 2 \\sigma^1_\\mu} } e^{-\\frac{\\left(z_\\mu - \\omega^1_\\mu\\right)^2}{2 V^1_\\mu}}. \n\\end{gather}\nUpdates \\eqref{alg:chap6-vect-amp-2layer-gouti} and \\eqref{alg:chap6-vect-amp-2layer-dgouti} are obtained by considering that the second layer acts as an effective channel for the first layer, i.e. 
from the normalization interpreted as\n\\begin{gather}\n \\Zout^u(\\omega^1_\\mu, V^1_\\mu, \\lambda^1_\\mu, \\sigma^1_\\mu ) = \\int \\dd{z_\\mu} \\; \\pout^{\\rm eff}(z_\\mu) \\; e^{- \\frac{\\left(z_\\mu - \\omega^1_\\mu\\right)^2}{2 V^1_\\mu}} .\n\\end{gather}\nFinally, update equations \\eqref{alg:chap6-vect-amp-2layer-t} and \\eqref{alg:chap6-vect-amp-2layer-Cti} are in turn derived by considering that the first layer defines an effective prior for the hidden variables and rewriting the normalization as\n\\begin{gather}\n \\Zout^u= \\int \\dd{u_\\mu}\\; p_u^{\\rm eff}(u_\\mu) \\; e^{- \\frac{\\left(u_\\mu - \\lambda^1_\\mu\\right)^2} {2 \\sigma^1_\\mu}}.\n\\end{gather}\nThe rest of the algorithm updates follow as usual from the self-consistency between the different variables introduced, as they correspond to different parametrizations of the same marginals. The schedule of updates and the time indexing reported in \\citealg~\\ref{alg:chap6-amp-2layer} result from the entire derivation starting from the BP messages. The generalization of the algorithm to an arbitrary number of layers is easily obtained by repeating the heuristic arguments presented here. \n\n\\include{chap6_2layer_amp}\n\n\\section[\n Selected overview of mean-field treatments: free energies and algorithms \n ]{\nSelected overview of mean-field treatments: \\\\ Free energies and algorithms \n}\n\\label{sec:chap3}\n\nMean-field methods are a set of techniques enabling the approximation of marginalized quantities of joint probability distributions by exploiting knowledge of the dependencies between random variables. \nThey are usually said to be analytical - as opposed to numerical Monte Carlo methods. In practice they usually replace a summation exponentially large in the size of the system by an analytical formula involving a set of parameters, themselves solutions of a closed set of non-linear equations. Finding the values of these parameters typically requires only a polynomial number of operations. \n\nIn this \\citechap, we will give a selected overview of mean-field methods as they were introduced in the statistical physics and/or signal processing literature. A key takeaway of what follows is that closely related results can be obtained from different heuristics of derivation. \nWe will start by deriving the simplest and historically first mean-field method. We will then introduce the important broad techniques that are high-temperature expansions, message-passing algorithms and the replica method. In the following \\citechap~\\ref{sec:chap3further} we will additionally cover the most recent extensions of the mean-field methods presented in the present \\citechap~\\ref{sec:chap3} that are relevant to study learning problems.\n\n\\subsection{Naive mean-field}\n\\label{sec:chap3-nmf}\nThe naive mean-field method is the first and, in some sense, the simplest mean-field approximation. It was introduced by the physicists Curie \\cite{Curie1895} and Weiss \\cite{Weiss1907} and then adopted by the different communities interested in inference \\cite{Wainwright2008}.\n\n\\subsubsection{Variational derivation}\nThe naive mean-field method consists in approximating the joint probability distribution of interest by a fully factorized distribution. It therefore ignores correlations between random variables. 
Among multiple methods of derivation, we present here the variational method: it is the best known across fields and it readily shows that, for any joint probability distribution interpreted as a Boltzmann distribution, the rather crude naive mean-field approximation yields an upper bound on the free energy. \nFor the purpose of demonstration we consider a Boltzmann machine without hidden units (Ising model) with variables (spins) $\\x = (x_1, \\cdots ,x_N) \\in \\mathcal{X} = \\{0,1\\}^N $, and energy function\n\\begin{gather}\n \\label{eq:chap3-ising-energy}\n E(\\x) = - \\sum_{i=1}^N b_i x_i - \\sum_{(ij)} W_{ij}x_i x_j = - \\vect{b}\\T \\x - \\frac{1}{2} \\x\\T \\W \\x \\, , \\quad \\vect{b} \\in \\R^N \\, , \\quad \\W \\in \\R^{N\\times N} \\, ,\n\\end{gather}\nwhere the notation $(ij)$ stands for pairs of connected spin-variables, and the weight matrix $\\W$ is symmetric. \nThe choices of $\\{0,1\\}$ rather than $\\{-1,+1\\}$ for the variable values, of the notations $\\W$ for weights (instead of couplings) and $\\vect{b}$ for biases (instead of local fields), as well as of the vector notation, lean towards the machine learning conventions. We denote by $q_{\\m}$ a fully factorized distribution on $\\{0,1\\}^N$, which is a multivariate Bernoulli distribution parametrized by the mean values $\\m = (m_1, \\cdots, m_N) \\in [0,1]^N$ of the marginals (denoted by $q_{m_i}$):\n\\begin{gather}\n q_{\\m} (\\x) = \\prod_{i=1}^N q_{m_i}(x_i) = \\prod_{i=1}^N \\left[ m_i \\dirac(x_i - 1) + (1-m_i) \\dirac(x_i) \\right]. \n\\end{gather}\nWe look for the optimal $q_{\\m}$ distribution to approximate the Boltzmann distribution $p(\\x) = e^{-\\beta E(\\x)}/\\cZ$ by minimizing the KL-divergence \n\\begin{align}\n \\minn{\\m} \\KL(q_{\\m} || p) & = \\minn{\\m} \\sum_{\\x \\in \\mathcal{X}} q_{\\m}(\\x) \\log \\frac{q_{\\m}(\\x)}{p(\\x)} \\\\\n & = \\minn{\\m} \\sum_{\\x \\in \\mathcal{X}} q_{\\m}(\\x) \\log q_{\\m}(\\x) + \\beta \\sum_{\\x \\in \\mathcal{X}} q_{\\m}(\\x) E(\\x) + \\log \\cZ \\\\\n & = \\minn{\\m} \\; \\beta G(q_{\\m}) - \\beta F \\geq 0, \\label{eq:chap3-variational-inequality}\n\\end{align}\nwhere the last inequality comes from the positivity of the KL-divergence. For a generic distribution $q$, $G(q)$ is the \\emph{Gibbs free energy} for the energy $E(\\x)$, \n\\begin{gather}\n G(q) = \\sum_{\\x \\in \\mathcal{X}} q(\\x) E(\\x) + \\frac{1}{\\beta}\\sum_{\\x \\in \\mathcal{X}} q(\\x) \\log q(\\x) = U(q) - H(q)/\\beta \\geq F ,\n\\end{gather}\ninvolving the average energy $U(q)$ and the entropy $H(q)$.\nIt is greater than the true free energy $F$\nexcept when $q = p$, in which case they are equal. Note that this fact also means that the Boltzmann distribution minimizes the Gibbs free energy. Restricting to factorized $q_{\\m}$ distributions, we obtain the naive mean-field approximations for the mean values of the variables (or \\emph{magnetizations}) and the free energy:\n\\begin{gather}\n \\m^* = \\argminn{\\m} G(q_{\\m}) = \\langle \\x \\rangle_{q_{\\m^*}} \\, , \\\\\n F_{\\rm NMF} = G(q_{\\m^*}) \\geq F.\n\\end{gather} \nThe choice of a very simple family of distributions $q_{\\m}$ limits the quality of the approximation but allows for tractable computations of observables, for instance the two-spin correlations $\\langle x_i x_j \\rangle_{q^*} = m^*_i m^*_j$ (for $i \\neq j$) or the variance of one spin $\\langle x_i^2 \\rangle_{q^*} - \\langle x_i \\rangle_{q^*}^2 = m^*_i - {m^*_i}^2$. 
\n\nIn our example of the Boltzmann machine, it is easy to compute the Gibbs free energy for the factorized ansatz; we define functions of the magnetization vector:\n\\begin{align}\n U_{\\rm NMF}(\\m) & = \\langle E(\\x) \\rangle_{q_{\\m}} = - \\vect{b}\\T \\m - \\frac{1}{2} \\m\\T\\W\\m \\, ,\\\\\n \\label{eq:chap3-hnmf}\n H_{\\rm NMF}(\\m) & = - \\langle \\log q_{\\m}(\\x) \\rangle_{q_{\\m}} = - \\sum_{i=1}^N \\left[ m_i \\log m_i + (1-m_i) \\log (1-m_i) \\right] \\, ,\\\\\n G_{\\rm NMF}(\\m) &= G(q_{\\m}) = U_{\\rm NMF}(\\m) - H_{\\rm NMF}(\\m) / \\beta.\n\\end{align}\nLooking for stationary points we find a closed set of non-linear equations for the $m^*_i$,\n\\begin{gather}\n \\label{eq:chap3-nmf-eq}\n \\left. \\frac{\\partial G_{\\rm NMF}}{\\partial m_i} \\right|_{\\m^*} = 0\n \\quad \\Rightarrow \\quad m^*_i = \\sigm(\\beta b_i + \\sum_{j \\in \\partial i} \\beta W_{ij} m^*_j) \\quad \\forall i = 1 \\cdots N\\,\n\\end{gather}\nwhere $\\sigm(x) = (1 + e^{-x})^{-1}$.\nThe solutions can be computed by iterating these relations from a random initialization until a fixed point is reached (a minimal implementation is sketched at the end of this \\citesec).\n\nTo understand the implication of the restriction to factorized distributions,\nit is instructive to compare this naive mean-field equation with the exact identity \n\\begin{align}\n \\label{eq:chap3-mf-identity}\n \\langle x_i \\rangle_p = \\langle \\sigm(\\beta b_i + \\sum_{j \\in \\partial i}\\beta W_{ij} x_j) \\rangle_p\\,,\n\\end{align}\nderived in a few lines in \\citeapp~\\ref{app:chap3-mf-identity}.\nUnder the Boltzmann distribution $p(\\x) = e^{-\\beta E(\\x)}/\\cZ$, these averages are difficult to compute. The naive mean-field method neglects the fluctuations of the effective field felt by the variable $x_i$, $\\sum_{j \\in \\partial i} W_{ij} x_j$, keeping only its mean $\\sum_{j \\in \\partial i} W_{ij} m_j$. This incidentally justifies the name of mean-field methods. \n\n\\subsubsection{When does naive mean-field hold true?}\nThe previous derivation shows that the naive mean-field approximation provides an upper bound on the free energy. While this bound is expected to be rough in general, the approximation is reliable when the fluctuations of the local effective fields $\\sum_{j \\in \\partial i} W_{ij} x_j$ are small. This may happen in particular in the thermodynamic limit $N\\to \\infty$ in \\emph{infinite range} models, that is when weights or couplings are not only local but distributed over the entire system, or when each variable interacts directly with a non-vanishing fraction of the whole set of variables (e.g. \\cite{opper2001advanced} \\citechap~2). The influence of the rest of the system on one given variable can then be treated as an average background. Provided the couplings are weak enough, the naive mean-field method may even become asymptotically exact. This is the case of the \\emph{Curie-Weiss} model, which is the fully connected version of the model \\eqref{eq:chap3-ising-energy} with all $W_{ij} = 1/N$ (see e.g. \\citesec~2.5.2 of \\cite{Mezard2009}). The sum of weakly dependent variables then concentrates on its mean by the central limit theorem. \nWe stress that this means that for finite dimensional models (more representative of a physical system, where for instance variables are assumed to be attached to the vertices of a lattice with nearest-neighbor interactions), mean-field methods are expected to be quite poor. By contrast, infinite range models (interpreted as infinite-dimensional models by physicists) are thus traditionally called \\emph{mean-field models}. 
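\n\nAs announced, the naive mean-field equations \\eqref{eq:chap3-nmf-eq} readily translate into an iterative algorithm. The following minimal sketch (couplings, biases and the damping heuristic are arbitrary illustrative choices) solves them for a small random instance:\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(h):\n    return 1.0 / (1.0 + np.exp(-h))\n\ndef naive_mean_field(W, b, beta=1.0, n_iter=200, damping=0.5):\n    # Iterate m_i = sigm(beta*(b_i + sum_j W_ij m_j)) to a fixed point.\n    # Damping is a common stabilization heuristic, not required in theory.\n    m = np.random.default_rng(0).uniform(size=len(b))\n    for _ in range(n_iter):\n        m = damping * m + (1 - damping) * sigmoid(beta * (b + W @ m))\n    return m\n\nrng = np.random.default_rng(1)\nN = 10\nW = rng.normal(scale=1 / np.sqrt(N), size=(N, N))\nW = (W + W.T) / 2          # symmetric couplings\nnp.fill_diagonal(W, 0.0)   # no self-interaction\nb = rng.normal(scale=0.1, size=N)\nprint(np.round(naive_mean_field(W, b), 3))\n\\end{verbatim}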
\n\nIn the next \\citesec~we will recover the naive mean-field equations through a different method. The following derivation will also allow us to compute corrections to the rather crude approximation we just discussed by taking into account some of the correlations it neglects.\n\n\\subsection{Thouless, Anderson and Palmer equations}\n\\label{sec:chap3-tap}\nThe TAP mean-field equations \\cite{Thouless1977, Morita1976} were originally derived as an exact mean-field theory for the Sherrington-Kirkpatrick (SK) model \\cite{Sherrington1975}. The emblematic \\emph{spin glass} SK model we already mentioned corresponds to a fully connected Ising model with energy \\eqref{eq:chap3-ising-energy} and disordered couplings $W_{ij}$ drawn independently from a Gaussian distribution with zero mean and variance $W_0 / N$. The derivation of \\cite{Thouless1977} followed from arguments specific to the SK model. Later, it was shown by Plefka \\cite{Plefka1982} that the same approximation could be recovered from a second-order Taylor expansion at high temperature, and by Georges and Yedidia \\cite{Georges1999} that it could be further corrected by the systematic computation of higher orders. We will briefly present this last derivation, having again in mind the example of the generic Boltzmann machine \\eqref{eq:chap3-ising-energy}. \n\n\\subsubsection{Outline of the derivation}\n\n\\label{sec:chap3-GY}\nGoing back to the variational formulation \\eqref{eq:chap3-variational-inequality}, we shall now perform a minimization in two steps. Consider first the family of distributions $q_{\\m}$ enforcing $\\langle \\x \\rangle_{q_{\\m}} = \\m$ for a fixed vector of magnetizations $\\m$, but without any factorization constraint. The corresponding Gibbs free energy is\n\\begin{gather}\n G(q_{\\m}) = U(q_{\\m}) - H(q_{\\m}) / \\beta .\n\\end{gather} \nA first minimization at fixed $\\m$ over the $q_{\\m}$ defines another auxiliary free energy\n\\begin{align}\n G_{\\rm TAP}(\\m) = \\minn{q_{\\m}} G(q_{\\m}).\n\\end{align} \nA second minimization over $\\m$ would recover the overall unconstrained minimum of the variational problem \\eqref{eq:chap3-variational-inequality}, which is the exact free energy\n\\begin{align}\n F = -\\log \\cZ / \\beta = \\minn{\\m} G_{\\rm TAP}(\\m).\n\\end{align}\nYet the actual value of $G_{\\rm TAP}(\\m)$ turns out to be as complicated to compute as $F$ itself. Fortunately, $\\beta G_{\\rm TAP}(\\m)$ can easily be approximated by a Taylor expansion around $\\beta = 0$, since interactions vanish at high temperature, as noticed by Plefka, Georges and Yedidia \\cite{Plefka1982, Georges1999}. \nAfter expanding, the minimization of $G_{\\rm TAP}(\\m)$ yields a set of self-consistent equations on the magnetizations $\\m$, called the \\emph{TAP equations}, reminiscent of the naive mean-field equations \\eqref{eq:chap3-nmf-eq}. Here again, the consistency equations are typically solved by iteration. Plugging the solutions $\\m^*$ back into the expanded expression yields the \\emph{TAP free energy} $F_{\\rm TAP}=G_{\\rm TAP}(\\m^*)$.\nNote that ultimately the approximation lies in the truncation of the expansion. At first order the naive mean-field approximation is recovered. Historically, the expansion was first stopped at the second order. 
This choice was model dependent: it results from the fact that the mean-field theory is already exact at the second order for the SK model \\cite{Morita1976, Thouless1977, Plefka1982}.\n\n\\subsubsection{Illustration on binary Boltzmann machines and important remarks}\nFor the Boltzmann machine \\eqref{eq:chap3-ising-energy}, the TAP equations and TAP free energy (truncated at second order) are \\cite{Thouless1977},\n\\begin{gather}\n m^*_i = \\sigm\\left(\\beta b_i + \\sum_{j \\in \\partial i} \\left[ \\beta W_{ij} m^*_j - \\beta^2 W_{ij}^2(m^*_j - \\frac{1}{2})(m^*_i - {m^*_i}^2 ) \\right] \\right) \\; \\forall i \\label{eq:chap3-tap-eq}\\\\\n \\beta G_{\\rm TAP}(\\m^*) = - H_{\\rm NMF}(\\m^*) - \\beta \\sum_{i=1}^N b_i m^*_i - \\beta \\sum_{(ij)} m_i^*W_{ij}m^*_j\\\\\n \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad - \\frac{\\beta^2}{2} \\sum_{(ij)} W_{ij}^2 (m^*_i - {m^*_i}^2)(m^*_j - {m^*_j}^2)\\, , \\notag\n\\end{gather}\nwhere the naive mean-field entropy $H_{\\rm NMF}$ was defined in \\eqref{eq:chap3-hnmf}. For this model, albeit with $\\{+1, -1\\}$ variables instead of $\\{0,1\\}$, several references pedagogically present the details of the derivation sketched in the previous paragraph. The interested reader should check in particular \\cite{opper2001advanced, Zamponi2010}. We also present a more general derivation in \\citeapp~\\ref{app:chap3-real-GY}, see \\citesec~\\ref{sec:chap3-GY-generalized}.\n\n\\subparagraph{Onsager reaction term}\nCompared to the naive mean-field approximation, the TAP equations include a correction to the effective field called the \\emph{Onsager reaction term}. The idea is that, in the effective field at variable $i$, we should consider corrected magnetizations of the neighboring spins $j \\in \\partial i$, corresponding to the absence of variable $i$. \nThis intuition echoes two other derivations of the TAP approximation: the cavity method \\cite{Mezard1986}, which will not be covered here, and message passing, which will be discussed in the next \\citesec.\n\nAs far as the SK model is concerned,\nthis second-order correction is enough in the thermodynamic limit, as the statistics of the weights imply that higher orders will typically be subleading. Yet in general, the correct TAP equations for a given model will depend on the statistics of the interactions, and there is no guarantee that there exists a finite order of truncation leading to an exact mean-field theory. \nIn \\citesec~\\ref{sec:chap3-ortho-invariant} we will discuss models beyond SK where a conjectured exact TAP approximation can be derived. \n\n\\subparagraph{Single instance}\nAlthough the selection of the correct TAP approximation relies on the statistics of the weights, the derivation of the expansion outlined above does not require averaging over them, i.e. it does not require an average over the disorder. Consequently, the approximation method is well defined for a single instance of the random disordered model, and the TAP free energy and magnetizations can be computed for a given (realization of the) set of weights $\\{W_{ij}\\}_{(ij)}$ as explained in the following paragraph. \nIn other words, the approximation can be used to design practical inference algorithms for finite-sized problems and not only for theoretical predictions on average over the disordered class of models. Crucially, these algorithms may provide approximations of disorder-dependent observables, such as correlations, and not only of self-averaging quantities. 
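\n\nA minimal single-instance TAP solver for the binary Boltzmann machine \\eqref{eq:chap3-tap-eq} could read as follows (a sketch with arbitrary SK-like couplings; it anticipates the update scheduling with a shifted time index discussed in the next paragraph):\n\\begin{verbatim}\nimport numpy as np\n\ndef sigmoid(h):\n    return 1.0 / (1.0 + np.exp(-h))\n\ndef tap_magnetizations(W, b, beta=1.0, n_iter=500):\n    # Iterate the TAP equations on a single realization of W.\n    # The Onsager term uses the magnetizations of the previous\n    # time step (t-1), which eases convergence (see below).\n    rng = np.random.default_rng(0)\n    m = rng.uniform(size=len(b))\n    m_prev = m.copy()\n    for _ in range(n_iter):\n        field = beta * (b + W @ m)\n        onsager = beta**2 * (W**2 @ (m - 0.5)) * (m_prev - m_prev**2)\n        m_prev, m = m, sigmoid(field - onsager)\n    return m\n\nrng = np.random.default_rng(2)\nN = 50\nW = rng.normal(scale=1 / np.sqrt(N), size=(N, N))  # variance ~ 1/N\nW = (W + W.T) / 2\nnp.fill_diagonal(W, 0.0)\nb = rng.normal(scale=0.1, size=N)\nprint(np.round(tap_magnetizations(W, b, beta=0.5), 3))\n\\end{verbatim}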
\n\n\\subparagraph{Finding solutions}\nThe self-consistent equations on the magnetizations \\eqref{eq:chap3-tap-eq} are usually solved by turning them into an iteration scheme and looking for fixed points. This generic recipe nonetheless leaves room for interpretation: which exact form should be iterated? How should the updates for the different equations be scheduled? Which time indexing should be used? While the following scheme may seem natural\n\\begin{align}\n {m_i}^{(t+1)} \\leftarrow \\sigm\\left(\\beta b_i + \\sum_{j \\in \\partial i} \\left[ \\beta W_{ij} {m_j}^{(t)} - \\beta^2 W_{ij}^2\\left({m_j}^{(t)} - \\frac{1}{2}\\right)\\left({m_i}^{\\mathbf{(t)}} - {{m_i}^{\\mathbf{(t)}}}^2 \\right) \\right] \\right),\n\\end{align}\nit typically has more convergence issues than the following alternative scheme including the time index $t-1$\n\\begin{align}\n {m_i}^{(t+1)} \\leftarrow \\sigm\\left(\\beta b_i + \\sum_{j \\in \\partial i} \\left[ \\beta W_{ij} {m_j}^{(t)} - \\beta^2 W_{ij}^2\\left({m_j}^{(t)} - \\frac{1}{2}\\right)\\left({m_i}^{\\mathbf{(t-1)}} - {{m_i}^{\\mathbf{(t-1)}}}^2 \\right) \\right] \\right).\n\\end{align}\nThis issue was discussed in particular in \\cite{Kabashima2003,bolthausen2014iterative}.\nRemarkably, this last scheme, or algorithm, is actually the one obtained by the approximate message passing derivation that will be discussed in the upcoming \\citesec~\\ref{sec:chap3-bp-to-amp}.\n\n\\subparagraph{Solutions of the TAP equations}\nThe TAP equations can admit multiple solutions with either equal or different TAP free energies. \nWhile the true free energy $F$ corresponds to the minimum of the Gibbs free energy, reached for the Boltzmann distribution, the TAP derivation consists in performing an effectively unconstrained minimization in two steps, but with an approximation through a Taylor expansion in between. The truncation of the expansion therefore breaks the correspondence between the discovered minimizer and the unique Boltzmann distribution, hence the possible multiplicity of solutions. For the SK model for instance, the number of solutions of the TAP equations increases rapidly as $\\beta$ grows \\cite{Mezard1986}. While the different solutions can be accessed using different initializations of the iterative scheme, it is notably hard, in phases where they are numerous, to find all the TAP solutions exhaustively. In theory, they should be weighted according to their free energy density and averaged to recover the thermodynamics predicted by the replica computation \\cite{Dominicis1983}, another mean-field approximation discussed in \\citesec~\\ref{sec:chap3-replica}.\n\n\\subsubsection{Generalizing the Georges-Yedidia expansion}\n\\label{sec:chap3-GY-generalized}\nIn the derivation outlined above for binary variables, $x_i = 0$ or $1$, the mean $m_i$ of each variable was fixed. This is enough to parametrize the corresponding marginal distribution $q_{m_i}(x_i)$. Yet the expansion can actually be generalized to Potts variables (taking multiple discrete values) or even real-valued variables by introducing appropriate parameters for the marginals. A general derivation fixing arbitrary real-valued marginal distributions was proposed in \\citeapp~B of \\cite{Lesieur2017} for the problem of low-rank matrix factorization. Alternatively, another level of approximation can be introduced for real-valued variables by restricting the set of marginal distributions tested to a parametrized family of distributions. 
By choosing a Gaussian parametrization, one recovers TAP equations equivalent to the approximate message passing algorithm that will be discussed in the next \\citesec. In \\citeapp~\\ref{app:chap3-real-GY}, we present a derivation for real-valued Boltzmann machines with a Gaussian parametrization, as proposed in \\cite{Tramel2018}.\n\n\\subsection{Belief propagation and approximate message passing}\n\\label{sec:chap3-bp-to-amp}\nAnother route to rediscover the TAP equations is through the approximation of message passing algorithms. Variations of the latter were discovered multiple times in different fields. In physics they were written in a restricted version as early as 1935 by Bethe \\cite{Bethe1935}. In statistics, they were developed by Pearl as methods for probabilistic inference \\cite{Pearl1988}. \nIn this section we will start by introducing a case-study of interest, the Generalized Linear Model. We will then proceed in steps to outline the derivation of the Approximate Message Passing (AMP) algorithm from the Belief Propagation (BP) equations.\n\n\\subsubsection{Generalized linear model}\n\\label{sec:chap3-glm}\n\\subparagraph{Definition} We introduce the \\emph{Generalized Linear Model} (GLM), which is a fairly simple model to illustrate message passing algorithms and which is also an elementary brick for a large range of interesting inference questions on neural networks. It falls under the teacher-student set-up: a student model is used to reconstruct a signal from a teacher model producing indirect observations. \nIn the GLM, the product of an unknown signal $\\x_0 \\in \\R^N$ and a known weight matrix $\\W \\in \\R^{M \\times N}$ is observed as $\\y$ through a noisy channel $\\pouto$,\n\\begin{gather}\n \\left\\{\n \\begin{array}{l}\n \\W \\sim p_W(\\W)\n \\\\\n \\x_0 \\sim p_{x_0}(\\x_0) = \\prod\\limits_{i=1}^N p_{x_0}(x_{0,i})\n \\end{array}\n \\right. \n \\quad \n \\Rightarrow\n \\y \\sim \\pouto(\\y | \\W\\x_0) = \\prod_{\\mu=1}^M \\pouto(y_\\mu | \\vect{w}_\\mu\\T\\x_0). \n\\end{gather}\nThe probabilistic graphical model corresponding to this teacher is represented in \\citefig~\\ref{fig:chap3-glm}. The prior over the signal $p_{x_0}$ is supposed to be factorized, and the channel $\\pouto$ likewise. The inference problem is to produce an estimator $\\xh$ for the unknown signal $\\x_0$ from the observations $\\y$. Given the prior $p_x$ and the channel $\\pout$ of the student, not necessarily matching the teacher, the posterior distribution is \n\\begin{align}\n p(\\x|\\y, \\W) &= \\frac{1}{\\cZ(\\y, \\W)} \\, \\prod_{\\mu=1}^M \\pout(y_\\mu | \\sum_{i=1}^N W_{\\mu i} x_i)\\, \\prod_{i=1}^N p_x(x_i) \\, , \\label{eq:chap3-glm-meas}\\\\\n \\cZ(\\y, \\W) &= \\int \\dd{\\x} \\pout(\\y | \\x, \\W) p_x(\\x), \\label{eq:chap3-glm-Z}\n\\end{align}\nrepresented as a factor graph also in \\citefig~\\ref{fig:chap3-glm}. The difficulty of the reconstruction task of $\\x_0$ from $\\y$ is controlled by the measurement ratio $\\alpha = M/N$ and the amplitude of the noise possibly present in the channel. 
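\n\nFor concreteness, a teacher instance of the GLM can be sampled as in the following sketch (the sparse Gauss-Bernoulli prior and noisy sign channel are arbitrary illustrative choices):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN, alpha = 1000, 2.0\nM = int(alpha * N)\n\n# Teacher prior: sparse Gauss-Bernoulli signal.\nrho = 0.25   # fraction of non-zero entries\nx0 = rng.normal(size=N) * (rng.random(N) < rho)\n\n# Gaussian i.i.d. weights with variance 1/N.\nW = rng.normal(size=(M, N)) / np.sqrt(N)\n\n# Noisy sign channel as p_out: y = sign(W x0 + noise).\ndelta = 0.01\ny = np.sign(W @ x0 + rng.normal(scale=np.sqrt(delta), size=M))\n\\end{verbatim}\nThe student is then handed $\\W$ and $\\y$ and attempts to recover $\\x_0$.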
\n\n\\subparagraph{Applications}\nThe generic GLM underlies a number of applications. \nIn the context of neural networks, of particular interest in this technical review, the channel $\\pout$ generating observations $\\y \\in \\R^M$ can equivalently be seen as a stochastic activation function $g(\\cdot; \\eps)$ incorporating a noise $\\eps \\in \\R^M$ component-wise into the output,\n\\begin{gather}\n y_\\mu = g(\\vect{w}_\\mu\\T\\x \\,; \\, \\eps_\\mu).\n\\end{gather}\nThe inference of the teacher signal in a GLM then has two possible interpretations. \nOn the one hand, it can be interpreted as the reconstruction of the input $\\x$ of a stochastic single-layer neural network from its output $\\y$. For example, this inference problem can arise in the maximum likelihood training of a one-layer VAE (see the corresponding paragraph in \\citesec~\\ref{sec:chap1-vae}). On the other hand, the same question can also correspond to the Bayesian learning of a single-layer neural network with a single output - the perceptron - where this time $\\{\\W, \\y\\}$ are interpreted as the collection of training input-output pairs and $\\x_0$ plays the role of the unknown weight vector of the teacher (as cited as an example in \\citesec~\\ref{sec:chap2-teacher-student}). \nHowever, note that one of the most important applications of the GLM, Compressed Sensing (CS) \\cite{Donoho2006}, does not involve neural networks.\n\n\\subparagraph{Statistical physics treatment, random weights and scaling}\nFrom the statistical physics perspective, the effective energy functional is read from the posterior \\eqref{eq:chap3-glm-meas} seen as a Boltzmann distribution with energy\n\\begin{gather}\n E(\\x) = - \\log \\left[ \\pout(\\y | \\x, \\W) \\, p_x(\\x) \\right] = - \\sum_{\\mu =1}^M\\log \\pout(y_\\mu | \\sum_{i=1}^N W_{\\mu i} x_i) - \\sum_{i=1}^N \\log p_x(x_i) .\n\\end{gather} \nThe inverse temperature $\\beta$ has here no formal equivalent and can be thought of as equal to 1. The energy is a function of the random realizations of $\\W$ and $\\y$, playing the role of the disorder. Furthermore, the validity of the approximations presented below requires additional assumptions. Crucially, the weight matrix is assumed to have i.i.d. Gaussian entries with zero mean and variance $1/N$, much like in the SK model. The prior of the signal is chosen so as to ensure that the $x_i$-s (and consequently the $y_\\mu$-s) remain of order 1.\nFinally, the thermodynamic limit $N \\to \\infty$ is taken at a fixed measurement ratio $\\alpha=M/N$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{chap3_glm.pdf}\n \\caption{Graphical representations of the Generalized Linear Model. \\textbf{Left:} Probabilistic graphical model of the teacher. \\textbf{Middle left:} Factor graph representation of the posterior distribution on the signal $\\x$ under the student statistical model. \\textbf{Middle right and right:} Belief propagation updates \\eqref{eq:chap3-bp-glm-1} - \\eqref{eq:chap3-bp-glm-2} for approximate inference. \\label{fig:chap3-glm}}\n\\end{figure}\n\n\\subsubsection{Belief Propagation} \n\\label{sec:chap3-bp}\nRecall that inference in high-dimensional problems consists in marginalizations over complex joint distributions, typically with a view to computing partition functions, averages or marginal probabilities for sampling. Belief Propagation (BP) is an inference algorithm, sometimes exact and sometimes approximate as we will see, leveraging the known factorization of a distribution, which encodes the precious information of the (in)dependencies between the random variables in the joint distribution. 
For a generic joint probability distribution $p$ over $\\x \\in \\R^N$ factorized as\n\\begin{gather}\n \\label{eq:chap3-factorization}\n p(\\x) = \\frac 1 \\cZ \\prod_{i=1}^N p_x(x_i) \\prod_{\\mu = 1}^M \\psi_\\mu(\\x_{\\partial \\mu}),\n\\end{gather}\nthe $\\psi_\\mu$ are called potential functions; they take as arguments the variables $x_i$ involved in the factor $\\mu$, shortened as $\\x_{\\partial \\mu}$, while we single out the possible single-variable potentials $p_x$.\n\n\\subparagraph{Definition of messages}\nLet us first write the BP equations and then explain the origin of these definitions. \nThe underlying factor graph of \\eqref{eq:chap3-factorization}\nhas $N$ nodes carrying the variables $x_i$ and $M$ factors associated with the potential functions $\\psi_\\mu$ (see \\citeapp~\\ref{app:chap2-graphs} for a quick reminder). BP acts on \\emph{message} variables which are tied to the edges of the factor graph.\nSpecifically, the sum-product version of the algorithm (as opposed to the max-sum, see e.g. \\cite{Mezard2009}) consists in the update equations\n\\begin{align}\n \\label{eq:chap3-bp1}\n \\msg{\\tilde{m}^{(t)}}{\\mu}{i}(x_i) & = \\frac{1}{\\msg{\\cZ}{\\mu}{i}} \\int \\prod_{i'\\in \\partial \\mu \\setminus i} \\dd{x_{i'}} \\psi_\\mu(\\x_{\\partial \\mu}) \\prod_{i'\\in \\partial \\mu \\setminus i} \\msg{m^{(t)}}{i'}{\\mu}(x_{i'}), \\\\\n \\label{eq:chap3-bp2}\n \\msg{m^{(t+1)} }{i}{\\mu}(x_i) & = \\frac{1}{\\msg{\\cZ}{i}{\\mu}} p_x(x_i)\\prod_{\\mu'\\in \\partial i \\setminus \\mu} \\msg{\\tilde{m}^{(t)}}{\\mu'}{i}(x_i) \n\\end{align}\nwhere again the $i$-s index the variable nodes and the $\\mu$-s index the factor nodes. \nThe notation $\\partial \\mu \\setminus i$ designates the set of neighbor variables of the factor $\\mu$ except the variable $i$ (and reciprocally for $\\partial i \\setminus \\mu$).\nThe partition functions $\\msg{\\cZ}{i}{\\mu}$ and $\\msg{\\cZ}{\\mu}{i}$ are normalization factors ensuring that the messages can be interpreted as probability distributions. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.75\\textwidth]{chap3_bp.pdf}\n \\caption{Representations of the neighborhood of edge $i$-$\\mu$ in the factor graph and corresponding local BP updates. Factors are represented as squares and variable nodes as circles. \\textbf{Left:} In the factor graph where all factors around $x_i$ are removed (in gray) except for the factor $\\mu$, the marginal of $x_i$ (in red) is updated from the messages incoming at factor $\\mu$ (in blue) \\eqref{eq:chap3-bp1}. \\textbf{Right:} In the factor graph where factor $\\mu$ is deleted (in gray), the marginal of $x_i$ (in blue) is updated with the incoming messages (in red) from the rest of the factors \\eqref{eq:chap3-bp2}. \\label{fig:chap3-bp}}\n\\end{figure}\n\nFor acyclic (or tree-like) factor graphs, the BP updates are guaranteed to converge to a fixed point, that is a set of time-independent messages $\\{\\msg{m}{i}{\\mu} , \\msg{\\tilde{m}}{\\mu}{i}\\}$ solution of the system of equations \\eqref{eq:chap3-bp1}-\\eqref{eq:chap3-bp2}. Starting at a leaf of the tree, these messages communicate beliefs of a given node variable taking a given value based on the nodes and factors already visited along the tree. More precisely, $\\msg{\\tilde{m}}{\\mu}{i}(x_i)$ is the marginal probability of $x_i$ in the factor graph before visiting the factors in $\\partial i$ except for $\\mu$, and $\\msg{m}{i}{\\mu}(x_i)$ is equal to the marginal probability of $x_i$ in the factor graph before visiting the factor $\\mu$, see \\citefig~\\ref{fig:chap3-bp}. 
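\n\nTo make the updates \\eqref{eq:chap3-bp1}-\\eqref{eq:chap3-bp2} concrete, the following minimal sum-product sketch runs on a toy tree of three binary variables (the potential tables are arbitrary illustrative choices; single-variable potentials are here encoded as factors, i.e. $p_x$ is uniform):\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\n\n# Each factor: (list of variable indices, potential table).\n# Tree: x0 -f0- x1 -f1- x2, plus a bias factor f2 on x1.\nfactors = [([0, 1], np.array([[1.2, 0.4], [0.4, 1.2]])),\n           ([1, 2], np.array([[1.5, 0.5], [0.5, 1.5]])),\n           ([1],    np.array([0.7, 1.3]))]\nN = 3\nedges = [(f, i) for f, (vs, _) in enumerate(factors) for i in vs]\nv2f = {e: np.ones(2) / 2 for e in edges}  # variable -> factor\nf2v = {e: np.ones(2) / 2 for e in edges}  # factor -> variable\n\nfor _ in range(10):  # a few sweeps suffice on a tree\n    for f, (vs, table) in enumerate(factors):\n        for i in vs:  # factor-to-variable update, cf. (bp1)\n            m = np.zeros(2)\n            for xs in product([0, 1], repeat=len(vs)):\n                w = table[xs]\n                for j, xj in zip(vs, xs):\n                    if j != i:\n                        w = w * v2f[(f, j)][xj]\n                m[xs[vs.index(i)]] += w\n            f2v[(f, i)] = m / m.sum()\n    for f, i in edges:  # variable-to-factor update, cf. (bp2)\n        m = np.ones(2)\n        for g, (vs, _) in enumerate(factors):\n            if g != f and i in vs:\n                m = m * f2v[(g, i)]\n        v2f[(f, i)] = m / m.sum()\n\nfor i in range(N):  # marginal estimates, discussed next\n    m = np.ones(2)\n    for f, (vs, _) in enumerate(factors):\n        if i in vs:\n            m = m * f2v[(f, i)]\n    print(i, m / m.sum())\n\\end{verbatim}\nOn this tree the resulting marginals are exact, as discussed in the following paragraph.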
\n\nThus, at convergence of the iterations, the marginals can be computed as\n\\begin{gather}\n \\label{eq:chap3-bp-marginal}\n m_i(x_i) = \\frac{1}{\\cZ_i} p_x(x_i) \\prod_{\\mu \\in \\partial i} \\msgt{m}{\\mu}{i}(x_i),\n\\end{gather}\nwhich can be seen as the main output of the BP algorithm. \nThese marginals will only be exact on trees, where incoming messages, computed from different parts of the graph, are independent. Nonetheless, the algorithm \\eqref{eq:chap3-bp1}-\\eqref{eq:chap3-bp2}, then occasionally called \\emph{loopy BP}, can sometimes be converged on graphs with cycles and in some cases will still provide high-quality approximations. For instance, graphs with no short loops are locally tree-like and BP is an efficient method of approximate inference, provided correlations decay with distance (i.e. incoming messages at each node are still effectively independent). \nBP will also appear principled for some infinite-range mean-field models previously discussed, an example of which is our case-study, the GLM, discussed below. \nWhile this is the only example that will be discussed here in the interest of conciseness, getting fluent in BP generally requires more than one application. The interested reader could also consult \\cite{Yedidia2002} and \\cite{Mezard2009} \\citesec~14.1 for simple concrete examples.\n\n\\paragraph{The Bethe free energy}\nThe BP algorithm can also be recovered from a variational argument. Let us consider both the single-variable marginals $m_i(x_i)$ and the marginals of the neighborhood of each factor, $\\tilde{m}_\\mu(\\x_{\\partial \\mu})$. On tree graphs, the joint distribution \\eqref{eq:chap3-factorization} can be re-expressed as\n\\begin{gather}\n p(\\x) = \\frac{\\prod_{\\mu=1}^M \\tilde{m}_\\mu(\\x_{\\partial \\mu})}{\\prod_{i=1}^Nm_i(x_i)^{n_i-1}},\n\\end{gather}\nwhere $n_i$ is the number of neighbor factors of the $i$-th variable. Abusively, we can use this form as an ansatz for loopy graphs and plug it into the Gibbs free energy to derive an approximation of the free energy, similarly to the naive mean-field derivation of \\citesec~\\ref{sec:chap3-nmf}. This time the variational parameters will be the distributions $m_i$ and $\\tilde{m}_\\mu$ (see e.g. \\cite{Yedidia2002, Mezard2009} for additional details). The corresponding functional form of the Gibbs free energy is called the Bethe free energy:\n\\begin{gather}\n F_{\\rm Bethe}(\\{m_i\\}, \\{\\tilde{m}_\\mu\\}) = - \\sum_{\\mu=1}^M \\int \\dd{\\x_{\\partial \\mu}} \\tilde{m}_\\mu(\\x_{\\partial \\mu}) \\ln \\psi_\\mu(\\x_{\\partial \\mu}) - \\sum_{i=1}^N \\int \\dd{x_i} m_i(x_i) \\ln p_x(x_i) \\notag \\\\\n \\qquad \\qquad + \\sum_{i=1}^N (n_i -1) H(m_i) - \\sum_{\\mu=1}^M H(\\tilde{m}_\\mu), \n\\end{gather}\nwhere $H(q)$ is the entropy of the distribution $q$. Optimization of the Bethe free energy with respect to its arguments under the consistency constraint\n\\begin{gather}\n \\int \\dd{\\x_{\\partial \\mu \\setminus i}} \\tilde{m}_\\mu(\\x_{\\partial \\mu }) = m_i(x_i)\n\\end{gather}\ninvolves Lagrange multipliers which can be shown to be related to the messages defined in \\eqref{eq:chap3-bp1}-\\eqref{eq:chap3-bp2}. Eventually, one can verify that marginals defined as \\eqref{eq:chap3-bp-marginal} and \n\\begin{gather}\n \\tilde{m}_\\mu(\\x_{\\partial \\mu}) = \\frac{1}{\\cZ_\\mu} \\psi_\\mu(\\x_{\\partial \\mu}) \\prod_{i \\in \\partial \\mu}\\msg{m}{i}{\\mu}(x_i),\n\\end{gather}\nare stationary points of the Bethe free energy for messages that are BP solutions. In other words, the BP fixed points are consistent with the stationary points of the Bethe free energy. 
Using the normalizing constants of the messages, the Bethe free energy can also be re-written as\n\\begin{gather}\n \\label{eq:chap3-bethe-fe}\n F_{\\rm Bethe} = - \\sum_{i \\in V} \\log \\cZ_i - \\sum_{\\mu \\in F} \\log \\cZ_\\mu + \\sum_{(i \\mu) \\in E} \\log \\cZ_{\\mu i} \\, , \n\\end{gather}\nwith\n\\begin{gather}\n \\cZ_i = \\int \\dd{x_i} p_x(x_i) \\prod_{\\mu \\in \\partial i} \\msgt{m}{\\mu}{i}(x_i) \\, ,\\\\\n \\cZ_\\mu = \\int \\prod_{i \\in \\partial \\mu} \\dd{x_i} \\psi_\\mu(\\x_{\\partial \\mu}) \\prod_{i \\in \\partial \\mu}\\msg{m}{i}{\\mu}(x_i) \\, ,\\\\\n \\cZ_{\\mu i } = \\int \\dd{x_i} \\msgt{m}{\\mu}{i}(x_i) \\msg{m}{i}{\\mu}(x_i)\\, .\n \\end{gather}\n\nAs for the marginals, the Bethe free energy will only be exact if the underlying factor graph is a tree. Otherwise it is an approximation of the free energy which, unlike the naive mean-field free energy, is not generally an upper bound.\n\n\\subparagraph{Belief propagation for the GLM} The writing of the BP equations for our case-study is schematized on the right of \\citefig~\\ref{fig:chap3-glm}. There are $2 \\times N \\times M$ updates:\n\\begin{align}\n \\label{eq:chap3-bp-glm-1}\n \\msg{\\tilde{m}^{(t)}}{\\mu}{i}(x_i) \n & = \\frac{1}{\\msg{\\cZ}{\\mu}{i}} \\int \\prod_{i'\\neq i} \\dd{x_{i'}} \\pout(y_\\mu | {w_\\mu}\\T\\x) \\prod_{i'\\neq i} \\msg{m^{(t)}}{i'}{\\mu}(x_{i'}),\\\\\n \\label{eq:chap3-bp-glm-2}\n \\msg{m^{(t+1)}}{i}{\\mu}(x_i) \n & = \\frac{1}{\\msg{\\cZ}{i}{\\mu}} p_x(x_i)\\prod_{\\mu'\\neq \\mu} \\msg{\\tilde{m}^{(t)}}{\\mu'}{i}(x_i),\n\\end{align}\nfor all $i$-$\\mu$ pairs. \nDespite a relatively concise formulation, running BP in practice turns out to be intractable here since, for a signal $\\x$ taking continuous values, it would entail keeping track of full distributions over continuous variables. In this case, BP is approximated by the (G)AMP algorithm presented in the next section.\n\n\\subsubsection{(Generalized) approximate message passing}\n\\label{sec:chap3-gamp}\nThe name of approximate message passing (AMP) was coined by Donoho, Maleki and Montanari \\cite{Donoho2009}, who derived the algorithm in the context of Compressed Sensing. Several works from statistical physics had nevertheless already proposed related algorithmic procedures and made connections with the TAP equations for different systems \\cite{Kabashima1998, opper2001advanced, Kabashima2003}. The algorithm was derived systematically for any channel of the GLM by Rangan \\cite{Rangan2011} and became Generalized-AMP (GAMP); yet again, it seems that \\cite{Kabashima2004} proposed the first generalized derivation.\n\nThe systematic procedure to write AMP for a given joint probability distribution consists in first writing BP on the factor graph, second projecting the messages onto a parametrized family of functions to obtain the corresponding \\emph{relaxed BP} (r-BP), and third closing the equations on a reduced set of parameters by keeping only the leading terms in the thermodynamic limit. We will quickly review and justify these steps for the GLM. \nNote that here a relevant algorithm for approximate inference will be derived from message passing on a fully connected graph of interactions. As it turns out, the high-connectivity limit and the introduction of short loops do not break the assumption of independence of incoming messages in this specific case, thanks to the small scale $O(1/\\sqrt{N})$ of the weights and the independence of the weight entries. 
The statistics of the weights are here crucial.\n\n\\paragraph{Relaxed Belief Propagation}\nIn the thermodynamic limit $M, N \\to + \\infty$, one can show that the scaling $1/\\sqrt{N}$ of the $W_{\\mu i}$ and the extensive connectivity of the underlying factor graph imply that messages are approximately Gaussian.\nWithout giving all the details of the computation, which can be cumbersome, let us try to provide some intuition. We drop the time indices for simplicity and start with \\eqref{eq:chap3-bp-glm-1}. Consider the intermediate reconstruction variable $z_\\mu = \\vect{w}_\\mu\\T\\x = \\sum_{i'\\neq i}W_{\\mu i'}x_{i'} + W_{\\mu i}x_i$. Under the statistics of the messages $\\msg{m}{i'}{\\mu}(x_{i'})$, the $x_{i'}$ are independent, such that by the central limit theorem $z_\\mu - W_{\\mu i}x_i$ is a Gaussian random variable with mean and variance respectively\n\\begin{gather}\n \\label{eq:chap3-rbp-om}\n \\msg{\\omega}{\\mu}{i} = \\sum_{i'\\neq i}W_{\\mu i'}\\msg{\\hat{x}}{i'}{\\mu},\\\\\n \\label{eq:chap3-rbp-V}\n \\msg{V}{\\mu}{i} = \\sum_{i'\\neq i}W^2_{\\mu i'}\\msg{C^x}{i'}{\\mu},\n\\end{gather}\nwhere we defined the mean and the variance of the messages $\\msg{m}{i'}{\\mu}(x_{i'})$,\n\\begin{gather}\n \\label{eq:chap3-rbp-xhat}\n \\msg{\\hat{x}}{i'}{\\mu} = \\int \\dd{x_{i'}} x_{i'} \\, \\msg{m}{i'}{\\mu}(x_{i'}), \\\\\n \\label{eq:chap3-rbp-cx} \n \\msg{C^x}{i'}{\\mu} = \\int \\dd{x_{i'}} x^2_{i'} \\, \\msg{m}{i'}{\\mu}(x_{i'}) - \\msg{\\hat{x}}{i'}{\\mu}^2.\n\\end{gather}\nUsing these new definitions, \\eqref{eq:chap3-bp-glm-1} can be rewritten as \n\\begin{gather}\n \\label{eq:chap3-rbp-01}\n \\msg{\\tilde{m}}{\\mu}{i}(x_i) \\propto \\int \\dd{z_\\mu} \\pout(y_\\mu | z_\\mu)e^{-\\frac{(z_\\mu - W_{\\mu i} x_i - \\msg{\\omega}{\\mu}{i})^2}{2 \\msg{V}{\\mu}{i}}} ,\n\\end{gather}\nwhere the notation $\\propto$ omits the normalization factor for distributions. Considering that $W_{\\mu i}$ is of order $1/\\sqrt{N}$, the development of \\eqref{eq:chap3-rbp-01} shows that at leading order $\\msg{\\tilde{m}}{\\mu}{i}(x_i)$ is Gaussian:\n\\begin{gather}\n \\label{eq:chap3-rbp-02}\n \\msg{\\tilde{m}}{\\mu}{i}(x_i) \\propto e^{\\msg{B}{\\mu}{i}x_i - \\frac{1}{2} \\msg{A}{\\mu}{i}x_i^2 }\n\\end{gather}\nwhere the details of the computations yield\n\\begin{gather}\n \\label{eq:chap3-rbp-B}\n \\msg{B}{\\mu}{i} = W_{\\mu i} \\, \\gouts(y_\\mu, \\msg{\\omega}{\\mu}{i}, \\msg{V}{\\mu}{i}) \\\\\n \\label{eq:chap3-rbp-A}\n \\msg{A}{\\mu}{i} = - W_{\\mu i}^2 \\, \\dgouts(y_\\mu, \\msg{\\omega}{\\mu}{i}, \\msg{V}{\\mu}{i})\n\\end{gather}\nusing the \\emph{output update functions}\n\\begin{gather}\n\\label{eq:chap3-gout}\n\\gouts(y, \\omega, V) = \\frac{1}{\\Zout}\\int \\dd{z} \\frac{(z - \\omega)}{V} \\pout(y|z) \\cN(z; \\omega, V), \\\\\n\\label{eq:chap3-dgout}\n\\dgouts(y, \\omega, V) = \\frac{1}{\\Zout}\\int \\dd{z} \\frac{(z - \\omega)^2}{V^2} \\pout(y|z) \\cN(z; \\omega, V) - \\frac{1}{V} - \\gouts(y, \\omega, V)^2,\\\\ \n\\label{eq:chap3-Zout}\n\\Zout( y, \\omega, V) = \\int \\dd{z} \\pout(y|z) \\cN(z; \\omega, V).\n\\end{gather}\nThese arguably complicated functions, again coming out of the development of \\eqref{eq:chap3-rbp-01}, can be interpreted as the estimation of the mean and the variance of the gap between two different estimates of $z_\\mu$ considered by the algorithm: the mean estimate $\\msg{\\omega}{\\mu}{i}$ given the incoming messages $\\msg{m}{i'}{\\mu}(x_{i'})$, and the same mean estimate updated to incorporate the information coming from the channel $\\pout$ and observation $y_\\mu$. 
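\n\nAs a worked example one can verify by direct Gaussian integration, for a Gaussian channel $\\pout(y|z) = \\cN(y; z, \\Delta)$ the output update functions take the closed form\n\\begin{gather}\n \\Zout(y, \\omega, V) = \\cN(y; \\omega, V + \\Delta) \\, , \\quad \\gouts(y, \\omega, V) = \\frac{y - \\omega}{V + \\Delta} \\, , \\quad \\dgouts(y, \\omega, V) = - \\frac{1}{V + \\Delta} \\, ,\n\\end{gather}\nso that $\\msg{A}{\\mu}{i} = W_{\\mu i}^2 / (V + \\Delta)$ is positive, as required for \\eqref{eq:chap3-rbp-02} to be normalizable.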
Finally, the Gaussian parametrization \eqref{eq:chap3-rbp-02} of $\msg{\tilde{m}}{\mu}{i}(x_i)$ serves to rewrite the other type of messages $\msg{m}{i}{\mu}(x_i)$ \eqref{eq:chap3-bp-glm-2},
\begin{gather}
 \msg{m}{i}{\mu}(x_i) \propto p_x(x_i)e^{-\frac{(\msg{\lambda}{i}{\mu}- x_i)^2}{2 \msg{\sigma}{i}{\mu}}},
\end{gather}
with
\begin{gather}
 \label{eq:chap3-rbp-sig}
 \msg{\sigma}{i}{\mu} = \left(\sum_{\mu' \neq \mu}\msg{A}{\mu'}{i} \right)^{-1} \\
 \label{eq:chap3-rbp-lbd}
 \msg{\lambda}{i}{\mu} = \msg{\sigma}{i}{\mu} \left(\sum_{\mu' \neq \mu} \msg{B}{\mu'}{i}\right).
\end{gather}
The set of equations can finally be closed by recalling the definitions \eqref{eq:chap3-rbp-xhat}-\eqref{eq:chap3-rbp-cx}:
\begin{gather}
 \label{eq:chap3-rbp-xhat-f1}
 \msg{\hat{x}}{i}{\mu} = f^x_1(\msg{\lambda}{i}{\mu}, \msg{\sigma}{i}{\mu})\\
 \label{eq:chap3-rbp-xhat-f2}
 \msg{C^x}{i}{\mu} = f^x_2(\msg{\lambda}{i}{\mu}, \msg{\sigma}{i}{\mu})
\end{gather}
where we introduced the \emph{input update functions}
\begin{gather}
 \label{eq:chap3-Zx}
 \cZ^x = \int \dd{x} p_x(x)e^{-\frac{(x-\lambda)^2}{2\sigma}}, \\
 \label{eq:chap3-f1x}
 f^x_1(\lambda, \sigma) = \frac{1}{\cZ^x}\int \dd{x} x \, p_x(x)e^{-\frac{(x-\lambda)^2}{2\sigma}}, \\
 \label{eq:chap3-f2x}
 f^x_2(\lambda, \sigma) = \frac{1}{\cZ^x} \int \dd{x} x^2 \, p_x(x)e^{-\frac{(x-\lambda)^2}{2\sigma}} - f^x_1(\lambda, \sigma)^2.
\end{gather}
The input update functions can be interpreted as updating the estimates of the mean and variance of the signal $x_i$, combining the information of the incoming messages, carried by $\msg{\lambda}{i}{\mu}$ and $\msg{\sigma}{i}{\mu}$, with the information of the prior $p_x$.

To sum up, by considering the leading-order terms in the thermodynamic limit, the BP equations can be self-consistently rewritten as a closed set of equations over mean and variance variables, \eqref{eq:chap3-rbp-om}-\eqref{eq:chap3-rbp-V}, \eqref{eq:chap3-rbp-B}-\eqref{eq:chap3-rbp-A}, \eqref{eq:chap3-rbp-sig}-\eqref{eq:chap3-rbp-lbd} and \eqref{eq:chap3-rbp-xhat-f1}-\eqref{eq:chap3-rbp-xhat-f2}. In the end, r-BP can equivalently be thought of as the projection of BP onto the following parametrizations of the messages:
\begin{gather}
 \label{eq:chap3-rbp-1}
 \msg{\tilde{m}^{(t)}}{\mu}{i}(x_i) \propto e^{\msg{B^{(t)}}{\mu}{i}x_i - \frac 1 2 \msg{A^{(t)}}{\mu}{i}x_i^2} \propto \int \dd{z_\mu} \pout(y_\mu | z_\mu)e^{-\frac{(z_\mu - W_{\mu i} x_i - \msg{\omega^{(t)}}{\mu}{i})^2}{2 \msg{V^{(t)}}{\mu}{i}}} ,\\
 \label{eq:chap3-rbp-2}
 \msg{m^{(t+1)}}{i}{\mu}(x_i) \propto e^{-\frac{(\msg{\hat{x}^{(t+1)}}{i}{\mu}- x_i)^2}{2 \msg{{C^x}^{(t+1)}}{i}{\mu}}} \propto p_x(x_i)e^{-\frac{(\msg{\lambda^{(t)}}{i}{\mu}- x_i)^2}{2 \msg{\sigma^{(t)}}{i}{\mu}}}.
\end{gather}
Note that, at convergence, an approximation of the marginals is recovered from the projection of \eqref{eq:chap3-bp-marginal} on the parametrization \eqref{eq:chap3-rbp-2},
\begin{gather}
 m_i(x_i) = \frac{1}{\cZ_i}p_x(x_i)e^{-\frac{(\lambda_i-x_i)^2}{2\sigma_i}}, \quad \hat{x}_i = f^x_1(\lambda_i, \sigma_i), \\
 \sigma_i = \left(\sum_{\mu}\msg{A}{\mu}{i} \right)^{-1}, \\
 \lambda_i = \sigma_i \left(\sum_{\mu} \msg{B}{\mu}{i}\right).
\end{gather}

Nonetheless, r-BP is rarely used as such, as the computational cost can be readily reduced with little further approximation. Because the parameters in \eqref{eq:chap3-rbp-1}-\eqref{eq:chap3-rbp-2} take the form of messages on the edges of the factor graph, there are still $O(M \times N)$ quantities to track in order to solve the self-consistency equations by iteration. Yet, in the thermodynamic limit, the messages are closely related to the marginals, as the contribution of the missing message between \eqref{eq:chap3-bp2} and \eqref{eq:chap3-bp-marginal} is to a certain extent negligible. Careful bookkeeping of the order of the contributions of these small differences leads to a closed set of equations on the parameters of the marginals, i.e. $O(N)$ variables, corresponding to the GAMP algorithm.

A detailed derivation of r-BP for the GLM, along with the resulting algorithm, can be found for example in \cite{Zdeborova2016} (\citesec~6.3.1). In \citesec~\ref{sec:chap3-multivalue} of the present paper, we also present the derivation in a slightly more general setting where the variables $x_i$ and $y_\mu$ are possibly vectors instead of scalars.

\paragraph{Generalized approximate message passing}

The GAMP algorithm with respect to marginal parameters, analogous to the message parameters introduced above (summarized in \eqref{eq:chap3-rbp-1}-\eqref{eq:chap3-rbp-2}), is given in \citealg~\ref{alg:chap3-amp}.
The origin of GAMP is again the expansion of the r-BP message-like equations around marginal quantities. The details of this derivation for the GLM can be found for instance in \cite{Zdeborova2016} (\citesec~6.3.1).
For a random initialization, the algorithm can be decomposed into four steps per iteration, which refine the estimates of the signal $\x$ and of the intermediate variable $\z$ by incorporating the different sources of information.
Steps 2) and 4) involve the \emph{update functions} relative to the prior and output channel defined above; a concrete instantiation for a sparse prior is sketched after the algorithm.
Steps 1) and 3) are general for any GLM with a random Gaussian weight matrix, as they result from the consistency of the two alternative parametrizations introduced for the same messages in \eqref{eq:chap3-rbp-1}-\eqref{eq:chap3-rbp-2}.

\input{chap3_amp}
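Complementing the Gaussian-channel output functions sketched earlier, the input update functions also admit closed forms for simple priors. As an illustration under this assumption, the sketch below evaluates $f^x_1$ and $f^x_2$ of \eqref{eq:chap3-f1x}-\eqref{eq:chap3-f2x} for the Bernoulli-Gauss prior $p_x(x) = (1-\rho)\delta(x) + \rho\, \cN(x; 0, 1)$ commonly considered in Compressed Sensing; the function name is again of our choosing. These are the functions invoked in the non-linear steps of \citealg~\ref{alg:chap3-amp}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def f1_f2_bernoulli_gauss(lam, sig, rho):
    """Input update functions f^x_1 (eq:chap3-f1x) and f^x_2
    (eq:chap3-f2x) for p_x(x) = (1-rho) delta(x) + rho N(x; 0, 1).
    Normalizing both terms by the Gaussian factor N(x; lam, sig),
    every integral reduces to a closed form."""
    # Relative weights of the Gaussian ('slab') and zero ('spike') parts.
    z_slab = rho * norm.pdf(lam, loc=0.0, scale=np.sqrt(1.0 + sig))
    z_spike = (1.0 - rho) * norm.pdf(lam, loc=0.0, scale=np.sqrt(sig))
    w = z_slab / (z_slab + z_spike)
    # Mean and variance of the slab posterior N(x; 0, 1) N(x; lam, sig).
    m = lam / (1.0 + sig)
    s = sig / (1.0 + sig)
    f1 = w * m                        # posterior mean
    f2 = w * (m**2 + s) - f1**2       # posterior variance
    return f1, f2
\end{verbatim}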
\subparagraph{Relation to TAP equations}
Historically, the main difference between the AMP algorithm and the TAP equations is that the latter were first derived for binary variables with $2$-body interactions (the SK model), while the former was proposed for continuous random variables with $N$-body interactions (Compressed Sensing). The details of the derivation (described in \cite{Zdeborova2016}, or in a more general case in \citesec~\ref{sec:chap3-multivalue}) rely on the knowledge of the statistics of the disordered variable $\W$ but, as in the Georges-Yedidia expansion yielding the TAP equations, do not require an average over the disorder.
By focusing on the GLM with a random Gaussian weight matrix scaling as $O(1/\sqrt{N})$ (similarly to the couplings of the SK model), we naturally obtained TAP equations at second order, with an Onsager term in the update \eqref{alg:chap3-amp-om} of $\omega_\mu$.
Yet an advantage of the AMP derivation from BP over the high-temperature expansion is that it explicitly provides the `correct' time indices in the iteration scheme used to solve the self-consistent equations \cite{bolthausen2014iterative}.

\subparagraph{Reconstruction with AMP}
AMP is therefore a practical reconstruction algorithm which can be run on a single instance (the disorder is not averaged) to estimate an unknown signal $\x_0$. Note that the prior $p_x$ and channel $\pout$ used in the algorithm correspond to the student statistical model, and they may be different from the true underlying teacher model that generates $\x_0$ and $\y$. In other words, the AMP algorithm may be used either in the Bayes optimal or in the mismatched setting defined in \citesec~\ref{sec:chap2-teacher-student}.
Remarkably, it is also possible to consider a disorder average in the thermodynamic limit to study the average-case computational hardness of the GLM inference problem, in either of these matched or mismatched configurations. Before turning to this average-case analysis, we sketch below a toy single-instance reconstruction.
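The following minimal sketch assembles the closed forms of the previous examples into a GAMP loop for a noisy linear channel $\y = \W\x_0 + \sqrt{\Delta}\,\boldsymbol{\epsilon}$ and a Bernoulli-Gauss signal. It follows the four steps described above, but it is a pedagogical sketch under these assumptions (fixed step order, no damping), not a reference implementation of \citealg~\ref{alg:chap3-amp}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def f1_f2(lam, sig, rho):
    # Bernoulli-Gauss input update functions (see the sketch above).
    zg = rho * norm.pdf(lam, 0.0, np.sqrt(1.0 + sig))
    zs = (1.0 - rho) * norm.pdf(lam, 0.0, np.sqrt(sig))
    w = zg / (zg + zs)
    m, s = lam / (1.0 + sig), sig / (1.0 + sig)
    return w * m, w * (m**2 + s) - (w * m) ** 2

def gamp(W, y, delta, rho, n_iter=50):
    """GAMP for y = W x0 + sqrt(delta) * noise, Bernoulli-Gauss prior.
    Four steps per iteration; damping may be needed in harder regimes."""
    M, N = W.shape
    W2 = W ** 2
    xhat, cx = np.zeros(N), rho * np.ones(N)   # prior mean and variance
    g = np.zeros(M)
    for _ in range(n_iter):
        # 1) Output linear step, with the Onsager correction in omega.
        V = W2 @ cx
        omega = W @ xhat - V * g
        # 2) Output non-linear step (Gaussian channel closed forms).
        g = (y - omega) / (delta + V)          # gout
        dg = -1.0 / (delta + V)                # d gout / d omega
        # 3) Input linear step.
        sig = 1.0 / (W2.T @ (-dg))
        lam = xhat + sig * (W.T @ g)
        # 4) Input non-linear step.
        xhat, cx = f1_f2(lam, sig, rho)
    return xhat

rng = np.random.default_rng(0)
N, alpha, rho, delta = 2000, 0.5, 0.1, 1e-4
M = int(alpha * N)
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(M, N))
x0 = (rng.random(N) < rho) * rng.normal(size=N)
y = W @ x0 + np.sqrt(delta) * rng.normal(size=M)
print("reconstruction MSE:", np.mean((gamp(W, y, delta, rho) - x0) ** 2))
\end{verbatim}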
\paragraph{State Evolution}
The statistical analysis of the AMP equations for Compressed Sensing, in the average case and in the thermodynamic limit $N\to \infty$, leads to another closed set of equations, called State Evolution (SE) in \cite{Donoho2009}. Such an analysis can be generalized to other problems of application of approximate message passing algorithms. The derivation of SE starts from the r-BP equations and relies on the assumption of independent incoming messages to invoke the Central Limit Theorem. It is therefore only necessary to follow the evolution of a set of means and variances parametrizing Gaussian distributions. When the different variables and factors are statistically equivalent, as is the case for the GLM, SE reduces to a few scalar equations. The interested reader can refer to \citeapp~\ref{app:chap6-vect-amp} for a detailed derivation in a more general setting.

\subparagraph{Mismatched setting} In the general mismatched setting we need to carefully differentiate the teacher and the student. We denote by $p_{x_0}$ the prior used by the teacher. We also rewrite its channel $\pouto(y|\vect{w}\T\x)$ as the explicit function $y = g_0(\vect{w}\T\x; \epsilon)$, assuming the noise $\epsilon$ to be distributed according to the standard normal distribution.
The tracked quantities are the \emph{overlaps},
\begin{gather}
 q =\lim_{N\to \infty}\frac{1}{N} \sum_{i=1}^N \hat{x}_i^2\,, \quad m = \lim_{N\to \infty}\frac{1}{N} \sum_{i=1}^N \hat{x}_i x_{0,i} \,, \quad q_0 = \lim_{N\to \infty}\frac{1}{N} \sum_{i=1}^N x_{0,i}^2 = \E_{p_{x_0}}[x_0^2] ,
\end{gather}
along with the auxiliary $V$, $\hat{q}$, $\hat{m}$ and $\hat{\chi}$:
\begin{align}
\label{eq:chap3-se-nonishi-out-q}
\hat{q}^{(t)} & = \int \D{\epsilon} \int \dd{\omega} \dd{z} \cN(z, \omega ; 0, \mat{Q}^{(t)})
	\gouts(g_0\left( z ; \epsilon\right), \omega, V^{(t)})^2 \, ,\\
	\label{eq:chap3-se-nonishi-out-m}
\hat{m}^{(t)} & = \int \D{\epsilon} \int \dd{\omega} \dd{z} \cN(z, \omega ; 0, \mat{Q}^{(t)})
		\partial_{z} \gouts(g_0\left( z ; \epsilon\right), \omega, V^{(t)}) \, ,\\
\label{eq:chap3-se-nonishi-out-xi}
\hat{\chi}^{(t)} & = - \int \D{\epsilon} \int \dd{\omega}\dd{z} \cN(z, \omega ; 0, \mat{Q}^{(t)})
	 \partial_{\omega} \gouts(g_0\left( z ; \epsilon\right), \omega, V^{(t)}) \, ,
\end{align}
\begin{align}
\label{eq:chap3-se-nonishi-in-q}
q^{(t+1)} & = \int \dd{x_0} p_{x_0}(x_0) \int \D{\xi}
	f^x_1 \left( (\alpha \hat{\chi}^{(t)})^{-1}\left({\sqrt{\alpha \hat{q}^{(t)}} \xi + \alpha \hat{m}^{(t)} x_0}\right); (\alpha \hat{\chi}^{(t)})^{-1} \right) ^2 \, ,\\
\label{eq:chap3-se-nonishi-in-m}
m^{(t+1)} & = \int \dd{x_0} p_{x_0}(x_0) \int \D{\xi} x_0
	f^x_1 \left( (\alpha \hat{\chi}^{(t)})^{-1}\left({\sqrt{\alpha \hat{q}^{(t)}} \xi + \alpha \hat{m}^{(t)} x_0}\right); (\alpha \hat{\chi}^{(t)})^{-1} \right) \, ,\\
\label{eq:chap3-se-nonishi-in-V}
V^{(t+1)} & = \int \dd{x_0} p_{x_0}(x_0) \int \D{\xi}
f^x_2 \left( (\alpha \hat{\chi}^{(t)})^{-1}\left({\sqrt{\alpha \hat{q}^{(t)}} \xi + \alpha \hat{m}^{(t)} x_0}\right); (\alpha \hat{\chi}^{(t)})^{-1} \right) \, ,
\end{align}
where we use the notation $\cN(\cdot;\cdot,\cdot)$ for the normal distribution and $\D{\xi}$ for the standard normal measure, and where the covariance matrix $\mat{Q}^{(t)}$ is given at each time step by
\[
\mat{Q}^{(t)} =
\begin{bmatrix}
q_0 & m^{(t)} \\
m^{(t)} & q^{(t)} \\
\end{bmatrix}.
\]

Due to the self-averaging property, the performance of the reconstruction by the AMP algorithm on an instance of size $N$ can be tracked along the iterations through the mean squared error
\begin{gather}
 \label{eq:chap3-se-mse}
 \MSE(\hat{x})= \frac{1}{N}\sum_{i=1}^N (\hat{x}_i - x_{0,i})^2 = q - 2 m + q_0,
\end{gather}
with only minor differences coming from finite-size effects.
State Evolution also provides an efficient procedure to study, from the theoretical perspective, the fixed points of AMP for a generic model such as the GLM, as a function of some control parameters: it reproduces the average behavior of the complete AMP algorithm, which operates on $O(N)$ variables, with only a few scalar equations. A minimal implementation for Gaussian channels is sketched below.
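For Gaussian channels on both the teacher side ($y = z + \sqrt{\Delta_0}\,\epsilon$) and the student side (noise variance $\Delta$), the $(z, \omega)$ averages in \eqref{eq:chap3-se-nonishi-out-q}-\eqref{eq:chap3-se-nonishi-out-xi} reduce to closed forms, and the remaining scalar averages \eqref{eq:chap3-se-nonishi-in-q}-\eqref{eq:chap3-se-nonishi-in-V} can be estimated by Monte Carlo. The sketch below implements this special case for the Bernoulli-Gauss prior; it is an illustration under these assumptions, with function names of our choosing.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def f1_f2(lam, sig, rho):
    # Bernoulli-Gauss input update functions (see the earlier sketch).
    zg = rho * norm.pdf(lam, 0.0, np.sqrt(1.0 + sig))
    zs = (1.0 - rho) * norm.pdf(lam, 0.0, np.sqrt(sig))
    w = zg / (zg + zs)
    m, s = lam / (1.0 + sig), sig / (1.0 + sig)
    return w * m, w * (m**2 + s) - (w * m) ** 2

def se_mismatched(alpha, rho, delta0, delta, n_iter=200, n_mc=500000):
    """State Evolution for Gaussian teacher (noise delta0) and student
    (noise delta) channels, tracking the overlaps m, q and V."""
    rng = np.random.default_rng(0)
    x0 = (rng.random(n_mc) < rho) * rng.normal(size=n_mc)  # teacher prior
    xi = rng.normal(size=n_mc)
    q0 = rho                      # second moment of the teacher prior
    m = q = 1e-8                  # (almost) uninformative initialization
    V = q0
    for _ in range(n_iter):
        # Closed forms of the (z, omega) averages: gout = (y - omega)
        # / (delta + V), with y - omega ~ N(0, q0 - 2m + q + delta0).
        qhat = (q0 - 2.0 * m + q + delta0) / (delta + V) ** 2
        mhat = 1.0 / (delta + V)  # E[ d_z gout ]
        chat = 1.0 / (delta + V)  # -E[ d_omega gout ]
        # Scalar averages over (x0, xi), estimated by Monte Carlo.
        sig = 1.0 / (alpha * chat)
        lam = sig * (np.sqrt(alpha * qhat) * xi + alpha * mhat * x0)
        f1, f2 = f1_f2(lam, sig, rho)
        q, m, V = np.mean(f1 ** 2), np.mean(f1 * x0), np.mean(f2)
    return m, q, V

m, q, V = se_mismatched(alpha=0.5, rho=0.1, delta0=1e-4, delta=1e-4)
print("predicted MSE = q - 2m + q0 =", q - 2.0 * m + 0.1)
\end{verbatim}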
The State Evolution equations simplify further in the Bayes optimal setting.

\subparagraph{Bayes optimal setting}
When the prior and channel are identical for the student and the teacher, the true unknown signal $\x_0$ is in some sense statistically equivalent to the estimate $\xh$ coming from the posterior. More precisely, one can prove the Nishimori identities \cite{Opper1991, Iba1999, Nishimori2001} (see \cite{Kabashima2016} for a concise demonstration and discussion) implying that $q = m$, $ V = q_0 - m$ and $\hat{q} = \hat{m} = \hat{\chi}$. Only two equations are then necessary to track the performance of the reconstruction:
\begin{align}
 \label{eq:chap3-se-bo-qh}
 \hat{q}^{(t)} & = \int \D{\epsilon} \int \dd{\omega} \dd{z} \cN(z, \omega ; 0, \mat{Q}^{(t)})
 \gouts(g_0\left( z ; \epsilon\right), \omega, V^{(t)})^2 \, , \\
 \label{eq:chap3-se-bo-q}
 q^{(t+1)} & = \int \dd{x_0} p_{x_0}(x_0) \int \D{\xi}
	f^x_1 \left( (\alpha \hat{q}^{(t)})^{-1}\left({\sqrt{\alpha \hat{q}^{(t)}} \xi + \alpha \hat{q}^{(t)} x_0}\right); (\alpha \hat{q}^{(t)})^{-1} \right) ^2 \, .
\end{align}


\subsection{Replica method}
\label{sec:chap3-replica}
Another powerful technique from the statistical physics of disordered systems for examining models with infinite-range interactions is the replica method. It enables an analytical computation of the quenched free energy via non-rigorous mathematical manipulations. More developed introductions to the method can be found in \cite{Mezard1986, Nishimori2001, Castellani2005}.

\subsubsection{Steps of a replica computation}
The basic idea of the replica computation is to compute the average of $\log \cZ$ over the disorder using the identity $\log \cZ = \lim_{n\to 0} (\cZ^n - 1)/n$. First the expectation of $\cZ^n$ is evaluated for $n\in\mathbb{N}$, then the $n\to 0$ limit is taken by `analytic continuation'. The method thus takes advantage of the fact that the average of a power of $\cZ$ is sometimes easier to compute than the average of a logarithm. We illustrate the key steps of the calculation for the partition function of the GLM \eqref{eq:chap3-glm-Z}.

\subparagraph{Disorder average for the replicated system: coupling of the replicas}
The average of $\cZ^n$ for $n\in \mathbb{N}$ can be seen as the partition function of a system of $n + 1$ non-interacting replicas of $\x$, indexed by $a \in \{0, \cdots, n\}$, where the first replica $a=0$ represents the teacher and the $n$ other replicas are identically distributed as the student:
\begin{align}
 \E_{\W, \y, \x_0}\left[ \cZ^n \right] & = \E_{\W}\left[
 \int \dd{\y} \dd{\x_0}
 \pouto(\y|\W\x_0)
 p_{\x_0}(\x_0)
 \left( \int \dd{\x} \pout(\y|\W\x)p_x(\x)
 \right)^n
 \right] \\
 & = \E_{\W}\left[
 \int \dd{\y} \prod_{a=0}^{n} \left( \dd{\x_a} \pouta(\y|\W\x_a)p_{x_a}(\x_a) \right)
 \right] \\
 & = \E_{\W}\left[ \int \dd{\y} \prod_{a=0}^{n} \left( \dd{\x_a} \dd{\z_a} \delta(\z_a - \W\x_a)\pouta(\y|\z_a)p_{x_a}(\x_a) \right)
 \right] \;.
\end{align}
To perform the average over the disordered interactions $\W$, we consider the statistics of $\z_a = \W\x_a$. Recall that $W_{\mu i} \sim \cN(W_{\mu i} ;0,1/N)$, independently for all $\mu$ and $i$.
Consequently, the $\z_a$ are jointly Gaussian in the thermodynamic limit, with means and covariances
\begin{gather}
 \E_{\W}[z_{a,\mu}] = \E_{\W}\left[\sum_{i=1}^N W_{\mu i}x_{a,i}\right] = 0\, , \quad \E_{\W}\left[ z_{a,\mu} z_{b, \nu}\right] = \delta_{\mu\nu} \sum_{i=1}^N x_{a, i}x_{b, i} / N = \delta_{\mu\nu} \, q_{ab}.
\end{gather}
The overlaps, which we already introduced in the SE formalism, naturally re-appear. We introduce the notation $\q$ for the $(n+1) \times (n+1)$ overlap matrix. Integrating out the disorder $\W$ shared by the $n+1$ replicas therefore leaves us with an effective system of now coupled replicas:
\begin{align}
 \E_{\W, \y, \x_0} \left[ \cZ^n \right] =
 \int \prod_{a,b} \dd{N q_{ab}} & \int \dd{\y} \prod_{a=0}^{n} \dd{\z_a} \pouta(\y|\z_a) \\
 \exp&\left(-\frac{1}{2} \displaystyle \sum_{\mu=1}^M \sum_{a,b}z_{a,\mu}z_{b,\mu} (\q^{-1})_{ab} - M C(\q,n)\right)\notag \\
 & \int \prod_{a=0}^{n}\dd{\x_a} p_{x_a}(\x_a) \prod_{a,b}\delta(N q_{ab} - \sum_{i=1}^N x_{a,i}x_{b,i}). \notag
\end{align}

\subparagraph{Change of variable for the overlaps: decoupling of the variables}
We consider the Fourier representation of the Dirac distributions fixing the consistency between overlaps and replicas,
\begin{align}
 \delta(N q_{ab} - \sum_{i=1}^N x_{a,i}x_{b,i}) =
 \int \frac{{\mathrm{d}\hat{q}_{ab}}}{2 i \pi } \, e^{\hat{q}_{ab}(N q_{ab} - \sum_{i=1}^N x_{a,i}x_{b,i})},
\end{align}
where $\hat{q}_{ab}$ is purely imaginary, which yields
\begin{align}
 \E_{\W, \y, \x_0} \left[ \cZ^n \right] = & \int \prod_{a,b} \dd{Nq_{ab}} \int \prod_{a,b} \frac{{\mathrm{d}\hat{q}_{ab}}}{2 i \pi } \, \exp\left(N \sum_{a,b}\hat{q}_{ab}q_{ab}\right) \\
 & \int \dd{\y} \prod_{a=0}^{n} \dd{\z_a} \pouta(\y|\z_a) \exp\left(-\frac{1}{2} \displaystyle \sum_{\mu=1}^M \sum_{a,b}z_{a,\mu}z_{b,\mu} (\q^{-1})_{ab} - M C(\q,n)\right)\notag \\
 & \int \prod_{a=0}^{n}\dd{\x_a} p_{x_a}(\x_a) \exp\left(- \displaystyle \sum_{a,b} \hat{q}_{ab} \sum_{i=1}^N x_{a,i}x_{b,i}\right) \notag
\end{align}
where $C(\q,n)$ is related to the normalization of the Gaussian distributions over the $\z_a$ variables, and the integrals factorize over the indices $i$ and $\mu$. Thus we obtain
\begin{gather}
 \label{eq:chap3-replica-Nscaling}
 \E_{\W, \y, \x_0}\left[ \cZ^n \right] = \int \prod_{a,b} \dd{N q_{ab}} \int \prod_{a,b} \dd{ \hat{q}_{ab}} e^{N \sum_{a,b}\hat{q}_{ab}q_{ab}} e^{M \log \hat{\mathcal{I}}_z(\q)} e^{N \log \hat{\mathcal{I}}_x(\hat{\q})} \, ,
\end{gather}
with
\begin{gather}
 \hat{\mathcal{I}}_z(\q) = \int \dd{y} \prod_{a=0}^{n} \dd{z_a} \pouta(y|z_a) \exp\left(-\frac{1}{2} \displaystyle \sum_{a,b}z_{a}z_{b} (\q^{-1})_{ab} - C(\q,n)\right) \, ,\\
 \hat{\mathcal{I}}_x(\hat{\q}) = \int \prod_{a=0}^{n}\dd{x_a} p_{x_a}(x_a) \exp\left(- \displaystyle \sum_{a,b} \hat{q}_{ab} x_{a}x_{b}\right) \, ,
\end{gather}
where we introduce the notation $\hat{\q}$ for the auxiliary overlap matrix with entries $(\hat{\q})_{ab} = \hat{q}_{ab}$, and we omitted the factor $2i\pi$, which is eventually subleading as $N\to + \infty$.
The decoupling of the $x_i$ and the $z_\mu$ of the infinite-range system yields the pre-factors $N$ and $M$ in the exponential arguments. In the thermodynamic limit, we recall that both $N$ and $M$ tend to $+\infty$ while the ratio $\alpha=M/N$ remains fixed.
Hence, the integral for the replicated average is easily computed in this limit by the saddle point method:
\begin{gather}
 \log \E_{\W, \y, \x_0}\left[ \cZ^n \right] \simeq N \, \mathrm{extr}_{\q, \hat{\q}}\left[\phi(\q, \hat{\q})\right] \, , \quad \phi(\q, \hat{\q}) = \sum_{a,b} \hat{q}_{ab}q_{ab} + \alpha \log \hat{\mathcal{I}}_z(\q) + \log \hat{\mathcal{I}}_x(\hat{\q}),
\end{gather}
where we defined the replica potential $\phi$.

\subparagraph{Exchange of limits: back to the quenched average}
The thermodynamic average of the log-partition function is recovered through an a priori risky mathematical manipulation: (i) perform an analytic continuation from $n \in \mathbb{N}$ to $n \to 0$,
\begin{gather}
 \frac{1}{N}\E_{\W, \y, \x_0}\left[ \log \cZ \right]
 =
 \lim_{n\to 0} \frac{1}{nN} \E_{\W, \y, \x_0}\left[ \cZ^n -1\right]
 =
 \lim_{n\to 0} \frac{1}{nN} \log \E_{\W, \y, \x_0}\left[ \cZ^n \right]
\end{gather}
and (ii) exchange the limits
\begin{gather}
 -f
 = \lim_{N\to \infty} \lim_{n\to 0}\frac{1}{n} \frac{1}{N} \log \E_{\W, \y, \x_0}\left[ \cZ^n \right] = \lim_{n\to 0} \frac{1}{n} \mathrm{extr}_{\q, \hat{\q}}\left[\phi(\q, \hat{\q})\right].
\end{gather}
Despite the apparent lack of rigour of these last steps, the replica method has been proven to yield exact predictions in the thermodynamic limit for different problems, and in particular for the GLM \cite{Reeves2016, Barbier2017a}.

\subparagraph{Saddle point solution: choice of a replica ansatz}
At this point, we are still left with the problem of computing the extrema of $\phi(\q, \hat{\q})$. To solve this optimization problem over $\q$ and $\hat{\q}$, a natural assumption is that the replicas, which are a pure artefact of the calculation, are equivalent. This is reflected in a special structure of the overlap matrices, which then only depend on three parameters each,
\begin{align}
 \q =
\begin{bmatrix}
q_0 & m & m & m \\
m & q & q_{12} & q_{12}\\
m & q_{12} & q & q_{12}\\
m & q_{12} & q_{12} & q \\
\end{bmatrix} \, , \quad
\hat{\q} =
\begin{bmatrix}
\hat{q}_0 & \hat{m} & \hat{m} & \hat{m} \\
\hat{m} & \hat{q} & \hat{q}_{12} & \hat{q}_{12}\\
\hat{m} & \hat{q}_{12} & \hat{q} & \hat{q}_{12}\\
\hat{m} & \hat{q}_{12} & \hat{q}_{12} & \hat{q} \\
\end{bmatrix},
\end{align}
here given as an example for $n=3$ replicas.
Plugging this \emph{replica symmetric} (RS) ansatz into the expression of $\phi(\q, \hat{\q})$, taking the limit $n\to0$ and looking for the stationary points as a function of the parameters $q$, $m$, $q_{12}$ and $\hat{m}$, $\hat{q}$, $\hat{q}_{12}$ recovers a set of equations equivalent to the SE equations \eqref{eq:chap3-se-nonishi-out-q}-\eqref{eq:chap3-se-nonishi-in-V}, albeit without time indices. Hence the two a priori different heuristics of BP and the replica method are remarkably consistent under the RS assumption.

Nevertheless, replica symmetry can be spontaneously broken in the large-$N$ limit, and the dominating saddle point then does not necessarily correspond to the RS overlap matrices. This replica symmetry breaking (RSB) corresponds to substantial changes in the structure of the examined Boltzmann distribution. It is among the great strengths of the replica formalism to naturally capture it.
Yet for inference problems falling under the teacher-student scenario, the correct ansatz is always replica symmetric in the Bayes optimal setting \cite{Nishimori2001, Castellani2005, Zdeborova2016}, and we will not investigate this direction further here. The interested reader can refer to the classical references for an introduction to replica symmetry breaking \cite{Mezard1986, Nishimori2001, Castellani2005} in the context of the theory of spin glasses.


\subparagraph{Bayes optimal setting} As in SE, the equations simplify in the matched setting, where the first replica, corresponding to the teacher, becomes equivalent to all the others. The replica free energy of the GLM is then given as the extremum of a potential over two scalar variables:
\begin{gather}
 \label{eq:chap3-replica-fe-glm}
 - f = \mathrm{extr}_{q, \hat{q}}\left[ - \frac{1}{2} q \hat{q} + \mathcal{I}_x(\hat{q}) + \alpha \mathcal{I}_z(q, q_0)\right]\\
 \label{eq:chap3-replica-fe-glm_Ix}
 \mathcal{I}_x(\hat{q}) = \int \D{\xi} \dd{x} p_x(x)e^{-\hat{q}\frac{x^2}{2} + \sqrt{\hat{q}}\xi x} \log\left( \int \dd{x'} p_x(x')e^{-\hat{q}\frac{x'^2}{2} + \sqrt{\hat{q}}\xi x'}\right) \\
 \mathcal{I}_z(q, q_0) = \int \D{\xi} \dd{y} \dd{z} \pout(y|z)\cN(z;\sqrt{q}\xi,q_0-q) \notag \\
 \label{eq:chap3-replica-fe-glm_Iz}
 \qquad \qquad \qquad \qquad \times \log\left( \int \dd{z'} \pout(y|z')\cN(z';\sqrt{q}\xi,q_0-q) \right) .
\end{gather}
The saddle point equations corresponding to the extremization \eqref{eq:chap3-replica-fe-glm}, fixing the values of $q$ and $\hat{q}$, are again found to be equivalent to the Bayes optimal SE \eqref{eq:chap3-se-bo-qh}-\eqref{eq:chap3-se-bo-q}. This Bayes optimal result is derived in \cite{Krzakala2012} for the case of a linear channel and Gauss-Bernoulli prior, and can also be recovered as a special case of the low-rank matrix factorization formula (where the measurement matrix is in fact known) \cite{Kabashima2016}. For simple channels and priors, the potential can moreover be evaluated numerically at little cost, as sketched below.
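For the linear channel $\pout(y|z) = \cN(y; z, \Delta)$, the integral $\mathcal{I}_z$ has the closed form $-\frac{1}{2}\log\left(2\pi e (\Delta + q_0 - q)\right)$, so that stationarity in $q$ fixes $\hat{q} = \alpha/(\Delta + q_0 - q)$, and for the Bernoulli-Gauss prior the inner $x'$-integral in $\mathcal{I}_x$ is also analytic. The potential can then be scanned over the single scalar $q$, the remaining $\xi$ integral being computed by Gauss-Hermite quadrature. The minimal sketch below makes this concrete under these assumptions; it takes the equilibrium overlap at the global maximum of the potential, in the usual convention for this Bayes optimal potential.
\begin{verbatim}
import numpy as np

def bo_potential(q, alpha, rho, delta):
    """Bayes optimal replica potential eq:chap3-replica-fe-glm for the
    linear channel and the prior (1-rho) delta(x) + rho N(x; 0, 1),
    with qhat eliminated by the stationarity condition in q."""
    q0 = rho                              # second moment of the prior
    qhat = alpha / (delta + q0 - q)       # from d(potential)/dq = 0
    Iz = -0.5 * np.log(2.0 * np.pi * np.e * (delta + q0 - q))
    # I_x(qhat) = E_xi[ Z(xi) log Z(xi) ] with xi ~ N(0, 1) and
    # Z(xi) = (1-rho) + rho exp(qhat xi^2 / (2(1+qhat))) / sqrt(1+qhat).
    t, w = np.polynomial.hermite.hermgauss(201)
    xi2 = 2.0 * t ** 2                    # xi = sqrt(2) t
    Z = (1.0 - rho) + rho * np.exp(qhat * xi2 / (2.0 * (1.0 + qhat))) \
        / np.sqrt(1.0 + qhat)
    Ix = np.sum(w * Z * np.log(Z)) / np.sqrt(np.pi)
    return -0.5 * q * qhat + Ix + alpha * Iz

alpha, rho, delta = 0.5, 0.1, 1e-4
qs = np.linspace(1e-6, rho - 1e-6, 400)
phis = [bo_potential(q, alpha, rho, delta) for q in qs]
print("equilibrium overlap q:", qs[np.argmax(phis)])
\end{verbatim}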
\subsubsection{Assumptions and relation to other mean-field methods}
A crucial point in the above derivation of the replica formula is the extensivity of the interactions of the infinite-range model, which allowed the $N$-scaling of the argument of the exponential integrand in \eqref{eq:chap3-replica-Nscaling} to be factored out. The statistics of the disorder $\W$, and in particular the independence of all the $W_{\mu i}$, were also necessary. While this is an important assumption for the technique to go through, it can be relaxed for some types of correlation statistics, as we will see in \citesec~\ref{sec:chap3-ortho-invariant}.

Note that the replica method directly enforces the disorder averaging and does not provide predictions at the level of single instances. Therefore it cannot be turned into a practical reconstruction algorithm. Nonetheless, we have seen that the saddle point equations of the replica derivation, under the RS assumption, match the SE equations derived from BP. This is sufficient to theoretically study inference questions under a teacher-student scenario in the Bayes optimal setting, and in particular to predict the MSE following \eqref{eq:chap3-se-mse}.

In the mismatched setting however, the predictions of the replica method under the RS assumption, and the equivalent BP conclusions, can be wrong. By introducing symmetry breaking between the replicas, the method can sometimes be corrected. It is an important endeavor of the replica formalism to grasp the significance of the overlaps and to connect the form of the replica ansatz to the properties of the joint probability distribution examined. When BP fails on loopy graphs, correlations between variables do not decay with distance, which manifests itself as an RSB phase. Note that there also exist message-passing algorithms operating in this regime \cite{Mezard2001, Mezard2002, Mezard2009, Saglietti2019, Antenucci2019, Antenucci2019a}.

\subsection{Some current directions of research}
\label{sec:chapex}

The great leap forward in the performance of machine learning with neural networks brought by deep learning algorithms, along with the multitude of theoretical and practical challenges it has opened, has re-ignited the interest of physicists in the theory of neural networks.
In this \citesec, far from being exhaustive, we review some current directions of research leveraging mean-field approximations. Another relevant review is \cite{Carleo}, which provides references both for machine learning research helped by physics methods and, conversely, for research in physics using machine learning.

Works presented below do not necessarily implement one of the classical inference methods presented in \citesecs~\ref{sec:chap3} and \ref{sec:chap3further}. In some cases, the mean-field limit corresponds to some asymptotic setting where the problem simplifies: typically some correlations weaken, fluctuations are averaged out by concentration effects and, as a result, ad hoc methods of resolution can be designed. Thus, in the following contributions, different assumptions are considered to serve different objectives. For instance, some take an infinite-size limit, some assume random (instead of learned) weights, or vanishing learning rates. Hence, there is no such thing as a single mean-field theory of deep neural networks. The works cited below are rather complementary pieces of a great puzzle.

\subsubsection{Neural networks for unsupervised learning}

\paragraph{Fundamental study of learning}
Given their similarity with the Ising model, Restricted Boltzmann Machines have unsurprisingly attracted a lot of interest. Studying an ensemble of RBMs with random parameters using the replica method, Tubiana and Monasson \cite{Tubiana2017} evidenced different regimes of typical patterns of activation in the hidden units, and identified as control parameters the sparsity of the weights, their strength (playing the role of an effective temperature) and the type of prior for the hidden layer. Their study contributes to the understanding of the conditions under which RBMs can represent high-order correlations between input units, albeit without including data and learning in the model.
Barra and collaborators \cite{Barra2017,Barra2018} exploited the connections between the Hopfield model and RBMs to characterize RBM learning understood as an associative memory. Relying again on replica calculations, they characterized the retrieval phase of RBMs.
Mézard \cite{Mezard2017} also re-examined retrieval in the Hopfield model using its RBM representation and message passing, showing in particular that the addition of correlations between memorized patterns could still allow for a mean-field treatment, at the price of a supplementary hidden layer in the Boltzmann Machine representation.
This result remarkably draws a theoretical link between correlations in the training data and the necessity of depth in neural network models.

While the above results do not include a characterization of the learning driven by data, a few others were able to discuss the dynamics of training. Huang \cite{Huang2017} studied, with the replica method and TAP equations, the Bayesian learning of an RBM with a single hidden unit and binary weights.
Barra and collaborators \cite{Barra2017} empirically studied a teacher-student scenario of unsupervised learning by maximum likelihood on samples of a Hopfield model, which they could compare to their theoretical characterization of the retrieval phase.
Decelle and collaborators \cite{Decelle2017,Decelle2018} introduced an ensemble of RBMs characterized by the spectral properties of the weight matrix, and derived the typical dynamics of the corresponding order parameters during learning driven by data. Beyond RBMs, analyses of learning in other generative models are starting to appear \cite{Wang2018}.


\paragraph{Training algorithms based on mean-field methods}
Beyond bringing theoretical insights, mean-field methods are also useful for building tractable estimators of the likelihood in generative models, which in turn serve to design novel training algorithms.

For Boltzmann machines, this direction was already investigated in the 80s and 90s \cite{Peterson1987,Hinton1989,Galland1993,Kappen1998}, albeit on small models with binary units and for artificial data sets very different from modern machine learning benchmarks. More recently, a deterministic training based on naive mean-field was tested on RBMs \cite{Welling2002,Tieleman2008}. On toy deep learning data sets, the algorithm was found to perform poorly compared to both CD and PCD, the commonly employed approximate Monte Carlo methods.
However, going beyond naive mean-field by considering the second-order TAP expansion allows this gap in efficiency to be bridged \cite{Gabrie2015, Tramel2018}. Additionally, the deterministic mean-field framework offers a tractable way of evaluating the success of the learning, by exploiting the mean-field observables to visualize the representation learned by RBMs. Interestingly, high-temperature expansions of estimators different from the maximum likelihood have also recently been proposed as efficient inference methods for the inverse Ising problem \cite{Lokhov2018}.

By construction, variational auto-encoders (VAEs)
rely on a variational approximation of the likelihood. In practice, the posterior distribution of the latent representation given an input (see \citesec~\ref{sec:chap1-unsupervised}) is typically approximated by a factorized Gaussian distribution with mean and variance parametrized by neural networks. The factorization assumption relates the method to a naive mean-field approximation.

\paragraph{Structured Bayesian priors}
With the progress of unsupervised learning, the idea of using generative models as expressive priors has emerged.

For reconstruction tasks, when a set of typical signals is available a priori, the latter can serve as a training set to learn a model of the underlying data distribution with a generative model.
Subsequently, in the reconstruction of a new signal of the same type, the generative model can serve as a Bayesian prior.
In particular, the idea of exploiting RBMs in CS applications was pioneered by \cite{Dremeau2012} and \cite{Tramel2015}, who trained binary RBMs using Contrastive Divergence (to locate the support of the non-zero entries of sparse signals) and combined them with an AMP reconstruction. They demonstrated drastic improvements in the reconstruction with structured learned priors compared to the usual sparse unstructured priors. The approach, which requires combining the AMP reconstruction for CS with the RBM TAP inference, was further generalized in \cite{Tramel2016, Tramel2018} to real-valued distributions. In the same line of applications, several works have also investigated the use of feed-forward generative models for inference tasks. Using this time multi-layer VAMP inference, Rangan and co-authors \cite{Pandit2019} showed that VAEs could help with in-painting partially observed images.
Note also that a different line of works, mainly considering GANs, examined the same type of applications without resorting to mean-field algorithms \cite{Bora2017, Hand2018, Hand2018a, Mixon2018}; instead, the inference is performed via gradient descent and back-propagation.

Another application of generative priors is to model synthetic data sets with structure. In \cite{Gabrie2018, Aubin2019, Goldt2019a}, the authors designed learning problems amenable to a mean-field theoretical treatment by assuming the inputs to be drawn from a generative prior (albeit with untrained weights so far). This approach goes beyond the vanilla teacher-student scenario, where input data are typically unstructured with i.i.d. components. This is a crucial direction of research, as the role of structure in data appears to be an important component in understanding the puzzling efficiency of deep learning.

\subsubsection{Neural networks for supervised learning}

\paragraph{New results in the replica analysis of learning}
The classical replica analysis of learning with simple architectures, following the bases set by Gardner and Derrida 30 years ago, continues to be explored. Among the most prominent results, Kabashima and collaborators \cite{Kabashima2008, Shinzato2008, Shinzato2009} extended the mean-field treatment of the perceptron from data matrices with i.i.d. entries to random orthogonal matrices, a much larger class of random matrices in which the entries can be correlated.
More recently, a series of works explored in depth the specific case of the perceptron with binary weight values for the classification of random inputs.
Replica computations showed that the space of solutions is dominated in the thermodynamic limit by isolated solutions \cite{Huang2013, Huang2014}, but also that subdominant dense clusters of solutions exist, with good generalization properties in the teacher-student scenario \cite{Baldassi2015, Baldassi2016, Baldassi2018}. This observation inspired a novel training algorithm \cite{Chaudhari2017a}.
The simple two-layer architecture of the committee machine was also re-examined recently \cite{Aubin2018}. In the teacher-student scenario, a computationally hard phase of learning was evidenced by comparing a message-passing algorithm (believed to be optimal) with the replica prediction. In this work, the authors also proposed a strategy for proving the replica prediction.
\n\n\\paragraph{Signal propagation in depth}\nMean-field approximations can also help understand the role and implications of depth by characterizing signal propagation in neural networks. The following papers consider the width of each layer to go to infinity. In this limit, Sompolinsky and collaborators characterized how neural networks manage to progressively separate data manifolds fed as inputs \\cite{Kadmon2016, Chung2018a, Cohen2019}. Another line of works focused on the initialization of neural networks (i.e. with random weights), and found an order-to-chaos transition in the signal propagation as a function of hyperparameters of training \\cite{Poole2016,Schoenholz2017}. As a result, the authors could formulate recommendations for combinations of hyperparameters to practitioners. This type of analysis could furthermore be generalized to convolutional networks \\cite{Novak2019}, recurrent networks \\cite{Gilboa2019} and networks with batch-normalization regularization \\cite{Yang2019}. The space of functions spanned by deep random networks in the infinite-size limit was also studied by \\cite{Li2018,Li2019}, using the different but related approach of the generating functional analysis.\nYet another mean-field argument, this time relying on a replica computation, allowed to compute the mutual information between layers of large non-linear deep neural networks with orthogonally invariant weight matrices \\cite{Gabrie2018}. Using this method, mutual informations can be followed precisely along the learning for an appropriate teacher-student scenario. The strategy offers an experimental test bed to characterize possible links between the generalization ability of deep neural networks and information compression phases in the training (see \\cite{Tishby2015, Shwartz2017, Saxe2018}).\n\n\n\\paragraph{Dynamics of SGD learning in simple networks and generalization}\nA number of different mean-field limits led to interesting analyses of the dynamics of gradient descent learning. In particular, the below mentioned works contribute to shed light on the generalization power of neural networks in the so-called overparametrized regimes, that is where the number of parameters exceeds largely either the number of training points or the underlying degrees of freedom of the teacher rule.\nIn linear networks first, an exact description in the high-dimensional limit was obtained for the teacher-student setup by \\cite{Advani2017} using random matrix theory. The generalization was predicted to improve with the overparametrization of the student. \nNon-linear networks with one infinitely wide hidden layer were considered by \\cite{Mei2018, Rotskoff2018, Chizat2018a, Sirignano2018} who showed that gradient descent converges to a finite generalization error. \nTheir results are related to others obtained in a slightly different limit of infinitely large neural networks \\cite{Jacot2018}. \nFor arbitrarily deep networks, Jacot and collaborators \\cite{Jacot2018} showed that, in a certain setting, gradient descent was effectively performing a kernel regression with a kernel function converging to a fixed value for the entire training as the size of the layers increases. \nIn both related limits, the absence of divergence is accounting for generalization not deteriorating despite of the explosion of the number of parameters. The relationship between the two above limits was discussed in \\cite{Chizat2018, Mei2019, Geiger2019a}. \nSubsequent works, leveraged the formalism introduced in \\cite{Jacot2018}. 
Scalings of the generalization error as a function of network sizes were derived by \cite{Geiger2019}. Other authors focused on the characterization of the network output function in this limit, which takes the form of a Gaussian process \cite{Lee2019}. This fact was probably first noticed by Opper and Winther for one hidden layer \cite{Opper99}, and inspired them to develop a TAP-based Bayesian classification method using Gaussian processes.
Finally, yet another limit was analyzed by \cite{Goldt2019}, considering a finite number of hidden units with an infinitely wide input. Following classical works on the mean-field analysis of online learning (not covered in the previous sections; see \cite{Saad1995, Saad1995a, Biehl1995, Saad1999a}), a closed set of equations can be derived and analyzed for a collection of overlaps. Note that these are the same order parameters as in replica computations. The resulting learning curves evidence the necessity of multi-layer learning for observing the improvement of generalization with overparametrization. An interplay between optimization, architecture and data sets seems necessary to explain the phenomenon.

\section{Conclusion}
\label{sec:conclu}

This review aimed at presenting, in a pedagogical way, a selection of inference methods coming from statistical physics. In the past and current lines of research that were also reviewed, these methods are sometimes turned into practical and efficient inference algorithms, and sometimes serve as the cornerstone of theoretical computations.

\textbf{What is missing}
There are more of these methods beyond what was covered here. In particular the cavity method \cite{Mezard1986}, closely related to message-passing algorithms and the replica formalism, played a crucial role in the physics of spin glasses. Note also that we assumed replica symmetry, which is only guaranteed to be correct in the Bayes optimal case. References introducing replica symmetry breaking are \cite{Mezard1986, Castellani2005}, and newly proposed message-passing algorithms with RSB are \cite{Saglietti2019, Antenucci2019, Antenucci2019a}.
The methods of analysis of online learning algorithms pioneered by \cite{Saad1995, Saad1995a, Biehl1995} and reviewed in \cite{Saad1999a} also deserve the name of classical mean-field analysis. They are currently actively serving research efforts in deep learning theory \cite{Goldt2019}. Another important method is the off-equilibrium mean-field theory \cite{Crisanti1988, Crisanti1993, Cugliandolo1993}, recently used for example to characterize a specific type of neural networks called graph neural networks \cite{Kawamoto2018}, or to study the properties of gradient flows \cite{Mannelli2019}.


\textbf{On the edge of validity}
We have also touched upon the limitations of the mean-field approach. To start with, the thermodynamic limit ignores finite-size effects. Moreover, different ways of taking the thermodynamic limit for the same problem sometimes lead to different results. Also, the necessary assumptions of randomness for weights or data matrices are sometimes in clear contrast with real applications.

Thus, the temptation to abusively transfer results from one field to the other can be a dangerous pitfall of the interdisciplinary approach. We could mention here the characterization of the dynamics of optimization.
While physicists have extensively studied Langevin dynamics with Gaussian white noise, the continuous-time limit of SGD is unfortunately not equivalent to it in the general case. While some works attempt to draw insights from this analogy using strong assumptions (e.g. \cite{Choromanska2015, Jastrzebski2017}), others seek precisely to understand the differences between the two dynamics in neural network optimization (e.g. \cite{Baity-Jesi2018, Simsekli2019}).
Alternatively, another good reason to consider the power of mean-field methods lies in the observation, rooted in the tradition of theoretical physics, that one can learn from models a priori far from the exact neural networks of interest, provided they retain some key properties while being amenable to theoretical characterization. For example, \cite{Mannelli2019} studied a high-dimensional non-convex optimization problem inspired by the physics of spin glasses, apparently unrelated to neural networks, but gained insights into the dynamics of gradient descent (and Langevin dynamics) that are of primary interest. Another example of this surely promising approach is \cite{Wang2018}, which built and analyzed a minimal model of GANs.

Moreover, the possibility of combining well-studied simple settings to obtain a mean-field theory for more complex models, as recently demonstrated in a series of works \cite{Tramel2015, Tramel2016, Tramel2018, Manoel2017b, Fletcher2018a, Gabrie2018, Aubin2019}, constitutes an exciting direction of research that should considerably broaden the scope of application of mean-field methods.

\textbf{Patching the pieces together and going further}
Thus the mean-field approach alone cannot, to this day, provide complete answers to the still numerous puzzles on the way towards a theory of deep learning. Yet, by considering different limits and special cases, and by combining solutions to approach ever more complex models, the approach should help uncover more and more corners of the big black box. Hopefully, intuition gained at the edges will help reveal the broader picture.


\section{Introduction}

With the continuous improvement of storage techniques, the amount of available data is currently growing exponentially. While it is not humanly feasible to treat all the data created, \emph{machine learning}, as a class of algorithms that automatically infer structure in large data sets, is one possible response.
In particular, \emph{deep learning} methods, based on neural networks, have drastically improved performance in key fields of artificial intelligence, such as image processing, speech recognition and text mining. A good review of the first successes of this technology, published in 2015, is \cite{LeCun2015a}. A few years later, the current state of the art of this very active line of research is difficult to envision globally.
However, the complexity of deep neural networks remains an obstacle to the understanding of their great efficiency. Made of many layers, each of which is constituted of many neurons, themselves accompanied by a collection of parameters, a typical neural network is described by a set of variables far too large to even visualize. Instead, aggregated quantities must be considered to characterize these models, and hopefully help explain the learning process. The first open challenge is therefore to identify the relevant observables to focus on. Often enough, what seems interesting is also what is hard to calculate.
In the high-dimensional regime we need to consider, exact analytical forms are unknown most of the time and numerical computations are ruled out. Therefore, approximations that are simultaneously simple enough to be tractable and fine enough to retain interesting features are highly needed.

In this context where dimensionality is an issue, physicists have long found that macroscopic behaviors are typically well described by the theoretical limit of infinitely large systems. In this \emph{thermodynamic} limit, the statistical physics of disordered systems offers powerful frameworks of approximation called \emph{mean-field theories}.
Interactions between physics and neural network theory already have a long history, as we will discuss in \citesec~\ref{sec:chap1-nn-and-mf}. Yet, these interconnections have been rekindled by the recent progress in deep learning, which has also brought new theoretical challenges.


Here, we wish to provide a concise methodological review of fundamental mean-field inference methods with their application to neural networks in mind. Our aim is also to provide a unified presentation of the different approximations, allowing the reader to understand how they relate and differ.
Readers may also be interested in related review papers. Another methodological review is \cite{Advani2013}, particularly interested in applications to neurobiology. The methods presented in the latter reference have a significant overlap with what will be covered in the following; some elements of random matrix theory are additionally introduced there.
The approximations and algorithms which will be discussed here are also largely reviewed in \cite{Zdeborova2016}.
This previous paper includes more details on spin glass theory, which originally motivated the development of the classical mean-field methods, and focuses particularly on community detection and linear estimation.
Beyond the significant overlap and their differing motivational applications, the two previous references also predate some recent exciting developments in mean-field inference covered in the present review, in particular extensions towards multi-layer networks. An older, yet very interesting, reference is the workshop proceedings \cite{opper2001advanced}, which collected both insightful introductory papers and research developments on the applications of mean-field methods in machine learning. Finally, the recent \cite{Carleo} covers more generally the connections between physical sciences and machine learning, yet without detailing the methodologies. This review provides a very good list of references where statistical physics methods were used for learning theory, but also where machine learning in turn helped physics research.


Given that the literature presented below is at the crossroads of deep learning and the physics of disordered systems, we include short introductions to the fundamental concepts of both domains. These \citesecs~\ref{sec:chap1} and \ref{sec:chap2} will help readers with one or the other background, but can be skipped by experts. In \citesec~\ref{sec:chap3}, classical mean-field inference approximations are derived on neural network examples. \citesec~\ref{sec:chap3further} covers some recent extensions of the classical methods that are of particular interest for applications to neural networks. We review in \citesec~\ref{sec:chapex-all} a selection of important historical and current directions of research in neural networks leveraging mean-field methods.
As a conclusion, the strengths, limitations and perspectives of mean-field methods for neural networks are discussed in \citesec~\ref{sec:conclu}.


\section*{Acknowledgements}
This paper is based on the introductory chapters of my PhD dissertation, written under the supervision of Florent Krzakala and in collaboration with Lenka Zdeborová, to both of whom I am very grateful. I would also like to thank Benjamin Aubin, Cédric Gerbelot, Adrien Laversanne-Finot, and Guilhem Semerjian for their comments on the manuscript. I gratefully acknowledge the support of the `Chaire de recherche sur les modèles et sciences des données' by Fondation CFM pour la Recherche-ENS and of Fondation L'Oréal For Women In Science. I also thank the Kavli Institute for Theoretical Physics, where part of this work was written.


\section{Index of notations and abbreviations}

\begin{itemize}
\item[] {[N]} - Set of integers from $1$ to $N$
\item[] {$\dirac(\cdot)$} - Dirac distribution
\item[] {$\sigma(x) = (1 + e^{-x})^{-1}$} - Sigmoid
\item[] {$\mathrm{relu}(x)=\max(0,x)$} - Rectified Linear Unit
\item[] {$\mat{X}$} - Matrix
\item[] {$\vect{x}$} - Vector
\item[] {$\mat{I}_N \in \R^{N\times N}$} - Identity matrix
\item[] {$\langle \cdot \rangle$} - Average with respect to the Boltzmann distribution
\item[] {$\mathrm{O}(N) \subset \R^{N\times N}$} - Orthogonal ensemble
\item[] {1RSB} - 1 Step Replica Symmetry Breaking
\item[] {AMP} - Approximate Message Passing
\item[] {BP} - Belief Propagation
\item[] {cal-AMP} - Calibration Approximate Message Passing
\item[] {CD} - Contrastive Divergence
\item[] {CS} - Compressed Sensing
\item[] {CSP} - Constraint Satisfaction Problem
\item[] {DAG} - Directed Acyclic Graph
\item[] {DBM} - Deep Boltzmann Machine
\item[] {EC} - Expectation Consistency
\item[] {EP} - Expectation Propagation
\item[] {GAMP} - Generalized Approximate Message Passing
\item[] {GAN} - Generative Adversarial Networks
\item[] {GD} - Gradient Descent
\item[] {GLM} - Generalized Linear Model
\item[] {G-VAMP} - Generalized Vector Approximate Message Passing
\item[] {i.i.d.} - independent and identically distributed
\item[] {PCD} - Persistent Contrastive Divergence
\item[] {r-BP} - relaxed Belief Propagation
\item[] {RS} - Replica Symmetric
\item[] {RSB} - Replica Symmetry Breaking
\item[] {RBM} - Restricted Boltzmann Machine
\item[] {SE} - State Evolution
\item[] {SGD} - Stochastic Gradient Descent
\item[] {SK} - Sherrington-Kirkpatrick
\item[] {TAP} - Thouless-Anderson-Palmer
\item[] {VAE} - Variational Autoencoder
\item[] {VAMP} - Vector Approximate Message Passing
\end{itemize}